I have recently started working with the Karmasphere Eclipse plugin for MapReduce jobs. Following the documentation, I can run local development and deployment jobs on the host machine. I then downloaded Cloudera CDH3 and am running it as a VM (through VMware). I can run MapReduce jobs locally inside the VM (the guest machine), and I can monitor those jobs from the Eclipse Hadoop perspective on the host machine. However, when I try Karmasphere remote deployment, all I can see is the list of files in HDFS: I cannot read the files, run a MapReduce program against them, or create new files in HDFS from my Eclipse IDE. I get the following exception:
java.io.IOException: Blocklist for /user/cloudera/wordcount/input/wordcount/file01 has changed!
at com.karmasphere.studio.hadoop.client.hdfs.vfsio.DFSInputStream.openInfo(DFSInputStream.java:81)
at com.karmasphere.studio.hadoop.client.hdfs.vfsio.DFSInputStream.chooseDataNode(DFSInputStream.java:357)
at com.karmasphere.studio.hadoop.client.hdfs.vfsio.DFSInputStream.blockSeekTo(DFSInputStream.java:206)
at com.karmasphere.studio.hadoop.client.hdfs.vfsio.DFSInputStream.read(DFSInputStream.java:311)
at java.io.BufferedInputStream.fill(Unknown Source)
at java.io.BufferedInputStream.read1(Unknown Source)
at java.io.BufferedInputStream.read(Unknown Source)
at org.apache.commons.vfs.util.MonitorInputStream.read(MonitorInputStream.java:74)
at java.io.FilterInputStream.read(Unknown Source)
at com.karmasphere.studio.hadoop.mapreduce.model.hadoop.HadoopBootstrapModel.createCacheFile(HadoopBootstrapModel.java:198)
at com.karmasphere.studio.hadoop.mapreduce.model.hadoop.HadoopBootstrapModel.update(HadoopBootstrapModel.java:169)
at com.karmasphere.studio.hadoop.mapreduce.model.core.AbstractOperatorModel.run(AbstractOperatorModel.java:369)
at org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:577)
at org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:1030)
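In case it helps narrow things down: since the file listing works, I assume the host can reach the NameNode, but perhaps not the DataNodes inside the VM. A direct check from the host, bypassing the plugin entirely, might look like the commands below (a sketch only; `<vm-host>` is a placeholder for the VM's address, and 8020 is the usual CDH3 NameNode port):

```shell
# From the host machine, point the Hadoop CLI at the VM's NameNode.
# Listing exercises only the NameNode; cat must also reach a DataNode.
hadoop fs -fs hdfs://<vm-host>:8020 -ls /user/cloudera/wordcount/input/wordcount
hadoop fs -fs hdfs://<vm-host>:8020 -cat /user/cloudera/wordcount/input/wordcount/file01
```

If `-ls` succeeds but `-cat` fails, that would point at the host being unable to reach the DataNode addresses the NameNode hands back.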
Can anyone please help me work through this? I'm new to Karmasphere as well as to Hadoop.