While installing or running Hadoop you can hit a variety of errors at different stages. Here is a list of some common ones:
- class org.apache.hadoop.mapred.ShuffleHandler not found - This error appears in the YARN NodeManager when the system cannot find the YARN MapReduce folder. In a CDH installation, check for the /usr/lib/hadoop-mapreduce folder. If it is not there, the installation went wrong. If it is there, it is missing from the classpath and related environment variables. Note that hadoop-mapreduce is not to be confused with hadoop-mapreduce-0.20.x.
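The check above can be sketched as a small shell helper; the helper name is hypothetical and the path assumes the CDH layout mentioned above:

```shell
ensure_mapred_classpath() {
    # Verify the MapReduce install directory exists and, if so, put its
    # jars on the classpath used by the NodeManager.
    dir="$1"
    if [ -d "$dir" ]; then
        export HADOOP_CLASSPATH="${HADOOP_CLASSPATH:+$HADOOP_CLASSPATH:}$dir/*"
        echo "ok: $dir added to HADOOP_CLASSPATH"
    else
        # Folder missing: the MapReduce package was never installed correctly.
        echo "missing: $dir -- reinstall the hadoop-mapreduce package" >&2
        return 1
    fi
}

ensure_mapred_classpath /usr/lib/hadoop-mapreduce || true
```

This only distinguishes the two failure modes described above: a missing folder points to a broken installation, an existing folder points to a classpath problem.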
- java.lang.NoClassDefFoundError: org/apache/hadoop/mapreduce/lib/partition/InputSampler$Sampler - This is due to a missing environment variable. Check HADOOP_HOME, HADOOP_MAPRED_HOME, etc.
- ENOENT: No such file or directory at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method) - Check HADOOP_MAPRED_HOME.
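A typical environment setup for the two errors above looks like the following; the exact paths depend on your distribution (these match a CDH-style layout):

```shell
# Point Hadoop at its install and MapReduce directories (CDH-style paths;
# adjust to wherever your distribution installs them).
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
export HADOOP_CLASSPATH="$HADOOP_MAPRED_HOME/*:$HADOOP_MAPRED_HOME/lib/*"
```

Put these in the shell profile of the user that starts the daemons, otherwise they will not be visible to the services.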
- ERROR org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Caught exception in status-updater - Check yarn-site.xml for the ResourceManager address; check whether it is still the 0.0.0.0 default.
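With the 0.0.0.0 default, each NodeManager tries to reach the ResourceManager on itself. A minimal fix in yarn-site.xml on every NodeManager looks like this (the hostname is illustrative):

```xml
<!-- yarn-site.xml: point NodeManagers at the real ResourceManager host. -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>resourcemanager.example.com</value>
</property>
```

Restart the NodeManager after changing this so the status updater reconnects to the right address.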
- File /path/to/file could only be replicated to 0 nodes instead of minReplication (=1) - Possible causes:
  - The DataNode disk is full.
  - The DataNode is busy with a block report or block scanning.
  - The block size (dfs.block.size in hdfs-site.xml) is set to a negative value.
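The first and last causes above can be triaged with a quick sketch like this; DATA_DIR and BLOCK_SIZE are placeholders, so read the real values from hdfs-site.xml:

```shell
# Illustrative triage for the "replicated to 0 nodes" error: check free space
# under a DataNode data directory and sanity-check the configured block size.
DATA_DIR="${DATA_DIR:-/}"
BLOCK_SIZE="${BLOCK_SIZE:-134217728}"   # 128 MB, a common default

# Free space (KB) on the filesystem holding the DataNode data directory.
avail_kb=$(df -Pk "$DATA_DIR" | awk 'NR==2 {print $4}')
echo "free space under $DATA_DIR: ${avail_kb} KB"

# A negative dfs.block.size silently breaks replication.
if [ "$BLOCK_SIZE" -le 0 ]; then
    echo "dfs.block.size=$BLOCK_SIZE is invalid: it must be positive"
else
    echo "dfs.block.size=$BLOCK_SIZE looks sane"
fi
```

The busy-with-block-report case shows up in the DataNode logs rather than on disk, so check those if space and block size look fine.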
- Cannot initialise Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses. - This error appears when you have configured YARN as your Hadoop framework but are trying to run an application developed for MRv1.
  - Check HADOOP_MAPRED_HOME. For MRv1 it must be set to the /usr/lib/hadoop-0.20-mapreduce folder.
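The framework the client expects is set in mapred-site.xml; make sure it matches the kind of application you submit. A sketch of the YARN-side setting:

```xml
<!-- mapred-site.xml: "yarn" for MRv2 applications; MRv1-era setups used
     "classic" or "local" instead. -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```

If this says yarn but the job was built against the MRv1 client libraries (and HADOOP_MAPRED_HOME points at the MRv1 tree), you get exactly the initialisation error above.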