I am trying to write a Mapper/Reducer for HBase and I added the jar. However, after adding the jar file to the lib directory, I cannot start HBase. I want to debug what is going wrong. How can I change the log level, and will it help?
Following is the exception:
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:143)
at org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:217)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:153)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:224)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2290)
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.ipc.RPC.getProtocolProxy(Ljava/lang/Class;JLjava/net/InetSocketAddress;Lorg/apache/hadoop/security/UserGroupInformation;Lorg/apache/hadoop/conf/Configuration;Ljavax/net/SocketFactory;ILorg/apache/hadoop/io/retry/RetryPolicy;Ljava/util/concurrent/atomic/AtomicBoolean;)Lorg/apache/hadoop/ipc/ProtocolProxy;
at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:420)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:316)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:178)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:665)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:601)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:1004)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:562)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:364)
at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.<init>(HMasterCommandLine.java:307)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
... 7 more
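As for the log-level part of the question: HBase reads its logging configuration from conf/log4j.properties, so a minimal sketch for turning up detail (assuming a default install layout with $HBASE_HOME set) is:
# Sketch: raise the HBase loggers to DEBUG, then restart so the change is picked up.
# Add or edit this standard log4j line in $HBASE_HOME/conf/log4j.properties:
#   log4j.logger.org.apache.hadoop.hbase=DEBUG
$HBASE_HOME/bin/stop-hbase.sh
$HBASE_HOME/bin/start-hbase.sh
In this particular case, though, the stack trace above already names the cause, as the fix below shows.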
So it seems the error was due to a mismatch between the Hadoop libraries in HBase's lib directory (which were hadoop-*-2.5.1) and my actual Hadoop installation (hadoop-*-2.6.0). My jar was looking for classes it could not find in the older Hadoop libraries, which is why it was failing; this answer made me realize the issue. After I copied all the hadoop-*-2.6.0 jars into the lib directory, HBase started as expected. The same is also mentioned in the HBase-Hadoop compatibility documentation.
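For anyone hitting the same thing, the replacement step looks roughly like this (a sketch; $HBASE_HOME and $HADOOP_HOME stand in for the actual install paths, and the share/hadoop/... locations are the stock Hadoop 2.6.0 binary layout):
# Drop the bundled 2.5.1 client jars and copy in the ones matching the running 2.6.0 install.
cd $HBASE_HOME/lib
rm hadoop-*2.5.1*.jar
cp $HADOOP_HOME/share/hadoop/common/hadoop-common-2.6.0.jar .
cp $HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-2.6.0.jar .
cp $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-*-2.6.0.jar .
# ...and likewise for any other hadoop-*-2.5.1 jars HBase bundles (auth, annotations, yarn, ...).
$HBASE_HOME/bin/start-hbase.sh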
I have manually downloaded and added the libraries log4j, jsoup and neo4j to a project in IntelliJ IDEA.
Before adding neo4j, I could build a jar as an artifact with all the libraries extracted into it, so I could copy it somewhere and run it with java -jar file.jar.
When I run the application in IntelliJ IDEA it always works!
Now I get the following error when running the jar:
Error: A JNI error has occurred, please check your installation and try again Exception in thread "main" java.lang.SecurityException: Invalid signature file digest for Manifest main attributes
at sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:314)
at sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:268)
at java.util.jar.JarVerifier.processEntry(JarVerifier.java:316)
at java.util.jar.JarVerifier.update(JarVerifier.java:228)
at java.util.jar.JarFile.initializeVerifier(JarFile.java:383)
at java.util.jar.JarFile.getInputStream(JarFile.java:450)
at sun.misc.JarIndex.getJarIndex(JarIndex.java:137)
at sun.misc.URLClassPath$JarLoader$1.run(URLClassPath.java:839)
at sun.misc.URLClassPath$JarLoader$1.run(URLClassPath.java:831)
at java.security.AccessController.doPrivileged(Native Method)
at sun.misc.URLClassPath$JarLoader.ensureOpen(URLClassPath.java:830)
at sun.misc.URLClassPath$JarLoader.<init>(URLClassPath.java:803)
at sun.misc.URLClassPath$3.run(URLClassPath.java:530)
at sun.misc.URLClassPath$3.run(URLClassPath.java:520)
at java.security.AccessController.doPrivileged(Native Method)
at sun.misc.URLClassPath.getLoader(URLClassPath.java:519)
at sun.misc.URLClassPath.getLoader(URLClassPath.java:492)
at sun.misc.URLClassPath.getNextLoader(URLClassPath.java:457)
at sun.misc.URLClassPath.getResource(URLClassPath.java:211)
at java.net.URLClassLoader$1.run(URLClassLoader.java:365)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495)
I already searched for the error and found that one of the libraries seems to be signed, and I can't just pack it into another jar, because the verifier then considers it modified.
The suggested solution was to exclude the signing files through the build system.
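If doing it by hand rather than through a build system, the usual trick is to delete the signature entries from the finished jar; a sketch, with file.jar standing for the jar from the earlier java -jar example:
# Drop all signature entries so the merged jar is treated as unsigned:
zip -d file.jar 'META-INF/*.SF' 'META-INF/*.DSA' 'META-INF/*.RSA'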
I tried to manually delete the BCKEY.dsa and BCKEY.sf files (the only signing files in the META-INF folder) from the resulting jar, but then I get the following error:
Exception in thread "main" java.lang.RuntimeException: Error starting org.neo4j.kernel.impl.factory.CommunityFacadeFactory, /home/xuiqzy/Documents/uni/4.Semester/PRG_practicum/WikiXtractor/data
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:144)
at org.neo4j.kernel.impl.factory.CommunityFacadeFactory.newFacade(CommunityFacadeFactory.java:40)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:108)
at org.neo4j.graphdb.factory.GraphDatabaseFactory.newDatabase(GraphDatabaseFactory.java:100)
at org.neo4j.graphdb.factory.GraphDatabaseFactory.lambda$createDatabaseCreator$0(GraphDatabaseFactory.java:89)
at org.neo4j.graphdb.factory.GraphDatabaseBuilder.newGraphDatabase(GraphDatabaseBuilder.java:183)
at org.neo4j.graphdb.factory.GraphDatabaseFactory.newEmbeddedDatabase(GraphDatabaseFactory.java:65)
at de.bened.wikixtractor.DatabaseManager.initialize(DatabaseManager.java:43)
at de.bened.wikixtractor.Main.main(Main.java:83)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.NeoStoreDataSource#5c67716' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:444)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:107)
at org.neo4j.kernel.impl.transaction.state.DataSourceManager.start(DataSourceManager.java:99)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:434)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:107)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:140)
... 8 more
Caused by: org.neo4j.kernel.impl.util.UnsatisfiedDependencyException: No dependency satisfies type class org.neo4j.kernel.api.index.SchemaIndexProvider
at org.neo4j.kernel.impl.util.Dependencies.resolveDependency(Dependencies.java:71)
at org.neo4j.kernel.impl.util.Dependencies.resolveDependency(Dependencies.java:67)
at org.neo4j.kernel.NeoStoreDataSource.start(NeoStoreDataSource.java:418)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:434)
... 13 more
I also searched for that error: a META-INF/services/org.neo4j.kernel.extension.KernelExtensionFactory file exists in my jar; the jars lucene-analyzers-common, lucene-codecs, lucene-core, lucene-queryparser, neo4j-lucene-index and neo4j-lucene-upgrade are all integrated into the über-jar; and the build in IntelliJ IDEA seems to have enough dependencies, too.
So I'm a bit clueless as to why the jar doesn't work, as opposed to the build in the IDE.
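For reference, those checks boil down to something like this (a sketch; file.jar again stands for the built artifact):
# Is the merged service descriptor still there, and did the Lucene classes make it in?
unzip -p file.jar META-INF/services/org.neo4j.kernel.extension.KernelExtensionFactory
unzip -l file.jar | grep -i lucene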
Long term I'm willing to switch to a build system but for now:
Is there a way to get the jar to run?
Any help or pointers in the right direction are appreciated! :)
I'm attempting to run a MapReduce job from a jar file and keep getting a ClassNotFoundException. I'm running Hadoop 1.2.1 on a CentOS 6 virtual machine.
First I compiled the file exercise.java (and its class) into a jar file exercise.jar using the following shell script, compile.sh:
#!/bin/bash
javac -classpath /pathto/hadoop-common-1.2.1.jar:\
/pathto/hadoop-core-1.2.1.jar /pathto/exercise.java
jar cvf exercise.jar /pathto/*.class
This runs fine and the jar is built successfully. I then attempt to run the actual MapReduce job using the shell script exec.sh:
#!/bin/bash
export CLASSPATH=$CLASSPATH:/pathto/hadoop-common-1.2.1.jar:\
/pathto/hadoop-core-1.2.1.jar:/pathto/exercise.class
hadoop jar exercise.jar exercise /data/input/inputfile.txt /data/output
This throws the ClassNotFoundException error:
Exception in thread "main" java.lang.ClassNotFoundException: exercise
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:274)
at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
I realize the explicit path names might not be necessary, but I've been a little desperate to double-check everything. I've confirmed that in my exercise.java file the exercise class is set in the job configuration via job.setJarByClass(exercise.class), and I've confirmed that exercise.class is contained in exercise.jar. I can't seem to figure it out.
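A quick way to confirm that last point (a sketch) is to list the jar and see under which path exercise.class was actually packed; the entry path should match the class's package, or sit at the jar root if it has no package:
jar tf exercise.jar | grep exercise.class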
UPDATE
Here is the exec.sh script with the full path of exercise.class, which is stored in my Eclipse project directory:
#!/bin/bash
export CLASSPATH=$CLASSPATH:/pathto/hadoop-common-1.2.1.jar:\
/pathto/hadoop-core-1.2.1.jar:/home/username/workspace/MVN_Hadoop/src/main/java.com.amend.hadoop.MapReduce/*
hadoop jar \
exercise.jar \
/home/username/workspace/MVN_Hadoop/src/main/java.com.amend.hadoop.MapReduce/exercise \
/data/input/inputfile.txt \
/data/output
When I actually try to run the exec.sh script using the explicitly written-out path names, I also get a completely different set of errors:
Exception in thread "main" java.lang.ClassNotFoundException: /home/hdadmin/workspace/MVN_Hadoop/src/main/java/come/amend/hadoop/MapReduce/exercise
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:274)
at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
I can see these possible errors.
In hadoop jar exercise.jar exercise /data/input/inputfile.txt /data/output, specify the fully qualified name of the exercise class, i.e. org.name.package.exercise if it has a package. To cross-check, open the jar file and check the location of exercise.class.
Also, Hadoop doesn't expect the Hadoop jars to be bundled inside your jar, since the Hadoop classpath is set globally.
NEW:
See, the following path is something weird: "/home/hdadmin/workspace/MVN_Hadoop/src/main/java/come/amend/hadoop/MapReduce/exercise"
If you are running from your jar, how could the class name be such a specific filesystem path instead of a name inside the jar? It could only be "come/amend/hadoop/MapReduce/exercise".
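In other words, a sketch of the corrected invocation (the package name come.amend.hadoop.MapReduce is read off the paths in the question, so adjust it to whatever the jar listing actually shows):
# Pass the fully qualified class name, not a filesystem path:
hadoop jar exercise.jar come.amend.hadoop.MapReduce.exercise \
    /data/input/inputfile.txt /data/output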
I am using Hadoop version 2.4.1. I am trying to run a MapReduce job which moves data from the local system to an HDFS cluster (the output directory). If I set the output directory to a path on my local system, the program runs fine. But when I set the output directory to a path on the HDFS cluster, I get the error below:
Exception in thread "main" java.lang.NoClassDefFoundError: com/google/protobuf/ServiceException
at org.apache.hadoop.ipc.ProtobufRpcEngine.<clinit>(ProtobufRpcEngine.java:69)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:1834)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1799)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1893)
at org.apache.hadoop.ipc.RPC.getProtocolEngine(RPC.java:203)
at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:537)
at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:328)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:235)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:139)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:510)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(FileOutputFormat.java:160)
at s1.run(s1.java:66)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at s1.main(s1.java:75)
Caused by: java.lang.ClassNotFoundException: com.google.protobuf.ServiceException
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 25 more
I saw some posts which stated the issue could be related to the protobuf dependency.
Hadoop 2.2.0 mapreduce job not running after upgrading from hadoop 1.0.4
I am using the hadoop-common 2.5.2 jar, which has the protobuf dependency. Any help to solve this would be appreciated.
Made it work! I found that there were some 2.2-version jars which were incompatible with the current version. When I updated those, the program worked fine.
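A sketch of how such stale jars can be tracked down (the paths are assumptions; run the find from the project and Hadoop lib directories):
# Look for leftover 2.2.x Hadoop jars sitting next to the 2.4.1 ones:
find . -name 'hadoop-*2.2*.jar'
find . -name 'hadoop-*.jar' | sort    # everything listed should be on the same version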
If you compile the *.java files using the default Java CLASSPATH, that is fine.
Edit hadoop-env.sh:
export HADOOP_CLASSPATH=${CLASSPATH}
Then restart the Hadoop server.
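A sketch of those steps for a Hadoop 2.x layout (the file locations are assumptions and vary by install):
# Append the export to hadoop-env.sh, then bounce the daemons:
echo 'export HADOOP_CLASSPATH=${CLASSPATH}' >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
$HADOOP_HOME/sbin/stop-yarn.sh && $HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh && $HADOOP_HOME/sbin/start-yarn.sh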
NoClassDefFoundError is thrown by the JVM at runtime when a class is not present on the classpath.
Check your classpath.
Also check this answer; it could be useful if you have solved the NoClassDefFoundError.
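For the protobuf case above, one quick classpath check (a sketch) is whether a protobuf-java jar is visible to the job at all:
# com.google.protobuf.ServiceException lives in protobuf-java; Hadoop 2.x ships it under share/hadoop/common/lib:
find $HADOOP_HOME -name 'protobuf-java-*.jar'
echo $CLASSPATH | tr ':' '\n' | grep -i protobuf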
I am trying to run MapReduce on Pentaho 5. For Pentaho 5, the Pentaho applications come pre-configured for Apache Hadoop 0.20.2, and it says no further configuration is required for this version. I installed Hadoop 0.20.2 on Windows using Cygwin and everything works fine. I ran a simple job in Pentaho which copies a file into HDFS; it finished successfully and the file was copied into HDFS. But as soon as I run a MapReduce job, Pentaho says the job finished, yet the MapReduce task failed, the result is missing from the output directory on HDFS, and the log file says:
Error: java.lang.ClassNotFoundException: org.pentaho.di.trans.step.RowListener
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:762)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:807)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:833)
at org.apache.hadoop.mapred.JobConf.getMapRunnerClass(JobConf.java:790)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:354)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
at org.apache.hadoop.mapred.Child.main(Child.java:170
Please help me out.
Maybe a little bit old, but I thought it might help someone.
This can be caused by:
For Hadoop versions > 0.20: check that you set up your environment correctly; see Pentaho Support: Creating a New Hadoop Configuration.
Check on HDFS whether you have an /opt/pentaho/mapreduce/ directory, and check the folder permissions on HDFS for /opt/pentaho (did you find the kettle-*.jar files in the lib folder?).
Check the classpath separator ("," on Windows, ":" on Linux). To change it, edit spoon.sh (or spoon.bat) and modify the OPT variable like this: OPT="$OPT -Dhadoop.cluster.path.separator=,"
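A sketch of the HDFS side of those checks (the /opt/pentaho/mapreduce path comes from the point above; the exact subfolder layout depends on the Pentaho version):
# The Kettle runtime must be unpacked on HDFS and readable by the job user:
hadoop fs -ls /opt/pentaho/mapreduce/
hadoop fs -ls /opt/pentaho/mapreduce/*/lib | grep kettle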
I'm trying to use one-jar to generate one jar file that contains a Clojure jar file and a Java class file: Creating one jar file for execution from Java/Clojure
Following the instructions, I could generate the directories using one-jar-appgen-0.97.jar. As instructed, I replaced the Java source and added ThingOne-1.0.0-SNAPSHOT-standalone.jar.
Running ant builds the jar file without an error, but I get error messages when I try to execute the jar file.
java -jar build/test-one-jar.jar
test_one_jar main entry point, args=[]
Hello from Java!
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.simontuffs.onejar.Boot.run(Boot.java:342)
at com.simontuffs.onejar.Boot.main(Boot.java:168)
Caused by: java.lang.ExceptionInInitializerError
at clojure.lang.Namespace.<init>(Namespace.java:34)
at clojure.lang.Namespace.findOrCreate(Namespace.java:176)
at clojure.lang.Var.internPrivate(Var.java:149)
at ThingOne.core.<clinit>(Unknown Source)
at onejar.main.TestOneJarMain.run(TestOneJarMain.java:27)
at onejar.main.TestOneJarMain.main(TestOneJarMain.java:20)
... 6 more
Caused by: java.lang.NullPointerException
at clojure.lang.RT.lastModified(RT.java:374)
at clojure.lang.RT.load(RT.java:408)
at clojure.lang.RT.load(RT.java:398)
at clojure.lang.RT.doInit(RT.java:434)
at clojure.lang.RT.<clinit>(RT.java:316)
... 12 more
What might be wrong?
Lines 168 and 342 in the Boot.java class of One-Jar indicate a problem with setting properties. This problem is happening when unit tests fail. My guess is it's related to bug 3090800 in the SourceForge One-Jar Bug Tracker.