I have started seeing failures when running the hive command, as follows:
Logging initialized using configuration in file:/usr/local/someuser/hive/conf/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:340)
... 7 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410)
... 12 more
Caused by: javax.jdo.JDOFatalUserException: There is no available StoreManager of type "rdbms". Make sure that you have put the relevant DataNucleus store plugin in your CLASSPATH and if defining a connection via JNDI or DataSource you also need to provide persistence property "datanucleus.storeManagerType"
NestedThrowables:
org.datanucleus.exceptions.NucleusUserException: There is no available StoreManager of type "rdbms". Make sure that you have put the relevant DataNucleus store plugin in your CLASSPATH and if defining a connection via JNDI or DataSource you also need to provide persistence property "datanucleus.storeManagerType"
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:528)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:788)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:310)
at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:339)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:248)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:223)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:497)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:475)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:523)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:397)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:356)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4944)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:171)
... 17 more
Caused by: org.datanucleus.exceptions.NucleusUserException: There is no available StoreManager of type "rdbms". Make sure that you have put the relevant DataNucleus store plugin in your CLASSPATH and if defining a connection via JNDI or DataSource you also need to provide persistence property "datanucleus.storeManagerType"
at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1217)
at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
... 46 more
Here is the hive-site.xml:
<configuration>
<property>
<name>mapred.reduce.tasks</name>
<value>6</value>
</property>
<property>
<name>mapred.map.tasks</name>
<value>7</value>
</property>
<property>
<name>hive.exec.scratchdir</name>
<value>/data/cloud/hive/logs/hive-${user.name}</value>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/data/cloud/hive/logs/hive-${user.name}</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://somehost:3306/hive?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>xx</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>xx</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
</configuration>
The Hive version is 0.13.1.
I have checked the lib directories of both Hadoop and Hive, since the log complains about the classpath, but the jar datanucleus-rdbms-3.2.9.jar is already in the folder. I have also tried restarting the metastore service, but it didn't help at all.
Is there any other point which might cause the error above? I was curious whether there were any config changes prior to the errors, but it seems there were no config updates either...
EDIT
Verbose logs: http://pastebin.com/jggaXF1X
After digging through the logs, as discussed with @NeilStockton in the comments, I re-downloaded the related jar file into the folder where the jar already existed... and it worked after the re-download. Presumably the original copy was corrupt.
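For anyone hitting the same symptom: before re-downloading, you can test whether a jar is corrupt by forcing it to be read end to end. This is only a minimal sketch; the path below is an assumption based on the install location in the logs above, so adjust it to wherever your datanucleus jar actually lives.
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarCheck {
    public static void main(String[] args) throws Exception {
        // Assumed location; a damaged archive usually throws while being opened or read.
        String path = "/usr/local/someuser/hive/lib/datanucleus-rdbms-3.2.9.jar";
        try (JarFile jar = new JarFile(path)) {
            Enumeration<JarEntry> entries = jar.entries();
            int count = 0;
            while (entries.hasMoreElements()) {
                entries.nextElement();
                count++;
            }
            System.out.println(path + " opened cleanly, " + count + " entries");
        }
    }
}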
Related
I am using Hadoop 2.9.1 and HBase 2.1.0 in stand-alone (local) mode.
When I tried starting HBase 2.1.0 using sudo start-hbase.sh from the bin folder, I got the error below:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/htrace/core/HTraceConfiguration
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:153)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2983)
Caused by: java.lang.ClassNotFoundException: org.apache.htrace.core.HTraceConfiguration
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
This is my hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>/home/niyazmohamed/bigdata/upgraded_versions/hbase-2.1.0/hbasedir</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/niyazmohamed/bigdata/upgraded_versions/hbase-2.1.0/zookeeper</value>
</property>
</configuration>
When I tried to start HBase 1.2.0, it started successfully, the hbase shell was accessible, and CRUD operations were successful.
The Hadoop and HBase paths are set; with just that, I was able to run HBase 1.2.0.
This problem occurs only with HBase 2.1.0.
Any help appreciated! Thanks in advance!
Related:
Starting HBASE, java.lang.ClassNotFoundException: org.apache.htrace.SamplerBuilder
htrace-core-*-incubating.jar was missing from some early versions of HBase 2.x.
If the htrace-core jar is in $HBASE_HOME/lib/client-facing-thirdparty, copy it to $HBASE_HOME/lib; otherwise, download the jar from Maven and place it into $HBASE_HOME/lib.
You can see in the HBase pom.xml for the 2.1.0 release that htrace 4.2.0 is the correct version of the dependency:
https://github.com/apache/hbase/blob/rel/2.1.0/pom.xml#L1364
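If you want to confirm which jar, if any, is actually supplying the class on your HBase classpath, a tiny probe like the one below can help. This is just an illustrative sketch (the class name is taken from the error above); run it with java -cp "$(hbase classpath)" HTraceProbe.
public class HTraceProbe {
    public static void main(String[] args) throws Exception {
        Class<?> c = Class.forName("org.apache.htrace.core.HTraceConfiguration");
        // getCodeSource() can be null for bootstrap classes, but not for a jar in lib/
        System.out.println("Loaded " + c.getName() + " from "
                + c.getProtectionDomain().getCodeSource().getLocation());
    }
}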
Good luck.
I created a MapReduce job and I'm testing it in a multi-cluster environment, but I'm getting the following error:
Exception in thread "main" java.io.FileNotFoundException: File does not exist: hdfs://bigcluster:9000/opt/hadoop/share/hadoop/common/hadoop-common-2.6.0.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:269)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:390)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:483)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
at com.company.hbase.mapreduce.message.maestro.threadIndex.fakecolum.MockTestThreadIndexData.run(MockTestThreadIndexData.java:47)
at com.company.hbase.mapreduce.MaestroUpdateJob.main(MaestroUpdateJob.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
I can see that hadoop-common-2.6.0.jar is missing from hdfs://bigcluster:9000/opt/hadoop/share/hadoop/common.
The jar file exists at /opt/hadoop/share/hadoop/common on the local filesystem, but my job is looking for it inside HDFS.
If I copy all the jars (there are a lot of them) to HDFS, it works. But I want to understand: is that really necessary? Can someone explain WHY?
If I want to run this in production, do I need to do the same thing? Is that correct?
Also, I have seen the answer to Why do I need to keep hbase/lib folder in hdfs?, and yes, if I change the MapReduce framework to YARN, it also works. But I don't want to work with YARN; I just want to understand why I have to move all the Hadoop libs to HDFS to run a MapReduce job.
Updated
Here is how I instantiate the job:
Job job = Job.getInstance(config, "MyJob");
Scan scan = createScan();
Filter filter = createMyFilter();
FilterList filters = new FilterList(filter);
scan.setFilter(filters);
TableMapReduceUtil.initTableMapperJob(
        MY_TABLE,
        scan,
        MyMapper.class,
        null,
        null,
        job
);
TableMapReduceUtil.initTableReducerJob(
        MY_TABLE,
        null,
        job
);
job.setNumReduceTasks(0);
Here is my mapred-site.xml:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>myhost:9001</value>
</property>
<property>
<name>hadoop.ssl.enabled</name>
<value>true</value>
</property>
<property>
<name>hadoop.ssl.require.client.cert</name>
<value>false</value>
<final>true</final>
</property>
<property>
<name>hadoop.ssl.hostname.verifier</name>
<value>DEFAULT</value>
<final>true</final>
</property>
<property>
<name>hadoop.ssl.keystores.factory.class</name>
<value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value>
<final>true</final>
</property>
<property>
<name>hadoop.ssl.server.conf</name>
<value>ssl-server.xml</value>
<final>true</final>
</property>
<property>
<name>hadoop.ssl.client.conf</name>
<value>ssl-client.xml</value>
<final>true</final>
</property>
</configuration>
How I run this:
HADOOP_CLASSPATH=`/opt/hbase/bin/hbase classpath` /opt/hadoop/bin/hadoop jar /tmp/mymapred-1.0-SNAPSHOT-jar-with-dependencies.jar
Solution
Finally, I got the answer from this comment: https://stackoverflow.com/a/31950822/13305602
Inside core-site.xml, there are two properties that configure the default filesystem in Hadoop.
<property>
<name>fs.defaultFS</name>
<value>hdfs://myhost.mycompany.com:9000</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://myhost.mycompany.com:9000</value>
</property>
The default value of both properties is file:///; see https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml
You can change these properties in core-site.xml, or, if you are in an environment where you don't have access to that file, you can override them in the job context via its Configuration. With fs.defaultFS pointing at HDFS, the unqualified local jar paths that the job submitter registers in the distributed cache are resolved against HDFS, which appears to be why the job went looking for hadoop-common-2.6.0.jar there.
Configuration configuration = new Configuration();
// Resolve unqualified paths against the local filesystem for this job only.
configuration.set("fs.defaultFS", "file:///");
configuration.set("fs.default.name", "file:///"); // legacy name of the same setting
Job job = Job.getInstance(configuration, "MyJob");
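One caveat with this per-job override: with fs.defaultFS set to file:///, every other unqualified path in the job (input, output, and so on) is also resolved against the local filesystem, so any paths that must stay in HDFS should be written fully qualified, e.g. hdfs://bigcluster:9000/some/path.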
I have a web application that is deployed locally to a Liberty Profile server, and that is already working with log4j2. My end goal is to log all of the PreparedStatements with their parameter values included in the query string, just before they are run against a DB2 database.
I've been following the instructions at https://code.google.com/p/log4jdbc-log4j2 to set up log4jdbc-log4j2. I was able to pull down the dependency files with Maven:
<dependency>
<groupId>org.bgee.log4jdbc-log4j2</groupId>
<artifactId>log4jdbc-log4j2-jdbc4</artifactId>
<version>1.16</version>
</dependency>
However, I've been stuck at steps 3.1 and 3.2 for a while, and so far nothing on Stack Overflow or in instructional blogs has helped me move forward, so I thought it was time to ask my own question.
Could someone please let me know in which file(s), and how, I should make the changes mentioned in steps 3.1 ("Change your JDBC URL") and 3.2 ("Change the driver used")? Please let me know if there's something I can clarify further in order to help get my question answered, and thank you in advance for any help or guidance you can provide.
Update
After making the changes to server.xml suggested by aguibert and including all of the log4j*.jar files from the dependency in the db2 drivers directory, my server.xml entry looks like this:
<dataSource id="myDataSource" jndiName="jdbc/myDataSource" type="javax.sql.DataSource">
<jdbcDriver javax.sql.DataSource="net.sf.log4jdbc.sql.jdbcapi.DataSourceSpy">
<library>
<fileset dir="<path to dir>/db2" includes="db2jcc_license_cisuz.jar db2jcc4.jar log4j-api-2.3.jar log4j-core-2.3.jar log4jdbc-log4j2-jdbc4-1.16-sources.jar log4jdbc-log4j2-jdbc4-1.16.jar"/>
</library>
</jdbcDriver>
<properties
password="password"
user="user"
URL="jdbc:log4jdbc:db2://<normal jdbc url>" />
</dataSource>
Now, when the first query is made, I get an InstantiationException on net.sf.log4jdbc.sql.jdbcapi.DataSourceSpy:
java.lang.Exception:
at <my files>
at javax.servlet.http.HttpServlet.service(HttpServlet.java:595) [com.ibm.ws.javaee.servlet.3.0_1.0.8.jar:?]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:668) [com.ibm.ws.javaee.servlet.3.0_1.0.8.jar:?]
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1285) [com.ibm.ws.webcontainer_1.0.8.jar:?]
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:776) [com.ibm.ws.webcontainer_1.0.8.jar:?]
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:473) [com.ibm.ws.webcontainer_1.0.8.jar:?]
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1104) [com.ibm.ws.webcontainer_1.0.8.jar:?]
at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:4845) [com.ibm.ws.webcontainer_1.0.8.jar:?]
at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.handleRequest(DynamicVirtualHost.java:297) [com.ibm.ws.webcontainer_1.0.8.jar:?]
at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:981) [com.ibm.ws.webcontainer_1.0.8.jar:?]
at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:262) [com.ibm.ws.webcontainer_1.0.8.jar:?]
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:955) [com.ibm.ws.transport.http_1.0.8.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_60]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_60]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_60]
Caused by: javax.naming.NamingException: CWWKN0008E: An object could not be obtained for name jdbc/myDataSource.
at com.ibm.ws.jndi.internal.WSContext.resolveObject(WSContext.java:128) ~[?:?]
at com.ibm.ws.jndi.internal.WSContext.lookup(WSContext.java:364) ~[?:?]
at com.ibm.ws.jndi.internal.WSContext.lookup(WSContext.java:359) ~[?:?]
at org.apache.aries.jndi.DelegateContext.lookup(DelegateContext.java:161) ~[?:?]
at javax.naming.InitialContext.lookup(InitialContext.java:411) ~[?:1.7.0_60]
at <my files>
... 16 more
[ERROR ] CWWKE0701E: FrameworkEvent ERROR Bundle:com.ibm.ws.jdbc(id=69) org.osgi.framework.ServiceException: Exception in com.ibm.ws.resource.internal.ResourceFactoryTrackerData$1.getService()
at org.eclipse.osgi.internal.serviceregistry.ServiceFactoryUse.factoryGetService(ServiceFactoryUse.java:222)
at [internal classes]
at javax.naming.InitialContext.lookup(InitialContext.java:411)
at <my files>
at javax.servlet.http.HttpServlet.service(HttpServlet.java:595)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:668)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1285)
at [internal classes]
Caused by: java.lang.RuntimeException: java.sql.SQLNonTransientException: java.lang.InstantiationException: net.sf.log4jdbc.sql.jdbcapi.DataSourceSpy
at com.ibm.ws.resource.internal.ResourceFactoryTrackerData$1.getService(ResourceFactoryTrackerData.java:109)
... 10 more
Caused by: java.sql.SQLNonTransientException: java.lang.InstantiationException: net.sf.log4jdbc.sql.jdbcapi.DataSourceSpy
at com.ibm.ws.jdbc.internal.JDBCDriverService.create(JDBCDriverService.java:287)
... 10 more
Caused by: java.lang.InstantiationException: net.sf.log4jdbc.sql.jdbcapi.DataSourceSpy
at java.lang.Class.newInstance(Class.java:359)
at com.ibm.ws.jdbc.internal.JDBCDriverService$1.run(JDBCDriverService.java:228)
... 10 more
Event:org.osgi.framework.FrameworkEvent[source=com.ibm.ws.jdbc_1.0.8.cl50520150305-2202 [69]]
If it looks like there's anything that I've missed, please let me know. Searching for the errors in the stack trace hasn't resulted in any solutions.
Final Status
As aguibert pointed out, it seems like a different direction will be best here. Based on a comment in Logging PreparedStatements in Java, I've decided to implement a LoggableStatement wrapper as described here: ibm.com/developerworks/java/library/j-loggable
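For reference, the core of the idea looks roughly like the sketch below. This is only an illustration of the LoggableStatement pattern, not the code from the IBM article; the class and method names are mine, and only two setters are shown.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class LoggableStatement {

    private final PreparedStatement delegate;
    private final String sql;
    private final List<String> params = new ArrayList<String>();

    public LoggableStatement(Connection conn, String sql) throws SQLException {
        this.delegate = conn.prepareStatement(sql);
        this.sql = sql;
    }

    // Only two setters shown; a real wrapper would cover the whole interface.
    public void setString(int index, String value) throws SQLException {
        delegate.setString(index, value);
        saveParam(index, "'" + value + "'");
    }

    public void setInt(int index, int value) throws SQLException {
        delegate.setInt(index, value);
        saveParam(index, String.valueOf(value));
    }

    private void saveParam(int index, String rendered) {
        while (params.size() < index) {
            params.add("?");
        }
        params.set(index - 1, rendered);
    }

    // Naive substitution: does not handle '?' inside SQL string literals.
    public String getQueryString() {
        StringBuilder out = new StringBuilder();
        int next = 0;
        for (char c : sql.toCharArray()) {
            if (c == '?' && next < params.size()) {
                out.append(params.get(next++));
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public ResultSet executeQuery() throws SQLException {
        // Replace with a log4j2 Logger in real code.
        System.out.println("Executing: " + getQueryString());
        return delegate.executeQuery();
    }
}
The wrapper delegates every call to the real PreparedStatement while remembering a printable rendering of each parameter, so getQueryString() can splice the values back into the SQL purely for logging.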
For a WebSphere Liberty server, all of the global server config is done in the server.xml file (located by default at WLP_INSTALL/usr/servers/<server_name>/server.xml).
You will probably want something along these lines in your server.xml:
<dataSource id="myDataSource" jndiName="jdbc/myDataSource" type="javax.sql.DataSource">
<jdbcDriver javax.sql.DataSource="net.sf.log4jdbc.sql.jdbcapi.DataSourceSpy">
<library>
<fileset dir="C:/path/to/libs" includes="thedb2jar.jar log4j.jar" />
</library>
</jdbcDriver>
<properties user="user" password="password"
url="jdbc:log4jdbc:<the normal jdbc url>"/>
</dataSource>
The key parts here are that the <jdbcDriver> element has the javax.sql.DataSource property set, and that its value is the name of the DataSource class from the log4jdbc jar. Also, in the <properties> element, you'll see that the url is specified with the jdbc:log4jdbc prefix as described in section 3.1.
This is untested advice, but you may need to include both jars (the db2 jar and the log4jdbc jar) in the same folder so they are picked up by the same <library> element.
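Once the server is up, a quick way to check that the dataSource actually resolves is a JNDI lookup from a servlet or test class. This is a minimal, untested sketch assuming the jndiName from the configuration above:
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DataSourceProbe {
    public static void probe() throws Exception {
        InitialContext ctx = new InitialContext();
        // jndiName from the <dataSource> element above
        DataSource ds = (DataSource) ctx.lookup("jdbc/myDataSource");
        try (Connection conn = ds.getConnection()) {
            System.out.println("Connected via " + conn.getMetaData().getDriverName());
        }
    }
}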
I am trying to run a MapReduce job.
When I execute the following command to run the MapReduce job:
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop jar hadoop*examples*.jar wordcount /user/hduser/gutenberg /user/hduser/gutenberg-output
It gives me the following output:
/usr/local/hadoop$ bin/hadoop jar hadoop*examples*.jar wordcount /user/hduser/gutenberg /user/hduser/gutenberg-output
Warning: $HADOOP_HOME is deprecated.
15/03/20 22:03:42 ERROR security.UserGroupInformation: PriviledgedActionException as:suzon cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException: Unknown protocol to name node: org.apache.hadoop.mapred.JobSubmissionProtocol
at org.apache.hadoop.hdfs.server.namenode.NameNode.getProtocolVersion(NameNode.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
org.apache.hadoop.ipc.RemoteException: java.io.IOException: Unknown protocol to name node: org.apache.hadoop.mapred.JobSubmissionProtocol
at org.apache.hadoop.hdfs.server.namenode.NameNode.getProtocolVersion(NameNode.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:411)
at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:499)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:490)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:473)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
My JPS:
suzon@Suzon:/usr/local/hadoop$ jps
14944 Jps
14413 SecondaryNameNode
14233 DataNode
14076 NameNode
Here is my core-site.xml configuration:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54311</value>
<description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.</description>
</property>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task. </description>
</property>
</configuration>
My mapred-site.xml configuration:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task. </description>
</property>
</configuration>
My hdfs-site.xml configuration:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time. </description>
</property>
</configuration>
I'm trying to register my Castor mapping files with Spring, and I appear to be getting a NullPointerException.
In my application context I have:
<bean id="xmlContext" class="org.castor.spring.xml.XMLContextFactoryBean">
<property name="mappingLocations">
<list>
<value>DistributionSamplerMappings.xml</value>
</list>
</property>
<property name="castorProperties">
<props>
<prop key="org.exolab.castor.xml.strictelements">false</prop>
</props>
</property>
</bean>
<bean id="marshaller"
class="org.castor.spring.xml.CastorMarshallerFactoryBean">
<property name="xmlContext"><ref local="xmlContext"/></property>
</bean>
<bean id="unmarshaller"
class="org.castor.spring.xml.CastorUnmarshallerFactoryBean">
<property name="xmlContext"> <ref local="xmlContext"/></property>
<property name="ignoreExtraElements"><value>true</value></property>
<property name="ignoreExtraAttributes"><value>true</value></property>
</bean>
DistributionSamplerMappings.xml lives in the same directory as the application context.
I've tried using the spring-xml jar versions 1.2.1 and 1.5.3, but neither seems to help.
The exception being thrown back is:
SEVERE: Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'xmlContext' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is java.lang.NullPointerException
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1338)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:473)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409)
at java.security.AccessController.doPrivileged(Native Method)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:423)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:380)
at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3830)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4337)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:719)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
at org.apache.catalina.core.StandardService.start(StandardService.java:516)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
at org.apache.catalina.startup.Catalina.start(Catalina.java:566)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:288)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
Caused by: java.lang.NullPointerException
at org.castor.spring.xml.XMLContextFactoryBean.afterPropertiesSet(XMLContextFactoryBean.java:118)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1369)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1335)
... 30 more
I'm using Spring 2.5.6 and Castor 1.3.1.
Looking around I find I'm not the only one who has had this problem, but I don't seem to be able to find a solution.
Any help would be much appreciated.
First, look at the code: line 118 of XMLContextFactoryBean is the last line in the snippet below. It suggests that somehow mappingResource is null, which in turn suggests that getClass().getClassLoader().getResource(mappingLocation) is returning null, so perhaps it can't find your file.
mappingLocation = (String) iter.next();
URL mappingResource = getClass().getClassLoader()
.getResource(mappingLocation);
mapping.loadMapping(new InputSource(mappingResource
.openStream())); // NPE occurs on this line.
Now if you want the class loader to find a file, you need to put the file in the same place that it would look for classes. Putting your DistributionSamplerMappings.xml in the same directory as applicationContext.xml isn't good enough. Try WEB-INF/classes, or whichever classes folder holds the root of your compiled classes. If you're using Eclipse, you can do this by putting the file inside your source folder; it looks a bit untidy, since you'd rather keep config info elsewhere, but at least it will work.
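To see where (or whether) the class loader can actually find the mapping file, a quick probe like this can be dropped into the webapp; the file name is the one from the question, everything else is illustrative.
import java.net.URL;

public class MappingLocationProbe {
    public static void main(String[] args) {
        // Mirrors the lookup that XMLContextFactoryBean performs.
        URL res = MappingLocationProbe.class.getClassLoader()
                .getResource("DistributionSamplerMappings.xml");
        System.out.println(res == null ? "not found on classpath" : "found at " + res);
    }
}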
This exception can also occur if a mapped class doesn't have a default public constructor.