Cannot find the hive-hwi-<version>.war file - java

I cannot start the Hive Web Interface as described here. This is the output of hive --service hwi:
ls: cannot access /usr/local/hive/lib/hive-hwi-*.war: No such file or directory
14/09/09 13:07:59 INFO hwi.HWIServer: HWI is starting up
14/09/09 13:08:00 FATAL hwi.HWIServer: HWI WAR file not found at /usr/local/hive/lib/hive-hwi-0.13.1.war
It appears that there is no .war file under /usr/local/hive/lib! Am I supposed to generate the WAR file myself?
I've correctly set $ANT_LIB and $HIVE_HOME, and here is my hive-site.xml:
<configuration>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>hdfs://hadoop-server/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>hive.hwi.war.file</name>
<value>/lib/hive-hwi-0.13.1.war</value>
<description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>
</configuration>
My hive version is 0.13.1, and hadoop version is 2.5.0.

There is HIVE-7233 tracking this.
You may need to change your Hive version, or copy the WAR file from another version.


java.lang.NoClassDefFoundError: org/apache/htrace/core/HTraceConfiguration

I am using Hadoop 2.9.1 and HBase 2.1.0 in stand-alone local mode.
When I tried starting HBase 2.1.0 using sudo start-hbase.sh in the bin folder, I got the error below:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/htrace/core/HTraceConfiguration
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:153)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2983)
Caused by: java.lang.ClassNotFoundException: org.apache.htrace.core.HTraceConfiguration
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
This is my hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>/home/niyazmohamed/bigdata/upgraded_versions/hbase-2.1.0/hbasedir</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/niyazmohamed/bigdata/upgraded_versions/hbase-2.1.0/zookeeper</value>
</property>
</configuration>
When I tried to start HBase version 1.2.0, it started successfully, the hbase shell was accessible, and CRUD operations worked.
The Hadoop and HBase paths are set, and that alone was enough to run HBase 1.2.0.
This problem occurs only with HBase 2.1.0.
Any help appreciated! Thanks in advance!
Related:
Starting HBASE, java.lang.ClassNotFoundException: org.apache.htrace.SamplerBuilder
htrace-core-*-incubating.jar was missing from some early versions of HBase 2.x.
If the htrace-core jar is present in $HBASE_HOME/lib/client-facing-thirdparty, copy it to $HBASE_HOME/lib.
Otherwise, download the jar from Maven here and place it into $HBASE_HOME/lib.
You can see in the HBase pom.xml for version 2.1.0 that htrace 4.2.0 is the correct version of the dependency:
https://github.com/apache/hbase/blob/rel/2.1.0/pom.xml#L1364
Good luck.
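The two cases above can be sketched as a small shell check. HBASE_HOME here is a demo path and the jar is an empty placeholder, purely for illustration:

```shell
# Sketch: ensure an htrace-core jar ends up on HBase's main classpath.
# HBASE_HOME is a demo path; the jar is an empty placeholder.
HBASE_HOME=/tmp/demo-hbase
mkdir -p "$HBASE_HOME/lib/client-facing-thirdparty"
touch "$HBASE_HOME/lib/client-facing-thirdparty/htrace-core-4.2.0-incubating.jar"

jar=$(ls "$HBASE_HOME"/lib/client-facing-thirdparty/htrace-core-*-incubating.jar 2>/dev/null | head -n 1)
if [ -n "$jar" ]; then
  cp "$jar" "$HBASE_HOME/lib/"   # case 1: the jar shipped, just not on the classpath
else
  echo "download htrace-core-4.2.0-incubating.jar from Maven into $HBASE_HOME/lib"   # case 2
fi
ls "$HBASE_HOME/lib/"
```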

Java's error: Could not find or load main class Name while using Hadoop

I am trying to launch Hadoop on my computer, but when I execute any related command in CMD, such as hadoop version or hdfs namenode -format, I get an error (exactly as follows):
Error: Could not find or load main class Name
The OS is Windows 10.
Hadoop version 2.7.1.
JDK 1.8.0.131.
I have the following user variables:
HADOOP_HOME = C:\hadoop-2.7.1\bin
HAVA_HOME = C:\Progra~2\Java\jdk1.8.0_131
And within the system variable PATH there are two locations set:
%JAVA_HOME%\bin;C:\hadoop-2.7.1\bin
In hadoop-env.cmd there is variable:
JAVA_HOME = %JAVA_HOME%
Among core-site.xml, mapred-site.xml, hdfs-site.xml, and yarn-site.xml, paths to directories are set only in hdfs-site.xml. The full configuration tag in that file is the following:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/c:/hadoop-2.7.1/data/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/c:/hadoop-2.7.1/data/datanode</value>
</property>
</configuration>

Hadoop namenode not starting, says file:/// has no authority

I am trying to run Hadoop 2.6.0 on Windows, and I am following this guide:
https://wiki.apache.org/hadoop/Hadoop2OnWindows
I have everything set up, and all the values are set in the correct XML files.
However, I am running into this error when trying to start my namenode:
Invalid URI for NameNode address (checkfs.defaultFS): file:/// has no authority.
This is what I have in my core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://0.0.0.0:19000</value>
</property>
</configuration>
What is going on?
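One side note, offered as context rather than a diagnosis: fs.default.name is the deprecated Hadoop 1.x alias of the key the error message mentions (fs.defaultFS). An equivalent core-site.xml using the current name would be:

```xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://0.0.0.0:19000</value>
</property>
</configuration>
```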

java.io.IOException: Cannot initialize Cluster in Hadoop2 with YARN

This is my first time posting to stackoverflow, so I apologize if I did something wrong.
I recently set up a new hadoop cluster, and this is my first time trying to use Hadoop 2 and YARN. I currently get the following error when I submit my job.
java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
Here are my configuration files:
mapred-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.name.dir</name>
<value>/temp1/nn,/temp2/nn</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/temp1/dn,/temp2/dn</value>
</property>
<property>
<name>fs.checkpoint.dir</name>
<value>/temp1/snn</value>
</property>
<property>
<name>dfs.permissions.supergroup</name>
<value>hrdbms</value>
</property>
<property>
<name>dfs.block.size</name>
<value>268435456</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>172.31.20.99</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/temp1/y1,/temp2/y1</value>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>/temp1/y2,/temp2/y2</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
Here is my java code:
Configuration conf = new Configuration();
conf.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
conf.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
conf.setBoolean("mapred.compress.map.output",true);
conf.addResource(new org.apache.hadoop.fs.Path("/usr/local/hadoop-2.5.1/etc/hadoop/core-site.xml"));
conf.addResource(new org.apache.hadoop.fs.Path("/usr/local/hadoop-2.5.1/etc/hadoop/hdfs-site.xml"));
conf.addResource(new org.apache.hadoop.fs.Path("/usr/local/hadoop-2.5.1/etc/hadoop/yarn-site.xml"));
conf.set("mapreduce.framework.name", "yarn");
conf.setClass("mapred.map.output.compression.codec", org.apache.hadoop.io.compress.SnappyCodec.class, CompressionCodec.class);
Job job = new Job(conf);
job.setJarByClass(LoadMapper.class);
job.setJobName("Load " + schema + "." + table);
job.setMapperClass(LoadMapper.class);
job.setReducerClass(LoadReducer.class);
job.setOutputKeyClass(IntWritable.class);
job.setOutputValueClass(ALOWritable.class);
job.setMapOutputKeyClass(IntWritable.class);
job.setMapOutputValueClass(ALOWritable.class);
job.setNumReduceTasks(workerNodes.size());
job.setOutputFormatClass(LoadOutputFormat.class);
job.setReduceSpeculativeExecution(false);
job.setMapSpeculativeExecution(false);
String glob2 = glob.substring(6);
FileInputFormat.addInputPath(job, new org.apache.hadoop.fs.Path(glob2));
HRDBMSWorker.logger.debug("Submitting MR job");
boolean allOK = job.waitForCompletion(true);
Here are all of the environment variables that are in place when I start the JVM
HADOOP_DATANODE_OPTS=-Dhadoop.security.logger=ERROR,RFAS
HOSTNAME=ip-172-31-20-103
HADOOP_IDENT_STRING=hrdbms
SHELL=/bin/bash
TERM=xterm
HADOOP_HOME=/usr/local/hadoop-2.5.1
HISTSIZE=1000
HADOOP_PID_DIR=
YARN_HOME=/usr/local/hadoop-2.5.1
USER=hrdbms
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.tbz=01;31:*.tbz2=01;31:*.bz=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
HADOOP_SECURE_DN_PID_DIR=
HADOOP_SECURE_DN_LOG_DIR=/
MAIL=/var/spool/mail/hrdbms
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hrdbms/bin
HADOOP_HDFS_HOME=/usr/local/hadoop-2.5.1
HADOOP_CLIENT_OPTS=-Xmx512m
HADOOP_COMMON_HOME=/usr/local/hadoop-2.5.1
PWD=/home/hrdbms
JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.55.x86_64/jre
HADOOP_CLASSPATH=/home/hrdbms/HRDBMS.jar:/contrib/capacity-scheduler/*.jar
HADOOP_CONF_DIR=/etc/hadoop
LANG=en_US.UTF-8
HADOOP_PORTMAP_OPTS=-Xmx512m
HADOOP_OPTS= -Djava.net.preferIPv4Stack=true
HADOOP_SECONDARYNAMENODE_OPTS=-Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender
HISTCONTROL=ignoredups
SHLVL=1
HOME=/home/hrdbms
YARN_CONF_DIR=/etc/hadoop
HADOOP_SECURE_DN_USER=
HADOOP_NAMENODE_OPTS=-Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender
HADOOP_MAPRED_HOME=/usr/local/hadoop-2.5.1
LOGNAME=hrdbms
HADOOP_NFS3_OPTS=
LESSOPEN=|/usr/bin/lesspipe.sh %s
HADOOP_YARN_USER=hrdbms
G_BROKEN_FILENAMES=1
_=/bin/env
Here is a list of all jars in the client classpath
activation-1.1.jar
antlr-4.2.1-complete.jar
aopalliance-1.0.jar
apacheds-i18n-2.0.0-M15.jar
apacheds-kerberos-codec-2.0.0-M15.jar
api-asn1-api-1.0.0-M20.jar
api-util-1.0.0-M20.jar
asm-3.2.jar
avro-1.7.4.jar
commons-beanutils-1.7.0.jar
commons-beanutils-core-1.8.0.jar
commons-cli-1.2.jar
commons-codec-1.3.jar
commons-codec-1.4.jar
commons-collections-3.2.1.jar
commons-compress-1.4.1.jar
commons-configuration-1.6.jar
commons-daemon-1.0.13.jar
commons-digester-1.8.jar
commons-el-1.0.jar
commons-httpclient-3.1.jar
commons-io-2.4.jar
commons-lang-2.6.jar
commons-logging-1.1.3.jar
commons-math3-3.1.1.jar
commons-net-3.1.jar
guava-11.0.2.jar
guice-3.0.jar
guice-servlet-3.0.jar
hadoop-annotations-2.5.1.jar
hadoop-archives-2.5.1.jar
hadoop-auth-2.5.1.jar
hadoop-common-2.5.1-tests.jar
hadoop-common-2.5.1.jar
hadoop-datajoin-2.5.1.jar
hadoop-distcp-2.5.1.jar
hadoop-extras-2.5.1.jar
hadoop-gridmix-2.5.1.jar
hadoop-hdfs-2.5.1-tests.jar
hadoop-hdfs-2.5.1.jar
hadoop-hdfs-nfs-2.5.1.jar
hadoop-mapreduce-client-app-2.5.1.jar
hadoop-mapreduce-client-common-2.5.1.jar
hadoop-mapreduce-client-core-2.5.1.jar
hadoop-mapreduce-client-hs-2.5.1.jar
hadoop-mapreduce-client-hs-plugins-2.5.1.jar
hadoop-mapreduce-client-jobclient-2.5.1-tests.jar
hadoop-mapreduce-client-jobclient-2.5.1.jar
hadoop-mapreduce-client-shuffle-2.5.1.jar
hadoop-mapreduce-examples-2.5.1.jar
hadoop-nfs-2.5.1.jar
hadoop-openstack-2.5.1.jar
hadoop-rumen-2.5.1.jar
hadoop-sls-2.5.1.jar
hadoop-streaming-2.5.1.jar
hadoop-yarn-api-2.5.1.jar
hadoop-yarn-applications-distributedshell-2.5.1.jar
hadoop-yarn-applications-unmanaged-am-launcher-2.5.1.jar
hadoop-yarn-client-2.5.1.jar
hadoop-yarn-common-2.5.1.jar
hadoop-yarn-server-applicationhistoryservice-2.5.1.jar
hadoop-yarn-server-common-2.5.1.jar
hadoop-yarn-server-nodemanager-2.5.1.jar
hadoop-yarn-server-resourcemanager-2.5.1.jar
hadoop-yarn-server-tests-2.5.1.jar
hadoop-yarn-server-web-proxy-2.5.1.jar
hamcrest-core-1.3.jar
httpclient-4.2.5.jar
httpcore-4.2.5.jar
jackson-core-asl-1.9.13.jar
jackson-jaxrs-1.9.13.jar
jackson-mapper-asl-1.9.13.jar
jackson-xc-1.9.13.jar
jasper-compiler-5.5.23.jar
jasper-runtime-5.5.23.jar
java-xmlbuilder-0.4.jar
javax.inject-1.jar
jaxb-api-2.2.2.jar
jaxb-impl-2.2.3-1.jar
jersey-client-1.9.jar
jersey-core-1.9.jar
jersey-guice-1.9.jar
jersey-json-1.9.jar
jersey-server-1.9.jar
jets3t-0.9.0.jar
jettison-1.1.jar
jetty-6.1.26.jar
jetty-util-6.1.26.jar
jline-0.9.94.jar
jsch-0.1.50.jar
jsp-api-2.1.jar
jsr305-1.3.9.jar
junit-4.11.jar
leveldbjni-all-1.8.jar
log4j-1.2.17.jar
metrics-core-3.0.0.jar
mockito-all-1.8.5.jar
netty-3.6.2.Final.jar
paranamer-2.3.jar
preflight-app-1.8.7.jar
protobuf-java-2.5.0.jar
servlet-api-2.5.jar
slf4j-api-1.7.5.jar
slf4j-log4j12-1.7.5.jar
snappy-java-1.0.4.1.jar
stax-api-1.0-2.jar
xmlenc-0.52.jar
zookeeper-3.4.6.jar
Please help! Thanks!
EDIT: I just found these debug log messages.
2014-10-27 19:31:21,789 DEBUG Cluster: Trying ClientProtocolProvider : org.apache.hadoop.mapred.LocalClientProtocolProvider
2014-10-27 19:31:21,789 DEBUG Cluster: Cannot pick org.apache.hadoop.mapred.LocalClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I have run into similar issues today. In my case I was building an über jar, where some dependency (I have not found the culprit yet) was bringing in a META-INF/services/org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider with the contents:
org.apache.hadoop.mapred.LocalClientProtocolProvider
I provided my own version of that file in the project (i.e. put it on the classpath) with the following contents:
org.apache.hadoop.mapred.YarnClientProtocolProvider
and the correct one is picked up. I suspect you are seeing something similar. To fix it, create the file described above and put it on the classpath. If I find the culprit jar, I will update this answer.
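The workaround described above can be sketched as follows; the src/main/resources path is an assumption about the project layout, rooted here under a demo directory:

```shell
# Sketch: ship our own ServiceLoader entry so the YARN provider wins over
# the LocalClientProtocolProvider pulled in by a dependency.
RES=/tmp/demo-project/src/main/resources   # assumed resources root
mkdir -p "$RES/META-INF/services"
echo 'org.apache.hadoop.mapred.YarnClientProtocolProvider' \
  > "$RES/META-INF/services/org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider"
cat "$RES/META-INF/services/org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider"
```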
Turn off security in your cluster (if that is acceptable in your environment). That is, disable this HDFS setting:
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
It is on by default.
In Cloudera Manager it is accessible from the Configuration panel.
I ran into the same issue trying to run a MR job from Eclipse with the latest CDH5.5 Hadoop 2.6 distribution.
I am not sure what specifically the issue was; it might very well be the classloading issue that #timrobertson100 mentioned. But in my case, I was able to overcome it by adding all jars from the paths below to the Eclipse project's classpath:
.../hadoop-2.6.0-cdh5.5.1/share/hadoop/common/hadoop-common-2.6.0-cdh5.5.1.jar
.../hadoop-2.6.0-cdh5.5.1/share/hadoop/common/lib/*
.../hadoop-2.6.0-cdh5.5.1/share/hadoop/mapreduce2/*
.../hadoop-2.6.0-cdh5.5.1/share/hadoop/yarn/*
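For reference, those four locations can also be combined into a single classpath string for a command-line run; the sketch below uses a demo stand-in for the CDH root:

```shell
# Sketch: combine the four CDH locations into one classpath string.
# The root is a demo stand-in for .../hadoop-2.6.0-cdh5.5.1/share/hadoop.
H=/tmp/demo-cdh/share/hadoop
mkdir -p "$H/common/lib" "$H/mapreduce2" "$H/yarn"
touch "$H/common/hadoop-common-2.6.0-cdh5.5.1.jar"

CP="$H/common/hadoop-common-2.6.0-cdh5.5.1.jar:$H/common/lib/*:$H/mapreduce2/*:$H/yarn/*"
echo "$CP"
```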
Marina

java.lang.NoSuchMethodError using JOOQ

I was trying to use jOOQ in GlassFish. I ran the code generator like this:
java -cp jOOQ-lib/jooq-3.3.1.jar:jOOQ-lib/jooq-meta-3.3.1.jar:jOOQ-lib/jooq-codegen-3.3.1.jar:mysql-connector-java-5.1.29-bin.jar:. org.jooq.util.GenerationTool /db.xml
Then I imported the generated folder into my project (I'm not using the jOOQ Maven plugin). When I deploy the web app in GlassFish, I see this in server.log:
[#|2014-04-06T14:53:37.720+0430|SEVERE|glassfish3.1.2|com.sun.xml.ws.server.sei.TieHandler|_ThreadID=670;_ThreadName=Thread-2;|org.jooq.impl.TableImpl.<init>(Ljava/lang/String;Lorg/jooq/Schema;Lorg/jooq/Table;[Lorg/jooq/Field;Ljava/lang/String;)V
java.lang.NoSuchMethodError: org.jooq.impl.TableImpl.<init>(Ljava/lang/String;Lorg/jooq/Schema;Lorg/jooq/Table;[Lorg/jooq/Field;Ljava/lang/String;)V
I have not changed any Maven config, just the NetBeans default config. Maven artifact:
<dependency>
<groupId>org.jooq</groupId>
<artifactId>jooq</artifactId>
<version>3.3.1</version>
</dependency>
my db.xml:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<configuration>
<!-- Configure the database connection here -->
<jdbc>
<driver>com.mysql.jdbc.Driver</driver>
<url>jdbc:mysql://127.0.0.1/bulkdb?useUnicode=true</url>
<user>user</user>
<password>pass</password>
</jdbc>
<generator>
<database>
<name>org.jooq.util.mysql.MySQLDatabase</name>
<inputSchema>bulkdb</inputSchema>
<includes>.*</includes>
<excludes></excludes>
</database>
<target>
<packageName>bulkdb</packageName>
<directory>/home/user/jooq</directory>
</target>
</generator>
</configuration>
What is going wrong? Can someone help?
[UPDATE]
Actually there are two versions of jOOQ on the app server classpath: one in the domain's lib directory (domain1/lib/) with version 3.1, and a second one, 3.3.1, bundled in the WAR file. Does this cause problems?
Actually there are two versions of jOOQ on the app server classpath: one in the domain's lib directory (domain1/lib/) with version 3.1, and a second one, 3.3.1, bundled in the WAR file. Does this cause problems?
Yes, of course :-)
If you want to use both versions in parallel (do you really?), then you will probably need to resort to something like OSGi to be able to load the same class names in separate class loaders.
In your case, jOOQ 3.1 is loaded first by your application server, and thus jOOQ 3.3 cannot be loaded fully any more. The code generated with jOOQ 3.3 operates on new internal methods in TableImpl, which have been added in jOOQ 3.2 or 3.3, but since you're loading jOOQ 3.1, those methods aren't there. Note that this can happen with any external dependencies.
The solution here is really to remove jOOQ 3.1 from your application server.
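A quick way to confirm which jOOQ versions the server can see is to search the domain directory; the sketch below uses a demo path and an empty placeholder jar:

```shell
# Sketch: locate duplicate jOOQ jars, then remove the stale one from domain1/lib.
# DOMAIN is a demo path; the jar is an empty placeholder.
DOMAIN=/tmp/demo-glassfish/domain1
mkdir -p "$DOMAIN/lib"
touch "$DOMAIN/lib/jooq-3.1.0.jar"   # the old copy that gets loaded first

find "$DOMAIN" -name 'jooq-*.jar'
rm "$DOMAIN/lib/jooq-3.1.0.jar"      # remove it so the 3.3.1 bundled in the WAR wins
```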
