HBase error while executing ./bin/start-hbase.sh (Windows) - java

I installed Hadoop with this tutorial https://www.youtube.com/watch?v=g7Qpnmi0Q-s and it's working. I installed it in C:/hadoop.
I installed it only because I read that Hadoop is a prerequisite for running HBase in anything other than standalone mode, and the error messages refer to some Hadoop configuration. But it didn't help.
I tried to install HBase with this tutorial https://ics.upjs.sk/~novotnyr/blog/334/setting-up-hbase-on-windows, but I'm getting this error while executing ./bin/start-hbase.sh.
Output in the Cygwin terminal:
$ ./bin/start-hbase.sh
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
: Name or service not knownstname laptop-l6543teb
running master, logging to /cygdrive/c/java/hbase-2.2.4-bin/hbase-2.2.4//logs/hbase-maiwa-master-LAPTOP-L6543TEB.out
: running regionserver, logging to /cygdrive/c/java/hbase-2.2.4-bin/hbase-2.2.4//logs/hbase-maiwa-regionserver-LAPTOP-L6543TEB.out
hbase-site.xml:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///C:/cygwin/root/tmp/hbase/data</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>C:\Java\hbase-2.2.4-bin\hbase-2.2.4\logs</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>false</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
</configuration>
Environment variables:
Path variables:

The error output produced by start-hbase.sh has three different errors.
1. Issue with HADOOP_HOME variable
WARNING: DEFAULT_LIBEXEC_DIR ignored. It has been replaced by HADOOP_DEFAULT_LIBEXEC_DIR.
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
ERROR: Invalid HADOOP_COMMON_HOME
Update the environment variables so that HADOOP_HOME points to the Hadoop installation folder (not the bin folder within the installation folder).
As per your setup:
HADOOP_HOME=C:\hadoop\
Additionally, set the location of the configuration files:
HADOOP_CONF_DIR=C:\hadoop\etc\hadoop\
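For example, from a Cygwin shell you could export both before starting HBase (a minimal sketch assuming Hadoop really is installed under C:\hadoop; adjust the paths otherwise):
export HADOOP_HOME='C:\hadoop'
export HADOOP_CONF_DIR='C:\hadoop\etc\hadoop'
# or make them permanent from a Windows command prompt:
# setx HADOOP_HOME "C:\hadoop"
# setx HADOOP_CONF_DIR "C:\hadoop\etc\hadoop"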
2. Issue with interpreting Linux-style paths or an invalid path
cygpath: can't convert empty path
In hbase-env.sh (under C:\Java\hbase-2.2.4-bin\hbase-2.2.4\conf\), update the values for HBASE_HOME and HBASE_CLASSPATH.
As per your installation:
export HBASE_HOME=/cygdrive/c/Java/hbase-2.2.4-bin/hbase-2.2.4/
export HBASE_CLASSPATH=/cygdrive/c/Java/hbase-2.2.4-bin/hbase-2.2.4/lib/
And in your environment variables, make sure HBASE_HOME is configured similarly to HADOOP_HOME.
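If you want to verify that Cygwin can actually resolve those values, cygpath will translate between the two path styles; a quick sketch using the paths from this installation:
cygpath -w /cygdrive/c/Java/hbase-2.2.4-bin/hbase-2.2.4    # prints C:\Java\hbase-2.2.4-bin\hbase-2.2.4
cygpath -u 'C:\Java\hbase-2.2.4-bin\hbase-2.2.4'           # prints /cygdrive/c/Java/hbase-2.2.4-bin/hbase-2.2.4
echo "$HBASE_HOME"    # an empty value here is what produces "cygpath: can't convert empty path"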
3. Unable to resolve hostname
: Name or service not knownstname laptop-l6543teb
Update your hosts file with the correct IP-to-hostname mapping.
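For example, on Windows the hosts file lives at C:\Windows\System32\drivers\etc\hosts; a minimal sketch for a local, standalone setup (the 127.0.0.1 mapping is an assumption, use your machine's real IP if the services must be reachable from other hosts):
127.0.0.1       localhost
127.0.0.1       laptop-l6543teb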

Related

FLUME [HADOOP_ORG.APACHE.FLUME.TOOLS.GETJAVAPROPERTY_USER: Bad substitution]

I am trying to run the typical Flume first example to get tweets and store them in HDFS using Apache Flume.
[Hadoop version 3.1.3; Apache Flume 1.9.0]
I have configured flume-env.sh:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre/
export CLASSPATH=$CLASSPATH:/FLUME_HOME/lib/*
Configured the agent as shown in the TwitterStream.properties config file:
# Naming the components on the current agent.
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
# Describing/Configuring the source
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.consumerKey = **********************
TwitterAgent.sources.Twitter.consumerSecret = **********************
TwitterAgent.sources.Twitter.accessToken = **********************
TwitterAgent.sources.Twitter.accessTokenSecret = **********************
TwitterAgent.sources.Twitter.keywords = tutorials point, java, bigdata, mapreduce, mahout, hbase, nosql
# Describing/Configuring the sink
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://localhost:9000/user/twitter_data/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
# Describing/Configuring the channel
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 100
# Binding the source and sink to the channel
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sinks.HDFS.channel = MemChannel
And then running the command:
bin/flume-ng agent -c /home/jiglesia/hadoop/flume/conf/ -f TwitterStream.properties -n TwitterAgent -Dflume.root.logger=INFO, console -n TwitterAgent
Getting the following ERROR during the execution:
Info: Sourcing environment configuration script /home/jiglesia/hadoop/flume/conf/flume-env.sh
Info: Including Hadoop libraries found via (/home/jiglesia/hadoop/bin/hadoop) for HDFS access
/home/jiglesia/hadoop/libexec/hadoop-functions.sh: line 2360: HADOOP_ORG.APACHE.FLUME.TOOLS.GETJAVAPROPERTY_USER: bad substitution
/home/jiglesia/hadoop/libexec/hadoop-functions.sh: line 2455: HADOOP_ORG.APACHE.FLUME.TOOLS.GETJAVAPROPERTY_OPTS: bad substitution
Info: Including Hive libraries found via () for Hive access
I don't know why it says Bad Substitution.
Finally, I attach the entire log in case it tells you anything:
Info: Sourcing environment configuration script /home/jiglesia/hadoop/flume/conf/flume-env.sh
Info: Including Hadoop libraries found via (/home/jiglesia/hadoop/bin/hadoop) for HDFS access
/home/jiglesia/hadoop/libexec/hadoop-functions.sh: line 2360: HADOOP_ORG.APACHE.FLUME.TOOLS.GETJAVAPROPERTY_USER: bad substitution
/home/jiglesia/hadoop/libexec/hadoop-functions.sh: line 2455: HADOOP_ORG.APACHE.FLUME.TOOLS.GETJAVAPROPERTY_OPTS: bad substitution
Info: Including Hive libraries found via () for Hive access
+ exec /usr/lib/jvm/java-8-openjdk-amd64/jre//bin/java -Xmx20m -Dflume.root.logger=INFO, -cp '/home/jiglesia/hadoop/flume/conf:/home/jiglesia/hadoop/flume/lib/*:/home/jiglesia/hadoop/etc/hadoop:/home/jiglesia/hadoop/share/hadoop/common/lib/*:/home/jiglesia/hadoop/share/hadoop/common/*:/home/jiglesia/hadoop/share/hadoop/hdfs:/home/jiglesia/hadoop/share/hadoop/hdfs/lib/*:/home/jiglesia/hadoop/share/hadoop/hdfs/*:/home/jiglesia/hadoop/share/hadoop/mapreduce/lib/*:/home/jiglesia/hadoop/share/hadoop/mapreduce/*:/home/jiglesia/hadoop/share/hadoop/yarn:/home/jiglesia/hadoop/share/hadoop/yarn/lib/*:/home/jiglesia/hadoop/share/hadoop/yarn/*:/lib/*' -Djava.library.path=:/home/jiglesia/hadoop/lib/native org.apache.flume.node.Application -f TwitterStream.properties -n TwitterAgent console -n TwitterAgent
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/jiglesia/hadoop/flume/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/jiglesia/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (org.apache.flume.node.Application).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
The environment variables configured in the bashrc file:
# HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre/
export HADOOP_HOME=/home/jiglesia/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
# HADOOP VARIABLES END
# FLUME VARIABLES START
FLUME_HOME=/home/jiglesia/hadoop/flume
PATH=$PATH:/FLUME_HOME/bin
CLASSPATH=$CLASSPATH:/FLUME_HOME/lib/*
# FLUME VARIABLES END
Thanks for your help!
You've referenced bash variables incorrectly.
Try this:
FLUME_HOME=$HADOOP_HOME/flume
PATH=$PATH:$FLUME_HOME/bin
CLASSPATH=$CLASSPATH:$FLUME_HOME/lib/*.jar
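As a quick sanity check after editing (assuming the variables live in ~/.bashrc, as in your question):
source ~/.bashrc
echo "$FLUME_HOME"              # should print /home/jiglesia/hadoop/flume
ls "$FLUME_HOME"/bin/flume-ng   # should list the launcher if the path is right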
Note: I'd suggest not putting Flume in a subdirectory of Hadoop.
I recommend using Apache Ambari to install and configure the Hadoop and Flume processes.

NodeManager and ResourceManager processes do not start

I am setting up a multi-node cluster, and my NodeManager and ResourceManager processes are not starting for some reason; I can't figure out why. When I run the jps command, I only see the NameNode, SecondaryNameNode and Jps processes. As a result, my MapReduce job won't work. This is my configuration:
yarn-site.xml - across NameNode and DataNodes
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>ec2PathToMyNameNode.compute-1.amazonaws.com</value>
  </property>
</configuration>
And my hosts file is this on the NameNode:
nameNodeIP nameNodePublicDNS.compute-1.amazonaws.com
dataNode1IP dataNode1PublicDNS.compute-1.amazonaws.com
dataNode2IP dataNode2PublicDNS.compute-1.amazonaws.com
dataNode3IP dataNode3PublicDNS.compute-1.amazonaws.com
127.0.0.1 localhost
When I run my MapReduce job, it says it is unable to connect on port 8032. I am using Hadoop 3.1.2.
Edit:
I checked the logs and I found the following exception:
Caused by: java.lang.ClassNotFoundException: javax.activation.DataSource
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:190)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:499)
... 83 more
Error injecting constructor, java.lang.NoClassDefFoundError: javax/activation/DataSource
at org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver.<init>(JAXBContextResolver.java:41)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebApp.setup(RMWebApp.java:54)
while locating org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver
1 error
at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1025)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1051)
at com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory$GuiceInstantiatedComponentProvider.getInstance(GuiceComponentProviderFactory.java:345)
Trying to figure out the issue
(1) start-dfs.sh vs start-all.sh
Check that you are using the start-all.sh command when you try to start Hadoop, because start-dfs.sh will only start the namenode and datanodes.
(2) Check the Hadoop logs
Check the value of the HADOOP_LOG_DIR variable to find the log directory, because the logs will include any exceptions thrown while trying to start the NodeManager and the ResourceManager. A quick way to check them is shown below.
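Assuming the default layout, where logs end up under $HADOOP_HOME/logs when HADOOP_LOG_DIR is not set:
echo "${HADOOP_LOG_DIR:-$HADOOP_HOME/logs}"
ls "${HADOOP_LOG_DIR:-$HADOOP_HOME/logs}"
grep -iE -A5 'error|exception' "${HADOOP_LOG_DIR:-$HADOOP_HOME/logs}"/*resourcemanager*.log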
(3) Check the installed Java version
The error may be caused by an incompatible Java version; check that you have installed a Java version that Hadoop supports (see the JDK 9 links below).
Fix Java 9 incompatibilities in Hadoop
Hadoop Error starting ResourceManager and NodeManager
(4) Check Hadoop Common issues
Based on the error you provided in your update, you may find these issues relevant:
[JDK9] Fail to run yarn application after building hadoop pkg with jdk9 in jdk9 env
[JDK9] Resource Manager failed to start after using hadoop pkg(built with jdk9)
More information
For more information you can check my article on Medium; it may give you some insights:
Installing Hadoop 3.1.0 multi-node cluster on Ubuntu 16.04 Step by Step
My problem was that I used Java 11 with Hadoop.
So what I did was:
1. rm /Library/Java/*
2. download Java 8 from https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
3. install the Java 8 JDK
4. fix JAVA_HOME in hadoop-env.sh
5. stop-all.sh
6. start-dfs.sh
7. start-yarn.sh
[pdash@localhost hadoop]$ export YARN_RESOURCEMANAGER_OPTS="--add-modules=ALL-SYSTEM"
[pdash@localhost hadoop]$ export YARN_NODEMANAGER_OPTS="--add-modules=ALL-SYSTEM"
It will work for sure; I tried it based on the Apache JIRA log. Thanks, PRAFUL.

Hadoop Error starting ResourceManager and NodeManager

I'm trying to set up Hadoop3-alpha3 with a single-node cluster (pseudo-distributed), using the Apache guide to do so. I've tried running the example MapReduce job, but every time the connection is refused. After running sbin/start-all.sh I've been seeing these exceptions in the ResourceManager log (and similarly in the NodeManager log):
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.commons.beanutils.FluentPropertyBeanIntrospector: Error when creating PropertyDescriptor for public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)! Ignoring this property.
xxxx-xx-xx xx:xx:xx,xxx DEBUG org.apache.commons.beanutils.FluentPropertyBeanIntrospector: Exception is:
java.beans.IntrospectionException: bad write method arg count: public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)
at java.desktop/java.beans.PropertyDescriptor.findPropertyType(PropertyDescriptor.java:696)
at java.desktop/java.beans.PropertyDescriptor.setWriteMethod(PropertyDescriptor.java:356)
at java.desktop/java.beans.PropertyDescriptor.<init>(PropertyDescriptor.java:142)
at org.apache.commons.beanutils.FluentPropertyBeanIntrospector.createFluentPropertyDescritor(FluentPropertyBeanIntrospector.java:178)
at org.apache.commons.beanutils.FluentPropertyBeanIntrospector.introspect(FluentPropertyBeanIntrospector.java:141)
at org.apache.commons.beanutils.PropertyUtilsBean.fetchIntrospectionData(PropertyUtilsBean.java:2245)
at org.apache.commons.beanutils.PropertyUtilsBean.getIntrospectionData(PropertyUtilsBean.java:2226)
at org.apache.commons.beanutils.PropertyUtilsBean.getPropertyDescriptor(PropertyUtilsBean.java:954)
at org.apache.commons.beanutils.PropertyUtilsBean.isWriteable(PropertyUtilsBean.java:1478)
at org.apache.commons.configuration2.beanutils.BeanHelper.isPropertyWriteable(BeanHelper.java:521)
at org.apache.commons.configuration2.beanutils.BeanHelper.initProperty(BeanHelper.java:357)
at org.apache.commons.configuration2.beanutils.BeanHelper.initBeanProperties(BeanHelper.java:273)
at org.apache.commons.configuration2.beanutils.BeanHelper.initBean(BeanHelper.java:192)
at org.apache.commons.configuration2.beanutils.BeanHelper$BeanCreationContextImpl.initBean(BeanHelper.java:669)
at org.apache.commons.configuration2.beanutils.DefaultBeanFactory.initBeanInstance(DefaultBeanFactory.java:162)
at org.apache.commons.configuration2.beanutils.DefaultBeanFactory.createBean(DefaultBeanFactory.java:116)
at org.apache.commons.configuration2.beanutils.BeanHelper.createBean(BeanHelper.java:459)
at org.apache.commons.configuration2.beanutils.BeanHelper.createBean(BeanHelper.java:479)
at org.apache.commons.configuration2.beanutils.BeanHelper.createBean(BeanHelper.java:492)
at org.apache.commons.configuration2.builder.BasicConfigurationBuilder.createResultInstance(BasicConfigurationBuilder.java:447)
at org.apache.commons.configuration2.builder.BasicConfigurationBuilder.createResult(BasicConfigurationBuilder.java:417)
at org.apache.commons.configuration2.builder.BasicConfigurationBuilder.getConfiguration(BasicConfigurationBuilder.java:285)
at org.apache.hadoop.metrics2.impl.MetricsConfig.loadFirst(MetricsConfig.java:119)
at org.apache.hadoop.metrics2.impl.MetricsConfig.create(MetricsConfig.java:98)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:478)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:188)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:163)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:62)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:58)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:678)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1129)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:315)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1407)
And then later in the file:
xxxx-xx-xx xx:xx:xx,xxx FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting ResourceManager
java.lang.ExceptionInInitializerError
at com.google.inject.internal.cglib.reflect.$FastClassEmitter.<init>(FastClassEmitter.java:67)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.generateClass(FastClass.java:72)
at com.google.inject.internal.cglib.core.$DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at com.google.inject.internal.cglib.core.$AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.create(FastClass.java:64)
at com.google.inject.internal.BytecodeGen.newFastClass(BytecodeGen.java:204)
at com.google.inject.internal.ProviderMethod$FastClassProviderMethod.<init>(ProviderMethod.java:256)
at com.google.inject.internal.ProviderMethod.create(ProviderMethod.java:71)
at com.google.inject.internal.ProviderMethodsModule.createProviderMethod(ProviderMethodsModule.java:275)
at com.google.inject.internal.ProviderMethodsModule.getProviderMethods(ProviderMethodsModule.java:144)
at com.google.inject.internal.ProviderMethodsModule.configure(ProviderMethodsModule.java:123)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:349)
at com.google.inject.AbstractModule.install(AbstractModule.java:122)
at com.google.inject.servlet.ServletModule.configure(ServletModule.java:52)
at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
at com.google.inject.spi.Elements.getElements(Elements.java:110)
at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
at com.google.inject.Guice.createInjector(Guice.java:96)
at com.google.inject.Guice.createInjector(Guice.java:73)
at com.google.inject.Guice.createInjector(Guice.java:62)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:332)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:377)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1116)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1218)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1408)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) throws java.lang.ClassFormatError accessible: module java.base does not "opens java.lang" to unnamed module #173f73e7
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:337)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:281)
at java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:197)
at java.base/java.lang.reflect.Method.setAccessible(Method.java:191)
at com.google.inject.internal.cglib.core.$ReflectUtils$2.run(ReflectUtils.java:56)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at com.google.inject.internal.cglib.core.$ReflectUtils.<clinit>(ReflectUtils.java:46)
... 29 more
For reference, my core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
and yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>
I have no idea what is causing these exceptions; any help would be appreciated.
Edit: Added hadoop-env.sh:
export JAVA_HOME=/usr/local/jdk-9
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
case ${HADOOP_OS_TYPE} in
Darwin*)
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= "
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.kdc= "
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf= "
;;
esac
export HADOOP_ROOT_LOGGER=DEBUG,console
export HADOOP_DAEMON_ROOT_LOGGER=DEBUG,RFA
As mentioned by @tk421 in the comments, Java 9 is not compatible with Hadoop 3 (and possibly any Hadoop version) yet.
https://issues.apache.org/jira/browse/HADOOP-11123
I've changed to Java 8.181 and both are starting up now:
hadoop@hadoop:/usr/local/hadoop$ sbin/start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as hadoop in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop]
Starting resourcemanager
Starting nodemanagers
hadoop@hadoop:/usr/local/hadoop$ jps
8756 SecondaryNameNode
8389 NameNode
9173 NodeManager
9030 ResourceManager
8535 DataNode
9515 Jps
My problem was that I used Java 11 with Hadoop.
So what I did was:
1. rm /Library/Java/*
2. download Java 8 from https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
3. install the Java 8 JDK
4. fix JAVA_HOME in hadoop-env.sh
5. stop-all.sh
6. start-dfs.sh
7. start-yarn.sh
I want to share my observations. I was using OpenJDK 17 with Hadoop.
I thought the newer the Java version, the better. I was very wrong; I had to switch to OpenJDK 8.
So, how do we fix the problem?
You need to uninstall the previous version of Java. There are several removal options.
Deleting only OpenJDK: $ sudo apt-get remove openjdk*
Deleting OpenJDK along with dependencies: $ sudo apt-get remove --auto-remove openjdk*
Deleting OpenJDK and its configuration files: $ sudo apt-get purge openjdk*
Deleting OpenJDK along with dependencies and its configuration files: $ sudo apt-get purge --auto-remove openjdk*
As for me, I used the latter option.
You need to install OpenJDK 8.
Installing OpenJDK: $ sudo apt install openjdk-8-jdk -y
After the installation is complete, you can check the Java version: $ java -version; javac -version
You need to edit the JAVA_HOME variable. To do this, you need to open hadoop-env.sh.
To open a file hadoop-env.sh, you can use the command:
sudo nano $HADOOP_HOME/etc/hadoop/hadoop-env.sh
where $HADOOP_HOME is the location of your Hadoop (for example, /home/hdoop/hadoop-3.2.4).
The JAVA_HOME line looks like this: export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64. Of course, it all depends on your Java location.
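If you are not sure where Java lives, a couple of standard Ubuntu commands will tell you (a small sketch; the second command strips the trailing /bin/javac to get the JDK root):
update-alternatives --list java
readlink -f "$(which javac)" | sed 's:/bin/javac::'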
Go to the hadoop-3.2.4/sbin directory.
Next, you need to stop the daemons on all nodes of the cluster: ./stop-all.sh
Start the NameNode and DataNode: ./start-dfs.sh
Start the YARN ResourceManager and NodeManagers: ./start-yarn.sh
Check that all daemons are active and running as Java processes: jps
The resulting list should look (approximately) as follows:
33706 SecondaryNameNode
33330 NameNode
34049 NodeManager
33900 ResourceManager
33482 DataNode
34410 Jps
Hadoop setup is DONE!
P.S. I hope my answer will be useful. I tried to cover all the details in my answer. I wish you all success.

com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed

I'm facing the below issue when trying to connect to a Cassandra cluster and display table contents:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /175.14.3.164:9042 (com.datastax.driver.core.TransportException: [/172.16.3.163:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:223)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1272)
at com.datastax.driver.core.Cluster.init(Cluster.java:158)
at com.datastax.driver.core.Cluster.connect(Cluster.java:248)
at com.datastax.driver.core.Cluster.connect(Cluster.java:281)
at com.X.Y.App.main(App.java:27)
In my cassandra.yaml file
native_transport_port: 9042
My listen_address : 172.14.3.164
seeds: 172.14.3.164
rpc_address: 172.14.3.164
And code :
cluster=Cluster.builder().addContactPoint("172.14.3.164").build();
I have seen other related links and followed them, but I still couldn't fix it. Kindly help.

Hadoop CDH4 Error: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder"

I am using Ubuntu 14.04 and CDH4.7.
I am installing as per the procedure given in the link below:
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/CDH4-Quick-Start/cdh4qs_topic_3_2.html
The problem is that I am not able to start the datanode. I am getting the error:
naveensrikanthd@ubuntu:/$ for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do sudo service $x start ; done
[sudo] password for naveensrikanthd:
* Starting Hadoop datanode:
starting datanode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-datanode-ubuntu.out
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
* Starting Hadoop namenode:
namenode running as process 15437. Stop it first.
* Starting Hadoop secondarynamenode:
secondarynamenode running as process 3061. Stop it first.
naveensrikanthd@ubuntu:/$ jps
7467 RunJar
8048 RunJar
18363 Jps
No Hadoop process is running, and the three SLF4J statements given above appear for both the namenode and the datanode.
Below is the log file at this path:
/var/log/hadoop-hdfs/hadoop-hdfs-datanode-ubuntu.out
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
ulimit -a for user hdfs
What should I do to get rid of this error? Can anyone please help?
The output shows that in fact the namenodes are already running. You should double-check where you think they are supposed to run and what your config says, because it's saying you already succeeded.
The warning from SLF4J/log4j has nothing to do with Hadoop functionality.
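If you do want to make the SLF4J message go away anyway, it simply means no SLF4J binding jar was found on the daemon's classpath; a hedged sketch of how to check under a packaged CDH-style layout (the exact directory depends on your installation):
ls /usr/lib/hadoop/lib | grep slf4j    # expect slf4j-api plus a binding such as slf4j-log4j12
# if only slf4j-api-*.jar is there, add a matching slf4j-log4j12-*.jar to that lib directory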
