When I try to run the command below, an error pops up:
Alis-Mac:hadoop-2.7.3 naziaimran$ sbin/start-dfs.sh
Below is the error:
2018-06-05 01:04:31.424 java[1879:21215] Unable to load realm info from SCDynamicStore
18/06/05 01:04:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /Users/naziaimran/Desktop/hadoop-2.7.3/logs/hadoop-naziaimran-namenode-Alis-Mac.out
localhost: Exception in thread "main" java.lang.ExceptionInInitializerError
localhost: at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
localhost: at org.apache.hadoop.hdfs.server.common.HdfsServerConstants$RollingUpgradeStartupOption.getAllOptionString(HdfsServerConstants.java:80)
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.<clinit>(NameNode.java:249)
localhost: Caused by: java.lang.StringIndexOutOfBoundsException: begin 0, end 3, length 2
localhost: at java.base/java.lang.String.checkBoundsBeginEnd(String.java:3107)
localhost: at java.base/java.lang.String.substring(String.java:1873)
localhost: at org.apache.hadoop.util.Shell.<clinit>(Shell.java:51)
localhost: ... 3 more
localhost: starting datanode, logging to /Users/naziaimran/Desktop/hadoop-2.7.3/logs/hadoop-naziaimran-datanode-Alis-Mac.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /Users/naziaimran/Desktop/hadoop-2.7.3/logs/hadoop-naziaimran-secondarynamenode-Alis-Mac.out
0.0.0.0: Exception in thread "main" java.lang.ExceptionInInitializerError
0.0.0.0: at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
0.0.0.0: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:667)
0.0.0.0: Caused by: java.lang.StringIndexOutOfBoundsException: begin 0, end 3, length 2
0.0.0.0: at java.base/java.lang.String.checkBoundsBeginEnd(String.java:3107)
0.0.0.0: at java.base/java.lang.String.substring(String.java:1873)
0.0.0.0: at org.apache.hadoop.util.Shell.<clinit>(Shell.java:51)
0.0.0.0: ... 2 more
2018-06-05 01:04:48.170 java[2203:22211] Unable to load realm info from SCDynamicStore
18/06/05 01:04:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
I have been stuck on this for days now; any help will be highly appreciated.
Thanks in advance :)
The problem is that Hadoop 2.7 is incompatible with Java 9/10.
I had the same issue and solved it by downgrading to Java 8.
If you are unsure about how to downgrade and get it working, check the answer by VK321 here:
https://stackoverflow.com/a/48422257/5181904
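To check which Java your shell picks up, and to pin JAVA_HOME to Java 8 on macOS, something like the sketch below should work (the java_home selector tool is standard on macOS; it assumes a 1.8 JDK is actually installed):

java -version                                       # check the active Java version
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)   # point JAVA_HOME at an installed Java 8

# make it permanent for new shells, then retry
echo 'export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)' >> ~/.bash_profile
sbin/start-dfs.sh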
I tried running this sample project, which uses MapR.
I tried executing the class ml.Flight in the sandbox, and from the line below,
val spark: SparkSession = SparkSession.builder().appName("churn").getOrCreate()
I got this error.
[user01@maprdemo ~]$ spark-submit --class ml.Flight --master local[2] spark-ml-flightdelay-1.0.jar
Warning: Unable to determine $DRILL_HOME
18/12/19 05:39:09 WARN Utils: Your hostname, maprdemo.local resolves to a loopback address: 127.0.0.1; using 10.0.3.1 instead (on interface enp0s3)
18/12/19 05:39:09 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
18/12/19 05:39:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/12/19 05:39:28 ERROR SparkContext: Error initializing SparkContext.
java.io.IOException: Could not create FileClient
at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:656)
at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:709)
at com.mapr.fs.MapRFileSystem.getMapRFileStatus(MapRFileSystem.java:1419)
at com.mapr.fs.MapRFileSystem.getFileStatus(MapRFileSystem.java:1093)
at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:100)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:522)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:933)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:924)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:924)
at ml.Flight$.main(Flight.scala:37)
at ml.Flight.main(Flight.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:899)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: Could not create FileClient
at com.mapr.fs.MapRClientImpl.<init>(MapRClientImpl.java:137)
at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:650)
... 22 more
I'm new to Scala/Spark and any help is welcome. Thanks in advance.
I think you are using or exporting a different version of spark-submit than the one that ships with MapR.
For example, use the full path to the MapR Spark launcher:
/opt/mapr/spark/spark-2.3.1/bin/spark-submit
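A full invocation with that launcher would look like the line below (class, master, and jar name are taken from the question; the Spark path is an assumption based on a typical MapR sandbox layout):

/opt/mapr/spark/spark-2.3.1/bin/spark-submit --class ml.Flight --master local[2] spark-ml-flightdelay-1.0.jar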
Java HotSpot(TM) Client VM warning: You have loaded library /home/happyhadoop/hadoop-2.7.3/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
17/04/30 21:30:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
happyhadoop@localhost's password:
localhost: namenode running as process 13997. Stop it first.
happyhadoop@localhost's password:
localhost: datanode running as process 14153. Stop it first.
Starting secondary namenodes [0.0.0.0]
happyhadoop@0.0.0.0's password:
0.0.0.0: secondarynamenode running as process 14432. Stop it first.
Java HotSpot(TM) Client VM warning: You have loaded library /home/happyhadoop/hadoop-2.7.3/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
17/04/30 21:30:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Can someone please help me with this warning?
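For what it's worth, the warning itself suggests the fix: clearing the executable-stack flag on the native library. Applied to the path in this log, it would look like the sketch below (execstack comes from the prelink package and its availability here is an assumption; this silences the stack-guard warning only, not the NativeCodeLoader one):

# clear the executable-stack flag on the offending native library
execstack -c /home/happyhadoop/hadoop-2.7.3/lib/native/libhadoop.so.1.0.0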
I just can't find any answers for this problem:
[hadoop#evghost ~]$ start-dfs.sh
15/10/21 21:59:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
evghost: ssh: connect to host evghost port 22: Connection refused
evghost: ssh: connect to host evghost port 22: Connection refused
Starting secondary namenodes [0.0.0.0]
Error: Please specify one of --hosts or --hostnames options and not both.
15/10/21 21:59:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Does somebody know a solution?
I had to enable the sshd daemon so that the scripts could connect, and add
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/native"
to .bashrc.
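In case others hit the "Starting namenodes on []" symptom: it usually means the default filesystem is not configured, so the scripts cannot resolve the NameNode address. A minimal core-site.xml sketch for a single-node setup (the hostname and port here are assumptions):

<!-- etc/hadoop/core-site.xml: tells HDFS where the NameNode listens -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>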
I have installed Hadoop in Ubuntu and created the directories for the NameNode and the DataNode, but the NameNode and DataNode are not running.
hduser@sanjeebpanda:/usr/local/hadoop/etc/hadoop$ jps
9445 Jps
5311 JobHistoryServer
hduser@sanjeebpanda:/usr/local/hadoop/etc/hadoop$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
14/11/09 21:14:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: starting namenode, logging to /usr/local/hadoop-2.4.0/logs/hadoop-hduser-namenode-sanjeebpanda.out
localhost: starting datanode, logging to /usr/local/hadoop-2.4.0/logs/hadoop-hduser-datanode-sanjeebpanda.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.4.0/logs/hadoop-hduser-secondarynamenode-sanjeebpanda.out
14/11/09 21:14:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-sanjeebpanda.out
localhost: starting nodemanager, logging to /usr/local/hadoop-2.4.0/logs/yarn-hduser-nodemanager-sanjeebpanda.out
hduser@sanjeebpanda:/usr/local/hadoop/etc/hadoop$ jps
10134 NodeManager
10007 ResourceManager
10436 Jps
5311 JobHistoryServer
But I can see that both directories have been created.
hduser@sanjeebpanda:/usr/local/hadoop/yarn_data/hdfs$ ls -ltr
total 8
drwxr-xr-x 3 hduser hadoop 4096 Nov 9 21:13 namenode
drwx------ 2 hduser hadoop 4096 Nov 9 21:14 datanode
hduser@sanjeebpanda:/usr/local/hadoop/yarn_data/hdfs$
Regarding listing files: you are using ls, which lists files in the local directory.
You have to use hadoop fs -ls to list files in HDFS.
Follow this link; it should solve your problem:
http://codesfusion.blogspot.in/2013/10/setup-hadoop-2x-220-on-ubuntu.html
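For example (the /user path below is just an illustration of a typical HDFS home directory):

hadoop fs -ls /               # list the root of HDFS
hadoop fs -ls /user/hduser    # list files in hduser's HDFS home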
I recently installed Hadoop on my system (about a couple of days ago), and everything was running fine.
However, today all Hadoop commands are taking longer than they used to (and longer than they should). I restarted my system, but it didn't help.
INDhruvk:~ Dhruv$ /usr/local/hadoop/sbin/start-dfs.sh
2014-01-01 20:20:00.384 java[331:1903] Unable to load realm info from SCDynamicStore
14/01/01 20:20:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-Dhruv-namenode-INDhruvk.local.out
localhost: 2014-01-01 20:20:44.966 java[396:1d03] Unable to load realm info from SCDynamicStore
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-Dhruv-datanode-INDhruvk.local.out
localhost: 2014-01-01 20:20:48.846 java[467:1d03] Unable to load realm info from SCDynamicStore
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-Dhruv-secondarynamenode-INDhruvk.local.out
0.0.0.0: 2014-01-01 20:21:42.445 java[561:1d03] Unable to load realm info from SCDynamicStore
2014-01-01 20:22:30.064 java[611:1903] Unable to load realm info from SCDynamicStore
14/01/01 20:22:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
As you can see, it took about three minutes for the SecondaryNameNode, NameNode, and DataNode to start.
This is not really a big issue, but it seems that something is wrong. Any tips/ideas?
Thank you, and Happy New Year :)
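One commonly cited workaround for the repeated "Unable to load realm info from SCDynamicStore" message on OS X is to pass empty Kerberos realm settings to the JVM; a sketch, assuming the hadoop-env.sh under the install path from the question (this targets the SCDynamicStore message itself, not necessarily the slow startup):

# in /usr/local/hadoop/etc/hadoop/hadoop-env.sh
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= -Djava.security.krb5.kdc="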