Hadoop commands taking too long to run - java

I recently installed Hadoop on my system (a couple of days ago). Everything was running fine.
However, today all Hadoop commands are taking longer than they used to (and longer than they should). I restarted my system, but it didn't help.
INDhruvk:~ Dhruv$ /usr/local/hadoop/sbin/start-dfs.sh
2014-01-01 20:20:00.384 java[331:1903] Unable to load realm info from SCDynamicStore
14/01/01 20:20:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-Dhruv-namenode-INDhruvk.local.out
localhost: 2014-01-01 20:20:44.966 java[396:1d03] Unable to load realm info from SCDynamicStore
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-Dhruv-datanode-INDhruvk.local.out
localhost: 2014-01-01 20:20:48.846 java[467:1d03] Unable to load realm info from SCDynamicStore
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-Dhruv-secondarynamenode-INDhruvk.local.out
0.0.0.0: 2014-01-01 20:21:42.445 java[561:1d03] Unable to load realm info from SCDynamicStore
2014-01-01 20:22:30.064 java[611:1903] Unable to load realm info from SCDynamicStore
14/01/01 20:22:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
As you can see, it took about 3 minutes for the SecondaryNameNode, NameNode and DataNode to start.
Although this is not really a big issue, it seems that something is wrong. Any tips/ideas?
Thank you. Btw, Happy New Year :)
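The repeated "Unable to load realm info from SCDynamicStore" lines are the well-known OS X Kerberos lookup issue; the JVM can stall at startup while it tries to resolve a realm. A minimal sketch of the commonly suggested workaround, assuming a Hadoop 2.x layout with hadoop-env.sh under /usr/local/hadoop/etc/hadoop:

# Pass empty Kerberos realm/KDC settings so the JVM skips the
# SCDynamicStore lookup (path to hadoop-env.sh is assumed).
echo 'export HADOOP_OPTS="$HADOOP_OPTS -Djava.security.krb5.realm= -Djava.security.krb5.kdc="' \
  >> /usr/local/hadoop/etc/hadoop/hadoop-env.sh

# Restart the daemons and time the startup again.
/usr/local/hadoop/sbin/stop-dfs.sh
time /usr/local/hadoop/sbin/start-dfs.sh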

Related

Hadoop's command start-dfs.sh is showing a strange error

When I try to run the command below, an error pops up.
Alis-Mac:hadoop-2.7.3 naziaimran$ sbin/start-dfs.sh
Below is the error:
2018-06-05 01:04:31.424 java[1879:21215] Unable to load realm info from SCDynamicStore
18/06/05 01:04:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /Users/naziaimran/Desktop/hadoop-2.7.3/logs/hadoop-naziaimran-namenode-Alis-Mac.out
localhost: Exception in thread "main" java.lang.ExceptionInInitializerError
localhost: at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
localhost: at org.apache.hadoop.hdfs.server.common.HdfsServerConstants$RollingUpgradeStartupOption.getAllOptionString(HdfsServerConstants.java:80)
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.<clinit>(NameNode.java:249)
localhost: Caused by: java.lang.StringIndexOutOfBoundsException: begin 0, end 3, length 2
localhost: at java.base/java.lang.String.checkBoundsBeginEnd(String.java:3107)
localhost: at java.base/java.lang.String.substring(String.java:1873)
localhost: at org.apache.hadoop.util.Shell.<clinit>(Shell.java:51)
localhost: ... 3 more
localhost: starting datanode, logging to /Users/naziaimran/Desktop/hadoop-2.7.3/logs/hadoop-naziaimran-datanode-Alis-Mac.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /Users/naziaimran/Desktop/hadoop-2.7.3/logs/hadoop-naziaimran-secondarynamenode-Alis-Mac.out
0.0.0.0: Exception in thread "main" java.lang.ExceptionInInitializerError
0.0.0.0: at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
0.0.0.0: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:667)
0.0.0.0: Caused by: java.lang.StringIndexOutOfBoundsException: begin 0, end 3, length 2
0.0.0.0: at java.base/java.lang.String.checkBoundsBeginEnd(String.java:3107)
0.0.0.0: at java.base/java.lang.String.substring(String.java:1873)
0.0.0.0: at org.apache.hadoop.util.Shell.<clinit>(Shell.java:51)
0.0.0.0: ... 2 more
2018-06-05 01:04:48.170 java[2203:22211] Unable to load realm info from SCDynamicStore
18/06/05 01:04:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
I have been stuck here for days now; any help will be highly appreciated.
Thanks in advance :)
The problem is that Hadoop 2.7 is incompatible with Java 9/10.
I had the same issue and solved it by downgrading to Java 8.
Check the answer by VK321 here if you are unsure about how to downgrade and get it to work:
https://stackoverflow.com/a/48422257/5181904
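The stack trace itself shows why: org.apache.hadoop.util.Shell in Hadoop 2.7 substrings the java.version system property assuming the old 1.8.0_x scheme, so a two-character version string like "10" triggers the StringIndexOutOfBoundsException (begin 0, end 3, length 2). A minimal sketch of the downgrade on macOS, assuming Java 8 is already installed (paths are illustrative):

# List the installed JDKs (macOS-specific helper).
/usr/libexec/java_home -V

# Point JAVA_HOME at the 1.8 JDK for this shell...
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)

# ...and pin it for the Hadoop daemons too (hadoop-env.sh path taken from the log).
echo "export JAVA_HOME=$JAVA_HOME" >> ~/Desktop/hadoop-2.7.3/etc/hadoop/hadoop-env.sh

# Should now report 1.8.0_x before re-running sbin/start-dfs.sh.
java -version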

Hadoop installation: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Java HotSpot(TM) Client VM warning: You have loaded library /home/happyhadoop/hadoop-2.7.3/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
17/04/30 21:30:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
happyhadoop@localhost's password:
localhost: namenode running as process 13997. Stop it first.
happyhadoop@localhost's password:
localhost: datanode running as process 14153. Stop it first.
Starting secondary namenodes [0.0.0.0]
happyhadoop@0.0.0.0's password:
0.0.0.0: secondarynamenode running as process 14432. Stop it first.
Java HotSpot(TM) Client VM warning: You have loaded library /home/happyhadoop/hadoop-2.7.3/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
17/04/30 21:30:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Can someone please help me with this warning?
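Two separate things are visible in the log: the daemons are already running (hence "Stop it first"), and the stack-guard warning means the native library was loaded in a way the JVM dislikes. The NativeCodeLoader warning itself is harmless, since Hadoop falls back to builtin-java classes. A sketch of the usual cleanup, using the paths from the log:

# The daemons are already up, so stop them before starting again.
/home/happyhadoop/hadoop-2.7.3/sbin/stop-dfs.sh
/home/happyhadoop/hadoop-2.7.3/sbin/start-dfs.sh

# Optional: clear the stack-guard warning by fixing the flagged library,
# as the JVM message itself suggests (requires the execstack tool).
execstack -c /home/happyhadoop/hadoop-2.7.3/lib/native/libhadoop.so.1.0.0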

Spark-submit fails without an error

I used the following command to run the Spark Java word-count example:
time spark-submit --deploy-mode cluster --master spark://192.168.0.7:7077 --class org.apache.spark.examples.JavaWordCount /home/pi/Desktop/example/new/target/javaword.jar /books_500.txt
I have copied the same jar file to the same location on all nodes. (Copying it into HDFS didn't work for me.) When I run it, the following is the output:
Running Spark using the REST application submission protocol.
16/07/14 16:32:18 INFO rest.RestSubmissionClient: Submitting a request to launch an application in spark://192.168.0.7:7077.
16/07/14 16:32:30 WARN rest.RestSubmissionClient: Unable to connect to server spark://192.168.0.7:7077.
Warning: Master endpoint spark://192.168.0.7:7077 was not a REST server. Falling back to legacy submission gateway instead.
16/07/14 16:32:30 WARN util.Utils: Your hostname, master02 resolves to a loopback address: 127.0.1.1; using 192.168.0.7 instead (on interface wlan0)
16/07/14 16:32:30 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
16/07/14 16:32:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
It just stops there, quits the job, and waits for the next command at the terminal. I can't make sense of a failure with no error message. Help needed, please!
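The warnings give two leads: the master is not answering the REST submission protocol that --deploy-mode cluster tries first (the standalone REST server listens on port 6066 by default, not 7077), and the driver host resolves to a loopback address. A sketch of both checks, with addresses taken from the log:

# Fix the loopback-hostname warning, as the log itself suggests.
export SPARK_LOCAL_IP=192.168.0.7

# Try submitting against the standalone REST port (6066 by default)
# instead of the legacy 7077 endpoint.
time spark-submit --deploy-mode cluster \
  --master spark://192.168.0.7:6066 \
  --class org.apache.spark.examples.JavaWordCount \
  /home/pi/Desktop/example/new/target/javaword.jar /books_500.txt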

Hadoop on archlinux | dfs cannot start | ssh port 22 connection refused

I just can't find any answer to this problem:
[hadoop#evghost ~]$ start-dfs.sh
15/10/21 21:59:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
evghost: ssh: connect to host evghost port 22: Connection refused
evghost: ssh: connect to host evghost port 22: Connection refused
Starting secondary namenodes [0.0.0.0]
Error: Please specify one of --hosts or --hostnames options and not both.
15/10/21 21:59:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Does anybody know a solution?
I had to enable the sshd daemon so the scripts can connect, and put
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/native"
in .bashrc.
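That takes care of SSH and the native-library warning, but the "namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured" line (and the empty host list in "Starting namenodes on []") usually means fs.defaultFS is missing from core-site.xml. A minimal sketch for a single-node setup; the hostname and port are illustrative:

# Hypothetical single-node core-site.xml; adjust the path to your install.
cat > /usr/local/hadoop/etc/hadoop/core-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

# On Arch, start (and enable) sshd so start-dfs.sh can reach the host.
sudo systemctl enable --now sshd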

Name node and Datanode are not starting

I have installed Hadoop on Ubuntu and created the directories for the namenode and datanode. But the namenode does not show up in jps and the datanode is not running.
hduser@sanjeebpanda:/usr/local/hadoop/etc/hadoop$ jps
9445 Jps
5311 JobHistoryServer
hduser@sanjeebpanda:/usr/local/hadoop/etc/hadoop$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
14/11/09 21:14:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: starting namenode, logging to /usr/local/hadoop-2.4.0/logs/hadoop-hduser-namenode-sanjeebpanda.out
localhost: starting datanode, logging to /usr/local/hadoop-2.4.0/logs/hadoop-hduser-datanode-sanjeebpanda.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.4.0/logs/hadoop-hduser-secondarynamenode-sanjeebpanda.out
14/11/09 21:14:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-sanjeebpanda.out
localhost: starting nodemanager, logging to /usr/local/hadoop-2.4.0/logs/yarn-hduser-nodemanager-sanjeebpanda.out
hduser@sanjeebpanda:/usr/local/hadoop/etc/hadoop$ jps
10134 NodeManager
10007 ResourceManager
10436 Jps
5311 JobHistoryServer
But I can see that both directories have been created.
hduser@sanjeebpanda:/usr/local/hadoop/yarn_data/hdfs$ ls -ltr
total 8
drwxr-xr-x 3 hduser hadoop 4096 Nov 9 21:13 namenode
drwx------ 2 hduser hadoop 4096 Nov 9 21:14 datanode
hduser@sanjeebpanda:/usr/local/hadoop/yarn_data/hdfs$
Regarding listing files: you are using ls, which lists files in the local directory. You have to use hadoop fs -ls to list files in HDFS, as in the sketch below.
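For example (the HDFS paths here are illustrative):

# Local directory listing (what plain ls shows):
ls -ltr /usr/local/hadoop/yarn_data/hdfs

# HDFS listing, once the daemons are up:
hadoop fs -ls /
hadoop fs -ls /user/hduser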
Follow this link; it should definitely solve your problem:
http://codesfusion.blogspot.in/2013/10/setup-hadoop-2x-220-on-ubuntu.html
