I'm currently learning Hadoop from http://tecadmin.net/steps-to-install-hadoop-on-centosrhel-6/
In the 5th step, when I run the command $ bin/hadoop namenode -format, I get the following error.
I have also checked these links to resolve my problem:
"hadoop namenode -format" returns a java.net.UnknownHostException
java.net.UnknownHostException: Invalid hostname for server: local
I don't know where the domain name is in the configuration files so that I can replace it with localhost.
I also went to the /etc/hosts file and replaced the text with localhost, but I still haven't resolved the problem. Please, someone help me.
The UnknownHostException can be resolved with the following steps:
Go to /etc/hosts
Edit the "hosts" file with IP 127.0.0.1 [space] HostName (e.g. static.98.35.ebonenet.com)
Save the file and try again
With the help of Aadil's answer, I resolved the UnknownHostException with the following steps:
Step-1 Go to /etc/hosts
Step-2 Edit the "hosts" file with IP 127.0.0.1 [space/tab] localhost [space/tab] HostName (e.g. static.98.35.ebonenet.com)
Step-3 Save the file and try again
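For example, you can check the machine's hostname and make sure /etc/hosts carries a matching entry (a sketch; static.98.35.ebonenet.com is just the example hostname used above, substitute the name your own machine reports):
$ hostname
static.98.35.ebonenet.com
# /etc/hosts should then contain a line like:
127.0.0.1   localhost   static.98.35.ebonenet.com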
Problem:
If anyone is facing:
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException:
ubuntu: ubuntu: unknown error
Solution:
Go to /etc/hosts
Disable or remove any IPv6 configuration maintained in the /etc/hosts file
Save the file and try again
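For example, a trimmed /etc/hosts could look like this, with the IPv6 lines commented out (a sketch; "ubuntu" is the hostname from the error message above, and the exact IPv6 entries vary per distribution):
127.0.0.1   localhost   ubuntu
# ::1         ip6-localhost ip6-loopback
# fe00::0     ip6-localnet
# ff02::1     ip6-allnodes
# ff02::2     ip6-allrouters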
What is the source of this error and how could it be fixed?
2015-11-29 19:40:04,670 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to anmol-vm1-new/10.0.1.190:8020. Exiting.
java.io.IOException: All specified directories are not accessible or do not exist.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:217)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:254)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:974)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:945)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:278)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
at java.lang.Thread.run(Thread.java:745)
2015-11-29 19:40:04,670 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to anmol-vm1-new/10.0.1.190:8020
2015-11-29 19:40:04,771 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
There are two possible solutions to resolve this.
First:
Your namenode and datanode cluster IDs do not match; make sure to make them the same.
In the name node, change your cluster ID in the file located at:
$ nano HADOOP_FILE_SYSTEM/namenode/current/VERSION
In the data node, your cluster ID is stored in the file:
$ nano HADOOP_FILE_SYSTEM/datanode/current/VERSION
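For reference, the VERSION file looks roughly like this; the clusterID line is the one that has to be identical on namenode and datanode (all values below are made-up examples):
$ cat HADOOP_FILE_SYSTEM/namenode/current/VERSION
namespaceID=123456789
clusterID=CID-8bf63244-0510-4db6-a949-8f74b50f2be9
cTime=0
storageType=NAME_NODE
blockpoolID=BP-123456789-10.0.1.190-1448822400000
layoutVersion=-60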
Second:
Format the namenode again:
Hadoop 1.x: $ hadoop namenode -format
Hadoop 2.x: $ hdfs namenode -format
I met the same problem and solved it by doing the following steps:
step 1. remove the hdfs directory (for me it was the default directory "/tmp/hadoop-root/")
rm -rf /tmp/hadoop-root/*
step 2. run
bin/hdfs namenode -format
to format the directory
The root cause of this is that the datanode and namenode clusterIDs are different. Please unify them with the namenode's clusterID, then restart Hadoop, and it should be resolved.
The issue arises because of a mismatch of the cluster IDs of the datanode and namenode.
Follow these steps:
Go to Hadoop_home/data/namenode/CURRENT and copy the cluster ID from "VERSION".
Go to Hadoop_home/data/datanode/CURRENT and paste this cluster ID into "VERSION", replacing the one present there (a shell sketch is below).
Then format the namenode.
Start the datanode and namenode again.
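A sketch of how to do that from the shell (the paths are the ones assumed in this answer; the clusterID value is just an example, use the one found in your namenode's file):
# read the cluster ID on the namenode side
grep clusterID Hadoop_home/data/namenode/CURRENT/VERSION
# e.g. clusterID=CID-8bf63244-0510-4db6-a949-8f74b50f2be9
# write the same value into the datanode's VERSION file
sed -i 's/^clusterID=.*/clusterID=CID-8bf63244-0510-4db6-a949-8f74b50f2be9/' Hadoop_home/data/datanode/CURRENT/VERSION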
The issue arises because of a mismatch of the cluster IDs of the datanode and namenode.
Follow these steps:
1- Go to Hadoop_home/ and delete the folder "data"
2- Create a folder with another name, e.g. data123
3- Inside it, create two folders: namenode and datanode
4- Go to hdfs-site.xml and paste your new paths:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>........../data123/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>............../data123/datanode</value>
</property>
This problem may also occur when there are storage I/O errors. In that scenario, the VERSION file is not available, which shows up as the error above.
You may need to exclude the storage locations on those bad drives in hdfs-site.xml.
For me, this worked -
Delete (or make a backup of) the HADOOP_FILE_SYSTEM/namenode/current directory
Restart the datanode service
This should create the current directory again, with the correct clusterID in the VERSION file.
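A sketch of those two steps, assuming the same HADOOP_FILE_SYSTEM layout used above and a hadoop-daemon.sh based setup (adjust the paths and the restart commands to your installation):
$ mv HADOOP_FILE_SYSTEM/namenode/current HADOOP_FILE_SYSTEM/namenode/current.bak   # back up instead of deleting
$ sbin/hadoop-daemon.sh stop datanode
$ sbin/hadoop-daemon.sh start datanode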
Source - https://community.pivotal.io/s/article/Cluster-Id-is-incompatible-error-reported-when-starting-datanode-service?language=en_US
I am trying to set up Hadoop 2.6.0 on my Mac OS.
I am following this article:
http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-common/SingleCluster.html#Standalone_Operation
When I run this command
bin/hdfs namenode -format
I get the following error.
15/01/08 09:35:03 WARN net.DNS: Unable to determine local hostname -falling back to "localhost"
java.net.UnknownHostException: unknown.prolexic.com: unknown.prolexic.com: unknown error
However, when I do
ssh localhost
I am able to log in. Please help.
Set the hostname using:
scutil --set HostName "localhost"
and it will fix the issue.
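You can verify the change by reading the value back:
$ scutil --get HostName
localhost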
Reference: http://biomedicalontologies.com/2012/11/14/fixing-java-net-local-host-name-unknown-error-on-mac-os-x/
I am trying to set up a Hadoop single-node cluster on my local machine.
I installed Hadoop using the following instructions:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
After starting the cluster using
bin/start-all.sh
I get the following output from jps
19623 TaskTracker
19388 SecondaryNameNode
19670 Jps
19479 JobTracker
I can see the NameNode is not running. I pulled the logs from the /logs directory and they look like this.
2014-01-24 11:30:20,614 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /app/hadoop/tmp/dfs/name does not exist.
2014-01-24 11:30:20,617 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2014-01-24 11:30:20,619 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2014-01-24 11:30:20,620 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ishan-HP-Pavilion-dv6700-Notebook-PC/127.0.1.1
It says the directory path /app/hadoop/tmp/dfs/name does not exist. I tried creating this directory path for the hadoop user but I got the same error again.
Can someone please help me fix this?
Please note: I have read similar posts on here but none of them helped.
Thanks!
I would suggest you check the permissions of the directory /app/hadoop/tmp/dfs/name given to hduser (a sketch of the commands is at the end of this answer). Alternatively, make sure none of the components (secondary name node etc.) are up and running, and then format the namenode using this command:
$HADOOP_INSTALL/hadoop/bin/hadoop namenode -format
Try to start your cluster again and see if it works.
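A sketch of the permission check and fix (hduser:hadoop is the user/group convention from the Michael Noll tutorial linked in the question; substitute your own user and group):
$ ls -ld /app/hadoop/tmp/dfs/name
$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown -R hduser:hadoop /app/hadoop/tmp
$ sudo chmod 750 /app/hadoop/tmp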
I think "/app/hadoop/tmp/dfs/name" should automatically get created. This folder keeps current info(cache) of namenode. Similarly there must be another folder "namesecondary" with "name" folder. If these folder are not there check "conf/core-site.xml" again. Id's for each daemons is created during each run and these id's are stored in these folders(and also some other information(i don't know exaclty)).
And if these folders are available:
Just remove all content of the "name" folder.
Format the namenode.
Instead of start-all, run start-dfs.sh and start-mapred.sh separately.
I expect this should work; a sketch of these commands is below.
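Assuming the default /app/hadoop/tmp location from this question and a Hadoop 1.x layout:
rm -rf /app/hadoop/tmp/dfs/name/*
bin/hadoop namenode -format
bin/start-dfs.sh
bin/start-mapred.sh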
You do not have permissions to create this directory. Try giving a path in your home directory. You should mention this path in core-site.xml for the property hadoop.tmp.dir. I have described it in the following link:
http://lets-do-something-big.blogspot.in/2014/01/hadoop-series-single-node-installation.html
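A sketch of what that core-site.xml property could look like (the path /home/hduser/hadoop_tmp is only an example; point it at a directory your user owns):
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hduser/hadoop_tmp</value>
  <description>Base directory for Hadoop temporary files.</description>
</property>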
I'm installing CSVN using jdk1.6.0_23 and I'm getting the following Java error:
2011-02-10 16:25:50,951 [WrapperJarAppMain] WARN util.GrailsUtil - [WARNING] Property [ldapServerPort] of domain class com.collabnet.svnedge.console.Server has type [int] and doesn't support constraint [nullable]. This constraint will not be checked during validation.
2011-02-10 16:25:51,117 [WrapperJarAppMain] ERROR ehcache.Cache - Unable to set localhost. This prevents creation of a GUID. Cause was: vkqgae01: vkqgae01
java.net.UnknownHostException: vkqgae01: vkqgae01
at java.net.InetAddress.getLocalHost(InetAddress.java:1354)
at net.sf.ehcache.Cache.<clinit>(Cache.java:143)
My server has 3 NICs (eth0, eth1 and eth2). I've added an entry to the hosts file below localhost containing the following:
127.0.0.1 vkqgae01
I can successfully ping vkqgae01, but nslookup cannot resolve it.
Any ideas?
That is related to the hostname and /etc/hosts.
If /etc/hosts doesn't contain a definition of the hostname, it fails. Just add your hostname to /etc/hosts. For example, if your hostname is work, add or modify the following line:
127.0.0.1 work localhost
I can successfully ping vkqgae01, but nslookup cannot resolve it.
Any ideas?
What happens?
vkqgae01 is resolved locally thanks to your hosts file.
nslookup sends a query to your DNS, where vkqgae01 is unknown.
Suggestion: add vkqgae01 to the hosts file of every machine where you "use" it.
Basically, the fact that the local hosts file on vkqgae01 contains 127.0.0.1 localhost vkqgae01 doesn't help other machines resolve its name.
I just added the line below in /etc/hosts and it worked.
127.0.0.1 imac
nslookup queries DNS specifically and directly. This means it will not be able to show anything added directly to an /etc/hosts file (as that isn't DNS).
If you want to properly make sure your system will resolve a name, use getent:
getent hosts vkqgae01
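With the 127.0.0.1 vkqgae01 entry from the question in place, the output should look roughly like:
$ getent hosts vkqgae01
127.0.0.1       vkqgae01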
You need to restart the container if /etc/hosts was changed, because the JVM caches local addresses/names on the first InetAddress call. It looks like an InetAddress implementation bug, but it still hasn't been fixed.
I have a server that is "named" and it seems to cause Grails to be unable to find localhost.
Running Grails application..
2011-01-12 20:45:14,046 [main] ERROR ehcache.Cache - Unable to set localhost. This prevents creation of a GUID. Cause was: zaftra: zaftra
java.net.UnknownHostException: zaftra: zaftra
at java.net.InetAddress.getLocalHost(InetAddress.java:1426)
at net.sf.ehcache.Cache.<clinit>(Cache.java:143)
at net.sf.ehcache.config.ConfigurationHelper.createCache(ConfigurationHelper.java:463)
at net.sf.ehcache.config.ConfigurationHelper.createDefaultCache(ConfigurationHelper.java:369)
at net.sf.ehcache.CacheManager.configure(CacheManager.java:445)
at net.sf.ehcache.CacheManager.init(CacheManager.java:302)
at net.sf.ehcache.CacheManager.<init>(CacheManager.java:260)
at net.sf.ehcache.hibernate.EhCacheProvider.start(EhCacheProvider.java:128)
Contents of /etc/hosts (as shown):
127.0.0.1 localhost localhost.localdomain zaftra
::1 localhost localhost.localdomain zaftra
I'm going to assume that you're on some flavor of Linux. If that's the case, you might have a look at your /etc/hosts file - is there an entry for localhost? I'd expect to see something like:
127.0.0.1 localhost zaftra
::1 localhost
I did some Googling - there's a similar question over on SuperUser - the suggestion there was to add the following to /etc/resolv.conf:
search (domainname) // in your case, search (zaftra)
You might also try:
search zaftra
// or
search zaftra.example.com // if there's a more fully-qualified domain name you can use
(That's based off of an entry I've got in resolv.conf on one of my Ubuntu machines).
I'm using AWS (running Amazon Linux) and ran into the exact same issue. My fix was to add this to /etc/hosts:
102.130.27.257 LAMP-LIVE01-N123 www.mydomain.com
where those values came from:
{internal IP} {instance name} {domain for grails app}
and then I restarted httpd and my Grails app server (Tomcat).