I am setting up a multi-node cluster and my NodeManager and ResourceManager processes are not starting, and I can't figure out why. When I run the jps command, I only see the NameNode, SecondaryNameNode, and Jps processes. As a result, my MapReduce job won't work. This is my configuration:
yarn-site.xml - across NameNode and DataNodes
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>ec2PathToMyNameNode.compute-1.amazonaws.com</value>
</property>
</configuration>
And this is my hosts file on the NameNode:
nameNodeIP nameNodePublicDNS.compute-1.amazonaws.com
dataNode1IP dataNode1PublicDNS.compute-1.amazonaws.com
dataNode2IP dataNode2PublicDNS.compute-1.amazonaws.com
dataNode3IP dataNode3PublicDNS.compute-1.amazonaws.com
127.0.0.1 localhost
When I run my MapReduce job, it says it is unable to connect to the ResourceManager at port 8032. I am using Hadoop 3.1.2.
Edit:
I checked the logs and found the following exception:
Caused by: java.lang.ClassNotFoundException: javax.activation.DataSource
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:190)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:499)
... 83 more
Error injecting constructor, java.lang.NoClassDefFoundError: javax/activation/DataSource
at org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver.<init>(JAXBContextResolver.java:41)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebApp.setup(RMWebApp.java:54)
while locating org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver
1 error
at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1025)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1051)
at com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory$GuiceInstantiatedComponentProvider.getInstance(GuiceComponentProviderFactory.java:345)
To figure out the issue, try the following checks:
(1) start-dfs.sh vs. start-all.sh
Check that you are using the start-all.sh command when you start Hadoop, because start-dfs.sh will only start the NameNode and DataNodes (a sketch of the start commands follows).
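For example, a rough sketch on the master node, assuming a standard $HADOOP_HOME/sbin layout:
# start everything at once
$HADOOP_HOME/sbin/start-all.sh
# or start HDFS and YARN separately
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh
If jps then shows ResourceManager and NodeManager, the start scripts were the problem; otherwise move on to the logs.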
(2) Check the Hadoop logs
Check the value of the HADOOP_LOG_DIR environment variable to find the log directory, because the logs will include any exceptions thrown while trying to start the NodeManager and the ResourceManager.
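For example, something along these lines (a sketch; the exact log file names depend on your user and host names):
echo $HADOOP_LOG_DIR                                  # often defaults to $HADOOP_HOME/logs
ls $HADOOP_HOME/logs | grep -i resourcemanager
tail -n 100 $HADOOP_HOME/logs/*resourcemanager*.log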
(3) Check the installed Java version
The error may be caused by an incompatible Java version; check that you are running a Java version supported by your Hadoop release (a quick check is sketched after the links below).
Fix Java 9 incompatibilities in Hadoop
Hadoop Error starting ResourceManager and NodeManager
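A quick way to compare what is installed against what Hadoop is configured to use (a sketch; to the best of my knowledge Hadoop 3.1.x is only supported on Java 8, but confirm that against the Hadoop wiki):
java -version
echo $JAVA_HOME
grep JAVA_HOME $HADOOP_HOME/etc/hadoop/hadoop-env.sh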
(4) Check Hadoop Common issues
Based on the error you added in your update, you may find these issue links relevant:
[JDK9] Fail to run yarn application after building hadoop pkg with jdk9 in jdk9 env
[JDK9] Resource Manager failed to start after using hadoop pkg(built with jdk9)
More information
For more information, you can check my article on Medium; it may give you some insights:
Installing Hadoop 3.1.0 multi-node cluster on Ubuntu 16.04 Step by Step
My problem was that I used Java 11 with Hadoop.
So what I did was:
1. rm /Library/Java/*
2. download Java 8 from https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
3. install the Java 8 JDK
4. fix JAVA_HOME in hadoop-env.sh
5. stop-all.sh
6. start-dfs.sh
7. start-yarn.sh
[pdash@localhost hadoop]$ export YARN_RESOURCEMANAGER_OPTS="--add-modules=ALL-SYSTEM"
[pdash@localhost hadoop]$ export YARN_NODEMANAGER_OPTS="--add-modules=ALL-SYSTEM"
This will work for sure; I tried it based on the Apache JIRA log. Thanks, PRAFUL.
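Note that exports like these only last for the current shell session. One way to make them persistent (an assumption about your layout; adjust the path as needed) is to add the same lines to $HADOOP_HOME/etc/hadoop/yarn-env.sh:
# in $HADOOP_HOME/etc/hadoop/yarn-env.sh
export YARN_RESOURCEMANAGER_OPTS="--add-modules=ALL-SYSTEM"
export YARN_NODEMANAGER_OPTS="--add-modules=ALL-SYSTEM"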
Related
I'm trying to set up Hadoop3-alpha3 as a single-node cluster (pseudo-distributed), using the Apache guide to do so. I've tried running the example MapReduce job, but every time the connection is refused. After running sbin/start-all.sh I've been seeing these exceptions in the ResourceManager log (and similarly in the NodeManager log):
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.commons.beanutils.FluentPropertyBeanIntrospector: Error when creating PropertyDescriptor for public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)! Ignoring this property.
xxxx-xx-xx xx:xx:xx,xxx DEBUG org.apache.commons.beanutils.FluentPropertyBeanIntrospector: Exception is:
java.beans.IntrospectionException: bad write method arg count: public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)
at java.desktop/java.beans.PropertyDescriptor.findPropertyType(PropertyDescriptor.java:696)
at java.desktop/java.beans.PropertyDescriptor.setWriteMethod(PropertyDescriptor.java:356)
at java.desktop/java.beans.PropertyDescriptor.<init>(PropertyDescriptor.java:142)
at org.apache.commons.beanutils.FluentPropertyBeanIntrospector.createFluentPropertyDescritor(FluentPropertyBeanIntrospector.java:178)
at org.apache.commons.beanutils.FluentPropertyBeanIntrospector.introspect(FluentPropertyBeanIntrospector.java:141)
at org.apache.commons.beanutils.PropertyUtilsBean.fetchIntrospectionData(PropertyUtilsBean.java:2245)
at org.apache.commons.beanutils.PropertyUtilsBean.getIntrospectionData(PropertyUtilsBean.java:2226)
at org.apache.commons.beanutils.PropertyUtilsBean.getPropertyDescriptor(PropertyUtilsBean.java:954)
at org.apache.commons.beanutils.PropertyUtilsBean.isWriteable(PropertyUtilsBean.java:1478)
at org.apache.commons.configuration2.beanutils.BeanHelper.isPropertyWriteable(BeanHelper.java:521)
at org.apache.commons.configuration2.beanutils.BeanHelper.initProperty(BeanHelper.java:357)
at org.apache.commons.configuration2.beanutils.BeanHelper.initBeanProperties(BeanHelper.java:273)
at org.apache.commons.configuration2.beanutils.BeanHelper.initBean(BeanHelper.java:192)
at org.apache.commons.configuration2.beanutils.BeanHelper$BeanCreationContextImpl.initBean(BeanHelper.java:669)
at org.apache.commons.configuration2.beanutils.DefaultBeanFactory.initBeanInstance(DefaultBeanFactory.java:162)
at org.apache.commons.configuration2.beanutils.DefaultBeanFactory.createBean(DefaultBeanFactory.java:116)
at org.apache.commons.configuration2.beanutils.BeanHelper.createBean(BeanHelper.java:459)
at org.apache.commons.configuration2.beanutils.BeanHelper.createBean(BeanHelper.java:479)
at org.apache.commons.configuration2.beanutils.BeanHelper.createBean(BeanHelper.java:492)
at org.apache.commons.configuration2.builder.BasicConfigurationBuilder.createResultInstance(BasicConfigurationBuilder.java:447)
at org.apache.commons.configuration2.builder.BasicConfigurationBuilder.createResult(BasicConfigurationBuilder.java:417)
at org.apache.commons.configuration2.builder.BasicConfigurationBuilder.getConfiguration(BasicConfigurationBuilder.java:285)
at org.apache.hadoop.metrics2.impl.MetricsConfig.loadFirst(MetricsConfig.java:119)
at org.apache.hadoop.metrics2.impl.MetricsConfig.create(MetricsConfig.java:98)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:478)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:188)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:163)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:62)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:58)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:678)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1129)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:315)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1407)
And then later in the file:
xxxx-xx-xx xx:xx:xx,xxx FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting ResourceManager
java.lang.ExceptionInInitializerError
at com.google.inject.internal.cglib.reflect.$FastClassEmitter.<init>(FastClassEmitter.java:67)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.generateClass(FastClass.java:72)
at com.google.inject.internal.cglib.core.$DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at com.google.inject.internal.cglib.core.$AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.create(FastClass.java:64)
at com.google.inject.internal.BytecodeGen.newFastClass(BytecodeGen.java:204)
at com.google.inject.internal.ProviderMethod$FastClassProviderMethod.<init>(ProviderMethod.java:256)
at com.google.inject.internal.ProviderMethod.create(ProviderMethod.java:71)
at com.google.inject.internal.ProviderMethodsModule.createProviderMethod(ProviderMethodsModule.java:275)
at com.google.inject.internal.ProviderMethodsModule.getProviderMethods(ProviderMethodsModule.java:144)
at com.google.inject.internal.ProviderMethodsModule.configure(ProviderMethodsModule.java:123)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:349)
at com.google.inject.AbstractModule.install(AbstractModule.java:122)
at com.google.inject.servlet.ServletModule.configure(ServletModule.java:52)
at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
at com.google.inject.spi.Elements.getElements(Elements.java:110)
at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
at com.google.inject.Guice.createInjector(Guice.java:96)
at com.google.inject.Guice.createInjector(Guice.java:73)
at com.google.inject.Guice.createInjector(Guice.java:62)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:332)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:377)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1116)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1218)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1408)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) throws java.lang.ClassFormatError accessible: module java.base does not "opens java.lang" to unnamed module #173f73e7
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:337)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:281)
at java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:197)
at java.base/java.lang.reflect.Method.setAccessible(Method.java:191)
at com.google.inject.internal.cglib.core.$ReflectUtils$2.run(ReflectUtils.java:56)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at com.google.inject.internal.cglib.core.$ReflectUtils.<clinit>(ReflectUtils.java:46)
... 29 more
For reference, my core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
and yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
</configuration>
I have no idea what is causing these exceptions; any help with them would be appreciated.
Edit: Added hadoop-env.sh:
export JAVA_HOME=/usr/local/jdk-9
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
case ${HADOOP_OS_TYPE} in
Darwin*)
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= "
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.kdc= "
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf= "
;;
esac
export HADOOP_ROOT_LOGGER=DEBUG,console
export HADOOP_DAEMON_ROOT_LOGGER=DEBUG,RFA
As mentioned by @tk421 in the comments, Java 9 is not compatible with Hadoop 3 (and possibly any Hadoop version) yet.
https://issues.apache.org/jira/browse/HADOOP-11123
I've changed to Java 8u181 and both are starting up now:
hadoop@hadoop:/usr/local/hadoop$ sbin/start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as hadoop in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop]
Starting resourcemanager
Starting nodemanagers
hadoop@hadoop:/usr/local/hadoop$ jps
8756 SecondaryNameNode
8389 NameNode
9173 NodeManager
9030 ResourceManager
8535 DataNode
9515 Jps
I want to share my observations. I used OpenJDK 17 with Hadoop.
I thought the newer the Java version, the better. I was very wrong; I had to switch to OpenJDK 8.
So, how do we fix the problem?
You need to uninstall the previous version of Java. There are several removal options.
Deleting only OpenJDK: $ sudo apt-get remove openjdk*
Deleting OpenJDK along with dependencies: $ sudo apt-get remove --auto-remove openjdk*
Deleting OpenJDK and its configuration files: $ sudo apt-get purge openjdk*
Deleting OpenJDK along with dependencies and its configuration files: $ sudo apt-get purge --auto-remove openjdk*
As for me, I used the latter option.
You need to install OpenJDK 8.
Installing OpenJDK: $ sudo apt install openjdk-8-jdk -y
After the installation is complete, you can check the Java version: $ java -version; javac -version
You need to edit the path in the JAVA_HOME variable. To do this, open hadoop-env.sh.
To open a file hadoop-env.sh, you can use the command:
sudo nano $HADOOP_HOME/etc/hadoop/hadoop-env.sh
where $HADOOP_HOME is the location of your Hadoop (for example, /home/hdoop/hadoop-3.2.4).
The JAVA_HOME variable should look like this: export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64. Of course, it all depends on your Java location.
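If you are not sure where your JDK lives, one way to find out on Debian/Ubuntu (a sketch, assuming the update-alternatives mechanism is in use) is:
update-alternatives --list java        # lists installed java binaries, e.g. /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
readlink -f $(which java)              # resolves the symlink to the real binary
Strip the trailing /jre/bin/java or /bin/java from the output to get the value for JAVA_HOME.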
Go to the hadoop-3.2.4/sbin directory.
Next, you need to stop the daemons on all nodes of the cluster: ./stop-all.sh
Start the NameNode and DataNodes: ./start-dfs.sh
Start the YARN ResourceManager and NodeManagers: ./start-yarn.sh
Check that all daemons are active and running as Java processes: jps
The resulting list should look (approximately) as follows:
33706 SecondaryNameNode
33330 NameNode
34049 NodeManager
33900 ResourceManager
33482 DataNode
34410 Jps
Hadoop setup is DONE!
P.S. I hope my answer is useful; I tried to cover all the details. I wish you all success.
Using Windows 7 64-bit, JDK 8, Spark 1.6.2.
I have Spark running, winutils, HADOOP_HOME, etc. set up.
Per the documentation: "Note: The launch scripts do not currently support Windows. To run a Spark cluster on Windows, start the master and workers by hand." But it does not say how.
How do I launch the Spark master on Windows?
I tried running sh start-master.sh through Git Bash: it failed to launch org.apache.spark.deploy.master.Master, even though it prints out Master --ip Sam-Toshiba --port 7077 --webui-port 8080, so I don't know what all this means.
But when I try spark-submit --class " " --master spark://Sam-Toshiba:7077 target/ .jar
I get these errors:
WARN AbstractLifeCycle: FAILED SelectChannelConnector#0.0.0.0:
4040: java.net.BindException: Address already in use: bind
java.net.BindException: Address already in use
WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/01/12 14:44:29 WARN AppClient$ClientEndpoint: Failed to connect to master Sam-Toshiba:7077
java.io.IOException: Failed to connect to Sam-Toshiba/192.168.137.1:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:216)
I also tried spark://localhost:7077, with the same errors.
On Windows you can launch the Master using the command below. Open a command prompt, go to the Spark bin folder, and execute:
spark-class.cmd org.apache.spark.deploy.master.Master
The above command will print something like Master: Starting Spark master at spark://192.168.99.1:7077 in the console, with your machine's IP. You can check the UI at http://192.168.99.1:8080/
Once your master is up, you can launch a worker with the command below. This will use all the available cores of your machine.
spark-class.cmd org.apache.spark.deploy.worker.Worker spark://192.168.99.1:7077
If you want to use only 2 of your machine's 4 cores, then use:
spark-class.cmd org.apache.spark.deploy.worker.Worker -c 2 spark://192.168.99.1:7077
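Once the master and a worker are running, you can point spark-submit at that master URL. A minimal sketch from the same bin folder (the class name and jar path here are placeholders for your own application, not values from the question):
spark-submit.cmd --class com.example.MyApp --master spark://192.168.99.1:7077 C:\path\to\my-app.jar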
What is the source of this error and how could it be fixed?
2015-11-29 19:40:04,670 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to anmol-vm1-new/10.0.1.190:8020. Exiting.
java.io.IOException: All specified directories are not accessible or do not exist.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:217)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:254)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:974)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:945)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:278)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
at java.lang.Thread.run(Thread.java:745)
2015-11-29 19:40:04,670 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to anmol-vm1-new/10.0.1.190:8020
2015-11-29 19:40:04,771 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
There are two possible solutions.
First:
Your NameNode and DataNode cluster IDs do not match; make sure they are the same.
In the NameNode, change your cluster ID in the file located at:
$ nano HADOOP_FILE_SYSTEM/namenode/current/VERSION
In the DataNode, the cluster ID is stored in the file:
$ nano HADOOP_FILE_SYSTEM/datanode/current/VERSION
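A quick way to compare the two IDs before editing anything (a sketch; HADOOP_FILE_SYSTEM stands for your dfs.namenode.name.dir / dfs.datanode.data.dir locations):
grep clusterID HADOOP_FILE_SYSTEM/namenode/current/VERSION
grep clusterID HADOOP_FILE_SYSTEM/datanode/current/VERSION
If the two values differ, that confirms the mismatch.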
Second:
Format the NameNode (note that this wipes the existing HDFS metadata):
Hadoop 1.x: $ hadoop namenode -format
Hadoop 2.x: $ hdfs namenode -format
I met the same problem and solved it with the following steps:
Step 1. Remove the HDFS directory (for me it was the default directory "/tmp/hadoop-root/"):
rm -rf /tmp/hadoop-root/*
Step 2. Run
bin/hdfs namenode -format
to format the directory.
The root cause of this is that the DataNode and NameNode clusterIDs differ. Unify them with the NameNode's clusterID and restart Hadoop, and it should be resolved.
The issue arises because of a mismatch between the cluster IDs of the DataNode and the NameNode.
Follow these steps (a scripted version of the copy is sketched below):
Go to Hadoop_home/data/namenode/CURRENT and copy the cluster ID from "VERSION".
Go to Hadoop_home/data/datanode/CURRENT and paste this cluster ID into "VERSION", replacing the one present there.
Then format the NameNode.
Start the DataNode and NameNode again.
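If you prefer to do the copy from the command line, something like this should work (a sketch using the paths above; note that on disk the directory is usually named current in lowercase, and back up the VERSION file first):
NN_CID=$(grep clusterID Hadoop_home/data/namenode/current/VERSION | cut -d= -f2)
cp Hadoop_home/data/datanode/current/VERSION Hadoop_home/data/datanode/current/VERSION.bak
sed -i "s/^clusterID=.*/clusterID=${NN_CID}/" Hadoop_home/data/datanode/current/VERSION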
The issue arises because of a mismatch between the cluster IDs of the DataNode and the NameNode.
Follow these steps:
1- Go to Hadoop_home/ and delete the data folder.
2- Create a folder with another name, for example data123.
3- Inside it, create two folders, namenode and datanode.
4- Go to hdfs-site.xml and put your paths there:
<name>dfs.namenode.name.dir</name>
<value>........../data123/namenode</value>
<name>dfs.datanode.data.dir</name>
<value>............../data123/datanode</value>
This problem may occur when there are storage I/O errors. In this scenario, the VERSION file is not available, which results in the error above.
You may need to exclude the storage locations on those bad drives in hdfs-site.xml.
For me, this worked:
Delete (or make a backup of) the HADOOP_FILE_SYSTEM/namenode/current directory.
Restart the DataNode service.
This should create the current directory again, with the correct clusterID in the VERSION file.
Source - https://community.pivotal.io/s/article/Cluster-Id-is-incompatible-error-reported-when-starting-datanode-service?language=en_US
When I try to run the hadoop command
vinit@ubuntu:~/hadoop-1.0.4$ bin/hadoop dfs -ls
I get the following output:
13/04/17 06:26:37 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9010. Already tried 0 time(s).
13/04/17 06:26:38 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9010. Already tried 1 time(s).
Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:9010 failed on connection exception: java.net.ConnectException: Connection refused
I am new to Hadoop and even to Java. Please help!
Check whether your HDFS processes are running. Run the 'jps' command to check the running Java processes.
You should have at least the 'NameNode' and 'DataNode' processes running. Please check and let me know.
Cheers
Rags
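For reference, on a working Hadoop 1.0.4 single-node setup the jps output should look roughly like this (the PIDs are illustrative):
12345 NameNode
12446 DataNode
12547 SecondaryNameNode
12648 JobTracker
12749 TaskTracker
12850 Jps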
I struggled for two days and the night in between to find the answer to this problem.
In my case (and I'm sure this is the problem in most cases) I had to create the Hadoop temporary folders by hand and add them to hdfs-site.xml:
<property>
<name>dfs.data.dir</name>
<value>/home/stefan/Downloads/hadoop-2.7.1/tmp/dfs/name/data</value>
<final>true</final>
</property>
<property>
<name>dfs.name.dir</name>
<value>/home/stefan/Downloads/hadoop-2.7.1/tmp/dfs/name</value>
<final>true</final>
</property>
I hope this helps you avoid going through the same hell as I did.
Besides that, make sure the Hadoop user owns the folders and has the right permissions:
chown user_name hadoop_folder hadoop_temp_folder
chmod 755 hadoop_folder hadoop_temp_folder
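Creating those folders by hand could look like this (a sketch using the paths from the snippet above; replace stefan with your own user):
mkdir -p /home/stefan/Downloads/hadoop-2.7.1/tmp/dfs/name/data
chown -R stefan /home/stefan/Downloads/hadoop-2.7.1/tmp
chmod -R 755 /home/stefan/Downloads/hadoop-2.7.1/tmp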
I have installed Hadoop 1.0.4 on my cluster of 1 master and 3 slaves,
and now I am installing HttpFS (hadoop-hdfs-httpfs-0.20.2-cdh3u5-SNAPSHOT) to access the HDFS contents over HTTP.
I am able to access the normal page through it:
curl -i "http://myhost:14000"
It works fine :)
But if I try to access HDFS, it gives me an error (ubantu is my user) :(
curl -i "http://myhost:14000/webhdfs/v1?user.name=ubantu&op=OPEN"
error:
{"RemoteException":{"message":"User: ubantu is not allowed to impersonate ubantu",
"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException"}}
Thanks in advance.
Did you configure core-site.xml as described here: http://hadoop.apache.org/docs/current/hadoop-hdfs-httpfs/ServerSetup.html
where #HTTPFSUSER# is the user that starts the httpfs daemon (presumably "ubantu")?
After doing this, restart the HDFS daemons.
Resolved this issue. It happened because I hadn't added my user ubantu to the Hadoop user groups.
I added user ubantu to the hadoop group and
updated the properties in core-site.xml as follows:
<property>
<name>hadoop.proxyuser.myhttpfsuser.hosts</name>
<value>httpfs-host.foo.com</value>
</property>
<property>
<name>hadoop.proxyuser.myhttpfsuser.groups</name>
<value>hadoop</value>
</property>
Now it's working fine.
A quick Google search shows someone else with the same error who had not bounced the servers to pick up the configuration changes (a restart sketch follows the link below):
https://groups.google.com/a/cloudera.org/forum/?fromgroups=#!topic/cdh-user/dSJP-a_Lcqo
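For completeness, bouncing the daemons on Hadoop 1.0.4 after a core-site.xml change could look roughly like this (a sketch; the HttpFS restart assumes the httpfs.sh script shipped with the hadoop-hdfs-httpfs package):
bin/stop-dfs.sh
bin/start-dfs.sh
# restart HttpFS so it re-reads its configuration
sbin/httpfs.sh stop
sbin/httpfs.sh start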