Hadoop Yarn job: Wrong FS - java

I installed a Cloudera cluster with a Vagrant box.
I get an error when I launch the following example:
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar grep input output23 'dfs[a-z.]+'
I went to check the logs in /var/log/hadoop-yarn.
There are several log files; in yarn-yarn-nodemanager-cdh-master.log there is the following stack trace:
2015-06-17 11:42:42,398 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1434535025160_0001_000001 (auth:SIMPLE)
2015-06-17 11:42:42,597 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1434535025160_0001_01_000001 by user vagrant
2015-06-17 11:42:42,762 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1434535025160_0001
2015-06-17 11:42:42,776 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1434535025160_0001 transitioned from NEW to INITING
2015-06-17 11:42:42,778 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=vagrant IP=10.10.50.5 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1434535025160_0001 CONTAINERID=container_1434535025160_0001_01_000001
2015-06-17 11:42:43,997 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: Error in dispatcher thread
java.lang.IllegalArgumentException: Wrong FS: hdfs://var/log/hadoop-yarn, expected: hdfs://cdh-master:8020
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:193)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1128)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1124)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1124)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.verifyAndCreateRemoteLogDir(LogAggregationService.java:192)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initApp(LogAggregationService.java:319)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:443)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:67)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
at java.lang.Thread.run(Thread.java:744)
2015-06-17 11:42:44,000 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1434535025160_0001_01_000001 to application application_1434535025160_0001
2015-06-17 11:42:44,001 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Exiting, bbye..
2015-06-17 11:42:44,034 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8042
2015-06-17 11:42:44,035 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Applications still running : [application_14345350
I've seen this error
java.lang.IllegalArgumentException: Wrong FS: hdfs://var/log/hadoop-yarn, expected: hdfs://cdh-master:8020
in the following post: Failed to start Jobtracker and Tasktracker in CDH pseudo cluster, but it did not help me much.
Does anyone have an idea?
Thanks

Change the property yarn.nodemanager.remote-app-log-dir in the yarn-site.xml config file to either:
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>hdfs://cdh-master:8020/var/log/hadoop-yarn/apps</value>
</property>
or
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/var/log/hadoop-yarn/apps</value>
</property>
The second option will use the default filesystem, which should be set to HDFS anyway.
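For reference, a minimal core-site.xml sketch of that default filesystem setting (the value is taken from the "expected: hdfs://cdh-master:8020" part of your error; check your own core-site.xml rather than copying this blindly), so that the relative path above resolves to hdfs://cdh-master:8020/var/log/hadoop-yarn/apps:
<property>
<name>fs.defaultFS</name>
<value>hdfs://cdh-master:8020</value>
</property>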

Related

NiFi not starting on Windows

I am getting this error on starting NiFi.
2021-07-27 11:48:16,955 WARN [main] org.apache.nifi.bootstrap.Command Failed to set permissions so that only the owner can read pid file
and
org.springframework.beans.factory.BeanCreationException: Error creating bean with name
I have tried:
running the cmd window in Administrator mode
changing to HTTP settings instead of HTTPS
changing ports
setting PATH, JAVA_HOME and JRE_HOME
running on a Windows server and on my work desktop
When I run "run-nifi.bat" I get:
2021-07-27 15:22:00,668 INFO [main] org.apache.nifi.bootstrap.Command Starting Apache NiFi...
2021-07-27 15:22:00,668 INFO [main] org.apache.nifi.bootstrap.Command Working Directory: C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0
2021-07-27 15:22:00,669 INFO [main] org.apache.nifi.bootstrap.Command Command: C:\Program Files\Java\jdk1.8.0_301\bin\java.exe -classpath C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\conf;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\javax.servlet-api-3.1.0.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\jcl-over-slf4j-1.7.30.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\jetty-schemas-3.1.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\jul-to-slf4j-1.7.30.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\log4j-over-slf4j-1.7.30.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\logback-classic-1.2.3.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\logback-core-1.2.3.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\nifi-api-1.14.0.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\nifi-framework-api-1.14.0.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\nifi-nar-utils-1.14.0.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\nifi-properties-1.14.0.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\nifi-property-utils-1.14.0.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\nifi-runtime-1.14.0.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\nifi-server-api-1.14.0.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\nifi-stateless-api-1.14.0.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\nifi-stateless-bootstrap-1.14.0.jar;C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\lib\slf4j-api-1.7.30.jar -Dorg.apache.jasper.compiler.disablejsr199=true -Xmx512m -Xms512m -Dcurator-log-only-first-connection-issue-as-error-level=true -Djavax.security.auth.useSubjectCredsOnly=true -Djava.security.egd=file:/dev/urandom -Dzookeeper.admin.enableServer=false -Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Djava.protocol.handler.pkgs=sun.net.www.protocol -Dnifi.properties.file.path=C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\.\conf\nifi.properties -Dnifi.bootstrap.listen.port=49291 -Dapp=NiFi -Dorg.apache.nifi.bootstrap.config.log.dir=C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\bin\..\\logs org.apache.nifi.NiFi
2021-07-27 15:22:00,898 WARN [main] org.apache.nifi.bootstrap.Command Failed to set permissions so that only the owner can read pid file C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\bin\..\run\nifi.pid; this may allows others to have access to the key needed to communicate with NiFi. Permissions should be changed so that only the owner can read this file
2021-07-27 15:22:00,900 WARN [main] org.apache.nifi.bootstrap.Command Failed to set permissions so that only the owner can read status file C:\NiFi\NIFI-1~1.0-B\NIFI-1~1.0\bin\..\run\nifi.status; this may allows others to have access to the key needed to communicate with NiFi. Permissions should be changed so that only the owner can read this file
2021-07-27 15:22:00,902 INFO [main] org.apache.nifi.bootstrap.Command Launched Apache NiFi with Process ID 12948
And then it closes.
Any ideas what to try next?

Failed to load an FSImage file! || How to solve

I am trying to show all the services using the jps command, but when I run it in the console only the nodes below are showing:
3633 SecondaryNameNode
4228 Jps
3493 DataNode
4198 NodeManager
4088 ResourceManager
I am trying to start all services using start-dfs.sh and start-yarn.sh, but afterwards the result is the same. I went into the logs to find the problem and saw the exception below.
2018-06-29 16:02:31,414 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup#0.0.0.0:50070
2018-06-29 16:02:31,414 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false. Rechecking.
2018-06-29 16:02:31,416 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false
2018-06-29 16:02:31,423 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2018-06-29 16:02:31,425 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2018-06-29 16:02:31,425 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2018-06-29 16:02:31,425 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: Failed to load an FSImage file!
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:673)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:281)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1006)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:736)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:531)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:587)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:754)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:738)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1427)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1493)
2018-06-29 16:02:31,428 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2018-06-29 16:02:31,454 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
I have no clue how to solve this, please help. I am using hadoop-2.5.0-cdh5.3.2.
Follow these steps:
Check the path to your FSImage, i.e., where the NameNode is storing the FSImage. In my case it is /hadoop/hdfs/namenode/current
Check the last created FSImage on the NameNode and the Secondary NameNode, and find the latest FSImage available.
Copy the latest FSImage from the Secondary NameNode to the NameNode with the same ownership and permissions it had on the Secondary NameNode. By default, that is hdfs:hadoop in my case.
After copying, try restarting all the services.
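A rough shell sketch of the copy step (the Secondary NameNode checkpoint directory, the fsimage file name, and the host name are assumptions; only /hadoop/hdfs/namenode/current and the hdfs:hadoop ownership come from the steps above):
# on the Secondary NameNode, find the newest checkpoint (directory is an assumption)
ls -lt /hadoop/hdfs/namesecondary/current/ | head
# copy the latest fsimage (hypothetical name) and its .md5 over to the NameNode
scp /hadoop/hdfs/namesecondary/current/fsimage_0000000000000012345* namenode-host:/hadoop/hdfs/namenode/current/
# on the NameNode, restore the ownership it had on the Secondary NameNode
chown hdfs:hadoop /hadoop/hdfs/namenode/current/fsimage_0000000000000012345*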
Format the NameNode: "hdfs namenode -format"
Now ensure the clusterID of the NameNode and the DataNode are the same; if not, replace one with the other.
In my case the VERSION files are:
/Path_installation_dir/hdata/dfs/name/current/VERSION
/Path_installation_dir/hdata/dfs/data/current/VERSION
All done. Start DFS and YARN.
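A minimal sketch for comparing the two clusterIDs (the /Path_installation_dir placeholder is kept from the paths above):
# print the clusterID recorded by the NameNode and by the DataNode
grep clusterID /Path_installation_dir/hdata/dfs/name/current/VERSION
grep clusterID /Path_installation_dir/hdata/dfs/data/current/VERSION
# if they differ, edit the DataNode's VERSION so its clusterID matches the NameNode's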
In my case, I had two NameNodes running, and after a server reboot the data got corrupted. I was getting "Failed to load image from FSImageFile" in the logs.
namenode-0 was still healthy and namenode-1 had the problem.
I proceeded as follows (see the kubectl sketch after the list):
scale the NameNodes down to 1: leave only namenode-0
delete the namenode-1 PVC
make sure the volume is no longer there with kubectl get pvc -n hadoop
scale the NameNodes back to 2
namenode-0 took care of the data corruption and made the data available to namenode-1
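A rough kubectl sketch of those steps (the StatefulSet name namenode and PVC name data-namenode-1 are assumptions; only the hadoop namespace comes from the command above):
# scale down so only namenode-0 remains (StatefulSet name is assumed)
kubectl scale statefulset namenode --replicas=1 -n hadoop
# delete the PVC backing namenode-1 (PVC name is assumed)
kubectl delete pvc data-namenode-1 -n hadoop
# confirm the claim is gone
kubectl get pvc -n hadoop
# scale back to 2; namenode-1 starts with a fresh volume synced from namenode-0
kubectl scale statefulset namenode --replicas=2 -n hadoop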

never ending job in mapreduce

I have set some MapReduce configuration in my main method like so:
configuration.set("mapreduce.jobtracker.address", "localhost:54311");
configuration.set("mapreduce.framework.name", "yarn");
configuration.set("yarn.resourcemanager.address", "localhost:8032");
Now when I launch the MapReduce task, the job is tracked (I can see it in my cluster dashboard, the one listening on port 8088), but it never finishes. It remains blocked at the following line:
15/06/30 15:56:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/30 15:56:17 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
15/06/30 15:56:18 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/06/30 15:56:18 INFO input.FileInputFormat: Total input paths to process : 1
15/06/30 15:56:18 INFO mapreduce.JobSubmitter: number of splits:1
15/06/30 15:56:18 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1435241671439_0008
15/06/30 15:56:19 INFO impl.YarnClientImpl: Submitted application application_1435241671439_0008
15/06/30 15:56:19 INFO mapreduce.Job: The url to track the job: http://10.0.0.10:8088/proxy/application_1435241671439_0008/
15/06/30 15:56:19 INFO mapreduce.Job: Running job: job_1435241671439_0008
Does anyone have an idea?
Edit: in my YARN NodeManager log, I have this message:
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Event EventType: KILL_CONTAINER sent to absent container container_1435241671439_0003_03_000001
2015-06-30 15:44:38,396 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Event EventType: KILL_CONTAINER sent to absent container container_1435241671439_0002_04_000001
Edit 2:
I also have, in the YARN manager log, an exception that happened earlier (for a previous MapReduce call):
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.BindException: Problem binding to [0.0.0.0:8040] java.net.BindException: Address already in use; For more details see:
Solution: I killed all the daemon processes and restarted Hadoop. In fact, when I ran jps I was still seeing Hadoop daemons even though I had stopped them. This was caused by a mismatch of HADOOP_PID_DIR.
Port 8040 is a default YARN NodeManager port. The error says that the port is already in use. Stop all the Hadoop processes and, if you don't have any data, maybe format the NameNode once and try running the job again. From both of your edits, the issue is clearly with the NodeManager.
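A quick shell sketch for checking the port and clearing stale daemons (Linux commands assumed; <pid> is a placeholder for whatever jps reports):
# see which process is bound to port 8040 (the port from the BindException)
netstat -tulpn | grep 8040
# list Hadoop/YARN daemons that are still alive
jps
# kill any leftover daemon, then restart with stop-yarn.sh / start-yarn.sh and stop-dfs.sh / start-dfs.sh
kill <pid>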

Tez crashes on Hadoop-2.5.2 cluster

I successfully built Tez 0.6.0 against Hadoop 2.5.2.
Then I configured Tez 0.6.0 as described in http://tez.apache.org/install.html
I moved the Tez lib package to an HDFS location and updated my tez-site.xml:
<property>
<name>tez.lib.uris</name>
<value>${fs.default.name}/apps/Tez/,${fs.default.name}/apps/Tez/lib/</value>
</property>
After that I tried the sample test for Tez:
hadoop jar tez-examples-0.6.0.jar orderedwordcount <input> <output>
But I face the following error while running this command:
Running OrderedWordCount
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/Hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/Tez/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/04/15 10:47:57 INFO client.TezClient: Tez Client Version: [ component=tez-api, version=0.6.0, revision=${buildNumber}, SCM-URL=scm:git:https://git-wip-us.apache.org/repos/asf/tez.git, buildTime=2015-04-15T01:13:02Z ]
15/04/15 10:48:00 INFO client.TezClient: Submitting DAG application with id: application_1429073725727_0005
15/04/15 10:48:00 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
15/04/15 10:48:00 INFO client.TezClientUtils: Using tez.lib.uris value from configuration: hdfs://HA-Cluster/apps/Tez/,hdfs://HA-Cluster/apps/Tez/lib/
15/04/15 10:48:01 INFO client.TezClient: Stage directory /tmp/app/tez/staging doesn't exist and is created
15/04/15 10:48:01 INFO client.TezClient: Tez system stage directory hdfs://HA-cluster/tmp/app/tez/staging/.tez/application_1429073725727_0005 doesn't exist and is created
15/04/15 10:48:02 INFO client.TezClient: Submitting DAG to YARN, applicationId=application_1429073725727_0005, dagName=OrderedWordCount
15/04/15 10:48:03 INFO impl.YarnClientImpl: Submitted application application_1429073725727_0005
15/04/15 10:48:03 INFO client.TezClient: The url to track the Tez AM: http://syncserver34:8088/proxy/application_1429073725727_0005/
15/04/15 10:48:03 INFO client.DAGClientImpl: Waiting for DAG to start running
15/04/15 10:48:09 INFO client.DAGClientImpl: DAG completed. FinalState=FAILED
OrderedWordCount failed with diagnostics: [Application application_1429073725727_0005 failed 2 times due to AM Container for appattempt_1429073725727_0005_000002 exited with exitCode: -1073741515 due to: Exception from container-launch: ExitCodeException exitCode=-1073741515:
ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
1 file(s) moved.
Container exited with a non-zero exit code -1073741515
.Failing this attempt.. Failing the application.]
Looking at the ResourceManager log:
15/04/15 12:56:15 ERROR scheduler.SchedulerApplicationAttempt: Error trying to assign container token and NM token to an allocated container container_1429082271173_0001_01_000001
java.lang.IllegalArgumentException: java.net.UnknownHostException: MasterNode
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
at org.apache.hadoop.yarn.server.utils.BuilderUtils.newContainerToken(BuilderUtils.java:247)
at org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager.createContainerToken(RMContainerTokenSecretManager.java:199)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.pullNewlyAllocatedContainersAndNMTokens(SchedulerApplicationAttempt.java:425)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.getAllocation(FiCaSchedulerApp.java:248)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocate(CapacityScheduler.java:736)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:816)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:809)
at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:649)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:104)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:761)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:742)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.UnknownHostException: MasterNode
... 19 more
The problem might be that, while connecting to the NodeManager, it is unable to handshake with the ResourceManager. If I try this on a single-node Hadoop cluster, it works correctly.
Try adding the property:
<property>
<name>yarn.nodemanager.delete.debug-delay-sec</name>
<value>1200</value>
</property>
One more thing: when running the "launchcontainer.cmd" located under Hadoop's \tmp..\appcache location, an issue arises in accessing a DLL needed to run MapReduce on the Windows platform, i.e. MSVCR100.dll is missing for handling the Tez job, as below:
"The program can't start because MSCVR100.dll is missing from your computer. Try reinstalling the program to fix this issue"
Give full privileges to the hadoop-tmp directory, and try replacing/moving the msvcr100.dll file (from C:\Windows\System32) on the Windows machine to run the MapReduce program for the Tez job.
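A rough cmd sketch of that workaround (the C:\Hadoop\tmp and C:\Hadoop\bin paths are assumptions; adjust them to your installation):
rem give the current user full access to the Hadoop temp directory (assumed path)
icacls C:\Hadoop\tmp /grant %USERNAME%:(OI)(CI)F /T
rem copy the runtime DLL next to the Hadoop binaries (assumed target directory)
copy C:\Windows\System32\msvcr100.dll C:\Hadoop\bin\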

Not able to run the examples of HBase-The definitive guide

I've been trying to run the examples from HBase: The Definitive Guide, but I keep encountering the following error and I'm not able to get past it. I'm running in standalone mode, if that helps.
Exception in thread "main" org.apache.hadoop.hbase.MasterNotRunningException: 17136@ubuntulocalhost,32992,1373877731444
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:615)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:94)
at util.HBaseHelper.<init>(HBaseHelper.java:29)
at util.HBaseHelper.getHelper(HBaseHelper.java:33)
at client.PutExample.main(PutExample.java:22)
But my HMaster process is running:
hduser@ubuntu:/home/ubuntu/hbase-book/ch03$ jps
17602 Jps
8709 NameNode
8929 DataNode
9472 TaskTracker
9252 JobTracker
9172 SecondaryNameNode
17136 HMaster
This is my hbase-site.xml file:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>file:///usr/local/hbase/hbase-data/</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/hbase/zookeeper-data/</value>
</property>
</configuration>
This is my /etc/hosts file:
127.0.0.1 localhost
127.0.1.1 ubuntu
127.0.0.1 ubuntu.ubuntu-domain ubuntu
Specifically, I'm trying to run the chapter 3 examples, and I just don't understand why my setup is not working.
Any idea where I'm going wrong?
Edit: Here are the logs:
2013-07-15 03:56:32,663 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:60119
2013-07-15 03:56:32,672 WARN org.apache.zookeeper.server.ZooKeeperServer: Connection request from old client /127.0.0.1:60119; will be dropped if server is in r-o mode
2013-07-15 03:56:32,672 INFO org.apache.zookeeper.server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:60119
2013-07-15 03:56:32,674 INFO org.apache.zookeeper.server.ZooKeeperServer: Established session 0x13fe17e7f1d0006 with negotiated timeout 40000 for client /127.0.0.1:60119
2013-07-15 03:57:11,653 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=1.17 MB, free=247.24 MB, max=248.41 MB, blocks=2, accesses=68, hits=55, hitRatio=80.88%, , cachingAccesses=61, cachingHits=53, cachingHitsRatio=86.88%, , evictions=0, evicted=6, evictedPerRun=Infinity
2013-07-15 03:57:14,333 WARN org.apache.zookeeper.server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x13fe17e7f1d0006, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:724)
2013-07-15 03:57:14,334 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:60119 which had sessionid 0x13fe17e7f1d0006
2013-07-15 03:57:24,551 INFO org.apache.hadoop.hbase.master.LoadBalancer: Skipping load balancing because balanced cluster; servers=1 regions=1 average=1.0 mostloaded=1 leastloaded=1
2013-07-15 03:57:24,568 DEBUG org.apache.hadoop.hbase.client.MetaScanner: Scanning .META. starting at row= for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@189ddf
2013-07-15 03:57:24,578 DEBUG org.apache.hadoop.hbase.master.CatalogJanitor: Scanned 1 catalog row(s) and gc'd 0 unreferenced parent region(s)
It has nothing to do with standalone or distributed mode. Make sure your setup is working properly; I can see from your jps output that the RegionServer and ZooKeeper are not running. Comment out the line 127.0.1.1 ubuntu in your /etc/hosts file and restart HBase (you might have to kill it first).
P.S.: Since you already have Hadoop configured and running fine, you can run HBase in a pseudo-distributed setup.
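For illustration, the /etc/hosts from the question would then look like this (only the 127.0.1.1 line changes; the rest stays as posted):
127.0.0.1 localhost
# 127.0.1.1 ubuntu
127.0.0.1 ubuntu.ubuntu-domain ubuntu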
