I'm trying to install Hadoop and run it. I'm sure I've installed Hadoop and formatted the namenode successfully.
However, when I tried to run start-dfs.sh, I got the error below:
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-wenruo-namenode-linux.out
localhost: /usr/local/hadoop/bin/hdfs: line 304: /usr/local/hadoop/usr/lib/jvm/java-8-oracle/bin/java: No such file or directory
My JAVA_HOME is below:
echo $JAVA_HOME
/usr/lib/jvm/java-7-openjdk-amd64
My hadoop-env.sh file:
export JAVA_HOME=${JAVA_HOME}
How can Hadoop still be looking for JDK 8 when I've already set JAVA_HOME to JDK 7?
Thank you very much.
In general, each Hadoop distribution/version has a few basic script files that set this JAVA_HOME environment variable, such as yarn-env.sh if you have YARN.
Also, depending on your Hadoop version, the path might appear in your *-site.xml files, such as hdfs-site.xml, core-site.xml, yarn-site.xml, mapred-site.xml, and a few others depending on which services you have. Your update to hadoop-env.sh likely did not regenerate the client configuration files, unless you made the change through a cluster manager application and then redeployed the client configuration files.
I also find that these sometimes fall back to the system's bin/java executable. You can use the following commands to find out which java your OS has on its bin/ path:
readlink -f /usr/bin/java
/usr/bin/java -version
Did you also update hadoop-env.sh on each node and then restart all services to make sure the change is picked up?
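If you manage more than one node, a quick sanity check across all of them might look like this (a sketch: node1 and node2 are hypothetical hostnames, and the config path assumes a Hadoop 2.x layout under /usr/local/hadoop):
# print the JAVA_HOME line each node's hadoop-env.sh actually exports
for h in node1 node2; do
  ssh "$h" 'grep "export JAVA_HOME" /usr/local/hadoop/etc/hadoop/hadoop-env.sh'
done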
Leave it. The problem is resolved.
In hadoop-env.sh, I changed export JAVA_HOME=${JAVA_HOME} to export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64 (the value echo $JAVA_HOME prints).
It looks like ${JAVA_HOME} doesn't get expanded; presumably the daemons are started over ssh in fresh shells where JAVA_HOME isn't set, so hadoop-env.sh needs the literal path.
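For reference, a minimal sketch of that hadoop-env.sh change (the JDK path is whatever echo $JAVA_HOME reports on your machine):
# hadoop-env.sh: hardcode the JDK path rather than relying on ${JAVA_HOME},
# which can be empty in the non-interactive shells the daemons start in
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64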
Related
I've installed Spark 2.1.1 on Ubuntu and no matter what I do, it doesn't seem to agree with the java path. When I run "spark-submit --version" or "spark-shell" I get the following error:
/usr/local/spark/bin/spark-class: line 71: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin//bin/java: No such file or directory
Now obviously the "/bin//bin/java" is problematic, but I'm not sure where to change the configuration. The spark-class file has the following lines:
if [ -n "${JAVA_HOME}" ]; then
RUNNER="${JAVA_HOME}/bin/java"
I was originally using a version of Spark meant for Hadoop 2.4, and when I changed the line to RUNNER="${JAVA_HOME}", it would give me either the error "[path] is a directory" or "[path] is not a directory." This was after also trying multiple path permutations in /etc/environment.
What I now have in /etc/environment is:
JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/"
This is the current java setup that I have:
root@ubuntu:~# update-alternatives --config java
There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
My .bashrc has the following:
export SPARK_HOME="/usr/local/spark"
export PATH="$PATH:$SPARK_HOME/bin"
Can anyone advise: 1) what files I need to change and 2) how I need to change them? Thanks in advance.
The spark-class file is at the link below, just in case:
http://vaughn-s.net/hadoop/spark-class
In the /etc/environment file, replace
JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/"
with
JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64/jre/"
then execute
source /etc/environment
Also, RUNNER="${JAVA_HOME}/bin/java" should be kept as it is.
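A quick way to verify the fix (a sketch, using the path from above):
# reload /etc/environment, then confirm the java binary actually resolves
source /etc/environment
ls "$JAVA_HOME/bin/java"        # should list the executable, not error out
"$JAVA_HOME/bin/java" -version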
Windows Environment:
Open Advanced system settings -> Environment Variables to set the JAVA_HOME path. The most common mistake is setting the path to the Java folder:
JAVA_HOME: Directory-Name:\java
rather than setting it to the JDK folder:
JAVA_HOME: Directory-Name:\jdk
This is how it worked for me.
I have installed tomcat7 on my Ubuntu machine. When I try to restart the server, I get a message to set JAVA_HOME, but it is already set in .bashrc:
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export CATALINA_HOME=/usr/share/tomcat7
Error:
omkars@<ubuntu_14.04>:~$ sudo service tomcat7 restart
[sudo] password for omkars:
* no JDK or JRE found - please set JAVA_HOME
omkars@<ubuntu_14.04>:~$ echo $JAVA_HOME
/usr/lib/jvm/java-8-oracle
What could be missing?
Thanks.
Now it's working!
The changes I made are:
Changed .bashrc as explained in the question.
Changed /etc/init.d/tomcat7 to include Oracle Java 8, which was missing there (the java-8-oracle entry at the end is the addition):
JDK_DIRS="/usr/lib/jvm/default-java ${OPENJDKS} /usr/lib/jvm/java-6-openjdk /usr/lib/jvm/java-6-sun /usr/lib/jvm/java-7-oracle /usr/lib/jvm/java-8-oracle"
Then,
root@omkars-Dell-System-Inspiron-N4110:~# sudo service tomcat7 restart
* Starting Tomcat servlet engine tomcat7 [ OK ]
Got a hint from this page:
https://mifosforge.jira.com/wiki/display/MIFOSX/Install+Tomcat+7+on+Ubuntu+11.10+for+Mifos+X
Thanks
It seems like the preferred way of handling this is to uncomment the JAVA_HOME entry in /etc/default/tomcat7 and adjust the path accordingly. If you're using the webupd8 repository with the oracle-java8-installer, it's JAVA_HOME=/usr/lib/jvm/java-8-oracle.
It'll need to be set for the user that runs the tomcat service, rather than for your user.
Set it in the system-wide profile, somewhere in /etc/profile or /etc/profile.d/, depending on how your machine is configured.
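A minimal sketch of the preferred change from above (assuming the webupd8 oracle-java8-installer path):
# /etc/default/tomcat7: uncomment the JAVA_HOME line and point it at your JDK
JAVA_HOME=/usr/lib/jvm/java-8-oracle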
The startup script at /etc/init.d/tomcat7 sources the file /etc/default/rcS before searching for some well-known install locations.
Adding the line JAVA_HOME=/usr/lib/jvm/java-8-oracle to /etc/default/rcS corrects the no JDK or JRE found startup problem without directly modifying the /etc/init.d/tomcat7 script.
You can set an environment variable in the setenv.sh script. According to the Running The Apache Tomcat 7.0 document:
Apart from CATALINA_HOME and CATALINA_BASE, all environment variables can
be specified in the "setenv" script. The script is placed either into
CATALINA_BASE/bin or into CATALINA_HOME/bin directory and is named
setenv.bat (on Windows) or setenv.sh (on *nix).
So just add the following line to setenv.sh:
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
This way you are setting the variable locally.
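Creating the script from scratch might look like this (a sketch, assuming CATALINA_HOME=/usr/share/tomcat7 as in the question):
# create bin/setenv.sh under CATALINA_HOME so the startup scripts source it
sudo tee /usr/share/tomcat7/bin/setenv.sh >/dev/null <<'EOF'
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
EOF
sudo service tomcat7 restart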
I had the same problem, but I solved it by changing the
JDK_DIRS variable in /etc/init.d/tomcat7 as follows:
JDK_DIRS="/usr/lib/jvm/default-java ${OPENJDKS} /usr/lib/jvm/java-6-openjdk /usr/lib/jvm/java-6-sun /usr/lib/jvm/java-7-oracle /usr/lib/jvm/java-8-oracle"
Try installing Java using the repository from http://www.webupd8.org.
This is the page for Java 8: http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html
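Roughly, the commands that article walks through (a sketch; the PPA and package name come from webupd8):
# add the webupd8 PPA and install the Oracle Java 8 installer package
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer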
I have installed and configured Hadoop on a Linux machine. Now I am trying to run a sample MR job. I started Hadoop via the command /usr/local/hadoop/bin/start-all.sh, and the output is:
namenode running as process 7876. Stop it first.
localhost: datanode running as process 8083. Stop it first.
localhost: secondarynamenode running as process 8304. Stop it first.
jobtracker running as process 8398. Stop it first.
localhost: tasktracker running as process 8612. Stop it first.
So I think that my Hadoop is configured successfully. But when I try to run the command below, it gives:
jeet#jeet-Vostro-2520:~$ hadoop fs -put gettysburg.txt /user/jeet/getty/gettysburg.txt
hadoop: command not found
I am new to Hadoop. Somebody please help. I am also posting a screenshot of what I am trying.
As it seems from your command history, you can replace hadoop with /usr/local/hadoop/bin/hadoop, and it should help.
If you want to use the hadoop command without specifying the full path to it, you can edit your ~/.bashrc file and add the following line:
export PATH=$PATH:/usr/local/hadoop/bin/
Then you need to reopen your terminal.
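Put together, the whole fix might look like this (a sketch reusing the paths from the question):
# append the Hadoop bin directory to PATH and reload the shell config
echo 'export PATH=$PATH:/usr/local/hadoop/bin' >> ~/.bashrc
source ~/.bashrc
# the bare command should now resolve
which hadoop        # expected: /usr/local/hadoop/bin/hadoop
hadoop fs -put gettysburg.txt /user/jeet/getty/gettysburg.txt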
Edit the PATH variable if you want to be able to invoke hadoop without specifying the full path:
export PATH=$PATH:/usr/local/hadoop/bin/
If you want it for every bash session, edit ~/.bash_profile to include this line.
I got the same error, and this worked for me
I configured the PATH variable in .bashrc:
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
Sometimes restarting your machine can resolve the issue, but only if you have configured everything correctly.
cd ~
vi .bashrc
export PATH=$PATH:<hadoop installation path>
For example, replace <hadoop installation path> with /usr/local/hadoop/bin/.
First echo your PATH; if it hasn't been set, go to your .bashrc file:
vi ~/.bashrc
and add the following to it:
export PATH=$PATH:/usr/local/hadoop/bin/
Please make sure that you are logged in as the particular user whose .bashrc file has this entry:
export PATH=$PATH:/usr/local/hadoop/bin/
This assumes that your Hadoop setup is located at /usr/local.
For example, if you have set the entry for user hadoopuser in /home/hadoopuser/.bashrc, then you should be logged in as hadoopuser only and not as any other user.
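A quick sanity check (a sketch; hadoopuser is the example user from above):
whoami                                    # should print hadoopuser
grep 'hadoop' ~/.bashrc                   # should show the export PATH entry
echo "$PATH" | tr ':' '\n' | grep hadoop  # confirms the entry is live in this shell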
Hadoop command not found?
Put these three lines at the end of the ~/.bashrc file:
sudo gedit ~/.bashrc
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/home/Work/hadoop-1.2.1
export PATH=$HADOOP_HOME/bin:$PATH
java-8-openjdk-amd64: replace with your Java folder name
hadoop-1.2.1: replace with your Hadoop folder name
save the file and use below command
source ~/.bashrc
or simply close the terminal and open it again
I followed "http://codesfusion.blogspot.com/2013/10/setup-hadoop-2x-220-on-ubuntu.html" to install Hadoop on Ubuntu. But upon checking the Hadoop version, I get the following error:
Error: Could not find or load main class org.apache.hadoop.util.VersionInfo
Also, when I try: hdfs namenode -format
I get the following error:
Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode
The java version used is:
java version "1.7.0_25"
OpenJDK Runtime Environment (IcedTea 2.3.10) (7u25-2.3.10-1ubuntu0.12.04.2)
OpenJDK 64-Bit Server VM (build 23.7-b01, mixed mode)
It is a problem with the environment variable setup. Apparently, I hadn't found a combination that worked until now. I was trying this on 2.6.4. Here is what we should do:
export HADOOP_HOME=/home/centos/HADOOP/hadoop-2.6.4
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
Add these to your .bashrc, and don't forget to run
source ~/.bashrc
I think your problem will be solved, as mine was.
You probably did not follow the instructions correctly. Here are some things to try to help us/you diagnose this:
In the shell where you ran hadoop version, run export and show us the list of relevant environment variables.
Show us what you put in the /usr/local/hadoop/etc/hadoop/hadoop-env.sh file.
If neither of the above gives you / us any clues, then find and use a text editor to (temporarily) modify the hadoop wrapper shell script. Add the line "set -xv" somewhere near the beginning. Then run hadoop version, and show us what it produces.
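For instance (a sketch; the script path assumes a /usr/local/hadoop install):
# temporarily add this near the top of /usr/local/hadoop/bin/hadoop:
set -xv    # echo every line as it is read and executed
# then run the command and page through the trace to see how the java
# command line and classpath get built:
hadoop version 2>&1 | less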
Adding this line to ~/.bash_profile worked for me.
export HADOOP_PREFIX=/<wherever you installed hadoop>/hadoop
So just:
$ sudo open ~/.bash_profile then add the aforesaid line
$ source ~/.bash_profile
Hope this helps (:
I was facing the same issue. Although it may seem simple, it took away two hours of my time. I tried all the things above, but they didn't help.
I just exited the shell I was in and tried again by logging into the system again. Then things worked!
Try to check:
JAVA_HOME and all PATH-related variables in the Hadoop config
Run . ~/.bashrc (note the dot in front) to make those variables available in your environment. It seems that the guide does not mention this.
I got the same problem with Hadoop 2.7.2.
After I applied the trick shown, I was able to start HDFS, but later I discovered that the tar archive I was using was missing some important pieces. So after downloading 2.7.3, everything worked as it is supposed to.
My first suggestion is to download the tar.gz again, at the same version or a newer one.
If you are continuing to read... this is how I solved the problem...
After a fresh install, Hadoop was not able to find the jars.
I did this small trick:
I located where the jars were.
I created a symbolic link of the folder to
$HADOOP_HOME/share/hadoop/common
ln -s $HADOOP_HOME/share/hadoop/kms/tomcat/webapps/kms/WEB-INF/lib $HADOOP_HOME/share/hadoop/common
For the version command you need hadoop-common-2.7.2.jar; this helped me figure out where the jars were stored.
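If you need to hunt for the jar yourself, something like this works (a sketch, assuming $HADOOP_HOME is set):
# search the whole installation tree for the common jar
find "$HADOOP_HOME" -name 'hadoop-common-*.jar'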
After that...
$ bin/hadoop version
Hadoop 2.7.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41
Compiled by jenkins on 2016-01-26T00:08Z
Compiled with protoc 2.5.0
From source with checksum d0fda26633fa762bff87ec759ebe689c
This command was run using /opt/hadoop-2.7.2/share/hadoop/kms/tomcat/webapps/kms/WEB-INF/lib/hadoop-common-2.7.2.jar
Of course any hadoop / hdfs command works now.
I'm a happy man again. I know this is not a clean solution, but it works, at least for me.
I got that error, and I fixed it by editing ~/.bashrc
as follows:
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$PATH
Then open a terminal and run this command:
source ~/.bashrc
then check
hadoop version
Here is how it works for Windows 10 Git Bash (mingw64):
export HADOOP_HOME="/PATH-TO/hadoop-3.3.0"
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_CLASSPATH=$(cygpath -pw $(hadoop classpath)):$HADOOP_CLASSPATH
hadoop version
I also copied slf4j-api-1.6.1.jar into hadoop-3.3.0\share\hadoop\common.
I added the environment variables described above, but it still didn't work. Setting HADOOP_CLASSPATH as follows in my ~/.bashrc worked for me:
export HADOOP_CLASSPATH=$(hadoop classpath):$HADOOP_CLASSPATH
I used
export PATH=$HADOOP_HOME/bin:$PATH
instead of
export PATH=$PATH:$HADOOP_HOME/bin
Then it worked for me! Prepending $HADOOP_HOME/bin makes it take precedence over anything else on the PATH, which presumably was shadowing the hadoop command.
I got the following error when I tried to install GlassFish Server (glassfish-3.1.2.2-windows.exe):
Executing command :C:\glassfish3\glassfish\bin\asadmin.bat --user admin --passwordfile - create-domain --savelogin --checkports=false --adminport 4646 --instanceport 7070 --domainproperties=jms.port=7676:domain.jmxPort=8686:orb.listener.port=3700:http.ssl.port=8181:orb.ssl.port=3820:orb.mutualauth.port=3920 domain1
C:\glassfish3\glassfish\bin\asadmin.bat --user admin --passwordfile - create-domain --savelogin --checkports=false --adminport 4646 --instanceport 7070 --domainproperties=jms.port=7676:domain.jmxPort=8686:orb.listener.port=3700:http.ssl.port=8181:orb.ssl.port=3820:orb.mutualauth.port=3920 domain1
The system cannot find the path specified.
A screenshot of the error follows.
I just ran into this same problem, and it appears to be caused by the batch files asadmin.bat and asenv.bat. The batch files read as follows (I've removed the REM statements and lines that didn't pertain to the problem):
asadmin.bat in c:\glassfish3\glassfish\bin
REM Always use JDK 1.6 or higher
REM Depends on Java from ..\config\asenv.bat
call "%~dp0..\config\asenv.bat"
if "%AS_JAVA%x" == "x" goto UsePath
set JAVA="%AS_JAVA%\bin\java"
goto run
:UsePath
set JAVA=java
:run
%JAVA% -jar "%~dp0..\modules\admin-cli.jar" %*
asenv.bat in c:\glassfish3\glassfish\config
set AS_JAVA=C:\Program Files (x86)\Java
I could not figure out how to get GlassFish to just use the environment variable during the install. I attempted to use the -j "(javapath)" argument, but this didn't solve the problem for me.
What worked, and I'm not proud of this solution, is to give GlassFish what it's looking for. If you put together the path it's constructing above, you get C:\Program Files (x86)\Java\bin\java.exe. Since Java installs to C:\Program Files (x86)\Java\jre7\bin\java.exe, I simply copied the contents of C:\Program Files (x86)\Java\jre7\ to C:\Program Files (x86)\Java\, and GlassFish installed correctly.
If someone else has a better solution to this problem, PLEASE post it!
Full Disclosure:
Installing Glassfish 3.1.2.2 on Windows Server 2008, running on a VM.
Update: A co-worker of mine came up with a different solution that doesn't involve copying the contents of C:\Program Files (x86)\Java\jre7.
During the GlassFish install, at the point where it requests a password for the admin account, edit the asenv.bat file and append jre7\ to the line I quoted above, so that it reads set AS_JAVA=C:\Program Files (x86)\Java\jre7\. This forces GlassFish to look in the proper folder.