I've installed Spark 2.1.1 on Ubuntu and no matter what I do, it doesn't seem to agree with the java path. When I run "spark-submit --version" or "spark-shell" I get the following error:
/usr/local/spark/bin/spark-class: line 71: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin//bin/java: No such file or directory
Now obviously the "/bin//bin/java" is problematic, but I'm not sure where to change the configuration. The spark-class file has the following lines:
if [ -n "${JAVA_HOME}" ]; then
  RUNNER="${JAVA_HOME}/bin/java"
I was originally using a build of Spark meant for Hadoop 2.4, and when I changed the line to RUNNER="${JAVA_HOME}" it would give me either the error "[path] is a directory" or "[path] is not a directory". This was after also trying multiple path permutations in /etc/environment.
What I now have in /etc/environment is:
JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/"
This is the current java setup that I have:
root@ubuntu:~# update-alternatives --config java
There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
bashrc has the following:
export SPARK_HOME="/usr/local/spark"
export PATH="$PATH:$SPARK_HOME/bin"
Can anyone advise: 1) What files I need to change and 2) how I need to change them? Thanks in advance.
spark-class file is in the link, just in case:
http://vaughn-s.net/hadoop/spark-class
In the /etc/environment file replace
JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/"
with
JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64/jre/"
then execute
source /etc/environment
Also, RUNNER="${JAVA_HOME}/bin/java" should be kept as it is.
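To confirm the change took effect, a quick check along these lines should work (the paths are the ones from this question; note that /etc/environment is normally re-read at login, so log out and back in if sourcing alone is not enough):
echo $JAVA_HOME                   # should print /usr/lib/jvm/java-8-openjdk-amd64/jre/
"$JAVA_HOME/bin/java" -version    # the jre//bin/java double slash is harmless
spark-submit --version            # should no longer complain about /bin//bin/java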
Windows Environment:
Open Advanced system settings -> Environment Variables to set the JAVA_HOME path. The most common mistake is setting the path to the java folder:
JAVA_HOME: Directory-Name:\java
rather than setting it to the JDK folder:
JAVA_HOME: Directory-Name:\jdk
This is how it worked for me.
I'm trying to set the JAVA_HOME variable on an Ubuntu server. I get the Java path with this command:
which java
/usr/bin/java
I set the result in /etc/environment
JAVA_HOME="/usr/bin/java"
When I try to run an mvn command, I get this error:
Error: JAVA_HOME is not defined correctly.
We cannot execute /usr/bin/java/bin/java
You should not set JAVA_HOME to /usr/bin/java, because that is just a symbolic link that points to where the real java executable is.
JAVA_HOME should point to the Java installation directory, and not to the java executable (or a link to the executable).
Find out where your Java installation directory is and then set JAVA_HOME to that directory (and not to the java executable). If you installed Java using Ubuntu's package management system, then the Java home directory is probably one of the subdirectories in /usr/lib/jvm.
Per the Oracle site:
export JAVA_HOME=jdk-install-location
export PATH=$JAVA_HOME/bin:$PATH
You can add these lines into your ~/.bash_profile (or ~/.bashrc), and then refresh using source ~/.bash_profile
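On Ubuntu, one way to find the real installation directory behind the /usr/bin/java symlink is a sketch like this (the exact directory under /usr/lib/jvm will differ on your machine):
readlink -f "$(which java)"                              # prints something like /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/jre   # that output with the trailing /bin/java stripped
export PATH=$JAVA_HOME/bin:$PATH
mvn -version                                             # should now run instead of complaining about /usr/bin/java/bin/java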
I am starting to learn Cassandra and downloaded the files from the Apache Cassandra site. When I navigate to the bin folder of apache-cassandra-2.2.1 and run the command cassandra, it gives me the error Unable to find java executable. Check JAVA_HOME and PATH environment variables.
But from that same path, when I run java, I can see that Java is accessible. What should I do to get rid of this? I am using Windows OS.
Edit:
I used the DataStax Windows installer instead, and now I see no error when running cqlsh.
I was also facing the same problem. Actually, somewhere in the installation scripts "bin/java" gets appended to "$JAVA_HOME". In my case the java path was "/usr/bin/java", so I had to configure JAVA_HOME=/usr.
export JAVA_HOME=/usr/
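A quick sanity check after setting it that way (this assumes java really does live at /usr/bin/java):
"$JAVA_HOME/bin/java" -version    # expands to /usr//bin/java, i.e. the same file as /usr/bin/java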
Set JAVA_HOME:
Right click My Computer and select Properties.
On the Advanced tab, select Environment Variables, and then edit JAVA_HOME to point to where the JDK software is located, for example, C:\Program Files\Java\jdk1.6.0_02.
FROM:
http://docs.oracle.com/cd/E19182-01/820-7851/inst_cli_jdk_javahome_t/index.html
Log in as root, because Cassandra will be started as root.
readlink -f $(which java)
vi ~/.bashrc   (set JAVA_HOME there; see the sketch below)
source ~/.bashrc
Please note: the Java version should be the same on all cluster nodes.
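The line added in the vi step could look like this (a sketch only; take the directory from the readlink output above and strip the trailing /bin/java):
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre   # example path, adjust to your output
export PATH=$JAVA_HOME/bin:$PATH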
I followed "http://codesfusion.blogspot.com/2013/10/setup-hadoop-2x-220-on-ubuntu.html" to install Hadoop on Ubuntu. But upon checking the Hadoop version I get the following error:
Error: Could not find or load main class org.apache.hadoop.util.VersionInfo
Also, when I try: hdfs namenode -format
I get the following error:
Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode
The java version used is:
java version "1.7.0_25"
OpenJDK Runtime Environment (IcedTea 2.3.10) (7u25-2.3.10-1ubuntu0.12.04.2)
OpenJDK 64-Bit Server VM (build 23.7-b01, mixed mode)
It is a problem of environment variable setup. Apparently, I didn't find a set that worked until now. I was trying on 2.6.4. Here is what we should do:
export HADOOP_HOME=/home/centos/HADOOP/hadoop-2.6.4
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_CONF_DIR=$HADOOP_HOME
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
Add these to your .bashrc and don't forget to do
source ~/.bashrc
I think your problem will be solved as was mine.
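After sourcing, a quick check along these lines (assuming the paths above match your installation) should confirm the classes are found again:
echo $HADOOP_HOME     # should print /home/centos/HADOOP/hadoop-2.6.4
which hadoop          # should point into $HADOOP_HOME/bin
hadoop version        # should now load org.apache.hadoop.util.VersionInfo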
You probably did not follow the instructions correctly. Here are some things to try and help us / you diagnose this:
In the shell in which you ran hadoop version, run export and show us the list of relevant environment variables.
Show us what you put in the /usr/local/hadoop/etc/hadoop/hadoop-env.sh file.
If neither of the above gives you / us any clues, then find and use a text editor to (temporarily) modify the hadoop wrapper shell script. Add the line "set -xv" somewhere near the beginning. Then run hadoop version, and show us what it produces.
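If you would rather not edit the script at all, a roughly equivalent way to capture the same trace (a sketch, assuming HADOOP_HOME points at your installation) is:
bash -xv "$HADOOP_HOME/bin/hadoop" version 2>&1 | less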
Adding this line to ~/.bash_profile worked for me.
export HADOOP_PREFIX=/<where ever you install hadoop>/hadoop
So just:
$ sudo open ~/.bash_profile then add the aforesaid line
$ source ~/.bash_profile
Hope this helps (:
I was facing the same issue. Although it may seem simple, it took two hours of my time. I tried all the things above, but they didn't help.
I just exited the shell I was in and tried again by logging into the system again. Then things worked!
Try to check:
JAVA_HOME and all PATH-related variables in the Hadoop config.
Run . ~/.bashrc (note the dot in front) to make those variables available in your environment. It seems that the guide does not mention this.
I got the same problem with Hadoop 2.7.2. After I applied the trick shown below I was able to start HDFS, but later I discovered that the tar archive I was using was missing some important pieces. So after downloading 2.7.3, everything worked as it is supposed to.
My first suggestion is to download the tar.gz again, at the same version or a newer one.
If you are continuing to read... this is how I solved the problem...
After a fresh install, Hadoop was not able to find the jars.
I did this small trick:
I located where the jars are.
I created a symbolic link to that folder under
$HADOOP_HOME/share/hadoop/common
ln -s $HADOOP_HOME/share/hadoop/kms/tomcat/webapps/kms/WEB-INF/lib $HADOOP_HOME/share/hadoop/common
For the version command you need hadoop-common-2.7.2.jar; this helped me to find where the jars were stored.
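A find along these lines (just a sketch) is one way to see where that jar actually ended up:
find "$HADOOP_HOME" -name "hadoop-common-*.jar"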
After that...
$ bin/hadoop version
Hadoop 2.7.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41
Compiled by jenkins on 2016-01-26T00:08Z
Compiled with protoc 2.5.0
From source with checksum d0fda26633fa762bff87ec759ebe689c
This command was run using /opt/hadoop-2.7.2/share/hadoop/kms/tomcat/webapps/kms/WEB-INF/lib/hadoop-common-2.7.2.jar
Of course any hadoop / hdfs command works now.
I'm a happy man again. I know this is not a clean solution, but at least it works for me.
I got that error and fixed it by editing ~/.bashrc as follows:
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$PATH
then open a terminal and run this command:
source ~/.bashrc
then check
hadoop version
Here is how it works for Windows 10 Git Bash (mingw64):
export HADOOP_HOME="/PATH-TO/hadoop-3.3.0"
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_CLASSPATH=$(cygpath -pw $(hadoop classpath)):$HADOOP_CLASSPATH
hadoop version
I also copied slf4j-api-1.6.1.jar into hadoop-3.3.0\share\hadoop\common.
I added the environment variables described above, but it still didn't work. Setting HADOOP_CLASSPATH as follows in my ~/.bashrc worked for me:
export HADOOP_CLASSPATH=$(hadoop classpath):$HADOOP_CLASSPATH
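To see what that command substitution expands to before exporting it (and whether the common jars are on it), you can simply print it:
hadoop classpath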
I used
export PATH=$HADOOP_HOME/bin:$PATH
Instead of
export PATH=$PATH:$HADOOP_HOME/bin
Then it worked for me!
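The order matters because whatever appears first in PATH wins; a quick way to check which binary is being picked up:
which hadoop          # should print the hadoop under $HADOOP_HOME/bin
echo $PATH            # $HADOOP_HOME/bin should come before any conflicting entry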
I have the following in my .bashrc:
JAVA_HOME="/usr/bin/java"
GRAILS_HOME="/root/grails"
PATH=$PATH:$JAVA_HOME:$GRAILS_HOME/bin
export JAVA_HOME
export GRAILS_HOME
export PATH
However, when I execute grails in the terminal, I get:
root@localhost:~# grails
grails: JAVA_HOME is not a directory: /usr/bin/java
When I tried replacing it with
JAVA_HOME="/usr/share/java"
then the outcome is:
root@localhost:~# grails
grails: JAVA_HOME is not defined correctly; can not execute: /usr/share/java/bin/java
What am I missing here?
I would expect JAVA_HOME to contain the bin directory containing java (and others).
So clearly those two options you've selected are not correct.
Looking at my Ubuntu installation, I have numerous Java packages under /usr/lib/jvm, and I would select an appropriate one there e.g.
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-amd64
Make sure to source the changed file, e.g. $ source ~/.bashrc in your example.
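A compact way to pick and verify a JDK (a sketch; the directory name under /usr/lib/jvm will vary with the packages you have installed):
ls /usr/lib/jvm                                     # see which JDKs/JREs are installed
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-amd64  # pick one that actually exists on your box
"$JAVA_HOME/bin/java" -version                      # verify it is a real Java home
After that, grails should no longer complain that JAVA_HOME is not a directory.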
I have downloaded both Java jdk1.7.0_06 and Java jre7, and I added the system variable JAVA_HOME C:\Program Files\Java\jdk1.7.0_06\bin to my Windows 7. But when I type C:\activiti-5.10\activiti-5.10\setup>ant demo.start at the CMD command line to run a demo application, I get the following error:
'"java.exe"' is not recognized as an internal or external command, operable program or batch file
So does anyone know how I can solve this problem?
If you look at the "ant.bat" file, you will see that it looks for the "java" command in the following way:
If the %JAVACMD% environment variable is set, then it uses that.
Otherwise, if the %JAVA_HOME% environment variable is set, it tries to use %JAVA_HOME%\bin\java.exe
Otherwise, it tries to use java.exe; i.e. it will look on your %PATH%.
In your case, you have %JAVA_HOME% set ... but set to the Java installation's "bin" directory, not to the root of the installation. So the Ant.bat script looks in the wrong place for java.exe.
Just set %JAVA_HOME% correctly, and it should work.
JAVA_HOME C:\Program Files\Java\jdk1.7.0_06
As you can see from the above, you do not need to have the Java "bin" directory on your %PATH% for Ant to work, but it is a good idea to set it anyway. That way you can run the Java commands simply from the command line.
The setting of %CLASSPATH% is not relevant to this problem. Indeed, unless the build.xml file is broken, Ant will ignore your %CLASSPATH% environment variable.
You need to put the folder containing java.exe in your PATH variable, but the JRE itself in JAVA_HOME.
JAVA_HOME is the path of the JDK root folder, e.g. C:\Program Files\Java\jdk1.7.0_06, but PATH should contain C:\Program Files\Java\jdk1.7.0_06\bin:
JAVA_HOME C:\Program Files\Java\jdk1.7.0_06
JRE_HOME C:\Program Files\Java\jre1.7.0_06
path = C:\Program Files\Java\jdk1.7.0_06\bin;C:\Program Files\Java\jre1.7.0_06\bin
Typically JAVA_HOME should be the parent directory of the "bin" folder (of the JRE or JDK).
In this case Ant expects java to come from the JDK.
Try the following in a cmd window:
set JAVA_HOME=C:\Program Files\Java\jdk1.7.0_06
set path=%JAVA_HOME%\bin;%path%
ant
(side note: adding java.exe to path is not a requirement for ant; it is a convenience thing for the user)
Just delete the following set of files from your %windir%\System32 folder. Actually deleting java.exe is enough, but for consistency's sake just delete all the Java-related binaries:
java.exe
javaw.exe
javaws.exe
Actually the Oracle Windows installer places a copy of these files into the %windir%\System32 folder (I don't understand why), but it looks like they are not needed, as they are available anyway under the JDK folder where you installed them.
I have tried all the various solutions posted on SO and in other forums as well, but none of them worked for me. I have also set all the relevant environment variables (JAVA_PATH, CLASS_PATH, etc.) correctly. Finally, this is the only solution that worked for me.
Go to \squirrel-sql-3.9.0\squirrel-sql.bat, open that squirrel-sql.bat in Notepad, and comment out the existing logic, which is
=======================================
if exist "%IZPACK_JAVA%\bin\javaw.exe" (
set LOCAL_JAVA=%IZPACK_JAVA%\bin\javaw.exe
) else (
set LOCAL_JAVA=javaw.exe
)
echo Using java: %LOCAL_JAVA%
=================================
and add the below logic
@echo off
set LOCAL_JAVA=C:\Program Files (x86)\Java\jre7\bin\javaw.exe
echo Using java: %LOCAL_JAVA%
================================
Make sure you add the correct path to javaw.exe when adding the above logic (set LOCAL_JAVA=...),
and start the .bat file from CMD. That's it, it should work. It worked for me.
I started getting this error in Android Studio after I updated it to version "Electric Eel".
It happened because Android Studio has changed where they put their JRE:
it used to be C:\Program Files\Android\Android Studio\jre
but now it is C:\Program Files\Android\Android Studio\jbr
To fix:
updated my JAVA_HOME environment variable to point to the new Java location (C:\Program Files\Android\Android Studio\jbr)
and then restarted Android Studio, and now it is fixed.
I agree with the above explanation, but if the problem still persists, try setting:
CLASSPATH = C:\Program Files\Java\jdk1.7.0_06\bin