I'm working on a distributed systems project and my program is complete. I have tested it and it runs fine over 10 machines, but every time I want to test the program I have to:
- Copy the file for each machine
- ssh to each machine and type "java -jar file"
To avoid that painful process I made this:
for i in {1..11}
do
    if [ $i -ne 6 ]
    then
        sshpass -p "qwerty" scp myJar.jar user@l040101-ws$i.XXXX.XX.XX:someFolder
        sshpass -p "qwerty" ssh user@l040101-ws$i.XXXX.XX.XX "java -jar someFolder/myJar.jar &" &
    fi
done
And for some reason it doesn't work like it should: the scp command executes as it should, but the other one doesn't.
The program should produce a folder with 2 logs inside, and if I run it manually it does, so I guess it's not a permission problem; with the script, though, it doesn't.
The weird thing is that if I run top, I can see the java processes running on each machine.
BTW: those two & are there so the script doesn't get stuck after running each jar.
I recommend using SSH keys rather than putting the password on the command line (where it gets logged and is visible to other users, not to mention its presence in your script). The GitHub SSH key generation docs are pretty good for this (to add a key, append its public part to the server's ~/.ssh/authorized_keys file).
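A minimal sketch of that setup, assuming the same user/host pattern as in your loop (ssh-copy-id is just a convenience; appending the .pub file to the server's ~/.ssh/authorized_keys by hand works too):

# generate a key pair once on the client
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
# copy the public key to each machine (you will be asked for the password one last time per host)
for i in {1..11}
do
    if [ $i -ne 6 ]
    then
        ssh-copy-id -i ~/.ssh/id_ed25519.pub user@l040101-ws$i.XXXX.XX.XX
    fi
done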
Once you have generated a key on the client and added its pubkey to the server, you should be able to run:
for i in {1..11}
do
    if [ $i -ne 6 ]   # skip server six
    then
        cat myJar.jar | ssh user@l040101-ws$i.XXXX.XX.XX \
            "cd someFolder; cat > myJar.jar; java -jar myJar.jar" &
    fi
done
Note the single ampersand there, which is outside the quotes (so the backgrounding is done on your client, not on the server). You can't simply send the SSH session to the background on the server side, because the parent would be killed.
I wrangled this into one line in order to minimize the number of connections (the first cat command dumps the file to standard output, while the second cat command writes its standard input, the contents of myJar.jar, to the target location). I wasn't sure if I could just pipe it straight to java (cat myJar.jar | ssh user@host "cd someFolder; java -jar -"), so I left that alone.
I'm assuming you don't have to run the .jar from the parent of someFolder; it seems simpler to actually be in the target directory. If there's a chance the target directory does not exist, add mkdir -p someFolder; before the cd command; the -p ensures nothing happens if the directory already exists. If you do have to run it from the parent, remove the cd command and replace "myJar.jar" with "someFolder/myJar.jar".
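For example, a variant of the same one-liner that creates the directory first (same placeholders as above) might look like:

cat myJar.jar | ssh user@l040101-ws$i.XXXX.XX.XX \
    "mkdir -p someFolder; cd someFolder; cat > myJar.jar; java -jar myJar.jar" &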
I've spent the past couple of days working on this, and at this point I am super stuck. I have a Java program that must not be run as a service. This program must also be capable of updating itself when a new file is given for updating.
As a result, I have a script that is started with Linux that starts the Java application and then checks every 5 seconds if the application has been terminated. If the application has been terminated, it should check if there is an update and then start appropriately.
This is the code for that script:
#!/bin/bash
JAVA_HOME=/usr/lib/jvm/java-16-openjdk-amd64
WORKING_DIR=~/Data
LOG=$WORKING_DIR/logs/Bash.log
rm $LOG
echo "Script started" > $LOG
while true; do
    source $WORKING_DIR/Server.pid
    if ! kill -0 $AppPID; then
        echo "App must be started" >> $LOG
        source $WORKING_DIR/UpdateStatus
        if [ "$UpdateReady" -eq "1" ]; then
            echo "Moving files for update" >> $LOG
            mv $WORKING_DIR/Server.jar $WORKING_DIR/old.jar
            mv $WORKING_DIR/new.jar $WORKING_DIR/Server.jar
        fi
        nohup ${JAVA_HOME}/bin/java -jar ${WORKING_DIR}/Server.jar &
        echo AppPID="$!" > $WORKING_DIR/Server.pid
        echo "Server started" >> $LOG
        if [ "$UpdateReady" -eq "1" ]; then
            echo "Checking for safe update" >> $LOG
            source $WORKING_DIR/Server.pid
            echo UpdateReady="0" > $WORKING_DIR/UpdateStatus
            sleep 5
            if kill -0 $AppPID; then
                echo "Update successful" >> $LOG
                rm $WORKING_DIR/old.jar
            else
                echo "Update failed, restarting old jar" >> $LOG
                rm $WORKING_DIR/Server.jar
                mv $WORKING_DIR/old.jar $WORKING_DIR/Server.jar
                nohup ${JAVA_HOME}/bin/java -jar ${WORKING_DIR}/Server.jar &
                echo AppPID="$!" > $WORKING_DIR/Server.pid
            fi
        fi
        echo "Server start process finished, going into idle state" >> $LOG
    fi
    sleep 5
    echo "5 seconds idle passed" >> $LOG
done
To initialize the update, I have tried a couple of different things, both with the same result. First I set UpdateReady="1" through Java, then used exit(0);. I have also tried having Java call a Bash script which also sets UpdateReady="1" but uses kill $AppPID to shut down the Java application.
The result is that both the Java application and the Bash script stop executing, causing the update and restart to fail! I have looked through a significant number of Stack Overflow questions and answers, finding things such as nohup, all to no avail.
I will once again state that the Java application cannot be run as a service. No packages other than what is included in Java or made by Apache can be used, and no programs can be installed to Linux. I would prefer to solve the problem with Bash.
Upon testing some things mentioned in the comments, I realized I may have missed something that turns out to be important: while all other runs of the startup script are run by the startup applications manager, the initial run is not.
The install is taken care of remotely with an SSH connection sending the following command string:
cd /home/UserName; unzip -u -o Server.zip; chmod 777 install.sh; bash install.sh &; exit
install.sh is as follows:
#!/bin/bash
INSTALL_DIR=~/Data
mkdir ${INSTALL_DIR}
mkdir ${INSTALL_DIR}/logs
mkdir ${INSTALL_DIR}/data
cp Server.jar ${INSTALL_DIR}/Server.jar
cp service-start.sh ${INSTALL_DIR}/service-start.sh
chmod 777 ${INSTALL_DIR}/service-start.sh
rm Server.jar
rm service-start.sh
rm Server.zip
nohup bash $INSTALL_DIR/service-start.sh &
Upon rebooting my machine, I noticed that this problem goes away! This means that there must be a problem with the initial setup script. When the shell command is run, the SSH session seems to be sticky and does not actually let go after the bash install.sh &. I have tried putting nohup at the beginning of that part, but then the entire line will not run, for reasons I am not able to determine.
I would prefer not to force the user to restart after the install, and I can't seem to find any way to make the Startup Applications manager start an application at any time other than startup.
Well, after a lot of searching and some prompting from the comments, I found that the issue lay with how the program was initially being started.
As mentioned in the update, the first run is always started by an ssh connection. I knew there was a slight problem with this ssh connection, as it seemed to hold onto the connection no matter what I did. It turns out that this was causing the problem that resulted in the Bash instance and the Java instance remaining attached.
The solution for this problem was found here: jsch ChannelExec run a .sh script with nohup "lose" some commands
After managing to get the initial setup to start with nohup properly, the issue has gone away.
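In case it helps anyone else, the general shape of that kind of fix is to run the install under nohup with all three standard streams redirected, so that nothing keeps the SSH channel open. This is only a sketch of the idea (the log file name is an example), not the exact command from the linked answer:

cd /home/UserName && unzip -u -o Server.zip && chmod 777 install.sh && nohup bash install.sh > install.log 2>&1 < /dev/null &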
I want to run a jar as a background process on a remote machine over an ssh connection. There is a bash script on the remote machine to execute the jar:
#!/bin/sh
export JAVA_HOME=/location/of/java/
export PATH=$JAVA_HOME/bin:$PATH
nohup java -jar jar_name.jar config.properties &
If I execute the above script directly on the remote machine (sudo ./start_script.sh), the jar is started as a background process and stdout is directed to nohup.out in the same folder as the jar. But when I run the script from the local machine with ssh vm_name 'sudo ./start_script.sh', the process starts up, but it blocks and the output is directed to the local terminal.
Is there a way to achieve this?
EDIT: I need to run the script as root and also pass parameters to it; I added a placeholder path for JAVA_HOME to avoid confusion.
You need to tell ssh to allocate a terminal for the connection.
ssh vm_name -t 'sudo ./start_script.sh'
It's likely recognizing that you are not on a terminal and altering behavior accordingly.
Same issue related here:
https://serverfault.com/questions/955268/what-is-the-difference-between-running-a-command-in-ssh-shell-manually-vs-runnin
Try
$ ssh user@host bash -l ./build.sh
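The -l makes the remote bash behave as a login shell, so the user's profile (and therefore things like JAVA_HOME and PATH) is sourced before the script runs. Adapted to the script in the question, a rough sketch might look like the following (the -t from the other answer is kept so sudo can still prompt; the extra arguments are placeholders):

ssh -t vm_name 'bash -l -c "sudo ./start_script.sh arg1 arg2"'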
I want to execute the DOMO CLI jar file from a shell script. The jar itself has some commands which I want to call after starting the main jar. The problem I am facing is that after the script executes the jar, I am not able to pass the additional commands to be executed inside that jar from the shell script. It just stops after calling the jar and doesn't take further commands. Can anyone please help? Below is the code I am calling from a shell script:
java -jar XX.jar
The commands that follow the jar are as below. Once we are inside the jar's prompt, we have to execute these commands one after the other, and I am not sure how to achieve this through a shell script:
connect -s X.domo.com -t Ysssss
upload-dataset -a -i dhdhdhdh -f /prehdfs/dev/comres/herm/data/yyyy.csv
Did you try using pipes and input redirection?
When you execute the above, it runs the jar under a child shell.
You may try the format below if you haven't tried it already:
$ (echo "connect -s X.domo.com -t Ysssss" && cat) | java -jar XX.jar
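If the CLI does read its commands from standard input (which is what the pipe above assumes), a here-document is another way to feed it a fixed sequence of commands, for example:

java -jar XX.jar <<'EOF'
connect -s X.domo.com -t Ysssss
upload-dataset -a -i dhdhdhdh -f /prehdfs/dev/comres/herm/data/yyyy.csv
quit
EOF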
If you can reference a file in your use case, you could put your commands in a file.
File: list_my_datasets.domo
connect -t ... -s ...
list-dataset
quit
then run the command:
java -jar domoUtil.jar -script list_my_datasets.domo > datasets
I wanted the data from it, so I piped the output to a file (where I had to grep for what I wanted); you would omit that, I believe, unless there is some output you'd want to check. I haven't tested this with the upload command, but I would hope any commands substituted or added to the example work similarly.
Domo docs on scripting
I'm currently developing a simple deployment script for VMs running Ubuntu.
All these machines are supposed to run a Java application provided as a jar.
This is the relevant part of the script, which installs Java, copies a jar from the local machine to the remote machine, and then starts the application:
ssh ubuntu#$line -i ~/.ssh/key.pem -o StrictHostKeyChecking=no <java_installation.sh
scp -i ~/.ssh/key.pem $JARFILE ubuntu#$line:~/storagenode.jar
ssh ubuntu#$line -i ~/.ssh/key.pem <java_start_jar.sh
The installation via the java_installation.sh script succeeds, and so does the scp command.
The problem occurs when trying to execute the commands in java_start_jar.sh via ssh.
java_start_jar.sh:
#!/bin/sh
# this script starts a jar file and creates a shellscript which can be used to stop the execution.
nohup java -jar ~/storagenode.jar & > ~/storagenode.log
pId=$!
echo "kill $pId" > ~/stop_storagenode.sh
chmod u+x ~/stop_storagenode.sh
The script starts the execution of the .jar file, but then simply blocks.
ssh does not return; the rest of the local code is only executed after manually closing the connection.
Any ideas why the java application is not properly running as a background process?
Move the & to the end of the line:
#!/bin/sh
# this script starts a jar file and creates a shellscript which can be used to stop the execution.
nohup java -jar ~/storagenode.jar > ~/storagenode.log &
pId=$!
echo "kill $pId" > ~/stop_storagenode.sh
chmod u+x ~/stop_storagenode.sh
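Note that with this version stderr is still attached to the SSH channel, which can both keep stack traces out of the log and delay ssh from returning. If that matters in your setup, a common variant (an optional refinement, not something the fix above requires) also redirects stderr and stdin:

nohup java -jar ~/storagenode.jar > ~/storagenode.log 2>&1 < /dev/null &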
I have a Java program that spawns a bash script that calls another script. In that second script, I'm finding that the $HOME variable is not set. Here's the gist of it:
In Java:
Process p = new ProcessBuilder("bash", "-l", "/foo/a.sh").start();
// code to actually execute p
/foo/a.sh:
#!/bin/bash
/foo/b.sh
/foo/b.sh:
#!/bin/bash
echo "HOME=$HOME"
This echoes "HOME=". The eventual problem is that $HOME/bin is supposed to be added to my PATH in ~/.profile, but since that's not happening, a bunch of custom executables aren't being made accessible.
I worked around it by doing:
if [ -d ~/bin ] ; then
PATH=~/bin:"$PATH"
fi
And that works fine. But I guess I just want to understand why $HOME wasn't being set. It seems like $HOME and ~ should be largely equivalent, no? There's probably something I'm fundamentally missing about how this environment is getting set up.
I am running Ubuntu 12.04.5, if that makes a difference.
The evidence suggests that HOME is missing from the environment in which the Java app is running. Assuming that the app doesn't explicitly unset HOME, the most likely reason is that the app is being started from some context other than a login by the user the app is running as.
It's correct that ~ and $HOME are similar. If HOME is present in the environment, even if it is set to the empty string, ~ will be replaced with $HOME. However, if HOME is not present in the environment, bash will attempt to find the home directory for the currently logged in user, and use that for ~.
e.g.
$ bash -c 'echo ~'
/home/rici
$ HOME='Hello, world!' bash -c 'echo ~'
Hello, world!
$ HOME= bash -c 'echo ~'
$ (unset HOME; bash -c 'echo ~';)
/home/rici
Since your workaround requires the equivalent of the last scenario, I conclude that HOME has not been set, or has been unset.
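If you wanted the scripts themselves to be robust against a missing HOME (instead of, or in addition to, the ~/bin workaround), one possible sketch is to reconstruct it from the passwd database at the top of /foo/a.sh; getent and id are standard tools on Ubuntu, but this is only an illustration, not something the explanation above requires:

#!/bin/bash
# restore HOME if the parent process didn't pass it along
if [ -z "${HOME:-}" ]; then
    export HOME="$(getent passwd "$(id -un)" | cut -d: -f6)"
fi
/foo/b.sh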