Difference between running a shell command directly and from a shell script (.sh file) - Java

I have a Java program in which I am reading from stdin:
BufferedInputStream bis = new BufferedInputStream(System.in);
byte[] b = new byte[1];
int cmd = bis.read(b);
System.out.println("Read command: " + new String(b));
And a shell script to start/stop this program:
'start')
if [ -p myfifo ]; then
rm myfifo
rm myfifo-cat-pid
fi
mkfifo myfifo
cat > myfifo &
echo $! > myfifo-cat-pid
java -jar lib/myJar.jar >/dev/null 2>&1 0<myfifo &
echo `date +%D-%T` $! >> process.pid
echo "Started process: "$!
;;
'stop')
echo 0 > myfifo
echo "Stopped process: "
rm myfifo
;;
When I run the commands from the start branch one by one, the program waits until I echo something into the fifo. But when I run them from the .sh file, it immediately reads from stdin. I don't understand what the difference is between running a command directly at the prompt and putting it in a .sh file and running that.

The difference is not on the Java side; it comes from the fact that your shell handles job control differently when launching a script. From man bash:
JOB CONTROL
Job control refers to the ability to selectively stop (suspend) the
execution of processes and continue (resume) their execution at a later
point. A user typically employs this facility via an interactive
interface supplied jointly by the operating system kernel's terminal
driver and bash.
As explained here, by default job control is disabled in a script.
When cat > myfifo & is executed in an interactive shell, it remains in "Stopped" mode, waiting to be put in the foreground again (with fg). When launched from a script, instead, job control is disabled, so as soon as cat tries to read from the (detached) terminal it exits, closing the pipe (and your Java process reads EOF).
If you use set -m at the top of your shell script (thus forcefully enabling job control), you should see consistent behavior.
set [+abefhkmnptuvxBCEHPT] [+o option-name] [arg ...]
-m Monitor mode. Job control is enabled. This option is on by
default for interactive shells on systems that support it
(see JOB CONTROL above). Background processes run in a sep‐
arate process group and a line containing their exit status
is printed upon their completion.
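As a quick illustration (a separate sketch, not the original fifo script): with set -m, a background job is placed in its own process group, which you can observe directly:

```shell
#!/bin/sh
# Sketch: observe the effect of 'set -m' in a non-interactive script.
# With job control enabled, the background 'sleep' gets its own process
# group instead of sharing the script's.
set -m
sleep 1 &
child=$!
script_pgid=$(ps -o pgid= -p $$ | tr -d ' ')
child_pgid=$(ps -o pgid= -p "$child" | tr -d ' ')
echo "script pgid: $script_pgid"
echo "child pgid:  $child_pgid"
if [ "$script_pgid" != "$child_pgid" ]; then
    echo "job control is enabled: separate process groups"
fi
wait
```

With the set -m line removed, the two pgids come out equal, which is the disabled-job-control behavior the original script runs into.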


Stop Java from killing Bash script that started it

I've spent the past couple of days working on this, and at this point I am thoroughly stuck. I have a Java program that must not be run as a service. This program must also be capable of updating itself when a new file is provided for updating.
As a result, I have a script, started when Linux boots, that starts the Java application and then checks every 5 seconds whether the application has terminated. If it has, the script checks whether there is an update and then starts the appropriate jar.
This is the code for that script:
#!/bin/bash
JAVA_HOME=/usr/lib/jvm/java-16-openjdk-amd64
WORKING_DIR=~/Data
LOG=$WORKING_DIR/logs/Bash.log
rm $LOG
echo "Script started" > $LOG
while true; do
source $WORKING_DIR/Server.pid
if ! kill -0 $AppPID; then
echo "App must be started" >> $LOG
source $WORKING_DIR/UpdateStatus
if [ "$UpdateReady" -eq "1" ]; then
echo "Moving files for update" >> $LOG
mv $WORKING_DIR/Server.jar $WORKING_DIR/old.jar
mv $WORKING_DIR/new.jar $WORKING_DIR/Server.jar
fi
nohup ${JAVA_HOME}/bin/java -jar ${WORKING_DIR}/Server.jar &
echo AppPID="$!" > $WORKING_DIR/Server.pid
echo "Server started" >> $LOG
if [ "$UpdateReady" -eq "1" ]; then
echo "Checking for safe update" >> $LOG
source $WORKING_DIR/Server.pid
echo UpdateReady="0" > $WORKING_DIR/UpdateStatus
sleep 5;
if kill -0 $AppPID; then
echo "Update successful" >> $LOG
rm $WORKING_DIR/old.jar
else
echo "Update failed, restarting old jar" >> $LOG
rm $WORKING_DIR/Server.jar
mv $WORKING_DIR/old.jar $WORKING_DIR/Server.jar
nohup ${JAVA_HOME}/bin/java -jar ${WORKING_DIR}/Server.jar &
echo AppPID="$!" > $WORKING_DIR/Server.pid
fi
fi
echo "Server start process finished, going into idle state" >> $LOG
fi
sleep 5
echo "5 seconds idle passed" >> $LOG
done
To initialize the update, I have tried a couple of different things, both with the same result. First I set UpdateReady="1" through Java, then used exit(0);. I have also tried having Java call a Bash script which sets UpdateReady="1" and uses kill $AppPID to shut down the Java application.
The result is that both the Java application and the Bash script stop executing, causing the update and restart to fail! I have looked through a significant number of Stack Overflow questions and answers, finding things such as nohup, all to no avail.
I will once again state that the Java application cannot be run as a service. No packages other than what is included in Java or made by Apache can be used, and no programs can be installed to Linux. I would prefer to solve the problem with Bash.
Upon testing some things mentioned in the comments, I realized I may have omitted something that turns out to be important. While all subsequent runs of the startup script are run by the startup applications manager, the initial run is not.
The install is taken care of remotely with an SSH connection sending the following command string:
cd /home/UserName; unzip -u -o Server.zip; chmod 777 install.sh; bash install.sh &; exit
install.sh is as follows:
#!/bin/bash
INSTALL_DIR=~/Data
mkdir ${INSTALL_DIR}
mkdir ${INSTALL_DIR}/logs
mkdir ${INSTALL_DIR}/data
cp Server.jar ${INSTALL_DIR}/Server.jar
cp service-start.sh ${INSTALL_DIR}/service-start.sh
chmod 777 ${INSTALL_DIR}/service-start.sh
rm Server.jar
rm service-start.sh
rm Server.zip
nohup bash $INSTALL_DIR/service-start.sh &
Upon rebooting my machine, I noticed that this problem goes away! This means there must be a problem with the initial setup script. When the ssh command is run, it seems to hold on and not actually let go after the bash install.sh &. I have tried putting nohup at the beginning of the line, but then the entire line will not run, for reasons I have not been able to determine.
I would prefer not to force the user to restart after install, and I can't seem to find any way to make the Startup Application manager start an application at any time other than startup.
Well, after a lot of searching and some prompting from the comments, I found that the issue lay with how the program was initially being started.
As mentioned in the update, the first run is always started over an ssh connection. I knew there was a slight problem with this connection, as it seemed to be held open no matter what I did. It turns out this was what caused the Bash instance and the Java instance to remain attached.
The solution for this problem was found here: jsch ChannelExec run a .sh script with nohup "lose" some commands
After managing to get the initial setup to start with nohup properly, the issue has gone away.
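For reference, the general shape of such a detached launch (host and paths are placeholders, and sleep stands in for the long-running script) is to background the command with nohup and redirect all three standard streams, so the ssh channel has nothing left to hold on to:

```shell
# Local stand-in for the remote launch: the inner shell backgrounds the
# long-running job with nohup and detaches stdin, stdout, and stderr, so
# the outer shell (standing in for the ssh session) returns immediately.
# Over ssh this would look like (hypothetical host/path):
#   ssh user@host 'nohup bash install.sh > install.log 2>&1 < /dev/null &'
sh -c 'nohup sleep 5 > /dev/null 2>&1 < /dev/null & echo "job launched"'
echo "outer shell returned"
```

Because the background job no longer holds any of the session's streams, the outer command prints both lines at once instead of waiting five seconds.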

Shell scripts run one after another

I have 5 shell scripts. Each has a java command. The previous job's output is the input to the next job.
I created a superScript.sh
# mail - to inform beginning
sh script1.sh;
sh script2.sh;
sh script3.sh;
sh script4.sh;
sh script5.sh;
# mail - to inform end
Sample script1.sh
cd toBaseDirectory;
java -cp /path/to/application.jar main.class parameter
But all the jobs are started at the same time. How can I make this sequential?
Try running the java commands like this:
java -cp /path/to/application.jar main.class parameter & wait
If you want to run the second command only if the first exited successfully, join them with &&:
command1 && command2
A simple example:
~$ cat abc.sh
#!/usr/bin/env bash
echo "hi"
~$ cat pqr.sh
#!/usr/bin/env bash
echo "batman say : $1"
pqr.sh is executed only if abc.sh executes successfully:
~$ retval=$(./abc.sh) && result=$(./pqr.sh "$retval") && echo "$result"
batman say : hi
You can try a similar approach with your java command execution in the shell scripts.
Note:
To execute the scripts sequentially, waiting for each to finish before the next one starts, you can use ;
command1; command2
wait waits for background jobs to finish
You can also choose to run the second command only if the first exited successfully. To do so, join them with &&
cmd1 && cmd2
But in repetitive cases like this I recommend using a simple loop in the body of your script:
for n in {1..5} ; do sh script${n}.sh ; done
The loop not only runs them in order but is also easier to tweak and reuse when needed, thanks to brace expansion.
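A runnable sketch of that loop, using throwaway stand-in scripts (your real script1.sh through script5.sh would already exist); the optional || break stops the chain on the first failure:

```shell
#!/bin/sh
# Create five stand-in scripts just for this demo; in the real case
# script1.sh .. script5.sh already exist. (In bash you can write the
# list as {1..5} using brace expansion, as in the answer above.)
for n in 1 2 3 4 5; do
    printf 'echo "job %s done"\n' "$n" > "script${n}.sh"
done

# Run them strictly in order: each starts only after the previous finished.
# The optional '|| break' aborts the sequence if a script fails.
for n in 1 2 3 4 5; do
    sh "script${n}.sh" || break
done
```

This prints "job 1 done" through "job 5 done" in order, since each iteration of the loop blocks until the current script exits.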

Kubectl Java client returns exit code 3 when using exec

I am writing a little backup program for an application. It will run as a CronJob within my k8s cluster. At one point, it should trigger a mysqldump of the database inside another pod.
My code:
Exec exec = new Exec();
Process process = exec.exec(
"default",
"database-pod",
new String[]{"sh", "-c", "mysqldump -u {{user}} --p={{password}} schema > dbdump.sql"},
false,
tty
);
process.waitFor();
process.destroy();
int exitValue = process.exitValue();
process.exitValue() always returns 3, and the mysqldump file is created but does not contain any SQL statements.
Does somebody have a clue what I am doing wrong?
The base image of my backup program is gcr.io/distroless/java:11, if that helps, and it was built using Jib.
So after a bit of reading I figured out what was going wrong: instead of --p I should have used --password all along.
When kubectl exec (or oc exec) returns a non-zero exit code, you should manually connect to the pod and execute the same command directly to inspect what's wrong:
[my-host]$ kubectl exec ${pod-id} -n ${namespace} bash -ti
[root@my-pod]# command...
# print command exit code (should be the same as before)
[root@my-pod]# echo $?

Run java application as background process via ssh

I'm currently developing a simple deployment script for VMs running Ubuntu.
All these machines are supposed to run a java application provided as a jar.
This is the relevant part of the script; it installs Java, copies the jar from the local machine to the remote machine, and then starts the application:
ssh ubuntu@$line -i ~/.ssh/key.pem -o StrictHostKeyChecking=no <java_installation.sh
scp -i ~/.ssh/key.pem $JARFILE ubuntu@$line:~/storagenode.jar
ssh ubuntu@$line -i ~/.ssh/key.pem <java_start_jar.sh
The installation via the java_installation.sh script succeeds, and so does the scp command.
The problem occurs when trying to execute the commands in java_start_jar.sh via ssh.
java_start_jar.sh:
#!/bin/sh
# this script starts a jar file and creates a shellscript which can be used to stop the execution.
nohup java -jar ~/storagenode.jar & > ~/storagenode.log
pId=$!
echo "kill $pId" > ~/stop_storagenode.sh
chmod u+x ~/stop_storagenode.sh
The script starts the execution of the .jar file, but then simply blocks. Ssh does not return; the rest of the local script is only executed after the connection is closed manually.
Any idea why the java application is not properly running as a background process?
Move the & to the end of the line:
#!/bin/sh
# this script starts a jar file and creates a shellscript which can be used to stop the execution.
nohup java -jar ~/storagenode.jar > ~/storagenode.log &
pId=$!
echo "kill $pId" > ~/stop_storagenode.sh
chmod u+x ~/stop_storagenode.sh
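The reason the placement matters: in cmd & > file, the & terminates the command, so the shell backgrounds cmd unredirected and then runs a separate, empty command whose only effect is truncating file. A small demonstration, with echo standing in for the java command:

```shell
# 'cmd & > file': cmd is backgrounded WITHOUT redirection; the '> a.log'
# belongs to an empty command that merely truncates/creates a.log.
echo hello & > a.log
wait
# 'cmd > file &': cmd's output is redirected, then the whole command is
# backgrounded -- this is the intended form.
echo hello > b.log &
wait
wc -c < a.log   # 0 (a.log is empty)
cat b.log       # hello
```

So in the original script, nohup java ... & > ~/storagenode.log left the jar attached to the ssh session's stdout, which is why the session blocked.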

Running a process in background, linux ubuntu

I want to run a .jar file, so I put "&" at the very end of the command. (There is actually no need to log the output; I only want to be able to disconnect from the remote server which hosts my Java program. The program itself saves the result after finishing.)
I do as follows, but it doesn't run in the background and it keeps me waiting:
java -Xmx72G -cp //home/arian/rapidminer/lib/rapidminer.jar com.rapidminer.RapidMinerCommandLine -f //home/arian/RMRepository/testRemote.rmp &
Any idea that why it doesn't work ?
Thanks ,
Arian
I don't know why it wouldn't work. It really should, in most shells.
Anyway, if you intend to disconnect you'll usually find that just putting the job in the background is not enough: the disconnect will close the console (which will break many programs, alone) and send a SIGHUP signal (which will cause just about any program to exit).
You should consider using nohup to run the program (together with the &). Alternatively, if you ever need to come back and interact with the program later, then screen or byobu might fit the bill better. Yet another alternative might be to add the task to your crontab.
What do you mean by "it keeps me waiting"? Does RapidMinerCommandLine by any chance read from stdin or another stream?
If you want to run a process in the background and disconnect from the tty session, you should use nohup, e.g.:
nohup java -Xmx.... com.rapidminer.RapidMinerCommandLine &
(Do remember the & at the end!)
You may add 1> /dev/null before the & to discard all stdout.
You could also consider the screen utility, which allows you to dis- and reconnect to the session, but that's more usable with (semi-)interactive sessions.
(Also, that's quite a hefty max heap size you're specifying?)
Cheers,
You can use JSVC, a utility for daemonizing Java applications:
http://commons.apache.org/daemon/jsvc.html
It will give you a pid file, useful for creating a real start/stop script.
EDIT: Another solution that might help.
Here is a very old start/stop script I wrote for Slackware Linux on embedded systems:
#!/bin/sh
application_start() {
cd /usr/local/YOURHOME
/usr/lib/java/bin/java \
-Xmx72G \
-classpath //home/arian/rapidminer/lib/rapidminer.jar \
com.rapidminer.RapidMinerCommandLine \
-f //home/arian/RMRepository/testRemote.rmp &
echo -n "Starting App daemon: $CMDLINE"
ps -Ao pid,command | grep java | grep com.rapidminer.RapidMinerCommandLine | awk '{print $1}' > /var/run/app.pid
echo
}
application_stop() {
echo -n "Stopping App daemon..."
kill `cat /var/run/app.pid`
echo
sleep 1
rm -f /var/run/app.pid
}
application_restart() {
application_stop
sleep 1
application_start
}
case "$1" in
'start')
application_start
;;
'stop')
application_stop
;;
'restart')
application_restart
;;
*)
echo "usage $0 start|stop|restart"
esac
I agree that it should work as it is, but I had problems with Java running in the background too. My solution was to use the screen utility (which is installed by default in most Linux distributions), where you can open a shell from which you can detach. If I remember correctly the commands are something like this (but there is a good manpage too):
screen -S myCustomName # runs a new shell called myCustomName
CTRL + A, then D # detach from the current screen instance
screen -ls # list active screen instances
screen -r myCustomName # reattach to the screen instance.
Hope it will solve your problem.
