Usage of sshexec task in Jenkins build doesn't stop its execution - java

I have an Ant task in the Jenkins Ant Execution Plugin, as a post-build step, to remotely run a shell script on one of our servers. The shell script starts a Java process in the background. When I execute the shell script on the server directly, it starts the Java process in the background and returns. When I run it from Jenkins via the sshexec task, the shell script runs, but it never returns and the Jenkins build waits.
Later, when I added the timeout attribute to the sshexec task, it timed out after the given number of milliseconds, but the Jenkins build was marked as failed. How do I make the sshexec task return cleanly from the shell script execution?
Here is my sshexec task:
<sshexec host="${deploy.host}" username="${deploy.username}" password="${deploy.password}" command=". /etc/profile; cd ${deploy.path}; sh start.sh i1" trust="true" timeout="10000" />
The start.sh file is as given:
nohup java -Xms512m -Xmx1024m -cp calculation.jar com.tes.StartCalculation $1 &
echo $! > calculation-$1-java.pid

It looks like the ssh-executed job is not fully daemonized. Starting it with nohup is not sufficient in many cases.
See the related discussion (in a different context):
The issue is that you are not closing your file descriptors when you
push something into the background. The & is fine when you are in a
shell, but is not enough when you want to disconnect and leave a
process running, you need the process to disconnect from the shell.
.... The fix is to correct the script.
If someone writes a naive service script that does not properly detach
from the terminal, I want to know the first time that that script is
used in a deployment - the SCM changes will enable the breaking change
to be quickly identified.
It is wrong to hide the problem to enable incorrect code to be
released to production - and I would not be happy if the first I knew
about it was when a production system administrator complained.
If this is the same problem, you need to daemonize the script.
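A minimal sketch of a fully detached start.sh, reusing the jar and class names from the question (setsid is the util-linux utility; adjust paths as needed). The key points are redirecting all three standard streams away from the SSH session and starting the JVM in its own session, so that sshexec sees the connection's file descriptors closed and can return:

```shell
#!/bin/sh
# start.sh (sketch): detach the JVM from the SSH session entirely.
# Redirecting stdin/stdout/stderr is what lets sshexec return; setsid
# puts the JVM in a new session so it survives the disconnect.
nohup setsid java -Xms512m -Xmx1024m -cp calculation.jar \
    com.tes.StartCalculation "$1" < /dev/null > "calculation-$1.log" 2>&1 &
echo $! > "calculation-$1-java.pid"
```

With the streams redirected like this, the timeout attribute on sshexec should no longer be needed.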

Related

Spawning process in background on Jenkins - job that won't stay in queue

I want to make a job in Jenkins that starts a server (MockServer on WireMock).
The server is launched from a *.jar file, from a terminal, like this:
java -jar serverLaunch.jar
It takes over my console. To avoid that, I modify this and do:
java -jar serverLaunch.jar &>/dev/null &
And that works for me on my local PC. Now I want to move it to Jenkins.
If I try to do this from "Shell command" block in Jenkins Job then:
a) java -jar serverLaunch.jar
The task gets locked in the queue in my Jenkins, which I don't want, but the server starts and works.
b) java -jar serverLaunch.jar &>/dev/null &
The job ends with success, but my server is not alive.
I have wrapped this command also in .sh script and .rb script. Any idea how to make it work?
I've tried this:
https://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build
And then in Jenkins "Shell script":
daemonize -E BUILD_ID=dontKillMe /bin/bash launch.sh
It also passes, but the server is not alive.
I had to check "Inject environment variables to the build process" and add:
BUILD_ID=dontKillMe
Now it is working.
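For reference, the same effect can be had inline in the shell step itself (a sketch, reusing serverLaunch.jar from the question): setting BUILD_ID on the command line marks the process for Jenkins' ProcessTreeKiller just like injecting it via the plugin does, and nohup plus the redirections keep the server alive after the step exits:

```shell
# Jenkins "Execute shell" step (sketch): BUILD_ID=dontKillMe tells the
# ProcessTreeKiller to leave this process running after the build ends.
BUILD_ID=dontKillMe nohup java -jar serverLaunch.jar > server.log 2>&1 &
echo "server started, pid $!"
```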
Try using nohup e.g.:
nohup java -jar serverLaunch.jar &
That should prevent the process from being terminated when the parent shell process exits (which I suspect is your problem).
Another effective approach would be to add a post-build action that executes a shell spawning the server.

Java JAR file does not execute in startup script in Ubuntu 14.04

The following process normally works for my startup scripts. However, when I introduce a command to execute a JAR file, it does not work. This script works while I am logged in. However, it does not work as a startup script.
In /etc/init.d I create a bash script (test.sh) with the following contents:
#!/bin/bash
pw=$(curl http://169.254.169.254/latest/meta-data/instance-id)
pwh=$(/usr/bin/java -jar PWH.jar $pw &)
echo $pwh > test.txt
Make the script executable.
In /etc/rc.local, I add the following line:
sh /etc/init.d/test.sh
Notes:
I make a reference to the script in /etc/rc.local, because this script needs to run last after all services have started.
Please do not ask me to change the process (i.e., create script in /etc/init.d/ and reference it from /etc/rc.local), because it works for my other startup scripts.
I have tried adding nohup in front of java command, and it still did not work.
Thanks
As written, there is insufficient information to say what is going wrong. There are too many possibilities to enumerate.
Try running those commands one at a time in an interactive shell. The java command is probably writing something to standard error, and that will give you some clues.
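One concrete thing to check in the script above: pwh=$(/usr/bin/java -jar PWH.jar $pw &) does not actually background the JVM from the script's point of view. Command substitution keeps reading until the command's stdout closes, and it captures the command's output, not a PID. A small demonstration, with sleep standing in for the JVM:

```shell
#!/bin/bash
# $(cmd &) still blocks: the backgrounded command inherits the pipe that
# the substitution reads from, so the read only ends when cmd exits.
start=$(date +%s)
out=$(sleep 2 &)            # blocks for ~2 seconds despite the &
end=$(date +%s)
echo "substitution blocked for $((end - start))s"

# To background for real: redirect stdout away and capture the PID via $!
sleep 2 > /dev/null 2>&1 &
echo "backgrounded pid: $!"
```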

How to start Play using Bamboo without having the deployment continue forever?

We have created a Play application in Java and are deploying it to a dev-environment virtual machine using Atlassian Bamboo's SSH task: cd path/to/application/directory && start "" play run. This goes to the proper location, launches a new console, and starts play: the server is started successfully and we can access the site with no issues.
The problem is that the deployment task in Bamboo never stops because it is still monitoring the console where play run was called -- in the Bamboo status, we are seeing things like Deploying for 7,565 minutes. We thought adding the start "" would fix that issue, but in Bamboo it is the same as just doing the play run. Also, when we need to redeploy, we must first stop the deployment in process, and manually relaunch it.
Two questions:
How can we start the server from Bamboo in such a way that the deployment plan finishes?
How can we stop/kill the previous server from Bamboo at the beginning of the next deployment?
Bamboo is pretty bad with background tasks. We had a similar problem; eventually we wrote a bash script that ran the server in the background:
start.sh > /dev/null 2>&1 &
I'm not at all familiar with the WAMP stack or the Play CLI, but try running it as a PowerShell command, which should run it and exit immediately:
powershell -command "& <your command here>"
or failing that
powershell -command "& start-job { <your command here>} "
For Windows, you can run background tasks using a Groovy script.
Groovy can execute an external program as a process:
"/bin/application.exe".execute()
And then you can check that application is running:
println "tasklist /fi \"imagename eq application.exe\"".execute().text

how to use qsub to run a java program

I would like to submit a job to the HPC cluster; the job runs a Java application.
I edited the pbs_script file as follows:
#!/bin/sh
#PBS -q serial
#PBS -l nodes=1:ppn=4
module load java-jdk/1.7.0-17
java myjavapp
I submitted the job
$qsub pbs_script
However, the job returned an error: could not find or load main class myjava. But I can use the same command to run the Java program from the command line. What is the problem?
Problems such as this one are almost always a difference in the environment of where the job executes versus where you're executing it on the command line. To track it down you usually only need to check that everything is available on all of the nodes in the cluster and that the environment is configured such that the shell which runs the job will find what you're looking for.
Usually this is due to not finding the class file. In PBS, the submit script starts execution from the user's home directory, not from where you submitted the job. It is often useful to include the following line:
cd $PBS_O_WORKDIR
The command changes the directory on the execute node to the directory that the job was submitted from. Therefore, if you are able to run java myjavapp from the directory that you are submitting from (issuing the qsub), then the $PBS_O_WORKDIR line should work.
Your final submit file would look something like:
#!/bin/sh
#PBS -q serial
#PBS -l nodes=1:ppn=4
module load java-jdk/1.7.0-17
cd $PBS_O_WORKDIR
java myjavapp
We faced the same problem today.
We could run the Java programs on both the master and the nodes, but when we submitted the script through PBS it failed, with the same error message as yours.
We fixed the problem by giving the java command an explicit classpath:
java -classpath /path/to/class/files myjavapp
Then the qsub-submitted Java program could run on any node of the server. : )
Bo

Checking if java code is running and attempt to restart if not (unix)

I have some java code that is running continuously on a raspberry pi (from the terminal) and listening to a twitter stream and saving data to disk/usb.
I would like to know the preferred method of detecting whether the program is still running, so that I can take appropriate action and attempt to restart the app.
I hope that in this manner I could detect the program has failed, send an email to notify me and attempt to rerun the code. Would running this in a server environment be the best way to go?
Have a look at the forever project. If you have npm installed you can use that to install the forever package with the -g (for global install) parameter:
npm install forever -g
Then use the start argument to start the script. In your case this could be a bash file (.sh) with the required java commands.
forever start name-of-script-here
If the script fails (System.exit in Java, or any fatal error), it will be restarted by forever. You can also get a list of all the running scripts managed by forever with:
forever list
In Unix, let a parent process create the child Java process and monitor it. If the child terminates, the parent can restart it.
The Unix fork returns the child's pid to the parent.
Using this technique: Tracking the death of a child process, the parent can monitor the child's death.
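A plain-shell alternative to a fork()-based parent is a small supervisor loop that reruns the program whenever it exits. A bounded sketch, where "false" stands in for the java command and the restart cap only exists to keep the demo finite:

```shell
#!/bin/sh
# Supervisor sketch: rerun the process whenever it exits. "false" stands
# in for e.g. "java -jar listener.jar". A real supervisor would loop
# without a cap, and could send a notification mail on each restart.
MAX_RESTARTS=3
restarts=0
while [ "$restarts" -lt "$MAX_RESTARTS" ]; do
    false || true              # monitored command; "false" simulates a crash
    restarts=$((restarts + 1))
    echo "process exited, restart #$restarts"
    # sleep 10                 # optional back-off before restarting
done
```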
