I want to run a jar as a background process on a remote machine over an ssh connection. There is a bash script on the remote machine to execute the jar:
#!/bin/sh
export JAVA_HOME=/location/of/java/
export PATH=$JAVA_HOME/bin:$PATH
nohup java -jar jar_name.jar config.properties &
If I execute the above script directly on the remote machine (sudo ./start_script.sh), the jar is started as a background process and stdout is directed to nohup.out in the same folder as the jar. But when I run the script from my local machine with ssh vm_name 'sudo ./start_script.sh', the process starts up, but the ssh command blocks and the output is directed to my local terminal.
Is there a way to achieve this?
EDIT: I need to run the script as root and also pass parameters to it. I have added a placeholder path for JAVA_HOME to avoid confusion.
You need to tell ssh to allocate a terminal for the connection:
ssh vm_name -t 'sudo ./start_script.sh'
It's likely recognizing that you are not on a terminal and altering behavior accordingly.
The same issue is discussed here:
https://serverfault.com/questions/955268/what-is-the-difference-between-running-a-command-in-ssh-shell-manually-vs-runnin
Try
$ ssh user@host bash -l ./build.sh
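Applying the same ideas to the question's setup, a rough sketch (the script arguments are placeholders, not from the original post) that allocates a terminal, runs the script as root, and passes parameters would be:

ssh -t vm_name 'sudo ./start_script.sh arg1 arg2'

If ssh should return immediately instead of staying attached to the job, the nohup line inside the script can also redirect all of its streams, for example nohup java -jar jar_name.jar config.properties > nohup.out 2>&1 < /dev/null &, so nothing stays tied to the ssh session.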
I am running the amazonlinux:2 docker image directly from dockerhub and installing the corretto-17 JDK with the following command:
yum install -y git java-17-amazon-corretto-devel
Note: I am not using a custom Dockerfile, I do not control it and I can't change it.
When I then try to run my gradlew task, it fails because there's no JAVA_HOME set.
So I set it by:
echo "export JAVA_HOME='/usr/lib/jvm/java-17-amazon-corretto.x86_64'" >> /root/.bashrc
If I manually connect a terminal to the container, the .bashrc works fine and gradlew will run.
But when I run commands from outside the container via something like:
docker exec kopibuild /bin/bash -c "cd the-project-code && ./gradlew build"
The .bashrc is not loaded so JAVA_HOME is not set and gradlew fails.
My workaround is to add the interactive flag -i to the bash command, and then it all works, but there are warnings in the logs about "cannot set terminal process group (-1): Inappropriate ioctl for device".
I also tried pointing BASH_ENV at the .bashrc:
docker exec kopibuild /bin/bash -c "cd the-project-code && BASH_ENV=/root/.bashrc ./gradlew build"
But it didn't seem to do anything.
What's the right way to set environment variables for Amazon Linux so they will exist in non-interactive shell invocations?
After digging around on the Googles, I believe there is no standard Linux way to set an environment variable for non-interactive shells.
But there is a Docker way to answer the question. On the original docker create of the container from the amazonlinux:2 image, specify the environment variable via -e JAVA_HOME=/usr/lib/jvm/java-17-amazon-corretto.x86_64. This stores the environment variable in the container's Docker metadata, and it will be available in all execution contexts, including non-interactive shells invoked directly via docker exec (without having to specify it explicitly for every exec command).
As far as I know, this is the same as what the ENV instruction in a Dockerfile does.
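As an illustration, a minimal sketch (assuming a container named kopibuild, as in the question, and a long-running placeholder command) of baking the variable in at creation time so it is visible to every docker exec:

docker run -d --name kopibuild \
  -e JAVA_HOME=/usr/lib/jvm/java-17-amazon-corretto.x86_64 \
  amazonlinux:2 sleep infinity

docker exec kopibuild /bin/bash -c 'echo $JAVA_HOME'
# prints /usr/lib/jvm/java-17-amazon-corretto.x86_64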
I want to execute a jar file of the DOMO CLI from a shell script. The jar itself has some functions which I want to call after I start it. The problem I am facing is that after the script executes the jar, I am not able to pass the additional commands to be executed inside that jar through the shell script. It just stops after calling the jar and doesn't take further commands. Can anyone please help? Below is the code I am calling from a shell script.
java -jar XX.jar
The commands are as below and have to follow the above jar. So once we enter the above jar, we have to execute the commands below one after the other. I am not sure how to achieve this through a shell script.
connect -s X.domo.com -t Ysssss
upload-dataset -a -i dhdhdhdh -f /prehdfs/dev/comres/herm/data/yyyy.csv
Did you try using pipes and input redirection?
When you execute the jar as above, it runs under a child shell and reads its commands from stdin.
You may try the format below if you haven't already:
$ (echo "connect -s X.domo.com -t Ysssss" && cat) | java -jar XX.jar
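If several commands need to be sent, a here-document works the same way. A sketch using the commands from the question (and assuming the jar keeps reading commands from stdin until quit):

java -jar XX.jar <<'EOF'
connect -s X.domo.com -t Ysssss
upload-dataset -a -i dhdhdhdh -f /prehdfs/dev/comres/herm/data/yyyy.csv
quit
EOF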
If you can reference a file in your use case, you could put your commands in a file.
File: list_my_datasets.domo
connect -t ... -s ...
list-dataset
quit
then run the command:
java -jar domoUtil.jar -script list_my_datasets.domo > datasets
I wanted the data from it, so I piped the output to a file (where I had to grep for what I wanted), but you could omit that unless there is some output you'd want to check. I haven't tested this with the upload command, but I would expect any commands substituted or added to the example to work similarly.
Domo docs on scripting
I am new to Docker.
I have pre-cooked a Docker image (updated and installed Java and other dependencies) and stored it in my GitHub repo.
I have stored a simple hello world Spring Boot application on an AWS S3 bucket.
I want my Dockerfile to:
1. Get the Docker image from my GitHub repo
2. Do an update patch
3. Set my working directory to /home/ubuntu
4. Download the application from the S3 bucket using wget (it's publicly accessible)
5. Run the application inside the container
After which I will run the image.
Command to build -
docker build -t someTag .
Command to run -
docker run -p 9090:8090 someTag
My Java application jar that will be downloaded is docker.jar,
and the application runs on port 8080.
I have the following Dockerfile -
FROM someRepoHere
WORKDIR /home/ubuntu
RUN apt-get update
RUN cd /home/ubuntu
VOLUME /home/ubuntu
RUN wget S3BucketLocationHere
#RUN nohup java -jar docker.jar &
# Expose the default port
EXPOSE 8080
#Old command - CMD nohup java -jar docker.jar &
CMD ["java", "-jar", "docker.jar"]
The Dockerfile is able to build the image successfully, but
my application is unreachable; it did not run inside the container.
Locally, if I wget my application and run the nohup command, the application responds successfully.
The command being run is what controls the existence of the container: when it exits/returns, the container exits and stops. Therefore you need to run your command in the foreground. When you are in an interactive shell in a container, that command is your shell. The command you've listed uses a shell, but that shell exits as soon as it runs out of commands to process, and nothing is left running in the foreground:
CMD nohup java -jar docker.jar &
The string syntax will run the command with /bin/sh -c "nohup java ...".
A better option is to use the json syntax if you don't need a shell, and run your Java app in the foreground, avoiding nohup and the background syntax:
CMD ["java", "-jar", "docker.jar"]
A few more comments on the provided Dockerfile:
WORKDIR /home/ubuntu
RUN apt-get update
The apt-get update line only creates a package cache inside your image that will become stale and result in cache misses if you try to use it later. It doesn't upgrade any packages, if that's what you intended. That line should be removed.
RUN cd /home/ubuntu
This makes no filesystem changes, and will have no impact on the resulting image. The current shell state is lost after the RUN line exits, including the current directory and any variables you set. This line should be removed.
VOLUME /home/ubuntu
From this line forward, changes to /home/ubuntu will be lost. You'll only see anonymous volumes created as a result unless you specify a volume at runtime at the same location. You likely don't want the above volume line because it will break things like the next line.
RUN wget S3BucketLocationHere
This line has been obfuscated, but I suspect you are downloading into /home/ubuntu because of the value of WORKDIR. Anything created there will be lost because of the VOLUME line above.
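Putting the advice together, a minimal sketch of the trimmed-down Dockerfile (keeping the question's placeholders for the base image and the S3 URL) might look like this:

FROM someRepoHere
WORKDIR /home/ubuntu
# download the application jar into the working directory
RUN wget S3BucketLocationHere
# Expose the default port
EXPOSE 8080
# run the jar in the foreground using the json syntax; no nohup, no &
CMD ["java", "-jar", "docker.jar"]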
First, let me describe my issue.
I configured Jenkins, and after the build action I call a shell script that runs a bash script on a remote server.
The shell script starts the application via the command
java -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=xxx \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -XX:+HeapDumpOnOutOfMemoryError -jar name.jar "BUILD_PARAMETER"
I see the logs from my application in the Jenkins build, and it keeps the build running. I need the build to finish once the sh step has launched the application. Is it possible?
If you're doing this using Jenkins, you will need to use the nohup notation as in the comments, as well as specifying a non-numerical BUILD_ID for the process. Jenkins tries to clean up after a job finishes by killing any processes it started.
BUILD_ID=dontKillMe nohup <your command> &
The above command should work.
https://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build
Your shell script needs to fork a process and return; otherwise Jenkins thinks your shell script is still running (which it is, if it isn't forking the process and returning).
You have not provided the command you use to launch your application, but a common way to fork a process in Linux is:
nohup <your command here> &
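Combining both answers with the command from the question, a sketch of what the Jenkins-invoked script might run (the log file name is an assumption, not from the original post):

BUILD_ID=dontKillMe nohup java -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=xxx \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -XX:+HeapDumpOnOutOfMemoryError -jar name.jar "BUILD_PARAMETER" > app.log 2>&1 &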
I'm currently developing a simple deployment script for VMs running Ubuntu.
All these machines are supposed to run a java application provided as a jar.
This is the relevant part of the script installing java, copying a jar from local machine to remote machine and then starting the application:
ssh ubuntu@$line -i ~/.ssh/key.pem -o StrictHostKeyChecking=no <java_installation.sh
scp -i ~/.ssh/key.pem $JARFILE ubuntu@$line:~/storagenode.jar
ssh ubuntu@$line -i ~/.ssh/key.pem <java_start_jar.sh
The installation via the java_installation.sh script succeeds, and the scp command does as well.
The problem occurs when executing the commands in java_start_jar.sh via ssh.
java_start_jar.sh:
#!/bin/sh
# this script starts a jar file and creates a shellscript which can be used to stop the execution.
nohup java -jar ~/storagenode.jar & > ~/storagenode.log
pId=$!
echo "kill $pId" > ~/stop_storagenode.sh
chmod u+x ~/stop_storagenode.sh
The script starts the execution of the jar file, but then simply blocks.
ssh does not return; the rest of the local script is only executed after manually closing the connection.
Any ideas why the Java application is not running properly as a background process?
Move the & to the end of the line:
#!/bin/sh
# this script starts a jar file and creates a shellscript which can be used to stop the execution.
nohup java -jar ~/storagenode.jar > ~/storagenode.log &
pId=$!
echo "kill $pId" > ~/stop_storagenode.sh
chmod u+x ~/stop_storagenode.sh
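If ssh still holds the session open after this change (nohup leaves stderr and stdin attached to the session by default), a slightly more defensive variant of that line, offered as a sketch rather than as part of the original answer, is:

nohup java -jar ~/storagenode.jar > ~/storagenode.log 2>&1 < /dev/null &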