I have an AppleScript which I am using to run a .jar file. The .jar file takes several inputs which were originally entered via the command line but are now entered into a .csv and read into the .jar automatically. For unknown reasons, a number in the CSV is sometimes not read correctly, leading to a NumberFormatException in the Java code. However, instead of stopping, my script continually tries to enter the invalid input in an infinite loop. Is there a way to amend my code so that the script stops when an error is raised by the .jar?
Here is my current code:
on RunFile(jar_location)
do shell script "cd " & jar_location & " ; cat 'prompt.csv' | sh 'runScript.sh' 'WSO'"
end RunFile
After going over this in the comments, it's clear that the problem is that the .jar file is trying to be interactive (asking for input at the cursor), and AppleScript's do shell script is not designed for that. AppleScript can get errors and output from the shell, but it cannot feed a response back to the shell, or tell whether a shell script is waiting for input.
If the .jar file cannot be operated in a non-interactive mode, then the only way for AppleScript to make sure the process ends is to grab its process id, wait a reasonable amount of time, and then send it a kill signal. That script would look like this:
on RunFile(jar_location)
set pid to do shell script "cd " & jar_location & " ; cat 'prompt.csv' | sh 'runScript.sh' 'WSO' &> /dev/null & echo $!"
-- wait 5 seconds, or whatever seems appropriate for the task to complete
delay 5
try
do shell script "kill " & pid
end try
end RunFile
The appended &> /dev/null & echo $! phrase detaches the shell script, allowing the AppleScript to move forward, and returns the process id of the process for later use. I've put the kill signal in a try block so that the script does not throw an error if the process has already exited normally.
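If you have the option of driving the jar from a JVM process instead of AppleScript, the same delay-and-kill idea can be written with a process timeout. This is only a sketch under that assumption; the script and CSV names come from the question, and the path is hypothetical:

import java.io.File;
import java.util.concurrent.TimeUnit;

public class RunWithTimeout {
    public static void main(String[] args) throws Exception {
        File jarLocation = new File("/path/to/jar_location"); // hypothetical location
        // Feed prompt.csv to the script, as the AppleScript handler does with cat.
        ProcessBuilder pb = new ProcessBuilder("sh", "runScript.sh", "WSO")
                .directory(jarLocation)
                .redirectInput(new File(jarLocation, "prompt.csv"))
                .redirectOutput(ProcessBuilder.Redirect.INHERIT)
                .redirectError(ProcessBuilder.Redirect.INHERIT);
        Process p = pb.start();

        // Wait a reasonable amount of time, then kill, mirroring the delay/kill above.
        if (!p.waitFor(5, TimeUnit.SECONDS)) {
            p.destroyForcibly();
        }
    }
}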
Related
I have a shell script on a remote linux machine which contains the following:
#!/bin/sh
for i in $(seq 1 10);
do
echo "CREATE TABLE ben$i (id NUMBER NOT NULL);
! sleep 30
select * from ben$i;
! sleep 30
DROP TABLE ben$i;" | sqlplus system/password &
done
wait
The name of this script is ben.sh.
In Java, I want to execute this script and keep it doing what it does in the background.
I have a command that executes the script successfully:
sshshell.execute("su - oracle -c './ben.sh'");
I want the script to keep running on the remote Linux machine, and I want to close the ssh connection right after I execute the command above, without interfering with the script.
I thought I could put an & at the end of this command, like so:
sshshell.execute("su - oracle -c './ben.sh' &");
But the Java program still gets stuck and waits for the script to finish.
Very important note: I don't want to use Threads OR any additional ssh connections.
What are my options here?
Use nohup and & to run the script in the background.
sshshell.execute("nohup su - oracle -c './ben.sh' &");
nohup is short for "no hangup". It is a supplemental command that tells the system not to stop another command once it has started, which means the command keeps running until it is done, even if the user who started it logs out. The syntax for nohup is as follows:
nohup sh your-script.sh &
The & at the end moves the command to the background, freeing up the terminal that you’re working in.
So you can use the nohup or disown commands. With nohup, the hangup signal sent on logout does not stop the child process.
nohup cmd &
Or you can use disown to detach the child process from the shell:
cmd & disown
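Applied to the Java call from the question, that would look something like the line below. This is a sketch, not a tested answer: the exact behavior depends on the ssh library behind sshshell, and many ssh exec channels only return once the remote command's stdout and stderr are closed, which is why the output is redirected as well.

sshshell.execute("nohup su - oracle -c './ben.sh' > /dev/null 2>&1 &");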
The goal of my program is to run an interactive command line executable from Java, so I can add input partway through when required. Basically redirecting input.
I couldn't find anything that worked online because the -c flag does not allow interactivity, but then I saw that the -i flag in the terminal allowed me to run commands with interactive input if I fed it a .sh file.
However, when I tried using this flag in Java, it didn't work. I have separate input and output threads, so if I could get this to work it seems like it would be easy.
Relevant code:
ProcessBuilder pb = new ProcessBuilder()
        .directory(new File(testDir))
        .inheritIO()
        .command("bash", "-i", "executor.sh");
proc = pb.start();
This is the error I get:
bash: cannot set terminal process group (1469): Inappropriate ioctl for device
bash: no job control in this shell
If there's a way I could get this -i option working, I'd appreciate it; otherwise, I'd appreciate pointers to anything else that would let me get interactive input working, because nothing else I've tried seems to solve this problem.
bash -i is completely unrelated to the ability to read from the TTY.
Rather, redirect from the TTY, after your script already started:
#!/usr/bin/env bash
exec </dev/tty || { echo "ERROR: Unable to connect stdin to /dev/tty" >&2; exit 1; }
read -r -p "Fill out this prompt please: " value
echo "Read from TTY: $value"
The command exec </dev/tty replaces the script's stdin (FD 0) with a read handle on /dev/tty. If you wanted to do this just for a single command, rather than for the whole script, put </dev/tty on the end of that command.
Of course, this only works if your process is run in a context where it has a controlling terminal at all -- but if that weren't the case, you couldn't read from the user without getting some kind of handle on an I/O device regardless.
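If the process has no controlling terminal to redirect from, an alternative (not what the answer above describes, just a sketch) is to drop -i and feed the script's stdin from the Java side through the process's pipe. The script name comes from the question; the working directory and the input value below are made up:

import java.io.BufferedWriter;
import java.io.File;
import java.io.OutputStreamWriter;

public class FeedScriptInput {
    public static void main(String[] args) throws Exception {
        // No -i needed: stdin stays a pipe this program can write to,
        // while the script's output goes straight to this program's console.
        ProcessBuilder pb = new ProcessBuilder("bash", "executor.sh")
                .directory(new File("/tmp/testDir")) // hypothetical test directory
                .redirectOutput(ProcessBuilder.Redirect.INHERIT)
                .redirectError(ProcessBuilder.Redirect.INHERIT);
        Process proc = pb.start();

        // Write a response to the script's prompt; closing the writer closes its stdin.
        try (BufferedWriter stdin = new BufferedWriter(
                new OutputStreamWriter(proc.getOutputStream()))) {
            stdin.write("some value");
            stdin.newLine();
            stdin.flush();
        }

        proc.waitFor();
    }
}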
I am running a batch file (ScanProject.bat) using Java with the following code:
Process p= Runtime.getRuntime().exec("cmd /c start /wait ScanProject.bat "+ BaseProjDir+"\\"+jo.getString("Name")+" "+st.nextToken());
System.out.println("Exit value : "+p.waitFor());
And the following is the batch file code:
%2:
cd %1
ant -f ..\antbuild.xml analyse
exit
The batch file runs successfully, but the problem is that the command prompt window does not close automatically, so the process never terminates and my program waits forever for it to complete. Please suggest a technique so that cmd exits after running the ant -f ..\antbuild.xml analyse command.
Thanks.
cd /D "Full path of directory", or pushd "Full path of directory" with popd before exit, is better for switching the current directory to any directory on any drive (cd and pushd/popd) or even to a network share (pushd/popd only). Run cd /? and pushd /? in a command prompt window for details.
cmd /C starts a new Windows command process which closes automatically after the last command has been executed. Run cmd /? in a command prompt window for details on the options of the Windows command interpreter.
start is a command to start a new Windows command process or a GUI/hybrid application in a separate process.
So what you do here is starting a new Windows command process which starts a new Windows command process.
Running start /? in a command prompt window outputs the help for this command. start often interprets the first double-quoted string as the title string for the new command process, which often causes trouble on command lines containing at least one double-quoted string. Therefore, using start often requires an explicit title string in double quotes as the first argument, which can even be an empty string, i.e. simply "" as the first argument after start.
As can be read after running exit /? in a command prompt window, this command without /B always exits the current Windows command process immediately. So when ant.exe has finished, the command process in which the batch file was processed is definitely terminated.
I have no experience with Java development, but from my point of view it should be enough to use the following execution command, which does not need a batch file at all.
The Java code line
Process p= Runtime.getRuntime().exec("cmd.exe /C cd /D \"" + jo.getString("Name") + "\" && ant.exe -f ..\\antbuild.xml analyse");
should be enough to
start a new Windows command process,
set the current directory within this command process to the drive and directory specified by jo.getString("Name"), which of course must return a directory path with a drive letter and backslashes as directory separators, and on success
execute ant in this directory with the specified parameters
with the Windows command process terminating automatically after ant.exe finishes, assuming ant.exe is a console application.
I'm not sure if cmd.exe /C is needed at all.
I suggest testing this command manually first from within a command prompt window, then using it in the Java application once it really works and produces the expected result. And finally I would test further whether cmd.exe /C is needed at all in the Java code.
See Single line with multiple commands using Windows batch file for details about the operator && to run a command after previous command was successful. And see also Why do not all started applications save the wanted information in the text files as expected? for an explanation of console / GUI / hybrid application.
NOTE: There is also the Java Runtime method exec(String[] cmdarray, String[] envp, File dir), which executes a command like ant.exe with its parameters -f, ..\antbuild.xml, and analyse in the directory defined by the third parameter; this might be better suited for this task.
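A minimal sketch of that approach with ProcessBuilder (not tested here; the working directory below is hypothetical, and in the question it would be built from BaseProjDir and jo.getString("Name")):

// Run ant directly in the target directory; no batch file or extra start /wait needed.
java.io.File workDir = new java.io.File("C:\\path\\to\\project"); // hypothetical; build it as in the question
ProcessBuilder pb = new ProcessBuilder("cmd.exe", "/C", "ant", "-f", "..\\antbuild.xml", "analyse")
        .directory(workDir)
        .inheritIO(); // let ant's output appear on the Java program's console
Process p = pb.start();
System.out.println("Exit value : " + p.waitFor());

cmd.exe /C is kept here only because ant on Windows is normally a batch file (ant.bat), so running it through cmd.exe /C is the safe route; if an ant.exe really is available, the cmd.exe /C prefix may not be needed at all, as the answer suggests testing.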
Swap out exit for taskkill, assuming you do not have any other cmd processes running. Not very graceful but it will get the job done.
%2:
cd %1
ant -f ..\antbuild.xml analyse
taskkill /im cmd.exe
I'm trying to send a command to a minecraft server jar using /proc/{pid}/fd/0 but the server does not execute the command.
To replicate what I'm trying to do you can do this on a Debian-based machine (possibly other Linux distributions as well).
What I use to test this:
Ubuntu 14.04
minecraft_server.jar (testing with 1.8)
OpenJDK Runtime Environment (installed with default-jre-headless)
First console:
$ java -jar minecraft_server.jar nogui
Response: [ ... server starts and waiting for input]
say hi
Response: [19:52:23] [Server thread/INFO]: [Server] hi
Second console:
Now, when I switch to the second console, with the server still running in the first, I write:
echo "say hi2" >> /proc/$(pidof java)/fd/0
Everything looks fine until I switch back to the first console. I can see the text "say hi2", but the server hasn't recognized it. I can write another command in the first console again, and it is as if the text entered from the second console never existed.
Why is this? And more importantly, how do I use /proc/{pid}/fd/0 in a proper way to send commands to a java jar file?
I don't know if this is some kind of Java thing that I'm not aware of, if I can use some flag or something when executing the server, or if it's the server jar itself that is the problem.
I'm aware that you can use screen, tail -f or some kind of server wrapper to accomplish this, but that's not what I'm after. I would like to send a command using this method, in some kind of way.
It's not a Java thing. What you are trying is simply not doable.
Test it like this:
Console1:
$ cat
This will basically echo anything you type on it as soon as you hit "return".
Console2: Find the process number of your cat command. Let's say it's NNN. Do:
$ echo Something > /proc/NNN/fd/0
Switch back to Console1. You'll see "Something" appear on the terminal, but cat does not echo it back the way it echoes what you type.
Why? Do
$ ls -l /proc/NNN/fd
And you may understand. All three descriptors, 0 for stdin, 1 for stdout and 2 for stderr are actually symbolic links, and all point to the same pseudoterminal slave (pts) device, which is the pts associated with your first terminal.
So basically, when you write to it, you actually write to the console output, not to its input. If you read from that file, you could steal some of the input that was supposed to go to the process in the first console (you are racing for this input). That's how a character device works.
The documentation for /proc says that:
/proc/[pid]/fd/
This is a subdirectory containing one entry for each file
which the process has open, named by its file descriptor, and
which is a symbolic link to the actual file. Thus, 0 is
standard input, 1 standard output, 2 standard error, and so
on.
So these are not the actual file descriptors opened by the process. They are just links to files (or in this case, character devices) with names that indicate which descriptor they are attached to in the given process. Their main duty is to tell you whether the process has redirected its file descriptors or has opened any new ones, and which resources they point to.
But if you want an alternative way of doing this, you can use a fifo - a named pipe.
Create a fifo by doing:
$ mkfifo myfifo
Run your java program:
$ java -jar minecraft_server.jar nogui < myfifo
Open another console and write:
$ cat > myfifo
Now start typing things. Switch to the first console. You'll see your server executing your commands.
Mind your end-of-files, though. Several processes can write to the same fifo, but as soon as the last one closes it, your server will receive an EOF on its standard input.
It is possible to get around the fact that a named pipe is 'closed' when a process ends. You can do this by keeping a file descriptor to the named pipe open in another process.
#! /bin/bash
# tac prints a file in reverse order (tac -> cat)
cmd="tac"
# create fifo called pipe
mkfifo pipe
# open pipe on the current process's file descriptor 3
exec 3<>pipe
bash -c "
# child process inherits all file descriptors. So cmd must be run from a sub-
# process. This allows us to close fd so cmd does not inherit fd 3, but allow fd 3
# to remain open on parent process.
exec 3>&-
# start your cmd and redirect the named pipe to its stdin
$cmd < pipe
" &
# write data to pipe
echo hello > pipe
echo world > pipe
# short wait before tidy up
sleep 0.1
# done writing data, so close fd 3 on parent (this) process
exec 3>&-
# tac now knows it will receive no more data so it prints its data and exits
# data is unbuffered, so all data is received immediately. Try `cat` instead to see.
# clean up pipe
rm pipe
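For completeness, since the target here is a Java jar: the question explicitly rules out a server wrapper, but if starting the server yourself from Java is acceptable, the Java-side equivalent of the fifo redirection is simply to keep the process's stdin pipe open and write commands to it. A sketch under that assumption (only the jar name and command come from the question):

import java.io.BufferedWriter;
import java.io.OutputStreamWriter;

public class ServerConsole {
    public static void main(String[] args) throws Exception {
        // Start the server with its stdin connected to a pipe this program controls.
        Process server = new ProcessBuilder("java", "-jar", "minecraft_server.jar", "nogui")
                .redirectOutput(ProcessBuilder.Redirect.INHERIT)
                .redirectError(ProcessBuilder.Redirect.INHERIT)
                .start();

        BufferedWriter console = new BufferedWriter(
                new OutputStreamWriter(server.getOutputStream()));

        // Any line written and flushed here arrives on the server's stdin,
        // just like a line echoed into the fifo above.
        console.write("say hi2");
        console.newLine();
        console.flush();

        server.waitFor();
    }
}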
I'm new to UNIX. I want to start my java app with a script like so:
#!/bin/sh
java -jar /usr/ScriptCheck.jar &
echo $! > /var/run/ScriptCheck.pid
This is supposedly working. It does run the app and it does write the pid file. But when I try to stop the process with a different script which contains this:
#!/bin/sh
kill -9 /var/run/ScriptCheck.pid
the console gives me this error:
bash: kill: /var/run/ScriptCheck.pid: arguments must be process or job IDs
My best guess is that I'm not writing the right code in the stop script, maybe not giving the right command to open the .pid file.
Any help will be very appreciated.
You're passing a file name as an argument to kill when it expects a process id (a number), so just read the process id from that file and pass it to kill:
#!/bin/sh
PID=$(cat /var/run/ScriptCheck.pid)
kill -9 $PID
A quick and dirty method would be :
kill -9 $(cat /var/run/ScriptCheck.pid)
Your syntax is wrong: kill takes a process id, not a file. You also should not be using kill -9 unless you absolutely know what you are doing.
kill $(cat /var/run/ScriptCheck.pid)
or
xargs kill </var/run/ScriptCheck.pid
I think you need to read in the contents of the ScriptCheck.pid file (which I'm assuming has only one entry with the PID of the process in the first row).
#!/bin/sh
procID=0;
while read line
do
procID="$line";
done </var/run/ScriptCheck.pid
kill -9 $procID
I've never had to create my own pid; your question was interesting.
Here is a bash code snippet I found:
#!/bin/bash
PROGRAM=/path/to/myprog
$PROGRAM &
PID=$!
echo $PID > /path/to/pid/file.pid
You would have to have root privileges to put your file.pid into /var/run (the location referenced by a lot of articles), which is why daemons run with root privileges.
In this case, you need to put your pid in some agreed-upon place known to your start and stop scripts. You can also use the fact that a pid file exists, for example, to prevent a second identical process from running.
The $PROGRAM & puts the script into background "batch" mode.
If you want the program to hang around after your script exits, I suggest launching it with nohup, which means the program won't die when the shell that started it logs out.
I just checked: $! still returns the PID when the program is launched with nohup.