I want to do something along these lines.
Process shell = Runtime.getRuntime().exec("/bin/bash");
Then I want to use the process's streams to talk to the bash shell. However, this doesn't seem to work at all, and it totally stumps me.
I found this link which seems to talk about the same problem. Why exactly does this happen and are there better solutions than the one outlined in the link?
It can be necessary to flush your writes from the JVM to the child process to make sure it's getting its input. IIRC I didn't need to do this on Windows, but did on Linux. I also ran into issues where I had to force the child process to flush its writes so the JVM would see them right away, too.
Also, make sure that you have JVM threads reading from stdout and stderr before you do anything; if either of those buffers fills up, it can lock the process. This is a huge problem on Windows. You will only need one thread if you use the option to combine the streams when launching the process.
Also, your example (above) doesn't have a newline; wouldn't bash require one? e.g. "touch blah\n"
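As a rough sketch of the flushing and reading described above (the class name and the one-shot "exit" are mine; a real interactive session would keep stdin open and read on a separate thread):

```java
import java.io.*;

public class BashPipe {
    public static String run(String command) throws Exception {
        // Combine stderr into stdout so a single reader suffices.
        ProcessBuilder pb = new ProcessBuilder("/bin/bash");
        pb.redirectErrorStream(true);
        Process shell = pb.start();

        // Writer side: each command needs a trailing newline, and the
        // stream must be flushed or bash may never see the input.
        BufferedWriter in = new BufferedWriter(
                new OutputStreamWriter(shell.getOutputStream()));
        in.write(command);
        in.newLine();
        in.write("exit");    // let bash terminate; closing stdin (EOF) would too
        in.newLine();
        in.flush();
        in.close();

        // Reader side: drain stdout before waiting, or the pipe buffer
        // can fill up and deadlock the child.
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(shell.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        shell.waitFor();
        return out.toString();
    }
}
```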
What I would like to accomplish is to start a Java program and have it keep running until the user kills it with a Ctrl-C. I realize it is possible to do this by creating a BufferedReader and endlessly looping over reads from it, but what I am doing involves backgrounding the Java program (e.g., java -jar app.jar &), which detaches standard input, so that method would not work. I've read a bit about Java's daemon threads, but I do not think that is the correct solution in this instance either, because I want the JVM to stay alive.
Any help would be much appreciated.
Thanks,
Chris
I'm guessing you're launching the Java program from a different process, perhaps? A potential option may be integrating the Java Service Wrapper. Depending on what you are doing, licensing may be an issue, and you probably wouldn't kill the process with a Ctrl-C, but it's a thought (in addition to the others given above).
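Another way to sketch this (class and method names are illustrative): block the main thread on a CountDownLatch instead of reading System.in, and count it down from a shutdown hook, so backgrounding the process doesn't matter.

```java
import java.util.concurrent.CountDownLatch;

public class KeepAlive {
    private final CountDownLatch done = new CountDownLatch(1);

    // Called from a shutdown hook (Ctrl-C / kill) or from other code.
    public void stop() { done.countDown(); }

    public boolean stopped() { return done.getCount() == 0; }

    // Blocks the calling thread without touching System.in, so it keeps
    // working even when the program is backgrounded ("java -jar app.jar &").
    public void awaitShutdown() throws InterruptedException {
        Runtime.getRuntime().addShutdownHook(new Thread(this::stop));
        done.await();
    }
}
```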
Alright, so I'm writing this program that essentially batch runs other java programs for me (multiple times, varying parameters, parallel executions, etc).
So far the running part works great. Using ProcessBuilder's start() method (equivalent to Runtime.exec(), I believe), it creates a separate Java process and off it goes.
The problem is that I would like to be able to pause/stop these processes once they've been started. With simple threads this is generally easy to do; however, the external process doesn't seem to have any built-in functionality for waiting/sleeping, at least not from an external point of view.
My question(s) is this: Is there a way to pause a java.lang.Process object? If not, does anyone know of any related exec libraries that do contain this ability? Barring all of that, is extending Process a more viable alternative?
My question(s) is this: Is there a way to pause a java.lang.Process object?
As you've probably discovered, there's no support for this in the standard API. Process for instance provides no suspend() / resume() methods.
If not, does anyone know of any related exec libraries that do contain this ability?
On POSIX compliant operating systems such as GNU/Linux or Mac OS you could use another system call (using Runtime.exec, ProcessBuilder or some natively implemented library) to issue a kill command.
Using the kill command you can send signals such as SIGSTOP (to suspend a process) and SIGCONT (to resume it).
(You will need to get hold of the process id of the external program. There are plenty of existing questions and answers covering how to do that.)
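A minimal sketch of the kill approach (it assumes a POSIX system, and Java 9+ for Process.pid(); class and method names are mine):

```java
import java.util.concurrent.TimeUnit;

public class ProcessPauser {
    // Sends a POSIX signal to a child process via the external "kill"
    // command. Process.pid() requires Java 9+; on older JVMs the pid
    // would have to be obtained reflectively or natively.
    public static void signal(Process p, String sig) throws Exception {
        new ProcessBuilder("kill", "-" + sig, Long.toString(p.pid()))
                .start()
                .waitFor();
    }

    public static void main(String[] args) throws Exception {
        Process child = new ProcessBuilder("sleep", "60").start();
        signal(child, "STOP");   // suspend: the child is frozen, using no CPU
        signal(child, "CONT");   // resume
        child.destroy();
        child.waitFor(5, TimeUnit.SECONDS);
    }
}
```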
You will need to create a system for sending messages between processes. You might do this by:
Sending signals, depending on OS. (As aioobe notes.)
Having one process occasionally check for presence/absence of a file that another process can create/delete. (If the file is being read/written, you will need to use file locking.)
Have your "main" process listen on a port, and when it launches the children, have it tell them (via a command-line argument) how to "phone home" as they start up. Both programs alternate between doing work and checking for and handling messages.
From what you have described (all Java programs in a complex batch environment) I would suggest #3, TCP/IP communication.
While it certainly involves extra work, it also gives you the flexibility to send commands or information of whatever kind you want between different processes.
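A sketch of option #3 (the ephemeral port, the "PAUSE" command, and the "ACK" reply are all invented for illustration; here the child end is simulated by a thread, whereas in reality it would be a separate JVM given the port on its command line):

```java
import java.io.*;
import java.net.*;

public class PhoneHome {
    // Parent side: listen on an ephemeral port, accept one child,
    // send it a control command, and read its acknowledgement.
    public static String exchange(String command) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();

            // Child side, simulated in-process; a real child would get
            // the port as a command-line argument at launch time.
            Thread child = new Thread(() -> {
                try (Socket s = new Socket("localhost", port);
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    String cmd = in.readLine();   // blocks until parent writes
                    out.println("ACK " + cmd);    // acknowledge the command
                } catch (IOException ignored) { }
            });
            child.start();

            try (Socket s = server.accept();
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                out.println(command);
                String reply = in.readLine();
                child.join();
                return reply;
            }
        }
    }
}
```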
A Process represents a separate process running on the machine. Java definitely does not allow you to pause them through java.lang.Process. You can forcibly stop them using Process.destroy(). For pausing, you will need the co-operation of the spawned process.
What sorts of processes are these? Did you write them?
When I execute a command in a separate process, for example by using the Runtime.getRuntime().exec(...) method, whose JavaDoc states:
Executes the specified command and arguments in a separate process.
What do I need to do with the streams from this process, knowing that the process shall live until the Java program exits? (This is a detail, but the Java program takes care of killing this process, and the process itself has a built-in safety where it kills itself should it notice that the Java program that spawned it is not running anymore.)
If we consider that this process produces no output at all (for example because all error messages and stdout are redirected to /dev/null and all communications are done using files/sockets/whatever), what do I need to do with the input stream?
Should I have one (or two?) Java threads running for nothing, trying to read stdout/stderr?
What is the correct way to deal with a long-living external process spawned from a Java program that produces no stdout/stderr at all?
EDIT
Basically I wrap the shell script in another shell script that makes sure to redirect everything to /dev/null. I'm pretty sure my Un*x would be non-compliant if my "outer" shell script (the one redirecting everything to /dev/null) still generated anything on stdout or stderr. Yet I find it mind-boggling that I would somehow be supposed to have threads running during the lifecycle of the app "for nothing".
If everything is as you say, then you can probably ignore them.
However, rarely do things work out so cleanly. It may be worth it in the long run to spawn a single thread to pull stdout/stderr, just in case. The one day it fails and actually puts something out, is the day you needed to know what came out. 1 or 2 threads (I think it could be done with just one) won't be a large overhead. Especially if you are correct and nothing ever comes out of those streams.
I believe the correct way to deal with a process's input and output, if you are not interested in them, is to close them promptly. If the child process subsequently tries to read from stdin, it will see end-of-file; if it tries to write to stdout, the write will fail (in Java, with an IOException). It is then the responsibility of the child process to deal with the fact that it cannot read or write.
Most processes will ignore the fact that they cannot write and silently discard the writes. This is true in Java, where System.out is a PrintStream, so any IOExceptions thrown by the underlying stream are swallowed. This is pretty much what happens when you redirect output to /dev/null -- all output is silently discarded.
It sounds like you've read the API docs on Process and why it's important to read from/write to the process if it expects to be doing any writing or reading of its own. But I'll reiterate: the problem is that some OSes allocate only very limited buffers for (specifically) stdout, so it is important not to let these buffers fill up. This means either reading any output of the child process promptly, or notifying the OS that you do not require the output, so it can release any resources held and reject any further attempts to write to stdout or read from stdin (rather than just hanging until resources become available).
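If you decide the output really is disposable, one way to tell the OS so without any reader threads (assuming Java 9+ for Redirect.DISCARD; the seq command here is just a stand-in for a chatty child):

```java
import java.lang.ProcessBuilder.Redirect;

public class Silenced {
    public static int run() throws Exception {
        // Generate plenty of output; with an unread pipe this could
        // fill the OS buffer and block the child forever.
        ProcessBuilder pb = new ProcessBuilder("seq", "1", "1000000");
        // Redirect.DISCARD (Java 9+) sends everything to the null device,
        // so no reader thread is needed at all. On older JVMs,
        // Redirect.to(new java.io.File("/dev/null")) achieves the same.
        pb.redirectOutput(Redirect.DISCARD);
        pb.redirectError(Redirect.DISCARD);
        Process p = pb.start();
        return p.waitFor();
    }
}
```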
I am running a java app as daemon on a linux machine using a customized shell script.
Since I am new to Java and Linux, I want to know whether it is possible for the app to resurrect itself (i.e., restart) and recover from cases like the app crashing, unhandled exceptions, or running out of memory.
thanks in advance
Ashish Sharma
The JVM is designed to die when there is an unrecoverable error. The ones you described fall into this category.
You could, however, easily write a shell script or a Python script that checks whether the process is alive and, if it is dead, waits a few seconds and revives it. As a hint for doing this: the Unix command "pgrep" is your friend, as you can check for the exact command line used to start a JVM (and thus including the starting class). This way, you can determine whether that specific JVM instance is running, and restart it.
All that being said, you may want to add some reporting or logging capability and check it often, because it is too easy to assume that things are OK when in fact the daemon is dying every few minutes. Make sure you've done what you can to prevent it from dying before resurrecting it.
There are wrappers that can handle that, like the Java Service Wrapper (be aware that the Community Edition is under the GPL), and some alternatives, mentioned here
To be honest, relaunching the daemon unconditionally after a crash is probably not a good idea. It depends greatly on the type of processing done by your daemon, but if, for example, it processes files from a given directory, or requests coming from a queue manager, and the file/message contains some unexpected data causing the crash, relaunching the daemon would make it crash again immediately. (The exception is when the file/message is removed whether or not it has been correctly processed, but that does not seem like a good idea either.)
In short, it's probably better to track down the possible crash causes and fix them where possible (or at least log the problem and carry on, provided the log is eventually scanned so that a human being is warned and some action can be taken upon such "failures").
Anyway, if you have very good reasons to do this, there is a solution even simpler than "checking that the process is alive" (which would probably involve some "ps -blahblah" stuff in one way or another): just put the Java program launch in a shell "while true" loop, as follows:
while true
do
    # launch the java program here, in the foreground;
    # when it crashes, the shell will be given control back
    java -classpath blahblah...
    echo "program crashed, relaunching it..."
done
On Unix-based systems, you can use "inittab" to specify the program. If the process dies, it is restarted by the OS (respawn).
I am not sure the app itself can handle such crashes. You could write a shell script on Linux, running as a cron job, to manage the app: it would check at scheduled intervals whether the Java app is running and, if not, restart it automatically.
I'm using Java's ProcessBuilder class to run an external process. The process should not terminate before the Java program does; it must stay alive in command/response mode.
I know that the process streams may easily 'jam' if neglected, so I've done the following:
The program reads the process's combined output and error streams in a "reader" thread, and uses a "writer" thread to manage the commands. The reader thread does blocking character reads from process output, buffers them up into Strings and dispatches the results. The writer thread writes complete "command" lines via a PrintWriter; it uses a queue to ensure that no two command writes are "too close together" (currently 100ms), and that no new command gets written before the output of the previous command is complete. I also call flush() and checkError() after every println().
This scheme works fine for a few seconds or minutes, then the reader thread hangs on the blocking read(). No errors, no exceptions thrown, no more process output. Thereafter nothing will revive the external process (short of restarting it). (BTW this happens on both Linux and Windows.)
I've looked at the code and test-cases in Jakarta Commons Exec and in Plexus Utils http://plexus.codehaus.org/plexus-utils/ but (a) neither gives an example of using a long-lived Process and (b) neither appears to be doing anything basically different from what I've described.
Does anyone have a clue what's happening here please?
Thanks!
Do you also have a thread managing stderr? You only mention the two streams.
I implemented the error, input, and output streams in three separate threads, and I can read from and write to external processes without any problem.
I tested this on both Windows and Linux with a multitude of built-in apps (cmd/bash) as well as other command-line binaries, and it works fine, except that on some occasions it just throws an IO stream exception. What I do is catch the exception and restart the thread again, so that the program keeps working.
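A sketch of the three-thread arrangement described above (names are mine; here the caller plays the role of the stdin-writing thread and simply closes the stream):

```java
import java.io.*;
import java.util.concurrent.*;

public class ThreeStreams {
    // Drains one stream on its own thread so neither pipe buffer
    // can fill up and stall the child process.
    static Future<String> drain(ExecutorService pool, InputStream in) {
        return pool.submit(() -> {
            StringBuilder sb = new StringBuilder();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(in))) {
                String line;
                while ((line = r.readLine()) != null) sb.append(line).append('\n');
            }
            return sb.toString();
        });
    }

    // Returns { stdout, stderr } of the finished child process.
    public static String[] run(String... cmd) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Process p = new ProcessBuilder(cmd).start();
        Future<String> out = drain(pool, p.getInputStream());
        Future<String> err = drain(pool, p.getErrorStream());
        p.getOutputStream().close();   // the stdin "thread": nothing to write
        p.waitFor();
        String[] result = { out.get(), err.get() };
        pool.shutdown();
        return result;
    }
}
```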
If you are trying to drive e.g. ssh on Linux, you might run into a problem where you won't be able to write to its stdin; this is for security reasons.
Try taking input from System.in and see if it works; it worked in my case.
Just a guess, but have you tried un-combining the error and output streams?