I'm parsing the result of executing this composite command
ntpq -c peers | awk ' $0 ~ /^*/ {print $9}'
in order to obtain the offset of the active ntp server.
This is the Java code, executed periodically:
public Double getClockOffset() {
    Double localClockOffset = null;
    try {
        String[] cmd = {"/bin/sh",
                        "-c",
                        "ntpq -c peers | awk ' $0 ~ /^\\*/ {print $9}'"};
        Process p = Runtime.getRuntime().exec(cmd);
        p.waitFor();
        BufferedReader buf = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line = buf.readLine();
        if (!StringUtils.isEmpty(line)) {
            localClockOffset = Double.parseDouble(line.trim());
        } else {
            // Log "NTP -> Empty line - No active servers - Unsynchronized"
        }
    } catch (Exception e) {
        // Log exception
    }
    return localClockOffset;
}
Example ntpq output:
> remote refid st t when poll reach delay offset jitter
> ==============================================================================
> *server001s1 .LOCL. 1 u 33 64 377 0.111 -0.017 0.011
> +server002s1 10.30.10.6 2 u 42 64 377 0.106 -0.006 0.027
> +server003s1 10.30.10.6 2 u 13 64 377 0.120 -0.009 0.016
Notice that awk finds the first line beginning with '*' and extracts its ninth column. In the example: -0.017
The problem is that sometimes I get the no-active-servers log message - intended to appear when there is no server marked with '*' - even though executing the same command in a console returns a number.
I know that I'm not closing the BufferedReader in that code, but could that be the reason for this behaviour? A new instance is being created (and left open until garbage collection) on each method invocation, but I don't think that should be the cause of this problem.
Runtime.exec() simply invokes the ProcessBuilder inside it, like that:
public Process exec(String[] cmdarray, String[] envp, File dir)
        throws IOException {
    return new ProcessBuilder(cmdarray)
        .environment(envp)
        .directory(dir)
        .start();
}
see OpenJDK Runtime.java
So there is nothing wrong with using it instead of the ProcessBuilder as is.
The problem is that you invoke:
p.waitFor();
before you obtained the InputStream.
This means the process will already have terminated by the time you obtain the InputStream, and its output may or may not still be available to you, depending on OS buffering nuances and the precise timing of the operations.
So, if you move the waitFor() to the bottom, your code should start working more reliably.
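A minimal sketch of the corrected ordering is below: read stdout first, then waitFor(). To keep the sketch runnable without an NTP daemon, a plain echo stands in for the real ntpq|awk pipeline; the command and the -0.017 value are placeholders, not the author's actual environment.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ReadThenWait {
    static Double readOffset(String[] cmd) throws Exception {
        Process p = Runtime.getRuntime().exec(cmd);
        Double offset = null;
        // Read the output FIRST, while the pipe is still live...
        try (BufferedReader buf = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line = buf.readLine();
            if (line != null && !line.trim().isEmpty()) {
                offset = Double.parseDouble(line.trim());
            }
        }
        // ...and only then wait for the process to terminate.
        p.waitFor();
        return offset;
    }

    public static void main(String[] args) throws Exception {
        // echo stands in for "ntpq -c peers | awk ..." here.
        String[] cmd = {"/bin/sh", "-c", "echo -0.017"};
        System.out.println(readOffset(cmd));
    }
}
```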
Under Linux, however, you should normally be able to read the remaining data from the pipe buffer even after the writing process has ended.
And the UNIXProcess implementation in OpenJDK actually makes explicit use of that: it tries to drain the remaining data once the process has exited, so that the file descriptor can be reclaimed:
/** Called by the process reaper thread when the process exits. */
synchronized void processExited() {
    synchronized (closeLock) {
        try {
            InputStream in = this.in;
            // this stream is closed if and only if: in == null
            if (in != null) {
                byte[] stragglers = drainInputStream(in);
                in.close();
                this.in = (stragglers == null) ?
                    ProcessBuilder.NullInputStream.INSTANCE :
                    new ByteArrayInputStream(stragglers);
            }
        } catch (IOException ignored) {}
    }
}
And this seems to work reliably enough, at least in my tests, so it would be nice to know which specific version of Linux/Unix and which JRE you are running.
Have you also considered the possibility of an application-level problem ?
I.e. ntpq is not really guaranteed to always return a * row.
So, it would be nice to remove the awk part from your pipe, to see whether there is some output every time.
Another thing to note is that if one of your shell pipeline steps fails (e.g. ntpq itself), you will also get empty output, so you will have to track STDERR as well (e.g. by merging it with STDOUT via the ProcessBuilder).
Sidenote
Doing waitFor before you start consuming the data is a bad idea in any case: if your external process produces enough output to fill the pipe buffer, it will hang waiting for someone to read it, which will never happen, because your Java process is blocked in waitFor at the same time.
As pointed out by Andrew Thompson, you should try ProcessBuilder instead.
String[] cmd = {"/bin/sh",
                "-c",
                "ntpq -c peers | awk ' $0 ~ /^\\*/ {print $9}'"};
ProcessBuilder pb = new ProcessBuilder(cmd);
pb.redirectErrorStream(true);
Process proc = pb.start();
BufferedReader buf = new BufferedReader(
        new InputStreamReader(proc.getInputStream()));
String line = null;
while ((line = buf.readLine()) != null) {
    localClockOffset = Double.parseDouble(line.trim());
    break;
}
proc.destroy();
Ref ProcessBuilder
Finally we have found the real problem.
I'm not going to change the accepted answer - I think it's useful too - but maybe someone can learn from our experience.
My Java program is launched by a shell script. When we execute the script manually, the ntpq command is found and invoked successfully. The problem arises when the software is fully deployed: in the final environment a cron-scheduled daemon keeps our program alive, but the PATH set by cron differs from the PATH our login profile gets.
PATH used by cron:
.:/usr/bin:/bin
PATH that we got login for launching the script manually:
/usr/sbin:/usr/bin:/bin:/sbin:/usr/lib:/usr/lib64:/local/users/nor:
/usr/local/bin:/usr/local/lib:.
Usually ntpq is in
/usr/sbin/ntpq
After we found the key to our problem, I searched Stack Overflow and found this relevant question, where the problem is better explained and solved:
How to get CRON to call in the correct PATHs
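One way around the cron PATH issue is to call the binary by absolute path ("/usr/sbin/ntpq"), or to extend the child's PATH explicitly via ProcessBuilder. A hedged sketch follows; echo $PATH stands in for the real ntpq invocation, and the appended directories are just the ones mentioned above, not a universal prescription.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class FixedPath {
    public static String run() throws Exception {
        // Under cron the PATH may be just ".:/usr/bin:/bin", so either call
        // the binary by absolute path ("/usr/sbin/ntpq"), or extend the
        // child's PATH explicitly, as below.
        ProcessBuilder pb = new ProcessBuilder(
                "/bin/sh", "-c", "echo $PATH"); // echo stands in for ntpq here
        pb.environment().put("PATH",
                pb.environment().getOrDefault("PATH", "/usr/bin:/bin")
                        + ":/usr/sbin:/sbin");
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            return r.readLine(); // the PATH the child actually sees
        }
    }
}
```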
Related
I want to start a process such that the JVM can die but the spawned process continues to run, even while it is writing to STDOUT.
I first tried using a ProcessBuilder with the output set to Files and passing in:
cmd /c myCmd.exe arg0 arg1
However, even after closing all input/output streams, if I call Process#waitFor, it does not return until myCmd.exe has finished. It seems it is still attached to the JVM in some way (even though the JVM can probably die at this point and not affect the child proc).
I then tried the start command; it seems that it is not on the path (I couldn't find the binary in c:\windows), so I ran it under cmd, and the arguments (separated by spaces) passed to ProcessBuilder became:
cmd /c start /b myCmd.exe arg0 arg2 >log 2>&1
That results in:
✓ Process#waitFor returning before myCmd.exe finished.
⚠ It seemed that I needed to use a different log file from the one passed to the ProcessBuilder.
✘ I then found the escaping becomes weird: if the command run was echo and the argument was ^^^^\foo, it would write ^\foo to the log file; I also noticed that if I gave it "^^^^\foo" it would return the same thing, i.e. "^^^^\foo".
So:
Is calling cmd.exe /c start /b the correct thing to do?
Am I doing something wrong with the escaping (which is really what I give to ProcessBuilder)? Should I perhaps be doing something different because cmd.exe calls start, perhaps some extra level of escaping? Or do I not understand Windows processes: do they even have proper support for taking an array of arguments?
Am I going about this the wrong way: should I be trying to call a native library from C? If so, what would it be? I don't mind having to call a C program to get my process running in the background.
I think the solution for creating a detached, background process that the JVM holds no references to, and which also supports passing arbitrary arguments in a sane way, is to use the CreateProcess API. For this I used:
<dependency>
<groupId>net.java.dev.jna</groupId>
<artifactId>platform</artifactId>
<version>3.5.0</version>
</dependency>
(It is an older version, but it happened to be already in use.)
The Java code to get it working is:
/**
 * @param command a pre-escaped command line, e.g. c:\perl.exe c:\my.pl arg
 * @param env A non-null environment.
 * @param stdoutFile
 * @param stderrFile
 * @param workingDir
 */
public void execute(String command, Map<String, String> env,
        File stdoutFile, File stderrFile, String workingDir) {
    WinBase.SECURITY_ATTRIBUTES sa = new WinBase.SECURITY_ATTRIBUTES();
    sa.bInheritHandle = true;       // I think the child process gets handles I make with this sa.
    sa.lpSecurityDescriptor = null; // Use default access token from current proc.
    HANDLE stdout = makeFileHandle(sa, stdoutFile);
    HANDLE stderr = null;
    if (stderrFile != null &&
            !stderrFile.getAbsolutePath().equals(stdoutFile.getAbsolutePath())) {
        stderr = makeFileHandle(sa, stderrFile);
    }
    try {
        WinBase.STARTUPINFO si = new WinBase.STARTUPINFO();
        // Assume si.cb is set by the JVM.
        si.dwFlags |= WinBase.STARTF_USESTDHANDLES;
        si.hStdInput = null; // No stdin for the child.
        si.hStdOutput = stdout;
        si.hStdError = Optional.ofNullable(stderr).orElse(stdout);

        DWORD dword = new DWORD();
        dword.setValue(WinBase.CREATE_UNICODE_ENVIRONMENT | // Probably makes sense.
                WinBase.CREATE_NO_WINDOW |  // Well, we don't want a window, so this makes sense.
                WinBase.DETACHED_PROCESS);  // I think this lets the JVM die without stopping the child.

        // Newer versions of platform don't use a reference.
        WinBase.PROCESS_INFORMATION.ByReference processInfoByRef =
                new WinBase.PROCESS_INFORMATION.ByReference();
        boolean result = Kernel32.INSTANCE
            .CreateProcess(null, // use next argument to get the task to run
                command,
                null, // Don't let the child inherit a handle to itself, because I saw someone else do it.
                null, // Don't let the child inherit a handle to its own main thread, because I saw someone else do it.
                true, // Must be true to pass any handle to the spawned process, including STDOUT and STDERR.
                dword,
                asPointer(createEnvironmentBlock(env)), // I hope the new process copies this memory.
                workingDir,
                si,
                processInfoByRef);
        try {
            // Did it start?
            if (!result) {
                throw new RuntimeException("Could not start command: " + command);
            }
        } finally {
            // Apparently both parent and child need to close the handles.
            Kernel32.INSTANCE.CloseHandle(processInfoByRef.hProcess);
            Kernel32.INSTANCE.CloseHandle(processInfoByRef.hThread);
        }
    } finally {
        // We need to close these:
        // https://stackoverflow.com/questions/6581103/do-i-have-to-close-inherited-handle-later-owned-by-child-process
        Kernel32.INSTANCE.CloseHandle(stdout);
        if (stderr != null) {
            Kernel32.INSTANCE.CloseHandle(stderr);
        }
    }
}

private HANDLE makeFileHandle(WinBase.SECURITY_ATTRIBUTES sa, File file) {
    return Kernel32.INSTANCE
        .CreateFile(file.getAbsolutePath(),
            Kernel32.FILE_APPEND_DATA, // IDK, I saw this in an example.
            Kernel32.FILE_SHARE_WRITE | Kernel32.FILE_SHARE_READ,
            sa,
            Kernel32.OPEN_ALWAYS,
            Kernel32.FILE_ATTRIBUTE_NORMAL,
            null);
}

public static byte[] createEnvironmentBlock(Map<String, String> env) {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    // This charset seems to work.
    Charset charset = StandardCharsets.UTF_16LE;
    try {
        for (Entry<String, String> entry : env.entrySet()) {
            bos.write(entry.getKey().getBytes(charset));
            bos.write("=".getBytes(charset));
            bos.write(entry.getValue().getBytes(charset));
            bos.write(0);
            bos.write(0);
        }
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
    bos.write(0);
    bos.write(0);
    return bos.toByteArray();
}

public static Pointer asPointer(byte[] data) {
    Pointer pointer = new Memory(data.length);
    pointer.write(0, data, 0, data.length);
    return pointer;
}
✓ The process started seems to keep running even if the JVM is stopped.
✓ I didn't need to have to deal with STDOUT/STDERR from ProcessBuilder and later another one from needing to redirect the command actually run.
✓ If you can correctly escape your command (code to do that is not in the answer as it is not mine to share) you can pass things like " which I could not work out how to do when using ProcessBuilder with cmd /c start /b command. It seemed the JVM was doing some escaping making it perhaps impossible to construct the needed string to get the correct command.
✓ I could see the file handles held by the JVM to stdout/stderr are released before the process finishes.
✓ I could create 13k tasks without the JVM throwing an OOM, with 62MB of memory given to the JVM (it looks like the JVM is not holding resources, as it would if you just used ProcessBuilder and cmd /c).
✓ an intermediate cmd.exe is not created
I have a Java RESTful service method which executes myscript.sh using ProcessBuilder. My script takes one input (example: myscript.sh /path/to-a/folder).
Inside the script is something like this:
-> execute a command which is multithreaded, i.e. parallel processing
-> echo "my message"
Now, when I call my script from a Linux command line it executes fine: first all the running threads finish, then some text output from the threaded command execution is shown on the terminal, and then the echoed message is shown.
But when I call the same script from Java using ProcessBuilder, the last echo message comes immediately and execution ends.
This is the way I call my script from Java:
ProcessBuilder processBuilder = new ProcessBuilder("/bin/bash", "/path/to/myscript.sh", "/path/to/folder/data");
Process proc = processBuilder.start();

StringBuffer output = new StringBuffer();
BufferedReader reader = new BufferedReader(new InputStreamReader(proc.getInputStream()));
String line = "";
while ((line = reader.readLine()) != null) {
    output.append(line + "\n");
}
System.out.println("### " + output);
I don't know what's happening, or how to debug it.
Can someone enlighten me on how to get the same behaviour from shell script when run from terminal or from java processBuilder?
Use ProcessBuilder.redirectErrorStream(boolean redirectErrorStream) with the argument true to merge the errors into the output. Alternatively, you could use the shell syntax cmd 2>&1 to merge stderr with stdout.
These are some of the reasons why you may be getting the output of the last echo statement immediately (instead of the script taking time to run and returning proper results):
Missing environment variables
The launched bash needs to source .bashrc or some such resource file
The launched bash may not be running in right directory (you can set this in ProcessBuilder)
The launched bash may not be finding some script/executable in its PATH
The launched bash may not be finding proper libraries in the path for any of the executables
Once you merge the error stream, you will be able to debug and see the errors for yourself.
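A minimal sketch of the merge, assuming a POSIX shell is available: the command here writes only to stderr, and with redirectErrorStream(true) that text becomes readable from the process's regular InputStream. The shell command is an illustrative placeholder, not the asker's script.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class MergedStreams {
    public static String run(String shellCmd) throws Exception {
        ProcessBuilder pb = new ProcessBuilder("/bin/sh", "-c", shellCmd);
        pb.redirectErrorStream(true); // stderr now arrives via getInputStream()
        Process p = pb.start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        p.waitFor(); // safe here: the output has already been drained
        return out.toString();
    }
}
```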
In your context, separate processes may be spawned in two ways:
1) Bash
/path/to/executables/executable &
This will spawn a new executable, and you need to wait for it to finish. Here's an answer that will help you.
2) Java
Process exec = Runtime.getRuntime().exec(command);
status = exec.waitFor();
Essentially, you need to wait for the process to end before you start reading its std/err streams.
If I understand the problem correctly, adding just this line to your code should suffice: status = exec.waitFor() (Before you obtain the streams)
Here's the JavaDoc for Process.waitFor() :
Causes the current thread to wait, if necessary, until the process represented by this Process object has terminated. This method returns immediately if the subprocess has already terminated. If the subprocess has not yet terminated, the calling thread will be blocked until the subprocess exits.
Returns:
the exit value of the subprocess represented by this Process object. By convention, the value 0 indicates normal termination.
Throws:
InterruptedException - if the current thread is interrupted by another thread while it is waiting, then the wait is ended and an InterruptedException is thrown
I am having trouble interacting with a process using getOutputStream. Here is my code:
Process p = null;
ProcessBuilder pb = new ProcessBuilder("/home/eric/this.sh");
pb.directory(new File("/home/eric/"));
p = pb.start();
InputStream in = null;
OutputStream outS = null;
StringBuffer commandResult = new StringBuffer();
String line = null;
int readInt;
int returnVal = p.waitFor();
in = p.getInputStream();
while ((readInt = in.read()) != -1)
commandResult.append((char)readInt);
outS = (BufferedOutputStream) p.getOutputStream();
outS.write("Y".getBytes());
outS.close();
System.out.println(commandResult.toString());
in.close();
Here is the output:
Reading package lists...
Building dependency tree...
Reading state information...
The following packages were automatically installed and are no longer required:
libmono2.0-cil libmono-data-tds2.0-cil libmono-system-data2.0-cil
libdbus-glib1.0-cil librsvg2-2.18-cil libvncserver0 libsqlite0
libmono-messaging2.0-cil libmono-system-messaging2.0-cil
libmono-system-data-linq2.0-cil libmono-sqlite2.0-cil
libmono-system-web2.0-cil libwnck2.20-cil libgnome-keyring1.0-cil
libdbus1.0-cil libmono-wcf3.0-cil libgdiplus libgnomedesktop2.20-cil
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
firefox-globalmenu
Suggested packages:
firefox-gnome-support firefox-kde-support latex-xft-fonts
The following NEW packages will be installed:
firefox firefox-globalmenu
0 upgraded, 2 newly installed, 0 to remove and 5 not upgraded.
Need to get 15.2 MB of archives.
After this operation, 30.6 MB of additional disk space will be used.
Do you want to continue [Y/n]? Abort
this.sh simply runs "gksudo apt-get install firefox".
I don't know why it is aborting and not taking my input "Y". Thanks.
There are several problems.
First: gksudo(1) does some dirty, non-standard tricks with the standard input and standard output of the commands it starts. It fails horribly. A good example is this command line:
$ echo foo | gksudo -g cat
I would expect some output, and the termination of cat as soon as echo has delivered the data. Nope. Both gksudo and cat hang around forever. No output.
Your use case would be
echo y | gksudo apt-get install ....
and this will not work either. As long as this is not solved, you can forget about remote-controlling any started program that requires user input.
Second: As already pointed out by Roger, waitFor() waits for the termination of the command. That will not happen any time soon without user input, given the gksudo problem.
Third: After moving waitFor down a bit, there is the next blocker: you wait for the complete output of the process, up to and including EOF. This will not happen any time soon either (see "first" and "second").
Fourth: Only after the process is already dead twice over (see "second" and "third") might it get some input - your Y (which might also need an additional \n).
Instead of solving this bunch of problems, there might be a better and much easier way: don't try to control apt-get install via standard input. Just give it appropriate options that automatically "answer" your questions. A quick man apt-get turns up some candidates:
-y, --yes, --assume-yes
--force-yes
--trivial-only
--no-remove
--no-upgrade
See the manual for details.
I think this is the better and more stable way.
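Sketched from the Java side, the fix is just to put the flag in the command array instead of writing to the child's stdin. The helper below is hypothetical (the command is built but not executed here), and -y is the only part the apt-get manual guarantees; whether gksudo belongs in the array depends on your setup.

```java
public class AptCmd {
    // Hypothetical helper: builds a non-interactive apt-get invocation.
    // With -y (--assume-yes), apt-get answers its own prompt, so no stdin
    // interaction (and none of the gksudo pitfalls above) is needed.
    static String[] installCommand(String pkg) {
        return new String[] {"gksudo", "apt-get", "install", "-y", pkg};
    }
}
```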
PS: Right now I'm pi*** o*** gksudo quite a bit, so excuse the rant above.
I've been trying to use Java's ProcessBuilder to launch an application in Linux that should run "long-term". The way this program runs is to launch a command (in this case, I am launching a media playback application), allow it to run, and check to ensure that it hasn't crashed. For instance, check to see if the PID is still active, and then relaunch the process, if it has died.
The problem I'm getting right now is that the PID remains alive in the system, but the GUI for the application hangs. I tried shifting the ProcessBuilder(cmd).start() into a separate thread, but that doesn't seem to be solving anything, as I hoped it would have.
Basically the result is that, to the user, the program APPEARS to have crashed, but killing the Java process that drives the ProcessBuilder.start() Process actually allows the created Process to resume its normal behavior. This means that something in the Java application is interfering with the spawned Process, but I have absolutely no idea what, at this point. (Hence why I tried separating it into another thread, which didn't seem to resolve anything)
If anyone has any input/thoughts, please let me know, as I can't for the life of me think of how to solve this problem.
Edit: I have no concern over the I/O stream created from the Process, and have thus taken no steps to deal with that--could this cause a hang in the Process itself?
If the process writes to stderr or stdout and you're not reading it, it will just "hang", blocking when writing to stdout/err. Either redirect stdout/err to /dev/null using a shell, or merge stdout/err with redirectErrorStream(true) and spawn another thread that reads from the stdout of the process.
You want the trick?
Don't start your process from ProcessBuilder.start(). Don't try to mess with stream redirection/consumption from Java (especially if you give no s**t about it ; )
Use ProcessBuilder.start() to start a little shell script that gobbles all the input/output streams.
Something like that:
#!/bin/bash
nohup $1 >/dev/null 2>error.log &
That is, if you don't care about stdout but still want to log stderr (do you?) to a file (error.log here).
If you don't even care about stderr, just redirect it to stdout:
#!/bin/bash
nohup $1 >/dev/null 2>&1 &
And you call that tiny script from Java, giving it as an argument the name of the process you want to run.
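On the Java side that call is a one-liner; a sketch, assuming the wrapper script above is saved somewhere like launch.sh (the path and command name here are placeholders):

```java
public class DetachLauncher {
    // Launch "cmd" through the tiny nohup wrapper, so the JVM never owns
    // the child's streams. "wrapper" is the path to that script.
    public static int launchDetached(String wrapper, String cmd) throws Exception {
        Process p = new ProcessBuilder("/bin/bash", wrapper, cmd).start();
        // waitFor() returns as soon as the wrapper exits; the nohup'ed
        // child keeps running, detached from the JVM.
        return p.waitFor();
    }
}
```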
If a process running on Linux that redirects both stdout and stderr to /dev/null still produces anything, then you've got a broken, non-compliant Linux install ;)
In other words: the above Just Works [TM] and gets rid of the problematic "you need to consume the streams in this and that order blah blah blah" Java-specific nonsense.
The thread running the process may block if it does not handle the output. This can be done by spawning a new thread that reads the output of the process.
final ProcessBuilder builder = new ProcessBuilder("script")
.redirectErrorStream(true)
.directory(workDirectory);
final Process process = builder.start();
final StringWriter writer = new StringWriter();
new Thread(new Runnable() {
public void run() {
IOUtils.copy(process.getInputStream(), writer);
}
}).start();
final int exitValue = process.waitFor();
final String processOutput = writer.toString();
Just stumbled on this after I had a similar issue. Agreeing with nos, you need to handle the output. I had something like this:
ProcessBuilder myProc2 = new ProcessBuilder(command);
final Process process = myProc2.start();
and it was working great. The spawned process even produced some output, but not much. When it started to output a lot more, it appeared my process wasn't even getting launched anymore. I updated to this:
ProcessBuilder myProc2 = new ProcessBuilder(command);
myProc2.redirectErrorStream(true);
final Process process = myProc2.start();
InputStream myIS = process.getInputStream();
String tempOut = convertStreamToStr(myIS);
and it started working again. (Refer to this link for convertStreamToStr() code)
Edit: I have no concern over the I/O stream created from the Process, and have thus taken no steps to deal with that--could this cause a hang in the Process itself?
If you don't read the output streams created by the process then it is possible that the application will block once the application's buffers are full. I've never seen this happen on Linux (although I'm not saying that it doesn't) but I have seen this exact problem on Windows. I think this is likely related.
JDK7 will have builtin support for subprocess I/O redirection:
http://download.oracle.com/javase/7/docs/api/java/lang/ProcessBuilder.html
In the meantime, if you really want to discard stdout/stderr, it seems best (on Linux) to invoke ProcessBuilder on a command that looks like:
["/bin/bash", "-c", "exec YOUR_COMMAND_HERE >/dev/null 2>&1"]
Another solution is to start the process with Redirect.PIPE and close the InputStream like this:
ProcessBuilder builder = new ProcessBuilder(cmd);
builder.redirectOutput(Redirect.PIPE);
builder.redirectErrorStream(true); // redirect the SysErr to SysOut
Process proc = builder.start();
proc.getInputStream().close(); // this will close the pipe and the output will "flow"
proc.waitFor(); //wait
I tested this in Windows and Linux, and works!
In case you need to capture stdout and stderr and monitor the process then using Apache Commons Exec helped me a lot.
I believe the problem is the buffering pipe from Linux itself.
Try using stdbuf with your executable:
new ProcessBuilder().command("/usr/bin/stdbuf", "-o0", "*executable*", "*arguments*");
The -o0 says not to buffer the output.
The same goes for -i0 and -e0, if you want to unbuffer the input and error pipes.
You need to read the output before waiting for the process to finish. You will not be notified if the output doesn't fill the buffer; if it does, the process will block until you read the output.
Suppose you have some errors or responses from your command which you are not reading. This would cause the application to stall and waitFor to wait forever. A simple way around this is to redirect the errors to the regular output.
I spent 2 days on this issue.
public static void exeCuteCommand(String command) {
try {
boolean isWindows = System.getProperty("os.name").toLowerCase().startsWith("windows");
ProcessBuilder builder = new ProcessBuilder();
if (isWindows) {
builder.command("cmd.exe", "/c", command);
} else {
builder.command("sh", "-c", command);
}
Process process = builder.start();
BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
String line;
while ((line = reader.readLine()) != null)
System.out.println("Cmd Response: " + line);
process.waitFor();
} catch (IOException e) {
e.printStackTrace();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
How can I be notified when a process I did not start ends, and is there a way to recover its exit code and/or output? The process doing the watching will be running as root/administrator.
You can check whether a process is currently running from Java by calling a shell command that lists all the current processes and parsing the output. Under Linux/Unix/Mac OS the command is ps; under Windows it is tasklist.
For the ps version you would need to do something like:
ProcessBuilder pb = new ProcessBuilder("ps", "-A");
Process p = pb.start();
BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
// Skip first (header) line: " PID TTY TIME CMD"
in.readLine();
// Extract process IDs from lines of output
// e.g. " 146 ? 00:03:45 pdflush"
List<String> runningProcessIds = new ArrayList<String>();
for (String line = in.readLine(); line != null; line = in.readLine()) {
runningProcessIds.add(line.trim().split("\\s+")[0]);
}
I don't know of any way that you could capture the exit code or output.
No (not on Unix/Windows, at least). You would have to be the parent process that spawned it in order to collect the return code and output.
You can sort of do that. On Unix, you can write a script that continuously greps the list of running processes and notifies you when the process you're searching for is no longer found.
This is pseudocode, but you can do something like this:
while ( true ) {
str = ps -Alh | grep "process_name"
if ( str == '' ) {
break
}
wait(5 seconds)
}
raise_alert("Alert!")
Check the man page for ps; your options may be different. Those are the ones I use on Mac OS X 10.4.
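The polling pseudocode above can be sketched in Java with the same ps-based approach used earlier in the thread. This assumes ps -A is available on the host; matching by name is as crude as the grep version (a substring match can hit unrelated processes), so treat it as a sketch, not a robust watcher.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ProcessWatcher {
    // Returns true while some process whose ps line contains "name" exists.
    static boolean isRunning(String name) throws Exception {
        Process p = new ProcessBuilder("ps", "-A").start();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                if (line.contains(name)) {
                    return true;
                }
            }
        }
        return false;
    }

    // Poll until the process disappears, then "raise the alert".
    static void watch(String name) throws Exception {
        while (isRunning(name)) {
            Thread.sleep(5000);
        }
        System.out.println("Alert! " + name + " is gone");
    }
}
```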
It looks like you could use JNA to tie into the "C" way of waiting for a PID to end (on Windows, poll OpenProcess(PROCESS_QUERY_INFORMATION ...) to see when it reports the process as dead; see Ruby's win32.c).