I use Google Closure Compiler to compile JavaScript automatically using PHP (it needs to be done that way - in PHP, on a Windows machine with no security limitations). I wrote a simple PHP script which starts the process, passes the .js content to stdin, and receives the recompiled .js via stdout. It works fine; the problem is that compiling, for example, 40 .js files takes almost 2 minutes even on a strong machine. The major delay is that Java starts a new instance of the .jar for every script. Is there any way to modify the script below so that it creates the process only once and sends/receives .js content multiple times before the process ends?
function compileJScript($s) {
    $process = proc_open('java.exe -jar compiler.jar', array(
        0 => array("pipe", "r"), 1 => array("pipe", "w")), $pipes);
    if (is_resource($process)) {
        fwrite($pipes[0], $s);
        fclose($pipes[0]);
        $output = stream_get_contents($pipes[1]);
        fclose($pipes[1]);
        if (proc_close($process) == 0) // If fails, keep $s intact
            $s = $output;
    }
    return $s;
}
I can see several options, but I don't know whether they are possible or how to do them:
1. Create the process once and recreate only the pipes for every file.
2. Force Java to keep the JIT-ed .jar in memory for much faster re-execution.
3. If PHP can't do it, use a bridge (another .exe file which starts quickly every time, takes over stdin/stdout, and redirects them to the running compiler - if something like this even exists).
This is really a matter of coordination between the two processes.
Here is a quick 10-minute script (written just for fun) that launches a JVM and sends it an integer value; Java parses the number and returns it incremented, and PHP just sends it back again, ad infinitum.
PHP.php
<?php
echo 'Compiling..', PHP_EOL;
system('javac Java.java');

echo 'Starting JVM..', PHP_EOL;
$pipes = null;
$process = proc_open('java Java', [0 => ['pipe', 'r'],
                                   1 => ['pipe', 'w']], $pipes);
if (!is_resource($process)) {
    exit('ERR: Cannot create java process');
}
list($javaIn, $javaOut) = $pipes;

$i = 1;
while (true) {
    fwrite($javaIn, $i);      // <-- send the number
    fwrite($javaIn, PHP_EOL);
    fflush($javaIn);

    $reply = fgets($javaOut); // <-- blocking read
    $i = intval($reply);
    echo $i, PHP_EOL;
    sleep(1);                 // <-- wait 1 second
}
Java.java
import java.util.Scanner;
class Java {
    public static void main(String[] args) {
        Scanner s = new Scanner(System.in);
        while (s.hasNextInt()) {     // <-- blocking read
            int i = s.nextInt();
            System.out.print(i + 1); // <-- send it back
            System.out.print('\n');
            System.out.flush();
        }
    }
}
To run the script, simply put those files in the same folder and do
$ php PHP.php
you should start seeing the numbers being printed like:
1
2
3
.
.
.
Note that while those numbers are printed by PHP, they are actually generated by Java.
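To push the same idea toward the real use case, the Java side could loop over compile requests using the Closure Compiler's Java API (package com.google.javascript.jscomp, inside compiler.jar). The sketch below is untested; the length-prefixed framing and the class name CompileServer are my own assumptions, not part of the original setup:

import com.google.javascript.jscomp.CompilationLevel;
import com.google.javascript.jscomp.CompilerOptions;
import com.google.javascript.jscomp.SourceFile;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Untested sketch: one long-lived JVM, one compile per length-prefixed request.
class CompileServer {
    public static void main(String[] args) throws IOException {
        DataInputStream in = new DataInputStream(System.in);
        DataOutputStream out = new DataOutputStream(System.out);
        while (true) {
            int len;
            try { len = in.readInt(); } catch (EOFException e) { break; } // PHP closed the pipe
            byte[] buf = new byte[len];
            in.readFully(buf);
            String source = new String(buf, StandardCharsets.UTF_8);

            CompilerOptions options = new CompilerOptions();
            CompilationLevel.SIMPLE_OPTIMIZATIONS.setOptionsForCompilationLevel(options);
            com.google.javascript.jscomp.Compiler compiler = new com.google.javascript.jscomp.Compiler();
            compiler.compile(SourceFile.fromCode("externs.js", ""), // empty externs, an assumption
                             SourceFile.fromCode("input.js", source), options);

            byte[] result = compiler.toSource().getBytes(StandardCharsets.UTF_8);
            out.writeInt(result.length); // length prefix, because the JS itself contains newlines
            out.write(result);
            out.flush();
        }
    }
}

On the PHP side, mirror the framing: pack('N', strlen($s)) before writing the payload, and unpack('N', fread($javaOut, 4)) before reading each reply. Plain newline-delimited messages won't work here, because the JavaScript source contains newlines of its own.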
I don't think #1 from your list is possible because compiler.jar would need to have native support for keeping the process alive, which it doesn't (and if you consider that a compression algorithm needs the entire input before it can start processing data, it makes sense that the process doesn't stay alive).
According to Anyway to Boost java JVM Startup Speed?, some people have been able to reduce their JVM startup times with Nailgun:
Nailgun is a client, protocol, and server for running Java programs
from the command line without incurring the JVM startup overhead.
Programs run in the server (which is implemented in Java), and are
triggered by the client (written in C), which handles all I/O.
Related
I'm not sure what the right word for this is, or whether it's possible.
I would like to start an external process and stop the current process in the same terminal window.
(I would like to avoid piping I/O streams for the child process.)
public static void main(String[] args) {
    String ip = chooseFromCommandLine();
    String cmdLine = "ping " + ip;
    // launch cmdLine in the same terminal and exit this process
}
Essentially to create a "launcher"-type application but for the terminal.
In UNIX / Linux / POSIX, the terminology for this is "execing" the application. The current executing process is replaced with a new application.
Unfortunately, you can't do that in pure Java. You may be able to do it from native code that you call from Java.
Java's Runtime.exec(...) and related methods do the equivalent of a POSIX fork followed by an exec in the child process. In other words, the parent process (i.e. the JVM) keeps running.
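The closest pure-Java approximation is a sketch like this: not a real exec (the JVM stays alive as the parent), but inheritIO() hands the terminal directly to the child, and the wrapper then mirrors its exit status:

import java.io.IOException;

public class Launcher {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Not a true exec: the JVM keeps running while the child owns the terminal.
        String ip = args.length > 0 ? args[0] : "127.0.0.1"; // placeholder for chooseFromCommandLine()
        ProcessBuilder pb = new ProcessBuilder("ping", ip);
        pb.inheritIO();                  // child reads and writes this terminal directly
        int exit = pb.start().waitFor(); // wait for the child to finish
        System.exit(exit);               // propagate its exit code
    }
}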
How do I start processes from a script in a way that also allows me to terminate them?
Basically, I can easily terminate the main script, but terminating the external processes that this main script starts has been the issue. I googled like crazy for Perl 6 solutions. I was just about to post my question and then thought I'd open the question up to solutions in other languages.
Starting external processes is easy with Perl 6:
my $proc = shell("possibly_long_running_command");
shell returns a process object only after the process finishes, so I don't know how to programmatically find out the PID of the running process: the variable $proc isn't even created until the external process finishes. (Side note: after it finishes, $proc.pid returns an undefined Any, so it doesn't tell me what PID it used to have.)
Here is some code demonstrating some of my attempts to create a "self destructing" script:
#!/bin/env perl6

say "PID of the main script: $*PID";

# limit run time of this script
Promise.in(10).then( {
    say "Took too long! Killing job with PID of $*PID";
    shell "kill $*PID"
} );

my $example = shell('echo "PID of bash command: $$"; sleep 20; echo "PID of bash command after sleeping is still $$"');
say "This line is never printed";
This results in the following output; the main script is killed, but not the externally created process (see the output after the word Terminated):
[prompt]$ ./self_destruct.pl6
PID of the main script: 30432
PID of bash command: 30436
Took too long! Killing job with PID of 30432
Terminated
[prompt]$ my PID after sleeping is still 30436
By the way, the PID of sleep was also different (i.e. 30437) according to top.
I'm also not sure how to make this work with Proc::Async. Unlike the result of shell, the asynchronous process object it creates doesn't have a pid method.
I was originally looking for a Perl 6 solution, but I'm open to solutions in Python, Perl 5, Java, or any language that interacts with the "shell" reasonably well.
For Perl 6, there seems to be the Proc::Async module
Proc::Async allows you to run external commands asynchronously, capturing standard output and error handles, and optionally write to its standard input.
# command with arguments
my $proc = Proc::Async.new('echo', 'foo', 'bar');
# subscribe to new output from out and err handles:
$proc.stdout.tap(-> $v { print "Output: $v" });
$proc.stderr.tap(-> $v { print "Error: $v" });
say "Starting...";
my $promise = $proc.start;
# wait for the external program to terminate
await $promise;
say "Done.";
Method kill:
kill(Proc::Async:D: $signal = "HUP")
Sends a signal to the running program. The signal can be a signal name ("KILL" or "SIGKILL"), an integer (9) or an element of the Signal enum (Signal::SIGKILL).
An example on how to use it:
#!/usr/bin/env perl6
use v6;
say 'Start';
my $proc = Proc::Async.new('sleep', 10);
my $promise = $proc.start;
say 'Process started';
sleep 2;
$proc.kill;
await $promise;
say 'Process killed';
As you can see, $proc has a method to kill the process.
Neither Perl, Perl 6, nor Java, but bash:
timeout 5 bash -c "echo hello; sleep 10; echo goodbye" &
In Java you can create a process like this:
ProcessBuilder processBuilder = new ProcessBuilder("C:\\Path\\program.exe", "param1", "param2", "etc...");
Process process = processBuilder.start(); // start the process
process.waitFor(timeLimit, timeUnit);     // causes the current thread to wait until the process has terminated or the specified time elapses

// when you want to kill the process
if (process.isAlive()) {
    process.destroy();
}
Or you can use process.destroyForcibly(); see the Process documentation for more info.
To execute a bash command, point to the bash executable and set the command as a parameter.
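For example, a minimal sketch (the one-liner and the five-second limit are just illustrations):

import java.util.concurrent.TimeUnit;

public class BashExample {
    public static void main(String[] args) throws Exception {
        // bash receives the whole one-liner as a single argument after -c
        ProcessBuilder pb = new ProcessBuilder("/bin/bash", "-c", "echo hello; sleep 10; echo goodbye");
        Process process = pb.start();
        if (!process.waitFor(5, TimeUnit.SECONDS)) {
            process.destroy(); // time limit reached, kill it
        }
    }
}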
I'm trying to spawn the following bash script and capture its stdout asynchronously:
#!/bin/bash
sleep 2;
echo "one";
sleep 2;
echo "two";
sleep 2;
echo "three";
# ... possibly infinite..
Here is the Java Code so far:
ProcessBuilder pb = new ProcessBuilder("sleeper");
Process process = pb.start();
InputStream input = process.getInputStream();
// now I continue in pseudo-code:
supervise the input-stream.
whenever a new line arrives:
check if the line equals "two";
then: doSomeAction();
Note: I'm not actually writing the program in Java. I'm writing it in Clojure, but I haven't found a Clojure approach to do so yet. So I'm trying to use the Java native API wrapped by Clojure.
A Node.js example
To clarify my intention a bit more, here's JavaScript code for Node.js that does exactly what I want:
const spawn = require('child_process').spawn;
const sleeper = spawn('sleeper');
sleeper.stdout.on('data', (data) => {
    if (data.toString() === "two\n") {
        doSomeAction();
    }
});
You can use the tools in clojure.java.io together with straightforward Java interop.
Here’s some code to get you started:
(require '[clojure.java.io :refer [reader]])
(let [process (.start (ProcessBuilder. ["./sleeper"]))]
(with-open [r (reader (.getInputStream process))]
(doseq [line (line-seq r)]
(when (= line "two")
(println line)))))
Paste this into your REPL and you should see two being output after the appropriate delay.
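If you want to see the raw Java API that the Clojure above wraps, here is a minimal sketch of the same loop (doSomeAction() is a stand-in for the question's hypothetical handler):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class WatchSleeper {
    public static void main(String[] args) throws Exception {
        Process process = new ProcessBuilder("./sleeper").start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) { // blocks until the next line arrives
                if (line.equals("two")) {
                    doSomeAction();
                }
            }
        }
    }

    static void doSomeAction() { // placeholder for the question's handler
        System.out.println("got \"two\"");
    }
}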
I'm trying to write a Groovy script that wraps another command and am having trouble with the stdout/stderr order. My script is below:
#!/usr/bin/env groovy
synchronized def output = ""
def process = "qrsh ${args.join(' ')}".execute()

def outTh = Thread.start {
    process.in.eachLine {
        output += it
        System.out.println "out: $it"
    }
}
def errTh = Thread.start {
    process.err.eachLine {
        output += it
        System.err.println "err: $it"
    }
}

outTh.join()
errTh.join()
process.waitFor()
System.exit(process.exitValue())
My problem is that the output doesn't appear on the terminal in the correct order. Below is the wrapper's output:
[<cwd>] wrap.groovy -cwd -V -now n -b y -verbose ant target
waiting for interactive job to be scheduled ...
Your interactive job 2831303 has been successfully scheduled.
Establishing builtin session to host <host> ...
Buildfile: build.xml
BUILD FAILED
Target "target" does not exist in the project "null".
Total time: 0 seconds
Your job 2831303 ("wrap.groovy") has been submitted
Below is the unwrapped command output.
[<cwd>] qrsh -cwd -V -now n -b y -verbose ant target
Your job 2831304 ("ant") has been submitted
waiting for interactive job to be scheduled ...
Your interactive job 2831303 has been successfully scheduled.
Establishing builtin session to host host ...
Buildfile: build.xml
BUILD FAILED
Target "target" does not exist in the project "null".
Total time: 0 seconds
Why does the "Your job has been submitted" message appear as the first line in one case and the last line in the other? I'm guessing it's related to the Java libraries, not Groovy.
This is because of buffering. The threads which read stdout and stderr will not process the output the moment it is written by the child process. Instead, both streams are buffered, so your process won't see anything until the child flushes the streams.
When the data is on the way, which thread gets the CPU first? There is no way to tell. Even if the data for stderr arrives a few milliseconds before stdout, if the stdout thread has the CPU right now, it will get its data first.
What you could do is use Java NIO (channels) and a single thread, and process all output from stderr first, but that still wouldn't guarantee that the order is preserved. Because of the buffering between the child and parent process, you could get 4 KB of text from one stream before you see a single byte of the other.
Unfortunately, Groovy's String.execute() (which uses Runtime.exec() under the hood) gives you no way to merge the two streams into one. On Unix, you could run the command with sh -c 'cmd 2>&1'. That redirects stderr to stdout, so in the parent process you can just read stdout and ignore stderr.
The same works for OS X (since it's Unix-based). On Windows, you could install Perl or a similar tool to run the process; that allows you to mess with the file descriptors.
PS: Pray that args never contains spaces. String.execute() is a really bad way to run a process; use java.lang.ProcessBuilder instead.
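ProcessBuilder can also do the merge for you via redirectErrorStream(true), which makes the single-reader approach work cross-platform. A minimal sketch (the qrsh arguments are only an illustration):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class Wrap {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder("qrsh", "-cwd", "-V", "ant", "target");
        pb.redirectErrorStream(true); // child's stderr goes into the same pipe as stdout
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line); // lines arrive in the order the child wrote them
            }
        }
        System.exit(p.waitFor());
    }
}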
Try putting System.out.flush() after you do your println(). If I'm right, the messages are appearing in different orders because System.out is being buffered.
In our project, the Java web service communicates with a backend program written in C and Perl. We are using ProcessBuilder to execute a backend (UNIX) job, FrameworkHandler:
ProcessBuilder process = new ProcessBuilder("FrameworkHandler", "-a", "ACTION");
process.start();
FrameworkHandler invokes a Perl script to perform some action. The Perl script internally does a diff between two XML files and uses a print function to report the error:
sub print_error
{
    $err_msg = shift;
    print STDERR "$err_msg\n";
}
Whenever there is a difference between the files, the Perl program hangs in the print_error function. If we execute the Perl program in the UNIX shell, it works without any issues, but if we execute it through the web service, it does not return after the diff command. Because of this, the web service does not return a response either. Could the greater-than (>) symbols in the XML tags be causing the problem?
Any help is much appreciated.
Part of the error output:
< diff -udr --new-file --label=postProcess1 --label=postProcess2 postProcess1 postProcess2
< --- postProcess1
< +++ postProcess2
< @@ -124,6 +124,36 @@
< <LOCATION></LOCATION>
< <ADDRESS_PART1>Test Address ^D</ADDRESS_PART1 >
< </address_details>
< + <address_details>
< + <CITY></CITY>
< + <STATE>12</STATE>
The API docs say:
“Because some native platforms only provide limited buffer size for standard input and output streams, failure to promptly write the input stream or read the output stream of the subprocess may cause the subprocess to block, and even deadlock.”
Are you complying?
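In this case, that means the parent must keep draining FrameworkHandler's output; otherwise the Perl child's write to STDERR blocks as soon as the pipe buffer fills. A minimal sketch (FrameworkHandler and its arguments are taken from the question; the rest is an assumption):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class Backend {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder("FrameworkHandler", "-a", "ACTION");
        pb.redirectErrorStream(true); // fold stderr into stdout so one reader drains both
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line); // consume (or log) every line so the pipe never fills
            }
        }
        System.out.println("exit: " + p.waitFor());
    }
}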