Running Concurrent .jar Processes within Python - java

I have two .jar files that I want to call from a Python script. However, after I call the first jar, the terminal sits and doesn't process anything past that point, because the server is running. The server starts fine, but I also want to start a second process, and have both keep running until I ask them to stop.
I've had trouble searching for possible solutions because I'm unsure of what terminology to use.
from subprocess import call
import glob
import sys
h2 = glob.glob("h2*.jar")
reasoner = glob.glob("reasoner*.jar")
h2 = h2.pop()
reasoner = reasoner.pop()
call(["java", "-jar", h2, "-tcp"]) # Any call commands after this point don't execute

Use subprocess.Popen instead of subprocess.call, which waits for the sub-process to terminate.
from subprocess import Popen
...
Popen(["java", "-jar", h2, "-tcp"])
FYI, the Python documentation is a good place to look, especially the subprocess module documentation for this specific problem.
UPDATE
If you want to wait for the sub-process explicitly when using Popen, save a reference to the Popen object and use its wait method (when you later want to stop a long-running server, call its terminate method):
proc = Popen(["java", "-jar", h2, "-tcp"])
# Do something else ..
proc.wait() # block execution until the sub-process terminates.

Related

How to Terminate a Process Normally Created using ProcessBuilder

I am creating processes using ProcessBuilder in my Java application. The created process executes some FFMPEG commands, which copy the RTSP streams into the specified destination media file.
ProcessBuilder builder = new ProcessBuilder("ffmpeg", "-i", RTSP_URL, "-f", fileFormat, destFilePath);
Process processToExecute = builder.start();
I want to close the process before it completes its execution. If I run this FFMPEG command directly in the Windows CMD and press 'CTRL+C' after 5 seconds, the process terminates with status '2', and I can play the media file created so far.
But if I do the same operation in my Java application using:
process.destroy(); //I call this method after 5 sec
I get the status code '1', which means abnormal termination. I get the status the following way:
processToExecute.destroy();
processToExecute.exitValue(); // This returns status '1'
And I can't play the media file; I think this is due to the abnormal termination of the process.
So how can I terminate a process created using ProcessBuilder the same way CMD does with CTRL+C, so that I can play the created media file? That is, I want to terminate the process with the status code '2' that I get when I terminate it from CMD.
EDIT #01: Sharing Findings
So, when I try to delete that file once app terminates, I get the following error:
The Action Can't be Performed Because File is Opened in FFMPEG.exe
This means the process has not released the command it was executing: that command still has the file open, which is why I can't play it. The process itself terminates when I call:
processToExecute.destroy();
But the task it was performing (the execution of the command) is still active. Strange!
EDIT #02: Sharing the Ultimate Reason
Actually, if I press 'CTRL+C' or 'q' directly in CMD while the process is running, it terminates the process successfully, and the process no longer appears in the list of currently executing processes.
But when I call the method programmatically:
processToExecute.destroy();
it terminates the process, yet when I look at the list of currently executing processes I can still see it there.
The same thing happens if I try to terminate the process using the 'taskkill' or 'kill' command in another CMD, specifying its name or pid: the process still terminates abnormally.
P.S. I use the following command to see the running processes:
tasklist
So this shows that the destroy() method from the application, and the 'taskkill' or 'kill' command from another CMD, do not terminate the process normally the way pressing 'CTRL+C' or 'q' does.
Maybe try...
builder.inheritIO();
System.exit(2);
Or you could try writing to the stdin of the process; ffmpeg treats a 'q' keypress on its standard input as a request to finish cleanly. Note that from the parent's side the child's stdin is reached through getOutputStream(), not getInputStream():
OutputStream stdin = process.getOutputStream(); // the child's stdin
stdin.write('q'); stdin.flush();
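To make the stdin idea concrete: ffmpeg finalizes its output file when it reads a 'q' from standard input, which is the programmatic equivalent of pressing 'q' in a console. The sketch below is mine, not the asker's code; to keep it runnable without ffmpeg installed, it launches a `java -version` child as a stand-in. For the real case, pass the ffmpeg command shown in the comment instead.

```java
import java.io.IOException;
import java.io.OutputStream;

public class GracefulStop {
    // Launch a child, send it a 'q' on stdin (as a console keypress would),
    // then wait for it to exit and return its exit status.
    static int run(String... command) throws IOException, InterruptedException {
        Process proc = new ProcessBuilder(command)
                .redirectErrorStream(true)
                .start();
        try (OutputStream stdin = proc.getOutputStream()) {
            stdin.write('q');  // ffmpeg polls stdin and finalizes the file on 'q'
            stdin.write('\n');
            stdin.flush();
        } catch (IOException e) {
            // The child may already have exited and closed its stdin; ignore.
        }
        return proc.waitFor(); // block until the child has terminated
    }

    public static void main(String[] args) throws Exception {
        // Stand-in child so the sketch runs without ffmpeg; in the real
        // application the command would be:
        // "ffmpeg", "-i", RTSP_URL, "-f", fileFormat, destFilePath
        String javaBin = System.getProperty("java.home") + "/bin/java";
        System.out.println("exit status: " + run(javaBin, "-version"));
    }
}
```

Because the stream is closed in the try-with-resources block, the child also sees end-of-input, which matters for tools that read stdin until EOF.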

Java process terminated abnormally with Unix kill or pkill commands does not delete temporary files

When a tool developed in Java is launched, it creates temporary files in a folder. If the process is terminated properly those files get deleted, but if it is terminated with the kill or pkill command they do not. Is there any way to send a signal to the Java process so that it deletes those files before terminating?
Please help me to solve this issue.
Thanks in Advance
It seems like File.deleteOnExit() is fragile when it comes to process termination. In contrast, using the NIO API with StandardOpenOption.DELETE_ON_CLOSE seems to be more reliable, even though its specification only says: “If the close method is not invoked then a best effort attempt is made to delete the file when the Java virtual machine terminates”.
E.g. when running the following program:
File f1=File.createTempFile("deleteOnExit", ".tmp");
f1.deleteOnExit();
final Path f2 = Files.createTempFile("deleteOnClose", ".tmp");
FileChannel ch = FileChannel.open(f2, StandardOpenOption.DELETE_ON_CLOSE);
System.out.println(f1);
System.out.println(f2);
LockSupport.parkNanos(Long.MAX_VALUE);
// the following statement is never reached, but it’s here to avoid
// early cleanup of the channel by garbage collector
ch.close();
and killing the process while it hangs at parkNanos, the JVM leaves the deleteOnExit tmp file while correctly deleting the deleteOnClose file on my machine.
You can add a shutdown hook and clean up everything you need explicitly.
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        // put your shutdown code here
    }
});
This is actually the same thing java.io.File#deleteOnExit does for you.
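A minimal sketch of the hook approach (class and method names are mine): a plain kill or pkill sends SIGTERM, which the JVM intercepts to run shutdown hooks before exiting, so the cleanup below does run in the scenario from the question. Only kill -9 (SIGKILL) cannot be intercepted; no hook runs in that case.

```java
import java.io.File;
import java.io.IOException;

public class CleanupHook {
    // Register a JVM shutdown hook that deletes the given files. The hook
    // runs on normal exit and on SIGTERM (plain `kill`/`pkill`), but nothing
    // can run on SIGKILL (`kill -9`).
    static Thread install(File... files) {
        Thread hook = new Thread(() -> {
            for (File f : files) {
                f.delete();
            }
        });
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("mytool", ".tmp");
        install(tmp);
        System.out.println("temporary file in use: " + tmp);
        // ... do the tool's real work; tmp is removed when the JVM exits
    }
}
```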

JRuby script with Rubeus and Swing exiting once packaged into jar using warble

I am trying to package a simple JRuby script into a jar file.
The script uses Rubeus::Swing and runs correctly when executed with the JRuby interpreter.
require 'rubygems'
require 'rubeus'
class Example01
  extend Rubeus::Swing

  def show
    JFrame.new("Rubeus Swing Example 01") do |frame|
      frame.visible = true
    end
  end
end

Example01.new.show
Once I package the script into a JAR with warble, when I execute:
java -jar jtest.jar
... the JFrame window shows up and instantly closes.
There is no indication of errors of any kind.
Does anyone know why this happens?
Warbler calls System.exit() after your main script exits. This causes the Swing EventThread to exit, closing your app.
https://github.com/jruby/warbler/blob/master/ext/JarMain.java#L131
I worked around this problem by joining with the event thread at the bottom of my start script like so:
event_thread = nil
SwingUtilities.invokeAndWait { event_thread = java.lang.Thread.currentThread }
event_thread.join
Hacky, but it works.
Just set the appropriate flag:
System.setProperty("warbler.skip_system_exit","true");

Can I disable Hudson's automatic scheduled builds all at once?

We have a large Hudson set up with many scheduled builds running all the time. Currently I'm trying to get one build to work properly, but I have to occasionally wait when a scheduled build enters the queue. Is there a way to disable all the scheduled builds so I can concentrate on my troublesome build, without adjusting the "cron" settings of each individual build?
Tell it to prepare to shut down.
Edit from OP (banjollity)
It's not perfect, but I think this is a reasonable "few mouse clicks solution with a default install" kind of solution, hence the accepted answer.
Queue up a job
Tell Hudson to prepare to shut down. This prevents other jobs being run in the meantime.
Diagnose faults with my job, commit new code that might fix it. (I love my job).
Cancel Hudson shut down.
Goto step 1.
The 'configuration slicing' plugin I contributed allows you to modify the cron settings of many jobs simultaneously. This should allow you to make the bulk changes you want.
Expanding upon Mikezx6r's suggestion, I just came up with a quick method to disable all builds matching a certain string:
[user@server jobs] $ for i in *build_name*; do sed -i s/"disabled>false"/"disabled>true/" $i/config.xml; done
You could also iterate through specific build names in the "for" loop:
[user@server jobs] $ for i in build1 build2 build3; do sed -i s/"disabled>false"/"disabled>true/" $i/config.xml; done
You can test it first to see what it will do by putting an "echo" before sed:
[user@server jobs] $ for i in build1 build2 build3; do echo sed -i s/"disabled>false"/"disabled>true/" $i/config.xml; done
Conversely, you can re-enable all matching jobs by switching around the sed script:
[user@server jobs] $ for i in build1 build2 build3; do sed -i s/"disabled>true"/"disabled>false/" $i/config.xml; done
I don't see a direct way to do it, but you could write something that updates the config.xml for all jobs.
In each job's directory in hudson, there's a config.xml. The <project> has an element called disabled that you could update to true, thereby disabling that build.
Not ideal, but once you have the script to walk a directory and change the value of disabled, you can always use it.
A search for something similar brought me to this question, and I realized there's another benefit of Michael Donohue's answer (and the plugin he contributed).
With "Configuration Slicing," it's easy to disable a subset of your jobs all at once. That's exactly what I needed to temporarily disable 7 of 8 related jobs so I could work on the 8th. Thanks Michael!
This can be done using the Jenkins script console, which runs Groovy scripts and can do almost anything.
The following script iterates through all projects and checks whether each one has a TimerTrigger (the check can be extended to other triggers as well).
import hudson.model.Hudson
import hudson.model.Project
import hudson.triggers.TimerTrigger
import hudson.triggers.Trigger
import hudson.triggers.TriggerDescriptor

// All the projects on which we can apply the getBuilders method
def allProjects = Hudson.instance.items.findAll { it instanceof Project }
def projectsToWorkOn = []

allProjects.each { Project project ->
    Map<TriggerDescriptor, Trigger> triggers = project.getTriggers()
    triggers.each { trigger ->
        if (trigger.value instanceof TimerTrigger) {
            projectsToWorkOn.push(project)
        }
    }
}

projectsToWorkOn.each { Project project ->
    project.disable()
    project.save()
}

How to wait for and close the command prompt in Java

I am using the following code to execute a batch file:
java.lang.Runtime rt = java.lang.Runtime.getRuntime();
Process pr = rt.exec("MyBatch.bat");
My batch file takes some time to execute. I want my servlet process to wait till the batch file execution completes. I would like to close the command prompt after executing the batch file. How can I do this?
Use Process.waitFor() to have your thread wait for the completion of the batch file's execution.
java.lang.Runtime rt = java.lang.Runtime.getRuntime();
Process pr = rt.exec("MyBatch.bat");
pr.waitFor();
You may also want to look at using ProcessBuilder instead of Runtime.getRuntime().exec() if you need access to the console's output and/or input.
The most straightforward way would be to use the .waitFor() method of the process object you created: pr.waitFor();
This is a blocking call, meaning that no other code will be executed before this call returns.
As others have said, you can use Process.waitFor(). However, before doing this you must start another thread that continually reads the process's output and error streams; otherwise, if there is an error that produces a lot of output, your application will hang once the pipe buffers fill.
Alternatively you can have your batch file redirect output and errors to a file.
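A minimal sketch of that reader-thread pattern (the class and method names are mine, and a `java -version` child stands in for `MyBatch.bat` so the example runs anywhere):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class BatchRunner {
    // Drain a stream on a background thread so the child process can never
    // block on a full stdout/stderr pipe while we wait for it.
    static Thread gobble(InputStream in, StringBuilder sink) {
        Thread t = new Thread(() -> {
            try (BufferedReader r = new BufferedReader(new InputStreamReader(in))) {
                String line;
                while ((line = r.readLine()) != null) {
                    sink.append(line).append('\n');
                }
            } catch (IOException ignored) {
            }
        });
        t.start();
        return t;
    }

    static int run(String... command) throws IOException, InterruptedException {
        Process pr = new ProcessBuilder(command).start();
        StringBuilder out = new StringBuilder();
        StringBuilder err = new StringBuilder();
        Thread t1 = gobble(pr.getInputStream(), out); // the child's stdout
        Thread t2 = gobble(pr.getErrorStream(), err); // the child's stderr
        int exit = pr.waitFor();                      // wait for the batch to finish
        t1.join();
        t2.join();
        return exit;
    }

    public static void main(String[] args) throws Exception {
        // With a real batch file this would be run("cmd", "/c", "MyBatch.bat");
        String javaBin = System.getProperty("java.home") + "/bin/java";
        System.out.println("exit status: " + run(javaBin, "-version"));
    }
}
```

Joining the reader threads after waitFor() guarantees the captured output is complete before you use it.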
Look at the documentation for the Process class.
You can also read your process's output through an InputStreamReader, watching for the lines your bat file prints; when you reach end-of-stream, the batch file has finished and you can exit the command prompt.
