I am running a local BrowserStack test, for which I must establish a connection between the local server and BrowserStack. The instructions for this type of test are found here.
I am trying to assign the process to a variable:
Process serverConnection = new ProcessBuilder("C:\\Users\\folder\\BrowserStackLocal.exe","**Password**", serverURL + ",80").start();
Looking at the task manager, I see that this line creates two BrowserStackLocal.exe processes, which I think is due to how they manage logging in. Is there a way I can reference the second BrowserStackLocal.exe process?
In my cleanup I call
serverConnection.destroy();
But this only ends one of the processes. Right now I also call
Runtime.getRuntime().exec("taskkill /F /IM BrowserStackLocal.exe");
Which successfully ends the other instance, but I would much prefer to hold a reference and call .destroy().
Any suggestions for how to accomplish this would be greatly appreciated.
EDIT: I am almost sure that the reason for the two processes is the login functionality, as when I pass the wrong password only one window opens. The second process appears to be the one doing all the computing, based on its CPU usage.
EDIT 2: Further testing with BrowserStackLocal confirms that it is the process of logging in which creates an additional process. A solution could identify a way to trace the instantiation of this second process from the first process.
Edit 3: The processes appear to be linked, as when I exit one of them from the task manager, sometimes the other one will close automatically.
Turns out that is how BrowserStack handles the login process, and it's unavoidable.
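That said, if you are on Java 9 or later you can still clean up both processes through the handle you already hold. This is only a sketch built on the standard ProcessHandle API, not anything BrowserStack documents:

// serverConnection is the Process returned by the ProcessBuilder above.
// descendants() (Java 9+) lists every process spawned by it, including the
// second BrowserStackLocal.exe, so all of them can be destroyed together.
serverConnection.descendants().forEach(ProcessHandle::destroy);
serverConnection.destroy();

On older JVMs the taskkill fallback above is probably the pragmatic choice.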
I have a process that splits into many different branches. The business requirement is that, at any step, the process has to be sent back to the process originator for correction and re-approval.
I know it would be possible to use gateway logic at the end of each step and have the process return to the beginning that way; however, this would add so many lines and branches to the process that it would become incredibly difficult to read. I was thinking that an alternative way to accomplish this would be to simply abort the process and restart it based on the information in the existing process - this seems more maintainable.
Both of these would not be too hard to implement, but I am wondering if there is an easier way to achieve this goal. I have not worked with signals much, but is there a way to leverage that to return to a previous step in the process?
Start of Edited Solution
Based on Kris Verlaenen's suggestion, I created the example process below.
I started by putting all of the steps that might be skipped into an Embedded SubProcess. The Return, Rejection, and Cancel signals were added from the Boundary Events tab of the palette. While the process waits for the Supervisor or Manager approval to complete, you can send either of the signals to go back to the first step or jump to the end of the process.
Using an event sub-process allows you to trigger some part of your process every time the event occurs (this could be a signal, error, etc.).
Using an embedded sub-process with a boundary event might help, as that way you would only have to link back from that boundary event to the start, and whenever a signal / error occurs inside the sub-process, the boundary event can catch it. You could even make it interrupting, meaning it would cancel anything inside the sub-process as well, basically resetting what you were doing.
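As a rough illustration of the signal idea, the snippet below sends a signal to a running process instance from Java. The process id "process.example" and the signal name "Return" are placeholders, and the exact bootstrap depends on your jBPM/KIE version:

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.process.ProcessInstance;

public class SignalExample {
    public static void main(String[] args) {
        // Build a session from the kjar on the classpath (assumes a standard KIE project setup)
        KieContainer container = KieServices.Factory.get().getKieClasspathContainer();
        KieSession ksession = container.newKieSession();

        ProcessInstance instance = ksession.startProcess("process.example");

        // While the instance waits on the Supervisor/Manager approval task,
        // the signal fires the boundary event on the embedded sub-process,
        // which can route the flow back to the first step (or to the end).
        ksession.signalEvent("Return", null, instance.getId());

        ksession.dispose();
    }
}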
I created 2 agents, one written in Java and another written in LotusScript. The Java agent is scheduled to run every 5 minutes, while the LotusScript agent is scheduled to run every 15 minutes. Therefore there will come a time when they run simultaneously. When that happens, the Java agent must pause/wait until the LotusScript agent has finished. I tried to simulate locking using profile documents and environment variables, but to no avail. Is there a way that I can simulate locking between these two different agents? Please help. Thanks a lot!
Edit: I forgot to say that the 2 agents reside in TWO DIFFERENT databases, to complicate things more :(
Why not write a third agent (maybe in an extra database) that runs periodically every five minutes and starts the other two agents:
The Lotus Script Agent every time
The Java Agent every third run
... then you are also in control of the run order, without any complicated lock mechanisms.
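A minimal sketch of such a dispatcher agent is shown below. It assumes a Notes Java agent (AgentBase); the server name, database paths, and agent names are made-up placeholders, and the run counter is kept in a notes.ini variable purely for brevity:

import lotus.domino.*;

public class DispatcherAgent extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            // Open the databases holding the two agents (paths are hypothetical)
            Database lsDb = session.getDatabase("ServerName", "apps/lsAgentDb.nsf");
            Database javaDb = session.getDatabase("ServerName", "apps/javaAgentDb.nsf");

            // The LotusScript agent runs on every pass of this dispatcher
            lsDb.getAgent("LotusScriptAgent").run();

            // Keep a simple run counter in notes.ini to decide when the Java agent runs
            String raw = session.getEnvironmentString("DispatcherRunCount", true);
            int count = (raw == null || raw.length() == 0) ? 0 : Integer.parseInt(raw);
            count++;
            session.setEnvironmentVar("DispatcherRunCount", String.valueOf(count), true);

            // The Java agent only runs on every third pass (i.e. every 15 minutes)
            if (count % 3 == 0) {
                javaDb.getAgent("JavaAgent").run();
            }
        } catch (NotesException e) {
            e.printStackTrace();
        }
    }
}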
This is a near-foolproof way I have found that works for controlling the execution order of independent agents. I use a real Notes document as a pseudo-lock document.
The way I have done this before is to keep a Notes document that represents a "lock". Don't use a database profile document, as it's prone to replication/save conflict issues and you can't see it in a view.
The "lock" document can have a flag on it which tells the java agent whether it is allowed to run now. The java agent simply has code in it similar to this
import lotus.domino.*;

Session s = NotesFactory.createSession();
Database db = s.getDatabase("This Server", "This database");
View vw = db.getView("(lockView)");
Document docControl = vw.getFirstDocument();
String sRunStatus = docControl.getItemValueString("runStatus");

boolean bContinue = false;
// compare strings with equals(), not ==
if ("Go".equals(sRunStatus)) {
    bContinue = true;
}

if (bContinue) {
    // do agent code here....
    // Reset the status to "wait" so this agent will not run again until the
    // LotusScript agent has finished and set it back to "Go". This prevents
    // simultaneous execution. You can also use different state names instead
    // of Go/wait, like run0, run1, run2, etc.
    docControl.replaceItemValue("runStatus", "wait");
    docControl.save(true);
}
Note that you use the agents to set "Go"/"wait" values in the "runStatus" field on a control document. You only need 1 document so you then only need to get the first document out of the view.
The equivalent logic should be even simpler to add to the LotusScript agent. The only downside I can find is that the Java agent may not execute its code because the control document is not yet set to "Go" and the "if" test fails without running the logic, so it's not a pause as such, but it prevents the Java agent from executing out of its desired order with the LotusScript agent. It will then fire on the next scheduled run once the LotusScript agent has released it.
You can also extend this idea to manage a suite of agents, or even chain multiple agents, by using specific values like "RunAgent1" and "RunAgent2". Another benefit is that you can capture execution start times, errors, or anything else you require.
Enabling document locking in the database could work. If you can enable document locking in the database itself, you can have the agents lock a specific document and check whether the document is locked before/while running the code.
If enabling document locking in that database is not an option, you can consider creating a separate database to store the document.
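For illustration, the Java-agent side of that idea might look roughly like this. It assumes document locking is enabled on the database and uses placeholder view and class names:

import lotus.domino.*;

public class LockingAgent extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            Database db = session.getCurrentDatabase();  // must have "Allow document locking" enabled
            View vw = db.getView("(lockView)");          // placeholder view holding the lock document
            Document lockDoc = vw.getFirstDocument();

            // lock() returns false if another agent or user already holds the lock
            if (lockDoc.lock()) {
                try {
                    // ... do the agent's real work here ...
                } finally {
                    lockDoc.unlock();
                }
            }
            // If the lock could not be acquired, skip this run and let the
            // next scheduled execution try again.
        } catch (NotesException e) {
            e.printStackTrace();
        }
    }
}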
Why can't these agents run simultaneously? Maybe it is possible to achieve the same result while allowing the agents to run simultaneously. Trying to control agents this way will usually lead to other problems. If the database has replicas the solution might break.
You said that it is two databases, but really by far the simplest way to stop agents from running simultaneously is to put them in the same database. I will very often create a special database that only contains agents and log documents generated by the agents. The agents can open any database, so it really doesn't matter where they are.
I also led a project once in which we built our own control mechanism for agents, which was a combination of giulio's and spookycoder's ideas. Only one 'master' agent was scheduled, and it read the control document to decide which agent should run next. Let's say we have agents A, B and C. The master runs A, which immediately updates the control document to say "I am running", then updates fields with its progress information as it goes along, and finally, when it is done, writes the name of the next agent to run, e.g. "B". The next time the master runs, it looks at the control document. If the progress information shows that A has finished, the master will see that it is B's turn to run. Of course, A might realize that B has no work to do, so it might have written "C" instead, in which case the master will run C. The master also has the option to re-run A if the progress information shows that it did not finish the job.
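A stripped-down sketch of such a master agent is below. The view name, field names, and agent names are invented for the example:

import lotus.domino.*;

public class MasterAgent extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            Database db = session.getCurrentDatabase();
            Document control = db.getView("(controlView)").getFirstDocument();

            // The previously run agent wrote its progress and the name of the agent
            // that should run next (e.g. "AgentA", "AgentB", "AgentC")
            String progress = control.getItemValueString("progress");
            String next = control.getItemValueString("nextAgent");

            // Only dispatch the next agent if the previous one reported that it
            // finished; otherwise leave the control document alone until the next run.
            if ("done".equals(progress)) {
                Agent toRun = db.getAgent(next);
                if (toRun != null) {
                    toRun.run();
                }
            }
        } catch (NotesException e) {
            e.printStackTrace();
        }
    }
}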
I am creating a process P1 using Process P1 = Runtime.getRuntime().exec(...). My process P1 then creates other processes, say P2, P3, ...
Then I want to kill process P1 and all the processes created by P1 i.e. P2, P3...
P1.destroy() is killing P1 only, not its sub processes.
I also Googled it and found it's a Java bug:
http://bugs.sun.com/view_bug.do?bug_id=4770092
Does anyone have any ideas on how to do it?
Yes, it is a bug, but if you read the evaluation, the underlying problem is that it is next to impossible to implement "kill all the little children" on Windows.
The answer is that P1 needs to be responsible for doing its own tidy-up.
I had a similar issue where I started a PowerShell Process which started a Ping Process, and when I stopped my Java Application the PowerShell Process would die (I would use Process.destroy() to kill it) but the Ping Process it created wouldn't.
After messing around with it this method was able to do the trick:
private void stopProcess(Process process) {
    // destroy every descendant first (requires java.util.function.Consumer), then the process itself
    process.descendants().forEach(new Consumer<ProcessHandle>() {
        @Override
        public void accept(ProcessHandle t) {
            t.destroy();
        }
    });
    process.destroy();
}
It kills the given Process and all of its sub-processes.
PS: You need Java 9 to use the Process.descendants() method.
Java does not expose any information on process grandchildren with good reason. If your child process starts another process then it is up to the child process to manage them.
I would suggest either
Refactoring your design so that your parent creates/controls all child processes, or
Using operating system commands to destroy processes, or
Using another mechanism of control like some form of Inter-Process Communication (there are plenty of Java libraries out there designed for this).
Props to @Giacomo for suggesting the IPC before me.
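For the second option (OS commands), a rough Windows-only sketch is shown below. It relies on taskkill's /T flag to take out the whole process tree and on Process.pid(), which needs Java 9+; on older JVMs you would have to obtain the PID some other way or kill by image name instead:

import java.io.IOException;

public class KillTree {
    // Kills the given process and its entire subtree on Windows using taskkill /T.
    static void killTree(Process p) throws IOException, InterruptedException {
        Runtime.getRuntime().exec("taskkill /F /T /PID " + p.pid()).waitFor();
    }
}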
Are you writing the other processes' code, or are they something you cannot change?
If you can change them, I would consider modifying them so that they accept some kind of message (even through standard streams) and terminate nicely upon request, terminating their own children if they have any.
I don't find "destroying a process" particularly clean.
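The parent's side of such a message-based shutdown could look like the sketch below. The "quit" command is an invented convention that the child process would have to honour:

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.util.concurrent.TimeUnit;

public class GracefulStop {
    // Asks the child to shut itself (and its own children) down by writing a
    // command to its standard input, then falls back to destroy() if it ignores us.
    static void requestShutdown(Process child) throws IOException, InterruptedException {
        BufferedWriter in = new BufferedWriter(new OutputStreamWriter(child.getOutputStream()));
        in.write("quit");      // the child is expected to read this line and exit cleanly
        in.newLine();
        in.flush();
        if (!child.waitFor(10, TimeUnit.SECONDS)) {
            child.destroy();
        }
    }
}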
If it is a bug, as you say, then you must keep track of the process tree of child processes and kill all the child processes from that tree when you want to kill the parent process.
You need a tree data structure for that; if you only have a couple of processes, a list will do.
Because Runtime.exec() returns an instance of Process, you can use an array to store the references and kill them later with Process.destroy().
I am calling a .exe file from my java code using :
Runtime r=Runtime.getRuntime();
Process p=null;
p=r.exec("ABCD.exe");
I want the program to wait until the exe completes its job. (This is actually server-side code; control passes to the client side after this.) The problem is that the UI on the client side is populated before the .exe on the server side can produce the required components, so the UI that is formed does not have the correct files.
I have tried the usual p.waitFor() approach but it doesn't seem to work.
Any suggestions?
The short answer is that you want to call Process.waitFor() in your main thread, as you allude to.
However, dealing with Processes is not exactly fire-and-forget, because, as referenced by the class javadocs, you likely need to be reading the process' output. If you don't do this (which in this case will require a separate thread) then in many instances you'll have an effective deadlock - your Java app is waiting for the process to finish, but the process is trying to write output to a full buffer and thus waiting for the Java app to read its output.
If you gave more information about how "it didn't work", that would help with the diagnosis too.
Edit: on a completely separate point, there's no purpose in initialising p to null and then immediately reassigning it. Your second line would be clearer and less confusing as Process p = r.exec("ABCD.exe");.
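Putting those two points together, a rough sketch might look like the following. Exception handling is trimmed, "ABCD.exe" is kept from the question, and the error stream is left out for brevity (drain it the same way, or merge it via ProcessBuilder.redirectErrorStream(true)):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class RunAndWait {
    public static void main(String[] args) throws Exception {
        Process p = Runtime.getRuntime().exec("ABCD.exe");

        // Drain the process output on a separate thread so the child never blocks
        // on a full output buffer while we are blocked in waitFor().
        Thread drainer = new Thread(() -> {
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println("ABCD.exe: " + line);
                }
            } catch (Exception ignored) {
            }
        });
        drainer.start();

        int exitCode = p.waitFor();   // blocks until the exe has finished its job
        drainer.join();
        System.out.println("ABCD.exe finished with exit code " + exitCode);
    }
}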
How can I find out who created a Thread in Java?
Imagine the following: You use ~30 third party JARs in a complex plugin environment. You start it up, run lots of code, do some calculations and finally call shutdown().
This life-cycle usually works fine, except that on every run some (non-daemonic) threads remain dangling. This would be no problem if every shutdown was the last shutdown, I could simply run System.exit() in that case. However, this cycle may run several times and it's producing more garbage every pass.
So, what should I do? I see the threads in Eclipse's Debug View. I see their stack traces, but they don't contain any hint about their origin. No creator's stack trace, no distinguishable class name, nothing.
Does anyone have an idea how to address this problem?
Okay, I was able to solve (sort of) the problem on my own: I put a breakpoint into
Thread.start()
and manually stepped through each invocation. This way I found out pretty quickly that Class.forName() initialized a lot of static code which in turn created these mysterious threads.
While I was able to solve my problem I still think the more general task still remains unaddressed.
I religiously name my threads (using Thread(Runnable, String), say), otherwise they end up with a generic and somewhat useless name. Dumping the threads will then highlight what's running and (thus) what created them. This doesn't solve 3rd-party thread creation, I appreciate.
EDIT: The JavaSpecialist newsletter addressed this issue recently (Feb 2015) by using a security manager. See here for more details
MORE: A couple of details on using the JavaSpecialist technique: the SecurityManager API includes checkAccess(newThreadBeingCreated), which is called on the creating thread. The new thread already has its name initialized, so in that method you have access to both the creator's thread and the new one, and can log / print, etc. When I tried this, the code being monitored started throwing access-protection exceptions; I fixed that by calling it under AccessController.doPrivileged(new PrivilegedAction() { ... }), where the run() method called the code being monitored.
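A bare-bones way to experiment with that idea is sketched below; it is not the newsletter's exact code. This variant hooks the ThreadGroup overload of checkAccess, which the Thread constructor calls on the creating thread, and it overrides checkPermission to grant everything so the monitored code isn't denied anything (sidestepping the access-protection exceptions mentioned above). Note that SecurityManager is deprecated in recent JDKs, so this is only practical on older runtimes:

import java.security.Permission;

public class ThreadCreationLogger extends SecurityManager {
    @Override
    public void checkAccess(ThreadGroup g) {
        // The Thread constructor calls this on the creating thread, so the stack
        // trace here points at whoever is constructing the new thread. (It also
        // fires for other ThreadGroup operations, so expect some extra noise.)
        System.out.println("Thread being created by \"" + Thread.currentThread().getName() + "\":");
        new Throwable("thread creation point").printStackTrace();
    }

    @Override
    public void checkPermission(Permission perm) {
        // Grant everything: this manager exists only to observe, not to enforce.
    }

    public static void main(String[] args) {
        System.setSecurityManager(new ThreadCreationLogger());
        new Thread(() -> { }, "example-thread").start();
    }
}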
When debugging your Eclipse application, you can stop all threads by clicking the org.eclipse.equinox.launcher.Main field in the Debug view.
Then from there, for each thread you can see the stack trace and go up to the thread's run method.
Sometimes this can help and sometimes not.
As Brian said, it's good practice to name threads, because it's the only way to easily identify who created them.
Unfortunately it doesn't. Within Eclipse I see all the blocking threads, but their stack traces only reflect their internal state and (apparently) disclose no information about the location of their creation. Also from a look inside the object (using the Variables view) I was unable to elicit any further hints.
For local debugging purposes, one can attach a debugger to a Java application as early as possible.
Set a non-suspending breakpoint at the end of java.lang.Thread#init(java.lang.ThreadGroup, java.lang.Runnable, java.lang.String, long, java.security.AccessControlContext, boolean) that will Evaluate and log the following:
"**" + getName() + "**\n" + Arrays.toString(Thread.currentThread().getStackTrace())
This will output the thread name and how the thread was created (stack trace), which one can just scan through.