Introducing delay in a Java program

I am calling a .exe file from my Java code using:
Runtime r=Runtime.getRuntime();
Process p=null;
p=r.exec("ABCD.exe");
I want the program to wait till the exe completes its job. (This is actually server-side code; control passes to the client side after this.) The problem now is that the UI on the client side is populated before the .exe on the server side can form the required components. Hence the UI that is formed does not have the correct files.
I have tried the normal p.waitFor() thing but it doesn't seem to work.
Any suggestions?

The short answer is that you want to call Process.waitFor() in your main thread, as you allude to.
However, dealing with Processes is not exactly fire-and-forget, because, as referenced by the class javadocs, you likely need to be reading the process' output. If you don't do this (which in this case will require a separate thread) then in many instances you'll have an effective deadlock - your Java app is waiting for the process to finish, but the process is trying to write output to a full buffer and thus waiting for the Java app to read its output.
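For concreteness, here is a minimal sketch of that pattern, assuming ABCD.exe writes whatever it produces to standard output (if it also writes to standard error, drain that stream too, or merge the two with ProcessBuilder.redirectErrorStream(true)):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class RunAndWait {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process p = Runtime.getRuntime().exec("ABCD.exe");

        // Drain the child's standard output on a separate thread so the
        // child never blocks on a full buffer while we wait for it.
        Thread drainer = new Thread(() -> {
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println("ABCD.exe: " + line);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        drainer.start();

        int exitCode = p.waitFor();   // block until the exe finishes
        drainer.join();
        System.out.println("ABCD.exe exited with code " + exitCode);
    }
}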
If you gave more information about how "it didn't work", that would help with the diagnosis too.
Edit: on a completely separate point, there's no purpose in initialising p to null and then immediately reassigning it. Your second line would be clearer and less confusing as Process p = r.exec("ABCD.exe");.


Thinking in Node with a Java background [closed]

I am a Java developer, where everything works in a sequential way (or concurrently with multiple threads), one thing after another, and it's logical to place things sequentially.
But Node works concurrently with a single thread. How can that be beneficial if it is working on only a single thread?
Frankly, I don't get the concept of a single thread in Node. Does only one thread handle everything?
Any advice on how I can start thinking in Node would be appreciated.
Synchronous Programming (Java)
If you are familiar with synchronous programming (writing code that does one thing after the other), as in Java or .NET, take the following code as an example:
var fs = require('fs');
var content = fs.readFileSync('simpleserver1.js','utf-8');
console.log('File content: ');
console.log(content);
It writes out the code for a simple web server to the console. The code works sequentially, executing one line after another; the next line is not executed until the previous line finishes executing.
Although this works well, what if the file in this example were really large and took minutes to read? How could other operations be executed while that long-running operation is in progress?
These questions do not arise if you are working in Java, because you have many threads to do the work for you (to serve multiple requests).
Asynchronous Programming (Node.js)
But when you are using Node, you just have a single thread, which serves all requests.
So asynchronous programming comes in to help you in JavaScript (Node).
To execute operations while other long operations are running, we use function callbacks. The code below shows how to use an asynchronous callback function:
var fs = require('fs');
fs.readFile('simpleserver1.js', 'utf-8', function(err, data) {
    if (err) {
        throw err;
    }
    console.log('executed after the file finishes reading');
});
//xyz operation
Notice that the callback, which logs "executed after the file finishes reading", runs only once the file has been read. Now look at the //xyz operation in the code: while the file is being read, the server does not wait for the read to complete; it just starts executing the //xyz operation, allowing other work to be performed, and gets back to the callback provided to fs.readFile() when the file is ready.
So that's how asynchronous programming works in Node.
Also, if you want to compare Java and Node, you can read this article.
EDIT:
How is Node.js single threaded?
Let's take a scenario where clients make requests to a server.
Assumptions:
1) There is a single server process, say serverProcess.
2) There are 2 clients requesting the server, say clientA and clientB.
3) Now, consider that clientA is going to require a file operation (like the one shown above using fs).
What happens here:
Flow:
1) clientA makes a request to serverProcess. The server gets the request and starts performing the file operation. Now it waits until the file is ready to read (the callback has not been invoked yet).
2) clientB makes a request to serverProcess. The server is free right now, as it is not actively serving clientA, so it serves clientB. In the meantime, the callback from fs.readFile notifies the server that the file data is ready and it can perform operations on it.
3) Now the server starts serving clientA again.
Now you see, there was just one server thread, which handled both client requests, right?
If this had been Java, you would have created another server thread to serve clientB while clientA was being served by the first thread and waiting for the file to be read. So this is how Node is single threaded: a single process handles all the requests.
Question: if there is another process involved that prepares the data from the file system, how can you say Node is single threaded?
I/O (files/database) is itself handled by a different process; the difference is:
1) Node does not wait for everything to be ready (like Java does); instead, it just starts its next piece of work (or serves other requests). Whatever happens, Node will not create a different thread to serve the rest of the requests (unless you explicitly make it do so, which is not recommended).
2) Java, on the other hand, will create another thread itself for serving new requests.
This has been said a million times, but let me give you a short answer with respect to Java.
In Java, you create a separate Thread if you want to read a long file without blocking the main thread.
In JavaScript, you just read the file using callbacks.
The main differences between the two:
It is easier to screw up the code with multiple threads (race conditions, etc.).
You do not really need the power of a second CPU core to read a file; it is a question of slow I/O, not intensive computation.
With callbacks, there is a single thread, as you said. It just asks the underlying system to read the file and continues executing your code. Once the file is read, JavaScript pauses the code it was executing and comes back to run your callback.
Sometimes you also have to do computationally intensive work in JavaScript. In that case you can spawn a new process; look into the cluster module. But usually, computationally or I/O-heavy operations are already implemented for you, and you just use them via callbacks.
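To make the Java side of that contrast concrete, here is a minimal sketch (reusing the file name from the Node example above, purely for illustration) of pushing a slow file read onto a separate thread so the main thread stays free:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class BackgroundRead {
    public static void main(String[] args) throws InterruptedException {
        // Hand the slow I/O to another thread so the main thread is not blocked.
        Thread reader = new Thread(() -> {
            try {
                String content = new String(
                        Files.readAllBytes(Paths.get("simpleserver1.js")));
                System.out.println("File content:\n" + content);
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        reader.start();

        // The main thread is free to keep serving other work in the meantime.
        System.out.println("Main thread is doing other things");

        reader.join();   // wait for the background read before exiting
    }
}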
OK, giving you a head start: it is not about threads, it is about tasks per second. In a threaded model, threads block while they wait for something.
In a non-blocking design, every time you wait for something you just give the thread back, and you are woken up when the event you were waiting for has occurred. Such events are known as futures. So it reads like: in the future I want to do this when this and that have happened (or, in the failure case, do this other thing). That's basically it.
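In Java terms, that style maps onto something like CompletableFuture. A rough sketch of the idea (the slow work here is only a placeholder):

import java.util.concurrent.CompletableFuture;

public class FutureExample {
    public static void main(String[] args) {
        // Start the slow work without blocking the current thread...
        CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
            return "data from a slow source";   // stands in for slow I/O
        });

        // ...and register what should happen when it completes (or fails).
        future.whenComplete((result, error) -> {
            if (error != null) {
                System.err.println("failed: " + error);
            } else {
                System.out.println("got: " + result);
            }
        });

        System.out.println("current thread keeps doing other work");
        future.join();   // only here do we actually wait for the result
    }
}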
It is not specific to Node or JavaScript. It is famous in Scala too, and there are plenty of other languages. If you are a Java person, look into asynchronous processing: Jetty provides it, and Vert.x is famous for its share-nothing architecture.
So have fun with this. I use it regularly. I have a server storing 20GB of data in a custom datastore. Want to know how we scaled? We bought 512GB for the server and ran 20 of those stores in parallel, sharing nothing. It's like having 20 servers in one machine with no noticeable latency, and you scale with the cores. That's how we do business in today's world.
Hardware is cheap, so why fiddle with concurrency at the lowest level?

(Java) Terminate function on System.in... possible?

I am currently working on a project in Java where I have to make an agent that interacts with a server.
Every 50 ms, the server receives the last thing I output to System.out and sends me a new set of lines as a 'state' through System.in, which I analyze before sending my next message to System.out.
Also, if the server receives multiple outputs from me, it only regards the most recent one.
..
As for my question:
My program originally constructed a tree and then analyzed each leaf node to see which would be optimal, and then waited around for the next input, but I can recursively do a deeper tree search that would make my output 'better' (and again and again to keep returning a better result).
Using this and the fact that if the server receives multiple outputs, it only takes the most recent one, I could do each level, print my result and start the next level. But here comes my problem...
I can't be stuck in some complex algorithm while I am supposed to be receiving the next input, as I will then miss it. So I was wondering whether there is a way to cancel whatever else I am doing when I receive something via System.in, then go back to the beginning of the function and start the search again with the new set of input (and rinse and repeat).
I hope this all makes sense,
Thank ye all
You absolutely require multiple threads (or multiple processes) here.
I assume that you've solved the problem of receiving input on System.in, as well as the problem of your algorithm. The next step is to package each as a Runnable, and hand each a reference to a queuing object. This will scaffold a Producer-Consumer relationship.
Whenever your listening Runnable (the Producer) gets a message, it needs to put it on your queue. After every unit of work, your algorithm (the Consumer) should look into the queue for items that are there. If it finds something, it should integrate it as normal. If not, it continues on with its work.
Both the Producer and the Consumer need to be started in their own threads and allowed to run concurrently.
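A minimal sketch of that Producer-Consumer setup, assuming the state arrives as lines on System.in; the search method here is a hypothetical placeholder for the real tree search, which deepens one level per unit of work:

import java.util.Scanner;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AgentLoop {

    // Placeholder for the real tree search at a given depth.
    private static String search(String state, int depth) {
        return "best move for depth " + depth;
    }

    public static void main(String[] args) {
        BlockingQueue<String> states = new LinkedBlockingQueue<>();

        // Producer: reads each new state from System.in and queues it.
        Thread producer = new Thread(() -> {
            Scanner in = new Scanner(System.in);
            while (in.hasNextLine()) {
                states.add(in.nextLine());
            }
        });

        // Consumer: deepens the search one level at a time, and between
        // levels checks whether a fresher state has arrived on the queue.
        Thread consumer = new Thread(() -> {
            String state = null;
            int depth = 0;
            while (true) {
                String latest = states.poll();   // non-blocking check of the queue
                if (latest != null) {
                    state = latest;              // newer input wins
                    depth = 0;                   // restart the search on it
                }
                if (state != null) {
                    depth++;
                    // server only keeps the most recent output, so print each level
                    System.out.println(search(state, depth));
                }
            }
        });

        producer.start();
        consumer.start();
    }
}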

Java ProcessBuilder: Input/Output Stream

I want to invoke an external program from Java code, and Google tells me that Runtime or ProcessBuilder can help me do this. I have tried it, and a problem comes up: the Java program can't exit, meaning both the subprocess and the parent process wait forever; they are hanging, or deadlocked.
Someone told me the reason is that the subprocess's buffer is too small: it tries to give data back to the parent process, but the parent process doesn't read it in time, so both of them hang. So they advised me to fork a thread to be in charge of reading the subprocess's buffered output. I did what they told me, but there is still a problem.
Then I closed the output stream obtained from getOutputStream(). Finally, the program succeeded. But I don't know why that happened. Is there some relationship between the output stream and the input stream?
You have provided very few details in your question, so I can only provide a general answer.
All processes have three standard streams: standard input, standard output and standard error. Standard input is used for reading in data, standard output for writing out data, and standard error for writing out error messages. When you start an external program using Runtime.getRuntime().exec() or ProcessBuilder, Java will create a Process object for the external program, and this Process object will have methods to access these streams.
These streams are accessed as follows:
process.getOutputStream(): returns the standard input of the external program. This is an OutputStream, as it is something your Java code will write to.
process.getInputStream(): returns the standard output of the external program. This is an InputStream, as it is something your Java code will read from.
process.getErrorStream(): returns the standard error of the external program. This is an InputStream as, like standard output, it is something your Java code will read from.
Note that the names of getInputStream() and getOutputStream() can be confusing.
All streams between your Java code and the external program are buffered. This means each stream has a small amount of memory (a buffer) where the writer can write data that is yet to be read by the reader. The writer does not have to wait for the reader to read its data immediately; it can leave its output in the buffer and continue.
There are two ways in which writing to buffers and reading from them can hang:
attempting to write data to a buffer when there is not enough space left for the data,
attempting to read from an empty buffer.
In the first situation, the writer will wait until space is made in the buffer by reading data out of it. In the second, the reader will wait until data is written into the buffer.
You mention that closing the stream returned by getOutputStream() caused your program to complete successfully. This closes the standard input of the external program, telling it that there will be nothing more for it to read. If your program then completes successfully, this suggests that your program was waiting for more input to come when it was hanging.
It is perhaps arguable that if you do run an external program, you should close its standard input if you don't need to use it, as you have done. This tells the external program that there will be no more input, and so removes the possibility of it being stuck waiting for input. However, it doesn't answer the question of why your external program is waiting for input.
Most of the time, when you run external programs using Runtime.getRuntime().exec() or ProcessBuilder, you don't often use the standard input. Typically, you'd pass whatever inputs you'd need to the external program on the command line and then read its output (if it generates any at all).
Does your external program do what you need it to and then get stuck, apparently waiting for input? Do you ever need to send it data to its standard input? If you start a process on Windows using cmd.exe /k ..., the command interpreter will continue even after the program it started has exited. In this case, you should use /c instead of /k.
Finally, I'd like to emphasise that there are two output streams, standard output and standard error. There can be problems if you read from the wrong stream at the wrong time. If you attempt to read from the external program's standard output while its buffer is empty, your Java code will wait for the external program to generate output. However, if your external program is writing a lot of data to its standard error, it could fill the buffer and then find itself waiting for your Java code to make space in the buffer by reading from it. The end result of this is your Java code and the external program are both waiting for each other to do something, i.e. deadlock.
This problem can be eliminated simply by using a ProcessBuilder and ensuring that you call its redirectErrorStream() method with a true value. Calling this method redirects the standard error of the external program into its standard output, so you only have one stream to read from.
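Putting those points together, here is a minimal sketch (the command name and argument are placeholders) that closes the child's standard input because nothing will be written to it, merges standard error into standard output, and drains the merged output before waiting for the process to exit:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class RunExternal {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Placeholder command; replace with the program you actually run.
        ProcessBuilder pb = new ProcessBuilder("some-program.exe", "arg1");
        pb.redirectErrorStream(true);          // merge stderr into stdout
        Process process = pb.start();

        process.getOutputStream().close();     // nothing will be sent to its stdin

        // Drain the single merged output stream so the child never blocks.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }

        int exitCode = process.waitFor();
        System.out.println("exited with " + exitCode);
    }
}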

Accessing program messages output to error stream

I've created a class which processes files and if it encounters certain specific errors, it outputs relevant error messages to the error stream.
I am working on another class that needs to access these error messages. I'm not sure how to do this. I am a beginner in Java programming. Based on my limited knowledge, I thought that my two options were either to call the main method of the first class (but I don't know how I would get the error messages in this case) or to execute the compiled class and access the messages through the getErrorStream() method of the Process class. But I am having trouble with the system deadlocking, or possibly not even executing the exec command, so I'm not sure how to implement the second option either.
I'm not quite sure what you're asking here, but a potential problem with your code is that you're not reading from the process' stdout. Per the Process API, "failure to promptly ... read the output stream of the subprocess may cause the subprocess to block, and even deadlock." Is this the "trouble" you mentioned?
Edit: So yeah, you can either do what you're doing (but be sure to read both the error stream and the output stream; see my comment), or you could just call the main method directly from your code, in which case the error output will be written to System.err. You could use System.setErr() to install your own stream that would let you capture what's written to it, but keep in mind that any error output from your own app (the one that's running the other app) will also show up there. It sounds like spawning a separate process, as you're already doing, is what you want.
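If you do go the System.setErr() route, a minimal sketch of capturing error output in memory might look like this (the System.err.println call stands in for invoking your file-processing class):

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class CaptureStderr {
    public static void main(String[] args) {
        PrintStream originalErr = System.err;
        ByteArrayOutputStream captured = new ByteArrayOutputStream();

        // Route everything written to System.err into an in-memory buffer.
        System.setErr(new PrintStream(captured, true));
        try {
            // In real code, call the other class's main method here instead.
            System.err.println("error: could not process file");
        } finally {
            System.setErr(originalErr);   // always restore the real stderr
        }

        System.out.println("Captured error output:\n" + captured);
    }
}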
You can't build modularity based on many little programs with a main method. You have to make blocks of function as classes that are designed to be called from elsewhere -- and that means returning status information in some programmatic fashion, not just blatting it onto System.err. If it really is an error, throw an exception. If you have to return status, design a data structure to hold the status and return it. But don't go launching new processes all over the place and reading their error streams.

The best way to monitor output of process along with its execution

I have started a process in my Java code. This process takes a very long time to run and can generate some output from time to time. I need to react to each piece of output as it is generated; what is the best way to do this?
What kind of reaction are you talking about? Is the process writing to its standard output and/or standard error? If so, I suspect Process.getInputStream and Process.getErrorStream are what you're looking for. Read from both of those and react accordingly. Note that you may want to read from them on different threads, to keep the individual buffer for either stream from filling up.
Alternatively, if you don't need the two separately, just set redirectErrorStream in ProcessBuilder to true, so the error and output streams are merged.
You should start a thread which reads from Process.getInputStream() and getErrorStream() (or alternatively use ProcessBuilder.redirectErrorStream(true)) and handle whatever shows up in the stream. There are many ways to handle it; the right way depends on how the data is being used. Please give more details.
Here is one real-life example: SbtRunner uses ProcessRunner to send commands to a command line application and wait for the command to finish execution (the application will print "> " when a command finishes execution). There is some indirection happening to make it easier to read from the process' output (the output is written to a MulticastPipe from where it is then read by an OutputReader).
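For completeness, a rough sketch of the two-reader-threads approach mentioned above, where a handler is invoked for each line as soon as it appears (the command name is a placeholder):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.function.Consumer;

public class ProcessWatcher {
    // Start a daemon thread that hands every line from the stream to a handler.
    static Thread watch(InputStream stream, Consumer<String> onLine) {
        Thread t = new Thread(() -> {
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(stream))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    onLine.accept(line);          // react to each line as it appears
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        // Placeholder command; substitute the long-running process you start.
        Process p = new ProcessBuilder("long-running-tool").start();

        watch(p.getInputStream(), line -> System.out.println("[out] " + line));
        watch(p.getErrorStream(), line -> System.out.println("[err] " + line));

        System.out.println("exit code: " + p.waitFor());
    }
}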
