This method notifies an event loop to start processing a message. However, if the event loop is already processing a message, then this method blocks until it receives a notification that event processing has completed (which is triggered at the end of the event loop).
public void processEvent(EventMessage request) throws Exception {
System.out.println("processEvent");
if (processingEvent) {
synchronized (eventCompleted) {
System.out.println("processEvent: Wait for Event to completed");
eventCompleted.wait();
System.out.println("processEvent: Event completed");
}
}
myRequest = request;
processingEvent = true;
synchronized (eventReady) {
eventReady.notifyAll();
}
}
This works in client mode. If I switch to server mode and the time spent in the event loop processing the message is too quick, then the method above blocks forever waiting for the event to complete. For some reason the event-complete notification is sent after the processingEvent check and before the eventCompleted.wait(). It makes no difference if I remove the output statements. I cannot reproduce the same problem in client mode.
Why does this only happen in server mode and what can I do to prevent this happening?
Here is the eventReady wait and eventCompleted notification:
public void run() {
try {
while (true) {
try {
synchronized (eventReady) {
eventReady.wait();
}
nx.processEvent(myRequest, myResultSet);
if (processingEvent) {
notifyInterface.notifyEventComplete(myRequest);
}
} catch (InterruptedException e) {
throw e;
} catch (Exception e) {
notifyInterface.notifyException(e, myRequest);
} finally {
processingEvent = false;
synchronized (eventCompleted) {
eventCompleted.notifyAll();
}
}
} // End of while loop
} catch (InterruptedException Ignore) {
} finally {
me = null;
}
}
Here is the revised code, which seems to work without the deadlock problem - which BTW happened in client mode randomly after about 300 events.
private BlockingQueue<EventMessage> queue = new SynchronousQueue<EventMessage>();
public void processEvent(EventMessage request) throws Exception {
System.out.println("processEvent");
queue.put(request);
}
public void run() {
try {
while (true) {
EventMessage request = null;
try {
request = queue.take();
processingEvent = true;
nx.processEvent(request, myResultSet);
notifyInterface.notifyEventComplete(request);
} catch (InterruptedException e) {
throw e;
} catch (Exception e) {
notifyInterface.notifyException(e, request);
} finally {
if (processingEvent) {
synchronized (eventCompleted) {
processingEvent = false;
eventCompleted.notifyAll();
}
}
}
} // End of while loop
} catch (InterruptedException Ignore) {
} finally {
me = null;
}
}
If you call notifyAll and no thread is wait()ing, the notify is lost.
The correct approach is to always change a state, inside the synchronized block, when calling notify() and always check that state, inside the synchronized block, before calling wait().
Also your use of processingEvent doesn't appear to be thread safe.
Can you provide the code which waits on eventReady and notifies eventCompleted?
Your program can happen to work if you speed up or slow down your application just right, e.g. if you use -client, but if you use a different machine, JVM or JVM options it can fail.
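To illustrate the guarded wait/notify pattern described above, here is a minimal sketch; the names are illustrative, not taken from your code:
private final Object lock = new Object();
private boolean eventCompleted = false; // the state guarded by the lock

// notifier: change the state inside the synchronized block, then notify
public void signalCompleted() {
    synchronized (lock) {
        eventCompleted = true;
        lock.notifyAll();
    }
}

// waiter: re-check the state in a loop inside the synchronized block
public void awaitCompleted() throws InterruptedException {
    synchronized (lock) {
        while (!eventCompleted) {
            lock.wait();
        }
        eventCompleted = false; // consume the completion
    }
}
This way a notify that arrives before the waiter reaches wait() is not lost, because the state change is still visible when the waiter checks it.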
There are a number of race conditions in your code. Even declaring processingEvent volatile or using an AtomicBoolean won't help. I would recommend using a SynchronousQueue, which will block the event until the processor is ready for it. Something like:
private final BlockingQueue<Request> queue = new SynchronousQueue<Request>();
...
// this will block until the processor dequeues it
queue.put(request);
Then the event processor does:
while (!done) {
// this will block until an event is put() to the queue
Request request = queue.take();
// process the event ...
}
Only one request will be processed at once and all of the synchronization, etc. will be handled by the SynchronousQueue.
If processingEvent isn't declared volatile or accessed from within a synchronized block then updates made by one thread may not become visible to other threads immediately. It's not clear from your code whether this is the case, though.
The "server" VM is optimised for speed (at the expense of startup time and memory usage) which could be the reason why you didn't encounter this problem when using the "client" VM.
There is a race condition in your code that may be exacerbated by using the server VM, and if processingEvent is not volatile then perhaps certain optimizations made by the server VM or its environment are further influencing the problem.
The problem with your code (assuming this method is accessed by multiple threads concurrently) is that between your check of processingEvent and eventCompleted.wait(), another thread can already notify and (I assume) set processingEvent to false.
The simplest solution to your blocking problem is to not try to manage it yourself, and just let the JVM do it by using a shared lock (if you only want to process one event at a time). So you could just synchronize the entire method, for instance, and not worry about it.
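A sketch of that, reusing the method signature from the question (handleRequest is a hypothetical stand-in for whatever per-event work needs to be serialized):
public synchronized void processEvent(EventMessage request) throws Exception {
    // the intrinsic lock lets only one thread at a time run this method;
    // other callers block here until the current event is finished
    handleRequest(request); // hypothetical helper: the per-event work
}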
A second simple solution is to use a SynchronousQueue (this is the type of situation it is designed for) for your event passing; or if you have more executing threads and want more than 1 element in the queue at a time then you can use an ArrayBlockingQueue instead. Eg:
private SynchronousQueue<EventMessage> queue = new SynchronousQueue<EventMessage>();
public void addEvent(EventMessage request) throws Exception
{
System.out.println("Adding event");
queue.put(request);
}
public void processNextEvent() throws InterruptedException
{
EventMessage request = queue.take();
processMyEvent(request);
}
// Your queue executing thread
public void run()
{
while(!terminated)
{
try
{
processNextEvent();
}
catch (InterruptedException e)
{
// interrupted while waiting for an event: let the thread exit
return;
}
}
}
Related
There is a thread which connects to a server over HTTP and, in a loop, waits for a response or a response timeout (the server doesn't respond until it has data to return). When a response is returned, the thread processes it.
When the service is stopped, all threads must be stopped/interrupted, but a thread must be allowed to finish processing a response (in case it is processing one rather than just awaiting a response).
Here is code example
public class WaitingResponseThread extends Thread {
static final int TIMEOUT = 4 * 1000;
static final int PROCESSING_DURATION = 2000;
private volatile boolean stopped = false;
private volatile boolean processing = false;
public void run() {
System.out.println("starting child thread");
while(!stopped) {
try {
// here a thread is awaiting server response (in real life response timeout is 400 sec)
// probably it is safe to interrupt the thread now
// emulating this with Thread.sleep()
System.out.println("awaiting response");
Thread.sleep(TIMEOUT);
processing = true;
// there is some job on response and we must allow it to finish the job
// again emulating delay with Thread.sleep()
Thread.sleep(PROCESSING_DURATION);
processing = false;
} catch (InterruptedException e) {
e.printStackTrace();
}
}
System.out.println("ending child thread");
}
public void setStopped(boolean stopped) {
this.stopped = stopped;
}
public boolean isProcessing() {
return processing;
}
public static void main(String[] args) {
WaitingResponseThread t = new WaitingResponseThread();
// starting thread, it sends a request and waits for a response
t.start();
try {
Thread.sleep(1);
// let's allow the thread to finish normally in case it is processing response
t.setStopped(true);
System.out.println("awaiting child thread to get response");
Thread.sleep(TIMEOUT + 1);
// let's allow the thread to finish processing
while(t.isProcessing()) {
System.out.println("processing");
Thread.sleep(500);
}
// we can't wait for 400sec until the next loop check
// so as the thread is sleeping we just interrupting it
if(t.isAlive()) {
System.out.println("killing");
t.interrupt();
}
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("ending main thread");
}
}
Is this "interrupt" method good? Is there another way not to wait as long as timeout and not to lose data being processed?
This
while(t.isProcessing()) {
System.out.println("processing");
Thread.sleep(500);
}
is a form of polling. You should avoid polling at all costs. If possible, use a form of delegation where the callee notifies the caller when it is done processing.
Other than that, I'm pretty sure interrupting a thread is something you also rarely want to do, but the way you're doing it is fine if you have to do it anyway.
The Thread class has a few methods that you can use when interrupting:
Catching an InterruptedException
Using the method isInterrupted() (not static)
Using the method interrupted() (static)
Checking interrupted() or isInterrupted() and then throwing a new InterruptedException
With these you can probably make your thread finish the current job before killing it, probably within the catch clause.
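As a sketch of the delegation idea (the names here are made up, and a CountDownLatch is just one of several ways to do it): the worker signals the latch when it has finished processing, and the caller blocks on await() instead of polling.
import java.util.concurrent.CountDownLatch;

public class DelegationSketch {
    public static void main(String[] args) throws InterruptedException {
        final CountDownLatch finished = new CountDownLatch(1);
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    // ... wait for the response and process it ...
                } finally {
                    finished.countDown(); // signal completion, even if processing failed
                }
            }
        });
        worker.start();
        finished.await(); // caller blocks here instead of polling isProcessing()
    }
}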
I have some service that both consumes from an inbound queue and produces to some outbound queue (where another thread, created by this service, picks up the messages and "transports" them to their destination).
Currently I use two plain Threads as seen in the code below, but I know that in general you should not use them anymore and should instead use higher-level abstractions like the ExecutorService.
Would this make sense in my case? More specifically, would it:
reduce code?
make the code more robust in case of failure?
allow for smoother thread termination? (which is helpful when running tests)
Am I missing something important here? (maybe some other classes from java.util.concurrent)
// called on service startup
private void init() {
// prepare everything here
startInboundWorkerThread();
startOutboundTransporterWorkerThread();
}
private void startInboundWorkerThread() {
InboundWorkerThread runnable = injector.getInstance(InboundWorkerThread.class);
inboundWorkerThread = new Thread(runnable, ownServiceIdentifier);
inboundWorkerThread.start();
}
// this is the Runnable for the InboundWorkerThread
// the runnable for the transporter thread looks almost the same
@Override
public void run() {
while (true) {
InboundMessage message = null;
TransactionStatus transaction = null;
try {
try {
transaction = txManager.getTransaction(new DefaultTransactionDefinition());
} catch (Exception ex) {
// logging
break;
}
// blocking consumer
message = repository.takeOrdered(template, MESSAGE_POLL_TIMEOUT_MILLIS);
if (message != null) {
handleMessage(message);
commitTransaction(message, transaction);
} else {
commitTransaction(transaction);
}
} catch (Exception e) {
// logging
rollback(transaction);
} catch (Throwable e) {
// logging
rollback(transaction);
throw e;
}
if (Thread.interrupted()) {
// logging
break;
}
}
// logging
}
// called when service is shutdown
// both inbound worker thread and transporter worker thread must be terminated
private void interruptAndJoinWorkerThread(final Thread workerThread) {
if (workerThread != null && workerThread.isAlive()) {
workerThread.interrupt();
try {
workerThread.join(TimeUnit.SECONDS.toMillis(1));
} catch (InterruptedException e) {
// logging
}
}
}
The main benefit for me in using thread pools comes from structuring the work into single, independent and usually short jobs, and from the better abstraction of threads behind a ThreadPool's private workers. Sometimes you may want more direct access to those, to find out if they are still running etc., but there are usually better, job-centric ways to do that.
As for handling failures, you may want to submit your own ThreadFactory to create threads with a custom UncaughtExceptionHandler and in general, your Runnable jobs should provide good exception handling, too, in order to log more information about the specific job that failed.
Make those jobs non-blocking, since you don't want to fill up your ThreadPool with blocked workers. Move blocking operations before the job is queued.
Normally, shutdown and shutdownNow as provided by ExecutorServices, combined with proper interrupt handling in your jobs will allow for smooth job termination.
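A rough sketch of how that could fit together (the class and method names here are illustrative, not your actual service):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;

class WorkerLifecycleSketch {
    private final ExecutorService executor = Executors.newFixedThreadPool(2, new ThreadFactory() {
        public Thread newThread(Runnable r) {
            Thread t = new Thread(r, "worker");
            t.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
                public void uncaughtException(Thread thread, Throwable e) {
                    // logging: record which job died and why
                }
            });
            return t;
        }
    });

    void init(Runnable inboundWorker, Runnable transporterWorker) {
        executor.submit(inboundWorker);
        executor.submit(transporterWorker);
    }

    void shutdown() throws InterruptedException {
        executor.shutdownNow();                        // interrupts the workers
        executor.awaitTermination(1, TimeUnit.SECONDS); // roughly replaces join(1s)
    }
}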
In a web controller, I have a parent thread that receives requests. Some requests take a long time to process. To prevent clients from timing out, I set up the parent thread to send back a byte every 2 seconds while a child thread is doing the time-consuming part of the operation.
I want to make sure I'm accounting for all possible cases of the child thread dying, but I also don't want to put in any extraneous checks.
Here is the parent thread:
// This is my runnable class
ProcessorRunnable runnable = new ProcessorRunnable(settings, Thread.currentThread());
Thread childThread = new Thread(runnable);
childThread.start();
boolean interrupted = false;
while (!runnable.done) { // <-- Check in question
outputStream.write(' ');
outputStream.flush();
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
// If the runnable is done, then this was an expected interrupt
// Otherwise, remember the interruption and re-interrupt after processing is done
// Or with self so that a later expected interrupt won't clear out an earlier unexpected one
interrupted = interrupted || !runnable.done;
}
}
if (runnable.runtimeException != null) {
LOG.error("Propagating runtime exception from thread");
throw runnable.runtimeException;
}
// ... Further processing on the results provided by the child thread
And here's ProcessorRunnable:
private volatile boolean done;
private volatile Result result;
private volatile RuntimeException runtimeException;
// ...
public void run() {
done = false;
try {
result = myService.timeConsumingOperation(settings);
} catch (RuntimeException e) {
runtimeException = e;
} finally {
done = true;
parentThread.interrupt();
}
}
My question is, would adding && Thread.isAlive() check in the parent thread's main loop buy me anything?
It seems that setting done = true in the finally block should do the trick, but are there some cases where this child thread could die without notifying the parent?
The finally in the child thread will always execute before it finishes. Even if that thread is interrupted or stopped, this happens via an exception that bubbles up the call stack and triggers all finallys. So, done will always be true if the child thread is interrupted.
For background tasks like this you may want to use an ExecutorService instead of raw threads. You can submit a Runnable to an ExecutorService and just call get() on the returned future to block until it is done. If you want to print out spaces while you are waiting, you can use a loop, calling the get() version with a timeout.
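Roughly like this, as a sketch only; Result, myService and settings are borrowed from your code, and the surrounding error handling is simplified:
// needs: java.util.concurrent.*, java.io.OutputStream
Result waitWithKeepAlive(final OutputStream outputStream) throws Exception {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    Future<Result> future = executor.submit(new Callable<Result>() {
        public Result call() {
            return myService.timeConsumingOperation(settings);
        }
    });
    try {
        while (true) {
            try {
                return future.get(2, TimeUnit.SECONDS); // returns as soon as the child task finishes
            } catch (TimeoutException e) {
                outputStream.write(' '); // not done yet: keep the client connection alive
                outputStream.flush();
            }
        }
    } finally {
        executor.shutdown();
    }
}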
I have an application that every 15 minutes or so does a replication from a remote database. It just keeps the two repositories in sync. Once this replication is running, it must not be possible to start it again. I have set up the following structure but I'm not sure if it is the correct approach.
public class ReplicatorRunner {
private static Lock lock = new ReentrantLock();
public void replicate() {
if (lock.tryLock()) {
try {
// long running process
} catch (Exception e) {
} finally {
lock.unlock();
}
} else {
throw new IllegalStateException("already replicating");
}
}
}
public class ReplicatorRunnerInvocator {
public void someMethod() {
try {
ReplicatorRunner replicator = new ReplicatorRunner();
replicator.replicate();
} catch (IllegalStateException e) {
e.printStackTrace();
}
}
}
ReplicatorRunner is the class owning the replicate method, which may only be run once at a time.
Edit.
I need the next call to fail (not block) if the method is already running on any instance.
This looks good. ReentrantLock.tryLock() will only give the lock to one thread, so synchronized is not necessary. It also prevents the blocking inherent in synchronization that you say is a requirement. ReentrantLock is Serializable, so should work across your cluster.
Go for it.
Change public void replicate() to public synchronized void replicate()
That way replicate will only ever allow access to one thread at a time. You'll also be able to delete the ReentrantLock and all associated code.
I ended up using the following:
public class ReplicatorRunner {
private static Semaphore lock = new Semaphore(1);
public void replicate() {
if (lock.tryAcquire()) {
try {
// basic setup
Thread t = new Thread(new Runnable() {
public void run() {
try {
// long running process
} catch (Exception e) {
// handle the exceptions
} finally {
lock.release();
}
}
});
t.start();
} catch (Exception e) {
// in case something goes wrong
// before the thread starts
lock.release();
}
} else {
throw new IllegalStateException("already replicating");
}
}
}
public class ReplicatorRunnerInvocator {
public void someMethod() {
try {
ReplicatorRunner replicator = new ReplicatorRunner();
replicator.replicate();
} catch (IllegalStateException e) {
e.printStackTrace();
}
}
}
Without looking at the specifics of the ReentrantLock, it occurs to me that this prevention of multiple simultaneous replication routines will be limited to a single JVM instance.
If another instance of the class is kicked off in a separate JVM, then you might be in trouble.
Why not put a lock mechanism on the database? i.e. A row in a control table that is set to a value depicting whether or not the replication is busy running, and reset the value when the replication is finished.
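As a sketch of what that could look like with JDBC (the table and column names here are made up):
// needs: java.sql.Connection, java.sql.PreparedStatement, java.sql.SQLException
// claim the lock row atomically; exactly one caller will see 1 row updated
boolean tryAcquireDbLock(Connection conn) throws SQLException {
    PreparedStatement ps = conn.prepareStatement(
            "UPDATE replication_control SET running = 1 WHERE id = 1 AND running = 0");
    try {
        return ps.executeUpdate() == 1; // 1 row updated => we hold the lock
    } finally {
        ps.close();
    }
}

void releaseDbLock(Connection conn) throws SQLException {
    PreparedStatement ps = conn.prepareStatement(
            "UPDATE replication_control SET running = 0 WHERE id = 1");
    try {
        ps.executeUpdate();
    } finally {
        ps.close();
    }
}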
Take a look at the Semaphore class, or mark the method as synchronized.
The thread executing the method at any given time owns a lock on it, preventing other threads from calling the method until its execution ends.
Edit: if you want the other threads to fail, you could use a Lock and test whether the lock is available with the tryLock method.
I have created a threaded service the following way:
public class TCPClientService extends Service{
...
@Override
public void onCreate() {
...
Measurements = new LinkedList<String>();
enableDataSending();
}
@Override
public IBinder onBind(Intent intent) {
//TODO: Replace with service binding implementation
return null;
}
@Override
public void onLowMemory() {
Measurements.clear();
super.onLowMemory();
}
@Override
public void onDestroy() {
Measurements.clear();
super.onDestroy();
try {
SendDataThread.stop();
} catch(Exception e){
...
}
}
private Runnable backgrounSendData = new Runnable() {
public void run() {
doSendData();
}
};
private void enableDataSending() {
SendDataThread = new Thread(null, backgrounSendData, "send_data");
SendDataThread.start();
}
private void addMeasurementToQueue() {
if(Measurements.size() <= 100) {
String measurement = packData();
Measurements.add(measurement);
}
}
private void doSendData() {
while(true) {
try {
if(Measurements.isEmpty()) {
Thread.sleep(1000);
continue;
}
//Log.d("TCP", "C: Connecting...");
Socket socket = new Socket();
socket.setTcpNoDelay(true);
socket.connect(new InetSocketAddress(serverAddress, portNumber), 3000);
//socket.connect(new InetSocketAddress(serverAddress, portNumber));
if(!socket.isConnected()) {
throw new Exception("Server Unavailable!");
}
try {
//Log.d("TCP", "C: Sending: '" + message + "'");
PrintWriter out = new PrintWriter( new BufferedWriter( new OutputStreamWriter(socket.getOutputStream())),true);
String message = Measurements.remove();
out.println(message);
Thread.sleep(200);
Log.d("TCP", "C: Sent.");
Log.d("TCP", "C: Done.");
connectionAvailable = true;
} catch(Exception e) {
Log.e("TCP", "S: Error", e);
connectionAvailable = false;
} finally {
socket.close();
announceNetworkAvailability(connectionAvailable);
}
} catch (Exception e) {
Log.e("TCP", "C: Error", e);
connectionAvailable = false;
announceNetworkAvailability(connectionAvailable);
}
}
}
...
}
After I close the application the phone works really slowly, and I guess it is due to the thread failing to terminate.
Does anyone know what is the best way to terminate all threads before terminating the application?
Addendum: The Android framework provides many helpers for one-off work, background work, etc., which may be preferable to trying to roll your own thread in many instances. As mentioned in a post below, AsyncTask is a good starting point to look into. I encourage readers to look into the framework provisions first before even beginning to think about doing their own threading.
There are several problems in the code sample you posted I will address in order:
1) Thread.stop() has been deprecated for quite some time now, as it can leave dependent variables in inconsistent states in some circumstances. See this Sun answer page for more details (Edit: that link is now dead, see this page for why not to use Thread.stop()). A preferred method of stopping and starting a thread is as follows (assuming your thread will run somewhat indefinitely):
private volatile Thread runner;
public synchronized void startThread(){
if(runner == null){
runner = new Thread(this);
runner.start();
}
}
public synchronized void stopThread(){
if(runner != null){
Thread moribund = runner;
runner = null;
moribund.interrupt();
}
}
public void run(){
while(Thread.currentThread() == runner){
//do stuff which can be interrupted if necessary
}
}
This is just one example of how to stop a thread, but the takeaway is that you are responsible for exiting a thread just as you would any other method. Maintain a method of cross-thread communication (in this case a volatile variable; it could also be through a mutex, etc.) and, within your thread logic, use that method of communication to check whether you should exit early, clean up, etc.
2) Your Measurements list is accessed by multiple threads (the event thread and your user thread) at the same time without any synchronization. It looks like you don't have to roll your own synchronization; you can use a BlockingQueue.
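For example, a sketch using a bounded BlockingQueue in place of the LinkedList (keeping roughly the names from your code; nextMeasurement is a made-up helper):
// needs: java.util.concurrent.BlockingQueue, java.util.concurrent.LinkedBlockingQueue
private final BlockingQueue<String> measurements = new LinkedBlockingQueue<String>(100);

private void addMeasurementToQueue() {
    measurements.offer(packData()); // silently drops the measurement if the queue is full
}

private String nextMeasurement() throws InterruptedException {
    return measurements.take();     // blocks until a measurement is available, no sleep loop
}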
3) You are creating a new Socket every iteration of your sending Thread. This is a rather heavyweight operation, and only really makes sense if you expect measurements to be extremely infrequent (say one an hour or less). Either you want a persistent socket that is not recreated every loop of the thread, or you want a one shot runnable you can 'fire and forget' which creates a socket, sends all relevant data, and finishes. (A quick note about using a persistent Socket: socket methods which block, such as reading, cannot be interrupted by Thread.interrupt(), so when you want to stop the thread you must close the socket as well as calling interrupt.)
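A rough sketch of the 'fire and forget' variant (illustrative only, assuming it is a nested class in your service so it can see serverAddress and portNumber; you would submit one of these per measurement to an Executor rather than looping forever):
class SendOneMeasurement implements Runnable {
    private final String message;

    SendOneMeasurement(String message) {
        this.message = message;
    }

    public void run() {
        try {
            Socket socket = new Socket();
            socket.connect(new InetSocketAddress(serverAddress, portNumber), 3000);
            try {
                PrintWriter out = new PrintWriter(
                        new OutputStreamWriter(socket.getOutputStream()), true);
                out.println(message);
            } finally {
                socket.close();
            }
        } catch (Exception e) {
            Log.e("TCP", "send failed", e); // log and give up; no thread is left behind
        }
    }
}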
4) There is little point in throwing your own exceptions from within a Thread unless you expect to catch them somewhere else. A better solution is to log the error and, if it is irrecoverable, stop the thread. A thread can stop itself with code like this (in the same context as above):
public void run(){
while(Thread.currentThread() == runner){
//do stuff which can be interrupted if necessary
if(/*fatal error*/){
stopThread();
return; //optional in this case since the loop will exit anyways
}
}
}
Finally, if you want to be sure a thread exits with the rest of your application, no matter what, a good technique is to call Thread.setDaemon(true) after creation and before you start the thread. This flags the thread as a daemon thread, meaning the VM will ensure that it is automatically destroyed if there are no non-daemon threads running (such as if your app quits).
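For example, reusing the thread setup from your code:
SendDataThread = new Thread(null, backgrounSendData, "send_data");
SendDataThread.setDaemon(true); // the VM will not wait for this thread when the app exits
SendDataThread.start();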
Obeying best practices with regards to Threads should ensure that your app doesn't hang or slow down the phone, though they can be quite complex :)
Actually, you don't need the "runner" variable as described above; you can use something like:
while (!interrupted()) {
try {
Thread.sleep(1000);
} catch (InterruptedException ex) {
break;
}
}
But generally, sitting in a Thread.sleep() loop is a really bad idea.
Look at the AsyncTask API in the new 1.5 API. It will probably solve your problem more elegantly than using a service. Your phone is getting slow because the service never shuts down - there's nothing that will cause the service to kill itself.
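For reference, the rough shape of an AsyncTask (a sketch, not a drop-in replacement for your service; SendMeasurementTask is a made-up name):
private class SendMeasurementTask extends AsyncTask<String, Void, Boolean> {
    @Override
    protected Boolean doInBackground(String... messages) {
        // open the socket and send messages[0] here, off the UI thread
        return Boolean.TRUE;
    }

    @Override
    protected void onPostExecute(Boolean success) {
        // runs back on the UI thread when doInBackground returns
    }
}
// usage: new SendMeasurementTask().execute(message);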