I have a service that both consumes from an inbound queue and produces to an outbound queue (where another thread, created by this service, picks up the messages and "transports" them to their destination).
Currently I use two plain Threads, as seen in the code below, but I know that in general you should not use them directly anymore and should instead use higher-level abstractions like ExecutorService.
Would this make sense in my case? More specifically, would it:
reduce code?
make the code more robust in case of failure?
allow for smoother thread termination? (which is helpful when running tests)
Am I missing something important here? (maybe some other classes from java.util.concurrent)
// called on service startup
private void init() {
    // prepare everything here
    startInboundWorkerThread();
    startOutboundTransporterWorkerThread();
}

private void startInboundWorkerThread() {
    InboundWorkerThread runnable = injector.getInstance(InboundWorkerThread.class);
    inboundWorkerThread = new Thread(runnable, ownServiceIdentifier);
    inboundWorkerThread.start();
}

// this is the Runnable for the InboundWorkerThread
// the runnable for the transporter thread looks almost the same
@Override
public void run() {
    while (true) {
        InboundMessage message = null;
        TransactionStatus transaction = null;
        try {
            try {
                transaction = txManager.getTransaction(new DefaultTransactionDefinition());
            } catch (Exception ex) {
                // logging
                break;
            }
            // blocking consumer
            message = repository.takeOrdered(template, MESSAGE_POLL_TIMEOUT_MILLIS);
            if (message != null) {
                handleMessage(message);
                commitTransaction(message, transaction);
            } else {
                commitTransaction(transaction);
            }
        } catch (Exception e) {
            // logging
            rollback(transaction);
        } catch (Throwable e) {
            // logging
            rollback(transaction);
            throw e;
        }
        if (Thread.interrupted()) {
            // logging
            break;
        }
    }
    // logging
}
// called when service is shutdown
// both inbound worker thread and transporter worker thread must be terminated
private void interruptAndJoinWorkerThread(final Thread workerThread) {
    if (workerThread != null && workerThread.isAlive()) {
        workerThread.interrupt();
        try {
            workerThread.join(TimeUnit.SECONDS.toMillis(1));
        } catch (InterruptedException e) {
            // logging
        }
    }
}
For me, the main benefit of using thread pools comes from structuring the work into single, independent and usually short jobs, and from the better abstraction of threads as a thread pool's private workers. Sometimes you may want more direct access to those workers, to find out whether they are still running and so on, but there are usually better, job-centric ways to do that.
As for handling failures, you may want to supply your own ThreadFactory to create threads with a custom UncaughtExceptionHandler, and in general your Runnable jobs should provide good exception handling too, in order to log more information about the specific job that failed.
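A minimal sketch of that idea (the LOG field and thread name are illustrative, not from your code):

ThreadFactory factory = r -> {
    Thread t = new Thread(r, "inbound-worker");
    t.setUncaughtExceptionHandler((thread, ex) ->
            LOG.error("Worker " + thread.getName() + " died", ex));
    return t;
};
ExecutorService pool = Executors.newFixedThreadPool(2, factory);
// Note: the handler only sees exceptions from tasks started with execute();
// tasks started with submit() report failures through their Future instead.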
Keep those jobs non-blocking, since you don't want to fill up your thread pool with blocked workers; move blocking operations to before the job is queued.
Normally, shutdown and shutdownNow as provided by ExecutorService, combined with proper interrupt handling in your jobs, will allow for smooth job termination.
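Applied to your service, a rough sketch could look like the following. Only InboundWorkerThread, injector and ownServiceIdentifier are taken from your question; the other names are illustrative, and the interrupt handling inside your run() loop can stay exactly as it is:

private ExecutorService inboundExecutor;

private void startInboundWorker() {
    InboundWorkerThread runnable = injector.getInstance(InboundWorkerThread.class);
    inboundExecutor = Executors.newSingleThreadExecutor(r -> new Thread(r, ownServiceIdentifier));
    inboundExecutor.submit(runnable);
}

// replaces interruptAndJoinWorkerThread() on shutdown
private void stopInboundWorker() {
    inboundExecutor.shutdownNow(); // interrupts the running task
    try {
        inboundExecutor.awaitTermination(1, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}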
Related
I am trying to understand if the code below is thread-safe. It was written by another developer, whose code I have inherited and who is no longer with us.
I have a BaseProvider class that is actually a message cache, represented by a LinkedBlockingQueue. This class stores incoming messages in the queue.
I have a set of worker threads that read off this queue. As such, the LinkedBlockingQueue is thread-safe.
Questions
1. When the worker thread calls provider.getNextQueuedItem(), the provider goes through item by item and adds it to a list and returns the list of messages. While it is doing this, what happens if a message is added to the provider class by calling addToQueue? Does the takeLock internal to the LinkedBlockingQueue prevent a new message from being added to the queue until all messages are taken off it?
2. As you would notice, each worker thread has access to all the providers, so while one worker thread is going through all the providers and calls getNextQueuedItem(), what happens when another worker thread also goes through all the providers and calls getNextQueuedItem()? Would both worker threads be stepping over each other?
public abstract class BaseProvider implements IProvider {

    private LinkedBlockingQueue<CoreMessage> internalQueue = new LinkedBlockingQueue<CoreMessage>();

    @Override
    public synchronized List<CoreMessage> getNextQueuedItem() {
        List<CoreMessage> arrMessages = new ArrayList<CoreMessage>();
        if (internalQueue.size() > 0) {
            Logger.debug("Queue has entries");
            CoreMessage msg = null;
            try {
                msg = internalQueue.take();
            } catch (InterruptedException e) {
                Logger.warn("Interruption");
                e.printStackTrace();
            }
            if (msg != null) {
                arrMessages.add(msg);
            }
        }
        return arrMessages;
    }

    protected synchronized void addToQueue(CoreMessage message) {
        try {
            internalQueue.put(message);
        } catch (InterruptedException e) {
            Logger.error("Exception adding message to queue " + message);
        }
    }
}
// There are a set of worker threads that read through these queues
public class Worker implements Runnable {

    @Override
    public void run() {
        Logger.info("Worker - Running Thread : " + Thread.currentThread().getName());
        while (!stopRequested) {
            boolean processedMessage = false;
            for (IProvider provider : providers) {
                List<CoreMessage> messages = provider.getNextQueuedItem();
                if (messages == null || messages.size() != 0) {
                    processedMessage = true;
                    for (CoreMessage message : messages) {
                        final Message msg = createEndurMessage(provider, message);
                        processMessage(msg);
                        message.commit();
                    }
                }
            }
            if (!(processedMessage || stopRequested)) {
                // this is to stop the thread from spinning when there are no messages
                try {
                    Thread.sleep(WAIT_INTERVAL);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
what happens if there is a message added to the provider class by calling addToQueue?
getNextQueuedItem() and addToQueue(...) are both synchronized methods. If those are the only two methods that access the private ... internalQueue, then there is no way in which multiple threads could ever access internalQueue at the same time.
while one worker thread is going through all the providers and calls getNextQueuedItem() , what happens when another worker thread also calls through all the providers and calls getNextQueuedItem()?
Are you asking about multiple workers accessing the same provider? That can't happen because getNextQueuedItem() is a synchronized method.
-- OR --
Are you asking about different workers accessing different providers? That should not matter (at least, not as far as the BaseProvider class is concerned) because there does not appear to be any way in which the different objects could be connected with each other.
I have an application that receives alerts from other applications, usually once a minute or so, but I need to be able to handle a higher volume per minute. The interface I am using, and the Alert framework in general, requires that alerts may be processed asynchronously and can be stopped if they are being processed asynchronously. The stop method specifically is documented as stopping a thread. I wrote the code below to create an AlertRunner thread and then stop the thread. However, is this a proper way to handle terminating a thread? And will this code be able to scale easily (not to a ridiculous volume, but maybe an alert a second or multiple alerts at the same time)?
private AlertRunner alertRunner;

@Override
public void receive(Alert a) {
    assert a != null;
    alertRunner = new AlertRunner(a.getName());
    alertRunner.start();
}

@Override
public void stop(boolean synchronous) {
    if (!synchronous) {
        if (alertRunner != null) {
            Thread.currentThread().interrupt();
        }
    }
}

class AlertRunner extends Thread {

    private final String alertName;

    public AlertRunner(String alertName) {
        this.alertName = alertName;
    }

    @Override
    public void run() {
        try {
            TimeUnit.SECONDS.sleep(5);
            log.info("New alert received: " + alertName);
        } catch (InterruptedException e) {
            log.error("Thread interrupted: " + e.getMessage());
        }
    }
}
This code will not scale easily because a Thread is quite a 'heavy' object. It's expensive to create and expensive to start. It's much better to use an ExecutorService for your task; it will contain a limited number of threads that are ready to process your requests:
int threadPoolSize = 5;
ExecutorService executor = Executors.newFixedThreadPool(threadPoolSize);

public void receive(Alert a) {
    assert a != null;
    executor.submit(() -> {
        // Do your work here
    });
}
Here executor.submit() will handle your request in a separate thread. If all threads are currently busy, the request will wait in a queue, preventing resource exhaustion. submit() also returns a Future that you can use to wait for completion of the handling, set a timeout, receive the result, cancel execution and do many other useful things.
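For example, keeping the Future around lets you bound how long a single alert may run and cancel it if it overruns. This is only a sketch (handleAlert is a hypothetical method; everything else is from java.util.concurrent):

Future<?> pending = executor.submit(() -> handleAlert(a));
try {
    pending.get(30, TimeUnit.SECONDS);   // block for at most 30 seconds
} catch (TimeoutException e) {
    pending.cancel(true);                // interrupts the task if it is still running
} catch (InterruptedException | ExecutionException e) {
    // logging
}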
I have a Java application which has to run as a Linux process. It connects to a remote system via a socket connection. I have two threads which run through the whole life cycle of the program. This is a brief version of my application entry point:
public class SMPTerminal {

    private static java.util.concurrent.ExecutorService executor;

    public static void main(String[] args) {
        executor = Executors.newFixedThreadPool(2);
        Runtime.getRuntime().addShutdownHook(new Thread(new ShutdownHook()));
        run(new SMPConsumer());
        run(new SMPMaintainer());
    }

    public static void run(Service callableService) {
        try {
            Future<Callable> future = executor.submit(callableService);
            run(future.get().restart());
        } catch (InterruptedException | ExecutionException e) {
            // Program will shutdown
        }
    }
}
This is the Service interface:
public interface Service {
    public Service restart();
}
And this is one implementation of the Service interface:
public class SMPConsumer implements Callable<Service>, Service {

    @Override
    public Service call() throws Exception {
        // ...
        try {
            while (true) {
                // Perform the service
            }
        } catch (InterruptedException | IOException e) {
            // ...
        }
        return this; // Returns this instance to run again
    }

    public Service restart() {
        // Perform the initialization
        return this;
    }
}
I reached this structure after headaches caused by temporary I/O failures and other problems that shut my application down. Now if my program encounters a problem it doesn't shut down completely, but just initializes itself from scratch and continues. But I think this is somewhat weird and I am violating OOP design rules. My questions:
Is this kind of failure handling correct or efficient?
What problems might I encounter in the future?
Do I have to study any particular design pattern for my problem?
You might not have noticed, but your run method waits for the callableService to finish execution before it returns. So you are not able to start two services concurrently. This is because Future.get() waits until the task computation completes.
public static void run(Service callableService) {
    try {
        Future<Callable> future = executor.submit(callableService);
        run(future.get().restart()); // <=== will block until task completes!
    } catch (InterruptedException | ExecutionException e) {
        // Program will shutdown
    }
}
(You should have noticed that because of the InterruptedException that must be caught - it indicates that there is some blocking, long-running operation going on.)
This also renders the execution service useless. If the code that submits a task to the executor always waits for the task to complete, there is no need to execute this task via executor. Instead, the submitting code should call the service directly.
So I assume that blocking is not intended in this case. Your run method should probably look something like this:
public static void run(Service callableService) {
    executor.submit(() -> {
        Service result = callableService.call();
        run(result.restart());
        return result;
    });
}
This code snippet is just the basic idea; you might want to extend it to handle exceptional situations.
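One possible extension, as a sketch only (how you log, back off or give up on a failed service is entirely up to you):

public static void run(Service callableService) {
    executor.submit(() -> {
        Service result = null;
        try {
            result = callableService.call();
        } catch (Exception e) {
            // log the failure and decide whether to resubmit, back off, or give up
        }
        if (result != null) {
            run(result.restart());
        }
        return result;
    });
}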
Is this kind of failure handling correct or efficient? That depends on the context of the application and how you are using error handling.
You may encounter situations where I/O failures and the like are not handled properly.
It looks like you are already using an Adapter-type design pattern; see the Adapter design pattern: http://www.oodesign.com/adapter-pattern.html
This method notifies an event loop to start processing a message. However, if the event loop is already processing a message, this method blocks until it receives a notification of completed event processing (which is triggered at the end of the event loop).
public void processEvent(EventMessage request) throws Exception {
    System.out.println("processEvent");
    if (processingEvent) {
        synchronized (eventCompleted) {
            System.out.println("processEvent: Wait for Event to completed");
            eventCompleted.wait();
            System.out.println("processEvent: Event completed");
        }
    }
    myRequest = request;
    processingEvent = true;
    synchronized (eventReady) {
        eventReady.notifyAll();
    }
}
This works in client mode. If I switch to server mode and the time spent in the event loop processing the message is too short, then the method above blocks forever waiting for the event to complete. For some reason the event-complete notification is sent after the processingEvent check and before the eventCompleted.wait(). It makes no difference if I remove the output statements. I cannot reproduce the same problem in client mode.
Why does this only happen in server mode and what can I do to prevent this happening?
Here is the eventReady wait and eventCompleted notification:
public void run() {
    try {
        while (true) {
            try {
                synchronized (eventReady) {
                    eventReady.wait();
                }
                nx.processEvent(myRequest, myResultSet);
                if (processingEvent > 0) {
                    notifyInterface.notifyEventComplete(myRequest);
                }
            } catch (InterruptedException e) {
                throw e;
            } catch (Exception e) {
                notifyInterface.notifyException(e, myRequest);
            } finally {
                processingEvent--;
                synchronized (eventCompleted) {
                    eventCompleted.notifyAll();
                }
            }
        } // End of while loop
    } catch (InterruptedException Ignore) {
    } finally {
        me = null;
    }
}
Here is the revised code, which seems to work without the deadlock problem - which, BTW, happened randomly in client mode after about 300 events.
private BlockingQueue<EventMessage> queue = new SynchronousQueue<EventMessage>();

public void processEvent(EventMessage request) throws Exception {
    System.out.println("processEvent");
    queue.put(request);
}

public void run() {
    try {
        while (true) {
            EventMessage request = null;
            try {
                request = queue.take();
                processingEvent = true;
                nx.processEvent(request, myResultSet);
                notifyInterface.notifyEventComplete(request);
            } catch (InterruptedException e) {
                throw e;
            } catch (Exception e) {
                notifyInterface.notifyException(e, request);
            } finally {
                if (processingEvent) {
                    synchronized (eventCompleted) {
                        processingEvent = false;
                        eventCompleted.notifyAll();
                    }
                }
            }
        } // End of while loop
    } catch (InterruptedException Ignore) {
    } finally {
        me = null;
    }
}
If you call notifyAll and no thread is wait()ing, the notify is lost.
The correct approach is to always change some state, inside the synchronized block, when calling notify(), and to always check that state, inside the synchronized block, before calling wait().
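A minimal sketch of that pattern (the field names here are illustrative, not your fields):

private final Object lock = new Object();
private boolean completed = false;   // the guarded state

void waitForCompletion() throws InterruptedException {
    synchronized (lock) {
        while (!completed) {   // re-check the state after every wakeup
            lock.wait();
        }
        completed = false;     // consume the signal
    }
}

void signalCompletion() {
    synchronized (lock) {
        completed = true;      // change the state together with the notify
        lock.notifyAll();
    }
}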
Also your use of processingEvent doesn't appear to be thread safe.
Can you provide the code which waits on eventReady and notifies eventCompleted?
Your program can happen to work if you speed up or slow down your application just right, e.g. if you use -client, but if you use a different machine, JVM or JVM options, it can fail.
There are a number of race conditions in your code. Even declaring processingEvent volatile or using an AtomicBoolean won't help. I would recommend using a SynchronousQueue, which will block the event until the processor is ready for it. Something like:
private final BlockingQueue<Request> queue = new SynchronousQueue<Request>();
...
// this will block until the processor dequeues it
queue.put(request);
Then the event processor does:
while (!done) {
    // this will block until an event is put() onto the queue
    Request request = queue.take();
    // ... process the event ...
}
Only one request will be processed at once and all of the synchronization, etc. will be handled by the SynchronousQueue.
If processingEvent isn't declared volatile or accessed from within a synchronized block then updates made by one thread may not become visible to other threads immediately. It's not clear from your code whether this is the case, though.
The "server" VM is optimised for speed (at the expense of startup time and memory usage) which could be the reason why you didn't encounter this problem when using the "client" VM.
There is a race condition in your code that may be exacerbated by using the server VM, and if processingEvent is not volatile then perhaps certain optimizations made by the server VM or its environment are further influencing the problem.
The problem with your code (assuming this method is accessed by multiple threads concurrently) is that between your check of processingEvent and eventCompleted.wait(), another thread can already notify and (I assume) set processingEvent to false.
The simplest solution to your blocking problem is to not try to manage it yourself, and just let the JVM do it by using a shared lock (if you only want to process one event at a time). So you could just synchronize the entire method, for instance, and not worry about it.
A second simple solution is to use a SynchronousQueue (this is the type of situation it is designed for) for your event passing; or if you have more executing threads and want more than 1 element in the queue at a time then you can use an ArrayBlockingQueue instead. Eg:
private SynchronousQueue<EventMessage> queue = new SynchronousQueue<EventMessage>();

public void addEvent(EventMessage request) throws Exception {
    System.out.println("Adding event");
    queue.put(request);
}

public void processNextEvent() throws InterruptedException {
    EventMessage request = queue.take();
    processMyEvent(request);
}

// Your queue executing thread
public void run() {
    while (!terminated) {
        try {
            processNextEvent();
        } catch (InterruptedException e) {
            break;
        }
    }
}
I have created a threaded service the following way:
public class TCPClientService extends Service {
    ...
    @Override
    public void onCreate() {
        ...
        Measurements = new LinkedList<String>();
        enableDataSending();
    }

    @Override
    public IBinder onBind(Intent intent) {
        //TODO: Replace with service binding implementation
        return null;
    }

    @Override
    public void onLowMemory() {
        Measurements.clear();
        super.onLowMemory();
    }

    @Override
    public void onDestroy() {
        Measurements.clear();
        super.onDestroy();
        try {
            SendDataThread.stop();
        } catch (Exception e) {
            ...
        }
    }

    private Runnable backgrounSendData = new Runnable() {
        public void run() {
            doSendData();
        }
    };

    private void enableDataSending() {
        SendDataThread = new Thread(null, backgrounSendData, "send_data");
        SendDataThread.start();
    }

    private void addMeasurementToQueue() {
        if (Measurements.size() <= 100) {
            String measurement = packData();
            Measurements.add(measurement);
        }
    }

    private void doSendData() {
        while (true) {
            try {
                if (Measurements.isEmpty()) {
                    Thread.sleep(1000);
                    continue;
                }
                //Log.d("TCP", "C: Connecting...");
                Socket socket = new Socket();
                socket.setTcpNoDelay(true);
                socket.connect(new InetSocketAddress(serverAddress, portNumber), 3000);
                //socket.connect(new InetSocketAddress(serverAddress, portNumber));
                if (!socket.isConnected()) {
                    throw new Exception("Server Unavailable!");
                }
                try {
                    //Log.d("TCP", "C: Sending: '" + message + "'");
                    PrintWriter out = new PrintWriter(new BufferedWriter(new OutputStreamWriter(socket.getOutputStream())), true);
                    String message = Measurements.remove();
                    out.println(message);
                    Thread.sleep(200);
                    Log.d("TCP", "C: Sent.");
                    Log.d("TCP", "C: Done.");
                    connectionAvailable = true;
                } catch (Exception e) {
                    Log.e("TCP", "S: Error", e);
                    connectionAvailable = false;
                } finally {
                    socket.close();
                    announceNetworkAvailability(connectionAvailable);
                }
            } catch (Exception e) {
                Log.e("TCP", "C: Error", e);
                connectionAvailable = false;
                announceNetworkAvailability(connectionAvailable);
            }
        }
    }
    ...
}
After I close the application the phone runs really slowly, and I guess it is due to the thread not being terminated properly.
Does anyone know what is the best way to terminate all threads before terminating the application?
Addendum: The Android framework provides many helpers for one-off work, background work, etc., which may be preferable to trying to roll your own thread in many instances. As mentioned in a post below, AsyncTask is a good starting point to look into. I encourage readers to look into the framework provisions first before even beginning to think about doing their own threading.
There are several problems in the code sample you posted, which I will address in order:
1) Thread.stop() has been deprecated for quite some time now, as it can leave dependent variables in inconsistent states in some circumstances. See this Sun answer page for more details (Edit: that link is now dead, see this page for why not to use Thread.stop()). A preferred method of stopping and starting a thread is as follows (assuming your thread will run somewhat indefinitely):
private volatile Thread runner;

public synchronized void startThread() {
    if (runner == null) {
        runner = new Thread(this);
        runner.start();
    }
}

public synchronized void stopThread() {
    if (runner != null) {
        Thread moribund = runner;
        runner = null;
        moribund.interrupt();
    }
}

public void run() {
    while (Thread.currentThread() == runner) {
        //do stuff which can be interrupted if necessary
    }
}
This is just one example of how to stop a thread, but the takeaway is that you are responsible for exiting a thread just as you would any other method. Maintain a means of cross-thread communication (in this case a volatile variable; it could also be a mutex, etc.) and, within your thread logic, use that means of communication to check whether you should exit early, clean up, etc.
2) Your measurements list is accessed by multiple threads (the event thread and your user thread) at the same time without any synchronization. It looks like you don't have to roll your own synchronization; you can use a BlockingQueue (see the sketch after this list).
3) You are creating a new Socket on every iteration of your sending thread. This is a rather heavyweight operation, and only really makes sense if you expect measurements to be extremely infrequent (say one an hour or less). Either you want a persistent socket that is not recreated on every loop of the thread, or you want a one-shot runnable you can 'fire and forget' which creates a socket, sends all relevant data, and finishes. (A quick note about using a persistent Socket: socket methods which block, such as reading, cannot be interrupted by Thread.interrupt(), so when you want to stop the thread, you must close the socket as well as calling interrupt.)
4) There is little point in throwing your own exceptions from within a Thread unless you expect to catch it somewhere else. A better solution is to log the error and if it is irrecoverable, stop the thread. A thread can stop itself with code like (in the same context as above):
public void run() {
    while (Thread.currentThread() == runner) {
        //do stuff which can be interrupted if necessary
        if (/*fatal error*/) {
            stopThread();
            return; //optional in this case since the loop will exit anyways
        }
    }
}
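Coming back to point 2, here is a rough sketch of the BlockingQueue approach. It reuses packData() from your question; sendOverSocket() is a hypothetical helper, and the capacity of 100 just mirrors your current limit:

private final BlockingQueue<String> measurements = new LinkedBlockingQueue<String>(100);

private void addMeasurementToQueue() {
    // offer() simply drops the measurement when the queue is full
    measurements.offer(packData());
}

private void doSendData() {
    while (!Thread.currentThread().isInterrupted()) {
        try {
            String measurement = measurements.take(); // blocks until data is available
            sendOverSocket(measurement);
        } catch (InterruptedException e) {
            break; // the thread was asked to stop
        }
    }
}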
Finally, if you want to be sure a thread exits with the rest of your application, no matter what, a good technique is to call Thread.setDaemon(true) after creation and before you start the thread. This flags the thread as a daemon thread, meaning the VM will ensure that it is automatically destroyed if there are no non-daemon threads running (such as if your app quits).
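With the fields from your question, that is simply (sketch):

SendDataThread = new Thread(null, backgrounSendData, "send_data");
SendDataThread.setDaemon(true); // must be called before start()
SendDataThread.start();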
Obeying best practices with regards to Threads should ensure that your app doesn't hang or slow down the phone, though they can be quite complex :)
Actually, you don't need the "runner" variable as described above; something like this works:
while (!interrupted()) {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException ex) {
        break;
    }
}
But generally, sitting in a Thread.sleep() loop is a really bad idea.
Look at the AsyncTask API in the new 1.5 API. It will probably solve your problem more elegantly than using a service. Your phone is getting slow because the service never shuts down - there's nothing that will cause the service to kill itself.
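A very rough sketch of the AsyncTask direction (SendDataTask and its parameter types are made up for illustration; announceNetworkAvailability comes from the question):

private class SendDataTask extends AsyncTask<String, Void, Boolean> {
    @Override
    protected Boolean doInBackground(String... measurements) {
        // open the socket, send measurements[0], return success or failure
        return Boolean.TRUE;
    }

    @Override
    protected void onPostExecute(Boolean connectionAvailable) {
        announceNetworkAvailability(connectionAvailable);
    }
}

// somewhere on the UI thread:
new SendDataTask().execute(measurement);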