Message-processing-task should stop processing a NEW message when it detects a client-login-task. However, the message processor should complete the message it is currently processing before pausing. This basically means the client-login-task must give the message processor breathing space (i.e. wait itself) before it can continue. So the scenario is this:
1) message-processing-task is processing messages from a queue (one at a time)
2) It detects a client-login in the middle of processing a message
3) message-processing-task should complete processing the message before it waits for the client-login-task to complete. This is where the client-login-task must wait.
4) client-login-task completes and signals message-processing-task
5) message-processing-task goes about its business.
My question is: are there any ready-made synchronizers for these two threads, which are executing different paths and yet must wait on each other? My understanding is that CyclicBarrier, Semaphore, and CountDownLatch synchronize threads that are on the same path of execution.
EDIT- There is a single message-processing thread. However, there can be multiple login threads.
A solution I have in mind is to use a ReentrantLock. Before processing each message, the lock is acquired and the message processor checks whether any client logins are in progress. An AtomicInteger tells me the number of login requests in progress. If there is at least one login request in progress, the message processor awaits on a lock condition; it is signalled to resume its work when the AtomicInteger count comes down to 0.
The only caveat with this solution is that if a login request comes in while a message is being processed, the login thread does not wait. To handle that, I would need another lock around the client login, released only when the message processor has finished the current message. This makes the solution far too complex, and I would like to avoid the unnecessary complexity.
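For reference, the lock/condition idea could be sketched like this (class and method names are mine; since the count is only read and written under the lock, a plain int can replace the AtomicInteger):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the lock/condition idea described above; names are illustrative.
class LoginGate {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition noLogins = lock.newCondition();
    private int loginsInProgress = 0;   // guarded by lock

    void loginStarted() {
        lock.lock();
        try { loginsInProgress++; } finally { lock.unlock(); }
    }

    void loginFinished() {
        lock.lock();
        try {
            if (--loginsInProgress == 0) {
                noLogins.signalAll();   // wake the message processor
            }
        } finally { lock.unlock(); }
    }

    // Called by the message processor before taking the next message.
    void awaitNoLogins() throws InterruptedException {
        lock.lock();
        try {
            while (loginsInProgress > 0) {   // loop guards against spurious wakeups
                noLogins.await();
            }
        } finally { lock.unlock(); }
    }
}
```

Note this only blocks between messages, which is exactly the stated caveat: a login that arrives mid-message does not wait for it here.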
Any suggestions appreciated.
Not sure I picked this up correctly, but I would use a ThreadPoolExecutor with a single thread, passing a PriorityBlockingQueue to it. All your tasks go onto that queue; the login tasks have higher priority, so they will all be processed before any message-processing tasks.
Also, if a message-processing-task is in progress, it will complete before the client-login-task kicks in. Would that be what you need?
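That approach could be sketched as follows (class names are mine). One caveat: submit() wraps tasks in a FutureTask, which is not Comparable, so either use execute() with Comparable runnables (as here) or override newTaskFor():

```java
import java.util.*;
import java.util.concurrent.*;

// A runnable whose queue order is determined by an explicit priority.
class PrioritizedTask implements Runnable, Comparable<PrioritizedTask> {
    final int priority;      // lower value = runs first
    final Runnable body;

    PrioritizedTask(int priority, Runnable body) {
        this.priority = priority;
        this.body = body;
    }

    @Override public void run() { body.run(); }

    @Override public int compareTo(PrioritizedTask o) {
        return Integer.compare(priority, o.priority);
    }
}

public class PriorityExecutorDemo {
    static List<String> runDemo() throws Exception {
        List<String> order = Collections.synchronizedList(new ArrayList<>());
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new PriorityBlockingQueue<>());
        CountDownLatch gate = new CountDownLatch(1);

        // Occupy the single worker so the next two tasks queue up.
        executor.execute(new PrioritizedTask(0, () -> {
            try { gate.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }));
        executor.execute(new PrioritizedTask(2, () -> order.add("message")));
        executor.execute(new PrioritizedTask(1, () -> order.add("login")));
        gate.countDown();

        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
        return order;   // login runs first, despite being submitted last
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo());
    }
}
```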
A simple approach would be to share a Semaphore (1 permit, fairness true) between the login and message-processing tasks. If the message processing thread were in the middle of a task, it would finish that task before the login task could proceed (fairness guarantees that the waiting login task executes right after).
If there are multiple message processing threads, this approach won't work.
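A minimal sketch of that idea (class and method names are mine), assuming a single message-processing thread:

```java
import java.util.concurrent.Semaphore;

// Sketch of the fair-semaphore approach; assumes one message-processing thread.
public class FairGate {
    // One permit, fair: the longest-waiting thread acquires it next.
    private final Semaphore permit = new Semaphore(1, true);

    public String processMessage(String msg) throws InterruptedException {
        permit.acquire();
        try {
            return "processed " + msg;   // the in-flight message always completes
        } finally {
            permit.release();
        }
    }

    public String login(String client) throws InterruptedException {
        permit.acquire();                // blocks until the current message is done
        try {
            return "logged in " + client;
        } finally {
            permit.release();
        }
    }
}
```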
Related
I want to implement a single-producer - multi-consumer logic where each consumer processing time depends on a hardware response.
**EDIT
I have a Set of objects (devices). Each object (device) corresponds to a hardware real unit I want to simulate in software.
My main class distributes a list of tasks to each device. Each task takes a certain time to complete, which I want to control in order to simulate the hardware operation. Each device object has its own single-thread ExecutorService to manage its own queued tasks. A sleep in a task of a specific device object should not interfere with main or with the other device objects' performance.
So far things are working but I am not sure how to get a future from the tasks without blocking the main thread with a while(!future.isDone()). When I do it, two problems occur:
task 1 is submitted to device[ 1 ].executor. Task 1 sleeps to simulate hardware operation time.
task 2 should be submitted to device[ 2 ].executor as soon as task 1 is submitted, but it won't be, because the main thread is held while waiting for task 1 to return a Future. This issue accumulates delay in the simulation, since every task added causes the next device to wait for the previous one to complete, instead of running simultaneously.
Orange line indicates a command to force device to wait for 1000 milliseconds.
When the Future returns, it then submits a new task to device 2, but it is already 1 second late, seen in the blue line. And so on; the green line shows the delay increasing.
If I don't use a Future to learn when tasks are finished, the simulation seems to run correctly. I couldn't find a way to use future.isDone() without having to create a new thread just to check it. I would really be glad if someone could advise me on how to proceed in this scenario.
If your goal is to implement something where each consumer task is talking to a hardware device during the processing of its task, then the run method of the task should simply talk to the device and block until it receives the response from the device. (How you do that will depend on the device and its API ...)
If your goal is to do the above with a simulated device (i.e. for testing purposes) then have the task call Thread.sleep(...) to simulate the time that the device would take to respond.
Based on your problem description (as I understand it), the PausableSchedulerThreadPoolExecutor class that you have found won't help. What that class does is to pause the threads themselves. All of them.
UPDATE
task 2 should be submitted to device[ 2 ].executor as soon as task 1 is submitted, but it won't, because main thread is hold while waiting for task 1 to return a Future.
That is not correct. The Future object is returned immediately ... when the task is submitted.
Your mistake (probably) is that the main thread is calling get on the Future. That will block. But the point is, if your main thread actually needs to call get on the Future before submitting the next task, then it is essentially single-threaded.
Real solution: figure out how to break the dependency that makes your application single-threaded. (But beware: if you pass the Future as a parameter to a task, then the corresponding worker thread may block. Unless you have enough threads in the thread pool, you could end up with starvation and reduced concurrency.)
I am implementing the following functionality in a load test tool to simulate heavy load on a target application:
Multiple threads are launched concurrently to perform the same kind of operations.
Each thread will loop for n times. At the end of each loop, test results are available and are added to a list which is returned after all loops finish running.
I'm currently using Callable and Future, and putting the lists of results returned by all the threads into another list after all threads finish running and give me their Futures. The problem is that I can lose whatever is already available if the execution of the program is interrupted. I want to be able to save the results of finished loops while the threads are still processing the remaining loops.
Is there something in Java concurrency library suitable for this purpose? Or is there a better design to the load test functionality I am building?
Thanks in advance!
You can pass your results to a BlockingQueue as they occur. This can be picked up by another thread or the one which triggered the tasks in the first place.
The java.util.concurrent.CyclicBarrier class is a synchronization mechanism that can synchronize threads progressing through some algorithm. In other words, it is a barrier that all threads must wait at, until all threads reach it, before any of the threads can continue.
Creating a CyclicBarrier
When you create a CyclicBarrier you specify how many threads are to wait at it, before releasing them. Here is how you create a CyclicBarrier:
CyclicBarrier barrier = new CyclicBarrier(2);
Waiting at a CyclicBarrier
Here is how a thread waits at a CyclicBarrier:
barrier.await();
You can also specify a timeout for the waiting thread. When the timeout has passed the thread is also released, even if not all N threads are waiting at the CyclicBarrier. Here is how you specify a timeout:
barrier.await(10, TimeUnit.SECONDS);
A waiting thread stays at the CyclicBarrier until either:
The last thread arrives (calls await())
The thread is interrupted by another thread (another thread calls its interrupt() method)
Another waiting thread is interrupted
Another waiting thread times out while waiting at the CyclicBarrier
The CyclicBarrier.reset() method is called by some external thread.
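Putting the pieces together, here is a small runnable example with two parties, using the optional barrier action (a Runnable passed to the constructor that runs once per trip, after the last thread arrives and before any thread is released):

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

public class BarrierDemo {
    static int runDemo() throws Exception {
        AtomicInteger trips = new AtomicInteger();
        // Barrier action runs once per trip, after both threads have arrived.
        CyclicBarrier barrier = new CyclicBarrier(2, trips::incrementAndGet);

        Runnable worker = () -> {
            try {
                barrier.await();     // blocks until both threads have arrived
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        };

        Thread a = new Thread(worker);
        Thread b = new Thread(worker);
        a.start(); b.start();
        a.join(); b.join();
        return trips.get();          // the barrier tripped exactly once
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo());
    }
}
```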
I have a Queue of requests. There are two threads: in one thread I am adding items to the queue, and the second thread gets the requests from the queue and executes them. So the second thread waits for the first thread to put some request in the queue. I am doing this in a while loop, and I don't think this is the best way to do it; it is CPU intensive. I can think of a way to notify the 2nd thread whenever I add a request, but there is the problem that the request may not execute successfully, in which case I have to ask the 2nd thread to execute the request again.
so is there any way you can think will work ?
Use one of the available blocking queues in Java: http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html
The busy waiting is indeed not recommended (unless you want to use your computer for heating).
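A minimal sketch (names are mine): the consumer blocks on take() instead of spinning, and a "poison pill" value shuts it down cleanly:

```java
import java.util.*;
import java.util.concurrent.*;

public class RequestQueueDemo {
    static List<String> runDemo() throws Exception {
        BlockingQueue<String> requests = new ArrayBlockingQueue<>(100);
        List<String> executed = Collections.synchronizedList(new ArrayList<>());

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String request = requests.take();   // blocks; no CPU burned
                    if (request.equals("STOP")) break;  // poison pill to shut down
                    executed.add("executed " + request);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        requests.put("req-1");
        requests.put("req-2");
        requests.put("STOP");
        consumer.join();
        return executed;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo());
    }
}
```

If a request fails, the consumer can simply put it back on the queue to retry it later.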
You can make use of Semaphores to solve this problem.
The second thread, which is the worker thread, will wait on the semaphore. Every time the 1st thread pushes new task info onto the Queue structure, it will also release a permit on the semaphore, so the second thread can safely go and execute.
This may also need some synchronization along the way if there are multiple reader/writer threads.
A question on using threads in java (disclaimer - I am not very experienced with threads so please allow some leeway).
Overview:
I was wondering whether there was a way for multiple threads to add actions to be performed to a queue which another thread would take care of. It does not matter really what order - more important that the actions in the queue are taken care of one at a time.
Explanation:
I plan to host a small server (using servlets). I want each connection to a client to be handled by a separate thread (so far ok). However, each of these threads/clients will be making changes to a single XML file, and the changes cannot be made at the same time.
Question:
Could I have each thread submit the changes to be made to a queue which another thread will continuously manage? As I said it does not matter on the order of the changes, just that they do not happen at the same time.
Also, please advise if this is not the best way to do this.
Thank you very much.
This is a reasonable approach. Use an unbounded BlockingQueue (e.g. a LinkedBlockingQueue) - the thread performing IO on the XML file calls take on the queue to remove the next message (blocking if the queue is empty) then processing the message to modify the XML file, while the threads submitting changes to the XML file will call offer on the queue in order to add their messages to it. The BlockingQueue is thread-safe, so there's no need for your threads to perform synchronization on it.
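That setup might look like this (names are mine; XmlChange is a hypothetical stand-in for "one edit to the file"):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class XmlWriterQueue {
    // Hypothetical message type representing one change to the XML file.
    record XmlChange(String description) {}

    private final BlockingQueue<XmlChange> queue = new LinkedBlockingQueue<>();

    // Called by the request-handling threads.
    public void submitChange(XmlChange change) {
        queue.offer(change);            // never blocks on an unbounded queue
    }

    // Helper used by the writer loop; blocks while the queue is empty.
    XmlChange takeNext() throws InterruptedException {
        return queue.take();
    }

    // Run by the single writer thread.
    public void writerLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            XmlChange change = takeNext();
            applyToXmlFile(change);     // only this thread ever touches the file
        }
    }

    private void applyToXmlFile(XmlChange change) {
        // ... perform the actual file modification here ...
    }
}
```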
You could have the threads submit tasks to an ExecutorService that has only one thread. Or you could have a lock that allows only one thread to alter the file at once. The latter seems more natural, as the file is a shared resource; the queue is then the implied queue of threads awaiting the lock.
The Executor interface provides the abstraction you need:
"An object that executes submitted Runnable tasks. This interface provides a way of decoupling task submission from the mechanics of how each task will be run, including details of thread use, scheduling, etc. An Executor is normally used instead of explicitly creating threads."
A single-threaded executor service seems like exactly the right tool for the job. See Executors.newSingleThreadExecutor(), whose javadoc says:
Creates an Executor that uses a single worker thread operating off an unbounded queue. (Note however that if this single thread terminates due to a failure during execution prior to shutdown, a new one will take its place if needed to execute subsequent tasks.) Tasks are guaranteed to execute sequentially, and no more than one task will be active at any given time. Unlike the otherwise equivalent newFixedThreadPool(1) the returned executor is guaranteed not to be reconfigurable to use additional threads.
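A minimal sketch of that setup (names are mine): several client threads submit changes, and the single worker applies them one at a time, in submission order:

```java
import java.util.*;
import java.util.concurrent.*;

public class SingleWriterDemo {
    static List<String> runDemo() throws Exception {
        ExecutorService xmlWriter = Executors.newSingleThreadExecutor();
        List<String> applied = Collections.synchronizedList(new ArrayList<>());

        // Simulate several client threads submitting changes concurrently.
        ExecutorService clients = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            final int id = i;
            clients.execute(() -> xmlWriter.execute(() -> applied.add("change-" + id)));
        }

        clients.shutdown();
        clients.awaitTermination(5, TimeUnit.SECONDS);
        xmlWriter.shutdown();
        xmlWriter.awaitTermination(5, TimeUnit.SECONDS);
        return applied;   // all four changes, applied one at a time
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo());
    }
}
```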
Note that in a JavaEE context, you need to take into consideration how to terminate the worker thread when your webapp is unloaded. There are other questions here on SO that deal with this.
I have a java concurrency problem, it goes like this: There is a Servlet (serv1) that stores objects (representing messages) into a database. There is also a Thread (t1) that looks (in the database) for unsent messages and delivers them. t1 runs every 5 minutes so, for efficiency purposes, serv1 notifies t1 every time it stores a message.
The question is: how is the notification process going to behave in a highly concurrent scenario, where serv1 is receiving an extremely high number of requests and thus t1 is being notified so often that it effectively simulates a "while (true)"?
Another question: how will the notification process behave if serv1 wants to notify t1 but t1 is already awake/running?
Thanks in advance!!
I don't think this is an issue, @Wilmer. I suspect that the notification itself is relatively cheap compared to the cost of consuming and processing your messages. If you are spinning while consuming messages, then removing the notifications is not going to help; you will have to block your serv1 thread somehow or offload the jobs to run later.
In terms of notifications, if no one is waiting then the notify() is effectively a no-op. This is why it is important to check to see if there is anything to process before waiting on the signal -- all in a synchronized block. It is best practice to also loop around the wait and check again when we are notified. See this race condition.
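The check-then-wait pattern described above might be sketched like this (class and method names are mine):

```java
// Sketch of the guarded-wait pattern: check the condition before waiting,
// and re-check in a loop after every wakeup.
public class MessageSignal {
    private final Object lock = new Object();
    private int pending = 0;          // guarded by lock

    // Called by serv1 after storing a message.
    public void messageStored() {
        synchronized (lock) {
            pending++;
            lock.notify();            // a no-op if t1 is not currently waiting
        }
    }

    // Called by t1 before each sweep; returns how many messages to process.
    public int awaitMessages() throws InterruptedException {
        synchronized (lock) {
            while (pending == 0) {    // loop guards against spurious wakeups
                lock.wait();
            }
            int n = pending;
            pending = 0;
            return n;
        }
    }
}
```

Because the count is checked before waiting, notifications that arrive while t1 is already awake are not lost; t1 simply finds pending > 0 on its next pass.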
In general, this is a very common practice, used in virtually all producer/consumer thread models that I have seen. The alternative is not handling the square-wave traffic changes: either your consumer (t1) is waiting too long and the buffers fill up, or it is spinning too much and consuming too much CPU checking the database.
Another thing to consider is to not use the database but to put the objects into a BlockingQueue for t1 to consume directly. If you need to store them in the database then put the IDs of the objects in the queue for t1 to consume. The thread will still need to poll the database on startup but you will save the polls later in the process.
Why are you notifying t1 at all? Why doesn't t1, on its 5-minute sweep, query the database and process all of the pending messages? Then you don't need a notification at all; you simply use the database.
In order to use Object o = new Object(); o.notify() correctly, it has to be done after obtaining that object's monitor (becoming its owner, i.e. synchronizing on it). Moreover, the awakened thread that waited upon that monitor will have to wait yet again for the notifying thread to release the monitor, and then try to obtain it. Only then shall it continue processing.
So, when t1 is awakened, it will actually fight all the other serv1 threads to become owner of the monitor. It might obtain the monitor, thus stalling all serv1 threads (not good). It might lose constantly to serv1's threads and never process the accumulating messages in the database (just as bad, I guess).
What you should do, is let the producers (serv1 threads) work asynchronously with the consumer (t1). t1 should continue to run every X minutes (or seconds) and process all the messages altogether.
Another option, if you want to keep the thread at low activity: configure several consumer threads (t1, t2, t3... etc.). You can use Executors.newFixedThreadPool(int nThreads, ThreadFactory threadFactory) for this purpose.