I have a problem trying to implement a queue for HTTP requests from scratch. Sorry, this might be a very naive concurrency problem to some of you.
Basically I want my application to execute only one request at any time. Extra requests go into a queue and are executed later.
I am aware of more advanced tools such as FutureTask and executor pools, but I want an answer because I am curious about how to solve the basic concurrency problem. The following is my class that maintains the requestQueue:
private Queue<HttpRequest> requestQueue;
private RequestTask myAsyncTask = null;

public boolean send(HttpRequest hr) {
    // if there isn't an existing task, start a new one; otherwise just enqueue the request
    // COMMENT 1.
    if (myAsyncTask == null) {
        requestQueue.offer(hr);
        myAsyncTask = new RequestTask();
        myAsyncTask.execute();
        return true;
    } else {
        // enqueue
        // COMMENT 2
        requestQueue.offer(hr);
        return true;
    }
}

// nested class
class RequestTask extends AsyncTask<Void, Void, Boolean> {
    protected Boolean doInBackground(Void... v) {
        // send all requests in the queue
        while (requestQueue.peek() != null) {
            HttpRequest r = requestQueue.poll();
            // ... leave out code about executing the request
        }
        return true;
    }

    protected void onPostExecute(Boolean success) {
        // COMMENT 3: if the scheduler stops here, just before myAsyncTask is set to null
        myAsyncTask = null;
    }
}
The question is: what if the thread scheduler stops the background thread at the point COMMENT 3 (just before myAsyncTask is set to null)?
// COMMENT 3: if the scheduler stops here, just before myAsyncTask is set to null
myAsyncTask = null;
At that moment, another thread happens to reach the point COMMENT 1 and enters the if ... else ... block. Because myAsyncTask has not yet been set to null, the request gets enqueued in the else block (COMMENT 2) but no new AsyncTask will be created, which means the queue gets stuck!
// COMMENT 1.
if (myAsyncTask == null) {
    requestQueue.offer(hr);
    myAsyncTask = new RequestTask();
    myAsyncTask.execute();
    return true;
} else {
    // enqueue
    // COMMENT 2
    requestQueue.offer(hr);
    return true;
}
I hope that is clear. There is a chance that the queue stops being processed. I am keen to know how to avoid this. Thanks in advance.
The way I would normally implement something like this is to create a class that extends Thread. It would contain a queue object (use whichever implementation you prefer) and would have methods for adding jobs. I'd use synchronization to keep everything thread safe, and notify and wait to avoid polling.
Here's an example that might help...
import java.util.*;

public class JobProcessor extends Thread
{
    private Queue<Object> queue = new LinkedList<Object>();

    public void addJob(Object job)
    {
        synchronized (queue)
        {
            queue.add(job);
            queue.notify(); // lets the thread know that an item is ready
        }
    }

    @Override
    public void run()
    {
        while (true)
        {
            Object job = null;
            synchronized (queue) // ensures thread safety
            {
                // waits until something is added to the queue.
                try {
                    while (queue.isEmpty()) queue.wait();
                } catch (InterruptedException e) {
                    // the wait method can throw an exception you have to catch,
                    // but you can ignore it if you like.
                }
                job = queue.poll();
            }
            // at this point you have the job object and can process it
            // with minimal time waiting on other threads.
            // be sure to check that job isn't null anyway,
            // in case you got an InterruptedException.

            // ... processing code ...

            // job done; loop back and wait for another job in the queue.
        }
    }
}
You pretty much just have to instantiate a class like this and start the thread, then begin inserting objects to be processed as jobs. When the queue is empty, the wait causes this thread to sleep (and also temporarily releases the synchronization lock); the notify in the addJob method wakes it back up when required. Synchronization is a way of ensuring that only one thread has access to the queue at a time. If you're not sure about how it works, look it up in the Java SDK reference.
Your code doesn't have any thread-safety code in it (synchronization), and that's where your problem is. It's also probably a little over-complicated, which won't help you debug it. The main thing is that you need to add synchronization blocks, but make sure you keep them as short as possible.
Related
I have threads dedicated to users on a system, and I want to be able to stop them individually. Do I store the ID of the thread with the user data at creation and then call an interrupt? Or can I somehow add the thread to my user objects and just call something like myuser.mythread.interrupt();? Or is this wishing for magic?
Currently I can stop them all and restart them, leaving out the thread I want removed.
But that is a time-consuming task and also causes a lag during which users must wait.
Update: can this be an answer?
if(delete==true) {
if (Thread.currentThread().getId() == deleteId) {
Thread.currentThread().interrupt();
delete=false;
}
}
Update
I managed to find a way to use myuser.mythread.interrupt();
Or sort of...
I added the thread as a nested class in the user class and created methods in the user class to start and interrupt it, so now I can start and stop threads with
online.get(1).hellos();
online.get(1).hellosStop();
Instead of having to create a reference and keep track of anything other than the user objects.
Update (regarding the accepted answer: using the id as a reference I could do it this way)
public class MyRunnable implements Runnable {
private boolean runThread = true;
@Override
public void run() {
try {
while (runThread) {
if(delete==true) {
if (Thread.currentThread().getId() == deleteId) {
Thread.currentThread().interrupt();
delete=false;
}
}
Thread.sleep(5);
}
}
catch (InterruptedException e) {
// Interrupted, no need to check flag, just exit
return;
}
}
}
You can just store the Thread reference, perhaps in a WeakReference so that the thread will go away if it exits on its own.
But you can also have the thread check an AtomicBoolean (or volatile boolean) every now and then to see whether it should stop; that way you don't need a reference to the thread.
Note, though, that stopping threads in Java is not possible without cooperation from the thread you want to stop. It doesn't matter whether you use interrupt or a boolean flag that the thread checks; in both cases it is up to the thread to check the flag (interrupt just sets a flag) and then perform some action, such as exiting.
Update
A sample interruptible thread class:
import java.util.concurrent.atomic.AtomicBoolean;

public class MyRunnable implements Runnable {
    private final AtomicBoolean stopFlag;

    public MyRunnable(AtomicBoolean stopFlag) {
        this.stopFlag = stopFlag;
    }

    @Override
    public void run() {
        try { // Try/Catch only needed if you use locks/sleep etc.
            while (!stopFlag.get()) {
                // Do some work, but remember to check the flag often!
                Thread.sleep(100); // placeholder for work that may block
            }
        }
        catch (InterruptedException e) {
            // Interrupted, no need to check flag, just exit
            return;
        }
    }
}
The best approach is to save the Thread reference and make it available to the code that needs to interrupt it.
It is technically possible (for a non-sandboxed application) to traverse the tree of all of the JVM's existing threads testing each one. However, that is expensive and doesn't scale. And if you can store or pass the id of a thread, then you should be able to store or pass the Thread reference instead.
It is also technically possible to create your own WeakHashMap<Long, Thread> and use that to map thread ids to threads. But the same argument applies ....
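For illustration only, a rough sketch of such a registry (all names here are invented for this example; note that WeakHashMap holds its keys weakly, so this sketch keeps WeakReference values instead, letting threads that exit on their own be collected):
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

// Hypothetical registry mapping thread ids to threads.
class ThreadRegistry {
    private final Map<Long, WeakReference<Thread>> byId =
            new HashMap<Long, WeakReference<Thread>>();

    synchronized void register(Thread t) {
        byId.put(t.getId(), new WeakReference<Thread>(t));
    }

    synchronized void interruptById(long id) {
        WeakReference<Thread> ref = byId.get(id);
        Thread thread = (ref == null) ? null : ref.get();
        if (thread != null) {
            thread.interrupt(); // the thread still has to cooperate and check its interrupt status
        }
    }
}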
You ask if this is a solution:
if (delete) {
if (Thread.currentThread().getId() == deleteId) {
Thread.currentThread().interrupt();
delete = false;
}
}
No it isn't. Or more precisely, it will only "work" in the case where the thread is interrupting itself. In other cases, the target thread won't be interrupted.
Depending on your use-case, another way to do this could be to use an ExecutorService rather than bare threads. The submit methods return a Future object that represents the submitted task. That object has a cancel(...) method that can be used to cancel the task, either before it runs or by interrupting the running thread.
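A minimal sketch of that idea, assuming the per-user work can be wrapped in a Runnable (the task body is just a stand-in):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class CancelDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();

        // Stand-in for the per-user task; it must cooperate by checking its interrupt status.
        Future<?> task = pool.submit(new Runnable() {
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    // ... do a small unit of work ...
                }
            }
        });

        TimeUnit.SECONDS.sleep(1);
        task.cancel(true); // true = interrupt the thread if it is already running
        pool.shutdown();
    }
}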
I have a use-case coming from a GUI problem I would like to submit to your sagacity.
Use case
I have a GUI that displays a computation result that depends on parameters the user sets in the GUI. For instance, when the user moves a slider, several events are fired, all of which trigger a new computation. When the user adjusts the slider value from A to B, dozens of events are fired.
But the computation can take up to several seconds, whereas the slider adjustment can fire an event every 100 ms or so.
How do I write a proper thread that listens to these events and filters them so that the repaint of the results stays lively? Ideally I would like something like:
start a new computation as soon as the first change event is received;
cancel the first computation if a new event is received, and start a new one with the new parameters;
but ensure that the last event will not be lost, because the last completed computation needs to be the one with the most recently updated parameters.
What I have tried
A friend of mine (A. Cardona) proposed this low-level approach: an Updater thread that prevents too many events from triggering a computation. I copy-paste it here (GPL):
He puts this in a class that extends Thread:
private long request = 0;

public void doUpdate() {
if (isInterrupted())
return;
synchronized (this) {
request++;
notify();
}
}
public void quit() {
interrupt();
synchronized (this) {
notify();
}
}
public void run() {
while (!isInterrupted()) {
try {
final long r;
synchronized (this) {
r = request;
}
// Call refreshable update from this thread
if (r > 0)
refresh(); // Will trigger re-computation
synchronized (this) {
if (r == request) {
request = 0; // reset
wait();
}
// else loop through to update again
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
public void refresh() {
// Execute computation and paint it
...
}
Every time the GUI sends an event stating that parameters have changed, we call updater.doUpdate(). This causes the refresh() method to be called much less often.
But I have no control over this.
Another way?
I was wondering if there is another way to do this that would use the java.util.concurrent classes, but I could not work out which part of the Executors framework I should start with.
Does any of you have some experience with a similar use case?
Thanks
If you're using Swing, the SwingWorker class provides capabilities for this, and you don't have to deal with the thread pool yourself.
Fire off a SwingWorker for each request. If a new request comes in and the worker is not done, you can cancel() it and just start a new SwingWorker. Regarding what the other poster said, I don't think publish() and process() are what you are looking for (although they are also very useful), since they are meant for the case where the worker might fire off events faster than the GUI can process them.
ThingyWorker worker;
public void actionPerformed(ActionEvent e) {
    if (worker != null) worker.cancel(true);
worker = new ThingyWorker();
worker.execute();
}
class ThingyWorker extends SwingWorker<YOURCLASS, Object> {
@Override protected YOURCLASS doInBackground() throws Exception {
return doSomeComputation(); // Should be interruptible
}
@Override protected void done() {
worker = null; // Reset the reference to worker
YOURCLASS data;
try {
data = get();
} catch (Exception e) {
// May be InterruptedException or ExecutionException
e.printStackTrace();
return;
}
// Do something with data
}
}
Both the action and the done() method are executed on the same thread (the event dispatch thread), so they can reliably check the reference to see whether there is an existing worker.
Note that effectively this is doing the same thing that allows a GUI to cancel an existing operation, except the cancel is done automatically when a new request is fired.
I would provide a further degree of decoupling between the GUI and the controls by using a queue.
Use a BlockingQueue between the two processes: whenever the controls change, you can post the new settings to the queue.
Your graphics component can read the queue whenever it likes and act on the arriving events or discard them as necessary.
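A minimal sketch of that idea, assuming the parameters can be represented by a simple Settings object (both class names here are placeholders):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SettingsQueueDemo {

    // Placeholder for whatever the controls produce.
    static class Settings {
        final int value;
        Settings(int value) { this.value = value; }
    }

    private final BlockingQueue<Settings> queue = new LinkedBlockingQueue<Settings>();

    // GUI side: call this from the event dispatch thread whenever a control changes.
    public void controlChanged(int newValue) {
        queue.offer(new Settings(newValue));
    }

    // Worker side: waits for updates, discards stale ones, then recomputes.
    public void startWorker() {
        new Thread(new Runnable() {
            public void run() {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        Settings s = queue.take();        // wait for at least one update
                        Settings newer;
                        while ((newer = queue.poll()) != null) {
                            s = newer;                     // keep only the most recent settings
                        }
                        // ... run the computation with s and repaint on the EDT ...
                    }
                } catch (InterruptedException e) {
                    // stop requested
                }
            }
        }).start();
    }
}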
I would look into SwingWorker.publish() (http://docs.oracle.com/javase/6/docs/api/javax/swing/SwingWorker.html)
publish() allows the background thread of a SwingWorker object to cause calls to the process() method, but not every publish() call results in a process() call. If multiple publish() calls are made before process() returns and can be called again, SwingWorker concatenates the parameters used for those publish() calls into one call to process().
I had a progress dialog which displayed the files being processed; the files were processed faster than the UI could keep up with them, and I didn't want the processing to slow down just to display the file names. I used this and had process() display only the final filename passed to it; all I wanted in this case was to indicate to the user where the current processing was, as they weren't going to read all the filenames anyway. My UI worked very smoothly with this.
Take a look at the implementation of javax.swing.SwingWorker (source code in the Java JDK),
with a focus on the handshaking between two methods: publish and process.
These won't be directly applicable, as-is, to your problem - however they demonstrate how you might queue (publish) updates to a worker thread and then service them in your worker thread (process).
Since you only need the last work request, you don't even need a queue for your situation: keep only the last work request. Sample that "last request" over some small period (say 1 second) to avoid stopping and restarting many times per second, and if it has changed, THEN stop the work and restart.
The reason you don't want to use publish/process as-is is that process always runs on the Swing Event Dispatch Thread, which is not at all suitable for long-running calculations.
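A minimal sketch of that "keep only the last request" idea, assuming the parameters fit in a single object (the Params class and the one-second sampling period are just placeholders):
import java.util.concurrent.atomic.AtomicReference;

public class LastRequestSampler {

    static class Params { /* slider values etc. */ }

    private final AtomicReference<Params> lastRequest = new AtomicReference<Params>();

    // Called from the GUI whenever a control changes; overwrites any pending request.
    public void requestComputation(Params p) {
        lastRequest.set(p);
    }

    // Worker loop: samples the latest request about once per second.
    public void startWorker() {
        new Thread(new Runnable() {
            public void run() {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        Params p = lastRequest.getAndSet(null); // take and clear the latest request
                        if (p != null) {
                            // ... stop any ongoing computation and start a new one with p ...
                        }
                        Thread.sleep(1000); // sampling period
                    }
                } catch (InterruptedException e) {
                    // stop requested
                }
            }
        }).start();
    }
}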
The key here is that you want to be able to cancel an ongoing computation. The computation must frequently check a condition to see if it needs to abort.
volatile Param newParam;
Result compute(Param param)
{
loop
compute a small sub problem
if(newParam!=null) // abort
return null;
return result
}
To hand over the param from the event thread to the compute thread:
synchronized void put(Param param) // invoked by event thread
{
    newParam = param;
    notify();
}

synchronized Param take()
{
    while (newParam == null) {
        try { wait(); } catch (InterruptedException e) { /* ignore in this sketch */ }
    }
    Param param = newParam;
    newParam = null;
    return param;
}
And the compute thread does
public void run()
{
    while (true) {
        Param param = take();
        Result result = compute(param);
        if (result != null) {
            // paint result in the event thread (e.g. via SwingUtilities.invokeLater)
        }
    }
}
I'm looking for a clean design/solution for this problem: I have two threads that may run as long as the user wants, but which eventually stop when the user issues the stop command. However, if one of the threads ends abruptly (e.g. because of a runtime exception) I want to stop the other thread.
Both threads execute a Runnable (so when I say 'stop a thread', what I mean is that I call a stop() method on the Runnable instance). What I'm thinking is to avoid using threads (the Thread class) directly and instead use the CompletionService interface, submitting both Runnables to an instance of this service.
With this I would use the CompletionService's take() method; when it returns I would stop both Runnables, since I know that at least one of them has already finished. This works, but if possible I would like to know of a simpler/better solution for my case.
Also, what is a good solution when we have n threads and want to stop all the others as soon as one of them finishes?
Thanks in advance.
There is no Runnable.stop() method, so that is an obvious non-starter.
Don't use Thread.stop()! It is fundamentally unsafe in the vast majority of cases.
Here are a couple of approaches that should work, if implemented correctly.
You could have both threads regularly check some common flag variable (e.g. call it stopNow), and arrange that both threads set it when they finish. (The flag variable needs to be volatile ... or properly synchronized.)
You could have both threads regularly call the Thread.isInterrupted() method to see if it has been interrupted. Then each thread needs to call Thread.interrupt() on the other one when it finishes.
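A minimal sketch of the shared-flag approach (the Runnable body is just a stand-in):
public class StoppableWorker implements Runnable {

    // Shared between both workers; volatile so writes are visible across threads.
    private static volatile boolean stopNow = false;

    public void run() {
        try {
            while (!stopNow) {
                // ... do a small unit of work, then loop back and re-check the flag ...
            }
        } finally {
            stopNow = true; // whether we finished normally or threw, tell the other thread to stop
        }
    }
}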
I know Runnable doesn't have that method, but my implementation of Runnable that I pass to the threads does have it, and when calling it the runner will finish the run() method (something like Corsika's code, below this answer).
From what I can tell, Corsika's code assumes that there is a stop() method that will do the right thing when called. The real question is how you have implemented it, or how you intend to implement it.
If you already have an implementation that works, then you've got a solution to the problem.
Otherwise, my answer gives two possible approaches to implementing the "stop now" functionality.
I appreciate your suggestions, but I have a doubt: how does 'regularly check/call' translate into code?
It entirely depends on the task that the Runnable.run() method performs. It typically entails adding a check / call to certain loops so that the test happens reasonably often ... but not too often. You also want to check only when it would be safe to stop the computation, and that is another thing you must work out for yourself.
The following should help to give you some ideas of how you might apply it to your problem. Hope it helps...
import java.util.*;

public class x {
    public static void main(String[] args) {
        ThreadManager<Thread> t = new ThreadManager<Thread>();
        Thread a = new MyThread(t);
        Thread b = new MyThread(t);
        Thread c = new MyThread(t);
        t.add(a);
        t.add(b);
        t.add(c);
        a.start();
        b.start();
        c.start();
    }
}

class ThreadManager<T> extends ArrayList<T> {
    public void stopThreads() {
        for (T t : this) {
            Thread thread = (Thread) t;
            if (thread.isAlive()) {
                try { thread.interrupt(); }
                catch (Exception e) { /* ignore on purpose */ }
            }
        }
    }
}

class MyThread extends Thread {
    static boolean signalled = false;
    private ThreadManager<Thread> m;

    public MyThread(ThreadManager<Thread> tm) {
        m = tm;
    }

    public void run() {
        try {
            // periodically check ...
            if (Thread.interrupted()) throw new InterruptedException();
            // do stuff
        } catch (Exception e) {
            synchronized (getClass()) {
                if (!signalled) {
                    signalled = true;
                    m.stopThreads();
                }
            }
        }
    }
}
Whether you use a stop flag or an interrupt, you will need to periodically check to see whether a thread has been signalled to stop.
You could give them access to each other, or a callback to something that has access to both so it can interrupt the other. Consider:
MyRunner aRunner = new MyRunner(this);
MyRunner bRunner = new MyRunner(this);
Thread a = new Thread(aRunner);
Thread b = new Thread(bRunner);

// catch appropriate exceptions, error handling... probably should verify
// 'winner' actually is a or b
public void stopOtherThread(MyRunner winner) {
    if (winner == aRunner) bRunner.stop(); // assumes you have a stop() method on class MyRunner
    else aRunner.stop();
}

// later
a.start();
b.start();

// in your run method
public void run() {
    // ... awesome code: the actual work goes here ...
    // when this runner finishes, tell the master to stop the other one
    myRunnerMaster.stopOtherThread(this);
}
In most cases when you create your thread you can prepare the data beforehand and pass it into the constructor or method.
However in cases like an open socket connection you will typically already have a thread created but wish to tell it to perform some action.
Basic idea:
C#
private Thread _MyThread = new Thread(MyMethod);
this._MyThread.Start(param);
Java
private Thread _MyThread = new Thread(new MyRunnableClass(param));
this._MyThread.start();
Now what?
So what is the correct way to pass data to a running thread in C# and Java?
One way to pass data to a running thread is by implementing message queues. The thread that wants to tell the listening thread to do something adds an item to the queue of the listening thread. The listening thread reads from this queue in a blocking fashion, causing it to wait when there are no actions to perform. Whenever another thread puts a message in the queue, it will fetch the message; depending on the item and its content, you can then do something with it.
This is some Java / pseudo code:
class Listener
{
    private Queue queue;

    public void SendMessage(Message m)
    {
        // This will be executed in the calling thread.
        // The locking will be done either in this function or in the put below,
        // depending on your Queue implementation.
        synchronized (this.queue)
        {
            this.queue.put(m);
        }
    }

    public void Loop()
    {
        // This function should be called from the Listener thread.
        while (true)
        {
            Message m = this.queue.take();
            doAction(m);
        }
    }

    public void doAction(Message m)
    {
        if (m instanceof StopMessage)
        {
            ...
        }
    }
}
And the caller:
class Caller
{
private Listener listener;
void LetItStop()
{
listener.SendMessage(new StopMessage());
}
}
Of course, there are a lot of best practices when programming parallel/concurrent code. For example, instead of while(true) you should at least add a boolean field like run that you can set to false when you receive a StopMessage. Depending on the language in which you want to implement this, you will have other primitives and behaviour to deal with.
In Java, for example, you might want to use the java.util.concurrent package to keep things simple for you.
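For example, a minimal sketch of the same listener using a BlockingQueue from java.util.concurrent (the Message/StopMessage types are just placeholders for this sketch):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueListener implements Runnable {

    // Placeholder message types for this sketch.
    interface Message {}
    static class StopMessage implements Message {}

    private final BlockingQueue<Message> queue = new LinkedBlockingQueue<Message>();
    private volatile boolean running = true;

    // Called from any other thread to hand data/commands to this one.
    public void sendMessage(Message m) throws InterruptedException {
        queue.put(m); // the BlockingQueue does its own locking
    }

    public void run() {
        try {
            while (running) {
                Message m = queue.take(); // blocks until a message arrives
                if (m instanceof StopMessage) {
                    running = false;
                } else {
                    // ... act on the message ...
                }
            }
        } catch (InterruptedException e) {
            // treat interruption as a request to stop
        }
    }
}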
Java
You could basically have a LinkedList (used as a FIFO queue) and proceed with something like this (untested):
class MyRunnable<T> implements Runnable {

    private LinkedList<T> queue;
    private boolean stopped;

    public MyRunnable(LinkedList<T> queue) {
        this.queue = queue;
        this.stopped = false;
    }

    public void stopRunning() {
        stopped = true;
        synchronized (queue) {
            queue.notifyAll();
        }
    }

    public void run() {
        T current;
        while (!stopped) {
            synchronized (queue) {
                try {
                    queue.wait();
                } catch (InterruptedException e) {
                    // ignore and re-check the stopped flag
                }
            }
            if (queue.isEmpty()) {
                try { Thread.sleep(1); } catch (InterruptedException e) {}
            } else {
                current = queue.removeFirst();
                // do something with the data from the queue
            }
            Thread.yield();
        }
    }
}
As you keep a reference to the LinkedList instance given as an argument somewhere else, all you have to do is:
synchronized (queue) {
    queue.addLast(element); // add your T element here. You could even handle some
                            // sort of priority queue by adding at a given index
    queue.notifyAll();
}
Edit: Misread question,
C#
What I normally do is create a global static class and then set the values there. That way you can access them from both threads. I'm not sure if this is the preferred method, and there could be cases where locking occurs (correct me if I'm wrong), which should be handled.
I haven't tried it, but it should work for the ThreadPool/BackgroundWorker as well.
One way I can think of is through property files.
Well, it depends a lot on the work that the thread is supposed to do.
For example, you can have a thread waiting on an event (e.g. a ManualResetEvent) and a shared queue where you put work items (these can be data structures to be processed, or more clever commands following a Command pattern). Somebody adds new work to the queue and signals the event, so the thread awakes, gets work from the queue and starts performing its task.
You can encapsulate this code inside a custom queue, where any thread that calls the Dequeue method blocks until somebody calls Add(item).
On the other hand, maybe you want to rely on the .NET ThreadPool class to issue tasks to be executed by the threads in the pool.
Does this example help a bit?
You can use the delegate pattern, where child threads subscribe to an event and the main thread raises the event, passing the parameters.
You could run your worker thread within a loop (if that makes sense for your requirement) and check a flag on each iteration of the loop. The flag would be set by the other thread to signal the worker thread that some state had changed; it could also set a field at the same time to pass the new state.
Additionally, you could use Monitor.Wait and Monitor.Pulse to signal the state changes between the threads.
Obviously, the above would need synchronization.
I have a BlockingQueue<Runnable> (taken from a ScheduledThreadPoolExecutor) in a producer-consumer environment. There is one thread adding tasks to the queue, and a thread pool executing them.
I need notifications on two events:
First item added to empty queue
Last item removed from queue
Notification = writing a message to the database.
Is there any sensible way to implement that?
A simple and naïve approach would be to decorate your BlockingQueue with an implementation that simply checks the underlying queue and then posts a task to do the notification.
class NotifyingQueue<T> extends ForwardingBlockingQueue<T> implements BlockingQueue<T> {
    private final Notifier notifier; // injected, not null
    …

    @Override public void put(T element) throws InterruptedException {
        if (getDelegate().isEmpty()) {
            notifier.notEmptyAnymore();
        }
        super.put(element);
    }

    @Override public T poll() {
        final T result = super.poll();
        if ((result != null) && getDelegate().isEmpty())
            notifier.nowEmpty();
        return result;
    }

    … etc
}
This approach has a couple of problems, though. While the empty -> not-empty transition is pretty straightforward, particularly for the single-producer case, it would be easy for two consumers to run concurrently and both see the queue go from non-empty -> empty.
If, though, all you want is to be notified that the queue became empty at some point, then this will be enough, as long as your notifier is your state machine, tracking emptiness and non-emptiness and notifying when it changes from one to the other:
class AtomicStateNotifier implements Notifier {
    private final AtomicBoolean empty = new AtomicBoolean(true); // assume it starts empty
    private final Notifier delegate; // injected, not null

    AtomicStateNotifier(Notifier delegate) {
        this.delegate = delegate;
    }

    public void notEmptyAnymore() {
        if (empty.get() && empty.compareAndSet(true, false))
            delegate.notEmptyAnymore();
    }

    public void nowEmpty() {
        if (!empty.get() && empty.compareAndSet(false, true))
            delegate.nowEmpty();
    }
}
This is now a thread-safe guard around an actual Notifier implementation that perhaps posts tasks to an Executor to asynchronously write the events to the database.
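For illustration, a minimal sketch of such a Notifier (the Notifier interface is only implied by the code above, and the database call is just a placeholder):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

interface Notifier {
    void notEmptyAnymore();
    void nowEmpty();
}

class DatabaseNotifier implements Notifier {
    private final ExecutorService writer = Executors.newSingleThreadExecutor();

    public void notEmptyAnymore() {
        writer.submit(new Runnable() {
            public void run() {
                // ... write a "first item added to empty queue" row to the database ...
            }
        });
    }

    public void nowEmpty() {
        writer.submit(new Runnable() {
            public void run() {
                // ... write a "last item removed from queue" row to the database ...
            }
        });
    }
}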
The design is most likely flawed, but you can do it relatively simply:
You have a single thread adding, so you can check before adding, i.e. pool.getQueue().isEmpty(); with one producer this is safe.
"Last item removed" cannot be guaranteed, but you can override beforeExecute and check the queue again, possibly with a small timeout after isEmpty() returns true. The code below would probably be better off executed in afterExecute instead.
protected void beforeExecute(Thread t, Runnable r) {
    if (getQueue().isEmpty()) {
        try {
            Runnable next = getQueue().poll(200, TimeUnit.MILLISECONDS);
            if (next != null) {
                execute(next);
            } else {
                // last message - or handle it in afterExecute by setting a ThreadLocal and checking it there;
                // alternatively you may need to do so ONLY in afterExecute, depending on your needs
            }
        } catch (InterruptedException _ie) {
            Thread.currentThread().interrupt();
        }
    }
}
Something like that.
I can explain why doing notifications with the queue itself won't work well: imagine you add a task to be executed by the pool, the task is scheduled immediately, the queue is empty again, and you will need a notification.