I have been working with SwingWorkers for a while and have run into behavior that is strange, at least to me. I clearly understand that, for performance reasons, several invocations of the publish() method are coalesced into one invocation. That makes perfect sense to me, and I suspect SwingWorker keeps some kind of queue to process all those calls.
According to the tutorial and the API, when a SwingWorker ends its execution, whether doInBackground() finishes normally or the worker thread is cancelled from the outside, the done() method is invoked. So far so good.
But I have an example (similar to the ones shown in the tutorials) where process() is called after done() has executed. Since both methods run on the Event Dispatch Thread, I would expect done() to be executed after all process() invocations have finished. In other words:
Expected:
Writing...
Writing...
Stopped!
Result:
Writing...
Stopped!
Writing...
Sample code
import java.awt.BorderLayout;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.event.ActionEvent;
import java.util.List;
import javax.swing.AbstractAction;
import javax.swing.Action;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;
import javax.swing.SwingUtilities;
import javax.swing.SwingWorker;
public class Demo {
private SwingWorker<Void, String> worker;
private JTextArea textArea;
private Action startAction, stopAction;
private void createAndShowGui() {
startAction = new AbstractAction("Start writing") {
@Override
public void actionPerformed(ActionEvent e) {
Demo.this.startWriting();
this.setEnabled(false);
stopAction.setEnabled(true);
}
};
stopAction = new AbstractAction("Stop writing") {
@Override
public void actionPerformed(ActionEvent e) {
Demo.this.stopWriting();
this.setEnabled(false);
startAction.setEnabled(true);
}
};
JPanel buttonsPanel = new JPanel();
buttonsPanel.add(new JButton(startAction));
buttonsPanel.add(new JButton(stopAction));
textArea = new JTextArea(30, 50);
JScrollPane scrollPane = new JScrollPane(textArea);
JFrame frame = new JFrame("Test");
frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
frame.add(scrollPane);
frame.add(buttonsPanel, BorderLayout.SOUTH);
frame.pack();
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
private void startWriting() {
stopWriting();
worker = new SwingWorker<Void, String>() {
@Override
protected Void doInBackground() throws Exception {
while(!isCancelled()) {
publish("Writing...\n");
}
return null;
}
@Override
protected void process(List<String> chunks) {
String string = chunks.get(chunks.size() - 1);
textArea.append(string);
}
@Override
protected void done() {
textArea.append("Stopped!\n");
}
};
worker.execute();
}
private void stopWriting() {
if(worker != null && !worker.isCancelled()) {
worker.cancel(true);
}
}
public static void main(String[] args) {
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
new Demo().createAndShowGui();
}
});
}
}
SHORT ANSWER:
This happens because publish() doesn't schedule process() directly; it starts a timer which, after DELAY, fires the scheduling of a process() block on the EDT. So when the worker is cancelled there is still a timer waiting to schedule a process() with the data of the last publish(). The reason for using a timer is to implement the optimization whereby a single process() may be executed with the combined data of several publish() calls.
LONG ANSWER:
Let's see how publish() and cancel() interact with each other. For that, let us dive into some source code.
First the easy part, cancel(true):
public final boolean cancel(boolean mayInterruptIfRunning) {
return future.cancel(mayInterruptIfRunning);
}
This cancel ends up calling the following code:
boolean innerCancel(boolean mayInterruptIfRunning) {
for (;;) {
int s = getState();
if (ranOrCancelled(s))
return false;
if (compareAndSetState(s, CANCELLED)) // <-----
break;
}
if (mayInterruptIfRunning) {
Thread r = runner;
if (r != null)
r.interrupt(); // <-----
}
releaseShared(0);
done(); // <-----
return true;
}
The SwingWorker state is set to CANCELLED, the thread is interrupted and done() is called; however, this is not SwingWorker's done() but the future's done(), which is specified when the variable is instantiated in the SwingWorker constructor:
future = new FutureTask<T>(callable) {
@Override
protected void done() {
doneEDT(); // <-----
setState(StateValue.DONE);
}
};
And the doneEDT() code is:
private void doneEDT() {
Runnable doDone =
new Runnable() {
public void run() {
done(); // <-----
}
};
if (SwingUtilities.isEventDispatchThread()) {
doDone.run(); // <-----
} else {
doSubmit.add(doDone);
}
}
This calls the SwingWorker's done() directly if we are on the EDT, which is our case here. At this point the SwingWorker should stop and no more publish() calls should be made, which is easy enough to demonstrate with the following modification:
while(!isCancelled()) {
textArea.append("Calling publish\n");
publish("Writing...\n");
}
However, we still get a "Writing..." message from process(). So let us see how process() is called. The source code for publish(...) is:
protected final void publish(V... chunks) {
synchronized (this) {
if (doProcess == null) {
doProcess = new AccumulativeRunnable<V>() {
@Override
public void run(List<V> args) {
process(args); // <-----
}
@Override
protected void submit() {
doSubmit.add(this); // <-----
}
};
}
}
doProcess.add(chunks); // <-----
}
We see that the run() of the Runnable doProcess is what ends up calling process(args), but this code just calls doProcess.add(chunks), not doProcess.run(), and there's a doSubmit involved too. Let's look at doProcess.add(chunks).
public final synchronized void add(T... args) {
boolean isSubmitted = true;
if (arguments == null) {
isSubmitted = false;
arguments = new ArrayList<T>();
}
Collections.addAll(arguments, args); // <-----
if (!isSubmitted) { // This is what makes multiple publish() calls result in only one process() execution
submit(); // <-----
}
}
So what publish() actually does is add the chunks into an internal ArrayList called arguments and call submit(). We just saw that submit() simply calls doSubmit.add(this), which is this very same add() method, since both doProcess and doSubmit extend AccumulativeRunnable<V>; this time, however, V is Runnable instead of String as in doProcess. So here a chunk is the runnable that calls process(args). The submit() call, though, is a completely different method, defined in the class of doSubmit:
private static class DoSubmitAccumulativeRunnable
extends AccumulativeRunnable<Runnable> implements ActionListener {
private final static int DELAY = (int) (1000 / 30);
@Override
protected void run(List<Runnable> args) {
for (Runnable runnable : args) {
runnable.run();
}
}
@Override
protected void submit() {
Timer timer = new Timer(DELAY, this); // <-----
timer.setRepeats(false);
timer.start();
}
public void actionPerformed(ActionEvent event) {
run(); // <-----
}
}
It creates a Timer that fires the actionPerformed() code once after DELAY milliseconds. Once the event fires, the code is enqueued on the EDT, which calls an internal run() that ends up calling run(flush()) of doProcess and thus executing process(chunk), where chunk is the flushed data of the arguments ArrayList. I skipped some details; the chain of "run" calls looks like this:
doSubmit.run()
doSubmit.run(flush()) //Actually a loop of runnables but will only have one (*)
doProcess.run()
doProcess.run(flush())
process(chunk)
(*) The boolean isSubmitted and flush() (which resets this boolean) ensure that additional calls to publish() don't add more doProcess runnables to be called in doSubmit.run(flush()); their data, however, is not ignored. Thus a single process() is executed for any number of publish() calls made during the life of one Timer.
All in all, what publish("Writing...") does is schedule the call to process(chunk) on the EDT after a DELAY. This explains why, even after we have cancelled the thread and no more publishes are done, one more process() execution still appears: at the moment we cancel the worker there is (with high probability) a Timer that will schedule a process() after done() has already been scheduled.
Why is this Timer used instead of just scheduling process() in the EDT with an invokeLater(doProcess)? To implement the performance optimization explained in the docs:
Because the process method is invoked asynchronously on the Event Dispatch Thread multiple invocations to the publish method might occur before the process method is executed. For performance purposes all these invocations are coalesced into one invocation with concatenated arguments.
For example:
publish("1");
publish("2", "3");
publish("4", "5", "6");
might result in:
process("1", "2", "3", "4", "5", "6")
We now know that this works because all the publishes that occur within a DELAY interval add their args into that internal arguments variable we saw, and process(chunk) then executes with all that data in one go.
IS THIS A BUG? WORKAROUND?
It's hard to tell whether this is a bug or not. It might make sense to process the data that the background thread has published, since the work has actually been done and you might be interested in updating the GUI with as much info as you can (if that's what process() is doing, for example). And it might not make sense if done() requires all the data to have been processed, and/or a call to process() after done() creates data/GUI inconsistencies.
There's an obvious workaround if you don't want any new process() to be executed after done(): simply check whether the worker is cancelled in the process() method too!
@Override
protected void process(List<String> chunks) {
if (isCancelled()) return;
String string = chunks.get(chunks.size() - 1);
textArea.append(string);
}
It's trickier to make done() execute after that last process(); for example, done() could itself use a timer that schedules the actual done() work after more than DELAY, as in the sketch below. I can't see this being a common case, though: if you cancelled, it shouldn't matter that you miss one more process() when you know you are in fact cancelling the execution of all future ones.
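For completeness, a minimal sketch of that idea applied to the done() of the example above (the 100 ms margin is an arbitrary guess chosen to be larger than SwingWorker's internal ~33 ms DELAY, not something the API guarantees):

@Override
protected void done() {
    // Defer the real completion work slightly longer than the internal
    // publish timer, so a pending process() scheduled just before the
    // cancellation gets to run first.
    javax.swing.Timer deferred = new javax.swing.Timer(100, new java.awt.event.ActionListener() {
        @Override
        public void actionPerformed(java.awt.event.ActionEvent e) {
            textArea.append("Stopped!\n");
        }
    });
    deferred.setRepeats(false);
    deferred.start();
}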
Having read DSquare's superb answer, and concluded from it that some subclassing would be needed, I've come up with this idea for anyone who needs to make sure all published chunks have been processed in the EDT before moving on.
NB I tried to write it in Java rather than Jython (my language of choice and officially the best language in the world), but it is a bit complicated because, for example, publish is final, so you'd have to invent another method to call it, and also because you have to (yawn) parameterise everything with generics in Java.
This code should be understandable by any Java person; just to help: self.publication_counter.get() evaluates to False when the result is 0.
# this is how you say Worker... is a subclass of SwingWorker in Python/Jython
class WorkerAbleToWaitForPublicationToFinish( javax.swing.SwingWorker ):
# __init__ is the constructor method in Python/Jython
def __init__( self ):
# we just add an "attribute" (here, publication_counter) to the object being created (self) to create a field of the new object
self.publication_counter = java.util.concurrent.atomic.AtomicInteger()
def await_processing_of_all_chunks( self ):
while self.publication_counter.get():
time.sleep( 0.001 )
# fully functional override of the Java method
def process( self, chunks ):
for chunk in chunks:
pass
# DO SOMETHING WITH EACH CHUNK
# decrement the counter by the number of chunks received
# NB do this AFTER dealing with the chunks
self.publication_counter.addAndGet( - len( chunks ) )
# fully functional override of the Java method
def publish( self, *chunks ):
# increment the counter by the number of chunks received
# NB do this BEFORE publishing the chunks
self.publication_counter.addAndGet( len( chunks ))
self.super__publish( chunks )
So in your calling code, you put something like:
engine.update_xliff_task.get()
engine.update_xliff_task.await_processing_of_all_chunks()
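For anyone who prefers plain Java, a rough equivalent of the same idea might look like the sketch below. Since publish(V...) is final, the counting has to go through a wrapper method (publishCounted is a made-up name), and doInBackground() has to call that wrapper instead of publish() directly:

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import javax.swing.SwingWorker;

abstract class CountingSwingWorker<T, V> extends SwingWorker<T, V> {

    private final AtomicInteger publicationCounter = new AtomicInteger();

    // Wrapper around the final publish(): count first, then publish.
    protected final void publishCounted(V... chunks) {
        publicationCounter.addAndGet(chunks.length);
        publish(chunks);
    }

    // Call this from a non-EDT thread (e.g. right after get()) to wait
    // until every published chunk has been handled by process().
    public void awaitProcessingOfAllChunks() throws InterruptedException {
        while (publicationCounter.get() > 0) {
            Thread.sleep(1);
        }
    }

    @Override
    protected void process(List<V> chunks) {
        handleChunks(chunks);
        // Decrement AFTER the chunks have actually been dealt with.
        publicationCounter.addAndGet(-chunks.size());
    }

    // Subclasses do their real per-chunk work here.
    protected abstract void handleChunks(List<V> chunks);
}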
PS the use of a while clause like this (i.e. a polling technique) is hardly elegant. I looked at the available java.util.concurrent classes such as CountDownLatch and Phaser (both with thread-blocking methods), but I don't think either would suit this purpose...
later
I was interested enough in this to tweak a proper concurrency class (written in Java, found on the Apache site) called CounterLatch. Their version stops the thread at await() if a given value of an AtomicLong counter is reached. My version here lets you either do that, or the opposite: say "wait until the counter reaches a certain value before lifting the latch":
NB on the use of AtomicLong for signal and AtomicBoolean for released: the original Java uses the volatile keyword there. I think using the atomic classes achieves the same purpose.
class CounterLatch():
def __init__( self, initial = 0, wait_value = 0, lift_on_reached = True ):
self.count = java.util.concurrent.atomic.AtomicLong( initial )
self.signal = java.util.concurrent.atomic.AtomicLong( wait_value )
class Sync( java.util.concurrent.locks.AbstractQueuedSynchronizer ):
def tryAcquireShared( sync_self, arg ):
if lift_on_reached:
return -1 if (( not self.released.get() ) and self.count.get() != self.signal.get() ) else 1
else:
return -1 if (( not self.released.get() ) and self.count.get() == self.signal.get() ) else 1
def tryReleaseShared( self, args ):
return True
self.sync = Sync()
self.released = java.util.concurrent.atomic.AtomicBoolean() # initialised at False
def await( self, *args ):
if args:
assert len( args ) == 2
assert type( args[ 0 ] ) is int
timeout = args[ 0 ]
assert type( args[ 1 ] ) is java.util.concurrent.TimeUnit
unit = args[ 1 ]
return self.sync.tryAcquireSharedNanos(1, unit.toNanos(timeout))
else:
self.sync.acquireSharedInterruptibly( 1 )
def count_relative( self, n ):
previous = self.count.addAndGet( n )
if previous == self.signal.get():
self.sync.releaseShared( 0 )
return previous
So my code now looks like this:
In the SwingWorker constructor:
self.publication_counter_latch = CounterLatch()
In SW.publish:
self.publication_counter_latch.count_relative( len( chunks ) )
self.super__publish( chunks )
In the thread waiting for chunk processing to stop:
worker.publication_counter_latch.await()
Related
I have a method that returns a result (an integer). The method runs a Thread to load 40,000 objects, and it returns an integer counting the number of objects loaded. My question is: how do I return the int from the Thread? At the moment the result is returned immediately and is equal to 0.
public int ajouter(params) throws DaoException, ConnectException {
final ProgressDialog dialog = ProgressDialog.show(mActivity, "Title",
"Message", true);
final Handler handler = new Handler() {
public void handleMessage(Message msg) {
dialog.dismiss();
}
};
Thread t = new Thread() {
public void run() {
try {
Str_Requete = "SELECT * FROM Mytable";
ResultSet result = ExecuteQuery(Str_Base, Str_Requete);
Index = addObjects(result);
handler.sendEmptyMessage(0);
} catch (SQLException e) {
e.printStackTrace();
}
}
};
t.start();
return Index;
}
When I call my method in my main Activity:
int test = myObjs.ajouter(params);
test is equal to 0; the value is returned immediately...
My constraint is that I can't use AsyncTask.
The whole point of using a Thread is not to block the calling code while performing the task of the thread. Thread.start() returns immediately, but in the meantime a new thread is started in parallel to the current thread which will execute the code in the run() method.
So by definition there is no such thing as returning a value from a thread execution. You have to somehow send a signal back from the thread that performed the task to the thread in which you need the result. There are many ways of doing this, there's the standard Java wait/notify methods, there is the Java concurrency library etc.
Since this is Android, and I assume your calling code is running on the main thread, it's probably wise to use the functionality of Handler. And in fact you are already doing that: you have a Handler that closes the dialog when the thread is done with its work, but for some reason you seem to expect the result of that work to be ready before it has even started. It would be reasonable to extend your existing Handler with some code that does something with the calculated value, and to remove the code that returns the value of a variable before (or at the same time as) it is being calculated by another thread.
I also strongly encourage you to study a concurrency tutorial such as Oracle's concurrency lesson or the Android thread guidelines to really understand what's going on in the background. Writing concurrent code without mastering the concepts is bound to fail sooner or later, because it's in the nature of concurrency that multiple things happen at the same time, finish in random order, and so on. It may not fail often, but you will go crazy wondering why something that works 90% of the time suddenly fails. That's why topics such as atomicity and thread synchronization are critical to comprehend.
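As an illustration of that suggestion, here is a minimal sketch that delivers the count through the Message itself (onObjectsLoaded is a hypothetical method, not part of the original code):

// Sketch only: the worker thread puts the computed count into the Message
// instead of ajouter() trying to return it.
final Handler handler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        dialog.dismiss();
        int loadedCount = msg.arg1;   // value computed on the worker thread
        onObjectsLoaded(loadedCount); // hypothetical callback that uses the result
    }
};

Thread t = new Thread() {
    @Override
    public void run() {
        try {
            ResultSet result = ExecuteQuery(Str_Base, "SELECT * FROM Mytable");
            int count = addObjects(result);
            // Message.obtain(handler, what, arg1, arg2) carries the int back
            handler.sendMessage(Message.obtain(handler, 0, count, 0));
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
};
t.start();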
Edit: Simple Android example of starting a worker thread, performing some work, posting back event to main thread.
public class MyActivity extends Activity {
private Handler mHandler = new Handler();
...
private void doSomeWorkInBackground() {
new Thread() {
public void run() {
// do slow work, this may be blocking
mHandler.post(new Runnable() {
public void run() {
// this code will run on main thread,
// updating your UI or whatever you need.
// Hence, code here must NOT be blocking.
}
});
}
}.start();
// This code will be executed immediately on the main thread, and main thread will not be blocked
}
You could in this example also use Activity.runOnUiThread(Runnable).
Please consider however that AsyncTask basically wraps this kind of functionality in a very convenient way, so if it suits your purposes you should consider using AsyncTask.
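If that constraint is ever lifted, the AsyncTask version of the same work is roughly the following sketch (it reuses the names from the question's code):

// doInBackground runs off the UI thread; onPostExecute receives its
// return value back on the UI thread.
new AsyncTask<Void, Void, Integer>() {
    @Override
    protected Integer doInBackground(Void... params) {
        try {
            ResultSet result = ExecuteQuery(Str_Base, "SELECT * FROM Mytable");
            return addObjects(result);
        } catch (SQLException e) {
            e.printStackTrace();
            return 0;
        }
    }

    @Override
    protected void onPostExecute(Integer count) {
        dialog.dismiss();
        // use 'count' here, e.g. update the UI
    }
}.execute();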
If you don't want to use AsyncTask or ForkJoin, you could implement an interface, e.g. a callback, in your main class.
In your example you don't wait until the Thread is done... thread.join().
One solution:
Your Thread is an extra class with a constructor that holds a reference to the calling class.
public interface Callback
{
    void done(int result);
}
public class Main implements Callback
{
    ...
    CustomThread t = new CustomThread(this);
    ...
    @Override
    public void done(int result)
    {
        // handle the result computed by the thread
    }
}
public class CustomThread extends Thread
{
    private final Callback cb;
    public CustomThread(Callback cb)
    {
        this.cb = cb;
    }
    .
    .
    .
    // when done, hand the computed int back to the caller
    cb.done(result);
}
I wonder how the Java SwingWorker and its thread pool work when some task is performed repeatedly. Here is an SSCCE of the problem, ready to copy + paste:
package com.cgi.havrlantr.swingworkerexample;
import java.awt.*;
import javax.swing.*;
public class Main extends JFrame {
public static void main(String[] args) {
java.awt.EventQueue.invokeLater(new Runnable() {
public void run() {
new Main().setVisible(true);
}
});
}
public Main() {
setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE);
setSize(new Dimension(100,100));
JButton btn = new JButton("Say Hello");
add(btn);
btn.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
btnPressed(evt);
}
});
}
private void btnPressed(AWTEvent e) {
SayHelloSwingWorker worker = new SayHelloSwingWorker();
worker.execute();
}
private class SayHelloSwingWorker extends SwingWorker<Integer, Integer> {
protected Integer doInBackground() throws Exception {
System.out.println("Hello from thread " + Thread.currentThread().getName());
return 0;
}
}
}
I want to ask the following. Every time I call execute() on a new instance of the worker (after the button is pressed), a new thread is created in the SwingWorker thread pool, up to 10 threads in total. After exceeding this limit, threads are reused, as I expect. Because new workers are created sequentially, after the previous one has finished, I don't understand why the first thread is not reused immediately, since the previous worker has already done its work. Say there can be only one single thread doing some work at a time; I would expect the thread pool to create only one thread, which is enough to serve all the tasks. Is this the usual behaviour? Or might there be something wrong that prevents reuse of the first thread and forces the thread pool to create another one?
If it is normal, I think it is a waste of time to create the unwanted threads and a waste of memory to keep them ready. Can I avoid it somehow? Can I force SwingWorker to have only one thread in its pool? I think not, because the number of threads is constant and depends on the Java implementation, as far as I know.
Can I make SwingWorker finish the thread after the task has completed? (Calling this.cancel() in the done() method did not work.)
Here is the code of my real-world worker, in case there is some dependency that may cause the problem.
public class MapWorker extends SwingWorker<Long, Object> {
@Override
protected Long doInBackground() throws Exception{
try {
long requestId = handleAction();
return requestId;
}catch(Exception e){
logger.error( "Exception when running map worker thread.", e);
SwingUtilities.invokeLater(new Runnable(){
@Override
public void run(){
requestFinished();
}
});
MapUtils.showFatalError(e);
}
return -1l;
}
@Override
protected void done(){
try{
requestId = get();
logger.info("Returned from map request call, ID: {}", Long.toString(requestId));
}catch(InterruptedException e){
logger.error( "Done interrupted - wierd!", e);
}catch(ExecutionException e){
logger.error( "Exception in execution of worker thread.", e);
}
}
In the method handleAction(), a new thread with some blocking call is created and a thread ID is returned, which should not be anything weird. Don't ask me why, it is not my code. :)
Hmmm... the default ThreadPoolExecutor for SwingWorkers is configured not only with a max pool size of 10 but also with a core size of 10, meaning it prefers to keep 10 threads alive. It's hard for me to tell why this is; maybe it's optimal under certain conditions.
You can tell SwingWorkers to use a custom ExecutorService using this (weird) call:
AppContext.getAppContext().put( SwingWorker.class, Executors.newSingleThreadExecutor() );
Note, the ExecutorService returned by Executors.newSingleThreadExecutor() will create a non-daemon thread, something you may want to change.
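For example, something along these lines (it uses the same non-public AppContext API as the call above, so the usual caveats apply; the imports come from java.util.concurrent and sun.awt):

// Single worker thread for all SwingWorkers, created as a daemon so it
// does not keep the JVM alive on its own.
ExecutorService singleWorker = Executors.newSingleThreadExecutor(new ThreadFactory() {
    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, "SwingWorker-single");
        t.setDaemon(true);
        return t;
    }
});
AppContext.getAppContext().put(SwingWorker.class, singleWorker);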
Is it feasible to implement some utility method that suspends test (current thread) execution until the application becomes idle?
Idle means:
1. no GUI events have been added to the event queue for some period of time
2. no worker threads have been running any tasks for the same period of time.
Could you please provide an implementation/code snippets to track the above conditions of idleness?
You can replace the EventQueue with your own implementation, as shown here. The variation below adds an idle() method that relies on an arbitrary THRESHOLD of 1000 ms.
import java.awt.AWTEvent;
import java.awt.EventQueue;
import java.awt.Toolkit;
import java.lang.reflect.InvocationTargetException;
import javax.swing.JFrame;
import javax.swing.JTree;
/**
* @see https://stackoverflow.com/questions/7976967
* @see https://stackoverflow.com/questions/3158254
*/
public class EventQueueTest {
public static void main(String[] args) throws
InterruptedException, InvocationTargetException {
EventQueue eventQueue = Toolkit.getDefaultToolkit().getSystemEventQueue();
final MyEventQueue q = new MyEventQueue();
eventQueue.push(q);
EventQueue.invokeAndWait(new Runnable() {
@Override
public void run() {
JFrame f = new JFrame("Test");
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
f.add(new JTree());
f.pack();
f.setVisible(true);
}
});
// Test idle() on initial thread
for (int i = 0; i < 10; i++) {
Thread.sleep(2 * MyEventQueue.THRESHOLD);
System.out.println("Idle: " + q.idle());
}
}
private static class MyEventQueue extends EventQueue {
private static final int THRESHOLD = 1 * 1000;
private long last;
@Override
public void postEvent(AWTEvent e) {
super.postEvent(e);
last = System.currentTimeMillis();
}
public boolean idle() {
return System.currentTimeMillis() - last > THRESHOLD;
}
}
}
You can use Toolkit.getDefaultToolkit().addAWTEventListener(listener, eventMask) to subscribe to AWT event queue, so you can detect whether events are not added for some period of time.
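A minimal sketch of that approach (the mask below only covers mouse and key events, so widen it as needed; the 1000 ms threshold is arbitrary; AWTEventListener lives in java.awt.event and AtomicLong in java.util.concurrent.atomic):

// Track the time of the last user-generated AWT event.
final AtomicLong lastEvent = new AtomicLong(System.currentTimeMillis());
Toolkit.getDefaultToolkit().addAWTEventListener(new AWTEventListener() {
    @Override
    public void eventDispatched(AWTEvent event) {
        lastEvent.set(System.currentTimeMillis());
    }
}, AWTEvent.MOUSE_EVENT_MASK | AWTEvent.MOUSE_MOTION_EVENT_MASK | AWTEvent.KEY_EVENT_MASK);

// Elsewhere, "GUI idle" can then be approximated as:
boolean guiIdle = System.currentTimeMillis() - lastEvent.get() > 1000;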
I think that you need your custom code to monitor working threads, i.e. something in the beginning and end of run() method.
The problem is how to "suspend the test execution". If your test is running in a thread, you could theoretically invoke the thread's suspend() method, but it is deprecated and should not be used. For a clean implementation you should write custom code that checks a status during the execution of the thread and calls wait() once it detects that the test must be suspended. When the code that monitors the AWT event queue and the worker threads decides that the test may be resumed, it should call the corresponding notify().
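A bare-bones sketch of such a gate (all names here are illustrative): the monitoring code calls markBusy()/markIdle(), and the test thread blocks in awaitIdle():

// Illustrative idle gate: the monitor flips the flag, the test waits on it.
class IdleGate {
    private boolean idle = true;

    public synchronized void markBusy() {
        idle = false;
    }

    public synchronized void markIdle() {
        idle = true;
        notifyAll(); // wake up any test thread waiting in awaitIdle()
    }

    public synchronized void awaitIdle() throws InterruptedException {
        while (!idle) {
            wait();
        }
    }
}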
Probably a better solution from a design point of view is the actor model. There are several Java frameworks that provide this functionality.
I need a way to bind UI indicators to rapidly-changing values.
I have a class NumberCruncher which does a bunch of heavy processing in a critical non-UI thread, thousands of iterations of a loop per second, and some number of those result in changes to a set of parameters I care about. (think of them as a key-value store)
I want to display those at a slower rate in the UI thread; 10-20Hz would be fine. How can I add MVC-style notification so that my NumberCruncher code doesn't need to know about the UI code/binding?
The idiomatic way to do this is to use the SwingWorker class, using calls to publish(V...) to notify the Event Dispatch Thread periodically and cause it to update the UI.
In the example below, taken from the Javadoc, the number crunching takes place on a worker thread in the doInBackground() method, which calls publish on each iteration. This call causes the process(V...) method to be called asynchronously on the Event Dispatch Thread, allowing it to update the UI. Note that this ensures the user interface is always updated from the Event Dispatch Thread. Also note that you may choose to call publish only every N iterations to reduce the frequency at which the user interface is updated.
Example From Javadoc
class PrimeNumbersTask extends
        SwingWorker<List<Integer>, Integer> {
    PrimeNumbersTask(JTextArea textArea, int numbersToFind) {
        //initialize
    }
    @Override
    public List<Integer> doInBackground() {
        while (! enough && ! isCancelled()) {
            number = nextPrimeNumber();
            publish(number);
            setProgress(100 * numbers.size() / numbersToFind);
        }
        return numbers;
    }
    @Override
    protected void process(List<Integer> chunks) {
        for (int number : chunks) {
            textArea.append(number + "\n");
        }
    }
}
SwingWorker, suggested by @Adamski, is preferable; but an instance of javax.swing.Timer is a convenient alternative for this, as "the action event handlers for Timers execute [on] the event-dispatching thread."
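For example, a sketch of that alternative, assuming the cruncher exposes its latest value through a thread-safe getter (getLatestValue(), numberCruncher and label are all made-up names):

// Poll the cruncher roughly 15 times per second, on the EDT.
javax.swing.Timer uiTimer = new javax.swing.Timer(66, new java.awt.event.ActionListener() {
    @Override
    public void actionPerformed(java.awt.event.ActionEvent e) {
        label.setText(String.valueOf(numberCruncher.getLatestValue()));
    }
});
uiTimer.start();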
It seems like you might want to take the "listener" approach: allow your number cruncher to register listeners, then every 100-200 loops (configurable), or on some change condition, notify the listeners that there is an update they should be aware of, as sketched below.
The listener can be another class that has a thread wait()ing on it; when it gets notified, it just updates its internal variable and then notifies the waiting thread. The fast loop class then has a quick way to publish an updated value without worrying about access to its fast-changing internal state.
The other thread that wait()s can also wait() on a timer thread set to 10-20 Hz (configurable), waiting on the timer before wait()ing on the next update from your synchronized class.
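One simplified variant of this idea, sketched with a volatile snapshot instead of explicit wait()/notify() (all names here are invented):

import java.util.Collections;
import java.util.Map;

// Invented listener interface: the cruncher only knows about this.
interface CruncherListener {
    void valuesChanged(Map<String, Double> snapshot);
}

// UI side: stash the latest snapshot; a ~15 Hz Swing timer can then render it.
class UiBridge implements CruncherListener {
    private volatile Map<String, Double> latest = Collections.emptyMap();

    @Override
    public void valuesChanged(Map<String, Double> snapshot) {
        latest = snapshot; // called on the cruncher thread; cheap and non-blocking
    }

    Map<String, Double> latest() {
        return latest;
    }
}

// Inside the cruncher loop, every N iterations, hand out a defensive copy
// so listeners never see half-updated state:
// Map<String, Double> snapshot = new HashMap<String, Double>(parameters);
// for (CruncherListener l : listeners) {
//     l.valuesChanged(snapshot);
// }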
Have a single object which your NumberCruncher modifies/keeps changing as it performs its numerous operations, and let that run in a separate thread. Have a Swing UI that uses the same object that NumberCruncher modifies. The UI thread only reads the values at a specified interval, so thread deadlocks should not be a problem.
NumberCruncher
public class NumberCruncher implements Runnable{
CommonObject commonObj;
public NumberCruncher(CommonObject commonObj){
this.commonObj = commonObj;
}
public void run() {
for(;;){
commonObj.freqChangeVal = Math.random();
}
}
}
CommonObject:
public class CommonObject {
public double freqChangeVal;
}
UI:
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
public class UI extends JFrame implements Runnable{
private CommonObject commonObj = new CommonObject();
JLabel label ;
public static void main(String args[]){
UI ui = new UI();
ui.begin();
Thread t2 = new Thread(ui);
t2.start();
}
private void begin(){
JPanel panel = new JPanel();
label = new JLabel("Test");
panel.add(label);
Thread thread = new Thread(new NumberCruncher(commonObj));
thread.start();
this.add(panel);
this.setSize(200,200);
this.setVisible(true);
}
public void run() {
for(;;){
try {
Thread.sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
label.setText(commonObj.freqChangeVal+"");
this.repaint();
}
}
}
In the Observer Design Pattern, the subject notifies all observers by calling the update() operation of each observer. One way of doing this is
void notify() {
for (observer: observers) {
observer.update(this);
}
}
But the problem here is that each observer is updated in sequence, and the update operation for an observer may not be called until all the observers before it have been updated. If there is an observer with an infinite loop in its update, then none of the observers after it will ever be notified.
Question:
Is there a way to get around this problem?
If so, what would be a good example?
The problem is the infinite loop, not the one-after-the-other notifications.
If you wanted things to update concurrently, you'd need to fire things off on different threads - in which case, each listener would need to synchronize with the others in order to access the object that fired the event.
Complaining about one infinite loop stopping other updates from happening is like complaining that taking a lock and then going into an infinite loop stops others from accessing the locked object - the problem is the infinite loop, not the lock manager.
Classic design patterns do not involve parallelism and threading. You'd have to spawn N threads for the N observers. Be careful, though, since their interaction with this will have to be done in a thread-safe manner.
You could make use of the java.util.concurrent.Executors.newFixedThreadPool(int nThreads) method, then call the invokeAll method (you could use the variant with a timeout, too, to avoid the infinite loop).
You would change your loop to add a class that is Callable, which takes the "observer" and the "this", and then calls the update method inside its call() method.
Take a look at this package for more info.
This is a quick-and-dirty implementation of what I was talking about:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class Main
{
private Main()
{
}
public static void main(final String[] argv)
{
final Watched watched;
final List<Watcher> watchers;
watched = new Watched();
watchers = makeWatchers(watched, 10);
watched.notifyWatchers(9);
}
private static List<Watcher> makeWatchers(final Watched watched,
final int count)
{
final List<Watcher> watchers;
watchers = new ArrayList<Watcher>(count);
for(int i = 0; i < count; i++)
{
final Watcher watcher;
watcher = new Watcher(i + 1);
watched.addWatcher(watcher);
watchers.add(watcher);
}
return (watchers);
}
}
class Watched
{
private final List<Watcher> watchers;
{
watchers = new ArrayList<Watcher>();
}
public void addWatcher(final Watcher watcher)
{
watchers.add(watcher);
}
public void notifyWatchers(final int seconds)
{
final List<Watcher> currentWatchers;
final List<WatcherCallable> callables;
final ExecutorService service;
currentWatchers = new CopyOnWriteArrayList<Watcher>(watchers);
callables = new ArrayList<WatcherCallable>(currentWatchers.size());
for(final Watcher watcher : currentWatchers)
{
final WatcherCallable callable;
callable = new WatcherCallable(watcher);
callables.add(callable);
}
service = Executors.newFixedThreadPool(callables.size());
try
{
final boolean value;
service.invokeAll(callables, seconds, TimeUnit.SECONDS);
value = service.awaitTermination(seconds, TimeUnit.SECONDS);
System.out.println("done: " + value);
}
catch (InterruptedException ex)
{
}
service.shutdown();
System.out.println("leaving");
}
private class WatcherCallable
implements Callable<Void>
{
private final Watcher watcher;
WatcherCallable(final Watcher w)
{
watcher = w;
}
public Void call()
{
watcher.update(Watched.this);
return (null);
}
}
}
class Watcher
{
private final int value;
Watcher(final int val)
{
value = val;
}
public void update(final Watched watched)
{
try
{
Thread.sleep(value * 1000);
}
catch (InterruptedException ex)
{
System.out.println(value + "interupted");
}
System.out.println(value + " done");
}
}
I'd be more concerned about the observer throwing an exception than about it looping indefinitely. Your current implementation would not notify the remaining observers in such an event.
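Guarding against that is cheap; for example, a sketch (named notifyObservers() here to avoid clashing with the final Object.notify()):

void notifyObservers() {
    for (Observer observer : observers) {
        try {
            observer.update(this);
        } catch (RuntimeException e) {
            // One misbehaving observer should not silence the rest:
            // log it and keep notifying.
            e.printStackTrace();
        }
    }
}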
1. Is there a way to get around this problem?
Yes, make sure the observers work correctly and return in a timely fashion.
2. Can someone please explain it with an example.
Sure:
class ObserverImpl implements Observer {
public void update( Object state ) {
// remove the infinite loop.
//while( true ) {
// doSomething();
//}
// and use some kind of control:
int iterationControl = 100;
int currentIteration = 0;
while( curentIteration++ < iterationControl ) {
doSomething();
}
}
private void doSomething(){}
}
This prevents a given loop from going infinite (if it makes sense for the job; here it runs at most 100 times).
Another mechanism is to start the new task in a second thread, but if it goes into an infinite loop it will eventually consume all the system memory:
class ObserverImpl implements Observer {
public void update( Object state ) {
new Thread( new Runnable(){
public void run() {
while( true ) {
doSomething();
}
}
}).start();
}
private void doSomething(){}
}
That will make the observer instance return immediately, but it will only be an illusion; what you actually have to do is avoid the infinite loop.
Finally, if your observers work fine but you just want to notify them all sooner, you can take a look at this related question: Invoke a code after all mouse event listeners are executed..
All observers get notified, that's all the guarantee you get.
If you want to implement some fancy ordering, you can do that:
Connect just a single Observer;
have this primary Observer notify his friends in an order you define in code or by some other means.
That takes you away from the classic Observer pattern in that your listeners are hardwired, but if it's what you need... do it!
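A sketch of that wiring (the Observer interface here is the question's own, with update(Object), not java.util.Observer):

import java.util.ArrayList;
import java.util.List;

// The subject registers only this one observer; it fans out in a fixed order.
class OrderedDispatcher implements Observer {
    private final List<Observer> ordered = new ArrayList<Observer>();

    void addInOrder(Observer o) {
        ordered.add(o);
    }

    @Override
    public void update(Object subject) {
        for (Observer o : ordered) {
            o.update(subject);
        }
    }
}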
If you have an observer with an "infinite loop", it's no longer really the observer pattern.
You could fire a different thread for each observer, but the observers MUST be prohibited from changing the state of the observed object.
The simplest (and stupidest) method would simply be to take your example and make it threaded.
// renamed from notify() so it doesn't clash with the final Object.notify()
void notifyObservers() {
    final Object subject = this; // capture the subject ('this' inside the Thread would be the Thread)
    for (final Observer observer : observers) {
        new Thread() {
            public void run() {
                observer.update(subject);
            }
        }.start();
    }
}
(this was coded by hand and is untested--and it's a bad idea anyway)
The problem with this is that it will bog your machine down, since it has to allocate a bunch of new threads at once.
So to fix the problem of all the threads starting at once, use a ThreadPoolExecutor, because it will A) recycle threads, and B) can limit the max number of threads running.
This is not deterministic in your case of "loop forever", since each forever-loop will permanently eat one of the threads from your pool.
Your best bet is to not allow them to loop forever, or if they must, have them create their own thread.
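A sketch of the pooled variant (the pool size of 4 is arbitrary; ExecutorService and Executors come from java.util.concurrent):

// Shared, bounded pool so a slow observer cannot spawn unlimited threads.
private final ExecutorService notifierPool = Executors.newFixedThreadPool(4);

void notifyObservers() {
    final Object subject = this;
    for (final Observer observer : observers) {
        notifierPool.execute(new Runnable() {
            @Override
            public void run() {
                observer.update(subject);
            }
        });
    }
}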
If you have to support classes that can't change, but you can identify which will run quickly and which will run "forever" (in computer terms, I think that equates to more than a second or two), then you COULD use a loop like this:
void notifyObservers() {
    final Object subject = this;
    for (final Observer observer : observers) {
        if (willUpdateQuickly(observer)) {
            observer.update(subject);
        } else {
            new Thread() {
                public void run() {
                    observer.update(subject);
                }
            }.start();
        }
    }
}
Hey, if it actually "Loops forever", will it consume a thread for every notification? It really sounds like you may have to spend some more time on your design.