Java - Multiple queue producer consumer

I've got the following code:
while (!currentBoard.boardIsValid()) {
    for (QueueLocation location : QueueLocation.values()) {
        while (!inbox.isEmpty(location)) {
            Cell c = inbox.dequeue(location);
            notifyNeighbours(c.x, c.y, c.getCurrentState(), previousBoard);
        }
    }
}
I've got a consumer with a few queues (all of their methods are synchronised). One queue for each producer. The consumer loops over all the queues and checks if they've got a task for him to consume.
If the queue he's checking has a task in it, he consumes it. Otherwise, he moves on to check the next queue, until he has iterated over all the queues.
As of now, if he iterates over all the queues and they're all empty, he keeps on looping rather than waiting for one of them to contain something (as seen by the outer while).
How can I make the consumer wait until one of the queues has something in it?
I'm having an issue with the following scenario: let's say there are only 2 queues. The consumer checked the first one and it was empty. Just as he's checking the second one (which is also empty), the producer puts something into the first queue. As far as the consumer is concerned, both queues are empty, so he decides to wait (even though one of them isn't empty anymore and he should continue looping).
Edit:
One last thing. This is an exercise for me. I'm trying to implement the synchronisation myself. So if any of the Java libraries have a solution that implements this, I'm not interested in it. I'm trying to understand how I can implement this.

@Abe was close. I would use wait and notify - use the Object class built-ins as they are the lightest weight.
Object sync = new Object(); // Can use an existing object if there's an appropriate one

// On submit to queue
synchronized (sync) {
    queue.add(...); // Must be inside to avoid a race condition
    sync.notifyAll();
}

// On check for work in queue
synchronized (sync) {
    item = null;
    while (item == null) {
        // Need to check all of the queues - if there will be a large number, this will be slow,
        // and slow critical sections (synchronized blocks) are very bad for performance
        item = getNextQueueItem();
        if (item == null) {
            sync.wait();
        }
    }
}
Note that sync.wait releases the lock on sync until the notify - and the lock on sync is required to successfully call the wait method (it's a reminder to the programmer that some type of critical section is really needed for this to work reliably).
By the way, I would recommend a queue dedicated to the consumer (or group of consumers) rather than a queue dedicated to the producer, if feasible. It will simplify the solution.
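For illustration, here is a minimal sketch of that single consumer-dedicated queue, using only the Object wait/notify mechanism described above. The SingleQueueConsumer, submit and takeNext names are mine, not from the question, and the java.util imports for Queue/ArrayDeque are omitted:
class SingleQueueConsumer {
    private final Object sync = new Object();
    private final Queue<Cell> queue = new ArrayDeque<>();

    // Any producer calls this
    void submit(Cell c) {
        synchronized (sync) {
            queue.add(c);      // add while holding the lock to avoid the lost-wakeup race
            sync.notifyAll();
        }
    }

    // The consumer calls this
    Cell takeNext() throws InterruptedException {
        synchronized (sync) {
            while (queue.isEmpty()) {
                sync.wait();   // releases the lock while waiting
            }
            return queue.poll();
        }
    }
}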

If you want to block across multiple queues, then one option is to use Java's Lock and Condition objects and then use the signal method.
So whenever the producer has data, it should invoke signalAll.
Lock fileLock = new ReentrantLock();
Condition condition = fileLock.newCondition();
...
// producer has to signal
condition.signalAll();
...
// consumer has to await.
condition.await();
This way only when the signal is provided will the consumer go and check the queues.
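Note that both await() and signalAll() must be called while the associated Lock is held, and the wait belongs in a loop that re-checks the queues. A slightly fuller sketch of the two sides; the hasWork() check is a placeholder for iterating over the queues, not something from the answer:
Lock lock = new ReentrantLock();
Condition workAvailable = lock.newCondition();

// producer side
lock.lock();
try {
    // enqueue the item here
    workAvailable.signalAll();
} finally {
    lock.unlock();
}

// consumer side (InterruptedException handling omitted)
lock.lock();
try {
    while (!hasWork()) {      // re-check all queues after every wakeup
        workAvailable.await();
    }
    // dequeue and process here
} finally {
    lock.unlock();
}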

I solved a similar situation along the lines of what @Abe suggests, but settled on using a Semaphore in combination with an AtomicBoolean and called it a BinarySemaphore. It does require the producers to be modified so that they signal when there is something to do.
Below is the code for the BinarySemaphore and a general idea of what the consumer work-loop should look like:
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class MultipleProdOneConsumer {

    BinarySemaphore workAvailable = new BinarySemaphore();

    class Consumer {

        volatile boolean stop;

        void loop() {
            while (!stop) {
                doWork();
                if (!workAvailable.tryAcquire()) {
                    // waiting for work
                    try {
                        workAvailable.acquire();
                    } catch (InterruptedException e) {
                        if (!stop) {
                            // log error
                        }
                    }
                }
            }
        }

        void doWork() {}

        void stopWork() {
            stop = true;
            workAvailable.release();
        }
    }

    class Producer {

        /* Must be called after work is added to the queue/made available. */
        void signalSomethingToDo() {
            workAvailable.release();
        }
    }

    class BinarySemaphore {

        private final AtomicBoolean havePermit = new AtomicBoolean();
        private final Semaphore sync;

        public BinarySemaphore() {
            this(false);
        }

        public BinarySemaphore(boolean fair) {
            sync = new Semaphore(0, fair);
        }

        public boolean release() {
            boolean released = havePermit.compareAndSet(false, true);
            if (released) {
                sync.release();
            }
            return released;
        }

        public boolean tryAcquire() {
            boolean acquired = sync.tryAcquire();
            if (acquired) {
                havePermit.set(false);
            }
            return acquired;
        }

        public boolean tryAcquire(long timeout, TimeUnit tunit) throws InterruptedException {
            boolean acquired = sync.tryAcquire(timeout, tunit);
            if (acquired) {
                havePermit.set(false);
            }
            return acquired;
        }

        public void acquire() throws InterruptedException {
            sync.acquire();
            havePermit.set(false);
        }

        public void acquireUninterruptibly() {
            sync.acquireUninterruptibly();
            havePermit.set(false);
        }
    }
}
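One possible way to wire this into the original multi-queue loop from the question; the inbox, QueueLocation and board names come from the question, while the enqueue signature and the overall arrangement are assumptions (InterruptedException handling omitted in this fragment):
// Producer side: enqueue first, then signal
inbox.enqueue(location, cell);
workAvailable.release();

// Consumer side: drain everything currently visible, then wait for the next signal
while (!currentBoard.boardIsValid()) {
    for (QueueLocation location : QueueLocation.values()) {
        while (!inbox.isEmpty(location)) {
            Cell c = inbox.dequeue(location);
            notifyNeighbours(c.x, c.y, c.getCurrentState(), previousBoard);
        }
    }
    workAvailable.acquire(); // returns immediately if a producer signalled during the drain
}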

Related

Thread-safe FIFO queue with unique items and thread pool

I have to manage scheduled file replications in a system. The file replications are scheduled by users and I need to restrict the amount of system resources used during replication. The amount of time that each replication may take is not defined (i.e. a replication may be scheduled to run every 15 minutes and the previous run may still be running when the next run is due) and a replication should not be queued if it's already queued or running.
I have a scheduler that periodically checks for due file replications and, for each one, (1) adds it to a blocking queue if it is neither queued nor running, or (2) drops it otherwise.
private final Object scheduledReplicationsLock = new Object();
private final BlockingQueue<Replication> replicationQueue = new LinkedBlockingQueue<>();
private final Set<Long> queuedReplicationIds = new HashSet<>();
private final Set<Long> runningReplicationIds = new HashSet<>();

public boolean add(Replication replication) {
    synchronized (scheduledReplicationsLock) {
        // If the replication job is either still executing or is already queued, do not add it.
        if (queuedReplicationIds.contains(replication.id) || runningReplicationIds.contains(replication.id)) {
            return false;
        }
        replicationQueue.add(replication);
        queuedReplicationIds.add(replication.id);
        return true;
    }
}
I also have a pool of threads that waits until there is a replication in the queue and executes it. Below is the main method of each thread in the thread pool:
public void run() {
    while (true) {
        Replication replication = null;
        synchronized (scheduledReplicationsLock) {
            try {
                // This will block until a replication job is ready to be run or the current thread is interrupted.
                replication = replicationQueue.take();
            } catch (InterruptedException e) {
                return;
            }
            // Move the ID value out of the queued set and into the active set
            Long replicationId = replication.getId();
            queuedReplicationIds.remove(replicationId);
            runningReplicationIds.add(replicationId);
        }
        executeReplication(replication);
    }
}
This code gets into a deadlock because the first thread in the thread pool acquires scheduledReplicationsLock and prevents the scheduler from adding replications to the queue. Moving replicationQueue.take() out of the synchronized block would eliminate the deadlock, but then it's possible that an element is removed from the queue while the hash sets are not atomically updated with it, which could cause a replication to be incorrectly dropped.
Should I use BlockingQueue.poll() and release the lock + sleep if the queue is empty instead of using BlockingQueue.take() ?
Fixes to the current solution or other solutions that meet the requirements are welcome.
wait / notify
Keeping your same control flow, instead of blocking on the BlockingQueue instance while holding the mutex lock, you can wait for notifications on the queue's monitor, forcing the worker thread to release the lock and return to the waiting pool.
Here down a reduced sample of your producer:
private final Queue<Replication> replicationQueue = new LinkedList<>();
private final Set<Long> runningReplicationIds = new HashSet<>();

public boolean add(Replication replication) {
    synchronized (replicationQueue) {
        // If the replication job is either still executing or is already queued, do not add it.
        if (replicationQueue.contains(replication) || runningReplicationIds.contains(replication.id)) {
            return false;
        } else {
            replicationQueue.add(replication);
            replicationQueue.notifyAll();
            return true;
        }
    }
}
The worker Runnable would then be updated as follows:
public void run() {
    synchronized (replicationQueue) {
        while (true) {
            while (replicationQueue.isEmpty()) {
                try {
                    // Releases the queue's monitor and waits for the producer's notifyAll
                    replicationQueue.wait();
                } catch (InterruptedException e) {
                    return;
                }
            }
            Replication replication = replicationQueue.poll();
            runningReplicationIds.add(replication.getId());
            // Note: the monitor is still held here, so producers block while a replication executes
            executeReplication(replication);
        }
    }
}
BlockingQueue
Generally you are better off using the BlockingQueue to coordinate your producer and replicating worker pool.
The BlockingQueue is, as the name implies, blocking by nature and will cause the calling thread to block only if items cannot be pulled / pushed from / to the queue.
Meanwhile, note that you will have to update your running / enqueued state management, as you will only be synchronizing on the BlockingQueue items and dropping the other constraints. Whether this is acceptable will depend on the context (one possible way to keep the constraint is sketched after the worker loop below).
This way, you would drop all the other mutex(es) and rely on the BlockingQueue as your synchronization state:
private final BlockingQueue<Replication> replicationQueue = new LinkedBlockingQueue<>();

public boolean add(Replication replication) throws InterruptedException {
    // not sure if this is the proper invariant to check, as at some point the replication would be
    // neither queued nor running while still having been processed
    if (replicationQueue.contains(replication)) {
        return false;
    }
    // use `put` instead of `add` as this will block waiting for free space
    replicationQueue.put(replication);
    return true;
}
The workers will then take indefinitely from the BlockingQueue:
public void run() {
    while (true) {
        try {
            Replication replication = replicationQueue.take();
            executeReplication(replication);
        } catch (InterruptedException e) {
            return;
        }
    }
}
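If the "not queued and not running" constraint from the question still matters, one sketch (not part of the answer above) is to track in-flight IDs in a concurrent set next to the queue. This collapses "queued" and "running" into a single state, which is enough for the stated requirement:
private final BlockingQueue<Replication> replicationQueue = new LinkedBlockingQueue<>();
private final Set<Long> inFlightIds = ConcurrentHashMap.newKeySet();

public boolean add(Replication replication) {
    // add() only succeeds for an ID that is neither queued nor running
    if (!inFlightIds.add(replication.getId())) {
        return false;
    }
    replicationQueue.add(replication);
    return true;
}

public void run() {
    while (true) {
        try {
            Replication replication = replicationQueue.take();
            try {
                executeReplication(replication);
            } finally {
                inFlightIds.remove(replication.getId()); // allow it to be scheduled again
            }
        } catch (InterruptedException e) {
            return;
        }
    }
}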
You do not need to use any additional synchronization block if you are using a BlockingQueue.
Quote from docs (https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html)
BlockingQueue implementations are thread-safe. All queuing methods achieve their effects atomically using internal locks or other forms of concurrency control.
Just use something like this:
public void run() {
    try {
        while (true) {
            // The thread will wait here for the next element in the queue
            Replication replication = replicationQueue.take();
            Long replicationId = replication.getId();
            queuedReplicationIds.remove(replicationId);
            runningReplicationIds.add(replicationId);
            executeReplication(replication);
        }
    } catch (InterruptedException ex) {
        // interrupted while waiting for the next element
    }
}
Look at the javadoc: https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/LinkedBlockingQueue.html#take()
Or you can use BlockingQueue.poll() with a timeout setting.
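A minimal sketch of the poll() variant, assuming the same worker loop as above; the one-second timeout is an arbitrary choice:
public void run() {
    while (!Thread.currentThread().isInterrupted()) {
        try {
            // poll() returns null if nothing arrives within the timeout instead of blocking forever
            Replication replication = replicationQueue.poll(1, TimeUnit.SECONDS);
            if (replication != null) {
                executeReplication(replication);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}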
UPD: After the discussion, I extended LinkedBlockingQueue with two ConcurrentHashSets and added a method afterTake() to remove processed replicas. You do not need any additional synchronization outside the queue. Just put a replica in the first thread and take it in another, and call afterTake() when the replication is finished. You need to override the other methods if you want to use them.
package ru.everytag;

import io.vertx.core.impl.ConcurrentHashSet;
import java.util.concurrent.LinkedBlockingQueue;

public class TwoPhaseBlockingQueue<E> extends LinkedBlockingQueue<E> {

    private ConcurrentHashSet<E> items = new ConcurrentHashSet<>();
    private ConcurrentHashSet<E> taken = new ConcurrentHashSet<>();

    @Override
    public void put(E e) throws InterruptedException {
        if (!items.contains(e)) {
            items.add(e);
            super.put(e);
        }
    }

    @Override
    public E take() throws InterruptedException {
        E item = super.take(); // call super.take(), not take(), to avoid infinite recursion
        taken.add(item);
        items.remove(item);
        return item;
    }

    public void afterTake(E e) {
        if (taken.contains(e)) {
            taken.remove(e);
        } else if (items.contains(e)) {
            throw new IllegalArgumentException("Element still in the queue");
        }
    }
}
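Roughly how the queue above would be used end to end; the surrounding calls are only illustrative and InterruptedException handling is omitted:
TwoPhaseBlockingQueue<Replication> queue = new TwoPhaseBlockingQueue<>();

// Scheduler thread: duplicates (by equals/hashCode) are silently ignored by put()
queue.put(replication);

// Worker thread
Replication next = queue.take();
try {
    executeReplication(next);
} finally {
    queue.afterTake(next); // marks the replication as no longer running
}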

Waking up a thread without risking getting blocked

I have a worker thread running indefinitely, which goes to sleep for one minute if there's nothing to do. Sometimes, another piece of code produces some work and wants to wake the worker thread immediately.
So I did something like this (code for illustration only):
class Worker {
    public void run() {
        while (!shuttingDown()) {
            step();
        }
    }

    private synchronized void step() {
        if (hasWork()) {
            doIt();
        } else {
            try {
                wait(60_000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public synchronized void wakeMeUpInside() {
        notify();
    }
}
What I dislike is having to enter the monitor only for waking something up, which means that the notifying thread may be delayed for no good reason. As the choices of native synchronization are limited, I thought I'd switch to Condition, but it has exactly the same problem:
An implementation may (and typically does) require that the current thread hold the lock associated with this Condition when this method is called.
Here's a semaphore-based solution:
class Worker {
    // If 0 there's no work available
    private final Semaphore workAvailableSem = new Semaphore(0);

    public void run() {
        while (!shuttingDown()) {
            step();
        }
    }

    private synchronized void step() {
        try {
            // Try to obtain a permit, waiting up to 60 seconds to get one
            boolean hasWork = workAvailableSem.tryAcquire(1, TimeUnit.MINUTES);
            if (hasWork) {
                doIt();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public void wakeMeUpInside() {
        workAvailableSem.release(1);
    }
}
I'm not 100% sure this meets your needs. A few things to note:
This will add one permit each time wakeMeUpInside is called. Thus if two threads wake up the Worker it will run doIt twice without blocking. You can extend the example to avoid that (see the sketch after these notes).
This waits 60 seconds for work to do. If none is available it'll end up back in the run method which will send it immediately back to the step method which will just wait again. I did this because I'm assuming you had some reason why you wanted to run every 60 seconds even if there's no work. If that's not the case, just call acquire and you'll wait indefinitely for work.
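For instance, a sketch of how step() from the semaphore example could coalesce several pending wake-ups into a single doIt() pass; this is an extension of the answer above, not part of it:
private void step() {
    try {
        // Wait up to 60 seconds for at least one permit...
        if (workAvailableSem.tryAcquire(1, TimeUnit.MINUTES)) {
            // ...then discard any extra permits so several wake-ups trigger only one pass
            workAvailableSem.drainPermits();
            doIt();
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}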
As per the comments below, the OP wants to run only once. While you could call drainPermits in that case, a cleaner solution is just to use LockSupport like so:
class Worker {
    // We need a reference to the thread to wake it (volatile so the waker sees it)
    private volatile Thread workerThread = null;

    // Is there work available
    AtomicBoolean workAvailable = new AtomicBoolean(false);

    public void run() {
        workerThread = Thread.currentThread();
        while (!shuttingDown()) {
            step();
        }
    }

    private synchronized void step() {
        // Wait until work is available or 60 seconds have passed
        LockSupport.parkNanos(TimeUnit.MINUTES.toNanos(1));
        if (workAvailable.getAndSet(false)) {
            doIt();
        }
    }

    public void wakeMeUpInside() {
        // NOTE: potential race here depending on desired semantics.
        // For example, if doIt() will do all work we don't want to
        // set workAvailable to true if the doIt loop is running.
        // There are ways to work around this but the desired
        // semantics need to be specified.
        workAvailable.set(true);
        LockSupport.unpark(workerThread);
    }
}

Manually trigger a @Scheduled method

I need advice on the following:
I have a @Scheduled service method which has a fixedDelay of a couple of seconds, in which it does scanning of a work queue and processing of appropriate work if it finds any. In the same service I have a method which puts work in the work queue, and I would like this method to immediately trigger scanning of the queue after it's done (since I'm sure that there will now be some work to do for the scanner), in order to avoid the delay before the scheduled run kicks in (since this can be seconds, and time is somewhat critical).
A "trigger now" feature of the Task Execution and Scheduling subsystem would be ideal, one that would also reset the fixedDelay after execution was initiated manually (since I don't want my manual execution to collide with the scheduled one). Note: work in the queue can come from an external source, hence the requirement to do periodic scanning.
Any advice is welcome
Edit:
The queue is stored in a document-based db so local queue-based solutions are not appropriate.
A solution I am not quite happy with (don't really like the usage of raw threads) would go something like this:
@Service
public class MyProcessingService implements ProcessingService {

    // Named subclass (instead of an anonymous one) so tickle() can be called through the field
    class WorkerThread extends Thread {
        boolean ready = false;

        private boolean sleep() {
            synchronized(this) {
                if (ready) {
                    ready = false;
                } else {
                    try {
                        wait(2000);
                    } catch(InterruptedException e) {
                        return false;
                    }
                }
            }
            return true;
        }

        public void tickle() {
            synchronized(this) {
                ready = true;
                notify();
            }
        }

        public void run() {
            while(!interrupted()) {
                if(!sleep()) continue;
                scan();
            }
        }
    }

    WorkerThread worker;

    @PostConstruct
    public void init() {
        worker = new WorkerThread();
        worker.start();
    }

    @PreDestroy
    public void uninit() {
        worker.interrupt();
    }

    public void addWork(Work work) {
        db.store(work);
        worker.tickle();
    }

    public void scan() {
        List<Work> work = db.getMyWork();
        for (Work w : work) {
            process(w);
        }
    }

    public void process(Work work) {
        // work processing here
    }
}
The @Scheduled method wouldn't have any work to do if there are no items in the work queue, that is, if no one put any work in the queue between the execution cycles. On the same note, if some work item was inserted into the work queue (probably by an external source) immediately after the scheduled execution completed, the work won't be attended to until the next execution.
In this scenario, what you need is a consumer-producer queue. A queue in which one or more producers put in work-items and a consumer takes items off the queue and processes them. What you want here is a BlockingQueue. They can be used for solving the consumer-producer problem in a thread-safe manner.
You can have one Runnable that performs the tasks performed by your current @Scheduled method.
public class SomeClass {

    private final BlockingQueue<Work> workQueue = new LinkedBlockingQueue<Work>();

    public BlockingQueue<Work> getWorkQueue() {
        return workQueue;
    }

    private final class WorkExecutor implements Runnable {

        @Override
        public void run() {
            while (true) {
                try {
                    // The call to take() retrieves and removes the head of this queue,
                    // waiting if necessary until an element becomes available.
                    Work work = workQueue.take();
                    // do processing
                } catch (InterruptedException e) {
                    continue;
                }
            }
        }
    }

    // The work-producer may be anything, even a @Scheduled method
    @Scheduled
    public void createWork() {
        Work work = new Work();
        workQueue.offer(work);
    }
}
And some other Runnable or another class might put in items as follows:
public class WorkCreator implements Runnable {

    @Autowired
    private SomeClass workerClass;

    @Override
    public void run() {
        // produce work
        Work work = new Work();
        workerClass.getWorkQueue().offer(work);
    }
}
I guess that's the right way to solve the problem you have at hand. There are several variations/configurations that you can have, just look at the java.util.concurrent package.
Update after question edited
Even if the external source is a db, it is still a producer-consumer problem. You can probably call the scan() method whenever you store data in the db, and the scan() method can put the data retrieved from the db into the BlockingQueue.
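A rough sketch of that idea, reusing the workQueue from SomeClass above; the db, getMyWork() and addWork() names are taken from the question's own code, and how they are combined here is my assumption:
public void addWork(Work work) {
    db.store(work);
    scan(); // push whatever is now due straight onto the queue, no waiting for the next schedule
}

public void scan() {
    for (Work w : db.getMyWork()) {
        workQueue.offer(w); // the WorkExecutor picks these up via take()
    }
}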
To address the actual thing about resetting the fixedDelay
That is not actually possible, either with Java or with Spring, unless you handle the scheduling part yourself. There is no trigger-now functionality as well. If you have access to the Runnable that's doing the task, you can probably call the run() method yourself. But that would be the same as calling the processing method yourself from anywhere, and you don't really need the Runnable.
Another possible workaround
private Lock queueLock = new ReentrantLock();

@Scheduled
public void findNewWorkAndProcess() {
    if (!queueLock.tryLock()) {
        return;
    }
    try {
        doWork();
    } finally {
        queueLock.unlock();
    }
}

void doWork() {
    List<Work> work = getWorkFromDb();
    // process work
}

// To be called when new data is inserted into the db.
public void newDataInserted() {
    queueLock.lock();
    try {
        doWork();
    } finally {
        queueLock.unlock();
    }
}
The newDataInserted() method is called when you insert any new data. If the scheduled execution is in progress, it will wait until it is finished and then do the work. The call to lock() here is blocking, since we know that there is some work in the database and the scheduled call might have started before the work was inserted. The call to acquire the lock in findNewWorkAndProcess() is non-blocking because, if the lock has been acquired by the newDataInserted method, it means the scheduled method shouldn't be executed.
Well, you can fine tune as you like.

Queue with notifications on isEmpty() changes

I have a BlockingQueue<Runnable> (taken from ScheduledThreadPoolExecutor) in a producer-consumer environment. There is one thread adding tasks to the queue, and a thread pool executing them.
I need notifications on two events:
First item added to empty queue
Last item removed from queue
Notification = writing a message to database.
Is there any sensible way to implement that?
A simple and naïve approach would be to decorate your BlockingQueue with an implementation that simply checks the underlying queue and then posts a task to do the notification.
class NotifyingQueue<T> extends ForwardingBlockingQueue<T> implements BlockingQueue<T> {
    private final Notifier notifier; // injected not null
    …

    @Override public void put(T element) throws InterruptedException {
        if (getDelegate().isEmpty()) {
            notifier.notEmptyAnymore();
        }
        super.put(element);
    }

    @Override public T poll() {
        final T result = super.poll();
        if ((result != null) && getDelegate().isEmpty())
            notifier.nowEmpty();
        return result;
    }
    … etc
}
This approach though has a couple of problems. While the empty -> notEmpty is pretty straightforward – particularly for a single producer case, it would be easy for two consumers to run concurrently and both see the queue go from non-empty -> empty.
If though, all you want is to be notified that the queue became empty at some time, then this will be enough as long as your notifier is your state machine, tracking emptiness and non-emptiness and notifying when it changes from one to the other:
class AtomicStateNotifier implements Notifier {
    private final AtomicBoolean empty = new AtomicBoolean(true); // assume it starts empty
    private final Notifier delegate; // injected not null

    public void notEmptyAnymore() {
        if (empty.get() && empty.compareAndSet(true, false))
            delegate.notEmptyAnymore();
    }

    public void nowEmpty() {
        if (!empty.get() && empty.compareAndSet(false, true))
            delegate.nowEmpty();
    }
}
This is now a thread-safe guard around an actual Notifier implementation that perhaps posts tasks to an Executor to asynchronously write the events to the database.
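For example, the delegate could hand the database writes off to an executor. This is only a sketch; the DatabaseEventWriter type and its writeEvent method are made up for illustration:
class AsyncDbNotifier implements Notifier {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final DatabaseEventWriter writer; // hypothetical DAO that writes the message

    AsyncDbNotifier(DatabaseEventWriter writer) {
        this.writer = writer;
    }

    public void notEmptyAnymore() {
        executor.submit(() -> writer.writeEvent("queue no longer empty"));
    }

    public void nowEmpty() {
        executor.submit(() -> writer.writeEvent("queue empty"));
    }
}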
The design is most likely flawed, but you can do it relatively simply:
You have a single thread adding, so you can check before adding. i.e. pool.getQueue().isEmpty() - w/ one producer, this is safe.
Last item removed cannot be guaranteed but you can override beforeExecute and check the queue again. Possibly w/ a small timeout after isEmpty() returns true. Probably the code below will be better off executed in afterExecute instead.
protected void beforeExecute(Thread t, Runnable r) {
    if (getQueue().isEmpty()) {
        try {
            Runnable next = getQueue().poll(200, TimeUnit.MILLISECONDS);
            if (next != null) {
                execute(next);
            } else {
                // last message - or handle in afterExecute by setting a ThreadLocal and checking it there
                // alternatively you may need to do so ONLY in afterExecute, depending on your needs
            }
        } catch (InterruptedException _ie) {
            Thread.currentThread().interrupt();
        }
    }
}
something like that
I can explain why doing notifications w/ the queue itself won't work well: imagine you add a task to be executed by the pool, the task is scheduled immediately, the queue is empty again and you will need notification.

Observer Design Pattern

In the Observer Design Pattern, the subject notifies all observers by calling the update() operation of each observer. One way of doing this is
void notifyObservers() {
    for (Observer observer : observers) {
        observer.update(this);
    }
}
But the problem here is that each observer is updated in sequence, and the update operation for an observer might not be called until all the observers before it have been updated. If there is an observer whose update runs an infinite loop, then all the observers after it will never be notified.
Question:
Is there a way to get around this problem?
If so what would be a good example?
The problem is the infinite loop, not the one-after-the-other notifications.
If you wanted things to update concurrently, you'd need to fire things off on different threads - in which case, each listener would need to synchronize with the others in order to access the object that fired the event.
Complaining about one infinite loop stopping other updates from happening is like complaining that taking a lock and then going into an infinite loop stops others from accessing the locked object - the problem is the infinite loop, not the lock manager.
Classic design patterns do not involve parallelism and threading. You'd have to spawn N threads for the N observers. Be careful though, since their interaction with the subject will have to be done in a thread-safe manner.
You could make use of the java.util.concurrent.Executors.newFixedThreadPool(int nThreads) method, then call the invokeAll method (you could make use of the one with the timeout too, to avoid the infinite loop).
You would change your loop to add, for each observer, a Callable that takes the "observer" and the "this", and then calls the update method in its "call" method.
Take a look at this package for more info.
This is a quick and dirty implementation of what I was talking about:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Main
{
    private Main()
    {
    }

    public static void main(final String[] argv)
    {
        final Watched watched;
        final List<Watcher> watchers;

        watched = new Watched();
        watchers = makeWatchers(watched, 10);
        watched.notifyWatchers(9);
    }

    private static List<Watcher> makeWatchers(final Watched watched,
                                              final int count)
    {
        final List<Watcher> watchers;

        watchers = new ArrayList<Watcher>(count);

        for(int i = 0; i < count; i++)
        {
            final Watcher watcher;

            watcher = new Watcher(i + 1);
            watched.addWatcher(watcher);
            watchers.add(watcher);
        }

        return (watchers);
    }
}

class Watched
{
    private final List<Watcher> watchers;

    {
        watchers = new ArrayList<Watcher>();
    }

    public void addWatcher(final Watcher watcher)
    {
        watchers.add(watcher);
    }

    public void notifyWatchers(final int seconds)
    {
        final List<Watcher> currentWatchers;
        final List<WatcherCallable> callables;
        final ExecutorService service;

        currentWatchers = new CopyOnWriteArrayList<Watcher>(watchers);
        callables = new ArrayList<WatcherCallable>(currentWatchers.size());

        for(final Watcher watcher : currentWatchers)
        {
            final WatcherCallable callable;

            callable = new WatcherCallable(watcher);
            callables.add(callable);
        }

        service = Executors.newFixedThreadPool(callables.size());

        try
        {
            final boolean value;

            service.invokeAll(callables, seconds, TimeUnit.SECONDS);
            value = service.awaitTermination(seconds, TimeUnit.SECONDS);
            System.out.println("done: " + value);
        }
        catch (InterruptedException ex)
        {
        }

        service.shutdown();
        System.out.println("leaving");
    }

    private class WatcherCallable
        implements Callable<Void>
    {
        private final Watcher watcher;

        WatcherCallable(final Watcher w)
        {
            watcher = w;
        }

        public Void call()
        {
            watcher.update(Watched.this);
            return (null);
        }
    }
}

class Watcher
{
    private final int value;

    Watcher(final int val)
    {
        value = val;
    }

    public void update(final Watched watched)
    {
        try
        {
            Thread.sleep(value * 1000);
        }
        catch (InterruptedException ex)
        {
            System.out.println(value + " interrupted");
        }

        System.out.println(value + " done");
    }
}
I'd be more concerned about the observer throwing an exception than about it looping indefinitely. Your current implementation would not notify the remaining observers in such an event.
1. Is there a way to get around this problem?
Yes, make sure the observers work fine and return in a timely fashion.
2. Can someone please explain it with an example?
Sure:
class ObserverImpl implements Observer {
    public void update( Object state ) {
        // remove the infinite loop.
        //while( true ) {
        //    doSomething();
        //}
        // and use some kind of control:
        int iterationControl = 100;
        int currentIteration = 0;
        while( currentIteration++ < iterationControl ) {
            doSomething();
        }
    }
    private void doSomething(){}
}
This prevents a given loop from going infinite (if it makes sense, it should run at most 100 times).
Another mechanism is to start the new task in a second thread, but if it goes into an infinite loop it will eventually consume all the system memory:
class ObserverImpl implements Observer {
    public void update( Object state ) {
        new Thread( new Runnable(){
            public void run() {
                while( true ) {
                    doSomething();
                }
            }
        }).start();
    }
    private void doSomething(){}
}
That will make that observer instance return immediately, but it is only an illusion; what you actually have to do is avoid the infinite loop.
Finally, if your observers work fine but you just want to notify them all sooner, you can take a look at this related question: Invoke a code after all mouse event listeners are executed.
All observers get notified, that's all the guarantee you get.
If you want to implement some fancy ordering, you can do that:
Connect just a single Observer;
have this primary Observer notify his friends in an order you define in code or by some other means.
That takes you away from the classic Observer pattern in that your listeners are hardwired, but if it's what you need... do it!
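A small sketch of that wiring; the CompositeObserver name is mine, and the Observer interface is assumed to have the update(Object) signature used elsewhere in this thread:
class CompositeObserver implements Observer {
    private final List<Observer> ordered = new ArrayList<Observer>();

    void add(Observer o) {
        ordered.add(o); // notification order is simply insertion order
    }

    public void update(Object subject) {
        for (Observer o : ordered) {
            o.update(subject);
        }
    }
}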
If you have an observer with an "infinite loop", it's no longer really the observer pattern.
You could fire a different thread to each observer, but the observers MUST be prohibited from changing the state on the observed object.
The simplest (and stupidest) method would simply be to take your example and make it threaded.
void notifyObservers() {
    for (final Observer observer : observers) {
        new Thread() {
            public void run() {
                // NOTE: "this" here refers to the Thread, not the subject;
                // the enclosing object would need to be referenced explicitly (e.g. EnclosingClass.this)
                observer.update(this);
            }
        }.start();
    }
}
(this was coded by hand, is untested and probably has a bug or five--and it's a bad idea anyway)
The problem with this is that it will make your machine chunky since it has to allocate a bunch of new threads at once.
So to fix the problem with all the treads starting at once, use a ThreadPoolExecutor because it will A) recycle threads, and B) can limit the max number of threads running.
This is not deterministic in your case of "Loop forever" since each forever loop will permanently eat one of the threads from your pool.
Your best bet is to not allow them to loop forever, or if they must, have them create their own thread.
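A sketch of that pool-based variant; the pool size of 4 is an arbitrary choice for the example, and capturing the subject in a local variable avoids the "this" problem from the hand-rolled thread version above:
private final ExecutorService notifyPool = Executors.newFixedThreadPool(4);

void notifyObservers() {
    final Object subject = this; // inside the Runnable, "this" would refer to the Runnable
    for (final Observer observer : observers) {
        notifyPool.submit(new Runnable() {
            public void run() {
                observer.update(subject);
            }
        });
    }
}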
If you have to support classes that can't change, but you can identify which will run quickly and which will run "Forever" (in computer terms I think that equates to more than a second or two) then you COULD use a loop like this:
void notifyObservers() {
    for (final Observer observer : observers) {
        if (willUpdateQuickly(observer)) {
            observer.update(this);
        } else {
            new Thread() {
                public void run() {
                    // Same caveat as above about what "this" refers to inside the Thread
                    observer.update(this);
                }
            }.start();
        }
    }
}
Hey, if it actually "Loops forever", will it consume a thread for every notification? It really sounds like you may have to spend some more time on your design.
