On Android I have a normal producer-consumer scenario:
1. Different producer threads can add objects to the list at different times.
2. When a certain event (trigger event) happens, a consumer starts to take elements from the list (there is only one consumer thread).
3. When the list is empty, the consumer stops taking elements from the list.
4. As soon as the list is not empty, the consumer must take the elements from the list.
5. The consumer must be fast at taking data: as soon as an element is inserted in the list by a producer, the consumer has to take it.
6. I have this scenario in a singleton, and I have to stop the thread only when the app shuts down.
7. One of the producers is sometimes the UI thread.
What type of synchronization and list do you suggest using? I would like to do this without wasting CPU load.
I'm scared of point 7: I don't want to block the UI thread for a long time.
EDIT: to add details for @Alex, I'm writing it in pseudocode:
Thread C producer: calls EventTracker.trackEvent(C)
UI producer: calls EventTracker.trackEvent(A)

EventTracker
{
    BlockingQueue<Event> blockingQueue;

    trackEvent(Event x)
    {
        blockingQueue.offer(x, 500, TimeUnit.MILLISECONDS);
    }

    Thread consumer
    {
        while (true) {
            Event p = blockingQueue.poll(100, TimeUnit.MILLISECONDS);
        }
    }
}
If the timeout is triggered on trackEvent(A), the UI producer does not wait for a long time, but is the event "A" lost?
You could try a SEDA approach to this problem, with queuing and an implementation like BlockingQueue.
In your case, the producers insert events into the queue using offer and the consumer takes them using poll (use the timeouts on those methods to exit the producer/consumer cleanly when the user quits the application).
Note that there are a few things to get right on the threading side when using this approach.
There is an example of the concept in the Android developer documentation.
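A minimal sketch of that EventTracker singleton along these lines. The names follow the question's pseudocode; the unbounded LinkedBlockingQueue and the poison-pill shutdown are assumptions on my part. The consumer parks on take(), so it wastes no CPU while idle, and offer() on an unbounded queue returns immediately, so the UI thread is never blocked:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class EventTracker {
    private static final String POISON = "__shutdown__"; // assumed shutdown marker

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final Thread consumer;
    final List<String> processed = Collections.synchronizedList(new ArrayList<>());

    EventTracker() {
        consumer = new Thread(() -> {
            try {
                while (true) {
                    String e = queue.take();       // parks until an element arrives, no busy loop
                    if (POISON.equals(e)) return;  // exit cleanly at app shutdown
                    processed.add(e);              // stand-in for real event handling
                }
            } catch (InterruptedException ignored) { }
        });
        consumer.start();
    }

    // Safe to call from any producer, including the UI thread:
    // offer() on an unbounded queue never blocks.
    void trackEvent(String event) {
        queue.offer(event);
    }

    void shutdown() throws InterruptedException {
        queue.offer(POISON);
        consumer.join();
    }
}
```

With a bounded queue and a timed offer, as in the question's pseudocode, a triggered timeout means offer returns false and that event is indeed dropped; the unbounded queue avoids that trade-off at the cost of unbounded memory.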
Related
Let's say we have two threads which are connected by a ConcurrentLinkedQueue. What I want is something like a handler on the queue, so that one thread knows when the other thread has added something to the queue and can poll it. Is that possible?
Normally a ConcurrentLinkedQueue is used when there is at least one producer on one thread and at least one consumer on a different thread.
The consumer processes elements as soon as they are available; to do so, the read operation on the queue blocks, sometimes for a limited amount of time.
Depending on the application you can have a single producer and many consumers, or vice versa.
Blocking achieves exactly your requirement (the consumer thread knows when an element is inserted).
The fact that the consumer thread blocks is not a problem unless it is your main process thread, or unless you are planning to build several hundred concurrent consumers.
So BlockingQueue#take() or BlockingQueue#poll(long timeout, TimeUnit unit) is your friend here, if you just run it on a dedicated thread.
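A consumer loop along those lines, using poll with a timeout so the dedicated thread can periodically re-check a shutdown flag (the flag and class name are assumed details, not from the question):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

class PollConsumer {
    static final BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
    static volatile boolean running = true;

    // Drains items as they arrive; wakes every 100 ms to re-check the flag,
    // and keeps going until the queue is empty after shutdown is requested.
    static int consume() throws InterruptedException {
        int processed = 0;
        while (running || !queue.isEmpty()) {
            Integer item = queue.poll(100, TimeUnit.MILLISECONDS);
            if (item != null) processed++;   // the real handler would go here
        }
        return processed;
    }
}
```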
I'm working on a producer-consumer pattern that should work with a queue. As usual, there is a consumer thread and a producer thread; the producer adds an item to the queue at certain time intervals (from 3 to 5 seconds), and the consumer waits to process an item as soon as the queue isn't empty.
As a requirement, the producer should and will produce items non-stop, which means it will keep producing even if the queue is full. That's why I can't use BlockingQueue implementations, as they either wait for the queue to have available space or throw an exception.
My current implementation is the following:
// consumer's Runnable
public void run() {
    while (true) {
        if (!queue.isEmpty()) {
            currentItem = queue.poll();
            process(currentItem);
        }
    }
}
This thread will keep looping even if no item has been produced by the producer thread.
How can I make it wait until the producer adds an item to the queue, and what is a good queue implementation with no capacity limit?
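For what it's worth, an unbounded LinkedBlockingQueue sidesteps the stated concern: put() on an unbounded queue never blocks the producer, and take() parks the consumer until an item arrives, so there is no busy loop. A minimal sketch (the class name is illustrative):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class NoSpinConsumer {
    static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Blocks (without spinning) until the producer adds an item.
    static String takeOne() throws InterruptedException {
        return queue.take();
    }
}
```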
I was trying to read the implementation of SynchronousQueue.
It is not so straightforward for me. It seems to use a linked list where each node is associated with a thread.
The core part uses a spin loop waiting for tasks to be placed in the queue.
I was wondering why a spin loop is used instead of something like wait/notify.
Now one of the cores is gone due to this constant spin loop, right?
I am trying to understand this point and get a rough understanding of the design of SynchronousQueue.
UPDATE
What is also troubling me is how the waiter threads start and stop.
The point of the SynchronousQueue is to synchronize something which is usually quite asynchronous - one thread placing an item into the queue while another tries to take from it.
The SynchronousQueue is actually not a queue at all. It has no capacity, no internal storage. It only allows taking from the queue when another process is currently trying to put in the queue.
Example:
Process A tries to put in the queue. This blocks for now.
Process B tries to take from the queue. Since someone is trying to put, the item is transferred from A to B, and both are unblocked.
Process B tries to take from the queue, but no one tries to put. So B is now blocked.
Process A now wants to put an item. Now the item is transferred over to B, and A and B are no longer blocked.
About the blocking:
The Sun/Oracle JRE implementation does use polling instead of a wait/notify pattern if you do a timed operation (like "try to take for 1 second"). This makes sense: it periodically retries until the time is up. When you do a non-timed operation (like "take, no matter how long it takes"), it uses park, which wakes again if the situation has changed. In neither situation would one of your cores be constantly busy spinning a loop. The for (;;) means "retry indefinitely" in this case; it does not mean "constant spinning".
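The hand-off described above can be demonstrated in a few lines: put() on a SynchronousQueue blocks until another thread takes, so the producer here must run on its own thread (the class and method names are illustrative):

```java
import java.util.concurrent.SynchronousQueue;

class HandOff {
    static final SynchronousQueue<String> queue = new SynchronousQueue<>();

    // Starts a producer whose put() blocks until our take() arrives,
    // then returns the transferred item.
    static String exchangeOnce() throws InterruptedException {
        Thread producer = new Thread(() -> {
            try {
                queue.put("A");   // blocks: no storage, someone must take
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        String taken = queue.take();  // unblocks the producer
        producer.join();
        return taken;
    }
}
```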
I want to implement a variant of the publisher/subscriber pattern using Java and am currently running out of ideas.
There is 1 publisher and N subscribers; the publisher publishes objects, and each subscriber needs to process each of the objects once and only once, in the correct order. The publisher and each subscriber run in their own thread.
In my original implementation, each subscriber has its own blocking queue, and the publisher puts objects into each subscriber's queue. This works OK, but the publisher will be blocked if any subscriber's queue is full. This leads to degradation of performance, as each subscriber takes a different amount of time to process an object.
Then, in another implementation, the publisher holds the objects in its own queue. Along with each object, an AtomicInteger counter is associated with it, initialized to the number of subscribers. Each subscriber then peeks at the queue and decrements the counter, removing the object from the queue when the counter reaches zero.
In this way the publisher is free from blocking, but now the subscribers need to wait for each other to process an object and remove it from the queue before the next object can be peeked.
Is there any better way to do this? I assume this should be a quite common pattern.
Your "many queues" implementation is the way to go. I don't think you need to necessarily be concerned with one full queue blocking the producer, because the overall time to completion won't be affected. Let's say you have three consumers, two consume at a rate of 1 per second and the third consumes at a rate of 1 per five seconds, meanwhile the producer produces at a rate of 1 per two seconds. Eventually the third queue will fill up, and so the producer will block on it and will also stop putting items in the first and second queues. There are ways around this, but they're not going to change the fact that the third consumer will always be the bottleneck. If you're producing/consuming 100 items, then this will take at least 500 seconds because of the third consumer (5 seconds times 100 items), and this will be the case even if the first and second consumers finish after 200 seconds (because you've done something clever to allow the producer to continue to fill their queues even after the third queue is full) or if they finish after 500 seconds (because the producer blocked on the third queue).
Definitely,
"each subscriber has its own blocking queue, and the publisher puts objects into each subscriber's queue"
is the way to go.
You can use a threaded approach to putting into the queues, so if one queue is full the publisher will not wait.
For example, s1, s2, s3 are subscribers, and addToQueue is a method in each subscriber which adds to the corresponding queue.
The addToQueue method waits until the queue has free space, so a call to addToQueue is a blocking call (ideally synchronized code).
Then in the publisher you can do something similar to the code below.
NOTE: the code might not work exactly as-is, but it should give you the idea.
List<Subscriber> slist; // assume it is initialised

public void publish(final String message) {
    for (final Subscriber s : slist) {
        Thread t = new Thread(new Runnable() {
            public void run() {
                s.addToQueue(message);
            }
        });
        t.start();
    }
}
There is 1 publisher and N subscribers, the publisher publish objects then each subscriber need to process the each of the objects once and only once in the correct order. The publisher and each subscriber run in their own thread.
I would change this architecture. I initially considered the queue per subscriber but I don't like that mechanism. For example, if the first subscriber takes longer to run, all of the jobs will end up in that queue and you will only be doing 1 thread of work.
Since you have to run the subscribers in order, I'd have a pool of threads which run each message through all of the subscribers. The calls to the subscribers will need to be reentrant which may not be possible.
So you would have a pool of 10 threads (let's say) and each one dequeues from the publisher's queue, and does something like the following:
public void run() {
    while (!shutdown && !Thread.currentThread().isInterrupted()) {
        try {
            Article article = publisherQueue.take();
            for (Subscriber subscriber : subscriberList) {
                subscriber.process(article);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve the interrupt; the loop then exits
        }
    }
}
n threads produce to a BlockingQueue.
When the queue is full, the consumer drains the queue and does some processing.
How should I decide between the following two implementation choices?
Choice A:
The consumer regularly polls the queue to check whether it is full, with all writers waiting (it is a blocking queue, after all).
Choice B:
I implement my own queue with a synchronized put method. Before putting the provided element, I test whether the queue is nearly full (full minus one element). I then put the element and notify my consumer (which was waiting).
The first solution is the easiest but does polling, which annoys me.
The second solution is, in my opinion, more error-prone and requires more coding.
I would suggest writing a proxy queue which wraps a queue instance internally, along with an Exchanger instance. Your proxy methods delegate calls to the internal queue. Check whether the internal queue is full when you add; when it is full, exchange the internal queue with the consumer thread. The consumer thread exchanges an empty queue in return for the filled queue. Your proxy queue then continues filling the empty queue while the consumer processes the filled one. Both activities can run in parallel, and the two sides exchange again when both parties are ready.
class MyQueue<E> implements BlockingQueue<E> {
    Queue<E> internalQueue = ...
    Exchanger<Queue<E>> exchanger;

    MyQueue(Exchanger<Queue<E>> ex) {
        this.exchanger = ex;
    }

    // ...

    public boolean add(E e) {
        try {
            return internalQueue.add(e);
        } catch (IllegalStateException ise) {
            // queue is full: swap it for the consumer's empty queue
            // (exchange() throws InterruptedException; handling omitted here)
            internalQueue = exchanger.exchange(internalQueue);
        }
        return internalQueue.add(e);
    }
}
class Consumer implements Runnable {
    public void run() {
        Queue<E> currentQueue = ... // starts as an empty queue
        while (...) {
            Object o = currentQueue.poll(); // poll() returns null when empty (remove() would throw)
            if (o == null) {
                // this queue is drained: swap it for the producer's filled queue
                currentQueue = exchanger.exchange(currentQueue);
                continue;
            }
            // cast and process the element
        }
    }
}
The second solution is obviously better, and it is not that complicated. You can inherit from or wrap any other BlockingQueue and override its offer() method as follows: call the "real" offer(). If it returns true, exit. Otherwise, trigger the working thread and immediately call offer() with a timeout.
Here is the almost-pseudocode:
public boolean offer(E e) {
    if (queue.offer(e)) {
        return true;
    }
    worker.doYourJob(); // wake the worker so it starts draining the queue
    // timeout of e.g. 20 sec. - enough for the worker to dequeue at least
    // one task from the queue, so space will become available
    boolean result = queue.offer(e, timeout, unit);
    return result;
}
I don't know whether there is an existing queue implementation that does what you need: consumers wait while the queue fills up, and only when it is full do they drain it and start processing.
Your queue should block consumers until it becomes full. I think you need to override the drain() method to make it wait until the queue becomes full. Then your consumers just call drain and wait on it. No notification from producer to consumer is needed.
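A sketch of that idea with plain wait/notify, assuming a small wrapper class (the class and method names are made up for illustration): producers block while the queue is full, and the consumer blocks until it is full, then empties it in a single batch via drainTo():

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;

class DrainWhenFull<E> {
    private final ArrayBlockingQueue<E> queue;

    DrainWhenFull(int capacity) {
        queue = new ArrayBlockingQueue<>(capacity);
    }

    synchronized void put(E e) throws InterruptedException {
        while (queue.remainingCapacity() == 0) {
            wait();                  // full: wait for the consumer to drain
        }
        queue.add(e);
        if (queue.remainingCapacity() == 0) {
            notifyAll();             // just became full: wake the consumer
        }
    }

    // Blocks until the queue is full, then drains every element at once.
    synchronized List<E> drainWhenFull() throws InterruptedException {
        while (queue.remainingCapacity() > 0) {
            wait();                  // not full yet
        }
        List<E> batch = new ArrayList<>();
        queue.drainTo(batch);
        notifyAll();                 // free any producers blocked on put()
        return batch;
    }
}
```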
Use an observer pattern: have your consumers register with the queue's notifier. When a producer does a put, the queue then decides whether to notify any listeners.
I used a CountDownLatch, which is simple and works great.
Thanks for the other ideas :)
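The poster did not show the CountDownLatch code; one plausible shape of that solution (all names here are illustrative, not from the original) is a latch sized to the queue's capacity, counted down once per produced element, which the consumer awaits before draining the full queue:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;

class LatchDrain {
    // Produces `capacity` elements on a separate thread; the caller blocks
    // on the latch until the queue is full, then drains it in one batch.
    static List<Integer> produceAndDrain(int capacity) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(capacity);
        CountDownLatch full = new CountDownLatch(capacity);

        Thread producer = new Thread(() -> {
            for (int i = 0; i < capacity; i++) {
                try {
                    queue.put(i);
                    full.countDown();    // one count per element now in the queue
                } catch (InterruptedException ignored) { }
            }
        });
        producer.start();

        full.await();                    // blocks until the queue holds `capacity` elements
        List<Integer> batch = new ArrayList<>();
        queue.drainTo(batch);
        producer.join();
        return batch;
    }
}
```

Unlike wait/notify, the latch is single-use, so this shape fits a one-shot "fill then drain" cycle; a repeating cycle would need a fresh latch (or a CyclicBarrier) per round.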