Java multi-thread messaging

I have an app with two threads: one that writes to a queue and a second one that reads from it asynchronously.
I need to create a third thread that spawns 20 more.
The newly created threads will run until explicitly stopped. Those 20 threads should receive "live" data in order to analyze it.
Each of the 20 has a unique ID/name. I need to send the relevant data (which the read thread collects) to the correct thread of the 20. E.g. if the data includes a string with an ID of 2, I need to send it to the thread with ID = 2.
My question is: how should I hold a "pointer" to each of the 20 threads and send it the relevant data? (I can search for the ID in a list of Runnables that holds the threads, but then I need to call a method such as NewData(String) to pass the data to the running thread.)
How should I do it?
TIA
Paz

You would probably be better off using a Queue to communicate with your threads. You could then put all of the queues in a map for easy access. I would recommend a BlockingQueue.
public class Test {
// Special stop message to tell the worker to stop.
public static final Message Stop = new Message("Stop!");
static class Message {
final String msg;
// A message to a worker.
public Message(String msg) {
this.msg = msg;
}
public String toString() {
return msg;
}
}
class Worker implements Runnable {
private volatile boolean stop = false;
private final BlockingQueue<Message> workQueue;
public Worker(BlockingQueue<Message> workQueue) {
this.workQueue = workQueue;
}
@Override
public void run() {
while (!stop) {
try {
Message msg = workQueue.poll(10, TimeUnit.SECONDS);
// Handle the message ...
System.out.println("Worker " + Thread.currentThread().getName() + " got message " + msg);
// Is it my special stop message.
if (msg == Stop) {
stop = true;
}
} catch (InterruptedException ex) {
// Just stop on interrupt.
stop = true;
}
}
}
}
Map<Integer, BlockingQueue<Message>> queues = new HashMap<>();
public void test() throws InterruptedException {
// Keep track of my threads.
List<Thread> threads = new ArrayList<>();
for (int i = 0; i < 20; i++) {
// Make the queue for it.
BlockingQueue<Message> queue = new ArrayBlockingQueue<>(10);
// Build its thread, handing it the queue to use.
Thread thread = new Thread(new Worker(queue), "Worker-" + i);
threads.add(thread);
// Store the queue in the map.
queues.put(i, queue);
// Start the process.
thread.start();
}
// Test one.
queues.get(5).put(new Message("Hello"));
// Close down.
for (BlockingQueue<Message> q : queues.values()) {
// Stop each queue.
q.put(Stop);
}
// Join all threads to wait for them to finish.
for (Thread t : threads) {
t.join();
}
}
public static void main(String args[]) {
try {
new Test().test();
} catch (Throwable t) {
t.printStackTrace(System.err);
}
}
}

Related

Kill consumers when BlockingQueue is empty

I'm reading up on BlockingQueue, ExecutorService and the producer-consumer paradigm.
I want to have a changing number of producers and a changing number of consumers. Each producer will append to the queue, and the consumers will consume the messages and process them.
The question I have is - how do the producers know that the consumers are done, and no more messages will enter the queue?
I thought of adding a counter in my main thread. When a producer is started, I will increment the counter, and when each producer ends, it will decrement it.
My consumers will be able to see the counter, and when it reaches 0 and there are no more elements in the queue, they can die.
Another general question in terms of syncing the work: should the main thread read the contents of the queue and add executors for each message, or is it best practice to have the threads know this logic and decide on their own when to die?
When the system starts up, I receive a number that decides how many producers will start. Each producer will generate a random set of numbers into the queue. The consumers will print these numbers to a log. The issue I'm having is that once I know the last producer pushed its last number in, I still don't understand how to let the consumers know that no more numbers are coming and they should shut down.
How do the consumers know when the producers are done?
One elegant solution to this problem is to use the PoisonPill pattern. Here is an example of how it works. All you need to know in this case is the number of producers.
Edit: I updated the code to clear the queue when the last consumer finishes the work.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;
public class PoisonPillsTests {
interface Message {
}
interface PoisonPill extends Message {
PoisonPill INSTANCE = new PoisonPill() {
};
}
static class TextMessage implements Message {
private final String text;
public TextMessage(String text) {
this.text = text;
}
public String getText() {
return text;
}
@Override
public String toString() {
return text;
}
}
static class Producer implements Runnable {
private final String producerName;
private final AtomicInteger producersCount;
private final BlockingQueue<Message> messageBlockingQueue;
public Producer(String producerName, BlockingQueue<Message> messageBlockingQueue, AtomicInteger producersCount) {
this.producerName = producerName;
this.messageBlockingQueue = messageBlockingQueue;
this.producersCount = producersCount;
}
@Override
public void run() {
try {
for (int i = 0; i < 100; i++) {
messageBlockingQueue.put(new TextMessage("Producer " + producerName + " message " + i));
}
if (producersCount.decrementAndGet() <= 0) {
// producersCount ensures that only the last producer puts a single poison pill on the queue
messageBlockingQueue.put(PoisonPill.INSTANCE);
}
} catch (InterruptedException e) {
throw new RuntimeException("Producer interrupted", e);
}
}
}
static class Consumer implements Runnable {
private final AtomicInteger consumersCount;
private final AtomicInteger consumedMessages;
private final BlockingQueue<Message> messageBlockingQueue;
public Consumer(BlockingQueue<Message> messageBlockingQueue, AtomicInteger consumersCount, AtomicInteger consumedMessages) {
this.messageBlockingQueue = messageBlockingQueue;
this.consumersCount = consumersCount;
this.consumedMessages = consumedMessages;
}
@Override
public void run() {
try {
while (true) {
Message message = null;
message = messageBlockingQueue.take();
if (message instanceof PoisonPill) {
// put the poison pill back so it is also consumed by the next consumer
messageBlockingQueue.put(message);
break;
} else {
consumedMessages.incrementAndGet();
System.out.println("Consumer got message " + message);
}
}
} catch (InterruptedException e) {
throw new RuntimeException("Consumer interrupted", e);
} finally {
if (consumersCount.decrementAndGet() <= 0) {
System.out.println("Last consumer, clearing the queue");
messageBlockingQueue.clear();
}
}
}
}
public static void main(String[] args) {
final AtomicInteger producerCount = new AtomicInteger(4);
final AtomicInteger consumersCount = new AtomicInteger(2);
final AtomicInteger consumedMessages = new AtomicInteger();
BlockingQueue<Message> messageBlockingQueue = new LinkedBlockingQueue<>();
List<CompletableFuture<Void>> tasks = new ArrayList<>();
for (int i = 0; i < producerCount.get(); i++) {
tasks.add(CompletableFuture.runAsync(new Producer("" + (i + 1), messageBlockingQueue, producerCount)));
}
for (int i = 0; i < consumersCount.get(); i++) {
tasks.add(CompletableFuture.runAsync(new Consumer(messageBlockingQueue, consumersCount, consumedMessages)));
}
CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
System.out.println("Consumed " + consumedMessages + " messages");
}
}
When the producers are done, the last one can interrupt all consumers and (possibly) producers.
InterruptedException is thrown whenever a blocking call (be it put() or take()) is interrupted by another thread via thread.interrupt(), where thread is the thread calling the method. When the last producer finishes, it can interrupt all other threads, which will result in all blocking methods throwing InterruptedException, allowing you to terminate the corresponding threads.
final BlockingQueue<T> queue = ...;
final List<Thread> threads = new ArrayList<>();
threads.add(new Producer1());
threads.add(new Producer2());
threads.add(new Consumer1());
threads.add(new Consumer2());
threads.forEach(Thread::start);
// Done by the last producer, or any other thread
threads.forEach(Thread::interrupt);
class Producer extends Thread {
@Override
public void run() {
for (int i = 0; i < X; i++) {
T element;
// Produce element
try {
queue.put(element);
} catch (InterruptedException e) {
break; // Optional, only if other producers may still be running and
// you want to stop them, or interruption is performed by
// a completely different thread
}
}
}
}
class Consumer extends Thread {
@Override
public void run() {
while (true) {
T element;
try {
element = queue.take();
} catch (InterruptedException e) {
break;
}
// Consume element
}
}
}

Multi-threading with random thread

I am trying to implement the Producer-Consumer pattern with several producers and consumers.
I tried
CompletableFuture future = CompletableFuture.runAsync(() -> producer.run(), producerService)
.thenRunAsync(() -> consumer.run(), consumerService);
where producer.run() does something and returns a String (but that is not necessary) and consumer.run() does something like this:
while (!queue.isEmpty()) {
try {
message = queue.poll();
if (message == null || !message.equals(thread)) {
queue.offer(message);
Thread.sleep(1000);
continue;
}
doWork(message);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
Each of my threads has a name equal to its number, like 1 or 2, or 3 if there are 3 threads in consumerService.
message is a random number which I get with
String.valueOf(1 + new Random().nextInt(2)) for 2 threads, as I suppose.
So, my question is:
what should I do instead of thenRunAsync(), or something else, so that my consumer can change threads to take a message from the queue?
The producer needs to generate a list of numbers like 1,2,1,1,2,1,1,1, and the consumer thread named 1 should get from the queue the messages with number 1, while the thread named 2 gets those with number 2.
I can't process every message and afterwards call CompletableFuture.allOf(), because if I had about 1_000_000 tasks, I'd have to wait while they are all generated before I could call my consumers.
CompletableFuture.run* methods are used to run multiple short-lived tasks on a thread pool. Your tasks are not short-lived; they loop over the queue and handle multiple values. As a result, they occupy threads from the thread pool, the effective size of the pool decreases, and this may lead to thread starvation (a kind of deadlock).
So you should not use CompletableFuture.run* methods. Use explicit thread creation instead.
Then, make sure that the producer puts messages into the queue with queue.put() or queue.offer(), and the consumer pulls messages with queue.take() or queue.poll(). In your code, the consumer both puts and pulls messages, and the producer does not interact with the queue at all.
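A minimal sketch of that split, with made-up names (the queue, the sentinel string and the message format are illustrative, not from the original post): the producer is the only side calling put(), the consumer is the only side calling take(), and both run on explicitly created threads.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumerSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // Producer: the only side that puts messages on the queue.
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    queue.put("message-" + i);
                }
                queue.put("STOP"); // sentinel so the consumer knows when to finish
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "producer");

        // Consumer: the only side that takes messages off the queue.
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String msg = queue.take();
                    if ("STOP".equals(msg)) {
                        break;
                    }
                    System.out.println(Thread.currentThread().getName() + " handled " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "consumer-1");

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}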
I implemented it like this:
class Starter {
public static boolean STOP = false;
private Producer producer;
private Consumer consumer;
private ExecutorService producerService= Executors.newFixedThreadPool(PRODUCER_NUMBER, taxiFactory);
private ExecutorService consumerService= Executors.newFixedThreadPool(CONSUMER_NUMBER, clientFactory);
private void working() {
for (int i = 0; i < PRODUCER_NUMBER; i++) {
producerService.execute(() -> producer.get());
consumerService.execute(() -> consumer.run());
}
Starter.STOP = true;
producerService.shutdown();
consumerService.shutdown();
}
}
class Common {
private Queue<Message> emergencyQueue;
private BlockingQueue<Message> blockingQueue;
public void insertOrder(Message message) {
if (!blockingQueue.offer(message)) {
emergencyQueue.add(message);
}
}
public Message getOrder() {
if (emergencyQueue.isEmpty()) {
if (!blockingQueue.isEmpty()) {
return blockingQueue.poll();
} else {
return null;
}
} else {
return emergencyQueue.poll();
}
}
public boolean shouldStop() {
return blockingQueue.isEmpty() && emergencyQueue.isEmpty() && Starter.STOP;
}
}
class Consumer implements Runnable{
private Common common;
public void run(){
common.insertOrder(new Message());
}
}
class Producer implements Runnable{
private Common common;
public void run(){
while (!common.shouldStop()) {
Message message=common.getOrder();
if (message == null) {
try {
Thread.sleep(new Random().nextInt(TIME_TO_WAIT));
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
}
}

Both sequential and parallel processing

I have one producer and many consumers.
the producer is fast and generating a lot of results
tokens with the same value need to be processed sequentially
tokens with different values must be processed in parallel
creating new Runnables would be very expensive; also, the production code could work with 100k Tokens (in order to create a Runnable I have to pass some complex-to-build objects to the constructor)
Can I achieve the same results with a simpler algorithm? Nesting a synchronization block inside a reentrant lock seems a bit unnatural.
Are there any race conditions you might notice?
Update: a second solution I found was working with 3 collections: one to cache the producer results, a second one as a blocking queue, and a third one using a list to track the tasks in progress. Again a bit too complicated.
My version of the code:
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.locks.ReentrantLock;
public class Main1 {
static class Token {
private int order;
private String value;
Token() {
}
Token(int o, String v) {
order = o;
value = v;
}
int getOrder() {
return order;
}
String getValue() {
return value;
}
}
private final static BlockingQueue<Token> queue = new ArrayBlockingQueue<Token>(10);
private final static ConcurrentMap<String, Object> locks = new ConcurrentHashMap<String, Object>();
private final static ReentrantLock reentrantLock = new ReentrantLock();
private final static Token STOP_TOKEN = new Token();
private final static List<String> lockList = Collections.synchronizedList(new ArrayList<String>());
public static void main(String[] args) {
ExecutorService producerExecutor = Executors.newSingleThreadExecutor();
producerExecutor.submit(new Runnable() {
public void run() {
Random random = new Random();
try {
for (int i = 1; i <= 100; i++) {
Token token = new Token(i, String.valueOf(random.nextInt(1)));
queue.put(token);
}
queue.put(STOP_TOKEN);
}catch(InterruptedException e){
e.printStackTrace();
}
}
});
ExecutorService consumerExecutor = Executors.newFixedThreadPool(10);
for(int i=1; i<=10;i++) {
// creating too many Runnables would be inefficient because of this complex, not-thread-safe object
final Object dependecy = new Object(); //new ComplexDependecy()
consumerExecutor.submit(new Runnable() {
public void run() {
while(true) {
try {
//not in order
Token token = queue.take();
if (token == STOP_TOKEN) {
queue.add(STOP_TOKEN);
return;
}
System.out.println("Task start" + Thread.currentThread().getId() + " order " + token.getOrder());
Random random = new Random();
Thread.sleep(random.nextInt(200)); //doLongRunningTask(dependecy)
lockList.remove(token.getValue());
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}});
}
}}
You can pre-create a set of Runnables which will pick incoming tasks (tokens) and place them in queues according to their order value.
As pointed out in the comments, it's not guaranteed that tokens with different values will always execute in parallel (after all, you are bounded, at least, by the number of physical cores in your box). However, it is guaranteed that tokens with the same order value will be executed in their order of arrival.
Sample code:
/**
* Executor which ensures incoming tasks are executed in queues according to provided key (see {@link Task#getOrder()}).
*/
public class TasksOrderingExecutor {
public interface Task extends Runnable {
/**
* @return ordering value which will be used to sequence tasks with the same value.<br>
* Tasks with different ordering values <i>may</i> be executed in parallel, but not guaranteed to.
*/
String getOrder();
}
private static class Worker implements Runnable {
private final LinkedBlockingQueue<Task> tasks = new LinkedBlockingQueue<>();
private volatile boolean stopped;
void schedule(Task task) {
tasks.add(task);
}
void stop() {
stopped = true;
}
@Override
public void run() {
while (!stopped) {
try {
Task task = tasks.take();
task.run();
} catch (InterruptedException ie) {
// perhaps, handle somehow
}
}
}
}
private final Worker[] workers;
private final ExecutorService executorService;
/**
* @param queuesNr nr of concurrent task queues
*/
public TasksOrderingExecutor(int queuesNr) {
Preconditions.checkArgument(queuesNr >= 1, "queuesNr >= 1");
executorService = new ThreadPoolExecutor(queuesNr, queuesNr, 0, TimeUnit.SECONDS, new SynchronousQueue<>());
workers = new Worker[queuesNr];
for (int i = 0; i < queuesNr; i++) {
Worker worker = new Worker();
executorService.submit(worker);
workers[i] = worker;
}
}
public void submit(Task task) {
Worker worker = getWorker(task);
worker.schedule(task);
}
public void stop() {
for (Worker w : workers) w.stop();
executorService.shutdown();
}
private Worker getWorker(Task task) {
return workers[Math.floorMod(task.getOrder().hashCode(), workers.length)]; // floorMod avoids a negative index when hashCode() is negative
}
}
By the nature of your code, the only way to guarantee that tokens with the same value are processed in a serial manner is to wait for STOP_TOKEN to arrive.
You'll need a single-producer, single-consumer setup, with the consumer collecting and sorting the tokens by their value (into a Multimap, let's say).
Only then do you know which tokens can be processed serially and which may be processed in parallel.
Anyway, I advise you to look at the LMAX Disruptor, which offers a very effective way of sharing data between threads.
It doesn't suffer from the synchronization overhead of Executors, as it is lock-free (which may give you nice performance benefits, depending on the way you process the data).
The solution using two Disruptors
// single thread for processing as there will be only one consumer
Disruptor<InEvent> inboundDisruptor = new Disruptor<>(InEvent::new, 32, Executors.newSingleThreadExecutor());
// outbound disruptor that uses 3 threads for event processing
Disruptor<OutEvent> outboundDisruptor = new Disruptor<>(OutEvent::new, 32, Executors.newFixedThreadPool(3));
inboundDisruptor.handleEventsWith(new InEventHandler(outboundDisruptor));
// setup 3 event handlers, doing round robin consuming, effectively processing OutEvents in 3 threads
outboundDisruptor.handleEventsWith(new OutEventHandler(0, 3, new Object()));
outboundDisruptor.handleEventsWith(new OutEventHandler(1, 3, new Object()));
outboundDisruptor.handleEventsWith(new OutEventHandler(2, 3, new Object()));
inboundDisruptor.start();
outboundDisruptor.start();
// publisher code
for (int i = 0; i < 10; i++) {
inboundDisruptor.publishEvent(InEventTranslator.INSTANCE, new Token());
}
The event handler on the inbound disruptor just collects incoming tokens. When the STOP token is received, it publishes the collected tokens to the outbound disruptor for further processing:
public class InEventHandler implements EventHandler<InEvent> {
private ListMultimap<String, Token> tokensByValue = ArrayListMultimap.create();
private Disruptor<OutEvent> outboundDisruptor;
public InEventHandler(Disruptor<OutEvent> outboundDisruptor) {
this.outboundDisruptor = outboundDisruptor;
}
@Override
public void onEvent(InEvent event, long sequence, boolean endOfBatch) throws Exception {
if (event.token == STOP_TOKEN) {
// publish indexed tokens to outbound disruptor for parallel processing
tokensByValue.asMap().entrySet().stream().forEach(entry -> outboundDisruptor.publishEvent(OutEventTranslator.INSTANCE, entry.getValue()));
} else {
tokensByValue.put(event.token.value, event.token);
}
}
}
Outbound event handler processes tokens of the same value sequentially:
public class OutEventHandler implements EventHandler<OutEvent> {
private final long order;
private final long allHandlersCount;
private Object yourComplexDependency;
public OutEventHandler(long order, long allHandlersCount, Object yourComplexDependency) {
this.order = order;
this.allHandlersCount = allHandlersCount;
this.yourComplexDependency = yourComplexDependency;
}
@Override
public void onEvent(OutEvent event, long sequence, boolean endOfBatch) throws Exception {
if (sequence % allHandlersCount != order ) {
// round robin, do not consume every event to allow parallel processing
return;
}
for (Token token : event.tokensToProcessSerially) {
// do processing of the token using your complex class
}
}
}
The rest of the required infrastructure (purpose described in the Disruptor docs):
public class InEventTranslator implements EventTranslatorOneArg<InEvent, Token> {
public static final InEventTranslator INSTANCE = new InEventTranslator();
@Override
public void translateTo(InEvent event, long sequence, Token arg0) {
event.token = arg0;
}
}
public class OutEventTranslator implements EventTranslatorOneArg<OutEvent, Collection<Token>> {
public static final OutEventTranslator INSTANCE = new OutEventTranslator();
@Override
public void translateTo(OutEvent event, long sequence, Collection<Token> tokens) {
event.tokensToProcessSerially = tokens;
}
}
public class InEvent {
// Note that no synchronization is used here,
// even though the field is used among multiple threads.
// The memory barriers used by the Disruptor guarantee the changes are visible.
public Token token;
}
public class OutEvent {
// ... again, no locks.
public Collection<Token> tokensToProcessSerially;
}
public class Token {
String value;
}
If you have lots of different tokens, then the simplest solution is to create some number of single-thread executors (about 2x your number of cores), and then distribute each task to an executor determined by the hash of its token.
That way all tasks with the same token will go to the same executor and execute sequentially, because each executor only has one thread.
If you have some unstated requirements about scheduling fairness, then it is easy enough to avoid any significant imbalances by having the producer thread queue up its requests (or block) before distributing them, until there are, say, less than 10 requests per executor outstanding.
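A minimal sketch of that idea, with illustrative names not taken from the post: an array of single-thread executors, where the token value's hash selects the executor, so tasks for the same token always run one after another.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HashedExecutors {
    private final ExecutorService[] executors;

    public HashedExecutors(int n) {
        executors = new ExecutorService[n];
        for (int i = 0; i < n; i++) {
            // each executor has exactly one thread, so its tasks run sequentially
            executors[i] = Executors.newSingleThreadExecutor();
        }
    }

    public void submit(String tokenValue, Runnable task) {
        // same token value -> same executor -> sequential execution for that token
        int index = Math.floorMod(tokenValue.hashCode(), executors.length);
        executors[index].submit(task);
    }

    public void shutdown() {
        for (ExecutorService e : executors) {
            e.shutdown();
        }
    }
}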
The following solution uses only a single Map, shared by the producer and the consumers, to process orders sequentially for each order number while processing different order numbers in parallel. Here is the code:
public class Main {
private static final int NUMBER_OF_CONSUMER_THREADS = 10;
private static volatile int sync = 0;
public static void main(String[] args) {
final ConcurrentHashMap<String,Controller> queues = new ConcurrentHashMap<String, Controller>();
final CountDownLatch latch = new CountDownLatch(NUMBER_OF_CONSUMER_THREADS);
final AtomicBoolean done = new AtomicBoolean(false);
// Create a Producer
new Thread() {
{
this.setDaemon(true);
this.setName("Producer");
this.start();
}
public void run() {
Random rand = new Random();
for(int i =0 ; i < 1000 ; i++) {
int order = rand.nextInt(20);
String key = String.valueOf(order);
String value = String.valueOf(rand.nextInt());
Controller controller = queues.get(key);
if (controller == null) {
controller = new Controller();
queues.put(key, controller);
}
controller.add(new Token(order, value));
Main.sync++;
}
done.set(true);
}
};
while (queues.size() < 10) {
try {
// Allow the producer to generate several entries that need to
// be processed.
Thread.sleep(5000);
} catch (InterruptedException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
}
// System.out.println(queues);
// Create the Consumers
ExecutorService consumers = Executors.newFixedThreadPool(NUMBER_OF_CONSUMER_THREADS);
for(int i = 0 ; i < NUMBER_OF_CONSUMER_THREADS ; i++) {
consumers.submit(new Runnable() {
private Random rand = new Random();
public void run() {
String name = Thread.currentThread().getName();
try {
boolean one_last_time = false;
while (true) {
for (Map.Entry<String, Controller> entry : queues.entrySet()) {
Controller controller = entry.getValue();
if (controller.lock(this)) {
ConcurrentLinkedQueue<Token> list = controller.getList();
Token token;
while ((token = list.poll()) != null) {
try {
System.out.println(name + " processing order: " + token.getOrder()
+ " value: " + token.getValue());
Thread.sleep(rand.nextInt(200));
} catch (InterruptedException e) {
}
}
int last = Main.sync;
queues.remove(entry.getKey());
while(done.get() == false && last == Main.sync) {
// yield until the producer has added at least another entry
Thread.yield();
}
// Purge any new entries added
while ((token = list.poll()) != null) {
try {
System.out.println(name + " processing order: " + token.getOrder()
+ " value: " + token.getValue());
Thread.sleep(200);
} catch (InterruptedException e) {
}
}
controller.unlock(this);
}
}
if (one_last_time) {
return;
}
if (done.get()) {
one_last_time = true;
}
}
} finally {
latch.countDown();
}
}
});
}
try {
latch.await();
} catch (InterruptedException e) {
e.printStackTrace();
}
consumers.shutdown();
System.out.println("Exiting.. remaining number of entries: " + queues.size());
}
}
Note that the Main class contains a queues instance that is a Map. The map key is the order id that you want the consumers to process sequentially. The value is a Controller class that will contain all of the orders associated with that order id.
The producer will generate the orders and add each order (Token) to its associated Controller. The consumers will iterate over the queues map values and call the Controller lock method to determine whether they can process orders for that particular order id. If the lock returns false they will check the next Controller instance. If the lock returns true, they will process all orders and then check the next Controller.
Updated: added the sync integer, which is used to guarantee that when an instance of the Controller is removed from the queues map, all of its entries will have been consumed. There was a logic error in the consumer code where the unlock method was called too soon.
The Token class is similar to the one that you've posted here.
class Token {
private int order;
private String value;
Token(int order, String value) {
this.order = order;
this.value = value;
}
int getOrder() {
return order;
}
String getValue() {
return value;
}
@Override
public String toString() {
return "Token [order=" + order + ", value=" + value + "]\n";
}
}
The Controller class that follows is used to ensure that only a single thread within the thread pool will be processing the orders. The lock/unlock methods are used to determine which of the threads will be allowed to process the orders.
class Controller {
private ConcurrentLinkedQueue<Token> tokens = new ConcurrentLinkedQueue<Token>();
private ReentrantLock lock = new ReentrantLock();
private Runnable current = null;
void add(Token token) {
tokens.add(token);
}
public ConcurrentLinkedQueue<Token> getList() {
return tokens;
}
public void unlock(Runnable runnable) {
lock.lock();
try {
if (current == runnable) {
current = null;
}
} finally {
lock.unlock();
}
}
public boolean lock(Runnable runnable) {
lock.lock();
try {
if (current == null) {
current = runnable;
}
} finally {
lock.unlock();
}
return current == runnable;
}
@Override
public String toString() {
return "Controller [tokens=" + tokens + "]";
}
}
Additional information about the implementation: it uses a CountDownLatch to ensure that all produced orders will be processed prior to the process exiting. The done variable is just like your STOP_TOKEN variable.
The implementation does contain an issue that you would need to resolve: it does not purge the controller for an order id when all of its orders have been processed. This causes instances where a thread in the thread pool gets assigned to a controller that contains no orders, which wastes CPU cycles that could be used to perform other tasks.
Is all you need to ensure that tokens with the same value are not processed concurrently? Your code is too messy to understand what you mean (it does not compile, and has lots of unused variables, locks and maps that are created but never used). It looks like you are greatly overthinking this. All you need is one queue and one map.
Something like this I imagine:
class Consumer implements Runnable {
ConcurrentHashMap<String, Token> inProcess;
BlockingQueue<Token> queue;
public void run() {
try {
while (true) {
Token token = queue.take();
// another thread is already processing this value: put it back and try again later
if (inProcess.putIfAbsent(token.getValue(), token) != null) {
queue.put(token);
continue;
}
processToken(token);
inProcess.remove(token.getValue());
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
tokens with the same value need to be processed sequentially
The way to ensure that any two things happen in sequence is to do them in the same thread.
I'd have a collection of however many worker threads, and I'd have a Map. Any time I get a token that I've not seen before, I'll pick a thread at random, and enter the token and the thread into the map. From then on, I'll use that same thread to execute tasks associated with that token.
creating new Runnables would be very expensive
Runnable is an interface. Creating new objects that implement Runnable is not going to be significantly more expensive than creating any other kind of object.
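A minimal sketch of that map-of-token-to-thread idea, with all names illustrative rather than taken from the post: each worker is a single-thread executor, and a concurrent map remembers which worker was randomly assigned to each token value.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;

public class TokenAffinityDispatcher {
    private final List<ExecutorService> workers = new ArrayList<>();
    private final Map<String, ExecutorService> assignments = new ConcurrentHashMap<>();

    public TokenAffinityDispatcher(int workerCount) {
        for (int i = 0; i < workerCount; i++) {
            // each worker is a single-thread executor, so its tasks run one after another
            workers.add(Executors.newSingleThreadExecutor());
        }
    }

    public void submit(String tokenValue, Runnable task) {
        // the first time a token value is seen, pick a worker at random and remember the choice;
        // every later task for the same value goes to that same worker
        ExecutorService worker = assignments.computeIfAbsent(tokenValue,
                v -> workers.get(ThreadLocalRandom.current().nextInt(workers.size())));
        worker.submit(task);
    }
}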
Maybe I'm misunderstanding something, but it seems it would be easier to initially filter the Tokens with the same value from the ones with different values into two different queues.
And then use a Stream with either map or forEach for the sequential ones, and simply use the parallel stream version for the rest.
If your Tokens in the production environment are lazily generated and you only get one at a time, you simply make some sort of filter which distributes them to the two different queues.
If you can implement it with Streams I suggest doing that, as they are simple, easy to use and fast!
https://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html
I made a brief example of what I mean. In this case the Tokens' numbers are sort of artificially constructed, but that's beside the point. Also, the streams are both initiated on the main thread, which would probably also not be ideal.
public static void main(String args[]) {
ArrayList<Token> sameValues = new ArrayList<Token>();
ArrayList<Token> distinctValues = new ArrayList<Token>();
Random random = new Random();
for (int i = 0; i < 100; i++) {
int next = random.nextInt(100);
Token n = new Token(i, String.valueOf(next));
if (next == i) {
sameValues.add(n);
} else {
distinctValues.add(n);
}
}
distinctValues.stream().parallel().forEach(token -> System.out.println("Distinct: " + token.value));
sameValues.stream().forEach(token -> System.out.println("Same: " + token.value));
}
I am not entirely sure I have understood the question, but I'll take a stab at an algorithm (a rough sketch in code follows the steps below).
The actors are:
A queue of tasks
A pool of free executors
A set of in-process tokens currently being processed
A controller
Then,
Initially all executors are available and the set is empty
the controller picks an available executor and goes through the queue looking for a task whose token is not in the in-process set, and when it finds one, it
adds the token to the in-process set
assigns the executor to process the task and
goes back to the beginning of the queue
the executor removes the token from the set when it is done processing and adds itself back to the pool
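A rough sketch of that controller loop, assuming a task is just a token value plus a Runnable (the class and method names are made up for illustration):
import java.util.Iterator;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingDeque;

public class ControllerSketch {
    static class Task {
        final String token;
        final Runnable work;
        Task(String token, Runnable work) { this.token = token; this.work = work; }
    }

    private final LinkedBlockingDeque<Task> pending = new LinkedBlockingDeque<>();
    private final Set<String> inProcess = ConcurrentHashMap.newKeySet();
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    public void submit(Task task) {
        pending.addLast(task);
    }

    // One pass of the controller: scan the queue from the front, skip tasks whose token
    // is already being processed, dispatch the first eligible task, then start over.
    public void dispatchOnce() {
        for (Iterator<Task> it = pending.iterator(); it.hasNext(); ) {
            Task task = it.next();
            if (inProcess.add(task.token)) {      // false if this token is already in process
                it.remove();
                pool.submit(() -> {
                    try {
                        task.work.run();
                    } finally {
                        inProcess.remove(task.token); // free the token so queued tasks can run
                    }
                });
                return;
            }
        }
    }
}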
One way of doing this is having one executor for sequential processing and one for parallel processing. We also need a single-threaded manager service that will decide to which service a token needs to be submitted for processing.
// Queue to be shared by both the threads. Contains the tokens produced by the producer.
BlockingQueue<Token> tokenList = new ArrayBlockingQueue<Token>(10);
private void startProcess() {
ExecutorService producer = Executors.newSingleThreadExecutor();
final ExecutorService consumerForSequence = Executors
.newSingleThreadExecutor();
final ExecutorService consumerForParallel = Executors.newFixedThreadPool(10);
ExecutorService manager = Executors.newSingleThreadExecutor();
producer.submit(new Producer(tokenList));
manager.submit(new Runnable() {
public void run() {
try {
while (true) {
Token t = tokenList.take();
System.out.println("consumed- " + t.orderid
+ " element");
if (t.orderid % 7 == 0) { // any condition to check for sequence processing
consumerForSequence.submit(new ConsumerForSequenceProcess(t));
} else {
consumerForParallel.submit(new ConsumerForParallelProcess(t));
}
}
}
catch (InterruptedException e) { // TODO Auto-generated catch
// block
e.printStackTrace();
}
}
});
}
I think there is a more fundamental design issue hidden behind this task, but OK. I can't figure out from your problem description whether you want in-order execution or whether you just want operations on tasks described by single tokens to be atomic/transactional. What I propose below feels more like a "quick fix" to this issue than a real solution.
For the real "ordered execution" case I propose a solution based on queue proxies which order the output:
Define an implementation of Queue which provides a factory method generating proxy queues that are represented to the producer side by this single queue object; the factory method should also register these proxy queue objects. Adding an element to the input queue should add it directly to one of the output queues if it matches one of the elements already in that output queue; otherwise add it to any (the shortest) output queue (implement this check efficiently). Alternatively (slightly better): don't do this when the element is added, but when one of the output queues runs empty.
Give each of your runnable consumers a field storing an individual Queue interface (instead of accessing a single object). Initialize this field with the factory method defined above.
For the transaction case I think it's easier to spawn more threads than you have cores (use statistics to calculate this), and implement the blocking mechanism at a lower (object) level.

Java: notify main class when all threads in threadpool are finished / same instance of object in different threads

How do I notify my main class which instantiates a ThreadPoolExecutor when all threads within the ThreadPoolExecutor are completed?
ThreadPoolExecutor threadPool = null;
ThreadClass threadClass1;
ThreadClass threadClass2;
final ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(maxPoolSize);
public MyClass() {
threadPool = new ThreadPoolExecutor(poolSize, maxPoolSize, keepAliveTime, TimeUnit.SECONDS, queue);
threadClass1 = new ThreadClass();
threadClass2 = new ThreadClass();
threadPool.execute(threadClass1);
threadPool.execute(threadClass2);
//Now I would like to do something until the threadPool is done working
//The threads fill a ConcurrentLinkedQueue and I would like to poll
//the queue as it gets filled by the threads and output
//it to XML via JAX-RS
}
EDIT 1
While my threads fetch data from somewhere and fill this information into a ConcurrentLinkedQueue, I basically would like to perform some action in MyClass to update the XML output with the results. When all threads have terminated I would like to return true to the JAX-RS webservice which instantiated MyClass, so the webservice knows all data has been fetched and it can now display the final XML file.
EDIT 2
I am passing a Queue to threads so they can add items to the queue. When one driver is done adding items to the articleQueue I want to perform an action within my main class, polling the entity from the Queue and handing it over to the response object to display it in some way.
When I pass the queue to the threads, are they working with the same object or with a "copy" of the object so that changes within the thread do not effect the main object? That is not the behavior I want. When I check the size of the articleQueue within the Driver it is 18, the size of the articleQueue in the DriverController is 0.
Is there a nicer way to react when a thread has added something to the queue other than my while loop? How do I have to modify my code to acces the same object within different classes?
DriverController
public class DriverController {
Queue<Article> articleQueue;
ThreadPoolExecutor threadPool = null;
final ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(
maxPoolSize);
public DriverController(Response response) {
articleQueue = new ConcurrentLinkedQueue<Article>();
threadPool = new ThreadPoolExecutor(poolSize, maxPoolSize, keepAliveTime, TimeUnit.SECONDS, queue);
Driver driver = new Driver(this.articleQueue);
threadPool.execute(driver);
// More drivers would be executed here which add to the queue
while (threadPool.getActiveCount() > 0) {
// this.articleQueue.size() gives back 0 here ... why?
if(articleQueue.size()>0){
response.addArticle(articleQueue.poll());
}
}
}
}
Driver
public class Driver implements Runnable{
private Queue<Article> articleQueue;
public Driver(Queue<Article> articleQueue) {
this.articleQueue = articleQueue;
}
public boolean getData() {
// Here would be the code where the article is created ...
this.articleQueue.offer(article);
return true;
}
public void run() {
this.getData();
// this.articleQueue.size() gives back 18 here ...
}
}
You should try to use the following snippet:
//Now I would like to wait until the threadPool is done working
threadPool.shutdown();
while (!threadPool.isTerminated()) {
try {
threadPool.awaitTermination(10, TimeUnit.MILLISECONDS);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
Maybe an ExecutorCompletionService might be the right thing for you:
http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/ExecutorCompletionService.html
Example from the link above:
void solve(Executor e, Collection<Callable<Result>> solvers)
throws InterruptedException, ExecutionException {
CompletionService<Result> ecs = new ExecutorCompletionService<Result>(e);
for (Callable<Result> s : solvers)
ecs.submit(s);
int n = solvers.size();
for (int i = 0; i < n; ++i) {
Result r = ecs.take().get();
if (r != null)
use(r);
}
}
Instead of using execute you should use submit. This will return a Future instance on which you can wait for the task(s) to complete. That way you don't need polling or shutting down the pool.
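A minimal, self-contained sketch of that approach (pool size and the printed messages are arbitrary): submit() hands back a Future per task, and get() blocks until that task has completed.
import java.util.concurrent.*;

public class SubmitAndWait {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // submit() (unlike execute()) hands back a Future for each task
        Future<?> f1 = pool.submit(() -> System.out.println("task 1"));
        Future<?> f2 = pool.submit(() -> System.out.println("task 2"));
        f1.get(); // blocks until task 1 has finished
        f2.get(); // blocks until task 2 has finished
        System.out.println("all tasks done");
        pool.shutdown();
    }
}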
I don't think there's a way to do this explicitly. You could poll getCompletedTaskCount() and wait for it to reach the number of submitted tasks (or poll getActiveCount() until it drops to zero).
Why not collect the Future objects returned upon submission and check that all of those have completed? Simply call get() on each one in turn. Since that call blocks, you'll simply wait for each in turn and gradually fall through the set until you've waited on each one.
Alternatively you could submit the threads and call shutdown() on the executor. That way, the submitted tasks will be executed, and then the terminated() method is called. If you override this, you'll get a callback once all tasks are completed (you couldn't use that executor again, obviously).
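A small sketch of that second option, assuming an arbitrary pool size and tasks: an anonymous ThreadPoolExecutor subclass overrides the protected terminated() hook, which runs once the pool has been shut down and the last task has finished.
import java.util.concurrent.*;

public class NotifyingPool {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>()) {
            @Override
            protected void terminated() {
                super.terminated();
                // called exactly once, after shutdown(), when the last task has finished
                System.out.println("all tasks completed, pool terminated");
            }
        };
        pool.execute(() -> System.out.println("task 1"));
        pool.execute(() -> System.out.println("task 2"));
        pool.shutdown(); // no new tasks accepted; terminated() fires when the queue drains
    }
}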
Judging from the reference documentation you have a few options:
ThreadPoolExecutor threadPool = null;
ThreadClass threadclass1;
ThreadClass threadclass2;
final ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(maxPoolSize);
puclic MyClass(){
threadPool = new ThreadPoolExecutor(poolSize, maxPoolSize, keepAliveTime, TimeUnit.SECONDS, queue);
threadClass1 = new ThreadClass;
threadClass2 = new ThreadClass;
threadPool.execute(threadClass1);
threadPool.execute(threadClass2);
//Now I would like to wait until the threadPool is done working
//Option 1: shutdown() and awaitTermination()
threadPool.shutdown();
try {
threadPool.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);
}
catch (InterruptedException e) {
e.printStackTrace();
}
//Option 2: getActiveCount()
while (threadPool.getActiveCount() > 0) {
try {
Thread.sleep(1000);
}
catch (InterruptedException ignored) {}
}
//Option 3: getCompletedTaskCount()
while (threadPool.getCompletedTaskCount() < totalNumTasks) {
try {
Thread.sleep(1000);
}
catch (InterruptedException ignored) {}
}
}
All things considered, I think shutdown() and awaitTermination() is the best option of the three.
I think you're overengineering things a bit. You don't really care about the threads or the thread pool, and rightly so. Java provides nice abstractions so that you don't have to. You just need to know when your tasks are complete, and methods exist for that. Just submit your jobs, and wait for the futures to say they're done. If you really want to know as soon as a single task completes, you can watch all the futures and take action as soon as any one is finished. If not and you only care that everything is finished, you can remove some complexity from the code I'm about to post. Try this on for size (note MultithreadedJaxrsResource is executable):
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import java.util.*;
import java.util.concurrent.*;
#Path("foo")
public class MultithreadedJaxrsResource {
private ExecutorService executorService;
public MultithreadedJaxrsResource(ExecutorService executorService) {
this.executorService = executorService;
}
@GET
@Produces(MediaType.APPLICATION_XML)
public AllMyArticles getStuff() {
List<Future<Article>> futures = new ArrayList<Future<Article>>();
// Submit all the tasks to run
for (int i = 0; i < 10; i++) {
futures.add(executorService.submit(new Driver(i + 1)));
}
AllMyArticles articles = new AllMyArticles();
// Wait for all tasks to finish
// If you only care that everything is done and not about seeing
// when each one finishes, this outer do/while can go away, and
// you only need a single for loop to wait on each future.
boolean allDone;
do {
allDone = true;
Iterator<Future<Article>> futureIterator = futures.iterator();
while (futureIterator.hasNext()) {
Future<Article> future = futureIterator.next();
if (future.isDone()) {
try {
articles.articles.add(future.get());
futureIterator.remove();
} catch (InterruptedException e) {
// thread was interrupted. don't do that.
throw new IllegalStateException("broken", e);
} catch (ExecutionException e) {
// execution of the Callable failed with an
// exception. check it out.
throw new IllegalStateException("broken", e);
}
} else {
allDone = false;
}
}
} while (!allDone);
return articles;
}
public static void main(String[] args) {
ExecutorService executorService = Executors.newFixedThreadPool(10);
AllMyArticles stuff =
new MultithreadedJaxrsResource(executorService).getStuff();
System.out.println(stuff.articles);
executorService.shutdown();
}
}
class Driver implements Callable<Article> {
private int i; // Just to differentiate the instances
public Driver(int i) {
this.i = i;
}
public Article call() {
// Simulate taking some time for each call
try {
Thread.sleep(1000 / i);
} catch (InterruptedException e) {
System.err.println("oops");
}
return new Article(i);
}
}
class AllMyArticles {
public final List<Article> articles = new ArrayList<Article>();
}
class Article {
public final int i;
public Article(int i) {
this.i = i;
}
@Override
public String toString() {
return "Article{" +
"i=" + i +
'}';
}
}
Done that way, you can plainly see that the tasks are returned in the order they complete, as the last task finishes first thanks to sleeping the shortest time. If you don't care about completion order and just want to wait for all to finish, the loop becomes much simpler:
for (Future<Article> future : futures) {
try {
articles.articles.add(future.get());
} catch (InterruptedException e) {
// thread was interrupted. don't do that.
throw new IllegalStateException("broken", e);
} catch (ExecutionException e) {
// execution of the Callable failed with an exception. check it out.
throw new IllegalStateException("broken", e);
}
}

Java: Threads, how to make them all do something

I am trying to implement nodes talking to each other in Java. I am doing this by creating a new thread for every node that wants to talk to the server.
When the given number of nodes (i.e. that many threads) have been created and connected to the server, I want each thread to execute its next bit of code after adding to the "sharedCounter".
I think I need to use 'locks' on the shared variable, and something like signalAll() or notifyAll() to get all the threads going, but I can't seem to make clear sense of exactly how this works or how to implement it.
Any help explaining these Java concepts would be greatly appreciated :D
Below is roughly the structure of my code:
import java.net.*;
import java.io.*;
public class Node {
public static void main(String[] args) {
...
// Chooses server or client launchers depend on parameters.
...
}
}
class sharedResource {
private int sharedCounter;
public sharedResource(int i) {
sharedCounter = i;
}
public synchronized void incSharedCounter() {
sharedCounter--;
if (sharedCounter == 0)
// Get all threads to do something
}
}
class Server {
...
for (int i = 0; i < numberOfThreads; i++) {
new serverThread(serverSocket.accept()).start();
}
...
sharedResource threadCount = new sharedResource(numberOfThreads);
...
}
class serverThread extends Thread {
...
//some code
Server.threadCount.incSharedCounter();
// Some more code to run when sharedCounte == 0
...
}
class Client {
...
}
     // Get all threads to do something
Threads (or rather Runnables, which you should implement rather than extending Thread) have a run method that contains the code they are expected to execute.
Once you call Thread#start (which in turn calls Runnable#run), the thread will start doing exactly that.
Since you seem to be new to multi-threading in Java, I recommend that you read an introduction to the Concurrency Utility package, that has been introduced in Java5 to make it easier to implement concurrent operations.
Specifically what you seem to be looking for is a way to "pause" the operation until a condition is met (in your case a counter having reached zero). For this, you should look at a CountDownLatch.
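A minimal sketch of the CountDownLatch idea (the node count and the printed messages are placeholders, not from the question): every node counts the latch down once it has "connected", and everyone, including the main thread, awaits the latch before proceeding.
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int numberOfNodes = 3;
        CountDownLatch allConnected = new CountDownLatch(numberOfNodes);

        for (int i = 0; i < numberOfNodes; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("node " + id + " connected");
                allConnected.countDown();          // signal that this node is ready
                try {
                    allConnected.await();          // block until every node has counted down
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                System.out.println("node " + id + " proceeding");
            }).start();
        }

        allConnected.await();                      // the main thread can also wait for all nodes
        System.out.println("all nodes connected");
    }
}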
Indeed, the subject is broad, but I'll try to explain the basics. More details can be read from various blogs and articles. One of which is the Java trail.
It is best to see each thread as being runners (physical persons) that run alongside each other in a race. Each runner may perform any task while running. For example, take a cup of water from a table at a given moment in the race. Physically, they cannot both drink from the same cup at once, but in the virtual world, it is possible (this is where the line is drawn).
For example, take again two runners; each of them has to run back and forth a track, and push a button (shared by the runners) at each end for 1'000'000 times, the button is simply incrementing a counter by one each time. When they completed their run, what would be the value of the counter? In the physical world, it would be 2'000'000 because the runners cannot push the button at the same time, they would wait for the first one to leave first... that is unless they fight over it... Well, this is exactly what two threads would do. Consider this code :
public class ThreadTest extends Thread {
static public final int TOTAL_INC = 1000000;
static public int counter = 0;
@Override
public void run() {
for (int i=0; i<TOTAL_INC; i++) {
counter++;
}
System.out.println("Thread stopped incrementing counter " + TOTAL_INC + " times");
}
public static void main(String[] args) throws InterruptedException {
Thread t1 = new ThreadTest();
Thread t2 = new ThreadTest();
t1.start();
t2.start();
t1.join(); // wait for each thread to stop on their own...
t2.join(); //
System.out.println("Final counter is : " + counter + " which should be equal to " + TOTAL_INC * 2);
}
}
An output could be something like
Thread stopped incrementing counter 1000000 times
Thread stopped incrementing counter 1000000 times
Final counter is : 1143470 which should be equal to 2000000
Once in a while, the two thread would just increment the same value twice; this is called a race condition.
Synchronizing the run method will not work, and you'd have to use some locking mechanism to prevent this from happening. Consider the following changes in the run method :
static private Object lock = new Object();
@Override
public void run() {
for (int i=0; i<TOTAL_INC; i++) {
synchronized(lock) {
counter++;
}
}
System.out.println("Thread stopped incrementing counter " + TOTAL_INC + " times");
}
Now the expected output is
...
Final counter is : 2000000 which should be equal to 2000000
We have synchronized our counter with a shared object. This is like putting a queue line before only one runner can access the button at once.
NOTE : this locking mechanism is called a mutex. If a resource can be accessed by n threads at once, you might consider using a semaphore.
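A small sketch of the semaphore case (the permit count and the simulated work are arbitrary): at most three threads can hold the resource at the same time; the rest block in acquire() until a permit is released.
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    private static final Semaphore permits = new Semaphore(3);

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    permits.acquire();                 // blocks if 3 threads already hold a permit
                    System.out.println("thread " + id + " using the resource");
                    Thread.sleep(100);                 // simulate some work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    permits.release();                 // always give the permit back
                }
            }).start();
        }
    }
}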
Multithreading is also associated with deadlocking. A deadlock is when two threads mutually waits for the other to free some synchronized resource to continue. For example :
Thread 1 starts
Thread 2 starts
Thread 1 acquire synchronized object1
Thread 2 acquire synchronized object2
Thread 2 needs to acquire object1 to continue (locked by Thread 1)
Thread 1 needs to acquire object2 to continue (locked by Thread 2)
Program hangs in deadlock
While there are many ways to prevent this from happening (it depends on what your threads are doing, and how they are implemented...) You should read about that particularly.
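A minimal sketch of the scenario above (the lock names mirror the steps, everything else is illustrative): two threads take the same two locks in opposite order, and the program hangs. One common prevention, not shown here, is to always acquire the locks in the same global order.
public class DeadlockDemo {
    private static final Object object1 = new Object();
    private static final Object object2 = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (object1) {
                sleepQuietly(100);                    // give the other thread time to grab object2
                synchronized (object2) {              // waits forever: thread 2 holds object2
                    System.out.println("thread 1 done");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (object2) {
                sleepQuietly(100);
                synchronized (object1) {              // waits forever: thread 1 holds object1
                    System.out.println("thread 2 done");
                }
            }
        }).start();
    }

    private static void sleepQuietly(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}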
NOTE: the methods wait, notify and notifyAll can only be called on an object while holding its monitor, i.e. inside a block synchronized on that object. For example:
static public final int TOTAL_INC = 10;
static private int counter = 0;
static private Object lock = new Object();
static class Thread1 extends Thread {
@Override
public void run() {
synchronized (lock) {
for (int i=0; i<TOTAL_INC; i++) {
try {
lock.wait();
counter++;
lock.notify();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
}
static class Thread2 extends Thread {
@Override
public void run() {
synchronized (lock) {
for (int i=0; i<TOTAL_INC; i++) {
try {
lock.notify();
counter--;
lock.wait();
} catch (InterruptedException e) {
/* ignored */
}
}
}
}
}
Notice that both threads are running their for...loop blocks within the synchronized block. (The result is that counter == 0 when both threads end.) This can be achieved because they "let each other" access the synchronized resource via the resource's wait and notify methods. Without using those two methods, both threads would simply run sequentially and not concurrently (or, more precisely, alternately).
I hope this shed some light about threads (in Java).
** UPDATE **
Here is a little proof of concept of everything discussed above, using the CountDownLatch class suggested by Thilo earlier :
static class Server {
static public final int NODE_COUNT = 5;
private List<RunnableNode> nodes;
private CountDownLatch startSignal;
private Object lock = new Object();
public Server() {
nodes = Collections.synchronizedList(new ArrayList<RunnableNode>());
startSignal = new CountDownLatch(Server.NODE_COUNT);
}
public Object getLock() {
return lock;
}
public synchronized void connect(RunnableNode node) {
if (startSignal.getCount() > 0) {
startSignal.countDown();
nodes.add(node);
System.out.println("Received connection from node " + node.getId() + " (" + startSignal.getCount() + " remaining...)");
} else {
System.out.println("Client overflow! Refusing connection from node " + node.getId());
throw new IllegalStateException("Too many nodes connected");
}
}
public void shutdown() {
for (RunnableNode node : nodes) {
node.shutdown();
}
}
public void awaitAllConnections() {
try {
startSignal.await();
synchronized (lock) {
lock.notifyAll(); // awake all nodes
}
} catch (InterruptedException e) {
/* ignore */
shutdown(); // properly close any connected node now
}
}
}
static class RunnableNode implements Runnable {
private Server server;
private int id;
private boolean working;
public RunnableNode(int id, Server server) {
this.id = id;
this.server = server;
this.working = true;
}
public int getId() {
return id;
}
public void run() {
try {
Thread.sleep((long) (Math.random() * 5) * 1000); // just wait randomly from 0 to 5 seconds....
synchronized (server.getLock()) {
server.connect(this);
server.getLock().wait();
}
if (!Thread.currentThread().isAlive()) {
throw new InterruptedException();
} else {
System.out.println("Node " + id + " started successfully!");
while (working) {
Thread.yield();
}
}
} catch (InterruptedException e1) {
System.out.print("Ooop! ...");
} catch (IllegalStateException e2) {
System.out.print("Awwww! Too late! ...");
}
System.out.println("Node " + id + " is shutting down");
}
public void shutdown() {
working = false; // shutdown node here...
}
}
static public void main(String...args) throws InterruptedException {
Server server = new Server();
for (int i=0; i<Server.NODE_COUNT + 4; i++) { // create 4 more nodes than needed...
new Thread(new RunnableNode(i, server)).start();
}
server.awaitAllConnections();
System.out.println("All connection received! Server started!");
Thread.sleep(6000);
server.shutdown();
}
This is a broad topic. You might try reading through the official guides for concurrency (i.e. threading, more or less) in Java. This isn't something with cut-and-dried solutions; you have to design something.
