I have two local thread pools: the first pool has 4 threads, the second pool has 5 threads.
I want these two pools to communicate with each other.
For example, the first pool's second thread (1.2) communicates with the second pool's fifth thread (2.5), i.e.
1.2 -> 2.5
1.1 -> 2.2
1.3 -> 2.1
1.4 -> 2.3
Now 1.2 has finished sending a message to 2.5 and wants to send another message to the second pool, but 2.5 is still busy, while 2.4 is free to process messages from 1.2.
How do I make threads from the first pool communicate with the first free thread from the second pool?
How can I implement it in Java?
Perhaps I should use a message broker or something like that? (Or a BlockingQueue, Exchanger, or PipedReader?)
Thanks
(Your example is not clear, but I think you are asking for a scheme where the thread in one pool doesn't care which of the threads in the other pool gets the messages.)
There are probably many ways to do this, but a simple way is:
create a bounded message queue for each pool
each thread in each pool reads messages from its pool's queue
a thread in one pool sends a message to the other pool by adding the message to the other pool's queue (a minimal sketch follows).
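Here is that scheme sketched out, assuming String messages, an illustrative queue size, and lambda-style Runnables (only pool 2's inbox is shown; pool 1 would get its own queue the same way):
BlockingQueue<String> pool2Inbox = new ArrayBlockingQueue<>(100); // bounded queue owned by pool 2
ExecutorService pool1 = Executors.newFixedThreadPool(4);
ExecutorService pool2 = Executors.newFixedThreadPool(5);
// Every pool-2 thread reads from pool 2's queue; whichever thread is free takes the next message.
for (int i = 0; i < 5; i++) {
    pool2.execute(() -> {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                String msg = pool2Inbox.take();    // blocks until a pool-1 thread sends something
                System.out.println("processing " + msg);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });
}
// A pool-1 thread "sends to the first free pool-2 thread" simply by enqueuing:
pool1.execute(() -> {
    try {
        pool2Inbox.put("message from pool 1");     // blocks only if pool 2's queue is full
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});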
A message broker could also work, but it is probably overkill. You most likely don't need the reliability / persistence / distribution of a full-blown message broker.
How do I make threads from first pool communicate to the first free
thread from second pool?
I am not sure if you have any other specific needs, but if both pools are local and you simply want a typical producer-consumer pattern, where N threads (in one pool) act as producers and M threads (in another pool) act as consumers, and you don't care which thread of the second pool processes a message, I would go with a BlockingQueue implementation.
You take an instance of BlockingQueue (such as ArrayBlockingQueue, LinkedBlockingQueue, or PriorityBlockingQueue; there are a few more implementations in the java.util.concurrent package) and share this instance among the actual pool threads, with the restriction that take() is called only by consumer threads, but by any consumer thread.
How can I implement it in java?
You create your pools like below:
ExecutorService pool_1 = Executors.newFixedThreadPool(4);
ExecutorService pool_2 = Executors.newFixedThreadPool(5); // the question's second pool has 5 threads
Then you submit to these pools tasks which share a blocking queue. The tasks can be created like below; it's just pseudo code.
public class Pool1Runnable implements Runnable {
    private final BlockingQueue<String> queue;
    public Pool1Runnable(BlockingQueue<String> queue) {
        this.queue = queue;
    }
    @Override
    public void run() {
        System.out.println("Pool1Runnable");
        // a real producer would queue.put(...) its messages here
    }
}
Now you write the task implementations for pool 2 and make sure that their run() implementation uses take() on the queue.
You create the pool instances and the task instances, separate ones for producers and consumers (provide a single queue instance to all of them so it acts as the communication channel), and then you execute these tasks on the pools.
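For the pool-2 (consumer) side, the counterpart could look like the sketch below; the class name and the String message type are just illustrative, not from the original code:
public class Pool2Runnable implements Runnable {
    private final BlockingQueue<String> queue;
    public Pool2Runnable(BlockingQueue<String> queue) {
        this.queue = queue;
    }
    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                String msg = queue.take();   // blocks until a producer put()s a message
                System.out.println("Pool2Runnable processed: " + msg);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag and exit
        }
    }
}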
Hope it helps!
The most straightforward way, as indicated by others, is to have a BlockingQueue between the pools. If I'm not mistaken, your problem is the same as having multiple producers and multiple consumers sending and processing messages, respectively.
Here is one implementation which you can build on. There are a few parameters for which comments have been added; you can tweak them based on your problem scenario. Basically, you have 2 pools and one more pool to invoke the producer and consumer in parallel.
public class MultiProducerConsumer {
private static final int MAX_PRODUCERS = 4;
private static final int MAX_CONSUMERS = 5;
private ExecutorService producerPool = new ThreadPoolExecutor(2, MAX_PRODUCERS, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
private ExecutorService consumerPool = new ThreadPoolExecutor(2, MAX_CONSUMERS, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
//ThreadPool for holding the main threads for consumer and producer
private ExecutorService mainPool = new ThreadPoolExecutor(2, 2, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
/**
* Indicates the stopping condition for the consumer, without this it has no idea when to stop
*/
private AtomicBoolean readerComplete = new AtomicBoolean(false);
/**
* This is the queue for passing message from producer to consumer.
* Keep queue size depending on how slow is your consumer relative to producer, or base it on resource constraints
*/
private BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);
public static void main(String[] args) throws InterruptedException {
long startTime = System.currentTimeMillis();
MultiProducerConsumer multiProducerConsumer = new MultiProducerConsumer();
multiProducerConsumer.process();
System.out.println("Time taken in seconds - " + (System.currentTimeMillis() - startTime)/1000f);
}
private void process() throws InterruptedException {
mainPool.execute(this::consume);
mainPool.execute(this::produce);
Thread.sleep(10); // allow the pool to get initiated
mainPool.shutdown();
mainPool.awaitTermination(5, TimeUnit.SECONDS);
}
private void consume() {
try {
while (!readerComplete.get()) { // keep consuming while the producer is still running
consumeAndExecute();
}
while (!queue.isEmpty()) { //process any residue tasks
consumeAndExecute();
}
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
try {
consumerPool.shutdown();
consumerPool.awaitTermination(5, TimeUnit.SECONDS);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
private void consumeAndExecute() throws InterruptedException {
if (!queue.isEmpty()) {
String msg = queue.take(); // take() would block on an empty queue; the isEmpty() check above avoids that here
consumerPool.execute(() -> {
System.out.println("c-" + Thread.currentThread().getName() + "-" + msg);
});
}
}
private void produce() {
try {
for (int i = 0; i < MAX_PRODUCERS; i++) {
producerPool.execute(() -> {
try {
String random = getRandomNumber() + "";
queue.put(random);
System.out.println("p-" + Thread.currentThread().getName() + "-" + random);
} catch (InterruptedException e) {
e.printStackTrace();
}
});
}
} finally {
try {
Thread.sleep(10); //allow pool to get initiated
producerPool.shutdown();
producerPool.awaitTermination(5, TimeUnit.SECONDS);
} catch (InterruptedException e) {
e.printStackTrace();
}
readerComplete.set(true); //mark producer as done, so that consumer can exit
}
}
private int getRandomNumber() {
return (int) (Math.random() * 50 + 1);
}
}
Here is the output:
p-pool-1-thread-2-43
p-pool-1-thread-2-32
p-pool-1-thread-2-12
c-pool-2-thread-1-43
c-pool-2-thread-1-12
c-pool-2-thread-2-32
p-pool-1-thread-1-3
c-pool-2-thread-1-3
Time taken in seconds - 0.1
Related
I have around 60 sockets and 20 threads, and I want to make sure each thread works on a different socket every time, so I don't want to share the same socket between two threads at all.
In my SocketManager class, I have a background thread which runs every 60 seconds and calls the updateLiveSockets() method. In updateLiveSockets(), I iterate over all the sockets I have and then start pinging them one by one by calling the send method of the SendToQueue class, and based on the response I mark them as live or dead. In updateLiveSockets(), I always need to iterate over all the sockets and ping them to check whether they are live or dead.
Now all the reader threads will call the getNextSocket() method of the SocketManager class concurrently to get the next live available socket to send the business message on. So there are two types of messages which I am sending on a socket:
One is a ping message on a socket. This is only sent from the timer thread calling the updateLiveSockets() method in the SocketManager class.
The other is a business message on a socket. This is done in the SendToQueue class.
So if the pinger thread is pinging a socket to check whether it is live or not, then no business thread should use that socket. Similarly, if a business thread is using a socket to send data, then the pinger thread should not ping that socket. This applies to all the sockets. But I need to make sure that in the updateLiveSockets method we ping all the available sockets whenever my background thread runs, so that we can figure out which sockets are live or dead.
Below is my SocketManager class:
public class SocketManager {
private static final Random random = new Random();
private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
private final Map<Datacenters, List<SocketHolder>> liveSocketsByDatacenter =
new ConcurrentHashMap<>();
private final ZContext ctx = new ZContext();
// ...
private SocketManager() {
connectToZMQSockets();
scheduler.scheduleAtFixedRate(new Runnable() {
public void run() {
updateLiveSockets();
}
}, 60, 60, TimeUnit.SECONDS);
}
// during startup, making a connection and populate once
private void connectToZMQSockets() {
Map<Datacenters, List<String>> socketsByDatacenter = Utils.SERVERS;
for (Map.Entry<Datacenters, List<String>> entry : socketsByDatacenter.entrySet()) {
List<SocketHolder> addedColoSockets = connect(entry.getValue(), ZMQ.PUSH);
liveSocketsByDatacenter.put(entry.getKey(), addedColoSockets);
}
}
private List<SocketHolder> connect(List<String> paddes, int socketType) {
List<SocketHolder> socketList = new ArrayList<>();
// ....
return socketList;
}
// this method will be called by multiple threads concurrently to get the next live socket
// is there any concurrency or thread safety issue or race condition here?
public Optional<SocketHolder> getNextSocket() {
for (Datacenters dc : Datacenters.getOrderedDatacenters()) {
Optional<SocketHolder> liveSocket = getLiveSocket(liveSocketsByDatacenter.get(dc));
if (liveSocket.isPresent()) {
return liveSocket;
}
}
return Optional.absent();
}
private Optional<SocketHolder> getLiveSocket(final List<SocketHolder> listOfEndPoints) {
if (!listOfEndPoints.isEmpty()) {
// The list of live sockets
List<SocketHolder> liveOnly = new ArrayList<>(listOfEndPoints.size());
for (SocketHolder obj : listOfEndPoints) {
if (obj.isLive()) {
liveOnly.add(obj);
}
}
if (!liveOnly.isEmpty()) {
// The list is not empty, so pick a random element and return it
return Optional.of(liveOnly.get(random.nextInt(liveOnly.size()))); // just pick one
}
}
return Optional.absent();
}
// runs every 60 seconds to ping all the available socket to make sure whether they are alive or not
private void updateLiveSockets() {
Map<Datacenters, List<String>> socketsByDatacenter = Utils.SERVERS;
for (Map.Entry<Datacenters, List<String>> entry : socketsByDatacenter.entrySet()) {
List<SocketHolder> liveSockets = liveSocketsByDatacenter.get(entry.getKey());
List<SocketHolder> liveUpdatedSockets = new ArrayList<>();
for (SocketHolder liveSocket : liveSockets) {
Socket socket = liveSocket.getSocket();
String endpoint = liveSocket.getEndpoint();
Map<byte[], byte[]> holder = populateMap();
Message message = new Message(holder, Partition.COMMAND);
// pinging to see whether a socket is live or not
boolean isLive = SendToQueue.getInstance().send(message.getAddress(), message.getEncodedRecords(), socket);
SocketHolder zmq = new SocketHolder(socket, liveSocket.getContext(), endpoint, isLive);
liveUpdatedSockets.add(zmq);
}
liveSocketsByDatacenter.put(entry.getKey(), Collections.unmodifiableList(liveUpdatedSockets));
}
}
}
And here is my SendToQueue class:
// this method will be called by multiple reader threads (around 20) concurrently to send the data
public boolean sendAsync(final long address, final byte[] encodedRecords) {
PendingMessage m = new PendingMessage(address, encodedRecords, true);
cache.put(address, m);
return doSendAsync(m);
}
private boolean doSendAsync(final PendingMessage pendingMessage) {
Optional<SocketHolder> liveSocket = SocketManager.getInstance().getNextSocket();
if (!liveSocket.isPresent()) {
// log error
return false;
}
ZMsg msg = new ZMsg();
msg.add(pendingMessage.getEncodedRecords());
try {
// send data on a socket LINE A
return msg.send(liveSocket.get().getSocket());
} finally {
msg.destroy();
}
}
public boolean send(final long address, final byte[] encodedRecords, final Socket socket) {
PendingMessage m = new PendingMessage(address, encodedRecords, socket, false);
cache.put(address, m);
try {
if (doSendAsync(m, socket)) {
return m.waitForAck();
}
return false;
} finally {
cache.invalidate(address);
}
}
Problem Statement
Now as you can see above, I am sharing the same socket between two threads. It seems getNextSocket() in the SocketManager class could return a 0MQ socket to Thread A. Concurrently, the timer thread may access the same 0MQ socket to ping it. In this case Thread A and the timer thread are mutating the same 0MQ socket, which can lead to problems. So I am trying to find a way to prevent different threads from sending data to the same socket at the same time and mucking up my data.
One solution I can think of is synchronizing on a socket while sending the data, but if many threads use the same socket, resources aren't well utilized. Moreover, if msg.send(socket); blocks (technically it shouldn't), all threads waiting for this socket are blocked. So I guess there might be a better way to ensure that each thread uses a different live socket at any given time, instead of synchronizing on a particular socket.
So I am trying to find a way so that I can prevent different threads from sending data to the same socket at the same time and mucking up my data.
There are certainly a number of different ways to do this. For me this seems like a BlockingQueue is the right thing to use. The business threads would take a socket from the queue and would be guaranteed that no one else would be using that socket.
private final BlockingQueue<SocketHolder> socketHolderQueue = new LinkedBlockingQueue<>();
...
public Optional<SocketHolder> getNextSocket() {
    SocketHolder holder = socketHolderQueue.poll();
    return Optional.fromNullable(holder); // Guava Optional, matching the question's code
}
...
public void finishedWithSocket(SocketHolder holder) {
    socketHolderQueue.offer(holder); // offer() never blocks on an unbounded LinkedBlockingQueue
}
I think that synchronizing on the socket is not a good idea for the reasons that you mention – the ping thread will be blocking the business thread.
There are a number of ways of handling the ping thread logic. I would store your Socket with a last use time and then your ping thread could every so often take each of the sockets from the same BlockingQueue, test it, and put each back onto the end of the queue after testing.
public void testSockets() throws InterruptedException {
    // run this once for as many sockets as are currently in the queue
    int numTests = socketHolderQueue.size();
    for (int i = 0; i < numTests; i++) {
        SocketHolder holder = socketHolderQueue.poll();
        if (holder == null) {
            break;
        }
        if (socketIsOk(holder)) {
            socketHolderQueue.put(holder);
        } else {
            // close it here or something
        }
    }
}
You could also have the getNextSocket() code that dequeues sockets from the queue check the timer and put stale ones on a test queue for the ping thread to use, and then take the next one from the queue. The business threads would never be using the same socket at the same time as the ping thread.
Depending on when you want to test the sockets, you can also reset the timer after the business thread returns it to the queue so the ping thread would test the socket after X seconds of no use.
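A rough sketch of that routing idea; getLastUsedMillis(), PING_INTERVAL_MS, and pingQueue are hypothetical additions, and Optional is the Guava class used in the question:
public Optional<SocketHolder> getNextSocket() throws InterruptedException {
    while (true) {
        SocketHolder holder = socketHolderQueue.poll();
        if (holder == null) {
            return Optional.absent();              // nothing free right now
        }
        if (System.currentTimeMillis() - holder.getLastUsedMillis() > PING_INTERVAL_MS) {
            pingQueue.put(holder);                 // stale: hand it to the ping thread instead
        } else {
            return Optional.of(holder);
        }
    }
}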
It looks like you should consider using the try-with-resources feature here. You have the SocketHolder or Optional class implement the AutoCloseable interface. For instance, let us assume that Optional implements this interface. The Optional close method will then add the instance back to the container. I created a simple example that shows what I mean. It is not complete, but it gives you an idea of how to implement this in your code.
public class ObjectManager implements AutoCloseable {
private static class ObjectManagerFactory {
private static ObjectManager objMgr = new ObjectManager();
}
private ObjectManager() {}
public static ObjectManager getInstance() { return ObjectManagerFactory.objMgr; }
private static final int SIZE = 10;
private static BlockingQueue<AutoCloseable> objects = new LinkedBlockingQueue<AutoCloseable>();
private static ScheduledExecutorService sch;
static {
for(int cnt = 0 ; cnt < SIZE ; cnt++) {
objects.add(new AutoCloseable() {
@Override
public void close() throws Exception {
System.out.println(Thread.currentThread() + " - Adding object back to pool:" + this + " size: " + objects.size());
objects.put(this);
System.out.println(Thread.currentThread() + " - Added object back to pool:" + this);
}
});
}
sch = Executors.newSingleThreadScheduledExecutor();
sch.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
updateObjects();
}
}, 10, 10, TimeUnit.MICROSECONDS);
}
static void updateObjects() {
for(int cnt = 0 ; ! objects.isEmpty() && cnt < SIZE ; cnt++ ) {
try(AutoCloseable object = objects.take()) {
System.out.println(Thread.currentThread() + " - updateObjects - updated object: " + object + " size: " + objects.size());
} catch (Throwable t) {
t.printStackTrace();
}
}
}
public AutoCloseable getNext() {
try {
return objects.take();
} catch (InterruptedException e) {
e.printStackTrace();
return null;
}
}
public static void main(String[] args) {
try (ObjectManager mgr = ObjectManager.getInstance()) {
for (int cnt = 0; cnt < 5; cnt++) {
try (AutoCloseable o = mgr.getNext()) {
System.out.println(Thread.currentThread() + " - Working with " + o);
Thread.sleep(1000);
} catch (Throwable t) {
t.printStackTrace();
}
}
} catch (Throwable tt) {
tt.printStackTrace();
}
}
@Override
public void close() throws Exception {
ObjectManager.sch.shutdownNow();
}
}
I will make some points here. In the getNextSocket method, getOrderedDatacenters() will always return the same ordered list, so you will always pick from the same datacenters from start to end (that is not a problem in itself).
How do you guarantee that two threads won't get the same liveSocket from getNextSocket?
What you are saying here it is true:
Concurrently, the timer thread may access the same 0MQ socket to ping
it.
I think the main problem here is that you don't distinguish between free sockets and reserved sockets.
One option, as you said, is to synchronize on each socket. Another option is to keep a list of reserved sockets, and when you want to get the next socket or to update sockets, pick only from the free sockets. You don't want to update a socket which is already reserved.
You can also take a look here to see if it fits your needs.
There's a concept in operating systems engineering called the critical section. A critical section occurs when two or more concurrently executing processes share data: no process should modify or even read that shared data while another process is accessing it. So, as a process enters the critical section, it should notify all other concurrently executing processes that it is currently inside it, and all the other processes should block, waiting to enter the critical section. You might ask who decides which process enters next; that is another problem, called process scheduling, and the operating system handles it for you.
So the best solution for you is a semaphore, where the value of the semaphore is the number of sockets. In your case I think you have one socket, so you would use a binary semaphore initialized with a value of 1. Your code should then be divided into four main sections: critical section entry, the critical section itself, critical section exit, and the remainder section.
Critical section entry: a process enters the critical section and blocks all other processes. The semaphore allows one process (thread) to enter the critical section (use a socket), and the semaphore value is decremented (to zero).
The critical section: the code the process runs while it holds the socket.
Critical section exit: the process releases the critical section so another process can enter. The semaphore value is incremented (back to 1), allowing another process in.
Remainder section: the rest of your code, excluding the previous three sections.
Now all you need is to open any Java tutorial about semaphores to see how to apply one in Java; it's really easy. A minimal sketch follows.
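Using java.util.concurrent.Semaphore, it could look like this (the field and method names are illustrative, not from your code):
private final Semaphore socketPermit = new Semaphore(1); // one permit per socket; a single socket gives a binary semaphore
public void sendOnSocket(Runnable criticalSection) throws InterruptedException {
    socketPermit.acquire();        // critical section entry: blocks while another thread holds the permit
    try {
        criticalSection.run();     // the critical section: use the socket
    } finally {
        socketPermit.release();    // critical section exit: let the next waiting thread in
    }
}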
Mouhammed Elshaaer is right, but in addition you can also use any concurrent collection, for example a ConcurrentHashMap, where you can track that each thread works on a different socket (for example, key: socket hash code, value: thread hash code or something else).
To me it's a bit of a clumsy solution, but it can be used too.
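A small sketch of that bookkeeping, with a hypothetical tryClaim/release pair (Socket stands for whichever socket type you use); putIfAbsent makes the claim atomic:
private final ConcurrentHashMap<Socket, Thread> socketOwners = new ConcurrentHashMap<>();
public boolean tryClaim(Socket socket) {
    // atomic: only one thread can register itself as the owner of a given socket
    return socketOwners.putIfAbsent(socket, Thread.currentThread()) == null;
}
public void release(Socket socket) {
    // remove the entry only if the current thread is still the registered owner
    socketOwners.remove(socket, Thread.currentThread());
}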
For the problem of threads (Thread A and the timer thread) accessing the same socket, I would keep two socket lists for each datacenter:
list A: The sockets that are not in use
list B: The sockets that are in use
i.e.,
call a synchronized getNextSocket() to find a not-in-use socket in list A, remove it from list A, and add it to list B;
call a synchronized returnSocket(Socket) upon receiving the response/ACK for a sent message (either business or ping), to move the socket from list B back to list A. Put a try {} finally {} block around "sending the message" to make sure that the socket is put back into list A even if there is an exception.
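A minimal sketch of the two-list approach, reusing the question's SocketHolder and Guava Optional (the method bodies are illustrative):
private final List<SocketHolder> freeSockets = new ArrayList<>();   // list A: not in use
private final List<SocketHolder> inUseSockets = new ArrayList<>();  // list B: in use
public synchronized Optional<SocketHolder> getNextSocket() {
    if (freeSockets.isEmpty()) {
        return Optional.absent();
    }
    SocketHolder holder = freeSockets.remove(0);
    inUseSockets.add(holder);
    return Optional.of(holder);
}
public synchronized void returnSocket(SocketHolder holder) {
    inUseSockets.remove(holder);
    freeSockets.add(holder);   // available again for business or ping use
}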
I have a simple solution that may help you. I don't know whether in Java you can add a custom attribute to each socket; in Socket.io you can, so I will assume something similar here (I will delete this answer if not).
You add a boolean attribute called locked to each socket. When your thread picks a socket, it sets locked to true. Any other thread, before pinging THIS socket, checks whether locked is false; if it isn't, it calls getNextSocket.
So, in this stretch below...
...
for (SocketHolder liveSocket : liveSockets) {
Socket socket = liveSocket.getSocket();
...
You check whether the socket is locked or not. If it is, kill this thread, interrupt it, or move on to the next socket (I don't know what you call it in Java).
So the process is...
Thread gets an unlocked socket
Thread sets socket.locked to true
Thread pings the socket and does whatever work is needed
Thread sets socket.locked to false
Thread moves on to the next socket (a rough sketch of this follows)
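The sketch uses a hypothetical wrapper class and java.util.concurrent.atomic.AtomicBoolean, since Java sockets don't carry custom attributes:
class LockableSocket {
    final Socket socket;
    private final AtomicBoolean locked = new AtomicBoolean(false);
    LockableSocket(Socket socket) {
        this.socket = socket;
    }
    // steps 1-2: claim the socket only if it is currently unlocked
    boolean tryLock() {
        return locked.compareAndSet(false, true);
    }
    // step 4: release the socket for other threads
    void unlock() {
        locked.set(false);
    }
}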
Sorry for my bad English :)
My goal is to measure the performance of a Streaming Engine. It's basically a library to which I can send data packages. The idea is to generate data, put it into a queue, and let the Streaming Engine grab the data and process it.
I thought of implementing it like this: the Data Generator runs in a thread and generates data packages in an endless loop, with a certain Thread.sleep(X) at the end. When doing the tests, the idea is to minimize this Thread.sleep(X) to see whether it has an impact on the Streaming Engine's performance. The Data Generator writes the created packages into a queue, namely a ConcurrentLinkedQueue, which is also a singleton.
In another thread I instantiate the Streaming Engine, which continuously removes the packages from the queue by calling queue.remove(). This is done in an endless loop without any sleeping, because it should just run as fast as possible.
In a first attempt to implement this I ran into a problem. It seems as if the Data Generator is not able to put the packages into the queue as fast as it should; it is doing so too slowly. My suspicion is that the endless loop of the Streaming Engine thread is eating up all the resources and therefore slows down everything else.
I would be happy to hear how to approach this issue, or about other design patterns which could solve it elegantly.
The requirements are: two threads which basically run in parallel; one puts data into a queue, and the other reads/removes from the queue. I also want to measure the size of the queue regularly, in order to know whether the engine that is reading/removing from the queue is fast enough to process the generated packages.
You can use a BlockingQueue, for example an ArrayBlockingQueue. You can initialize these with a certain size, so the number of queued items will never exceed a certain number, as in this example:
// create queue, max size 100
final ArrayBlockingQueue<String> strings = new ArrayBlockingQueue<>(100);
final String stop = "STOP";
// start producing
Runnable producer = new Runnable() {
@Override
public void run() {
try {
for(int i = 0; i < 1000; i++) {
strings.put(Integer.toHexString(i));
}
strings.put(stop);
} catch(InterruptedException ignore) {
}
}
};
Thread producerThread = new Thread(producer);
producerThread.start();
// start monitoring
Runnable monitor = new Runnable() {
@Override
public void run() {
try {
while (true){
System.out.println("Queue size: " + strings.size());
Thread.sleep(5);
}
} catch(InterruptedException ignore) {
}
}
};
Thread monitorThread = new Thread(monitor);
monitorThread.start();
// start consuming
Runnable consumer = new Runnable() {
@Override
public void run() {
// infinite loop; returns when the stop marker is taken from the queue
try {
while(true) {
String value = strings.take();
if(value.equals(stop)){
return;
}
System.out.println(value);
}
} catch(InterruptedException ignore) {
}
}
};
Thread consumerThread = new Thread(consumer);
consumerThread.start();
// wait for producer and consumer to finish
producerThread.join();
consumerThread.join();
// interrupt the monitor
monitorThread.interrupt();
You could also have a third thread monitoring the size of the queue (as the monitor in the example does), to give you an idea of which thread is outpacing the other.
Also, you can use the timed put method and the timed or untimed offer and poll methods, which give you more control over what to do if the queue is full or empty. In the above example, execution blocks until there is space for the next element (on put) or until an element becomes available (on take).
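For example, the timed variants look like this (the timeouts are illustrative, and both calls throw InterruptedException, so use them where that is handled):
// returns false if the queue is still full after 100 ms instead of blocking forever
boolean queued = strings.offer("value", 100, TimeUnit.MILLISECONDS);
// returns null if the queue is still empty after 100 ms
String next = strings.poll(100, TimeUnit.MILLISECONDS);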
I'm new to this topic... I'm using a ThreadPoolExecutor created with Executors.newFixedThreadPool(10), and after the pool is full I'm starting to get a RejectedExecutionException.
Is there a way to "force" the executor to put the new task in a "wait" state instead of rejecting it, and to start it when the pool frees up?
Thanks
Issue regarding this
https://github.com/evilsocket/dsploit/issues/159
Line of code involved https://github.com/evilsocket/dsploit/blob/master/src/it/evilsocket/dsploit/net/NetworkDiscovery.java#L150
If you use Executors.newFixedThreadPool(10), it queues the tasks and they wait until a thread is ready.
This method is
public static ExecutorService newFixedThreadPool(int nThreads) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}
As you can see, the queue used is unbounded (which can be a problem in itself) but it means the queue will never fill and you will never get a rejection.
BTW: if you have CPU-bound tasks, an optimal number of threads is
int processors = Runtime.getRuntime().availableProcessors();
ExecutorService es = Executors.newFixedThreadPool(processors);
A test class which might illustrate the situation
public static void main(String... args) {
ExecutorService es = Executors.newFixedThreadPool(2);
for (int i = 0; i < 1000 * 1000; i++)
es.submit(new SleepOneSecond());
System.out.println("Queue length " + ((ThreadPoolExecutor) es).getQueue().size());
es.shutdown();
System.out.println("After shutdown");
try {
es.submit(new SleepOneSecond());
} catch (Exception e) {
e.printStackTrace(System.out);
}
}
static class SleepOneSecond implements Callable<Void> {
@Override
public Void call() throws Exception {
Thread.sleep(1000);
return null;
}
}
prints
Queue length 999998
After shutdown
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@e026161 rejected from java.util.concurrent.ThreadPoolExecutor@3e472e76[Shutting down, pool size = 2, active threads = 2, queued tasks = 999998, completed tasks = 0]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2013)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:816)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1337)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
at Main.main(Main.java:17)
It is very possible that a thread calls exit, which sets mStopped to true and shuts down the executor, but:
your running thread might be in the middle of the while (!mStopped) loop and tries to submit a task to the executor, which has been shut down by exit
the condition in the while returns true because the change made to mStopped is not visible (you don't use any form of synchronization around that flag).
I would suggest:
make mStopped volatile
handle the case where the executor is shut down while you are in the middle of the loop (for example by catching RejectedExecutionException, or probably better: shut down your executor after your while loop instead of shutting it down in your exit method); a sketch follows.
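Here is one way that shape could look (the field and method names are illustrative, not taken from the linked NetworkDiscovery code):
private volatile boolean mStopped = false;     // volatile: the write in exit() is visible to the loop
public void run() {
    while (!mStopped) {
        try {
            executor.execute(buildNextTask()); // buildNextTask() stands in for your real work
        } catch (RejectedExecutionException e) {
            break;                             // executor was shut down concurrently; stop submitting
        }
    }
    executor.shutdown();                       // shut down after the loop rather than from exit()
}
public void exit() {
    mStopped = true;
}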
Building on earlier suggestions, you can use a blocking queue to construct a fixed size ThreadPoolExecutor. If you then supply your own RejectedExecutionHandler which adds tasks to the blocking queue, it will behave as described.
Here's an example of how you could construct such an executor:
int corePoolSize = 10;
int maximumPoolSize = 10;
int keepAliveTime = 0;
int maxWaitingTasks = 10;
ThreadPoolExecutor blockingThreadPoolExecutor = new ThreadPoolExecutor(
corePoolSize, maximumPoolSize,
keepAliveTime, TimeUnit.SECONDS,
new ArrayBlockingQueue<Runnable>(maxWaitingTasks),
new RejectedExecutionHandler() {
@Override
public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
try {
executor.getQueue().put(r);
} catch (InterruptedException e) {
throw new RuntimeException("Interrupted while submitting task", e);
}
}
});
If I understand correctly, you have your thread pool created with a fixed number of threads, but you might have more tasks to submit to the pool. I would calculate the keepAliveTime based on the request and set it dynamically. That way you would not get a RejectedExecutionException.
For example
long keepAliveTime = ((applications.size() * 60) / FIXED_NUM_OF_THREADS) * 1000;
threadPoolExecutor.setKeepAliveTime(keepAliveTime, TimeUnit.MILLISECONDS);
where applications is a collection of tasks that can be different every time.
That should solve your problem if you know the average time the tasks take.
I'm writing an application that has 5 threads that get some information from the web simultaneously and fill 5 different fields in a buffer class.
I need to validate the buffer data and store it in a database when all the threads have finished their jobs.
How can I do this (get alerted when all the threads have finished their work)?
The approach I take is to use an ExecutorService to manage pools of threads.
ExecutorService es = Executors.newCachedThreadPool();
for(int i=0;i<5;i++)
es.execute(new Runnable() { /* your task */ });
es.shutdown();
boolean finished = es.awaitTermination(1, TimeUnit.MINUTES);
// all tasks have finished or the time has been reached.
You can join to the threads. The join blocks until the thread completes.
for (Thread thread : threads) {
thread.join();
}
Note that join throws an InterruptedException. You'll have to decide what to do if that happens (e.g. try to cancel the other threads to prevent unnecessary work being done).
Have a look at various solutions.
The join() API was introduced in early versions of Java. Some good alternatives have been available in the java.util.concurrent package since the JDK 1.5 release.
ExecutorService#invokeAll()
Executes the given tasks, returning a list of Futures holding their status and results when everything is completed.
Refer to this related SE question for code example:
How to use invokeAll() to let all thread pool do their task?
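A minimal sketch of the invokeAll() approach (the pool size and dummy Callables are illustrative; invokeAll throws InterruptedException, so call it where that is handled):
ExecutorService pool = Executors.newFixedThreadPool(5);
List<Callable<String>> tasks = new ArrayList<>();
for (int i = 0; i < 5; i++) {
    final int id = i;
    tasks.add(() -> "field " + id);                        // each task fetches/produces one field
}
List<Future<String>> results = pool.invokeAll(tasks);      // blocks until every task has completed
pool.shutdown();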
CountDownLatch
A synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes.
A CountDownLatch is initialized with a given count. The await methods block until the current count reaches zero due to invocations of the countDown() method, after which all waiting threads are released and any subsequent invocations of await return immediately. This is a one-shot phenomenon -- the count cannot be reset. If you need a version that resets the count, consider using a CyclicBarrier.
Refer to this question for usage of CountDownLatch
How to wait for a thread that spawns it's own thread?
ForkJoinPool or newWorkStealingPool() in Executors
Iterate through all Future objects created after submitting to the ExecutorService (a sketch follows this list)
Wait/block the main thread until the other threads complete their work
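A small sketch of the "iterate through all Future objects" idea (the tasks collection is a placeholder; get() also throws checked exceptions that must be handled):
ExecutorService executor = Executors.newFixedThreadPool(5);
List<Future<?>> futures = new ArrayList<>();
for (Runnable task : tasks) {
    futures.add(executor.submit(task));
}
for (Future<?> future : futures) {
    future.get();          // blocks until that task has finished
}
executor.shutdown();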
As @Ravindra babu said, it can be achieved in various ways; here are some examples.
java.lang.Thread.join() Since:1.0
public static void joiningThreads() throws InterruptedException {
Thread t1 = new Thread( new LatchTask(1, null), "T1" );
Thread t2 = new Thread( new LatchTask(7, null), "T2" );
Thread t3 = new Thread( new LatchTask(5, null), "T3" );
Thread t4 = new Thread( new LatchTask(2, null), "T4" );
// Start all the threads
t1.start();
t2.start();
t3.start();
t4.start();
// Wait till all threads completes
t1.join();
t2.join();
t3.join();
t4.join();
}
java.util.concurrent.CountDownLatch Since:1.5
.countDown() « Decrements the count of the latch group.
.await() « The await methods block until the current count reaches zero.
If you create latchGroupCount = 4, then countDown() must be called 4 times to bring the count to 0, so that await() will release the blocked threads.
public static void latchThreads() throws InterruptedException {
int latchGroupCount = 4;
CountDownLatch latch = new CountDownLatch(latchGroupCount);
Thread t1 = new Thread( new LatchTask(1, latch), "T1" );
Thread t2 = new Thread( new LatchTask(7, latch), "T2" );
Thread t3 = new Thread( new LatchTask(5, latch), "T3" );
Thread t4 = new Thread( new LatchTask(2, latch), "T4" );
t1.start();
t2.start();
t3.start();
t4.start();
//latch.countDown();
latch.await(); // block until latchGroupCount is 0.
}
Example code of the threaded class LatchTask. To test the approach, use joiningThreads() and latchThreads() from the main method.
class LatchTask extends Thread {
CountDownLatch latch;
int iterations = 10;
public LatchTask(int iterations, CountDownLatch latch) {
this.iterations = iterations;
this.latch = latch;
}
@Override
public void run() {
String threadName = Thread.currentThread().getName();
System.out.println(threadName + " : Started Task...");
for (int i = 0; i < iterations; i++) {
System.out.println(threadName + " : " + i);
MainThread_Wait_TillWorkerThreadsComplete.sleep(1);
}
System.out.println(threadName + " : Completed Task");
// countDown() « Decrements the count of the latch group.
if(latch != null)
latch.countDown();
}
}
CyclicBarrier: A synchronization aid that allows a set of threads to all wait for each other to reach a common barrier point. CyclicBarriers are useful in programs involving a fixed-size party of threads that must occasionally wait for each other. The barrier is called cyclic because it can be re-used after the waiting threads are released.
CyclicBarrier barrier = new CyclicBarrier(3);
barrier.await();
For an example, refer to the Concurrent_ParallelNotifyies class.
Executor framework: we can use an ExecutorService to create a thread pool and track the progress of the asynchronous tasks with Futures.
submit(Runnable) and submit(Callable) return a Future object. Using future.get() we can block the main thread till the worker threads complete their work.
invokeAll(...) - returns a list of Future objects via which you can obtain the results of the execution of each Callable.
Find examples of using the Runnable and Callable interfaces with the Executor framework.
See also
Find out thread is still alive?
Apart from Thread.join() suggested by others, Java 5 introduced the executor framework. There you don't work with Thread objects. Instead, you submit your Callable or Runnable objects to an executor. There's a special executor that is meant to execute multiple tasks and return their results out of order: the ExecutorCompletionService.
// the completion service wraps an existing executor (pool size is illustrative)
ExecutorCompletionService<Object> executor =
        new ExecutorCompletionService<>(Executors.newFixedThreadPool(5));
for (Runnable yourRunnable : yourRunnables) {
    executor.submit(Executors.callable(yourRunnable));
}
Then you can call take() once for each submitted task; each call blocks until some task completes, and once every take() has returned, all of them are completed.
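For example, if taskCount tasks were submitted (taskCount is just a placeholder for however many you submitted), the main thread can wait like this; note that take() throws InterruptedException:
for (int i = 0; i < taskCount; i++) {
    executor.take();   // blocks until the next task completes, in completion order
}
// at this point every submitted task has finished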
Another thing that may be relevant, depending on your scenario is CyclicBarrier.
A synchronization aid that allows a set of threads to all wait for each other to reach a common barrier point. CyclicBarriers are useful in programs involving a fixed sized party of threads that must occasionally wait for each other. The barrier is called cyclic because it can be re-used after the waiting threads are released.
Another possibility is the CountDownLatch object, which is useful for simple situations : since you know in advance the number of threads, you initialize it with the relevant count, and pass the reference of the object to each thread.
Upon completion of its task, each thread calls CountDownLatch.countDown() which decrements the internal counter. The main thread, after starting all others, should do the CountDownLatch.await() blocking call. It will be released as soon as the internal counter has reached 0.
Pay attention that with this object, an InterruptedException can be thrown as well.
You do
for (Thread t : new Thread[] { th1, th2, th3, th4, th5 })
t.join(); // join() throws InterruptedException; handle or declare it
After this for loop, you can be sure all threads have finished their jobs.
Store the Thread-objects into some collection (like a List or a Set), then loop through the collection once the threads are started and call join() on the Threads.
You can use the Thread#join method for this purpose.
Although not relevant to OP's problem, if you are interested in synchronization (more precisely, a rendez-vous) with exactly one thread, you may use an Exchanger
In my case, I needed to pause the parent thread until the child thread did something, e.g. completed its initialization. A CountDownLatch also works well.
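A tiny, self-contained sketch of such a rendez-vous with an Exchanger (the class and message names are made up):
import java.util.concurrent.Exchanger;
public class RendezvousDemo {
    public static void main(String[] args) throws InterruptedException {
        Exchanger<String> exchanger = new Exchanger<>();
        Thread child = new Thread(() -> {
            try {
                // ... child initialization work would go here ...
                exchanger.exchange("ready");          // rendez-vous point: waits for the parent
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        child.start();
        String signal = exchanger.exchange(null);     // parent blocks until the child arrives
        System.out.println("Child reported: " + signal);
    }
}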
I created a small helper method to wait for a few Threads to finish:
public static void waitForThreadsToFinish(Thread... threads) {
try {
for (Thread thread : threads) {
thread.join();
}
}
catch (InterruptedException e) {
e.printStackTrace();
}
}
An executor service can be used to manage multiple threads including status and completion. See http://programmingexamples.wikidot.com/executorservice
Try this; it will work (assuming the threads array has been populated with your started threads).
Thread[] threads = new Thread[10];
List<Thread> allThreads = new ArrayList<Thread>();
for(Thread thread : threads){
if(null != thread){
if(thread.isAlive()){
allThreads.add(thread);
}
}
}
while(!allThreads.isEmpty()){
Iterator<Thread> ite = allThreads.iterator();
while(ite.hasNext()){
Thread thread = ite.next();
if(!thread.isAlive()){
ite.remove();
}
}
}
I had a similar problem and ended up using Java 8 parallelStream.
requestList.parallelStream().forEach(req -> makeRequest(req));
It's super simple and readable.
Behind the scenes it uses the JVM's default fork-join pool, which means that it will wait for all the threads to finish before continuing. In my case it was a neat solution, because it was the only parallelStream in my application. If you have more than one parallelStream running simultaneously, please read the link below.
More information about parallel streams here.
The existing answers said you could join() each thread.
But there are several ways to get the thread array / list:
Add the Thread into a list on creation.
Use ThreadGroup to manage the threads.
The following code uses the ThreadGroup approach. It creates a group first; then, when creating each thread, it specifies the group in the constructor; later it can get the thread array via ThreadGroup.enumerate().
Code
SyncBlockLearn.java
import org.testng.Assert;
import org.testng.annotations.Test;
/**
* synchronized block - learn,
*
* @author eric
* @date Apr 20, 2015 1:37:11 PM
*/
public class SyncBlockLearn {
private static final int TD_COUNT = 5; // thread count
private static final int ROUND_PER_THREAD = 100; // round for each thread,
private static final long INC_DELAY = 10; // delay of each increase,
// sync block test,
@Test
public void syncBlockTest() throws InterruptedException {
Counter ct = new Counter();
ThreadGroup tg = new ThreadGroup("runner");
for (int i = 0; i < TD_COUNT; i++) {
new Thread(tg, ct, "t-" + i).start();
}
Thread[] tArr = new Thread[TD_COUNT];
tg.enumerate(tArr); // get threads,
// wait all runner to finish,
for (Thread t : tArr) {
t.join();
}
System.out.printf("\nfinal count: %d\n", ct.getCount());
Assert.assertEquals(ct.getCount(), TD_COUNT * ROUND_PER_THREAD);
}
static class Counter implements Runnable {
private final Object lkOn = new Object(); // the object to lock on,
private int count = 0;
@Override
public void run() {
System.out.printf("[%s] begin\n", Thread.currentThread().getName());
for (int i = 0; i < ROUND_PER_THREAD; i++) {
synchronized (lkOn) {
System.out.printf("[%s] [%d] inc to: %d\n", Thread.currentThread().getName(), i, ++count);
}
try {
Thread.sleep(INC_DELAY); // wait a while,
} catch (InterruptedException e) {
e.printStackTrace();
}
}
System.out.printf("[%s] end\n", Thread.currentThread().getName());
}
public int getCount() {
return count;
}
}
}
The main thread will wait for all threads in the group to finish.
I had a similar situation, where I had to wait till all child threads completed their execution; only then could I get the status result for each of them. Hence I needed to wait till all child threads completed.
Below is my code, where I did the multi-threading using an ExecutorService:
public static void main(String[] args) {
List<RunnerPojo> testList = ExcelObject.getTestStepsList();//.parallelStream().collect(Collectors.toList());
int threadCount = ConfigFileReader.getInstance().readConfig().getParallelThreadCount();
System.out.println("Thread count is : ========= " + threadCount); // 5
ExecutorService threadExecutor = new DriverScript().threadExecutor(testList, threadCount);
boolean isProcessCompleted = waitUntilCondition(() -> threadExecutor.isTerminated()); // Here i used waitUntil condition
if (isProcessCompleted) {
testList.forEach(x -> {
System.out.println("Test Name: " + x.getTestCaseId());
System.out.println("Test Status : " + x.getStatus());
System.out.println("======= Test Steps ===== ");
x.getTestStepsList().forEach(y -> {
System.out.println("Step Name: " + y.getDescription());
System.out.println("Test caseId : " + y.getTestCaseId());
System.out.println("Step Status: " + y.getResult());
System.out.println("\n ============ ==========");
});
});
}
The method below distributes the list for parallel processing:
// This method will split my list and run in a parallel process with mutliple threads
private ExecutorService threadExecutor(List<RunnerPojo> testList, int threadSize) {
ExecutorService exec = Executors.newFixedThreadPool(threadSize);
testList.forEach(tests -> {
exec.submit(() -> {
driverScript(tests);
});
});
exec.shutdown();
return exec;
}
This is my wait-until method: here you can wait till your condition is satisfied within a do-while loop. In my case I waited for some maximum timeout.
It will keep checking until threadExecutor.isTerminated() is true, with a polling period of 5 seconds.
static boolean waitUntilCondition(Supplier<Boolean> function) {
Double timer = 0.0;
Double maxTimeOut = 20.0;
boolean isFound;
do {
isFound = function.get();
if (isFound) {
break;
} else {
try {
Thread.sleep(5000); // Sleeping for 5 sec (main thread will sleep for 5 sec)
} catch (InterruptedException e) {
e.printStackTrace();
}
timer++;
System.out.println("Waiting for condition to be true .. waited .." + timer * 5 + " sec.");
}
} while (timer < maxTimeOut + 1.0);
return isFound;
}
Use this in your main thread: while(!executor.isTerminated());
Put this line of code after submitting all the tasks to the executor service. The main thread will then only continue after all the threads started by the executor have finished. Make sure to call executor.shutdown(); before the above loop.
The JavaDoc for ThreadPoolExecutor is unclear on whether it is acceptable to add tasks directly to the BlockingQueue backing the executor. The docs say calling executor.getQueue() is "intended primarily for debugging and monitoring".
I'm constructing a ThreadPoolExecutor with my own BlockingQueue. I retain a reference to the queue so I can add tasks to it directly. The same queue is returned by getQueue() so I assume the admonition in getQueue() applies to a reference to the backing queue acquired through my means.
Example
General pattern of the code is:
int n = ...; // number of threads
queue = new ArrayBlockingQueue<Runnable>(queueSize);
executor = new ThreadPoolExecutor(n, n, 1, TimeUnit.HOURS, queue);
executor.prestartAllCoreThreads();
// ...
while (...) {
Runnable job = ...;
queue.offer(job, 1, TimeUnit.HOURS);
}
while (jobsOutstanding.get() != 0) {
try {
Thread.sleep(...);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
executor.shutdownNow();
queue.offer() vs executor.execute()
As I understand it, the typical use is to add tasks via executor.execute(). The approach in my example above has the benefit of blocking on the queue whereas execute() fails immediately if the queue is full and rejects my task. I also like that submitting jobs interacts with a blocking queue; this feels more "pure" producer-consumer to me.
An implication of adding tasks to the queue directly: I must call prestartAllCoreThreads() otherwise no worker threads are running. Assuming no other interactions with the executor, nothing will be monitoring the queue (examination of ThreadPoolExecutor source confirms this). This also implies for direct enqueuing that the ThreadPoolExecutor must additionally be configured for > 0 core threads and mustn't be configured to allow core threads to timeout.
tl;dr
Given a ThreadPoolExecutor configured as follows:
core threads > 0
core threads aren't allowed to timeout
core threads are prestarted
hold a reference to the BlockingQueue backing the executor
Is it acceptable to add tasks directly to the queue instead of calling executor.execute()?
Related
This question ( producer/consumer work queues ) is similar, but doesn't specifically cover adding to the queue directly.
One trick is to implement a custom subclass of ArrayBlockingQueue and to override the offer() method to call your blocking version; then you can still use the normal code path.
queue = new ArrayBlockingQueue<Runnable>(queueSize) {
@Override public boolean offer(Runnable runnable) {
try {
return offer(runnable, 1, TimeUnit.HOURS);
} catch(InterruptedException e) {
// return interrupt status to caller
Thread.currentThread().interrupt();
}
return false;
}
};
(As you can probably guess, I think calling offer directly on the queue as your normal code path is probably a bad idea.)
If it were me, I would prefer using Executor#execute() over Queue#offer(), simply because I'm using everything else from java.util.concurrent already.
Your question is a good one, and it piqued my interest, so I took a look at the source for ThreadPoolExecutor#execute():
public void execute(Runnable command) {
if (command == null)
throw new NullPointerException();
if (poolSize >= corePoolSize || !addIfUnderCorePoolSize(command)) {
if (runState == RUNNING && workQueue.offer(command)) {
if (runState != RUNNING || poolSize == 0)
ensureQueuedTaskHandled(command);
}
else if (!addIfUnderMaximumPoolSize(command))
reject(command); // is shutdown or saturated
}
}
We can see that execute itself calls offer() on the work queue, but not before doing some nice, tasty pool manipulations if necessary. For that reason, I'd think that it'd be advisable to use execute(); not using it may (although I don't know for certain) cause the pool to operate in a non-optimal way. However, I don't think that using offer() will break the executor - it looks like tasks are pulled off the queue using the following (also from ThreadPoolExecutor):
Runnable getTask() {
for (;;) {
try {
int state = runState;
if (state > SHUTDOWN)
return null;
Runnable r;
if (state == SHUTDOWN) // Help drain queue
r = workQueue.poll();
else if (poolSize > corePoolSize || allowCoreThreadTimeOut)
r = workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS);
else
r = workQueue.take();
if (r != null)
return r;
if (workerCanExit()) {
if (runState >= SHUTDOWN) // Wake up others
interruptIdleWorkers();
return null;
}
// Else retry
} catch (InterruptedException ie) {
// On interruption, re-check runState
}
}
}
This getTask() method is just called from within a loop, so if the executor's not shutting down, it'd block until a new task was given to the queue (regardless of where it came from).
Note: Even though I've posted code snippets from source here, we can't rely on them for a definitive answer - we should only be coding to the API. We don't know how the implementation of execute() will change over time.
One can actually configure behavior of the pool when the queue is full, by specifying a RejectedExecutionHandler at instantiation. ThreadPoolExecutor defines four policies as inner classes, including AbortPolicy, DiscardOldestPolicy, DiscardPolicy, as well as my personal favorite, CallerRunsPolicy, which runs the new job in the controlling thread.
For example:
ThreadPoolExecutor threadPool = new ThreadPoolExecutor(
nproc, // core size
nproc, // max size
60, // idle timeout
TimeUnit.SECONDS,
new ArrayBlockingQueue<Runnable>(4096, true), // Fairness = true guarantees FIFO
new ThreadPoolExecutor.CallerRunsPolicy() ); // If we have to reject a task, run it in the calling thread.
The behavior desired in the question can be obtained using something like:
public class BlockingPolicy implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            executor.getQueue().put(r); // self-contained, no external queue reference needed
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
At some point the queue must be accessed. The best place to do so is in a self-contained RejectedExecutionHandler, which saves any code duplication or potential bugs arising from direct manipulation of the queue at the scope of the pool object. Note that the handlers included in ThreadPoolExecutor themselves use getQueue().
It's a very important question if the queue you're using is a completely different implementation from the standard in-memory LinkedBlockingQueue or ArrayBlockingQueue.
For instance if you're implementing the producer-consumer pattern using several producers on different machines, and use a queuing mechanism based on a separate persistence subsystem (like Redis), then the question becomes relevant on its own, even if you don't want a blocking offer() like the OP.
So the given answer, that prestartAllCoreThreads() has to be called (or prestartCoreThread() enough times) for the worker threads to be available and running, is important enough to be stressed.
If required, we can also use a parking lot which separates main processing from rejected tasks:
final CountDownLatch taskCounter = new CountDownLatch(TASKCOUNT);
final List<Runnable> taskParking = new LinkedList<Runnable>();
BlockingQueue<Runnable> taskPool = new ArrayBlockingQueue<Runnable>(1);
RejectedExecutionHandler rejectionHandler = new RejectedExecutionHandler() {
@Override
public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
System.err.println(Thread.currentThread().getName() + " -->rejection reported - adding to parking lot " + r);
taskCounter.countDown();
taskParking.add(r);
}
};
ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(5, 10, 1000, TimeUnit.SECONDS, taskPool, rejectionHandler);
for(int i=0 ; i<TASKCOUNT; i++){
//main
threadPoolExecutor.submit(getRandomTask());
}
taskCounter.await(TASKCOUNT * 5 , TimeUnit.SECONDS);
System.out.println("Checking the parking lot..." + taskParking);
while(taskParking.size() > 0){
Runnable r = taskParking.remove(0);
System.out.println("Running from parking lot..." + r);
if(taskParking.size() > LIMIT){
waitForSometime(...);
}
threadPoolExecutor.submit(r);
}
threadPoolExecutor.shutdown();