We have a scheduled task that runs every 10 seconds and a thread pool with 3 threads that actually update a static common map. Every 10 seconds the scheduled action prints this map.
The problem is that I want the scheduler to stop printing after the 3 threads have finished with the map. But here is the key: I don't want to stop the scheduler instantly, I want it to print the final version of the map first and then finish.
public class myClass implements ThreadListener {
public static ArrayList<Pair<String, Integer>> wordOccurenceSet = new ArrayList<Pair<String, Integer>>();
int numberOfThreads = 0;
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
public void getAnswer(Collection<CharacterReader> characterReaders, Outputter outputter) {
ExecutorService executor = Executors.newFixedThreadPool(characterReaders.size());
OutputterWriteBatch scheduledThread = new OutputterWriteBatch(outputter,wordOccurenceSet);
scheduler.scheduleAtFixedRate(scheduledThread, 10, 10, TimeUnit.SECONDS);
for (CharacterReader characterReader : characterReaders) {
NotifyingRunnable runnable = new CharacterReaderTask(characterReader, wordOccurenceSet);
runnable.addListener(this);
executor.execute(runnable);
}
}
@Override
public void notifyRunnableComplete(Runnable runnable) {
numberOfThreads += 1;
if(numberOfThreads == 3 ){
//All threads finished... What can I do to terminate after one more run?
}
}
}
The Listener actually just gets notified when a thread finishes.
First of all, make access to your numberOfThreads counter synchronized. You don't want it to become corrupted when two Reader threads finish concurrently. It's a primitive int, so it may not be corruptible (I am not that proficient with the JVM), but the general rules of thread safety should be followed anyway.
// 1. let a currently running OutputterWriteBatch finish, but schedule no more runs
scheduler.shutdown();
// 2. will block and wait if OutputterWriteBatch was currently running
scheduler.awaitTermination(someReasonableTimeout, TimeUnit.SECONDS);
// 3. one more shot. The scheduler rejects new tasks after shutdown(),
// so invoke the Runnable directly instead of scheduling it.
scheduledThread.run();
// You could also call your outputting logic directly if it is published via a
// separate method, but I don't know the API, so I suppose only the Runnable is published.
But this shouldn't be called directly from notifyRunnableComplete, of course. The listener method is called from your Reader threads, so it would block the last of the 3 threads from finishing in a timely manner. Rather, make a notification object which some other thread will wait() on (preferably the one which executed getAnswer()), notify() it when numberOfThreads reaches 3, and put the above code after the wait().
Oh, and when wait() unblocks, you should double-check that numberOfThreads really is 3; if not, cycle back to wait(). Google "spurious wakeup" for an explanation of why this is needed.
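Putting those pieces together, a minimal sketch of the wait()/notify() handshake (the allDone object is a made-up name, someReasonableTimeout is the placeholder from above, and InterruptedException handling is omitted for brevity):

private final Object allDone = new Object();   // notification object
private int numberOfThreads = 0;               // only touched while holding allDone

@Override
public void notifyRunnableComplete(Runnable runnable) {
    synchronized (allDone) {
        numberOfThreads++;
        if (numberOfThreads == 3) {
            allDone.notify();                  // wake the thread that called getAnswer()
        }
    }
}

// back in the thread that called getAnswer():
synchronized (allDone) {
    while (numberOfThreads < 3) {              // re-check the condition: guards against spurious wakeups
        allDone.wait();
    }
}
scheduler.shutdown();
scheduler.awaitTermination(someReasonableTimeout, TimeUnit.SECONDS);
scheduledThread.run();                         // the one final print of the map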
Related
I have 5 threads (5 instances of one Runnable class) starting at approximately the same time (using a CyclicBarrier), and I need to stop them all as soon as one of them finishes.
Currently, I have a static volatile boolean field threadsOver that I set to true at the end of doSomething(), the method that run() calls.
private static final CyclicBarrier barrier = new CyclicBarrier(5);
private static volatile boolean threadsOver;
@Override
public void run() {
try {
/* waiting for all threads to have been initialised,
so as to start them at the same time */
barrier.await();
doSomething();
} catch (InterruptedException | BrokenBarrierException e) {
e.printStackTrace();
}
}
public void doSomething() {
// while something AND if the threads are not over yet
while (someCondition && !threadsOver) {
// some lines of code
}
// if the threads are not over yet, it means I'm the first one to finish
if (!threadsOver) {
// so I'm telling the other threads to stop
threadsOver = true;
}
}
The problem with that code is that the code in doSomething() executes too fast, and as a result the threads that finish after the first one are already done by the time the first thread notifies them.
I tried adding some delay in doSomething() using Thread.sleep(), which reduced the number of threads that finished after the first one, but there are still runs where 2 or 3 threads complete their execution.
How could I make sure that when one thread is finished, all of the others don't execute all the way to the end?
First, here is where I copied the code snippets from: https://www.baeldung.com/java-executor-service-tutorial.
As you have 5 tasks, any one of which can produce the result, I prefer Callable, but a Runnable with a side effect can be handled likewise.
The almost simultaneous start, the Future task aspect, and picking the first result can all be done by invokeAny, as below:
Callable<Integer> callable1 = () -> {
return 1*2*3*5*7/5;
};
List<Callable<Integer>> callables = List.of(callable1, callable2, ...);
ExecutorService executorService = Executors.newFixedThreadPool(5);
Integer result = executorService.invokeAny(callables);
executorService.shutdown();
invokeAny() assigns a collection of tasks to an ExecutorService, causing each to run, and returns the result of one task that completed successfully (if there was a successful execution). On return, the tasks that have not completed are cancelled, which is what stops the other tasks (provided they respond to interruption).
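A rough, self-contained sketch of that idea, with a counting loop standing in for doSomething(); the class and variable names are made up, and note that the tasks must check the interrupt flag so the cancellation performed by invokeAny() can actually stop them:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class FirstOneWins {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(5);
        List<Callable<Integer>> tasks = new ArrayList<>();
        for (int t = 0; t < 5; t++) {
            final int id = t;
            tasks.add(() -> {
                // stand-in for doSomething(): stop early if this task gets cancelled
                for (int i = 0; i < 1_000_000 && !Thread.currentThread().isInterrupted(); i++) {
                    // some lines of code
                }
                return id;                          // whichever task returns first "wins"
            });
        }
        Integer winner = pool.invokeAny(tasks);     // blocks until one task succeeds, then cancels the rest
        System.out.println("first finished: " + winner);
        pool.shutdown();
    }
}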
Suppose that I have an arraylist called myList of threads all of which are created with an instance of the class myRunnable implementing the Runnable interface, that is, all the threads share the same code to execute in the run() method of myRunnable. Now suppose that I have another single thread called singleThread that is created with an instance of the class otherRunnable implementing the Runnable interface.
The synchronization challenge I have to resolve for these threads is the following: I need all of the threads in myList to execute their code until a certain point. Once they reach this point, they should sleep. Once all (and only all) of the threads in myList are sleeping, singleThread should be awakened (singleThread was already asleep). Then singleThread executes its own stuff, and when it is done, it should sleep and all the threads in myList should be awakened. Imagine that the code is wrapped in while(true) loops, so this process must happen again and again.
Here is an example of the situation I've just described, including an attempt at solving the synchronization problem:
class myRunnable implements Runnable
{
public static final Object lock = new Object();
static int count = 0;
@Override
public void run()
{
while(true)
{
//do stuff
barrier();
//do stuff
}
}
void barrier()
{
try {
synchronized(lock) {
count++;
if (count == Program.myList.size()) {
count = 0;
synchronized(otherRunnable.lock) {
otherRunnable.lock.notify();
}
}
lock.wait();
}
} catch (InterruptedException ex) {}
}
}
class otherRunnable implements Runnable
{
public static final Object lock = new Object();
@Override
public void run()
{
while(true)
{
try {
synchronized(lock) {
lock.wait();
}
} catch (InterruptedException ex) {}
// do stuff
synchronized(myRunnable.lock) {
myRunnable.lock.notifyAll();
}
}
}
}
class Program
{
public static ArrayList<Thread> myList;
public static void main (String[] args)
{
myList = new ArrayList<Thread>();
for(int i = 0; i < 10; i++)
{
myList.add(new Thread(new myRunnable()));
myList.get(i).start();
}
new Thread(new otherRunnable()).start();
}
}
Basically my idea is to use a counter to make sure that the threads in myList just wait, except for the last thread to increment the counter, which resets the counter to 0, wakes up singleThread by notifying its lock, and then goes to sleep as well by waiting on myRunnable.lock. At a more abstract level, my approach is to use some sort of barrier at which the threads in myList stop their execution at a critical point; the last thread to hit the barrier wakes up singleThread and goes to sleep as well, then singleThread does its stuff and, when finished, wakes up all the threads at the barrier so they can continue again.
My problem is that there is a flaw in my logic (probably there are more). When the last thread hitting the barrier notifies otherRunnable.lock, there is a chance that an immediate context switch occurs, giving the CPU to singleThread before the last thread has executed its wait on myRunnable.lock (and gone to sleep). Then singleThread would execute all of its stuff, execute notifyAll on myRunnable.lock, and all the threads in myList would be awakened except the last thread that hit the barrier, because it has not yet executed its wait. Then all those threads would do their stuff again and hit the barrier again, but the count would never equal myList.size(), because the last thread mentioned earlier would eventually be scheduled again and execute its wait. singleThread in turn would also execute wait in its first line, and as a result we have a deadlock, with everybody sleeping.
So my question is: what would be a good way to synchronize these threads to achieve the desired behaviour described above, but in a way that is safe from deadlocks?
Based on your comment, it sounds like a CyclicBarrier would fit your need exactly. From the docs (emphasis mine):
A synchronization aid that allows a set of threads to all wait for each other to reach a common barrier point. CyclicBarriers are useful in programs involving a fixed sized party of threads that must occasionally wait for each other. The barrier is called cyclic because it can be re-used after the waiting threads are released.
Unfortunately, I haven't used them myself, so I can't give you specific pointers on them. I think the basic idea is that you construct your barrier using the two-argument constructor with the barrierAction. Have your n threads await() on this barrier after their task is done, after which barrierAction is executed, after which the n threads will continue.
From the javadoc for CyclicBarrier#await():
If the current thread is the last thread to arrive, and a non-null barrier action was supplied in the constructor, then the current thread runs the action before allowing the other threads to continue. If an exception occurs during the barrier action then that exception will be propagated in the current thread and the barrier is placed in the broken state.
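Since the question describes exactly a reusable barrier plus a once-per-cycle action, here is a minimal sketch of that approach, assuming singleThread's work can simply be run as the barrier action (class and constant names are made up):

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierSketch {
    static final int WORKERS = 10;

    // The barrier action plays the role of singleThread: it runs exactly once per cycle,
    // by the last thread to arrive, before any of the workers is released.
    static final CyclicBarrier barrier = new CyclicBarrier(WORKERS, () -> {
        // singleThread's stuff goes here
    });

    public static void main(String[] args) {
        for (int i = 0; i < WORKERS; i++) {
            new Thread(() -> {
                try {
                    for (int cycle = 0; cycle < 3; cycle++) {   // while(true) in the real code
                        // do stuff (myRunnable's work before the barrier)
                        barrier.await();   // sleep here until everyone has arrived and the action has run
                        // do stuff (myRunnable's work after the barrier)
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}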
Here's my code
public class Main {
private static class Inner implements Runnable{
private List list;
private SqlQuery query;
Inner(SqlQuery<String> param){
this.query=param;
}
public void run(){
list = query.execute();
}
}
public static void main(String[] args){
ApplicationContext context = new ClassPathXmlApplicationContext("database.xml");
SqlQuery<String> parameter = (SqlQuery<String>) context.getBean("BEAN_NAME");
System.out.println("hello");
new Thread(new Inner(parameter)).start();
for(each element in list of inner class){
System.out.println(element.id);
}
}
}
Well, my question is: after I get the query from the XML file, it executes, but it doesn't print anything. Why?
Also, how do I ensure that my main program moves ahead in execution only after all my threads have finished, given that I make another thread and run it to create another list?
Change
new Thread(new Inner(parameter)).start();
to
Thread t = new Thread(new Inner(parameter));
t.start();
and put t.join(); after your for loop.
EDIT:
For 5, or any number of threads, say n:
Create an array of threads like this
Thread[] tArray = new Thread[n];
for (int j = 0; j < tArray.length; j++) {
//your code to start the thread goes here
}
Once you have started them all, loop through them again at the end of the main method and join each of them to the main thread.
for (int j = 0; j < tArray.length; j++) {
tArray[j].join();
}
If you are using only one thread and want the main program to wait until its execution has completed, you do not need a thread mechanism at all. Instead, you can call a plain method from the main program in place of Thread.run().
Otherwise, if you want to use multiple threads, you can use the Thread.join method so that the main thread waits at that line until all the thread executions are completed.
I also advise you to investigate the CountDownLatch mechanism. It gives you a ready-made mechanism so you do not have to handle join/wait operations manually.
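For example, a small sketch of the CountDownLatch approach (the thread count and the work are placeholders): each worker counts down when it finishes, and the main thread blocks on await() instead of joining every thread by hand.

import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int n = 5;
        CountDownLatch done = new CountDownLatch(n);
        for (int i = 0; i < n; i++) {
            new Thread(() -> {
                try {
                    // ... the thread's work, e.g. running the query ...
                } finally {
                    done.countDown();   // always signal completion, even on failure
                }
            }).start();
        }
        done.await();                   // main resumes only after all n workers have counted down
        System.out.println("all threads done");
    }
}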
Use the join method of the thread you want to wait for. Thread.join() javadoc
How it works:
The thread (let's call it A) that joins another thread (called B) will stop its execution until the joined thread (B) finishes and returns.
EDIT :
In fact, unless your thread is a daemon thread, your program won't exit.
The JVM automatically waits for all running non-daemon threads to finish before exiting.
new Thread(runnable).start(); executes the provided runnable asynchronously and continues execution with the next line, so the loop runs before anything has been added to the list.
So if you want to execute the loop after the thread has finished, you will have to wait for it. The easiest way is to run the Runnable on the current thread: new Inner(parameter).run();.
Now that defeats the purpose of parallel execution.
Assuming you have more than one runnable, you could use an ExecutorService (instead of using the low-level Thread API, which is more complicated to use and error-prone) to run the various tasks in parallel and collect the results when they are all completed:
ExecutorService executor = Executors.newCachedThreadPool();
executor.submit(runnable1); //first task
executor.submit(runnable2); //second task
executor.shutdown(); //stop accepting new tasks
executor.awaitTermination(Integer.MAX_VALUE, TimeUnit.SECONDS); //wait until both tasks finish
//now you can use the results of your tasks.
Finally, note that if you use threads, you will be sharing the list in your runnable across threads (the worker thread and the main thread), and you will need a thread-safe structure to make that possible - for example, a CopyOnWriteArrayList:
list = new CopyOnWriteArrayList<>(query.execute());
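As an aside (my suggestion, not part of the answer above): if the worker thread only produces that list, you can avoid sharing it altogether by submitting a Callable and reading its Future; checked exceptions are omitted in this sketch:

ExecutorService executor = Executors.newSingleThreadExecutor();
Callable<List<String>> task = () -> query.execute();   // the query runs on a worker thread
Future<List<String>> future = executor.submit(task);
List<String> list = future.get();                      // blocks until the query has finished, then returns its result
executor.shutdown();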
There's something odd about the implementation of the BoundedExecutor in the book Java Concurrency in Practice.
It's supposed to throttle task submission to the Executor by blocking the submitting thread when there are enough threads either queued or running in the Executor.
This is the implementation (after adding the missing rethrow in the catch clause):
public class BoundedExecutor {
private final Executor exec;
private final Semaphore semaphore;
public BoundedExecutor(Executor exec, int bound) {
this.exec = exec;
this.semaphore = new Semaphore(bound);
}
public void submitTask(final Runnable command) throws InterruptedException, RejectedExecutionException {
semaphore.acquire();
try {
exec.execute(new Runnable() {
@Override public void run() {
try {
command.run();
} finally {
semaphore.release();
}
}
});
} catch (RejectedExecutionException e) {
semaphore.release();
throw e;
}
}
}
When I instantiate the BoundedExecutor with an Executors.newCachedThreadPool() and a bound of 4, I would expect the number of threads instantiated by the cached thread pool never to exceed 4. In practice, however, it does. I've gotten this little test program to create as many as 11 threads:
public static void main(String[] args) throws Exception {
class CountingThreadFactory implements ThreadFactory {
int count;
@Override public Thread newThread(Runnable r) {
++count;
return new Thread(r);
}
}
List<Integer> counts = new ArrayList<Integer>();
for (int n = 0; n < 100; ++n) {
CountingThreadFactory countingThreadFactory = new CountingThreadFactory();
ExecutorService exec = Executors.newCachedThreadPool(countingThreadFactory);
try {
BoundedExecutor be = new BoundedExecutor(exec, 4);
for (int i = 0; i < 20000; ++i) {
be.submitTask(new Runnable() {
@Override public void run() {}
});
}
} finally {
exec.shutdown();
}
counts.add(countingThreadFactory.count);
}
System.out.println(Collections.max(counts));
}
I think there's a tiny time frame between the release of the semaphore and the task ending, during which another thread can acquire a permit and submit a task while the releasing thread hasn't finished yet. In other words, it has a race condition.
Can someone confirm this?
BoundedExecutor was indeed intended as an illustration of how to throttle task submission, not as a way to place a bound on thread pool size. There are more direct ways to achieve the latter, as at least one comment pointed out.
But the other answers don't mention the text in the book that says to use an unbounded queue and to
set the bound on the semaphore to be equal to the pool size plus the number of queued tasks you want to allow, since the semaphore is bounding the number of tasks both currently executing and awaiting execution. [JCiP, end of section 8.3.3]
By mentioning unbounded queues and pool size, we were implying (apparently not very clearly) the use of a thread pool of bounded size.
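For instance, a sketch of that recommendation (the concrete numbers are only an example): with a fixed pool of 4 threads and room for 100 waiting tasks, the semaphore bound would be 4 + 100.

ExecutorService pool = Executors.newFixedThreadPool(4);        // bounds the number of threads
BoundedExecutor bounded = new BoundedExecutor(pool, 4 + 100);  // bounds tasks running + awaiting execution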
What has always bothered me about BoundedExecutor, however, is that it doesn't implement the ExecutorService interface. A modern way to achieve similar functionality and still implement the standard interfaces would be to use Guava's listeningDecorator method and ForwardingListeningExecutorService class.
You are correct in your analysis of the race condition. There are no synchronization guarantees between the ExecutorService and the Semaphore.
However, I do not know whether throttling the number of threads is what the BoundedExecutor is used for. I think it is more for throttling the number of tasks submitted to the service. Imagine you have 5 million tasks that need to be submitted; if you submit more than 10,000 of them at once, you run out of memory.
Well, you will only ever have 4 threads running at any given time, so why would you want to try and queue up all 5 million tasks? You can use a construct similar to this to throttle the number of tasks queued up at any given time. What you should get out of this is that at any given time there are only 4 tasks running.
Obviously the resolution to this is to use Executors.newFixedThreadPool(4).
I see as many as 9 threads created at once. I suspect there is a race condition which causes there to be more threads than required.
This could be because there is work to do before and after running the task. This means that even though there are only 4 threads inside your block of code, there are a number of threads stopping a previous task or getting ready to start a new one.
i.e. the thread does a release() while it is still running. Even though it's the last thing you do, it's not the last thing the thread does before acquiring a new task.
I have a main for-loop that sends out requests to an external system. The external system might take a few seconds or even minutes to respond back.
Also, if the number of requests reaches MAX_REQUESTS, the current for-loop should SLEEP for a few seconds.
This is my scenario: let's say the main for-loop goes to sleep, say for 5 seconds, because it has reached MAX_REQUESTS. Then a previous external request comes back and returns from callExternalSystem(). What will happen to the main for-loop thread that is currently in the SLEEP state? Will it be interrupted and continue processing, or will it continue to SLEEP?
for(...){
...
while(numRequestsProcessing > MAX_REQUESTS){
Thread.sleep(SLEEP_TIME);
}
...
callExternalSystem();
}
Thanks in advance.
Unless you've got some code to interrupt the sleeping thread, it will continue sleeping until the required time has elapsed. If you don't want that to happen, you could possibly use wait()/notify() instead of sleep() so that another thread can notify the object that the main thread is sleeping on, in order to wake it up. That relies on there being another thread to notice that the external system has responded, of course - it's not really clear how you're getting responses back.
EDIT: It sounds like you really should use a Semaphore. Each time the main thread wants to issue a request, it acquires a permit. Each time there's a response, that releases a permit. Then you just need to set it up with as many permits as you want concurrent requests. Use tryAcquire if you want to be able to specify a timeout in the main thread - but think about what you want to do if you already have as many requests outstanding as you're really happy with.
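A rough sketch of that suggestion, reusing callExternalSystem() and the constants from the question (the loop bound and the response-handling thread are assumptions): acquire a permit per request, release a permit per response.

Semaphore permits = new Semaphore(MAX_REQUESTS);   // at most MAX_REQUESTS outstanding requests

for (int i = 0; i < totalRequests; i++) {
    permits.acquire();             // blocks until a slot is free (tryAcquire adds a timeout)
    callExternalSystem();
}

// in whatever thread (or callback) handles a response coming back:
permits.release();                 // frees a slot, waking the main loop if it was blocked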
I would use java.util.concurrent.Executors to create a thread pool with MAX_REQUESTS threads. Create a java.util.concurrent.CountDownLatch for however many requests you're sending out at once. Pass the latch to the Runnables that make the request, they call countDown() on the latch when complete. The main thread then calls await(timeout) on the latch. I would also suggest the book "Java Concurrency in Practice".
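Sketched out under the assumption that the requests go out in batches of at most MAX_REQUESTS (the batch bookkeeping here is invented for illustration):

ExecutorService pool = Executors.newFixedThreadPool(MAX_REQUESTS);

for (int batch = 0; batch < numberOfBatches; batch++) {
    CountDownLatch responses = new CountDownLatch(MAX_REQUESTS);
    for (int i = 0; i < MAX_REQUESTS; i++) {
        pool.submit(() -> {
            try {
                callExternalSystem();           // may take seconds or minutes
            } finally {
                responses.countDown();          // a response (or a failure) came back
            }
        });
    }
    responses.await(SLEEP_TIME, TimeUnit.MILLISECONDS);   // wait for the batch, but not forever
}
pool.shutdown();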
One approach is to use a ThreadPoolExecutor which blocks whenever there is no free thread.
ThreadPoolExecutor executor = new ThreadPoolExecutor(MAX_REQUESTS, MAX_REQUESTS, 60, TimeUnit.SECONDS, new SynchronousQueue<Runnable>(), new RejectedExecutionHandler() {
@Override
public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
try {
executor.getQueue().offer(r, Long.MAX_VALUE, TimeUnit.NANOSECONDS);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
});
for(int i=0;i<LOTS_OF_REQUESTS;i++) {
final int finalI = i;
executor.submit(new Runnable() {
@Override
public void run() {
request(finalI);
}
});
}
Another approach is to have the tasks generate their own requests. This way, a new request is generated as soon as a thread is free, and the requests run concurrently.
ExecutorService executor = Executors.newFixedThreadPool(MAX_REQUESTS);
final AtomicInteger counter = new AtomicInteger();
for (int i = 0; i < MAX_REQUESTS; i++) {
executor.submit(new Runnable() {
@Override
public void run() {
int i;
while ((i = counter.getAndIncrement()) < LOTS_OF_REQUESTS)
request(i);
}
});
}