I've been using Java 8 streams for a while. I came across a situation where I need to stream through a List and pass each element to a non-static method of a class.
List<String> emps = new ArrayList<>();
emps.add("ABC");
emps.add("DEF");
emps.add("GHI");
I want to call the "start" method of EmpDataGenerator.
EmpDataGenerator generator = new EmpDataGenerator(
Executors.newFixedThreadPool(emps.size()));
I have tried this, but it's not working:
emps.stream().map(e-> generator.start(e));
public class EmpDataGenerator {
// Used to signal a graceful shutdown
private volatile boolean stop = false;
private final ExecutorService executor;
public EmpDataGenerator(ExecutorService executor) {
this.executor = executor;
}
public void start(String name ) {
Runnable generator = () -> {
try {
while (!stop) {
//do some processing
}
System.out.println("Broke while loop, stop " + stop);
} catch (Exception e) {
System.out.println("EmpDataGenerator thread caught an exception and halted!");
throw e;
}
};
executor.execute(generator);
}
public void stop() {
stop = true;
// Then shut down the executor (after waiting a bit to be nice)
try {
executor.awaitTermination(1000, TimeUnit.MILLISECONDS);
} catch (InterruptedException e) {
// Purposely ignore any InterruptedException
Thread.currentThread().interrupt();
}
executor.shutdownNow();
}
}
A map must take an input and transform it into something else, but start is void, so there is nothing to map to. Just as important, map is an intermediate operation: without a terminal operation the pipeline is never executed at all.
There is no need for streams here. A simple forEach should do.
emps.forEach(e-> generator.start(e));
or
emps.forEach(generator::start);
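If start() were changed to return something, a stream pipeline would make sense again. A hedged sketch, assuming a hypothetical variant that uses executor.submit(...) and returns the resulting Future (the posted class has no such method):
// startAndGetFuture is hypothetical: it would call executor.submit(runnable) and return the Future
List<Future<?>> handles = emps.stream()
        .map(generator::startAndGetFuture)
        .collect(Collectors.toList());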
Related
My multi-threaded class is supposed to carry out three operations – operation1, operation2, and operation3 – on a number of objects of the class ClassA, where each type of operation is dependent on the earlier operation. For this, I have tried to implement the producer-consumer pattern using a number of BlockingQueues and an ExecutorService.
final ExecutorService executor = ForkJoinPool.commonPool();
final BlockingQueue<ClassA> operationOneQueue = new ArrayBlockingQueue<>(NO_OF_CLASS_A_OBJECTS);
final BlockingQueue<ClassA> operationTwoQueue = new ArrayBlockingQueue<>(NO_OF_CLASS_A_OBJECTS);
final BlockingQueue<ClassA> operationThreeQueue = new ArrayBlockingQueue<>(NO_OF_CLASS_A_OBJECTS);
final BlockingQueue<ClassA> resultQueue = new ArrayBlockingQueue<>(NO_OF_CLASS_A_OBJECTS);
The operations are implemented like this:
void doOperationOne() throws InterruptedException {
ClassA objectA = operationOneQueue.take();
objectA.operationOne();
operationTwoQueue.put(objectA);
}
where each type of operation has its own corresponding method, with its "own" in-queue and out-queue. Each operation method calls the appropriate method on the ClassA object. The method doOperationThree puts ClassA objects in the resultQueue, meaning they have been completely processed.
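The other operation methods look analogous; for example, doOperationTwo is essentially:
void doOperationTwo() throws InterruptedException {
    ClassA objectA = operationTwoQueue.take();
    objectA.operationTwo();
    operationThreeQueue.put(objectA);
}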
First, I fill the operationOneQueue with all ClassA objects that are to be operated on. Then, I try to assign executable tasks to the ExecutorService like this:
while (resultQueue.size() < NO_OF_CLASS_A_OBJECTS) {
executor.execute(() -> {
try {
doOperationOne();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
});
executor.execute(() -> {
try {
doOperationTwo();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
});
executor.execute(() -> {
try {
doOperationThree();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
});
}
executor.shutdown();
Running my program, I get a java.util.concurrent.RejectedExecutionException.
Operation1: ClassA object 0
Operation2: ClassA object 0
Operation1: ClassA object 1
Operation3: ClassA object 0
....
Operation1: ClassA object 46
Operation2: ClassA object 45
Operation3: ClassA object 45
Exception in thread "main" java.util.concurrent.RejectedExecutionException: Queue capacity exceeded
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.growArray(ForkJoinPool.java:912)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.lockedPush(ForkJoinPool.java:867)
at java.base/java.util.concurrent.ForkJoinPool.externalPush(ForkJoinPool.java:1911)
at java.base/java.util.concurrent.ForkJoinPool.externalSubmit(ForkJoinPool.java:1930)
at java.base/java.util.concurrent.ForkJoinPool.execute(ForkJoinPool.java:2462)
at concurrent.operations.Program1.main(Program1.java:96)
What am I doing wrong? How can I achieve this without over-saturating the thread pool?
Edit: Full disclosure – this is homework with some requirements. 1. I must use ForkJoinPool.commonPool() and must not set the number of threads myself, 2. I must use the consumer-producer pattern, and 3. I must not modify ClassA.
I really like doing concurrent stuff, so I did try writing it. I used CompletableFuture, which a) runs in ForkJoinPool.commonPool() by default and b) makes the actual processing really easy:
while (true) {
final ClassA nextOperation = queue.take();
CompletableFuture.runAsync(nextOperation::operationOne)
.thenRun(nextOperation::operationTwo)
.thenRun(nextOperation::operationThree)
.thenRun(() -> resultQueue.add(nextOperation));
}
This will take ClassA objects from the queue and execute all their operations concurrently, but in order.
You did leave out where the tasks are coming from, and whether you need the consumer to terminate. Generally you don't want to, and it does make matters a bit more complicated.
private static final int COUNT = 10;
private static final Random RANDOM = new Random();
public static void main(String[] args) throws ExecutionException, InterruptedException {
BlockingQueue<ClassA> runnables = new ArrayBlockingQueue<>(COUNT);
BlockingQueue<ClassA> finished = new ArrayBlockingQueue<>(COUNT);
// start producer
ExecutorService createTaskExecutor = Executors.newSingleThreadExecutor();
createTaskExecutor.submit(() -> fillQueue(runnables));
// wait for all consumer tasks to finish
while (finished.size() != COUNT) {
try {
// we need to poll instead of waiting forever
// because the last tasks might still be running
// while there are no others to add anymore
// so we need to check again if all have finished in the meantime
final ClassA nextOperation = runnables.poll(2, TimeUnit.SECONDS);
if (nextOperation != null) {
CompletableFuture.runAsync(nextOperation::operationOne)
.thenRun(nextOperation::operationTwo)
.thenRun(nextOperation::operationThree)
.thenRun(() -> finished.add(nextOperation));
}
} catch (InterruptedException e) {
System.err.println("exception while retrieving next operation");
// we will actually need to terminate now, or probably never will
throw e;
}
}
System.out.printf("finished tasks (%d):%n", finished.size());
for (ClassA classA : finished) {
System.out.printf("finished task %d%n", classA.designator);
}
createTaskExecutor.shutdown();
}
private static void fillQueue(BlockingQueue<ClassA> runnables) {
// start thread filling the queue at random
for (int i = 0; i < COUNT; i++) {
runnables.add(new ClassA(i));
try {
Thread.sleep(RANDOM.nextInt(1_000));
} catch (InterruptedException e) {
System.err.println("failed to add runnable");
}
}
}
Since you didn't provide ClassA, I used this one. It contains an identifier so you can track which is running at what time.
class ClassA {
private static final Random RANDOM = new Random();
public final int designator;
public ClassA(int i) {
designator = i;
}
public void operationOne() {
System.out.printf("%d: operation 1%n", designator);
sleep();
}
public void operationTwo() {
System.out.printf("%d: operation 2%n", designator);
sleep();
}
public void operationThree() {
System.out.printf("%d: operation 3%n", designator);
sleep();
}
private static void sleep() {
try {
Thread.sleep(RANDOM.nextInt(5_000));
} catch (InterruptedException e) {
System.err.println("interrupted while executing task");
}
}
}
I have got a class that records eyetracking data asynchronously. There are methods to start and stop the recording process. The data is collected in a collection and the collection can only be accessed if the recording thread has finished its work. It basically encapsulates all the threading and synchronizing so the user of my library doesn't have to do it.
The heavily shortened code (generics and error handling omitted):
public class Recorder {
private Collection accumulatorCollection;
private Thread recordingThread;
private class RecordingRunnable implements Runnable {
...
public void run() {
while(!Thread.currentThread().isInterrupted()) {
// fetch data and collect it in the accumulator
synchronized(acc) { acc.add(Eyetracker.getData()) }
}
}
}
public void start() {
accumulatorCollection = new Collection();
recordingThread = new Thread(new RecordingRunnable(accumulatorCollection));
recordingThread.start();
}
public void stop() {
recordingThread.interrupt();
}
public void getData() {
try {
recordingThread.join(2000);
if(recordingThread.isAlive()) { throw Exception(); }
}
catch(InterruptedException e) { ... }
synchronized(accumulatorCollection) { return accumulatorCollection; }
}
}
The usage is quite simple:
recorder.start();
...
recorder.stop();
Collection data = recorder.getData();
My problem with the whole thing is how to test it. Currently I am doing it like this:
recorder.start();
Thread.sleep(50);
recorder.stop();
Collection data = recorder.getData();
assert(stuff);
This works, but it is non-deterministic and slows down the test suite quite a bit (I marked these tests as integration tests, so they have to be run separately to circumvent this problem).
Is there a better way?
There is a better way using a CountDownLatch.
The non-deterministic part of the test stems from two variables in time you do not account for:
creating and starting a thread takes time and the thread may not have started executing the runnable when Thread.start() returns (the runnable will get executed, but it may be a bit later).
the stop/interrupt will break the while-loop in the Runnable, but not immediately; it may be a bit later.
This is where a CountDownLatch comes in: it gives you precise information about where another thread is in execution. E.g. let the first thread wait on the latch, while the second "counts down" the latch as last statement within a runnable and now the first thread knows that the runnable finished. The CountDownLatch also acts as a synchronizer: whatever the second thread was writing to memory, can now be read by the first thread.
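To make the handshake concrete, a minimal sketch (assuming some executor or plain Thread runs the recording work, and an enclosing test method that declares throws InterruptedException):
CountDownLatch finishedRecording = new CountDownLatch(1);
executor.execute(() -> {
    try {
        // ... fetch and collect data until asked to stop ...
    } finally {
        finishedRecording.countDown(); // signals: the runnable is done, and its writes are now visible
    }
});
// the test thread waits deterministically instead of sleeping, and fails fast on a hang
if (!finishedRecording.await(2, TimeUnit.SECONDS)) {
    throw new AssertionError("recording did not finish in time");
}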
Instead of using an interrupt, you can also use a volatile boolean. Any thread reading the volatile variable is guaranteed to see the last value set by any other thread.
A CountDownLatch can also be given a timeout, which is useful for tests that can hang: if you have to wait too long you can abort the whole test (e.g. shutdown executors, interrupt threads) and throw an AssertionError. In the code below I re-used the timeout to wait for a certain amount of data to collect instead of 'sleeping'.
As an optimization, use an Executor (ThreadPool) instead of creating and starting threads. The latter is relative expensive, using an Executor can really make a difference.
Below is the updated code; I made it runnable as an application (main method). (Edit 28/02/17: check maxCollect > 0 in the while-loop.)
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;
public class Recorder {
private final ExecutorService executor;
private Thread recordingThread;
private volatile boolean stopRecording;
private CountDownLatch finishedRecording;
private Collection<Object> eyeData;
private int maxCollect;
private final AtomicBoolean started = new AtomicBoolean();
private final AtomicBoolean stopped = new AtomicBoolean();
public Recorder() {
this(null);
}
public Recorder(ExecutorService executor) {
this.executor = executor;
}
public Recorder maxCollect(int max) { maxCollect = max; return this; }
private class RecordingRunnable implements Runnable {
@Override public void run() {
try {
int collected = 0;
while (!stopRecording) {
eyeData.add(EyeTracker.getData());
if (maxCollect > 0 && ++collected >= maxCollect) {
stopRecording = true;
}
}
} finally {
finishedRecording.countDown();
}
}
}
public Recorder start() {
if (!started.compareAndSet(false, true)) {
throw new IllegalStateException("already started");
}
stopRecording = false;
finishedRecording = new CountDownLatch(1);
eyeData = new ArrayList<Object>();
// the RecordingRunnable created below will see the values assigned above ('happens before relationship')
if (executor == null) {
recordingThread = new Thread(new RecordingRunnable());
recordingThread.start();
} else {
executor.execute(new RecordingRunnable());
}
return this;
}
public Collection<Object> getData(long timeout, TimeUnit tunit) {
if (started.get() == false) {
throw new IllegalStateException("start first");
}
if (!stopped.compareAndSet(false, true)) {
throw new IllegalStateException("data already fetched");
}
if (maxCollect <= 0) {
stopRecording = true;
}
boolean recordingStopped = false;
try {
// this establishes a 'happens before relationship'
// all updates to eyeData are now visible in this thread.
recordingStopped = finishedRecording.await(timeout, tunit);
} catch(InterruptedException e) {
throw new RuntimeException("interrupted", e);
} finally {
stopRecording = true;
}
// if recording did not stop, do not return the eyeData (it could still be modified by the recording-runnable).
if (!recordingStopped) {
throw new RuntimeException("recording");
}
// only when everything is OK this recorder instance can be re-used
started.set(false);
stopped.set(false);
return eyeData;
}
public static class EyeTracker {
public static Object getData() {
try { Thread.sleep(1); } catch (Exception ignored) {}
return new Object();
}
}
public static void main(String[] args) {
System.out.println("Starting.");
ExecutorService exe = Executors.newSingleThreadExecutor();
try {
Recorder r = new Recorder(exe).maxCollect(50).start();
int dsize = r.getData(2000, TimeUnit.MILLISECONDS).size();
System.out.println("Collected " + dsize);
r.maxCollect(100).start();
dsize = r.getData(2000, TimeUnit.MILLISECONDS).size();
System.out.println("Collected " + dsize);
r.maxCollect(0).start();
Thread.sleep(100);
dsize = r.getData(2000, TimeUnit.MILLISECONDS).size();
System.out.println("Collected " + dsize);
} catch (Exception e) {
e.printStackTrace();
} finally {
exe.shutdownNow();
System.out.println("Done.");
}
}
}
Happy coding :)
I am having the same issue as in this question:
Thread hangs when executing a method via reflection
What they haven't noticed is that the problem is with the .invoke method.
When I comment out the .invoke line in the snippet below, the Callables return the desired Future values.
This is my Callable implementation:
public class ParallelCallable implements Callable<Boolean> {
private Class _class;
private Method _method;
public ParallelCallable(ParallelPair pair) {
this._class = pair.getClassType();
this._method = pair.getMethod();
}
public Boolean call() {
Boolean done;
try {
this._method.invoke(null, this._class.newInstance()); // All threads hang on this line.
done = true;
} catch(Exception ex) {
ex.printStackTrace();
done = false;
}
return done;
}
}
And the next code is where I submit the callables to the ExecutorService:
List<Callable<Boolean>> tasks = new ArrayList<Callable<Boolean>>();
ExecutorService pool = Executors.newFixedThreadPool(10);
try {
futures = pool.invokeAll(this.tasks);
pool.shutdown();
// Wait until all threads are finish
while (!pool.isTerminated()) {
}
boolean isLoaded;
for (Future<Boolean> future : futures) {
isLoaded = future.get();
}
} catch (Exception ex) {
ex.printStackTrace();
}
"this.tasks" is the list of ParallelCallable objects.
All the threads hang on the line "this._method.invoke(...)". The methods I invoke are public static, and no exception is thrown at all.
Thank you and I hope you can help me!
I have a method which returns a List of futures
List<Future<O>> futures = getFutures();
Now I want to wait until either all futures are done processing successfully or any of the tasks whose output is returned by a future throws an exception. Even if one task throws an exception, there is no point in waiting for the other futures.
A simple approach would be:
wait() {
for (Future f : futures) {
try {
f.get();
} catch(Exception e) {
//TODO catch specific exception
// this future threw an exception, meaning someone could not do its task
return;
}
}
}
But the problem here is if, for example, the 4th future throws an exception, then I will wait unnecessarily for the first 3 futures to be available.
How to solve this? Will a CountDownLatch help in any way? I'm unable to use Future.isDone because the Javadoc says
boolean isDone()
Returns true if this task completed. Completion may be due to normal termination, an exception, or cancellation -- in all of these cases, this method will return true.
You can use a CompletionService to receive the futures as soon as they are ready and if one of them throws an exception cancel the processing. Something like this:
Executor executor = Executors.newFixedThreadPool(4);
CompletionService<SomeResult> completionService =
new ExecutorCompletionService<SomeResult>(executor);
//4 tasks
for(int i = 0; i < 4; i++) {
completionService.submit(new Callable<SomeResult>() {
public SomeResult call() {
...
return result;
}
});
}
int received = 0;
boolean errors = false;
while(received < 4 && !errors) {
Future<SomeResult> resultFuture = completionService.take(); //blocks if none available
try {
SomeResult result = resultFuture.get();
received ++;
... // do something with the result
}
catch(Exception e) {
//log
errors = true;
}
}
I think you can improve this further by cancelling any still-executing tasks as soon as one of them throws an error; a sketch of that follows.
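The key is to keep the Futures returned by submit() so they can be cancelled (createTask is a placeholder for whatever Callable you submit):
List<Future<SomeResult>> submitted = new ArrayList<>();
for (int i = 0; i < 4; i++) {
    submitted.add(completionService.submit(createTask(i)));
}
int received = 0;
while (received < 4) {
    try {
        completionService.take().get();
        received++;
    } catch (InterruptedException | ExecutionException e) {
        // first failure (or interruption): stop waiting and cancel whatever is still running
        submitted.forEach(f -> f.cancel(true));
        break;
    }
}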
If you are using Java 8 then you can do this more easily with CompletableFuture and CompletableFuture.allOf, which applies the callback only after all supplied CompletableFutures are done.
// Waits for *all* futures to complete and returns a list of results.
// If *any* future completes exceptionally then the resulting future will also complete exceptionally.
public static <T> CompletableFuture<List<T>> all(List<CompletableFuture<T>> futures) {
CompletableFuture[] cfs = futures.toArray(new CompletableFuture[futures.size()]);
return CompletableFuture.allOf(cfs)
.thenApply(ignored -> futures.stream()
.map(CompletableFuture::join)
.collect(Collectors.toList())
);
}
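Usage might look like this (urls and fetch are placeholders, not from the question). Note that allOf still waits for every future to finish before the combined future reports a failure:
List<CompletableFuture<String>> futures = urls.stream()
        .map(url -> CompletableFuture.supplyAsync(() -> fetch(url)))
        .collect(Collectors.toList());
List<String> results = all(futures).join(); // throws CompletionException if any task failed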
Use a CompletableFuture in Java 8
// Kick off multiple, asynchronous lookups
CompletableFuture<User> page1 = gitHubLookupService.findUser("Test1");
CompletableFuture<User> page2 = gitHubLookupService.findUser("Test2");
CompletableFuture<User> page3 = gitHubLookupService.findUser("Test3");
// Wait until they are all done
CompletableFuture.allOf(page1,page2,page3).join();
logger.info("--> " + page1.get());
You can use an ExecutorCompletionService. The documentation even has an example for your exact use-case:
Suppose instead that you would like to use the first non-null result of the set of tasks, ignoring any that encounter exceptions, and cancelling all other tasks when the first one is ready:
void solve(Executor e, Collection<Callable<Result>> solvers) throws InterruptedException {
CompletionService<Result> ecs = new ExecutorCompletionService<Result>(e);
int n = solvers.size();
List<Future<Result>> futures = new ArrayList<Future<Result>>(n);
Result result = null;
try {
for (Callable<Result> s : solvers)
futures.add(ecs.submit(s));
for (int i = 0; i < n; ++i) {
try {
Result r = ecs.take().get();
if (r != null) {
result = r;
break;
}
} catch (ExecutionException ignore) {
}
}
} finally {
for (Future<Result> f : futures)
f.cancel(true);
}
if (result != null)
use(result);
}
The important thing to notice here is that ecs.take() will get the first completed task, not just the first submitted one. Thus you should get them in the order in which they finish execution (or throw an exception).
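Adapted to this question (wait for all tasks, but stop as soon as any of them throws), a sketch along the same lines; Result stands in for whatever your tasks return:
void waitForAllOrFirstFailure(Executor e, Collection<Callable<Result>> tasks)
        throws InterruptedException, ExecutionException {
    CompletionService<Result> ecs = new ExecutorCompletionService<>(e);
    List<Future<Result>> futures = new ArrayList<>(tasks.size());
    try {
        for (Callable<Result> t : tasks)
            futures.add(ecs.submit(t));
        for (int i = 0; i < tasks.size(); i++)
            ecs.take().get(); // futures arrive in completion order; get() rethrows the first failure
    } finally {
        for (Future<Result> f : futures)
            f.cancel(true); // cancel anything still running, whether we finished or failed early
    }
}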
If you are using Java 8 and don't want to manipulate CompletableFutures, I have written a tool to retrieve results for a List<Future<T>> using streaming. The key point is that you cannot simply map(Future::get), since get() throws checked exceptions.
public final class Futures
{
private Futures()
{}
public static <E> Collector<Future<E>, Collection<E>, List<E>> present()
{
return new FutureCollector<>();
}
private static class FutureCollector<T> implements Collector<Future<T>, Collection<T>, List<T>>
{
private final List<Throwable> exceptions = new LinkedList<>();
@Override
public Supplier<Collection<T>> supplier()
{
return LinkedList::new;
}
@Override
public BiConsumer<Collection<T>, Future<T>> accumulator()
{
return (r, f) -> {
try
{
r.add(f.get());
}
catch (InterruptedException e)
{}
catch (ExecutionException e)
{
exceptions.add(e.getCause());
}
};
}
@Override
public BinaryOperator<Collection<T>> combiner()
{
return (l1, l2) -> {
l1.addAll(l2);
return l1;
};
}
@Override
public Function<Collection<T>, List<T>> finisher()
{
return l -> {
List<T> ret = new ArrayList<>(l);
if (!exceptions.isEmpty())
throw new AggregateException(exceptions, ret);
return ret;
};
}
@Override
public Set<java.util.stream.Collector.Characteristics> characteristics()
{
return java.util.Collections.emptySet();
}
}
}
This needs an AggregateException that works like C#'s
public class AggregateException extends RuntimeException
{
/**
*
*/
private static final long serialVersionUID = -4477649337710077094L;
private final List<Throwable> causes;
private List<?> successfulElements;
public AggregateException(List<Throwable> causes, List<?> l)
{
this.causes = causes;
successfulElements = l;
}
public AggregateException(List<Throwable> causes)
{
this.causes = causes;
}
@Override
public synchronized Throwable getCause()
{
return this;
}
public List<Throwable> getCauses()
{
return causes;
}
public List<?> getSuccessfulElements()
{
return successfulElements;
}
public void setSuccessfulElements(List<?> successfulElements)
{
this.successfulElements = successfulElements;
}
}
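For reference, the collector is then used like any other (a sketch; it assumes the Futures and AggregateException classes above are on the classpath, and MyResultType is whatever your futures produce):
try {
    List<MyResultType> results = futures.stream().collect(Futures.present());
    // use the successfully retrieved results
} catch (AggregateException e) {
    e.getCauses().forEach(Throwable::printStackTrace);
}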
This component acts exactly like C#'s Task.WaitAll. I am working on a variant that does the same as CompletableFuture.allOf (equivalent to Task.WhenAll).
The reason I did this is that I am using Spring's ListenableFuture and don't want to port to CompletableFuture, even though it is the more standard way.
In case you want to combine a List of CompletableFutures, you can do this:
List<CompletableFuture<Void>> futures = new ArrayList<>();
// ... Add futures to this ArrayList of CompletableFutures
// The CompletableFuture.allOf() method demands variadic arguments
// You can use this syntax to pass a List instead
CompletableFuture<Void> allFutures = CompletableFuture.allOf(
futures.toArray(new CompletableFuture[futures.size()]));
// Wait for all individual CompletableFuture to complete
// All individual CompletableFutures are executed in parallel
allFutures.get();
For more details on Future & CompletableFuture, useful links:
1. Future: https://www.baeldung.com/java-future
2. CompletableFuture: https://www.baeldung.com/java-completablefuture
3. CompletableFuture: https://www.callicoder.com/java-8-completablefuture-tutorial/
I've got a utility class that contains these:
@FunctionalInterface
public interface CheckedSupplier<X> {
X get() throws Throwable;
}
public static <X> Supplier<X> uncheckedSupplier(final CheckedSupplier<X> supplier) {
return () -> {
try {
return supplier.get();
} catch (final Throwable checkedException) {
throw new IllegalStateException(checkedException);
}
};
}
Once you have that, using a static import, you can simply wait for all futures like this:
futures.stream().forEach(future -> uncheckedSupplier(future::get).get());
you can also collect all their results like this:
List<MyResultType> results = futures.stream()
.map(future -> uncheckedSupplier(future::get).get())
.collect(Collectors.toList());
Just revisiting my old post and noticing that you had another grievance:
But the problem here is if, for example, the 4th future throws an exception, then I will wait unnecessarily for the first 3 futures to be available.
In this case, the simple solution is to do this in parallel:
futures.stream().parallel()
.forEach(future -> uncheckedSupplier(future::get).get());
This way the first exception, although it will not stop the futures themselves, will break the forEach statement just as in the serial example; but since the waits happen in parallel, you won't have to wait for the first 3 to complete.
Maybe this will help (nothing fancy here, just raw threads, yeah!).
I suggest running each Future in a separate thread (so they run in parallel); whenever one of them hits an error, it simply signals the manager (the Handler class).
class Handler{
//...
private Thread thisThread;
private boolean failed=false;
private Thread[] trds;
public void waitFor(){
thisThread=Thread.currentThread();
List<Future<Object>> futures = getFutures();
trds=new Thread[futures.size()];
for (int i = 0; i < trds.length; i++) {
RunTask rt=new RunTask(futures.get(i), this);
trds[i]=new Thread(rt);
}
synchronized (this) {
for(Thread tx:trds){
tx.start();
}
}
for(Thread tx:trds){
try {tx.join();
} catch (InterruptedException e) {
System.out.println("Job failed!");break;
}
}if(!failed){System.out.println("Job Done");}
}
private List<Future<Object>> getFutures() {
return null;
}
public synchronized void cancelOther(){if(failed){return;}
failed=true;
for(Thread tx:trds){
tx.stop();//Deprecated but works here like a boss
}thisThread.interrupt();
}
//...
}
class RunTask implements Runnable{
private Future f;private Handler h;
public RunTask(Future f,Handler h){this.f=f;this.h=h;}
public void run(){
try{
f.get();//beware about state of working, the stop() method throws ThreadDeath Error at any thread state (unless it blocked by some operation)
}catch(Exception e){System.out.println("Error, stopping other guys...");h.cancelOther();}
catch(Throwable t){System.out.println("Oops, some other guy has stopped working...");}
}
}
I have to say the above code might have errors (I didn't check it), but I hope it explains the solution. Please give it a try.
/**
* execute the suppliers as future tasks, then wait/join to get the results
* @param functors the supplier(s) to execute
* @return a list of results
*/
private List getResultsInFuture(Supplier<?>... functors) {
CompletableFuture[] futures = stream(functors)
.map(CompletableFuture::supplyAsync)
.collect(Collectors.toList())
.toArray(new CompletableFuture[functors.length]);
CompletableFuture.allOf(futures).join();
return stream(futures).map(a-> {
try {
return a.get();
} catch (InterruptedException | ExecutionException e) {
//logger.error("an error occurred during runtime execution a function",e);
return null;
}
}).collect(Collectors.toList());
};
The CompletionService will take your Callables with the .submit() method and you can retrieve the computed futures with the .take() method.
One thing you must not forget is to terminate the ExecutorService by calling the .shutdown() method. Also you can only call this method when you have saved a reference to the executor service so make sure to keep one.
Example code - For a fixed number of work items to be worked on in parallel:
ExecutorService service = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
CompletionService<YourCallableImplementor> completionService =
new ExecutorCompletionService<YourCallableImplementor>(service);
ArrayList<Future<YourCallableImplementor>> futures = new ArrayList<Future<YourCallableImplementor>>();
for (String computeMe : elementsToCompute) {
futures.add(completionService.submit(new YourCallableImplementor(computeMe)));
}
//now retrieve the futures after computation (auto wait for it)
int received = 0;
while(received < elementsToCompute.size()) {
Future<YourCallableImplementor> resultFuture = completionService.take();
YourCallableImplementor result = resultFuture.get();
received ++;
}
//important: shutdown your ExecutorService
service.shutdown();
Example code - For a dynamic number of work items to be worked on in parallel:
public void runIt(){
ExecutorService service = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
CompletionService<CallableImplementor> completionService = new ExecutorCompletionService<CallableImplementor>(service);
ArrayList<Future<CallableImplementor>> futures = new ArrayList<Future<CallableImplementor>>();
//Initial workload is 8 threads
for (int i = 0; i < 9; i++) {
futures.add(completionService.submit(write.new CallableImplementor()));
}
boolean finished = false;
while (!finished) {
try {
Future<CallableImplementor> resultFuture;
resultFuture = completionService.take();
CallableImplementor result = resultFuture.get();
finished = doSomethingWith(result.getResult());
result.setResult(null);
result = null;
resultFuture = null;
//After work package has been finished create new work package and add it to futures
futures.add(completionService.submit(write.new CallableImplementor()));
} catch (InterruptedException | ExecutionException e) {
//handle interrupted and assert correct thread / work packet count
}
}
//important: shutdown your ExecutorService
service.shutdown();
}
public class CallableImplementor implements Callable{
boolean result;
@Override
public CallableImplementor call() throws Exception {
//business logic goes here
return this;
}
public boolean getResult() {
return result;
}
public void setResult(boolean result) {
this.result = result;
}
}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.Stream;
public class Stack2 {
public static void waitFor(List<Future<?>> futures) {
List<Future<?>> futureCopies = new ArrayList<Future<?>>(futures);// contains futures whose status has not yet completed
while (!futureCopies.isEmpty()) {// worst case: all tasks complete without exception, so this method waits for all of them
Iterator<Future<?>> futureCopiesIterator = futureCopies.iterator();
while (futureCopiesIterator.hasNext()) {
Future<?> future = futureCopiesIterator.next();
if (future.isDone()) {//already done
futureCopiesIterator.remove();
try {
future.get();// no longer waiting
} catch (InterruptedException e) {
//ignore
//only happen when current Thread interrupted
} catch (ExecutionException e) {
Throwable throwable = e.getCause();// real cause of exception
futureCopies.forEach(f -> f.cancel(true));//cancel other tasks that not completed
return;
}
}
}
}
}
public static void main(String[] args) {
ExecutorService executorService = Executors.newFixedThreadPool(3);
Runnable runnable1 = new Runnable (){
public void run(){
try {
Thread.sleep(5000);
} catch (InterruptedException e) {
}
}
};
Runnable runnable2 = new Runnable (){
public void run(){
try {
Thread.sleep(4000);
} catch (InterruptedException e) {
}
}
};
Runnable fail = new Runnable (){
public void run(){
try {
Thread.sleep(1000);
throw new RuntimeException("bla bla bla");
} catch (InterruptedException e) {
}
}
};
List<Future<?>> futures = Stream.of(runnable1,fail,runnable2)
.map(executorService::submit)
.collect(Collectors.toList());
double start = System.nanoTime();
waitFor(futures);
double end = (System.nanoTime()-start)/1e9;
System.out.println(end +" seconds");
}
}
This is what I use to wait a certain time for a list of futures. I think it's cleaner.
CountDownLatch countDownLatch = new CountDownLatch(somethings.size());
// Some parallel work
for (Something something : somethings) {
completionService.submit(() -> {
try {
work(something);
} catch (ConnectException e) {
// log / handle the failure
} finally {
countDownLatch.countDown();
}
});
}
try {
if (!countDownLatch.await(secondsToWait, TimeUnit.SECONDS)) {
// timed out before every task counted down
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
A Guava-based solution can be implemented using Futures.FutureCombiner.
Here is the code example given in the javadoc:
final ListenableFuture<Instant> loginDateFuture =
loginService.findLastLoginDate(username);
final ListenableFuture<List<String>> recentCommandsFuture =
recentCommandsService.findRecentCommands(username);
ListenableFuture<UsageHistory> usageFuture =
Futures.whenAllSucceed(loginDateFuture, recentCommandsFuture)
.call(
() ->
new UsageHistory(
username,
Futures.getDone(loginDateFuture),
Futures.getDone(recentCommandsFuture)),
executor);
For more info, see the ListenableFutureExplained section of the user's guide.
If you're curious about how it works under the hood, I suggest looking at this part of the source code: AggregateFuture.java#L127-L186
Is there a standard nice way to call a blocking method with a timeout in Java? I want to be able to do:
// call something.blockingMethod();
// if it hasn't come back within 2 seconds, forget it
if that makes sense.
Thanks.
You could use an Executor:
ExecutorService executor = Executors.newCachedThreadPool();
Callable<Object> task = new Callable<Object>() {
public Object call() {
return something.blockingMethod();
}
};
Future<Object> future = executor.submit(task);
try {
Object result = future.get(5, TimeUnit.SECONDS);
} catch (TimeoutException ex) {
// handle the timeout
} catch (InterruptedException e) {
// handle the interrupts
} catch (ExecutionException e) {
// handle other exceptions
} finally {
future.cancel(true); // may or may not desire this
}
If future.get() doesn't return within 5 seconds, it throws a TimeoutException. The timeout can be configured in seconds, minutes, milliseconds or any unit available as a constant in TimeUnit.
See the JavaDoc for more detail.
You could wrap the call in a FutureTask and use the timeout version of get().
See http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/FutureTask.html
See also Guava's TimeLimiter which uses an Executor behind the scenes.
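A minimal sketch of the FutureTask approach (something.blockingMethod() is the call from the question, assumed to return an Object as in the answer above):
FutureTask<Object> task = new FutureTask<>(() -> something.blockingMethod());
new Thread(task).start(); // or hand the task to an existing ExecutorService
try {
    Object result = task.get(2, TimeUnit.SECONDS);
} catch (TimeoutException e) {
    task.cancel(true); // interrupts the worker; the blocking call may or may not react to it
} catch (InterruptedException | ExecutionException e) {
    // handle interruption or a failure inside the call
}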
It's really great that people try to implement this in so many ways. But the truth is, there is NO way.
Most developers would try to put the blocking call in a different thread and have a future or some timer. BUT there is no way in Java to stop a thread externally, apart from a few very specific cases like the Thread.sleep() and Lock.lockInterruptibly() methods, which explicitly handle thread interruption.
So really you have only 3 generic options:
Put your blocking call on a new thread and if the time expires you just move on, leaving that thread hanging. In that case you should make sure the thread is set to be a Daemon thread. This way the thread will not stop your application from terminating.
Use non blocking Java APIs. So for network for example, use NIO2 and use the non blocking methods. For reading from the console use Scanner.hasNext() before blocking etc.
If your blocking call is not IO but your own logic, you can repeatedly check Thread.currentThread().isInterrupted() to see whether it was interrupted externally, and have another thread call interrupt() on the blocking thread (see the sketch after this list)
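A sketch of option 3, combined with option 1's daemon-thread advice: the "blocking" work is a loop that cooperatively checks the interrupt flag, and the caller gives up and interrupts it after the time budget:
Thread worker = new Thread(() -> {
    while (!Thread.currentThread().isInterrupted()) {
        // ... one small unit of your own (non-IO) work ...
    }
});
worker.setDaemon(true); // option 1: a daemon thread cannot keep the JVM alive
worker.start();
try {
    worker.join(2_000); // wait at most 2 seconds for it to finish on its own
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
worker.interrupt(); // ask it to stop; this only works because the loop checks the flag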
This course about concurrency https://www.udemy.com/java-multithreading-concurrency-performance-optimization/?couponCode=CONCURRENCY
really walks through those fundamentals if you really want to understand how it works in Java. It actually talks about those specific limitations and scenarios, and how to go about them in one of the lectures.
I personally try to program without using blocking calls as much as possible. There are toolkits like Vert.x for example that make it really easy and performant to do IO and no IO operations asynchronously and in a non blocking way.
I hope it helps
There is also an AspectJ solution for that with jcabi-aspects library.
@Timeable(limit = 30, unit = TimeUnit.MINUTES)
public Soup cookSoup() {
// Cook soup, but for no more than 30 minutes (throw an exception if it takes any longer)
}
It can't get more succinct, but you have to depend on AspectJ and introduce it in your build lifecycle, of course.
There is an article explaining it further: Limit Java Method Execution Time
I'm giving you the complete code here. In place of the method I'm calling, you can use your own method:
public class NewTimeout {
public String simpleMethod() {
return "simple method";
}
public static void main(String[] args) {
ExecutorService executor = Executors.newSingleThreadScheduledExecutor();
Callable<Object> task = new Callable<Object>() {
public Object call() throws InterruptedException {
Thread.sleep(1100);
return new NewTimeout().simpleMethod();
}
};
Future<Object> future = executor.submit(task);
try {
Object result = future.get(1, TimeUnit.SECONDS);
System.out.println(result);
} catch (TimeoutException ex) {
System.out.println("Timeout............Timeout...........");
} catch (InterruptedException e) {
// handle the interrupts
} catch (ExecutionException e) {
// handle other exceptions
} finally {
executor.shutdown(); // may or may not desire this
}
}
}
Thread thread = new Thread(new Runnable() {
public void run() {
something.blockingMethod();
}
});
thread.start();
thread.join(2000);
if (thread.isAlive()) {
thread.stop();
}
Note that stop is deprecated; a better alternative is to set a volatile boolean flag, check it inside blockingMethod(), and exit, like this:
import org.junit.*;
import java.util.*;
import junit.framework.TestCase;
public class ThreadTest extends TestCase {
static class Something implements Runnable {
private volatile boolean stopRequested;
private final int steps;
private final long waitPerStep;
public Something(int steps, long waitPerStep) {
this.steps = steps;
this.waitPerStep = waitPerStep;
}
@Override
public void run() {
blockingMethod();
}
public void blockingMethod() {
try {
for (int i = 0; i < steps && !stopRequested; i++) {
doALittleBit();
}
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
}
public void doALittleBit() throws InterruptedException {
Thread.sleep(waitPerStep);
}
public void setStopRequested(boolean stopRequested) {
this.stopRequested = stopRequested;
}
}
@Test
public void test() throws InterruptedException {
final Something somethingRunnable = new Something(5, 1000);
Thread thread = new Thread(somethingRunnable);
thread.start();
thread.join(2000);
if (thread.isAlive()) {
somethingRunnable.setStopRequested(true);
thread.join(2000);
assertFalse(thread.isAlive());
} else {
fail("Exptected to be alive (5 * 1000 > 2000)");
}
}
}
You need a circuit breaker implementation like the one present in the failsafe project on GitHub.
Try this. It's a simpler solution, and it guarantees that if the block doesn't execute within the time limit, the process will terminate and throw an exception.
public class TimeoutBlock {
private final long timeoutMilliSeconds;
private long timeoutInteval=100;
public TimeoutBlock(long timeoutMilliSeconds){
this.timeoutMilliSeconds=timeoutMilliSeconds;
}
public void addBlock(Runnable runnable) throws Throwable{
long collectIntervals=0;
Thread timeoutWorker=new Thread(runnable);
timeoutWorker.start();
do{
if(collectIntervals>=this.timeoutMilliSeconds){
timeoutWorker.stop();
throw new Exception("<<<<<<<<<<****>>>>>>>>>>> Timeout Block Execution Time Exceeded In "+timeoutMilliSeconds+" Milli Seconds. Thread Block Terminated.");
}
collectIntervals+=timeoutInteval;
Thread.sleep(timeoutInteval);
}while(timeoutWorker.isAlive());
System.out.println("<<<<<<<<<<####>>>>>>>>>>> Timeout Block Executed Within "+collectIntervals+" Milli Seconds.");
}
/**
* @return the timeoutInteval
*/
public long getTimeoutInteval() {
return timeoutInteval;
}
/**
* @param timeoutInteval the timeoutInteval to set
*/
public void setTimeoutInteval(long timeoutInteval) {
this.timeoutInteval = timeoutInteval;
}
}
Example:
try {
TimeoutBlock timeoutBlock = new TimeoutBlock(10 * 60 * 1000);//set timeout in milliseconds
Runnable block=new Runnable() {
@Override
public void run() {
//TO DO write block of code
}
};
timeoutBlock.addBlock(block);// execute the runnable block
} catch (Throwable e) {
// catch the exception here: the block didn't execute within the time limit
}
In the special case of a blocking queue:
The generic java.util.concurrent.SynchronousQueue has a poll method with a timeout parameter.
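A sketch of that idea (Result and doBlockingWork are placeholders; assume the enclosing method declares throws InterruptedException). A null result means the timeout elapsed:
SynchronousQueue<Result> handoff = new SynchronousQueue<>();
new Thread(() -> {
    Result r = doBlockingWork(); // the blocking call runs on its own thread
    try {
        handoff.offer(r, 2, TimeUnit.SECONDS); // wait briefly for the caller; give up if nobody takes it
    } catch (InterruptedException ignored) {
    }
}).start();
Result result = handoff.poll(2, TimeUnit.SECONDS); // null if nothing arrived within 2 seconds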
Assume blockingMethod just sleeps for a few millis:
public void blockingMethod(Object input) {
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
My solution is to use wait() and synchronized like this:
public void blockingMethod(final Object input, long millis) {
final Object lock = new Object();
new Thread(new Runnable() {
@Override
public void run() {
blockingMethod(input);
synchronized (lock) {
lock.notify();
}
}
}).start();
synchronized (lock) {
try {
// Wait for specific millis and release the lock.
// If blockingMethod is done during waiting time, it will wake
// me up and give me the lock, and I will finish directly.
// Otherwise, when the waiting time is over and the
// blockingMethod is still
// running, I will reacquire the lock and finish.
lock.wait(millis);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
So you can replace
something.blockingMethod(input)
with
something.blockingMethod(input, 2000)
Hope it helps.