I have a scheduled executor that resets a parameter to 0 and wakes all waiting threads so they can continue processing. However, after its initial run, the task never executes again.
ScheduledExecutorService exec = Executors.newScheduledThreadPool(4);
exec.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
logger.info("Setting hourly limit record count back to 0 to continue processing");
lines = 0;
executor.notifyAll();
Thread.currentThread().interrupt();
return;
}
}, 0, 1, TimeUnit.MINUTES);
There is another executor defined in the class which runs further processing; I am not sure whether this influences it:
ExecutorService executor = Executors.newCachedThreadPool();
for (String processList : processFiles) {
String appName = processList.substring(0,processList.indexOf("-"));
String scope = processList.substring(processList.lastIndexOf("-") + 1);
logger.info("Starting execution of thread for app " + appName + " under scope: " + scope);
try {
File processedFile = new File(ConfigurationReader.processedDirectory + appName + "-" + scope + ".csv");
processedFile.createNewFile();
executor.execute(new APIInitialisation(appName,processedFile.length(),scope));
} catch (InterruptedException | IOException e) {
e.printStackTrace();
}
}
From the documentation for ScheduledExecutorService.scheduleAtFixedRate():
If any execution of the task encounters an exception, subsequent executions are suppressed.
So something in your task is throwing an exception. I would guess it is the call to executor.notifyAll(), which is documented to throw an IllegalMonitorStateException:
if the current thread is not the owner of this object's monitor.
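If that is the cause, the fix is to hold the object's monitor before calling notifyAll(). A minimal sketch, assuming your worker threads wait() on that same executor object:

exec.scheduleAtFixedRate(new Runnable() {
    @Override
    public void run() {
        logger.info("Setting hourly limit record count back to 0 to continue processing");
        synchronized (executor) {   // we must own the monitor before calling notifyAll()
            lines = 0;
            executor.notifyAll();
        }
    }
}, 0, 1, TimeUnit.MINUTES);

The sketch also drops the Thread.currentThread().interrupt() and return at the end of the task; interrupting the scheduler's own worker thread serves no purpose here.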
Your scheduled task most probably ends with an uncaught exception. Taken from the JavaDoc of ScheduledExecutorService.scheduleAtFixedRate:
If any execution of the task encounters an exception, subsequent
executions are suppressed.
Because you are provoking an uncaught exception, all further executions are cancelled.
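If you want the schedule to survive unexpected failures, a common defensive pattern (just a sketch, not specific to this code) is to catch everything inside the task so that no exception ever reaches the scheduler:

exec.scheduleAtFixedRate(() -> {
    try {
        // ... the real work: reset the counter and wake the waiting threads ...
    } catch (RuntimeException e) {
        // Log and swallow so that subsequent executions are not suppressed.
        // Assumes a logger with an error(String, Throwable) overload, as most logging frameworks provide.
        logger.error("Scheduled reset failed", e);
    }
}, 0, 1, TimeUnit.MINUTES);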
I wrote a small program to demonstrate the usage of the CountDownLatch class in Java.
But it is not working as expected. I created 5 threads and assigned a task to each thread. Each thread waits for the start signal. Once the start signal is given, all threads do their work and call countDown(). My main thread then waits for all the threads to finish their work until it receives the done signal. But the output is not as expected. Please help if I am missing anything in the concept.
Below is the program.
import java.util.Scanner;
import java.util.concurrent.CountDownLatch;

class Task implements Runnable{
private CountDownLatch startSignal;
private CountDownLatch doneSignal;
private int id;
Task(int id, CountDownLatch startSignal, CountDownLatch doneSignal){
this.startSignal = startSignal;
this.doneSignal = doneSignal;
this.id = id;
}
@Override
public void run() {
try {
startSignal.await();
performTask();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
private void performTask() {
try {
System.out.println("Task started by thread : " + id);
Thread.sleep(5000);
doneSignal.countDown();
System.out.println("Task ended by thread : " + id);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
public class CountDownLatchExample {
public static void main(String[] args) {
CountDownLatch startSignal = new CountDownLatch(1);
CountDownLatch doneSignal = new CountDownLatch(5);
for(int i=0; i < 5; ++i) {
new Thread(new Task(i, startSignal, doneSignal)).start();
}
System.out.println("Press enter to start work");
new Scanner(System.in).nextLine();
startSignal.countDown();
try {
doneSignal.await();
System.out.println("All Tasks Completed");
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
Output
Press enter to start work
Task started by thread : 0
Task started by thread : 4
Task started by thread : 3
Task started by thread : 2
Task started by thread : 1
Task ended by thread : 4
Task ended by thread : 2
Task ended by thread : 1
All Tasks Completed
Task ended by thread : 0
Task ended by thread : 3
Expected output
Press enter to start work
Task started by thread : 0
Task started by thread : 4
Task started by thread : 3
Task started by thread : 2
Task started by thread : 1
Task ended by thread : 4
Task ended by thread : 2
Task ended by thread : 1
Task ended by thread : 0
Task ended by thread : 3
All Tasks Completed
In your Task class, you have:
doneSignal.countDown();
System.out.println("Task ended by thread : " + id);
In other words, you count down the latch before you print "task ended". That allows the main thread to wake up from its call to doneSignal.await() and print "All Tasks Completed" before all the "task ended" print statements complete. Though note the "wrong output" will not always happen; sometimes you'll get your expected output.
Simply switch those two lines of code around to guarantee the output you want:
System.out.println("Task ended by thread : " + id);
doneSignal.countDown();
This ensures the print statement happens-before the doneSignal.countDown() call, which itself happens-before the main thread returns from doneSignal.await(). Thus, now the above "task ended" print statement happens-before the main thread wakes up and prints the "All Tasks Completed" message.
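Putting it together, the corrected performTask() becomes:

private void performTask() {
    try {
        System.out.println("Task started by thread : " + id);
        Thread.sleep(5000);
        System.out.println("Task ended by thread : " + id);
        doneSignal.countDown(); // count down only after this task's output has been written
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}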
From loom-lab, given the code
var virtualThreadFactory = Thread.ofVirtual().factory();
try (var executorService = Executors.newThreadPerTaskExecutor(virtualThreadFactory)) {
IntStream.range(0, 15).forEach(item -> {
executorService.submit(() -> {
try {
var milliseconds = item * 1000;
System.out.println(Thread.currentThread() + " sleeping " + milliseconds + " milliseconds");
Thread.sleep(milliseconds);
System.out.println(Thread.currentThread() + " awake");
if (item == 8) throw new RuntimeException("task 8 is acting up");
} catch (InterruptedException e) {
System.out.println("Interrupted task = " + item + ", Thread ID = " + Thread.currentThread());
}
});
});
}
catch (RuntimeException e) {
System.err.println(e.getMessage());
}
My hope was that the code would catch the RuntimeException and print the message, but it does not.
Am I hoping for too much, or will this someday work as I hope?
In response to an amazing answer by Stephen C, which I can fully appreciate, upon further exploration I discovered via
static String spawn(
ExecutorService executorService,
Callable<String> callable,
Consumer<Future<String>> consumer
) throws Exception {
try {
var result = executorService.submit(callable);
consumer.accept(result);
return result.get(3, TimeUnit.SECONDS);
}
catch (TimeoutException e) {
// The timeout expired...
return callable.call() + " - TimeoutException";
}
catch (ExecutionException e) {
// Why doesn't malcontent get caught here?
return callable.call() + " - ExecutionException";
}
catch (CancellationException e) { // future.cancel(false);
// Exception was thrown
return callable.call() + " - CancellationException";
}
catch (InterruptedException e) { // future.cancel(true);
return callable.call() + "- InterruptedException ";
}
}
and
try (var executorService = Executors.newThreadPerTaskExecutor(threadFactory)) {
Callable<String> malcontent = () -> {
Thread.sleep(Duration.ofSeconds(2));
throw new IllegalStateException("malcontent acting up");
};
System.out.println("\n\nresult = " + spawn(executorService, malcontent, (future) -> {}));
} catch (Exception e) {
e.printStackTrace(); // malcontent gets caught here
}
I was expecting malcontent to get caught in spawn as an ExecutionException per the documentation, but it does not. Consequently, I have trouble reasoning about my expectations.
Much of my hope for Project Loom was that, unlike Functional Reactive Programming, I could once again rely on Exceptions to do the right thing, and reason about them such that I could predict what would happen without having to run experiments to validate what really happens.
As Steve Jobs (at NeXT) used to say: "It just works"
So far, my posting on loom-dev@openjdk.java.net has not been responded to... which is why I have used StackOverflow. I don't know the best way to engage the Project Loom developers.
This is speculation ... but I don't think so.
According to the provisional javadocs, ExecutorService now inherits AutoCloseable, and it is specified that the default behavior of the close() method is to perform a clean shutdown and wait for it to complete. (Note that this is described as default behavior, not required behavior!)
So why couldn't they change the behavior to catch and resignal the exceptions on this thread's stack?
One problem is specifying patterns of behavior that are logically consistent for both this case and the case where the ExecutorService is not used as a resource in a try-with-resources. In order to implement the behavior in this case, the close() method has to be informed of the task's unhandled exception by some other part of the executor service. But if nothing calls close() then the exceptions can't be re-raised. And if close() is called in a finalizer or similar, there probably won't be anything to handle them. At the very least, it is complicated.
A second problem is that it would be difficult to handle the exception(s) in the general case. What if more than one task failed with an exception? What if different tasks failed with different exceptions? How does the code that handles the exception (e.g. your catch (RuntimeException e) ...) figure out which task failed?
A third problem is that this would be a breaking change. In Java 17 and earlier, the above code would not propagate any exceptions from the tasks. In Java 18 and later it would. Java 17 code that assumed there were no "random" exceptions from failed tasks delivered to this thread would break.
A fourth point is that this would be a nuisance in use-cases where the Java 18+ programmer wants to treat the executor service as a resource, but does not want to deal with "stray" exceptions on this thread. (I suspect that would be the majority of use-cases for autoclosing an executor service.)
A fifth problem (if you want to call it that) is that it is a breaking change for early adopters of Loom. (I am reading your question as saying that you tried it with Loom and it currently doesn't behave as you proposed.)
The final problem is that there are already ways to capture a task's exception and deliver it; e.g. via the Future objects returned when you submit a task. This proposal is not filling a gap in ExecutorService functionality.
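To illustrate that last point, here is a minimal sketch (plain ExecutorService usage, reusing the virtualThreadFactory from the question) that collects each task's exception through its Future after the try-with-resources has waited for completion:

List<Future<?>> futures = new ArrayList<>();
try (var executorService = Executors.newThreadPerTaskExecutor(virtualThreadFactory)) {
    IntStream.range(0, 15).forEach(item ->
        futures.add(executorService.submit(() -> {
            if (item == 8) throw new RuntimeException("task 8 is acting up");
        })));
}
// close() has waited for the tasks, so now ask each Future what happened.
for (Future<?> f : futures) {
    try {
        f.get();
    } catch (ExecutionException e) {
        System.err.println(e.getCause().getMessage());
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}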
(Phew!)
Of course, I don't know what the Java developers will actually do. And we won't collectively know until Loom is finally released as a non-preview feature of mainstream Java.
Anyhow, if you want to lobby for this, you should email the Loom mailing list about it.
LOOM has made many improvements, such as making ExecutorService an AutoCloseable, which simplifies coding by eliminating calls to shutdown / awaitTermination.
Your point on the expectation of neat exception handling applies to typical usage of ExecutorService in any JDK - not just the upcoming LOOM release - so IMO it doesn't obviously need to be tied to the LOOM work.
The error handling you wish for is quite easy to incorporate with any version of JDK by adding a few lines of code around code blocks that use ExecutorService:
var ex = new AtomicReference<RuntimeException>();
try {
// add any use of ExecutorService here
// eg OLD JDK style:
// var executorService = Executors.newFixedThreadPool(5);
try (var executorService = Executors.newThreadPerTaskExecutor(virtualThreadFactory)) {
...
if (item == 8) {
// Save exception before sending:
ex.set(new RuntimeException("task 8 is acting up"));
throw ex.get();
}
...
}
// OR: not-LOOM JDK call executorService.shutdown/awaitTermination here
// Pass on any handling problem
if (ex.get() != null)
throw ex.get();
}
catch (Exception e) {
System.err.println("Exception was: "+e.getMessage());
}
Not as elegant as you hoped for, but it works in any JDK release.
EDIT On your edited question:
You've put callable.call() inside catch (ExecutionException e) {, so you've lost the first exception and malcontent raises a second one. Add a System.out.println to see the original:
catch (ExecutionException e) {
System.out.println(Thread.currentThread()+" ExecutionException: "+e);
e.printStackTrace();
// Why doesn't malcontent get caught here?
return callable.call() + " - ExecutionException";
}
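Alternatively, a sketch of a catch block that reports the original cause instead of invoking the callable a second time (re-running malcontent inside the catch only produces a fresh IllegalStateException that escapes spawn):

catch (ExecutionException e) {
    // The task has already failed; e.getCause() is the IllegalStateException thrown by malcontent.
    return "failed - ExecutionException: " + e.getCause();
}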
I think the closest to what you are trying to achieve is
try(var executor = StructuredExecutor.open()) {
var handler = new StructuredExecutor.ShutdownOnFailure();
IntStream.range(0, 15).forEach(item -> {
executor.fork(() -> {
var milliseconds = item * 100;
System.out.println(Thread.currentThread()
+ "sleeping " + milliseconds + " milliseconds");
Thread.sleep(milliseconds);
System.out.println(Thread.currentThread() + " awake");
if(item == 8) {
throw new RuntimeException("task 8 is acting up");
}
return null;
}, handler);
});
executor.join();
handler.throwIfFailed();
}
catch(InterruptedException|ExecutionException ex) {
System.err.println("Caught in initiator thread");
ex.printStackTrace();
}
which will run all jobs in virtual threads and generate an exception in the initiator thread when one of the jobs fails. StructuredExecutor is a new tool introduced by Project Loom which makes the ownership of the created virtual threads by this specific job visible in diagnostic tools. But note that its close() won't wait for completion; rather, it requires the owner to do this before closing, throwing an exception if the developer failed to do so.
The behavior of classic ExecutorService implementations won’t change.
A solution for the ExecutorService would be
try(var executor = Executors.newVirtualThreadPerTaskExecutor()) {
var jobs = executor.invokeAll(IntStream.range(0, 15).<Callable<?>>mapToObj(item ->
() -> {
var milliseconds = item * 100;
System.out.println(Thread.currentThread()
+ " sleeping " + milliseconds + " milliseconds");
Thread.sleep(milliseconds);
System.out.println(Thread.currentThread() + " awake");
if(item == 8) {
throw new RuntimeException("task 8 is acting up");
}
return null;
}).toList());
for(var f: jobs) f.get();
}
catch(InterruptedException|ExecutionException ex) {
System.err.println("Caught in initiator thread");
ex.printStackTrace();
}
Note that while invokeAll waits for the completion of all jobs, we still need the loop calling get to enforce an ExecutionException to be thrown in the initiating thread.
I have a task where, after generating a random password for a user, the password SMS should go out after 4 minutes, but the welcome SMS should go out immediately. Since I set the password first and need to send it after 4 minutes, I make that thread sleep (I can't use ExecutorService), and then start the welcome SMS thread.
Here is the code:
String PasswordSMS="Dear User, Your password is "+'"'+"goody"+'"'+" Your FREE recharge service is LIVE now!";
String welcomeSMS="Dear goody, Welcome to XYZ";
try {
Thread q=new Thread(new GupShupSMSUtill(PasswordSMS,MOB_NUM));
Thread.sleep(4 * 60 * 1000);
q.start();
GupShupSMSUtill sendWelcomesms2=new GupShupSMSUtill(welcomeSMS, MOB_NUM);
Thread Bal3=new Thread(sendWelcomesms2);
Bal3.start();
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
If I change the order, the sendWelcomesms2 thread starts immediately. I have to send the welcome SMS first and then the password SMS (after 4 minutes). How is that achievable?
NOTE: Currently both SMS arrive after 4 minutes.
Thread.sleep(4 * 60 * 1000);
delays execution of your currently running thread, so your q.start() is not executed until the wait time is over. This order doesn't make sense.
Your thread is only created when
Thread q=new Thread(new GupShupSMSUtill(PasswordSMS,MOB_NUM));
is executed. Your thread is started when
q.start();
is executed. So if you want the q thread to run while the main thread sleeps, you should write your lines in this order:
Thread q=new Thread(new GupShupSMSUtill(PasswordSMS,MOB_NUM)); // Create thread
q.start(); // start thread
Thread.sleep(4 * 60 * 1000); // suspend main thread for 4 minutes
You can use join():
String PasswordSMS = "Dear User, Your password is " + "\"" + "goody" + "\"" + " Your FREE recharge service is LIVE now!";
String welcomeSMS = "Dear goody, Welcome to XYZ";
try
{
GupShupSMSUtill sendWelcomesms2 = new GupShupSMSUtill(welcomeSMS, MOB_NUM);
Thread Bal3 = new Thread(sendWelcomesms2);
Bal3.start();
Thread q = new Thread(new GupShupSMSUtill(PasswordSMS, MOB_NUM));
q.start();
q.join();
}
catch (InterruptedException e)
{
e.printStackTrace();
}
Or latch:
private static java.util.concurrent.CountDownLatch latch = new java.util.concurrent.CountDownLatch(1);
And the code:
String PasswordSMS = "Dear User, Your password is " + "\"" + "goody" + "\"" + " Your FREE recharge service is LIVE now!";
String welcomeSMS = "Dear goody, Welcome to XYZ";
try
{
GupShupSMSUtill sendWelcomesms2 = new GupShupSMSUtill(welcomeSMS, MOB_NUM);
Thread Bal3 = new Thread(sendWelcomesms2);
Bal3.start();
Thread q = new Thread(new GupShupSMSUtill(PasswordSMS, MOB_NUM));
q.start();
latch.await(); // Wait
}
catch (InterruptedException e)
{
e.printStackTrace();
}
At the end of the Thread "q":
latch.countDown(); // stop to wait
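If you cannot modify GupShupSMSUtill itself, a minimal sketch of wiring the countDown in through a wrapper (assuming GupShupSMSUtill is an ordinary Runnable, as in the question):

Thread q = new Thread(() -> {
    try {
        new GupShupSMSUtill(PasswordSMS, MOB_NUM).run(); // send the password SMS
    } finally {
        latch.countDown(); // release the main thread waiting in latch.await()
    }
});
q.start();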
Hint - Don't use Thread.sleep(x) in this case.
You are sleeping the current thread before you issue the start command for q.
You probably want to issue the sleep inside GupShupSMSUtill() (maybe change its signature to something like GupShupSMSUtill(PasswordSMS,MOB_NUM, sleeptime) to be able to control how long it sleeps).
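A minimal sketch of that idea, assuming a hypothetical GupShupSMSUtill(message, number, sleepMillis) constructor whose run() first sleeps for sleepMillis before sending:

// Welcome SMS: no delay, goes out immediately.
new Thread(new GupShupSMSUtill(welcomeSMS, MOB_NUM, 0)).start();

// Password SMS: the worker thread itself sleeps 4 minutes before sending,
// so the main thread is never blocked.
new Thread(new GupShupSMSUtill(PasswordSMS, MOB_NUM, 4 * 60 * 1000)).start();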
I have a Callable which starts a Thread (this Thread runs a ping process). I want to allow the user to cancel the tasks:
public class PingCallable implements Callable<PingResult> {
private ProcThread processThread;
private String ip;
public PingCallable(String ip) {
this.ip = ip;
this.processThread = new ProcThread(ip);
}
@Override
public PingResult call() throws Exception {
log.trace("Checking if the ip " + ip + " is alive");
try {
processThread.start();
try {
processThread.join();
} catch (InterruptedException e) {
log.error("The callable thread was interrupted for " + processThread.getName());
processThread.interrupt();
// Good practice to reset the interrupt flag.
Thread.currentThread().interrupt();
}
} catch (Throwable e) {
System.out.println("Throwable ");
}
return new PingResult(ip, processThread.isPingAlive());
}
}
The ProcThread, looks something like:
@Override
public void run() {
try {
process = Runtime.getRuntime().exec("the long ping", null, workDirFile);
/* Get process input and error stream, not here to keep it short*/
// waitFor is InterruptedException sensitive
exitVal = process.waitFor();
} catch (InterruptedException ex) {
log.error("interrupted " + getName(), ex);
process.destroy();
/* Stop the input and error stream handlers, not here */
// Reset the status, good practice
Thread.currentThread().interrupt();
} catch (IOException ex) {
log.error("Exception while execution", ex);
}
}
And the test:
@Test
public void test() throws ExecutionException, InterruptedException {
ExecutorService executorService = Executors.newFixedThreadPool(15);
List<Future<PingResult>> futures = new ArrayList<>();
for (int i= 0; i < 100; i++) {
PingCallable pingTask = new PingCallable("10.1.1.142");
futures.add(executorService.submit(pingTask));
}
Thread.sleep(10000);
executorService.shutdownNow();
// for (Future<PingResult> future : futures) {
// future.cancel(true);
// }
}
I monitor the ping processes using ProcessExplorer. I see 15 of them; then shutdownNow (or future.cancel(true)) is executed, but only 4-5, at most 8, of the processes are interrupted and the rest are left alive. I almost never see 15 messages saying "The callable thread was interrupted..", and the test does not finish until the processes end. Why is that?
I might not have a complete answer but there are two things to note:
shutdownNow signals a shutdown; to see if the threads have actually stopped, use awaitTermination
process.destroy() also takes time to execute so the callable should wait for that to complete after interrupting the process-thread.
I modified the code a little and found that future.cancel(true) will actually prevent execution of anything in the catch InterruptedException-block of ProcThread, unless you use executor.shutdown() instead of executor.shutdownNow(). The unit-test does finish when "Executor terminated: true" is printed (using junit 4.11).
It looks like using future.cancel(true) and executor.shutdownNow() will double-interrupt a thread and that can cause the interrupted-blocks to be skipped.
Below is the code I used for testing. Uncomment for (Future<PingResult> f : futures) f.cancel(true); together with shutdown(Now) to see the difference in output.
public class TestRunInterrupt {
static long sleepTime = 1000L;
static long killTime = 2000L;
@Test
public void testInterrupts() throws Exception {
ExecutorService executorService = Executors.newFixedThreadPool(3);
List<Future<PingResult>> futures = new ArrayList<Future<PingResult>>();
for (int i= 0; i < 100; i++) {
PingCallable pingTask = new PingCallable("10.1.1.142");
futures.add(executorService.submit(pingTask));
}
Thread.sleep(sleepTime + sleepTime / 2);
// for (Future<PingResult> f : futures) f.cancel(true);
// executorService.shutdown();
executorService.shutdownNow();
int i = 0;
while (!executorService.isTerminated()) {
System.out.println("Awaiting executor termination " + i);
executorService.awaitTermination(1000L, TimeUnit.MILLISECONDS);
i++;
if (i > 5) {
break;
}
}
System.out.println("Executor terminated: " + executorService.isTerminated());
}
static class ProcThread extends Thread {
static AtomicInteger tcount = new AtomicInteger();
int id;
volatile boolean slept;
public ProcThread() {
super();
id = tcount.incrementAndGet();
}
@Override
public void run() {
try {
Thread.sleep(sleepTime);
slept = true;
} catch (InterruptedException ie) {
// Catching an interrupted-exception clears the interrupted flag.
System.out.println(id + " procThread interrupted");
try {
Thread.sleep(killTime);
System.out.println(id + " procThread kill time finished");
} catch (InterruptedException ie2) {
System.out.println(id + "procThread killing interrupted");
}
Thread.currentThread().interrupt();
} catch (Throwable t) {
System.out.println(id + " procThread stopped: " + t);
}
}
}
static class PingCallable implements Callable<PingResult> {
ProcThread pthread;
public PingCallable(String s) {
pthread = new ProcThread();
}
@Override
public PingResult call() throws Exception {
System.out.println(pthread.id + " starting sleep");
pthread.start();
try {
System.out.println(pthread.id + " awaiting sleep");
pthread.join();
} catch (InterruptedException ie) {
System.out.println(pthread.id + " callable interrupted");
pthread.interrupt();
// wait for kill process to finish
pthread.join();
System.out.println(pthread.id + " callable interrupt done");
Thread.currentThread().interrupt();
} catch (Throwable t) {
System.out.println(pthread.id + " callable stopped: " + t);
}
return new PingResult(pthread.id, pthread.slept);
}
}
static class PingResult {
int id;
boolean slept;
public PingResult(int id, boolean slept) {
this.id = id;
this.slept = slept;
System.out.println(id + " slept " + slept);
}
}
}
Output without future.cancel(true) or with future.cancel(true) and normal shutdown():
1 starting sleep
1 awaiting sleep
2 starting sleep
3 starting sleep
2 awaiting sleep
3 awaiting sleep
1 slept true
3 slept true
2 slept true
5 starting sleep
4 starting sleep
6 starting sleep
5 awaiting sleep
6 awaiting sleep
4 awaiting sleep
4 callable interrupted
Awaiting executor termination 0
6 callable interrupted
4 procThread interrupted
5 callable interrupted
6 procThread interrupted
5 procThread interrupted
Awaiting executor termination 1
6 procThread kill time finished
5 procThread kill time finished
4 procThread kill time finished
5 callable interrupt done
5 slept false
6 callable interrupt done
4 callable interrupt done
6 slept false
4 slept false
Executor terminated: true
Output with future.cancel(true) and shutdownNow():
1 starting sleep
2 starting sleep
1 awaiting sleep
2 awaiting sleep
3 starting sleep
3 awaiting sleep
3 slept true
2 slept true
1 slept true
4 starting sleep
6 starting sleep
5 starting sleep
4 awaiting sleep
5 awaiting sleep
6 awaiting sleep
5 callable interrupted
6 callable interrupted
4 callable interrupted
5 procThread interrupted
6 procThread interrupted
4 procThread interrupted
Executor terminated: true
Yesterday I ran a series of tests; one of the most fruitful involved interrupting the threads which run the process, checking that the thread was interrupted, and that the process nevertheless was still hanging on waitFor.
I decided to investigate why the process was not detecting that the thread in which it was running had been interrupted.
I found that it is crucial to handle the streams (output, input and error) correctly, otherwise the external process will block on a full I/O buffer.
I noticed that my error handler was also blocking on reading (there was no error output). I don't know if that is an issue, but I decided to follow the suggestion and redirect the error stream to the output stream.
Finally I discovered that there is a correct way to invoke and destroy processes in Java.
The new ProcThread (as @pauli suggests, it does not extend Thread anymore; it runs in a callable. I keep the name so the difference can be noticed) looks like:
try {
ProcessBuilder builder = new ProcessBuilder(cmd);
builder.directory(new File(workDir));
builder.redirectErrorStream(true);
process = builder.start();
// any output?
sht= new StreamHandlerThread(process.getInputStream(), outBuff);
sht.start();
// waitFor is InterruptedException sensitive, so when you want the job to stop, interrupt the thread.
exitVal = process.waitFor();
sht.join();
postProcessing();
log.info("exitValue: %d", exitVal);
} catch (InterruptedException ex) {
log.error("interrupted " + Thread.currentThread().getName(), ex);
shutdownProcess();
} catch (IOException ex) {
log.error("Exception while execution", ex);
}
The shutdown process:
private void shutdownProcess() {
postProcessing();
sht.interrupt();
try {
sht.join();
} catch (InterruptedException e) {
// Restore the interrupt status; we are already shutting down.
Thread.currentThread().interrupt();
}
}
The postProcessing:
private void postProcessing() {
if (process != null) {
closeTheStream(process.getErrorStream());
closeTheStream(process.getInputStream());
closeTheStream(process.getOutputStream());
process.destroy();
}
}
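StreamHandlerThread itself is not shown in the post; the following is only a sketch of such a stream drainer (the class name and constructor arguments are taken from the snippet above, while the body and the StringBuffer type of outBuff are assumptions):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

class StreamHandlerThread extends Thread {
    private final InputStream in;
    private final StringBuffer out; // StringBuffer for thread-safety; the real type of outBuff is not shown

    StreamHandlerThread(InputStream in, StringBuffer out) {
        this.in = in;
        this.out = out;
    }

    @Override
    public void run() {
        // Drain the process output continuously so the external process
        // cannot block on a full output buffer.
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = reader.readLine()) != null) {
                out.append(line).append(System.lineSeparator());
            }
        } catch (IOException e) {
            // The stream was closed by postProcessing() or the process ended; nothing left to drain.
        }
    }
}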
It seems that Hibernate Search synchronous execution uses threads other than the calling thread for parallel execution.
How do I execute the Hibernate Search executions serially in the calling thread?
The problem seems to be in the org.hibernate.search.backend.impl.lucene.QueueProcessors class:
private void runAllWaiting() throws InterruptedException {
List<Future<Object>> futures = new ArrayList<Future<Object>>( dpProcessors.size() );
// execute all work in parallel on each DirectoryProvider;
// each DP has its own ExecutorService.
for ( PerDPQueueProcessor process : dpProcessors.values() ) {
ExecutorService executor = process.getOwningExecutor();
//wrap each Runnable in a Future
FutureTask<Object> f = new FutureTask<Object>( process, null );
futures.add( f );
executor.execute( f );
}
// and then wait for all tasks to be finished:
for ( Future<Object> f : futures ) {
if ( !f.isDone() ) {
try {
f.get();
}
catch (CancellationException ignore) {
// ignored, as in java.util.concurrent.AbstractExecutorService.invokeAll(Collection<Callable<T>>
// tasks)
}
catch (ExecutionException error) {
// rethrow cause to serviced thread - this could hide more exception:
Throwable cause = error.getCause();
throw new SearchException( cause );
}
}
}
}
A serial synchronous execution would happen in the calling thread and would expose context information such as authentication information to the underlying DirectoryProvider.
Very old question, but I might as well answer it...
Hibernate Search does that to ensure single-threaded access to the Lucene IndexWriter for a directory (which is required by Lucene). I imagine the use of a single-threaded executor per directory was a way of dealing with the queueing problem.
If you want it all to run in the calling thread you need to re-implement the LuceneBackendQueueProcessorFactory and bind it to hibernate.search.worker.backend in your hibernate properties. Not trivial, but do-able.
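For example (only a sketch; the factory class name is hypothetical and would have to implement the backend queue processor contract), the binding could be set programmatically:

// Hypothetical custom backend that executes the Lucene work in the calling thread.
org.hibernate.cfg.Configuration cfg = new org.hibernate.cfg.Configuration();
cfg.setProperty("hibernate.search.worker.backend",
        "com.example.search.InThreadBackendQueueProcessorFactory");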