Tracking thread failures - java

I have HTML output (displaying the results of the threads) that is displayed after all threads complete (I wait for completion using join).
Sometimes individual threads can throw exceptions.
If there are no exceptions in any thread, I want to display the HTML in my browser.
If there is an exception in all threads, then I want to NOT display the HTML.
If there is an exception in some but not all threads, then I want TO display the HTML.
What's the easiest way (least amount of code) to implement something that can track whether a thread has failed or not?

You can use CompletableFuture for this purpose. For example:
val future1: CompletableFuture<String> = CompletableFuture.supplyAsync {
    println("This is your thread 1 code")
    "<html><head><title>"
}
val future2: CompletableFuture<String> = CompletableFuture.supplyAsync {
    println("This is your thread 2 code")
    if (Random().nextBoolean()) throw RuntimeException("Failed")
    "Title!</title></head></html>"
}
future1.thenCombine(future2) { result1, result2 -> result1 + result2 }.whenComplete { s, throwable ->
    if (throwable != null) {
        println("failed")
    } else {
        println("done with $s")
    }
}
And in Kotlin 1.1 you will be able to write this code in a more readable way:
async {
    try {
        val s1 = await(future1)
        val s2 = await(future2)
        println(s1 + s2)
    } catch (e: Exception) {
        println("failed")
    }
}
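If you are working in plain Java rather than Kotlin, a minimal sketch of the question's "display unless every thread failed" rule could count successes with handle(), which maps a failure to a fallback value instead of propagating it. The HTML fragments and the random failure here are placeholders:

import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ThreadFailureTracker {
    public static void main(String[] args) {
        // Each future represents one worker thread producing part of the page.
        List<CompletableFuture<String>> futures = List.of(
            CompletableFuture.supplyAsync(() -> "<html><head><title>"),
            CompletableFuture.supplyAsync(() -> {
                if (Math.random() < 0.5) throw new RuntimeException("Failed");
                return "Title!</title></head></html>";
            }));

        // handle() turns a failed future into null, so join() never throws here.
        long succeeded = futures.stream()
            .map(f -> f.handle((result, error) -> error == null ? result : null).join())
            .filter(result -> result != null)
            .count();

        // Display the HTML unless every thread failed.
        if (succeeded > 0) {
            System.out.println("display HTML (" + succeeded + " of " + futures.size() + " threads succeeded)");
        } else {
            System.out.println("all threads failed - do not display");
        }
    }
}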

Related

Thread blocking even when LinkedBlockingQueue is empty

To keep the investigation simple, I have only one thread producing an entity and one thread consuming it. These two parts share a LinkedBlockingQueue. After consuming an entity, the consumer passes it forward to another thread that saves the entity in the DB. The producing thread stops working after a few iterations of inserting and removing entities via the queue. The debug logging makes it look as if the queue blocks the insert operation even when the queue is empty or has enough space.
Producer code:
final BlockingQueue<Entity> queue = new LinkedBlockingQueue<>(8); // located in calling method
// ...
do {
    List<Entity> entityList = entityDatasource.getEntity();
    for (Entity entity : entityList) {
        try {
            log.debug("Size before insert operation is: " + queue.size());
            queue.put(entity);
            log.debug("Size after insert operation is: " + queue.size());
        } catch (InterruptedException ex) {
            // ...
        }
    }
} while (atomicBool.get());
Consumer code:
CompletableFuture<Void> queueHandler = CompletableFuture.runAsync(() -> {
    do {
        try {
            log.debug("Queue size is: " + queue.size());
            Entity entity = queue.take();
            log.debug("Queue size is: " + queue.size());
            storeInDb(entity);
        } catch (InterruptedException ex) {
            // ...
        }
    } while (atomicBool.get());
}, asyncPoolQueueHandler); // ThreadPoolTaskExecutor
List<CompletableFuture<Void>> pool = new ArrayList<>();
IntStream.range(0, 1).forEach(i -> {
    pool.add(queueHandler);
});
CompletableFuture.allOf(pool.toArray(CompletableFuture[]::new));
DB store:
CompletableFuture
    .supplyAsync(() -> {
        return entityRep.save(entity);
    }, asyncPoolDbPerformer).join(); // ThreadPoolTaskExecutor
(VisualVM screenshot omitted)
I watched VisualVM, but there is nothing unexpected to me: when the producer gets stuck, the other parts of the pipeline are motionless. I would be grateful for advice on what I could do about my issue.
The problem was a wrong design. Producer-consumer was not the right solution here. A more appropriate way is a synchronous blocking pipeline scaled by the performance of the bottleneck. In my case I'm bounded by the performance of the database connection pool:
(dataSource -> businessLogic -> dataDestination) x N
where N is the scale factor
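A rough Java sketch of that pipeline shape, where loadEntities, process and saveToDb are hypothetical stand-ins for the dataSource, businessLogic and dataDestination stages:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class PipelineSketch {
    record Entity(String id) {}

    static List<Entity> loadEntities() { return List.of(new Entity("1")); } // dataSource
    static Entity process(Entity e) { return e; }                           // businessLogic
    static void saveToDb(Entity e) { /* blocking DB save */ }               // dataDestination

    // n = the scale factor, sized to the bottleneck (here the DB connection pool)
    static void runPipelines(int n) {
        ExecutorService pool = Executors.newFixedThreadPool(n);
        for (int i = 0; i < n; i++) {
            pool.submit(() -> {
                // each worker runs the whole pipeline synchronously, end to end
                while (!Thread.currentThread().isInterrupted()) {
                    for (Entity e : loadEntities()) {
                        saveToDb(process(e));
                    }
                }
            });
        }
    }
}

There is no queue between the stages, so no stage can outrun another; throughput is bounded directly by the slowest stage.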

How to prevent IllegalStateException in reactor when you need to block

We have a synchronous process that needs to call two REST endpoints, where the result of the first is needed for the second. Using Spring's WebClient, the .block() causes the following exception:
java.lang.IllegalStateException: block()/blockFirst()/blockLast() are blocking, which is not supported in thread parallel-2
How can this be prevented?
Here is a simplified code snippet:
var job = webClient.createJob().block();
if (job == null || StringUtils.isBlank(job.getId())) {
    throw new Exception("WebClient did not return with a job id");
}
batchRecords(job.getId(), records); // does some additional calls to the webClient
This works in the unit test, but when called through a #RestController the above exception is thrown.
EDIT:
The batchRecords method currently also has blocking Monos in it, so we can have a delay in between:
public void batchRecords(final String jobId, final List<InventoryRecord> records) throws InterruptedException {
    var recordCount = 0;
    var inventoryPositions = new ArrayList<InventoryPosition>();
    var recordIterator = records.iterator();
    while (recordIterator != null && recordIterator.hasNext()) {
        var inventoryRecord = recordIterator.next();
        inventoryPositions.add(mapInventoryPosition(inventoryRecord));
        recordCount++;
        if (inventoryPositions.size() == batchSize) {
            var response = createBatch(jobId, inventoryPositions);
            Thread.sleep(sleepTime);
            response.block();
            inventoryPositions = new ArrayList<>();
        }
    }
}
You should do it reactively without blocking:
webClient.createJob()
    .filter(job -> !StringUtils.isBlank(job.getId()))
    .flatMap(job -> batchRecords(job.getId(), records))
    .switchIfEmpty(Mono.error(new Exception("WebClient did not return with a job id")));
As soon as the createJob operation finishes, the result is filtered and handed to the flatMap operator; in case of an empty response (Mono.empty()), switchIfEmpty raises the exception. Note that this assumes batchRecords is also rewritten to return a Mono rather than blocking internally.
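If batchRecords itself still blocks internally (as in the question's edit), it can be rewritten along the same lines. A sketch reusing the question's mapInventoryPosition, createBatch, batchSize and sleepTime, and assuming createBatch returns a Mono; delaySubscription replaces the Thread.sleep:

import java.time.Duration;
import java.util.List;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public Mono<Void> batchRecords(final String jobId, final List<InventoryRecord> records) {
    return Flux.fromIterable(records)
        .map(this::mapInventoryPosition)                      // record -> InventoryPosition
        .buffer(batchSize)                                    // group positions into batches
        .concatMap(batch -> createBatch(jobId, batch)         // strictly one batch after another
            .delaySubscription(Duration.ofMillis(sleepTime))) // non-blocking delay between batches
        .then();                                              // completes when every batch is done
}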

Will Exceptions in Project Loom someday percolate up through ExecutorService contexts?

From loom-lab, given the code
var virtualThreadFactory = Thread.ofVirtual().factory();
try (var executorService = Executors.newThreadPerTaskExecutor(virtualThreadFactory)) {
    IntStream.range(0, 15).forEach(item -> {
        executorService.submit(() -> {
            try {
                var milliseconds = item * 1000;
                System.out.println(Thread.currentThread() + " sleeping " + milliseconds + " milliseconds");
                Thread.sleep(milliseconds);
                System.out.println(Thread.currentThread() + " awake");
                if (item == 8) throw new RuntimeException("task 8 is acting up");
            } catch (InterruptedException e) {
                System.out.println("Interrupted task = " + item + ", Thread ID = " + Thread.currentThread());
            }
        });
    });
}
catch (RuntimeException e) {
    System.err.println(e.getMessage());
}
My hope was that the code would catch the RuntimeException and print the message, but it does not.
Am I hoping for too much, or will this someday work as I hope?
In response to an amazing answer by Stephen C, which I can fully appreciate, upon further exploration I discovered via
static String spawn(
        ExecutorService executorService,
        Callable<String> callable,
        Consumer<Future<String>> consumer
) throws Exception {
    try {
        var result = executorService.submit(callable);
        consumer.accept(result);
        return result.get(3, TimeUnit.SECONDS);
    }
    catch (TimeoutException e) {
        // The timeout expired...
        return callable.call() + " - TimeoutException";
    }
    catch (ExecutionException e) {
        // Why doesn't malcontent get caught here?
        return callable.call() + " - ExecutionException";
    }
    catch (CancellationException e) { // future.cancel(false);
        // Exception was thrown
        return callable.call() + " - CancellationException";
    }
    catch (InterruptedException e) { // future.cancel(true);
        return callable.call() + " - InterruptedException";
    }
}
and
try (var executorService = Executors.newThreadPerTaskExecutor(threadFactory)) {
    Callable<String> malcontent = () -> {
        Thread.sleep(Duration.ofSeconds(2));
        throw new IllegalStateException("malcontent acting up");
    };
    System.out.println("\n\nresult = " + spawn(executorService, malcontent, (future) -> {}));
} catch (Exception e) {
    e.printStackTrace(); // malcontent gets caught here
}
I was expecting malcontent to get caught in spawn as an ExecutionException per the documentation, but it is not. Consequently, I have trouble reasoning about my expectations.
Much of my hope for Project Loom was that, unlike Functional Reactive Programming, I could once again rely on Exceptions to do the right thing, and reason about them such that I could predict what would happen without having to run experiments to validate what really happens.
As Steve Jobs (at NeXT) used to say: "It just works"
So far, my posting on loom-dev@openjdk.java.net has not been responded to... which is why I have used StackOverflow. I don't know the best way to engage the Project Loom developers.
This is speculation ... but I don't think so.
According to the provisional javadocs, ExecutorService now inherits AutoCloseable, and it is specified that the default behavior of the close() method is to perform a clean shutdown and wait for it to complete. (Note that this is described as default behavior, not required behavior!)
So why couldn't they change the behavior to catch and re-signal the exceptions on this thread's stack?
One problem is specifying patterns of behavior that are logically consistent both for this case and for the case where the ExecutorService is not used as a resource in a try-with-resources. In order to implement the behavior in this case, the close() method has to be informed by some other part of the executor service of the task's unhandled exception. But if nothing calls close(), then the exceptions can't be re-raised. And if close() is called in a finalizer or similar, there probably won't be anything to handle them. At the very least, it is complicated.
A second problem is that it would be difficult to handle the exception(s) in the general case. What if more than one task failed with an exception? What if different tasks failed with different exceptions? How does the code that handles the exception (e.g. your catch (RuntimeException e)) figure out which task failed?
A third problem is that this would be a breaking change. In Java 17 and earlier, the above code would not propagate any exceptions from the tasks. In Java 18 and later it would. Java 17 code that assumed there were no "random" exceptions from failed tasks delivered to this thread would break.
A fourth point is that this would be a nuisance in use-cases where the Java 18+ programmer wants to treat the executor service as a resource, but does not want to deal with "stray" exceptions on this thread. (I suspect that would be the majority of use-cases for autoclosing an executor service.)
A fifth problem (if you want to call it that) is that it is a breaking change for early adopters of Loom. (I am reading your question as saying that you tried it with Loom and it currently doesn't behave as you proposed.)
The final problem is that there are already ways to capture a task's exception and deliver it; e.g. via the Future objects returned when you submit a task. This proposal is not filling a gap in ExecutorService functionality.
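To make that last point concrete, here is a minimal, self-contained sketch of the existing mechanism: the Future returned by submit captures the task's exception, and get() rethrows it wrapped in an ExecutionException.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureExceptionDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Callable<String> task = () -> { throw new IllegalStateException("task acting up"); };
        Future<String> future = executor.submit(task);
        try {
            future.get(); // the task's exception resurfaces here
        } catch (ExecutionException e) {
            System.err.println("caught cause: " + e.getCause()); // the IllegalStateException
        } finally {
            executor.shutdown();
        }
    }
}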
(Phew!)
Of course, I don't know what the Java developers will actually do. And we won't collectively know until Loom is finally released as a non-preview feature of mainstream Java.
Anyhow, if you want to lobby for this, you should email the Loom mailing list about it.
LOOM has made many improvements, such as making ExecutorService an AutoCloseable, which simplifies coding by eliminating calls to shutdown / awaitTermination.
Your point about the expectation of neat exception handling applies to typical usage of ExecutorService in any JDK - not just the upcoming LOOM release - so IMO it isn't obviously necessary to tie it in with the LOOM work.
The error handling you wish for is quite easy to incorporate with any version of JDK by adding a few lines of code around code blocks that use ExecutorService:
var ex = new AtomicReference<RuntimeException>();
try {
    // add any use of ExecutorService here
    // e.g. OLD JDK style:
    // var executorService = Executors.newFixedThreadPool(5);
    try (var executorService = Executors.newThreadPerTaskExecutor(virtualThreadFactory)) {
        ...
        if (item == 8) {
            // Save the exception before throwing:
            ex.set(new RuntimeException("task 8 is acting up"));
            throw ex.get();
        }
        ...
    }
    // OR: non-LOOM JDK: call executorService.shutdown/awaitTermination here
    // Pass on any handling problem
    if (ex.get() != null)
        throw ex.get();
}
catch (Exception e) {
    System.err.println("Exception was: " + e.getMessage());
}
Not as elegant as you hoped for, but it works in any JDK release.
EDIT On your edited question:
You've put callable.call() inside catch (ExecutionException e) {, so you've lost the first exception: malcontent is in fact caught there, but re-running it raises a second exception. Add a System.out.println to see the original:
catch (ExecutionException e) {
    System.out.println(Thread.currentThread() + " ExecutionException: " + e);
    e.printStackTrace();
    // Why doesn't malcontent get caught here?
    return callable.call() + " - ExecutionException";
}
I think the closest to what you are trying to achieve is
try (var executor = StructuredExecutor.open()) {
    var handler = new StructuredExecutor.ShutdownOnFailure();
    IntStream.range(0, 15).forEach(item -> {
        executor.fork(() -> {
            var milliseconds = item * 100;
            System.out.println(Thread.currentThread()
                + " sleeping " + milliseconds + " milliseconds");
            Thread.sleep(milliseconds);
            System.out.println(Thread.currentThread() + " awake");
            if (item == 8) {
                throw new RuntimeException("task 8 is acting up");
            }
            return null;
        }, handler);
    });
    executor.join();
    handler.throwIfFailed();
}
catch (InterruptedException | ExecutionException ex) {
    System.err.println("Caught in initiator thread");
    ex.printStackTrace();
}
which will run all jobs in virtual threads and raise an exception in the initiator thread when one of the jobs fails. StructuredExecutor is a new tool introduced by Project Loom which allows diagnostic tools to show that the created virtual threads are owned by this specific job. But note that its close() won't wait for completion; rather, it requires the owner to do this before closing, throwing an exception if the developer failed to do so.
The behavior of classic ExecutorService implementations won’t change.
A solution for the ExecutorService would be
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    var jobs = executor.invokeAll(IntStream.range(0, 15).<Callable<?>>mapToObj(item ->
        () -> {
            var milliseconds = item * 100;
            System.out.println(Thread.currentThread()
                + " sleeping " + milliseconds + " milliseconds");
            Thread.sleep(milliseconds);
            System.out.println(Thread.currentThread() + " awake");
            if (item == 8) {
                throw new RuntimeException("task 8 is acting up");
            }
            return null;
        }).toList());
    for (var f : jobs) f.get();
}
catch (InterruptedException | ExecutionException ex) {
    System.err.println("Caught in initiator thread");
    ex.printStackTrace();
}
Note that while invokeAll waits for the completion of all jobs, we still need the loop calling get to force an ExecutionException to be thrown in the initiating thread.

CompletionStage.thenCompose not executing serially

I'm trying to use Java 8 CompletionStages to execute two asynchronous methods serially, so that the second is not executed if the first fails. But when I call thenCompose, the function passed in seems to start before the previous function is complete (i.e. the two functions erroneously execute in parallel). Here is the code:
public CompletionStage<Graph> create(Payload payload) {
    CompletionStage<BlobInfo> fileFuture = createFile(payload);
    CompletionStage<Entity> metadataFuture = createMetadata(payload);
    return fileFuture
        .thenCompose(ignore -> metadataFuture)
        .thenApply(entity -> buildFromEntity(objectMapper, entity));
}
public CompletionStage<BlobInfo> createFile(Payload payload) {
    return CompletableFuture.supplyAsync(() -> {
        try {
            return storage.create(
                BlobInfo.newBuilder(payload.bucket, payload.name).build(),
                payload.data.getBytes());
        } catch (StorageException e) {
            LOG.error("Failed to write to storage: " + e);
            throw new RequestHandlerException(StatusCode.SERVER_ERROR,
                "Failed to write to storage.");
        }
    });
}
public CompletionStage<Entity> createMetadata(Payload payload) {
    return CompletableFuture.supplyAsync(() -> createMetadataSync(payload));
}
private Entity createMetadataSync(Payload payload) {
    Key key = keyFactory.newKey(payload.id);
    Entity.Builder entityBuilder = GraphPayload.buildEntityFromGraph(payload, key);
    Entity entity = entityBuilder.build();
    LOG.error("Metadata.createSync");
    try {
        datastore.add(entity);
    } catch (DatastoreException e) {
        LOG.error("Failed to write initial metadata: " + e);
        throw new RequestHandlerException(StatusCode.SERVER_ERROR,
            "Failed to write initial metadata.");
    }
    return entity;
}
OUTPUT:
16:57:47.530 [ForkJoinPool.commonPool-worker-3] ERROR com.spotify.nfgraphstore.store.FileStore - CreateFile
16:57:47.530 [ForkJoinPool.commonPool-worker-2] ERROR com.spotify.nfgraphstore.store.MetadataStore - Metadata.createSync
16:57:47.530 [ForkJoinPool.commonPool-worker-3] ERROR com.spotify.nfgraphstore.store.FileStore - Failed to write initial graph to storage: com.google.cloud.storage.StorageException: X
The logged output demonstrates that Metadata.createSync is getting executed before the Storage exception gets thrown. This conclusion is also borne out by a test (not shown) which is supposed to show zero interactions with the metadata DB if the write to the file storage DB fails. That test sometimes fails, suggesting a race condition.
So I'm left thinking thenCompose does not guarantee serial execution. But everything I've read in the java docs suggests execution should be serial: https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html#thenCompose-java.util.function.Function-
Does anyone know why execution is not guaranteed to be serial, or recommend other functions that might work more as I've intended?
The call to createMetadata launches the task immediately, because it is not called as part of the lambda expression passed to thenCompose.
Perhaps you meant to do this:
.thenCompose(ignore -> createMetadata(payload))
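Applied to the create method from the question, the whole chain would look like this (a sketch using the question's own names):

public CompletionStage<Graph> create(Payload payload) {
    // createMetadata is now invoked inside the lambda, so it cannot start
    // until the stage returned by createFile has completed successfully
    return createFile(payload)
        .thenCompose(ignore -> createMetadata(payload))
        .thenApply(entity -> buildFromEntity(objectMapper, entity));
}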

Parallel processing using collection of CompletableFuture supplyAsync then collecting results

// Unit of logic I want to run in parallel
public PagesDTO convertOCRStreamToDTO(String pageId, Integer pageSequence) throws Exception {
    LOG.info("Get OCR begin for pageId [{}] thread name {}", pageId, Thread.currentThread().getName());
    OcrContent ocrContent = getOcrContent(pageId);
    OcrDTO ocrData = populateOCRData(ocrContent.getInputStream());
    PagesDTO pageDTO = new PagesDTO(pageId, pageSequence.toString(), ocrData);
    return pageDTO;
}
Logic to execute convertOCRStreamToDTO(..) in parallel, then collect its results when the individual threads' execution is done:
List<PagesDTO> pageDTOList = new ArrayList<>();
// javadoc: Creates a work-stealing thread pool using all available processors as its target parallelism level.
ExecutorService newWorkStealingPool = Executors.newWorkStealingPool();
Instant start = Instant.now();
List<CompletableFuture<PagesDTO>> pendingTasks = new ArrayList<>();
List<CompletableFuture<PagesDTO>> completedTasks = new ArrayList<>();
CompletableFuture<PagesDTO> task = null;
for (InputPageDTO dcInputPageDTO : dcReqDTO.getPages()) {
    String pageId = dcInputPageDTO.getPageId();
    task = CompletableFuture
        .supplyAsync(() -> {
            try {
                return convertOCRStreamToDTO(pageId, pageSequence.getAndIncrement());
            } catch (HttpHostConnectException | ConnectTimeoutException e) {
                LOG.error("Error connecting to Redis for pageId [{}]", pageId, e);
                CaptureException e1 = new CaptureException(Error.getErrorCodes().get(ErrorCodeConstants.REDIS_CONNECTION_FAILURE),
                    " Connecting to the Redis failed while getting OCR for pageId [" + pageId + "] " + e.getMessage(), CaptureErrorComponent.REDIS_CACHE, e);
                exceptionMap.put(pageId, e1);
            } catch (CaptureException e) {
                LOG.error("Error in Document Classification Engine Service while getting OCR for pageId [{}]", pageId, e);
                exceptionMap.put(pageId, e);
            } catch (Exception e) {
                LOG.error("Error getting OCR content for the pageId [{}]", pageId, e);
                CaptureException e1 = new CaptureException(Error.getErrorCodes().get(ErrorCodeConstants.TECHNICAL_FAILURE),
                    "Error while getting ocr content for pageId : [" + pageId + "] " + e.getMessage(), CaptureErrorComponent.REDIS_CACHE, e);
                exceptionMap.put(pageId, e1);
            }
            return null;
        }, newWorkStealingPool);
    // collect all async tasks
    pendingTasks.add(task);
}
// TODO: How to avoid the unnecessary looping happening here just for the sake of waiting for the future tasks to complete???
// TODO: Looking for the best solutions
while (pendingTasks.size() > 0) {
    for (CompletableFuture<PagesDTO> futureTask : pendingTasks) {
        if (futureTask != null && futureTask.isDone()) {
            completedTasks.add(futureTask);
            pageDTOList.add(futureTask.get());
        }
    }
    pendingTasks.removeAll(completedTasks);
}
// Throw the exception caught while converting the OCR stream to DTO - for any of the pageIds
for (InputPageDTO dcInputPageDTO : dcReqDTO.getPages()) {
    if (exceptionMap.containsKey(dcInputPageDTO.getPageId())) {
        CaptureException e = exceptionMap.get(dcInputPageDTO.getPageId());
        throw e;
    }
}
LOG.info("Parallel processing time taken for {} pages = {}", dcReqDTO.getPages().size(),
    org.springframework.util.StringUtils.deleteAny(Duration.between(start, Instant.now()).toString().toLowerCase(), "pt-"));
Please look at the TODO items in my code above; I have the following two concerns for which I am looking for advice on Stack Overflow:
1) I want to avoid the unnecessary looping (happening in the while loop above). What is the best way to wait for all threads to complete their async execution and then collect my results out of it? Does anybody have advice?
2) The ExecutorService instance is created at my service bean class level, thinking that it will be re-used for every request, instead of creating it local to the method and shutting it down in finally. Am I doing this right? Or is there any correction to my thought process?
Simply remove the while and the if and you are good:
for (CompletableFuture<PagesDTO> futureTask : pendingTasks) {
    completedTasks.add(futureTask);
    pageDTOList.add(futureTask.get());
}
get() (as well as join()) will wait for the future to complete before returning a value. Also, there is no need to test for null, since your list will never contain any.
You should, however, probably change the way you handle exceptions. CompletableFuture has a specific mechanism for handling them and rethrowing them when get()/join() is called. You might simply want to wrap your checked exceptions in a CompletionException.
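For example, a sketch of that mechanism using the question's own identifiers: rethrow the checked CaptureException wrapped in a java.util.concurrent.CompletionException inside the supplier, and it resurfaces from join() (as a CompletionException) or get() (as an ExecutionException) instead of traveling through exceptionMap:

task = CompletableFuture.supplyAsync(() -> {
    try {
        return convertOCRStreamToDTO(pageId, pageSequence.getAndIncrement());
    } catch (Exception e) {
        // resurfaces later from join()/get(); the original e is available via getCause()
        throw new CompletionException(e);
    }
}, newWorkStealingPool);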
