App Engine Pull Queue tasks disappear before being properly handled - java

Update 4 - rephrasing question for clarity
I am using Pull Queues to feed back-end workers tasks that send push notifications. I can see the front-end instance queue the task in the logs. However, the task is only occasionally handled by the back-end. I see no indication of why the task disappears prior to being handled and deleted from the queue.
This may be related: I am seeing an unusually high number of TransientFailureExceptions when attempting to lease tasks from the queue - despite sleeping between attempts.
Everything works properly on my development server (and an earlier version had worked in production) but production is no longer working properly. At first I thought it was a certificate issue. However, notifications are sometimes sent when the backend first starts.
There is no indication that an error is happening except for the TransientFailureException when I call leaseTasks on the queue. Also, it seems to take a very long time for my logs to show up.
I can provide more information and code snippets as needed.
Thanks for the help.
Update 1:
The application uses 10 pull queues. It would normally use 2 but queue tagging is still considered experimental. They are declared in the standard fashion:
<queue>
  <name>gcm-henchdist</name>
  <mode>pull</mode>
</queue>
The lease tasks function is:
public boolean processBatchOfTasks()
{
    List< TaskHandle > tasks = attemptLeaseTasks();
    if( null == tasks || tasks.isEmpty() )
    {
        return false;
    }
    processLeasedTasks( tasks );
    return true;
}
private List< TaskHandle > attemptLeaseTasks()
{
    for( int attemptNum = 1; !LifecycleManager.getInstance().isShuttingDown(); ++attemptNum )
    {
        try
        {
            return m_taskQueue.leaseTasks( m_numLeaseTimeUnits, m_leaseTimeUnit, m_maxTasksPerLease );
        } catch( TransientFailureException exc )
        {
            LOG.warn( "TransientFailureException when leasing tasks from queue '{}'", m_taskQueue.getQueueName(), exc );
            ApiProxy.flushLogs();
        } catch( ApiDeadlineExceededException exc )
        {
            LOG.warn( "ApiDeadlineExceededException when leasing tasks from queue '{}'",
                    m_taskQueue.getQueueName(), exc );
            ApiProxy.flushLogs();
        }
        if( !backOff( attemptNum ) )
        {
            LOG.warn( "Failed to lease tasks." );
            break;
        }
    }
    return Collections.emptyList();
}
where the lease arguments m_numLeaseTimeUnits, m_leaseTimeUnit, and m_maxTasksPerLease are 30, TimeUnit.MINUTES, and 100 respectively.
the processBatchOfTasks function is polled via:
private void startPollingForClient( EClientType clientType )
{
    InterimApnsCertificateConfig config = InterimApnsCertificateConfigMgr.getConfig( clientType );
    Queue notificationQueue = QueueFactory.getQueue( config.getQueueId().getName() );
    ApplePushNotificationWorker worker = new ApplePushNotificationWorker(
            notificationQueue,
            m_messageConverter.getObjectMapper(),
            config.getCertificateBytes(),
            config.getPassword(),
            config.isProduction() );
    LOG.info( "Started worker for {} polling queue {}", clientType, notificationQueue.getQueueName() );
    while ( !LifecycleManager.getInstance().isShuttingDown() )
    {
        boolean tasksProcessed = worker.processBatchOfTasks();
        ApiProxy.flushLogs();
        if ( !tasksProcessed )
        {
            // Wait before trying to lease tasks again.
            try
            {
                //LOG.info( "Going to sleep" );
                Thread.sleep( MILLISECONDS_TO_WAIT_WHEN_NO_TASKS_LEASED );
                //LOG.info( "Waking up" );
            } catch ( InterruptedException exc )
            {
                LOG.info( "Polling loop interrupted. Terminating loop.", exc );
                return;
            }
        }
    }
    LOG.info( "Instance is shutting down" );
}
and the thread is created via:
Thread thread = ThreadManager.createBackgroundThread( new Runnable()
{
    @Override
    public void run()
    {
        startPollingForClient( clientType );
    }
} );
thread.start();
GCM notifications are handled in a similar fashion.
Update 2
The following is the backoff function. I have verified in the logs (with both GAE and my own timestamps) that the sleep is incrementing properly
private boolean backOff( int attemptNo )
{
    // Exponential back off between 2 seconds and 64 seconds with jitter
    // 0..1000 ms.
    attemptNo = Math.min( 6, attemptNo );
    int backOffTimeInSeconds = 1 << attemptNo;
    long backOffTimeInMilliseconds = backOffTimeInSeconds * 1000 + (int)( Math.random() * 1000 );
    LOG.info( "Backing off for {} milliseconds from queue '{}'", backOffTimeInMilliseconds, m_taskQueue.getQueueName() );
    ApiProxy.flushLogs();
    try
    {
        Thread.sleep( backOffTimeInMilliseconds );
    } catch( InterruptedException e )
    {
        return false;
    }
    LOG.info( "Waking up from {} milliseconds sleep for queue '{}'", backOffTimeInMilliseconds, m_taskQueue.getQueueName() );
    ApiProxy.flushLogs();
    return true;
}
Update 3
The tasks are added to the queue within a transaction on a front-end instance:
if( null != queueType )
{
    String deviceName;
    int numDevices = deviceList.size();
    for ( int iDevice = 0; iDevice < numDevices; ++iDevice )
    {
        deviceName = deviceList.get( iDevice ).getName();
        LOG.info( "Queueing Your-Turn notification for user: {} device: {} queue: {}", user.getId(), deviceName, queueType.getName() );
        Queue queue = QueueFactory.getQueue( queueType.getName() );
        queue.addAsync( TaskOptions.Builder.withMethod( TaskOptions.Method.PULL )
                .param( "alertLocKey", "NOTIF_YOUR_TURN" ).param( "device", deviceName ) );
    }
}
I know that the transaction succeeds because the database updates correctly.
In the logs I see the "Queueing Your-Turn notification..." entry, but nothing appears in the back-end logs.
In the administration panel, I see Task Queue API Calls increment by 1 as well as Task Queue Stored Task Count increment by 1. However, the queue that was written to shows zero in both the Tasks In Queue and Leased In Last Minute fields.

The TransientFailureException Javadoc says that "The requested operation may succeed if attempted again" (because the failure is transient). Therefore, when this exception is thrown, your code should loop back and repeat the leaseTasks call. App Engine does not retry the request itself, because it has notified you via the exception that you should do so.
It's a pity you echo the method name leaseTasks in one of your own methods, because now it's not clear which one I'm referring to when I mention leaseTasks. Still: wrap the inner call to m_taskQueue.leaseTasks in a while loop and an additional try block that catches only the TransientFailureException, and use a flag to end the while loop only when that exception is not thrown, as in the sketch below.
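A minimal sketch of that retry loop (MAX_LEASE_ATTEMPTS is an assumed constant, not from your code):
List< TaskHandle > tasks = null;
boolean leased = false;
for( int attempt = 1; !leased && attempt <= MAX_LEASE_ATTEMPTS; ++attempt )
{
    try
    {
        // Only a TransientFailureException causes another iteration.
        tasks = m_taskQueue.leaseTasks( m_numLeaseTimeUnits, m_leaseTimeUnit, m_maxTasksPerLease );
        leased = true; // No exception was thrown: end the loop.
    } catch( TransientFailureException exc )
    {
        // Transient failure: loop back and repeat the leaseTasks call.
    }
}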
Is that enough explanation, or do you need a complete source code listing?

It appears that the culprit may have been that I was calling addAsync when enqueuing the task, instead of just calling add.
I replaced the call and things have been working consistently since. I would like to know why this makes a difference and will update the answer when I find the reason.
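For reference, the only change was on the enqueue side; a sketch of the difference, using the same TaskOptions as in Update 3:
// Before: asynchronous enqueue inside the transaction.
queue.addAsync( TaskOptions.Builder.withMethod( TaskOptions.Method.PULL )
        .param( "alertLocKey", "NOTIF_YOUR_TURN" ).param( "device", deviceName ) );
// After: synchronous enqueue; add() blocks until the enqueue call completes.
queue.add( TaskOptions.Builder.withMethod( TaskOptions.Method.PULL )
        .param( "alertLocKey", "NOTIF_YOUR_TURN" ).param( "device", deviceName ) );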


How to correctly implement executor that runs multiple iterations and waits for all tasks to complete and successfully terminates after tasks are done

Cut-to-the-chase short answer ---------------------
Code demonstrating the accepted answer can be found here:
Full example:
https://github.com/NACHC-CAD/thread-example/tree/shutdown-first
Implementation:
https://github.com/NACHC-CAD/thread-example/blob/shutdown-first/src/main/java/com/nachc/examples/threadexample/WidgetFactory.java
Original Post -------------------------------------
There are a number of examples of use of Java threads and Executors:
https://www.baeldung.com/thread-pool-java-and-guava
https://docs.oracle.com/javase/tutorial/essential/concurrency/pools.html
https://howtodoinjava.com/java/multi-threading/java-thread-pool-executor-example/
https://jenkov.com/tutorials/java-concurrency/thread-pools.html
https://xperti.io/blogs/thread-pools-java-introduction/
https://www.journaldev.com/1069/threadpoolexecutor-java-thread-pool-example-executorservice
https://stackify.com/java-thread-pools/
However, I've not been able to successfully write an example that executes all of the tasks, waits for the tasks to complete, and then correctly terminates.
Working from this example: https://howtodoinjava.com/java/multi-threading/java-thread-pool-executor-example/
The code only calls executor.shutdown(). This does not allow the threads time to complete if they consume any time.
I've created a complete simplest example here: https://github.com/NACHC-CAD/thread-example/tree/await-termination
The shutdown only branch covers this use case (https://github.com/NACHC-CAD/thread-example/tree/shutdown-only):
public void makeWidgets() {
    ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newFixedThreadPool(batchSize);
    log.info("Building " + howMany + " widgets...");
    for (int i = 0; i < howMany; i++) {
        Widget widget = new Widget(lotNumber, i);
        WidgetRunnable runnable = new WidgetRunnable(widget);
        executor.execute(runnable);
    }
    log.info("SHUTTING DOWN----------------");
    executor.shutdown();
}
This code gives the following output (there should be 100 widgets created, and they should report that they are done after waiting 1 second):
2022-04-23 21:27:05,796 21:27:05.796 [main] INFO (WidgetFactoryIntegrationTest.java:12) - Starting test...
2022-04-23 21:27:05,799 21:27:05.799 [main] INFO (WidgetFactory.java:29) - Building 100 widgets...
2022-04-23 21:27:05,800 21:27:05.800 [pool-1-thread-2] INFO (Widget.java:24) - Starting build: 1/1
2022-04-23 21:27:05,800 21:27:05.800 [pool-1-thread-4] INFO (Widget.java:24) - Starting build: 1/3
2022-04-23 21:27:05,800 21:27:05.800 [pool-1-thread-1] INFO (Widget.java:24) - Starting build: 1/0
2022-04-23 21:27:05,800 21:27:05.800 [pool-1-thread-5] INFO (Widget.java:24) - Starting build: 1/4
2022-04-23 21:27:05,800 21:27:05.800 [pool-1-thread-6] INFO (Widget.java:24) - Starting build: 1/5
2022-04-23 21:27:05,800 21:27:05.800 [pool-1-thread-7] INFO (Widget.java:24) - Starting build: 1/6
2022-04-23 21:27:05,800 21:27:05.800 [pool-1-thread-8] INFO (Widget.java:24) - Starting build: 1/7
2022-04-23 21:27:05,800 21:27:05.800 [pool-1-thread-10] INFO (Widget.java:24) - Starting build: 1/9
2022-04-23 21:27:05,800 21:27:05.800 [pool-1-thread-9] INFO (Widget.java:24) - Starting build: 1/8
2022-04-23 21:27:05,801 21:27:05.801 [main] INFO (WidgetFactory.java:35) - SHUTTING DOWN----------------
2022-04-23 21:27:05,800 21:27:05.800 [pool-1-thread-3] INFO (Widget.java:24) - Starting build: 1/2
2022-04-23 21:27:05,801 21:27:05.801 [main] INFO (WidgetFactoryIntegrationTest.java:18) - Done.
If I add executor.awaitTermination the code runs all threads but never terminates. This example is in the await-termination branch: https://github.com/NACHC-CAD/thread-example/tree/await-termination
public void makeWidgets() {
    ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newFixedThreadPool(batchSize);
    log.info("Building " + howMany + " widgets...");
    for (int i = 0; i < howMany; i++) {
        Widget widget = new Widget(lotNumber, i);
        WidgetRunnable runnable = new WidgetRunnable(widget);
        executor.execute(runnable);
    }
    try {
        executor.awaitTermination(1000, TimeUnit.HOURS);
    } catch(Exception exp) {
        throw(new RuntimeException(exp));
    }
    log.info("SHUTTING DOWN----------------");
    executor.shutdown();
}
This code lets all of the runnables finish but never exits. How do I let all of the runnables finish and have the code run to completion (exit)?
With reference to the ThreadPoolExecutor documentation, the awaitTermination() method description reads:
Blocks until all tasks have completed execution after a shutdown request
while the shutdown() method description reads:
Initiates an orderly shutdown in which previously submitted tasks are executed, but no new tasks will be accepted
This indicates that the awaitTermination() call is only effective after a shutdown() call.
To solve the above problem, shutdown() needs to be called first, and then awaitTermination().
NOTE: I have not personally tested this; however, John has, as mentioned in the comments on the original post, and the mechanism works.
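Applied to the await-termination example above, the calls would be reordered like this (a sketch, untested, per the note above):
executor.shutdown();  // Stop accepting new tasks first.
try {
    // Then block until the previously submitted tasks finish (or the timeout elapses).
    executor.awaitTermination(1000, TimeUnit.HOURS);
} catch(Exception exp) {
    throw(new RuntimeException(exp));
}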
The Answer by Ironluca is correct. Here are some additional points and example code.
For one thing, there is no need to declare & cast ThreadPoolExecutor directly. Just use the more general ExecutorService.
And using a thread pool sized to your batch size seems unwise. In current Java, you generally want an active thread pool to be less than the count of CPU cores. (This calculus will change radically if Project Loom and its virtual threads succeeds, but that is not the reality today, though you can try the early-access build.)
int threadPoolSize = 3 ; // Generally less than number of cores.
ExecutorService executorService = Executors.newFixedThreadPool( threadPoolSize );
Let's simplify your example scenario. We define Widget as a simple record.
record Widget ( UUID id , Instant whenCreated ) {}
Define a task that produces a Widget. We want to get back a Widget object, so we use Callable rather than Runnable.
Callable < Widget > makeWidgetTask = ( ) -> {
    Thread.sleep( Duration.ofMillis( 50 ).toMillis() ); // Pretend that we have a long-running task.
    Widget widget = new Widget( UUID.randomUUID() , Instant.now() );
    return widget;
};
Make a big collection, to be used in running that task many times.
List < Callable < Widget > > tasks = Collections.nCopies( 1_000 , makeWidgetTask );
Actually, we need to wrap in a try-catch.
List < Future < Widget > > futures = null;
try
{
    futures = executorService.invokeAll( tasks );
}
catch ( InterruptedException e )
{
    throw new RuntimeException( e );
}
Submit all those tasks to the executor service. Notice how we get back a list of Future objects. A Future is our handle to the success or failure of each task’s completion.
As for how to wait for completion, and how to use ExecutorService#shutdown, shutdownNow, and awaitTermination, merely read the Javadoc. 👉 A full example of boilerplate code is provided for you.
To quote the Javadoc:
pool.shutdown(); // Disable new tasks from being submitted
try {
    // Wait a while for existing tasks to terminate
    if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
        pool.shutdownNow(); // Cancel currently executing tasks
        // Wait a while for tasks to respond to being cancelled
        if (!pool.awaitTermination(60, TimeUnit.SECONDS))
            System.err.println("Pool did not terminate");
    }
} catch (InterruptedException ex) {
    // (Re-)Cancel if current thread also interrupted
    pool.shutdownNow();
    // Preserve interrupt status
    Thread.currentThread().interrupt();
}
The key concept is that shutdown does not stop any work-in-progress. All tasks currently under execution will continue. All submitted tasks will eventually be scheduled for execution on a core as a thread becomes available. The shutdown method does only one thing: Stop any further tasks from being submitted to this executor service. To quote the Javadoc:
shutdown() … previously submitted tasks are executed, but no new tasks will be accepted.
To quote further:
This method does not wait for previously submitted tasks to complete execution. Use awaitTermination to do that.
So you need to call awaitTermination after calling shutdown. You pass arguments for a reasonable time in which you expect all submitted tasks to be completed or cancelled or interrupted. If that time limit elapses, then you can presume something has gone wrong.
Notice that the call to shutdown does not block, but the call to awaitTermination does block.
Let's adapt the boilerplate code to our own example.
executorService.shutdown(); // Disable new tasks from being submitted.
try
{
    if ( ! executorService.awaitTermination( 60 , TimeUnit.SECONDS ) )
    {
        executorService.shutdownNow(); // Cancel currently executing tasks.
        // Wait a while for tasks to respond to being cancelled.
        if ( ! executorService.awaitTermination( 60 , TimeUnit.SECONDS ) )
        { System.err.println( "Executor service did not terminate." ); }
    }
}
catch ( InterruptedException ex )
{
    executorService.shutdownNow(); // (Re-)Cancel if current thread also interrupted
    Thread.currentThread().interrupt(); // Preserve interrupt status
}
Finally, review our results by examining the collection of Future objects.
System.out.println( "Count futures: " + futures.size() );
for ( Future < Widget > future : futures )
{
if ( ! future.isDone() ) { System.out.println( "Oops! Task not done: " + future.toString() ); }
else if ( future.isCancelled() ) { System.out.println( "Bummer. Task cancelled: " + future.toString() ); }
else // Else task must have completed successfully.
{
try
{
Widget widget = future.get();
System.out.println( widget.toString() );
}
catch ( InterruptedException e )
{
throw new RuntimeException( e );
}
catch ( ExecutionException e )
{
throw new RuntimeException( e );
}
}
}
Add some elapsed time code at top and bottom.
long start = System.nanoTime();
…
System.out.println( "Elapsed: " + Duration.ofNanos( System.nanoTime() - start ) );
On my M1 MacBook Pro with 8 real cores, on Java 18, that takes about 18 seconds.
Count futures: 1000
Widget[id=56e594bf-75a6-4cf1-83fc-2b671873c534, whenCreated=2022-04-25T07:00:18.977719Z]
Widget[id=11373948-0689-467a-9ace-1e8d57f40f40, whenCreated=2022-04-25T07:00:18.977721Z]
…
Widget[id=d3b11574-6c11-41cc-9f26-c24ad53aa18c, whenCreated=2022-04-25T07:00:36.747058Z]
Widget[id=017ff453-da92-4296-992e-2c2a2ac44ed8, whenCreated=2022-04-25T07:00:36.748571Z]
Elapsed: PT17.906065583S
Full example code, for your copy-paste convenience.
package work.basil.example.threading;

import java.time.Duration;
import java.time.Instant;
import java.util.Collections;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.*;

public class App
{
    public static void main ( String[] args )
    {
        long start = System.nanoTime();
        int threadPoolSize = 3; // Generally less than number of cores.
        ExecutorService executorService = Executors.newFixedThreadPool( threadPoolSize );
        record Widget( UUID id , Instant whenCreated )
        {
        }
        Callable < Widget > makeWidgetTask = ( ) -> {
            Thread.sleep( Duration.ofMillis( 50 ).toMillis() ); // Pretend that we have a long-running task.
            Widget widget = new Widget( UUID.randomUUID() , Instant.now() );
            return widget;
        };
        List < Callable < Widget > > tasks = Collections.nCopies( 1_000 , makeWidgetTask );
        List < Future < Widget > > futures = null;
        try
        {
            futures = executorService.invokeAll( tasks );
        }
        catch ( InterruptedException e )
        {
            throw new RuntimeException( e );
        }
        executorService.shutdown(); // Disable new tasks from being submitted.
        try
        {
            if ( ! executorService.awaitTermination( 60 , TimeUnit.SECONDS ) )
            {
                executorService.shutdownNow(); // Cancel currently executing tasks.
                // Wait a while for tasks to respond to being cancelled.
                if ( ! executorService.awaitTermination( 60 , TimeUnit.SECONDS ) )
                { System.err.println( "Executor service did not terminate." ); }
            }
        }
        catch ( InterruptedException ex )
        {
            executorService.shutdownNow(); // (Re-)Cancel if current thread also interrupted
            Thread.currentThread().interrupt(); // Preserve interrupt status
        }
        System.out.println( "Count futures: " + futures.size() );
        for ( Future < Widget > future : futures )
        {
            if ( ! future.isDone() ) { System.out.println( "Oops! Task not done: " + future.toString() ); }
            else if ( future.isCancelled() ) { System.out.println( "Bummer. Task cancelled: " + future.toString() ); }
            else // Else task must have completed successfully.
            {
                try
                {
                    Widget widget = future.get();
                    System.out.println( widget.toString() );
                }
                catch ( InterruptedException e )
                {
                    throw new RuntimeException( e );
                }
                catch ( ExecutionException e )
                {
                    throw new RuntimeException( e );
                }
            }
        }
        System.out.println( "Elapsed: " + Duration.ofNanos( System.nanoTime() - start ) );
    }
}

Will the Apache Camel library hold memory for a long time?

I am using verbose GC to capture some data and analyze the memory usage of my application.
I have a module that pulls data from the database or a third party, puts it into a list object, and then returns it to the front end for display.
When I choose a date range, it pulls the data from the database.
When I choose today's date, my application sends a request to the MQ server, and the MQ server responds to my application with an XML message, which I then handle using the Apache Camel library.
Here is my verbose GC screenshot when pulling data from the database:
As you can see, every time I trigger the search function, the memory usage increases and then drops back. This is normal, and what I expected.
And this is the verbose GC screenshot when pulling data from the third party:
As you can see, after the memory increases, it plateaus for a period and only then drops back.
I suspect that org.apache.camel.Exchange or org.apache.camel.Message, or other Camel objects, are holding the memory for a longer time.
Here is some of my code to handle the XML message from the third party:
/**
 * Camel Exchange producer template
 */
protected ProducerTemplate< Exchange > template;

@SuppressWarnings("unchecked")
private < T > T doSend(final Object request, final String headerName,
        final Object headerObject,
        final SendEaiMessageTemplateCallBack callback)
        throws BaseRuntimeException {
    log.debug( "doSend START >> {} ", request );
    if ( this.requestObjectValidator != null
            && requestObjectValidator
                    .requiredValidation( requestObjectValidator ) ) {
        requestObjectValidator.validateRequest( request );
    }
    final Exchange exchange = template.request( to, new Processor( ) {
        public void process(final Exchange exchange) throws Exception {
            exchange.getIn( ).setBody( request );
            if ( headerName != null && headerObject != null ) {
                exchange.getIn( ).setHeader( headerName, headerObject );
            }
        }
    } );
    log.debug( "doSend >> END >> exchange is failed? {}",
            exchange.isFailed( ) );
    Message outBoundMessage = null;
    if ( callback != null ) {
        // provide the callBack method to access exchange
        callback.exchangeCallBack( exchange );
    }
    if ( exchange.isFailed( ) ) {
        failedHandler.handleExchangeFailed( exchange, request );
    } else {
        outBoundMessage = exchange.getOut( false );
    }
    // handle the outbound message
    if ( this.outboundMessageHandler != null ) {
        this.outboundMessageHandler.handleMessage( outBoundMessage );
    }
    if ( outBoundMessage != null ) {
        if ( outBoundMessage.getBody( ) != null ) {
            log.debug( "OutBoundMessage body {}", outBoundMessage.getBody( ) );
        }
        return (T) outBoundMessage.getBody( );
    } else {
        return null;
    }
}
Because of this, my application was hitting an OutOfMemoryError. I am not sure whether it is caused by the Apache Camel library or not; kindly advise.
Other than that, when I open the heap dump file, 52% of the heap is attributed to com/ibm/xml/xlxp2/scan/util/SimpleDataBufferFactory$DataBufferLink.
The rest is attributed to "Java heap is used by this char[] alone", which is a sub-category under DataBufferLink as well.
When I googled this, everything pointed to the XML message being too large.
I have no idea in which direction to continue troubleshooting; can anyone advise on this?
FYI, I am using camel-core-1.5.0.jar

Parallel processing using a collection of CompletableFuture.supplyAsync, then collecting the results

// Unit of logic I want to run in parallel
public PagesDTO convertOCRStreamToDTO(String pageId, Integer pageSequence) throws Exception {
    LOG.info("Get OCR begin for pageId [{}] thread name {}", pageId, Thread.currentThread().getName());
    OcrContent ocrContent = getOcrContent(pageId);
    OcrDTO ocrData = populateOCRData(ocrContent.getInputStream());
    PagesDTO pageDTO = new PagesDTO(pageId, pageSequence.toString(), ocrData);
    return pageDTO;
}
Logic to execute convertOCRStreamToDTO(..) in parallel and then collect the results when each individual thread's execution is done:
List<PagesDTO> pageDTOList = new ArrayList<>();
//javadoc: Creates a work-stealing thread pool using all available processors as its target parallelism level.
ExecutorService newWorkStealingPool = Executors.newWorkStealingPool();
Instant start = Instant.now();
List<CompletableFuture<PagesDTO>> pendingTasks = new ArrayList<>();
List<CompletableFuture<PagesDTO>> completedTasks = new ArrayList<>();
CompletableFuture<PagesDTO> task = null;
for (InputPageDTO dcInputPageDTO : dcReqDTO.getPages()) {
    String pageId = dcInputPageDTO.getPageId();
    task = CompletableFuture
            .supplyAsync(() -> {
                try {
                    return convertOCRStreamToDTO(pageId, pageSequence.getAndIncrement());
                } catch (HttpHostConnectException | ConnectTimeoutException e) {
                    LOG.error("Error connecting to Redis for pageId [{}]", pageId, e);
                    CaptureException e1 = new CaptureException(Error.getErrorCodes().get(ErrorCodeConstants.REDIS_CONNECTION_FAILURE),
                            " Connecting to the Redis failed while getting OCR for pageId ["+pageId +"] " + e.getMessage(), CaptureErrorComponent.REDIS_CACHE, e);
                    exceptionMap.put(pageId, e1);
                } catch (CaptureException e) {
                    LOG.error("Error in Document Classification Engine Service while getting OCR for pageId [{}]", pageId, e);
                    exceptionMap.put(pageId, e);
                } catch (Exception e) {
                    LOG.error("Error getting OCR content for the pageId [{}]", pageId, e);
                    CaptureException e1 = new CaptureException(Error.getErrorCodes().get(ErrorCodeConstants.TECHNICAL_FAILURE),
                            "Error while getting ocr content for pageId : ["+pageId +"] " + e.getMessage(), CaptureErrorComponent.REDIS_CACHE, e);
                    exceptionMap.put(pageId, e1);
                }
                return null;
            }, newWorkStealingPool);
    //collect all async tasks
    pendingTasks.add(task);
}
//TODO: How to avoid the unnecessary looping happening here just for the sake of waiting for the future tasks to complete???
//TODO: Looking for the best solution
while (pendingTasks.size() > 0) {
    for (CompletableFuture<PagesDTO> futureTask : pendingTasks) {
        if (futureTask != null && futureTask.isDone()) {
            completedTasks.add(futureTask);
            pageDTOList.add(futureTask.get());
        }
    }
    pendingTasks.removeAll(completedTasks);
}
//Throw the exception caught while converting the OCR stream to DTO - for any of the pageIds
for (InputPageDTO dcInputPageDTO : dcReqDTO.getPages()) {
    if (exceptionMap.containsKey(dcInputPageDTO.getPageId())) {
        CaptureException e = exceptionMap.get(dcInputPageDTO.getPageId());
        throw e;
    }
}
LOG.info("Parallel processing time taken for {} pages = {}", dcReqDTO.getPages().size(),
        org.springframework.util.StringUtils.deleteAny(Duration.between(start, Instant.now()).toString().toLowerCase(), "pt-"));
Please look at the TODO items in my code above. I have the following two concerns, for which I am looking for advice:
1) I want to avoid the unnecessary looping happening in the while loop above. What is the best way to wait for all threads to complete their async execution and then collect the results? Does anybody have advice?
2) The ExecutorService instance is created at my service bean class level, on the assumption that it will be re-used for every request, instead of being created local to the method and shut down in a finally block. Am I doing this right, or does my thought process need any correction?
Simply remove the while and the if and you are good:
for (CompletableFuture<PagesDTO> futureTask : pendingTasks) {
    completedTasks.add(futureTask);
    pageDTOList.add(futureTask.get());
}
get() (as well as join()) will wait for the future to complete before returning a value. Also, there is no need to test for null since your list will never contain any.
You should, however, probably change the way you handle exceptions. CompletableFuture has a specific mechanism for handling them and rethrowing them when get()/join() is called. You might simply want to wrap your checked exceptions in CompletionException, as sketched below.
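For illustration, a sketch of that approach adapted to your loop (the names are yours; the CompletionException wrapping and the allOf() wait are the only new parts, and this replaces the exceptionMap bookkeeping for the rethrow case):
task = CompletableFuture
        .supplyAsync(() -> {
            try {
                return convertOCRStreamToDTO(pageId, pageSequence.getAndIncrement());
            } catch (Exception e) {
                // Wrap the checked exception; join()/get() will rethrow it later.
                throw new CompletionException(e);
            }
        }, newWorkStealingPool);
pendingTasks.add(task);

// After the submission loop: wait once for everything, then collect.
CompletableFuture.allOf(pendingTasks.toArray(new CompletableFuture[0])).join();
for (CompletableFuture<PagesDTO> futureTask : pendingTasks) {
    pageDTOList.add(futureTask.join()); // join() rethrows the CompletionException, if any.
}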

Java concurrency: more than one thread does not get executed in thread pool

I have a project with a client that makes calls to the web server. The web server continuously keeps one connection open to another server, which serves an XML file. This XML file gets converted into Java objects. To perform these actions, I use a thread pool.
First there is a worker thread with a while loop. In the loop I call another method to retrieve the XML data:
public static void startXmlRetrieval( String sbeSystem, String params ) throws Exception {
    Processable xmlEventTask = sTaskWorkQueue.poll();
    if ( xmlEventTask == null )
        xmlEventTask = new EventXMLTask( sInstance );
    else if ( !(xmlEventTask instanceof EventXMLTask) )
        xmlEventTask = new EventXMLTask( sInstance );
    ((EventXMLTask) xmlEventTask).setXmlRetrievalParams( params );
    ((EventXMLTask) xmlEventTask).setXmlRetrievalAddr( sbeSystem );
    sTaskWorkQueue.add( xmlEventTask );
    synchronized ( sLockObject ) {
        System.out.println( "Initiating thread to retrieve XML feed" );
        sThreadPool.execute( ((EventXMLTask) xmlEventTask).getTaskRunnable());
        // send the continuous thread from the ServerSessionHandler class to wait for this execution to finish
        sLockObject.wait();
    }
}
The above method executes, and control comes back (via the state callback below) to the object where the method is declared:
@Override
public void handleState( Processable task, int state ) throws Exception {
    switch ( state ) {
        case XML_RETRIEVAL_FAILED:
            recycleXMLTask( (EventXMLTask) task );
            //throw new Exception( "XML retrieval failed" );
        case XML_RETRIEVAL_COMPLETED:
            synchronized ( sLockObject ) {
                ConvertManager.startXMLConversion( sInstance, sXmlData );
            }
            break;
        case DATABASE_RETRIEVAL_FAILED:
        case DATABASE_RETRIEVAL_COMPLETED:
            recycleDatabaseTask( (DatabaseTask) task );
            break;
        case UNX_COMMAND_EXEC_FAILED:
        case UNX_COMMAND_EXEC_COMPLETED:
            recycleUnxCommandExecTask( (CommandOutputRetrievalTask) task );
            break;
        case ConvertManager.XML_CONVERSION_COMPLETED:
            synchronized ( sLockObject ) {
                removeXMLEventRetrieval( (TaskBase) task );
                sLockObject.notifyAll();
            }
            break;
    }
}
In the "XML_RETRIEVAL_COMPLETED" case I want to submit another task to the same thread pool to execute the conversion of the XML data.
The problem is that the ConvertManager.startXMLConversion method is executed, but when it comes to submitting a callable (FutureTask) on the thread pool, the callable's call method is never executed.
In the debugger, the thread group shows "WAIT", and the thread is currently stuck in the Unsafe.park method, which is called from the FutureTask.awaitDone method.
Please help me figure out what the thread is waiting for. I used the synchronized statement to make the one thread wait for the other, but the other one only executes to a certain point and then stops. I also tried playing around with notify and notifyAll on sLockObject, without any success.
The ConvertManager.startXMLConversion method looks as follows:
public static List<XMLEventData> startXMLConversion( AbstractManager mng, Document xmlDocument ) throws Exception {
    sInstance.mCallingManager = mng;
    List<XMLEventData> retVal;
    Processable converterTask = sInstance.sTaskWorkQueue.poll();
    try {
        if ( converterTask == null )
            converterTask = new XMLToXMLEventConverterTask( sInstance );
        else if ( !(converterTask instanceof XMLToXMLEventConverterTask) )
            converterTask = new XMLToXMLEventConverterTask( sInstance );
        else if ( ((XMLToXMLEventConverterTask) converterTask).getTaskCallable() == null ) {
            converterTask = new XMLToXMLEventConverterTask( sInstance );
        }
        sTaskWorkQueue.add( converterTask );
        ((XMLToXMLEventConverterTask) converterTask).setXmlDocument( xmlDocument );
        System.out.println( "Starting new thread to convert XML data" );
        retVal = (List<XMLEventData>) sThreadPool.submit( ((XMLToXMLEventConverterTask) converterTask).getTaskRunnable() ).get();
    } catch ( Exception e ) {
        e.printStackTrace();
        throw new Exception( e );
    }
    return retVal;
}
Thank you in advance!

Hibernate Search synchronous execution in main thread

It seems that Hibernate Search synchronous execution uses threads other than the calling thread for parallel execution.
How do I execute the Hibernate Search executions serially in the calling thread?
The problem seems to be in the org.hibernate.search.backend.impl.lucene.QueueProcessors class :
private void runAllWaiting() throws InterruptedException {
    List<Future<Object>> futures = new ArrayList<Future<Object>>( dpProcessors.size() );
    // execute all work in parallel on each DirectoryProvider;
    // each DP has it's own ExecutorService.
    for ( PerDPQueueProcessor process : dpProcessors.values() ) {
        ExecutorService executor = process.getOwningExecutor();
        //wrap each Runnable in a Future
        FutureTask<Object> f = new FutureTask<Object>( process, null );
        futures.add( f );
        executor.execute( f );
    }
    // and then wait for all tasks to be finished:
    for ( Future<Object> f : futures ) {
        if ( !f.isDone() ) {
            try {
                f.get();
            }
            catch (CancellationException ignore) {
                // ignored, as in java.util.concurrent.AbstractExecutorService.invokeAll(Collection<Callable<T>>
                // tasks)
            }
            catch (ExecutionException error) {
                // rethrow cause to serviced thread - this could hide more exception:
                Throwable cause = error.getCause();
                throw new SearchException( cause );
            }
        }
    }
}
A serial synchronous execution would happen in the calling thread and would expose context information such as authentication information to the underlying DirectoryProvider.
Very old question, but I might as well answer it...
Hibernate Search does that to ensure single-threaded access to the Lucene IndexWriter for a directory (which is required by Lucene). I imagine the use of a single-threaded executor per directory was a way of dealing with the queueing problem.
If you want it all to run in the calling thread, you need to re-implement the LuceneBackendQueueProcessorFactory and bind it to hibernate.search.worker.backend in your Hibernate properties. Not trivial, but doable.
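For example, the binding in your Hibernate properties would look something like this (the factory class name here is hypothetical; you would supply your own implementation):
# hypothetical custom backend that runs the Lucene work in the calling thread
hibernate.search.worker.backend = com.example.search.InCallerThreadBackendQueueProcessorFactory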
