Hibernate Search synchronous execution in main thread - java

It seems that Hibernate Search's synchronous execution uses threads other than the calling thread for parallel execution.
How do I make Hibernate Search execute its work serially in the calling thread?
The problem seems to be in the org.hibernate.search.backend.impl.lucene.QueueProcessors class:
private void runAllWaiting() throws InterruptedException {
    List<Future<Object>> futures = new ArrayList<Future<Object>>( dpProcessors.size() );
    // execute all work in parallel on each DirectoryProvider;
    // each DP has it's own ExecutorService.
    for ( PerDPQueueProcessor process : dpProcessors.values() ) {
        ExecutorService executor = process.getOwningExecutor();
        //wrap each Runnable in a Future
        FutureTask<Object> f = new FutureTask<Object>( process, null );
        futures.add( f );
        executor.execute( f );
    }
    // and then wait for all tasks to be finished:
    for ( Future<Object> f : futures ) {
        if ( !f.isDone() ) {
            try {
                f.get();
            }
            catch (CancellationException ignore) {
                // ignored, as in java.util.concurrent.AbstractExecutorService.invokeAll(Collection<Callable<T>>
                // tasks)
            }
            catch (ExecutionException error) {
                // rethrow cause to serviced thread - this could hide more exception:
                Throwable cause = error.getCause();
                throw new SearchException( cause );
            }
        }
    }
}
A serial synchronous execution would happen in the calling thread and would expose context, such as authentication information, to the underlying DirectoryProvider.

Very old question, but I might as well answer it...
Hibernate Search does that to ensure single-threaded access to the Lucene IndexWriter for a directory (which is required by Lucene). I imagine the use of a single-threaded executor per directory was a way of dealing with the queueing problem.
If you want it all to run in the calling thread, you need to re-implement the LuceneBackendQueueProcessorFactory and bind it to hibernate.search.worker.backend in your Hibernate properties. Not trivial, but doable.
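For what it's worth, a rough sketch of the shape such a re-implementation might take, assuming the Hibernate Search 3.x-era SPI (the interface and method names below are approximated from that era and may differ between versions). With a synchronous worker, the returned Runnable is invoked in the calling thread:
import java.util.List;
import java.util.Properties;

import org.hibernate.search.backend.BackendQueueProcessorFactory;
import org.hibernate.search.backend.LuceneWork;
import org.hibernate.search.engine.SearchFactoryImplementor;

// Hypothetical skeleton - not the actual Hibernate Search implementation.
public class CallingThreadQueueProcessorFactory implements BackendQueueProcessorFactory {

    private SearchFactoryImplementor searchFactory;

    public void initialize(Properties props, SearchFactoryImplementor searchFactory) {
        this.searchFactory = searchFactory;
    }

    public Runnable getProcessor(final List<LuceneWork> queue) {
        return new Runnable() {
            public void run() {
                // With a synchronous worker this runs in the calling thread,
                // so thread-local context (e.g. authentication) is visible here.
                applyWork(queue, searchFactory);
            }
        };
    }

    private void applyWork(List<LuceneWork> queue, SearchFactoryImplementor sf) {
        // Apply each LuceneWork against its DirectoryProvider here, taking
        // care to serialize access to each IndexWriter yourself, since Lucene
        // requires single-threaded IndexWriter access.
    }
}
It would then be bound with hibernate.search.worker.backend=com.example.CallingThreadQueueProcessorFactory (package name illustrative).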


Will Exceptions in Project Loom someday percolate up through ExecutorService contexts?

From loom-lab, given the code
var virtualThreadFactory = Thread.ofVirtual().factory();
try (var executorService = Executors.newThreadPerTaskExecutor(virtualThreadFactory)) {
    IntStream.range(0, 15).forEach(item -> {
        executorService.submit(() -> {
            try {
                var milliseconds = item * 1000;
                System.out.println(Thread.currentThread() + " sleeping " + milliseconds + " milliseconds");
                Thread.sleep(milliseconds);
                System.out.println(Thread.currentThread() + " awake");
                if (item == 8) throw new RuntimeException("task 8 is acting up");
            } catch (InterruptedException e) {
                System.out.println("Interrupted task = " + item + ", Thread ID = " + Thread.currentThread());
            }
        });
    });
}
catch (RuntimeException e) {
    System.err.println(e.getMessage());
}
My hope was that the code would catch the RuntimeException and print the message, but it does not.
Am I hoping for too much, or will this someday work as I hope?
In response to an amazing answer by Stephen C, which I fully appreciate, upon further exploration I discovered via
static String spawn(
        ExecutorService executorService,
        Callable<String> callable,
        Consumer<Future<String>> consumer
) throws Exception {
    try {
        var result = executorService.submit(callable);
        consumer.accept(result);
        return result.get(3, TimeUnit.SECONDS);
    }
    catch (TimeoutException e) {
        // The timeout expired...
        return callable.call() + " - TimeoutException";
    }
    catch (ExecutionException e) {
        // Why doesn't malcontent get caught here?
        return callable.call() + " - ExecutionException";
    }
    catch (CancellationException e) { // future.cancel(false);
        // Exception was thrown
        return callable.call() + " - CancellationException";
    }
    catch (InterruptedException e) { // future.cancel(true);
        return callable.call() + " - InterruptedException";
    }
}
and
try (var executorService = Executors.newThreadPerTaskExecutor(threadFactory)) {
    Callable<String> malcontent = () -> {
        Thread.sleep(Duration.ofSeconds(2));
        throw new IllegalStateException("malcontent acting up");
    };
    System.out.println("\n\nresult = " + spawn(executorService, malcontent, (future) -> {}));
} catch (Exception e) {
    e.printStackTrace(); // malcontent gets caught here
}
I was expecting malcontent to get caught in spawn as an ExecutionException per the documentation, but it is not. Consequently, I have trouble reasoning about my expectations.
Much of my hope for Project Loom was that, unlike Functional Reactive Programming, I could once again rely on Exceptions to do the right thing, and reason about them such that I could predict what would happen without having to run experiments to validate what really happens.
As Steve Jobs (at NeXT) used to say: "It just works"
So far, my posting on loom-dev@openjdk.java.net has not received a response, which is why I have used Stack Overflow. I don't know the best way to engage the Project Loom developers.
This is speculation ... but I don't think so.
According to the provisional javadocs, ExecutorService now inherits AutoCloseable, and it is specified that the default behavior of the close() method is to perform a clean shutdown and wait for it to complete. (Note that this is described as default behavior, not required behavior!)
So why couldn't they change the behavior to catch and re-signal the exceptions on this thread's stack?
One problem is specifying patterns of behavior that are logically consistent both for this case and for the case where the ExecutorService is not used as a resource in a try-with-resources. In order to implement the behavior in this case, the close() method has to be informed by some other part of the executor service of the task's unhandled exception. But if nothing calls close(), then the exceptions can't be re-raised. And if close() is called in a finalizer or similar, there probably won't be anything to handle them. At the very least, it is complicated.
A second problem is that it would be difficult to handle the exception(s) in the general case. What if more than one task failed with an exception? What if different tasks failed with different exceptions? How does the code that handles the exception (e.g. your catch (RuntimeException e)) figure out which task failed?
A third problem is that this would be a breaking change. In Java 17 and earlier, the above code would not propagate any exceptions from the tasks. In Java 18 and later it would. Java 17 code that assumed there were no "random" exceptions from failed tasks delivered to this thread would break.
A fourth point is that this would be a nuisance in use-cases where the Java 18+ programmer wants to treat the executor service as a resource, but does not want to deal with "stray" exceptions on this thread. (I suspect that would be the majority of use-cases for autoclosing an executor service.)
A fifth problem (if you want to call it that) is that it is a breaking change for early adopters of Loom. (I am reading your question as saying that you tried it with Loom and it currently doesn't behave as you proposed.)
The final problem is that there are already ways to capture a task's exception and deliver it; e.g. via the Future objects returned when you submit a task. This proposal is not filling a gap in ExecutorService functionality.
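To illustrate that existing mechanism, here is a minimal self-contained sketch (plain JDK, nothing Loom-specific): the exception thrown inside a task is captured by its Future and resurfaces as the cause of an ExecutionException when you call get().
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureExceptionDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Callable<String> failing = () -> {
                throw new IllegalStateException("task acting up");
            };
            Future<String> future = pool.submit(failing);
            try {
                future.get(); // rethrows the task's failure, wrapped
            } catch (ExecutionException e) {
                // e.getCause() is the original IllegalStateException
                System.err.println("caught: " + e.getCause());
            }
        } finally {
            pool.shutdown();
        }
    }
}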
(Phew!)
Of course, I don't know what the Java developers will actually do. And we won't collectively know until Loom is finally released as a non-preview feature of mainstream Java.
Anyhow, if you want to lobby for this, you should email the Loom mailing list about it.
LOOM has made many improvements, such as making ExecutorService an AutoCloseable, which simplifies coding by eliminating calls to shutdown / awaitTermination.
Your point about the expectation of neat exception handling applies to typical usage of ExecutorService in any JDK - not just the upcoming LOOM release - so IMO it doesn't obviously need to be tied in with the LOOM work.
The error handling you wish for is quite easy to incorporate with any version of the JDK by adding a few lines of code around the code blocks that use ExecutorService:
var ex = new AtomicReference<RuntimeException>();
try {
    // add any use of ExecutorService here
    // eg OLD JDK style:
    // var executorService = Executors.newFixedThreadPool(5);
    try (var executorService = Executors.newThreadPerTaskExecutor(virtualThreadFactory)) {
        ...
        if (item == 8) {
            // Save exception before sending:
            ex.set(new RuntimeException("task 8 is acting up"));
            throw ex.get();
        }
        ...
    }
    // OR: not-LOOM JDK call executorService.shutdown/awaitTermination here
    // Pass on any handling problem
    if (ex.get() != null)
        throw ex.get();
}
catch (Exception e) {
    System.err.println("Exception was: " + e.getMessage());
}
Not as elegant as you hoped for, but it works in any JDK release.
EDIT On your edited question:
You've put callable.call() as the code inside catch (ExecutionException e) {, so you've lost the first exception, and malcontent raises a second exception. Add a System.out.println to see the original:
catch (ExecutionException e) {
    System.out.println(Thread.currentThread() + " ExecutionException: " + e);
    e.printStackTrace();
    // Why doesn't malcontent get caught here?
    return callable.call() + " - ExecutionException";
}
I think the closest to what you are trying to achieve is
try (var executor = StructuredExecutor.open()) {
    var handler = new StructuredExecutor.ShutdownOnFailure();
    IntStream.range(0, 15).forEach(item -> {
        executor.fork(() -> {
            var milliseconds = item * 100;
            System.out.println(Thread.currentThread()
                + " sleeping " + milliseconds + " milliseconds");
            Thread.sleep(milliseconds);
            System.out.println(Thread.currentThread() + " awake");
            if (item == 8) {
                throw new RuntimeException("task 8 is acting up");
            }
            return null;
        }, handler);
    });
    executor.join();
    handler.throwIfFailed();
}
catch (InterruptedException | ExecutionException ex) {
    System.err.println("Caught in initiator thread");
    ex.printStackTrace();
}
which will run all jobs in virtual threads and raise an exception in the initiator thread when one of the jobs fails. StructuredExecutor is a new tool introduced by Project Loom which makes the ownership of the created virtual threads by this specific job visible to diagnostic tools. But note that its close() won't wait for completion; rather, it requires the owner to do this before closing, throwing an exception if the developer failed to do so.
The behavior of classic ExecutorService implementations won’t change.
A solution for the ExecutorService would be
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    var jobs = executor.invokeAll(IntStream.range(0, 15).<Callable<?>>mapToObj(item ->
        () -> {
            var milliseconds = item * 100;
            System.out.println(Thread.currentThread()
                + " sleeping " + milliseconds + " milliseconds");
            Thread.sleep(milliseconds);
            System.out.println(Thread.currentThread() + " awake");
            if (item == 8) {
                throw new RuntimeException("task 8 is acting up");
            }
            return null;
        }).toList());
    for (var f : jobs) f.get();
}
catch (InterruptedException | ExecutionException ex) {
    System.err.println("Caught in initiator thread");
    ex.printStackTrace();
}
Note that while invokeAll waits for the completion of all jobs, we still need the loop calling get to force an ExecutionException to be thrown in the initiating thread.

Parallel processing using collection of CompletableFuture supplyAsync then collecting results

// Unit of logic I want to run in parallel
public PagesDTO convertOCRStreamToDTO(String pageId, Integer pageSequence) throws Exception {
    LOG.info("Get OCR begin for pageId [{}] thread name {}", pageId, Thread.currentThread().getName());
    OcrContent ocrContent = getOcrContent(pageId);
    OcrDTO ocrData = populateOCRData(ocrContent.getInputStream());
    PagesDTO pageDTO = new PagesDTO(pageId, pageSequence.toString(), ocrData);
    return pageDTO;
}
Logic to execute convertOCRStreamToDTO(..) in parallel, then collect the results when each thread's execution is done:
List<PagesDTO> pageDTOList = new ArrayList<>();
// javadoc: Creates a work-stealing thread pool using all available processors as its target parallelism level.
ExecutorService newWorkStealingPool = Executors.newWorkStealingPool();
Instant start = Instant.now();
List<CompletableFuture<PagesDTO>> pendingTasks = new ArrayList<>();
List<CompletableFuture<PagesDTO>> completedTasks = new ArrayList<>();
CompletableFuture<PagesDTO> task = null;
for (InputPageDTO dcInputPageDTO : dcReqDTO.getPages()) {
    String pageId = dcInputPageDTO.getPageId();
    task = CompletableFuture
        .supplyAsync(() -> {
            try {
                return convertOCRStreamToDTO(pageId, pageSequence.getAndIncrement());
            } catch (HttpHostConnectException | ConnectTimeoutException e) {
                LOG.error("Error connecting to Redis for pageId [{}]", pageId, e);
                CaptureException e1 = new CaptureException(Error.getErrorCodes().get(ErrorCodeConstants.REDIS_CONNECTION_FAILURE),
                    " Connecting to the Redis failed while getting OCR for pageId [" + pageId + "] " + e.getMessage(), CaptureErrorComponent.REDIS_CACHE, e);
                exceptionMap.put(pageId, e1);
            } catch (CaptureException e) {
                LOG.error("Error in Document Classification Engine Service while getting OCR for pageId [{}]", pageId, e);
                exceptionMap.put(pageId, e);
            } catch (Exception e) {
                LOG.error("Error getting OCR content for the pageId [{}]", pageId, e);
                CaptureException e1 = new CaptureException(Error.getErrorCodes().get(ErrorCodeConstants.TECHNICAL_FAILURE),
                    "Error while getting ocr content for pageId : [" + pageId + "] " + e.getMessage(), CaptureErrorComponent.REDIS_CACHE, e);
                exceptionMap.put(pageId, e1);
            }
            return null;
        }, newWorkStealingPool);
    // collect all async tasks
    pendingTasks.add(task);
}
// TODO: How to avoid the unnecessary looping happening here just for the sake of waiting for the future tasks to complete?
// TODO: Looking for the best solutions
while (pendingTasks.size() > 0) {
    for (CompletableFuture<PagesDTO> futureTask : pendingTasks) {
        if (futureTask != null && futureTask.isDone()) {
            completedTasks.add(futureTask);
            pageDTOList.add(futureTask.get());
        }
    }
    pendingTasks.removeAll(completedTasks);
}
// Throw the exception caught while converting the OCR stream to DTO - for any of the pageIds
for (InputPageDTO dcInputPageDTO : dcReqDTO.getPages()) {
    if (exceptionMap.containsKey(dcInputPageDTO.getPageId())) {
        CaptureException e = exceptionMap.get(dcInputPageDTO.getPageId());
        throw e;
    }
}
LOG.info("Parallel processing time taken for {} pages = {}", dcReqDTO.getPages().size(),
    org.springframework.util.StringUtils.deleteAny(Duration.between(start, Instant.now()).toString().toLowerCase(), "pt-"));
Please look at the TODO items in my code above. I have the following two concerns, for which I am looking for advice on Stack Overflow:
1) I want to avoid the unnecessary looping (happening in the while loop above). What is the best way to wait for all threads to complete their async execution and then collect the results? Does anybody have advice?
2) The ExecutorService instance is created at my service bean class level, thinking that it will be reused for every request, instead of creating it local to the method and shutting it down in a finally block. Am I doing this right, or does my thought process need correction?
Simply remove the while and the if and you are good:
for (CompletableFuture<PagesDTO> futureTask : pendingTasks) {
    completedTasks.add(futureTask);
    pageDTOList.add(futureTask.get());
}
get() (as well as join()) will wait for the future to complete before returning a value. Also, there is no need to test for null since your list will never contain any.
You should however probably change the way you handle exceptions. CompletableFuture has a specific mechanism for handling them and rethrowing them when calling get()/join(). You might simply want to wrap your checked exceptions in CompletionException.
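For reference, a minimal self-contained sketch of both points, where convert below is a hypothetical stand-in for the question's convertOCRStreamToDTO: CompletableFuture.allOf waits for every task, and wrapping the checked exception in CompletionException lets join() rethrow it.
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class AllOfDemo {
    // Hypothetical stand-in for the question's convertOCRStreamToDTO.
    static String convert(int page) throws Exception {
        return "page-" + page;
    }

    public static void main(String[] args) {
        List<CompletableFuture<String>> tasks = IntStream.range(0, 5)
            .mapToObj(page -> CompletableFuture.supplyAsync(() -> {
                try {
                    return convert(page);
                } catch (Exception e) {
                    // Wrap checked exceptions so join()/get() can rethrow them.
                    throw new CompletionException(e);
                }
            }))
            .collect(Collectors.toList());

        // Wait for all tasks, then collect results; join() rethrows a failed
        // task's exception as a CompletionException.
        CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
        List<String> results = tasks.stream()
            .map(CompletableFuture::join)
            .collect(Collectors.toList());
        System.out.println(results);
    }
}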

Vert.x multi-thread web-socket

I have simple vert.x app:
public class Main {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx(new VertxOptions().setWorkerPoolSize(40).setInternalBlockingPoolSize(40));
        Router router = Router.router(vertx);
        long main_pid = Thread.currentThread().getId();
        Handler<ServerWebSocket> wsHandler = serverWebSocket -> {
            if (!serverWebSocket.path().equalsIgnoreCase("/ws")) {
                serverWebSocket.reject();
            } else {
                long socket_pid = Thread.currentThread().getId();
                serverWebSocket.handler(buffer -> {
                    String str = buffer.getString(0, buffer.length());
                    long handler_pid = Thread.currentThread().getId();
                    log.info("Got ws msg: " + str);
                    String res = String.format("(req:%s)main:%d sock:%d handlr:%d", str, main_pid, socket_pid, handler_pid);
                    try {
                        Thread.sleep(500);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    serverWebSocket.writeFinalTextFrame(res);
                });
            }
        };
        vertx
            .createHttpServer()
            .websocketHandler(wsHandler)
            .listen(8080);
    }
}
When I connect multiple clients to this server, I see that it works in one thread. But I want to handle each client connection in parallel. How should I change this code to do that?
This:
new VertxOptions().setWorkerPoolSize(40).setInternalBlockingPoolSize(40)
looks like you're trying to create your own HTTP connection pool, which is likely not what you really want.
The idea of Vert.x and other non-blocking, event-loop-based frameworks is that we don't attempt the 1 thread -> 1 connection affinity. Rather, when a request currently being served by the event-loop thread is waiting for IO - e.g. the response from a DB - that event-loop thread is freed to service another connection. This allows a single event-loop thread to service multiple connections in a concurrent-like fashion.
If you want to fully utilise all cores on your machine, and you're only going to be running a single verticle, then set the number of instances to the number of cores when you deploy your verticle, i.e.:
Vertx.vertx().deployVerticle("MyVerticle", new DeploymentOptions().setInstances(Runtime.getRuntime().availableProcessors()));
Vert.x is a reactive framework, which means that it uses a single-threaded model to handle all your application load. This model is known to scale better than the threaded model.
The key point to know is that any code you put in a handler must never block (like your Thread.sleep), since it will block the main thread. If you have blocking code (say, for example, a JDBC call) you should wrap your blocking code in an executeBlocking handler, e.g.:
serverWebSocket.handler(buffer -> {
    String str = buffer.getString(0, buffer.length());
    long handler_pid = Thread.currentThread().getId();
    log.info("Got ws msg: " + str);
    String res = String.format("(req:%s)main:%d sock:%d handlr:%d", str, main_pid, socket_pid, handler_pid);
    vertx.executeBlocking(future -> {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        serverWebSocket.writeFinalTextFrame(res);
        future.complete();
    });
});
Now all the blocking code will be run on a thread from the thread pool that you can configure as already shown in other replies.
If you would like to avoid writing all these execute blocking handlers and you know that you need to do several blocking calls then you should consider using a worker verticle, since these will scale at the event bus level.
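For illustration, a minimal sketch of such a worker verticle (Vert.x 3 APIs; the class name and event-bus address are made up for the example). Handlers in a worker verticle run on the worker pool, so blocking calls do not stall the event loop:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

// Hypothetical worker verticle that may block inside its handlers.
public class BlockingWorker extends AbstractVerticle {
    @Override
    public void start() {
        vertx.eventBus().consumer("work.blocking", msg -> {
            try {
                Thread.sleep(500); // simulated blocking call
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            msg.reply("done");
        });
    }

    public static void main(String[] args) {
        // setWorker(true) deploys it on the worker pool rather than the event loop.
        Vertx.vertx().deployVerticle(new BlockingWorker(),
            new DeploymentOptions().setWorker(true));
    }
}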
A final note on multi-threading: if you use multiple threads, your server will not be as efficient as a single thread. For example, it won't be able to handle 10 million websockets, since 10 million threads, even on a modern machine (we're in 2016), will bring your OS scheduler to its knees.

Java ExecutorService is closing even though its threads are running

I am using ExecutorService to generate files from a database. I am using JDBC and core Java to get the table data into files.
After creating the ExecutorService with 10 threads, I am submitting 60 tasks in a for loop to get 60 files in parallel. This works fine with small data and tables with few columns. But in the case of a huge file, or tables with more columns, the thread working on the huge table data / wider table stops without giving any information in the log, while the other threads complete.
ExecutorService executor = Executors.newFixedThreadPool(THREAD_COUNT);
for (String filename : filenames) {
    EachFileThread worker = new EachFileThread(destdir, converter, filename, this);
    executor.execute(worker);
}
executor.shutdown();
Inside EachFileThread I read the XML to get the columns and table, form a query, execute it, format the data, and write the data to a file:
forTable = (FileData) converter.convertFromXMLToObject(filename + ".xml");
String query = getQuery(forTable);
statement = connection.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
resultSet = statement.executeQuery(query);
resultSet.setFetchSize(3000);
WriteData(resultSet, filepath, forTable); // formats the data from the DB and writes it to the file
The problem is that you are not waiting for all the jobs to finish what they were doing. As @msandiford suggested in the comment, you should add a call to awaitTermination(..) after calling shutdown(), as in the sample shutdownAndAwaitTermination() method at https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ExecutorService.html
For example you can try to do it like so:
ExecutorService executor = Executors.newFixedThreadPool(THREAD_COUNT);
for (String filename : filenames) {
    EachFileThread worker = new EachFileThread(destdir, converter, filename, this);
    executor.execute(worker);
}
executor.shutdown();
try {
    // Wait a while for existing tasks to terminate
    if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
        executor.shutdownNow(); // Cancel currently executing tasks
        // Wait a while for tasks to respond to being cancelled
        if (!executor.awaitTermination(60, TimeUnit.SECONDS))
            System.err.println("Executor did not terminate");
    }
} catch (InterruptedException ie) {
    // (Re-)Cancel if current thread also interrupted
    executor.shutdownNow();
    // Preserve interrupt status
    Thread.currentThread().interrupt();
}

App Engine Pull Queue tasks disappear before being properly handled

Update 4 - rephrasing question for clarity
I am using Pull Queues to feed back-end workers tasks that send push notifications. I can see the front-end instance queue the task in the logs. However, the task is only occasionally handled by the back-end. I see no indication of why the task disappears prior to being handled and deleted from the queue.
This may be related: I am seeing an unusually high number of TransientFailureExceptions when attempting to lease tasks from the queue - despite sleeping between attempts.
Everything works properly on my development server (and an earlier version had worked in production) but production is no longer working properly. At first I thought it was a certificate issue. However, notifications are sometimes sent when the backend first starts.
There is no indication that an error is happening except for the TransientFailureException when I call leaseTasks on the queue. Also, it seems to take a very long time for my logs to show up.
I can provide more information and code snippets as needed.
Thanks for the help.
Update 1:
The application uses 10 pull queues. It would normally use 2 but queue tagging is still considered experimental. They are declared in the standard fashion:
<queue>
    <name>gcm-henchdist</name>
    <mode>pull</mode>
</queue>
The lease tasks function is:
public boolean processBatchOfTasks()
{
    List< TaskHandle > tasks = attemptLeaseTasks();
    if( null == tasks || tasks.isEmpty() )
    {
        return false;
    }
    processLeasedTasks( tasks );
    return true;
}

private List< TaskHandle > attemptLeaseTasks()
{
    for( int attemptNum = 1; !LifecycleManager.getInstance().isShuttingDown(); ++attemptNum )
    {
        try
        {
            return m_taskQueue.leaseTasks( m_numLeaseTimeUnits, m_leaseTimeUnit, m_maxTasksPerLease );
        } catch( TransientFailureException exc )
        {
            LOG.warn( "TransientFailureException when leasing tasks from queue '{}'", m_taskQueue.getQueueName(), exc );
            ApiProxy.flushLogs();
        } catch( ApiDeadlineExceededException exc )
        {
            LOG.warn( "ApiDeadlineExceededException when leasing tasks from queue '{}'",
                m_taskQueue.getQueueName(), exc );
            ApiProxy.flushLogs();
        }
        if( !backOff( attemptNum ) )
        {
            LOG.warn( "Failed to lease tasks." );
            break;
        }
    }
    return Collections.emptyList();
}
where the lease variables are 30, TimeUnit.MINUTES, and 100, respectively.
the processBatchOfTasks function is polled via:
private void startPollingForClient( EClientType clientType )
{
    InterimApnsCertificateConfig config = InterimApnsCertificateConfigMgr.getConfig( clientType );
    Queue notificationQueue = QueueFactory.getQueue( config.getQueueId().getName() );
    ApplePushNotificationWorker worker = new ApplePushNotificationWorker(
        notificationQueue,
        m_messageConverter.getObjectMapper(),
        config.getCertificateBytes(),
        config.getPassword(),
        config.isProduction() );
    LOG.info( "Started worker for {} polling queue {}", clientType, notificationQueue.getQueueName() );
    while ( !LifecycleManager.getInstance().isShuttingDown() )
    {
        boolean tasksProcessed = worker.processBatchOfTasks();
        ApiProxy.flushLogs();
        if ( !tasksProcessed )
        {
            // Wait before trying to lease tasks again.
            try
            {
                //LOG.info( "Going to sleep" );
                Thread.sleep( MILLISECONDS_TO_WAIT_WHEN_NO_TASKS_LEASED );
                //LOG.info( "Waking up" );
            } catch ( InterruptedException exc )
            {
                LOG.info( "Polling loop interrupted. Terminating loop.", exc );
                return;
            }
        }
    }
    LOG.info( "Instance is shutting down" );
}
and the thread is created via:
Thread thread = ThreadManager.createBackgroundThread( new Runnable()
{
#Override
public void run()
{
startPollingForClient( clientType );
}
} );
thread.start();
GCM notifications are handled in a similar fashion.
Update 2
The following is the backoff function. I have verified in the logs (with both GAE's and my own timestamps) that the sleep increments properly:
private boolean backOff( int attemptNo )
{
    // Exponential back off between 2 seconds and 64 seconds with jitter
    // 0..1000 ms.
    attemptNo = Math.min( 6, attemptNo );
    int backOffTimeInSeconds = 1 << attemptNo;
    long backOffTimeInMilliseconds = backOffTimeInSeconds * 1000 + (int)( Math.random() * 1000 );
    LOG.info( "Backing off for {} milliseconds from queue '{}'", backOffTimeInMilliseconds, m_taskQueue.getQueueName() );
    ApiProxy.flushLogs();
    try
    {
        Thread.sleep( backOffTimeInMilliseconds );
    } catch( InterruptedException e )
    {
        return false;
    }
    LOG.info( "Waking up from {} milliseconds sleep for queue '{}'", backOffTimeInMilliseconds, m_taskQueue.getQueueName() );
    ApiProxy.flushLogs();
    return true;
}
Update 3
The tasks are added to the queue within a transaction on a front-end instance:
if( null != queueType )
{
    String deviceName;
    int numDevices = deviceList.size();
    for ( int iDevice = 0; iDevice < numDevices; ++iDevice )
    {
        deviceName = deviceList.get( iDevice ).getName();
        LOG.info( "Queueing Your-Turn notification for user: {} device: {} queue: {}", user.getId(), deviceName, queueType.getName() );
        Queue queue = QueueFactory.getQueue( queueType.getName() );
        queue.addAsync( TaskOptions.Builder.withMethod( TaskOptions.Method.PULL )
            .param( "alertLocKey", "NOTIF_YOUR_TURN" ).param( "device", deviceName ) );
    }
}
I know that the transaction succeeds because the database updates correctly.
In the logs I see the "Queueing Your-Turn notification..." entry, but I see nothing appear in the back-end logs.
In the administration panel, I see Task Queue API Calls increment by 1 as well as Task Queue Stored Task Count increment by 1. However, the queue that was written to shows zero in both the Tasks In Queue and Leased In Last Minute fields.
The TransientFailureException JavaDoc says that "The requested operation may succeed if attempted again" (because the failure is transient). Therefore when this exception is thrown your code should loop back and repeat the leaseTasks call. Furthermore AppEngine does not have to redo the request itself because it notified you via the exception that you should do so.
It's a pity you repeat the method name leaseTasks as one of your own, because now it's not clear which one I'm referring to when I mention leaseTasks. Still, wrap the inner call to m_taskQueue.leaseTasks in a while loop and an additional try block that catches only the TransientFailureException. Use a flag to end the while loop only if that exception is not thrown.
Is that enough explanation, or do you need a complete source code listing?
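For illustration, a minimal sketch of that loop, reusing the field names from the question. Note it retries without bound, so in practice you would combine it with the backOff method shown above:
private List< TaskHandle > leaseWithRetry()
{
    List< TaskHandle > tasks = null;
    boolean leased = false;
    while( !leased )
    {
        try
        {
            tasks = m_taskQueue.leaseTasks( m_numLeaseTimeUnits, m_leaseTimeUnit, m_maxTasksPerLease );
            leased = true; // only reached when no exception was thrown
        } catch( TransientFailureException exc )
        {
            // Transient by definition: the same call may succeed if attempted again.
        }
    }
    return tasks;
}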
It appears that the culprit may have been that I was calling addAsync when enqueuing the task instead of just calling add.
I replaced the call and things seem to be consistently working now. I would like to know why this makes a difference and will update the answer when I find the reason.
