FutureCallback never executed - java

I'm trying to get Guava's FutureCallback working because I want to use it in my Java API for Cloudflare.
Test class:
import com.google.common.util.concurrent.*;
import javax.annotation.Nullable;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class Test {
@org.junit.Test
public static void apiUsersCode( ) {
apiMethodThatReturnsCloudflareInfos( new FutureCallback<String>() {
@Override
public void onSuccess( @Nullable String result ) {
System.out.println( "WORKING" );
}
@Override
public void onFailure( Throwable t ) {
System.out.println( t );
}
} );
}
public static void apiMethodThatReturnsCloudflareInfos( FutureCallback<String> usersFutureCallback ) {
Callable<String> solution = ( ) -> {
Thread.sleep( 1000L ); // simulate slowness of next line
return "http output"; // this is this other method that sends a http request
};
// runs callable async, when done: execute usersFutureCallback
ExecutorService executor = Executors.newFixedThreadPool( 10 );
ListeningExecutorService service = MoreExecutors.listeningDecorator( executor );
ListenableFuture<String> listenableFuture = service.submit( solution );
Futures.addCallback( listenableFuture, usersFutureCallback, service );
}
}
When someone uses this API, simplified:
They call a method and pass a FutureCallback object (usersFutureCallback).
This method runs another method whose output is returned in the callable.
Done.
Example API method that is being executed by the user:
apiMethodThatReturnsCloudflareInfos( new FutureCallback<String>() {
@Override
public void onSuccess( @Nullable String cloudflareOutput ) {
System.out.println( "WORKING" );
}
@Override
public void onFailure( Throwable t ) {
System.out.println( t );
}
} );
Example API method, simplified:
public static void apiMethodThatReturnsCloudflareInfos( FutureCallback<String> usersFutureCallback ) {
Callable<String> solution = ( ) -> {
Thread.sleep( 1000L ); // simulate slowness of next line
return "http output"; // this is this other method that sends a http request
};
// runs callable async, when done: execute usersFutureCallback
ExecutorService executor = Executors.newFixedThreadPool( 10 );
ListeningExecutorService service = MoreExecutors.listeningDecorator( executor );
ListenableFuture<String> listenableFuture = service.submit( solution );
Futures.addCallback( listenableFuture, usersFutureCallback, service );
}
"WORKING" is only printed when changing and adding this lines to add the listener after the callable is done.
But that is not the solution of cause.
public static void apiMethodThatReturnsCloudflareInfos( FutureCallback<String> usersFutureCallback ) {
Callable<String> solution = ( ) -> {
Thread.sleep( 1000L ); // simulate slowness of next line
return "http output"; // this is this other method that sends a http request
};
// runs callable async, when done: execute usersFutureCallback
ExecutorService executor = Executors.newFixedThreadPool( 10 );
ListeningExecutorService service = MoreExecutors.listeningDecorator( executor );
ListenableFuture<String> listenableFuture = service.submit( solution );
// I don't want this
try {
Thread.sleep( 10001L );
} catch ( InterruptedException e ) {
e.printStackTrace();
}
Futures.addCallback( listenableFuture, usersFutureCallback, service );
}
What am I doing wrong?

The problem lies in how you are testing that method. You basically create a new thread that runs your Callable<String> solution, but nothing will wait for that thread to finish. Once your test method is finished, your test suite will terminate and with it the thread that is running solution.
In a real life scenario, where your application lives longer than solution, you won't encounter that problem.
To solve this, you must update your test case so that it does not terminate prematurely. You could do this by updating apiUsersCode() to set some value in onSuccess and onFailure, and waiting for that value after calling apiMethodThatReturnsCloudflareInfos.
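For example, a minimal sketch of such a test (assuming JUnit 4 and java.util.concurrent imports; the callback counts down a CountDownLatch that the test then awaits):
@org.junit.Test
public void apiUsersCode() throws InterruptedException {
    // the callback counts this latch down, so the test waits for the callback
    // instead of sleeping for a fixed amount of time
    CountDownLatch done = new CountDownLatch(1);

    apiMethodThatReturnsCloudflareInfos(new FutureCallback<String>() {
        @Override
        public void onSuccess(@Nullable String result) {
            System.out.println("WORKING");
            done.countDown();
        }

        @Override
        public void onFailure(Throwable t) {
            System.out.println(t);
            done.countDown();
        }
    });

    // fail the test if the callback was never executed within 5 seconds
    org.junit.Assert.assertTrue("callback was not executed", done.await(5, TimeUnit.SECONDS));
}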

Solved!
The problem was the test suite, JUnit.
It worked when testing it in a main(String[] args) method.
...everything is correct; only the very thing that should help me with the tests causes the mistake :D
@Test
public void apiUsersCode( ) {
apiMethodThatReturnsCloudflareInfos( new FutureCallback<String>() {
@Override
public void onSuccess( @Nullable String result ) {
System.out.println( "WORKING" );
}
@Override
public void onFailure( Throwable t ) {
System.out.println( t );
}
} );
try {
new CountDownLatch( 1 ).await(2000, TimeUnit.MILLISECONDS);
} catch ( InterruptedException e ) {
e.printStackTrace();
}
}
public static void apiMethodThatReturnsCloudflareInfos( FutureCallback<String> usersFutureCallback ) {
Callable<String> solution = ( ) -> {
Thread.sleep( 1000L ); // simulate slowness of next line
return "http output"; // this is this other method that sends a http request
};
// runs callable async, when done: execute usersFutureCallback
ExecutorService executor = Executors.newFixedThreadPool( 10 );
ListeningExecutorService service = MoreExecutors.listeningDecorator( executor );
ListenableFuture<String> listenableFuture = service.submit( solution );
Futures.addCallback( listenableFuture, usersFutureCallback, service );
}

Related

How to chain 2 Uni<?> in unit test using Panache.withTransaction() without getting a java.util.concurrent.TimeoutException

I'm struggling with Panache.withTransaction() in unit tests; whatever I do, I get a java.util.concurrent.TimeoutException.
Note: it works without a transaction, but then I have to delete the inserts manually.
I want to chain insertKline and getOhlcList inside a transaction so I can benefit from the rollback:
@QuarkusTest
@Slf4j
class KlineServiceTest {
@Inject
KlineRepository klineRepository;
@Inject
CurrencyPairRepository currencyPairRepository;
@Inject
KlineService service;
@Test
@DisplayName("ohlc matches inserted kline")
void ohlcMatchesInsertedKline() {
// GIVEN
val volume = BigDecimal.valueOf(1d);
val closeTime = LocalDateTime.now().withSecond(0).withNano(0);
val currencyPair = new CurrencyPair("BTC", "USDT");
val currencyPairEntity = currencyPairRepository
.findOrCreate(currencyPair)
.await().indefinitely();
val kline = KlineEntity.builder()
.id(new KlineId(currencyPairEntity, closeTime))
.volume(volume)
.build();
val insertKline = Uni.createFrom().item(kline)
.call(klineRepository::persistAndFlush);
val getOhlcList = service.listOhlcByCurrencyPairAndTimeWindow(currencyPair, ofMinutes(5));
// WHEN
val ohlcList = Panache.withTransaction(
() -> Panache.currentTransaction()
.invoke(Transaction::markForRollback)
.replaceWith(insertKline)
.chain(() -> getOhlcList))
.await().indefinitely();
// THEN
assertThat(ohlcList).hasSize(1);
val ohlc = ohlcList.get(0);
assertThat(ohlc).extracting(Ohlc::getCloseTime, Ohlc::getVolume)
.containsExactly(closeTime, volume);
}
}
I get this exception:
java.lang.RuntimeException: java.util.concurrent.TimeoutException
at io.quarkus.hibernate.reactive.panache.common.runtime.AbstractJpaOperations.executeInVertxEventLoop(AbstractJpaOperations.java:52)
at io.smallrye.mutiny.operators.uni.UniRunSubscribeOn.subscribe(UniRunSubscribeOn.java:25)
at io.smallrye.mutiny.operators.AbstractUni.subscribe(AbstractUni.java:36)
And looking at AbstractJpaOperations, I can see:
public abstract class AbstractJpaOperations<PanacheQueryType> {
// FIXME: make it configurable?
static final long TIMEOUT_MS = 5000;
...
}
Also, same issue when I tried to use runOnContext():
@Test
@DisplayName("ohlc matches inserted kline")
void ohlcMatchesInsertedKline() throws ExecutionException, InterruptedException {
// GIVEN
val volume = BigDecimal.valueOf(1d);
val closeTime = LocalDateTime.now().withSecond(0).withNano(0);
val currencyPair = new CurrencyPair("BTC", "USDT");
val currencyPairEntity = currencyPairRepository
.findOrCreate(currencyPair)
.await().indefinitely();
val kline = KlineEntity.builder()
.id(new KlineId(currencyPairEntity, closeTime))
.volume(volume)
.build();
val insertKline = Uni.createFrom().item(kline)
.call(klineRepository::persist);
val getOhlcList = service.listOhlcByCurrencyPairAndTimeWindow(currencyPair, ofMinutes(5));
val insertAndGet = insertKline.chain(() -> getOhlcList);
// WHEN
val ohlcList = runAndRollback(insertAndGet)
.runSubscriptionOn(action -> vertx.getOrCreateContext()
.runOnContext(action))
.await().indefinitely();
// THEN
assertThat(ohlcList).hasSize(1);
val ohlc = ohlcList.get(0);
assertThat(ohlc).extracting(Ohlc::getCloseTime, Ohlc::getVolume)
.containsExactly(closeTime, volume);
}
private static Uni<List<Ohlc>> runAndRollback(Uni<List<Ohlc>> getOhlcList) {
return Panache.withTransaction(
() -> Panache.currentTransaction()
.invoke(Transaction::markForRollback)
.replaceWith(getOhlcList));
}
Annotation @TestReactiveTransaction
Quarkus provides the annotation @TestReactiveTransaction: it will wrap the test method in a transaction and roll back the transaction at the end.
I'm going to use quarkus-test-vertx for testing the reactive code:
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-test-vertx</artifactId>
<scope>test</scope>
</dependency>
Here's an example of a test class that can be used with the Hibernate Reactive quickstart with Panache (after adding the quarkus-test-vertx dependency):
The entity:
@Entity
public class Fruit extends PanacheEntity {
@Column(length = 40, unique = true)
public String name;
...
}
The test class:
package org.acme.hibernate.orm.panache;
import java.util.List;
import org.junit.jupiter.api.Test;
import io.quarkus.test.TestReactiveTransaction;
import io.quarkus.test.junit.QuarkusTest;
import io.quarkus.test.vertx.UniAsserter;
import io.smallrye.mutiny.Uni;
import org.assertj.core.api.Assertions;
@QuarkusTest
public class ExampleReactiveTest {
@Test
@TestReactiveTransaction
public void test(UniAsserter asserter) {
printThread( "Start" );
Uni<List<Fruit>> listAllUni = Fruit.<Fruit>listAll();
Fruit mandarino = new Fruit( "Mandarino" );
asserter.assertThat(
() -> Fruit
.persist( mandarino )
.replaceWith( listAllUni ),
result -> {
Assertions.assertThat( result ).hasSize( 4 );
Assertions.assertThat( result ).contains( mandarino );
printThread( "End" );
}
);
}
private void printThread(String step) {
System.out.println( step + " - " + Thread.currentThread().getId() + ":" + Thread.currentThread().getName() );
}
}
@TestReactiveTransaction runs the method in a transaction that is rolled back at the end of the test.
UniAsserter makes it possible to test reactive code without having to block anything.
Annotation @RunOnVertxContext
It's also possible to run a test on the Vert.x event loop using the annotation @RunOnVertxContext from the quarkus-test-vertx library.
This way you don't need to wrap the whole test in a transaction:
import io.quarkus.test.vertx.RunOnVertxContext;
@QuarkusTest
public class ExampleReactiveTest {
@Test
@RunOnVertxContext
public void test(UniAsserter asserter) {
printThread( "Start" );
Uni<List<Fruit>> listAllUni = Fruit.<Fruit>listAll();
Fruit mandarino = new Fruit( "Mandarino" );
asserter.assertThat(
() -> Panache.withTransaction( () -> Panache
// This test doesn't have @TestReactiveTransaction
// we need to rollback the transaction manually
.currentTransaction().invoke( Mutiny.Transaction::markForRollback )
.call( () -> Fruit.persist( mandarino ) )
.replaceWith( listAllUni )
),
result -> {
Assertions.assertThat( result ).hasSize( 4 );
Assertions.assertThat( result ).contains( mandarino );
printThread( "End" );
}
);
}
I finally managed to get it working, the trick was to defer the Uni creation:
Like in:
@QuarkusTest
public class ExamplePanacheTest {
@Test
public void test() {
final var mandarino = new Fruit("Mandarino");
final var insertAndGet = Uni.createFrom()
.deferred(() -> Fruit.persist(mandarino)
.replaceWith(Fruit.<Fruit>listAll()));
final var fruits = runAndRollback(insertAndGet)
.await().indefinitely();
assertThat(fruits).hasSize(4)
.contains(mandarino);
}
private static Uni<List<Fruit>> runAndRollback(Uni<List<Fruit>> insertAndGet) {
return Panache.withTransaction(
() -> Panache.currentTransaction()
.invoke(Transaction::markForRollback)
.replaceWith(insertAndGet));
}
}

How to merge multiple vertx web client responses

I am new to Vert.x and async programming.
I have 2 verticles communicating via an event bus as follows:
//API Verticle
public class SearchAPIVerticle extends AbstractVerticle {
public static final String GET_USEARCH_DOCS = "get.usearch.docs";
@Autowired
private Integer defaultPort;
private void sendSearchRequest(RoutingContext routingContext) {
final JsonObject requestMessage = routingContext.getBodyAsJson();
final EventBus eventBus = vertx.eventBus();
eventBus.request(GET_USEARCH_DOCS, requestMessage, reply -> {
if (reply.succeeded()) {
Logger.info("Search Result = " + reply.result().body());
routingContext.response()
.putHeader("content-type", "application/json")
.setStatusCode(200)
.end((String) reply.result().body());
} else {
Logger.info("Document Search Request cannot be processed");
routingContext.response()
.setStatusCode(500)
.end();
}
});
}
@Override
public void start() throws Exception {
Logger.info("Starting the Gateway service (Event Sender) verticle");
// Create a Router
Router router = Router.router(vertx);
//Added bodyhandler so we can process json messages via the event bus
router.route().handler(BodyHandler.create());
// Mount the handler for incoming requests
// Find documents
router.post("/api/search/docs/*").handler(this::sendSearchRequest);
// Create an HTTP Server using default options
HttpServer server = vertx.createHttpServer();
// Handle every request using the router
server.requestHandler(router)
//start listening on port 8083
.listen(config().getInteger("http.port", 8083)).onSuccess(msg -> {
Logger.info("*************** Search Gateway Server started on "
+ server.actualPort() + " *************");
});
}
@Override
public void stop(){
//house keeping
}
}
//Below is the target verticle that should be making the multiple web client calls and merging the responses.
@Component
public class SolrCloudVerticle extends AbstractVerticle {
public static final String GET_USEARCH_DOCS = "get.usearch.docs";
@Autowired
private SearchRepository searchRepositoryService;
@Override
public void start() throws Exception {
Logger.info("Starting the Solr Cloud Search Service (Event Consumer) verticle");
super.start();
ConfigStoreOptions fileStore = new ConfigStoreOptions().setType("file")
.setConfig(new JsonObject().put("path", "conf/config.json"));
ConfigRetrieverOptions configRetrieverOptions = new ConfigRetrieverOptions()
.addStore(fileStore);
ConfigRetriever configRetriever = ConfigRetriever.create(vertx, configRetrieverOptions);
configRetriever.getConfig(ar -> {
if (ar.succeeded()) {
JsonObject configJson = ar.result();
EventBus eventBus = vertx.eventBus();
eventBus.<JsonObject>consumer(GET_USEARCH_DOCS).handler(getDocumentService(searchRepositoryService, configJson));
Logger.info("Completed search service event processing");
} else {
Logger.error("Failed to retrieve the config");
}
});
}
private Handler<Message<JsonObject>> getDocumentService(SearchRepository searchRepositoryService, JsonObject configJson) {
return requestMessage -> vertx.<String>executeBlocking(future -> {
try {
//I need to incorporate the logic here that adds futures to list and composes the compositefuture
/*
//Below is my logic to populate the future list
WebClient client = WebClient.create(vertx);
List<Future> futureList = new ArrayList<>();
for (Object collection : searchRepositoryService.findAllCollections(configJson).getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
Future<String> future1 = client.post(8983, "127.0.0.1", "/solr/" + collection + "/query")
.expect(ResponsePredicate.SC_OK)
.sendJsonObject(requestMessage.body())
.map(HttpResponse::bodyAsString).recover(error -> {
System.out.println(error.getMessage());
return Future.succeededFuture();
});
futureList.add(future1);
}
//Below is the CompositeFuture logic, but the logic and construct does not make sense to me. What goes as first and second argument of executeBlocking method
/*CompositeFuture.join(futureList)
.onSuccess(result -> {
result.list().forEach( x -> {
if(x != null){
requestMessage.reply(result.result());
}
}
);
})
.onFailure(error -> {
System.out.println("We should not fail");
})
*/
future.complete("DAO returns a Json String");
} catch (Exception e) {
future.fail(e);
}
}, result -> {
if (result.succeeded()) {
requestMessage.reply(result.result());
} else {
requestMessage.reply(result.cause()
.toString());
}
});
}
}
I was able to use org.springframework.web.reactive.function.client.WebClient calls to compose my search result from multiple web client calls, as opposed to using Future<io.vertx.ext.web.client.WebClient> with CompositeFuture.
I was trying to avoid mixing Spring Boot and Vert.x, but unfortunately Vert.x CompositeFuture did not work for me here:
//This method supplies the parameter for the future.complete(..) line in getDocumentService(SearchRepository,JsonObject)
private List<JsonObject> findByQueryParamsAndDataSources(SearchRepository searchRepositoryService,
JsonObject configJson,
JsonObject requestMessage)
throws SolrServerException, IOException {
List<JsonObject> searchResultList = new ArrayList<>();
for (Object collection : searchRepositoryService.findAllCollections(configJson).getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
searchResultList.add(new JsonObject(doSearchPerCollection(collection.toString(), requestMessage.toString())));
}
return aggregateMultiCollectionSearchResults(searchResultList);
}
public String doSearchPerCollection(String collection, String message) {
org.springframework.web.reactive.function.client.WebClient client =
org.springframework.web.reactive.function.client.WebClient.create();
return client.post()
.uri("http://127.0.0.1:8983/solr/" + collection + "/query")
.contentType(MediaType.APPLICATION_JSON)
.accept(MediaType.APPLICATION_JSON)
.body(BodyInserters.fromValue(message.toString()))
.retrieve()
.bodyToMono(String.class)
.block();
}
private List<JsonObject> aggregateMultiCollectionSearchResults(List<JsonObject> searchList){
//TODO: Search result aggregation
return searchList;
}
My use case is that the second verticle should make multiple Vert.x web client calls and combine the responses.
If an API call fails, I want to log the error and still continue processing and merging the responses from the other calls.
Please, any help on how my code above could be adapted to handle this use case?
I am looking at Vert.x CompositeFuture, but have made no headway and found no useful example yet!
What you are looking for can be done with Future coordination and a little bit of additional handling:
CompositeFuture.join(future1, future2, future3).onComplete(ar -> {
if (ar.succeeded()) {
// All succeeded
} else {
// All completed and at least one failed
}
});
The join composition waits until all futures are completed, either with a success or a failure.
CompositeFuture.join
takes several futures arguments (up to 6) and returns a future that is succeeded when all the futures are succeeded, and failed when all the futures are completed and at least one of them is failed
Using join you will wait for all futures to complete; the issue is that if one of them fails you will not be able to obtain the responses from the others, because the CompositeFuture will have failed. To avoid this you should add Future<T> recover(Function<Throwable, Future<T>> mapper) to each of your futures, where you log the error and pass an empty response so that the future does not fail.
Here is short example:
Future<String> response1 = client.post(8887, "localhost", "work").expect(ResponsePredicate.SC_OK).send()
.map(HttpResponse::bodyAsString).recover(error -> {
System.out.println(error.getMessage());
return Future.succeededFuture();
});
Future<String> response2 = client.post(8887, "localhost", "error").expect(ResponsePredicate.SC_OK).send()
.map(HttpResponse::bodyAsString).recover(error -> {
System.out.println(error.getMessage());
return Future.succeededFuture();
});
CompositeFuture.join(response2, response1)
.onSuccess(result -> {
result.list().forEach(x -> {
if(x != null) {
System.out.println(x);
}
});
})
.onFailure(error -> {
System.out.println("We should not fail");
});
Edit 1:
The limit for CompositeFuture.join(Future...) is 6 futures; in case you need more, you can use CompositeFuture.join(Arrays.asList(future1, future2, future3)), which accepts any number of futures.
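Applied to your verticle, a minimal sketch could look like this (assuming Vert.x 4 and the host/port/collection lookup from your snippet; no executeBlocking is needed because the WebClient is already non-blocking):
private Handler<Message<JsonObject>> getDocumentService(SearchRepository searchRepositoryService, JsonObject configJson) {
    return requestMessage -> {
        WebClient client = WebClient.create(vertx);
        List<Future> futureList = new ArrayList<>();
        for (Object collection : searchRepositoryService.findAllCollections(configJson)
                .getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
            futureList.add(client.post(8983, "127.0.0.1", "/solr/" + collection + "/query")
                    .expect(ResponsePredicate.SC_OK)
                    .sendJsonObject(requestMessage.body())
                    .map(HttpResponse::bodyAsString)
                    // log and swallow individual failures so the join itself never fails
                    .recover(error -> {
                        System.out.println(error.getMessage());
                        return Future.succeededFuture();
                    }));
        }
        // wait for all calls to complete, then merge the non-null bodies and reply once
        CompositeFuture.join(futureList).onSuccess(composite -> {
            JsonArray merged = new JsonArray();
            composite.list().forEach(body -> {
                if (body != null) {
                    merged.add(new JsonObject((String) body));
                }
            });
            requestMessage.reply(merged);
        });
    };
}
Replying with a JsonArray is just one way to merge; replace that part with whatever aggregation your aggregateMultiCollectionSearchResults needs.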

Quartz @DisallowConcurrentExecution not working

Hi, I have two job instances in Quartz that I want to not run in parallel. I simplified the code in the example below to show what is not in line with my expectations.
public class QuartzTest {
public static void main( String[] args ) throws SchedulerException {
SchedulerFactory schedulerFactory = new StdSchedulerFactory();
Scheduler scheduler = schedulerFactory.getScheduler();
scheduler.start();
JobDetail job1 = newJob( TestJob.class ).withIdentity( "job1", "group1" ).build();
CronTrigger trigger1 = newTrigger().withIdentity( "trigger1", "group1" ).startAt( new Date() ).withSchedule( cronSchedule( getCronExpression( 1 ) ) ).build();
scheduler.scheduleJob( job1, trigger1 );
JobDetail job2 = newJob( TestJob.class ).withIdentity( "job2", "group1" ).build();
CronTrigger trigger2 = newTrigger().withIdentity( "trigger2", "group1" ).startAt( new Date() ).withSchedule( cronSchedule( getCronExpression( 1 ) ) ).build();
scheduler.scheduleJob( job2, trigger2 );
for ( int i = 0; i < 5; i++ ) {
System.out.println( trigger1.getNextFireTime() );
System.out.println( trigger2.getNextFireTime() );
try {
Thread.sleep( 1 * 60 * 1000 );
} catch ( InterruptedException e ) {
e.printStackTrace();
}
}
}
private static String getCronExpression( int interval ) {
return "0 */" + interval + " * * * ?";
}
}
This is the job class
@DisallowConcurrentExecution
public class TestJob implements Job {
@Override
public void execute( JobExecutionContext context ) throws JobExecutionException {
System.out.println( "Job started" );
System.out.println( "Job sleeping 30s..." );
try {
Thread.sleep( 30 * 1000 );
} catch ( InterruptedException e ) {
e.printStackTrace();
}
System.out.println( "Job finished." );
}
}
So here I am scheduling two jobs to run every minute (in the real case one runs every minute, the other every 5 minutes), and this is the output I am getting:
Job started
Job sleeping 30s...
Job started
Job sleeping 30s...
Job finished.
Job finished.
So both jobs are running in parallel; a sequential execution, where job1 waits for job2 to complete before running, would give me this sequence:
Job started
Job sleeping 30s...
Job finished.
Job started
Job sleeping 30s...
Job finished.
So why is this not happening?
From the docs:
@DisallowConcurrentExecution:
An annotation that marks a Job class as one that must not have multiple instances executed concurrently (where instance is based-upon a JobDetail definition - or in other words based upon a JobKey).
A JobKey is composed of both a name and a group.
In your example the names are not the same, so these are two different jobs.
@DisallowConcurrentExecution only ensures that job1#thread1 completes before another job1#thread2 is triggered.
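If the intent is that the two schedules never overlap, one option (a sketch, assuming both schedules can drive the same job class) is to register a single JobDetail and point both triggers at it with forJob(...), so they share one JobKey and @DisallowConcurrentExecution applies:
JobDetail job = newJob( TestJob.class ).withIdentity( "job", "group1" ).build();

// both triggers reference the same JobKey ("job", "group1")
CronTrigger trigger1 = newTrigger().withIdentity( "trigger1", "group1" )
        .forJob( job ).startAt( new Date() )
        .withSchedule( cronSchedule( getCronExpression( 1 ) ) ).build();
CronTrigger trigger2 = newTrigger().withIdentity( "trigger2", "group1" )
        .forJob( job ).startAt( new Date() )
        .withSchedule( cronSchedule( getCronExpression( 5 ) ) ).build();

scheduler.scheduleJob( job, trigger1 );  // stores the job together with its first trigger
scheduler.scheduleJob( trigger2 );       // adds a second trigger for the already stored job
Otherwise, if the jobs really are distinct, the annotation will not help and you need another mechanism to serialize them (for example a shared lock, or a scheduler thread pool of size 1).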

Neo4j embedded only stores inside one runtime

I'm experimenting with Neo4j via the embedded Java API.
My build path seems OK (no exceptions at runtime).
When I create some nodes and relationships, I can query them directly afterwards with success.
But after shutting down and re-running my program, I only get the data I created in the new run and none of the data from before.
Yet if I look at my directory, I see that its size has grown with each run in which I create data.
Here's my code:
public static void main(String[] args)
{
GraphDatabaseService gdb = new GraphDatabaseFactory().newEmbeddedDatabase( "/mytestdb/" );
create( gdb );
query( gdb );
gdb.shutdown();
}
private static void query( GraphDatabaseService gdb )
{
StringLogger sl = StringLogger.wrap( new Writer()
{
#Override
public void write( char[] arg0, int arg1, int arg2 ) throws IOException
{
for( int i=arg1; i<=arg2; i++ ) System.out.print( arg0[i] );
}
@Override
public void flush() throws IOException
{}
@Override
public void close() throws IOException
{}
} );
ExecutionEngine ee = new ExecutionEngine( gdb, sl );
ExecutionResult result = ee.execute( "MATCH (p:Privilleg) RETURN p" );
System.out.println( result.dumpToString() );
}
private static void create( GraphDatabaseService gdb )
{
Transaction tx = gdb.beginTx();
Node project = gdb.createNode( MyLabels.Project );
Node user = gdb.createNode( MyLabels.User );
Node priv1 = gdb.createNode( MyLabels.Privilleg );
Node priv2 = gdb.createNode( MyLabels.Privilleg );
user.setProperty( "name", "Heinz" );
user.setProperty( "email", "heinz#gmx.net" );
priv1.setProperty( "name", "Allowed to read all" );
priv1.setProperty( "targets", Short.MAX_VALUE );
priv1.setProperty( "read", true );
priv1.setProperty( "write", false );
priv2.setProperty( "name", "Allowed to write all" );
priv2.setProperty( "targets", Short.MAX_VALUE );
priv2.setProperty( "read", false );
priv2.setProperty( "write", true );
project.setProperty( "name", "My first project" );
project.setProperty( "sname", "STARTUP" );
user.createRelationshipTo( priv1, MyRelationships.UserPrivilleg );
user.createRelationshipTo( priv2, MyRelationships.UserPrivilleg );
priv1.createRelationshipTo( project, MyRelationships.ProjectPrivilleg );
priv2.createRelationshipTo( project, MyRelationships.ProjectPrivilleg );
tx.success();
}
Your code doesn't close the transaction. Typically you use a try-with-resources block:
try (Transaction tx=gdb.beginTx()) {
// do stuff in the graph
tx.success();
}
Since Transaction is AutoCloseable, its close() method will be called implicitly upon leaving the code block. If for whatever reason you decide not to use try-with-resources, be sure to explicitly call close().
On a different note: your code uses ExecutionEngine. Since Neo4j 2.2 you can call gdb.execute(myCypherString) directly instead.
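Applied to the create(...) method from the question, a sketch could look like this (the node and property setup is abbreviated; only the transaction handling changes):
private static void create( GraphDatabaseService gdb )
{
    try ( Transaction tx = gdb.beginTx() )
    {
        Node project = gdb.createNode( MyLabels.Project );
        Node user = gdb.createNode( MyLabels.User );
        // ... create the Privilleg nodes, set properties and relationships as before ...
        tx.success();
    } // tx.close() is called implicitly here, committing the transaction to disk
}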
Thank you! This works.
Also, before I closed the transaction it took about 20 seconds to shut down the database. That now takes less than a second.

Stopping Callable in MvcAsyncTask

I have a controller with a WebAsyncTask. Further on, I'm using a timeout callback.
As written here, I should have an option to notify the Callable to cancel processing. However, I don't see any option to do so.
@Controller
public class UserDataProviderController {
private static final Logger log = LoggerFactory.getLogger(UserDataProviderController.class.getName());
@Autowired
private Collection<UserDataService> dataServices;
@RequestMapping(value = "/client/{socialSecurityNumber}", method = RequestMethod.GET)
public @ResponseBody
WebAsyncTask<ResponseEntity<CustomDataResponse>> process(@PathVariable final String socialSecurityNumber) {
final Callable<ResponseEntity<CustomDataResponse>> callable = new Callable<ResponseEntity<CustomDataResponse>>() {
@Override
public ResponseEntity<CustomDataResponse> call() throws Exception {
CustomDataResponse CustomDataResponse = CustomDataResponse.newInstance();
// Find user data
for(UserDataService dataService:dataServices)
{
List<? extends DataClient> clients = dataService.findBySsn(socialSecurityNumber);
CustomDataResponse.put(dataService.getDataSource(), UserDataConverter.convert(clients));
}
// test long execution
Thread.sleep(4000);
log.info("Execution thread continued and shall be terminated:"+Thread.currentThread().getName());
HttpHeaders responseHeaders = new HttpHeaders();
responseHeaders.setContentType(new MediaType("application", "json", Charset.forName("UTF-8")));
return new ResponseEntity(CustomDataResponse,responseHeaders,HttpStatus.OK);
}
};
final Callable<ResponseEntity<CustomDataResponse>> callableTimeout = new Callable<ResponseEntity<CustomDataResponse>>() {
@Override
public ResponseEntity<CustomDataResponse> call() throws Exception {
// Error response
HttpHeaders responseHeaders = new HttpHeaders();
responseHeaders.setContentType(new MediaType("application", "json", Charset.forName("UTF-8")));
return new ResponseEntity("Request has timed out!",responseHeaders,HttpStatus.INTERNAL_SERVER_ERROR);
}
};
WebAsyncTask<ResponseEntity<CustomDataResponse>> task = new WebAsyncTask<>(3000,callable);
task.onTimeout(callableTimeout);
return task;
}
}
My @WebConfig
@Configuration
@EnableWebMvc
class WebAppConfig extends WebMvcConfigurerAdapter {
@Override
public void configureAsyncSupport(AsyncSupportConfigurer configurer) {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(5);
executor.setMaxPoolSize(10);
executor.setKeepAliveSeconds(60 * 60);
executor.afterPropertiesSet();
configurer.registerCallableInterceptors(new TimeoutCallableProcessingInterceptor());
configurer.setTaskExecutor(executor);
}
}
And quite standard Interceptor:
public class TimeoutCallableProcessingInterceptor extends CallableProcessingInterceptorAdapter {
@Override
public <T> Object handleTimeout(NativeWebRequest request, Callable<T> task) {
throw new IllegalStateException("[" + task.getClass().getName() + "] timed out");
}
}
Everything works as it should, but the Callable from the controller always completes, which is obvious; but how do I stop processing there?
You can use WebAsyncTask to implement the timeout control and thread management to stop the new async thread gracefully.
Implement a Callable to run the process.
In this method (which runs in a different thread), store the current Thread in a local variable of the controller.
Implement another Callable to handle the timeout event.
In this method, retrieve the previously stored Thread and interrupt it by calling its interrupt() method.
Also throw a TimeoutException to stop the controller process.
In the running process, check whether the thread has been interrupted with Thread.currentThread().isInterrupted(); if so, roll back the transaction by throwing an exception.
Controller:
public WebAsyncTask<ResponseEntity<BookingFileDTO>> confirm(@RequestBody final BookingConfirmationRQDTO bookingConfirmationRQDTO)
throws AppException,
ProductException,
ConfirmationException,
BeanValidationException {
final Long startTimestamp = System.currentTimeMillis();
// The compiler obligates to define the local variable shared with the callable as final array
final Thread[] asyncTaskThread = new Thread[1];
/**
* Asynchronous execution of the service's task
* Implemented without ThreadPool, we're using Tomcat's ThreadPool
* To implement an specific ThreadPool take a look at http://docs.spring.io/spring/docs/current/spring-framework-reference/htmlsingle/#mvc-ann-async-configuration-spring-mvc
*/
Callable<ResponseEntity<BookingFileDTO>> callableTask = () -> {
//Stores the thread of the newly started asynchronous task
asyncTaskThread[0] = Thread.currentThread();
log.debug("Running saveBookingFile task at `{}`thread", asyncTaskThread[0].getName());
BookingFileDTO bookingFileDTO = bookingFileService.saveBookingFile(
bookingConfirmationRQDTO,
MDC.get(HttpHeader.XB3_TRACE_ID))
.getValue();
if (log.isDebugEnabled()) {
log.debug("The saveBookingFile task took {} ms",
System.currentTimeMillis() - startTimestamp);
}
return new ResponseEntity<>(bookingFileDTO, HttpStatus.OK);
};
/**
* This method is executed if a timeout occurs
*/
Callable<ResponseEntity<BookingFileDTO>> callableTimeout = () -> {
String msg = String.format("Timeout detected at %d ms during confirm operation",
System.currentTimeMillis() - startTimestamp);
log.error("Timeout detected at {} ms during confirm operation: informing BookingFileService.", msg);
// Informs the service that the time has ran out
asyncTaskThread[0].interrupt();
// Interrupts the controller call
throw new TimeoutException(msg);
};
WebAsyncTask<ResponseEntity<BookingFileDTO>> webAsyncTask = new WebAsyncTask<>(timeoutMillis, callableTask);
webAsyncTask.onTimeout(callableTimeout);
log.debug("Timeout set to {} ms", timeoutMillis);
return webAsyncTask;
}
Service implementation:
/**
* If the service has been informed that the time has run out,
* throws a TimeoutException to roll back the transaction
*/
private void rollbackOnTimeout() throws TimeoutException {
if(Thread.currentThread().isInterrupted()) {
log.error(TIMEOUT_DETECTED_MSG);
throw new TimeoutException(TIMEOUT_DETECTED_MSG);
}
}
@Transactional(rollbackFor = TimeoutException.class, propagation = Propagation.REQUIRES_NEW)
DTOSimpleWrapper<BookingFileDTO> saveBookingFile(BookingConfirmationRQDTO bookingConfirmationRQDTO, String traceId) {
// Database operations
// ...
return retValue;
}
