Java Spring issue with java.lang.IllegalThreadStateException

I have a Java app that throws java.lang.IllegalThreadStateException when I call the same method a second time while the first call is still running...
The solutions I have read suggest creating a new instance of the Thread, but I don't think that is an option for me since I'm autowiring all the classes... If I were forced to create a new instance, the other services (which are used inside the thread) wouldn't be instantiated.
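For context, the exception comes from calling start() more than once on the same Thread instance; a minimal illustration (not taken from the app):

Thread t = new Thread(() -> System.out.println("sending email..."));
t.start();
t.start(); // second start() on the same instance throws java.lang.IllegalThreadStateException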
So, let's go to the code...
In my spring-config.xml I added the following bean so I can access EmailThread (which is the thread I want to create to send emails) without blocking the app (sending the emails takes several seconds):
<bean id="emailThread" class="com.cobranzasmoviles.services.EmailThread"></bean>
This is my EmailThread
@Component
public class EmailThread extends Thread {

    Client client;
    CollectionBO collection;
    boolean confirmation;

    @Autowired
    private EmailService emailService;

    public void update(Client client, CollectionBO collection, boolean confirmation) {
        this.client = client;
        this.collection = collection;
        this.confirmation = confirmation;
    }

    @Override
    public void run() {
        try {
            if (this.client != null && this.collection != null)
                emailService.send(this.client, this.collection, this.confirmation);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
And here is how I call it (if I remove the while loop and the timeout, I get the error I mentioned):
@Autowired
EmailThread emailThread;

private void sendEmail(Client client, CollectionBO collection, boolean confirmation) {
    try {
        this.emailThread.update(client, collection, false);
        State state = this.emailThread.getState();
        while (!state.name().equalsIgnoreCase("NEW") && !state.name().equalsIgnoreCase("TERMINATED"))
            Timeout.seconds(1000);
        this.emailThread.start();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
The issue with this solution is that I'm blocking the app execution while waiting for the state to change to TERMINATED.
So as you can see, I cannot simply do new EmailThread()... Is there any solution? Am I using the wrong strategy to send emails?
UPDATE:
I changed my method to:
@Async
private void sendEmailService(Client client, CollectionBO collection, boolean confirmation) {
    try {
        // this.emailThread.update(client, collection, false);
        // State state = this.emailThread.getState();
        // while (!state.name().equalsIgnoreCase("NEW") && !state.name().equalsIgnoreCase("TERMINATED"))
        //     Timeout.seconds(1000);
        // this.emailThread.start();
        this.emailService.send(client, collection, confirmation);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
But the app is still waiting for the email to be sent before continuing with the execution... Am I missing something?
I'm calling the method with:
this.sendEmailService(client, colResponse, false);
Is the @Async annotation enough, or should I add anything else?
UPDATE 2:
I added @EnableAsync to my WebConfig.java:
@Configuration
@EnableWebMvc
@EnableAsync
@EnableAspectJAutoProxy(proxyTargetClass = true)
public class WebConfig extends WebMvcConfigurerAdapter {
In one class I'm calling the @Async method this way:
this.emailService.send(client, colResponse, true);
In the interface I added the @Async as well (not sure if it's required):
@Async
public void send(Client client, CollectionBO collection, boolean confirmation) throws Exception;
And in the method implementation I also added @Async:
@Async
public void send(Client client, CollectionBO collection, boolean confirmation) {
But it is not working... I mean, the app still waits for the email to be sent before completing the flow. The email is the last step in the flow, so I'm trying to send the response to the frontend without waiting for the email.
But I don't get any response until the email is sent.

If you don't want to wait for the emailService to perform the send operation, you could annotate it with @Async. As the method is void, you don't need to handle the Future provided by the proxy, so the change has minimal impact.
https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/scheduling/annotation/Async.html
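A minimal sketch of that setup, with class names assumed from the question (note that @Async is applied through a Spring proxy, so it only takes effect on public methods invoked from another bean, not via self-invocation):

@Configuration
@EnableAsync
public class AsyncConfig {
    // without a TaskExecutor bean, Spring falls back to a SimpleAsyncTaskExecutor
}

@Service
public class EmailServiceImpl implements EmailService {

    @Async
    @Override
    public void send(Client client, CollectionBO collection, boolean confirmation) {
        // runs on a task-executor thread; the calling request returns immediately
        // ... actual mail-sending logic ...
    }
}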

Related

Why is Spring State Machine PersistStateChangeListener component sometimes not executing during same thread?

Long story short:
We have a Spring State Machine implementation whose persistence sometimes runs with a different request (on the same thread from the thread pool), causing a state error within our code. Why does it not run synchronously as part of the same request?
Much longer story:
Our state machine is set up such that when an event is triggered and the state transition is successful/valid and persisted, work is done. If either fails, we throw an exception. Ironically, persistence is not enforced in Spring State Machine, so we had to do our own thing to enforce it.
We have a config class:
@Slf4j
@Configuration
@EnableStateMachine(name = "myStateMachine")
public class StatemachineConfiguration extends StateMachineConfigurerAdapter<String, String> {

    @Override
    public void configure(StateMachineTransitionConfigurer<String, String> transitions) throws Exception {
        transitions
            .withExternal().source(States.INITIAL.name()).target(States.IN_PROGRESS.name())
            .event(Events.START.name())
            .and()
            // ...more transitions
    }

    @Override
    public void configure(StateMachineStateConfigurer<String, String> states) throws Exception {
        states
            .withStates()
            .initial(States.INITIAL.name())
            .state(States.IN_PROGRESS.name())
            // ...more states;
    }

    @Override
    public void configure(StateMachineConfigurationConfigurer<String, String> config) throws Exception {
        config.withConfiguration()
            .taskExecutor(new SyncTaskExecutor()) // this should be the default, but I added it anyway
            .autoStartup(false);
    }
}
To trigger events we use a wrapper to enforce persistence before continuing. Without the wrapper, we were continuing without actually having successfully saved the state entity, which caused state transition errors when we later tried to trigger events while the database still had the old state.
StateEntityWrapper wrapper = new StateEntityWrapper(stateEntity);
boolean success = persistStateMachineHandler.handleEventWithState(
    MessageBuilder.withPayload(event.name()).setHeader(Constants.PERSIST_ENTITY_WRAPPER_HEADER, wrapper).build(),
    stateEntity.getState().name()
);
if (!(success && wrapper.isPersisted())) {
    throw new StateTransitionException();
}
// Transition saved and successful - begin work
// Trigger other events depending on what happens during work
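The StateEntityWrapper itself is not shown here; presumably it is just a small mutable holder along these lines (a sketch, not the actual class):

public class StateEntityWrapper {

    private final StateEntity stateEntity;
    private volatile boolean persisted;

    public StateEntityWrapper(StateEntity stateEntity) {
        this.stateEntity = stateEntity;
    }

    public StateEntity getStateEntity() {
        return stateEntity;
    }

    public boolean isPersisted() {
        return persisted;
    }

    public void setPersisted(boolean persisted) {
        this.persisted = persisted;
    }
}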
Our PersistStateChangeListener:
@Slf4j
@Component
@RequiredArgsConstructor
public class MyPersistStateChangeListener implements PersistStateChangeListener {

    private final MyStateRepository repository;

    @Override
    public void onPersist(State<String, String> state,
                          Message<String> message,
                          Transition<String, String> transition,
                          StateMachine<String, String> stateMachine) {
        StateEntityWrapper wrapper = message.getHeaders().get(Constants.PERSIST_ENTITY_WRAPPER_HEADER, StateEntityWrapper.class);
        if (wrapper != null) {
            StateEntity entity = wrapper.getStateEntity();
            entity.setState(States.valueOf(state.getId()));
            this.persist(entity);
            wrapper.setPersisted(true);
        } else {
            String msg = "StateEntityWrapper is null. Could not retrieve " + Constants.PERSIST_ENTITY_WRAPPER_HEADER + ".";
            log.warn("{}", msg);
            throw new MyPersistRuntimeException(msg);
        }
    }

    private void persist(StateEntity entity) {
        try {
            repository.save(entity);
        } catch (Exception e) {
            throw new MyPersistRuntimeException("Persistence failed: " + e.getMessage(), e);
        }
    }
}
I found that exceptions thrown by the persist listener are caught by Spring State Machine in AbstractStateMachine::callPreStateChangeInterceptors and the state change is then skipped. However, the call to PersistStateMachineHandler::handleEventWithState returns true regardless of the failure to save. This is because the state machine interceptors (the persist listener is one of them) can be set up to run asynchronously. However, according to the documentation, the default is supposed to be synchronous.
If the default is synchronous, then why does it sometimes execute after PersistStateMachineHandler returns? We use Sleuth in our logging and I noticed that it usually has the same trace id during persistence, but occasionally I see it execute with another request and its trace id. Lastly, I found that DefaultStateMachineExecutor manages a TaskExecutor and should run synchronously with the default configuration, but sometimes does not. Any ideas?

What is the best practice for synchronizing parallel API requests to return the same result from a long running query without multiple executions?

I'm working on a backend Spring Boot project which is called by multiple clients. One of the functionalities is to merge data from two different databases and return the result, which may take up to 2 minutes.
I would like concurrent calls to this endpoint to wait for an already running request and return the same result without running the query again.
As shown below, I've tried to set up a CompletableFuture field in the singleton service bean (which I know is a code smell, since singleton service beans should be stateless).
// RestController
@Async
@GetMapping
public CompletableFuture<List<Foo>> getSyncedFoo() {
    return service.syncFoo();
}

// ServiceImpl
private CompletableFuture<List<Foo>> syncTask;

@Override
@Async
@Transactional
public CompletableFuture<List<Foo>> syncFoo() {
    if (this.syncTask == null || this.syncTask.isDone()) {
        this.syncTask = CompletableFuture.supplyAsync(() -> {
            // long running task
            return new ArrayList<>();
        });
    }
    return this.syncTask;
}
I expected multiple frontend clients calling the API endpoint to receive the same response at roughly the same time, with the backend performing the long-running operation just once.
The operation was in fact executed just once, but one of the clients received a 503 (Service Unavailable) while the other client received the expected response.
I suspect it's due to the shared use of the CompletableFuture, but I'm at a loss as to what approach I should take. Could RxJava be of any use here with an Observable-based strategy?
I've found a functional answer, for now.
@Service
public class FooServiceImpl implements FooService {

    private CompletableFuture<List<Foo>> syncFuture;
    private Observable<List<Foo>> syncObservable;

    @Override
    public Single<List<Foo>> syncFoo() {
        if (syncFuture == null || syncFuture.isDone()) {
            syncFuture = syncFooAsync();
            syncObservable = Observable.fromFuture(syncFuture).share();
        }
        return Single.fromObservable(syncObservable);
    }

    private CompletableFuture<List<Foo>> syncFooAsync() {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(10_000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                return new ArrayList<>();
            }
        });
    }
}
By using the RxJava library it is possible to multicast the created Observable to multiple listeners via the Observable::share method, and the @RestController happily works with the returned Single(s).
Sadly, it still uses state in a singleton which is accessed concurrently by multiple threads, so I fear concurrency issues such as the Observable completing while a new request is still in the process of creating a new subscription.
Hence I do not recommend this as a best practice, and I'm not marking this as a final answer.
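One way to narrow that race, sketched here under the assumption that coarse locking on the service is acceptable, is to make the check-and-create section mutually exclusive:

@Override
public synchronized Single<List<Foo>> syncFoo() {
    // only one thread at a time can see a null/done future and replace it,
    // so concurrent callers either start the work or share the existing observable
    if (syncFuture == null || syncFuture.isDone()) {
        syncFuture = syncFooAsync();
        syncObservable = Observable.fromFuture(syncFuture).share();
    }
    return Single.fromObservable(syncObservable);
}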

JBoss EAP 6 blocked calling an EJB method after an asynchronous method

I have a stateless bean that inserts some data using an asynchronous method of another bean (local injection). This data insertion takes time, so I do not wait for this operation to finish. After this data insertion, I am calling another method of the same bean. When I put a breakpoint on that method, the server waits approximately 90 seconds to reach it. Maybe JBoss waits for the transaction of the asynchronous method to complete. I do not know what is going on.
@Stateless
public class SimulationNodePersistenceBean implements SimulationNodePersistenceRemote, SimulationNodePersistenceLocal {

    @Resource
    SessionContext context;

    @EJB
    private SimulationResultGraphPersitenceBean graphPersistenceBean;

    @Asynchronous
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    private void addResultGraphsToDatabase(long id, Graph[] graphList) {
        long startTime = System.nanoTime();
        ResultGraph paramGraph;
        ResultGraphPoint dataPoint;
        Graph graph;
        for (int i = 0; i < graphList.length; i++) {
            graph = graphList[i];
            paramGraph = new ResultGraph();
            try {
                graphPersistenceBean.persistGraph(paramGraph);
            } catch (Exception databaseException) {
                // TODO add error message to the contingency simulation messages list
                logger.error("Error saving ResultGraph:" + paramGraph);
            }
        }
        long duration = System.nanoTime() - startTime;
        logger.debug("Graphs inserted to DB in (sec) :" + (duration / NANO_SECOND_CONVERSION_FACTOR));
    }

    // @Asynchronous
    public void persistSimulationResults(long contingencySimulationId, Graph[] graphList,
                                         List<AB> reiList) {
        if (graphList != null) {
            addResultGraphsToDatabase(contingencySimulationId, graphList);
        }
        if (reiList != null) {
            // another method
        }
        calculateContSimStability(contingencySimulationId);
    }

    @Override
    public void calculateSimIndex(long id) {
    }
This is the other bean, called from the main bean:
@Stateless
public class SimulationResultGraphPersitenceBean {

    @PersistenceContext(unitName = "DBService")
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    @Asynchronous
    public void persistGraph(ResultGraph graph) throws SiGuardPersistenceException {
        try {
            ResultGraphService service = new ResultGraphService(em);
            service.create(graph);
            em.flush();
        } catch (Exception ex) {
            throw new PersistenceException("Error persisting graph", ex);
        }
    }
This is how the client calls the main bean. This works asynchronously:
getSimulationEJB().persistSimulationResults(id, tsaParser.getLstFile().getGraphArray());
After calling this method, I call another method of SimulationNodePersistenceBean. This method waits for some minutes.
getSimulationEJB().calculateSimIndex(contSimId);
I have created a thread dump using jstack. I did not have this problem in JBoss AS 6; I migrated my application to JBoss EAP 6.4. Maybe I need to make some configuration changes, but I do not know what I should do.
I checked the thread dump. I did not find any thread in BLOCKED state. Should I look for other keywords?
As I already pointed out in the comments, you are mixing calls to asynchronous and synchronous methods. In your example, you are calling the addResultGraphsToDatabase method (which is an asynchronous method) from the persistSimulationResults method (which is a synchronous method, since you have commented out the @Asynchronous annotation on top of it). Therefore, right now the addResultGraphsToDatabase method behaves like a synchronous method despite the @Asynchronous annotation.
I am not sure if you took a look at the link I posted in the comments, but you need to call the asynchronous method through the SessionContext. Something like this:
At the class level:
@Inject
SessionContext ctx;
Then, within the persistSimulationResults method, obtain the bean's business proxy from the SessionContext and invoke the asynchronous method on it:
ctx.getBusinessObject(SimulationNodePersistenceLocal.class).addResultGraphsToDatabase(id, graphList);
For a more detailed example, please take a look at the link I have posted in the comments.
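A rough sketch of that pattern, assuming addResultGraphsToDatabase is made public and declared on the local business interface (otherwise the container cannot apply the @Asynchronous interceptor):

@Stateless
public class SimulationNodePersistenceBean implements SimulationNodePersistenceRemote, SimulationNodePersistenceLocal {

    @Resource
    SessionContext context;

    public void persistSimulationResults(long contingencySimulationId, Graph[] graphList, List<AB> reiList) {
        // look up the container-managed proxy for this bean; calling the asynchronous
        // method through it (instead of a plain local call) lets the container
        // dispatch it on its async executor and return immediately
        SimulationNodePersistenceLocal self = context.getBusinessObject(SimulationNodePersistenceLocal.class);
        if (graphList != null) {
            self.addResultGraphsToDatabase(contingencySimulationId, graphList);
        }
        calculateContSimStability(contingencySimulationId);
    }

    @Asynchronous
    @Override
    public void addResultGraphsToDatabase(long id, Graph[] graphList) {
        // same body as in the question, but public and part of SimulationNodePersistenceLocal
    }
}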

Automatic retry of transactions/requests in Dropwizard/JPA/Hibernate

I am currently implementing a REST API web service using the Dropwizard framework together with dropwizard-hibernate and JPA/Hibernate (using a PostgreSQL database).
I have a method inside a resource which I annotated with @UnitOfWork to get one transaction for the whole request.
The resource method calls a method of one of my DAOs which extends AbstractDAO<MyEntity> and is used to retrieve or modify my entities (of type MyEntity) in the database.
This DAO method does the following: first it selects an entity instance, and therefore a row, from the database. Afterwards, the entity instance is inspected and, based on its properties, some of them may be altered. In that case, the row in the database should be updated.
I didn't specify anything else regarding caching, locking or transactions anywhere, so I assume the default is some kind of optimistic locking mechanism enforced by Hibernate.
Therefore (I think), when the entity instance is deleted by another thread after it has been selected from the database in the current one, a StaleStateException is thrown when trying to commit the transaction, because the entity instance which should be updated has already been deleted by the other thread.
When using the @UnitOfWork annotation, my understanding is that I'm not able to catch this exception, either in the DAO method or in the resource method.
I could now implement an ExceptionMapper<StaleStateException> for Jersey to deliver an HTTP 503 response with a Retry-After header or something like that to the client to tell it to retry its request.
But I'd rather first like to retry the request/transaction (which is basically the same here because of the @UnitOfWork annotation) while still on the server.
Is there any example implementation of a server-side transaction retry mechanism for Dropwizard? Like retrying a configurable number of times (e.g. 3) and then failing with an exception/HTTP 503 response.
How would you implement this? The first thing that came to my mind is another annotation like @Retry(exception = StaleStateException.class, count = 3) which I could add to my resource.
Any suggestions on this?
Or is there an alternative solution to my problem considering different locking/transaction-related things?
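For reference, the Jersey fallback mentioned above could look roughly like this (a sketch; the 503 status and Retry-After value are assumptions):

import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

import org.hibernate.StaleStateException;

@Provider
public class StaleStateExceptionMapper implements ExceptionMapper<StaleStateException> {

    @Override
    public Response toResponse(StaleStateException exception) {
        // ask the client to retry shortly instead of surfacing a generic 500
        return Response.status(Response.Status.SERVICE_UNAVAILABLE)
                .header("Retry-After", "1")
                .build();
    }
}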
An alternative approach is to use an injection framework - in my case Guice - and use method interceptors for this. This is a more generic solution.
DW integrates with Guice very smoothly through https://github.com/xvik/dropwizard-guicey
I have a generic implementation that can retry any exception. It works, like yours, on an annotation, as follows:
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface Retry {
}
The interceptor then does (with docs):
/**
 * Abstract interceptor to catch exceptions and retry the method automatically.
 * Things to note:
 *
 * 1. The method must be idempotent (you can invoke it x times without altering the result).
 * 2. The method MUST re-open a connection to the DB if that is what is retried. Connections are in an undefined state after a rollback/deadlock.
 *    You can try and reuse them, however the result will likely not be what you expected.
 * 3. Implement the retry logic intelligently. You may need to unpack the exception to get to the original.
 *
 * @author artur
 */
public abstract class RetryInterceptor implements MethodInterceptor {

    private static final Logger log = Logger.getLogger(RetryInterceptor.class);

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        if (invocation.getMethod().isAnnotationPresent(Retry.class)) {
            int retryCount = 0;
            boolean retry = true;
            while (retry && retryCount < maxRetries()) {
                try {
                    return invocation.proceed();
                } catch (Exception e) {
                    log.warn("Exception occurred while trying to execute method", e);
                    if (!retry(e)) {
                        retry = false;
                    } else {
                        retryCount++;
                    }
                }
            }
        }
        throw new IllegalStateException("All retries of the invocation failed");
    }

    protected boolean retry(Exception e) {
        return false;
    }

    protected int maxRetries() {
        return 0;
    }
}
A few things to note about this approach.
The retried method must be designed to be invoked multiple times without altering the result (e.g. if the method stores temporary results in the form of increments, then executing it twice might increment twice).
Database exceptions are generally not safe to retry as-is: you must open a new connection (in particular when retrying deadlocks, which is my case).
Other than that, this base implementation simply catches everything and then delegates the retry count and detection to the implementing class. For example, my specific deadlock retry interceptor:
public class DeadlockRetryInterceptor extends RetryInterceptor {

    private static final Logger log = Logger.getLogger(MsRetryInterceptor.class);

    @Override
    protected int maxRetries() {
        return 6;
    }

    @Override
    protected boolean retry(Exception e) {
        SQLException ex = unpack(e);
        if (ex == null) {
            return false;
        }
        int errorCode = ex.getErrorCode();
        log.info("Found exception: " + ex.getClass().getSimpleName() + " With error code: " + errorCode, ex);
        return errorCode == 1205;
    }

    private SQLException unpack(final Throwable t) {
        if (t == null) {
            return null;
        }
        if (t instanceof SQLException) {
            return (SQLException) t;
        }
        return unpack(t.getCause());
    }
}
And finally, I can bind this to Guice by doing:
bindInterceptor(Matchers.any(), Matchers.annotatedWith(Retry.class), new MsRetryInterceptor());
This matches any class and any method annotated with @Retry.
An example method for retry would be:
@Override
@Retry
public List<MyObject> getSomething(int count, String property) {
    try (Connection con = datasource.getConnection();
         Context c = metrics.timer(TIMER_NAME).time()) {
        // do some work
        // return some stuff
    } catch (SQLException e) {
        // catches the exception and rethrows it
        throw new RuntimeException("Some more specific thing", e);
    }
}
The reason I need unpack() is that old legacy code, like this DAO implementation, already catches its own exceptions.
Note also how the method (a get) retrieves a new connection from my datasource pool when invoked again, and how no modifications are done inside it (hence: safe to retry).
I hope that helps.
You can do similar things by implementing ApplicationListeners or RequestFilters or similar, however I think this is a more generic approach that can retry any kind of failure on any method that is Guice-bound.
Also note that Guice can only intercept methods when it constructs the class itself (inject-annotated constructor etc.).
Hope that helps,
Artur
I found a pull request in the Dropwizard repository that helped me. It basically enables the use of the @UnitOfWork annotation on methods other than resource methods.
Using this, I was able to detach the session opening/closing and transaction creation/committing lifecycle from the resource method by moving the @UnitOfWork annotation from the resource method to the DAO method that is responsible for the data manipulation causing the StaleStateException.
Then I was able to build a retry mechanism around this DAO method.
Example:
// class MyEntityDAO extends AbstractDAO<MyEntity>
@UnitOfWork
void tryManipulateData() {
    // Due to optimistic locking, this operation causes a StaleStateException when
    // committed "by the @UnitOfWork annotation" after returning from this method.
}

// Retry mechanism, implemented wheresoever.
void manipulateData() {
    while (true) {
        try {
            tryManipulateData();
        } catch (StaleStateException e) {
            continue; // Retry.
        }
        return;
    }
}

// class MyEntityResource
@POST
// ...
// @UnitOfWork can also be used here if nested transactions are desired.
public Response someResourceMethod() {
    // Call manipulateData() somehow.
}
Of course one could also attach the @UnitOfWork annotation to a method inside a service class which makes use of the DAOs, instead of applying it directly to a DAO method. In whichever class the annotation is used, remember to create a proxy of the instances with the UnitOfWorkAwareProxyFactory, as described in the pull request.
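As a rough sketch of that last step (assuming the dropwizard-hibernate UnitOfWorkAwareProxyFactory API and a DAO constructor that takes a SessionFactory):

UnitOfWorkAwareProxyFactory proxyFactory =
        new UnitOfWorkAwareProxyFactory(hibernateBundle);

// the returned proxy opens a session and transaction around @UnitOfWork methods
MyEntityDAO dao = proxyFactory.create(MyEntityDAO.class,
        SessionFactory.class,
        hibernateBundle.getSessionFactory());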

Issue with Retrofit Callbacks

I have a Retrofit interface that defines a method with a callback, like this one:
public interface ApiManagerService {
    @GET("/users/")
    void getUsers(Callback<List<GitHubMember>> callback);
}
GitHubMember is just a POJO with 3 fields: id, login and url. I created a class called ResponseHandler where I can wrap the response from the Callback. Here is how it's defined:
public class ResponseHandler<T> {

    private T response;
    private RESPONSE_CODE responseCode;
    private String detail;

    public static enum RESPONSE_CODE {
        OK,              // if the request succeeded
        APP_ERROR,       // technical error, for instance network
        TECHNICAL_ERROR  // app technical error
    }

    // getters and setters
}
Here is how I use this class with the getUsers method:
public ResponseHandler<List<GitHubMember>> getUsers() {
    final ResponseHandler<List<GitHubMember>> handler = new ResponseHandler<List<GitHubMember>>();
    apiManagerService.getUsers(new Callback<List<GitHubMember>>() {
        @Override
        public void success(List<GitHubMember> users, Response response) {
            handler.setResponse(users);
            handler.setResponseCode(ResponseHandler.RESPONSE_CODE.OK);
        }

        @Override
        public void failure(RetrofitError error) {
            try {
                handler.setResponseCode(ResponseHandler.RESPONSE_CODE.APP_ERROR);
                handler.setDetail(error.toString());
            } catch (Exception e) {
                handler.setResponseCode(ResponseHandler.RESPONSE_CODE.TECHNICAL_ERROR);
                handler.setDetail(e.getMessage());
            }
        }
    });
    return handler;
}
The problem I have is that after executing this method and entering the callback, all the fields in ResponseHandler are null. I am totally sure the callbacks are executed because I set breakpoints in them while debugging.
The apiManagerService object is correctly initialized with the RestAdapter class.
How can I solve this problem?
apiManagerService.getUsers() only initiates the network request and returns. The Callback is called after the network responds, on a background thread. So by the time return handler is executed, handler is still empty. More than that, even if the return statement were somehow executed after the callback, you might not see the changes made to the shared variables (the fields of handler) by the background thread, due to Java Memory Model restrictions.
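One common fix is to stay asynchronous and hand the wrapped result to a callback of your own instead of returning it; a sketch with a hypothetical ResponseListener interface (not part of the question's code):

public interface ResponseListener<T> {
    void onResult(ResponseHandler<T> handler);
}

public void getUsers(final ResponseListener<List<GitHubMember>> listener) {
    apiManagerService.getUsers(new Callback<List<GitHubMember>>() {
        @Override
        public void success(List<GitHubMember> users, Response response) {
            ResponseHandler<List<GitHubMember>> handler = new ResponseHandler<List<GitHubMember>>();
            handler.setResponse(users);
            handler.setResponseCode(ResponseHandler.RESPONSE_CODE.OK);
            listener.onResult(handler); // delivered only once the response is actually available
        }

        @Override
        public void failure(RetrofitError error) {
            ResponseHandler<List<GitHubMember>> handler = new ResponseHandler<List<GitHubMember>>();
            handler.setResponseCode(ResponseHandler.RESPONSE_CODE.APP_ERROR);
            handler.setDetail(error.toString());
            listener.onResult(handler);
        }
    });
}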
