I'm having a strange issue.
In a class I have:
private final ScheduledExecutorService executor
        = Executors.newSingleThreadScheduledExecutor();

public MyClass(final MyService service) {
    executor.scheduleAtFixedRate(new Runnable() {
        @Override
        public void run() {
            service.foo();
        }
    }, 0, 30, TimeUnit.SECONDS);
}
MyService is a Spring bean that has @Transactional on its foo method. MyClass is instantiated only once (effectively a singleton in the application).
After the first invocation of service.foo() (which works fine), on subsequent requests to the application I am randomly getting:
java.lang.IllegalStateException: Already value [SessionImpl(PersistenceContext[entityKeys=[],collectionKeys=[]];ActionQueue[insertions=[] updates=[] deletions=[] collectionCreations=[] collectionRemovals=[] collectionUpdates=[]])] for key [org.hibernate.impl.SessionFactoryImpl#2cd91000] bound to thread [http-bio-8080-exec-10]
A few observations:
when the exception is thrown, the session stored in the TransactionSynchronizationManager is closed
the transaction synchronization manager resource map for the manually scheduled thread is empty
the exception occurs in http-bio-8080-exec threads, but the manually scheduled one runs in a pool- thread, so there is no 'thread pollution'
MyClass is instantiated on startup, in a thread named "Thread-5", i.e. it is not in any way related to the http-bio threads.
If I comment out the invocation of service.foo(), or get rid of the @Transactional annotation, everything works (except, of course, that data is not inserted into the db)
Any clues what the issue might be?
(Note: I prefer not to use @Scheduled - I don't want MyClass to be a Spring bean, and the runnable has to operate on some of its internal state before invoking the service)
Update: After a while I was able to reproduce it even without the scheduling stuff, so it is probably a general Spring problem with the latest snapshot I'm using.
I assume that exception comes from an invocation of the TransactionInterceptor or the like (some Spring infrastructure bean), or are you using the TransactionSynchronizationManager from your own code somewhere? It appears to me that something is binding sessions to a thread being managed by your container (is that Tomcat 7?) and failing to unbind them before they're returned to the container's thread pool. Thus when the same thread is used for another transactional request later, Spring can't bind the new Session to it because the old one wasn't cleaned up.
I don't actually see anything to make me think it's directly related to your custom scheduling with MyClass. Are you sure it's not just a coincidence that you didn't see the exception when you remove the service.foo() call?
If you could catch one of those threads in a debugger when it's being returned to the pool with a Session still bound to it, you might be able to backtrack to what it was used for. An omniscient debugger would theoretically be perfect for this, though I've never used one myself: ODB and TOD are the two I know of.
Edit: An easier way to find the offending threads: add a Filter (servlet filter, that is) to your app that runs "around" everything else. After chain.doFilter(), as the last act of handling a request before it leaves your application, check the value of TransactionSynchronizationManager.getResourceMap(). It should be an empty map when you're done handling a request. When you find one that isn't, that's where you need to backtrack from to see what happened.
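For illustration, here is a minimal sketch of such a diagnostic filter, assuming a plain javax.servlet Filter registered around everything else (the class name and logging are illustrative; TransactionSynchronizationManager.getResourceMap() is the only Spring call relied on):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.springframework.transaction.support.TransactionSynchronizationManager;

public class TransactionResourceLeakFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        try {
            chain.doFilter(request, response);
        } finally {
            // As the last act of handling the request, nothing should still be bound
            // to this thread. A non-empty map means this request leaked a resource.
            if (!TransactionSynchronizationManager.getResourceMap().isEmpty()) {
                System.err.println("Resources still bound to " + Thread.currentThread().getName()
                        + ": " + TransactionSynchronizationManager.getResourceMap());
            }
        }
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}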
I am working on a fallback procedure for when a connection failure (or another error) occurs. I've created the CacheConfiguration/CacheErrorHandler to handle the errors and log them. The application successfully switches between using the cache and going through the normal process when Redis fails.
However, the cache eviction endpoint I've implemented (via the @CacheEvict annotation) is essentially an empty method.
@DeleteMapping(value = "/cache/clear")
@CacheEvict(value = {_values_}, allEntries = true)
public ResponseEntity<String> clearAllCache() { return ResponseEntity.ok("OK"); }
Current CacheErrorHandler
@Override
public CacheErrorHandler errorHandler() {
    return new CacheErrorHandler() {
        @Override
        public void handleCacheEvictError(RuntimeException exception, Cache cache, Object key) {
            LOGGER.warn("Failure evicting from cache: " + cache.getName() + ", exception: " + exception);
        }
        // other CacheErrorHandler methods omitted
    };
}
The logger outputs the cache evict error, but the response still sends "OK" back to the client.
Is there a way to catch the cache error and send a different response saying that the cache evict failed?
I've tried adding a try-catch inside the endpoint to throw an exception, but that went nowhere, and I couldn't find any examples online that address this specific issue.
One thing to keep in mind here is that Spring's @CacheEvict behavior is applied "after" the method (by default) on which the annotation is declared, which in your case is the clearAllCache() Web service method.
However, you can configure the cache eviction to occur before the (actual) clearAllCache() Web service method is called, like so:
@CacheEvict(cacheNames = { ... }, allEntries = true, beforeInvocation = true)
public ResponseEntity<String> clearAllCache() {
    // ...
}
That is, with the beforeInvocation attribute on the @CacheEvict annotation set to true, the cache eviction (for all entries) will occur before the actual clearAllCache() method is invoked.
NOTE: Logically, if the eviction only happens after the clearAllCache() method has already been called, then you really have no way to respond if the cache eviction (or rather, the "clear" operation) was unsuccessful. So you must configure the cache eviction to occur before your Web service method gets invoked, first of all.
Next, you need some way to know that your custom CacheErrorHandler was invoked on an error occurring in your caching provider (e.g. Redis) during eviction (or technically, the Cache.clear() operation in this case, since you are evicting "all entries").
Another thing to keep in mind is that since you appear to be operating in a Web environment (e.g. a Servlet container like Tomcat or Jetty), you need to keep thread safety in mind: each HTTP request, and the corresponding Web handler method such as clearAllCache() called on HTTP DELETE, will be invoked from a separate Thread (i.e. the Thread-per-(HTTP)-request model).
So, you can solve that problem using a Java ThreadLocal declared inside your custom CacheErrorHandler class to capture the state that is needed once the clearAllCache() method is called.
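A minimal sketch of that idea (the TrackingCacheErrorHandler class name and its helper method are illustrative, and the {_values_} cache-name placeholder is kept from your snippet):

import org.springframework.cache.Cache;
import org.springframework.cache.interceptor.CacheErrorHandler;

public class TrackingCacheErrorHandler implements CacheErrorHandler {

    // Holds the last eviction/clear error raised on the current request thread.
    private static final ThreadLocal<RuntimeException> lastEvictError = new ThreadLocal<>();

    public static RuntimeException getAndClearLastEvictError() {
        RuntimeException error = lastEvictError.get();
        lastEvictError.remove();
        return error;
    }

    @Override
    public void handleCacheEvictError(RuntimeException exception, Cache cache, Object key) {
        lastEvictError.set(exception);
    }

    @Override
    public void handleCacheClearError(RuntimeException exception, Cache cache) {
        lastEvictError.set(exception);
    }

    @Override
    public void handleCacheGetError(RuntimeException exception, Cache cache, Object key) { }

    @Override
    public void handleCachePutError(RuntimeException exception, Cache cache, Object key, Object value) { }
}

The endpoint then checks the ThreadLocal after the (beforeInvocation = true) eviction has run:

@DeleteMapping(value = "/cache/clear")
@CacheEvict(value = {_values_}, allEntries = true, beforeInvocation = true)
public ResponseEntity<String> clearAllCache() {
    RuntimeException error = TrackingCacheErrorHandler.getAndClearLastEvictError();
    return error == null
            ? ResponseEntity.ok("OK")
            : ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Cache clear failed: " + error.getMessage());
}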
I have written one such example test class demonstrating how you could accomplish this. The key to this implementation (solution) is the proper configuration of the cache eviction and the use of the ThreadLocal in the custom CacheErrorHandler.
My test is not specifically configured as a Web-based service (e.g. using Spring Web MVC, or anything like that), but I modeled the test use case after your particular situation. I also made use of Mockito to spy on the Spring caching infrastructure to always throw a RuntimeException anytime a Cache eviction based operation occurs (e.g. evict(key) or clear(), etc).
Of course, there are probably better, more robust ways to implement this solution, but this at least demonstrates that it is possible.
Hopefully, this gives you more ideas.
I am maintaining code which looks like this:
@Asynchronous
@TransactionTimeout(value = 1, unit = TimeUnit.HOUR)
public void downloadFile(Long fileId) {
    // This method takes more than 1 hour
    service.download(fileId);

    // This method should be called even when the download finished with an error
    service.fileDownloadedFinishedNotification(fileId);
}
This is just example code; to fileDownloadedFinishedNotification we pass the message we want to display, etc., and inside it we mark the process as finished with error/success.
So, as you can see, the download can time out, and after that fileDownloadedFinishedNotification won't be called, because the transaction failed due to the timeout.
I was thinking about extracting the notification to another method and calling it like this:
@Asynchronous
@TransactionTimeout(value = 1, unit = TimeUnit.HOUR)
public Future<String> downloadFile(Long fileId) {
    // This method takes more than 1 hour
    service.download(fileId);
    return new AsyncResult<String>("Test");
}

public void example() {
    long id = 15;
    String msg = "default stuff";
    try {
        msg = downloadFile(id).get();
    } catch (Exception e) {
        e.printStackTrace();
    }
    service.fileDownloadedFinishedNotification(id, msg);
}
But I am not sure if it is a good idea, or maybe there is some other functionality which I can call when the timeout is reached - something like onTimeout.
Some considerations:
There is no simple way to handle transaction timeout with a listener AFAIK
Annotations use dynamic proxies under the covers, so they won't be applied on an inner call; you have to call your downloadFile from outside (on a bean injected into your caller).
The current transaction will already be aborted when fileDownloadedFinishedNotification is called, and so all operations on a transacted resource (DB, etc.) will be rolled back; you may have to invoke the method within a dedicated transaction, e.g. annotate it with @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW) - see the sketch after this list.
Assuming the download method retrieves the content across the network, and unless you access it through a dedicated JCA adapter, no exception will be thrown on transaction timeout; the transaction reaper only marks the current transaction as aborted and releases related resources but does not interrupt the thread - only a subsequent access to a MANAGED resource (DataSource, JMS, etc.) will throw an exception.
Regarding the last point, while interacting with an unmanaged resource the only way to know whether the current transaction is still active is to regularly check its state using EJBContext.getRollbackOnly() or by making a dummy access to any managed resource.
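A rough sketch of the second and third points, under stated assumptions: a JBoss container for @TransactionTimeout, a hypothetical NotificationService bean, and a hypothetical FileService standing in for the service used in the question.

import java.util.concurrent.TimeUnit;
import javax.ejb.Asynchronous;
import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import org.jboss.ejb3.annotation.TransactionTimeout;

// NotificationService.java
@Stateless
public class NotificationService {

    // Runs in its own transaction, so it survives even if the caller's
    // transaction has already been marked rollback-only by the timeout.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void fileDownloadedFinishedNotification(Long fileId, String message) {
        // persist/report the outcome here
    }
}

// DownloadBean.java
@Stateless
public class DownloadBean {

    @EJB
    private NotificationService notificationService; // external call, so the proxy applies

    @EJB
    private FileService service; // hypothetical, stands in for 'service' from the question

    @Asynchronous
    @TransactionTimeout(value = 1, unit = TimeUnit.HOUR)
    public void downloadFile(Long fileId) {
        String message = "OK";
        try {
            service.download(fileId); // may exceed the timeout
        } catch (Exception e) {
            message = e.getMessage();
        }
        notificationService.fileDownloadedFinishedNotification(fileId, message);
    }
}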
There are ways of achieving what you want, but a proper implementation would need more information about your level of access to change the application.
There are many places where transaction propagation is explained, but given that you are running your app in an EJB container I would start from here:
https://docs.oracle.com/javaee/6/tutorial/doc/bncih.html
I would read the whole chapter, but most relevant to your case are container-managed transactions, here:
https://docs.oracle.com/javaee/6/tutorial/doc/bncij.html
Now, assuming you have full access and you can change your database structure, the way I would implement this would be:
You are running your service in a parent transaction T1
Before invoking the download method, call another service to record that the download started and the maximum expected time to finish. Do this in a REQUIRES_NEW transaction. This quick database interaction will run in an autonomous transaction T2
Once the above T2 transaction commits, your download-started record is committed and available to query
Once back in the parent T1, start your download
If the download finishes successfully, record the success in the same record you persisted in T2
If you get a timeout, the above will never be recorded and the database will still show the download as started, with its maximum expected time to finish
Define a monitoring process that kicks off at regular intervals and checks the download status. If the expected time to finish has passed, have that monitoring process alert, record the failure, trigger another retry, or whatever your business rules require
Hope it helped. Sorry for not coding examples, but I think you will have enough to start with.
Cheers
I have a project running on Spring Boot 1.3.8, Hikari CP 2.6.1 and Hibernate (Spring ORM 4.2.8). The code on service layer looks like this:
public void doStuff() {
    A a = dao.findByWhatever();
    if (a.hasProperty()) {
        B b = restService.doRemoteRequestWithRetries(); // May take a long time
        a.setProp(b.getSomething());
        dao.save(a);
    }
}
Hikari configuration has this: spring.datasource.leakDetectionThreshold=2000.
The problem is that the external REST service is quite slow and often takes 2+ seconds to respond; as a result we see a lot of java.lang.Exception: Apparent connection leak detected, which are nothing but false positives, though the underlying problem can be clearly seen: we hold a DB connection for the time we are executing the REST request.
The question would be: how to properly decouple the DB and REST stuff? Or how to tell Hibernate to release the connection in between, so that we return the DB connection to the pool while waiting for the REST response?
I have tried setting hibernate.connection.release_mode=AFTER_TRANSACTION and it kind of helps, at least we do not have connection leak exceptions. The only problem is that our tests started showing this:
2018-04-17 15:48:03.438 WARN 94029 --- [ main] o.s.orm.jpa.vendor.HibernateJpaDialect : JDBC Connection to reset not identical to originally prepared Connection - please make sure to use connection release mode ON_CLOSE (the default) and to run against Hibernate 4.2+ (or switch HibernateJpaDialect's prepareConnection flag to false)
The tests are using an injected DAO to insert records in the DB and later check them via the application API. They are not annotated with @Transactional, and the list of listeners looks like this:
@TestExecutionListeners({
    DependencyInjectionTestExecutionListener.class,
    TransactionalTestExecutionListener.class,
    TransactionDbUnitTestExecutionListener.class
})
Any ideas what could be the problem with tests?
In the code
public void doStuff() {
    A a = dao.findByWhatever();
    if (a.hasProperty()) {
        B b = restService.doRemoteRequestWithRetries(); // May take a long time
        a.setProp(b.getSomething());
        dao.save(a);
    }
}
I see three tasks here - fetching entity A, connecting to the remote service, and updating entity A. All of these are in the same transaction, so the underlying connection will be held until the method completes.
So the idea is to split tasks one and three into separate transactions, thereby allowing the connection to be released before making the call to the remote service.
Basically, with Spring Boot you need to add spring.jpa.open-in-view=false. This will not register OpenEntityManagerInViewInterceptor, and thus the entityManager (and in turn the connection) is not bound to the current thread/request.
Subsequently, split the three tasks into separate methods annotated with @Transactional. This binds the entityManager to the transaction scope and releases the connection at the end of each transactional method.
NOTE: Do ensure that there isn't any transaction already started/in progress before calling these methods (i.e. in the caller, such as a Controller). Otherwise the purpose is defeated and these new @Transactional methods will run in the same transaction as before.
So the high-level approach could look like this:
In the Spring Boot application.properties, add the property spring.jpa.open-in-view=false.
Next, split the doStuff method into three methods in a new service class (as sketched below). The intent is to ensure they use different transactions.
The first method, annotated with @Transactional, will call A a = dao.findByWhatever();
The second method makes the remote call.
The third method, annotated with @Transactional, will run the rest of the code, with a JPA merge or Hibernate saveOrUpdate on object a.
Now autowire this new service into your current code and call the three methods.
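A minimal sketch of that split; StuffService is a hypothetical name, and A, B, ADao, and restService stand in for the types used in the question:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class StuffService {

    @Autowired
    private ADao dao;

    // Transaction (and connection) held only for the read.
    @Transactional
    public A loadA() {
        return dao.findByWhatever();
    }

    // Transaction (and connection) held only for the update.
    @Transactional
    public void updateA(A a, B b) {
        a.setProp(b.getSomething());
        dao.save(a);
    }
}

The caller (with stuffService and restService injected) must not be @Transactional itself, so no connection is held across the slow REST call:

public void doStuff() {
    A a = stuffService.loadA();
    if (a.hasProperty()) {
        B b = restService.doRemoteRequestWithRetries(); // no DB connection held here
        stuffService.updateA(a, b);
    }
}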
I am working on a Java Portlet (extending GenericPortlet), using JBoss 7.02 and LifeRay Portal 6.1.0 GA1. This is one of the bundles that can be downloaded from LifeRay's release archive.
During deployment, when the init() method is called, getRequestDispatcher() returns null. Below is the exact error message:
09:22:15,972 ERROR [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/my-portlet-name]] (MSC service thread 1-15) Error during mapping: java.lang.NullPointerException
Below is a snippet from my init() method:
PortletConfig config = getPortletConfig();
PortletContext context = getPortletContext();
PortletRequestDispatcher normalView = context.getRequestDispatcher("/portlet.jsp");
As a temporary workaround, I have moved all getRequestDispatcher() calls to doView(), where they execute without problem. I do not understand why getRequestDispatcher() can locate portlet.jsp when called during doView(), but not when it's called during init().
Am I missing a preceding call of some other method that would resolve this? Is this a known issue?
Thanks for any help.
Getting the request dispatcher in the doView is the only place I've seen it done. I would imagine that it returns null during init because there is no actual request to dispatch.
Typically the init method is used for time-expensive operations that you don't want to incur for each request. This might be something like reading data from a file, or creating a reusable SQL connection.
You should also keep in mind that any portlet state must be kept thread safe. Don't create class or object variables that can only be used by one request at a time. The portlet methods are not inherently thread safe, so you need to make sure that whatever variables a request is interacting with won't be manipulated by another request executing concurrently.
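For illustration, a minimal sketch of that division of labor (the class name, properties path, and settings field are illustrative): one-time, request-independent setup in init(), and the request dispatcher obtained per request in doView().

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.PortletRequestDispatcher;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

public class MyPortlet extends GenericPortlet {

    private Properties settings; // shared, read-only after init(), so thread safe

    @Override
    public void init() throws PortletException {
        // one-time, time-expensive work, e.g. loading configuration from a file
        settings = new Properties();
        try (InputStream in = getPortletContext().getResourceAsStream("/WEB-INF/portlet.properties")) {
            if (in != null) {
                settings.load(in);
            }
        } catch (IOException e) {
            throw new PortletException("Could not load settings", e);
        }
    }

    @Override
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        // per-request work: now there is a request/response pair to dispatch to
        PortletRequestDispatcher normalView = getPortletContext().getRequestDispatcher("/portlet.jsp");
        normalView.include(request, response);
    }
}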
I'm not familiar with Portlets, but the answer should be the same as for Servlets.
The init() method is called exactly once, when your application is initially deployed. There is no active request (no one is asking for anything) or response (no one is going to read the output). Therefore, it is very reasonable for getRequestDispatcher() to return null. In doView(), when you're handling a request and response, it makes sense to ask another resource to generate part (or all) of the response.
To address your question directly, getRequestDispatcher() has no problem locating portlet.jsp from init(); it's the request that's missing. (Where do you expect to see the result of portlet.jsp, anyway?)
If you do want to print some output during initialization, you can try logging it to a file, if your application is set up for that. Or, you can display data on System.out, if you know where the container's console is. (I use this second option quite often with servlets.)
I have an EJB3 application which consists of some EJBs for accessing a DB, exposed via a session bean as a web service.
Now there are two things I need to find out:
1) Is there any way I can stop SQL exceptions from causing the web service to throw a SOAP fault? The transactions are handled by the container, and currently SQL exceptions cause a RollbackException to be thrown, and consequently the transaction to be rolled back (desired behaviour) and the web service to throw a fault (not desired).
2) I wish to extend the web service to be able to take in a list of entities, and the session bean to persist each. However, I want each entity to be persisted in its own transaction, so that if one fails the others are not affected (and again the web service should not fault).
For (1) I have tried to catch the RollbackException, but I assume this is thrown somewhere on another thread, as the catch block is never reached. For (2) I assume I will need to look into user transactions, but firstly I would prefer the container to manage this, and secondly I do not know how to force the use of user transactions.
Thanks.
No, you can do all this with container-managed transactions (and this is definitely preferable, as managing transactions yourself is a pain).
The gist of the solution is to create a second EJB with a local interface only and the transaction semantics you desire. Then your "public" EJB, which the web service is calling directly, calls into this second EJB via its local interface to do the actual work.
Something along the lines of:
public class MyPublicEjb {

    @EJB
    private MyPrivateImpl impl;

    public void doSomething() {
        try {
            impl.doSomething();
        } catch (EJBException e) {
            // the inner transaction rolled back - handle it here
        }
    }
}
I know this looks sort of ugly, but trust me, this is far preferable to directly manipulating transactions.
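As a minimal sketch of what that second, private EJB could look like (the REQUIRES_NEW attribute and the MyEntity parameter are assumptions made to match requirement (2), so each entity is persisted in its own transaction):

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// MyPrivateImpl is the @Local interface the public EJB injects.
@Stateless
public class MyPrivateImplBean implements MyPrivateImpl {

    @PersistenceContext
    private EntityManager em;

    // Each call runs in its own transaction, so one failed entity
    // does not roll back the others already persisted.
    @Override
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void doSomething(MyEntity entity) {
        em.persist(entity);
    }
}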
For (1): Debug your code to find out where the exception is being thrown and what is causing it. Then handle the exception there.
For (2): Wrap each instance with beginTransaction() and commit().
for (each entity) {
    try {
        // begin transaction
        // save entity
        // commit
    } catch (Exception e) {
        // handle the exception, but continue on
    }
}