I'm very new to Hibernate Search and trying to integrate it with my application.
I'm facing a memory leak issue (threads going into a wait/park state).
My Hibernate Search configuration is minimal and annotation driven. I'm using:
@Indexed(index = "<index_name>")
@IndexedEmbedded
and
@Field(name = "title", store = Store.YES, analyzer = @Analyzer(definition = "standardAnalyzer"))
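For context, here's a minimal sketch of how these annotations fit together on an entity (Hibernate Search 5 API; the Category name is taken from the shutdown log below, and the "standardAnalyzer" definition is assumed to be declared via @AnalyzerDef elsewhere):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

import org.hibernate.search.annotations.Analyzer;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;
import org.hibernate.search.annotations.IndexedEmbedded;
import org.hibernate.search.annotations.Store;

// Sketch only: "Category" comes from the shutdown log below; the
// "standardAnalyzer" definition must exist as an @AnalyzerDef elsewhere.
@Entity
@Indexed(index = "Category")
public class Category {

    @Id
    private Long id;

    @Field(name = "title", store = Store.YES,
           analyzer = @Analyzer(definition = "standardAnalyzer"))
    private String title;

    @ManyToOne
    @IndexedEmbedded(depth = 1) // depth bounds the self-reference
    private Category parent;
}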
Hibernate Search properties:
<property name="hibernate.search.default.indexBase" value="../lucene/indexes" />
<property name="hibernate.search.default.directory_provider" value="filesystem" />
<property name="hibernate.search.default.exclusive_index_use"
value="false" />
Using Tomcat 8 to deploy the application.
My main problem: VisualVM clearly shows that Hibernate Search creates a sync consumer thread for every index, and it does so for each and every call I make to my server (in my case, 2 new threads are spawned and left in a parked state on every call). Eventually the number of threads increases until my server becomes unresponsive.
On server shutdown, I get this error:
09-Aug-2017 17:15:28.151 WARNING [localhost-startStop-2] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [procurewise] appears to have started a thread named [Hibernate Search sync consumer thread for index Category] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:338)
org.hibernate.search.backend.impl.lucene.SyncWorkProcessor.parkCurrentThread(SyncWorkProcessor.java:175)
org.hibernate.search.backend.impl.lucene.SyncWorkProcessor.access$300(SyncWorkProcessor.java:35)
org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.run(SyncWorkProcessor.java:147)
java.lang.Thread.run(Thread.java:745)
I'm using Spring, Hibernate, JPA, AOP, and JAX-RS (Jersey).
Environment Details:
Hibernate version 5.2.10.Final
Hibernate Search version 5.7.0.Final
Spring version 4.3.6.RELEASE
JPA 2.1
JAVA 8
Jersey 1.8
This thread should stop automatically when Hibernate Search shuts down, which itself happens when Hibernate ORM shuts down.
I see two possible reasons for this warning: either Tomcat checked the threads while Hibernate ORM was still shutting down, or you didn't stop Hibernate ORM at all.
I would suggest checking your shutdown process: see whether Hibernate ORM is closed correctly (normally via org.hibernate.SessionFactory.close(), but I'd expect Spring to take care of that itself), and if it is, see whether it's closed synchronously, so that Tomcat only performs its checks after all the closing is done.
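If Spring manages the EntityManagerFactory, closing the application context should do this for you. If you bootstrap JPA yourself, you can close it explicitly from a ServletContextListener; a minimal sketch, with a placeholder persistence unit name:

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Sketch only: close the EntityManagerFactory synchronously on shutdown so
// Hibernate Search can stop its sync consumer threads before Tomcat runs
// its leak checks. "my-persistence-unit" is a placeholder name.
@WebListener
public class JpaShutdownListener implements ServletContextListener {

    private static EntityManagerFactory emf;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        emf = Persistence.createEntityManagerFactory("my-persistence-unit");
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        if (emf != null && emf.isOpen()) {
            emf.close(); // blocks until Hibernate ORM (and Search) shut down
        }
    }
}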
Related
I am using a Kubernetes cluster with Docker. When I deploy the Java services [Spring Boot], some requests get dropped (for a couple of seconds) with the following error.
exception=org.springframework.beans.factory.BeanCreationNotAllowedException: Error creating bean with name 'controller': Singleton bean creation not allowed while singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!), stackTrace=[org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton() at line: 208]
I am already using livenessProbe & readinessProbe.
Java Version: 12
SpringBoot Version: 2.1.5.RELEASE
Hibernate Version: 2.4.3 with Postgres DB
As far as I know, this happens because the application context is closed while some requests are still executing. Ideally, that should not happen.
Can anyone help here?
The problem is not actually Spring Boot, but rather the way Kubernetes stops pods.
At the moment a pod from your old deployment/replicaset is terminated (or rather, set to the "Terminating" state), two things happen simultaneously:
A) the pod is removed from the service endpoints, so it no longer receives new requests
B) the pod's container gets a SIGTERM, so apps can shut down gracefully
So what you are seeing here is basically active requests that are still being processed when the context gets shut down (as you already found out).
There are (at least) two solutions:
1. In the Kubernetes pod definition:
Kubernetes pods can be configured with a pre-stop hook that executes a command between A and B.
Depending on your app, a simple "sleep" for a couple of (milli)seconds should be sufficient, leaving the app enough time to finish the current requests before shutting down (see the sketch after this list).
There's a good write-up from Google that goes into more detail:
https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace
2. In Spring Boot:
You can make the application wait to finish in-flight tasks when it receives the shutdown signal.
This is (IMHO) nicely explained here:
https://www.baeldung.com/spring-boot-graceful-shutdown
Beware: the Kubernetes default graceful shutdown timeout is 30 seconds, after which the pod is forcefully removed; as usual, you can configure this timeout via terminationGracePeriodSeconds (also described in the Google blog post from option 1).
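As an illustration of option 1, a minimal sketch of a pre-stop sleep hook in the pod spec (container name, image, and sleep duration are placeholders; note that the built-in variant of option 2, server.shutdown=graceful, only exists since Spring Boot 2.3, so on 2.1.x you need the manual approach from the Baeldung article):

# Sketch only: delay SIGTERM so in-flight requests can finish after the
# pod has been removed from the Service endpoints.
spec:
  terminationGracePeriodSeconds: 60   # default is 30s; raise if shutdown is slow
  containers:
    - name: my-app                    # placeholder container name
      image: my-app:latest            # placeholder image
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 10"]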
Our current legacy web app creates threads that are not managed by the application server container. I have to modify it to follow Java EE standards for multi-threading.
My web app works fine on Tomcat but fails on WebSphere.
Error on WebSphere:
... ... Caused by: javax.naming.ConfigurationException: A JNDI operation on a "java:" name cannot be completed because the server runtime is not able to associate the operation's thread with any J2EE application component. This condition can occur when the JNDI client using the "java:" name is not executed on the thread of a server application request. Make sure that a J2EE application does not execute JNDI operations on "java:" names within static code blocks or in threads created by that J2EE application. Such code does not necessarily run on the thread of a server application request and therefore is not supported by JNDI operations on "java:" names.
at com.ibm.ws.naming.java.javaURLContextImpl.throwExceptionIfDefaultJavaNS(javaURLContextImpl.java:534) ~[com.ibm.ws.runtime.jar:?]
at com.ibm.ws.naming.java.javaURLContextImpl.throwConfigurationExceptionWithDefaultJavaNS(javaURLContextImpl.java:564) ~[com.ibm.ws.runtime.jar:?]
at com.ibm.ws.naming.java.javaURLContextImpl.lookupExt(javaURLContextImpl.java:485) ~[com.ibm.ws.runtime.jar:?]
at com.ibm.ws.naming.java.javaURLContextRoot.lookupExt(javaURLContextRoot.java:485) ~[com.ibm.ws.runtime.jar:?]
To resolve this issue, I am referring to the Concurrency Utilities for Java EE. I found similar descriptions and examples for ManagedExecutorService and ManagedThreadFactory.
ManagedExecutorService: A managed executor service is used by applications to execute submitted tasks asynchronously. Tasks are executed on threads that are started and managed by the container. The context of the container is propagated to the thread executing the task.
ManagedThreadFactory: A managed thread factory is used by applications to create managed threads. The threads are started and managed by the container. The context of the container is propagated to the thread executing the task. This object can also be used to provide custom factories for specific use cases (with custom Threads) and, for example, set specific/proprietary properties to these objects.
Which one is preferred under which conditions, and why?
I solved the issue by using ManagedExecutorService.
The ExecutorService framework indeed has more ways to deal with threads, while ManagedThreadFactory can only call the newThread() method.
The WebSphere issue can be resolved with either ManagedExecutorService or ManagedThreadFactory; both work. But for further thread processing, ManagedExecutorService turns out to be a lot better.
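For reference, a minimal sketch of how the ManagedExecutorService ends up being used (java:comp/DefaultManagedExecutorService is the spec-defined default resource; the surrounding class and task are illustrative):

import java.util.concurrent.Future;

import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;

// Sketch only: must live in a container-managed component (e.g. a CDI bean
// or EJB) for @Resource injection to happen.
public class ReportService {

    @Resource(lookup = "java:comp/DefaultManagedExecutorService")
    private ManagedExecutorService executor;

    public Future<String> generateAsync() {
        // The task runs on a container-managed thread, so JNDI lookups on
        // "java:" names keep working, unlike with new Thread().
        return executor.submit(() -> "report content");
    }
}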
Now, this solution causes the same web app to fail on Tomcat with a JNDI naming exception. As per my research, container-based concurrency is supported by the TomEE server but not by plain Tomcat, so we have to use a routing mechanism to switch between code paths depending on the underlying application server.
I have a Spring Boot application that uses a JPA query. The same query, when executed directly on the live Oracle DB, returns results in some 20-40 ms. When executed through the application, however, it takes anywhere from 1-2 seconds to 50-60 seconds.
I want to understand why it behaves so unpredictably. We suspected it could be the limited number of threads in the pool, but after isolating the application from external use, a single user still saw the same behavior.
The query should execute consistently fast.
I wanted to know the possible reasons behind this behavior.
It could really be anything, e.g. an unreliable network, contended database resources, JDBC driver misconfiguration, or JVM GC pauses. Try to establish where the problem is: is it the Java client, or is it the database server that is taking the time when the problem occurs?
If you suspect the database, it would be best to trace the connection and SQL query on the database server side. This will give you the most information, e.g. the query execution plan. Each database has its own tools; the Oracle docs, for example, have an entire chapter on Performing Application Tracing.
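One way to establish this from the Java side is to time the same SQL over plain JDBC, bypassing JPA entirely; a rough sketch with placeholder connection details and query:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Rough timing sketch: if plain JDBC is consistently fast while the JPA call
// is slow, look at the entity mapping/fetching; if both are slow, look at the
// network or the database. Connection details and query are placeholders.
public class QueryTiming {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//db-host:1521/SERVICE", "user", "password");
             PreparedStatement ps = con.prepareStatement(
                "SELECT * FROM some_table WHERE id = ?")) {
            ps.setLong(1, 42L);
            long start = System.nanoTime();
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) { /* drain the result set */ }
            }
            System.out.printf("query took %d ms%n",
                    (System.nanoTime() - start) / 1_000_000);
        }
    }
}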
One possible reason could be your entity relationships.
Try enabling Hibernate statistics for more detail. You can enable them as follows:
<persistence>
    <persistence-unit name="my-persistence-unit">
        ...
        <properties>
            <property name="hibernate.generate_statistics" value="true" />
            ...
        </properties>
    </persistence-unit>
</persistence>
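With statistics enabled, the timings show up in the log; they can also be read programmatically. A small sketch, assuming you can unwrap the Hibernate SessionFactory from your EntityManagerFactory:

import javax.persistence.EntityManagerFactory;

import org.hibernate.SessionFactory;
import org.hibernate.stat.Statistics;

// Sketch only: inspect the query timings collected by
// hibernate.generate_statistics.
public class StatsDump {
    public static void dump(EntityManagerFactory emf) {
        Statistics stats = emf.unwrap(SessionFactory.class).getStatistics();
        System.out.println("slowest query: " + stats.getQueryExecutionMaxTimeQueryString());
        System.out.println("max time (ms): " + stats.getQueryExecutionMaxTime());
        System.out.println("query count:   " + stats.getQueryExecutionCount());
    }
}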
I am using EclipseLink as a JPA implementation and am connected to a DB running on "jdbc:XXX://localhost:35001/". Is there a way I can track all the SQL calls? I am running this inside a Java project in Eclipse on my local machine.
Thanks
Several options you can try:
You could try using a proxy like P6Spy, as mentioned by Andreas, or an alternative like log4jdbc. This can be useful when you are trying to trace calls from multiple clients, since the proxy can intercept the calls from them all.
However, for your case I would suggest using the built-in logging facilities of EclipseLink. In EclipseLink you can configure logging of the statements via entries in the persistence.xml, as shown below:
<property name="eclipselink.logging.level.sql" value="FINE"/>
<property name="eclipselink.logging.parameters" value="true"/>
I would suggest that after making these changes you undeploy the application, stop and restart the application server, and only then rebuild and redeploy the application. I have seen more than a few instances where the logging does not start without going through this entire cycle.
The last option would be an SQL trace. Depending on your database backend, you might be able to run a profiler or trace; SQL Server, for example, allows you to trace it. You can then view all SQL executed against the database. This is probably overkill in your scenario, as it will log all activity unless you configure filters.
Similar to my question here, Spring Tomcat C3P0PooledConnectionPoolManager creates a memory leak, I wish to understand why Spring, or C3P0 itself, is not cleaning up the threads it creates on shutdown.
I get the following log message in Tomcat:
SEVERE: The web application [/ul-xtrain] appears to have started a thread named [C3P0PooledConnectionPoolManager[identityToken->2wpukr9b7ohfj11xtbfft|725548e3]-AdminTaskTimer] but has failed to stop it. This is very likely to create a memory leak.
I am not interested in a solution, as I already have a working solution based on these threads:
Hibernate4 + c3p0 + Derby - Memory Leak on Tomcat stop or web application reload
Tomcat Guice/JDBC Memory Leak
To prevent a memory leak, the JDBC Driver has been forcibly unregistered
I just wish to understand why Spring is not closing it. Thank you.