We are using Apache Ignite for caching, and during testing I came across this error:
java.lang.IllegalStateException: Cache has been closed or destroyed
We have a Spring RESTful client with Ignite embedded inside. Calls come in to update and remove entries from the cache.
The steps that happened are as follows:
One instance of Ignite server running.
One instance of the RESTful client running on a different server with Ignite embedded.
Killed the Ignite server instance; the client kept running.
Ignite server restarted.
Any attempt by the client to put a value in the cache leads to the above exception.
If the client is restarted, everything works as normal.
Can someone offer some insight into why this is happening? Do I have to handle the event of all nodes leaving and manually evict the cache, or something similar?
Any help is appreciated.
If all servers go down, the client rejoins with a new ID (just as if you had restarted it manually). In this case all existing cache instances are closed and you have to get new ones (use the Ignite.cache(...) method).
There is a ticket to improve this behavior: https://issues.apache.org/jira/browse/IGNITE-2766
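A minimal sketch of what re-acquiring the cache can look like (class and cache names are illustrative, only put() is shown, and this is not from the original post):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class ReacquiringCacheWrapper {

    private final Ignite ignite;
    private final String cacheName;
    private volatile IgniteCache<String, String> cache;

    public ReacquiringCacheWrapper(Ignite ignite, String cacheName) {
        this.ignite = ignite;
        this.cacheName = cacheName;
        this.cache = ignite.cache(cacheName);
    }

    public void put(String key, String value) {
        try {
            cache.put(key, value);
        } catch (IllegalStateException e) {
            // The old proxy was closed when the client rejoined with a new ID,
            // so ask Ignite for a fresh cache instance and retry once.
            cache = ignite.cache(cacheName);
            cache.put(key, value);
        }
    }
}
```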
We also ran into this problem and we have a work-around. We implemented our own version of SpringCacheManager (ReconnectSafeSpringCacheManager) that wraps the cache objects in reconnect-safe cache proxy objects (ReconnectSafeCacheProxy).
When an IllegalStateException is caught by one of the cache proxies, we tell our cache manager to drop that cache (remove it from the internal caches map) and then we call ReconnectSafeSpringCacheManager.getCache(<cacheName>) which recreates the Ignite cache instance. The proxy replaces its cache reference with the new Ignite cache and then the operation that caused the exception is retried.
Our approach required us to place our cache manager code in the org.apache.ignite.cache.spring package, as there are references to non-public APIs in SpringCacheManager. This isn't the cleanest approach, but it seems to work, and we plan to remove the workaround when IGNITE-2786 is resolved.
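A simplified illustration of that retry idea (the real ReconnectSafeCacheProxy delegates the full Spring Cache interface; the minimal manager interface and the dropCache helper below are illustrative, not the actual implementation):

```java
import org.springframework.cache.Cache;

// Minimal view of the custom manager described above (illustrative).
interface ReconnectSafeCacheManager {
    Cache getCache(String name); // recreates the Ignite-backed cache if it was dropped
    void dropCache(String name); // removes a stale cache from the internal map
}

class ReconnectSafeCacheProxy {

    private final String cacheName;
    private final ReconnectSafeCacheManager cacheManager;
    private volatile Cache delegate;

    ReconnectSafeCacheProxy(String cacheName, ReconnectSafeCacheManager cacheManager) {
        this.cacheName = cacheName;
        this.cacheManager = cacheManager;
        this.delegate = cacheManager.getCache(cacheName);
    }

    void put(Object key, Object value) {
        try {
            delegate.put(key, value);
        } catch (IllegalStateException e) {
            // The underlying Ignite cache was closed after a reconnect:
            // drop the stale cache, let the manager recreate it, then retry.
            cacheManager.dropCache(cacheName);
            delegate = cacheManager.getCache(cacheName);
            delegate.put(key, value);
        }
    }
}
```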
I am using a Kubernetes cluster with Docker. When I deploy the Java services [Spring Boot], some requests get dropped (for a couple of seconds) with the following error:
exception=org.springframework.beans.factory.BeanCreationNotAllowedException: Error creating bean with name 'controller': Singleton bean creation not allowed while singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!), stackTrace=[org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton() at line: 208]
I am already using livenessProbe & readinessProbe.
Java Version: 12
SpringBoot Version: 2.1.5.RELEASE
Hibernate Version: 2.4.3 with Postgres DB
As far as I can tell, it is happening because the application context is being closed while some requests are still executing. Ideally, that should not happen.
Can anyone help here?
The problem is not actually Spring Boot, but rather the way Kubernetes stops pods.
At the moment a pod from your old Deployment/ReplicaSet is being terminated (or rather set to the "Terminating" state), two things happen simultaneously:
A) The pod is removed from the service endpoints, so it no longer receives new requests.
B) The pod's container gets a SIGTERM, so the app can shut down gracefully.
So what you are seeing here is basically active requests that are still being processed when the context gets shut down (as you already found out).
There are (at least) two solutions:
1. In the Kubernetes pod definition:
Kubernetes pods can be configured with a preStop hook that executes a command between A and B.
Depending on your app, a simple "sleep" for a couple of (milli)seconds should be sufficient, leaving the app enough time to finish the current requests before shutting down.
There is a nice doc from Google that goes into more detail:
https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace
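A minimal pod-spec sketch of that preStop idea (names, image, sleep duration, and grace period are illustrative, and the container image must provide a shell with sleep):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-springboot-app              # illustrative
spec:
  terminationGracePeriodSeconds: 60    # default is 30s, see the note below
  containers:
    - name: app
      image: registry.example.com/my-springboot-app:latest   # illustrative
      lifecycle:
        preStop:
          exec:
            # Give endpoint removal (A) time to propagate before SIGTERM (B)
            # reaches the JVM, so no new requests arrive mid-shutdown.
            command: ["sh", "-c", "sleep 10"]
```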
2. In Spring Boot:
You can make the application wait for running tasks to finish when it receives the shutdown signal.
This is (IMHO) nicely explained here:
https://www.baeldung.com/spring-boot-graceful-shutdown
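Spring Boot 2.3 and later have this built in via the server.shutdown=graceful property; for 2.1.x, as in the question, a sketch along the lines of the linked article looks roughly like this (the timeout and class names are illustrative):

```java
import java.util.concurrent.Executor;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import org.apache.catalina.connector.Connector;
import org.springframework.boot.web.embedded.tomcat.TomcatConnectorCustomizer;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.ContextClosedEvent;
import org.springframework.stereotype.Component;

/** Pauses the Tomcat connector on shutdown and waits for in-flight requests to finish. */
@Component
public class GracefulShutdown implements TomcatConnectorCustomizer, ApplicationListener<ContextClosedEvent> {

    private static final int TIMEOUT_SECONDS = 20; // illustrative; keep it below terminationGracePeriodSeconds

    private volatile Connector connector;

    @Override
    public void customize(Connector connector) {
        this.connector = connector;
    }

    @Override
    public void onApplicationEvent(ContextClosedEvent event) {
        if (connector == null) {
            return;
        }
        connector.pause(); // stop accepting new requests
        Executor executor = connector.getProtocolHandler().getExecutor();
        if (executor instanceof ThreadPoolExecutor) {
            ThreadPoolExecutor pool = (ThreadPoolExecutor) executor;
            pool.shutdown();
            try {
                if (!pool.awaitTermination(TIMEOUT_SECONDS, TimeUnit.SECONDS)) {
                    pool.shutdownNow(); // in-flight requests did not finish in time
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}

/** Registers the customizer with the embedded Tomcat factory. */
@Configuration
class GracefulShutdownConfig {

    @Bean
    WebServerFactoryCustomizer<TomcatServletWebServerFactory> gracefulShutdownCustomizer(GracefulShutdown shutdown) {
        return factory -> factory.addConnectorCustomizers(shutdown);
    }
}
```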
Beware: the Kubernetes default graceful shutdown timeout is 30 seconds, after which the pod is forcefully removed. As usual, you can configure this timeout via terminationGracePeriodSeconds (also described in the Google blog post linked in (1)).
I am trying to implement sticky-session-based load balancing across two Tomcat instances using Hazelcast Tomcat Web Session Replication. For testing purposes, I have deployed the application on two different Tomcat instances, and the load balancing is handled through Apache HTTPD. The jvmRoute parameters and the mod_proxy settings are fine, and the load balancing has no issues.
The problem is with the session replication across the two instances. When the first server (currently serving the request) goes down, the request is sent to the second server. The Hazelcast cluster identifies that the session has failed over and copies the session under a new session id (suffixed with the jvmRoute parameter of the second server), as described in the Hazelcast documentation: https://github.com/hazelcast/hazelcast-tomcat-sessionmanager#sticky-sessions-and-tomcat. However, for the failed-over request, the session attributes are updated in the older session (with the failed-over jvmRoute) and are not replicated, resulting in the failure of the subsequent request.
I have gone through the documentation but am unable to find a resolution at this point. I am sure I am missing some setting, as this would be a basic configuration for a failover scenario.
Can someone help me out? Please let me know if you need any additional details.
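For reference, the per-context setup from the linked README looks roughly like this in conf/context.xml (a sketch with illustrative attribute values, not the actual configuration from this environment):

```xml
<Context>
  <Listener className="com.hazelcast.session.P2PLifecycleListener"/>
  <Manager className="com.hazelcast.session.HazelcastSessionManager"
           sticky="true"
           deferredWrite="true"
           mapName="tomcat-sessions"/>   <!-- illustrative map name -->
</Context>
```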
[UPDATE]
After tracing the flow, I was able to determine that handleTomcatSessionChange in com.hazelcast.session.HazelcastSessionChangeValve is being called correctly. The request.changeSessionId(newSessionId) call happens, and after this, if I display the requested session id, the value is updated. However, the session id itself is not updated, which results in the older id being returned by a request.getSession().getId() call.
I am unable to configure/change the properties of a Map (declared as part of the Hazelcast config in Spring) after the Hazelcast instance has started. I am using Hazelcast integrated with Spring as the Hibernate second-level cache. I am trying to configure the properties of the map (such as TTL) in an init method (annotated with @PostConstruct) that is called during Spring bean initialization.
There is not enough documentation; if there is, please point me to it.
Meanwhile, I went through this post and found this: Hazelcast MapStoreConfig ignored
But how does the Management Center change the config? Will it recreate a new instance again?
Is a Hazelcast instance lightweight, unlike a session factory? I assume not.
Please share your thoughts.
This is not yet supported. JCache is the only data structure that supports on-the-fly configuration at the moment.
However, you will most probably be able to destroy a proxy (a DistributedObject such as IMap, IQueue, ...), reconfigure it, and recreate it. At the time of recreation you must make sure that every node sees the same configuration, for example by storing the configuration itself inside an IMap or something similar. You'll have to do some wrapping on your own.
PS: This is not officially supported and is an implementation detail that might change in later versions!
PPS: This feature has been on the roadmap for quite some time but hasn't made it into a release version yet; however, it is still expected to be fully supported at some point in the future.
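A minimal sketch of that destroy/reconfigure/recreate idea (Hazelcast 3.x package names; the map name and TTL are illustrative, and as noted above this relies on unsupported behavior, so the same MapConfig must be applied on every node):

```java
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class MapReconfigurer {

    /** Destroys the map proxy, registers a new MapConfig with the desired TTL, and recreates the map. */
    public static IMap<Object, Object> recreateWithTtl(HazelcastInstance hz, String mapName, int ttlSeconds) {
        IMap<Object, Object> map = hz.getMap(mapName);
        map.destroy(); // drops the distributed object and its data

        // Register the new configuration before the map is touched again.
        hz.getConfig().addMapConfig(new MapConfig(mapName).setTimeToLiveSeconds(ttlSeconds));

        // The next getMap() call recreates the map using the new configuration.
        return hz.getMap(mapName);
    }
}
```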
I have a requirement to cache XML bean Java objects by reading the XML from a database. I am using an in-memory HashMap to maintain my Java objects. I am using Spring for DI and the WebLogic 11g app server.
Can you please suggest a mechanism to reload the cache when there is an update to the XML files?
You can make use of the WebLogic p13n cache for this purpose, instead of using your own HashMap to cache the Java objects. You will have to configure the p13n-cache-config.xml file, which contains the TTL, max entries, etc. for your cache.
Coming to the first point, the cache will be reloaded automatically once the TTL has expired. For manually clearing the cache, you can implement something like a servlet, which you can hit directly from your browser (and restrict to a particular URL). In that servlet, clear the cache you want to reload.
The WebLogic p13n cache also provides a method for cluster-aware cache clearing, in case you need it. If you want to use your own HashMap for caching, provide an update method for that HashMap that clears the Java objects you want reloaded and then calls the cache creation method.
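A minimal sketch of that HashMap variant with a servlet-triggered reload (class names, the URL mapping, and the database loader are illustrative):

```java
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Simple in-memory cache of the XML-backed beans with an explicit reload hook. */
public class XmlBeanCache {

    private final Map<String, Object> cache = new ConcurrentHashMap<String, Object>();

    public Object get(String key) {
        return cache.get(key);
    }

    /** Called from the reload servlet: drop everything and repopulate from the database. */
    public synchronized void reload() {
        Map<String, Object> fresh = loadBeansFromDatabase();
        cache.clear();
        cache.putAll(fresh);
    }

    private Map<String, Object> loadBeansFromDatabase() {
        // Read the XML rows from the database and unmarshal them into bean objects here.
        return Collections.emptyMap();
    }
}
```

And the servlet that triggers the reload, mapped in web.xml to a restricted URL:

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.context.support.WebApplicationContextUtils;

/** Hit this URL from the browser to refresh the cache on demand. */
public class CacheReloadServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Look up the Spring-managed cache bean and clear/repopulate it.
        XmlBeanCache cache = WebApplicationContextUtils
                .getRequiredWebApplicationContext(getServletContext())
                .getBean(XmlBeanCache.class);
        cache.reload();
        resp.getWriter().println("cache reloaded");
    }
}
```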
In a J2EE application, we are using EJB 2 on WebLogic.
To avoid losing time building the InitialContext and looking up the EJB home interface, I'm considering the Service Locator pattern.
But after searching the web, I found that even though this pattern is often recommended for InitialContext caching, there are some negative opinions about EJB home caching.
Questions:
Is it safe to cache the EJB Home lookup result?
What will happen if one of my cluster nodes is no longer working?
What will happen if I install a new version of the EJB without refreshing the service locator's cache?
Is it safe to cache the EJB Home lookup result?
Yes.
What will happen if one of my cluster nodes is no longer working?
If your server is configured for clustering/WLM, then the request should silently fail over to another server in the cluster. The routing information is encoded in the stub IOR.
What will happen if I install a new version of the EJB without refreshing the service locator's cache?
Assuming you update the bean and not the component or home interfaces, everything continues to work. EJBHome is effectively a stateless session bean, so requests can continue to be served from the same server if it is available, or from a different server in the cluster if not.
Note that @EJB injection in EJB 3 effectively encourages home caching. (Though, admittedly, it also allows SFSB caching even though that is clearly incorrect, so perhaps @EJB isn't the best support for my claim :-)).
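A minimal sketch of a home-caching service locator for EJB 2 (JNDI names and the simple synchronization are illustrative; see also the note about cache invalidation at the end):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.ejb.EJBHome;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.rmi.PortableRemoteObject;

/** Caches the InitialContext and looked-up EJB home interfaces. */
public final class ServiceLocator {

    private static final ServiceLocator INSTANCE = new ServiceLocator();

    private final InitialContext context;
    private final Map<String, EJBHome> homeCache = new ConcurrentHashMap<String, EJBHome>();

    private ServiceLocator() {
        try {
            this.context = new InitialContext(); // built once, reused for every lookup
        } catch (NamingException e) {
            throw new IllegalStateException("Cannot create InitialContext", e);
        }
    }

    public static ServiceLocator getInstance() {
        return INSTANCE;
    }

    /** Looks up and caches the home interface bound under the given JNDI name. */
    public EJBHome getHome(String jndiName, Class<? extends EJBHome> homeClass) throws NamingException {
        EJBHome home = homeCache.get(jndiName);
        if (home == null) {
            Object ref = context.lookup(jndiName);
            home = (EJBHome) PortableRemoteObject.narrow(ref, homeClass);
            homeCache.put(jndiName, home);
        }
        return home;
    }

    /** Allows a stale home to be dropped after an EJB redeployment. */
    public void invalidate(String jndiName) {
        homeCache.remove(jndiName);
    }
}
```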
Is it safe to cache the EJB Home lookup result?
What will happen if one of my cluster nodes is no longer working?
IMHO the purpose of a ServiceLocator in J2EE is to cache the EJB Home and reduce expensive JNDI lookups. It is safe on WebLogic, since by default the EJB Home is load balanced across the cluster, which automatically allows failover to the next server.
This behavior is controlled by the home-is-clusterable element in weblogic-ejb-jar.xml, documented here, which defaults to true.
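For reference, a sketch of where that element lives for a stateless session bean (the bean name is illustrative; check the WebLogic documentation for the exact schema of your version):

```xml
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <ejb-name>MyServiceBean</ejb-name>  <!-- illustrative -->
    <stateless-session-descriptor>
      <stateless-clustering>
        <home-is-clusterable>true</home-is-clusterable>    <!-- default -->
        <home-load-algorithm>round-robin</home-load-algorithm>
      </stateless-clustering>
    </stateless-session-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```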
What will happen if I install a new version of the EJB without refreshing the service locator's cache?
I haven't tried such a change myself. However, I'm guessing that as part of your build/deploy, your Service Locator class would also get redeployed along with the change to your EJBs, and would thus do a fresh lookup?
If your client is unaffected during the changes to the EJB, then the cached EJBHome will return a stale reference when you call a method on it. So you will have to force the client to be refreshed.
What will happen if I install a new version of the EJB without refreshing the service locator's cache?
Once your application goes live, new installations should become much less frequent than requests for an EJBHome.
So your focus and concern should lie with the frequent live operations, rather than the transient development operations.
Factor into your design the ability to invalidate caches when necessary.