I'm developing an application that works with a clustered Infinispan cache.
For this I'm using GlobalConfigurationBuilder.defaultClusteredBuilder() at the moment.
When Infinispan is clustered, initializing the first cache always takes about 4-6 seconds to start the JGroups channel and join the cluster. For development this is a bit tedious and unnecessary, since I'm only working on one node.
So my actual question is whether there is a way to prevent Infinispan from forming a cluster, without changing my code.
You can set the join_timeout attribute of the JGroups GMS protocol so that it defaults to 0 when the property is not specified:
<pbcast.GMS join_timeout="${jgroups.join.timeout:0}" view_bundling="true"/>
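For completeness, here is a minimal sketch of how such a custom stack might be wired in while keeping defaultClusteredBuilder(); the file name jgroups-dev.xml is made up and is assumed to contain the GMS line above, and the "configurationFile" transport property is Infinispan's standard way to point at a custom JGroups stack:

import org.infinispan.configuration.global.GlobalConfiguration;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class DevCacheManagerFactory {

    public static DefaultCacheManager create() {
        // jgroups-dev.xml is a hypothetical stack file on the classpath containing the
        // <pbcast.GMS join_timeout="${jgroups.join.timeout:0}" .../> element shown above.
        GlobalConfiguration global = GlobalConfigurationBuilder.defaultClusteredBuilder()
                .transport().defaultTransport()
                .addProperty("configurationFile", "jgroups-dev.xml")
                .build();
        return new DefaultCacheManager(global);
    }
}

With the placeholder defaulting to 0, a single development node starts immediately, while other environments can still raise the timeout with -Djgroups.join.timeout=... on the command line.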
I am using a Kubernetes cluster with Docker. When I deploy the Java services [Spring Boot], some requests get dropped (for a couple of seconds) with the following error:
exception=org.springframework.beans.factory.BeanCreationNotAllowedException: Error creating bean with name 'controller': Singleton bean creation not allowed while singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!), stackTrace=[org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton() at line: 208]
I am already using livenessProbe & readinessProbe.
Java Version: 12
SpringBoot Version: 2.1.5.RELEASE
Hibernate Version: 2.4.3 with Postgres DB
As far as I can tell, it is happening because the application context is being closed while some requests are still executing. Ideally, that should not happen.
Can anyone help here?
The problem is not actually Spring Boot, but rather the way Kubernetes stops pods.
At the moment a pod from your old deployment/replicaset is being terminated (or rather set to the "Terminating" state), two things happen simultaneously:
A) the pod is removed from the service endpoints, so it no longer receives new requests
B) the pod's container gets a SIGTERM, so the app can shut down gracefully
So what you are seeing here is basically active requests that are still being processed when the context gets shut down (as you already found out).
There are (at least) two solutions:
1. In the Kubernetes pod definition:
Kubernetes pods can be configured with a preStop hook that executes a command between A and B.
Depending on your app, a simple "sleep" for a couple of (milli)seconds should be sufficient, giving the app enough time to finish the current requests before shutting down.
There is a nice article from Google that goes into more detail:
https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace
2. In Spring Boot:
You can make the application wait for running tasks to finish when it receives the shutdown signal.
This is (imho) nicely explained here, and a rough sketch follows below:
https://www.baeldung.com/spring-boot-graceful-shutdown
Beware: Kubernetes' default graceful shutdown timeout is 30 seconds, after which the pod is forcefully removed, but as usual you can configure this timeout via terminationGracePeriodSeconds (also described in the Google blog post linked under (1)).
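To make approach (2) concrete, here is a rough sketch for Spring Boot 2.1.x following the pattern from the Baeldung article above (Boot 2.3+ ships this out of the box via server.shutdown=graceful). The class names and the 25-second wait are made up; keep the wait below terminationGracePeriodSeconds:

import java.util.concurrent.Executor;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import org.apache.catalina.connector.Connector;
import org.springframework.boot.web.embedded.tomcat.TomcatConnectorCustomizer;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.ContextClosedEvent;

@Configuration
public class GracefulShutdownConfig {

    @Bean
    public GracefulShutdown gracefulShutdown() {
        return new GracefulShutdown();
    }

    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> tomcatCustomizer(GracefulShutdown shutdown) {
        // register the customizer with the embedded Tomcat so we get hold of the connector
        return factory -> factory.addConnectorCustomizers(shutdown);
    }

    static class GracefulShutdown implements TomcatConnectorCustomizer, ApplicationListener<ContextClosedEvent> {

        private volatile Connector connector;

        @Override
        public void customize(Connector connector) {
            this.connector = connector;
        }

        @Override
        public void onApplicationEvent(ContextClosedEvent event) {
            connector.pause(); // stop accepting new requests
            Executor executor = connector.getProtocolHandler().getExecutor();
            if (executor instanceof ThreadPoolExecutor) {
                ThreadPoolExecutor threadPool = (ThreadPoolExecutor) executor;
                threadPool.shutdown();
                try {
                    // wait for in-flight requests to finish before the context is destroyed
                    if (!threadPool.awaitTermination(25, TimeUnit.SECONDS)) {
                        threadPool.shutdownNow();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }
}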
We are using Apache Ignite for caching, and during testing I came across this error:
java.lang.IllegalStateException: Cache has been closed or destroyed
We have a Spring RESTful client with Ignite embedded inside. Calls come in to update and remove entries in the cache.
The steps that happened are as follows:
1. One instance of the Ignite server is running.
2. One instance of the RESTful client is running on a different server, with Ignite embedded.
3. The Ignite server instance is killed; the client keeps running.
4. The Ignite server is restarted.
5. Any attempt by the client to put a value in the cache leads to the above exception.
6. If the client is restarted, everything works as normal.
Can someone offer some insight into why this is happening? Do I have to handle the event of all nodes leaving and manually evict the cache, or something like that?
Any help is appreciated.
If all servers go down, the client rejoins with a new ID (just as if you had restarted it manually). In this case all existing cache instances are closed and you have to get new ones (use the Ignite.cache(...) method).
There is a ticket to improve this behavior: https://issues.apache.org/jira/browse/IGNITE-2766
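A minimal sketch of that re-acquire pattern (the helper name is made up); the important part is not to keep the old IgniteCache reference around, but to ask Ignite for the cache again after catching the IllegalStateException:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public final class ReconnectAwarePut {

    // Retries a put once after the client has rejoined the cluster with a new ID.
    public static <K, V> void put(Ignite ignite, String cacheName, K key, V value) {
        IgniteCache<K, V> cache = ignite.cache(cacheName);
        try {
            cache.put(key, value);
        } catch (IllegalStateException closedOrDestroyed) {
            // The old proxy was closed on rejoin; fetch a fresh one and retry.
            IgniteCache<K, V> fresh = ignite.cache(cacheName);
            fresh.put(key, value);
        }
    }
}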
We also ran into this problem and we have a work-around. We implemented our own version of SpringCacheManager (ReconnectSafeSpringCacheManager) that wraps the cache objects in reconnect-safe cache proxy objects (ReconnectSafeCacheProxy).
When an IllegalStateException is caught by one of the cache proxies, we tell our cache manager to drop that cache (remove it from the internal caches map) and then we call ReconnectSafeSpringCacheManager.getCache(<cacheName>) which recreates the Ignite cache instance. The proxy replaces its cache reference with the new Ignite cache and then the operation that caused the exception is retried.
Our approach required us to place our cache manager code in the org.apache.ignite.cache.spring package, as there are references to non-public APIs in SpringCacheManager. This isn't the cleanest approach, but it seems to work, and we plan to remove the work-around when IGNITE-2786 is resolved.
I have two contexts on a single Tomcat pointing to the same database. I am using ehcache for second-level caching with Hibernate.
Now, when I do any create/update/delete operation on the database, it is reflected in context1's cache, but it takes almost 15-20 minutes to show up in context2's cache. I can't use a refresh/clear function in context2, as I don't know when to refresh.
How can I refresh context2's Hibernate cache when an update happens through context1?
Also, to cluster the Hibernate cache I need to give an IP address and port number, but in my case both are the same for the two contexts, so I think I can't use Hibernate cache clustering.
My question is about cache replication over RMI using ehcache. Let's imagine I have 3 servers that replicate a cache with each other. At startup I want to load the cache from the other running instances (bootstrap). My concerns are about these topics:
I have in-memory caching on all nodes. I restart node1, and at startup (which I configure to bootstrap synchronously - bootstrapAsynchronously=false) it loads the cache from node2. What happens if node2 suddenly goes down before the cache is fully replicated? Will replication continue from node3 (which also has it loaded)?
If I set up bootstrapping in async mode, will it fire some event indicating that replication has finished and the instance has fully loaded the cache?
The answer to the first part is that the cache will not start.
See http://ehcache.org/documentation/user-guide/rmi-replicated-caching#configuring-bootstrap-from-a-cache-peer :
When a peer comes up, it will be incoherent with other caches. When the bootstrap completes it will be partially coherent. Bootstrap gets the list of keys from a random peer, and then loads those in batches from random peers. If bootstrap fails then the Cache will not start. However if a cache replication operation occurs which is then overwritten by bootstrap there is a chance that the cache could be inconsistent.
I run 2 Tomcat instances on the same host. Each instance runs the same web application, which tries to replicate some ehcache caches via RMI. I use the automatic peer discovery configuration in ehcache, so I don't have to explicitly define which hosts and which caches I want to replicate. The ehcache instances do not manage to find each other and communicate:
DEBUG (RMIBootstrapCacheLoader.java:211) - cache peers: []
DEBUG (RMIBootstrapCacheLoader.java:133) - Empty list of cache peers for cache org.hibernate.cache.UpdateTimestampsCache. No cache peer to bootstrap from.
If I try the same thing but this time run each Tomcat instance on a separate host (box), then everything works like a charm.
Am I doing something wrong, or isn't autodiscovery via multicast possible when the instances are on the same host?
My configuration uses the defaults presented in the RMI Distributed Caching documentation:
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
multicastGroupPort=4446, timeToLive=32"/>
<cacheManagerPeerListenerFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
properties="port=40001, socketTimeoutMillis=2000"/>
And inside each cache region I want to replicate I have:
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
properties="asynchronousReplicationIntervalMillis=500 " />
<bootstrapCacheLoaderFactory
class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory" />
Thanks.
Am I doing something wrong, or isn't autodiscovery via multicast possible when the instances are on the same host?
While I'm not really familiar with ehcache, I'd think this should be possible, and they do in fact provide an example doing something similar at least (multiple nodes per host, though only one instance): see the section Full Example in the RMI Distributed Caching documentation you mentioned.
Usually, though, you cannot open the same TCP port (40001 here) more than once per host; it is bound to the first application/service that allocates it (there do exist things like TCP Port Sharing on Windows, for example, but you'd have to specifically account for that).
Consequently, if you are really using identical default configurations, the second Tomcat instance trying to allocate TCP port 40001 will fail to do so. Of course, this should manifest itself somewhere earlier in the Tomcat logs; have you had a thorough look already?
Just using another free port for one Tomcat instance should solve the issue; you can see this in action in the ehcache.xml files for the Full Example mentioned above: the port number is increased one by one from 40001 up to 40006 per node.