Ehcache replicated cache RMI bootstrap - java

My question is about cache replication over RMI using Ehcache. Imagine I have 3 servers that replicate a cache with each other. At startup I want to load the cache from the other running instances (bootstrap). My concerns are about these topics:
I have in-memory caching on all nodes. I restart node1, and at startup (which I configure to bootstrap synchronously, i.e. bootstrapAsynchronously=false) it loads the cache from node2. What happens if node2 suddenly goes down before the cache is fully replicated? Will replication continue from node3 (which also has it loaded)?
If I set up bootstrapping in asynchronous mode, will it fire some event indicating that replication has finished and the instance has fully loaded the cache?

The answer to the first part is that the cache will not start.
See http://ehcache.org/documentation/user-guide/rmi-replicated-caching#configuring-bootstrap-from-a-cache-peer :
When a peer comes up, it will be incoherent with other caches. When the bootstrap completes it will be partially coherent. Bootstrap gets the list of keys from a random peer, and then loads those in batches from random peers. If bootstrap fails then the Cache will not start. However if a cache replication operation occurs which is then overwritten by bootstrap there is a chance that the cache could be inconsistent.
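
For reference, a minimal ehcache.xml fragment for a cache bootstrapped synchronously from its RMI peers might look like the sketch below (cache name, sizing, and TTL are placeholders):

<cache name="sampleReplicatedCache"
       maxEntriesLocalHeap="10000"
       eternal="false"
       timeToLiveSeconds="600">
    <!-- Replicates puts/updates/removes to the other RMI peers -->
    <cacheEventListenerFactory
        class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
        properties="replicateAsynchronously=true"/>
    <!-- Loads the cache from a running peer at startup; synchronous here,
         so a bootstrap failure prevents this cache from starting -->
    <bootstrapCacheLoaderFactory
        class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"
        properties="bootstrapAsynchronously=false, maximumChunkSizeBytes=5000000"/>
</cache>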

Related

Distributed Infinispan Cache as Hibernate L2 Cache Issue

I would like to create an application where I would use the Hibernate L2 cache to avoid unnecessarily hitting the database for every request.
In my application, over 80% of the operations would be reads and less than 20% would be creates/updates/deletes, so I think using the Hibernate L2 cache would be beneficial. However, since we are going to scale the application horizontally, we would like to use Infinispan as Hibernate's L2 cache.
However, there are several questions we are uncertain of.
If I understand correctly, the Hibernate L2 cache works by updating the cache whenever there is a create/update/delete operation, or when a query has not been run previously. On a multi-server setup connecting to the same database, how do multiple update operations work in such an environment? Two application servers may update the same entity simultaneously, each to different data; given the network I/O delay, how can Hibernate know which data should be cached and synced, and which should not?
It depends on what kind of cache you are using. An invalidation cache, which I would suggest here, just invalidates cache entries that have become stale. A replicating cache would replicate the changes to every node in the cluster. Hibernate simply asks the cache implementation for a cache entry, and if the cache returns one, it uses that and avoids accessing the database. Whether the entry can be stale, and how much blocking is involved in the lookup, depends on the transaction configuration of the cache.
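
As a rough sketch of wiring this up, the following enables the second-level cache with Infinispan as the region factory. The persistence unit name is a placeholder, and the region factory class shown is the Hibernate 4.x-era one; newer integrations use a different class name:

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class L2CacheBootstrap {
    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        // Turn on the second-level cache and plug in Infinispan as the region factory.
        props.put("hibernate.cache.use_second_level_cache", "true");
        // Infinispan's default entity cache configuration is an invalidation cache,
        // which fits the read-heavy workload described above.
        props.put("hibernate.cache.region.factory_class",
                "org.hibernate.cache.infinispan.InfinispanRegionFactory");
        // "app-unit" is a placeholder persistence unit name.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("app-unit", props);
        emf.close();
    }
}

Entities still have to be marked cacheable (e.g. with @Cacheable or Hibernate's @Cache annotation) before Hibernate will put them in the second-level cache.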

Prevent Infinispan from clustering

I'm developing an application that works with a clustered Infinispan cache.
For this I'm using GlobalConfigurationBuilder.defaultClusteredBuilder() at the moment.
When Infinispan is clustered, the initialization of the first cache always takes about 4-6 seconds to start the JGroups channel and join the cluster. For development this is tedious and unnecessary, since I'm only working on one node.
So my actual question is whether there is a way to prevent Infinispan from forming a cluster, without changing my code.
You can set the join_timeout attribute of the JGroups GMS protocol so that it defaults to 0 when the property is not specified:
<pbcast.GMS join_timeout="${jgroups.join.timeout:0}" view_bundling="true"/>
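
If a small code change is acceptable after all, another common approach is to toggle clustering with a system property, so that only development runs non-clustered. A sketch, with a made-up property name:

import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class CacheManagerFactory {
    public static DefaultCacheManager create() {
        // "dev.local" is a hypothetical switch: running with -Ddev.local=true
        // skips the JGroups channel start and the cluster join entirely.
        boolean local = Boolean.getBoolean("dev.local");
        GlobalConfigurationBuilder global = local
                ? new GlobalConfigurationBuilder()                      // non-clustered
                : GlobalConfigurationBuilder.defaultClusteredBuilder(); // clustered
        return new DefaultCacheManager(global.build());
    }
}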

Ehcache Replication in case DiskStore turned on?

Does Ehcache replicate the underlying disk store to other nodes when replication is enabled?
And when an element that has overflowed to disk is searched for in the cache, does the cache search the disk for that element, or does it return null?
Ehcache 2.x replication is based on cache event listeners and so happens irrespective of the tiering configured. That means that any mutation on the cache, once replication has been configured, will be replicated. It also means that if you were to configure replication on a cache that already has content on disk, that existing content would not get replicated (note: this change may be considered invalid and cause the cache to drop the disk content; I did not test it).
When you Cache.get from a multi-tier cache, all tiers, from fastest to slowest, are accessed to find the entry, and the lookup stops as soon as it is found.
Note also that since Ehcache 2.6.x, overflow is no longer the storage model: all entries exist in the disk tier, while hot entries also live on heap. See another answer for more details.
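
A small sketch of that lookup behavior with the Ehcache 2.x API, assuming an ehcache.xml that defines a cache named "replicatedCache" with a disk tier:

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class TieredGetExample {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.newInstance(); // picks up ehcache.xml
        Cache cache = manager.getCache("replicatedCache");
        cache.put(new Element("someKey", "someValue"));
        // get() walks the tiers from fastest (heap) to slowest (disk) and
        // stops at the first hit; null means the entry is in no tier at all.
        Element hit = cache.get("someKey");
        System.out.println(hit == null ? "miss" : hit.getObjectValue());
        manager.shutdown();
    }
}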

IGNITE Cache Error

We are using Apache Ignite for caching, and during testing I came across this error:
java.lang.IllegalStateException: Cache has been closed or destroyed
We have a Spring RESTful client with Ignite embedded inside. Calls come in to update and remove entries from the cache.
The steps that happened are as follows:
One instance of the Ignite server running.
One instance of the RESTful client running on a different server, with Ignite embedded.
The Ignite server instance was killed; the client kept running.
The Ignite server was restarted.
Any attempt by the client to put a value in the cache leads to the above exception.
If the client is restarted, everything works as normal.
Can someone shed some insight on why this is happening? Do I have to handle the event of all nodes leaving and manually evict the cache, or something similar?
Any help is appreciated.
If all servers go down, the client rejoins with a new ID (just as if you had restarted it manually). In that case all existing cache instances are closed, and you have to obtain new ones (use the Ignite.cache(...) method).
There is a ticket to improve this behavior: https://issues.apache.org/jira/browse/IGNITE-2766
We also ran into this problem and we have a work-around. We implemented our own version of SpringCacheManager (ReconnectSafeSpringCacheManager) that wraps the cache objects in reconnect-safe cache proxy objects (ReconnectSafeCacheProxy).
When an IllegalStateException is caught by one of the cache proxies, we tell our cache manager to drop that cache (remove it from the internal caches map) and then we call ReconnectSafeSpringCacheManager.getCache(<cacheName>) which recreates the Ignite cache instance. The proxy replaces its cache reference with the new Ignite cache and then the operation that caused the exception is retried.
Our approach required us to place our cache manager code in the org.apache.ignite.cache.spring package, as there are references to non-public APIs in SpringCacheManager. That isn't the cleanest approach, but it seems to work, and we plan to remove the work-around when IGNITE-2786 is resolved.
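
The core of the work-around looks roughly like this; the class below is an illustrative sketch, not the actual ReconnectSafeCacheProxy code:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class ReconnectSafeCache<K, V> {
    private final Ignite ignite;
    private final String cacheName;
    private volatile IgniteCache<K, V> delegate;

    public ReconnectSafeCache(Ignite ignite, String cacheName) {
        this.ignite = ignite;
        this.cacheName = cacheName;
        this.delegate = ignite.cache(cacheName);
    }

    public void put(K key, V value) {
        try {
            delegate.put(key, value);
        } catch (IllegalStateException closedOrDestroyed) {
            // The client rejoined with a new ID, so the old cache instance
            // is closed; fetch a fresh one via Ignite.cache(...) and retry once.
            delegate = ignite.cache(cacheName);
            delegate.put(key, value);
        }
    }
}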

J2EE/EJB + service locator: is it safe to cache EJB Home lookup result?

In a J2EE application, we are using EJB2 on WebLogic.
To avoid losing time building the initial context and looking up the EJB home interface, I'm considering the Service Locator pattern.
But after some searching on the web I found that even though this pattern is often recommended for InitialContext caching, there are some negative opinions about caching the EJB home.
Questions:
Is it safe to cache EJB home lookup results?
What will happen if one of my cluster nodes stops working?
What will happen if I install a new version of the EJB without refreshing the service locator's cache?
Is it safe to cache EJB home lookup results?
Yes.
What will happen if one of my cluster nodes stops working?
If your server is configured for clustering/WLM, then the request should silently fail over to another server in the cluster. The routing information is encoded in the stub IOR.
What will happen if I install a new version of the EJB without refreshing the service locator's cache?
Assuming you update the bean and not the component or home interfaces, then everything continues to work. EJBHome is effectively a stateless session bean, so requests can continue to be served from the same server if it is available, or from a different server in the cluster if not.
Note that @EJB injection in EJB3 effectively encourages home caching. (Though, admittedly, it also allows SFSB caching even though that is clearly incorrect, so perhaps @EJB isn't the best support for my claim :-).)
Is it safe to cache EJB home lookup results?
What will happen if one of my cluster nodes stops working?
IMHO the purpose of a ServiceLocator within J2EE is precisely to cache the EJB home and reduce expensive JNDI lookups. It is safe on WebLogic, since by default the EJB home is load-balanced across the cluster, and this automatically allows failover to the next server.
This behavior is controlled by the home-is-clusterable element in weblogic-ejb-jar.xml, which defaults to true.
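For a stateless session bean, the relevant descriptor fragment would look something like this (the bean name is a placeholder, and true is already the default):

<weblogic-ejb-jar>
    <weblogic-enterprise-bean>
        <ejb-name>MyServiceBean</ejb-name>
        <stateless-session-descriptor>
            <stateless-clustering>
                <!-- true is the default; shown explicitly for clarity -->
                <home-is-clusterable>true</home-is-clusterable>
            </stateless-clustering>
        </stateless-session-descriptor>
    </weblogic-enterprise-bean>
</weblogic-ejb-jar>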
What will happen if I install a new version of the EJB without refreshing the service locator's cache?
I haven't tried such a change myself. However, I'm guessing that as part of your build/deploy, your Service Locator class would also get redeployed along with the change to your EJBs, and would thus do a fresh lookup?
If your client is unaffected during the changes to the EJB, then the cached EJBHome will return a stale reference when you call a method on it. So you will have to force the client to refresh.
What will happen if I install a new version of the EJB without refreshing the service locator's cache?
Once your application goes live, new installations should become much less frequent than requests for an EJBHome.
So your focus and concern should lie with the frequent live operations, rather than the transient development operations.
Factor into your design the ability to invalidate caches when necessary.
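
A minimal sketch of such a locator for EJB2, with home caching plus an explicit invalidation hook (all names are illustrative):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.ejb.EJBHome;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.rmi.PortableRemoteObject;

public final class ServiceLocator {
    private static final ServiceLocator INSTANCE = new ServiceLocator();
    private final InitialContext context;
    private final Map<String, EJBHome> homeCache = new ConcurrentHashMap<String, EJBHome>();

    private ServiceLocator() {
        try {
            context = new InitialContext(); // build the expensive context once
        } catch (NamingException e) {
            throw new IllegalStateException("Could not create InitialContext", e);
        }
    }

    public static ServiceLocator getInstance() {
        return INSTANCE;
    }

    public EJBHome getHome(String jndiName, Class homeClass) throws NamingException {
        EJBHome home = homeCache.get(jndiName);
        if (home == null) {
            Object raw = context.lookup(jndiName);
            home = (EJBHome) PortableRemoteObject.narrow(raw, homeClass);
            homeCache.put(jndiName, home); // cache the home for later callers
        }
        return home;
    }

    // Call this after redeploying a bean so stale home references are dropped.
    public void invalidate(String jndiName) {
        homeCache.remove(jndiName);
    }
}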
