Ehcache replication when the DiskStore is turned on? - java

Does Ehcache replicate the underlying DiskStore to other nodes when replication is enabled?
And when an element that has overflowed to disk is looked up in the cache, does the cache search the disk for it, or does it return null?

Ehcache 2.x replication is based on cache event listeners, so it happens regardless of the tiering configured. That means that any mutation on the cache once replication has been configured will be replicated. It also means that if you were to configure replication on a cache that already has content on disk, that existing content would not get replicated. (Note: this change may be considered invalid and cause the cache to drop its disk content; I did not test it.)
When you Cache.get from a multi-tier cache, the tiers are accessed from fastest to slowest, and the lookup stops as soon as the entry is found.
Note also that since Ehcache 2.6.x, overflow is no longer the storage model. All entries live in the disk tier, while hot entries also live on heap. See another answer for more details.
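For illustration, here is a minimal sketch of both points (the cache name and payload are made up; it assumes an ehcache.xml on the classpath that defines the cache with an RMI replicator listener):

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class ReplicationListenerDemo {
    public static void main(String[] args) {
        // Assumes "ordersCache" is defined in ehcache.xml with an
        // RMICacheReplicatorFactory cache event listener configured.
        CacheManager manager = CacheManager.getInstance();
        Cache cache = manager.getCache("ordersCache");

        // This put fires the registered cache event listeners; the RMI
        // replicator is just one such listener, so the mutation is
        // replicated no matter which tier (heap or disk) ends up holding it.
        cache.put(new Element("order-42", "payload"));

        // get() walks the tiers from fastest to slowest (heap, then disk)
        // and returns null only if no tier contains the key.
        Element hit = cache.get("order-42");
        System.out.println(hit != null ? hit.getObjectValue() : "not present");

        manager.shutdown();
    }
}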

Related

Distributed Infinispan Cache as Hibernate L2 Cache Issue

I would like to create an application that uses the Hibernate L2 cache to avoid unnecessarily hitting the database on every request.
In my application, over 80% of operations would be reads and less than 20% would be create/update/delete operations, so I think using the Hibernate L2 cache would be beneficial. However, as we are going to scale the application horizontally, we would like to use Infinispan as Hibernate's L2 cache.
However, there are several questions we are uncertain of.
If I understand correctly, the Hibernate L2 cache works by updating the cache whenever there is a create/update/delete operation, or when a query has not been run before. On a multi-server setup connecting to the same database, given network I/O latency, how do multiple update operations work in such an environment? Two application servers may update the same entity simultaneously, each writing different data; given the network I/O delay, how can Hibernate know which data should be cached and synced, and which should not?
It depends on what kind of cache you are using. An invalidation cache, which I would suggest here, will just invalidate cache entries that have become stale. A replicating cache would replicate the changes to each node in the cluster. Hibernate just asks the cache implementation for the cache entry, and if the cache returns one, Hibernate uses it and avoids hitting the database. Whether the cache entry can be stale, and how much blocking is involved in the lookup, depends on the transaction configuration of the cache.
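As an illustration, here is a hedged sketch of wiring Infinispan in as the L2 provider. The property names follow the hibernate-infinispan module's conventions; the Infinispan config file name is an assumption for illustration:

import java.util.Properties;
import org.hibernate.cfg.Configuration;

public class L2CacheSetup {
    public static void main(String[] args) {
        // A minimal sketch, assuming the hibernate-infinispan module is on
        // the classpath (Hibernate 4.x/5.x-era property names).
        Properties props = new Properties();
        props.put("hibernate.cache.use_second_level_cache", "true");
        props.put("hibernate.cache.use_query_cache", "false");
        // Region factory that plugs Infinispan in as the L2 cache provider.
        props.put("hibernate.cache.region.factory_class",
                  "org.hibernate.cache.infinispan.InfinispanRegionFactory");
        // Point Infinispan at a cluster-aware configuration file (name is an
        // assumption). Inside it, define the entity caches as synchronous
        // invalidation caches so nodes drop stale entries instead of
        // replicating them.
        props.put("hibernate.cache.infinispan.cfg", "infinispan-invalidation.xml");

        Configuration cfg = new Configuration().addProperties(props);
        // ... addAnnotatedClass(...), buildSessionFactory(), etc.
    }
}

Entities still need a cache concurrency strategy declared (e.g. via the @Cache annotation) for their regions to participate.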

Ehcache replicated cache RMI bootstrap

My question is about cache replication over RMI using Ehcache. Let's imagine I have 3 servers that replicate a cache with each other. At startup I want to load the cache from the other running instances (bootstrap). My concerns are about these topics:
I have in-memory caching on all nodes. I restart node1, and at startup (which I set to bootstrap synchronously, bootstrapAsynchronously=false) I load the cache from node2. What happens if, before the cache is fully replicated, node2 suddenly goes down? Will replication continue from node3 (which also has the cache loaded)?
If I set up bootstrapping in async mode, will it emit some event signaling that replication has finished and the instance has fully loaded the cache?
The answer to the first part is that the cache will not start.
See http://ehcache.org/documentation/user-guide/rmi-replicated-caching#configuring-bootstrap-from-a-cache-peer :
When a peer comes up, it will be incoherent with other caches. When the bootstrap completes it will be partially coherent. Bootstrap gets the list of keys from a random peer, and then loads those in batches from random peers. If bootstrap fails then the Cache will not start. However, if a cache replication operation occurs which is then overwritten by bootstrap, there is a chance that the cache could be inconsistent.
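As for the second part: to my knowledge, Ehcache fires no "bootstrap complete" event for asynchronous bootstrap. If you need a signal, one workaround is to poll until the local entry count stops growing. A rough sketch, where the cache name, polling interval, and "three quiet checks" threshold are all assumptions, not an official API:

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;

public class BootstrapWatcher {
    // Heuristic only: polls the local cache size until it has been stable
    // for three consecutive checks, then assumes bootstrap has settled.
    public static void awaitBootstrap(Cache cache) throws InterruptedException {
        int stableChecks = 0;
        int lastSize = -1;
        while (stableChecks < 3) {
            int size = cache.getSize();   // entries loaded locally so far
            stableChecks = (size == lastSize) ? stableChecks + 1 : 0;
            lastSize = size;
            Thread.sleep(2000);           // poll every 2 seconds
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CacheManager manager = CacheManager.getInstance();
        Cache cache = manager.getCache("replicatedCache"); // assumed name
        awaitBootstrap(cache);
        System.out.println("Cache appears fully bootstrapped: " + cache.getSize());
    }
}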

Custom cache reload in Java on WebLogic

I have a requirement to cache XML-bean Java objects built by reading XMLs from a database. I am using a HashMap in memory to hold my Java objects, Spring for DI, and the WebLogic 11g app server.
Can you please suggest a mechanism to reload the cache when there is an update to the XML files?
You can make use of the WebLogic p13n cache for this purpose instead of using your own HashMap to cache the Java objects. You will have to configure the p13n-cache-config.xml file, which contains the TTL, max values, etc. for your cache.
Coming to the first point, the cache will be automatically reloaded once the TTL has elapsed. For manually clearing the cache, you can implement a servlet that you hit directly from your browser (you can restrict it to a particular URL); in that servlet, clear the cache you want reloaded.
The WebLogic p13n cache also provides a method for cluster-aware cache clearing, in case you need it. If you want to use your own HashMap for caching instead, provide an update method for that HashMap that clears the Java objects you want reloaded and then calls the cache creation method, as in the sketch below.
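A minimal sketch of such a reloadable cache (the Loader interface and all names are illustrative, not a WebLogic API):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A cache of XML-bean objects keyed by name, with a clear-and-reload hook
// that a servlet (or any admin endpoint) can call when the XML rows in the
// database change.
public class XmlBeanCache {

    public interface Loader {
        Map<String, Object> loadAllFromDatabase(); // re-reads the XML rows
    }

    private final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();
    private final Loader loader;

    public XmlBeanCache(Loader loader) {
        this.loader = loader;
        reload();
    }

    public Object get(String key) {
        return cache.get(key);
    }

    // Called from the cache-clearing servlet when the XMLs are updated.
    public synchronized void reload() {
        Map<String, Object> fresh = loader.loadAllFromDatabase();
        cache.clear();
        cache.putAll(fresh);
    }
}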

Memory Leak in WebSphere Portal Relating to Portal URIs

I've got an application leaking Java heap at a decent rate (with 400 users, 25% is free after 2 hours; after logoff all memory is restored), and we've identified the items causing the memory leak as Strings placed in session that appear to be generated by Portal itself. The values are encoded Portal URIs (very long encoded strings, usually around 19 KB), and the keys seem to be seven randomly generated characters prefixed by RES# (for example, RES#NhhEY37).
We've stepped through the application using session tracing and snapping off heap dumps, and determined that one of these objects is created and added to session on almost every page; in fact, it seems to happen on each page that submits data (which is most pages). So it's either 1:1 with pages in general, or 1:1 with forms.
Has anyone encountered a similar problem as this? We are opening a ticket with IBM, but wanted to ask this community as well. Thanks in advance!
Can it be the portlet cache? You could have servlet caching activated and a long portlet expiry time declared. Quoting from the tech journal:
Portlets can advertise their ability to be cached in the fragment cache by setting their expiry time in their portlet.xml descriptor (see Portlet descriptor example)
<!-- Expiration value is in seconds, -1 = no time limit, 0 = deactivated -->
<expiration-cache>3600</expiration-cache> <!-- 1 hour cache -->
To use the fragment caching functions, servlet caching needs to be activated in the Web Container section of the WebSphere Application Server administrative console. WebSphere Application Server also provides a cache monitor enterprise application (CacheMonitor.ear), which is very useful for visualizing the contents of the fragment cache.
Update
Do you have portlets that set EXPIRATION_CACHE? Quote:
Modifying the local cache at runtime
For standard portlets, the portlet window can modify the expiration time at runtime by setting the EXPIRATION_CACHE property in the RenderResponse, as follows:
renderResponse.setProperty(
        PortletResponse.EXPIRATION_CACHE,
        Integer.toString(3000));
Note that for me the value is a bit counter-intuitive: -1 means never expire, 0 means don't cache.
The actual issue turned out to be a working feature within Portal: specifically, Portal's action protection, which prevents the same action from being submitted twice while keeping the portal's navigational ability. There is a cache that retains the action results for every successful action and uses them to compare against and reject duplicates.
The issue for us was that we required longer-than-normal user sessions (60+ minutes), and with 1,000+ concurrent users, we leaked out on this protection mechanism after just a couple of hours.
IBM recommended that we just shut off the cache entirely using the following portlet.xml configuration entry:
wps.multiple.action.execution = true
This allows double submits, which may or may not harm business functionality. However, our internal Portal framework already contained a mechanism to prevent double submits, so this was not an issue for us.
At our request, IBM did come back with a patch for this issue that makes the cache customizable, that is, lets you configure the number of action results stored in the cache for each user, so you can leverage Portal's mechanism again at a reduced session overhead. Those portal configuration settings were:
wps.multiple.action.cache.bound.enabled = true
wps.multiple.action.cache.key.maxsize = 40
wps.multiple.action.cache.value.maxsize = 10
You'll need to contact IBM about this patch as it is not currently in a released fixpack.
Does your WebSphere Portal Server have the latest fix pack installed?
http://www-01.ibm.com/support/docview.wss?uid=swg24024380&rs=0&cs=utf-8&context=SSHRKX&dc=D420&loc=en_US&lang=en&cc=US
Also, you may be interested in the following discussion:
http://www.ibm.com/developerworks/forums/thread.jspa?messageID=14427700&tstart=0
Update:
Just throwing some blindfolded darts.
"RES#" sounds like "resource" to me.
From the forum stack trace, "DefaultActionResultManager.storeDocument" indicates it is storing the document.
Hence it looks like your resources (generated portal pages) are being cached. Check if there is some parameter that can limit the cache size for resources.
Also, in another test, set the cache expiration to 5 minutes instead of an hour.

How can I disable the second-level cache of certain entities in Hibernate without changing annotations

I'm using the Hibernate second-level cache in my application; for certain business reasons I can't change the entity annotations any more.
In my project, apart from changes to the database made through Hibernate, there is also native SQL that does not go through Hibernate. Therefore, the Hibernate second-level cache data could be stale after the database is updated by native SQL. That's why I want to disable the second-level cache for certain entities (programmatically or some other way than changing annotations).
Thanks in advance!
WARNING: As Jens Schauder noted, it is impossible to make Ehcache store 0 elements in memory by setting maxElementsInMemory="0", as that effectively has the opposite effect: it sets an unlimited size for the cache. This behaviour is not mentioned on the Hibernate Caching page but is documented on the Cache Configuration page.
I have quickly reviewed the documentation and haven't found an alternative approach yet. I am unable to delete this answer by myself. :-(
My original suggestion:
You can configure the second-level cache provider with short TTL times and/or to store 0 entries of a particular entity type.
E.g. if you are using Ehcache, you can configure it in ehcache.xml:
<!-- Intent: maxElementsInMemory="0" to disable caching for EntityName and
     overflowToDisk="false" to keep entries off disk. But see the warning
     above: 0 actually means "unlimited". -->
<cache
    name="com.problematic.cache.EntityName"
    maxElementsInMemory="0"
    overflowToDisk="false"
/>
See Hibernate Caching in Ehcache documentation.
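Since the maxElementsInMemory="0" trick backfires, two runtime workarounds using standard Hibernate APIs may help (4.x-era method names; the entity name is a placeholder): evict the entity's region right after the native SQL runs, or bypass the L2 cache per session. A sketch:

import org.hibernate.CacheMode;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class StaleCacheWorkarounds {

    public static void afterNativeSqlUpdate(SessionFactory sessionFactory) {
        // Option 1: evict the second-level cache region for the entity
        // right after the native SQL statement commits, so the next read
        // goes to the database.
        sessionFactory.getCache().evictEntityRegion("com.problematic.cache.EntityName");
    }

    public static Object readBypassingL2(Session session, Long id) {
        // Option 2: ignore the second-level cache for this session's reads,
        // without touching any entity annotations.
        session.setCacheMode(CacheMode.IGNORE);
        return session.get("com.problematic.cache.EntityName", id);
    }
}

Neither approach disables the cache globally; the first keeps the region coherent after out-of-band writes, the second trades cache hits for guaranteed freshness on a per-session basis.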
In Terracotta 3.1 and above, you can enable and disable Hibernate second-level caches on a per-region basis, both statically in the configuration and at runtime using the Terracotta Developer Console.
You can also monitor real-time statistics about the cache and Hibernate, for individual nodes in a cluster or cluster-wide.
Terracotta is open source. For more details, check out Terracotta for Hibernate.
