I want to use Spring Boot Cache Abstraction to cache some data (https://docs.spring.io/spring/docs/current/spring-framework-reference/html/cache.html).
I'm open to using any of the providers that are available.
The main thing I need is this: I want to be able to set object level TTL, not just global cache level TTL.
E.g. for each object I store in my cache, I want to specify a custom TTL for the object based on some property of that object.
I know that to set up something like this, it must be done directly through the cache provider, but I have not been able to find examples of my use case; I have only found cases where a global TTL was being set. Can anyone help?
If you are working with Redis, you can take a look at JetCache:
@Cached(expire = 10, timeUnit = TimeUnit.MINUTES)
User getUserById(long userId);
You need to check out the features of the different cache implementations available for Spring Boot.
Supporting a variable expiry based on the entry value has implications for the internals of the cache implementation and its performance. With variable expiry you typically need an O(log n) data structure. For example, Guava and Caffeine do not support it. Ehcache does support it; see the documentation about expiry.
The requested functionality is "beyond" the Spring abstraction, which means you need to write code against one specific cache implementation.
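To make that concrete, here is a minimal sketch that goes directly through the provider, assuming the classic net.sf.ehcache (Ehcache 2.x) API where a TTL can be set per Element; the "users" cache name and the getTtlSeconds() property on the cached object are made up for the example.
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class PerEntryTtlExample {

    private final Cache cache;

    public PerEntryTtlExample() {
        // "users" is a hypothetical cache defined in ehcache.xml (or created programmatically)
        this.cache = CacheManager.getInstance().getCache("users");
    }

    public void put(String key, User user) {
        Element element = new Element(key, user);
        // derive the TTL from a property of the object itself (getTtlSeconds() is an assumed method)
        element.setTimeToLive(user.getTtlSeconds());
        cache.put(element);
    }

    public User get(String key) {
        Element element = cache.get(key);
        return element == null ? null : (User) element.getObjectValue();
    }
}
Note that this bypasses the Spring Cache annotations entirely; the cache is still usable alongside them, but the per-entry TTL only applies to entries written through this code.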
I want to implement a cache using Guava's caching mechanism.
I have a DB query which returns a map, I want to cache the entire map but let it expire after a certain amount of time.
I realize Guava caches work on a per-item basis: we provide a key, and the cache either returns the corresponding value or fetches it.
Is there a way to use Guava to fetch everything, cache it, have it expire after a certain period of time, and then fetch everything again?
Many thanks
You can create an instance of Supplier<Map<K,V>> that fetches the entire map from the database, and then use Suppliers.memoizeWithExpiration to cache it.
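A minimal sketch of that idea (the String/Integer types and the loadMapFromDb() query method are assumptions for the example):
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;

public class MapCache {

    // re-runs the query at most once every 10 minutes; all other calls get the memoized map
    private final Supplier<Map<String, Integer>> cachedMap =
            Suppliers.memoizeWithExpiration(this::loadMapFromDb, 10, TimeUnit.MINUTES);

    public Map<String, Integer> getMap() {
        return cachedMap.get();
    }

    private Map<String, Integer> loadMapFromDb() {
        // hypothetical stand-in for the real DB query that builds the map
        return new HashMap<>();
    }
}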
Related:
Google Guava Supplier Example
http://google.github.io/guava/releases/snapshot/api/docs/com/google/common/base/Supplier.html
http://google.github.io/guava/releases/snapshot/api/docs/com/google/common/base/Suppliers.html
I am planning to implement a cache solution in an existing web app. Nothing complicated: basically a concurrent map that supports overflowing to disk and automatic eviction. Clustering the cache could be a requirement in the future, but not now.
I like ehcache's copyOnRead and copyOnWrite features, because it means that I don't have to manually clone things before modifying something I take out of the cache. Now I have started to look at Infinispan, but I have not found anything equivalent there. Does it exist?
I.e., the following unit tests should pass:
@Test
public void testCopyOnWrite() {
    Date date = new Date(0);
    cache.put(0, date);
    date.setTime(1000);
    date = cache.get(0);
    assertEquals(0, date.getTime());
}

@Test
public void testCopyOnRead() {
    Date date = new Date(0);
    cache.put(0, date);
    assertNotSame(cache.get(0), cache.get(0));
}
Infinispan does support copyOnRead/copyOnWrite, albeit the actual format isn't pluggable. The configuration element is lazyDeserialization in Infinispan 4.x and storeAsBinary in Infinispan 5.x. Objects are serialized using the pluggable Marshaller framework, which is used for all forms of marshalling including for RPC calls over a network and storage to disk.
According to a JBoss developer, Infinispan does not yet support such a feature. You should log a request for enhancement in the Infinispan issue tracker so that others may vote on it (I will).
That being said, if you need this feature now, a workaround would be to extend AbstractDelegatingCache, and override the get and put methods to add this functionality. You could use your own copy strategy or look at how EHCache did it for inspiration.
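To illustrate the copy strategy itself (not Infinispan-specific API), here is a rough, provider-agnostic sketch that deep-copies values on put and on get via Java serialization; the CopyingCache name is made up for the example, and with Infinispan you would move the same logic into your AbstractDelegatingCache subclass.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CopyingCache<K, V extends Serializable> {

    private final ConcurrentMap<K, V> delegate = new ConcurrentHashMap<>();

    public void put(K key, V value) {
        // store a copy so later mutations of the caller's object don't leak into the cache
        delegate.put(key, deepCopy(value));
    }

    public V get(K key) {
        V value = delegate.get(key);
        // return a copy so callers can't mutate the cached instance
        return value == null ? null : deepCopy(value);
    }

    @SuppressWarnings("unchecked")
    private V deepCopy(V value) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bos);
            out.writeObject(value);
            out.flush();
            ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
            return (V) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException("Copy failed", e);
        }
    }
}
With this wrapper, the two unit tests from the question pass, at the cost of a serialization round trip on every read and write.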
Also, you may consider the Infinispan forum if you have further questions, since you will have more views from the Infinispan community.
I believe storeAsBinary only takes effect when objects need to be serialized, which means when a put operation is called and the owner is not the current node.
This also means the test cases in the question could pass if the owner of key 0 is not the current node, but they would still fail in a single-node environment.
I'm developing a simple Java EE 5 "routing" application. Different messages from a MQ queue are first transformed and then, according to the value of a certain field, stored in different datasources (stored procedures in different ds need to be called).
For example valueX -> dataSource1, valueY -> dataSource2. All datasources are set up in the application server with different JNDI entries. Since the routing info usually won't change while the app is running, is it safe to cache the datasource lookups? For example, I would implement a singleton which holds a hashmap where I store valueX -> DataSource1. When a certain entry is not in the map, I would do the resource lookup and store the result. Do I gain any performance with the cache, or are these resource lookups fast enough?
In general, what's the best way to build this kind of cache? I could use a cache for some other DB lookups too. For example, the mapping valueX -> resource name is defined in a simple table in a DB. Is it better to look up the values on demand and save the results in a map, do a lookup every time, or even read and save all entries on startup? Do I need to synchronize the access? Can I just create an "enum" singleton implementation?
It is safe from an operational/change-management point of view, but not from a programmer's one.
From a programmer's point of view, the DataSource configuration can be changed at runtime, and therefore one should always repeat the lookup.
But this is not how things are happening in real life.
When a change to a DataSource is to be implemented, this is done via a change management procedure. There is a change request record, and that record states that the application will have downtime. In other words, the operational folks executing the change will bring the application down, make the change, and bring it back up. Nobody makes changes like this on a live AS, for safety reasons. As a result, you shouldn't take into account the possibility that a DataSource changes at runtime.
So any permanent synchronized shared cache is fine in this case.
Will you get a performance boost? This depends on the AS implementation. It is likely to have a cache of its own, but that cache may be more generic and therefore slower, and in fact you cannot count on its presence at all.
Do you need to build a cache? The answer usually comes from performance tests. If there is no problem, why waste time and introduce risks?
In summary: yes, build a simple cache and use it, if it is justified by the performance increase.
The specifics of the implementation depend on your preferences. I usually have a cache that does lookups on demand and holds a synchronized map of jndi -> object inside. For a high-concurrency cache I'd use read/write locks instead of naive synchronized blocks, i.e. many reads can go in parallel, while adding a new entry gets exclusive access. But those details depend heavily on the application.
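As an illustration, here is a minimal sketch of such an on-demand lookup cache using a read/write lock; the class name is made up, and error handling is reduced to propagating NamingException.
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceCache {

    private final Map<String, DataSource> cache = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public DataSource get(String jndiName) throws NamingException {
        // fast path: many readers in parallel
        lock.readLock().lock();
        try {
            DataSource ds = cache.get(jndiName);
            if (ds != null) {
                return ds;
            }
        } finally {
            lock.readLock().unlock();
        }
        // not cached yet: take the write lock, look it up once and remember it
        lock.writeLock().lock();
        try {
            DataSource ds = cache.get(jndiName);
            if (ds == null) {
                ds = (DataSource) new InitialContext().lookup(jndiName);
                cache.put(jndiName, ds);
            }
            return ds;
        } finally {
            lock.writeLock().unlock();
        }
    }
}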
I have a Java servlet that retrieves data from a MySQL database. In order to minimize round trips to the database, the data is retrieved only once, in the init() method, and placed in a HashMap<> (i.e. cached in memory).
For now, this HashMap is a member of the servlet class. I need not only to store this data but also to update some values (counters, in fact) in the cached objects of the underlying HashMap value class. And there is a Timer (or cron task) scheduled to dump these counters to the DB.
So, after googling, I found 3 options for storing the cached data:
1) as now, as a member of the servlet class (but servlets can be taken out of service and put back into service by the container at will, and then the data would be lost)
2) in the ServletContext (am I right that it is recommended to store only small amounts of data here?)
3) in a JNDI resource.
What is the most preferred way?
Put it in the ServletContext, but use a ConcurrentHashMap to avoid concurrency issues.
Of those 3 options, the best is to store it in the application scope, i.e. use ServletContext#setAttribute(). You'd like to use a ServletContextListener for this. In normal servlets you can access the ServletContext via the inherited getServletContext() method. In JSP you can access it by ${attributename}.
If the data grows so large that it eats too much of Java's memory, then you should consider a 4th option: use a cache manager.
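A minimal sketch of the listener approach (the CacheInitializer class name and the "dataCache" attribute name are made up for the example; in a Java EE 5 app the listener still has to be registered in web.xml):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class CacheInitializer implements ServletContextListener {

    public void contextInitialized(ServletContextEvent event) {
        ConcurrentMap<String, Integer> cache = new ConcurrentHashMap<String, Integer>();
        // hypothetical: pre-populate the map from the database here
        event.getServletContext().setAttribute("dataCache", cache);
    }

    public void contextDestroyed(ServletContextEvent event) {
        event.getServletContext().removeAttribute("dataCache");
    }
}
In a servlet you would then read it back with getServletContext().getAttribute("dataCache") and cast it to ConcurrentMap<String, Integer>; because the map itself is thread-safe, the counter updates and the scheduled dump can work against the same instance.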
The most obvious way would be to use something like Ehcache and store the data in that. Ehcache is a cache manager that works much like a hash map, except the cache manager can be tweaked to hold things in memory, move them to disk, flush them, even write them into a database via a plugin, etc. It depends on whether the objects are serializable and whether your app can cope without the data (i.e. make another round trip if necessary), but I would trust a cache manager to do a better job of it than a hand-rolled solution.
If your cache can become large and you access it often, it is reasonable to use a dedicated caching solution. For example, Ehcache is a good candidate and is easily integrated with Spring applications, too. Documentation is here.
Also check this overview of open-source caching solutions for Java.
In order to minimize the number of database queries I need some sort of cache to store pairs of data. My approach now is a hashtable (with Strings as keys, Integers as values). But I want to be able to detect updates in the database and replace the values in my "cache". What I'm looking for is something that makes my stored pairs invalid after a preset timespan, perhaps 10-15 minutes. How would I implement that? Is there something in the standard Java packages I can use?
I would use an existing solution (there are many cache frameworks).
Ehcache is great; it can expire values after a given timespan, and I bet it can do much more (it is the only one I have used).
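For illustration, a minimal sketch using the classic net.sf.ehcache 2.x API with a 15-minute TTL; the cache name and sizes are arbitrary:
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class EhcacheTtlExample {

    public static void main(String[] args) {
        CacheManager manager = CacheManager.create();
        // name, max elements in memory, overflow to disk, eternal, TTL seconds, TTI seconds
        Cache cache = new Cache("pairs", 10000, false, false, 900, 0);
        manager.addCache(cache);

        cache.put(new Element("someKey", 42));
        Element element = cache.get("someKey");   // returns null once the 15-minute TTL has passed
        Integer value = element == null ? null : (Integer) element.getObjectValue();
        System.out.println(value);

        manager.shutdown();
    }
}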
You can either use existing solutions (see the previous reply),
or, if you want a challenge, make your own simple cache class (not recommended for a production project, but it's a great learning experience).
You will need at least 3 members:
the cached data, stored as a hashtable object;
the next cache expiration date;
the cache expiration interval, set via the constructor.
Then simply have public data getter methods which check the cache expiration status:
if not expired, call the hashtable's accessors;
if expired, first call the "data load" method that is also called in the constructor to pre-populate, and then call the hashtable accessors. A minimal sketch along these lines follows.
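Here is that sketch, assuming String keys and Integer values as in the question; loadData() is a hypothetical stand-in for the real DB query.
import java.util.Hashtable;

public class SimpleExpiringCache {

    private final Hashtable<String, Integer> data = new Hashtable<String, Integer>();
    private final long expirationIntervalMillis;   // cache expiration interval, set via constructor
    private long nextExpiration;                   // next cache expiration time

    public SimpleExpiringCache(long expirationIntervalMillis) {
        this.expirationIntervalMillis = expirationIntervalMillis;
        loadData();   // pre-populate on construction
    }

    public Integer get(String key) {
        if (System.currentTimeMillis() > nextExpiration) {
            // expired: reload before answering
            loadData();
        }
        return data.get(key);
    }

    private void loadData() {
        // hypothetical stand-in for the real DB query that repopulates the map
        data.clear();
        data.put("example", 1);
        nextExpiration = System.currentTimeMillis() + expirationIntervalMillis;
    }
}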
For an even cooler cache class (I have implemented one in Perl at my job), there is additional functionality you can implement:
Individual per-key cache expiration (coupled with the overall total cache expiration)
Auto, semi-auto, and single-shot data reload (e.g. reload the entire cache at once; reload a batch of data defined by some predefined query; or reload individual data elements piecemeal). The latter approach is very useful when your cache has many hits on the same exact keys - that way you don't need to reload the universe every time the 3 keys that are always accessed expire.
You could use a caching framework like OSCache, EHCache, JBoss Cache, JCS... If you're looking for something that follows a "standard", choose a framework that supports the JCache standard interface (javax.cache) aka JSR-107.
For simple needs like what you are describing, I'd look at EHCache or OSCache (I'm not saying they are basic, but they are simple to start with), they both support expiration based on time.
If I had to choose one solution, I'd recommend Ehcache, which has my preference, especially now that it has joined Terracotta. And just for the record, Ehcache provides a preview implementation of JSR-107 via the net.sf.ehcache.jcache package.