Can anyone point to a good implementation, if one exists, of what I call a "ticking" collection/map in Java: a collection whose elements have some expiry time, and which raises some kind of alarm or calls a handler when a particular element expires?
I saw a Guava implementation of an expiring map which automatically removes keys once they have expired.
Expiring map
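For reference, the core of such a "ticking map" can be sketched with nothing but the JDK, using a DelayQueue and a reaper thread. All names below are illustrative, not from any library, and this is a minimal sketch rather than a production implementation:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;

// Minimal "ticking map" sketch: entries expire after a per-entry TTL and an
// expiry handler is invoked by a background reaper thread.
class TickingMap<K, V> {
    private static final class ExpiryToken<K> implements Delayed {
        final K key;
        final long deadlineNanos;

        ExpiryToken(K key, long ttlMillis) {
            this.key = key;
            this.deadlineNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(ttlMillis);
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(deadlineNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS), other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
    private final DelayQueue<ExpiryToken<K>> queue = new DelayQueue<>();

    TickingMap(BiConsumer<K, V> onExpiry) {
        Thread reaper = new Thread(() -> {
            try {
                while (true) {
                    ExpiryToken<K> token = queue.take(); // blocks until an entry is due
                    V removed = map.remove(token.key);
                    if (removed != null) {
                        onExpiry.accept(token.key, removed); // the "alarm"/handler
                    }
                }
            } catch (InterruptedException ignored) {
                // shut down quietly
            }
        });
        reaper.setDaemon(true);
        reaper.start();
    }

    void put(K key, V value, long ttlMillis) {
        map.put(key, value);
        queue.put(new ExpiryToken<>(key, ttlMillis));
    }

    V get(K key) {
        return map.get(key);
    }
}
```

Unlike Guava's lazy cleanup (discussed below), this pushes expirations eagerly, at the cost of a dedicated thread.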
Guava supports a callback on eviction:
Cache<String, String> cache = CacheBuilder.newBuilder()
        .expireAfterAccess(100, TimeUnit.SECONDS)
        .removalListener(new RemovalListener<String, String>() {
            @Override
            public void onRemoval(RemovalNotification<String, String> notification) {
                // do something with notification.getKey() / notification.getValue()
            }
        })
        .build();
Note that Guava does not run a cleanup thread of its own: the listener fires during cache maintenance, which happens as a side effect of normal cache operations (or when you call cache.cleanUp() explicitly), not at the instant an entry expires.
Related
This question is to validate observed behavior, to make sure Guava Cache is being used correctly.
I have set up two Guava caches (see code below), one with and one without a CacheLoader, since the Guava documentation states:
Caches built with CacheBuilder do not perform cleanup and evict values
"automatically," or instantly after a value expires, or anything of
the sort.
It appears that expiration is only observed when the getIfPresent() method is used: when a key is queried more than the expiry interval after the key/value was written to the cache, null is returned. For a cache built with a CacheLoader, calling get() or getUnchecked() causes CacheLoader.load() to be executed, so expiry is not observed, i.e. null is never returned.
Is this the correct expectation?
Thank you for your patience and help.
// excerpt from test code
private static final FakeTicker fakeTicker = new FakeTicker();

private static LoadingCache<Integer, String> usingCacheLoader = CacheBuilder.newBuilder()
        .expireAfterWrite(2, TimeUnit.MINUTES)
        .ticker(fakeTicker)
        .build(new CacheLoader<Integer, String>() {
            @Override
            public String load(Integer keyName) throws Exception {
                logger.info("Getting value for key: {}", keyName);
                return getValue(keyName, "with_cache_loader");
            }
        });

private static Cache<Integer, String> withoutCacheLoader = CacheBuilder.newBuilder()
        .expireAfterWrite(2, TimeUnit.MINUTES)
        .ticker(fakeTicker)
        .build();
It is true that if you call get or getUnchecked you will never get null.
Expiration can still be "observed", though: in performance terms (how long a get for a specific key takes, and whether the value has to be freshly computed), and in whether the value you get back reflects possibly out-of-date information.
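The semantics described here, expiry checked lazily at read time, with a loader masking it by recomputing, can be mimicked with a plain-JDK sketch. The class and method names below are illustrative (including the hand-rolled fake ticker), not Guava's:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Plain-JDK sketch of the semantics above: expiry is observed lazily at read
// time, and a "loading" get recomputes instead of returning null.
class LazyExpiryMap<K, V> {
    private static class Entry<V> {
        final V value;
        final long writtenAtNanos;
        Entry(V value, long writtenAtNanos) {
            this.value = value;
            this.writtenAtNanos = writtenAtNanos;
        }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlNanos;
    private long nowNanos = 0; // fake ticker, advanced manually like Guava's FakeTicker

    LazyExpiryMap(long ttlNanos) { this.ttlNanos = ttlNanos; }

    void advanceTicker(long nanos) { nowNanos += nanos; }

    void put(K key, V value) { map.put(key, new Entry<>(value, nowNanos)); }

    // Like Cache.getIfPresent: observes expiry, may return null.
    V getIfPresent(K key) {
        Entry<V> e = map.get(key);
        if (e == null || nowNanos - e.writtenAtNanos >= ttlNanos) return null;
        return e.value;
    }

    // Like LoadingCache.get: an expired entry is simply reloaded,
    // so null is never observed by the caller.
    V get(K key, Function<K, V> loader) {
        V v = getIfPresent(key);
        if (v == null) {
            v = loader.apply(key);
            put(key, v);
        }
        return v;
    }
}
```

The loader path never surfaces null; what it "costs" you is a fresh load, which is exactly the performance-level observation mentioned above.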
I am using Ehcache core 3.0. Internally it uses the BaseExpiry and Eh107Expiry classes to check whether a cache entry has expired; both implement the Expiry interface. My question is: can we extend the methods used to check for expiry? I don't want the contents of the cache to expire, even after the configured time has elapsed, while one of my methods is still using data from that cache.
Have a look at the dedicated section on Expiry in the documentation. It will help you understand what you can do and how to do it.
If that does not help you, please expand your question as suggested in comments.
If you add time-to-idle in XML, or override getExpiryForAccess from the Expiry interface, then your entries will not be deleted while you are accessing them. Below is the code to build an Ehcache cache with a custom Expiry. This blog will help you with the other properties, with explanations.
CacheConfigurationBuilder<Integer, String> cacheConfigurationBuilder = CacheConfigurationBuilder.newCacheConfigurationBuilder();
cacheConfigurationBuilder.withExpiry(new Expiry() {
        @Override
        public Duration getExpiryForCreation(Object key, Object value) {
            return new Duration(120, TimeUnit.SECONDS);
        }

        @Override
        public Duration getExpiryForAccess(Object key, Object value) {
            return new Duration(120, TimeUnit.SECONDS);
        }

        @Override
        public Duration getExpiryForUpdate(Object key, Object oldValue, Object newValue) {
            return null; // null means: keep the expiry that was in effect before the update
        }
    })
    .usingEvictionPrioritizer(Eviction.Prioritizer.LFU)
    .withResourcePools(ResourcePoolsBuilder.newResourcePoolsBuilder().heap(200, EntryUnit.ENTRIES))
    // adding defaultSerializer config service to configuration
    .add(new DefaultSerializerConfiguration(CompactJavaSerializer.class, SerializerConfiguration.Type.KEY))
    .buildConfig(Integer.class, String.class);
I guess you can use an Ehcache decorator and reimplement isExpired to add your own conditions. Please refer to https://www.ehcache.org/documentation/2.8/apis/cache-decorators.html.
I have something like this:
private final Cache<Long, BlockingDeque<Peer>> peers = CacheBuilder.newBuilder()
.expireAfterAccess(10, TimeUnit.MINUTES)
.build();
public class Peer {
    public void handleRequest(String request) { ... }
    // ...
}
Cache provides only two expiration policies: expireAfterWrite and expireAfterAccess. Neither is suitable for me.
I want a BlockingDeque<Peer> entry to expire 10 minutes after the last invocation of Peer#handleRequest() on any of the Peer objects belonging to that BlockingDeque. In other words, Peer#handleRequest() resets the expiration counter.
I want none of the other methods of a Peer object to reset the counter.
I want peers.get(key) not to reset the counter either.
Example
peers.getIfPresent(key); // doesn't reset counter
peers.getIfPresent(key).add(new Peer()); // doesn't reset counter
peers.getIfPresent(key).remove(peer); // doesn't reset counter
peers.getIfPresent(key).peek().handleRequest(request); // RESETS the counter!
Questions
Is that possible with the help of Guava's Cache, ExpiringMap, MapMaker, or any other Guava map?
If the answer to the first question is no: can I customize one of the Guava classes, so that I don't have to implement everything from scratch?
If the answer to the second question is also no: what is the best way to implement this on my own? At the moment I suppose it will be a ConcurrentHashMap plus a daemon thread that iterates through the whole map every 5-15 seconds and checks whether any entry has expired.
Update: is the following a good solution? handleRequest is an operation that will be performed on every user request, so its performance comes first. There are roughly 10 BlockingDeque objects in the peers cache, and roughly 2 Peer objects per deque.
private final Cache<Long, BlockingDeque<Peer>> peers = CacheBuilder.newBuilder()
        .expireAfterWrite(10, TimeUnit.MINUTES) // CHANGED TO WRITE POLICY
        .build();

public class Peer {
    public void handleRequest(String request) {
        BlockingDeque<Peer> deque = peers.getIfPresent(key);
        peers.invalidate(key);
        peers.put(key, deque);
        // ...
    }
    // ...
}
First a remark: this kind of question smells like an XY problem, see:
https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem
So maybe some background what you really want to achieve would be good.
Taking the question literally, I would do the following:
Use a second cache without expiration for the "don't reset counter" accesses. Add a removal listener to the peers cache, to remove the value from the second cache. Maybe just a HashMap is fine, too. The resource usage is actually controlled by the peers cache.
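That two-structure idea can be sketched with the JDK alone: a plain map for all the "don't reset the counter" accesses, plus a timestamp map that is refreshed only on the handleRequest path, with a periodic sweep evicting idle entries from both. All names below are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the two-structure idea: lastTouched is updated ONLY on the path
// that counts as activity; every other access goes straight to 'deques'.
// A periodic sweep removes entries idle longer than the TTL from both maps.
class PeerRegistry<K, V> {
    private final Map<K, V> deques = new ConcurrentHashMap<>();        // plain map: reads don't touch
    private final Map<K, Long> lastTouched = new ConcurrentHashMap<>(); // "expiring" side
    private final long ttlMillis;

    PeerRegistry(long ttlMillis) { this.ttlMillis = ttlMillis; }

    void put(K key, V value) {
        deques.put(key, value);
        lastTouched.put(key, System.currentTimeMillis());
    }

    V getIfPresent(K key) {
        return deques.get(key); // does NOT reset the counter
    }

    void touch(K key) {
        // call this from Peer#handleRequest only
        lastTouched.computeIfPresent(key, (k, t) -> System.currentTimeMillis());
    }

    void sweep() {
        // run from a scheduled/daemon thread every few seconds
        long now = System.currentTimeMillis();
        lastTouched.entrySet().removeIf(e -> {
            boolean expired = now - e.getValue() >= ttlMillis;
            if (expired) deques.remove(e.getKey());
            return expired;
        });
    }
}
```

With ~10 deques and ~2 peers each, as stated in the question, a sweep every few seconds is essentially free.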
@cruftex's suggestion of using a second cache is fine.
Regarding your updated question, you don't need to invalidate before "updating" the value, just update it:
public class Peer {
public void handleRequest(String request) {
BlockingDeque<Peer> deque = peers.getIfPresent(key);
if (deque != null) {
peers.put(key, deque);
}
//...
}
//....
}
Two things I really like about Guava 11's CacheLoader (thanks, Google!) are loadAll(), which allows me to load multiple keys at once, and reload(), which allows me to reload a key asynchronously when it's "stale" but an old value exists. I'm curious as to how they play together, since reload() operates on but a single key.
Concretely, extending the example from CachesExplained:
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
.maximumSize(1000)
.refreshAfterWrite(1, TimeUnit.MINUTES)
.build(
new CacheLoader<Key, Graph>() {
public Graph load(Key key) { // no checked exception
return getGraphFromDatabase(key);
}
public Map<Key, Graph> loadAll(Iterable<? extends Key> keys) {
return getAllGraphsFromDatabase(keys);
}
public ListenableFuture<Graph> reload(final Key key, Graph prevGraph) {
    if (neverNeedsRefresh(key)) {
        return Futures.immediateFuture(prevGraph);
    } else {
        // asynchronous! note the task must actually be submitted to an
        // executor (some Executor owned by the application), otherwise
        // the returned future never completes
        ListenableFutureTask<Graph> task = ListenableFutureTask.create(new Callable<Graph>() {
            public Graph call() {
                return getGraphFromDatabase(key);
            }
        });
        executor.execute(task);
        return task;
    }
}
});
...where "getAllGraphsFromDatabase()" does an aggregate database query rather than length(keys) individual queries.
How do these two components of a LoadingCache play together? If some keys in my request to getAll() aren't present in the cache, they are loaded as a group with loadAll(), but if some need refreshing, do they get reloaded individually with load()? If so, are there plans to support a reloadAll()?
Here's how refreshing works.
Refreshing on a cache entry can be triggered in two ways:
Explicitly, with cache.refresh(key).
Implicitly, if the cache is configured with refreshAfterWrite and the entry is queried after the specified amount of time after it was written.
If an entry that is eligible for reload is queried, then the old value is returned, and a (possibly asynchronous) refresh is triggered. The cache will continue to return the old value for the key while the refresh is in progress. (So if some keys in a getAll request are eligible for refresh, their old values will be returned, but the values for those keys will be (possibly asynchronously) reloaded.)
The default implementation of CacheLoader.reload(key, oldValue) just returns Futures.immediateFuture(load(key)), which (synchronously) recomputes the value. More sophisticated, asynchronous implementations are recommended if you expect to be doing cache refreshes.
I don't think we're inclined to provide reloadAll at the moment. I suspect it's possible, but things are complicated enough as it is, and I think we're inclined to wait until we see specific demand for such a thing.
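For the curious, the "serve the stale value, refresh in the background" behavior described above can be sketched without Guava using CompletableFuture. This is a single-value sketch with illustrative names, not Guava's implementation:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Minimal refresh-after-write sketch for a single value: reads always return
// the current (possibly stale) value immediately; when a refresh is due, the
// value is recomputed on an executor and swapped in once ready.
class RefreshingValue<V> {
    private final AtomicReference<V> current;
    private final Supplier<V> loader;
    private final Executor executor;
    private final AtomicBoolean refreshing = new AtomicBoolean(false);

    RefreshingValue(V initial, Supplier<V> loader, Executor executor) {
        this.current = new AtomicReference<>(initial);
        this.loader = loader;
        this.executor = executor;
    }

    V get(boolean refreshDue) {
        V old = current.get(); // the old value is returned immediately
        if (refreshDue && refreshing.compareAndSet(false, true)) {
            CompletableFuture
                    .supplyAsync(loader, executor)
                    .whenComplete((fresh, err) -> {
                        if (err == null) current.set(fresh); // swap in the new value
                        refreshing.set(false);
                    });
        }
        return old;
    }
}
```

The AtomicBoolean mirrors what the cache does internally: at most one refresh in flight per entry, while all concurrent readers keep getting the old value.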
I'm trying to implement a write-back cache. I'm trying to use soft references, but I'm having trouble performing the post-mortem write-back, because the reference is cleared before it's added to the gcQueue and thus I don't have access to the referent object.
Solutions?
You can try Guava's MapMaker.
Example:
final ConcurrentMap<Long, Integer> cache = new MapMaker()
        .softValues().expiration(20, TimeUnit.MINUTES)
        .makeComputingMap(new Function<Long, Integer>() {
            @Override
            public Integer apply(Long key) {
                // compute the value to cache here; a computing map
                // must not return null (computeValue is your own loader)
                return computeValue(key);
            }
        });
SO questions on MapMaker:
Use of Google-collections MapMaker?
Using MapMaker to create a cache
Alternative option:
Use the Suppliers class's memoizeWithExpiration, which is also part of the Guava library.
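What Suppliers.memoizeWithExpiration does can be sketched in a few lines of plain JDK code, for readers who want to see the idea without pulling in Guava (names below are illustrative):

```java
import java.util.function.Supplier;

// JDK-only sketch of memoize-with-expiration: the delegate is called at most
// once per TTL window; in between, the cached value is returned.
class MemoizingSupplier<T> implements Supplier<T> {
    private final Supplier<T> delegate;
    private final long ttlNanos;
    private T value;
    private long expiresAtNanos;
    private boolean computed = false;

    MemoizingSupplier(Supplier<T> delegate, long ttlNanos) {
        this.delegate = delegate;
        this.ttlNanos = ttlNanos;
    }

    @Override
    public synchronized T get() {
        long now = System.nanoTime();
        if (!computed || now - expiresAtNanos >= 0) {
            value = delegate.get(); // (re)compute on first call or after expiry
            expiresAtNanos = now + ttlNanos;
            computed = true;
        }
        return value;
    }
}
```

Note this caches a single value rather than a map of entries, which is exactly the niche Suppliers.memoizeWithExpiration fills.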