Implementing a Write-Back Cache in Java

I'm trying to implement a write-back cache using soft references, but I'm having trouble performing the post-mortem write-back: the reference is cleared before it's added to the gcQueue, so I no longer have access to the referent object.
Solutions?
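To illustrate what I mean (the class and field names here are made up), the only thing still reachable when the queue hands me the reference is whatever I stored in a strong field on the reference subclass itself, such as the key; the referent value is already gone:
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;

// Sketch of the situation: the key survives clearing because it is held in an
// ordinary strong field on the reference object; the referent value does not,
// so get() returns null by the time the reference is dequeued.
class CacheReference<K, V> extends SoftReference<V> {
    final K key;

    CacheReference(K key, V value, ReferenceQueue<V> queue) {
        super(value, queue);
        this.key = key;
    }
}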

You can try Guava's MapMaker.
Example:
final ConcurrentMap<Long, Integer> cache = new MapMaker()
        .softValues()
        .expiration(20, TimeUnit.MINUTES)
        .makeComputingMap(new Function<Long, Integer>() {
            @Override
            public Integer apply(Long key) {
                // replace with the real computation; a computing map must not return null
                return key.intValue();
            }
        });
SO questions on MapMaker:
Use of Google-collections MapMaker?
Using MapMaker to create a cache
Alternative option:
Use Suppliers.memoizeWithExpiration, which is also part of the Guava library.
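A minimal sketch of that alternative (expensiveLookup is a hypothetical stand-in for a costly call): the supplier's result is cached and recomputed at most once per expiration window.
import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;
import java.util.concurrent.TimeUnit;

// Caches the supplier's value for 20 minutes; the next get() after expiry recomputes it.
Supplier<Integer> cached = Suppliers.memoizeWithExpiration(
        new Supplier<Integer>() {
            @Override
            public Integer get() {
                return expensiveLookup(); // hypothetical expensive computation
            }
        },
        20, TimeUnit.MINUTES);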


Passing in a value that does not match method signature

I am using Apache Ignite, which isn't really central to the question, but gives background. In that context, I've created a class extending CacheStoreAdapter that has a method with the following signature:
@Override
public void write(Entry<? extends K, ? extends V> cacheEntry) throws CacheWriterException {
I registered that class with Ignite, so that it calls the write() method whenever I give it data to save in its cache.
What I was surprised to find is that, depending on how Ignite is otherwise configured, the following code...
final V cacheObject = cacheEntry.getValue();
LOG.info("cacheObject = " + ToStringBuilder.reflectionToString(cacheObject));
... outputs the following:
cacheObject = org.apache.ignite.internal.binary.BinaryObjectImpl@7c40ffef[ctx=org.apac
That is, the cacheObject taken from an Entry<? extends K, ? extends V> is not an instance of type V!
I've worked around the issue (as I said it only happens depending on how Ignite is otherwise configured), but I am curious how this is even done in Java.
TL;DR Question:
How is it possible to pass a variable to a method that does not conform to the method's signature? Some kind of reflection technique? Is there a common / legitimate use for doing this?
In Java, type parameters are optional: they are not carried along with an object instance and only exist at the language level.
So you can always cast anything through a raw type and then call any methods with the type checks erased:
Map<String, Integer> sim = new HashMap<>();
Map<Object, Object> oom = (Map<Object, Object>) (Map) sim; // unchecked, but compiles and runs
As for BinaryObjectImpl, Ignite will try to keep objects in serialized state where possible to save on serialization costs. So you should be aware that type parameters of CacheStore are not always the user-facing types.
It is possible that the caller of your implementation of write creates an instance of the raw type Map.Entry. For example:
Entry entry = new Entry() { /* ... */ };
...
cache.write(entry);
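To make the mechanism concrete, here is a self-contained sketch (the class and entry contents are made up for illustration): the unchecked cast compiles, the mismatched value reaches the generic method, and nothing fails until code actually uses the value as its declared type.
import java.util.AbstractMap;
import java.util.Map;

public class ErasureDemo {
    // The signature promises Integer values, but erasure means nothing verifies that at runtime.
    static void write(Map.Entry<? extends String, ? extends Integer> entry) {
        Object raw = entry.getValue();
        System.out.println("value is actually a " + raw.getClass().getName());
    }

    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        Map.Entry<String, Integer> entry =
                (Map.Entry) new AbstractMap.SimpleEntry<String, Object>("key", "not an int");
        write(entry); // compiles and runs; prints java.lang.String
    }
}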

How to extend functionality of expiry in ehCache 3.0

I am using Ehcache core 3.0. It internally uses the BaseExpiry and Eh107Expiry classes to check whether a cache entry has expired; both implement the Expiry interface. My question is: can I extend the methods used to check for expiry? I don't want the contents of the cache to expire, even after the time has elapsed, while one of my methods is still using data from that cache.
Have a look at the dedicated section on Expiry in the documentation. It will help you understand what you can do and how to do it.
If that does not help you, please expand your question as suggested in comments.
If you add time-to-idle in XML, or override getExpiryForAccess from the Expiry interface, then your entries will not be deleted while you are accessing them. Below is code to build an Ehcache cache with a custom Expiry; this blog post explains the other properties.
CacheConfigurationBuilder<Integer, String> cacheConfigurationBuilder =
        CacheConfigurationBuilder.newCacheConfigurationBuilder();
cacheConfigurationBuilder.withExpiry(new Expiry() {
    @Override
    public Duration getExpiryForCreation(Object key, Object value) {
        return new Duration(120, TimeUnit.SECONDS);
    }

    @Override
    public Duration getExpiryForAccess(Object key, Object value) {
        // returning a fresh duration on each access keeps entries alive while they are used
        return new Duration(120, TimeUnit.SECONDS);
    }

    @Override
    public Duration getExpiryForUpdate(Object key, Object oldValue, Object newValue) {
        return null; // null means "leave the existing expiry unchanged"
    }
})
.usingEvictionPrioritizer(Eviction.Prioritizer.LFU)
.withResourcePools(ResourcePoolsBuilder.newResourcePoolsBuilder().heap(200, EntryUnit.ENTRIES))
// adding a default serializer config service to the configuration
.add(new DefaultSerializerConfiguration(CompactJavaSerializer.class, SerializerConfiguration.Type.KEY))
.buildConfig(Integer.class, String.class);
I guess you can use an Ehcache decorator and reimplement isExpired to add your own conditions. Please refer to https://www.ehcache.org/documentation/2.8/apis/cache-decorators.html (note that this documentation covers Ehcache 2.x).

Inject a synchronizedMultiMap with Guava & Spring?

The documentation for using Guava's HashMultimap stresses the importance of wrapping your multimap in Multimaps.synchronizedMultimap upon initialization for thread-safe access. Given that, I know I can create the following multimap:
private Multimap<Short, String> successfulMultimap =
Multimaps.synchronizedMultimap(HashMultimap.<Short, String>create());
However, my multimap needs to be injected using Spring because it will be used by another class on my service.
Without the synchronized wrapper, I know I can use something along these lines:
// setter
public void setSuccessfulMultimap(Multimap<Short, String> successfulMultimap) {
    this.successfulMultimap = successfulMultimap;
}
<!-- XML configuration -->
<bean id="myBean" factory-method="create" class="com.google.common.collect.HashMultimap"/>
But seeing as I need to initialize it as thread-safe, I'm lost on how to "spring"-ify it. Can someone show me how to inject a synchronized multimap, or suggest a good approach to it?
You should be able to put the appropriate code in the Spring setter method:
// setter
public void setSuccessfulMultimap(Multimap<Short, String> value) {
    this.successfulMultimap = Multimaps.synchronizedMultimap(value);
}
Since it is set after object construction, you may also want to make the successfulMultimap member volatile to ensure the initialization is visible to other threads.
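For example, a sketch of the matching field declaration to go with the setter above:
// Volatile ensures other threads see the fully initialized synchronized
// wrapper once the setter has run.
private volatile Multimap<Short, String> successfulMultimap;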

Query Results caching using Java : Any better Approaches?

I have a class ClassA which calls another class (a DAO) to fetch query results. In a specific business scenario,
ClassA invokes the DAO with query parameters about 20,000 times.
Of those, about 10,000 calls send the same set of query parameters to the DAO. Obviously the result set will be the same and can be cached.
The following is the code I implemented.
class ClassA {
    // ...

    private Map<String, CachData> cachDataMap = new HashMap<String, CachData>();

    private CachData getQueryResults(String queryParam) {
        CachData cachData = null;
        try {
            if (!cachDataMap.containsKey(queryParam)) {
                dao.getResults(queryParam);
                cachData = new CachData();
                cachData.setResult0(__getStringResult(0));
                cachData.setResult1(__getStringResult(1));
                cachData.setResult2(__getStringResult(2));
                cachData.setResult3(__getStringResult(3));
                cachData.setResult4(__getStringResult(4));
                cachData.setResult5(__getStringResult(5));
                cachDataMap.put(queryParam, cachData);
            } else {
                cachData = cachDataMap.get(queryParam);
            }
        } catch (Exception e) {
            // handle here
        }
        return cachData;
    }
}
Is there a better solution that doesn't use a framework? A better data structure or method, for better performance?
You could use Ehcache.
Whatever you do, don't use a Map as your cache interface. A good cache interface allows the implementation to clean up the cache; the Map contract won't allow this.
Depending on the implementation, cleanup can be based on time in the cache, usage statistics, memory availability, and so on.
The Map approach you're using here seems prone to running out of memory over a longer period of usage.
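If you do want to stay framework-free, one common way to address the memory concern (a sketch, not a drop-in replacement for the code above) is to bound the map with LinkedHashMap's removeEldestEntry hook, so the cache evicts its least-recently-used entry instead of growing without limit:
import java.util.LinkedHashMap;
import java.util.Map;

// A size-bounded LRU cache: an access-ordered LinkedHashMap that evicts the
// least-recently-used entry once maxEntries is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    LruCache(int maxEntries) {
        super(16, 0.75f, true); // true = access order, needed for LRU behavior
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}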
You could use the Table from the Guava library, or use Ehcache to save the object as-is with the query as the key.

Google Guava's CacheLoader loadAll() vs reload() semantics

Two things I really like about Guava 11's CacheLoader (thanks, Google!) are loadAll(), which allows me to load multiple keys at once, and reload(), which allows me to reload a key asynchronously when it's "stale" but an old value exists. I'm curious as to how they play together, since reload() operates on but a single key.
Concretely, extending the example from CachesExplained:
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
    .maximumSize(1000)
    .refreshAfterWrite(1, TimeUnit.MINUTES)
    .build(
        new CacheLoader<Key, Graph>() {
            public Graph load(Key key) { // no checked exception
                return getGraphFromDatabase(key);
            }

            public Map<Key, Graph> loadAll(Iterable<? extends Key> keys) {
                return getAllGraphsFromDatabase(keys);
            }

            public ListenableFuture<Graph> reload(final Key key, Graph prevGraph) {
                if (neverNeedsRefresh(key)) {
                    return Futures.immediateFuture(prevGraph);
                } else {
                    // asynchronous! (the task must be submitted to an executor, or it never runs;
                    // 'executor' is assumed to be an existing Executor field)
                    ListenableFutureTask<Graph> task =
                        ListenableFutureTask.create(new Callable<Graph>() {
                            public Graph call() {
                                return getGraphFromDatabase(key);
                            }
                        });
                    executor.execute(task);
                    return task;
                }
            }
        });
...where "getAllGraphsFromDatabase()" does an aggregate database query rather than length(keys) individual queries.
How do these two components of a LoadingCache play together? If some keys in my request to getAll() aren't present in the cache, they are loaded as a group with loadAll(), but if some need refreshing, do they get reloaded individually with load()? If so, are there plans to support a reloadAll()?
Here's how refreshing works.
Refreshing on a cache entry can be triggered in two ways:
Explicitly, with cache.refresh(key).
Implicitly, if the cache is configured with refreshAfterWrite and the entry is queried after the specified amount of time after it was written.
If an entry that is eligible for reload is queried, then the old value is returned, and a (possibly asynchronous) refresh is triggered. The cache will continue to return the old value for the key while the refresh is in progress. (So if some keys in a getAll request are eligible for refresh, their old values will be returned, but the values for those keys will be (possibly asynchronously) reloaded.)
The default implementation of CacheLoader.reload(key, oldValue) just returns Futures.immediateFuture(load(key)), which (synchronously) recomputes the value. More sophisticated, asynchronous implementations are recommended if you expect to be doing cache refreshes.
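For reference, the default reload described above behaves like this sketch (a paraphrase of the behavior, not the exact Guava source):
public ListenableFuture<V> reload(K key, V oldValue) throws Exception {
    // Synchronous: the value is recomputed on the calling thread,
    // then wrapped in an already-completed future.
    return Futures.immediateFuture(load(key));
}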
I don't think we're inclined to provide reloadAll at the moment. I suspect it's possible, but things are complicated enough as it is, and I think we're inclined to wait until we see specific demand for such a thing.
