New to Caffeine and I'm clearly missing something super fundamental. All the basic usage examples I see look like this:
LoadingCache<String,Fizzbuzz> fizzbuzzes = Caffeine.newBuilder()
.maximumSize(100)
.expireAfterWrite(10, TimeUnit.DAYS)
.build(k -> fetchFizzbuzzes());
What I'm struggling with is the role that the fetchFizzbuzzes() function plays:
Is it used to populate the initial cache?
Is it used as some sort of fallback in case a key doesn't exist in the cache?
Something else?
Actually this is the most important part of the builder. The function passed to the build(CacheLoader) method takes a key and computes the value for that key. It is called when there is currently no value for the key in the cache, and the computed value is added to the cache afterwards. There is also a no-argument build() method, which you can use if you want to check for elements in the cache and add them manually yourself.
Your example, however, doesn't make much sense, because the fetchFizzbuzzes() method takes no arguments. That is, barring side effects, it will return the same value for all keys k.
Take the examples below:
LoadingCache<String,Fizzbuzz> fizzbuzzes = Caffeine.newBuilder()
.maximumSize(100)
.expireAfterWrite(10, TimeUnit.DAYS)
.build(key -> fetchFizzbuzzes(key));
fizzbuzzes.get("myKey"); // will automatically invoke fetchFizzbuzzes with 'myKey' as argument
fizzbuzzes.get("myKey"); // fetchFizzbuzzes won't be called as return value from previous call is added to the cache automatically
Cache<String, Fizzbuzz> fizzbuzzesManual = Caffeine.newBuilder()
.maximumSize(100)
.expireAfterWrite(10, TimeUnit.DAYS)
.build();
fizzbuzzesManual.getIfPresent("myKey"); // will return null as no element for 'myKey' was added to the cache before
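For completeness, manual population would look something like this (a minimal sketch; it assumes Fizzbuzz has a no-arg constructor, which is hypothetical):
fizzbuzzesManual.put("myKey", new Fizzbuzz()); // add the entry manually (hypothetical constructor)
fizzbuzzesManual.getIfPresent("myKey");        // now returns the Fizzbuzz added above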
See the Caffeine wiki for additional details.
I'm relatively new to Webflux and am trying to zip two objects together in order to return a tuple, which is then being used to build an object in the following step.
I'm doing it like so:
// I've shortened my code quite a bit; I'm including only the pieces
// that are involved in my problematic "Flux.zip" call.
// This is a repository that is used in my "problem" code. It is simply an
// interface which extends ReactiveCrudRepository from Spring Data.
MyRepository repo;
//wiring in my repository...
public MyClass(MyRepository repo) {
this.repo = repo;
}
//Below is later in my reactive chain
// Starting halfway down the chain, we have a Flux of ObjectA
// (flatMapMany returning Flux<ObjectA>)
//Problem code below...
// Some context: I am zipping ObjectA with a reference to an object I am
// saving. The object being saved comes from earlier in the chain and is
// stored in an AtomicReference<ObjectB>.
.flatMap(obj ->
Flux.zip(Mono.just(obj), repo.save(atomicReferenceFromEarlier.get()))
//Below, when calling "getId()" it logs the SAME ID FOR EACH OBJECT,
//even though I want it to return EACH OBJECT'S ID THAT WAS SAVED.
.map(myTuple2 -> log("I've saved this object {} ::", myTuple2.getT2().getId())))
//Further processing....
So, my ultimate issue is that the "second" parameter I'm zipping, repo.save(atomicReferenceFromEarlier.get()), is the same for every "zipped" tuple.
In the following step, I'm logging something like "I'm now building object", just to see what object was returned for each event, but the "second" object in my tuple is always the same.
How can I zip and ensure that the "save" call to the repo returns a unique object for each event in the flux?
However, when I check the database, I really have saved unique entities for each event in my flux. So the save is happening as expected, but the Mono returned by the repo is the same for each tuple.
Please let me know if I should clarify something if anything is unclear. Thank you in advance for any and all help.
I would like to transform the Mono from the WebClient response into a Map entry with the input as the key.
I am executing a batch of REST calls in parallel using WebClient, but instead of returning a list of Users I would like to return a HashMap with the ID as the key and the User returned from the REST call as the value.
I don't want to block every individual call to get the value before I add to the HashMap.
Is there a way I can transform the result from WebClient into HashMap entry without impacting the parallel execution of the REST calls?
I tried the doOnSuccess callback on Mono, but I'm not sure that's really the right way to do it.
Current Implementation
public List<User> fetchUsers(List<Integer> ids) {
List<Mono<User>> userMonos = new ArrayList<>();
for (int id : ids) {
userMonos.add(webClient.get()
.uri("/otheruser/{id}", id)
.retrieve()
.bodyToMono(User.class));
}
List<User> list = Flux.merge(userMonos).collectList().block();
return list;
}
So the expected output is:
HashMap<Integer, User>
I apologize if I wasn't able to express the expected result appropriately. Feel free to let me know if I need to add more detail or add more clarity to the question.
I would really appreciate some help with this. I am also trying to keep looking for a solution in the meantime.
You are mixing imperative code with reactive code. You have to pick one approach and stick to it.
If you want the actual values and not a Mono or Flux, you MUST block. Think of it as a Future: there is no "value" there until we wait for the value to show up. So blocking is the ONLY way.
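For instance, with a plain Future (a hypothetical snippet just to illustrate the analogy; the executor and fetchUser() are made up, and exception handling is omitted):
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<User> future = executor.submit(() -> fetchUser(42)); // returns immediately, no value yet
User user = future.get(); // blocks until the value shows up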
If I understand your code correctly, I would do the following.
public Map<Integer, User> fetchUsers(List<Integer> ids) {
    final Map<Integer, User> userMap = new HashMap<>();
    return Flux.fromIterable(ids)
        .flatMap(id -> webClient.get()
            .uri("/otheruser/{id}", id)
            .retrieve()
            .bodyToMono(User.class)
            .doOnSuccess(user -> userMap.put(id, user)))
        .then(Mono.just(userMap)) // Flux has no thenReturn, so use then(Mono.just(...))
        .block();
}
So what does this code do?
It takes a list of ids and places them into a Flux. The Flux will asynchronously start all the requests at the same time, since we are using flatMap. As each request finishes, we add its value to the HashMap as a side effect. Since we don't care about the returned values themselves, we use then to silently ignore them and tell it to return the HashMap instead. Lastly, we call block to make the code actually run, wait for all the requests to finish, and produce the final HashMap.
I have written this on mobile, so I can't check it against a compiler, but something like this should get you started. If someone sees any errors, feel free to edit.
If possible, it's best to avoid modifying external state from side-effect operators like doOnSuccess. For example, in this particular case it could cause concurrency issues if the external Map is not thread-safe.
As a better alternative you can collect to a Map using Reactor operator:
public Map<Integer, User> fetchUsers(List<Integer> ids) {
return Flux.fromIterable(ids)
.flatMap(id -> webClient.get()
.uri("/otheruser/{id}", id)
.retrieve()
.bodyToMono(User.class)
.map(user -> Tuples.of(id, user)))
.collectMap(Tuple2::getT1, Tuple2::getT2)
.block();
}
Instead of Tuple you might create a small class to improve readability. Or even better, if the User knows its ID, then you can omit Tuple completely and you can do something like .collectMap(User::getId, user -> user).
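For illustration, that variant would look something like this (a sketch assuming User exposes a getId() accessor, which is not shown in the question):
public Map<Integer, User> fetchUsers(List<Integer> ids) {
    return Flux.fromIterable(ids)
        .flatMap(id -> webClient.get()
            .uri("/otheruser/{id}", id)
            .retrieve()
            .bodyToMono(User.class))
        .collectMap(User::getId, user -> user) // no Tuple needed
        .block();
}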
I'm trying to write an AsyncLoadingCache that accepts a CacheWriter, and I'm getting an IllegalStateException.
Here's my code:
CacheWriter<String, UUID> cacheWriter = new CacheWriter<String, UUID>() {
@Override
public void write(String key, UUID value) {
}
@Override
public void delete(String key, UUID value, RemovalCause cause) {
}
};
AsyncLoadingCache<String, UUID> asyncCache = Caffeine.newBuilder()
.expireAfterWrite(60, TimeUnit.SECONDS)
.writer(cacheWriter)
.maximumSize(100L)
.buildAsync((String s) -> { /* <== line 41, exception occurs here */
return UUID.randomUUID();
});
And I'm getting this trace
Exception in thread "main" java.lang.IllegalStateException
at com.github.benmanes.caffeine.cache.Caffeine.requireState(Caffeine.java:174)
at com.github.benmanes.caffeine.cache.Caffeine.buildAsync(Caffeine.java:854)
at com.mycompany.caffeinetest.Main.main(Main.java:41)
If I change the cache to a LoadingCache or remove .writer(cacheWriter), the code runs properly. What am I doing wrong? It seems I'm providing the right types to both objects.
Unfortunately these two features are incompatible. While the documentation states this, I have updated the exception to communicate this better. In Caffeine.writer it states,
This feature cannot be used in conjunction with {@link #weakKeys()} or {@link #buildAsync}.
A CacheWriter is a synchronous interceptor for a mutation of an entry. For example, it might be used to evict into a disk cache as a secondary layer, whereas a RemovalListener is asynchronous, and using it would leave a race where the entry is not present in either cache. The mechanism is to use ConcurrentHashMap's compute methods to perform the write or removal, and to call into the CacheWriter within that block.
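To illustrate the idea (a rough sketch of the mechanism only, not Caffeine's actual internals):
// The writer runs inside the ConcurrentHashMap compute block, so the
// secondary store is updated atomically with the in-memory mapping.
ConcurrentHashMap<String, UUID> data = new ConcurrentHashMap<>();
data.compute("someKey", (key, oldValue) -> {
    UUID newValue = UUID.randomUUID(); // stand-in for loading/computing the value
    cacheWriter.write(key, newValue);  // synchronous interception before the mapping is published
    return newValue;
});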
In an AsyncLoadingCache, the value materializes later, when the CompletableFuture completes successfully, or the entry is automatically removed if the future yields null or an error. When the entry is modified within the hash table, this future may still be in-flight. This would mean that the CacheWriter would often be called without the materialized value and likely could not do anything very intelligent.
From an API perspective, telescoping builders (which use the type system to disallow incompatible chains) unfortunately become more confusing than runtime exceptions. Sorry for not making the error clear; that should now be fixed.
This question is to validate an observed behavior to ensure Guava Cache is used in correct way.
I have set up two Guava caches (see code below), one with and one without a CacheLoader. As the Guava documentation states:
Caches built with CacheBuilder do not perform cleanup and evict values "automatically," or instantly after a value expires, or anything of the sort.
It appears that expiration is only observed when the getIfPresent() method is used: when a key is queried after more than the expiry interval has passed since the key/value was written, null is returned. For a cache built with a CacheLoader, using get() or getUnchecked() causes CacheLoader.load() to be executed, so expiry is not observed, i.e. null is never returned.
Is this the correct expectation?
Thank you for your patience and help.
// excerpt from test code
private static final FakeTicker fakeTicker = new FakeTicker();
private static LoadingCache<Integer, String> usingCacheLoader = CacheBuilder.newBuilder()
.expireAfterWrite(2, TimeUnit.MINUTES)
.ticker(fakeTicker)
.build(new CacheLoader<Integer, String>() {
public String load(Integer keyName) throws Exception {
logger.info("Getting value for key: {}", keyName);
return getValue(keyName, "with_cache_loader");
}
});
private static Cache<Integer, String> withoutCacheLoader = CacheBuilder.newBuilder()
.expireAfterWrite(2, TimeUnit.MINUTES)
.ticker(fakeTicker)
.build();
It is true that if you call get or getUnchecked you will never get null.
The expiration can be "observed" both in terms of performance (how long a get for a specific key takes, and whether the value has to be freshly computed) and in whether the value you get back reflects possibly out-of-date information.
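To make that concrete, here is a sketch using the two caches and the FakeTicker (com.google.common.testing.FakeTicker from guava-testlib) defined in the question:
withoutCacheLoader.put(1, "value1");
fakeTicker.advance(3, TimeUnit.MINUTES); // move past the 2-minute expireAfterWrite
withoutCacheLoader.getIfPresent(1);      // returns null: expiry is directly observable

usingCacheLoader.getUnchecked(1);        // loads and caches the value
fakeTicker.advance(3, TimeUnit.MINUTES);
usingCacheLoader.getUnchecked(1);        // never null: CacheLoader.load() simply runs again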
Two things I really like about Guava 11's CacheLoader (thanks, Google!) are loadAll(), which allows me to load multiple keys at once, and reload(), which allows me to reload a key asynchronously when it's "stale" but an old value exists. I'm curious as to how they play together, since reload() operates on but a single key.
Concretely, extending the example from CachesExplained:
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
.maximumSize(1000)
.refreshAfterWrite(1, TimeUnit.MINUTES)
.build(
new CacheLoader<Key, Graph>() {
public Graph load(Key key) { // no checked exception
return getGraphFromDatabase(key);
}
public Map<Key, Graph> loadAll(Iterable<? extends Key> keys) {
return getAllGraphsFromDatabase(keys);
}
public ListenableFuture<Graph> reload(final Key key, Graph prevGraph) {
if (neverNeedsRefresh(key)) {
return Futures.immediateFuture(prevGraph);
} else {
// asynchronous! note the task must also be submitted to an executor,
// or the returned future will never complete (this assumes some Executor
// named 'executor' is available, as in the CachesExplained example)
ListenableFutureTask<Graph> task = ListenableFutureTask.create(new Callable<Graph>() {
  public Graph call() {
    return getGraphFromDatabase(key);
  }
});
executor.execute(task);
return task;
}
}
});
...where "getAllGraphsFromDatabase()" does an aggregate database query rather than length(keys) individual queries.
How do these two components of a LoadingCache play together? If some keys in my request to getAll() aren't present in the cache, they are loaded as a group with loadAll(), but if some need refreshing, do they get reloaded individually with load()? If so, are there plans to support a reloadAll()?
Here's how refreshing works.
Refreshing on a cache entry can be triggered in two ways:
Explicitly, with cache.refresh(key).
Implicitly, if the cache is configured with refreshAfterWrite and the entry is queried after the specified amount of time after it was written.
If an entry that is eligible for reload is queried, then the old value is returned, and a (possibly asynchronous) refresh is triggered. The cache will continue to return the old value for the key while the refresh is in progress. (So if some keys in a getAll request are eligible for refresh, their old values will be returned, but the values for those keys will be (possibly asynchronously) reloaded.)
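In code terms (a sketch against the graphs cache from the question):
Graph g1 = graphs.getUnchecked(key);  // first access: loads synchronously via load(key)
// ...query again after refreshAfterWrite's one minute has elapsed...
Graph g2 = graphs.getUnchecked(key);  // returns the OLD value immediately and triggers reload(key, g1)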
The default implementation of CacheLoader.reload(key, oldValue) just returns Futures.immediateFuture(load(key)), which (synchronously) recomputes the value. More sophisticated, asynchronous implementations are recommended if you expect to be doing cache refreshes.
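For reference, the default implementation is essentially the following (paraphrased from CacheLoader):
public ListenableFuture<V> reload(K key, V oldValue) throws Exception {
    return Futures.immediateFuture(load(key));
}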
I don't think we're inclined to provide reloadAll at the moment. I suspect it's possible, but things are complicated enough as it is, and we're inclined to wait until we see specific demand for such a thing.