How to create a CaffeineCache object using Java?

I am trying to use the Caffeine cache. How do I create the object for the Caffeine cache using Java? I am not using Spring in my project for now.

Based on Caffeine's official repository and wiki, Caffeine is a high-performance, Java 8-based caching library providing a near-optimal hit rate. It is inspired by Google Guava.
Because Caffeine is an in-memory cache, it is quite simple to instantiate a cache object.
For example:
import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
        .maximumSize(10_000)
        .expireAfterWrite(5, TimeUnit.MINUTES)
        .refreshAfterWrite(1, TimeUnit.MINUTES)
        .build(key -> createExpensiveGraph(key));
Look up an entry, getting null if not found:
Graph graph = graphs.getIfPresent(key);
Look up an entry, computing it if absent (null if not computable):
graph = graphs.get(key, k -> createExpensiveGraph(k));
Note: createExpensiveGraph(key) may be a DB getter or an actual graph computation.
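Since the cache above was built with a loader, a plain get(key) is also enough; Caffeine invokes the loader itself on a miss:
// LoadingCache was built with a CacheLoader, so get(key) calls
// createExpensiveGraph(key) automatically when the entry is missing.
graph = graphs.get(key);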
Insert or update an entry:
graphs.put(key, graph);
Remove an entry:
graphs.invalidate(key);
EDIT:
Thanks to @BenManes's suggestion, I'm adding the dependency.
Edit your pom.xml and add:
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
    <version>1.0.0</version>
</dependency>

Related

Add a Mapper to cache with java api

I want to add a Mapper to the cache with the Java API in MyBatis. I tried the approach below, adding the Mapper's namespace to the caches, but the statements of the Mapper are still not cached. Is there any way to add Mapper statements to the cache?
...
Cache cache = new CacheBuilder("package.name.SampleEntity")
        .implementation(PerpetualCache.class)
        .addDecorator(LruCache.class)
        .clearInterval(null)
        .size(null)
        .readWrite(true)
        .blocking(false)
        .properties(new Properties())
        .build();
sqlSessionFactory.getConfiguration().addCache(cache);
...

How should one migrate from the removed @Depth annotation in Spring Data Neo4j 6?

Since spring-data-neo4j 6.0, the @Depth annotation for query methods has been removed (DATAGRAPH-1333, commit).
How would one migrate existing 5.3 code which uses the annotation to 6.0? There is no mention of it in the migration guide.
Example usage, documented in the 5.3.6.RELEASE reference:
public interface MovieRepo extends Neo4jRepository<Movie, Long> {

    @Depth(1) // Default: load simple properties and immediately-related objects
    Optional<Movie> findById(Long id);

    @Depth(0) // Load simple properties only
    Optional<Movie> findByProperty1(String property1);

    @Depth(2) // Load simple properties, immediately-related objects and their immediately-related objects
    Optional<Movie> findByProperty2(String property2);

    @Depth(-1) // Load the whole relationship graph
    Optional<Movie> findByProperty3(String property3);
}
Are custom queries the only option or is there a replacement?
There is no custom depth anymore in SDN. It either loads everything that is described in your Java model, or you have to supply custom Cypher statements.
Some background on this: with SDN 6 we dropped the internal session cache completely, because we want to ensure that the Java object graph is in sync with the database graph after loading and persisting. As a consequence, we cannot track a custom depth over multiple operations anymore.
A partially loaded graph now reflects the truth of the Java model, and when it gets persisted it might remove existing (but not loaded) relationships.
Some insights can be found in the documentation section on query creation: https://docs.spring.io/spring-data/neo4j/docs/current/reference/html/#query-creation
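As a rough illustration of the custom-Cypher route, here is a sketch of what replacing @Depth(0) from the example above might look like (the query and parameter names are assumptions based on that example, not an official migration recipe):
import java.util.Optional;

import org.springframework.data.neo4j.repository.Neo4jRepository;
import org.springframework.data.neo4j.repository.query.Query;
import org.springframework.data.repository.query.Param;

public interface MovieRepo extends Neo4jRepository<Movie, Long> {

    // Roughly emulates the old @Depth(0): return only the Movie node itself,
    // without any of its relationships.
    @Query("MATCH (m:Movie) WHERE m.property1 = $property1 RETURN m")
    Optional<Movie> findByProperty1(@Param("property1") String property1);
}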

Spring Elasticsearch: how to save a Map as a string in the index?

I'm facing an indexing issue.
In my Spring document, I have a map. This map can contain thousands of entries, because I save history in it.
private NavigableMap<String, Integer> installHistory = new TreeMap<>();
In Elasticsearch, all the data in my map gets indexed, and I get a "limit exceeded" error.
How can I avoid indexing all the data inside the map?
I use Spring 2.2 and Spring Data Elasticsearch 3.2.4.
Thanks in advance.
Edit:
I upgraded to Spring Data Elasticsearch 4.0.1 to use FieldType.Flattened, but Spring Data Elasticsearch 4.0.1 supports Elasticsearch 7.6.x as its minimum version. My version is 7.4, and I can't change it because it is the latest version provided by AWS.
I made the field transient and created a String property for this map. Before saving my object, I convert the map into a list and put it in the String variable.
A Map is converted to a JSON object that has the map keys as properties and the map values as values, so you end up storing objects with thousands of properties; see the Elasticsearch documentation about this.
You could declare the type of installHistory as FieldType.Flattened, as sketched below.
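A minimal sketch, assuming Spring Data Elasticsearch 4.x (the document class name is hypothetical; the field comes from the question):
import java.util.NavigableMap;
import java.util.TreeMap;

import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;

public class InstallDocument {

    // Flattened maps the whole map as one field, so its keys no longer
    // count against the index's field limit.
    @Field(type = FieldType.Flattened)
    private NavigableMap<String, Integer> installHistory = new TreeMap<>();
}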
Edit:
I missed that you are using Spring Data Elasticsearch 3.2.x. Support for the flattened field type was only added in 4.0.
The best thing then would probably be to convert the Map property to a List of pairs or tuples, where each pair contains one key-value pair from the map.
Did you try the custom converters described in the documentation?
https://docs.spring.io/spring-data/elasticsearch/docs/current/reference/html/#elasticsearch.mapping.meta-model.conversions
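For illustration, a sketch of such a writing converter (the class name and the encoding are made up; it would be registered through an ElasticsearchCustomConversions bean, as the linked section describes):
import java.util.NavigableMap;

import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.WritingConverter;

// Illustrative only: flatten the map into a single "key=value;" string so
// Elasticsearch stores one field instead of one property per map key.
@WritingConverter
public class InstallHistoryToStringConverter
        implements Converter<NavigableMap<String, Integer>, String> {

    @Override
    public String convert(NavigableMap<String, Integer> source) {
        StringBuilder sb = new StringBuilder();
        source.forEach((k, v) -> sb.append(k).append('=').append(v).append(';'));
        return sb.toString();
    }
}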

Invalidate entire namespace using Simple Spring Memcached

Has anyone tried invalidating an entire memcached namespace?
For example, I have two read methods with different keys:
@Override
@ReadThroughSingleCache(namespace = UrlClientExclusion.TABLE_NAME, expiration = 24 * HOUR)
public List<UrlClientExclusion> list(@ParameterValueKeyProvider String idClient) {

@Override
@ReadThroughSingleCache(namespace = UrlClientExclusion.TABLE_NAME, expiration = 24 * HOUR)
public UrlClientExclusion find(@ParameterValueKeyProvider int idUrlClientExclusion) {
and I want to delete the entire UrlClientExclusion.TABLE_NAME namespace on update/delete operations.
I can't use a key-list method, because there are many instances of the app:
@InvalidateMultiCache(namespace = UrlClientExclusion.TABLE_NAME)
public int update(UrlClientExclusion urlClientExclusion, /*@ParameterValueKeyProvider List<Object> keylist*/ /* it is impossible for me to put all keys in this list */) {
So the solution is to delete the entire namespace.
What is the annotation to do this?
Is it possible to build a custom annotation that deletes an entire namespace? How?
Many thanks
Memcached doesn't support namespaces; SSM provides namespaces as a logical abstraction. It is not possible to flush all keys from a given namespace, because memcached doesn't group keys into namespaces. Memcached supports only flushing/removing a single key or all keys.
You can flush all of your data from the memcached instance, or you need to provide the exact keys that should be removed.
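For completeness, a minimal sketch of the flush-everything option, assuming the spymemcached client (host and port are placeholders):
import java.io.IOException;
import java.net.InetSocketAddress;

import net.spy.memcached.MemcachedClient;

public class FlushAllExample {

    public static void main(String[] args) throws IOException {
        // memcached has no real namespaces, so the only server-side
        // "evict everything" operation is a full flush of the instance.
        MemcachedClient client = new MemcachedClient(new InetSocketAddress("localhost", 11211));
        client.flush(); // asynchronous; returns an OperationFuture<Boolean>
        client.shutdown();
    }
}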
I don't know how this can be handled with the simple-spring-memcached lib, but I would suggest using Spring's Cache Abstraction instead. That way you can change the cache storage to one of your preference, e.g. ConcurrentHashMap, Ehcache, Redis, etc.; it would be just a configuration change for your application. For evicting the namespace, you could do something like:
@CacheEvict(cacheNames = "url_client_exclusion", allEntries = true)
public int update(...)
Unfortunately, there is no official support for Memcached offered by Pivotal, but if you really need to use Memcached, you could check out the Memcached Spring Boot library, which is compliant with the Spring Cache Abstraction.
There is a sample Java app where you can see how this lib is used. Over there you can also find an example of @CacheEvict usage (link).

Remove cache name from generated cache key

I would like to know if there is some way to remove the cache name from the generated cache key in Spring Boot 2.
This is the code I'm currently using to cache data:
@Cacheable(value = "products", key = "#product.id")
public SimilarProducts findSimilarProducts(Product product) {}
Spring Boot is concatenating the string "products" to every single key I generate to save in the cache. I have already tried writing my own key generator, but Spring Boot keeps concatenating the string "products" to the generated keys. Thanks for your attention.
For example, when I use:
Product p = new Product();
p.setId("12345");
findSimilarProducts(p);
The generated key will be:
products::12345
I would like it to be only 12345.
"spring boot keeps concatenating the string "products" to the generated keys."
Spring Boot (or the cache abstraction, for that matter) doesn't do such a thing, but a particular Cache implementation might. It would have been helpful to share a bit more detail about your setup, but I can only guess that you are using Redis as the cache store; the default CacheKeyPrefix does indeed prepend the name of the cache.
Please review the documentation.
You can (and maybe you need to) disable the key prefix like this:
import static org.springframework.data.redis.cache.RedisCacheConfiguration.defaultCacheConfig;

@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
    return RedisCacheManager.builder(connectionFactory)
            .cacheDefaults(defaultCacheConfig().disableKeyPrefix())
            .build();
}
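If you want to keep some prefix, just not the cache name, a possible alternative is to compute your own (the "myapp:" prefix below is purely illustrative):
import org.springframework.data.redis.cache.RedisCacheConfiguration;

// Hypothetical alternative: replace the default cache-name prefix with a
// fixed prefix of your own instead of disabling prefixes entirely.
RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
        .computePrefixWith(cacheName -> "myapp:");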
