Remove cache name from generated cache key - java

I would like to know if there is a way to remove the cache name from the generated cache key in Spring Boot 2.
This is the code I'm currently using to cache data:
@Cacheable(value = "products", key = "#product.id")
public SimilarProducts findSimilarProducts(Product product) {}
Spring Boot keeps concatenating the string "products" to every single key I generate to save in the cache. I have already tried writing my own key generator, but Spring Boot keeps concatenating the string "products" to the generated keys. Thanks for your attention.
For example, when I use:
Product p = new Product();
p.setId("12345");
findSimilarProducts(p);
the generated key will be:
products::12345
I would like it to be only 12345.

Spring Boot keeps concatenating the string "products" to the generated keys.
Spring Boot (or the cache abstraction, for that matter) doesn't do such a thing, but a particular Cache implementation might. It would have been helpful to share a bit more detail about your setup, but I can only guess that you are using Redis as the cache store, and the default CacheKeyPrefix does indeed add the name of the cache.
Please review the documentation.

You can (and may need to) disable the key prefix like this:
@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
    RedisCacheManager cacheManager = RedisCacheManager.builder(connectionFactory)
            .cacheDefaults(RedisCacheConfiguration.defaultCacheConfig().disableKeyPrefix())
            .build();
    return cacheManager;
}
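Alternatively, if you would rather replace the prefix than disable it outright, RedisCacheConfiguration also exposes computePrefixWith. A minimal sketch (same bean as above, shown here as an alternative) that computes an empty prefix for every cache, so only the raw key is stored:
@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
    RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
            // the CacheKeyPrefix lambda receives the cache name ("products") and returns the prefix to use
            .computePrefixWith(cacheName -> "");
    return RedisCacheManager.builder(connectionFactory)
            .cacheDefaults(config)
            .build();
}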

Related

Quarkus data encryption with hibernate #ColumnTransformer: how to read key from application.properties?

I need to encrypt/decrypt data with @ColumnTransformer (with a Postgres DB), but I don't want a hardcoded key or to store it in postgresql.conf.
I am able to get the key from application.properties with:
@ConfigProperty(name = "encryptionkey")
String key;
Unfortunately I cannot use it in the @ColumnTransformer annotation because it requires a constant string.
Moreover, I cannot use an AttributeConverter because I need to run queries with "like"; I have already tried that and it did not work.
I have also followed this discussion, which uses reflection to update the annotation value at runtime: I would like to know if there is a way with Hibernate to perform a programmatic configuration of ColumnTransformer? It has not worked for me though: I can replace the hardcoded key with "encryptionkey" when Quarkus starts, but when I persist my entity it has no effect.
Can anyone help me?
Thanks in advance.

Create users in Oracle, MySQL databases using Springboot - Spring Data JPA

I am very new to Spring Boot and Spring Data JPA and am working on a use case where I am required to create users in different databases.
The application will receive 2 inputs from a queue - username and database name.
Using this I have to provision the given user in the given database.
I am unable to understand the project architecture.
The query I need to run will be of the format: create user ABC identified by password;
How should the project look in terms of model classes, repositories, etc.? Since I do not have an actual table against which the query will be run, do I need a model class at all, given that there will be no column mappings happening as such?
TL;DR - I need help architecting a Spring Boot / Spring Data JPA application configured with multiple data sources to run queries of the format: create user identified by password
I have been using this GitHub repo for reference - https://github.com/jahe/spring-boot-multiple-datasources/blob/master/src/main/java/com/foobar
I'll be making some assumptions here:
your database of choice is Oracle, based on provided syntax: create user ABC identified by password
you want to create and list users
your databases are well-known and defined in JNDI
I can't just provide code unfortunately as setting it up would take me some work, but I can give you the gist of it.
Method 1: using JPA
first, create a User entity and a corresponding UserRepository. Bind the entity to the all_users table. The primary key will probably be either the USERNAME or the USER_ID column... but it doesn't really matter, as you won't be doing any inserts into that table.
to create a user, add a dedicated method to your own UserRepository specifying the user creation query as a native @Query (with @Modifying). It should work out of the box.
to list users you shouldn't need to do anything, as your entity at this point is already bound to the correct table. Just call the appropriate (and already existing) method in your repository.
The above in theory covers the creation and listing of users in a given database using JPA; a rough sketch follows below.
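A minimal sketch of what that could look like (assumptions: Oracle as the target; class and method names are illustrative; and because Oracle does not accept bind variables in DDL, the create user statement is built from pre-validated input through the EntityManager rather than a parameterised repository query):
import java.util.List;
import javax.persistence.*;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Entity
@Table(name = "ALL_USERS")               // Oracle's data dictionary view of users
public class User {
    @Id
    @Column(name = "USERNAME")
    private String username;
    // getters/setters omitted
}

public interface UserRepository extends JpaRepository<User, String> {
    // nothing to add for listing: findAll() is inherited and reads from ALL_USERS
}

@Service
public class UserProvisioningService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void createUser(String username, String password) {
        // DDL cannot use bind variables, so validate/whitelist the input
        // before concatenating it into the statement
        entityManager.createNativeQuery(
                "create user " + username + " identified by " + password)
            .executeUpdate();
    }
}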
If you have a limited number of databases (and therefore a limited number of well-known JNDI datasources), at this point you can proceed as shown in the GitHub example you referenced, by providing a different @Configuration class for each DataSource, each with the related (identical) repository living in a separate package.
You will of course have to add some logic that selects the appropriate JpaRepository for each operation.
This leads to some code duplication and works well only if the needs remain very simple over time. That is: it works if all your "microservice" will ever have to do is this create/list (and maybe delete) of users and the number of datasources remains small over time, as each new datasource will require you to add new classes, recompile and redeploy the microservice.
Alternatively, try with the approach proposed here:
https://www.endpoint.com/blog/2016/11/16/connect-multiple-jpa-repositories-using
Personally, however, I would throw JPA out of the window completely: it is anything but easy to dynamically configure arbitrary DataSource objects and reconfigure the repositories to work against a different DataSource each time, and the above solution will force you into constant maintenance of such a simple application.
What I would do is stick with NamedParameterJdbcTemplate, initialising it by using JndiTemplate. Example:
void createUser(String username, String password, String database) throws NamingException {
    DataSource ds = (DataSource) new JndiTemplate().lookup(database);
    NamedParameterJdbcTemplate npjt = new NamedParameterJdbcTemplate(ds);
    // DDL does not accept bind variables, so validate/whitelist the
    // username and password before building the statement
    npjt.getJdbcOperations().execute("create user " + username + " identified by " + password);
}

List<Map<String, Object>> listUsers(String database) throws NamingException {
    DataSource ds = (DataSource) new JndiTemplate().lookup(database);
    NamedParameterJdbcTemplate npjt = new NamedParameterJdbcTemplate(ds);
    return npjt.queryForList("select * from all_users", new HashMap<>());
}
Provided that your container has the JNDI datasources already defined, the above code should cover both the creation of a user and the listing of users. There is no need to define entities, repositories or anything else. You don't even have to define your datasources in a Spring @Configuration. The above code (which you will have to test) is really all you need, so you could wire it into a @Controller and be done with it.
If you don't use JNDI, that's no problem either: you can use HikariCP to define your datasources programmatically, passing the connection details as parameters; a rough sketch follows.
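Something along these lines (the JDBC URL and credentials are placeholders, not values from the question):
import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

DataSource buildDataSource(String jdbcUrl, String dbUser, String dbPassword) {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl(jdbcUrl);        // e.g. jdbc:oracle:thin:@//host:1521/service
    config.setUsername(dbUser);
    config.setPassword(dbPassword);
    return new HikariDataSource(config);
}
// then reuse the same JDBC code as above:
// NamedParameterJdbcTemplate npjt = new NamedParameterJdbcTemplate(buildDataSource(url, user, pwd));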
This solution will work no matter how many different datasources you have and won't need redeployment unless you really have to work on its features. Plus, it doesn't need the developer to know JPA and it doesn't need to spread the configuration all over the place.

Invalidate entire namespace using Simple Spring Memcached

Has anyone tried invalidating an entire memcached namespace?
For example, I have two read methods with different keys:
@Override
@ReadThroughSingleCache(namespace = UrlClientExclusion.TABLE_NAME, expiration = 24 * HOUR)
public List<UrlClientExclusion> list(@ParameterValueKeyProvider String idClient) {

@Override
@ReadThroughSingleCache(namespace = UrlClientExclusion.TABLE_NAME, expiration = 24 * HOUR)
public UrlClientExclusion find(@ParameterValueKeyProvider int idUrlClientExclusion) {
and I want to delete the entire namespace UrlClientExclusion.TABLE_NAME on update/delete operations.
I can't use a key list method because there are many instances of the app:
@InvalidateMultiCache(namespace = UrlClientExclusion.TABLE_NAME)
public int update(UrlClientExclusion urlClientExclusion, /*@ParameterValueKeyProvider List<Object> keylist*/ /* it is impossible for me to put all keys in this list */) {
so the solution is to delete the entire namespace.
What is the annotation to do this?
Is it possible to build a custom annotation to delete an entire namespace? How?
Many thanks
Memcached doesn't support namespaces; SSM provides namespaces as a logical abstraction. It is not possible to flush all keys from a given namespace because memcached doesn't group keys into namespaces. Memcached only supports flushing/removing a single key or all keys.
You can either flush all of your data from the memcached instance or provide the exact keys that should be removed.
I don't know how this can be handled with the simple-spring-memcached lib, but I would suggest using Spring's Cache Abstraction instead. That way you can switch the cache storage to one of your preference, e.g. ConcurrentHashMap, Ehcache, Redis, etc.; it would be just a configuration change for your application. For eviction of the whole namespace, you could do something like:
@CacheEvict(cacheNames = "url_client_exclusion", allEntries = true)
public int update(...)
Unfortunately there is no official support for Memcached offered by Pivotal, but if you really need to use Memcached, you could check out the Memcached Spring Boot library, which is compliant with the Spring Cache Abstraction.
There is a sample Java app where you can see how this lib is used. Over there you can also find an example of @CacheEvict usage (link).
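As a rough sketch of how the methods from the question could look with the cache abstraction (the cache name and class name below are illustrative; the method bodies are placeholders for the existing database access):
import java.util.List;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class UrlClientExclusionService {

    @Cacheable(cacheNames = "url_client_exclusion", key = "#idClient")
    public List<UrlClientExclusion> list(String idClient) {
        return null; // existing database access goes here
    }

    @Cacheable(cacheNames = "url_client_exclusion", key = "#idUrlClientExclusion")
    public UrlClientExclusion find(int idUrlClientExclusion) {
        return null; // existing database access goes here
    }

    // allEntries = true clears the whole cache, i.e. the SSM "namespace"
    @CacheEvict(cacheNames = "url_client_exclusion", allEntries = true)
    public int update(UrlClientExclusion urlClientExclusion) {
        return 0; // existing database access goes here
    }
}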

Spring Data Redis Repository returning null for expired entries

I'm using a CrudRepository to connect to Redis in my Spring Boot application, with a @TimeToLive annotated field in the entity for expiration:
@RedisHash("keyspace")
public class MyRedisEntity {
    @Id String key;
    MyPojo pojo;
    @TimeToLive Long ttl;
}

public interface MyRedisRepository extends CrudRepository<MyRedisEntity, String> {}
Now, when the expiration has taken place, myRedisRepo.findAll() returns null for the expired entities. I discovered that Redis (or Spring Data Redis) stores the ids of all inserted entities in a set with the keyspace as the key:
redis-cli> smembers keyspace
0) key0
1) key1
2) key2
...
redis-cli> hgetall key0
(empty list or set)
I suspect this set is used for the findAll call, returning null for ids that are no longer present as hashes due to expiration. I also tried using a listener for RedisKeyExpiredEvent, calling the repository's delete method in onApplicationEvent, but that doesn't help:
@Component
public class RedisExpirationListener implements ApplicationListener<RedisKeyExpiredEvent> {

    private MyRedisRepository myRedisRepository;

    @Autowired
    public RedisExpirationListener(MyRedisRepository myRedisRepository) {
        this.myRedisRepository = myRedisRepository;
    }

    @Override
    public void onApplicationEvent(RedisKeyExpiredEvent redisKeyExpiredEvent) {
        if (redisKeyExpiredEvent.getKeyspace().equals("keyspace")) {
            myRedisRepository.deleteById(new String(redisKeyExpiredEvent.getId()));
        }
    }
}
What should I do to get only non-null entries? Ideally I'd want the expired entries to be deleted entirely from Redis and thus not appear in findAll, but it would be sufficient if a repository method could return a list of non-null values.
(And yes, I know about the phantom behaviour, but I don't think it's relevant to what I want.)
As mentioned by @Enoobong, there is an open issue https://jira.spring.io/browse/DATAREDIS-570 (with a workaround added to it this September).
It has been open for 5 years.
That means you should use @RedisHash with TTL very carefully:
Make sure your Redis server allows your app to run CONFIG so keyspace events can be enabled
Make sure you apply the workaround from that issue
Even with that, configuring keyspace events from an application annotation is still not recommended as a production-ready solution, so please consider putting this into the application logic and keeping a scheduler that removes outdated records.
This also means you need to handle null values returned from the repo until that issue is fixed; see the sketch below.
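A minimal sketch of filtering the nulls out of findAll() until then (the method name and placement are up to you):
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;

public List<MyRedisEntity> findAllPresent(MyRedisRepository myRedisRepository) {
    // findAll() may contain null for expired hashes, so drop them before returning
    return StreamSupport.stream(myRedisRepository.findAll().spliterator(), false)
            .filter(Objects::nonNull)
            .collect(Collectors.toList());
}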

Data Replication from MySql to Redis Server

I'm using RedisTemplate, provided by the Spring framework, for in-memory caching, and MySQL as the primary database. I need to update the cache whenever a new row is added or updated in the primary database.
How can I accomplish this using Java?
Is there any inbuilt feature provided by the Redis server to achieve this?
Update:
@Override
public void getEmployeeDetailsForRedisTemplate(List<Employee> employee) {
    logger.info("Saving " + employee.size() + " record to redis template");
    for (Employee emp : employee) {
        listOperations.leftPush(EnumConstants.EMPLOYEE_KEY.getValue(), emp);
    }
}
I have been polling the database at regular intervals and, based on a status column in the DB, pushing updated data to the Redis server, which is not an efficient way of doing it.
Sorry, I wanted to ask whether there is a code snippet of the Java code that is invoked when a new row is added/updated, but I am new so I can't comment.
If you are using Redis as a cache, then you might want to use the @Cacheable annotation from Spring's cache abstraction, backed by a RedisCacheManager (from Spring Data Redis) set up in your configuration:
@Cacheable - Annotation indicating that the result of invoking a method (or all methods in a class) can be cached. Each time an advised method is invoked, caching behavior will be applied, checking whether the method has been already invoked for the given arguments. A sensible default simply uses the method parameters to compute the key, but a SpEL expression can be provided via the key() attribute, or a custom KeyGenerator implementation can replace the default one (see keyGenerator()).
If no value is found in the cache for the computed key, the target method will be invoked and the returned value stored in the associated cache. Note that Java 8's Optional return types are automatically handled and its content is stored in the cache if present.
So you could actually use it like
@Override
@Cacheable(value = "cacheName", key = "#employee.id")
public void getEmployeeDetailsForRedisTemplate(List<Employee> employee) {
    logger.info("Saving " + employee.size() + " record to redis template");
    for (Employee emp : employee) {
        listOperations.leftPush(EnumConstants.EMPLOYEE_KEY.getValue(), emp);
    }
}
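For completeness, a minimal configuration sketch that backs @Cacheable with Redis (assuming spring-boot-starter-data-redis is on the classpath; the class name is illustrative):
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        // every cache (e.g. "cacheName" above) is stored in Redis with the default settings
        return RedisCacheManager.builder(connectionFactory).build();
    }
}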
Hope this helps
