Invalidate entire namespace using Simple Spring Memcached - java

Has anyone tried invalidating an entire memcached namespace?
For example, I have two read methods with different keys:
@Override
@ReadThroughSingleCache(namespace = UrlClientExclusion.TABLE_NAME, expiration = 24 * HOUR)
public List<UrlClientExclusion> list(@ParameterValueKeyProvider String idClient) {
@Override
@ReadThroughSingleCache(namespace = UrlClientExclusion.TABLE_NAME, expiration = 24 * HOUR)
public UrlClientExclusion find(@ParameterValueKeyProvider int idUrlClientExclusion) {
and I want to delete the entire namespace UrlClientExclusion.TABLE_NAME on update/delete operations.
I can't use a key-list approach because there are many instances of the app:
@InvalidateMultiCache(namespace = UrlClientExclusion.TABLE_NAME)
public int update(UrlClientExclusion urlClientExclusion, /*@ParameterValueKeyProvider List<Object> keylist*/ /* it is impossible for me to put all keys in this list */) {
so the solution is to delete the entire namespace.
What is the annotation to do this?
Is it possible to build a custom annotation that deletes an entire namespace? How?
Many thanks

Memcached doesn't support namespaces; SSM provides namespaces only as a logical abstraction. It is not possible to flush all keys from a given namespace because memcached doesn't group keys into namespaces. Memcached supports only removing a single key or flushing all keys.
So you can either flush all data from the memcached instance or provide the exact keys that should be removed.
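If flushing the whole instance is acceptable, that can be done directly on the memcached client. A minimal sketch using the spymemcached client (which SSM builds on); host/port are placeholders, IOException handling is omitted, and note that flush() wipes every key on the instance, not just one "namespace":
import net.spy.memcached.MemcachedClient;
import java.net.InetSocketAddress;

MemcachedClient client = new MemcachedClient(new InetSocketAddress("localhost", 11211));
client.flush();     // removes ALL keys on this memcached instance
client.shutdown();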

I don't know how this can be handled with the simple-spring-memcached lib, but I would suggest using Spring's Cache Abstraction instead. That way you can switch the cache store to one of your preference, e.g. ConcurrentHashMap, Ehcache, Redis etc.; it would be just a configuration change for your application. For the eviction of the namespace, you could do something like:
@CacheEvict(cacheNames="url_client_exclusion", allEntries=true)
public int update(...)
Unfortunately there is no official support for Memcached offered by Pivotal, but if you really need to use Memcached, you could check out the Memcached Spring Boot library, which is compliant with the Spring Cache Abstraction.
There is a sample Java app where you can see how this lib is used. There you can also find an example of @CacheEvict usage (link).
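For illustration, a rough sketch of how the read/update methods from the question could look with the Spring Cache Abstraction (the cache name and wiring here are assumptions, and caching must be enabled with @EnableCaching somewhere in the configuration):
@Service
public class UrlClientExclusionService {

    @Cacheable(cacheNames = "url_client_exclusion")
    public List<UrlClientExclusion> list(String idClient) {
        return null; // placeholder for the real database access
    }

    @Cacheable(cacheNames = "url_client_exclusion")
    public UrlClientExclusion find(int idUrlClientExclusion) {
        return null; // placeholder for the real database access
    }

    // evicts every entry in the "url_client_exclusion" cache, i.e. the whole "namespace"
    @CacheEvict(cacheNames = "url_client_exclusion", allEntries = true)
    public int update(UrlClientExclusion urlClientExclusion) {
        return 0; // placeholder for the real database update
    }
}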

Related

How should one migrate from the removed #Depth annotation in Spring Data Neo4j 6?

Since spring-data-neo4j 6.0, the @Depth annotation for query methods has been removed (DATAGRAPH-1333, commit).
How would one migrate existing 5.3 code which uses the annotation to 6.0? There is no mention of it in the migration guide.
Example usage, documented in the 5.3.6.RELEASE reference:
public interface MovieRepo extends Neo4jRepository<Movie, Long> {
@Depth(1) // Default, load simple properties and its immediately-related objects
Optional<Movie> findById(Long id);
@Depth(0) // Load simple properties only
Optional<Movie> findByProperty1(String property1);
@Depth(2) // Load simple properties, immediately-related objects and their immediately-related objects
Optional<Movie> findByProperty2(String property2);
@Depth(-1) // Load whole relationship graph
Optional<Movie> findByProperty3(String property3);
}
Are custom queries the only option or is there a replacement?
There is no custom depth anymore in SDN. It either loads everything that is described in your Java model, or you have to supply custom Cypher statements.
Some background for this: with SDN 6 we dropped the internal session cache completely because we want to ensure that the Java object graph is in sync with the database graph after loading and persisting. As a consequence, we cannot track a custom depth over multiple operations anymore.
A partially loaded graph now reflects the truth of the Java model and, when it gets persisted, might remove existing (but not loaded) relationships.
Some insights can be found in the documentation section on query creation: https://docs.spring.io/spring-data/neo4j/docs/current/reference/html/#query-creation
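For example, something roughly equivalent to the old @Depth(0) read (simple properties only) can be expressed with a custom Cypher statement. A minimal sketch; the method name is just an illustration, and saving such a partially loaded entity could remove relationships that were not loaded:
public interface MovieRepo extends Neo4jRepository<Movie, Long> {

    // Returns the node without traversing any relationships,
    // i.e. roughly the old @Depth(0) behaviour.
    @Query("MATCH (m:Movie) WHERE id(m) = $id RETURN m")
    Optional<Movie> findByIdSimplePropertiesOnly(@Param("id") Long id);
}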

Create users in Oracle, MySQL databases using Spring Boot - Spring Data JPA

I am very new to Spring Boot and Spring Data JPA and working on a use case where I am required to create users in different databases.
The application will receive 2 inputs from a queue - a username and a database name.
Using these I have to provision the given user in the given database.
I am unable to understand the project architecture.
The query I need to run will be of the format: create user ABC identified by password;
What should the project look like in terms of model classes, repositories etc.? Since I do not have an actual table against which the query will be run, do I need a model class at all, given that there will be no column mappings happening as such?
TL;DR - Help with architecting a Spring Boot / Spring Data JPA application configured with multiple data sources to run queries of the format: create user ... identified by password
I have been using this GitHub repo for reference - https://github.com/jahe/spring-boot-multiple-datasources/blob/master/src/main/java/com/foobar
I'll be making some assumptions here:
your database of choice is Oracle, based on provided syntax: create user ABC identified by password
you want to create and list users
your databases are well-known and defined in JNDI
I can't just provide code unfortunately as setting it up would take me some work, but I can give you the gist of it.
Method 1: using JPA
first, create a User entity and a corresponding UserRepository. Bind the entity to the all_users table. The primary key will probably be either the USERNAME or the USER_ID column... but it doesn't really matter as you won't be doing any insert into that table.
to create a user, add a dedicated method to your own UserRepository specifying the user creation query as a native query (@Query with nativeQuery = true). It should work out-of-the-box.
to list users you shouldn't need to do anything, as your entity at this point is already bound to the correct table. Just call the appropriate (and already existing) method in your repository.
The above in theory covers the creation and listing of users in a given database using JPA.
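Purely as an illustration, a minimal sketch of that entity/repository pair (class and field names are my own, not from the question; Oracle's ALL_USERS view is read-only, so the repository covers the listing part, while creation would go through the dedicated native-query method mentioned above):
@Entity
@Table(name = "ALL_USERS")
public class DbUser {

    @Id
    @Column(name = "USERNAME")
    private String username;

    @Column(name = "USER_ID")
    private Long userId;

    // getters and setters omitted
}

public interface DbUserRepository extends JpaRepository<DbUser, String> {
    // findAll(), inherited from JpaRepository, already covers the "list users" case
}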
If you have a limited number of databases (and therefore a limited number of well-known JNDI datasources) at this point you can proceed as shown in the GitHub example you referenced, by providing different #Configuration classes for each different DataSource, each with the related (identical) repository living in a separate package.
You will of course have to add some logic that will allow you to appropriately select the JpaRepository to use for the operations.
This will lead to some code duplication and works well only if the needs remain very simple over time. That is: it works if all your "microservice" will ever have to do is this create/list (and maybe delete) of users and the number of datasources remains small over time, as each new datasource will require you to add new classes, recompile and redeploy the microservice.
Alternatively, try with the approach proposed here:
https://www.endpoint.com/blog/2016/11/16/connect-multiple-jpa-repositories-using
Personally, however, I would throw JPA out of the window completely: it's anything but easy to dynamically configure arbitrary DataSource objects and repoint the repositories at a different DataSource each time, and the above solution will force constant maintenance on such a simple application.
What I would do is stick with NamedParameterJdbcTemplate, initialising it via JndiTemplate. Example:
void createUser(String username, String password, String database) throws NamingException {
    DataSource ds = (DataSource) new JndiTemplate().lookup(database);
    NamedParameterJdbcTemplate npjt = new NamedParameterJdbcTemplate(ds);
    // Oracle DDL does not accept bind variables, so the statement is assembled
    // directly; validate/whitelist username and password before doing this.
    npjt.getJdbcOperations().execute("create user " + username + " identified by " + password);
}

List<Map<String, Object>> listUsers(String database) throws NamingException {
    DataSource ds = (DataSource) new JndiTemplate().lookup(database);
    NamedParameterJdbcTemplate npjt = new NamedParameterJdbcTemplate(ds);
    return npjt.queryForList("select * from all_users", new HashMap<>());
}
Provided that your container has the JNDI datasources already defined, the above code should cover both the creation of a user and the listing of users. No need to define entities or repositories or anything else. You don't even have to define your datasources in a Spring @Configuration. The above code (which you will have to test) is really all you need, so you could wire it into a @Controller and be done with it.
If you don't use JNDI it's no problem either: you can use HikariCP to define your datasources, providing the additional arguments as parameters.
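A rough sketch of what that could look like (JDBC URL and credentials are placeholders, not values from the question):
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/SERVICE_NAME");
config.setUsername("admin_user");
config.setPassword("admin_password");
config.setMaximumPoolSize(2);

DataSource ds = new HikariDataSource(config);
NamedParameterJdbcTemplate npjt = new NamedParameterJdbcTemplate(ds);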
This solution will work no matter how many different datasources you have and won't need redeployment unless you really have to work on its features. Plus, it doesn't require the developer to know JPA and it doesn't spread the configuration all over the place.

Clear Hibernate 2nd level cache after manually DB update

In short, I have an entity mapped to a view in the DB (Oracle) with the 2nd level cache enabled (read-only strategy) -- Ehcache.
If I manually update some column in the DB, the cache will not be updated.
I did not find any way to do this; it only works if the updates are done through the Hibernate entity.
Can I somehow implement this feature?
Maybe a job to monitor the table (or view)? Or maybe there is some method to notify Hibernate about a change in a particular table in the DB.
Thanks in advance!
According to Hibernate JavaDoc, you can use org.hibernate.Cache.evictAllRegions() :
evictAllRegions() Evict all data from the cache.
Using Session and SessionFactory:
Session session = sessionFactory.getCurrentSession();
if (session != null) {
session.clear(); // internal cache clear
}
Cache cache = sessionFactory.getCache();
if (cache != null) {
cache.evictAllRegions(); // Evict data from all query regions.
}
1) If you need to evict only one entity (i.e. only certain entities are updated directly in the DB) rather than the whole session, you can use
evictEntityRegion(Class entityClass) Evicts all entity data from the given region (i.e. for all entities of the given type).
2) If you have a lot of entities that can be updated directly in the DB, you can use this method, which evicts all entities from the 2nd level cache (you can expose it to admins through JMX or other admin tools):
/**
* Evicts all second level cache hibernate entities. This is generally only
* needed when an external application modifies the game database.
*/
public void evict2ndLevelCache() {
try {
Map<String, ClassMetadata> classesMetadata = sessionFactory.getAllClassMetadata();
Cache cache = sessionFactory.getCache();
for (String entityName : classesMetadata.keySet()) {
logger.info("Evicting Entity from 2nd level cache: " + entityName);
cache.evictEntityRegion(entityName);
}
} catch (Exception e) {
logger.logp(Level.SEVERE, "SessionController", "evict2ndLevelCache", "Error evicting 2nd level hibernate cache entities: ", e);
}
}
3) Another approach is described here for PostgreSQL + Hibernate; I think you can do something similar for Oracle.
Use Debezium for asynchronous cache invalidation from your database. You can learn more at https://debezium.io/
This article is also very helpful, as it walks through a concrete implementation:
https://debezium.io/blog/2018/12/05/automating-cache-invalidation-with-change-data-capture/
As mentioned, when you update the DB from the back-end manually (not through the application / Hibernate session), the cache is not updated and your application remains unaware of the change.
To tell the app about the change, you need to refresh the entire cache or the part of the cache related to that entity, depending on the case. This can be done in two ways:
1 - Restart the application - the cache will be rebuilt with the updated DB state.
2 - Trigger the refresh without restarting the app - you tell your application that the current cache is invalid and should be refreshed.
You can give this external push to your app in many ways. A few are listed below:
Through JMX.
Through a servlet with a published URL to refresh the cache. Hit the URL after you change tables in the DB.
Implementing a trigger on the database that calls a listener on the application.
While implementing the external push / admin task, you can call a suitable cache-related method depending on your requirement to invalidate or refresh the cache. Examples: Session.refresh(), Cache.evictAllRegions(), Cache.evictEntityRegion(entityName) etc., as described in the other posts.
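As an illustration of the servlet/URL option, a minimal sketch of a Spring controller exposing a refresh endpoint (the URL, class name and wiring are assumptions; hit it after changing tables directly in the DB):
@RestController
public class CacheAdminController {

    private final SessionFactory sessionFactory;

    public CacheAdminController(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // Evicts every region of the Hibernate 2nd level cache.
    @PostMapping("/admin/cache/evict")
    public void evictSecondLevelCache() {
        sessionFactory.getCache().evictAllRegions();
    }
}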
You will find the way to control the second-level cache here:
http://docs.jboss.org/hibernate/core/3.3/reference/en/html/performance.html#performance-sessioncache
You can use the session.refresh() method to reload the objects that are currently held in the session.
Read up on object loading for more detail.
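For instance (someEntity being an entity instance already associated with the session; the name is just a placeholder):
// Re-reads the entity's state from the database, replacing whatever the session holds
session.refresh(someEntity);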
As of JEE 7.0:
myStatelessDaoBean.getSession().evict(MyEntity.class);

How to handle many JPA entities in one request (OpenSessionInView)

We have a classic Spring application using the OpenSessionInView pattern.
Sometimes we want to process many (an unknown number of) entities in one request.
This is the well-known OutOfMemory issue.
A long time ago we created a function (hack) to switch the current Hibernate session for a particular method:
public void proceedOnePerson(int id) {
recalculateVAT(id);
}
public void proceedAllPerson(int[] ids) {
for(int id : ids) {
switchToAnotherHibernateSession();
recalculateVAT(id); //OutOfMemory
closeAnotherSessionAndSwitchBackToOriginSession();
}
}
Is there a standard solution to this in Spring, Spring Boot or Spring Data?
Session.clear is not suitable, because it has side effects on other entities from other methods.
Session.evict is not suitable either, because the developer of recalculateVAT had no idea that this function would be used in a batch. Error prone.
Spring Batch is too heavy, and we don't want to write one method for a single user and a second method for the batch.
I think you are using the Open Session in View pattern; its performance is very poor, and the view layer is not prepared to handle Hibernate or other JPA implementation exceptions. Why don't you use Spring Data with a service layer that transforms the entities into DTOs to be consumed by the view layer?
OutOfMemory is typical with this pattern when you fetch a lot of data with multiple queries; if you do not use this pattern, the problem will disappear.

Working with multiple database schemas

There is a third-party application whose database is accessed by my application. Its database schema has been changed several times, so there are about four different database schemas right now (different columns, different select conditions for the same entities).
For example, there is an entity "Application". For different schemas it could be retrieved by:
1)SELECT * FROM apps WHERE cell_number < 65535 AND page_number < 65535 AND top_number = 65535
2)SELECT * FROM menu_item WHERE cell_number > -1 AND page_number > -1 AND parent_id = -1 AND component_name IS NOT NULL
And so on...
So, what design pattern (in Java) would be most suitable to support multiple database schemas of different versions of the same application? It should be ready for future changes, of course.
It's not an easy task because it is difficult to properly map a table structure to an object (nowadays an ORM is often used to perform this task).
From your question it seems that declaring Application as an abstract class or interface and providing different implementations is enough.
public abstract class Application {
    public abstract Application getAnApplication();
}

public class ConcreteApplicationOne extends Application {
    public Application getAnApplication() {
        // retrieve application data from database schema 1, build the object and return it
    }
}

public class ConcreteApplicationTwo extends Application {
    public Application getAnApplication() {
        // retrieve application data from database schema 2, build the object and return it
    }
}
And use some sort of factory to give the caller the right concrete Application implementation:
public class ApplicationFactory {
    public Application getApplicationImplementation() {
        if (cond1) {
            return new ConcreteApplicationOne();
        } else {
            return new ConcreteApplicationTwo();
        }
    }
}
I believe the solution to your problem is to define your data classes in your application and use an ORM like Hibernate to generate the database tables in your DB. You will need to look into the migration functionality. Please check out the following article that talks about this topic:
Hibernate and DB migration
By moving the data structure into your primary code you gain the following:
No need to maintain code and logic in two places and in two languages
Easier to test as there is no logic in DB
The migration script can be generated automatically
