Ignite - Manually trigger persistence - java

I have an application using in-memory Ignite caches. What I would like to do is, after user input, trigger persistence of those caches to a Postgres DB.
I already looked at Ignite persistence with the writeThrough and writeBehindEnabled properties. That doesn't work in this case because I don't want to write to the DB every time I write to the caches.
I also found that, when the caches are configured with persistence, we create the store with a CacheConfiguration<Key, Value>; we can then use the following to trigger a write:
cache.getSnap(ignite, snapdId).getCache().forEach(e -> {
    cache.getConfig().getCacheStoreFactory().create().write(e);
});
Unfortunately, this throws an exception in this.session().cacheName() because the session is null.
The this.session() call returns the following field: @CacheStoreSessionResource private CacheStoreSession ses;
If someone knows how I could do this, that would help me a lot.
Thanks!

That's just not how the Cache Store is intended to work. I would suggest either:
Two caches: one writes through to Postgres (for that one you can use the JDBC Cache Store adapter), the other is purely in-memory.
Write your own Cache Store adapter. Here you would have an extra column indicating whether the row should be written through.
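A minimal sketch of the second option, assuming an illustrative PersistableEntry value type that carries the "should be persisted" flag; the actual JDBC calls to Postgres are omitted:

import javax.cache.Cache;

import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.cache.store.CacheStoreSession;
import org.apache.ignite.resources.CacheStoreSessionResource;

// Illustrative value type carrying the persistence flag.
class PersistableEntry {
    boolean persistable;
    String payload;

    boolean isPersistable() {
        return persistable;
    }
}

public class SelectivePostgresStore extends CacheStoreAdapter<Long, PersistableEntry> {

    // Injected by Ignite; gives access to the current store session/transaction.
    @CacheStoreSessionResource
    private CacheStoreSession ses;

    @Override
    public PersistableEntry load(Long key) {
        // SELECT ... FROM items WHERE id = ? via JDBC (omitted)
        return null;
    }

    @Override
    public void write(Cache.Entry<? extends Long, ? extends PersistableEntry> entry) {
        if (!entry.getValue().isPersistable()) {
            return; // skip rows the user has not flagged for persistence
        }
        // INSERT ... ON CONFLICT (id) DO UPDATE ... via JDBC (omitted)
    }

    @Override
    public void delete(Object key) {
        // DELETE FROM items WHERE id = ? via JDBC (omitted)
    }
}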

Related

Spring transaction management multi-thread issue

I would like to ask for your help regarding the following issue.
I have a system where I use Spring and its TransactionTemplate to manage DB transactions. The DB call is a simple one: it either saves data or fetches it, depending on whether the data already exists. In a nutshell, it looks like this:
public void getOrCreate() {
    TransactionTemplate transactionTemplate = new TransactionTemplate(platformTransactionManager);
    transactionTemplate.execute(status -> {
        // look up the row by its unique key and insert it if absent (details omitted)
        return null;
    });
}
It works fine until many threads use this function. For example, 10 threads call this getOrCreate method (if the given data doesn't exist it is created, otherwise the existing data is returned).
However, it doesn't always work, because sometimes I get an exception saying the DB cannot create the data ("Cannot insert duplicate key row in object"). I use a unique key rather than the basic auto-incremented one in the DB, so the key can be the same for different pieces of data that I want to process.
I think the problem is the following:
Two threads use data with the same key. Both of them process the data, and when the data should be committed to the DB, the slower one cannot be committed because the key already exists, so it throws an exception.
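Concretely, the race looks like this (a minimal sketch; Item, itemRepository and findByUniqueKey are illustrative stand-ins, not the actual classes):

public Item getOrCreate(String uniqueKey) {
    TransactionTemplate tx = new TransactionTemplate(platformTransactionManager);
    return tx.execute(status -> {
        Item existing = itemRepository.findByUniqueKey(uniqueKey); // threads A and B can both see null here
        if (existing != null) {
            return existing;
        }
        // both threads reach this point; the slower INSERT violates the unique constraint
        return itemRepository.save(new Item(uniqueKey));
    });
}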
Do you have any idea how I can manage this multi-threading issue? Is TransactionTemplate thread-safe, or can I make it so?
Should I maybe use Spring transaction management in a different way?
I hope my problem is understandable; thanks for your help in advance.
Regards

How to force Hibernate to read external database changes

I have a common database that is used by two different applications (different technologies, different deployment servers, they just use the same database).
Let's call them application #1 and application #2.
Suppose we have the following scenario:
the database contains a table called items (its content doesn't matter)
application #2 is developed in Spring Boot and it is mainly used just for reading data from the database
application #2 retrieves an item from the database
application #1 changes that item
application #2 retrieves the same item again, but the changes are not visible
What I understood by reading a lot of articles:
when application #2 retrieves the item, Hibernate stores it in the first level cache
the changes made to the item by application #1 are external changes and Hibernate is unaware of them, so the cache is not updated (the same happens when you make a manual change in the database)
you cannot disable Hibernate's first level cache.
So, my question is: can you force Hibernate to refresh the entities every time they are read (or make it go to the database) without explicitly calling em.refresh(entity)? The problem is that the business logic module is used as a dependency, so I can only call its service methods (i.e. I don't have access to the EntityManager or Session references).
Hibernate's L1 cache is roughly equivalent to a DB transaction when you run at repeatable-read isolation. Basically, if you read/write some data, the next time you query in the context of the same session, you will get the same data. Further, within the same process, sessions run independently of each other, which means two sessions can be looking at different data in their L1 caches.
If you use repeatable read or lower, then you shouldn't really be concerned about the L1 cache, as you might run into this scenario regardless of the ORM (or with no ORM at all).
I think you only need to think about the L2 cache here. The L2 cache stores data and assumes only Hibernate is accessing the DB, which means that if some change happens in the DB directly, Hibernate might not know about it. If you just disable the L2 cache, you are sorted.
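For example, a minimal sketch of turning the second-level cache (and query cache) off with the standard Hibernate properties when building the SessionFactory:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class NoSecondLevelCacheBootstrap {
    public static SessionFactory build() {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
        cfg.setProperty("hibernate.cache.use_second_level_cache", "false");
        cfg.setProperty("hibernate.cache.use_query_cache", "false");
        return cfg.buildSessionFactory();
    }
}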
Further reading - Short description of hibernate cache levels
Well, if you cannot access the Hibernate session, you are left with nothing. Any operation you want to do requires session access. For instance, you can remove an entity from the cache after reading it like this:
session.evict(entity);
or this
session.clear();
but first and foremost you need a session. Since you are only calling services, you need to create service endpoints that clear the session cache after serving a request, or modify the existing endpoints to do that.
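For example, a minimal sketch of such an endpoint (Item is an illustrative entity class):

import org.hibernate.Session;

public Item findFreshItem(Session session, long id) {
    Item item = session.get(Item.class, id);
    session.evict(item);    // detach this entity from the first-level cache
    // or session.clear();  // detach everything in the session
    return item;
}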
You can try to use StatelessSession, but you will lose cascading and other things.
https://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html#_statelesssession
https://stackoverflow.com/a/48978736/3405171
You can force a new transaction to start; that way Hibernate will not read from the cache and will redo the read from the DB.
You can annotate your method like this:
@Transactional(readOnly = true, propagation = Propagation.REQUIRES_NEW)
When a new transaction is requested, a new Hibernate session is created, so the data will not come from the cache.
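For example, a minimal sketch on a read-only service method (itemRepository and Item are illustrative stand-ins for a Spring Data repository and entity):

import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Transactional(readOnly = true, propagation = Propagation.REQUIRES_NEW)
public Item loadItem(long id) {
    // runs in its own transaction and persistence context, so it re-reads from the DB
    return itemRepository.findById(id).orElse(null);
}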

Async auditing with JaVers

I need to audit changes to some entities in our application and am thinking of using JaVers. I like the support JaVers provides for querying the audit data. Hibernate Envers looks good too, but it stores the audit data in the same DB.
Here are my requirements:
async logging - for minimal performance impact
store audit data in a different db - performance reasons as well
As far as I can see, JaVers is not designed for the above, but it seems possible to adapt it to achieve it. Here's how:
JaVers actually allows data to be stored in a different DB. You can provide a connection to any DB, really. It's not how it's intended to be used, but it works. Code below (note the connectionProvider, which can provide a connection to any DB):
final Connection dbConnection =
        DriverManager.getConnection("jdbc:mysql://localhost:3306/javers", "root", "root");

ConnectionProvider connectionProvider = new ConnectionProvider() {
    @Override
    public Connection getConnection() {
        // suitable only for testing!
        return dbConnection;
    }
};

JaversSqlRepository sqlRepository = SqlRepositoryBuilder
        .sqlRepository()
        .withConnectionProvider(connectionProvider)
        .withDialect(DialectName.MYSQL)
        .build();
The async part can be achieved by moving the execution of the JaVers commit into a thread/executor. The challenge is that if the execution takes too long, the object might change before it's logged. There are two solutions I can think of here (a sketch of the first one follows the list):
We could create a snapshot of the object (e.g. serialize it to JSON or the like) and pass that to a thread to log it.
We could provide our own implementation of the JaVers repository which processes the differences in the current thread and then passes the snapshot objects to be persisted in another thread. This way we'd only read from the DB in the application thread and do the writing (which is generally more costly performance-wise) in the auditing thread.
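A minimal sketch of the first option; deepCopy() and the "auditor" author string are illustrative assumptions, while javers.commit(author, object) is JaVers' standard commit call:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.javers.core.Javers;

public class AsyncAuditor {
    private final ExecutorService auditExecutor = Executors.newSingleThreadExecutor();
    private final Javers javers;

    public AsyncAuditor(Javers javers) {
        this.javers = javers;
    }

    public void auditAsync(Object entity) {
        Object frozen = deepCopy(entity); // capture the state in the caller's thread
        auditExecutor.submit(() -> javers.commit("auditor", frozen)); // persist later
    }

    private Object deepCopy(Object entity) {
        // e.g. serialize to JSON and back, as suggested above (omitted)
        return entity;
    }
}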
QUESTIONS:
am I missing anything here? Could this work?
Does JaVers have support for creating a snapshot of the object which can then be moved to another thread? It does this internally somewhere, so maybe it's something we could use.
JUST FYI: Not relevant for the question, but here are some other challenges I can think of and how I'm planning to solve them:
Because the audit is not done in the same transaction, a failed transaction would make audit rollback complex, so we need to audit only objects that were successfully committed. I intend to do that with a Hibernate Interceptor, listening to afterTransactionCompletion and only committing objects updated by that transaction.
In the case of lazy-loaded objects, if we try to access them once the transaction has finished, the lazy-loaded properties might not be accessible (as the session might be closed too). I don't know how to fix this, but it might not be an issue, as I think we load most properties eagerly.
Interesting question.
First, a correction: all JaVers core modules are designed to decouple audit data from application data. As you mentioned, the user provides a ConnectionProvider to be used by JaVers. It could be any database you want.
What is not designed for use with multiple DBs are the Spring integration modules for SQL, i.e. javers-spring-jpa and javers-spring-boot-starter-sql. They only cover the most common scenario: the same DB for the application and for JaVers.
You are right about the lack of an async commit. Fortunately, it can be implemented in JaVers core alone, without changing the repositories.
The API could be:
CompletableFuture<Commit> javers.commitAsync(..., Executor);
First, JaVers takes a snapshot of the user's objects; this is fast, so it can be done in the current thread.
Then, DB reads (loading the latest snapshots) and DB writes (inserting new snapshots) can be done asynchronously (submitted to the given Executor).
As you mentioned, this requires a new approach to DB transactions. We plan to implement the Commit Withdrawal feature, so the app would be able to withdraw a JaVers commit after a main-DB rollback. See https://github.com/javers/javers/issues/588

Ehcache cached item is wrong

I use Hibernate + Ehcache to read a workflow engine database.
Hibernate does not write anything to that database.
If I set the timeToLive setting on the cache, the cache won't reflect any database changes until the timeToLive expires.
Database changes are done by the workflow engine API, so there is no way to use Hibernate to write to the database.
Shouldn't Ehcache know the cached data is stale and do the updates for me?
Is there any clean way to solve this stale cache problem?
"the cache won't reflect any database changes until the timeToLive expires."
That's the intended behaviour! These second-level caches do nothing but store data in hash maps; they know nothing about changes unless you tell them, or until it's time to evict the objects from the cache and reread them.
The way to solve this is to not use caches for volatile objects.
"If I set the timeToLive setting on the cache, the cache won't reflect any database changes until the timeToLive expires."
So that means you are not using it.
"Database changes are done by the workflow engine API, so there is no way to use Hibernate to write to the database."
So, as an alternative (to timeToLive), you need to set the cache concurrency strategy to read-write or nonstrict-read-write (a sketch follows this list). If it's not reflecting the changes, then I am assuming two things:
Your workflow engine is using Hibernate
And your cache setting is read-only
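A minimal sketch of what a read-write strategy looks like on an entity mapping (the entity itself is illustrative); note that, as assumed above, this only helps if the writes also go through Hibernate:

import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE) // or NONSTRICT_READ_WRITE
public class WorkflowItem {
    @Id
    private Long id;

    private String state;
}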

Database changes made by one instance of an application are not picked up by another instance using Hibernate

I have an application which can read/write changes to a database table. Another instance of the same application should be able to see the updated values in the database. I am using Hibernate for this. If I have two instances of the application running and I make changes to the DB from one instance for the first time, the updated values can be seen from the second. But any further changes from the first instance are not reflected in the second. Please shed some light.
This seems to be a bug in your cache settings. By default, Hibernate assumes that it's the only one changing the database. This allows it to efficiently cache objects. If several parties can change tables in the DB, then you must switch off caching for those tables/instances.
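For example, a minimal sketch of opting a single entity out of the shared (second-level) cache, assuming the persistence unit's shared-cache-mode is DISABLE_SELECTIVE (the entity is illustrative):

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
@Cacheable(false) // never cache this entity; every read goes to the database
public class SharedTableRow {
    @Id
    private Long id;

    private String value;
}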
You can use hibernate.connection.autocommit=true
This will make Hibernate commit each SQL update to the database immediately, and you should be able to see the changes from the other application.
HOWEVER, I would strongly discourage you from doing so. Like Aaron pointed out, you should only use one Hibernate SessionFactory with a database.
If you do need multiple applications to be in sync, think about using a shared cache, e.g. Gemstone.
