Map Entity to different Tables based on certain condition - java

I have one Entity named Transaction and its related table in the database is TAB_TRANSACTIONS. The whole system is working pretty well; now a new requirement has come up: the client wants all transactions older than 30 days moved to an archive table, e.g. TAB_TRANSACTIONS_HIST.
Currently, as a workaround, I have given them a script scheduled to run every 24 hours, which simply moves the data from the source table to the destination table.
I was wondering whether there is any better solution to this using Hibernate.
Can I fetch Transaction entities and then store them in TAB_TRANSACTIONS_HIST? I have looked at many similar questions but couldn't find a solution; any suggestions would help.

You may want to use a Quartz scheduler for this task. Here is the Job for the scheduler:
public class DatabaseBackupJob implements Job {

    public void execute(JobExecutionContext jec) throws JobExecutionException {
        Configuration cfg = new Configuration();
        cfg.configure("hibernate.cfg.xml");
        Session session = cfg.buildSessionFactory().openSession();
        try {
            Query q = session.createQuery(
                    "insert into Tab_Transaction_History(trans) "
                    + "select t.trans as trans from Tab_Transaction t where t.date < :date")
                    .setParameter("date", reqDate);   // reqDate: the 30-day cutoff, defined elsewhere
            Transaction t = session.beginTransaction();
            q.executeUpdate();   // Hibernate's Query has executeUpdate(), not executeNonQuery()
            t.commit();
        } catch (Exception e) {
            // don't swallow the failure silently; surface it to the scheduler
            throw new JobExecutionException(e);
        } finally {
            session.close();
        }
    }
}
P.S. Hibernate does not provide a scheduler, so you cannot perform this activity with core Hibernate alone; you need an external API such as the Quartz scheduler.
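For completeness, here is a rough sketch (not part of the original answer) of wiring the Job above into Quartz 2.x so that it fires every 24 hours; the class, job, and trigger names are made up:

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class BackupScheduler {
    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        // Job and trigger identities here are illustrative
        JobDetail job = JobBuilder.newJob(DatabaseBackupJob.class)
                .withIdentity("archiveJob")
                .build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("archiveTrigger")
                .startNow()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInHours(24)
                        .repeatForever())
                .build();

        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}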

The solution you are looking for can, I think, only be achieved by relying on TWO different persistence contexts.
A single persistence context maps entities to tables in a non-dynamic way, so you can't perform a runtime switch from one mapped table to another.
But you can create a different persistence context (or a parallel configuration in Hibernate instead of using two different contexts), load this new configuration into a different EntityManager, and perform all your tasks there.
That's the only solution that comes to mind at the moment. I really don't know if it's adequate...
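As a rough sketch of that idea with plain Hibernate (my own illustration, not from the question; the two config files, the t.date property, and the cutoffDate variable are assumptions), you could build two SessionFactory instances whose mappings point the same Transaction class at the live and the archive table respectively:

// Sketch only: hibernate-live.cfg.xml maps Transaction to TAB_TRANSACTIONS,
// hibernate-archive.cfg.xml maps the same class to TAB_TRANSACTIONS_HIST
// (e.g. via separate hbm.xml files). cutoffDate is the 30-day cutoff.
SessionFactory liveFactory = new Configuration()
        .configure("hibernate-live.cfg.xml").buildSessionFactory();
SessionFactory archiveFactory = new Configuration()
        .configure("hibernate-archive.cfg.xml").buildSessionFactory();

Session live = liveFactory.openSession();
Session archive = archiveFactory.openSession();
try {
    List<Transaction> old = live
            .createQuery("from Transaction t where t.date < :cutoff")
            .setParameter("cutoff", cutoffDate)
            .list();
    org.hibernate.Transaction tx = archive.beginTransaction();
    for (Transaction t : old) {
        // replicate() copies the row, keeping its identifier, into the table
        // that the archive factory maps Transaction to
        archive.replicate(t, ReplicationMode.OVERWRITE);
    }
    tx.commit();
} finally {
    live.close();
    archive.close();
}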

I think it's a good idea to run the script every 24 hours.
You could decrease the interval if you're not happy with it.
But if you already have a working script, what is your actual problem?
Checking the age of all transactions and moving the ones older than 30 days to another list or map is, I think, the best way.

You will need some kind of scheduling mechanism: either a thread that is woken up periodically, or some other trigger that is appropriate for you.
You can also use a bulk insert operation:
Query q = session.createQuery(
        "insert into TabTransactionHistory tth (.....) " +
        "select .... from TabTransaction tt");
int createdObjects = q.executeUpdate();
(Replace ... with actual fields)
You can also add a where clause to restrict the selection, for example to entries older than a certain age.
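For instance, filled in with hypothetical property names (id, amount, transactionDate) and a cutoff parameter for the 30-day rule, the whole statement might look like this:

// Hypothetical property names; adjust them to the real entity mappings.
Query archiveQuery = session.createQuery(
        "insert into TabTransactionHistory (id, amount, transactionDate) " +
        "select tt.id, tt.amount, tt.transactionDate from TabTransaction tt " +
        "where tt.transactionDate < :cutoff");
archiveQuery.setParameter("cutoff", cutoffDate);   // e.g. now minus 30 days
int createdObjects = archiveQuery.executeUpdate();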


How to check special conditions before saving data with Hibernate

Sample Scenario
I have a limit that controls the total value of a column. If I make a save that exceeds this limit, I want it to throw an exception. For example:
Suppose the LIMIT is 20 and I have already added the following data:
id | code | value
---|------|------
 1 | A    | 15
 2 | A    | 5
 3 | B    | 12
 4 | B    | 3
If I insert (A, 2) it exceeds the limit (15 + 5 + 2 > 20) and I want an exception to be thrown.
If I insert (B, 4) the transaction should succeed, since the limit is not exceeded (12 + 3 + 4 <= 20).
code and value are interrelated.
What can I do?
I can check this scenario with the necessary queries, e.g. write a method for it and call it from the save method. That works.
However, I'm looking for a more convenient solution than this.
For example, is there an annotation I can use when designing the entity?
Can I do this without having to call the checking method every time?
An example of the kind of thing I mean: @UniqueConstraint, which checks whether the same values are being added.
Using a transaction
The most common and long-accepted way is to simply abstract in a suitable form (in a class, a library, a service, ...) the business rules that govern the behavior you describe, within a transaction:
@Transactional(propagation = Propagation.REQUIRED)
public RetType operation(ReqType args) {
...
perform operations;
...
if(fail post conditions)
throw ...;
...
}
In this case, if there is already an open transaction when the method is called, that transaction will be used (and there will be no interlocks); if there is no transaction yet, a new one will be created, so that both the operations and the post-condition checks are performed within the same transaction.
Note that with this strategy, both the operations and the invariant checks can span multiple transactional resources managed by the TransactionManager (e.g. Redis, MySQL, MQS, ...) simultaneously and in a coordinated manner.
Using only the database
It has fallen out of use for a long time (in favor of the first way), but TRIGGERS were the canonical option a few decades ago for checking postconditions. This solution is usually coupled to the specific database engine (e.g. PostgreSQL or MySQL).
It can be useful when the client making the modifications is unable or unwilling (not safe) to check postconditions within a transaction (e.g. bash processes), but nowadays this is infrequent.
TRIGGERS may also be preferable in certain scenarios where efficiency is required, as there are certain optimization options within database scripts.
Neither Hibernate nor Spring Data JPA has anything built-in for this scenario. You have to program the transaction logic in your repository yourself:
@PersistenceContext
EntityManager em;

public void addValue(String code, int value) {
    // SUM over an Integer property returns a Long in JPQL; COALESCE guards against no rows
    var checkQuery = em.createQuery(
            "SELECT COALESCE(SUM(e.value), 0) FROM Entity e WHERE e.code = :code", Long.class);
    checkQuery.setParameter("code", code);
    if (checkQuery.getSingleResult() + value > 20) {
        throw new LimitExceededException("attempted to exceed limit for " + code);
    }
    var newEntity = new Entity();
    newEntity.setCode(code);
    newEntity.setValue(value);
    em.persist(newEntity);
}
Then (it's important!) you have to define the SERIALIZABLE isolation level on the @Transactional annotations for the methods that work with this table.
Read more about serializable isolation level here, they have an oddly similar example.
Note that you have to consider retrying the failed transaction. No idea how to do this with Spring though.
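One possibility (my suggestion, not part of the original answer) is Spring Retry: with @EnableRetry on a configuration class, a thin wrapper bean can re-run the transactional method when the serialization failure surfaces as an exception. Everything below, including the class names, is an assumption:

@Service
public class ValueService {

    private final ValueRepository repository;   // hypothetical bean holding the addValue(...) method

    public ValueService(ValueRepository repository) {
        this.repository = repository;
    }

    // Retry a few times when the serializable transaction is rolled back; the
    // exception type to retry on depends on the JPA provider and JDBC driver,
    // DataAccessException is just a broad illustrative choice.
    @Retryable(value = DataAccessException.class, maxAttempts = 3,
               backoff = @Backoff(delay = 50))
    public void addValueWithRetry(String code, int value) {
        repository.addValue(code, value);   // each retry starts a fresh transaction
    }
}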
You should use a singleton (javax.ejb.Singleton):
@Singleton
public class Register {

    @Lock(LockType.WRITE)
    public void register(String code, int value) {
        if (i_can_insert_modify(code, value)) {
            // use the entityManager or some DAO to persist the value
        } else {
            // do something, e.g. throw an exception
        }
    }
}

Scheduled Spring MVC task not updating DB entity

Good Morning,
I am trying to create a scheduled task which has to update database entities cyclically. I am using Spring MVC and Hibernate as the ORM.
Problem
The scheduled task should update the entities in the background, but the changes are not persisted to the database.
Structure of the system
I have a Batch entity with basic information, and plenty of sensors insert records into the DB every few seconds.
Related to the Batch entity there is a TrackedBatch entity, which contains many calculated fields derived from the Batch entity itself. The scheduled task takes each Batch one by one, updates the related sensor data with lotto = lottoService.updateBatchRelations(batch), and then updates the TrackedBatch entity with the newly computed data.
A user can modify the Batch basic information; the system should then recompute the TrackedBatch data and update the entity (this is done by the controller, which calls the updateBatchFollowingModification method). This step works correctly with an async method; the problem appears when the scheduled task has to recompute the same information.
Async method used to update entities after a user modification (working correctly)
@Async("threadPoolTaskExecutor")
@Transactional
public void updateBatchFollowingModification(Lotto lotto)
{
    logger.debug("Daemon started");
    Lotto batch = lottoService.findBatchById(lotto.getId_lotto(), false);
    lotto = lottoService.updateBatchRelations(batch);
    lotto.setTrackedBatch(trackableBatchService.modifyTrackedBatch(batch.getTrackedBatch(), batch));
    logger.debug("Daemon ended");
}
Scheduled methods to update entities cyclically (Not working as expected)
@Scheduled(fixedDelay = 10000)
public void updateActiveBatchesWithDaemon()
{
    logger.info("updating active batches in background");
    List<Integer> idsOfActiveBatches = lottoService.findIdsOfActiveBatchesInAllSectors();
    if (!idsOfActiveBatches.isEmpty())
    {
        logger.info("found " + idsOfActiveBatches.size() + " active batches");
        for (Integer id : idsOfActiveBatches)
        {
            logger.debug("update batch " + id + " in background");
            updateBatch(id);
        }
    }
    else
    {
        logger.info("no active batches found");
    }
}

@Transactional
public void updateBatch(Integer id)
{
    Lotto activeLotto = lottoService.findBatchById(id, false);
    updateBatchFollowingModification(activeLotto);
}
As a premise, I can state that the scheduled method is configured correctly and fires continuously (the same holds for the async method: following a user modification all entities are updated correctly). At the line updateBatchFollowingModification(activeLotto) in the updateBatch method, the related entities are modified correctly (even the TrackedBatch, I have checked with the debugger), but the changes are not persisted to the database when the method ends, and no exception is thrown.
Looking around the internet I didn't find any solution to this problem, nor does it seem to be a known problem or bug in Hibernate or Spring.
Reading the Spring documentation about scheduling didn't help either; I also tried calling the save method in the scheduled task to save the entity again (but it obviously didn't work).
Further considerations
I do not know whether the @Scheduled annotation needs some extra configuration to handle @Transactional methods; on the web, developers seem to use these annotations together without problems, and the documentation mentions no caveats.
I also do not think it is a concurrency problem: if the async method is modifying the data, the scheduled one should be blocked by the implicit optimistic locking mechanism and only finish after the first transaction commits; the same holds if the scheduled method is the first to acquire the lock (correct me if I am wrong).
I cannot figure out why the changes are not persisted when the scheduled method is used. Can someone link documentation or tutorials on this topic so I can find a solution? Or better, if someone has faced a similar problem, how did you solve it?
Finally I managed to resolve the issue by explicitly defining the isolation level for the transaction involved in the process and by eliminating the updateBatch method (it was duplicated functionality, since updateBatchFollowingModification does the same thing). In particular, I set the isolation level on updateBatchFollowingModification to @Transactional(isolation = Isolation.SERIALIZABLE).
This works in my case because no scalability is needed, so serializing these operations does not cause any problems for the application.
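In code, the change described above amounts to something like this (a sketch only; the method body stays as in the question):

@Async("threadPoolTaskExecutor")
@Transactional(isolation = Isolation.SERIALIZABLE)
public void updateBatchFollowingModification(Lotto lotto)
{
    // body unchanged from the original method; the scheduled loop now looks the
    // Lotto up itself and calls this method directly, since updateBatch(id) was removed
}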

JAX-WS Webservice with JPA transactions

I'm going mad with JPA...
I have a JAX-WS webservice like this:
@WebService
public class MyService
{
    @EJB private MyDbService myDbService;
    ...
    System.out.println(myDbService.read());
    ...
}
My EJB contains
@Stateless
public class MyDbService
{
    @PersistenceContext(unitName = "mypu")
    private EntityManager entityManager;

    public MyEntity read()
    {
        MyEntity myEntity;
        String queryString = "SELECT ... WHERE e.name = :type";
        TypedQuery<MyEntity> query = entityManager.createQuery(queryString, MyEntity.class);
        query.setParameter("type", "xyz");
        try
        {
            myEntity = query.getSingleResult();
        }
        catch (Exception e)
        {
            myEntity = null;
        }
        return myEntity;
    }
}
In my persistence.xml, mypu has transaction-type="JTA" and a jta-data-source.
If I call the webservice, it works: the entity is retrieved from the db.
Now, using an external tool, I change the value of one field in my record.
I call the webservice again and ... the entity displayed still contains the old value.
If I redeploy, or if I add an entityManager.refresh(myEntity) after the query, I get the correct value again.
In @MyTwoCents' answer, Option 2 is to NOT use your 'external' tool for changes, but to use your application instead. Caching is of more use if your application knows about all the changes going on, or has some way of being informed of them. This is the better option, but only if your application can be the single access point for the data.
Forcing a refresh, via EntityManager.refresh() or through provider specific query hints on specific queries, or by invalidating the cache as described here https://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching#How_to_refresh_the_cache is another option. This forces JPA to go past the cache and access the database on the specific query. Problems with this are you must either know when the cache is stale and needs to be refreshed, or put it on queries that cannot tolerate stale data. If that is fairly frequent or on every query, then your application is going through all the work of maintaining a cache that isn't used.
The last option is to turn off the second level cache. This forces queries to always load entities into an EntityManager from the database data, not a second level cache. You reduce the risk of stale data (but not eliminate it, as the EntityManager is required to have its own first level cache for managed entities, representing a transactional cache), but at the cost of reloading and rebuilding entities, sometimes unnecessarily if they have been read before by other threads.
Which is best depends entirely on the application and its expected use cases.
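As a concrete illustration of the refresh/bypass option, using only standard JPA APIs and the names from the question (CacheRetrieveMode is javax.persistence.CacheRetrieveMode):

// Re-read a managed entity's state from the database, discarding cached values.
entityManager.refresh(myEntity);

// Or bypass the shared (second-level) cache for one query, using the standard
// JPA 2.0 hint, so this particular query always hits the database.
TypedQuery<MyEntity> query = entityManager.createQuery(queryString, MyEntity.class);
query.setParameter("type", "xyz");
query.setHint("javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS);
MyEntity fresh = query.getSingleResult();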
Don't be mad, it's fine.
The flow goes like this:
You fired a query saying where type = "xyz".
Hibernate keeps this query state in its cache, so that if you fire the query again it returns the same value as long as the state has not changed (from Hibernate's point of view).
Now you update the data from some external resource.
Hibernate doesn't have any clue about that.
So when you fire the query again, it returns the result from the cache.
When you do a refresh, Hibernate gets the details from the database.
Solution:
Either add a refresh before the get call,
OR
change the table values through Hibernate methods in the application, so that Hibernate is aware of the changes,
OR
disable the Hibernate cache so that every query goes to the DB (not recommended, as it will slow things down).

Why does MyBatis close sessions after executing every statement?

I'm using MyBatis on Spring 3. I'm trying to execute the two following queries consecutively,
SELECT SQL_CALC_FOUND_ROWS *
FROM media m, contract_url_${contract_id} c
WHERE m.media_id = c.media_id AND
m.media_id = ${media_id}
LIMIT ${offset}, ${limit}
SELECT FOUND_ROWS()
so that I can retrieve the total number of rows of the first query without executing an additional count(*).
However, the second one always returns 1, so I opened the log, and found out that the SqlSessionDaoSupport class opens a connection for the first query, and closes it (stupidly), and opens a new connection for the second.
How can I fix this?
I am not sure my answer will be 100% accurate since I have no experience with MyBatis, but it sounds like your problem is not exactly related to this framework.
In general, if you don't specify transaction boundaries somehow, each call to the Spring ORM or JDBC API will execute on a connection retrieved for that call from the dataSource/connection pool.
You can either use transactions to make sure you stay on the same connection, or manage the connection manually. I recommend the former, which is how the Spring DB APIs are meant to be used.
Hope this helps.
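To sketch the transaction approach with mybatis-spring (all class, mapper, and method names below are made up): inside a single @Transactional method, mybatis-spring reuses the same SqlSession and therefore the same connection, so FOUND_ROWS() sees the preceding query.

@Service
public class MediaService {

    @Autowired
    private MediaMapper mediaMapper;   // hypothetical MyBatis mapper with the two statements

    @Transactional
    public PagedResult findMedia(long contractId, long mediaId, int offset, int limit) {
        // both statements run on the same connection inside the transaction,
        // so FOUND_ROWS() refers to the SQL_CALC_FOUND_ROWS query above it
        List<Media> rows = mediaMapper.selectMedia(contractId, mediaId, offset, limit);
        int total = mediaMapper.selectFoundRows();   // SELECT FOUND_ROWS()
        return new PagedResult(rows, total);
    }
}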
private DefaultSqlSessionFactory sqlSessionFactory;

@Resource
public void setSqlSessionFactory(DefaultSqlSessionFactory sqlSessionFactory) {
    this.sqlSessionFactory = sqlSessionFactory;
}

SqlSession sqlSession = sqlSessionFactory.openSession();
YourMapper ym = sqlSession.getMapper(YourMapper.class);
ym.getSqlCalcFoundRows();
Integer count = ym.getFoundRows();
sqlSession.commit();
sqlSession.close();

Spring Roo project as batch job without transactions

I have a Roo project that works "fine" with transactions, but each .merge() or .persist() takes longer and longer, so that what should have taken 10 ms takes 5000 ms towards the end of the transaction. Luckily, my changes are individually idempotent, so I don't really need a transaction.
But when I throw out the transaction handling I run into the classic "The context has been closed" when I call myObject.merge().
The job is run from the command line as a batch, so here is what I usually do:
public static void main(final String[] args) {
    context = new ClassPathXmlApplicationContext("META-INF/spring/applicationContext.xml");
    JpaTransactionManager txMgr = (JpaTransactionManager) context.getBean("transactionManager");
    TransactionTemplate txTemplate = new TransactionTemplate(txMgr);
    txTemplate.execute(new TransactionCallback() {
        @SuppressWarnings("finally")
        public Object doInTransaction(TransactionStatus txStatus) {
            try {
                ImportUnitFromDisk importer = new ImportUnitFromDisk();
                int status = importer.run(args[0]);
                System.out.println("Import data complete status: " + status);
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                return null;
            }
        }
    });
    System.out.println("All done!");
    System.exit(0);
}
But what I really want to do is something like this:
public static void main(final String[] args) {
    ImportUnitFromDisk importer = new ImportUnitFromDisk();
    int status = importer.run(args[0]);
    System.out.println("Import data complete status: " + status);
    System.out.println("All done!");
    System.exit(0);
}
What can I do to allow me to persist() and merge() without using transactions, given that the entities are generated with Spring Roo (using OpenJPA and MySQL)?
Cheers
Nik
Even if your changes are idempotent, you will still need a transaction.
As far as performance is concerned:
How tightly coupled are your entity objects? (For instance, if all table FK references are mapped as entity relationships, then it's pretty tightly coupled.)
Maybe you should remove some unwanted bidirectional relationships.
Identify master tables and remove the entity mappings to master records.
What are your cascade options? Check whether you have cascade-all everywhere.
To me it looks like the entity map is far too tightly coupled (everyone knows someone who has ...) and the cascade options kick off a merge of the whole object graph. (Log your JPA SQL; that can validate my assumption.)
I have experienced exactly the same performance problem with a Spring / Hibernate batch process. Note that this has nothing to do with Spring Roo or even Spring - it is due to the workings of Hibernate / JPA.
The basic problem is that Hibernate maintains a session cache of all the Java entities that are part of the transaction, and for new entities (for which bytecode instrumentation has not been done) Hibernate must scan the entities on each flush to see if there were updates. This is at least O(n) for n = # of new entities in the session. If the batch process is primarily adding new entities, then this turns into O(n^2) behavior for the overall batch.
One solution if you want to maintain the whole process in one transaction is to periodically flush (to do inserts/updates) and then evict entities that you no longer need to keep in the session. Another solution is to split the batch process into multiple transactions.
See http://www.basilv.com/psd/blog/2010/avoiding-caching-to-improve-hibernate-performance for more details.
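A sketch of the flush-and-evict variant inside one long transaction (the entity, collection, and helper names here are placeholders, not from the question):

int batchSize = 50;
int count = 0;
for (SourceRecord item : recordsToImport) {        // hypothetical input collection
    entityManager.persist(toEntity(item));          // toEntity(...) is a placeholder mapper
    if (++count % batchSize == 0) {
        entityManager.flush();   // push the pending inserts to the database
        entityManager.clear();   // evict managed entities so dirty checking stays cheap
    }
}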
