I'm using MyBatis on Spring 3. Now I'm trying to execute the following two queries consecutively,
SELECT SQL_CALC_FOUND_ROWS *
FROM media m, contract_url_${contract_id} c
WHERE m.media_id = c.media_id AND
m.media_id = ${media_id}
LIMIT ${offset}, ${limit}
SELECT FOUND_ROWS()
so that I can retrieve the total rows of the first query without executing count(*) additionally.
However, the second one always returns 1, so I opened the log and found out that the SqlSessionDaoSupport class opens a connection for the first query, closes it, and opens a new connection for the second.
How can I fix this?
I am not sure my answer will be 100% accurate since I have no experience with MyBatis, but it sounds like your problem is not exactly related to this framework.
In general, if you don't specify transaction boundaries somehow, each call to the Spring ORM or JDBC API will execute on a connection retrieved for that call from the dataSource/connection pool.
You can either use transactions to make sure you stay on the same connection, or manage the connection manually. I recommend the former, which is how the Spring DB APIs are meant to be used (see the sketch after the code below).
hope this helps
@Resource
public void setSqlSessionFactory(DefaultSqlSessionFactory sqlSessionFactory) {
    this.sqlSessionFactory = sqlSessionFactory;
}

// Open one session manually so that both statements run on the same connection
SqlSession sqlSession = sqlSessionFactory.openSession();
try {
    YourMapper ym = sqlSession.getMapper(YourMapper.class);
    ym.getSqlCalcFoundRows();
    Integer count = ym.getFoundRows();
    sqlSession.commit();
} finally {
    sqlSession.close();
}
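For the transaction-based route recommended above, here is a minimal sketch using mybatis-spring. It assumes the two statements are mapped on YourMapper and that a DataSourceTransactionManager with annotation-driven transactions is configured for the same DataSource; MediaService and the method names are illustrative only.

@Service
public class MediaService {

    @Autowired
    private YourMapper yourMapper; // mapper bean registered via mybatis-spring

    // Both mapper calls participate in one Spring-managed transaction and
    // therefore run on the same JDBC connection, so FOUND_ROWS() sees the
    // result of the preceding SQL_CALC_FOUND_ROWS query.
    @Transactional
    public int loadMediaAndCount() {
        yourMapper.getSqlCalcFoundRows(); // first SELECT (parameters omitted for brevity)
        return yourMapper.getFoundRows(); // SELECT FOUND_ROWS()
    }
}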
Background
We have two services written in Java - one handles database operations on different files (CRUD on the database), the other handles long-running processing of those records (complicated background tasks). Put simply, they are producer and consumer.
Supposed behavior is as follows:
Service 1 (uses the code below):
Store file into DB
If the file is of type 'C' put it into message queue for further processing
Service 2:
Receive the message from message queue
Load the file from the database (by ID)
Perform further processing
The code of Service 1 is as follows (I changed some names for corporate reasons)
private void persist() throws Exception {
    try (SqlSession sqlSession = sessionFactory.openSession()) {
        FileType fileType = FileType.fromFileName(filename);
        FileEntity dto = new FileEntity(filename, currentTime(), null, user.getName(), count, data);
        oracleFileStore.create(sqlSession, dto);
        auditLog.logFileUploaded(user, filename, count);
        sqlSession.commit();
        if (fileType == FileType.C) {
            mqClient.submit(new Record(dto.getId(), dto.getName(), user));
            auditLog.logCFileDetected(user, filename);
        }
    }
}
Additional info
ActiveMQ 5.15 is used for message queue
Database is Oracle 12c
Database is handled by Mybatis 3.4.1
Problem
From time to time it happens that Service 2 receives the message from the MQ, tries to read the file from the database and, surprisingly, the file is not there. The incident is pretty rare, but it happens. When we check the database afterwards, the file is there. It almost looks like the background processing of the file started before the file was put into the database.
Questions
Is it possible that the MQ call could be faster than the database commit? I created the file in the DB, called commit, and only after that did I put the message into the MQ. The MQ message even contains the ID, which is generated by the database itself (a sequence).
Does the connection need to be closed to be sure the commit was performed? I always thought that when I commit, the data is in the database regardless of whether my transaction has ended or not.
Can the problem be Mybatis? I've read about some problems regarding Mybatis transactions/sessions, but none of them seem similar to my problem.
Update
I can provide some additional code, although please understand that I cannot share everything for corporate reasons. If you don't see anything obvious in it, that's fine. Unfortunately I cannot go much deeper into the analysis than this.
Also, I basically wanted to confirm whether my understanding of SQL and Mybatis is correct, so I can mark such a response as correct as well.
SessionFactory.java (excerpt)
private SqlSessionFactory createLegacySessionFactory(DataSource dataSource) throws Exception
{
    Configuration configuration = prepareConfiguration(dataSource);
    return new SqlSessionFactoryBuilder().build(configuration);
}

//javax.sql.DataSource
private Configuration prepareConfiguration(DataSource dataSource)
{
    //classes from package org.apache.ibatis
    TransactionFactory transactionFactory = new JdbcTransactionFactory();
    Environment environment = new Environment("development", transactionFactory, dataSource);
    Configuration configuration = new Configuration(environment);
    addSettings(configuration);
    addTypeAliases(configuration);
    addTypeHandlers(configuration);
    configuration.addMapper(PermissionMapper.class);
    addMapperXMLs(configuration); //just add all the XML mappers
    return configuration;
}

public SqlSession openSession()
{
    //Initialization of the factory is above
    return new ForceCommitSqlSession(factory.openSession());
}
ForceCommitSqlSession.java (excerpt)
/**
 * ForceCommitSqlSession is a wrapper around the mybatis {@link SqlSession}.
 * <p>
 * Its purpose is to force commit/rollback during standard commit/rollback operations. The default implementation (according to the javadoc)
 * does not commit/rollback if there were no changes to the database - this can lead to problems when operations are executed outside the mybatis
 * session (e.g. via {@link #getConnection()}).
 */
public class ForceCommitSqlSession implements SqlSession
{
    private final SqlSession session;

    /**
     * Force the commit all the time (despite the "generic contract")
     */
    @Override
    public void commit()
    {
        session.commit(true);
    }

    /**
     * Force the rollback all the time (despite the "generic contract")
     */
    @Override
    public void rollback()
    {
        session.rollback(true);
    }

    @Override
    public int insert(String statement)
    {
        return session.insert(statement);
    }

    ....
}
OracleFileStore.java (excerpt)
public int create(SqlSession session, FileEntity fileEntity) throws Exception
{
    //the mybatis xml is a simple insert SQL query
    return session.insert(STATEMENT_CREATE, fileEntity);
}
Is it possible that the MQ call could be faster than the database commit?
If the database commit is done, the changes are in the database. The creation of the task in the queue happens after that. The main thing here is that you need to check that the commit does happen synchronously when you invoke commit on the session. From the configuration you provided so far it seems OK, unless there is some mangling with the Connection itself. I can imagine that there is some wrapper over the native Connection, for example. I would check in a debugger that the commit call causes a call of Connection.commit on the implementation from the Oracle JDBC driver. It is even better to check the logs on the DB side.
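If you want something more repeatable than a one-off debugger check, here is a small illustrative sketch (not part of the original setup; the class name is invented) that wraps the DataSource handed to prepareConfiguration so that every Connection.commit() is logged. Seeing the log entry before the MQ submit in the application log would confirm the commit is synchronous.

import java.lang.reflect.Proxy;
import java.sql.Connection;
import javax.sql.DataSource;

// Debugging aid only: logs every JDBC commit issued through the wrapped DataSource.
public final class CommitLoggingDataSource {

    public static DataSource wrap(DataSource target) {
        return (DataSource) Proxy.newProxyInstance(
                DataSource.class.getClassLoader(),
                new Class<?>[]{DataSource.class},
                (proxy, method, args) -> {
                    Object result = method.invoke(target, args);
                    // Wrap every Connection handed out by the pool/driver
                    return result instanceof Connection
                            ? wrapConnection((Connection) result)
                            : result;
                });
    }

    private static Connection wrapConnection(Connection target) {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, args) -> {
                    if ("commit".equals(method.getName())) {
                        System.out.println("JDBC commit on " + target);
                    }
                    return method.invoke(target, args);
                });
    }
}

Wiring it in means passing CommitLoggingDataSource.wrap(dataSource) into createLegacySessionFactory instead of the raw DataSource.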
Does the connection need to be closed to be sure the commit was performed? I always thought that when I commit, it's in the database regardless of whether my transaction ended or not.
You are correct. There is no need to close a connection that obeys the JDBC specification (a native JDBC connection does that). Of course, you can always create some wrapper that does not obey the Connection API and does some magic (like delaying the commit until the connection is closed).
Can the problem be Mybatis? I've read some problems regarding Mybatis transactions/sessions but it doesn't seem similar to my problem
I would say it is unlikely. You are using JdbcTransactionFactory, which does commit to the database. You need to track what happens on commit to be sure.
Have you checked that the problem is not on the reader side? For example, it may use a long transaction with the SERIALIZABLE isolation level; in that case it wouldn't be able to see changes committed to the database after its snapshot was taken.
In Postgres, if replication is used and replicas serve the read queries, the reader may see outdated data even if the commit completed successfully on the master. I'm not that familiar with Oracle, but it seems that if replication is used you may see the same issue:
A table snapshot is a transaction-consistent reflection of its master data as that data existed at a specific point in time. To keep a snapshot's data relatively current with the data of its master, Oracle must periodically refresh the snapshot
I would check the setup of the DB to know if this is the case. If replication is used, you need to change your approach to this.
I'm using Spring and JdbcTemplate to manage database access, but I build the actual SQL queries using jOOQ. For instance, one DAO may look like the following:
public List<DrupalTaxonomyLocationTerm> getLocations(String value, String language) throws DataAccessException {
    DSLContext ctx = DSL.using(getJdbcTemplate().getDataSource(), SQLDialect.MYSQL);
    SelectQuery<Record> q = ctx.selectQuery();
    q.addSelect(field("entity_id").as("id"));
    q.addFrom(table("entity").as("e"));
    [...]
}
As you can see from the above, I'm building and executing queries using jOOQ. Does Spring still take care of closing the ResultSet I get back from jOOQ, or do I somehow "bypass" Spring when I access the data source directly and pass it on to jOOQ?
Spring doesn't do anything with the objects generated from your DataSource, i.e. Connection, PreparedStatement, ResultSet. From a Spring perspective (or generally from a DataSource perspective), you have to do that yourself.
However, jOOQ will always:
close Connection objects obtained from a DataSource. This is documented in jOOQ's DataSourceConnectionProvider
close PreparedStatement objects right after executing them - unless you explicitly tell jOOQ to keep an open reference through Query.keepStatement()
close ResultSet objects right after consuming them through any ResultQuery.fetchXXX() method - unless you explicitly want to keep an open Cursor with ResultQuery.fetchLazy()
By design, jOOQ inverts JDBC's default behaviour of keeping all resources open and having users tediously close them explicitly. jOOQ closes all resources eagerly (which is what people do 95% of the time) and allows you to explicitly keep resources open where this is useful for performance reasons.
See this page of the jOOQ manual for differences between jOOQ and JDBC.
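To make the three points above concrete, here is a short illustration of my own (the table and field names are invented; it assumes the same kind of DataSource-backed DSLContext as in the question):

import static org.jooq.impl.DSL.field;
import static org.jooq.impl.DSL.table;

import org.jooq.Cursor;
import org.jooq.DSLContext;
import org.jooq.Record;
import org.jooq.Result;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

public class JooqResourceExample {

    public void run(javax.sql.DataSource dataSource) {
        DSLContext ctx = DSL.using(dataSource, SQLDialect.MYSQL);

        // Eager fetch: jOOQ consumes the ResultSet inside fetch() and closes
        // the ResultSet, PreparedStatement and Connection before returning.
        Result<?> all = ctx.select(field("entity_id"), field("title"))
                           .from(table("entity"))
                           .fetch();

        // Lazy fetch: resources stay open until the Cursor is closed,
        // so close it explicitly, e.g. with try-with-resources.
        try (Cursor<Record> cursor = ctx.selectFrom(table("entity")).fetchLazy()) {
            for (Record record : cursor) {
                // process one record at a time
            }
        }
    }
}

With fetch(), everything is closed before the method returns; with fetchLazy(), the try-with-resources block is what releases the underlying JDBC resources.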
Currently we are using Play 1.2.5 with Java and MySQL. We have a simple JPA model (a Play entity extending the Model class) that we save to the database:

SimpleModel test = new SimpleModel();
test.foo = "bar";
test.save();
At each web request we save multiple instances of the SimpleModel, for example:
JPAPlugin.startTx(false);
for (int i = 0; i < 5000; i++) {
    SimpleModel test = new SimpleModel();
    test.foo = "bar";
    test.save();
}
JPAPlugin.closeTx(false);
We are using the JPAPlugin.startTx and closeTx to manually start and end the transaction.
Everything works fine if there is only one request executing the transaction.
What we noticed is that if a second request tries to execute the loop simultaneously, the second request gets a "Lock wait timeout exceeded; try restarting transaction" (javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: could not insert: [SimpleModel]), since the first request locks the table and is not done before the second request times out.
This results in multiple:
ERROR AssertionFailure:45 - an assertion failure occured (this may indicate a bug in Hibernate, but is more likely due to unsafe use of the session)
org.hibernate.AssertionFailure: null id in SimpleModel entry (don't flush the Session after an exception occurs)
Another side effect is that the CPU usage during the inserts goes crazy.
To fix this, I'm thinking of creating a transaction-aware queue to insert the entities sequentially, but this would result in huge insert times.
What is the correct way to handle this situation?
The JPAPlugin in Play Framework 1.2.5 is not thread-safe, and you will not resolve this using this version of Play.
That problem is fixed in Play 2.x, but if you can't migrate, try to use Hibernate directly.
You should not need to handle transactions yourself in this scenario.
Instead either put your inserts in a controller method or in an asynchronous job if the task is time consuming.
Jobs and controllers both handle transactions.
However, check that this is really what you are trying to achieve. Each HTTP request creating 5000 records does not seem realistic. Perhaps it would make more sense to have a container model with a collection?
Do you really need a transaction for the entire insert? Does it matter if the database is not locked during the data import?
You can simply create a job and execute it for each insert:
for (int i = 0; i < 5000; i++) {
    new Job() {
        @Override
        public void doJob() {
            SimpleModel test = new SimpleModel();
            test.foo = "bar";
            test.save();
        }
    }.now();
}
This will create a single transaction for each insert and get rid of your database lock issue.
Maybe somebody can help me with a transactional issue in Spring (3.1) / PostgreSQL (8.4.11).
My transactional service is as follows:
@Transactional(isolation = Isolation.SERIALIZABLE, readOnly = false)
@Override
public Foo insertObject(Bar bar) {
    // these methods are just examples
    int x = firstDao.getMaxNumberOfAllowedObjects(bar);
    int y = secondDao.getNumerOfExistingObjects(bar);
    // comparison
    if (x - y > 0) {
        secondDao.insertNewObject(...);
    }
    ....
}
The Spring configuration of the webapp contains:
@Configuration
@EnableTransactionManagement
public class .... {

    @Bean
    public DataSource dataSource() {
        org.apache.tomcat.jdbc.pool.DataSource ds = new org.apache.tomcat.jdbc.pool.DataSource();
        // ....configuration details
        return ds;
    }

    @Bean
    public DataSourceTransactionManager txManager() {
        return new DataSourceTransactionManager(dataSource());
    }
}
Let us say a request "x" and a request "y" execute concurrently and both arrive at the comment "comparison" (method insertObject). Then both of them are allowed to insert a new object and their transactions are committed.
Why am I not getting a RollbackException? As far as I know, that is what the SERIALIZABLE isolation level is for. Coming back to the previous scenario: if "x" manages to insert a new object and commits its transaction, then "y"'s transaction should not be allowed to commit, since there is a new object it did not read.
That is, if "y" could read the value of secondDao.getNumerOfExistingObjects(bar) again, it would realize that there is one more object. A phantom read?
The transaction configuration seems to be working fine:
For each request I can see the same connection for firstDao and secondDao
A transaction is created every time insertObject is invoked
Both first and second DAOs are as follows:
@Autowired
public void setDataSource(DataSource dataSource) {
    this.jdbcTemplate = new JdbcTemplate(dataSource);
}

@Override
public Object daoMethod(Object param) {
    //uses jdbcTemplate
}
I am sure I am missing something. Any idea?
Thanks for your time,
Javier
TL;DR: Detection of serializability conflicts improved dramatically in Pg 9.1, so upgrade.
It's tricky to figure out from your description what the actual SQL is and why you expect to get a rollback. It looks like you've seriously misunderstood serializable isolation, perhaps thinking it perfectly tests all predicates, which it doesn't, especially not in Pg 8.4.
SERIALIZABLE doesn't perfectly guarantee that the transactions execute as if they were run in series - doing so would be prohibitively expensive from a performance point of view, if it were possible at all. It only provides limited checking. Exactly what is checked and how varies from database to database and version to version, so you need to read the docs for your version of your database.
Anomalies are possible, where two transactions executing in SERIALIZABLE mode produce a different result than if those transactions had truly executed in series.
Read the documentation on transaction isolation in Pg to learn more. Note that SERIALIZABLE changed behaviour dramatically in Pg 9.1, so make sure to read the version of the manual appropriate for your Pg version. Here's the 8.4 version. In particular read 13.2.2.1. Serializable Isolation versus True Serializability. Now compare that to the greatly improved predicate locking based serialization support described in the Pg 9.1 docs.
It looks like you're trying to perform logic something like this pseudocode:
count = query("SELECT count(*) FROM the_table");
if (count < threshold):
query("INSERT INTO the_table (...) VALUES (...)");
If so, that's not going to work in Pg 8.4 when executed concurrently - it's pretty much the same as the anomaly example used in the documentation linked above. Amazingly it actually works on Pg 9.1; I didn't expect even 9.1's predicate locking to catch use of aggregates.
You write that:
Coming back to the previous scenario, if x manages to insert a new object and commits its transaction, then "y"'s transaction should not be allowed to commit since there is a new object he did not read.
but 8.4 won't detect that the two transactions are interdependent, something you can trivially prove by using two psql sessions to test it. It's only with the true-serializability stuff introduced in 9.1 that this will work - and frankly, I was surprised it works in 9.1.
If you want to do something like enforce a maximum row count in Pg 8.4, you need to LOCK the table to prevent concurrent INSERTs, doing the locking either manually or via a trigger function. Doing it in a trigger will inherently require a lock promotion and thus will frequently deadlock, but will successfully do the job. It's better done in the application, where you can issue the LOCK TABLE my_table IN EXCLUSIVE MODE before even SELECTing from the table, so the transaction already holds the highest lock mode it will need on the table and thus shouldn't need deadlock-prone lock promotion. The EXCLUSIVE lock mode is appropriate because it permits SELECTs but nothing else.
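Translated into the Spring/JdbcTemplate style used in the question, a sketch of that lock-first approach could look like the following (the table, column, and method names are invented for illustration; plain READ COMMITTED is sufficient once the table is locked):

@Transactional
public void insertIfBelowLimit(String value, int threshold) {
    // Take the EXCLUSIVE lock before any read so that no concurrent INSERT
    // can sneak in between the count and the insert (SELECTs are still allowed).
    jdbcTemplate.execute("LOCK TABLE the_table IN EXCLUSIVE MODE");

    int count = jdbcTemplate.queryForObject("SELECT count(*) FROM the_table", Integer.class);
    if (count < threshold) {
        jdbcTemplate.update("INSERT INTO the_table (x) VALUES (?)", value);
    }
}

The lock is released automatically when the surrounding transaction commits or rolls back.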
Here's how to test it in two psql sessions:
SESSION 1:
    create table ser_test( x text );
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;

SESSION 2:
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;

SESSION 1:
    SELECT count(*) FROM ser_test;

SESSION 2:
    SELECT count(*) FROM ser_test;

SESSION 1:
    INSERT INTO ser_test(x) VALUES ('bob');

SESSION 2:
    INSERT INTO ser_test(x) VALUES ('bob');

SESSION 1:
    COMMIT;

SESSION 2:
    COMMIT;
When run on Pg 9.1, the first COMMIT succeeds, then the second COMMIT fails with:
regress=# COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
DETAIL: Reason code: Canceled on identification as a pivot, during commit attempt.
HINT: The transaction might succeed if retried.
but when run on 8.4 both COMMITs succeed, because 8.4 didn't have the predicate locking code for serializability that was added in 9.1.
I have one entity named Transaction and its related table in the database is TAB_TRANSACTIONS. The whole system is working pretty well; now a new requirement has come up in which the client has demanded that all transactions older than 30 days be moved to an archive table, e.g. TAB_TRANSACTIONS_HIST.
Currently, as a workaround, I have given them a script scheduled to run every 24 hours, which simply moves the data from the source to the destination.
I was wondering is there any better solution to this using hibernate?
Can I fetch Transaction entities and then store them in TAB_TRANSACTIONS_HISTORY? I have looked at many similar questions but couldn't find a solution to that, any suggestions would help.
You may want to create a Quartz scheduler for this task. Here is the Job for the scheduler:
public class DatabaseBackupJob implements Job {

    public void execute(JobExecutionContext jec) throws JobExecutionException {
        Configuration cfg = new Configuration();
        cfg.configure("hibernate.cfg.xml");
        Session session = cfg.buildSessionFactory().openSession();
        Query q = session.createQuery(
                "insert into Tab_Transaction_History(trans) "
                + "select t.trans as trans from Tab_Transaction t where t.date < :date")
                .setParameter("date", reqDate);
        try {
            Transaction t = session.beginTransaction();
            q.executeUpdate();
            t.commit();
        } catch (Exception e) {
            // log and handle the failure
        } finally {
            session.close();
        }
    }
}
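For completeness, a sketch of how such a job could be wired up and triggered every 24 hours (this assumes Quartz 2.x; the identity names and the main-method wiring are illustrative):

import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class ArchiveJobRunner {

    public static void main(String[] args) throws Exception {
        JobDetail job = JobBuilder.newJob(DatabaseBackupJob.class)
                .withIdentity("archiveTransactions")
                .build();

        // Fire immediately, then repeat every 24 hours
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("archiveTransactionsTrigger")
                .startNow()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInHours(24)
                        .repeatForever())
                .build();

        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();
        scheduler.scheduleJob(job, trigger);
    }
}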
P.S. Hibernate does not provide a scheduler, so you cannot perform this activity using core Hibernate alone; hence you need an external API like the Quartz scheduler.
The solution you are looking for may be achieved only if you rely on TWO different persistence contexts, I think.
A single persistence context maps entities to tables in a non-dynamic way, so you can't perform a "runtime switch" from one mapped table to another.
But you can create a different persistence context (or a parallel configuration in Hibernate, instead of using two different contexts), then load this new configuration in a different EntityManager and perform all your tasks there.
That's the only solution that comes to mind, at the moment. Really don't know if it's adequate...
I think it's a good idea to run the script every 24 hours.
You could decrease the interval if you're not happy with it.
But if you already have a working script, where is your actual problem?
Checking the age of all transactions and moving the ones older than 30 days to another list or map is the best way, I think.
You will need some kind of schedule mechanism. Either a thread that is woken up periodically, or some other trigger that is appropriate for you.
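As an illustration of the "thread that is woken up periodically" option, here is a minimal sketch of my own (ArchiveService and its method are invented placeholders for whatever performs the actual move):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ArchivingScheduler {

    // Placeholder for the component that actually moves the old records
    interface ArchiveService {
        void moveTransactionsOlderThanDays(int days);
    }

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start(ArchiveService archiveService) {
        // Wake up once a day and move everything older than 30 days
        scheduler.scheduleAtFixedRate(
                () -> archiveService.moveTransactionsOlderThanDays(30),
                0, 24, TimeUnit.HOURS);
    }
}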
You can also use a bulk insert operation
Query q = session.createQuery(
        "insert into TabTransactionHistory (.....) "
        + "select .... from TabTransaction tt");
int createdObjects = q.executeUpdate();
(Replace ... with actual fields)
You can also use the "where clause" which can be used to trim down result on basis of how old the entries are.