MySQL autocommit vs managers and DAOs insert methods - Java

I've searched the net but found no answer so far, at least no clear one.
Suppose you are in the following situation:
@Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
private void usingManagerTest()
{
    List<SomeType> someList = someDao.findAll();
    for (SomeType listItem : someList)
    {
        someManager.create();
    }
}
where someManager.create() sets the fields of an entity, say someEntity, and then calls someDao.create(someEntity).
The MySQL log shows that for every iteration of the for loop, the following queries are performed:
set autocommit = 0
insert into ...
commit
Now suppose you are in the following situation:
@Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
private void usingDaoTest()
{
    List<SomeType> someList = someDao.findAll();
    for (SomeType listItem : someList)
    {
        SomeEntity someEntity = someManager.createEntity();
        someDao.create(someEntity);
    }
}
where the createEntity method calls some setters on a Java entity, and the create is performed by the DAO. This leads to a MySQL log like the following:
set autocommit = 0
insert into ...
insert into ...
insert into ...
...
commit
where the number of insert queries equals the number of iterations of the for loop.
I've read the Spring documentation, but so far it is not clear to me why this happens.
Can anyone explain this behaviour?
Thanks
P.S. I know the title is not clear; any suggestion is welcome.
UPDATE: it seems it works differently from what I said: the log resulting from running usingDaoTest() does not show the autocommit query at all (which is no good for me).
I'm still interested in understanding why the two snippets behave differently, but now I'm also interested in how to achieve the second log result (where all the operations in the for loop are executed between autocommit = 0 and commit).
Thanks again
UPDATE 2: after some more testing I've understood the logic behind @Transactional a bit better, so I did a more specific search and found a solution here.
This discussion can be considered closed, thanks to all.

MySQL performs the operations for you while your transaction is running (which is why autocommit is set to 0). Only after you commit the transaction do the changes take effect on the database tables and become visible to other transactions.
This is the normal situation. However, it is also possible to define transactions whose changes are directly visible to other transactions; this has its up- and downsides.
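As a rough illustration of that last point, here is a minimal sketch (mine, not from the question or answer) of how propagation and isolation can be chosen on Spring-managed transactions. SomeDao and SomeType are the question's own types; the service class is an assumption. Note that with proxy-based Spring AOP, @Transactional is only honored on public methods invoked through the proxy.

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SomeService {

    @Autowired
    private SomeDao someDao;

    // All inserts share one transaction; with the default isolation they become
    // visible to other sessions only when the transaction commits.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void createAll(List<SomeType> items) {
        for (SomeType item : items) {
            someDao.create(item);
        }
    }

    // A reader running at READ_UNCOMMITTED may already see rows inserted by a
    // transaction that has not committed yet ("dirty reads") - the up-/downside
    // mentioned above.
    @Transactional(readOnly = true, isolation = Isolation.READ_UNCOMMITTED)
    public List<SomeType> readIncludingUncommitted() {
        return someDao.findAll();
    }
}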


How to check special conditions before saving data with Hibernate

Sample Scenario
I have a limit that controls the total value of a column. If a save would exceed this limit, I want it to throw an exception. For example:
Suppose I have already added the following data (LIMIT = 20):
id | code | value
---|------|------
 1 | A    | 15
 2 | A    | 5
 3 | B    | 12
 4 | B    | 3
If I insert (A, 2), it exceeds the limit and I want to get an exception.
If I insert (B, 4), the transaction should succeed since it doesn't exceed the limit.
code and value are interrelated.
What can I do?
I can check this scenario with the required queries. For example, I can write a method for it and call it in the save method. That's it.
However, I'm looking for a more convenient solution than this.
For example, is there any annotation I can use when designing the entity?
Can I do this without calling the method that performs this check every time?
What examples can I give?
@UniqueConstraint, which checks whether the same values are added
Using a transaction
The most common and long-accepted way is to simply abstract, in a suitable form (a class, a library, a service, ...), the business rules that govern the behavior you describe, and run them within a transaction:
@Transactional(propagation = Propagation.REQUIRED)
public RetType operation(ReqType args) {
    ...
    perform operations;
    ...
    if (fail post conditions)
        throw ...;
    ...
}
In this case, if there is already an open transaction when the method is called, that transaction is used (and there are no interlocks); if there is none, a new one is created, so that both the operations and the postcondition check run within the same transaction.
Note that with this strategy both the operation and the invariant check can span multiple transactional resources managed by the TransactionManager (e.g. Redis, MySQL, MQS, ... simultaneously and in a coordinated manner).
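Applied to the sample scenario above, a minimal sketch of this pattern might look as follows; EntryRepository, its sumValueByCode query method, the Entry entity and LimitExceededException are all assumptions for illustration, not an existing API:

@Service
public class ValueService {

    private static final int LIMIT = 20;

    @Autowired
    private EntryRepository entryRepository; // hypothetical Spring Data repository

    @Transactional(propagation = Propagation.REQUIRED)
    public Entry addValue(String code, int value) {
        // business rule checked inside the same transaction as the insert
        Integer currentTotal = entryRepository.sumValueByCode(code); // assumed query method, may return null
        int total = currentTotal != null ? currentTotal : 0;
        if (total + value > LIMIT) {
            throw new LimitExceededException(
                    "Adding " + value + " to code " + code + " would exceed the limit of " + LIMIT);
        }
        return entryRepository.save(new Entry(code, value));
    }
}

As discussed further below, concurrent callers can still race past this check unless the isolation level (or explicit locking) prevents it.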
Using only the database
It has fallen out of use (in favor of the first approach), but TRIGGERS were the canonical option some decades ago for checking postconditions. This solution is usually coupled to the specific database engine (e.g. PostgreSQL or MySQL).
It can still be useful when the client making the modifications is unable or cannot be trusted to check the postconditions within a transaction (e.g. bash processes), but nowadays that is infrequent.
TRIGGERS may also be preferable in certain scenarios where efficiency matters, as there are optimization options available within the database scripts.
Neither Hibernate nor Spring Data JPA has anything built-in for this scenario. You have to program the transaction logic in your repository yourself:
@PersistenceContext
EntityManager em;

public void addValue(String code, int value) {
    // JPA returns SUM over an integer column as a Long, and it is null if no rows match
    Long currentSum = em.createQuery(
            "SELECT SUM(e.value) FROM Entity e WHERE e.code = :code", Long.class)
        .setParameter("code", code)
        .getSingleResult();
    long total = currentSum != null ? currentSum : 0L;
    if (total + value > 20) {
        throw new LimitExceededException("attempted to exceed limit for " + code);
    }
    var newEntity = new Entity();
    newEntity.setCode(code);
    newEntity.setValue(value);
    em.persist(newEntity);
}
Then (this is important!) you have to set the SERIALIZABLE isolation level on the @Transactional annotations of the methods that work with this table.
Read more about the serializable isolation level here; they have an oddly similar example.
Note that you have to consider retrying the failed transaction. No idea how to do this with Spring, though.
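As a hedged sketch (not part of this answer), one way to combine serializable isolation with a retry is Spring Retry: @EnableRetry on a configuration class plus @Retryable on the transactional method. ValueRepository and the exception chosen for retry are assumptions here:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.dao.CannotAcquireLockException;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class LimitedValueService {

    @Autowired
    private ValueRepository valueRepository; // hypothetical repository wrapping the addValue(...) logic above

    // Retry a few times when the database aborts the serializable transaction.
    // Which exception actually surfaces depends on the driver and dialect, so
    // retrying on CannotAcquireLockException is an assumption, not a guarantee.
    @Retryable(value = CannotAcquireLockException.class, maxAttempts = 3,
               backoff = @Backoff(delay = 100))
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public void addValueWithRetry(String code, int value) {
        valueRepository.addValue(code, value);
    }
}

Make sure the retry advice wraps the transaction advice, so that each attempt runs in a fresh transaction.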
You should use a singleton (javax.ejb.Singleton):
@Singleton
public class Register {

    @Lock(LockType.WRITE)
    public void register(String code, int value) {
        if (i_can_insert_modify(code, value)) {
            // use entityManager or some DAO
        } else {
            // do something
        }
    }
}

Why isn't my Hibernate insert reflected in my Hibernate query?

I've been asked to write some coded tests for a Hibernate-based data access object.
I figured I'd start with a trivial test: when I save a model, it should be in the collection returned by dao.getTheList(). The problem is that, no matter what, when I call dao.getTheList(), it is always an empty collection.
The application code is already working in production, so let's assume that the problem is just with my test code.
@Test
@Transactional("myTransactionManager")
public void trivialTest() throws Exception {
    ...
    // create the model to insert
    ...
    session.save(model);
    session.flush();

    final Collection<Model> actual = dao.getTheList();

    assertEquals(1, actual.size());
}
The test output is: expected:<1> but was:<0>
So far, I've tried explicitly committing after the insert, and disabling the cache, but that hasn't worked.
I'm not looking to become a master of Hibernate, and I haven't been given enough time to read the entire documentation. Without really knowing where to start, this seemed like it might be a good question for the community.
What can I do to make sure that my Hibernate insert is flushed, committed, de-cached, or whatever it is, before the verification step of the test executes?
[edit] Some additional info on what I've tried. I tried manually committing the transaction between the insert and the call to dao.getTheList(), but I just get the error Could not roll back Hibernate transaction; nested exception is org.hibernate.TransactionException: Transaction not successfully started
@Test
@Transactional("myTransactionManager")
public void trivialTest() throws Exception {
    ...
    // create the model to insert
    ...
    final Transaction firstTransaction = session.beginTransaction();
    session.save(model);
    session.flush();
    firstTransaction.commit();

    final Transaction secondTransaction = session.beginTransaction();
    final Collection<SystemConfiguration> actual = dao.getTheList();
    secondTransaction.commit();

    assertEquals(1, actual.size());
}
I've also tried taking the @Transactional annotation off the test method and annotating each of two helper methods instead, one for each Hibernate job. With that, though, I get the error: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here.
[/edit]
I think the underlying DBMS might hide the change from other transactions as long as the changing transaction has not completed yet. Is getTheList running in a separate transaction? Are you using Oracle or Postgres?
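To illustrate that suspicion with a sketch of my own (not from the answer): if the DAO reads through the same transaction-bound Session, the flushed-but-uncommitted insert is visible; if it opens a separate session/transaction, it is not. The DAO shape and the HQL are assumptions:

import java.util.Collection;

import org.hibernate.SessionFactory;

// Hypothetical DAO implementation: getCurrentSession() participates in the
// test's @Transactional transaction, so it sees rows flushed earlier in that
// same transaction even before commit. Using sessionFactory.openSession()
// instead would read in a separate transaction and miss the uncommitted insert.
public class ModelDaoImpl implements ModelDao {

    private final SessionFactory sessionFactory;

    public ModelDaoImpl(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @SuppressWarnings("unchecked")
    public Collection<Model> getTheList() {
        return sessionFactory.getCurrentSession()
                .createQuery("from Model")
                .list();
    }
}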

How to Hibernate Batch Insert with real time data? Use #Transactional or not?

I am trying to perform batch inserts with data that is currently being inserted into the DB one statement per transaction. The transaction code looks similar to the code below. Currently, the addHolding() method is called for each quote that comes in from an external feed, and these quote updates happen about 150 times per second.
public class HoldingServiceImpl {

    @Autowired
    private HoldingDAO holdingDao;

    @Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
    public void addHolding(Quote quote) {
        Holding holding = transformQuote(quote);
        holdingDao.addHolding(holding);
    }
}
And the DAO gets the current session from the Hibernate SessionFactory and calls save on the object.
public class HoldingDAOImpl {

    @Autowired
    private SessionFactory sessionFactory;

    public void addHolding(Holding holding) {
        sessionFactory.getCurrentSession().save(holding);
    }
}
I have looked at the Hibernate batching documentation, but it is not clear from the document how I would organize the code for batch inserting in this case, since I don't have the full list of data at hand but rather am waiting for it to stream in.
Does merely setting the Hibernate batching properties in the properties file (e.g. hibernate.jdbc.batch_size=20) "magically" batch-insert these? Or will I need to, say, capture each quote update in a synchronized list, then insert the list's contents and clear the list when the batch size limit is reached?
Also, the whole purpose of implementing batching is to see if performance improves. If there is a better way to handle inserts in this scenario, let me know.
Setting the property hibernate.jdbc.batch_size=20 is an indication for Hibernate to flush the objects in groups of 20; in your case Hibernate flushes after 20 records have been saved.
When you call session.save(), the insert is only recorded in the in-memory Hibernate session; only once a flush happens does Hibernate synchronize these changes with the database. Hence setting the Hibernate batch size is enough to do batch inserts. Fine-tune the batch size according to your needs.
Also make sure your transactions are handled properly: committing a transaction also forces Hibernate to flush the session.
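For completeness, here is a sketch of my own (not part of the answer) of the "buffer, then write one batch per transaction" approach the question hints at, layered on top of hibernate.jdbc.batch_size=20. The TransactionTemplate (assumed to be backed by the same HibernateTransactionManager), the buffer size and the service shape are assumptions; note also that Hibernate silently disables JDBC insert batching when the entity uses IDENTITY id generation:

import java.util.ArrayList;
import java.util.List;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.transaction.support.TransactionTemplate;

public class BatchingHoldingServiceImpl {

    private static final int BATCH_SIZE = 20; // matches hibernate.jdbc.batch_size

    @Autowired
    private SessionFactory sessionFactory;

    @Autowired
    private TransactionTemplate transactionTemplate; // built from the same transaction manager

    private final List<Holding> buffer = new ArrayList<Holding>();

    // Called ~150 times per second; writes are deferred until a full batch is buffered.
    public synchronized void addHolding(Quote quote) {
        buffer.add(transformQuote(quote));
        if (buffer.size() >= BATCH_SIZE) {
            List<Holding> batch = new ArrayList<Holding>(buffer);
            buffer.clear();
            writeBatch(batch);
        }
    }

    // One transaction and one flush per batch, so the driver can send the
    // INSERTs as a single JDBC batch.
    private void writeBatch(final List<Holding> batch) {
        transactionTemplate.execute(status -> {
            Session session = sessionFactory.getCurrentSession();
            for (Holding holding : batch) {
                session.save(holding);
            }
            session.flush(); // sends the batched INSERTs
            session.clear(); // frees the first-level cache
            return null;
        });
    }

    private Holding transformQuote(Quote quote) {
        // placeholder: same transformation as in the original service
        return new Holding();
    }
}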

Bizarre Spring Transactional JUnit tests with Hibernate entity is null

I have been annoyed and bothered by this problem for a while and finally worked up an example to show what is happening. Hopefully somebody else has the slightest clue what is going on.
I have a Spring transactional JUnit test with @Rollback(true). The test is wrapped in a Hibernate transaction that automatically rolls back the database changes at the end of the unit test. This appears to be working; however, in this bizarre scenario, with this one query and only in my unit tests, this one @Transactional business logic method returns null.
@Test
@Rollback(true)
public void testObscureIssue() throws Exception {
    // Not important...
    l = createLeague();
    t1 = createTeam(l);
    User u = userBo.getUser(1L, false);
    Player player = TestUtils.getInstance().createTestData(Player.class, 1).get(0);
    player.setUser(u);
    player.setGender("M");
    player.setStartingActivityLevel(ActivityLevelEnum.Sedentary);
    playerBo.addOrUpdate(player);
    TeamPlayer tp = new TeamPlayer(t1, player);
    leagueStructureBo.addOrUpdate(tp);

    // This test will pass 10% of the time, seemingly random. Randomness only inside of unit test
    Team t = playerBo.getCurrentTeam(player.getPlayerID());
    if (t == null) throw new OutOfMemoryError("What is this... I don't even...");

    Team expected = playerBo.getCurrentTeam(player.getPlayerID());
    assertNotNull(expected);
    assertEquals(t1, expected);
}
The method playerBo.getCurrentTeam always returns correctly in the application, and always returns correctly if I place a breakpoint anywhere in the unit test and step through the code one line at a time. However, it fails most of the time when simply running the unit test without debugging.
I thought perhaps there was some race condition occurring here, but even if I put a Thread.sleep(400000L); statement before calling this transactional method, it still fails.
Code for the transactional method:
@Override
@Transactional
public Team getCurrentTeam(long playerId) {
    String qry = "select t from Team as t inner join t.teamPlayers as tp " +
        "inner join tp.player as tpp where tpp.playerID = :playerId and (((current_timestamp() between tp.startDate and tp.endDate " +
        "and tp.endDate is not null) or (tp.endDate is null and current_timestamp() > tp.startDate)))";
    Object wtf = sessionFactory.getCurrentSession().createQuery(qry)
        .setParameter("playerId", new Long(playerId)).uniqueResult();
    return (Team) wtf;
}
The Transaction attributes are all default for a Spring Hibernate4 TransactionManager.
You can see in the code example that I have clearly created this Team entity, and the log shows the generated ID for the new record. I can query the record directly by that ID using HQL and it WILL be returned, but this one HQL query above in this transactional method still returns null unless I step through it in debug mode, in which case it works.
Is this a problem with nested transactions? I was under the impression that nothing gets rolled back until the outermost transaction is rolled back. Why only this one particular method? Is it a bug in Hibernate 4 or Spring 3.1.1? I am using MySQL InnoDB; could this be an issue with the way MySQL InnoDB handles database transactions?
Any suggestions of additional things to try are welcome because I am completely out of ideas here.
My guess is that the problem comes from the use of current_timestamp(). You probably created the TeamPlayer with "now" as the start date. If you put a breakpoint, the current timestamp is systematically bigger than the start date; whereas without a breakpoint the code is usually fast enough for the current timestamp to equal the start date of the team player, which is why the test only passes the roughly 10% of the time that it isn't.
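If that diagnosis is right, one small tweak to act on it (my suggestion, not part of the original answer) is to make the open-ended comparison inclusive, so that a startDate equal to current_timestamp() still matches:

// Same HQL as above, with ">=" instead of ">" on the open-ended branch.
String qry = "select t from Team as t inner join t.teamPlayers as tp " +
    "inner join tp.player as tpp where tpp.playerID = :playerId and " +
    "((current_timestamp() between tp.startDate and tp.endDate and tp.endDate is not null) " +
    "or (tp.endDate is null and current_timestamp() >= tp.startDate))";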

Isolation level SERIALIZABLE in Spring-JDBC

Maybe somebody can help me with a transactional issue in Spring (3.1) / PostgreSQL (8.4.11).
My transactional service is as follows:
@Transactional(isolation = Isolation.SERIALIZABLE, readOnly = false)
@Override
public Foo insertObject(Bar bar) {
    // these methods are just examples
    int x = firstDao.getMaxNumberOfAllowedObjects(bar);
    int y = secondDao.getNumerOfExistingObjects(bar);
    // comparison
    if (x - y > 0) {
        secondDao.insertNewObject(...);
    }
    ....
}
The Spring web app configuration contains:
@Configuration
@EnableTransactionManagement
public class .... {

    @Bean
    public DataSource dataSource() {
        org.apache.tomcat.jdbc.pool.DataSource ds = new DataSource();
        // ... configuration details
        return ds;
    }

    @Bean
    public DataSourceTransactionManager txManager() {
        return new DataSourceTransactionManager(dataSource());
    }
}
Let us say a request "x" and a request "y" execute concurrently and both arrive at the comment "comparison" (in method insertObject). Then both of them are allowed to insert a new object and their transactions are committed.
Why am I not getting a RollbackException? As far as I know, that is what the SERIALIZABLE isolation level is for. Coming back to the previous scenario: if "x" manages to insert a new object and commits its transaction, then "y"'s transaction should not be allowed to commit, since there is a new object he did not read.
That is, if "y" could read the value of secondDao.getNumerOfExistingObjects(bar) again, it would realize that there is one more object. A phantom read?
The transaction configuration seems to be working fine:
For each request I can see the same connection for firstDao and secondDao
A transaction is created every time insertObject is invoked
Both first and second DAOs are as follows:
@Autowired
public void setDataSource(DataSource dataSource) {
    this.jdbcTemplate = new JdbcTemplate(dataSource);
}

@Override
public Object daoMethod(Object param) {
    // uses jdbcTemplate
}
I am sure I am missing something. Any idea?
Thanks for your time,
Javier
TL;DR: Detection of serializability conflicts improved dramatically in Pg 9.1, so upgrade.
It's tricky to figure out from your description what the actual SQL is and why you expect to get a rollback. It looks like you've seriously misunderstood serializable isolation, perhaps thinking it perfectly tests all predicates, which it doesn't, especially not in Pg 8.4.
SERIALIZABLE doesn't perfectly guarantee that the transactions execute as if they were run in series - doing so would be prohibitively expensive from a performance point of view, if it were possible at all. It only provides limited checking. Exactly what is checked and how varies from database to database and version to version, so you need to read the docs for your version of your database.
Anomalies are possible, where two transactions executing in SERIALIZABLE mode produce a different result to if those transactions truly executed in series.
Read the documentation on transaction isolation in Pg to learn more. Note that SERIALIZABLE changed behaviour dramatically in Pg 9.1, so make sure to read the version of the manual appropriate for your Pg version. Here's the 8.4 version. In particular read 13.2.2.1. Serializable Isolation versus True Serializability. Now compare that to the greatly improved predicate locking based serialization support described in the Pg 9.1 docs.
It looks like you're trying to perform logic something like this pseudocode:
count = query("SELECT count(*) FROM the_table");
if (count < threshold):
    query("INSERT INTO the_table (...) VALUES (...)");
If so, that's not going to work in Pg 8.4 when executed concurrently - it's pretty much the same as the anomaly example used in the documentation linked above. Amazingly it actually works on Pg 9.1; I didn't expect even 9.1's predicate locking to catch use of aggregates.
You write that:
Coming back to the previous scenario, if x manages to insert a new
object and commits its transaction, then "y"'s transaction should not
be allowed to commit since there is a new object he did not read.
but 8.4 won't detect that the two transactions are interdependent, something you can trivially prove by using two psql sessions to test it. It's only with the true-serializability stuff introduced in 9.1 that this will work - and frankly, I was surprised it works in 9.1.
If you want to do something like enforce a maximum row count in Pg 8.4, you need to LOCK the table to prevent concurrent INSERTs, doing the locking either manually or via a trigger function. Doing it in a trigger will inherently require a lock promotion and thus will frequently deadlock, but will successfully do the job. It's better done in the application, where you can issue the LOCK TABLE my_table IN EXCLUSIVE MODE before even SELECTing from the table, so it already has the highest lock mode it will need on the table and thus shouldn't need deadlock-prone lock promotion. The EXCLUSIVE lock mode is appropriate because it permits SELECTs but nothing else.
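Translated into the Spring-JDBC setup from the question, a hedged sketch of that LOCK TABLE approach could look like this; the table name, the injected JdbcTemplate in the service, the assumption that insertNewObject returns the created Foo, and LimitExceededException are all placeholders, not the question's actual code:

@Override
@Transactional  // the explicit lock, not the isolation level, prevents the race here
public Foo insertObject(Bar bar) {
    // Take the table lock first, so no concurrent transaction can INSERT between
    // our count and our insert; plain SELECTs by other sessions are still allowed.
    jdbcTemplate.execute("LOCK TABLE my_table IN EXCLUSIVE MODE");

    int allowed = firstDao.getMaxNumberOfAllowedObjects(bar);
    int existing = secondDao.getNumerOfExistingObjects(bar);

    if (existing >= allowed) {
        throw new LimitExceededException("row limit reached");
    }
    return secondDao.insertNewObject(bar); // assumed to return the created Foo
}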
Here's how to test it in two psql sessions:
SESSION 1                                      SESSION 2

create table ser_test( x text );

BEGIN TRANSACTION
ISOLATION LEVEL SERIALIZABLE;
                                               BEGIN TRANSACTION
                                               ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM ser_test;
                                               SELECT count(*) FROM ser_test;
INSERT INTO ser_test(x) VALUES ('bob');
                                               INSERT INTO ser_test(x) VALUES ('bob');
COMMIT;
                                               COMMIT;
When run on Pg 9.1, the first COMMIT succeeds, then the second COMMIT fails with:
regress=# COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
DETAIL: Reason code: Canceled on identification as a pivot, during commit attempt.
HINT: The transaction might succeed if retried.
but when run on 8.4 both COMMITs succeed, because 8.4 didn't have the predicate locking code for serializability that was added in 9.1.
