Bizarre Spring @Transactional JUnit test with Hibernate: entity is null - java

I have been annoyed and bothered by this problem for a while and finally worked up an example to show what is happening. Hopefully somebody else has the slightest clue what is going on.
I have a Spring Transactional JUnit test with @Rollback(true). The test is wrapped in a Hibernate transaction that rolls back the database changes automatically at the end of the unit test. This appears to be working; however, in this bizarre scenario, with this one query and only in my unit tests, this one @Transactional business logic method returns null.
@Test
@Rollback(true)
public void testObscureIssue() throws Exception {
    // Not important...
    l = createLeague();
    t1 = createTeam(l);
    User u = userBo.getUser(1L, false);
    Player player = TestUtils.getInstance().createTestData(Player.class, 1).get(0);
    player.setUser(u);
    player.setGender("M");
    player.setStartingActivityLevel(ActivityLevelEnum.Sedentary);
    playerBo.addOrUpdate(player);
    TeamPlayer tp = new TeamPlayer(t1, player);
    leagueStructureBo.addOrUpdate(tp);
    // This test will pass 10% of the time, seemingly at random. Randomness only inside of unit test
    Team t = playerBo.getCurrentTeam(player.getPlayerID());
    if (t == null) throw new OutOfMemoryError("What is this... I don't even...");
    Team expected = playerBo.getCurrentTeam(player.getPlayerID());
    assertNotNull(expected);
    assertEquals(t1, expected);
}
So the method playerBo.getCurrentTeam always returns correctly in the application, and always returns correctly if I place a breakpoint anywhere in the unit test and step through the code one line at a time. Most of the time, however, it fails when simply running the unit test without debugging.
I thought perhaps there was a race condition occurring here, but even if I put a Thread.sleep(400000L); statement before calling this Transactional method it still fails.
Code for the transactional method:
@Override
@Transactional
public Team getCurrentTeam(long playerId) {
    String qry = "select t from Team as t inner join t.teamPlayers as tp " +
        "inner join tp.player as tpp where tpp.playerID = :playerId and (((current_timestamp() between tp.startDate and tp.endDate " +
        "and tp.endDate is not null) or (tp.endDate is null and current_timestamp() > tp.startDate)))";
    Object wtf = sessionFactory.getCurrentSession().createQuery(qry)
        .setParameter("playerId", new Long(playerId)).uniqueResult();
    return (Team) wtf;
}
The Transaction attributes are all default for a Spring Hibernate4 TransactionManager.
You can see in the code example that I have clearly created this Team entity, and the log shows the generated ID for the new record. I can query the record directly by that ID using HQL and it WILL return, but the one HQL query above in this Transactional method still returns null unless I step through it in debug mode, in which case it works.
Is this a problem with nested transactions? I was under the impression that nothing gets rolled back until the outermost transaction is rolled back. Why only on this one particular method? Is it a bug in Hibernate 4 or Spring 3.1.1? I am using MySQL InnoDB; could this be an issue with the way MySQL InnoDB handles database transactions?
Any suggestions of additional things to try are welcome because I am completely out of ideas here.

My guess is that the problem comes from the use of current_timestamp(). You probably created the TeamPlayer with now as the start date. If you put a breakpoint, the current timestamp is systematically bigger than the start date, whereas if you don't put a breakpoint, the code is fast enough, 10% of the time, for the current timestamp to equal the start date of the team player.
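The suspected edge case can be reproduced outside Hibernate. A minimal plain-Java sketch (class and method names are illustrative, not from the original code): the HQL predicate uses a strict current_timestamp() > tp.startDate, so a TeamPlayer whose startDate equals the query's timestamp is excluded, while an inclusive comparison would keep it.

```java
import java.time.Instant;

public class MembershipWindow {
    // Mirrors the HQL predicate: (now between start and end, end not null)
    // or (end is null and now > start). Note the strict '>' on start.
    static boolean isCurrentStrict(Instant now, Instant start, Instant end) {
        if (end != null) {
            return !now.isBefore(start) && !now.isAfter(end);
        }
        return now.isAfter(start); // strict: fails when now equals start
    }

    // Tolerant variant: treat "now == start" as already started.
    static boolean isCurrentInclusive(Instant now, Instant start, Instant end) {
        if (end != null) {
            return !now.isBefore(start) && !now.isAfter(end);
        }
        return !now.isBefore(start); // now >= start
    }

    public static void main(String[] args) {
        Instant t = Instant.parse("2024-01-01T00:00:00Z");
        // Row inserted in the same instant the query runs:
        System.out.println(isCurrentStrict(t, t, null));    // false -> "random" test failures
        System.out.println(isCurrentInclusive(t, t, null)); // true
    }
}
```

If this is indeed the cause, changing the HQL to `current_timestamp() >= tp.startDate` (or back-dating the startDate in the test fixture) should make the test deterministic.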

Related

How to check special conditions before saving data with Hibernate

Sample Scenario
I have a limit that controls the total value of a column. If I make a save that exceeds this limit, I want it to throw an exception. For example;
Suppose I have already added the following data (LIMIT = 20):

id | code | value
---|------|------
1  | A    | 15
2  | A    | 5
3  | B    | 12
4  | B    | 3
If I insert (A, 2) it exceeds the limit and I want to get an exception.
If I insert (B, 4) the transaction should be successful, since it didn't exceed the limit.
code and value are interrelated.
What can I do?
I can check this scenario with the required queries. For example, I can write a method for it and call it in the save method. That works.
However, I'm looking for a more useful solution than this.
For example, is there any annotation I can use when designing the entity?
Can I do this without calling the method that provides this check every time?
What examples can I give?
@UniqueConstraint checking if the same values are added
Using transaction
The most common and long-accepted way is to simply abstract in a suitable form (in a class, a library, a service, ...) the business rules that govern the behavior you describe, within a transaction:
@Transactional(propagation = Propagation.REQUIRED)
public RetType operation(ReqType args) {
    ...
    perform operations;
    ...
    if (fail post conditions)
        throw ...;
    ...
}
In this case, if there is already an open transaction when the method is called, that transaction will be used (and there will be no deadlocks); if there is none, a new one will be created, so that both the operations and the postcondition check are performed within the same transaction.
Note that with this strategy the operations and the invariant check can span multiple transactional resources managed by the TransactionManager (e.g. Redis, MySQL, MQS, ... simultaneously and in a coordinated manner).
Using only the database
This approach has long been out of favor (superseded by the first way), but TRIGGERS were the canonical option some decades ago for checking postconditions. The solution is usually coupled to a specific database engine (e.g. PostgreSQL or MySQL).
It could be useful when the client making the modifications is unable, or cannot safely be trusted, to check postconditions within a transaction (e.g. bash processes). But nowadays this is infrequent.
The use of TRIGGERS may also be preferable in certain scenarios where efficiency is required, as there are certain optimization options within the database scripts.
Neither Hibernate nor Spring Data JPA have anything built-in for this scenario. You have to program the transaction logic in your repository yourself:
@PersistenceContext
EntityManager em;

public void addValue(String code, int value) {
    // SUM over an integer column returns a Long in JPQL; COALESCE covers the no-rows case
    var checkQuery = em.createQuery("SELECT COALESCE(SUM(e.value), 0) FROM Entity e WHERE e.code = :code", Long.class);
    checkQuery.setParameter("code", code);
    if (checkQuery.getSingleResult() + value > 20) {
        throw new LimitExceededException("attempted to exceed limit for " + code);
    }
    var newEntity = new Entity();
    newEntity.setCode(code);
    newEntity.setValue(value);
    em.persist(newEntity);
}
Then (it's important!) you have to define the SERIALIZABLE isolation level on the @Transactional annotations of the methods that work with this table.
Read up on the serializable isolation level; the documentation for it has an oddly similar example.
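With Spring's annotation-driven transactions, that would look roughly like this (a sketch assuming the standard org.springframework.transaction.annotation Isolation enum; the method body is the repository code above):

```java
// Illustrative fragment: pin the methods touching this table to SERIALIZABLE,
// so the SUM check and the INSERT cannot interleave with another writer.
@Transactional(isolation = Isolation.SERIALIZABLE)
public void addValue(String code, int value) {
    // check-then-insert logic as shown above
}
```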
Note that you have to consider retrying the failed transaction. No idea how to do this with Spring though.
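Spring itself has no built-in retry for this; the separate Spring Retry project offers a @Retryable annotation, or a small generic wrapper can be sketched in plain Java (illustrative, not production-ready; a real version would catch only the specific exception your transaction manager maps serialization failures to):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class RetryingExecutor {
    // Runs the action, retrying up to maxAttempts times when it throws.
    static <T> T withRetries(int maxAttempts, Supplier<T> action) {
        if (maxAttempts < 1) throw new IllegalArgumentException("maxAttempts must be positive");
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e; // a real version would also back off before retrying
            }
        }
        throw last; // all attempts failed: rethrow the last failure
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        // Fails twice, then succeeds -- simulating two serialization conflicts.
        String result = withRetries(5, () -> {
            if (calls.incrementAndGet() < 3) {
                throw new IllegalStateException("serialization conflict");
            }
            return "committed";
        });
        System.out.println(result + " after " + calls.get() + " attempts");
    }
}
```

Note that the retry must restart the whole transaction, so the wrapper has to sit outside the @Transactional boundary (e.g. in the caller), not inside it.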
You should use a singleton (javax.ejb.Singleton):
@Singleton
public class Register {
    @Lock(LockType.WRITE)
    public void register(String code, int value) {
        if (i_can_insert_modify(code, value)) {
            // use entityManager or some dao
        } else {
            // do something
        }
    }
}

Why isn't my Hibernate insert reflected in my Hibernate query?

I've been asked to write some coded tests for a hibernate-based data access object.
I figure that I'd start with a trivial test: when I save a model, it should be in the collection returned by dao.getTheList(). The problem is, no matter what, when I call dao.getTheList(), it is always an empty collection.
The application code is already working in production, so let's assume that the problem is just with my test code.
@Test
@Transactional("myTransactionManager")
public void trivialTest() throws Exception {
    ...
    // create the model to insert
    ...
    session.save(model);
    session.flush();
    final Collection<Model> actual = dao.getTheList();
    assertEquals(1, actual.size());
}
The test output is expected:<1> but was:<0>
So far, I've tried explicitly committing after the insert, and disabling the cache, but that hasn't worked.
I'm not looking to become a master of Hibernate, and I haven't been given enough time to read the entire documentation. Without really knowing where to start, this seemed like this might be a good question for the community.
What can I do to make sure that my Hibernate insert is flushed/committed/de-cached/or whatever it is, before the verification step of the test executes?
[edit] Some additional info on what I've tried. I tried manually committing the transaction between the insert and the call to dao.getTheList(), but I just get the error Could not roll back Hibernate transaction; nested exception is org.hibernate.TransactionException: Transaction not successfully started
@Test
@Transactional("myTransactionManager")
public void trivialTest() throws Exception {
    ...
    // create the model to insert
    ...
    final Transaction firstTransaction = session.beginTransaction();
    session.save(model);
    session.flush();
    firstTransaction.commit();
    final Transaction secondTransaction = session.beginTransaction();
    final Collection<SystemConfiguration> actual = dao.getTheList();
    secondTransaction.commit();
    assertEquals(1, actual.size());
}
I've also tried taking the @Transactional annotation off the test method and annotating each of two helper methods, one for each Hibernate job. With that, though, I get the error: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here.
[/edit]
I think the underlying DBMS might hide the change from other transactions as long as the changing transaction is not yet completed. Is getTheList running in a separate transaction? Are you using Oracle or Postgres?

Why didn't the JPA find() method read uncommitted changes?

I am puzzled by a JPA behavior which I did not expect (using EclipseLink).
I run a stateless session EJB (3.2) on WildFly 10 (JDK 8). My method call is - per default - encapsulated in a transaction.
Now my business method, when reading and updating an entity bean, did not recognize updates - especially to the version number of the entity. So my call results in an
org.eclipse.persistence.exceptions.OptimisticLockException
My code looks simplified as this:
public ItemCollection process(MyData workitem) {
    ....
    // load document from jpa
    persistedDocument = manager.find(Document.class, id);
    logger.info("#version=" + persistedDocument.getVersion());
    // prints e.g. 3

    // change some data
    ....
    manager.flush();
    logger.info("#version=" + persistedDocument.getVersion());
    // prints e.g. 4
    ....
    // load document from jpa once again
    persistedDocument = manager.find(Document.class, id);
    logger.info("#version=" + persistedDocument.getVersion());
    // prints e.g. 3 (!!)

    // change some data
    ....
    manager.flush();
    // Throws OptimisticLockException !!
    // ...Document#1fbf7c8e] cannot be updated because it has changed or been deleted since it was last read
    ...
}
If I put the code (which changes the data and flushes the entity bean) in a method annotated with
@TransactionAttribute(value = TransactionAttributeType.REQUIRES_NEW)
everything works as expected.
But why is the second call of the find() method in my code not reading the new version number? I would expect version 4 after the flush() and find() call.
After all, it looks like calling
manager.clear();
solves the problem. I thought that detaching the object would do the same, but in my case only calling clear() fixed the problem.
More findings:
After all, it seems it is not a good idea to call detach() and flush() from the service layer. I did it because I wanted to get the new version id of my entity before leaving my business method, so I could return that id to the client. I changed my strategy and removed all the 'bad stuff' with detaching and flushing my entity beans. The code became clearer, and its complexity was reduced dramatically.
And of course the entityManager now behaves correctly: if I query the same entity bean several times in one transaction, the entityManager returns the correct updated version.
So the answer to my own question is: leave out flush() and clear() unless there is a really good reason to use them.

MySQL autocommit vs manager and DAO insert methods

I've searched through the net but no answer so far, at least no clear answer.
Suppose you are in the following situation
@Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
private void usingManagerTest()
{
    List<SomeType> someList = someDao.findAll();
    for (SomeType listItem : someList)
    {
        someManager.create();
    }
}
where someManager.create() sets the fields of an entity, say someEntity, and then calls someDao.create(someEntity).
The MySQL logs show that for every iteration of the for loop, the following queries are performed:
set autocommit = 0
insert into ...
commit
Now suppose you are in the following situation:
@Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
private void usingDaoTest()
{
    List<SomeType> someList = someDao.findAll();
    for (SomeType listItem : someList)
    {
        SomeEntity someEntity = someManager.createEntity();
        someDao.create(someEntity);
    }
}
where the createEntity method calls some setters on a Java entity, and the create is performed by the DAO. This leads to a MySQL log like the following:
set autocommit = 0
insert into ...
insert into ...
insert into ...
...
commit
where the number of insert queries equals the number of iterations in the for loop.
I've read the Spring documentation, but so far it is not clear to me why this happens.
Can anyone explain this behaviour?
Thanks
P.S. I know that the title is not clear; any suggestion is welcome.
UPDATE: it seems that it works differently from what I've said: the log resulting from running usingDaoTest() does not show the autocommit query at all (which is no good for me).
I'm still interested in understanding why the two scripts behave differently, but now I'm also interested in how to achieve the second log result (where all the operations in the for loop are executed between autocommit = 0 and commit).
Thanks again
UPDATE2: after some more tests I've understood the logic behind @Transactional a bit better, so I did a more specific search and found a solution.
This discussion can be considered closed, thanks to all.
MySQL buffers the operations while your transaction is running (which is why autocommit is set to 0). Only after you commit the transaction do all changes become effective on the database tables and visible to other transactions.
This is the normal situation. However, it is also possible to define transactions whose changes are directly visible to other transactions. This has its ups and downsides.

hibernate column uniqueness question

I'm still in the process of learning hibernate/hql and I have a question that's half best practices question/half sanity check.
Let's say I have a class A:
@Entity
public class A
{
    @Id @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    @Column(unique = true)
    private String name = "";
    //getters, setters, etc. omitted for brevity
}
I want to enforce that every instance of A that gets saved has a unique name (hence the @Column annotation), but I also want to be able to handle the case where there's already a saved instance of A with that name. I see two ways of doing this:
1) I can catch the org.hibernate.exception.ConstraintViolationException that could be thrown during the session.saveOrUpdate() call and try to handle it.
2) I can query for existing instances of A that already have that name in the DAO before calling session.saveOrUpdate().
Right now I'm leaning towards approach 2, because in approach 1 I don't know how to programmatically figure out which constraint was violated (there are a couple of other unique members in A). Currently my DAO.save() code looks roughly like this:
public void save(A a) throws DataAccessException, NonUniqueNameException
{
    Session session = sessionFactory.getCurrentSession();
    try
    {
        session.beginTransaction();
        Query query = null;
        //if id isn't null, make sure we don't count this object as a duplicate
        if (a.getId() == null)
        {
            query = session.createQuery("select count(a) from A a where a.name = :name")
                .setParameter("name", a.getName());
        }
        else
        {
            query = session.createQuery("select count(a) from A a where a.name = :name " +
                "and a.id != :id").setParameter("name", a.getName()).setParameter("id", a.getId());
        }
        Long numNameDuplicates = (Long) query.uniqueResult();
        if (numNameDuplicates > 0)
            throw new NonUniqueNameException();
        session.saveOrUpdate(a);
        session.getTransaction().commit();
    }
    catch (RuntimeException e)
    {
        session.getTransaction().rollback();
        throw new DataAccessException(e); //my own class
    }
}
Am I going about this in the right way? Can hibernate tell me programmatically (i.e. not as an error string) which value is violating the uniqueness constraint? By separating the query from the commit, am I inviting thread-safety errors, or am I safe? How is this usually done?
Thanks!
I think that your second approach is best.
To be able to catch the ConstraintViolation exception with any certainty that this particular object caused it, you would need to flush the session immediately after the call to saveOrUpdate. This could introduce performance problems if you need to insert a number of these objects at a time.
Even though you would be testing if the name already exists in the table on every save action, this would still be faster than flushing after every insert. (You could always benchmark to confirm.)
This also allows you to structure your code in such a way that you could call a 'validator' from a different layer. For example, if this unique property is the email of a new user, from the web interface you can call the validation method to determine if the email address is acceptable. If you went with the first option, you would only know if the email was acceptable after trying to insert it.
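To illustrate that layering, here is a minimal sketch (all names are made up for the example): a validator that depends only on a count lookup, so a web layer can call it with the same count query the DAO uses, without touching the session directly.

```java
import java.util.List;
import java.util.function.Function;

public class NameValidator {
    // Backed in practice by something like the HQL count query from the DAO.
    private final Function<String, Long> countByName;

    NameValidator(Function<String, Long> countByName) {
        this.countByName = countByName;
    }

    // True if no saved instance already uses this name.
    boolean isAvailable(String name) {
        return countByName.apply(name) == 0L;
    }

    public static void main(String[] args) {
        // Stand-in for the database: two names already taken.
        List<String> existing = List.of("alice", "bob");
        NameValidator v = new NameValidator(
                name -> existing.stream().filter(name::equals).count());
        System.out.println(v.isAvailable("alice")); // false
        System.out.println(v.isAvailable("carol")); // true
    }
}
```

Note this check is still racy on its own: two concurrent saves can both see the name as available, which is why the unique constraint in the schema remains the last line of defense.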
Approach 1 would be OK if:
There is only one constraint in the entity.
There is only one dirty object in the session.
Remember that the object may not be saved until flush() is called or the transaction is committed.
For best error reporting I would:
Use approach two for every constraint, so I can give a specific error for each of them.
Implement an interceptor that, in case of a constraint exception, retries the transaction (up to a maximum number of times), to cover violations that slip past the checks. Whether this is needed depends on the transaction isolation level.
