I have a complex service method that loads a lot of data from the DB. Since it only reads and validates data, I want to make sure of two things: JPA (Hibernate) does not waste time checking the large persistence context for changes, and no accidental changes to the data are written to the DB.
Therefore, towards the end of my service I have this code:
TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
My unit tests have mocks for all DB access, so transactions are not an issue. (I have integration tests with @SpringBootTest and full transaction support as well.) Unfortunately, the rollback statement fails in pure unit tests.
Is there an easy way to make the rollback statement simply do nothing when run in a unit test?
For now I just wrote a small component which is easy to mock:
@Component
public class RollBacker {

    public void setRollBackOnly() {
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
    }
}
After all, it's the static singleton approach of TransactionAspectSupport that makes unit testing difficult here. IMHO, TransactionAspectSupport should be a Spring bean.
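For illustration, here is a minimal sketch of how a unit test can stub the component (assuming Mockito; MyService and its validateData() method are made-up names for the service under test):

import static org.mockito.Mockito.verify;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class MyServiceTest {

    @Mock
    private RollBacker rollBacker; // void methods on mocks do nothing by default

    @InjectMocks
    private MyService service;     // hypothetical service that depends on RollBacker

    @Test
    public void rollbackCallIsANoOpInUnitTests() {
        service.validateData();
        verify(rollBacker).setRollBackOnly();
    }
}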
Consider using @Transactional(readOnly = true) instead of TransactionAspectSupport.currentTransactionStatus().setRollbackOnly().
From the documentation:
"A boolean flag that can be set to true if the transaction is effectively read-only, allowing for corresponding optimizations at runtime."
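For example, on the service method itself (a sketch; ReportService and validateData() are illustrative names):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    // With readOnly = true, Spring switches the Hibernate session to manual
    // flush mode, so the persistence context is not dirty-checked or flushed
    // on commit and no accidental writes reach the DB.
    @Transactional(readOnly = true)
    public void validateData() {
        // load and validate entities here; nothing gets written back
    }
}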
Related
I have a DAO using Spring's JdbcTemplate with Create, Read, and Update (no Delete) operations.
The create method has an ID parameter which is a unique key in the table.
Apart from mocking the DAO, how can I actually test create without getting a constraint violation?
Using a random ID can still fail sometimes.
Should I override setAutoCommit to avoid adding the record? Is that still considered a valid unit test?
Must I delete the record in the database via SQL beforehand, or is there a Spring option for these types of tests?
Or should I consider it an integration test rather than a unit test?
EDIT
I'm using Oracle; I can't use a sequence to generate values for the ID.
We have a few data sources in production (not for testing).
It really depends on what the purpose of such a test is; not all tests are "unit tests" in this respect.
If, for example, the goal is to test the "service" that encapsulates business logic, but this service sometimes calls the DAO, then probably the best way is to just mock the DAO as you suggest.
In this case, the DAO obviously won't be covered by the test, but the service will.
If the purpose is to test SQL statements (and I assume the DAO contains nothing but SQL statements, plus maybe transforming the results into domain objects), then mocking is not an option.
In this case, the test should run against some kind of database, but then it's not called a unit test anymore (a unit test is something that runs really fast and only in memory: no DBs, no I/O, etc.). I'll call this an integration test (as you also suggest), though different people probably have different names for this kind of test.
In practice, we need both kinds of tests because they test different things.
So, how to test this?
First of all, you should decide which database to use. There are three approaches here:
Run against a real database shared between users; tests assume it is pre-installed
Run against an in-memory database
Run a Docker image of the DB when the test suite runs, and destroy it afterwards (see the sketch below)
While the discussion of which approach is better is very interesting in its own right, it's out of scope for this question IMO; each choice has its implications.
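Purely as an illustration of the third option, a minimal sketch using the Testcontainers library (my assumption; any comparable tool works, and the image tag is arbitrary):

import org.junit.ClassRule;
import org.testcontainers.containers.PostgreSQLContainer;

public class DaoIntegrationTest {

    // Starts a throwaway PostgreSQL container before the test class runs
    // and destroys it afterwards.
    @ClassRule
    public static PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>("postgres:15");

    // Point the test DataSource at postgres.getJdbcUrl(),
    // postgres.getUsername(), and postgres.getPassword().
}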
Once you're done with this decision, you should decide how to work with this database from the code.
Usually Spring tests use the following pattern:
Open a transaction before the test
Run the test (change the data, even change the schema: add columns or tables if you want), then do assertions
Regardless of the test result, roll back the transaction so that the data is left just as it was before the test
If you follow this approach for all your tests, each one starts from a "clean" data state, so no constraint violations are expected.
This also effectively solves the "deletion of the record" question, because the data is deleted anyway when the transaction is rolled back.
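With Spring's test support, this whole pattern is a single annotation; a minimal sketch (TestConfig and the test name are illustrative):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringRunner.class)
@ContextConfiguration(classes = TestConfig.class) // hypothetical test config
@Transactional // each test runs in a transaction that is rolled back at the end
public class CreateDaoTest {

    @Test
    public void createDoesNotViolateUniqueKey() {
        // insert a record with a fixed ID; the rollback removes it,
        // so the next run starts from the same clean state
    }
}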
Now, regarding deletion of the record outside a transaction:
An obvious approach is to execute the deletion SQL right from the test (outside the DAO), so that the DAO (production code) won't be changed.
You can inject a DataSource/JdbcTemplate right into the test (Spring's test support handles this perfectly) and issue the required SQL from there.
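For example (a sketch; the table name and TEST_ID are made up, and jdbcTemplate is autowired by Spring's test support):

@Autowired
private JdbcTemplate jdbcTemplate;

@Before
public void deleteLeftovers() {
    // Remove the record outside the DAO, so production code stays untouched.
    jdbcTemplate.update("DELETE FROM my_table WHERE id = ?", TEST_ID);
}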
I'm trying to understand what is going on when I use AbstractTransactionalJUnit4SpringContextTests in my integration tests to roll back changes made by legacy code.
The legacy code uses NamedParameterJdbcTemplate to communicate with the database. The transaction manager is of type DataSourceTransactionManager and my datasource is of type DriverManagerDataSource.
This is the overall structure of my test class.
@Before
// Make initial setup of the database using the JdbcTemplate from AbstractTransactionalJUnit4SpringContextTests.
@Test
// Call legacy code that makes inserts into the database.
My question is whether my assumption is wrong that, by extending AbstractTransactionalJUnit4SpringContextTests, I make all my tests transactional, with the expected effect that all changes made directly by my test method AND by legacy code called from the test method are transactional and implicitly rolled back at the end of each test.
Some observations I made:
The @Before method works as expected when used with test methods that do not call legacy code making changes. In this case, the changes made in @Before are rolled back.
If, however, I use @Before with @Test methods that call legacy code making changes, neither the changes made by @Before nor those made by @Test get rolled back! The log messages do state that the transaction is initiated and that the rollback is successful, but I'm not getting the expected behavior.
The Spring TestContext Framework (TCF) manages what I call test-managed transactions. The TCF does not manage application-managed transactions.
If you have code that is managing its own transactions (e.g., via Spring's TransactionTemplate or some other programmatic means), then those interactions with the database will not be rolled back by the TCF.
For further details, consult the Test-managed Transactions section of the reference manual or slide #43 from my Testing with Spring: An Introduction presentation.
Furthermore, you naturally have to ensure that the DataSource used by the TCF is the exact same DataSource used by the Spring DataSourceTransactionManager and your legacy code. If all of your legacy code uses Spring's NamedParameterJdbcTemplate, that code should participate in the correct transaction. Otherwise, you'll need to use Spring's DataSourceUtils.getConnection(DataSource) to ensure that legacy code works with Spring-managed and test-managed transactions.
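For example, legacy JDBC code can be made aware of Spring-managed transactions like this (a sketch; the table and values are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DataSourceUtils;

public void legacyInsert(DataSource dataSource) throws SQLException {
    // Returns the connection bound to the current (test-managed) transaction
    // instead of opening a fresh auto-committed one.
    Connection con = DataSourceUtils.getConnection(dataSource);
    try (PreparedStatement ps =
            con.prepareStatement("INSERT INTO legacy_table (id) VALUES (?)")) {
        ps.setLong(1, 42L);
        ps.executeUpdate();
    } finally {
        // Releases the connection without closing it if it is transactional.
        DataSourceUtils.releaseConnection(con, dataSource);
    }
}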
I'm having a difficult time understanding Spring transactions. First off, I am trying to use as little of Spring as possible. Basically, I want to use @Transactional and some of its injection capabilities with JPA/Hibernate. My transactional usage is in my service layer; I try to keep transactions out of test cases. I use compile-time weaving (CTW) and am spring-configured. I have component scanning on the root of my packages right now. I also use Java configuration for my DataSource, JpaTransactionManager, and EntityManagerFactory. My configuration wires the transaction manager into the aspect with:
AnnotationTransactionAspect.aspectOf().setTransactionManager( txnMgr );
I do not use #EnableTransactionManagement.
Unfortunately, I'm having a hard time understanding the rules for @Transactional and can't find a page that describes them simply, especially with regard to Session usage. For example, what if I want to use @Transactional on a class that does not have a default no-arg constructor?
The weirdest problem I'm having is that in some of the POJO service classes @Transactional works great, while in others I can see the transactions being created but operations ultimately fail, saying that there is "no session or the session has been closed". I apologize for not having code to reproduce this; I can't reduce it to a small example. This is why I am looking for the best resources so I can figure it out myself.
For example, I can have a method that gets a lazily fetched collection of children, iterates through it, puts the elements into a Set, and returns that Set. In one class it works fine, while in another class also marked with @Transactional it fails while iterating through the PersistentSet, saying that there is no session (even though there IS a transaction).
Maybe this is because I create both of these service objects in a test case and the first one is somehow hijacking the session?
By the way, I have read through the Spring transaction documentation. I'm looking for a clear set of rules and tips for debugging issues like this. Thanks.
Are you sure you loaded your parent entity in the scope of the very transaction in which you try to load the lazy children? If it was passed in as a parameter, for example (that is, loaded outside your @Transactional method), then it might not be bound to a persistence context anymore.
Note that when no @Transactional context is given, any database-related action may cause a short transaction to be created and then immediately closed, which disables subsequent lazy-loading calls. It depends on your persistence context and auto-commit configuration.
To put it simply: since the behaviour without a transactional context is sometimes unexpected, the ground rule is to always have one. If you access your database, you give yourself a well-defined transaction, period. With spring-tx, that means all your @Repository and @Service beans are @Transactional. This way you should avoid most transaction-related issues.
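For example, loading the parent and touching its lazy collection inside one @Transactional method keeps the session open for both steps (a sketch; Parent, Child, and the method names are illustrative):

import java.util.HashSet;
import java.util.Set;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ParentService {

    @PersistenceContext
    private EntityManager em;

    @Transactional
    public Set<Child> loadChildren(long parentId) {
        // The parent is loaded and its lazy collection initialized in the
        // SAME transaction, so the session is still open while iterating.
        Parent parent = em.find(Parent.class, parentId);
        return new HashSet<>(parent.getChildren());
    }
}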
We're using MyBatis (3.0.5) as our OR-mapping tool (and I don't have any say-so over that!).
I've created a Response object that is mapped, through MyBatis, to our [responses] database table (we use PostgreSQL).
Actually, the OR-mapping structure we use is as follows:
ResponseMapper.xml - the XML file where the PostgreSQL queries are defined and mapped to the ResponseMapper.java class and its methods
ResponseMapper.java - an interface used by MyBatis for executing the queries defined in the XML file (above)
ResponseDAO.java - an interface used for DI purposes (we use Spring)
ResponseDAOImpl.java - a concrete implementation of ResponseDAO that actually calls ResponseMapper methods; we use Spring to inject instances of this class into our application
So, to INSERT a new [responses] record into PostgreSQL, the code looks like this from the invoking component:
@Autowired
private ResponseDAO response;

public void doStuff()
{
    int action = getAction();
    response.setAction(action);
    response.insert();
}
This setup works beautifully for us. However, I am now trying to write a set of JUnit tests for the ResponseDAOImpl class, and I want to make sure that it correctly executes queries against our PostgreSQL database.
As far as I can tell, there is no way to "mock" an entire database. So my only option (seemingly) is to have the test method(s) execute a query, check for success, and then roll it back regardless.
MyBatis doesn't seem to support this kind of rollback feature. I found this post from the mybatis-user mailing list on Old Nabble, but the poster was using Guice, and his/her question seemed to be more about rolling back transactions through Guice.
If MyBatis doesn't support transactions/rollbacks (does it?!), then it seems like my only reprieve would be if the PostgreSQL JDBC driver supports them. I guess I could then configure my test methods to run the ResponseDAO.insert() method and then manually roll back the transaction directly through the driver (sans MyBatis).
Does SO have any experience with this? Code samples? Tips? Best practices? Thanks in advance!
MyBatis allows rollbacks when working with an SqlSession; the thing is, you're using the Spring dependency-injection piece, which automatically commits your transaction when the method completes.
You have a few options, among them:
Inject a mock of your dependencies. There are some rocking libraries to help with this, like Mockito; here is a good question on Spring Mockito stuff. This will test the business logic in your Java code, but not your actual queries.
Commit your queries, and delete your data after the test runs. This is the approach we took, because it tests our database as well. You would obviously need a test instance of your database, which some people don't have.
You could try to provide your own test bindings for the classes that do the automatic commit in the MyBatis-Spring integration and override their behavior, so that in the test environment the behavior is to roll back the query instead of committing it. A similar approach was used in the Guice integration, and it is described here.
Not sure this is what you need, but the org.apache.ibatis.session.SqlSession class has a rollback() method which can be used to roll back.
Another approach is to use the getConnection() method from the same class, which returns a java.sql.Connection; that class also has commit() and rollback() methods.
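For example, a test could open its own SqlSession and roll it back explicitly (a sketch; sqlSessionFactory comes from your MyBatis configuration, and the mapper method name is illustrative):

import org.apache.ibatis.session.SqlSession;

SqlSession session = sqlSessionFactory.openSession(); // autoCommit is false by default
try {
    ResponseMapper mapper = session.getMapper(ResponseMapper.class);
    mapper.insert(response);   // execute the query under test
    // ... assert against the uncommitted data here ...
} finally {
    session.rollback();        // undo the insert
    session.close();
}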
Hope it helps.
Remis B
When I test a DAO module in JUnit, an obvious problem is: how do I restore the test data in the database?
For instance, suppose a record should be deleted in both test methods testA() and testB(); the precondition of both test methods is then an existing record to delete. My strategy is to insert the record in the setUp() method to restore the data.
What is your better solution, or your practical approach in such cases? Thanks.
I'd make a method called createRecord(). It may be a test method as well. Whenever you need to create a record, call that method from your other test methods.
Maybe DBUnit can help you out.
It allows you to put a TEST database into a predefined state before executing each test. Once it's set up, it's really easy to test database-driven applications.
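A minimal DBUnit setup might look like this (a sketch; jdbcConnection is a plain java.sql.Connection from your test fixture, and dataset.xml is a file you would create):

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

@Before
public void setUp() throws Exception {
    IDatabaseConnection conn = new DatabaseConnection(jdbcConnection);
    IDataSet dataSet = new FlatXmlDataSetBuilder()
            .build(getClass().getResourceAsStream("/dataset.xml"));
    // Wipes the tables listed in the dataset and reloads the known rows,
    // so every test starts from the same predefined state.
    DatabaseOperation.CLEAN_INSERT.execute(conn, dataSet);
}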
A simple solution is to roll back the transaction after the test (for example, in tearDown()). That way, the tests can make all the changes they like, but they won't change the database (don't forget to turn autoCommit off for the connection).
There is a drawback, though: if a test fails, you can't peek at the database to figure out why. Therefore, most of my tests clean the database before they run and use autoCommit, so I can see the last state where a test failed, run a fixed SQL query against the data, etc.
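In plain JUnit the rollback variant looks roughly like this (a sketch; dataSource is whatever your test fixture provides):

private Connection connection; // java.sql.Connection

@Before
public void setUp() throws Exception {
    connection = dataSource.getConnection();
    connection.setAutoCommit(false); // changes stay uncommitted
}

@After
public void tearDown() throws Exception {
    connection.rollback();           // undo everything the test changed
    connection.close();
}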
Bozho is correct, of course, but just to add a bit of detail:
If possible, unit tests set up their data before manipulating it, and then clean up after themselves. So ideally you would not be trampling on existing data (perhaps copied from production) for testing, but setting some up as part of the test; that's practically the only way you can be assured that your test will be testing what you intended.