How to write unit tests for data access? (Java)

Unit tests have become increasingly important in modern software development, and I'm finding myself lost in the dust. Primarily a Java programmer, I understand the basics of unit testing: have methods in place that test the fundamental operations in your application. I can implement this for the simple cases that are often used as examples:
public int addNumbers(int a, int b) { return a + b; }
// Unit test for the above
public boolean testAddNumbers() { return addNumbers(5, 10) == 15; }
What confuses me is how to move this out into practical application. After all, most simple functions are already in APIs or the JDK. A real-world task I perform frequently in my job is data access, i.e. writing DAOs to work with a database. I can't write static tests like the example above, because pulling a record set from an Oracle box can return all manner of things. Writing a generalized unit test that just looks for a particular pattern in the result set seems too broad and unhelpful. Instead, I write no unit tests. This is bad.
Another example of a case where I don't know how to approach writing tests is web applications. My web applications are typically built on a J2EE stack, but they don't involve much logic. Generally it's delivering information from databases with little to no manipulation. Are these inappropriate targets for unit tests?
In short, I've found the vast majority of unit test examples to focus on test cases that are too simplistic and not relevant to what I do. I'm looking for any (preferably Java) examples/tips on writing unit tests for applications that move and display data, not perform logic on it.

You generally don't write unit tests for DAOs, but integration tests. These tests basically consist of three steps:
1. set the database to a well-known state suitable for the test
2. call the DAO method
3. verify that the DAO returns the right data and/or changes the state of the database as expected
Shameless plug: DbSetup is a good tool for the first step. Other tools exist too, such as DbUnit. A minimal version of such a test, written with plain JDBC and no helper library, is sketched below.
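To make the three steps concrete, here is a sketch using JUnit 4 and plain JDBC. EmployeeDao, Employee, the EMPLOYEE table, and the TestDataSources helper are all hypothetical names, not from the original question:

import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;
import org.junit.Before;
import org.junit.Test;

// Integration-test sketch for a DAO following the three steps above
public class EmployeeDaoIT {

    private DataSource dataSource;
    private EmployeeDao dao;

    @Before
    public void putDatabaseInKnownState() throws Exception {
        dataSource = TestDataSources.createTestDataSource(); // hypothetical helper pointing at a test schema
        dao = new EmployeeDao(dataSource);
        // step 1: set the database to a well-known state
        try (Connection c = dataSource.getConnection();
             Statement s = c.createStatement()) {
            s.executeUpdate("DELETE FROM EMPLOYEE");
            s.executeUpdate("INSERT INTO EMPLOYEE (ID, NAME, SALARY) VALUES (1, 'Alice', 50000)");
        }
    }

    @Test
    public void findByNameReturnsTheInsertedRow() {
        // step 2: call the DAO method
        Employee result = dao.findByName("Alice");
        // step 3: verify the returned data against the known state
        assertEquals(50000, result.getSalary());
    }
}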
To test the business logic of the app (complex or not, that doesn't change much), you typically mock the DAOs using a mocking framework like Mockito:
SomeDao mockDao = mock(SomeDao.class);
when(mockDao.findAllEmployees()).thenReturn(Arrays.asList(e1, e2, e3));
SomeService service = new SomeService(mockDao);
service.increaseSalaryOfAllEmployees(1000);
// verify that e1, e2 and e3's salary is 1000 larger than before, e.g.
// assertEquals(previousSalary1 + 1000, e1.getSalary());


Why do we not mock domain objects in unit tests?

Here are two tests whose sole purpose is to confirm that when service.doSomething is called, emailService.sendEmail is called with the person's email as a parameter.
@Mock
private EmailService emailService;

@InjectMocks
private Service service;

@Captor
private ArgumentCaptor<String> stringCaptor;

@Test
public void test_that_when_doSomething_is_called_sendEmail_is_called_NO_MOCKING() {
    final String email = "billy.tyne@myspace.com";
    // There is only one way of building an Address and it requires all these fields
    final Address crowsNest = new Address("334", "Main Street", "Gloucester", "MA", "01930", "USA");
    // There is only one way of building a Phone and it requires all these fields
    final Phone phone = new Phone("1", "978-281-2965");
    // There is only one way of building a Vessel and it requires all these fields
    final Vessel andreaGail = new Vessel("Andrea Gail", "Fishing", 92000);
    // There is only one way of building a Person and it requires all these fields
    final Person captain = new Person("Billy", "Tyne", email, crowsNest, phone, andreaGail);

    service.doSomething(captain); // <-- This requires only the person's email to be initialised, it doesn't care about anything else

    verify(emailService, times(1)).sendEmail(stringCaptor.capture());
    assertThat(stringCaptor.getValue(), is(email));
}

@Test
public void test_that_when_doSomething_is_called_sendEmail_is_called_WITH_MOCKING() {
    final String email = "billy.tyne@myspace.com";
    final Person captain = mock(Person.class);
    when(captain.getEmail()).thenReturn(email);

    service.doSomething(captain); // <-- This requires only the person's email to be initialised, it doesn't care about anything else

    verify(emailService, times(1)).sendEmail(stringCaptor.capture());
    assertThat(stringCaptor.getValue(), is(email));
}
Why is it that my team is telling me not to mock the domain objects that are required to run my tests but are not part of the actual test? I am told mocks are for the dependencies of the tested service only. In my opinion, the resulting test code is leaner, cleaner and easier to understand. There is nothing to distract from the purpose of the test, which is to verify that the call to emailService.sendEmail occurs. This is something that I have heard and accepted as gospel for a long time, over many jobs. But I still cannot bring myself to agree with it.
I think I understand your team's position.
They are probably saying that you should reserve mocks for things that have hard-to-instantiate dependencies. That includes repositories that make calls to a database, and other services that can potentially have their own rat's nest of dependencies. It doesn't include domain objects that can be instantiated directly (even if filling out all the constructor arguments is a pain).
If you mock the domain objects then the test doesn't give you any code coverage of them. I know I'd rather get these domain objects covered by tests of services, controllers, repositories, etc. as much as possible and minimize tests written just to exercise their getters and setters directly. That lets tests of domain objects focus on any actual business logic.
That does mean that if the domain object has an error then tests of multiple components can fail. I think that's ok. I would still have tests of the domain objects (because it's easier to test those in isolation than to make sure all paths are covered in a test of a service), but I don't want to depend entirely on the domain object tests to accurately reflect how those objects are used in the service, it seems like too much to ask.
You have a point that the mocks allow you to make the objects without filling in all their data (and I'm sure the real code can get a lot worse than what is posted). It's a trade-off, but having code coverage that includes the actual domain objects as well as the service under test seems like a bigger win to me.
It seems to me like your team has chosen to err on the side of pragmatism vs purity. If everybody else has arrived at this consensus you need to respect that. Some things are worth making waves over. This isn't one of them.
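As an aside to the trade-off above: when the objection to real domain objects is constructor noise, a test data builder keeps the setup lean without mocking. A sketch reusing the classes from the question; the builder and its default values are assumptions, not from the original post:

// Test data builder with sensible defaults; tests override only what they care about
public class PersonBuilder {
    private String firstName = "Billy";
    private String lastName = "Tyne";
    private String email = "billy.tyne@myspace.com";
    private Address address = new Address("334", "Main Street", "Gloucester", "MA", "01930", "USA");
    private Phone phone = new Phone("1", "978-281-2965");
    private Vessel vessel = new Vessel("Andrea Gail", "Fishing", 92000);

    public PersonBuilder withEmail(String email) {
        this.email = email;
        return this;
    }

    public Person build() {
        return new Person(firstName, lastName, email, address, phone, vessel);
    }
}

The non-mocked test setup then shrinks to the one field it cares about:

final Person captain = new PersonBuilder().withEmail(email).build();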
It is a tradeoff, and you have designed your example nicely to be 'on the edge'. Generally, mocking should be done for a reason. Good reasons are:
You cannot easily make the depended-on component (DOC) behave as intended for your tests.
Calling the DOC causes non-deterministic behaviour (date/time, randomness, network connections).
The test setup is overly complex and/or maintenance-intensive (like a need for external files) (* see below).
The original DOC brings portability problems for your test code.
Using the original DOC causes unacceptably long build/execution times.
The DOC has stability (maturity) issues that make the tests unreliable, or, worse, the DOC is not even available yet.
For example, you (typically) don't mock standard library math functions like sin or cos, because they don't have any of the abovementioned problems.
Why is it recommendable to avoid mocking where unnecessary?
For one thing, mocking increases test complexity.
Secondly, mocking makes your tests dependent on the inner workings of your code, namely on how the code interacts with the DOCs (like, in your case, the fact that the captain's email is obtained using getEmail, although another way to get that information might exist).
And, as Nathan mentioned, it may be seen as a plus that - without mocking - DOCs are tested for free - although I would be careful here: There is a risk that your tests lose focus if you get tempted to also test the DOCs. The DOCs should have tests of their own.
Why is your scenario 'on the edge'?
One of the abovementioned good reasons for mocking is marked with (*): "The test setup is overly complex ...", and your example is constructed to have a test setup that is a bit complex. Complexity of the test setup is obviously not a hard criterion and developers will simply have to make a choice. If you want to look at it this way, you could say that either way has some risks when it comes to future maintenance scenarios.
Summarized, I would say that neither position (generally to mock or generally not to mock) is right. Instead, developers should understand the decision criteria and then apply them to the specific situation. And, when the scenario is in the grey zone such that the criteria don't lead to a clear decision, don't fight over it.
There are two mistakes here.
First, testing that when a service method is called, it delegates to another method. That is a bad specification. A service method should be specified in terms of the values it returns (for getters) or the values that could subsequently be retrieved (for mutators) through that service interface. The service layer should be treated as a Facade. In general, few methods should be specified in terms of which methods they delegate to and when they delegate. The delegations are implementation details and so should not be tested.
Unfortunately, the popular mocking frameworks encourage this erroneous approach. And so does overzealous use of Behaviour Driven Development.
The second mistake is centered around the very concept of unit testing. We would like each of our unit tests to test one thing, so that when there is a fault in one thing, we have one test failure and locating the fault is easy. And we tend to think of "unit" as meaning the same as "method" or "class". This leads people to think that a unit test should involve only one real class, with all other classes mocked.

That is impossible for all but the simplest of classes. Almost all Java code uses classes from the standard library, such as String or HashSet. Most professional Java code uses classes from various frameworks, such as Spring. Nobody seriously suggests mocking those. We accept that those classes are trustworthy, and so do not need mocking. We accept that it is OK not to mock "trustworthy" classes that the code of our unit uses. But, you say, our classes are not trustworthy, so we must mock them. Not so. You can trust those other classes by having good unit tests for them.

But how do you avoid a tangle of interdependent classes that causes a confusing mass of test failures when only one fault is present? That would be a nightmare to debug! Use a concept from 1970s programming (called a virtual machine hierarchy, which is now a rather confusing term, given the additional meanings of virtual machine): arrange your software in layers from low level to high level, with higher layers performing operations using lower layers. Each layer provides a more expressive or advanced means of abstractly describing operations and objects. So domain objects are at a low level, and the service layer is at a higher level. When several tests fail, start debugging the lowest-level test failure(s): the fault will probably be in that layer, possibly (but probably not) in a lower layer, and not in a higher layer.
Reserve mocks only for input and output interfaces that would make the tests very expensive to run (typically, this means mocking the repository layer and the logging interface).
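To illustrate the state-based style this answer advocates, specifying a service by observable values rather than by delegation, here is a sketch; SalaryService, Employee, and the in-memory repository are all hypothetical names:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class SalaryServiceTest {

    @Test
    public void salaryIncreaseIsObservableThroughTheService() {
        // A real, trusted in-memory implementation instead of a mock repository
        InMemoryEmployeeRepository repository = new InMemoryEmployeeRepository();
        repository.save(new Employee("e1", 40000));
        SalaryService service = new SalaryService(repository);

        service.increaseSalaryOfAllEmployees(1000);

        // Specified by the value subsequently retrievable, not by which
        // methods the service delegated to along the way
        assertEquals(41000, service.findEmployee("e1").getSalary());
    }
}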
The intention of an automated test is to reveal that the intended behavior of some unit of software is no longer performing as expected (that is, to reveal bugs).
The granularity/size/bounds of units under test in a given test suite is to be decided by you and your team.
Once that is decided, if something outside of that scope can be mocked without sacrificing the behavior being tested, then it is clearly irrelevant to the test and should be mocked. This will help make your tests more:
Isolated
Fast
Readable (as you mentioned)
...and most importantly, when the test fails, it will reveal that the intended behavior of some unit of software is no longer performing as expected. Given a sufficiently small unit under test, it will be obvious where the bug has occurred and why.
If your test-without-mocks example were to fail, it could indicate an issue with Address, Phone, Vessel, or Person. This will cause wasted time tracking down exactly where the bug has occurred.
One thing I will mention is that your example with mocks is actually a bit less readable IMO, because you are capturing a String and asserting that it equals the expected email, but nothing in the test makes it obvious why that is the value to expect.

How to mock, or otherwise test readPassword?

I'm developing a framework for simplifying the creation of console applications. My framework is written in Scala, and I'm using ScalaTest and Mockito for unit testing.
I need to be able to mock java.io.Console, but it's declared final. I'm trying to achieve 100% unit test coverage, and currently this is the only thing blocking me - in both functional and unit tests.
So far I've not been able to get very far with any solution; I just can't think of a way of doing this. It doesn't implement an interface that I can mock, the methods aren't available anywhere else, and obviously I can't extend it. I'm thinking perhaps there's a solution involving some sort of dynamic way of calling methods like readLine and readPassword, but I'm not experienced enough to get anywhere with that train of thought either!
You should create your own interface to wrap all interactions with java.io.Console, e.g.
public interface ConsoleService {
...
}
So long as you only interact with the console via an instance of ConsoleService, then you will be able to mock the ConsoleService and test 99% of your code as you normally would. The ConsoleService interface becomes the boundary of your application for both functional testing of the entire app and the unit tests of the classes that interact with it directly.
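A fleshed-out version of that interface might look like the following; the exact method set is an assumption based on the Console methods the question mentions:

// Sketch of the wrapper interface; mirror only the Console methods you use
public interface ConsoleService {
    String readLine();
    char[] readPassword();
}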
Now we have reduced the scope of the problem to "how do I test the ConsoleService implementation", and we need to get a little creative. For example, you could redirect Console output to a file and inspect the contents of the file. You might not even want to test the ConsoleService in Scala; you could write a skeleton application using the ConsoleService then use your scripting language of choice to start a real Console on your favourite OS, interact with your skeleton application and test the ConsoleService that way. You can get as creative (and hacky) as you like here because:
it only affects a small number of tests; and
your application will likely mature to a point where the ConsoleService implementation doesn't need to change very much, i.e. your wacky testing solution will not be a great burden on future developers.
For these reasons it should be obvious that it is a good idea to keep the ConsoleService wrapper very thin, because any logic in there will be tested via the strange ConsoleService tests, not nice friendly Scala tests. Often direct delegation to java.io.Console methods is good enough, but you should allow your application's functional tests to drive out the ConsoleService interface rather than making any presumptions (your functional test assertions will likely rely on particular interactions with a mock ConsoleService, or perhaps on the state of a stub test implementation of ConsoleService which you can control in the test).
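Such a controllable stub might be as simple as this sketch; the class and its queue-of-canned-input behaviour are assumptions, building on the ConsoleService interface sketched above:

import java.util.ArrayDeque;
import java.util.Deque;

// Stub for functional tests: preload the input a test needs, then run the app
public class StubConsoleService implements ConsoleService {
    private final Deque<String> cannedInput = new ArrayDeque<>();

    public void willRead(String line) {
        cannedInput.addLast(line);
    }

    @Override
    public String readLine() {
        return cannedInput.removeFirst();
    }

    @Override
    public char[] readPassword() {
        return cannedInput.removeFirst().toCharArray();
    }
}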
Finally, you may decide that the ConsoleService wrapper is so thin that its implementation doesn't require any unit/functional tests at all. The implementation of ConsoleService will likely be so critical to your application that any defects will be exposed by integration tests or manual inspection of the app in UAT.
You might end up with something like this (apologies, I don't speak Scala so it's Java):
public class RealConsoleService implements ConsoleService {

    private final java.io.Console delegate;

    public RealConsoleService(java.io.Console delegate) {
        this.delegate = delegate;
    }

    @Override
    public String readLine() throws IOError {
        return delegate.readLine();
    }

    // The same one-line delegation covers readPassword, the method the question asks about
    @Override
    public char[] readPassword() throws IOError {
        return delegate.readPassword();
    }
}
Two interesting points:
This is a great example of why test driven development helps write flexible code. If you wanted to rewrite your framework using another method of input and output, you would just rename ConsoleService to the more abstract ApplicationInputOutputService and plug in a different implementation.
The same concept can be used to test applications that use other difficult-to-test APIs. Many of Java's useful file IO methods are static methods and therefore difficult to control in tests. By wrapping in an interface as above, your application functionality becomes easy to test.

What are the best practices for writing unit tests with Mock frameworks

I am new to mocking and I have googled a lot for best practices, but so far I haven't found any satisfying resource, so I thought of asking on SO.
I have tried a few test cases and have the following doubts:
Do you write a separate unit test for each method (public, private, etc.) and mock the other method calls that are invoked inside that method, or do you test only the public methods?
Is it okay to verify the invocation of a stubbed method at the end when testing a method that returns nothing, e.g. a DB insertion?
Please also add other practices that are must-knows.
There are many levels of testing. Unit testing is of a finer granularity than integration testing, which you should research separately. Regrettably this is still quite a young area of the software engineering industry, and as a result the terminology gets intermixed in ways that were never intended.
For unit testing you should write tests that determine whether the behaviour of the class meets expectations. Once you have all such tests, you should find that any private methods are tested as a consequence, so there is no need to test private methods directly. If you only test behaviour, you should find that your tests never need to change even though the class under test may change over time - you may of course need to add tests to compensate, just never change the existing ones.
Each class, in a good design, should have minimal use of other classes (collaborators). Those collaborators that get mocked are often implementing infrastructure such as database access. Be wary of testing collaboration, as this is more closely associated with a larger system test - mocking collaborators gives your unit test knowledge not only of how the class behaves but also of how it operates internally, which is a different subject matter.
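On the second doubt - verifying a stubbed call when the method under test returns nothing - that is generally fine, because the interaction is the only observable outcome. A minimal sketch with Mockito and JUnit 4; UserService, UserDao, registerUser, and insert are hypothetical names:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class UserServiceTest {

    @Test
    public void registeringAUserDelegatesTheInsertToTheDao() {
        UserDao dao = mock(UserDao.class);
        UserService service = new UserService(dao);
        User user = new User("james");

        service.registerUser(user); // returns void

        // the insert is the only observable effect, so verifying it is the assertion
        verify(dao).insert(user);
    }
}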
Sorry to be vague but you are embarking on a large topic and I wanted to be brief.

How to unit test old legacy application

I was assigned to an old web application (JSF 1.2 + EclipseLink); there is no middleware like EJB or Spring, and the service layer of the application is composed of POJOs that call the EntityManager directly. The structure of the code is SomeBean (backing bean) -> SomeServices (a mix of business logic and data access code), with no separate DAO layer. The code in the service classes usually looks like this (here very simplified):
public void someMethod(SomeEntity someEntity, ...) throws SomeServiceException {
    try {
        entityManager.getTransaction().begin();
        // lots of logic here, calling some other private methods
        entityManager.getTransaction().commit();
    } catch (Exception e) {
        log.error("someMethod failed", e);
        if (entityManager.getTransaction().isActive()) {
            entityManager.getTransaction().rollback();
        }
        throw new SomeServiceException(e);
    }
}
This application has only a few tests, and they test almost nothing, so I am trying to cover as much code as possible with unit tests (there are changes coming to the application that will require a lot of modifications to legacy code that is not covered by tests). My question is how you would unit test code like this. I have three ideas:
1. Refactor to enable tests. I could introduce a DAO layer and put all the entityManager calls there. But refactoring without tests is always a problem.
2. Mock the EntityManager. I tried this several times with EasyMock; it works and helps me get at least some coverage of the code that requires changes, but it is probably not good style, as you should not mock an API that does not belong to you. Also, preparing EntityManager mocks requires a lot of time and code.
3. Instead of unit testing, do integration testing with HSQLDB or H2 and some dummy test data. This would probably require the most work, and the tests would be slow. Also, I want to cover mostly business logic, not data access.
I would probably first add some integration tests to get coverage of the parts you want to refactor. Then you can go on refactoring towards more isolated units that are individually testable. If refactored correctly, you can then unit test your business logic separately from the storage.
It's always a good idea to have some integration tests, so that would be a good place to start.
In any case, I wouldn't refactor code that's not covered by any tests.
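For that first step, here is a minimal sketch of bootstrapping plain JPA against an in-memory H2 database; the persistence unit name "legacy-test", the injectable EntityManager, and the query are all assumptions about how the legacy code could be wired up for tests:

import static org.junit.Assert.assertEquals;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

// Assumes a test persistence.xml defining a unit named "legacy-test" that
// points at jdbc:h2:mem:legacy and lists the same entities as production.
public class SomeServiceIT {

    private EntityManagerFactory emf;
    private EntityManager em;
    private SomeService service;

    @Before
    public void setUp() {
        emf = Persistence.createEntityManagerFactory("legacy-test");
        em = emf.createEntityManager();
        service = new SomeService(em); // assumes the EntityManager can be passed in
    }

    @After
    public void tearDown() {
        em.close();
        emf.close(); // the in-memory database disappears with it
    }

    @Test
    public void someMethodPersistsTheEntity() throws SomeServiceException {
        SomeEntity entity = new SomeEntity();
        service.someMethod(entity); // assuming a single-argument variant for the sketch
        // verify against the database state, not the implementation
        long count = em.createQuery("select count(e) from SomeEntity e", Long.class)
                       .getSingleResult();
        assertEquals(1L, count);
    }
}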

What is the best approach for Unit testing when you have interfaces with both dummy & real implementations?

I'm familiar with the basic principles of TDD, being :
Write tests; these will fail because there is no implementation yet
Write a basic implementation to make the tests pass
Refactor the code
However, I'm a little confused as to where interfaces and implementations fit in. I'm creating a Spring web application in my spare time, and rather than going in guns blazing, I'd like to understand how to test interfaces/implementations a little better. Take this simple example code I've created:
public class RunMe
{
    public static void main(String[] args)
    {
        // Using a dummy service for now; a real implementation would come later (fetch from DB etc.)
        UserService userService = new DummyUserService();
        System.out.println(userService.getUserById(1));
    }
}

interface UserService
{
    public String getUserById(Integer id);
}

class DummyUserService implements UserService
{
    @Override
    public String getUserById(Integer id)
    {
        return "James";
    }
}
I've created the UserService interface; ultimately there will be a real implementation of it that queries a database, but in order to get the application off the ground I've substituted a DummyUserService implementation that just returns some static data for now.
Question : How can I implement a testing strategy for the above?
I could create a test class called DummyUserServiceTest and test that when I call getUserById() it returns "James" - seems pretty simple, if not a waste of time(?).
Subsequently, I could also create a test class for the real UserService that would test that getUserById() returns a user's name from the database. This is the part that confuses me slightly: in doing so, does this not essentially overstep the boundary of a unit test and become more of an integration test (with the hit on the DB)?
Question (improved, a little): When using interfaces with dummy/stubbed and real implementations, which parts should be unit tested, and which parts can safely be left untested?
I spent a few hours Googling on this topic last night, and mostly found either tutorials on what TDD is, or examples of how to use JUnit, but nothing in the realms of advising what should actually be tested. It is entirely possible though, that I didn't search hard enough or wasn't looking for the right thing...
Don't test the dummy implementations: they won't be used in production. It makes no real sense to test them.
If the real UserService implementation does nothing else than go to a database and get the user name by its ID, then the test should test that it does that and does it correctly. Call it an integration test if you want, but it's nevertheless a test that should be written and automated.
The usual strategy is to populate the database with minimal test data in the @Before annotated method of the test, and have your test method check that, for an ID which exists in the database, the corresponding user name is returned.
I would recommend you read this book first: Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce. It answers your question and many others related to TDD.
In your particular case, you should make your RealUserService configurable with a DB adapter which makes the real DB queries. The service itself does the servicing, not the data persistence. Read the book, it will help a lot :)
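A sketch of what that split might look like, with the real service unit-tested against a mocked adapter; the UserRepository interface and its method name are assumptions, while UserService is the interface from the question:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// The adapter owns persistence...
interface UserRepository {
    String findNameById(Integer id);
}

// ...while the real service owns the servicing and stays unit-testable
class RealUserService implements UserService {
    private final UserRepository repository;

    RealUserService(UserRepository repository) {
        this.repository = repository;
    }

    @Override
    public String getUserById(Integer id) {
        return repository.findNameById(id);
    }
}

public class RealUserServiceTest {

    @Test
    public void returnsTheNameTheRepositoryProvides() {
        UserRepository repository = mock(UserRepository.class);
        when(repository.findNameById(1)).thenReturn("James");

        UserService service = new RealUserService(repository);

        assertEquals("James", service.getUserById(1));
    }
}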
JB's answer is a good one; I thought I'd throw out another technique I've used.
When developing the original test, don't bother stubbing out the UserService in the first place. In fact, go ahead and write the real thing. Proceed by following Kent Beck's 3 rules.
1) Make it work.
2) Make it right.
3) Make it fast.
Your code will have tests that verify that find-by-id works. As JB stated, your tests will be considered integration tests at this point. Once they are passing, we have successfully achieved step 1. Now look at the design. Is it right? Tweak any design smells and check step 2 off your list.
For step 3, we need to make this test fast. We all know that integration tests are slow and error-prone, with all of the transaction management and database setup. Once we know the code works, I typically don't bother with the integration tests. It is at this point that you can introduce your dummy service, effectively turning your integration test into a unit test. Now that it doesn't touch the database in any way, we can check step 3 off the list because the test is now fast.
So, what are the problems with this approach? Well, many will say that I still need a test for the database-backed UserService. I typically don't keep integration tests lying around in my projects. My opinion is that these types of tests are slow, brittle, and don't catch enough logic errors in most projects to pay for themselves.
Hope that helps!
Brandon
