I write JUnit test cases for 3 purposes:
To ensure that my code satisfies all of the required functionality, under all (or most of) the input combinations/values.
To ensure that I can change the implementation, and rely on JUnit test cases to tell me that all my functionality is still satisfied.
As documentation of all the use cases my code handles, and as a spec for refactoring, should the code ever need to be rewritten. (Refactor the code, and if my JUnit tests fail, you probably missed some use case.)
I do not understand why or when Mockito.verify() should be used. When I see verify() being called, it is telling me that my JUnit test is becoming aware of the implementation. (Thus changing my implementation would break my JUnit tests, even though my functionality was unaffected.)
I'm looking for:
What should be the guidelines for appropriate usage of Mockito.verify()?
Is it fundamentally correct for JUnit tests to be aware of, or tightly coupled to, the implementation of the class under test?
If the contract of class A includes the fact that it calls method B of an object of type C, then you should test this by making a mock of type C, and verifying that method B has been called.
This implies that the contract of class A has sufficient detail that it talks about type C (which might be an interface or a class). So yes, we're talking about a level of specification that goes beyond just "system requirements", and goes some way to describing implementation.
This is normal for unit tests. When you are unit testing, you want to ensure that each unit is doing the "right thing", and that will usually include its interactions with other units. "Units" here might mean classes, or larger subsets of your application.
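A minimal sketch of that idea, assuming Mockito with JUnit 4 (A, C, doWork and b are placeholder names, not anything from a real codebase):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class ATest {

    // Collaborator type named in A's contract; kept inline for brevity.
    interface C {
        void b();
    }

    // Unit under test: its contract says doWork() must call b() on its C.
    static class A {
        private final C c;
        A(C c) { this.c = c; }
        void doWork() { c.b(); }
    }

    @Test
    public void doWork_callsBOnItsCollaborator() {
        C mockC = mock(C.class);   // stand-in for the real collaborator
        A a = new A(mockC);

        a.doWork();

        verify(mockC).b();         // the contract: b() was called (exactly once)
    }
}

The test says nothing about how doWork() arrives at that call, only that the interaction promised by the contract actually happens.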
Update:
I feel that this doesn't apply just to verification, but to stubbing as well. As soon as you stub a method of a collaborator class, your unit test has become, in some sense, dependent on implementation. It's kind of in the nature of unit tests to be so. Since Mockito is as much about stubbing as it is about verification, the fact that you're using Mockito at all implies that you're going to run across this kind of dependency.
In my experience, if I change the implementation of a class, I often have to change the implementation of its unit tests to match. Typically, though, I won't have to change the inventory of what unit tests there are for the class; unless of course, the reason for the change was the existence of a condition that I failed to test earlier.
So this is what unit tests are about. A test that doesn't suffer from this kind of dependency on the way collaborator classes are used is really a sub-system test or an integration test. Of course, these are frequently written with JUnit too, and frequently involve the use of mocking. In my opinion, "JUnit" is a terrible name for a product that lets us produce all these different types of test.
David's answer is of course correct but doesn't quite explain why you would want this.
Basically, when unit testing you are testing a unit of functionality in isolation. You test whether the input produces the expected output. Sometimes, you have to test side effects as well. In a nutshell, verify allows you to do that.
For example, you have a bit of business logic that is supposed to store things using a DAO. You could test this using an integration test that instantiates the DAO, hooks it up to the business logic, and then pokes around in the database to see if the expected stuff got stored. That's not a unit test any more.
Or, you could mock the DAO and verify that it gets called in the way you expect. With Mockito you can verify that something is called, how often it is called, and even use matchers on the parameters to ensure it gets called in a particular way.
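For instance, a sketch of that DAO case; OrderService and OrderDao are hypothetical names, and it assumes Mockito 2+ (for org.mockito.ArgumentMatchers) with JUnit 4:

import static org.mockito.ArgumentMatchers.anyInt;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class OrderServiceTest {

    // Hypothetical collaborator and unit under test, kept inline for brevity.
    interface OrderDao {
        void save(String item, int quantity);
    }

    static class OrderService {
        private final OrderDao dao;
        OrderService(OrderDao dao) { this.dao = dao; }
        void placeOrder(String item, int quantity) { dao.save(item, quantity); }
    }

    @Test
    public void placeOrder_storesTheOrderThroughTheDao() {
        OrderDao dao = mock(OrderDao.class);   // no real database involved
        OrderService service = new OrderService(dao);

        service.placeOrder("widget", 3);

        // The side effect we care about: exactly one save() for that item,
        // with matchers used for the arguments we are less strict about.
        verify(dao, times(1)).save(eq("widget"), anyInt());
    }
}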
The flip side of unit testing like this is indeed that you are tying the tests to the implementation, which makes refactoring a bit harder. On the other hand, a useful design smell is the amount of code it takes to exercise a design properly: if your tests need to be very long, something is probably wrong with the design. So code with a lot of side effects/complex interactions that need to be tested is probably not a good thing to have.
This is a great question!
I think the root cause of it is the following: we are using JUnit not only for unit testing. So the question should be split up:
Should I use Mockito.verify() in my integration (or any other higher-than-unit testing) testing?
Should I use Mockito.verify() in my black-box unit-testing?
Should I use Mockito.verify() in my white-box unit-testing?
So, if we ignore higher-than-unit testing, the question can be rephrased as: "Using white-box unit testing with Mockito.verify() creates tight coupling between the unit test and my code's implementation; can I do some 'grey-box' unit testing, and what rules of thumb should I use for this?"
Now, let's go through all of this step-by-step.
*Should I use Mockito.verify() in my integration (or any other higher-than-unit testing) testing?*
I think the answer is clearly no; moreover, you shouldn't use mocks for this. Your test should be as close to the real application as possible. You are testing a complete use case, not an isolated part of the application.
*black-box vs white-box unit-testing*
If you are using the black-box approach, what you are really doing is supplying input (all equivalence classes) and a state, and testing that you receive the expected output. In this approach the use of mocks is in general justified (you just mimic that they are doing the right thing; you don't want to test them), but calling Mockito.verify() is superfluous.
If you are using the white-box approach, what you are really doing is testing the behaviour of your unit. In this approach calling Mockito.verify() is essential: you should verify that your unit behaves as you expect it to.
*rules of thumb for grey-box testing*
The problem with white-box testing is that it creates high coupling. One possible solution is to do grey-box testing, not white-box testing. This is a sort of combination of black- and white-box testing. You are really testing the behaviour of your unit, as in white-box testing, but in general you make it implementation-agnostic where possible. Where possible, you just make a check as in the black-box case, and simply assert that the output is what you expect it to be. So, the essence of your question is when that is possible.
This is really hard. I don't have a good example, but I can give you two examples. In the case mentioned above with equals() vs equalsIgnoreCase(), you shouldn't call Mockito.verify(); just assert the output. If you can't do that, break your code down into smaller units until you can. On the other hand, suppose you have some @Service and you are writing a web service that is essentially a wrapper upon your @Service - it delegates all calls to the @Service (and does some extra error handling). In this case calling Mockito.verify() is essential: you shouldn't duplicate all of the checks that you did for the @Service; verifying that you're calling the @Service with the correct parameter list is sufficient.
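A sketch of that wrapper case (AccountService and AccountEndpoint are made-up names, and the extra error handling is elided):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class AccountEndpointTest {

    // Hypothetical service and the thin wrapper that delegates to it.
    interface AccountService {
        void transfer(String from, String to, long amountInCents);
    }

    static class AccountEndpoint {
        private final AccountService service;
        AccountEndpoint(AccountService service) { this.service = service; }
        void transfer(String from, String to, long amountInCents) {
            service.transfer(from, to, amountInCents);   // delegation is the whole job
        }
    }

    @Test
    public void transfer_delegatesToTheServiceWithTheSameArguments() {
        AccountService service = mock(AccountService.class);
        AccountEndpoint endpoint = new AccountEndpoint(service);

        endpoint.transfer("acc-1", "acc-2", 500L);

        // Since the wrapper only delegates, verifying the call is the test;
        // the service's own behaviour is covered by the service's own tests.
        verify(service).transfer("acc-1", "acc-2", 500L);
    }
}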
I must say, that you are absolutely right from a classical approach's point of view:
If you first create (or change) the business logic of your application and then cover it with (or adapt) tests (the Test-Last approach), then it will be very painful and dangerous to let tests know anything about how your software works, other than checking inputs and outputs.
If you are practicing a Test-Driven approach, then your tests are the first to be written, to be changed and to reflect the use cases of your software's functionality. The implementation depends on tests. That sometimes mean, that you want your software to be implemented in some particular way, e.g. rely on some other component's method or even call it a particular amount of times. That is where Mockito.verify() comes in handy!
It is important to remember that there are no universal tools. The type of software, its size, company goals and market situation, team skills and many other things influence the decision on which approach to use in your particular case.
In most cases when people don't like using Mockito.verify, it is because it is used to verify everything that the tested unit is doing and that means you will need to adapt your test if anything changes in it.
But I don't think that is a problem. If you want to be able to change what a method does without the need to change its test, that basically means you want to write tests which don't test everything your method is doing, because you don't want them to test your changes. And that is the wrong way of thinking.
What really is a problem, is if you can modify what your method does and a unit test which is supposed to cover the functionality entirely doesn't fail. That would mean that whatever the intention of your change is, the result of your change isn't covered by the test.
Because of that, I prefer to mock as much as possible: also mock your data objects. When doing that you can not only use verify to check that the correct methods of other classes are called, but also that the data being passed is collected via the correct methods of those data objects. And to make it complete, you should test the order in which calls occur.
Example: if you modify a db entity object and then save it using a repository, it is not enough to verify that the setters of the object are called with the correct data and that the save method of the repository is called. If they are called in the wrong order, your method still doesn't do what it should do.
So, I don't use Mockito.verify directly; I create an InOrder object with all the mocks and use inOrder.verify instead. And if you want to make it complete, you should also call Mockito.verifyNoMoreInteractions at the end and pass it all the mocks. Otherwise someone can add new functionality/behaviour without testing it, which would mean that after a while your coverage statistics can be 100% and still you are piling up code which isn't asserted or verified.
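A sketch of that pattern for the entity-plus-repository example above (Customer, CustomerRepository and RenameCustomerService are hypothetical names):

import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verifyNoMoreInteractions;

import org.junit.Test;
import org.mockito.InOrder;

public class RenameCustomerServiceTest {

    // Hypothetical entity, repository and unit under test, kept inline for brevity.
    interface Customer {
        void setName(String name);
    }

    interface CustomerRepository {
        void save(Customer customer);
    }

    static class RenameCustomerService {
        private final CustomerRepository repository;
        RenameCustomerService(CustomerRepository repository) { this.repository = repository; }
        void rename(Customer customer, String newName) {
            customer.setName(newName);   // must happen before the save
            repository.save(customer);
        }
    }

    @Test
    public void rename_setsTheNameAndThenSaves() {
        Customer customer = mock(Customer.class);
        CustomerRepository repository = mock(CustomerRepository.class);
        RenameCustomerService service = new RenameCustomerService(repository);

        service.rename(customer, "Andrea Gail");

        InOrder inOrder = inOrder(customer, repository);
        inOrder.verify(customer).setName("Andrea Gail");   // first the entity is updated
        inOrder.verify(repository).save(customer);          // then it is persisted
        verifyNoMoreInteractions(customer, repository);      // and nothing else happened
    }
}

If the save happened before the setName, or an extra unverified call crept in, this test would fail.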
As some people said
Sometimes you don't have a direct output on which you can assert
Sometimes you just need to confirm that your tested method is sending the correct indirect outputs to its collaborators (which you are mocking).
Regarding your concern about breaking your tests when refactoring, that is somewhat expected when using mocks/stubs/spies. I mean that by definition and not regarding a specific implementation such as Mockito.
But you could think in this way - if you need to do a refactoring that would create major changes on the way your method works, it is a good idea to do it on a TDD approach, meaning you can change your test first to define the new behavior (that will fail the test), and then do the changes and get the test passed again.
Related
I give you two tests, the purpose of which is solely to confirm that when service.doSomething() is called, emailService.sendEmail() is called with the person's email as a parameter.
@Mock
private EmailService emailService;

@InjectMocks
private Service service;

@Captor
private ArgumentCaptor<String> stringCaptor;

@Test
public void test_that_when_doSomething_is_called_sendEmail_is_called_NO_MOCKING() {
    final String email = "billy.tyne@myspace.com";

    // There is only one way of building an Address and it requires all these fields
    final Address crowsNest = new Address("334", "Main Street", "Gloucester", "MA", "01930", "USA");
    // There is only one way of building a Phone and it requires all these fields
    final Phone phone = new Phone("1", "978-281-2965");
    // There is only one way of building a Vessel and it requires all these fields
    final Vessel andreaGail = new Vessel("Andrea Gail", "Fishing", 92000);
    // There is only one way of building a Person and it requires all these fields
    final Person captain = new Person("Billy", "Tyne", email, crowsNest, phone, andreaGail);

    service.doSomething(captain); // <-- This requires only the person's email to be initialised, it doesn't care about anything else

    verify(emailService, times(1)).sendEmail(stringCaptor.capture());
    assertThat(stringCaptor.getValue(), is(email));
}

@Test
public void test_that_when_doSomething_is_called_sendEmail_is_called_WITH_MOCKING() {
    final String email = "billy.tyne@myspace.com";

    final Person captain = mock(Person.class);
    when(captain.getEmail()).thenReturn(email);

    service.doSomething(captain); // <-- This requires the person's email to be initialised, it doesn't care about anything else

    verify(emailService, times(1)).sendEmail(stringCaptor.capture());
    assertThat(stringCaptor.getValue(), is(email));
}
Why is my team telling me not to mock the domain objects that are required to run my tests but are not part of the actual test? I am told mocks are for the dependencies of the tested service only. In my opinion, the resulting test code is leaner, cleaner and easier to understand. There is nothing to distract from the purpose of the test, which is to verify that the call to emailService.sendEmail() occurs. That rule is something I have heard and accepted as gospel for a long time, over many jobs, but I still cannot agree with it.
I think I understand your team's position.
They are probably saying that you should reserve mocks for things that have hard-to-instantiate dependencies. That includes repositories that make calls to a database, and other services that can potentially have their own rat's nest of dependencies. It doesn't include domain objects that can simply be instantiated (even if filling out all the constructor arguments is a pain).
If you mock the domain objects then the test doesn't give you any code coverage of them. I know I'd rather get these domain objects covered by tests of services, controllers, repositories, etc. as much as possible and minimize tests written just to exercise their getters and setters directly. That lets tests of domain objects focus on any actual business logic.
That does mean that if the domain object has an error then tests of multiple components can fail. I think that's ok. I would still have tests of the domain objects (because it's easier to test those in isolation than to make sure all paths are covered in a test of a service), but I don't want to depend entirely on the domain object tests to accurately reflect how those objects are used in the service, it seems like too much to ask.
You have a point that the mocks allow you to make the objects without filling in all their data (and I'm sure the real code can get a lot worse than what is posted). It's a trade-off, but having code coverage that includes the actual domain objects as well as the service under test seems like a bigger win to me.
It seems to me like your team has chosen to err on the side of pragmatism vs purity. If everybody else has arrived at this consensus you need to respect that. Some things are worth making waves over. This isn't one of them.
It is a tradeoff, and you have designed your example nicely to be 'on the edge'. Generally, mocking should be done for a reason. Good reasons are:
You cannot easily make the depended-on component (DOC) behave as intended for your tests.
Calling the DOC causes non-deterministic behaviour (date/time, randomness, network connections).
The test setup is overly complex and/or maintenance-intensive (for example, it needs external files) (* see below).
The original DOC brings portability problems for your test code.
Using the original DOC causes unacceptably long build/execution times.
The DOC has stability (maturity) issues that make the tests unreliable, or, worse, the DOC is not even available yet.
For example, you (typically) don't mock standard library math functions like sin or cos, because they don't have any of the abovementioned problems.
Why is it recommendable to avoid mocking where unnecessary?
For one thing, mocking increases test complexity.
Secondly, mocking makes your tests dependent on the inner workings of your code, namely on how the code interacts with the DOCs (like, in your case, that the captain's email is obtained using getEmail, although possibly another way might exist to get that information).
And, as Nathan mentioned, it may be seen as a plus that - without mocking - DOCs are tested for free - although I would be careful here: There is a risk that your tests lose focus if you get tempted to also test the DOCs. The DOCs should have tests of their own.
Why is your scenario 'on the edge'?
One of the abovementioned good reasons for mocking is marked with (*): "The test setup is overly complex ...", and your example is constructed to have a test setup that is a bit complex. Complexity of the test setup is obviously not a hard criterion and developers will simply have to make a choice. If you want to look at it this way, you could say that either way has some risks when it comes to future maintenance scenarios.
Summarized, I would say that neither position (generally to mock or generally not to mock) is right. Instead, developers should understand the decision criteria and then apply them to the specific situation. And, when the scenario is in the grey zone such that the criteria don't lead to a clear decision, don't fight over it.
There are two mistakes here.
First, testing that when a service method is called, it delegates to another method. That is a bad specification. A service method should be specified in terms of the values it returns (for getters) or the values that could be subsequently got (for mutators) through that service interface. The service layer should be treated as a Facade. In general, few methods should be specified in terms of which methods they delegate to and when they delegate. The delegations are implementation details and so should not be tested.
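As a sketch of the alternative, here is a mutator specified purely by what can subsequently be got back through the same facade (the CustomerFacade name and its in-memory storage are purely illustrative):

import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

public class CustomerFacadeTest {

    // Hypothetical facade, kept inline for brevity.
    static class CustomerFacade {
        private final Map<Integer, String> namesById = new HashMap<>();

        void register(int id, String name) { namesById.put(id, name); }

        String nameOf(int id) { return namesById.get(id); }
    }

    @Test
    public void aRegisteredCustomerCanBeReadBackThroughTheFacade() {
        CustomerFacade facade = new CustomerFacade();

        facade.register(7, "Billy Tyne");

        // Specified in terms of observable values, not of which internal
        // methods or collaborators were used to store them.
        assertEquals("Billy Tyne", facade.nameOf(7));
    }
}

A test like this keeps passing no matter how the facade chooses to store its data.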
Unfortunately, the popular mocking frameworks encourage that delegation-testing approach, and so does overzealous use of Behaviour-Driven Development.
The second mistake is centered around the very concept of unit testing. We would like each of our unit tests to test one thing, so that when there is a fault in one thing, we have one test failure and locating the fault is easy. And we tend to think of "unit" as meaning the same as "method" or "class". This leads people to think that a unit test should involve only one real class, and that all other classes should be mocked.
That is impossible for all but the simplest of classes. Almost all Java code uses classes from the standard library, such as String or HashSet. Most professional Java code uses classes from various frameworks, such as Spring. Nobody seriously suggests mocking those. We accept that those classes are trustworthy, and so do not need mocking. We accept that it is OK not to mock "trustworthy" classes that the code of our unit uses. But, you say, our classes are not trustworthy, so we must mock them. Not so. You can trust those other classes, by having good unit tests for them.
But how do you avoid a tangle of interdependent classes that causes a confusing mass of test failures when there is only one fault present? That would be a nightmare to debug! Use a concept from 1970s programming called a virtual machine hierarchy (now a rather confusing term, given the additional meanings of virtual machine): arrange your software in layers from low level to high level, with higher layers performing operations using lower layers. Each layer provides a more expressive or advanced means of abstractly describing operations and objects. So, domain objects are at a low level, and the service layer is at a higher level. When several tests fail, start debugging the lowest-level test failure(s): the fault will probably be in that layer, possibly (but probably not) in a lower layer, and not in a higher layer.
Reserve mocks only for input and output interfaces that would make the tests very expensive to run (typically, this means mocking the repository layer and the logging interface).
The intention of an automated test is to reveal that the intended behavior of some unit of software is no longer performing as expected (aka reveal bugs.)
The granularity/size/bounds of units under test in a given test suite is to be decided by you and your team.
Once that is decided, if something outside of that scope can be mocked without sacrificing the behavior being tested, then that means it is clearly irrelevant to the test, and it should be mocked. This will help with making your tests more:
Isolated
Fast
Readable (as you mentioned)
...and most importantly, when the test fails, it will reveal that the intended behavior of some unit of software is no longer performing as expected. Given a sufficiently small unit under test, it will be obvious where the bug has occurred and why.
If your test-without-mocks example were to fail, it could indicate an issue with Address, Phone, Vessel, or Person. This will cause wasted time tracking down exactly where the bug has occurred.
One thing I will mention is that your example with mocks is actually a bit unreadable IMO, because you are stubbing getEmail() to return a bare String and then asserting on that same value, and it is not immediately clear why.
I am a learner in writing JUnit test cases. I have seen a pattern of writing JUnit tests in which we create a test class for each class individually, named after it, and write test cases for each method of that class in its respective test class, so that maximum code coverage is achieved.
What I was thinking is that writing test cases for my feature would be the better choice, because then, no matter how many method signatures change in the future, I don't have to change or re-create test cases for the modified or newly created methods; I would already have test cases for the feature I developed. So if my test cases run fine for a particular feature, I can be sure, with a minimum amount of test code, that everything is fine.
By doing this I don't have to write test cases for each and every method of each class. Is this a good way?
Well, test cases are written for a reason. Each and every method has to work properly, as expected. If you only write test cases at the feature level, how do you find exactly where an error occurred, and how confidently can you ship your code to the next level?
A better approach is to write unit test cases for each class and then an integration test to make sure everything works well together.
We found success in utilizing both. By default we use one test class per class. But when particular use cases come up, e.g. use cases that involve multiple classes, or use cases where the existing boilerplate testing code prevents the use case from being properly tested, then we create a test class for that scenario.
By doing this I don't have to write test cases for each and every
method of each class. Is this a good way?
In this case you write only integration tests and no unit tests.
Writing tests for use cases is really nice, but it is not enough, because it is hard to cover all the cases of all the methods invoked in an integration test: there may be a very large number of branches, whereas that is much easier in a unit test.
Besides, a use case test may pass for the wrong reasons: thanks to side effects between the multiple methods invoked.
By writing a unit test you protect yourself against this kind of issue.
Definitely, unit and integration tests are not opposed but complementary. So you have to write both to get a robust application.
I am new to mocking and I have googled a lot for the best practices, but so far I couldn't find any satisfying resource, so I thought of asking on SO.
I have tried a few test cases and have the following doubts:
Do you write a separate unit test for each method (public, private, etc.) and mock the calls to other methods that are invoked inside the method under test, or do you test only the public methods?
Is it okay to verify the invocation of a stubbed method at the end when testing a method that returns nothing, e.g. a DB insertion?
Please also add any other practices that are must-knows.
There are many levels of testing. Unit testing is of a finer granularity than integration testing, which you should research separately. Regrettably this is still quite a young area of the software engineering industry, and as a result the terminology gets intermixed in ways that were not intended.
For unit testing you should write tests that determine whether the behaviour of the class meets expectations. Once you have all such tests, you should find that any private methods are also tested as a consequence, so there is no need to test private methods directly. If you only test behaviour, you should find your tests never need to change even though the class under test may change over time; you may of course need to add more tests to compensate, just never change the existing ones.
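As a small illustration of that point, here is a hypothetical class whose private helper is only ever exercised through its public method:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorTest {

    // Hypothetical unit under test, kept inline for brevity.
    static class PriceCalculator {
        long totalInCents(long unitPriceInCents, int quantity) {
            return applyBulkDiscount(unitPriceInCents * quantity, quantity);
        }

        // Private detail: never tested directly, only through totalInCents().
        private long applyBulkDiscount(long grossInCents, int quantity) {
            return quantity >= 10 ? grossInCents * 90 / 100 : grossInCents;
        }
    }

    private final PriceCalculator calculator = new PriceCalculator();

    @Test
    public void smallOrdersPayFullPrice() {
        assertEquals(500L, calculator.totalInCents(100L, 5));
    }

    @Test
    public void bulkOrdersGetTenPercentOff() {
        // The private discount rule is observed through the public behaviour.
        assertEquals(900L, calculator.totalInCents(100L, 10));
    }
}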
Each class, in a good design, should have minimal use of other classes (collaborators). Those collaborators that get mocked are often implementing infrastructure such as database access. Be wary of testing collaboration as this is more closely associated with a larger system test - mocking collaborators gives your unit test knowledge not only of how it behaves but also how it operates which is a different subject matter.
Sorry to be vague but you are embarking on a large topic and I wanted to be brief.
Folks, it is always said in TDD that
we should write JUnit tests even before we write the actual code.
Somehow I am not able to understand this in the right spirit. I hope what it means is that you just write empty methods with the right signatures, and your test case is expected to fail initially.
Say in the TDD approach I need to get the list of customers.
As per my understanding I will write an empty method like below:
public List<CustomerData> getCustomers(int custId) {
    return null;
}
Now I will write a JUnit test case where I will check the size as 10 (which is what I am actually expecting). Is this right?
Basically my question is: in TDD, how can we write a JUnit test case before writing the actual code?
I hope what it means is that you just write empty methods with the right signatures
Yes. And with most modern IDEs, if you write a method name which does not exist in your test, they will create a stub for you.
Say in the TDD approach I need to get the list of customers. What's the right way to proceed?
Your example is not quite there yet: you want to test for a list of customers, but your stub just returns null, so the test will obviously fail at first.
Then modify the method so that the test succeeds.
Then create a test method for customer add. Test fails. Fix it. Rinse. Repeat.
So, basically: with TDD, you start by writing tests that you KNOW will fail, and then fix your code so that they pass.
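For example, a first failing test for the getCustomers(int) stub from the question might look like the sketch below; CustomerService is just an assumed name for the class holding that stub, and in a real test you would also arrange whatever data makes 10 the expected answer:

import static org.junit.Assert.assertEquals;

import java.util.List;

import org.junit.Test;

public class CustomerServiceTest {

    @Test
    public void getCustomers_returnsTheTenExpectedCustomers() {
        CustomerService service = new CustomerService();

        List<CustomerData> customers = service.getCustomers(42);

        // With the empty stub this fails (customers is null), which is the point:
        // the test pins down the behaviour before the real code exists.
        assertEquals(10, customers.size());
    }
}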
Recommended read.
Often you'll write the test alongside the skeleton of the code. Initially you can write a non-functional implementation (e.g. throw an UnsupportedOperationException) and that will trigger a test failure. Then you'd flesh out the implementation until finally your test passes.
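For the getCustomers example from the question, such a deliberately non-functional skeleton might look like this (again, CustomerService is only an assumed name for the class holding the method):

import java.util.List;

public class CustomerService {

    public List<CustomerData> getCustomers(int custId) {
        // Compiles, so the test compiles too, but any test calling it fails straight away.
        throw new UnsupportedOperationException("getCustomers not implemented yet");
    }
}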
You need to be pragmatic about this. Obviously you can't compile your test until at least your unit under test compiles, and so you have to do a minimal amount of implementation work alongside your test.
Check out this recent Dr. Dobb's editorial, which discusses exactly this point and the role of pragmatism around it, especially by the mavens of this practice (Kent Beck et al.):
A key principle of TDD is that you write no code without first writing
a failing unit test. But in fact, if you talk to the principal
advocates of TDD (such as Kent Beck, who popularized the technique,
and Bob Martin, who has taught it to thousands of developers), you
find that both of them write some code without writing tests first.
They do not — I should emphasize this — view these moments as lapses
of faith, but rather as the necessary pragmatism of the intelligent
developer.
That's partly right.
Using an IDE (Eclipse, IntelliJ) you can create a test. In that test, invoke a method that does not exist yet and, using a refactoring tool, create the method with the proper signature.
That's a trick that makes working with TDD easier and more fun.
Regarding "Now I will write a JUnit test case where I will check the size as 10 (which is what I am actually expecting). Is this right?": yes, you should write a test that fails, and then provide the proper implementation.
I think you should go and write the test first, and think about the signature of the function while writing the test.
It's the same as writing the signature and then writing the test, but inventing the signature of the function while writing the test is helpful, since you will have all the information about the responsibility of the function and will be able to come up with the proper signature.
I'd like to know if my approach to unit testing has been wrong:
My application has a bootstrap process that initializes several components and provides services to various sub-systems - let's call it the "controller".
In many cases, in order to unit test these sub-systems, I would need access to the controller, as these sub-systems may depend on it. My approach to unit testing them is to initialize the system and then provide the controller to any unit test that requires it. I achieve this via inheritance: I have a base unit test that initializes and tests the controller, and any unit test requiring the controller extends this base class and hence has access to it.
My question is this:
(1) Is this achieving proper isolation? It makes sense to me that unit tests should be done in isolation so that they are repeatable and independent - is it OK that I am providing a real, initialized controller rather than mocking it or attempting to mock the specific environment required by each test?
(2) As a best practice (assuming that my previous approach is OK), should I be creating the controller over and over for each unit test, or would it suffice to create it once (its state is not changing)?
If we are supplying a "real" controller to test another component, then strictly speaking we are performing an integration test rather than a unit test. This is not necessarily a bad thing, but consider the following points:
Cost of Creating the Controller
If the controller is a heavyweight object with a considerable cost to construct, then every unit test will incur this cost. As the number of unit tests grows, that cost may begin to dominate the overall test execution time. It is always desirable to keep the runtime of unit tests as small as possible to allow quick turnaround after code changes.
Controller Dependencies
If the controller is a complex object, it may have dependencies of its own that need to be instantiated in order to construct the controller itself. For example, it may need to access a database or configuration file of some kind. Now, not only does the controller need to be initialized, but also those components. As the application evolves over time, the controller may require more and more dependencies, just making this problem worse as time goes on.
Controller State
If the controller carries any state, the execution of a unit test may change that state. This, in turn, may change the behaviour of subsequent unit tests. Such changes may result in apparently non-deterministic behaviour of the unit tests, introducing the possibility of masking bugs. The cure for this problem is to create the controller anew for each test, which may be impractical if that creation is expensive (as noted above).
Combinatorial Problem
The number of combinations of possible inputs to the composite system of the unit under test and the controller object may be much larger than the number of combinations for the unit alone. That number might be too large to test practically. By testing the unit in isolation with a stub or mock object in place of the controller, it is easier to keep the number of combinations under control.
God Object
If the controller is conveniently accessible to all components in every unit test, there will be a great temptation to turn the controller into a God Object that knows everything about every component in the system. Even worse, those components may begin to interact with one another through that god object. The end result is that the separation between application components begins to erode and the system starts to become monolithic.
Technical Debt
Even if the controller is stateless and cheap to instantiate today, that may change as the application evolves. If that day arrives after we have written a large number of unit tests, we might be faced with a large refactoring exercise across all of those tests. Furthermore, the actual system code might also need refactoring to replace all of the controller references with lighter-weight interfaces. There is a risk that the refactoring cost is significant - possibly even too high to contemplate, resulting in a system that is "stuck" in an undesirable form.
Recommendation
In order to avoid these pitfalls now and in the future, my recommendation is to avoid supplying the real controller to the unit tests.
The full controller is likely to be difficult to stub or mock effectively. This will induce (desirable) pressure to express a component's dependencies as a "thin", focused interface in place of the "thick", "kitchen sink" interface that the controller is likely to present. Why is this desirable? It is desirable because this practice promotes better separation of concerns between system components, yielding architectural benefits far beyond the unit test code base.
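A sketch of what that can look like, with a made-up ClockSource interface standing in for one narrow slice of whatever the controller used to provide:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ReportGeneratorTest {

    // Instead of depending on the whole controller, the component declares
    // the narrow interface it actually needs.
    interface ClockSource {
        long currentTimeMillis();
    }

    static class ReportGenerator {
        private final ClockSource clock;
        ReportGenerator(ClockSource clock) { this.clock = clock; }
        String header() { return "Report generated at " + clock.currentTimeMillis(); }
    }

    @Test
    public void header_containsTheGenerationTimestamp() {
        ClockSource clock = mock(ClockSource.class);
        when(clock.currentTimeMillis()).thenReturn(1000L);   // no real controller needed

        ReportGenerator generator = new ReportGenerator(clock);

        assertEquals("Report generated at 1000", generator.header());
    }
}

The stub is trivial precisely because the interface is thin; the heavyweight controller never enters the test at all.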
For lots of good practical advice about how to achieve separation of concerns and generally write testable code, see Misko Hevery's guide and talks.
I think it's OK to provide a real controller. That will provide for a good integration test of your system. At my company we do a lot of what you're doing: a base test class that sets up the environment and the actual test cases that inherit it.
Hrm ... I think I might create it once. That'll also test your controller and make sure its state isn't changing and that it can bear repeated invocations.
If you're looking for a strict unit test, why not use mock objects, like EasyMock:
http://www.easymock.org/
That way you can provide "mock" behavior for the controller without ever instantiating it. Unitils also provides integration with EasyMock, such that if you extend the UnitilsJUnit4 unit test class, you get automatic mock object creation and injection. Unitils also provides for DB unit/integration testing, which might be overkill for your software, though.