Java Mockito one test causes another to fail - java

@Test
public void onConnectionCompletedTest() {
    connectionProvider.initialize();
    connectionProvider.addEventObserver(SocketEvent.Type.SOCKET_CONNECT, mockedObserver);
    connectionProvider.onConnectionCompleted(mockedChannel);
    verify(mockedObserver).socketEventObserved(socketEventCaptor.capture());
    Assert.assertEquals(SocketEvent.Type.SOCKET_CONNECT, socketEventCaptor.getValue().getType());
}
@Test
public void onConnectionClosedTest() {
    connectionProvider.initialize();
    connectionProvider.addEventObserver(SocketEvent.Type.SOCKET_DISCONNECT, mockedObserver);
    connectionProvider.onConnectionClosed(mockedChannel);
    verify(mockedObserver).socketEventObserved(socketEventCaptor.capture());
    Assert.assertEquals(SocketEvent.Type.SOCKET_DISCONNECT, socketEventCaptor.getValue().getType());
}
The problem is that when I run both of these tests, the second one fails. But if I comment out
verify(mockedObserver).socketEventObserved(socketEventCaptor.capture());
Assert.assertEquals(SocketEvent.Type.SOCKET_CONNECT, socketEventCaptor.getValue().getType());
then the 2nd test will pass. There's a lot of different classes/methods involved in this so hopefully this is enough information to be able to come up with an explanation.
The error I get:
wanted but not invoked:
mockedObserver.socketEventObserved(
<Capturing argument>
);
-> at com.company.cnx.cip.io.ConnectionProviderTest.onConnectionClosedTest(ConnectionProviderTest.java:188)
Actually, there were zero interactions with this mock.
My question, exactly: what could be happening such that when I @Ignore the first test, the second one passes?
EDIT: I have an @Before method that's important.
@Before
public void init() {
    MockitoAnnotations.initMocks(this);
    JsonParser parser = new JsonParser();
    JsonElement jsonElement = parser.parse(json);
    configurationService.loadConfiguration(jsonElement, "id");
    AppContext.getContext().applyConfiguration(configurationService);
    connectionProvider = ConnectionProvider.newInstance();
}
You can ignore everything except the first and last lines of it. I create a new ConnectionProvider object, so I would think that one test shouldn't affect another, because they're operating on two separate objects.

This is more straightforward than you think: The test works when you comment out those lines because socketEventObserved is not getting called when it's the second test of the run. This is probably a problem with code that you haven't posted above.
Since it seems that the culprit may be buried in an impractical volume of code, here are a few debugging tips and general sources of test pollution:
First and foremost, set a breakpoint everywhere that socketEventObserved is called, and compare between single-test runs and multi-test runs. If you see the same behavior Mockito sees, then it's not Mockito.
As it turned out to be in this case, keep an eye out for actions that may occur on other threads (particularly to listeners). Using a Mockito timeout can help there:
verify(mockedObserver, timeout(2000))
.socketEventObserved(socketEventCaptor.capture());
You seem to be working with I/O channels, which sometimes involve buffering or flushing policies that can be triggered only when a certain number of tests are run, or if tests are run in a certain order. Make sure that your @Before method fully resets state, including any modes or buffers that your code may touch.
You interact with AppContext.getContext() and ConnectionProvider.newInstance(), both of which are static method calls. I'm less worried about the latter, unless it conserves instances in spite of its name, but the former may not take kindly to multiple initializations. In general, in your system under test, keep an eye out for writes to global state.
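To make that concrete, here is a minimal stdlib sketch (the AppContext internals here are hypothetical, reusing the name from the question) of how a lazily initialized static holder survives across every test that runs in the same JVM:

```java
// Hypothetical sketch: a lazily initialized static holder like this
// keeps its state across every test that runs in the same JVM.
class AppContext {
    private static AppContext instance;
    private boolean configured;

    static AppContext getContext() {
        if (instance == null) {
            instance = new AppContext();
        }
        return instance; // the same object is handed to every test
    }

    void applyConfiguration() {
        configured = true; // a write here is visible to all later tests
    }

    boolean isConfigured() {
        return configured;
    }
}
```

If the first test leaves such an object half-configured, the second test inherits that state even though it constructed a brand-new ConnectionProvider.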
Mockito itself keeps static state: its internal bookkeeping is static and thread-scoped (through ThreadLocal), and at times a test can leave Mockito in an invalid internal state that it cannot detect (because an operation is half-completed, for instance). Add an @After method to detect this case:
@After
public void checkMockito() {
    Mockito.validateMockitoUsage();
}
...or switch to using MockitoRule or MockitoJUnitRunner, which do this and initMocks(this) automatically.
Finally, Mockito isn't perfect: it uses proxy objects to override methods, which also means it will silently fail to mock final methods and some non-public methods that work with nested classes (as those require the compiler to generate synthetic methods you can't see). If your mockedObserver has any final or limited-visibility methods, real code and mocked code may interact in a way that makes the system's behavior hard to predict.

Related

How to return a new instance every time doReturn is called?

I have the following test case:
ClassPathResource resource = new ClassPathResource(...);
doReturn(resource.getInputStream()).when(someMock).getInputStream();
where I read some resource and return it in my test. This method is called with some uncertainty in an asynchronous, scheduled manner in the actual production code - hence I do not have complete control over how many times it is exactly called, although the returned data is always the same (or should be).
In other words, this can potentially cause a randomly failing test if the method is invoked and the stream is read more times than expected, since the stream has not (and cannot) be reset, and is already at EOF.
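That EOF behavior is easy to reproduce with a plain stdlib stream - a sketch with ByteArrayInputStream standing in for the classpath resource:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

public class EofDemo {
    // Simulates two consumers reading the SAME stream instance.
    public static int secondRead() {
        InputStream in = new ByteArrayInputStream("payload".getBytes());
        try {
            in.readAllBytes();   // first consumer drains the stream
            return in.read();    // second consumer sees -1: already at EOF
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for an in-memory stream
        }
    }
}
```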
I explicitly do not wish to have this be invoked synchronously in the test context, and am using Awaitility.await() to test these kinds of scenarios, so unfortunately that is not an option for me.
Now, the 'dumb' way to potentially fix this is something like the following:
doReturn(resource.getInputStream()).doReturn(resource.getInputStream()).<...>.when(someMock).getInputStream();
But this still does not fix the actual issue and is a band-aid at best.
I was actually expecting something like the following to work:
ClassPathResource resource = new ClassPathResource(...);
doReturn(resource.call().getInputStream()).when(someMock).getInputStream();
but unfortunately, this also retrieves the underlying Stream only once.
How can I provide a fresh instance on every doReturn() call? Can mockito even do this? Is there an alternative approach to what I wish to do?
I'm not totally sure I'm on board with what you're asking.
You're first doing ClassPathResource resource = new ClassPathResource(...);, which I'm assuming is a one-time thing. And then you are returning resource.getInputStream() for every invocation. By definition, you can't have multiple instances then - you're asking for one thing and trying to do another. So what is it that you want? You could use some question restructuring. Also, what are you doing with ClassPathResource in a unit test in the first place?
In the meantime, the best I can guess of what you want is the most literal interpretation of your question that I can get to, completely ignoring any and all pseudo-code (because it completely contradicts your question). That is, you want a completely separate instance of an InputStream for every invocation of a method. This part is easy.
Mockito.doAnswer(new Answer<InputStream>() {
    @Override
    public InputStream answer(InvocationOnMock invocation) throws Throwable {
        return Mockito.mock(InputStream.class);
    }
}).when(someMock).getInputStream();
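Framework aside, the underlying fix is to defer creation so that every invocation manufactures a fresh, unread stream - the same idea the Answer above expresses. A stdlib-only sketch using a Supplier, with ByteArrayInputStream standing in for the classpath resource:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.function.Supplier;

public class FreshStreamDemo {
    // Each get() builds a brand-new stream positioned at the start.
    static final Supplier<InputStream> FRESH =
            () -> new ByteArrayInputStream("data".getBytes());

    public static boolean bothReadsSucceed() {
        try {
            byte[] first = FRESH.get().readAllBytes();
            byte[] second = FRESH.get().readAllBytes(); // not at EOF: it's a new stream
            return first.length > 0 && second.length > 0;
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for an in-memory stream
        }
    }
}
```

With Mockito, the equivalent is typically when(someMock.getInputStream()).thenAnswer(inv -> resource.getInputStream()), so a new stream is produced per invocation.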

Can I place unit test result validation in the #After method?

I'm writing a Unit test for the behavior of a method, and want to try to reduce my code duplication - I'm about to double the test cases again, and the file is getting a bit hard to look at.
If anyone's really interested, the code is here (warning to the future - link may not be permanent).
Right now, I do the following, repeated several times in slightly different ways:
@Test
public void testAdd64OfMax64() {
    // Set up container
    TileEntityChest nmsinv = new TileEntityChest();
    Inventory container = new CraftInventory(nmsinv);
    // Call method
    HashMap<Integer, ItemStack> leftover = container.addItem(new ItemStack(Foo, 64));
    // Set up expected result
    ItemStack[] expected = empty.clone();
    expected[0] = new ItemStack(Foo, 64);
    // Verify result
    assertThat(container.getContents(), is(expected));
    assertThat(leftover, is(Collections.EMPTY_MAP));
}
(Note: assertThat comes from org.hamcrest.Matchers.)
I know that I can put the "Set up container" and cloning of the empty array into a @Before method.
My question is, can I put the assertThat method into an @After method and have the test still fail (when it should fail)?
No. The @After method should be used to release external resources after a test run. Also, note that it is guaranteed to run even if your @Test method throws an exception, which may not be desirable in your case.
Also, think about the people who will have to maintain your code in the future. You would be violating the principle of least astonishment by validating the results of your tests in the @After method. Nobody expects that!
It would be better if you created a helper method, e.g. checkResults, called at the end of each of your @Test methods to validate your results.
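A minimal sketch of that helper idea (all names hypothetical), so each test ends with one explicit call instead of a repeated block of asserts:

```java
import java.util.Arrays;
import java.util.Map;

public class ResultChecker {
    // Shared validation called explicitly as the last line of each test,
    // rather than hidden away in an @After method.
    static void checkResults(String[] contents, String[] expected, Map<?, ?> leftover) {
        if (!Arrays.equals(contents, expected)) {
            throw new AssertionError("contents mismatch: " + Arrays.toString(contents));
        }
        if (!leftover.isEmpty()) {
            throw new AssertionError("unexpected leftover: " + leftover);
        }
    }
}
```

This keeps the failure inside the test method itself, where everyone expects it.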

Specifying order of execution in JUnit test case [duplicate]

This question already has answers here:
How to run test methods in specific order in JUnit4?
(23 answers)
Closed 9 years ago.
I have a test case where I add an entity, update it and delete the same. Hence, the order of execution is important here. I want it to be :
Create
Update
Delete
Strangely, for just one test case (out of 15), JUnit executes it in the following order:
Delete
Update
Create
How do I tell JUnit to execute them in a specific order? In other cases, JUnit works totally fine (executing serially). And why does JUnit behave weirdly in this one case?
Relevant code snippet below :
private static Date date;
private static int entity;
static Parking p;

public ParkingTests(String name) {
    super(name);
}

public void testAdd() throws Exception {
    // Add code here
}

public void testUpdate() throws Exception {
    // update code here
}

public void testDelete() throws Exception {
    // delete code here
}
}
It gets weirder. I run a lot of test cases as part of a suite. If I run just the Parking case, the order is maintained. If I run it along with others, it is sometimes maintained, sometimes not !
Your kind of situation is awkward, as it feels bad to keep duplicating work in order to isolate the tests (see below) - but note that most of the duplication can be pulled out into setUp and tearDown (@Before, @After) methods, so you don't need much extra code. Provided that the tests are not running so slowly that you stop running them often, it's better to waste a bit of CPU in the name of clean testing.
public void testAdd() throws Exception {
    // wipe database
    // add something
    // assert that it was added
}

public void testUpdate() throws Exception {
    // wipe database
    // add something
    // update it
    // assert that it was updated
}

public void testDelete() throws Exception {
    // wipe database
    // add something
    // delete it
    // assert that it was deleted
}
The alternative is to stick everything into one test with multiple asserts, but this is harder to understand and maintain, and gives a bit less information when a test fails:
public void testCRUD() throws Exception {
    // wipe database
    // add something
    // assert that it was added
    // update it
    // assert that it was updated
    // delete it
    // assert that it was deleted
}
Testing with databases or collections or storage of any kind is tricky because one test can always affect other tests by leaving junk behind in the database/collection. Even if your tests don't explicitly rely on one another, they may still interfere with one another, especially if one of them fails.
Where possible, use a fresh instance for each test, or wipe the data, ideally in as simple a way as possible - e.g. for a database, wiping an entire table is more likely to succeed than a very specific deletion that you might accidentally get wrong.
Update: It's usually better to wipe data at the start of the test, so one failed test run doesn't affect the next run.
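A tiny sketch of that pattern against an in-memory stand-in for the database (names hypothetical): wiping at the start of the test means a crashed or failed earlier run cannot pollute this one.

```java
import java.util.ArrayList;
import java.util.List;

public class WipeFirstDemo {
    // Stand-in for a shared table that survives between tests.
    static final List<String> table = new ArrayList<>();

    // Called at the START of every test (e.g. from a @Before-style setUp),
    // so leftovers from a failed earlier run are cleared away.
    static void wipe() {
        table.clear();
    }

    static int addAndCount(String row) {
        wipe();              // wipe first, then build the fixture
        table.add(row);
        return table.size(); // always 1, regardless of prior state
    }
}
```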
Generally, JUnit tests (test methods) should not depend on each other.
The following is taken from the JUnit FAQ:
Each test runs in its own test fixture to isolate tests from the
changes made by other tests. That is, tests don't share the state of
objects in the test fixture. Because the tests are isolated, they can
be run in any order. [...] The ordering of test-method invocations is not
guaranteed.
So if you want to do some common initialization stuff, you can do that in the method annotated with @Before, and clean up in the method annotated with @After. Or, if that initialization is not required for all test methods in your test class, you can put it in private methods and call them appropriately from your tests.
On a side note, if you still want to do ordering of tests then you may have a look at TestNG.
If you're determined to have an order of execution for your tests, JUnit 4.11 now supports this through an annotation. See this thread for more discussion - basically, you would use
@FixMethodOrder
to guarantee some test order that way. It is discouraged, though.
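To see why name-based sorting gives a deterministic order (this is what @FixMethodOrder(MethodSorters.NAME_ASCENDING) relies on in JUnit 4.11+), here is a stdlib-only sketch that sorts a test class's declared method names; the class and method names below are hypothetical:

```java
import java.lang.reflect.Method;
import java.util.Arrays;

public class SortDemo {
    // Hypothetical test class whose method names encode the intended order.
    static class ParkingTests {
        public void test1Add() {}
        public void test2Update() {}
        public void test3Delete() {}
    }

    public static String[] sortedTestNames() {
        // getDeclaredMethods() returns methods in no particular order,
        // so we sort by name, as NAME_ASCENDING does.
        return Arrays.stream(ParkingTests.class.getDeclaredMethods())
                .map(Method::getName)
                .sorted()
                .toArray(String[]::new);
    }
}
```

Naming tests with ordered prefixes (test1..., test2..., ...) is what makes this deterministic.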
If you are using Java 7, you should know that JUnit gets the list of all tests using Method[] getDeclaredMethods() from java.lang.Class. As the Javadoc of this method (and the JUnit docs) state: "The elements in the array returned are not sorted and are not in any particular order." In previous JVM implementations, however, the method list happened to be ordered as in the source code.
This was taken from this blog, and the author provides a workaround.
In general, JUnit does not guarantee the ordering of test cases. It's not guaranteed to be alphabetical, nor the order in the file. If the ordering of tests were important, then one depends on the output of the previous. What if the first one failed? Should we even bother with the later (and dependent) tests? Probably not.
So if we had this:
@Test
public void first() {...}

@Test
public void second() {...}

@Test
public void third() {...}
We don't know what order they will run in. Since we are hoping they go in order, and we should probably not bother running second or third if the previous one(s) failed, we can do this instead:
@Test
public void firstThree() {
    first();
    second();
    third();
}

public void first() {...}
public void second() {...}
public void third() {...}

Notice that we only have one @Test this time, and it guarantees ordering.
If you want to run junit tests in order "just as they present in your source code",
see my note about this here:
How to run junit tests in order as they present in your source code
But it is really not a good idea, tests must be independent.
What you can do :
Cleanup the database before every test
Start by testing the first logical operation first. When you have enough confidence, assume it is correct and move to the next, etc...
Write white box tests first, but start with black box tests. For example if you have triggers or similar in your database, start with that.

Value of Behavior Verification

I've been reading up on (and experimenting with) several Java mocking APIs such as Mockito, EasyMock, JMock and PowerMock. I like each of them for different reasons, but have ultimately decided on Mockito. Please note though, that this is not a question about which framework to use - the question really applies to any mocking framework, although the solution will look different as the APIs are (obviously) different.
Like many things, you read the tutorials, you follow the examples, and you tinker around with a few code samples in a sandbox project. But then, when it comes time to actually use the thing, you start to choke - and that's where I am.
I really, really like the idea of mocking. And yes, I am aware of the complaints about mocking leading to "brittle" tests that are too heavily coupled with the classes under test. But until I come to such a realization myself, I really want to give mocking a chance to see if it can add some good value to my unit tests.
I'm now trying to actively use mocks in my unit tests. Mockito allows both stubbing and mocking. Let's say we have a Car object that has a getMaxSpeed() method. In Mockito, we could stub it like so:
Car mockCar = mock(Car.class);
when(mockCar.getMaxSpeed()).thenReturn(100.0);
This "stubs" the Car object to always return 100.0 as the max speed of our car.
My problem is that, after writing a handful of unit tests already...all I'm doing is stubbing my collaborators! I'm not using a single mock method (verify, etc.) available to me!
I realize that I'm stuck in a "stubbing state of mind" and I'm finding it impossible to break. All this reading, and all this excitement building up to using mocks in my unit testing and... I can't think of a single use case for behavior verification.
So I backed up and re-read Fowler's article and other BDD-style literatures, and still I'm just "not getting" the value of behavior verification for test double collaborators.
I know that I'm missing something, I'm just not sure of what. Could someone give me a concrete example (or even a set of examples!) using, say, this Car class, and demonstrate when a behavior-verifying unit test is favorable to a state-verifying test?
Thanks in advance for any nudges in the right direction!
Well, if the object under test calls a collaborator with a computed value, and the test is supposed to test that the computation is correct, then verifying the mock collaborator is the right thing to do. Example:
private ResultDisplayer resultDisplayer;

public void add(int a, int b) {
    int sum = a + b; // trivial example, but the computation might be more complex
    resultDisplayer.display(sum);
}
Clearly, in this case, you'll have to mock the displayer and verify that its display method has been called with the value 5 if 2 and 3 are the arguments of the add method.
If all you do with your collaborators is call getters without arguments, or with arguments which are direct inputs of the tested method, then stubbing is probably sufficient, unless the code might get a value from two different collaborators and you want to verify that the appropriate collaborator has been called.
Example:
private Computer noTaxComputer;
private Computer taxComputer;

public BigDecimal computePrice(Client c, ShoppingCart cart) {
    if (c.isSubjectToTaxes()) {
        return taxComputer.compute(cart);
    } else {
        return noTaxComputer.compute(cart);
    }
}
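The same check can be made concrete without any mocking framework: a hand-rolled recording double (all names below are hypothetical stand-ins) counts which computer was called, which is exactly what verify(taxComputer).compute(cart) asserts.

```java
import java.math.BigDecimal;

public class TaxRoutingDemo {
    // Hand-rolled test double that records whether it was called.
    static class RecordingComputer {
        int calls = 0;
        BigDecimal compute(Object cart) {
            calls++;
            return BigDecimal.TEN; // canned value; the test cares about routing
        }
    }

    // Simplified version of computePrice: route to one of two collaborators.
    static BigDecimal computePrice(boolean subjectToTaxes, Object cart,
                                   RecordingComputer taxComputer,
                                   RecordingComputer noTaxComputer) {
        return subjectToTaxes ? taxComputer.compute(cart)
                              : noTaxComputer.compute(cart);
    }
}
```

Since both collaborators would return a plausible value, only behavior verification can prove the right one was chosen.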
I like @JB Nizet's answer, but here's another example. Suppose you want to persist a Car to a database using Hibernate after making some changes. So you have a class like this:
public class CarController {

    private HibernateTemplate hibernateTemplate;

    public void setHibernateTemplate(HibernateTemplate hibernateTemplate) {
        this.hibernateTemplate = hibernateTemplate;
    }

    public void accelerate(Car car, double mph) {
        car.setCurrentSpeed(car.getCurrentSpeed() + mph);
        hibernateTemplate.update(car);
    }
}
To test the accelerate method, you could just use a stub, but you wouldn't have a complete test.
public class CarControllerTest {

    @Mock
    private HibernateTemplate mockHibernateTemplate;

    @InjectMocks
    private CarController controllerUT;

    @Test
    public void testAccelerate() {
        Car car = new Car();
        car.setCurrentSpeed(10.0);
        controllerUT.accelerate(car, 2.5);
        assertThat(car.getCurrentSpeed(), is(12.5));
    }
}
This test passes and does check the computation, but we don't know if the car's new speed was saved or not. To do that, we need to add:
verify(mockHibernateTemplate).update(car);
Now, suppose that if you try to accelerate past max speed, you expect the acceleration and the update not to happen. In that case, you would want:
@Test
public void testAcceleratePastMaxSpeed() {
    Car car = new Car();
    car.setMaxSpeed(20.0);
    car.setCurrentSpeed(10.0);
    controllerUT.accelerate(car, 12.5);
    assertThat(car.getCurrentSpeed(), is(10.0));
    verify(mockHibernateTemplate, never()).update(car);
}
This test will not pass with our current implementation of CarController, but it shouldn't. It shows you need to do more work to support this case and that one of the requirements is that you don't try to write to the database in this case.
Basically, verify should be used for exactly what it sounds like - to verify that something happened (or didn't happen). If the fact that it happened or didn't isn't really what you are trying to test, then skip it. Take the second example I made. One could argue that since the value wasn't changed, it doesn't really matter whether update was called or not. In that case, you can skip the verify step in the second example since the implementation of accelerate would be correct either way.
I hope it doesn't sound like I'm making a case for using verify a lot. It can make your tests very brittle. But it can also 'verify' that important things that were supposed to happen did happen.
My take on this is that every test case should contain EITHER
stubbing, plus one or more asserts, OR
one or more verifys,
but not both.
It seems to me that in most test classes, you end up with a mixture of "stub and assert" test cases and "verify" test cases. Whether a test case does a "stub and assert" or does a "verify" depends on whether the value returned by a collaborator is important to the test. I need two examples to illustrate this.
Suppose I have an Investment class, which has a value in dollars. Its constructor sets the initial value. It has an addGold method, which increases the value of an Investment by the amount of gold times the price of gold in dollars per ounce. I have a collaborator called PriceCalculator that calculates the price of gold. I might write a test like this.
public void addGoldIncreasesInvestmentValueByPriceTimesAmount() {
    PriceCalculator mockCalculator = mock(PriceCalculator.class);
    when(mockCalculator.getGoldPrice()).thenReturn(new BigDecimal(400));
    Investment toTest = new Investment(new BigDecimal(10000));
    toTest.addGold(5);
    assertEquals(new BigDecimal(12000), toTest.getValue());
}
In this case, the result from the collaborator method is important to the test. We stub it, because we're not testing the PriceCalculator at this point. There's no need to verify, because if the method hadn't been called, the final value of the investment value would not be correct. So all we need is the assert.
Now, suppose there's a requirement that the Investment class notifies the IRS whenever anyone withdraws more than $100000 from an Investment. It uses a collaborator called IrsNotifier to do this. So a test for this might look like this.
public void largeWithdrawalNotifiesIRS() {
    IrsNotifier mockNotifier = mock(IrsNotifier.class);
    Investment toTest = new Investment(new BigDecimal(200000));
    toTest.setIrsNotifier(mockNotifier);
    toTest.withdraw(150000);
    verify(mockNotifier).notifyIRS();
}
In this case, the test doesn't care about the return value from the collaborator method notifyIRS(). Or maybe it's void. What matters is just that the method got called. For a test like this, you'll use a verify. There may be stubbing in a test like this (to set up other collaborators, or return values from different methods), but it's unlikely that you'll ever want to stub the same method that you verify.
If you find yourself using both stubbing and verification on the same collaborator method, you should probably ask yourself why. What is the test really trying to prove? Does the return value matter to the test? Because this is usually a testing code smell.
Hope these examples are helpful to you.

TestNG: How to test for mandatory exceptions?

I'd like to write a TestNG test to make sure an exception is thrown under a specific condition, and fail the test if the exception is not thrown. Is there an easy way to do this without having to create an extra boolean variable?
A related blog post on this subject: http://konigsberg.blogspot.com/2007/11/testng-and-expectedexceptions-ive.html
@Test(expectedExceptions) is useful for the most common cases:
You expect a specific exception to be thrown
You need the message of that exception to contain specific words
Per the documentation, a test will fail if no expected exception is thrown:
The list of exceptions that a test method is expected to throw. If no exception or a different than one on this list is thrown, this test will be marked a failure.
Here are a few scenarios where @Test(expectedExceptions) is not sufficient:
Your test method has several statements and only one of them is expected to throw
You are throwing your own type of exception and you need to make sure it matches a certain criterion
In such cases, you should just revert to the traditional (pre-TestNG) pattern:
try {
    // your statement expected to throw
    fail();
} catch (<the expected exception>) {
    // pass
}
Use the @Test annotation to check for expected exceptions.
@Test(
    expectedExceptions = AnyClassThatExtendsException.class,
    expectedExceptionsMessageRegExp = "Exception message regexp"
)
Or, if you don't want to check the exception message, the following is enough:
@Test(expectedExceptions = AnyClassThatExtendsException.class)
That way, you don't need to use an ugly try-catch block; just invoke your exception-throwing method inside the test.
If you are using Java 7 and TestNG, the below example can be used. For Java 8 you can also use Lambda Expressions.
class A implements ThrowingRunnable {
    @Override
    public void run() throws AuthenticationFailedException {
        spy.processAuthenticationResponse(mockRequest, mockResponse, authenticationContext);
    }
}
assertThrows(AuthenticationFailedException.class, new A());
I have to disagree with the article on the nature of the testing techniques employed. The solution employs a gate, verifying whether the test should succeed or fail at an intermediate stage.
In my opinion, it is better to employ Guard Assertions, especially for such tests (assuming that the test does not turn out to be long-winded and complex, which is an anti-pattern in itself). Using guard-assertions forces you to design the SUT in either of the following ways:
design the method itself to provide enough information in the result on whether the invocation passed or succeeded. Sometimes, this cannot be done as the intention of the designer is to not return a result, and instead throw an exception (this can be handled in the second case).
design the SUT so that its state can be verified after each significant method invocation.
But before we consider the above possibilities, have a look at the following snippet again:
plane.bookAllSeats();
plane.bookPlane(createValidItinerary(), null);
If the intention is to test bookPlane() and verify for execution of that method, it is better to have bookAllSeats() in a fixture. In my understanding, invoking bookAllSeats() is equivalent to setting up the SUT to ensure that the invocation of bookPlane() fails, and hence having a fixture to do the same would make for a more readable test. If the intention are different, I would recommend testing the state after every transition (as I normally would do in functional tests), to help pinpoint the original cause of failure.
Why don't you use the try/fail/catch pattern mentioned in the blog post you linked to?
catch-exception provides probably everything you need to test for expected exceptions.
I created a custom Stack data structure which is backed by an array. The push() method throws a custom exception when the stack is full and you still try to push() data into the stack. You could handle it like this:
public class TestStackDataStructure {

    // All test methods use this variable.
    public Stack<String> stack; // This Stack class is NOT from Java.

    @BeforeMethod
    public void beforeMethod() {
        // Don't want to repeat this code inside each test, especially if we have several lines of setup.
        stack = new Stack<>(5);
    }

    @Test
    public void pushItemIntoAFullStack() {
        // I know this code won't throw exceptions, but what if we have some code that does?
        IntStream.rangeClosed(1, 5).mapToObj(i -> i + "").forEach(stack::push);
        try {
            stack.push("6");
            Assert.fail("Exception expected.");
        } catch (StackIsFullException ex) {
            // do nothing
        }
    }

    // Other tests here.
}
Alternatively, you could change your API as suggested here:
@Test
public void pushItemIntoAFullStack() {
    IntStream.rangeClosed(1, 5).mapToObj(i -> i + "").forEach(stack::push);
    Assert.assertFalse(stack.push("6"), "Expected push to fail.");
}
I updated the push method to return true or false if the operation passed or failed, instead of returning void. The Java Stack.push(item) returns the element you tried to insert instead of void. I don't know why. But, it also inherits a similar method addElement(item) from Vector which returns void.
One minor downside I see to making push(item) return a boolean or void is that you are stuck with those return types. If you return Stack instead then you can write code conveniently like this stack.push(1).push(2).push(3).pop(). But, I don't know how often one would have to write code like that.
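For what it's worth, the chaining style mentioned above would look like this with a hypothetical fluent wrapper (sketch only, built on java.util.ArrayDeque; not the Stack class from the question):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical fluent stack: push returns this so calls can be chained.
public class FluentStack<T> {
    private final Deque<T> items = new ArrayDeque<>();

    public FluentStack<T> push(T item) {
        items.push(item);   // most recent item ends up on top
        return this;        // enables stack.push(1).push(2).push(3)
    }

    public T pop() {
        return items.pop(); // removes and returns the top item
    }
}
```

With this return type, new FluentStack<Integer>().push(1).push(2).push(3).pop() yields 3, the last item pushed.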
Similarly, my pop() method used to return a generic type "T" and used to throw an exception if the stack was empty. I updated it to return Optional<T> instead.
@Test
public void popEmptyStack() {
    Assert.assertTrue(stack.pop().isEmpty());
}
I guess I am now free of the clunky try-catch blocks and TestNg expectedExceptions. Hopefully, my design is good now.
