One of the best practices for unit tests is to make each test independent of all the others. Let's say I want to test the add() method of a custom BoundedPriorityBlockingQueue class:
public void testAdd() {
    BoundedPriorityBlockingQueue q = new BoundedPriorityBlockingQueue();
    q.add(1);
    assertEquals(1, q.size());
}
As you can see, testAdd() currently uses the size() method, so it depends on it, but I don't want testAdd() to fail when size() is broken. What is the best practice in this situation?
Just suck it up, bearing in mind that tests are meant to serve you, not the other way round.
Will your tests break if something goes horribly wrong? Yes.
Will it be clear where the problem is? Probably, given that anything using size will fail.
Is this test driving you towards a less testable design? No.
Is this the simplest approach to testing add, which is robust in the face of changing implementation details? Probably. (I'd test that you can get the value out again, mind you.)
Yes, it's sort of testing two parts of the same class - but I really don't think that's a problem. I see a lot of dogma around testing ("only ever test the public API, always use AAA" etc) - in my experience you should temper that dogmatism with a healthy dose of pragmatism.
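For the "get the value out again" point, a minimal sketch might look like this (it assumes the queue exposes a peek() method; adjust to whatever retrieval method the class actually has):
@Test
public void testAddThenRetrieve() {
    BoundedPriorityBlockingQueue q = new BoundedPriorityBlockingQueue();
    q.add(1);
    assertEquals(1, q.peek()); // the element we put in is the element we get back
}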
The goal is to make all test methods independent of other test methods, and this method is independent. It will pass or fail based on the operation of the methods in the class under test, regardless of what you do in other test methods.
It's fine for this test to fail if another method from the class under test is broken. If size() is broken you'll have multiple test failures (this one and the one that explicitly tests size()), so it will be obvious where the problem is. If add() is broken, only this test will fail (along with the tests of any other methods that rely on add()).
As others have already said, if your size() method is broken, the test will fail anyway, so you have a reason to investigate and understand why that is happening.
Anyway, if you are still interested in having such independence between your tests, you could go for a white-box testing strategy: I guess that your BoundedPriorityBlockingQueue internally uses one of the java.util collections, an array, or a collection implementation from another provider (Guava, Apache Commons Collections, etc.) that you rely on, so you don't need to verify that those structures work as expected.
So, define that internal structure as protected, place your test class in a package with the same name and, instead of relying on the implementation of the size() method, go into the guts of the BoundedPriorityBlockingQueue:
BoundedPriorityBlockingQueue q = new BoundedPriorityBlockingQueue();
q.add(1);
assertEquals(1, q.contents.size()); // assuming the `contents` attribute is a collection
The main drawback is that now, if your internal implementation of the queue changes, you'll need to change the test, whereas with your previous test method you won't need to.
IMO I would choose your current implementation; it is less coupled and, in the end, meets its goal.
There's nothing wrong with doing such cross-testing - some methods tend to live in pairs (add/remove, enqueue/dequeue, etc) and it makes little sense to test one without its complementary part.
However, I would give a bit more thought to how the add() method will be used by your clients (class users). Most likely they won't call add() only to determine whether the size changed, but rather to later retrieve the added item. Perhaps your test should look more like this:
BoundedPriorityBlockingQueue q = new BoundedPriorityBlockingQueue();
Integer toAdd = 1;
q.add(toAdd);
Object added = q.dequeue(); // assuming dequeue() is the retrieval method
assertEquals(toAdd, added);
On top of that, you can also add a guard assert to the test above (to ensure the queue doesn't start with items already in it) or, even better, include a separate test that guarantees the initial state of the queue (size is 0, dequeue returns null or throws).
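A sketch of both ideas (it assumes dequeue() returns null when the queue is empty; adjust to the real contract of your class):
@Test
public void newQueueIsEmpty() {
    BoundedPriorityBlockingQueue q = new BoundedPriorityBlockingQueue();
    assertEquals(0, q.size());
    assertNull(q.dequeue()); // or expect an exception, depending on the contract
}

@Test
public void addedItemComesBackOut() {
    BoundedPriorityBlockingQueue q = new BoundedPriorityBlockingQueue();
    assertEquals(0, q.size()); // guard assert: fail fast if the queue is not empty to begin with
    Integer toAdd = 1;
    q.add(toAdd);
    assertEquals(toAdd, q.dequeue());
}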
I have a question regarding testing classes like a converter.
Let's say I have a converter from EntityA to EntityB. The converter looks like this:
public EntityB convert(EntityA a) {
    // call internal methods
    return b;
}

private xy internalMethod1(...) {
    // call other internal method
}

private xy internalMethod2(...) {
    ...
}

private xy internalMethod3(...) {
    ...
}

private xy internalMethod4(...) {
    ...
}
The converter has one public method and 4 internal methods to convert the entity.
How should I test it?
Option 1
I only test the public method and cover all cases of the internal methods with different example inputs (see the sketch below, after the disadvantages).
Advantages:
Tests only the "interface"; doesn't know the internal structure.
Internal refactoring is very easy and needs no changes to the tests.
Disadvantages:
Really big, possibly unclear tests that cover all cases.
Every input must pass through all the methods.
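A sketch of what I mean by Option 1 (the converter class name and the entity fields are invented for illustration; JUnit 4 shown):
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class EntityConverterTest {

    private final EntityConverter converter = new EntityConverter();

    @Test
    public void convertsEntityWithAllFieldsSet() {
        EntityA input = new EntityA();
        input.setName("Alice");                  // hypothetical field

        EntityB result = converter.convert(input);

        assertEquals("Alice", result.getName()); // hypothetical field
    }

    @Test
    public void convertsEntityWithMissingOptionalField() {
        EntityA input = new EntityA();           // name deliberately left unset

        EntityB result = converter.convert(input);

        assertEquals("", result.getName());      // assumed default; adjust to the real rule
    }
}
Each interesting input shape gets its own small, well-named test, so the suite stays readable even though every case goes through the single public convert() method.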
Option 2
I write tests for my public method and my private methods. (Some test frameworks, like PowerMock or Spock (Groovy), can access private methods.)
I test every method alone and mock every other internal method.
Advantages:
Really small tests that only test the method itself and mock all other methods.
Disadvantages:
I know how it is implemented internally and must change the tests if I refactor a method, rename it, or change the internal calling structure.
Option 3
I write some new classes that do the internal work and have public methods.
Advantages:
Tests are perhaps clearer, and each covers only its own specific class.
Disadvantages:
More classes for one conversion task.
Please tell me what the best practice is here.
Maybe some good links/hints.
Thank you for your time.
The points you make are valid, but I think you might not be estimating their weight correctly.
Writing brittle tests (tests that are coupled to the implementation code) makes for a rigid code base that is hard to change. Since the point of writing tests in the first place is to be able to go fast, this is counterproductive.
This is why you write your tests through the API only - it decouples the tests from the implementation. As you've said, this might make writing the tests a bit harder, but the reward is worth the effort since you'll get safety and be able to refactor easily.
Option 3 comes into play when you see a code smell where some tests cover only some of the code, and other tests only cover the other part of the code. This usually means there's a collaborator that maybe needs to be extracted. This is especially true when some internal functions only use some parameters and others don't. Also, when there's code duplication and the like.
What I would suggest is to write it the way you described in option 1, and then extract code out if needed, in the refactoring stage.
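As a rough illustration of that extraction step (all names invented): if the tests reveal that, say, address mapping is a responsibility of its own, it can become a small public collaborator with its own focused tests, while the converter simply delegates to it.
// Extracted from what used to be a private internal method of the converter.
public class AddressMapper {
    public Address map(EntityA source) {
        return new Address(source.getStreet(), source.getCity()); // hypothetical fields
    }
}
The converter then calls the mapper (ideally received through its constructor), and the big Option 1 tests shrink because the address cases are covered in the mapper's own tests.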
From the design perspective, I am wondering whether I should test the data, especially if it's generally known data (not something very configurable) - this can apply to things like popular file extensions, special IP addresses, etc.
Suppose we have an emergency phone number classifier:
public class ContactClassifier {
public final static String EMERGENCY_PHONE_NUMBER = "911";
public boolean isEmergencyNumber(String number) {
return number.equals(EMERGENCY_PHONE_NUMBER);
}
}
Should I test it this way ("911" duplication):
@Test
public void testClassifier() {
    assertTrue(contactClassifier.isEmergencyNumber("911"));
    assertFalse(contactClassifier.isEmergencyNumber("111-other-222"));
}
or (testing whether the "configured" number is properly recognized):
@Test
public void testClassifier() {
    assertTrue(contactClassifier.isEmergencyNumber(ContactClassifier.EMERGENCY_PHONE_NUMBER));
    assertFalse(contactClassifier.isEmergencyNumber("111-other-222"));
}
or inject "911" in the constructor,which looks the most reasonable for me, but even if I do so - should I wrote a test for the "application glue" if the component was instantiated with proper value? If someone can do a typo in data (code), then I see no reasons someone can do a typo in tests case (I bet such data would be copy-paste)
What is the point of testing data? That a constant value is in fact the constant value? It's already defined in the code; Java makes sure that the value is in fact that value, so don't bother.
What you should do in a unit test is test whether the implementation is correct or not. To test incorrect behaviour, you use data defined inside the test, marked as wrong, and send it to the method. To test correct behaviour, you supply the data during the test: either border values that are not well known, or application-wide known values (constants inside interfaces) if they're defined somewhere already.
What is bothering you is that the data (which should be well known to everyone) is placed in the test, and that doesn't feel right. What you can do is move it to the interface level. This way, by design, your application's known data is part of the contract, and its correctness is checked by the Java compiler.
Values that are well known should not be checked but should be maintained through interfaces of some sort. Changing such a value is easy, yes, and your test will not fail during that change, but to avoid accidents you should have merge requests, reviews, and the tasks associated with them. If someone does change it by accident, you can catch that at code review. If you commit everything straight to master, you have bigger problems than a doubly defined constant.
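One possible shape of that "interface level" idea (the interface name is made up here):
// Single application-wide definition; both ContactClassifier and the tests refer to it.
public interface EmergencyNumbers {
    String EMERGENCY_PHONE_NUMBER = "911";
}
ContactClassifier then uses EmergencyNumbers.EMERGENCY_PHONE_NUMBER instead of declaring its own constant, and the tests reference the same contract.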
Now, on to the parts that are bothering you in the other approaches:
1) "If someone can make a typo in the data (code), then I see no reason someone couldn't make a typo in the test case (I bet such data would be copy-pasted)."
Actually, if someone changes values in the data and then continues to develop, at some point they will run a clean install and see the failed tests. At that point they will probably change or ignore the test to make it pass. If you have a person who changes data that randomly, you have bigger issues; and if not, and the change is defined by a task, you have made someone do the change twice (at least). No pros and many cons.
2) Worrying about someone making a mistake is generally bad practice. You can't catch it using code. Code reviews are designed for that. You can worry though about someone not correctly using the interface you defined.
3) Should I test it this way:
@Test
public void testClassifier() {
    assertTrue(contactClassifier.isEmergencyNumber(ContactClassifier.EMERGENCY_PHONE_NUMBER));
    assertFalse(contactClassifier.isEmergencyNumber("111-other-222"));
}
Also, not this way. This is not a test but a test batch, i.e. multiple tests in the same method. It should be like this (following naming conventions):
@Test
public void testClassifier_emergencyNumberSupplied_correctnessConfirmed() {
    assertTrue(contactClassifier.isEmergencyNumber(ContactClassifier.EMERGENCY_PHONE_NUMBER));
}

@Test
public void testClassifier_incorrectValueSupplied_correctnessNotConfirmed() {
    assertFalse(contactClassifier.isEmergencyNumber("111-other-222"));
}
4) It's not necessary when the method is properly named, but if it's long enough you might consider naming the values inside the test. For example:
@Test
public void testClassifier_incorrectValueSupplied_correctnessNotConfirmed() {
    String nonEmergencyNumber = "111-other-222";
    assertFalse(contactClassifier.isEmergencyNumber(nonEmergencyNumber));
}
External constants as such have a problem: the import disappears and the constant's value is copied into the class's constant pool. Hence, when the constant is later changed in the original class, the compiler does not see a dependency between the .class files and leaves the old constant value in the test class.
So you would need a clean build.
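A hypothetical two-file illustration of that inlining (class names reused from the question):
// ContactClassifier.java
public class ContactClassifier {
    public static final String EMERGENCY_PHONE_NUMBER = "911"; // compile-time constant
}

// ContactClassifierTest.java
public class ContactClassifierTest {
    // javac copies "911" into this class's own constant pool at compile time.
    // If EMERGENCY_PHONE_NUMBER is later changed and only ContactClassifier is recompiled,
    // this class still carries the old literal until it is rebuilt as well.
    private static final String EXPECTED = ContactClassifier.EMERGENCY_PHONE_NUMBER;
}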
Furthermore, tests should be short, clear to read, and fast to write. Tests deal with concrete cases of data. Abstractions are counter-productive and may even lead to errors in the tests themselves. Constants (like a speed limit) should be etched in stone, i.e. literals. Value properties, like the maximum velocity of a car brand, can come from some kind of table lookup.
Of course, repeated values could be placed in local constants. That prevents typos, is an easy (because local) abstraction, and clarifies the semantic meaning of a value.
However, as a test case will generally use a constant only two or three times (positive and negative tests), I would go for bare literals.
In my opinion the test should check behaviour and not the internal implementation.
The fact that isEmergencyNumber checks the number against a constant declared in the class you're trying to test is verification against the internal implementation. You shouldn't rely on it in the test, because it is not safe.
Let me give you some examples:
Example #1: Someone changed EMERGENCY_PHONE_NUMBER by mistake and didn't notice. The second test will never catch it.
Example #2: Suppose ContactClassifier is changed by a careless developer to the following code. Of course this is a complete edge case and will most likely never happen in practice, but it helps to illustrate what I mean.
public final static String EMERGENCY_PHONE_NUMBER = new String("911");
public boolean isEmergencyNumber(String number) {
return number == EMERGENCY_PHONE_NUMBER;
}
In this case your second test will not fail, because it relies on the internal implementation, but your first test, which checks real-world behaviour, will catch the problem.
Writing a unit test serves an important purpose: you specify rules to be followed by the method being tested.
So, when the method breaks that rule i.e. the behavior changes, the test would fail.
I suggest, write in human language, what you want the rule to be, and then accordingly write it in computer language.
Let me elaborate.
Option 1: When I ask the ContactClassifier.isEmergencyNumber method, "Is the string "911" an emergency number?", it should say yes.
Translates to
assertTrue(contactClassifier.isEmergencyNumber("911"));
What this means is that you want to control and test what number is specified by the constant ContactClassifier.EMERGENCY_PHONE_NUMBER: its value should be "911", and the method isEmergencyNumber(String number) should do its logic against this "911" string.
Option 2: When I ask the ContactClassifier.isEmergencyNumber method, "Is the string specified in ContactClassifier.EMERGENCY_PHONE_NUMBER an emergency number?", it should say yes.
It translates to
assertTrue(contactClassifier.isEmergencyNumber("911"));
What this means is you don't care what string is specified by the constant ContactClassifier.EMERGENCY_PHONE_NUMBER. Just that the method isEmergencyNumber(String number) does its logic against that string.
So, the answer would depend on which one of above behaviors you want to ensure.
I'd opt for
@Test
public void testClassifier() {
    assertTrue(contactClassifier.isEmergencyNumber("911"));
    assertFalse(contactClassifier.isEmergencyNumber("111-other-222"));
}
as this doesn't test against something from the class under test that might be faulty. Testing with
@Test
public void testClassifier() {
    assertTrue(contactClassifier.isEmergencyNumber(ContactClassifier.EMERGENCY_PHONE_NUMBER));
    assertFalse(contactClassifier.isEmergencyNumber("111-other-222"));
}
will never catch if someone introduces a typo into ContactClassifier.EMERGENCY_PHONE_NUMBER.
In my opinion it is not necessary to test this logic. The reason is that this logic is trivial.
We can test every line of our code, but I don't think it is a good idea to do that. Take getters and setters, for example: if we follow the theory of testing every line of code, we have to write a test for each getter and setter. But those tests have low value and cost time to write and maintain. That is not a good investment.
I have always had doubts about whether unit test isolation should be done at a very fine grain, like stubbing internal method calls. Look at the following example with Java, JUnit and Mockito.
package com.company.domain;
public class Foo {
private FooHelper helper;
public void setHelper(FooHelper helper) {
this.helper = helper;
}
public Double method1(Integer a, Integer b) {
... // logic here
Integer var = method2(a); // internal call to method2
... // more logic here
}
protected Integer method2(Integer c) {
... // logic here and return an Integer
}
}
This is the test class:
package com.company.domain;
@RunWith(MockitoJUnitRunner.class)
public class FooTest {
@Mock private FooHelper fooHelperMock; // mocking dependencies as usual
@InjectMocks @Spy Foo foo; // this way we can stub internal calls
@Test
public void method1Test() {
doReturn(2).when(foo).method2(1); //stub the internal method call
... // stub fooHelperMock method calls here
Double result = foo.method1(1, 2);
assertEquals(new Double(1.54), result);
verify(foo, times(1)).method2(1);
... // verify fooHelperMock method calls here
}
@Test
public void method2Test() {
... // stub fooHelper method calls here
assertEquals(new Integer(5), foo.method2(3));
... // verify fooHelperMock method calls here
}
}
I think this way you are able to really isolate the code under test at the method level, even with internal calls. But I cannot find much information about whether this is considered good practice or not, and why.
In general, testing a class's internal function calls makes your tests far too fragile. When unit testing, you only want to see that the class as a whole is doing what it is supposed to do. You are supposed to intentionally ignore the internals.
Ignoring the internals has a lot of useful benefits. Mainly, if you change the methods -- as long as the public interface stays the same -- your tests will all still pass. Yay. This is the type of stability you want your unit tests to have. Fragile unit tests are almost worse than no unit tests. A fragile test has to be changed every time the code changes even if everything is still working. That provides you with nothing but extra work.
There are exceptions though. I would consider mocking out internal function calls in the following cases:
1) The internal call pulls data from somewhere, for example a database or a large file. This is often called a 'boundary' test. We don't want to actually set up a database or file for the test, so instead we just stub out the function. However, if you find yourself doing this, it probably means that your class should be split into two classes, because it is doing too many different things. Make yourself a dedicated database class instead (see the sketch after this list).
2) The internal call does heavy processing that takes a lot of time. In that case, it might make sense to stub out the function instead of waiting. However, once again this might be a sign that you're on the wrong track. You should try to find a way to partition things so that no single step takes that long.
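A sketch of case 1 with invented names, using Mockito for the substitute (everything is collapsed into one file just to keep the sketch self-contained):
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.Test;

public class CustomerServiceTest {

    // The data access was extracted into its own collaborator...
    interface CustomerRepository {
        String findNameById(long id);
    }

    // ...so the class under test only orchestrates, and the test can substitute the boundary.
    static class CustomerService {
        private final CustomerRepository repository;

        CustomerService(CustomerRepository repository) {
            this.repository = repository;
        }

        String greetingFor(long id) {
            return "Hello, " + repository.findNameById(id);
        }
    }

    @Test
    public void greetsCustomerByName() {
        CustomerRepository repository = mock(CustomerRepository.class);
        when(repository.findNameById(42L)).thenReturn("Ada");

        assertEquals("Hello, Ada", new CustomerService(repository).greetingFor(42L));
    }
}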
In conclusion, I would highly recommend you do not test at the method level. A class should be a distinct unit. If the class is working, everything is fine. This approach gives you the testing you need to be secure and the flexibility you need to change things.
@raspacorp
Indeed, if you only test one case, someone could hardcode var = 2 and pass your test case. But it is your job when designing test cases to cover enough different cases to be reasonably satisfied that the code is behaving appropriately.
As far as tests changing because the code has changed, this is not what you want at all. You want your tests to verify that the responses you are getting are correct for all the different types of cases. As long as the responses are correct, you want to be able to change everything without changing the tests at all.
Imagine a large system with thousands of tests verifying that everything is working right. Now imagine that you want to make a HUGE optimization change that will make things go faster, but change how everything is stored and calculated. What you want your tests to allow you to do is change only the code (not the tests) and have the tests continually confirm that everything is still working. This is what unit tests are for: not catching every possible line change, but verifying that all the calculations and behaviors are correct.
This article from Martin Fowler answers your question.
You're a mockist, as am I, but there are many people who don't like this approach and prefer the classicist approach (e.g. Kent Beck).
Roy Osherove says that the goal of a test, when it fails, is to identify the production code causing the problem; with that in mind, it is obvious that the more fine-grained the test, the better.
For sure, being a mockist may be overwhelming for beginners, but once you're used to the mechanics of a mocking library, the benefits are countless while the effort is not much higher than with the other approach.
I do not think it is necessary, and I would even go further and say you should not do this. You are writing more test code to test less of your application code. In general it is considered good practice to only test public methods, because a unit test should only be concerned with whether a method under test satisfies its contract (i.e. does what it is supposed to do), not with how the method is implemented. If you stub out a private method and it has a bug, how will you know?
I've been writing code that processes certain fields of an object by modifying their values. To test it, I first wrote a JUnit test case that recursively traverses the fields of an object and makes sure they're correctly modified. The CUT (Class Under Test) does something very similar: it recursively traverses the fields of an object and modifies them as required.
So the code to recursively traverse the fields is the same in the test case and the CUT, and is currently duplicated, which is against DRY. So I have two questions:
1) Have you come across such situations in your projects? If yes, did you apply DRY, or let the duplication remain as is?
2) If I put this common code in a util method, I will need to write a test case for that, which would again involve traversing fields recursively. So how can this be solved without adding any duplication?
You have just hit the ugly mirror-testing anti-pattern. If your CUT has a bug, most likely you will copy it into your test case, essentially verifying that the bug is still there.
You would have to show us some more code, but basically your test case should be much simpler: no for loops, no conditions - just assertions. If your production code does some fancy traversal, reflection, etc. on complicated data structures, create a test Java object and check every field manually in the unit test.
Use the visitor pattern to abstract the traversal of the tree, and then build visitors both in the test case and in your production code. And test the visitor infrastructure separately.
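A minimal sketch of that idea (names are invented): the traversal lives in one place, production code plugs in a modifying visitor, the test plugs in an asserting one, and the walker itself gets its own dedicated test against a small hand-built object.
import java.lang.reflect.Field;

public final class FieldWalker {

    public interface FieldVisitor {
        void visit(Object owner, Field field) throws IllegalAccessException;
    }

    // Single shared traversal; callers decide what to do with each field.
    public static void walk(Object target, FieldVisitor visitor) throws IllegalAccessException {
        for (Field field : target.getClass().getDeclaredFields()) {
            field.setAccessible(true);
            visitor.visit(target, field);
            // recurse into nested objects here if the real code needs it
        }
    }
}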
I'm testing a function that takes several parameters and, on the basis of their values, calls different private methods.
I want to check that the function always calls the right private method.
Since I know what the private methods do, I can check the final result, but it would be more convenient to check directly that the right method was called, because I have already tested the private methods.
Is there a way to replace a private method with a stub?
Yes, there are mocking libraries that let you do this. One is PowerMock. From their private method tutorial, you need something like this:
@RunWith(PowerMockRunner.class)
@PrepareForTest(MyUnit.class)
public class TestMyUnit {
@Test
public void testSomething() {
MyUnit unit = PowerMock.createPartialMock(MyUnit.class, "methodNameToStub");
PowerMock.expectPrivate(unit, "methodNameToStub", param1).andReturn(retVal);
EasyMock.replay(unit);
unit.publicMethod(param1);
EasyMock.verify(unit);
}
}
However, I really disagree with this practice myself. Your unit test should test inputs, outputs, and side effects, and that's it. By ensuring that a private method is called correctly, all you're doing is preventing your code from being easily refactored.
In other words, what if down the road you want to change how your unit does its job? The safe way to do this is to make sure the code is under (passing) tests, then refactor the code (potentially including changing which internal methods are called), and then run the tests again to make sure you didn't break anything. With your approach, this is impossible because your tests test the exact implementation, not the behaviour of the unit itself. Refactoring will almost always break the test, so how much benefit is the test really giving you?
Most often you would want to do this because you're actually considering those privates a unit unto themselves (this sounds like you, since you say you are testing those private methods directly already!). If that's the case, it's best to extract that logic into its own class, test it, and then have the remaining code interact with a mock/stub version of that new unit. If you do that, your code has a better structure and you don't need to fall back on the voodoo magic that is PowerMock. A fantastic reference for these kinds of refactorings is Michael Feathers' Working Effectively with Legacy Code.
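A rough sketch of that refactoring (names invented): the former private methods become a small public collaborator, the original class receives it through its constructor, and the original class's tests can hand in a stub of it.
// Was a cluster of private methods inside OrderService.
public class PriceCalculator {
    public double netPrice(double grossPrice) {
        return grossPrice / 1.2; // illustrative rule only
    }
}

public class OrderService {
    private final PriceCalculator calculator;

    public OrderService(PriceCalculator calculator) {
        this.calculator = calculator;
    }

    public String describe(double grossPrice) {
        return "net: " + calculator.netPrice(grossPrice);
    }
}
PriceCalculator gets its own direct tests, while the OrderService tests stub it with plain Mockito (when(calculator.netPrice(12.0)).thenReturn(10.0)) and need no PowerMock machinery.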
You could look into Java instrumentation to do this.
One possible solution is to use a proxy via inner classes: you add an inner class to every class that must be tested.
But this is not a very good solution for a big production project; it requires an additional build script to remove the generated classes from your release files (jar/war).
An easier way is to use PowerMock, as described in the other answers - http://code.google.com/p/powermock/wiki/MockPrivate
Would it be possible to provide the class in question with another object, to which the private methods are moved and made public? In that case, it would be easy to create a test dummy for that interface.
If calling the right "private method" has no observable outside result, are you sure you want to test this? Maybe you shouldn't.
If the end result is the same regardless of whether the private method gets called, and you still want to observe its invocation, you could make the method public and move it to its own class, and mock that class. Then you could verify (using Mockito or a similar framework) whether your method is being called.
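A short sketch of that verification with Mockito (Dispatcher and Handler are invented names):
@Test
public void routesNegativeInputToTheHandler() {
    Handler handler = mock(Handler.class);
    Dispatcher dispatcher = new Dispatcher(handler);

    dispatcher.process(-1);

    verify(handler).handleNegative(-1); // the formerly private step is now an observable interaction
}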
Code coverage tools do this kind of thing by re-writing the bytecode before the tests are actually run. So, it's got to be possible, but it's non-trivial.
Update: writing a unit test that requires that the "right" private method be called kind of makes the job of refactoring a real pain because then you have to re-write all your tests. That kind of defeats the purpose of the tests.