I have an orchestrator class with one method that implements a feature by calling several other methods:
public class Orchestrator {
public void doImportantStuff(){
firstDoThis();
thenDoThis();
finallyDoThis();
}
private void firstDoThis(){
...
}
private void thenDoThis(){
...
}
private void finallyDoThis(){
...
}
}
My question: I have lots of cases to test for each of the methods called from doImportantStuff(), so I'm planning to write a separate test class per method. Is that something I should avoid, or does it sound reasonable?
This actually depends on which kind of tests you are about to write. Unit tests exercise each method in isolation, while an integration test exercises them all together.
Assuming the methods contain some business logic, I'd recommend writing a set of unit tests for each method (perhaps moving them into separate classes to comply with the Single Responsibility Principle).
Once you are sure all the methods work fine, it is time to write an integration test and check how the logic integrates (e.g. the particular call order). By the way, mocking tools can be useful here, as you are not going to test each method again; a sketch follows.
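For instance, a minimal sketch of that integration-style test, assuming the three steps are extracted into a hypothetical Steps collaborator that Orchestrator receives in its constructor, using Mockito's InOrder to check the call order:

import org.junit.Test;
import org.mockito.InOrder;
import static org.mockito.Mockito.*;

public class OrchestratorTest {

    @Test
    public void callsStepsInOrder() {
        // Steps is a hypothetical collaborator holding the three extracted methods
        Steps steps = mock(Steps.class);

        new Orchestrator(steps).doImportantStuff();

        // verify the orchestration itself, i.e. the particular call order
        InOrder inOrder = inOrder(steps);
        inOrder.verify(steps).firstDoThis();
        inOrder.verify(steps).thenDoThis();
        inOrder.verify(steps).finallyDoThis();
    }
}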
Related
Why can't JUnit test methods return a value?
The documentation says (emphasis mine):
Test methods and lifecycle methods may be declared locally within the current test class, inherited from superclasses, or inherited from interfaces (see Test Interfaces and Default Methods). In addition, test methods and lifecycle methods must not be abstract and must not return a value.
Why is it enforced like this?
Is this purely a design choice? Is it a best practice?
One thing I haven't yet seen mentioned is that one of the reasons behind this design decision is historical.
Before test cases were marked by annotations, JUnit discovered them via reflection: every public void method whose name started with test and that took no arguments was considered a test case and run by the framework.
Had such methods also been allowed to return a value, you could not have created a public helper method whose name started with test and that calculated a value commonly used in some tests: the framework runner would have tried to run it as a test.
You should have made such helpers private anyway, I guess.
Additionally, this was the heyday of "a method name that starts with a verb should do something" (rather than calculate something).
What is the reasoning behind this?
Think of each test method as being the logical analog of a public static void main(String[]) method in a classic Java application. There needs to be a well known method signature so that the JUnit framework is able to identify the test methods, and call them in a standard way.
As to the specific design points you are querying:
The reason that the methods cannot be abstract is that they need to be callable. You can't call an abstract method.
The reason that methods cannot return values is that the test framework would not be able to do anything useful with a returned value. (It would not be impossible to discard any result, but what is the point of a test result that can't be used?)
I was thinking maybe we could reuse one test method within another.
If you wanted to reuse code in unit tests, you would just use ordinary methods and other OO features of Java, e.g. base classes, helper classes, etcetera. There are few situations where you would want a method to be both a free-standing test and a "component" (for want of a better word) of another test method. And if you did, you could write something like this:
public void test_xxx() {
real_test_xxx();
}
public int real_test_xxx() {
...
}
public void test_yyy() {
...
int res = real_test_xxx();
...
}
Is this purely a design choice? Is it a best practice?
There are no best practices!
These are design choices that are guided by the general requirements and practicalities of unit testing [1]. In particular, the requirement that unit test authors should NOT need to build configurations to tell the test framework what to do.
Note that TestNG takes a similar approach: if a TestNG @Test method is declared as returning a value, it is not treated as a test method; see Can a test method return a value in TestNG.
Either way, unless you are proposing to implement yet another Java test framework, this is all moot.
[1] If there were strong practical reasons for needing abstract test methods or test methods that returned values, someone would have done something about it. The masses (of unit test writers) would rise up and overthrow the tyrannical regime (of test framework developers). (Reprise of "Won't Get Fooled Again" by The Who.)
Is this purely a design choice?
Yes. In theory, nothing prevents the JUnit runner (the component that runs the tests of a test class) from accepting a test method with a return value as valid.
Is it a best practice?
Indeed. The more you restrict the ways of specifying a unit test method, the more you ensure that no misuse happens.
For example, look at this simple example that supposes a test method could return a value.
An addFoo() test method that asserts that the method under test does what is expected and that also returns the Foo fixture it created:
@Test
public Foo addFoo(){
Foo foo = new Foo(...);
// call the api ...
sut.addFoo(foo);
// assertion
assertEquals(...);
// return the foo fixture
return foo;
}
And here is an addFooBar() test method that retrieves the Foo fixture created by the previous method and then asserts that addFooBar() does what is expected:
@Test
public void addFooBar(){
Foo foo = addFoo();
Bar bar = new Bar(..., foo);
// call the api ...
sut.addFooBar(bar);
// assertion
assertEquals(...);
}
That raises multiple issues:
Should the addFoo() method be executed multiple times? That may hurt test execution speed and make test reports unclear.
The addFoo() test result may make the addFooBar() test fail in spite of a correct implementation.
How would you diagnose the origin of the issue? We have added useless complexity.
The addFoo() test result may also make the addFooBar() test succeed in spite of an incorrect implementation: an incorrect implementation in addFoo() may compensate for an incorrect implementation in addFooBar().
Test methods should be executed independently
If the tests cannot run independently, then they are not unit tests.
A unit test should not rely on any external state
If a test method returned a value, it would imply that test results are meant to be used by, and be relevant to, other tests.
Same best practice in TestNG
TestNG follows the same best practice: a unit test method should not have a return value.
You can write private methods and reuse them instead, like this:
@Test
public void test1() {
    privateTest();
}

@Test
public void test2() {
    privateTest();
    // more logic
}

private int privateTest() {
    return 0;
}
Currently, in production code, I have a function like this:
public void doWork() {
... call functionA
... call functionB
... do other work
}
I have to test a case where I need to pause after functionA is called in doWork(), and there is no way for me to pause through the testing framework,
so I've changed the production code to be

public void doWork() {
    doWork(new CountDownLatch(0));
}

public void doWork(CountDownLatch latch) {
    ... call functionA with the latch; functionA calls latch.await()
    ... call functionB
    ... do other work
}
Now I can create a test case and exercise it with doWork(new CountDownLatch(1)).
But in production it will always be calling doWork(), which in turn calls doWork(new CountDownLatch(0)).
Is this unnecessary overhead just to make the code testable, or is it acceptable?
Modifying your code in order to make it testable is completely valid. A test is just another client of your code and as such provides feedback on the usability of the code.
On the other hand, for the feedback to be constructive, the test has to abide by some rules, such as testing the behavior of the code rather than its internals.
Now, for the actual testing you have several options.
The most straightforward would be to use a test double for the dependencies. You mentioned that functionA is final and so cannot be mocked. It can: while not an ideal solution, both Mockito 2+ and PowerMock support mocking of final classes and methods.
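For reference, with Mockito 2.x this is opt-in: you enable the inline mock maker by adding a file src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker containing the single line mock-maker-inline.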
A cleaner way would be to listen to your tests. Your tests are telling you that your code has design problems; maybe you could fix those. You could, for example, try to separate the threading from the execution logic so that the logic can be tested on its own, or introduce an interface to make the dependencies mockable, as sketched below.
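A rough sketch of that last idea, assuming functionA lives on some dependency (all names here are hypothetical):

// Hide the dependency behind an interface so tests can substitute
// a controllable implementation.
public interface StepA {
    void functionA();
}

public class Worker {

    private final StepA stepA;

    public Worker(StepA stepA) {
        this.stepA = stepA;
    }

    public void doWork() {
        stepA.functionA(); // a test double here can pause, record, or fail on demand
        // ... call functionB
        // ... do other work
    }
}

In the test you pass in a stub StepA that blocks however you like, and the production code path no longer needs a CountDownLatch parameter.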
I'm working on a method that can be considered a specialization of another already defined and tested method. Here's an example to illustrate:
public class ProductService {
public void addProduct(Product product) {
//do something
}
public void addSpecialProduct(Product product) {
addProduct(product);
//do something after
}
}
I don't want to copy the tests I have for addProduct, which are already pretty complex. What I want is, when I define the tests for addSpecialProduct, to just make sure that it also calls addProduct in the process. If this were a matter of two classes collaborating, it would be easy to mock the collaborator and just verify that the target method gets called (and stub it if necessary). However, the two methods belong to the same class.
What I'm thinking right now is to spy on the object I'm testing, something like:
public void testAddSpecialProduct() {
//set up code
ProductService service = spy(new DefaultProductService());
service.addSpecialProduct(specialProduct);
verify(service).addProduct(specialProduct);
//more tests
}
However, I'm wondering whether this approach somehow defeats the purpose of unit testing. What's the general consensus on this matter?
I think it depends on how rigorous you want to be with your unit testing. In the strictest sense, unit testing should only test behavior, not implementation. This would mean you would need to duplicate your tests (or take @chrylis's suggestion of abstracting common functionality out into helpers). Ensuring that the other method is called is testing the implementation.
However in reality, I think your idea of spying on the instance to ensure the other well-tested method is called is a good idea. Here are my reasons:
1) It simplifies your code.
2) It becomes immediately clear to me what is happening. It says to me that everything that has been tested for the other method will now also be true, and here are the extra assertions.
One day you may modify the implementation so that the other method is not called, and this will cause your unit tests to fail, which is what people are generally trying to avoid by not testing the implementation. But in my experience, changes like this are much more common when the behavior is going to change anyway.
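If you later decide you want the behaviour-only variant, a rough sketch of the helper approach mentioned above (the helper name and assertions are hypothetical):

// Shared assertions extracted from the existing addProduct tests.
private void assertProductWasAdded(ProductService service, Product product) {
    // the assertions you already wrote for addProduct go here
}

@Test
public void testAddSpecialProduct() {
    ProductService service = new DefaultProductService();
    service.addSpecialProduct(specialProduct);
    assertProductWasAdded(service, specialProduct); // reuse assertions instead of verify()
    // ... plus the assertions specific to special products
}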
You may consider refactoring your code. Use the strategy pattern to actually implement the functionality for adding products and special products.
public class ProductService {

    @Resource
    private AddProductStrategy normalAddProductStrategy;

    @Resource
    private AddProductStrategy addSpecialProductStrategy;

    public void addProduct(Product product) {
        normalAddProductStrategy.addProduct(product);
    }

    public void addSpecialProduct(Product product) {
        addSpecialProductStrategy.addProduct(product);
    }
}
There will be two implementations of AddProductStrategy. One does what your original ProductService.addProduct implementation did. The second delegates to the first and then does the additional work required; therefore you can test each strategy separately. The second strategy implementation is then just a decorator for the first, as sketched below.
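A sketch of those two implementations, assuming AddProductStrategy is an interface with a single addProduct method (names hypothetical):

public class NormalAddProductStrategy implements AddProductStrategy {

    @Override
    public void addProduct(Product product) {
        // what the original ProductService.addProduct implementation did
    }
}

// Decorator: reuses the already-tested normal behaviour, then adds the extra work.
public class SpecialAddProductStrategy implements AddProductStrategy {

    private final AddProductStrategy delegate;

    public SpecialAddProductStrategy(AddProductStrategy delegate) {
        this.delegate = delegate;
    }

    @Override
    public void addProduct(Product product) {
        delegate.addProduct(product);
        // ... do the additional special-product work
    }
}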
I have a class that I want to test using mockito. The best way to describe the class is to paste the code, but I will try and do my best in a short phrase.
The class has one void function and calls another object that is passed in via setter and getter methods. The object being called (from the void function) makes an asynchronous call.
The problem I am facing is mocking the asynchronous call that the void function (tested via JUnit) uses.
public class Tester {

    private Auth auth; // not mock'ed or spy'ed
    @Mock private Http transport;

    @Before
    ....

    @Test
    public void testVoidFunctionFromAuth() {
        doAnswer(new Answer<Object>() {
            @Override
            public Object answer(InvocationOnMock invocation) throws Throwable {
                return doOutput();
            }
        }).when(transport).executeAsync(param1, param2, param3...);

        auth.obtainAuth(); // void function that uses the transport mock;
                           // obtainAuth calls transport.executeAsync()
                           // as part of the code
    }

    // return type of transport.executeAsync() is
    // ListenableFuture<ResponseEntity<String>>
    private ListenableFuture<ResponseEntity<String>> doOutput() {
        return new SimpleAsyncTaskExecutor()
                .submitListenable(new Callable<ResponseEntity<String>>() {
                    @Override
                    public ResponseEntity<String> call() throws Exception {
                        ....
                        return responseEntity;
                    }
                });
    }
}
What happens is that the doOutput() function gets called before auth.obtainAuth(), and when obtainAuth() tries to call doOutput() it returns null, most likely because doOutput() was already executed on the line before. I am not sure how to bind/inject the mocked class (transport) for the executeAsync call.
I'm not sure if I understood the question, but as chrylis pointed out, the mock object returns a value instantly.
Unit tests should have their own context and not depend on external resources, so there is no point in testing the async call itself. The mock should simply return different values so you are able to test the behaviour of the classes that use it.
To have a better understanding of mock definition take a look at this post: What is Mocking?
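For example, a sketch against the code in the question: stub the mock to hand back an already-completed future, so obtainAuth() gets its response instantly. AsyncResult is Spring's immediately-resolved ListenableFuture implementation; the executeAsync parameters and the response body are placeholders:

@Test
public void testVoidFunctionFromAuth() {
    ResponseEntity<String> canned = new ResponseEntity<>("body", HttpStatus.OK);
    when(transport.executeAsync(param1, param2, param3))
            .thenReturn(new AsyncResult<>(canned));

    auth.obtainAuth(); // receives the canned response without any real async work
    // ... assert on whatever state obtainAuth() derived from the response
}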
Quoting from Pro Spring MVC with Web Flow, a unit test should:

• Run fast: A unit test must run extremely fast. If it needs to wait for database connections or external server processes, or to parse large files, its usefulness will quickly become limited. A test should provide an immediate response and instant gratification.

• Have zero external configuration: A unit test must not require any external configuration files, not even simple text files. The test's configurations must be provided and set by the test framework itself by calling code. The intent is to minimize both the runtime of the test and to eliminate external dependencies (which can change over time, becoming out of sync with the test). Test case conditions should be expressed in the test framework, creating more readable test conditions.

• Run independent of other tests: A unit test must be able to run in complete isolation. In other words, the unit test can't depend on some other test running before or after itself. Each test is a stand-alone unit. In fact, every test method inside a test should be stand-alone and not depend on another method or on the test methods being run in a certain order.

• Depend on zero external resources: A unit test must not depend on any outside resources, such as database connections or web services. Not only will these resources slow the test down, but they are outside the control of the test and thus aren't guaranteed to be in a correct state for testing.

• Leave external state untouched: A unit test must not leave any evidence that it ever ran. Unit tests are written to be repeatable, so they must clean up after themselves. Obviously, this is much easier when the test doesn't rely on external resources (which are often harder to clean up or restore).

• Test smallest unit of code possible: A unit test must test the smallest unit of code possible in order to isolate the code under test. In object-oriented programming, this unit is usually a method of an object or class. Writing unit tests such that a method is tested independently of other methods reduces the number of code lines that could contain a potential bug.
I find myself writing lots and lots of boilerplate tests these days, and I want to optimize away many of these basic tests in a clean way that can be added to all the current test classes without much hassle.
Here is a basic test class:
class MyClassTest {
    @Test
    public void doesWhatItDoes() {
        assertEquals("foo", new MyClass("bar").get());
    }
}
Let's say MyClass implements Serializable; then it stands to reason we want to ensure that it really is serializable. So I built a class which you can extend that contains a battery of standard tests which run alongside the other tests.
My problem is that if MyClass does NOT implement Serializable, for instance, we still have a serialization test in the class. We can make it simply succeed for non-serializable classes, but it still shows up in the test list, and once this class starts to grow it will get more and more cluttered.
What I want is a way to dynamically add the relevant tests to already existing test classes where appropriate. I know some of this can be done with a TestSuite, but then you have to maintain two test classes per class, and that will quickly become a hassle.
If anyone knows of a way to do this which doesn't require an Eclipse plug-in or something like that, I'd be forever grateful.
EDIT: Added a brief sample of what I described above:
class MyClassTest extends AutoTest<MyClass> {

    public MyClassTest() {
        super(MyClass.class);
    }

    @Test
    public void doesWhatItDoes() {
        assertEquals("foo", new MyClass("bar").get());
    }
}
public abstract class AutoTest<T> {

    private final Class<T> clazz;

    protected AutoTest(Class<T> clazz) {
        super();
        this.clazz = clazz;
    }

    @Test
    public void serializes() {
        if (Arrays.asList(clazz.getInterfaces()).contains(Serializable.class)) {
            /* Serialize and deserialize and check equals, hashcode and other things... */
        }
    }
}
Two ideas.
Idea 1:
Use Assume
A set of methods useful for stating assumptions about the conditions in which a test is meaningful. A failed assumption does not mean the code is broken, but that the test provides no useful information. The default JUnit runner treats tests with failing assumptions as ignored.
@Test
public void serializes() {
    assumeTrue(Serializable.class.isAssignableFrom(clazz));
    /* Serialize and deserialize and check equals, hashcode and other things... */
}
Idea 2: implement your own test runner.
Have a look at @RunWith and Runner at http://junit.sourceforge.net/javadoc/
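A minimal sketch of Idea 2 with JUnit 4's BlockJUnit4ClassRunner, assuming a hypothetical @UnderTest annotation that names the class being tested:

import java.io.Serializable;
import java.lang.annotation.*;
import java.util.List;
import java.util.stream.Collectors;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface UnderTest {
    Class<?> value();
}

// Drops the inherited "serializes" test when the class under test
// is not Serializable, so it never clutters the test list.
public class AutoTestRunner extends BlockJUnit4ClassRunner {

    public AutoTestRunner(Class<?> testClass) throws InitializationError {
        super(testClass);
    }

    @Override
    protected List<FrameworkMethod> computeTestMethods() {
        Class<?> underTest =
                getTestClass().getJavaClass().getAnnotation(UnderTest.class).value();
        return super.computeTestMethods().stream()
                .filter(m -> !m.getName().equals("serializes")
                        || Serializable.class.isAssignableFrom(underTest))
                .collect(Collectors.toList());
    }
}

The test class would then carry @RunWith(AutoTestRunner.class) and @UnderTest(MyClass.class).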
The most pragmatic solution within the existing capabilities of JUnit is to have a single annotated test:

@Test
void followsStandardJavaLibraryProtocols() {
    if (implementsInterface(Serializable.class)) {
        testSerializableInterface();
    }
    ...
}
Breaks various abstract principles of TDD, but works, with no unnecessary cleverness.
Perhaps, instead of a flat list of test cases, JUnit could be extended to have more straightforward support for this kind of hierarchical test with subtests: something like a @Subtest annotation that identifies a test not to be invoked directly, instead adding a node to the result tree when it is invoked, and with what arguments.
Your approach seems like a valid one to me. I don't have a problem with it.
I do this slightly differently. I would create a single separate test class which tests all of your Serializable classes:
public class SerializablesTest {

    @Test
    public void serializes() {
        testSerializable(MyClass.class);
        testSerializable(MyClass2.class);
    }

    private void testSerializable(Class<?> clazz) {
        // do the real test here
        /* Serialize and deserialize and check equals, hashcode and other things... */
    }
}
What does this give you? For me, explicitness. I know that I am testing class MyClass for serializability. There isn't any magic involved. You don't need to pollute your other tests.
If you really need to test all your classes which implement Serializable, you can find all of your classes using reflection.
I use this approach a lot, using reflection to build objects; for instance, I can test that all fields are persisted to and reread from a database correctly. A sketch of the serialization round trip itself follows.
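For completeness, a sketch of what the round trip inside such a testSerializable helper might look like (with the usual java.io and JUnit assertion imports); it assumes a no-arg constructor and a meaningful equals()/hashCode():

private void testSerializable(Class<?> clazz) throws Exception {
    Object original = clazz.getDeclaredConstructor().newInstance(); // assumes a no-arg constructor

    // serialize to a byte array
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
        out.writeObject(original);
    }

    // deserialize and compare
    Object copy;
    try (ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
        copy = in.readObject();
    }

    assertEquals(original, copy);
    assertEquals(original.hashCode(), copy.hashCode());
}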