Dynamically adding JUnit tests to a test class - java

I find myself writing lots of boilerplate tests these days, and I want to optimize away many of these basic tests in a clean way that can be added to all the current test classes without much hassle.
Here is a basic test class:
class MyClassTest {
    @Test
    public void doesWhatItDoes() {
        assertEquals("foo", new MyClass("bar").get());
    }
}
Let's say MyClass implements Serializable; then it stands to reason we want to ensure that it really is serializable. So I built a class which you can extend, containing a battery of standard tests that are run alongside the other tests.
My problem is that if MyClass does NOT implement Serializable, for instance, we still have a serialization test in the class. We can make it simply succeed for non-serializable classes, but it still sticks around in the test list, and as this base class grows it will get more and more cluttered.
What I want to do is find a way to dynamically add those tests which are relevant to already existing test classes where appropriate. I know some of this can be done with a TestSuite, but then you have to maintain two test classes per class, and that will quickly become a hassle.
If anyone knows of a way to do it which doesn't require an eclipse plug-in or something like that, then I'd be forever grateful.
EDIT: Added a brief sample of what I described above:
class MyClassTest extends AutoTest<MyClass> {
    public MyClassTest() {
        super(MyClass.class);
    }

    @Test
    public void doesWhatItDoes() {
        assertEquals("foo", new MyClass("bar").get());
    }
}
public abstract class AutoTest<T> {
    private final Class<T> clazz;

    protected AutoTest(Class<T> clazz) {
        super();
        this.clazz = clazz;
    }

    @Test
    public void serializes() {
        if (Arrays.asList(clazz.getInterfaces()).contains(Serializable.class)) {
            /* Serialize and deserialize and check equals, hashCode and other things... */
        }
    }
}

Two ideas.
Idea 1:
Use Assume
A set of methods useful for stating assumptions about the conditions in which a test is meaningful. A failed assumption does not mean the code is broken, but that the test provides no useful information. The default JUnit runner treats tests with failing assumptions as ignored.
@Test
public void serializes() {
    assumeTrue(Serializable.class.isAssignableFrom(clazz));
    /* Serialize and deserialize and check equals, hashCode and other things... */
}
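The commented-out body of that check could look something like the following round-trip helper (a minimal sketch, not part of the original answer; the helper name is illustrative):

// Sketch: serialize to a byte array, read the object back, and return the copy.
private Object roundTrip(Object original) throws IOException, ClassNotFoundException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    ObjectOutputStream out = new ObjectOutputStream(bytes);
    out.writeObject(original);
    out.close();
    ObjectInputStream in = new ObjectInputStream(
            new ByteArrayInputStream(bytes.toByteArray()));
    return in.readObject();
}

The test body would then do something like: Object copy = roundTrip(instance); assertEquals(instance, copy); assertEquals(instance.hashCode(), copy.hashCode());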
Idea 2: Implement your own test runner.
Have a look at @RunWith and Runner at http://junit.sourceforge.net/javadoc/
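For illustration only, here is a rough sketch of what such a runner could look like, assuming hypothetical @TargetClass and @RequiresSerializable annotations that you would define yourself (neither is part of JUnit):

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;

public class AutoTestRunner extends BlockJUnit4ClassRunner {
    public AutoTestRunner(Class<?> testClass) throws InitializationError {
        super(testClass);
    }

    @Override
    protected List<FrameworkMethod> computeTestMethods() {
        // Hypothetical @TargetClass annotation naming the class under test.
        Class<?> target = getTestClass().getJavaClass()
                .getAnnotation(TargetClass.class).value();
        List<FrameworkMethod> methods = new ArrayList<FrameworkMethod>();
        for (FrameworkMethod m : super.computeTestMethods()) {
            // Hypothetical @RequiresSerializable marks tests that only make
            // sense when the target class is Serializable.
            boolean needsSerializable = m.getAnnotation(RequiresSerializable.class) != null;
            if (!needsSerializable || Serializable.class.isAssignableFrom(target)) {
                methods.add(m);
            }
        }
        return methods;
    }
}

The test class would then be annotated with @RunWith(AutoTestRunner.class), and tests that don't apply simply never show up in the test list.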

The most pragmatic solution within the existing capabilities of JUnit is to have a single annotated test:
@Test
public void followsStandardJavaLibraryProtocols() {
    if (implementsInterface(Serializable.class)) {
        testSerialisableInterface();
    }
    ...
}
Breaks various abstract principles of TDD, but works, with no unnecessary cleverness.
Perhaps, instead of a flat list of test cases, JUnit could be extended to have more straightforward support for this kind of hierarchical test with subtests: something like a @Subtest annotation that identified a test not to be invoked directly, but which instead added a node to the result tree recording when it was invoked and with what arguments.

Your approach seems like a valid one to me. I don't have a problem with it.
I do this slightly differently. I would create another single test which tests all of your Serializable classes:
public class SerializablesTest {
    @Test
    public void serializes() {
        testSerializable(MyClass.class);
        testSerializable(MyClass2.class);
    }

    private void testSerializable(Class<?> clazz) {
        // do the real test here
        /* Serialize and deserialize and check equals, hashCode and other things... */
    }
}
What does this give you? For me, explicitness. I know that I am testing class MyClass for serializability. There isn't any magic involved. You don't need to pollute your other tests.
If you really need to test all your classes which implement Serializable, you can find all of your classes using reflection.
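For example, with a classpath-scanning library such as org.reflections (an assumption on my part; the answer doesn't name a library, and the package name below is illustrative), the lookup could be a few lines:

// Sketch: find every Serializable class under a package and run the check on each.
Reflections reflections = new Reflections("com.example.myapp");
for (Class<? extends Serializable> clazz : reflections.getSubTypesOf(Serializable.class)) {
    testSerializable(clazz);
}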
I use this approach a lot, using reflection to build objects; for instance, I can test that all fields are persisted to and reread from a database correctly. I use this sort of thing all the time.

Related

Writing Unit Tests for inherited classes in Java

Consider the following simple class hierarchy in Java
class Foo {
    protected void someMethod(Bar bar) {
        ...
    }

    protected void someOtherMethod(Baz baz) {
        ...
    }
}

class EnhancedFoo extends Foo {
    @Override
    protected void someMethod(Bar bar) {
        ...
    }
}
I now start writing JUnit unit tests for these two classes. Since the contract of someMethod is the same for both classes, I basically need exactly the same test methods (concerning someMethod) for both classes, which leads to code duplication. Doing this for a much richer class hierarchy with multiple overridden methods, it feels like inheritance is bad for testability.
Also, even though someOtherMethod is not overridden in EnhancedFoo, I need to include the pertaining tests for EnhancedFoo, because that contract still applies to the extended class and the unit tests should verify it.
Is there some other way of organizing hierarchies in Java that is better for testability? Is there some JUnit construct that would help alleviate this problem?
One approach we used when we had a very similar scenario was to also reuse the test classes:
class FooTest {
    @Test
    public void testSomeMethodBar() {
        ...
    }

    @Test
    public void testSomeOtherMethodBaz() {
        ...
    }
}
And extend it for subclass tests:
class EnhancedFooTest extends FooTest {
    @Test
    public void testSomeMethodBar() {
        ...
    }
}
JUnit will run this specific overridden test method, and also the other default tests in FooTest. And that eliminates unnecessary duplication.
Where appropriate, some test classes are even declared abstract, and are extended by concrete test classes (tests for concrete classes).
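A minimal sketch of that abstract-test-class variant (the names here are illustrative):

public abstract class AbstractFooTest {
    // Each concrete test class supplies the implementation under test.
    protected abstract Foo createFoo();

    @Test
    public void testSomeMethodBar() {
        Foo foo = createFoo();
        // assertions against the someMethod contract...
    }
}

public class EnhancedFooTest extends AbstractFooTest {
    @Override
    protected Foo createFoo() {
        return new EnhancedFoo();
    }
}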
As I mentioned in my comments, I think the tests should be decoupled from the implementation and be "use-case-driven". It could look like this:
interface Foo {
    public void doSomething(...);
}

class DefaultFoo implements Foo { ... }
class EnhancedFoo extends DefaultFoo { ... }

class MyUseCaseTest {
    private Foo foo = new DefaultFoo(...);

    @Test
    public void someRequirement() { ... }
}

class AnotherUseCaseTest {
    private Foo foo = new EnhancedFoo(...);

    @Test
    public void differentRequirement() { ... }
}
The best option would be to get rid of inheritance altogether, but that's a different topic...
Quoting from your question and comments:
As far as I understand, unit tests assume classes and their methods to
be black boxes and test their contracts.
And
Since the contracts for the method someMethod are same for both the
classes, I need basically exactly the same test methods (concerning
the someMethod method) for both the classes, which leads to code
duplication.
Both are wrong assumptions in the context of general unit-testing concepts, and might be correct only in the very narrow context of TDD. You haven't tagged your question as TDD, so I am just guessing; TDD is about what is acceptable, and not necessarily what is most robust from a code perspective.
TDD never stops a developer from writing more comprehensive unit tests that satisfy them, beyond just the contract.
You have to understand that unit tests are developer tools to assure method accuracy, and they don't treat methods as black boxes; they are supposed to test the contract as well as implementation details (code coverage).
A developer shouldn't write unit tests blindly, e.g. for trivial methods (like setter/getter methods).
When you go into detail on code coverage, you will find that you have to write multiple test methods for a target method, covering all the scenarios.
If the implementation of the method below is not changed in EnhancedFoo, why write unit tests for it? It should be assumed that the parent class tests cover it.
protected void someMethod(Bar bar) {
...
}
You write unit tests for methods that you change or add and shouldn't be worried about inheritance hierarchy.
I am simply trying to emphasize the importance of word unit :)

How to test a method that depends on another already tested method?

I'm working on a method that can be considered a specialization of another already defined and tested method. Here's an example code to illustrate:
public class ProductService {
    public void addProduct(Product product) {
        // do something
    }

    public void addSpecialProduct(Product product) {
        addProduct(product);
        // do something after
    }
}
I don't want to copy the tests I have for addProduct which are already pretty complex. What I want is when I define the tests for addSpecialProduct, I just make sure that it also calls addProduct in the process. If this were a matter of 2 classes collaborating, it's easy to have the collaborator mocked and just verify that the target method gets called (and stub it if necessary). However, the 2 methods belong to the same class.
What I'm thinking right now is to spy on the object I'm testing, something like:
@Test
public void testAddSpecialProduct() {
    // set up code
    ProductService service = spy(new DefaultProductService());
    service.addSpecialProduct(specialProduct);
    verify(service).addProduct(specialProduct);
    // more tests
}
However, I'm wondering whether this approach somehow defeats the purpose of unit testing. What's the general consensus on this matter?
I think it depends on how rigorous you want to be with your unit testing. In the extreme sense, unit testing should only test the behavior, not the implementation. This would mean you would need to duplicate your tests (or take @chrylis's suggestion of abstracting out common functionality to helpers). Ensuring the other method is called is testing the implementation.
However in reality, I think your idea of spying on the instance to ensure the other well-tested method is called is a good idea. Here are my reasons:
1) It simplifies your code.
2) It becomes immediately clear to me what is happening. It says to me that everything that has been tested for the other method will now also be true, and here are the extra assertions.
One day you may modify the implementation so that the other method is not called, and this will cause your unit tests to fail, which is what people are generally trying to avoid by not testing the implementation. But in my experience, changes like this are much more common when the behavior is going to change anyway.
You may consider refactoring your code. Use the strategy pattern to actually implement the functionality for adding products and special products.
public class ProductService {
    @Resource
    private AddProductStrategy normalAddProductStrategy;

    @Resource
    private AddProductStrategy addSpecialProductStrategy;

    public void addProduct(Product product) {
        normalAddProductStrategy.addProduct(product);
    }

    public void addSpecialProduct(Product product) {
        addSpecialProductStrategy.addProduct(product);
    }
}
There will be two implementations of AddProductStrategy. One does what your original ProductService.addProduct implementation did. The second implementation delegates to the first one and then does the additional work required. Therefore you can test each strategy separately. The second strategy implementation is then just a decorator for the first one.
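As a rough sketch of what this describes (the class names below are assumptions, not from the answer):

public interface AddProductStrategy {
    void addProduct(Product product);
}

public class NormalAddProductStrategy implements AddProductStrategy {
    @Override
    public void addProduct(Product product) {
        // what ProductService.addProduct originally did
    }
}

// Decorator: reuses the normal strategy and then adds the "special" behaviour.
public class SpecialAddProductStrategy implements AddProductStrategy {
    private final AddProductStrategy delegate;

    public SpecialAddProductStrategy(AddProductStrategy delegate) {
        this.delegate = delegate;
    }

    @Override
    public void addProduct(Product product) {
        delegate.addProduct(product);
        // do the additional work required for special products
    }
}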

Java - contract tests

I am trying to write contract tests for some widely used interfaces, along the lines of:
public abstract class MyInterfaceContractTest extends TestCase {
    private MyInterface _toTest;

    public void setUp() {
        _toTest = getTestableImplementation();
    }

    protected abstract MyInterface getTestableImplementation();

    public void testContract() {
    }
}
...and...
public class MyInterfaceImplementationTest extends MyInterfaceContractTest {
    protected MyInterface getTestableImplementation() {
        return new MyInterfaceImplementation(...);
    }
}
However, I want to be able to test multiple instances of MyInterfaceImplementation. In my use case, this is an immutable object containing a collection of data (with accessors specified as per the interface MyInterface), and it might be empty, or have a small amount of data, or even lots of data.
So the question is, how can I test multiple instances of my implementations?
At the moment, I have to initialise the implementation to pass it into the abstract contract test. One approach would be to have multiple test classes for each implementation, where each test class tests a particular instance of that implementation - but that then seems a bit voluminous and difficult to keep track of.
FWIW, I'm using JUnit 3.
Generally the approach would be to use a "testable" subclass of the abstract class to test all the functionality of the abstract class in one test. Then write a separate test for each concrete implementation testing just the methods defined / implemented in the concrete class (don't retest the functionality in the concrete class).
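A brief sketch of the "testable subclass" idea in JUnit 3 style (the class names are illustrative, not from the answer):

// Production abstract class with a template method.
public abstract class AbstractProcessor {
    protected abstract int transform(int input);

    public int process(int input) {
        return transform(input) * 2;
    }
}

// Trivial subclass that exists only so the abstract class can be exercised.
class TestableProcessor extends AbstractProcessor {
    protected int transform(int input) {
        return input + 1;
    }
}

public class AbstractProcessorTest extends TestCase {
    public void testProcessDoublesTransformedValue() {
        assertEquals(8, new TestableProcessor().process(3));
    }
}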
If I've understood your need correctly, you want to run the same test method or methods with multiple implementations of the same interface.
I don't know how to do this very nicely in JUnit 3.
If you're willing to upgrade to JUnit 4, this can be done by using a parametrized test.
For the simple example of running a single test method on two implementations of an interface, your test code could look something like this:
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

import java.util.Arrays;
import java.util.Collection;

import static junit.framework.Assert.assertEquals;

// Tell JUnit 4 to run it as a parameterized test.
@RunWith(Parameterized.class)
public class MyInterfaceContractTest {
    private MyInterface _toTest;

    // Load the test data into your test instance via the constructor.
    public MyInterfaceContractTest(MyInterface impl) {
        this._toTest = impl;
    }

    // Supply the test data; in this example, just the two implementations you want to test.
    @Parameterized.Parameters
    public static Collection<Object[]> generateData() {
        return Arrays.asList(new Object[]{new MyInterfaceImpl1()}, new Object[]{new MyInterfaceImpl2()});
    }

    @Test
    public void testContract() {
        // assert whatever, using your _toTest field
    }
}
On running this test, JUnit will run the test twice, calling the constructor with the successive entries in the parameter list.
If you have more complex needs, like different expectations for the different implementations, the data generation could return Object arrays that contain multiple elements, and the constructor would then take a corresponding number of arguments, as in the sketch below.
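For instance, a sketch of that multi-argument variant (the expected values here are purely illustrative):

private MyInterface _toTest;
private String _expected;

public MyInterfaceContractTest(MyInterface impl, String expected) {
    this._toTest = impl;
    this._expected = expected;
}

@Parameterized.Parameters
public static Collection<Object[]> generateData() {
    // Each array holds an implementation plus the expectation that goes with it.
    return Arrays.asList(
            new Object[]{new MyInterfaceImpl1(), "expected-for-impl1"},
            new Object[]{new MyInterfaceImpl2(), "expected-for-impl2"});
}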
If you need to reinitialize the object under test between test methods, you might also want to use the trick I described in this related question.
I believe similar things are possible in TestNG, which might be another choice for your upgrade path.

Enforce JUnit tests cover the whole interface

How can I enforce that my test class covers a particular interface?
Also is there a convention for writing overloaded test methods, so the names are consistent (my current technique is something like: methodName + parameter1Type + parameter2Type + ...)?
I'm hoping the second question will be covered/avoided if there is a good way to do the first.
My issue is I have classes which implement a number of interfaces. Since I'm testing Spring injected service classes, everything has at least one interface.
Anyway, say I have a class that implements:
public interface MyInterface {
    int doFoo(int input);
    int doBar(int input);
}
Let's say MyInterfaceImpl implements this interface.
Now my test class will look something like:
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class MyInterfaceImplTest {
    private MyInterface myInterface = new MyInterfaceImpl(); // could inject it...

    @Test
    public void doFooTest() {
        // content of test not relevant
    }

    @Test
    public void doBarTest() {
        // content of test not relevant
    }
}
Now the above isn't too bad in terms of size, but in larger classes it's hard to know if I've covered everything; I could have missed one. I also find it annoying to create method names for overloaded methods. I could also add functionality to a class and possibly miss testing it. If I'm doing TDD this would be nearly impossible, but I'd still like to be sure. What I've been tempted to write is...
public class MyInterfaceImplTest implements MyInterface {
And then I'd like to stick @Test in front of each method. Of course this isn't going to work, because the test needs to supply the values. But using implements lets the IDE add the methods, and it enforces that the full interface has been implemented. To be clear, I know I am not looking to actually implement the interface in the test, but I think it could speed up development if I could do something like this.
To me this depends on what you mean by "enforce" and "covers a particular interface".
If your interface methods imply certain "contracts" (e.g. java.util.Collection.add() returns true if the receiving collection was modified as the result of the call), that you want to ensure are upheld by implementers of the interface, you can create a Contract Test.
If you want to see that all methods of a test subject are exercised by a particular test class, you can run the test under a code coverage tool like EMMA or Cobertura and ensure the results are to your liking.
You should probably look into parameterized testing. Here is what it would look like with TestNG:
@Test(dataProvider = "dp")
public void testInterface(StrategyInterface si) {
    // will be invoked twice, once with each implementation
}

@DataProvider
static public Object[][] dp() {
    return new Object[][] {
        new Object[] { new Strategy1Impl() },
        new Object[] { new Strategy2Impl() },
    };
}

How should you write junit test cases for multiple implementation of the same interface?

Before asking question, let me explain the current setup:
I have a service interface, say Service, and one implementation, say ServiceImpl. This ServiceImpl uses some other services. All the services are loaded as beans by Spring.
Now, I want to write JUnit test cases for ServiceImpl. To do so, I use the applicationContext to get the Service bean and then call different methods on it to test them.
Everything looks fine for the public methods, but how do I write test cases for private methods, given that different implementations might not have the same private methods?
Can anyone help me here on what should be preferred way of writing test cases?
The purist answer is that private methods are called that for a reason! ;-)
Turn the question around: given only the specification of a (publicly-accessible) interface, how would you lay out your test plan before writing the code? That interface describes the expected behavior of an object that implements it; if it isn't testable at that level, then there's something wrong with the design.
For example, if we're a transportation company, we might have these (pseudo-coded) interfaces:
CapitalAsset {
    Money getPurchaseCost();
    Money getCurrentValue();
    Date whenPurchased();
    ...
}

PeopleMover {
    Weight getVehicleWeight();
    int getPersonCapacity();
    int getMilesOnFullTank();
    Money getCostPerPersonMileFullyLoaded(Money fuelPerGallon);
    ...
}
and might have classes including these:
Bus implements CapitalAsset, PeopleMover {
    Account getCurrentAdvertiser() {...}
    boolean getArticulated() {...}
    ...
}

Computer implements CapitalAsset {
    boolean isRacked() {...}
    ...
}

Van implements CapitalAsset, PeopleMover {
    boolean getWheelchairEnabled() {...}
    ...
}
When designing the CapitalAsset concept and interface, we should have come to agreement with the finance guys as to how any instance of CapitalAsset should behave. We would write tests against CapitalAsset that depend only on that agreement; we should be able to run those tests on Bus, Computer, and Van alike, without any dependence on which concrete class was involved. Likewise for PeopleMover.
If we need to test something about a Bus that is independent from the general contract for CapitalAsset and PeopleMover then we need separate bus tests.
If a specific concrete class has public methods that are so complex that TDD and/or BDD can't cleanly express their expected behavior, then, again, that's a clue that something is wrong. If there are private "helper" methods in a concrete class, they should be there for a specific reason; it should be possible to ask the question "If this helper had a defect, what public behavior would be affected (and how)?"
For legitimate, inherent complexity (i.e. which comes from the problem domain), it may be appropriate for a class to have private instances of helper classes which take on responsibility for specific concepts. In that case, the helper class should be testable on its own.
A good rule of thumb is:
If it's too complicated to test, it's too complicated!
Private methods should be exercised through the public interface of a class. If you have multiple implementations of the same interface, I'd write test classes for each implementation.
You have written multiple implementations of an interface and want to test all implementations with the same JUnit 4 test. In the following you will see how you can do this; there is also an example that shows how to get a new instance for each test case.
Example Code
Before I start explaining things, here's some example code for the java.util.List interface:
@RunWith(Parameterized.class)
public class ListTest {
    private List<Integer> list;

    public ListTest(List<Integer> list) {
        this.list = list;
    }

    @Parameters
    public static Collection<Object[]> getParameters() {
        return Arrays.asList(new Object[][] {
            { new ArrayList<Integer>() },
            { new LinkedList<Integer>() }
        });
    }

    @Test
    public void addTest() {
        list.add(3);
        assertEquals(1, list.size());
    }
}
I think you should split the test cases.
First, test the class which calls the different implementations of the interfaces; this means you are testing the public methods.
After that, test the interface implementation classes in another test case. There you can call the methods via reflection.
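For what it's worth, a minimal sketch of calling a private method via reflection (the class, method name, and values here are made up for illustration):

// Obtain the private method, make it accessible, invoke it, and assert on the result.
Method doubler = ServiceImpl.class.getDeclaredMethod("doubleInternally", int.class);
doubler.setAccessible(true);
Object result = doubler.invoke(serviceImpl, 21);
assertEquals(42, result);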
I found an interesting article on how to test private methods with JUnit at http://www.artima.com/suiterunner/privateP.html
So, I assume we should prefer testing private methods indirectly by testing the public methods; only in exceptional circumstances should we think of testing private methods directly.
