I am writing module tests for a project using TestNG and Mockito 2. I want to mock a few methods which make outbound requests. Now, the object to mock is created locally within another object's method. So if I have, say, 4 classes A, B, C and D, such that A creates an object of type B, B creates an object of type C and so on, and an object of type D is to be mocked, I see I have two options to mock it.
Option 1 is to spy on objects of types A, B and C, inject the spy of B into A and the spy of C into B, and finally inject the mock of D into C during object creation. Following is an example.
class A {
    public B createB() {
        return new B();
    }

    public void someMethod() {
        B b = createB();
    }
}
In this way I can spy on A and inject a mock of B when createB is called. This way I can ultimately mock D.
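For concreteness, the whole chain would be stubbed roughly like this (assuming B and C expose createC()/createD() factory methods analogous to createB()):

D dMock = Mockito.mock(D.class);
C cSpy = Mockito.spy(new C());
B bSpy = Mockito.spy(new B());
A aSpy = Mockito.spy(new A());

// doReturn/when avoids invoking the real factory methods on the spies
Mockito.doReturn(bSpy).when(aSpy).createB();
Mockito.doReturn(cSpy).when(bSpy).createC();
Mockito.doReturn(dMock).when(cSpy).createD();

aSpy.someMethod(); // now ultimately reaches dMock instead of a real D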
Option 2 is to not mock the intermediate classes and instead directly have a Factory class like the one below:
class DFactory {
    private static D d;

    public static void setD(D newD) {
        d = newD;
    }

    public static D getD() {
        if (d != null) {
            return d;
        } else {
            return new D();
        }
    }
}
The above option is simple, but I am not sure if this is the right thing to do as it creates more static methods, something that should be avoided, I believe.
I would like to know which method should be preferred and if there is some other alternative.
Please note that I do not wish to use PowerMockito or any other such framework that encourages bad code design. I want to stick to Mockito 2. I am fine with refactoring my code to make it more testable.
The way you have it now, with A creating B, B creating C, and C creating D, all of that creation consists of implementation details you can't see or change from the outside, specifically the creation of dependency objects.
You are admirably avoiding the use of PowerMockito, and you are also admirably interested in refactoring your code to handle this change well, which means delegating the choice of D to the creator of A. Though I understand that you only really mean for this choice to happen in testing, the language doesn't know that; you are choosing a different implementation for the dependency, and taking the choice away from C's implementation. This is known as inversion of control, or dependency injection. (You've probably heard of those terms before, but I introduce them at the end because they are typically associated with weight and frameworks that aren't really necessary for this conversation right now.)
It's a little trickier because it looks like you don't just need an implementation of D, but that you need to create new implementations of D. That makes things a little harder, but not by much, especially if you are able to use Java 8 lambdas and method references. Anywhere below that you see a reference to D::new, that's a method reference to D's constructor that could be accepted as a Supplier<D> parameter.
I would restructure your class in one of the following ways:
Construct A like new A(), but leave the control over the implementation of D for when you actually call A, like aInstance.doSomething(new D()) or aInstance.doSomething(D::new). This means that C would delegate to the caller every single time you call a method, giving more control to the callers. Of course, you might choose to offer an overload of aInstance.doSomething() that internally calls aInstance.doSomething(new D()), to make the default case easy.
Construct A like new A(D::new), where A calls new B(dSupplier), and B calls new C(dSupplier). This makes it harder to substitute B and C in unit tests, but if the only likely change is to have the network stack represented by D, then you are only changing your code as required for your use-case. (A sketch of this option follows the list below.)
Construct A like new A(new B(new C(D::new))). This means that A is only involved with its direct collaborator B, and makes it much easier to substitute any implementation of B into A's unit tests. This assumes that A only needs a single instance of B without needing to create it, which may not be a good assumption; if all classes need to create new instances of their children, A would accept a Supplier<B>, and A's construction would look like new A(() -> new B(() -> new C(D::new))). This is compact, but complicated, and you might choose to create an AFactory class that manages the creation of A and the configuration of its dependencies.
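As an illustration of the second option, here is a minimal sketch of threading a Supplier&lt;D&gt; down the chain; names beyond those in the question are placeholders:

import java.util.function.Supplier;

class C {
    private final Supplier<D> dSupplier;

    C(Supplier<D> dSupplier) {
        this.dSupplier = dSupplier;
    }

    void talkToNetwork() {
        D d = dSupplier.get(); // a fresh D per call, real or mocked
        // ... use d ...
    }
}

class B {
    private final C c;

    B(Supplier<D> dSupplier) {
        this.c = new C(dSupplier);
    }
}

class A {
    private final B b;

    A(Supplier<D> dSupplier) {
        this.b = new B(dSupplier);
    }
}

// Production: A a = new A(D::new);
// Test:       D dMock = Mockito.mock(D.class);
//             A a = new A(() -> dMock);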
If the third option is tempting for you, and you think you might want to automatically generate a class like AFactory, consider looking into a dependency injection framework like Guice or Dagger.
I'm trying to understand how to do unit testing with Mockito.
All the cases I have found are ones where a class A depends on a class B, that is, A has an attribute of type B. I understand this case well. But how do I write the tests when A uses B without holding it as an attribute?
Suppose I have this code:
// "package" is a reserved word, so assume classB lives in some real package:
import mypackage.classB;

public class A {
    public int methodA(classB b) {
        int x = b.methodB();
        // do something with x and then return the result
        return x;
    }
}
How can I test the methodA? Do I need to use mocks in this case?
I suspect that this will turn into a philosophical debate, and it probably mostly is one.
The answer is - it depends.
You can use a Mockito mock, or you can just pass in a fully formed object as the argument. There are upsides and downsides to both.
Let's assume that ClassB is just a simple POJO, something like a user record. It has a userID and a name field, and you want to test methodA. I would generally just create an instance of the user object and then pass that in as the argument.
Your test might look like this:
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class TestA {
    @Test
    public void testMethodA() {
        B b = new B();
        int expectedValue = 1000;
        A a = new A();
        assertEquals(expectedValue, a.methodA(b));
    }
}
The benefit of this is that you have a fully formed object B and you are testing with real data. The downside is that Object B can be extremely complex or take a long time to generate. For example, Object B could be a database lookup.
If Object B needs to be mocked, it can be mocked with Mockito, and then you get lots of ways to work with it. The simplest case would be a variation of the above.
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.mockito.Mockito;

public class TestA {
    @Test
    public void testMethodA() {
        B b = Mockito.mock(B.class);
        Mockito.when(b.methodB()).thenReturn(10);
        int expectedValue = 1000;
        A a = new A();
        assertEquals(expectedValue, a.methodA(b));
    }
}
The upside of this is that you don't worry about what happens in Object B. All you care about is that methodB in Object B returns 10. You are just testing methodA, which doesn't care much about what Object B is doing. This can be much faster. The downside is that you are making assumptions about what Object A is doing with Object B. If Object A decided that it needed another method from Object B inside methodA, your test would fail. It also might hide some of the implementation of Object B, which might become important in some cases.
Personally, I tend to try to mock as little as possible. This makes the setup of the tests more complex and it takes longer to run each test, but the upside is that I'm testing the whole process up to method A, starting from root data.
But there is nothing wrong with mocking Object B. Tests become simpler and possibly quicker, with the downside that you are making assumptions about object B.
Assume you have three classes A, B, and C where A calls B and B calls C and every class has some number of internal states that influence the result given by the class' methods. Describe how using Mockito can considerably reduce the number of test cases for class A.
I am new to Mockito. I sort of got the concept of Mockito but stumbled upon the above question for which I do not have an answer.
Mockito can help you isolate a class' behavior without having to know the internal workings of a collaborating class.
Assume C looks like this:
public class C {
private B b;
public C (B b) {
this.b = b;
}
public boolean isLargerThanTen() {
return b.getSomeCalculation() > 10;
}
}
Now suppose you're testing C.isLargerThanTen(). Without mocking, you'll have to know how exactly to construct B so that its getSomeCalculation() returns such a number. E.g.:
A a = new A();
a.setSomething(1);
a.setSomethingElse(2);
B b = new B(a);
b.setAnotherThing(999);
b.setYetAnotherThing(888);
C c = new C(b);
assertTrue(c.isLargerThanTen());
This is problematic on two fronts - first, it's cumbersome. Second, it makes you know the internal workings of B (and in this case, A, which B depends on too). If B were to change its implementation, C's test would break, even though there's nothing wrong with C, just with the (bad) way its test is set up.
Instead, if you use mocking, you could just simulate the only part of the behavior you care about, without the hassle of understanding the internal implementation details of B:
B b = Mockito.mock(B.class);
Mockito.when(b.getSomeCalculation()).thenReturn(11);
C c = new C(b);
assertTrue(c.isLargerThanTen());
This sounds like homework, but the advantage to using a mocking framework like Mockito is that it allows you to completely separate the functionality of each component when performing tests. That is, you only need to test the interface presented by A and the possible scenarios (various inputs, etc) because you can control what the mocked B and C classes do.
For example, if you need to test error handling in A you can simply have your mock B object throw an exception without having to actually generate the error condition in B that would have otherwise triggered the exception.
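For instance, reusing getSomeCalculation() from the snippet above, simulating a failure is a one-liner (the exception type here is just an example):

B b = Mockito.mock(B.class);
Mockito.when(b.getSomeCalculation()).thenThrow(new IllegalStateException("simulated failure"));
// exercise the class under test and assert it handles the error,
// without having to reproduce the real failure condition inside B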
The short answer is because it gives you a greater degree of control over the class under test (and that class's dependencies) and allows you to simulate real-world conditions that would otherwise take a lot more code to generate.
I have a Java package with a class hierarchy, say class A with subclasses A1, A2, A3, ...
For a specific application I need to make classes (of type A) which implement an interface B. These new classes will have some things in common, so my first thought was to make a base class C which inherits from A and implements B; my new classes C1, C2, C3, ... would then just inherit from C. However, some of these classes Ci will need functionality that already exists in one of the Aj, and by this method I'd need to re-implement such functionality (functionality not defined in A). For some of the Ci's I'd like to inherit behavior from the various Aj's, while at the same time inheriting other behavior common to all Ci's which is in C. Of course, Java won't allow multiple inheritance, so I can't do this directly. I certainly don't want to re-implement this much stuff just because MI is not supported, but I don't see a way around it. Any ideas?
Do you really need to subclass or do you just want code re-use?
Sounds like you could use the Composite and possibly the Strategy pattern:
Your class C will have fields of type A (and possibly B) and delegate calls to them where appropriate. This gives you all the advantages of code re-use without the messiness of inheritance (single or multiple).
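A minimal sketch of that shape, with placeholder method names:

public class C implements B {
    private final A a; // reuse A's behaviour through delegation

    public C(A a) {
        this.a = a;
    }

    @Override
    public void methodFromB() {
        // behaviour common to all the Ci's
    }

    public void methodFromA() {
        a.methodFromA(); // delegate to the wrapped A instead of inheriting it
    }
}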
Everyone seems to notice this problem when they first begin seeing the usefulness of class hierarchies. The problem stems from orthogonal concerns in the classes. Some of the subclasses share characteristic 1 and some share characteristic 2. Some have neither. Some have both.
If there was just one characteristic ... (Say, some had an inside and so they could be filled.) ... we would be fine. Make an intermediate subclass to handle those concerns. Some subclasses actually subclass the intermediate one. And we are done.
But there are two and there is no way to do the subclassing. I suppose multiple inheritance is a solution to that problem but it adds a level of complexity and subverts the simplicity of thinking that makes hierarchical class structures useful.
I find it best to use subclassing for the one concern that it solves easily. Then pick a way to isolate and share the code besides subclassing.
One solution is to extract the functionality and move it elsewhere. Then all the Aj's and Ci's can call it there. The advantage is that you don't copy and paste any code and it can be fixed in one place if it gets broken.
One: The code could go into the base class A and be given a name indicating it only applies to some of the children. Make the methods protected, then call them from the actual classes. It's ugly but effective.
A
protected String formStringForSubclassesAjAndCi(String a, int b) {
return a + b;
}
Ci and Aj
public String formString(String a, int b) {
return formStringForSubclassesAjAndCi(a, b);
}
Two: Similarly, you can put the shared code in some sort of helper class:
CiAjHelper
public static String formStringForSubclassesAjAndCi(String a, int b) {
return a + b;
}
Aj and Ci
public String formString(String a, int b) {
return CiAjHelper.formStringForSubclassesAjAndCi(a, b);
}
Three: The third way is to put the code in, say, Aj, and then call it from Ci by having an instance of Aj for each Ci instance and delegating the common functions to it. (It's still ugly.)
Aj
public String formString(String a, int b) {
return a + b;
}
Ci
private Aj instanceAj = new Aj();
public String formString(String a, int b) {
return instanceAj.formString(a, b);
}
I am developing unit tests that can benefit a lot from reusability when it comes to the creation of test data. However, there are different things (API calls) I need to do with the same test data (API args).
I am wondering if the following idiom is possible in Java.
class RunTestScenarios {

    void runScenarioOne(A a, B b, C c) {
        ........
    }

    void runScenarioTwo(A a, B b, C c) {
        ........
    }

    void runScenario(/* The scenario */ scenarioXXX) {
        A a = createTestDataA();
        B b = createTestDataB();
        C c = createTestDataC();
        scenarioXXX(a, b, c);
    }

    public static void main(String[] args) {
        runScenario(runScenarioOne);
        runScenario(runScenarioTwo);
    }
}
Essentially, I don't want to have something like the following repeated everywhere:
A a = createTestDataA();
B b = createTestDataB();
C c = createTestDataC();
To my knowledge such aliasing (scenarioXXX) is not possible, but I will be happy to be corrected, or to hear possible design alternatives that accomplish this effect.
By the way, I am aware of the Command pattern as a way to get this done, but it is not what I am looking for.
Thanks,
US
Use JUnit.
Convert a, b and c to fields.
Use JUnit's setUp and tearDown methods to initialise fresh objects for each test.
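A sketch of that structure, assuming the createTestData*() helpers from the question (JUnit 4 annotations shown; TestNG's equivalent of @Before is @BeforeMethod):

import org.junit.Before;
import org.junit.Test;

public class RunTestScenarios {
    private A a;
    private B b;
    private C c;

    @Before
    public void setUp() {
        a = createTestDataA();
        b = createTestDataB();
        c = createTestDataC();
    }

    @Test
    public void scenarioOne() {
        // exercise the first API call with a, b and c
    }

    @Test
    public void scenarioTwo() {
        // exercise the second API call with a, b and c
    }
}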
No, you cannot do that in Java. Not without lambda support (don't hold your breath...). You can do that in Scala, for example using ScalaTest. And probably in some other JVM languages.
In Java, the idiom is to wrap these functions with anonymous classes, perhaps implementing an interface defining the method to run.
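A sketch of that idiom, using a hypothetical Scenario interface:

interface Scenario {
    void run(A a, B b, C c);
}

void runScenario(Scenario scenario) {
    A a = createTestDataA();
    B b = createTestDataB();
    C c = createTestDataC();
    scenario.run(a, b, c);
}

// Call site, pre-Java-8 style:
runScenario(new Scenario() {
    @Override
    public void run(A a, B b, C c) {
        runScenarioOne(a, b, c);
    }
});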
One way you can achieve this is by using interfaces:
Modify the runScenario() method to accept that interface as an argument.
Implement the interface for each scenario type and pass the one that you need for the scenario you are testing.
Generally, unit testing in Java is done using JUnit, in case you did not already know about it. There are many tutorials available, and one such can be found here.
I'm not quite sure I get what you need, but it seems to me that you can use the DataProvider feature that TestNG provides.
It's something like this:
...
@Test(dataProvider = "someMethodWithDataProviderAnnotation")
void runScenarioOne(A a, B b, C c) {
    ...
}

@Test(dataProvider = "someMethodWithDataProviderAnnotation")
void runScenarioTwo(A a, B b, C c) {
    ...
}
And then you create your data provider:
@DataProvider(name = "someMethodWithDataProviderAnnotation")
private Object[][] getTestData() {
    return new Object[][] {{
        createTestDataA(),
        createTestDataB(),
        createTestDataC()
    }};
}
And that's it: when you run your tests, they will be invoked with the right parameters, which will be created only once. This is even more useful when you have to load a lot of resources, for instance all the files in a directory or something like that. You just run all the methods in the class.
Read more here:
http://testng.org/doc/documentation-main.html#parameters-dataproviders
BTW: TestNG is included as a plugin in IntelliJ IDEA.
I think the following can't be done in Java, but I would be happy to learn how to implement something that resembles it.
Suppose we have a class C, that is already used in compiled code. (We can neither change that code nor the original definition of C).
Suppose further there is interesting code that could be re-used if only C implemented interface I. It is, in fact, more or less trivial to derive a D that is just C plus the implementation of the interface methods.
Yet, it seems there is no way, once I have a C, to say: I want you to be a D, that is, a C implementing I.
(Side remark: I think the cast (D)c, where c's runtime type is C, should be allowed if D is a C and the only difference to C are added methods. This should be safe, should it not?)
How could one work around this calamity?
(I know of the factory design pattern, but this is not a solution, it seems. For, once we manage to create D's in all places where formerly were C's, somebody else finds another interface J useful and derives E extends C implements J. But E and D are incompatible, since they both add a different set of methods to C. So while we can always pass an E where a C is expected, we can't pass an E where a D is expected. Rather, now, we'd need a new class F extends C implements I,J.)
Couldn't you use a delegate class, i.e. a new class which wraps an instance of class C but also implements interface I?
public class D implements I {
    private C c;

    public D(C c) {
        this.c = c;
    }

    public void method_from_class_C() {
        c.method_from_class_C();
    }

    // repeat ad nauseam for all of class C's public methods
    ...

    public void method_from_interface_I() {
        // does stuff
    }

    // and do the same for all of interface I's methods too
}
and then, if you need to invoke a function which normally takes a parameter of type I just do this:
result = some_function(new D(c));
If all that you need to be compatible with is interfaces, then no problem: take a look at dynamic proxy classes. That is basically how you implement interfaces at runtime in Java.
If you need similar runtime compatibility with classes, I suggest you take a look at the cglib or Javassist open-source libraries.
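For the interface case, here is a minimal sketch with java.lang.reflect.Proxy; the method name inside the handler is a placeholder:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public final class Adapters {
    // Wraps an existing C in a runtime-generated class that implements I.
    public static I asI(final C c) {
        return (I) Proxy.newProxyInstance(
                I.class.getClassLoader(),
                new Class<?>[] { I.class },
                new InvocationHandler() {
                    @Override
                    public Object invoke(Object proxy, Method method, Object[] args)
                            throws Throwable {
                        // Route I's methods to the logic you would have put in D,
                        // possibly delegating to c for the heavy lifting.
                        if ("doInterfaceThing".equals(method.getName())) {
                            return null; // do the interesting work here
                        }
                        throw new UnsupportedOperationException(method.getName());
                    }
                });
    }
}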
If you (can) manage the ClassLoader that loads your class C then you can try to do some class-loading time shenanigans with bytecode instrumentation to make the class implement the interface.
The same can be done during build-time, of course. It might even be easier this way (as you don't need access to the ClassLoader).
(Side remark: I think the cast (D)c, where c's runtime type is C, should be allowed if D is a C and the only difference to C are added methods. This should be safe, should it not?)
Not at all. If you could make this cast, then you could compile code that attempted to call one of the "added methods" on this object, which would fail at runtime since that method does not exist in C.
I think you are imagining that the cast would detect the methods that are "missing" from C and delegate them to D automatically. I doubt that would be feasible, although I can't speak to the language design implications.
It seems to me the solution to your problem is:
Define class D, which extends C and implements I.
Define a constructor D(C c) which essentially clones the state of the given C object into a new D object.
The D object can be passed to your existing code because it is a C, and it can be passed to code that wants an I because it is an I.
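A sketch of that, with the state-copying details left hypothetical since they depend on C's API:

public class D extends C implements I {
    // "Clone" constructor: copies the relevant state of an existing C.
    public D(C c) {
        // copy c's fields into this instance here
    }

    @Override
    public void methodFromI() {
        // the implementation of I that C was missing
    }
}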
I believe what you want is possible by using java.lang.reflect.Proxy; in fact I have done something similar for a current project. However, it's quite a bit of work, and the resulting "hybrid objects" can expose strange behaviour (because method calls on them are routed to different concrete objects, there are problems when those methods try to call each other).
I think that you can't do it because Java is strictly typed. I believe it can be done in languages like Ruby and Python through the use of mixins.
As for Java, it definitely looks like a good use case for the Adapter design pattern (it was already proposed earlier as a "wrapper" object).