I have a class that I want to test using Mockito. The best way to describe the class is to paste the code, but I will try to summarize it briefly.
The class has one void function and calls another object that is passed in via setter and getter methods. The call that the void function makes on that object is asynchronous.
The problem I am facing is mocking the asynchronous call that the void function (tested via JUnit) uses.
public class Tester {

    private Auth auth; // not mocked or spied

    @Mock
    private Http transport;

    @Before
    ....

    @Test
    public void testVoidFunctionFromAuth() {
        doAnswer(new Answer<Object>() {
            @Override
            public Object answer(InvocationOnMock invocation) throws Throwable {
                return doOutput();
            }
        }).when(transport).executeAsync(param1, param2, param3...);

        auth.obtainAuth(); // void function that uses the transport mock;
                           // obtainAuth() calls transport.executeAsync() as part of its code
    }

    // return type of transport.executeAsync() is
    // ListenableFuture<ResponseEntity<String>>
    private ListenableFuture<ResponseEntity<String>> doOutput() {
        return new SimpleAsyncTaskExecutor()
                .submitListenable(new Callable<ResponseEntity<String>>() {
                    @Override
                    public ResponseEntity<String> call() throws Exception {
                        ....
                        return responseEntity;
                    }
                });
    }
}
What happens is that the doOutput() function gets called before auth.obtainAuth(), and when obtainAuth() tries to call doOutput() it returns null -- most likely because doOutput() was already executed on the line before. I am not sure how to bind/inject the mocked class (transport) into the call to executeAsync().
I'm not sure if I understood the question, but as chrylis pointed out, the mock object returns a value instantly.
Unit tests should have their own context and not depend on external resources, so there is no point in testing the async call itself. It should simply return different values so you are able to test the behaviour of the classes that use it.
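For the concrete problem above, one option is to stub executeAsync() so it returns an already-completed future instead of scheduling real asynchronous work. A minimal sketch, assuming Spring's SettableListenableFuture, the Http/Auth types from the question, and a setTransport() setter (the question says the object is passed in via setter and getter methods; the names here are guesses, and the number of any() matchers must match the real executeAsync() signature):

// Sketch only: the executeAsync() arity and the setter name are assumptions.
SettableListenableFuture<ResponseEntity<String>> future =
        new SettableListenableFuture<>();
future.set(new ResponseEntity<>("canned body", HttpStatus.OK)); // already completed

when(transport.executeAsync(any(), any(), any())).thenReturn(future);
auth.setTransport(transport); // bind the mock into the object under test

auth.obtainAuth(); // obtainAuth() now receives a completed future instead of null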
For a better understanding of what mocking means, take a look at this post: What is Mocking?
Quoting from Pro Spring MVC with Web Flow, a unit test should:

• Run fast: A unit test must run extremely fast. If it needs to wait for database connections or external server processes, or to parse large files, its usefulness will quickly become limited. A test should provide an immediate response and instant gratification.

• Have zero external configuration: A unit test must not require any external configuration files, not even simple text files. The test's configurations must be provided and set by the test framework itself by calling code. The intent is to minimize both the runtime of the test and to eliminate external dependencies (which can change over time, becoming out of sync with the test). Test case conditions should be expressed in the test framework, creating more readable test conditions.

• Run independent of other tests: A unit test must be able to run in complete isolation. In other words, the unit test can't depend on some other test running before or after itself. Each test is a stand-alone unit. In fact, every test method inside a test should be stand-alone and not depend on another method or on the test methods being run in a certain order.

• Depend on zero external resources: A unit test must not depend on any outside resources, such as database connections or web services. Not only will these resources slow the test down, but they are outside the control of the test and thus aren't guaranteed to be in a correct state for testing.

• Leave external state untouched: A unit test must not leave any evidence that it ever ran. Unit tests are written to be repeatable, so they must clean up after themselves. Obviously, this is much easier when the test doesn't rely on external resources (which are often harder to clean up or restore).

• Test smallest unit of code possible: A unit test must test the smallest unit of code possible in order to isolate the code under test. In object-oriented programming, this unit is usually a method of an object or class. Writing unit tests such that a method is tested independently of other methods reduces the number of code lines that could contain a potential bug.
Related
Given a unit test class that needs to use a specific service, it seems that there are different ways to fake the behaviour of the service, such as using a mocking framework or implementing a stub class.
For instance, a service to read/write to disk:
public interface FileServiceInterface {
    public void write(String fileName, String text);
    public String read(String fileName);
}
I might have a class that fakes its behaviour and later use it in the test class, injecting the Stub instead of the real implementation:
public class FileServiceStub implements FileServiceInterface {

    private final ConcurrentHashMap<String, String> content = new ConcurrentHashMap<>();

    public void write(String fileName, String text) {
        content.put(fileName, text);
    }

    public String read(String fileName) {
        return content.get(fileName);
    }
}
Another option is to let Mockito (for example) intercept the calls to the service directly in the test class:
public class TestExample {

    @Mock
    private FileServiceImpl service; // a real implementation of the service

    @Test
    void doSomeReadTesting() {
        when(service.read(any(String.class))).thenReturn("something");
        ...
    }
}
I would like to know which of these alternatives is the best (or currently most accepted) approach, and if there's any other/better option. Thanks.
Short answer: it depends on the use case. When you need to check behavior, use a Mock; for a state-based test, use a Stub.
In state verification you have the object under test perform a certain operation, after supplying it with all the necessary stubs. When it ends, you examine the state of the object and verify it is the expected one.
In behavior verification, you specify exactly which methods are to be invoked, thus verifying not that the ending state is correct, but that the sequence of steps performed was correct.
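As a minimal sketch of the difference, reusing the file service from the question (DocumentSaver is a hypothetical class under test that writes a document through the service):

// State verification: supply the stub, run the operation, inspect the state.
@Test
void saveWritesTheText_stateVerification() {
    FileServiceStub stub = new FileServiceStub();
    DocumentSaver saver = new DocumentSaver(stub); // hypothetical SUT
    saver.save("notes.txt", "hello");
    assertEquals("hello", stub.read("notes.txt"));
}

// Behavior verification: verify the exact call made on the mock.
@Test
void saveWritesTheText_behaviorVerification() {
    FileServiceInterface mock = mock(FileServiceInterface.class);
    DocumentSaver saver = new DocumentSaver(mock);
    saver.save("notes.txt", "hello");
    verify(mock).write("notes.txt", "hello");
}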
Martin Fowler described a great comparison between Stubs and Mocks in the article Mocks Aren't Stubs.
There are several types of pretend object used in place of a real object for testing purposes:

Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.

Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in-memory database is a good example).

Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test.

Spies are stubs that also record some information based on how they were called. One form of this might be an email service that records how many messages it was sent.

Mocks are objects pre-programmed with expectations which form a specification of the calls they are expected to receive.
In your case, the choice is between the Fake and Mock object types.
They all look like real objects, but unlike Mocks, the other types do not have pre-programmed expectations that could fail your test. A Stub or Fake only cares about the final state, not how that state was derived.
So there we have two different design styles of testing.
State-based tests are more black-box. They don't actually care how the System Under Test (SUT) achieves its result, as long as it is correct. This makes them more resistant to change and less coupled to the design.
But occasionally you do run into things that are really hard to use state verification on, even if they aren't awkward collaborations. A great example of this is a cache. The whole point of a cache is that you can't tell from its state whether the cache hit or missed - this is a case where behavior verification would be a wise choice.
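A sketch of that cache case (CachingLookup and Loader are hypothetical names): from the outside both calls return the same value, so only behavior verification can prove that the second call was a cache hit:

@Test
void secondLookupIsServedFromTheCache() {
    Loader loader = mock(Loader.class);            // the expensive source
    when(loader.load("key")).thenReturn("value");
    CachingLookup cache = new CachingLookup(loader);

    assertEquals("value", cache.get("key"));       // miss: reaches the loader
    assertEquals("value", cache.get("key"));       // hit: must not reach the loader

    verify(loader, times(1)).load("key");          // verify behavior, not state
}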
Another good thing about Mocks is the ability to define relaxed constraints on the expectations: you can use any(), anyString(), withAnyArguments(). This makes them flexible.
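For instance (a short sketch against the file service from earlier; note that once one Mockito matcher is used, all arguments must use matchers, hence eq()):

when(service.read(anyString())).thenReturn("canned content");
verify(service).write(eq("notes.txt"), anyString()); // exact file name, any text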
What's the difference between a mock & stub?
Why can't JUnit test methods return a value?
Documentation says (emphasis mine):
Test methods and lifecycle methods may be declared locally within the current test class, inherited from superclasses, or inherited from interfaces (see Test Interfaces and Default Methods). In addition, test methods and lifecycle methods must not be abstract and must not return a value.
Why is it enforced like this?
Is this purely a design choice? Is it a best practice?
One thing I haven't yet seen mentioned is that one of the reasons behind this design decision is historical.
Before test cases were marked by annotations, they were discovered by JUnit via reflection: all public void methods whose names started with test, and that didn't take any arguments, were considered test cases and were run by the framework.
Had these methods been allowed to also return a value, you could not have created a public helper method starting with test that calculated a value commonly used in some tests, because the framework runner would have tried to run it.
You should have made them private anyway I guess.
Additionally, this was also the high time of "a method name that starts with a verb should do something" (rather than calculate something).
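For illustration, a JUnit 3-style test class discovered purely by reflection might have looked like this (hypothetical names):

import junit.framework.TestCase;

public class LegacyCalculatorTest extends TestCase {

    // picked up by the runner: public, void, no arguments, name starts with "test"
    public void testAddition() {
        assertEquals(4, 2 + 2);
    }

    // a public helper named e.g. testFixture() would also have been run,
    // so helpers had to avoid the "test" prefix or be non-public
    private int fixtureValue() {
        return 42;
    }
}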
What is the reasoning behind this?
Think of each test method as being the logical analog of a public static void main(String[]) method in a classic Java application. There needs to be a well known method signature so that the JUnit framework is able to identify the test methods, and call them in a standard way.
As to the specific design points you are querying:
The reason that the methods cannot be abstract is that they need to be callable. You can't call an abstract method.
The reason that methods cannot return values is that the test framework would not be able to do anything useful with a returned value. (It would not be impossible to discard any result, but what is the point of a testcase result that can't be used?)
I was thinking maybe we could reuse one test method within another.
If you wanted to reuse code in unit tests, you would just use ordinary methods and other OO features of Java, e.g. base classes, helper classes, etcetera. There are few situations where you would want a method to be both a free-standing test and a "component" (for want of a better word) of another test method. And if you did, you could write something like this:
public void test_xxx() {
    real_test_xxx();
}

public int real_test_xxx() {
    ...
}

public void test_yyy() {
    ...
    int res = real_test_xxx();
    ...
}
Is this purely a design choice? Is it a best practice?
There are no best practices!
These are design choices that are guided by the general requirements and practicalities of unit testing1. Particularly, the requirement that unit test authors should NOT need to build configurations to tell the test framework what to do.
Note that TestNG takes a similar approach. If a TestNG @Test method is declared as returning a value, it is not treated as a test method; see Can a test method return a value in TestNG.
Either way, unless you are proposing to implement yet another Java test framework, this is all moot.
1 - If there were strong practical reasons for needing abstract test methods or test methods that returned values, someone would have done something about it. The masses (of unit test writers) would rise up and overthrow the tyrannical regime (of test framework developers). (Reprise of "Won't Get Fooled Again" by The Who.)
Is this purely a design choice?
Yes. In theory, nothing prevents the JUnit runner (the component that runs the tests of a test class) from accepting a test method with a return value as valid.
Is it a best practice?
Indeed. The more you restrict the ways of specifying a unit test method, the more you ensure that no misuse happens.
For example, look at this simple example, which supposes that a test method may return a value.
An addFoo() test method that asserts that the method does what is expected, and that also returns the Foo fixture it created:
@Test
public Foo addFoo() {
    Foo foo = new Foo(...);
    // call the api ...
    sut.addFoo(foo);
    // assertion
    assertEquals(...);
    // return the foo fixture
    return foo;
}
And here an addFooBar() test method that retrieves the Foo fixture created by the previous method and then asserts that the addFooBar() method does what is expected:
@Test
public void addFooBar() {
    Foo foo = addFoo();
    Bar bar = new Bar(..., foo);
    // call the api ...
    sut.addFooBar(bar);
    // assertion
    assertEquals(...);
}
That raises multiple issues:
Should the addFoo() method be executed multiple times? It may hurt test execution speed and make test reports unclear.
The addFoo() test result may make the addFooBar() test fail in spite of a correct implementation. How do we diagnose the origin of the issue? We have added useless complexity.
The addFoo() test result may also make the addFooBar() test succeed in spite of an incorrect implementation. Indeed, an incorrect implementation in addFoo() may compensate for an incorrect implementation in addFooBar().
Test methods should be executed independently
If the tests can not run independently then they are not unit tests.
A unit test should not rely on any external state
If a test method returned a value, it would imply that test results should be used by, or be relevant to, other tests.
Same best practice in TestNG
TestNG follows the same best practice for unit testing, which means a unit test method should not have a return value.
You can write private helper methods and reuse them, like this:
@Test
public void test1() {
    privateTest();
}

@Test
public void test2() {
    privateTest();
    // more logic
}

private int privateTest() {
    return 0;
}
Currently all my tests extend a class that has a @BeforeAll method, but I want this @BeforeAll code to run just once in total for the entirety of the tests, instead of once per class:
@BeforeAll
static void createWriter() throws IOException {
    // create file to be written to
}

@AfterAll
static void finishFile() throws IOException {
    // end file and create high-level metrics
}

@Override
public void testSuccess(ExtensionContext extensionContext, Throwable throwable) {
    // overridden callback for when normal test execution finishes
}
You have to run the test cases as a test suite. You can use the TestNG testing framework to achieve this functionality.
Use the annotations @BeforeSuite and @AfterSuite, which will execute once for multiple test classes.
Please check this link for detailed information.
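A minimal sketch of that approach, reusing the method names from the question (the class name is arbitrary):

import java.io.IOException;
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;

public class SuiteLifecycle {

    @BeforeSuite
    public void createWriter() throws IOException {
        // create the file to be written to -- runs once before the whole suite
    }

    @AfterSuite
    public void finishFile() throws IOException {
        // end the file and create high-level metrics -- runs once after the suite
    }
}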
I am not aware of any built-in way of doing that, so the simple technical answer would be to use a static boolean flag that records whether the method has already run. Upon the first execution, the flag gets toggled, preventing further executions.
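A sketch of that flag in the shared base class (because the field is static on the base class, every subclass sees the same flag):

public abstract class BaseTest {

    private static boolean initialized = false;

    @BeforeAll
    static void createWriter() throws IOException {
        if (initialized) {
            return;       // an earlier test class already ran the setup
        }
        initialized = true;
        // create file to be written to
    }
}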
But the real answer is: you are probably going down the wrong rabbit hole. Having test classes extend a base class is something of an anti-pattern, and having a @BeforeAll method that is either too expensive to be called repeatedly or, worse, not idempotent makes things worse. Unit tests should be simple and self-explanatory (thus self-contained). Inheritance and "run only once" directly violate that requirement.
I am wondering whether there could be a race condition if I run my tests in parallel and the two tests (below) share an instance variable. My test class runs with SpringJUnit4ClassRunner, and I have two test methods, a() and b(); the variable state is modified or reassigned by each test, and doSomethingWithState() uses state and passes it to the method under test. I know that with the maven-surefire-plugin you can parallelize at the method level, so that a() and b() each get assigned to a thread and run in parallel.
@RunWith(SpringJUnit4ClassRunner.class)
public class TestA {

    private Object state;

    @Test
    public void a() {
        stateObjectA();
        doSomethingWithState();
        assertion();
    }

    @Test
    public void b() {
        stateObjectB();
        doSomethingWithState();
        assertion();
    }

    private void stateObjectA() {
        // do some mocking and set up state
    }

    private void stateObjectB() {
        // do some mocking and set up state
    }

    private void doSomethingWithState() {
        // use the state object and feed it into the method under test
    }
}
I guess the only reasonable answer is: it depends ... on your exact context and code base.
The essence of race conditions is that you have more than one thread writing to shared data. Your lengthy question boils down to exactly such a setup, so there is high potential for race conditions in what you described above.
And it doesn't make any difference whether you are talking about methods in production code or methods called by some testing framework, because race conditions do not care about that. They only "care" about more than one thread writing to shared data.
That is all that matters here!
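One way to remove the hazard entirely is to stop sharing the field: build the state locally and pass it as a parameter. A sketch, assuming doSomethingWithState() can be changed to take an argument:

@Test
public void a() {
    Object state = stateObjectA();  // local variable: no other thread can touch it
    doSomethingWithState(state);
    assertion();
}

private Object stateObjectA() {
    // do some mocking, then build and return the state instead of storing it
    return new Object();
}

private void doSomethingWithState(Object state) {
    // use the state object and feed it into the method under test
}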
You must take into account two things:
If you need to use instance variables, instantiate them before each test (JUnit 4 provides the @Before and @After annotations, and @BeforeClass and @AfterClass for static variables).
JUnit doesn't guarantee that it will run the test cases in the same order every time, so each test must be coded in isolation from the rest.
Another obvious point is that you must not design tests that depend on the results of other tests. Take this into account when mocking things for integration tests, or tests may begin to fail randomly and you won't know why.
I have an orchestrator class with one method that implements a feature by calling multiple methods:
public class Orchestrator {

    public void doImportantStuff() {
        firstDoThis();
        thenDoThis();
        finallyDoThis();
    }

    private void firstDoThis() {
        ...
    }

    private void thenDoThis() {
        ...
    }

    private void finallyDoThis() {
        ...
    }
}
My question is: since I have lots of cases to test for all the methods called from doImportantStuff(), I'm planning to write separate test classes for each method. Is that something I should avoid, or does it sound reasonable?
This actually depends on which kind of tests you are about to write. Unit testing assumes you test each method in isolation, whereas an integration test exercises everything together.
Assuming the methods contain some business logic, I'd recommend writing a set of unit tests for each of the methods (maybe moving them into separate classes to comply with the Single Responsibility Principle).
After you are sure that all the methods work fine, it is time to write an integration test and check how the logic integrates (e.g. the particular call order). By the way, some mocking tools could be useful here, as you are not going to test each method again.
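For the integration-style test, if the three steps were extracted into a collaborator that the orchestrator calls (a hypothetical Steps interface here, since the original methods are private), Mockito's InOrder can check the particular call order mentioned above:

@Test
public void doImportantStuffRunsTheStepsInOrder() {
    Steps steps = mock(Steps.class);                     // hypothetical collaborator
    Orchestrator orchestrator = new Orchestrator(steps); // assumed constructor

    orchestrator.doImportantStuff();

    InOrder inOrder = inOrder(steps);
    inOrder.verify(steps).firstDoThis();
    inOrder.verify(steps).thenDoThis();
    inOrder.verify(steps).finallyDoThis();
}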