Currently all my tests extend a base class that has a @BeforeAll method, but I want this @BeforeAll code to run just once in total for the entirety of the tests, instead of once per class.
@BeforeAll
static void createWriter() throws IOException {
    // create file to be written to
}

@AfterAll
static void finishFile() throws IOException {
    // end file and create high-level metrics
}

@Override
public void testSuccess(ExtensionContext extensionContext, Throwable throwable) {
    // overrides what happens when normal test execution finishes
}
You have to run the test cases as a test suite. You can use the testing framework TestNG to achieve this functionality.
Use the annotations @BeforeSuite and @AfterSuite, which execute once for multiple test classes.
Please check this link for detailed information.
I am not aware of any built-in way of doing that. So the simple technical answer would be to use a static boolean flag that tells the method whether it has already run. Upon the first execution, that flag gets toggled, preventing further executions.
But the real answer is: you are probably going down the wrong rabbit hole. Having test classes extend a base class is more of an anti-pattern, and having a BeforeAll method that is either too expensive to be called repeatedly or, worse, not idempotent makes things worse. Unit tests should be simple and self-explanatory (and thus self-contained). Inheritance and "run only once" directly violate that requirement.
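A minimal sketch of the static-flag workaround described above (all names are illustrative; in a real suite initOnce() would be the inherited @BeforeAll method of the shared base class, and main() here simulates JUnit calling it once per test class):

```java
// Sketch of the static-flag workaround (names are illustrative). In a real
// suite, initOnce() would be the inherited @BeforeAll method of the shared
// base class; main() simulates JUnit calling it once per test class.
public class RunOnceDemo {

    private static boolean initialized = false;
    static int initCount = 0;

    static synchronized void initOnce() {
        if (initialized) {
            return; // a previous test class already ran the setup
        }
        initialized = true;
        initCount++; // stands in for the expensive one-time setup
    }

    public static void main(String[] args) {
        initOnce(); // first test class
        initOnce(); // second test class
        System.out.println(initCount); // prints 1: the body ran only once
    }
}
```

Note this only works as long as all test classes run in the same JVM; with forked test execution the flag resets per fork.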
Why can't JUnit test methods return a value?
Documentation says (emphasis mine):
Test methods and lifecycle methods may be declared locally within the current test class, inherited from superclasses, or inherited from interfaces (see Test Interfaces and Default Methods). In addition, test methods and lifecycle methods must not be abstract and must not return a value.
Why is it enforced like this?
Is this purely a design choice? Is it a best practice?
One thing I haven't yet seen mentioned is that one of the reasons behind this design decision is historical.
Before test cases were marked by annotations, they were analysed by JUnit via reflection: all public void methods whose name started with test, and that didn't take any arguments, were considered test cases and were run by the framework.
Had these methods been allowed to return a value, you could not have created a public helper method starting with test that calculated a value commonly used in some tests; the framework runner would have tried to run it.
You should have made them private anyway, I guess.
Additionally, this was also the heyday of "a method name that starts with a verb should do something" (rather than calculate something).
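The JUnit-3-era discovery convention described above can be sketched with plain reflection (class and method names here are made up for illustration):

```java
import java.lang.reflect.Method;

// Sketch of the JUnit-3-era convention described above: public void methods
// whose name starts with "test" and that take no arguments are discovered
// via reflection and run. Class and method names are made up for illustration.
public class ReflectionRunnerDemo {

    public static class LegacyTests {
        public void testAddition() {
            System.out.println("ran testAddition");
        }

        public int testHelper() { // returns a value: the runner skips it
            return 42;
        }

        public void helper() { // wrong prefix: skipped
        }
    }

    public static void main(String[] args) throws Exception {
        LegacyTests suite = new LegacyTests();
        for (Method m : LegacyTests.class.getMethods()) {
            if (m.getName().startsWith("test")
                    && m.getParameterCount() == 0
                    && m.getReturnType() == void.class) {
                m.invoke(suite); // run the discovered test case
            }
        }
    }
}
```

The return-type check is exactly why a value-returning testHelper() would have confused a runner that filtered only on name and parameters.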
What is the reasoning behind this?
Think of each test method as being the logical analog of a public static void main(String[]) method in a classic Java application. There needs to be a well known method signature so that the JUnit framework is able to identify the test methods, and call them in a standard way.
As to the specific design points you are querying:
The reason that the methods cannot be abstract is that they need to be callable. You can't call an abstract method.
The reason that methods cannot return values is that the test framework would not be able to do anything useful with a returned value. (It would be possible to simply discard any result, but what is the point of a test result that can't be used?)
I was thinking maybe we could reuse one test method within another.
If you wanted to reuse code in unit tests, you would just use ordinary methods and other OO features of Java; e.g. base classes, helper classes, etcetera. There are few situations where you would want a method to be both a free-standing test and a "component" (for want of a better word) of another test method. And if you did, you could write something like this:
public void test_xxx() {
    real_test_xxx();
}

public int real_test_xxx() {
    ...
}

public void test_yyy() {
    ...
    int res = real_test_xxx();
    ...
}
Is this purely a design choice? Is it a best practice?
There are no best practices!
These are design choices that are guided by the general requirements and practicalities of unit testing1. Particularly, the requirement that unit test authors should NOT need to build configurations to tell the test framework what to do.
Note that TestNG takes a similar approach. If a TestNG @Test method is declared as returning a value, it is not treated as a test method; see Can a test method return a value in TestNG.
Either way, unless you are proposing to implement yet another Java test framework, this is all moot.
1 - If there were strong practical reasons for needing abstract test methods or test methods that returned values, someone would have done something about it. The masses (of unit test writers) would rise up and overthrow the tyrannical regime (of test framework developers). (Reprise of "Won't Get Fooled Again" by The Who.)
Is this purely a design choice?
Yes. In theory, nothing prevents the JUnit runner (the component that runs the tests of a test class) from accepting as valid a test method with a return value.
Is it a best practice?
Indeed. The more you restrict the ways a unit test method can be specified, the more you prevent misuse.
For example, look at this simple example that supposes a test method may return a value.
An addFoo() test method that asserts that the method under test behaves as expected, and that also returns the Foo fixture it created:
@Test
public Foo addFoo() {
    Foo foo = new Foo(...);
    // call the api
    sut.addFoo(foo);
    // assertion
    assertEquals(...);
    // return the foo fixture
    return foo;
}
And here is an addFooBar() test method that retrieves the Foo fixture created by the previous method and then asserts that addFooBar() behaves as expected:
@Test
public void addFooBar() {
    Foo foo = addFoo();
    Bar bar = new Bar(..., foo);
    // call the api
    sut.addFooBar(bar);
    // assertion
    assertEquals(...);
}
That raises multiple issues:
Should the addFoo() method be executed multiple times? It may hurt test execution speed and make test reports unclear.
The addFoo() test result may make the addFooBar() test fail in spite of a correct implementation.
How do you diagnose the origin of the issue? We have added useless complexity.
The addFoo() test result may make the addFooBar() test succeed in spite of an incorrect implementation. Indeed, an incorrect implementation in addFoo() may compensate for an incorrect implementation in addFooBar().
Test methods should be executed independently.
If the tests cannot run independently, then they are not unit tests.
A unit test should not rely on any external state.
If a test method returns a value, it implies that its result is meant to be used by, or be relevant to, other tests.
The same best practice applies in TestNG.
TestNG follows best practices for unit testing, which means a unit test method should not have a return value.
You can write some private methods and reuse them, like:
@Test
public void test1() {
    privateTest();
}

@Test
public void test2() {
    privateTest();
    // more logic
}

private int privateTest() {
    return 0;
}
Let's say we have a project full of unit tests (thousands), and they should all look like this:
@Test
public void testExceptionInBla() {
    // some test
}
But in one case someone forgot to put an @Test annotation on top of the test.
What would be an easy way to spot those tests, without looking through all the code manually?
I want to find code like this, a test without @Test:
public void testExceptionInBla() {
// some test
}
If I were you, I would look at Sonar rules. Here is one that may match the requirement:
https://rules.sonarsource.com/java/RSPEC-2187
But in one case someone forgot to put an @Test annotation on top of the test.
And
I want to find code like this, a test without @Test:
public void testExceptionInBla() { // some test }
Annotating the method with @Test or using a test prefix in the method name has about the same consequences if the developer forgets to do it.
If @Test is the preferred way today, that is not by chance.
The @Test annotation brings two real advantages over the test prefix:
1) It is checked at compile time. For example, @Tast will cause a compilation error, while tastWhen...() will not.
2) @Test makes the test method name more readable: it lets you focus on the scenario, in functional language.
should_throw_exception_if_blabla() sounds more meaningful than test_should_throw_exception_if_blabla().
Regarding your actual issue, how to ensure that tests are effectively executed, I would approach things another way. Generally you want to ensure that unit test execution covers a minimum level of the application source code (and you can go down to package or class level if that makes sense).
That job is the goal of coverage tools (JaCoCo, for example).
You can even add rules to make the build fail if classes belonging to some package are not covered at least at a specified minimum level (look at that post).
A small addition: if you really want to ensure that test methods are correctly annotated, there is a way:
1) Choose a convention for test methods: for example, all non-private instance methods in a test class are test methods.
2) Create a Sonar rule that retrieves all non-private instance methods of test classes and ensures that all of them are annotated with @Test.
3) Add that rule to your Sonar rules.
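The core of such a convention check can be sketched with plain reflection, using a stand-in @Test annotation (a real rule would look for org.junit.Test; all class and method names here are made up):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

// Sketch of the convention check described above, using a stand-in @Test
// annotation (a real rule would look for org.junit.Test): every public,
// non-static instance method of a test class must carry the annotation.
public class AnnotationGuardDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @interface Test {}

    public static class SampleTests {
        @Test
        public void testOk() {}

        public void testForgotten() {} // missing annotation: should be flagged

        private void helper() {} // private: not a test method by convention
    }

    static List<String> findUnannotated(Class<?> testClass) {
        List<String> offenders = new ArrayList<>();
        for (Method m : testClass.getDeclaredMethods()) {
            int mod = m.getModifiers();
            if (Modifier.isPublic(mod) && !Modifier.isStatic(mod)
                    && !m.isAnnotationPresent(Test.class)) {
                offenders.add(m.getName());
            }
        }
        return offenders;
    }

    public static void main(String[] args) {
        System.out.println(findUnannotated(SampleTests.class)); // prints [testForgotten]
    }
}
```

Such a check could itself run as a test over all classes in the test source tree, as an alternative to a custom Sonar rule.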
I am wondering whether there would be a race condition if I run my tests in parallel and the two tests (below) share an instance variable. My test class runs with SpringJUnit4ClassRunner, and I have two test methods, a() and b(). The variable state is modified or reassigned by each test, and doSomethingWithState() uses state and passes it to the method under test. I know that with maven-surefire-plugin you can parallelize at the method level, so that a() and b() each get assigned to a thread and run in parallel.
@RunWith(SpringJUnit4ClassRunner.class)
public class TestA {
    private Object state;

    @Test
    public void a() {
        stateObjectA();
        doSomethingWithState();
        assertion();
    }

    @Test
    public void b() {
        stateObjectB();
        doSomethingWithState();
        assertion();
    }

    private void stateObjectA() {
        // do some mocking and set up state
    }

    private void stateObjectB() {
        // do some mocking and set up state
    }

    private void doSomethingWithState() {
        // use the state object and feed it into the method under test
    }
}
I guess the only reasonable answer is: it depends... on your exact context and code base.
The essence of race conditions is: more than one thread manipulating ("writing to") shared data. Your lengthy question boils down to exactly such a setup, so there is high potential for race conditions in the setup you described above.
And then: it doesn't make any difference whether you are talking about methods in production code or methods called by some testing framework. Race conditions do not care about that; they only "care" about more than one thread writing to shared data.
That is all that matters here!
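One way to remove the shared writes in the asker's example is to stop storing state in an instance field: each test method builds its own state object and passes it as a parameter. A minimal illustrative sketch (all names hypothetical):

```java
// Sketch of the fix implied above (hypothetical names): instead of writing the
// state into a shared instance field, each test method builds its own state and
// passes it as a parameter, so parallel execution has no shared writes.
public class NoSharedStateDemo {

    private static String doSomethingWithState(Object state) {
        // feed the state into the method under test; no field is touched
        return "processed:" + state;
    }

    public static void main(String[] args) {
        // each "test" owns its state; running a() and b() in parallel is safe
        System.out.println(doSomethingWithState("A"));
        System.out.println(doSomethingWithState("B"));
    }
}
```

With no shared mutable field, method-level parallelism in surefire cannot produce a race on state.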
You must take two things into account:
If you need to use instance variables, instantiate them beforehand (JUnit 4 provides the @Before and @After annotations, and @BeforeClass and @AfterClass for static variables).
JUnit doesn't guarantee that it will run the test cases in the same order every time, so each test must be coded in isolation from the rest.
Another obvious point is that you must not design tests that depend on the results of others. Take this into account when mocking things for integration tests; otherwise tests may begin to fail randomly and you won't know why.
I have a class that I want to test using mockito. The best way to describe the class is to paste the code, but I will try and do my best in a short phrase.
The class has one void function and calls another object that is passed in via setter and getter methods. The object that is being called (from the void function) is an asynchronous call.
The problem I am facing is mocking the asynchronous call that the void function (tested via JUnit) uses.
public class Tester {
    private Auth auth; // not mock'ed or spy'ed
    @Mock private Http transport;

    @Before
    ....

    @Test
    public void testVoidFunctionFromAuth() {
        doAnswer(new Answer<Object>() {
            @Override
            public Object answer(InvocationOnMock invocation) throws Throwable {
                return doOutput();
            }
        }).when(transport).executeAsync(param1, param2, param3...);

        auth.obtainAuth(); // void function that uses the transport mock;
                           // obtainAuth() calls transport.executeAsync()
    }

    // return type of transport.executeAsync() is
    // ListenableFuture<ResponseEntity<String>>
    private ListenableFuture<ResponseEntity<String>> doOutput() {
        return new SimpleAsyncTaskExecutor()
                .submitListenable(new Callable<ResponseEntity<String>>() {
                    @Override
                    public ResponseEntity<String> call() throws Exception {
                        ....
                        return responseEntity;
                    }
                });
    }
}
What happens is that the doOutput() function gets called before auth.obtainAuth(), and when obtainAuth() tries to call doOutput() it returns null, most likely because doOutput() was already executed on the line before. I am not sure how to bind/inject the mocked class (transport) on the executeAsync call.
I'm not sure if I understood the question, but as chrylis pointed out, the mock object returns a value instantly.
Unit tests should have their own context and not depend on external resources, so there is no point in testing the async call itself. It should return different values so you are able to test the behaviour of the classes that use it.
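One way to make a stubbed async call return a value instantly and deterministically, as suggested above, is to hand back an already-completed future instead of submitting work to a real executor. A sketch using plain java.util.concurrent ("response-body" is an assumed payload):

```java
import java.util.concurrent.CompletableFuture;

// Sketch: rather than spinning up a real async executor inside the stubbed
// answer, return an already-completed future. The code under test then gets a
// deterministic value synchronously. "response-body" is an assumed payload.
public class CompletedFutureDemo {

    public static void main(String[] args) throws Exception {
        CompletableFuture<String> stubbed =
                CompletableFuture.completedFuture("response-body");
        // the consumer of the "async" call sees the result immediately
        System.out.println(stubbed.get()); // prints response-body
    }
}
```

The same idea applies to Spring's ListenableFuture: a stub that completes immediately removes any timing dependence from the test.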
To have a better understanding of mock definition take a look at this post: What is Mocking?
Quoting from Pro Spring MVC with Web Flow, a unit test should:
• Run fast: A unit test must run extremely fast. If it needs to wait for database connections or external server processes, or to parse large files, its usefulness will quickly become limited. A test should provide an immediate response and instant gratification.

• Have zero external configuration: A unit test must not require any external configuration files, not even simple text files. The test's configurations must be provided and set by the test framework itself by calling code. The intent is to minimize both the runtime of the test and to eliminate external dependencies (which can change over time, becoming out of sync with the test). Test case conditions should be expressed in the test framework, creating more readable test conditions.

• Run independent of other tests: A unit test must be able to run in complete isolation. In other words, the unit test can't depend on some other test running before or after itself. Each test is a stand-alone unit. In fact, every test method inside a test should be stand-alone and not depend on another method or on the test methods being run in a certain order.

• Depend on zero external resources: A unit test must not depend on any outside resources, such as database connections or web services. Not only will these resources slow the test down, but they are outside the control of the test and thus aren't guaranteed to be in a correct state for testing.

• Leave external state untouched: A unit test must not leave any evidence that it ever ran. Unit tests are written to be repeatable, so they must clean up after themselves. Obviously, this is much easier when the test doesn't rely on external resources (which are often harder to clean up or restore).

• Test smallest unit of code possible: A unit test must test the smallest unit of code possible in order to isolate the code under test. In object-oriented programming, this unit is usually a method of an object or class. Writing unit tests such that a method is tested independently of other methods reduces the number of code lines that could contain a potential bug.
I'm new to unit testing. Regarding the purpose of the @Before annotation in JUnit 4: I just don't see the point of using it:
public class FoodTestCase {
    static private Food sandwich;

    @BeforeClass
    public static void initialise() {
        sandwich = new Sandwich();
    }
}

vs

public class FoodTestCase {
    static private Food sandwich = new Sandwich();
}
What's the difference?
In this case it may not be necessary, as the initialization is really simple.
In case you have some logging or complex initialization, or need to free some resources, you have to use @BeforeClass and @AfterClass.
I think the idea is like this:
You use @AfterClass to free resources, so it is logical to have @BeforeClass to acquire them; it would not be a good idea to make the developer guess that he needs to use a static block.
Almost no difference. But if the constructor of Sandwich throws a checked exception, you cannot initialize the field directly with static private Food sandwich = new Sandwich(); you have to wrap the initialization in a try/catch block. The method initialise(), however, may be declared as throws MyException, so the test case will fail if an exception is indeed thrown during initialization.
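A minimal sketch of that difference (Sandwich here is a stand-in class whose constructor declares a checked exception):

```java
import java.io.IOException;

// Sketch of the point above: Sandwich is a stand-in class whose constructor
// declares a checked exception, so a direct field initializer will not compile
// and the initialization must be wrapped.
public class FieldInitDemo {

    static class Sandwich {
        Sandwich() throws IOException {
            // may fail, e.g. while reading a resource
        }
    }

    // private static Sandwich sandwich = new Sandwich(); // does not compile:
    //                                                    // unhandled IOException
    private static Sandwich sandwich;

    static {
        try {
            sandwich = new Sandwich();
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(sandwich != null); // prints true
    }
}
```

A @BeforeClass method declared as throws IOException avoids the try/catch entirely and gives a clean test failure instead of an ExceptionInInitializerError.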
Suppose you had all of your Food-related data (say, a Menu) set up in a database table at the backend. Your Food test cases could then pertain to updating the Menu (all the CRUD operations, basically).
Instead of opening a DB connection for every test case (using @Before), it would be wise to do it just once, before you run all your test cases, via a method marked @BeforeClass.
Now the use of a method makes sense, as the setup would most probably be slightly complex (you may decide to use a Spring container to get your Connection from a DataSource), and you would not be able to achieve it in the single line where you declare your Connection object.
Similarly, you would use @AfterClass to tear down your global setup (for all the test cases), i.e. closing your database connection.
In your particular example, not much. However, there is also the @Before annotation, which runs prior to every test in your class. Take a look at http://selftechy.com/2011/05/17/junit4-before-vs-beforeclass-after-vs-afterclass, where it is explained well.
@BeforeClass is for static initialization.
Instances created here will be reused across all of your @Test methods, whereas @Before runs once per @Test.
Usually @BeforeClass is reserved for objects that are relatively expensive to instantiate, e.g. database connections.
Inheritance adds another wrinkle:
Let's say you have two JUnit test classes that extend a common base class, and the base class has both a static initializer block and a @BeforeClass method.
In this case, the static initializer block will run once, while the @BeforeClass method will run twice (once for each test class).
So, if you have a very expensive computation or resource that you need to set up once for a whole suite of test cases that share a common base class, you could use the static initializer block for that.
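The once-versus-twice behaviour can be simulated without JUnit: a static initializer runs when the base class is first initialized, while an inherited per-class hook is invoked once for each concrete test class (all names here are illustrative):

```java
// Sketch simulating the difference without JUnit: the base class's static
// initializer runs once per JVM, while an inherited @BeforeClass-style hook
// is invoked once per concrete test class. All names are illustrative.
public class OnceVsPerClassDemo {

    static int staticInitRuns = 0;
    static int beforeClassRuns = 0;

    abstract static class Base {
        static {
            staticInitRuns++; // runs only when Base is first initialized
        }

        static void beforeClass() { // stand-in for an inherited @BeforeClass
            beforeClassRuns++;
        }
    }

    static class TestOne extends Base {}
    static class TestTwo extends Base {}

    public static void main(String[] args) {
        // JUnit would call the inherited hook once for each test class:
        TestOne.beforeClass();
        TestTwo.beforeClass();
        System.out.println(staticInitRuns + " " + beforeClassRuns); // prints 1 2
    }
}
```

As with any static-initializer trick, this only holds within a single JVM; forked test execution reinitializes the base class per fork.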