How can I run particular sets of JUnit tests under specific circumstances?

I have a Java codebase that is developed exclusively in Eclipse. There
are a set of JUnit4 tests that can be divided into two mutually exclusive
subsets based on when they are expected to run:
"Standard" tests should run when a developer right-clicks the test
class (or containing project) and selects Run As > JUnit Test. Nothing
unusual here—this is exactly how JUnit works in Eclipse.
"Runtime" tests should only run when called programmatically from
within the application when it is started up in a specific state.
The two types of tests might sit adjacent to each other in the same Java
package. (Ideally we could intermix them in the same class, though
that's not a hard requirement.)
My first solution was to annotate the "Runtime" test classes with a new
@TestOnLaunch annotation. The application is able to find these classes,
and was running the tests contained in them (annotated with @Test) using
JUnitCore.run(Class<?>...). However, these tests leak into the
"Standard" scenario above, because the Eclipse test runner will run any
method annotated with @Test, regardless of the intent of my custom class
annotation.
Next I tried moving the @TestOnLaunch annotation to the method level.
This prevents the "leakage" of the "Runtime" tests into the "Standard"
scenario, but now I can't seem to get JUnitCore to run those test
methods. JUnitCore.run(Request) with a Request targeted at the correct
class and method, for example, fails with "No runnable methods",
presumably because it can't find the @Test annotation (because it's not there).
I'm very interested to know if there's a "JUnit way" of solving this
kind of problem. Presumably I could write my own Runner (to run methods
annotated with @TestOnLaunch). Is this the right approach? If so, how do
I then kick off the testing programmatically with a bunch of classes,
analogous to calling JUnitCore.run(Class<?>...)?

If you don't mix the two types of test methods in the same test class, the library below may help:
http://johanneslink.net/projects/cpsuite.jsp
You can use its filter feature to set up two test suites.
I set up three test suites in my project by defining several marker interfaces:
UnitTests, IntegrationTests, DeploySmokeTests, AcceptanceTests
And three test suites:
@RunWith(ClasspathSuite.class)
@SuiteTypes({UnitTests.class, IntegrationTests.class})
public class CommitTestSuite {}

@RunWith(ClasspathSuite.class)
@SuiteTypes({DeploySmokeTests.class})
public class DeploySmokeTestSuite {}

@RunWith(ClasspathSuite.class)
@SuiteTypes({AcceptanceTests.class})
public class AcceptanceTestSuite {}
Now you can achieve your goal by running the appropriate test suite. An alternative solution is to use JUnit categories:
@Category(IntegrationTests.class)
public class SomeTest {

    @Test
    public void test1() {
        ...
    }

    @Test
    public void test2() {
        ...
    }
}

@RunWith(Categories.class)
@IncludeCategory({UnitTests.class, IntegrationTests.class})
@SuiteClasses({ /* all test classes */ })
public class CommitTestSuite {}
As I said, if you mix different types of test methods in one test class, the first approach can't help you, but with the second solution you can annotate the category interface on the test method (I annotated it on the test class in the example above). If you choose the second solution, however, you have to maintain your test suite every time you add a new test class.

First, you should reevaluate why you're using JUnit tests at runtime; that seems like an odd choice for a problem that probably has a better solution.
However, you should look at using a Filter to determine which tests to run, possibly in conjunction with a custom annotation.
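For illustration, here is a minimal sketch of that Filter approach, assuming the "runtime" methods carry both @Test and a custom @TestOnLaunch annotation (so JUnit can still discover them; the filter then narrows a programmatic run to just that subset — note such methods would still execute in a plain Eclipse run):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.junit.runner.Description;
import org.junit.runner.JUnitCore;
import org.junit.runner.Request;
import org.junit.runner.manipulation.Filter;

// hypothetical marker for tests that should only run at application launch
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface TestOnLaunch {}

public class TestOnLaunchFilter extends Filter {

    @Override
    public boolean shouldRun(Description description) {
        if (description.isTest()) {
            return description.getAnnotation(TestOnLaunch.class) != null;
        }
        // keep a suite/class description if any of its children should run
        for (Description child : description.getChildren()) {
            if (shouldRun(child)) {
                return true;
            }
        }
        return false;
    }

    @Override
    public String describe() {
        return "methods annotated with @TestOnLaunch";
    }

    // analogous to JUnitCore.run(Class<?>...), but restricted to launch tests
    public static void runLaunchTests(Class<?>... testClasses) {
        JUnitCore core = new JUnitCore();
        for (Class<?> testClass : testClasses) {
            core.run(Request.aClass(testClass).filterWith(new TestOnLaunchFilter()));
        }
    }
}

One caveat: in JUnit 4, filtering away every test in a class is reported as a failure ("No tests found matching ..."), so classes without any @TestOnLaunch methods are best screened out before calling runLaunchTests.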

Related

Java and JUnit, find missing @Test decorators

Let's say we have a project full of unit tests (thousands), and they all should look like this:

@Test
public void testExceptionInBla() {
    // some test
}

But in one case someone forgot to put an @Test decorator on top of the test.
What would be an easy way to spot those tests, without looking through all the code manually?
I want to find code like this, a test without @Test:

public void testExceptionInBla() {
    // some test
}
If I were you, I would look at a Sonar rule. Here is one that may match the requirement:
https://rules.sonarsource.com/java/RSPEC-2187
But in one case someone forgot to put an @Test decorator on top of the
test.
And
I want to find code like this, it's a test without @Test:
public void testExceptionInBla() { /* some test */ }
Annotating the method with @Test or relying on a test prefix in the method name has about the same consequences if the developer forgets to do it.
If @Test is the way today, that is not by chance.
The @Test annotation brings two real advantages over the test prefix:
1) It is checked at compile time. For example, a mistyped @Tast will provoke a compilation error, while a mistyped tastWhen...() will not.
2) @Test makes the test method name more readable: it lets you focus on the scenario with functional language.
should_throw_exception_if_blabla() sounds more meaningful than test_should_throw_exception_if_blabla().
About your issue, how to ensure that tests are effectively executed: I would approach things another way. Generally you want to ensure that unit test execution covers a minimum level of the application source code (and you can go down to package or class level if that makes sense).
That is the job of coverage tools (JaCoCo, for example).
You can even add rules to make the build fail if classes belonging to some package are not covered at a specified minimum level (look at that post).
A small addition:
If you really want to ensure that test methods are correctly annotated, you have a way:
1) Choose a convention for test methods: for example, all non-private instance methods in a test class are test methods.
2) Create a Sonar rule that retrieves all non-private instance methods of test classes and ensures that each of them is annotated with @Test.
3) Add that rule to your Sonar rules.
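As a lighter alternative to a full Sonar rule, the same convention can be enforced with a plain reflection-based test. This is only a sketch; the TEST_CLASSES list is a placeholder you would fill in (or replace with classpath scanning):

import static org.junit.Assert.fail;

import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

import org.junit.Test;

public class MissingTestAnnotationCheck {

    // placeholder: list the test classes to check, e.g. FooTest.class
    private static final Class<?>[] TEST_CLASSES = {};

    @Test
    public void allPublicInstanceMethodsCarryTestAnnotation() {
        for (Class<?> testClass : TEST_CLASSES) {
            for (Method method : testClass.getDeclaredMethods()) {
                boolean looksLikeTest = Modifier.isPublic(method.getModifiers())
                        && !Modifier.isStatic(method.getModifiers());
                if (looksLikeTest && method.getAnnotation(Test.class) == null) {
                    fail(testClass.getName() + "." + method.getName()
                            + " follows the test convention but lacks @Test");
                }
            }
        }
    }
}

A real version would also have to whitelist lifecycle methods (@Before, @After, helpers), which is exactly the kind of exception a Sonar rule lets you encode cleanly.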

Disable particular tests through Annotation Transformer : TestNG

I have a huge project with numerous test cases. Some test cases are supposed to run only in particular environments, and some are not. So I'm trying to skip or disable tests that don't belong to the current environment.
I'm using an Annotation Transformer to override the behaviour of @Test.
Here is my Transformer code (in package com.raghu.listener):

package com.raghu.listener;

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;

public class SkipTestsTransformer implements IAnnotationTransformer {

    @Override
    public void transform(ITestAnnotation annotation, Class testClass,
            Constructor testConstructor, Method testMethod) {
        // I intend to do this later
        // if (someCondition) {
        //     // Do something.
        // }
        System.out.println("Inside Transform");
    }
}
As of now I'm just trying to print.
I have many packages and classes on which I have to impose this Transformer.
How and where should I register this class?
Please suggest any better methods for doing the same.
Thanks in advance
IAnnotationTransformer is a listener. You do not need to instantiate it; TestNG will do that for you. You can specify a listener in any of the ways listed here, either through your XMLs or through service loaders, depending on your test environment.
If you do not have groups marked in your test cases, then I think the way to go is setting the enabled attribute to false. There is another way to skip a test, in IInvokedMethodListener, but I do not see any benefit of one over the other.
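For illustration, a sketch of how the transform method might flip the enabled attribute. The @RunOnlyOn annotation and the test.env system property are invented for this example; only setEnabled(false) is real TestNG API. The transformer still has to be registered, e.g. in the <listeners> section of testng.xml or via the -listener command-line switch, as described in the linked page:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;

// hypothetical marker naming the only environment a test may run in
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface RunOnlyOn {
    String value();
}

public class EnvironmentAwareTransformer implements IAnnotationTransformer {

    @Override
    public void transform(ITestAnnotation annotation, Class testClass,
            Constructor testConstructor, Method testMethod) {
        String currentEnv = System.getProperty("test.env", "dev");
        RunOnlyOn marker =
                testMethod == null ? null : testMethod.getAnnotation(RunOnlyOn.class);
        if (marker != null && !marker.value().equals(currentEnv)) {
            annotation.setEnabled(false); // TestNG will not run this method
        }
    }
}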

Is there a good way to engage the initialization of a test suite when running an individual test case?

Under JUnit 4, I have a test suite which uses a @ClassRule annotation to bootstrap a framework. This is needed to be able to construct certain objects during tests. It also loads some arbitrary application properties into a static field. These are usually specific to the current test suite and are used by numerous tests throughout the suite. My test suite looks something like this (where FrameworkResource extends ExternalResource and does a lot of bootstrap work):
@RunWith(Suite.class)
@SuiteClasses({com.example.test.MyTestCase.class})
public class MyTestSuite extends BaseTestSuite {

    @ClassRule
    public static FrameworkResource resource = new FrameworkResource();

    @BeforeClass
    public static void setup() {
        loadProperties("props/suite.properties");
    }
}
The above works really well, and the main build has no problem executing all test suites and their respective test cases (the SuiteClasses). The issue is when I'm in Eclipse and I want to run just one test case individually, without having to run the entire suite (as part of the local development process). I would right-click the Java file, Run As > JUnit Test, and any test needing the framework resource or test properties would fail.
My question is this:
Does JUnit 4 provide a solution for this problem (without duplicating the initialization code in every test case)? Can a test case say something like @dependsOn(MyTestSuite.class)?
If JUnit doesn't have a magic solution, is there a common design pattern that can help me here?
As you are running just one test class, a good solution would be to move the initialisation code into the test class. You would need a method with the @Before annotation to initialise the properties.
That would require duplicating the code in all your test classes. To resolve this, you could create an abstract parent class that has the @Before method, so all the child classes share the same initialization.
Also, the initialised data could be kept in static variables, for checking whether it has already been initialised for that particular execution.
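A minimal sketch of that pattern, assuming the bootstrap work done by FrameworkResource and loadProperties can be called from a plain static method (bootstrap() below is a hypothetical stand-in for it). The static flag guards the one-time setup, so the framework comes up whether a test runs via the suite or on its own:

import org.junit.Before;

public abstract class FrameworkDependentTest {

    private static boolean initialized = false;

    @Before
    public void initFrameworkOnce() {
        synchronized (FrameworkDependentTest.class) {
            if (!initialized) {
                bootstrap(); // framework startup + "props/suite.properties"
                initialized = true;
            }
        }
    }

    private static void bootstrap() {
        // the question's FrameworkResource / loadProperties work goes here
    }
}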

JUnit 4: Test case returns an object used in other tests in the test suite

I have created several test cases for my application, all chained in a single test suite class.
However, I would like to pass an object created in the first test to the others.
To be clearer: the first test tests a class that creates a complex object with data coming from my database. I would like the others to test the methods of the object itself.
Here is how I define my test suite class:
package tests.testSuites;

import tests.Test1;
import tests.Test2;

import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;

@RunWith(Suite.class)
@SuiteClasses({
    Test1.class,
    Test2.class
})
public class DataTestSuite {
}
But I would like to have something like this somewhere:

MyObject obj = Test1.testcreation();
Test2.testMethod(obj);

How can I combine a regular JUnit test suite class definition with the need to pass the created object to my other tests?
EDIT
For information, the test suite is launched in a test runner class. This class helps format the results and creates a custom log file.
This is how it calls the test suite:

public class MetExploreTestRunner {

    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(DataTestSuite.class);
        if (result.wasSuccessful()) {
            System.out.println("All tests successful");
        } else {
            for (Failure failure : result.getFailures()) {
                System.out.println(failure.toString());
            }
        }
    }
}
Do not do that. Each test should be independent of the others. You can then run tests singly, to debug them, and each test is easier to understand. Also, JUnit runs tests in an arbitrary order; you cannot guarantee that the object will be created before it is used.
If several tests use a similar object, consider extracting the code for creating it into a shared private method of the test class, as in the sketch below. If many tests use the object, consider making it a test fixture set up by a @Before method.
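A sketch of the shared-private-method suggestion; MyObject is the asker's type, and building it from canned test data (rather than the live database) is an assumption here:

import static org.junit.Assert.assertNotNull;

import org.junit.Test;

public class MyObjectTest {

    @Test
    public void creationSucceeds() {
        MyObject obj = createComplexObject();
        assertNotNull(obj);
    }

    @Test
    public void someMethodWorks() {
        MyObject obj = createComplexObject();
        // exercise obj here
    }

    // shared private factory: every test builds its own instance,
    // so test order no longer matters
    private MyObject createComplexObject() {
        return new MyObject(/* canned test data instead of the database */);
    }
}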
After a lot of searching on the internet, I found what I was looking for.
The JExample library allows you to create dependencies between different tests and to pass arguments between them, which is what I needed: http://scg.unibe.ch/research/jexample
I really recommend it.
Thank you all for your answers
You can use a @Rule instead of reusing data from one test in the others. You can see an example of use here: https://github.com/junit-team/junit/wiki/Rules
That rule can create that big object, and then you can use it in every test you want, as in the sketch below.
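A sketch of that idea with ExternalResource (the rule type from the linked wiki page); again, how MyObject gets built is an assumption:

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExternalResource;

public class MyObjectRuleTest {

    private MyObject obj;

    @Rule
    public ExternalResource myObjectResource = new ExternalResource() {
        @Override
        protected void before() {
            obj = new MyObject(/* canned test data */); // built fresh per test
        }

        @Override
        protected void after() {
            obj = null; // release anything the object holds
        }
    };

    @Test
    public void methodUnderTest() {
        // exercise obj here
    }
}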
I don't recommend creating any dependency between tests, because things could change in the future and you'd have to change many things in your other tests.
If you really want to do what you said, a dependsOn-style annotation (TestNG's dependsOnMethods, or JExample's mechanism mentioned in another answer) could be useful for you.
It seems that you are doing integration testing instead of unit tests.
What you can do is:
Create a mock class that connects to your database and creates this complex object.
Write a new class that will run all your tests and will create an instance of your mock class (this might be helpful: http://www.tutorialspoint.com/junit/junit_suite_test.htm).
Create methods with the @BeforeClass annotation that will get this object.
Run your tests.
From my point of view it is not very practical, and not unit testing at all. Unit tests should be independent and not rely on other tests.
Why not factor out the creation of MyObject to a utility class?
public class Test2 {
    ...

    @Test
    public void testMethod() {
        MyObject obj = TestUtil.createMyObject();
        // do test here
    }
}
You can then call TestUtil.createMyObject() from any test. Also, if creating MyObject is expensive, you can lazily create it and cache the result, or use any of the other patterns useful for factory methods; a sketch of the lazy-cache variant follows.
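This is only a sketch; it assumes tests treat the shared MyObject as read-only, otherwise the caching reintroduces coupling between tests:

public final class TestUtil {

    private static MyObject cached;

    private TestUtil() {}

    public static synchronized MyObject createMyObject() {
        if (cached == null) {
            cached = buildExpensively(); // pay the cost only once per JVM
        }
        return cached;
    }

    private static MyObject buildExpensively() {
        // expensive construction, e.g. loading reference data
        return new MyObject(/* ... */);
    }
}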

Using JUnit categories vs simply organizing tests in separate classes

I have two logical categories of tests: plain functional unit tests (pass/fail) and benchmark performance tests that are just for metrics/diagnostics.
Currently, I have all test methods in a single class, call it MyTests:
public class MyTests
{
    @Test
    public void testUnit1()
    {
        ...
        assertTrue(someBool);
    }

    @Test
    public void testUnit2()
    {
        ...
        assertFalse(someBool);
    }

    @Test
    @Category(PerformanceTest.class)
    public void bmrkPerfTest1()
    {
        ...
    }

    @Test
    @Category(PerformanceTest.class)
    public void bmrkPerfTest2()
    {
        ...
    }
}
Then I have a UnitTestSuite defined as
@RunWith(Categories.class)
@Categories.ExcludeCategory(PerformanceTest.class)
@SuiteClasses({ MyTests.class })
public class UnitTestSuite {}
and a PerformanceTestSuite
@RunWith(Categories.class)
@Categories.IncludeCategory(PerformanceTest.class)
@SuiteClasses({ MyTests.class })
public class PerformanceTestSuite {}
so that I can run the unit tests in Ant separately from performance tests (I don't think including Ant code is necessary).
This means I have a total of FOUR classes (MyTests, PerformanceTest, PerformanceTestSuite, and UnitTestSuite). I realize I could have just put all the unit tests in one class and benchmark tests in another class and be done with it, without the additional complexity with categories and extra annotations. I call tests by class name in Ant, i.e. don't run all tests in a package.
Does it make sense and what are the reasons to keep it organized by category with the annotation or would it be better if I just refactored it in two simple test classes?
To the question of whether to split the tests in two classes:
As they are clearly very different kinds of tests (unit tests and performance tests), I would put them in different classes in any case, for that reason alone.
Some further musings:
I don't think using @Category annotations is a bad idea, however. What I'd do, in a more typical project with tens or hundreds of classes containing tests, is annotate the test classes (instead of the methods) with @Category, then use the ClassPathSuite library to avoid the duplicated effort of categorising the tests. (And maybe run the tests by category using Ant.)
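For example, with the stock Categories runner the class-level version looks like this (two separate files, shown together; a ClassPathSuite-based variant would replace the explicit @SuiteClasses list with classpath scanning). PerformanceTest is the marker interface from the question; SomeBenchmarks and PerfOnlySuite are invented names:

// the whole class is marked, so every method in it counts as a performance test
@Category(PerformanceTest.class)
public class SomeBenchmarks
{
    @Test
    public void bmrkSomething()
    {
        // benchmark body here
    }
}

@RunWith(Categories.class)
@Categories.IncludeCategory(PerformanceTest.class)
@SuiteClasses({ SomeBenchmarks.class })
public class PerfOnlySuite {}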
If you will only ever have the two test classes, it of course doesn't matter much. You can keep the Categories and Suites, or throw them away (as you said the tests are run by class name in Ant) if having the extra classes bugs you. I'd keep them, and move towards the scenario described above, as usually (in a healthy project) more tests will accumulate over time. :-)
If you only have two test classes then it probably doesn't matter. The project I work on has 50-60 classes. Listing them all by name would be exhausting. You could use filename patterns but I feel annotations are cleaner.
