Java and JUnit: find missing @Test annotations

Let's say we have a project full of unit tests (thousands), and they should all look like this:
@Test
public void testExceptionInBla() {
    // some test
}
But in one case someone forgot to put a @Test annotation on top of the test.
What would be an easy way to spot those tests, without looking through all the code manually?
I want to find code like this, i.e. a test without @Test:
public void testExceptionInBla() {
    // some test
}

If I were you, I would look at the Sonar rules. I found one that may match your requirement:
https://rules.sonarsource.com/java/RSPEC-2187

But in one case someone forgot to put a @Test annotation on top of the test.
And:
I want to find code like this, it's a test without @Test:
public void testExceptionInBla() { /* some test */ }
Annotating the method with @Test or relying on a test prefix in the method name amounts to much the same thing in terms of consequences if the developer forgets to do it.
If @Test is the standard today, that is no accident.
The @Test annotation brings two real advantages over the test prefix:
1) It is checked at compile time. For example, @Tast will cause a compilation error, while tastWhen...() will not.
2) @Test makes the test method name more readable: it lets the name focus on the scenario in functional language.
should_throw_exception_if_blabla() sounds more meaningful than test_should_throw_exception_if_blabla().
About your actual issue, how to ensure that tests are effectively executed, I would approach it another way. Generally you want to ensure that unit test execution covers a minimum level of the application source code (and you can go down to package or class level if that makes sense).
That is the job of coverage tools (JaCoCo, for example).
You can even add rules to make the build fail if classes belonging to some package are not covered at a specified minimum level (look at that post).
A small addition:
If you really want to ensure that test methods are correctly annotated, there is a way (a sketch follows this list):
1) Choose a convention for test methods: for example, all non-private instance methods in a test class are test methods.
2) Create a Sonar rule that retrieves all non-private instance methods of test classes and ensures that each of them is annotated with @Test.
3) Add that rule to your Sonar rules.
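If a custom Sonar rule is too heavy, the same convention can also be enforced from inside the test suite with plain reflection. This is a minimal sketch, not an official Sonar mechanism: MyTests is a hypothetical class to audit, and the lifecycle exclusions assume JUnit 4's @Before/@After.
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class MissingTestAnnotationCheck {

    // Hypothetical list of classes to audit; in practice you would scan
    // the classpath for classes whose names end in "Test" or "Tests".
    private static final Class<?>[] TEST_CLASSES = { MyTests.class };

    @Test
    public void allTestLikeMethodsAreAnnotated() {
        for (Class<?> testClass : TEST_CLASSES) {
            for (Method m : testClass.getDeclaredMethods()) {
                // the convention: public, non-static, no arguments,
                // and not a lifecycle method
                boolean looksLikeTest = Modifier.isPublic(m.getModifiers())
                        && !Modifier.isStatic(m.getModifiers())
                        && m.getParameterCount() == 0
                        && !m.isAnnotationPresent(Before.class)
                        && !m.isAnnotationPresent(After.class);
                if (looksLikeTest && !m.isAnnotationPresent(Test.class)) {
                    throw new AssertionError(testClass.getName() + "."
                            + m.getName() + " looks like a test but has no @Test");
                }
            }
        }
    }
}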

JUnit test methods can't return a value

Why can't JUnit test methods return a value?
Documentation says (emphasis mine):
Test methods and lifecycle methods may be declared locally within the current test class, inherited from superclasses, or inherited from interfaces (see Test Interfaces and Default Methods). In addition, test methods and lifecycle methods must not be abstract and must not return a value.
Why is it enforced like this?
Is this purely a design choice? Is it a best practice?
One thing I haven't yet seen mentioned is that one of the reasons behind this design decision is historical.
Before test cases were marked by annotations, they were discovered by JUnit via reflection: all public void methods whose names started with test, and that didn't take any arguments, were considered test cases and run by the framework.
Had these methods been allowed to also return a value, you could not have created a public helper method starting with test that calculated a value commonly used in some tests, because the framework runner would have tried to run it.
You should have made such helpers private anyway, I guess.
Additionally, this was also the heyday of "a method name that starts with a verb should do something" (rather than calculate something).
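For illustration, a minimal sketch of that pre-annotation convention (JUnit 3 style; the class and method names are made up):
import junit.framework.TestCase;

// JUnit 3 style: no annotations. The runner discovers every public void
// no-argument method whose name starts with "test" via reflection.
public class LegacyCalculatorTest extends TestCase {

    public void testAddition() { // picked up and run automatically
        assertEquals(4, 2 + 2);
    }

    // Kept private so the reflective runner ignores it; a public,
    // value-returning "testData()" helper would have broken the convention.
    private int fixtureValue() {
        return 42;
    }
}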
What is the reasoning behind this?
Think of each test method as being the logical analog of a public static void main(String[]) method in a classic Java application. There needs to be a well known method signature so that the JUnit framework is able to identify the test methods, and call them in a standard way.
As to the specific design points you are querying:
The reason that the methods cannot be abstract is that they need to be callable. You can't call an abstract method.
The reason that methods cannot return values is that the test framework would not be able to do anything useful with a returned value. (It could, of course, simply discard any result, but what is the point of a test result that cannot be used?)
I was thinking maybe we could reuse one test method within another.
If you wanted to reuse code in unit tests, you would just use ordinary methods and other OO features of Java, e.g. base classes and helper classes. There are few situations where you would want a method to be both a free-standing test and a "component" (for want of a better word) of another test method. And if you did, you could write something like this:
public void test_xxx() {
    real_test_xxx();
}
public int real_test_xxx() {
    // ... shared logic that computes and returns a value ...
}
public void test_yyy() {
    // ...
    int res = real_test_xxx();
    // ...
}
Is this purely a design choice? Is it a best practice?
There are no best practices!
These are design choices that are guided by the general requirements and practicalities of unit testing [1]. Particularly, the requirement that unit test authors should NOT need to build configurations to tell the test framework what to do.
Note that TestNG takes a similar approach. If a TestNG @Test method is declared as returning a value, it is not treated as a test method; see Can a test method return a value in TestNG
Either way, unless you are proposing to implement yet another Java test framework, this is all moot.
[1] If there were strong practical reasons for needing abstract test methods or test methods that return values, someone would have done something about it. The masses (of unit test writers) would rise up and overthrow the tyrannical regime (of test framework developers). (Reprise of "Won't Get Fooled Again" by The Who.)
Is this purely a design choice?
Yes. In theory, nothing prevents the JUnit runner (the component that runs the tests of a test class) from accepting a test method with a return value as valid.
Is it a best practice?
Indeed. The more you restrict the ways of specifying a unit test method, the more you prevent misuse.
For example, look at this simple example that supposes test methods may return values.
An addFoo() test method that asserts that the method under test does what is expected, and that also returns the Foo fixture it created:
@Test
public Foo addFoo(){
    Foo foo = new Foo(...);
    // call the api ...
    sut.addFoo(foo);
    // assertion
    assertEquals(...);
    // return the foo fixture
    return foo;
}
And here an addFooBar() test method that retrieves the Foo fixture created by the previous method and then asserts that addFooBar() does what is expected:
@Test
public void addFooBar(){
    Foo foo = addFoo();
    Bar bar = new Bar(..., foo);
    // call the api ...
    sut.addFooBar(bar);
    // assertion
    assertEquals(...);
}
That raises multiple issues:
Should the method addFoo() be executed multiple times? That may hurt test execution speed and make test reports unclear.
The addFoo() test result may make the addFooBar() test fail in spite of a correct implementation. How would you diagnose the origin of the issue? We have added useless complexity.
The addFoo() test result may also make the addFooBar() test succeed in spite of an incorrect implementation. Indeed, an incorrect implementation in addFoo() may compensate for an incorrect implementation in addFooBar().
Test methods should be executed independently.
If the tests cannot run independently, then they are not unit tests.
A unit test should not rely on any external state.
If a test method returns a value, it implies that its result is meant to be used by, or be relevant to, other tests.
The same best practice holds in TestNG.
TestNG follows best practices for unit testing, which means a unit test method should not have a return value.
You can write private helper methods and reuse them, like this:
@Test
public void test1() {
    privateTest();
}
@Test
public void test2() {
    privateTest();
    // more logic
}
private int privateTest() {
    return 0;
}

Mock a new object instance in an abstract class

I have an AbstractDao class where I am instantiating the Force REST API. I am not able to mock the new ForceApi(config) call with PowerMock. Please advise.
public abstract class AbstractDao {
    @Inject
    private Configuration configuration;
    public ForceApi getForceAPI() {
        ApiConfig config = new ApiConfig();
        config.setClientId("test");
        config.setClientSecret("test");
        config.setUsername("test");
        config.setPassword("test");
        config.setLoginEndpoint("test");
        return new ForceApi(config);
    }
}
I am trying to do it this way, but it's not working.
My DAO class extends the abstract DAO class:
@RunWith(BlockJUnit4ClassRunner.class)
public class SalesForceDaoImplTest {
    @InjectMocks
    private SalesForceDaoImpl salesForceDao;
    @Mock
    private ForceApi forceApiMock;
    @Mock
    private ApiConfig apiConfigMock;
    @Mock
    private Configuration configMock;
    @Mock
    JsonObject jsonobject;
    @Before
    public void setup() {
        initMocks(this);
        when(configMock.getAppConfiguration()).thenReturn(jsonobject);
        when(jsonobject.getString(anyString())).thenReturn("test");
        when(salesForceDao.getForceAPI()).thenReturn(forceApiMock);
        when(new ApiConfig()).thenReturn(apiConfigMock);
        when(new ForceApi(apiConfigMock)).thenReturn(forceApiMock);
    }
}
This is probably a late reply, but I believe it can still be useful to some of us programmers.
Disclaimer: I've never worked with PowerMockito, but I've used PowerMock quite a lot.
As for troig's suggestion:
A PowerMock-driven unit test assumes that you'll run with a dedicated runner:
@RunWith(PowerMockRunner.class)
In this case, this clashes with the @RunWith(BlockJUnit4ClassRunner.class) stated in the question, so the "slot" for RunWith is already occupied.
This particular issue can still be resolved by running recent versions of PowerMock as a JUnit rule (I assume you run JUnit). You can find an example of doing this here.
But bottom line, this is one of the known issues with PowerMock.
There are other issues as well, which basically made me come to the conclusion that PowerMock (and PowerMockito) should be avoided and not used in new projects:
Unit tests with PowerMock are slow (much slower than with, say, EasyMock, if the test could be rewritten to use that).
PowerMock sometimes instruments the bytecode in a way that is incompatible with tools like JaCoCo code coverage; as a consequence, Sonar doesn't report coverage for classes unit-tested with PowerMock, or at least reports it wrong.
The Surefire plugin, responsible for running tests in Maven, has a feature for running multiple unit tests in parallel; with PowerMock this is sometimes not possible.
Even IntelliJ sometimes fails to run suites that contain PowerMock tests.
But the most important thing is that when you have to use tools like PowerMock, the code can probably (and should) be refactored to be cleaner and easier to understand. Regarding your particular question:
Your class violates the coding principle that says a class should not take care of constructing its own dependencies. Here the DAO actually "constructs" and configures another (external) service for later use.
I suggest you watch an excellent lecture by Misko Hevery about clean code to better understand what I mean.
So again, in your example, it's much better to make the ForceApi a dependency constructed by the Dependency Injection framework (I see that you already use @Inject, so you're on the right track).
Take a look at this implementation:
public abstract class AbstractDao {
    @Inject // this one is constructed and injected by your favorite DI framework in real use cases
    private ForceApi forceApi;
    public void doSomething() {
        // do your dao stuff here
        forceApi.callSomeAPIMethod();
        // do your dao stuff here
    }
}
Now for unit tests you don't really need PowerMock anymore. It's enough to use a simple mock or even a stub, depending on the situation. All you need is to provide a constructor that takes a parameter of type ForceApi, or maybe a setter (which you can consider making package-private so that no one can call it outside the tests).
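As a rough sketch of what the test could then look like with plain Mockito (the constructor on SalesForceDaoImpl that takes a ForceApi is an assumption, not code from the question):
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.junit.Before;
import org.junit.Test;

public class SalesForceDaoImplTest {

    private ForceApi forceApiMock;
    private SalesForceDaoImpl salesForceDao;

    @Before
    public void setup() {
        forceApiMock = mock(ForceApi.class);
        // assumed constructor that takes the dependency directly
        salesForceDao = new SalesForceDaoImpl(forceApiMock);
    }

    @Test
    public void doSomethingDelegatesToForceApi() {
        // doSomething() and callSomeAPIMethod() are the names used in the
        // AbstractDao sketch above
        salesForceDao.doSomething();
        verify(forceApiMock).callSomeAPIMethod();
    }
}
No runner gymnastics and no bytecode instrumentation: just a mock passed in through the constructor.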
I don't have enough information from your question, but the design I've offered can probably also eliminate the need for an abstract DAO class, which can be helpful in some cases, because inheritance is sometimes a pretty heavy "obligation" to maintain (at least think about it). Maybe in this case the inheritance exists only to support this getForceAPI behavior. As the project grows, methods will probably be added to this AbstractDao just because it's convenient, and these methods will then "transparently" be added to all DAOs in the whole hierarchy. This construction becomes fragile, because if just one method changes its implementation, the whole hierarchy of DAOs can potentially fail.
Hope this helps

Is there a good way to trigger the initialization of a test suite when running an individual test case?

Under JUnit 4, I have a test suite which uses a @ClassRule annotation to bootstrap a framework. This is needed to be able to construct certain objects during tests. It also loads some arbitrary application properties into a static field. These are usually specific to the current test suite and are used by numerous tests throughout the suite. My test suite looks something like this (where FrameworkResource extends ExternalResource and does a lot of bootstrap work):
@RunWith(Suite.class)
@SuiteClasses({com.example.test.MyTestCase.class})
public class MyTestSuite extends BaseTestSuite {
    @ClassRule
    public static FrameworkResource resource = new FrameworkResource();
    @BeforeClass
    public static void setup() {
        loadProperties("props/suite.properties");
    }
}
The above works really well, and the main build has no problem executing all test suites and their respective test cases (SuiteClasses?). The issue is when I'm in Eclipse and I want to run just one test case individually without having to run the entire suite (as part of the local development process). I would right-click the Java file, select Run As > JUnit Test, and any test needing the framework resource or test properties would fail.
My question is this:
Does JUnit 4 provide a solution to this problem (without duplicating the initialization code in every test case)? Can a test case say something like @dependsOn(MyTestSuite.class)?
If JUnit doesn't have a magic solution, is there a common design pattern that can help me here?
As you are running just one test class, a good solution would be to move the initialisation code to the test class. You would need to add a @Before method that initialises the properties.
That would require duplicating the code in all your test classes. To resolve this, you could create an abstract parent class that has the @Before method, so that all child classes share the same initialisation (see the sketch below).
Also, the initialised data could be kept in static variables so you can check whether it has already been initialised for that particular execution.
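A minimal sketch of that idea, with stated assumptions: FrameworkTestBase and initFramework() are hypothetical names, and initFramework() stands in for whatever FrameworkResource does in its before() method plus the property loading:
import org.junit.Before;

public abstract class FrameworkTestBase {

    private static boolean initialised = false;

    // Runs before every test, but the static guard ensures the expensive
    // bootstrap happens only once per JVM, whether the tests are launched
    // through the suite or individually from Eclipse.
    @Before
    public void bootstrapFramework() {
        if (!initialised) {
            initFramework();
            initialised = true;
        }
    }

    // Hypothetical helper wrapping the bootstrap work and property loading
    // that the suite currently performs.
    private static void initFramework() {
        // ... framework bootstrap ...
        // loadProperties("props/suite.properties");
    }
}
Each test class then extends FrameworkTestBase and needs no initialisation code of its own.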

How can I run particular sets of JUnit tests under specific circumstances?

I have a Java codebase that is developed exclusively in Eclipse. There is a set of JUnit 4 tests that can be divided into two mutually exclusive subsets based on when they are expected to run:
"Standard" tests should run when a developer right-clicks the test
class (or containing project) and selects Run As > JUnit Test. Nothing
unusual here—this is exactly how JUnit works in Eclipse.
"Runtime" tests should only run when called programmatically from
within the application when it is started up in a specific state.
The two types of tests might sit adjacent to each other in the same Java
package. (Ideally we could intermix them in the same class, though
that's not a hard requirement.)
My first solution was to annotate the "Runtime" test classes with a new @TestOnLaunch annotation. The application is able to find these classes, and was running the tests contained in them (annotated with @Test) using JUnitCore.run(Class<?>...). However, these tests leak into the "Standard" scenario above, because the Eclipse test runner will run any method annotated with @Test, regardless of the intent of my custom class annotation.
Next I tried moving the @TestOnLaunch annotation to the method level. This prevents the "leakage" of the "Runtime" tests into the "Standard" scenario, but now I can't seem to get JUnitCore to run those test methods. run(Request) with a Request targeted at the correct class and method, for example, fails with "No runnable methods", presumably because it can't find the @Test annotation (because it's not there).
I'm very interested to know if there's a "JUnit way" of solving this kind of problem. Presumably I could write my own Runner (to run methods annotated with @TestOnLaunch)—is this the right approach? If so, how do I then kick off the testing programmatically with a bunch of classes, analogous to calling JUnitCore.run(Class<?>...)?
If you don't mix the two types of test methods in the same test class, the library below may help:
http://johanneslink.net/projects/cpsuite.jsp
You can use its filter feature to set up two test suites.
I set up several test suites in my project by defining marker interfaces:
UnitTests, IntegrationTests, DeploySmokeTests, AcceptanceTests
And three test suites:
@RunWith(ClasspathSuite.class)
@SuiteTypes({UnitTests.class, IntegrationTests.class})
public class CommitTestSuite {}
@RunWith(ClasspathSuite.class)
@SuiteTypes({DeploySmokeTests.class})
public class DeploySmokeTestSuite {}
@RunWith(ClasspathSuite.class)
@SuiteTypes({AcceptanceTests.class})
public class AcceptanceTestSuite {}
Now you can achieve your goal by running the specific test suite. An alternative solution is using JUnit categories:
@Category(IntegrationTests.class)
public class SomeTest {
    @Test
    public void test1() {
        ...
    }
    @Test
    public void test2() {
        ...
    }
}
@RunWith(Categories.class)
@IncludeCategory({UnitTests.class, IntegrationTests.class})
@SuiteClasses({ /* all test classes */ })
public class CommitTestSuite {}
As I said, if you mix different types of test methods in one test class, the first approach can't help you, but with the second solution you can put the category annotation on individual test methods (I annotated the test class in the example above; a method-level sketch follows). If you choose the second solution, though, you have to maintain your test suite every time you add a new test class.
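For illustration, a hedged sketch of method-level categorisation (MixedTests and RuntimeTests are made-up names; RuntimeTests is a marker interface like the ones above):
import org.junit.Test;
import org.junit.experimental.categories.Category;

public class MixedTests {

    @Test
    public void standardTest() {
        // no category: runs in any suite that doesn't filter it out
    }

    @Test
    @Category(RuntimeTests.class)
    public void launchOnlyTest() {
        // picked up only by a suite with @IncludeCategory(RuntimeTests.class)
    }
}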
First, you should reevaluate why you're using JUnit tests at runtime; that seems like an odd choice for a problem that probably has a better solution.
However, you should look at using a Filter to determine which tests to run, possibly in conjunction with a custom annotation; a sketch follows.
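A minimal sketch of that Filter idea, with stated assumptions: the launch-time methods keep their @Test annotation (so Eclipse can still see them) and are additionally marked with the question's @TestOnLaunch, which must have RUNTIME retention; LaunchTestRunner is a made-up name:
import org.junit.runner.Description;
import org.junit.runner.JUnitCore;
import org.junit.runner.Request;
import org.junit.runner.Result;
import org.junit.runner.manipulation.Filter;

public class LaunchTestRunner {

    public static Result runLaunchTests(Class<?>... testClasses) {
        Filter onLaunchOnly = new Filter() {
            @Override
            public boolean shouldRun(Description description) {
                // Let suite/class descriptions through so their children
                // are inspected; keep only methods carrying @TestOnLaunch.
                if (description.getMethodName() == null) {
                    return true;
                }
                return description.getAnnotation(TestOnLaunch.class) != null;
            }

            @Override
            public String describe() {
                return "methods annotated with @TestOnLaunch";
            }
        };
        // analogous to JUnitCore.run(Class<?>...), but filtered
        Request request = Request.classes(testClasses).filterWith(onLaunchOnly);
        return new JUnitCore().run(request);
    }
}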

Using JUnit categories vs simply organizing tests in separate classes

I have two logical categories of tests: plain functional unit tests (pass/fail) and benchmark performance tests that are just for metrics/diagnostics.
Currently, I have all test methods in a single class, call it MyTests:
public class MyTests
{
    @Test
    public void testUnit1()
    {
        ...
        assertTrue(someBool);
    }
    @Test
    public void testUnit2()
    {
        ...
        assertFalse(someBool);
    }
    @Test
    @Category(PerformanceTest.class)
    public void bmrkPerfTest1()
    {
        ...
    }
    @Test
    @Category(PerformanceTest.class)
    public void bmrkPerfTest2()
    {
        ...
    }
}
Then I have a UnitTestSuite defined as
@RunWith(Categories.class)
@Categories.ExcludeCategory(PerformanceTest.class)
@SuiteClasses({ MyTests.class })
public class UnitTestSuite {}
and a PerformanceTestSuite
@RunWith(Categories.class)
@Categories.IncludeCategory(PerformanceTest.class)
@SuiteClasses({ MyTests.class })
public class PerformanceTestSuite {}
so that I can run the unit tests in Ant separately from performance tests (I don't think including Ant code is necessary).
This means I have a total of four classes (MyTests, PerformanceTest, PerformanceTestSuite, and UnitTestSuite). I realize I could have just put all the unit tests in one class and all the benchmark tests in another and be done with it, without the additional complexity of categories and extra annotations. I call tests by class name in Ant, i.e. I don't run all tests in a package.
Does it make sense to keep the tests organized by category with annotations, and what are the reasons for doing so, or would it be better to just refactor into two simple test classes?
To the question of whether to split the tests into two classes:
As they are clearly very different kinds of tests (unit tests and performance tests), I would put them in different classes in any case, for that reason alone.
Some further musings:
I don't think using @Category annotations is a bad idea, however. What I'd do, in a more typical project with tens or hundreds of classes containing tests, is annotate the test classes (instead of individual methods) with @Category, as sketched below, then use the ClasspathSuite library to avoid the duplicated effort of categorising the tests. (And maybe run the tests by category using Ant.)
If you will only ever have the two test classes, it of course doesn't matter much. You can keep the Categories and Suites, or throw them away (as you said the tests are run by class name in Ant) if having the extra classes bugs you. I'd keep them, and move towards the scenario described above, as usually (in a healthy project) more tests will accumulate over time. :-)
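For illustration, a minimal sketch of that class-level approach (reusing PerformanceTest from the question; the class name is made up):
import org.junit.Test;
import org.junit.experimental.categories.Category;

// One class-level annotation covers every test method in the class, so
// new benchmark methods are categorised automatically.
@Category(PerformanceTest.class)
public class MyPerformanceTests {

    @Test
    public void bmrkPerfTest1() {
        // ... benchmark body ...
    }
}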
If you only have two test classes then it probably doesn't matter. The project I work on has 50-60 classes. Listing them all by name would be exhausting. You could use filename patterns but I feel annotations are cleaner.
