Coverage vs reachable code - Java

Q: How do I detect real test coverage?
I've noticed a problem with the code coverage metric as a measure of test quality: 100% code coverage doesn't mean the code is really tested.
Sometimes a test yields 100% coverage even though it doesn't cover everything. The problem lies in the definition of coverage: we assume coverage == reachable code.
But that's not true; code can be 100% reachable yet not 100% covered by the test.
Take a look at the example below. This test gives 100% coverage (EMMA), but in reality it doesn't check the values that are passed to the service mock. So if a value changes, the test won't fail.
Example:
public class User {

    public static final int INT_VALUE = 1;
    public static final boolean BOOLEAN_VALUE = false;
    public static final String STRING_VALUE = "";

    private Service service;

    public void setService(Service service) {
        this.service = service;
    }

    public String userMethod() {
        return service.doSomething(INT_VALUE, BOOLEAN_VALUE, STRING_VALUE);
    }
}
And the test for it:

import static org.easymock.EasyMock.*;

import org.easymock.EasyMock;
import org.junit.Before;
import org.junit.Test;

public class UserTest {

    private User user;
    private Service easyMockNiceMock;

    @Before
    public void setUp() throws Exception {
        user = new User();
        easyMockNiceMock = EasyMock.createNiceMock(Service.class);
    }

    @Test
    public void nonCoverage() throws Exception {
        // given
        user.setService(easyMockNiceMock);
        expect(easyMockNiceMock.doSomething(anyInt(), anyBoolean(), (String) anyObject())).andReturn("");
        replay(easyMockNiceMock);

        // when
        user.userMethod();

        // then
        verify(easyMockNiceMock);
    }
}
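For contrast, a stricter version of the same test would fail if the constant values (or the return value) changed. The sketch below is illustrative only: the eq() matchers, the "ok" return value, and the assertEquals go beyond the original test:

@Test
public void strictCoverage() throws Exception {
    // given
    user.setService(easyMockNiceMock);
    // literal expectations: verify() fails if User's constants change
    expect(easyMockNiceMock.doSomething(eq(1), eq(false), eq(""))).andReturn("ok");
    replay(easyMockNiceMock);

    // when
    String result = user.userMethod();

    // then
    assertEquals("ok", result); // check the value, not just that a call happened
    verify(easyMockNiceMock);
}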

Take a look at Jester, which performs mutation testing. From the site:
Jester finds code that is not covered by tests. Jester makes some
change to your code, runs your tests, and if the tests pass Jester
displays a message saying what it changed. Jester includes a script
for generating web pages that show the changes made that did not cause
the tests to fail.
Jester is different than code coverage tools, because it can find code
that is executed by the running of tests but not actually tested.
Jester's approach is called mutation testing or automated error
seeding. However, Jester is not meant as a replacement for code
coverage tools, merely as a complementary approach.
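Applied to the question's example: a mutation-testing run could flip one of the constants in User, and because the test stubs the mock with anyInt()/anyBoolean()/anyObject() matchers and never asserts the returned value, it would keep passing. The concrete mutation below is a hypothetical illustration, not actual Jester output:

// Hypothetical mutation applied to User:
public static final int INT_VALUE = 2; // was 1 -- the nice-mock test above still passes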

100% coverage has never meant 100% tested, and anyone who claims it does either doesn't understand coverage or is lying to you. Coverage measurement simply records which product code has been executed during testing. There are dozens of ways to write tests that produce 100% coverage and still don't fully test your code.
The simplest way is to write a test that calls the product function and then never makes any assertions about the return value!
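For instance, a test like the following sketch (written against the question's User class) marks every line of userMethod() as covered while checking nothing:

@Test
public void coveredButUntested() {
    user.setService(easyMockNiceMock);
    replay(easyMockNiceMock); // nice mock returns defaults for any call
    user.userMethod(); // executed, therefore "covered" -- but no assertion follows
}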
Here's a blog post I wrote about this very topic: Flaws in Coverage Measurement. It's Python-centric, but the concepts are all the same.

Related

TestNG - Getting start time of before method

I'm executing a few hundred tests in test classes, each consisting of a single beforeMethod, followed by a variable number of primary tests and occasionally an afterMethod.
The purpose of the beforeMethod is to populate the test environment with the data used in the primary tests, while keeping its logging and recording separate from the primary tests, which we report on.
We have set up an automatic issue-creation tool using a listener. We've found it would add great value to include execution time in this tool, so that it can show us how long it takes to reproduce the errors in those issues.
To this end, I have made a simple addition to this code that uses ITestResult.getEndMillis() and getStartMillis() to get the execution time.
The problem with this approach is that if a failure occurs during the primary tests, ITestResult.getStartMillis() will not account for the start time of the before method, only that of the primary method.
How would we go about determining the start time of the test class itself (always the beforeMethod), rather than just the current method?
Since we're running hundreds of tests in a massive setup, a solution that doesn't require changing each separate test class would definitely be preferable.
The setup of the Java test classes looks something like this (scrubbed of business specifics):
package foobar;

import foobar.*;

@UsingTunnel
@Test
public class FLOWNAME_TESTNAME extends TestBase {

    private final Value<String> parameter;

    public FLOWNAME_TESTNAME(Value<String> parameter) {
        super(PropertyProviderImpl.get());
        this.parameter = parameter;
    }

    @StoryCreating(test = "TESTNAME")
    @BeforeMethod
    public void CONDITIONS() throws Throwable {
        new TESTNAME_CONDITIONS(parameter).executeTest();
    }

    @TestCoverage(test = "TESTNAME")
    public void PRIMARYTESTS() throws Throwable {
        TESTCASE1 testcase1 = new TESTCASE1(parameter.get());
        testcase1.executeTest();
        testcase1.throwSoftAsserts();

        TESTCASE2 testcase2 = new TESTCASE2(parameter.get());
        testcase2.executeTest();
        testcase2.throwSoftAsserts();
    }
}
So in this case, the problem arises when the listener detects a failure in either TESTCASE1 or TESTCASE2: their results will not include the execution time of TESTNAME_CONDITIONS, because that test lives in a different method. Yet practically speaking they are part of the same test flow, i.e. the same test class.
I found a solution to the issue.
It is possible to use ITestResult.getTestContext().getStartDate().getTime() to obtain the time at which the test class itself started running, rather than the current test method.
The final solution was quite simply:

(result.getEndMillis() - result.getTestContext().getStartDate().getTime()) / 60000

where result is the ITestResult.
This outputs the elapsed time, in minutes, between the start of the test and the end of the last executed method.
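Wired into a listener, the computation might look like the sketch below. TestListenerAdapter is TestNG's convenience base class; the listener's name and the println are placeholders for the actual issue-creation logic:

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class IssueTimingListener extends TestListenerAdapter {

    @Override
    public void onTestFailure(ITestResult result) {
        // Minutes from the start of the test context (which includes the
        // @BeforeMethod) to the end of the failed method.
        long minutes = (result.getEndMillis()
                - result.getTestContext().getStartDate().getTime()) / 60000;
        System.out.println("Time to reproduce: ~" + minutes + " min");
    }
}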

Change the value for each test method before @Before is called in JUnit

I am writing a test for a class which is set up like this:

class A {

    private String name;

    public String getName() {
        return "Hello " + name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
My test class:

public class TestA {

    A a = new A();

    @Before
    public void setup() {
        a.setName("Jack");
    }

    @Test
    public void testTom() {
        assertEquals("Hello Tom", a.getName());
    }

    @Test
    public void testJack() {
        assertEquals("Hello Jack", a.getName());
    }
}
How can I change the value of name between the methods, given that @Before runs for every test method?
I.e., if I execute testJack, the output should be Hello Jack.
I tried with @Parameters, but setup gets called before that, so I couldn't achieve this functionality.
First, the code:

@Before
public void setup() {
    A a = new A();
    a.setName("Jack");
}

doesn't do anything the tests can see. You're creating a local variable a which goes out of scope almost immediately.
@Before is designed to set up (and reset) a state or context before each test is run. It doesn't vary unless something it relies on changes between invocations.
You could create a Stack as an instance variable, pre-populate it in a @BeforeClass method, and have @Before pop a value each time it's called. But this is inadvisable, since it assumes the tests run in some particular order. It's much cleaner and clearer to just declare different values inside each test.
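For example (a sketch against the asker's class A, assuming JUnit 4 and a static import of assertEquals):

@Test
public void testTom() {
    a.setName("Tom"); // the value this test needs, declared right here
    assertEquals("Hello Tom", a.getName());
}

@Test
public void testJack() {
    a.setName("Jack");
    assertEquals("Hello Jack", a.getName());
}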
There is simply no point in doing that; your real problem is rooted in your statement "Just assume the scenario of 30 lines of code in setup".
If you need 30 lines of setup code, then your code under test is not following the single responsibility principle, and is doing way too many different things.
Of course, you can turn to "data-driven" testing to somehow get there (see here for example); but that would be fixing the Y side of an XY problem.
I know it sounds harsh, but you'd better step back and learn about reasonable OO design (for example, based on SOLID). Then rework your code so that it doesn't need 30 lines of setup code.
You see, if your code is so hard to test, I guarantee you: it is also hard to understand, and will be close to impossible to maintain and enhance over time. And beyond that, it will be hard to get the code to be correct in the first place.
Long story short: have a look at these videos and improve your design skills.

JUnit: No runnable methods

I am a newbie with JUnit. I am fixing JUnit issues in a SonarQube report and also increasing JUnit code coverage. I came across a class with no @Test annotation. The exception thrown is:

java.lang.Exception: No runnable methods
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

The test class is below:
public class TestLimitPrice {

    @BeforeClass
    public static void setUpBeforeClass() {
        JndiDataManager.init();
    }

    private LimitPriceBean limitPrice;
    private LimitPriceValidator validator;

    @Before
    public void setUp() {
        limitPrice = new LimitPriceBean();
        validator = new LimitPriceValidator(limitPrice);
    }
}
My questions are:
In a Sonar report, is it necessary for every JUnit test class to have at least one @Test in order to pass?
Is an empty @Test a good approach for increasing code coverage?
If a test case is not executing, is assertEquals(true, true) good practice, or should it be avoided?
Update
Sonar Version 4.4.1
JUnit Version 4.12
Java 1.6-45
My questions are:
In a Sonar report, is it necessary for every JUnit test class to have at least one @Test in order to pass?
I don't understand the question.
Is an empty @Test a good approach for increasing code coverage?
No, for two reasons.
First, if the @Test method is truly empty, then there's no possibility of increasing coverage. So let's assume that you have a @Test method that does execute some of your program code but contains no assertions.
In this scenario, you've increased your test coverage by executing program code in a @Test, but totally subverted the purpose of tests by not making any statements (assertions) about the expected outcome/outputs of that code. Let's say I've got:
public int addTwoAndTwo() {
    return 2 + 2;
}
And a corresponding test:

@Test
public void testAddTwoAndTwo() {
    MyClass mc = new MyClass();
    mc.addTwoAndTwo(); // this method is now "covered"
}
Now that addTwoAndTwo is "covered" I'm supposed to be able to maintain it with confidence that as long as the unit tests continue to pass, I haven't broken anything.
So let's do that. My new version is this:
public int addTwoAndTwo() {
    return 42;
}
After that change, my unit tests still succeed, so everything must be okay, right? Uhm... no. And that's why you need assertions:
@Test
public void testAddTwoAndTwo() {
    MyClass mc = new MyClass();
    assertThat(mc.addTwoAndTwo()).isEqualTo(4); // this method is now truly covered
}
If a test case is not executing, is assertEquals(true, true) good practice, or should it be avoided?
The answer to this question should by now be obvious, but to be explicit: DO NOT DO THIS.

Debugging Java code tested with Spock and JMockit

I'm using Spock to write tests for a Java application. The test includes a JMockit MockUp for the class under test. When debugging a test (using IntelliJ), I would like to be able to step into the Java code. I realize that using F5 to step into the code won't work, because of all the Groovy reflection magic that goes on. Unfortunately, even if I set a breakpoint in the Java code, it still will not be hit, even though the code is executed.
Here is the code to test:
public class TestClass {

    private static void initializeProperties() {
        // Code that will not run during the test
    }

    public static boolean getValue() {
        return true; // <=== BREAKPOINT SET HERE WILL NEVER BE HIT
    }
}
Here is the test code:
class TestClassSpec extends Specification {

    void setup() {
        new MockUp<TestClass>() {
            @Mock
            public static void initializeProperties() {}
        }
    }

    void "run test"() {
        when:
        boolean result = TestClass.getValue()

        then:
        result == true
    }
}
The test passes and everything works well, but I'm not able to debug into the Java code.
Note: I can debug if I do not use the JMockit MockUp, but that is required to test a private static method in my production code.
Any ideas on how to get the debugger to stop in the Java code from a Spock test that uses JMockit?
This problem plagued me for days, and I had searched and searched for an answer. Fifteen minutes after posting for help, I found it. So for anyone else who runs into this, here's my solution, all thanks to this answer: https://stackoverflow.com/a/4852727/2601060
In short, the breakpoint is removed when the class is redefined by JMockit. The solution is to set a breakpoint in the test, and once the debugger has stopped in the test, THEN set the breakpoint in the Java code.
And there was much rejoicing...

Javolution test patterns, dos and don'ts

What are the patterns, dos, and don'ts when writing tests with Javolution? In particular I was wondering:
TestCase.execute() does not allow throwing exceptions. How should one deal with them? Rethrow as RuntimeException, or store in a variable and assert in TestCase.validate(), or something else?
Are there any graphical runners that show you which tests fail, e.g. in Eclipse? Perhaps someone has written a JUnit wrapper so that I could use the Eclipse JUnit runner?
The javadoc and the Javolution sources give some examples and design rationale.
See also an article on serverside.
Javolution tests contain exactly one test each, and the exercising of the tested code is separated from the validation into the distinct methods execute() and validate(). Thus the same test class can be used both for regression tests and for speed tests (where the call to validate() is omitted). Also, the execution of many tests is trivially parallelizable.
A disadvantage of this separation is increased memory consumption, since the results of executing the exercised code need to be kept until validate() is called. (Freeing those results in tearDown is probably a good idea.)
And if validate() lives in a different class than the exercising code, it might be difficult to debug a failure.
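To illustrate the first bullet of the question: exceptions raised while exercising the code can be caught in execute(), stored in a field, and surfaced in validate(). This is only a sketch against the TestCase API as described above, assuming execute() cannot throw checked exceptions while validate() can; Parser and its parse() method are hypothetical placeholders:

public class ParseTest extends javolution.testing.TestCase {

    private String result;
    private Exception failure; // stored because execute() must not throw

    @Override
    public void execute() {
        try {
            result = new Parser().parse("42"); // exercise only, no checks here
        } catch (Exception e) {
            failure = e; // defer to validate()
        }
    }

    @Override
    public void validate() throws Exception {
        if (failure != null) throw failure; // re-surface the deferred exception
        if (!"42".equals(result)) throw new AssertionError("unexpected: " + result);
        result = null; // free results kept between execute() and validate()
    }
}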
You can get a graphical test runner of sorts by using the following JUnit adapter and running it in Eclipse. You can start/debug the failed tests separately. Unfortunately, the graphical representation does not include anything about the actual test; it just shows the numbers [0], [1], etc.
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// enter(), exit(), REGRESSION and test() are static imports from
// javolution.testing, as used in the original answer.

@RunWith(Parameterized.class)
public class JavolutionJUnit4Adapter {

    protected final javolution.testing.TestCase test;

    public JavolutionJUnit4Adapter(javolution.testing.TestCase testcase) {
        this.test = testcase;
    }

    @org.junit.Test
    public void executeTest() throws Exception {
        enter(REGRESSION);
        try {
            new javolution.testing.TestSuite() {
                @Override
                public void run() {
                    test(test);
                }
            }.run();
        } finally {
            exit();
        }
    }

    @Parameters
    public static Collection<javolution.testing.TestCase[]> data() {
        javolution.testing.TestSuite fp = new WhateverSuiteYouWantToRun();
        List<javolution.testing.TestCase> tests = fp.getTestCases();
        Collection<javolution.testing.TestCase[]> res = new ArrayList<javolution.testing.TestCase[]>();
        for (javolution.testing.TestCase t : tests) {
            res.add(new javolution.testing.TestCase[] { t });
        }
        return res;
    }
}
