Custom unit test result - Java

Is there a way to create a custom unit test result in TestNG/JUnit (or any other Java testing framework)? I understand that unit tests can pass, fail, or be ignored, but I would really like to have a third option.
The company I'm working with right now has adopted a testing style of cleverly comparing screenshots of their application, so a test can either pass, fail, or diff, when the screenshots do not match within a predetermined tolerance. In addition, they have their own in-house test "framework" and runners. This was done long before I joined.
What I would like to do is migrate the test framework to one of the standard ones, but this process should be very gradual.
The approach I was thinking about was to create a special exception (e.g. DiffToleranceExceededException), fail the test, and then customize the test result in the report.
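To illustrate, a rough sketch of what I have in mind, assuming TestNG's TestListenerAdapter; the "diff" attribute convention is hypothetical:

public class DiffToleranceExceededException extends AssertionError {
    public DiffToleranceExceededException(String message) {
        super(message);
    }
}

// Relabels these failures so a custom report generator can show a third "diff" state:
public class DiffResultListener extends org.testng.TestListenerAdapter {
    @Override
    public void onTestFailure(org.testng.ITestResult result) {
        if (result.getThrowable() instanceof DiffToleranceExceededException) {
            result.setAttribute("diff", true); // hypothetical marker read by the report
        }
    }
}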

Maybe you already mean the following with
The approach I was thinking about was to create a special exception
(e.g. DiffToleranceExceededException), fail the test, and then
customize the test result in the report.
but just in case: you can certainly pass a predefined message string to the assertions. In your case, if the screenshots are identical, the tests pass. If they are too different, the tests simply fail. If they are within tolerance, you make them fail with a message like "DIFFERENT BUT WITHIN-TOLERANCE" or whatever; these failures are then easily distinguishable. You could also invert the logic: add a message to the failures that are not within tolerance, to make those visually prominent.
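To make the idea concrete, here is a minimal sketch assuming JUnit 4; the compare(...) helper and the tolerance value are hypothetical stand-ins:

import static org.junit.Assert.fail;

import org.junit.Test;

public class ScreenshotTest {

    private static final double TOLERANCE = 0.02; // e.g. 2% of pixels may differ

    @Test
    public void screenshotMatchesBaseline() {
        double diff = compare("baseline.png", "actual.png"); // hypothetical comparison
        if (diff > TOLERANCE) {
            fail("Screenshots differ beyond tolerance: diff=" + diff);
        } else if (diff > 0.0) {
            // Fails, but with a marker that is easy to pick out in the report:
            fail("DIFFERENT BUT WITHIN-TOLERANCE: diff=" + diff);
        }
        // diff == 0.0: identical screenshots, the test passes
    }

    // Stand-in for a real pixel-by-pixel comparison returning a difference ratio.
    private double compare(String expectedPath, String actualPath) {
        return 0.0;
    }
}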

You could follow this approach to customize your test reports: add a new column to the test report and generate your own report (with the screenshots, for example).

Related

Should I follow the one-test-class-per-class pattern or the use-case pattern to write test cases?

I am learning to write JUnit test cases. The pattern I have seen is that we usually create a test class for each class individually, named after it, and write test cases for each method of that class in its respective test class, so that maximum code coverage is achieved.
What I was thinking is that writing test cases for my feature would be the better choice, because no matter how many method signatures change in the future, I won't have to modify or recreate test cases for those modified or newly created methods; I would already have test cases for the feature I developed. As long as those tests keep passing for a particular feature, I can be sure, with a minimum amount of test code, that everything is fine.
This way I don't have to write test cases for each and every method of each class. Is this a good approach?
Well, test cases are written for a reason: each and every method has to work properly, as expected. If you only write test cases at the feature level, how do you find exactly where an error occurred, and how confidently can you ship your code to the next stage?
The better approach is to write unit test cases for each class and an integration test to make sure everything works together.
We found success in using both. By default we use one test class per class. But when particular use cases come up, e.g. use cases that involve multiple classes, or use cases where the existing boilerplate testing code prevents the use case from being properly tested, we create a test class for that scenario.
This way I don't have to write test cases for each and every
method of each class. Is this a good approach?
In this case you write only integration tests and no unit tests.
Writing tests for use cases is really nice, but it is not enough: in an integration test it is hard to cover all the cases of all the methods invoked, because there may be a very large number of branches, while it is much easier in a unit test.
Besides, a use-case test may pass for the wrong reasons, thanks to side effects between the multiple methods invoked.
By writing a unit test you protect yourself against this kind of issue.
Definitely, unit and integration tests are not opposed but complementary. You have to write both to get a robust application.

One unit test per case or one unit test per assert?

From a maintenance-cost point of view, should I write one unit test per case or one unit test per assert? I have the following code:
void methodUnderTest(Resource resource) {
    if (!resource.hasValue()) {
        Value value = valueService.getValue(resource);
        resource.setValue(value);
    }
    // resource.setLastUpdateTime(new Date()); // will be added in future
    db.persist(resource);
    email.send(resource);
}
The commented line will be added in the near future, and I am thinking about what it will cost to update the tests.
As far as I can see, there are two ways to test this code:
Write two tests, passing a resource with a value and one without. In both tests, verify that db.persist and email.send were called. When LastUpdateTime is added, I'll have to update both tests to verify that the property was set.
Write separate unit tests: one checks that db.persist was called, another checks email.send, and a third and fourth cover the resource with and without a value. When LastUpdateTime is added, I just write a new test.
I like the second way because I won't have to touch working tests. But it would probably mean a lot of code duplication, because all four tests actually do the same thing and only use different asserts.
The first approach looks more correct from the 'just one concept per unit test' point of view. But aren't such tests hard to maintain? Whenever I add something new, I will have to revise all the existing tests, and that doesn't sound good.
Is there some best practice here I should follow?
I propose you put the common stuff of all the tests into the setUp() method, then have exactly one assert per unit test, as in your second way of testing.
When you add the new line of code, you just add one more test case with a single assert.
No modification of existing tests, no code duplication.
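A minimal sketch of that layout, assuming JUnit 4 and Mockito; ServiceUnderTest and the collaborator types (Db, Email) are hypothetical names for the code above, and the valueService collaborator is omitted for brevity:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Before;
import org.junit.Test;

public class MethodUnderTestTest {

    private Db db;
    private Email email;
    private Resource resource;

    @Before
    public void setUp() {
        // Common arrange/act for every test: invoke methodUnderTest once.
        db = mock(Db.class);
        email = mock(Email.class);
        resource = new Resource();
        new ServiceUnderTest(db, email).methodUnderTest(resource);
    }

    @Test
    public void persistsResource() {
        verify(db).persist(resource);
    }

    @Test
    public void sendsEmail() {
        verify(email).send(resource);
    }

    // When setLastUpdateTime() arrives, add one new test with one assert;
    // the existing tests stay untouched.
}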
There is no general answer for this. You need to balance your needs and constraints:
If a test is small, executes quickly and fails very rarely, there is no reason to split it into several.
If you have many assertions in a single test, then all of them probably provide value in figuring out why the test fails, but execution stops at the first failing assertion, hiding the rest of that information. Try to have only a single assert per test (combine the assertions into one big string, for example; see the sketch below).
If you test several features in a single test, you have a variant of the same "missing valuable information" problem. If each test tests a single feature, the combination of succeeded and failed tests may give you a clue as to why they fail. Aim for a single feature per test.
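A small sketch of the "one big string" idea, assuming JUnit 4; the checked values are purely illustrative:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CombinedAssertTest {

    @Test
    public void reportsAllMismatchesAtOnce() {
        StringBuilder errors = new StringBuilder();
        if (!"expected".equals(actualName())) {
            errors.append("name was ").append(actualName()).append('\n');
        }
        if (actualCount() != 42) {
            errors.append("count was ").append(actualCount()).append('\n');
        }
        // A single assert surfaces every failed check in one message.
        assertEquals("", errors.toString());
    }

    private String actualName() { return "expected"; } // stand-in
    private int actualCount() { return 42; }           // stand-in
}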
Lastly, it gives me a good feeling when I see thousands of tests being executed.

How to build up test cases in JUnit?

I'm coming from a Perl background where I used Test::More to handle unit testing. Using that framework, I knew the order in which the tests took place and could rely on that, which I understand is discouraged in the JUnit framework. I've seen several ways to get around this, but I want to understand the proper/intended way of doing things.
In my Perl unit testing I would build up tests, knowing that if test #3 passed, I could make some assumptions in further tests. I don't quite see how to structure that in the JUnit world so that I can make every test completely independent.
For example, suppose I have a class that parses a date from a string. Methods include:
parse a simple date (YYYY-MM-DD)
parse a simple date with alternate separator (YYYY_MM_DD or YYYY/MM/DD)
parse a date with a string for a month name (YYYY-MON-DD)
parse a date with a string month name in a different language
and so on
I usually write my code to focus as many of the externally-accessible methods into as few core methods as possible, re-using as much code as possible (which is what most of us would do, I'm sure). So, let's say I have 18 different tests for the first method, 9 that are expected to pass and 9 that throw an exception. For the second method, I only have 3 tests, one each with the separators that work ('_' & '/') and one with a separator that doesn't work ('*') which is expected to fail. I can limit myself to the new code being introduced because I already know that the code properly handles the standard boundary conditions and common errors, because the first 18 tests already passed.
In the Perl world, if test #20 fails, I know that it's probably something to do with the specific separator, and is not a general date parsing error because all of those tests have already passed. In the JUnit world, where tests run in a random order, if test #20 fails, I don't know if it's because of a general date parsing issue or because of the separator. I'd have to go and see which other ones failed and then make some assumptions there. That's not too hard to do, of course, but maybe in a bigger, more complex class, it would be more difficult to do.
How do other people deal with building up a set of tests? Should I put each and every test in a separate class and use a test suite? That seems tedious. (And before someone suggests that I put the first 18 in one class and the second 3 in another, and use a test suite for just those groupings, let's pretend that all 18 of the early tests build on each other, too.)
And, again, I know there are ways around this (FixedMethodOrder in JUnit 4.11+ or JUnit-HierarchicalContextRunner), but I want to understand the paradigm as it's intended to be used.
In the JUnit world, where tests run in a random order, if test #20 fails, I don't know if it's because of a general date parsing issue or because of the separator. I'd have to go and see which other ones failed and then make some assumptions there.
Yes, that is correct. If something in your code is broken, then multiple tests may fail. That is a good thing. Use intent-revealing test method names, and possibly use the optional String message parameter in the JUnit assertions, to explain what exactly failed the test.
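For instance, with the date parser from the question, intent-revealing names might look like this (DateParser and its parse method are hypothetical; assuming JUnit 4):

import static org.junit.Assert.assertEquals;

import java.time.LocalDate;

import org.junit.Test;

public class DateParserTest {

    @Test
    public void parsesSimpleIsoDate() {
        assertEquals(LocalDate.of(2024, 1, 31), DateParser.parse("2024-01-31"));
    }

    @Test
    public void parsesDateWithSlashSeparator() {
        assertEquals(LocalDate.of(2024, 1, 31), DateParser.parse("2024/01/31"));
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsAsteriskSeparator() {
        DateParser.parse("2024*01*31");
    }
}

If parsesDateWithSlashSeparator fails while parsesSimpleIsoDate passes, the names alone point at the separator handling rather than general date parsing.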
How do other people deal with building up a set of tests? Should I put each and every test in a separate class and use a test suite?
The general convention is one test class per source class. Depending on what build tool you are using, you may or may not need to use test suites. If you are using Ant, you probably need to collect the tests into test suites, but if you are using Maven, the test plugins for Maven will find all your test classes for you, so you don't need suites.
I also want to point out that you should be coding to Java interfaces as much as possible. If you are testing a class C that depends on an implementation of interface I, then you should mock your I implementation in your C test class so that C is tested in isolation. Your mock I should follow what the interface is supposed to do. This also keeps the number of failing tests down: if there is a bug in your real I implementation, then only your I tests should fail; the C tests should still all pass (since you are testing against a fake but working I implementation).
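A minimal sketch of that isolation, keeping the answer's names C and I (using a hand-rolled fake rather than a mocking library; assuming JUnit 4):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CTest {

    interface I {
        int compute(int input);
    }

    static class C {
        private final I dependency;
        C(I dependency) { this.dependency = dependency; }
        int doubled(int input) { return 2 * dependency.compute(input); }
    }

    @Test
    public void doublesWhatTheDependencyComputes() {
        I fake = input -> input + 1; // fake, but respects I's contract
        assertEquals(8, new C(fake).doubled(3)); // (3 + 1) * 2
    }
}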
Don't worry about suites yet. You'll know when you need them. I've only had to use them a handful of times, and I'm not entirely sold on their usefulness...but I leave that decision up to you.
To the meat of your question - the conventional way with JUnit tests is to neither know nor depend on the order of execution of your tests; this ensures that your tests are not run-order dependent, and if they are, something is wrong with your tests* and validation.
The main core concept behind unit tests is that they test a unit of code - as simple as a single function. If you're attempting to test five different things at once, your test is far too large, and should be broken out. If the method you're testing is monolithic in nature, and difficult to test, it should be refactored and broken out into different slices of responsibility.
Tests that exercise a larger flow are better suited for integration-style tests, which tend to be written as unit tests, but aren't actually unit tests.
I've not run into a scenario in which, if a certain test failed, I could expect different behavior in the other tests. I've never thought such a thing needed to be noted, since the only thing I care about in my unit test is how that unit of code behaves given a certain input.
Keep your tests small and simple to understand; the test should only make one assertion about the result (or a general assertion of the state of your result).
*: That's not to say that it's completely broken, but those sorts of tests should be fixed sooner rather than later.

Code Coverage Tool That Supports Annotation of JUnit Tests to Specify Which Method the Unit Test Covers

I'm looking for something like PHPUnit's "Specifying Covered Methods" for Java. That means a code coverage tool that comes with annotations for my JUnit tests so that I can specify which method is tested by which JUnit test.
The effect of this would be much more accurate metrics on covered lines, because only the lines of the specified method are counted, and not all executed lines of the whole application.
This approach sounds as if it would place a heavy burden on the programmer. It either assumes a 1:1 correspondence between a test and a method (which goes against the usual advice to think in terms of testing the behaviour of a class, not methods) or requires the programmer to manually track the route through private methods etc. for each entry point as the code is refactored and changed over time.
It's also difficult to see how this could be implemented with type safety: quick automated operations such as method renames would require a manual step to update the annotations in the tests.
If your requirement is to gain a more accurate estimate of the effectiveness of your test suite, an alternative approach you might consider is mutation testing. This seeds faults into your code and then checks whether your suite is able to detect them.
There are a number of systems available for Java, including:
http://pitest.org
http://jumble.sourceforge.net/
https://github.com/david-schuler/javalanche/
A comparison of them is available here:
http://pitest.org/java_mutation_testing_systems/

Is it possible to programmatically generate JUnit test cases and suites?

I have to write a very large test suite for a complex set of business rules that are currently captured in several tabular forms (e.g., if parameters X, Y, and Z are such and such, the value should be between V1 and V2). Each rule has a name and its own semantics.
My end goal is to have a test suite, organized into sub-suites, with a test case for each rule.
One option is to actually hard-code all these rules as tests. That is ugly, time-consuming, and inflexible.
Another is to write a Python script that would read the rule files and generate Java classes with the unit tests. I'd rather avoid this if I can. Another variation would be to use Jython.
Ideally, however I would like to have a test suite that would read the files, and would then define sub-suites and tests within them. Each of these tests might be initialized with certain values taken from the table files, run fixed entry points in our system, and then call some validator function on the results based on the expected value.
Is there a reasonable way to pull this off using only Java?
Update: I may have somewhat simplified our kind of rules. Some of them are indeed tabular (excel style), others are more fuzzy. The general question though remains as I'm probably not the first person to have this problem.
Within JUnit 4 you will want to look at the Parameterized runner. It was created for the purpose you describe (data-driven tests). It won't organize the tests into suites, however.
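A minimal sketch of the Parameterized runner; the rule rows here are invented and would in practice be read from the rule files:

import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class BusinessRuleTest {

    @Parameters(name = "{0}")
    public static Collection<Object[]> rules() {
        // Each row: rule name, input, lower bound V1, upper bound V2.
        return Arrays.asList(new Object[][] {
            { "rule-1", 10, 0, 20 },
            { "rule-2", 12, 10, 15 },
        });
    }

    private final String name;
    private final int input;
    private final int v1;
    private final int v2;

    public BusinessRuleTest(String name, int input, int v1, int v2) {
        this.name = name;
        this.input = input;
        this.v1 = v1;
        this.v2 = v2;
    }

    @Test
    public void valueIsWithinBounds() {
        int value = runSystem(input); // stand-in for the fixed entry point
        assertTrue(name + ": value " + value, value >= v1 && value <= v2);
    }

    private int runSystem(int input) {
        return input + 2; // stand-in for the real system under test
    }
}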
In JUnit 3 you can create TestSuites and Tests programmatically. The answer is in JUnit Recipes, which I can expand on if you need it (remember that JUnit 4 can run JUnit 3 tests).
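A sketch of that JUnit 3 style (assuming the junit.framework API; the rule names are illustrative):

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

public class RuleSuiteFactory {

    public static Test suite() {
        TestSuite root = new TestSuite("all rules");
        TestSuite tabular = new TestSuite("tabular rules"); // one sub-suite per rule file
        for (final String rule : new String[] { "rule-1", "rule-2" }) {
            tabular.addTest(new TestCase(rule) {
                @Override
                protected void runTest() {
                    // run the fixed entry point and validate against the table row
                }
            });
        }
        root.addTest(tabular);
        return root;
    }
}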
Have you considered using FIT for that?
You seem to have the tables ready already, and "business rules" sounds like "business people write them using Excel".
FIT is a system for checking tests based on tables of input-to-expected-output mappings, and an open source Java library for running those tests is available.
We tried FIT and decided to go with Concordion. The main advantages of this library are:
the tests can be checked in alongside the code base (into a Subversion repository, for example)
they are executed by a standard JUnit runner
I wrote something very similar using JUnit. I had a large number of test cases (30 pages) in an XML file. Instead of trying to generate different tests, I did it all in a single test, which worked just fine.
My test looked something like this:
// Sketch: Case, readCasesFromXmlFile() and fn() stand in for the real code.
private List<Case> cases;

@Before
public void setUp() {
    cases = readCasesFromXmlFile(); // parse the XML file of cases
}

@Test
public void testFnWorksForAllCases() {
    for (Case c : cases) {
        assertEquals("Case " + c.inputs + " should yield " + c.expectedResult,
                c.expectedResult, fn(c.inputs));
    }
}
With Ruby, I did exactly what you are saying: generating tests on the fly. Doing this in Java, though, is complex, and I don't think it is worth it, since there is another, quite reasonable approach.
Hope this helps.
