Is it possible to programmatically generate JUnit test cases and suites? - java

I have to write a very large test suite for a complex set of business rules that are currently captured in several tabular forms (e.g., if parameters X Y Z are such and such, the value should be between V1 and V2). Each rule has a name and its own semantics.
My end goal is to have a test suite, organized into sub-suites, with a test case for each rule.
One option is to actually hard-code all these rules as tests. That is ugly, time-consuming, and inflexible.
Another is to write a Python script that would read the rule files and generate Java classes with the unit tests. I'd rather avoid this if I can. Another variation would be to use Jython.
Ideally, however, I would like to have a test suite that reads the files and then defines sub-suites and tests within them. Each of these tests might be initialized with values taken from the table files, run fixed entry points in our system, and then call some validator function on the results, based on the expected value.
Is there a reasonable way to pull this off using only Java?
Update: I may have somewhat oversimplified our kinds of rules. Some of them are indeed tabular (Excel-style), while others are fuzzier. The general question remains, though, as I'm probably not the first person to have this problem.

Within JUnit 4 you will want to look at the Parameterized runner. It was created for the purpose you describe (data-driven tests). It won't organize them into suites, however.
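For illustration, a minimal sketch of such a Parameterized test; RuleTableReader and RuleEngine are hypothetical stand-ins for your table reader and your system's fixed entry point:

import static org.junit.Assert.assertTrue;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class BusinessRuleTest {

    // One Object[] {ruleName, inputs, v1, v2} per rule row; name = "{0}" puts the
    // rule name into each generated test's display name (JUnit 4.11+).
    @Parameterized.Parameters(name = "{0}")
    public static Collection<Object[]> rules() {
        return RuleTableReader.loadRules(); // hypothetical reader of your table files
    }

    private final String ruleName;
    private final Object[] inputs;
    private final double v1, v2;

    public BusinessRuleTest(String ruleName, Object[] inputs, double v1, double v2) {
        this.ruleName = ruleName;
        this.inputs = inputs;
        this.v1 = v1;
        this.v2 = v2;
    }

    @Test
    public void valueIsWithinExpectedRange() {
        double value = RuleEngine.evaluate(inputs); // hypothetical fixed entry point
        assertTrue(ruleName + ": expected a value in [" + v1 + ", " + v2 + "] but was " + value,
                value >= v1 && value <= v2);
    }
}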
In JUnit 3 you can create TestSuites and Tests programmatically. The answer is in JUnit Recipes, which I can expand on if you need it (remember that JUnit 4 can run JUnit 3 tests).
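And a rough sketch of that JUnit 3 style, building sub-suites at runtime (not the JUnit Recipes text itself; RuleTableReader, RuleTable, Rule, and RuleEngine are again hypothetical):

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

public class AllBusinessRules {

    // JUnit discovers this factory method when running a JUnit 3 suite.
    public static Test suite() {
        TestSuite root = new TestSuite("business rules");
        for (RuleTable table : RuleTableReader.readAll()) { // one sub-suite per rule file
            TestSuite sub = new TestSuite(table.getName());
            for (Rule rule : table.getRules()) {
                sub.addTest(new RuleTestCase(rule));        // one test per rule
            }
            root.addTest(sub);
        }
        return root;
    }

    private static class RuleTestCase extends TestCase {
        private final Rule rule;

        RuleTestCase(Rule rule) {
            super(rule.getName()); // the rule's name becomes the test name
            this.rule = rule;
        }

        @Override
        protected void runTest() {
            double value = RuleEngine.evaluate(rule.getInputs());
            assertTrue(rule.getName() + ": " + value + " is outside ["
                    + rule.getV1() + ", " + rule.getV2() + "]",
                    value >= rule.getV1() && value <= rule.getV2());
        }
    }
}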

Have you considered using FIT for that?
You seem to have the tables ready already, and "business rules" sounds like "business people write them using Excel".
FIT is a system for checking tests based on tables of input -> expected-output mappings, and an open-source Java library for running those tests is available.
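For a taste, a minimal sketch of a FIT ColumnFixture; the public fields receive the input columns of the HTML table, result() (written with trailing parentheses in the table header) is the computed column, and RuleEngine is a hypothetical entry point into your system:

import fit.ColumnFixture;

public class BusinessRuleFixture extends ColumnFixture {
    public double x; // input columns, bound by name from the table header
    public double y;
    public double z;

    public double result() { // computed column, compared against the table's expected value
        return RuleEngine.evaluate(x, y, z);
    }
}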

We tried FIT and decided to go with Concordion. The main advantages of this library are:
the tests can be checked in alongside the code base (into a Subversion repository, for example)
they are executed by a standard JUnit runner
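As a rough sketch, a Concordion fixture is an ordinary class run by the JUnit runner infrastructure and paired with an HTML specification kept next to it in version control; RuleEngine is again a hypothetical entry point:

import org.concordion.integration.junit4.ConcordionRunner;
import org.junit.runner.RunWith;

@RunWith(ConcordionRunner.class)
public class BusinessRulesFixture {

    // called from the HTML specification, e.g. via concordion:assertEquals
    public double evaluate(double x, double y, double z) {
        return RuleEngine.evaluate(x, y, z);
    }
}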

I wrote something very similar using JUnit. I had a large number of test cases (30 pages) in an XML file. Instead of trying to generate different tests, I did it all in a single test, which worked just fine.
My test looked something like this:
private List<Case> cases; // parsed from the XML file

@Before
public void setUp() {
    cases = readCasesFromXml("cases.xml");
}

@Test
public void testFnWorks() {
    for (Case c : cases) {
        assertEquals("Case " + c.inputs + " should yield " + c.expectedResult,
                c.expectedResult, fn(c.inputs));
    }
}
With Ruby, I did exactly what you are saying: generating tests on the fly. Doing this in Java, though, is complex, and I don't think it is worth it, since there is another, quite reasonable approach.
Hope this helps.


Testing my repository layer with DBUnit/JUnit: should I build the test data manually or automate it? What are the advantages?

I am thinking about how to design the unit tests for my Java-based repositories and am running into design problems.
Let's assume I have a Consumer table with data related to my consumers:
{ ID, Name, Phone }
My ConsumerRepository extends BaseRepository, which extends the JPA repository and supports findByPhone, findByName, and findAll queries as well as a save operation.
I'm using an H2 in-memory DB and DBUnit for these tests, all configured and running, and I was wondering about the following:
When loading data into my in-memory DB, should I configure the data in a ConsumerTestData.xml (DBUnit) dataset and manually add the Consumer data for each test, e.g.:
<dataset>
    <CONSUMER CONSUMER_ID="1" FIRST_NAME="Elvis" LAST_NAME="Presley" PHONE="+972123456789" EMAIL="elvis@isep.com" CREATION_DATE="2017-08-29"/>
    <CONSUMER CONSUMER_ID="2" FIRST_NAME="Bob" LAST_NAME="Dylan" PHONE="+972123456780" EMAIL="bob@isep.com" CREATION_DATE="2017-08-29"/>
    <CONSUMER CONSUMER_ID="3" FIRST_NAME="Lady" LAST_NAME="Gaga" PHONE="+972123456781" EMAIL="gaga@isep.com" CREATION_DATE="2017-08-29"/>
</dataset>
Or should I automate it, e.g.:
@Test
public void findByPhone() {
    ConsumerEntity consumerEntity = ConsumerUtil.createRandomConsumer();
    ConsumerEntity savedConsumerEntity = consumerRepository.save(consumerEntity);
    assertThat(consumerRepository.findByPhone(savedConsumerEntity.getPhone()).getPhone())
            .isEqualTo(savedConsumerEntity.getPhone());
}
where my createRandomConsumer generates random data.
Pros:
I think automating would be much more generic and handy: if ConsumerEntity changes, or any other code changes later, I will not have to touch my .xml file; I can just extend the test-entity factory function.
Cons:
Creating new objects and saving them to the in-memory DB might be more difficult if the DB schema contains constraints.
Should I use DBUnit at all? If I automate the data, why use DBUnit at all? Is it better to just use JUnit (rolling back the data after each test and adding the data each test needs automatically, as in the example above)?
If I choose to use DBUnit and add the data manually, what are the advantages of that? Why is it better than using plain JUnit with Spring?
Thanks!
You seem to be asking two questions: whether to use DBUnit and whether to use randomization.
As for DBUnit
It adds extra steps and extra maintenance costs. If you already have code to save entities (via XxxRepository), then there is no reason to introduce yet another tool.
This is true not only for DBUnit but for any tool that duplicates existing persistence logic.
Instead, you can just create an object instance, fill in all the fields, and save it with the repository. This makes refactoring much easier.
As for test randomization
I think your test looks very good. With randomization you can cover more cases with fewer tests, find tricky cases you couldn't think of yourself, isolate your tests easily (e.g. generate a unique username instead of keeping track of them somewhere), etc.
As for the cons: good randomization (and good tests in general) requires a good command of OOP, so not everyone can use it easily when the project grows big. Also, tests start failing from time to time when they are written in haste and not every possibility is considered. To catch such cases you should run the tests locally many times (which people sometimes forget). Good news: IntelliJ can repeat tests N times for JUnit (for TestNG there is an annotation).
In general you have to think more when you write randomized tests, but written properly they provide better coverage and lower maintenance overhead. If you're interested in different randomization techniques, check this out.
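For illustration, a minimal sketch of what the createRandomConsumer() helper could look like; the ConsumerEntity setters are assumed from the question, and the random suffixes keep generated rows unique so tests stay isolated:

import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;

public final class ConsumerUtil {

    public static ConsumerEntity createRandomConsumer() {
        String suffix = UUID.randomUUID().toString().substring(0, 8); // uniqueness per test
        ConsumerEntity consumer = new ConsumerEntity();               // hypothetical setters below
        consumer.setFirstName("First-" + suffix);
        consumer.setLastName("Last-" + suffix);
        consumer.setPhone("+9721" + ThreadLocalRandom.current().nextInt(10_000_000, 100_000_000));
        consumer.setEmail(suffix + "@example.com");
        return consumer;
    }
}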
1) Testing with only random or runtime-generated fixtures is not fine.
It makes the tests non-reproducible, and therefore harder to debug when they fail, and it keeps the tests from documenting the code.
Besides, fixtures with explicit data avoid the side effects that data generation may introduce.
Are you sure that the generated data respects the requirements?
Are you sure that your generation tool works as expected?
Have you tested it?
And so forth.
Finding required edge cases is fine, but inventing edge cases that are not required means your tests will change your requirements.
And you don't want that.
If you have identified all the specific cases and you want to generate some data because you have too many combinations (dozens of input cases, for example), then generating fixtures that follow the requirements is of course fine.
Otherwise, don't do it, as it is just overhead.
2) DBUnit: it is a choice.
I used it before; now I have stopped. It has some strengths, but it is cumbersome, and its maintenance/improvements are very light.
Recently I tried DbSetup from JBNizet (an SO member).
It is quite a fine API for inserting data into a database from Java code: simple and straightforward to use.
For example, to insert data into the DB, an Operation can be defined as:
Operation consumerInserts = sequenceOf(
        insertInto("CONSUMER")
                .columns("ID", "FIRST_NAME", "LAST_NAME")
                .values(1, "Elvis", "Presley")
                .values(2, "Lady", "Gaga")
                .values(3, "Bob", "Dylan")
                .build());
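To actually execute the operation, DbSetup is pointed at a destination and launched; a minimal sketch, assuming an H2 in-memory JDBC URL:

import com.ninja_squad.dbsetup.DbSetup;
import com.ninja_squad.dbsetup.destination.DriverManagerDestination;

@Before
public void setUp() {
    new DbSetup(new DriverManagerDestination("jdbc:h2:mem:test", "sa", ""),
            consumerInserts).launch(); // runs the inserts against the database
}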
3) So, nothing to add.

Custom unit test result

Is there a way to create a custom unit test result in TestNG/JUnit (or any other Java testing framework)? I understand that unit tests can either pass or fail (or be ignored), but currently I really would like to have a third option.
The company I'm working with right now has adopted a testing style of cleverly comparing screenshots of their application, so a test can either pass, fail, or diff when the screenshots do not match within a predetermined tolerance. In addition, they have an in-house test "framework" and runners. This was done long before I joined.
What I would like to do is migrate the test framework to one of the standard ones, but this process has to be very gradual.
The approach I was thinking about was to create a special exception (e.g. DiffToleranceExceededException), fail the test, and then customize the test result in the report.
Maybe you already mean the following with
The approach I was thinking about was to create a special exception
(e.g. DiffToleranceExceededException), fail the test, and then
customize the test result in the report.
but just in case: you can certainly use the possibility to pass a predefined message string to the assertions. In your case, if the screenshots are identical, the test passes. If they are too different, the test just fails. If they are within tolerance, you make the test fail with a message like "DIFFERENT BUT WITHIN TOLERANCE" or whatever; these failures are then easily distinguishable. Certainly, you could also invert the logic: add a message to the failures that are not within tolerance, to make those visually prominent.
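For illustration, a minimal JUnit 4 sketch of that message-based approach; compareScreenshots() and TOLERANCE are hypothetical stand-ins for the in-house diffing logic:

import static org.junit.Assert.fail;
import org.junit.Test;

public class MainWindowScreenshotTest {
    private static final double TOLERANCE = 0.01; // hypothetical diff threshold

    @Test
    public void mainWindowMatchesReference() {
        double diff = compareScreenshots("main-window.png", "main-window-reference.png");
        if (diff > TOLERANCE) {
            fail("Screenshots differ beyond tolerance: diff=" + diff);
        }
        if (diff > 0.0) {
            fail("DIFFERENT BUT WITHIN TOLERANCE: diff=" + diff); // easy to spot in reports
        }
        // identical screenshots: the test simply passes
    }

    // hypothetical: returns a normalized difference measure between two screenshots
    private double compareScreenshots(String actualPath, String referencePath) {
        throw new UnsupportedOperationException("plug in the in-house diffing here");
    }
}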
You could follow this approach to customize your test reports: add a new column to the test report and generate your own report (with screenshots, for example).

How to build up test cases in JUnit?

I'm coming from a Perl background where I used Test::More to handle unit testing. Using that framework, I knew the order in which the tests took place and could rely on that, which I understand is not encouraged with the JUnit framework. I've seen several ways to get around this, but I want to understand the proper/intended way of doing things.
In my Perl unit testing I would build up tests, knowing that if test #3 passed, I could make some assumptions in further tests. I don't quite see how to structure that in the JUnit world so that I can make every test completely independent.
For example, suppose I have a class that parses a date from a string. Methods include:
parse a simple date (YYYY-MM-DD)
parse a simple date with alternate separator (YYYY_MM_DD or YYYY/MM/DD)
parse a date with a string for a month name (YYYY-MON-DD)
parse a date with a string month name in a different language
and so on
I usually write my code to focus as many of the externally-accessible methods into as few core methods as possible, re-using as much code as possible (which is what most of us would do, I'm sure). So, let's say I have 18 different tests for the first method, 9 that are expected to pass and 9 that throw an exception. For the second method, I only have 3 tests, one each with the separators that work ('_' & '/') and one with a separator that doesn't work ('*') which is expected to fail. I can limit myself to the new code being introduced because I already know that the code properly handles the standard boundary conditions and common errors, because the first 18 tests already passed.
In the Perl world, if test #20 fails, I know that it's probably something to do with the specific separator, and is not a general date parsing error because all of those tests have already passed. In the JUnit world, where tests run in a random order, if test #20 fails, I don't know if it's because of a general date parsing issue or because of the separator. I'd have to go and see which other ones failed and then make some assumptions there. That's not too hard to do, of course, but maybe in a bigger, more complex class, it would be more difficult to do.
How do other people deal with building up a set of tests? Should I put each and every test in a separate class and use a test suite? That seems tedious. (And before someone suggests that I put the first 18 in one class and the other 3 in another, and use a test suite for just those groupings, let's pretend that all 18 of the early tests build on each other, too.)
And, again, I know there are ways around this (@FixMethodOrder in JUnit 4.11+ or JUnit-HierarchicalContextRunner), but I want to understand the paradigm as it's intended to be used.
In the JUnit world, where tests run in a random order, if test #20 fails, I don't know if it's because of a general date parsing issue or because of the separator. I'd have to go and see which other ones failed and then make some assumptions there.
Yes, that is correct. If something in your code is broken, then multiple tests may fail. That is a good thing. Use intent-revealing test method names, and possibly the optional String message parameter of the JUnit assertions, to explain exactly what failed the test.
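For illustration, a minimal sketch of intent-revealing names combined with assertion messages, reusing the date-parser example from the question; DateParser is a hypothetical stand-in:

import static org.junit.Assert.assertEquals;
import java.time.LocalDate;
import org.junit.Test;

public class DateParserTest {

    @Test
    public void parsesSimpleIsoDate() {
        assertEquals("plain YYYY-MM-DD input should parse",
                LocalDate.of(2017, 8, 29), DateParser.parse("2017-08-29"));
    }

    @Test
    public void acceptsSlashAsAlternateSeparator() {
        assertEquals("the '/' separator should behave like '-'",
                LocalDate.of(2017, 8, 29), DateParser.parse("2017/08/29"));
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsUnsupportedAsteriskSeparator() {
        DateParser.parse("2017*08*29"); // '*' is not a supported separator
    }
}

If parsesSimpleIsoDate fails along with the separator tests, general parsing is broken; if only acceptsSlashAsAlternateSeparator fails, the failure points straight at the separator handling.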
How do other people deal with building up a set of tests? Should I put each and every test in a separate class and use a test suite?
The general convention is one test class per source class. Depending on your build tool, you may or may not need test suites. If you are using Ant, you probably need to collect the tests into suites, but if you are using Maven, its test plugins will find all your test classes for you, so you don't need suites.
I also want to point out that you should code to Java interfaces as much as possible. If you are testing a class C that depends on an implementation of interface I, then you should mock your I implementation in your C test class so that C is tested in isolation. Your mock I should follow what the interface is supposed to do. This also keeps the number of failing tests down: if there is a bug in your real I implementation, then only your I tests should fail, while the C tests should still all pass (since you are testing C against a fake but working I implementation).
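For illustration, a minimal Mockito sketch using this answer's hypothetical interface I and class C; the lookup/process methods are invented for the example:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import org.junit.Test;

public class CTest {

    @Test
    public void delegatesToItsCollaborator() {
        I fakeI = mock(I.class);                       // fake but working I implementation
        when(fakeI.lookup("key")).thenReturn("value"); // behaves as the interface promises
        C c = new C(fakeI);
        assertEquals("value", c.process("key"));
        verify(fakeI).lookup("key");                   // C used its collaborator as expected
    }
}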
Don't worry about suites yet. You'll know when you need them. I've only had to use them a handful of times, and I'm not entirely sold on their usefulness...but I leave that decision up to you.
To the meat of your question - the conventional way with JUnit tests is to neither know nor depend on the order of execution of your tests; this ensures that your tests are not run-order dependent, and if they are, something is wrong with your tests* and validation.
The core concept behind unit tests is that they test a unit of code, as small as a single function. If you're attempting to test five different things at once, your test is far too large and should be broken up. If the method you're testing is monolithic in nature and difficult to test, it should be refactored and broken out into different slices of responsibility.
Tests that exercise a larger flow are better suited for integration-style tests, which tend to be written as unit tests, but aren't actually unit tests.
I've not run into a scenario in which, knowing that a certain test failed, I could expect different behavior in the other tests. I've never thought such a thing worth noting, since the only thing I care about in my unit tests is how that unit of code behaves given a certain input.
Keep your tests small and simple to understand; the test should only make one assertion about the result (or a general assertion of the state of your result).
*: That's not to say that it's completely broken, but those sorts of tests should be fixed sooner rather than later.

What is the universally accepted way to provide inputs to a JUnit test suite?

I have designed a JUnit test suite with the required values hard-coded in the code itself. Each time there is a change, I need to open the project and make the modifications. To provide input from an external file, I am using Excel sheets, which can be designed easily. The Excel file also has some drop-down menu items, which indicate the test cases that need to be executed, and some text boxes in which the user has to fill in values before running the test suite.
But Excel is not platform independent.
Is there a better way, universally accepted and platform independent, to provide inputs to a JUnit test suite?
Do I understand you right that the main thing here is to find a way that makes it easy for a tester to key in test data?
It's not so much about writing a test case, right?
Well, that issue comes up in many different projects. One example is having users key in some basic values in a database table.
There are many ways to solve it. A lot of people use Excel; even more use MS Access forms, SharePoint or, if they're more familiar with web tools, they end up building web sites.
In the end, your approach and the tool you use depend on your and the testers' knowledge and on the number of interfaces you have to build and maintain. In my company we ended up with some configurable web sites, independent of any third-party software licence (which was a main requirement in our case).
The only tool one should be very careful with is Excel. If you need only a few interfaces, let's say 10-20, Excel can still be handled. When it gets to more than that, the maintenance of the Excel files will kill you, mainly because Excel keeps the programming and business logic for each interface separately. Changing the business logic means changing all the Excel files separately. This kills you sooner or later.
One of the core concepts of test-driven development is that you run all test cases, all of the time, in an automated way. Having a user use Excel to choose test cases and enter data breaks this model.
You could read from a file to drive your test cases, but perhaps your tests need to be redefined to be data independent. Also, all of them should run every time you run JUnit.
Just yesterday I used random data to perform a test; here is an example:
@Test
public void testIntGetter() {
    int value = random.nextInt();
    MyObj obj = new MyObj(value);
    assertEquals(value, obj.getMyInt());
}
While this is an overly simple example, it does test the functionality of the class while being data independent.
Once you decide to break the test-driven development/JUnit model, your question is not really applicable; it is fine to use a tool for other purposes, but the specific question no longer fits.
It is best to have the data reside in code; with some exceptions, testing is independent of the data, as my example shows. Most of those exceptions are edge cases, which should also reside in code. For example, a method that takes a String parameter should be tested against null, an empty string, and a non-empty String, as sketched below.
If the value of a parameter reveals a defect in the code, the code should be fixed, and that value should become a permanent member of the collection of test conditions.
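A minimal sketch of such hardcoded edge cases, assuming a hypothetical Normalizer class with a normalize(String) method:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class NormalizerTest {
    private final Normalizer normalizer = new Normalizer(); // hypothetical class under test

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNullInput() {
        normalizer.normalize(null);
    }

    @Test
    public void returnsEmptyStringUnchanged() {
        assertEquals("", normalizer.normalize(""));
    }

    @Test
    public void trimsNonEmptyInput() {
        assertEquals("abc", normalizer.normalize("  abc  "));
    }
}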
I believe there is no universally accepted way to provide input to JUnit tests. AFAIK, a unit test is often, or by definition, small (the smallest testable part), and its data is provided hardcoded as part of the test.
That said, I also use unit testing to conduct tests of larger numerical algorithms/models, for which I sometimes have to provide more complicated data. I provide this data via a spreadsheet too; I believe the spreadsheet is the natural GUI for this kind of tabular data.
I trigger my Java code directly from the spreadsheet using Obba (disclaimer: I am the developer of Obba too, but my main open-source project is a library for mathematical finance, for which I use these sheets).
My suggestion is to go both routes:
create small (classical) unit tests with predefined hardcoded data as part of your build environment;
create bigger tests with data provided via the sheet, to analyse the code's behavior against its input.
If possible, add a hardcoded "bigger test" to your automated test suite from time to time.
Note: there is also the concept of parameterized unit tests, and there are tools which then generate (e.g. randomize) the parameters as part of the testing.

Should my unit tests of GUI components contain many more lines than the code under test?

This is a sanity check, because I'm finding it to be true in our code. Unlike the tests of our functional code, the tests of stateful GUIs carry an unfortunate amount of weight due to state setup, combinatorial case analysis, and mocking/faking of neighbors/collaborators/listeners/etc. Am I missing something? Thanks for your feedback.
Notes:
The tests are running in the JVM, everything is a POJO.
So far we've gotten some simplification by increasing unit size: testing more pieces glued together.
New Notes:
We're using JUnit and Mockito.
Avoid code duplication. Common setup code and actions should be extracted.
Look for hierarchy. Don't write one huge test scenario; group common lines together and extract them into a meaningfully named method, building up a multi-layered test scenario (see the sketch after this list).
Consider better tools: Cucumber, FEST assertions, Scala or Groovy as a test DSL (even if you don't use them in production code), Mockito...
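For illustration, a minimal sketch of extracted setup and a meaningfully named helper, using JUnit and Mockito as noted above; the panel and listener classes are hypothetical:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.junit.Before;
import org.junit.Test;

public class SaveButtonPanelTest {
    private SelectionListener listener;
    private SaveButtonPanel panel;

    @Before
    public void setUp() {
        listener = mock(SelectionListener.class); // faked collaborator, shared by all tests
        panel = new SaveButtonPanel(listener);    // common state setup lives in one place
    }

    private void selectRows(int... rows) {        // meaningfully named extracted action
        for (int row : rows) {
            panel.select(row);
        }
    }

    @Test
    public void notifiesItsListenerWhenARowIsSelected() {
        selectRows(3);
        verify(listener).selectionChanged();
    }
}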
Besides that, the ratio between the number of production and test lines of code is irrelevant. I can easily find an example of an extremely short piece of code with so many edge cases that it requires dozens of tests.
And a real-life example from SQLite (emphasis mine):
[...] library consists of approximately 81.3 KSLOC of C code. [...] By comparison, the project has 1124 times as much test code and test scripts - 91421.1 KSLOC.
That's right: approximately 1100 lines of test code for each line of production code.
