Retrieve JUnit test expected results - Java

I'm writing JUnit test cases for a bunch of classes; each of them has a handful of methods to test. The classes I'm about to test look like the following:
class A {
    int getNth(int n);
    int getCount();
}
class B {
    int[] getAllNth(int n);
    int getMin();
}
I store the expected result for each class.method() in a file. For example, in a CSV,
A; getNth(1):7; getNth(2):3; getCount():3
B; getAllNth(2):[7,3]; getAllNth(3):[7,3,4]; getMin():3
My question is how I can retrieve those values easily in the test cases. I hope to pass the method call A.getNth(2) to a class that can build the string "A.getNth(2)".
If the format I store the data in is not ideal, feel free to give suggestions on that as well.

It sounds like you might want to use Fitnesse?

Not sure about JUnit, but here is how you would do it with TestNG, using data providers:
@DataProvider
public Object[][] dp() {
    return new Object[][] {
        new Object[] { 1, 7 },
        new Object[] { 2, 3 },
    };
}

@Test(dataProvider = "dp")
public void nthShouldMatch(int parameter, int expected) {
    Assert.assertEquals(getNth(parameter), expected);
}
Obviously, you should implement dp() so that it retrieves its values from the spreadsheet instead of hardcoding them like I just did, but you get the idea. Once you have implemented your data provider, all you need to do is update your spreadsheet, and you don't even need to recompile your code.
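For example, a data provider that parses the semicolon-separated format from the question might look roughly like this (a minimal sketch; the file name expected-results.csv and the decision to keep only the getNth entries for class A are assumptions):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

import org.testng.annotations.DataProvider;

public class ExpectedResults {

    // Reads lines like "A; getNth(1):7; getNth(2):3; getCount():3" and turns
    // the getNth entries for class A into {argument, expected} pairs.
    @DataProvider(name = "dp")
    public static Object[][] dp() throws IOException {
        List<Object[]> rows = new ArrayList<>();
        for (String line : Files.readAllLines(Paths.get("expected-results.csv"))) {
            String[] parts = line.split(";");
            if (!parts[0].trim().equals("A")) {
                continue; // only the expectations for class A belong to this provider
            }
            for (int i = 1; i < parts.length; i++) {
                String entry = parts[i].trim(); // e.g. "getNth(1):7"
                if (!entry.startsWith("getNth(")) {
                    continue;
                }
                int arg = Integer.parseInt(entry.substring(entry.indexOf('(') + 1, entry.indexOf(')')));
                int expected = Integer.parseInt(entry.substring(entry.indexOf(':') + 1).trim());
                rows.add(new Object[] { arg, expected });
            }
        }
        return rows.toArray(new Object[0][]);
    }
}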

Related

How to create JUnit tests for different input files as separate cases?

Right now, I have around 107 test input cases for my interpreter, and I have my JUnit tester set up to manually handle each case independently so as to not lump them all together. That is, if I use a loop to iterate over the test files as such
for (int i = 0; i < NUM_TESTS; i++) {
    String fileName = "file_" + (i + 1) + ".in";
    testFile(fileName);
}
JUnit will create one giant test result for all 107 tests, meaning if one fails, the entire test fails, which I don't want. As I said, right now I have something like
@Test
public static void test001() {
    testFile("file1.in");
}
@Test
public static void test002() {
    testFile("file2.in");
}
While this works, I imagine there's a much better solution to get what I'm after.
You can use @ParameterizedTest with @MethodSource annotations.
For example:
@ParameterizedTest
@MethodSource("fileNameSource")
void test(final String fileName) {
    testFile(fileName);
}

private static Stream<String> fileNameSource() {
    return IntStream.range(0, NUM_TESTS).mapToObj(i -> "file_" + (i + 1) + ".in");
}
Check the documentation at https://junit.org/junit5/docs/current/user-guide/#writing-tests-parameterized-tests
Each value returned by fileNameSource() will be run and reported as a separate test case.
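For reference, a self-contained version with the required imports might look like this (a sketch; NUM_TESTS and the body of testFile stand in for the asker's existing code):

import java.util.stream.IntStream;
import java.util.stream.Stream;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;

class InterpreterFileTest {

    private static final int NUM_TESTS = 107;

    @ParameterizedTest
    @MethodSource("fileNameSource")
    void test(final String fileName) {
        testFile(fileName);
    }

    private static Stream<String> fileNameSource() {
        return IntStream.range(0, NUM_TESTS).mapToObj(i -> "file_" + (i + 1) + ".in");
    }

    private static void testFile(String fileName) {
        // placeholder: run the interpreter on fileName and assert on its output
    }
}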
You have to define your own structure based on your needs. One way is to define your input in a JSON file as a list of values, as below.
{
  "values": [
    "value1",
    "value2"
  ]
}
Read these values when your test case executes, with the help of an ObjectMapper:
objectMapper.readValue(fixture("filePathName.json"), CustomInput.class);
Where CustomInput would be something like below:
public class CustomInput {
    List<String> values;
}
You can keep increasing or decreasing the number of inputs in the JSON.
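If you are not using a framework that provides the fixture() helper shown above, a plain Jackson read of the same file could look like this (a sketch; the resource path and the getter/setter are assumptions):

import java.io.File;
import java.util.List;

import com.fasterxml.jackson.databind.ObjectMapper;

public class CustomInput {
    private List<String> values;

    public List<String> getValues() { return values; }
    public void setValues(List<String> values) { this.values = values; }

    // Loads the fixture directly from the test resources folder (assumed path).
    public static CustomInput load() throws Exception {
        ObjectMapper objectMapper = new ObjectMapper();
        return objectMapper.readValue(new File("src/test/resources/filePathName.json"), CustomInput.class);
    }
}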

How to generate multiple test methods from JSON?

I have a huge JSON file which contains test cases. (No, I don't want to show it here, because knowing the file doesn't matter in this case.)
I parse the JSON file in my JUnit test - that works fine.
But I've got 50 test cases, and I want JUnit to show each of them, like "test 0 of 50 passed",
and to have a list like: test 1 passed, test 2 passed, test 3 failed...
For that I would have to put each test case into a method. How can I do this dynamically? Is this possible in JUnit? Because when I'm parsing the JSON, I don't know how many cases I have.
JUnit has direct support for CSV files, which means you can import and use them easily using @CsvFileSource.
However, since your case does not involve CSV files, I tried to create parameterized tests in JUnit 5 using JSON files.
Our class under test:
public class MathClass {
    public static int add(int a, int b) {
        return a + b;
    }
}
Here's the JSON file I am using:
[
  {
    "name": "add positive numbers",
    "cases": [[1, 1, 2], [2, 2, 4]]
  },
  {
    "name": "add negative numbers",
    "cases": [[-1, -1, -2], [-10, -10, -20]]
  }
]
So, in JUnit 5 there is an annotation called @MethodSource which gives you the opportunity to provide arguments to your parameterized test. You only need to provide the method name. Here's my argument provider method:
@SneakyThrows
private static Stream<TestCase> getAddCases() {
    final ObjectMapper mapper = new ObjectMapper();
    TypeReference<List<Case>> typeRef = new TypeReference<>() {};
    final File file = new File("src/test/resources/add-cases.json");
    final List<Case> cases = mapper.readValue(file, typeRef);
    return cases.stream()
            .flatMap(caze -> caze.getCases()
                    .stream()
                    .map(el -> new TestCase(caze.getName(), el)));
}
In the code above, the class Case maps the JSON objects to Java objects, and since the "cases" field is a multidimensional array, each individual case is represented by a class called TestCase. (Overall, this is not important for you, since you are already able to parse the JSON, but I wanted to put it here anyway.)
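For completeness, the two helper classes might look roughly like this (my own sketch, since the answer does not show them; the field names are inferred from the JSON and the test method):

import java.util.List;

// Maps one entry of the JSON array: a name plus a list of [a, b, expected] triples.
class Case {
    private String name;
    private List<List<Integer>> cases;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public List<List<Integer>> getCases() { return cases; }
    public void setCases(List<List<Integer>> cases) { this.cases = cases; }
}

// One concrete test case: the group name and a single [a, b, expected] triple.
class TestCase {
    private final String name;
    private final List<Integer> values;

    TestCase(String name, List<Integer> values) {
        this.name = name;
        this.values = values;
    }

    public List<Integer> getValues() { return values; }

    // The toString() override shown further below produces readable test names.
}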
Finally, the test method itself:
@ParameterizedTest(name = "{index} : {arguments}")
@MethodSource("getAddCases")
void add_test(TestCase testCase) {
    final List<Integer> values = testCase.getValues();
    int i1 = values.get(0);
    int i2 = values.get(1);
    int e = values.get(2);
    assertEquals(e, MathClass.add(i1, i2));
}
The @ParameterizedTest annotation takes a name argument where you can provide a template for the test names. I just played around with the toString method of the TestCase class to achieve a better description for each test case.
@Override
public String toString() {
    return String.format("%s : (%s, %s) ==> %s", name, values.get(0), values.get(1), values.get(2));
}
And voila!

Dataproviders and Asserts

When using DataProviders in TestNG, my test method has asserts that will fail, since the data passed in navigates to a different URL. Is there a way to work around this, i.e. a way for the data to only be injected into certain/specific asserts?
Instead of testing one scenario with different data, I am instead testing multiple scenarios with different data, which is where my conflict arises.
@DataProvider(name = "VINNumbers")
public String[][] VINNumbers() {
    return new String[][] {
        {"2T1BU4ECC834670"},
        {"1GKS2JKJR543989"},
        {"2FTDF0820A04457"}
    };
}

@Test(dataProvider = "VINNumbers")
public void shouldNavigateToCorrespondingVinEnteredIn(String VIN) {
    driver.get(findYourCarPage.getURL() + VIN);
    Assert.assertTrue(reactSRP.dealerListingMSRPIsDisplayed());
}
The assert tests whether or not the page has an MSRP displayed, but not all data rows will have an MSRP displayed, so it will fail. The only row that has it is the first array. Is there a way for data provider values to be tied to specific asserts?
If whether the MSRP is displayed depends on the VIN (a boolean), you could for example create a provider so that it supplies both the VIN and the expected result:
@Test(dataProvider = "VINNumbers")
public void shouldNavigateToCorrespondingVinEnteredIn(String VIN, boolean isMSRPDisplayed) {
    // act
    // assert
    assertThat(reactSRP.dealerListingMSRPIsDisplayed()).isEqualTo(isMSRPDisplayed);
}
This way you end up with a provider like the one below:
{
    {"2T1BU4ECC834670", true},
    {"1GKS2JKJR543989", false},
    {"2FTDF0820A04457", true},
}
In my opinion this is acceptable for simple cases. To make the assertion more readable, I would add a custom message to it that is also parameterized.
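Put together, the provider and the test, including such a parameterized message, could look like this (a sketch; driver, findYourCarPage and reactSRP are the objects from your existing test, and the true/false flags are only illustrative):

@DataProvider(name = "VINNumbers")
public Object[][] vinNumbers() {
    return new Object[][] {
        {"2T1BU4ECC834670", true},
        {"1GKS2JKJR543989", false},
        {"2FTDF0820A04457", true}
    };
}

@Test(dataProvider = "VINNumbers")
public void shouldNavigateToCorrespondingVinEnteredIn(String vin, boolean isMSRPDisplayed) {
    driver.get(findYourCarPage.getURL() + vin);
    // The assertion message includes the VIN so a failure points at the offending row.
    assertThat(reactSRP.dealerListingMSRPIsDisplayed())
            .as("MSRP visibility for VIN %s", vin)
            .isEqualTo(isMSRPDisplayed);
}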
I hope this helps.

Bind Datapoints in JUnit Theory to a particular variable

I have the following theory to test. In the code, I want variable a to be even and variable b to be odd.
@RunWith(Theories.class)
public class TestJunit {
    // add the error
    @DataPoints
    public static Integer[] integersOdd() {
        return new Integer[]{1, 3, 5};
    }

    @DataPoints
    public static Integer[] integersEven() {
        return new Integer[]{2, 4, 6};
    }

    @Theory
    public void testAdd(Integer a, Integer b) {
        ...
    }
}
For now I am using assumeTrue and a validation function as in:
public boolean validateInput(Integer a, Integer b) {
    Set<Integer> even = new HashSet<Integer>(Arrays.asList(integersEven()));
    Set<Integer> odd = new HashSet<Integer>(Arrays.asList(integersOdd()));
    return (even.contains(a) && odd.contains(b));
}
Modified theory:
@Theory
public void testAdd(Integer a, Integer b) {
    Assume.assumeTrue(validateInput(a, b));
    System.out.println("a=" + a + ", b=" + b);
    assertTrue(a + b > -1);
    // add any test
}
It is a very dirty way, as JUnit will generate all the combinations and discard the unwanted inputs at assumeTrue. What if I have 10 theories with 10 data points? JUnit will try 100 combinations where I wanted only 10!
Is there a neat way to do this? Maybe some annotation to tell JUnit which DataPoints to pick values from for each variable?
Edit:
Another way I found is to use Test Generators. I am using JUnit-QuickCheck [Read Here] to generate random data according to the range required by my variables. Then I encapsulate them in a class and pass this object into my theory to test.
JUnit 4.12 allows for named data points in theories. Here's the original pull request, and here are the release notes for 4.12 - look for "Added mechanism for matching specific data points".
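Applied to the example above, the named data points from 4.12 would look roughly like this (a sketch; the "even" and "odd" labels are arbitrary names):

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.FromDataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertTrue;

@RunWith(Theories.class)
public class TestJunit {

    @DataPoints("odd")
    public static Integer[] integersOdd() {
        return new Integer[]{1, 3, 5};
    }

    @DataPoints("even")
    public static Integer[] integersEven() {
        return new Integer[]{2, 4, 6};
    }

    // Only even values are injected into a and only odd values into b,
    // so just the named combinations are tried instead of every pairing.
    @Theory
    public void testAdd(@FromDataPoints("even") Integer a,
                        @FromDataPoints("odd") Integer b) {
        assertTrue(a + b > -1);
    }
}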

How do I run the same JUnit test multiple times with different test data each time?

I am just getting started with unit testing. I did the JUnit tutorial from a PDF on the Tutorials Point website. So my question is, I want to test my shunting yard algorithm and my RPNEvaluator.
The constructors (and any other variables to help you out with the context) look like this:
ShuntingYard.java:
private ArrayList<String> tokens = new ArrayList<String>();

public ShuntingYard(ArrayList<String> tokens) {
    this.tokens = tokens;
}
RPNEvaluator.java:
private Queue<String> polishExpression;

public RPNEvaluator(Queue<String> exp) {
    polishExpression = exp;
}
ShuntingYard.java has a method called toRpn() which will take an ArrayList and return a Queue after some processing.
RPNEvaluator has a method called evaluate which will take a Queue type and return a double after some processing.
With Junit I am trying to write some unit tests and I wanted to know if this start was the best way to go about it:
package testSuite;

import static org.junit.Assert.fail;

import java.util.ArrayList;

import org.junit.Before;
import org.junit.Test;

public class ExpressionEvaluationTest {

    /**
     * Initialise the lists to be used
     */
    @Before
    public void beforeTest() {
        ArrayList<String> exprOne = new ArrayList<String>();
        exprOne.add("3");
        exprOne.add("+");
        exprOne.add("4");
        exprOne.add("*");
        exprOne.add("2");
        exprOne.add("/");
        exprOne.add("(");
        exprOne.add("1");
        exprOne.add("-");
        exprOne.add("5");
        exprOne.add(")");
        exprOne.add("^");
        exprOne.add("2");
        exprOne.add("^");
        exprOne.add("3");

        ArrayList<String> exprTwo = new ArrayList<String>();
        exprTwo.add("80");
        exprTwo.add("+");
        exprTwo.add("2");

        ArrayList<String> exprThree = new ArrayList<String>();
        exprThree.add("2");
        exprThree.add("/");
        exprThree.add("1");
        exprThree.add("*");
        exprThree.add("4");

        ArrayList<String> exprFour = new ArrayList<String>();
        exprFour.add("11");
        exprFour.add("-");
        exprFour.add("(");
        exprFour.add("2");
        exprFour.add("*");
        exprFour.add("4");
        exprFour.add(")");

        ArrayList<String> exprFive = new ArrayList<String>();
        exprFive.add("120");
        exprFive.add("/");
        exprFive.add("(");
        exprFive.add("10");
        exprFive.add("*");
        exprFive.add("4");
        exprFive.add(")");

        ArrayList<String> exprSix = new ArrayList<String>();
        exprSix.add("600");
        exprSix.add("*");
        exprSix.add("2");
        exprSix.add("+");
        exprSix.add("20");
        exprSix.add("/");
        exprSix.add("4");
        exprSix.add("*");
        exprSix.add("(");
        exprSix.add("5");
        exprSix.add("-");
        exprSix.add("3");
        exprSix.add(")");
    }

    @Test
    public void test() {
    }
}
I was going to put this in the before() method:
ShuntingYard sy = new ShuntingYard(/* arraylist here */);
And then in the test, pass the lists to the algorithm. My question is: I think I am going the long way around it; would it be better to have a parameterized annotation and pass those lists as a list of parameters?
And a further question: if a test for any of the ArrayLists passes, then I am sure I can execute a subsequent test against the RPNEvaluator evaluate method. I hope I haven't been ambiguous.
Help would be very much appreciated.
I would come at it a little differently. Instead of just creating several sets of test data and calling the same test each time, break it up into something meaningful. Instead of writing one test called test(), write several separate tests for each aspect of ShuntingYard. For example:
@Test public void
itDoesntDivideByZero()
{
    ArrayList<String> divideByZeroExpression = new ArrayList<>(Arrays.asList("5", "0", "/"));
    // Add code to call your method with this data here
    // Add code to verify your results here
}

@Test public void
itCanAdd()
{
    ArrayList<String> simpleAdditionExpression = new ArrayList<>(Arrays.asList("1", "2", "+"));
    // Add code to call your method with this data here
    // Add code to verify your results here
}
and so on. This will make your JUnit output much easier to read. When there's a failure, you know that it failed while trying to add, or that it failed while trying to evaluate an expression that would cause a divide by zero, etc. Doing it the way you have it in the original, you'd only know that it failed somewhere in the test() method.
Each of the tests here does 3 things:
Arranges the test data
Performs some action with that data
Asserts that the results of the action are as expected
This Arrange, Act, Assert idiom is very common in automated testing. You may also see it called Given, When, Then, as in, "Given these conditions, when I call this method, then I should get this result".
Try to get out of the mindset of writing one test to test an entire class or method. Write a test to test one part of a method. Consider this class:
public class Adder {
    public int addOneTo(int someNumber) {
        return someNumber + 1;
    }
}
You might end up with a test suite that looks like:
@Test public void
itAddsOne()
{
    int numberToAddTo = 1;
    int result = new Adder().addOneTo(numberToAddTo);
    assertEquals("One plus one is two", 2, result);
}

@Test(expected = NullPointerException.class) public void
itChokesOnNulls()
{
    new Adder().addOneTo((Integer) null);
}

@Test public void
itDoesntOverflow()
{
    int result = new Adder().addOneTo(Integer.MAX_VALUE);
    // do whatever here to make sure it worked correctly
}
And so on.
The advice from Mike B is very good: try to separate your tests so that you have one test per behavior/functionality.
To make your tests more readable, I would probably write a static factory method for the class ShuntingYard that receives a string; then you can write:
ShuntingYard addition = ShuntingYard.createFromExpression("2+2");
assertThat(addition.getRpn().evaluate(), is(4.0));
You can refactor a little more and end up with something like this:
assertThat(evaluate("2+2"), is(4.0));
That is easy to understand and easy to read, and in addition, writing more tests with different scenarios is one line of code each.
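One possible shape for that evaluate helper, assuming the static factory suggested above plus the toRpn() and evaluate() methods from the question:

// Hypothetical test helper: parse the expression, convert it to RPN, and evaluate it.
private static double evaluate(String expression) {
    ShuntingYard shuntingYard = ShuntingYard.createFromExpression(expression); // assumed factory
    RPNEvaluator evaluator = new RPNEvaluator(shuntingYard.toRpn());
    return evaluator.evaluate();
}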
Another option is to write parameterized tests, for example: http://www.mkyong.com/unittest/junit-4-tutorial-6-parameterized-test/, but in my opinion they are really ugly. These tests are normally called "data-driven tests" and are used when you want to test the same code with different input values.
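For reference, a JUnit 4 parameterized test along those lines might look like this (a sketch reusing the hypothetical evaluate helper above; the expected values simply assume standard arithmetic):

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class)
public class ExpressionEvaluationParameterizedTest {

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { "2+2", 4.0 },
            { "80+2", 82.0 },
            { "11-(2*4)", 3.0 }
        });
    }

    private final String expression;
    private final double expected;

    public ExpressionEvaluationParameterizedTest(String expression, double expected) {
        this.expression = expression;
        this.expected = expected;
    }

    @Test
    public void evaluatesExpression() {
        // evaluate(...) is the hypothetical helper sketched above.
        assertEquals(expected, evaluate(expression), 1e-9);
    }
}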
For these data-driven tests, a much better option is to use something like Spock, a Groovy testing framework that lets you write incredibly expressive tests, and of course you can use it for testing Java code; check this out: http://docs.spockframework.org/en/latest/data_driven_testing.html
