I am trying to run a parameterized test in TestNG using a DataProvider, but the test case is always ignored. Below is the reference code:
@DataProvider(name = "test")
public Object[][] testDP() throws Exception {
    Object[][] arrayObject = getExcelData("TestData.xlsx", "TestData", "testName");
    return arrayObject;
}
@Test(dataProvider = "test", groups = {"sanity"})
public void testMethod(String testName, String logisticsHandler) {
    System.out.print(testName + "\n");
    setUpdateLogisticsHandler(logisticsHandler);
    updateLogisticsHandler(context.getAuthToken(), context.getQuoteIdForRfq());
}
There are two ways of sending data to the test:
Static array usage, as suggested by Julien Herr.
Reading the data from Excel. In that case each row of the sheet must have exactly the same number of fields (columns) as the parameters of your @Test method.
For example, if String testName and String logisticsHandler are the two parameters of your @Test method, then each Excel row must contain ONLY those two fields of test data so that the object array lines up with them.
You can use Apache POI and handle this very easily.
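For reference, here is a minimal sketch of what such an Excel-reading helper could look like with Apache POI. This is an illustrative simplification, not the poster's actual getExcelData (which also takes a third testName argument whose role isn't shown):

import java.io.FileInputStream;
import java.io.IOException;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public static Object[][] getExcelData(String filePath, String sheetName) throws IOException {
    try (FileInputStream fis = new FileInputStream(filePath);
         Workbook workbook = new XSSFWorkbook(fis)) {
        Sheet sheet = workbook.getSheet(sheetName);
        int rowCount = sheet.getLastRowNum();              // data rows, assuming row 0 is a header
        int colCount = sheet.getRow(0).getLastCellNum();   // must equal the number of @Test parameters
        Object[][] data = new Object[rowCount][colCount];
        for (int r = 1; r <= rowCount; r++) {
            Row row = sheet.getRow(r);
            for (int c = 0; c < colCount; c++) {
                data[r - 1][c] = row.getCell(c).getStringCellValue();
            }
        }
        return data;
    }
}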
This is happening because the names are different. In
@Test(dataProvider = "test", groups = {"sanity"})
you should give the dataProvider as testDP (the name of the function) instead of test:
@Test(dataProvider = "testDP", groups = {"sanity"})
testDP is the method that provides data to your test; there is no function named test.
The change above should solve the problem. Also, the name can be removed from the data provider to avoid any confusion, so that
@DataProvider(name = "test")
becomes simply @DataProvider and the provider is referenced by its method name, testDP.
What would be the best approach to combining multiple argument sources for parameterized testing?
My scenario is something like this:
@ParameterizedTest(name = "{index} => \"{0}\"")
@MethodSource("getEndpointsProvider")
@CsvSource(nullValues = "null",
    value = {
        "whenNullValues, null, BAD_REQUEST",
        "whenNaNValues, uyiwfq, BAD_REQUEST",
        "whenUnauthorized, 12345, FORBIDDEN",
        "whenNonExisting, -1, NOT_FOUND"
    })
void testEndpointsNegativeCases(Method endpoint, String caseName, String input, String expected) {
    // ... process input args
    // ... assertThrows
}
The idea is to make an L x R combination, i.e. number of endpoints times number of CsvSource lines, while keeping it perfectly clear to the reader which cases are tested and what the expectations are (ideally, in the annotations).
Extending getEndpointsProvider() to return a stream of multiple arguments with the combined data is an option (sketched below), but I would like to know if there's a cleaner way.
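The combined-provider option mentioned above could look roughly like the sketch below, assuming JUnit 5's Arguments API; getEndpoints() and the hard-coded case data are placeholders, not an existing API:

import java.lang.reflect.Method;
import java.util.List;
import java.util.stream.Stream;
import org.junit.jupiter.params.provider.Arguments;

static Stream<Arguments> endpointAndCaseCombinations() {
    List<Method> endpoints = getEndpoints(); // hypothetical helper returning the endpoints under test
    List<Object[]> cases = List.of(
            new Object[] {"whenNullValues", null, "BAD_REQUEST"},
            new Object[] {"whenNaNValues", "uyiwfq", "BAD_REQUEST"},
            new Object[] {"whenUnauthorized", "12345", "FORBIDDEN"},
            new Object[] {"whenNonExisting", "-1", "NOT_FOUND"});
    // Cartesian product: every endpoint is paired with every case line
    return endpoints.stream()
            .flatMap(endpoint -> cases.stream()
                    .map(c -> Arguments.of(endpoint, c[0], c[1], c[2])));
}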
Right now, I have around 107 test input cases for my interpreter, and I have my JUnit tester set up to manually handle each case independently so as to not lump them all together. That is, if I use a loop to iterate over the test files as such
for (int i = 0; i < NUM_TESTS; i++) {
    String fileName = "file_" + (i + 1) + ".in";
    testFile(fileName);
}
JUnit will create one giant test result for all 107 tests, meaning if one fails, the entire test fails, which I don't want. As I said, right now I have something like
@Test
public static void test001() {
    testFile("file1.in");
}

@Test
public static void test002() {
    testFile("file2.in");
}
While this works, I imagine there's a much better solution to get what I'm after.
You can use @ParameterizedTest with @MethodSource annotations.
For example:
@ParameterizedTest
@MethodSource("fileNameSource")
void test(final String fileName) {
    testFile(fileName);
}

private static Stream<String> fileNameSource() {
    return IntStream.range(0, NUM_TESTS).mapToObj(i -> "file_" + (i + 1) + ".in");
}
Check the documentation at https://junit.org/junit5/docs/current/user-guide/#writing-tests-parameterized-tests
For each value returned by fileNameSource(), the corresponding test execution is reported as a separate case.
You have to define your own structure based on your needs. One way is to define your input in a JSON file as a list of values, like below:
{
  "values": [
    "value1",
    "value2"
  ]
}
Read these values when your test case executes, with the help of an ObjectMapper:
objectMapper.readValue(fixture("filePathName.json"),CustomInput.class);
Where CustomInput would be something like below (the field needs to be visible to Jackson, e.g. public or exposed through getters/setters):
public class CustomInput {
    public List<String> values;
}
You can then add or remove inputs in the JSON file without touching the test code.
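A possible way to wire the JSON values into a parameterized test, assuming Jackson's ObjectMapper and a JUnit 5 @MethodSource (the file path is illustrative, not a fixed convention):

import java.io.File;
import java.util.stream.Stream;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;

static Stream<String> jsonValues() throws Exception {
    ObjectMapper objectMapper = new ObjectMapper();
    // "src/test/resources/input.json" is an assumed location; adjust to your fixture path
    CustomInput input = objectMapper.readValue(new File("src/test/resources/input.json"), CustomInput.class);
    return input.values.stream();
}

@ParameterizedTest
@MethodSource("jsonValues")
void testWithJsonValue(String value) {
    // ... run the code under test and assert on this value
}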
When using DataProviders in TestNG, my test method has asserts that will fail because the data passed in navigates to a different URL. Is there a way to work around this, i.e. a way for the data to only be injected into certain/specific asserts?
Instead of testing one scenario with different data, I am testing multiple scenarios with different data, which is where my conflict arises.
@DataProvider(name = "VINNumbers")
public String[][] VINNumbers() {
    return new String[][] {
        {"2T1BU4ECC834670"},
        {"1GKS2JKJR543989"},
        {"2FTDF0820A04457"}
    };
}
@Test(dataProvider = "VINNumbers")
public void shouldNavigateToCorrespondingVinEnteredIn(String VIN) {
    driver.get(findYourCarPage.getURL() + VIN);
    Assert.assertTrue(reactSRP.dealerListingMSRPIsDisplayed());
}
The assert checks whether or not the page has an MSRP displayed, but not all of the provided VINs lead to a page with an MSRP, so the test fails. The only one that has it is the first entry in the array. Is there a way for data provider values to be tied to specific asserts?
If whether the MSRP is displayed depends on the VIN (a boolean), you could, for example, create a provider that supplies both the VIN and the expected result:
@Test(dataProvider = "VINNumbers")
public void shouldNavigateToCorrespondingVinEnteredIn(String VIN, boolean isMSRPDisplayed) {
    // act
    // assert
    assertThat(reactSRP.dealerListingMSRPIsDisplayed()).isEqualTo(isMSRPDisplayed);
}
This way you end up with a provider like below:
{
{"2T1BU4ECC834670", true},
{"1GKS2JKJR543989", false},
{"2FTDF0820A04457", true},
}
In my opinion this is acceptable for simple cases. To make the assertion more readable, I would add a custom message to it that is also parameterized, as sketched below.
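A rough sketch of how the full provider and a parameterized assertion message could look, assuming TestNG for the provider and AssertJ for the assertion:

import static org.assertj.core.api.Assertions.assertThat;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

@DataProvider(name = "VINNumbers")
public Object[][] VINNumbers() {
    return new Object[][] {
        {"2T1BU4ECC834670", true},
        {"1GKS2JKJR543989", false},
        {"2FTDF0820A04457", true}
    };
}

@Test(dataProvider = "VINNumbers")
public void shouldNavigateToCorrespondingVinEnteredIn(String VIN, boolean isMSRPDisplayed) {
    driver.get(findYourCarPage.getURL() + VIN);
    assertThat(reactSRP.dealerListingMSRPIsDisplayed())
            .as("MSRP displayed for VIN %s", VIN) // parameterized assertion message
            .isEqualTo(isMSRPDisplayed);
}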
I hope this helps.
I am just getting started with unit testing. I did the JUnit tutorial from a PDF on the TutorialsPoint website. My question is about testing my shunting yard algorithm and my RPNEvaluator.
The constructors (and any other variables to help you out with the context) look like this:
ShuntingYard.java:
private ArrayList<String> tokens = new ArrayList<String>();

public ShuntingYard(ArrayList<String> tokens) {
    this.tokens = tokens;
}
RPNEvaluator.java:
private Queue<String> polishExpression;

public RPNEvaluator(Queue<String> exp) {
    polishExpression = exp;
}
ShuntingYard.java has a method called toRpn() which will take an ArrayList and return a Queue after some processing.
RPNEvaluator has a method called evaluate which will take a Queue type and return a double after some processing.
With JUnit I am trying to write some unit tests, and I wanted to know if this start is the best way to go about it:
package testSuite;
import static org.junit.Assert.fail;
import java.util.ArrayList;
import org.junit.Before;
import org.junit.Test;
public class ExpressionEvaluationTest {
/**
* Initialise the lists to be used
*/
@Before
public void beforeTest() {
ArrayList<String> exprOne = new ArrayList<String>();
exprOne.add("3");
exprOne.add("+");
exprOne.add("4");
exprOne.add("*");
exprOne.add("2");
exprOne.add("/");
exprOne.add("(");
exprOne.add("1");
exprOne.add("-");
exprOne.add("5");
exprOne.add(")");
exprOne.add("^");
exprOne.add("2");
exprOne.add("^");
exprOne.add("3");
ArrayList<String> exprTwo = new ArrayList<String>();
exprTwo.add("80");
exprTwo.add("+");
exprTwo.add("2");
ArrayList<String> exprThree = new ArrayList<String>();
exprThree.add("2");
exprThree.add("/");
exprThree.add("1");
exprThree.add("*");
exprThree.add("4");
ArrayList<String> exprFour = new ArrayList<String>();
exprFour.add("11");
exprFour.add("-");
exprFour.add("(");
exprFour.add("2");
exprFour.add("*");
exprFour.add("4");
exprFour.add(")");
ArrayList<String> exprFive = new ArrayList<String>();
exprFive.add("120");
exprFive.add("/");
exprFive.add("(");
exprFive.add("10");
exprFive.add("*");
exprFive.add("4");
exprFive.add(")");
ArrayList<String> exprSix = new ArrayList<String>();
exprSix.add("600");
exprSix.add("*");
exprSix.add("2");
exprSix.add("+");
exprSix.add("20");
exprSix.add("/");
exprSix.add("4");
exprSix.add("*");
exprSix.add("(");
exprSix.add("5");
exprSix.add("-");
exprSix.add("3");
exprSix.add(")");
}
@Test
public void test() {
}
}
I was going to put this in the @Before method:
ShuntingYard sy = new ShuntingYard(/* arraylist here */);
And then, in the test, pass the lists to the algorithm. My question is that I think I am going the long way around this: would it be better to use a parameterized annotation and pass those lists as a list of parameters?
And a further question: if a test for any of the ArrayLists passes, then I am sure I can execute a subsequent test against the RPNEvaluator's evaluate method. I hope I haven't been ambiguous.
Help would be very much appreciated.
I would come at it a little differently. Instead of just creating several sets of test data and calling the same test each time, break it up into something meaningful. Instead of writing one test called test(), write several separate tests, one for each aspect of ShuntingYard. For example:
@Test public void
itDoesntDivideByZero()
{
    List<String> divideByZeroExpression = Arrays.asList("5", "0", "/");
    // Add code to call your method with this data here
    // Add code to verify your results here
}

@Test public void
itCanAdd()
{
    List<String> simpleAdditionExpression = Arrays.asList("1", "2", "+");
    // Add code to call your method with this data here
    // Add code to verify your results here
}
and so on. This will make your JUnit output much easier to read. When there's a failure you know that it failed while trying to add, or it failed while trying to evaluate an expression that would cause a divide by zero, etc. Doing it the way you have it in the original you'd only know that it failed in the test() method.
Each of the tests here does 3 things:
Arranges the test data
Performs some action with that data
Asserts that the results of the action are as expected
This Arrange, Act, Assert idiom is very common in automated testing. You may also see it called Given, When, Then, as in, "Given these conditions, when I call this method, then I should get this result".
Try to get out of the mindset of writing one test to test an entire class or method. Write a test to test one part of a method. Consider this class:
public class Adder {
    public int addOneTo(int someNumber) {
        return someNumber + 1;
    }
}
You might end up with a test suite that looks like:
@Test public void
itAddsOne()
{
    int numberToAddTo = 1;
    int result = new Adder().addOneTo(numberToAddTo);
    assertEquals("One plus one is two", 2, result);
}

@Test(expected = NullPointerException.class) public void
itChokesOnNulls()
{
    new Adder().addOneTo((Integer) null);
}

@Test public void
itDoesntOverflow()
{
    int result = new Adder().addOneTo(Integer.MAX_VALUE);
    // do whatever here to make sure it worked correctly
}
And so on.
The advice from Mike B is very good: try to separate your tests into one test per behavior/functionality.
To make your tests more readable, I would probably write a static factory method on the ShuntingYard class that receives a string; then you can write:
ShuntingYard addition = ShuntingYard.createFromExpression("2+2");
assertThat(addition.getRpn().evaluate(), is(4.0));
You can refactor a little more and end up with something like this:
assertThat(evaluate("2+2"), is(4.0))
That is easy to understand and easy to read, and writing more tests with different scenarios takes just one line of code each (see the sketch below).
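For example, a minimal sketch of such a helper, assuming the static factory createFromExpression suggested above plus the toRpn()/evaluate() methods from the question, with Hamcrest matchers for the assertion:

import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;
import org.junit.Test;

// Hypothetical helper that wires the two classes together for the tests
private static double evaluate(String expression) {
    ShuntingYard yard = ShuntingYard.createFromExpression(expression); // assumed static factory
    return new RPNEvaluator(yard.toRpn()).evaluate();
}

@Test
public void additionIsEvaluated() {
    assertThat(evaluate("2+2"), is(4.0));
}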
Another option is to write parameterized tests, for example: http://www.mkyong.com/unittest/junit-4-tutorial-6-parameterized-test/, but in my opinion they are really ugly. These tests are normally called "data driven tests" and are used when you want to test the same code with different input values.
For this kind of data-driven testing, a much better option is to use something like Spock, a Groovy testing framework that lets you write remarkably expressive tests, and you can of course use it to test Java code. Check this out: http://docs.spockframework.org/en/latest/data_driven_testing.html
I'm writing JUnit test cases for a bunch of classes; each of them has a handful of methods to test. The classes I'm about to test look like the following.
class A {
    int getNth(int n);
    int getCount();
}

class B {
    int[] getAllNth(int n);
    int getMin();
}
I store the expected result for each class.method() in a file. For example, in a CSV,
A; getNth(1):7; getNth(2):3; getCount():3
B; getAllNth(2):[7,3]; getAllNth(3):[7,3,4]; getMin():3
My question is how I can retrieve those values easily in the test cases. I hope to pass the method call A.getNth(2) to a class that can build the string "A.getNth(2)".
If the format in which I store the data is not ideal, feel free to give suggestions on that as well.
It sounds like you might want to use FitNesse?
Not sure about JUnit, but here is how you would do it with TestNG, using data providers:
@DataProvider
public Object[][] dp() {
    return new Object[][] {
        new Object[] { 1, 7 },
        new Object[] { 2, 3 },
    };
}

@Test(dataProvider = "dp")
public void nthShouldMatch(int parameter, int expected) {
    Assert.assertEquals(getNth(parameter), expected);
}
Obviously, you should implement dp() in a way that it retrieves its values from the spreadsheet instead of hardcoding them like I just did, but you get the idea. Once you have implemented your data provider, all you need to do is update your spreadsheet and you don't even need to recompile your code.
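For illustration only, here is a rough sketch of a dp() that reads parameter/expected pairs from a simple CSV file; the file name and the two-column "parameter,expected" layout are assumptions, not the exact format shown in the question:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import org.testng.annotations.DataProvider;

@DataProvider
public Object[][] dp() throws IOException {
    List<Object[]> rows = new ArrayList<>();
    // Assumed format: one "parameter,expected" pair per line, e.g. "1,7"
    for (String line : Files.readAllLines(Paths.get("expected-values.csv"))) {
        String[] parts = line.split(",");
        rows.add(new Object[] { Integer.parseInt(parts[0].trim()), Integer.parseInt(parts[1].trim()) });
    }
    return rows.toArray(new Object[0][]);
}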