I have a large set of files (>1000) against which I want to run the same set of assertions. The assertions are organized in logical groups, so I have broken them down into more than 30 @Test methods in a single class.
Before running the assertions, each file requires some expensive pre-processing, and my assertions run on the results of this pre-processing.
As the pre-processing is required by every @Test method, I want to execute it only once per file.
I used a @DataProvider in the following way:
@ContextConfiguration({ "classpath:/META-INF/spring/ev-xml-test-context.xml" })
public class FantasticTest extends AbstractTestNGSpringContextTests {

    private String[][] FILES = new String[][] {
        { "file1.xml" },
        { "file2.xml" },
        { "..." },
        { "file1000.xml" }
    };

    private Map<String, Object> preProcessingResults = new HashMap<>();

    @BeforeClass
    public void setup() throws Exception {
        for (int i = 0; i < FILES.length; i++) {
            preProcessingResults.put(FILES[i][0], expensivePreProcessing(FILES[i][0]));
        }
    }

    private Object expensivePreProcessing(String file) {
        // do something expensive
        return new ExpensiveObject(file);
    }

    @DataProvider(name = "myDataProvider")
    public String[][] dataProvider() {
        return FILES;
    }

    @Test(dataProvider = "myDataProvider")
    public void test1(String filename) {
        Object preProcessingResultsForFile = preProcessingResults.get(filename);
        // assertions
    }

    @Test(dataProvider = "myDataProvider")
    public void test2(String filename) {
        Object preProcessingResultsForFile = preProcessingResults.get(filename);
        // assertions
    }

    // ...

    @Test(dataProvider = "myDataProvider")
    public void test30(String filename) {
        Object preProcessingResultsForFile = preProcessingResults.get(filename);
        // assertions
    }
}
What I like about this approach is that in my IDE (IntelliJ) and in CI (Jenkins) I can see exactly where something failed. For example, I will see that the method test3("file823.xml") failed.
What I don't like about it is that:
The pre-processing results are held in memory for all files until the assertions begin
I have to wait until all the pre-processing has run before any assertions take place (I would prefer it if, for each file, the pre-processing ran, then the assertions, before moving on to the next file).
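The per-file interleaving described in the last point can be illustrated (plain Java, not the original test code; the class and method names here are made up for illustration) with a lazily populated cache, where each file's result is computed on first access instead of all upfront:

```java
import java.util.HashMap;
import java.util.Map;

public class LazyPreProcessing {
    private final Map<String, Object> cache = new HashMap<>();

    // Computed on first access for a file, then reused by later test methods,
    // so pre-processing happens just before that file's first assertion.
    public Object resultFor(String file) {
        return cache.computeIfAbsent(file, this::expensivePreProcessing);
    }

    private Object expensivePreProcessing(String file) {
        return "processed:" + file; // stand-in for the real expensive work
    }

    public static void main(String[] args) {
        LazyPreProcessing p = new LazyPreProcessing();
        System.out.println(p.resultFor("file1.xml")); // computed now
        System.out.println(p.resultFor("file1.xml")); // served from the cache
    }
}
```

In a real data-provider-driven test, `resultFor(filename)` would replace the `preProcessingResults.get(filename)` lookup, at the cost of the first test method touching each file paying the pre-processing price.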
I also tried the approach with the @Factory as described in this DZone article. What I liked about it is that each instance constructed in the @Factory can be self-contained, executing the pre-processing of its file in its @BeforeClass method.
What I didn't like was that:
I no longer had the name of the failing file in the failing test method. All I could see was that the method test29() failed. This is because the dataProvider attribute is now on the @Factory, and not on the @Test methods.
I still haven't managed to get the test-execution ordering to work (via testng.xml).
I was wondering if you have any suggestions for approaching this so that failing tests are self-explanatory (as in the first approach), but pre-processing and assertions are also interleaved per file (as in the second approach).
Related
I have a test that runs multiple times using a data-provider and it looks something like:
@DataProvider(name = "data-provider-carmakers")
public Object[][] dataProviderCarMakers() {
    return new Object[][] {
        { CarMaker.Ford },
        { CarMaker.Chevrolet },
        { CarMaker.Renault },
        { CarMaker.Porsche }
    };
}
@Test(dataProvider = "data-provider-carmakers",
      retryAnalyzer = TestRetry.class)
public void validateCarMakerHasElectricModelsLoaded(CarMaker carMaker) {
    validateCarMakerContainsElectricModelsLoadedInDB(carMaker);
}
In another test, I have a dependency on the first:
@Test(dependsOnMethods = { "validateCarMakerHasElectricModelsLoaded" })
public void validateChevroletElectricModelsPowerEfficiency() {
    List<CarModel> electricCarModels = getChevroletCarModels(FuelType.Electric);
    validatePowerEfficiency(electricCarModels);
}
(I know the test doesn't make a lot of sense; in reality the code is far more complex than this and the data provider has far more data, but for the sake of clarity I just went with this example.)
So I want to run validateChevroletElectricModelsPowerEfficiency() only if validateCarMakerHasElectricModelsLoaded()[CarMaker.Chevrolet] was successful.
As the code stands now, if the first test passes for Chevrolet but fails for Renault, the second test won't run. Is there a way to make a dependency on just one data set of a test?
I am trying to run a class with multiple tests under two different conditions. Basically I have a bunch of tests related to a search. I am adding new functionality of a new search strategy, and in the meantime want to run the already written tests under both configurations. As we have multiple classes each with multiple tests I want to streamline this process as much as possible. Ideally it'd be great to do the setup in a #BeforeClass with a data provider so that all tests in the class are basically run twice under the different configurations, but doesn't look like this is possible.
Right now I have:
public class SearchTest1 {

    @Test(dataProvider = "SearchType")
    public void test1(SearchType searchType) {
        setSearchType(searchType);
        // Do the test1 logic
    }

    @Test(dataProvider = "SearchType")
    public void test2(SearchType searchType) {
        setSearchType(searchType);
        // Do the test2 logic
    }

    @DataProvider(name = "SearchType")
    public Object[][] createData() {
        return new Object[][] {
            { SearchType.scheme1 },
            { SearchType.scheme2 }
        };
    }
}
Is there a better way to do this?
If you want to avoid having to annotate each and every method with the data provider, you can use a Factory instead.
public class SearchTest1 {
    private final SearchType searchType;

    public SearchTest1(SearchType searchType) {
        this.searchType = searchType;
    }

    @Test
    public void test2() {
        // Do the test2 logic
    }

    ...
}
And your factory class will be:
public class SearchTestFactory {
    @Factory
    public Object[] createInstances() {
        return new Object[] { new SearchTest1(SearchType.ONE), new SearchTest1(SearchType.TWO) };
    }
}
See more on this here.
Then you can either have one factory that enumerates every test class, or a separate factory for each; the first is less flexible, the second means slightly more code.
You can use parameters in @BeforeClass. Just use (with some cleanup)
context.getCurrentXmlTest().getParameters()
@SuppressWarnings("deprecation")
@BeforeClass
public void setUp(ITestContext context) {
    System.out.println(context.getCurrentXmlTest().getAllParameters());
}
I have the following code, where each url in listOne is tested with the method testItem:
@Parameters(name = "{0}")
public static Collection<Object[]> data() throws Exception {
    final Set<String> listOne = getListOne();
    final Collection<Object[]> data = new ArrayList<>();
    for (final String url : listOne) {
        data.add(new Object[] { url });
    }
    return data;
}

@Test
public void testItem() {
    driverOne.makeDecision(urlToTest);
    assertTrue(driverOne.success(urlToTest));
}
What if I now wanted to add a second list, listTwo, and run a test method defined as follows on JUST the items of listTwo (but not listOne)?
@Test
public void testItemAlternate() {
    driverTwo.makeDecision(urlToTest);
    assertTrue(driverTwo.success(urlToTest));
}
That is, I want driverOne to make the decision for all URLs in listOne, and I want driverTwo to make the decision for all URLs in listTwo. What is the best way to translate this to code? Thanks.
Cited from: https://github.com/junit-team/junit/wiki/Parameterized-tests
The custom runner Parameterized implements parameterized tests. When running a parameterized test class, instances are created for the cross-product of the test methods and the test data elements.
Thus, I assume No, that's not possible.
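To see concretely what the cross-product the quote describes means, here is a plain-Java sketch (the method and data names are made up for illustration): every test method is paired with every data element, so there is no way to bind one method to only a subset of the data.

```java
import java.util.ArrayList;
import java.util.List;

public class CrossProduct {
    // Simulates the Parameterized runner: each data element produces an
    // instance, and every test method runs on every instance.
    static List<String> runs(List<String> methods, List<String> data) {
        List<String> all = new ArrayList<>();
        for (String d : data) {
            for (String m : methods) {
                all.add(m + "(" + d + ")");
            }
        }
        return all;
    }

    public static void main(String[] args) {
        System.out.println(runs(
            List.of("testItem", "testItemAlternate"),
            List.of("urlFromListOne", "urlFromListTwo")));
        // Both methods run for both URLs: 4 executions in total.
    }
}
```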
If you want to do such a thing, I guess you will either
(1) need to construct two test classes, one for the tests to be executed with the first collection and one for the tests to be executed with the second collection, or
(2) need to use another mechanism besides the @Parameters annotation, maybe hand-crafted.
You could include some test set identifier in your test set data itself, and then use the org.junit.Assume class to help:
@Test
public void testItem() {
    org.junit.Assume.assumeTrue(testSetId.equals("TEST_SET_1"));
    driverOne.makeDecision(urlToTest);
    assertTrue(driverOne.success(urlToTest));
}

@Test
public void testItemAlternate() {
    org.junit.Assume.assumeTrue(testSetId.equals("TEST_SET_2"));
    driverTwo.makeDecision(urlToTest);
    assertTrue(driverTwo.success(urlToTest));
}
As a completely different answer, there exists junit-dataprovider
I used to write JUnit tests as methods, such as:
public class TextualEntailerTest {
    @Test public void test1() { ... }
    @Test public void test2() { ... }
    @Test public void test3() { ... }
}
Since most of the test cases have a similar structure, I decided to go "data-driven" and put the contents of the tests in XML files. So, I created a method testFromFile(file) and changed my tests to:
public class TextualEntailerTest {
    @Test public void test1() { testFromFile("test1.xml"); }
    @Test public void test2() { testFromFile("test2.xml"); }
    @Test public void test3() { testFromFile("test3.xml"); }
}
As I add more and more tests, I grow tired of adding a line for each new test file. Of course I could put all files in a single test:
public class TextualEntailerTest {
    @Test
    public void testAll() {
        for (String file : filesInFolder)
            testFromFile(file);
    }
}
However, I prefer each file to be a separate test, because that way JUnit gives nice statistics about how many files passed and failed.
So, my question is: how to tell JUnit to run separate tests, where each test is of the form "testFromFile(file)", for all files in a given folder?
You could use Theories, where the files are @DataPoints, so you won't need to loop in your test, and it allows for setup and cleanup around each file. But it will still be reported as a single test.
Theories also have the issue that they fail fast (quit after the first failure), as your test above does. I find that this is not good practice since it can hide a situation where you have multiple bugs. I recommend using separate tests, or using the loop with an ErrorCollector. I really wish Theories had ErrorCollector built in.
Not sure, but maybe these can help you:
Reference1 Reference2. Hope this helps.
@RunWith(value = Parameterized.class)
public class JunitTest {
    private String filename;

    public JunitTest(String filename) {
        this.filename = filename;
    }

    @Parameters
    public static Collection<Object[]> data() {
        Object[][] data = new Object[][] { { "file1.xml" }, { "file2.xml" } };
        return Arrays.asList(data);
    }

    @Test
    public void test() {
        System.out.println("Test name: " + filename);
    }
}
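If the hard-coded array should instead come from the folder of test files, the data() method could list the directory. A sketch (the folder path and the .xml filter are assumptions, not from the original code):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.Collection;

public class FolderData {
    // Builds Parameterized-style data rows from every .xml file in a folder,
    // one row per file, so each file shows up as its own test.
    static Collection<Object[]> data(File folder) {
        Collection<Object[]> data = new ArrayList<>();
        File[] files = folder.listFiles((dir, name) -> name.endsWith(".xml"));
        if (files != null) {
            for (File f : files) {
                data.add(new Object[] { f.getName() });
            }
        }
        return data;
    }

    public static void main(String[] args) throws Exception {
        // Demonstration with a temporary folder holding one matching file.
        File dir = java.nio.file.Files.createTempDirectory("xmltests").toFile();
        new File(dir, "file1.xml").createNewFile();
        new File(dir, "notes.txt").createNewFile();
        System.out.println(data(dir).size()); // only the .xml file is picked up
    }
}
```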
I just discovered, when creating some CRUD tests, that you can't set data in one test and have it read in another test (data is reset to its initial values between tests).
All I'm trying to do is (C)reate an object with one test, and (R)ead it with the next. Does JUnit have a way to do this, or is it ideologically coded such that tests are not allowed to depend on each other?
Well, for unit tests your aim should be to test the smallest isolated piece of code, usually method by method.
So testCreate() is one test case and testRead() is another. However, nothing stops you from creating a testCreateAndRead() to test the two functions together. But then, if the test fails, which code unit does it fail at? You don't know. Those kinds of tests are more like integration tests, which should be treated differently.
If you really want to do it, you can create a static class variable to store the object created by testCreate(), then use it in testRead().
As I have no idea what version of JUnit you are talking about, I'll just pick the ancient JUnit 3.8.
Utterly ugly, but it works:
public class Test extends TestCase {
    static String stuff;

    public void testCreate() {
        stuff = "abc";
    }

    public void testRead() {
        assertEquals("abc", stuff);
    }
}
JUnit promotes independent tests. One option would be to put the two logical tests into one @Test method.
TestNG was partly created to allow these kinds of dependencies among tests. It enforces local declarations of test dependencies -- it runs tests in a valid order, and does not run tests that depend on a failed test. See http://testng.org/doc/documentation-main.html#dependent-methods for examples.
JUnit tests are meant to be independent. But if you have no other way, you can use a static field to store it.
static String storage;

@Test
public void method1() {
    storage = "Hello";
}

@Test
public void method2() {
    Assert.assertThat(something, is(storage));
}
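The reason a plain instance field would not work here is that JUnit creates a fresh instance of the test class for every test method, so only static state survives between tests. A plain-Java sketch of that behavior (no JUnit involved; the class name is made up):

```java
public class InstancePerTest {
    static String staticField;  // shared across all instances
    String instanceField;       // starts over with every new instance

    void testWrite() {
        staticField = "Hello";
        instanceField = "Hello";
    }

    public static void main(String[] args) {
        new InstancePerTest().testWrite();            // first test, first instance
        InstancePerTest next = new InstancePerTest(); // JUnit-style fresh instance
        System.out.println(staticField);              // prints "Hello"
        System.out.println(next.instanceField);       // prints "null"
    }
}
```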
How much processing time do these tests take? If not a lot, then why sweat it? Sure, you will create some objects unnecessarily, but how much does that cost you?
@Test
public void testCreateObject() {
    Object obj = unit.createObject();
}

@Test
public void testReadObject() {
    Object obj = null;
    try {
        obj = unit.createObject(); // this duplicates tests already done
    } catch (Exception cause) {
        assumeNoException(cause);
    }
    unit.readObject(obj);
}
In this basic example, the variable is changed in test A and can be used in test B:
public class BasicTest extends ActivityInstrumentationTestCase2 {

    public BasicTest() throws ClassNotFoundException {
        super(TARGET_PACKAGE_ID, launcherActivityClass);
    }

    public static class MyClass {
        public static String myvar = null;

        public void set(String s) {
            myvar = s;
        }

        public String get() {
            return myvar;
        }
    }

    private MyClass sharedVar;

    @Override
    protected void setUp() throws Exception {
        sharedVar = new MyClass();
    }

    public void test_A() {
        Log.d(S, "run A");
        sharedVar.set("blah");
    }

    public void test_B() {
        Log.d(S, "run B");
        Log.i(S, "sharedVar is: " + sharedVar.get());
    }
}
The output is:
run A
run B
sharedVar is: blah