I used to write JUnit tests as methods, such as:
public class TextualEntailerTest {
    @Test public void test1() { ... }
    @Test public void test2() { ... }
    @Test public void test3() { ... }
}
Since most of the test cases have a similar structure, I decided to be "data-driven" and put the contents of the tests in XML files. So, I created a method "testFromFile(file)" and changed my tests to:
public class TextualEntailerTest {
    @Test public void test1() { testFromFile("test1.xml"); }
    @Test public void test2() { testFromFile("test2.xml"); }
    @Test public void test3() { testFromFile("test3.xml"); }
}
As I add more and more tests, I grow tired of adding a line for each new test file. Of course, I could put all the files in a single test:
public class TextualEntailerTest {
    @Test public void testAll() {
        for (String file : filesInFolder)  // pseudocode: all files in the folder
            testFromFile(file);
    }
}
However, I prefer each file to be a separate test, because that way JUnit gives nice statistics about how many files passed and failed.
So, my question is: how do I tell JUnit to run a separate test of the form "testFromFile(file)" for every file in a given folder?
You could use Theories, where the files are @DataPoints; that way you won't need to loop in your test, and it allows setup and cleanup around each file. It will still be reported as a single test, though.
Theories also have the issue that they fail fast (they quit after the first failure), just as your loop above does. I find that this is not good practice, since it can hide a situation where you have multiple bugs. I recommend using separate tests, or the loop combined with an ErrorCollector. I really wish Theories had ErrorCollector built in.
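For illustration, a minimal Theory-based sketch, assuming a hypothetical src/test/resources/tests folder and the testFromFile helper from the question:
import java.io.File;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class TextualEntailerTheoryTest {

    // every XML file in the folder becomes a data point
    @DataPoints
    public static String[] files() {
        return new File("src/test/resources/tests").list((dir, name) -> name.endsWith(".xml"));
    }

    @Theory
    public void entailsCorrectly(String file) {
        testFromFile(file); // hypothetical helper from the question
    }
}
The fail-fast issue just noted applies here too: the Theory stops at the first failing file.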
I'm not sure, but maybe these can help you: Reference1, Reference2. Hope this helps.
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(value = Parameterized.class)
public class JunitTest {

    private String filename;

    public JunitTest(String filename) {
        this.filename = filename;
    }

    @Parameters
    public static Collection<Object[]> data() {
        Object[][] data = new Object[][] { { "file1.xml" }, { "file2.xml" } };
        return Arrays.asList(data);
    }

    @Test
    public void test() {
        System.out.println("Test name: " + filename);
    }
}
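To address the folder part of the original question, the data() method above can list the directory instead of hardcoding file names. A sketch, assuming a hypothetical testdata folder (this also needs java.io.File, java.util.ArrayList, and java.util.List on the import list):
@Parameters(name = "{0}")
public static Collection<Object[]> data() {
    List<Object[]> files = new ArrayList<>();
    for (File f : new File("testdata").listFiles()) {
        if (f.getName().endsWith(".xml")) {
            files.add(new Object[] { f.getName() });
        }
    }
    return files;
}
The name = "{0}" attribute (available since JUnit 4.11) makes each file show up by name in the report, which gives the per-file pass/fail statistics the question asks for.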
I want to write some integration tests for my whole program (it's a standard command-line Java application with program args).
Basically I have 3 tests: one to create a resource, one to update the resource, and finally one to delete it.
I could do something like this:
@Test
public void create_resource() {
    MainApp.main(new String[] {"create", "my_resource_name"});
}

@Test
public void update_resource() {
    MainApp.main(new String[] {"update", "my_resource_name"});
}

@Test
public void delete_resource() {
    MainApp.main(new String[] {"delete", "my_resource_name"});
}
It works... as long as the methods are executed in the correct order. But I've heard that whether a test passes should not depend on the order of execution.
It's true that ordering tests is considered a smell. That said, there are cases where it makes sense, especially for integration tests.
Your sample code is a little vague, since there are no assertions in it. But it seems to me you could probably combine the three operations into a single test method. If you can't do that, you can just run them in order. JUnit 5 supports this with the @Order annotation:
import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

@TestMethodOrder(OrderAnnotation.class)
class OrderedTestsDemo {

    @Test
    @Order(1)
    void nullValues() {
        // perform assertions against null values
    }

    @Test
    @Order(2)
    void emptyValues() {
        // perform assertions against empty values
    }

    @Test
    @Order(3)
    void validValues() {
        // perform assertions against valid values
    }
}
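Applied to your create/update/delete example, that could look like the following sketch (the class name is made up; the assertions are still up to you):
@TestMethodOrder(OrderAnnotation.class)
class ResourceLifecycleTest {

    @Test
    @Order(1)
    void create_resource() {
        MainApp.main(new String[] {"create", "my_resource_name"});
        // assert the resource now exists
    }

    @Test
    @Order(2)
    void update_resource() {
        MainApp.main(new String[] {"update", "my_resource_name"});
        // assert the update took effect
    }

    @Test
    @Order(3)
    void delete_resource() {
        MainApp.main(new String[] {"delete", "my_resource_name"});
        // assert the resource is gone
    }
}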
My specific question is with regard to JUnit's parameterized tests: filtering out (essentially not running) tests when a certain property is present. For example:
@Test
public void test1() {
    if (property.contains("example")) {
        return;
    }
    assertEquals(expected, methodToTest1(actual));
}

@Test
public void test2() {
    if (property.contains("example")) {
        return;
    }
    assertEquals(expected, methodToTest2(actual));
}
The question is: does a technique exist whereby the constraint if (property.contains("example"))... can be defined statically somewhere else, instead of at the start of each and every test method? Like this:
/** define constraint "property.contains("example")" somewhere **/

@Test
public void test1() {
    assertEquals(expected, methodToTest1(actual));
}

@Test
public void test2() {
    assertEquals(expected, methodToTest2(actual));
}
You may use JUnit's Assume feature together with @Before.
Add a @Before method to your test class
import static org.junit.Assume.assumeFalse;

@Before
public void dontRunIfExample() {
    assumeFalse(property.contains("example"));
}
and remove the if block from each of your tests.
It depends on how you are running your JUnit tests. You can quite literally use Java's System.getProperty("conditionForTest"). If you are launching them from the command line, you will need to specify the property with -DconditionForTest=true; if you are running the tests with Ant, it can be passed into the Ant target:
<sysproperty key="conditionForTest" value="true"/>
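Combining this with the Assume answer above, the @Before guard can read the system property, so every test in the class is skipped (not failed) unless the flag was set. A sketch, using the hypothetical conditionForTest property:
import static org.junit.Assume.assumeTrue;

import org.junit.Before;
import org.junit.Test;

public class ConditionalTest {

    @Before
    public void onlyRunWhenEnabled() {
        // true only when -DconditionForTest=true was passed to the JVM
        assumeTrue(Boolean.getBoolean("conditionForTest"));
    }

    @Test
    public void test1() {
        // runs only when the property is set
    }
}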
I have a large set of files (>1000) on which I want to run the same set of assertions. The assertions are organized in logical groups, so I have broken them down into more than 30 @Test methods in a single class.
For each file, I first need to run some expensive pre-processing, and my assertions then run on the results of this pre-processing.
As the pre-processing is required by every @Test method, I want to execute it only once per file.
I used a @DataProvider in the following way:
import java.util.HashMap;
import java.util.Map;

import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.testng.AbstractTestNGSpringContextTests;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

@ContextConfiguration({ "classpath:/META-INF/spring/ev-xml-test-context.xml" })
public class FantasticTest extends AbstractTestNGSpringContextTests {

    private String[][] FILES = new String[][] {
        { "file1.xml" },
        { "file2.xml" },
        { "..." },
        { "file1000.xml" }
    };

    private Map<String, Object> preProcessingResults = new HashMap<>();

    @BeforeClass
    public void setup() throws Exception {
        for (int i = 0; i < FILES.length; i++) {
            preProcessingResults.put(FILES[i][0], expensivePreProcessing(FILES[i][0]));
        }
    }

    private Object expensivePreProcessing(String file) {
        // do something expensive
        return new ExpensiveObject(file);
    }

    @DataProvider(name = "myDataProvider")
    public String[][] dataProvider() {
        return FILES;
    }

    @Test(dataProvider = "myDataProvider")
    public void test1(String filename) {
        Object preProcessingResultsForFile = preProcessingResults.get(filename);
        // assertions
    }

    @Test(dataProvider = "myDataProvider")
    public void test2(String filename) {
        Object preProcessingResultsForFile = preProcessingResults.get(filename);
        // assertions
    }

    // ...

    @Test(dataProvider = "myDataProvider")
    public void test30(String filename) {
        Object preProcessingResultsForFile = preProcessingResults.get(filename);
        // assertions
    }
}
What I like about this approach is that in my IDE (IntelliJ) and in CI (Jenkins) I can see exactly where something failed. For example, I will see that the method test3("file823.xml") failed.
What I don't like about it is that:
The pre-processing results are held in memory for all files until the assertions begin
I have to wait until all the pre-processing has run before any assertions take place (I would prefer that, for each file, the pre-processing ran first, then the assertions, before moving on to the next file).
I also tried the @Factory approach described in this DZone article. What I liked about it is that each instance constructed in the @Factory can be self-contained, executing the pre-processing of its file in its @BeforeClass method.
What I didn't like was that:
I no longer had the name of the failing file in the failing test method, so all I could see was that the method test29() failed. This is because the dataProvider attribute is now on the @Factory, not on the @Test methods.
I still haven't managed to get the test-execution ordering to work (with testng.xml).
I was wondering if you have any suggestions for approaching this so that failing tests are self-explanatory (as in the first approach), but pre-processing and assertions are also interleaved per file (as in the second).
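For what it's worth, here is a minimal sketch of the per-file @Factory approach described above, with each instance pre-processing only its own file in its @BeforeClass (all names are hypothetical); overriding toString() is one way to try to get the file name back into the reports:
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Factory;
import org.testng.annotations.Test;

public class PerFileTestFactory {

    @Factory
    public Object[] createInstances() {
        String[] files = { "file1.xml", "file2.xml" /* ... */ };
        Object[] instances = new Object[files.length];
        for (int i = 0; i < files.length; i++) {
            instances[i] = new PerFileTest(files[i]);
        }
        return instances;
    }
}

class PerFileTest {

    private final String filename;
    private Object preProcessingResult;

    PerFileTest(String filename) {
        this.filename = filename;
    }

    // each instance pre-processes only its own file, just before its tests run
    @BeforeClass
    public void setup() {
        preProcessingResult = new ExpensiveObject(filename); // from the question
    }

    @Test
    public void test1() {
        // assertions on preProcessingResult
    }

    // the instance's toString() may show up in some reports, exposing the file name
    @Override
    public String toString() {
        return filename;
    }
}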
I am trying to run a class with multiple tests under two different conditions. Basically I have a bunch of tests related to a search. I am adding a new search strategy, and in the meantime I want to run the already-written tests under both configurations. As we have multiple classes, each with multiple tests, I want to streamline this process as much as possible. Ideally it would be great to do the setup in a @BeforeClass with a data provider, so that all tests in the class are basically run twice under the different configurations, but it doesn't look like this is possible.
Right now I have:
public class SearchTest1 {

    @Test(dataProvider = "SearchType")
    public void test1(SearchType searchType) {
        setSearchType(searchType);
        // Do the test1 logic
    }

    @Test(dataProvider = "SearchType")
    public void test2(SearchType searchType) {
        setSearchType(searchType);
        // Do the test2 logic
    }

    @DataProvider(name = "SearchType")
    public Object[][] createData() {
        // one row per search type, so each test runs once per configuration
        return new Object[][] {
            new Object[] { SearchType.scheme1 },
            new Object[] { SearchType.scheme2 }
        };
    }
}
Is there a better way to do this?
If you want to avoid having to annotate each and every method with the data provider, you can use a Factory instead.
public class SearchTest1 {

    private final SearchType searchType;

    public SearchTest1(SearchType searchType) {
        this.searchType = searchType;
    }

    @Test
    public void test2() {
        // Do the test2 logic
    }

    ...
}
And your factory class will be:
public class SearchTestFactory {

    @Factory
    public Object[] createInstances() {
        return new Object[] { new SearchTest1(SearchType.ONE), new SearchTest1(SearchType.TWO) };
    }
}
See more on this here.
Then you can either have one factory that enumerates every test class, or a separate factory for each; the first is obviously less flexible, the second means slightly more code.
You can use parameters in @BeforeClass. Just use (with some cleanup)
context.getCurrentXmlTest().getParameters()
@SuppressWarnings("deprecation")
@BeforeClass
public void setUp(ITestContext context) {
    System.out.println(context.getCurrentXmlTest().getAllParameters());
}
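The parameters it prints come from the suite file, so the same test class can be listed twice with different parameter values. A minimal testng.xml sketch (suite, test, and class names are hypothetical):
<suite name="SearchSuite">
  <test name="Scheme1Search">
    <parameter name="searchType" value="scheme1"/>
    <classes>
      <class name="com.example.SearchTest1"/>
    </classes>
  </test>
  <test name="Scheme2Search">
    <parameter name="searchType" value="scheme2"/>
    <classes>
      <class name="com.example.SearchTest1"/>
    </classes>
  </test>
</suite>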
I just discovered, when creating some CRUD tests, that you can't set data in one test and have it read in another test (the data is reset to its initial state between tests).
All I'm trying to do is (C)reate an object with one test, and (R)ead it with the next. Does JUnit have a way to do this, or is it ideologically coded such that tests are not allowed to depend on each other?
Well, for unit tests your aim should be to test the smallest isolated piece of code, usually method by method.
So testCreate() is one test case and testRead() is another. However, there is nothing stopping you from creating a testCreateAndRead() to test the two functions together. But then, if the test fails, which code unit does it fail at? You don't know. Those kinds of tests are more like integration tests, which should be treated differently.
If you really want to do it, you can create a static class variable to store the object created by testCreate(), then use it in testRead().
As I have no idea which version of JUnit you're talking about, I'll just pick the ancient JUnit 3.8:
Utterly ugly but works:
import junit.framework.TestCase;

public class Test extends TestCase {

    static String stuff;

    public void testCreate() {
        stuff = "abc";
    }

    public void testRead() {
        assertEquals("abc", stuff);
    }
}
JUnit promotes independent tests. One option would be to put the two logical tests into one @Test method.
TestNG was partly created to allow these kinds of dependencies among tests. It enforces local declarations of test dependencies -- it runs tests in a valid order, and does not run tests that depend on a failed test. See http://testng.org/doc/documentation-main.html#dependent-methods for examples.
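A minimal sketch of such a dependency declaration in TestNG:
import org.testng.annotations.Test;

public class CrudTest {

    @Test
    public void create() {
        // create the object
    }

    // runs only after create() has passed; it is skipped (not failed) if create() fails
    @Test(dependsOnMethods = "create")
    public void read() {
        // read back what create() stored
    }
}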
JUnit tests are meant to be independent. But if you have no other way, you can use a static field to store it.
static String storage;

@Test
public void method1() {
    storage = "Hello";
}

// relies on method1 running first, which JUnit does not guarantee
@Test
public void method2() {
    Assert.assertThat(something, is(storage));
}
How much processing time do these tests take? If not a lot, then why sweat it? Sure, you will create some objects unnecessarily, but how much does that cost you?
@Test
public void testCreateObject() {
    Object obj = unit.createObject();
}

@Test
public void testReadObject() {
    Object obj = null;
    try {
        obj = unit.createObject(); // this duplicates tests already done
    } catch (Exception cause) {
        assumeNoException(cause);
    }
    unit.readObject(obj);
}
In this basic example, the variable is changed in test A and can then be read in test B:
public class BasicTest extends ActivityInstrumentationTestCase2 {

    private static final String S = "BasicTest"; // log tag (hypothetical name)

    public BasicTest() throws ClassNotFoundException {
        super(TARGET_PACKAGE_ID, launcherActivityClass);
    }

    public static class MyClass {
        // static, so the value survives across the fresh instances created per test
        public static String myvar = null;

        public void set(String s) {
            myvar = s;
        }

        public String get() {
            return myvar;
        }
    }

    private MyClass sharedVar;

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        sharedVar = new MyClass();
    }

    public void test_A() {
        Log.d(S, "run A");
        sharedVar.set("blah");
    }

    public void test_B() {
        Log.d(S, "run B");
        Log.i(S, "sharedVar is: " + sharedVar.get());
    }
}
The output is:
run A
run B
sharedVar is: blah