Different parameterized arguments for different tests in JUnit - java

Okay, so I'm building test cases for my project and I'm using JUnit for testing. The problem I'm facing is that I need a different set of arguments for different test cases in the same file.
public class ForTesting {

    // Test 1 should run on inputs {1, true} and {2, true}
    @Test
    public void test1() {
        // Do first test case
    }

    // Test 2 should run on inputs {3, true} and {4, true}
    @Test
    public void test2() {
        // Do another test case
    }
}
I know I can provide multiple arguments using parameterized tests, but the problem is that the same set of arguments runs for all the test cases. Is there a way to do this?

If you're not looking ONLY for standard JUnit parameterized tests, then depending on your company's legal policies you can use (at least) the following two libraries, which make things easier (both to implement and to read):
1) JUnitParams (Apache 2)

@RunWith(JUnitParamsRunner.class)
public class PersonTest {

    @Test
    @Parameters({"17, false",
                 "22, true"})
    public void shouldDecideAdulthood(int age, boolean expectedAdulthood) throws Exception {
        assertThat(new Person(age).isAdult(), is(expectedAdulthood));
    }
}
2) Zohhak (LGPL), inspired by JUnitParams but bringing some more sugar to the table (easy separator config, converters, etc.)

@RunWith(ZohhakRunner.class)
public class PersonTest {

    @TestWith({"17, false",
               "22, true"})
    public void shouldDecideAdulthood(int age, boolean expectedAdulthood) throws Exception {
        assertThat(new Person(age).isAdult(), is(expectedAdulthood));
    }
}
Credits: Examples above have been shamelessly copied and adjusted from JUnitParams' readme.

A few options:

Use Theories (run the class with @RunWith(Theories.class) and supply values via @DataPoints).

In a @Theory, use Assume.assumeTrue to restrict which inputs each theory accepts:

@Theory
public void shouldPassForSomeInts(int param) {
    Assume.assumeTrue(param == 1 || param == 2);
    ...
}

@Theory
public void shouldPassForOtherInts(int param) {
    Assume.assumeTrue(param == 3 || param == 4);
    ...
}

Or use @TestedOn:

@Theory
public void shouldPassForSomeInts(@TestedOn(ints = {1, 2}) int param) {
    ...
}

@Theory
public void shouldPassForOtherInts(@TestedOn(ints = {3, 4}) int param) {
    ...
}

Related

Java/JUnit Filtering Repeated Conditions

My specific question is with regard to JUnit's parameterized tests: filtering (essentially not running) a test if a certain property holds. For example:
@Test
public void test1() {
    if (property.contains("example")) {
        return;
    }
    assertEquals(expected, methodToTest1(actual));
}

@Test
public void test2() {
    if (property.contains("example")) {
        return;
    }
    assertEquals(expected, methodToTest2(actual));
}
The question is: does a technique exist whereby the constraint if (property.contains("example"))... can be defined somewhere else, statically, instead of before each and every test method? Like this:
/** define constraint "property.contains("example")" somewhere **/

@Test
public void test1() {
    assertEquals(expected, methodToTest1(actual));
}

@Test
public void test2() {
    assertEquals(expected, methodToTest2(actual));
}
You may use JUnit's Assume feature together with @Before.
Add an @Before method to your test class:

import static org.junit.Assume.assumeFalse;

@Before
public void dontRunIfExample() {
    assumeFalse(property.contains("example"));
}

and remove the if block from each of your tests.
It depends on how you are running your JUnit tests. You can quite literally use Java's System.getProperty("conditionForTest"). If you are launching them from the command line, you will need to specify the property with -DconditionForTest=true, or if you are running the tests with ant, it can be passed into the ant target:

<sysproperty key="conditionForTest" value="true"/>
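For example, a minimal sketch combining that system property with the Assume approach above (the property name conditionForTest and the class name are illustrative, not from a real API):

import static org.junit.Assume.assumeFalse;

import org.junit.Before;
import org.junit.Test;

public class ConditionalTest {

    @Before
    public void skipWhenPropertySet() {
        // Boolean.getBoolean reads the system property and parses it as a boolean,
        // so every test in this class is skipped when -DconditionForTest=true is passed.
        assumeFalse(Boolean.getBoolean("conditionForTest"));
    }

    @Test
    public void test1() {
        // runs only when the property is absent or false
    }
}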

Adding own annotation to dynamically skip tests in TestNG

I would like to provide an elegant mechanism to skip chosen tests when the value of some environment variable is not admissible. I chose to add my own annotation, @RunCondition, to define which values are allowed for particular tests. I then created my own TestNG listener that marks tests as disabled when the value of the environment variable is not within the admissible range defined by the annotation parameters.
My code looks as follows:
public class ExampleTest {

    private int envVar;

    @BeforeClass
    public void setUp() {
        // set up of some environment variables which depend on an external source
        StaticContext.setVar(getValueFromOuterSpace());
    }

    @RunCondition(envVar = 2)
    @Test
    public void testFoo() {
    }
}

public class SkipTestTransformer implements IAnnotationTransformer {

    @Override
    public void transform(ITestAnnotation iTestAnnotation, Class aClass, Constructor constructor, Method method) {
        RunCondition annotation = method.getAnnotation(RunCondition.class);
        if (annotation == null) {
            return; // method is not annotated, leave it enabled
        }
        int[] admissibleValues = annotation.envVar();
        for (int val : admissibleValues) {
            if (StaticContext.getVar() == val) {
                return; // the environment variable matches one of the admissible values, so do not skip
            }
        }
        iTestAnnotation.setEnabled(false);
    }
}

@Retention(RetentionPolicy.RUNTIME) // required so the annotation is visible via reflection
public @interface RunCondition {
    int[] envVar();
}
My code works great, but there is a small problem: the transform method is invoked before setUp, which is the @BeforeClass method. Is there any other way to run the transformer after all the test initialization? I consider this solution elegant and clear, and I don't want any ugly if clauses to reach my goal...
I'm using Java 7 and TestNG v5.11.
Try implementing IMethodInterceptor (an instance of this class will be invoked right before TestNG starts invoking test methods) instead of an annotation transformer. It allows you to manage the list of tests that will be executed, and it also lets you work with your tests' annotations. The restriction is that test methods having dependencies will not be passed to the intercept method.
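A rough sketch of what such an interceptor could look like (RunCondition and StaticContext come from the question; getConstructorOrMethod() is the TestNG 6+ way to reach the reflection Method, while on 5.x ITestNGMethod.getMethod() returns it directly):

import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

import org.testng.IMethodInstance;
import org.testng.IMethodInterceptor;
import org.testng.ITestContext;

public class SkipTestInterceptor implements IMethodInterceptor {

    @Override
    public List<IMethodInstance> intercept(List<IMethodInstance> methods, ITestContext context) {
        List<IMethodInstance> kept = new ArrayList<IMethodInstance>();
        for (IMethodInstance instance : methods) {
            // TestNG 6+; on 5.x call instance.getMethod().getMethod() instead.
            Method method = instance.getMethod().getConstructorOrMethod().getMethod();
            RunCondition annotation = method.getAnnotation(RunCondition.class);
            if (annotation == null || isAdmissible(annotation)) {
                kept.add(instance); // keep the test in the run
            }
            // otherwise the test is dropped from the execution list entirely
        }
        return kept;
    }

    private boolean isAdmissible(RunCondition annotation) {
        for (int val : annotation.envVar()) {
            if (StaticContext.getVar() == val) {
                return true;
            }
        }
        return false;
    }
}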
There is a better concept directly supported by the testing frameworks, called assumptions. You should not disable the test, but rather skip its execution:
in JUnit you can use the Assume.assumeTrue(boolean) / assumeThat(...) family of methods
in TestNG you can throw SkipException
In that case the method will not disappear; it will be marked as skipped.
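For example, the JUnit flavor of the same check might look like this (a sketch reusing the question's StaticContext):

import static org.junit.Assume.assumeTrue;

import org.junit.Test;

public class ExampleTest {

    @Test
    public void testFoo() {
        // Marks the test as skipped (not failed) when the condition doesn't hold.
        assumeTrue(StaticContext.getVar() == 2);
        // ... actual test logic
    }
}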
You can check your own annotation in a setup method (@BeforeMethod) and throw a SkipException to skip the test.
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

import org.testng.SkipException;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class ExampleTest {

    private int envVar;

    @BeforeClass
    public void setUp() {
        // set up of some environment variables which depend on an external source
        StaticContext.setVar(2);
    }

    @BeforeMethod
    public void checkRunCondition(Method method) {
        RunCondition annotation = method.getAnnotation(RunCondition.class);
        if (annotation != null) {
            for (int val : annotation.envVar()) {
                if (StaticContext.getVar() == val) {
                    // the environment variable matches an admissible value, so run the test
                    return;
                }
            }
            throw new SkipException("skip because of RunCondition");
        }
    }

    @RunCondition(envVar = 2)
    @Test
    public void testFoo() {
    }

    @Retention(RetentionPolicy.RUNTIME)
    public @interface RunCondition {
        int[] envVar();
    }
}

TestNG data providers with a @BeforeClass

I am trying to run a class with multiple tests under two different conditions. Basically I have a bunch of tests related to a search. I am adding a new search strategy, and in the meantime want to run the already-written tests under both configurations. As we have multiple classes, each with multiple tests, I want to streamline this process as much as possible. Ideally it'd be great to do the setup in a @BeforeClass with a data provider, so that all tests in the class are run twice under the different configurations, but it doesn't look like this is possible.
Right now I have:
public class SearchTest1 {

    @Test(dataProvider = "SearchType")
    public void test1(SearchType searchType) {
        setSearchType(searchType);
        // Do the test1 logic
    }

    @Test(dataProvider = "SearchType")
    public void test2(SearchType searchType) {
        setSearchType(searchType);
        // Do the test2 logic
    }

    @DataProvider(name = "SearchType")
    public Object[][] createData() {
        // one inner array per invocation, each holding one parameter
        return new Object[][] {
                { SearchType.scheme1 },
                { SearchType.scheme2 }
        };
    }
}
Is there a better way to do this?
If you want to avoid having to annotate each and every method with the data provider, you can use a Factory instead.
public class SearchTest1 {

    private final SearchType searchType;

    public SearchTest1(SearchType searchType) {
        this.searchType = searchType;
    }

    @Test
    public void test2() {
        // Do the test2 logic
    }

    ...
}
And your factory class will be:

public class SearchTestFactory {

    @Factory
    public Object[] createInstances() {
        return new Object[] { new SearchTest1(SearchType.ONE), new SearchTest1(SearchType.TWO) };
    }
}
See more on this here.
Then you can either have one factory that enumerates every test class, or a separate factory for each; the first option is obviously less flexible, while the second means slightly more code.
You can use parameters in @BeforeClass. Just use (with some cleanup) context.getCurrentXmlTest().getAllParameters():

@SuppressWarnings("deprecation")
@BeforeClass
public void setUp(ITestContext context) {
    System.out.println(context.getCurrentXmlTest().getAllParameters());
}
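Those parameters come from the suite XML; a minimal testng.xml sketch, assuming the suite defines them (all names here are illustrative):

<suite name="SearchSuite">
  <test name="SearchTests">
    <parameter name="searchType" value="scheme1"/>
    <classes>
      <class name="SearchTest1"/>
    </classes>
  </test>
</suite>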

JUnit tests in files

I used to write JUnit tests as methods, such as:
public class TextualEntailerTest {
    @Test public void test1() {...}
    @Test public void test2() {...}
    @Test public void test3() {...}
}
Since most of the test cases have a similar structure, I decided to be "data-driven" and put the contents of the tests in XML files. So I created a method testFromFile(file) and changed my test class to:
public class TextualEntailerTest {
    @Test public void test1() { testFromFile("test1.xml"); }
    @Test public void test2() { testFromFile("test2.xml"); }
    @Test public void test3() { testFromFile("test3.xml"); }
}
As I add more and more tests, I am growing tired of adding a line for each new test file. Of course I can put all the files in a single test:
public class TextualEntailerTest {
    @Test public void testAll() {
        for (String file : filesInFolder)
            testFromFile(file);
    }
}
However, I prefer that each file be a separate test, because that way JUnit gives nice statistics about the number of files that passed and failed.
So, my question is: how to tell JUnit to run separate tests, where each test is of the form "testFromFile(file)", for all files in a given folder?
You could use Theories, where the files are @DataPoints, so you won't need to loop in your test, and it allows for setup and cleanup around each file. But it will still be reported as a single test.
Theories also have the issue that they fail fast (quit after the first failure), as your loop above does. I find that this is not good practice, since it can hide a situation where you have multiple bugs. I recommend using separate tests, or using the loop with an ErrorCollector. I really wish Theories had ErrorCollector built in.
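A rough sketch of the Theories approach (the folder path is an assumption, and testFromFile is the helper from the question):

import java.io.File;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class TextualEntailerTest {

    // Every file in the folder becomes a data point fed to the theory.
    @DataPoints
    public static File[] testFiles() {
        return new File("src/test/resources/tests").listFiles(); // assumed location
    }

    @Theory
    public void passesTestFromFile(File file) {
        testFromFile(file.getPath()); // the question's helper method
    }

    private void testFromFile(String path) {
        // parse the XML and run the assertions, as in the question
    }
}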
Not sure, but maybe these can help you: Reference1, Reference2. Hope this helps.
@RunWith(value = Parameterized.class)
public class JunitTest {

    private String filename;

    public JunitTest(String filename) {
        this.filename = filename;
    }

    @Parameters
    public static Collection<Object[]> data() {
        Object[][] data = new Object[][] { { "file1.xml" }, { "file2.xml" } };
        return Arrays.asList(data);
    }

    @Test
    public void test() {
        System.out.println("Test name: " + filename);
    }
}

Passing JUnit data between tests

I just discovered, when creating some CRUD tests, that you can't set data in one test and have it read in another test (data is reset to its initial state between tests).
All I'm trying to do is (C)reate an object with one test, and (R)ead it with the next. Does JUnit have a way to do this, or is it ideologically coded such that tests are not allowed to depend on each other?
Well, for unit tests your aim should be to test the smallest isolated piece of code, usually method by method.
So testCreate() is one test case and testRead() is another. However, there is nothing that stops you from creating a testCreateAndRead() to test the two functions together. But then if the test fails, which code unit does it fail in? You don't know. Those kinds of tests are more like integration tests, which should be treated differently.
If you really want to do it, you can create a static class variable to store the object created by testCreate(), then use it in testRead().
As I have no idea which version of JUnit you're talking about, I'll just pick the ancient one, JUnit 3.8.
Utterly ugly, but it works:
public class Test extends TestCase {

    static String stuff;

    public void testCreate() {
        stuff = "abc";
    }

    public void testRead() {
        assertEquals("abc", stuff);
    }
}
JUnit promotes independent tests. One option would be to put the two logical tests into one @Test method.
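For instance, a small sketch of that combined test (unit, createObject and readObject are stand-ins borrowed from the answers below):

@Test
public void createThenRead() {
    // Create and read in one logical test; a failure here tells you the
    // create/read round trip is broken, though not which half.
    Object created = unit.createObject(); // 'unit' is a stand-in for the class under test
    assertEquals(created, unit.readObject(created));
}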
TestNG was partly created to allow these kinds of dependencies among tests. It enforces local declarations of test dependencies -- it runs tests in a valid order, and does not run tests that depend on a failed test. See http://testng.org/doc/documentation-main.html#dependent-methods for examples.
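A sketch of what that looks like in TestNG (the "some-id" value is a stand-in for whatever your create step produces):

import static org.testng.Assert.assertEquals;

import org.testng.annotations.Test;

public class CrudTest {

    private String created;

    @Test
    public void create() {
        created = "some-id"; // stand-in for creating the object
    }

    // Runs after create(), and is skipped (not failed) if create() fails.
    @Test(dependsOnMethods = "create")
    public void read() {
        assertEquals(created, "some-id"); // stand-in for reading it back
    }
}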
JUnit tests are independent. But if you have no other way, you can use a static field to store it.
static String storage;

@Test
public void method1() {
    storage = "Hello";
}

@Test
public void method2() {
    Assert.assertThat(something, is(storage));
}
How much processing time do these tests take? If not a lot, then why sweat it? Sure, you will create some objects unnecessarily, but how much does that cost you?
@Test
public void testCreateObject() {
    Object obj = unit.createObject();
}

@Test
public void testReadObject() {
    Object obj = null;
    try {
        obj = unit.createObject(); // this duplicates tests already done
    } catch (Exception cause) {
        assumeNoException(cause);
    }
    unit.readObject(obj);
}
In this basic example, the variable is changed in test A and can be used in test B:
public class BasicTest extends ActivityInstrumentationTestCase2 {

    private static final String S = "BasicTest"; // log tag

    public BasicTest() throws ClassNotFoundException {
        super(TARGET_PACKAGE_ID, launcherActivityClass); // constants defined elsewhere
    }

    public static class MyClass {
        // static, so the value survives the fresh instance created in setUp() for each test
        public static String myvar = null;

        public void set(String s) {
            myvar = s;
        }

        public String get() {
            return myvar;
        }
    }

    private MyClass sharedVar;

    @Override
    protected void setUp() throws Exception {
        sharedVar = new MyClass();
    }

    public void test_A() {
        Log.d(S, "run A");
        sharedVar.set("blah");
    }

    public void test_B() {
        Log.d(S, "run B");
        Log.i(S, "sharedVar is: " + sharedVar.get());
    }
}
The output is:
run A
run B
sharedVar is: blah
