JUnit and integration tests best approach - Java

I want to write some integration tests to test my whole program (it's a standard command-line Java application with program args).
Basically I have three tests: one to create a resource, one to update the resource and finally one to delete it.
I could do something like this:
@Test
public void create_resource() {
    MainApp.main(new String[] {"create", "my_resource_name"});
}

@Test
public void update_resource() {
    MainApp.main(new String[] {"update", "my_resource_name"});
}

@Test
public void delete_resource() {
    MainApp.main(new String[] {"delete", "my_resource_name"});
}
It works... as long as the methods are executed in the correct order. I've heard that the outcome of a test should not depend on the order in which the tests run.

It's true that ordering tests is considered a smell. Having said that, there are cases where it can make sense, especially for integration tests.
Your sample code is a little vague since there are no assertions in it. But it seems to me you could probably combine the three operations into a single test method. If you can't do that, you can run them in order; JUnit 5 supports this via the @Order annotation:
@TestMethodOrder(OrderAnnotation.class)
class OrderedTestsDemo {

    @Test
    @Order(1)
    void nullValues() {
        // perform assertions against null values
    }

    @Test
    @Order(2)
    void emptyValues() {
        // perform assertions against empty values
    }

    @Test
    @Order(3)
    void validValues() {
        // perform assertions against valid values
    }
}
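Applied to the scenario in the question, a minimal sketch might look like the following (assuming MainApp.main can safely be invoked several times from the same JVM; the assertion comments are placeholders to be filled in):

import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

@TestMethodOrder(OrderAnnotation.class)
class ResourceLifecycleIT {

    @Test
    @Order(1)
    void createResource() {
        MainApp.main(new String[] {"create", "my_resource_name"});
        // assert that the resource now exists
    }

    @Test
    @Order(2)
    void updateResource() {
        MainApp.main(new String[] {"update", "my_resource_name"});
        // assert that the resource was updated
    }

    @Test
    @Order(3)
    void deleteResource() {
        MainApp.main(new String[] {"delete", "my_resource_name"});
        // assert that the resource is gone
    }
}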

Is it possible to make a dependency on a parameterized test in TestNG?

I have a test that runs multiple times using a data-provider and it looks something like:
@DataProvider(name = "data-provider-carmakers")
public Object[][] dataProviderCarMakers() {
    return new Object[][] {
        {CarMaker.Ford},
        {CarMaker.Chevrolet},
        {CarMaker.Renault},
        {CarMaker.Porsche}
    };
}

@Test(dataProvider = "data-provider-carmakers",
      retryAnalyzer = TestRetry.class)
public void validateCarMakerHasElectricModelsLoaded(CarMaker carMaker) {
    validateCarMakerContainsElectricModelsLoadedInDB(carMaker);
}
In another test, I have a dependency on the first:
@Test(dependsOnMethods = { "validateCarMakerHasElectricModelsLoaded" })
public void validateChevroletElectricModelsPowerEfficiency() {
    List<CarModel> electricCarModels = getChevroletCarModels(FuelType.Electric);
    validatePowerEfficiency(electricCarModels);
}
(I know the test doesn't make a lot of sense; in reality the code is far more complex than this and the data provider has far more data, but for the sake of clarity I just went with this example.)
So I want to run validateChevroletElectricModelsPowerEfficiency() only if validateCarMakerHasElectricModelsLoaded()[CarMaker.Chevrolet] was successful.
As the code stands now, if the first test passes for Chevrolet but fails for Renault, the second test won't run. Is there a way to make a dependency on just one data set of a test?
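One possible workaround, sketched here purely as a restructuring (TestNG's dependsOnMethods always depends on all invocations of a data-driven method), is to give Chevrolet its own test method so the dependency targets only that invocation. The split method and provider names below are hypothetical; the helper methods are the ones from the question:

// Chevrolet gets a dedicated test, so the dependency below targets only it.
@Test(retryAnalyzer = TestRetry.class)
public void validateChevroletHasElectricModelsLoaded() {
    validateCarMakerContainsElectricModelsLoadedInDB(CarMaker.Chevrolet);
}

@DataProvider(name = "data-provider-other-carmakers")
public Object[][] dataProviderOtherCarMakers() {
    return new Object[][] { {CarMaker.Ford}, {CarMaker.Renault}, {CarMaker.Porsche} };
}

@Test(dataProvider = "data-provider-other-carmakers", retryAnalyzer = TestRetry.class)
public void validateCarMakerHasElectricModelsLoaded(CarMaker carMaker) {
    validateCarMakerContainsElectricModelsLoadedInDB(carMaker);
}

// Now a Renault failure no longer blocks this test.
@Test(dependsOnMethods = { "validateChevroletHasElectricModelsLoaded" })
public void validateChevroletElectricModelsPowerEfficiency() {
    List<CarModel> electricCarModels = getChevroletCarModels(FuelType.Electric);
    validatePowerEfficiency(electricCarModels);
}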

How to test OS-specific method with JUnit?

I would like to test the following method with JUnit:
private static boolean systemIsWindows() {
    String os = System.getProperty("os.name").toLowerCase();
    return os.startsWith("win");
}
Frankly, the only thing I've come up with is to basically copy the same logic into the test. That would, of course, protect against the method being inadvertently broken, but it feels somehow counter-intuitive.
What would be a better way to test this method?
In your unit tests, you can change the value of the property:
System.setProperty("os.name", "Linux");
After that, you can call your systemIsWindows() method and check what it returns using asserts.
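A minimal sketch of that approach, restoring the original property value afterwards so later tests still see the real OS (SystemUtils is a hypothetical holder class; the question's private method would need to be made visible to the test):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class SystemIsWindowsTest {

    @Test
    public void returnsTrueWhenOsNameStartsWithWin() {
        String original = System.getProperty("os.name");
        try {
            System.setProperty("os.name", "Windows 10");
            assertTrue(SystemUtils.systemIsWindows());
        } finally {
            System.setProperty("os.name", original); // restore the real value
        }
    }

    @Test
    public void returnsFalseOnLinux() {
        String original = System.getProperty("os.name");
        try {
            System.setProperty("os.name", "Linux");
            assertFalse(SystemUtils.systemIsWindows());
        } finally {
            System.setProperty("os.name", original);
        }
    }
}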
To make it easier to set a system property and to unset it when the test completes (thereby keeping tests isolated and self-contained), you could use either of the following JUnit add-ons:
JUnit 4: JUnit System Rules
JUnit 5: JUnit Extensions
For example:
@Test
@SystemProperty(name = "os.name", value = "Windows")
public void windowsTest() {
    assertThat(systemIsWindows(), is(true));
}

@Test
@SystemProperty(name = "os.name", value = "MacOs")
public void macTest() {
    assertThat(systemIsWindows(), is(false));
}
A much better way in JUnit 5 is to use @EnabledOnOs: https://junit.org/junit5/docs/5.2.0/api/org/junit/jupiter/api/condition/EnabledOnOs.html
So, for example:
@Test
@EnabledOnOs({OS.WINDOWS})
public void aTest() {
    // this test only runs on Windows, where systemIsWindows() should return true
    assertThat(systemIsWindows(), is(true));
}

Unit test naming convention for grouping tests

I read some articles about test naming conventions and decided to use one with "should". It works pretty nicely in most cases, like:
shouldAccessDeniedIfWrongPassword
shouldReturnFizzBuzzIfDiv3And5
shouldIncreaseAccountWhenDeposit
But I ran into problems while testing a DecimalRepresentation class, which displays numbers in different numeral systems. Just look at the code:
public class DecimalRepresentationTest {

    private DecimalRepresentation decimal;

    @BeforeEach
    void setup() {
        decimal = new DecimalRepresentation();
    }

    @Test
    void shouldReturnZeroIfNumberNotSpecified() {
        assertEquals("0", decimal.toBinary());
    }

    @Test
    void shouldReturn10IfNumber2() {
        decimal.setNumber(2);
        assertEquals("10", decimal.toBinary());
    }

    @Test
    void shouldReturn1111IfNumber15() {
        decimal.setNumber(15);
        assertEquals("1111", decimal.toBinary());
    }
}
Now that's not bad, but when I'm testing negative inputs it looks terrible:
@Test
void shouldReturn11111111111111111111111110001000IfNumberNegative120() {
    decimal.setNumber(-120);
    assertEquals("11111111111111111111111110001000", decimal.toBinary());
}

@Test
void shouldReturn11111111111111111111111111111111IfNumberNegative1() {
    decimal.setNumber(-1);
    assertEquals("11111111111111111111111111111111", decimal.toBinary());
}
In the examples above I test both positive and negative input to be sure there is no hardcoded result and the algorithm works fine, so I decided to group the tests in nested classes to keep the convention:
@Nested
@DisplayName("Tests for positive numbers")
class PositiveConverter {

    @Test
    void shouldReturn10IfNumber2() {
        decimal.setNumber(2);
        assertEquals("10", decimal.toBinary());
    }

    @Test
    void shouldReturn1111IfNumber15() {
        decimal.setNumber(15);
        assertEquals("1111", decimal.toBinary());
    }
}

@Nested
@DisplayName("Tests for negative numbers")
class NegativeConverter {

    @Test
    void shouldReturn11111111111111111111111110001000IfNumberNegative120() {
        decimal.setNumber(-120);
        assertEquals("11111111111111111111111110001000", decimal.toBinary());
    }

    @Test
    void shouldReturn11111111111111111111111111111111IfNumberNegative1() {
        decimal.setNumber(-1);
        assertEquals("11111111111111111111111111111111", decimal.toBinary());
    }
}
I realize it's overcomplicated because of the convention. If I let the convention lapse, it could look much better:
@Test
void testPositiveConversions() {
    assertAll(
        () -> { decimal.setNumber(2); assertEquals("10", decimal.toBinary()); },
        () -> { decimal.setNumber(15); assertEquals("1111", decimal.toBinary()); }
    );
}

@Test
void testNegativeConversions() {
    assertAll(
        () -> { decimal.setNumber(-120); assertEquals("11111111111111111111111110001000", decimal.toBinary()); },
        () -> { decimal.setNumber(-1); assertEquals("11111111111111111111111111111111", decimal.toBinary()); }
    );
}
Should I break the convention to keep things simple? I have the same naming problem with tests that take lists of inputs and outputs, or with dynamic tests:
@TestFactory
Stream<DynamicTest> shouldReturnGoodResultsForPositiveNumbers() { // look at the method name lol
    List<Integer> inputs = new ArrayList<>(Arrays.asList(2, 15));
    List<String> outputs = new ArrayList<>(Arrays.asList("10", "1111"));
    return inputs.stream().map(number -> DynamicTest.dynamicTest("Test positive " + number, () -> {
        int idx = inputs.indexOf(number);
        decimal.setNumber(inputs.get(idx));
        assertEquals(outputs.get(idx), decimal.toBinary());
    }));
}
Names are supposed to be helpful. Sometimes rules help in finding good names; sometimes they do not. And then the answer is to drop the rule and maybe go for something completely different, like:
@Test
void testResultForNegativeInput() {
    decimal.setNumber(-120);
    assertEquals("11111111111111111111111110001000", decimal.toBinary());
}
And if you have several of these methods, maybe adding "ForMinus120" or so would be acceptable.
But instead of spending energy on naming here: the real issue is that you are using the wrong kind of testing. You have a whole bunch of input data that simply results in different output values to check. All your tests are about one specific input value leading to one specific output value.
You don't do that with many almost-identical test methods; instead you turn to parameterized tests! Meaning: use a table to drive your test. For JUnit 5 and parameterized tests, turn here (thanks to user Sam Brannen).
It is great that you spend time and energy making your tests easy to read. But in this case, that leads to a lot of code duplication. Instead, put the input/output values into a table, and have one test check all entries in that table.
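For instance, a minimal sketch with JUnit 5's @CsvSource, using the DecimalRepresentation API from the question:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class DecimalRepresentationTest {

    @ParameterizedTest(name = "{0} -> {1}")
    @CsvSource({
        "2,    10",
        "15,   1111",
        "-120, 11111111111111111111111110001000",
        "-1,   11111111111111111111111111111111"
    })
    void convertsToBinary(int input, String expected) {
        // one test method, driven by the table above
        DecimalRepresentation decimal = new DecimalRepresentation();
        decimal.setNumber(input);
        assertEquals(expected, decimal.toBinary());
    }
}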
I've modeled mine after Roy Osherove's method (UnitOfWork_StateUnderTest_ExpectedBehavior, e.g. ToBinary_NegativeInput_ReturnsTwosComplement). Here's the regex:
^(setup|teardown|([A-Z]{1}[0-9a-z]+)+_([A-Z0-9]+[0-9a-z]+)+_([A-Z0-9]+[0-9a-z]+)+)$

Java/JUnit Filtering Repeated Conditions

My specific question is with regard to JUnit's parameterized tests: filtering out (essentially not running) tests if a certain property holds. For example:
@Test
public void test1() {
    if (property.contains("example")) {
        return;
    }
    assertEquals(expected, methodToTest1(actual));
}

@Test
public void test2() {
    if (property.contains("example")) {
        return;
    }
    assertEquals(expected, methodToTest2(actual));
}
The question is: does a technique exist whereby the constraint if (property.contains("example"))... can be defined somewhere else statically, instead of before each and every test method? Like this:
/** define constraint "property.contains("example")" somewhere **/

@Test
public void test1() {
    assertEquals(expected, methodToTest1(actual));
}

@Test
public void test2() {
    assertEquals(expected, methodToTest2(actual));
}
You may use JUnit's Assume feature together with @Before.
Add an @Before method to your test class:
@Before
public void dontRunIfExample() {
    assumeFalse(property.contains("example"));
}
and remove the if block from each of your tests.
It depends on how you are running your JUnit tests. You can quite literally use Java's System.getProperty("conditionForTest"). If you are launching them from the command line, you will need to specify the property with -DconditionForTest=true; if you are running the tests with Ant, it can be passed into the Ant target:
<sysproperty key="conditionForTest" value="true"/>
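A minimal sketch tying that system property to the Assume approach from the previous answer (the property name conditionForTest is the hypothetical one used in this answer):

import static org.junit.Assume.assumeFalse;

import org.junit.Before;
import org.junit.Test;

public class ConditionalTests {

    @Before
    public void skipWhenFlagSet() {
        // Skips every test in this class when -DconditionForTest=true is set.
        assumeFalse(Boolean.parseBoolean(System.getProperty("conditionForTest", "false")));
    }

    @Test
    public void test1() {
        // runs only when the flag is absent or false
    }
}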

Specifying an order for JUnit 4 tests at the method level (not class level)

I know this is bad practice, but it needs to be done, or I'll need to switch to TestNG. Is there a way, similar to JUnit 3's test suites, to specify the order in which the tests in a class are run?
If you're sure you really want to do this: there may be a better way, but this is all I could come up with...
JUnit 4 has an annotation, @RunWith, which lets you override the default runner for your tests.
In your case you would want to create a special subclass of BlockJUnit4ClassRunner and override computeTestMethods() to return the tests in the order you want them executed. For example, let's say I want to execute my tests in reverse alphabetical order:
public class OrderedRunner extends BlockJUnit4ClassRunner {

    public OrderedRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected List<FrameworkMethod> computeTestMethods() {
        List<FrameworkMethod> copy = new ArrayList<>(super.computeTestMethods());
        Collections.sort(copy, new Comparator<FrameworkMethod>() {
            @Override
            public int compare(FrameworkMethod o1, FrameworkMethod o2) {
                return o2.getName().compareTo(o1.getName());
            }
        });
        return copy;
    }
}
@RunWith(OrderedRunner.class)
public class OrderOfTest {
    @Test public void testA() { System.out.println("A"); }
    @Test public void testC() { System.out.println("C"); }
    @Test public void testB() { System.out.println("B"); }
}
Running this test produces:
C
B
A
For your specific case, you would want a comparator that sorts the tests by name in the order you want them executed. (I would suggest defining the comparator using something like Google Guava's Ordering.explicit("methodName1", "methodName2").onResultOf(...), where onResultOf is provided a function that converts a FrameworkMethod to its name... though obviously you are free to implement that any way you want.)
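A minimal sketch of such a comparator, assuming Guava is on the classpath; the method names here are hypothetical, and note that Ordering.explicit throws if it encounters a name not in the list, so every test method must appear:

import java.util.Comparator;

import org.junit.runners.model.FrameworkMethod;

import com.google.common.collect.Ordering;

public class ExplicitTestOrder {

    // Compares FrameworkMethods by the explicit position of their names.
    static final Comparator<FrameworkMethod> BY_EXPLICIT_NAME =
            Ordering.explicit("testInsert", "testUpdate", "testDelete")
                    .onResultOf(FrameworkMethod::getName);
}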
I can see several reasons for doing this, especially when using JUnit to run functional tests or to test persistent objects. For example, consider an object Article which is persisted to some kind of persistent storage. If I wanted to test the insert, update and delete functionality on the Article object following the unit test principle "all tests should be reorderable and test only a specific part of the functionality", I would have three tests:
testInsertArticle()
testUpdateArticle()
testDeleteArticle()
However, to be able to test the update functionality, I would first need to insert the article. To test the delete functionality, I would also need to insert an article. So, in practice, the insert functionality is already tested both in testUpdateArticle() and testDeleteArticle(). It is then tempting to just create a test method testArticleFunctionality() which does it all, but methods like that will eventually get huge (and they won't just test part of the functionality of the Article object).
The same goes for running functional tests against, for example, a RESTful API. JUnit would be great for these cases too if it weren't for the nondeterministic ordering of tests.
That said, I extended Michael D's OrderedRunner to use annotations to determine the order of tests; I just thought I should share. It can be extended further, for example by specifying exactly which tests each test depends on, but this is what I'm using for now.
This is how it is used. It avoids the need to name tests like AA_testInsert(), AB_testUpdate(), AC_testDelete(), ..., ZC_testFilter(), etc.
@RunWith(OrderedRunner.class)
public class SomethingTest {

    @Test
    @Order(order = 2)
    public void testUpdateArticle() {
        // test update
    }

    @Test
    @Order(order = 1)
    public void testInsertArticle() {
        // test insert
    }

    @Test
    @Order(order = 3)
    public void testDeleteArticle() {
        // test delete
    }
}
No matter how these tests are placed in the file, they will always run with order=1 first, order=2 second and order=3 last, whether you run them from inside Eclipse, using Ant, or any other way.
Implementation follows. First, the annotation Order.
@Retention(RetentionPolicy.RUNTIME)
public @interface Order {
    public int order();
}
Then, the modified OrderedRunner.
public class OrderedRunner extends BlockJUnit4ClassRunner {

    public OrderedRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected List<FrameworkMethod> computeTestMethods() {
        List<FrameworkMethod> copy = new ArrayList<>(super.computeTestMethods());
        Collections.sort(copy, new Comparator<FrameworkMethod>() {
            @Override
            public int compare(FrameworkMethod f1, FrameworkMethod f2) {
                Order o1 = f1.getAnnotation(Order.class);
                Order o2 = f2.getAnnotation(Order.class);
                if (o1 == null && o2 == null) return 0;
                if (o1 == null) return 1;
                if (o2 == null) return -1;
                return o1.order() - o2.order();
            }
        });
        return copy;
    }
}
From JUnit version 4.11 onwards, it is possible to influence the order of test execution by annotating your class with #FixMethodOrder and specifying any of the available MethodSorters. See this link for more details.
Using JUnit 4.11, the new annotation @FixMethodOrder allows you to set a specific order:
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
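A minimal sketch of that annotation in use; the step-prefixed method names are hypothetical and chosen so that ascending name order matches the intended execution order:

import org.junit.FixMethodOrder;
import org.junit.Test;
import org.junit.runners.MethodSorters;

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class ArticleLifecycleTest {

    @Test
    public void step1_insertArticle() { /* runs first */ }

    @Test
    public void step2_updateArticle() { /* runs second */ }

    @Test
    public void step3_deleteArticle() { /* runs last */ }
}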
If you want to run JUnit tests in order "just as they appear in your source code", and don't want to modify your test code, see my note about this here:
How to run junit tests in order as they present in your source code
But it is really not a good idea; tests should be independent.
Joscarsson's and Michael D's code is in my GitHub repo (I hope they don't mind). I also provide an ordered version of the Parameterized class. It's ready to use as a Maven dependency:
<repositories>
    <repository>
        <id>git-xxx</id>
        <url>https://github.com/crsici/OrderedRunnerJunit4.11/raw/master/</url>
    </repository>
</repositories>

<dependency>
    <groupId>com.sici.org.junit</groupId>
    <artifactId>ordered-runner</artifactId>
    <version>0.0.1-RELEASE</version>
</dependency>
