Unit test naming convention for grouping tests - Java

I read some articles about test naming conventions and decided to use one based on "should". It works pretty well in most cases, for example:
shouldAccessDeniedIfWrongPassword
shouldReturnFizzBuzzIfDiv3And5
shouldIncreaseAccountWhenDeposit
But I ran into problems while testing a DecimalRepresentation class, which displays numbers in different numeral systems. Just look at the code:
public class DecimalRepresentationTest {

    private DecimalRepresentation decimal;

    @BeforeEach
    void setup() {
        decimal = new DecimalRepresentation();
    }

    @Test
    void shouldReturnZeroIfNumberNotSpecified() {
        assertEquals("0", decimal.toBinary());
    }

    @Test
    void shouldReturn10IfNumber2() {
        decimal.setNumber(2);
        assertEquals("10", decimal.toBinary());
    }

    @Test
    void shouldReturn1111IfNumber15() {
        decimal.setNumber(15);
        assertEquals("1111", decimal.toBinary());
    }
}
That's not bad, but when I test negative inputs it looks terrible:
@Test
void shouldReturn11111111111111111111111110001000IfNumberNegative120() {
    decimal.setNumber(-120);
    assertEquals("11111111111111111111111110001000", decimal.toBinary());
}

@Test
void shouldReturn11111111111111111111111111111111IfNumberNegative1() {
    decimal.setNumber(-1);
    assertEquals("11111111111111111111111111111111", decimal.toBinary());
}
In the examples above I test both a positive and a negative input to be sure there is no hardcoded result and the algorithm works correctly, so I decided to group the tests in nested classes to keep the convention:
@Nested
@DisplayName("Tests for positive numbers")
class PositiveConverter {

    @Test
    void shouldReturn10IfNumber2() {
        decimal.setNumber(2);
        assertEquals("10", decimal.toBinary());
    }

    @Test
    void shouldReturn1111IfNumber15() {
        decimal.setNumber(15);
        assertEquals("1111", decimal.toBinary());
    }
}

@Nested
@DisplayName("Tests for negative numbers")
class NegativeConverter {

    @Test
    void shouldReturn11111111111111111111111110001000IfNumberNegative120() {
        decimal.setNumber(-120);
        assertEquals("11111111111111111111111110001000", decimal.toBinary());
    }

    @Test
    void shouldReturn11111111111111111111111111111111IfNumberNegative1() {
        decimal.setNumber(-1);
        assertEquals("11111111111111111111111111111111", decimal.toBinary());
    }
}
I realize this is overcomplicated because of the convention. If I allowed myself a lapse from the convention, it could look much simpler:
@Test
void testPositiveConversions() {
    assertAll(
        () -> { decimal.setNumber(2); assertEquals("10", decimal.toBinary()); },
        () -> { decimal.setNumber(15); assertEquals("1111", decimal.toBinary()); }
    );
}

@Test
void testNegativeConversions() {
    assertAll(
        () -> { decimal.setNumber(-120); assertEquals("11111111111111111111111110001000", decimal.toBinary()); },
        () -> { decimal.setNumber(-1); assertEquals("11111111111111111111111111111111", decimal.toBinary()); }
    );
}
Should I break the convention to keep things simple? I have the same naming problem with tests that take lists of inputs and outputs, or with dynamic tests:
@TestFactory
Stream<DynamicTest> shouldReturnGoodResultsForPositiveNumbers() { // look at the method name, lol
    List<Integer> inputs = new ArrayList<>(Arrays.asList(2, 15));
    List<String> outputs = new ArrayList<>(Arrays.asList("10", "1111"));
    return inputs.stream().map(number -> DynamicTest.dynamicTest("Test positive " + number, () -> {
        int idx = inputs.indexOf(number);
        decimal.setNumber(inputs.get(idx));
        assertEquals(outputs.get(idx), decimal.toBinary());
    }));
}

Names are supposed to be helpful. Sometimes rules help in finding good names; sometimes they do not. Then the answer is to drop the rule and maybe go for something completely different, like:
@Test
void testResultForNegativeInput() {
    decimal.setNumber(-120);
    assertEquals("11111111111111111111111110001000", decimal.toBinary());
}
And if you have several of these methods, maybe adding "ForMinus120" or so would be acceptable.
But instead of spending energy on naming here: the real issue is that you are using the wrong kind of testing. You have a whole bunch of input data that simply results in different output values to check. All your tests are about one specific input value leading to one specific output value.
You don't do that with many nearly identical test methods; instead you turn to parameterized tests. In other words: use a table to drive your test. For JUnit 5 and parameterized tests, turn here (thanks to user Sam Brannen).
It is great that you spend time and energy making your tests easy to read. But in this case, it leads to a lot of code duplication. Instead, put the input/output values into a table and have one test check all entries in that table.
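Applied to the DecimalRepresentation example from the question, a parameterized version might look roughly like this (a sketch assuming the setNumber / toBinary API shown above, using the org.junit.jupiter.params annotations):

@ParameterizedTest(name = "{0} in binary is {1}")
@CsvSource({
        "2, 10",
        "15, 1111",
        "-1, 11111111111111111111111111111111",
        "-120, 11111111111111111111111110001000"
})
void convertsToBinary(int number, String expectedBinary) {
    // one table entry per case; JUnit reports each row as its own test
    decimal.setNumber(number);
    assertEquals(expectedBinary, decimal.toBinary());
}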

I've modeled mine after Roy Osherove's naming method; here's the regex I use to enforce it:
^(setup|teardown|([A-Z]{1}[0-9a-z]+)+_([A-Z0-9]+[0-9a-z]+)+_([A-Z0-9]+[0-9a-z]+)+)$
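For illustration, here are hypothetical names for the tests from the question that would match this pattern (UnitOfWork_StateUnderTest_ExpectedBehavior):

// hypothetical names matching the regex above, applied to the DecimalRepresentation tests
@Test
void ToBinary_NumberNotSet_ReturnsZero() {
    assertEquals("0", decimal.toBinary());
}

@Test
void ToBinary_NegativeNumber_ReturnsTwosComplement() {
    decimal.setNumber(-1);
    assertEquals("11111111111111111111111111111111", decimal.toBinary());
}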

Related

Simple way to have two values as a ValueSource for a JUnit 5 ParameterizedTest

I have many boolean methods like boolean isPalindrome(String txt) to test.
At the moment I test each of these methods with two parameterised tests, one for true results and one for false results:
@ParameterizedTest
@ValueSource(strings = { "racecar", "radar", "able was I ere I saw elba" })
void test_isPalindrome_true(String candidate) {
    assertTrue(StringUtils.isPalindrome(candidate));
}

@ParameterizedTest
@ValueSource(strings = { "peter", "paul", "mary is here" })
void test_isPalindrome_false(String candidate) {
    assertFalse(StringUtils.isPalindrome(candidate));
}
Instead I would like to test these in one parameterised method, like this pseudo Java code:
@ParameterizedTest
@ValueSource({ (true, "racecar"), (true, "radar"), (false, "peter") })
void test_isPalindrome(boolean res, String candidate) {
    assertEquals(res, StringUtils.isPalindrome(candidate));
}
Is there a ValueSource for this? Or is there another way to achieve this in a concise manner?
Thanks to the very helpful comment from Dawood ibn Kareem (on the question), I got a solution involving @CsvSource:
@ParameterizedTest
@CsvSource(value = {
        "racecar,true",
        "radar,true",
        "peter,false"
})
void test_isPalindrome(String candidate, boolean expected) {
    assertEquals(expected, StringUtils.isPalindrome(candidate));
}
I quite like it: although the code uses strings to express the boolean values, it is compact and keeps things together that IMHO belong together.
Read about @CsvSource here.
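For completeness, a self-contained sketch of the above (the StringUtils.isPalindrome method is assumed from the question); JUnit 5 implicitly converts the "true" / "false" values in the second column to boolean:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class PalindromeTest {

    @ParameterizedTest(name = "isPalindrome({0}) should be {1}")
    @CsvSource({
            "racecar, true",
            "radar, true",
            "peter, false"
    })
    void isPalindrome(String candidate, boolean expected) {
        // the second CSV column is converted to boolean implicitly by JUnit 5
        assertEquals(expected, StringUtils.isPalindrome(candidate));
    }
}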

JUnit and integration tests: best approach

I want to write some integration tests to test my whole program (it's a standard command-line Java application with program args).
Basically I have 3 tests: one to create a resource, one to update the resource, and finally one to delete it.
I could do something like this:
@Test
public void create_resource() {
    MainApp.main(new String[] {"create", "my_resource_name"});
}

@Test
public void update_resource() {
    MainApp.main(new String[] {"update", "my_resource_name"});
}

@Test
public void delete_resource() {
    MainApp.main(new String[] {"delete", "my_resource_name"});
}
It works... as long as the methods are executed in the correct order. But I've heard that whether a test passes should not depend on the execution order.
It's true that ordering tests is considered a smell. Having said that, there are cases where it can make sense, especially for integration tests.
Your sample code is a little vague since there are no assertions in it. But it seems to me you could probably combine the three operations into a single test method. If you can't do that, you can just run them in order. JUnit 5 supports this with the @Order annotation:
@TestMethodOrder(OrderAnnotation.class)
class OrderedTestsDemo {

    @Test
    @Order(1)
    void nullValues() {
        // perform assertions against null values
    }

    @Test
    @Order(2)
    void emptyValues() {
        // perform assertions against empty values
    }

    @Test
    @Order(3)
    void validValues() {
        // perform assertions against valid values
    }
}
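Applied to the question's scenario, that could look something like this (a sketch reusing the MainApp calls from the question; the assertions hinted at in the comments still need to be filled in):

@TestMethodOrder(OrderAnnotation.class)
class ResourceLifecycleIT {

    @Test
    @Order(1)
    void create_resource() {
        MainApp.main(new String[] {"create", "my_resource_name"});
        // assert that the resource now exists
    }

    @Test
    @Order(2)
    void update_resource() {
        MainApp.main(new String[] {"update", "my_resource_name"});
        // assert that the resource was updated
    }

    @Test
    @Order(3)
    void delete_resource() {
        MainApp.main(new String[] {"delete", "my_resource_name"});
        // assert that the resource is gone
    }
}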

Spring Boot unit tests: best practices and writing tests correctly

I want to start writing unit tests in my project. I have tried many times and always gave up, because I could not grasp the point and could not pull the knowledge together into a coherent whole. I have read many articles and seen many examples, and they are all different. As a result, I understand why I need to write tests and I understand how to write them, but I do not understand how to write them correctly, or how to write them so that they are useful. I have some questions.
For example, I have this service:
@Service
public class HumanServiceImpl implements HumanService {

    private final HumanRepository humanRepository;

    @Autowired
    public HumanServiceImpl(HumanRepository humanRepository) {
        this.humanRepository = humanRepository;
    }

    @Override
    public Human getOneHumanById(Long id) {
        return humanRepository.getOne(id);
    }

    @Override
    public Human getOneHumanByName(String firstName) {
        return humanRepository.findOneByFirstName(firstName);
    }

    @Override
    public Human getOneHumanRandom() {
        Human human = new Human();
        human.setId(new Random().nextLong());                      // random id
        human.setFirstName("firstName" + System.currentTimeMillis());
        human.setLastName("LastName" + System.currentTimeMillis());
        human.setAge(12);                                          // any logic for creating a Human
        return human;
    }
}
And I tried to write a unit test for this service:
@RunWith(SpringRunner.class)
public class HumanServiceImplTest {

    @MockBean(name = "mokHumanRepository")
    private HumanRepository humanRepository;

    @MockBean(name = "mockHumanService")
    private HumanService humanService;

    @Before
    public void setup() {
        Human human = new Human();
        human.setId(1L);
        human.setFirstName("Bill");
        human.setLastName("Gates");
        human.setAge(50);
        when(humanRepository.getOne(1L)).thenReturn(human);
        when(humanRepository.findOneByFirstName("Bill")).thenReturn(human);
    }

    @Test
    public void getOneHumanById() {
        Human found = humanService.getOneHumanById(1L);
        assertThat(found.getFirstName()).isEqualTo("Bill");
    }

    @Test
    public void getOneHumanByName() {
        Human found = humanService.getOneHumanByName("Bill");
        assertThat(found.getFirstName()).isEqualTo("Bill");
    }

    @Test
    public void getOneHumanRandom() {
        // ???
    }
}
I have questions:
1. Where should I fill in the objects? I have seen different implementations: in @Before like in my example, in @Test, or a mix where the Human is created in @Before and the expression
when(humanRepository.getOne(1L)).thenReturn(human);
is placed in the @Test method:
private Human human;

@Before
public void setup() {
    human = new Human();
    ...
}

@Test
public void getOneHumanById() {
    when(humanRepository.getOne(1L)).thenReturn(human);
    Human found = humanService.getOneHumanById(1L);
    assertThat(found.getFirstName()).isEqualTo("Bill");
}
2. How can I test the getOneHumanRandom() method?
The service does not use the repository when this method is called. I can mock this method, but what would that give me?
when(humanService.getOneHumanRandom()).thenReturn(human);
...
@Test
public void getOneHumanRandom() {
    Human found = humanService.getOneHumanRandom();
    assertThat(found.getFirstName()).isEqualTo("Bill");
}
That just copies the logic from the service into the test class. What is the point of such testing, and is it even necessary?
1. Where should I fill in the objects? I have seen different implementations
I would use @Before for any common setup shared by all or most tests. Any setup that is specific to a certain test should go into that test method. If there is common setup between some, but not all, of your tests, you can write private setup method(s).
Remember to keep your tests / code DRY (don't repeat yourself). Tests carry a maintenance cost, and keeping common code together will help alleviate some headaches in the future.
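A minimal sketch of that split, reusing the Human / HumanRepository names from the question and testing the real HumanServiceImpl against a plain Mockito mock (no Spring context needed for a unit test like this):

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Before;
import org.junit.Test;

public class HumanServiceImplTest {

    private HumanRepository humanRepository;
    private HumanService humanService;
    private Human human;

    @Before
    public void setup() {
        // common setup shared by all tests: mocked repository, real service under test
        humanRepository = mock(HumanRepository.class);
        humanService = new HumanServiceImpl(humanRepository);
        human = new Human();
        human.setFirstName("Bill");
    }

    @Test
    public void getOneHumanById() {
        // stubbing that only this test needs stays inside the test method
        when(humanRepository.getOne(1L)).thenReturn(human);

        assertThat(humanService.getOneHumanById(1L).getFirstName()).isEqualTo("Bill");
    }
}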
2. How can I test the getOneHumanRandom() method?
You could give Human a toString() method that concatenates all the properties of the object. Then call getOneHumanRandom() twice and assert that the two results are not equal:
@Test
public void getOneHumanRandomTest() {
    // SETUP / TEST
    Human one = service.getOneHumanRandom();
    Human two = service.getOneHumanRandom();

    // VERIFY / ASSERT
    assertNotEquals("these two objects should not be equal", one.toString(), two.toString());
}
Hope this helps!

Different parameterized arguments for different tests in JUnit

Okay, so I'm building test cases for my project and I'm using JUnit for testing. The problem I'm facing is that I need a different set of arguments for different test cases in the same file.
public class ForTesting {

    // Test 1 should run on inputs {1, true} and {2, true}
    @Test
    public void Test1() {
        // Do first test case
    }

    // Test 2 should run on inputs {3, true} and {4, true}
    @Test
    public void Test2() {
        // Do another test case
    }
}
I know I can provide multiple arguments using parameterized tests, but the problem is that the same set of arguments runs for all the test cases. Is there a way to do this?
If you're not looking ONLY for standard JUnit parameterized tests, and depending on your company's licensing policies, you can use (at least) the following two libraries, which make things easier both to implement and to read:
1) JUnitParams (Apache 2)
@RunWith(JUnitParamsRunner.class)
public class PersonTest {

    @Test
    @Parameters({ "17, false",
                  "22, true" })
    public void shouldDecideAdulthood(int age, boolean expectedAdulthood) throws Exception {
        assertThat(new Person(age).isAdult(), is(expectedAdulthood));
    }
}
2) Zohhak (LGPL), inspired by JUnitParams but bringing some more sugar to the table (easy separator config, converters, etc.)
@RunWith(ZohhakRunner.class)
public class PersonTest {

    @TestWith({ "17, false",
                "22, true" })
    public void shouldDecideAdulthood(int age, boolean expectedAdulthood) throws Exception {
        assertThat(new Person(age).isAdult(), is(expectedAdulthood));
    }
}
Credits: Examples above have been shamelessly copied and adjusted from JUnitParams' readme.
A few options:
Use Theories.
In a @Theory, use Assume.assumeTrue or Assume.assumeThat:
@Theory
public void shouldPassForIntsOneAndTwo(int param) {
    Assume.assumeTrue(param == 1 || param == 2);
    ...
}

@Theory
public void shouldPassForIntsThreeAndFour(int param) {
    Assume.assumeTrue(param == 3 || param == 4);
    ...
}
Or use @TestedOn:
@Theory
public void shouldPassForIntsOneAndTwo(@TestedOn(ints = {1, 2}) int param) {
    ...
}

@Theory
public void shouldPassForIntsThreeAndFour(@TestedOn(ints = {3, 4}) int param) {
    ...
}

JUnit tests in files

I used to write JUnit tests as methods, such as:
public class TextualEntailerTest {
    @Test public void test1() { ... }
    @Test public void test2() { ... }
    @Test public void test3() { ... }
}
Since most of the test cases have a similar structure, I decided to be "data-driven" and put the contents of the tests in XML files. So I created a method testFromFile(file) and changed my test to:
public class TextualEntailerTest {
    @Test public void test1() { testFromFile("test1.xml"); }
    @Test public void test2() { testFromFile("test2.xml"); }
    @Test public void test3() { testFromFile("test3.xml"); }
}
As I add more and more tests, I get tired of adding a line for each new test file. Of course I could put all files in a single test:
public class TextualEntailerTest {
    @Test
    public void testAll() {
        for (String file : filesInFolder)
            testFromFile(file);
    }
}
However, I would prefer each file to be a separate test, because that way JUnit gives nice statistics about how many files passed and failed.
So, my question is: how do I tell JUnit to run a separate test of the form testFromFile(file) for every file in a given folder?
You could use Theories, where the files are @DataPoints. That way you won't need to loop in your test, and it allows for setup and cleanup around each file. But the theory will still be reported as a single test.
Theories also have the issue that they fail fast (quit after the first failure), just like your loop above does. I find this is not good practice, since it can hide a situation where you have multiple bugs. I recommend using separate tests, or using the loop together with an ErrorCollector rule. I really wish Theories had ErrorCollector built in.
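A rough sketch of that Theories approach (JUnit 4), reusing the testFromFile helper from the question; the folder path is a placeholder assumption:

import java.io.File;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class TextualEntailerTheoryTest {

    // every XML file in the folder becomes a data point (folder path is an assumption)
    @DataPoints
    public static File[] testFiles() {
        return new File("src/test/resources/entailment-tests")
                .listFiles((dir, name) -> name.endsWith(".xml"));
    }

    @Theory
    public void passesTestFile(File file) {
        // testFromFile(...) is the helper method from the question
        testFromFile(file.getName());
    }
}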
Not sure, but maybe these can help you:
Reference1, Reference2. Hope this helps.
@RunWith(value = Parameterized.class)
public class JunitTest {

    private String filename;

    public JunitTest(String filename) {
        this.filename = filename;
    }

    @Parameters
    public static Collection<Object[]> data() {
        Object[][] data = new Object[][] { { "file1.xml" }, { "file2.xml" } };
        return Arrays.asList(data);
    }

    @Test
    public void Test() {
        System.out.println("Test name: " + filename);
    }
}
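If the goal is to pick up every file in a folder rather than listing them by hand, the data() method above could enumerate the directory instead. A sketch (the folder path is an assumption, and it needs java.io.File plus the usual java.util imports); each file name then shows up as its own test in the report:

@Parameters(name = "{0}")
public static Collection<Object[]> data() {
    File folder = new File("src/test/resources/testcases");
    List<Object[]> parameters = new ArrayList<>();
    // one parameter set (and therefore one reported test) per XML file in the folder
    for (File file : folder.listFiles((dir, name) -> name.endsWith(".xml"))) {
        parameters.add(new Object[] { file.getName() });
    }
    return parameters;
}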
