Do I need to double-check flows in UT? - java

I have a class A with a dependency B.
I wrote a UT for B::foo(String s1, String s2). Say I test the flow of B::foo("a", "a").
Assuming A::foo(..) calls B::foo(..), do I have to write a UT for A::foo("a", "a")?
I would inject a mock of B, check that B::foo was called once, and also check that the result from A is as expected given a mocked result from B.
Would you avoid the mock in such a situation?
Would you skip the whole flow since it's already checked in B's UT?

Unit tests serve as an additional line of defense against software bugs. Making a bug in production code is likely; making the same bug in both the production code and its unit test is a lot less likely. This is one of the reasons you write unit tests - to gain one more guarantee that your software works as intended.
"I would inject a mock of B, check that B::foo was called once, and also check that the result from A is as expected given a mocked result from B."
You need to ask yourself how much you gain by doing so. If A is a simple wrapper over B:
How valuable would such tests for A be?
How hard would a bug in A's code be to detect?
And how easy to make?
And how hard to fix?
Every unit test is a decision to be made. There is no "Yes, write tests for this class" guideline or rule. You need to determine whether your time should be spent writing unit tests for a wrapper class or whether it would be better invested elsewhere.
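For illustration, here is a minimal sketch of what such a wrapper test could look like with JUnit 4 and Mockito. The class shapes (constructor injection, A::foo returning B's result unchanged) are assumptions, not code from the question:
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ATest {

    @Test
    public void fooDelegatesToB() {
        // Assumption: A receives its B dependency through the constructor.
        B b = mock(B.class);
        when(b.foo("a", "a")).thenReturn("result");
        A a = new A(b);

        String actual = a.foo("a", "a");

        // A returns what B produced, and it called B exactly once.
        assertEquals("result", actual);
        verify(b, times(1)).foo("a", "a");
    }
}
Note the test knows little beyond "A delegates to B"; if A does nothing else, that is also exactly why its value is debatable.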

B::foo is unit tested, so the best course of action is to assume it is perfect. If you have reason to doubt that B::foo is perfect, add tests in BTest until you're comfortable with it, then assume that it is perfect.
At that point, writing a unit test of A::foo is probably redundant, unless you're asserting that it accurately returns (some permutation of) B::foo. As jimmy_keen said, this may mean that your test for A is trivial. Remember that unit tests are designed to cover things likely to break, so if all you have is a wrapper, you probably don't need thorough testing.
(Caveat: If B is not under your control, and you can't be confident of its perfection, by all means add B tests wherever you can, including a separate test class or in ATest. That's a separate abstraction-breaking case, though.)

Mockito Object of willReturn() is a Parameter of the next given()

I don't know if I'm testing this method wrong or if the whole test is nonsense. (Example code is below)
I would like to test my ExampleService with Mockito. It has the client and the customerService as dependencies (both annotated with @Mock). Both dependencies are used in exampleService.doYourThing().
client.findByName() returns a ResultPage, which in turn is used in customerService.createHitList(). I thought it would be a good idea to test that the page object does not change during the "doYourThing" method, after being returned by client.findByName(). Therefore, customerService.createHitList() should only return something if 'page' does not change.
The problem: If 'page' is returned in "doYourThing()" by client.findByName() and is changed with "page.setIsNew(true)" as in my example, then in my test method the 'pageForTest' object also changes. As a result, the condition I set with "given(this.customerService.createHitList(pageForTest))" is met and the customerList is returned.
I know that 'page' is a reference of 'pageForTest' and thus changes.
Anyway, the goal of my test is not fulfilled by this. How would you tackle the problem?
This is only sample code, there may be small syntax errors.
Test method:
ExampleService exampleService = new ExampleService(client, customerService);
ResultPage pageForTest = new ResultPage();
pageForTest.setIsFulltextSearch(true);
pageForTest.setIsNew(false);
pageForTest.setHits(hits);
given(this.client.findByName("Maria")).willReturn(pageForTest);
given(this.customerService.createHitList(pageForTest)).willReturn(customerList);
HitList hitList = this.exampleService.doYourThing("Maria");
ExampleService:
public HitList doYourThing(String name) {
    ResultPage page = this.client.findByName(name);
    page.setIsNew(true);
    HitList hitList = this.customerService.createHitList(page);
    return hitList;
}
When writing unit tests it's important to actually define the unit. A common misconception is treating every class as a separate unit: we use classes to separate responsibilities, but a certain unit of work can be represented by more than one class (so it can include more than one responsibility as a whole). You can read more about this online, for example on Martin Fowler's page.
Once you define the unit you want to test, you will need to make some assumptions regarding the dependencies it uses. Again I will point you towards Martin Fowler's page for more information, but in your case a dependency could be a mock of the repository retrieving some data from the database, or a service representing another unit, which has its own tests and whose behavior can therefore be assumed and reproduced using a mock. The important thing is that we do not say all dependencies should be mocked: testing a method while mocking all of its dependencies may only test Mockito and not actually your code.
Now it's time to actually understand what you want to test - what the actual logic of the code is. In test-driven development, you write the tests first, as the requirements are pre-defined, and the tests can point you towards the best API the class should expose. When the tests are ready, you implement the code as designed while writing them. In your case you have the logic already, and let's assume it does what it's supposed to do - that is what you want to test. You are not actually testing the code, but the desired behavior of the code. After the test is written, you can change the code in many ways, including extracting some parts to a method or even a separate class, but the behavior should be well defined, and changing the internals of a method or the class (refactoring) should not require changing the tests unless the API or the expected behavior changes.
The main problem with trying to help you test your code is the lack of context. A method named doYourThing does not describe the expected behavior of the method, but that is actually the most important thing when writing a test: that the thing gets done. If your test strictly sticks to how it's done internally, the code will be hard to modify in the future and the tests may be unreliable. If the important behavior includes setting the isNew value, maybe the object returned from the mock should be a spy, or it should be verified with assertions? If the actual logic lies in the customerService, then maybe it shouldn't be mocked but should be part of the unit? It requires context, but hopefully I explained some things regarding testing as a concept. I recommend reading up on testing online, as articles like those by Martin Fowler can be more helpful in understanding it.
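For instance, one way to verify the isNew behavior directly, rather than stubbing against the mutable pageForTest reference, is an ArgumentCaptor. This is a sketch based on the question's code; the getIsNew() getter is an assumption derived from the setter:
import static org.junit.Assert.assertTrue;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.verify;

import org.mockito.ArgumentCaptor;

@Test
public void doYourThingMarksThePageAsNew() {
    ResultPage pageForTest = new ResultPage();
    pageForTest.setIsNew(false);
    given(this.client.findByName("Maria")).willReturn(pageForTest);
    given(this.customerService.createHitList(any(ResultPage.class))).willReturn(customerList);

    this.exampleService.doYourThing("Maria");

    // Capture the page actually handed to createHitList and assert on its
    // state; the test no longer cares that 'page' and 'pageForTest' are
    // the same object.
    ArgumentCaptor<ResultPage> pageCaptor = ArgumentCaptor.forClass(ResultPage.class);
    verify(this.customerService).createHitList(pageCaptor.capture());
    assertTrue(pageCaptor.getValue().getIsNew());
}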

What is the point of mocking this Feign object?

I'm trying to write a unit test for an implementation of a Feign client. I wasn't sure how to go about it so I googled it and came across this answer and the accepted answer is this code snippet:
@Test
public void someTestClient() {
    Person expectedPerson = new Person("name", 12);
    when(mockPersonClient.getPerson()).thenReturn(expectedPerson);
    Person person = mockPersonClient.getPerson();
    assertEquals(expectedPerson, person);
}
I don't understand why this is a useful test or under what circumstance, other than a problem with the Person constructor, this test could ever fail. Isn't this test essentially the equivalent of:
Person person = new Person("a", 1);
Person expectedPerson = new Person("a", 1);
assertEquals(person, expectedPerson);
I understand unit testing should test functionality in isolation. Will this test just ensure that mockPersonClient exists at runtime?
We can configure a mock object to always return a hard-coded, fake object when calling a method on that mock object.
In this example, the OP configured mockPersonClient.getPerson() to return a fake Person, and then wondered under what circumstances the fake person would not be returned as configured when calling mockPersonClient.getPerson(). I think the code example he showed was just to demonstrate this question; it does not mean he actually wrote that unit test code to test some production code.
A test like that doesn't have any value.
Here is a person; I am going to ask this call to return person, and then I will check that I got person when calling that thing. Of course you will get person, you just hard-coded that, so how is it useful?
Unit tests are about functionality, a simple fact which is lost on many.
Any code which changes data, which filters something, which changes something in a controlled way, is a good candidate for a unit test.
People use mocks a little too much, and most of the time for the wrong thing. Yes, we are advised to code against interfaces, but this doesn't mean you should have a very complex system where you pass interfaces all over the place and your test code then tries to mimic that.
When you mock too much, it means the test you are writing is tied too much to the code it tests, it knows too much about it. Unit tests should not do that, because every time you change the code in some small way, you then discover that now you have to change the test, because you're no longer using 35 interfaces, now you have 47 to mock in a very specific order. That may not be an issue when you have one test, but imagine what happens when you have 1000 tests ...
If people tried to code in more of a functional way then this would not happen. If you pass data, instead of abstractions, now you don't have to mock anything.
Instead of mocking a call to a database, isolate it: take the result and pass that to a method. You've just lost an abstraction, and your code does not need to mock anything; you just call the method, pass the data in whatever format you want, and see what happens.
If you want to test a real database, then you write an integration test. It's really not that complicated. Mocking should not be the first thing you do; do it when it helps and you really must, but most of the time you really don't, and it simplifies things if you don't.
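To make the "pass data, not abstractions" point concrete, here is a hedged sketch (PersonRepository, Person, and adults are invented names): the second overload can be tested with a plain list and an assertEquals, with no mock in sight.
import java.util.List;
import java.util.stream.Collectors;

public class People {

    // Abstraction-heavy version: a unit test needs a mock of PersonRepository.
    public static List<Person> adults(PersonRepository repository) {
        return adults(repository.findAll());
    }

    // Data-in, data-out version: a test just builds a list and calls it.
    public static List<Person> adults(List<Person> people) {
        return people.stream()
                .filter(p -> p.getAge() >= 18)
                .collect(Collectors.toList());
    }
}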

How to test something like a converter

I have a question regarding testing classes like a converter.
Let's say I have a converter from EntityA to EntityB. The converter looks like this:
public EntityB convert(EntityA a) {
    // call internal methods
    return b;
}
private xy internalMethod1(...) {
    // call other internal method
}
private xy internalMethod2(...) {
    ...
}
private xy internalMethod3(...) {
    ...
}
private xy internalMethod4(...) {
    ...
}
The converter has one public method and four internal methods to convert the entity.
How should I test it?
Option 1
I only test the public method and cover all cases of the internal methods with different example inputs.
Advantages:
Tests only the "interface"; doesn't need to know the internal structure.
Internal refactoring is very easy and needs no changes to the tests.
Disadvantages:
Really big, maybe unclear tests that cover all cases.
Every input must pass through all the methods.
Option 2
I write tests for my public method and my private methods. (Some test frameworks can access private methods, like PowerMock or Spock (Groovy).)
I test every method on its own and mock every other internal method.
Advantages:
Really small tests that only test the method itself and mock all the others.
Disadvantages:
I know how it is implemented internally and must change the tests if I refactor a method, a method name, or the internal calling structure.
Option 3
I write some new classes that do the internal work and have public methods.
Advantages:
Tests are maybe clearer and scoped to the dedicated classes.
Disadvantages:
More classes for one conversion task.
Please help me figure out the best practice here.
Maybe some good links/hints.
Thank you for your time.
The points you make are valid, but I think you might not be estimating their weight correctly.
Writing brittle tests (tests that are coupled to the implementation code) makes for a rigid code base that is hard to change. Since the point of writing tests in the first place is to be able to go fast, this is counterproductive.
This is why you write your tests through the API only - it decouples the tests from the implementation. As you've said, this might make writing the tests a bit harder, but the reward is worth the effort since you'll get safety and be able to refactor easily.
Option 3 comes into play when you see a code smell where some tests cover only some of the code, and other tests only cover the other part of the code. This usually means there's a collaborator that maybe needs to be extracted. This is especially true when some internal functions only use some parameters and others don't. Also, when there's code duplication and the like.
What I would suggest is to write it the way you described in Option 1, and then extract code out if needed, in the refactoring stage.
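To make Option 1 concrete, here is a sketch of a test that goes only through the public convert() method (the entity fields and getter/setter names are invented for the example):
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class EntityAToEntityBConverterTest {

    @Test
    public void convertsNameAndStreet() {
        EntityA input = new EntityA();
        input.setName("Jane Doe");
        input.setStreet("Main St 1");

        EntityB result = new EntityAToEntityBConverter().convert(input);

        // Only the observable result is asserted; how the four private
        // methods split the work stays invisible to the test.
        assertEquals("Jane Doe", result.getName());
        assertEquals("Main St 1", result.getStreet());
    }
}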

How do I write a TDD test for removing a field from a Java class?

I have a Java class with three fields. I realized I only need two of them due to changes in requirements.
Ideally I'd write a failing test case before modifying code.
Is there a standard way, or should I just ignore TDD for this task?
That's refactoring, so you don't need to start with failing tests.
Find all the methods using the field.
Make sure that they're covered by unit tests.
Refactor the methods so they no longer use the field.
Remove the field.
Ensure that the tests still pass.
Does dropping this field change the behavior of the class? If not, just drop the field and check that the class still works correctly (i.e., passes the tests you should already have written).
The TDD principle is to write code "designed by tests". That may sound silly, but it means that the first class you write should be the test class, testing the behavior of the class under test. You should iterate over a few steps:
Write the test. It should not compile (you don't have the class/classes under test yet).
Make the test compile. It should fail (you just have an empty class which does not satisfy the assertions in the test).
Make the test pass in the simplest way (usually, just making the method you are testing return the expected value).
Refine/refactor/generalize the class under test and re-run the test (it should still pass). This step should be really fast, usually less than 2 minutes.
Repeat from step 2 until the desired behavior emerges almost naturally.
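As a minimal illustration of that loop, a first iteration for a hypothetical Multiplier class might look like this (the class name and behavior are made up for the example):
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class MultiplierTest {

    // Steps 1-2: the test is written first; it does not compile until
    // Multiplier exists, then it fails against an empty implementation.
    @Test
    public void multipliesTwoNumbers() {
        assertEquals("6", new Multiplier().multiply("2", "3"));
    }
}

// Step 3: the simplest code that makes the test pass.
class Multiplier {
    public String multiply(String a, String b) {
        return "6"; // hard-coded; generalized in the refine/refactor step
    }
}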
If you have an exhaustive list of all the fields you need, you can compare against that list by reflection:
yourClassName.getClass().getDeclaredFields() vs. your list of fields
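A minimal sketch of such a check with JUnit (Customer and its expected fields name and age are made-up names):
import static org.junit.Assert.assertEquals;

import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;

import org.junit.Test;

public class CustomerFieldsTest {

    @Test
    public void hasExactlyTheExpectedFields() {
        Set<String> expected = new HashSet<>(Arrays.asList("name", "age"));
        Set<String> actual = Arrays.stream(Customer.class.getDeclaredFields())
                .map(Field::getName)
                .collect(Collectors.toSet());
        // Fails while the obsolete field is still declared.
        assertEquals(expected, actual);
    }
}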
Write a test for the constructor without the field you want to remove.
Obviously this only works if the constructor takes the field's value as a parameter.
Delete all tests covering the removed functionality (this doesn't count as "writing production code" as per the 3 Rules of TDD).
Delete all references to the obsolete field in the remaining tests. If any of them then fails, you are allowed to write the production code required to make it pass.
Once your tests are green again, all subsequent modifications fall into the "refactoring" category. You are allowed to remove your (now unused) field here.

Basic JUnit Questions

I was testing a String multiplier class with a multiply() method that takes two numbers as inputs (as Strings) and returns the resulting number (as a String):
public String multiply(String num1, String num2);
I have done the implementation and created a test class with the following test cases involving the input String parameters:
valid numbers
characters
special symbol
empty string
Null value
0
Negative number
float
Boundary values
Numbers that are valid but their product is out of range
numbers with + sign (+23)
Now my questions are these:
I'd like to know if "each and every" assertEquals() should be in its own test method. Or can I group similar test cases, like a testInvalidArguments() that contains all asserts involving invalid characters, since ALL of them throw the same NumberFormatException?
If testing an input value like a character ("a"), do I need to include test cases for ALL scenarios?
"a" as the first argument
"a" as the second argument
"a" and "b" as the 2 arguments
As per my understanding, the benefit of these unit tests is to find the cases where input from a user might fail and result in an exception, so that we can then give the user a meaningful message (asking them to provide valid input) instead of an exception. Is that correct? And is it the only benefit?
Are the 11 test cases mentioned above sufficient? Did I miss something? Did I overdo it? When is it enough?
Following from the above point, have I successfully tested the multiply() method?
Unit testing is great (in the 200 KLOC project I'm working on, I've got as much unit test code as regular code), but (assuming a correct unit test):
a unit test that passes does not guarantee that your code works
Think of it this way:
a unit test that fails proves your code is broken
It is really important to realize this.
In addition to that:
it is usually impossible to test every possible input
And then, when you're refactoring:
all your unit tests passing does not mean you didn't introduce a regression
But:
if one of your unit tests fails, you know you have introduced a regression
This is really fundamental and should be unit testing 101.
1) I do think it's a good idea to limit the number of assertions you make in each test. JUnit only reports the first failure in a test, so if you have multiple assertions some problems may be masked. It's more useful to be able to see everything that passed and everything that failed. If you have 10 assertEquals in one test and the first one fails, then you just don't know what would have happened with the other 9. Those would be good data points to have when debugging.
2) Yes, you should include tests for all of your inputs.
3) It's not just end-user input that needs to be tested. You'll want to write tests for any public methods that could possibly fail. There are some good guidelines for this, particularly concerning getters and setters, at the JUnit FAQ.
4) I think you've got it pretty well covered. (At least I can't think of anything else, but see #5).
5) Give it to some users to test out. They always find sample data that I never think of testing. :)
1) There is a tradeoff between granularity of tests (and hence ease of diagnosis) and verbosity of your unit test code. I'm personally happy to go for relatively coarse-grained test methods, especially once the tests and tested code have stabilized. The granularity issue is only relevant when tests fail. (If I get a failure in a multi-assertion testcase, I either fix the first failure and repeat, or I temporarily hack the testcase as required to figure out what is going on.)
2) Use your common sense. Based on your understanding of how the code is written, design your tests to exercise all of the qualitatively different subcases. Recognize that it is impossible to test all possible inputs in all but the most trivial cases.
3) The point of unit testing is to provide a level of assurance that the methods under test do what they are required to do. What this means depends on the code being tested. For example, if I am unit testing a sort method, validation of user input is irrelevant.
4) The coverage seems reasonable. However, without a detailed specification of what your class is required to do, and examination of the actual unit tests, it is impossible to say if you have covered everything. For example, is your method supposed to cope with leading/trailing whitespace characters, numbers with decimal points, numbers like "123,456", numbers expressed using non-Latin digits, numbers in base 42?
5) Define "successfully tested". If you mean, do my tests prove that the code has no errors, then the answer is a definite "NO". Unless the unit tests enumerate each and every possible input, they cannot constitute a proof of correctness. (And in some circumstances, not even testing all inputs is sufficient.)
In all but the most trivial cases, testing cannot prove the absence of bugs. The only thing it can prove is that bugs are present. If you need to prove that a program has no bugs, you need to resort to "formal methods"; i.e. applying formal theorem proving techniques to your program.
And, as another answer points out, you need to give it to real users to see what they might come up with in the way of unexpected input. In other words ... whether the stated or inferred user requirements are actually complete and valid.
The true number of possible tests is, of course, infinite. That is not practical. You have to choose valid representative cases. You seem to have done that. Good job.
1) It's best to keep your tests small and focused. That way, when a test fails, it's clear why the test failed. This usually results in a single assertion per test, but not always.
However, instead of hand-coding a test for each individual "invalid scenario", you might want to take a look at JUnit 4.4 Theories (see the JUnit 4.4 release notes and this blog post), or the JUnit Parameterized test runner.
Parameterized tests and Theories are perfect for "calculation" methods like this one. In addition, to keep things organized, I might make two test classes: one for "good" inputs and one for "bad" inputs.
2) You only need to include the test cases that you think are most likely to expose any bugs in your code, not all possible combinations of all inputs (that would be impossible, as WizardOfOdds points out in his comments). The three sets that you proposed are good ones, but I probably wouldn't test more than those three. Using Theories or parameterized tests, however, would allow you to add even more scenarios, as in the sketch below.
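For example, with the JUnit 4 Parameterized runner the invalid-input scenarios collapse into one data-driven test (Multiplier is the hypothetical class under test):
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class MultiplyInvalidInputTest {

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
                { "a", "5" },  // character as first argument
                { "5", "a" },  // character as second argument
                { "a", "b" },  // characters as both arguments
                { "", "5" },   // empty string
        });
    }

    private final String num1;
    private final String num2;

    public MultiplyInvalidInputTest(String num1, String num2) {
        this.num1 = num1;
        this.num2 = num2;
    }

    @Test(expected = NumberFormatException.class)
    public void rejectsInvalidInput() {
        new Multiplier().multiply(num1, num2);
    }
}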
3) There are many benefits to writing unit tests, not just the one you mention. Some other benefits include:
Confidence in your code - You have a high degree of certainty that your code is correct.
Confidence to Refactor - you can refactor your code and know that if you break something, your tests will tell you.
Regressions - You will know right away if a change in one part of the system breaks this particular method unintentionally.
Completeness - The tests forced you to think about the possible inputs your method can receive, and how the method should respond.
5) It sounds like you did a good job coming up with possible test scenarios. I think you got all the important ones.
I just want to add that with unit testing you can gain even more if you first think of the possible cases and then implement in the test-driven development fashion, because this will help you stay focused on the current case and will enable you to create the simplest possible implementation in a DRY fashion. You might also use a test coverage tool, e.g. EclEmma in Eclipse, which is really easy to use and will show you whether the tests have executed all of your code; this can help you determine when it is enough (although this is not a proof, just a metric). Generally, when it comes to unit testing, I was much inspired by Kent Beck's Test Driven Development: By Example book, which I strongly recommend.
