I program mostly in Scala and Java, using ScalaTest in Scala and JUnit for unit testing. I would like to apply the very same tests to several implementations of the same interface/trait. The idea is to verify that the interface contract is enforced and to check the Liskov substitution principle.
For instance, when testing implementations of lists, tests could include:
An instance should be empty if and only if it has zero size.
After calling clear, the size should be zero.
Adding an element in the middle of a list increments by one the index of every element to its right.
etc.
What are the best practices?
In Java/JUnit, I generally handle this by having an abstract test case from which the test class for each specific implementation inherits all the tests, with a setup method instantiating the implementation. I can't watch the video abyx posted right now, but I suspect it's this general idea.
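For illustration, here is a minimal sketch of that pattern, written in Scala against JUnit 4 (the ListContractTest name and the newList() factory method are just assumptions for the example):

import org.junit.Assert.assertTrue
import org.junit.Test

// Abstract contract test: every @Test here is inherited by each implementation's
// test class, which only has to supply the instance under test via the factory method.
abstract class ListContractTest {
  protected def newList(): java.util.List[String]

  @Test def emptyIfAndOnlyIfSizeIsZero(): Unit = {
    val list = newList()
    assertTrue(list.isEmpty == (list.size == 0))
  }

  @Test def clearResetsSizeToZero(): Unit = {
    val list = newList()
    list.add("a")
    list.clear()
    assertTrue(list.size == 0)
  }
}

// One concrete subclass per implementation under test.
class ArrayListContractTest extends ListContractTest {
  override protected def newList() = new java.util.ArrayList[String]()
}

class LinkedListContractTest extends ListContractTest {
  override protected def newList() = new java.util.LinkedList[String]()
}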
Another interesting possibility if you don't mind introducing yet another testing framework would be to use JDave Specification classes.
I haven't tried using either of these with ScalaTest or with Scala traits and implementations, but it should be possible to do something similar.
This sounds like it could be a job for shared tests. Shared tests are tests that are shared by different fixture objects. I.e., the same test code is run on different data. ScalaTest does have support for that. Search for "shared tests" in the documentation of your favorite style trait that represents tests as functions (Spec, WordSpec, FunSuite, FlatSpec, etc.). An example is the syntax for FlatSpec:
it should behave like emptyList
See Sharing Tests in the FlatSpec documentation
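To make that concrete, a rough sketch of shared tests for the list example might look like this (the ListBehaviours trait and emptyList method are my own names; newer ScalaTest releases spell the style trait org.scalatest.flatspec.AnyFlatSpec):

import org.scalatest.FlatSpec
import scala.collection.mutable

// Shared tests live in a trait, parameterized by a by-name fixture so each test gets a fresh list.
trait ListBehaviours { this: FlatSpec =>
  def emptyList(newList: => mutable.Buffer[Int]): Unit = {
    it should "have size 0 when empty" in {
      assert(newList.isEmpty)
    }
    it should "have size 0 after clear" in {
      val list = newList
      list += 1
      list.clear()
      assert(list.size == 0)
    }
  }
}

// Each implementation's spec mixes in the behaviours and runs them against its own fixture.
class ArrayBufferSpec extends FlatSpec with ListBehaviours {
  "An ArrayBuffer" should behave like emptyList(mutable.ArrayBuffer.empty[Int])
}

class ListBufferSpec extends FlatSpec with ListBehaviours {
  "A ListBuffer" should behave like emptyList(mutable.ListBuffer.empty[Int])
}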
Contract tests are easy to do with JUnit 4; here's a video by Ben Rady.
For Scala, strongly consider ScalaCheck. All of those contracts are expressible as one-line specifications in ScalaCheck. When run, ScalaCheck will generate a configurable number of sample inputs randomly, and check that all of the specifications hold. It's about the most semantically dense way possible to create unit tests.
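As a hedged sketch of what that looks like (the property wording and the choice of immutable scala.List are mine), the first and third contracts from the question might be:

import org.scalacheck.Prop.forAll
import org.scalacheck.Properties

// Each contract becomes a property that ScalaCheck checks against many randomly generated lists.
object ListContractProps extends Properties("List") {

  property("empty iff size is zero") = forAll { (xs: List[Int]) =>
    xs.isEmpty == (xs.size == 0)
  }

  property("inserting in the middle shifts later indices by one") =
    forAll { (xs: List[Int], x: Int) =>
      val i = xs.size / 2
      val (front, back) = xs.splitAt(i)
      val ys = front ::: x :: back
      back.zipWithIndex.forall { case (e, k) => ys(i + 1 + k) == e }
    }
}

Run with the ScalaCheck test runner, each property is checked against 100 generated lists by default.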
Related
I am new to unit testing and recently tried my hand at JUnit and Mockito.
I am trying to unit test a method that calls multiple private methods and also creates private objects of other classes.
How can I unit test such a method?
For Example if I have the following code:
class ClassToTest {
    private OuterClass outerClass;

    public Boolean function() {
        outerClass = new OuterClass(20);
        return innerFunction(outerClass.getValue());
    }

    private Boolean innerFunction(int val) {
        if (val % 2 == 0)
            return true;
        return false;
    }
}
I'm confused about how I would test the public function.
It doesn't matter how a method is implemented; that method should have a contract it obeys, and that is what you are verifying with your test. In this example, function() should return true if the OuterClass's value is even. One way to accomplish this would be to inject (pass into the ClassToTest constructor) the OuterClass instance, so that you can control the value when testing:
@Test
public void trueWhenEven() {
    var outer = new OuterClass(2);
    var ctt = new ClassToTest(outer);
    assertTrue(ctt.function());
}
Sometimes the contract is just that a method invokes methods on some other objects; in these kinds of cases, you can use Mockito or a similar library to verify the interactions.
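A small Scala sketch of that interaction-based case (the AuditLog and Service names are invented just for the example):

import org.junit.Test
import org.mockito.Mockito.{mock, verify}

// Invented collaborator and class under test, only to show interaction verification.
class AuditLog { def record(event: String): Unit = () }
class Service(log: AuditLog) { def run(): Unit = log.record("run") }

class ServiceTest {
  @Test def runRecordsAnAuditEvent(): Unit = {
    val log = mock(classOf[AuditLog])  // Mockito mock of the collaborator
    new Service(log).run()
    verify(log).record("run")          // here the contract is the interaction itself
  }
}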
You are starting unit testing by going directly at some non-trivial questions.
Question 1: How to handle implementation details / private functions? Answer: Finding the bugs in your code is one primary goal of unit testing (and of most other kinds of testing). Another primary goal is to prevent the introduction of bugs by acting as regression tests when the software is changed. Bugs live in the implementation - different implementations come with different bugs. Therefore, be sure to test the implementation details. One important tool to support this is coverage analysis, which shows you which parts of the implementation's code have been reached by your tests.
You may even test aspects beyond the contract of the function: a) Negative tests intentionally check behaviour for invalid / unspecified inputs and are important for making a system secure: even when provided with invalid input, the system should not allow itself to be hacked via, for example, out-of-bounds memory reads or writes. This probably does not apply to your example, because your method is most likely specified as a 'total function' rather than a 'partial function'. b) Tests of implementation details (if accessible) can even go beyond what is needed by the current implementation, to prevent bugs in upcoming changes to a component, such as extensions of the API.
There are, however, also secondary goals of unit testing. One of them is to keep your tests from breaking unnecessarily when implementation details change. One approach that also reaches this secondary goal is to test the implementation details via the public API. This way, certain kinds of re-design of the implementation details will not break your tests: renaming, splitting or merging of private functions would not affect them. Switching to a different algorithm, however, will likely require you to re-think your tests: tests for an iterative / recursive implementation of the fibonacci function will look different than tests for an implementation using the closed-form expression from Moivre/Binet, or for a lookup-table implementation.
For your example this means you should try to test the functionality of your private function via the public API.
Question 2: How to deal with dependencies on other parts of the software? Unit testing focuses on finding the bugs in small, isolated pieces of code. When these pieces have dependencies on other parts of the code, this can hurt your ability to unit-test them properly. But whether this is really the case depends on the actual dependency. For example, if your code uses the Math.sin() function, that is also a dependency on another piece of code, but such a dependency typically does not harm your ability to test the code properly.
Dependencies on other components become a problem in the following cases: the other component makes it difficult to stimulate all interesting scenarios in your code under test; it leads to non-deterministic behaviour (time, randomness, ...); it causes unacceptably long build or execution times; or it is buggy or not even available yet.
If none of these criteria apply (as is normally the case with the Math.sin() function), you can typically just live with the other components being part of your tests. Keep in mind, however, that in your unit tests you still focus on the bugs in your code and should not start writing tests that actually test the other components: the other components have tests of their own.
In your example you have chosen OuterClass to have some apparently trivial functionality. In this case you could live with OuterClass simply remaining part of your tests. However, it is only an example - the real class may in fact be problematic according to the above criteria. If that is the case, you would have to manage that dependency somehow, which in one way or another means arriving at a testing-friendly design.
There is a whole family of approaches here, so you had better search the web for "design for testability" and "inversion of control". You should also learn what distinguishes unit testing from integration testing: this will help you avoid applying unit testing to code parts that should rather be covered by integration tests.
Generally, with Mockito this would require the use of dependency injection; you would then inject a mock of OuterClass for the test.
If you'd really like to test this without adding a Spring-type framework, I can think of 3 options:
1) Make this an integration test and use real instances of everything
2) Alter your code so that OuterClass is passed in via a setter or constructor, and then pass in a mock for your test (see the sketch after this list)
3) Change private OuterClass outerClass; to protected OuterClass outerClass;, make sure your test package structure mirrors your actual code package structure, and then you can do outerClass = Mockito.mock(OuterClass.class); in your test setup.
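Here is a rough Scala sketch of option 2, with OuterClass stubbed so the test controls the value (the constructor shape of OuterClass is an assumption):

import org.junit.Assert.assertTrue
import org.junit.Test
import org.mockito.Mockito.{mock, when}

// Stand-in for the question's OuterClass, with a getValue the test can stub.
class OuterClass(value: Int) { def getValue: Int = value }

// Option 2: OuterClass is passed in instead of being created inside function().
class ClassToTest(outerClass: OuterClass) {
  def function(): Boolean = innerFunction(outerClass.getValue)
  private def innerFunction(value: Int): Boolean = value % 2 == 0
}

class ClassToTestTest {
  @Test def trueWhenValueIsEven(): Unit = {
    val outer = mock(classOf[OuterClass])
    when(outer.getValue).thenReturn(20)  // control the collaborator's value
    assertTrue(new ClassToTest(outer).function())
  }
}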
Mocking ScheduledExecutorService would really make testing my classes easier, but according to the Mockito recommendations this seems like a bad idea: the logic of the mocked class could change in a way that makes my code use it incorrectly, while the unit tests would still report success.
It seems that writing a wrapper for it would be the "clean" way, but I have a feeling that this would merely result in the complete duplication of an interface, which would just make my code less straightforward. I'd like to follow the practical recommendations of this answer, but I am not sure that the contract of ScheduledExecutorService will always remain the same.
Can I assume that the contract for the existing methods of ScheduledExecutorService (or more generally, any other class in the JRE libs) will never change? If not, is it enough if I test the correct use of it in the integration tests, while still mocking it directly in the unit tests?
It's more of a guideline than a rule; do the thing that will most likely result in a clean, reliable, and non-brittle test. As in the document you quoted:
This is not a hard line, but crossing this line may have repercussions! (it most likely will)
One important thing here is that "don't mock types you don't own" usually refers to concrete or internal types, because those are much more likely to change their behavior between versions, or to gain or lose modifiers like final or static that Mockito's dynamic overrides might not pick up on. After all, if you were to manually subclass a third-party class that had been made final, Java would throw a compiler error; Mockito's syntax would hide that from you until test runtime.
To list out the factors I think of:
As assylias pointed out in the comments, you're referring to a Java interface, which insulates you from common changes to final methods or method visibility.
The interface is well-documented and designed for third-party extension, providing yet another reason that Java would be unlikely to make breaking changes to the general contract of the interface.
The interface in question is a heavily used interface in Java, which overall has a lot of users and a lot of backwards-compatibility concerns. It is very unlikely that you'd be subject to breaking changes, compared to a smaller library or one under active development. One might even say that the JRE is in such lock step with the Java language that you have as much to worry about from breaking syntax changes as from breaking interface changes.
Though I believe strongly in "don't mock types you don't own" as a general heuristic or code smell, I'd agree with you here that the type is worth mocking, and that—unless you were to write and test a full implementation to be used in other tests—it's the best path forward for you here.
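For what it's worth, a sketch of what such a test might look like in Scala (CacheRefresher and its schedule are invented here; only ScheduledExecutorService is the real JRE interface):

import java.util.concurrent.{ScheduledExecutorService, TimeUnit}
import org.junit.Test
import org.mockito.ArgumentMatchers.{any, eq => eqTo}
import org.mockito.Mockito.{mock, verify}

// Invented class under test: schedules a periodic refresh on the injected executor.
class CacheRefresher(executor: ScheduledExecutorService) {
  def start(): Unit = {
    executor.scheduleAtFixedRate(new Runnable { def run(): Unit = refresh() }, 0L, 60L, TimeUnit.SECONDS)
  }
  private def refresh(): Unit = ()
}

class CacheRefresherTest {
  @Test def schedulesARefreshEveryMinute(): Unit = {
    val executor = mock(classOf[ScheduledExecutorService])
    new CacheRefresher(executor).start()
    // Verify the interaction with the mocked JRE interface.
    verify(executor).scheduleAtFixedRate(any(classOf[Runnable]), eqTo(0L), eqTo(60L), eqTo(TimeUnit.SECONDS))
  }
}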
I'd say the "Don't mock type you don't own!" is the false conclusion out of the right reasoning.
Unittests should only need to be changes if your API changes or the part of an API of a dependency your code uses.
example:
You us an interface of a dependency as an input parameter, but your tested code uses only one method in that interface. If you don't mock this interface (which is a type you don't own) you have to create your own dummy implementation implementing all of the interfaces methods, even those you don't use.
If you change the version of that dependency this interface might have additional method and/or some methods have been removed. You have to change all of your the implementations of this interface throughout your program. If you mocked this interface you don't need to change your tests and they still give you confidence that your codes behavior did not change after the required refactoring.
Furthermore your Unittest should only fail because the behavior of your code changed, not because of a change in the dependencies behavior.
Changes in a dependencies behavior should be pinned with separate Unittest you setup for the dependencies behavior (if it is crucial for your application) and/or integration tests.
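A minimal sketch of such a pinning test for the ScheduledExecutorService case discussed above (timings picked arbitrarily):

import java.util.concurrent.{CountDownLatch, Executors, TimeUnit}
import org.junit.Assert.assertTrue
import org.junit.Test

// Exercises the real executor, so a behaviour change in a new JRE version would
// fail here instead of going unnoticed behind the mocks in the unit tests.
class ScheduledExecutorPinningTest {
  @Test def scheduledTaskActuallyRuns(): Unit = {
    val executor = Executors.newSingleThreadScheduledExecutor()
    try {
      val ran = new CountDownLatch(1)
      executor.schedule(new Runnable { def run(): Unit = ran.countDown() }, 10L, TimeUnit.MILLISECONDS)
      assertTrue("scheduled task did not run", ran.await(1L, TimeUnit.SECONDS))
    } finally executor.shutdownNow()
  }
}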
I just started using JMockit and am confused about the advantages of using MockUp for "Faking it" vs Expectations to mock an object.
From what I read through the docs, MockUp of a class allows me to override methods with my own implementations. However, I see that I can do things similarly in Expectations blocks.
So what is the advantage of a MockUp vs an Expectations? According to the JMockit docs,
Fakes are different from the mocking API in that, rather than specifying in a test the invocations we expect a dependency will receive when used by code under test, we modify the implementation of the dependency so that it suits the needs of the test.
Isn't that just semantics? Functionally, are the same things not achievable using an Expectations() block instead of using a MockUp<>?
Your question is: what is the difference between using the Expectations API and the MockUp API?
I'm new to this, but to me they are mostly two different ways of doing the same thing. Which one you pick is just a matter of taste and of how you want to test your code.
With the MockUp API you can specify the fake in one statement block, whereas with Expectations you would use an Expectations block and a Verifications block. Otherwise they seem very similar to me too.
Is there any way in JUnit (or any other testing framework) to make assertions about all possible inputs to a method?
Something like:
assertTrue(myClass.myMethod(anyString))
EDIT:
I realize that there are an infinite number of possible inputs, but I guess I was wondering whether there are frameworks that can statically analyze a method and detect that a certain result is reached in all possible cases, e.g. that public static boolean myMethod(String input) { return true; } will always return true?
No; there is a practically unlimited number of possible inputs.
It's your job to partition them into test cases with (expected) equivalent behaviour.
If such an artificial intelligence existed, it could also write the code to be tested.
There are test case generators that auto-create test cases, but they are mostly useless. They produce a huge number of test cases that mainly just touch the code instead of checking for an expected result.
Such tools raise the test coverage percentage, but in a very dubious way. (I would call that an illegitimate raise of test coverage: you should test, not just touch!)
One such tool is CodePro from Google. Use CodePro -> Test Case Generation (e.g. within Eclipse).
At first you will be a bit surprised; it's not too bad to try it out. Then you will know the limits of automatic test case generation.
You cannot do this with JUnit. The only way I can think of to do such a thing would be using formal logic verification.
As said before, it's not possible. However, there is the approach of automated property-based testing, which comes about as close to your idea as possible. Well, it's still far off, though...
For instance, have a look at ScalaCheck:
ScalaCheck is a library written in Scala and used for automated property-based testing of Scala or Java programs. ScalaCheck was originally inspired by the Haskell library QuickCheck, but has also ventured into its own.
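For instance, the assertTrue(myClass.myMethod(anyString)) idea from the question could be approximated as a ScalaCheck property (MyClass.myMethod below is just the trivial example from the edit):

import org.scalacheck.Prop.forAll
import org.scalacheck.Properties

// The trivial method from the question's edit.
object MyClass {
  def myMethod(input: String): Boolean = true
}

// Note: this does not prove the result for all strings; it checks it for many generated ones.
object MyMethodProps extends Properties("myMethod") {
  property("returns true for any string") = forAll { (s: String) =>
    MyClass.myMethod(s)
  }
}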
I am trying to write an EasyMock JUnit test case for some code that has a lot of extra bits and pieces which I find a little overkill to mock.
Say, for the example given at http://java.dzone.com/articles/easymock-tutorial-%E2%80%93-getting,
the following expectation is set to test
portfolio.getTotalValue()
Expectation
EasyMock.expect(marketMock.getPrice("EBAY")).andReturn(42.00);
EasyMock.replay(marketMock);
Now in my case there are around 30-40 such expectations that I need to set before I can get to the piece of code I want to unit test.
Is there a way to generate the expectations for a piece of code, or to generate them dynamically, so that I don't have to do all this manually in order to test my specific piece of code?
No.
Seriously, what would you expect it to do?
You can save some labor over the long run by looking at patterns of expectations across multiple tests and combining those into reusable methods or @Before methods.
Actually, it's a code smell: Hard-to-Test Code. Your object might not fulfill the Single Responsibility Principle (SRP).
You can try extracting some expectations into one or more allowXY or createMockedXY helper methods (for example, void allowDownloadDocument(path, name, ...) or Document createMockedDocument(...)). Eliminating static helper classes could also help.
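A sketch of such a createMockedXY-style helper, written in Scala against EasyMock and assuming a StockMarket interface like the one in the tutorial linked above:

import org.easymock.EasyMock.{createMock, expect, replay}

// Stand-in for the tutorial's interface, so the sketch is self-contained.
trait StockMarket {
  def getPrice(symbol: String): Double
}

// Reusable helper: records one price expectation per symbol and hands back a
// replayed mock, so individual tests no longer repeat dozens of expect calls.
trait MarketExpectations {
  def mockedMarket(prices: Map[String, Double]): StockMarket = {
    val market = createMock(classOf[StockMarket])
    prices.foreach { case (symbol, price) =>
      expect(market.getPrice(symbol)).andReturn(price)
    }
    replay(market)
    market
  }
}

A test can then mix in MarketExpectations and call mockedMarket(Map("EBAY" -> 42.00)) instead of repeating thirty-odd expect/replay lines.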