JUnit: best strategy for placing test methods

I started a project and am using JUnit for the first time.
What's the best practice for placing test cases?
One test class for every "real" class.
One test class for every package, or even for the complete project.
Test methods inside the "real" class, without a test class.
As far as I can see, all three approaches are technically possible, but I have no experience here, so I'm asking for guidance to get it right from the very beginning.
EDIT
I am talking about unit testing of code. I am using Maven too, but I don't think that is important for my question.

One test class for every "real" class.
I typically go with this pattern. Of course, tests for interfaces don't make much sense, and there are times when small "entity" classes with getter and setter methods only (i.e. no logic) don't need a corresponding test class.
That said, I've been surprised at how much utility I've found in unit tests even on very small classes. For example, even entity classes with only get/set methods which are stored in databases through DAO methods should be tested in case some of the database wiring is incorrect. You never know when you have a mismatched get/set pair, a broken toString(), an asymmetric hashCode() or equals(), or other such issues.
The entire point of "unit" tests is (IMHO) to test the smallest unit of your code in isolation -- and that unit is the class. So when I have a ContainerUtil class, I look for a corresponding ContainerUtilTest class in the test directory. I run coverage tests often, and I expect just about all the logic portions of all classes to be covered.
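To make the pairing concrete, here is a minimal sketch of what such a test class might look like. ContainerUtil and its static isEmpty(Collection) helper are hypothetical stand-ins, not from the question:

// ContainerUtilTest.java -- lives in src/test/java, same package as ContainerUtil.
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.Collections;
import org.junit.Test;

public class ContainerUtilTest {

    @Test
    public void isEmptyIsTrueForNullAndEmptyCollections() {
        assertTrue(ContainerUtil.isEmpty(null));
        assertTrue(ContainerUtil.isEmpty(Collections.emptyList()));
    }

    @Test
    public void isEmptyIsFalseForNonEmptyCollections() {
        assertFalse(ContainerUtil.isEmpty(Collections.singletonList("x")));
    }
}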
One test class for every package, or even for the complete project.
I might have this as well, but then I'd consider these to be "integration" tests: tests that bridge between classes, or between various parts of the project, to ensure that your project works as a whole.
But these would be in addition to your unit tests.
Test methods inside the "real" class, without a test class.
Yeah, no. Really bad idea. You don't want your production code to include test code if at all possible. It decreases the readability of your classes, increases the chance that you break something while trying to test, etc. Just say no.
I also keep my test classes away from my sources. I usually use Maven, so I have my sources in src/main/java and my tests in src/test/java. You don't want your tests to end up in the jar or war files, where they might confuse others.

It really depends on how big your project is, but in my experience the best approach is one test class for every "big" piece of functionality (this may not apply to unit testing), or, in this case, for every "real" class.
About the other two:
One test class for every package, or even for the complete project.
This may grow too big and messy. I wouldn't recommend mixing different things in the same test class, for the same reason you wouldn't mix classes within the same file.
Test methods inside the "real" class, without a test class.
I don't recommend this one either, as you lose track of where the tests are and which things have tests implemented versus which are missing. Also, your test code may need access to other classes as well, so this can again become a mess.

For unit testing, I have so far used one test class for each tested class. FOR ME, that seemed to be the least messy arrangement. I put the unit tests under src/test/java in the same package tree as the tested classes in src/main/java. Integration tests are a different matter, and each gets its own file.
A single test class for everything has several disadvantages. The source code becomes unreadable, and you end up doing a lot of unnecessary work in @Before and @BeforeEach methods.
And I don't get the point of putting tests into the tested class. Lots of imports, and how would you tell "real" methods from test methods? Because of the additional methods, the source code again becomes unreadable.
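To illustrate the setup point: with one test class per tested class, each @BeforeEach builds only what its own tests need. A minimal JUnit 5 (Jupiter) sketch, assuming a hypothetical InvoiceCalculator as the class under test:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class InvoiceCalculatorTest {

    private InvoiceCalculator calculator; // hypothetical class under test

    @BeforeEach
    void setUp() {
        // Only the fixture these tests need -- nothing for unrelated classes.
        calculator = new InvoiceCalculator(0.19); // assume a 19% tax rate
    }

    @Test
    void addsTaxToNetAmount() {
        assertEquals(119.0, calculator.gross(100.0), 0.001);
    }
}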

I would suggest you try the first approach. It is very useful because you can track your unit-test coverage percentage with a tool such as Sonar.
Also, I strongly recommend you apply TDD to develop your code: first you write your test and watch it fail, then you write the code that makes the test pass, and then you refactor.
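As a minimal sketch of that cycle: you would write the test below first and watch it fail (red), then add just enough code to make it pass (green), and only then refactor. FizzBuzz is purely illustrative, not from the question:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FizzBuzzTest {

    // Step 1 (red): written before FizzBuzz exists, so it cannot pass yet.
    @Test
    public void multiplesOfThreeSayFizz() {
        assertEquals("Fizz", new FizzBuzz().say(3));
    }
}

// Step 2 (green): the simplest implementation that makes the test pass.
class FizzBuzz {
    String say(int n) {
        return n % 3 == 0 ? "Fizz" : String.valueOf(n);
    }
}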
Allow me to suggest two sources of reading to help you with that:
Test Driven Development: By Example
Refactoring: Improving the Design of Existing Code
These are the same reading resources I used to start building tests and using TDD.
I wouldn't recommend the other approaches, as you don't need to ship test code to production, and using a single test class would cause a code smell known as "Large Class".

Related

The cost of setting up tests in JUnit - using mocked objects versus repository-tests in legacy code

I work on a project which has existed for many years. The time it takes to build the project with all tests is almost sensational (not in a good way). This is mainly due to the large number of modules, as well as heaps of unit tests which use a repository to set up test data rather than mocking the desired behaviour. Unit tests using a repository spend a lot of time on test setup, and they run quite slowly. This adds up to a lot of time, as the system is quite large.
We write all new unit tests using Mockito to mock the repository (except when we are actually testing the repository, obviously). We also try to rewrite existing unit tests to use mocks of the repository instead of an actual repository whenever we have the time and opportunity. Completely eliminating the use of repos in our tests has a huge effect on how much time it takes to run them.
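For illustration, a sketch of the mocked style (CustomerService and CustomerRepository are hypothetical stand-ins for our domain): the single repository call is stubbed with Mockito, so no test data has to be persisted anywhere.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class CustomerServiceTest {

    @Test
    public void looksUpCustomerNameById() {
        // No repository setup: the one call we need is stubbed in memory.
        CustomerRepository repository = mock(CustomerRepository.class);
        when(repository.findNameById(42L)).thenReturn("Alice");

        CustomerService service = new CustomerService(repository);

        assertEquals("Alice", service.nameOf(42L));
    }
}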
A lot of the legacy code sets up its test data using builders and test utilities which in turn use the repository. As the domain is quite complex, this often involves setting up a fair number of objects and their relationships to each other. Rewriting a class of tests (say ~15 tests) to use only mocked objects can therefore be quite time-consuming. And, as everywhere else, time is not an infinite resource.
If we are adding some new functionality to a class, it is far easier to just write one new repository test (in addition to the existing 15) than to work out exactly how the test data needs to be set up with different mock objects.
I have tried to find some information on how and to what extent the test setup affects the actual time it takes to run the tests, but I have failed to find any useful information. My only "facts" are the observations I make when running a test class. The test setup for a repo test may easily take ~10 seconds, while the test setup for a mocked test class starts in less than a second.
NOTE: I am aware that I can use JUnit's Stopwatch rule to benchmark a single test or a set of tests, but my question is more concerned with best practices than with exactly how long my tests take to run.
I have two questions:
Say I encounter a test class which already has 15 unit tests, none of which mocks any behaviour. I write a test and make a small fix in the class. I do not have the time to rewrite the whole test class to use mock objects. Should I just add a new test without mocking any behaviour, following the existing (and bad) pattern? Does it really matter whether I have 15 non-mocked tests and 1 mocked test, or 16 non-mocked tests?
In my test class with 15 unit tests, some of the tests are easier to refactor than others. Is it worth it to rewrite only five of the tests to use mocked objects? Is it against best practice, or in any other way not good, to have a test class where some of the tests use mocks and some don't?
Your question is really subjective, but I'll try to suggest a few options you can explore. It all depends on how much you're willing to spend.
Should I just add a new test without mocking any behavior and follow the existing (and bad) pattern? Does it really matter whether I have 15 non-mocked tests and 1 mocked test or if I have 16 non-mocked tests?
It's not about just one new test. If you keep writing in the bad/slow pattern, you're just adding technical debt. You have to lay down the best practices for writing new unit tests yourself.
In my test class with 15 unit tests, some of the tests are easier to refactor than others. Is it worth it to rewrite only five of the tests to use mocked objects?
Absolutely. Why not? You've said yourself what improvements you're getting by following the newer style of code.
Is it against best practice or in any other way not good to have a test class where some of the tests use mocks and some don't?
Well, one best practice is to have consistent code everywhere. A mix of old-style repository tests and newer ones with mocks does not sit too well as far as best practices are concerned. But I'd be more concerned if the code you write were not well covered by unit tests, whatever the style may be.
At the end of the day, you're the one who has to decide, weighing all the trade-offs: how much build-time improvement you can achieve with mocked repositories, how frequently you build, and whether the same could be achieved through hardware improvements and other factors.
As written by @ShanuGupta, there is no general answer to your question.
But here is my thought:
After correctness, readability is the second most desirable quality of code, especially test code (since it is executable specification).
On the other hand, there is no rule that a unit cannot have more than one test class. Therefore I'd separate the "old" test methods that don't use mocks from the "mocking" tests by placing them in separate test classes.

How can I run a test class only if another test was passed?

I am doing some JUnit testing, and I need to know how to run a test class only if a specific test from another class has passed.
There is a 'Categories' feature in JUnit. (See: https://github.com/junit-team/junit/wiki/Categories)
This question has already been answered in this post.
In the @Before method you need to retrieve a runtime value from your system and check that it matches your requirement. Everything will stop if it doesn't.
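A minimal sketch of that idea, assuming the earlier test records its success in a system property (the property name here is made up): JUnit 4's Assume will then skip, rather than fail, every test in this class when the check does not hold.

import org.junit.Assume;
import org.junit.Before;
import org.junit.Test;

public class DependentTest {

    @Before
    public void requirePrerequisite() {
        // If the assumption fails, JUnit marks the tests as skipped, not failed.
        Assume.assumeTrue(Boolean.getBoolean("prerequisite.test.passed"));
    }

    @Test
    public void runsOnlyIfThePrerequisiteTestPassed() {
        // ...
    }
}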
IMHO, this is bad practice, even in a well-designed integration test suite. I would encourage you to rethink your overall test class design.
If these tests are truly meant to be unit tests, they should be atomic and independent of each other. See this post for a good read.
Having said that, I have often used JUnit 4.x to build and run rather large suites of integration tests (backend functional tests that exercise a RESTful service's responses). If this is your use case, I recommend restructuring your tests so that you never have one test in TestClassA depend on a test in TestClassB. That is a bad idea: it will make your tests more fragile, and make it difficult for other devs to understand the intention of your tests taken together as a whole.
When I have found dependencies across multiple test classes, I have factored out a "test superclass" for both test classes and done my setup work in that superclass. Alternatively, you can factor out a utility class containing static methods for creating somewhat complex starting test conditions.
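For example, the utility-class variant might look like this sketch (Customer and Order are hypothetical domain classes):

// A shared home for setup work, instead of one test depending on another.
public final class TestFixtures {

    private TestFixtures() {
        // no instances; static helpers only
    }

    // Builds a customer with one open order, ready for service-level tests.
    public static Customer customerWithOpenOrder() {
        Customer customer = new Customer("Alice");
        customer.addOrder(new Order("open"));
        return customer;
    }
}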
But even using JUnit as a vehicle to run this kind of "integration" test should be done with caution and careful intent.

How to organize unit tests for a complex class?

I have a complex class (300+ lines) which I'm trying to test from different "points of view". I've already created three different unit tests, each of which is a complex class in itself (100+ lines). The question is: what is the best place to store them in the project directory tree? This is how I'm doing it now (Maven is used):
pom.xml
/src
  /main
    /java
      /com
        /Foo
          ComplexClass.java
  /test
    /java
      /com
        /Foo
          /ComplexClass
            FirstPointOfViewTest.java
            SecondPointOfViewTest.java
            ThirdPointOfViewTest.java
Of course, the names are just placeholders, used in order to explain the problem/question. What do you think about this approach?
Your class is so complex that you need three different test classes to test all its aspects? You have probably mixed too many concerns into a single class. I would suggest refactoring it, using proven design patterns, into separate classes with orthogonal concerns that can then be tested individually.
One thing you might want to consider: if you keep your test code package structure the same as your main code package structure (even while using different physical directories, as you are currently doing), your test classes will be in the same logical package as your main classes. This means they gain access to default/protected members of the tested classes, which is often helpful. You'd have to get rid of the ComplexClass directory in the test tree to make that happen.
Another thing to consider (I'll assume you're testing with JUnit): test classes are classes, so you can organize and structure them using inheritance. If you have three different points of view, you could extract a base class containing the common functionality; this will make your tests easier to maintain in the long run, especially as more "points of view" are discovered.
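A minimal sketch of that structure, reusing ComplexClass from the question (the fixture details are invented):

import org.junit.Before;

// Shared fixture for all "points of view"; each view extends this base.
public abstract class AbstractComplexClassTest {

    protected ComplexClass subject;

    @Before
    public void createSubject() {
        subject = new ComplexClass();
        // common wiring for every point of view goes here
    }
}

// FirstPointOfViewTest.java then only contains what is specific to that view:
// public class FirstPointOfViewTest extends AbstractComplexClassTest { ... }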
Separating the source and test code as you are already doing is a great idea: it gives you more options for building, and it maintains a logical grouping which makes maintenance more straightforward.
I'd keep what you currently have. The main advantage of this Maven structure is that, rather than mixing source and test code together and trying to identify which classes to exclude from your build, you just ignore the test directory entirely. The point of using the same package is to expose protected methods/variables to your test classes, but not to the public API.
One thing I might suggest, which I picked up at a talk by John Smart on Test-Driven Development, is to name your test classes after the group of functionality they are testing, so you just have FirstPointOfView.java testing the behaviour of your first point of view of the com.foo package. This approach should make it more obvious when you can split a test class into individual classes, if they are actually testing different sets of behaviour.
Edit: if ComplexClass is a directory, you should drop it so that your tests are in the same package; I think I may have misread your example tree.

Separation of JUnit classes into special test package?

I am learning the concepts of Test-Driven Development through reading the Craftsman articles (click Craftsman under By Topic) recommended in an answer to my previous question, "Sample project for learning JUnit and proper software engineering". I love it so far!
But now I want to sit down and try it myself. I have a question that I hope will need only a simple answer.
How do you organize your JUnit test classes and your actual code? I'm talking mainly about the package structure, but any other concepts of note would be helpful too.
Do you put test classes in org.myname.project.test.* and normal code in org.myname.project.*? Do you put the test classes right alongside the normal classes? Do you prefer to prefix the class names with Test rather than suffix them?
I know this seems like the kind of thing I shouldn't worry about so soon, but I am a very organization-centric person. I'm almost the kind of person that spends more time figuring out methods to keep track of what to get done, rather than actually getting things done.
And I have a project that is currently neatly divided up into packages, but it has nevertheless become a mess. Instead of trying to refactor everything and write tests, I want to start fresh, tests first and all. But first I need to know where my tests go.
edit: I totally forgot about Maven, but it seems a majority of you are using it! In the past I had a specific use case where Maven completely broke down on me but Ant gave me the flexibility I needed, so I ended up attached to Ant, but I'm thinking maybe I was just taking the wrong approach. I think I'll give Maven another try because it sounds like it will go well with test-driven development.
I prefer putting the test classes into the same package as the project classes they test, but in a different physical directory, like:
myproject/src/com/foo/Bar.java
myproject/test/com/foo/BarTest.java
In a Maven project it would look like this:
myproject/src/main/java/com/foo/Bar.java
myproject/src/test/java/com/foo/BarTest.java
The main point in this is that my test classes can access (and test!) package-scope classes and members.
As the above example shows, my test classes have the name of the tested class plus Test as a suffix. This helps in finding them quickly -- it's no fun trying to search among a couple of hundred test classes whose names all start with Test...
Update, inspired by @Ricket's comment: this way the test classes (typically) show up right after their tested buddy in a project-wide alphabetical listing of class names. (Funny that I have been benefiting from this day by day without having consciously realized how...)
Update 2: A lot of developers (including myself) like Maven, but there seem to be at least as many who don't. IMHO it is very useful for "mainstream" Java projects (I would put about 90% of projects into this category... but the other 10% is still a sizeable minority). It is easy to use if one can accept the Maven conventions; if not, it makes life a miserable struggle. Maven seems to be difficult to comprehend for many people socialized on Ant, as it apparently requires a very different way of thinking. (Myself, having never used Ant, I can't compare the two.) One thing is for sure: it makes unit (and integration) testing a natural, first-class step in the process, which helps developers adopt this essential practice.
I put my test classes in the same package as what they are testing but in a different source folder or project. Organizing my test code in this fashion allows me to easily compile and package it separately so that production jar files do not contain test code. It also allows the test code to access package private fields and methods.
I use Maven. The structure that Maven promotes is:
src/main/java/org/myname/project/MyClass.java
src/test/java/org/myname/project/TestMyClass.java
i.e. a test class, with Test prepended to the name of the class under test, sits in a directory structure parallel to the main source tree.
One advantage of having the test classes in the same package (though not necessarily the same directory) is that you can leverage package-scope methods to inspect the tested class or inject mock test objects.
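As a quick sketch of that advantage, reusing the Bar/BarTest layout from the answers above (the normalize() helper is hypothetical): because BarTest lives in the same package as Bar, it can call a package-private method that is invisible to the public API.

package com.foo; // same package as Bar, though in the test source tree

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class BarTest {

    @Test
    public void normalizeTrimsAndLowercasesInput() {
        Bar bar = new Bar();
        // normalize() is package-private: hidden from the public API,
        // but callable here because BarTest is in the same package.
        assertEquals("abc", bar.normalize(" ABC "));
    }
}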

Exclude individual JUnit Test methods without modifying the Test class?

I'm currently re-using JUnit 4 tests from another project against my code. I obtain them directly from the other project's repository as part of my automated Ant build. This is great, as it ensures I keep my code green against the very latest version of the tests.
However, there is a subset of tests that I never expect to pass on my code. But if I start adding @Ignore annotations to those tests, I will have to maintain my own separate copy of the test implementation, which I really don't want to do.
Is there a way of excluding individual tests without modifying the Test source? Here's what I have looked at so far:
As far as I can see, the Ant JUnit task only allows you to exclude entire test classes, not individual test methods -- so that's no good for me; I need method granularity.
I considered putting together a TestSuite that uses reflection to dynamically find and add all of the original tests, then add code to explicitly remove the tests I don't want to run. But I ditched that idea when I noticed that the TestSuite API doesn't provide a method for removing tests.
I can create my own test classes that extend the original test classes, override the specific tests I don't want to run, and annotate them with @Ignore. I then run JUnit on my subclasses. The downside here is that if new test classes are added to the original project, I won't pick them up automatically; I'll have to monitor the original project for new test classes. This is my best option so far, but it doesn't feel ideal.
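That option would look roughly like this sketch (UpstreamTest and its method name are hypothetical):

import org.junit.Ignore;
import org.junit.Test;

// Run JUnit against this subclass instead of the original UpstreamTest.
public class UpstreamTestOverride extends UpstreamTest {

    @Override
    @Test
    @Ignore("This feature is intentionally unsupported in our code")
    public void testUnsupportedFeature() {
        // never runs; all the inherited tests still do
    }
}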
The only other option I can think of is to run the bad tests anyway and ignore the failures. However, these tests take a while to run (and fail!), so I'd prefer not to run them at all. Additionally, I can't see a way of telling the Ant task to ignore failures on specific test methods (again, I see how you can do it for individual test classes, but not methods).
If you can't touch the original tests at all, you are going to face some serious limitations. Your overriding approach sounds like the best bet, but with a couple of changes:
Build the Ant test run to specifically exclude the superclasses, so that additional classes you don't know about still get run.
You can use the @Rule annotation (new in JUnit 4.7) to know which test is being run and abort it (by returning an empty Statement implementation), rather than overriding specific methods; this gives you more flexibility in deciding whether to avoid a test. The only problem is that you can't stop the @Before methods from running this way, which may be slow. If that is a problem (and you really can't touch the tests), then @Ignore in the overridden method is the only thing I can think of.
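A sketch of that rule-based variant, written against the TestRule interface (the JUnit 4.9+ form of the 4.7 MethodRule idea); the skip list here is a made-up placeholder for whatever configuration you actually use:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class SkipListedTests implements TestRule {

    // Hypothetical skip list; in practice, read this from a config file.
    private final Set<String> skipped =
            new HashSet<String>(Arrays.asList("testUnsupportedFeature"));

    @Override
    public Statement apply(Statement base, Description description) {
        if (skipped.contains(description.getMethodName())) {
            return new Statement() {
                @Override
                public void evaluate() {
                    // deliberately empty: the listed test body never executes
                }
            };
        }
        return base; // all other tests run normally
    }
}

Your subclass would then expose it with a public field annotated @Rule, e.g. @Rule public TestRule skipper = new SkipListedTests();. Whether the @Before methods of skipped tests still run depends on the rule ordering of your JUnit 4 version, as noted above.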
If, however, you can touch those tests, some additional options open up:
You could run them with a custom runner by specifying the @RunWith annotation on the class. This runner would just pass execution over to the standard runner (JUnit4.class) in the original project, but in your project (via a system property or some other mechanism) it would inspect the test name and skip the test. This has the advantage of being the least intrusive, but it is the most difficult to implement (runners are hairy beasts; one of the stated goals of @Rule was to eliminate most of the need to write them).
Another option is to put an assumeThat statement in the test that checks some configuration setting indicating whether the test should run. That would involve injecting code right into the test, which is most likely a deal breaker in anything remotely labeled a "separate project".
It doesn't help you now, but TestNG supports this sort of ability.
OK, this is a rather heavyweight solution, but don't throw things at me if it sounds ridiculous.
The core of JUnit 4 is the org.junit.runner.Runner class and its various subclasses, most importantly org.junit.runners.Suite. These runners determine what the tests are for a given test class, using annotations like @Test and @Ignore.
It's quite easy to create a custom Runner implementation, and normally you would hook it up using the @RunWith annotation on your test classes, but obviously that's not an option for you.
However, in theory you could write your own Ant task, perhaps based upon the standard Ant Junit task, which takes your custom test runner and uses it directly, passing each test class to it in turn. Your runner implementation could use an external config file which specifies which test methods to ignore.
It'd be quite a lot of work, and you'd have to spend time digging around in the prehistoric Ant JUnit codebase to find out how it works. The investment in time may be worth it, however.
It's just a shame that the JUnit Ant task doesn't provide a mechanism for specifying the test Runner; that would be ideal.
A possibility I can think of, given the stated constraints, is bytecode modification. You could keep a list of classes and methods to ignore in a separate file, and patch the bytecode of the test classes as they are loaded to remove those methods altogether.
If I am not mistaken, JUnit uses reflection to find the test methods to execute. A method rename operation would therefore hide these methods before JUnit finds them. Alternatively, each method could be modified to return immediately, without performing any operation.
A library like BCEL can be used to modify the classes as they are loaded.
If you want to run only a subset of the tests, it sounds like that class has more than one responsibility and should be broken down. Alternatively, the test class could be split apart so that the original project keeps all the tests but spreads them across one or more classes (I'm guessing some of the tests are really integration tests that touch the database or network), and you could exclude the class(es) you don't want.
If you can't do any of that, your option of overriding is probably best. Adopt the process that whenever you need to ignore some methods, you extend that class and add the original to your Ant exclude list. That way you exclude what you can't pass, and you will still pull in all new tests (methods you didn't override and new test classes) without modifying your build.
If the unwanted tests are in specific classes/packages, you could use a fileset exclude in Ant to leave them out during the import.
Two options:
Work with the owner of the borrowed tests to extract your subset into a separate class you can both share.
Create your own test class which proxies the test class you want to use. For each method you want to include, have a corresponding method in your class. You'll need to construct an instance of the test class you are calling, and call its before and after methods too if the original has them.

Create a custom JUnit runner based on BlockJUnit4ClassRunner and use it to filter the tests you want in or out.
