Why might the name of a test method influence other tests?
I have a suite with two test classes, and when I change the name of a method in class1, my test in class2 passes (green).
I noticed that both classes have a method with the same name, but the test that is failing is neither of these. However, if I rename either of them, all tests pass.
Is it okay to have two methods with the same name in different classes but in the same suite? And is the fact that another test fails seemingly at random just a coincidence?
PS: the order in which the tests are run changes after I rename that method.
PS2: sorry for my bad English.
This picture explains my question better:
There is no bug in JUnit! Our team experienced similar results, which were caused by improper resource management. Try renaming your failing tests so they are executed first; they should turn green. That is usually a sign that a resource is accidentally shared between tests. In that case, free the resource in the tear-down method (@After).
Here is a little checklist to find the cause:
Are there threads that survive a test?
Are all Executors shutdown and terminated?
Are files or streams still open after a test?
Are all fields in the Test-class cleared/reinitialised after a test?
Avoid using static references or singletons
Don't free resources in your test method, only in the tear-down method. Otherwise an exception could make that piece of code unreachable.
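As a minimal sketch of that last point, assuming JUnit 4 and a hypothetical ExpensiveResource class (the field and method names are only illustrative):

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class ResourceCleanupTest {

    // hypothetical shared resource; replace with whatever your tests really open
    private ExpensiveResource resource;

    @Before
    public void setUp() {
        resource = new ExpensiveResource();   // fresh instance for every test
    }

    @After
    public void tearDown() {
        // runs even if the test threw an exception, so the resource cannot leak into the next test
        if (resource != null) {
            resource.close();
            resource = null;
        }
    }

    @Test
    public void doesSomething() {
        assertTrue(resource.isOpen());        // hypothetical method, stands in for a real assertion
    }
}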
I am a learner at writing JUnit test cases. I have seen the common pattern where we create a test class for each class individually, named after it, and write test cases for each method of that class in its respective test class, so that maximum code coverage is achieved.
I was thinking that writing test cases for my feature would be the better choice, because then, however many method signatures change in the future, I don't have to modify or recreate test cases for the modified or newly created methods. At that point I would have a fixed set of test cases for my developed feature, and if those test cases run fine for a particular feature, I can be sure with a minimum number of test cases that everything is fine.
This way I don't have to write test cases for each and every method of each class. Is it a good approach?
Well, test cases are written for a reason. Each and every method has to work properly, as expected. If you only write test cases at the feature level, how do you find exactly where an error occurred, and how confidently can you ship your code to the next stage?
The better approach is to write unit tests for each class and an integration test to make sure everything works together.
We found success in utilizing both. By default we use one test class per class. But when particular use cases come up, e.g. use cases that involve multiple classes, or use cases where the existing boilerplate testing code prevents the use case from being properly tested, we create a test class for that scenario.
This way I don't have to write test cases for each and every method of each class. Is it a good approach?
In this case you write only integration tests and no unit tests.
Writing tests for use cases is really nice, but it is not enough: it is hard to cover all cases of all the methods invoked in an integration test, because there may be a very large number of branches, whereas covering them in a unit test is much easier.
Besides, a use-case test may succeed for bad reasons, thanks to side effects between the multiple methods invoked.
By writing an unit test you protect yourself against this kind of issue.
Definitely, unit and integration tests are not opposed but complementary. So you have to write both to get a robust application.
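A rough sketch of the difference, where PriceCalculator and CheckoutService are made-up classes and the two test classes would normally live in separate files:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// PriceCalculatorTest.java -- a unit test per class, covering its branches directly.
public class PriceCalculatorTest {

    @Test
    public void discountIsAppliedAboveThreshold() {
        assertEquals(90.0, new PriceCalculator().total(100.0, true), 0.001);
    }

    @Test
    public void noDiscountBelowThreshold() {
        assertEquals(100.0, new PriceCalculator().total(100.0, false), 0.001);
    }
}

// CheckoutFeatureTest.java -- a feature/integration test, exercising the whole use case.
public class CheckoutFeatureTest {

    @Test
    public void checkoutProducesAnOrderWithTheDiscountedTotal() {
        assertEquals(90.0, new CheckoutService().checkout("cart-1").getTotal(), 0.001);
    }
}

The feature test tells you the flow works end to end; the unit tests tell you which class broke when it doesn't.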
This question already has answers here:
Difference between @Before, @BeforeClass, @BeforeEach and @BeforeAll
According to this answer, the @Before annotation is executed once before each test, whereas the @BeforeClass annotation is only executed once before all tests.
My intuition tells me to always use @BeforeClass, so the question is, why even use @Before? Is there a case where the @Before annotation performs better/faster than the @BeforeClass annotation?
Each test should be isolated. If you use @BeforeClass, state left over from a previous test could hang around and mess up later tests.
One generally uses @BeforeClass to set up an expensive external resource and @Before to reset the world (be that the external resource or local state).
One final note:
performs better/faster
In this case I would say the two properties should be treated independently here (that is, better != faster). As long as the total time to run your complete test suite is under 10 minutes, then "faster" is simply not important.
Once you go above that magic ten minutes (and at this point you will definitely be talking integration tests, not unit tests), then start looking for ways to make things "faster".
I'll give you an example:
Say you need to validate a form in your web application. You are going to use Selenium for that.
Your test class will have 10 tests. You don't need to open and close your WebDriver browser for every single test, so in that case you initialize your WebDriver browser in a @BeforeClass method.
However, for each validation test you need to reset your form. That action you perform in a @Before method.
This is a very simple example, but it may make things clearer.
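A minimal sketch of that setup, assuming Selenium's FirefoxDriver and a hypothetical local URL for the form:

import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class FormValidationTest {

    private static WebDriver driver;           // expensive: created once for the whole class

    @BeforeClass
    public static void openBrowser() {
        driver = new FirefoxDriver();          // runs once, before all tests
    }

    @AfterClass
    public static void closeBrowser() {
        driver.quit();                         // runs once, after all tests
    }

    @Before
    public void resetForm() {
        // runs before every test: cheap, puts the form back into a known state
        driver.get("http://localhost:8080/form");   // hypothetical URL
    }

    @Test
    public void rejectsEmptyName() {
        // ... fill the form with Selenium and assert on the validation message ...
    }
}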
When you use @BeforeClass, the initialization is done once before all the tests, so one test might end up depending on another (e.g. if tests change the state of fields). I would therefore encourage using @Before, or even better, doing the preparation for a test inside the test method itself.
There are very rarely cases where your tests need exactly the same shared state, so why couple them together?
I'm coming from a Perl background where I used Test::More to handle unit testing. Using that framework, I knew the order in which the tests took place and could rely on that, which I understand is not encouraged with the JUnit framework. I've seen several ways to get around this, but I want to understand the proper/intended way of doing things.
In my Perl unit testing I would build up tests, knowing that if test #3 passed, I could make some assumptions in further tests. I don't quite see how to structure that in the JUnit world so that I can make every test completely independent.
For example, suppose I have a class that parses a date from a string. Methods include:
parse a simple date (YYYY-MM-DD)
parse a simple date with alternate separator (YYYY_MM_DD or YYYY/MM/DD)
parse a date with a string for a month name (YYYY-MON-DD)
parse a date with a string month name in a different language
and so on
I usually write my code to focus as many of the externally-accessible methods into as few core methods as possible, re-using as much code as possible (which is what most of us would do, I'm sure). So, let's say I have 18 different tests for the first method, 9 that are expected to pass and 9 that throw an exception. For the second method, I only have 3 tests, one each with the separators that work ('_' & '/') and one with a separator that doesn't work ('*') which is expected to fail. I can limit myself to the new code being introduced because I already know that the code properly handles the standard boundary conditions and common errors, because the first 18 tests already passed.
In the Perl world, if test #20 fails, I know that it's probably something to do with the specific separator, and is not a general date parsing error because all of those tests have already passed. In the JUnit world, where tests run in a random order, if test #20 fails, I don't know if it's because of a general date parsing issue or because of the separator. I'd have to go and see which other ones failed and then make some assumptions there. That's not too hard to do, of course, but maybe in a bigger, more complex class, it would be more difficult to do.
How do other people deal with building up a set of tests? Should I put each and every test in a separate class and use a test suite? That seems tedious. (And before someone suggests that I put the first 18 in one class and the second 3 in another, and use a test suite for just those groupings, let's pretend that all 18 of the early tests build on each other, too.)
And, again, I know there are ways around this (FixedMethodOrder in JUnit 4.11+ or JUnit-HierarchicalContextRunner), but I want to understand the paradigm as it's intended to be used.
In the JUnit world, where tests run in a random order, if test #20 fails, I don't know if it's because of a general date parsing issue or because of the separator. I'd have to go and see which other ones failed and then make some assumptions there.
Yes that is correct. If something in your code is broken then multiple tests may fail. That is a good thing. Use intent revealing test method names and possibly use the optional String message parameter in the JUnit assertions to explain what exactly failed the test.
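As a sketch of what intent-revealing names and assertion messages might look like for the date parser in the question (DateParser and its parse method returning a LocalDate are assumptions, not a real API):

import java.time.LocalDate;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DateParserSeparatorTest {

    @Test
    public void parsesDateWithUnderscoreSeparator() {
        assertEquals("underscore separator should be accepted",
                LocalDate.of(2023, 4, 5), new DateParser().parse("2023_04_05"));
    }

    @Test
    public void parsesDateWithSlashSeparator() {
        assertEquals("slash separator should be accepted",
                LocalDate.of(2023, 4, 5), new DateParser().parse("2023/04/05"));
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsAsteriskSeparator() {
        new DateParser().parse("2023*04*05");
    }
}

If the underscore test fails but the simple-date tests pass, the report itself tells you the problem is the separator handling, without relying on run order.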
How do other people deal with building up a set of tests? Should I put each and every test in a separate class and use a test suite?
The general convention is one test class per source class. Depending on what build tool you are using, you may or may not need to use test suites. If you are using Ant, you probably need to collect the tests into test suites, but if you are using Maven, the test plugins for maven will find all your test classes for you so you don't need suites.
I also want to point out that you should be coding to Java interfaces as much as possible. If you are testing a class C that depends on an implementation of an interface I, then you should mock your I implementation in your C test class so that C is tested in isolation. Your mock I should follow what the interface is supposed to do. This also keeps the number of failing tests down: if there is a bug in your real I implementation, then only your I tests should fail, and the C tests should still all pass (since you are testing C against a fake but working I implementation).
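A small sketch of that idea using Mockito, where the interface I, the class C, and their lookup/resolve methods are placeholders standing in for your real types:

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class CTest {

    @Test
    public void usesWhateverIReturns() {
        I dependency = mock(I.class);                          // fake but well-behaved I
        when(dependency.lookup("key")).thenReturn("value");    // hypothetical method on I

        C c = new C(dependency);
        assertEquals("value", c.resolve("key"));               // hypothetical method on C
    }
}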
Don't worry about suites yet. You'll know when you need them. I've only had to use them a handful of times, and I'm not entirely sold on their usefulness...but I leave that decision up to you.
To the meat of your question - the conventional way with JUnit tests is to neither know nor depend on the order of execution of your tests; this ensures that your tests are not run-order dependent, and if they are, something is wrong with your tests* and validation.
The main core concept behind unit tests is that they test a unit of code - as simple as a single function. If you're attempting to test five different things at once, your test is far too large, and should be broken out. If the method you're testing is monolithic in nature, and difficult to test, it should be refactored and broken out into different slices of responsibility.
Tests that exercise a larger flow are better suited for integration-style tests, which tend to be written as unit tests, but aren't actually unit tests.
I've never run into a scenario in which knowing that a certain test failed let me expect different behavior from the other tests. I've never thought such a thing needed to be noted, since the only thing I care about in my unit test is how that unit of code behaves given a certain input.
Keep your tests small and simple to understand; the test should only make one assertion about the result (or a general assertion of the state of your result).
*: That's not to say that it's completely broken, but those sorts of tests should be fixed sooner rather than later.
I've been searching for a while but I still can't find a way to do what I want.
I run my unit tests in Eclipse with the "Run as JUnit test" on the whole project.
I implemented a custom RunListener which records some test result information in a singleton. (Just FYI, this is not meant to record only test results; I know we can export JUnit reports for that. It is meant to capture additional information held in custom annotations on the test methods.)
I'd like to persist the singleton information to disk once ALL tests have been executed.
(The keyword being ALL :) )
I know I can override testRunFinished (this is what I do right now),
but this method is called every time all the tests of one single class are executed (so once per test class).
Is it because the Eclipse "Run as" considers each class as a Suite by default?
While it works, it is not really efficient to persist the singleton state thousands of times.
Also, I'd like to give a proper name to the persisted file (like the binary version + date), but I can't right now since I don't know when ALL the tests have been executed. Each time the file is persisted it would create a new file (the date contains milliseconds) even though the tests are not over (and the singleton data is thus incomplete).
Any idea?
Thanks!
In the end, I created a TestSuite with an @AfterClass static method so that my singleton state is persisted only once.
This means I have to add all my test classes in the Suite.
Fortunately, thanks to JavaRocky, I dynamically included all my test classes with a custom annotation.
See answer here: How do I Dynamically create a Test Suite in JUnit 4?.
If someone has an easier way to accomplish an @AfterAllTestRuns ... let me know!
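A sketch of that suite approach, assuming JUnit 4; FirstTest, SecondTest, and ResultsSingleton are placeholders for the real test classes and the singleton from the question:

import org.junit.AfterClass;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Running this class (instead of the whole project) gives a single, well-defined end point.
@RunWith(Suite.class)
@Suite.SuiteClasses({ FirstTest.class, SecondTest.class })   // or build the list dynamically, as described above
public class AllTestsSuite {

    @AfterClass
    public static void persistCollectedResults() {
        // called exactly once, after every class in the suite has run
        ResultsSingleton.getInstance().persistToDisk();       // hypothetical singleton API
    }
}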
We are following below practices to write JUnit tests for our methods.
Each method has its own test class which holds all the tests required for that method, e.g. class test {...}.
@Before consists of the prerequisite setup for the methods, like creating an "Entity", so that when we test Edit we don't need to copy/paste the code for adding an entity in each test method.
Now my question is: should we delete all the data we entered, by writing code that removes the test data in an @After method, or just leave it be?
I know we can make it configurable, but what is the best practice: keep it or delete it? My gut feeling is that deleting is better, because duplicate data already in the DB may produce false positives or false negatives.
It depends on how much you adhere to the Don't Repeat Yourself principle. It's also worth remembering that you have @After called after each @Test and @AfterClass called after all the @Test methods have run. With this granularity, it should be simple to remove duplication but still split out those tasks that should only run at the very end from those that run after each test.
As a best practice I would recommend to clear your data storage between every test, to guarantee each test is isolated from other tests.
This could be done in the @After method if you want to keep some of the setup alive (from @BeforeClass, for example). It could also be done in the @Before method, for example by overriding variables with a new instance for every test; if you do that, you do not need a clean-up after the tests.
To clean up the setup done in the @BeforeClass method you should use @AfterClass, for example to close a database connection or something similar that only needs to be done once. But this is not needed for every kind of unit test.
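Putting those pieces together, here is one possible sketch using plain JDBC; the in-memory H2 URL and the "entity" table are assumptions for illustration, and the schema creation is omitted:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class EntityEditTest {

    private static Connection connection;     // expensive, shared for the whole class

    @BeforeClass
    public static void openConnection() throws SQLException {
        connection = DriverManager.getConnection("jdbc:h2:mem:testdb");  // hypothetical in-memory DB
    }

    @Before
    public void insertTestEntity() throws SQLException {
        // assumes the entity table already exists
        connection.createStatement().execute("INSERT INTO entity(id, name) VALUES (1, 'test')");
    }

    @After
    public void deleteTestData() throws SQLException {
        // remove everything this test created, so no duplicates leak into the next test
        connection.createStatement().execute("DELETE FROM entity");
    }

    @AfterClass
    public static void closeConnection() throws SQLException {
        connection.close();                    // done once, after all tests
    }

    @Test
    public void editUpdatesTheName() throws SQLException {
        // ... exercise the edit and assert on the row ...
    }
}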