I have a question about JUnit testing.
Our JUnit suite is testing various functions that we wrote that interact with our memory system.
The way our system was designed requires it to be static, and therefore initialized before the tests run.
The problem we are having is that subsequent tests are affected by the tests that ran before them, so it is possible (and likely) that we are getting false positives or inaccurate failures.
Is there a way to maintain the order of our JUnit tests, but have the entire system re-initialized between them, as if each test were running against the system from scratch?
The only option we can think of is to write a method that does this and call it at the end of each test, but since there are lots and lots of things that need to be reset this way, I am hoping there is a simpler way to do it.
I've seen problems with tests many times where they depend on each other (sometimes deliberately!).
First, you need to set up a setUp method:
@Before
public void setUp() {
    // Clear, reset, etc. all of your static data here.
}
This is automatically run by JUnit before each test and will reset the environment. You can add an @After method as well, but resetting in @Before is better for ensuring a clean starting point, even if a previous test failed partway through.
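As a concrete illustration, here is a minimal sketch; MemorySystem and its reset/store/read/size methods are hypothetical stand-ins for whatever static state your system holds:

import static org.junit.Assert.assertEquals;
import org.junit.Before;
import org.junit.Test;

public class MemorySystemTest {

    @Before
    public void setUp() {
        // Wipe all static state so every test starts from scratch
        MemorySystem.reset();
    }

    @Test
    public void storeThenRead() {
        MemorySystem.store("key", 42);
        assertEquals(42, MemorySystem.read("key"));
    }

    @Test
    public void startsEmpty() {
        // Passes regardless of test order, because setUp ran again before this test
        assertEquals(0, MemorySystem.size());
    }
}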
JUnit does not actually guarantee the order in which tests run (by default it is deterministic, but not necessarily the order they appear in the test class), so the order should never be assumed, and it's a really bad idea to base your code on it.
Go back to the documentation if you need more information.
The approach I took to this kind of problem was to do partial reinitialization before each test. Each test knows the preconditions that it requires, and the setup ensures that they are true. Not sure if this will be relevant for you. Relying on order often ends up being a continuing PITA - being able to run tests by themselves is better.
Oh yeah - there's one "test" that's run at the beginning of the suite that's responsible for static initialization.
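In JUnit 4 that one-time step can be expressed directly with @BeforeClass instead of a fake test; here is a minimal sketch, where MemorySystem.initialize() and MemorySystem.reset() are hypothetical stand-ins for your own static setup:

import org.junit.Before;
import org.junit.BeforeClass;

public class MemorySystemSuiteTest {

    @BeforeClass
    public static void initOnce() {
        // Runs once, before any test in this class: the expensive static initialization
        MemorySystem.initialize();
    }

    @Before
    public void resetState() {
        // Runs before every test: restore the preconditions each test expects
        MemorySystem.reset();
    }
}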
You might want to look at TestNG, which supports test dependencies for this kind of functional testing (JUnit is a unit testing framework):
@Test
public void f1() {}

@Test(dependsOnMethods = "f1")
public void f2() {}
I am creating unit tests in Java, and for each test method I create the same lists, variables, etc. I thought I could make all of these variables fields of the test class and set their values in the setUp() method (annotated with @Before), but I am not sure whether the values might be changed while the tests run, due to multithreading, etc. So, what is the best way to handle this situation?
Nothing to worry about. JUnit will create a new instance of your test class, and then run each @Before method, and only then run the @Test method, and it does that song and dance routine all over again for every @Test annotated method in that class. You're using @Before exactly as it was intended: it's for storing initialization code that is required for all the tests in that test class.
JUnit does it this way because 'test independence' is nice to have: tests preferably fail or pass independently of the order in which you execute them.
Every so often the init process is so expensive that it's not worth it to pay the 'cost' of running it over and over again for every test. The annotation @BeforeClass exists specifically for that purpose. The javadoc of @BeforeClass even spells out that this compromises test independence and should therefore only be used if the setup work you do within such a method is sufficiently expensive (computationally or otherwise) to make that tradeoff.
In other words:
Your worries about test independence are real, but they apply to @BeforeClass. @Before doesn't suffer from this problem; that code is re-run for every test.
NB: You can toss all this out the window if you have static stuff going on. Don't have static stuff in test code unless you really know what you're doing. I assume you don't have that in which case - carry on, your tests are independent.
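To see that lifecycle in action, here is a minimal sketch (the class and field names are illustrative): the second test passes no matter which test runs first, because JUnit rebuilt the instance and re-ran @Before.

import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.List;
import org.junit.Before;
import org.junit.Test;

public class LifecycleDemoTest {

    private List<String> items; // instance field: recreated for every test

    @Before
    public void setUp() {
        items = new ArrayList<>();
    }

    @Test
    public void firstTestMutatesTheList() {
        items.add("a");
        assertEquals(1, items.size());
    }

    @Test
    public void secondTestStillSeesAFreshList() {
        // A new test-class instance plus a fresh setUp() means no leakage from the other test
        assertEquals(0, items.size());
    }
}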
I am a learner at writing JUnit test cases. The pattern I have seen is that we usually make a test class for each class individually, named after it, and write test cases for each method of that class in its respective test class, so that maximum code coverage can be achieved.
What I was thinking is that making test cases for my feature would be a better choice, because then, however many method signatures change in the future, I wouldn't have to change or recreate test cases for those modified or newly created methods. I would still have a fixed set of test cases for the developed feature, and as long as those test cases run fine for the feature, I can be reasonably sure, with a minimum amount of test code, that everything is fine.
By keeping to this, I don't have to write test cases for each and every method of each class. Is it a good way?
Well, test cases are written for a reason: each and every method has to work properly, as expected. If you only write test cases at the feature level, how do you find exactly where an error occurred, and how confidently can you ship your code to the next level?
The better approach is to write unit test cases for each class and then an integration test to make sure everything works well together.
We found success in utilizing both. By default we use one test class per class. But when particular use cases come up, e.g. use cases that involve multiple classes, or use cases where the existing boilerplate testing code prevents the use case from being properly tested, then we create a test class for that scenario.
By keeping to this, I don't have to write test cases for each and every method of each class. Is it a good way?
In this case you write only integration tests and no unit tests.
Writing tests for use cases is really nice, but it is not enough, because it is hard to cover all cases of all methods invoked in an integration test: there may be a very large number of branches, while covering them is much easier in a unit test.
Besides, a use-case test may succeed for the wrong reasons: thanks to side effects between the multiple methods invoked.
By writing a unit test you protect yourself against this kind of issue.
Ultimately, unit and integration tests are not opposed but complementary. So you have to write both to get a robust application.
Let's say I have a JUnit class called Test.class. Test.class has around 50 JUnit tests, and in 30 of them this line of code always appears:
Note: I'm using Mockito/PowerMock
when(ConnectionHandler.getConnection()).thenReturn(connection);
I'm planning to create a utility class called TestUtils.class and create a static helper method for the line above, like:
public static void stubConnection(Connection connection) {
    when(ConnectionHandler.getConnection()).thenReturn(connection);
}
So instead of writing down when(ConnectionHandler.getConnection()).thenReturn(connection); every time, I could just go for TestUtils.stubConnection(connection);
Is this advised? I just see a lot of repetitive code in my JUnit tests. If it helps, the class I'm really testing has very low cohesion and is very tightly coupled.
Is this advised? I just see a lot of repetitive code in my JUnit tests.
Absolutely. The fact that this is a unit test is (almost) not relevant, it's still code that you or someone else has to maintain. Encapsulating it into a util or service class is definitely a step in the right direction.
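For what it's worth, a sketch of that utility under one plausible reading of the question (ConnectionHandler.getConnection() being a static method mocked through PowerMock; Connection and ConnectionHandler are the question's own types):

import org.powermock.api.mockito.PowerMockito;

public final class TestUtils {

    private TestUtils() {} // static utility, not meant to be instantiated

    // Assumes the calling test is annotated with @RunWith(PowerMockRunner.class) and
    // @PrepareForTest(ConnectionHandler.class), and has already called
    // PowerMockito.mockStatic(ConnectionHandler.class) in its @Before method.
    public static void stubConnection(Connection connection) {
        PowerMockito.when(ConnectionHandler.getConnection()).thenReturn(connection);
    }
}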
We are following the practices below to write JUnit tests for our methods.
Each method has its own test class, which holds all the tests required for that method, e.g. class Test {...}.
@Before consists of prerequisite setup for the methods, such as creating an "Entity", so that when we test Edit we don't need to copy/paste the code for adding an entity into each test method.
Now here my question is: should we delete all the data we entered, by writing code that trashes the test data in an @After method, or just let it be?
I know we can make it configurable, but what is the best practice: keep it or delete it? My gut feeling is that deleting is better, since duplicate data already in the DB may trigger false positives or false negatives.
It depends on how much you adhere to the Don't Repeat Yourself principle. It's also worth remembering that you have @After called after each @Test and @AfterClass called after all the @Tests have run. With this granularity, it should be simple to remove duplication but still split those tasks that should only run at the very end, or after each test.
As a best practice I would recommend clearing your data storage between every test, to guarantee that each test is isolated from the others.
This could be done in the @After method if you want to keep some of the settings alive (from @BeforeClass, for example). It could also be done in the @Before method, for example by overriding variables with a new instance for every test; if you do so, you do not need a clean-up after the tests.
To clean up the settings made in the @BeforeClass method you should use @AfterClass, for example to close a database connection or something similar that only needs to be done once. But this is not needed for every kind of unit test.
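Putting the whole lifecycle together, a sketch might look like this; the in-memory H2 URL and the entity table are assumptions for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;

public class EntityDaoTest {

    private static Connection connection;

    @BeforeClass
    public static void openConnection() throws Exception {
        // Expensive shared setup: done once for the whole test class
        connection = DriverManager.getConnection("jdbc:h2:mem:testdb");
        connection.createStatement().execute(
                "CREATE TABLE entity (id INT PRIMARY KEY, name VARCHAR(50))");
    }

    @Before
    public void resetData() throws Exception {
        // Per-test isolation: clear the table and re-insert the baseline entity
        connection.createStatement().execute("DELETE FROM entity");
        connection.createStatement().execute(
                "INSERT INTO entity (id, name) VALUES (1, 'baseline')");
    }

    @AfterClass
    public static void closeConnection() throws Exception {
        // Shared teardown: release the resource opened in @BeforeClass
        connection.close();
    }
}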
I have not used JUnit before and have not done automated unit testing.
Scenario:
We are changing our backend DAOs from SQL Server to Oracle, so on the DB side all the stored procedures were converted to Oracle. Now, when our code calls these new Oracle stored procedures, we want to make sure that the data returned is the same as what the SQL Server stored procedures returned.
So, for example, I have the following methods in a DAO:
// Old method: gets data from SQL Server
public IdentifierBean getHeadIdentifiers_old(String head) {
    Map<String, Object> parmMap = new HashMap<>();
    parmMap.put("head", head);
    List result = getSqlMapClientTemplate().queryForList("Income.getIdentifiers", parmMap);
    return (IdentifierBean) result.get(0);
}

// New method: gets data from Oracle (the result list comes back in the parameter map)
public IdentifierBean getHeadIdentifiers(String head) {
    Map<String, Object> parmMap = new HashMap<>();
    parmMap.put("head", head);
    getSqlMapClientTemplate().queryForObject("Income.getIdentifiers", parmMap);
    return (IdentifierBean) ((List) parmMap.get("Result0")).get(0);
}
Now I want to write a JUnit test method that first calls getHeadIdentifiers_old and then getHeadIdentifiers and compares the objects returned (I will have to override equals and hashCode in IdentifierBean). The test passes only when both objects are equal.
In the test method I will have to provide a parameter (head in this case) for the two methods; this will be done manually for now. Yes, from the front end the parameters could be different, and the SPs might not return exactly the same results for those parameters, but I think having these test cases will give us some confidence that they return the same data...
My questions are:
* Is this a good approach?
* I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
* (Might be a n00b question.) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff just so the call to the DAO gets triggered.
* When tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
* Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
Okay, let's see what can be done...
Is this a good approach?
Not really, since instead of having one obsolete code path with somewhat known functionality, you now have two code paths with unequal and unpredictable functionality. Usually one would create thorough unit tests for the legacy code first and then refactor the original method, to avoid an incredibly large amount of rework - what if some part of the jungle of code forming the huge application keeps calling the old method while other parts call the new one?
However, working with legacy code is never optimal, so what you're thinking of may be the best solution.
I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
Assuming you've gone properly OO with your program structure, where each class does one thing and one thing only, then yes, you should make a separate class containing the test cases for each individual class. What you're looking for here is mock objects (search for the term on SO and Google in general; lots of info available), which help you decouple your class under test from other classes. Interestingly, a high number of mocks in a unit test usually means that your class could use some heavy refactoring.
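As an illustration of that decoupling (everything below is hypothetical; the real code would use the actual template and bean types), a Mockito-based test might look like this:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.Test;

public class IncomeDaoTest {

    // Hypothetical collaborator standing in for the real query layer
    interface QueryTemplate {
        String queryForObject(String statement, String parameter);
    }

    // Minimal DAO that depends only on the interface, so it can be tested in isolation
    static class IncomeDao {
        private final QueryTemplate template;
        IncomeDao(QueryTemplate template) { this.template = template; }
        String getIdentifier(String head) {
            return template.queryForObject("Income.getIdentifiers", head);
        }
    }

    @Test
    public void returnsWhatTheQueryLayerProvides() {
        // The mock replaces the live database dependency
        QueryTemplate template = mock(QueryTemplate.class);
        when(template.queryForObject("Income.getIdentifiers", "head1")).thenReturn("id-1");

        IncomeDao dao = new IncomeDao(template);

        assertEquals("id-1", dao.getIdentifier("head1"));
    }
}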
(might be n00b question) will all the test cases be ran automatically? I do not want to go to the front end click bunch of stuff so that call to the DAO gets triggered.
All IDEs allow you to run all the JUnit tests at the same time; for example, in Eclipse just click the source folder/top package and choose Run As -> JUnit Test. Also, when running an individual class, all the unit tests contained within are run in the proper JUnit flow (setUp() -> testX() -> tearDown()).
When tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
Yes; part of Test-Driven Development is the mantra Red-Green-Refactor, which refers to the colored bar IDEs show for unit tests. Basically, if any test in the suite fails, the bar is red; if all pass, it's green. Additionally, for JUnit there's also blue on individual tests to mark assertion failures.
Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
I'm quite sure there's going to be multiple of these in the answers soon, just hang on :)
You'll write a test class.
public class OracleMatchesSqlServer extends TestCase {
    public void testHeadIdentifiersShouldBeEqual() throws Exception {
        String head = "whatever your head should be";
        YourClass dao = new YourClass();
        IdentifierBean originalBean = dao.getHeadIdentifiers_old(head);
        IdentifierBean oracleBean = dao.getHeadIdentifiers(head);
        assertEquals(originalBean, oracleBean);
    }
}
You might find you need to parameterize this on head; that's straightforward.
Update: It looks like this:
public class OracleMatchesSqlServer extends TestCase {
    public void testHeadIdentifiersShouldBeEqual() throws Exception {
        compareIdentifiersWithHead("head1");
        compareIdentifiersWithHead("head2");
        compareIdentifiersWithHead("etc");
    }

    private static void compareIdentifiersWithHead(String head) {
        YourClass dao = new YourClass();
        IdentifierBean originalBean = dao.getHeadIdentifiers_old(head);
        IdentifierBean oracleBean = dao.getHeadIdentifiers(head);
        assertEquals(originalBean, oracleBean);
    }
}
* Is this a good approach?
Sure.
* I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
Try it with a separate test class for each DAO; if that gets too tedious, try it the other way and see what you like best. It's probably more helpful to have the finer granularity of separate test classes, but your mileage may vary.
* (Might be a n00b question.) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff just so the call to the DAO gets triggered.
Depending on your environment, there will be ways to run all the tests automatically.
* When tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
Yes and yes.
* Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
I really like Dave Astels' book.
Another useful introduction to writing and maintaining large unit test suites is this book (which is partially available online):
xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros
The book is organized in 3 major parts. Part I consists of a series of introductory narratives that describe some aspect of test automation using xUnit. Part II describes a number of "test smells" that are symptoms of problems with how we are automating our tests. Part III contains descriptions of the patterns.
Here's a quick yet fairly thorough intro to JUnit.