Is my way of reusing methods in JUnit tests bad? - java

Let's say I have a JUnit class called Test.class. Test.class has around 50 JUnit tests, and in 30 of them this line of code appears:
Note: I'm using Mockito/PowerMock
when(ConnectionHandler.getConnection()).thenReturn(connection);
I'm planning to create a utility class called TestUtils.class and add a static helper method for the line above, like:
public static void stubConnection(Connection connection) {
    when(ConnectionHandler.getConnection()).thenReturn(connection);
}
So instead of writing when(ConnectionHandler.getConnection()).thenReturn(connection); every time, I could just call TestUtils.stubConnection(connection);
Is this advised? I just see a lot of repetitive code in my JUnit tests. If it helps, the class I'm really testing has very low cohesion and is very tightly coupled.

Is this advised? I just see a lot of repetitive code in my JUnit tests.
Absolutely. The fact that this is a unit test is (almost) not relevant; it's still code that you or someone else has to maintain. Encapsulating it into a util or service class is definitely a step in the right direction.
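For example, a minimal sketch of such a helper, assuming ConnectionHandler.getConnection() is a static method mocked via PowerMock (as the question's snippet suggests; names are illustrative):

import static org.mockito.Mockito.when;

public final class TestUtils {

    private TestUtils() {
        // Utility class; no instances needed.
    }

    // Stubs the mocked ConnectionHandler so getConnection()
    // returns the supplied connection.
    public static void stubConnection(Connection connection) {
        when(ConnectionHandler.getConnection()).thenReturn(connection);
    }
}

Each test class still needs its usual PowerMock plumbing (@RunWith(PowerMockRunner.class), @PrepareForTest(ConnectionHandler.class), and PowerMockito.mockStatic(ConnectionHandler.class)) before calling the helper.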

Related

Should I follow the one-test-class-per-class pattern or the use-case pattern to write test cases?

I am a beginner at writing JUnit test cases. The pattern I have seen is that we make a test class for each class individually, named after it, and write test cases for each method of that class in its respective test class, so that maximum code coverage is achieved.
What I was thinking is that making test cases for my feature would be the better choice, because then, however many method signatures change in the future, I don't have to change or recreate test cases for the modified or newly created methods: I would already have test cases for the feature I developed. So if my test cases run fine for a particular feature, I can be sure, with a minimal number of test cases, that everything is fine.
This way I don't have to write test cases for each and every method of each class. Is it a good way?
Well, test cases are written for a reason: each and every method has to work properly, as expected. If you only write test cases at the feature level, how do you find exactly where an error occurred, and how confidently can you ship your code to the next stage?
The better approach is to write unit test cases for each class and do an integration test to make sure everything works together.
We found success in utilizing both. By default we use one test class per class. But when particular use cases come up, e.g. use cases that involve multiple classes, or use cases where the existing boilerplate testing code prevents the use case from being properly tested, we create a test class for that scenario.
This way I don't have to write test cases for each and every method of each class. Is it a good way?
In this case you write only integration tests and no unit tests.
Writing tests for use cases is really nice, but it is not enough: it is hard to cover all cases of all methods invoked in an integration test, because there may be a very large number of branches, while it is much easier in a unit test.
Besides, a use-case test may be successful for bad reasons, thanks to side effects between the multiple methods invoked.
By writing a unit test you protect yourself against this kind of issue.
Definitely, unit and integration tests are not opposed but complementary. You have to write both to get a robust application.
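Here is a contrived sketch of the "successful for bad reasons" case, with made-up names: two defects that cancel each other out along the use-case path but are caught by a unit test of the individual method.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    static class PriceCalculator {
        // Defect: adds a 10% surcharge that should not be there.
        int basePrice(int amount) {
            return amount + amount / 10;
        }

        // Defect: subtracts 10% again, accidentally masking the first one.
        int finalPrice(int amount) {
            return basePrice(amount) - amount / 10;
        }
    }

    // The use-case test passes even though both methods are wrong.
    @Test
    public void finalPriceIsUnchangedForWholeAmounts() {
        assertEquals(100, new PriceCalculator().finalPrice(100));
    }

    // The unit test of the single method exposes the defect (fails: 110).
    @Test
    public void basePriceIsTheAmountItself() {
        assertEquals(100, new PriceCalculator().basePrice(100));
    }
}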

How to build up test cases in JUnit?

I'm coming from a Perl background where I used Test::More to handle unit testing. Using that framework, I knew the order in which the tests took place and could rely on that, which I understand is not encouraged with the JUnit framework. I've seen several ways to get around this, but I want to understand the proper/intended way of doing things.
In my Perl unit testing I would build up tests, knowing that if test #3 passed, I could make some assumptions in further tests. I don't quite see how to structure that in the JUnit world so that I can make every test completely independent.
For example, suppose I have a class that parses a date from a string. Methods include:
parse a simple date (YYYY-MM-DD)
parse a simple date with alternate separator (YYYY_MM_DD or YYYY/MM/DD)
parse a date with a string for a month name (YYYY-MON-DD)
parse a date with a string month name in a different language
and so on
I usually write my code to focus as many of the externally-accessible methods into as few core methods as possible, re-using as much code as possible (which is what most of us would do, I'm sure). So, let's say I have 18 different tests for the first method, 9 that are expected to pass and 9 that throw an exception. For the second method, I only have 3 tests, one each with the separators that work ('_' & '/') and one with a separator that doesn't work ('*') which is expected to fail. I can limit myself to the new code being introduced because I already know that the code properly handles the standard boundary conditions and common errors, because the first 18 tests already passed.
In the Perl world, if test #20 fails, I know that it's probably something to do with the specific separator, and is not a general date parsing error because all of those tests have already passed. In the JUnit world, where tests run in a random order, if test #20 fails, I don't know if it's because of a general date parsing issue or because of the separator. I'd have to go and see which other ones failed and then make some assumptions there. That's not too hard to do, of course, but maybe in a bigger, more complex class, it would be more difficult to do.
How do other people deal with building up a set of tests? Should I put each and every test in a separate class and use a test suite? That seems tedious. (And before someone suggests that I put the first 18 in one class and the other 3 in another, and use a test suite for just those groupings, let's pretend that all 18 of the early tests build on each other, too.)
And, again, I know there are ways around this (FixedMethodOrder in JUnit 4.11+ or JUnit-HierarchicalContextRunner), but I want to understand the paradigm as it's intended to be used.
In the JUnit world, where tests run in a random order, if test #20 fails, I don't know if it's because of a general date parsing issue or because of the separator. I'd have to go and see which other ones failed and then make some assumptions there.
Yes, that is correct. If something in your code is broken, then multiple tests may fail. That is a good thing. Use intent-revealing test method names, and possibly the optional String message parameter in the JUnit assertions, to explain exactly what failed the test.
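Applied to the date parser, that might look like the following; DateParser and its API are hypothetical stand-ins for your class:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DateParserTest {

    private final DateParser parser = new DateParser(); // hypothetical class under test

    // The method name states the behaviour being checked; the assertion
    // message pinpoints the failure without relying on test order.
    @Test
    public void underscoreSeparatedDateParsesLikeDashSeparated() {
        assertEquals("underscore separator should behave like a dash",
                parser.parse("2014-01-31"),
                parser.parse("2014_01_31"));
    }
}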
How do other people deal with building up a set of tests? Should I put each and every test in a separate class and use a test suite?
The general convention is one test class per source class. Depending on what build tool you are using, you may or may not need to use test suites. If you are using Ant, you probably need to collect the tests into test suites, but if you are using Maven, the test plugins for maven will find all your test classes for you so you don't need suites.
I also want to point out that you should be coding to Java interfaces as much as possible. If you are testing a class C that depends on an implementation of interface I, then you should mock your I implementation in your C test class so that C is tested in isolation. Your mock I should follow what the interface is supposed to do. This also keeps the number of failing tests down: if there is a bug in your real I implementation, then only your I tests should fail; the C tests should still all pass, since you are testing C against a fake but working I implementation.
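A minimal sketch of that isolation with Mockito; the C and I here are hypothetical stand-ins:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.Test;

public class CTest {

    interface I {
        String lookup(int id);
    }

    static class C {
        private final I dependency;
        C(I dependency) { this.dependency = dependency; }
        String describe(int id) { return "user: " + dependency.lookup(id); }
    }

    @Test
    public void describePrefixesTheLookedUpName() {
        // A fake but working I: the mock honours the interface contract for this call.
        I i = mock(I.class);
        when(i.lookup(1)).thenReturn("James");

        // C is tested in isolation; a bug in the real I cannot fail this test.
        assertEquals("user: James", new C(i).describe(1));
    }
}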
Don't worry about suites yet. You'll know when you need them. I've only had to use them a handful of times, and I'm not entirely sold on their usefulness...but I leave that decision up to you.
To the meat of your question - the conventional way with JUnit tests is to neither know nor depend on the order of execution of your tests; this ensures that your tests are not run-order dependent, and if they are, something is wrong with your tests* and validation.
The main core concept behind unit tests is that they test a unit of code - as simple as a single function. If you're attempting to test five different things at once, your test is far too large, and should be broken out. If the method you're testing is monolithic in nature, and difficult to test, it should be refactored and broken out into different slices of responsibility.
Tests that exercise a larger flow are better suited for integration-style tests, which tend to be written as unit tests, but aren't actually unit tests.
I've not run into a scenario in which, if I knew that if a certain test failed, I could expect different behavior in the other tests. I've never thought that such a thing was necessary to be noted, since the only thing I care about in my unit test is how that unit of code behaves given a certain input.
Keep your tests small and simple to understand; the test should only make one assertion about the result (or a general assertion of the state of your result).
*: That's not to say that it's completely broken, but those sorts of tests should be fixed sooner rather than later.

What is the best approach for Unit testing when you have interfaces with both dummy & real implementations?

I'm familiar with the basic principles of TDD, namely:
Write tests; these will fail because there is no implementation yet
Write a basic implementation to make the tests pass
Refactor the code
However, I'm a little confused as to where interfaces and implementations fit in. I'm creating a Spring web application in my spare time, and rather than going in guns blazing, I'd like to understand how I can test interfaces/implementations a little better. Take this simple example code I've created:
public class RunMe
{
    public static void main(String[] args)
    {
        // Using a dummy service now, but would have a real implementation later (fetch from DB etc.)
        UserService userService = new DummyUserService();
        System.out.println(userService.getUserById(1));
    }
}

interface UserService
{
    public String getUserById(Integer id);
}

class DummyUserService implements UserService
{
    @Override
    public String getUserById(Integer id)
    {
        return "James";
    }
}
I've created the UserService interface; ultimately there will be a real implementation of it that queries a database. However, in order to get the application off the ground, I've substituted a DummyUserService implementation that just returns some static data for now.
Question : How can I implement a testing strategy for the above?
I could create a test class called DummyUserServiceTest and test that when I call getUserById() it returns James. Seems pretty simple, if not a waste of time(?).
Subsequently, I could also create a test class for the RealUserService implementation that would test that getUserById() returns a user's name from the database. This is the part that confuses me slightly: in doing so, does this not essentially overstep the boundary of a unit test and become more of an integration test (with the hit on the DB)?
Question (improved, a little): When using interfaces with dummy/stubbed and real implementations, which parts should be unit tested, and which parts can safely be left untested?
I spent a few hours Googling this topic last night, and mostly found either tutorials on what TDD is or examples of how to use JUnit, but nothing in the realm of advising what should actually be tested. It is entirely possible, though, that I didn't search hard enough or wasn't looking for the right thing...
Don't test the dummy implementations: they won't be used in production. It makes no real sense to test them.
If the real UserService implementation does nothing more than go to the database and get the user name by its ID, then the test should verify that it does that and does it correctly. Call it an integration test if you want, but it's nevertheless a test that should be written and automated.
The usual strategy is to populate the database with minimal test data in the @Before-annotated method of the test, and to have your test method check that for an ID which exists in the database, the corresponding user name is returned.
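A rough sketch of that shape; the names and the fixture mechanism are placeholders for whatever your project uses:

import static org.junit.Assert.assertEquals;
import org.junit.Before;
import org.junit.Test;

public class RealUserServiceTest {

    private RealUserService service; // the hypothetical DB-backed implementation

    @Before
    public void setUp() {
        // Populate the test database with minimal data, e.g. one user row
        // (plain JDBC, a fixture script, or a tool like DbUnit).
        insertTestUser(1, "James");
        service = new RealUserService();
    }

    @Test
    public void returnsTheNameForAnExistingId() {
        assertEquals("James", service.getUserById(1));
    }

    private void insertTestUser(int id, String name) {
        // Omitted: depends on your schema and test database setup.
    }
}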
I would recommend that you read this book first: Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce. It answers your question and many others related to TDD.
In your particular case, you should make your RealUserService configurable with a DB adapter that makes the real DB queries. The service itself will do the servicing, not the data persistence. Read the book, it will help a lot :)
JB's answer is a good one; I thought I'd throw out another technique I've used.
When developing the original test, don't bother stubbing out the UserService in the first place. In fact, go ahead and write the real thing. Proceed by following Kent Beck's 3 rules.
1) Make it work.
2) Make it right.
3) Make it fast.
Your code will then have tests that verify that find-by-id works. As JB stated, your tests will be considered integration tests at this point. Once they are passing, we have successfully achieved step 1. Now look at the design. Is it right? Tweak any design smells and check step 2 off your list.
For step 3, we need to make this test fast. We all know that integration tests are slow and error-prone, with all of the transaction management and database setup. Once we know the code works, I typically don't bother with the integration tests. It is at this point that you can introduce your dummy service, effectively turning your integration test into a unit test. Now that it doesn't touch the database in any way, we can check step 3 off the list because this test is fast.
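In code, the swap can be as small as this; greet() is a made-up consumer of UserService, standing in for whatever code actually uses it:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class GreetingTest {

    // The code under test depends only on the UserService interface.
    static String greet(UserService users, int id) {
        return "Hello, " + users.getUserById(id);
    }

    @Test
    public void greetsTheUserByName() {
        // Was: the real, DB-backed UserService -- slow, needs a live database.
        // The dummy keeps the behaviour but drops the DB, so the former
        // integration test now runs as a fast unit test.
        UserService service = new DummyUserService();
        assertEquals("Hello, James", greet(service, 1));
    }
}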
So, what are the problems with this approach? Well, many will say that I still need a test for the database-backed UserService. I typically don't keep integration tests lying around in my projects. My opinion is that these types of tests are slow and brittle, and don't catch enough logic errors in most projects to pay for themselves.
Hope that helps!
Brandon

How to deal with interdependent JUnit tests?

I have a question about JUnit testing.
Our JUnit suite is testing various functions that we wrote that interact with our memory system.
The way our system was designed requires it to be static, and it is therefore initialized prior to the running of the tests.
The problem we are having is that when subsequent tests are run, they are affected by the tests before them, so it is possible (and likely) that we are getting false positives or inaccurate failures.
Is there a way to maintain the testing order of our JUnit tests, but have the entire system re-initialized between them, as if testing the system from scratch?
The only option we can think of is to write a method that does this and call it at the end of each test, but as there are lots and lots of things that need to be reset this way, I am hoping there is a simpler way to do it.
I've seen problems with tests many times where they depend on each other (sometimes deliberately!).
First, you need to set up a setUp method:
@Before
public void setUp() {
    // No super.setUp() call is needed; the @Before annotation is enough.
    // Now clear, reset, etc. all your static data.
}
This is automatically run by JUnit before each test and will reset the environment. You can add an @After method as well, but @Before is better for ensuring a clean starting point.
The order of your tests is usually the order in which they appear in the test class, but this should never be assumed, and it's a really bad idea to base code on it.
Go back to the documentation if you need more information.
The approach I took to this kind of problem was to do partial reinitialization before each test. Each test knows the preconditions that it requires, and the setup ensures that they are true. Not sure if this will be relevant for you. Relying on order often ends up being a continuing PITA; being able to run tests by themselves is better.
Oh yeah, there's one "test" that's run at the beginning of a suite that's responsible for static initialization.
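A sketch of that per-test precondition setup, with a hypothetical MemorySystem standing in for your static system:

import static org.junit.Assert.assertNotNull;
import org.junit.Before;
import org.junit.Test;

public class MemoryFunctionsTest {

    @Before
    public void ensurePreconditions() {
        // Partial reinitialization: reset only the static state this
        // test class actually depends on, not the whole system.
        MemorySystem.resetCache();        // hypothetical reset hooks
        MemorySystem.clearAllocations();
    }

    @Test
    public void allocateReturnsABlock() {
        assertNotNull(MemorySystem.allocate(16));
    }
}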
You might want to look at TestNG, which supports test dependencies for this kind of functional testing (JUnit is a unit testing framework):
@Test
public void f1() {}

@Test(dependsOnMethods = "f1")
public void f2() {}

Need suggestions on getting started with JUnit

I have not used JUnit before and have not done automated unit testing.
Scenario:
We are changing our backend DAOs from SQL Server to Oracle, so on the DB side all the stored procedures were converted to Oracle. Now, when our code calls these new Oracle stored procedures, we want to make sure that the data returned is the same as what the SQL Server stored procedures returned.
So for example I have the following method in a DAO:
// This is the old method; gets data from SQL Server.
public IdentifierBean getHeadIdentifiers_old(String head){
    HashMap parmMap = new HashMap();
    parmMap.put("head", head);
    List result = getSqlMapClientTemplate().queryForList("Income.getIdentifiers", parmMap);
    return (IdentifierBean) result.get(0);
}

// This is the new method; gets data from Oracle.
public IdentifierBean getHeadIdentifiers(String head){
    HashMap parmMap = new HashMap();
    parmMap.put("head", head);
    getSqlMapClientTemplate().queryForObject("Income.getIdentifiers", parmMap);
    return (IdentifierBean) ((List) parmMap.get("Result0")).get(0);
}
Now I want to write a JUnit test method that would first call getHeadIdentifiers_old and then getHeadIdentifiers, and would compare the objects returned (I will have to override equals and hashCode in IdentifierBean). The test would pass only when both objects are the same.
In the test method I will have to provide a parameter (head in this case) for the two methods; this will be done manually for now. Yes, from the front end the parameters could be different, and the SPs might not return exactly the same results for those parameters. But I think having these test cases will give us some assurance that they return the same data...
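For reference, the equals/hashCode override could look roughly like this (assuming, for illustration, that IdentifierBean has a single id field; the real bean will have more):

public class IdentifierBean {

    private String id;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof IdentifierBean)) return false;
        IdentifierBean other = (IdentifierBean) o;
        return id == null ? other.id == null : id.equals(other.id);
    }

    @Override
    public int hashCode() {
        return id == null ? 0 : id.hashCode();
    }
}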
My questions are:
Is this a good approach?
I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
(Might be a n00b question) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
When tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
Okay, let's see what can be done...
Is this a good approach?
Not really, since instead of having one obsolete code path with somewhat known functionality, you now have two code paths with unequal and unpredictable functionality. Usually one would create thorough unit tests for the legacy code first and then refactor the original method, to avoid an incredibly large amount of rework: what if some part of the jungle of code forming the huge application keeps calling the old method while other parts call the new one?
However, working with legacy code is never optimal, so what you're thinking of may be the best solution.
I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
Assuming you've gone properly OO with your program structure, where each class does one thing and one thing only, then yes, you should make another class containing the test cases for that individual class. What you're looking for here is mock objects (search for them on SO and Google in general; lots of info available), which help you decouple your class under test from other classes. Interestingly, a high number of mocks in unit tests usually means that your class could use some heavy refactoring.
(Might be a n00b question) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
All IDEs allow you to run all the JUnit tests at the same time; for example, in Eclipse just click the source folder/top package and choose Run -> JUnit Test. Also, when running an individual class, all the unit tests contained within are run in the proper JUnit flow (setUp() -> testX() -> tearDown()).
When tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
Yes, part of Test-Driven Development is the mantra Red-Green-Refactor, which refers to the colored bar shown by IDEs for unit tests. Basically, if any of the tests in the test suite fails, the bar is red; if all pass, it's green. Additionally, for JUnit there's also blue on individual tests to show assertion errors.
Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
I'm quite sure there are going to be multiple of these in the answers soon, just hang on :)
You'll write a test class.
public class OracleMatchesSqlServer extends TestCase {
    public void testHeadIdentifiersShouldBeEqual() throws Exception {
        String head = "whatever your head should be";
        IdentifierBean originalBean = YourClass.getHeadIdentifiers_old(head);
        IdentifierBean oracleBean = YourClass.getHeadIdentifiers(head);
        assertEquals(originalBean, oracleBean);
    }
}
You might find you need to parameterize this on head; that's straightforward.
Update: It looks like this:
public class OracleMatchesSqlServer extends TestCase {
    public void testHeadIdentifiersShouldBeEqual() throws Exception {
        compareIdentifiersWithHead("head1");
        compareIdentifiersWithHead("head2");
        compareIdentifiersWithHead("etc");
    }

    private static void compareIdentifiersWithHead(String head) {
        IdentifierBean originalBean = YourClass.getHeadIdentifiers_old(head);
        IdentifierBean oracleBean = YourClass.getHeadIdentifiers(head);
        assertEquals(originalBean, oracleBean);
    }
}
* Is this a good approach?
Sure.
* I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
Try it with a separate test class for each DAO; if that gets too tedious, try it the other way and see what you like best. It's probably more helpful to have the fine-grainedness of separate test classes, but your mileage may vary.
* (Might be a n00b question) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
Depending on your environment, there will be ways to run all the tests automatically.
* When tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
Yes and yes.
* Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
I really like Dave Astels' book.
Another useful introduction to writing and maintaining large unit test suites is this book (which is partially available online):
xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros
The book is organized in 3 major parts. Part I consists of a series of introductory narratives that describe some aspect of test automation using xUnit. Part II describes a number of "test smells" that are symptoms of problems with how we are automating our tests. Part III contains descriptions of the patterns.
Here's a quick yet fairly thorough intro to JUnit.
