java junit testing in a loop

I have a JUnit test that I would like to run from a main method. I would like to retrieve multiple records from a database (within the main method) and pass each record into the JUnit test, using a data object, so that each record can be tested. Can I pass a data object into the run method of JUnit? If not, what is the best way to accomplish this? There are so many different scenarios that I would like to use actual data from the database. There could be as many as 5000 or more records to test.
Thanks
Doug

It sounds like you are looking for a parameterized test case. You can do this easily with JUnit itself, instead of using a main() method.
You need the Parameterized runner to run your test.
It will run your test with different parameters, passing each set of parameters in via the constructor.
The example in the JUnit documentation is a good way to understand how it works.
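For concreteness, here is a minimal sketch of a JUnit 4 parameterized test. The Parameterized runner and @Parameters annotation are JUnit's own API; the RecordTest name and the record values are hypothetical stand-ins for the asker's database rows, which would be loaded inside data().

import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class RecordTest {

    private final String record;

    // JUnit calls this constructor once for every entry returned by data()
    public RecordTest(String record) {
        this.record = record;
    }

    @Parameters
    public static Collection<Object[]> data() {
        // In the asker's case this is where the 5000+ records would be
        // fetched from the database; hard-coded values keep the sketch
        // self-contained.
        return Arrays.asList(new Object[][] { { "record1" }, { "record2" } });
    }

    @Test
    public void recordIsValid() {
        // Replace with the real assertions against the data object
        assertTrue(record != null && !record.isEmpty());
    }
}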

You want to use JUnit's Parameterized Tests. There's really no way to run a main method in a JUnit test case.
On top of the docs, here's a blog post which explains it a little better: http://ourcraft.wordpress.com/2008/08/27/writing-a-parameterized-junit-test/

I think that testing your main method is more along the lines of an integration test or a functional test. The same can be said for testing your database data. If you really want a unit test, the first step would be to refactor your main method using Extract Method to pull out the business logic you want to test.
Doing this gives you a few benefits. First, you can test your code in isolation (which is one of the more important properties of a good unit test). If you refactor out the business logic, you'll know that you are only testing that code and that no other code is affecting your test. Second, by having an isolated method you'll be able to easily mock the test data you are looking at by passing different parameters into the method, and make your assertions based on the known mock data.

Related

Annotate a full class with an extension, while intercepting all the tests and invoking them multiple times

TL;DR Is there any way we could annotate a full class with an extension, while at the same time being able to intercept all the tests and invoke them multiple times?
We are trying to update one of our projects from JUnit 4 to JUnit 5, but we have come across some problems in the migration from using Rules, Runners and Statements to the equivalents using JUnit 5 Extensions.
The project performs mutation testing on SQL statements: each test method is run multiple times, once for each possible mutation of the SQL, in order to generate a coverage report for the SQL. Currently we do this using a JUnit 4 Rule, but we want to change this to support JUnit 5, so we will need to change this Rule to an Extension. The way this currently works is that when a @Test method gets called, it in turn calls the Rule's apply() method, which we use to return a custom Statement object. In this Statement we create the mutants and then call the base Statement's evaluate() method multiple times, once for each mutant, monitoring and storing the results.
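For reference, a minimal sketch of that JUnit 4 mechanism might look like the following; all names (MutationRule, the helper methods) are illustrative placeholders, not code from the actual project.

import java.util.Arrays;
import java.util.List;

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class MutationRule implements TestRule {

    @Override
    public Statement apply(Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                // Run the original @Test body once per mutant
                for (String mutant : generateMutants()) {
                    installMutant(mutant); // hypothetical: swap in the mutated SQL
                    try {
                        base.evaluate();
                        recordResult(mutant, true);  // mutant survived the test
                    } catch (Throwable t) {
                        recordResult(mutant, false); // mutant was killed
                    }
                }
            }
        };
    }

    // Hypothetical helpers, stubbed so the sketch compiles.
    private List<String> generateMutants() { return Arrays.asList("mutant1", "mutant2"); }
    private void installMutant(String mutant) { /* apply the SQL mutation */ }
    private void recordResult(String mutant, boolean survived) { /* store for the coverage report */ }
}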
We tried looking into how to do the equivalent in JUnit 5 with an extension, but the similar functionality provided by @RepeatedTest and @ParameterizedTest (which both run methods multiple times) requires adding an annotation to every single method it applies to. We would prefer to just use @ExtendWith and then be able to intercept all test method calls and inject our multiple-call functionality into all of them. We looked into how @RepeatedTest and @ParameterizedTest do this via the @TestTemplate annotation and the TestTemplateInvocationContextProvider, but adding an annotation at the method level is something we want to avoid, since users generally want coverage reports based on the entire class, not single methods. We also looked at the various JUnit 5 lifecycle interfaces that an Extension can implement, like InvocationInterceptor, but this only allows us to call proceed() once on the method being invoked, while we need to do this multiple times.
Is there any way we could annotate a full class with an extension, while at the same time being able to intercept the tests and make changes to them?
Thank you so much for your time!

Should I follow One Test Class per Class Pattern Or usecase Pattern to write test cases

I am a learner in writing JUnit test cases. The pattern I have seen is that we usually make a test class for each class individually, named after it, and write test cases for each method of that class in its respective test class, so that maximum code coverage can be achieved.
What I was thinking is that writing test cases for my feature would be the better choice, because then, no matter how many method signatures change in the future, I don't have to change or recreate test cases for those modified or newly created methods. I would already have test cases for my developed feature, so if my test cases run fine for a particular feature, I can be sure with a minimum number of test cases that everything is fine.
By keeping to this I don't have to write test cases for each and every method of each class. Is it a good way?
Well, test cases are written for a reason. Each and every method has to work properly, as expected. If you only write test cases at the feature level, how do you find exactly where an error occurred, and how confidently can you ship your code to the next level?
The better approach is to write unit test cases for each class and then an integration test to make sure everything works together.
We found success in utilizing both. By default we use one test class per class. But when particular use cases come up, e.g. use cases that involve multiple classes, or use cases where the existing boilerplate testing code prevents the use case from being properly tested, then we create a test class for that scenario.
By keeping to this I don't have to write test cases for each and every method of each class. Is it a good way?
In this case you write only integration tests and no unit tests.
Writing tests for use cases is really nice, but it is not enough, because it is hard to cover all cases of all methods invoked in an integration test: there may be a very large number of branches, while covering them is much easier in a unit test.
Besides, a use case test may succeed for bad reasons, thanks to side effects between the multiple methods invoked.
By writing a unit test you protect yourself against this kind of issue.
Ultimately, unit and integration tests are not opposed but complementary. You have to write both to get a robust application.

Should I unit test generated Java code?

Simple question. If I use spring-data to generate CRUD methods for my DAO layer, should I still write unit tests against the generated methods? Or would that be the equivalent of unit testing library code?
Thanks in advance.
EDIT: To clarify, I'm asking whether or not the unit test needs to be written in addition to a suite of integration tests that get run before a release. For example, a unit test for the findAll() method of the DAO layer would be similar to the following:
class DepartmentDAOTest extends spock.lang.Specification {
    /* ... */
    def "returns all departments"() {
        when:
        List<Department> result = dao.findAll()

        then:
        result.size() == EXPECTED_SIZE
    }
}
Whereas an integration test would probably be run by hand by a test team or a developer, possibly before tagging a new release. This could also be automated using JWebUnit or Geb, and it tests every component (including the platform) to ensure they work as expected when "integrated."
If I were to write the DAO implementation by hand using JdbcTemplate there would be no question that I should unit test every method. When I unit test the service layer (which makes calls to the DAO layer) I can mock out the DAO layer so I don't test it twice.
If I make a call into a third-party library like pdfbox for generating a PDF, there's an expectation for each method to work (because it is tested as a part of the pdfbox project). I don't test that their drawSquare method really draws a square, but during integration testing I'll see that my export PDF functionality correctly exports a PDF the way we want it to.
So the question should really be re-worded as, "Under which testing phase should I test my usage of spring-data?"
First, there is no code generated at all. We build a query meta-model from the query methods you declare and dynamically execute those queries. The short answer here is: you definitely should test the methods you declare. The reason is as obvious as it is simple: the query method declarations (no matter whether they use derived queries or manually declared ones) interact with the mapping metadata you defined for your entities. Thus, it's definitely reasonable to check the query method execution to make sure you see the expected results. This is then of course more of an integration test and a semantic check for the queries executed, rather than a classical unit test.
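As a hedged illustration of what "no generated code" means in practice, here is a sketch of a declared query method; DepartmentRepository, Department and the method name are hypothetical examples, not from the question.

import java.util.List;

import org.springframework.data.repository.CrudRepository;

// Hypothetical repository; Department is assumed to be a mapped entity
// (e.g. a JPA @Entity with a Long id and a name property) defined elsewhere.
public interface DepartmentRepository extends CrudRepository<Department, Long> {

    // No implementation is written or generated anywhere. Spring Data parses
    // the method name against Department's mapping metadata and executes the
    // derived query at runtime, which is why an integration-style test of the
    // declaration is still worthwhile.
    List<Department> findByNameContaining(String fragment);
}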
No. As a general rule, don't test the platform.

When to use Mockito.verify()?

I write JUnit test cases for three purposes:
To ensure that my code satisfies all of the required functionality, under all (or most of) the input combinations/values.
To ensure that I can change the implementation, and rely on JUnit test cases to tell me that all my functionality is still satisfied.
As documentation of all the use cases my code handles, acting as a spec for refactoring, should the code ever need to be rewritten. (Refactor the code, and if my JUnit tests fail, you probably missed some use case.)
I do not understand why or when Mockito.verify() should be used. When I see verify() being called, it is telling me that my JUnit test is becoming aware of the implementation. (Thus changing my implementation would break my tests, even though the functionality was unaffected.)
I'm looking for:
What should be the guidelines for appropriate usage of Mockito.verify()?
Is it fundamentally correct for JUnit tests to be aware of, or tightly coupled to, the implementation of the class under test?
If the contract of class A includes the fact that it calls method B of an object of type C, then you should test this by making a mock of type C, and verifying that method B has been called.
This implies that the contract of class A has sufficient detail that it talks about type C (which might be an interface or a class). So yes, we're talking about a level of specification that goes beyond just "system requirements", and goes some way to describing implementation.
This is normal for unit tests. When you are unit testing, you want to ensure that each unit is doing the "right thing", and that will usually include its interactions with other units. "Units" here might mean classes, or larger subsets of your application.
Update:
I feel that this doesn't apply just to verification, but to stubbing as well. As soon as you stub a method of a collaborator class, your unit test has become, in some sense, dependent on implementation. It's kind of in the nature of unit tests to be so. Since Mockito is as much about stubbing as it is about verification, the fact that you're using Mockito at all implies that you're going to run across this kind of dependency.
In my experience, if I change the implementation of a class, I often have to change the implementation of its unit tests to match. Typically, though, I won't have to change the inventory of what unit tests there are for the class; unless of course, the reason for the change was the existence of a condition that I failed to test earlier.
So this is what unit tests are about. A test that doesn't suffer from this kind of dependency on the way collaborator classes are used is really a sub-system test or an integration test. Of course, these are frequently written with JUnit too, and frequently involve the use of mocking. In my opinion, "JUnit" is a terrible name, for a product that lets us produce all different types of test.
David's answer is of course correct but doesn't quite explain why you would want this.
Basically, when unit testing you are testing a unit of functionality in isolation. You test whether the input produces the expected output. Sometimes, you have to test side effects as well. In a nutshell, verify allows you to do that.
For example, you have a bit of business logic that is supposed to store things using a DAO. You could test this using an integration test that instantiates the DAO, hooks it up to the business logic and then pokes around in the database to see if the expected stuff got stored. That's not a unit test any more.
Or, you could mock the DAO and verify that it gets called in the way you expect. With Mockito you can verify that something is called, how often it is called, and even use matchers on the parameters to ensure it gets called in a particular way.
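As a concrete sketch of that approach (the Order, OrderDao and OrderService types below are hypothetical, defined inline only to keep the example self-contained):

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class OrderServiceTest {

    // Minimal illustrative types, standing in for the real business logic
    static class Order { final String item; Order(String item) { this.item = item; } }
    interface OrderDao { void save(Order order); }
    static class OrderService {
        private final OrderDao dao;
        OrderService(OrderDao dao) { this.dao = dao; }
        void placeOrder(Order order) { dao.save(order); }
    }

    @Test
    public void savesOrderThroughDao() {
        OrderDao dao = mock(OrderDao.class);
        OrderService service = new OrderService(dao);

        service.placeOrder(new Order("book"));

        // Verify the side effect: save() was called exactly once, with some Order
        verify(dao, times(1)).save(any(Order.class));
    }
}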
The flip side of unit testing like this is indeed that you are tying the tests to the implementation, which makes refactoring a bit harder. On the other hand, the amount of code it takes to properly exercise a design is a good indicator of its quality: if your tests need to be very long, something is probably wrong with the design. So code with a lot of side effects or complex interactions that need to be tested is probably not a good thing to have.
This is a great question!
I think the root cause of it is the following: we use JUnit not only for unit testing. So the question should be split up:
Should I use Mockito.verify() in my integration (or any other higher-than-unit) testing?
Should I use Mockito.verify() in my black-box unit-testing?
Should I use Mockito.verify() in my white-box unit-testing?
So if we ignore higher-than-unit testing, the question can be rephrased: "Using Mockito.verify() in white-box unit testing creates tight coupling between the unit test and my implementation. Can I do some 'grey-box' unit testing, and what rules of thumb should I use for it?"
Now, let's go through all of this step-by-step.
Should I use Mockito.verify() in my integration (or any other higher-than-unit) testing?
I think the answer is clearly no; moreover, you shouldn't use mocks for this at all. Your test should be as close to the real application as possible. You are testing a complete use case, not an isolated part of the application.
black-box vs. white-box unit testing
If you are using the black-box approach, what you are really doing is supplying input (covering all equivalence classes) and a state, and testing that you receive the expected output. In this approach, using mocks is in general justified (you just mimic that they are doing the right thing; you don't want to test them), but calling Mockito.verify() is superfluous.
If you are using the white-box approach, what you are really doing is testing the behaviour of your unit. In this approach, calling Mockito.verify() is essential: you should verify that your unit behaves as you expect it to.
rules of thumb for grey-box testing
The problem with white-box testing is that it creates high coupling. One possible solution is to do grey-box testing rather than white-box testing. This is a sort of combination of black- and white-box testing: you are really testing the behaviour of your unit, as in white-box testing, but in general you make the test implementation-agnostic where possible. Where it is possible, you just make a check as in the black-box case and simply assert that the output is what you expect. So the essence of your question is when that is possible.
This is really hard, and I don't have a good rule, but I can give you two examples. In the case mentioned above with equals() vs equalsIgnoreCase(), you shouldn't call Mockito.verify(), just assert the output. If you can't do that, break your code down into smaller units until you can. On the other hand, suppose you have some @Service and you are writing a @WebService that is essentially a wrapper around your @Service: it delegates all calls to the @Service (while doing some extra error handling). In this case calling Mockito.verify() is essential; you shouldn't duplicate all the checks you already did for the @Service. Verifying that you call the @Service with the correct parameter list is sufficient.
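A sketch of that wrapper case (all types below are hypothetical and defined inline): the web-service layer purely delegates, so verifying the delegation with the correct parameters is the whole test.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class AccountWebServiceTest {

    // Minimal illustrative types
    interface AccountService { void transfer(String from, String to, long amount); }
    static class AccountWebService {
        private final AccountService service;
        AccountWebService(AccountService service) { this.service = service; }
        void transfer(String from, String to, long amount) {
            service.transfer(from, to, amount); // pure delegation
        }
    }

    @Test
    public void delegatesTransferToService() {
        AccountService service = mock(AccountService.class);

        new AccountWebService(service).transfer("a", "b", 100L);

        // The service's own logic is already tested elsewhere;
        // verifying the call with the right parameter list is sufficient.
        verify(service).transfer("a", "b", 100L);
    }
}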
I must say that you are absolutely right from a classical approach's point of view:
If you first create (or change) the business logic of your application and then cover it with tests (a Test-Last approach), then it will be very painful and dangerous to let tests know anything about how your software works beyond checking inputs and outputs.
If you are practicing a Test-Driven approach, then your tests are the first to be written and changed, and they reflect the use cases of your software's functionality. The implementation depends on the tests. That sometimes means you want your software to be implemented in some particular way, e.g. to rely on some other component's method or even to call it a particular number of times. That is where Mockito.verify() comes in handy!
It is important to remember that there are no universal tools. The type of software, its size, company goals and market situation, team skills and many other things influence the decision on which approach to use in your particular case.
In most cases when people don't like using Mockito.verify(), it is because it is used to verify everything the tested unit is doing, which means you will need to adapt your test if anything changes in the implementation.
But I don't think that is a problem. If you want to be able to change what a method does without needing to change its test, that basically means you want to write tests which don't test everything your method is doing, because you don't want them to test your changes. And that is the wrong way of thinking.
What really is a problem, is if you can modify what your method does and a unit test which is supposed to cover the functionality entirely doesn't fail. That would mean that whatever the intention of your change is, the result of your change isn't covered by the test.
Because of that, I prefer to mock as much as possible: also mock your data objects. When doing that you can not only use verify to check that the correct methods of other classes are called, but also that the data being passed is collected via the correct methods of those data objects. And to make it complete, you should test the order in which calls occur.
Example: if you modify a db entity object and then save it using a repository, it is not enough to verify that the setters of the object are called with the correct data and that the save method of the repository is called. If they are called in the wrong order, your method still doesn't do what it should do.
So, I don't use Mockito.verify directly; instead I create an inOrder object with all the mocks and use inOrder.verify. And to make it complete, you should also call Mockito.verifyNoMoreInteractions at the end, passing it all the mocks. Otherwise someone can add new functionality/behavior without testing it, which would mean that after a while your coverage statistics can be 100% while you are piling up code which isn't asserted or verified.
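A sketch of that inOrder style (the entity, repository and service types are hypothetical, defined inline to keep the example self-contained):

import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verifyNoMoreInteractions;

import org.junit.Test;
import org.mockito.InOrder;

public class UpdateServiceTest {

    // Minimal illustrative types
    interface Entity { void setName(String name); }
    interface Repository { void save(Entity e); }
    static class UpdateService {
        private final Repository repository;
        UpdateService(Repository repository) { this.repository = repository; }
        void rename(Entity e, String name) {
            e.setName(name);    // mutate first...
            repository.save(e); // ...then persist
        }
    }

    @Test
    public void renamesThenSaves() {
        Entity entity = mock(Entity.class);
        Repository repository = mock(Repository.class);

        new UpdateService(repository).rename(entity, "new name");

        // Order matters: saving before setting the name would be a bug
        InOrder inOrder = inOrder(entity, repository);
        inOrder.verify(entity).setName("new name");
        inOrder.verify(repository).save(entity);

        // Fail if any interaction was added without being verified
        verifyNoMoreInteractions(entity, repository);
    }
}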
As some people said
Sometimes you don't have a direct output on which you can assert
Sometimes you just need to confirm that your tested method is sending the correct indirect outputs to its collaborators (which you are mocking).
Regarding your concern about breaking your tests when refactoring, that is somewhat expected when using mocks/stubs/spies. It is expected by definition, not because of a specific implementation such as Mockito.
But you could think of it this way: if you need to do a refactoring that makes major changes to the way your method works, it is a good idea to do it with a TDD approach, meaning you change your test first to define the new behavior (the test will fail), then make the changes and get the test passing again.

need suggestions on getting started with Junit

I have not used JUnit before and have not done automated unit testing.
Scenario:
We are changing our backend DAOs from SQL Server to Oracle. On the DB side, all the stored procedures have been converted to Oracle. Now, when our code calls these new Oracle stored procedures, we want to make sure that the data returned is the same as from the SQL Server stored procedures.
So for example I have the following method in a DAO:
// this is the old method; gets data from SQL Server
public IdentifierBean getHeadIdentifiers_old(String head) {
    HashMap parmMap = new HashMap();
    parmMap.put("head", head);
    List result = getSqlMapClientTemplate().queryForList("Income.getIdentifiers", parmMap);
    return (IdentifierBean) result.get(0);
}

// this is the new method; gets data from Oracle
public IdentifierBean getHeadIdentifiers(String head) {
    HashMap parmMap = new HashMap();
    parmMap.put("head", head);
    getSqlMapClientTemplate().queryForObject("Income.getIdentifiers", parmMap);
    return (IdentifierBean) ((List) parmMap.get("Result0")).get(0);
}
Now I want to write a JUnit test method that would first call getHeadIdentifiers_old and then getHeadIdentifiers, and would compare the objects returned (I will have to override equals() and hashCode() in IdentifierBean). The test should pass only when both objects are the same.
In the test method I will have to provide a parameter (head in this case) for the two methods; this will be done manually for now. Yes, from the front end the parameters could be different, and the SPs might not return the exact same results for those parameters. But I think having these test cases will give us some confidence that they return the same data.
My questions are:
Is this a good approach?
I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
(might be a n00b question) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
When the tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
Okay, let's see what can be done...
Is this a good approach?
Not really. Instead of having one obsolete code path with somewhat known functionality, you now have two code paths with unequal and unpredictable functionality. Usually one would instead create thorough unit tests for the legacy code first and then refactor the original method, to avoid an incredibly large amount of rework: what if some part of the jungle of code forming your huge application keeps calling the old method while other parts call the new one?
However, working with legacy code is never optimal, so what you're thinking of may be the best solution.
I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
Assuming you've gone properly OO with your program structure, where each class does one thing and one thing only, then yes, you should make a separate class containing the test cases for each individual class. What you're looking for here is mock objects (search for the term on SO and on Google in general; lots of information is available), which help you decouple the class under test from other classes. Interestingly, a high number of mocks in a unit test usually means that the class could use some heavy refactoring.
(might be a n00b question) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
All IDEs allow you to run all the JUnit tests at the same time; for example, in Eclipse just click the source folder/top package and choose Run -> JUnit Test. Also, when running an individual class, all the unit tests contained within it are run in the proper JUnit flow (setUp() -> testX() -> tearDown()).
When tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
Yes. Part of Test-Driven Development is the mantra Red-Green-Refactor, which refers to the colored bar shown by IDEs for unit tests. Basically, if any of the tests in the test suite fails, the bar is red; if all pass, it's green. Additionally, for JUnit there's also blue on individual tests to show assertion errors.
Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
I'm quite sure there's going to be multiple of these in the answers soon, just hang on :)
You'll write a test class.
public class OracleMatchesSqlServer extends TestCase {
    public void testHeadIdentifiersShouldBeEqual() throws Exception {
        String head = "whatever your head should be";
        IdentifierBean originalBean = YourClass.getHeadIdentifiers_old(head);
        IdentifierBean oracleBean = YourClass.getHeadIdentifiers(head);
        assertEquals(originalBean, oracleBean);
    }
}
You might find you need to parameterize this on head; that's straightforward.
Update: It looks like this:
public class OracleMatchesSqlServer extends TestCase {
    public void testHeadIdentifiersShouldBeEqual() throws Exception {
        compareIdentifiersWithHead("head1");
        compareIdentifiersWithHead("head2");
        compareIdentifiersWithHead("etc");
    }

    private static void compareIdentifiersWithHead(String head) {
        IdentifierBean originalBean = YourClass.getHeadIdentifiers_old(head);
        IdentifierBean oracleBean = YourClass.getHeadIdentifiers(head);
        assertEquals(originalBean, oracleBean);
    }
}
* Is this a good approach?
Sure.
* I will have multiple DAOs. Do I write the test methods inside the DAO itself or should I have a separate JUnit Test Class for each DAO?
Try it with a separate test class for each DAO; if that gets too tedious, try it the other way and see what you like best. It's probably more helpful to have the fine-grainedness of separate test classes, but your mileage may vary.
* (might be a n00b question) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
Depending on your environment, there will be ways to run all the tests automatically.
* When tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
Yes and yes.
* Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
I really like Dave Astels' book.
Another useful introduction to writing and maintaining large unit test suites is this book (which is partially available online):
XUnit Test Patterns, Refactoring Test Code by Gerard Meszaros
The book is organized in 3 major parts. Part I consists of a series of introductory narratives that describe some aspect of test automation using xUnit. Part II describes a number of "test smells" that are symptoms of problems with how we are automating our tests. Part III contains descriptions of the patterns.
Here's a quick yet fairly thorough intro to JUnit.
