Unit testing methods which do not produce a distinct output - java

In my Vaadin GUI application, there are many methods that look like the one below.
@Override
protected void loadLayout() {
    CssLayout statusLayout = new CssLayout();
    statusLayout.addComponent(connectedTextLabel);
    statusLayout.addComponent(connectedCountLabel);
    statusLayout.addComponent(notConnectedTextLabel);
    statusLayout.addComponent(notConnectedCountLabel);

    connectionsTable.getCustomHeaderLayout().addComponent(statusLayout);
    connectionsTable.getCustomHeaderLayout().addComponent(commandLayout);
    connectionsTable.getCustomHeaderLayout().addComponent(historyViewCheckbox);

    bodySplitter.addComponent(connectionsTable);
    bodySplitter.addComponent(connectionHistoryTable);
    bodySplitter.setSplitPosition(75, Sizeable.Unit.PERCENTAGE);
    bodySplitter.setSizeFull();

    bodyLayout.addComponent(bodySplitter);

    if (connectionDef.getConnectionHistoryDef() == null) {
        historyViewCheckbox.setVisible(false);
    }

    if (connectionDef.getConnectionStatusField() == null
            || connectionDef.getConnectedStatusValue() == null
            || connectionDef.getConnectedStatusValue().isEmpty()) {
        connectedTextLabel.setVisible(false);
        connectedCountLabel.setVisible(false);
        notConnectedTextLabel.setVisible(false);
        notConnectedCountLabel.setVisible(false);
    }
}
protected void setStyleNamesAndControlIds() {
    mainLayout.setId("mainLayout");
    header.setId("header");
    footer.setId("footer");
    propertyEditorLayout.setId("propertyEditorLayout");
    propertyEditor.setId("propertyEditor");

    mainLayout.setStyleName("mainLayout");
    propertyEditorLayout.setStyleName("ui_action_edit");
    header.setStyleName("TopPane");
    footer.setStyleName("footer");
}
These methods are used for setting up the layout of GUIs. They do not produce a single distinct output, and almost every line does a separate job that is largely unrelated to the other lines.
Usually, when unit testing a method, I check the return value of the method, or validate calls on a limited number of external objects such as database connections.
But for methods like the above, there is no such single output. If I wrote unit tests for such methods, my test code would check that the method call on each line happens, and in the end it would look almost like the method itself.
If someone altered the code in any way, the test would break and they would have to update the test to match the change. But there is no assurance that the change didn't actually break anything, since the test doesn't check the actual UI drawn in the browser.
For example, if someone changed the style name of a control, he would have to update the test code with the new style name, and the test would pass. But for things to actually work without any issue, he would have to change the relevant scss style files too, and the test contributed nothing toward detecting that issue. The same applies to the layout setup code.
Is there any advantage of writing unit tests like the above, other than keeping the code coverage rating at a higher level? To me, it feels useless, and even writing a test that compares the method's decompiled bytecode to a copy of the original decompiled bytecode kept as a string in the test looks much better than these kinds of tests.

Is there any advantage of writing unit tests like above, other than
keeping the code coverage rating at a higher level?
Yes, if you take a sensible approach. It might not make sense, as you say, to test that a control has a particular style. So focus your tests on the parts of your code that are likely to break. If there is any conditional logic that goes into producing your UI, test that logic. The test will then protect your code from future changes that could break your logic.
As for your comment about testing methods that don't return a value, you can address that in several ways.
It's your code, so you can restructure it to be more testable. Think about breaking it down into smaller methods. Isolate your logic into individual methods that can be called in a test.
Indirect verification - Rather than focusing on return values, focus on the effect your method has on other objects in the system.
Finally consider if unit testing of the UI is right for you and your organization. UIs are often difficult to unit test (as you have pointed out). Many companies write functional tests for their UIs. These are tests that drive the UI of the actual product. This is very different from unit tests which do not require the full product and are targeted at very small units of functionality.
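The "isolate your logic into individual methods" suggestion can be sketched for the question's own code: the conditional visibility decision in loadLayout() is pulled into a pure, framework-free helper that a unit test can call directly. This is a hedged sketch, not Vaadin API: ConnectionDef here is a minimal stand-in for the question's class, and LayoutRules and shouldShowStatusLabels are invented names.

```java
// Sketch only: the visibility condition from loadLayout(), extracted into a
// pure helper so it can be unit tested without any Vaadin classes.
// ConnectionDef is a minimal stand-in for the question's class.
class ConnectionDef {
    private final String statusField;
    private final String connectedValue;

    ConnectionDef(String statusField, String connectedValue) {
        this.statusField = statusField;
        this.connectedValue = connectedValue;
    }

    String getConnectionStatusField() { return statusField; }
    String getConnectedStatusValue() { return connectedValue; }
}

final class LayoutRules {
    private LayoutRules() {}

    // Mirrors the condition that guards the four setVisible(false) calls.
    static boolean shouldShowStatusLabels(ConnectionDef def) {
        return def.getConnectionStatusField() != null
                && def.getConnectedStatusValue() != null
                && !def.getConnectedStatusValue().isEmpty();
    }
}
```

A test then asserts on the boolean for each input combination, with no layout objects involved; the setVisible() calls in loadLayout() become trivial pass-throughs of this decision.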

Here's one simple example you could look at to see how to drive your application from the outside and exercise what is needed. It is a Vaadin 8, CDI & WildFly Swarm example, and by no means the only way to test the UI of a Vaadin application.
https://github.com/wildfly-swarm/wildfly-swarm-examples/blob/master/vaadin/src/it/java/org/wildfly/swarm/it/vaadin/VaadinApplicationIT.java

Related

Integration tests, but how much? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 5 years ago.
A recent debate within my team made me wonder. The basic topic is how much, and what, we should cover with functional/integration tests (sure, they are not the same, but in this dummy example it doesn't matter).
Let's say you have a "controller" class something like:
public class SomeController {
    @Autowired Validator val;
    @Autowired DataAccess da;
    @Autowired SomeTransformer tr;
    @Autowired Calculator calc;

    public boolean doCheck(Input input) {
        if (val.validate(input)) {
            return false;
        }
        List<Stuff> stuffs = da.loadStuffs(input);
        if (stuffs.isEmpty()) {
            return false;
        }
        BusinessStuff businessStuff = tr.transform(stuffs);
        if (null == businessStuff) {
            return false;
        }
        return calc.check(businessStuff);
    }
}
We need a lot of unit testing for sure (e.g., if validation fails, or no data in DB, ...), that's out of question.
Our main issue, and what we cannot agree on, is how much of it integration tests should cover :-)
I'm on the side that we should aim for fewer integration tests (test pyramid). What I would cover here is only a single happy and a single unhappy path, where execution returns from the last line, just to see that when I put these pieces together nothing blows up.
The problem is that it is not that easy to tell why the test returned false, and that makes some of the guys uneasy (e.g., if we only check the return value, it is hidden that the test is green just because someone changed the validation and it now returns false). Sure, we could cover all the cases, but that would be heavy overkill IMHO.
Does anyone have a good rule of thumb for this kind of issue? Or a recommendation? Reading? Talk? Blog post? Anything on the topic?
Thanks a lot in advance!
PS: Sorry for the ugly example, but it's quite hard to translate a specific piece of code into an example. Yes, one can argue about throwing exceptions / using a different return type / etc., but our hands are more or less tied because of external dependencies.
It's easy to figure out where the test should reside if you follow these rules:
We check the logic at the Unit Test level, and we check that the logic is invoked at the Component or System level.
We don't use mocking frameworks (mockito, jmock, etc).
Let's dive, but first let's agree on terminology:
Unit tests - check a method, a class or a few of them in isolation
Component Test - initializes a piece of the app but doesn't deploy it to the App Server. Example could be - initializing Spring Contexts in the tests.
System Test - requires a full deployment on App Server. Example could be: sending HTTP REST requests to a remote server.
If we build a balanced pyramid we'll end up with most tests on Unit and Component levels and few of them will be left to System Testing. This is good since lower-level tests are faster and easier. To do that:
We should put the business logic as low as possible (preferably in the Domain Model), as this allows us to easily test it in isolation. Any time you iterate over a collection of objects and apply conditions to it, that logic ideally belongs in the domain model.
But the fact that the logic works doesn't mean it's invoked correctly. That's where you'd need Component Tests: initialize your Controllers as well as the services and DAOs, then call them once or twice to see whether the logic is invoked.
Example: a user's name cannot exceed 50 symbols and may contain only Latin letters as well as some special symbols.
Unit Tests - create Users with right and wrong usernames; check that exceptions are thrown, or vice versa - that the valid names pass.
Component Tests - check that when you pass an invalid user to the Controller (if you use Spring MVC, you can do that with MockMvc), it throws the error. Here you need to pass only one user - all the rules have already been checked by now; here you're interested only in knowing whether those rules are invoked.
System Tests - you may not need them for this scenario, actually.
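The unit-level half of that example can be sketched as a plain validation method. This is illustrative only: the class name and the exact allowed character set are assumptions, since the answer doesn't pin them down.

```java
// Sketch of the username rule at the unit level: at most 50 symbols,
// Latin letters plus a few special symbols (the allowed set here is assumed).
final class UsernameValidator {
    private static final int MAX_LENGTH = 50;
    private static final String ALLOWED = "[A-Za-z._-]+";

    private UsernameValidator() {}

    static boolean isValid(String name) {
        return name != null
                && name.length() <= MAX_LENGTH
                && name.matches(ALLOWED);
    }
}
```

Unit tests exercise every equivalence class of this method (too long, non-Latin, null, valid); the component test then only needs to push a single invalid user through the controller to confirm the rule is wired in.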
Here is a more elaborate example of how you can implement a balanced pyramid.
In general we write an integration test at every starting point of the application (let's say every controller). We validate some happy flows and some error flows, with a couple of asserts to give us some peace of mind that we didn't break anything.
However, we also write tests at lower levels in response to regressions or when multiple classes are involved in a piece of complicated behaviour.
We use Integration tests mainly to catch the following types of regressions:
Refactoring mistakes (not caught by Unit tests).
For problems with refactoring, a couple of IT tests that hit a good portion of your application are more than sufficient. Refactoring often touches a large portion of classes, and so these tests will expose things like using the wrong class or parameter somewhere.
Early detection of injection problems (context not loading, Spring)
Injection problems often happen because of missing annotations or mistakes in XML config. The first integration test that runs and sets up the entire context (apart from mocking the back-ends) will catch these every time.
Bugs in super complicated logic that are nearly impossible to test without controlling all inputs
Sometimes you have code that is spread over several classes and needs filtering, transformations, etc., and sometimes no one really understands what is going on. Worse, it is nearly impossible to test on a live system because the underlying data sources cannot easily produce the exact scenario that triggers a bug.
For these cases (once discovered) we add a new integration test, where we feed the system the input that caused the bug, and then verify if it is performing as expected. This gives a lot of peace of mind after extensive code changes.

One unit per case or one unit per assert(s)

In unit tests, should I write one test per case or one test per assert, from a maintenance cost point of view? I have the following code:
void methodUnderTest(Resource resource) {
    if (!resource.hasValue()) {
        Value value = valueService.getValue(resource);
        resource.setValue(value);
    }
    // resource.setLastUpdateTime(new Date()); // will be added in future
    db.persist(resource);
    email.send(resource);
}
The commented line will be added in the near future, and I am thinking about what it will cost to update the tests.
As far as I see there are two ways to test this code.
Write two tests, passing a resource with a value and one without a value. In both tests, verify that db.persist and email.send were called. When setLastUpdateTime is added, I'll have to update both tests to verify that the property was set.
Write separate unit tests: one checks that db.persist was called, another checks email.send, and a third and fourth cover a resource with and without a value. When setLastUpdateTime is added, I just write a new test.
I like the second way, because I like the idea that I won't have to touch working tests. But that would probably mean a lot of code duplication, because all four tests actually do the same thing and only differ in their asserts.
The first approach looks more correct from a "just one concept per unit test" point of view. But aren't such tests hard to maintain? Adding something new, I would always have to revise all the existing tests, and that doesn't sound good.
Is there some best practice here I should follow?
I propose you put the common stuff of all the tests into setUp(), then have exactly one assert per unit test, as in your second way of testing.
When you add the new line of code, you just add one more test case with a single assert.
No modification of existing tests, no code duplication.
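A sketch of what that structure looks like, using a hand-rolled recording fake instead of a mocking framework so it stays self-contained. Collaborators and ServiceUnderTest are invented names; the production method follows the question's pseudocode.

```java
import java.util.ArrayList;
import java.util.List;

// Hand-rolled recording fake: remembers which collaborator calls happened,
// so each test can make exactly one assertion against the record.
class Collaborators {
    final List<String> calls = new ArrayList<>();

    String getValue(Resource r) { calls.add("getValue"); return "fetched"; }
    void persist(Resource r)    { calls.add("persist"); }
    void send(Resource r)       { calls.add("send"); }
}

class Resource {
    private String value;
    boolean hasValue()      { return value != null; }
    void setValue(String v) { value = v; }
    String getValue()       { return value; }
}

class ServiceUnderTest {
    private final Collaborators deps;
    ServiceUnderTest(Collaborators deps) { this.deps = deps; }

    // Mirrors the question's methodUnderTest().
    void methodUnderTest(Resource resource) {
        if (!resource.hasValue()) {
            resource.setValue(deps.getValue(resource));
        }
        deps.persist(resource);
        deps.send(resource);
    }
}
```

Each test repeats the same setup (or inherits it from setUp()) and asserts one fact: that "persist" was recorded, that "send" was recorded, and so on. Adding setLastUpdateTime later means adding one new test, not editing the old ones.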
There is no general answer for this. You need to balance your needs and constraints:
If a test is small, executes quickly and fails very rarely, there is no reason to split it into several.
If you have many assertions in a single test, then all of them probably provide value towards figuring out why the test fails, but only the first one will be executed, keeping valuable information from you. Try to have only a single assert per test (combine the assertions into one big string, for example).
If you test several features in a single test, you have a variant of the "missing valuable information" problem. If each test tests a single feature, the combination of succeeded and failed tests may give you a clue as to why they fail. Try to aim for a single feature per test.
Lastly, it gives me a good feeling when I see thousands of tests being executed.

When to use Mockito.verify()?

I write JUnit test cases for three purposes:
To ensure that my code satisfies all of the required functionality, under all (or most of) the input combinations/values.
To ensure that I can change the implementation, and rely on JUnit test cases to tell me that all my functionality is still satisfied.
As documentation of all the use cases my code handles, acting as a spec for refactoring, should the code ever need to be rewritten. (Refactor the code, and if my JUnit tests fail, you probably missed some use case.)
I do not understand why or when Mockito.verify() should be used. When I see verify() being called, it is telling me that my JUnit test is becoming aware of the implementation. (Thus changing my implementation would break my JUnit tests, even though my functionality was unaffected.)
I'm looking for:
What should be the guidelines for appropriate usage of Mockito.verify()?
Is it fundamentally correct for JUnit tests to be aware of, or tightly coupled to, the implementation of the class under test?
If the contract of class A includes the fact that it calls method B of an object of type C, then you should test this by making a mock of type C, and verifying that method B has been called.
This implies that the contract of class A has sufficient detail that it talks about type C (which might be an interface or a class). So yes, we're talking about a level of specification that goes beyond just "system requirements", and goes some way to describing implementation.
This is normal for unit tests. When you are unit testing, you want to ensure that each unit is doing the "right thing", and that will usually include its interactions with other units. "Units" here might mean classes, or larger subsets of your application.
Update:
I feel that this doesn't apply just to verification, but to stubbing as well. As soon as you stub a method of a collaborator class, your unit test has become, in some sense, dependent on implementation. It's kind of in the nature of unit tests to be so. Since Mockito is as much about stubbing as it is about verification, the fact that you're using Mockito at all implies that you're going to run across this kind of dependency.
In my experience, if I change the implementation of a class, I often have to change the implementation of its unit tests to match. Typically, though, I won't have to change the inventory of what unit tests there are for the class; unless of course, the reason for the change was the existence of a condition that I failed to test earlier.
So this is what unit tests are about. A test that doesn't suffer from this kind of dependency on the way collaborator classes are used is really a sub-system test or an integration test. Of course, these are frequently written with JUnit too, and frequently involve the use of mocking. In my opinion, "JUnit" is a terrible name, for a product that lets us produce all different types of test.
David's answer is of course correct but doesn't quite explain why you would want this.
Basically, when unit testing you are testing a unit of functionality in isolation. You test whether the input produces the expected output. Sometimes, you have to test side effects as well. In a nutshell, verify allows you to do that.
For example, you have a bit of business logic that is supposed to store things using a DAO. You could do this using an integration test that instantiates the DAO, hooks it up to the business logic, and then pokes around in the database to see if the expected stuff got stored. That's not a unit test any more.
Or, you could mock the DAO and verify that it gets called in the way you expect. With mockito you can verify that something is called, how often it is called, and even use matchers on the parameters to ensure it gets called in a particular way.
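What verify() buys you can be shown without Mockito at all, using a hand-rolled recording DAO. OrderDao, RecordingOrderDao and OrderService are invented names for illustration; Mockito replaces the recording class with a generated mock.

```java
import java.util.ArrayList;
import java.util.List;

// The idea behind verify(): the mock records the interaction, and the test
// asserts on the record instead of on a return value.
interface OrderDao {
    void save(String order);
}

class RecordingOrderDao implements OrderDao {
    final List<String> saved = new ArrayList<>();
    @Override public void save(String order) { saved.add(order); }
}

class OrderService {
    private final OrderDao dao;
    OrderService(OrderDao dao) { this.dao = dao; }

    // The only observable effect of this logic is the DAO call.
    void placeOrder(String order) {
        dao.save(order.trim());
    }
}
```

With Mockito, the recording class disappears and the same check becomes roughly verify(dao).save("widget"), with argument matchers and call counts available when you need them.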
The flip side of unit testing like this is indeed that you are tying the tests to the implementation which makes refactoring a bit harder. On the other hand, a good design smell is the amount of code it takes to exercise it properly. If your tests need to be very long, probably something is wrong with the design. So code with a lot of side effects/complex interactions that need to be tested is probably not a good thing to have.
This is a great question!
I think the root cause is the following: we are using JUnit not only for unit testing. So the question should be split up:
Should I use Mockito.verify() in my integration (or any other higher-than-unit testing) testing?
Should I use Mockito.verify() in my black-box unit-testing?
Should I use Mockito.verify() in my white-box unit-testing?
So if we ignore higher-than-unit testing, the question can be rephrased as: "Using white-box unit testing with Mockito.verify() creates tight coupling between the unit test and my code's implementation. Can I do some 'grey-box' unit testing, and what rules of thumb should I use for it?"
Now, let's go through all of this step-by-step.
Should I use Mockito.verify() in my integration (or any other higher-than-unit) testing?
I think the answer is clearly no; moreover, you shouldn't use mocks for this at all. Your test should be as close to the real application as possible. You are testing a complete use case, not an isolated part of the application.
Black-box vs. white-box unit testing
If you use the black-box approach, what you are really doing is supplying input (all equivalence classes) and a state, and testing that you receive the expected output. In this approach, using mocks is generally justified (you just mimic that they do the right thing; you don't want to test them), but calling Mockito.verify() is superfluous.
If you use the white-box approach, what you are really doing is testing the behaviour of your unit. In this approach, calling Mockito.verify() is essential; you should verify that your unit behaves as you expect.
Rules of thumb for grey-box testing
The problem with white-box testing is that it creates high coupling. One possible solution is to do grey-box testing rather than white-box testing: a sort of combination of black- and white-box testing. You are really testing the behaviour of your unit, as in white-box testing, but in general you make the test implementation-agnostic where possible. Where it is possible, you just make a check as in the black-box case and assert that the output is what you expect. So the essence of your question is when that is possible.
This is really hard. I don't have a good general rule, but I can give you two examples. In the case mentioned above with equals() vs equalsIgnoreCase(), you shouldn't call Mockito.verify(); just assert the output. If you can't do that, break your code down into smaller units until you can. On the other hand, suppose you have some @Service and you are writing a @WebService that is essentially a wrapper over your @Service - it delegates all calls to the @Service (and does some extra error handling). In this case, calling Mockito.verify() is essential: you shouldn't duplicate all the checks you did for the @Service; verifying that you call the @Service with the correct parameter list is sufficient.
I must say, that you are absolutely right from a classical approach's point of view:
If you first create (or change) the business logic of your application and then cover it with tests (a Test-Last approach), it will be very painful and dangerous to let the tests know anything about how your software works beyond checking inputs and outputs.
If you practice a Test-Driven approach, your tests are written first, changed first, and reflect the use cases of your software's functionality. The implementation depends on the tests. That sometimes means you want your software to be implemented in some particular way, e.g. to rely on another component's method or even to call it a particular number of times. That is where Mockito.verify() comes in handy!
It is important to remember that there are no universal tools. The type of software, its size, company goals, the market situation, team skills and many other things influence the decision on which approach to use in your particular case.
In most cases, when people don't like using Mockito.verify, it is because it is used to verify everything the tested unit is doing, which means you will need to adapt your test whenever anything in the unit changes.
But I don't think that is a problem. If you want to be able to change what a method does without needing to change its test, that basically means you want to write tests which don't test everything your method is doing, because you don't want them to test your changes. And that is the wrong way of thinking.
What really is a problem, is if you can modify what your method does and a unit test which is supposed to cover the functionality entirely doesn't fail. That would mean that whatever the intention of your change is, the result of your change isn't covered by the test.
Because of that, I prefer to mock as much as possible: also mock your data objects. When doing that you can not only use verify to check that the correct methods of other classes are called, but also that the data being passed is collected via the correct methods of those data objects. And to make it complete, you should test the order in which calls occur.
Example: if you modify a db entity object and then save it using a repository, it is not enough to verify that the setters of the object are called with the correct data and that the save method of the repository is called. If they are called in the wrong order, your method still doesn't do what it should do.
So, I don't use Mockito.verify but create an inOrder object with all the mocks and use inOrder.verify instead. And to make it complete, you should also call Mockito.verifyNoMoreInteractions at the end and pass it all the mocks. Otherwise someone can add new functionality/behaviour without testing it, which would mean that after a while your coverage statistics can be at 100% while you are piling up code that is neither asserted nor verified.
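The order-sensitive case can be sketched without a mocking framework by giving the collaborators one shared call log; with Mockito you would build the same check from inOrder(entity, repository).verify(...). All class names below are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// One shared log across collaborators makes call order observable,
// so the test can insist that the setter runs before the save.
class CallLog {
    final List<String> calls = new ArrayList<>();
}

class Entity {
    private final CallLog log;
    Entity(CallLog log) { this.log = log; }
    void setName(String name) { log.calls.add("setName:" + name); }
}

class Repository {
    private final CallLog log;
    Repository(CallLog log) { this.log = log; }
    void save(Entity e) { log.calls.add("save"); }
}

class UpdateService {
    // Correct only if the entity is modified before it is saved.
    void rename(Entity e, Repository repo, String name) {
        e.setName(name);
        repo.save(e);
    }
}
```

Asserting on the full log also gives you the verifyNoMoreInteractions property for free: any extra, untested call would change the recorded list and fail the test.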
As some people said
Sometimes you don't have a direct output on which you can assert
Sometimes you just need to confirm that your tested method is sending the correct indirect outputs to its collaborators (which you are mocking).
Regarding your concern about breaking your tests when refactoring: that is somewhat expected when using mocks/stubs/spies. I mean by definition, not with regard to a specific implementation such as Mockito.
But you could think of it this way: if you need to do a refactoring that creates major changes in the way your method works, it is a good idea to do it in a TDD approach, meaning you can change your test first to define the new behaviour (which will fail the test), then make the changes and get the test passing again.

Sanity Check - Significant increase in the number of objects when using JUnit

I am using Junit for the first time in a project and I'm fascinated by the way it is forcing me to restructure my code. One thing I've noticed is that the number of objects I've created in order to be able to test chunks of code is significantly increasing. Is this typical?
Thanks,
Elliott
Yes, this is normal.
In general, the smaller/more focused your classes and methods are, the easier they are to understand and test. This might produce more files and more actual lines of code, but that is because you are adding abstractions that give your code a better/cleaner design.
You may want to read about the Single Responsibility Principle. Uncle Bob also has some re-factoring examples in his book called Clean Code where he touches on exactly these points.
One more thing when you are unit testing: Dependency Injection is one of the single most important things that will save you a lot of headaches when it comes to structuring your code. (And just for clarification, DI will not necessarily cause you to have more classes, but it will help decouple your classes from each other.)
Yes, I think this is fairly typical. When I start introducing testing code into a legacy codebase, I find myself creating smaller utility classes and pojos and testing those. The original class just becomes a wrapper to call these smaller classes.
One example would be when you have a method which does a calculation, updates an object and then saves to a database.
public void calculateAndUpdate(Thing t) {
    calculate(t); // quite a complex calculation with multiple results & updates t
    dao.save(t);
}
You could create a calculation object which is returned by the calculate method. The method then updates the Thing object and saves it.
public void calculateAndUpdate(Thing t) {
    Calculation calculation = new Calculator().calculate(t); // does not update t at all
    update(t, calculation); // updates t with the result of calculation
    dao.save(t); // saves t to the database
}
So I've introduced two new objects, a Calculator and a Calculation. This allows me to test the result of the calculation without needing a database available. I can also unit test the update method. It's more functional as well, which I like :-)
If I continued to test via the original method, I would have to unit test the calculation, the update and the save as one item, which isn't nice.
For me, the second is a better code design: better separation of concerns, smaller classes, more easily tested. The number of small classes goes up, but the overall complexity goes down.
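A minimal sketch of the payoff, with invented fields (base, result) on the answer's Thing and Calculation classes: the calculation becomes a plain value that a test can assert on with no DAO in sight.

```java
// Sketch: Calculation as a plain value returned by Calculator, so it can be
// asserted on directly. The fields here are invented for illustration.
class Thing {
    final int base;
    Thing(int base) { this.base = base; }
}

class Calculation {
    final int result;
    Calculation(int result) { this.result = result; }
}

class Calculator {
    // Stand-in for the "quite complex calculation" from the answer.
    Calculation calculate(Thing t) {
        return new Calculation(t.base * 2);
    }
}
```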
It depends on what kind of objects you are referring to. Typically, you should be fine using a mocking framework like EasyMock or Mockito, in which case the number of additional classes required solely for testing purposes should be pretty small. If you are referring to additional objects in your main source code, maybe unit testing is helping you refactor your code to make it more readable and reusable, which is a good idea anyway IMHO :-)

need suggestions on getting started with Junit

I have not used JUnit before and have not done automated unit testing.
Scenario:
We are changing our backend DAOs from SQL Server to Oracle, so on the DB side all the stored procedures were converted to Oracle. Now, when our code calls these new Oracle stored procedures, we want to make sure that the data returned is the same as that from the SQL Server stored procedures.
So for example I have the following method in a DAO:
// this is the old method; gets data from SQL Server
public IdentifierBean getHeadIdentifiers_old(String head) {
    HashMap parmMap = new HashMap();
    parmMap.put("head", head);
    List result = getSqlMapClientTemplate().queryForList("Income.getIdentifiers", parmMap);
    return (IdentifierBean) result.get(0);
}

// this is the new method; gets data from Oracle
public IdentifierBean getHeadIdentifiers(String head) {
    HashMap parmMap = new HashMap();
    parmMap.put("head", head);
    getSqlMapClientTemplate().queryForObject("Income.getIdentifiers", parmMap);
    return (IdentifierBean) ((List) parmMap.get("Result0")).get(0);
}
Now I want to write a JUnit test method that first calls getHeadIdentifiers_old and then getHeadIdentifiers, and compares the objects returned (I will have to override equals and hashCode in IdentifierBean). The test passes only when both objects are the same.
In the test method I will have to provide a parameter (head in this case) for the two methods; this will be done manually for now. Yes, from the front end the parameters could be different, and the SPs might not return the exact same results for those parameters. But I think having these test cases will give us some reassurance that they return the same data...
My questions are:
Is this a good approach?
I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
(might be a n00b question) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
When the tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
Okay, let's see what can be done...
Is this a good approach?
Not really, since instead of having one obsolete code path with somewhat known functionality, you now have two code paths with unequal and unpredictable functionality. Usually one would go with creating thorough unit tests for the legacy code first and then refactoring the original method, to avoid incredibly large amounts of rework - what if some part of the jungle of code forming the huge application keeps calling the old method while other parts call the new one?
However, working with legacy code is never optimal, so what you're thinking may be the best solution.
I will have multiple DAOs. Do I write the test methods inside the DAO itself or should I have a separate JUnit test class for each DAO?
Assuming you've gone properly OO with your program structure, where each class does one thing and one thing only: yes, you should make another class containing the test cases for that individual class. What you're looking for here is mock objects (search for them on SO and Google in general; lots of info available), which help you decouple the class under test from other classes. Interestingly, a high number of mocks in unit tests usually means that your class could use some heavy refactoring.
(might be n00b question) will all the test cases be ran automatically? I do not want to go to the front end click bunch of stuff so that call to the DAO gets triggered.
All IDEs allow you to run all the JUnit tests at once; for example, in Eclipse just click the source folder/top package and choose Run -> JUnit Test. Also, when running an individual class, all the unit tests contained within it are run in the proper JUnit flow (setUp() -> testX() -> tearDown()).
when tests are ran will I find out which methods failed? and for the ones failed will it tell me the test method that failed?
Yes. Part of Test-Driven Development is the mantra Red-Green-Refactor, which refers to the colored bar shown by IDEs for unit tests. Basically, if any test in the suite fails, the bar is red; if all pass, it's green. Additionally, for JUnit there is also blue, shown for individual tests that fail with assertion errors.
lastly, any good starting points? any tutorials, articles that show working with Junit
I'm quite sure there are going to be multiple of these in the answers soon; just hang on :)
You'll write a test class.
public class OracleMatchesSqlServer extends TestCase {
    public void testHeadIdentifiersShouldBeEqual() throws Exception {
        String head = "whatever your head should be";
        IdentifierBean originalBean = YourClass.getHeadIdentifiers_old(head);
        IdentifierBean oracleBean = YourClass.getHeadIdentifiers(head);
        assertEquals(originalBean, oracleBean);
    }
}
You might find you need to parameterize this on head; that's straightforward.
Update: It looks like this:
public class OracleMatchesSqlServer extends TestCase {
    public void testHeadIdentifiersShouldBeEqual() throws Exception {
        compareIdentifiersWithHead("head1");
        compareIdentifiersWithHead("head2");
        compareIdentifiersWithHead("etc");
    }

    private static void compareIdentifiersWithHead(String head) {
        IdentifierBean originalBean = YourClass.getHeadIdentifiers_old(head);
        IdentifierBean oracleBean = YourClass.getHeadIdentifiers(head);
        assertEquals(originalBean, oracleBean);
    }
}
* Is this a good approach?
Sure.
* I will have multiple DAOs. Do I write the test methods inside the DAO itself or should I have a separate JUnit test class for each DAO?
Try it with a separate test class for each DAO; if that gets too tedious, try it the other way and see what you like best. It's probably more helpful to have the fine-grainedness of separate test classes, but your mileage may vary.
* (might be a n00b question) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
Depending on your environment, there will be ways to run all the tests automatically.
* When the tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
Yes and yes.
* Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
I really like Dave Astels' book.
I really like Dave Astels' book.
Another useful introduction to writing and maintaining large unit test suites is this book (which is partially available online):
XUnit Test Patterns, Refactoring Test Code by Gerard Meszaros
The book is organized in 3 major parts. Part I consists of a series of introductory narratives that describe some aspect of test automation using xUnit. Part II describes a number of "test smells" that are symptoms of problems with how we are automating our tests. Part III contains descriptions of the patterns.
Here's a quick yet fairly thorough intro to JUnit.
