I have not used JUnit before and have not done automated unit testing.
Scenario:
We are changing our backend DAOs from SQL Server to Oracle, so on the DB side all the stored procedures were converted to Oracle. Now, when our code calls these new Oracle stored procedures, we want to make sure that the data returned is the same as what the SQL Server stored procedures returned.
So for example I have the following method in a DAO:
// Old method: gets data from SQL Server.
public IdentifierBean getHeadIdentifiers_old(String head) {
    HashMap parmMap = new HashMap();
    parmMap.put("head", head);
    List result = getSqlMapClientTemplate().queryForList("Income.getIdentifiers", parmMap);
    return (IdentifierBean) result.get(0);
}

// New method: gets data from Oracle; the result comes back in the parameter map.
public IdentifierBean getHeadIdentifiers(String head) {
    HashMap parmMap = new HashMap();
    parmMap.put("head", head);
    getSqlMapClientTemplate().queryForObject("Income.getIdentifiers", parmMap);
    return (IdentifierBean) ((List) parmMap.get("Result0")).get(0);
}
Now I want to write a JUnit test method that first calls getHeadIdentifiers_old and then getHeadIdentifiers, and compares the objects returned (I will have to override equals and hashCode in IdentifierBean). The test should pass only when both objects are equal.
In the test method I will have to provide a parameter (head in this case) for the two methods; this will be done manually for now. Yes, from the front end the parameters could be different, and the SPs might not return exactly the same results for those parameters. But I think having these test cases will give us some relief that they return the same data...
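For reference, a minimal sketch of the equals/hashCode override mentioned above might look like this (the fields on IdentifierBean are hypothetical; the real bean would compare whatever the stored procedures return):

import java.util.Objects;

public class IdentifierBean {

    private String head;
    private String identifier;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof IdentifierBean)) return false;
        IdentifierBean other = (IdentifierBean) o;
        // Compare every field that the stored procedure populates.
        return Objects.equals(head, other.head)
                && Objects.equals(identifier, other.identifier);
    }

    @Override
    public int hashCode() {
        return Objects.hash(head, identifier);
    }
}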
My questions are:
Is this a good approach?
I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
(might be a n00b question) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
When tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
Okay, let's see what can be done...
Is this a good approach?
Not really. Instead of having one obsolete code path with somewhat known functionality, you now have two code paths with unequal and unpredictable functionality. Usually one would first create thorough unit tests for the legacy code and then refactor the original method, to avoid an incredibly large amount of refactoring later: what if some part of the jungle of code forming this huge application keeps calling the old method while other parts call the new one?
However, working with legacy code is never optimal, so what you're thinking of may be the best solution.
I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
Assuming you've gone properly OO with your program structure, where each class does one thing and one thing only, then yes: you should make another class containing the test cases for that individual class. What you're looking for here is mock objects (search for them on SO and Google in general; lots of info is available), which help you decouple the class under test from other classes. Interestingly, a high number of mocks in unit tests usually means that your class could use some heavy refactoring.
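As a rough sketch of what a mock-based test looks like (using Mockito; the Dependency interface and both classes here are invented for illustration):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class MockingExampleTest {

    // Hypothetical collaborator the class under test depends on.
    interface Dependency {
        String fetch(String key);
    }

    // Hypothetical class under test, decoupled from the real collaborator via the interface.
    static class UnderTest {
        private final Dependency dep;
        UnderTest(Dependency dep) { this.dep = dep; }
        String describe(String key) { return "value=" + dep.fetch(key); }
    }

    @Test
    public void usesMockInsteadOfRealDependency() {
        Dependency dep = mock(Dependency.class);
        when(dep.fetch("head")).thenReturn("42");
        assertEquals("value=42", new UnderTest(dep).describe("head"));
    }
}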
(might be a n00b question) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
All IDEs allow you to run all the JUnit tests at once; for example, in Eclipse just click the source folder/top package and choose Run -> JUnit Test. Also, when running an individual class, all the unit tests contained within it are run in the proper JUnit flow (setUp() -> testX() -> tearDown()).
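To illustrate that flow, here is a minimal JUnit 3 style sketch (matching the TestCase usage further below): setUp() runs before each test method and tearDown() after it, so every test starts from a clean fixture.

import junit.framework.TestCase;

public class LifecycleTest extends TestCase {

    private StringBuilder fixture;

    @Override
    protected void setUp() {
        // Runs before every testX() method: each test gets a fresh fixture.
        fixture = new StringBuilder("fresh");
    }

    public void testStartsFresh() {
        assertEquals("fresh", fixture.toString());
    }

    public void testModificationDoesNotLeakIntoOtherTests() {
        fixture.append("!");
        assertEquals("fresh!", fixture.toString());
    }

    @Override
    protected void tearDown() {
        // Runs after every testX() method: release resources here.
        fixture = null;
    }
}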
When tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
Yes. Part of Test-Driven Development is the mantra Red-Green-Refactor, which refers to the colored bar IDEs show for unit tests: if any test in the suite fails, the bar is red; if all pass, it's green. Additionally, JUnit shows blue for individual tests that fail with assertion errors.
Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
I'm quite sure there will be multiple of these in the answers soon; just hang on :)
You'll write a test class.
public class OracleMatchesSqlServer extends TestCase {

    public void testHeadIdentifiersShouldBeEqual() throws Exception {
        String head = "whatever your head should be";
        IdentifierBean originalBean = YourClass.getHeadIdentifiers_old(head);
        IdentifierBean oracleBean = YourClass.getHeadIdentifiers(head);
        assertEquals(originalBean, oracleBean);
    }
}
You might find you need to parameterize this on head; that's straightforward.
Update: It looks like this:
public class OracleMatchesSqlServer extends TestCase {

    public void testHeadIdentifiersShouldBeEqual() throws Exception {
        compareIdentifiersWithHead("head1");
        compareIdentifiersWithHead("head2");
        compareIdentifiersWithHead("etc");
    }

    private static void compareIdentifiersWithHead(String head) {
        IdentifierBean originalBean = YourClass.getHeadIdentifiers_old(head);
        IdentifierBean oracleBean = YourClass.getHeadIdentifiers(head);
        assertEquals(originalBean, oracleBean);
    }
}
* Is this a good approach?
Sure.
* I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
Try it with a separate test class for each DAO; if that gets too tedious, try it the other way and see what you like best. It's probably more helpful to have the fine-grainedness of separate test classes, but your mileage may vary.
* (might be a n00b question) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
Depending on your environment, there will be ways to run all the tests automatically.
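For example, one way (a sketch; the commented-out class name is hypothetical) is a JUnit 3 suite that bundles the test classes so they all run in one go; build tools such as Ant or Maven will likewise run every test as part of the build:

import junit.framework.Test;
import junit.framework.TestSuite;

public class AllDaoTests {

    // Run this class to execute every DAO comparison test at once.
    public static Test suite() {
        TestSuite suite = new TestSuite("All DAO tests");
        suite.addTestSuite(OracleMatchesSqlServer.class);
        // suite.addTestSuite(OtherDaoTest.class); // one test class per DAO
        return suite;
    }
}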
* When tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
Yes and yes.
* Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
I really like Dave Astels' book.
Another useful introduction to writing and maintaining large unit test suites is this book (which is partially available online):
xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros
The book is organized in three major parts. Part I consists of a series of introductory narratives describing aspects of test automation with xUnit. Part II describes a number of "test smells" that are symptoms of problems with how we automate our tests. Part III contains descriptions of the patterns.
Here's a quick yet fairly thorough intro to JUnit.
Related
I am currently developing a really big app. We are now facing the problem of Unit Testing everything in it.
I am trying to record all the interactions in methods and classes during execution time to have inputs and outputs to compare.
Yeah, I know it is not the proper way of doing unit testing, but we need to do it quickly. We are already working with Mockito/PowerMockito/JUnit.
I already tried AOP and AspectJ, but the problem is having to create new files for each class we have.
I was thinking of a way to intercept the execution flow layer, or somehow just dynamically write to a JSON file the inputs + dependency values and outputs of the methods and classes invoked.
Any clues?
We are now facing the problem of Unit Testing everything in it.
Unit tests do not test code; unit tests verify publicly observable behavior that has justification in your requirements.
Publicly observable behavior does not necessarily mean public methods; it means behavior observable from outside the code under test: return values and communication with dependencies.
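A sketch of testing the second kind, communication with dependencies, using the Mockito you already have (all types here are invented for illustration):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class AuditingServiceTest {

    // Hypothetical outgoing dependency.
    interface AuditLog {
        void record(String event);
    }

    // Hypothetical code under test: its observable behavior is the call it makes.
    static class AuditingService {
        private final AuditLog log;
        AuditingService(AuditLog log) { this.log = log; }
        void deleteUser(int id) { log.record("deleted user " + id); }
    }

    @Test
    public void deletingAUserIsRecordedInTheAuditLog() {
        AuditLog log = mock(AuditLog.class);
        new AuditingService(log).deleteUser(7);
        // The assertion is about the outgoing call, not internal state.
        verify(log).record("deleted user 7");
    }
}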
In my Vaadin GUI application, there are many methods which look like the ones below.
@Override
protected void loadLayout() {
    CssLayout statusLayout = new CssLayout();
    statusLayout.addComponent(connectedTextLabel);
    statusLayout.addComponent(connectedCountLabel);
    statusLayout.addComponent(notConnectedTextLabel);
    statusLayout.addComponent(notConnectedCountLabel);

    connectionsTable.getCustomHeaderLayout().addComponent(statusLayout);
    connectionsTable.getCustomHeaderLayout().addComponent(commandLayout);
    connectionsTable.getCustomHeaderLayout().addComponent(historyViewCheckbox);

    bodySplitter.addComponent(connectionsTable);
    bodySplitter.addComponent(connectionHistoryTable);
    bodySplitter.setSplitPosition(75, Sizeable.Unit.PERCENTAGE);
    bodySplitter.setSizeFull();

    bodyLayout.addComponent(bodySplitter);

    if (connectionDef.getConnectionHistoryDef() == null) {
        historyViewCheckbox.setVisible(false);
    }

    if (connectionDef.getConnectionStatusField() == null
            || connectionDef.getConnectedStatusValue() == null
            || connectionDef.getConnectedStatusValue().isEmpty()) {
        connectedTextLabel.setVisible(false);
        connectedCountLabel.setVisible(false);
        notConnectedTextLabel.setVisible(false);
        notConnectedCountLabel.setVisible(false);
    }
}

protected void setStyleNamesAndControlIds() {
    mainLayout.setId("mainLayout");
    header.setId("header");
    footer.setId("footer");
    propertyEditorLayout.setId("propertyEditorLayout");
    propertyEditor.setId("propertyEditor");

    mainLayout.setStyleName("mainLayout");
    propertyEditorLayout.setStyleName("ui_action_edit");
    header.setStyleName("TopPane");
    footer.setStyleName("footer");
}
These methods are used for setting up the layout of GUIs. They do not produce a single distinct output, and almost every line in them does a separate job that is largely unrelated to the other lines.
Usually, when unit testing a method, I check the return value of the method or validate calls on a limited number of external objects, such as database connections.
But for methods like the above, there is no such single output. If I wrote unit tests for such methods, my test code would check for each method call happening on every line of the method, and in the end it would look almost like the method itself.
If someone altered the code in any way, the test would break, and they would have to update the test to match the change. But there is no assurance that the change didn't actually break anything, since the test doesn't check the actual UI drawn in the browser.
For example, if someone changed the style name of a control, they would have to update the test code with the new style name and the test would pass. But for things to actually work without any issue, they would also have to change the relevant SCSS style files, and the test contributed nothing to detecting that issue. The same applies to the layout setup code.
Is there any advantage to writing unit tests like the above, other than keeping the code coverage rating at a higher level? To me they feel useless; writing a test that compares the decompiled bytecode of the method against the original decompiled bytecode kept as a string in the test looks much better than these kinds of tests.
Is there any advantage to writing unit tests like the above, other than keeping the code coverage rating at a higher level?
Yes, if you take a sensible approach. It might not make sense, as you say, to test that a control has a particular style, so focus your tests on the parts of your code that are likely to break. If there is any conditional logic that goes into producing your UI, test that logic. The test will then protect your code from future changes that could break that logic.
As for your comment about testing methods that don't return a value, you can address that in several ways.
It's your code, so you can restructure it to be more testable. Think about breaking it down into smaller methods, and isolate your logic into individual methods that can be called from a test (see the sketch below).
Indirect verification: rather than focusing on return values, focus on the effect your method has on other objects in the system.
Finally, consider whether unit testing the UI is right for you and your organization. UIs are often difficult to unit test (as you have pointed out). Many companies write functional tests for their UIs: tests that drive the UI of the actual product. This is very different from unit tests, which do not require the full product and are targeted at very small units of functionality.
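As a sketch of that restructuring (assuming the ConnectionDef type from your code; the helper and test names are made up), the visibility rule from loadLayout() can be extracted into a pure method and tested without constructing any Vaadin components:

import static org.junit.Assert.assertFalse;

import org.junit.Test;

public class LayoutLogicTest {

    // Extracted from loadLayout(): the decision, separated from the UI wiring.
    static boolean showConnectedStatus(ConnectionDef def) {
        return def.getConnectionStatusField() != null
                && def.getConnectedStatusValue() != null
                && !def.getConnectedStatusValue().isEmpty();
    }

    @Test
    public void statusLabelsAreHiddenWhenNoStatusFieldIsConfigured() {
        ConnectionDef def = new ConnectionDef(); // hypothetical: nothing configured
        assertFalse(showConnectedStatus(def));
    }
}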
Here's one simple example you could look at to see how to fly over your application and try what is needed. This is a Vaadin 8, CDI & WildFly Swarm example, and in no way the only way to test the UIs of a Vaadin application:
https://github.com/wildfly-swarm/wildfly-swarm-examples/blob/master/vaadin/src/it/java/org/wildfly/swarm/it/vaadin/VaadinApplicationIT.java
Folks, it is always said in TDD that we should write JUnit tests even before we write the actual code.
Somehow I am not able to understand this in the right spirit. I hope what it means is that you just write empty methods with the right signatures, and your test case is expected to fail initially.
Say in the TDD approach I need to get the list of customers.
As per my understanding, I will write an empty method like the one below:
public List<CustomerData> getCustomers(int custId) {
    return null;
}
Now I will write a JUnit test case where I will check the size as 10 (what I am actually expecting). Is this right?
Basically, my question is: in TDD, how can we write a JUnit test case before writing the actual code?
I hope what it means is that you just write empty methods with the right signatures
Yes. And with most modern IDEs, if you write a method name which does not exist in your test, they will create a stub for you.
Say in the TDD approach I need to get the list of customers. What's the right way to proceed?
Your example is not quite there. You want to test for the expected list of customers, but the method just returns null, so the test will obviously fail at first.
Then modify the method so that the test succeeds.
Then create a test method for adding a customer. Test fails. Fix it. Rinse. Repeat.
So, basically: with TDD, you start by writing tests that you KNOW will fail, and then fix your code so that they pass.
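For the getCustomers example above, the first red test might look like this (CustomerDao is a hypothetical name for the class holding the method):

import static org.junit.Assert.assertEquals;

import java.util.List;

import org.junit.Test;

public class CustomerDaoTest {

    @Test
    public void getCustomersReturnsTenCustomers() {
        CustomerDao dao = new CustomerDao(); // class under test; name is made up
        List<CustomerData> customers = dao.getCustomers(42);
        // Red first (the stub returns null), green once implemented.
        assertEquals(10, customers.size());
    }
}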
Recommended read.
Often you'll write the test alongside the skeleton of the code. Initially you can write a non-functional implementation (e.g. throw an UnsupportedOperationException), and that will trigger a test failure. Then you'd flesh out the implementation until your test finally passes.
You need to be pragmatic about this. Obviously you can't compile your test until at least your unit under test compiles, so you have to do a minimal amount of implementation work alongside your test.
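Applied to the earlier getCustomers example, such a non-functional implementation might be a sketch like:

public List<CustomerData> getCustomers(int custId) {
    // Compiles, so the test can run; every call fails until real logic exists.
    throw new UnsupportedOperationException("getCustomers not implemented yet");
}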
Check out this recent Dr. Dobb's editorial, which discusses exactly this point and the role of pragmatism around it, especially by the mavens of this practice (Kent Beck et al.):
A key principle of TDD is that you write no code without first writing a failing unit test. But in fact, if you talk to the principal advocates of TDD (such as Kent Beck, who popularized the technique, and Bob Martin, who has taught it to thousands of developers), you find that both of them write some code without writing tests first. They do not (I should emphasize this) view these moments as lapses of faith, but rather as the necessary pragmatism of the intelligent developer.
That's partly right.
Using an IDE (Eclipse, IntelliJ) you can create a test, invoke a method that does not exist yet in that test, and then use a refactoring tool to create the method with the proper signature.
That's a trick that makes working with TDD easier and more fun.
Regarding "Now I will write a JUnit test case where I will check the size as 0. Is this right?": you should write a test that fails, and then provide the proper implementation.
I think you should write the test first and think about the signature of the function while writing the test.
It's the same as writing the signature and then writing the test, but inventing the signature of the function while writing the test is helpful, since you will have all the information about the responsibility of the function and will be able to come up with the proper signature.
I'm familiar with the basic principles of TDD, these being:
Write tests, these will fail because of no implementation
Write basic implementation to make tests pass
Refactor code
However, I'm a little confused as to where interfaces and implementations fit in. I'm creating a Spring web application in my spare time, and rather than going in guns blazing, I'd like to understand how I can test interfaces/implementations a little better. Take this simple example code I've created:
public class RunMe {

    public static void main(String[] args) {
        // Using a dummy service for now, but would have a real implementation later (fetch from DB etc.)
        UserService userService = new DummyUserService();
        System.out.println(userService.getUserById(1));
    }
}

interface UserService {
    String getUserById(Integer id);
}

class DummyUserService implements UserService {

    @Override
    public String getUserById(Integer id) {
        return "James";
    }
}
I've created the UserService interface; ultimately there will be a real implementation of this that queries a database, but in order to get the application off the ground I've substituted a DummyUserService implementation that just returns some static data for now.
Question: How can I implement a testing strategy for the above?
I could create a test class called DummyUserServiceTest and test that when I call getUserById() it returns "James". That seems pretty simple, if not a waste of time(?).
Subsequently, I could also create a test class for the real UserService implementation that would test that getUserById() returns a user's name from the database. This is the part that confuses me slightly: in doing so, does this not essentially overstep the boundary of a unit test and become more of an integration test (with the hit on the DB)?
Question (improved, a little): When using interfaces with dummy/stubbed and real implementations, which parts should be unit tested, and which parts can safely be left untested?
I spent a few hours Googling this topic last night and mostly found either tutorials on what TDD is or examples of how to use JUnit, but nothing in the realm of advice on what should actually be tested. It is entirely possible, though, that I didn't search hard enough or wasn't looking for the right thing...
Don't test the dummy implementations: they won't be used in production, so it makes no real sense to test them.
If the real UserService implementation does nothing other than go to a database and get the user name by its ID, then the test should check that it does that and does it correctly. Call it an integration test if you want, but it's nevertheless a test that should be written and automated.
The usual strategy is to populate the database with minimal test data in the @Before-annotated method of the test, and to have your test method check that, for an ID which exists in the database, the corresponding user name is returned.
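A minimal sketch of that strategy, assuming an in-memory H2 database on the test classpath and a hypothetical JDBC-backed RealUserService:

import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.junit.Before;
import org.junit.Test;

public class RealUserServiceTest {

    private static final String URL = "jdbc:h2:mem:test;DB_CLOSE_DELAY=-1";

    @Before
    public void setUp() throws Exception {
        // Populate the database with minimal, known test data before each test.
        try (Connection c = DriverManager.getConnection(URL);
             Statement s = c.createStatement()) {
            s.execute("DROP TABLE IF EXISTS users");
            s.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50))");
            s.execute("INSERT INTO users VALUES (1, 'James')");
        }
    }

    @Test
    public void getUserByIdReturnsTheMatchingName() throws Exception {
        UserService service = new RealUserService(URL); // hypothetical constructor
        assertEquals("James", service.getUserById(1));
    }
}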
I would recommend you read this book first: Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce. It answers your question and many others related to TDD.
In your particular case, you should make your RealUserService configurable with a DB adapter which makes the real DB queries. The service itself does the servicing, not the data persistence. Read the book; it will help a lot :)
JB's answer is a good one; I thought I'd throw out another technique I've used.
When developing the original test, don't bother stubbing out the UserService in the first place. In fact, go ahead and write the real thing, proceeding by Kent Beck's three rules:
1) Make it work.
2) Make it right.
3) Make it fast.
Your code will have tests that verify that find-by-id works. As JB stated, your tests will be considered integration tests at this point. Once they are passing, we have successfully achieved step 1. Now look at the design. Is it right? Tweak any design smells and check step 2 off your list.
For step 3, we need to make this test fast. We all know that integration tests are slow and error-prone, with all of the transaction management and database setup. Once we know the code works, I typically don't bother with the integration tests. This is the time to introduce your dummy service, effectively turning your integration test into a unit test. Now that it doesn't touch the database in any way, we can check step 3 off the list, because this test is now fast. A sketch of that swap follows.
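With the question's own classes, the swapped-in unit test might look like this (a sketch):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class UserServiceTest {

    @Test
    public void getUserByIdReturnsName() {
        // The in-memory dummy from the question replaces the database-backed
        // implementation, so this test is fast and touches no database.
        UserService userService = new DummyUserService();
        assertEquals("James", userService.getUserById(1));
    }
}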
So, what are the problems with this approach? Well, many will say that I still need a test for the database-backed UserService. I typically don't keep integration tests lying around in my projects: my opinion is that these types of tests are slow, brittle, and don't catch enough logic errors in most projects to pay for themselves.
Hope that helps!
Brandon
I have a question about JUnit testing.
Our JUnit suite is testing various functions that we wrote that interact with our memory system.
The way our system was designed requires it to be static, and it is therefore initialized prior to the running of the tests.
The problem we are having is that when subsequent tests are run, they are affected by the tests prior to them, so it is possible (and likely) that we are getting false positives or inaccurate failures.
Is there a way to maintain the testing order of our JUnit tests, but have each test re-initialize the entire system, as if it were being tested from scratch?
The only option we can think of is to write a method that does this and call it at the end of each test, but as there are lots and lots of things that need to be reset this way, I am hoping there is a simpler way to do it.
I've seen problems like this with tests many times, where they depend on each other (sometimes deliberately!).
Firstly, you need a setUp method:
@Before
public void setUp() {
    // Clear, reset, etc. all your static data here.
    // (No super.setUp() call: with the @Before annotation there is no
    // superclass setUp unless you extend the old TestCase, which you
    // shouldn't mix with annotations.)
}
This is automatically run by JUnit before each test and will reset the environment. You can add an @After method as well, but @Before is better for ensuring a clean starting point.
The order of your tests is usually the order in which they appear in the test class, but this should never be assumed, and it's a really bad idea to base code on it.
Go back to the documentation if you need more information.
The approach I took to this kind of problem was to do a partial reinitialization before each test. Each test knows the preconditions it requires, and the setup ensures that they are true. Not sure if this will be relevant for you. Relying on order often ends up being a continuing PITA; being able to run tests by themselves is better.
Oh yeah: there's one "test", run at the beginning of the suite, that's responsible for static initialization.
You might want to look at TestNG, which supports test dependencies for this kind of functional testing (JUnit is a unit testing framework):
@Test
public void f1() {}

@Test(dependsOnMethods = "f1")
public void f2() {}