I have a small CRUD application that I would like to create integration tests for. I've seen the recommendation that "tests depending on other tests" is a no-go. But how can I keep the code maintainable while at the same time not reusing data from other tests?
What I mean is easiest to show with some pseudocode:
TestCreateUser {
    make POST to API creating a user
    verify that a 200 is received back
}
TestReadUser {
    GET the user from the previous test
    verify it's the same user
}
TestUpdateUser {
    PATCH the user from the previous test
    verify the user has the new data
}
This would be bad since all the tests depend on the first one. So what are the alternatives? I guess I could use a beforeEach:
@BeforeEach
public void initEach() {
    make POST to API creating a user
    verify that a 200 is received back
}
And then just skip the create-user test. But this might create unnecessary calls if I, for example, have a test like this:
TestCreateUserWithSpecialData {
    make POST to API creating a user with additional data
    verify that a 200 is received back
    verify the additional data is correct
}
Then the beforeEach would just create a user that the test does not need. What's a good solution to this? Should I split the tests up into smaller classes and more files? Or is there a better solution? I suppose I could add if statements to the beforeEach, but that feels like a hack.
You could use @BeforeAll to create some test data (once) and then have individual tests operate on it.
Or, for a test that’s doing something destructive like “delete”, you could create the user within the test itself.
(Purists might complain that this means the "delete" test will fail if the problem is actually with the "create" operation, but I don't consider that a big problem: if something is sufficiently messed up in my testing environment that it can't even create some test data, the exact number of tests that fail is not very interesting to me.)
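A minimal sketch of that layout, assuming JUnit 5 and a hypothetical UserClient helper (and User type) that wrap the HTTP calls; all names here are illustrative, not from the original post:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

class UserApiTest {

    static User sharedUser; // read-only fixture shared by the non-destructive tests

    @BeforeAll
    static void createSharedUser() {
        // Created once; the read and update tests operate on this user.
        sharedUser = UserClient.create("shared@example.com");
    }

    @Test
    void readUser() {
        assertEquals(sharedUser.getId(), UserClient.get(sharedUser.getId()).getId());
    }

    @Test
    void deleteUser() {
        // The destructive test creates its own user so it cannot break the others.
        User victim = UserClient.create("victim@example.com");
        assertEquals(204, UserClient.delete(victim.getId()));
    }
}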
One way to do that is to do whatever setup you need using files containing database inserts. Spring Boot has an @Sql annotation you can put on a test method to say which SQL file to run before that test.
That way each test has its own dedicated setup and there is no dependence on another test working; it also means the test isn't dependent on Java setup code working.
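For example, a minimal sketch using Spring's org.springframework.test.context.jdbc.Sql; the script path and the /users endpoint are assumptions for illustration:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.test.context.jdbc.Sql;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class ReadUserTest {

    @Autowired
    private TestRestTemplate rest;

    @Test
    @Sql("/sql/insert-test-user.sql") // seeds the user this test reads; runs before this test only
    void readUserReturnsSeededUser() {
        ResponseEntity<String> response = rest.getForEntity("/users/1", String.class);
        assertEquals(HttpStatus.OK, response.getStatusCode());
    }
}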
I am thinking about how to design the unit tests for my Java-based repositories and am running into design problems.
Let's assume I have a Consumer table with data related to my consumer:
{ ID, Name, Phone }
And my ConsumerRepository extends BaseRepository, which extends the JPA repository and supports findByPhone, findByName and findAll queries, plus a save option.
I'm using an H2 in-memory DB and DBUnit for those tests, all configured and running, and was thinking about this:
When loading data into my in-memory DB, should I configure the data in ConsumerTestData.xml (DBUnit) and manually add the Consumer data for each test, e.g.:
<dataset>
    <CONSUMER CONSUMER_ID="1" FIRST_NAME="Elvis" LAST_NAME="Presley" PHONE="+972123456789" EMAIL="elvis@isep.com" CREATION_DATE="2017-08-29"/>
    <CONSUMER CONSUMER_ID="2" FIRST_NAME="Bob" LAST_NAME="Dylan" PHONE="+972123456780" EMAIL="bob@isep.com" CREATION_DATE="2017-08-29"/>
    <CONSUMER CONSUMER_ID="3" FIRST_NAME="Lady" LAST_NAME="Gaga" PHONE="+972123456781" EMAIL="gaga@isep.com" CREATION_DATE="2017-08-29"/>
</dataset>
Or should I automate it, e.g.:
@Test
public void findByPhone() {
    ConsumerEntity consumerEntity = ConsumerUtil.createRandomConsumer();
    ConsumerEntity savedConsumerEntity = consumerRepository.save(consumerEntity);
    assertThat(consumerRepository.findByPhone(savedConsumerEntity.getPhone())).isEqualTo(savedConsumerEntity);
}
Here, createRandomConsumer generates random data.
Pros:
I think automating would be much more generic and handy: if ConsumerEntity changes, or any other code changes later, I will not have to change my .xml file, just add things to the test-entity factory.
Cons:
Creating new objects and saving them to the in-memory DB might be more difficult if the DB schema contains constraints.
Should I use DBUnit at all? If I automate it, why should I use DBUnit? Is it better to just use JUnit (rolling back the data after each test and adding the data the test needs programmatically, as in the example above)?
If I choose to use DBUnit and add the data manually, what are the advantages of that? Why is it better than using plain JUnit with Spring?
Thanks!
You seem to be asking 2 questions: whether to use DBUnit and whether to use randomization.
As for DBUnit
It adds extra steps and extra maintenance costs. If you already have code to save entities (via XxxRepository), then there is no reason to introduce yet another tool.
This is true not only for DBUnit but for any tool that duplicates existing persistence logic.
Instead you can just create an object instance, fill in all the fields and save it with the repository. This makes refactoring much easier.
As for test randomization
I think your test looks very good. With randomization you can cover more cases with fewer tests, find tricky cases that you couldn't think of yourself, isolate your tests easily (e.g. generate a unique username instead of keeping track of usernames somewhere), etc.
As for the cons: good randomization (and good tests in general) requires a good command of OOP, so not everyone can use it easily when the project grows big. Also, tests start failing from time to time because they were written in haste and not every possibility was considered. To catch such cases you should run the tests locally many times (which people sometimes forget). Good news: IntelliJ can repeat tests N times for JUnit (for TestNG there is an annotation).
In general you should think more when you write randomized tests. But if written properly they provide better coverage and lower maintenance overhead. If you're interested in different randomization techniques, check this out.
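For illustration, a random-fixture helper like the createRandomConsumer named in the question might look something like this (the ConsumerEntity setters and field names are assumptions):

import java.util.Random;
import java.util.UUID;

public final class ConsumerUtil {

    private static final Random RANDOM = new Random();

    private ConsumerUtil() {}

    public static ConsumerEntity createRandomConsumer() {
        ConsumerEntity consumer = new ConsumerEntity();
        // Unique values keep each test isolated from the others.
        consumer.setFirstName("First-" + UUID.randomUUID());
        consumer.setLastName("Last-" + UUID.randomUUID());
        consumer.setPhone("+97212" + (1000000 + RANDOM.nextInt(9000000)));
        consumer.setEmail(UUID.randomUUID() + "@example.com");
        return consumer;
    }
}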
1) Testing with only random or runtime-generated fixtures is not fine.
It doesn't make tests reproducible, so it makes them harder to debug when they fail, and it doesn't make the tests document the code either.
Besides, fixtures with explicit data avoid the side effects that data generation may introduce.
Are you sure that the generated data respects the requirements?
Are you sure that your generation tool works as expected?
Have you tested it?
And so forth.
Finding required edge cases is fine, but inventing edge cases that are not required means your tests will change your requirements.
And you don't want that.
If you have identified all the specific cases and you want to generate some data because you deem that you have too many combinations (dozens of input cases, for example), then generating fixtures based on the requirements is of course nice.
Otherwise, don't do it, as it is just overhead.
2) DBUnit: it is a choice.
I used it before; now I have stopped. It has some strengths, but it is cumbersome and its maintenance/improvements are very light.
Recently I tried DbSetup from JBNizet (an SO member).
It is a rather fine API for inserting data into a database from Java code: simple and straightforward to use.
For example, to insert data into the DB, an Operation can be defined as:
Operation consumerInserts = sequenceOf(
    insertInto("CONSUMER")
        .columns("ID", "FIRST_NAME", "LAST_NAME")
        .values(1, "Elvis", "Presley")
        .values(2, "Lady", "Gaga")
        .values(3, "Bob", "Dylan")
        .build()
);
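To actually run that operation you launch a DbSetup against your DataSource; a minimal sketch of the typical usage (dataSource is assumed to be configured elsewhere):

import com.ninja_squad.dbsetup.DbSetup;
import com.ninja_squad.dbsetup.destination.DataSourceDestination;

// Executes the inserts defined above against the given DataSource,
// typically from a @Before/@BeforeEach method so each test starts clean.
DbSetup dbSetup = new DbSetup(new DataSourceDestination(dataSource), consumerInserts);
dbSetup.launch();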
3) So, nothing to add.
I have designed a JUnit test suite with the required values hard-coded in the code itself. Each time there is any change, I need to open the project and make the modifications. To provide input from an external file, I am using Excel sheets, which can be designed easily. The Excel file has some drop-down menu items that indicate which test cases need to be executed, and some text boxes the user has to fill in before running the test suite.
But Excel is not platform independent.
Is there any better way, universally accepted and platform independent, to provide inputs to a JUnit test suite?
Do I understand correctly that the main thing here is to find a way that makes it easy for a tester to key in test data?
It's not so much about writing a test case, right?
Well, that issue comes up in many different projects. A common one, for example, is to have users key in some basic values into a database table.
There are many ways to solve it. A lot of people use Excel, even more use MS Access forms, SharePoint or, if they're more familiar with web tools, they end up building web sites.
In the end, the approach and the tool you use depend on your and the testers' knowledge and on the number of interfaces you have to build and maintain. In my company we ended up with some configurable web sites, independent of any third-party software licence (which was a main requirement in our case).
The one tool to be very careful with is Excel. If you need only a few interfaces, let's say 10-20, Excel can still be handled. When it gets to be more than that, the maintenance of the Excel files will kill you, mainly because Excel keeps the programming and business logic for each interface separately: changing the business logic means changing all the Excel files separately. This kills you sooner or later.
One of the core concepts of test-driven development is that you run all test cases, all of the time, in an automated way. Having a user use Excel to choose test cases and enter data breaks this model.
You could read from a file to drive your test cases, but perhaps your tests need to be redefined to be data independent. Also all of them should run every time you run JUnit.
Just yesterday I used random data to perform a test. Here is an example:
private final Random random = new Random();

@Test
public void testIntGetter() {
    int value = random.nextInt();
    MyObj obj = new MyObj(value);
    assertEquals(value, obj.getMyInt());
}
While this is an overly simple example, it does test the functionality of the class while being data independent.
Once you decide to break the test-driven development/JUnit model, your question is not really applicable. It is fine to use a tool for other purposes, but then your specific question no longer applies.
It is best to have data reside in code; with some exceptions, testing is independent of the data, as my example shows. Most of those exceptions are edge cases, which should also reside in code. For example, a method that takes a String parameter should be tested against null, an empty string and a non-empty string.
If the value of a parameter reveals a defect in the code, the code should be fixed and that value should become a permanent member of the collection of test conditions.
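For instance, those String edge cases might be pinned down like this (a self-contained sketch; the normalize method is invented for illustration):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class NormalizeTest {

    // Hypothetical method under test: trims input and maps null to "".
    static String normalize(String s) {
        return s == null ? "" : s.trim();
    }

    @Test
    public void nullInput() {
        assertEquals("", normalize(null));
    }

    @Test
    public void emptyInput() {
        assertEquals("", normalize(""));
    }

    @Test
    public void nonEmptyInput() {
        assertEquals("abc", normalize("  abc  "));
    }
}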
I believe there is no universally accepted way to provide input to JUnit tests. Afaik, a unit test is often, or by definition, small (the smallest testable part), and its data is provided hard-coded as part of the test.
That said, I also use unit testing to conduct tests of larger numerical algorithms/models, for which I sometimes have to provide more complicated data. I provide this data via a spreadsheet too; I believe the spreadsheet is the natural GUI for this kind of tabular data.
I trigger my Java code directly from the spreadsheet using Obba (disclaimer: I am the developer of Obba too, but my main open source project is a library for mathematical finance, for which I use these sheets).
My suggestion is to go both routes:
create small (classical) unit tests with predefined hard-coded data as part of your build environment;
create bigger tests with data provided via the sheet to analyse the code's behavior for different inputs.
If possible, add a hard-coded "bigger test" to your automated test suite from time to time.
Note: there is also the concept of parameterized unit tests, and there are tools which then generate (e.g. randomize) the parameters as part of the testing.
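In JUnit 5, for instance, a parameterized test over small tabular data can look like this (a sketch; the Calculator.add under test is invented for illustration):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class CalculatorTest {

    // Each CSV row becomes one invocation of the test method.
    @ParameterizedTest
    @CsvSource({
        "1, 2, 3",
        "0, 0, 0",
        "-1, 1, 0"
    })
    void addsTwoNumbers(int a, int b, int expected) {
        assertEquals(expected, Calculator.add(a, b));
    }
}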
I have a JUnit test that I would like to run from a main method. I would like to retrieve multiple records from a database (within the main method) and pass each record into the JUnit test, using a data object, so that each record can be tested. Can I pass a data object into the run method of JUnit? If not, what is the best way to accomplish this? There are so many different scenarios that I would like to use actual data from the database; there could be as many as 5000 or more records to test.
Thanks
Doug
Surely you are looking for a Parameterized test case. You can do it easily with JUnit itself instead of using a main() method.
You need the Parameterized runner to run your test.
It will run your test with different parameters, passing the parameters via the constructor.
Here is an easy article on how to do that. You can also try the example in the documentation to understand how it works.
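A minimal sketch of that shape, assuming a hypothetical Record data object and RecordDao for the database access (both invented for illustration):

import static org.junit.Assert.assertNotNull;

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class RecordValidationTest {

    // Called once by the runner; each Object[] becomes one constructor call.
    @Parameters
    public static Collection<Object[]> records() {
        List<Object[]> data = new ArrayList<>();
        for (Record record : new RecordDao().findAll()) {
            data.add(new Object[] { record });
        }
        return data;
    }

    private final Record record;

    public RecordValidationTest(Record record) {
        this.record = record;
    }

    @Test
    public void recordIsValid() {
        assertNotNull(record.getId());
    }
}

Note that with 5000+ records this creates one test instance per record, which JUnit handles but which can make test reports large; sampling or batching the records is worth considering.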
You want to use JUnit's Parameterized tests. There's really no way to run a main method as a JUnit test case.
On top of the docs, here's a blog post which explains it a little better: http://ourcraft.wordpress.com/2008/08/27/writing-a-parameterized-junit-test/
I think that testing your main method is more along the lines of an integration test or a functional test. The same can be said for testing your database data. If you really want a unit test, the first step would be to refactor your main method using Extract Method to pull out the business logic you want to test.
Doing this gives you a few benefits. First, you can test your code in isolation (which is one of the more important properties of a good unit test); if you refactor out the business logic, you'll know that you are testing only that code and that no other code is affecting your test. Second, with an isolated method you'll be able to easily mock the test data by passing different parameters to the method and making your assertions based on the known mock data.
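As a hypothetical sketch of that refactoring (ReportGenerator, Record and RecordDao are all invented for illustration):

import java.util.List;

public class ReportGenerator {

    public static void main(String[] args) {
        List<Record> records = new RecordDao().findAll(); // real database access
        System.out.println(summarize(records));
    }

    // Extracted business logic: a pure function of its inputs, easy to test.
    static String summarize(List<Record> records) {
        return records.size() + " records";
    }
}

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class ReportGeneratorTest {

    @Test
    public void summarizeCountsRecords() {
        // Mocked data instead of a database call.
        List<Record> mockData = Arrays.asList(new Record(), new Record());
        assertEquals("2 records", ReportGenerator.summarize(mockData));
    }
}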
I have not used JUnit before and have not done automated unit testing.
Scenario:
We are changing our backend DAOs from SQL Server to Oracle, so on the DB side all the stored procedures were converted to Oracle. Now, when our code calls these new Oracle stored procedures, we want to make sure the data returned is the same as from the SQL Server stored procedures.
So, for example, I have the following two methods in a DAO:
// This is the old method: it gets data from SQL Server.
public IdentifierBean getHeadIdentifiers_old(String head) {
    HashMap<String, Object> parmMap = new HashMap<>();
    parmMap.put("head", head);
    List result = getSqlMapClientTemplate().queryForList("Income.getIdentifiers", parmMap);
    return (IdentifierBean) result.get(0);
}

// This is the new method: it gets data from Oracle. The stored procedure
// writes its result into the parameter map under the "Result0" key.
public IdentifierBean getHeadIdentifiers(String head) {
    HashMap<String, Object> parmMap = new HashMap<>();
    parmMap.put("head", head);
    getSqlMapClientTemplate().queryForObject("Income.getIdentifiers", parmMap);
    return (IdentifierBean) ((List) parmMap.get("Result0")).get(0);
}
Now I want to write a JUnit test method that would first call getHeadIdentifiers_old and then getHeadIdentifiers, and would compare the objects returned (I will have to override equals and hashCode in IdentifierBean). The test passes only when both objects are the same.
In the test method I will have to provide a parameter (head in this case) for the two methods; this will be done manually for now. Yes, the parameters coming from the front end could be different, and the SPs might not return exactly the same results for those parameters, but I think having these test cases will give us some confidence that they return the same data.
My questions are:
1. Is this a good approach?
2. I will have multiple DAOs. Do I write the test methods inside the DAO itself, or should I have a separate JUnit test class for each DAO?
3. (Might be a n00b question.) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
4. When tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
5. Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
Okay, let's see what can be done...
Is this a good approach?
Not really. Instead of having one obsolete code path with somewhat known functionality, you now have two code paths with unequal and unpredictable functionality. Usually one would create thorough unit tests for the legacy code first and then refactor the original method, to avoid an incredibly large amount of rework: what if some part of the jungle of code forming the huge application keeps calling the old method while other parts call the new one?
However, working with legacy code is never optimal, so what you're thinking of may be the best solution.
I will have multiple DAOs. Do I write the test methods inside the DAO itself or should I have a separate JUnit test class for each DAO?
Assuming you've gone properly OO with your program structure, where each class does one thing and one thing only, yes, you should make another class containing the test cases for that individual class. What you're looking for here is mock objects (search for the term on SO and Google in general, lots of info available), which help you decouple the class under test from other classes. Interestingly, a high number of mocks in unit tests usually means that your class could use some heavy refactoring.
(might be a n00b question) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
All IDEs allow you to run all the JUnit tests at the same time; for example, in Eclipse just click the source folder/top package and choose Run -> JUnit Test. Also, when running an individual class, all the unit tests contained within are run in the proper JUnit flow (setUp() -> testX() -> tearDown()).
When tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
Yes. Part of test-driven development is the mantra Red-Green-Refactor, which refers to the colored bar shown by IDEs for unit tests: if any test in the suite fails, the bar is red; if all pass, it's green. Additionally, for JUnit there's also blue for individual tests that fail with assertion errors.
Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
I'm quite sure there are going to be multiple of those in the answers soon, just hang on :)
You'll write a test class.
public class OracleMatchesSqlServer extends TestCase {

    public void testHeadIdentifiersShouldBeEqual() throws Exception {
        String head = "whatever your head should be";
        IdentifierBean originalBean = YourClass.getHeadIdentifiers_old(head);
        IdentifierBean oracleBean = YourClass.getHeadIdentifiers(head);
        assertEquals(originalBean, oracleBean);
    }
}
You might find you need to parameterize this on head; that's straightforward.
Update: It looks like this:
public class OracleMatchesSqlServer extends TestCase {

    public void testHeadIdentifiersShouldBeEqual() throws Exception {
        compareIdentifiersWithHead("head1");
        compareIdentifiersWithHead("head2");
        compareIdentifiersWithHead("etc");
    }

    private static void compareIdentifiersWithHead(String head) {
        IdentifierBean originalBean = YourClass.getHeadIdentifiers_old(head);
        IdentifierBean oracleBean = YourClass.getHeadIdentifiers(head);
        assertEquals(originalBean, oracleBean);
    }
}
* Is this a good approach?
Sure.
* I will have multiple DAOs. Do I write the test methods inside the DAO itself or should I have a separate JUnit test class for each DAO?
Try it with a separate test class for each DAO; if that gets too tedious, try it the other way and see what you like best. It's probably more helpful to have the fine-grainedness of separate test classes, but your mileage may vary.
* (might be a n00b question) Will all the test cases be run automatically? I do not want to go to the front end and click a bunch of stuff so that the call to the DAO gets triggered.
Depending on your environment, there will be ways to run all the tests automatically.
* When tests are run, will I find out which methods failed? And for the ones that failed, will it tell me the test method that failed?
Yes and yes.
* Lastly, any good starting points? Any tutorials or articles that show working with JUnit?
I really like Dave Astels' book.
Another useful introduction to writing and maintaining large unit test suites is this book (which is partially available online):
xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros
The book is organized in three major parts. Part I consists of a series of introductory narratives that describe some aspect of test automation using xUnit. Part II describes a number of "test smells" that are symptoms of problems with how we are automating our tests. Part III contains descriptions of the patterns.
Here's a quick yet fairly thorough intro to JUnit.
I am building an application that queries a web service. The data in the database varies and changes over time. How do I build a unit test for this type of application?
The web service sends back XML or a "no search results" HTML page. I cannot really change the web service. My application basically queries the web service using HttpURLConnection and gets the response as a String.
Hope that helps with more detail.
Abstract the web service behind a proxy that you can mock out. Have your mock web service return various values representing normal data and corner cases, and also simulate getting exceptions from the web service. If your code works under these conditions, you can be reasonably certain that it will work with any values the real web service supplies.
Look at jMock for Java mocking.
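A minimal jMock 2 sketch of that idea, assuming a hypothetical WebService interface that your own code defines as the proxy:

import static org.junit.Assert.assertEquals;

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;

public class SearchClientTest {

    // Hypothetical proxy interface that your application codes against.
    interface WebService {
        String query(String term);
    }

    @Test
    public void returnsCannedResultWithoutTouchingTheNetwork() {
        Mockery context = new Mockery();
        final WebService service = context.mock(WebService.class);

        // The mock returns a canned response; will(throwException(...))
        // could be used instead to simulate a failing web service.
        context.checking(new Expectations() {{
            oneOf(service).query("beatles");
            will(returnValue("<results><hit>Abbey Road</hit></results>"));
        }});

        assertEquals("<results><hit>Abbey Road</hit></results>", service.query("beatles"));
        context.assertIsSatisfied();
    }
}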
Strictly speaking of unit testing, you can only test units that have deterministic behavior.
A test that connects to an external web server is an integration test.
The solution is to mock the HttpURLConnection: that is, create a class in your unit tests that derives from the HttpURLConnection class and returns a hardcoded, or parameterizable, value. EDIT: notice this can be done manually, without any mocking framework.
The class that queries the web server should not instantiate the HttpURLConnection but receive it via a parameter. In the unit tests, you create the HttpURLConnectionMock and pass it to the class that interrogates the web server, which uses it as it would a real HttpURLConnection. In the production code, you create a real HttpURLConnection and pass it to the class.
You can also make your HttpURLConnectionMock able to throw an IOException, to test error conditions. Just give it a method telling it to return an exception instead of a result on the next request.
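As a rough sketch of such a hand-rolled mock (the canned response and placeholder URL are assumptions; only core JDK classes are used):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class HttpURLConnectionMock extends HttpURLConnection {

    private final String cannedResponse;
    private IOException nextFailure; // when set, the next read throws instead

    public HttpURLConnectionMock(String cannedResponse) throws MalformedURLException {
        super(new URL("http://localhost/mock")); // placeholder URL, never contacted
        this.cannedResponse = cannedResponse;
    }

    // Tell the mock to fail the next request with the given exception.
    public void failNextRequestWith(IOException e) {
        this.nextFailure = e;
    }

    @Override
    public InputStream getInputStream() throws IOException {
        if (nextFailure != null) {
            IOException e = nextFailure;
            nextFailure = null;
            throw e;
        }
        return new ByteArrayInputStream(cannedResponse.getBytes(StandardCharsets.UTF_8));
    }

    @Override
    public void connect() {
        connected = true;
    }

    @Override
    public void disconnect() {
        connected = false;
    }

    @Override
    public boolean usingProxy() {
        return false;
    }
}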
Your question is a little open-ended, but there are definitely some testable options just using the information above:
You could test whether the query works at all: assert that you get back a non-empty/non-null result set.
You could test whether the query result is a valid result set: assert that the results pass your validation code (so at this point you know the data is non-null, not nonsensical and possibly useful).
If you know anything about the data schema/description, you could assert that the fields are sensible in relation to each other. For example, if you get a result with a helicopter, it shouldn't be associated with an altitude of negative 100 meters...
If you know anything about the probabilistic distribution of the data, you should be able to collect a set of data and assert that the resulting distribution is within a standard deviation of what you'd expect to see.
I'm sure that with some more information, you'll get a pile of useful suggestions.
It sounds like you're testing at too high a level. Consider mocking the web service interface and writing other unit tests on the data layer that accesses the database. Some more detail here might make this question easier to answer, for example the situation you're trying to test.
I would normally expect the results of a unit test not to change, or at least to be within a range that you're expecting.
A problem I've run into is convoluted (meaning "crappy") data models, where you can't ever be sure whether problems are due to code errors or data errors.
A symptom of this is when your application works great and passes all tests with mocked data or a fresh dataset, but breaks horribly when you run it on real data.