I am building an application that queries a web service. The data in the database varies and changes over time. How do I build a unit test for this type of application?
The web service sends back XML or a "no search results" HTML page. I cannot really change the web service. My application basically queries the web service using HttpURLConnection and gets the response as a String.
Hope that helps with more detail.
Abstract out the web service using a proxy that you can mock out. Have your mock web service return various values representing normal data and corner cases. Also simulate getting exceptions from the web service. Make sure your code works under these conditions, and you can be reasonably certain that it will work with any values the web service supplies.
Look at jMock for Java mocking.
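A minimal hand-rolled sketch of that proxy idea, assuming a hypothetical WebServiceClient interface; jMock or Mockito could generate the test double for you instead:

import java.io.IOException;

interface WebServiceClient {
    String query(String searchTerm) throws IOException;
}

// Test double that returns canned responses or throws, to cover corner cases.
class FakeWebServiceClient implements WebServiceClient {
    private final String cannedResponse;
    private final IOException failure;

    FakeWebServiceClient(String cannedResponse) {
        this(cannedResponse, null);
    }

    FakeWebServiceClient(String cannedResponse, IOException failure) {
        this.cannedResponse = cannedResponse;
        this.failure = failure;
    }

    @Override
    public String query(String searchTerm) throws IOException {
        if (failure != null) {
            throw failure;         // simulate the web service failing
        }
        return cannedResponse;     // e.g. normal XML, the "no results" HTML page, malformed XML
    }
}

The class under test takes a WebServiceClient as a constructor argument, so production code passes the real HTTP-backed implementation and the tests pass the fake.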
Strictly speaking, with unit testing you can only test units that have deterministic behavior.
A test that connects to an external web server is an integration test.
The solution is to mock the HttpURLConnection: that is, create a class in your unit tests that derives from the HttpURLConnection class and returns a hardcoded or parameterizable value. EDIT: note this can be done manually, without any mocking framework.
The class that queries the web server should not instantiate the HttpURLConnection, but receive it via a parameter. In the unit tests, you create the HttpURLConnectionMock and pass it to the class that queries the web server, which will use it as if it were a real HttpURLConnection. In the production code, you create a real HttpURLConnection and pass it to the class.
You can also make your HttpURLConnectionMock able to throw an IOException, to test error conditions. Just add a method telling it to throw an exception instead of returning the result on the next request.
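A sketch of such a mock, assuming the class under test reads the response via getInputStream(); the class name and the failNextRequestWith method are illustrative, not part of the original answer:

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class HttpURLConnectionMock extends HttpURLConnection {
    private final String response;
    private IOException nextFailure;

    public HttpURLConnectionMock(String response) throws MalformedURLException {
        super(new URL("http://example.com/mock"));   // this URL is never actually contacted
        this.response = response;
    }

    // Tell the mock to throw instead of returning the result on the next request.
    public void failNextRequestWith(IOException e) {
        this.nextFailure = e;
    }

    @Override
    public InputStream getInputStream() throws IOException {
        if (nextFailure != null) {
            IOException e = nextFailure;
            nextFailure = null;
            throw e;
        }
        return new ByteArrayInputStream(response.getBytes(StandardCharsets.UTF_8));
    }

    // Abstract methods that must be implemented; nothing to do in a mock.
    @Override public void connect() { }
    @Override public void disconnect() { }
    @Override public boolean usingProxy() { return false; }
}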
Your question is a little open-ended but there are definitely some testable options just using the information above:
You could test whether the query works at all. Assert that you should get back a non-empty / non-null result set (see the sketch after this list).
You could test whether the query result is a valid result set. Assert that the results pass your validation code (so at this point, you know that the data is non-null, not nonsensical, and possibly useful).
If you know anything about the data schema / data description, you could assert that the fields are sensible in relation to each other. For example, if you get a result with a helicopter, it shouldn't be associated with an altitude of negative 100 meters.
If you know anything about the probabilistic distribution of the data, you should be able to collect a set of data and assert that your resulting distribution is within a standard deviation of what you'd expect to see.
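A minimal JUnit sketch of the first two checks; QueryService, Result, and the isValid helper are hypothetical stand-ins for your own code:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
import java.util.List;
import org.junit.Test;

public class WebServiceQueryTest {

    // Hypothetical wrapper around the web service call.
    private final QueryService queryService = new QueryService();

    @Test
    public void queryReturnsSomething() {
        List<Result> results = queryService.search("helicopter");
        assertNotNull(results);
        assertFalse(results.isEmpty());
    }

    @Test
    public void queryResultsPassValidation() {
        for (Result result : queryService.search("helicopter")) {
            assertTrue("Result failed validation: " + result, isValid(result));
        }
    }

    private boolean isValid(Result result) {
        // e.g. required fields are present and the altitude is not negative
        return result.getAltitudeMeters() >= 0;
    }
}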
I'm sure that with some more information, you'll get a pile of useful suggestions.
It sounds like you're testing at too high a level. Consider mocking the web service interface and writing other unit tests on the data layer that accesses the database. Some more detail here might make this question easier to answer, for example the situation you're trying to test.
I would normally expect the results of a unit test not to change, or at least to be within a range that you're expecting.
A problem I've run into is with convoluted (meaning "crappy") data models, where you can't ever be sure whether problems are due to code errors or data errors.
A symptom of this is when your application works great, passes all tests, etc. with mocked data or a fresh dataset, but breaks horribly when you run your application on real data.
I have a small CRUD application that I would like to create integration tests for. I've seen the recommendation that "tests depending on other tests" is a no-go. But how can I keep the code maintainable while at the same time not using the data from other tests?
So what I mean is easier to show by example with some pseudo code
TestCreateUser {
    make POST to API creating a user
    verify that a 200 is received back
}
TestReadUser {
    GET the user from the previous test
    verify it's the same user
}
TestUpdateUser {
    PATCH the user from the previous test
    verify the user has the new data
}
So this would be bad since all the tests depend on the first one. So what are the alternatives? I guess I could use a beforeEach:
@BeforeEach
public void initEach() {
    make POST to API creating a user
    verify that a 200 is received back
}
And then just skip the create-user test. But this might create unnecessary calls if I, for example, have a test like this:
TestCreateUserWithSpecialData {
    make POST to API creating a user with additional data
    verify that a 200 is received back
    verify the additional data is correct
}
Then the beforeEach would just create a user that the test does not need. What's a good solution to this? Should I split them up into smaller classes and more files? Or is there a better solution? I suppose I could create if statements in the beforeEach, but that feels like a hack.
You could use @BeforeAll to create some test data (once) and then have individual tests operate on it.
Or, for a test that’s doing something destructive like “delete”, you could create the user within the test itself.
(Purists might complain that this means the "delete" test will fail if the problem is actually with the "create" operation, but I don't consider that a big problem: if something is sufficiently messed up with my testing environment that it can't even create some test data, the exact number of tests that fail is not very interesting to me.)
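A minimal JUnit 5 sketch of that approach; UserApiClient and its methods are hypothetical stand-ins for however you call the API:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInstance;

@TestInstance(TestInstance.Lifecycle.PER_CLASS)   // lets @BeforeAll be non-static
class UserCrudIntegrationTest {

    private final UserApiClient api = new UserApiClient();
    private String sharedUserId;

    @BeforeAll
    void createSharedUser() {
        // Created once; the read/update tests operate on this user.
        sharedUserId = api.createUser("alice@example.com");
    }

    @Test
    void readReturnsTheCreatedUser() {
        assertEquals("alice@example.com", api.getUser(sharedUserId).getEmail());
    }

    @Test
    void deleteRemovesAFreshlyCreatedUser() {
        // Destructive test creates its own user so it cannot break the others.
        String victimId = api.createUser("bob@example.com");
        api.deleteUser(victimId);
        // ... assert that a subsequent GET returns 404
    }
}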
One way to do that is to do whatever setup you need using files containing database inserts. There's an @Sql annotation in Spring's test support (usable in Spring Boot tests) that you can put on a test method to say which file to run before the test.
That way each test has its own dedicated setup and there is no dependence on another test working; it also means the test isn't dependent on Java setup code working.
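A sketch of that, using Spring's @Sql test annotation (org.springframework.test.context.jdbc.Sql); the script path and test shape are illustrative:

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.jdbc.Sql;

@SpringBootTest
class ReadUserTest {

    @Test
    @Sql("/testdata/insert-user.sql")   // runs these inserts before this test only
    void readUserInsertedByScript() {
        // ... call the API or repository and assert on the user the script inserted
    }
}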
I am currently developing a really big app. We are now facing the problem of Unit Testing everything in it.
I am trying to record all the interactions in methods and classes during execution time, to have inputs and outputs to compare.
Yeah, I know it is not the proper way of doing unit testing, but we need to do it quickly. We are already working with Mockito/PowerMockito/JUnit.
I already tried AOP and AspectJ, but the problem is having to create new files for each class we have.
I was thinking of a way to intercept the execution flow, or something similar, so I can dynamically write the inputs + dependency values and the outputs of the invoked methods and classes to a JSON file.
Any clues?
We are now facing the problem of Unit Testing everything in it.
Unit tests do not test code; unit tests verify publicly observable behavior that has justification from your requirements.
Publicly observable behavior does not necessarily mean public methods, but behavior observable from outside the code under test: return values and communication with dependencies.
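A small sketch of what that looks like in practice, using Mockito (already in the question's stack); OrderService and PaymentGateway are illustrative:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.junit.jupiter.api.Test;

class OrderServiceTest {

    // The dependency and the code under test; both illustrative.
    interface PaymentGateway { void charge(String customerId, int cents); }

    static class OrderService {
        private final PaymentGateway gateway;
        OrderService(PaymentGateway gateway) { this.gateway = gateway; }
        int placeOrder(String customerId, int cents) {
            gateway.charge(customerId, cents);   // communication with a dependency
            return cents;                        // return value
        }
    }

    @Test
    void placingAnOrderChargesTheCustomer() {
        PaymentGateway gateway = mock(PaymentGateway.class);
        OrderService service = new OrderService(gateway);

        int charged = service.placeOrder("c-42", 1999);

        assertEquals(1999, charged);             // observable: the return value
        verify(gateway).charge("c-42", 1999);    // observable: the outgoing call
    }
}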
In TestNG, I have many tests across many classes that require a page and/or article and possibly other data setup. This data needs to be unique (i.e., Test1 and Test2 both require an article, but they have to work on identical yet separate articles so they don't conflict with one another). I am providing the article name/page name via dataProviders.
Here's what I've tried/considered:
dependsOnMethods. Won't work, because it can't be cross-class.
dependsOnGroups. This has the problem of creating a single article for all tests to work off of.
@BeforeMethod. I can't use this because I can't pass in data.
@Factory. I am unable to use this because I am using a company-wide solution that currently uses it to pass around the WebDriver and has code behind the scenes that uses it.
Creating a method that the tests call at the beginning. This is what I am currently doing, and it works fine, but when that method fails, TestNG will still run the setup methods (which will then fail, and cause 8-10 failures for 1 bug, and wasted testing time).
Basically I need a way to throw a SkipException in a function if it has previously failed, without using the 4 annotations above.
EDIT: I realized that this question isn't quite complete. I pass two things into each of the functions: a role, and a name for the newly created page/article/other stuff. If I run the same method twice with different names but the same role passed in, and it fails, then the second time it should just skip it... however, I may be testing it with a role that doesn't have enough permissions, which would cause an exception to be thrown, but that doesn't mean I don't want to run it with other roles.
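A rough sketch of the setup-and-skip behavior I mean; the helper class and all names are just illustrative:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.testng.SkipException;

public final class ArticleSetup {

    // Remembers which roles already failed this setup, so later calls can skip.
    private static final Map<String, Throwable> FAILED_ROLES = new ConcurrentHashMap<>();

    public static void createArticle(String role, String articleName) {
        Throwable earlier = FAILED_ROLES.get(role);
        if (earlier != null) {
            // The same role failed before: skip instead of piling up 8-10 more failures.
            throw new SkipException("Article setup previously failed for role " + role, earlier);
        }
        try {
            doCreateArticle(role, articleName);   // the real WebDriver/API call
        } catch (RuntimeException e) {
            FAILED_ROLES.put(role, e);
            throw e;                              // the first failure still reports as a failure
        }
    }

    private static void doCreateArticle(String role, String articleName) {
        // ... drive the application to create the article as the given role
    }
}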
I have designed a JUnit test suite with the required values hard-coded in the code itself. Each time there is any change, I need to open the project and make the modifications. To provide input from an external file, I am using Excel sheets, which can be easily designed. The Excel file has some drop-down menu items which indicate the test cases that need to be executed. The Excel sheet also has some text boxes in which the user has to fill in values before running the test suite.
But Excel is not platform independent.
Is there any better way, universally accepted and platform independent, to provide inputs to a JUnit test suite?
Do I understand you right, that the main thing here is to find a way that makes it easy for a tester to key in test data?
It's not so much about writing a test case, right?
Well, that issue comes up in many different projects. One example is having users key in some basic values in a database table.
There are many ways to solve that. A lot of people use Excel, even more use MS Access forms, SharePoint or, if they're more familiar with web tools, they end up building web sites.
In the end, your approach and the tool you use depend on your and the testers' knowledge and on the number of interfaces you have to build and maintain. In my company we ended up with some configurable web sites that are independent of any 3rd-party software license (which was a main requirement in our case).
The only tool where one should be very careful is Excel. If you need only a few interfaces, let's say 10-20, Excel for me can still be handled. When it gets to be more, the maintenance of Excel will kill you, mainly because Excel keeps programming and business logic for each interface separately. Changing the business logic means changing all the Excel files separately. This kills you sooner or later.
One of the core concepts of test-driven development is that you run all test cases, all of the time, in an automated way. Having a user use Excel to choose test cases and enter data breaks this model.
You could read from a file to drive your test cases, but perhaps your tests need to be redefined to be data independent. Also all of them should run every time you run JUnit.
Just yesterday I used random data to perform a test...here is an example:
@Test
public void testIntGetter() {
    int value = random.nextInt();
    MyObj obj = new MyObj(value);
    assertEquals(value, obj.getMyInt());
}
While this is an overly simple example, it does test the functionality of the class while being data independent.
Once you decide to break the test-driven development/JUnit model, your question is not really applicable. It is fine to use a tool for other purposes, but your specific question is incorrect.
It is best to have data reside in code; with some exceptions, testing is independent of the data, as my example shows. Most of those exceptions are edge cases that should reside in code. For example, a method that takes a String parameter should be tested against null, an empty string, and a non-empty String.
If the value of a parameter reveals a defect in the code, the code should be fixed and that value should be a permanent member of the collection of test conditions.
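For example, a tiny set of tests for those three String cases; Formatter.formatName and its expected behavior are hypothetical:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FormatNameTest {

    @Test
    public void nullInputGivesEmptyOutput() {
        assertEquals("", Formatter.formatName(null));
    }

    @Test
    public void emptyInputGivesEmptyOutput() {
        assertEquals("", Formatter.formatName(""));
    }

    @Test
    public void nonEmptyInputIsTrimmedAndCapitalized() {
        assertEquals("Alice", Formatter.formatName(" alice "));
    }
}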
I believe there is no universally accepted way to provide input to JUnit tests. As far as I know, a unit test is often, or by definition, small (the smallest testable part). Data is provided hardcoded as part of a test.
That said, I also use unit testing to conduct tests of larger numerical algorithms / models, for which I sometimes have to provide more complicated data. I provide this data via a spreadsheet too. I believe the spreadsheet is the natural GUI for this kind of tabular data.
I trigger my Java code directly from the spreadsheet using Obba (disclaimer: I am the developer of Obba too, but my main open source project is a library for mathematical finance, for which I use these sheets).
My suggestion is to go both routes:
Create small (classical) unit tests with predefined hardcoded data as part of your build environment.
Create bigger tests with data provided via the sheet to analyse the code's behavior for different inputs.
If possible, add a hardcoded "bigger test" to your automated test suite from time to time.
Note: There is also the concept of parameterized unit tests, and there are tools which then generate (e.g. randomize) the parameters as part of the testing.
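For completeness, a JUnit 4 parameterized-test sketch; MyObj is the same illustrative class as in the example above, and the values are hand-picked:

import static org.junit.Assert.assertEquals;
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class MyObjParameterizedTest {

    @Parameters(name = "value={0}")
    public static Collection<Object[]> values() {
        // Hand-picked values; a randomizing tool could generate these instead.
        return Arrays.asList(new Object[][] { { 0 }, { -1 }, { 42 }, { Integer.MAX_VALUE } });
    }

    private final int value;

    public MyObjParameterizedTest(int value) {
        this.value = value;
    }

    @Test
    public void getterReturnsConstructorValue() {
        assertEquals(value, new MyObj(value).getMyInt());
    }
}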
The application generates CSV, custom/tab-delimited, etc. reports, and to write test cases for these reports I am using JUnit 4.
But some methods return huge data as a formatted output string, which has to be dumped to a file. Now I am facing an issue generating the expected result for the assertion.
So how do I simulate it? Do I need to write the code again, which is enormous?
Is it good practice to code (re-code the original) in JUnit? I certainly doubt it. It will be tightly coupled, and the test case will fail in the future if the code changes.
How do I make a test case for a method returning a huge list? I can restrict it to some hundreds of entries, but I am not able to mock the expected result list to compare against.
Also, I am using JUnit for testing the functionality of a web-based application at the application layer, not at the view layer. Is this fine, or should I move to another framework like HttpUnit?
EDIT: I am testing a method which takes an ArrayList fetched from the database as input and then prepares the output as a properly formatted string for the report. So basically, it generates CSV from the data.
Also, can someone point out a few must-follow best practices for unit testing? I have gone through various online resources, but can't relate much.
The method you're unit testing takes data as an argument and transforms it into CSV. So the test should just check that this transformation works, and you should feed it the minimal data necessary to verify that it indeed works, for all cases.
This method should be independent of the method which fetches the data from the database, and you should not fetch an enormous amount of data from the database to unit-test this method. Just prepare the data in memory, and pass it as an argument to your data-transformation method in your test:
List<Foo> data = new ArrayList<Foo>();
data.add(createFooA());
data.add(createFooB());
String csv = myTestedObject.transformToCsv(data);
assertEquals("...", csv);