TestNG: Creating unique, testable data for @Tests - java

In TestNG, I have many tests across many classes that require a page and/or article and possibly other data setup. This data needs to be unique (AKA, Test1 and Test2 both require an article, but they have to work on identical but separate articles so they don't conflict with one another). I am providing the article name/page name via dataProviders.
Here's what I've tried/considered:
@dependsOnMethods. Won't work, because it can't be used across classes.
@dependsOnGroups. This has the problem of creating a single article for all tests to work off of.
@BeforeMethod. I can't use this because I can't pass in data.
@Factory. I am unable to use this because I am using a company-wide solution that currently uses it to pass around the WebDriver and has code behind the scenes that depends on it.
Creating a method that the tests call at the beginning. This is what I am currently doing, and it works fine, but when that method fails, TestNG will still run it for the remaining tests (which will then fail as well, causing 8-10 failures for 1 bug and wasted testing time).
Basically I need a way to throw a SkipException in a function if it has previously failed, without using the 4 annotations above.
EDIT: I realized that this question isn't quite complete. I pass two things into each of these functions: a role, and a name for the newly created page/article/other item. If I run the same method twice with different names but the same role and it fails, then the second run should just be skipped. However, I may be testing it with a role that doesn't have enough permissions, which would cause an exception to be thrown, but that doesn't mean I don't want to run it with other roles.
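A minimal sketch of one way to get that behavior with the "setup method the tests call first" approach: remember which (setup, role) combinations already failed and throw SkipException on repeat calls. ArticleSetup, createArticle, and the role/name parameters are illustrative names, not the company framework's API.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.testng.SkipException;

// Hypothetical helper: remembers which (setup, role) combinations already failed
// so later tests with the same role are skipped instead of failing again.
public class ArticleSetup {
    private static final Map<String, Throwable> failedSetups = new ConcurrentHashMap<>();

    public static void createArticle(String role, String articleName) {
        String key = "createArticle:" + role;
        Throwable previous = failedSetups.get(key);
        if (previous != null) {
            // The same role failed before: skip instead of piling up duplicate failures.
            throw new SkipException("Article setup already failed for role " + role + ": " + previous);
        }
        try {
            // ... actual page/article creation for articleName goes here ...
        } catch (RuntimeException e) {
            failedSetups.put(key, e);
            throw e; // the first test to hit the bug still reports a real failure
        }
    }
}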

Randomized Testing in Java - what is it and how to achieve it?

I was confused about Randomized testing.
It is cited from the proj1b spec:
"The autograder project 1A largely relies on randomized tests. For
example, our JUnit tests on gradescope simply call random methods of
your LinkedListDeque class and our correct implementation
LinkedListDequeSolution and as soon as we see any disagreement, the
test fails and prints out a sequence of operations that caused the
failure. "
(http://datastructur.es/sp17/materials/proj/proj1b/proj1b.html)
I do not understand what it means by:
"call random methods of the tested class and the correct class"
I need to write something really similar to that autograder, but I do not know whether I should write tests for different methods together, using a loop to randomly pick some of them to test.
If so, and we can already test all the methods using JUnit, why do we need randomized testing?
Also, if I combine all the tests together, why would I still call it a JUnit test?
If you do not mind, some examples would make this easier to understand.
Just to elaborate on the "random" testing.
There is a framework called QuickCheck, initially written for the Haskell programming language. But it has been ported to many other languages - also for Java. There is jqwik for junit5, or (probably outdated) jcheck.
The idea is "simply":
you describe properties of your methods, like a(b(x)) == b(a(x))
the framework then creates random input for method calls and tries to find examples where a property doesn't hold
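For illustration, a minimal jqwik property (assuming jqwik is on the classpath; the reverse-twice property is just an example, not tied to the project above):

import net.jqwik.api.ForAll;
import net.jqwik.api.Property;

class ReverseProperties {

    // jqwik generates many random strings and reports a (shrunk) counterexample
    // if the property ever returns false.
    @Property
    boolean reversingTwiceYieldsOriginal(@ForAll String s) {
        String reversedTwice = new StringBuilder(s).reverse().reverse().toString();
        return reversedTwice.equals(s);
    }
}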
I assume they are talking about Model Based Testing. For that you'd have to create models - simplified versions of your production behaviour. Then you can list possible methods that can be invoked and the dependencies between those methods. After that you'd have to choose a random one and invoke both - method of your model and the method of your app. If the results are the same, then it works right. If the results differ - either there is a bug in your model or in your app. You can read more in this article.
In Java you can either write this logic on your own, or use existing frameworks. The only existing one that I know in Java is GraphWalker. But I haven't used it and don't know how good it is.
The original frameworks (like QuickCheck) are also able to "shrink" - if it took 50 calls to random methods to find a bug, they will try to find the exact sequence of a few steps that leads to that bug. I don't know whether such capabilities exist in the Java frameworks, but it may be worth looking into ScalaCheck if you need a JVM (but not necessarily Java) solution.
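To make the autograder description concrete, here is a hedged sketch of "call random methods on both the class under test and a correct reference". It assumes your LinkedListDeque exposes addLast, removeFirst, and size (as in the proj1b API) and uses java.util.ArrayDeque as the known-good implementation.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Random;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class RandomizedDequeTest {

    @Test
    public void randomCallsAgreeWithReference() {
        LinkedListDeque<Integer> actual = new LinkedListDeque<>(); // your class under test
        Deque<Integer> expected = new ArrayDeque<>();              // known-good reference
        Random rng = new Random(42);                               // fixed seed => reproducible failures
        StringBuilder log = new StringBuilder();                   // operation sequence for the failure message

        for (int i = 0; i < 10_000; i++) {
            int op = rng.nextInt(3);
            if (op == 0) {
                int value = rng.nextInt(100);
                actual.addLast(value);
                expected.addLast(value);
                log.append("addLast(").append(value).append(")\n");
            } else if (op == 1 && !expected.isEmpty()) {
                log.append("removeFirst()\n");
                assertEquals(log.toString(), expected.removeFirst(), actual.removeFirst());
            } else {
                log.append("size()\n");
                assertEquals(log.toString(), expected.size(), actual.size());
            }
        }
    }
}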

Integration tests, but how much? [closed]

A recent debate within my team made me wonder. The basic topic is how much, and what, we should cover with functional/integration tests (sure, they are not the same thing, but the example is a dummy where it doesn't matter).
Let's say you have a "controller" class something like:
public class SomeController {
    @Autowired Validator val;
    @Autowired DataAccess da;
    @Autowired SomeTransformer tr;
    @Autowired Calculator calc;

    public boolean doCheck(Input input) {
        if (val.validate(input)) {
            return false;
        }
        List<Stuff> stuffs = da.loadStuffs(input);
        if (stuffs.isEmpty()) {
            return false;
        }
        BusinessStuff businessStuff = tr.transform(stuffs);
        if (null == businessStuff) {
            return false;
        }
        return calc.check(businessStuff);
    }
}
We need a lot of unit testing for sure (e.g., if validation fails, or there is no data in the DB, ...); that's out of the question.
Our main issue, and what we cannot agree on, is how much integration tests should cover :-)
I'm on the side that we should aim for fewer integration tests (test pyramid). What I would cover here is only a single happy/unhappy path where the execution returns from the last line, just to see that when I put these pieces together nothing blows up.
The problem is that it is not that easy to tell why the test resulted in false, and that makes some of the guys feel uneasy (e.g., if we simply check only the return value, it is hidden that the test is green only because someone changed the validation and it now returns false early). Sure, we could cover all cases, but that would be heavy overkill imho.
Does anyone have a good rule of thumb for this kind of issue? Or a recommendation? Reading? Talk? Blog post? Anything on the topic?
Thanks a lot in advance!
PS: Sorry for the ugly example, but it's quite hard to translate a specific piece of code into an example. Yeah, one can argue about throwing exceptions/using a different return type/etc., but our hands are more or less tied because of external dependencies.
It's easy to figure out where the test should reside if you follow these rules:
We check the logic at the Unit Test level, and we check that the logic is invoked at the Component or System level.
We don't use mocking frameworks (mockito, jmock, etc).
Let's dive, but first let's agree on terminology:
Unit tests - check a method, a class or a few of them in isolation
Component Test - initializes a piece of the app but doesn't deploy it to the App Server. An example could be initializing Spring contexts in the tests.
System Test - requires a full deployment on the App Server. An example could be sending HTTP REST requests to a remote server.
If we build a balanced pyramid we'll end up with most tests on Unit and Component levels and few of them will be left to System Testing. This is good since lower-level tests are faster and easier. To do that:
We should put the business logic as low as possible (preferably in the Domain Model), as this will allow us to easily test it in isolation. Each time you go through a collection of objects and apply conditions to it, that logic should ideally go into the domain model.
But the fact that the logic works doesn't mean it's invoked correctly. That's where you'd need Component Tests. Initialize your Controllers as well as services and DAOs and then call them once or twice to see whether the logic is invoked.
Example: a user's name cannot exceed 50 symbols and can contain only Latin letters and some special symbols.
Unit Tests - create Users with right and wrong usernames; check that exceptions are thrown, or vice versa, that the valid names pass.
Component Tests - check that when you pass an invalid user to the Controller (if you use Spring MVC you can do that with MockMvc) it throws the error (see the sketch below). Here you'll need to pass only one user - all the rules have already been checked by now; here you're interested only in whether those rules are invoked.
System Tests - you may not actually need them for this scenario.
Here is a more elaborate example of how you can implement a balanced pyramid.
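A hedged sketch of the component-level check from the example above, using Spring Boot's MockMvc support; UserController, the /users endpoint, the JSON payload, and the assumption that the controller maps validation failures to a 400 Bad Request are all placeholders for your own app.

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.http.MediaType;
import org.springframework.test.web.servlet.MockMvc;

@WebMvcTest(UserController.class) // hypothetical controller under test
class UserControllerComponentTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void rejectsTooLongUserName() throws Exception {
        String tooLong = "a".repeat(51); // the 50-symbol rule itself is covered by unit tests

        mockMvc.perform(post("/users")
                        .contentType(MediaType.APPLICATION_JSON)
                        .content("{\"name\":\"" + tooLong + "\"}"))
               .andExpect(status().isBadRequest()); // only checks that validation is wired in
    }
}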
In general we write an integration test at every starting point of the application (let's say every controller). We validate some happy flows and some error flows, with a couple of asserts to give us some peace of mind that we didn't break anything.
However, we also write tests at lower levels in response to regressions or when multiple classes are involved in a piece of complicated behaviour.
We use Integration tests mainly to catch the following types of regressions:
Refactoring mistakes (not caught by Unit tests).
For problems with refactoring, a couple of integration tests that hit a good portion of your application are more than sufficient. Refactoring often touches a large portion of classes, so these tests will expose things like using the wrong class or parameter somewhere.
Early detection of injection problems (context not loading, Spring)
Injection problems often happen because of missing annotations or mistakes in XML config. The first integration test that runs and sets up the entire context (apart from mocking the back-ends) will catch these every time.
Bugs in super complicated logic that are nearly impossible to test without controlling all inputs
Sometimes you have code that is spread over several classes and needs filtering, transformations, etc., and sometimes no one really understands what is going on. Worse, it is nearly impossible to test on a live system because the underlying data sources cannot easily provide the exact scenario that will trigger a bug.
For these cases (once discovered) we add a new integration test, where we feed the system the input that caused the bug, and then verify if it is performing as expected. This gives a lot of peace of mind after extensive code changes.

How to design / organize tests related to Create/Read/Update/Delete operations in CMS? (Selenium)

We have a CMS web app where we do the following actions:
- add new / modify customer profile (company)
- add new / modify users of company
- add new / modify content by users
- etc...
So far we have developed a number of tests using Selenium + JUnit, in the following fashion:
addNewCompanyTest()
updateCompanyTest()
deleteCompanyTest()
The thing is, the best practice would be to perform cleanup after addNewCompanyTest() (deleting the newly created company), which would in fact perform the same steps as deleteCompanyTest(). I invoke the same method deleteCompany(Company c) in both tests.
And deleteCompanyTest() in fact creates a new company first, so it looks like addNewCompanyTest() is redundant, because it has to pass if the other one works.
I have two ideas how this can be managed:
1) Force the tests to be executed in given order
I can use the JUnit feature of executing tests in alphabetical order (which would require renaming tests), or switch to TestNG because it supports test ordering. But is this good practice, given that people even go the opposite way and force a random test order?
2) Create one test companyCRUDTest()
In this test I would create a new company, then update it, and delete it.
What is the best approach here?
If 2), do you then remove smaller tests like addNewCompanyTest()? Maybe it's a better approach to just focus on higher-level tests like addAndDeleteContentAsCompanyUserTest(), which would create a Company, create a User, log in as the User, create a new Content item, and then delete it, instead of maintaining the low-level ones?
In my view you shouldn't optimize out repetition in the tests simply because you happen to know how they are implemented.
You should explicitly test each required behavior and combination of behaviors. Each test should be runnable on its own, explicitly set up its desired state, and tear down (reset to a known state) whatever changes it made after it has run.
The way I would implement this is to create a little internal testing "API" for adding, deleting and updating companies. You might also write a reset() method that deletes all companies. You can then call these API methods from the test methods without creating lots of duplication. This API will also let you add more complex tests, such as adding several companies and then deleting some of them, adding the same company twice etc.
An alternative to resetting afterwards is to reset before each test. You could put the reset() either into the #Before method or an #After method. The advantage of doing the reset before the test method is that if a test fails, you will be able to see the erroneous state. The advantage of doing reset afterwards is that you could optimize it so that it only deletes what it created.
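A hedged sketch of that internal testing "API" idea, assuming Selenium WebDriver and JUnit 4; CompanyTestApi, its methods, and the UI flows behind them are hypothetical placeholders for your CMS pages.

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Hypothetical internal testing API wrapping the CMS UI flows.
class CompanyTestApi {
    private final WebDriver driver;

    CompanyTestApi(WebDriver driver) { this.driver = driver; }

    void addCompany(String name)    { /* drive the "add new company" pages */ }
    void deleteCompany(String name) { /* drive the "delete company" pages */ }
    void reset()                    { /* delete every company created by tests */ }
}

public class CompanyCrudTest {
    private WebDriver driver;
    private CompanyTestApi api;

    @Before
    public void setUp() {
        driver = new ChromeDriver();
        api = new CompanyTestApi(driver);
        api.reset(); // resetting *before* the test keeps failure state around for debugging
    }

    @After
    public void tearDown() {
        driver.quit();
    }

    @Test
    public void addAndDeleteCompany() {
        api.addCompany("Acme");
        // assert the company shows up in the CMS, then remove it again
        api.deleteCompany("Acme");
    }
}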

Using a shared Weka Classifier in a multi threaded application results with Exceptions

I'm running a multi-threaded Java application which gets requests to classify instances. In order to be able to run many threads concurrently my application shares a Classifier object and an Instances object among the threads. The Instances object contains only attributes' related data and does not have any instance associated with it.
When my application gets a classification request, I create an Instance object with the request's attributes data and set the pre-generated Instances object as the dataset using Instance.setDataset(), e.g.:
myNewInstance.setDataset(sharedInstances);
Then myNewInstance is sent to the shared Classifier.
It seems to work well in most cases. However, sometimes when 2 concurrent requests occur, an exception is thrown from Classifier.distributionForInstance(). Unfortunately the error messages are not clear; these are the 2 different exceptions I see:
Caused by: java.lang.RuntimeException: Queue is empty
at weka.core.Queue.pop(Queue.java:194)
at weka.filters.Filter.output(Filter.java:563)
at weka.filters.unsupervised.attribute.PrincipalComponents.convertInstance(PrincipalComponents.java:626)
at weka.filters.unsupervised.attribute.PrincipalComponents.input(PrincipalComponents.java:812)
at weka.classifiers.meta.RotationForest.convertInstance(RotationForest.java:1114)
at weka.classifiers.meta.RotationForest.distributionForInstance(RotationForest.java:1147)
Caused by: java.lang.NullPointerException
at weka.filters.unsupervised.attribute.Standardize.convertInstance(Standardize.java:238)
at weka.filters.unsupervised.attribute.Standardize.input(Standardize.java:142)
at weka.filters.unsupervised.attribute.PrincipalComponents.convertInstance(PrincipalComponents.java:635)
at weka.filters.unsupervised.attribute.PrincipalComponents.input(PrincipalComponents.java:812)
at weka.classifiers.meta.RotationForest.convertInstance(RotationForest.java:1114)
at weka.classifiers.meta.RotationForest.distributionForInstance(RotationForest.java:1147)
As you can see, when the latter happens it comes with an empty message string.
To my understanding I can't make the objects immutable, and I'd rather not wrap this part in a critical section, in order to get the most out of the concurrency.
I've tried creating a different 'Instances' object for each classification request by using the constructor Instances(Instances dataset); however, it did not yield different results. Using a different Classifier is not an option since it takes too much time to construct the object and it needs to respond fast (10 to 20 milliseconds at most), and to my understanding the problem does not lie there.
I assume that the problem comes from using the same Instances object. Based on the documentation of Instances, the constructor only copies the references to the header information, which explains why the problem was not solved by creating another object. Is there an option to create a completely different Instances object based on a previous object without going over all the attributes in real time?
Any other performance-oriented solution will also be highly appreciated.
Thanks!
Probably you have solved this issue by now. This is just for those who face a similar issue. I was testing instances in a multi-threaded Java application, and I faced the same exception. To break down the problem, there are two issues in your case:
The first one is that you are using the same Instances object to set the data for each request. With this, you will most probably run into concurrency issues that might not break the code but will yield wrong results, because the data from different requests could get mixed up. Your best bet is to create a new Instances object for each request. However, this is not what is producing the exception you are facing. It is the second issue.
The second issue is that you are using the same Classifier, and this is what is producing the exception. In my case I had the classifiers built, serialized, and written to a file when I constructed them. Once I needed to classify a test set, I would deserialize the object in each thread, giving me a new instance. However, a proper way to solve this is to use the weka.classifiers.Classifier.makeCopy(model) static method to make a copy for each request, which internally uses serialization.
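A hedged sketch of the per-request copies described above; ClassificationService and its fields are illustrative. Note that makeCopy lives on weka.classifiers.AbstractClassifier in Weka 3.7+ (it was on weka.classifiers.Classifier in 3.6), and since copying goes through serialization you should measure whether it fits the 10-20 ms budget (a pool or ThreadLocal of copies is a common variation).

import weka.classifiers.AbstractClassifier;
import weka.classifiers.Classifier;
import weka.core.DenseInstance;
import weka.core.Instance;
import weka.core.Instances;

public class ClassificationService {
    private final Classifier sharedModel; // built once at startup, never used directly
    private final Instances header;       // attribute information only, no rows

    public ClassificationService(Classifier sharedModel, Instances header) {
        this.sharedModel = sharedModel;
        this.header = header;
    }

    public double[] classify(double[] attributeValues) throws Exception {
        // Fresh copies per request, so the mutable state inside the classifier's
        // internal filters (PrincipalComponents, Standardize) is never shared.
        Classifier modelCopy = AbstractClassifier.makeCopy(sharedModel);
        Instances datasetCopy = new Instances(header, 0);

        Instance instance = new DenseInstance(1.0, attributeValues);
        instance.setDataset(datasetCopy);
        return modelCopy.distributionForInstance(instance);
    }
}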

How do you make a unit test when the results vary?

I am building an application that queries a web service. The data in the database varies and changes over time. How do I build a unit test for this type of application?
The web service sends back XML or a "no search results" HTML page. I cannot really change the web service. My application basically queries the web service using HTTPURLConnection and gets the response as a String.
Hope that helps with more detail.
Abstract out the web service using a proxy that you can mock out. Have your mock web service return various values representing normal data and corner cases. Also simulate getting exceptions from the web service. Make sure your code works under these conditions and you can be reasonably certain that it will work with any values the web service supplies.
Look at jMock for Java mocking.
Strictly speaking of unit-testing, you can only test units that have a deterministic behavior.
A test that connects to an external web server is an integration test.
The solution is to mock the HTTPURLConnection - that is, create a class in your unit tests that derives from the HTTPURLConnection class and returns a hardcoded or parameterizable value. EDIT: note this can be done manually, without any mocking framework.
The class that queries the web server should not instantiate the HTTPURLConnection, but receive it via a parameter. In the unit tests, you create the HTTPURLConnectionMock and pass it to the class that queries the web server, which uses it as if it were a real HTTPURLConnection. In the production code, you create a real HTTPURLConnection and pass it to the class.
You can also make your HTTPURLConnectionMock able to throw an IOException, to test error conditions. Just add a method that tells it to throw an exception instead of returning a result on the next request.
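A hedged sketch of such a hand-rolled mock (no framework needed); the real class is java.net.HttpURLConnection, and the class under test is assumed to accept the connection through its constructor.

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Returns a canned response body instead of talking to the real web service.
class HttpURLConnectionMock extends HttpURLConnection {
    private final String body;
    private IOException toThrow; // set to simulate a network failure

    HttpURLConnectionMock(URL url, String body) {
        super(url);
        this.body = body;
    }

    void failNextRequestWith(IOException e) { this.toThrow = e; }

    @Override
    public InputStream getInputStream() throws IOException {
        if (toThrow != null) {
            throw toThrow;
        }
        return new ByteArrayInputStream(body.getBytes(StandardCharsets.UTF_8));
    }

    // Abstract methods we don't care about in these tests.
    @Override public void connect() {}
    @Override public void disconnect() {}
    @Override public boolean usingProxy() { return false; }
}

A test can then construct something like new HttpURLConnectionMock(new URL("http://example.com/search"), "<results/>"), hand it to the (hypothetical) class that queries the web server, and assert on the parsed result; calling failNextRequestWith(new IOException("timeout")) exercises the error path.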
Your question is a little open-ended but there are definitely some testable options just using the information above:
You could test whether the query works at all. Assert that you should get back a non-empty / non-null result set.
You could test whether the query result is a valid result set. Assert that the results pass your validation code (so at this point, you know that the data is non-null, not nonsensical, and possibly useful).
If you know anything about the data schema / data description, you could assert that the fields are sensible in relation to each other. For example, if you get a result with a helicopter, it shouldn't be associated with an altitude of negative 100 meters.
If you know anything about the probabilistic distribution of the data, you should be able to collect a set of data and assert that your resulting distribution is within a standard deviation of what you'd expect to see.
I'm sure that with some more information, you'll get a pile of useful suggestions.
It sounds like you're testing at too high a level. Consider mocking the web service interface and writing other unit tests on the data layer that accesses the database. Some more detail here might make this question easier to answer, for example the situation you're trying to test.
I would normally expect the results of a unit test not to change, or at least to be within a range that you're expecting.
A problem I've run into is with convoluted (meaning "crappy") data models, where you can't ever be sure whether problems are due to code errors or data errors.
A symptom of this is when your application works great, passes all tests, etc. with mocked data or a fresh dataset, but breaks horribly when you run your application on real data.
