Can Cucumber with JUnit be used to unit test Spring? - java

Can Cucumber with JUnit, along with Mockito, be used to test Spring? Everywhere I look it is either Mockito with the JUnit runner or the Spring JUnit runner, and I have never seen unit test cases that combine Cucumber, JUnit and Mockito for Spring.
Is that really possible?

The problems those technologies aim to solve are very different.
In a unit test you are trying to test the smallest possible piece of your software (normally a single class) and you abstract (mock) everything else. This is a very good use case for Mockito.
The JUnit runner and the Spring test runner are the engines that run your tests; they take care of configuration and class loading for you.
Cucumber lets you write tests in a more business-friendly language (Gherkin); your goal there is to cover business use cases and exercise your entire application to see whether it solves the problem.
For example: let's say you have an application that computes the tax to be added to a check.
You can have a class that, given a product, connects to an external service and returns the tax rate for that item. To test that class you don't hit the real service; you use Mockito to create a mock service whose return values you control. Here you are testing your class's ability to make requests to external services.
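As a minimal sketch of that kind of unit test (the TaxService interface and TaxCalculator class are made up for the example):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class TaxCalculatorTest {

    // Hypothetical collaborator that normally calls the external service
    interface TaxService {
        double taxRateFor(String productId);
    }

    // Hypothetical class under test
    static class TaxCalculator {
        private final TaxService taxService;
        TaxCalculator(TaxService taxService) { this.taxService = taxService; }
        double taxFor(String productId, double price) {
            return price * taxService.taxRateFor(productId);
        }
    }

    @Test
    public void addsTaxUsingTheRateReturnedByTheService() {
        TaxService taxService = mock(TaxService.class);
        when(taxService.taxRateFor("book-1")).thenReturn(0.10);

        TaxCalculator calculator = new TaxCalculator(taxService);

        // The real service is never hit; we only verify our own class's logic
        assertEquals(1.0, calculator.taxFor("book-1", 10.0), 0.001);
    }
}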
Now suppose you have a business requirement that says that if you have multiple items on your check, only some of them are taxed. Here you write a Cucumber test and send the request to your application. You are testing the full logic of your app: recognizing items, getting taxes for the right ones, and so on.
That's a case where you have Cucumber and Mockito running against the same app. And who runs them? Most likely the JUnit runner.
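On the Cucumber side, the glue to JUnit is typically a thin runner class. A minimal sketch, assuming the io.cucumber artifacts (package names differ between Cucumber versions) and feature files plus step definitions that you would still write yourself:

import org.junit.runner.RunWith;

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

// JUnit drives the run; Cucumber reads the .feature files and calls your step definitions
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "classpath:features",    // assumed location of the .feature files
        glue = "com.example.checkout.steps" // assumed package with the step definitions
)
public class CheckoutFeatureTest {
    // intentionally empty: the runner and the annotations do all the work
}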

Related

JUnit+Mockito Or RestAssured

Please help me pick the right approach for testing a REST API (Java).
I've used JUnit and Mockito in my previous project(s) and I know it takes a lot of time and effort to get enough code coverage. But recently I came across REST-assured and it seems promising. Please suggest based on your experience.
JUnit in this instance is a runner. This question is really about the difference between unit testing and integration testing, both of which can be implemented using JUnit as the surrounding execution framework.
There are many combinations of frameworks you could use. Some common ones are:
JUnit + Mockito for unit testing, where your REST API controller beans are wired up to lightweight/mocked dependencies and you exercise the API through Java
JUnit + Cucumber + REST-assured for integration testing, where you write a test fixture that executes against a running server in order to exercise its API (see the sketch below)
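As a sketch of the second combination, assuming a server already running on localhost:8080 and a hypothetical /tax endpoint and response field:

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.Test;

public class TaxApiIT {

    @Test
    public void returnsTheTaxRateForAnItem() {
        given()
            .baseUri("http://localhost:8080")  // assumes the server is already running
        .when()
            .get("/tax/book-1")                // hypothetical endpoint
        .then()
            .statusCode(200)
            .body("rate", equalTo(0.1f));      // hypothetical response field
    }
}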
There are points between these extremes too. You have to decide where your tests sit on the test pyramid. For highly permutational tests, you will want to write unit tests in order to be able to achieve the permutations easily and get speed of execution. If you're really trying to smoke test that your APIs are available, having already unit tested them, then you'll want to write a small number of integration tests.
In between the points of the spectrum there is a combination of Mockito + the native test library of your framework. For example, Spring has Spring Test and Jersey has the JerseyTest/Grizzly framework. In these setups a non-real, in-process HTTP server is stood up to host your REST service, and you test it with simulated REST calls through the framework's client. This unit tests the HTTP marshalling layer as well as the first layer of REST controller code.
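For the Spring flavour, a minimal sketch using MockMvc in standalone mode; the TaxController here is a made-up example controller defined inline so the test is self-contained:

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.Test;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

public class TaxControllerTest {

    // Hypothetical controller; in a real project it would live in production code
    @RestController
    static class TaxController {
        @GetMapping("/tax/{productId}")
        public String taxRate(@PathVariable String productId) {
            return "0.1"; // hard-coded here; normally delegated to a (mocked) service
        }
    }

    @Test
    public void exposesTheTaxRateOverHttp() throws Exception {
        // standaloneSetup stands up the Spring MVC machinery without a real HTTP server
        MockMvc mockMvc = MockMvcBuilders.standaloneSetup(new TaxController()).build();

        mockMvc.perform(get("/tax/book-1"))
               .andExpect(status().isOk())
               .andExpect(content().string("0.1"));
    }
}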

TESTING SELENIUM, TESTNG Questions

I am an entry-level tester. I have mainly been doing manual testing for a company in the UK, following scripts on a spreadsheet that I wrote in the BDD format; however, I have been learning some automation on the side, as that's what I want to move into full time. I have some questions, which are as follows.
I've been using Selenium WebDriver + Java bindings to write simple tests such as logging in or filling out a registration form. I've also set up log4j, but only for basic, low-level logging. I have now come across TestNG. My main question is: is this framework used by testers or developers? Is TestNG only for unit tests, or UI tests too?
From what I've learnt so far, the developer does the unit and component tests and the tester does the services/UI tests. Is this correct?
Unfortunately I was put into a team of developers and not testers, as this is my first job out of university, so I haven't had the chance to learn from other testers. There was no plan for me when I started, just that I was going to be the first tester in this development team, without any prior testing knowledge.
Which is why I need a bit of guidance on these issues.
My main question is: is this framework used by testers or developers? Is TestNG only for unit tests, or UI tests?
TestNG can be used by both developers and automation testers. It is a tool that can operate alongside JUnit; in some cases it is used to create test suites, which let you split all the test cases based on specific criteria (time, module, complexity). The framework can be used for unit testing and integration testing as well as UI testing.
TestNG has in some cases also replaced JUnit entirely. With this approach you get a framework with out-of-the-box capabilities such as DataProviders, multi-threading support and more; you could check this link, and consider it a powerful alternative to JUnit.
From what I've learnt so far, the developer does the unit and component tests and the tester does the services/UI tests. Is this correct?
Unit testing, which I consider very similar to "component testing", is done by the developers. If you have web services or a REST API, developers are sometimes in charge of creating integration tests, basically verifying that the services work as expected, return JSON/XML in the correct format, and pass other kinds of validations.
Testers can also check services, using tools such as JMeter or SoapUI; they check more things related to the business logic.
Finally, I would say UI testing is done in most places by the manual and automation testing team; in places where there is no QA department this task also belongs to the dev team.
In order to run tests you need a test runner. It could be anything, but the most common ones in the Java world are JUnit and TestNG. With those frameworks you can run the tests annotated with @Test, group the tests the way you want, and run them in parallel.
Testers use them to run Selenium tests and do assertions; for assertions it helps to know the Hamcrest matchers. They also give you reports after the tests have completed.
Developers would use the same frameworks for unit testing purposes.
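For example, a minimal sketch of the kind of UI check a tester might write with Selenium, TestNG and Hamcrest (the URL and expected title are placeholders):

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.containsString;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class LoginPageTest {

    private WebDriver driver;

    @BeforeClass
    public void startBrowser() {
        driver = new ChromeDriver(); // assumes chromedriver is on the PATH
    }

    @Test(groups = "ui")
    public void loginPageHasTheRightTitle() {
        driver.get("https://example.com/login"); // placeholder URL
        assertThat(driver.getTitle(), containsString("Login"));
    }

    @AfterClass
    public void stopBrowser() {
        driver.quit();
    }
}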
Check out the folks at toolsqa.com; they have pretty comprehensive tutorials on using Selenium with TestNG.
TestNG is basically used by developers for unit testing, I agree. But it is also widely used for system test automation with Selenium. The framework is inspired by JUnit, and most automation test developers use it because of its advantages and the added features that support reporting.
These are the advantages I got from using this framework:
1. Support for parameters.
2. Supports dependent test methods.
3. Flexible test configuration and a powerful execution model.
4. Embeds BeanShell for further flexibility.
5. A more elegant way of handling parameterized tests with the data-provider concept.
6. Supports multiple instances of the same test class.
7. Extensible via different tools and plug-ins such as Eclipse, Maven and IDEA.
8. Uses default JDK functions for runtime and logging (no extra dependencies).
9. Supports annotations such as @BeforeSuite, @AfterSuite, @BeforeClass, @AfterClass, @BeforeTest, @AfterTest, @BeforeGroups, @AfterGroups, @BeforeMethod, @AfterMethod, @DataProvider, @Factory, @Listeners, @Parameters and @Test.
The part I find most useful in TestNG is that, using a data provider, I can easily read test inputs and expected results from Excel, and I can see the passed/failed/skipped test cases in an emailable report.
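As a minimal sketch of the data-provider idea (hard-coded rows here instead of Excel, to keep the example self-contained):

import static org.testng.Assert.assertEquals;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class AdditionDataProviderTest {

    // Each row is one set of test inputs plus the expected result
    @DataProvider(name = "sums")
    public Object[][] sums() {
        return new Object[][] {
            { 1, 1, 2 },
            { 2, 3, 5 },
            { -1, 1, 0 }
        };
    }

    // TestNG runs this method once per row and reports each run separately
    @Test(dataProvider = "sums")
    public void addsTwoNumbers(int a, int b, int expected) {
        assertEquals(a + b, expected);
    }
}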
For testing a system we don't need any training or extra classes. If we know the system requirements and what the end user wants from the system, we can start testing. If any deviation from the expected behaviour is found, mark it as an issue, raise a defect and track it until it gets resolved; then retest and confirm that the system works as expected. Even at the unit test level this principle holds; the only difference is that there we can do structure-based testing.
To your questions:
1. My main question is: is this framework used by testers or developers? Is TestNG only for unit tests, or UI tests?
Answer: TestNG can be used for unit testing as well as UI testing. The advantage of TestNG over JUnit is that you don't need to write code for test result reporting.

What are typical use cases for test execution listeners in testing frameworks such as JUnit and TestNG?

JUnit, TestNG, and Spring testing all have test execution listeners as an extension mechanism. Test execution listeners seem to be a low-level feature of interest to framework developers and tool developers.
What are the main use cases in which test execution listeners are useful? Can they be useful to application developers?
In the case of TestNG, the listeners are the most remarkable thing for me. TestNG listeners allow me to simplify the test method content and to manage, for each test, server startup, user registration and populating the artifacts needed for the tests, and also to clean up the environment for each test cycle.
At each listener level I perform the following operations, so I don't have to bother about them inside my tests:
IExecutionListener
- onExecutionStart(): Emma instrumentation, server start
- onExecutionFinish(): server shutdown, Emma report generation
ISuiteListener
- set environment properties (e.g. key store paths)
- populate users
ITestListener
- onStart: artifact deployment
- onFinish: artifact cleanup
IReporter
- generate the TestNG report
- generate the Surefire report
- export data for the dashboard
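For illustration, a minimal sketch of one such listener (the print statements stand in for whatever server start/stop and instrumentation your environment needs); it would be registered via the @Listeners annotation or in testng.xml:

import org.testng.IExecutionListener;

public class EnvironmentListener implements IExecutionListener {

    @Override
    public void onExecutionStart() {
        // runs once, before any test: start servers, set up instrumentation, etc.
        System.out.println("Starting test environment...");
    }

    @Override
    public void onExecutionFinish() {
        // runs once, after the whole run: shut servers down, write reports, etc.
        System.out.println("Tearing test environment down...");
    }
}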
For JUnit, the listeners are designed as a reporting mechanism, and they are used like this internally. So you can display counts of tests executed, successes, failures, errors, that sort of thing. This is also used externally, such as in maven-surefire. See JUnit4RunListener.java as an example.
Another use would be to output in a different format, such as XML. I think the main use cases for the other frameworks are the same.
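A minimal sketch of a RunListener used that way, printing a summary at the end of a run and registered programmatically through JUnitCore:

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

public class SummaryListener extends RunListener {

    @Override
    public void testFailure(Failure failure) {
        System.out.println("FAILED: " + failure.getDescription());
    }

    @Override
    public void testRunFinished(Result result) {
        System.out.println(result.getRunCount() + " run, "
                + result.getFailureCount() + " failed, "
                + result.getIgnoreCount() + " ignored");
    }

    // Tiny example test class so the sketch is self-contained
    public static class OneTest {
        @org.junit.Test
        public void passes() { }
    }

    public static void main(String[] args) {
        JUnitCore core = new JUnitCore();
        core.addListener(new SummaryListener());
        core.run(OneTest.class); // replace with your own test classes
    }
}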
In JUnit, the listener class is not meant to be used the way Dharshana uses his test listeners in TestNG, that is, as setup/teardown. The objects passed to the listeners (Description, Failure, Result) are immutable and don't encourage direct access to the test class itself. I'm not sure about TestNG; Cedric would be a better person to ask about that.
Are they useful for application developers? They may well be, depending upon how your tests are set up. They would only be used in the context of tests, so if they can improve them, then go ahead and use them. One use case is JUnit test report enrichment with JavaDoc; see my answer. To recap: if the developers add a test for a specific bug, they can add an annotation to that test linking it back to the bug. A custom RunListener then collects all of the information in the annotations and produces a report for the final customer.

Data-driven tests with JUnit

What do you use for writing data-driven tests in JUnit?
(My definition of) a data-driven test is a test that reads data from some external source (file, database, ...), executes one test per line/file/whatever, and displays the results in a test runner as if you had separate tests; the result of each run is displayed separately, not in one huge aggregate.
In JUnit 4 you can use the Parameterized test runner to do data-driven tests.
It's not terribly well documented, but the basic idea is to create a static method (annotated with @Parameters) that returns a Collection of Object arrays. Each of these arrays is used as the arguments for the test class constructor, and then the usual test methods can be run using fields set in the constructor.
You can write code to read and parse an external text file in the @Parameters method (or get data from another external source), and then you can add new tests by editing this file without recompiling the tests.
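A minimal sketch of that runner with inline data; a file-reading version would build the same collection inside the @Parameters method:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class AdditionParameterizedTest {

    @Parameters
    public static Collection<Object[]> data() {
        // each array becomes one test instance; read these from a file if you prefer
        return Arrays.asList(new Object[][] {
            { 1, 1, 2 },
            { 2, 3, 5 },
            { -1, 1, 0 }
        });
    }

    private final int a;
    private final int b;
    private final int expected;

    public AdditionParameterizedTest(int a, int b, int expected) {
        this.a = a;
        this.b = b;
        this.expected = expected;
    }

    @Test
    public void addsTwoNumbers() {
        assertEquals(expected, a + b);
    }
}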
This is where TestNG, with its @DataProvider, shines. That's one reason why I prefer it to JUnit. The others are test dependencies and parallel threaded tests.
I use an in-memory database such as HSQLDB so that I can either pre-populate it with a "production-style" set of data or start with an empty database and populate it with the rows I need for my testing. On top of that I write my tests using JUnit and Mockito.
I use a combination of DbUnit, jMock and JUnit 4. You can then either run it as a suite or separately.
You are better off extending TestCase with a DataDrivenTestCase that suits your needs.
Here is a working example:
http://mrlalonde.blogspot.ca/2012/08/data-driven-tests-with-junit.html
Unlike parameterized tests, it allows for nicely named test cases.
I'm with @DroidIn.net; that is exactly what I am doing. However, to answer your question literally, "and displays the results in a test runner as if you had separate tests", you have to look at the JUnit 4 Parameterized runner. DbUnit doesn't do that. If you have to do a lot of this, honestly TestNG is more flexible, but you can absolutely get it done in JUnit.
You can also look at the JUnit Theories runner, but my recollection is that it isn't great for data driven datasets, which kind of makes sense because JUnit isn't about working with large amounts of external data.
Even though this is quite an old topic, I still thought of contributing my share.
I feel JUnit's support for data-driven testing is too limited and too unfriendly. For example, in order to use Parameterized we need to write our own constructor. With the Theories runner we do not have control over the set of test data that is passed to the test method.
There are more drawbacks, as identified in this blog post series: http://www.kumaranuj.com/2012/08/junits-parameterized-runner-and-data.html
There is now a comprehensive solution coming along pretty nicely in the form of EasyTest, a framework extended out of JUnit that is meant to give a lot of functionality to its users. Its primary focus is to perform data-driven testing using JUnit, although you are no longer required to actually depend on JUnit. Here is the GitHub project for reference: https://github.com/anujgandharv/easytest
If anyone is interested in contributing their thoughts/code/suggestions, now is the time. You can simply go to the GitHub repository and create issues.
Typically, data-driven tests use a small testable component to handle the data (a file-reading object, or mock objects). For databases and other resources outside of the application, mocks are used to simulate the other systems (web services, databases etc). Typically there are external data files that hold the inputs and expected outputs, so the data files can be added to the VCS.
We currently have a props file with our ID numbers in it. This is horribly brittle, but it is easy to get something going. Our plan is to initially make these ID numbers overridable with -D properties in our Ant builds.
Our environment uses a legacy DB with horribly intertwined data that is not loadable before a run (e.g. by DbUnit). Eventually we would like to get to where a unit test queries the DB to find an ID with the property under test, then uses that ID in the unit test. It would be slow and is more properly called integration testing, not "unit testing", but we would be testing against real data to avoid the situation where our app runs perfectly against test data but fails with real data.
Some tests lend themselves to being interface driven.
If the database/file reads go through an interface call, then simply have your unit test implement the interface, and the unit test class can return whatever data you want.
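A minimal sketch of that idea, with a made-up reader interface and a test that supplies canned data instead of touching a file or database:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class ReportGeneratorTest {

    // Hypothetical abstraction over the file/database read
    interface CustomerSource {
        List<String> customerNames();
    }

    // Hypothetical class under test
    static class ReportGenerator {
        private final CustomerSource source;
        ReportGenerator(CustomerSource source) { this.source = source; }
        int customerCount() { return source.customerNames().size(); }
    }

    @Test
    public void countsTheCustomersReturnedByTheSource() {
        // The test itself provides the data: no file, no database
        CustomerSource cannedSource = () -> Arrays.asList("Alice", "Bob");
        assertEquals(2, new ReportGenerator(cannedSource).customerCount());
    }
}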

How can I improve my JUnit tests?

Right now my JUnit tests read like one long story:
I create 4 users
I delete 1 user
I try to login with the deleted user and make sure it fails
I login with one of the 3 remaining user and verify I can login
I send a message from one user to the other and verify that it appears in the outbox of the sender and in the inbox of the receiver.
I delete the message
...
...
Advantages:
The tests are quite effective (very good at detecting bugs) and very stable, because they only use the API; if I refactor the code then the tests are refactored too. As I don't use "dirty tricks" such as saving and reloading the DB in a given state, my tests are oblivious to schema changes and implementation changes.
Disadvantages:
The tests are getting difficult to maintain; any change in a test affects other tests. The tests run 8-9 minutes, which is fine for continuous integration but a bit frustrating for developers. Tests cannot be run in isolation; the best you can do is to stop after the test you are interested in has run, but you absolutely must run all the tests that come before it.
How would you go about improving my tests?
First, understand that the tests you have are integration tests (they probably access external systems and hit a wide range of classes). Unit tests should be a lot more specific, which is a challenge on an already-built system. The main obstacle is usually the way the code is structured:
i.e. classes tightly coupled to external systems (or to other classes that are). To avoid hitting external systems during the unit tests, you need to build the classes in such a way that those systems can be abstracted away.
Update 1: Read the following, and consider that the resulting design will allow you to test the encryption logic without hitting files/databases: http://www.lostechies.com/blogs/gabrielschenker/archive/2009/01/30/the-dependency-inversion-principle.aspx (not in Java, but it illustrates the issue very well). Also note that you can then write really focused integration tests for the readers/writers, instead of having to test it all together.
I suggest:
Gradually introduce real unit tests into your system. You can do this when making changes and developing new features, refactoring appropriately.
While doing that, include focused integration tests where appropriate. Make sure you are able to run the unit tests separately from the integration tests.
Consider that your tests are close to testing the system as a whole, and thus differ from automated acceptance tests only in that they operate at the API boundary. Given this, think about factors related to the importance of the API for the product (such as whether it will be used externally) and whether you have good coverage with automated acceptance tests. This can help you understand the value of having these tests in your system, and also why they naturally take so long. Then decide whether you will test the system as a whole at the interface level, or at both the interface and API levels.
Update 2: Based on other answers, I want to clarify something about doing TDD. Let's say you have to check whether some given piece of logic sends an email, logs the info to a file, saves data in the database, and calls a web service (not all at once, I know, but you start adding tests for each of those). In each test you don't want to hit the external systems; what you really want to test is whether the logic makes the calls to those systems that you expect it to make. So when you write a test that checks that an email is sent when you create a user, what you test is whether the logic calls the dependency that does the sending. Notice that you can write these tests and the related logic without actually having to implement the code that sends the email (and then having to access the external system to know what was sent). This helps you focus on the task at hand and gives you a decoupled system. It also makes it simple to test what is being sent to those systems.
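A minimal sketch of that kind of test with Mockito, using a made-up EmailSender dependency and UserService under test:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class UserServiceTest {

    // Hypothetical dependency that talks to the external mail system
    interface EmailSender {
        void sendWelcomeMail(String emailAddress);
    }

    // Hypothetical class under test
    static class UserService {
        private final EmailSender emailSender;
        UserService(EmailSender emailSender) { this.emailSender = emailSender; }
        void createUser(String emailAddress) {
            // ...persist the user somewhere...
            emailSender.sendWelcomeMail(emailAddress);
        }
    }

    @Test
    public void sendsAWelcomeMailWhenAUserIsCreated() {
        EmailSender emailSender = mock(EmailSender.class);

        new UserService(emailSender).createUser("alice@example.com");

        // We only check that the call was made; no real email is ever sent
        verify(emailSender).sendWelcomeMail("alice@example.com");
    }
}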
Unit tests should, ideally, be independent and able to run in any order. So, I would suggest that you:
break up your tests to be independent
consider using an in-memory database as the backend for your tests
consider wrapping each test or suite in a transaction that is rolled back at the end
profile the unit tests to see where the time is going, and concentrate on that
If it takes 8 minutes to create a few users and send a few messages, the performance problem may not be in the tests; rather, it may be a symptom of performance problems in the system itself. Only your profiler knows for sure!
[Caveat: I do NOT consider these kinds of tests to be 'integration tests', though I may be in the minority; I consider them to be unit tests of features, a la TDD.]
Right now you are testing many things in one method (a violation of "one assertion per test"). This is bad, because when any of those things changes, the whole test fails. It is then not immediately obvious why a test failed and what needs to be fixed. Also, when you intentionally change the behaviour of the system, you need to change more tests to match the changed behaviour (i.e. the tests are fragile).
To know what kind of tests are good, it helps to read more on BDD: http://dannorth.net/introducing-bdd http://techblog.daveastels.com/2005/07/05/a-new-look-at-test-driven-development/ http://jonkruger.com/blog/2008/07/25/why-behavior-driven-development-is-good/
To improve the test that you mentioned, I would split it into the following three test classes with these context and test method names:
Creating user accounts
- Before a user is created
  - the user does not exist
- When a user is created
  - the user exists
- When a user is deleted
  - the user does not exist anymore
Logging in
- When a user exists
  - the user can log in with the right password
  - the user cannot log in with a wrong password
- When a user does not exist
  - the user cannot log in
Sending messages
- When a user sends a message
  - the message appears in the sender's outbox
  - the message appears in the receiver's inbox
  - the message does not appear in any other message boxes
- When a message is deleted
  - the message no longer exists
You also need to improve the speed of the tests. You should have a unit test suite with good coverage which can run in a couple of seconds. If it takes longer than 10-20 seconds to run the tests, you will hesitate to run them after every change, and you lose some of the quick feedback that running the tests gives you. (If it talks to the database, it's not a unit test but a system or integration test; those have their uses, but they are not fast enough to be executed continually.) You need to break the dependencies of the classes under test by mocking or stubbing them. Also, from your description it appears that your tests are not isolated, but instead depend on side effects caused by previous tests; this is a no-no. Good tests are FIRST.
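For illustration, a minimal sketch of two of those focused tests against a made-up, in-memory UserAccounts class; in the real code base the collaborators behind it would be mocked or stubbed:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.HashSet;
import java.util.Set;

import org.junit.Test;

public class CreatingUserAccountsTest {

    // Hypothetical, trivially in-memory stand-in for the real implementation
    static class UserAccounts {
        private final Set<String> users = new HashSet<>();
        void create(String name) { users.add(name); }
        void delete(String name) { users.remove(name); }
        boolean exists(String name) { return users.contains(name); }
    }

    @Test
    public void whenAUserIsCreatedTheUserExists() {
        UserAccounts accounts = new UserAccounts();
        accounts.create("alice");
        assertTrue(accounts.exists("alice"));
    }

    @Test
    public void whenAUserIsDeletedTheUserDoesNotExistAnymore() {
        UserAccounts accounts = new UserAccounts();
        accounts.create("alice");
        accounts.delete("alice");
        assertFalse(accounts.exists("alice"));
    }
}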
Reduce dependencies between tests. This can be done by using mocks. Martin Fowler talks about this in "Mocks Aren't Stubs", in particular about why mocking reduces dependencies between tests.
You can use JExample, an extension of JUnit that allows test methods to have return values that are reused by other tests. JExample tests run with the normal JUnit plugin in Eclipse, and also work side by side with normal JUnit tests, so migration should be no problem. JExample is used as follows:
@RunWith(JExample.class)
public class MyTest {

    @Test
    public Object a() {
        return new Object();
    }

    @Test
    @Given("#a")
    public Object b(Object object) {
        // do something with object
        return object;
    }

    @Test
    @Given("#b")
    public void c(Object object) {
        // do some more things with object
    }
}
Disclaimer: I am among the JExample developers.
If you use TestNG you can annotate tests in a variety of ways. For example, you can annotate your tests above as long-running. Then you can configure your automated build / continuous integration server to run them, while the standard "interactive" developer build would not (unless the developer explicitly chooses to).
This approach depends on developers checking into your continuous build on a regular basis, so that the tests do get run!
Some tests will inevitably take a long time to run. The comments in this thread regarding performance are all valid. However, if your tests do take a long time, the pragmatic solution is to run them, but not let their time-consuming nature affect the developers to the point that they avoid running them.
Note: you can do something similar with JUnit by (say) naming tests in different ways and getting your continuous build to run a particular subset of test classes.
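A minimal sketch of the TestNG side of that idea; the CI suite would include the long-running group while the interactive build excludes it in its testng.xml or build configuration:

import org.testng.annotations.Test;

public class MessagingStoryTest {

    // Runs everywhere: cheap and fast
    @Test
    public void quickSanityCheck() {
        // ...
    }

    // Picked up by the CI suite only; the interactive build excludes this group
    @Test(groups = "long-running")
    public void fullUserAndMessagingStory() {
        // create users, send messages, verify inboxes, ...
    }
}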
By testing stories as you describe, you get very brittle tests. If only one tiny bit of functionality changes, your whole test might break, and you will then likely have to change all the tests affected by that change.
In fact the tests you are describing are more like functional tests or component tests than unit tests. So you are using a unit testing framework (JUnit) for non-unit tests. In my view there is nothing wrong with using a unit testing framework to do non-unit tests, if (and only if) you are aware of it.
So there are the following options:
Choose another testing framework which supports a "story telling" style of testing much better, as other users have already suggested. You will have to evaluate and find a suitable framework.
Make your tests more "unit test"-like. To do so you will need to break up your tests and maybe change your current production code. Why? Because unit testing aims at testing small units of code (unit testing purists suggest only one class at a time). By doing this your unit tests become more independent. If you change the behaviour of one class, you only need to change a relatively small amount of unit test code, which makes your unit tests more robust. During that process you may find that your current code does not support unit testing very well, mostly because of dependencies between classes. This is the reason you will also need to modify your production code.
If you are in a project and running out of time, both options might not help you any further. Then you will have to live with those tests, but you can try to ease your pain:
Remove code duplication in your tests: as in production code, eliminate duplication and put the shared code into helper methods or helper classes. If something changes, you may only need to change the helper method or class. This way you converge towards the next suggestion.
Add another layer of indirection to your tests: write helper methods and helper classes that operate at a higher level of abstraction. They should act as an API for your tests. These helpers call your production code; your story tests should only call these helpers. If something changes, you need to change only one place in your API and don't need to touch all your tests.
Example signatures for your API:
createUserAndDelete(String[] usersForCreation, String[] usersForDeletion);
logonWithUser(String user);
sendAndCheckMessageBoxes(String fromUser, String toUser);
For general unit testing I suggest having a look at xUnit Test Patterns by Gerard Meszaros.
For breaking dependencies in your production code, have a look at Working Effectively with Legacy Code by Michael Feathers.
In addition to the above, pick up a good book on TDD (I can recommend "TDD and Acceptance TDD for Java Developers"). Even though it approaches things from a TDD point of view, there is a lot of helpful information about writing the right kind of unit tests.
Find someone who has a lot of knowledge in the area and use them to figure out how you can improve your tests.
Join a mailing list to ask questions and just read the traffic coming through, such as the JUnit list at Yahoo (something like groups.yahoo.com/junit). Some of the movers and shakers in the JUnit world are on that list and actively participate.
Get a list of the golden rules of unit tests and stick them on your (and others') cubicle wall, something like:
Thou shalt never access an external system
Thou shalt only test the code under test
Thou shalt only test one thing at once
etc.
Since everyone else is talking about structure, I'll pick different points. This sounds like a good opportunity to profile the code to find bottlenecks and to run it through code coverage to see if you are missing anything (given the time it takes to run, the results could be interesting).
I personally use the NetBeans profiler, but there are profilers in other IDEs and standalone ones as well.
For code coverage I use Cobertura, but EMMA works too (EMMA had an annoyance that Cobertura didn't have... I forget what it was and it may not be an issue anymore). Those two are free; there are paid ones as well that are nice.
