Integration tests with JUnit and web MVC - java

I am working on a Spring MVC application.
Unit tests are already written (nice 85% code coverage and lots of assertions :)
Now I need to write integration tests. I have already had a look around Stack Overflow, but I still have some questions.
Right now I am using a standard Maven structure with main and test directories. Do you recommend creating another directory called integration-tests and writing the integration tests there? If so, why?
An alternative would be to write the integration tests in the "test" directory, mixing unit and integration tests, and then use Maven to run one or the other (maybe using different file suffixes depending on the test type).
In any case, the way I am planning to run the integration tests is essentially to (almost) reuse the unit tests of the controllers WITHOUT injecting mocks. That means my current stack (Controller-Service-DAO) will contain no mocks (in the unit tests, of course, it does); by removing the mocks I will hit the real resources (database and so on). Is this a good approach?
UPDATE: Just to clarify, the project has no JSP or any HTML-related views; the output is XML, which can easily be validated with XSDs.
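To make the question concrete, here is a minimal sketch of what such a no-mock test could look like, assuming Spring's spring-test support and the JDK's XSD validation API (the controller, handler method, and file names below are made up):

import java.io.File;
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

// Integration test: the real Controller-Service-DAO stack is wired from the
// Spring context, so the test hits the real database instead of mocks.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:applicationContext.xml") // hypothetical context file
public class ReportControllerIT {

    @Autowired
    private ReportController controller; // hypothetical controller under test

    @Test
    public void producesSchemaValidXml() throws Exception {
        String xml = controller.generateReport("2013"); // hypothetical handler method

        // Validate the XML output against the XSD; validate() throws if it does not match
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("src/test/resources/report.xsd"));
        schema.newValidator().validate(new StreamSource(new StringReader(xml)));
    }
}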

I am assuming that you have used JUnit to create your unit tests to achieve the impressive 85% code coverage. Please note that JUnit is designed for unit testing only (hence the name JUnit). Unit testing is done while the code is running in the development environment.
Integration testing can only be performed once the target code has been deployed in the target integration environment.
You have mentioned that your application isn't a web application. Is it a SOAP/REST web service? If so, you can use SoapUI [http://www.soapui.org/] to create and save automated regression/integration tests.

Related

How do I write tests for an application that loads functions at run time?

I built a backend server (ready-to-serve) that can load jar files as plugins and use the methods in them to serve different functionalities.
I want to write tests for it but I'm not sure what kind of tests I should write.
You should have a look at the different kinds of tests: unit tests, integration tests, end-to-end tests.
In case you are writing the code for the imported jars yourself, you could write unit tests for the helper functions and services inside them. They should be small and pure (self-contained).
I guess that you have a generalised interface exposed by each jar that your main application uses to communicate with it.
You could write a general integration test that loads one of the jars and calls a general method to show that the execution succeeded, something like a health check. Then you could write more tests for other functions and the expected results, though they will probably become more focused on each separate jar.
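As a rough illustration (all class, path and method names here are hypothetical), such a health-check integration test might load one plugin jar exactly as the server would and invoke a method from its exposed interface via reflection:

import java.net.URL;
import java.net.URLClassLoader;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PluginHealthCheckIT {

    @Test
    public void pluginLoadsAndResponds() throws Exception {
        URL jar = new URL("file:plugins/sample-plugin.jar"); // hypothetical plugin location
        try (URLClassLoader loader =
                new URLClassLoader(new URL[] { jar }, getClass().getClassLoader())) {
            // Load the plugin class and call a method from the generalised interface
            Class<?> clazz = loader.loadClass("com.example.SamplePlugin"); // hypothetical class
            Object plugin = clazz.getDeclaredConstructor().newInstance();
            Object reply = clazz.getMethod("ping").invoke(plugin); // assumes a no-arg ping() method
            assertEquals("pong", reply);
        }
    }
}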
When it comes to testing plugins, having only unit tests is not enough. You'll need integration tests too; otherwise it is not possible to test combinations of plugins and/or dependencies between them.
Coming back to your GitHub project: from what I see, you basically only need to test the plugin API and maybe some shared resources: the configuration file, datasources and so on.

Should we mock in Cucumber testing while testing Java code? To what extent should we use Cucumber?

I am a Java developer. We want to use Cucumber testing in our project. We are working mainly on creating APIs. I am good with unit testing and am researching Cucumber.
I am thinking about testing persistence methods (CRUD operations) as a starter. My question is: what could the scenarios be in this kind of testing?
Also, should I mock the database by creating tables in the feature file? Should I use Mockito with Cucumber to mock calls to some other services which connect to the database and server?
What should the Cucumber tests be in these scenarios, and what's the best way to create a framework to use Cucumber in our Java API project?
Also, how do we populate models if not using a database?
IMO Gherkin (the language you write Cucumber features in) is good for writing business-readable, simple scenarios.
To answer quickly, I would say that Cucumber is not a good fit for testing methods, if that is what you want to do.
As you can see from the file naming convention, you write *.feature files, and I think these files must only contain feature-related descriptions.
However, if you do have features to test, you have to choose how to test them:
Disconnected: tests can be run quickly by your CI
- you will have to mock everything that cannot be started up in the build lifecycle
- there are solutions to start almost anything using Docker, like Testcontainers (see the sketch after this list)
Connected to an environment:
- you do not have to mock anything
- your tests may be slower
- your tests may break because of the environment (failed deployment, server down, etc.)
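As a sketch of the disconnected option (the feature wording, repository and domain classes are hypothetical), the Cucumber step definitions below run CRUD scenarios against a real PostgreSQL started by Testcontainers instead of mocking the database:

import io.cucumber.java.Before;
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.testcontainers.containers.PostgreSQLContainer;
import static org.junit.Assert.assertNotNull;

public class CustomerCrudSteps {

    // Throwaway database for the test run; no dependency on a shared environment
    private static final PostgreSQLContainer<?> DB = new PostgreSQLContainer<>("postgres:15");

    private CustomerRepository repository; // hypothetical DAO under test
    private Customer found;

    @Before
    public void startDatabase() {
        if (!DB.isRunning()) {
            DB.start();
        }
        repository = new CustomerRepository(DB.getJdbcUrl(), DB.getUsername(), DB.getPassword());
    }

    @Given("a customer named {string} exists")
    public void aCustomerExists(String name) {
        repository.save(new Customer(name));
    }

    @When("I look up the customer named {string}")
    public void iLookUpTheCustomer(String name) {
        found = repository.findByName(name);
    }

    @Then("the customer is returned")
    public void theCustomerIsReturned() {
        assertNotNull(found);
    }
}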

Automated test of deployed applications

Often a testing framework for automated testing, like Selenium, is used to continuously verify the integrity of a deployed application. These tests often cover real user scenarios and may also exercise a range of deployed applications in combination.
We would like to achieve somewhat the same for a "backend only" application, that is, an application (or really several) without a frontend. We are currently building a series of batch jobs where one job produces input for the next.
We have a great unit-test suite that tests the individual jobs; however, we would really like to test the series of jobs when deployed to some environment.
Do you have any suggestions for such a testing framework? The framework must be able to leverage other Java SDKs such as the AWS SDK (e.g. to start a batch job, inject data into queues, etc.). Whether the framework with the tests needs to be deployed as an application as well or run directly from CI is secondary.
If you already have backend tests that can be run against production, all you need is to schedule running those tests. Jenkins is fine for that (https://wiki.jenkins.io/display/JENKINS/Schedule+Build+Plugin).
You could have emails (or other alerts) for failed jobs. Jenkins will also take care of test reports, exactly as it does for unit tests.
Technologies for scheduling test runs
You could schedule running your tests using any other technology, for example Amazon AWS instances, AWS Elastic Beanstalk Worker Environments (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html), etc.
I find Jenkins most reasonable, because you have out-of-the-box support for test reports, notifications, etc.
For other technologies you would have to write reporting and notifications on your own.
Technologies for writing tests
You could write tests in any technology that is capable of making HTTP REST calls. For performance tests, JMeter or Gatling are good choices.
For acceptance tests you could use RestEasy, TestRestTemplate from Spring, Apache HTTP client, etc.
As a test running framework you could use JUnit 4, JUnit 5, TestNG or Spock (if you are fine with the Groovy language). The test structure could be similar to that of ordinary tests: well-named, independent methods that test one thing well, meaningful assertions, etc.
For writing assertions my personal preference is AssertJ, but JavaHamcrest would also do.
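A small sketch of what such an acceptance test might look like, assuming JUnit 5 as the runner, the JDK 11 HttpClient for the REST calls and AssertJ for the assertions (the endpoint URL and response payload are made up):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;
import static org.assertj.core.api.Assertions.assertThat;

class BatchJobAcceptanceTest {

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void completedJobIsReportedByTheStatusEndpoint() throws Exception {
        // Hypothetical status endpoint exposed by the deployed backend
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://test-env.example.com/jobs/last/status"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // One well-named test, one thing checked, meaningful assertions
        assertThat(response.statusCode()).isEqualTo(200);
        assertThat(response.body()).contains("\"state\":\"COMPLETED\"");
    }
}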
Those tests can (and should) be written in the src/test directory, in a separate repository (or in the same repository as a separate module).
For that test module you may write test-related services in the src/main directory, so the src/test directory contains only test scenarios. The test services may do everything you need: manage files, inspect the database, etc.
You may consider writing test scenarios in BDD style with tools like JBehave or Cucumber. Personally, I see value in BDD tests only if the business is interested in the test scenarios. If those tests are to be used only by technical people, then I find it easier to maintain such tests with non-BDD technologies (JUnit, AssertJ).

TESTING SELENIUM, TESTNG Questions

I am an entry-level tester. I have mainly been doing manual testing for a company in the UK, following scripts on a spreadsheet which I have written in the BDD format; however, I have been learning some automation on the side, as that's what I want to move into full time. I have some questions, which are as follows.
I've been using Selenium WebDriver + Java bindings to make simple tests such as logging in or filling out a registration form. I've also set up log4j, but only for basic, low-level logging. I have now come across TestNG. My main question: is this framework used by testers or developers? Is TestNG only for unit tests, or also for UI tests?
From what I've learnt so far, the developer does the unit and component tests and the tester does the services/UI tests. Is this correct?
Unfortunately I was put into a team of developers and not testers, as this is my first job outside of university, so I haven't had the chance to learn from other testers. There was no plan for me when I started, just that I was going to be the first tester in this development team, without any prior testing knowledge.
Which is why I need a bit of guidance on these issues.
My main question is: is this framework used by testers or developers? Is TestNG only for unit tests, or UI tests?
TestNG can be used by both developers and automation testers. It is a tool that can operate over and together with JUnit; in some cases it is used to create the concept of a test suite, which allows you to split all the test cases based on specific criteria (time, module, complexity). This framework can be used for unit testing and integration testing as well as UI testing.
TestNG has also, in some cases, replaced JUnit entirely; with this approach you get a framework with some out-of-the-box capabilities such as DataProviders, multi-threading support and others. You could check this link, and consider it a powerful alternative to JUnit.
From what I've learnt so far the developer does the unit and component tests and the tester does the services/UI tests. Is this correct?
Unit testing, which I consider very similar to "component testing", is done by the developers. If you have web services or a REST API, developers are sometimes in charge of creating some tests using integration testing, basically verifying that the services are working as expected, returning JSON/XML in the correct format, and other kinds of validations.
Testers can also check services, using tools such as JMeter or SOAP UI; they check more things related to the business logic.
Finally, I would say UI testing is done in most places by the manual and automation testing team; in places where there is no QA department, this task also belongs to the dev team.
In order to run tests you need a test runner. It could be anything; the most common in the Java world are JUnit and TestNG. With those frameworks you can run the tests annotated with @Test, and you can also group the tests the way you want and run them in parallel.
Testers use them to run Selenium tests and do assertions, although for assertions it is good to have knowledge of Hamcrest matchers. They also provide you with reports after the tests have completed.
Developers would use the same frameworks for unit testing purposes.
Check out the guys from toolsqa.com; they have pretty comprehensive tutorials on using Selenium with TestNG.
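To make that concrete, here is a short sketch (the URL, element locators and group name are made up) of how a tester might use TestNG as the runner for a Selenium WebDriver login check:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class LoginUiTest {

    private WebDriver driver;

    @BeforeClass
    public void openBrowser() {
        driver = new ChromeDriver();
    }

    @Test(groups = "smoke") // TestNG groups let you slice the suite
    public void userCanLogIn() {
        driver.get("https://app.example.com/login"); // hypothetical URL
        driver.findElement(By.id("username")).sendKeys("demo");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("login-button")).click();
        Assert.assertTrue(driver.getCurrentUrl().endsWith("/dashboard"));
    }

    @AfterClass
    public void closeBrowser() {
        driver.quit();
    }
}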
TestNG is basically used by developers for doing unit testing, I agree, but it is also widely used for system test automation with Selenium. This framework is inspired by the JUnit framework, and most automation test developers use it because of its advantages and added features, particularly its reporting support.
These are the advantages I got by using this framework:
1. Support for parameters.
2. Support for dependent-method testing.
3. Flexible test configuration. Supports a powerful execution model.
4. Embeds BeanShell for further flexibility.
5. TestNG has a more elegant way of handling parameterized tests with the data-provider concept.
6. Support for multiple instances of the same test class.
7. Extensibility through different tools and plug-ins like Eclipse, Maven, IDEA, etc.
8. Default JDK functions for runtime and logging (no dependencies).
9. Support for different annotations like @BeforeSuite, @AfterSuite, @BeforeClass, @AfterClass, @BeforeTest, @AfterTest, @BeforeGroups, @AfterGroups, @BeforeMethod, @AfterMethod, @DataProvider, @Factory, @Listeners, @Parameters, @Test.
The part I find most beautiful in TestNG is that, using a data provider, I can easily read test inputs and expected results from Excel, and I can see the pass/fail/skip results in an emailable report format.
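A minimal sketch of that data-provider idea; here the rows are hard-coded, but the same provider method could read the inputs and expected results from an Excel sheet (e.g. with Apache POI) and return them in the same Object[][] shape:

import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class AdditionDataProviderTest {

    @DataProvider(name = "sums")
    public Object[][] sums() {
        // Each row: input a, input b, expected result
        return new Object[][] {
                { 1, 2, 3 },
                { 10, 5, 15 },
        };
    }

    @Test(dataProvider = "sums")
    public void addsTwoNumbers(int a, int b, int expected) {
        Assert.assertEquals(a + b, expected);
    }
}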
For testing a system, we don't need any training or extra classes. If we know the system requirements and what the end user expects from the system, we can start testing. If any deviation from those expectations is found in the system's behaviour, mark it as an issue, raise a defect and track it until it gets resolved; then retest and confirm that the system is working as expected. Even at the unit-test level this principle holds; the only difference is that we can do structure-based testing there.
To your questions:
1. My main question is: is this framework used by testers or developers? Is TestNG only for unit tests, or UI tests?
Answer: TestNG can be used for unit testing as well as UI testing. The advantage of TestNG over JUnit is that you don't need to write code for test result reporting.

Unit tests vs integration tests with Spring

I'm working on a Spring MVC project, and I have unit tests for all of the various components in the source tree.
For example, if I have a controller HomeController, which needs to have a LoginService injected into it, then in my unit test HomeControllerTest I simply instantiate the object as normal (outside of Spring) and inject the property:
protected void setUp() throws Exception {
    super.setUp();
    //...
    controller = new HomeController();
    controller.setLoginService( new SimpleLoginService() );
    //...
}
This works great for testing each component as an isolated unit - except now that I have a few dozen classes in the project, after writing a class and writing a successful unit test for it, I keep forgetting to update my Spring MVC context file that does the actual wiring-up in the deployed application. I find out that I forgot to update the context file when I deploy the project to Tomcat and find a bunch of NullPointers from non-wired-up beans.
So, here are my questions:
This is my first Spring project - is it normal to create unit tests for the individual beans, as I have done, and then create a second suite of tests (integration tests) to test that everything works as expected with the actual application context? Is there an established best practice for this?
In addition, how do you separate the unit tests from the integration tests? I have all of the source code in src, the unit tests in test - should there be a 2nd test folder (such as test-integration) for integration test cases?
Since this is my first Spring project, I'm curious how others usually go about doing this sort of thing - rather than re-invent the wheel, I'd rather ask the rest of the community.
I can't speak to being a best practice, but here's what I've done in the past.
Unit tests:
Create unit tests for non-trivial beans (ie, most of your Spring related beans)
Use Mocks for injected services where practical (ie, most if not all the time).
Use a standard naming convention for these tests in the project test directory. Using Test or TestCase as a prefix or suffix to the classname seems to be widely practiced.
Integration Tests:
Create an AbstractIntegrationTestCase that sets up a Spring WebApplicationContext for use in integration test classes (a minimal sketch follows after this list).
Use a naming convention for integration tests in the test directory. I've used IntTest or IntegrationTest as a prefix or suffix for these tests.
Set up three Ant test targets:
test-all (or whatever you want to name it): Run Unit and Integration Tests
test: Run unit tests (just because test seems to be the most common usage for unit testing)
test-integration: run the integration tests.
As noted, you can use the naming conventions that make sense for your project.
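A minimal sketch of the base class mentioned above, assuming the spring-test module (3.2 or later) and a hypothetical context file name:

import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.web.WebAppConfiguration;
import org.springframework.web.context.WebApplicationContext;

// Every *IntegrationTest extends this and gets the real Spring wiring loaded by the test framework.
@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration("classpath:spring-mvc-context.xml") // hypothetical context file
public abstract class AbstractIntegrationTestCase {

    @Autowired
    protected WebApplicationContext context; // available to every subclass
}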
As to separating unit from integration tests into a separate directory, I don't think it matters as long as the developers and their tools can find and execute them easily.
As an example, the last Java project I worked on with Spring used exactly what is described above, with integration tests and unit tests living in the same test directory. Grails projects, on the other hand, explicitly separate unit and integration test directories under a general test directory.
A few isolated points:
Yes, it's a common approach to Spring testing - separate unit tests and integration tests, where the former don't load any Spring context.
For your unit tests, maybe consider mocking to ensure that your tests are focussed on one isolated module.
If your tests are wiring in a ton of dependencies then they aren't really unit tests. They're integration tests where you are wiring up dependencies using new rather than dependency injection - a waste of time and duplicated effort when your production application uses Spring!
Basic integration tests to bring up your Spring contexts are useful.
The @Required annotation may help you ensure you catch required dependencies in your Spring wiring.
Maybe look into Maven which will give you explicit phases to bind your unit and integration tests on to. Maven is quite widely used in the Spring community.
A lot of the tedious double bookkeeping with Spring goes away if you also switch to a purely annotated regime, where you annotate all your beans with @Component, @Controller, @Service and @Repository. Just add @Autowired to the attributes you need injected.
See section 3.11 of the Spring reference manual: http://static.springframework.org/spring/docs/2.5.x/reference/beans.html#beans-annotation-config
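As a tiny sketch of that style (reusing the class names from the question; the service logic is a placeholder), with component scanning enabled neither bean needs an entry in the XML context file:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.stereotype.Service;

@Service
class SimpleLoginService {
    boolean login(String user, String password) {
        return password != null && !password.isEmpty(); // placeholder logic
    }
}

@Controller
class HomeController {

    @Autowired
    private SimpleLoginService loginService; // injected by Spring, no <bean> entry needed

    // ... handler methods that use loginService ...
}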
On a related note, we have been using the unit/integration test division that KenG describes. In my most recent setup we have also introduced a third "class" of tests, "component tests". These run with full Spring wiring, but with stub implementations wired in (using component-scan filters and annotations in Spring).
The reason we did this is that for some of the "service" layer you end up with a horrendous amount of hand-coded wiring logic to manually wire up the beans, and sometimes ridiculous numbers of mock objects. 100 lines of wiring for 5 lines of test is not uncommon. The component tests alleviate this problem.
Use the InitializingBean interface (which declares a method afterPropertiesSet) or specify an init-method for your beans. InitializingBean is typically easier because you don't need to remember to add the init-method to your bean definitions.
Use afterPropertiesSet to ensure everything is injected and non-null; if something is null, throw an exception.
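A brief sketch of that idea, reusing the controller from the question, so a missing dependency fails at context start-up instead of as a NullPointerException at request time:

import org.springframework.beans.factory.InitializingBean;

public class HomeController implements InitializingBean {

    private LoginService loginService;

    public void setLoginService(LoginService loginService) {
        this.loginService = loginService;
    }

    @Override
    public void afterPropertiesSet() {
        // Fail fast if the Spring context file forgot to wire this bean
        if (loginService == null) {
            throw new IllegalStateException("loginService was not wired up");
        }
    }

    // ... handler methods that use loginService ...
}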
When I've created integration tests for web applications, I've put them in a separate directory. They are built using JUnit or TestNG and interact with the system under test using something like Selenium, which hits the web pages as if it were a user. The cycle goes like this: compile, run unit tests, build the web app, deploy it to a running server, execute the tests, undeploy the app, and report results. The idea is to test the whole system.
With regard to running unit tests separately from integration tests, I put all the latter into an integration-test directory and run them from the IDE/Ant using an approach like this. Works for me.
The difference between a unit test and an integration test is that a unit test does not necessarily load your context; you focus on the code you have written, making it fail fast, with and without exceptions, by mocking any dependent calls in it.
In the case of integration tests, you load the context and perform end-to-end tests of actual scenarios.
