I have to work with an old Java application.
There are a total of 6 projects which:
communicate using REST and MQ, and
already have some integration tests.
As part of this setup:
MockMvc is used for the initial requests from the test,
additional HTTP requests are made by the services, and
those requests go against the dev server instead of calling code from the current build;
a test will fail if it exercises code that talks to another project through a new endpoint that dev does not have yet.
How I thought of testing this
My idea was to use a single test project that would run all the required projects using @SpringBootTest, and use MockMvc to mock the real calls and route them inside the test instead of hitting the real endpoints.
The ask
I don't understand how to make Spring work with @Autowired and run 6 different WebApplicationContexts.
Or maybe I should forget my plan and use something different.
When it comes to @SpringBootTest, it's supposed to load everything required to start one single Spring Boot driven application.
So the "integration testing" referred to in the Spring Boot testing documentation is for one specific application.
Now, you're talking about 6 already existing applications. If these applications are all Spring Boot driven, then you can run @SpringBootTest for each one of them and mock everything you don't need. MockMvc, which you've mentioned, doesn't start the whole application by the way; it starts only the "part" of the application relevant for web request processing (for example it won't load your DAO layer), so it's an entirely different thing. Don't confuse the two :)
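To make that concrete, here is a minimal sketch of a per-application test that loads one Spring Boot application and drives it through MockMvc; the test class name and endpoint are assumptions, and the other five projects' endpoints would be mocked or stubbed out.

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;

// Lives in the test sources of ONE of the six projects (a hypothetical "order service").
@SpringBootTest
@AutoConfigureMockMvc
class OrderServiceIntegrationTest {

    @Autowired
    private MockMvc mockMvc; // drives requests into this application's own controllers only

    @Test
    void listsOrders() throws Exception {
        mockMvc.perform(get("/orders"))      // hypothetical endpoint of this project
               .andExpect(status().isOk());
    }
}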
If you want to test the whole flow that involves all 6 services, you'll have to bring up a whole environment and run a full-fledged system test that executes against remote JVMs.
In this case you can containerize the applications and run them in the test with Testcontainers.
Obviously you'll also have to provide containers for the databases (if they have any), messaging systems, and so forth.
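A rough sketch of that direction with Testcontainers and JUnit 5; the image name is made up, and in practice you would start one container per service plus its database and MQ broker.

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class WholeFlowSystemTest {

    // One of the six services, packaged as a Docker image ("my-org/order-service" is hypothetical).
    @Container
    static GenericContainer<?> orderService =
            new GenericContainer<>("my-org/order-service:latest").withExposedPorts(8080);

    @Test
    void serviceIsReachable() {
        // Testcontainers maps port 8080 onto a random free host port, so parallel runs don't collide.
        String baseUrl = "http://" + orderService.getHost() + ":" + orderService.getMappedPort(8080);
        // call the containerized service's real HTTP endpoints from here (RestTemplate, HttpClient, ...)
    }
}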
All in all, I feel that the question is rather vague and lacks concrete details.
Related
I am designing integration tests for a somewhat legacy app and I am facing a problem: I have services I'd like to spin up only for one run of the integration tests.
The app contains multiple modules, 4 Spring (non-Boot) applications, and these use the following services:
PostgreSQL database
RabbitMQ instance
Elasticsearch instance
The whole stack is currently dockerized via docker-compose (so with docker-compose up the whole app starts, database schemas are created, etc.).
I would like to achieve this via Testcontainers: start a PostgreSQL container, run Flyway scripts against it to create the schema and fill the database with the data required to run (other data will be added in individual tests), then start the RabbitMQ and Elasticsearch instances.
All of this should happen automatically every time the integration tests run.
Is this even possible using "legacy" Spring (non-Boot)?
And is it possible to automate the process so that it can run many times on one server (so there won't be any port collisions)? The goal is to run this on some Git repository after a merge request is submitted, to check that all the integration tests pass.
Thank you for any advice.
Testcontainers has been completely independent of Spring from the beginning; as far as I know, some kind of integration with Spring Boot has only recently been added.
There are a few ways to achieve that; the simplest would be to create the containers as test class fields, as described here [1].
Yes, it is possible to achieve that without collisions, read here. In short, Testcontainers exposes the container's port (for example 5432 for Postgres) on a random host port in order to avoid collisions; you can get the actual port as described in the article. For JDBC containers it can be even easier.
I haven't personally worked with RabbitMQ and Elasticsearch, but there are Testcontainers modules for those; you can read about them in the docs.
P.S. It's also possible to use the Docker Compose support for that, but I can't see a reason to; just FYI, the approach above is simpler.
[1] The @Testcontainers annotation will start them for you, but you can also manage the container lifecycle manually.
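A minimal sketch of the manual-lifecycle approach with plain JUnit 5 and no Spring Boot; the Flyway call assumes migration scripts are on the classpath, and the system property names handed to the (non-Boot) Spring context are made up.

import org.flywaydb.core.Flyway;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.testcontainers.containers.PostgreSQLContainer;

class LegacySpringIntegrationTest {

    // Manually managed container; nothing here depends on Spring Boot.
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15-alpine");

    @BeforeAll
    static void startContainer() {
        postgres.start(); // port 5432 is mapped onto a random free host port, so parallel runs don't collide
        // Run the Flyway migrations against the throwaway database.
        Flyway.configure()
              .dataSource(postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword())
              .load()
              .migrate();
        // Hand the connection details to the (non-Boot) Spring context, e.g. via system properties
        // that your XML/Java config reads -- the property names below are hypothetical.
        System.setProperty("jdbc.url", postgres.getJdbcUrl());
        System.setProperty("jdbc.user", postgres.getUsername());
        System.setProperty("jdbc.password", postgres.getPassword());
    }

    @AfterAll
    static void stopContainer() {
        postgres.stop();
    }
}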
I have two Spring Boot applications. The first one manages data in a PostgreSQL database. The other one exposes this data over REST.
In my first Spring Boot application I write a test that uses a test database. Now I want to write a test for the other application (the REST one), and that test needs data in the database.
How can I use the first Spring Boot application in my test for the second Spring Boot application?
Or can I set it up so that the test can only run after the test from the first Spring Boot application has run?
There are different types of testing. The first is unit testing -- this confirms that your business logic works. The second form is integration testing, which is again split into two parts: in the first you test the component in isolation to confirm that it communicates the way you expect (sometimes called component testing), and in the second you test the component against other, real components.
You can easily do unit tests in Maven/Spring Boot, and it's fairly easy to do component testing too. Integration testing, however, is usually a lot more complicated and needs to involve a mechanism outside the simple Maven build. The most common approach is to use a CI/CD tool, like Jenkins or CircleCI.
The usual pattern is to run the unit tests first because they are the fastest, then component tests, then integration tests. The latter often requires an 'environment' to be created that contains all of the collaborating components that compose a service (the two Spring Boot apps in your case).
For integration testing, we often find that the biggest problem is "Configuration Management", which is basically a description of which versions of which components work together. For your problem you need a database, data, and two Spring Boot apps, along with their configuration and environment data.
First of all: you shouldn't start either the first or the second application in your tests. It will slow down your testing drastically. What's more, you will be dependent on another application that in reality may be developed by another team - a bad idea.
In fact, you've got something like 3 ways to do it:
Use WireMock or some other dummy stub service - this approach will suit you if you're prepared not to call the "real" application. The stub should mimic your application (expose the same interface, i.e. URL, HTTP method, and the same response); see the sketch after this list.
Run both applications in Docker containers and start them with Docker Compose - you can use either a real database deployed somewhere or another container with predefined data.
Deploy the first service somewhere along with the database and run integration, system, or end-to-end tests.
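A minimal WireMock sketch of the first option; the port, path, and payload are placeholders standing in for whatever the first (data-managing) application really exposes.

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class RestAppAgainstStubTest {

    private WireMockServer wireMockServer;

    @BeforeEach
    void startStub() {
        wireMockServer = new WireMockServer(8089); // assumed free port
        wireMockServer.start();
        // Stub standing in for the first (data-managing) application; path and payload are made up.
        wireMockServer.stubFor(get(urlEqualTo("/api/data/42"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\":42,\"name\":\"sample\"}")));
    }

    @Test
    void restAppReadsDataFromStub() {
        // point the REST application's client at http://localhost:8089 and assert on its behaviour here
    }

    @AfterEach
    void stopStub() {
        wireMockServer.stop();
    }
}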
Hope this helps!
UPDATE: I meant you shouldn't start the apps in your tests. Also, to my mind it is obvious enough that we're talking about integration testing at least, as the asker mentions testing the whole request trip starting with the REST app. Maybe that clarifies things for the downvoters.
I have just started to write JUnit test cases. Now I am writing a test method to test a RESTful web service in Java using IntelliJ IDEA. My directory structure is like this.
I am calling the web service from my test case as:
Response response = target.path("groups").path("registergroup").request().accept(MediaType.APPLICATION_JSON).post(Entity.json(stringEmp.toString()));
String output = response.readEntity(String.class);
I have added multiple breakpoints in this test method and in the source classes.
Is it possible to step into the web service classes from the request call above?
If so, how can I do that?
I am using an embedded Jetty server for testing, which is also running from this module.
Testing REST services using JUnit only is in my opinion not worth the effort, because you usually have to mock a lot of the REST library internals in order to make it work, and it's very hard to test some of the service behavior anyway (e.g. what happens when the client specifies the wrong Content-Type or Accept headers).
Assuming you are using Jersey, you have two options:
use JerseyTest (see the sketch below)
use Arquillian
My personal preference goes to Arquillian because (among other things) the resulting tests are completely independent of what is being tested (i.e. you can change the implementation of the service and the REST library without changing the tests).
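For the JerseyTest option, a rough sketch along the lines of the code in the question; GroupResource and the request payload are assumptions (JUnit 4 style, which JerseyTest supports out of the box).

import static org.junit.Assert.assertEquals;

import javax.ws.rs.client.Entity;
import javax.ws.rs.core.Application;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.test.JerseyTest;
import org.junit.Test;

public class GroupResourceTest extends JerseyTest {

    @Override
    protected Application configure() {
        // Register only the resource under test; GroupResource is the hypothetical class behind /groups.
        return new ResourceConfig(GroupResource.class);
    }

    @Test
    public void registerGroupReturns200() {
        Response response = target("groups").path("registergroup")
                .request()
                .accept(MediaType.APPLICATION_JSON)
                .post(Entity.json("{\"name\":\"test\"}"));
        assertEquals(200, response.getStatus());
    }
}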
I have a service that calls out to a third-party endpoint using java.net.URLConnection. As part of an integration test that uses this service I would like to use a fake endpoint of my own construction.
I have made a Spring MVC Controller that simulates the behaviour of the endpoint I require. (I know this endpoint works as expected because I included it in my web app's servlet config and hit it from a browser once started.)
I am having trouble figuring out how I can get this fake endpoint available for my integration test.
Is there some feature of Spring-Test that would help me here?
Do I somehow need to start up a servlet at the beginning of my test?
Are there any other solutions entirely?
It's a bad idea to use a Spring MVC controller as a fake endpoint. There is no simple way to have the controller available for the integration test, and starting a servlet with just that controller alongside whatever you are testing requires a lot of configuration.
It is much better to use a mocking framework like MockServer (http://www.mock-server.com/) to create your fake endpoint. MockServer should be powerful enough to cover even complex responses from the fake endpoint, with relatively little setup.
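A minimal sketch of the MockServer approach; the port, path, and body are placeholders for whatever the real third-party endpoint returns.

import static org.mockserver.model.HttpRequest.request;
import static org.mockserver.model.HttpResponse.response;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.mockserver.integration.ClientAndServer;

class ThirdPartyCallIntegrationTest {

    private ClientAndServer mockServer;

    @BeforeEach
    void startFakeEndpoint() {
        mockServer = ClientAndServer.startClientAndServer(1080); // assumed free port
        // Fake third-party endpoint; the path and body stand in for the real service's behaviour.
        mockServer.when(request().withMethod("GET").withPath("/third-party/resource"))
                  .respond(response().withStatusCode(200).withBody("{\"status\":\"ok\"}"));
    }

    @Test
    void serviceHandlesThirdPartyResponse() {
        // point the service's URLConnection at http://localhost:1080/third-party/resource and assert on the result
    }

    @AfterEach
    void stopFakeEndpoint() {
        mockServer.stop();
    }
}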
Check out Spring MVC Test, which was added to Spring in version 3.2.
Here are some tutorials: 1, 2, 3
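For illustration, a minimal standalone Spring MVC Test sketch; note that MockMvc exercises the controller without starting a real server, so it won't be reachable by a plain URLConnection. FakeEndpointController and its mapping are hypothetical.

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;

class FakeEndpointControllerTest {

    @Test
    void fakeEndpointResponds() throws Exception {
        // FakeEndpointController is the hypothetical controller simulating the third-party endpoint.
        MockMvc mockMvc = MockMvcBuilders.standaloneSetup(new FakeEndpointController()).build();

        mockMvc.perform(get("/fake/resource"))   // hypothetical mapping
               .andExpect(status().isOk());
    }
}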
First, I think we should get the terminology right. There are two general groups of "fake" objects in testing (simplified): mocks, which return predefined answers to predefined input, and stubs, which are simplified versions of the objects the SUT (system under test) communicates with. While a mock basically does nothing but provide a response, a stub might use a live algorithm, but not store its results in a database or send them to customers via email, for example. I am no expert in testing, but those two kinds of fake objects are better suited to unit tests and, depending on their scope, acceptance tests.
So your SUT communicates with a remote system during the integration test. In my book this is the perfect time to actually test how your software integrates with other systems, so your software should be tested against a test version of the remote system. In case this is not possible (they might not have a test system) you are conceptually in some sort of trouble. You can shape your stub or mock only in the way you expect it to work, very much like the part of the software you have written to communicate with that remote service. This leaves out some important things you want to test with integration tests: Was the client side implemented correctly so that it will work with the live server? Do we have to develop workarounds for implementation errors on the server side? To what extent will the communication with the remote system affect our software's performance? Do our authentication credentials work? Does the authentication mechanism work? What are the technical and conceptual implications of this communication relationship that no one has thought of so far? (Believe me, the latter will happen more often than you might expect!)
Generally speaking: if you run integration tests against a mock or a stub, you are testing against your own understanding of how to implement the client and the server side of the communication, and you are not testing how your client works with the actual remote server, or at least the next best thing, a test system. I can tell you from experience: never make assumptions about how a remote system should behave - test it. Even when talking about a JMS server: test it!
In case you are working for a company, testing against a provided test system is even more important: if your software works against a test system and you can prove it (Selenium is a good helper here, as well as good logging, believe it or not) and your software does not work with the live version, you have a situation which I call "instablame": it is immediately obvious that it is not your fault the software isn't working. I myself hate fingerpointing to the bone, but most suits tend to ask "Whose fault was it?" even before "Can we fix that immediately?" and way before "How can we solve that problem?". And there is a special group of suits called lawyers, you know ... ;)
That being said: if you absolutely have to use those stubs during your integration tests, I would create a separate project for them (let's say "MyProject-IT-Stubs") and build and run the latest version of MyProject-IT-Stubs before running the ITs of the main project. When using Maven, you could give MyProject-IT-Stubs war packaging, declare it as a dependency, and fire up a Jetty for this war in the pre-integration-test phase. Then your integration tests run, successfully or not, and you can tear down the Jetty in the post-integration-test phase.
The IMHO best way to organize this with Maven would be a project with three modules: MyProject, MyProject-IT-Stubs and MyProject-IT (the latter declaring dependencies on MyProject and MyProject-IT-Stubs). This keeps your projects nice and tidy and the stubs do not pollute your main project. You might also want to organize MyProject-IT-Stubs into modules, one for each remote system you have to talk to. As soon as you have test access, you can simply deactivate the corresponding module in MyProject-IT-Stubs.
I am sure corresponding options exist for InsertYourBuildToolHere.
I want to allow consumers of a web services layer (the web services are written in Java) to create automated integration tests that validate that the version of the web services layer the consumers will use still works for them (i.e. the web services are on a different release lifecycle than the consumers, and their APIs or behavior might change -- they shouldn't change without notifying the consumer, but the point of this automated test is to validate that they haven't changed).
What should I do if the web service actually executes a transaction (updates database tables)? Is there a common practice for handling this without having to put logic into the web service itself to know it's in a test and roll back the transaction once finished (basically baking the ability to deal with testing into the web service)? Or is that the recommended way to do it?
The consumers are created by one development team at our company and the web services are created by a separate team. The tests would run in an integration environment (the integration environment is one environment behind the test environment used by QA functional testers, one environment behind the prod environment).
The best approach to this sort of thing is dependency injection.
Put your database-handling code in a service or services that are injected into the web service, and create mock versions that are used in your testing environment and do not actually update a database, or that add the capability to reset state under test control.
This way your tests can exercise the real web services but not a real database, and the tests can more easily be made repeatable.
Dependency injection in Java can be done using (among others) Spring or Guice. Guice is a bit more lightweight.
It may be sensible in your situation to have the injection decision made during application startup based on a system property, as you note.
If some tests need to actually update a database to be valid, then your testing version of the database handling will need to use a real database, but it should also provide a way, accessible from the tests, to reset the database to a known (possibly empty) state so that tests can be repeated.
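A minimal sketch of that idea using Spring profiles; the AccountStore interface and class names are made up, and the profile could be chosen at startup via the spring.profiles.active system property, which matches the system-property idea above.

import java.util.ArrayList;
import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Hypothetical abstraction over the database-handling code the web service depends on.
interface AccountStore {
    void save(String accountId);
    void reset(); // lets tests restore a known state
}

// In-memory fake used only when the "integration-test" profile is active.
class InMemoryAccountStore implements AccountStore {
    private final List<String> saved = new ArrayList<>();
    public void save(String accountId) { saved.add(accountId); }
    public void reset() { saved.clear(); }
}

@Configuration
class AccountStoreConfig {

    @Bean
    @Profile("integration-test")
    AccountStore inMemoryAccountStore() {
        return new InMemoryAccountStore();
    }

    // The production bean (backed by the real database) would be declared under the default profile.
}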
My choice would be to host the web services layer in production and in pre-production. Your customers can test against pre-production, but won't get billed for their transactions.
Obviously, this requires you to update production and pre-production at the same time.
Let the web services run unchanged and update whatever they need to in the database.
Your integration tests should check that the correct database records have been written/updated after each test step.
We use a soapUI testbed to accomplish this.
You can write your post-test assertion scripts in Groovy and Java, which can easily connect to the db using JDBC and check records.
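As a small illustration of such a post-test assertion in plain Java/JDBC; the connection details, table, and expected record are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// A minimal post-test check: verify that the expected record was written by the web service call.
public class OrderRecordCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:5432/testdb", "test", "test");
             PreparedStatement stmt = conn.prepareStatement(
                 "SELECT count(*) FROM orders WHERE order_id = ?")) {
            stmt.setString(1, "ORDER-123");
            try (ResultSet rs = stmt.executeQuery()) {
                rs.next();
                if (rs.getInt(1) != 1) {
                    throw new AssertionError("Expected exactly one order record for ORDER-123");
                }
            }
        }
    }
}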
People get worried about using the actual database - I wouldn't get hung up on this, it's actually a GOOD thing and makes for a really accurate testbed.
Regarding the "state" of the db, you can approach this in a number of ways:
Restore the db to a known state before the tests run
Get the tests to clean up after themselves
Let the db fill up as more tests run and clean it out occasionally
We've taken the last approach for now but may change in future if it becomes problematic.
I actually don't mind filling the db up with records as it makes it even more like a real customer database. It also is really useful when investigating test failures.
E.g. CXF allows you to change the transport layer, so you can just change the configuration and use the local transport. Then you have two objects, client and server, without any network activity. It's great for testing (un)marshalling. Everything else should be separated so the business logic is not aware of web services and can be tested like any other class.
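A rough sketch of what that looks like with CXF's local transport; the Greeter service is made up, and cxf-rt-transports-local needs to be on the test classpath.

import javax.jws.WebService;

import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;
import org.apache.cxf.jaxws.JaxWsServerFactoryBean;

// Hypothetical SEI and implementation, just to show the local transport wiring.
@WebService
interface Greeter {
    String greet(String name);
}

class GreeterImpl implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}

public class LocalTransportExample {
    public static void main(String[] args) {
        // Publish the service on the in-JVM "local" transport; no HTTP server is started.
        JaxWsServerFactoryBean serverFactory = new JaxWsServerFactoryBean();
        serverFactory.setServiceClass(Greeter.class);
        serverFactory.setServiceBean(new GreeterImpl());
        serverFactory.setAddress("local://greeter");
        serverFactory.create();

        // A client proxy against the same local:// address; no network sockets are opened.
        JaxWsProxyFactoryBean clientFactory = new JaxWsProxyFactoryBean();
        clientFactory.setServiceClass(Greeter.class);
        clientFactory.setAddress("local://greeter");
        Greeter client = (Greeter) clientFactory.create();

        System.out.println(client.greet("world")); // exercises marshalling/unmarshalling in-process
    }
}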