Spring Integration Test Naming Conventions - java

I have a rest controller for creating and getting employees. To test these endpoints I have two classes. In one I mock the service layer and do status code checks such as:
Mockito.when(employeeService.createEmployee(any(Employee.class))).then(returnsFirstArg());
this.employeeClient.createEmployee(this.testEmployee)
.andExpect(status().isCreated());
In the other I start up a docker postgres database and don't mock anything. These tests involve multiple endpoints e.g. "If I create an employee and then get all the employees, the created employee should be returned".
Both these classes create an application context - the first uses the WebMvcTest annotation, the second uses SpringBootTest. For this reason I see them both as integration tests.
So then what do I call these tests? I'm currently using EmployeeControllerTest and EmployeeControllerIT (running the IT with the Failsafe plugin), but the name of the first is a little misleading because it's not really a unit test.
I was thinking of using EmployeeControllerMockedIT for the first, but I'm not convinced that's the right way to go.
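For context, here is a minimal sketch of the two classes described above. All names (EmployeeController, EmployeeService, /employees, the Postgres image) are assumptions, not taken from the question.

// Slice test: only the web layer is started, the service layer is mocked.
// Static imports from MockMvcRequestBuilders / MockMvcResultMatchers assumed.
@WebMvcTest(EmployeeController.class)
class EmployeeControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private EmployeeService employeeService;

    @Test
    void createReturns201() throws Exception {
        Mockito.when(employeeService.createEmployee(ArgumentMatchers.any(Employee.class)))
               .then(AdditionalAnswers.returnsFirstArg());

        mockMvc.perform(post("/employees")
                   .contentType(MediaType.APPLICATION_JSON)
                   .content("{\"name\":\"Jane\"}"))
               .andExpect(status().isCreated());
    }
}

// Full-context test: real application context plus a dockerised Postgres.
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
class EmployeeControllerIT {

    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15");

    // Wire spring.datasource.* to the container (e.g. via @DynamicPropertySource),
    // then create an employee and fetch all employees through the real stack.
}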

If you're mocking responses, I'd say it's a unit test, as you're testing the module of code in isolation from the network and from 3rd party systems like a database.
If you're running a test against (real) 3rd party code/systems you don't control, it's an integration (or functional) test.
Functional tests will invoke your service as if your system is a black-box where we don't know the internal details (vs. white-box where we know, and have access to change, the internals and invoke code as if we're making a method call rather than sending an HTTP request).
Functional tests may run against real or fake servers, but there will typically be a running application waiting for requests. The point here is that the responses are not "mocked": the request is made for real, from your code, to the third party code/system, and that system is expected to respond as the real server would.
We do not fake the act of responding in functional tests, but arbitrary responses may be provided by the real/fake server to simulate the data we expect to receive. It could be that we write a fake HTTP service to "mock" a real API; it will provide the same type of responses over the network and be invoked over the network. Or we could just spin up a dockerised database, send queries to it and receive responses from it. The database will be a real database server, rather than a mocked response we pre-specified in code.
Functional tests (of a web service) will begin from outside your application and ensure requests are answered over the network layer. i.e. you'll be making an HTTP request to invoke the service.
Integration tests of a web service will invoke the service at an application level (not over the network layer) so you may call a Controller class directly or invoke some code that is supposed to connect/integrate with a database from within the application. Integration tests will test your code is integrated correctly with third party code/a library/a database. Your tests should alert you to any situation where if the 3rd party code (code/system you do not control) changes in such a way that your software would break.
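To make the distinction concrete, here is a small sketch (class and endpoint names are assumptions; Spring Boot test and AssertJ imports are omitted for brevity): an integration-style test invokes the code at application level, while a functional-style test sends a real HTTP request to a running instance.

// Integration-style: call the code in-process, no network hop.
@SpringBootTest
class EmployeeRepositoryIntegrationTest {

    @Autowired
    private EmployeeRepository repository; // assumed repository backed by a real database

    @Test
    void savedEmployeeCanBeRead() {
        Employee saved = repository.save(new Employee("Jane"));
        assertThat(repository.findById(saved.getId())).isPresent();
    }
}

// Functional-style: the application is started and we talk to it over HTTP.
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class EmployeeApiFunctionalTest {

    @Autowired
    private TestRestTemplate restTemplate; // real HTTP client against the started port

    @Test
    void createReturns201OverHttp() {
        ResponseEntity<Employee> response =
                restTemplate.postForEntity("/employees", new Employee("Jane"), Employee.class);
        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.CREATED);
    }
}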

Related

Citrus Framework - How to do multi-actor end-to-end integration tests

I have been working with the Citrus Framework to write integration tests for applications consisting of multiple interacting Spring Boot services. The services communicate via HTTP REST calls.
I am at a point where I need to write a test that will do end-to-end integration testing on a multi-service application.
The application scenario is as follows:
A master service that is invoked from a client via a POST call. The master calls two worker services via POST calls, in sequence (the calls cannot be made in parallel). The master expects a response from each worker. Once the process is complete, the master returns a completion response to the client that triggered the whole process.
I have written Citrus tests for all three of these services, and in the case of the master, there are test actions that essentially mock the two worker services, dictating what they should receive and respond with. It took some time and hair pulling to get it to work correctly.
Now, I want to write a Citrus test that will allow for the testing of the process from start to finish with all of the services being real and not mocked. I would think that this test would be similar to the master integration test, except that the workers are no longer mocked.
I don't know how to do this with Citrus. There is a section in the user guide (https://citrusframework.org/citrus/reference/2.8.0/html/index.html#test-actors) that seems to address this, but it's quite short and I can't see how it's actually done. How are the "real" actors "wired up"? How are they configured and started/stopped? Also, the section only provides examples in XML, there are no Java DSL examples.
Is this question outside the scope of just Citrus? Does it need to involve a wider cast of characters, such as maven or a maven plugin?

For Java testing, should I mock client or mock the server

In a client-server architecture, what is the best approach: when should one mock the client and when should one mock the server? I understand that unit tests should only test a given class, with every dependent object mocked, while integration tests should test a feature as a whole. When it comes to API calls, I am puzzled whether I should mock the client that I use for the calls, or use some server-mocking framework and let the real client call the mock server.
I have a situation where I should (it is not mandatory) test whether I hit the proper API URL, with the proper method and with certain values passed in query parameters or in the request body. With client mocking, I can simply "verify" that the given parameters (path, method, request body) are passed and consider the test successful. If I am to mock the server, then I need to add a little bit of processing on the mock server to check whether the path, method and request body are correctly passed, and return a value. In this case, the server response is the only thing I can use to measure whether the test is successful (send 500 if the required data is not received).
By your experience, what is the proper way to test calls in client-server architecture for integration testing? If we go by the book, then server mocking should be used, but is that practical in the above mentioned case? For unit testing it is clear that mocks should be used.
Update: To avoid confusion: I am making a client that depends on a publicly available service which I cannot control. So, I am testing whether the client is working properly and the focus is on the client. The question is whether (in integration tests) I should mock the client object that is responsible for remote connections (in my case WSClient from the Play framework), or use some server mock (like the okhttp mock web server).
Another update: Most people suggest following a simple rule of thumb: if these are unit tests, mock the client; if these are integration tests, use a mock server. Now, I am curious about this situation: I have a class A that depends on class B, and class B has a client object that is tasked with remote calls. If I were to write a unit test for class A, I would mock class B and that is fine. However, if I am doing integration testing for class A, is it acceptable to mock the client object in class B and "verify" that the proper parameters are passed to it (path, method and request body), or, for the sake of completeness, should I run the mock server even if mocking the client object is easier? In that case, I will not test for timeouts and low-level network errors. Or should the whole point of integration testing, in this case, be to test network connectivity as well?
You need:
Integration tests - use a stub server like WireMock and test real scenarios such as timeouts, malformed responses, 500s, etc.
Tests of your client class - mock the server endpoint using Mockito and verify the response handling.
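Here's a minimal sketch of the WireMock side, assuming a hypothetical EmployeeApiClient (and its RemoteServiceException) built on whatever HTTP library you use; the WireMock calls themselves are the standard stubbing API.

import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;
import org.junit.jupiter.api.*;

class EmployeeApiClientIT {

    static WireMockServer server = new WireMockServer(WireMockConfiguration.options().dynamicPort());

    @BeforeAll
    static void start() { server.start(); }

    @AfterAll
    static void stop() { server.stop(); }

    @Test
    void clientSurvivesServerError() {
        // Stub the remote endpoint to fail, then check how the real client reacts.
        server.stubFor(get(urlEqualTo("/employees"))
                .willReturn(aResponse().withStatus(500)));

        EmployeeApiClient client = new EmployeeApiClient("http://localhost:" + server.port()); // hypothetical client

        Assertions.assertThrows(RemoteServiceException.class, client::fetchEmployees);
    }

    @Test
    void clientHonoursItsTimeout() {
        // Delay the response beyond the client's timeout to exercise the timeout path.
        server.stubFor(get(urlEqualTo("/employees"))
                .willReturn(aResponse().withFixedDelay(5_000).withBody("[]")));

        EmployeeApiClient client = new EmployeeApiClient("http://localhost:" + server.port()); // hypothetical client

        Assertions.assertThrows(RemoteServiceException.class, client::fetchEmployees);
    }
}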

How do I use a Spring MVC Controller as a Fake Endpoint for an Integration Test?

I have a service that calls out to a third-party endpoint using java.net.URLConnection. As part of an integration test that uses this service I would like to use a fake endpoint of my own construction.
I have made a Spring MVC Controller that simulates the behaviour of the endpoint I require. (I know this endpoint works as expected as I included it in my web app's servlet config and hit it from a browser once started.)
I am having trouble figuring out how I can get this fake endpoint available for my integration test.
Is there some feature of Spring-Test that would help me here?
Do I somehow need to start up a servlet at the beginning of my test?
Are there any other solutions entirely?
It's a bad idea to use a Spring MVC controller as a fake endpoint. There is no way to simply make the controller available for the integration test, and starting a servlet container with just that controller alongside whatever you are testing requires a lot of configuration.
It is much better to use a mocking framework like MockServer (http://www.mock-server.com/) to create your fake endpoint. MockServer should be powerful enough to cover even complex responses from the fake endpoint, with relatively little setup.
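For illustration, here is a minimal MockServer sketch. The service under test (ThirdPartyCallingService) and the /third-party/data path are assumptions; the expectation API (request()/respond()) is MockServer's standard one.

import static org.mockserver.model.HttpRequest.request;
import static org.mockserver.model.HttpResponse.response;

import org.junit.jupiter.api.*;
import org.mockserver.integration.ClientAndServer;

class ThirdPartyCallingServiceIT {

    static ClientAndServer mockServer;

    @BeforeAll
    static void startFakeEndpoint() {
        mockServer = ClientAndServer.startClientAndServer(1080); // port is arbitrary
    }

    @AfterAll
    static void stopFakeEndpoint() { mockServer.stop(); }

    @Test
    void handlesThirdPartyResponse() {
        mockServer.when(request().withMethod("GET").withPath("/third-party/data"))
                  .respond(response().withStatusCode(200)
                                     .withBody("{\"status\":\"ok\"}"));

        // Point the real service (hypothetical name) at the fake endpoint and exercise it;
        // it can keep using java.net.URLConnection internally, since this is a real HTTP server.
        ThirdPartyCallingService service = new ThirdPartyCallingService("http://localhost:1080");

        Assertions.assertEquals("ok", service.fetchStatus());
    }
}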
Check out Spring MVC Test, which was added to Spring in version 3.2.
Here are some tutorials: 1, 2, 3
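For completeness, a minimal Spring MVC Test sketch (the controller name is an assumption). Note that MockMvc exercises the controller in-process and does not open a network port, so a java.net.URLConnection-based caller cannot reach it.

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;

class FakeThirdPartyControllerTest {

    // Stand up just the fake controller, without a full application context or servlet container.
    private final MockMvc mockMvc =
            MockMvcBuilders.standaloneSetup(new FakeThirdPartyController()).build(); // hypothetical controller

    @Test
    void fakeEndpointResponds() throws Exception {
        mockMvc.perform(get("/fake/third-party/data"))
               .andExpect(status().isOk());
    }
}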
First I think we should get the terminology right. There are two general groups of "fake" objects in testing (simplified): mocks, which return predefined answers to predefined input, and stubs, which are simplified versions of the objects the SUT (system under test) communicates with. While a mock basically does nothing other than provide a response, a stub might use a live algorithm but not store its results in a database or send them to customers via email, for example. I am no expert in testing, but those two kinds of fake objects tend to be used in unit tests and, depending on their scope, in acceptance tests.
So your SUT communicates with a remote system during the integration test. In my book this is the perfect time to actually test how your software integrates with other systems, so your software should be tested against a test version of the remote system. In case this is not possible (they might not have a test system) you are conceptually in some sort of trouble. You can shape your stub or mock only in the way you expect it to work, very much like the part of the software you have written to communicate with that remote service. This leaves out some important things you want to test with integration tests: Was the client side implemented correctly so that it will work with the live server? Do we have to develop workarounds because there are implementation errors on the server side? How will the communication with the remote system affect our software's performance? Do our authentication credentials work? Does the authentication mechanism work? What are the technical and conceptual implications of this communication relationship that no one has thought of so far? (Believe me, the latter will happen more often than you might expect!)
Generally speaking: if you do integration tests against a mock or a stub, you test against your own understanding of how to implement the client and the server side of the communication; you do not test how your client works with the actual remote server, or at least the next best thing, a test system. I can tell you from experience: never make assumptions about how a remote system should behave - test it. Even when talking of a JMS server: test it!
In case you are working for a company, testing against a provided test system is even more important: if your software works against a test system and you can prove it (Selenium is a good helper here, as well as good logging, believe it or not) and your software does not work with the live version, you have a situation which I call "instablame": it is immediately obvious that it is not your fault the software isn't working. I myself hate fingerpointing to the bone, but most suits tend to ask "Whose fault was it?" even before "Can we fix that immediately?" and way before "How can we solve that problem?". And there is a special group of suits called lawyers, you know ... ;)
That being said: if you absolutely have to use those stubs during your integration tests, I would create a separate project for them (let's say "MyProject-IT-Stubs") and build and run the latest version of MyProject-IT-Stubs before I run the IT of my main project. When using Maven, you could create MyProject-IT-Stubs with war packaging, pull it in as a dependency during the pre-integration-test phase and fire up a Jetty for this war in the same phase. Then your integration tests run, successfully or not, and you can tear down the Jetty in the post-integration-test phase.
The IMHO best way to organize your project with Maven would be to have a project with three modules: MyProject, MyProject-IT-Stubs and MyProject-IT (the latter declaring dependencies on MyProject and MyProject-IT-Stubs). This keeps your projects nice and tidy and the stubs do not pollute your main project. You might want to think about organizing MyProject-IT-Stubs into modules as well, one for each remote system you have to talk to. As soon as you have test access, you can simply deactivate the corresponding module in MyProject-IT-Stubs.
I am sure corresponding options exist for InsertYourBuildToolHere.

How best to enable a web service consumer to integration test a transactional web service?

I want to allow consumers of a web services layer (the web services are written in Java) to create automated integration tests that validate that the version of the web services layer the consumers will use still works for them (i.e. the web services are on a different release lifecycle than the consumers, and their APIs or behaviour might change - they shouldn't change without notifying the consumer, but the point of this automated test is to validate that they haven't changed).
What would I do if the web service actually executes a transaction (updates database tables)? Is there a common practice for how to handle this without having to put logic into the web service itself to know it's in a test and roll back the transaction once finished (basically baking in the capability to deal with testing of the web service)? Or is that the recommended way to do it?
The consumers are created by one development team at our company and the web services are created by a separate team. The tests would run in an integration environment (the integration environment is one environment behind the test environment used by QA functional testers, one environment behind the prod environment).
The best approach to this sort of thing is dependency injection.
Put your database-handling code in a service or services that are injected into the web service, and create mock versions that are used in your testing environment and do not actually update a database, or that add the capability to reset state under test control.
This way your tests can exercise the real webservices but not a real database, and the tests can be more easily made repeatable.
Dependency injection in Java can be done using (among others) Spring or Guice. Guice is a bit more lightweight.
It may be sensible in your situation to have the injection decision made during application startup based on a system property as you note.
If some tests need to actually update a database to be valid, then your testing version of the database handling will need to use a real database, but should also provide a way accessible from tests to reset your database to a known (possibly empty) state so that tests can be repeated.
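A minimal sketch of that injection seam, with hypothetical names (Order, OrderRepository, OrderWebService): the web service depends only on an interface, and the test wiring supplies an implementation that never touches a real database and can be reset between tests.

import java.util.ArrayList;
import java.util.List;

// Minimal stand-in domain type for the sketch.
record Order(long id, String item) {}

// The seam: the web service only knows about this interface.
interface OrderRepository {
    void save(Order order);
    List<Order> findAll();
    void reset(); // test-control hook, as suggested above
}

// Test wiring injects this in-memory version; production wiring injects a
// JDBC/JPA-backed implementation (not shown).
class InMemoryOrderRepository implements OrderRepository {
    private final List<Order> orders = new ArrayList<>();

    @Override public void save(Order order) { orders.add(order); }
    @Override public List<Order> findAll()  { return List.copyOf(orders); }
    @Override public void reset()           { orders.clear(); }
}

class OrderWebService {
    private final OrderRepository repository;

    // Constructor injection: Spring or Guice chooses the implementation,
    // e.g. based on a profile or system property.
    OrderWebService(OrderRepository repository) {
        this.repository = repository;
    }

    void placeOrder(Order order) { repository.save(order); }
}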
My choice would be to host the web services layer in production and in pre-production. Your customers can test against pre-production, but won't get billed for their transactions.
Obviously, this requires you to update production and pre-production at the same time.
Let the web services run unchanged and update whatever they need to in the database.
Your integration tests should check that the correct database records have been written/updated after each test step.
We use a soapUI testbed to accomplish this.
You can write your post-test assertion scripts in Groovy and Java, which can easily connect to the db using JDBC and check records.
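For illustration, here is a rough sketch of such a post-test record check in plain JDBC; the connection details and the orders table/columns are assumptions.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class DbAssertions {

    // Verify that the web service call under test wrote the expected row.
    static void assertOrderCompleted(String jdbcUrl, String user, String password, long orderId)
            throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT status FROM orders WHERE order_id = ?")) {
            ps.setLong(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                assertTrue(rs.next(), "order row should exist after the call");
                assertEquals("COMPLETED", rs.getString("status"));
            }
        }
    }
}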
People get worried about using the actual database - I wouldn't get hung up on this, it's actually a GOOD thing and makes for a really accurate testbed.
Regarding the "state" of the db, you can approach this in a number of ways:
Restore a db in a known state before the tests run
Get the test to cleanup after themselves
Let the db fill up as more tests run and clean it out occasionally
We've taken the last approach for now but may change in future if it becomes problematic.
I actually don't mind filling the db up with records as it makes it even more like a real customer database. It also is really useful when investigating test failures.
E.g. CXF allows you to change the transport layer, so you can just change the configuration and use the local transport. Then you have two objects, client and server, without any network activity. It's great for testing (un)marshalling. Everything else should be separated so the business logic is not aware of web services and can be tested like any other class.
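A rough sketch of that local-transport setup with CXF (the EmployeeService SEI, its implementation and the findEmployee operation are assumptions; it also requires the cxf-rt-transports-local module on the classpath):

import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;
import org.apache.cxf.jaxws.JaxWsServerFactoryBean;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

class EmployeeServiceLocalTransportTest {

    @Test
    void marshalsOverLocalTransport() {
        // Publish the service on CXF's local transport: no socket is opened,
        // but requests still go through the full (un)marshalling pipeline.
        JaxWsServerFactoryBean serverFactory = new JaxWsServerFactoryBean();
        serverFactory.setServiceClass(EmployeeService.class);      // assumed SEI
        serverFactory.setServiceBean(new EmployeeServiceImpl());   // assumed implementation
        serverFactory.setAddress("local://employee-service");
        serverFactory.create();

        // Build a client proxy against the same local:// address.
        JaxWsProxyFactoryBean clientFactory = new JaxWsProxyFactoryBean();
        clientFactory.setServiceClass(EmployeeService.class);
        clientFactory.setAddress("local://employee-service");
        EmployeeService client = (EmployeeService) clientFactory.create();

        Assertions.assertNotNull(client.findEmployee(1L));          // assumed operation
    }
}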

JAX-WS unit tests

I have a stand-alone java app that creates services using Java JAX-WS. I would like to create test cases to test the services, how can I go about that?
I thought about using an external client, outside the project - is this the best method?
Strictly speaking, deploying a web service and testing it is an integration test, not a unit test. With that said, it's probably better to unit test this. I would make a separate layer that implements the business logic, and another layer that exposes it as a web service. Then you can test the business logic without having to worry about the web service.
After all, you probably don't want to bother re-testing the web framework you are using to start up a web service. You really want to test your business logic. This will let you create faster and less brittle tests.
The arguments about integration vs unit tests could go on and on. Just removing the web service layer and testing the inner business logic does not change an integration test into a unit test.
IMHO web services are a public API into your application, and you need them to work consistently between versions of your app. Therefore I would recommend an extensive soapUI test suite, hitting your app and db as if it were a regular client. You can add assertions to check for expected success and failure messages (don't forget to test what your web services do when incorrect data is thrown at them). Also you can add Groovy asserts to check your database state after each web service call.
I would fully recommend quick-running unit tests to complement the above, but getting a robust integration suite running every night against your overnight build will ensure the quality of your API and avoid many problems which would otherwise only be flushed out when your customers start hitting your services once your app has been deployed.
It is the nature of web services that they have no UI so are not well-tested if left to human testers.
I don't think I would test the actual web service endpoint or the client. I would move all of the business logic into some service layer and then unit-test those objects. For example:
@Path("/user")
public class UserWebService {

    @Inject
    private UserService userService;

    @POST
    @Path("/delete")
    public void deleteUser(@QueryParam("id") long id) {
        userService.deleteUser(id);
    }
}
Then I would unit test my UserService implementation.
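And a minimal sketch of such a unit test, assuming a hypothetical UserServiceImpl backed by a UserRepository:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;

class UserServiceTest {

    @Test
    void deleteUserDelegatesToRepository() {
        // UserRepository and UserServiceImpl are assumed names, not from the answer above.
        UserRepository repository = mock(UserRepository.class);
        UserService userService = new UserServiceImpl(repository);

        userService.deleteUser(42L);

        verify(repository).deleteById(42L);
    }
}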
