Using an embedded GemFire locator and server for integration testing - Java

I have an application that uses GemFire locators and servers. I would like to write an integration test that starts a locator and a server within the JVM and shuts them down when the tests end. I could not find any documentation that explains how to do this.
I have tried starting a locator and a server when the tests start, using LocatorLauncher and ServerLauncher. It starts the locator but then throws IllegalStateException: A connection to a distributed system already exists in this VM.
I am not very experienced with GemFire and do not understand what I am missing here, or whether I am going in a completely wrong direction.

It would be useful to know a bit more about exactly what you're trying to test. We have different levels of testing in the Geode codebase. If you can get away with just a server, I'd suggest using the ServerStarterRule in your JUnit tests. Here is an example of that: https://github.com/apache/geode/blob/f12055ae3ae4b1f4731c0447af0c4cb9abdd4159/geode-core/src/integrationTest/java/org/apache/geode/management/internal/cli/commands/AlterRegionCommandIntegrationTest.java
This rule will start up a server as part of the JUnit JVM. This means that you won't be able to use a ClientCache at the same time (you cannot have both a ClientCache and a Cache instance in the same JVM instance).
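For reference, a minimal sketch of what a test using that rule can look like (the region name is made up, and the fluent options shown may differ between Geode versions, so check the rule's API in the version you depend on):

    import org.apache.geode.cache.Region;
    import org.apache.geode.cache.RegionShortcut;
    import org.apache.geode.test.junit.rules.ServerStarterRule;
    import org.junit.ClassRule;
    import org.junit.Test;

    import static org.junit.Assert.assertEquals;

    public class EmbeddedServerTest {

        // Starts a cache server inside the test JVM and tears it down after the class finishes.
        @ClassRule
        public static ServerStarterRule server =
            new ServerStarterRule().withRegion(RegionShortcut.REPLICATE, "testRegion");

        @Test
        public void canUseTheEmbeddedCache() {
            // The rule exposes the server-side Cache; note this is a Cache, not a ClientCache.
            Region<String, String> region = server.getCache().getRegion("testRegion");
            region.put("key", "value");
            assertEquals("value", region.get("key"));
        }
    }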
The next level of test is called DUnit testing. This framework allows you to spin up multiple JVMs and form an actual cluster. The best way to use this is with the ClusterStartupRule together with the GfshCommandRule. An example of this would be: https://github.com/apache/geode/blob/10d89ede6f90f046c15e12e3d16aed259d7044b0/geode-cq/src/distributedTest/java/org/apache/geode/management/internal/cli/commands/ListClientCommandDUnitTest.java
Here, various components are started up, including a client VM. The nice thing about using these rules is that they handle startup and teardown for you in a consistent and safe manner.
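Roughly, a DUnit test with these rules looks like the following sketch (it is not taken from the linked test, and the method names follow the rules' fluent API in recent Geode versions, so verify against the version you use):

    import org.apache.geode.test.dunit.rules.ClusterStartupRule;
    import org.apache.geode.test.dunit.rules.MemberVM;
    import org.apache.geode.test.junit.rules.GfshCommandRule;
    import org.junit.Rule;
    import org.junit.Test;

    public class MyClusterDUnitTest {

        // Spins up separate JVMs for locators/servers and cleans them up afterwards.
        @Rule
        public ClusterStartupRule cluster = new ClusterStartupRule();

        // Lets the test drive the cluster through gfsh commands.
        @Rule
        public GfshCommandRule gfsh = new GfshCommandRule();

        @Test
        public void clusterComesUp() throws Exception {
            MemberVM locator = cluster.startLocatorVM(0);
            cluster.startServerVM(1, locator.getPort());

            gfsh.connectAndVerify(locator);
            gfsh.executeAndAssertThat("list members").statusIsSuccess();
        }
    }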

Related

Is it possible to use Testcontainers on Spring (non Boot)?

I am designing integration tests for a somewhat legacy app and I am facing a problem: I have services I'd like to use only for a single run of the integration tests.
The app consists of multiple modules and 4 Spring (non-Boot) applications, which use the following services:
PostgreSQL database
RabbitMQ instance
Elasticsearch instance
The whole stack is currently dockerized via docker-compose (so with docker-compose up the whole app starts, database schemas are created, etc.).
I would like to achieve this via Testcontainers: start a PostgreSQL container in which I run Flyway scripts to create the schema and fill the database with the data required to run (other data will be added in the individual tests), then start a RabbitMQ instance and an Elasticsearch instance.
All of this should happen automatically every time the integration tests run.
Is this even possible using "legacy" Spring (non Boot)?
And is it possible to automate the process so that it can run many times on one server (so there won't be any port collisions)? The goal is to run this on some Git repository after a merge request has been submitted, to check that all integration tests pass.
Thank you for any advice.
Testcontainers has been completely independent of Spring from the beginning; as far as I know, some kind of integration with Spring Boot has only recently been added.
There are a few ways to achieve that; the simplest is to create the containers as test class fields, as described here [1] and sketched at the end of this answer.
Yes, it is possible to achieve that without collisions, read here. In short: Testcontainers exposes the container's port (e.g. 5432 for Postgres) on a random host port in order to avoid collisions, and you can get the actual port as described in the article. For JDBC containers it can be even easier.
I haven't personally worked with RabbitMQ and Elasticsearch, but there are modules for them; you can read about that in the docs.
P.S. It's also possible to use the Docker Compose support for this, but I can't see a reason to; just FYI, the approach above is simpler.
[1] The @Testcontainers annotation will start them for you, but you can also manage the container lifecycle manually.
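To make that concrete, a minimal JUnit 5 sketch of the class-field approach (the image tags are placeholders, and the RabbitMQ and Elasticsearch containers come from their own Testcontainers modules):

    import org.junit.jupiter.api.Test;
    import org.testcontainers.containers.PostgreSQLContainer;
    import org.testcontainers.containers.RabbitMQContainer;
    import org.testcontainers.elasticsearch.ElasticsearchContainer;
    import org.testcontainers.junit.jupiter.Container;
    import org.testcontainers.junit.jupiter.Testcontainers;

    @Testcontainers
    class InfrastructureIT {

        // Static fields are started once per test class and shared by all tests.
        @Container
        static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:13");

        @Container
        static RabbitMQContainer rabbit = new RabbitMQContainer("rabbitmq:3-management");

        @Container
        static ElasticsearchContainer elastic =
            new ElasticsearchContainer("docker.elastic.co/elasticsearch/elasticsearch:7.17.0");

        @Test
        void containersExposeRandomHostPorts() {
            // Random host ports avoid collisions when several builds run on the same machine.
            String jdbcUrl = postgres.getJdbcUrl();
            String amqpUrl = rabbit.getAmqpUrl();
            String esHost  = elastic.getHttpHostAddress();
            // Feed these values into the Spring (non-Boot) configuration, e.g. via system
            // properties, and run Flyway against jdbcUrl before the tests exercise the app.
        }
    }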

How do I use a Spring MVC Controller as a Fake Endpoint for an Integration Test?

I have a service that calls out to a third-party endpoint using java.net.URLConnection. As part of an integration test that uses this service I would like to use a fake endpoint of my own construction.
I have made a Spring MVC controller that simulates the behaviour of the endpoint I require. (I know this endpoint works as expected, as I included it in my web app's servlet config and hit it from a browser once started.)
I am having trouble figuring out how I can get this fake endpoint available for my integration test.
Is there some feature of Spring-Test that would help me here?
Do I somehow need to start up a servlet at the beginning of my test?
Are there any other solutions entirely?
It's a bad idea to use a Spring MVC controller as a fake endpoint. There is no way to simply make the controller available for the integration test, and starting a servlet with just that controller alongside whatever you are testing requires a lot of configuration.
It is much better to use a mocking framework like MockServer (http://www.mock-server.com/) to create your fake endpoint. MockServer should be powerful enough to cover even complex responses from the fake endpoint, with relatively little setup.
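As a rough illustration of the MockServer approach (the port, path and response body are made up for the example):

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.mockserver.integration.ClientAndServer;

    import static org.mockserver.model.HttpRequest.request;
    import static org.mockserver.model.HttpResponse.response;

    public class ThirdPartyClientIT {

        private ClientAndServer mockServer;

        @Before
        public void startFakeEndpoint() {
            // In-process HTTP server on localhost:1080 standing in for the third party.
            mockServer = ClientAndServer.startClientAndServer(1080);
            mockServer.when(request().withMethod("GET").withPath("/third-party/api"))
                      .respond(response().withStatusCode(200)
                                         .withBody("{\"status\":\"ok\"}"));
        }

        @Test
        public void serviceHandlesThirdPartyResponse() {
            // Point the service under test at http://localhost:1080/third-party/api
            // and assert on its behaviour here.
        }

        @After
        public void stopFakeEndpoint() {
            mockServer.stop();
        }
    }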
Check out Spring MVC Test, which was added to Spring in version 3.2.
Here are some tutorials: 1, 2, 3
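A bare-bones standalone setup looks roughly like this (FakeEndpointController and the path are the asker's hypothetical fake endpoint); note that MockMvc exercises the controller in-process rather than over a real socket, so it verifies the fake controller itself rather than serving java.net.URLConnection calls:

    import org.junit.Test;
    import org.springframework.test.web.servlet.MockMvc;
    import org.springframework.test.web.servlet.setup.MockMvcBuilders;

    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

    public class FakeEndpointControllerTest {

        // Standalone setup: no full Spring context or servlet container required.
        private final MockMvc mockMvc =
            MockMvcBuilders.standaloneSetup(new FakeEndpointController()).build();

        @Test
        public void fakeEndpointAnswersAsExpected() throws Exception {
            mockMvc.perform(get("/fake/endpoint"))
                   .andExpect(status().isOk());
        }
    }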
First, I think we should get the terminology right. There are two general groups of "fake" objects in testing (simplified): mocks, which return predefined answers for predefined input, and stubs, which are simplified versions of the objects the SUT (system under test) communicates with. While a mock basically does nothing but provide a response, a stub might use a live algorithm, but not store its results in a database or send them to customers via email, for example. I am no expert in testing, but those two kinds of fake objects are rather meant to be used in unit tests and, depending on their scope, in acceptance tests.
So your SUT communicates with a remote system during the integration test. In my book this is the perfect time to actually test how your software integrates with other systems, so your software should be tested against a test version of the remote system. In case this is not possible (they might not have a test system) you are conceptually in some sort of trouble. You can shape your stub or mock only in the way you expect it to work, very much like the part of the software you have written to communicate with that remote service. This leaves out some important things you want to test with integration tests:
Was the client side implemented correctly, so that it will work with the live server?
Do we have to develop workarounds because there are implementation errors on the server side?
To what extent will the communication with the remote system affect our software's performance?
Do our authentication credentials work? Does the authentication mechanism work?
What are the technical and conceptual implications of this communication relationship that no one has thought of so far? (Believe me, the latter will happen more often than you might expect!)
Generally speaking: if you run integration tests against a mock or a stub, you test against your own understanding of how to implement the client and the server side of the communication; you do not test how your client works with the actual remote server, or at least with the next best thing, a test system. I can tell you from experience: never make assumptions about how a remote system should behave - test it. Even when talking to a JMS server: test it!
In case you are working for a company, testing against a provided test system is even more important: if your software works against a test system and you can prove it (Selenium is a good helper here, as well as good logging, believe it or not) and your software does not work with the live version, you have a situation which I call "instablame": it is immediately obvious that it is not your fault the software isn't working. I myself hate fingerpointing to the bone, but most suits tend to ask "Whose fault was it?" even before "Can we fix that immediately?" and way before "How can we solve that problem?". And there is a special group of suits called lawyers, you know ... ;)
That being said: if you absolutely have to use those stubs during your integration tests, I would create a separate project for them (let's say "MyProject-IT-Stubs") and build and run the latest version of MyProject-IT-Stubs before running the IT of my main project. When using Maven, you could give MyProject-IT-Stubs war packaging, declare it as a dependency, and fire up a Jetty for this war during the pre-integration-test phase. Then your integration tests run, successfully or not, and you can tear down the Jetty in the post-integration-test phase.
The IMHO best way to organize your project with Maven would be to have a project with three modules: MyProject, MyProject-IT-Stubs and MyProject-IT (the latter declaring dependencies on MyProject and MyProject-IT-Stubs). This keeps your projects nice and tidy, and the stubs do not pollute your main project. You might want to think about organizing MyProject-IT-Stubs into modules as well, one for each remote system you have to talk to. As soon as you have test access, you can simply deactivate the corresponding module in MyProject-IT-Stubs.
I am sure corresponding options exist for InsertYourBuildToolHere.

Simulate slow HTTP connect for Java integration test?

Is it possible to simulate a slow HTTP connect in a Java integration test, so that I can define how long the server should wait before it confirms the connection? A solution which also supports a JAX-WS web service would be perfect.
Background:
I have to integration-test a central timeout configurator. The configurator itself must be technology-independent. Initially it supports JAX-WS web services, so the property com.sun.xml.ws.connect.timeout is set in the request context.
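For context, the configurator sets the property roughly along these lines (the class and parameter names here are placeholders, not the real configurator code):

    import javax.xml.ws.BindingProvider;

    public final class TimeoutConfigurator {

        // Applies the central connect-timeout setting to a JAX-WS proxy (names are illustrative).
        public static void applyConnectTimeout(Object port, int timeoutMillis) {
            BindingProvider bindingProvider = (BindingProvider) port;
            // Property understood by the JAX-WS RI / Metro runtime; value is in milliseconds.
            bindingProvider.getRequestContext()
                           .put("com.sun.xml.ws.connect.timeout", timeoutMillis);
        }
    }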
(I'll try to convince them, that it is part of JAX-WS and thus we don't need to test it, but this question is my backup plan.)
P.S.: There are other questions that ask about simulating a slow connection in general. This is different because I cannot use external tools in a unit test, and I must be able to define a specific connection time.
In one of my unit tests, I used NanoHTTPd (https://github.com/NanoHttpd/nanohttpd), which is pure Java. It is only one class.
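A rough sketch of the kind of single-class server this allows (note this delays the HTTP response rather than the TCP connect itself, so it may or may not be close enough to the connect-timeout case):

    import fi.iki.elonen.NanoHTTPD;

    // Minimal server that waits a configurable time before answering,
    // useful for exercising client-side timeout handling.
    public class SlowHttpServer extends NanoHTTPD {

        private final long delayMillis;

        public SlowHttpServer(int port, long delayMillis) {
            super(port);
            this.delayMillis = delayMillis;
        }

        @Override
        public Response serve(IHTTPSession session) {
            try {
                Thread.sleep(delayMillis); // hold the request for the configured time
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return newFixedLengthResponse("slow response");
        }
    }

    // In a test:
    // SlowHttpServer server = new SlowHttpServer(8089, 10_000);
    // server.start();
    // ... call the endpoint and assert that the client timeout fires ...
    // server.stop();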
You can go through a proxy (HTTP or SOCKS) which can be embedded in your application. You can then tell the proxy to provide the behavior you need to test.
I had to test a dropped TCP/IP connection (How to reproduce a silently dropped TCP/IP connection?) and ended up going through a SOCKS proxy written in Java, which I just suspended to emulate the behavior I was looking for.

How best to enable a web service consumer to integration test a transactional web service?

I want to allow consumers of a web services layer (the web services are written in Java) to create automated integration tests that validate that the version of the web services layer the consumers will use still works for them. The web services are on a different release lifecycle than the consumers, and their APIs or behavior might change; they shouldn't change without notifying the consumer, but the point of this automated test is to validate that they haven't changed.
What would I do if the web service actually executes a transaction (updates database tables)? Is there a common practice for handling this without having to put logic into the web service itself to know it's in a test and roll back the transaction once finished (basically baking the ability to deal with testing into the web service)? Or is that the recommended way to do it?
The consumers are created by one development team at our company and the web services are created by a separate team. The tests would run in an integration environment (the integration environment sits one environment behind the test environment used by QA functional testers, which is one environment behind the prod environment).
The best approach to this sort of thing is dependency injection.
Put your database-handling code in a service or services that are injected into the web service, and create mock versions that are used in your testing environment and do not actually update a database, or in which you add the ability to reset state under test control.
This way your tests can exercise the real webservices but not a real database, and the tests can be more easily made repeatable.
Dependency injection in Java can be done using (among others) Spring or Guice. Guice is a bit more lightweight.
It may be sensible in your situation to have the injection decision made during application startup based on a system property as you note.
If some tests need to actually update a database to be valid, then your testing version of the database handling will need to use a real database, but should also provide a way accessible from tests to reset your database to a known (possibly empty) state so that tests can be repeated.
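A minimal sketch of that idea (the interface and class names are invented for illustration, and the wiring of real vs. test implementation is left to Spring or Guice):

    // Illustrative domain type.
    public class Order {
        public final String id;
        public Order(String id) { this.id = id; }
    }

    // Production code: the web service depends only on this abstraction.
    public interface OrderRepository {
        void saveOrder(Order order);
    }

    // Test double used in the integration environment: records calls instead of
    // writing to the real database, and can be reset between tests.
    public class InMemoryOrderRepository implements OrderRepository {

        private final java.util.List<Order> savedOrders = new java.util.ArrayList<Order>();

        @Override
        public void saveOrder(Order order) {
            savedOrders.add(order);
        }

        public java.util.List<Order> getSavedOrders() {
            return savedOrders;
        }

        public void reset() {
            savedOrders.clear();
        }
    }

The choice between the real and the in-memory implementation can then be made in the Spring/Guice wiring, for example keyed off a system property at application startup, as suggested above.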
My choice would be to host the web services layer in production and in pre-production. Your customers can test against pre-production, but won't get billed for their transactions.
Obviously, this requires you to update production and pre-production at the same time.
Let the web services run unchanged and update whatever they need to in the database.
Your integration tests should check that the correct database records have been written/updated after each test step.
We use a soapUI testbed to accomplish this.
You can write your post-test assertion scripts in Groovy and Java, which can easily connect to the db using JDBC and check records.
People get worried about using the actual database - I wouldn't get hung up on this, it's actually a GOOD thing and makes for a really accurate testbed.
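For illustration, a post-test assertion of that kind boiled down to plain JDBC looks roughly like this (table, column and connection details are placeholders; in soapUI this would typically live in a Groovy assertion script):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Checks that the web service call under test actually wrote the expected row.
    public class DbAssertions {

        public static boolean orderExists(String jdbcUrl, String user, String password,
                                          String orderId) throws Exception {
            try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT COUNT(*) FROM orders WHERE order_id = ?")) {
                stmt.setString(1, orderId);
                try (ResultSet rs = stmt.executeQuery()) {
                    rs.next();
                    return rs.getInt(1) == 1;
                }
            }
        }
    }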
Regarding the "state" of the db, you can approach this in a number of ways:
Restore a db in a known state before the tests run
Get the test to cleanup after themselves
Let the db fill up as more tests run and clean it out occasionally
We've taken the last approach for now but may change in the future if it becomes problematic.
I actually don't mind filling the db up with records as it makes it even more like a real customer database. It also is really useful when investigating test failures.
E.g. CXF allows you to change the transport layer, so you can just change the configuration and use the local transport. Then you have two objects, client and server, without any network activity. It's great for testing (un)marshalling. Everything else should be separated so that the business logic is not aware of web services and can be tested like any other class.
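Roughly, with CXF's local transport this can look like the following (the service interface and implementation names are placeholders, and the cxf-rt-transports-local module must be on the classpath for the "local://" address to work):

    import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;
    import org.apache.cxf.jaxws.JaxWsServerFactoryBean;

    public class LocalTransportExample {

        public static void main(String[] args) {
            String address = "local://greeting-service";

            // Publish the service over the in-process local transport (no network involved).
            JaxWsServerFactoryBean serverFactory = new JaxWsServerFactoryBean();
            serverFactory.setServiceClass(GreetingService.class);
            serverFactory.setServiceBean(new GreetingServiceImpl());
            serverFactory.setAddress(address);
            serverFactory.create();

            // Create a client proxy against the same local address.
            JaxWsProxyFactoryBean clientFactory = new JaxWsProxyFactoryBean();
            clientFactory.setServiceClass(GreetingService.class);
            clientFactory.setAddress(address);
            GreetingService client = (GreetingService) clientFactory.create();

            // Calls go through the full (un)marshalling pipeline but never touch a socket.
            System.out.println(client.greet("world"));
        }
    }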

Making a reliable web service unreliable, but in a controlled way?

I have a Java 6-based web service client using the standard Java 6 annotation-based approach (i.e. no Axis or other third-party web service library), which works very well. So does the web service I am calling, which is nice, but now I need to write error-handling code, and I need to be able to make the existing web service unreliable in a controlled way.
There are many mock frameworks, and they may be helpful, but right now I don't need to mock out the service with prerecorded answers or anything; I just need to introduce unreliability that causes the web service library to fail, so I can handle the situation gracefully. This would probably be a proxy server running locally.
I work with Eclipse Java EE 3.6, but Netbeans, IntelliJ and JDeveloper are also options.
What would be the best way to do this?
Tcpmon (http://ws.apache.org/commons/tcpmon/index.html) can be set up to act as a proxy and even simulate slow connections.
That would give you a chance to simulate both "sorry, not here" and "yes, we are here but we time out".
Any artificially introduced instability is likely to lead to real avenues of instability being missed. Aim to cover all potential error vectors in your code rather than trying to mitigate specific ones.
Since you haven't disclosed many details of your setup, maybe throwing an Exception here and there would be enough?
Seriously, for integration tests like this I'd suggest running some subset of a real web service container.
Based on the service's logic, it may behave unreliably because:
an external system it uses is misbehaving - try to mock the external system and throw faults (of different types) from it
there is a database access problem - try mocking the DAO layer and throwing an Exception from there
there is a general hardware problem - it depends :) try to stress your code as you see fit
I think that rather than introducing unreliability into a running instance of the web service application, you are better off simulating error conditions in your unit/integration tests and asserting that the top layer of your service responds the way you would like (see the sketch after the list below).
For example:
How does the service entry-point respond to a request if the data layer reports that it cannot communicate with the backend (i.e. the data layer throws exceptions, or however it indicates failure)?
How does the service entry-point behave if other required components are throwing "unavailable"-like exceptions?
Do you have any timeout logic in place, i.e. the service returns an error if it takes more than X seconds to process the request? If so, this can be simulated in a mock test as well.
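For example, with a mocking framework like Mockito the first two cases boil down to something like this (OrderDao, OrderService and OrderResponse are invented names standing in for your data layer, service entry point and response type):

    import org.junit.Test;

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.anyString;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    public class OrderServiceErrorHandlingTest {

        @Test
        public void returnsErrorResponseWhenDataLayerIsDown() {
            // Hypothetical data layer used by the service entry point.
            OrderDao dao = mock(OrderDao.class);
            when(dao.findOrder(anyString()))
                    .thenThrow(new RuntimeException("backend unavailable"));

            OrderService service = new OrderService(dao);

            // The top layer should translate the failure into a graceful error response.
            OrderResponse response = service.getOrder("42");
            assertEquals("SERVICE_UNAVAILABLE", response.getStatusCode());
        }
    }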
