Is it possible to use Testcontainers on Spring (non Boot)? - java

I am designing integration tests for a somewhat legacy app and I am facing a problem: there are services I'd like to use only for one run of the integration tests.
The app contains multiple modules, 4 Spring (non-Boot) applications, and these use the following services:
PostgreSQL database
RabbitMQ instance
ElasticSearch instance
The whole stack is currently dockerized via docker-compose (so with docker-compose up the whole app starts, database schemas are created, etc.).
I would like to achieve this via Testcontainers: start a PostgreSQL container in which I run Flyway scripts to create the schema and fill the database with the data required to run (other data will be added in separate tests), then start a RabbitMQ instance and an ElasticSearch instance.
All of this should happen automatically every time the integration tests run.
Is this even possible using "legacy" Spring (non Boot)?
And is it possible to automate the process so that it can run many times on one server (so there won't be any port collisions)? The goal is to run this on a Git repository after a merge request is submitted, to check that all integration tests pass.
Thank you for any advice.

Testcontainers has been completely independent of Spring from the beginning; as far as I know, some kind of integration with Spring Boot was only added recently.
There are a few ways to achieve that; the simplest is to create the containers as test class fields, as described here [1].
Yes, it is possible to achieve that without collisions, read here. In short, Testcontainers exposes the container's port (e.g. 5432 for Postgres) on a random host port in order to avoid collisions, and you can get the actual port as described in the article. For JDBC containers it can be even easier.
I haven't personally worked with RabbitMQ and ElasticSearch but there are modules for that, you can read about that in docs.
P.S. It's also possible to use the Docker Compose support for that, but I can't see a reason to here; just FYI, the approach above is simpler.
[1] The @Testcontainers annotation will start them for you, but you can also manage the container lifecycle manually.
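
For illustration, here is a minimal sketch of the field-based approach in a plain (non-Boot) Spring integration test, assuming JUnit 4 and the org.testcontainers:postgresql module; the class name, property keys and Spring context file are made-up placeholders:

import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.testcontainers.containers.PostgreSQLContainer;

public class LegacySpringIT {

    // @ClassRule starts the container once per test class and stops it afterwards.
    // The database port (5432) is mapped to a random free host port, so several
    // builds can run on the same server without port collisions.
    @ClassRule
    public static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:13");

    @BeforeClass
    public static void exposeConnectionProperties() {
        // Hand the randomly mapped coordinates to the non-Boot Spring context,
        // e.g. via system properties resolved by a PropertyPlaceholderConfigurer.
        System.setProperty("db.url", postgres.getJdbcUrl());
        System.setProperty("db.username", postgres.getUsername());
        System.setProperty("db.password", postgres.getPassword());
    }

    @Test
    public void contextStartsAgainstContainerizedDatabase() {
        // Load the Spring context here (e.g. new ClassPathXmlApplicationContext("test-context.xml"))
        // and run the Flyway migrations against postgres.getJdbcUrl() before asserting anything.
    }
}

RabbitMQ and ElasticSearch containers can be added as further fields in the same way.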

Related

Spring Boot - Running one specific background job per pod

I'm coming from the PHP/Python/JS world, where it's standard to run multiple instances of a web application as separate processes, and asynchronous tasks like queue processing as separate scripts.
E.g. in a k8s environment, there would be:
N instances of web server only, each running in separate pod
For each queue, dynamic number of consumers, each in separate pod
Cron scheduling using k8s crontab functionality, leaving the scheduling process to k8s
Such an approach fits the cloud well, where the workload can be scheduled across either a small number of powerful machines or a lot of less powerful machines, and it allows very fine-grained control of auto-scaling (based on the number of messages in a specific queue, for example).
Also, there is a clear separation between the developer and DevOps responsibility.
Recently, I tried to replicate the same setup with Java Spring Boot application and failed miserably.
Even though Java frameworks say that they are "cloud native", it seems like all the documentation is still built around a monolithic application that handles all consumers and cron scheduling in separate threads.
The clear answer to this problem is microservices, but that's way out of scope.
What I need is to deploy separate parts of the application (like one queue listener only) per pod in the cloud, yet keep the monolithic code architecture.
So, the question is:
How do I design my Spring Boot application so that:
I can run the webserver separately without queue listeners and scheduled jobs
I can run one queue listener per pod in the k8s
I can use k8s cron scheduling instead of the app-level Spring scheduler?
I found several ways to achieve something like this but I expect there must be some "more or less standard way".
Alternative solutions that came to my mind:
Having separate module with separate Application definition so that each "command" is built separately
Using Spring Profiles to instantiate specific services only according to some environment variables
Implement a custom command line runner which parses the command/queue name and dynamically creates the appropriate services (this seems the closest to the way it's done in "scripting languages")
What I mainly want to achieve with such setup is:
To be able to run the application on a lot of weak hardware instead of one machine with 32 CPU cores
Easier scaling per workload
Removing one layer from an already complex monitoring infrastructure (k8s already allows very fine-grained resource monitoring; application-level task scheduling and parallelism makes this much more difficult)
Am I missing something, or is it just not standard to write Java server apps this way?
Thank you!
What I need is to deploy separate parts of the application (like one queue listener only) per pod in the cloud, yet keep the monolithic code architecture.
I agree with @jacky-neo's answer in terms of the appropriate architecture/best practice, but that may require you to break up your monolithic application.
To solve this without breaking up your monolithic application, deploy multiple instances of your monolith to Kubernetes each as a separate Deployment. Each deployment can have its own configuration. Then you can utilize feature flags and define the environment variables for each deployment based on the functionality you would like to enable.
In application.properties:
myapp.queue.listener.enabled=${QUEUE_LISTENER_ENABLED:false}
In your Deployment for the queue listener, enable the feature flag:
env:
- name: 'QUEUE_LISTENER_ENABLED'
value: 'true'
You would then just need to configure your monolithic application to use this myapp.queue.listener.enabled property and only enable the queue listener when the property is set to true.
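
For illustration, a minimal sketch of what that feature flag can look like in code, assuming Spring Boot's @ConditionalOnProperty; the listener class and bean name are made up:

import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class QueueListenerConfig {

    // The listener bean is only registered when the deployment sets
    // QUEUE_LISTENER_ENABLED=true, i.e. myapp.queue.listener.enabled=true.
    @Bean
    @ConditionalOnProperty(name = "myapp.queue.listener.enabled", havingValue = "true")
    public OrderQueueListener orderQueueListener() {
        return new OrderQueueListener();
    }

    // Placeholder for the real listener; in the actual app this would hold the
    // @RabbitListener / @JmsListener methods.
    public static class OrderQueueListener {
    }
}

The web-only deployment simply leaves the variable at its default (false), so the queue listener never starts there.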
Similarly, you could also apply this logic to the Spring profile to only run certain features in your app based on the profile defined in your ConfigMap.
This Baeldung article explains the process I'm presenting here in detail.
For the scheduled task, just set up a CronJob using a curl container which can invoke the service you want to perform the work.
Edit
Another option based on your comments below -- split the shared logic out into a shared module (using Gradle or Maven), and have two other runnable modules like web and listener that depend on the shared module. This will allow you to keep your shared logic in the same repository, and keep you from having to build/maintain an extra library which you would like to avoid.
This would be a good step in the right direction, and it would lend well to breaking the app into smaller pieces later down the road.
Here's some additional info about multi-module Spring Boot projects using Maven or Gradle.
Based on my experience, I would resolve these issues as below. Hope it is what you want.
I can run the webserver separately without queue listeners and scheduled jobs
Develop a Spring Boot app for this and deploy it as service-A in Kubernetes. In this app, use spring-mvc to define the controller or REST controller that receives requests. Then use a Kubernetes NodePort or define an ingress gateway to make the service accessible from outside the Kubernetes cluster. If you use sessions, store them in Redis or a similar shared place so that multiple instances of the service (pods) can share the same session state.
I can run one queue listener per pod in the k8s
Develop a new Spring Boot app for this and deploy it as service-B in Kubernetes. This service only processes queue messages from RabbitMQ or similar, which can be sent from service-A or another source. In most cases it should not be accessible from outside the Kubernetes cluster.
I can use k8s cron scheduling instead of the app-level Spring scheduler?
In my opinion, I would define a new Spring Boot app with spring-scheduler, called service-C, in Kubernetes. It will have only one instance and will not be scaled. It will invoke service-A's methods at the scheduled time, and it will not be accessible from outside the Kubernetes cluster. But if you prefer a Kubernetes CronJob, you can just write a small shell script that uses service-A's DNS name in Kubernetes to call its REST endpoint.
The above three services can each be configured with different resources such as CPU and memory usage.
I do not get the essence of your post.
You want to have an application with "monolithic code architecture".
And then deploy it to several pods, but only parts of the application are actually running.
Why don't you split out the parts you want to be special into applications in their own right?
Perhaps this is because I come from a Java background and haven't deployed monolithic scripting apps.

Java test project for integration tests

I have to work with an old Java application.
There is a total of 6 projects which:
communicate using REST and MQ, and
already have some integration tests.
As part of this:
MockMvc mocks are used for the initial requests from the test,
additional HTTP requests are made by the services, and
they go against a dev server instead of calling code from the current build;
it'll fail if my test uses code which communicates with another project via a new endpoint that the dev server does not have yet.
How I thought of testing this
My idea was to use a single test project which runs all required projects using @SpringBootTest and MockMvc to mock the real calls and route them inside the test instead of using real endpoints.
The ask
I don't understand how to make Spring work with @Autowired and run 6 different WebApplicationContexts.
Or maybe I should forget my plan and use something different.
When it comes to @SpringBootTest, it is supposed to load everything required by one single Spring Boot driven application.
So the "integration testing" referred in Spring Boot testing documentation is for one specific application.
Now, you're talking about 6 already existing applications. If these applications are all Spring Boot driven, then you can run @SpringBootTest for each one of them, and mock everything you don't need. MockMvc, which you've mentioned, BTW doesn't start the whole application, but rather starts the "part" of the application relevant for web request processing (for example it won't load your DAO layer), so it's an entirely different thing; don't confuse the two :)
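
For illustration, a rough sketch of that difference using a web-slice test; OrderController, OrderService and the endpoint are made-up placeholders, and the example assumes JUnit 5 with Spring Boot's test support:

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.test.web.servlet.MockMvc;

// Loads only the web layer for one controller; services and the DAO layer are not started.
@WebMvcTest(OrderController.class)
class OrderControllerSliceTest {

    @Autowired
    private MockMvc mockMvc;

    // The collaborator behind the controller is replaced with a mock.
    @MockBean
    private OrderService orderService;

    @Test
    void respondsToTheOrdersEndpoint() throws Exception {
        mockMvc.perform(get("/orders/42"))
               .andExpect(status().isOk());
    }
}

A @SpringBootTest, by contrast, starts the full application context of that one application.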
If you want to test the whole flow that involves all 6 services, you'll have to set up a whole environment and run a full-fledged system test that will be executed on a remote JVM.
In this case you can containerize the applications and run them in tests with Testcontainers.
Obviously you'll also have to provide containers for databases if they have any, messaging systems, and so forth.
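
For illustration, a rough sketch of that setup with Testcontainers' GenericContainer, assuming each application has been packaged as a Docker image; the image names, environment variables and ports are made-up placeholders:

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.Network;
import org.testcontainers.containers.PostgreSQLContainer;

public class WholeFlowSystemTest {

    // A shared Docker network lets the application containers reach the database by alias.
    static Network network = Network.newNetwork();

    static PostgreSQLContainer<?> db = new PostgreSQLContainer<>("postgres:13")
            .withNetwork(network)
            .withNetworkAliases("db");

    // One GenericContainer per service image.
    static GenericContainer<?> serviceA = new GenericContainer<>("registry.example.com/service-a:latest")
            .withNetwork(network)
            .withEnv("DB_HOST", "db")
            .withExposedPorts(8080)
            .dependsOn(db);

    // In the tests: serviceA.start(), then call
    // "http://" + serviceA.getHost() + ":" + serviceA.getMappedPort(8080)
    // with a real HTTP client and assert on the responses.
}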
All-in-all I feel that the question is rather vague and lacks concrete details.

start a spring boot application from test

I have two spring boot applications. The first one manages data in a PostgreSQL database. The other one exposes this data over REST.
In my first Spring Boot application I write a test that uses a test database. Now I want to write a test for the other application (REST); that test needs data in the database.
How can I use the first spring boot application in my test for the second spring boot application?
Or can I set it up so that this test can only run after the test from the first Spring Boot application has run?
There are different types of testing. The first is unit testing, which confirms that your business logic works. The second is integration testing, which is again split into two parts: first you test the component in isolation to confirm that it communicates the way you expect (sometimes called component testing), and second you test the component against other, real, components.
You can easily do unit tests in Maven/Spring Boot, and it's fairly easy to do component testing too. Integration testing, however, is usually a lot more complicated and usually needs a mechanism outside the simple Maven build system. The most common approach to this is to use a CI/CD tool, like Jenkins or CircleCI.
The usual pattern is to run the unit-tests first because they are the fastest, then component tests, then integration tests. The latter often requires an 'environment' to be created that contains all of the collaborating components that compose a service (the two spring-boot apps in your case).
For integration testing, we often find that the biggest problem is "Configuration Management", which is basically a description of which versions of which components work together. For your problem you need a database, data, and two spring-boot apps, along with their configuration and environment data.
First of all: you shouldn't start either the first or the second application in your tests. It will slow down your testing drastically. What's more, you will be dependent on another application that in reality may be developed by another team - a bad idea.
In fact, you've got something like 3 ways to do it:
Use WireMock or some other dummy stub service - this approach will suit you if you're prepared not to call the "real" application. The stub should mimic your application (expose the same interface, i.e. the same URL, HTTP method and response); see the sketch after this list.
Run both applications in Docker containers and start them with Docker Compose - you can use either a real database deployed somewhere or another container with predefined data.
Deploy the first service somewhere along with the database and run integration, system, or end-to-end tests.
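
For illustration, a minimal sketch of the first option using WireMock's Java API; the URL, payload and port handling are made-up placeholders:

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

import com.github.tomakehurst.wiremock.WireMockServer;

public class FirstAppStub {

    public static WireMockServer startStub() {
        // A dynamic port avoids collisions when several builds run on the same machine.
        WireMockServer server = new WireMockServer(options().dynamicPort());
        server.start();

        // Mimic the first application's endpoint: same URL, method and response shape.
        server.stubFor(get(urlEqualTo("/api/items/1"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\":1,\"name\":\"sample\"}")));

        // Point the REST application under test at "http://localhost:" + server.port()
        return server;
    }
}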
Hope this helps!
UPDATE: I meant you shouldn't start the apps in the tests. Also, to my mind it is obvious enough that we're talking about integration testing at least, since the question mentions testing the whole request trip starting with the REST app. Maybe that clarifies things for the downvoters.

Automated Weblogic Service Testing using Jenkins

What I am looking for is a "best practices" description or example by which testing can be automated for components that are deployed to a Weblogic server.
I am not expecting anyone to present a step by step solution to this problem.
I am looking for a resource (book, manual, website, etc.) that can describe a path to this integration and testing goal.
The situation is that we have a pair of (Maven) Project deployments (in Eclipse) which are managed/reviewed/maintained through: Git, Stash, and Jenkins.
The first component is providing Web Services (RESTful services as well as Stateful and Stateless services). It is connected to the second component. The second component exposes Stateless and RESTful services that provide access services (CRUD: Create Read Update Delete) to an Oracle SQL Database.
Currently, the Jenkins Service is testing the Client UI through Jasmine Zzzzz.spec.js tests. This is all well and good for the "front-end hipsters", but not helpful for the Java service component developers.
What I would like to do is to be able to write (?JUnit?) tests to evaluate Service component operations that can be automatically executed by Jenkins continuous integration components. What I would like to avoid doing is mocking up everything to the point that the tests become trivial and pointless.
What needs to happen is:
1. Developer completes a Work Product (JIRA Task) to add functionality to a Service hosted by a Weblogic Server.
2. Work Product contains a Test (?JUnit?).
3. Work Product (including test) is pushed by Git to Stash.
4. Work Product Test is added to Integration Tests.
5. Stash and Jenkins execute and evaluate Work Product JUnit Test as part of [Integration Testing].
Integration Testing will:
1. Start a (configured) Weblogic Server (if one is not already started).
2. Compile and Publish Deployment containing Work Product.
3. Deployment will connect to a Configured Datasource.
4. Start [Work Product JUnit Test].
Work Product JUnit Test will:
1. Connect and authenticate to Weblogic Service Deployment.
2. Call tested Service methods.
3. Evaluate test results
Yes, that is a tall order with a hive full of buzzwords. However, I am having difficulty finding a worthwhile resource that isn't trying to direct me to mock up the very components that I am trying to test.
What you have described is a pretty standard CI setup; the sort of thing that is notable via its absence rather than its existence.
In that vein, it's probably appropriate for you and your team to read up on the fundamentals.
Continuous Integration (Fowler series)
Continuous Delivery (Fowler series)
The DevOps Handbook
then if you really want, you can pick up books on the specifics of Jenkins:
Jenkins: The Definitive Guide (O'Reilly)
Jenkins CI Cookbook

How best to enable a web service consumer to integration test a transactional web service?

I want to allow consumers of a web services layer (the web services are written in Java) to create automated integration tests that validate that the version of the web services layer the consumers will use still works for them (i.e. the web services are on a different release lifecycle than the consumers, and their APIs or behavior might change; they shouldn't change without notifying the consumer, but the point of this automated test is to validate that they haven't changed).
What should I do if the web service actually executes a transaction (updates database tables)? Is there a common practice for handling this without having to put logic into the web service itself so it knows it's in a test and rolls back the transaction once finished (basically baking the capability to deal with testing into the web service)? Or is that the recommended way to do it?
The consumers are created by one development team at our company and the web services are created by a separate team. The tests would run in an integration environment (the integration environment is one environment behind the test environment used by QA functional testers, which is one environment behind the prod environment).
The best approach to this sort of thing is dependency injection.
Put your database handling code in a service or services that are injected into the webservice, and create mock versions that are used in your testing environment and do not actually update a database, or in which you add the capability to reset state under test control.
This way your tests can exercise the real webservices but not a real database, and the tests can be more easily made repeatable.
Dependency injection in Java can be done using (among others) Spring or Guice. Guice is a bit more lightweight.
It may be sensible in your situation to have the injection decision made during application startup based on a system property as you note.
If some tests need to actually update a database to be valid, then your testing version of the database handling will need to use a real database, but it should also provide a way, accessible from the tests, to reset the database to a known (possibly empty) state so that tests can be repeated.
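
For illustration, a rough sketch of that injection idea; the Order type and repository names are made-up placeholders, and the same split works with either Spring or Guice:

import java.util.ArrayList;
import java.util.List;

// The webservice depends only on this interface.
interface OrderRepository {
    void save(Order order);
}

// Real implementation wired in production: talks to the actual database.
class JdbcOrderRepository implements OrderRepository {
    @Override
    public void save(Order order) {
        // real JDBC/JPA code here
    }
}

// Test double wired in the integration-test environment: records calls in memory
// and can be reset under test control, which keeps the tests repeatable.
class InMemoryOrderRepository implements OrderRepository {
    private final List<Order> saved = new ArrayList<>();

    @Override
    public void save(Order order) {
        saved.add(order);
    }

    public List<Order> saved() {
        return saved;
    }

    public void reset() {
        saved.clear();
    }
}

class Order {
}

Which implementation gets registered can be decided at application startup, for example from a system property, as mentioned above.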
My choice would be to host the web services layer in production and in pre-production. Your customers can test against pre-production, but won't get billed for their transactions.
Obviously, this requires you to update production and pre-production at the same time.
Let the web services run unchanged and update whatever they need to in the database.
Your integration tests should check that the correct database records have been written/updated after each test step.
We use a soapUI testbed to accomplish this.
You can write your post-test assertion scripts in Groovy and Java, which can easily connect to the db using JDBC and check records.
People get worried about using the actual database - I wouldn't get hung up on this, it's actually a GOOD thing and makes for a really accurate testbed.
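
For illustration, this is roughly what such a JDBC check can look like in plain Java; the connection details, table and column names are made-up placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DbAssertions {

    // Returns true if exactly one record was written for the given order id.
    public static boolean orderWasPersisted(String orderId) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/testdb", "test_user", "test_password");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT COUNT(*) FROM orders WHERE order_id = ?")) {
            ps.setString(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1) == 1;
            }
        }
    }
}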
Regarding the "state" of the db, you can approach this in a number of ways:
Restore the db to a known state before the tests run
Get the tests to clean up after themselves
Let the db fill up as more tests run and clean it out occasionally
We've taken the last approach for now but may change in future if it becomes problematic.
I actually don't mind filling the db up with records as it makes it even more like a real customer database. It also is really useful when investigating test failures.
E.g. CXF allows you to change the transport layer, so you can just change the configuration and use the local transport. Then you have two objects, client and server, without any network activity; it's great for testing (un)marshalling. Everything else should be kept separate so the business logic is not aware of web services and can be tested like any other class.
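
For illustration, a minimal sketch of that idea using CXF's simple frontend and the local transport, assuming the cxf-rt-transports-local module is on the classpath; the service interface and names are made up:

import org.apache.cxf.frontend.ClientProxyFactoryBean;
import org.apache.cxf.frontend.ServerFactoryBean;

public class LocalTransportExample {

    public interface GreetingService {
        String greet(String name);
    }

    public static class GreetingServiceImpl implements GreetingService {
        @Override
        public String greet(String name) {
            return "Hello " + name;
        }
    }

    public static void main(String[] args) {
        // Publish the service on the in-JVM "local" transport: no sockets are opened.
        ServerFactoryBean serverFactory = new ServerFactoryBean();
        serverFactory.setServiceClass(GreetingService.class);
        serverFactory.setServiceBean(new GreetingServiceImpl());
        serverFactory.setAddress("local://greeting");
        serverFactory.create();

        // Build a client against the same local address; (un)marshalling still happens,
        // so the contract is exercised without any network traffic.
        ClientProxyFactoryBean clientFactory = new ClientProxyFactoryBean();
        clientFactory.setServiceClass(GreetingService.class);
        clientFactory.setAddress("local://greeting");
        GreetingService client = (GreetingService) clientFactory.create();

        System.out.println(client.greet("world"));
    }
}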
