Parallel tests consuming ActiveMQ/JMS topic - java

I have a vm://localhost in-memory ActiveMQ setup in a Spring Boot JMS project.
My controllers send events to a particular topic, and some tests check the events are properly sent using JmsMessagingTemplate. The problem is that when I execute multiple tests at the same time, some of them fail because they receive an unexpected event.
How can I fix that? I have tried playing with acknowledge modes, concurrent consumers, exclusive consumers, jms.listener.max-concurrency, ActiveMQ pool configuration...

You should do one of the following:
Start a separate in-memory ActiveMQ instance for each test (or group of tests). For example, you can use an embedded broker to spawn multiple instances.
Dynamically generate a unique topic name per test, so each test publishes to and consumes from its own topic. A sketch of this option follows below.
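A minimal sketch of the unique-topic option, assuming the destination name is configurable per test (the topic prefix and test wiring here are hypothetical):

import java.util.UUID;

import org.junit.jupiter.api.BeforeEach;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsMessagingTemplate;

class ControllerEventTest {

    @Autowired
    private JmsMessagingTemplate jmsMessagingTemplate;

    // Unique per test instance, so parallel tests never see each other's events.
    private String topicName;

    @BeforeEach
    void setUpTopic() {
        topicName = "events.test." + UUID.randomUUID();
        // Point the controller under test at topicName here, e.g. via a
        // configurable destination property (how depends on your application).
    }
}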

Related

Is there a way to handle sending thousands of messages in RabbitMQ without losing any message

I am working on a Spring Boot application that receives messages from another application through RabbitMQ and does some tasks; after that it sends these messages to another RabbitMQ queue. Everything is good, but sometimes the application doesn't send one or two messages to the queue. To handle this problem I used synchronized, but now I have a performance issue. I am using RabbitMQ 2.1.8 and Spring Boot 2.1.8.
With current versions (since 2.0) there are a couple of different ways to wait for publisher confirmations.
See the documentation "Publishing is Asynchronous — How to Detect Successes and Failures"
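For example, with a correlated confirm the sender can wait for the broker's ack before considering a message sent. A sketch, assuming Spring AMQP 2.1+ with publisher confirms enabled on the connection factory (the exchange and routing key are placeholders):

import java.util.concurrent.TimeUnit;

import org.springframework.amqp.rabbit.connection.CorrelationData;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class ConfirmingSender {

    private final RabbitTemplate rabbitTemplate;

    public ConfirmingSender(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void send(Object payload) throws Exception {
        CorrelationData correlation = new CorrelationData("id-" + System.nanoTime());
        rabbitTemplate.convertAndSend("my-exchange", "my.routing.key", payload, correlation);
        // Blocks until the broker confirms the publish (or the wait times out).
        CorrelationData.Confirm confirm = correlation.getFuture().get(10, TimeUnit.SECONDS);
        if (!confirm.isAck()) {
            // Nacked by the broker: retry or log instead of silently losing the message.
            throw new IllegalStateException("Message was not confirmed by the broker");
        }
    }
}

This avoids a global synchronized block: each send waits only on its own confirmation.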

Can I use spring.cloud.stream.bindings.<channel>.group when using RabbitMQ to obtain exactly-once delivery?

So I was reading this tutorial on configuring RabbitMQ and Spring Boot.
At a certain point it says:
Most of the time, we need the message to be processed only once.
Spring Cloud Stream implements this behavior via consumer groups.
So I started looking for more information, and in the Spring docs it is written that:
When doing so, different instances of an application are placed in a competing consumer relationship, where only one of the instances is expected to handle a given message. Spring Cloud Stream models this behavior through the concept of a consumer group. (Spring Cloud Stream consumer groups are similar to and inspired by Kafka consumer groups.)
So I set up two nodes here with Spring Boot, Spring Cloud Stream and RabbitMQ, using spring.cloud.stream.bindings.<channel>.group.
This still looks like at-least-once behavior to me. Am I wrong in assuming that? Should I still handle the possibility of a message being processed twice, even when using spring.cloud.stream.bindings.<channel>.group?
Thank you
It's at least once. The connection might close before the ack is sent. Rare, but possible.
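So yes, the consumer should tolerate redelivery. A minimal sketch of an idempotent handler (the businessId header and the in-memory store are hypothetical; a real deduplication store must be shared and durable, e.g. a database table):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.messaging.Message;

public class IdempotentHandler {

    // In-memory for illustration only.
    private final Set<String> processedIds = ConcurrentHashMap.newKeySet();

    public void handle(Message<String> message) {
        String id = (String) message.getHeaders().get("businessId"); // hypothetical header
        if (id != null && !processedIds.add(id)) {
            return; // duplicate redelivery: skip it
        }
        // ...actual processing...
    }
}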

Start Spring Boot app with Spring Integration Kafka consumers paused

I am working on a Spring Boot application that uses Spring Integration flows that have Kafka topics as their source. Our integration flow starts from an interface containing SubscribableChannels with the @Input and @Output annotations from org.springframework.cloud.stream.annotation. These are configured to read from Kafka via Cloud Config with spring.cloud.stream.kafka.bindings.
When the app first starts up it immediately begins reading from the Kafka topics. This is a problem as the app needs to initialize some local, non-persistable databases before it can start correctly processing incoming Kafka messages.
We are currently using a @PostConstruct to populate these in-memory databases before Kafka starts, but this is suboptimal because the app can't use Eureka, Feign, etc. to reliably find a healthy service that has the latest data for the in-memory database.
For a variety of reasons the architecture can't be changed so that the in-memory database is shared or prepopulated. Just know that when I call it an in-memory database I'm simplifying things a bit; it's actually another service, of sorts.
What is the best way to start a Spring Boot app such that an Integration Flow that reads from Kafka starts in a paused state and can be unpaused after some other process completes?
I assume you use a KafkaMessageDrivenChannelAdapter and, given your mention of the Spring Integration Java DSL, Kafka.messageDrivenChannelAdapter() to be exact. That one can be configured with an id and autoStartup(false), so it isn't going to start consuming the Kafka topic immediately. Whenever you are ready to consume, you can start() this component, obtaining it as a Lifecycle from the application context by the mentioned id.
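A sketch of that wiring (the topic, bean id, and handler are placeholders):

import org.springframework.context.ApplicationContext;
import org.springframework.context.Lifecycle;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.kafka.dsl.Kafka;
import org.springframework.kafka.core.ConsumerFactory;

public class PausedKafkaFlowConfig {

    @Bean
    public IntegrationFlow kafkaInFlow(ConsumerFactory<String, String> consumerFactory) {
        return IntegrationFlows
                .from(Kafka.messageDrivenChannelAdapter(consumerFactory, "my-topic")
                        .id("kafkaInboundAdapter") // id to look the adapter up by later
                        .autoStartup(false))       // don't consume until started explicitly
                .handle(m -> { /* ...process the record... */ })
                .get();
    }

    // Call this once the in-memory database has been populated.
    public static void startConsuming(ApplicationContext context) {
        context.getBean("kafkaInboundAdapter", Lifecycle.class).start();
    }
}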
Or you can send an appropriate message to the Control Bus.
UPDATE
If you deal with Spring Cloud Stream and the Kafka binder, consider injecting a BindingsEndpoint bean and calling its changeState(@Selector String name, State state) with the name of your binding and State.STOPPED. When your in-memory DB is ready, you call it again with State.STARTED: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/#_binding_visualization_and_control
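A sketch of that approach (the binding name "input" is a placeholder for yours; the BindingsEndpoint bean is only available when the actuator bindings endpoint is on):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.endpoint.BindingsEndpoint;
import org.springframework.stereotype.Component;

@Component
public class BindingPauser {

    @Autowired
    private BindingsEndpoint bindingsEndpoint;

    public void pause() {
        // Stop the binding so no records are consumed at startup.
        bindingsEndpoint.changeState("input", BindingsEndpoint.State.STOPPED);
    }

    public void resume() {
        // Start consuming once the in-memory database is ready.
        bindingsEndpoint.changeState("input", BindingsEndpoint.State.STARTED);
    }
}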

Background job Framework Needed in Production

We have a requirement where we have to run many async background processes which access DBs, Kafka queues, etc. As of now, we are using Spring Batch with Tomcat (exploded WAR) for this. However, we are facing certain issues which I'm unable to solve using Spring Batch. I was thinking of other frameworks to use, but couldn't find any that solves all my problems.
It would be great to know if there exists a framework which solves the following problems:
Since Spring Batch runs inside one Tomcat container (1 java process), any small update in any job/step will result in restarting the Tomcat server. This results in hard-stopping of all running jobs, resulting in incomplete/stale data.
WHAT I WANT: Bundle all the jars and run each job as a separate process. The framework should store the PID and should be able to manage (stop/force-kill) the job on demand. This way, when we want to update a JAR, the existing process won't be hindered (however, we should be able to stop the existing process from UI), and no other job (running or not) will also be touched.
I have looked at hot-update of JARs in Tomcat, but I'm skeptical whether to use such a mechanism in production.
Sub-question: Will OSGI integrate with Spring Batch? If so, is it possible to run each job as a separate container with all JARs embedded in it?
Spring Batch doesn't have a master-slave architecture.
WHAT I WANT: There should be a master, where the list of jobs are specified. There should be slave machines (workers), which are specified to master in a configuration file. There should exist a scheduler in the master, which when needed to start a job, should assign a slave a job (possibly load-balanced, but not necessary) and the slave should update the DB. The master should be able to send and receive data from the slaves (start/stop/kill any job, give me update of running jobs, etc.) so that it can be displayed on a UI.
This way, in case I have a high load, I should be able to just add machines into the cluster and modify the master configuration file and the load should get balanced right away.
Spring Batch doesn't have a built-in alerting mechanism in case of job stall/failure.
WHAT I WANT: I should be able to set up alerts for jobs in case of failure. If necessary, a job should have a timeout so that it can notify the user (via email, probably) or be force-stopped when it crosses a specified threshold.
Maybe Vert.x can do the trick.
Since Spring Batch runs inside one Tomcat container (1 java process), any small update in any job/step will result in restarting the Tomcat server. This results in hard-stopping of all running jobs, resulting in incomplete/stale data.
Vert.x allows you to build microservices. Each Vert.x instance is able to communicate with the other instances. If you stop one, the others can still work (as long as they don't depend on it; e.g. if you stop the master, the slaves will fail).
Vert.x is not an application server.
There's no monolithic Vert.x instance into which you deploy applications.
You just run your apps wherever you want to.
Spring Batch doesn't have a master-slave architecture
Since Vert.x is event driven, you can easily create a master-slave architecture: for example, handle the HTTP requests in one Vert.x instance and dispatch them among several other instances depending on the nature of the request, as sketched below.
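A minimal sketch of that dispatch pattern over the event bus (the address, port, and message contents are made up for illustration):

import io.vertx.core.AbstractVerticle;

// "Master": receives HTTP requests and dispatches work over the event bus.
public class MasterVerticle extends AbstractVerticle {
    @Override
    public void start() {
        vertx.createHttpServer()
                .requestHandler(req -> {
                    // send() round-robins among the consumers registered on the
                    // address, which load-balances the jobs across workers.
                    vertx.eventBus().send("jobs.batch", req.path());
                    req.response().end("job dispatched");
                })
                .listen(8080);
    }
}

// "Slave"/worker: each instance registers on the same address and competes for messages.
class WorkerVerticle extends AbstractVerticle {
    @Override
    public void start() {
        vertx.eventBus().consumer("jobs.batch", msg -> {
            // ...run the job, update the DB...
        });
    }
}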
Spring Batch doesn't have a built-in alerting mechanism in case of job stall/failure.
In Vert.x, you can set a timeout for each message and handle failures.
Sending with timeouts
When sending a message with a reply handler you can specify a timeout in the DeliveryOptions.
If a reply is not received within that time, the reply handler will be called with a failure.
The default timeout is 30 seconds.
Send Failures
Message sends can fail for other reasons, including:
There are no handlers available to send the message to
The recipient has explicitly failed the message using fail
In all cases the reply handler will be called with the specific failure.
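For instance, using the request/reply API (named request in recent Vert.x versions; older 3.x releases used send with a reply handler), where the address and the 5-second timeout are arbitrary:

import io.vertx.core.Vertx;
import io.vertx.core.eventbus.DeliveryOptions;

public class TimeoutExample {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Override the 30-second default reply timeout mentioned above.
        DeliveryOptions options = new DeliveryOptions().setSendTimeout(5_000);
        vertx.eventBus().request("jobs.batch", "run-nightly-report", options, reply -> {
            if (reply.failed()) {
                // Called on timeout, on "no handlers", or on an explicit fail()
                // by the consumer: hook your alerting (email, pager) in here.
                System.err.println("Job failed: " + reply.cause().getMessage());
            }
        });
    }
}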
EDIT: There are other frameworks for doing microservices in Java. Dropwizard is one of them, but I can't say much more about it.

Wiping out embedded activemq data during testing

I'm actively using ActiveMQ in my project. Although production uses a standalone ActiveMQ instance, my tests require an embedded ActiveMQ instance. After execution of a particular test method, ActiveMQ holds unprocessed messages in queues. I'd like to wipe out the ActiveMQ instance after each test. I tried using JMX to connect to the local ActiveMQ instance and wipe out the queues, but it's a heavyweight solution. Could anyone suggest something more lightweight?
Just turn off broker persistence when you define the broker URL for your unit tests:
vm://localhost?broker.persistent=false
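For example, wiring a test ConnectionFactory that way (a sketch; TestJmsConfig is a made-up name and the factory would normally feed Spring's JMS infrastructure):

import javax.jms.ConnectionFactory;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TestJmsConfig {

    // The VM transport starts an embedded broker on first use;
    // broker.persistent=false keeps it entirely in memory, so every
    // test run starts from an empty broker.
    @Bean
    public ConnectionFactory connectionFactory() {
        return new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
    }
}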
ActiveMQ has an option to delete all messages on startup; if you use the XML way of configuring the ActiveMQ broker, you can set it on the <activemq:broker> tag:
<activemq:broker .... deleteAllMessagesOnStartup="true">
...
</activemq:broker>
Another approach could be to use unique data directories per unit test, which is what we do when unit testing the camel-jms component with an embedded ActiveMQ broker. We have a helper class that sets up ActiveMQ for us, depending on whether we need persistent queues or not:
https://git-wip-us.apache.org/repos/asf?p=camel.git;a=blob;f=components/camel-jms/src/test/java/org/apache/camel/component/jms/CamelJmsTestHelper.java;h=8c81f3e2bed738a75841988fd1239f54a100cd89;hb=HEAD
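In the same spirit, a minimal sketch of a per-test broker with its own name and data directory (the class name and paths are made up):

import java.util.UUID;

import org.apache.activemq.broker.BrokerService;

public class TestBrokerFactory {

    // A unique broker name and data directory per test means persistent
    // messages from one test can never leak into another.
    public static BrokerService startBroker() throws Exception {
        String testId = UUID.randomUUID().toString();
        BrokerService broker = new BrokerService();
        broker.setBrokerName("test-broker-" + testId);
        broker.setDataDirectory("target/activemq-data/" + testId);
        broker.setPersistent(true); // or false if the test doesn't need persistence
        broker.start();
        return broker;
    }
}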
I believe you want to purge the queue. There are several options for that.
https://activemq.apache.org/how-do-i-purge-a-queue.html
from the link
"You can use the Web Console to view queues, add/remove queues, purge queues or delete/forward individual messages. Another option is to use JMX to browse the queues and call the purge() method on the QueueViewMBean. You could also delete the queue via removeQueue(String) or removeTopic(String) methods on the BrokerViewMBean. You can also do it programmatically"
The link describes each option in detail.
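For the JMX option, a hedged sketch against a local broker (the ObjectName layout is the one used by ActiveMQ 5.8+; adjust brokerName and destinationName to match your setup):

import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public class QueuePurger {

    // Purges a queue on the in-JVM (embedded) broker via the platform MBean server.
    public static void purge(String queueName) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName queue = new ObjectName(
                "org.apache.activemq:type=Broker,brokerName=localhost"
                        + ",destinationType=Queue,destinationName=" + queueName);
        // Invokes the QueueViewMBean.purge() operation mentioned above.
        server.invoke(queue, "purge", null, null);
    }
}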
