JUnit 5 functional testing of a Micronaut messaging-driven application (Java)

I have a RabbitMQ Micronaut messaging-driven application. It contains only the consumer side; the producer side lives in another REST API application.
Now I want to write JUnit 5 tests for the consumer side only, and I am trying to find the best way to test a messaging-driven application that contains only the RabbitMQ listener:
@RabbitListener
public record CategoryListener(IRepository repository) {

    @Queue(ConstantValues.ADD_CATEGORY)
    public CategoryViewModel Create(CategoryViewModel model) {
        LOG.info("Listener --> Adding the product to the product collection");
        Category category = new Category(model.name(), model.description());
        return Single.fromPublisher(this.repository
                        .getCollection(ConstantValues.PRODUCT_CATEGORY_COLLECTION_NAME, Category.class)
                        .insertOne(category))
                .map(success -> new CategoryViewModel(
                        success.getInsertedId().asObjectId().getValue().toString(),
                        category.getName(),
                        category.getDescription()))
                .blockingGet();
    }
}
After some research, I found that we can use Testcontainers for integration testing. In my case, the producer and the receiver are on different servers. So do I need to create a @RabbitClient for each @RabbitListener in the test environment, or is there a way to mock the client?
@MicronautTest
@Testcontainers
public class CategoryListenerTest {

    @Container
    private static final RabbitMQContainer RABBIT_MQ_CONTAINER = new RabbitMQContainer("rabbitmq")
            .withExposedPorts(5672, 15672);

    @Test
    @DisplayName("Rabbit MQ container should be running")
    void rabbitMqContainerShouldBeRunning() {
        Assertions.assertTrue(RABBIT_MQ_CONTAINER.isRunning());
    }
}
What is the best way to perform functional tests of a Micronaut messaging-driven application? In my case the producer is in another application, so I cannot inject a producer client. How can I test this function on the listener side?

Create producers with @RabbitClient in the test source set, or use the RabbitMQ Java API directly.
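For instance, a producer that exists only in the test source set can target the listener's queue directly. This is only a minimal sketch, assuming the Micronaut RabbitMQ @RabbitClient/@Binding annotations and the ConstantValues and CategoryViewModel types from the question; whether you get an RPC-style reply depends on Micronaut RabbitMQ's direct reply-to support, so hedge accordingly:

import io.micronaut.context.annotation.Requires;
import io.micronaut.context.env.Environment;
import io.micronaut.rabbitmq.annotation.Binding;
import io.micronaut.rabbitmq.annotation.RabbitClient;

@Requires(env = Environment.TEST) // only loaded in the test environment
@RabbitClient                     // publishes to the default exchange, routed by the binding key
public interface CategoryTestClient {

    // Sends to the listener's queue; with a non-void return type the client should
    // wait for the listener's reply (RPC over direct reply-to). If that is not wanted,
    // declare void and assert on the repository state instead.
    @Binding(ConstantValues.ADD_CATEGORY)
    CategoryViewModel create(CategoryViewModel model);
}

In the @MicronautTest you can then inject this client next to the Testcontainers setup (pointing the RabbitMQ port property at the container's mapped port, e.g. via TestPropertyProvider) and assert on the returned view model or on the repository state.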

Related

Validate repository state after producing a request to a Kafka test container

I am creating an integration test with Kafka and Postgres test containers, such as:
@Slf4j
@SpringBootTest
@Testcontainers
@EnableKafka
@ContextConfiguration(
        initializers = {MyContainersInitializer.class} // test containers are initialized here
)
class MyIntegrationTest {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    private MyRepository myRepository;

    @Autowired
    private Environment env;

    @MethodSource("testCases")
    @ParameterizedTest(name = "#{index} {0}")
    public void myTest(MyTestcase myTestcase) {
        kafkaTemplate.send(
                env.getProperty(KEY_TOPIC),
                myTestcase.input()
        );
        // ... backoff ...
        assertEquals(myTestcase.expected(), myRepository.findById(myTestcase.input().getId()));
    }
}
I've confirmed that everything is processed correctly, i.e. the request is received by Kafka, processed asynchronously by the application, and inserted into the database. However, the test is unable to see those changes in its final step, even if I add a backoff period.
I've noticed that adding a @Transactional annotation to the application service does the trick; unfortunately I am not allowed to do that (I don't own that code), hence I was wondering if there is another way?
Thank you for your attention
I would advise against using @Transactional to solve this issue.
As the Kafka data will (possibly) be received on a different thread, you are better off testing that the message is received in one test, and testing that the receiving method itself saves the data in another test.
I would bet that the asynchronous nature of this will make the @Transactional approach flaky if you plan on doing just one end-to-end test.
You can also use a Mockito verify to see if the desired function has been called.
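If you do keep the single end-to-end test, polling the repository instead of sleeping for a fixed backoff usually removes the flakiness. A sketch of the final step with Awaitility, reusing the names from the question's test (kafkaTemplate, myRepository, myTestcase):

import static org.awaitility.Awaitility.await;
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.time.Duration;

// ... inside myTest(...), after kafkaTemplate.send(...): poll until the consumer
// has persisted the entity instead of relying on a fixed backoff
await().atMost(Duration.ofSeconds(10))
       .pollInterval(Duration.ofMillis(200))
       .untilAsserted(() ->
               assertEquals(myTestcase.expected(),
                            myRepository.findById(myTestcase.input().getId())));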

How to @Schedule a task according to a WebSocket event in Spring Boot

I would like to create an application in Java to automate trades in my Binance account. Thankfully, joaopsilva has made it easy through an open-source API that fetches candlesticks via a REST client or a WebSocket. I would like to use the WebSocket since it is lighter.
I searched in several sources and still I could not find an example project which uses the Spring Boot framework to build an event-driven application which interacts with a connected WebSocket.
If my line of reasoning is correct, I should define a bean so that Spring instantiates the WebSocket client:
@Configuration
public class WebSocketConfig {

    @Bean
    public BinanceApiWebSocketClient binanceApiWebSocketClient() {
        return BinanceApiClientFactory.newInstance().newWebSocketClient();
    }
}
To interact with the socket events, I created a @Scheduled method, using an arbitrary rate of once per second just for testing. I did it like this:
@Configuration
@EnableScheduling
public class SocketListener {

    @Autowired
    BinanceApiWebSocketClient client;

    @Scheduled(fixedRate = 1000)
    public void scheduleFixedDelayTask() {
        client.onCandlestickEvent("ethbtc", CandlestickInterval.ONE_MINUTE,
                response -> System.out.println(response));
    }
}
It works. If I launch the Spring application, it successfully configures the client bean and prints the candlestick events. However, every second, what I receive is an enormous chunk of events.
So I'm wondering: am I doing this correctly? Is there a way to set up the listener so that it does not receive chunks of events, but listens to the socket exactly when an event happens (instead of a fixed one-second rate, which of course causes unnecessary performance issues)?
If your question is about the correct place to start the event streaming through the websocket client defined as a bean, one of the options is an ApplicationRunner bean whose run() method will be executed once on application start:
@SpringBootApplication
public class BinanceClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(BinanceClientApplication.class, args);
    }

    @Bean
    public ApplicationRunner applicationRunner(BinanceApiWebSocketClient binanceApiWebSocketClient) {
        return args -> {
            binanceApiWebSocketClient.onCandlestickEvent("ethbtc",
                    CandlestickInterval.ONE_MINUTE,
                    response -> System.out.println(response));
        };
    }
}

@Configuration
public class WebSocketConfig {

    @Bean
    public BinanceApiWebSocketClient binanceApiWebSocketClient() {
        return BinanceApiClientFactory.newInstance().newWebSocketClient();
    }
}
With the scheduled task, a new event stream is started on every task execution, so you end up with multiple streams of the same events.

Using StateRestoreListener with Spring Cloud Kafka Streams binder

I'm going to use StateRestoreListener with Spring Cloud Kafka Streams binder.
I need to monitor the restoration progress of fault-tolerant state stores of my applications.
There is an example in the Confluent documentation: https://docs.confluent.io/current/streams/monitoring.html#streams-monitoring-runtime-status .
In order to observe the restoration of all state stores you provide your application an instance of the org.apache.kafka.streams.processor.StateRestoreListener interface. You set the org.apache.kafka.streams.processor.StateRestoreListener by calling the KafkaStreams#setGlobalStateRestoreListener method.
The first problem is getting the KafkaStreams instance from the app. I solved this by using:
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
The second problem is setting the StateRestoreListener on the KafkaStreams instance, because I get this error:
java.lang.IllegalStateException: Can only set GlobalStateRestoreListener in CREATED state. Current state is: RUNNING
Is it possible to use StateRestoreListener in Spring Cloud Kafka Streams binder?
Thanks
You can do that by using a StreamsBuilderFactoryBeanCustomizer, which gives you access to the underlying KafkaStreams object. If you are using binder version 3.0 or above, this is the recommended approach. For example, you can provide the following bean in your application and customize it with the GlobalStateRestoreListener.
@Bean
public StreamsBuilderFactoryBeanCustomizer streamsBuilderFactoryBeanCustomizer() {
    return factoryBean -> {
        factoryBean.setKafkaStreamsCustomizer(new KafkaStreamsCustomizer() {
            @Override
            public void customize(KafkaStreams kafkaStreams) {
                kafkaStreams.setGlobalStateRestoreListener(...);
            }
        });
    };
}
This blog has more details on this strategy.
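For reference, the object passed to setGlobalStateRestoreListener just implements the three callbacks of org.apache.kafka.streams.processor.StateRestoreListener; a minimal logging sketch might look like this:

import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.StateRestoreListener;

// Minimal sketch: logs restoration progress; an instance of this would be passed
// to kafkaStreams.setGlobalStateRestoreListener(...) in the customizer above.
public class LoggingStateRestoreListener implements StateRestoreListener {

    @Override
    public void onRestoreStart(TopicPartition partition, String storeName,
                               long startingOffset, long endingOffset) {
        System.out.printf("Restoring %s: offsets %d..%d%n", storeName, startingOffset, endingOffset);
    }

    @Override
    public void onBatchRestored(TopicPartition partition, String storeName,
                                long batchEndOffset, long numRestored) {
        System.out.printf("Restored %d records for %s%n", numRestored, storeName);
    }

    @Override
    public void onRestoreEnd(TopicPartition partition, String storeName, long totalRestored) {
        System.out.printf("Finished restoring %s: %d records in total%n", storeName, totalRestored);
    }
}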

Connect to message broker with Spring cloud stream from test

There are articles on how to test Spring Cloud Stream applications without connecting to a messaging system, using spring-cloud-stream-test-support. But I want to really connect to RabbitMQ from my integration test, and I cannot do that. Here is the test class:
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@EnableBinding(Source.class)
public class StreamIT {

    @Autowired
    private Source source;

    @Test
    public void testMessageSending() throws InterruptedException {
        source.output().send(MessageBuilder.withPayload("test").build());
        System.out.println("Message sent.");
    }
}
Everything is the same as in the @SpringBootApplication; they use the same properties from application.yml.
But there is no log line indicating that the message was sent (o.s.a.r.c.CachingConnectionFactory : Created new connection: SpringAMQP@22e79d25:0/SimpleConnection@5cce3ab6 [delegate=amqp://guest@127.0.1.1:5672/, localPort=60934]), and even if the broker is not started, there is no java.net.ConnectException: Connection refused (Connection refused).
Am I doing something wrong? What is needed to create a real connection to the broker and send a message from the test?
Since you are using the @SpringBootTest annotation in your test, Spring Boot will evaluate all available auto-configurations.
If you have the spring-cloud-stream-test-support dependency on your test classpath, then the following auto-configurations will also be evaluated:
org.springframework.cloud.stream.test.binder.TestSupportBinderAutoConfiguration
org.springframework.cloud.stream.test.binder.MessageCollectorAutoConfiguration
As a result, you have only one binder in the application context: org.springframework.cloud.stream.test.binder.TestSupportBinder. As its name suggests, it does no real binding.
Excluding or removing the spring-cloud-stream-test-support dependency from the test classpath is a dubious solution, since it forces you to create two separate modules for unit and integration tests.
If you want to exclude the previously mentioned auto-configurations in your test, you can do it as follows:
@RunWith(SpringRunner.class)
@SpringBootTest
@EnableAutoConfiguration(exclude = {TestSupportBinderAutoConfiguration.class, MessageCollectorAutoConfiguration.class})
public class StreamIT {
EDIT
You need to remove the test-support jar from the pom. Its presence (in test scope) is what triggers replacing the real binder with a test binder.
After removing the test binder support, this works fine for me...
@RunWith(SpringRunner.class)
@SpringBootTest
public class So49816044ApplicationTests {

    @Autowired
    private Source source;

    @Autowired
    private AmqpAdmin admin;

    @Autowired
    private RabbitTemplate template;

    @Test
    public void test() {
        // bind an autodelete queue to the destination exchange
        Queue queue = this.admin.declareQueue();
        this.admin.declareBinding(new Binding(queue.getName(), DestinationType.QUEUE, "mydest", "#", null));
        this.source.output().send(new GenericMessage<>("foo"));
        this.template.setReceiveTimeout(10_000);
        Message received = template.receive(queue.getName());
        assertThat(received.getBody()).isEqualTo("foo".getBytes());
    }
}
Although there is no Rabbit sample, there is a Kafka sample that uses a real (embedded) Kafka binder for testing; the test jar is excluded there, although it doesn't explicitly say that this is needed.

Spring Boot and Akka cluster actor dependency injection not working

I am trying to use Spring Boot and Akka. I have two processes that communicate via Akka Cluster. Only process A uses Spring Boot.
@Autowired
private ActorSystem springActorSystem;

@Autowired
private SpringExtension springExtension;

private ActorRef caActor;

// ...
caActor = springActorSystem.actorOf(springExtension.props("clientAgentActor"), "ca");
If I create the actor on process A using springExtension, all injections work, of course. However, caActor is a cluster actor. If process B sends a message to process A, the ClientAgentActor is invoked somewhere, and all injections fail.
How can I solve this?
@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class ClientAgentActor extends AbstractActor {

    private static final Logger logger = LogManager.getLogger(ClientAgentActor.class);

    @Autowired
    ClientAgentService caService;

    @Autowired
    LineService lineService;

    @Override
    public Receive createReceive() {
        //TODO
        return receiveBuilder().match(String.class, msg -> logger.debug(msg)).build();
    }
}
I thought about this for almost a whole day, and I think there is no way to integrate Spring into an Akka cluster with full DI for cross-cluster calls without changing the core of Akka Cluster.
When you make a call without the cluster, inside one JVM, you use the Spring-aware Akka wrapper instead of pure Akka.
But when you make a call across the cluster, the call is received on the other node by the pure Akka infrastructure without the Spring wrapper, so that infrastructure doesn't know about the Spring actor proxies; that's why you see no injections.
So if you need Spring in an Akka cluster, you would have to wrap the core of the library with Spring infrastructure. Besides being hard to implement, it would also make it hard to follow Akka's rules and conventions in your application architecture; for example, it would be too easy to inject a transitive dependency that contains blocking calls or multithreading code.
If you need some Spring functionality, I think the best approach is to fully separate the Akka infrastructure from the Spring one: after application initialization, set a global static field holding the created ApplicationContext and make applicationContext.getBean(...) calls where they are needed. Of course you can write a convenience method for this, or for example a class with public static fields for the needed beans, set once after Spring initialization completes.
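A minimal sketch of that static-holder idea (the class name and methods are illustrative, not part of Spring or Akka):

import org.springframework.context.ApplicationContext;

// Illustrative holder: populated once after the Spring context is ready, then used
// by actors that are instantiated outside Spring's control (e.g. by cluster messaging).
public final class SpringContextHolder {

    private static volatile ApplicationContext context;

    private SpringContextHolder() {
    }

    public static void set(ApplicationContext applicationContext) {
        context = applicationContext;
    }

    public static <T> T getBean(Class<T> type) {
        return context.getBean(type);
    }
}

Inside the actor, dependencies are then looked up with SpringContextHolder.getBean(ClientAgentService.class) instead of relying on field injection.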
