I am attempting to write a Spring service that subscribes to an external, read-only STOMP broker and reads/processes the messages it publishes.
The messages are pushed to the topic "/topic/TRAIN_MVT_ALL_TOC" by a rail company. I can successfully connect, but I can't seem to set up a listener for its messages.
I have set up a Spring @Configuration class to connect to it, and after running the application it appears to connect correctly.
I've also created the message handling routine, using the @MessageMapping annotation to listen to the particular topic I'm interested in ("TRAIN_MVT_ALL_TOC"). The problem is that it never seems to get called.
Configuration class code:
@Configuration
@EnableWebSocketMessageBroker
public class StompConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/TRAIN_MVT_ALL_TOC").withSockJS();
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.setApplicationDestinationPrefixes("/app");
        registry.enableStompBrokerRelay("/topic")
                .setRelayHost("datafeeds.networkrail.co.uk")
                .setRelayPort(61618)
                .setSystemLogin("MYEMAIL")
                .setSystemPasscode("MYPASSWORD");
    }
}
Message handler code:
@MessageMapping("/TRAIN_MVT_ALL_TOC")
public void onMessage(@Payload String message) throws Exception {
    System.out.println(message);
}
The following log entry is output to the console, indicating that the connection was successful.
o.s.w.s.c.WebSocketMessageBrokerStats : WebSocketSession[0 current WS(0)-HttpStream(0)-HttpPoll(0), 0 total, 0 closed abnormally (0 connect failure, 0 send limit, 0 transport error)], stompSubProtocol[processed CONNECT(0)-CONNECTED(0)-DISCONNECT(0)], stompBrokerRelay[1 sessions, ReactorNettyTcpClient[TcpClient: connecting to datafeeds.networkrail.co.uk:61618] (available), processed CONNECT(1)-CONNECTED(1)-DISCONNECT(0)], inboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], outboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], sockJsScheduler[pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
The message never gets printed however. I have been trying to get to the bottom of this one for a few days now so any help would be hugely appreciated.
The messages are pushed to the topic "/topic/TRAIN_MVT_ALL_TOC" by a rail company. I can successfully connect to the topic, but can't seem to be able to instantiate a listener to its messages.
You mean a listener on the client side, for example a SockJS client?
@MessageMapping("/TRAIN_MVT_ALL_TOC")
public void onMessage(@Payload String message) throws Exception {
    System.out.println(message);
}
You do not return anything; you need to send the result to a topic, like this:
@MessageMapping("/TRAIN_MVT_ALL_TOC")
@SendTo("/topic/TRAIN_MVT_ALL_TOC")
public Greeting onMessage(HelloMessage message) throws Exception {
    return new Greeting("hello");
}
Or, if your constructor has a SimpMessageSendingOperations parameter (it should be autowired by Spring Boot itself), you can send multiple messages to the same topic like this:
@Autowired
public Constructor(SimpMessageSendingOperations messagingTemplate) {
    this.messagingTemplate = messagingTemplate;
}

@MessageMapping(WebSockets.READER_MAPPING)
public void streamOverWebsocket(HelloMessage message) throws Throwable {
    String topicUrl = "/topic/TRAIN_MVT_ALL_TOC";
    messagingTemplate.convertAndSend(topicUrl, new Message("response 1"));
    messagingTemplate.convertAndSend(topicUrl, new Message("response 2"));
    ...
}
It's also best to wrap your incoming and outgoing messages in a defined class. That way it's easier to serialize and deserialize them.
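For example, a minimal payload class might look like this (a sketch only; the class and field names are made up for illustration, and Jackson needs the no-arg constructor and getters/setters to convert it):

public class TrainMovementMessage {

    private String body;

    // no-arg constructor required for JSON deserialization
    public TrainMovementMessage() {
    }

    public TrainMovementMessage(String body) {
        this.body = body;
    }

    public String getBody() {
        return body;
    }

    public void setBody(String body) {
        this.body = body;
    }
}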
sources:
https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/messaging/simp/SimpMessageSendingOperations.html
https://spring.io/guides/gs/messaging-stomp-websocket/
Is it possible to log retry attempts on the client side with resilience4j, perhaps via some kind of configuration or settings?
Currently I am using resilience4j with Spring Boot WebFlux, annotation based.
It is working great; the project is amazing.
We have logs on the server side showing that the same HTTP call was made because of a retry (we log time, client IP, request ID, etc.). Would it be possible to have client-side logs as well?
I was expecting to see something like: "Resilience4j - client side: 1st attempt failed because of someException, retrying with attempt number 2. 2nd attempt failed because of someException, retrying with attempt number 3. 3rd attempt successful!"
Something like that. Is there a property, some config, or some setup that can help do this easily, without adding too much boilerplate code?
@RestController
public class TestController {

    private final WebClient webClient;

    public TestController(WebClient.Builder webClientBuilder) {
        this.webClient = webClientBuilder.baseUrl("http://localhost:8443/serviceBgreeting").build();
    }

    @GetMapping("/greeting")
    public Mono<String> greeting() {
        System.out.println("Greeting method is invoked ");
        return someRestCall();
    }

    @Retry(name = "greetingRetry")
    public Mono<String> someRestCall() {
        return this.webClient.get().retrieve().bodyToMono(String.class);
    }
}
Thank you
Fortunately (or unfortunately) there is an undocumented feature :)
You can add a RegistryEventConsumer Bean in order to add event consumers to any Retry instance.
@Bean
public RegistryEventConsumer<Retry> myRetryRegistryEventConsumer() {
    return new RegistryEventConsumer<Retry>() {

        @Override
        public void onEntryAddedEvent(EntryAddedEvent<Retry> entryAddedEvent) {
            entryAddedEvent.getAddedEntry().getEventPublisher()
                    .onEvent(event -> LOG.info(event.toString()));
        }

        @Override
        public void onEntryRemovedEvent(EntryRemovedEvent<Retry> entryRemoveEvent) {
        }

        @Override
        public void onEntryReplacedEvent(EntryReplacedEvent<Retry> entryReplacedEvent) {
        }
    };
}
The log entries look as follows:
2020-10-26T13:00:19.807034700+01:00[Europe/Berlin]: Retry 'backendA', waiting PT0.1S until attempt '1'. Last attempt failed with exception 'org.springframework.web.client.HttpServerErrorException: 500 This is a remote exception'.
2020-10-26T13:00:19.912028800+01:00[Europe/Berlin]: Retry 'backendA', waiting PT0.1S until attempt '2'. Last attempt failed with exception 'org.springframework.web.client.HttpServerErrorException: 500 This is a remote exception'.
2020-10-26T13:00:20.023250+01:00[Europe/Berlin]: Retry 'backendA' recorded a failed retry attempt. Number of retry attempts: '3'. Giving up. Last exception was: 'org.springframework.web.client.HttpServerErrorException: 500 This is a remote exception'.
There seems to be a lot of information about this on the web if you Google for "resilience4j retry example logging". I found this as a potential solution:
RetryConfig config = RetryConfig.ofDefaults();
RetryRegistry registry = RetryRegistry.of(config);
Retry retry = registry.retry("flightSearchService", config);
...
Retry.EventPublisher publisher = retry.getEventPublisher();
publisher.onRetry(event -> System.out.println(event.toString()));
where you can register a callback to get an event whenever a retry occurs. This came from https://reflectoring.io/retry-with-resilience4j.
Configured with application.properties, and using the @Retry annotation, I managed to get some output with:
resilience4j.retry.instances.myRetry.maxAttempts=3
resilience4j.retry.instances.myRetry.waitDuration=1s
resilience4j.retry.instances.myRetry.enableExponentialBackoff=true
resilience4j.retry.instances.myRetry.exponentialBackoffMultiplier=2
resilience4j.retry.instances.myRetry.retryExceptions[0]=java.lang.Exception
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

import io.github.resilience4j.retry.RetryRegistry;
import io.github.resilience4j.retry.annotation.Retry;

@Service
public class MyService {

    private static final Logger LOG = LoggerFactory.getLogger(MyService.class);

    public MyService(RetryRegistry retryRegistry) {
        // all retries
        retryRegistry.getAllRetries()
                .forEach(retry -> retry
                        .getEventPublisher()
                        .onRetry(event -> LOG.info("{}", event))
                );

        // or a single retry
        retryRegistry
                .retry("myRetry")
                .getEventPublisher()
                .onRetry(event -> LOG.info("{}", event));
    }

    @Retry(name = "myRetry")
    public void doSomething() {
        throw new RuntimeException("It failed");
    }
}
e.g.
2021-03-31T07:42:23 [http-nio-8083-exec-1] INFO [myService] - 2021-03-31T07:42:23.228892500Z[UTC]: Retry 'myRetry', waiting PT1S until attempt '1'. Last attempt failed with exception 'java.lang.RuntimeException: It failed'.
2021-03-31T07:42:24 [http-nio-8083-exec-1] INFO [myService] - 2021-03-31T07:42:24.231504600Z[UTC]: Retry 'myRetry', waiting PT2S until attempt '2'. Last attempt failed with exception 'java.lang.RuntimeException: It failed'.
So I'm diving deeper into the world of JMS.
I am writing some dummy projects right now to understand how to consume messages. I am using ActiveMQ Artemis as the message broker.
Whilst following a tutorial, I stumbled upon something in terms of consuming messages. What exactly is the difference between registering a message listener to listen for messages and using the @JmsListener annotation?
This is what I have so far:
public class Receiver {

    @JmsListener(containerFactory = "jmsListenerContainerFactory", destination = "helloworld .q")
    public void receive(String message) {
        System.out.println("received message='" + message + "'.");
    }
}
@Configuration
@EnableJms
public class ReceiverConfig {

    @Value("${artemis.broker-url}")
    private String brokerUrl;

    @Bean
    public ActiveMQConnectionFactory activeMQConnectionFactory() {
        return new ActiveMQConnectionFactory(brokerUrl);
    }

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(activeMQConnectionFactory());
        factory.setConcurrency("3-10");
        return factory;
    }

    @Bean
    public DefaultMessageListenerContainer orderMessageListenerContainer() {
        SimpleJmsListenerEndpoint endpoint = new SimpleJmsListenerEndpoint();
        endpoint.setMessageListener(new StatusMessageListener("DMLC"));
        endpoint.setDestination("helloworld.q"); // Try renaming this and see what happens.
        return jmsListenerContainerFactory().createListenerContainer(endpoint);
    }

    @Bean
    public Receiver receiver() {
        return new Receiver();
    }
}
public class StatusMessageListener implements MessageListener {

    public StatusMessageListener(String dmlc) {
    }

    @Override
    public void onMessage(Message message) {
        System.out.println("In the onMessage().");
        System.out.println(message);
    }
}
From what I've read, we register a message listener with the listener container, which in turn is created by the listener container factory. So essentially the flow is this:
DefaultJmsListenerContainerFactory -> creates -> DefaultMessageListenerContainer -> registers a message listener which is used to listen to messages from the configured endpoint.
From my research, I've gathered that MessageListeners are used to asynchronously consume messages from queues/topics, whilst the @JmsListener annotation is used to synchronously listen to messages?
Furthermore, there are a few other listener container factories out there, such as DefaultJmsListenerContainerFactory and SimpleJmsListenerContainerFactory, but I'm not sure I get the difference. I was reading https://codenotfound.com/spring-jms-listener-example.html and what I've gathered from that is that the Default factory uses a pull model, which suggests it's async, so why would it matter whether we consume the message via a MessageListener or the annotation? I'm a bit confused and muddled up, so I'd like my doubts to be cleared up. Thanks!
This is a snippet of the output when sending 100 dummy messages (I just noticed it's not printing the even-numbered messages..):
received message='This the 95 message.'.
In the onMessage().
ActiveMQMessage[ID:006623ca-d42a-11ea-a68e-648099ad9459]:PERSISTENT/ClientMessageImpl[messageID=24068, durable=true, address=helloworld.q,userID=006623ca-d42a-11ea-a68e-648099ad9459,properties=TypedProperties[__AMQ_CID=00651257-d42a-11ea-a68e-648099ad9459,_AMQ_ROUTING_TYPE=1]]
received message='This the 97 message.'.
In the onMessage().
ActiveMQMessage[ID:006ba214-d42a-11ea-a68e-648099ad9459]:PERSISTENT/ClientMessageImpl[messageID=24088, durable=true, address=helloworld.q,userID=006ba214-d42a-11ea-a68e-648099ad9459,properties=TypedProperties[__AMQ_CID=0069cd51-d42a-11ea-a68e-648099ad9459,_AMQ_ROUTING_TYPE=1]]
received message='This the 99 message.'.
The following configuration
@Configuration
@EnableJms
public class ReceiverConfig {
    //your config code here..
}
would ensure that every time a Message is received on the Destination named "helloworld .q", Receiver.receive() is called with the content of the message.
You can read more here: https://docs.spring.io/spring/docs
I'm sending messages to IBM MQ with a correlationId (unique for each message). Then I want to read that specific message, with its specific correlationId, from the output queue, and I want it to be non-blocking so I can use it in a Java WebFlux controller.
I'm wondering if there is a way to do this without a lot of pain? Options like jmsTemplate.receiveSelected(...) are blocking, while creating a bean implementing the MessageListener interface doesn't provide a way to select messages by a dynamic selector (the correlationId is unique for each message).
You could use a Spring MessageListener to retrieve all messages and connect it to the controller with Mono.create(...) and your own event listener which triggers the result Mono.
// Consumes message and trigger result Mono
public interface MyEventListener extends Consumer<MyOutputMessage> {}
A class to route incoming messages to the correct MyEventListener:
public class MyMessageProcessor {

    // You could use an in-memory cache here if you need TTL etc.
    private static final ConcurrentHashMap<String, MyEventListener> REGISTRY = new ConcurrentHashMap<>();

    public void register(String correlationId, MyEventListener listener) {
        MyEventListener oldListener = REGISTRY.putIfAbsent(correlationId, listener);
        if (oldListener != null)
            throw new IllegalStateException("Correlation ID collision!");
    }

    public void unregister(String correlationId) {
        REGISTRY.remove(correlationId);
    }

    public void accept(String correlationId, MyOutputMessage myOutputMessage) {
        Optional.ofNullable(REGISTRY.get(correlationId))
                .ifPresent(listener -> listener.accept(myOutputMessage));
    }
}
Webflux controller
private final MyMessageProcessor messageProcessor;
....

@PostMapping("/process")
Mono<MyOutputMessage> process(Mono<MyInputMessage> inputMessage) {
    String correlationId = ...; // generate correlationId
    // then send the message asynchronously
    return Mono.<MyOutputMessage>create(sink ->
            // create and save a MyEventListener which calls MonoSink.success
            messageProcessor.register(correlationId, sink::success))
        // define a timeout if you don't want to wait forever
        .timeout(...)
        // clean up the MyEventListener after success, error or cancel
        .doFinally(ignored -> messageProcessor.unregister(correlationId));
}
And in the onMessage of your JMS MessageListener implementation you could call:
messageProcessor.accept(correlationId, myOutputMessage);
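For completeness, the listener side might look roughly like this. This is only a sketch: it assumes the reply arrives as a javax.jms.TextMessage, that the correlation ID is carried in the standard JMSCorrelationID header, and that MyOutputMessage can be built from the text body; none of that is shown above.

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class MyReplyListener implements MessageListener {

    private final MyMessageProcessor messageProcessor;

    public MyReplyListener(MyMessageProcessor messageProcessor) {
        this.messageProcessor = messageProcessor;
    }

    @Override
    public void onMessage(Message message) {
        try {
            String correlationId = message.getJMSCorrelationID();
            // assumes the reply body is plain text
            String body = ((TextMessage) message).getText();
            // hand the converted payload to the registered MyEventListener, if any
            messageProcessor.accept(correlationId, new MyOutputMessage(body));
        } catch (JMSException e) {
            throw new RuntimeException("Failed to read reply message", e);
        }
    }
}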
You can find a similar example for Flux in the Reactor 3 reference guide.
Environment
Spring Boot: 1.5.13.RELEASE
Cloud: Edgware.SR3
Cloud AWS: 1.2.2.RELEASE
Java 8
OSX 10.13.4
Problem
I am trying to write an integration test for SQS.
I have a local running localstack docker container with SQS running on TCP/4576
In my test code I define an SQS client with the endpoint set to local 4576 and can successfully connect and create a queue, send a message and delete a queue. I can also use the SQS client to receive messages and pick up the message that I sent.
My problem is that if I remove the code that manually receives the message, in order to allow another component to pick it up instead, nothing seems to happen. I have a Spring component annotated as follows:
Listener
@Component
public class MyListener {

    @SqsListener(value = "my_queue", deletionPolicy = ON_SUCCESS)
    public void receive(final MyMsg msg) {
        System.out.println("GOT THE MESSAGE: " + msg.toString());
    }
}
Test
@RunWith(SpringRunner.class)
@SpringBootTest(properties = "spring.profiles.active=test")
public class MyTest {

    @Autowired
    private AmazonSQSAsync amazonSQS;

    @Autowired
    private SimpleMessageListenerContainer container;

    private String queueUrl;

    @Before
    public void setUp() {
        queueUrl = amazonSQS.createQueue("my_queue").getQueueUrl();
    }

    @After
    public void tearDown() {
        amazonSQS.deleteQueue(queueUrl);
    }

    @Test
    public void name() throws InterruptedException {
        amazonSQS.sendMessage(new SendMessageRequest(queueUrl, "hello"));
        System.out.println("isRunning:" + container.isRunning());
        System.out.println("isActive:" + container.isActive());
        System.out.println("isRunningOnQueue:" + container.isRunning("my_queue"));
        Thread.sleep(30_000);
        System.out.println("GOT MESSAGE: " + amazonSQS.receiveMessage(queueUrl).getMessages().size());
    }

    @TestConfiguration
    @EnableSqs
    public static class SQSConfiguration {

        @Primary
        @Bean(destroyMethod = "shutdown")
        public AmazonSQSAsync amazonSQS() {
            final AwsClientBuilder.EndpointConfiguration endpoint =
                    new AwsClientBuilder.EndpointConfiguration("http://127.0.0.1:4576", "eu-west-1");
            return new AmazonSQSBufferedAsyncClient(AmazonSQSAsyncClientBuilder
                    .standard()
                    .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("key", "secret")))
                    .withEndpointConfiguration(endpoint)
                    .build());
        }
    }
}
In the test logs I see:
o.s.c.a.m.listener.QueueMessageHandler : 1 message handler methods found on class MyListener: {public void MyListener.receive(MyMsg)=org.springframework.cloud.aws.messaging.listener.QueueMessageHandler$MappingInformation#1cd4082a}
2018-05-31 22:50:39.582 INFO 16329 --- o.s.c.a.m.listener.QueueMessageHandler : Mapped "org.springframework.cloud.aws.messaging.listener.QueueMessageHandler$MappingInformation#1cd4082a" onto public void MyListener.receive(MyMsg)
Followed by:
isRunning:true
isActive:true
isRunningOnQueue:false
GOT MESSAGE: 1
This demonstrates that during the 30-second pause after sending the message the container didn't pick it up, and that when I manually poll for the message it is still on the queue and I can consume it.
My question is: why isn't the listener being invoked, and why is the isRunningOnQueue:false line suggesting that it hasn't been auto-started for that queue?
Note that I also tried setting up my own SimpleMessageListenerContainer bean with autostart explicitly set to true (the default anyway) and observed no change in behaviour. I thought that the org.springframework.cloud.aws.messaging.config.annotation.SqsConfiguration#simpleMessageListenerContainer that is set up by @EnableSqs ought to configure an auto-started SimpleMessageListenerContainer that should be polling for my message.
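For reference, an explicit override of that kind can also be done through spring-cloud-aws's SimpleMessageListenerContainerFactory. This is only a sketch of the general shape, not the exact bean I used, and the wait-time value is arbitrary:

@Bean
public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory(AmazonSQSAsync amazonSQS) {
    SimpleMessageListenerContainerFactory factory = new SimpleMessageListenerContainerFactory();
    factory.setAmazonSqs(amazonSQS);
    factory.setAutoStartup(true); // the default, shown here for clarity
    factory.setWaitTimeOut(20);   // seconds of SQS long polling per receive call
    return factory;
}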
I have also set
logging.level.org.apache.http=DEBUG
logging.level.org.springframework.cloud=DEBUG
in my test properties and can see the HTTP calls that create the queue, send a message, delete it and so on, but no HTTP calls to receive (apart from my manual one at the end of the test).
I figured this out after some tinkering.
Even if the simple message container factory is set to not auto start, it seems to do its initialisation anyway, which involves determining whether the queue exists.
In this case, the queue is created in my test's setup method - but sadly this runs after the Spring context is set up, which means an exception occurs.
I fixed this by simply moving the queue creation into the creation of the SQS client bean (which happens before the message container is created), i.e.:
@Bean(destroyMethod = "shutdown")
public AmazonSQSAsync amazonSQS() {
    final AwsClientBuilder.EndpointConfiguration endpoint =
            new AwsClientBuilder.EndpointConfiguration("http://localhost:4576", "eu-west-1");
    final AmazonSQSBufferedAsyncClient client = new AmazonSQSBufferedAsyncClient(AmazonSQSAsyncClientBuilder
            .standard()
            .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("dummyKey", "dummySecret")))
            .withEndpointConfiguration(endpoint)
            .build());
    client.createQueue("test-queue");
    return client;
}
Here is a small Spring program that is expected to insert a message into a RabbitMQ queue:
public class Main {

    public static void main(String[] args) throws IOException {
        AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(QueueConfiguration.class);
        AmqpTemplate template = context.getBean(AmqpTemplate.class);
        template.convertAndSend("asdflk ...");
        context.destroy();
    }
}
The ApplicationContext is as follows:
@Configuration
public class QueueConfiguration {

    @Bean
    public ConnectionFactory connectionFactory() {
        return new CachingConnectionFactory("192.168.1.39");
    }

    @Bean
    public RabbitTemplate rabbitTemplate() {
        return new RabbitTemplate(connectionFactory());
    }
}
When I check the contents of the queues on the server, nothing gets inserted. I also tried to set the name of the exchange or the name of the queue on the RabbitTemplate, but still nothing shows up on the server.
The application log does not show any errors, but it does log this:
17:28:02.441 [main] DEBUG o.s.amqp.rabbit.core.RabbitTemplate - Executing callback on RabbitMQ Channel: Cached Rabbit Channel: AMQChannel(amqp://guest#192.168.1.39:5672/,1)
17:28:02.441 [main] DEBUG o.s.amqp.rabbit.core.RabbitTemplate - Publishing message on exchange [], routingKey = []
Any ideas what's wrong?
I had to give the queue name as a parameter in the call to convertAndSend():
template.convertAndSend("hello2", "asdflk ...");
I'm still wondering why spring-amqp would not throw an exception. Does anybody know where the messages are delivered when no queue is given?
I think I will keep the routing key and queue name in the rabbitTemplate() bean, as in the spring-amqp example. Since I am currently working with multiple queues, I have a different class for each queue, in which I have a rabbitTemplate like this:
@Bean
public RabbitTemplate rabbitTemplate() {
    RabbitTemplate template = new RabbitTemplate(connectionFactory());
    // The routing key = name of the queue in the default exchange.
    template.setRoutingKey("MyQueue");
    // Queue name
    template.setQueue("MyQueue");
    return template;
}
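With the routing key and queue configured on the template, the original single-argument call should then route to MyQueue via the default exchange (assuming the queue already exists on the broker):

AmqpTemplate template = context.getBean(AmqpTemplate.class);
// no destination argument needed: the template's default routing key ("MyQueue") is used
template.convertAndSend("asdflk ...");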
Are you using Tomcat to deploy this? If so, these beans can be loaded at startup, which will initialize all the connections/channels/queues etc. as well.