I currently have an SQS listener implemented in a Spring Boot project running on Fargate.
It's possible that under the hood, the SqsAsyncClient, which appears to be a listener, is actually polling.
Separately, as a PoC, I implemented a Lambda function trigger on a different queue. It would be invoked when there are items in the queue and would post to my service. This seems unnecessarily complex to me, but it removes a single point of failure if I were to only have one instance of the service.
I guess my major point of confusion is whether I am needlessly worrying about polling vs listening on an SQS queue, and whether it matters.
Code for example purposes:
@Component
@Slf4j
@RequiredArgsConstructor
public class SqsListener {

    private final SqsAsyncClient sqsAsyncClient;
    private final Environment environment;
    private final SmsMessagingServiceImpl smsMessagingService;

    @PostConstruct
    public void continuousListener() {
        String queueUrl = environment.getProperty("aws.sqs.sms.queueUrl");
        Mono<ReceiveMessageResponse> responseMono = receiveMessage(queueUrl);
        Flux<Message> messages = getItems(responseMono);
        messages.subscribe(message -> disposeOfFlux(message, queueUrl));
    }

    protected Flux<Message> getItems(Mono<ReceiveMessageResponse> responseMono) {
        return responseMono.repeat().retry()
                .map(ReceiveMessageResponse::messages)
                .map(Flux::fromIterable)
                .flatMap(messageFlux -> messageFlux);
    }

    protected void disposeOfFlux(Message message, String queueUrl) {
        log.info("Inbound SMS Received from SQS with MessageId: {}", message.messageId());
        if (someConditionIsMet())
            deleteMessage(queueUrl, message);
    }

    protected Mono<ReceiveMessageResponse> receiveMessage(String queueUrl) {
        return Mono.fromFuture(() -> sqsAsyncClient.receiveMessage(
                ReceiveMessageRequest.builder()
                        .maxNumberOfMessages(5)
                        .messageAttributeNames("All")
                        .queueUrl(queueUrl)
                        .waitTimeSeconds(10)
                        .visibilityTimeout(30)
                        .build()));
    }

    protected void deleteMessage(String queueUrl, Message message) {
        sqsAsyncClient.deleteMessage(DeleteMessageRequest.builder()
                        .queueUrl(queueUrl)
                        .receiptHandle(message.receiptHandle())
                        .build())
                .thenAccept(deleteMessageResponse -> log.info("deleted message with handle {}", message.receiptHandle()));
    }
}
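For what it's worth, SQS only exposes a pull model: every consumer, including the reactive SqsAsyncClient setup above, ultimately issues ReceiveMessage calls in a loop, and waitTimeSeconds merely turns each call into a long poll. A minimal stdlib-only sketch of that loop shape, using a hypothetical FakeQueue stand-in for the real client (names here are illustrative, not an AWS API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical stand-in for SqsAsyncClient.receiveMessage: an async
// receive that, like an SQS long poll, may return an empty batch.
interface FakeQueue {
    CompletableFuture<List<String>> receive();
}

class PollLoop {
    // "Listening" to SQS is really this: poll repeatedly, process each
    // batch, poll again. Here we stop on the first empty batch; a real
    // consumer would loop forever.
    static List<String> drain(FakeQueue queue) {
        List<String> seen = new ArrayList<>();
        while (true) {
            List<String> batch = queue.receive().join();
            if (batch.isEmpty()) return seen;
            seen.addAll(batch);
        }
    }
}
```

The `repeat().retry()` chain in the listener above is doing the same thing reactively: resubscribing after each response to issue the next poll.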
Related
I have an SQS queue with a max receives value of 3 and a default visibility timeout of 30 seconds.
Currently I'm listening to its messages using the @SqsListener annotation, which works fine.
Now I want to implement exponential backoff for retries on this queue.
The only pointer I found in this direction in the AWS documentation is to use ClientConfiguration.
But I'm not able to find any example of how to use it.
I thought SimpleMessageListenerContainer might provide a setter to apply a backoff strategy, but it only provides a way to set a timer.
If there are any examples showing how to add backoff to a Spring SQS listener, that would be great.
Exponential backoff can be achieved in a Spring SQS listener by using a custom error handler in combination with the SimpleMessageListenerContainer.
@EnableScheduling
@Configuration
public class ExponentialBackoffSqsListener {

    private QueueMessagingTemplate queueMessagingTemplate;
    private String queueUrl;

    @Autowired
    public ExponentialBackoffSqsListener(AmazonSQSAsync amazonSqs, String queueUrl) {
        this.queueMessagingTemplate = new QueueMessagingTemplate(amazonSqs);
        this.queueUrl = queueUrl;
    }

    @SqsListener(value = "${queue.name}")
    public void receiveMessage(String message) {
        // Your business logic goes here
    }

    @Bean
    public SimpleMessageListenerContainer messageListenerContainer() {
        SimpleMessageListenerContainer listenerContainer = new SimpleMessageListenerContainer();
        listenerContainer.setAmazonSqs(this.queueMessagingTemplate.getAmazonSqs());
        listenerContainer.setErrorHandler(t -> {
            if (RetryUtils.isRetryableServiceException(t)) {
                RetryPolicy retryPolicy = RetryUtils.getDefaultRetryPolicy();
                int backoffTime = retryPolicy.getBackoffStrategy().computeBackoffInMilliseconds(retryPolicy.getRetryCondition().getRetryCount());
                // Schedule a retry for the failed message after the backoff time.
                // Note: "message" is not in scope in this handler as written; the
                // failed payload would need to be recovered from the exception or
                // tracked separately.
                scheduleRetry(backoffTime, message);
            }
        });
        listenerContainer.setQueueUrl(this.queueUrl);
        return listenerContainer;
    }

    private void scheduleRetry(int backoffTime, String message) {
        // Re-enqueue the message after the backoff delay
        // (a shared executor would avoid creating a new thread pool per retry)
        new ScheduledThreadPoolExecutor(1).schedule(() -> {
            this.queueMessagingTemplate.convertAndSend(this.queueUrl, message);
        }, backoffTime, TimeUnit.MILLISECONDS);
    }
}
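The backoff arithmetic itself is independent of any listener machinery. A minimal stdlib sketch of capped exponential backoff (the base and cap constants are illustrative, not SDK defaults):

```java
// Capped exponential backoff: delay = min(cap, base * 2^attempt).
// Production code would usually add random jitter on top of this so
// that retries from many consumers don't synchronize.
final class Backoff {
    static final long BASE_MS = 100;
    static final long CAP_MS = 20_000;

    static long delayMillis(int attempt) {
        // Clamp the shift so the multiplication cannot overflow a long.
        long uncapped = BASE_MS * (1L << Math.min(attempt, 32));
        return Math.min(CAP_MS, uncapped);
    }
}
```

With these constants, attempts 0, 1, 2, 3 wait 100, 200, 400, 800 ms, and everything from attempt 8 onwards is capped at 20 seconds.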
We have a Java application that sends messages over a STOMP connection over WebSocket using Spring Boot messaging support. The data should be sent to specific users once they connect and subscribe to the topic, but when we reload the page the websocket breaks and never sends any messages again.
We listen for the SessionSubscribeEvent here (so we can send an initial message after subscription is made):
@Component
@AllArgsConstructor
public class TransactionSubscriptionListener implements ApplicationListener<SessionSubscribeEvent> {

    private static final String DESTINATION_HEADER = "simpDestination";

    private final RegionTransactionSender regionTransactionSender;

    @Override
    public void onApplicationEvent(SessionSubscribeEvent subscribeEvent) {
        Object simpDestination = subscribeEvent.getMessage().getHeaders().get(DESTINATION_HEADER);
        if (simpDestination == null) {
            return;
        }
        String destination = String.valueOf(simpDestination);
        if (destination.matches(RegionTransactionSender.REGEXP)) {
            regionTransactionSender.send();
        }
    }
}
Region transaction sender implementation:
@Component
@AllArgsConstructor
public class RegionTransactionSender {

    public static final String REGEXP =
            ApiVersionConstants.TRANSACTIONS_FOR_REGION_DESTINATION_WITH_SUBSCRIBER + "/\\S*";
    private static final String TOPIC_URL_PREFIX = ApiVersionConstants.TRANSACTIONS_FOR_REGION_DESTINATION + "/";

    private final SimpMessageSendingOperations sendingOperations;
    private final TransactionService transactionService;
    private final SimpUserRegistry simpUserRegistry;

    public void send() {
        Set<SimpUser> users = simpUserRegistry.getUsers();
        users.stream()
                .filter(SimpUser::hasSessions)
                .forEach(this::sendToSubscriptions);
    }

    private void sendToSubscriptions(SimpUser user) {
        user.getSessions().forEach(session -> session.getSubscriptions()
                .forEach(subscription -> sendToTopics(user, subscription)));
    }

    private void sendToTopics(final SimpUser user, final SimpSubscription subscription) {
        String destination = subscription.getDestination();
        if (destination.matches(REGEXP)) {
            Optional<String> regionOptional = WebsocketUtils.retrieveOrganizationRegionFromDestination(destination);
            regionOptional.ifPresent(region -> sendForRegionTopic(user, region));
        }
    }

    private void sendForRegionTopic(final SimpUser user, final String region) {
        Set<TransactionResponse> transactionsForRegion = transactionService
                .getTransactionsForRegion(AbstractWebsocketSender.TRANSACTIONS_COUNT, region);
        sendingOperations.convertAndSendToUser(user.getName(), TOPIC_URL_PREFIX + region, transactionsForRegion);
    }
}
The send() method is called later on but no messages are sent.
Messages visible in Chrome's network debugging tool
As you can see, our other websocket (systemBalanceSummary) works great. The difference is that systemBalanceSummary sends messages to a non-user-specific destination.
It's also worth mentioning that when we access the website for the first time everything works fine.
Why does that websocket break when we reload the page?
EDIT
After some debugging we've found out that even though the subscription event is fired, there are no users in SimpUserRegistry, but we do not know what causes that.
I have found a solution for this.
First, you need to implement SimpUserRegistry yourself instead of using DefaultSimpUserRegistry. The reason is that DefaultSimpUserRegistry seems to add the user only after SessionConnectedEvent is triggered, and at that point the session is not always connected. I changed that so the user is added after SessionConnectEvent.
This resolves the problem of not having users in the user registry after a reload. If this is not a problem for you, you can probably skip it.
After that I changed the usage of the convertAndSendToUser method. In the code provided in the question, data is sent to the username. I changed that to send the data to the sessionId, and also added some headers. Here is the code for that:
private void sendForRegionTopic(final String region, final String sessionId) {
    Set<TransactionResponse> transactionsForRegion = transactionService
            .getTransactionsForRegion(AbstractWebsocketSender.TRANSACTIONS_COUNT, region);
    sendingOperations.convertAndSendToUser(sessionId,
            TOPIC_URL_PREFIX + region,
            transactionsForRegion,
            createHeaders(sessionId));
}

private MessageHeaders createHeaders(final String sessionId) {
    SimpMessageHeaderAccessor accessor = SimpMessageHeaderAccessor.create(SimpMessageType.MESSAGE);
    accessor.setSessionId(sessionId);
    accessor.setLeaveMutable(true);
    return accessor.getMessageHeaders();
}
I'm sending messages to IBM MQ with a correlationId (unique for each message). Then I want to read this concrete message with its specific correlationId from the output queue, and I want the read to be non-blocking so it can be used in a Java WebFlux controller.
I'm wondering if there is a way to do this without a lot of pain? Options like jmsTemplate.receiveSelected(...) are blocking, while creating a bean implementing the MessageListener interface doesn't provide a way to select messages by a dynamic selector (the correlationId is unique for each message).
You could use a Spring MessageListener to retrieve all messages and connect it to the controller via Mono.create(...) and your own event listener which triggers the result Mono.
// Consumes a message and triggers the result Mono
public interface MyEventListener extends Consumer<MyOutputMessage> {}
Class to route incoming messages to the correct MyEventListener:
public class MyMessageProcessor {

    // You could use an in-memory cache here if you need TTL etc.
    private static final ConcurrentHashMap<String, MyEventListener> REGISTRY
            = new ConcurrentHashMap<>();

    public void register(String correlationId, MyEventListener listener) {
        MyEventListener oldListener = REGISTRY.putIfAbsent(correlationId, listener);
        if (oldListener != null)
            throw new IllegalStateException("Correlation ID collision!");
    }

    public void unregister(String correlationId) {
        REGISTRY.remove(correlationId);
    }

    public void accept(String correlationId, MyOutputMessage myOutputMessage) {
        Optional.ofNullable(REGISTRY.get(correlationId))
                .ifPresent(listener -> listener.accept(myOutputMessage));
    }
}
Webflux controller
private final MyMessageProcessor messageProcessor;
....

@PostMapping("/process")
Mono<MyOutputMessage> process(Mono<MyInputMessage> inputMessage) {
    String correlationId = ...; // generate correlationId
    // then send message asynchronously
    return Mono.<MyOutputMessage>create(sink ->
            // create and save MyEventListener which calls MonoSink.success
            messageProcessor.register(correlationId, sink::success))
        // define timeout if you don't want to wait forever
        .timeout(...)
        // clean up MyEventListener after success, error or cancel
        .doFinally(ignored -> messageProcessor.unregister(correlationId));
}
And into onMessage of your JMS MessageListener implementation you could call
messageProcessor.accept(correlationId, myOutputMessage);
You can find a similar example for Flux in the Reactor 3 reference guide.
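If you would rather see the rendezvous without the Reactor dependency, the same registry idea can be sketched with stdlib CompletableFutures (class and method names here are illustrative): the requester parks a future under its correlationId, and the JMS listener completes it.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Stdlib-only sketch of the correlation registry: register() plays the
// role of Mono.create + register, complete() plays the role of the
// onMessage -> accept call.
final class CorrelationRegistry<T> {
    private final Map<String, CompletableFuture<T>> pending = new ConcurrentHashMap<>();

    CompletableFuture<T> register(String correlationId) {
        CompletableFuture<T> future = new CompletableFuture<>();
        if (pending.putIfAbsent(correlationId, future) != null)
            throw new IllegalStateException("Correlation ID collision!");
        // Clean up the registry however the future finishes
        // (success, exception, or cancellation).
        future.whenComplete((result, error) -> pending.remove(correlationId));
        return future;
    }

    void complete(String correlationId, T result) {
        CompletableFuture<T> future = pending.get(correlationId);
        if (future != null) future.complete(result);
    }
}
```

The `whenComplete` cleanup mirrors the `doFinally(... unregister ...)` in the WebFlux controller above; without it, abandoned correlationIds would accumulate forever.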
Environment
Spring Boot: 1.5.13.RELEASE
Cloud: Edgware.SR3
Cloud AWS: 1.2.2.RELEASE
Java 8
OSX 10.13.4
Problem
I am trying to write an integration test for SQS.
I have a local running localstack docker container with SQS running on TCP/4576
In my test code I define an SQS client with the endpoint set to local port 4576 and can successfully connect, create a queue, send a message, and delete the queue. I can also use the SQS client to receive messages and pick up the message that I sent.
My problem is that if I remove the code that manually receives the message, so that another component can pick it up instead, nothing seems to happen. I have a Spring component annotated as follows:
Listener
@Component
public class MyListener {

    @SqsListener(value = "my_queue", deletionPolicy = ON_SUCCESS)
    public void receive(final MyMsg msg) {
        System.out.println("GOT THE MESSAGE: " + msg.toString());
    }
}
Test
@RunWith(SpringRunner.class)
@SpringBootTest(properties = "spring.profiles.active=test")
public class MyTest {

    @Autowired
    private AmazonSQSAsync amazonSQS;

    @Autowired
    private SimpleMessageListenerContainer container;

    private String queueUrl;

    @Before
    public void setUp() {
        queueUrl = amazonSQS.createQueue("my_queue").getQueueUrl();
    }

    @After
    public void tearDown() {
        amazonSQS.deleteQueue(queueUrl);
    }

    @Test
    public void name() throws InterruptedException {
        amazonSQS.sendMessage(new SendMessageRequest(queueUrl, "hello"));
        System.out.println("isRunning:" + container.isRunning());
        System.out.println("isActive:" + container.isActive());
        System.out.println("isRunningOnQueue:" + container.isRunning("my_queue"));
        Thread.sleep(30_000);
        System.out.println("GOT MESSAGE: " + amazonSQS.receiveMessage(queueUrl).getMessages().size());
    }

    @TestConfiguration
    @EnableSqs
    public static class SQSConfiguration {

        @Primary
        @Bean(destroyMethod = "shutdown")
        public AmazonSQSAsync amazonSQS() {
            final AwsClientBuilder.EndpointConfiguration endpoint = new AwsClientBuilder.EndpointConfiguration("http://127.0.0.1:4576", "eu-west-1");
            return new AmazonSQSBufferedAsyncClient(AmazonSQSAsyncClientBuilder
                    .standard()
                    .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("key", "secret")))
                    .withEndpointConfiguration(endpoint)
                    .build());
        }
    }
}
In the test logs I see:
o.s.c.a.m.listener.QueueMessageHandler : 1 message handler methods found on class MyListener: {public void MyListener.receive(MyMsg)=org.springframework.cloud.aws.messaging.listener.QueueMessageHandler$MappingInformation#1cd4082a}
2018-05-31 22:50:39.582  INFO 16329 --- o.s.c.a.m.listener.QueueMessageHandler : Mapped "org.springframework.cloud.aws.messaging.listener.QueueMessageHandler$MappingInformation#1cd4082a" onto public void MyListener.receive(MyMsg)
Followed by:
isRunning:true
isActive:true
isRunningOnQueue:false
GOT MESSAGE: 1
This demonstrates that during the 30-second pause after sending the message the container didn't pick it up, and when I manually poll for the message it is still on the queue and I can consume it.
My question is: why isn't the listener being invoked, and why does the isRunningOnQueue:false line suggest that it's not auto-started for that queue?
Note that I also tried setting my own SimpleMessageListenerContainer bean with autostart explicitly set to true (the default anyway) and observed no change in behaviour. I thought that the org.springframework.cloud.aws.messaging.config.annotation.SqsConfiguration#simpleMessageListenerContainer that is set up by @EnableSqs ought to configure an auto-started SimpleMessageListenerContainer that should be polling for my message.
I have also set
logging.level.org.apache.http=DEBUG
logging.level.org.springframework.cloud=DEBUG
in my test properties and can see the HTTP calls that create the queue, send a message, delete it, etc., but no HTTP calls to receive (apart from my manual one at the end of the test).
I figured this out after some tinkering.
Even if the simple message container factory is set not to auto start, it seems to do its initialisation anyway, which involves determining whether the queue exists.
In this case, the queue is created in my test in the setUp method, but sadly this runs after the Spring context is set up, which means that an exception occurs.
I fixed this by simply moving the queue creation into the creation of the SQS client bean (which happens before the message container is created), i.e.:
@Bean(destroyMethod = "shutdown")
public AmazonSQSAsync amazonSQS() {
    final AwsClientBuilder.EndpointConfiguration endpoint = new AwsClientBuilder.EndpointConfiguration("http://localhost:4576", "eu-west-1");
    final AmazonSQSBufferedAsyncClient client = new AmazonSQSBufferedAsyncClient(AmazonSQSAsyncClientBuilder
            .standard()
            .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("dummyKey", "dummySecret")))
            .withEndpointConfiguration(endpoint)
            .build());
    client.createQueue("test-queue");
    return client;
}
I'm playing around with reactive patterns in a Java (8) Spring Boot (1.5.2.RELEASE) application with Akka (2.5.1). It's coming along nicely but now I'm stuck trying to run a CompletableFuture from an actor. To simulate this I have created a very simple service that returns a CompletableFuture. However, when I then try to return the result to the calling controller I get errors about dead-letters and no response is returned.
The error I am getting is:
[INFO] [05/05/2017 13:12:25.650] [akka-spring-demo-akka.actor.default-dispatcher-5] [akka://akka-spring-demo/deadLetters] Message [java.lang.String] from Actor[akka://akka-spring-demo/user/$a#-1561144664] to Actor[akka://akka-spring-demo/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
Here is my code. This is the controller calling the actor:
@Component
@Produces(MediaType.TEXT_PLAIN)
@Path("/")
public class AsyncController {

    @Autowired
    private ActorSystem system;

    private ActorRef getGreetingActorRef() {
        ActorRef greeter = system.actorOf(SPRING_EXTENSION_PROVIDER.get(system)
                .props("greetingActor"));
        return greeter;
    }

    @GET
    @Path("/foo")
    public void test(@Suspended AsyncResponse asyncResponse, @QueryParam("echo") String echo) {
        ask(getGreetingActorRef(), new Greet(echo), 1000)
                .thenApply((greet) -> asyncResponse.resume(Response.ok(greet).build()));
    }
}
Here is the service:
@Component
public class GreetingService {

    public CompletableFuture<String> greetAsync(String name) {
        return CompletableFuture.supplyAsync(() -> "Hello, " + name);
    }
}
Then here is the actor receiving the call. At first I had this:
@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class GreetingActor extends AbstractActor {

    @Autowired
    private GreetingService greetingService;

    @Autowired
    private ActorSystem system;

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(Greet.class, this::onGreet)
                .build();
    }

    private void onGreet(Greet greet) {
        greetingService.greetAsync(greet.getMessage())
                .thenAccept((greetingResponse) -> getSender().tell(greetingResponse, getSelf()));
    }
}
This resulted in 2 calls being handled correctly but after that I would get dead-letter errors. Then I read here what was probably causing my problems:
http://doc.akka.io/docs/akka/2.5.1/java/actors.html
Warning
When using future callbacks, inside actors you need to carefully avoid closing over the containing actor’s reference, i.e. do not call methods or access mutable state on the enclosing actor from within the callback. This would break the actor encapsulation and may introduce synchronization bugs and race conditions because the callback will be scheduled concurrently to the enclosing actor. Unfortunately there is not yet a way to detect these illegal accesses at compile time. See also: Actors and shared mutable state
So I figured the idea is that you pipe the result to self() after which you can do getSender().tell(response, getSelf()).
So I altered my code to this:
@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class GreetingActor extends AbstractActor {

    @Autowired
    private GreetingService greetingService;

    @Autowired
    private ActorSystem system;

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(Greet.class, this::onGreet)
                .match(String.class, this::onGreetingCompleted)
                .build();
    }

    private void onGreet(Greet greet) {
        pipe(greetingService.greetAsync(greet.getMessage()), system.dispatcher()).to(getSelf());
    }

    private void onGreetingCompleted(String greetingResponse) {
        getSender().tell(greetingResponse, getSelf());
    }
}
The onGreetingCompleted method is being called with the response from the GreetingService, but at that point I again get the dead-letters error, so for some reason it can't send the response back to the calling controller.
Note that if I change the service to this:
@Component
public class GreetingService {

    public String greet(String name) {
        return "Hello, " + name;
    }
}
And the onGreet in the actor to:
private void onGreet(Greet greet) {
    getSender().tell(greetingService.greet(greet.getMessage()), getSelf());
}
Then everything works fine. So it would appear that my basic Java/Spring/Akka setup is correct; the problems only start when calling a CompletableFuture from my actor.
Any help would be much appreciated, thanks!
The getSender method only reliably returns the ref of the sender during the synchronous processing of the message.
In your first case, you have:
greetingService.greetAsync(greet.getMessage())
.thenAccept((greetingResponse) -> getSender().tell(greetingResponse, getSelf()));
This means that getSender() is invoked asynchronously, once the future completes, which is no longer reliable. You can change that to:
ActorRef sender = getSender();
greetingService.greetAsync(greet.getMessage())
.thenAccept((greetingResponse) -> sender.tell(greetingResponse, getSelf()));
In your second example, you have
pipe(greetingService.greetAsync(greet.getMessage()), system.dispatcher()).to(getSelf());
You are piping the response to getSelf(), i.e. your worker actor. The original sender will never get anything (thus the ask times out). You can fix that by changing it to:
pipe(greetingService.greetAsync(greet.getMessage()), system.dispatcher()).to(getSender());
In the third case, getSender() is executed synchronously during the processing of the message, which is why it works.
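The same capture rule can be demonstrated without Akka at all. In this stdlib sketch (names are illustrative), the mutable currentSender field plays the role of the actor's sender reference: a callback that reads the field sees whatever value it holds at completion time, while a callback that captured a local variable sees the value from when the message was being processed.

```java
import java.util.concurrent.CompletableFuture;

class SenderCapture {
    String currentSender; // mutated as each new "message" arrives

    // Bug: the lambda reads the field only when the future completes,
    // by which time a later message may have overwritten it.
    CompletableFuture<String> replyWrong(CompletableFuture<String> work) {
        return work.thenApply(result -> currentSender + " <- " + result);
    }

    // Fix: snapshot the sender while processing is still synchronous,
    // then let the lambda capture the (effectively final) local.
    CompletableFuture<String> replyRight(CompletableFuture<String> work) {
        String sender = currentSender;
        return work.thenApply(result -> sender + " <- " + result);
    }
}
```

This is exactly the `ActorRef sender = getSender();` fix shown above: capture the value into a local before the asynchronous callback runs.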