AWS SQS temporary queues not being deleted on app shutdown - java

We are attempting to use the aws sqs temporary queues library for synchronous communication between two of our apps. One app uses an AmazonSQSRequester while the other uses an AmazonSQSResponder; both are created using the builders from the library and wired in as Spring beans in the app config. Through the AWS console we create an SQS queue to act as the 'host queue' required for the request/response pattern. The requesting app sends to this queue, and the responding app uses an SQSMessageConsumer to poll the queue and pass messages to the AmazonSQSResponder. Part of how the library works (I'm fairly sure) is that the Requester spins up a temporary SQS queue (a real, static one), then sends that queue URL as an attribute in a message to the Responder, which posts its response there.
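For reference, the two sides are wired up roughly like this (a sketch based on the library's builder examples; the host queue URL, timeout, and message bodies are placeholders):

// Requester app: send to the host queue and block for the reply, which
// arrives on a temporary queue the library creates behind the scenes.
AmazonSQSRequester requester = AmazonSQSRequesterClientBuilder.defaultClient();
Message reply = requester.sendMessageAndGetResponse(
        new SendMessageRequest()
                .withQueueUrl(hostQueueUrl)
                .withMessageBody("request"),
        20, TimeUnit.SECONDS);

// Responder app: poll the host queue and post each response back to the
// temporary queue named in the request's attributes.
AmazonSQSResponder responder = AmazonSQSResponderClientBuilder.defaultClient();
SQSMessageConsumer consumer = new SQSMessageConsumer(
        AmazonSQSClientBuilder.defaultClient(), hostQueueUrl,
        message -> responder.sendResponseMessage(
                MessageContent.fromMessage(message),
                new MessageContent("response")));
consumer.start();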
Communications between the apps work fine and temporary queues are automatically created. The issue is that when the Requester app shuts down, the temporary queue (now orphaned and useless) persists, when it should be cleaned up by the library. Information on how we're expecting this cleanup to work can be found in this AWS post:
The Temporary Queue Client addresses this issue as well. For each host queue with recent API calls, the client periodically uses the TagQueue API action to attach a fresh tag value that indicates the queue is still being used. The tagging process serves as a heartbeat to keep the queue alive. According to a configurable time period (by default, 5 minutes), a background thread uses the ListQueues API action to obtain the URLs of all queues with the configured prefix. Then, it deletes each queue that has not been tagged recently.
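In other words, something like the following is happening inside the client (a rough sketch using plain SDK v1 calls; the tag name, prefix, and threshold here are made up, and the real client's internals differ):

import java.util.Collections;
import java.util.Map;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class QueueSweeperSketch {
    private static final String HEARTBEAT_TAG = "lastHeartbeat"; // made-up tag name
    private static final long IDLE_MILLIS = 5 * 60 * 1000; // the "5 minutes" default

    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

    // Each live client periodically re-tags its queues as a heartbeat.
    public void heartbeat(String queueUrl) {
        sqs.tagQueue(queueUrl, Collections.singletonMap(
                HEARTBEAT_TAG, Long.toString(System.currentTimeMillis())));
    }

    // A background thread lists queues with the configured prefix and
    // deletes any whose heartbeat tag has gone stale.
    public void sweep(String queuePrefix) {
        for (String queueUrl : sqs.listQueues(queuePrefix).getQueueUrls()) {
            Map<String, String> tags = sqs.listQueueTags(queueUrl).getTags();
            String last = tags.get(HEARTBEAT_TAG);
            if (last == null
                    || System.currentTimeMillis() - Long.parseLong(last) > IDLE_MILLIS) {
                sqs.deleteQueue(queueUrl);
            }
        }
    }
}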
The problem we are having is that when we kill the Requester app, unexplained messages appear in the temporary/response queue. We are unsure which app puts them there. Messages left in the queue prevent the automatic cleanup from happening. The unexplained messages all share the same content, a short string:
.rO0ABXA=

This looks like it was logged as a bug with the library: https://github.com/awslabs/amazon-sqs-java-temporary-queues-client/issues/11 (the string above appears to be the Base64 of a Java serialization stream header followed by a null, i.e. a serialized null payload). Will hopefully be fixed soon!

Related

How can I monitor EWS SOAP messages relating to subscription creation

We have a Spring Java app using EWS to connect to our on-prem Exchange 2016 server and pull emails via streaming subscriptions. Every 30 minutes a new 30-minute subscription is made (on a new thread). We assume the old connection simply expires.
When one instance is running in our environment it works perfectly fine, but when two instances run, after some time one instance will eventually start throwing this error:
You have exceeded the available concurrent connections for your account. Try again once your other requests have completed.
It looks like connections are accumulating until throttling kicks in. I found that the Exchange server's config is:
EWSMaxConcurrency=27, MaxStreamingConcurrency=10, HangingConnectionLimit=10
Our code previously didn't explicitly close connections or unsubscribe (it ran fine that way with a single instance). We tried adding both, but the issue still persists, and we noticed that the close method of StreamingSubscriptionConnection throws an error. The team that handles the Exchange server can find errors referencing the exceeded-connection-count error above, but nothing relating to the close-connection error:
...[m.e.w.d.n.StreamingSubscriptionConnection.close(349)]: java.lang.Exception: microsoft.exchange.webservices.data.notification.StreamingSubscriptionConnection
Currently we don't have much ability to make changes on the Exchange server side. I'm not familiar with SOAP messages, but I was planning to look into how to monitor them to see what inbound and outbound messages there are, for some insight.
For the service I set service.setTraceEnabled(true) and service.setTraceFlags(EnumSet.allOf(TraceFlags.class)).
However, I only see trace messages in the console when an email arrives. I don't see any messages during startup when a subscription/connection is created.
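For reference, the trace output can be routed to a custom listener instead of the console, which makes it easier to capture everything (a sketch using ews-java-api types; the listener below just prints every trace, which should include the raw SOAP request/response payloads):

import java.util.EnumSet;
import microsoft.exchange.webservices.data.core.ExchangeService;
import microsoft.exchange.webservices.data.core.enumeration.misc.TraceFlags;
import microsoft.exchange.webservices.data.misc.ITraceListener;

public class TraceSetup {
    static ExchangeService buildTracedService() {
        ExchangeService service = new ExchangeService();
        service.setTraceEnabled(true);
        service.setTraceFlags(EnumSet.allOf(TraceFlags.class));
        // Route traces to a custom listener; trace types include the raw
        // SOAP payloads, so subscription calls should appear here too.
        service.setTraceListener(new ITraceListener() {
            @Override
            public void traceMessage(String traceType, String traceMessage) {
                System.out.println(traceType + ": " + traceMessage);
            }
        });
        return service;
    }
}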
Can anyone provide advice on how I can monitor these subscription-related messages?
I tried using SoapUI but I'm having difficulty applying our server's WSDL. I considered using the Tunnelij plugin for IntelliJ, but I'm not too familiar with how to set that up either.
My suspicion is that there is some intermittent latency issue on the Exchange server side; perhaps response messages are not coming back in a timely manner, and this is what's breaking things. I presume that if I monitor these SOAP messages I should see more than 10 subscribe requests before that error appears.
The EWS logs on the CAS (Client Access Server) should have details about the throttling issue. Are you using Impersonation in your application? If you are not using Impersonation, the concurrent connections are charged against the account you are authenticating with; with Impersonation, they are charged against the account you are impersonating. The difference here is that a single user can have no more than 10 streaming subscriptions (unless you modify the web.config), whereas if you use Impersonation you can scale your application to thousands of users; see https://github.com/MicrosoftDocs/office-developer-exchange-docs/blob/main/docs/exchange-web-services/how-to-maintain-affinity-between-group-of-subscriptions-and-mailbox-server.md
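For reference, impersonation with ews-java-api is set per service instance, something like this (the SMTP address is a placeholder):

ExchangeService service = new ExchangeService();
// Subsequent calls, including subscriptions, are charged against this
// mailbox rather than the service account.
service.setImpersonatedUserId(
        new ImpersonatedUserId(ConnectingIdType.SmtpAddress, "user@example.com"));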

Delete Mail using camel without the consumer

Hi all,
in the software I'm developing, I have different Camel routes that work on data which is (in this case) loaded from an IMAP server using the camel-mail component.
Each of those routes does something with the data and then passes it on to the next route. They are dynamically configured at runtime.
In between those routes sits an embedded ActiveMQ server, which each route uses to load data from and save data to (for the next route to pick up).
Because of this structure I'm having a special case with the camel-mail consumer.
When a mail is loaded and sent to the first ActiveMQ queue, it is immediately deleted/marked as read (depending on the settings on the mail consumer), even though the actual processing of the mail has not concluded yet, as the next routes still have to process it.
This is a simplified view:
from("imaps://imap.server.com?...")
// Format mail in a way the other routes understand
.to("activemq:queue1"); // After this the mail is delete on the imap server
from("activemq:queue1")
// do some processing
.to("activemq:queue2");
from("activemq:queue2")
// Do some final processing
.to("..."); // NOW the mail should be delete on the imap server
This is even more of a problem with the error handling I do.
Every route in this "chain" sends failed exchanges to a dead letter queue on the ActiveMQ server. That way there is one error handling route, which picks up the failed exchanges and deals with them, no matter where the crash happened.
In case there is a problem, I want the email on the IMAP server to be handled differently (maybe even left untouched, to be retried on the next poll).
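For context, the error handling in each route is presumably wired roughly like this (a sketch; the queue name is a placeholder):

// Inside each RouteBuilder: failed exchanges go to a shared dead letter
// queue instead of being retried in place.
errorHandler(deadLetterChannel("activemq:deadLetterQueue"));

// One central route picks the failed exchanges up, no matter where they crashed.
from("activemq:deadLetterQueue")
    .to("...");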
As Camel's InOut MEP returns the exchange to the (mail) consumer when the route ends, i.e. when the exchange is handed to the queue, I can't use the consumer to delete the mails after the whole process has ended.
Unfortunately I also don't see a delete option on the mail producer (which makes sense, I guess, because that's not how IMAP works).
I could also use SMTP for this if that's necessary.
Does anybody have an idea how I could achieve this using no other connector than the Camel component to connect to the mail server?
Greets and thanks in advance
Chris
Edit:
Adding the parameter exchangePattern=InOut to the JMS endpoints (.to("activemq:queue1?exchangePattern=InOut")) makes the mail component wait for the whole process to finish.
The problem with that is that we lose the big advantage of ActiveMQ: all routes being independent of each other. This is important so we don't run into issues with consuming the mail when a later route takes a long time to process, which is very likely to happen.
So ideally we'd find a solution where the mail is deleted without any component waiting for something to finish.

Is there a Java pattern for a process to constantly run to poll or listen for messages off a queue and process them?

Planning on moving a lot of our single-threaded synchronous batch-processing jobs to a more distributed architecture with workers. The thought is to have a master process read records off the database and send them off to a queue, then have multiple workers read off the queue and process the records in parallel.
Is there any well-known Java pattern for a simple CLI/batch job that constantly runs to poll/listen for messages on queues? I'd like to use that for all the workers. Or is there a better way to do this? Should the listener/worker be deployed in an app container, or can it just be a standalone program?
thanks
Edit: also to note, I'm not looking to use JavaEE/JMS, but rather hosted solutions like SQS, a hosted RabbitMQ, or IronMQ.
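For concreteness, the kind of standalone worker I have in mind would be something like this (a sketch against SQS with the AWS SDK v1; the queue URL and record handling are placeholders):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class Worker {
    public static void main(String[] args) {
        String queueUrl = args[0]; // placeholder: the work queue's URL
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        while (true) {
            // Long polling: blocks up to 20s instead of busy-spinning
            ReceiveMessageRequest request = new ReceiveMessageRequest(queueUrl)
                    .withWaitTimeSeconds(20)
                    .withMaxNumberOfMessages(10);
            for (Message message : sqs.receiveMessage(request).getMessages()) {
                process(message.getBody());
                // Delete only after successful processing, so a crash
                // lets the message reappear for another worker
                sqs.deleteMessage(queueUrl, message.getReceiptHandle());
            }
        }
    }

    private static void process(String record) {
        // domain-specific record processing goes here
    }
}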
If you're using a JavaEE application server (and if not, you should), you don't have to program that logic by hand, since the application server does it for you.
You then implement and deploy a message-driven bean that listens to a queue and processes the messages it receives. The application server manages a connection pool to listen for queue messages and creates a thread with an instance of your message-driven bean, which receives the message and processes it.
The messages are processed concurrently, since the application server has both a connection pool and a thread pool available for listening to the queue.
All JavaEE application servers, such as IBM WebSphere or JBoss, have configuration available in their admin consoles to create message queue listeners for a given message queue implementation and to bind those listeners to your message-driven bean.
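A minimal sketch of such a message-driven bean (the queue name and record handling are placeholders):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// The application server binds this bean to the destination and manages
// the connection/thread pools; deliveries run concurrently across
// pooled bean instances.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/RecordQueue")
})
public class RecordWorkerBean implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                process(((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }

    private void process(String record) {
        // domain-specific record processing goes here
    }
}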
I don't know a lot about this, and I may not really be answering your question, but I tried something a few months ago that might interest you for dealing with message queues.
You can have a look at this: http://www.rabbitmq.com/getstarted.html
It seems the Work Queue pattern could fit your requirements.

How to set up RabbitMQ RPC in a web context

RabbitMQ RPC
I decided to use RabbitMQ RPC as described here.
My Setup
Incoming web requests (on Tomcat) will dispatch RPC requests over RabbitMQ to different services and assemble the results. I use one reply queue with one custom consumer that listens for all RPC responses and collects them by correlation id in a simple hash map. Nothing fancy there.
This works great in a simple integration test on controller level.
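That collector is essentially the following (a sketch with the RabbitMQ Java client; class and field names are made up):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

// One consumer on the shared reply queue; responses are matched to
// pending requests by correlation id.
class RpcResponseCollector extends DefaultConsumer {
    private final ConcurrentHashMap<String, CompletableFuture<byte[]>> pending =
            new ConcurrentHashMap<>();

    RpcResponseCollector(Channel channel) {
        super(channel);
    }

    // Register interest before publishing the request.
    CompletableFuture<byte[]> expect(String correlationId) {
        CompletableFuture<byte[]> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        return future;
    }

    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) {
        CompletableFuture<byte[]> future = pending.remove(properties.getCorrelationId());
        if (future != null) {
            future.complete(body);
        }
    }
}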
Problem
When I try to do this in a web project deployed on Tomcat, Tomcat refuses to shut down. jstack and some debugging showed me that a thread is spawned to listen for the RPC responses and is blocking Tomcat from shutting down gracefully. I guess this is because the thread is created at application level instead of request level and is not managed by Tomcat. When I set breakpoints in Servlet.destroy() or ServletContextListener.contextDestroyed(ServletContextEvent sce), they are not reached, so I see no way to clean things up manually.
Alternative
As an alternative, I could use a new reply queue (and a simple QueueingConsumer) for each web request. I've tested this; it works, and Tomcat shuts down as it should. But I'm wondering if this is the way to go. Can a RabbitMQ cluster deal with thousands (or even millions) of short-lived queues/consumers? I imagine the queues aren't that big, but still: constantly broadcasting to all cluster nodes, the total memory footprint...
Question
So in short: is it wise to create a queue for each incoming web request, or how should I set up RabbitMQ with one queue and consumer so that Tomcat can shut down gracefully?
I found a solution for my problem:
The Java client creates its own threads. There is the possibility to supply your own ExecutorService when creating a new connection. Doing so in the ServletContextListener.contextInitialized() method, one can keep track of the ExecutorService and shut it down manually in the ServletContextListener.contextDestroyed() method:
executorService.shutdown();
executorService.awaitTermination(20, TimeUnit.SECONDS);
I used Executors.newCachedThreadPool(), as the threads have many short executions and get cleaned up after being idle for more than 60s.
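Put together, the listener looks roughly like this (a sketch; class and attribute names are made up):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

// Own the consumer threads so Tomcat can shut down cleanly.
public class RabbitLifecycleListener implements ServletContextListener {

    private ExecutorService executorService;
    private Connection connection;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        try {
            executorService = Executors.newCachedThreadPool();
            ConnectionFactory factory = new ConnectionFactory();
            // The client runs its consumer callbacks on our executor
            // instead of threads it creates itself.
            connection = factory.newConnection(executorService);
            sce.getServletContext().setAttribute("rabbitConnection", connection);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        try {
            connection.close();
            executorService.shutdown();
            executorService.awaitTermination(20, TimeUnit.SECONDS);
        } catch (Exception e) {
            // best effort on shutdown
        }
    }
}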
This is the link to the RabbitMQ Google group thread (thanks to Michael Klishin for pointing me in the right direction).

Client failure detection in client-server systems (distributed)

Assume a distributed communication system where client and server communicate via a stateless channel.
The client sends requests to the server and the server does processing and keeps internal records for each client.
The server sends notifications back to the clients as various events happen in the system, as needed.
The notification mechanism depends on these internal records.
My question is: what is the standard approach in distributed computing to handle client failures?
I.e. in this context, assume that the client process crashes or simply restarts.
The server still has the records for the client, but now client and server are out of sync.
As a result, the client will get notifications according to records created before the restart. This is undesirable.
What is a standardized way to detect client failures, e.g. that a client has restarted and its previous records must be erased?
I thought of periodic callbacks to the clients: if a client is not reachable, erase its records. But I am not sure if this is a good idea. [EDIT] I thought of callbacks because the periodic events sent back to the client can be at very large intervals, so a client failure would otherwise not be noticed soon.
Can anyone help with this? The context of my application domain is web services.
Thank you!
The standard approach varies from system to system depending on the architecture and domain. How does the server find out that the client is down? I think you don't need callbacks, since you already send the notifications and can detect that the client is unreachable. For example:
1. send a notification to the client;
2. if it succeeds, go to 1;
3. else erase all the notifications in the queue for the client and set a flag to stop collecting events for that client.
When the client connects again:
1. unset the flag;
2. start sending notifications again.
Or an even simpler approach:
1. erase the notification queue for the client when it connects, before initializing the conversation;
2. run a low-priority thread that erases all notifications older than X, to clean up after clients which will never come back.
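A rough sketch of the first approach (all names and types here are placeholders):

import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical server-side bookkeeping for the send/erase/flag scheme above.
class NotificationDispatcher {
    private final ConcurrentHashMap<String, Queue<String>> queues = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, Boolean> suspended = new ConcurrentHashMap<>();

    interface NotificationChannel {
        boolean trySend(String clientId, String notification);
    }

    // Called by the event loop to drain a client's pending notifications.
    void dispatch(String clientId, NotificationChannel channel) {
        if (Boolean.TRUE.equals(suspended.get(clientId))) {
            return; // flag set: stop sending/collecting for this client
        }
        Queue<String> queue = queues.get(clientId);
        if (queue == null) {
            return;
        }
        String next;
        while ((next = queue.peek()) != null) {
            if (channel.trySend(clientId, next)) {
                queue.poll(); // success: go on to the next notification
            } else {
                queue.clear();                 // client unreachable: drop its queue
                suspended.put(clientId, true); // and stop collecting events
                return;
            }
        }
    }

    // Called when the client (re)connects: unset the flag, resume sending.
    void onClientConnected(String clientId) {
        suspended.put(clientId, false);
        queues.putIfAbsent(clientId, new ConcurrentLinkedQueue<>());
    }
}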
Update after the original author's comments
It strongly depends on how things are organized in your system. Assuming:
the server starts a thread (let's call it an "agent") to serve a client, one thread per client;
the agent exits when the client shuts down the session properly or goes down;
there is a private record set for each client (not shared among agents/clients);
there is a shared list of current clients, which is used by another component (not an ordinary agent; let's call it the "dispatcher") to distribute records to clients.
Solution:
1. The server starts an agent and registers the newly connected client in the list of clients. The dispatcher gets notified that a new client has arrived.
2. The agent consumes the records while the client is connected. On the client's shutdown and/or failure, the agent unregisters the client and cleans up the record set.
If things in your system aren't organized in the way described above, please provide some details.
