How to configure non-persistent messages in GCP Pub/Sub? - java

I have an architecture with multiple pods subscribing to a GCP topic.
Every pod handles messages while it's up, but it is not interested in receiving messages it missed while it was down.
In ActiveMQ this was done with non-persistent messages, but I don't see the equivalent in GCP.
The only thing I could think of is the message retention duration, which has a minimum of 10 minutes.
Is this possible in GCP, and where can it be configured?

There is no option in Cloud Pub/Sub to disable storage. You have two options.
As you suggest, set the message retention duration to the minimum, 10 minutes. This does mean you'll get messages that are up to ten minutes old when the pod starts up. The disadvantage to this approach is that if there is an issue with your pod and it falls more than ten minutes behind in processing messages, then it will not get those messages, even when it is up.
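For illustration, here is a minimal sketch of creating a subscription with the 10-minute minimum retention using the Java admin client; the project, topic, and subscription IDs are placeholders:

    import com.google.cloud.pubsub.v1.SubscriptionAdminClient;
    import com.google.protobuf.Duration;
    import com.google.pubsub.v1.Subscription;
    import com.google.pubsub.v1.SubscriptionName;
    import com.google.pubsub.v1.TopicName;

    public class CreateShortRetentionSubscription {
      public static void main(String[] args) throws Exception {
        // Placeholder IDs - substitute your own project, topic, and subscription.
        try (SubscriptionAdminClient admin = SubscriptionAdminClient.create()) {
          admin.createSubscription(
              Subscription.newBuilder()
                  .setName(SubscriptionName.of("my-project", "my-sub").toString())
                  .setTopic(TopicName.of("my-project", "my-topic").toString())
                  // 600 seconds = the 10-minute minimum retention
                  .setMessageRetentionDuration(Duration.newBuilder().setSeconds(600).build())
                  .build());
        }
      }
    }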
Use the seek operation to seek forward to the current timestamp. When a pod starts up, the first thing it could do is issue a seek request, which acknowledges all messages before the provided timestamp. Note that this operation is eventually consistent, so it is possible you may still get some older messages when you initially start your subscriber (or, if you are using push, once your endpoint is up). Also keep in mind that seek is an administrative operation and is therefore limited to 6,000 operations a minute (100 operations per second). If you have a lot of pods that restart often, you may run into this limit.
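As a rough sketch of that startup seek with the same Java admin client (again with placeholder IDs):

    import com.google.cloud.pubsub.v1.SubscriptionAdminClient;
    import com.google.protobuf.Timestamp;
    import com.google.pubsub.v1.SeekRequest;
    import com.google.pubsub.v1.SubscriptionName;

    public class SeekToNowOnStartup {
      public static void main(String[] args) throws Exception {
        try (SubscriptionAdminClient admin = SubscriptionAdminClient.create()) {
          // Acknowledge everything published before "now" (placeholder IDs).
          Timestamp now = Timestamp.newBuilder()
              .setSeconds(System.currentTimeMillis() / 1000)
              .build();
          admin.seek(SeekRequest.newBuilder()
              .setSubscription(SubscriptionName.of("my-project", "my-sub").toString())
              .setTime(now)
              .build());
        }
      }
    }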

Related

Any way to set RMQ consumer delivery timeout on a per-message level?

We have a use case that holds active jobs in a RabbitMQ job queue. The clients, when they are free, pull jobs from this queue. Pretty normal. But in our case, we do not ACK the jobs. We allow them to stay in the unacked state so that if a client dies, the job goes to a pre-defined dead-letter queue. We then have a process that pulls messages from the dead-letter queue and decides either to requeue the message back to the original job queue or to discard it.
This has worked well for a long time. Now we have upgraded to a newer version of RMQ and found that we get disconnects with PRECONDITION_FAILED because the default ack timeout of 30 minutes has expired.
Beyond removing this from the server, does anyone know a way to configure this on a per-message level?
Some might say to just ACK the job and use a handler to return it to the dead-letter queue if needed. Sorry, but that will not work for us.
So, any thoughts?
No, there is not at this time. You should configure the default to be greater than the longest expected job duration. Please note that if you are using quorum queues this may cause disk usage growth because the log files can't be compacted while messages are outstanding.
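For reference, the global default can be raised in rabbitmq.conf; the key is consumer_timeout (in milliseconds) in recent RabbitMQ releases, and the two-hour value below is only an example, so size it to your longest expected job:

    # rabbitmq.conf - raise the delivery acknowledgement timeout to 2 hours
    consumer_timeout = 7200000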
We may make this timeout configurable in a more granular way, so please keep an eye on future RabbitMQ releases for that.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Kafka Spring Boot: receiving messages only from consumer application launch time and ignoring unprocessed messages

Currently, when the consumer application starts, it receives old messages that were not processed by the KafkaListener. I only want to receive the messages produced after the consumer application starts and ignore the old ones. How can I do that?
This is a pretty brief introduction to your issue - it would be handy to show the versions of the libraries you are working with, configurations, etc.
Nevertheless, if you do not want to receive old messages that have not been acked before, you need to move the offset for your consumer group.
The offset is basically a pointer to the last successfully read item, so when the consumer is stopped, it stays there until the consumer starts reading again - that is the reason why "old" messages are read. There are some answers in this thread, but it is difficult to answer completely without further information.
Set the consumer settings auto.offset.reset=latest and enable.auto.commit=false; then your app will always start reading from the very end of the Kafka topic and ignore whatever was written while the app was stopped (between restarts, for example).
You could also add a random UUID to the group.id to ensure no other consumer can easily join that consumer group and "take away" events from your app.
The Kafka Consumer API also has a seekToEnd method that you can try.
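Putting those pieces together, a hedged sketch with the plain Java consumer (broker address, topic name, and group prefix are placeholders; note that poll(Duration.ZERO) may need a small nonzero timeout for the assignment to complete):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import java.util.UUID;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class LatestOnlyConsumer {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Fresh, random group.id: no committed offsets, and no other consumer joins it.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "myapp-" + UUID.randomUUID());
        // With no committed offset, start from the end of the topic.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
          consumer.subscribe(List.of("my-topic"));
          consumer.poll(Duration.ZERO);              // trigger the initial partition assignment
          consumer.seekToEnd(consumer.assignment()); // belt and braces: jump to the latest offsets
          // ...normal poll loop from here...
        }
      }
    }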

How to execute a method once (multiple processes/instances) per minute utilizing AWS

I have a process that sends SQS messages every minute. It's important that the messages go out every minute so I'm planning on running the process on multiple instances so that it's more fault tolerant.
Even though it's running on multiple instances, I only want the SQS messages to go out once per minute. So if Machine A dispatches the messages, I don't want Machine B to send them, and vice versa.
I want to avoid having a master/slave setup.
I thought of using a separate SQS queue to send a "done" message that could be received by one of the processes to start dispatching the messages, then send a "done" message when complete / after a minute. But if the done message doesn't get sent because of a failure or other issue, the cycle would end, and that's not acceptable.
I also thought of having the process that dispatches the messages place a timestamp in SimpleDB, or possibly another DB, and have the processes check the timestamp on an interval. The first one that checks it and finds it's older than a minute would update the timestamp and dispatch the messages.
I investigated SWF and found that it can run workers/activities on a timer, but SWF seems like overkill for this and I'd rather avoid getting it set up and running with access to my DB.
Does anyone have an elegant solution for problems like this?
We used our MySQL DB to do this, similar to what you suggested, but we don't try to read the timestamp (race condition?). The table has a unique index on the timestamp. The processes on each instance attempt to insert a timestamp for the minute they run, e.g. '2015-02-27 12:47:00'. If MySQL returns a duplicate key error, then another instance came first and they do nothing. If the insert was successful, they send the SQS messages. A sketch of the idea is below.
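A minimal Java sketch of that insert-as-lock pattern, assuming a hypothetical table created as CREATE TABLE dispatch_lock (run_minute DATETIME NOT NULL UNIQUE):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLIntegrityConstraintViolationException;
    import java.sql.Timestamp;
    import java.time.LocalDateTime;
    import java.time.temporal.ChronoUnit;

    public class MinuteLock {
      // Returns true only for the one instance whose INSERT wins this minute.
      static boolean tryAcquireMinute(Connection conn) throws Exception {
        LocalDateTime minute = LocalDateTime.now().truncatedTo(ChronoUnit.MINUTES);
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO dispatch_lock (run_minute) VALUES (?)")) {
          ps.setTimestamp(1, Timestamp.valueOf(minute));
          ps.executeUpdate();
          return true;   // we came first: dispatch the SQS messages
        } catch (SQLIntegrityConstraintViolationException duplicate) {
          return false;  // another instance already inserted this minute: do nothing
        }
      }
    }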
You may also want to try googling for distributed cron systems.

Tib RV - listing all the processes that are publishing to a given topic

We have RV messaging systems publishing and receiving messages. Recently some underlying jars were upgraded - these are serialization jars used by all publishers and subscribers. However, it seems that some of the publishers are still referencing old versions of the serialization jars, and therefore the receivers fail when trying to deserialize received messages.
Obviously, restarting these publisher services should fix the problem. However, how do I identify all the publishers that use a particular topic to send messages? There must be some RV admin way of listing all the processes that are publishing to a given topic?
I just gave a similar answer on another question:
There is a really great tool for this called Rai Insight.
Basically, what it can do is sit on a box and silently listen to all the multicast data and present statistics, even in real time. We used it to monitor traffic flow spikes with just a few seconds' delay.
It can give you traffic statistics broken down by multicast group, service number, or even sending machine: traffic flow peak/average, retransmission rate peak/average - anything you can think of.
It will also give you per-service per-topic information.

JMS queue is full

My Java EE application sends JMS messages to a queue continuously, but sometimes the JMS consumer application stops receiving them. This causes the JMS queue to grow very large, even to become full, which collapses the server.
My server is JBoss or WebSphere. Do the application servers provide a strategy to remove "timeout" JMS messages?
What is the strategy for handling a large JMS queue? Thanks!
With any asynchronous messaging you must deal with the "fast producer/slow consumer" problem. There are a number of ways to deal with this.
1. Add consumers. With WebSphere MQ you can trigger a queue based on depth. Some shops use this to add new consumer instances as queue depth grows. Then, as queue depth begins to decline, the extra consumers die off. In this way, consumers can be made to scale automatically to accommodate changing loads. Other brokers generally have similar functionality.
2. Make the queue and underlying file system really large. This method attempts to absorb peaks in workload entirely in the queue. This is, after all, what queuing was designed to do in the first place. The problem is that it doesn't scale well, and you must allocate disk that 99% of the time will be almost empty.
3. Expire old messages. If the messages have an expiry set, then you can cause them to be cleaned up. Some JMS brokers will do this automatically, while on others you may need to browse the queue in order to cause the expired messages to be deleted. The problem with this is that not all messages lose their business value and become eligible for expiry; most fire-and-forget messages (audit logs, etc.) fall into the expirable category.
4. Throttle back the producer. When the queue fills, nothing can put new messages to it. In WebSphere MQ the producing application then receives a return code indicating that the queue is full. If the application distinguishes between fatal and transient errors, it can stop and retry.
The key to successfully implementing any of these is that your system be allowed to provide "soft" errors that the application will respond to. For example, many shops will raise the MAXDEPTH parameter of a queue the first time they get a QFULL condition. If the queue depth exceeds the size of the underlying file system the result is that instead of a "soft" error that impacts a single queue the file system fills and the entire node is affected. You are MUCH better off tuning the system so that the queue hits MAXDEPTH well before the file system fills but then also instrumenting the app or other processes to react to the full queue in some way.
But no matter what else you do, option #4 above is mandatory. No matter how much disk you allocate, how many consumer instances you deploy, or how quickly you expire messages, there is always a possibility that your consumer(s) won't keep up with message production. When this happens, your producer app should throttle back, raise an alarm and stop, or do anything other than hang or die. Asynchronous messaging is only asynchronous up to the point that you run out of space to queue messages. After that, your apps are synchronous and must gracefully handle that situation, even if that means a (graceful) shutdown.
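As a rough illustration of option #4, a producer-side retry loop might look like the following; the assumption that the broker signals a full queue with javax.jms.ResourceAllocationException is provider-specific, so check your broker's documentation:

    import javax.jms.Message;
    import javax.jms.MessageProducer;
    import javax.jms.ResourceAllocationException;

    public class ThrottlingSender {
      // Back off and retry instead of hanging or dying when the queue is full.
      static void sendWithBackoff(MessageProducer producer, Message msg) throws Exception {
        long delayMs = 100;
        while (true) {
          try {
            producer.send(msg);
            return;
          } catch (ResourceAllocationException queueFull) {
            Thread.sleep(delayMs);                     // throttle the producer
            delayMs = Math.min(delayMs * 2, 30_000);   // exponential backoff, capped at 30s
          }
        }
      }
    }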
Sure!
http://download.oracle.com/docs/cd/E17802_01/products/products/jms/javadoc-102a/index.html
Message expiration does exactly what you want. Set a time-to-live on the producer (MessageProducer#setTimeToLive(long), or the four-argument send), and the provider computes the JMSExpiration header from it when the message is sent.
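A minimal sketch, assuming a ConnectionFactory and Queue have already been obtained (e.g. via JNDI):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class ExpiringSender {
      // Messages older than 10 minutes become eligible for expiry on the queue.
      static void sendWithTtl(ConnectionFactory factory, Queue queue) throws Exception {
        Connection conn = factory.createConnection();
        try {
          Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
          MessageProducer producer = session.createProducer(queue);
          producer.setTimeToLive(10 * 60 * 1000L); // milliseconds; stamps JMSExpiration on send
          producer.send(session.createTextMessage("payload"));
        } finally {
          conn.close();
        }
      }
    }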
