I have a service in place which monitors the key-expiry topic __keyevent@*__:expired in Redis. I am running 3 instances of the service, which means 3 message listeners.
The RedisKeyExpirationListener is set up based on the suggestion in this solution: https://developpaper.com/implementation-code-of-expired-key-monitoring-in-redis-cluster/
The above solution suggests using a distributed Redis lock to make sure the same event is not processed again by a different node. Is there a different solution that makes Redis pass an event to just one node, so that the 3 nodes truly process different events in parallel rather than all processing the same event?
I know how to implement a distributed lock with Redis, but I want to understand whether there are specific settings that ensure the event is sent to only one active message listener and not to all the key-expiration listeners.
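For context, this is roughly what the lock-based deduplication from the linked article boils down to -- a sketch assuming Spring Data Redis, with handleExpiredKey() standing in for the actual business logic:

import java.time.Duration;
import org.springframework.data.redis.core.StringRedisTemplate;

public class DedupingExpirationHandler {

    private final StringRedisTemplate redisTemplate;

    public DedupingExpirationHandler(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // Called from the key-expiration listener; expiredKey is the key that just expired.
    public void onExpiredKey(String expiredKey) {
        // SET NX with a TTL: only the first instance to set this marker wins the event.
        Boolean acquired = redisTemplate.opsForValue()
                .setIfAbsent("expired-lock:" + expiredKey, "1", Duration.ofMinutes(5));
        if (Boolean.TRUE.equals(acquired)) {
            handleExpiredKey(expiredKey);   // hypothetical business logic
        }
        // Otherwise another instance already claimed this event; do nothing.
    }

    private void handleExpiredKey(String expiredKey) {
        // ... actual processing ...
    }
}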
Assume that I have a topic with numerous partitions. I'm writing K/V data to it and want to aggregate that data in tumbling windows by key.
Assume that I've launched as many worker instances as I have partitions and each worker instance is running on a separate machine.
How would I go about ensuring that the resulting aggregations include all values for each key? I.e., I don't want each worker instance to end up with only a subset of the values.
Is this something that a StateStore would be used for? Does Kafka manage this on its own or do I need to come up with a method?
How would I go about ensuring that the resulting aggregations include all values for each key? I.e., I don't want each worker instance to end up with only a subset of the values.
In general, Kafka Streams ensures that all values for the same key will be processed by the same (and only one) stream task, which also means only one application instance (what you described as "worker instance") will process the values for that key. Note that an app instance may run 1+ stream tasks, but these tasks are isolated.
This behavior is achieved through the partitioning of the data, and Kafka Streams ensures that a partition is always processed by the same and only one stream task. The logical link to keys/values is that, in Kafka and Kafka Streams, a key is always sent to the same partition (there is a gotcha here, but I'm not sure whether it makes sense to go into details for the scope of this question), hence one particular partition -- among possible many partitions -- contains all the values for the same key.
In some situations, such as when joining two streams A and B, you must ensure that both streams use the same key so that the data from both streams is co-located in the same stream task -- which, again, is all about ensuring that the relevant input-stream partitions, and thus the matching keys (from A and B, respectively), are made available in the same stream task. A typical method you'd use here is selectKey(). Once that is done, Kafka Streams ensures that, for joining the two streams A and B as well as for creating the joined output stream, all values for the same key are processed by the same stream task and thus the same application instance.
Example:
Stream A has key userId with value { georegion }.
Stream B has key georegion with value { continent, description }.
Joining two streams only works (as of Kafka 0.10.0) when both streams use the same key. In this example, this means that you must re-key (and thus re-partition) stream A so that the resulting key is changed from userId to georegion. Otherwise, as of Kafka 0.10, you can't join A and B because data is not co-located in the stream task that is responsible for actually performing the join.
In this example, you could re-key/re-partition stream A via:
// Kafka 0.10.0.x (latest stable release as of Sep 2016)
A.map((userId, georegion) -> KeyValue.pair(georegion, userId)).through("rekeyed-topic")
// Upcoming versions of Kafka (not released yet)
A.map((userId, georegion) -> KeyValue.pair(georegion, userId))
The through() call is only required in Kafka 0.10.0 to actually trigger the re-partitioning; later versions of Kafka will do this automatically for you (this upcoming functionality is already completed and available in Kafka trunk).
Is this something that a StateStore would be used for? Does Kafka manage this on its own or do I need to come up with a method?
In general, no. The behavior above is achieved through partitioning, not through state stores.
Sometimes state stores are involved because of the operations you have defined for a stream, which might explain why you were asking this question. For example, a windowing operation requires state to be managed, and thus a state store is created behind the scenes. But your actual question -- "ensuring that the resultant aggregations include all values for each key" -- has nothing to do with state stores; it is about the partitioning behavior.
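For illustration, here is a rough sketch of such a tumbling-window aggregation using the current Kafka Streams DSL (note this is a newer API than the 0.10.0 one discussed in this answer; the topic name and window size are made up):

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> input = builder.stream("input-topic");   // hypothetical topic name

// Tumbling-window count per key; the windowed state store is created behind the scenes,
// and all values for a given key land in the same stream task because of partitioning.
KTable<Windowed<String>, Long> counts = input
        .groupByKey()
        .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
        .count();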
With worker instance, I assume you mean a Kafka Streams application instance, right? (Because there is no master/worker pattern in Kafka Streams -- it's a library and not a framework -- we do not use the term "worker".)
If you want to co-locate data per key, you need to partition the data by key. Thus, either your data is partitioned by key by your external producer when the data gets written into a topic in the first place, or you explicitly set a new key within your Kafka Streams application (using, for example, selectKey() or map()) and re-distribute it via a call to through().
(The explicit call to through() will not be necessary in future releases, i.e., from 0.10.1 on, Kafka Streams will re-distribute records automatically if necessary.)
If messages/records should be partitioned by key, the key must not be null. You can also change the partitioning scheme via the producer configuration partitioner.class (see https://kafka.apache.org/documentation.html#producerconfigs).
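For illustration, a minimal sketch of plugging in a custom partitioning scheme via that setting -- com.example.MyEntityPartitioner is a made-up class that would implement org.apache.kafka.clients.producer.Partitioner:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringSerializer");
// Custom partitioning scheme; MyEntityPartitioner is a placeholder for your own implementation.
props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, "com.example.MyEntityPartitioner");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);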
Partitioning is completely independent of StateStores, even if StateStores are usually used on top of partitioned data.
I am trying to figure out if I can switch from a blocking scenario to a more reactive pattern.
I have incoming update commands arriving in a queue, and I need to handle them in order, but only those regarding the same entity. In essence, I can create as many parallel streams of update events as I wish, as long as no two streams contain events regarding the same entity.
I was thinking that the consumer of the primary queue could leverage AMQP's routing mechanisms and temporary queues, by creating a temporary queue for each entity id and hooking a consumer to it. Once the subscriber is finished and no other events regarding the entity in question are currently in the queue, the queue could be disposed of.
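To make the idea concrete, here is a rough sketch with the RabbitMQ Java client, assuming a direct exchange (the exchange name and entity id below are made up) and an auto-delete queue per entity, with the entity id used as the routing key:

import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

ConnectionFactory factory = new ConnectionFactory();
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();

String exchange = "entity-updates";            // made-up exchange name
channel.exchangeDeclare(exchange, "direct");

String entityId = "entity-42";                 // made-up entity id
// Exclusive, auto-delete queue per entity: removed once its consumer goes away.
String queueName = channel.queueDeclare("", false, true, true, null).getQueue();
channel.queueBind(queueName, exchange, entityId);

DeliverCallback callback = (consumerTag, delivery) -> {
    // Single consumer per queue => updates for the same entity are processed in order.
    String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
    System.out.println("Update for " + entityId + ": " + body);
};
channel.basicConsume(queueName, true, callback, consumerTag -> { });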
Is this scenario something that is used regularly? Is there a better way to achieve this? In our current system we use a named lock based on the id to prevent concurrent updates.
There are at least two options:
A single queue for each entity, with n consumers on each entity queue.
One queue with messages for all entities, where each message contains data identifying which entity it is for. You could then split this up into several queues (one AMQP queue per type of entity) or by using a BlockingQueue implementation (there is a sketch of this further below).
Benefits of splitting up the entities into AMQP queues:
You could create an HA setup with RabbitMQ
You could route messages
You could have more than one consumer of an entity queue if that becomes necessary someday (scalability)
Messages could be persistent and therefore recoverable after an application crash
Benefits of using an internal BlockingQueue implementation:
It is faster (no network I/O, obviously)
Everything has to happen in one JVM
Anyway, it depends on what you want, since both approaches have their benefits.
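To make the BlockingQueue variant concrete, here is a rough in-JVM sketch (class and method names are made up): one BlockingQueue per entity id, each drained by its own worker thread, so that updates for the same entity stay ordered:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class EntityDispatcher {

    private final ConcurrentMap<String, BlockingQueue<String>> queues = new ConcurrentHashMap<>();
    private final ExecutorService workers = Executors.newCachedThreadPool();

    // Called by the single consumer of the main queue.
    public void dispatch(String entityId, String update) {
        queues.computeIfAbsent(entityId, id -> {
            BlockingQueue<String> q = new LinkedBlockingQueue<>();
            // One worker per entity queue => per-entity ordering is preserved.
            workers.submit(() -> drain(id, q));
            return q;
        }).add(update);
    }

    private void drain(String entityId, BlockingQueue<String> queue) {
        try {
            while (true) {
                String update = queue.take();
                // ... apply the update for entityId ...
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}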
UPDATE:
I am not sure I fully understood you, but let me give you some resources to try some things out.
There are special rabbitmq extensions maybe some of them can give you an idea. Take a look at alternate exchanges and exchange to exchange bindings.
Also, for basic testing: I am not sure whether it covers all RabbitMQ features, or all AMQP features for that matter, but this can sometimes be useful. Keep in mind that the routing key in this visualization is the producer name; you can also find some examples there, and import and export your configuration.
We currently have a distributed setup where we publish events to SQS, and we have an application with multiple hosts that drains messages from the queue, does some transformation on them, and transmits them to interested parties. I have a use case where the receiving endpoint has scalability concerns with the message volume, and hence we would like to batch these messages periodically (say, every 15 minutes) in the application before sending them.
The incoming message rate is around 200 messages per second and each message is no more than 10 KB. This system need not be real time, but that would definitely be good to have; also, the order is not important (it's okay if a batch containing older messages gets sent first).
One approach that I can think of is maintaining an embedded database within the application (on each host) that batches the events, with another thread that runs periodically and clears the data.
Another approach could be to create timestamped buckets in a distributed key-value store (S3, Dynamo, etc.), where we write each message to the correct bucket based on the message's timestamp and we periodically clear the buckets.
We can run into several issues here: since the messages would be out of order, a bucket might have already been cleared (this can be solved by having a default bucket, though), we would need to accurately decide when to clear a bucket, etc.
The way I see it, at least two components would be required: one which does the batching into temporary storage and another that clears it.
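To make the first approach concrete, here is a rough sketch of a per-host batcher (names are made up, and it buffers in memory rather than in an embedded database, so it would not survive a crash -- which is exactly why an embedded database or S3 is being considered):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MessageBatcher {

    private final List<String> buffer = new ArrayList<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public MessageBatcher() {
        // Flush whatever has accumulated every 15 minutes.
        scheduler.scheduleAtFixedRate(this::flush, 15, 15, TimeUnit.MINUTES);
    }

    // Called by the SQS-draining thread(s) after transformation.
    public synchronized void add(String message) {
        buffer.add(message);
    }

    private void flush() {
        List<String> batch;
        synchronized (this) {
            if (buffer.isEmpty()) {
                return;
            }
            batch = new ArrayList<>(buffer);
            buffer.clear();
        }
        sendBatch(batch);   // hypothetical call to the downstream endpoint
    }

    private void sendBatch(List<String> batch) {
        // ... transmit the batch to the interested party ...
    }
}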
Any feedback on the above approaches would help; also, this looks like a common problem -- are there any existing solutions that I can leverage?
Thanks
In my distributed application, I am dispatching processing requests to a JMS queue. I have multiple nodes consuming from that queue (load balancing). Processing the requests requires a rather large chunk of user-specific data to be loaded into memory and I obviously want to keep that data in memory for subsequent requests. Thus, I'm using JMSXGroupId with the user-id to make sure, that all requests for a specific user are handled by the node that already has the data cached.
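For reference, attaching the group on the producing side looks roughly like this in plain JMS (the standard property name is JMSXGroupID; the queue name and variables are made up, and an existing connection is assumed):

import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

// connection: an existing javax.jms.Connection (assumed)
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue queue = session.createQueue("processing.requests");       // made-up queue name
MessageProducer producer = session.createProducer(queue);

TextMessage message = session.createTextMessage(requestPayload);  // requestPayload: your request data
// All messages with the same JMSXGroupID are routed to the same consumer.
message.setStringProperty("JMSXGroupID", userId);
producer.send(message);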
After some time, when the user is no longer active, I want to unload the data on the node. At that same time, I would like that node to give up ownership of the associated JMS message group.
I know I can give up ownership of a group by shutting down the corresponding consumer. However, that would mean I'd lose ownership of all groups associated with that consumer, not just the one for which I just unloaded the cached data.
Is there a way of giving up ownership of a specific group on the consumer side?
A broker-independent way would be preferable, but I'd settle for an ActiveMQ-specific solution if that is the only way. Also, feel free to suggest how this might be done with your favorite message broker.
You can't do this right now without closing the consumer.
Consumer != connection, by the way -- so why aren't you using multiple consumers per connection, one per group?
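To illustrate the suggestion, a rough sketch with plain JMS on a single connection (an existing connection is assumed; note that which groups the broker assigns to which consumer is still up to the broker):

import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

// One connection, several sessions/consumers (JMS sessions are single-threaded,
// so give each consumer its own session).
Session sessionA = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Session sessionB = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

Queue queueA = sessionA.createQueue("processing.requests");      // made-up queue name
Queue queueB = sessionB.createQueue("processing.requests");
MessageConsumer consumerA = sessionA.createConsumer(queueA);
MessageConsumer consumerB = sessionB.createConsumer(queueB);

// ... later: closing only consumerA releases the groups the broker assigned to it,
// while consumerB keeps its groups.
consumerA.close();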