Can producers have the same clientId and publish to a topic in Artemis? - java

I was wondering if it's possible to have multiple producers using the same clientId to send messages to a durable topic. And on the consuming side, what would happen if the clientID is the same as the producer side but the subscription name is different?
E.g. the producer has a clientId of 123abc and sends messages to a durable topic. A consumer is subscribed to this durable topic, and this consumer has a clientId of 123abc but also a subscriptionName of abc123. Would the consumer still be able to pick up the message? What would happen if I bring another consumer into the mix?

Section 6.1.2 of the JMS 2 specification states:
By definition, the client state identified by a client identifier can be ‘in use’ by only one client at a time.
By "client" the specification really means "connection." Therefore, the same client identifier can only be in use by one connection at a time. So if you create multiple producers from the same connection that's OK. However, creating multiple connections with the same client ID will fail before you even get to the point where you can create a producer as the broker will validate the client ID when the connection is created.
That said, there's no real point in setting the client ID on a connection that's just used for producing messages. Section 6.1.2 of the JMS 2 specification also states:
The only use of a client identifier defined by JMS is its mandatory use in identifying an unshared durable subscription or its optional use in identifying a shared durable or non-durable subscription.
Therefore, it's not really necessary to set the client ID unless you're creating an unshared durable subscription or possibly a shared durable or non-durable subscription.
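As a sketch of the distinction (assuming the ActiveMQ Artemis JMS client and a broker at a hypothetical localhost URL; this will not run without a broker, and will deliberately fail at the last line):

```java
import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ClientIdDemo {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");

        // OK: one connection with a client ID, and any number of producers on it.
        try (Connection conn = cf.createConnection()) {
            conn.setClientID("123abc");
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("myTopic");
            MessageProducer producer1 = session.createProducer(topic);
            MessageProducer producer2 = session.createProducer(topic); // fine, same connection
        }

        // Fails: a second *connection* reusing the same client ID while the first is open.
        Connection first = cf.createConnection();
        first.setClientID("123abc");
        Connection second = cf.createConnection();
        second.setClientID("123abc"); // broker rejects this: InvalidClientIDException
    }
}
```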

Two subscribers cannot have the same clientId: when they both try to connect to the broker, the second will get an exception. However, you can override the clientId: using TomEE or Tomcat you can add a simple line to the system.properties file like this:
<classname>.activation.clientId=<newclientid>
There is no problem for producers.

Related

Difference between KafkaTemplate and KafkaProducer send method?

My question is: in a Spring Boot microservice using Kafka, which is appropriate to use, KafkaTemplate.send() or KafkaProducer.send()?
I have used KafkaConsumer and not KafkaListener to poll the records, because KafkaListener was fetching the records as soon as they arrived on the topics; I wanted the records to be polled periodically based on business needs.
Have gone through the documentation of KafkaProducer https://kafka.apache.org/10/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html
and Spring KafkaTemplate
https://docs.spring.io/spring-kafka/reference/html/#kafka-template
I am unable to decide which one is ideal to use, or at least the reason for using one over the other is unclear to me.
What I need is for the operation to be synchronous, i.e. I want to know whether the publish happened successfully or not, because if the record is not delivered I need to retry publishing.
Any help will be appreciated.
For your first question: which one should you use, KafkaTemplate or KafkaProducer?
The KafkaProducer is defined in Apache Kafka. The KafkaTemplate is Spring's wrapper around it (although it does not implement Producer directly), and it provides additional convenience methods for you to use.
Read this link:
What is the difference between Kafka Template and kafka producer?
For the retry mechanism in case of failure in publishing:
I have answered this in another question.
The acks parameter controls how many partition replicas must receive the record before the producer can consider the write successful. There are 3 values for the acks parameter:
acks=0: the producer will not wait for a reply from the broker before assuming the message was sent successfully.
acks=1: the producer will receive a successful response from the broker the moment the leader replica receives the message. If the message can't be written to the leader, the producer will receive an error response and can retry.
acks=all: the producer will receive a successful response from the broker once all in-sync replicas have received the message.
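For the strongest guarantee described above, the producer configuration might look like this (a sketch; the property names come from Apache Kafka's producer configuration, and the values are illustrative):

```properties
# require acknowledgement from all in-sync replicas
acks=all
# how many times the producer retries a transiently failed send
retries=3
# upper bound on the total time for a send, including retries
delivery.timeout.ms=120000
```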
Best way to configure retries in Kafka Producer
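For the synchronous-send requirement in the question, KafkaTemplate's send() returns a future you can block on. A sketch, assuming Spring for Apache Kafka (where send() returns a CompletableFuture in recent versions and a ListenableFuture in older ones; the topic name is illustrative):

```java
import java.util.concurrent.TimeUnit;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;

public class SyncSender {
    private final KafkaTemplate<String, String> kafkaTemplate;

    public SyncSender(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendOrThrow(String payload) throws Exception {
        // Blocking on the future makes the send effectively synchronous:
        // an exception here means the record was not acknowledged by the broker,
        // so the caller can retry.
        SendResult<String, String> result =
                kafkaTemplate.send("my-topic", payload).get(30, TimeUnit.SECONDS);
        System.out.println("Delivered to partition "
                + result.getRecordMetadata().partition());
    }
}
```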

How to maintain SseEmitters list between multiple instances of a microservice?

Language: Spring Boot, JS
Overview: I am implementing server-sent events functionality in my application, which will be deployed in Cloud Foundry.
Based on a new message in a queue (to which I have subscribed in my microservice), I will send some update to my client/browser (which is using EventSource).
For this, I am maintaining an SseEmitter list (holding all the active SseEmitters) on my server side. Once I receive a new message from the queue, based on the id (a field in the queue message), I will emit the message to the corresponding client.
PROBLEM: How will the above scenario work when I scale my application by creating multiple instances of it? Since only one instance will receive the new queue message, the active SseEmitter may not be maintained in that particular instance. How do I solve this?
To solve this problem, the following approaches can be considered.
DNS concept
If you think about it, knowing where your user (SSE emitter) is, is like knowing where some website is. You can use a DNS-look-alike protocol to figure out where your user is. The protocol would be as follows:
When a user lands on any of your instances, associate the user with that instance. The association can be done using an external component, e.g. Redis, or a distributed map solution like Hazelcast.
Whenever the user disconnects from SSE, remove the association. Sometimes a disconnect is not registered properly with Spring's SseEmitter, so disassociation can also be done when sending a message fails.
Other parties (microservices) can easily query Redis/Hazelcast to figure out which instance the user is on.
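A minimal in-memory sketch of that association logic (all names are illustrative; in production the map would be backed by Redis or Hazelcast rather than a local ConcurrentHashMap, so that all instances see the same associations):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Tracks which application instance currently holds a user's SSE emitter.
public class EmitterRegistry {
    // Stand-in for a shared store such as Redis or Hazelcast.
    private final Map<String, String> userToInstance = new ConcurrentHashMap<>();

    // Called when a user opens an SSE connection on some instance.
    public void associate(String userId, String instanceId) {
        userToInstance.put(userId, instanceId);
    }

    // Called on disconnect, or when sending to the emitter fails.
    public void disassociate(String userId) {
        userToInstance.remove(userId);
    }

    // Other services query this to route a message to the right instance.
    public Optional<String> instanceFor(String userId) {
        return Optional.ofNullable(userToInstance.get(userId));
    }

    public static void main(String[] args) {
        EmitterRegistry registry = new EmitterRegistry();
        registry.associate("user-123", "instance-A");
        System.out.println(registry.instanceFor("user-123").orElse("none"));
    }
}
```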
Message routing concept
If you're using messaging middleware for communication between your microservices, you can use the routing feature which the AMQP protocol provides. The protocol would be as follows:
Each SSE instance creates its own queue on boot.
A user lands on any of the SSE instances, and that instance adds an exchange-queue binding with routing key = user uid.
Whenever the user disconnects from SSE, remove the binding. Sometimes a disconnect is not registered properly with Spring's SseEmitter, so the binding can also be removed when sending a message fails.
Other parties (microservices) need to send a message to the exchange and define a routing key. The AMQP broker figures out which queue should receive the message based on the routing key.
Bindings are not resource-intensive on modern AMQP brokers like RabbitMQ.
Your question is old; if you didn't figure this out by now, I hope this helps.

Asynchronous request-reply with Spring Boot and RabbitMQ

We want to implement the following scenario:
A producer service sends some input params to another service asking for the details based on these params.
A producer wants to specify the queue where it will be listening for the reply.
Moreover, a producer wants to provide some metadata so that it can correlate the params it sent with a result it got.
Please advise on how to do this properly.
See the AsyncRabbitTemplate.
It uses the correlationId and replyTo properties to convey that information to the service that handles the request.
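A sketch of the requesting side (assuming Spring AMQP 3.x, where AsyncRabbitTemplate.convertSendAndReceive() returns a CompletableFuture-based RabbitConverterFuture; the exchange and routing key names are illustrative):

```java
import org.springframework.amqp.rabbit.AsyncRabbitTemplate;
import org.springframework.amqp.rabbit.RabbitConverterFuture;

public class DetailsClient {
    private final AsyncRabbitTemplate asyncTemplate;

    public DetailsClient(AsyncRabbitTemplate asyncTemplate) {
        this.asyncTemplate = asyncTemplate;
    }

    public void requestDetails(Object params) {
        // The template sets correlationId and replyTo automatically; the
        // returned future completes when the correlated reply arrives.
        RabbitConverterFuture<Object> future =
                asyncTemplate.convertSendAndReceive("requests.exchange", "details.rk", params);
        future.whenComplete((reply, ex) -> {
            if (ex == null) {
                System.out.println("Got reply: " + reply);
            }
        });
    }
}
```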

Topic creation in ActiveMQ

http://docs.oracle.com/javaee/1.4/api/javax/jms/Session.html#createTopic(java.lang.String)
This API says that session.createTopic(topicname) is not for creating the physical topic. What does this mean?
If I want one group of user which has authority of "admin" is responsible for creating topics and another group of user which has authority of "write" is responsible for publishing messages to this topic, how can I implement this? It seems that the latter group must also have the authority of "admin" because they have to use this method: session.createTopic(topicname).
How can I separate the "admin" and "write" authority?
What the JMS spec means is that createTopic(String) is used to give you a logical handle (javax.jms.Topic, a subtype of Destination) which you can subsequently use in other calls such as createProducer(Destination) or createConsumer(Destination). It just so happens in ActiveMQ that a physical destination will be created at the same time.
If you want to make sure that users can only publish to already created destinations, assign that group read and write permissions, but not admin. Obviously that assumes that those topics already exist - if they do not, then you'll get an exception thrown.
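That separation can be expressed in activemq.xml via the authorizationPlugin. A sketch (the group names and topic pattern here are illustrative):

```xml
<broker xmlns="http://activemq.apache.org/schema/core">
  <plugins>
    <authorizationPlugin>
      <map>
        <authorizationMap>
          <authorizationEntries>
            <!-- "publishers" may send/receive; only "admins" may create destinations -->
            <authorizationEntry topic="topic.>"
                                read="publishers" write="publishers" admin="admins"/>
          </authorizationEntries>
        </authorizationMap>
      </map>
    </authorizationPlugin>
  </plugins>
</broker>
```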
You haven't said exactly how you would like to administer topic creation, but if you are OK with doing that in the ActiveMQ config for them to be created at startup, then define those topics in a destinations block:
<broker xmlns="http://activemq.apache.org/schema/core">
<destinations>
<topic physicalName="topic.1" />
<topic physicalName="topic.2" />
</destinations>
</broker>
The JMS API is not for administration, only for using existing topics and queues. In ActiveMQ, the default is that the physical queue/topic is auto-created when needed (i.e. when someone is sending to it or consuming from it).
How to create physical objects in a JMS implementation is vendor-specific, so you should check how this is handled in ActiveMQ.
How this is treated in AMQ

How to receive from a queue and publish it into a topic?

I am trying to receive a message from a queue and publish it into a topic. I have a QueueSession instance but it cannot be used to create a topic. If I understand correctly, QueueSession is only used for receiving messages from a queue and sending messages to another queue. How can I mix it up - receiving from a queue and publishing it into a topic in a single session?
Forget about all of the domain-specific classes and use the unified domain available in JMS 1.1. Substitute the classes as follows:
QueueConnectionFactory --> ConnectionFactory
QueueSession --> Session
Queue --> Destination
Once you switch to the unified domain, the application does not need to know whether a destination is a queue or a topic. For example, if your app has two managed objects myInputDest and myOutputDest, you can assign either of these to a queue or a topic in your managed object definitions (sometimes referred to as the .bindings file) in any combination. You can read from a queue and write to a topic, read from a topic and write to a queue, or go queue-to-queue or topic-to-topic; all of this is resolved at run time and can change between program invocations just by changing the managed objects.
There is some sample code that demonstrates this in the IBM developerWorks article Running a standalone Java application on WebSphere MQ V6.0.
You have to create a TopicSession and TopicPublisher for the destination topic, but do this outside of your queue message handler, for example at the same time you create the QueueSession and subscribe to the queue. In the queue message handler, you will then call publish(message) on the TopicPublisher instance.
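Using the unified domain, the queue-to-topic bridge described above can be sketched like this (assuming JMS 1.1+, an already configured ConnectionFactory, and illustrative destination names; it will not run without a broker):

```java
import javax.jms.*;

public class QueueToTopicBridge {
    public static void bridge(ConnectionFactory cf) throws JMSException {
        try (Connection connection = cf.createConnection()) {
            // One unified-domain Session handles both the queue and the topic.
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Destination source = session.createQueue("input.queue");
            Destination target = session.createTopic("output.topic");

            MessageConsumer consumer = session.createConsumer(source);
            MessageProducer producer = session.createProducer(target);

            connection.start();
            Message message = consumer.receive(5000); // wait up to 5 seconds
            if (message != null) {
                producer.send(message); // republish the queue message to the topic
            }
        }
    }
}
```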
