Topic creation in ActiveMQ - java

http://docs.oracle.com/javaee/1.4/api/javax/jms/Session.html#createTopic(java.lang.String)
This API says that session.createTopic(topicname) is not for creating the physical topic. What does this mean?
If I want one group of users with "admin" authority to be responsible for creating topics, and another group with "write" authority to be responsible for publishing messages to those topics, how can I implement this? It seems that the latter group must also have "admin" authority, because they have to use this method: session.createTopic(topicname).
How can I separate the "admin" and "write" authorities?

What the JMS spec means is that createTopic(String) is used to give you a logical handle (javax.jms.Topic, a subtype of Destination) which you can subsequently use in other calls such as createProducer(Destination) or createConsumer(Destination). It just so happens in ActiveMQ that a physical destination will be created at the same time.
If you want to make sure that users can only publish to already created destinations, assign that group read and write permissions, but not admin. Obviously that assumes that those topics already exist - if they do not, then you'll get an exception thrown.
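To make the distinction concrete, here is a minimal sketch of a publisher that holds only read/write (not admin) permissions, assuming the ActiveMQ JMS client is on the classpath; the broker URL, credentials, and topic name are placeholders for your own setup:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class WriterClient {
    public static void main(String[] args) throws JMSException {
        // Credentials of a user in the "write" group, not "admin" (placeholders).
        ConnectionFactory factory =
            new ActiveMQConnectionFactory("writer", "password", "tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // createTopic() only returns a logical handle; no admin authority is
            // needed as long as "topic.1" already exists on the broker.
            Topic topic = session.createTopic("topic.1");
            MessageProducer producer = session.createProducer(topic);
            producer.send(session.createTextMessage("hello"));
        } finally {
            connection.close();
        }
    }
}
```

If the topic does not already exist, the send will fail with a security exception instead of auto-creating the destination.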
You haven't said exactly how you would like to administer topic creation, but if you are OK with defining the topics in the ActiveMQ config so that they are created at broker startup, declare them in a destinations block:
<broker xmlns="http://activemq.apache.org/schema/core">
  <destinations>
    <topic physicalName="topic.1" />
    <topic physicalName="topic.2" />
  </destinations>
</broker>
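To actually enforce the split, ActiveMQ's authorization plugin can map read/write/admin rights to groups. A sketch, placed inside the same <broker> element (the group names "publishers" and "admins" are assumptions, as is the `topic.>` wildcard pattern):

```xml
<plugins>
  <authorizationPlugin>
    <map>
      <authorizationMap>
        <authorizationEntries>
          <!-- "publishers" may send and receive, but only "admins"
               may create (or destroy) matching topics -->
          <authorizationEntry topic="topic.>"
                              read="publishers,admins"
                              write="publishers,admins"
                              admin="admins" />
        </authorizationEntries>
      </authorizationMap>
    </map>
  </authorizationPlugin>
</plugins>
```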

The JMS API is not for administration, only for using existing topics and queues. In ActiveMQ, the default is that the physical queue/topic is auto-created once needed (someone is sending to it or consuming from it).
How to create physical destinations in a JMS implementation is vendor specific, so you should check out how this is handled in ActiveMQ:
How this is treated in AMQ

Related

Can producers have the same clientId and publish to a topic in Artemis?

I was wondering if it's possible to have multiple producers using the same clientId to send messages to a durable topic. And on the consuming side, what would happen if the clientId is the same as on the producer side but the subscription name is different?
E.g. the producer has a clientId of 123abc and sends messages to a durable topic. A consumer subscribed to this durable topic has a clientId of 123abc but also a subscriptionName of abc123. Would the consumer still be able to pick up the message? And what would happen if I brought another consumer into the mix?
Section 6.1.2 of the JMS 2 specification states:
By definition, the client state identified by a client identifier can be ‘in use’ by only one client at a time.
By "client" the specification really means "connection." Therefore, the same client identifier can only be in use by one connection at a time. So if you create multiple producers from the same connection that's OK. However, creating multiple connections with the same client ID will fail before you even get to the point where you can create a producer as the broker will validate the client ID when the connection is created.
That said, there's no real point in setting the client ID on a connection that's just used for producing messages. Section 6.1.2 of the JMS 2 specification also states:
The only use of a client identifier defined by JMS is its mandatory use in identifying an unshared durable subscription or its optional use in identifying a shared durable or non-durable subscription.
Therefore, it's not really necessary to set the client ID unless you're creating an unshared durable subscription or possibly a shared durable or non-durable subscription.
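A sketch with the Artemis JMS client illustrating both points; the URL and topic name are placeholders, and the client ID and subscription name are the ones from the question:

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class DurableSubscriptionSketch {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Producers need no client ID at all.
        Connection producerConn = factory.createConnection();
        producerConn.start();

        // The client ID plus the subscription name identify an unshared
        // durable subscription. A second connection that also called
        // setClientID("123abc") would be rejected by the broker.
        Connection subscriberConn = factory.createConnection();
        subscriberConn.setClientID("123abc");
        subscriberConn.start();
        Session session = subscriberConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("durable.topic");
        MessageConsumer consumer = session.createDurableSubscriber(topic, "abc123");

        subscriberConn.close();
        producerConn.close();
    }
}
```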
Two subscribers cannot have the same clientId: when they both try to connect to the broker, the second will get an exception. However, you can override the clientId: using TomEE or Tomcat, you can add a simple line to the system.properties file like this:
<classname>.activation.clientId=<newclientid>
No problem for producers.

How to maintain SseEmitters list between multiple instances of a microservice?

Language: Spring Boot, JS
Overview: I am implementing server-sent events functionality in my application, which will be deployed in Cloud Foundry.
Based on a new message in a queue (which I have subscribed to in my microservice), I will send some update to my client/browser (which is using EventSource).
For this, I am maintaining an SseEmitter list (holding all the active SseEmitters) on my server side. Once I receive a new message from the queue, based on the id (a field in the queue message), I emit the message to the corresponding client.
PROBLEM: How will the above scenario work when I scale my application by creating multiple instances of it? Since only one instance will receive the new queue message, the active SseEmitter may not be held by that particular instance. How do I solve this?
To solve this problem, the following approaches can be considered.
DNS concept
If you think about it, knowing where your user (SseEmitter) is, is like knowing where some website is. You can use a DNS-look-alike protocol to figure out where your user is. The protocol would be as follows:
When a user lands on any of your instances, associate the user with that instance. The association can be kept in an external component, e.g. Redis, or in a distributed map solution like Hazelcast.
Whenever the user disconnects from SSE, remove the association. Sometimes a disconnect is not registered properly with Spring's SseEmitter, so the disassociation can also be done when sending a message fails.
Other parties (microservices) can easily query Redis/Hazelcast to figure out which instance the user is on.
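A minimal in-memory sketch of that association table; in a real deployment the map would be replaced by Redis or Hazelcast so that all instances share it, and all names here are illustrative:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Tracks which instance currently holds a user's SseEmitter. In production
// the map would live in Redis/Hazelcast so every instance sees the same view.
public class EmitterRegistry {
    private final Map<String, String> userToInstance = new ConcurrentHashMap<>();

    // Called when a user lands on an instance and opens an SSE connection.
    public void associate(String userId, String instanceId) {
        userToInstance.put(userId, instanceId);
    }

    // Called on disconnect, or when a send fails (disconnects are not
    // always reported promptly by the emitter).
    public void disassociate(String userId) {
        userToInstance.remove(userId);
    }

    // Other services query this to route a message to the right instance.
    public Optional<String> instanceFor(String userId) {
        return Optional.ofNullable(userToInstance.get(userId));
    }
}
```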
Message routing concept
If you're using messaging middleware for communication between your microservices, you can use the routing feature which the AMQP protocol provides. The protocol would be as follows:
Each SSE instance creates its own queue on boot.
A user lands on any of the SSE instances, and the instance adds an exchange-queue binding with routing key = user uid.
Whenever the user disconnects from SSE, remove the binding. Sometimes a disconnect is not registered properly with Spring's SseEmitter, so the unbinding can also be done when sending a message fails.
Other parties (microservices) send messages to the exchange with a routing key defined. The AMQP broker figures out which queue should receive the message based on the routing key.
Bindings are not resource intensive on modern AMQP brokers like RabbitMQ.
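A sketch of that routing protocol using the RabbitMQ Java client; the exchange, queue, and user id names are placeholders, and a broker is assumed to be running on localhost:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class SseRoutingSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // On boot: each SSE instance declares its own (auto-delete) queue.
            String instanceQueue = "sse-instance-1";
            channel.queueDeclare(instanceQueue, false, false, true, null);
            channel.exchangeDeclare("sse-events", "direct");

            // When a user connects: bind with routing key = user uid.
            channel.queueBind(instanceQueue, "sse-events", "user-42");

            // Other services publish to the exchange with the user's uid;
            // the broker routes to whichever instance holds the binding.
            channel.basicPublish("sse-events", "user-42", null,
                                 "payload".getBytes("UTF-8"));
        }
    }
}
```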
Your question is old; if you didn't figure this out by now, I hope this helps.

On-demand/trigger-based Camel route

I'm trying to implement the following JMS message flow using camel routes:
There is a topic published on an external message broker. My program is listening for messages on this topic. Each incoming message triggers a specific route to be executed ONE TIME ONLY (some kind of ad-hoc, disposable route). This route is supposed to move messages between queues within my internal message broker based on some selector (get all messages from queue A matching a given selector and move them to queue B). I'm only starting with Camel and so far I have figured out just the first part, listening for messages on the topic:
<bean id="somebroker" class="org.apache.camel.component.jms.JmsComponent"
      p:connectionFactory-ref="rmAdvisoriesConnectionFactory"/>
<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
  <endpoint id="jms" uri="somebroker:topic:sometopic"/>
  <route id="routeAdvisories">
    <from ref="jms"/>
    <to>???</to>
  </route>
</camelContext>
Can you suggest a destination for these advisory messages? I need to read some of their JMS properties and use those values to construct the JMS selector that will be used for the "move messages" operation. But I have no idea how to declare and trigger this ad-hoc route. It would be ideal if I could define it within the same camelContext using only the Spring DSL. Or alternatively, I could route the advisories to some Java method, which would create and execute these ad-hoc routes. But if this isn't possible, I'll be grateful for any suggestion.
Thanks.
As far as I understand, it could be useful to use the 'selector' option in your JMS consumer route, for example:
from("activemq:queue:test?selector=key='value1'").to("mock:a");
from("activemq:queue:test?selector=key='value2'").to("mock:b");
Maybe another option is to implement some routes based on the Content Based Router pattern through the "choice" option. You can find more info here: http://camel.apache.org/content-based-router.html
I hope it helps.
I couldn't get it working the way I intended, so I had to abandon my original approach. Instead of using Camel routes to move messages between queues (now I'm not sure Camel routes are even intended to be used this way), I ended up using ManagedRegionBroker - the way the JMX operation "moveMatchingMessagesTo" is implemented - to move messages matching a given selector.
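For reference, that same JMX operation can be invoked from plain Java. A sketch assuming the default broker name ("localhost") and JMX URL; the queue names and selector are placeholders:

```java
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.broker.jmx.QueueViewMBean;

public class MoveMatchingSketch {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            // MBean name of the source queue "A" on the default broker.
            ObjectName queueA = new ObjectName(
                "org.apache.activemq:type=Broker,brokerName=localhost,"
                + "destinationType=Queue,destinationName=A");
            QueueViewMBean proxy = MBeanServerInvocationHandler.newProxyInstance(
                jmxc.getMBeanServerConnection(), queueA, QueueViewMBean.class, true);
            // Move every message on queue A matching the selector to queue B.
            int moved = proxy.moveMatchingMessagesTo("type = 'order'", "B");
            System.out.println("Moved " + moved + " messages");
        } finally {
            jmxc.close();
        }
    }
}
```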

Spring Integration SFTP - Getting configurations from XML

Let's say I have these configurations in my XML:
<int-sftp:outbound-channel-adapter id="sftpOutbound"
                                   channel="sftpChannel"
                                   auto-create-directory="true"
                                   remote-directory="/path/to/remote/directory/"
                                   session-factory="cachingSessionFactory">
  <int-sftp:request-handler-advice-chain>
    <int:retry-advice />
  </int-sftp:request-handler-advice-chain>
</int-sftp:outbound-channel-adapter>
How can I retrieve the attributes, e.g. remote-directory, in a Java class?
I tried to use context.getBean("sftpOutbound") but it returns an EventDrivenConsumer, which doesn't have methods to get the configurations.
I'm using spring-integration-sftp v 4.0.0.
I am actually more concerned with why you want to access it. The remote directory and other attributes will come with the headers of each message, so you will have access to them at the Message level, but not at the level of the EventDrivenConsumer; that is by design, hence my question.

Message dispatching system design in Java

I am looking for a lightweight and efficient solution for the following use case:
The gateway module receives resources to deliver to different acceptors.
The resources are queued (in order of arrival) for each acceptor.
A purge process scans those queues; if resources are available for some acceptor, it bundles them under some tag (a unique id) and sends a notification that a new bundle is available.
System characteristics:
The number of acceptors is dynamic.
No limitations on the number of resources in one bundle.
The module will be used in Tomcat 7 under Java 7 (not clustered).
I considered the following solutions:
JMS - dynamic queue configuration for each acceptor; is it possible to consume all available messages in a queue? Thread configuration per queue (not scalable)?
Akka actors. Didn't find a suitable pattern for usage.
A naive pure-Java implementation, where the queues are scanned by one thread (round-robin).
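The naive pure-Java option can be sketched as one in-memory queue per acceptor, drained by a single purge pass; all names are illustrative, and the map lookups are kept Java 7-compatible to match the stated environment:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// One FIFO queue per acceptor; acceptors register implicitly on first submit.
// A single purge pass drains each queue and bundles its resources under a tag.
public class Dispatcher {
    private final ConcurrentHashMap<String, Queue<String>> queues =
        new ConcurrentHashMap<String, Queue<String>>();

    public void submit(String acceptor, String resource) {
        Queue<String> q = queues.get(acceptor);
        if (q == null) {
            queues.putIfAbsent(acceptor, new ConcurrentLinkedQueue<String>());
            q = queues.get(acceptor);
        }
        q.add(resource);  // preserves arrival order per acceptor
    }

    /** One purge pass over all acceptor queues; returns tag -> bundle. */
    public Map<String, List<String>> purge() {
        Map<String, List<String>> bundles = new HashMap<String, List<String>>();
        for (Map.Entry<String, Queue<String>> e : queues.entrySet()) {
            List<String> bundle = new ArrayList<String>();
            String r;
            while ((r = e.getValue().poll()) != null) {
                bundle.add(r);  // drain everything that has arrived so far
            }
            if (!bundle.isEmpty()) {
                String tag = UUID.randomUUID().toString();
                bundles.put(tag, bundle);
                // here: notify listeners that bundle `tag` is available
            }
        }
        return bundles;
    }
}
```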
I think that this is the right place to discuss available solutions for this problem.
Please share your ideas, considering the following points:
Suitable third-party frameworks.
Scalable scanning of the resource queues.
Thanks in advance.
You can use various technologies, e.g.:
JMS dynamic queues
an extended LMAX Disruptor (e.g. https://github.com/hicolour/disruptor-ext)
but for high-availability and scalability reasons you should use Akka.
Akka
The starting point for your implementation will be the consistent hashing routing algorithm built into Akka - in simple words, this type of routing logic selects a consistent routee based on a provided key. The routees, in terms of your problem description, are the acceptors.
The router actor comes in two distinct flavors, which gives you a flexible mechanism to deploy new acceptors in your infrastructure.
Pool - The router creates routees as child actors and removes them from the router if they terminate.
Group - The routee actors are created externally to the router and the router sends messages to the specified path using actor selection, without watching for termination.
First of all, please read the Akka routing documentation to get a better understanding of how routing is implemented in the Akka framework:
http://doc.akka.io/docs/akka/2.3.7/java/routing.html
You can also check this article about designing scalable and highly available systems:
http://prochera.com/blog/2014/07/15/building-a-scalable-and-highly-available-reactive-applications-with-akka-load-balancing-revisited/
Q1: Is it possible for an actor to know its route (its hash key)?
An actor may know which key is currently being handled, because the key may simply be part of the message - but you shouldn't build cross-message logic/state based on this key.
Message:
import akka.routing.ConsistentHashingRouter.ConsistentHashable

class Message(key: String) extends ConsistentHashable with Serializable {
  override def consistentHashKey(): AnyRef = key
}
Actor:
import akka.actor.{Actor, ActorLogging}

class EchoActor extends Actor with ActorLogging {
  log.info("Actor created {}", self.path.name)

  def receive = {
    case message: Message =>
      log.info("Received message {} in actor {}", message.consistentHashKey(), self.path.name)
    case _ => log.error("Received unsupported message")
  }
}
Q2: Can an actor manage state other than its mailbox?
An actor's state can be changed only through the messages sent between actors.
If you initialize an actor with a reference to a classic Java/Spring bean, it will be able to interact with the non-actor world/state, e.g. a DAO layer, but this type of integration should be limited as much as possible and treated as an anti-pattern.
Q3: Is there a way to use configuration that is collision resistant?
As an API consumer, you need to define the collision-resistant model on your own, but once again Akka provides the infrastructure required to do it.
In most cases the key will be part of the domain, e.g. an auction id or a customer id.
If a key needs to be generated on demand, you can use a ClusterSingleton with the Persistence extension.
The generator may be an actor responsible for generating unique IDs; other actors may obtain a new id using the ask pattern.
A ClusterSingleton is initialized using ClusterSingletonManager and obtained using ClusterSingletonProxy:
system.actorOf(ClusterSingletonManager.props(
    singletonProps = Props(classOf[Generator]),
    singletonName = "generator",
    terminationMessage = End,
    role = Some("generator")),
  name = "singleton")

system.actorOf(ClusterSingletonProxy.props(
    singletonPath = "/user/singleton/generator",
    role = Some("generator")),
  name = "generatorProxy")
I think that for your problem JMS will be the proper solution. You can go with RabbitMQ, which has routers that route messages to different queues according to a key, and it provides a built-in solution for message flow and a message acknowledgement mechanism.
You could use Apache Camel for this. It's lightweight and supports a lot of enterprise integration patterns. In particular, the Content Based Router is a possible solution.
