JMS CreateQueue Question - java

I have a general question about the JMS createQueue method. In WebSphere MQ is this method used as an alternative to the JNDI lookup? I was thinking that I could dynamically create a queue. Is this possible? Thank you.

Assuming you mean QueueSession.createQueue, this is a very misleading method, and doesn't do what you might think:
Creates a queue identity given a Queue name.
This facility is provided for the rare cases where clients need to dynamically manipulate queue identity. It allows the creation of a queue identity with a provider-specific name. Clients that depend on this ability are not portable.
Note that this method is not for creating the physical queue. The physical creation of queues is an administrative task and is not to be initiated by the JMS API. The one exception is the creation of temporary queues, which is accomplished with the createTemporaryQueue method.
The JMS API does not provide a way of dynamically creating queues (unless you mean temporary queues, which are a very different beast used for request-response messaging). If you want to create queues at runtime, that's going to be proprietary to WebSphere MQ.

Yes, as per the spec and as correctly pointed out in the answer above:
Creates a queue identity given a Queue name.
This facility is provided for the rare cases where clients need to dynamically manipulate queue identity. It allows the creation of a queue identity with a provider-specific name. Clients that depend on this ability are not portable.
Note that this method is not for creating the physical queue. The physical creation of queues is an administrative task and is not to be initiated by the JMS API. The one exception is the creation of temporary queues, which is accomplished with the createTemporaryQueue method.
So JMS does not provide a direct way to create queues dynamically; how it is done will be specific to the JMS provider, which may offer some kind of console or admin API for the purpose.
As far as the createQueue() method of Session is concerned, it returns a reference to the Queue if it has already been created; if not, a JMSException is thrown.
Also note that createTemporaryQueue() does create an actual physical queue; you will have to call delete() to clean up the related resources.
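For illustration, here is a minimal sketch of both calls, assuming a QueueConnectionFactory is available in JNDI under a hypothetical name and that MY.EXISTING.QUEUE was already created administratively on the queue manager:

import javax.jms.*;
import javax.naming.InitialContext;

public class CreateQueueDemo {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // Hypothetical JNDI name; substitute whatever your environment uses.
        QueueConnectionFactory cf =
                (QueueConnectionFactory) ctx.lookup("jms/MyConnectionFactory");

        QueueConnection conn = cf.createQueueConnection();
        QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

        // createQueue only builds a provider-specific identity for a queue that
        // must already exist on the queue manager; it does not create the queue.
        Queue existing = session.createQueue("MY.EXISTING.QUEUE");

        // createTemporaryQueue is the one call that really creates a physical
        // (temporary) queue; it lives until delete() or the connection closes.
        TemporaryQueue temp = session.createTemporaryQueue();
        temp.delete();

        conn.close();
    }
}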

Related

Demultiplexing messages from a queue to process in parallel streams using AMQP?

I am trying to figure out if I can switch from a blocking scenario to a more reactive pattern.
I have incoming update commands arriving in a queue, and I need to handle them in order, but only those regarding the same entity. In essence, I can create as many parallel streams of update events as I wish, as long as no two streams contain events regarding the same entity.
I was thinking that the consumer of the primary queue would possibly be able to leverage AMQP's routing mechanisms and temporary queues, by creating a temporary queue for each entity id and hooking a consumer to it. Once the subscriber is finished and no other events regarding the entity in question are currently in the queue, the queue could be disposed of.
Is this scenario something that is used regularly? Is there a better way to achieve this? In our current system we use a named lock based on the id to prevent concurrent updates.
There are at least two options:
A single queue for each entity, with n consumers per entity queue.
One queue with the messages of all entities, where each message carries data identifying which entity it is for. You could then split this up into several queues (one AMQP queue per type of entity) or use a BlockingQueue implementation.
Benefits of splitting up the entities into AMQP queues:
You could create an HA setup with RabbitMQ.
You could route messages.
You could have more than one consumer of an entity queue if that ever becomes necessary (scalability).
Messages could be persistent and therefore recoverable after an application crash.
Benefits of using an internal BlockingQueue implementation:
It is faster (no network I/O, obviously).
Everything happens in one JVM.
Either way, it depends on what you want, since both approaches have their benefits. A sketch of the in-JVM variant is shown below.
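As a rough sketch of the in-JVM variant (not the author's code): updates are serialized per entity by giving each entity id its own single-threaded executor, each of which wraps a work queue internally. UpdateCommand and its methods are hypothetical placeholders for however your update commands are represented.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical command type: what an incoming update message is unmarshalled into.
interface UpdateCommand {
    String getEntityId();
    void apply();
}

// The consumer of the primary queue hands every command to dispatch(); commands
// for the same entity id always run on the same single thread, in arrival order,
// while commands for different entities proceed in parallel.
public class PerEntityDispatcher {

    private final ConcurrentMap<String, ExecutorService> executors = new ConcurrentHashMap<>();

    public void dispatch(UpdateCommand command) {
        executors
            .computeIfAbsent(command.getEntityId(), id -> Executors.newSingleThreadExecutor())
            .submit(command::apply);
    }

    public void shutdown() {
        executors.values().forEach(ExecutorService::shutdown);
    }
}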
UPDATE:
I am not sure I have fully understood you, but let me give you some resources to try a few things out.
There are special RabbitMQ extensions; maybe some of them can give you an idea. Take a look at alternate exchanges and exchange-to-exchange bindings.
Also, for basic testing, this can sometimes be useful (I am not sure whether it covers all RabbitMQ features, or all AMQP features for that matter). Keep in mind that the routing key in this visualization is the producer name; you can also find some examples there, and you can import and export your configuration.

How to get the number of consumers connected to a WebSphere MQ queue from Java

I am trying to get the number of consumers of a particular WebSphere MQ queue from Java. I need to know whether someone is going to consume the messages before placing them on the queue.
First, it is worth noting that the design proposed is a very, VERY bad design. The effect is to turn async messaging back into synchronous messaging. This couples message producers to consumers, introduces location and resolution dependencies, breaks clustering, defeats WMQ's load distribution and balancing, embeds network topology into the application, and makes the whole system brittle. Please do not blame WMQ for not working correctly after intentionally defeating all its best features except the actual queue/dequeue operations.
However, to answer your question more directly, use the getOpenInputCount method of the queue object to obtain the number of open input handles. Here's how:
// Assumes qMgr is an already-connected com.ibm.mq.MQQueueManager and that
// com.ibm.mq.MQQueue and com.ibm.mq.constants.CMQC are imported.
// MQOO_INQUIRE is required so that attribute queries are allowed on this handle.
int openOptions = CMQC.MQOO_INQUIRE | CMQC.MQOO_FAIL_IF_QUIESCING;
MQQueue outQ = qMgr.accessQueue(qName,
                                openOptions,
                                null,  // default queue manager
                                null,  // no dynamic queue name
                                null); // no alternate user id
int inCount = outQ.getOpenInputCount();
Note that you can only inquire the input handles on a local queue. If the queue is hosted on a QMgr other than the one where the message sender is connected, this method will not work. Of course it is the normal case that the message sender and receiver would reside on different QMgrs. However since you do not mention much about the design, I'll assume for purposes of this answer that connections from the message producer and consumer attach to the same QMgr. If that's not the case, we need to have a discussion about PCF and even stronger warnings about the design.

RabbitMQ multi-threaded channels and queue binding

I have inherited some legacy RabbitMQ code that is giving me some serious headaches. Can anyone help, ideally pointing to some "official" documentation where I can browse for similar questions?
We create some channels to receive responses from workers which perform a search, like so:
channelIn.queueDeclare("", false, false, true, null);
channelIn.queueBind("", AmqpClient.NAME_EXCHANGE,
        AmqpClient.ROUTING_KEY_ROOT_INCOMING + uniqueId);
My understanding from browsing mailing lists and forums is that
declaring a queue with an empty name allows the server to auto-generate a unique name, and
queues must have a globally unique name.
Is this true?
Also, in the second line above, my understanding based on some liberal interpretation of blogs and mailing lists is that queueBind with an empty queue name automatically binds to the last created queue. It seems nice because then you wouldn't have to pull the auto-generated name out of the clunky DeclareOk object.
Is this true? If so, will this work in a multithreaded environment?
I.e. is it possible some channel will bind itself to another channel's queue, then if that other channel closes, the incorrectly bound channel would get an error trying to use the queue? (note that the queue was created with autodelete=true.) My testing leads me to think yes, but I'm not confident that's where the problem is.
I cannot be certain that this will work in a multithreaded environment. It may be fine a high percentage of the time but it is possible you will get the wrong queue. Why take the risk?
Wouldn't this be better and safer?
String queueName = channelIn.queueDeclare("", false, false, true, null).getQueue();
channelIn.queueBind(queueName, AmqpClient.NAME_EXCHANGE,
        AmqpClient.ROUTING_KEY_ROOT_INCOMING + uniqueId);
Not exactly clunky.
Q: What happens when a queue is declared with no name?
A: The server picks a unique name for the queue. When no name is supplied, the RabbitMQ server will generate a unique-for-that-RabbitMQ-cluster name, create a queue with that name, and then transmit the name back to the client that called queue.declare. RabbitMQ does this in a thread-safe way internally (e.g. many clients calling queue.declare with blank names will never get the same name). Here is the documentation on this behavior.
Q: Do queue names need to be globally unique?
A: No, but they may need to be in your use case. Any number of publishers and subscribers can share a queue. Queue declarations are idempotent, so if 2 clients declare a queue with the same name and settings at the same time, or at different times, the server state will be the same as if just one declared it. Queues with blank names, however, will never collide. Consider declaring a queue with a blank name as if it were two operations: an RPC asking RabbitMQ "give me a globally unique name that you will reserve just for my use", and then idempotently declaring a queue with that name.
Q: Will queue.bind with a blank name bind to the last created queue in a multithreaded environment?
A: Yes, but you should not do that; it achieves nothing, is confusing, and has unspecified or poorly-specified behavior. This technique is largely pointless and prone to bugs in client code (what if lines got added between the declare and the bind? Then it would be very hard to determine which queue was being bound).
Instead, use the return value of queueDeclare; that return value will contain the name of the queue that was declared. If you declared a queue with no name, the return value of queueDeclare will contain the new globally-unique name provided by RabbitMQ. You can provide that explicitly to subsequent calls that work with that queue (like binding it).
For an additional reason not to do this, the documentation regarding blank-queue-name behavior is highly ambiguous:
The client MUST either specify a queue name or have previously declared a queue on the same channel
What does that mean? If more than one queue was declared, which one will be bound? What if the previously-declared queue was then deleted on that same channel? This seems like a very good reason to be as explicit as possible and not rely on this behavior.
Q: Can queues get deleted "underneath" channels connected to them?
A: Yes, in specific circumstances. Minor clarification on your question's terminology: channels don't "bind" themselves to queues: a channel can consume a queue, though. Think of a channel like a network port and a queue like a remote peer: you don't bind a port to a remote peer, but you can talk to more than one peer through the same port. Consumers are the equivalent of connected sockets; not channels. Anyway:
Channels don't matter here, but consumers and connections do (you can have more than one consumer, even on the same queue, per channel; you can have more than one channel per connection). Here are the situations in which a queue can be deleted "underneath" a channel subscribing to it (I may have missed some, but these are all the non-disastrous conditions, i.e. excluding "the server exploded", that I know of):
A queue was declared with exclusive set to true, and the connection on which the queue was declared closes. The channel used to declare the queue can be closed, but so long as the connection stays open the queue will keep existing. Clients connected to the exclusive queue will see it disappear. However, clients may not be able to access the exclusive queue for consumption in the first place if it is "locked" to its declarer--the documentation is not clear on what "used" means with regards to exclusive locking.
A queue which is manually deleted via a queue.delete call. In this case, all consumers connected to the queue may encounter an error the next time they try to use it.
Note that in many client situations, consumers are often "passive" enough that they won't realize that a queue is gone; they'll just listen forever on what is effectively a closed socket. Publishing to a queue, or attempting to redeclare it with passive (an existence poll), is guaranteed to surface the nonexistence; consumption alone is not: sometimes you will see a "this queue was deleted!" error, sometimes it will take minutes or hours to arrive, and sometimes you will never see such an error if all you're doing is consuming.
Q: Will auto_delete queues get deleted "underneath" one consumer when another consumer exits?
A: No. auto_delete queues are deleted sometime after the last consumer leaves the queue. So if you start two consumers on an auto_delete queue, you can exit one without disturbing the other. Here's the documentation on that behavior.
Additionally, queues which expire (via per-queue TTL) follow the same behavior: the queue will only go away sometime after the last consumer leaves.
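To make the auto_delete behavior concrete, here is a minimal sketch using the RabbitMQ Java client (broker host, auto-ack consumers, and the absence of error handling are simplifications for the example):

import com.rabbitmq.client.*;

public class AutoDeleteDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // hypothetical broker location
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();

        // Server-named, non-durable, non-exclusive, auto-delete queue.
        String queueName = channel.queueDeclare("", false, false, true, null).getQueue();

        // Two consumers on the same auto-delete queue.
        String tag1 = channel.basicConsume(queueName, true, new DefaultConsumer(channel));
        String tag2 = channel.basicConsume(queueName, true, new DefaultConsumer(channel));

        // Cancelling one consumer does not disturb the other or delete the queue.
        channel.basicCancel(tag1);
        channel.queueDeclarePassive(queueName); // still exists, no exception

        // Only sometime after the last consumer leaves is the queue removed.
        channel.basicCancel(tag2);

        conn.close();
    }
}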

Deploying several copies of an MDB

We have a JMS topic that is receiving several types of messages (the number of types is determined at deploy time), with a requirement that the messages are processed in order by type. All of the types can be handled by the same MDB.
We have a solution where we deploy several versions of that MDB with selectors for each type. While that works, it means that we need to update deployment descriptors in our application every time we deploy a new version which seems to be an error prone process.
We've considered using deployment plans to handle that, but from what I understand it's only possible to change existing MDBs, not add new ones.
Is there anything we are missing?
We are using weblogic 10.3
Here is one way you could handle it. Since you are effectively single-threading the handling of messages of a given type, you could ditch your MDBs and instead manage a pool of threads, each handling a single type. You could implement a singleton service which exposes a JMX management interface (or a remote EJB interface) allowing you to dynamically add and remove types. When this service receives a call to add a new type, it starts a new thread which just loops doing a normal JMS receive call (with the appropriate selector). If your service maintains a map of type -> thread, you could also implement logic for removing a type (e.g. interrupting the thread or otherwise informing it that it is finished).
If you get to the point where a thread per type no longer scales, then you would need to implement a more complex queuing and pooling solution in your service.
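A rough sketch of that service, assuming the connection factory and topic are looked up under hypothetical JNDI names and that each message carries a hypothetical "msgType" string property used by the selector:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.jms.*;
import javax.naming.InitialContext;

// Sketch of a singleton service that starts one consumer thread per message type.
// JNDI names and the "msgType" property are placeholders for your configuration.
public class TypeConsumerService {

    private final Map<String, Thread> threadsByType = new ConcurrentHashMap<>();
    private final Connection connection;
    private final Topic topic;

    public TypeConsumerService() throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
        topic = (Topic) ctx.lookup("jms/MyTopic");
        connection = cf.createConnection();
        connection.start();
    }

    // Exposed via JMX or a remote EJB interface in the real service.
    public void addType(String type) {
        threadsByType.computeIfAbsent(type, t -> {
            Thread worker = new Thread(() -> consumeLoop(t));
            worker.start();
            return worker;
        });
    }

    public void removeType(String type) {
        Thread worker = threadsByType.remove(type);
        if (worker != null) {
            worker.interrupt();
        }
    }

    private void consumeLoop(String type) {
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(topic, "msgType = '" + type + "'");
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    Message message = consumer.receive(1000); // timed poll so interrupts are noticed
                    if (message != null) {
                        handle(message);
                    }
                }
            } finally {
                session.close();
            }
        } catch (JMSException e) {
            // log and let this worker exit in the sketch
        }
    }

    private void handle(Message message) {
        // application-specific, in-order processing for this type
    }
}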
Just an idea: turn your topic into a queue. Create a "distributor" MDB, of which you configure exactly one (1) instance, that listens on that queue.
That MDB accepts messages, maintains a (dynamic or static) map of type ("xyz") to queue ("queue15"), and re-sends each message to the appropriate "worker queue".
Create N (fixed, N > number of types) worker queues, with one of your MDBs listening on each queue. No special config required, since your MDB can handle any type.
This way, you do not need to know up-front just which message types there are.
Your worker MDBs will find that they receive only messages of one type, and in order.
Does that scale? Well, there is only one "distributor", but that just looks at the type, and re-sends. That should really be fast. The actual work is asynchronous on a second level.
I think it should even be possible to create "worker queues" dynamically, with the MDB to listen on them (probably via container-specific APIs?).
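A minimal sketch of what the distributor's onMessage might look like (not a complete implementation): the "type" property name, the injected ConnectionFactory, and the map of worker queues are hypothetical placeholders.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.annotation.Resource;
import javax.ejb.MessageDriven;
import javax.jms.*;

// The "distributor" MDB only inspects the type and re-sends the message
// to the matching worker queue; the worker MDBs do the real work.
@MessageDriven
public class DistributorMDB implements MessageListener {

    @Resource
    private ConnectionFactory connectionFactory;

    // type -> worker queue; populated at deployment time or refreshed dynamically
    // (population omitted in this sketch).
    private final Map<String, Queue> workerQueuesByType = new ConcurrentHashMap<>();

    @Override
    public void onMessage(Message message) {
        try {
            String type = message.getStringProperty("type");
            Queue workerQueue = workerQueuesByType.get(type);

            Connection conn = connectionFactory.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                session.createProducer(workerQueue).send(message);
            } finally {
                conn.close();
            }
        } catch (JMSException e) {
            throw new RuntimeException(e); // surface the failure to the container
        }
    }
}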

JMS onMessage() and concurrency

I have a stand-alone JMS app that subscribes to several different JMS topics. Each topic has its own session and onMessage() listener. Each onMessage() method updates a common current value table - all the onMessage() methods update the same current value table.
I've read that the onMessage method is actually called on the JMS provider's thread. So, my question is: if all these onMessage() methods are called on a separate thread than my app, doesn't this present a concurrency problem since all these threads update a common CVT? Seems like I need to synchronize access to the CVT somehow?
Short answer to your question: YES, you need to take care of concurrency concerns when your JMS code is updating some common in-memory object.
However, I'm not sure what you mean by "common current value table"? If this is some database table, then database should take care of concurrency issues for you.
EDIT: it turned out that "common current value table" is a common in-memory object. As I mentioned earlier, in this case you need to handle the concurrency concerns yourself (Java concurrency tutorial).
There are mainly two approaches to this problem:
synchronization - suitable if you have low contention or you are stuck with some non-threadsafe object.
high-level concurrency objects that come with the JDK - the best fit if you have high contention and you are using some class from the regular collections; just swap in the corresponding concurrent collection.
In any case, it is highly recommended to do your own testing to choose the best approach for you.
If you were dealing with expensive-to-create, non-threadsafe, stateless procedural code (no storage of data involved), then you could also use object pooling (e.g. Commons Pool), but that is not relevant to your current issue.
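For illustration, a minimal sketch of the concurrent-collection approach, assuming the current value table maps a tag name to its latest value and that the messages are MapMessages with hypothetical "tag" and "value" entries:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.jms.*;

// A shared current value table backed by a ConcurrentHashMap, safe to update
// from several JMS listener threads at once.
public class CurrentValueTable implements MessageListener {

    private final ConcurrentMap<String, Double> values = new ConcurrentHashMap<>();

    @Override
    public void onMessage(Message message) {
        try {
            MapMessage m = (MapMessage) message;
            // put is atomic per key, so concurrent listeners cannot corrupt the map
            values.put(m.getString("tag"), m.getDouble("value"));
        } catch (JMSException e) {
            // log and drop the malformed message in this sketch
        }
    }

    public Double currentValue(String tag) {
        return values.get(tag);
    }
}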
The JMS onMessage() method is always called by the JMS provider's thread (this is known as asynchronous message delivery).
