We have a JMS topic that is receiving several types of messages (the number of types is determined at deploy time), with a requirement that messages of the same type are processed in order. All of the types can be handled by the same MDB.
We have a solution where we deploy several versions of that MDB, each with a selector for one type. While that works, it means that we need to update the deployment descriptors in our application every time we deploy a new version, which seems to be an error-prone process.
We've considered using deployment plans to handle that, but from what I understand it's only possible to change existing MDBs, not add new ones.
Is there anything we are missing?
We are using WebLogic 10.3.
Here is one way you could handle it. Since you are effectively single-threading the handling of messages of a given type, you could ditch your MDBs and instead manage a pool of threads, each handling a single type. You could implement a singleton service which exposes a JMX management interface (or a remote EJB interface) that allows you to dynamically add and remove types. When this service receives a call to add a new type, it starts a new thread which loops on a normal JMS receive call (with the appropriate selector). If your service maintains a map of type -> thread, you can also implement logic for removing a type (e.g. interrupting the thread or otherwise informing it that it is finished).
If you get to the point where one thread per type no longer scales, you would need to implement a more complex queuing and pooling solution in your service.
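A rough sketch of that service, with an in-memory BlockingQueue standing in for the JMS destination (the class and method names here are invented for illustration; in real code each worker thread would loop on a JMS receive() with a selector for its type):

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of a service that owns one consumer thread per message type.
// A BlockingQueue<String> stands in for the real JMS destination.
public class TypeConsumerService {
    private final Map<String, Thread> threads = new ConcurrentHashMap<>();
    private final Map<String, BlockingQueue<String>> queues = new ConcurrentHashMap<>();

    // Called via the JMX/EJB management interface to start handling a new type.
    public void addType(String type) {
        queues.putIfAbsent(type, new LinkedBlockingQueue<>());
        threads.computeIfAbsent(type, t -> {
            Thread worker = new Thread(() -> {
                BlockingQueue<String> q = queues.get(t);
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        String msg = q.take();   // stands in for consumer.receive()
                        handle(t, msg);          // messages of one type stay in order
                    }
                } catch (InterruptedException ignored) { /* shutting down */ }
            }, "consumer-" + t);
            worker.start();
            return worker;
        });
    }

    // Called to stop handling a type: interrupt and forget its thread.
    public void removeType(String type) {
        Thread worker = threads.remove(type);
        if (worker != null) worker.interrupt();
    }

    public void deliver(String type, String msg) { queues.get(type).add(msg); }

    protected void handle(String type, String msg) {
        System.out.println(type + " -> " + msg);
    }
}
```

Since each type has exactly one thread, ordering per type is preserved automatically; the map of type -> thread also gives you the remove-a-type hook described above.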
Just an idea: turn your topic into a queue, and create a single "distributor" MDB (configured with exactly one instance) that listens on that queue.
That MDB accepts messages, maintains a (dynamic or static) map of type ("xyz") to queue ("queue15"), and re-sends each message to the appropriate "worker queue".
Create N worker queues (N fixed, N > number of types), with one of your MDBs listening on each queue. No special configuration is required, since your MDB can handle any type.
This way, you do not need to know up-front just which message types there are.
Your worker MDBs will find that they receive only messages of one type, and in order.
Does that scale? Well, there is only one "distributor", but it just looks at the type and re-sends; that should be really fast. The actual work happens asynchronously at the second level.
I think it should even be possible to create "worker queues" dynamically, with the MDB to listen on them (probably via container-specific APIs?).
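For illustration, the distributor's routing could be sketched like this, with in-memory BlockingQueues standing in for the real JMS worker queues (all names here are made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the "distributor" idea: one inbound queue, N worker queues,
// and a map from message type to worker queue built the first time a
// type is seen. Since each type maps to exactly one queue, per-type
// ordering is preserved.
public class Distributor {
    private final List<BlockingQueue<String>> workerQueues = new ArrayList<>();
    private final Map<String, Integer> typeToQueue = new ConcurrentHashMap<>();
    private int nextQueue = 0;

    public Distributor(int n) {
        for (int i = 0; i < n; i++) workerQueues.add(new LinkedBlockingQueue<>());
    }

    // This plays the role of the single distributor MDB's onMessage():
    // look at the type, re-send to the matching worker queue.
    public synchronized void dispatch(String type, String msg) {
        int idx = typeToQueue.computeIfAbsent(type, t -> nextQueue++ % workerQueues.size());
        workerQueues.get(idx).add(msg);
    }

    public BlockingQueue<String> workerQueue(int idx) { return workerQueues.get(idx); }
}
```

Because the map is filled lazily, you do not need to know the set of types up front, which is exactly the point of this design.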
I have a project that reads data from many different providers; some via SOAP, some via HTTP, etc. Some of these providers also have a restriction on the number of concurrent connections to them. For example, provider A may allow unlimited connections, provider B may only allow 2, and provider C may allow 5.
I'm decent with Micronaut, but I'm unaware of anything built into it that would allow me to limit connections to specific URLs as necessary. So, my first thought is to create a per-provider thread limit (perhaps using RxJava's scheduler system? I believe you can create custom ones using Java's Executor class) and let that do the work of queuing for me. I think I could also go the more manual route of creating a ConcurrentMap and storing the number of active connections in that, but that seems messier and more error-prone.
Any advice would be greatly appreciated! Thanks!
Limiting the number of threads is suitable only if the network connections are made by threads, that is, synchronously. But Micronaut can also make asynchronous connections, and then limiting the number of threads won't work. It is better to limit the number of connections directly: create an intermediate proxy object which has the same interface as the Micronaut client and passes all incoming requests to the real client. It also has a parameter, the limit, which it decrements each time a request is passed through. When the limit reaches 0, the proxy stops passing requests and keeps them in an input queue. As soon as a request finishes, it signals the proxy, which then passes one request from the input queue, if any, or simply increments the limit.
The simplest implementation of the proxy is a thread with a BlockingQueue for the input requests and a Semaphore for the limit. But if there are many providers and creating a thread per provider is too expensive, the proxy can be implemented as an asynchronous object.
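A minimal sketch of the Semaphore variant, assuming the actual provider call (HTTP, SOAP, ...) is handed in as a Callable (the class name and API here are invented for illustration):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;

// Per-provider connection limiter: acquire() blocks callers once
// `limit` requests are in flight; release() lets a queued caller through.
public class LimitedProvider {
    private final Semaphore permits;

    public LimitedProvider(int limit) {
        this.permits = new Semaphore(limit, true);  // fair: waiters form a queue
    }

    public <T> T call(Callable<T> request) throws Exception {
        permits.acquire();                 // decrement the limit, or wait
        try {
            return request.call();         // the real connection happens here
        } finally {
            permits.release();             // finished: admit the next waiter
        }
    }
}
```

You would hold one LimitedProvider per provider, e.g. `new LimitedProvider(2)` for the provider that allows only 2 concurrent connections; the Semaphore's internal wait set plays the role of the input queue described above.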
I am trying to figure out if I can switch from a blocking scenario to a more reactive pattern.
I have incoming update commands arriving in a queue, and I need to handle them in order, but only those regarding the same entity. In essence, I can create as many parallel streams of update events as I wish, as long as no two streams contain events regarding the same entity.
I was thinking that the consumer of the primary queue would possibly be able to leverage amqp's routing mechanisms, and temporary queues, by creating temporary queues for each entity id, and hooking a consumer to them. Once the subscriber is finished and no other events regarding the entity in question are currently in the queue, the queue could be disposed of.
Is this scenario something that is used regularly? Is there a better way to achieve this? In our current system we use a named lock based on the id to prevent concurrent updates.
There are at least two options:
A single queue for each entity, with n consumers on each entity queue.
One queue with messages for all entities, where each message carries the entity it is for. You could then split this up into several queues (one AMQP queue per type of entity) or use a BlockingQueue implementation.
Benefits of splitting the entities into AMQP queues:
You could create an HA setup with RabbitMQ
You could route messages
You could have more than one consumer of an entity queue if that becomes necessary someday (scalability)
Messages can be persistent and therefore recoverable after an application crash
Benefits of using an internal BlockingQueue implementation:
It is faster (no network I/O, obviously)
Everything has to happen in one JVM
Anyway, it depends on what you want, since both ways have their benefits.
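For the in-process BlockingQueue option, one common shape (all names here are hypothetical) is to hash each entity id onto one of N single-threaded executors, so all events for one entity stay ordered while different entities run in parallel:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Route every event for a given entity id to the same single-threaded
// executor: each executor's queue is FIFO, so updates per entity stay
// ordered, while distinct entities spread across N parallel workers.
public class EntityRouter {
    private final ExecutorService[] workers;

    public EntityRouter(int parallelism) {
        workers = new ExecutorService[parallelism];
        for (int i = 0; i < parallelism; i++) {
            workers[i] = Executors.newSingleThreadExecutor();
        }
    }

    public void submit(String entityId, Runnable update) {
        // Same entity id -> same worker index -> same ordered queue.
        int idx = Math.floorMod(entityId.hashCode(), workers.length);
        workers[idx].execute(update);
    }

    public void shutdown() {
        for (ExecutorService w : workers) w.shutdown();
    }
}
```

This gives the same guarantee as a named lock per id, but without blocking: two updates for the same entity never run concurrently because they share a worker.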
UPDATE:
I am not sure I understood you correctly, but let me give you some resources to try some things out.
There are special rabbitmq extensions maybe some of them can give you an idea. Take a look at alternate exchanges and exchange to exchange bindings.
Also, for basic testing: I am not sure if it covers all RabbitMQ features, or all AMQP features at all, but this can sometimes be useful. Keep in mind that the routing key in this visualization is the producer name; you can also find some examples there. Import and export your configuration.
I have inherited some legacy RabbitMQ code that is giving me some serious headaches. Can anyone help, ideally pointing to some "official" documentation where I can browse for similar questions?
We create some channels to receive responses from workers which perform a search, like so:
channelIn.queueDeclare("", false, false, true, null);
channelIn.queueBind("", AmqpClient.NAME_EXCHANGE,
AmqpClient.ROUTING_KEY_ROOT_INCOMING + uniqueId);
My understanding from browsing mailing lists and forums is that
declaring a queue with an empty name allows the server to auto-generate a unique name, and
queues must have a globally unique name.
Is this true?
Also, in the second line above, my understanding, based on some liberal interpretation of blogs and mailing lists, is that queueBind with an empty queue name automatically binds to the last created queue. It seems nice because then you wouldn't have to pull the auto-generated name out of the clunky DeclareOk object.
Is this true? If so, will this work in a multithreaded environment?
I.e. is it possible that some channel will bind itself to another channel's queue, and then, if that other channel closes, the incorrectly bound channel would get an error when trying to use the queue? (Note that the queue was created with autoDelete=true.) My testing leads me to think yes, but I'm not confident that's where the problem is.
I cannot be certain that this will work in a multithreaded environment. It may be fine a high percentage of the time but it is possible you will get the wrong queue. Why take the risk?
Wouldn't this be better and safer?
String queueName = channelIn.queueDeclare("", false, false, true, null).getQueue();
channelIn.queueBind(queueName, AmqpClient.NAME_EXCHANGE,
AmqpClient.ROUTING_KEY_ROOT_INCOMING + uniqueId);
Not exactly clunky.
Q: What happens when a queue is declared with no name?
A: The server picks a unique name for the queue. When no name is supplied, the RabbitMQ server will generate a unique-for-that-RabbitMQ-cluster name, create a queue with that name, and then transmit the name back to the client that called queue.declare. RabbitMQ does this in a thread-safe way internally (e.g. many clients calling queue.declare with blank names will never get the same name). Here is the documentation on this behavior.
Q: Do queue names need to be globally unique?
A: No, but they may need to be in your use case. Any number of publishers and subscribers can share a queue. Queue declarations are idempotent, so if 2 clients declare a queue with the same name and settings at the same time, or at different times, the server state will be the same as if just one declared it. Queues with blank names, however, will never collide. Consider declaring a queue with a blank name as if it were two operations: an RPC asking RabbitMQ "give me a globally unique name that you will reserve just for my use", and then idempotently declaring a queue with that name.
Q: Will queue.bind with a blank name bind to the last created queue in a multithreaded environment?
A: Yes, but you should not do that; it achieves nothing, is confusing, and has unspecified/poorly-specified behavior. This technique is largely pointless and prone to bugs in client code (what if lines got added between the declare and the bind? Then it would be very hard to determine which queue was being bound).
Instead, use the return value of queueDeclare; that return value will contain the name of the queue that was declared. If you declared a queue with no name, the return value of queueDeclare will contain the new globally-unique name provided by RabbitMQ. You can provide that explicitly to subsequent calls that work with that queue (like binding it).
For an additional reason not to do this, the documentation regarding blank-queue-name behavior is highly ambiguous:
The client MUST either specify a queue name or have previously declared a queue on the same channel
What does that mean? If more than one queue was declared, which one will be bound? What if the previously-declared queue was then deleted on that same channel? This seems like a very good reason to be as explicit as possible and not rely on this behavior.
Q: Can queues get deleted "underneath" channels connected to them?
A: Yes, in specific circumstances. Minor clarification on your question's terminology: channels don't "bind" themselves to queues: a channel can consume a queue, though. Think of a channel like a network port and a queue like a remote peer: you don't bind a port to a remote peer, but you can talk to more than one peer through the same port. Consumers are the equivalent of connected sockets; not channels. Anyway:
Channels don't matter here, but consumers and connections do (you can have more than one consumer, even on the same queue, per channel; and more than one channel per connection). Here are the situations in which a queue can be deleted "underneath" a channel subscribing to it (I may have missed some, but these are all the non-disastrous conditions I know of, i.e. excluding things like "the server exploded"):
A queue was declared with exclusive set to true, and the connection on which the queue was declared closes. The channel used to declare the queue can be closed, but so long as the connection stays open the queue will keep existing. Clients connected to the exclusive queue will see it disappear. However, clients may not be able to access the exclusive queue for consumption in the first place if it is "locked" to its declarer--the documentation is not clear on what "used" means with regards to exclusive locking.
A queue which is manually deleted via a queue.delete call. In this case, all consumers connected to the queue may encounter an error the next time they try to use it.
Note that in many client situations, consumers are often "passive" enough that they won't realize that a queue is gone; they'll just listen forever on what is effectively a closed socket. Publishing to a queue, or attempting to redeclare it with passive (an existence poll), is guaranteed to surface the nonexistence; consumption alone is not: sometimes you will see a "this queue was deleted!" error, sometimes it will take minutes or hours to arrive, and sometimes you will never see such an error if all you're doing is consuming.
Q: Will auto_delete queues get deleted "underneath" one consumer when another consumer exits?
A: No. auto_delete queues are deleted sometime after the last consumer leaves the queue. So if you start two consumers on an auto_delete queue, you can exit one without disturbing the other. Here's the documentation on that behavior.
Additionally, queues which expire (via per-queue TTL) follow the same behavior: the queue will only go away sometime after the last consumer leaves.
I am using protobuf to implement a communication protocol between a Java application and a native application written in C++. The messages are event driven: when an event occurs in the C++ application, a protobuf message is constructed and sent.
message MyInterProcessMessage {
int32 id = 1;
message EventA { ... }
message EventB { ... }
...
}
In Java I receive on my socket an object of the class: MyInterProcessMessageProto. From this I can get my data very easily since they are encapsulated into each other: myMessage.getEventA().getName();
I am facing two problems:
How to delegate the processing of the received messages?
Because analyzing the whole message and distinguishing the different event types and the actions they imply resulted in a huge, unmaintainable method with many if-cases.
I would like to find a pattern where I can preserve the messages and not only apply them but also undo them, the way the Command pattern is used to implement this.
My first approach would be to create different wrapper classes for each event, with specified apply() and undo() methods, and delegate the job this way.
However I am not sure if this is the right way or whether there are not any better solutions.
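The wrapper idea might look roughly like this (the model and event classes here are invented just to show the shape):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Command-pattern sketch: each received protobuf event is wrapped in a
// command that knows how to apply itself to the model and how to undo itself.
interface EventCommand {
    void apply();
    void undo();
}

// Hypothetical model: just enough state for the example.
class JvmModel {
    final Set<String> liveThreads = new HashSet<>();
}

// A thread-start event records enough state (the thread name, extracted
// from the protobuf message) to reverse itself.
class ThreadStartedCommand implements EventCommand {
    private final JvmModel model;
    private final String threadName;

    ThreadStartedCommand(JvmModel model, String threadName) {
        this.model = model;
        this.threadName = threadName;
    }
    public void apply() { model.liveThreads.add(threadName); }
    public void undo()  { model.liveThreads.remove(threadName); }
}

// History: apply pushes onto a stack, undo pops and reverses the last command.
class EventHistory {
    private final Deque<EventCommand> done = new ArrayDeque<>();

    void apply(EventCommand cmd) { cmd.apply(); done.push(cmd); }
    void undoLast() { if (!done.isEmpty()) done.pop().undo(); }
}
```

With this shape, stepping back to state #i means popping and undoing commands one at a time, rather than clearing everything and replaying from the start.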
To clarify my application:
The Java application models a running Java Virtual Machine and holds information, for instance Threads, Monitors, Memory, etc.
Every event changes the current state of the modeled JVM. For instance, a new thread was launched, another thread goes into blocking state, memory was freed etc. In the same meaning the events are modeled: ThreadEvent, MemoryEvent, etc.
This means, the messages have to be processed sequentially. In order to iterate back to previous states of the JVM, I would like to implement this undo functionality.
For undo I already tried: clear all states, then re-apply events up to event #i.
Unfortunately, with 20,000+ events this is totally inefficient.
To provide a tailored answer it would be good to know what you're doing with received messages, whether they can be processed concurrently or not, and how an undo impacts the processing of messages received after an undone message.
However, here's a generic suggestion: a typical approach is to delegate received messages to a queue-like handler class, which usually runs in its own thread (to let the message receiver get ready for the next incoming message as soon as possible) and sequentially processes received messages. You could use a stack-like class to keep track of processed messages for the sake of the undo feature. You could also use specific queues and stacks for different event types.
Basically this resembles the thread pool pattern.
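A minimal sketch of such a handler, with a stand-in event type (the names are hypothetical; the socket-receiving side would feed enqueue()):

```java
import java.util.Deque;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// The receiver thread only enqueues; a single processor thread applies
// events strictly in arrival order and keeps a stack for undo.
public class EventProcessor<E> extends Thread {
    private final BlockingQueue<E> inbox = new LinkedBlockingQueue<>();
    private final Deque<E> processed = new ConcurrentLinkedDeque<>();
    private final Consumer<E> applyFn;

    public EventProcessor(Consumer<E> applyFn) {
        this.applyFn = applyFn;
        setDaemon(true);
    }

    // Called by the socket receiver; returns immediately.
    public void enqueue(E event) { inbox.add(event); }

    @Override public void run() {
        try {
            while (true) {
                E event = inbox.take();   // sequential, in arrival order
                applyFn.accept(event);
                processed.push(event);    // remembered for the undo feature
            }
        } catch (InterruptedException stopped) { /* shutting down */ }
    }

    public E lastProcessed() { return processed.peek(); }
}
```

Combined with command-style event wrappers, undoing is then popping from the processed stack instead of replaying all 20,000+ events from scratch.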
I have a general question about the JMS createQueue method. In WebSphere MQ, is this method used as an alternative to a JNDI lookup? I was thinking that I could dynamically create a queue. Is this possible? Thank you.
Assuming you mean QueueSession.createQueue, this is a very misleading method, and doesn't do what you might think:
Creates a queue identity given a Queue name.

This facility is provided for the rare cases where clients need to dynamically manipulate queue identity. It allows the creation of a queue identity with a provider-specific name. Clients that depend on this ability are not portable.

Note that this method is not for creating the physical queue. The physical creation of queues is an administrative task and is not to be initiated by the JMS API. The one exception is the creation of temporary queues, which is accomplished with the createTemporaryQueue method.
The JMS API does not provide a way of dynamically creating queues (unless you mean temporary queues, which are a very different beast used by request-response messaging). If you want to create queues at runtime, that's going to be proprietary to WebSphere.
Yes, as per the spec and as correctly pointed out in the answer above:
Creates a queue identity given a Queue name.
This facility is provided for the rare cases where clients need to dynamically
manipulate queue identity. It allows the creation of a queue identity with a
provider-specific name. Clients that depend on this ability are not portable.
Note that this method is not for creating the physical queue.
The physical creation of queues is an administrative task and is not to be
initiated by the JMS API. The one exception is the creation of temporary queues,
which is accomplished with the createTemporaryQueue method.
So JMS does not provide a direct way to create queues dynamically. The way it will be done will be specific to the JMS provider. JMS provider may provide some kind of console or admin APIs by which you can do so.
As far as the createQueue() method of Session is concerned, it will return a reference to the queue if it has already been created; if not, a JMSException will be thrown.
Another point to note: createTemporaryQueue() creates an actual physical queue. You will have to call delete() to clean up the related resources.