I'm about to refactor a broker application written for WebSphere MQ. In the existing application, while reading a message from the queue, the following options are being set:
MQConstants.MQGMO_WAIT and
waitInterval = 1000 (milliseconds).
In our application, there is no guarantee that we receive a message every second. We may not receive a message even for hours. I'm not sure why the creators of this application chose to go for waitInterval = 1000 instead of setting the waitInterval to MQWI_UNLIMITED.
At the moment, there is a catch block in the code which does not do anything when MQException.MQRC_NO_MSG_AVAILABLE occurs.
The creators of this application were really smart people, so I do not know why they opted for this approach. I'm new to MQSeries, so can anyone please explain the reason behind this?
Well, it's just to check the queue every second for a message. You can be more intelligent by using features like asynchronous message delivery in a thread, or by using some of the newer features of MQ that do not do a lot of polling on the queue.
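One common reason for a short wait instead of MQWI_UNLIMITED is that the get call hands control back to the application every second, so the loop can check a shutdown flag or do other housekeeping, while the empty catch block simply swallows the expected "no message yet" reason code. A minimal sketch of that pattern, assuming the classic com.ibm.mq classes (the pollQueue method and the shutdown flag are illustrative, not taken from the original application):

    import com.ibm.mq.MQException;
    import com.ibm.mq.MQGetMessageOptions;
    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.constants.MQConstants;
    import java.util.concurrent.atomic.AtomicBoolean;

    // Illustrative polling loop: wait up to one second per get, swallow
    // MQRC_NO_MSG_AVAILABLE, and loop again. The short wait returns control to
    // the application regularly (e.g. to check the shutdown flag) instead of
    // blocking indefinitely as MQWI_UNLIMITED would.
    void pollQueue(MQQueue queue, AtomicBoolean shutdown) throws MQException {
        while (!shutdown.get()) {
            MQGetMessageOptions gmo = new MQGetMessageOptions();
            gmo.options = MQConstants.MQGMO_WAIT;
            gmo.waitInterval = 1000; // milliseconds, as in the existing code
            MQMessage message = new MQMessage();
            try {
                queue.get(message, gmo);
                // ... process the message ...
            } catch (MQException e) {
                if (e.reasonCode != MQConstants.MQRC_NO_MSG_AVAILABLE) {
                    throw e; // only the "no message within the wait interval" case is ignored
                }
                // nothing arrived this second; fall through and wait again
            }
        }
    }

The asynchronous alternative the answer alludes to would be something like a JMS MessageListener (or MQ's asynchronous consume support), where the queue manager pushes messages to a callback instead of the application polling.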
I am totally new to the Spring framework. I am trying to create a Java Maven project with connectivity to RabbitMQ, and even before I publish a message I want to check whether the queues are alive. Is it possible to ping a queue to see if it is alive or not? I am totally new to RabbitMQ.
Thanks for the answers
Checking for the availability of a queue is a bit of an anti-pattern with messaging systems.
The message producer should not care if there is something on the other end to receive / process the message. The producer only cares that the RabbitMQ instance is available, with the correct exchange.
If the message must be delivered to a consumer, guaranteed, then the consumer needs to configure the queue with durability in mind and the producer should send the message with the persistence flag to ensure it is written to disk.
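To make that concrete, here is a minimal sketch using the plain RabbitMQ Java client rather than Spring (the host and queue name are placeholders): the queue is declared durable and the message is published with the persistence flag so it is written to disk.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.MessageProperties;
    import java.nio.charset.StandardCharsets;

    public class DurablePublish {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // placeholder host
            Connection conn = factory.newConnection();
            try {
                Channel channel = conn.createChannel();
                // durable = true: the queue definition survives a broker restart
                channel.queueDeclare("work", true, false, false, null);
                // PERSISTENT_TEXT_PLAIN sets deliveryMode = 2 so the broker writes the message to disk
                channel.basicPublish("", "work",
                        MessageProperties.PERSISTENT_TEXT_PLAIN,
                        "hello".getBytes(StandardCharsets.UTF_8));
            } finally {
                conn.close(); // also closes channels created from this connection
            }
        }
    }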
...
Re-reading your question, I'm wondering if you mean "RabbitMQ server" when you say "queue". Are you wanting to check whether the RabbitMQ server is available?
If that is the case, the proper thing to do is use a heartbeat on your RabbitMQ connection. The Spring framework should know how to do this, and should respond with some kind of event or other code that executes when the connection dies. I'm not really familiar with Spring, though, so I don't know the details of doing that with this framework.
You might check this post or this RabbitMQ page on handling this.
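For the "is the RabbitMQ server alive" reading of the question, here is a hedged sketch using the plain Java client (Spring AMQP wraps this client, so equivalent settings should be available there): enable heartbeats on the connection and register a shutdown listener so some code runs when the connection dies.

    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class HeartbeatDemo {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");      // placeholder host
            factory.setRequestedHeartbeat(30); // seconds; 0 disables heartbeats
            Connection conn = factory.newConnection();
            // Fires when the connection is lost or closed -- the event to alert on.
            conn.addShutdownListener(cause ->
                    System.err.println("RabbitMQ connection lost: " + cause.getMessage()));
            // keep conn open for the lifetime of the app; close it on shutdown
        }
    }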
We are developing an application in Java. We will use JMS to listen to messages coming onto MQ. We are expecting around 100K messages from approximately 100 users (each message approximately 1,400 characters long) per day. How many listeners would be good to have for this scenario? What I am trying to find out is how many messages a JMS listener can process per unit of time. An approximate number is enough for now. Is there documentation where we can find this information?
You have to look at two things here: server performance and client performance.
Major JMS providers (HornetQ, ActiveMQ, etc.) can easily handle 5000+ messages per second, so you are covered on that side (if you want more information have a look at the SPECjms2007 results).
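For perspective on the numbers in the question: 100K messages per day works out to roughly 100,000 / 86,400 ≈ 1.2 messages per second on average, so even allowing for sharp intraday peaks, the stated load is orders of magnitude below that figure.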
Client performance depends on the computing power of your clients (obviously) and what you want to do with those messages. Technically, there isn't a limit on how many messages a client can process. My experience is that message marshalling/unmarshalling is a huge factor, so as a rough estimate you can assume that your client can handle about the same message load as your server, assuming equally powerful machines and light processing of message content.
In the end you will want to do some load testing.
I have a production system that uses ActiveMQ (5.3.2) to send messages from server A to server B. A few weeks ago, the system inexplicably started taking 10+ seconds to send a message. After a reboot of the producer, the system worked fine.
After investigation, I'm pretty sure this is due to producer flow control. (I have a fairly standard ActiveMQ setup.) The day before this happened (for other reasons), my consumer software had been acting erratically and had even stopped accepting connections for a while, so I'm guessing that triggered it. (It does puzzle me that the requests were still being throttled a day later.)
Question -- how can I confirm that the requests were being throttled? I took a heap dump of the server -- is there data in memory I can look for?
Edit: I've found the following:
WireFormatNegotiator.tcpNoDelayEnabled=false for one of three WireFormatNegotiator instances in memory. I'm trying to figure out what sets this.
And second (and more important), is there a way I can use JMX to tell if the messages are being throttled? I'd like to set up a Nagios alert to let me know if this happens in the future. What property should I check for with JMX?
You can configure things so that the producer client gets javax.jms.ResourceAllocationException exceptions, which can then be detected, logged, etc. Just set one of the following in the broker's systemUsage configuration...
<systemUsage>
    <systemUsage sendFailIfNoSpaceAfterTimeout="3000">

...OR...

    <systemUsage sendFailIfNoSpace="true">
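On the producing side, the application can then catch that exception and log or alert on it rather than hanging. A minimal sketch, assuming plain JMS (sendOrAlert and the logging are illustrative):

    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageProducer;
    import javax.jms.ResourceAllocationException;

    // Illustrative send wrapper: with sendFailIfNoSpace / sendFailIfNoSpaceAfterTimeout
    // set on the broker, a blocked producer gets an exception instead of hanging.
    void sendOrAlert(MessageProducer producer, Message message) throws JMSException {
        try {
            producer.send(message);
        } catch (ResourceAllocationException e) {
            // broker is out of space / flow control kicked in: log it, raise a Nagios alert, etc.
            System.err.println("Broker refused send (flow control?): " + e.getMessage());
            throw e;
        }
    }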
We are running a high-throughput system that uses TIBCO EMS JMS to pass large numbers of messages between our main server and our client connections. We've done some statistics and have determined that JMS is causing a lot of latency. How can we make TIBCO EMS more performant? Are there any resources that give a good discussion of this topic?
Using non-persistent messages is one option if you don't need persistence.
Note that even if you do need persistence, sometimes it's better to use non-persistent messages and, in case of a crash, perform a different recovery action (like resending all messages).
This is relevant if:
crashes are rare (as the recovery takes time)
you can easily detect a crash
you can handle duplicate messages (you may not know exactly which messages were delivered before the crash)
EMS also provides some mechanisms that are persistent but less bulletproof than classic guaranteed delivery.
These include:
instead of "exactly once" message delivery, you can use "at least once" or "at most once" delivery;
you may use the prefetch mechanism, which causes the client to fetch messages into memory before your application requests them.
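To make the non-persistent suggestion concrete, here is a minimal sketch using the standard JMS API, which TIBCO's client classes implement (the method name and the session/queue objects are placeholders):

    import javax.jms.DeliveryMode;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    // Illustrative: switch a producer to NON_PERSISTENT so the server does not
    // have to write each message to disk before completing the send.
    MessageProducer createFastProducer(Session session, Queue queue) throws JMSException {
        MessageProducer producer = session.createProducer(queue);
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        return producer;
    }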
EMS should not be the bottleneck. I've done testing and we have gotten a huge amount of throughput on our server.
You need to try to determine where the bottleneck is. Is the problem in the producer of the message or the consumer? Are messages piling up on the queue?
What type of scenario are you running: pub/sub or request/reply?
Are you seeing temporary queues pile up? Too many temporary queues can cause performance issues, mostly when they linger because you didn't close something properly.
Are you publishing to a topic with durable subscribers? If so, try bridging the topic to a queue and reading from that instead. Durable subscribers can cause a small hiccup in performance too, since the server needs to track who has copies of all messages.
Ensure that your sending process has one session and makes multiple calls through that session. Don't open a complete session for each operation; re-use where possible. Do the same for the consumer.
Make sure you CLOSE when you are done. EMS doesn't clean things up on its own, so if you make a connection and just kill your app, the connection is still there, sucking up resources.
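A minimal sketch of the reuse-and-close advice above, using the standard JMS API (the connection factory, queue name, and payload list are placeholders):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import java.util.List;

    // Illustrative: open one connection/session up front, push all sends through
    // the same producer, and close in finally so EMS can free server-side resources.
    void sendAll(ConnectionFactory factory, String queueName, List<String> payloads) throws Exception {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue(queueName);
            MessageProducer producer = session.createProducer(queue);
            for (String payload : payloads) {
                producer.send(session.createTextMessage(payload));
            }
        } finally {
            connection.close(); // also closes the session and producer created from it
        }
    }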
Review your tolerance for lost messages in the event of a crash. If you are doing client acknowledge and it doesn't matter if you crash while processing a message, then switch to auto acknowledge. Also, I believe that if you are using TEMS (Tibco EMS for WCF) there's a problem with session acknowledge, where a message is only acknowledged once the whole message has been processed; we switched from client acknowledge to dups-OK acknowledge and it worked better.
My Java EE application sends JMS messages to a queue continuously, but sometimes the JMS consumer application stops receiving them. This causes the JMS queue to grow very large, even to become full, which brings down the server.
My server is JBoss or WebSphere. Do the application servers provide a strategy to remove "timed out" JMS messages?
What is a good strategy to handle a large JMS queue? Thanks!
With any asynchronous messaging you must deal with the "fast producer/slow consumer" problem. There are a number of ways to deal with this.
1. Add consumers. With WebSphere MQ you can trigger a queue based on depth. Some shops use this to add new consumer instances as queue depth grows. Then as queue depth begins to decline, the extra consumers die off. In this way, consumers can be made to automatically scale to accommodate changing loads. Other brokers generally have similar functionality.
2. Make the queue and underlying file system really large. This method attempts to absorb peaks in workload entirely in the queue. This is after all what queuing was designed to do in the first place. Problem is, it doesn't scale well and you must allocate disk that 99% of the time will be almost empty.
3. Expire old messages. If the messages have an expiry set then you can cause them to be cleaned up. Some JMS brokers will do this automatically while on others you may need to browse the queue in order to cause the expired messages to be deleted. Problem with this is that not all messages lose their business value and become eligible for expiry. Most fire-and-forget messages (audit logs, etc.) fall into this category.
4. Throttle back the producer. When the queue fills, nothing can put new messages to it. In WebSphere MQ the producing application then receives a return code indicating that the queue is full. If the application distinguishes between fatal and transient errors, it can stop and retry.
The key to successfully implementing any of these is that your system be allowed to provide "soft" errors that the application will respond to. For example, many shops will raise the MAXDEPTH parameter of a queue the first time they get a QFULL condition. If the queue depth exceeds the size of the underlying file system the result is that instead of a "soft" error that impacts a single queue the file system fills and the entire node is affected. You are MUCH better off tuning the system so that the queue hits MAXDEPTH well before the file system fills but then also instrumenting the app or other processes to react to the full queue in some way.
But no matter what else you do, option #4 above is mandatory. No matter how much disk you allocate, how many consumer instances you deploy, or how quickly you expire messages, there is always a possibility that your consumer(s) won't keep up with message production. When this happens your producer app should throttle back, or raise an alarm and stop, or do anything other than hang or die. Asynchronous messaging is only asynchronous up to the point that you run out of space to queue messages. After that your apps are synchronous and must gracefully handle that situation, even if that means a (graceful) shutdown.
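To illustrate option #4, here is a hedged sketch using the WebSphere MQ base Java classes, where the queue-full return code is easiest to see (the retry limit and back-off values are arbitrary): the producer treats MQRC_Q_FULL as a transient error and backs off instead of dying.

    import com.ibm.mq.MQException;
    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQPutMessageOptions;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.constants.MQConstants;

    // Illustrative: retry with back-off when the queue is full (MQRC_Q_FULL),
    // and give up after a few attempts so monitoring sees the failure.
    void putWithBackoff(MQQueue queue, MQMessage message) throws MQException, InterruptedException {
        int attempts = 0;
        while (true) {
            try {
                queue.put(message, new MQPutMessageOptions());
                return;
            } catch (MQException e) {
                if (e.reasonCode != MQConstants.MQRC_Q_FULL || ++attempts >= 5) {
                    throw e; // fatal error, or retries exhausted
                }
                Thread.sleep(1000L * attempts); // transient: back off and try again
            }
        }
    }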
Sure!
http://download.oracle.com/docs/cd/E17802_01/products/products/jms/javadoc-102a/index.html
MessageProducer#setTimeToLive(long) does exactly what you want: the provider uses it to set the JMSExpiration header on each message it sends (setting JMSExpiration directly on a Message is overwritten at send time).
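A minimal sketch with the standard JMS API (the 60-second time-to-live is arbitrary):

    import javax.jms.JMSException;
    import javax.jms.MessageProducer;

    // Illustrative: every message sent by this producer carries a JMSExpiration of
    // now + 60 seconds; the broker discards it if it is still queued after that.
    void expireQuickly(MessageProducer producer) throws JMSException {
        producer.setTimeToLive(60_000L); // milliseconds
    }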