MQ - delete all messages from a queue - java

I'm new to MQ programming with java.
For my Integration tests, I would like to clean up the destination queue before posting any messages to it. Is there any option in MQ-Java to delete all messages in a queue in one go?

You can use a WMQ PCF program to clear all messages from a queue in one go. The PCF classes provide an interface to administer WMQ programmatically. There is a sample, PCF_ClearQueue.java, that demonstrates clearing messages from a queue.
On Windows, the sample is located in the \tools\pcf\samples directory. More information on the Clear Queue command can be found here.

If you have access to runmqsc, then use the MQSC command CLEAR QLOCAL.
Note: if an application has the queue open, that command will fail, and the PCF Clear Queue command will fail too. In that case you will need to get all messages from the queue one at a time. You can download a Java program called EmptyQ from http://www.capitalware.com/mq_code_java.html that will do the trick.
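When the queue is open and clearing is not an option, the fallback is a destructive get loop: keep getting messages one at a time until the queue reports empty, which is essentially what EmptyQ does. A minimal stdlib sketch of that pattern (a LinkedBlockingQueue stands in for the MQ queue; real code would call MQGET until it gets MQRC_NO_MSG_AVAILABLE):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DrainQueue {
    // Removes messages one at a time until the queue reports empty,
    // mirroring repeated destructive MQGET calls.
    static int drain(BlockingQueue<String> queue) {
        int removed = 0;
        // poll() returns null when no message is available, the stdlib
        // analogue of MQRC_NO_MSG_AVAILABLE.
        while (queue.poll() != null) {
            removed++;
        }
        return removed;
    }

    public static void main(String[] args) {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        q.add("msg-1");
        q.add("msg-2");
        System.out.println(drain(q));    // prints 2
        System.out.println(q.isEmpty()); // prints true
    }
}
```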


Delete Mail using camel without the consumer

Hi all,
in the software I'm developing, I have different Camel routes that work on data which is (in this case) loaded from an IMAP server using the camel-mail component.
Each of those routes does something with the data and then passes it on to the next route. They are dynamically configured at runtime.
Between those routes sits an embedded ActiveMQ server, which each route uses to load data from and save data to (for the next route to pick it up).
Because of this structure I'm hitting a special case with the camel-mail consumer.
When a mail is loaded and sent to the first ActiveMQ queue, it is immediately deleted/marked as read (depending on the settings on the mail consumer), even though the actual processing of the mail has not concluded yet, as the following routes still have to process it.
This is a simplified view:
from("imaps://imap.server.com?...")
    // Format mail in a way the other routes understand
    .to("activemq:queue1"); // After this the mail is deleted on the IMAP server

from("activemq:queue1")
    // Do some processing
    .to("activemq:queue2");

from("activemq:queue2")
    // Do some final processing
    .to("..."); // NOW the mail should be deleted on the IMAP server
This issue is even more of a problem with the error handling I do.
Every route in this "chain" sends failed exchanges to a dead-letter queue on the ActiveMQ server. This way there is one error handling route which picks up the failed exchanges and deals with them, no matter where the chain crashed.
In case there is a problem, I want the email on the IMAP server to be handled differently (maybe even do nothing and try again on the next poll).
As Camel's InOut MEP returns the exchange to the (mail) consumer when the route ends, i.e. when the exchange is handed to the queue, I can't use the consumer to delete the mails after the whole process has ended.
Unfortunately I also don't see a delete option on the mail producer (which makes sense, I guess, because that's not how IMAP works).
I could also use SMTP for this if that's necessary.
Does anybody have an idea how I could achieve this using no other connector than the Camel component to connect to the mail server?
Greets and thanks in advance
Chris
Edit:
Adding the parameter exchangePattern=InOut to the JMS endpoints (.to("activemq:queue1?exchangePattern=InOut")) makes the mail component wait for the whole process to finish.
The problem with that is that we lose the big advantage of ActiveMQ: that all routes are independent of each other. This is important so we don't run into issues with consuming the mail when a later route takes a long time to process, which is very likely to happen.
So ideally we'd find a solution where the mail is deleted without any component waiting for something to finish.
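One direction worth sketching (this is an idea, not a camel-mail feature): consume with the consumer's delete option turned off, carry the mail's identifier as a header through the chain, and have the final route hand that identifier to a separate cleanup step that removes the mail on the server. The stdlib simulation below only illustrates the shape of deferring the delete to the end of the chain; the mailbox set and the queues are hypothetical stand-ins for the IMAP server and ActiveMQ:

```java
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class DeferredMailDelete {
    // Stand-in for the IMAP mailbox; with delete turned off, messages
    // stay here until we explicitly remove them at the end of the chain.
    static final Set<String> mailbox = ConcurrentHashMap.newKeySet();

    static void runChain(String uid) {
        BlockingQueue<String> queue1 = new LinkedBlockingQueue<>();
        BlockingQueue<String> queue2 = new LinkedBlockingQueue<>();

        // Route 1: consume the mail WITHOUT deleting it, forward its id.
        queue1.offer(uid);
        // Route 2: intermediate processing, pass the id along.
        queue2.offer(queue1.poll());
        // Final route: finish processing, THEN delete the original mail.
        mailbox.remove(queue2.poll());
    }

    public static void main(String[] args) {
        mailbox.add("uid-42");
        runChain("uid-42");
        System.out.println(mailbox.isEmpty()); // prints true
    }
}
```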

AWS SQS temporary queues not being deleted on app shutdown

We are attempting to use the AWS SQS temporary queues library for synchronous communication between two of our apps. One app utilises an AmazonSQSRequester while the other uses an AmazonSQSResponder; both are created using the builders from the library and wired in as Spring beans in app config. Through the AWS console we create an SQS queue to act as the 'host queue' required for the request/response pattern. The requesting app sends to this queue, and the responding app uses an SQSMessageConsumer to poll the queue and pass messages to the AmazonSQSResponder. Part of how (I'm fairly sure) the library works is that the Requester spins up a temporary SQS queue (a real, static one), then sends that queue URL as an attribute in a message to the Responder, which then posts its response there.
Communications between the apps work fine, and temporary queues are automatically created. The issue is that when the Requester app shuts down, the temporary queue (now orphaned and useless) persists when it should be cleaned up by the library. Information on how we're expecting this cleanup to work can be found in this AWS post:
The Temporary Queue Client client addresses this issue as well. For each host queue with recent API calls, the client periodically uses the TagQueue API action to attach a fresh tag value that indicates the queue is still being used. The tagging process serves as a heartbeat to keep the queue alive. According to a configurable time period (by default, 5 minutes), a background thread uses the ListQueues API action to obtain the URLs of all queues with the configured prefix. Then, it deletes each queue that has not been tagged recently.
The problem we are having is that when we kill the Requester app, unexplained messages appear in the temporary queue/response queue. We are unsure which app puts them there. Messages in the queue prevent the automagic cleanup from happening. The unexplained messages all share the same content, a short string:
.rO0ABXA=
This looks like it was logged as a bug with the library: https://github.com/awslabs/amazon-sqs-java-temporary-queues-client/issues/11. Will hopefully be fixed soon!
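For what it's worth, the payload itself can be decoded: rO0ABXA= is Base64 for the bytes AC ED 00 05 70, which is the Java serialization stream header (magic 0xACED, version 5) followed by TC_NULL (0x70), i.e. a serialized null object. That suggests one of the Java clients is writing a serialized null to the queue during shutdown. A quick check:

```java
import java.util.Arrays;
import java.util.Base64;

public class DecodePayload {
    // Decodes the mystery SQS payload and compares it against the Java
    // serialization stream header (0xACED 0x0005) plus TC_NULL (0x70).
    static byte[] decode(String b64) {
        return Base64.getDecoder().decode(b64);
    }

    public static void main(String[] args) {
        byte[] bytes = decode("rO0ABXA=");
        byte[] serializedNull = {(byte) 0xAC, (byte) 0xED, 0x00, 0x05, 0x70};
        System.out.println(Arrays.equals(bytes, serializedNull)); // prints true
    }
}
```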

How to know status of Kafka broker in java?

I am working on Apache Storm, which has a topology main class. This topology contains a KafkaSpout which listens to a Kafka topic on a Kafka broker. Now, before I submit this topology, I want to make sure of the status of the Kafka broker that has the topic, but I didn't find any way to do it. How can a Kafka broker's status be known from the Storm topology class? Please help.
If you simply want a quick way to know if it is running or not you can just run the start command again with the same config:
bin/kafka-server-start.sh config/server.properties
If it's running then you should get an exception about the port already being in use.
Not foolproof, so better would be to use Zookeeper, as mentioned in the other answers:
Personally I use intellij which has a Zookeeper plugin which helps you browse the brokers/topics running within it. Probably something similar for Eclipse or other IDEs.
(IntelliJ)
Go to File > Settings > type zookeeper in the search, then install the plugin and click OK (you may need to restart).
Go to File > Settings > type zookeeper in the search again. Click enable, then put in the address where your Zookeeper server is running and apply the changes. (Note: you may need to check that the port is correct too.)
You should now see your Zookeeper server as a tab on the left side of the IDE.
This should show you your brokers, topics, consumers, etc.
Hope that helps!
If you have configured Storm UI, then that should give you brief information about the running cluster, including currently running topologies, available free slots, supervisor info, etc.
Programmatically, you can write a Thrift client to retrieve that information from the Storm cluster. You can choose almost any language to develop your own client.
Check out this article for further reference
Depending on what kind of status you want to have, for most cases you would actually retrieve this from Zookeeper. In Zookeeper you can see registered brokers, topics and other useful things which might be what you're looking for.
Another solution would be to deploy a small regular consumer which would be able to perform those checks for you.
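A lighter-weight liveness probe than re-running the start script is simply attempting a TCP connection to the broker's listener port (9092 by default). It only tells you that something is listening, not that Kafka is healthy, but it is cheap and needs no client libraries. A sketch:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BrokerProbe {
    // Returns true if something accepts TCP connections at host:port
    // within the given timeout. Only proves a listener exists, not that
    // it is a healthy Kafka broker.
    static boolean isReachable(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // 9092 is Kafka's default listener port.
        System.out.println(isReachable("localhost", 9092, 1000));
    }
}
```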

Is there a Java pattern for a process to constantly run to poll or listen for messages off a queue and process them?

Planning on moving a lot of our single-threaded, synchronous batch jobs to a more distributed architecture with workers. The thought is to have a master process read records off the database and send them off to a queue, then have multiple workers read off the queue to process the records in parallel.
Is there any well-known Java pattern for a simple CLI/batch job that constantly runs to poll/listen for messages on queues? I would like to use that for all the workers. Or is there a better way to do this? Should the listener/worker be deployed in an app container, or can it be just a standalone program?
Thanks
Edit: also to note, I'm not looking to use JavaEE/JMS, but more hosted solutions like SQS, a hosted RabbitMQ, or IronMQ.
If you're using a JavaEE application server (and if not, you should consider it), you don't have to program that logic by hand, since the application server does it for you.
You then implement and deploy a message-driven bean that listens to a queue and processes the messages received. The application server will manage a connection pool to listen for queue messages and create a thread with an instance of your message-driven bean, which will receive the message and be able to process it.
The messages will be processed concurrently, since the application server keeps a connection pool and a thread pool available for listening to the queue.
All JavaEE-featured application servers like IBM WebSphere or JBoss have configurations available in their admin consoles to create message queue listeners depending on the message queue implementation, and to bind these listeners to your message-driven bean.
I don't know a lot about this, and maybe I don't really answer your question, but I tried something a few months ago that might interest you for dealing with message queues.
You can have a look at this: http://www.rabbitmq.com/getstarted.html
It seems the Work Queues tutorial could fit your requirements.
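Since JavaEE is off the table, the standalone pattern most hosted-queue SDKs boil down to is a long-poll loop feeding a worker thread pool. A minimal stdlib sketch (a BlockingQueue stands in for the remote SQS/RabbitMQ/IronMQ queue; in real code the poll call would be the SDK's long-poll receive, e.g. an SQS ReceiveMessage with a wait time, and the loop would run until a shutdown signal):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PollingWorker {
    static final AtomicInteger processed = new AtomicInteger();

    // One poll loop feeding a pool of workers. Here the loop exits once
    // the queue stays empty for one poll interval; a long-running worker
    // would instead loop until told to shut down.
    static void run(BlockingQueue<String> queue, int workers) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        String msg;
        while ((msg = queue.poll(100, TimeUnit.MILLISECONDS)) != null) {
            final String record = msg;
            pool.submit(() -> {
                // ... process the record, then delete/ack it on the queue ...
                processed.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < 5; i++) queue.add("record-" + i);
        run(queue, 3);
        System.out.println(processed.get()); // prints 5
    }
}
```

Whether this lives in a container or a standalone JVM is mostly an ops question; the loop itself is the same either way.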

How can I check whether a temporary queue is created or not in ActiveMQ?

I am new to ActiveMQ. I created a temporary queue, and I can get its name from my application using the code below.
Destination temdest = session.createTemporaryQueue();
System.out.println("<<Temporary Queue Name while connection is active: >>" + temdest.toString());
When I create a static queue, I can see its name under Queues in ActiveMQ, but when I create a temporary queue I can't see it. So how can I check whether the temporary queue was created? Is there any way to see temporary queues in ActiveMQ?
As far as I know, it is not (yet) possible via the web console, but it is via JMX, for example with JConsole.
You have to ensure that the broker exposes JMX information on your specified port (default 1099), so check the server configuration first.
But bear in mind, normally you don't need to check whether a temporary queue was created. Checking that via JMX is like taking a sledgehammer to crack a nut.
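If you do want to script the JMX check rather than browse in JConsole, temporary queues show up as destination MBeans under the broker. The object-name pattern below assumes the ActiveMQ 5.8+ naming scheme and a brokerName of "localhost"; both may differ in your setup, and for a remote broker you would connect via JMXConnectorFactory to service:jmx:rmi:///jndi/rmi://host:1099/jmxrmi instead of the platform MBean server:

```java
import java.io.IOException;
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class TempQueueCheck {
    // Queries an MBean server for ActiveMQ temporary queues. The pattern
    // assumes ActiveMQ 5.8+ object naming; adjust brokerName to match yours.
    static Set<ObjectName> findTempQueues(MBeanServerConnection mbs, String brokerName)
            throws MalformedObjectNameException, IOException {
        ObjectName pattern = new ObjectName(
            "org.apache.activemq:type=Broker,brokerName=" + brokerName
            + ",destinationType=TempQueue,destinationName=*");
        return mbs.queryNames(pattern, null);
    }

    public static void main(String[] args) throws Exception {
        // Against this JVM's own MBean server (no broker registered here)
        // the query simply returns an empty set.
        Set<ObjectName> temps = findTempQueues(
            ManagementFactory.getPlatformMBeanServer(), "localhost");
        System.out.println(temps.size()); // prints 0 without a broker
    }
}
```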
If you can identify the connection in the AMQ console on the 'Connections' tab, you can click on it, and it shows you the list of destinations being listened to, including temporary queues.
