RabbitMQ: deleting a queue does not release the connection - java

In my Java program, messages are sent over RabbitMQ queues as shown below:
if (!con.isConnected()) {
    log.error("Not connected !!!");
    return false;
}
con.getChannel().basicPublish("", queueName, MessageProperties.PERSISTENT_BASIC, bytes);
I deleted the queues via the RabbitMQ management GUI plugin, then tried to send a message over a deleted queue.
Result: the queues were gone from the RabbitMQ GUI, but when I try to send a message over those deleted queues, the connection is still alive (con.isConnected() == true). I need a way to detect that a queue has been deleted, so that I don't send any messages to it.
Note: I am not restarting RabbitMQ after deleting the queue.
Channel creation:
channel = connection.createChannel();
channel.queueDeclare(prop.getQueueName(), true, false, false, null);
Example code for channel, queue, and exchange creation:
ConnectionFactory cf = new ConnectionFactory();
cf.setUsername("guest");
cf.setPassword("guest");
cf.setHost("localhost");
cf.setPort(5672);
cf.setAutomaticRecoveryEnabled(true);
cf.setConnectionTimeout(10000);
cf.setNetworkRecoveryInterval(10000);
cf.setTopologyRecoveryEnabled(true);
cf.setRequestedHeartbeat(5);
Connection connection = cf.newConnection();
channel = connection.createChannel();
channel.queueDeclare("test", true, false, false, null);
channel.exchangeDeclare("testExchange", "direct", true);
channel.queueBind("test", "testExchange", "testRoutingKey");
connection.addShutdownListener(new ShutdownListener() {
    @Override
    public void shutdownCompleted(ShutdownSignalException cause) {
        System.out.println("test" + cause);
    }
});
Sending a message:
channel.basicPublish("testExchange", "testRoutingKey", null,messageBodyBytes);

From the RabbitMQ Google group:
Messages in AMQP 0-9-1 are not published to queues; they are published to exchanges, from where they are routed to a queue (or another exchange) or not. [1]
basic.publish is a completely asynchronous protocol method by design: there is no response for it unless you ask for it [2]. Messages that are unroutable can be returned to the publisher if you define a return listener and publish with the mandatory flag set to true.
Note that publisher confirms and the mandatory flag/returns are orthogonal and one does not imply the other.
Defining a return listener and setting the mandatory flag to true solved my problem. If a message is not routed, I can catch it with the ReturnListener and add it to my persisted queue to send again when the system becomes active.
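For reference, a minimal sketch of that fix against the setup above, using the RabbitMQ Java client (the exchange and routing key come from the question; the handler body is only a placeholder for the re-queueing logic):

import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;

public class MandatoryPublishSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ConnectionFactory();
        cf.setHost("localhost");
        try (Connection connection = cf.newConnection();
             Channel channel = connection.createChannel()) {
            // Fires for every message the broker cannot route,
            // e.g. because the bound queue was deleted.
            channel.addReturnListener((replyCode, replyText, exchange, routingKey, props, body) -> {
                // Placeholder: persist the returned message for a later retry.
                System.out.println("Returned " + replyCode + " " + replyText
                        + ": " + new String(body, StandardCharsets.UTF_8));
            });
            channel.basicPublish("testExchange", "testRoutingKey",
                    true, // mandatory: return unroutable messages instead of dropping them
                    MessageProperties.PERSISTENT_BASIC,
                    "hello".getBytes(StandardCharsets.UTF_8));
            Thread.sleep(1000); // give the broker a moment to deliver any return
        }
    }
}

Deleting the queue also removes its binding, so subsequent publishes to testExchange/testRoutingKey should come back through this listener rather than silently vanish.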

Related

Manual Acknowledgement of ActiveMQ Messages with Alpakka

I am working on implementing Akka Alpakka for consuming from and producing to ActiveMQ queues, in Java. I can consume from the queue successfully, but I haven't yet been able to implement application-level message acknowledgement.
My goal is to consume messages from a queue and send them to another actor for processing. When that actor has completed processing, I want it to be able to control the acknowledgement of the message in ActiveMQ. Presumably this would be done by sending a message to another actor that can do the acknowledgement, by calling an acknowledge function on the message itself, or in some other way.
In my test, 2 messages are put into the AlpakkaTest queue, and then this code attempts to consume and acknowledge them. However, I don't see a way to set the ActiveMQ session to CLIENT_ACKNOWLEDGE, and I don't see any difference in behavior with or without the call to m.acknowledge();. Because of this, I think messages are still being auto-acknowledged.
Does anyone know the accepted way to configure ActiveMQ sessions for CLIENT_ACKNOWLEDGE and manually acknowledge ActiveMQ messages in Java Akka systems using Alpakka?
The relevant test function is:
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://0.0.0.0:2999"); // An embedded broker running in the test.

Source<Message, NotUsed> jmsSource = JmsSource.create(
    JmsSourceSettings.create(connectionFactory)
        .withQueue("AlpakkaTest")
        .withBufferSize(2)
);

Materializer materializer = ActorMaterializer.create(system); // `system` is an ActorSystem passed to the function.

try {
    List<Message> messages = jmsSource
        .take(2)
        .runWith(Sink.seq(), materializer)
        .toCompletableFuture().get(4, TimeUnit.SECONDS);
    for (Message m : messages) {
        System.out.println("Found Message ID: " + m.getJMSMessageID());
        try {
            m.acknowledge();
        } catch (JMSException jmsException) {
            System.out.println("Acknowledgement Failed for Message ID: " + m.getJMSMessageID()
                    + " (" + jmsException.getLocalizedMessage() + ")");
        }
    }
} catch (InterruptedException | ExecutionException | TimeoutException | JMSException e) {
    e.printStackTrace();
}
This code prints:
Found Message ID: ID:jmstest-43178-1503343061195-1:26:1:1:1
Found Message ID: ID:jmstest-43178-1503343061195-1:27:1:1:1
Update: The acknowledgement mode is configurable in the JMS connector since Alpakka 0.15. From the linked documentation:
Source<Message, NotUsed> jmsSource = JmsSource.create(JmsSourceSettings
    .create(connectionFactory)
    .withQueue("test")
    .withAcknowledgeMode(AcknowledgeMode.ClientAcknowledge())
);

CompletionStage<List<String>> result = jmsSource
    .take(msgsIn.size())
    .map(message -> {
        String text = ((ActiveMQTextMessage) message).getText();
        message.acknowledge();
        return text;
    })
    .runWith(Sink.seq(), materializer);
As of version 0.11, Alpakka's JMS connector does not support application-level message acknowledgment. Alpakka internally creates a Session with the CLIENT_ACKNOWLEDGE mode and acknowledges each message in its internal MessageListener. The API does not expose these settings for overriding.
There is an open ticket that discusses enabling downstream acknowledgement of queue-based sources, but that ticket has been inactive for a while.
Currently you cannot prevent Alpakka from acknowledging the messages at the JMS level. However, that doesn't preclude you from adding a stage to your stream that sends each message to an actor for processing and uses the actor's replies as backpressure signals. The Akka Streams documentation describes how to do this with either a combination of mapAsync and ask or with Sink.actorRefWithAck. For example, to use the former:
Timeout askTimeout = Timeout.apply(4, TimeUnit.SECONDS);

jmsSource
    .mapAsync(2, msg -> ask(processorActor, msg, askTimeout))
    .runWith(Sink.seq(), materializer);
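And a rough sketch of the latter, assuming the processorActor replies with the designated ack message ("ack" below) after handling each element; the init/ack/complete message objects here are illustrative placeholders, not part of any API:

jmsSource.runWith(
    Sink.actorRefWithAck(
        processorActor,
        "init",     // sent to the actor when the stream starts
        "ack",      // reply the actor must send to request the next element
        "complete", // sent to the actor when the stream finishes
        failure -> new akka.actor.Status.Failure(failure)), // sent on stream failure
    materializer);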
(Side note: In the related Streamz project, there is a recently opened ticket to allow application-level acknowledgement. Streamz is the replacement for the old akka-camel module and, like Alpakka, is built on Akka Streams. Streamz also has a Java API and is listed in the Alpakka documentation as an external connector.)
Looking at the source code for the Alpakka JmsSourceStage, it already acknowledges each incoming message for you (and its session is a CLIENT_ACKNOWLEDGE session). From what I can tell from the source, there is no mode that lets you do the acknowledgement of messages yourself.
You can view the source code for Alpakka here.

IBM MQ failure to send/receive all JMS Messages

I am using IBM MQ to produce messages and receiving them through a consumer on my client. To create the connection I'm using JmsConnectionFactory, along with the provided properties to set up the connection with the server. From what I understand, as the consumer, the only way to recognize the messages produced by the server is through the onMessage call. I'm currently testing this by creating a local producer and a local consumer and checking that every message sent by the producer is received by the consumer.
I'm running into the following problems:
I'm not receiving all of the messages produced.
The number received depends on message size: more messages get through when they are smaller.
Here is the code for creating the producer:
JmsConnectionFactory cf = ff.createConnectionFactory();
cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, qm.getHost());
int port = ###;
cf.setIntProperty(WMQConstants.WMQ_PORT, port);
cf.setStringProperty(WMQConstants.WMQ_CHANNEL, qm.getChannel());
cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, qm.getQueueManagerName());
Connection connection = cf.createConnection(qm.getUser().getUsername(), qm.getUser().getPassword());
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination destination = session.createQueue(qm.getDestinationName());
LOG.debug("Destination Created at " +qm.getDestinationName());
msgSender = session.createProducer(destination);
msgSender.setDeliveryMode(DeliveryMode.PERSISTENT);
And this is how the producer is sending messages:
/**
 * msgSender is the MessageProducer object
 **/
private void produceMessages(int numOfMessages) throws JMSException, InterruptedException {
    for (int i = 0; i < numOfMessages; i++) {
        String text = "Message #" + i;
        TextMessage message = session.createTextMessage(text);
        msgSender.send(message);
    }
}
On the consumer side, I am simply printing received messages and verifying visually:
@Override
public void onMessage(Message m) {
    System.out.println(((TextMessage) m).getText());
}
I am not fully familiar with how IBM MQ works. Could the missing messages be caused by MQ simply ignoring messages that are produced before the previous message has been fully sent?
I would say the issue resides on your consumer side rather than your simulated producer. Your message producer should be sending messages to MQ just fine, but multiple consumers are probably competing to retrieve these messages from the connection you have set up (given the same queue manager properties). So unless no one else is trying to consume from your IBM MQ, you should expect to miss some messages.
You could use the other send method, send(Message m, CompletionListener l), and send each new message only after the previous send completes.
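A minimal sketch of that approach using javax.jms.CompletionListener, assuming a JMS 2.0 provider (the failure handling is just a placeholder):

msgSender.send(message, new CompletionListener() {
    @Override
    public void onCompletion(Message message) {
        // The provider has accepted the message; safe to send the next one.
    }

    @Override
    public void onException(Message message, Exception exception) {
        // Placeholder: log and retry, or persist the message for later.
        exception.printStackTrace();
    }
});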
Also, if you use "Best Effort" delivery, it will still lose messages. You can try "Express" instead.

How to search for a particular message in JMS queue

I am sending some messages to a JMS queue. What are the possible ways to search for a particular message in a queue to consume?
I tried the following approach: I set the JMSCorrelationID while sending a message to the queue:
public void createDQueue(String queuename, String json, Integer userid) {
    try {
        QueueConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        QueueConnection connection = connectionFactory.createQueueConnection();
        connection.start();
        QueueSession session = connection.createQueueSession(false,
                QueueSession.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue(queuename);
        ObjectMessage objectMessage = session.createObjectMessage();
        objectMessage.setJMSCorrelationID(String.valueOf(userid));
        objectMessage.setObject(json);
        session.createSender(queue).send(objectMessage);
        session.close();
        connection.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
In the consumer code I want to get that particular message based on the JMSCorrelationID, but I am not able to retrieve it. Can you suggest a solution?
public void getSpecificMessage(String queuename, Integer userid) {
    try {
        QueueConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        ((ActiveMQConnectionFactory) connectionFactory).setUseAsyncSend(true);
        QueueConnection connection = connectionFactory.createQueueConnection();
        connection.start();
        QueueSession session = connection.createQueueSession(false,
                QueueSession.AUTO_ACKNOWLEDGE);
        String id = String.valueOf(userid);
        Queue queue = session.createQueue(queuename);
        QueueReceiver receiver = session.createReceiver(queue, "JMSCorrelationID=" + id);
        Message message = receiver.receive();
    } catch (JMSException e) {
        e.printStackTrace();
    }
}
Your first problem is that you are trying to think about the message broker as a database. Always remember this sage piece of advice: "A message broker is not a database."
There are limits on how deep a consumer or queue browser can go into a destination before the broker stops paging in more messages from disk, so check your queue depth against your maxPageSize setting and adjust as needed; but remember that paged-in messages remain in memory until consumed.
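For instance, with an embedded ActiveMQ broker, maxPageSize can be raised through a destination policy. A rough sketch using org.apache.activemq.broker.region.policy.PolicyEntry (the value 10000 is arbitrary, and the memory trade-off above still applies):

BrokerService broker = new BrokerService();
PolicyEntry policy = new PolicyEntry();
policy.setQueue(">");         // apply to all queues
policy.setMaxPageSize(10000); // default is 200
PolicyMap policyMap = new PolicyMap();
policyMap.setDefaultEntry(policy);
broker.setDestinationPolicy(policyMap);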
Just wrap the id value in single quotes:
"JMSCorrelationID='" + id + "'"
Using this functionality is not recommended; there are a lot more complications, as explained by Tim. But if you absolutely want to work with it, make that change.
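Applied to the consumer code from the question, the receiver call becomes:

// JMS selectors follow SQL-92 syntax, so string literals must be single-quoted.
QueueReceiver receiver = session.createReceiver(queue, "JMSCorrelationID='" + id + "'");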
You can search for messages using the message ID. This is fast, as messaging providers index messages on the message ID. There are other ways to search, based on the correlation ID, metadata, etc.
But please remember that the primary objective of using a messaging provider is to connect applications in a time-independent manner. The receiving application must get messages as soon as possible. If messages are piling up in a queue, it indicates a problem that must be addressed.

rabbitMQ reading one message flushes the other values out of the queue as well

Here is the problem I have:
I want to write 2 objects into RabbitMQ and only read 1 (this is a test to ensure that my data stays in RabbitMQ if the reader suddenly stops, e.g. via Ctrl+C).
I don't have a problem with writing to MQ, but when I read only one object and close the connection, the other object disappears too. I don't know why that happens.
I followed the instructions given here.
Creating a channel:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("127.0.0.1");
factory.setPort(5672);
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
Writing into RabbitMQ (no problem with writing to MQ):
channel.queueDeclare("myque", false, false, false, null);
channel.basicPublish("", "myque", null, "one".getBytes("UTF-8"));
channel.basicPublish("", "myque", null, "two".getBytes("UTF-8"));
The way I read is:
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume("queuethroughProxy", true, consumer);
//while (true) {
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
String message = new String(delivery.getBody());
System.out.println("message is : " + message);
//}
connection.close();
I'm not quite sure what I'm doing wrong here.
You are making two mistakes here.
Not setting channel.basicQos(1), which leads to all messages in the queue moving from ready to unacknowledged when you run your consumer program.
Enabling auto-ack while consuming, which leads to all of those unacknowledged messages being acknowledged when you stop the consumer program.
These are the reasons you are losing all the messages in the queue even though you consumed only one.
You can refer to my blog post here for more detail.
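A minimal sketch of the corrected consumer, keeping the question's QueueingConsumer style (queue name taken from the writing snippet above):

channel.basicQos(1);     // prefetch at most one unacknowledged message
boolean autoAck = false; // turn off auto-ack
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume("myque", autoAck, consumer);

QueueingConsumer.Delivery delivery = consumer.nextDelivery();
System.out.println("message is : " + new String(delivery.getBody()));
// Acknowledge only the message actually processed; the other message
// stays in the queue when the connection closes.
channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
connection.close();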
I guess you are confused by the line
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
One might think that calling consumer.nextDelivery() gets the next message from the broker.
But if you look at the documentation, it says: "Since the server will push messages asynchronously, we provide a callback in the form of an object that will buffer the messages until we're ready to use them. That is what QueueingConsumer does."
Since auto-ack is enabled, the server pushes both messages to the consumer as soon as it is created. consumer.nextDelivery() just iterates through the messages that have already been received on the client side.

Can I use RabbitMQ to move data to an Amazon Kinesis stream?

I have a server containing date-wise folders, and each folder contains many files (about 200 KB each) holding all the logs for a particular day. I am new to RabbitMQ; while going through the RabbitMQ documentation, I found the producer code below.
Refer to: https://github.com/rabbitmq/rabbitmq-tutorials/blob/master/java/Send.java
public class Send {
    private final static String QUEUE_NAME = "hello";

    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        String message = "Hello World!";
        channel.basicPublish("", QUEUE_NAME, null, message.getBytes());
        System.out.println(" [x] Sent '" + message + "'");
        channel.close();
        connection.close();
    }
}
In the above code I publish the sample string "Hello World!". As stated in the problem description, I have to read the log information from the server's different date-stamped directories. So do I need to write an infinite loop (as the logs are continuously updated) that recursively reads all directories and files, and then, for each line of a file, composes a message and publishes it to the receiver?
In this case our channel will never close and the connection will always be up. Is that an acceptable idle condition for RabbitMQ?
Is it possible for RabbitMQ to mark which files have been read so they are not read again, or do I need to manage that programmatically, e.g. by renaming the files and folders? I am also concerned that our program might terminate due to a power failure or something while I am in the middle of a file; how can I then guarantee that records will not be duplicated?
Any other, better way to achieve this would be a great help for me. Thanks in advance.
I would enqueue a list of files to process into RabbitMQ and then have a separate set of processes picking up messages from that queue to do what you want with the data. Make sure to subscribe to the queues with manual acknowledgements, so RabbitMQ only deletes a message from the queue once you ack it. With this setting, you should prevent sending the same information twice.
That works in most situations. I say most because if RabbitMQ sends a message to your consumer, your consumer then takes an action (like replicating the information, or placing an entry in a database), and the connection to RabbitMQ dies before you send the ack, then the broker has no way of telling that you already processed the message, so it will deliver it again later.
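A rough sketch of such a manual-ack consumer with the RabbitMQ Java client, assuming a channel like the one created above (the queue name "files-to-process" and the processFile helper are made up for illustration):

channel.basicConsume("files-to-process", false /* manual ack */,
        new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body)
                    throws IOException {
                String fileName = new String(body, "UTF-8");
                processFile(fileName); // hypothetical processing step
                // Ack only after the work is done; if this consumer dies first,
                // the unacked message is redelivered.
                getChannel().basicAck(envelope.getDeliveryTag(), false);
            }
        });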
