basicAck does not remove message from broker - RabbitMQ - java

I'm implementing the following flow in my application:
1. Get one message from the broker (manual acknowledgement).
2. Do some processing.
3. Start a transaction on the database and the broker.
4. Insert some records into the database and publish some messages to the broker (a different queue).
5. Commit the database and the broker.
6. Ack the message received from the broker in step 1.
All broker operations go through a single channel. Here is the preparation code:
Connection brokerConnection = factory.newConnection();
Channel channel = brokerConnection.createChannel();
channel.basicQos(1);
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume("receive-queue", false, consumer);
The following is my code; I have removed the try/catch blocks for clarity. All exceptions are logged to a file.
Step 1:
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
Request request = (Request) SerializationUtils.deserialize(delivery.getBody());
Step 2, 3, 4, 5:
dbConnection.setAutoCommit(false);
channel.txSelect();
stmt = dbConnection.prepareStatement(query);
/* set parameters */
stmt.executeUpdate();
channel.basicPublish(/* exchange name */, "KEY", MessageProperties.PERSISTENT_BASIC, /* result */ result);
dbConnection.commit();
channel.txCommit();
dbConnection.setAutoCommit(true);
Step 6:
channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
After one iteration I can see the records in the database and on the broker, which means everything works through step 5. The problem is that the message on the receive queue is not removed after step 6, and the management plug-in shows one unacked message. I also don't see any exceptions in the log file. Can anyone help?
[UPDATE1]
I now create one channel for publishing and another channel for receiving, and it works. So how can I use a single channel for both receiving and publishing (with transactions)? I have used a single channel for both before, but that was without transactions.
[UPDATE2]
I moved step 6 inside the transaction and now it works.
dbConnection.setAutoCommit(false);
channel.txSelect();
stmt = dbConnection.prepareStatement(query);
/* set parameters */
stmt.executeUpdate();
channel.basicPublish(/* exchange name */, "KEY", MessageProperties.PERSISTENT_BASIC, /* result */ result);
channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
dbConnection.commit();
channel.txCommit();
dbConnection.setAutoCommit(true);
I'm a bit confused. I just want the publish section to be inside the transaction.

You've put the channel in transactional mode, and acks are transactional things. So you either need to consume and ack on a separate, non-transactional channel, or else just accept that your ack needs to come before the txCommit.
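For illustration, here is a minimal sketch of the two-channel variant, reusing the factory, queue name, and placeholders from the question: the consuming channel stays out of transactional mode so its ack takes effect immediately, while only the publishing channel is transactional.
Connection brokerConnection = factory.newConnection();
// Non-transactional channel: consume and ack here.
Channel consumeChannel = brokerConnection.createChannel();
consumeChannel.basicQos(1);
QueueingConsumer consumer = new QueueingConsumer(consumeChannel);
consumeChannel.basicConsume("receive-queue", false, consumer);
// Transactional channel: publish here.
Channel publishChannel = brokerConnection.createChannel();
publishChannel.txSelect();
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
// ... database work as in steps 2-5 ...
publishChannel.basicPublish(/* exchange name */, "KEY",
        MessageProperties.PERSISTENT_BASIC, /* result */ result);
publishChannel.txCommit();
// The ack happens on the non-transactional channel, so it takes effect immediately.
consumeChannel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);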


RabbitMQ delete queue cannot release connection

In my Java program, some kinds of messages are sent over RabbitMQ queues as below:
if(!con.isConnected()){
log.error("Not connected !!!");
return false;
}
con.getChannel().basicPublish("", queueName, MessageProperties.PERSISTENT_BASIC, bytes);
I deleted the queues via the RabbitMQ management GUI plugin and then tried to send a message over a deleted queue.
Result: the queues were deleted from the RabbitMQ GUI, but when I try to send a message over a deleted queue, the connection is still alive (con.isConnected() == true). I need a way to detect that the queue has been deleted so that I don't send any messages to it.
Note: after deleting the queue, I am not restarting RabbitMQ.
Channel creation:
channel = connection.createChannel();
channel.queueDeclare(prop.getQueueName(), true, false, false, null);
Example code for channel, queue, and exchange creation:
ConnectionFactory cf = new ConnectionFactory();
cf.setUsername("guest");
cf.setPassword("guest");
cf.setHost("localhost");
cf.setPort(5672);
cf.setAutomaticRecoveryEnabled(true);
cf.setConnectionTimeout(10000);
cf.setNetworkRecoveryInterval(10000);
cf.setTopologyRecoveryEnabled(true);
cf.setRequestedHeartbeat(5);
Connection connection = cf.newConnection();
channel = connection.createChannel();
channel.queueDeclare("test", true, false, false, null);
channel.exchangeDeclare("testExchange", "direct",true);
channel.queueBind("test", "testExchange", "testRoutingKey");
connection.addShutdownListener(new ShutdownListener() {
@Override
public void shutdownCompleted(ShutdownSignalException cause) {
System.out.println("test"+cause);
}
});
Sending a message:
channel.basicPublish("testExchange", "testRoutingKey", null,messageBodyBytes);
From the RabbitMQ Google group:
Messages in AMQP 0-9-1 are not published to queues; they are published to exchanges, from where they
are routed to a queue (or another exchange) or not. [1]
basic.publish is a completely asynchronous protocol method by design: there is no response for it
unless you ask for it [2]. Messages that are unroutable can be returned to the publisher
if you define a return listener and publish with the mandatory flag set to true.
Note that publisher confirms and the mandatory flag/returns are orthogonal and one does not imply
the other.
Defining a return listener and setting the mandatory flag to true solved my problem. If a message is not routed, I can catch it with the ReturnListener and add it to my persisted queue to send again when the system becomes active.
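As a sketch of that fix (reusing the channel, exchange, and routing key from the example above; ReturnListener and AMQP come from com.rabbitmq.client):
channel.addReturnListener(new ReturnListener() {
    @Override
    public void handleReturn(int replyCode, String replyText,
                             String exchange, String routingKey,
                             AMQP.BasicProperties properties, byte[] body) {
        // The broker could not route this message (e.g. the queue was deleted).
        System.out.println("Returned: " + replyText + " for key " + routingKey);
        // ... persist the body here to send again later ...
    }
});
// mandatory = true asks the broker to return unroutable messages.
channel.basicPublish("testExchange", "testRoutingKey", true,
        MessageProperties.PERSISTENT_BASIC, messageBodyBytes);
An alternative is channel.queueDeclarePassive(queueName), which throws an exception if the queue no longer exists, but the return listener avoids an extra round trip per publish.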

RabbitMQ: reading one message flushes the other values out of the queue as well

Here is the problem I have:
I want to write two objects into RabbitMQ and read only one (this is a test to ensure that my data stays in RabbitMQ if the reader suddenly stops, e.g. via Ctrl+C).
I have no problem writing to MQ, but when I read only one object and close the connection, the other object disappears too. I don't know why that happens.
I followed the instructions given here.
Creating a channel:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("127.0.0.1");
factory.setPort(5672);
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
Writing into RabbitMQ (no problem with writing to MQ):
channel.queueDeclare("myque", false, false, false, null);
channel.basicPublish("", "myque", null, "one".getBytes("UTF-8"));
channel.basicPublish("", "myque", null, "two".getBytes("UTF-8"));
The way I read is:
QueueingConsumer consumer =new QueueingConsumer(channel);
channel.basicConsume("queuethroughProxy", true, consumer);
//while(true){
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
String message = new String(delivery.getBody());
System.out.println("message is : " + message);
//}
connection.close();
I'm not quite sure what I'm doing wrong here.
You are making two mistakes here:
1. Not setting channel.basicQos(1), which causes the broker to push every message in the queue to your consumer as soon as it starts (moving them all from Ready to Unacked).
2. Enabling auto-ack while consuming, which means every delivered message is treated as acknowledged, so all of them are removed when the consumer program stops.
These are the reasons you lose all the messages in the queue even though you consumed only one.
You can refer to my blog post here for more detail.
I guess you are confused by the line
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
One might think that calling consumer.nextDelivery() fetches the next message from the broker. But the documentation says: "Since the server will push messages asynchronously, we provide a callback in the form of an object that will buffer the messages until we're ready to use them. That is what QueueingConsumer does."
Since auto-ack is enabled, the server pushes both messages to the consumer as soon as it is created; consumer.nextDelivery() just iterates through the messages that were already received and buffered on the client side.
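Putting both fixes together, a minimal sketch (using the queue name from the writing snippet) looks like this:
channel.basicQos(1);                               // at most one unacked message in flight
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume("myque", false, consumer);    // autoAck = false
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
System.out.println("message is : " + new String(delivery.getBody(), "UTF-8"));
channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
connection.close();                                // "two" remains in the queue, in the Ready state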

Can I use RabbitMQ to move data to an Amazon Kinesis stream?

I have a server containing date-wise folders, and each folder contains many files (about 200 KB each) holding all the logs for a particular day. I am new to RabbitMQ; while going through the RabbitMQ documentation I found the code below for a producer.
Refer Link: https://github.com/rabbitmq/rabbitmq-tutorials/blob/master/java/Send.java
public class Send {
private final static String QUEUE_NAME = "hello";
public static void main(String[] argv) throws Exception {
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
String message = "Hello World!";
channel.basicPublish("", QUEUE_NAME, null, message.getBytes());
System.out.println(" [x] Sent '" + message + "'");
channel.close();
connection.close();
}
}
In the code above I publish the sample string "Hello World!". As stated in the problem description, I have to read the log information from the date-stamped directories on the server. So do I need to write an infinite loop (since the logs are continuously updated) that recursively reads all directories and files, and then, for each line of a file, compose a message and publish it to the receiver?
In that case the channel will never close and the connection will always be up; is that an acceptable idle condition for RabbitMQ?
Is it possible for RabbitMQ to mark which files have been read so they are not read again, or do I need to manage that programmatically, for example by renaming the file or folder? I was also thinking that the program might be terminated by a power failure while I am in the middle of a file; how can I then guarantee that records will not be duplicated?
Any other good way to achieve this would be a great help. Thanks in advance.
I would enqueue a list of files to process into RabbitMQ and then have a separate set of processes picking messages off that queue and doing whatever you want with the data. Make sure to subscribe to the queue with manual acknowledgements, so RabbitMQ deletes a message only once you ack it. With this setting you should avoid sending the same information twice.
That works in most situations. I say most because if RabbitMQ delivers a message to your consumer, the consumer takes an action (like replicating the information or inserting a row into a database), and then the connection to RabbitMQ dies before you send the ack, the broker has no way of knowing that you already processed the message, so it will deliver it again later.
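As a rough sketch of that design (the queue name "log-files" and the processing step are assumptions, not from the question), the consumer side with manual acks could look like this, using DefaultConsumer and Envelope from com.rabbitmq.client:
channel.basicQos(1);
channel.basicConsume("log-files", false /* manual ack */, new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        String path = new String(body, "UTF-8");
        // ... read the file at `path` and forward its lines to Kinesis ...
        // Ack only after the work is done, so a crash means redelivery, not loss.
        getChannel().basicAck(envelope.getDeliveryTag(), false);
    }
});
Note that this gives you at-least-once delivery: as the answer says, a crash between the work and the ack leads to a redelivery, so the Kinesis side should tolerate occasional duplicates.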

MessageConsumer.receive() doesn't remove message

I am trying to write a test for a class used to send a JMS-message to ActiveMQ. What I am trying to accomplish is to get a method in the class under test to send the message to an ActiveMQ instance in localhost, and then pick the message up in the test and verify that it is correct.
I have chosen this as my broker URL: vm://localhost?broker.persistent=true, which means that a local ActiveMQ instance will be created and the messages stored in a KahaDB (which is also created). (I tried using broker.persistent=false, but since the method under test has a finally clause that closes the connection, the in-memory messages are lost before I can retrieve them.)
In order to retrieve the message and verify it, I have the following code:
//call method under test to send a message
//create a ConnectionFactory with url vm://localhost?broker.persistent=true
final Connection connection = connectionFactory.createConnection();
connection.start();
final Session session = connection.createSession(true, Session.AUTO_ACKNOWLEDGE);
final Destination dest = session.createQueue("my.queue");
final MessageConsumer messageConsumer = session.createConsumer(dest);
Message message = messageConsumer.receive(1000);
messageConsumer.close();
session.close();
connection.close();
My problem is that upon running this code, the messages are not removed from KahaDB! Across multiple test runs, the message added the first time is read again and again. Am I missing something here, or is this a bug in KahaDB/ActiveMQ? I am using ActiveMQ 5.7.0.
Try
final Session session =
connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
otherwise you get a "transacted" session.
Alternatively, if you really want a "transacted" session, you have to call
// the 2nd parameter is ignored if the session is transacted
final Session session =
connection.createSession(true, -1);
// Read messages
session.commit();
messageConsumer.close();
session.close();
connection.close();
in order to remove all messages you have read during this session.
For your reference, there is an excellent overview from JavaWorld regarding transactions and redelivery in JMS. It covers additional possibilities as well (using Session.CLIENT_ACKNOWLEDGE to acknowledge messages individually, for example).
You've created a transacted session but never called commit. In this case, when the close method is called the in-flight transaction is rolled back, so the message you received is placed back into the queue and will be redelivered to another consumer. You can verify this by querying the redelivered count on the message and seeing that it increases each time. To consume the message, call session.commit() before closing the session.
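For reference, a minimal sketch of the corrected retrieval code (non-transacted session with auto-acknowledge, reusing the connectionFactory from the question):
final Connection connection = connectionFactory.createConnection();
connection.start();
// false = not transacted; AUTO_ACKNOWLEDGE removes the message on receive.
final Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
final Destination dest = session.createQueue("my.queue");
final MessageConsumer messageConsumer = session.createConsumer(dest);
Message message = messageConsumer.receive(1000);
// ... assertions on the message ...
messageConsumer.close();
session.close();
connection.close();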

How to optimize ActiveMQ

I'm using ActiveMQ in a Java simulation of overloaded servers. It mostly works fine, but once I get over 600 requests things fall apart!
I think the bottleneck is my master server, shown below. I'm already reusing the connection and creating multiple sessions to consume messages from clients. As I said, I'm using about 50-70 sessions per connection, reusing the connection and queue. Any idea what I can reuse or optimize in my components/listener below?
The architecture is as follows (* = various):
Client ---> JMS MasterQueue ---> * Master ---> JMS SlaveQueue ---> * Slave
Mainly I'm creating a temporary queue for each session of master --> slave communication; is that a big performance problem?
/**
* This subclass implements the processing logic of the master JMS server,
* propagating the message to the slave JMS queue.
*
* @author Marcos Paulino Roriz Junior
*
*/
public class ReceiveRequests implements MessageListener {
public void onMessage(Message msg) {
try {
ObjectMessage objMsg = (ObjectMessage) msg;
// Saves the destination where the master should answer
Destination originReplyDestination = objMsg.getJMSReplyTo();
// Creates session and a sender to the slaves
BankQueue slaveQueue = getSlaveQueue();
QueueSession session = slaveQueue.getQueueConnection()
.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
QueueSender sender = session
.createSender(slaveQueue.getQueue());
// Creates a tempQueue for the slave tunnel the message to this
// master and also create a masterConsumer for this tempQueue.
TemporaryQueue tempDest = session.createTemporaryQueue();
MessageConsumer masterConsumer = session
.createConsumer(tempDest);
// Setting JMS Reply Destination to our tempQueue
msg.setJMSReplyTo(tempDest);
// Sending and waiting for answer
sender.send(msg);
Message msgReturned = masterConsumer.receive(getTimeout());
// Let's check if the timeout expired
while (msgReturned == null) {
sender.send(msg);
msgReturned = masterConsumer.receive(getTimeout());
}
// Sends answer to the client
MessageProducer producerToClient = session
.createProducer(originReplyDestination);
producerToClient.send(originReplyDestination, msgReturned);
} catch (JMSException e) {
logger.error("NO REPLY DESTINATION PROVIDED", e);
}
}
}
Well, after some reading I found out how to optimize it. We should reuse some session-scoped objects, such as the sender and the temporary queue, instead of creating new ones for every message.
Another approach is to lower the per-thread stack size in Java, following this link:
ActiveMQ OutOfMemory Can't create more threads
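As a sketch of that reuse (keeping the question's BankQueue, getTimeout(), and logger, which are not shown here), the session, sender, temp queue, and consumer can be created once per listener instead of once per message:
public class ReceiveRequests implements MessageListener {
    private final QueueSession session;
    private final QueueSender sender;
    private final TemporaryQueue tempDest;
    private final MessageConsumer masterConsumer;

    public ReceiveRequests(BankQueue slaveQueue) throws JMSException {
        session = slaveQueue.getQueueConnection()
                .createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        sender = session.createSender(slaveQueue.getQueue());
        tempDest = session.createTemporaryQueue();        // one temp queue, reused
        masterConsumer = session.createConsumer(tempDest);
    }

    public void onMessage(Message msg) {
        try {
            Destination originReplyDestination = msg.getJMSReplyTo();
            msg.setJMSReplyTo(tempDest);                  // reuse the same temp queue
            sender.send(msg);
            Message msgReturned = masterConsumer.receive(getTimeout());
            // ... same timeout/resend handling as before ...
            MessageProducer producerToClient = session.createProducer(originReplyDestination);
            producerToClient.send(originReplyDestination, msgReturned);
            producerToClient.close();
        } catch (JMSException e) {
            logger.error("NO REPLY DESTINATION PROVIDED", e);
        }
    }
}
Since JMS delivers messages to a listener serially per session, reusing one temp queue is safe here as long as each listener instance is bound to a single session; replies then cannot interleave within that instance.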
It could have to do with the configuration of the listener thread pool. Up to a certain threshold of requests per second, the listener is able to keep up and process incoming requests in a timely way; above that rate it starts to fall behind. The exact threshold depends on the work done for each incoming request, the incoming request rate, the memory and CPU available to each listener, and the number of listeners allocated.
If this is true, you should be able to watch the queue and see when the number of incoming messages starts to back up. That's the point at which you need to increase the resources and the number of listeners to keep processing efficiently.
