Here is the problem I have:
I want to write two objects into RabbitMQ and read only one (this is a test to ensure that my data stays in RabbitMQ if the reader suddenly stops, e.g. with Ctrl+C).
I have no problem writing to MQ, but when I read only one object and close the connection, the other object disappears too, and I don't know why.
I followed the instructions given here:
creating a channel:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("127.0.0.1");
factory.setPort(5672);
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
writing into RabbitMQ (no problem with writing to MQ):
channel.queueDeclare("myque", false, false, false, null);
channel.basicPublish("", "myque", null, "one".getBytes("UTF-8"));
channel.basicPublish("", "myque", null, "two".getBytes("UTF-8"));
the way I read is:
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume("myque", true, consumer);
//while(true){
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
String message = new String(delivery.getBody());
System.out.println("message is : " + message);
//}
connection.close();
I'm not quite sure what I'm doing wrong here.
You are making two mistakes here.
Not setting channel.basicQos(1), which lets the broker push every message in the queue to your consumer as soon as it starts, instead of one at a time.
Enabling auto-ack while consuming, which acknowledges every pushed message immediately, so all of them are removed from the queue even though you only processed one.
These are the reasons you are losing all the messages in the queue even though you consumed only one; see the corrected sketch below.
You can refer to my blog post here for more detail.
I guess you are confused by the line
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
One might think that calling consumer.nextDelivery() fetches the next message from the broker.
But the documentation says: "Since the server will push messages asynchronously, we provide a callback in the form of an object that will buffer the messages until we're ready to use them. That is what QueueingConsumer does."
Since auto-ack is enabled, the server pushes both messages to the consumer as soon as it is created; consumer.nextDelivery() merely iterates through the messages already buffered on the client side.
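Here is a minimal sketch of the corrected consumer (reusing the channel and the "myque" queue from the question): prefetch one message at a time and acknowledge manually, so the unread message stays in the queue when the connection closes.

channel.basicQos(1); // broker pushes at most one unacknowledged message at a time
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume("myque", false, consumer); // autoAck = false

QueueingConsumer.Delivery delivery = consumer.nextDelivery();
System.out.println("message is : " + new String(delivery.getBody(), "UTF-8"));
// Acknowledge only the message we actually processed.
channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);

connection.close(); // "two" remains in the queue, in the Ready state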
Related
I am using IBM MQ to produce messages and receiving them through a consumer on my client. To create the connection I'm using JmsConnectionFactory, along with the provided properties, to set up the connection with the server. From what I understand, as the consumer, the only way to recognize the messages produced by the server is through the onMessage callback. I'm currently testing this by creating a local producer and a local consumer and verifying that every message sent by the producer is received by the consumer.
I'm running into the following problems:
I'm not receiving all of the messages produced.
The count depends on message size: more messages are received when they are smaller.
Here is code for the creation of the producer:
JmsConnectionFactory cf = ff.createConnectionFactory();
cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, qm.getHost());
int port = ###;
cf.setIntProperty(WMQConstants.WMQ_PORT, port);
cf.setStringProperty(WMQConstants.WMQ_CHANNEL, qm.getChannel());
cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, qm.getQueueManagerName());
Connection connection = cf.createConnection(qm.getUser().getUsername(), qm.getUser().getPassword());
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination destination = session.createQueue(qm.getDestinationName());
LOG.debug("Destination Created at " +qm.getDestinationName());
msgSender = session.createProducer(destination);
msgSender.setDeliveryMode(DeliveryMode.PERSISTENT);
And this is how the producer is sending messages:
/**
 * msgSender is the MessageProducer object
 **/
private void produceMessages(int numOfMessages) throws JMSException, InterruptedException {
    for (int i = 0; i < numOfMessages; i++) {
        String text = "Message #" + i;
        TextMessage message = session.createTextMessage(text);
        msgSender.send(message);
    }
}
On the consumer side, I am simply printing received messages and verifying visually:
@Override
public void onMessage(Message m) {
    try {
        System.out.println(((TextMessage) m).getText());
    } catch (JMSException e) {
        e.printStackTrace(); // getText() can throw JMSException
    }
}
I am not fully familiar with how IBM MQ works. Could the messages be going missing because MQ simply ignores messages that are produced before the previous message has been fully sent?
I would say the issue resides on your consumer side rather than your simulated producer. Your message producer should be sending messages to MQ just fine, but multiple consumers may be competing to retrieve these messages from the connection you have set up (given the same queue manager properties). So unless no one else is consuming from your IBM MQ queue, you should expect to miss some messages.
You could use the other overload, send(Message m, CompletionListener l), to send each new message only after the previous send has completed.
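A rough sketch of that approach, assuming a JMS 2.0-capable client and reusing the session and msgSender fields from the question (the latch-based blocking is just one way to wait for each completion):

// requires: import javax.jms.CompletionListener; import java.util.concurrent.CountDownLatch;
private void produceMessagesWithConfirms(int numOfMessages)
        throws JMSException, InterruptedException {
    for (int i = 0; i < numOfMessages; i++) {
        TextMessage message = session.createTextMessage("Message #" + i);
        final CountDownLatch done = new CountDownLatch(1);
        msgSender.send(message, new CompletionListener() {
            @Override
            public void onCompletion(Message m) {
                done.countDown(); // broker accepted the message
            }
            @Override
            public void onException(Message m, Exception e) {
                e.printStackTrace(); // send failed; handle or retry here
                done.countDown();
            }
        });
        done.await(); // block until this send completes before sending the next
    }
}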
And if you use "Best Effort" delivery, it will still lose messages; you can try "Express" instead.
In my Java program, messages are sent over RabbitMQ queues as below:
if (!con.isConnected()) {
    log.error("Not connected !!!");
    return false;
}
con.getChannel().basicPublish("", queueName, MessageProperties.PERSISTENT_BASIC, bytes);
I deleted the queues via the RabbitMQ management GUI plugin, then tried to send a message over a deleted queue.
Result: the queues were deleted from the RabbitMQ GUI, but when I try to send a message over one of the deleted queues, the connection is still alive (con.isConnected() == true). I need a way to watch the queue: if it has been deleted, I shouldn't send any messages to it.
Note: after deleting the queue, I am not restarting RabbitMQ.
channel creation:
channel = connection.createChannel();
channel.queueDeclare(prop.getQueueName(), true, false, false, null);
example code for channel, queue, and exchange creation:
ConnectionFactory cf = new ConnectionFactory();
cf.setUsername("guest");
cf.setPassword("guest");
cf.setHost("localhost");
cf.setPort(5672);
cf.setAutomaticRecoveryEnabled(true);
cf.setConnectionTimeout(10000);
cf.setNetworkRecoveryInterval(10000);
cf.setTopologyRecoveryEnabled(true);
cf.setRequestedHeartbeat(5);
Connection connection = cf.newConnection();
channel = connection.createChannel();
channel.queueDeclare("test", true, false, false, null);
channel.exchangeDeclare("testExchange", "direct",true);
channel.queueBind("test", "testExchange", "testRoutingKey");
connection.addShutdownListener(new ShutdownListener() {
    @Override
    public void shutdownCompleted(ShutdownSignalException cause) {
        System.out.println("test" + cause);
    }
});
Sending a message:
channel.basicPublish("testExchange", "testRoutingKey", null,messageBodyBytes);
From the RabbitMQ Google group:
Messages in AMQP 0-9-1 are not published to queues; they are published to exchanges, from where they are routed to a queue (or another exchange) or not. [1]
basic.publish is a completely asynchronous protocol method by design: there is no response for it unless you ask for it [2]. Messages that are unroutable can be returned to the publisher if you define a return listener and publish with the mandatory flag set to true.
Note that publisher confirms and the mandatory flag/returns are orthogonal and one does not imply the other.
Defining a return listener and setting the mandatory flag to true solved my problem. If a message is not routed, I can catch it with the ReturnListener and add it to my persisted queue to send again later when the system becomes active.
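A minimal sketch of that approach, reusing the channel, exchange, and routing key from the example above:

channel.addReturnListener(new ReturnListener() {
    @Override
    public void handleReturn(int replyCode, String replyText, String exchange,
                             String routingKey, AMQP.BasicProperties properties,
                             byte[] body) throws IOException {
        // The message could not be routed to any queue (e.g. the queue was
        // deleted); stash it somewhere persistent and retry later.
        System.out.println("Returned: " + replyText + ", routingKey=" + routingKey);
    }
});

// mandatory = true: the broker returns an unroutable message instead of dropping it
channel.basicPublish("testExchange", "testRoutingKey", true,
        MessageProperties.PERSISTENT_BASIC, messageBodyBytes);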
I have a server containing date-wise folders, and each folder contains many files (about 200 KB each) holding all the logs for a particular day. I am new to RabbitMQ; while going through the RabbitMQ documentation I found the code below for a producer.
Refer Link: https://github.com/rabbitmq/rabbitmq-tutorials/blob/master/java/Send.java
public class Send {
    private final static String QUEUE_NAME = "hello";

    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        String message = "Hello World!";
        channel.basicPublish("", QUEUE_NAME, null, message.getBytes());
        System.out.println(" [x] Sent '" + message + "'");
        channel.close();
        connection.close();
    }
}
In the above code I publish the sample string "Hello World!". As described in the problem statement, I have to read log information from the server across different date-stamped directories. So do I need to write an infinite loop (since the logs are continuously updated) that recursively reads all directories and files, and then, for each line of each file, composes a message and publishes it to the receiver?
In that case the channel will never close and the connection will always be up. Is that acceptable for RabbitMQ?
Is it possible for RabbitMQ to mark the files that have already been read so it doesn't read them again, or do I need to manage that programmatically, e.g. by renaming the file or folder? I am worried that the program might terminate, say from a power failure, while I am in the middle of a file; how can I guarantee that records will not be duplicated?
Any better way to achieve this would be a great help. Thanks in advance.
I would enqueue a list of files to process into RabbitMQ and then have a separate set of processes picking up messages from that queue to do whatever you want with the data. Make sure to subscribe to the queue in manual-ack mode, so RabbitMQ only deletes a message from the queue once you ack it. With this setting, you should avoid sending the same information twice.
That works in most situations. I say most because if RabbitMQ sends a message to your consumer, your consumer takes an action (like replicating the information, or placing an entry in a database), and then the connection to RabbitMQ dies before you send the ack, the broker has no way of telling that you already processed the message, so it will deliver it again later. A sketch of the consuming side follows.
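A minimal sketch of that consumer, assuming one file path per message (the "log-files" queue name and the processLogFile helper are placeholders for illustration):

channel.basicQos(1); // at most one unprocessed file per worker
channel.queueDeclare("log-files", true, false, false, null);
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume("log-files", false, consumer); // manual ack

while (true) {
    QueueingConsumer.Delivery delivery = consumer.nextDelivery();
    String path = new String(delivery.getBody(), "UTF-8");
    processLogFile(path); // hypothetical helper: parse and ship one file
    // Ack only after processing succeeds; if we crash first, the broker redelivers.
    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
}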
I'm doing the following flow in my application:
1. Get one message from the broker (manual acknowledge).
2. Do some processing.
3. Start a transaction on the database and the broker.
4. Insert some records into the database and publish some messages to the broker (a different queue).
5. Commit the database and the broker.
6. Ack the message received from the broker in step 1.
All operations on the broker go through a single channel. Here is the preparation code:
Connection brokerConnection = factory.newConnection();
Channel channel = brokerConnection.createChannel();
channel.basicQos(1);
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume("receive-queue", false, consumer);
Following is my code. I have removed the try/catch parts for clarity; I log all exceptions to a file.
Step 1:
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
Request request = (Request) SerializationUtils.deserialize(delivery.getBody());
Step 2, 3, 4, 5:
dbConnection.setAutoCommit(false);
channel.txSelect();
stmt = dbConnection.prepareStatement(query);
/* set paramteres */
stmt.executeUpdate();
channel.basicPublish(/* exchange name */, "KEY", MessageProperties.PERSISTENT_BASIC, /* result */ result);
dbConnection.commit();
channel.txCommit();
dbConnection.setAutoCommit(true);
Step 6:
channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
After one iteration I can see the records in the database and on the broker (meaning it works fine up to step 5). The problem is that the message on the receive queue is not removed after step 6, and the management plug-in shows one unacked message. I also don't see any exception in the log file. Can anyone help?
[UPDATE1]
Now I create one channel for publishing and another channel for receiving, and it works. So how do I use a single channel for both receiving and publishing (with transactions)? I have used a single channel for receiving and publishing before, but that was without transactions.
[UPDATE2]
I moved step 6 inside the transaction and it is working now.
dbConnection.setAutoCommit(false);
channel.txSelect();
stmt = dbConnection.prepareStatement(query);
/* set paramteres */
stmt.executeUpdate();
channel.basicPublish(/* exchange name */, "KEY", MessageProperties.PERSISTENT_BASIC, /* result */ result);
channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
dbConnection.commit();
channel.txCommit();
dbConnection.setAutoCommit(true);
I'm a bit confused; I only wanted the publish section to be inside the transaction.
You've put the channel in transactional mode, and acks are transactional things. So you either need to consume and ack on a separate, non-transactional channel, or else just accept that your ack needs to come before the txCommit. A sketch of the two-channel option follows.
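A hedged sketch of the two-channel alternative, reusing brokerConnection and the queue name from the question (exchangeName and result stand in for the elided values):

Channel consumeChannel = brokerConnection.createChannel(); // plain channel for consume/ack
consumeChannel.basicQos(1);
QueueingConsumer consumer = new QueueingConsumer(consumeChannel);
consumeChannel.basicConsume("receive-queue", false, consumer);

Channel publishChannel = brokerConnection.createChannel();
publishChannel.txSelect(); // only the publishing channel is transactional

QueueingConsumer.Delivery delivery = consumer.nextDelivery();
// ... database work inside the JDBC transaction ...
publishChannel.basicPublish(exchangeName, "KEY",
        MessageProperties.PERSISTENT_BASIC, result);
publishChannel.txCommit();
// The ack on the non-transactional channel takes effect immediately.
consumeChannel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);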
I have a producer which connects to an ActiveMQ broker to send messages to the client.
Since it expects a response from the client, it first creates a temp queue and sets it as the JMS replyTo header.
It then sends the message to the broker and waits for the response from the client on the temp queue.
It receives the response from the client over the temp queue, performs the required actions, and then exits.
This works fine most of the time, but sporadically the application throws error messages saying "Cannot use queue created from another connection".
I am unable to identify what could cause this, as the temp queue is being created from the current session itself.
Has anyone else come across this situation and knows how to fix it?
Code snippet:
Connection conn = myJmsTemplate.getConnectionFactory().createConnection();
ses = conn.createSession(transacted, ackMode);
responseQueue = ses.createTemporaryQueue();
...
MyMessageCreator msgCrtr = new MyMessageCreator(objects, responseQueue);
myJmsTemplate.send(dest, msgCrtr);
myJmsTemplate.setReceiveTimeout(timeout);
ObjectMessage response = (ObjectMessage) myJmsTemplate.receive(responseQueue);
Here MyMessageCreator implements MessageCreator interface.
All I am trying to do is send a message to the broker and wait for a response from the client over the temp queue. I am also using a pooled connection factory to get the connection.
You get an error like this when a client tries to subscribe as a consumer on a temporary destination that was created by a different connection instance. The JMS spec defines that only the connection that created the temp destination can consume from it, so that's why the limitation exists. As for why you are seeing it, it's hard to say without seeing the code that hits the error.
Given that your update says you are using a pooled connection factory, I'd guess that this is the root of your issue. If the receive call happens to use a different connection from the pool than the one that created the temp destination, then you would see the error you mentioned. A sketch of the single-connection approach follows.
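A hedged sketch of pinning everything to one connection (cf, dest, payload, and timeout are assumed from your surrounding code):

Connection conn = cf.createConnection(); // one dedicated, non-pooled connection
conn.start();
Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
TemporaryQueue replyQueue = session.createTemporaryQueue();

MessageProducer producer = session.createProducer(dest);
ObjectMessage request = session.createObjectMessage(payload);
request.setJMSReplyTo(replyQueue); // the client replies to the temp queue
producer.send(request);

// Consume the reply on the SAME connection that created the temp queue.
MessageConsumer consumer = session.createConsumer(replyQueue);
ObjectMessage response = (ObjectMessage) consumer.receive(timeout);
conn.close();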