I have a server containing date-wise folders, and each folder contains many files (about 200 KB each) holding all the logs for a particular day. I am new to RabbitMQ; while going through the RabbitMQ documentation I found the code below for a producer.
Refer Link: https://github.com/rabbitmq/rabbitmq-tutorials/blob/master/java/Send.java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class Send {
    private final static String QUEUE_NAME = "hello";

    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        String message = "Hello World!";
        channel.basicPublish("", QUEUE_NAME, null, message.getBytes());
        System.out.println(" [x] Sent '" + message + "'");

        channel.close();
        connection.close();
    }
}
In the above code I have added the sample string "Hello World!" to publish. As stated in the problem description, I have to read the log information from the server's different date-stamped directories. So do I simply need to write an infinite loop (since the logs are continuously updated), recursively read all directories and files, and then, for each line of a file, compose a message and publish it to the receiver?
In this case our channel will never close and the connection will always be up; is that an acceptable situation with RabbitMQ?
Is it possible for RabbitMQ to mark which files have already been read so they are not read again, or do I need to manage that programmatically, for example by renaming the file or folder? I am thinking about this because our program might get terminated by a power failure or something similar while I am in the middle of a file, and then how can I guarantee that records will not be duplicated?
Any other or better way to achieve this would be a great help for me. Thanks in advance.
I would enqueue a list of files to process into RabbitMQ and then have a separate set of processes picking up messages from that queue to do whatever you want with the data. Make sure the consumers subscribe to the queue in (manual) ack mode, so RabbitMQ will only delete a message from the queue once you ack it. With this setting, you should prevent sending the same information twice.
That works in most situations. I say most, because if RabbitMQ sends a message to your consumer, your consumer takes an action (like replicating the information, or placing an entry in a database), and then the connection to RabbitMQ dies before you send the ack, the broker has no way of telling that you already processed the message, so it will deliver it again later.
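To make that concrete, here is a minimal sketch of such a worker using the RabbitMQ Java client. The queue name "log_files" and the processLogFile helper are made up for illustration; the important part is that autoAck is false and basicAck is only sent after the file has been fully processed.

import com.rabbitmq.client.*;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class LogFileWorker {
    // hypothetical queue carrying one file path per message
    private static final String QUEUE_NAME = "log_files";

    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        final Channel channel = connection.createChannel();

        channel.queueDeclare(QUEUE_NAME, true, false, false, null);
        channel.basicQos(1); // hand each worker one unacked file path at a time

        Consumer consumer = new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) throws IOException {
                String filePath = new String(body, StandardCharsets.UTF_8);
                try {
                    processLogFile(filePath); // your own logic: read the file, forward/store its lines
                    // ack only after successful processing, so a crash means redelivery, not loss
                    channel.basicAck(envelope.getDeliveryTag(), false);
                } catch (Exception e) {
                    // requeue on failure so another worker can retry the file
                    channel.basicNack(envelope.getDeliveryTag(), false, true);
                }
            }
        };

        // autoAck = false: RabbitMQ removes a message only once we ack it
        channel.basicConsume(QUEUE_NAME, false, consumer);
    }

    private static void processLogFile(String filePath) {
        // placeholder for the real work
    }
}

With basicQos(1) each worker holds only one unacknowledged file path at a time, so if a worker dies in the middle of a file, that path is redelivered to another worker.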
In my Java program, messages are sent over RabbitMQ queues as shown below:
if (!con.isConnected()) {
    log.error("Not connected !!!");
    return false;
}
con.getChannel().basicPublish("", queueName, MessageProperties.PERSISTENT_BASIC, bytes);
I deleted queues via the RabbitMQ management GUI plugin, then tried to send a message over one of those deleted queues.
Result: the queues were deleted from the RabbitMQ GUI, but when I try to send a message over a deleted queue, the connection is still alive (con.isConnected() == true). I need a way to detect that a queue has been deleted, so that I don't send any messages to it.
Note: after deleting the queue, I am not restarting RabbitMQ.
Channel creation:
channel = connection.createChannel();
channel.queueDeclare(prop.getQueueName(), true, false, false, null);
Example code for channel, queue and exchange creation:
ConnectionFactory cf = new ConnectionFactory();
cf.setUsername("guest");
cf.setPassword("guest");
cf.setHost("localhost");
cf.setPort(5672);
cf.setAutomaticRecoveryEnabled(true);
cf.setConnectionTimeout(10000);
cf.setNetworkRecoveryInterval(10000);
cf.setTopologyRecoveryEnabled(true);
cf.setRequestedHeartbeat(5);
Connection connection = cf.newConnection();
channel = connection.createChannel();
channel.queueDeclare("test", true, false, false, null);
channel.exchangeDeclare("testExchange", "direct",true);
channel.queueBind("test", "testExchange", "testRoutingKey");
connection.addShutdownListener(new ShutdownListener() {
    @Override
    public void shutdownCompleted(ShutdownSignalException cause) {
        System.out.println("test" + cause);
    }
});
Sending a message:
channel.basicPublish("testExchange", "testRoutingKey", null, messageBodyBytes);
From the RabbitMQ Google group:
Messages in AMQP 0-9-1 are not published to queues; they are published to exchanges, from where they
are routed to a queue (or another exchange) or not. [1]
basic.publish is a completely asynchronous protocol method by design: there is no response for it
unless you ask for it [2]. Messages that are unroutable can be returned to the publisher
if you define a return listener and publish with the mandatory flag set to true.
Note that publisher confirms and the mandatory flag/returns are orthogonal and one does not imply
the other.
Defining a return listener and setting the mandatory flag to true solved my problem. If any message is not routed, I can catch it with the ReturnListener and add it to my persisted queue to send again later when the system becomes active.
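For reference, a rough sketch of that setup, reusing the channel and messageBodyBytes from the snippets above (the exchange and routing key are the ones from the example; the handleReturn body is just a placeholder):

// register a return listener so unroutable messages come back to the publisher
channel.addReturnListener(new ReturnListener() {
    @Override
    public void handleReturn(int replyCode, String replyText, String exchange,
                             String routingKey, AMQP.BasicProperties properties,
                             byte[] body) throws IOException {
        // e.g. persist the message locally and retry when the consumer side is back
        System.out.println("Returned: " + replyText + " for routing key " + routingKey);
    }
});

// mandatory = true (third argument): if no queue is bound for the routing key,
// the broker returns the message to the listener instead of silently dropping it
channel.basicPublish("testExchange", "testRoutingKey", true,
        MessageProperties.PERSISTENT_BASIC, messageBodyBytes);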
I am using durable subscriptions with RabbitMQ STOMP (documentation here). As per the documentation, when a client reconnects (subscribes) with the same id, it should get all the queued-up messages. However, I am not able to get anything back, even though the messages are queued up on the server side. Below is the code I am using:
RabbitMQ Version : 3.6.0
Client code:
var sock;
var stomp;
var messageCount = 0;
var stompConnect = function() {
sock = new SockJS(options.url);
stomp = Stomp.over(sock);
stomp.connect({}, function(frame) {
debug('Connected: ', frame);
console.log(frame);
var id = stomp.subscribe('<url>' + options.source + "." + options.type + "." + options.id, function(d) {
console.log(messageCount);
messageCount = messageCount + 1;
}, {'auto-delete' : false, 'persistent' : true , 'id' : 'unique_id', 'ack' : 'client'});
}, function(err) {
console.log(err);
debug('error', err, err.stack);
setTimeout(stompConnect, 10);
});
};
Server Code:
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(final MessageBrokerRegistry config) {
        config.enableStompBrokerRelay("<endpoint>", "<endpoint>").setRelayHost(host)
                .setSystemLogin(username).setSystemPasscode(password).setClientLogin(username)
                .setClientPasscode(password);
    }

    @Override
    public void registerStompEndpoints(final StompEndpointRegistry registry) {
        registry.addEndpoint("<endpoint>").setAllowedOrigins("*").withSockJS();
    }
}
Steps I am executing:
Run the script on the client side; it sends a subscribe request.
A queue gets created on the server side (named stomp-subscription-*), all the messages are pushed into the queue, and the client is able to stream them.
Kill the script; this results in a disconnection. Server logs show that the client is disconnected and messages start getting queued up.
Run the script again with the same id. It manages to connect to the server, however no message is returned from the server. The message count on that queue remains the same (also, the RabbitMQ admin console doesn't show any consumer for that queue).
After 10 seconds, the connection gets dropped and the following gets printed in the client logs:
Whoops! Lost connection to < url >
The server also shows the same thing (i.e. client disconnected). As shown in the client code, it tries to re-establish the connection after 10 seconds and the same cycle repeats.
I have tried the following things:
Removed the 'ack' : 'client' header. This results in all the messages getting drained out of the queue, however none of them reaches the client. I added this header after going through this SO answer.
Added d.ack(); in the callback, before incrementing messageCount. This results in an error on the server side, as it tries to ack the message after the session is closed (due to disconnection).
Also, in some cases, when I reconnect and the number of queued-up messages is less than 100, I am able to get all the messages. However, once it crosses 100, nothing happens (not sure whether this has anything to do with the problem).
I don't know whether the problem is at the server end or the client end. Any inputs?
Finally, I was able to find (and fix) the issue. We are using nginx as a proxy and it had proxy_buffering set to on (the default value); have a look at the documentation here.
This is what it says:
When buffering is enabled, nginx receives a response from the proxied
server as soon as possible, saving it into the buffers set by the
proxy_buffer_size and proxy_buffers directives.
Because of this, the messages were getting buffered (delayed), causing the disconnection. We tried bypassing nginx and it worked fine; we then disabled proxy buffering and it now seems to work fine even with the nginx proxy.
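For anyone hitting the same thing, the relevant nginx change is roughly the following (the location path and upstream name are placeholders; the Upgrade/Connection headers are only needed if the SockJS/WebSocket transport goes through the same location):

location /stomp/ {
    proxy_pass http://backend;                # upstream serving the SockJS/STOMP endpoint
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;   # allow WebSocket upgrades
    proxy_set_header Connection "upgrade";
    proxy_buffering off;                      # stop nginx from buffering/delaying the proxied responses
}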
What would be the best way to consume "topic-ed" message batches from RabbitMQ in parallel and in order?
We have a server that processes data for many customers. Every time a customer's data is processed, a bunch of messages is sent to RMQ. On the other side we have a process that consumes the data and stores it in a database.
The consumption process is slow and we want to parallelize it and make it scalable. The problem is that data for a single customer cannot be processed by two consumers at the same time.
The producer runs every now and then and can add messages to the queue even for a customer that already has messages in the queue.
One of the suggestions was to create a new DB table that indicates, for each customer, whether its data is currently being processed. A consumer would only ask for messages of customers that are not being processed by other consumers and would register itself in the DB for that customer.
I'm reluctant to use that solution because it requires connecting to a database and it holds runtime state in a database.
I was hoping to find a solution that could be handled within the scope of our consumer/producer code and RMQ.
Another suggestion was to have messages written to RMQ under customer "topics" and have each consumer read a single "topic". A message would be added to a separate queue (or "topic") for each batch of customer messages. A consumer would consume a "customers" message and use its data to select a "topic" from the main queue.
The problem is what happens when the producer wants to add new data to the main queue for a customer that already has data in the main queue that is currently being processed.
How can we synchronize consumption and production over RMQ?
I think you could have a look at the RabbitMQ tutorials, in particular the topics one:
http://www.rabbitmq.com/tutorials/tutorial-five-java.html
Sample code:
import com.rabbitmq.client.*;

import java.io.IOException;

public class ReceiveLogsTopic {
    private static final String EXCHANGE_NAME = "topic_logs";

    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.exchangeDeclare(EXCHANGE_NAME, "topic");
        String queueName = channel.queueDeclare().getQueue();

        if (argv.length < 1) {
            System.err.println("Usage: ReceiveLogsTopic [binding_key]...");
            System.exit(1);
        }

        for (String bindingKey : argv) {
            channel.queueBind(queueName, EXCHANGE_NAME, bindingKey);
        }

        System.out.println(" [*] Waiting for messages. To exit press CTRL+C");

        Consumer consumer = new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) throws IOException {
                String message = new String(body, "UTF-8");
                System.out.println(" [x] Received '" + envelope.getRoutingKey() + "':'" + message + "'");
            }
        };
        channel.basicConsume(queueName, true, consumer);
    }
}
Command to run the code and save the output to a file:
java -cp $CP ReceiveLogsTopic "#" > logfile.log
I hope this helps and gives you an idea.
Actually, the best way is to use a DB, but if you are not OK with that, you can try keeping the details in a file and tracking and reusing them from there. That is, you can save the details to a file while executing and consult it as needed at runtime; a rough sketch follows below.
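If you do go the file route, a very rough sketch of that tracking could look like this (the file name, format and class are entirely made up; note that this only coordinates consumers running in the same process, and a real multi-process setup would also need file locking or similar):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class InProgressTracker {
    // made-up file name; holds one customer id per line
    private static final Path TRACK_FILE = Paths.get("in_progress_customers.txt");

    // returns true if nobody else has claimed this customer yet
    public static synchronized boolean tryClaim(String customerId) throws IOException {
        Set<String> inProgress = new HashSet<>();
        if (Files.exists(TRACK_FILE)) {
            inProgress.addAll(Files.readAllLines(TRACK_FILE, StandardCharsets.UTF_8));
        }
        if (inProgress.contains(customerId)) {
            return false; // another consumer is already working on this customer
        }
        Files.write(TRACK_FILE, Collections.singletonList(customerId), StandardCharsets.UTF_8,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        return true;
    }

    // remove the customer id once its batch has been processed
    public static synchronized void release(String customerId) throws IOException {
        if (!Files.exists(TRACK_FILE)) {
            return;
        }
        Set<String> inProgress = new HashSet<>(Files.readAllLines(TRACK_FILE, StandardCharsets.UTF_8));
        inProgress.remove(customerId);
        Files.write(TRACK_FILE, inProgress, StandardCharsets.UTF_8);
    }
}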
Note: I have included the sample code from the tutorial so that the details remain available even if the link changes in the future.
Here is the problem I have:
I want to write 2 objects into RabbitMQ and only read 1 (this is a test to ensure that my data stays in RabbitMQ if the reader suddenly stops, e.g. with Ctrl+C).
I don't have a problem with writing to MQ, but when I read only one object and close the connection, the other object disappears too. I don't know why that happens.
I followed the instructions given here:
creating a channel:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("127.0.0.1");
factory.setPort(5672);
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
Writing into RabbitMQ (no problem with writing to MQ):
channel.queueDeclare("myque", false, false, false, null);
channel.basicPublish("", "myque", null, "one".getBytes("UTF-8"));
channel.basicPublish("", "myque", null, "two".getBytes("UTF-8"));
the way I read is :
QueueingConsumer consumer =new QueueingConsumer(channel);
channel.basicConsume("queuethroughProxy", true, consumer);
//while(true){
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
String message = new String(delivery.getBody());
System.out.println("message is : " + message);
//}
connection.close();
I'm not quite sure what I'm doing wrong here.
You are making two mistakes here.
Not setting channel.basicQos(1), which means that as soon as your consumer starts, the broker pushes all the messages in the queue to it at once (they move from Ready to Unacked).
Enabling auto-ack while consuming, which means every message delivered to the consumer is immediately considered acknowledged, so when you stop the consumer program those messages are gone even though you only handled one.
These are the reasons you are losing all the messages in the queue even though you consumed only one.
You can refer to my blog post here for more detail.
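Putting the two fixes together, a rough sketch based on the question's own code (same queue and the same QueueingConsumer API) could look like this:

channel.basicQos(1); // prefetch a single message instead of having both pushed at once

QueueingConsumer consumer = new QueueingConsumer(channel);
// autoAck = false: the broker keeps a message until we explicitly ack it
channel.basicConsume("myque", false, consumer);

QueueingConsumer.Delivery delivery = consumer.nextDelivery();
String message = new String(delivery.getBody());
System.out.println("message is : " + message);

// ack only the message we actually processed; the second one stays Ready in the queue
channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);

connection.close();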
I guess you are confused by the line
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
One might think that calling consumer.nextDelivery() fetches the next message from the broker.
But if you look at the documentation, it says: "Since the server will push messages asynchronously, we provide a callback in the form of an object that will buffer the messages until we're ready to use them. That is what QueueingConsumer does."
Since auto-ack is enabled, as soon as the consumer is created the server pushes both messages to it. consumer.nextDelivery() just iterates through the messages that have already been received on the client side.
I have a producer which connects to an ActiveMQ broker to send messages to the client.
Since it expects some response from the client, it first creates a temp queue and associates it to the JMS replyto header.
It then sends the message over to the broker and waits for the response on temp queue from the client.
Receives the response from the client over the temp queue, performs required actions and then exits.
This works fine most of the time, but sporadically the application throws error messages saying "Cannot use queue created from another connection".
I am unable to identify what could cause this to happen, as the temp queue is being created from the current session itself.
Did anyone else come across this situation and knows how to fix it?
Code snippet:
Connection conn = myJmsTemplate.getConnectionFactory().createConnection();
ses = conn.createSession(transacted, ackMode);
responseQueue = ses.createTemporaryQueue();
...
MyMessageCreator msgCrtr = new MyMessageCreator(objects, responseQueue);
myJmsTemplate.send(dest, msgCrtr);
myJmsTemplate.setReceiveTimeout(timeout);
ObjectMessage response = (ObjectMessage) myJmsTemplate.receive(responseQueue);
Here MyMessageCreator implements MessageCreator interface.
All I am trying to do is send a message to the broker and wait for a response from the client over the temp queue. Also, I am using a pooled connection factory to get the connection.
You get an error like this if you have a client that is trying to consume from a temporary destination that was created by a different connection instance. The JMS spec defines that only the connection that created the temp destination can consume from it, so that's why the limitation exists. As for the reason you are seeing it, it's hard to say without seeing the code that encounters the error.
Given that your update says you are using the pooled connection factory, I'd guess that this is the root of your issue. If the consume call happens to use a different connection from the pool than the one that created the temp destination, then you would see the error that you mentioned.
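As a sketch of the safe pattern in plain JMS (class and parameter names are made up), the point is that the temporary queue and the consumer that reads the reply come from the same Connection/Session, rather than letting a pooled template pick a different connection for the receive:

import javax.jms.*;

public class RequestReply {
    public static ObjectMessage sendAndWait(ConnectionFactory factory, Destination requestDest,
                                            java.io.Serializable payload, long timeoutMillis) throws JMSException {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // temp queue and its consumer are created from the SAME connection/session
            TemporaryQueue replyQueue = session.createTemporaryQueue();
            MessageConsumer replyConsumer = session.createConsumer(replyQueue);

            ObjectMessage request = session.createObjectMessage(payload);
            request.setJMSReplyTo(replyQueue);
            session.createProducer(requestDest).send(request);

            // blocks until the client answers on the temp queue or the timeout expires
            return (ObjectMessage) replyConsumer.receive(timeoutMillis);
        } finally {
            connection.close(); // closing the connection also removes the temporary queue
        }
    }
}

If you stick with the pooled connection factory and JmsTemplate, you would need to make sure the send and the receive end up on the same underlying connection.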