How to set a timeout on RabbitMQ DefaultConsumer? - java

I have an application that works as a RabbitMQ producer. I have applied the RPC approach and it works fine: the producer publishes a message and consumes its response from a reply queue (a temporary queue). Initially I used QueueingConsumer on the producer side and set a timeout via the nextDelivery(timeout) method. QueueingConsumer is deprecated now, and the official RabbitMQ RPC tutorial has been updated to use DefaultConsumer instead. I have replaced QueueingConsumer with DefaultConsumer too, but now there is a problem: how can I set a timeout with DefaultConsumer? If the consumer never sends a response, stale temporary queues remain on the broker. The old and new producer-side consuming code is below. Thanks for your help.
Old producer consuming approach:
consumer = new QueueingConsumer(channel);
channel.basicConsume(replyQueueName, true, consumer);
channel.basicPublish("", requestQueueName, props, message.getBytes("UTF-8"));
while (true) {
    // nextDelivery(timeout) blocks for at most `timeout` milliseconds and returns null on timeout
    QueueingConsumer.Delivery deliver = consumer.nextDelivery(timeout);
    if (deliver.getProperties().getCorrelationId().equals(corrId)) {
        response = new String(deliver.getBody(), "UTF-8");
        break;
    }
}
return response;
New producer consuming approach:
final BlockingQueue<String> response = new ArrayBlockingQueue<String>(1);
Consumer consumer = new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
        if (properties.getCorrelationId().equals(corrId)) {
            response.offer(new String(body, "UTF-8"));
        }
    }
};
channel.basicConsume(replyQueueName, true, consumer);
return response.take();

Solved. A timeout can be set on the "response" blocking queue. Changes in the "new producer consuming approach" are as follows:
Timeout on response: response.poll(5000, TimeUnit.MILLISECONDS) must be used instead of response.take().
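A minimal sketch of the resulting producer-side code, assuming the same channel, replyQueueName, and corrId as above and the java.util.concurrent imports; the 5-second timeout and the basicCancel cleanup on timeout are illustrative additions, not part of the original answer:

final BlockingQueue<String> response = new ArrayBlockingQueue<String>(1);
Consumer consumer = new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        if (properties.getCorrelationId().equals(corrId)) {
            response.offer(new String(body, "UTF-8"));
        }
    }
};
String consumerTag = channel.basicConsume(replyQueueName, true, consumer);
// Wait at most 5 seconds for the RPC reply; poll() returns null if nothing arrived in time.
String reply = response.poll(5000, TimeUnit.MILLISECONDS);
// Cancel the consumer either way so the temporary reply queue does not linger on the broker.
channel.basicCancel(consumerTag);
return reply; // null means the request timed out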

Related

how to validate that Kafka consumer received a message

I have a scenario in my application where I make a REST request to an endpoint, and I only want the response to be sent back once a Kafka consumer has received a message from a producer. I have a listener on the consumer, but I am not sure how to link the listener and the API execution so that the API call blocks until the consumer receives a message. Any ideas? Thanks
EDIT This is the example I am trying to work on
This is the consumer listener
@Component
public class ReplyingKafkaConsumer {

    @KafkaListener(topics = "${kafka.topic.request-topic}")
    @SendTo
    public UserRequest listen(UserRequest request) throws InterruptedException {
        UserRequest response = new UserRequest();
        response.setCompanyId(68L);
        response.setCompanyName("AdiBas");
        response.setEmail("adibas@gmail.com");
        response.setUserId(102L);
        return response;
    }
}
and I need to listen on the consumer from within this REST API
@ResponseBody
@PostMapping(value = "/user", produces = MediaType.APPLICATION_JSON_VALUE, consumes = MediaType.APPLICATION_JSON_VALUE)
public UserRequest post(@RequestBody UserRequest request) throws InterruptedException, ExecutionException {
    // create producer record
    ProducerRecord<String, UserRequest> record = new ProducerRecord<>(requestTopic, request);
    // set reply topic in header
    record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, requestReplyTopic.getBytes()));
    // post in kafka topic
    RequestReplyFuture<String, UserRequest, UserRequest> sendAndReceive = kafkaTemplate.sendAndReceive(record);
    // confirm if producer produced successfully
    SendResult<String, UserRequest> sendResult = sendAndReceive.getSendFuture().get();
    // print all headers
    sendResult.getProducerRecord().headers().forEach(header -> System.out.println(header.key() + ":" + header.value().toString()));
    // get consumer record
    ConsumerRecord<String, UserRequest> consumerRecord = sendAndReceive.get();
    // return consumer value
    return consumerRecord.value();
}
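For reference, the kafkaTemplate used above is presumably a Spring Kafka ReplyingKafkaTemplate, which needs a listener container for the reply topic so it can correlate replies to requests. A minimal configuration sketch under that assumption; the bean method, the topic name "request-reply-topic", and the group id "reply-group" are illustrative, not from the original post:

@Configuration
public class ReplyingKafkaConfig {

    @Bean
    public ReplyingKafkaTemplate<String, UserRequest, UserRequest> replyingKafkaTemplate(
            ProducerFactory<String, UserRequest> producerFactory,
            ConcurrentKafkaListenerContainerFactory<String, UserRequest> containerFactory) {
        // Container that consumes from the reply topic; the template uses it to match replies to requests.
        ConcurrentMessageListenerContainer<String, UserRequest> repliesContainer =
                containerFactory.createContainer("request-reply-topic");
        repliesContainer.getContainerProperties().setGroupId("reply-group");
        return new ReplyingKafkaTemplate<>(producerFactory, repliesContainer);
    }
}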

How to send/receive messages from every client with RabbitMQ Java

How can I implement the following situation in Java with RabbitMQ,
where every node sends messages to all other nodes and every node receives messages from all other nodes?
UPDATE 1:
I tried to create the above situation with the following code:
ReceiveLogs.java
public class ReciveLogs {
    ...
    public void start() throws IOException, TimeoutException, InterruptedException {
        connection = factory.newConnection();
        channel = connection.createChannel();
        channel.queueDeclare(coda, false, false, false, null);
        channel.exchangeDeclare(exName, BuiltinExchangeType.FANOUT);
        channel.queueBind(coda, exName, "");
        channel.basicPublish(exName, codaX, null, message.getBytes("UTF-8"));
        System.out.println(" ReceiveLogs Sent: " + message);

        Consumer consumer = new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties,
                                       byte[] body) throws IOException {
                String message = new String(body, "UTF-8");
                System.out.println(" ReceiveLogs RECEIVED:" + message);
            }
        };
        channel.basicConsume(codaX, true, consumer);
    }
}
EmitLog.java
public class EmitLog {
    ...
    public void start() throws IOException, TimeoutException {
        connection = factory.newConnection();
        channel = connection.createChannel();
        channel.exchangeDeclare(exName, BuiltinExchangeType.FANOUT);
        channel.queueDeclare(codaX, false, false, false, null);
        channel.queueBind(codaX, exName, "");

        Consumer consumer = new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties,
                                       byte[] body) throws IOException {
                String message = new String(body, "UTF-8");
                System.out.println(" ROUTER Received: '" + message);
            }
        };
        // note: basicConsume returns the consumer tag, not a message body
        String message = channel.basicConsume(codaX, true, consumer);
        channel.basicPublish(exName, "", null, message.getBytes("UTF-8"));
        System.out.println("ROUTER Sent: " + message);
        channel.close();
        connection.close();
    }
}
You can achieve this by creating a "fanout" exchange.
This is the setup you need to do:
Create a fanout exchange.
Create 3 queues, one for each node. Say Q1, Q2 and Q3, corresponding to C1, C2 and C3.
Bind all queues (Q1, Q2 and Q3) to the fanout exchange created in step 1.
Listening Code:
Create a listener for each of the nodes. For example, the C1 node listens for messages on queue "Q1", C2 on "Q2", and so on.
Publishing:
- Whenever you want to send a message, publish it to the broadcast exchange you created (see the sketch below).
There is a small caveat here: if C1 publishes a message, then C1 receives that same message as well. So, if you don't want a node to process the messages it published itself, you can use one of the message attributes to filter them out.
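A minimal sketch of this setup from one node's point of view, assuming a shared exchange named "broadcast"; the queue name "Q1", the host, and the sample message are illustrative choices, not from the original code:

ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();

// One fanout exchange shared by all nodes.
channel.exchangeDeclare("broadcast", BuiltinExchangeType.FANOUT);

// Each node declares its own queue (Q1 for C1, Q2 for C2, ...) and binds it to the exchange.
String myQueue = channel.queueDeclare("Q1", false, false, false, null).getQueue();
channel.queueBind(myQueue, "broadcast", "");

// Listening: consume from this node's own queue.
channel.basicConsume(myQueue, true, new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        System.out.println("C1 received: " + new String(body, "UTF-8"));
    }
});

// Publishing: send to the exchange (routing key is ignored for fanout); every bound queue gets a copy.
channel.basicPublish("broadcast", "", null, "hello from C1".getBytes("UTF-8"));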
Further documentation:
https://www.rabbitmq.com/tutorials/amqp-concepts.html#exchange-fanout

Rabbitmq , send message to two clients at the same time

I'm developing a Java REST API service and now I need a TCP connection between the server and mobile devices to send messages. I found that RabbitMQ is a good fit, but I'm really a newbie with the AMQP protocol. The question is how to send a message from the server to two clients that read bytes from the same queue.
My code:
public class RabitSecClient {

    private final static String QUEUE_NAME = "hello";

    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        System.out.println(" [*] Waiting for messages2");

        Consumer consumer = new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body)
                    throws IOException {
                String message = new String(body, "UTF-8");
                System.out.println(" [x] Received 2 '" + message + "'");
            }
        };
        channel.basicConsume(QUEUE_NAME, true, consumer);
    }
}
I run this code twice for testing, and when I send a message only the first client gets it. What is the reason?
Hi, have a look at the site below; it may answer your question. RabbitMQ supports a publish/subscribe mechanism: instead of both clients reading from the same work queue, you publish to an exchange and each subscriber binds its own queue to it, so any number of consumers can receive the same message.
https://pubs.vmware.com/vfabric52/index.jsp?topic=/com.vmware.vfabric.rabbitmq.2.8/rabbit-web-docs/tutorials/tutorial-three-java.html
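Following that tutorial, a minimal sketch of the pattern it describes: each client declares its own server-named, exclusive queue and binds it to a fanout exchange, so both clients receive every message. The exchange name "logs" and the host are assumptions taken from the tutorial, not from the question:

// Each client runs this; both will receive every message published to the exchange.
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();

channel.exchangeDeclare("logs", BuiltinExchangeType.FANOUT);
// Server-named, exclusive, auto-delete queue: private to this client.
String queueName = channel.queueDeclare().getQueue();
channel.queueBind(queueName, "logs", "");

channel.basicConsume(queueName, true, new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        System.out.println(" [x] Received '" + new String(body, "UTF-8") + "'");
    }
});

// The server publishes to the exchange instead of to a single queue:
// channel.basicPublish("logs", "", null, "hello".getBytes("UTF-8"));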

Vert.x how to pass/get messages from REST to message bus?

I want to pass messages to the event bus via REST and get them back, but I can't set up the message bus receiver correctly; it throws java.lang.IllegalStateException: Response has already been written. In real life the message bus should receive messages from different sources and pass them on to another target, so we just need to publish the message to the bus. But how do I correctly read the messages and handle all of them, for example from a REST interface?
My simple app start:
public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    vertx.deployVerticle(new RESTVerticle());
    vertx.deployVerticle(new Receiver());
    EventBus eventBus = vertx.eventBus();
    eventBus.registerDefaultCodec(MessageDTO.class, new CustomMessageCodec());
}
REST part
public class RESTVerticle extends AbstractVerticle {

    private EventBus eventBus = null;

    @Override
    public void start() throws Exception {
        Router router = Router.router(vertx);
        eventBus = vertx.eventBus();
        router.route().handler(BodyHandler.create());
        router.route().handler(CorsHandler.create("*")
                .allowedMethod(HttpMethod.GET)
                .allowedHeader("Content-Type"));
        router.post("/api/message").handler(this::publishToEventBus);
        // router.get("/api/messagelist").handler(this::getMessagesFromBus);
        router.route("/*").handler(StaticHandler.create());
        vertx.createHttpServer().requestHandler(router::accept).listen(9999);
        System.out.println("Service running at 0.0.0.0:9999");
    }

    private void publishToEventBus(RoutingContext routingContext) {
        System.out.println("routingContext.getBodyAsString() " + routingContext.getBodyAsString());
        final MessageDTO message = Json.decodeValue(routingContext.getBodyAsString(), MessageDTO.class);
        HttpServerResponse response = routingContext.response();
        response.setStatusCode(201)
                .putHeader("content-type", "application/json; charset=utf-8")
                .end(Json.encodePrettily(message));
        eventBus.publish("messagesBus", message);
    }
}
And the Receiver: I move it to a different class, but it does not help
public class Receiver extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        EventBus eventBus = vertx.eventBus();
        Router router = Router.router(vertx);
        router.route().handler(BodyHandler.create());
        router.route().handler(CorsHandler.create("*")
                .allowedMethod(HttpMethod.GET)
                .allowedHeader("Content-Type"));
        router.get("/api/messagelist").handler(this::getMessagesFromBus);
        router.route("/*").handler(StaticHandler.create());
        vertx.createHttpServer().requestHandler(router::accept).listen(9998);
        System.out.println("Service Receiver running at 0.0.0.0:9998");
    }

    private void getMessagesFromBus(RoutingContext routingContext) {
        EventBus eventBus = vertx.eventBus();
        eventBus.consumer("messagesBus", message -> {
            MessageDTO customMessage = (MessageDTO) message.body();
            HttpServerResponse response = routingContext.response();
            System.out.println("Receiver ->>>>>>>> " + customMessage);
            if (customMessage != null) {
                response.putHeader("content-type", "application/json; charset=utf-8")
                        .end(Json.encodePrettily(customMessage));
            }
            response.closed();
        });
    }
}
So if I POST a message to the REST endpoint, the handler publishes it to the bus. When I then GET http://localhost:9998/api/messagelist at runtime it returns JSON, but the second time it throws an exception:
java.lang.IllegalStateException: Response has already been written
at io.vertx.core.http.impl.HttpServerResponseImpl.checkWritten(HttpServerResponseImpl.java:561)
at io.vertx.core.http.impl.HttpServerResponseImpl.putHeader(HttpServerResponseImpl.java:154)
at io.vertx.core.http.impl.HttpServerResponseImpl.putHeader(HttpServerResponseImpl.java:52)
at com.project.backend.Receiver.lambda$getMessagesFromBus$0(Receiver.java:55)
at io.vertx.core.eventbus.impl.HandlerRegistration.handleMessage(HandlerRegistration.java:207)
at io.vertx.core.eventbus.impl.HandlerRegistration.handle(HandlerRegistration.java:201)
at io.vertx.core.eventbus.impl.EventBusImpl.lambda$deliverToHandler$127(EventBusImpl.java:498)
at io.vertx.core.impl.ContextImpl.lambda$wrapTask$18(ContextImpl.java:335)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
Receiver ->>>>>>>> Message{username=Aaaewfewf2d, message=41414wefwef2d2}
How do I correctly get all messages from the receiver? Or, if the bus receives messages, should I immediately store them in the db? Can a message bus keep messages and not lose them?
Thanks
Each hit on the entry point "/api/messagelist" registers one new consumer bound to that request's routing context.
The first request creates the consumer and replies to that request. When the second message is published, that same consumer receives it and tries to reply to the previous request (instance), which has already been closed.
I think you have misunderstood the purpose of the event bus and I really recommend you read the documentation:
http://vertx.io/docs/vertx-core/java/#event_bus
I have not had the chance to test your code, but it seems that the publish operation is throwing an exception and Vert.x tries to send back an error message; however, you have already replied and ended the connection.
The error might also come from your codec, but due to the asynchronous nature of Vert.x you only see it at a later stage, mangled with the internal error handler.
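A sketch of one way around the per-request consumer problem, assuming the intent is to expose everything published on "messagesBus" via GET /api/messagelist: register the event bus consumer once in start() and let it accumulate into a list, then have the HTTP handler just serialize that list. The field name and the in-memory list are assumptions for illustration (imports omitted), not the author's recommendation:

public class Receiver extends AbstractVerticle {

    // Accumulates everything seen on the bus; a real service would likely persist these instead.
    private final List<MessageDTO> received = new ArrayList<>();

    @Override
    public void start() throws Exception {
        // Register the consumer once, independent of any HTTP request.
        vertx.eventBus().<MessageDTO>consumer("messagesBus", msg -> received.add(msg.body()));

        Router router = Router.router(vertx);
        router.get("/api/messagelist").handler(ctx ->
                ctx.response()
                   .putHeader("content-type", "application/json; charset=utf-8")
                   .end(Json.encodePrettily(received)));
        vertx.createHttpServer().requestHandler(router::accept).listen(9998);
    }
}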

Does JMS receiveNoWait() guarantee message delivery when messages are available?

Hello, I am writing a simple testing scenario where I execute the following source code.
Here is my send() method:
public void send() throws JMSException {
    Session session = null;
    MessageProducer producer = null;
    try {
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination destination = session.createQueue("TEST.FOO");
        producer = session.createProducer(destination);
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        byte[] uselessData = new byte[1024];
        BytesMessage message = session.createBytesMessage();
        message.writeBytes(uselessData);
        producer.send(message);
    } finally {
        producer.close();
        session.close();
    }
}
Here is my receive() method:
public void receive() throws JMSException {
    Session session = null;
    MessageConsumer consumer = null;
    try {
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination destination = session.createQueue("TEST.FOO");
        consumer = session.createConsumer(destination);
        Message hugeMessage = consumer.receiveNoWait();
        if (hugeMessage == null) {
            System.out.println("Message was not received");
            unsucsesfullCount++;
        } else {
            if (hugeMessage instanceof BytesMessage) {
                System.out.println("Message received");
            }
        }
    } finally {
        consumer.close();
        session.close();
    }
}
I execute:
send();
receive();
The message value after receiveNoWait() is always null.
My question here is: does receiveNoWait() guarantee message delivery when there are messages on the broker? The send() executes successfully, so there is at least one message in the destination.
I've searched the specification, but there is no clear definition of whether a message that is available on the broker side must be returned by receiveNoWait() on the client side.
Also, if receiveNoWait() finds no message available, should it trigger some consumer refresh on the broker, so that the next receiveNoWait() will receive the message?
The example code I provided runs on ActiveMQ, but my question is more conceptual than provider-specific, because I observe the same behaviour with other JMS providers.
No, the specification does not guarantee that any call to receiveNoWait will return a message; it might, and then again it might not. When using receiveNoWait you must always check for a null return and act accordingly.
In the case of ActiveMQ the client returns a message if the broker has dispatched one to it and it is immediately available in the consumer's prefetch buffer; otherwise it just returns null.
Other implementations may indeed send a poll request to the broker. Qpid JMS, for instance, uses an AMQP link drain request to ask the broker to send any messages that are available for dispatch, and the broker either sends them or signals that the link is drained and there are no messages ready.
In short, it is completely up to the client and broker how they implement receiveNoWait, but no matter what, you always need to account for the chance that you won't get a message returned to you from that method.
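A sketch of how the receive side might account for that, assuming the same session and destination as the question's receive() method; the 1-second timeout is an arbitrary illustrative value:

consumer = session.createConsumer(destination);

// receiveNoWait() only returns what is already locally available, so null is a normal outcome.
Message message = consumer.receiveNoWait();
if (message == null) {
    // Alternatively, block for a bounded time and let the broker dispatch in the meantime.
    message = consumer.receive(1000); // wait up to 1 second
}
if (message == null) {
    System.out.println("No message available yet");
} else if (message instanceof BytesMessage) {
    System.out.println("Message received");
}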
