I have a program that sends and receives messages over an exchange. My program needs to continue execution regardless of whether there is a message for it in the queue. Almost all the tutorials have blocking examples:
while (true) {
    QueueingConsumer.Delivery delivery = consumer.nextDelivery();
    System.out.println("Message: " + new String(delivery.getBody()));
    ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
}
I came across what I understand to be the asynchronous version, i.e., the handleDelivery method is called back when a message is available in the queue:
boolean autoAck = false;
channel.basicConsume(queueName, autoAck, "myConsumerTag",
    new DefaultConsumer(channel) {
        @Override
        public void handleDelivery(String consumerTag,
                                   Envelope envelope,
                                   AMQP.BasicProperties properties,
                                   byte[] body)
            throws IOException
        {
            String routingKey = envelope.getRoutingKey();
            String contentType = properties.getContentType();
            long deliveryTag = envelope.getDeliveryTag();
            // (process the message components here ...)
            channel.basicAck(deliveryTag, false);
        }
    });
After reading the documentation I'm still unsure whether the above snippet is indeed asynchronous, and I still can't figure out how to get the actual message that was sent. Some help, please.
Without having tried the second code snippet, I'd say it probably does what you want. However, it presumably does this by using a thread internally (which is blocked while waiting for a new message). What I do is put the while loop in a new Thread, so that only that thread is blocked and the rest of the program continues asynchronously.
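For example, a rough sketch reusing consumer and ch from the blocking snippet above (the message text itself comes out of delivery.getBody()):

Thread consumerThread = new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            while (true) {
                // blocks only this thread while waiting for the next message
                QueueingConsumer.Delivery delivery = consumer.nextDelivery();
                String message = new String(delivery.getBody(), "UTF-8");
                System.out.println("Message: " + message);
                ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
});
consumerThread.setDaemon(true); // optional: don't keep the JVM alive just for this loop
consumerThread.start();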
Related
Java development with Netty: when writeAndFlush is executed in a new thread, the client does not receive a response; it is received when writeAndFlush is executed directly in the normal method.
private READER_ERR initReader() {
    // not received by the client
    new Thread(new Runnable() {
        @Override
        public void run() {
            Channel channel = NettyChannelMap.get(clientId);
            if (channel != null) {
                ChatDto returnDto = new ChatDto();
                returnDto.setClientId(clientId).setMsgType("READ").setMsg("返回数据");
                channel.writeAndFlush(JSON.toJSONString(returnDto));
            }
        }
    }).start();

    // received by the client
    Channel channel = NettyChannelMap.get(clientId);
    if (channel != null) {
        ChatDto returnDto = new ChatDto();
        returnDto.setClientId(clientId).setMsgType("READ").setMsg("返回数据");
        channel.writeAndFlush(JSON.toJSONString(returnDto));
    }
}
Setting a breakpoint and stepping into the Netty source shows that execution enters the AbstractChannelHandlerContext class:
if (executor.inEventLoop()) {
if (flush) {
next.invokeWriteAndFlush(m, promise);
} else {
next.invokeWrite(m, promise);
}
} else {
AbstractChannelHandlerContext.WriteTask task = AbstractChannelHandlerContext.WriteTask.newInstance(next, m, promise, flush);
if (!safeExecute(executor, task, promise, m, !flush)) {
task.cancel();
}
}
Looking at inEventLoop() we can see:
public boolean inEventLoop(Thread thread) {
return thread == this.thread;
}
When false is returned, the calling thread does not belong to the EventLoop; it is a thread created by the user. In that case Netty wraps the write into a task and puts it into the EventLoop's task queue for later execution. The problem is that this queue is never drained, and there is no obvious way to force the task to be executed.
One more observation: when the client does receive the message normally, inEventLoop() returns true, meaning the current thread belongs to the EventLoop, so the write is executed immediately.
I need to send messages to the client multiple times from a callback method, so how can the server actively send messages to the client from a new thread?
I experimented further and found that if the last message is sent from an EventLoop worker thread, the messages that were queued earlier are sent along with it, following the order in which they were inserted. For example, if the program queues messages 1, 2, 3 and then an "end" message is sent from a worker thread, the client receives them in the order: end, 1, 2, 3.
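For reference, a rough sketch of explicitly submitting the write to the channel's own EventLoop, so that inEventLoop() is true when writeAndFlush actually runs (same NettyChannelMap and ChatDto types as in the snippet above; untested against the exact behaviour described here):

// from any user thread: hand the write to the channel's EventLoop
final Channel channel = NettyChannelMap.get(clientId);
if (channel != null) {
    final ChatDto returnDto = new ChatDto();
    returnDto.setClientId(clientId).setMsgType("READ").setMsg("返回数据");
    channel.eventLoop().execute(new Runnable() {
        @Override
        public void run() {
            // now running on the EventLoop thread, so inEventLoop() is true
            channel.writeAndFlush(JSON.toJSONString(returnDto));
        }
    });
}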
I am writing a Spring Boot RabbitMQ consumer and I occasionally need to re-queue a message to the BACK of the queue.
I thought this was how negative acknowledgment worked, but
basicReject(deliveryTag, true) simply places the message back as close to its original position in the queue as it can, which in my one-at-a-time case is right back at the FRONT of the queue.
My first thought was to use a Dead Letter Queue feeding back into the message queue on some time interval (similar to the approach mentioned in this answer), but I would rather not create an additional queue if there is some way to simply re-queue to the BACK of the initial queue.
The structure below simply consumes the message and fails to re-add it to the queue.
How can this be accomplished without a DLQ?
@ServiceActivator(inputChannel = "amqpInputChannel")
public void handle(@Payload String message,
                   @Header(AmqpHeaders.CHANNEL) Channel channel,
                   @Header(AmqpHeaders.DELIVERY_TAG) Long deliveryTag) {
    try {
        methodThatThrowsRequeueError();
        methodThatThrowsMoveToErrorQueueError();
    } catch (RequeueError re) {
        channel.basicAck(deliveryTag, false);
        sendMessageToBackOfQueue(message);
        return;
    } catch (MoveToErrorQueueError me) {
        // Structured the same as sendMessageToBackOfQueue, works fine
        moveMessageToErrorQueue(message);
    }
    channel.basicAck(deliveryTag, false);
}

private void sendMessageToBackOfQueue(String message) {
    try {
        rabbitTemplate.convertAndSend(
            exchangeName,
            routingKeyRequeueMessage,
            message,
            msg -> {
                msg.getMessageProperties().setContentType(MessageProperties.CONTENT_TYPE_TEXT_PLAIN);
                return msg;
            }
        );
    } catch (AmqpException amqpEx) {
        // error handling, which is not triggered...
    }
}
TL;DR: I have found no way to forward a message from a listening service back into the originating queue without an intermediary.
There are several options that revolve around Dead Letter Queues/Dead Letter Exchanges, but a non-DLQ/DLX solution we found was a timed exchange, a pseudo-DLX if you will. Essentially:
Message enters MessageExchange (MsgX), which propagates to the Service Queue (SvcQ).
The Service (Svc) Gets a Message from the SvcQ.
Once you have determined that the message should be sent to the back of the SvcQ, Svc should:
Send an Acknowledgement to SvcQ.
Send the message to another exchange, our timed pseudo-DLX.
The pseudo-DLX can be configured to release messages to the (BACK OF!!) SvcQ on some timed interval.
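A rough sketch of such a timed pseudo-DLX built with the rabbitmq_delayed_message_exchange plugin (this assumes the plugin is enabled on the broker; the exchange name, routing key, and delay value are illustrative):

@Bean
public CustomExchange requeueDelayExchange() {
    Map<String, Object> args = new HashMap<>();
    args.put("x-delayed-type", "direct"); // route like a direct exchange once the delay expires
    return new CustomExchange("svc.requeue.delay", "x-delayed-message", true, false, args);
}

@Bean
public Binding requeueBinding(Queue svcQueue, CustomExchange requeueDelayExchange) {
    // after the delay, messages are routed back to SvcQ and land at the BACK like any new publish
    return BindingBuilder.bind(svcQueue).to(requeueDelayExchange).with("svc.requeue").noargs();
}

Then, after acking the original delivery, the service re-publishes through the delay exchange:

rabbitTemplate.convertAndSend("svc.requeue.delay", "svc.requeue", message, m -> {
    m.getMessageProperties().setDelay(5000); // delay in milliseconds, illustrative value
    return m;
});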
I currently have 4 queues:
test-queue
test-queue-short-term-dead-letter
test-queue-long-term-dead-letter
test-queue-parking-lot
When a message comes into test-queue, I check whether the message is in the correct format. If it isn't, I want to send the message directly to the parking-lot queue.
I can't throw AmqpRejectAndDontRequeueException because it will automatically send the message to the configured DLQ (test-queue-short-term-dead-letter).
Using RabbitTemplate.convertAndSend() followed by throwing another exception such as BadRequestException doesn't work: the message goes to the parking-lot queue as expected, but the same message also stays in the test-queue.
Using RabbitTemplate.convertAndSend() on its own won't work either, as the rest of the listener method continues to execute.
All queues are bound to a single direct exchange, each with unique routing keys. The test-queue is configured with the following arguments:
x-dead-letter-exchange: ""
x-dead-letter-routing-key: <shortTermDeadLetterKey>
Receiver:
@RabbitListener(queues = "test-queue")
public void receiveMessage(byte[] person) {
    String personString = new String(person);
    if (!personString.matches(desiredRegex)) {
        rabbitTemplate.convertAndSend("test-exchange", "test-queue-parking-lot",
                "invalid person");
        log.info("Invalid person");
    }
    // ... some other code which I don't want to run, as the message arrived in the incorrect format
}
The problem was solved by manually acknowledging the message and returning from the method.
@RabbitListener(queues = "test-queue")
public void receiveMessage(byte[] person, Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) long tag) throws Exception {
    String personString = new String(person);
    if (!personString.matches(desiredRegex)) {
        rabbitTemplate.convertAndSend("test-exchange", "test-queue-parking-lot",
                "invalid person");
        log.info("Invalid person");
        channel.basicAck(tag, false);
        return;
    }
    // ... some other code which I don't want to run, as the message arrived in the incorrect format
}
I encountered a knotty problem when receiving a message from a WildFly JMS queue. My code is below:
Session produceSession = connectionFactory.createConnection().createSession(false,
        Session.CLIENT_ACKNOWLEDGE);
Session consumerSession = connectionFactory.createConnection().createSession(false,
        Session.CLIENT_ACKNOWLEDGE);
ApsSchedule apsSchedule = new ApsSchedule();
boolean success;
MessageProducer messageProducer = produceSession.createProducer(outQueueMaxusOrder);
success = apsSchedule.sendD90Order(produceSession, messageProducer, d90OrderAps);
if (!success) {
    logger.error("Can't send APS schedule msg ");
} else {
    MessageConsumer consumer = consumerSession.createConsumer(inQueueDeliveryDate);
    data = apsSchedule.receiveD90Result(consumerSession, consumer);
}
Then, inside receiveD90Result():
public DeliveryData receiveD90Result(Session session, MessageConsumer consumer) {
    DeliveryData data = null;
    try {
        Message message = consumer.receive(10000);
        if (message == null) {
            return null;
        }
        TextMessage msg = (TextMessage) message;
        String text = msg.getText();
        logger.debug("Receive APS d90 result: {}", text);
        ObjectMapper mapper = new ObjectMapper();
        data = mapper.readValue(text, DeliveryData.class);
    } catch (JMSException je) {
        logger.error("Can't receive APS d90 order result: {}", je.getMessage());
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            consumer.close();
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
    return data;
}
But when consumer.receive(10000) executes, the program never gets a message from the queue. If I listen to the queue asynchronously with an MDB, I do get the message. How can I resolve this?
There are multiple modes you can choose for getting a message from a queue. Message queues are asynchronous by default, but there are cases where you want to read synchronously, for example sending a message with an account number and using another queue to read the response, matching it by message id or correlation id. When you call receive(), the program waits for a message to arrive within the polling interval specified in the call.
The code snippet you have uses, as I see it, the pseudo-synchronous approach. If you want to consume asynchronously instead, you will have to implement a message-driven bean (EJB resource) or a message listener.
An MDB/message listener works in an event-based way: instead of polling with a timeout (like receive()), you implement a callback, onMessage(), that is invoked every time a message arrives. Instead of a synchronous call, this becomes asynchronous, and your application may require some design changes.
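A minimal listener sketch, assuming consumerSession and inQueueDeliveryDate from the question, and assuming a reference to the Connection (here called connection) is kept so it can be started:

MessageConsumer consumer = consumerSession.createConsumer(inQueueDeliveryDate);
consumer.setMessageListener(new MessageListener() {
    @Override
    public void onMessage(Message message) {
        try {
            String text = ((TextMessage) message).getText();
            logger.debug("Receive APS d90 result: {}", text);
            // ... map the JSON text to DeliveryData, as in receiveD90Result() ...
            message.acknowledge(); // required with CLIENT_ACKNOWLEDGE
        } catch (JMSException e) {
            logger.error("Can't receive APS d90 order result: {}", e.getMessage());
        }
    }
});
connection.start(); // delivery to the listener only begins once the connection is started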
I don't see where you're calling javax.jms.Connection.start(). In fact, it doesn't look like you even have a reference to the javax.jms.Connection instance used for your javax.jms.MessageConsumer. If you don't have a reference to the javax.jms.Connection then you can't invoke start() and you can't invoke close() when you're done so you'll be leaking connections.
Furthermore, connections are "heavy" objects and are meant to be re-used. You should create a single connection for both the producer and consumer. Also, if your application is not going to use the javax.jms.Session from multiple threads then you don't need multiple sessions either.
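A rough sketch of that restructuring, reusing the names from the question (error handling elided):

Connection connection = connectionFactory.createConnection();
try {
    connection.start(); // without start(), receive() will never return a message
    Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
    MessageProducer messageProducer = session.createProducer(outQueueMaxusOrder);
    MessageConsumer consumer = session.createConsumer(inQueueDeliveryDate);

    success = apsSchedule.sendD90Order(session, messageProducer, d90OrderAps);
    if (!success) {
        logger.error("Can't send APS schedule msg ");
    } else {
        data = apsSchedule.receiveD90Result(session, consumer);
    }
} finally {
    connection.close(); // also closes the session, producer and consumer
}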
I'm experimenting with NIO2 and running into an issue.
Here's the code I'm using:
ByteBuffer buffer = ByteBuffer.allocate(DEFAULT_BUFFER_SIZE);
channel.read(buffer, null, new CompletionHandler<Integer, Object>() {
    @Override
    public void completed(Integer result, Object attachment) {
        Packet packet = new Packet(buffer.getInt(), buffer);
        PacketHandler handler = PacketHandler.forOpcode(packet.getOpcode());
        if (!Objects.isNull(handler)) {
            handler.handle(channel, packet);
        } else {
            System.out.println("Unexpected opcode received from client. Opcode: " + packet.getOpcode());
        }
    }

    @Override
    public void failed(Throwable exc, Object attachment) {
        System.out.println("DEBUG A");
        exc.printStackTrace();
    }
});
The issue is that no matter what I send to the server, the read never completes. For testing purposes I have a very flat-format login packet set up, and I'm sending this data from the client:
ByteBuffer buffer = ByteBuffer.allocate(28);
buffer.putInt(1); //opcode
ByteBufferUtils.putString(buffer, "admin");
ByteBufferUtils.putString(buffer, "admin");
channel.write(buffer);
Even though the client writes the data, the server never finishes reading it. I've also made sure that DEFAULT_BUFFER_SIZE is equal to the size of the sent buffer to see if that was the issue, but there was no change in behaviour.
Whenever I disconnect the client (currently kept alive by a thread, for no particular reason), I get the following stack trace from failed():
java.io.IOException: The specified network name is no longer available.
at sun.nio.ch.Iocp.translateErrorToIOException(Iocp.java:309)
at sun.nio.ch.Iocp.access$700(Iocp.java:46)
at sun.nio.ch.Iocp$EventHandlerTask.run(Iocp.java:399)
at java.lang.Thread.run(Thread.java:745)
You aren't sending anything. You need to flip() the buffer before calling write(), and compact() it afterwards.
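For example, on the client side (same ByteBufferUtils helper as in the question):

ByteBuffer buffer = ByteBuffer.allocate(28);
buffer.putInt(1); // opcode
ByteBufferUtils.putString(buffer, "admin");
ByteBufferUtils.putString(buffer, "admin");
buffer.flip();          // switch the buffer from filling to draining
channel.write(buffer);  // now the bytes between position and limit are actually sent

Since this is an asynchronous channel, "afterwards" means once the write has completed: at that point compact() (or clear()) the buffer before filling it with the next message.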