Is there any way to return the number of messages that are unacknowledged?
I am using this code to get the number of messages in the queue:
DeclareOk declareOk = amqpAdmin.getRabbitTemplate().execute(
        new ChannelCallback<DeclareOk>() {
            public DeclareOk doInRabbit(Channel channel) throws Exception {
                return channel.queueDeclarePassive(name);
            }
        });
return declareOk.getMessageCount();
but I would like to know the number of unacknowledged messages as well.
I have seen that the RabbitMQ Admin tool includes that information (for each queue it shows the number of Ready, Unacked and Total messages), so I guess there must be a way to retrieve it from Java/Spring.
Thanks
UPDATE
OK, it seems there is no way to accomplish that programmatically, since listing of configuration/queues is not part of AMQP.
There is the possibility to enable the management plugin and query its REST web services about the queues (among other things). More info here:
http://www.rabbitmq.com/management.html
As you say in your update, if you enable the management plugin, you can query the REST API.
E.g.:
`http://username:password@queue-server:15672/api/queues/%2f/queue_name.queue`
This returns JSON with (among other things)
messages_unacknowledged
messages_ready
It's good stuff if you have a safe route to the server.
The actual URL for version 3.8.9:
http://username:password@queue-server:15672/api/queues/%2F/queue-name
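From Java, a minimal sketch of that query using java.net.http (Java 11+); the host, the guest:guest credentials and the queue name below are placeholders for your setup:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class QueueStats {
    public static void main(String[] args) throws Exception {
        // host, credentials and queue name are assumptions - adjust to your setup
        String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://queue-server:15672/api/queues/%2F/queue_name.queue"))
                .header("Authorization", "Basic " + auth)
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // the JSON body contains messages_ready and messages_unacknowledged
        System.out.println(response.body());
    }
}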
Related
I am unable to read in batch with the Kafka Camel consumer, despite following an example posted here. Are there changes I need to make to my producer, or is the problem most likely with my consumer configuration?
The application in question uses the Kafka Camel component to ingest messages from a REST endpoint, validate them, and place them on a topic. A separate service then consumes them from the topic and persists them in a time-series database.
The messages were being produced and consumed one at a time, but the database expects the messages to be consumed and committed in batch for optimal performance. Without touching the producer, I tried adjusting the consumer to match the example in the answer to this question:
How to transactionally poll Kafka from Camel?
I wasn't sure how the messages would appear, so for now I'm just logging them:
from(kafkaReadingConsumerEndpoint)
    .routeId("rawReadingsConsumer")
    .process(exchange -> {
        // simple approach to generating errors
        String body = exchange.getIn().getBody(String.class);
        if (body.startsWith("error")) {
            throw new RuntimeException("can't handle the message");
        }
        log.info("BODY:{}", body);
    })
    .process(kafkaOffsetManager);
But the messages still appear to be coming across one at a time with no batch read.
My consumer config is this:
kafka:
  host: myhost
  port: myport
  consumer:
    seekTo: beginning
    maxPartitionFetchBytes: 55000
    maxPollRecords: 50
    consumerCount: 1
    autoOffsetReset: earliest
    autoCommitEnable: false
    allowManualCommit: true
    breakOnFirstError: true
Does my config need work, or are there changes I need to make to the producer to have this work correctly?
At the lowest layer, the KafkaConsumer#poll method returns a ConsumerRecords collection, which you iterate one ConsumerRecord at a time; there's no way around that.
I don't have in-depth experience with Camel, but in order to get a "batch" of records, you'll need some intermediate collection to "queue" the data that you want to eventually send downstream to some "collection consumer" process. Then you will need some "switch" processor that says "wait, process this batch" or "continue filling this batch".
As far as databases go, that process is exactly what Kafka Connect JDBC Sink does with batch.size config.
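To illustrate the intermediate-collection idea outside of Camel, here is a rough sketch with the plain Kafka client; the topic name and the persistBatch() writer are made up:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

// sketch: accumulate values across polls and only commit once a full
// batch has been handed to the database writer
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "myhost:myport");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "raw-readings-consumer");
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(List.of("raw-readings")); // hypothetical topic name
    List<String> batch = new ArrayList<>();
    while (true) {
        for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
            batch.add(record.value());
        }
        if (batch.size() >= 50) {   // the "switch": flush when the batch is full
            persistBatch(batch);    // hypothetical downstream writer
            consumer.commitSync();  // commit offsets only after a successful flush
            batch.clear();
        }
    }
}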
We solved a similar requirement by using the Aggregation [1] capability provided by Camel.
A rough code snippet:
@Override
public void configure() throws Exception {
    // 1. Define your aggregation strategy
    AggregationStrategy agg = AggregationStrategies.flexible(String.class)
            .accumulateInCollection(ArrayList.class)
            .pick(body());

    from("kafka:your-topic?and-other-params")
        // 2. Define your aggregation parameters
        .aggregate(constant(true), agg)
            .completionInterval(1000)
            .completionSize(100)
            .parallelProcessing(true)
        // 3. Generate the bulk insert statement
        .process(exchange -> {
            List<String> body = (List<String>) exchange.getIn().getBody();
            String query = generateBulkInsertQueryStatement("target-table", body);
            exchange.getMessage().setBody(query);
        })
        .to("jdbc:dataSource");
}
There are a variety of strategies that you can implement, but we chose this particular one because it allows you to create a List of strings for the message contents that we need to ingest into the db. [2]
We set a variety of different params such as completionInterval & completionSize. The most important one for us was parallelProcessing(true) [3]; without it our performance wasn't anywhere near the required throughput.
Once the aggregation has either collected 100 messages or 1000 ms has passed, the processor generates a bulk insert statement, which is then sent to the db.
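For clarity, generateBulkInsertQueryStatement in the snippet above is our own helper, not a Camel API. A naive sketch (illustration only; real code should use parameterized statements rather than string concatenation):

// hypothetical helper: builds a multi-row INSERT for a single-column table.
// No escaping is done here - use parameterized statements in real code.
private String generateBulkInsertQueryStatement(String table, List<String> rows) {
    StringBuilder sb = new StringBuilder("INSERT INTO " + table + " (payload) VALUES ");
    for (int i = 0; i < rows.size(); i++) {
        sb.append("('").append(rows.get(i)).append("')");
        if (i < rows.size() - 1) {
            sb.append(", ");
        }
    }
    return sb.toString();
}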
[1] https://camel.apache.org/components/3.18.x/eips/aggregate-eip.html
[2] https://camel.apache.org/components/3.18.x/eips/aggregate-eip.html#_aggregating_into_a_list
[3] https://camel.apache.org/components/3.18.x/eips/aggregate-eip.html#_worker_pools
I am fairly new to developing distributed applications with messaging, and to Spring Cloud Stream in particular. I am currently wondering about best practices on how to deal with errors on the broker side.
In our application, we need to both consume and produce messages from/to multiple sources/destinations like this:
Consumer side
For consuming, we have defined multiple @Beans of type java.util.function.Consumer. The configuration for those looks like this:
spring.cloud.stream.bindings.consumeA-in-0.destination=inputA
spring.cloud.stream.bindings.consumeA-in-0.group=$Default
spring.cloud.stream.bindings.consumeB-in-0.destination=inputB
spring.cloud.stream.bindings.consumeB-in-0.group=$Default
This part works quite well - when starting the application, the exchanges "inputA" and "inputB" as well as the queues "inputA.$Default" and "inputB.$Default" with corresponding bindings are automatically created in RabbitMQ.
Also, in case of an error (e.g. a queue is suddenly not available), the application gets notified immediately with a QueuesNotAvailableException and continuously tries to re-establish the connection.
My only question here is: Is there some way to handle this exception in code? Or, what are best practices to deal with failures like this on broker side?
Producer side
This one is more problematic. Producing messages is triggered by some internal logic, so we cannot use function @Beans here. Instead, we currently rely on StreamBridge to send messages. The problem is that this approach does not trigger creation of exchanges and queues on startup. So when our code calls streamBridge.send("outputA", message), the message is sent (the result is true), but it just disappears into the void, since RabbitMQ automatically drops unroutable messages.
I found that with this configuration, I can at least get RabbitMQ to create exchanges and queues as soon as the first message is sent:
spring.cloud.stream.source=produceA;produceB
spring.cloud.stream.default.producer.requiredGroups=$Default
spring.cloud.stream.bindings.produceA-out-0.destination=outputA
spring.cloud.stream.bindings.produceB-out-0.destination=outputB
I need to use streamBridge.send("produceA-out-0", message) in code to make it work, which is not too great since it means having explicit configuration hardcoded, but at least it works.
I also tried to implement the producer in a Reactor style as described in this answer, but in this case the exchange/queue is also not created on application startup, and the sent message just disappears even though the sending method returns "OK".
Failures on the broker side are not registered at all with this approach - when I simulate one, e.g. by deleting the queue or the exchange, it is not registered by the application. Only when another message is sent do I get the following in the logs:
ERROR 21804 --- [127.0.0.1:32404] o.s.a.r.c.CachingConnectionFactory : Shutdown Signal: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'produceA-out-0' in vhost '/', class-id=60, method-id=40)
Still, the result of StreamBridge#send was true in this case, but we need to know at this point that sending actually failed (we persist the state of the sent object using this boolean return value). Is there any way to accomplish that?
Any other suggestions on how to make this producer scenario more robust? Best practices?
EDIT
I found an interesting solution to the producer problem using correlations:
...
CorrelationData correlation = new CorrelationData(UUID.randomUUID().toString());
messageHeaderAccessor.setHeader(AmqpHeaders.PUBLISH_CONFIRM_CORRELATION, correlation);
Message<String> message = MessageBuilder.createMessage(payload, messageHeaderAccessor.getMessageHeaders());
boolean sent = streamBridge.send(channel, message);
try {
    final CorrelationData.Confirm confirm = correlation.getFuture().get(30, TimeUnit.SECONDS);
    if (correlation.getReturned() == null && confirm.isAck()) {
        // success logic
    } else {
        // failed logic
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    // failed logic
} catch (ExecutionException | TimeoutException e) {
    // failed logic
}
using these additional configurations:
spring.cloud.stream.rabbit.default.producer.useConfirmHeader=true
spring.rabbitmq.publisher-confirm-type=correlated
spring.rabbitmq.publisher-returns=true
This seems to work quite well, although I'm still unsure about the return value of StreamBridge#send - it is always true, and I cannot find information on the cases in which it would be false. But the rest is fine: I can get information on issues with the exchange or the queue from the correlation or the confirm.
But this solution is very much focused on RabbitMQ, which causes two problems:
our application should be able to connect to different brokers (e.g. Azure Service Bus)
in tests we use Kafka binder and I don't know how to configure the application context to make it work in this case, too
Any help would be appreciated.
On the consumer side, you can listen for an event such as the ListenerContainerConsumerFailedEvent.
https://docs.spring.io/spring-amqp/docs/current/reference/html/#consumer-events
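A minimal sketch of such an event listener (the class name is illustrative):

import org.springframework.amqp.rabbit.listener.ListenerContainerConsumerFailedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

// sketch: consumer failures are published as Spring application events
@Component
public class ConsumerFailureListener {

    @EventListener
    public void onConsumerFailed(ListenerContainerConsumerFailedEvent event) {
        // event.getThrowable() holds the cause; event.isFatal() tells you
        // whether the container has given up retrying
        if (event.isFatal()) {
            // e.g. alert operations or flip a health indicator
        }
    }
}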
On the producer side, producers only know about exchanges, not any queues bound to them; hence the requiredGroups property which causes the queue to be bound.
You only need spring.cloud.stream.default.producer.requiredGroups=$Default - you can send to arbitrary destinations using the StreamBridge and the infrastructure will be created.
@SpringBootApplication
public class So70769305Application {

    public static void main(String[] args) {
        SpringApplication.run(So70769305Application.class, args);
    }

    @Bean
    ApplicationRunner runner(StreamBridge bridge) {
        return args -> bridge.send("foo", "test");
    }
}
spring.cloud.stream.default.producer.requiredGroups=$Default
I'm trying to build a custom MQ exit to archive messages that hit a queue. I have the following code:
class MyMqExits implements WMQSendExit, WMQReceiveExit {

    @Override
    public ByteBuffer channelReceiveExit(MQCXP arg0, MQCD arg1, ByteBuffer arg2) {
        // dump the raw transmission buffer as a String
        if (arg2) {
            def _bytes = arg2.array()
            def results = new String(_bytes)
            println results
        }
        return arg2
    }
...
The content of the message (header/body) is in the byte buffer, along with some unreadable binary information. How can I parse the message (including the body and the queue name) from arg2? We've gone through IBM's documentation, but haven't found an object or anything that makes this easy.
Assuming the following two points:
1) Your sender application has not hard coded the queue name where it puts messages. So you can change the application configuration to send messages to a different object.
2) MessageId of the archived message is not important, only message body is important.
Then one alternative I can think of is to create an Alias queue that resolves to a Topic and use two subscribers to receive messages.
1) Subscriber 1: An administratively defined durable subscriber with a queue provided to receive messages. Provide the same queue name from which your existing consumer application is receiving messages.
2) Subscriber 2: Another administratively defined durable subscriber with queue provided. You can write a simple java application to get messages from this queue and archive.
3) Both subscribers subscribe to the same topic.
Here are the steps:
* Create a topic
define topic(ANY.TOPIC) TOPICSTR('/ANY_TOPIC')
* Create an alias queue that points to the topic created above
define qalias(QA.APP) target(ANY.TOPIC) targtype(TOPIC)
* Create a queue for the application that does the business logic (skip if it already exists)
define ql(Q.BUSLOGIC)
* Create a durable subscription with the queue created in the previous step as destination
define sub(SB.BUSLOGIC) topicstr('/ANY_TOPIC') dest(Q.BUSLOGIC)
* Create a queue for the application that archives messages
define ql(Q.ARCHIVE)
* Create another subscription with the queue created in the previous step as destination
define sub(SB.ARCHIVE) topicstr('/ANY_TOPIC') dest(Q.ARCHIVE)
Write a simple MQ Java/JMS application to get messages from Q.ARCHIVE and archive messages.
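A rough sketch of that archiving consumer using the IBM MQ JMS classes (JMS 2.0); host, port, channel and queue manager names are placeholders, and exception handling is omitted:

import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

// connection details below are placeholders for your environment
JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
JmsConnectionFactory cf = ff.createConnectionFactory();
cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, "mqhost");
cf.setIntProperty(WMQConstants.WMQ_PORT, 1414);
cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "DEV.APP.SVRCONN");
cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QM1");

try (JMSContext ctx = cf.createContext()) {
    JMSConsumer consumer = ctx.createConsumer(ctx.createQueue("queue:///Q.ARCHIVE"));
    String body;
    while ((body = consumer.receiveBody(String.class, 5000)) != null) { // 5s idle timeout
        // archive 'body' wherever appropriate (file, database, object store...)
    }
}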
A receive exit is not going to give you the whole message. Send and receive exits operate on the transmission buffers sent/received by channels. These buffers contain various protocol flows, which are not documented because the protocol is not public, and part of those flows will be the messages broken down into chunks of up to 32KB.
You don't give enough information in your question for me to know what type of channel you are using, but I'm guessing it's on the client side since you are writing it in Java and that is the only environment where that is applicable.
Writing the exit at the client side, you'll need to be careful you deal with the cases where the message is not successfully put to the target queue, and you'll need to manage syncpoints etc.
If you were using QMgr-QMgr channels, you should use a message exit to capture the MQXR_MSG invocations where the whole message is given to you. If you put any further messages in a channel message exit, the messages you put are included in the channel's Syncpoint and so committed if the original messages were committed.
Since you are using client-QMgr channels, you could look at an API Exit on the QMgr end (currently client side API Exits are only supported for C clients) and catch all the MQPUT calls. This exit would also give you the MQPUT return codes so you could code your exit to look out for, and deal with failed puts.
Of course, writing an exit is a complicated task, so it may be worth finding out if there are any pre-written tools that could do this for you instead of starting from scratch.
I fully agree with Morag & Shashi, wrong approach. There is an open source project called Message Multiplexer (MMX) that will get a message from a queue and output it to one or more queues. Context information is maintained across the message put(s). For more info on MMX go to: http://www.capitalware.com/mmx_overview.html
If you cannot change the source or target queues to insert MMX into the mix then an API Exit may do the trick. Here is a blog posting about message replication via an API Exit: http://www.capitalware.com/rl_blog/?p=3304
This is quite an old question but it's worth replying with an update that's relevant to MQ 9.2.3 or later. There is a new feature called Streaming Queues (see https://www.ibm.com/docs/en/ibm-mq/9.2?topic=scenarios-streaming-queues) and one of the use-cases it is designed to support is putting a copy of every message sent to a given queue, to an alternative queue. Another application can then consume the duplicate messages and archive them separately to the application that is processing the original messages.
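For example, a sketch in MQSC, reusing the queue names from the earlier answer (check the Streaming Queues documentation for your exact MQ level):

* Deliver a copy of every message put to Q.BUSLOGIC onto Q.ARCHIVE
* (BESTEF = best effort; use MUSTDUP if the copy must always succeed)
alter qlocal(Q.BUSLOGIC) streamq(Q.ARCHIVE) strmqos(bestef)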
I have a scheduled task that performs the following bit of code:
try {
    rabbitTemplate.convertAndSend("TEST");
    if (!isOn()) {
        turnOn();
    }
}
catch (AmqpException e) {
    if (isOn()) {
        turnOff();
    }
}
Everything works just fine. It sends this message to the default "AMQP default" exchange. I do not have a consumer on the other end to consume these messages because I am just ensuring that the server is still alive. Will these messages accumulate over time and cause a memory leak?
Thanks!
K
Do you have a RabbitMQ user interface?
You should be able to see the queues that are being created and whether they are persistent or not. Last time I checked, the default behaviour of Spring AMQP is to create persistent queues.
Have a look at the RabbitMQ Management Plugin: http://www.rabbitmq.com/management.html
Using the RabbitMQ Management Plugin, you can also consume messages that you've published via your code.
Regarding what happens with the messages: they will just pile up until RabbitMQ hits its limits, and then it will no longer accept messages until you purge the queue or consume them. With the default RabbitMQ settings, I was able to send about 4 million simple text messages to a queue before it started blocking.
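If you want to bound that growth, one option is to cap the queue with a max-length policy; the policy name and queue pattern here are made up:

# drop messages from the head once a matching queue holds 100000 messages
rabbitmqctl set_policy test-limit "^test" '{"max-length":100000}' --apply-to queues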
I have a Java client which monitors a RabbitMQ queue. I am able to get the count of messages currently in the queue with this code:
@Resource
RabbitAdmin rabbitAdmin;
..........
DeclareOk declareOk = rabbitAdmin.getRabbitTemplate().execute(new ChannelCallback<DeclareOk>() {
    public DeclareOk doInRabbit(Channel channel) throws Exception {
        return channel.queueDeclarePassive("test.pending");
    }
});
return declareOk.getMessageCount();
I want to get some additional details, like:
Message body of currently enqueued items.
Total number of messages enqueued since the queue was created.
Is there any way to retrieve these data in Java client?
With the AMQP protocol (including RabbitMQ's implementation) you can't get such info with a 100% guarantee.
The closest thing to a message count is the count returned with queue.declare-ok (AMQP.Queue.DeclareOk in the Java AMQP client library).
While the message count you receive with queue.declare-ok may match the exact number of enqueued messages, you can't rely on it, as it doesn't count messages which are awaiting acknowledgement or which were published to the queue during a not-yet-committed transaction.
It really depends on what kind of precision you need.
As to the bodies of enqueued messages, you would have to manually extract all messages from the queue, view their bodies and put them back in the queue. This is the only way to do what you want.
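A rough sketch of that drain-and-inspect approach with the plain Java client, assuming you already have a Channel as in the snippet above (treat it as a debugging aid; ordering can suffer if producers are active):

import com.rabbitmq.client.GetResponse;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// sketch: pull every message off the queue without auto-ack, inspect it,
// publish a copy back, then ack the original
List<GetResponse> drained = new ArrayList<>();
GetResponse resp;
while ((resp = channel.basicGet("test.pending", false)) != null) {
    System.out.println(new String(resp.getBody(), StandardCharsets.UTF_8));
    drained.add(resp);
}
for (GetResponse r : drained) {
    // "" = default exchange, routing key = queue name
    channel.basicPublish("", "test.pending", r.getProps(), r.getBody());
    channel.basicAck(r.getEnvelope().getDeliveryTag(), false);
}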
You can get some information about message counts with the Management Plugin, the RabbitMQ Management HTTP API and the rabbitmqctl util (see list_queues, list_channels).
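For example, to list the ready and unacknowledged counts per queue:

rabbitmqctl list_queues name messages messages_ready messages_unacknowledged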
You can't get the total count of messages published since the queue was created, and I think nobody has implemented such a statistic since it is of little use (FYI, at an average flow of 10k messages per second it would take tens of millions of years to overflow a uint64 counter).
AMQP.Queue.DeclareOk dok = channel.queueDeclare(QUEUE_NAME, true, false, false, queueArgs);
dok.getMessageCount();
To access queue details via the HTTP API:
http://public-domain-name:15672/api/queues/%2f/queue_name
To access queue details via a command from the localhost CLI prompt:
curl -i -u guest_uname:guest_password http://localhost:15672/api/queues/%2f/queue_name
where %2f is the default vhost "/".