Spring AMQP - Channel transacted vs publisher confirms - java

I have a Jersey application in which I am using the Spring AMQP library to publish messages to RabbitMQ exchanges. I am using a CachingConnectionFactory in my RabbitTemplate, and channel-transacted was initially set to false. I noticed that some messages were not actually published to the exchange, so I changed channel-transacted to true.
On doing this, my publishing function started taking 500ms (it was 5ms while channel-transacted was false). Is there something I am missing here? 500ms is way too much.
As an alternative, I tried setting publisherConfirms to true and added a ConfirmCallback. I haven't benchmarked this yet, but would like to know whether it will perform better than channel-transacted, given that the sole purpose of this application is to publish messages to an exchange in RabbitMQ.
Also, if I go with publisherConfirms, I would like to implement retries in case of failures, or at least be able to throw exceptions. With channel-transacted I get an exception on failure, but the latency is high. I am not sure how to implement retries with publisherConfirms.
I tried retries with publisher confirms, but my code just hangs.
Here's my code:
CompleteMessageCorrelationData.java
public class CompleteMessageCorrelationData extends CorrelationData {

    private final Message message;
    private final int retryCount;

    public CompleteMessageCorrelationData(String id, Message message, int retryCount) {
        super(id);
        this.message = message;
        this.retryCount = retryCount;
    }

    public Message getMessage() {
        return this.message;
    }

    public int getRetryCount() {
        return this.retryCount;
    }

    @Override
    public String toString() {
        return "CompleteMessageCorrelationData [id=" + getId() + ", message=" + this.message + "]";
    }
}
Setting up the CachingConnectionFactory:
private static CachingConnectionFactory factory = new CachingConnectionFactory("host");

static {
    factory.setUsername("rmq-user");
    factory.setPassword("rmq-password");
    factory.setChannelCacheSize(50);
    factory.setPublisherConfirms(true);
}
private final RabbitTemplate rabbitTemplate = new RabbitTemplate(factory);

rabbitTemplate.setConfirmCallback((correlation, ack, reason) -> {
    if (correlation != null && !ack) {
        CompleteMessageCorrelationData data = (CompleteMessageCorrelationData) correlation;
        log.info("Received nack for message: " + data.getMessage() + " for reason: " + reason);
        int counter = data.getRetryCount();
        if (counter < Integer.parseInt(max_retries)) {
            this.rabbitTemplate.convertAndSend(data.getMessage().getMessageProperties().getReceivedExchange(),
                    data.getMessage().getMessageProperties().getReceivedRoutingKey(),
                    data.getMessage(), new CompleteMessageCorrelationData(id, data.getMessage(), counter++));
        } else {
            log.error("Max retries exceeded for message: " + data.getMessage());
        }
    }
});
Publishing the message:
rabbitTemplate.convertAndSend(exchangeName, routingKey, message, new CompleteMessageCorrelationData(id, message, 0));
So, in short:
Am I doing something wrong with channel-transacted that makes the latency so high?
If I were to implement publisherConfirms with retries instead, what's wrong with my approach, and will it perform better than channel-transacted, considering this application's only job is publishing messages to RabbitMQ?

As you have found, transactions are expensive and significantly degrade performance; 500ms seems high, though.
I don't believe publisher confirms will help much here. You still have to wait for the round trip to the broker before releasing the servlet thread. Publisher confirms are useful when you send a batch of messages and then wait for all the confirms to come back; when you send a single message and immediately wait for its confirm, it is unlikely to be much faster than a transaction.
You could try it, though; the code is a bit complex, especially if you want to handle exceptions, which you get for "free" with transactions.
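If you do want to try it, here is a minimal send-then-wait sketch, assuming Spring AMQP 2.1+ (where CorrelationData exposes a future for its confirm); the exchange and routing-key values are placeholders:

import java.util.UUID;
import java.util.concurrent.TimeUnit;
import org.springframework.amqp.AmqpException;
import org.springframework.amqp.rabbit.connection.CorrelationData;

void sendAndWaitForConfirm(String exchange, String routingKey, Object payload) throws Exception {
    // Publisher confirms must be enabled on the CachingConnectionFactory for the future to complete.
    CorrelationData correlation = new CorrelationData(UUID.randomUUID().toString());
    rabbitTemplate.convertAndSend(exchange, routingKey, payload, correlation);
    CorrelationData.Confirm confirm = correlation.getFuture().get(5, TimeUnit.SECONDS);
    if (!confirm.isAck()) {
        // Surface a nack as an exception, similar to what a transaction gives you.
        throw new AmqpException("Publish failed: " + confirm.getReason());
    }
}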

Related

Spring transaction synchronisation not working (TransactionalEventListener)

I am aware this question has been asked in slightly different forms on this site, but following the advice given in those posts took me nowhere. I have already spent close to two days on this and I am out of ideas.
We have a Spring Boot microservice which does nothing more than listen for messages coming into an IBM MQ queue, do a little bit of transformation, and forward them to a Kafka topic. We want this to be transactional so that no message is lost (this is critical to our business). We also want to be able to react to transaction commit and rollback events for monitoring and support purposes.
I just followed a few "how to" guides on the internet, and I can easily achieve transactional behaviour declaratively using the @Transactional annotation like below:
@Transactional(transactionManager = "chainedTransactionManager", rollbackFor = Throwable.class)
@JmsListener(destination = "DEV.QUEUE.1", containerFactory = "mqListenerContainerFactory", concurrency = "10")
public void receiveMessage(@Headers Map<String, Object> jmsHeaders, String message) {
    // Some work here including forward to Kafka topic:
    // ...
    // ...
    // Then publish an event which is supposed to be acted on:
    applicationEventPublisher.publishEvent(new MqConsumedEvent("JMS Correlation ID", "Message Payload"));
    // Uncommented exception below to create a rollback scenario,
    // or comment it out to have the processing complete
    throw new RuntimeException("No good Pal!");
}
As expected, when playing a message with the exception in place, the processing spins forever because the transaction manager rolls back again and again. This is good for us.
Now we expect the MqConsumedEvent published inside our listener method to be intercepted by the onRollback method below:
@Component
@Slf4j
public class MqConsumedEventListener {

    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT, classes = MqConsumedEvent.class)
    public void onCommit(MqConsumedEvent event) {
        log.info("MQ message with correlation id {} committed to Kafka", event.getCorrelationId());
    }

    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK, classes = MqConsumedEvent.class)
    public void onRollback(MqConsumedEvent event) {
        log.info("Failed to commit MQ message with correlation id {} to Kafka", event.getCorrelationId());
    }
}
This is not happening. Similarly, commenting out the exception throwing in the listener gets our MQ message passed to Kafka; however, the onCommit method is not executed.
From further research and Spring debugging I believe this is not executing because Spring thinks there is no active transaction when the event is published, and as such the event is simply ignored. Evaluating TransactionSynchronizationManager.isActualTransactionActive() and printing it in the logs shows false, which is hard to explain because, as I said, the transaction rolls back as expected when an exception is thrown on purpose.
Thank you in advance for your inputs.
UPDATE:
The breakpoints I set brought me to the execution of this ApplicationListenerMethodTransactionalAdapter code:
@Override
public void onApplicationEvent(ApplicationEvent event) {
    if (TransactionSynchronizationManager.isSynchronizationActive() &&
            TransactionSynchronizationManager.isActualTransactionActive()) {
        TransactionSynchronization transactionSynchronization = createTransactionSynchronization(event);
        TransactionSynchronizationManager.registerSynchronization(transactionSynchronization);
    }
    else if (this.annotation.fallbackExecution()) {
        if (this.annotation.phase() == TransactionPhase.AFTER_ROLLBACK && logger.isWarnEnabled()) {
            logger.warn("Processing " + event + " as a fallback execution on AFTER_ROLLBACK phase");
        }
        processEvent(event);
    }
    else {
        // No transactional event execution at all
        if (logger.isDebugEnabled()) {
            logger.debug("No transaction is active - skipping " + event);
        }
    }
}
For a reason I do not understand, the first if condition is false. And since fallbackExecution is also false (I haven't set it to true in my @TransactionalEventListener usage), execution ends up in the else branch and simply skips the event.
I had the same problem. In my case it turned out that I had defined an ApplicationEventMulticaster in my project:
@Bean
public ApplicationEventMulticaster applicationEventMulticaster() {
    var eventMulticaster = new SimpleApplicationEventMulticaster();
    eventMulticaster.setTaskExecutor(new SimpleAsyncTaskExecutor());
    return eventMulticaster;
}
That makes the ApplicationListenerMethodTransactionalAdapter execute in a different thread (not the one where the event was published). That is why TransactionSynchronizationManager.isActualTransactionActive() ends up being false and the event does not get processed.
Removing the definition of the ApplicationEventMulticaster fixed it for me.
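If you need to keep a custom multicaster bean for other reasons, a sketch of a synchronous variant (my suggestion, not something from the original setup) is simply to omit the task executor, so events are dispatched on the publishing thread:

@Bean
public ApplicationEventMulticaster applicationEventMulticaster() {
    // No task executor: listeners run on the thread that published the event,
    // so the transaction synchronization registered for @TransactionalEventListener
    // can see the active transaction.
    return new SimpleApplicationEventMulticaster();
}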

RSocket Channel with Spring Boot - Clients miss their own first message

Suppose I have a simple RSocket and Spring Boot server. The server broadcasts all incoming client messages to all connected clients (including the sender). The client and server look like this:
Server:
public RSocketController() {
    this.processor = DirectProcessor.<String>create().serialize();
    this.sink = this.processor.sink();
}

@MessageMapping("channel")
Flux<String> channel(final Flux<String> messages) {
    this.registerProducer(messages);
    // breakpoint here
    return processor
            .doOnSubscribe(subscription -> logger.info("sub"))
            .doOnNext(message -> logger.info("[Sent] " + message));
}

private Disposable registerProducer(Flux<String> flux) {
    return flux
            .doOnNext(message -> logger.info("[Received] " + message))
            .map(String::toUpperCase)
            // .delayElements(Duration.ofSeconds(1))
            .subscribe(this.sink::next);
}
Client:
#ShellMethod("Connect to the server")
public void connect(String name) {
this.name = name;
this.rsocketRequester = rsocketRequesterBuilder
.rsocketStrategies(rsocketStrategies)
.connectTcp("localhost", 7000)
.block();
}
#ShellMethod("Establish a channel")
public void channel() {
this.rsocketRequester
.route("channel")
.data(this.fluxProcessor.doOnNext(message -> logger.info("[Sent] {}", message)))
.retrieveFlux(String.class)
.subscribe(message -> logger.info("[Received] {}", message));
}
#ShellMethod("Send a lower case message")
public void send(String message) {
this.fluxSink.next(message.toLowerCase());
}
The problem is: the first message a client sends is processed by the server but never reaches the sender again. All subsequent messages are delivered without any problems, and all other clients already connected receive all messages.
What I have noticed so far while debugging:
When I call channel() in the client, retrieveFlux() and subscribe() are called, but the breakpoint in the corresponding server method is not triggered.
Only when the client sends its first message with send() is the breakpoint on the server triggered.
Using .delayElements() on the server seems to "solve" the problem.
What am I doing wrong here?
And why does it take the first send() to trigger the server's breakpoint?
Thanks in advance!
A DirectProcessor does not have a buffer: if it has no subscribers, incoming messages are dropped.
(Citing its Javadoc: "If there are no Subscribers, upstream items are dropped.")
I think that when RSocketController.registerProducer() calls flux.[...].subscribe(), it immediately starts processing the incoming messages from flux and passing them to the processor's sink, but the subscription to the processor has not happened yet. Thus the first messages are dropped.
I guess that the subscription to the processor is done by the framework after the RSocketController.channel(...) method returns. You should be able to set a breakpoint in your processor.doOnSubscribe(..) callback to see where it actually happens.
So moving the registerProducer() call into a processor.doOnSubscribe() callback may solve your issue, like this:
#MessageMapping("channel")
Flux<String> channel(final Flux<String> messages) {
return processor
.doOnSubscribe(subscription -> this.registerProducer(messages))
.doOnSubscribe(subscription -> logger.info("sub"))
.doOnNext(message -> logger.info("[Sent] " + message));
}
But personally I would prefer to replace the DirectProcessor with UnicastProcessor.create().onBackpressureBuffer().publish(), so that broadcasting to multiple subscribers becomes a separate operation, there can be a buffer between the sink and the subscribers, and late subscribers and backpressure can be handled in a better way.
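A rough sketch of that alternative, using the pre-Sinks Reactor 3 API (this is my reading of the suggestion, with field names mirroring the controller in the question):

private final UnicastProcessor<String> processor = UnicastProcessor.create();
private final FluxSink<String> sink = processor.sink();
// UnicastProcessor accepts exactly one subscriber and buffers items until it arrives;
// publish().autoConnect() then fans that single stream out to every channel subscriber.
private final Flux<String> broadcast = processor
        .onBackpressureBuffer() // keep messages that arrive before the first subscriber
        .publish()              // multicast the single upstream to many downstreams
        .autoConnect();         // connect on the first subscriber instead of dropping

The channel(...) method would then return broadcast instead of processor.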

Camel: PollEnrich generating a lot of Timed Waiting threads

I have this Camel route:
from("file:{{PATH_INPUT}}?charset=iso-8859-1&delete=true")
    .process(new ProcessorName())
    .pollEnrich().simple("${property.URI_FILE}", String.class)
        .aggregationStrategy(new Estrategia()).timeout(10000).aggregateOnException(true)
    .choice()
        .when(simple("${property.result} == 'OK'"))
            .to(URI_OUTPUT)
    .endChoice();
This route takes a file from PATH_INPUT, compares it with the file at URI_FILE (I set the URI_FILE property in ProcessorName()), and if the URI_FILE body contains specific data, the result is "OK" and the message is sent to URI_OUTPUT (ActiveMQ).
This works fine, but later I noticed that it generates a lot of waiting threads, one for each exchange.
I don't know why this is happening. I have tried with a ConsumerTemplate and the results are the same.
Yes, this is expected if you generate a unique URI per endpoint you poll. I assume you generate a dynamic fileName which you specify in that URI, and that you see a thread per endpoint?
I have logged a ticket to make this easier in the future:
https://issues.apache.org/jira/browse/CAMEL-11250
If you just want to set the message body to a specific file, the fastest and easiest way is to use setBody with a java.io.File type:
.setBody(simple("${property.URI_FILE}", java.io.File.class))
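Applied to the route from the question, that would look something like the sketch below (assuming the enrich step was only needed to load the file's contents; the comparison logic would have to move into a processor):

from("file:{{PATH_INPUT}}?charset=iso-8859-1&delete=true")
    .process(new ProcessorName())
    // Read the referenced file directly; no per-exchange polling consumer
    // (and thus no extra waiting thread) is created.
    .setBody(simple("${property.URI_FILE}", java.io.File.class))
    .choice()
        .when(simple("${property.result} == 'OK'"))
            .to(URI_OUTPUT)
    .endChoice();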
I ran into the same trouble and faced a memory leak. As a workaround, I implemented my own org.apache.camel.spi.PollingConsumerPollStrategy, which captures the Consumer when it is begun (by pollEnrich) and hands it to a bean that holds all of these consumers in a Map.
Then I added a timer route whose only purpose is to trigger a purge action on the Map, checking whether a given time limit has been reached for each consumer. If so, it stops the Consumer (interrupting its related thread) and removes it from the Map.
Like this:
from("direct://foo")
.to("an endpoint that returns the file name")
.pollEnrich()
.simple("file://{{app.runtime.draft.path}}"
+ "?fileName=${body}"
+ "&recursive=true"
+ "&delete=true"
+ "&pollStrategy=#myFilePollingStrategy" // my poll strategy
+ "&maxMessagesPerPoll=1")
.timeout(6 * 1000L)
.end()
.to("direct://a")
.to("direct://b")
.to("direct://c")
.end();
from("timer://file-consumer-purge?period=5s")
.bean(fileConsumerController, "purge")
.end();
@Component
public class FileConsumerController {

    private Map<Consumer, Long> mapConsumers = new ConcurrentHashMap<>();
    private static final long LIMIT = 25 * 1000L; // 25 seconds

    public void hold(Consumer consumer) {
        mapConsumers.put(consumer, System.currentTimeMillis());
    }

    public void purge() {
        mapConsumers.forEach((consumer, startTime) -> {
            if (System.currentTimeMillis() - startTime > LIMIT) {
                try {
                    consumer.stop();
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    mapConsumers.remove(consumer);
                }
            }
        });
    }
}
@Component
public class MyFilePollingStrategy extends DefaultPollingConsumerPollStrategy {

    @Autowired
    FileConsumerController fileConsumerController;

    @Override
    public boolean begin(Consumer consumer, Endpoint endpoint) {
        fileConsumerController.hold(consumer);
        return super.begin(consumer, endpoint);
    }
}
Notes:
I monitored the behavior through JConsole.
I've only overridden the begin() method and haven't tested the effects in unexpected or error scenarios.
Hope this helps for now, and may the component be evolved. :)

Returning a value from thread

First of all, yes, I looked this question up on Google and did not find an answer to it. There are only answers where the thread is FINISHED and then the value is returned. What I want is to return an "infinite" amount of values.
Just to make it clearer: my thread reads messages from a socket and never really finishes. So whenever a new message comes in, I want another class to get that message. How would I do that?
public void run() {
    while (ircMessage != null) {
        ircMessage = in.readLine();
        System.out.println(ircMessage);
        if (ircMessage.contains("PRIVMSG")) {
            String[] ViewerNameRawRaw;
            ViewerNameRawRaw = ircMessage.split("#");
            String ViewerNameRaw = ViewerNameRawRaw[2];
            String[] ViewerNameR = ViewerNameRaw.split(".tmi.twitch.tv");
            viewerName = ViewerNameR[0];
            String[] ViewerMessageRawRawRaw = ircMessage.split("PRIVMSG");
            String ViewerMessageRawRaw = ViewerMessageRawRawRaw[1];
            String ViewerMessageRaw[] = ViewerMessageRawRaw.split(":", 2);
            viewerMessage = ViewerMessageRaw[1];
        }
    }
}
What you are describing is a typical scenario of asynchronous communication, and the usual solution is a queue. Your thread is a producer: each time it reads a message from the socket, it builds its result and puts it on a queue. Any entity interested in the results listens on the queue (i.e. is a consumer). Read more about queues: you can send a message so that only one consumer gets it, or publish it so that all registered consumers may get it. The queue implementation could be a commercially available product such as RabbitMQ, or as simple as the in-memory queues the JDK provides (see the Queue interface and its various implementations). Another way to go about it is communication over HTTP: your thread reads a message from the socket, builds a result, and sends it, say via REST, to a consumer that exposes a REST API your thread can call.
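A minimal in-memory sketch of that producer/consumer idea (class and method names here are hypothetical; the reading thread stands in for your socket loop):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MessagePipe {

    // Shared, thread-safe queue: the socket-reading thread produces, any other class consumes.
    private final BlockingQueue<String> messages = new LinkedBlockingQueue<>();

    // Call this from the reading thread for every parsed message.
    public void publish(String message) throws InterruptedException {
        messages.put(message);
    }

    // Call this from the consuming class; it blocks until the next message arrives.
    public String nextMessage() throws InterruptedException {
        return messages.take();
    }
}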
Why not have a status variable in your thread class? You can then update this during execution and before exiting. Once the thread has completed, you can still query the status.
public static void main(String[] args) throws InterruptedException {
    threading th = new threading();
    System.out.println("before run Status:" + th.getStatus());
    th.start();
    Thread.sleep(500);
    System.out.println("running Status:" + th.getStatus());
    while (th.isAlive()) {}
    System.out.println("after run Status:" + th.getStatus());
}
Extend thread to be:
public class threading extends Thread {

    private int status = -1; // not started

    private void setStatus(int status) {
        this.status = status;
    }

    public void run() {
        setStatus(1); // running
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
        }
        setStatus(0); // exited cleanly
    }

    public int getStatus() {
        return this.status;
    }
}
And get an output of:
before run Status:-1
running Status:1
after run Status:0

Camel ActiveMQ Performance Tuning

Situation
At present, we use some custom code on top of ActiveMQ libraries for JMS messaging. I have been looking at switching to Camel, for ease of use, ease of maintenance, and reliability.
Problem
With my present configuration, Camel's ActiveMQ implementation is substantially slower than our old implementation, both in terms of delay per message sent and received, and time taken to send and receive a large flood of messages. I've tried tweaking some configuration (e.g. maximum connections), to no avail.
Test Approach
I have two applications, one using our old implementation and one using a Camel implementation. Each application sends JMS messages to a topic on a local ActiveMQ server, and also listens for messages on that topic. This is used to test two scenarios:
- Sending 100,000 messages to the topic in a loop, and seeing how long it takes from the start of sending to the end of handling all of them.
- Sending a message every 100 ms and measuring the delay (in ns) from sending to handling each message.
Question
Can I improve upon the implementation below, in terms of time sent to time processed, for both floods of messages and individual messages? Ideally, improvements would involve tweaking some config that I have missed, or suggesting a better way to do it, and not be too hacky. Explanations of improvements would be appreciated.
Edit: Now that I am sending messages asynchronously, I appear to have a concurrency issue: receivedCount does not reach 100,000. Looking at the ActiveMQ web interface, 100,000 messages are enqueued and 100,000 dequeued, so it's probably a problem on the message-processing side. I've changed receivedCount to an AtomicInteger and added some logging to aid debugging. Could this be a problem with Camel itself (or the ActiveMQ component), or is there something wrong with the message-processing code? As far as I can tell, only ~99,876 messages are making it through to floodProcessor.process.
Test Implementation
Edit: Updated with async sending and logging for concurrency issue.
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.activemq.pool.PooledConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsConfiguration;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.log4j.Logger;

public class CamelJmsTest {

    private static final Logger logger = Logger.getLogger(CamelJmsTest.class);

    private static final boolean flood = true;
    private static final int NUM_MESSAGES = 100000;

    private final CamelContext context;
    private final ProducerTemplate producerTemplate;

    private long timeSent = 0;

    private final AtomicInteger sendCount = new AtomicInteger(0);
    private final AtomicInteger receivedCount = new AtomicInteger(0);

    public CamelJmsTest() throws Exception {
        context = new DefaultCamelContext();

        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory(connectionFactory);
        JmsConfiguration jmsConfiguration = new JmsConfiguration(pooledConnectionFactory);
        logger.info(jmsConfiguration.isTransacted());

        ActiveMQComponent activeMQComponent = ActiveMQComponent.activeMQComponent();
        activeMQComponent.setConfiguration(jmsConfiguration);
        context.addComponent("activemq", activeMQComponent);

        RouteBuilder builder = new RouteBuilder() {
            @Override
            public void configure() {
                Processor floodProcessor = new Processor() {
                    @Override
                    public void process(Exchange exchange) throws Exception {
                        int newCount = receivedCount.incrementAndGet();
                        //TODO: Why doesn't newCount hit 100,000? Remove this logging once fixed
                        logger.info(newCount + ":" + exchange.getIn().getBody());
                        if (newCount == NUM_MESSAGES) {
                            logger.info("all messages received at " + System.currentTimeMillis());
                        }
                    }
                };
                Processor spamProcessor = new Processor() {
                    @Override
                    public void process(Exchange exchange) throws Exception {
                        long delay = System.nanoTime() - timeSent;
                        logger.info("Message received: " + exchange.getIn().getBody(List.class) + " delay: " + delay);
                    }
                };
                from("activemq:topic:test?exchangePattern=InOnly")//.threads(8) // Having 8 threads processing appears to make things marginally worse
                    .choice()
                        .when(body().isInstanceOf(List.class)).process(flood ? floodProcessor : spamProcessor)
                        .otherwise().process(new Processor() {
                            @Override
                            public void process(Exchange exchange) throws Exception {
                                logger.info("Unknown message type received: " + exchange.getIn().getBody());
                            }
                        });
            }
        };
        context.addRoutes(builder);

        producerTemplate = context.createProducerTemplate();
        // For some reason, producerTemplate.asyncSendBody requires an Endpoint to be passed in, so the below is redundant:
        // producerTemplate.setDefaultEndpointUri("activemq:topic:test?exchangePattern=InOnly");
    }

    public void send() {
        int newCount = sendCount.incrementAndGet();
        producerTemplate.asyncSendBody("activemq:topic:test?exchangePattern=InOnly", Arrays.asList(newCount));
    }

    public void spam() {
        Executors.newSingleThreadScheduledExecutor().scheduleWithFixedDelay(new Runnable() {
            @Override
            public void run() {
                timeSent = System.nanoTime();
                send();
            }
        }, 1000, 100, TimeUnit.MILLISECONDS);
    }

    public void flood() {
        logger.info("starting flood at " + System.currentTimeMillis());
        for (int i = 0; i < NUM_MESSAGES; i++) {
            send();
        }
        logger.info("flooded at " + System.currentTimeMillis());
    }

    public static void main(String... args) throws Exception {
        CamelJmsTest camelJmsTest = new CamelJmsTest();
        camelJmsTest.context.start();
        if (flood) {
            camelJmsTest.flood();
        } else {
            camelJmsTest.spam();
        }
    }
}
It appears from your current JmsConfiguration that you are only consuming messages with a single thread. Was this intended?
If not, you need to set the concurrentConsumers property to something higher. This will create a thread pool of JMS listeners to service your destination.
Example:
JmsConfiguration config = new JmsConfiguration(pooledConnectionFactory);
config.setConcurrentConsumers(10);
This will create 10 JMS listener threads that will process messages concurrently from your queue.
EDIT:
For topics you can do something like this:
JmsConfiguration config = new JmsConfiguration(pooledConnectionFactory);
config.setConcurrentConsumers(1);
config.setMaxConcurrentConsumers(1);
And then in your route:
from("activemq:topic:test?exchangePattern=InOnly").threads(10)
Also, in ActiveMQ you can use a virtual destination. The virtual topic will act like a queue, and then you can use the same concurrentConsumers approach you would use for a normal queue.
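A sketch of what that could look like, assuming ActiveMQ's default virtual-topic naming convention (producers send to VirtualTopic.*, and each consumer group reads its own Consumer.<group>.VirtualTopic.* queue; the group name here is made up):

// Producer side: publish to the virtual topic instead of a plain topic.
producerTemplate.asyncSendBody("activemq:topic:VirtualTopic.test?exchangePattern=InOnly",
        Arrays.asList(newCount));

// Consumer side (inside a RouteBuilder): the group's queue view of the topic
// behaves like a queue, so concurrentConsumers applies normally.
from("activemq:queue:Consumer.camelTest.VirtualTopic.test?concurrentConsumers=10")
    .process(floodProcessor);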
Further Edit (For Sending):
You are currently doing a blocking send. You need to use producerTemplate.asyncSendBody() instead.
Edit
I just built a project with your code and ran it. I set a breakpoint in your floodProcessor method, and newCount is reaching 100,000. I think you may be getting thrown off by your logging and by the fact that you are sending and receiving asynchronously. On my machine, newCount hit 100,000 and the "all messages received" message was logged well under 1 second after execution, but the program continued to log for another 45 seconds afterwards since the output was buffered. You can see the effect of logging on how close your newCount number is to the body number by reducing the logging. I turned the logging down to info, shut off Camel logging, and the two numbers matched at the end of the log:
INFO CamelJmsTest - 99996:[99996]
INFO CamelJmsTest - 99997:[99997]
INFO CamelJmsTest - 99998:[99998]
INFO CamelJmsTest - 99999:[99999]
INFO CamelJmsTest - 100000:[100000]
INFO CamelJmsTest - all messages received at 1358778578422
I took over from the original poster in looking at this as part of another task, and found that the problem with losing messages was actually in the ActiveMQ config.
We had sendFailIfNoSpace=true set, which resulted in messages being dropped whenever we sent fast enough to fill the publisher's cache. By playing with the policyEntry topic cache size I could vary the number of messages that disappeared, with as much reliability as can be expected of such a race condition. With sendFailIfNoSpace=false (the default), I could use any cache size I liked and never failed to receive all messages.
In theory, sendFailIfNoSpace should throw a ResourceAllocationException when it drops a message, but that is either not happening(!) or is being ignored somehow. Also interesting is that our custom JMS wrapper code doesn't hit this problem despite running the throughput test faster than Camel. Maybe that code is fast in a way that empties the publishing cache sooner, or else we are overriding sendFailIfNoSpace somewhere in the connection code that I haven't found yet.
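For reference, a sketch of where that flag lives when configuring an embedded broker in Java (we set it in the broker XML; this is just the standard ActiveMQ API equivalent):

import org.apache.activemq.broker.BrokerService;

BrokerService broker = new BrokerService();
// true:  sends fail (in theory, with ResourceAllocationException) when usage limits are full
// false: producers block until space frees up, so no messages are dropped (the default)
broker.getSystemUsage().setSendFailIfNoSpace(false);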
On the question of speed, we have implemented all the suggestions mentioned here so far except virtual destinations, but the Camel version of the 100K-message test still runs in 16 seconds on my machine, compared to 10 seconds for our own wrapper. As mentioned above, I have a sneaking suspicion that we are (implicitly or otherwise) overriding config somewhere in our wrapper, but I doubt it is anything that would cause that big a performance boost within ActiveMQ.
Virtual destinations, as mentioned by gwithake, might speed up this particular test, but most of the time, with our real workloads, they are not an appropriate solution.
