Send 35,000 JMS messages per minute - Java

We have a Spring Boot application for load-testing another component. We need to send at most 35,000 JMS messages per minute, so I am using a scheduler that runs a task every minute.
The problem is that when I keep the intensity low, it manages to send the messages within the specified time interval (one minute). But when the intensity is high, it takes more than one minute to send the chunk of messages. Any suggestions on the implementation below?
Scheduler class
@Component
public class MessageScheduler {

    private final Logger log = LoggerFactory.getLogger(getClass());
    private static ScheduledExecutorService executorService = Executors.newScheduledThreadPool(16);
    private final static int TIME_PERIOD = ConfigFactory.getConfig().getInt("messages.period").orElse(60000);

    @Autowired
    JmsSender sender;

    public void startScheduler() {
        Runnable runnableTask = sender::sendMessagesChunk;
        executorService.scheduleAtFixedRate(runnableTask, 0, TIME_PERIOD, TimeUnit.MILLISECONDS);
    }
}
Class for sending the messages
@Component
public class JmsSender {

    @Autowired
    TrackingManager manager;

    private final Logger log = LoggerFactory.getLogger(getClass());
    private final static int TOTAL_MESSAGES = ConfigFactory.getConfig().getInt("total.tracking.messages").orElse(10);
    private final static int TIME_PERIOD = ConfigFactory.getConfig().getInt("messages.period").orElse(60000);
    private static int failedPerPeriod = 0;
    private static int totalFailed = 0;
    private static int totalMessageCounter = 0;

    public void sendMessagesChunk() {
        log.info("Started at: {}", Instant.now());
        log.info("Sending messages with intensity {} messages/minute", TOTAL_MESSAGES);
        for (int i = 0; i < TOTAL_MESSAGES; i++) {
            try {
                long start = System.currentTimeMillis();
                MessageDTO msg = manager.createMessage();
                send(msg);
                long stop = System.currentTimeMillis();
                if (timeOfDelay(stop - start) >= 0L) {
                    Thread.sleep(timeOfDelay(stop - start));
                }
            } catch (Exception e) {
                log.info("Error : " + e.getMessage());
                failedPerPeriod++;
            }
        }
        totalMessageCounter += TOTAL_MESSAGES;
        totalFailed += failedPerPeriod;
        log.info("Finished at: {}", Instant.now());
        log.info("Success rate(of last minute): {} %, Succeeded: {}, Failed: {}, Success rate(in total): {} %, Succeeded: {}, Failed: {}",
                getSuccessRatePerPeriod(), getSuccededPerPeriod(), failedPerPeriod,
                getTotalSuccessRate(), getTotalSucceded(), totalFailed);
        failedPerPeriod = 0;
    }

    private long timeOfDelay(Long elapsedTime) {
        return (TIME_PERIOD / TOTAL_MESSAGES) - elapsedTime;
    }

    private int getSuccededPerPeriod() {
        return TOTAL_MESSAGES - failedPerPeriod;
    }

    private int getTotalSucceded() {
        return totalMessageCounter - totalFailed;
    }

    private double getSuccessRatePerPeriod() {
        return getSuccededPerPeriod() * 100D / TOTAL_MESSAGES;
    }

    private double getTotalSuccessRate() {
        return getTotalSucceded() * 100D / totalMessageCounter;
    }

    private void send(MessageDTO messageDTO) throws Exception {
        requestContextInitializator();
        JmsClient client = JmsClientBuilder.newClient(UriScheme.JmsType.AMQ);
        client.target(new URI("activemq:queue:" + messageDTO.getDestination()))
                .msgTypeVersion(messageDTO.getMsgType(), messageDTO.getVersion())
                .header(Header.MSG_VERSION, messageDTO.getVersion())
                .header(Header.MSG_TYPE, messageDTO.getMsgType())
                .header(Header.TRACKING_ID, UUID.randomUUID().toString())
                .header(Header.CLIENT_ID, "TrackingJmsClient")
                .post(messageDTO.getPayload());
    }
}

You should solve two problems:
1. The total send operation time must stay under the maximum time.
2. The messages should not be sent as fast as possible; they should be spread uniformly over the available time.
Obviously, if your send method is too slow, the maximum time will be exceeded.
The fastest way to send messages is some sort of bulk operation, but it does not matter whether your MQ API supports one, because you cannot use it anyway: the second restriction ("uniformly") rules it out.
You can send messages asynchronously, but if your MQ API creates threads for that instead of doing non-blocking async I/O, you could run into memory problems.
Using javax.jms.MessageProducer.send you can send messages asynchronously, but a new thread will be created for each one (which costs a lot of memory and server threads).
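For reference, JMS 2.0 exposes asynchronous sends through a CompletionListener callback; a minimal sketch (this assumes a JMS 2.0 provider, and producer and message are placeholders for an existing MessageProducer and Message):

// Sketch only: asynchronous send with a JMS 2.0 CompletionListener.
producer.send(message, new CompletionListener() {
    @Override
    public void onCompletion(Message m) {
        // invoked by the provider once the send has been acknowledged
    }
    @Override
    public void onException(Message m, Exception e) {
        // invoked if the asynchronous send fails
    }
});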
Another speedup could be to create only one JMS client instead of building a new one on every call to your send method.
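As a rough sketch, assuming the JmsClient from the question can be built once and reused across sends (check your client library for thread-safety), the send method could look like this:

// Sketch only: build the client once instead of on every call.
private final JmsClient client = JmsClientBuilder.newClient(UriScheme.JmsType.AMQ);

private void send(MessageDTO messageDTO) throws Exception {
    requestContextInitializator();
    client.target(new URI("activemq:queue:" + messageDTO.getDestination()))
            .msgTypeVersion(messageDTO.getMsgType(), messageDTO.getVersion())
            .header(Header.MSG_VERSION, messageDTO.getVersion())
            .header(Header.MSG_TYPE, messageDTO.getMsgType())
            .header(Header.TRACKING_ID, UUID.randomUUID().toString())
            .header(Header.CLIENT_ID, "TrackingJmsClient")
            .post(messageDTO.getPayload());
}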
To achieve the second requirement, you should fix your timeOfDelay function; it is wrong. Strictly speaking, you should take into account the probability distribution of the send time to estimate the proper delay, but you can simply do:
long accTime = 0L;
for (int i = 0; i < TOTAL_MESSAGES; i++) {
    try {
        long start = System.currentTimeMillis();
        MessageDTO msg = manager.createMessage();
        send(msg);
        long stop = System.currentTimeMillis();
        accTime += stop - start;
        if (accTime < TIME_PERIOD) {
            // spread the remaining budget evenly over the remaining messages,
            // and count the sleep as elapsed time so the total stays within TIME_PERIOD
            long sleepTime = (TIME_PERIOD - accTime) / (TOTAL_MESSAGES - i);
            Thread.sleep(sleepTime);
            accTime += sleepTime;
        }
    } catch (Exception e) {
        log.info("Error : " + e.getMessage());
        failedPerPeriod++;
    }
}

35000 msg/min is a notch below 600 msg/sec. That is not considered "a lot" and should be a relatively easy goal to clear. The primary idea is to reuse all the heavyweight JMS objects: the connection, the session and the destination. A single thread should be enough.
ConnectionFactory connFactory = .... // initialize connection factory
@Cleanup Connection conn = connFactory.createConnection();            // @Cleanup is Lombok's auto-close annotation
@Cleanup Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
Queue q = session.createQueue("example_destination");
@Cleanup MessageProducer producer = session.createProducer(q);

for (String payload : messagesToSend) {
    TextMessage message = session.createTextMessage(payload);
    producer.send(message);
    session.commit();
}
Additional speedups are possible by:
committing every n-th message instead of every message (see the sketch after the example below)
using faster acknowledge modes
using non-persistent messages
using a destination object created outside the session
sending messages asynchronously
Example of NON_PERSISTENT, DUPS_OK_ACKNOWLEDGE, ASYNC delivery:
@Cleanup Connection conn = connFactory.createConnection();
@Cleanup Session session = conn.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
Queue q = session.createQueue("example_destination");
@Cleanup MessageProducer producer = session.createProducer(q);
producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

for (String payload : messagesToSend) {
    TextMessage message = session.createTextMessage(payload);
    // JMS 2.0 asynchronous send; ExampleSendListener implements javax.jms.CompletionListener
    producer.send(message, new ExampleSendListener());
}
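A minimal sketch of the "commit every n-th message" idea, assuming the transacted session and producer from the first example (the batch size of 500 is arbitrary):

// Sketch only: commit the transacted session every 500 messages instead of after each one.
int sent = 0;
for (String payload : messagesToSend) {
    TextMessage message = session.createTextMessage(payload);
    producer.send(message);
    if (++sent % 500 == 0) {
        session.commit();
    }
}
session.commit(); // commit the final partial batch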


Multiple queues receiving same message from virtual topic creates a deadletter entry for one queue only

I am using Virtual Destinations to implement a publish/subscribe model in ActiveMQ 5.15.13.
I have a virtual topic VirtualTopic and there are two queues bound to it. Each queue has its own redelivery policy. Let's say Queue 1 will retry a message 2 times if there is an exception while processing it, and Queue 2 will retry 3 times. After the retries, the message is sent to a dead letter queue. I'm also using the individual dead letter queue strategy so that each queue has its own dead letter queue.
I've observed that when a message is sent to VirtualTopic, the message with the same message id is delivered to both queues. I'm facing an issue when the consumers of both queues fail to process the message successfully: the message destined for Queue 1 is moved to its dead letter queue after 2 retries, but there is no dead letter queue entry for Queue 2, even though the message in Queue 2 is retried 3 times.
Is this the expected behavior?
Code:
public class ActiveMQRedelivery {
private final ActiveMQConnectionFactory factory;
public ActiveMQRedelivery(String brokerUrl) {
factory = new ActiveMQConnectionFactory(brokerUrl);
factory.setUserName("admin");
factory.setPassword("password");
factory.setAlwaysSyncSend(false);
}
public void publish(String topicAddress, String message) {
final String topicName = "VirtualTopic." + topicAddress;
try {
final Connection producerConnection = factory.createConnection();
producerConnection.start();
final Session producerSession = producerConnection.createSession(false, AUTO_ACKNOWLEDGE);
final MessageProducer producer = producerSession.createProducer(null);
final TextMessage textMessage = producerSession.createTextMessage(message);
final Topic topic = producerSession.createTopic(topicName);
producer.send(topic, textMessage, PERSISTENT, DEFAULT_PRIORITY, DEFAULT_TIME_TO_LIVE);
} catch (JMSException e) {
throw new RuntimeException("Message could not be published", e);
}
}
public void initializeConsumer(String queueName, String topicAddress, int numOfRetry) throws JMSException {
factory.getRedeliveryPolicyMap().put(new ActiveMQQueue("*." + queueName + ".>"),
getRedeliveryPolicy(numOfRetry));
Connection connection = factory.createConnection();
connection.start();
final Session consumerSession = connection.createSession(false, CLIENT_ACKNOWLEDGE);
final Queue queue = consumerSession.createQueue("Consumer." + queueName +
".VirtualTopic." + topicAddress);
final MessageConsumer consumer = consumerSession.createConsumer(queue);
consumer.setMessageListener(message -> {
try {
System.out.println("in listener --- " + ((ActiveMQDestination)message.getJMSDestination()).getPhysicalName());
consumerSession.recover();
} catch (JMSException e) {
e.printStackTrace();
}
});
}
private RedeliveryPolicy getRedeliveryPolicy(int numOfRetry) {
final RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy();
redeliveryPolicy.setInitialRedeliveryDelay(0);
redeliveryPolicy.setMaximumRedeliveries(numOfRetry);
redeliveryPolicy.setMaximumRedeliveryDelay(-1);
redeliveryPolicy.setRedeliveryDelay(0);
return redeliveryPolicy;
}
}
Test:
public class ActiveMQRedeliveryTest {

    private static final String brokerUrl = "tcp://0.0.0.0:61616";
    private ActiveMQRedelivery activeMQRedelivery;

    @Before
    public void setUp() throws Exception {
        activeMQRedelivery = new ActiveMQRedelivery(brokerUrl);
    }

    @Test
    public void testMessageRedeliveries() throws Exception {
        String topicAddress = "testTopic";
        activeMQRedelivery.initializeConsumer("queue1", topicAddress, 2);
        activeMQRedelivery.initializeConsumer("queue2", topicAddress, 3);
        activeMQRedelivery.publish(topicAddress, "TestMessage");
        Thread.sleep(3000);
    }

    @After
    public void tearDown() throws Exception {
    }
}
I recently came across this problem. To fix it, there are 2 attributes that need to be added to individualDeadLetterStrategy, as below:
<deadLetterStrategy>
<individualDeadLetterStrategy destinationPerDurableSubscriber="true" enableAudit="false" queuePrefix="DLQ." useQueueForQueueMessages="true"/>
</deadLetterStrategy>
Explanation of the attributes:
destinationPerDurableSubscriber - enables a separate dead letter destination per durable subscriber.
enableAudit - the dead letter strategy has a message audit that is enabled by default. It prevents duplicate messages from being added to the configured DLQ. While the audit is enabled, the same message that isn't delivered for multiple subscribers of a topic will only be placed on one of the subscriber DLQs, even when destinationPerDurableSubscriber is set to true; i.e. if two consumers fail to acknowledge the same message for the topic, that message will only be placed on the DLQ for one consumer and not the other. Disabling the audit avoids this.
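For context, the individual dead letter strategy is applied per destination inside the broker's destinationPolicy. A sketch of where the snippet sits in activemq.xml (the queue pattern ">" is only an example; match it to your consumer queues):

<!-- Sketch only: illustrative placement inside the <broker> element of activemq.xml -->
<destinationPolicy>
  <policyMap>
    <policyEntry queue=">">
      <deadLetterStrategy>
        <individualDeadLetterStrategy destinationPerDurableSubscriber="true" enableAudit="false"
                                      queuePrefix="DLQ." useQueueForQueueMessages="true"/>
      </deadLetterStrategy>
    </policyEntry>
  </policyMap>
</destinationPolicy>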

How to properly use wait() and notify() in Java? (HiveMQ Client) [duplicate]

This question already has answers here:
java.lang.IllegalMonitorStateException: object not locked by thread before wait()?
(3 answers)
Closed 3 years ago.
I've been writing a program using the HiveMQ Client (an open-source MQTT implementation in Java) that involves two multithreaded clients. One client is designated as the publisher and the other as the subscriber (I'm aware the same client could both publish and subscribe). I'm trying to design a test where the publisher sends 100 messages to the subscriber. The goal is to time how long it takes to send and receive all the messages. I realized that if I wanted to time how long it takes for the messages to be received, I would need to have the subscribing thread wait until the publishing thread was ready to send. I decided to use wait() and notify() but I can't seem to implement it correctly. I'm aware that you need to synchronize on the same object, which I tried to do, but I can't get the design right. I added snippets of the run methods of the two clients. CommonThread.java isn't actually a thread and I'm not running it; I tried to use it as an in-between class to be able to wait() and notify(), but I'm missing something.
HiveMQ:
https://github.com/hivemq/hivemq-community-edition
https://github.com/hivemq/hivemq-mqtt-client
SubMainThread.java:
public void run() {
// Creates the client object using Blocking API
Mqtt5BlockingClient subscriber = Mqtt5Client.builder()
.identifier(UUID.randomUUID().toString()) // the unique identifier of the MQTT client. The ID is randomly generated between
.serverHost("localhost") // the host name or IP address of the MQTT server. Kept it localhost for testing. localhost is default if not specified.
.serverPort(1883) // specifies the port of the server
.addConnectedListener(context -> ClientConnectionRetreiver.printConnected("Subscriber1")) // prints a string that the client is connected
.addDisconnectedListener(context -> ClientConnectionRetreiver.printDisconnected("Subscriber1")) // prints a string that the client is disconnected
.buildBlocking(); // creates the client builder
subscriber.connect(); // connects the client
ClientConnectionRetreiver.getConnectionInfo(subscriber); // gets connection info
try {
Mqtt5Publishes receivingClient1 = subscriber.publishes(MqttGlobalPublishFilter.ALL); // creates a "publishes" instance thats used to queue incoming messages // .ALL - filters all incoming Publish messages
subscriber.subscribeWith()
.topicFilter(subscriberTopic)
.qos(MqttQos.EXACTLY_ONCE)
.send();
PubSubUtility.printSubscribing("Subscriber1");
System.out.println("Publisher ready to send: " + PubMainThread.readyToSend);
x.threadCondWait(); // <<<<< HOW TO MAKE THIS WORK
System.out.println("Back to the normal execution flow :P");
startTime = System.currentTimeMillis();
System.out.println("Timer started");
for (int i = 1; i <= messageNum; i++) {
Mqtt5Publish receivedMessage = receivingClient1.receive(MESSAGEWAITTIME,TimeUnit.SECONDS).get(); // receives the message using the "publishes" instance waiting up to 5 minutes // .get() returns the object if available or throws a NoSuchElementException
PubSubUtility.convertMessage(receivedMessage); // Converts a Mqtt5Publish instance to string and prints
}
endTime = System.currentTimeMillis();
finalTime = endTime - startTime;
System.out.println( finalTime + PubMainThread.finalTime + " milliseconds");
finalSecTime = TimeUnit.MILLISECONDS.toSeconds(finalTime);
System.out.println(finalSecTime + PubMainThread.finalSecTime);
}
catch (InterruptedException e) { // Catches interruptions in the thread
LOGGER.log(Level.SEVERE, "The thread was interrupted while waiting for a message to be received", e);
}
catch (NoSuchElementException e){
System.out.println("There are no received messages"); // Handles when a publish instance has no messages
}
subscriber.disconnect();
}
PubMainThread.java:
public void run() {
// Creates the client object using Blocking API
Mqtt5BlockingClient publisher = Mqtt5Client.builder()
.identifier(UUID.randomUUID().toString()) // the unique identifier of the MQTT client. The ID is randomly generated between
.serverHost("localhost") // the host name or IP address of the MQTT server. Kept it localhost for testing. localhost is default if not specified.
.serverPort(1883) // specifies the port of the server
.addConnectedListener(context -> ClientConnectionRetreiver.printConnected("Publisher1")) // prints a string that the client is connected
.addDisconnectedListener(context -> ClientConnectionRetreiver.printDisconnected("Publisher1")) // prints a string that the client is disconnected
.buildBlocking(); // creates the client builder
publisher.connect(); // connects the client
ClientConnectionRetreiver.getConnectionInfo(publisher); // gets connection info
PubSubUtility.printPublising("Publisher1");
readyToSend = true;
x.threadCondNotify(); // <<<<< HOW TO MAKE THIS WORK
// Think about making the PubClient Thread sleep for a short while so its not too ahead of the client
startTime = System.currentTimeMillis();
for (int i = 1; i <= messageNum; i++) {
publisher.publishWith()
.topic(publisherTopic) // publishes to the specified topic
.qos(MqttQos.EXACTLY_ONCE)
.payload(convertedMessage) // the contents of the message
.send();
}
endTime = System.currentTimeMillis();
finalTime = endTime - startTime;
finalSecTime = TimeUnit.MILLISECONDS.toSeconds(finalTime);
PubSubUtility.printNumOfPublished("Publisher1", messageNum);
publisher.disconnect();
}
public class CommonThread {
private static final Logger LOGGER = Logger.getLogger(SubMainThread.class.getName()); // Creates a logger instance
public synchronized void threadCondNotify() {
notify();
System.out.println("Notified other thread");
}
public synchronized void threadCondWait() {
try {
while (PubMainThread.readyToSend != true) {
System.out.println("Waiting for another thread....");
wait();
}
}
catch (InterruptedException e) {
LOGGER.log(Level.SEVERE, "The thread was interrupted while waiting for another thread", e);
}
}
}
In Sender (rough Java code with some details omitted):
//package statement and imports here
class Sender extends Thread {
public static final Boolean x= new Boolean(true);
public void run() {
//initialize here
synchronized(x) {
x.notify();
}
//send messages here
}
}
In Receiver (start before Sender):
//package statement and imports here
class Receiver extends Thread {
public void run() {
//initialize here
synchronized(Sender.x) {
Sender.x.wait(); //blocks till Sender.x.notify()
}
Date start= new Date();
//receive messages here
Date end= new Date();
int duration_milliseconds= end.getTime()-start.getTime();
}
}
Maybe you have to add try { /* code here */ } catch (InterruptedException e) {} around the wait() call.
Feel free to discuss the sense and nonsense of direct use of notify() and wait(), especially in Java versions with the extended concurrency libraries...
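One of those extended concurrency utilities, java.util.concurrent.CountDownLatch, avoids both the lost-notify race and the shared-lock boilerplate. A minimal self-contained sketch (the class and variable names are illustrative, not taken from the original code):

import java.util.concurrent.CountDownLatch;

// Sketch only: a latch replaces wait()/notify() for the "publisher is ready" handshake.
public class LatchHandshakeSketch {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch readyToSend = new CountDownLatch(1);

        Thread subscriber = new Thread(() -> {
            try {
                readyToSend.await();               // blocks until countDown() has been called
                long start = System.currentTimeMillis();
                // ... receive the messages here ...
                System.out.println("took " + (System.currentTimeMillis() - start) + " ms");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread publisher = new Thread(() -> {
            // ... connect and prepare the publisher here ...
            readyToSend.countDown();               // nothing is lost even if this runs first
            // ... publish the messages here ...
        });

        subscriber.start();
        publisher.start();
        subscriber.join();
        publisher.join();
    }
}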

Unable to send a single message to a Kafka topic

I am using kafka java client 0.11.0 and kafka server 2.11-0.10.2.0.
My code :
KafkaManager
public class KafkaManager {
// Single instance for producer per topic
private static Producer<String, String> karmaProducer = null;
/**
* Initialize Producer
*
* @throws Exception
*/
private static void initProducer() throws Exception {
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, Constants.kafkaUrl);
props.put(ProducerConfig.RETRIES_CONFIG, Constants.retries);
//props.put(ProducerConfig.BATCH_SIZE_CONFIG, Constants.batchSize);
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, Constants.requestTimeout);
//props.put(ProducerConfig.LINGER_MS_CONFIG, Constants.linger);
//props.put(ProducerConfig.ACKS_CONFIG, Constants.acks);
//props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, Constants.bufferMemory);
//props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, Constants.maxBlock);
props.put(ProducerConfig.CLIENT_ID_CONFIG, Constants.kafkaProducer);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
try {
karmaProducer = new org.apache.kafka.clients.producer.KafkaProducer<String, String>(props);
}
catch (Exception e) {
throw e;
}
}
/**
* get Producer based on topic
*
* @return
* @throws Exception
*/
public static Producer<String, String> getKarmaProducer(String topic) throws Exception {
switch (topic) {
case Constants.topicKarma :
if (karmaProducer == null) {
synchronized (KafkaProducer.class) {
if (karmaProducer == null) {
initProducer();
}
}
}
return karmaProducer;
default:
return null;
}
}
/**
* Flush and close kafka producer
*
* @throws Exception
*/
public static void closeKafkaInstance() throws Exception {
try {
karmaProducer.flush();
karmaProducer.close();
} catch (Exception e) {
throw e;
}
}
}
Kafka Producer
public class KafkaProducer {
public void sentToKafka(String topic, String data) {
Producer<String, String> producer = null;
try {
producer = KafkaManager.getKarmaProducer(topic);
ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>(topic, data);
producer.send(producerRecord);
} catch (Exception e) {
e.printStackTrace();
}
}
}
Main Class
public class App {
public static void main(String[] args) throws InterruptedException {
System.out.println("Hello World! I am producing to stream " + Constants.topicKarma);
String value = "google";
KafkaProducer kafkaProducer = new KafkaProducer();
for (int i = 1; i <= 1; i++) {
kafkaProducer.sentToKafka(Constants.topicKarma, value + i);
//Thread.sleep(100);
System.out.println("Send data to producer=" + value);
System.out.println("Send data to producer=" + value + i + " to tpoic=" + Constants.topicKarma);
}
}
}
My problem:
When my loop length is around 1000 (in class App), I am able to send data to the Kafka topic.
But when my loop length is 1, or less than 10, I am not able to send data to the Kafka topic. Note that I am not getting any error.
According to my findings, if I want to send a single message to the Kafka topic, this program prints the success log but the message never arrives on the topic.
But if I use Thread.sleep(10) (as you can see, I have commented it out in my App class), then I successfully send data to my topic.
Can you please explain why Kafka shows this ambiguous behaviour?
Each call to KafkaProducer.send() returns a Future. You can use the last of those Futures to block the main thread before exiting. Even easier, you can just call KafkaProducer.flush() after sending all your messages:
http://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html#flush()
Invoking this method makes all buffered records immediately available to send (even if linger.ms is greater than 0) and blocks on the completion of the requests associated with these records.
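A minimal sketch of both options, reusing the producer and producerRecord names from the question (the Future type is java.util.concurrent.Future):

// Sketch only: either block on the returned Future or flush before the JVM exits.
Future<RecordMetadata> future = producer.send(producerRecord);
future.get();       // waits until this record is acknowledged; throws if the send failed

// ...or, after the whole send loop:
producer.flush();   // pushes out all buffered records and waits for them to complete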
You are facing the problem because the producer sends asynchronously. When you call send, the message is put into an internal buffer in order to build a bigger batch and then send several messages in one shot.
This batching feature is configured with batch.size and linger.ms: messages are sent when the batch reaches that size or when the linger time elapses.
I have replied to something similar here: Cannot produce Message when Main Thread sleep less than 1000
Even though you say "When my loop length is around 1000 (in class App), I am successfully able to send data to the Kafka topic", maybe you don't see all the sent messages because the latest batch isn't sent. With a shorter loop the above conditions aren't reached in time, so you shut down the application before the producer has enough time or enough batched messages to send.
Can you add Thread.sleep(100); just before exiting main?
If I understand correctly, everything works well if you sleep for a small amount of time. If that's the case, it implies that your application is being killed before the message is sent asynchronously.
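Along the same lines, the question's own KafkaManager.closeKafkaInstance() (which calls flush() and close()) is never invoked from App; calling it before main returns should have the same effect. A sketch using only the classes from the question:

// Sketch only: flush and close the producer before the application exits.
public static void main(String[] args) throws Exception {
    KafkaProducer kafkaProducer = new KafkaProducer();
    kafkaProducer.sentToKafka(Constants.topicKarma, "google1");
    KafkaManager.closeKafkaInstance();   // flush() + close(): blocks until buffered records are sent
}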

Getting the response time for JMS requests using MessageListener interface

I am using the following block of code to post JMS messages to a queue and get the response messages on a response queue. (The following code runs for 100 messages, in batches of 20 per thread, with five threads running concurrently.)
for(int i=0;i<=20;i++)
{
msg=myMessages.get(i); // myMessages is an array of TextMessages
qsender = qsession.createSender((Queue)msg.getJMSDestination());
qreceiver=qsession.createReceiver((Queue)msg.getJMSDestination());
tempq = qsession.createTemporaryQueue();
responseConsumer = qsession.createConsumer(tempq);
msg.setJMSReplyTo(tempq);
responseConsumer.setMessageListener(new Listener());
msg.setJMSCorrelationID(msg.getJMSCorrelationID()+i);
qsender.send(msg);
}
The Listener implementation:
public class Listener
implements MessageListener
{
public void onMessage(Message msg)
{
TextMessage tm = (TextMessage) msg;
// to calculate the response time
}
}
The requirement is to get the response time each message takes and store it. How do I go about it? I'm thinking of setting the time/date in the message properties and then using the correlation id to calculate the time in the Listener.
Is there another way to go about it?
You could have a Map<String, Long> that maps your CorrelationID to time sent and then look them up from the listener. The process that is sending the responses will have to put the correct CorrelationID on the response message for this to work.
For this example assume timemap is a Map<String, Long> and that it is in scope for both the sender and response listener (How you want to accomplish that is up to you).
Your loop body from above, modified:
msg = myMessages.get(i); // myMessages is an array of TextMessages
qsender = qsession.createSender((Queue) msg.getJMSDestination());
qreceiver = qsession.createReceiver((Queue) msg.getJMSDestination());
tempq = qsession.createTemporaryQueue();
responseConsumer = qsession.createConsumer(tempq);
msg.setJMSReplyTo(tempq);
responseConsumer.setMessageListener(new Listener());
msg.setJMSCorrelationID(msg.getJMSCorrelationID() + i);
/* MODIFICATIONS */
synchronized (timemap) {
    timemap.put(msg.getJMSCorrelationID(), System.currentTimeMillis());
}
/* END MODIFICATIONS */
qsender.send(msg);
Your listener, modified:
public void onMessage(Message msg)
{
    TextMessage tm = (TextMessage) msg;
    long now = System.currentTimeMillis();
    long responseTime = 0;
    try {
        synchronized (timemap) {
            Long sent = timemap.get(msg.getJMSCorrelationID());
            if (sent != null) {
                /* Store this value, this is the response time in milliseconds */
                responseTime = now - sent;
            } else {
                /* Error condition. */
            }
        }
    } catch (JMSException e) {
        /* getJMSCorrelationID() can throw JMSException */
        e.printStackTrace();
    }
}
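The property-based idea from the question (putting the send time on the message itself) would look roughly like this; it assumes the responding process copies the property, here called sentAt, onto the reply message:

// Sketch only, sender side: stamp the send time as a message property.
msg.setLongProperty("sentAt", System.currentTimeMillis());
qsender.send(msg);

// Sketch only, listener side: read the stamp back from the response.
public void onMessage(Message msg) {
    try {
        long responseTime = System.currentTimeMillis() - msg.getLongProperty("sentAt");
        // store responseTime, keyed by msg.getJMSCorrelationID()
    } catch (JMSException e) {
        e.printStackTrace();
    }
}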

Slow HornetQ Producer when Queue is persistent

I have tried using a persistent queue in HornetQ. I have made two separate examples (producer and consumer). My consumer works well, but the producer takes too much time to finish sending the messages. I have run them both separately as well as together. What could be the problem?
my code is:
public class HornetProducer implements Runnable{
Context ic = null;
ConnectionFactory cf = null;
Connection connection = null;
Queue queue = null;
Session session = null;
MessageProducer publisher = null;
TextMessage message = null;
int messageSent=0;
public synchronized static Context getInitialContext()throws javax.naming.NamingException {
Properties p = new Properties( );
p.put(Context.INITIAL_CONTEXT_FACTORY,"org.jnp.interfaces.NamingContextFactory");
p.put(Context.URL_PKG_PREFIXES," org.jboss.naming:org.jnp.interfaces");
p.put(Context.PROVIDER_URL, "jnp://localhost:1099");
return new javax.naming.InitialContext(p);
}
public HornetProducer()throws Exception{
ic = getInitialContext();
cf = (ConnectionFactory)ic.lookup("/ConnectionFactory");
queue = (Queue)ic.lookup("queue/testQueue2");
connection = cf.createConnection();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
publisher = session.createProducer(queue);
connection.start();
}
public void publish(){
try{
message = session.createTextMessage("Hello!");
System.out.println("StartDate: "+new Date());
for(int i=0;i<10000;i++){
messageSent++;
publisher.send(message);
}
System.out.println("EndDate: "+new Date());
}catch(Exception e){
System.out.println("Exception in Consume: "+ e.getMessage());
}
}
public void run(){
publish();
}
public static void main(String[] args) throws Exception{
new HornetProducer().publish();
}
}
You are sending these messages persistently and non-transactionally, which means each send has to be completed individually.
That means for each message you send, you make a network round trip to the server and wait for the persistence to finish before you can send the next message.
If you had multiple producers in this situation, HornetQ would batch the writes from both producers and you would save a lot of time (i.e. the server would batch many write requests).
If you want to speed up the sending of a single producer, you should probably use transactions.
for example:
I - Change your session to a transacted one:
session = connection.createSession(true, Session.SESSION_TRANSACTED);
II - commit every N messages:
for (int i = 0; i < 10000; i++) {
    messageSent++;
    publisher.send(message);
    if (messageSent % 1000 == 0) session.commit();
}
session.commit();
You could also disable syncing on persistent messages (sending them asynchronously).
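A sketch of that, assuming the HornetQ 2.x JMS client where the concrete connection factory (HornetQConnectionFactory) exposes setBlockOnDurableSend; casting the looked-up factory to that type is the assumption here:

// Sketch only: stop blocking on each persistent send (the client no longer waits for per-message confirmation).
HornetQConnectionFactory hqCf = (HornetQConnectionFactory) cf;
hqCf.setBlockOnDurableSend(false);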
