I'm using Spring Boot with mq-jms-spring-boot-starter to create a JMS listener application which reads a message from a queue, processes it, and forwards it to another queue.
In a poison-message scenario I want to generate an alert. However, to avoid generating multiple alerts for the same message, I'm thinking of comparing JMSXDeliveryCount against the BOTHRESH value and generating the alert on the last redelivery, just before the message is sent to the backout queue (BOQ).
BOTHRESH and BOQNAME are configured for the source queue.
@JmsListener(destination = "${sourceQueue}")
public void processMessages(Message message) {
    TextMessage msg = (TextMessage) message;
    int boThresh = 0;        // initialized so both are definitely assigned in the catch block
    int redeliveryCount = 0;
    try {
        boThresh = message.getIntProperty("<WHAT COMES HERE>");
        redeliveryCount = message.getIntProperty("JMSXDeliveryCount");
        String processedMessage = this.processMessage(message);
        this.forwardMessage("destinationQueue", processedMessage);
    } catch (Exception e) {
        if (redeliveryCount >= boThresh) {
            // generate alert here
        }
    }
}
How should I get the value of BOTHRESH here? Is it possible at all? I tried to list the available properties using the getPropertyNames() method, and the following are all the properties I see:
JMS_IBM_Format
JMS_IBM_PutDate
JMS_IBM_Character_Set
JMSXDeliveryCount
JMS_IBM_MsgType
JMSXUserID
JMS_IBM_Encoding
JMS_IBM_PutTime
JMSXAppID
JMS_IBM_PutApplType
This will do it, but the code needs access to an admin channel, which may not be ideal for a client application.
The Configuration
import com.ibm.mq.*;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.ibm.mq.constants.CMQC;
import java.util.Hashtable;
@Configuration
public class MQConfiguration {
    protected final Log logger = LogFactory.getLog(getClass());

    @Value("${ibm.mq.queueManager:QM1}")
    public String qMgrName;

    @Value("${app.mq.admin.channel:DEV.ADMIN.SVRCONN}")
    private String adminChannel;

    @Value("${app.mq.host:localhost}")
    private String host;

    @Value("${app.mq.host.port:1414}")
    private int port;

    @Value("${app.mq.adminuser:admin}")
    private String adminUser;

    @Value("${app.mq.adminpassword:passw0rd}")
    private String password;

    @Bean
    public MQQueueManager mqQueueManager() {
        try {
            Hashtable<String, Object> connectionProperties = new Hashtable<String, Object>();
            connectionProperties.put(CMQC.CHANNEL_PROPERTY, adminChannel);
            connectionProperties.put(CMQC.HOST_NAME_PROPERTY, host);
            connectionProperties.put(CMQC.PORT_PROPERTY, port);
            connectionProperties.put(CMQC.USER_ID_PROPERTY, adminUser);
            connectionProperties.put(CMQC.PASSWORD_PROPERTY, password);
            return new MQQueueManager(qMgrName, connectionProperties);
        } catch (MQException e) {
            logger.warn("MQException obtaining MQQueueManager");
            logger.warn(e.getMessage());
        }
        return null;
    }
}
Obtain the Queue's backout threshold
import com.ibm.mq.*;
import com.ibm.mq.constants.CMQC;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;
@Component
public class Runner {
    protected final Log logger = LogFactory.getLog(getClass());

    @Value("${app.mq.queue:DEV.QUEUE.1}")
    private String queueName = "";

    private final MQQueueManager mqQueueManager;

    Runner(MQQueueManager mqQueueManager) {
        this.mqQueueManager = mqQueueManager;
    }

    @Bean
    CommandLineRunner init() {
        return (args) -> {
            logger.info("Determining backout threshold");
            try {
                int[] selectors = {
                    CMQC.MQIA_BACKOUT_THRESHOLD,
                    CMQC.MQCA_BACKOUT_REQ_Q_NAME };
                int[] intAttrs = new int[1];
                byte[] charAttrs = new byte[CMQC.MQ_Q_NAME_LENGTH];
                int openOptions = CMQC.MQOO_INPUT_AS_Q_DEF | CMQC.MQOO_INQUIRE | CMQC.MQOO_SAVE_ALL_CONTEXT;
                MQQueue myQueue = mqQueueManager.accessQueue(queueName, openOptions, null, null, null);
                logger.info("Queue obtained");
                MQManagedObject moMyQueue = (MQManagedObject) myQueue;
                moMyQueue.inquire(selectors, intAttrs, charAttrs);
                int boThresh = intAttrs[0];
                String backoutQname = new String(charAttrs).trim(); // MQ pads queue names with spaces
                logger.info("Backout Threshold: " + boThresh);
                logger.info("Backout Queue: " + backoutQname);
            } catch (MQException e) {
                logger.warn("MQException obtaining threshold");
                logger.warn(e.getMessage());
            }
        };
    }
}
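To tie this back to the original listener: one hypothetical wiring (the backoutConfig bean and the alert() call are invented names here) is to cache the inquired BOTHRESH value at startup and compare it inside the listener:

@JmsListener(destination = "${sourceQueue}")
public void processMessages(Message message) throws JMSException {
    int redeliveryCount = message.getIntProperty("JMSXDeliveryCount");
    try {
        forwardMessage("destinationQueue", processMessage(message));
    } catch (Exception e) {
        // backoutConfig holds the BOTHRESH value inquired by the Runner above
        if (redeliveryCount >= backoutConfig.getThreshold()) {
            alert(message); // last redelivery before the message goes to the BOQ
        }
        throw new RuntimeException(e); // rethrow so normal backout handling still applies
    }
}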
This sounds like you are mixing retriable and non-retriable error handling.
If you are tracking redeliveries and need to send an alert, then you probably do not want to set a BOTHRESH value at all, and instead manage it all in your client-side code.
Recommended consumer error-handling pattern (a sketch follows the list):
If the message is invalid (i.e. bad JSON or XML), move it to the DLQ immediately. The message will never improve in quality, so there is no reason for repeated retries.
If the 'next step' in processing is down (e.g. the database), reject delivery and allow redelivery delays and backout retries to kick in. This also has the benefit of allowing other consumers on the queue to attempt the message, and eliminates the problem of one consumer with a dead path holding up messages.
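A rough sketch of that split, assuming a Spring listener container with transacted sessions; isValidPayload, process, and TransientDownstreamException are invented names:

@Component
public class PatternedListener {
    private final JmsTemplate jmsTemplate;
    private final String dlq = "DEV.QUEUE.1.DLQ"; // illustrative DLQ name

    public PatternedListener(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    @JmsListener(destination = "${sourceQueue}")
    public void onMessage(TextMessage message) throws JMSException {
        String body = message.getText();
        if (!isValidPayload(body)) {
            jmsTemplate.convertAndSend(dlq, body); // non-retriable: park it immediately
            return;                                // do not throw, so it is not redelivered
        }
        try {
            process(body);
        } catch (TransientDownstreamException e) {
            // Retriable: rethrow so the session rolls back and the broker
            // redelivers, possibly to a different consumer.
            throw new RuntimeException(e);
        }
    }

    private boolean isValidPayload(String body) { return body != null && !body.isEmpty(); } // placeholder check
    private void process(String body) { /* business logic goes here */ }

    static class TransientDownstreamException extends RuntimeException { }
}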
Also consider that using client-side consumer code to do monitoring and alerting can be problematic, since it mixes different concerns. If your goal is to track invalid messages, monitoring the DLQ is generally a better design pattern, and it keeps 'monitoring' code out of your consumer code.
I would like to create Kafka topics dynamically. In my case there can be up to several hundred topics in the application, and there can be multiple concurrent calls to this method for each topic during system startup.
The AdminClient object has local scope, so it is created on every call. I suspect that a socket and a connection to the Kafka broker are opened underneath, so this solution is not optimal in terms of performance: there may be several hundred connections open at any one time.
import java.util.Collections;
import java.util.Properties;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import lombok.RequiredArgsConstructor;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.KafkaFuture;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
@Service
@RequiredArgsConstructor
class TopicFactory {
    private final Logger log = LoggerFactory.getLogger(TopicFactory.class);
    private final Set<String> topics = ConcurrentHashMap.newKeySet();

    @Value("${kafka.bootstrap.servers}")
    private final String bootstrapServers;

    @Value("${kafka.topic.replication.factor}")
    private final String replicationFactor;

    void createTopicIfNotExists(String topicName, int partitionCount) {
        if (topics.contains(topicName)) {
            return;
        }
        Properties properties = new Properties();
        properties.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        try (Admin admin = Admin.create(properties)) {
            if (isTopicExists(admin, topicName)) {
                topics.add(topicName);
                return;
            }
            NewTopic newTopic = new NewTopic(topicName, partitionCount, Short.parseShort(replicationFactor));
            CreateTopicsResult result = admin.createTopics(Collections.singleton(newTopic));
            KafkaFuture<Void> future = result.values().get(topicName);
            try {
                future.get();
                topics.add(topicName);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                log.error("Interrupted exception occurred during topic creation", e);
            } catch (ExecutionException e) {
                log.error("Execution exception occurred during topic creation", e);
            }
        }
    }

    private boolean isTopicExists(Admin admin, String topicName) {
        try {
            return admin.listTopics().names().get().contains(topicName);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            log.error("Interrupted exception occurred while listing topics", e);
            return false;
        } catch (ExecutionException e) {
            log.error("Execution exception occurred while listing topics", e);
            return false;
        }
    }
}
How can I improve the performance of this solution? Is connection caching a good idea? If so, in what way: as a field initialized in the class, or perhaps using e.g. a Guava cache or Suppliers.memoize(...)? The connection to the broker would then have to be maintained the whole time.
If you want to improve this solution for hundreds of topics as it is written: admin.createTopics takes a whole collection, so don't pass it a singleton list.
Also, the admin.listTopics() result can be cached so that you don't query all topics every time you create one more topic.
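For illustration, a minimal sketch along those lines, assuming one long-lived Admin client per service and batched creation requests (the class and method names are made up):

import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

@Service
class CachedTopicFactory implements AutoCloseable {
    private final Admin admin; // one shared client for the lifetime of the bean
    private final Set<String> known = ConcurrentHashMap.newKeySet();

    CachedTopicFactory(@Value("${kafka.bootstrap.servers}") String bootstrapServers) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        this.admin = Admin.create(props);
    }

    void createTopicsIfNotExist(Map<String, Integer> partitionsByTopic, short replicationFactor)
            throws ExecutionException, InterruptedException {
        known.addAll(admin.listTopics().names().get()); // refresh the cache once per batch
        List<NewTopic> missing = partitionsByTopic.entrySet().stream()
                .filter(e -> !known.contains(e.getKey()))
                .map(e -> new NewTopic(e.getKey(), e.getValue(), replicationFactor))
                .collect(Collectors.toList());
        if (missing.isEmpty()) {
            return;
        }
        admin.createTopics(missing).all().get(); // one request for the whole batch
        missing.forEach(t -> known.add(t.name()));
    }

    @Override
    public void close() {
        admin.close(); // released when the Spring context shuts down
    }
}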
Otherwise, I would personally use an alternative such as Terraform rather than Spring. Topics rarely need to be recreated (in the same Kafka cluster, at least), so this code might only run a handful of times, yet you are needlessly increasing the size of your Spring app by dragging that TopicFactory class around.
I am using GetEventStore as the journal provider for the events persisted by akka-persistence, and I access akka.persistence.query.javadsl to query the events from the event store. The actor system and the journal provider are configured using Spring.
The eventstore configuration is the following:
eventstore {
# IP & port of Event Store
address {
host = "xxxx"
port = 1113
}
http {
protocol = "http"
port = 2113
prefix = ""
}
# The desired connection timeout
connection-timeout = 10s
# Maximum number of reconnections before backing off, -1 to reconnect forever
max-reconnections = 100
reconnection-delay {
# Delay before first reconnection
min = 250ms
# Maximum delay on reconnections
max = 1s
}
# The default credentials to use for operations where others are not explicitly supplied.
credentials {
login = "admin"
password = "changeit"
}
heartbeat {
# The interval at which to send heartbeat messages.
interval = 500ms
# The interval after which an unacknowledged heartbeat will cause the connection to be considered faulted and disconnect.
timeout = 5s
}
operation {
# The maximum number of operation retries
max-retries = 10
# The amount of time before an operation is considered to have timed out
timeout = 500s
}
# Whether to resolve LinkTo events automatically
resolve-linkTos = false
# Whether or not to require EventStore to refuse serving read or write request if it is not master
require-master = true
# Number of events to be retrieved by client as single message
read-batch-size = 990
# The size of the buffer in element count
buffer-size = 100000
# Strategy that is used when elements cannot fit inside the buffer
# Possible values DropHead, DropTail, DropBuffer, DropNew, Fail
buffer-overflow-strategy = "DropHead"
# The number of serialization/deserialization functions to be run in parallel
serialization-parallelism = 8
# Serialization done asynchronously and these futures may complete in any order,
# but results will be used with preserved order if set to true
serialization-ordered = true
cluster {
# Endpoints for seeding gossip
# For example: ["127.0.0.1:1", "127.0.0.2:2"]
gossip-seeds = []
# The DNS name to use for discovering endpoints
dns = null
# The time given to resolve dns
dns-lookup-timeout = 2s
# The well-known endpoint on which cluster managers are running
external-gossip-port = 30778
# Maximum number of attempts for discovering endpoints
max-discover-attempts = 10
# The interval between cluster discovery attempts
discover-attempt-interval = 500ms
# The interval at which to keep discovering cluster
discovery-interval = 1s
# Timeout for cluster gossip
gossip-timeout = 1s
}
persistent-subscription {
# Whether to resolve LinkTo events automatically
resolve-linkTos = false
# Where the subscription should start from (position)
start-from = last
# Whether or not in depth latency statistics should be tracked on this subscription.
extra-statistics = false
# The amount of time after which a message should be considered to be timedout and retried.
message-timeout = 30s
# The maximum number of retries (due to timeout) before a message get considered to be parked
max-retry-count = 500
# The size of the buffer listening to live messages as they happen
live-buffer-size = 500
# The number of events read at a time when paging in history
read-batch-size = 100
# The number of events to cache when paging through history
history-buffer-size = 20
# The amount of time to try to checkpoint after
checkpoint-after = 2s
# The minimum number of messages to checkpoint
min-checkpoint-count = 10
# The maximum number of messages to checkpoint; if this number is reached, a checkpoint will be forced.
max-checkpoint-count = 1000
# The maximum number of subscribers allowed
max-subscriber-count = 0
# The [[ConsumerStrategy]] to use for distributing events to client consumers
# Known are RoundRobin, DispatchToSingle
# however you can provide a custom one, just make sure it is supported by server
consumer-strategy = RoundRobin
}
}
The journal provider code is the following:
package com.org.utils;
import static akka.stream.ActorMaterializer.create;
import static java.util.concurrent.CompletableFuture.allOf;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutionException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import akka.Done;
import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.japi.function.Predicate;
import akka.japi.function.Procedure;
import akka.persistence.query.EventEnvelope;
import akka.persistence.query.PersistenceQuery;
import akka.persistence.query.javadsl.AllPersistenceIdsQuery;
import akka.persistence.query.javadsl.CurrentEventsByPersistenceIdQuery;
import akka.persistence.query.javadsl.CurrentPersistenceIdsQuery;
import akka.persistence.query.javadsl.EventsByPersistenceIdQuery;
import akka.persistence.query.javadsl.ReadJournal;
import akka.stream.ActorMaterializer;
import akka.stream.Materializer;
import akka.stream.javadsl.Source;
import lombok.extern.log4j.Log4j;
@Service
@Log4j
public class JournalProvider {
private ActorSystem system;
private ReadJournal readJournal;
@Autowired
public JournalProvider(ActorSystem system) {
super();
this.system = system;
}
@SuppressWarnings({ "rawtypes", "unchecked" })
public ReadJournal journal(ActorSystem system) {
if (readJournal == null) {
String queryJournalClass = system.settings().config().getString("queryJournalClass");
String queryIdentifier = system.settings().config().getString("queryIdentifier");
if (queryJournalClass == null || queryIdentifier == null) {
throw new RuntimeException(
"Please set queryIdentifier and queryJournalClass variables in application.conf or reference.conf");
}
try {
Class clasz = Class.forName(queryJournalClass);
readJournal = PersistenceQuery.get(system).getReadJournalFor(clasz, queryIdentifier);
} catch (ClassNotFoundException e) {
throw new RuntimeException("Caught exception : " + e);
}
}
return readJournal;
}
public CompletableFuture<Void> runForEachId(Procedure<EventEnvelope> function,
Map<String, Long> idsWithStartSequenceNr) {
List<CompletableFuture<Done>> allFutures = new ArrayList<>();
for (String id : idsWithStartSequenceNr.keySet()) {
Long fromSequenceNr = idsWithStartSequenceNr.get(id);
CompletionStage<Done> mapPreparedCompletionStage = runForEachEvent(id, fromSequenceNr, function);
allFutures.add(mapPreparedCompletionStage.toCompletableFuture());
}
CompletableFuture<Void> combinedFuture = allOf(allFutures.toArray(new CompletableFuture[0]));
return combinedFuture;
}
public CompletionStage<Done> runForEachEvent(String id, long sequenceNr, Procedure<EventEnvelope> function) {
ActorMaterializer materializer = ActorMaterializer.create(system);
Source<EventEnvelope, NotUsed> eventsForId = ((CurrentEventsByPersistenceIdQuery) journal(system))
.currentEventsByPersistenceId(id, sequenceNr, Long.MAX_VALUE);
return eventsForId.runForeach(function, materializer);
}
public final List<Object> fetchEventsByPersistenceId1(String id, Predicate<EventEnvelope> filter) {
List<Object> allEvents = new ArrayList<>();
try {
((CurrentEventsByPersistenceIdQuery) journal(system)).currentEventsByPersistenceId(id, 0, Long.MAX_VALUE)
.filter(filter).runForeach((event) -> allEvents.add(event.event()), create(system)).toCompletableFuture()
.get();
} catch (InterruptedException | ExecutionException e) {
log.error(" Error while getting currentEventsForPersistenceId for id " + id, e);
}
return allEvents;
}
public List<Object> fetchEventsByPersistenceId(String id) {
List<Object> allEvents = new ArrayList<>();
try {
((CurrentEventsByPersistenceIdQuery) journal(system)).currentEventsByPersistenceId(id, 0, Long.MAX_VALUE)
.runForeach((event) -> allEvents.add(event.event()), create(system)).toCompletableFuture()
.get();
} catch (InterruptedException | ExecutionException e) {
log.error(" Error while getting currentEventsForPersistenceId for id " + id, e);
}
return allEvents;
}
@SafeVarargs
public final List<String> currentPersistenceIds(Materializer materializer, Predicate<String>... filters)
throws InterruptedException, ExecutionException {
Source<String, NotUsed> currentPersistenceIds = ((CurrentPersistenceIdsQuery) journal(system))
.currentPersistenceIds();
for (Predicate<String> filter : filters)
currentPersistenceIds = currentPersistenceIds.filter(filter);
List<String> allIds = new ArrayList<String>();
CompletionStage<Done> allIdCompletionStage = currentPersistenceIds.runForeach(id -> allIds.add(id), materializer);
allIdCompletionStage.toCompletableFuture().get();
return allIds;
}
@SafeVarargs
public final Source<String, NotUsed> allPersistenceIds(Predicate<String>... filters) {
Source<String, NotUsed> allPersistenceIds = ((AllPersistenceIdsQuery) journal(system)).allPersistenceIds();
for (Predicate<String> filter : filters)
allPersistenceIds = allPersistenceIds.filter(filter);
return allPersistenceIds;
}
public final Source<EventEnvelope, NotUsed> currentEventsSourceForPersistenceId(String id) {
return ((CurrentEventsByPersistenceIdQuery) journal(system)).currentEventsByPersistenceId(id, 0, Long.MAX_VALUE);
}
public final Source<EventEnvelope, NotUsed> allEventsSourceForPersistenceId(String id) {
return allEventsSourceForPersistenceId(id, 0, Long.MAX_VALUE);
}
public final Source<EventEnvelope, NotUsed> allEventsSourceForPersistenceId(String id, long from, long to) {
return ((EventsByPersistenceIdQuery) journal(system)).eventsByPersistenceId(id, from, to);
}
}
The event store is populated with the relevant events through the actor system, and the following code reads the incoming events and consumes them through an actor as the sink.
The issue I am facing is that some messages are being dropped: not all of the events are fed to the stream mapping function.
package com.org.utils;
import static com.wt.utils.akka.SpringExtension.SpringExtProvider;
import java.util.List;
import java.util.concurrent.ExecutionException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.stereotype.Service;
import com.org.domain.Ad;
import com.wt.domain.Px;
import com.wt.domain.repo.AdRepo;
import com.wt.domain.write.events.AdCalc;
import akka.NotUsed;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.PoisonPill;
import akka.stream.ActorMaterializer;
import akka.stream.javadsl.MergeHub;
import akka.stream.javadsl.RunnableGraph;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
@Service("Tester")
public class Tester {
private AdRepo adRepo;
private ActorRef readerActor;
private ActorMaterializer materializer;
private JournalProvider provider;
@Autowired
public Tester(JournalProvider provider, AdRepo adRepo, ActorSystem system) {
super();
this.provider = provider;
this.adRepo= adRepo;
this.readerActor = system.actorOf(SpringExtProvider.get(system).props("ReaderActor"), "reader-actor");
this.materializer = ActorMaterializer.create(system);
}
public void testerFunction() throws InterruptedException, ExecutionException {
// retrieve events of type Event1 from eventstore
Source<Event1, NotUsed> event1 = provider.allEventsSourceForPersistenceId("persistence-id")
.filter(evt -> evt.event() instanceof Event1)
.map(evt -> (Event1) evt.event());
// fetch a list of domain object of type Ad from the repository
List<Ad> adSym= adRepo.findBySymbol("symbol-name");
Ad ad = adSym.stream().findAny().get();
// map the event1 source stream to AdCalc domain event source stream
// the ad.calculator function returns a source of AdCalc domain event source stream
// Here lies the issue. Not all the event1 source objects are being converted to
// AdCalc domain event objects and are being dropped
Source<AdCalc, NotUsed> adCalcResult = event1.map(evt -> ad.calculator(evt, evt.getData()));
Sink<AdCalc, NotUsed> consumer = Sink.actorRef(readerActor, PoisonPill.getInstance());
RunnableGraph<Sink<AdCalc, NotUsed>> runnableGraph = MergeHub.of(AdCalc.class).to(consumer);
Sink<AdCalc, NotUsed> resultAggregator = runnableGraph.run(materializer);
adCalcResult.runWith(resultAggregator, materializer);
}
public static void main(String[] args) throws InterruptedException, ExecutionException {
try (AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(CoreAppConfiguration.class)) {
Tester tester = (Tester) ctx.getBean("Tester");
tester.testerFunction();
}
}
}
Here is the actor that does the processing
package com.org.utils;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Service;
import com.wt.domain.write.events.AdCalc;
import akka.actor.UntypedActor;
@Scope("prototype")
@Service("ReaderActor")
public class ReaderActor extends UntypedActor {
public void onReceive(Object message) throws Exception {
if (message instanceof AdCalc) {
final AdCalc adCalculation = (AdCalc) message;
// The event carries a timestamp, and from that (and of course the persistence
// id of the events in the event store) I realize that not all events are
// being processed; some are dropped.
System.out.println(adCalculation);
} else
unhandled(message);
context().system().stop(getSelf());
}
}
The issues, as mentioned in the code comments above, are:
The incoming source stream is dropping a lot of events, so some events are never transmitted to the actor.
I need some help with the syntax for the mapAsync stream integration, as the one given in the documentation gives a compilation issue.
The syntax for actorRefWithAck, again for stream integration, would also be very helpful; the Akka documentation does not show it.
Thanks a ton!
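For what it's worth, here is a rough sketch of both pieces of syntax, assuming the Akka 2.4+ javadsl (the INIT/ACK/COMPLETE objects are placeholders invented here). Note that Sink.actorRef provides no backpressure, which is a common cause of dropped elements; the ack-based variant makes the actor signal demand:

// Ack-protocol messages: any objects work, as long as the actor replies
// with the ack message after each element (and after the init message).
final Object INIT = "stream-init", ACK = "stream-ack", COMPLETE = "stream-complete";

Sink<AdCalc, NotUsed> ackingConsumer = Sink.actorRefWithAck(
        readerActor, INIT, ACK, COMPLETE, failure -> failure);

// Inside the actor's onReceive, after handling INIT or an element:
//   getSender().tell(ACK, getSelf());

// mapAsync: up to 4 calculations in flight, results emitted in upstream order.
Source<AdCalc, NotUsed> adCalcResult = event1.mapAsync(4,
        evt -> CompletableFuture.supplyAsync(() -> ad.calculator(evt, evt.getData())));

adCalcResult.runWith(ackingConsumer, materializer);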
I am new to ActiveMQ. I have implemented a producer-consumer (sender-receiver) setup in ActiveMQ. In my code I can easily send and receive messages from a single producer to a single consumer via ActiveMQ. The problem is that I can't send the message to multiple consumers from the same producer.
Here are my producer and consumer classes.
MsgProducer.java
package jms_service;
import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;
public class MsgProducer {
private static String url = "failover://tcp://localhost:61616";
public static javax.jms.ConnectionFactory connFactory;
public static javax.jms.Connection connection;
public static javax.jms.Session mqSession;
public static javax.jms.Topic topic;
public static javax.jms.MessageProducer producer;
public static void main(String[] args) throws JMSException {
connFactory = new ActiveMQConnectionFactory(url);
connection = connFactory.createConnection("system","manager");
connection.start();
mqSession = connection.createSession(false,Session.AUTO_ACKNOWLEDGE);
topic = mqSession.createTopic("RealTimeData");
producer = mqSession.createProducer(topic);
producer.setTimeToLive(30000);
TextMessage message = mqSession.createTextMessage();
int seq_id =1;
while(true)
{
message.setText("Hello world | " +"seq_id #"+seq_id);
producer.send(message);
seq_id++;
System.out.println("sent_msg =>> "+ message.getText());
// if(seq_id>100000) break;
try {
Thread.sleep(1000);
}
catch (InterruptedException e) { e.printStackTrace();}
}
}
}
MsgConsumer.java
package jms_service;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;
public class MsgConsumer {
private static String url = "failover://tcp://localhost:61616";
public static javax.jms.ConnectionFactory connFactory;
public static javax.jms.Connection connection;
public static javax.jms.Session mqSession;
public static javax.jms.Topic topic;
public static javax.jms.MessageConsumer consumer;
public static void main(String[] args) throws JMSException, InterruptedException {
connFactory = new ActiveMQConnectionFactory(url);
connection = connFactory.createConnection("system", "manager");
connection.setClientID("0002");
//connection.start();
mqSession = connection.createSession(true, Session.CLIENT_ACKNOWLEDGE);
topic = mqSession.createTopic("RealTimeData");
consumer = mqSession.createDurableSubscriber(topic, "SUBS01");
connection.start();
MessageListener listener = new MessageListener() {
public void onMessage(Message message) {
try {
if (message instanceof TextMessage) {
TextMessage txtmsg = (TextMessage) message;
Calendar cal = Calendar.getInstance();
//cal.getTime();
SimpleDateFormat sdf = new SimpleDateFormat("HH:mm:ss");
String time = sdf.format(cal.getTime());
String msg="received_message =>> "+ txtmsg.getText() + " | received_at :: "+time;
System.out.println(msg);
//consumer.sendData(msg);
}
} catch (JMSException e) {
System.out.println("Caught:" + e);
e.printStackTrace();
}
}
};
consumer.setMessageListener(listener);
}
}
Can anyone help me figure out a way to send a message to multiple consumers?
Thanks in advance.
Queue semantics deliver a message once-and-only-once across all consumers. This is per the JMS spec (a great read for understanding the basics).
Topic semantics deliver a message to every subscriber, so a topic may be the answer to your needs.
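Since your consumer code already uses a durable topic subscription, note that each consumer connection needs its own client ID and subscription name; with the fixed values "0002" and "SUBS01", the broker will reject a second consumer. A minimal sketch of the consumer-side change (consumerId is an invented placeholder):

// Each consumer instance must use a unique client ID and durable
// subscription name, otherwise the second connection is refused.
String consumerId = "0002"; // make this unique per consumer instance
Connection connection = connFactory.createConnection("system", "manager");
connection.setClientID("client-" + consumerId);
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Topic topic = session.createTopic("RealTimeData");
MessageConsumer subscriber = session.createDurableSubscriber(topic, "SUBS-" + consumerId);
connection.start();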
Assuming your question is
Can anyone help to figure out the way for sending message to multiple consumers
and without reading through your complete code, one approach could be to put your clients in a collection
static Vector<Consumer> consumers; // Consumer being your own client wrapper class
where you add every new client, keeping a reference to all existing clients.
Broadcasting is then just like sending to a single client, wrapped in, for example, a foreach loop:
for (Consumer cons : consumers) {
    // send stuff, or put it in a sending queue
}
Topics are the best route: one producer to many consumers, or one publisher to many subscribers. With queues you would have to write a loop over all the possible consumers and send the messages to different destinations. Your motive also determines whether to use queues or topics.
If you think your consumers can be offline or have network issues, then choose queues. In that case, when they come back online they will receive the pending messages.
With plain (non-durable) topic subscriptions, consumers will not receive messages published while they were disconnected; a durable subscription, as in your consumer code, is needed for messages to be delivered after a reconnection.
What options are available to develop Java applications using Service Bus for Windows?
Java Message Broker API - this needs ACS, which SB for Windows doesn't support.
AMQP - this doesn't seem to work on SB for Windows; I keep getting the error
org.apache.qpid.amqp_1_0.client.Sender$SenderCreationException: Peer did not create remote endpoint for link, target:
while the same code works with Azure SB. So AMQP on SB for Windows seems to be not fully working?
Correct me if I have missed something.
Update
To test AMQP on local machine, this is what I did
Installed Service bus 1.1 on my local machine
Took the sample mentioned here http://www.windowsazure.com/en-us/develop/java/how-to-guides/service-bus-amqp/
Created a new namespace on my local machine
Specified the following connection string in servicebus.properties (which is correctly referenced in the code):
connectionfactory.SBCF = amqps://<username>:<password>@<MachineName>:5671/StringAnalyzerNS/
queue.QUEUE = queue1
Updated the code with the certificates.
At runtime I get this error
javax.jms.JMSException: Peer did not create remote endpoint for link, target: queue1
at org.apache.qpid.amqp_1_0.jms.impl.MessageProducerImpl.<init>(MessageProducerImpl.java:77)
at org.apache.qpid.amqp_1_0.jms.impl.SessionImpl.createProducer(SessionImpl.java:348)
at org.apache.qpid.amqp_1_0.jms.impl.SessionImpl.createProducer(SessionImpl.java:63)
at com.stringcompany.Analyzer.SimpleSenderReceiver.<init>(SimpleSenderReceiver.java:70)
at com.stringcompany.Analyzer.SimpleSenderReceiver.main(SimpleSenderReceiver.java:95)
Caused by: org.apache.qpid.amqp_1_0.client.Sender$SenderCreationException: Peer did not create remote endpoint for link, target: queue1
at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:171)
at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:104)
at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:97)
at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:83)
at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:69)
at org.apache.qpid.amqp_1_0.client.Sender.<init>(Sender.java:63)
at org.apache.qpid.amqp_1_0.client.Session.createSender(Session.java:74)
at org.apache.qpid.amqp_1_0.client.Session.createSender(Session.java:66)
at org.apache.qpid.amqp_1_0.jms.impl.MessageProducerImpl.<init>(MessageProducerImpl.java:72)
... 4 more
javax.jms.JMSException: Session remotely closed
With the same code, if I point to the Azure Service Bus by setting the SB namespace and queue as below:
connectionfactory.SBCF = amqps://<Policy name>:<Sec. Key>@<ns>.servicebus.windows.net
queue.QUEUE = testq
This works, messages are exchanged.
Here is the code if someone wants to try it
package com.stringcompany.Analyzer;
//SimpleSenderReceiver.java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Hashtable;
import java.util.Random;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;
public class SimpleSenderReceiver implements MessageListener {
private static boolean runReceiver = true;
private Connection connection;
private Session sendSession;
private Session receiveSession;
private MessageProducer sender;
private MessageConsumer receiver;
private static Random randomGenerator = new Random();
public SimpleSenderReceiver() throws Exception {
// Configure JNDI environment
Hashtable<String, String> env = new Hashtable<String, String>();
env.put(Context.INITIAL_CONTEXT_FACTORY,
"org.apache.qpid.amqp_1_0.jms.jndi.PropertiesFileInitialContextFactory");
env.put(Context.PROVIDER_URL, "D:\\Java\\Azure\\workspace\\Analyzer\\src\\main\\resources\\servicebus.properties");
Context context = new InitialContext(env);
// Lookup ConnectionFactory and Queue
ConnectionFactory cf = (ConnectionFactory) context.lookup("SBCF");
System.out.println("cf:"+cf);
// Create Connection
connection = cf.createConnection();
System.out.println("connection :"+connection);
connection.setExceptionListener(new ExceptionListener() {
public void onException(JMSException arg0) {
System.err.println(arg0);
}
});
connection.start();
// Create sender-side Session and MessageProducer
sendSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
System.out.println("Session open");
Destination queue = (Destination) context.lookup("QUEUE");
System.out.println("queue:"+queue);
sender = sendSession.createProducer(queue);
Queue q=(Queue) queue;
System.out.println(sender.getDestination());
System.out.println("sender:"+sender);
if (runReceiver) {
System.out.println("Waitng for new message");
// Create receiver-side Session, MessageConsumer,and MessageListener
receiveSession = connection.createSession(false,
Session.CLIENT_ACKNOWLEDGE);
receiver = receiveSession.createConsumer(queue);
receiver.setMessageListener(this);
connection.start();
}
}
public static void main(String[] args) {
try {
if ((args.length > 0) && args[0].equalsIgnoreCase("sendonly")) {
runReceiver = false;
}
//System.setProperty("javax.net.debug","ssl");
System.setProperty("javax.net.ssl.trustStore","D:\\Java\\Azure\\workspace\\Analyzer\\src\\main\\resources\\SBKeystore.keystore");
System.setProperty("log4j.configuration","D:\\Java\\Azure\\workspace\\Analyzer\\src\\main\\resources\\log4j.properties");
SimpleSenderReceiver simpleSenderReceiver = new SimpleSenderReceiver();
System.out
.println("Press [enter] to send a message. Type 'exit' + [enter] to quit.");
BufferedReader commandLine = new java.io.BufferedReader(
new InputStreamReader(System.in));
while (true) {
String s = "Message";//commandLine.readLine();
if (s.equalsIgnoreCase("exit")) {
simpleSenderReceiver.close();
System.exit(0);
} else {
simpleSenderReceiver.sendMessage();
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
private void sendMessage() throws JMSException {
TextMessage message = sendSession.createTextMessage();
message.setText("Test AMQP message from JMS");
long randomMessageID = randomGenerator.nextLong() >>> 1;
message.setJMSMessageID("ID:" + randomMessageID);
sender.send(message);
System.out.println("Sent message with JMSMessageID = "
+ message.getJMSMessageID());
}
public void close() throws JMSException {
connection.close();
}
public void onMessage(Message message) {
try {
System.out.println("Received message with JMSMessageID = "
+ message.getJMSMessageID());
message.acknowledge();
} catch (Exception e) {
e.printStackTrace();
}
}
}
Hi, we had the same problems, and thankfully MS updated their documentation to show how to do this correctly:
http://msdn.microsoft.com/en-us/library/dn574799.aspx
The simplest answer to the question is that you should URL-encode the SASPolicyKey.
connectionfactory.SBCF = amqps://[SASPolicyName]:[SASPolicyKey]@[namespace].servicebus.windows.net
where SASPolicyKey should be URL-encoded.
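For example, a small sketch of building that string in Java (the policy values are placeholders):

// URLEncoder.encode declares the checked UnsupportedEncodingException,
// so either propagate it or wrap it.
String policyName = "RootManageSharedAccessKey"; // placeholder policy name
String rawKey = "<your SAS key>";
String encodedKey = java.net.URLEncoder.encode(rawKey, "UTF-8");
String sbcf = "amqps://" + policyName + ":" + encodedKey
        + "@<namespace>.servicebus.windows.net";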
AMQP 1.0 is supported with Service Bus 1.1 for Windows Server. Basically there are two differences between the cloud and on-prem usage of AMQP in Service Bus:
1. Addressing: you will need to build AMQP connection strings (and will need DNS if you're looking for HA).
2. Authentication: you will need to use domain-joined accounts, as ACS is not there on-prem. You will also need to distribute your SB certificate to your clients.
OK, I have sorted out the first issue (the Java Message Broker API not supporting the SAS endpoint) by writing a wrapper which works seamlessly with the existing API. With this I can develop and test my Java application against a local Service Bus environment and host it on an Azure or on-premises Service Bus farm. You can get the library from this GitHub repository:
https://github.com/Dhana-Krishnasamy/ServiceBusForWindows-SASWrapper
You will have to configure the sender and receiver queues differently. Here is an example of my working configuration (servicebus.properties):
connectionfactory.SBCF = amqps://$PolicyName:$UrlEncodedKey@$Your-EventHub-NamespaceName.servicebus.windows.net
queue.EventHubSender=$YourEventHubName
queue.EventHubReceiver=$YourEventHubName/ConsumerGroups/$YourConsumerGroupName/Partitions/1
Replace the '$' items with your own values.
The shared policy key has to be URL-encoded.
Make sure that your sender references the 'EventHubSender' defined in this config and the receiver references the 'EventHubReceiver'.
Grab the Azure Java SDK from http://www.windowsazure.com/en-us/develop/java/ and then follow this guide: http://www.windowsazure.com/en-us/develop/java/how-to-guides/service-bus-queues/
I have made a simple ActiveMQ application.
It listens to a queue. If a message comes in, it prints out the dataId.
Here is the code:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;
import javax.jms.MapMessage;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
public class MQ implements MessageListener {
private Connection connection = null;
private Session session = null;
private Destination destination = null;
private void errorOnConnection(JMSException e) {
System.out.println("MQ is having problems. Exception::"+ e);
}
private void init() throws JMSException {
String BROKER_URL = "failover:(tcp://myQueue001:61616,tcp://myQueue002:61616)?randomize=false";
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(BROKER_URL);
connection = connectionFactory.createConnection("user", "password");
connection.setExceptionListener(
new ExceptionListener() {
@Override public void onException(JMSException e) {
errorOnConnection(e);
}
});
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
destination = session.createQueue("myQueue");
MessageConsumer consumer = session.createConsumer(destination);
consumer.setMessageListener(this);
}
public boolean start() {
try {
if(connection==null )
init();
connection.start();
} catch (Exception e) {
System.out.println("MQListener cannot be started, exception: " + e);
}
return true;
}
@Override
public void onMessage(Message msg) {
try {
if(msg instanceof MapMessage){
MapMessage m = (MapMessage)msg;
int dataId = m.getIntProperty("dataId");
System.out.println(dataId);
}
} catch (JMSException e) {
System.out.println("Got an exception: " + e);
}
}
public static void main(String[] args) {
MQ mq = new MQ();
mq.start();
}
}
It works fine and does what it is meant to accomplish.
However, the problem is that it only runs for several days. After several days it just quits silently, without any exception or error.
The queue I am listening to belongs to a third party. According to a guy there, the queue is sometimes closed, restarted, or interrupted.
But I think that even if that happens, the default ActiveMQ failover settings will handle it by continually reconnecting, right? (According to http://activemq.apache.org/cms/configuring.html.)
So what other possible causes could lead my code to quit silently?
It depends a bit on your version. You are not doing anything yourself to keep the application running; instead you are depending on the ActiveMQ code to keep at least one non-daemon thread alive. In some ActiveMQ versions the client wasn't always doing this, so your application could quit while a failover was occurring. Your best bet is to switch to v5.8.0, which I believe had some fixes for this.
You could also add some polling code in main, reading something from the console or similar, to ensure that the client stays up until you are sure you want it to go down.
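As a sketch of that idea, one common pattern is to block the main thread on a latch that a JVM shutdown hook releases (add import java.util.concurrent.CountDownLatch to the class):

public static void main(String[] args) throws InterruptedException {
    MQ mq = new MQ();
    mq.start();
    // Keep a non-daemon thread (main) alive explicitly instead of relying
    // on the JMS client's internal threads.
    final CountDownLatch shutdown = new CountDownLatch(1);
    Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
        public void run() {
            shutdown.countDown(); // released on SIGTERM / Ctrl-C
        }
    }));
    shutdown.await(); // block until the process is asked to stop
}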