I would like to dynamically create Kafka topics. In my case, there can be up to several hundred topics in the application. There can be multiple concurrent calls to this method for each topic during system startup.
The AdminClient object has local scope, so it is created on every call. I suspect that each instance opens a socket and a connection to the Kafka broker underneath, so this solution is not optimal in terms of performance: there may be several hundred connections open at any one time.
import java.util.Collections;
import java.util.Properties;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import lombok.RequiredArgsConstructor;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.KafkaFuture;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
@Service
@RequiredArgsConstructor
class TopicFactory {
private final Logger log = LoggerFactory.getLogger(TopicFactory.class);
private final Set<String> topics = ConcurrentHashMap.newKeySet();
#Value("${kafka.bootstrap.servers}")
private final String bootstrapServers;
#Value("${kafka.topic.replication.factor}")
private final String replicationFactor;
void createTopicIfNotExists(String topicName, int partitionCount) {
if (topics.contains(topicName)) {
return;
}
Properties properties = new Properties();
properties.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
try (Admin admin = Admin.create(properties)) {
if (isTopicExists(admin, topicName)) {
topics.add(topicName);
return;
}
NewTopic newTopic = new NewTopic(topicName, partitionCount, Short.parseShort(replicationFactor));
CreateTopicsResult result = admin.createTopics(Collections.singleton(newTopic));
KafkaFuture<Void> future = result.values().get(topicName);
try {
future.get();
topics.add(topicName);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
log.error("Interrupted exception occurred during topic creation", e);
} catch (ExecutionException e) {
log.error("Execution exception occurred during topic creation", e);
}
}
}
private boolean isTopicExists(Admin admin, String topicName) {
try {
return admin.listTopics().names().get().contains(topicName);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
log.error("Interrupted exception occurred during topic creation", e);
return false;
} catch (ExecutionException e) {
log.error("Execution exception occurred during topic creation", e);
return false;
}
}
}
How can I improve the performance of this solution? Is caching the connection a good idea? If so, in what way: as a field initialized in the class, or maybe using e.g. a Guava cache or Suppliers.memoize(...)? In that case, however, the connection to the broker would have to be kept open all the time.
If you want to improve this solution for hundreds of topics as it is written: admin.createTopics takes a whole collection, so don't pass it a one-element set per topic; create the missing topics in one batch.
Also, the admin.listTopics() result can be cached so that you don't query all topics every time you create one more.
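For illustration, a minimal sketch combining both points with a single long-lived Admin client held by the bean (the class and method names are mine, not from the question; it reuses the question's property keys and assumes all topics can be created in one batch during startup):

import java.util.List;
import java.util.Properties;
import java.util.Set;
import java.util.stream.Collectors;
import javax.annotation.PreDestroy; // jakarta.annotation.PreDestroy on newer Spring versions
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

@Service
class BatchingTopicFactory {

    private final Admin admin;              // one shared client, one set of broker connections
    private final short replicationFactor;

    BatchingTopicFactory(@Value("${kafka.bootstrap.servers}") String bootstrapServers,
                         @Value("${kafka.topic.replication.factor}") short replicationFactor) {
        Properties properties = new Properties();
        properties.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        this.admin = Admin.create(properties);
        this.replicationFactor = replicationFactor;
    }

    // Creates every topic from the list that does not exist yet, in a single request.
    void createTopicsIfNotExist(List<String> topicNames, int partitionCount) throws Exception {
        Set<String> existing = admin.listTopics().names().get();   // query existing names once
        Set<NewTopic> missing = topicNames.stream()
                .filter(name -> !existing.contains(name))
                .map(name -> new NewTopic(name, partitionCount, replicationFactor))
                .collect(Collectors.toSet());
        if (!missing.isEmpty()) {
            admin.createTopics(missing).all().get();                // one request for all missing topics
        }
    }

    @PreDestroy
    void close() {
        admin.close();
    }
}

Because the Admin client is thread-safe, keeping a single instance as a bean field also answers the caching question without Guava or Suppliers.memoize; the trade-off is that its broker connections stay open for the lifetime of the application.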
Otherwise, I personally would use alternative solutions like Terraform rather than Spring. Topics aren't going to need to be recreated very often (in the same Kafka cluster, at least), so your code might only be run a handful of times, yet you're needlessly increasing the size of your Spring app by dragging that TopicFactory class around.
Related
I'm using Spring Boot with mq-jms-spring-boot-starter to create a JMS listener application which reads a message from a queue, processes it and forwards the message to another queue.
In case of a poison message scenario, I'm trying to generate an alert. However, in order not to generate multiple alerts for the same message, I'm thinking of comparing JMSXDeliveryCount with the BOTHRESH value and generating the alert on the last redelivery before the message is sent to the backout queue (BOQ).
BOTHRESH and BOQNAME are configured for the source queue.
@JmsListener(destination = "${sourceQueue}")
public void processMessages(Message message) {
TextMessage msg = (TextMessage) message;
int boThresh = 0; // initialized so they can be read in the catch block
int redeliveryCount = 0;
try {
boThresh = message.getIntProperty("<WHAT COMES HERE>");
redeliveryCount = message.getIntProperty("JMSXDeliveryCount");
String processedMessage = this.processMessage(message);
this.forwardMessage("destinationQueue", processedMessage);
} catch (Exception e) {
if (redeliveryCount >= boThresh) {
//generate alert here
}
}
}
How should I get the value of BOTHRESH here? Is it possible at all? I tried to get all the available properties using the getPropertyNames() method, and the following are all the properties I see.
JMS_IBM_Format
JMS_IBM_PutDate
JMS_IBM_Character_Set
JMSXDeliveryCount
JMS_IBM_MsgType
JMSXUserID
JMS_IBM_Encoding
JMS_IBM_PutTime
JMSXAppID
JMS_IBM_PutApplType
This will do it, but the code does need admin access to an admin channel, which may not be optimal for a client application.
The Configuration
import com.ibm.mq.*;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.ibm.mq.constants.CMQC;
import java.util.Hashtable;
@Configuration
public class MQConfiguration {
protected final Log logger = LogFactory.getLog(getClass());
#Value("${ibm.mq.queueManager:QM1}")
public String qMgrName;
#Value("${app.mq.admin.channel:DEV.ADMIN.SVRCONN}")
private String adminChannel;
#Value("${app.mq.host:localhost}")
private String host;
#Value("${app.mq.host.port:1414}")
private int port;
#Value("${app.mq.adminuser:admin}")
private String adminUser;
#Value("${app.mq.adminpassword:passw0rd}")
private String password;
#Bean
public MQQueueManager mqQueueManager() {
try {
Hashtable<String,Object> connectionProperties = new Hashtable<String,Object>();
connectionProperties.put(CMQC.CHANNEL_PROPERTY, adminChannel);
connectionProperties.put(CMQC.HOST_NAME_PROPERTY, host);
connectionProperties.put(CMQC.PORT_PROPERTY, port);
connectionProperties.put(CMQC.USER_ID_PROPERTY, adminUser);
connectionProperties.put(CMQC.PASSWORD_PROPERTY, password);
return new MQQueueManager(qMgrName, connectionProperties);
} catch (MQException e) {
logger.warn("MQException obtaining MQQueueManager");
logger.warn(e.getMessage());
}
return null;
}
}
Obtain the Queue's backout threshold
import com.ibm.mq.*;
import com.ibm.mq.constants.CMQC;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;
@Component
public class Runner {
protected final Log logger = LogFactory.getLog(getClass());
#Value("${app.mq.queue:DEV.QUEUE.1}")
private String queueName = "";
private final MQQueueManager mqQueueManager;
Runner(MQQueueManager mqQueueManager) {
this.mqQueueManager = mqQueueManager;
}
@Bean
CommandLineRunner init() {
return (args) -> {
logger.info("Determining Backout threshold");
try {
int[] selectors = {
CMQC.MQIA_BACKOUT_THRESHOLD,
CMQC.MQCA_BACKOUT_REQ_Q_NAME };
int[] intAttrs = new int[1];
byte[] charAttrs = new byte[MQC.MQ_Q_NAME_LENGTH];
int openOptions = MQC.MQOO_INPUT_AS_Q_DEF | MQC.MQOO_INQUIRE | MQC.MQOO_SAVE_ALL_CONTEXT;
MQQueue myQueue = mqQueueManager.accessQueue(queueName, openOptions, null, null, null);
logger.info("Queue Obtained");
MQManagedObject moMyQueue = (MQManagedObject) myQueue;
moMyQueue.inquire(selectors, intAttrs, charAttrs);
int boThresh = intAttrs[0];
String backoutQname = new String(charAttrs);
logger.info("Backout Threshold: " + boThresh);
logger.info("Backout Queue: " + backoutQname);
} catch (MQException e) {
logger.warn("MQException Error obtaining threshold");
logger.warn(e.getMessage());
}
};
}
}
This sounds like you are mixing retriable and non-retriable error handling.
If you are tracking redeliveries and need to send an alert, then you probably do not want to set a BOTHRESH value, and should instead manage it all in your client-side code.
Recommended consumer error handling pattern:
If the message is invalid (i.e. bad JSON or XML), move it to the DLQ immediately. The message will never improve in quality and there is no reason to do repeated retries.
If the 'next step' in processing is down (i.e. the database), reject delivery and allow redelivery delays and backout retries to kick in. This also has the benefit of allowing other consumers on the queue to attempt to process the message, and it eliminates the problem of one consumer with a dead path holding up messages.
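As a rough sketch of that split (not the asker's code: the listener class, DLQ name and the isValid/process helpers are placeholders, and it assumes a transacted or client-acknowledge listener container so that a thrown exception triggers redelivery):

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.TextMessage;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {   // illustrative name

    private final JmsTemplate jmsTemplate;

    public OrderListener(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    @JmsListener(destination = "${sourceQueue}")
    public void processMessages(Message message) throws JMSException {
        TextMessage msg = (TextMessage) message;
        if (!isValid(msg)) {
            // Non-retriable: the payload will never improve, so park it on a DLQ and return normally
            // (the message is then acknowledged and never redelivered).
            jmsTemplate.convertAndSend("DEV.DEAD.LETTER.QUEUE", msg.getText());
            return;
        }
        // Retriable: if the next step (e.g. the database) is down, let the exception propagate.
        // With a transacted/client-ack container the message is not acknowledged and the broker
        // redelivers it, possibly to another consumer.
        String processed = process(msg.getText());
        jmsTemplate.convertAndSend("destinationQueue", processed);
    }

    private boolean isValid(TextMessage msg) { /* schema / JSON check */ return true; }

    private String process(String payload) { /* business logic */ return payload; }
}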
Also, consider that using client-side consumer code to do monitoring and alerting can be problematic, since it combines different functions. If your goal is to track invalid messages, monitoring the DLQ is generally a better design pattern and it removes 'monitoring' code from your consumer code.
I want to be able to monitor events of cache creation in Apache Ignite.
Whenever such an event happens, I want to be able to do something with the cache after it is created, but before anyone else can insert anything into it.
So I used a local listener. Below is all the code:
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;
@Configuration
public class ServerConfig {
public ServerConfig(Environment e) throws Exception {
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setIncludeEventTypes(EventType.EVT_CACHE_STARTED);
Ignite ignite = Ignition.start(cfg);
String cacheName = "test";
registerCacheCreationListener(ignite);
IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cacheName);
}
private void registerCacheCreationListener(Ignite ignite){
IgnitePredicate<CacheEvent> locLsnr = new IgnitePredicate<CacheEvent>(){
@IgniteInstanceResource
private Ignite ignite;
@Override
public boolean apply(CacheEvent evt) {
System.out.println("Received event [evt=" + evt.name() + " cacheName=" + evt.cacheName());
IgniteCache<Integer, String > cache = ignite.cache(evt.cacheName()); // CANNOT ACCESS evt.cacheName() - STUCKS HERE
System.out.println("finish listener");
return true;
}
};
ignite.events().localListen(locLsnr, EventType.EVT_CACHE_STARTED);
}
}
So when I do:
ignite.cache(evt.cacheName())
inside the IgnitePredicate, the cache is not yet available, as I understand it.
Please help me figure out where I might be wrong.
Thanks.
As a rule, you should not perform cache operations, or most other operations that block or access Ignite internals, from inside an event listener. Listeners should be very fast and lightweight, because they are executed from inside Ignite threads and while Ignite internal locks are held.
Just schedule an operation in a different thread on event arrival.
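For example, a minimal sketch of that hand-off (the single-thread executor and the put() call are placeholders for whatever initialization you need): the listener only captures the cache name and returns immediately, and the actual cache access happens on your own thread, outside the Ignite event thread.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.ignite.Ignite;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class CacheStartedListener {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public void register(Ignite ignite) {
        IgnitePredicate<CacheEvent> listener = evt -> {
            String cacheName = evt.cacheName();
            // Do not touch the cache here; just hand the work off and return quickly.
            executor.submit(() -> {
                // Outside the Ignite event thread it is safe to access the cache.
                ignite.cache(cacheName).put(0, "initialized");   // placeholder initialization work
            });
            return true;   // keep listening for further cache-start events
        };
        ignite.events().localListen(listener, EventType.EVT_CACHE_STARTED);
    }
}

This is also why the original listener gets stuck: ignite.cache(evt.cacheName()) is called from an Ignite system thread while the cache start that fired the event has not yet finished.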
I haven't been able to find a comprehensive example of connecting to and then querying a remote Apache TinkerPop graph database with Gremlin and Java, and I can't quite get it to work. Can anyone who's done something like this before offer any advice?
I've set up an Azure Cosmos DB database in graph mode, which expects Gremlin queries in order to modify and access its data. I have the database host name, port, username, and password, and I'm able to execute queries, but only if I pass in a big ugly query string. I would like to be able to leverage the org.apache.tinkerpop.gremlin.structure.Graph traversal methods, but I can't quite get that working.
import java.util.List;
import java.util.concurrent.CompletableFuture;
import org.apache.tinkerpop.gremlin.driver.Result;
import org.apache.tinkerpop.gremlin.driver.ResultSet;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
//More imports...
@Service
public class SearchService {
private final static Logger log = LoggerFactory.getLogger(SearchService.class);
@Autowired
private GraphDbConnection graphDbConnection;
@Autowired
private Graph graph;
public Object workingQuery() {
try {
String query = "g.V('1234').outE('related').inV().both().as('v').project('vertex').by(select('v')).by(bothE().fold())";
log.info("Submitting this Gremlin query: {}", query);
ResultSet results = graphDbConnection.executeQuery(query);
CompletableFuture<List<Result>> completableFutureResults = results.all();
List<Result> resultList = completableFutureResults.get();
Result result = resultList.get(0);
log.info("Query result: {}", result.toString());
return result.toString();
} catch (Exception e) {
log.error("Error fetching data.", e);
}
return null;
}
public Object failingQuery() {
return graph.traversal().V(1234).outE("related").inV()
.both().as("v").project("vertex").by("v").bothE().fold()
.next();
/* I get an Exception:
"org.apache.tinkerpop.gremlin.process.remote.RemoteConnectionException:
java.lang.RuntimeException: java.lang.RuntimeException:
java.util.concurrent.TimeoutException: Timed out while waiting for an
available host - check the client configuration and connectivity to the
server if this message persists" */
}
}
This is my configuration class:
import java.util.HashMap;
import java.util.Map;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.MessageSerializer;
import org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection;
import org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV2d0;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.util.GraphFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class GraphDbConfig {
private final static Logger log = LoggerFactory.getLogger(GraphDbConfig.class);
#Value("${item.graph.hostName}")
private String hostName;
#Value("${item.graph.port}")
private int port;
#Value("${item.graph.username}")
private String username;
#Value("${item.graph.password}")
private String password;
#Value("${item.graph.enableSsl}")
private boolean enableSsl;
#Bean
public Graph graph() {
Map<String, String> graphConfig = new HashMap<>();
graphConfig.put("gremlin.graph",
"org.apache.tinkerpop.gremlin.process.remote.RemoteGraph");
graphConfig.put("gremlin.remoteGraph.remoteConnectionClass",
"org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection");
Graph g = GraphFactory.open(graphConfig);
g.traversal().withRemote(DriverRemoteConnection.using(cluster()));
return g;
}
@Bean
public Cluster cluster() {
Cluster cluster = null;
try {
MessageSerializer serializer = new GraphSONMessageSerializerGremlinV2d0();
Cluster.Builder clusterBuilder = Cluster.build().addContactPoint(hostName)
.serializer(serializer)
.port(port).enableSsl(enableSsl)
.credentials(username, password);
cluster = clusterBuilder.create();
} catch (Exception e) {
log.error("Error in connecting to host address.", e);
}
return cluster;
}
}
And I have to define this connection component currently in order to send queries to the database:
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.ResultSet;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
@Component
public class GraphDbConnection {
private final static Logger log = LoggerFactory.getLogger(GraphDbConnection.class);
@Autowired
private Cluster cluster;
public ResultSet executeQuery(String query) {
Client client = connect();
ResultSet results = client.submit(query);
closeConnection(client);
return results;
}
private Client connect() {
Client client = null;
try {
client = cluster.connect();
} catch (Exception e) {
log.error("Error in connecting to host address.", e);
}
return client;
}
private void closeConnection(Client client) {
client.close();
}
}
You cannot leverage the remote API with CosmosDB yet, because it does not support Gremlin Bytecode.
https://github.com/Azure/azure-documentdb-dotnet/issues/439
https://feedback.azure.com/forums/263030-azure-cosmos-db/suggestions/33632779-support-gremlin-bytecode-to-enable-the-fluent-api
You will have to continue with strings until then. Though, since you are using Java, you could try a somewhat unadvertised feature: GroovyTranslator.
gremlin> g = EmptyGraph.instance().traversal()
==>graphtraversalsource[emptygraph[empty], standard]
gremlin> translator = GroovyTranslator.of('g')
==>translator[g:gremlin-groovy]
gremlin> translator.translate(g.V().out('knows').has('person','name','marko').asAdmin().getBytecode())
==>g.V().out("knows").has("person","name","marko")
As you can see, it takes Gremlin Bytecode and converts it into a String of Gremlin that you could submit to CosmosDB. Later, when CosmosDB supports Bytecode, you could drop the GroovyTranslator, switch away from the EmptyGraph construction of your GraphTraversalSource, and everything should start working. To make this really seamless, you could go the extra step and write a TraversalStrategy that does something similar to TinkerPop's RemoteStrategy. Instead of submitting Bytecode as that strategy does, you would just use GroovyTranslator and submit the String of Gremlin. That approach would make it even easier to switch over when CosmosDB supports Bytecode, because then all you would have to do is remove your custom TraversalStrategy and reconfigure your remote GraphTraversalSource in the standard way.
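In Java that could look roughly like the sketch below (the class name and traversal are just examples; the GroovyTranslator package shown is the one from the gremlin-groovy module of the TinkerPop 3.2/3.3 line, and in newer TinkerPop versions translate() returns a Script rather than a String):

import java.util.List;
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;
import org.apache.tinkerpop.gremlin.groovy.jsr223.GroovyTranslator;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversal;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.apache.tinkerpop.gremlin.structure.util.empty.EmptyGraph;

public class TranslatingSearchService {

    public List<Result> findRelated(Cluster cluster, String vertexId) throws Exception {
        // Build the traversal against an empty graph purely to produce Bytecode locally.
        GraphTraversalSource g = EmptyGraph.instance().traversal();
        GraphTraversal<Vertex, Vertex> traversal = g.V(vertexId).outE("related").inV();

        // Translate the Bytecode into a Gremlin-Groovy string that CosmosDB can execute.
        String gremlin = GroovyTranslator.of("g").translate(traversal.asAdmin().getBytecode());

        Client client = cluster.connect();
        try {
            return client.submit(gremlin).all().get();   // same string submission as in the question
        } finally {
            client.close();
        }
    }
}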
I have a directory that contains 200 million HTML files (don't look at me, I didn't create this mess, I just have to deal with it). I need to index every HTML file in that directory into Solr. I've been reading guides on getting the job done, and I've got something going right now. After about an hour, I've got about 100k indexed, meaning this is going to take roughly 85 days.
I'm indexing the files to a standalone Solr server, running on a c4.8xlarge AWS EC2 instance. Here's the output from free -m with the Solr server running, and the indexer I wrote running as well:
             total       used       free     shared    buffers     cached
Mem:         60387      12981      47405          0         19       4732
-/+ buffers/cache:        8229      52157
Swap:            0          0          0
As you can see, I'm doing pretty well on resources. I increased maxWarmingSearchers to 200 in my Solr config, because I was getting the error:
Exceeded limit of maxWarmingSearchers=2, try again later
Alright, but I don't think increasing that limit was really the right approach. I think the issue is that for each file, I am doing a commit, and I should be doing this in bulk (say 50k files / commit), but I'm not entirely sure how to adapt this code for that, and every example I see does a single file at a time. I really need to do everything I can to make this run as fast as possible, since I don't really have 85 days to wait on getting the data in Solr.
Here's my code:
Index.java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
public class Index {
public static void main(String[] args) {
String directory = "/opt/html";
String solrUrl = "URL";
final int QUEUE_SIZE = 250000;
final int MAX_THREADS = 300;
BlockingQueue<String> queue = new LinkedBlockingQueue<>(QUEUE_SIZE);
SolrProducer producer = new SolrProducer(queue, directory);
new Thread(producer).start();
for (int i = 1; i <= MAX_THREADS; i++)
new Thread(new SolrConsumer(queue, solrUrl)).start();
}
}
Producer.java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.concurrent.BlockingQueue;
public class SolrProducer implements Runnable {
private BlockingQueue<String> queue;
private String directory;
public SolrProducer(BlockingQueue<String> queue, String directory) {
this.queue = queue;
this.directory = directory;
}
@Override
public void run() {
try {
Path path = Paths.get(directory);
Files.walkFileTree(path, new SimpleFileVisitor<Path>() {
@Override
public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
if (!attrs.isDirectory()) {
try {
queue.put(file.toString());
} catch (InterruptedException e) {
}
}
return FileVisitResult.CONTINUE;
}
});
} catch (IOException e) {
e.printStackTrace();
}
}
}
Consumer.java
import co.talentiq.common.net.SolrManager;
import org.apache.solr.client.solrj.SolrServerException;
import java.io.IOException;
import java.util.concurrent.BlockingQueue;
public class SolrConsumer implements Runnable {
private BlockingQueue<String> queue;
private static SolrManager sm;
public SolrConsumer(BlockingQueue<String> queue, String url) {
this.queue = queue;
if (sm == null)
this.sm = new SolrManager(url);
}
@Override
public void run() {
try {
while (true) {
String file = queue.take();
sm.indexFile(file);
}
} catch (InterruptedException e) {
e.printStackTrace();
} catch (SolrServerException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
}
SolrManager.java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import java.io.File;
import java.io.IOException;
import java.util.UUID;
public class SolrManager {
private static String urlString;
private static SolrClient solr;
public SolrManager(String url) {
urlString = url;
if (solr == null)
solr = new HttpSolrClient(url);
}
public void indexFile(String fileName) throws IOException, SolrServerException {
ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/extract");
String solrId = UUID.randomUUID().toString();
up.addFile(new File(fileName), solrId);
up.setParam("literal.id", solrId);
up.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
solr.request(up);
}
}
You can use up.setCommitWithin(10000); to make Solr commit automatically within ten seconds of an update at the latest. Increase the value to make Solr commit within a minute (60000) or within ten minutes (600000). Remove the explicit commit (the setAction(..) call).
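Applied to the indexFile() method from the question, that would look something like this (10000 ms is just an example window):

public void indexFile(String fileName) throws IOException, SolrServerException {
    ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/extract");
    String solrId = UUID.randomUUID().toString();
    up.addFile(new File(fileName), solrId);
    up.setParam("literal.id", solrId);
    up.setCommitWithin(10000);   // let Solr batch the commit; no explicit COMMIT action any more
    solr.request(up);
}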
Another option is to configure autoCommit in your configuration file.
You might also be able to index more quickly by moving the HTML extraction out of Solr (and just submitting the text to be indexed), or by increasing the number of servers you're posting to (more nodes in the cluster).
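For example, a rough sketch of doing the extraction client-side with an HTML parser such as jsoup and posting plain documents; the "filename" and "content" field names are assumptions about your schema, not something from the question:

import java.io.File;
import java.io.IOException;
import java.util.UUID;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.common.SolrInputDocument;
import org.jsoup.Jsoup;

public class PlainTextIndexer {

    private final SolrClient solr;

    public PlainTextIndexer(SolrClient solr) {
        this.solr = solr;
    }

    public void indexFile(String fileName) throws IOException, SolrServerException {
        // Parse the HTML locally and keep only the visible text, so Solr never runs Tika.
        String text = Jsoup.parse(new File(fileName), "UTF-8").text();

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", UUID.randomUUID().toString());
        doc.addField("filename", fileName);   // assumed schema field
        doc.addField("content", text);        // assumed schema field
        solr.add(doc);                        // no explicit commit; rely on commitWithin/autoCommit
    }
}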
I'm guessing you won't be searching the index in parallel while documents are being indexed. So here are the things you could do.
You can configure the autoCommit option in your solrconfig.xml. It can be triggered based on the number of documents and/or a time interval. For you, the number-of-documents option would make more sense.
Remove the call to the setAction() method on the ContentStreamUpdateRequest object. You can maintain a count of the number of calls made to the indexFile() method. If it reaches, say, 25000/10000 (you can tune the limit based on your heap), then for that indexing call alone perform the commit using the SolrClient object, e.g. solr.commit(), so that a commit is made only once per that many documents.
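A sketch of that counting idea applied to the question's indexFile() method (the 25,000 threshold is arbitrary, and an AtomicLong is used because several consumer threads share the SolrManager):

private static final java.util.concurrent.atomic.AtomicLong indexedCount =
        new java.util.concurrent.atomic.AtomicLong();
private static final long COMMIT_EVERY = 25_000;

public void indexFile(String fileName) throws IOException, SolrServerException {
    ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/extract");
    String solrId = UUID.randomUUID().toString();
    up.addFile(new File(fileName), solrId);
    up.setParam("literal.id", solrId);
    solr.request(up);                                        // setAction(COMMIT, ...) removed
    if (indexedCount.incrementAndGet() % COMMIT_EVERY == 0) {
        solr.commit();                                       // one explicit commit per 25k documents
    }
}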
Let me know the results. Good Luck!
I have made a simple ActiveMQ application.
It listens to a queue. If a message arrives, it prints out the dataId.
Here is the code:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;
import javax.jms.MapMessage;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
public class MQ implements MessageListener {
private Connection connection = null;
private Session session = null;
private Destination destination = null;
private void errorOnConnection(JMSException e) {
System.out.println("MQ is having problems. Exception::"+ e);
}
private void init() throws JMSException {
String BROKER_URL = "failover:(tcp://myQueue001:61616,tcp://myQueue002:61616)?randomize=false";
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(BROKER_URL);
connection = connectionFactory.createConnection("user", "password");
connection.setExceptionListener(
new ExceptionListener() {
@Override public void onException(JMSException e) {
errorOnConnection(e);
}
});
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
destination = session.createQueue("myQueue");
MessageConsumer consumer = session.createConsumer(destination);
consumer.setMessageListener(this);
}
public boolean start() {
try {
if(connection==null )
init();
connection.start();
} catch (Exception e) {
System.out.println("MQListener cannot be started, exception: " + e);
}
return true;
}
@Override
public void onMessage(Message msg) {
try {
if(msg instanceof MapMessage){
MapMessage m = (MapMessage)msg;
int dataId = m.getIntProperty("dataId");
System.out.println(dataId);
}
} catch (JMSException e) {
System.out.println("Got an exception: " + e);
}
}
public static void main(String[] args) {
MQ mq = new MQ();
mq.start();
}
}
It works fine and does what it is meant to accomplish.
However, the problem is that it only runs for several days. After that, it just quits silently, without any exception or error.
The queue I am listening to is run by a third party. According to one of their engineers, the queue is sometimes closed, restarted, or interrupted.
But I think even if that happens, the default ActiveMQ failover settings will handle it by continually reconnecting, right? (according to http://activemq.apache.org/cms/configuring.html)
So are there any other possible causes that would lead my code to quit silently?
It depends a bit on your version. You are not doing anything yourself to keep the application running; instead you are depending on the ActiveMQ code to keep at least one non-daemon thread running. In some ActiveMQ versions the client wasn't always doing this, so your application could quit while a failover was occurring. Your best bet is to switch to v5.8.0, which I believe had some fixes for this.
You could also add some code in main, e.g. polling the console for input, to ensure that the client stays up until you are sure you want it to go down.
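For instance, a minimal sketch of the main method that blocks the main (non-daemon) thread so the JVM cannot exit even if the client library's own threads go away during a failover (the latch is never released here; a real application would count it down on shutdown):

public static void main(String[] args) throws InterruptedException {
    MQ mq = new MQ();
    mq.start();
    // Park the main thread forever; the JMS listener keeps running on the client's threads.
    new java.util.concurrent.CountDownLatch(1).await();
}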