One socket connection per cluster - java

I have a webapp built with JBoss Seam, which runs on a JBoss EAP cluster. The webapp has a client library that should stay connected to a server to receive events. When an event arrives from the client, it fires a JMS message. The question is: how can I ensure only one client connection per cluster (to avoid JMS message duplication) in this environment?

Clustering Singleton Services might work for your problem. See https://docs.jboss.org/jbossas/docs/Clustering_Guide/4/html/ch05s11.html
For a more detailed reference, see https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6/html/Development_Guide/Implement_an_HA_Singleton.html
Hope that helps.
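As a rough sketch of what such a singleton can look like on EAP 6 (this is an illustration, not code from the guides above; EventClientService and its contents are made up), the cluster-wide connection lives in a JBoss MSC service that the container starts on only one node at a time once it is installed as an HA singleton, as the linked guide describes:

import org.jboss.msc.service.Service;
import org.jboss.msc.service.StartContext;
import org.jboss.msc.service.StartException;
import org.jboss.msc.service.StopContext;

// Hypothetical service holding the single cluster-wide client connection.
public class EventClientService implements Service<EventClientService> {

    private volatile boolean connected;

    @Override
    public void start(StartContext context) throws StartException {
        // Open the single connection to the event source here and register the
        // listener that fires one JMS message per incoming event (client library
        // specifics omitted). Because the service is installed as an HA singleton,
        // this runs on exactly one node of the cluster at a time.
        connected = true;
    }

    @Override
    public void stop(StopContext context) {
        // Close the connection so the node elected next can reconnect cleanly.
        connected = false;
    }

    @Override
    public EventClientService getValue() {
        return this;
    }

    public boolean isConnected() {
        return connected;
    }
}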

Related

Forcefully kill JMS connection threads

First time posting so hopefully I can give enough info.
We are currently using Apache ActiveMQ to set up a fairly standard producer/consumer queue app. Our application is hosted on various client servers, and at random times/loads we experience issues where the JMS connection permanently dies, so our producer can no longer reach the consumer and we have to restart the producer. We're fairly sure of the cause: we're running out of connections on the JMS cached connection factory, so we need to do a better job of cleaning up/recycling those connections. This is a relatively common issue, described here (our setup is pretty similar):
ActiveMQ Dead Connection issue
Our difficulty is that this problem only occurs when the application is deployed on the client servers. We don't have access to those, as they house confidential client information, so we can't do any debugging or reproduction where the issue actually occurs, and so far we haven't been able to reproduce it in our local environment.
So, in short: is there any way we could forcefully kill/corrupt our JMS connection threads so that we can reproduce the issue and test various fixes and approaches? Unfortunately we don't have the option to add fixes without testing/demoing a solution, so replicating the issue on our local setup is our only option.
Thanks in advance
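One way to provoke that dead-connection state locally (a sketch under the assumption that an embedded ActiveMQ broker behaves closely enough to the real one; the ports and URLs are illustrative) is to run the broker inside the test and stop it while the client is connected:

import javax.jms.Connection;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

public class DeadConnectionRepro {
    public static void main(String[] args) throws Exception {
        // Start an embedded broker that the test fully controls.
        BrokerService broker = new BrokerService();
        broker.setPersistent(false);
        broker.addConnector("tcp://localhost:61616");
        broker.start();

        // Connect a client exactly as the production code would (cached factory, etc.).
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Kill the broker out from under the client: every open connection and its
        // threads now see a dropped socket, which is the state to reproduce.
        broker.stop();
        broker.waitUntilStopped();

        // At this point, attempt sends/receives and exercise the cleanup/recycling
        // logic under test; optionally restart the broker to verify recovery.
    }
}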

How does one make activemq actually reliable? When brokers disconnect messages are lost

We have a JEE6 app built on Apache TomEE v1.6.0+. There are two parts: a cloud part and a ground part. The cloud part is intended never to be restarted, since it monitors a transient source of information; it creates JMS messages and sends them to its broker.
The ground part is intended to be restartable during the day and is where the complex processing logic lives. It too has a broker, which connects to the cloud broker.
The problem we are having is that if we take the ground instance of TomEE down for more than a few minutes and then start it up again, the cloud broker will not deliver all the messages that stacked up. Furthermore, it doesn't deliver any new messages either, forcing us to restart it, which makes us lose our messages.
Here are the two connection URIs... What on earth are we doing wrong??
Cloud:
<Resource
id="ActiveMQResourceAdapter"
type="ActiveMQResourceAdapter">
BrokerXmlConfig = broker:(ssl://0.0.0.0:61617?needClientAuth=true&transport.keepAlive=true&transport.soTimeout=30000,vm://localhost,network:static:(failover:(ssl://ground.somedomain.com:61617?keepAlive=true&soTimeout=30000)))?persistent=true
ServerUrl = vm://localhost
DataSource = jdbc/activemq
</Resource>
Ground:
<Resource
id="ActiveMQResourceAdapter"
type="ActiveMQResourceAdapter">
BrokerXmlConfig = broker:(ssl://0.0.0.0:61617?needClientAuth=true&transport.keepAlive=true&transport.soTimeout=30000,vm://localhost,network:static:(failover:(ssl://cloud.somedomain.com:61617?keepAlive=true&soTimeout=30000)))?persistent=true
ServerUrl = vm://localhost
DataSource = jdbc/activemq
</Resource>
Any help is much appreciated. Thank you very much!!
OK, we learned a couple of things.
First, we switched to using an external instance of ActiveMQ instead of relying on the one embedded inside TomEE. You must start the broker before starting TomEE, or TomEE will create an internal broker on startup and you'll be left scratching your head wondering why no messages are being processed. You then connect TomEE to the broker by leaving BrokerXmlConfig = empty and setting ServerUrl = tcp://localhost.
Next, we switched to using the ActiveMQ HTTP transport. This completely sidesteps the network disconnect problems, since HTTP is stateless. It is VERY slow relative to tcp/ssl, but message transport is not the slowest point in our system, so it doesn't matter anyway. You MUST have the external broker listen on both HTTP and TCP, since TomEE connects via TCP and the remote broker connects via HTTP.
These two things fixed our problems and we have a completely solid system running now. I hope this helps someone!
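For reference, a broker that listens on both transports can be sketched programmatically like this (an illustration, not our actual config; it assumes the activemq-http module is on the broker's classpath and the ports are placeholders). The same two connectors would normally be declared as transportConnectors in the external broker's activemq.xml.

import org.apache.activemq.broker.BrokerService;

public class DualTransportBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setPersistent(true);

        // TCP connector for the local TomEE instance (ServerUrl = tcp://localhost:61616).
        broker.addConnector("tcp://0.0.0.0:61616");

        // HTTP connector for the remote broker's network connection; requires the
        // activemq-http module on the classpath. The port is illustrative.
        broker.addConnector("http://0.0.0.0:61618");

        broker.start();
        broker.waitUntilStopped();
    }
}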
Not sure if you are using topics or queues but the JMS spec says that only queues and durable subscribers can take advantage of the store-and-forward guaranteed delivery.
For a non-durable subscriber, a non-persistent message will be delivered "at most once" and will be missed if the subscriber is inactive.
Please take a look at the following URL, which explains in detail how guaranteed messaging works for topics and queues in ActiveMQ:
http://www.christianposta.com/blog/?p=265
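As a generic JMS illustration of the durable-subscriber point (not code from this thread; the broker URL, topic, client ID, and subscription names are made up), a durable topic subscription combined with persistent messages looks like this:

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableSubscriberExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        // A durable subscription is identified by client ID + subscription name,
        // so the broker stores messages while this subscriber is offline.
        connection.setClientID("ground-client");
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("events.topic");

        // Durable subscriber: survives restarts of the consuming side.
        TopicSubscriber subscriber = session.createDurableSubscriber(topic, "ground-subscription");

        // Producer side: messages must also be PERSISTENT to survive a broker restart.
        MessageProducer producer = session.createProducer(topic);
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        producer.send(session.createTextMessage("example event"));

        subscriber.close();
        connection.close();
    }
}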

How to setup RabbitMQ RPC in a web context

RabbitMQ RPC
I decided to use RabbitMQ RPC as described here.
My Setup
Incoming web requests (on Tomcat) will dispatch RPC requests over RabbitMQ to different services and assemble the results. I use one reply queue with one custom consumer that listens to all RPC responses and collects them with their correlation id in a simple hash map. Nothing fancy there.
This works great in a simple integration test on controller level.
Problem
When I try to do this in a web project deployed on Tomcat, Tomcat refuses to shut down. jstack and some debugging showed me that a thread is spawned to listen for the RPC response and is blocking Tomcat from shutting down gracefully. I guess this is because the thread is created at application level instead of request level and is not managed by Tomcat. When I set breakpoints in Servlet.destroy() or ServletContextListener.contextDestroyed(ServletContextEvent sce), they are not reached, so I see no way to manually clean things up.
Alternative
As an alternative, I could use a new reply queue (and a simple QueueingConsumer) for each web request. I've tested this; it works, and Tomcat shuts down as it should. But I'm wondering if this is the way to go... Can a RabbitMQ cluster deal with thousands (or even millions) of short-lived queues/consumers? I can imagine the queues aren't that big, but still... constantly broadcasting to all cluster nodes... the total memory footprint...
Question
So, in short: is it wise to create a queue for each incoming web request, or how should I set up RabbitMQ with one queue and consumer so Tomcat can shut down gracefully?
I found a solution for my problem:
The Java client creates its own threads. There is the possibility to supply your own ExecutorService when creating a new connection. By doing so in the ServletContextListener.contextInitialized() method, one can keep track of the ExecutorService and shut it down manually in the ServletContextListener.contextDestroyed() method:
executorService.shutdown();
executorService.awaitTermination(20, TimeUnit.SECONDS);
I used Executors.newCachedThreadPool(), as the threads have many short executions, and they get cleaned up after being idle for more than 60s.
This is the link to the RabbitMQ Google group thread (thanks to Michael Klishin for pointing me in the right direction).
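A minimal sketch of that listener, assuming the RabbitMQ Java client's ConnectionFactory.newConnection(ExecutorService) overload; the host and timeout values are illustrative:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class RabbitLifecycleListener implements ServletContextListener {

    private ExecutorService executorService;
    private Connection connection;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        try {
            // Cached pool: idle threads are reclaimed after roughly 60s.
            executorService = Executors.newCachedThreadPool();

            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // illustrative
            // Hand the client our executor so its consumer threads are ours to manage.
            connection = factory.newConnection(executorService);
        } catch (Exception e) {
            throw new RuntimeException("Could not connect to RabbitMQ", e);
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        try {
            if (connection != null) {
                connection.close();
            }
            // With the executor stopped, Tomcat is no longer blocked by client threads.
            executorService.shutdown();
            executorService.awaitTermination(20, TimeUnit.SECONDS);
        } catch (Exception e) {
            // best effort on shutdown
        }
    }
}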

Client Side JMS Configuration - JMS Cluster - Connects to only one server

So I wrote a program to connect to a clustered WebLogic server behind a VIP, with 4 servers and 4 queues that are all connected (I think they call them distributed...). When I run the program from my local machine and just get JMS connections, look for messages, and disconnect, it works great, and by that I mean it:
iteration #1
connects to server 1.
look for a message
disconnects
iteration #2
connects to server 2.
look for a message
disconnects
and so on.
When I run it on the server, though, the application picks one server and sticks to it. It never picks a new server, so the queues on the other servers never get worked, like a "sticky session" setup... My OS is Win7 and the server OS is Win2008r2; the JDK is identical on both machines. How is this configured client side? The server implementation uses "Apache Procrun" to run it as a service, but I haven't seen too many issues with that part...
Is there a session cookie getting written out somewhere?
Any ideas?
Thanks!
Try disabling 'Server Affinity' on the JMS Connection Factory. If you are using the Default Connection Factory, define your own and disable Server Affinity on it.
EDIT:
Server Affinity is a server-side setting, but it controls how messages are routed to consumers after a WebLogic JMS server receives the message. The other option is to use round-robin DNS and send to only one hostname that resolves to a different IP (managed server) each time, such that each connection goes to a different server.
I'm pretty sure this is the setting you're looking for :)
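For reference, a bare-bones client-side connection through the cluster address might look roughly like this (a sketch, not code from the question; the JNDI names, hosts, and queue are placeholders). With Server Affinity disabled, successive connections created through the cluster URL should be distributed across the managed servers:

import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.naming.Context;
import javax.naming.InitialContext;

public class WebLogicJmsClient {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        // Cluster address: list the managed servers (or the VIP/DNS name in front of them).
        env.put(Context.PROVIDER_URL, "t3://server1:7001,server2:7001,server3:7001,server4:7001");

        InitialContext ctx = new InitialContext(env);
        // JNDI names are placeholders for your own connection factory and distributed queue.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/MyDistributedQueue");

        Connection connection = cf.createConnection();
        connection.start();
        // ... create a session, look for a message on the queue, then disconnect,
        // as in the iteration loop described above
        connection.close();
        ctx.close();
    }
}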

JMS catching when a JMS server goes away

When there is a network problem which results in the client being disconnected from the JMS server, is there some way to detect the problem other than waiting for the next JMS message send to fail?
You can register an ExceptionListener with the JMS Connection using Connection.setExceptionListener(ExceptionListener).
The ExceptionListener will get notified of more problems than an actual disconnection, so you may have to filter the JMSException that gets passed to the listener.
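For example (a generic JMS sketch; how you filter and react inside onException is up to you):

import javax.jms.Connection;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;

public class ConnectionMonitor {

    public static void monitor(Connection connection) throws JMSException {
        connection.setExceptionListener(new ExceptionListener() {
            @Override
            public void onException(JMSException e) {
                // Called asynchronously by the provider. Filter for connection loss,
                // e.g. by error code or the type of the linked exception, then
                // trigger your reconnect logic.
                System.err.println("JMS connection problem: " + e.getMessage());
            }
        });
    }
}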
ExceptionListener isn't necessarily enough. You also need to catch exceptions on any JMS calls you make (sending messages, for example). See Reconnecting JMS listener to JBossMQ
If you are running on MQ and looking to solve this problem, install a local MQ instance. It means more licensing, but you will get guaranteed delivery if your main corporate MQ goes down.
Another option: use Spring and let the framework handle recovering the connection.
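If you go the Spring route, a rough sketch using DefaultMessageListenerContainer shows the idea (the connection factory and destination name below are placeholders); the container re-establishes the connection itself when the broker comes back:

import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageListener;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class SpringRecoveryExample {
    public static void main(String[] args) {
        // Placeholder factory; substitute your provider's ConnectionFactory.
        ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("example.queue"); // placeholder
        container.setMessageListener(new MessageListener() {
            @Override
            public void onMessage(Message message) {
                // handle the message; reconnection is the container's job
            }
        });
        // Retry the connection every 5 seconds after a failure.
        container.setRecoveryInterval(5000);

        container.afterPropertiesSet();
        container.start();
    }
}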
