In our architecture, the JMS publisher may continue to work (and generate new messages) even if the connection to the local network is lost. Is it possible with JMS to make the publisher server tolerant to network or broker outages, such that:
the publish call does not block the application, even if the broker is not available;
messages published during the outage are delivered after the network connection is restored?
As far as I understand, this can be done with an embedded (local) broker on each publishing machine. If that is the only way, are there any non-obvious problems with that topology (performance, maintenance, etc.)? Will the local broker be tolerant to outages by itself?
I haven't tried this, but it seems you could use local failover to reduce impedance:
With ActiveMQ you can configure a failover transport:
failover:(tcp://primary:61616,tcp://secondary:61616)?randomize=false
To try and draw this:
client +---> primary: network broker <-------+
       |                                     |
       +---> secondary: embedded broker -----+
Here primary would be your network broker, and secondary would be the locally embedded broker with a bridge to the primary broker. This seems like it would work well when the client publishes a lot; I'm not sure whether it would be any better for subscribers than the solution put forward by @Biju, illustrated below:
client +---> secondary: embedded broker ------> primary: network broker
For example here is my embedded broker (which is usually non-persistent).
<bean id="activeMQBroker" class="org.apache.activemq.broker.BrokerService">
<property name="transportConnectors">
<list>
<bean id="brokerConnection" class="org.apache.activemq.broker.TransportConnector">
<property name="connectUri">
<bean id="brokerURI" class="java.net.URI">
<constructor-arg value="tcp://localhost:61616" />
</bean>
</property>
</bean>
</list>
</property>
<property name="persistent" value="true" />
</bean>
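On the publishing side, the failover URI from above would typically be passed straight to the connection factory. A minimal sketch (assuming the embedded broker listens on localhost:61616 as configured above, and primary is the network broker):

import javax.jms.Connection;
import javax.jms.JMSException;

import org.apache.activemq.ActiveMQConnectionFactory;

public class PublisherConnectionFactory {

    public Connection connect() throws JMSException {
        // primary = the network broker, localhost = the embedded broker;
        // randomize=false makes the client always try the primary first.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://primary:61616,tcp://localhost:61616)?randomize=false");
        return factory.createConnection();
    }
}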
The only way that I can think of is along the lines you have suggested:
Have a local embedded broker and provide a bridge from this embedded broker to a network-based broker. Even the local one can go down, though, so you may have to publish transactionally across your resources (database and JMS infrastructure).
Do not publish directly; instead have an abstraction which buffers the messages (to a database, a file, or, as above, to a local embedded JMS broker) and provide a bridge from the buffer to the JMS queue; see the rough sketch below.
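As a rough sketch of the second option, the buffering abstraction could look something like the following. JmsSender is a hypothetical wrapper around the real JMS publishing code, and the in-memory queue stands in for whatever durable buffer (database, file, embedded broker) you actually use:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BufferedPublisher {

    /** Hypothetical abstraction over the real JMS publishing code. */
    public interface JmsSender {
        void send(String message) throws Exception;
    }

    private final BlockingQueue<String> buffer = new LinkedBlockingQueue<>();
    private final JmsSender sender;

    public BufferedPublisher(JmsSender sender) {
        this.sender = sender;
        Thread forwarder = new Thread(this::forwardLoop, "jms-forwarder");
        forwarder.setDaemon(true);
        forwarder.start();
    }

    /** Called by the application; never blocks on the broker. */
    public void publish(String message) {
        buffer.add(message);
    }

    private void forwardLoop() {
        while (!Thread.currentThread().isInterrupted()) {
            String message = null;
            try {
                message = buffer.take();
                sender.send(message);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } catch (Exception e) {
                // Broker unreachable: re-queue the message and retry after a pause
                // (ordering is not strictly preserved in this sketch).
                buffer.add(message);
                try {
                    TimeUnit.SECONDS.sleep(1);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }
}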
A distributed architecture of queue managers / brokers is very common in cases like you describe.
The exact configuration depends on the specific product you use, but it's usually well documented and easy to manage.
Regarding local redundancy, you could use two such queue managers in a fault-tolerant configuration (again, the exact method of creating fault-tolerant clusters is product-dependent), but this appears to be somewhat of an overkill.
JMS standardizes only the API of the message queue provider; everything else, including clustering and fault tolerance, is provider-specific.
I'm trying to configure Apache Camel with ActiveMQ to bridge between a queue on my ActiveMQ server and a queue on a remote ActiveMQ server. So far so simple. Here is the relevant bit of my camel.xml:
<camelContext xmlns="http://camel.apache.org/schema/spring" id="camel">
    <route>
        <from uri="local:Request"/>
        <to uri="remote:Request"/>
    </route>
</camelContext>

<bean id="local" class="org.apache.activemq.camel.component.ActiveMQComponent">
    <property name="brokerURL" value="tcp://localhost:61616"/>
</bean>

<bean id="remote" class="org.apache.activemq.camel.component.ActiveMQComponent">
    <property name="brokerURL" value="tcp://remote:61616"/>
</bean>
I've tested this on two servers I control, and it works fine. However, the remote server I'm trying to connect to is one I don't control, and (probably due to a badly-written bespoke authorization implementation) it is exhibiting a behaviour that doesn't seem to work nicely with Camel.
The issue is this: the remote server relies on all Producer instances that connect to it being for a specified destination, whereas by default, Camel seems to create an unidentified producer (JMS reference for context). If an unidentified producer is created, this remote server simply terminates the connection.
So the question I have is this: is there a way to force Camel to not use an unidentified producer, preferably without having to modify the Camel source code?
What you describe about specified destinations sounds like the default endpoint of ProducerTemplate. I have no idea if this really creates the producer as you like, but you could give it a try.
Create a Java bean that uses a ProducerTemplate to send the messages to the remote broker. Create the ProducerTemplate with a default endpoint so that you don't need to specify the endpoint to send messages.
Then change your route to use the bean as the sender:
.to("bean:mySenderBean")
I have a requirement to implement multiple MDB listeners listening to a single queue. As the load on the queue increases, one listener is not enough to handle that load. I would like to know the best way to achieve this.
a. I can create similar MDB classes and deploy them on the WebSphere server.
b. Any other way using any configuration?
Could you please suggest the correct approach: is it possible to configure the listeners dynamically and enable them as and when needed, or is (a) the only way to achieve this?
You are missing a crucial point. It's the container that instantiates one or more beans to handle client requests. The whole world of enterprise beans has scalability as its key point.
Basically, you don't need to do anything else than design and deploy your bean. The container will do the rest.
If you are using MDBs as defined in the Java EE spec with the @MessageDriven annotation, then it is up to the server container to manage the actual instantiation and scaling of these beans. I am not that familiar with WebSphere, but most servers have a notion of EJB pooling that roughly translates to a thread pool, giving you parallel execution out of the box. This way, the server has a set of instances ready to process the messages in your queue. Each bean instance will only be active for the time required to execute its onMessage method; after that it is cleaned up and returned to the pool. So let's say you have a pool of MDBs of size 20. If you have more than 20 messages waiting in the queue, the server will use all of the available instances and process 20 messages simultaneously.
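For illustration, a bare-bones MDB might look like this; the queue name jms/MyQueue is a placeholder:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/MyQueue")
})
public class MyQueueListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            // The container invokes onMessage on a pooled instance, so several
            // messages can be processed in parallel without any extra code.
            String body = ((TextMessage) message).getText();
            // ... process the message ...
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}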
In Wildfly/JBoss for example, you manage your EJB pools using the EJB subsystem and corresponding pool settings.
<subsystem xmlns="urn:jboss:domain:ejb3:4.0">
    <!-- omitted for brevity... -->
    <mdb>
        <resource-adapter-ref resource-adapter-name="${ejb.resource-adapter-name:activemq-ra.rar}"/>
        <bean-instance-pool-ref pool-name="mdb-strict-max-pool"/>
    </mdb>
    <pools>
        <bean-instance-pools>
            <strict-max-pool name="mdb-strict-max-pool" derive-size="from-cpu-count"
                             instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/>
        </bean-instance-pools>
    </pools>
    <!-- omitted for brevity... -->
</subsystem>
Here we specify that message-driven beans should use a pool named mdb-strict-max-pool, which derives its size from the number of CPUs on the system. You can also specify absolute values, e.g. max-pool-size="20".
All this is only relevant if you are running the queue on a single server instance. If you are really building a message-intensive application, chances are you will need distributed messaging, with a dedicated message broker and multiple processing instances. While many servers support such scenarios (e.g. a WildFly ActiveMQ cluster), that is really a topic for another discussion.
For more info, have a look at the MDB tutorial and your server documentation.
Happy hacking.
I'm working on an application that uses a couple of JMS queues to send/receive updates to/from an external system. In order to test my application I'm using Mockrunner, specifically its JMS module.
I'm facing a strange behavior: when I start my application the CPU skyrockets to 100%, and from thread dumps I can see that the main cause is my JMS listeners, which appear to be receiving empty messages, producing log entries like:
Consumer ... did not receive a message
Now I'm trying to understand whether the issue is a bad interaction between my app and Mockrunner, or a configuration error.
The relevant parts of the configuration are:
<bean id="destinationManager" factory-bean="mockRunnerJMSObjectFactory" factory-method="getDestinationManager" />
<bean id="mockJmsConnectionFactory" factory-bean="mockRunnerJMSObjectFactory" factory-method="createMockConnectionFactory" lazy-init="true"/>
and the listeners that cause the CPU to spin indefinitely are:
<jms:listener-container concurrency="5" connection-factory="mockJmsConnectionFactory"
                        destination-type="queue" message-converter="myMessageConverter"
                        acknowledge="transacted">
    <jms:listener id="myListener" destination="myQueue" ref="myConsumer" method="consume"/>
</jms:listener-container>
<bean id="myConsumer"... />
UPDATE
I opened an issue on the Mockrunner project; you can see it here.
After some investigation I found out that the problem lies in a bad interaction with Spring's DefaultMessageListenerContainer. That listener has a polling-based implementation and, given that the mocked infrastructure answers requests very quickly, it causes the CPU to overload. I patched Mockrunner by adding an ugly thread sleep in the response method; maybe this will be fixed sooner or later.
I currently have a working two-broker JDBC MasterSlave configuration, and the next step for me is to implement a scheduler with failover. I've looked around and haven't seen any information about this, and was curious to see if this is possible or if I should try a different approach.
Currently, I have the two brokers using the same dataDirectory both within the broker tag and the JDBCPersistenceAdapter tag. However, within that data directory ActiveMQ creates two separate scheduler folders. I cannot seem to force it to use the same one, so failover with scheduling isn't working.
I've also tried the KahaDB approach with the same criteria, and that doesn't seem to work either.
Another option would be for the scheduler information to be pushed to the database (in this case, Oracle) and picked up from there (not sure if that's possible).
Here is a basic overview of what I need:
Master and slave brokers up and running, using the same dataDirectory (let's say broker1 and broker2)
If I send a request to process messages through master at a certain time and master fails, slave should be able to pick up the scheduler information from master (this is where I'm stuck)
Slave should be processing these messages at the scheduled time
activemq.xml (relevant parts)
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="b1" useJmx="true"
        persistent="true" schedulerSupport="true">

    <!-- kahaDB persistenceAdapter -->
    <persistenceAdapter>
        <kahaDB directory="${activemq.data}/kahadb" enableIndexWriteAsync="false"
                ignoreMissingJournalfiles="true" checkForCorruptJournalFiles="true"
                checksumJournalFiles="true"/>
    </persistenceAdapter>

    <!-- JDBC persistenceAdapter -->
    <persistenceAdapter>
        <jdbcPersistenceAdapter dataDirectory="${activemq.data}" dataSource="#oracle-ds"/>
    </persistenceAdapter>
Can someone possibly point me in the right direction? I'm fairly new to ActiveMQ. Thanks in advance!
If anyone is curious, adding the schedulerDirectory property to the broker tag seems to be working fine. So my broker tag in activemq.xml now looks like this:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1"
        dataDirectory="${activemq.data}" useJmx="true" persistent="true"
        schedulerSupport="true" schedulerDirectory="${activemq.data}/broker1/scheduler"/>
You have probably figured out what you need to do to make this work, but for the sake of other folks like me who were looking for the answer: if you're trying to make failover work for scheduled messages with the default KahaDB store (as of v5.13.2) and a shared file system, you will need to do the following:
Have a folder in the shared file system defined as the dataDirectory attribute in the broker tag (/shared/folder in the example below).
Use the same brokerName for all nodes in that master/slave cluster (myBroker1 in the example below).
Example:
<broker xmlns="http://activemq.apache.org/schema/core"
brokerName="myBroker1"
dataDirectory="/shared/folder"
schedulerSupport="true">
I managed to create a simple WebSocket application with Spring 4 and STOMP. See my last question here.
Then I tried to use a remote message broker (ActiveMQ). I just started the broker and changed
registry.enableSimpleBroker("/topic");
to
registry.enableStompBrokerRelay("/topic");
and it worked.
The question is: how is the broker configured? I understand that in this case the application automagically finds the broker on localhost and the default port, but what if I need to point the app to some other broker on another machine?
The enableStompBrokerRelay method returns a convenient Registration instance that exposes a fluent API.
You can use this fluent API to configure your Broker relay:
registry.enableStompBrokerRelay("/topic").setRelayHost("host").setRelayPort("1234");
You can also configure various properties, like login/pass credentials for your broker, etc.
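Put together, a Java configuration along these lines could look roughly as follows; the host, port, and credentials are placeholders mirroring the XML example below:

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.AbstractWebSocketMessageBrokerConfigurer;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/foo").withSockJS();
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("relayhost")
                .setRelayPort(1234)
                .setClientLogin("clientlogin")
                .setClientPasscode("clientpass")
                .setSystemLogin("syslogin")
                .setSystemPasscode("syspass");
    }
}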
Same with XML Configuration:
<websocket:message-broker>
    <websocket:stomp-endpoint path="/foo">
        <websocket:handshake-handler ref="myHandler"/>
        <websocket:sockjs/>
    </websocket:stomp-endpoint>
    <websocket:stomp-broker-relay prefix="/topic,/queue"
                                  relay-host="relayhost" relay-port="1234"
                                  client-login="clientlogin" client-passcode="clientpass"
                                  system-login="syslogin" system-passcode="syspass"
                                  heartbeat-send-interval="5000" heartbeat-receive-interval="5000"
                                  virtual-host="example.org"/>
</websocket:message-broker>
See the StompBrokerRelayRegistration javadoc for more details on properties and default values.