ActiveMQ Scheduler Failover with JDBC MasterSlave

I currently have a working two-broker JDBC MasterSlave configuration, and the next step for me is to implement a scheduler with failover. I've looked around and haven't seen any information about this, and was curious to see if this is possible or if I should try a different approach.
Currently, I have the two brokers using the same dataDirectory both within the broker tag and the JDBCPersistenceAdapter tag. However, within that data directory ActiveMQ creates two separate scheduler folders. I cannot seem to force it to use the same one, so failover with scheduling isn't working.
I've also tried the KahaDB approach with the same criteria, and that doesn't seem to work either.
Another option would be for the scheduler information to be pushed to the database (in this case, Oracle) and picked up from there (not sure if that's possible).
Here is a basic overview of what I need:
Master and slave brokers up and running, using the same dataDirectory (let's say broker1 and broker2)
If I send a request to process messages through master at a certain time and master fails, slave should be able to pick up the scheduler information from master (this is where I'm stuck)
Slave should be processing these messages at the scheduled time
activemq.xml (relevant parts)
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="b1" useJmx="true"
        persistent="true" schedulerSupport="true">

    <!-- both attempts shown below; only one persistenceAdapter was active at a time -->

    <!-- KahaDB persistenceAdapter -->
    <persistenceAdapter>
        <kahaDB directory="${activemq.data}/kahadb" enableIndexWriteAsync="false"
                ignoreMissingJournalfiles="true" checkForCorruptJournalFiles="true"
                checksumJournalFiles="true"/>
    </persistenceAdapter>

    <!-- JDBC persistenceAdapter -->
    <persistenceAdapter>
        <jdbcPersistenceAdapter dataDirectory="${activemq.data}" dataSource="#oracle-ds"/>
    </persistenceAdapter>
Can someone possibly point me in the right direction? I'm fairly new to ActiveMQ. Thanks in advance!

If anyone is curious, adding the schedulerDirectory property to the broker tag seems to be working fine. So my broker tag in activemq.xml now looks like this:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1"
        dataDirectory="${activemq.data}" useJmx="true" persistent="true"
        schedulerSupport="true" schedulerDirectory="${activemq.data}/broker1/scheduler"/>
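For anyone wondering what the client side looks like: scheduled messages are produced by setting the standard AMQ_SCHEDULED_DELAY property on an ordinary message. A minimal sketch (broker hostnames and the queue name are placeholders; the failover transport is assumed so the producer follows whichever broker currently holds the lock):

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ScheduledMessage;

public class SchedulePublisher {
    public static void main(String[] args) throws Exception {
        // The failover: transport reconnects to whichever broker is currently master
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616)");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("TEST.QUEUE"));

        TextMessage message = session.createTextMessage("process me later");
        // Ask the broker-side scheduler to deliver this message after 60 seconds
        message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 60_000L);
        producer.send(message);
        connection.close();
    }
}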

You have probably figured out what you need to do to make this work, but for the sake of other folks who were (or are) looking for the answer: if you're trying to make failover work for scheduled messages with the default KahaDB store (as of v5.13.2) and a shared file system, you will need to do the following:
Have a folder on the shared file system defined as the dataDirectory attribute in the broker tag (/shared/folder in the example below).
Use the same brokerName for all nodes in that master/slave cluster (myBroker1 in the example below).
Example:
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="myBroker1"
        dataDirectory="/shared/folder"
        schedulerSupport="true">

Related

Hazelcast Client Only Configuration

Is it possible to create a client-only Hazelcast node? We have Hazelcast embedded in our Java applications and they use a common hazelcast.xml. This works fine; however, when one of our JVMs is distressed, it causes the other clustered JVMs to slow down and have issues. I want to run a Hazelcast cluster outside of our application stack and update the common hazelcast.xml to point to the external cluster. I have tried various config options, but the application JVMs always want to start a listener and become members. I realize I may be asking for something that defeats the purpose of Hazelcast, but I thought it might be possible to configure an instance to be client-only.
Thanks.
You can change your application to use Hazelcast client instances, but it requires a code change.
Instead of
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
you'll need to initialize your instance by requesting a client one:
HazelcastInstance hz = HazelcastClient.newHazelcastClient();
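If the client should connect to the external cluster rather than the localhost defaults, you can pass a ClientConfig with the member addresses. A minimal sketch (the addresses are placeholders):

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

public class ClientOnlyNode {
    public static void main(String[] args) {
        ClientConfig clientConfig = new ClientConfig();
        // Addresses of the external cluster members (placeholders)
        clientConfig.getNetworkConfig().addAddress("10.0.0.1:5701", "10.0.0.2:5701");
        HazelcastInstance hz = HazelcastClient.newHazelcastClient(clientConfig);
        // hz.getMap(...), hz.getQueue(...) etc. work exactly as with an embedded member
    }
}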
Another option is to keep the code unchanged and configure your embedded members to be "lite" ones, so they don't own any partitions (they don't store cluster data).
<hazelcast xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.hazelcast.com/schema/config
                               http://www.hazelcast.com/schema/config/hazelcast-config-4.0.xsd">
    <!--
    ===== HAZELCAST LITE MEMBER CONFIGURATION =====
    Configuration element's name is <lite-member>. When you want to use a Hazelcast member as a lite member,
    set this element's "enabled" attribute to true in that member's XML configuration. Lite members do not store
    data and are used mainly to execute tasks and register listeners. They do not have partitions.
    -->
    <lite-member enabled="true"/>
</hazelcast>
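If you build the member in code rather than from hazelcast.xml, the same thing can be done programmatically; a sketch (Config.setLiteMember is available in Hazelcast 3.6+):

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class LiteMemberNode {
    public static void main(String[] args) {
        Config config = new Config();
        config.setLiteMember(true); // joins the cluster but owns no partitions
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}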

Multiple MDB listeners reading from the same queue

I have a requirement to implement multiple MDB listeners listening to a single queue. As the load on the queue increases, one listener is not enough to handle it. I would like to know the best way to achieve this:
a. I can create similar MDB classes and deploy them on the WebSphere server.
b. Is there another way, using configuration only?
Could you please advise on the correct approach: is it possible to configure listeners dynamically and enable them as and when needed, or is option (a) the only way to achieve this?
You are missing a crucial point: it's the container that instantiates one or more beans to handle client requests. The whole world of enterprise beans has scalability as its key point.
Basically, you don't need to do anything other than design and deploy your bean. The container will do the rest.
If you are using MDBs as defined in the Java EE spec, using the @MessageDriven annotation, then it is up to the server container to manage the actual instantiation and scaling of these beans. I am not that familiar with WebSphere, but most servers have a notion of EJB pooling that roughly translates to a thread pool, giving you parallel execution out of the box. This way, the server has a set of instances ready to process the messages in your queue. Each bean instance will only be active for the time required to execute its onMessage method; after that it is cleaned up and returned to the pool. So let's say you have a pool of MDBs of size 20: if there are more than 20 messages waiting in the queue, the server will use all of the available instances and process 20 messages simultaneously.
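As a minimal sketch of such a bean (the queue name and activation properties are illustrative and vary by server), the container scales this through its pool with no extra code:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// The container instantiates as many of these as its pool allows,
// so messages from the queue are consumed in parallel.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/orders"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class OrderListener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                System.out.println("Processing: " + ((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            throw new RuntimeException(e); // typically triggers redelivery
        }
    }
}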
In Wildfly/JBoss for example, you manage your EJB pools using the EJB subsystem and corresponding pool settings.
<subsystem xmlns="urn:jboss:domain:ejb3:4.0">
    <!-- omitted for brevity... -->
    <mdb>
        <resource-adapter-ref resource-adapter-name="${ejb.resource-adapter-name:activemq-ra.rar}"/>
        <bean-instance-pool-ref pool-name="mdb-strict-max-pool"/>
    </mdb>
    <pools>
        <bean-instance-pools>
            <strict-max-pool name="mdb-strict-max-pool" derive-size="from-cpu-count"
                             instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/>
        </bean-instance-pools>
    </pools>
    <!-- omitted for brevity... -->
</subsystem>
Here we specify that message-driven beans should use a pool named mdb-strict-max-pool, which derives its size from the number of CPUs on the system. You can also specify absolute values, e.g. max-pool-size="20".
All this is only relevant if you are running the queue on a single server instance. If you are really building a message-intensive application, chances are you will need distributed messaging, with a dedicated message broker and multiple processing instances. While many servers support such scenarios (e.g. a Wildfly ActiveMQ cluster), that is really a topic for another discussion.
For more info, have a look at the MDB tutorial and your server documentation.
Happy hacking.

Vertx Hazelcast: Cluster Problems

I'm working with Vert.x and Hazelcast to distribute my verticles across the network.
Now I have the problem that my co-worker also uses clustered verticles with the Hazelcast cluster manager.
Is there a possibility to avoid our verticles seeing each other, to prevent side effects?
You can define Hazelcast Cluster Groups in your cluster.xml file.
Here's the manual section related to it:
http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#creating-cluster-groups
If you use multicast (the default config) for discovery, you can redefine the group name and password. Apart from that, you can choose any other discovery option supported by the Hazelcast version embedded in Vert.x:
http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#discovering-cluster-members
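If you configure the cluster manager programmatically instead of through cluster.xml, giving each team's cluster its own group keeps the verticles apart; a sketch for Hazelcast 3.x inside Vert.x (the group name and password are placeholders):

import com.hazelcast.config.Config;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class IsolatedCluster {
    public static void main(String[] args) {
        Config hazelcastConfig = new Config();
        // A distinct group per team means the two clusters never join each other
        hazelcastConfig.getGroupConfig().setName("my-team").setPassword("my-team-pass");
        HazelcastClusterManager mgr = new HazelcastClusterManager(hazelcastConfig);
        VertxOptions options = new VertxOptions().setClusterManager(mgr);
        Vertx.clusteredVertx(options, res -> {
            if (res.succeeded()) {
                Vertx vertx = res.result(); // deploy verticles here
            }
        });
    }
}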
Old question, but still valid; here's a simple answer:
If you want to limit your Vert.x system to a single server, i.e. not have the event bus leak out across your local network, the simplest thing to do is create a local copy of Hazelcast's cluster.xml on your classpath, i.e. copy/edit the vert.x source (see git):
vertx-hazelcast/src/main/resources/default-cluster.xml
into a new file in your vertx project
src/main/resources/cluster.xml
The required change is to the <multicast> stanza disabling that feature:
<hazelcast ...>
  ...
  <network>
    ...
    <join>
      ...
      <multicast enabled="false">
        ...
      </multicast>
      <tcp-ip enabled="true">
        <interface>127.0.0.1</interface>
      </tcp-ip>
    </join>
  </network>
</hazelcast>

Hazelcast is not clustered with other IP

I upgraded Hazelcast from 2.x to 3.3.3, but when I start two servers on different IPs, they do not form a cluster.
It worked when I was using 2.x. The console should print something like this:
Members [1] {
Member [172.29.110.114]:5701 this
}
I tried using
Hazelcast.newHazelcastInstance()
and
Hazelcast.newHazelcastInstance(config)
to get the HazelcastInstance for the map and other distributed objects. When I used the second one, with the config as a parameter, the message above was printed but the other IP's node never showed up. When I used the first one, without the config parameter, I didn't even see that message in the console.
Does anyone know what's going on here? Many thanks.
You need to enable multicast in your Hazelcast configuration. Here is how to enable it with the XML configuration (i.e. hazelcast.xml):
<hazelcast xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.hazelcast.com/schema/config http://www.hazelcast.com/schema/config/hazelcast-config-3.0.xsd">
    <network>
        <join><multicast enabled="true"/></join>
    </network>
</hazelcast>
Then create your config this way (hazelcast.xml should be on the classpath):
Config config = new ClasspathXmlConfig("hazelcast.xml");
Finally, I found out what was going on: it was the firewall. After I turned it off, everything worked. Just sharing my experience, and thanks to Arbi for the help.
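If turning the firewall off isn't an option, a common workaround is to skip multicast entirely and list the members explicitly over TCP/IP; a minimal sketch (the second address is a placeholder for the other node):

import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class TcpIpMember {
    public static void main(String[] args) {
        Config config = new Config();
        JoinConfig join = config.getNetworkConfig().getJoin();
        join.getMulticastConfig().setEnabled(false); // no multicast packets to be blocked
        join.getTcpIpConfig().setEnabled(true)
            .addMember("172.29.110.114")   // IP from the question
            .addMember("172.29.110.115");  // placeholder for the second node
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}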

JMS fault tolerant asynchronous publisher

In our architecture, the JMS publisher may continue to work (and generate new messages) even if the connection to the local network is lost. Is it possible with JMS to make the publishing server tolerant to network or broker outages, such that:
the publish call does not block the application, even if the broker is not available;
messages published during the outage are delivered after the network connection is restored?
As far as I understand, this can be done with an embedded (local) broker on each publishing machine. If that is the only way, are there any non-obvious problems with that topology (performance, maintenance, etc.)? Will the local broker be tolerant to outages by itself?
I've not tried this but it seems like you could use local failover to reduce impedance:
With ActiveMQ you can configure a failover transport:
failover:(tcp://primary:61616,tcp://secondary:61616)?randomize=false
To try and draw this:
client +---> primary: network broker <-------+
       |                                     |
       +---> secondary: embedded broker -----+
Here primary would be your network broker, and secondary would be the locally embedded broker with a bridge to the primary broker. This seems like it would work well when the client publishes a lot; I'm not sure whether it would be any better for subscribers than the solution put forward by @Biju, illustrated below:
client +---> secondary: embedded broker ------> primary: network broker
For example, here is my embedded broker (which is usually non-persistent).
<bean id="activeMQBroker" class="org.apache.activemq.broker.BrokerService">
    <property name="transportConnectors">
        <list>
            <bean id="brokerConnection" class="org.apache.activemq.broker.TransportConnector">
                <property name="connectUri">
                    <bean id="brokerURI" class="java.net.URI">
                        <constructor-arg value="tcp://localhost:61616" />
                    </bean>
                </property>
            </bean>
        </list>
    </property>
    <property name="persistent" value="true" />
</bean>
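For comparison, the same embedded broker can be started programmatically, including the store-and-forward bridge to the network broker described above; a sketch ("primary" is a placeholder hostname):

import org.apache.activemq.broker.BrokerService;

public class EmbeddedBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setPersistent(true); // buffer messages on disk during an outage
        broker.addConnector("tcp://localhost:61616");
        // Bridge: forward locally buffered messages once the network broker is reachable
        broker.addNetworkConnector("static:(tcp://primary:61616)");
        broker.start();
    }
}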
The only way that I can think of is along the lines you have suggested:
Have a local embedded broker and provide a bridge from this embedded broker to a network-based broker. Even the local one can go down, though, so you may have to publish transactionally across your resources (database and JMS infrastructure).
Do not publish directly; instead have an abstraction that buffers messages, to a database, a file, or, as above, to a local embedded JMS broker, and provide a bridge from the buffer to the JMS queue.
A distributed architecture of queue managers/brokers is very common in cases such as you describe. The exact configuration depends on the specific product you use, but it's usually well documented and easy to manage.
Regarding local redundancy, you may use two such queue managers in a fault-tolerant configuration (again, the exact method of creating fault-tolerant clusters is product-dependent), but this appears to be somewhat of an overkill.
JMS standardizes only the API of the message queue provider; other aspects, such as clustering and fault tolerance, are left to each provider.
