Vertx Hazelcast: Cluster Problems - java

I'm working with Vert.x and Hazelcast to distribute my verticles across the network.
Now I have the problem that my co-worker also uses clustered verticles with the Hazelcast cluster manager.
Is there a way to prevent our verticles from seeing each other, to avoid side effects?

You can define Hazelcast Cluster Groups in your cluster.xml file.
Here's the manual section related to it:
http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#creating-cluster-groups

If you use Multicast (the default config) for discovery, you can redefine the group name and password. Apart from that, you can choose any other discovery option supported by the Hazelcast version used inside Vert.x:
http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#discovering-cluster-members
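If you prefer to configure the group programmatically when starting Vert.x, here is a minimal sketch, assuming Vert.x 3 with vertx-hazelcast and Hazelcast 3.x (where group name/password live in GroupConfig); the group name and password values are just placeholders:
import com.hazelcast.config.Config;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class MyClusterStarter {
    public static void main(String[] args) {
        // Use a separate cluster group so other Hazelcast nodes on the network are ignored
        Config hazelcastConfig = new Config();
        hazelcastConfig.getGroupConfig()
                .setName("my-team-cluster")      // placeholder group name
                .setPassword("my-team-secret");  // placeholder password

        HazelcastClusterManager clusterManager = new HazelcastClusterManager(hazelcastConfig);
        VertxOptions options = new VertxOptions().setClusterManager(clusterManager);

        Vertx.clusteredVertx(options, res -> {
            if (res.succeeded()) {
                Vertx vertx = res.result();
                // deploy your verticles here
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}
Your co-worker would use a different group name/password, so the two clusters never merge.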

Old question, but still valid; here's a simple answer:
If you want to limit your Vert.x system to a single server, i.e. not have the event bus leak out across your local network, the simplest thing to do is to put a local copy of Hazelcast's cluster.xml on your classpath, i.e. copy/edit the file from the Vert.x source (see git):
vertx-hazelcast/src/main/resources/default-cluster.xml
into a new file in your Vert.x project:
src/main/resources/cluster.xml
The required change is in the <multicast> stanza, disabling that feature:
<hazelcast ...>
  ...
  <network>
    ...
    <join>
      ...
      <multicast enabled="false">
        ...
      </multicast>
      <tcp-ip enabled="true">
        <interface>127.0.0.1</interface>
      </tcp-ip>
    </join>
  </network>
</hazelcast>
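With that cluster.xml on the classpath, no code change should be needed: the default HazelcastClusterManager loads cluster.xml from the classpath (falling back to the bundled default-cluster.xml otherwise). A minimal sketch of starting the clustered instance, assuming Vert.x 3 with vertx-hazelcast on the classpath:
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class LocalOnlyCluster {
    public static void main(String[] args) {
        // The Hazelcast cluster manager picks up cluster.xml from the classpath,
        // so the multicast-disabled, 127.0.0.1-only config above applies.
        Vertx.clusteredVertx(new VertxOptions(), res -> {
            if (res.succeeded()) {
                Vertx vertx = res.result();
                // event bus traffic now stays on the local machine
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}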

Related

Hazelcast Client Only Configuration

Is it possible to create a client-only Hazelcast node? We have Hazelcast embedded in our Java applications and they use a common hazelcast.xml. This works fine, but when one of our JVMs is distressed, it causes the other clustered JVMs to slow down and have issues. I want to run a Hazelcast cluster outside of our application stack and update the common hazelcast.xml to point to the external cluster. I have tried various config options, but the application JVMs always want to start a listener and become members. I realize I may be asking for something that defeats the purpose of Hazelcast, but I thought it might be possible to configure an instance to be a client only.
Thanks.
You can change your application to use Hazelcast client instances, but it requires a code change.
Instead of
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
you'll need to initialize your instance by requesting a client one:
HazelcastInstance hz = HazelcastClient.newHazelcastClient();
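To point the client at your external cluster rather than the defaults, you can pass a ClientConfig. A small sketch; the member addresses below are placeholders for your external cluster:
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

public class ClientOnlyExample {
    public static void main(String[] args) {
        ClientConfig clientConfig = new ClientConfig();
        // Addresses of the external cluster members (placeholders)
        clientConfig.getNetworkConfig().addAddress("10.0.0.1:5701", "10.0.0.2:5701");

        // This JVM connects as a client only; it never becomes a data-owning member
        HazelcastInstance hz = HazelcastClient.newHazelcastClient(clientConfig);
        System.out.println(hz.getMap("example").size());
    }
}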
Another option is to keep the code unchanged and configure your embedded members as "lite" members, so they don't own any partitions (they don't store cluster data).
<hazelcast xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.hazelcast.com/schema/config
                               http://www.hazelcast.com/schema/config/hazelcast-config-4.0.xsd">
    <!--
        ===== HAZELCAST LITE MEMBER CONFIGURATION =====
        The configuration element's name is <lite-member>. When you want to use a Hazelcast member as a lite member,
        set this element's "enabled" attribute to true in that member's XML configuration. Lite members do not store
        data and are used mainly to execute tasks and register listeners. They do not have partitions.
    -->
    <lite-member enabled="true"/>
</hazelcast>
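The same thing can be done in code if you build the member config programmatically; a sketch, assuming a Hazelcast version (3.6+) where Config exposes setLiteMember:
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class LiteMemberExample {
    public static void main(String[] args) {
        Config config = new Config();
        config.setLiteMember(true);  // joins the cluster but owns no partitions

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}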

How to create a NetworkPolicy using the Fabric8 Java client for Kubernetes

I am reading the Kubernetes docs here https://kubernetes.io/docs/concepts/services-networking/network-policies/
I would assume there is an equivalent object for a NetworkPolicy, but I didn't find one in the source code or any examples setting the network policy on groups of pods.
Am I looking at the right place?
Here is an example of creating a NetworkPolicy using the fabric8 kubernetes client:
https://github.com/fabric8io/kubernetes-client/pull/976
To select a group of pods you can use the PodSelector in NetworkPolicySpec.
There’s a handler for it in https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/handlers/NetworkPolicyHandler.java, so I guess it’s supported. The actual NetworkPolicy class seems to be in a dependency library, kubernetes-model.
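For reference, here is a rough sketch of building and creating a NetworkPolicy with the fabric8 builders. Exact package names vary between client versions (e.g. ...api.model.networking vs ...networking.v1), so treat the imports as an assumption; the policy name, labels and namespace are placeholders:
import io.fabric8.kubernetes.api.model.networking.NetworkPolicy;
import io.fabric8.kubernetes.api.model.networking.NetworkPolicyBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class NetworkPolicyExample {
    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            NetworkPolicy policy = new NetworkPolicyBuilder()
                    .withNewMetadata()
                        .withName("web-allow-internal")     // placeholder policy name
                    .endMetadata()
                    .withNewSpec()
                        .withNewPodSelector()
                            .addToMatchLabels("app", "web") // selects the group of pods the policy applies to
                        .endPodSelector()
                    .endSpec()
                    .build();

            client.network().networkPolicies()
                    .inNamespace("default")
                    .create(policy);
        }
    }
}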

hazelcast is not clustered with other ip

I upgraded Hazelcast from 2.x to 3.3.3, but when I start 2 servers at different IPs, they don't form a cluster.
It worked when I was using 2.x. The console should print something like this:
Members [1] {
Member [172.29.110.114]:5701 this
}
I tried using
Hazelcast.newHazelcastInstance()
and
Hazelcast.newHazelcastInstance(config)
to get the HazelcastInstance for the map and other distributed objects. When I used the second one, with the config as a parameter, the above message was printed but the node at the other IP was not shown. When I used the first one, without a config parameter, I didn't even see the above message in the console.
Anyone knows what's going on here? Many thanks.
You need to enable multicast in your Hazelcast configuration. Here is how to enable it with the XML configuration (i.e. hazelcast.xml):
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config http://www.hazelcast.com/schema/config/hazelcast-config-3.0.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <network>
        <join><multicast enabled="true"/></join>
    </network>
</hazelcast>
Then create your config this way (hazelcast.xml should be on the classpath):
Config config = new ClasspathXmlConfig("hazelcast.xml");
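Then pass that config when creating the instance. A small sketch; with multicast enabled on both servers and no firewall in the way, the member list should show both IPs:
import com.hazelcast.config.ClasspathXmlConfig;
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class ClusterStart {
    public static void main(String[] args) {
        // hazelcast.xml must be on the classpath
        Config config = new ClasspathXmlConfig("hazelcast.xml");
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        // Prints the cluster members; both nodes should appear once they join
        System.out.println(hz.getCluster().getMembers());
    }
}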
Finally, I found out what was going on. It was all because of the firewall. After I turned it off, it worked. Just sharing my experience, and thanks for the help, Arbi.

ActiveMQ Scheduler Failover with JDBC MasterSlave

I currently have a working two-broker JDBC MasterSlave configuration, and the next step for me is to implement a scheduler with failover. I've looked around and haven't seen any information about this, and was curious to see if this is possible or if I should try a different approach.
Currently, I have the two brokers using the same dataDirectory both within the broker tag and the JDBCPersistenceAdapter tag. However, within that data directory ActiveMQ creates two separate scheduler folders. I cannot seem to force it to use the same one, so failover with scheduling isn't working.
I've also tried the KahaDB approach with the same criteria, and that doesn't seem to work either.
Another option would be for the scheduler information to be pushed to the database (in this case, Oracle) and picked up from there (not sure if that's possible).
Here is a basic overview of what I need:
Master and slave brokers up and running, using the same dataDirectory (let's say broker1 and broker2)
If I send a request to process messages through master at a certain time and master fails, slave should be able to pick up the scheduler information from master (this is where I'm stuck)
Slave should be processing these messages at the scheduled time
activemq.xml (relevant parts)
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="b1" useJmx="true"
        persistent="true" schedulerSupport="true">
    <!-- kahaDB persistenceAdapter -->
    <persistenceAdapter>
        <kahaDB directory="${activemq.data}/kahadb" enableIndexWriteAsync="false"
                ignoreMissingJournalfiles="true" checkForCorruptJournalFiles="true"
                checksumJournalFiles="true"/>
    </persistenceAdapter>
    <!-- JDBC persistenceAdapter -->
    <persistenceAdapter>
        <jdbcPersistenceAdapter dataDirectory="${activemq.data}" dataSource="#oracle-ds"/>
    </persistenceAdapter>
Can someone possibly point me in the right direction? I'm fairly new to ActiveMQ. Thanks in advance!
If anyone is curious, adding the schedulerDirectory property to the broker tag seems to be working fine. So my broker tag in activemq.xml now looks like this:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1"
        dataDirectory="${activemq.data}" useJmx="true" persistent="true"
        schedulerSupport="true" schedulerDirectory="${activemq.data}/broker1/scheduler"/>
You have probably figured out what you need to do to make this work, but for the sake of other folks like me who are looking for the answer: if you're trying to make failover work for scheduled messages with the default KahaDB store (as of v5.13.2) and a shared file system, you will need to do the following:
Have a folder in the shared file system defined as the dataDirectory attribute in the broker tag (/shared/folder in the example below).
Use the same brokerName for all nodes in that master/slave cluster (myBroker1 in the example below).
Example:
<broker xmlns="http://activemq.apache.org/schema/core"
brokerName="myBroker1"
dataDirectory="/shared/folder"
schedulerSupport="true">

ehcache auto-discovery (via multicast) between 2 instances on the same host

I run 2 Tomcat instances on the same host. Each instance runs the same web application, which tries to replicate some ehcache caches via RMI. I use the autodiscovery configuration in ehcache so I don't have to explicitly define which hosts and which caches to replicate. The ehcache instances do not manage to find each other and communicate:
DEBUG (RMIBootstrapCacheLoader.java:211) - cache peers: []
DEBUG (RMIBootstrapCacheLoader.java:133) - Empty list of cache peers for cache org.hibernate.cache.UpdateTimestampsCache. No cache peer to bootstrap from.
If I try the same thing but this time run each tomcat instance on a separate host (box) then everything works like a charm.
Am I doing something wrong, or isn't autodiscovery via multicast possible when the instances are on the same host?
My configuration uses the defaults presented in the RMI Distributed Caching documentation:
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
multicastGroupPort=4446, timeToLive=32"/>
<cacheManagerPeerListenerFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
properties="port=40001, socketTimeoutMillis=2000"/>
And inside each cache region I want to replicate I have:
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
properties="asynchronousReplicationIntervalMillis=500 " />
<bootstrapCacheLoaderFactory
class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory" />
thanks
Am I doing something wrong, or isn't autodiscovery via multicast possible when the instances are on the same host?
While I'm not really familiar with ehcache, I'd think this is possible; they do in fact provide an example doing something similar (multiple nodes per host, though only one instance each): see the Full Example section in the RMI Distributed Caching documentation you mentioned.
Usually you cannot open the same TCP port (40001 here) more than once per host though, it is bound to the first application/service allocating it (there do exist things like TCP Port Sharing on Windows for example, but you'd have to specifically account for that).
Consequently, if you are really using their identical default configurations, the second Tomcat instance trying to allocate TCP port 40001 will fail to do so. Of course, this should manifest itself somewhere earlier in the Tomcat logs, have you had a thorough look already?
Just using another free port for one Tomcat instance should solve the issue; you can see this in action in the ehcache.xml files for the Full Example mentioned above: the port number is increased one by one from 40001 up to 40006 per node.
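As a sketch of that setup (assuming ehcache 2.x): give each Tomcat's webapp its own ehcache configuration file, identical except for the listener port, e.g. port=40001 in one copy and port=40002 in the other, and load it explicitly. The file path below is a placeholder:
import net.sf.ehcache.CacheManager;

public class CacheStartup {
    public static void main(String[] args) {
        // Instance 1 loads a config whose cacheManagerPeerListenerFactory uses port=40001,
        // instance 2 loads a copy with port=40002; everything else stays identical.
        CacheManager cacheManager = CacheManager.newInstance("/path/to/ehcache-node1.xml");
    }
}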
