Emulating multiple nodes on a single machine in Java? - java

I am evaluating methods to set up and simulate multiple nodes of a virtual cluster on a single computer for cluster emulation in Java.
I might be able to assign virtual hosts/IPs and spawn multiple JVM child processes, but what is the best way to set up and test that kind of cluster behavior?
Any ideas on how to do this are appreciated.
Can I use the same ports for different local IP aliases that all map to a single localhost?
[update]
To give you an idea:
Laptop Dual Core, 8GB, 120GB SSD.
Virtual IPs: 127.0.0.2, ... 127.0.0.11
Now I would love to be able to start Java child processes like:
java -jar node.jar 127.0.0.2 (passing the IP as an argument) for a node that uses the 127.0.0.2 IP only.
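A rough sketch of what such a launcher could look like, assuming node.jar takes the bind address as its first argument (the class name here is made up):

public class ClusterLauncher {
    public static void main(String[] args) throws java.io.IOException {
        for (int i = 2; i <= 11; i++) {
            // one child JVM per loopback alias: 127.0.0.2 ... 127.0.0.11
            ProcessBuilder pb = new ProcessBuilder("java", "-jar", "node.jar", "127.0.0." + i);
            pb.inheritIO(); // show each child's output in this console
            pb.start();
        }
    }
}

Inside each node, binding the listening socket to the address passed in (e.g. new ServerSocket(port, 50, InetAddress.getByName(args[0]))) should allow every node to reuse the same port number, since each 127.0.0.x alias is a distinct local address. On Linux the whole 127.0.0.0/8 range answers out of the box; other platforms may need the aliases configured explicitly first.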
A fallback would be to use different base ports for communication, but this would introduce an additional layer, and it would feel like testing independent node services running on the same node, since it would not involve the usual cluster detection and forming.

If a simulation environment is an option for you, how about trying CloudSim? With a quick look at example 2 and the rest you might find what you are looking for, and it is Java, so you can import your code.

It turned out that JGroups has everything I need built right in. Just create multiple channels with the same cluster name, live with the logical identity (address), and use means other than the IP to identify the actual nodes.
I started and simulated about 1000 nodes, each joining three or four channels. It takes under 10 ms to join a channel, and I even managed to use my four cores to parallelize it, so it takes two to three seconds to start the test. By also forming an alternative channel and disconnecting all at once from the main channel, I was even able to check split-brain behavior.
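A minimal sketch of that setup, assuming JGroups with its default stack on the classpath (node and cluster names here are made up, not my actual test code):

import org.jgroups.JChannel;

public class NodeSimulation {
    public static void main(String[] args) throws Exception {
        int nodeCount = 10; // scaled down from the ~1000 nodes described above
        JChannel[] channels = new JChannel[nodeCount];
        for (int i = 0; i < nodeCount; i++) {
            channels[i] = new JChannel();        // all channels live in one JVM
            channels[i].setName("node-" + i);    // logical identity instead of an IP
            channels[i].connect("test-cluster"); // same cluster name => same group
        }
        // the view lists all logical members, so node identity does not depend on IPs
        System.out.println(channels[0].getView());
        for (JChannel ch : channels) ch.close();
    }
}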

Related

Pivotal GemFire cluster configuration

I am trying to set up a Pivotal GemFire cluster with two nodes/hosts, i.e. two different Unix servers. The idea is to create one locator and one cache server on each host, where the locators take care of load balancing among the cache servers. A replicated region will be created in both cache servers: when a client creates/updates a region on one cache server using gfsh or the Java API, it should be replicated to the other one.
Using gfsh, I am able to start a locator (locator 1) and a cache server (server 1) on host_A, and likewise on host_B. I have created a region (RegionA) on both servers.
Is that all I have to do? Pivotal tutorials talk about having a locator and multiple cache servers on the same machine. I could not find any appropriate resource that talks about a multi-server/host configuration.
After starting the locators on both hosts, I am starting the servers on each host like this:
start server --name=server1 --locators=host_A[10334],host_B[10334] --group=group1 --server-port=40406
start server --name=server2 --locators=host_A[10334],host_B[10334] --group=group1 --server-port=40406
When i do "list members" in gfsh, host B shows (locator 2, server 1 [from host A], server 2), but host A shows locator 1 only. Ideally i am expecting 2 locator s and 2 servers as members in both the machines. Is that not right?
The steps look just fine; are you having any issues, or is something not working while using the started cluster? You can go through Pivotal GemFire in 15 Minutes or Less to get to know how to start locators and servers, and how to interact with them as well. The only extra item I can think of (not mentioned within the previous link, as all members there are started locally within the same gfsh session) is that you need to correctly configure the --locators parameter when starting your members; more information about how this works can be found in How Member Discovery Works and Configuring Peer-to-Peer Discovery.
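In particular, the locators themselves also need the complete --locators list so they discover each other; a sketch of what starting them could look like, following the same pattern as your server commands (names and ports are assumptions):

start locator --name=locator1 --port=10334 --locators=host_A[10334],host_B[10334]
start locator --name=locator2 --port=10334 --locators=host_A[10334],host_B[10334]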
Just for your reference, you can have as many members as you want per host, there's no implicit limit about this other than the actual physical resources on the host itself (memory, disk, ports, network throughput, etc.). Keep in mind, however, that it is always better to have only one member per host to achieve the highest reliability and availability for both your data and locator services.
Hope this helps, cheers.

Ensure replication between data centres with Hazelcast

I have an application incorporating a stretched Hazelcast cluster deployed on 2 data centres simultaneously. The 2 data centres are usually both fully functional, but, at times, one of them is taken completely out of the network for SDN upgrades.
What I intend to achieve is to configure the cluster in such a way that each main partition from a DC has at least 2 backups: one in the other data centre and one in the current one.
For this purpose, checking the documentation pointed me in the direction of partition groups (http://docs.hazelcast.org/docs/2.3/manual/html/ch12s03.html). Enterprise WAN Replication seemed like exactly the thing we wanted, but, unfortunately, this feature is not available in the free version of Hazelcast.
My configuration is as follows:
NetworkConfig network = config.getNetworkConfig();
network.setPort(hzClusterConfigs.getPort());
JoinConfig join = network.getJoin();
join.getMulticastConfig().setEnabled(hzClusterConfigs.isMulticastEnabled());
join.getTcpIpConfig()
        .setMembers(hzClusterConfigs.getClusterMembers())
        .setEnabled(hzClusterConfigs.isTcpIpEnabled());
config.setNetworkConfig(network);
PartitionGroupConfig partitionGroupConfig = config.getPartitionGroupConfig()
        .setEnabled(true)
        .setGroupType(PartitionGroupConfig.MemberGroupType.CUSTOM)
        .addMemberGroupConfig(new MemberGroupConfig().addInterface(hzClusterConfigs.getClusterDc1Interface()))
        .addMemberGroupConfig(new MemberGroupConfig().addInterface(hzClusterConfigs.getClusterDc2Interface()));
config.setPartitionGroupConfig(partitionGroupConfig);
The configs used initially were:
clusterMembers=host1,host2,host3,host4
clusterDc1Interface=10.10.1.*
clusterDc2Interface=10.10.1.*
However, with this set of configs, whenever an event changed the composition of the cluster, a random node in the cluster started logging "No member group is available to assign partition ownership" every other second (as here: https://github.com/hazelcast/hazelcast/issues/5666). What is more, checking the state exposed by the PartitionService in JMX revealed that no partitions were actually getting populated, despite the apparently successful cluster state.
As such, I proceeded to replace the hostnames with the corresponding IPs, and the configuration worked. The partitions were created successfully and no nodes were acting out.
The problem here is that the boxes are created as part of an A/B deployment process and get their IPs automatically from a range of 244 IPs. Adding all 244 IPs seems like a bit much, even if it were done programmatically from Chef rather than manually, because of all the network noise it would entail. Checking at every deployment, with a telnet-based client, which machines are listening on the Hazelcast port also seems problematic, since the IPs will differ from one deployment to another, and we would end up in a situation where part of the nodes in the cluster have one member list while another part has a different member list at the same time.
Using hostnames would be the best solution, in my opinion, because we would rely on DNS resolution and wouldn't need to wrap our heads around IP resolution at provisioning time.
Does anyone know of a workaround for the group config issue? Or, perhaps, an alternative to achieve the same behavior?
This is not possible at the moment. Backup groups cannot be designed in such a way that they hold a backup of themselves. As a workaround you might be able to design 4 groups, but in that case there is no guarantee that one backup will end up in each data centre, at least not without using 3 backups.
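For illustration, a sketch of that four-group workaround, splitting each DC's address range into two custom member groups (the 10.10.x ranges are made-up placeholders):

PartitionGroupConfig partitionGroupConfig = config.getPartitionGroupConfig()
        .setEnabled(true)
        .setGroupType(PartitionGroupConfig.MemberGroupType.CUSTOM)
        // two groups inside DC1
        .addMemberGroupConfig(new MemberGroupConfig().addInterface("10.10.1.1-100"))
        .addMemberGroupConfig(new MemberGroupConfig().addInterface("10.10.1.101-200"))
        // two groups inside DC2
        .addMemberGroupConfig(new MemberGroupConfig().addInterface("10.10.2.1-100"))
        .addMemberGroupConfig(new MemberGroupConfig().addInterface("10.10.2.101-200"));
config.setPartitionGroupConfig(partitionGroupConfig);

With a single backup, a partition's backup group may still sit in the same DC as its owner; that is the caveat about needing 3 backups.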
Anyhow, in general we do not recommend spreading a Hazelcast cluster over multiple data centres, except for the very specific situation where the DCs are interconnected in a way that is similar to a LAN network and redundancy is set up.

Akka 2.1.0-RC cluster router terminating because no routees are found

I am writing a Java/Scala Akka proof of concept, and currently I am fumbling with the actor concept in a cluster environment.
Specification
I have a specific situation where a system sends the same messages to multiple nodes. My job is to not drop any of those messages yet pass only one message on to a backend system - like a unique filter with load balancing/fail-over capabilities.
Idea
I was thinking of using 2 "frontend" actors on 2 nodes; the system would send messages to a frontend router (let's say round-robin), which sends to the frontend actors, which in turn send to the backend.
The other, fallback solution would be an only-leader-sends-to-backend system, where all nodes get the same message and only the leader passes it forward.
Problem
The problem I am facing (see code) is that I want the router to use existing frontend actors on the cluster as its routees. This fails in the sample code because the router looks for the routees by routees-path (a config setting) only locally, doesn't find any, and dies.
I haven't had success with a config where the router deploys the routees on the cluster nodes either; it always deploys them locally.
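For reference, the kind of deployment section involved looks roughly like this in Akka 2.1 (the router name, routee path, and counts here are illustrative, not taken from my actual sample):

akka.actor.deployment {
  /frontendRouter {
    router = round-robin
    nr-of-instances = 10
    cluster {
      enabled = on
      routees-path = "/user/frontend"
      allow-local-routees = off
    }
  }
}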
I have sample code here: http://ge.tt/2UHUqoQ/v/0?c. There are 2 entry points:
* TransformationSample.App2 - run two instances with commandline params 2551 and 2552 each (seed nodes)
* TransformationSample.App1 - run one instance with no commandline params
App1 is the one that tries to create a router and communicate with it, but the router terminates because it can't find the frontend routees locally. I have pinned the issue to the createRoutees method (line 178) of the akka.cluster.routing.ClusterRouteeProvider class: https://github.com/akka/akka/blob/releasing-2.1.0-RC1/akka-cluster/src/main/scala/akka/cluster/routing/ClusterRouterConfig.scala.
In closing
I am probably doing something wrong here, so please excuse my Scala (this is the first project I am writing in it).
The reason I want this router thing to work is that the next step of the proof of concept would be to load balance the backend system with a similar setup, where the frontend actors communicate with a (separate) backend cluster router that sends work round-robin to backend actors.
Is this over-engineered? We have to have fail-over for the front part and load balancing on the back part.
First, what kind of actor are you using? Scala and Akka actors are different from each other.
If you are using an Akka actor, try using the remote actor system, which is really good, especially if you have a DB installed.

EC2 ELB performance issues

Two questions about EC2 ELB:
The first is how to properly run JMeter tests. I've found the following: http://osdir.com/ml/jmeter-user.jakarta.apache.org/2010-04/msg00203.html, which basically says to set -Dsun.net.inetaddr.ttl=0 when starting JMeter (which is easy); the second point it makes is that the routing is per IP, not per request. So aside from starting a farm of JMeter instances, I don't see how to get around that. Any ideas are welcome - or possibly I'm misreading the explanation?
Also, I have a Java web service that makes a server-side call to another web service (both behind ELB), so I'm using HttpClient and its MultiThreadedHttpConnectionManager, where I provide a large-ish max-connections-per-host value to the connection manager. I'm wondering if that will break the ELB's load balancing behavior, because the connections are cached (and also because the requests all originate from the same machine). I can switch to a new HttpClient each time (kind of lame), but that doesn't get around the fact that all requests originate from a small number of hosts.
Backstory: I'm in the process of perf testing a service behind an ELB on EC2, and the traffic is not distributing evenly (most traffic goes to 1-2 nodes, almost none to another node, and none at all to a 4th node). The issues above are the possible culprits I've identified.
I have had very similar problems. One thing is that the ELB does not scale well under burst load, so when you are trying to test it, it does not scale up immediately; it takes a lot of time to move up. Another drawback is the fact that it uses a CNAME for the DNS lookup, which alone is going to slow you down. There are more performance issues you can research.
My recommendation is to use HAProxy. You have much more control, and you will like the performance; I have been very happy with it. I use Heartbeat to set up a redundant server and I am good to go.
Also, if you plan on doing SSL with the ELB, you will suffer more; I found the performance to be below par.
I hope that helps some. When it comes down to it, AWS has told me personally that load testing the ELB does not really work, and if you are planning on launching with a large amount of load, you need to tell them so they can scale you up ahead of time.
You don't say how many JMeter instances you're running, but in my experience it should be around 2x the number of AZs you're scaling across. Even then, you will probably see unbalanced loads - it is very unusual to see the load spread exactly evenly across your back-end fleet.
You can help (a bit) by running your JMeter instances in different regions.
Another factor is the duration of your test. ELBs do take some time to scale up - you can generally tell how many instances are running by doing an nslookup against the ELB name. Understand your scaling patterns and build tests around them (so if it takes 20 minutes to add another instance to the ELB pool, include a 25-30 minute warm-up in your test). You can also get AWS to "pre-warm" the ELB pool if necessary.
If your ELB pool size is sufficient for your test, and you can verify that the pool does not change during a test run, you can always try running your tests directly against the ELB IPs - i.e. manually balancing the traffic.
I'm not sure what you expect to happen with the second tier of calls - if you're opening a connection and re-using it, there's obviously no way to have that load spread across instances without closing and re-opening the connection. Are these calls running on the same set of servers, or a different set? You can create an internal ELB and use that endpoint to connect to, but I'm not sure that would help in the scenario you've described.
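On the DNS side, the JVM equivalent of that JMeter flag can also be set programmatically; a small sketch using the standard java.security property (nothing here is ELB-specific, and the class name is made up):

import java.security.Security;

public class DnsCacheSetup {
    static void disableDnsCache() {
        // Disable the JVM's positive DNS cache so that each freshly opened
        // connection re-resolves the ELB's CNAME and can land on a different
        // ELB node - same effect as -Dsun.net.inetaddr.ttl=0 from the link above.
        Security.setProperty("networkaddress.cache.ttl", "0");
    }
}

Note that this only matters when new connections are opened: pooled connections, such as those held by MultiThreadedHttpConnectionManager, stay pinned to whichever ELB node they first resolved until they are closed.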

Java Hazelcast problems with multiple clusters

I am running a small system that relies on Hazelcast for clustering, distributed computing and messaging in multicast mode (the standard config as available in the download). I have a number of server modules that run as "Core" Hazelcast instances, and a Java Swing application that is implemented as a Hazelcast "Native Client". This all works well, and I would now like to commission the system in production, which means I need to run two separate clusters (dev + prod) - and that is where I run into problems.
According to the documentation, all you need to do is use separate group names + passwords for the two clusters, and I get the impression that the two clusters should then sort themselves out automatically!? This appears to work for the server modules, but when I try to connect a "Client" instance to the prod environment, I can see from the logs of one of the server modules in prod that the client appears to connect successfully:
INFO: [prod] received auth from Connection [/192.168.0.2:55863 -> null] live=true,
client=true, type=JAVA_CLIENT, this group name:prod, auth group name:prod,
successfully authenticated
But the client never shows up as a member of prod. Instead, I find that the client has become a member of the dev environment, even though the authentication took place against prod!
Involuntary mixing of the two clusters is obviously a giant problem for me and a showstopper. Does anyone know if there is anything I am doing wrong, or if there are any configuration changes I can make to resolve the problem?
When a client connects to the cluster it never becomes a member of the cluster.
So I suspect that your client did connect to prod, but somewhere in your code you have something like Hazelcast.getMap(), which results in starting a member in that JVM; since the default configuration this member uses is the same as dev's, this new member joins your dev cluster.
So, in fact, you have one client that is connected to prod, and another member that is connected to the dev cluster.
Try to put something through the client and see in which cluster those entries appear.
Am I making sense?
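For what it's worth, a sketch of the distinction using the Hazelcast 2.x-era API that your logs suggest (group name, password, and address are placeholders):

import com.hazelcast.client.ClientConfig;
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

public class ClientVsMember {
    public static void main(String[] args) {
        ClientConfig clientConfig = new ClientConfig();
        clientConfig.getGroupConfig().setName("prod").setPassword("prod-pass");
        clientConfig.addAddress("192.168.0.1:5701");

        // A client connection: it talks to prod but never joins as a member.
        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
        Map<String, String> viaClient = client.getMap("test");
        viaClient.put("k", "v"); // should show up in prod

        // This, by contrast, silently starts a full member with the default
        // configuration in the same JVM - the suspected cause of the dev join.
        Map<String, String> viaStatic = Hazelcast.getMap("test");
    }
}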
