I am running a small system that relies on Hazelcast for clustering, distributed computing and messaging in multicast mode (the standard config as available in the download). I have a number of server modules that run as "core" Hazelcast instances and a Java Swing application that is implemented as a Hazelcast "native client". This all works well, and I would now like to commission the system in production. I therefore need to run two separate clusters (dev + prod), and that is where I run into problems.
According to the documentation, all you need to do is use separate group names + passwords for the two clusters, and I get the impression that the two clusters should sort themselves out automatically!? This appears to work for the server modules, but when I try to connect a "client" instance to the prod environment, I can see from the logs of one of the server modules in prod that the client appears to connect successfully:
INFO: [prod] received auth from Connection [/192.168.0.2:55863 -> null] live=true,
client=true, type=JAVA_CLIENT, this group name:prod, auth group name:prod,
successfully authenticated
But the client never shows up as a member of prod. Instead, I find that the client has become a member of the dev environment, even though the authentication took place against prod!
Involuntary mixing of the two clusters is obviously a giant problem for me and a showstopper. Does anyone know if there is anything I am doing wrong, or if there are any configuration changes I can make to resolve the problem?
When a client connects to the cluster, it never becomes a member of the cluster.
So I suspect that your client did connect to prod, but somewhere in your code you have something like Hazelcast.getMap(), which starts a member in that JVM. Since the default configuration that this member uses is the same as dev, this new member joins your dev cluster.
So in fact you have one client that is connected to prod and another member that is connected to the dev cluster.
Try putting something through the client and see in which cluster those entries appear.
Am I making sense?
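For illustration, an explicit client connection to prod could look roughly like this (a sketch assuming the Hazelcast 3.x client API; the address, password and map name are placeholders). The important part is that the instance comes from HazelcastClient; Hazelcast.newHazelcastInstance() (or the old static Hazelcast.getMap(...)) would instead start an embedded member with the default config and join dev:

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class ProdClientExample {
    public static void main(String[] args) {
        ClientConfig clientConfig = new ClientConfig();
        // Group name/password must match the prod cluster's config.
        clientConfig.getGroupConfig().setName("prod").setPassword("prod-pass"); // placeholder password
        clientConfig.getNetworkConfig().addAddress("192.168.0.1:5701");         // placeholder member address

        // A client instance, not an embedded member: it never joins a cluster as a member.
        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
        IMap<String, String> map = client.getMap("someMap"); // operations go to the prod cluster
        map.put("test-key", "test-value");
    }
}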
I have two development machines, both running Ignite in server mode on the same network. I started the server on the first machine and then started the second machine. When the second machine starts, it automatically gets added to the first one's topology.
Note:
Before starting, I removed the work folder on both machines.
In the config, I never mentioned any IPs of other machines.
Can anyone tell me what's wrong with this? My intention is that each machine should have a separate topology.
As described in the discovery documentation, Apache Ignite employs multicast to find all nodes in a local network, forming a cluster. This is the default mode of operation.
Please note that we don't really recommend using this mode for either development or production deployment; use static IP discovery instead (see the same documentation).
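For example, a minimal Java sketch of static IP discovery (the address list is a placeholder; each machine would list only its own node(s), so the two topologies stay separate):

import java.util.Arrays;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class StaticDiscoveryExample {
    public static void main(String[] args) {
        // Static IP finder: only the addresses listed here are contacted, no multicast.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47509")); // placeholder: this machine only

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoverySpi);

        Ignite ignite = Ignition.start(cfg);
    }
}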
I have a question which may seem strange, but I'm working in an environment with very restricted options.
Basically, I have a job which runs on a SAP NetWeaver server, which is clustered.
This job runs socket server code, which allows an ancient system to communicate with it.
My question is this:
Depending on which side of the cluster the job runs on (and I can't influence this), the socket server will run on either a .127 IP or a .129 IP.
Since the connecting system needs a fixed IP to connect to, this gives me a problem.
So, can I open the socket on the .127 IP each time, regardless of which of the two IPs the job happens to be running on, or does it have to be opened on the same IP that the code is actually running on?
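For reference, binding the server socket to a specific local address in Java looks roughly like this (a minimal sketch with a placeholder address and port; the bind only succeeds if that address is actually assigned to the machine the code is running on, otherwise it fails with a BindException):

import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class FixedIpServer {
    public static void main(String[] args) throws IOException {
        // Bind explicitly to the .127 address (placeholder). This throws
        // java.net.BindException if .127 is not a local address of this host.
        InetAddress bindAddress = InetAddress.getByName("10.0.0.127");
        try (ServerSocket server = new ServerSocket(4711, 50, bindAddress)) {
            System.out.println("Listening on " + server.getLocalSocketAddress());
            server.accept(); // wait for the ancient system to connect
        }
    }
}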
Well, if you can't influence the server, you can introduce a component between the ancient system and the cluster that will redirect the request to one of the IPs in the cluster (.127 / .129 in your example).
                                                         |-> [IP .127]
[ancient system] --> [load balancing/proxy component] --|
                                                         |-> [IP .129]
An actual implementation can vary; basically, it boils down to a hardware-based or a software-based solution.
Hardware
Some network equipment, like load balancers provide this feature, so talk to your network department about this, they'll provide a couple of options.
Software
You can install a solution like HAProxy that will solve this at the software level.
I am trying to set up a Pivotal GemFire cluster with two nodes/hosts, i.e. two different Unix servers. The idea is to create one locator and one cache server on each host, where the locators should take care of load balancing among the cache servers. A replicated region will be created in both cache servers. When a client creates/updates a region in one cache server using gfsh or the Java API, it should be replicated to the other one.
Using gfsh, I am able to start a locator (locator 1) and a cache server (server 1) on host_A, and likewise on host_B. I have created a region (RegionA) in both servers.
Is that all I have to do? The Pivotal tutorials talk about having a locator and multiple cache servers on the same machine. I could not find any appropriate resource that talks about a multi-server/multi-host configuration.
After starting the locators on both hosts, I am starting the servers on each host like this:
start server --name=server1 --locators=host_A[10334],host_B[10334] --group=group1 --server-port=40406
start server --name=server2 --locators=host_A[10334],host_B[10334] --group=group1 --server-port=40406
When I do "list members" in gfsh, host B shows (locator 2, server 1 [from host A], server 2), but host A shows locator 1 only. Ideally I am expecting 2 locators and 2 servers as members on both machines. Is that not right?
The steps look just fine; are you having any issues, or is something not working while using the started cluster? You can go through Pivotal GemFire in 15 Minutes or Less to get to know how to start locators and servers, and how to interact with them as well. The only extra item I can think of (not mentioned within the previous link, as all members are started locally within the same gfsh session) is that you need to correctly configure the --locators parameter when starting your members; more information about how this works can be found in How Member Discovery Works and Configuring Peer-to-Peer Discovery.
Just for your reference, you can have as many members as you want per host; there's no implicit limit on this other than the actual physical resources of the host itself (memory, disk, ports, network throughput, etc.). Keep in mind, however, that it is always better to have only one member per host to achieve the highest reliability and availability for both your data and locator services.
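For completeness, a Java client can connect through both locators and work with RegionA roughly like this (a sketch assuming the org.apache.geode package names of GemFire 9+ / Apache Geode and the host names from the question; older GemFire versions use com.gemstone.gemfire packages):

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class GemFireClientExample {
    public static void main(String[] args) {
        // Connect through both locators; they direct the client to the available servers.
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("host_A", 10334)
                .addPoolLocator("host_B", 10334)
                .create();

        // PROXY: all operations go to the servers, where RegionA is replicated.
        Region<String, String> region = cache
                .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("RegionA");

        region.put("key1", "value1"); // should be visible from both servers
        cache.close();
    }
}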
Hope this helps, cheers.
I'm new-ish to networking, and I'm swimming (drowning) in semantics.
I have a VM which runs a Java application. Ideally, it would be fed inputs from the host through a RabbitMQ queue. The Java application would then place the results on another RabbitMQ queue, on a different port, where they would be used by the host application. After researching it for a bit, it seems like RabbitMQ only exists in the localhost space with listeners on different ports; am I correct in this?
Do I need two RabbitMQ servers running in tandem, then (one on the VM and the other on the host), each listening on the same port? Or do I just need one RabbitMQ server running, with both applications pointed at the same IP address/port?
Also, I have read that you cannot connect as 'guest/guest' unless it is on localhost, which I understand, but how is RabbitMQ supposed to be configured to be reachable by anything besides localhost?
I've been researching for several hours, but the documentation does not point to a direct answer/how-to guide. Perhaps it is my lack of network experience. If anyone could elaborate on these questions or point me to some articles/helpful guides, I would be much obliged.
P.S. -- I don't even know what code to display to give context. Let me know and I'll edit the code into the post.
RabbitMQ listens on TCP port 5672 on all network interfaces out of the box. This includes the "loopback" interface (to allow fast connections to self) and interfaces visible to other remote hosts (including VMs).
For your use case, you probably need a single RabbitMQ instance for both directions. The application on the host will publish messages to one queue, and the Java application in the VM will consume messages from that queue and push the results to a second queue. This second queue can be consumed by the application on the host.
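A minimal sketch of the VM side in Java (assuming the RabbitMQ Java client 5.x API; the host address, credentials, queue names and processing step are placeholders):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class VmWorker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("192.168.56.1");   // placeholder: the host machine's address
        factory.setUsername("appuser");    // placeholder: a non-guest user
        factory.setPassword("secret");

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare("inputs", true, false, false, null);
        channel.queueDeclare("results", true, false, false, null);

        // Consume inputs published by the host, process them, and publish the results.
        DeliverCallback onDeliver = (consumerTag, delivery) -> {
            String input = new String(delivery.getBody(), "UTF-8");
            String result = input.toUpperCase(); // placeholder for the real processing
            channel.basicPublish("", "results", null, result.getBytes("UTF-8"));
        };
        channel.basicConsume("inputs", true, onDeliver, consumerTag -> { });
    }
}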
For the user, you need to create a new user with the appropriate rights. This is documented in the access control article. You can create the user from the management web UI (after you enable the management plugin) or using the rabbitmqctl command line tool.
The last part is networking between the host and the VM. It really depends on the technology you use. It may work out-of-the-box or you may have to configure how VMs are connected to the network. Refer to the documentation of your hypervisor.
I am writing a Java/Scala Akka proof of concept and currently I am fumbling with the actor concept in a cluster environment.
Specification
I have a specific situation where a system sends the same messages to multiple nodes. My job is to not drop any of those messages and to pass only one message on to a backend system, like a unique filter with load balancing/fail-over capabilities.
Idea
I was thinking of using two "frontend" actors on two nodes; the system would send messages to a frontend router (let's say round-robin), which sends to the frontend actors, which in turn send to the backend.
The fallback solution would be an only-the-leader-sends-to-the-backend setup, where all nodes get the same message and only the leader passes it forward.
Problem
The problem I am facing (see code) is that I want the router to use existing frontend actors on the cluster as routees. This fails in the sample code because the router looks for the routees by routees-path (a config setting) only locally, doesn't find any, and dies.
I haven't had success with the config where the router deploys routees on the cluster nodes either; it always deploys them locally.
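For reference, a cluster-aware router that looks up existing routees (instead of deploying new ones) is configured in the Akka 2.1-era documentation roughly as below. This is a sketch from memory with placeholder actor paths; the key names are version-specific and changed in later Akka releases:

akka.actor.deployment {
  /frontendRouter {
    router = round-robin
    nr-of-instances = 10
    cluster {
      enabled = on
      routees-path = "/user/frontend"
      allow-local-routees = off
    }
  }
}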
I have sample code here: http://ge.tt/2UHUqoQ/v/0?c. There are two entry points:
* TransformationSample.App2 - run two instances, with command-line params 2551 and 2552 respectively (seed nodes)
* TransformationSample.App1 - run one instance with no command-line params
App1 is the one that tries to create a router and communicate with it, but the router terminates because it can't find the frontend actors locally. I have pinned the issue to the akka.cluster.routing.ClusterRouteeProvider class, createRoutees method, line 178: https://github.com/akka/akka/blob/releasing-2.1.0-RC1/akka-cluster/src/main/scala/akka/cluster/routing/ClusterRouterConfig.scala.
In closing
I am probably doing something wrong here, so please excuse my Scala (this is the first project I am writing in it).
The reason why I want this router setup to work is that the next step of the proof of concept would be to load balance the backend system with a similar arrangement, where the frontend actors would communicate with a (separate) backend cluster router that sends work round-robin to backend actors.
Is this over-engineered? We have to have fail-over for the front part and load balancing on the back part.
First, what kind of actor are you using? Scala actors and Akka actors are different from each other.
If you are using an Akka actor, try using the remote actor system, which is really good, especially if you have a DB installed.