Akka Client actor connecting to Server actor system - java

I have a server actor running in the background. The basic operation of the server actor is to receive a key-value pair. Once it receives the pair, it stores it in a map and returns the value when asked.
Now, I have a client actor. I want to connect to the server actor using the actorSelection() method, but I am confused about the parameters it takes. Can anyone help me understand them?
Server side:
Actor System: actorSystem
Server Actor: akkademy-db
Client side:
Actor System: LocalSystem

You didn't mention that your scenario is from the book Learning Akka. As stated in the book, the client can obtain an ActorSelection of the server with the following:
ActorSelection remoteDb = system.actorSelection("akka.tcp://akkademy@" + remoteAddress + "/user/akkademy-db");
The template for the path, as the documentation describes, is the following:
akka.<protocol>://<actor system name>@<hostname>:<port>/<actor path>
Using the template, here's a breakdown of the ActorSelection path to the server:
"akka.tcp://akkademy@" + remoteAddress + "/user/akkademy-db"
// tcp --> protocol
// akkademy --> actor system name
// remoteAddress --> hostname:port
// /user/akkademy-db --> actor path
Read the documentation for more information.
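To make the template concrete, here is a small helper that assembles a remote selection path from its parts. The class and method names, as well as the sample host and port, are mine for illustration, not from the book:

```java
// Assembles an Akka remote actor path following the template
// akka.<protocol>://<actor system name>@<hostname>:<port>/<actor path>.
// Class/method names and the sample host/port are illustrative only.
public class RemotePath {
    public static String build(String protocol, String systemName,
                               String host, int port, String actorPath) {
        return String.format("akka.%s://%s@%s:%d%s",
                protocol, systemName, host, port, actorPath);
    }

    public static void main(String[] args) {
        // With remoteAddress = "127.0.0.1:2552" this yields the same
        // string as the book's example.
        System.out.println(build("tcp", "akkademy", "127.0.0.1", 2552,
                "/user/akkademy-db"));
    }
}
```

You would pass the resulting string to system.actorSelection(...) exactly as in the snippet above.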

Related

How do I configure Vert.x event bus to work across cluster of Docker containers?

In my current setup, I'm using the default multicast option of the Hazelcast cluster manager. When I link the instances of my containerized Vert.x modules (via Docker networking links), I can see that they successfully create a Hazelcast cluster. However, when I try publishing events on the event bus from one module, the other module doesn't react to them. I'm not sure how the network settings of the Hazelcast cluster relate to the network settings of the event bus.
At the moment, I have the following programmatic configuration for each of my Vert.x modules, each deployed inside a Docker container.
ClusterManager clusterManager = new HazelcastClusterManager();
VertxOptions vertxOptions = new VertxOptions()
        .setClustered(true)
        .setClusterManager(clusterManager);
vertxOptions.setEventBusOptions(new EventBusOptions()
        .setClustered(true)
        .setClusterPublicHost("application"));
The Vert.x Core manual states that I may have to configure clusterPublicHost and clusterPublicPort for the event bus, but I'm not sure how those relate to the general network topology.
One answer is here: https://groups.google.com/d/msg/vertx/_2MzDDowMBM/nFoI_k6GAgAJ
I see this question come up a lot, and what a lot of people miss in the documentation (myself included) is that the event bus does not use the cluster manager to send event bus messages. I.e. in your example with Hazelcast as the cluster manager, you have the Hazelcast cluster up and communicating properly (so your cluster manager is fine); however, the event bus is failing to communicate with your other Docker instances due to one or more of the following:
It is attempting to use an incorrect IP address to reach the other node (i.e. the IP of the private interface on the Docker instance, not the publicly mapped one)
It is attempting to communicate on a port Docker is not configured to forward (the event bus picks a dynamic port if you don't specify one)
What you need to do is:
Tell Vert.x the IP address that the other nodes should use to talk to each instance (using the -cluster-host [command line], setClusterPublicHost [VertxOptions] or "vertx.cluster.public.host" [system property] options)
Tell Vert.x explicitly the port to use for event bus communication, and ensure Docker is forwarding traffic for those ports (using the "vertx.cluster.public.port" [system property], setClusterPublicPort [VertxOptions] or -cluster-port [command line] options). In the past, I have used 15701 because it is easy to remember (just a '1' in front of the Hazelcast ports).
The event bus only uses the cluster manager to manage the IP/port information of the other Vert.x instances and the registration of the consumers/producers. The communications are done independently of the cluster manager, which is why you can have the cluster manager configured properly and communicating, but still have no event bus communications.
You may not need to do both the steps above if both your containers are running on the same host, but you definitely will once you start running them on separate hosts.
Something else that can happen is that Vert.x uses the loopback interface when you don't specify which IP Vert.x (not Hazelcast) should use to communicate over the event bus. The problem is that you don't know which interface will be chosen (loopback, an interface with an IP, or one of several interfaces with IPs).
To overcome this problem, I once wrote a method for it: https://github.com/swisspush/vertx-cluster-watchdog/blob/master/src/main/java/org/swisspush/vertx/cluster/ClusterWatchdogRunner.java#L101
The cluster manager works fine: the cluster manager configuration has to be the same on each node (machine/Docker container) in your cluster, or you make no configuration at all (use the default configuration of your cluster manager).
You have to make the event bus configuration consistent on each node: set the cluster host on each node to that node's own IP address and any arbitrary port number (unless you run more than one Vert.x instance on the same node, in which case you have to choose a different port number for each instance).
For example if a node's IP address is 192.168.1.12 then you would do the following:
VertxOptions options = new VertxOptions()
        .setClustered(true)
        .setClusterHost("192.168.1.12") // node ip
        .setClusterPort(17001) // any arbitrary port, but make sure no other Vert.x instance is using the same port on the same node
        .setClusterManager(clusterManager);
on another node whose IP address is 192.168.1.56 then you would do the following:
VertxOptions options = new VertxOptions()
        .setClustered(true)
        .setClusterHost("192.168.1.56") // other node ip
        .setClusterPort(17001) // it is ok because this is a different node
        .setClusterManager(clusterManager);
I found this solution, which worked perfectly for me; below is my code snippet (the important part is options.setClusterHost()):
public class Runner {
    public static void run(Class clazz) {
        VertxOptions options = new VertxOptions();
        try {
            // for docker binding
            String local = InetAddress.getLocalHost().getHostAddress();
            options.setClusterHost(local);
        } catch (UnknownHostException e) {
            // ignore: fall back to the default cluster host
        }
        options.setClustered(true);
        Vertx.clusteredVertx(options, res -> {
            if (res.succeeded()) {
                res.result().deployVerticle(clazz.getName());
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}

public class Publisher extends AbstractVerticle {
    public static void main(String[] args) {
        Runner.run(Publisher.class);
    }
    ...
}
no need to define anything else...

How do I communicate with a specific process in one Erlang node?

I have an Erlang server which spawns a new process for each client that connects. The Pid of this new process is then passed to the client (so the client can connect to the new process). Is that enough to make a connection from a Jinterface client?
I am using this to connect from the client first:
final String SERVERNAME = "server";
final String SERVERNODE = "bertil@computer";
mbox.send(SERVERNAME, SERVERNODE, connectClient);
And those names are set in the server when it starts:
start() ->
    net_kernel:start([bertil, shortnames]),
    register(server, self()).
Do I have to register a new name for each spawned process? That would not be very dynamic... How do I solve this? Should I use the main process on the server as a router to send all traffic through?
Once you have a pid, you should be able to send a message directly to it. In Erlang you don't have to specify a node if you have a pid. You only need a node if you are sending to a registered name, since names are only unique per node. Pids are unique in the whole cluster.
If you have a variable my_pid as an OtpErlangPid object, you can send like so:
mbox.send(my_pid, message);
See the documentation for the send function and chapter 1.6 Sending and Receiving Messages in the Jinterface User's Guide.

java socket server ClientA -> Server -> ClientB

I have implemented a simple client/server app using sockets, but now I would like ClientA to write to the server, and the server to redirect the message to ClientB.
ClientA -> Server -> ClientB
I know how to implement ClientA and ClientB, but I'm having problems distinguishing ClientA from ClientB inside the server...
Server: I know how to read and resend the messages; I just need the logic to distinguish the clients.
If I understand the question, you have a server to which clients connect.
A client can have one of two roles, either the "sender" or the "receiver". When a sender and a receiver connect to the server, the sender transmits data which is then passed on to the receiver. This is generically known as a "proxy".
One way to do this is to have the server listen on two different ports, say 3000 and 4000. Clients connecting to port 3000 (for instance) want to assume the role of sender, while those connecting on 4000 want to receive. If you have multiple senders and multiple receivers, then the clients will need to identify themselves to the server and indicate to which receiver they want to send or receive from (by sending login parameters, for instance), prior to setting up the data transfer connections. The details of how this is accomplished (data packets sent) is known as the "protocol", and you are responsible for designing it.
If clients can take on both roles simultaneously (sender and receiver) then you would have a single listening port on the server for all clients. The clients would then have to communicate to the server (by sending data packets) what connection they want to establish. Again, the details of how this happens are totally up to you. You must define the protocol.
Here's a sequence diagram of one (of many) ways to do this:
Client A             Server              Client B
    |---- login ------->|                   |
    |                   |<----- login ------|
    |                   |----- accept ----->|
    |<--- accept -------|                   |
    |---- data -------->|                   |
    |                   |----- data ------->|
    .                   .                   .
    .                   .                   .
    .                   .                   .
Client A login data message says "I am client A, I wish to send data to B"
Client B login data message says "I am client B, I wish to receive from A"
Server sends "accept" messages to both. When A receives the accept message it begins sending data and the server forwards it to B.
Issues to be dealt with include ordering of connections (what if B connects before A), connection failure (how does the server notify one client that the other disappeared), etc. These are all part of defining the protocol.
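As a sketch of the login step above (the line-based wire format and all names here are invented for illustration; the actual protocol design is up to you), the server can keep a registry keyed by the identity each client announces:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Server-side bookkeeping sketch: each client announces itself with a line
// like "LOGIN A SEND B" (I am A, I want to send to B) or "LOGIN B RECV A".
// This text format is made up for this example; define your own protocol.
public class ClientRegistry {
    public record Login(String id, boolean sender, String peer) {}

    private final Map<String, Login> clients = new ConcurrentHashMap<>();

    public Login parse(String line) {
        String[] parts = line.trim().split("\\s+");
        if (parts.length != 4 || !parts[0].equals("LOGIN")) {
            throw new IllegalArgumentException("bad login: " + line);
        }
        Login login = new Login(parts[1], parts[2].equals("SEND"), parts[3]);
        clients.put(login.id(), login);
        return login;
    }

    // True once both ends have logged in and name each other as peer,
    // i.e. the moment the server would send "accept" to both sides.
    public boolean ready(String a, String b) {
        Login la = clients.get(a);
        Login lb = clients.get(b);
        return la != null && lb != null
                && la.peer().equals(b) && lb.peer().equals(a);
    }
}
```

This also makes the connection-ordering issue concrete: ready() simply stays false until the second login arrives, regardless of which client connects first.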

java akka remote clustering with play framework

I am building a system of clustered computers with several nodes. There is a master node that is supposed to schedule tasks to several nodes in the cluster. The nodes are separate PCs connected to the master node via network cables. The whole system is expected to be implemented with Java, Akka, and the Play Framework.
Is there a way to implement this with Akka remote clustering and the Play Framework?
I am aware of the remote calculator tutorial, but it seems to be run with SBT. I would love to know if a similar tutorial exists for the Play Framework, or any link to help me with my project.
Thank you.
An instance of a Play! Framework application can connect to a remote Akka node (i.e. your master node) using simple configuration.
There are two ways:
override the default actor system
define a new actor system
I suggest you use the second one.
In this case you have to add something like this to application.conf:
master {
  akka {
    actor {
      provider = "akka.remote.RemoteActorRefProvider"
    }
    remote {
      transport = "akka.remote.netty.NettyRemoteTransport"
      netty {
        hostname = "your-master-host-name"
        port = 0
      }
    }
  }
}
Then in your Play! app you can connect to the remote master node this way:
ActorSystem system = ActorSystem.create("master", ConfigFactory.load().getConfig("master"));
ActorRef master = system.actorFor("akka://master@your-master-host-name:your-master-port/user/master");
If you prefer to override the default Play Akka actor system, here is the reference configuration: http://www.playframework.org/documentation/2.0.3/AkkaCore
For the master and computational cluster nodes I suggest you to use the architecture and the code described here: http://letitcrash.com/post/29044669086/balancing-workload-across-nodes-with-akka-2
If your master and computational nodes do not require a web or REST interface, you can implement them as plain Java programs.
In the cited article the nodes are not exposed remotely. To do that, just add an application.conf in the master node app:
master {
  akka {
    actor {
      provider = "akka.remote.RemoteActorRefProvider"
    }
    remote {
      transport = "akka.remote.netty.NettyRemoteTransport"
      netty {
        hostname = "your-master-host-name"
        port = your-master-port
      }
    }
  }
}
And instantiate it with the actorOf method:
ActorSystem system = ActorSystem.create("master", ConfigFactory.load().getConfig("master"));
ActorRef master = system.actorOf(new Props(Master.class), "master");
The computational nodes must be configured in the same way as the Play! node.
Notice that only the master node has a TCP/IP port defined. Non-master nodes use port 0, which configures Akka to choose a random free port for them. This works because the only well-known host:port address you need is the master's, which every node has to point to when it starts up.

connect to a lacewing server chat

I'm trying to make a port of a chat program a friend of mine made with Lacewing and Multimedia Fusion 2 for Android devices.
I've managed to create a socket that connects to the listening socket of the server successfully, but I can't seem to send data to log in and enter the chat. The login for now just requires a name, but even if I send a String of data, the server doesn't reply or accept it to get me into the channel.
I know I could easily port this in other ways, such as using the NDK with the Multimedia Fusion 2 exporter, but I just want to figure out how this works.
PS: I'm using Java and libgdx for the development
You need to read the liblacewing relay protocol:
https://github.com/udp/lacewing/blob/0.2.x/relay/current_spec.txt
On initial connection, you have to send byte 0 to identify that you are not an HTTP client. After this, you can exchange normal protocol messages.
The first message you need to send is the connection request (which may be denied by the server with a deny message). This would be:
byte 0 (2.1.0 request)
(1.2 size)
byte 0 (2.1.0.0 connection request)
string "revision 3" (2.1.0.0 connection request -> version)
When the server responds with response 0 (2.2.0.0 Connect), you then have to set a name before you may join any channels. This is done with message 2.1.0.1 SetName, which is the same structure as above but instead of 2.1.0.0's byte 0, it is 2.1.0.1's byte 1, followed by the name as a string instead of the protocol version.
The server should then respond with 2.2.0.1 SetName, assuming it accepted your name change request. You should process this message in case the server gave you a different name than you requested. Finally, once you have a name, you can join a channel with 2.1.0.2 JoinChannel. The flags you specify here will be used if the channel doesn't exist yet (e.g. nobody is in the chat yet) - these should match the ones in the MMF2 project file. The name should also match.
After all that, you're still not done! You have to process more messages, etc.; it's almost like writing the RelayClient class yourself. It's a tough task, but with the protocol specification in hand you should be able to work it all out.
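As an illustration of the first request described above, here is one way the bytes could be packed in Java. The single-byte size field is an assumption that only holds for small payloads; check the spec linked above for the exact size encoding, and remember to send the lone byte 0 (the non-HTTP marker) before this message.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Packs the liblacewing relay "connection request" message:
//   byte 0       (2.1.0 request message type)
//   size         (1.2 payload size; one byte here, an assumption)
//   byte 0       (2.1.0.0 connection request subtype)
//   "revision 3" (protocol version string)
public class LacewingHandshake {
    public static byte[] connectionRequest() {
        byte[] version = "revision 3".getBytes(StandardCharsets.US_ASCII);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(0);                          // request message type
        out.write(1 + version.length);         // payload size: subtype + string
        out.write(0);                          // connection request subtype
        out.write(version, 0, version.length); // version string
        return out.toByteArray();
    }
}
```

The SetName message (2.1.0.1) would be packed the same way, with subtype byte 1 and your name in place of the version string.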
