Broadcast to everyone on LAN [closed] - java

I am attempting to contact every device on a LAN to discover which devices are currently on the network and running my service. Every device running the service should learn which other devices are connected when it comes online. I have basic networking experience (TCP/UDP), but I haven't done much with more complex communication packages. I wanted to post what I have researched/tried so far and get some expert responses to limit my trial-and-error time on potential solutions.
Requirements:
Currently using Java, but cross-language communication is required.
Must be done in an acceptable time frame (a couple of seconds max) and preferably reliably.
I would like to use similar techniques for both the broadcast and later communications to avoid the added complexity of multiple packages/technologies.
Currently I am planning on a heartbeat to known IPs to signal that a device is still connected, but I may want to broadcast to the LAN continuously later.
I am interested in using cross-language RPC communication for this service, but the discovery technique doesn't necessarily have to use it.
Later (non-broadcast) communication must be reliable.
Research and things attempted:
UDP - Worried about cross-language compatibility, the lack of reliable delivery, and that it would add another way of communicating rather than one solution like the ones below. I would prefer to avoid it if a more complete solution can be found.
Apache Thrift - So far I have tried to iterate through all potential IPs and connect to each one. This is far too slow since the timeout is long for each attempted connection (when I call open). I have yet to find any broadcast option.
ZeroMQ - I have done very little testing with plain ZeroMQ and have only used a wrapper of it in the past. The pub/sub features seem useful for this scenario, but I am worried about subscribing to every IP in the LAN, and about what will happen when I attempt to subscribe to an IP that doesn't yet have the service running on it.
Do any of these recommendations seem like they will work better than the others given my requirements? Do you have any other suggestions of technologies which might work better?
Thanks.

What you describe is basically two separate problems: discovery/monitoring and a service provider. Since these two issues are somewhat orthogonal, I would use two different approaches to implement them.
Discovery/monitoring
Let each device continuously broadcast a (small) heartbeat/state message on the LAN over UDP on a predefined port. This heartbeat should contain the IP/port of the sending device, along with other interesting data, for example an address (URL) of the service(s) this device provides. Choose a compact message format if you need to keep bandwidth utilization down, for example Protocol Buffers (available in many languages), or JSON for readability. These messages should be published periodically, for example every 5 seconds.
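A minimal Java sketch of such a heartbeat sender, assuming port 8888 and a hand-rolled JSON payload (both arbitrary choices for illustration):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class HeartbeatSender {
    // Port and payload format are arbitrary assumptions for this sketch.
    private static final int HEARTBEAT_PORT = 8888;

    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            InetAddress broadcast = InetAddress.getByName("255.255.255.255");
            String localHost = InetAddress.getLocalHost().getHostAddress();

            while (true) {
                // Small JSON heartbeat: sender address plus the URL of the service it offers.
                String json = "{\"ip\":\"" + localHost + "\",\"serviceUrl\":\"http://" + localHost + ":8080/api\"}";
                byte[] payload = json.getBytes(StandardCharsets.UTF_8);
                socket.send(new DatagramPacket(payload, payload.length, broadcast, HEARTBEAT_PORT));
                Thread.sleep(5000); // publish every 5 seconds, as suggested above
            }
        }
    }
}
```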
Now, let each device listen for incoming messages on the broadcast address and keep an in-memory map [sender, last-recorded-time + other data] of all known devices. Iterate the map, say, every second and remove senders that have been silent for x heartbeat intervals (e.g. 3 x 5 seconds). This way each node will know about all other responding nodes.
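A matching listener sketch (same assumed port as above; devices are pruned after three missed heartbeats):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HeartbeatListener {
    private static final int HEARTBEAT_PORT = 8888; // same assumed port as the sender sketch
    private static final long EXPIRY_MS = 3 * 5000; // 3 missed heartbeat intervals

    // sender address -> last time a heartbeat was seen
    private static final Map<String, Long> knownDevices = new ConcurrentHashMap<>();

    public static void main(String[] args) throws Exception {
        // Periodically drop devices that have gone silent.
        Thread reaper = new Thread(() -> {
            while (true) {
                long cutoff = System.currentTimeMillis() - EXPIRY_MS;
                knownDevices.entrySet().removeIf(e -> e.getValue() < cutoff);
                try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
            }
        });
        reaper.setDaemon(true);
        reaper.start();

        try (DatagramSocket socket = new DatagramSocket(HEARTBEAT_PORT)) {
            byte[] buffer = new byte[1024];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet);
                String sender = packet.getAddress().getHostAddress();
                String message = new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8);
                knownDevices.put(sender, System.currentTimeMillis());
                // 'message' would be parsed here (JSON / Protocol Buffers) to pick up the service URL etc.
            }
        }
    }
}
```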
You do not have to know about any IPs in advance, do not need an extra directory server, and do not need to iterate all possible IP addresses. Also, sending/receiving data over UDP is much simpler than over TCP and does not require any connections. It also generates less overhead, meaning lower bandwidth utilization.
Service Provider
I assume you would like some kind of request-response here. For this I would choose a simple REST-based API over HTTP, talking JSON. Switch out the JSON payload for Protocol Buffers if your payload is fairly large, but in most cases JSON would probably work just fine.
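If you don't want to pull in a full web framework, even the JDK's built-in com.sun.net.httpserver package is enough for a small JSON-over-HTTP endpoint. A rough sketch, with the path and payload made up for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class ServiceEndpoint {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/api/status", exchange -> {
            // Hypothetical status resource returning a small JSON document.
            byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}
```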
All-in-all this would give you a solid, performant, reliable, cross-platform and simple solution.

Take a look at the Zyre project in the ZeroMQ Guide (Chapter 8). It's a fairly complete local network discovery and messaging framework, developed step by step. You can definitely reuse the UDP broadcast and discovery, maybe the rest as well. There's a full Java implementation too, https://github.com/zeromq/zyre.

I would use JMS, as it is cross-platform (for the client at least). You still have to decide how you want to encode data; unless you have specific needs, I would use XML or JSON, as these are easy to read and check.
You can use ZeroMQ for greater performance and lower level access. Unless you know you need this, I suspect you don't.
You may benefit from the higher level features of JMS.
BTW: these services do service discovery implicitly. There is no particular need (except for monitoring) to know about IP addresses or whether services are up or down. Their design assumes you want to be protected from having to know these details.
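For illustration, publishing a small status message over JMS might look roughly like this, assuming an ActiveMQ broker (the broker URL, topic name and payload are made up; any JMS provider's ConnectionFactory would work the same way):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class StatusPublisher {
    public static void main(String[] args) throws Exception {
        // Assumes an ActiveMQ broker; swap in your provider's ConnectionFactory as needed.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://broker-host:61616");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("device.status"); // hypothetical topic name
            MessageProducer producer = session.createProducer(topic);
            TextMessage message = session.createTextMessage("{\"device\":\"kitchen-pi\",\"state\":\"online\"}");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```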

Related

Implementing an IP Gateway (in Java?)

I'd like to write a program to simulate various network conditions (e.g. latency, packet loss). The most straightforward presentation of that program would be for it to be configured as an IP gateway - clients send traffic to it either as a default gateway, or downstream routers have it set up as a next hop for routing purposes.
How can I write a program to receive and process that traffic?
What tools and libraries are available to allow this? E.g. can this be done through iptables on Linux?
(I'd prefer to implement it in Java if possible).
One workaround could be to implement such a program as a proxy (e.g. HTTP + SOCKS) and configure a router to send all traffic to the proxy transparently. Another could be to open a raw socket and manually process all the traffic, but this might effectively mean re-implementing a network stack. Is there a better way?
Your question is very broad so the answer will be just as broad. I hope it helps you find some preliminary directions to research.
First, try to avoid writing anything at all. There are existing solutions for this task (hint: search for "WAN Emulator").
If what you find isn't good enough, my next step would be to script something using tc and netem. Linux has very good support for both routing and bridging, and using tc + netem you can add delay, loss, jitter and just about anything else on top. If needed, create a higher-level utility, possibly in Java, to configure tc and provide a nicer UI.
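As a rough illustration of the "thin Java wrapper around tc" idea, something like the following could shell out to tc/netem (the interface name, delay and loss values are placeholders; it needs root and assumes no existing root qdisc on the interface):

```java
import java.io.IOException;

public class NetemConfigurator {
    // Runs a command and fails loudly if it returns a non-zero exit code.
    private static void run(String... command) throws IOException, InterruptedException {
        Process process = new ProcessBuilder(command).inheritIO().start();
        if (process.waitFor() != 0) {
            throw new IOException("Command failed: " + String.join(" ", command));
        }
    }

    public static void main(String[] args) throws Exception {
        String iface = "eth0"; // placeholder interface name
        // Add 100ms delay with 10ms jitter and 1% packet loss on outgoing traffic.
        run("tc", "qdisc", "add", "dev", iface, "root", "netem",
            "delay", "100ms", "10ms", "loss", "1%");
        // ... later, remove the emulation again:
        // run("tc", "qdisc", "del", "dev", iface, "root", "netem");
    }
}
```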
A third option would be to actually write something. This is where things get tricky, especially if you want to do it in Java. To perform bridging or routing, you'd need to get the frames into user space (iptables + NFQUEUE could help here), then handle them according to your own logic, and finally write them out using a raw socket. That's quite a lot of work.
Implementing an HTTP proxy would be easier since you don't need to work at the level of individual packets. You can avoid the low level stuff (iptables/nfqueue/raw sockets) and just use plain and simple sockets, or even whole HTTP proxy implementations in Java like this one from Jetty.
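To give a feel for the plain-socket route, here is a bare-bones TCP relay that adds a fixed delay to every chunk it forwards. The ports, target host and delay are arbitrary, and a real proxy would also need to parse the HTTP request line, handle multiple clients cleanly, and so on:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class DelayingRelay {
    private static final int LISTEN_PORT = 9000;          // placeholder listen port
    private static final String TARGET_HOST = "example.com"; // placeholder upstream host
    private static final int TARGET_PORT = 80;
    private static final long DELAY_MS = 100;             // artificial latency per forwarded chunk

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(LISTEN_PORT)) {
            while (true) {
                Socket client = server.accept();
                Socket upstream = new Socket(TARGET_HOST, TARGET_PORT);
                pump(client, upstream); // client -> upstream
                pump(upstream, client); // upstream -> client
            }
        }
    }

    // Copies bytes from one socket to the other on a background thread, sleeping before each write.
    private static void pump(Socket from, Socket to) {
        Thread t = new Thread(() -> {
            try (InputStream in = from.getInputStream(); OutputStream out = to.getOutputStream()) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    Thread.sleep(DELAY_MS);
                    out.write(buffer, 0, read);
                    out.flush();
                }
            } catch (Exception ignored) {
                // connection closed; let the sockets go away
            }
        });
        t.setDaemon(true);
        t.start();
    }
}
```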
If you need further details about some of this stuff, you might want to open a second, more focused question after doing some reading.

Sockets or RMI - performance and scalability

I am currently deciding what kind of communication method/network protocol to use for a new project.
What I can tell you about this project is that:
- It is Android/Java based, using X Android devices
- These devices should be able to send strings to each other over a local network. We are talking about small strings here, as in less than 100 characters.
- The number of packets/transmissions being sent can vary a lot. I can't say by how much, unfortunately, but the network protocol needs to be as scalable as possible.
I have researched different possible solutions and am now deciding whether to use sockets or RMI.
From what I have understood about RMI:
It is easier than Java sockets to implement and maintain (less code)
It is "a bit slower" than sockets, as it is an extra layer built on top of sockets
There may be some scalability issues (if this is true, how serious is it?) as it creates a lot of new sockets, resulting in exceptions
Obviously the system needs to run as smoothly as possible, but the main objective is to make it scalable so it can handle more Android devices.
EDIT: The system is not "peer-to-peer". Any of the Android devices should be able to be configured as the server.
None of your concerns are the real issue, in my view.
RMI has a pre-defined protocol, raw sockets do not.
If you use raw sockets, you have to do all the work to define what messages and protocols are exchanged by client and server.
There are so many good existing protocols (RMI, HTTP, etc.) that I'd wonder why you feel the need to invent your own again.
Android devices communicating over HTTP - tell me why it won't be fast or scalable enough. HTTP is good enough for the Internet - why not you and your solution?
I would suggest exposing some kind of web service (SOAP or REST) from your application server. For example, people frequently expose their data to mobile devices as a REST web service API returning JSON, to make it easy to parse again on the client device.
This way you take advantage of the underlying HTTP implementation in every application server; otherwise, you would have to write your own worker thread pool using NIO primitives in order to achieve good performance... Not a thing to be done for a real production environment - maybe in an academic one?
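On the device side, posting a small string to such an endpoint is only a few lines with HttpURLConnection (the URL and JSON shape are hypothetical; on Android this must run off the main thread):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class MessageClient {
    public static int send(String message) throws Exception {
        URL url = new URL("http://192.168.0.10:8080/api/messages"); // hypothetical server endpoint
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        connection.setRequestProperty("Content-Type", "application/json");

        byte[] body = ("{\"text\":\"" + message + "\"}").getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = connection.getOutputStream()) {
            out.write(body);
        }
        int status = connection.getResponseCode(); // reading the code forces the request to complete
        connection.disconnect();
        return status;
    }
}
```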

Implementing push-like technology

I want to be able to exchange data between my app and the server where each side has to be able to initiate sending of data. I want it to happen quickly and polling from the client side for new messages is not fast enough in my case. How do push technologies work?
I was thinking to keep an opened socket connection from the device to the server and send receive raw bytes in some custom format.
Is it a good approach and what problems might I run into? What would you suggest as an alternative?
When it comes to message passing, the time needed to initialize a new connection between the server and the client usually far exceeds the time needed to send the data itself - at least for simple status-like messages. This adds significantly to the communication latency, which seems to be your main concern.
There are two main ways to deal with this issue:
Keep a connection open between both ends at all times: This is the standard way of dealing with this issue - it has the advantage of programming simplicity, but you may need to send keep-alive packets regularly to keep the connection open. This may reduce the battery life of a mobile device and increase the networking cost slightly. It may also interact unfavorably with the power-management features of a mobile device.
In addition, no matter what you do, you cannot completely eliminate the possibility of a new connection needing to be established at an inconvenient time - connections that are mostly idle do not fare very well in today's networking infrastructure, I'm afraid...
Use a connection-less protocol, such as UDP: This solution has the potential to minimize the communication and power cost, but it requires that your server and client are designed to handle the inherent unreliability of such protocols.
That said, I would not consider the actual format of the data a major concern, until some profiling demonstrates that a custom format will indeed result in significant savings. I would consider the ability to use off-the-shelf network monitoring and analysis software far more important during the development phase...
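A sketch of the first option in Java: the client holds one socket open, a background thread writes a tiny keep-alive line periodically, and the main loop blocks waiting for pushed data. The host, port, interval and line-based framing are all assumptions for the example:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class PersistentClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("server.example.com", 9100); // hypothetical host/port
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {

            // Keep-alive thread: a line every 30 seconds so NATs/firewalls keep the mapping alive.
            Thread keepAlive = new Thread(() -> {
                try {
                    while (true) {
                        Thread.sleep(30_000);
                        out.println("PING");
                    }
                } catch (InterruptedException e) { /* shutting down */ }
            });
            keepAlive.setDaemon(true);
            keepAlive.start();

            // Reader loop: the server can push a line at any time.
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println("Pushed from server: " + line);
            }
        }
    }
}
```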
Push technology is loosely called Comet. The basic idea is to open a persistent HTTP connection with the server (often called HTTP streaming). As this connection will not last forever (due to default limits on the server), you should be able to reopen it. I am not sure how to do this in Android specifically, but it should be possible.
The basic concept behind this is explained in this blogpost
As this is a concept, it can be implemented in any server-side programming language of your choice. This tutorial gives a fair introduction to implementing Comet in PHP. socket.io is another such library if you are comfortable with JavaScript. This SO thread provides some useful links.
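The client side of the long-polling flavor of Comet is essentially a loop that blocks on an HTTP request until the server has something to say and then immediately reconnects. A rough Java sketch (the URL and timeout are made up):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class LongPollClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://server.example.com/poll"); // hypothetical long-poll endpoint
        while (true) {
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setReadTimeout(60_000); // server holds the request open until it has data
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println("Server pushed: " + line);
                }
            } catch (java.net.SocketTimeoutException e) {
                // no data within the timeout; just reconnect
            } finally {
                connection.disconnect();
            }
            // Loop around and open the next request immediately.
        }
    }
}
```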
Coming to advantages and disadvantages,
If you want almost instant updates, Comet is the best.
If there is a limit on the number of concurrent connections to your server, then Comet has to be weighed against that tradeoff.

Multiplayer card game on server using RPC

Basically I want a Java, Python, or C++ script running on a server, listening for player instances to: join, call, bet, fold, draw cards, etc and also have a timeout for when players leave or get disconnected.
Basically I want each of these actions to be a small request, so that players could either be processes on same machine talking to a game server, or machines across network.
Security of messaging is not an issue, this is for learning/research/fun.
My priorities:
Have a good scheme for detecting when players disconnect, but also be able to account for network latencies, etc. before booting a player / causing them to lose the hand.
Speed. I'm going to be playing millions of these hands as fast as I can.
Run on a shared server instance (I may have limited access to ports or things that need root)
My questions:
Listen on ports or use sockets or HTTP port 80 apache listening script? (I'm a bit hazy on the differences between these).
Any good frameworks to work off of?
Message types? I'm thinking JSON or Protocol Buffers.
How to make it FAST?
Thanks guys - just looking for some pointers and suggestions. I think it is a cool problem with a lot of neat things to learn doing it.
As far as frameworks goes, Ginkgo looks promising for building a network service (which is what you're doing). The Python is very straightforward, and the asynchronicity enabled by gevent lets you do asynchronous things without generally having to worry about callbacks. The gevent core also gives you access to a lot of building blocks.
Rather than having lots of services communicating over ports, you might look into either 1) a good message queue, like RabbitMQ or 0mq, or 2) a distributed coordination server, like Zookeeper.
That being said, what you aim to do is difficult, especially if you're not familiar with the basics. It's a worthwhile endeavor to learn about those basics.
Don't worry about speed at first. Get it working, then make it scale. Of course, there are directions you can go that will make it easier to scale in the future. Zookeeper in particular gives you easy-to-implement primitives for scaling horizontally (i.e. multiple workers sharing the load). In particular, see the Zookeeper recipe book and the corresponding Python implementations (courtesy of kazoo, a gevent-based client library).
Don't forget that "fast" also means optimizing your own development time, for quicker iterations and less time cursing your development environment. So use Python, which will let you get up and running quickly now, and optimize later if you really truly start to bind on CPU time or memory use. (With this particular application, you're far more likely to bind on network IO.)
Anything else? Maybe a cup of coffee to go with your question :-)
Answering your question from the ground up would require several books worth of text with topics ranging from basic TCP/IP networking to scalable architectures, but I'll try to give you some direction nevertheless.
Questions:
Listen on ports or use sockets or HTTP port 80 apache listening script? (I'm a bit hazy on the differences between these).
I would venture that if you're not clear on the definition of each of these, maybe designing and implementing a service that will "be playing millions of these hands as fast as I can" is a bit, hmm, over-reaching? But don't let that stop you; as they say, "ignorance is bliss."
Any good frameworks to work off of?
I think your project is a good candidate for Node.js. The main reason is that Node.js is relatively scalable and it is good at hiding the complexity required for that scalability. There are downsides to Node.js; just search for 'Node.js scalability criticism'.
The main point against Node.js, as opposed to using a more general-purpose framework, is that scalability is difficult - there is no way around it - and Node.js, being so high-level and specific, provides fewer options for solving tough problems.
The other drawback is that Node.js is JavaScript, not Java or Python as you would prefer.
Message types? I'm thinking JSON or Protocol Buffers.
I don't think there's going to be a lot of traffic between client and server, so it doesn't really matter; I'd go with JSON just because it is more prevalent.
How to make it FAST?
The real question is how to make it scalable. Running human vs human card games is not computationally intensive, so you're probably going to run out of I/O capacity before you reach any computational limit.
Overcoming these limitations is done by spreading the load across machines. The common way to do this in multi-player games is to have a list server that provides links to identical game servers, with each server having a predefined number of slots available for players.
This is a variation of a broker-workers architecture, where the broker machine assigns a worker machine to clients based on how busy they are. In gaming, users want to be able to select their server so they can play with their friends.
Related:
Have a good scheme for detecting when players disconnect, but also be able to account for network latencies, etc before booting/causing to lose hand.
Since this is on human time scales (seconds as opposed to milliseconds), the client should send keepalives, say, every 10 seconds, with, say, a 30-second session timeout.
The keepalives would be JSON messages in your application protocol, not HTTP keepalives, which are lower level and handled by the framework.
The framework itself should provide you with HTTP 1.1 connection management/pooling which allows several http sessions (request/response) to go through the same connection, but do not require the client to be always connected. This is a good compromise between reliability and speed and should be good enough for turn based card games.
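A sketch of the timeout bookkeeping described above: the server records the last keepalive per player, and a sweeper marks anyone silent for 30 seconds as disconnected. The interval and timeout come from the paragraph above; the class and method names are invented for the example:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SessionTracker {
    private static final long TIMEOUT_MS = 30_000; // 30-second session timeout

    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor();

    public SessionTracker() {
        // Check once per second for players whose keepalives have stopped.
        sweeper.scheduleAtFixedRate(this::expireSilentPlayers, 1, 1, TimeUnit.SECONDS);
    }

    /** Called whenever a {"type":"keepalive","player":...} message arrives. */
    public void onKeepAlive(String playerId) {
        lastSeen.put(playerId, System.currentTimeMillis());
    }

    private void expireSilentPlayers() {
        long cutoff = System.currentTimeMillis() - TIMEOUT_MS;
        lastSeen.forEach((playerId, seen) -> {
            if (seen < cutoff) {
                lastSeen.remove(playerId);
                // Here the game logic would fold the player's hand / free their seat.
                System.out.println("Player timed out: " + playerId);
            }
        });
    }
}
```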
Honestly, I'd start with classic LAMP. Take a stock Apache server and a MySQL database, and put your Python scripts in the cgi-bin directory. The fact that they're sending and receiving JSON instead of HTML doesn't make much difference.
This is obviously not going to be the most flexible or scalable solution, of course, but it forces you to confront the actual problems as early as possible.
The first problem you're going to run into is game state. You claim there is no shared state, but that's not right—the cards in the deck, the bets on the table, whose turn it is—that's all state, shared between multiple players, managed on the server. How else could any of those commands work? So, you need some way to share state between separate instances of the CGI script. The classic solution is to store the state in the database.
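The answer talks in terms of Python CGI, but the "state lives in the database" idea looks the same in any language. As a hedged Java/JDBC illustration, each action becomes a short transaction against a shared games table (the schema, table and connection details are invented for the example):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class GameStateStore {
    // Adds a bet to the pot inside one transaction, so two servers can't clobber each other.
    public static void placeBet(long gameId, long amount) throws Exception {
        try (Connection db = DriverManager.getConnection(
                "jdbc:mysql://db-host/poker", "app", "secret")) { // invented connection details
            db.setAutoCommit(false);
            try (PreparedStatement lock = db.prepareStatement(
                     "SELECT pot FROM games WHERE id = ? FOR UPDATE");
                 PreparedStatement update = db.prepareStatement(
                     "UPDATE games SET pot = ? WHERE id = ?")) {
                lock.setLong(1, gameId);
                try (ResultSet rs = lock.executeQuery()) {
                    if (!rs.next()) throw new IllegalStateException("No such game: " + gameId);
                    update.setLong(1, rs.getLong("pot") + amount);
                    update.setLong(2, gameId);
                    update.executeUpdate();
                }
                db.commit();
            } catch (Exception e) {
                db.rollback();
                throw e;
            }
        }
    }
}
```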
Of course you also need to deal with user sessions in the first place. The details depend on which session-management scheme you pick, but the big problem is how to propagate a disconnect/timeout from the lower level up to the application level. What happens if someone puts $20 on the table and then disconnects? You have to think through all of the possible use cases.
Next, you need to think about scalability. You want millions of games? Well, if there's a single database with all the game state, you can have as many web servers in front of it as you want - John Doe may be on server1 while Joe Schmoe is on server2, but they can still be in the same game. Alternatively, you can have a separate database for each server, as long as you have some way to force people in the same game to meet on the same server. Which one makes more sense? Either way, how do you load-balance between the servers? (You not only want to keep them all busy, you want to avoid the situation where 4 players are all ready to go, but they're on 3 different servers, so they can't play each other...)
The end result of this process is going to be a huge mess of a server that runs at 1% of the capacity you hoped for, that you have no idea how to maintain. But you'll have thought through your problem space in more detail, and you'll also have learned the basics of server development, both of which are probably more important in the long run.
If you've got the time, I'd next throw the whole thing out and rewrite everything from scratch by designing a custom TCP protocol, implementing a server for it in something like Twisted, keeping game state in memory, and writing a simple custom broker instead of a standard load balancer.

Implementing custom protocol logic in Java?

When implementing a client/server solutions, one of the questions you always need to answer is about protocol.
In simple cases, it's possible that packets are always of the same type, so the protocol could have no logic at all: the client connects to the server, the server just sends some fact, the client disconnects, and that's it.
In more complex cases, some packets can only be sent in specific situations. For instance, imagine an abstract server that requires authorization: clients have to be authorized before sending or receiving any useful data. In this case, the concept of a session appears.
A session is a concept that describes the state of the client/server dialog: both client and server expect certain things from each other, while there are also things that neither of them expects.
Then, going even deeper, suppose that the protocol is quite complicated and its implementation should be easily extendable. I believe that the theoretically right solution here is a finite state machine. Are there any Java frameworks/libraries that allow such a state machine to be easily implemented? Or perhaps any more protocol-specific solutions?
What I'm expecting is a framework that allows me to define states and transitions between them.
Update: the question is not about easiest client/server solution implementation, the question is about implementing custom protocol. So, please, don't recommend using web services.
I remember using Unimod FSM for finite state machines a few years ago, although for serious work I always preferred to implement the finite state machines directly.
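For reference, implementing the state machine directly often comes down to an enum of states plus a transition method. A minimal sketch for the authorize-then-talk protocol described in the question (the commands and replies are invented for illustration):

```java
import java.util.Locale;

public class ProtocolSession {
    // States of the hypothetical authorize-then-talk protocol from the question.
    enum State { CONNECTED, AUTHORIZED, CLOSED }

    private State state = State.CONNECTED;

    /** Feeds one incoming command into the machine and returns the reply to send back. */
    public synchronized String handle(String command) {
        String verb = command.trim().split("\\s+")[0].toUpperCase(Locale.ROOT);
        switch (state) {
            case CONNECTED:
                if (verb.equals("AUTH")) { state = State.AUTHORIZED; return "OK"; }
                if (verb.equals("QUIT")) { state = State.CLOSED; return "BYE"; }
                return "ERR must authorize first";
            case AUTHORIZED:
                if (verb.equals("DATA")) { return "OK data accepted"; }
                if (verb.equals("QUIT")) { state = State.CLOSED; return "BYE"; }
                return "ERR unknown command";
            case CLOSED:
            default:
                return "ERR connection closed";
        }
    }
}
```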
