When implementing a client/server solution, one of the questions you always need to answer is which protocol to use.
In simple cases, it's possible that packets are always of the same type, so the protocol may have no logic at all: the client connects to the server, the server just states some fact, the client disconnects and that's it.
In more complex cases, some packets can only be sent in specific situations. For instance, imagine an abstract server that requires authorization: clients have to be authorized before sending or receiving any useful data. In this case, the concept of a session appears.
A session is a concept that describes the state of the client/server dialog: both client and server expect certain things from each other, while there are also things that neither of them expects.
Then, going even deeper, suppose the protocol is quite complicated and its implementation should be easily extendable. I believe the theoretically right solution here is a finite state machine. Are there any Java frameworks/libraries that allow such a state machine to be implemented easily? Or perhaps any more protocol-specific solutions?
What I'm expecting is a framework that allows me to define states and transitions between them.
Update: the question is not about the easiest way to implement a client/server solution; it is about implementing a custom protocol. So, please, don't recommend using web services.
I remember using Unimod FSM for finite state machines a few years ago, although for serious work I always preferred to implement the finite state machines directly.
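If you end up implementing it directly, here is a minimal sketch of what I mean (all names here are made up, not taken from any framework): keep the states in an enum and reject any event that is not legal in the current state.

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;

public class ProtocolSession {

    enum State { CONNECTED, AUTHENTICATED, CLOSED }
    enum Event { AUTH_REQUEST, DATA, DISCONNECT }

    // Which events are legal in which state.
    private static final Map<State, EnumSet<Event>> ALLOWED = new EnumMap<>(State.class);
    static {
        ALLOWED.put(State.CONNECTED,     EnumSet.of(Event.AUTH_REQUEST, Event.DISCONNECT));
        ALLOWED.put(State.AUTHENTICATED, EnumSet.of(Event.DATA, Event.DISCONNECT));
        ALLOWED.put(State.CLOSED,        EnumSet.noneOf(Event.class));
    }

    private State state = State.CONNECTED;

    public void handle(Event event) {
        if (!ALLOWED.get(state).contains(event)) {
            throw new IllegalStateException(event + " is not allowed in state " + state);
        }
        switch (event) {
            case AUTH_REQUEST:
                state = State.AUTHENTICATED;   // assume credentials are checked elsewhere
                break;
            case DATA:
                // process the payload here
                break;
            case DISCONNECT:
                state = State.CLOSED;
                break;
        }
    }
}
```

Transitions live in one place, so extending the protocol usually means adding an enum constant and a row to the table.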
I'd like to write a program to simulate various network conditions (e.g. latency, packet loss). The most straightforward presentation of that program would be for it to be configured as an IP gateway - clients send traffic to it either as a default gateway, or downstream routers have it set up as a next-hop for routing purposes.
How can I write a program to receive and process that traffic?
What tools and libraries are available to allow this? e.g. Can this be done through iptables on linux?
(I'd prefer to implement it in Java if possible).
One workaround could be to implement such a program as a proxy (e.g. HTTP + SOCKS) and configure a router to send all traffic to the proxy transparently. Another could be to open a raw socket and manually process all the traffic, but this might effectively mean re-implementing a network stack. Is there a better way?
Your question is very broad so the answer will be just as broad. I hope it helps you find some preliminary directions to research.
First, try to avoid writing anything at all. There are existing solutions for this task (hint: search for "WAN Emulator").
If what you find isn't good enough, my next step would be to script something using tc and netem. Linux has very good support for both routing and bridging. Using tc+netem you could add delay, loss, jitter and just about anything else on top. If needed, create a higher-level utility, possibly in Java, to configure tc and provide a nicer UI.
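For illustration, a rough sketch of such a Java wrapper around tc (the interface name eth0, the delay/loss values and the exact tc arguments are assumptions - adjust to your setup; tc needs root):

```java
import java.io.IOException;

public class NetemControl {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Add 100 ms delay and 1% packet loss to the egress of eth0.
        run("tc", "qdisc", "add", "dev", "eth0", "root", "netem", "delay", "100ms", "loss", "1%");
        // Later, to remove the impairment again:
        // run("tc", "qdisc", "del", "dev", "eth0", "root");
    }

    private static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("command failed: " + String.join(" ", cmd));
        }
    }
}
```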
A third option would be to actually write something. This is where things get tricky, especially if you want to do it in java. To perform bridging or routing, you'd need to get the frames to user space (iptables+nfqueue could help here), then handle them according to your own logic, and finally write them out using a raw socket. That's quite a lot of work.
Implementing an HTTP proxy would be easier since you don't need to work at the level of individual packets. You can avoid the low level stuff (iptables/nfqueue/raw sockets) and just use plain and simple sockets, or even whole HTTP proxy implementations in Java like this one from Jetty.
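To give an idea of what the plain-socket route involves, here is a very rough sketch of a forward proxy that handles simple GET requests only (no CONNECT/TLS, no keep-alive; the port and class name are made up, and it needs Java 9+ for transferTo):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TinyHttpProxy {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(8888)) {       // assumed local port
            while (true) {
                Socket client = listener.accept();
                new Thread(() -> handle(client)).start();
            }
        }
    }

    private static void handle(Socket client) {
        try (client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream(), StandardCharsets.ISO_8859_1))) {
            String requestLine = in.readLine();   // e.g. "GET http://example.com/ HTTP/1.1"
            if (requestLine == null) return;

            String host = null;
            StringBuilder headers = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null && !line.isEmpty()) {
                String lower = line.toLowerCase();
                if (lower.startsWith("host:")) host = line.substring(5).trim();
                if (lower.startsWith("connection:") || lower.startsWith("proxy-connection:")) continue;
                headers.append(line).append("\r\n");
            }
            if (host == null) return;

            int port = 80;
            int colon = host.indexOf(':');
            if (colon >= 0) {                     // Host header may carry an explicit port
                port = Integer.parseInt(host.substring(colon + 1));
                host = host.substring(0, colon);
            }

            // Forward the request to the origin server and stream the response back.
            try (Socket upstream = new Socket(host, port)) {
                OutputStream up = upstream.getOutputStream();
                up.write((requestLine + "\r\n" + headers + "Connection: close\r\n\r\n")
                        .getBytes(StandardCharsets.ISO_8859_1));
                up.flush();
                upstream.getInputStream().transferTo(client.getOutputStream());
            }
        } catch (IOException ignored) {
            // a real tool would log this; it is also where delay/loss hooks could go
        }
    }
}
```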
If you need further details about some of this stuff, you might want to open a second, more focused question after doing some reading.
Why does Java RMI exist? Who uses it and for what?
My most pressing questions;
Why would you want to make calls to methods that aren't defined on your machine? Wouldn't it take much longer to execute? I don't see how this makes the world a better place. Wouldn't it just be smarter to have many machines running the complete program rather than many machines each running parts?
Doesn't the fact that you have to manually provide interfaces to all the machines (clients and servers) kill whatever benefits having remote objects provides? In other words, if a benefit of having a remote object is that the client programmer doesn't have to interact with the server programmer, then doesn't it get annoying to have to manually contact each other to update the interfaces on both sides for each little change?
How is this similar or different to a typical web app set up where a client communicates with a server? In my mind, HTTP calls are much easier to understand. Can an RMI Server require some sort of password from RMI clients?
What kind of applications are typically made using Java RMI? Any hard examples?
Why does Java RMI exist?
Err, because Sun built it? The same Sun that provided Sun RPC.
Who uses it and for what?
RMI is the basis of Jakarta EE (formerly J2EE) just to name one small example. However the concept of remote method calls dates further back to at least CORBA, and the concept of remote procedure calls to at least the 1970s. Sun provided their implementation of RPC in about 1982 and it is the basis of NFS among other things.
Why would you want to make calls to methods that aren't defined on your machine?
Err, if you wanted them to run on another machine?
Wouldn't it take much longer to execute?
Of course.
I don't see how this makes the world a better place. Wouldn't it just be smarter to have many machines running the complete program rather than many machines each running parts?
So you've never heard of distributed computing, then?
Doesn't the fact that you have to manually provide interfaces to all the machines (clients and servers) kill whatever benefits having remote objects provides?
No.
In other words, if a benefit of having a remote object is that the client programmer doesn't have to interact with the server programmer
Did somebody say that was a benefit?
then doesn't it get annoying to have to manually contact each other to update the interfaces on both sides for each little change?
There don't tend to be many 'little changes', if you actually design your system before implementing it. But that isn't the only development model anyway. You could have a third person developing the interface. Or the same person developing both sides. Or have the remote interface defined by a specification. Or ...
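For context, the 'interface' being discussed is just an ordinary Java interface that both sides compile against. A minimal, purely hypothetical example (Greeter and GreeterServer are made-up names):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

interface Greeter extends Remote {
    String greet(String name) throws RemoteException;   // every remote method can fail remotely
}

public class GreeterServer implements Greeter {
    @Override
    public String greet(String name) { return "Hello, " + name; }

    public static void main(String[] args) throws Exception {
        Greeter stub = (Greeter) UnicastRemoteObject.exportObject(new GreeterServer(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("greeter", stub);
        // A client on another machine looks up "greeter" and calls greet() as if it were local.
    }
}
```

The client only needs the Greeter interface on its classpath; since Java 5 the stub is generated dynamically at export time.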
How is this similar or different to a typical web app set up where a client communicates with a server?
It uses RMI instead of HTTP.
In my mind, HTTP calls are much easier to understand.
You can't get much easier to understand than a remote interface, but obviously your mileage varies.
Can an RMI Server require some sort of password from RMI clients?
Yes, it can use mutually-authenticated TLS for example, or arbitrary authentication protocols implemented via custom socket factories.
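A minimal sketch of the TLS variant, assuming keystores and truststores are already configured through the usual javax.net.ssl system properties:

```java
import java.rmi.Remote;
import java.rmi.server.UnicastRemoteObject;
import javax.rmi.ssl.SslRMIClientSocketFactory;
import javax.rmi.ssl.SslRMIServerSocketFactory;

public class SecureExport {
    public static Remote exportSecurely(Remote impl) throws Exception {
        return UnicastRemoteObject.exportObject(
                impl,
                0,                                               // any free port
                new SslRMIClientSocketFactory(),                 // client connections use TLS
                new SslRMIServerSocketFactory(null, null, true)  // needClientAuth = true
        );
    }
}
```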
I am currently deciding what kind of communication method/network protocol to use for a new project.
What I can tell you about this project is that:
- It is Android/java based, using X amount of Android devices
- These devices should be able to send strings to each other over a local network. We are talking about small strings here. Small as in less than 100 characters.
- The number of packets/transmissions being sent can vary "A LOT". I can't say how much, unfortunately, but the network protocol needs to be as scalable as possible.
I have researched different kinds of possible solutions and am now deciding whether to use "Sockets" or "RMI".
What I have understood about RMI:
- It is easier than Java sockets to implement and maintain (a smaller amount of code)
- It is "a bit slower" than sockets, as it is a new "layer" built on top of sockets
- There may be some scalability issues (if this is true, how "serious" is it?) as it creates a lot of new sockets, resulting in exceptions
Obviously the system needs to run as smooth as possible, but the main objective is to make it scalable so it can handle more Android devices.
EDIT: The system is not "peer-to-peer". All of the Android devices should be able to be configured as the server.
None of your concerns are the real issue, in my view.
RMI has a pre-defined protocol, raw sockets do not.
If you use raw sockets, you have to do all the work to define what messages and protocols are exchanged by client and server.
There are so many good existing protocols (RMI, HTTP, etc.) that I'd wonder why you feel the need to invent your own again.
Android devices communicating over HTTP - tell me why it won't be fast or scalable enough. HTTP is good enough for the Internet - why not you and your solution?
I would suggest exposing some kind of web service (SOAP or REST) on your application server. For example, people frequently expose their data to mobile devices as a REST web service API returning some kind of JSON format in order to make it easier to parse on the client device.
This way you take advantage of the underlying HTTP implementation in every application server; otherwise, you would have to write your own worker thread pool using NIO primitives in order to achieve performance... Not something to be done in a real production environment - maybe in an academic one?
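As a rough sketch of the client side (the URL, port and JSON shape are placeholders, not a real API), plain HttpURLConnection is enough for strings this small:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class MessageClient {
    public static int send(String message) throws Exception {
        URL url = new URL("http://192.168.0.10:8080/api/messages");  // placeholder server address
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");

        // Naive JSON encoding; a real client would use a JSON library and escape the string.
        byte[] body = ("{\"text\":\"" + message + "\"}").getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }

        int status = conn.getResponseCode();                          // server's HTTP status code
        conn.disconnect();
        return status;
    }
}
```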
I am working on an application which receives different types of messages (about 4 types of messages). I wanted to know what would be better:
- Have different ports for different message types, with the sending application sending the message on the appropriate port
- Send the messages on one port, distinguished by an id field or something, and parse them
Could someone please tell me which method would be more advantageous in terms of performance? I personally think having different ports would be better. Could someone please tell me if this is the right approach to do this?
Start with one socket, because that will be way easier to maintain (sorting out multiple network ports for applications can be a pain, especially if there are firewalls involved).
Assuming you write your code with reasonable encapsulation around the socket handling, if you get to a point where you truly need multiple sockets for performance (and you have proved this with actual testing), then it should be fairly easy to make the change later.
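To illustrate the single-socket approach, one possible (made-up) framing is a type id plus a length prefix on every message, so all four message types share one connection:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class Framing {
    public static void write(DataOutputStream out, byte type, byte[] payload) throws IOException {
        out.writeByte(type);          // 1 byte: which of the ~4 message types this is
        out.writeInt(payload.length); // 4 bytes: payload length
        out.write(payload);
        out.flush();
    }

    public static void read(DataInputStream in) throws IOException {
        byte type = in.readByte();
        byte[] payload = new byte[in.readInt()];
        in.readFully(payload);
        switch (type) {               // dispatch on the id field instead of on the port
            case 1: /* handle type 1 */ break;
            case 2: /* handle type 2 */ break;
            default: /* unknown type */ break;
        }
    }
}
```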
I have done some searching but haven't come up with anything on this topic. I was wondering if anyone has ever compared (to some degree) the performance difference between an RPC over a socket and a REST web service. If both do the same thing, which would have a tendency to be the better performer? I've already started building some socket code and would like to know if REST would give better performance before I progress much further. Any input would be really appreciated. Thanks indeed
RMI:
- Feels like a local API, much like XMLRPC
- Can provide some fairly nice remote exception data
- Java specific, which causes lock-in and limits your options
- Has horrible versioning problems between different versions of clients
- Skeleton files must be compiled in, like CORBA, which is not very flexible

REST:
- Easy to route around firewalls
- Useful for uploading files as it can be rather lightweight
- Very simple if you just want to shove simple things at something and get back an integer (like for uploaders)
- Easy to proxy security behind Apache and let it take the heat
- Does not define any standard format for the way the data is being exchanged (could be JSON, YAML 1.0, YAML 2.0, an arbitrary XML format, etc.)
- Does not define any convention about having remote faults sent back to the caller; integer codes are frequently used, but the method of sending back data is not defined. Ideally this would be standardized.
- May require a lot of work on the client-side caller of the library to make use of the data (custom serialization and so forth)

In short, from here:

web services do allow a loosely coupled architecture. With RMI, you have to make sure that the objects stay in sync in all applications

RMI works best for smaller applications that are not internet-related and thus not scalable
It's hard to imagine that REST could be faster than a simple socket connection, given that it also goes over a socket.
However, REST may be performant enough, more standard, and easier to use. I would test whether REST is fast enough and meets your requirements (or whether one of the many other existing solutions does) before attempting your own socket solution.