I have done some searching but haven't come up with anything on this topic. I was wondering if anyone has ever compared (to some degree) the performance of an RPC over a socket versus a REST web service. If both do the same thing, which would tend to be the better performer? I've already started building some socket code and would like to know whether REST would give better performance before I progress much further. Any input would be really appreciated. Thanks!
RMI:

- Feels like a local API, much like XML-RPC
- Can provide some fairly nice remote exception data
- Java-specific, which causes lock-in and limits your options
- Has horrible versioning problems between different versions of clients
- Stubs and skeletons historically had to be compiled in (via rmic), much like CORBA, which is not very flexible; modern Java generates stubs dynamically, but client and server must still share the same interface classes
REST:

- easy to route around firewalls
- useful for uploading files, as it can be rather lightweight
- very simple if you just want to shove simple things at something and get back an integer (like for uploaders)
- easy to proxy security behind Apache and let it take the heat
- does not define any standard format for the way the data is exchanged (could be JSON, YAML, an arbitrary XML format, etc.)
- does not define any convention for sending remote faults back to the caller: integer codes are frequently used, but the method of sending back error data is not defined, and ideally this would be standardized
- may require a lot of work on the client side to make use of the data (custom serialization and so forth)
In short, from here:

- Web services allow a loosely coupled architecture. With RMI, you have to make sure that the objects stay in sync in all applications.
- RMI works best for smaller applications that are not internet-related and thus not scalable.
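To make that coupling concrete, here is a minimal RMI sketch (names and port are invented for the example). The shared interface must be present, in the same version, on both classpaths, which is exactly the "stay in sync" constraint above:

    // Greeter.java -- the shared contract: client and server must both
    // compile against this exact interface.
    import java.rmi.Remote;
    import java.rmi.RemoteException;

    public interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    // GreeterServer.java -- exports the implementation and binds it in the registry.
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    public class GreeterServer implements Greeter {
        @Override
        public String greet(String name) {
            return "Hello, " + name;
        }

        public static void main(String[] args) throws Exception {
            Greeter stub = (Greeter) UnicastRemoteObject.exportObject(new GreeterServer(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("greeter", stub);
            // A client then calls LocateRegistry.getRegistry(host, 1099).lookup("greeter")
            // and invokes greet() as if it were a local method.
        }
    }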
It's hard to imagine that REST would be faster than a simple socket connection, given that REST itself runs over a socket.
However, REST may be fast enough, and it is standardized and easier to use. I would first test whether REST (or one of the many other existing solutions) is fast enough and meets your requirements before attempting your own socket solution.
Related
I have the following situation:
I have 2 JVM processes (really 2 Java processes running separately, not 2 threads) running on a local machine. Let's call them ProcessA and ProcessB.
I want them to communicate (exchange data) with one another (e.g. ProcessA sends a message to ProcessB to do something).
Currently I work around this by writing to a temporary file, which each process periodically scans for messages. I don't think this solution is very good.
What would be a better alternative to achieve what I want?
Multiple options for IPC:

Socket-Based (Bare-Bones) Networking

- not necessarily hard, but:
  - might be verbose for not much,
  - might offer more surface for bugs, as you write more code.
- you could rely on existing frameworks, like Netty

RMI

- Technically, that's also network communication, but that's transparent for you.

Fully-Fledged Message Passing Architectures

- usually built on either RMI or network communications as well, but with support for complicated conversations and workflows
- might be too heavy-weight for something simple
- frameworks like ActiveMQ or JBoss Messaging

Java Management Extensions (JMX)

- more meant for JVM management and monitoring, but could help to implement what you want if you mostly want to have one process query another for data, or send it a request for an action, as long as those aren't too complex
- also works over RMI (amongst other possible protocols)
- not so simple to wrap your head around at first, but actually rather simple to use

File-Sharing / File-Locking

- that's what you're doing right now
- it's doable, but comes with a lot of problems to handle

Signals

- you can simply send signals to your other process
- however, it's fairly limited and requires you to implement a translation layer (it's doable, though more of a crazy idea to toy with than anything serious)
Without more details, a bare-bones network-based IPC approach seems the best, as it's the:

- most extensible (in terms of adding new features and workflows to your program)
- most lightweight (in terms of memory footprint for your app)
- most simple (in terms of design)
- most educative (in terms of learning how to implement IPC); you mentioned "socket is hard" in a comment, but it really isn't, and it's something worth working on
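A bare-bones version of that approach can be as small as this sketch (class name, port, and message format are all arbitrary choices for illustration); run one JVM with the "server" argument and the other without:

    import java.io.*;
    import java.net.*;

    public class IpcDemo {
        public static void main(String[] args) throws IOException {
            if (args.length > 0 && args[0].equals("server")) {
                // ProcessB: listen for requests on a local port.
                try (ServerSocket server = new ServerSocket(9999)) {
                    while (true) {
                        try (Socket client = server.accept();
                             BufferedReader in = new BufferedReader(
                                     new InputStreamReader(client.getInputStream()));
                             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                            String request = in.readLine();
                            out.println("done: " + request);  // act on the message, then reply
                        }
                    }
                }
            } else {
                // ProcessA: connect, send a message, print the reply.
                try (Socket socket = new Socket("localhost", 9999);
                     PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(socket.getInputStream()))) {
                    out.println("doSomething");
                    System.out.println(in.readLine());
                }
            }
        }
    }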
That being said, based on your example (simply requesting the other process to do an action), JMX could also be good enough for you.
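If JMX appeals, the server side of a minimal sketch looks like this (the interface, class, and ObjectName are invented for the example):

    // CommandMBean.java -- the management interface; the *MBean suffix is a JMX
    // naming requirement for standard MBeans.
    public interface CommandMBean {
        String run(String action);
    }

    // Command.java -- the implementation that will be exposed.
    public class Command implements CommandMBean {
        @Override public String run(String action) { return "executed: " + action; }
    }

    // JmxServerDemo.java -- registers the MBean in the platform MBean server.
    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class JmxServerDemo {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            server.registerMBean(new Command(), new ObjectName("demo:type=Command"));
            Thread.currentThread().join(); // keep the JVM alive for clients
        }
    }

The other process connects with JMXConnectorFactory (or you can poke at it interactively with jconsole) and invokes run through an MBean proxy; for remote access the server JVM must be started with the com.sun.management.jmxremote.* system properties.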
I've added a library on GitHub called Mappedbus (http://github.com/caplogic/mappedbus) which enables two (or more) Java processes/JVMs to communicate by exchanging messages. The library uses a memory-mapped file and makes use of fetch-and-add and volatile reads/writes to synchronize the different readers and writers. I've measured the throughput between two processes using this library at 40 million messages/s, with an average latency of 25 ns for reading/writing a single message.
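For a feel of the underlying technique, here is a stripped-down illustration using plain java.nio (this is not the Mappedbus API, and it deliberately omits the fences/atomics a real implementation needs, since plain MappedByteBuffer writes carry no volatile semantics across processes):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    // Toy single-writer/single-reader exchange over a memory-mapped file.
    public class MappedFileDemo {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile file = new RandomAccessFile("/tmp/ipc-demo", "rw");
                 FileChannel channel = file.getChannel()) {
                MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                if (args.length > 0 && args[0].equals("write")) {
                    byte[] msg = "hello".getBytes();
                    buf.position(4);
                    buf.put(msg);               // write the payload first...
                    buf.putInt(0, msg.length);  // ...then publish its length at offset 0
                } else {
                    int len;
                    while ((len = buf.getInt(0)) == 0) Thread.yield(); // poll for a message
                    byte[] msg = new byte[len];
                    buf.position(4);
                    buf.get(msg);
                    System.out.println(new String(msg));
                }
            }
        }
    }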
What you are looking for is inter-process communication. Java provides a simple IPC framework in the form of the Java RMI API. There are several other mechanisms for inter-process communication, such as pipes, sockets, and message queues (these are all concepts, obviously; there are frameworks that implement them).
I think in your case Java RMI or a simple custom socket implementation should suffice.
Sockets with DataInputStream/DataOutputStream (or ObjectInputStream/ObjectOutputStream if you want to send whole Java objects) let you send data back and forth. This is easier than using a disk file, and much easier than Netty.
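A minimal sketch of the client side of that approach (port and message layout are arbitrary; the server performs the matching reads in the same order):

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.Socket;

    // Writes a typed message with DataOutputStream; the server reads it back
    // with the corresponding DataInputStream calls in the same order.
    public class DataStreamClient {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("localhost", 9999);
                 DataOutputStream out = new DataOutputStream(socket.getOutputStream());
                 DataInputStream in = new DataInputStream(socket.getInputStream())) {
                out.writeInt(42);            // a command code
                out.writeUTF("doSomething"); // its argument
                out.flush();
                System.out.println("server replied: " + in.readUTF());
            }
        }
    }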
I tend to use JGroups to form local clusters between processes. It works for nodes (a.k.a. processes) on the same machine, within the same JVM, or even across different servers.
Once you understand the basics, it is easy to work with, and the option to run two or more processes in the same JVM makes it easy to test those processes.
The overhead and latency are minimal if both are on the same machine (usually just a local TCP round trip, upwards of 100 ns per action).
A socket may be the better choice, I think.
Back in 2004 I implemented code which did the job with sockets. Since then, I have searched many times for a better solution, because the socket approach triggers firewalls and my clients worry about that. There is no better solution so far. The client must serialize your data and send it; the server must receive it and deserialize it.
It is easy.
I'm a networking newbie and have explored a few frameworks such as Jersey and CXF for building RESTful web services in Java.
I have a new application that requires sending large amounts of binary data over the internet (between client and servers), and it needs to be extremely fast. I'm wondering if there is such a thing as a pure "TCP/IP (non-)web service", and if there are any open source Java libraries for building such things.
If all network services have to sit on top of TCP/IP, then I guess I'm looking for something that still uses binary data, but that introduces extremely little overhead for speedy service.
I always associated REST with XML or JSON; if it can be configured to be super-fast and work with binary data, I'd even be into that since I'm already somewhat familiar with Jersey.
I thought RMI might be a good choice, but I'm not sure whether it's appropriate for this use case.
I need speed and I need a binary protocol, and not sure where to start. Any ideas? Thanks in advance!
Can't you go with Java sockets?
That is about as low-level as you can get.
http://docs.oracle.com/javase/tutorial/networking/sockets/
Yes, RMI is way faster than a high-level transport abstraction like CXF. You have two options, raw sockets or RMI; RMI is easier to work with.
REST is an architectural pattern whose distinctive trait is using the HTTP verbs for what they were originally intended for. Even though the majority of deployed REST services use JSON to serialize objects, the entire HTTP transaction looks like this:
PUT /user/foo HTTP/1.1
Host: example.com
X-Auth-Token: foobarbazanythingyouwanthere

{"first": "Jeff", "last": "Atwood"}
And the response is:
HTTP/1.1 200 OK
So, there is no real "overhead". Instead, you exploit the capabilities of existing libraries to deal with routing, authentication, serialization, and so on. Since HTTP uses MIME-like envelopes, it has no problem with binary content, and even if this adds some overhead, it's very difficult to come up with a protocol that is dramatically more efficient yet still as effective; not to mention that you would need to design it, implement all the libraries, and, well, nobody else will ever be interested in your work.
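As an illustration, a JAX-RS resource of the kind Jersey runs can return raw bytes directly; the class, path, and payload below are invented for the example:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // Serves raw binary content over plain HTTP; no base64, no XML/JSON wrapper.
    @Path("/blob")
    public class BlobResource {

        @GET
        @Produces(MediaType.APPLICATION_OCTET_STREAM)
        public byte[] fetch() {
            byte[] payload = new byte[1024];  // stand-in for the real binary data
            return payload;
        }
    }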
Since you state you are a "network newbie", and since you seem to use terms like REST and RMI somewhat interchangeably, my strong suggestion is to not think about performance yet and just go with standard technologies. You can use HTTP; there are lots of pre-built frameworks, servlet containers, server stubs, you name it.
Come back when you measure real performance bottlenecks, so that we'll have the figures to better assist you.
I am doing a software engineering course in which different teams are building different prototype subsystems of a big system (different subsystems of the F-35 Lightning aircraft!).
The problem is that teams can use different programming languages (like C++ and Java), depending on what they are most comfortable with. However, these subsystems need to communicate with each other (e.g. the radar needs to provide object coordinates to navigation and control). Hence we need to come up with a solution in which different modules can interact in real time.
Someone suggested XML-RPC, so I have been reading about it. From what I've read, it is used in client-server architectures. Is this a good way of doing this kind of inter-process communication? What are my options?
Any help would be appreciated.
regards,
Newbie
There are a couple of options beside XML-RPC. For a short bullet-point comparison, take a look at:
http://michaeldehaan.net/2008/07/17/xmlrpc-vs-rest-vs-soap-vs-all-your-rpc-options/
If your exchange is more data-oriented, Protocol Buffers might be an alternative.
Protocol Buffers are a way of encoding structured data in an efficient yet extensible format.
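For a feel of the Java side, here is a rough sketch assuming a hypothetical Coordinates message has been generated by protoc (the message and its fields are invented for this example; the builder pattern is standard protobuf-java):

    // Assumed .proto definition:  message Coordinates { double lat = 1; double lon = 2; }
    // protoc generates the Coordinates class used below.
    byte[] wire = Coordinates.newBuilder()
            .setLat(48.8584)
            .setLon(2.2945)
            .build()
            .toByteArray();               // compact binary, readable from C++, Python, etc.

    Coordinates parsed = Coordinates.parseFrom(wire);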
Personally, I would go for a lightweight exchange format or method first, since the components are considered prototypes. Something like REST or some custom message passing might be simple enough, yet sufficient.
If you are already familiar with XML, it can be a reasonable answer. An advantage of XML is that you don't have to worry about how different machines represent numbers. A disadvantage is the time it takes to keep converting numbers to text and back to numbers.
I'm planning on building a Java server that will handle real-time game communications between clients. What is the best type of Java implementation out there that could efficiently and, hopefully, accurately communicate between a client and server at high speeds (say 5-15 packets per second)? I know there are many types of Java networking APIs (i.e. ObjectInputStream and ObjectOutputStream, DatagramPacket, KryoNet, etc.), but I'm not sure which is the most effective and/or commonly used implementation for such a scenario. I would assume that most real-time games use UDP communication methods, but I understand the reliability issues that come with it. Are there UDP implementations that have some form of flow control? Anyway, thanks in advance!
A few things to consider:

- Java NIO is really good, and can handle the kind of throughput/latency you are looking for. Don't use any of the older networking/serialization frameworks and APIs.
- Latency is really important. You basically want a minimal layer over NIO that allows you to send very fast, small, individual messages with minimal overhead.
- Depending on the game, you may want TCP or UDP or both. Use TCP for important messages, UDP for messages that aren't strictly necessary for the game to proceed or will be subsumed by a future update (e.g. position updates in an FPS).
- Some people implement their own TCP-like messaging protocol over UDP for real-time games. This is probably more hassle than it's worth, but be aware of it as an option if you really need to optimise for a specific type of communication.
- For real-time games, you are nearly always doing custom serialisation (e.g. only sending deltas rather than full updates of object positions), so make sure your framework allows this; see the sketch after the recommendations below.
Given this, I'd recommend one of the following:

- KryoNet: lightweight, customisable, designed for this kind of purpose
- Netty: slightly more enterprise-oriented, but very capable, robust and scalable
- Roll your own based on NIO: tricky, but possible if you want really fine-grained control. I've done this before, but in retrospect I probably should have picked KryoNet or Netty.
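And as a sketch of the custom-serialisation point from the list above (the field layout is invented for the example):

    import java.nio.ByteBuffer;

    // Hand-rolled delta encoding for a position update: instead of a full
    // serialized object, send only the entity id and the change in x/y
    // since the last tick.
    public class PositionDelta {
        public static ByteBuffer encode(int entityId, short dx, short dy) {
            ByteBuffer buf = ByteBuffer.allocate(8);
            buf.putInt(entityId);
            buf.putShort(dx);   // deltas fit in 2 bytes instead of 8-byte doubles
            buf.putShort(dy);
            buf.flip();
            return buf;         // 8 bytes on the wire
        }
    }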
Good luck!
Immediately forget ObjectOutputStream and ObjectInputStream. These are the standard input/output mechanisms of old-style Java serialization, which is slow and produces bloated output. Some resources to start with:
http://code.google.com/p/kryonet/
http://code.google.com/p/pyronet/
At what point is it better to switch from java.net to java.nio? .net (not the Microsoft entity) is easier to understand and more familiar, while nio is scalable, and comes with some extra nifty features.
Specifically, I need to make a choice for this situation: we have one control center managing hardware at several remote sites (each site with one computer managing multiple hardware units: a transceiver, TNC, and rotator). My idea was to write a server app on each machine that acts as a gateway from the control center to the radio hardware, with one socket for each unit. From my understanding, NIO is meant for one server, many clients, but what I'm thinking of is one client, many servers.
I suppose a third option is to use MINA, but I'm not sure if that's throwing too much at a simple problem.
Each remote server will have up to 8 connections, all from the same client (to control all the hardware, with separate TX/RX sockets). The single client will want to connect to several servers at once, though. Instead of putting each server on a different port, is it possible to use channel selectors on the client side, or is it better to go with multi-threaded blocking I/O on the client side and configure the servers differently?
Actually, since the remote machines serve only to interact with other hardware, would RMI or IDL/CORBA be a better solution? Really, I just want to be able to send commands and receive telemetry from the hardware, and not have to make up some application layer protocol to do it.
Avoid NIO unless you have a good reason to use it. It's not much fun and may not be as beneficial as you would think. You may get better scalability once you are dealing with tens of thousands of connections, but at lower numbers you'll probably get better throughput with blocking IO. As always though, make your own measurements before committing to something you might regret.
Something else to consider is that if you want to use SSL, NIO makes it extremely painful.
Scalability will probably drive your choice of package. java.net will require one thread per socket. Coding it will be significantly easier. java.nio is much more efficient, but can be hairy to code around.
I would ask yourself how many connections you expect to be handling. If it's relatively few (say, < 100), I'd go with java.net.
There is almost no reason to write this kind of networking code from scratch now. Packages like netty.io will almost always get you more reliable and flexible code with fewer lines of code than a hand-crafted solution will. Also, with Netty, you can get SSL support w/o complicating your implementation at all. Libraries like netty also obviate the "async vs threads" question almost entirely, gives you good performance, and still lets you tweak the threading model as needed.
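For a sense of scale, a minimal Netty 4 echo server is roughly the following sketch (port arbitrary); Netty owns the event loop and threading, and your code is just the handler:

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.*;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;

    public class EchoServer {
        public static void main(String[] args) throws InterruptedException {
            EventLoopGroup group = new NioEventLoopGroup();
            try {
                ServerBootstrap b = new ServerBootstrap();
                b.group(group)
                 .channel(NioServerSocketChannel.class)
                 .childHandler(new ChannelInitializer<SocketChannel>() {
                     @Override
                     protected void initChannel(SocketChannel ch) {
                         ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                             @Override
                             public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                 ctx.writeAndFlush(msg);  // echo the bytes straight back
                             }
                         });
                     }
                 });
                b.bind(9999).sync().channel().closeFuture().sync();
            } finally {
                group.shutdownGracefully();
            }
        }
    }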
The number of connections you're talking about tells me you should use java.net. Really, there's no reason to complexify your task with non-blocking I/O. (Unless your remote systems are underpowered, but then why are you using Java on them?)
Take a look at Apache's XML-RPC package. It's easy to use, completely hides the network stuff from you, and works over good ol' HTTP. No need to worry about protocol issues ... it'll all look like method calls to you, on both ends.
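A client-side sketch with the Apache XML-RPC 3 API (the URL and method name are invented for the example):

    import java.net.URL;
    import org.apache.xmlrpc.client.XmlRpcClient;
    import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;

    public class XmlRpcDemo {
        public static void main(String[] args) throws Exception {
            XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
            config.setServerURL(new URL("http://remote-site:8080/xmlrpc"));

            XmlRpcClient client = new XmlRpcClient();
            client.setConfig(config);

            // Looks like an ordinary method call; the XML encoding and the
            // HTTP transport are hidden by the library.
            Object telemetry = client.execute("rotator.getPosition", new Object[]{});
            System.out.println(telemetry);
        }
    }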
Given the small number of connections involved, java.net sounds like the right solution.
Other posters talked about using XML-RPC. This is a good choice if the volumes of data being shipped are small, however I have had bad experiences with XML-based protocols when writing inter-process communications that ship large amounts of data (e.g. large request/responses, or frequent small amounts of data). The cost of XML parsing is typically orders of magnitude higher than more optimised wire formats (e.g. ASN.1).
For low volume control applications the simplicity of XML-RPC should outweigh the performance costs. For high volume data communications it may be better to use a more efficient wire protocol.