Implementing an IP Gateway (in Java?)

I'd like to write a program to simulate various network conditions (e.g. latency, packet loss). The most straightforward way to present that program to the network would be as an IP gateway: clients send traffic to it as their default gateway, or downstream routers have it configured as a next hop for routing purposes.
How can I write a program to receive and process that traffic?
What tools and libraries are available to allow this? e.g. Can this be done through iptables on linux?
(I'd prefer to implement it in Java if possible).
One workaround could be to implement such a program as a proxy (e.g. HTTP + SOCKS) and configure a router to send all traffic to the proxy transparently. Another could be to open a raw socket and process all the traffic manually, but that might effectively mean re-implementing a network stack. Is there a better way?

Your question is very broad so the answer will be just as broad. I hope it helps you find some preliminary directions to research.
First, try to avoid writing anything at all. There are existing solutions for this task (hint: search for "WAN Emulator").
If what you find isn't good enough, my next step would be to script something using tc and netem. Linux has very good support for both routing and bridging, and with tc + netem you can add delay, loss, jitter and just about anything else on top. If needed, create a higher-level utility, possibly in Java, to configure tc and provide a nicer UI.
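To illustrate that last idea: such a utility mostly just shells out to tc. Here is a minimal sketch; the interface name and the delay/loss values are placeholders, and it assumes tc is installed and the process has the privileges to run it:
import java.io.IOException;

// Sketch of a Java front-end for tc + netem. Assumes tc is installed and the
// JVM runs with enough privileges; interface name and values are placeholders.
public class NetemConfigurator {
    private final String device;

    public NetemConfigurator(String device) {
        this.device = device;
    }

    // e.g. apply("100ms", "1%") runs: tc qdisc replace dev eth0 root netem delay 100ms loss 1%
    public void apply(String delay, String loss) throws IOException, InterruptedException {
        run("tc", "qdisc", "replace", "dev", device, "root", "netem", "delay", delay, "loss", loss);
    }

    public void clear() throws IOException, InterruptedException {
        run("tc", "qdisc", "del", "dev", device, "root");
    }

    private void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("Command failed: " + String.join(" ", cmd));
        }
    }
}
Usage would be something like new NetemConfigurator("eth0").apply("100ms", "1%"), with a config file or small UI on top if you need one.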
A third option would be to actually write something. This is where things get tricky, especially if you want to do it in java. To perform bridging or routing, you'd need to get the frames to user space (iptables+nfqueue could help here), then handle them according to your own logic, and finally write them out using a raw socket. That's quite a lot of work.
Implementing an HTTP proxy would be easier since you don't need to work at the level of individual packets. You can avoid the low-level stuff (iptables/nfqueue/raw sockets) and just use plain and simple sockets, or even a complete HTTP proxy implementation in Java, like this one from Jetty.
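To give a feel for that route, here is a minimal sketch of a TCP relay that sleeps before forwarding each chunk to simulate latency; the listen port, upstream host/port and delay are placeholders, and a real transparent proxy would also need the router/iptables redirection plus much better error handling:
import java.io.*;
import java.net.*;

// Sketch: accept a client, connect to a fixed upstream host, and relay bytes
// in both directions, sleeping before each write to simulate added latency.
public class DelayingTcpProxy {
    static final int LISTEN_PORT = 8080;            // placeholder
    static final String UPSTREAM_HOST = "example.com"; // placeholder
    static final int UPSTREAM_PORT = 80;            // placeholder
    static final long DELAY_MS = 100;               // placeholder

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(LISTEN_PORT)) {
            while (true) {
                Socket client = server.accept();
                Socket upstream = new Socket(UPSTREAM_HOST, UPSTREAM_PORT);
                pump(client, upstream);   // client -> upstream
                pump(upstream, client);   // upstream -> client
            }
        }
    }

    // Copy bytes from one socket to the other on a background thread.
    static void pump(Socket from, Socket to) {
        new Thread(() -> {
            byte[] buf = new byte[8192];
            try (InputStream in = from.getInputStream();
                 OutputStream out = to.getOutputStream()) {
                int n;
                while ((n = in.read(buf)) != -1) {
                    Thread.sleep(DELAY_MS); // simulated latency
                    out.write(buf, 0, n);
                    out.flush();
                }
            } catch (IOException | InterruptedException e) {
                // a real proxy would close both sockets and log here
            }
        }).start();
    }
}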
If you need further details about some of this stuff, you might want to open a second, more focused question after doing some reading.

Related

2 programs that send messages to each other in Java [duplicate]

I have the following situation:
I have 2 JVM processes (really 2 Java processes running separately, not 2 threads) running on a local machine. Let's call them ProcessA and ProcessB.
I want them to communicate (exchange data) with one another (e.g. ProcessA sends a message to ProcessB to do something).
Right now I work around this by writing to a temporary file, and these processes periodically scan the file to pick up messages. I don't think this solution is very good.
What would be a better alternative to achieve what I want?
Multiple options for IPC:
Socket-Based (Bare-Bones) Networking
not necessarily hard, but:
might be verbose for not much,
might offer more surface for bugs, as you write more code.
you could rely on existing frameworks, like Netty
RMI
Technically, that's also network communication, but that's transparent for you.
Fully-fledged Message Passing Architectures
usually built on either RMI or network communications as well, but with support for complicated conversations and workflows
might be too heavy-weight for something simple
frameworks like ActiveMQ or JBoss Messaging
Java Management Extensions (JMX)
meant more for JVM management and monitoring, but it could help if you mostly want one process to query another for data, or send it a request for an action, and those interactions aren't too complex
also works over RMI (amongst other possible protocols)
not so simple to wrap your head around at first, but actually rather simple to use
File-sharing / File-locking
that's what you're doing right now
it's doable, but comes with a lot of problems to handle
Signals
You can simply send signals to your other process
However, it's fairly limited and requires you to implement a translation layer (it is doable, but more of a crazy idea to toy with than anything serious)
Without more details, a bare-bones network-based IPC approach seems the best, as it's the:
most extensible (in terms of adding new features and workflows to your programs)
most lightweight (in terms of memory footprint for your app)
most simple (in terms of design)
most educative (in terms of learning how to implement IPC); you mentioned "socket is hard" in a comment, but it really is not, and it is something worth working on
That being said, based on your example (simply requesting the other process to do an action), JMX could also be good enough for you.
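For what it's worth, the bare-bones socket option from the list above really can be tiny. A minimal sketch (the port and message are arbitrary), with one class per JVM:
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// ProcessB.java - the "server" JVM: waits for one message and prints it.
public class ProcessB {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(5000);   // arbitrary port
             Socket socket = server.accept();
             DataInputStream in = new DataInputStream(socket.getInputStream())) {
            System.out.println("ProcessB received: " + in.readUTF());
        }
    }
}

// ProcessA.java (separate file) - the "client" JVM: connects and sends a request.
public class ProcessA {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 5000);
             DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
            out.writeUTF("doSomething");
        }
    }
}
From there you grow the message format (framing, request IDs, responses) as your workflows demand.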
I've added a library on GitHub called Mappedbus (http://github.com/caplogic/mappedbus) which enables two (or many more) Java processes/JVMs to communicate by exchanging messages. The library uses a memory-mapped file and makes use of fetch-and-add and volatile reads/writes to synchronize the different readers and writers. I've measured the throughput between two processes using this library at 40 million messages/s, with an average latency of 25 ns for reading/writing a single message.
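For context on how that works under the hood, the core mechanism is an ordinary memory-mapped file. The following is only an illustration with java.nio (not the Mappedbus API), with a placeholder file name and without the fetch-and-add/volatile synchronization the library adds for concurrent readers and writers:
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Illustration only: the writer maps a shared file and publishes one message;
// a reader in another JVM maps the same file and polls the length field at offset 0.
public class MappedFileWriter {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("/tmp/ipc-demo.dat", "rw")) {
            MappedByteBuffer buf = file.getChannel()
                    .map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            byte[] msg = "hello".getBytes("UTF-8");
            buf.position(4);
            buf.put(msg);               // write the payload first...
            buf.putInt(0, msg.length);  // ...then publish its length at offset 0
        }
    }
}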
What you are looking for is inter-process communication. Java provides a simple IPC framework in the form of Java RMI API. There are several other mechanisms for inter-process communication such as pipes, sockets, message queues (these are all concepts, obviously, so there are frameworks that implement these).
I think in your case Java RMI or a simple custom socket implementation should suffice.
Sockets with DataInputStream/DataOutputStream, to send Java objects back and forth. This is easier than using a disk file, and much easier than Netty.
I tend to use JGroups to form local clusters between processes. It works for nodes (aka processes) on the same machine, within the same JVM, or even across different servers.
Once you understand the basics it is easy to work with, and the option to run two or more processes in the same JVM makes those processes easy to test.
The overhead and latency are minimal if both are on the same machine (usually just a local TCP round trip of >100 ns per action).
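As a rough sketch of what that looks like (assuming a JGroups 3.x/4.x-style API; the cluster name is arbitrary):
import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

// Sketch: every process that runs this joins the "demo-cluster" group
// (arbitrary name) and prints whatever messages the others send.
public class JGroupsNode {
    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel(); // default UDP protocol stack
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void receive(Message msg) {
                System.out.println("received: " + msg.getObject());
            }
        });
        channel.connect("demo-cluster");
        channel.send(new Message(null, "hello from " + channel.getAddress()));
    }
}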
A socket may be the better choice, I think.
Back in 2004 I implemented code that did the job with sockets. Since then I have searched many times for a better solution, because the socket approach triggers firewalls and my clients worry. I have not found a better solution yet. The client must serialize your data and send it, and the server must receive and deserialize it.
It is easy.

Java Socket still first choice for TCP/IP programming in Java?

I need to interact with a server over TCP/IP with a basic message/response protocol (so for each request I shall receive a defined response).
In the JVM ecosystem, I think Java Socket was the tool to use 15 years ago, but I'm wondering if there is anything more suitable nowadays in the JDK? For example, with Java sockets one still needs to manually time out a request if no answer is received, which feels really old-fashioned.
So is there anything new in the JDK or JVM universe?
No, there are much better options nowadays which allow you to implement your client/server asynchronously without additional threading.
Look at SocketChannel from NIO, or even better, AsynchronousSocketChannel from NIO.2. Check this tutorial.
The latter in particular lets you just start the connection listener and register callbacks which will be called whenever a new connection is requested, data has arrived, data has been written, etc.
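And to address the manual-timeout complaint from the question: the Future-based flavor of AsynchronousSocketChannel gives you a bounded wait out of the box. A minimal client sketch, with placeholder host, port and timeout:
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;

// Sketch: send one request and wait at most 5 seconds for the response.
public class Nio2Client {
    public static void main(String[] args) throws Exception {
        try (AsynchronousSocketChannel channel = AsynchronousSocketChannel.open()) {
            channel.connect(new InetSocketAddress("example.com", 7000)).get();
            channel.write(ByteBuffer.wrap("PING\n".getBytes(StandardCharsets.UTF_8))).get();

            ByteBuffer response = ByteBuffer.allocate(1024);
            // get(timeout, unit) throws TimeoutException if no answer arrives in time
            int bytesRead = channel.read(response).get(5, TimeUnit.SECONDS);
            System.out.println("read " + bytesRead + " bytes");
        }
    }
}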
Also consider looking at high-level solutions like Netty. It will take care of the network core and distribute load evenly across executors. Additionally, it provides a clear separation between the network layer and the processing layer, and for the processing layer it gives you a lot of reusable codecs for common protocols.
You can try RMI, which works on top of TCP/IP but hides all the hard work behind a convenient set of APIs.
Follow this tutorial post
Well, there are really a lot of other technologies to use, for example JMS has various implementations which work out of the box.
Sockets are low-level building blocks of network communications, like wires in the electricity network of your house. Yes, they're old fashioned, yes, we likely don't want to see them, but they're there and they will stay there for a good reason.
On top of sockets you can, for example, pick HttpURLConnection, which implements most of the HTTP protocol. Still, setting timeout policies is in your hands, which I find quite useful and extremely painful at the same time.
http://www.mkyong.com/java/how-to-send-http-request-getpost-in-java/
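For example, the timeout handling mentioned above comes down to two setters; a minimal sketch with placeholder URL and timeout values:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: GET a URL with explicit connect and read timeouts.
public class HttpGetWithTimeout {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://httpbin.org/get").openConnection();
        conn.setConnectTimeout(3000); // ms allowed to establish the TCP connection
        conn.setReadTimeout(5000);    // ms allowed to wait for data once connected
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            conn.disconnect();
        }
    }
}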
You are free to move one abstraction level up and use a ready-made REST library, such as this one: http://unirest.io/java.html
The following example connects to a server, configures an HTTP query string, performs the request (timeouts, encodings, all the mess under the hood), and finally gets the response in JSON format, in a few lines:
import com.mashape.unirest.http.HttpResponse;
import com.mashape.unirest.http.JsonNode;
import com.mashape.unirest.http.Unirest;

HttpResponse<JsonNode> response = Unirest.post("http://httpbin.org/post")
        .queryString("name", "Mark")
        .field("last", "Polo")
        .asJson();
Nowadays a vast number of web services are available via REST, which is simple command/response over HTTP. If you have the chance, I'd suggest using REST, as you can easily find available client- and server-side implementations, and you don't need to reinvent the wheel on the command-protocol layer either.
On client side, unirest is quite convenient. On the server side, we have had really great experience in the 1.2.xx series of Play! framework. But there are thousands of these things out there, just search for "REST".
Take a look at the Netty project: "Netty is a NIO client server framework which enables quick and easy development of network applications such as protocol servers and clients. It greatly simplifies and streamlines network programming such as TCP and UDP socket server."
This framework gives us a lot of capabilities that simplify the programming process and allow for great scalability.
It is used by Twitter and a lot of other big companies in the technology industry.
This is a nice presentation from Norman Maurer.

Sockets or RMI - perfomance and scalability

I am currently deciding what kind of communication method/network protocol to use for a new project.
What I can tell you about this project is that:
- It is Android/java based, using X amount of Android devices
- These devices should be able to send strings to each other over a local network. We are talking about small strings here. Small as in less than 100 characters.
- The number of packets/transmissions being sent can vary a LOT. I can't say how much, unfortunately, but the network protocol needs to be as scalable as possible.
I have researched different possible solutions and am now deciding whether to use sockets or RMI.
From what I have understood about RMI:
It is easier than Java sockets to implement and maintain (smaller amount of code)
It is "a bit slower" than sockets, as it is a new "layer" built on top of sockets
There may be some scalability issues (if this is true, how "serious" is it?), as it creates a lot of new sockets, which can result in exceptions.
Obviously the system needs to run as smooth as possible, but the main objective is to make it scalable so it can handle more Android devices.
EDIT: The system is not "peer-to-peer". Any of the Android devices should be able to be configured as the server.
None of your concerns are the real issue, in my view.
RMI has a pre-defined protocol, raw sockets do not.
If you use raw sockets, you have to do all the work to define what messages and protocols are exchanged by client and server.
There are so many good existing protocols (RMI, HTTP, etc.) that I'd wonder why you feel the need to invent your own again.
Android devices communicating over HTTP - tell me why it won't be fast or scalable enough. HTTP is good enough for the Internet - why not you and your solution?
I would suggest exposing some kind of web service (SOAP or REST) on your application server. For example, people frequently expose their data to mobile devices as a REST web service API returning some kind of JSON format, in order to make it easier to marshal it again in the client device.
This way you take advantage of the HTTP implementation that every application server already provides; otherwise you would have to write your own worker thread pool using NIO primitives to achieve performance... not something to do in a real production environment - maybe in an academic one?

High Level Protocols for Bluetooth/WiFi Direct Sockets?

When you work with Bluetooth or WiFi Direct in Android, at the end of all of the handshaking and such, you wind up with sockets.
With TCP/IP, we have a zillion-plus-one libraries that layer on top of sockets, for high-level protocols: HTTP, XMPP, IMAP, etc. Courtesy of these libraries, we can deal with more domain-specific abstractions of an operation (e.g., "download this file"), with low-level socket plumbing handled by the library.
Question: Are there equivalents, for any high-level protocol, that are known to work (or are likely to work) with the sockets produced via Android's Bluetooth and/or WiFi Direct layers?
Right now, I'm not fussy about the specific protocol -- I'm just looking for examples of this sort of protocol layer, to make using these sorts of connectivity options easier for developers.
For example, it looks like I could create a fork or add-on for OkHTTP that uses an alternative source for sockets, and I could probably create a Java HTTP server that does the same. Given those, app developers would write HTTP apps that talked over Bluetooth or WiFi Direct (and, on the client side at least, the coding should be fairly "natural" in feel, once the connectivity-specific pairing and handshaking has gone on).
IOW, going back to dealing with raw sockets feels so two decades ago... :-)
Thanks!
UPDATE
Based on Kristopher Micinski's comment on the ZeroMQ answer, I figured some clarification might be in order.
It's easier to say what I don't want: I don't want to touch sockets, after creating them. Something else at a higher level should handle those for me, plus handle what I'd consider a "protocol" to be (e.g., determining when some communications operation has ended, beyond a socket closing).
Mostly, this is for my book. Most book examples for low-level socket stuff are unrealistic, such as "we open a socket to the server and immediately start blasting the bytes representing some image to be uploaded, then close the socket when we're done". While the examples work, you'd never write something like that in real life:
If you're really working at the socket level, you'd be implementing some protocol that has the hopes of addressing authentication, error handling, etc., even if you're rolling the protocol yourself
Few developers work directly with sockets today for Internet operations
Now, it'd be cool if the protocol offered by the layer were something developers were used to (e.g., HTTP) or had heard of even if they haven't used it (e.g., XMPP). And I'll settle for simple scenarios (e.g., N-way support is cool but not necessary). In this respect, based on preliminary research (conducted by a sleep-deprived brain), ZeroMQ isn't a bad option. It lacks a bit of "brand recognition" compared to, say, an XMPP stack that could work with arbitrary sockets. But off the cuff it seems to meet what else I'm looking for.
I recognize that these stacks will have limitations imposed by the underlying transport (e.g., Bluetooth works well for N-way only for small values of N). And I certainly don't want to portray -- here or in my book -- that whatever solution I portray is the be-all and end-all of socket based communication.
I just want something that has a prayer of being more realistic for actual use. Bonus points if it is something that I can grok, as I have always used higher-level protocols for TCP/IP communications, and so I'm short on experience with direct socket manipulation.
I found ZeroMQ to be useful for managing socket connections. It has support for multiple languages, including Java. You can use it to manage the sockets once you establish the connection over WiFi Direct or BT.
I know it's a somewhat old question and already answered but I would like to contribute.
I did this app: https://play.google.com/store/apps/details?id=com.budius.WiFiShoot and although the WiFi Direct connection and handshake are somewhat broken (which is what causes most of my unhappy users), I'm handling all the communication using the excellent https://github.com/EsotericSoftware/kryonet
My code is pretty much what you see in their examples: create the Kryo instance, register classes, open the server, connect the client to the server IP, and shoot objects across with the file information; later I send the actual files using this code: https://code.google.com/p/kryonet/source/browse/trunk/kryonet/test/com/esotericsoftware/kryonet/InputStreamSenderTest.java
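For readers who haven't used kryonet, that flow looks roughly like the sketch below; the ports, timeout and FileInfo message class are placeholders of mine, not the app's actual code:
import com.esotericsoftware.kryonet.Client;
import com.esotericsoftware.kryonet.Connection;
import com.esotericsoftware.kryonet.Listener;
import com.esotericsoftware.kryonet.Server;

// Sketch: a kryonet server and client exchanging one registered object.
public class KryonetSketch {
    public static class FileInfo {      // placeholder message class
        public String name;
        public long size;
    }

    public static void main(String[] args) throws Exception {
        Server server = new Server();
        server.getKryo().register(FileInfo.class);
        server.addListener(new Listener() {
            @Override
            public void received(Connection connection, Object object) {
                if (object instanceof FileInfo) {
                    System.out.println("incoming file: " + ((FileInfo) object).name);
                }
            }
        });
        server.start();
        server.bind(54555, 54777);      // TCP and UDP ports (arbitrary)

        Client client = new Client();
        client.getKryo().register(FileInfo.class); // same registration order on both ends
        client.start();
        client.connect(5000, "127.0.0.1", 54555, 54777);

        FileInfo info = new FileInfo();
        info.name = "photo.jpg";
        info.size = 1024;
        client.sendTCP(info);
    }
}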
hope it helps.

socket -V- rest performance

I have done some searching but haven't come up with anything on this topic. I was wondering if anyone has ever compared (to some degree) the performance difference between an RPC over a socket and a REST web service. If both do the same thing, which would have a tendency to be the better performer? I've already started building some socket code and would like to know if REST would give better performance before I progress much further. Any input would be really appreciated. Thanks indeed
RMI:
- Feels like a local API, much like XMLRPC
- Can provide some fairly nice remote exception data
- Java specific, which means lock-in and limits your options
- Has horrible versioning problems between different versions of clients
- Skeleton files must be compiled in, like CORBA, which is not very flexible
REST:
- easy to route around firewalls
- useful for uploading files as it can be rather lightweight
- very simple if you just want to shove simple things at something and get back an integer (like for uploaders)
- easy to proxy security behind Apache and let it take the heat
- does not define any standard format for the way the data is being exchanged (could be JSON, YAML 1.0, YAML 2.0, arbitrary XML format, etc)
- does not define any convention about having remote faults sent back to the caller; integer codes are frequently used, but the method of sending back data is not defined. Ideally this would be standardized.
- may require a lot of work on the client-side caller of the library to make use of the data (custom serialization and so forth)
In short, from here:
web services do allow a loosely coupled architecture. With RMI, you have to make sure that the objects stay in sync in all applications
RMI works best for smaller applications that are not internet-related and thus not scalable
It's hard to imagine that REST is faster than a simple socket connection, given that it also goes over a socket.
However, REST may be performant enough, and it is standard and easier to use. I would first test whether REST (or one of the many other existing solutions) is fast enough and meets your requirements before attempting your own socket solution.
