Java Cross-Language Encryption - java

I am working on a game that has multiplayer support and I want to encrypt the server-client connection. I have done it before using a SecretKey object and an ObjectInput/OutputStream. However, I want to leave the ability open for other languages to connect to the server (if I ever take up another language and want to port my game). Is there any way I could encrypt all the data without using Java objects, so any language can use it?

You can create your own custom object serializer in Java with the Externalizable interface. The custom serializer can write out the state of the Java objects in a format that another language could read. I've implemented this in a project where I needed serialization to keep working even after the objects changed, so that old state could still be read back. The painful part of custom serialization is that you have to track the object fields carefully, or your deserialize methods will create strange bugs.
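A minimal sketch of what that looks like, assuming a made-up PlayerState message; the field order in writeExternal/readExternal effectively is the wire format, so it has to match on both sides:

```java
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

// Hypothetical game message: every field is written explicitly, so the wire
// layout is under your control rather than Java's default serialization.
public class PlayerState implements Externalizable {
    private int playerId;
    private double x;
    private double y;

    public PlayerState() {}            // required public no-arg constructor

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(playerId);        // field order defines the format
        out.writeDouble(x);
        out.writeDouble(y);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        playerId = in.readInt();       // must read in exactly the same order
        x = in.readDouble();
        y = in.readDouble();
    }
}
```

Note that when sent through an ObjectOutputStream the stream still carries Java's own headers and class descriptors around these bytes, so for a fully language-neutral format you would write the same field-by-field logic to a plain DataOutputStream instead.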

Binary object serialization
One thing you need to do is serialize your objects to a binary format. You can do this using the standard serialization API, or you can create your own encoder/decoder. Be sure to describe your own protocol in detail: every bit should be documented, or you will run into trouble, if not immediately then years later.
There are standardized methods for creating your own protocol with binary encodings, such as ASN.1, but if you take that route expect a rather steep learning curve. The general idea of using tag/length/value for values is a good one though, so you may want to take a look at e.g. BER/DER encoding.
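For illustration, here is a hedged sketch of the tag/length/value idea; the tag values and the 4-byte length are my own choices, not part of any standard such as BER:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Minimal tag/length/value writer: each field is a 1-byte tag, a 4-byte
// big-endian length, and the raw value bytes. The tags here are made up.
public final class TlvWriter {
    private static final byte TAG_STRING = 0x01;
    private static final byte TAG_INT    = 0x02;

    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private final DataOutputStream out = new DataOutputStream(buffer);

    public TlvWriter writeString(String value) throws IOException {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        out.writeByte(TAG_STRING);
        out.writeInt(bytes.length);
        out.write(bytes);
        return this;
    }

    public TlvWriter writeInt(int value) throws IOException {
        out.writeByte(TAG_INT);
        out.writeInt(4);               // length of the value that follows
        out.writeInt(value);
        return this;
    }

    public byte[] toBytes() {
        return buffer.toByteArray();
    }
}
```

Because every field is self-describing, a reader in another language only needs to know the tag meanings and byte order to decode the stream.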
Encryption of your serialized objects
To encrypt, you can create your own cryptographic protocol. Most people on this forum go this route, and most fail. They manage to get their protocol working, but they also leave multiple security holes open.
One of the best ways of securing data in transit is TLS. So if TLS is applicable, by all means go this route. After initial setup, TLS has a relatively low overhead, so there is probably no need to try and implement a competing proprietary protocol.
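As a rough sketch of what TLS looks like in Java using the standard JSSE classes (the keystore names, port and hostname below are placeholders):

```java
import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class TlsGameConnection {

    // Server side. Assumes a keystore with the server certificate, supplied e.g. via
    // -Djavax.net.ssl.keyStore=server.jks -Djavax.net.ssl.keyStorePassword=changeit
    public static void serve() throws Exception {
        SSLServerSocketFactory factory =
                (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
        try (SSLServerSocket server = (SSLServerSocket) factory.createServerSocket(4444)) { // port is arbitrary
            try (SSLSocket client = (SSLSocket) server.accept()) {
                // client.getInputStream()/getOutputStream() are encrypted transparently;
                // write your serialized game messages to them as usual.
            }
        }
    }

    // Client side. Assumes the client trusts the server certificate, e.g. via
    // -Djavax.net.ssl.trustStore=truststore.jks
    public static void connect() throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("localhost", 4444)) {
            socket.startHandshake();
            socket.getOutputStream().write(new byte[] { /* serialized message bytes */ });
        }
    }
}
```

Because every mainstream language has a TLS implementation, the non-Java client only needs an ordinary TLS socket pointed at the same port.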
You can also encrypt at the application level instead of the transport level. One solution is to rely on existing standards for cryptographic container formats. Well-known formats are CMS (previously known as PKCS#7) and PGP; both are implemented in the Bouncy Castle libraries. Both are binary formats with many options.
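If you go the CMS route, Bouncy Castle's cms package can wrap already-serialized bytes in a standard EnvelopedData structure. This is only a sketch, assuming Bouncy Castle is on the classpath and you already have the recipient's X.509 certificate:

```java
import java.security.Security;
import java.security.cert.X509Certificate;

import org.bouncycastle.cms.CMSAlgorithm;
import org.bouncycastle.cms.CMSEnvelopedData;
import org.bouncycastle.cms.CMSEnvelopedDataGenerator;
import org.bouncycastle.cms.CMSProcessableByteArray;
import org.bouncycastle.cms.jcajce.JceCMSContentEncryptorBuilder;
import org.bouncycastle.cms.jcajce.JceKeyTransRecipientInfoGenerator;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

public class CmsEncryptExample {
    // Wraps already-serialized bytes in a CMS EnvelopedData structure,
    // encrypted for the holder of recipientCert's private key.
    public static byte[] encrypt(byte[] serialized, X509Certificate recipientCert) throws Exception {
        Security.addProvider(new BouncyCastleProvider());

        CMSEnvelopedDataGenerator generator = new CMSEnvelopedDataGenerator();
        generator.addRecipientInfoGenerator(
                new JceKeyTransRecipientInfoGenerator(recipientCert).setProvider("BC"));

        CMSEnvelopedData enveloped = generator.generate(
                new CMSProcessableByteArray(serialized),
                new JceCMSContentEncryptorBuilder(CMSAlgorithm.AES256_CBC).setProvider("BC").build());

        return enveloped.getEncoded();   // DER-encoded CMS structure, readable by any CMS library
    }
}
```

Any CMS-capable library in another language (OpenSSL, .NET's EnvelopedCms, etc.) can then decrypt the resulting structure with the matching private key.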

The way I did it about six years ago: I serialized the object (so that it was a string), converted the string to a byte array, encrypted the bytes, and sent the data as bytes. The other end then reversed the process. I got this to work for encrypted communication between a Java server and an AS3 client. It won't work for languages that don't support byte arrays, though. Do you need more details?
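For reference, a hedged sketch of the "encrypt the bytes" step using the standard javax.crypto API (AES in CBC mode with a random IV; how the SecretKey gets shared with the other side, and the message authentication a real protocol would also need, are not shown):

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class ByteEncryption {
    // Encrypts already-serialized bytes with AES/CBC; the receiver reverses this.
    public static byte[] encrypt(byte[] serialized, SecretKey key) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal(serialized);

        // Prepend the IV so the receiver can initialise its own cipher.
        byte[] message = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, message, 0, iv.length);
        System.arraycopy(ciphertext, 0, message, iv.length, ciphertext.length);
        return message;
    }

    public static SecretKey newKey() throws Exception {
        KeyGenerator generator = KeyGenerator.getInstance("AES");
        generator.init(128);
        return generator.generateKey();
    }
}
```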

Related

Why should I use gRPC instead of IPC / Simple websocket?

I'm sketching an architecture for a microservices system, currently planned to run on one machine (maybe distributed in the future).
The system will be composed of services written in Node.js, Go, and possibly Java.
Both the Node.js and Java services will need to pass instructions to and receive results from the Go server.
Now I'm trying to decide: should I use an IPC pipe, or ramp up on gRPC and protobuf and use them?
These are at different abstraction levels and have different uses, so the 'or' in the question is misleading. You will need both (a transport and an encoding), even if you reimplement one of them.
An IPC mechanism like an anonymous or named pipe is usually called a transport; it has no way to encode multiple instructions or results (it only carries a stream of bytes).
gRPC and protobuf need a transport (they support several) and add a more fine-grained encoding (how to represent an integer, a list, etc.) and possibly more on top. Technologies that encode something can often be layered on top of another transport or encoding; this is common with technologies used together with HTTP. Such layering may make sense, or it may only add a layer without any real use.
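A tiny sketch of that split: the encoded bytes (protobuf or anything else) are produced separately, and the transport only needs some framing so consecutive messages can be told apart. The 4-byte length prefix here is my own choice for illustration; gRPC's actual framing over HTTP/2 looks different.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// A byte-stream transport (pipe, TCP socket) only carries bytes, so each
// encoded message needs its own framing, here a 4-byte length prefix.
public final class Framing {
    public static void writeMessage(OutputStream transport, byte[] encoded) throws IOException {
        DataOutputStream out = new DataOutputStream(transport);
        out.writeInt(encoded.length);
        out.write(encoded);
        out.flush();
    }

    public static byte[] readMessage(InputStream transport) throws IOException {
        DataInputStream in = new DataInputStream(transport);
        int length = in.readInt();
        byte[] encoded = new byte[length];
        in.readFully(encoded);
        return encoded;
    }
}
```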

Java - .Net object interchange, not web-based

I have a client-server system implemented in C#, and the client and server exchange .Net objects via serialization / deserialization and communicating via TCP/IP. This runs on a local network, it is not web-based or Internet-based.
Now I want to include Android clients connected by wifi. Again, this is local network only, not via the Internet and not web-based. The Android programming will be in Java. (I am aware of Mono for Android, but prefer not to get into that now.)
Is there some fairly simple way to implement object to object interchange between Java and .Net objects, provided, of course, that they are compatible?
I've looked a bit at JSON (Jackson on the Java end and Json.Net on the .Net end), and I'm guessing it can probably be done, but only with major efforts on remapping things at each end as soon as the objects become fairly complicated.
Any other suggestions? JSON-based or otherwise?
PS. My question is somewhat related to this one Mapping tool for converting Java's JSON to/from C#, but it never got a suitable answer, perhaps due to insufficient info in the question. Also, I don't care whether I end up using a JSON-based transport or XML or something else.
I would suggest either JSON or XML (backed by an .xsd schema file), because these are independent of their respective implementations (unlike something like an ObjectOutputStream in Java).
The problem with sharing a format between the two components (client and server) is that they need to be at the same version. My best practice is to have one underlying definition of the format (I use XML with an .xsd file which specifies what the XML has to look like), then use JAXB to generate Java classes. That way you can (un)marshal from/to XML on the Java side.
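On the Java side that looks roughly like this (assuming the classic javax.xml.bind API; the Order class is a made-up stand-in for whatever xjc generates from your .xsd):

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
class Order {                 // hypothetical class, normally generated from the .xsd by xjc
    public String id;
    public int quantity;
}

public class JaxbRoundTrip {
    public static void main(String[] args) throws Exception {
        JAXBContext context = JAXBContext.newInstance(Order.class);

        Order order = new Order();
        order.id = "A-17";
        order.quantity = 3;

        // Marshal to XML that the .NET side can parse against the same schema.
        StringWriter xml = new StringWriter();
        Marshaller marshaller = context.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        marshaller.marshal(order, xml);

        // Unmarshal XML coming back from the other side.
        Unmarshaller unmarshaller = context.createUnmarshaller();
        Order parsed = (Order) unmarshaller.unmarshal(new StringReader(xml.toString()));
        System.out.println(parsed.id + " x" + parsed.quantity);
    }
}
```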
I am very sure a similar thing exists in the world of .NET.
JSON is smaller than XML in size, but I find XML to be more readable.
SO user "default locale" should get the honor for this, but he/she has only answered via a comment. So just to make it very clear what my choice was I'll answer my own question.
I've decided to go with Google Protocol Buffers, which in my opinion has much better support for moving objects back and forth between Java and .Net than JSON. Because I have a lot of experience with C#, and a lot of existing C#-defined classes, I've selected Marc Gravell's protobuf-net program for the .Net end, and Google's own support for the Android end (no - see edit). This implies that I'm defining the objects in C#, not in .proto files - protobuf-net generates the .proto files from which I then generate the Java code.
Incidentally, as the transport mechanism I'm using a little-known program called naga on the Android end: http://code.google.com/p/naga/ Naga seems to work fine, is well documented, has sample programs, and should be better known in my opinion.
EDIT:
OK, I've got it working now to my satisfaction. Here's what I'm using:
Google Protocol buffers as the interchange format: https://developers.google.com/protocol-buffers/
Marc Gravell's protobuf-net at the C# end: http://code.google.com/p/protobuf-net/
A program called protostuff at the Java end: http://code.google.com/p/protostuff/
(I prefer protostuff to the official Google Java implementation of protocol buffers because Google's implementation is based on the Java objects being immutable.)
Actually, I'm not using pure protocol buffers as the interchange format - I prefix the data with the name of the (outermost) class being transmitted. This makes the data self-identifying for deserializing at the other end.
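What that prefixing can look like on the Java side, purely as my own sketch of the idea (the exact prefix encoding used, and how it matches what protobuf-net writes on the .NET side, isn't specified above):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Frames each protobuf payload with the name of the outermost message class so
// the receiving end knows which type to deserialize, as described above.
public final class SelfIdentifyingFrame {
    public final String className;
    public final byte[] payload;

    public SelfIdentifyingFrame(String className, byte[] payload) {
        this.className = className;
        this.payload = payload;
    }

    public void writeTo(DataOutputStream out) throws IOException {
        out.writeUTF(className);          // e.g. "ChatMessage" (hypothetical type name)
        out.writeInt(payload.length);
        out.write(payload);
    }

    public static SelfIdentifyingFrame readFrom(DataInputStream in) throws IOException {
        String className = in.readUTF();
        byte[] payload = new byte[in.readInt()];
        in.readFully(payload);
        return new SelfIdentifyingFrame(className, payload);
    }
}
```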
You can also try wox (https://github.com/codelion/wox); it is a cross-platform serialization library for Java and C# based on XML.

Serialised communication between languages

My university's peer-to-peer communication course uses an in-house client/server program for demonstration, and (I think) extending it is part of the assessment. The program we use is written in Java and uses serialisation for the network communication.
To get a better grip I want to try reimplementing the protocol in Objective-C, but googling around I can't find any information on using serialised data between languages. I would like to keep this as simple as possible, ideally being able to drop my replacement server/client onto the network and have it behave the same.
Edit: Didn't actually ask a question there.
Is it possible to communicate between the two serialised formats? How can I make this work without reverse engineering the format Java uses?
I would recommend against writing (de)serialization support for Java's native serialization format in another language.
If you can change the existing Java server and clients, use a more language agnostic serialization format.
Assuming that you are not allowed to make that sort of change, I would define the new protocol and implement a bridge in Java. The bridge (process) would establish a connection on behalf of each client that connects to it, and translate messages between the Java-serialized and language-agnostic forms. This provides a good migration strategy.
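A very rough sketch of such a bridge, assuming JSON (via Jackson) as the language-agnostic form and made-up port numbers; a real bridge would keep connections open, handle errors, and map the message types explicitly:

```java
import java.io.ObjectInputStream;
import java.net.ServerSocket;
import java.net.Socket;

import com.fasterxml.jackson.databind.ObjectMapper;

// Accepts a legacy client speaking Java serialization, converts each object
// to JSON, and forwards it to the language-agnostic server.
public class SerializationBridge {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        try (ServerSocket bridgePort = new ServerSocket(9000)) {          // ports are arbitrary
            while (true) {
                try (Socket legacyClient = bridgePort.accept();
                     Socket upstream = new Socket("localhost", 9001)) {
                    ObjectInputStream javaIn = new ObjectInputStream(legacyClient.getInputStream());
                    Object message = javaIn.readObject();                 // Java-serialized form in...
                    upstream.getOutputStream().write(mapper.writeValueAsBytes(message)); // ...JSON out
                }
            }
        }
    }
}
```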
The Java serialization protocol (if it's the built-in default Java serialization) is documented, so you won't have to reverse engineer it - check this article and this link. However, if you can, use JSON, XML or XML-RPC; it will be much simpler than creating a Java serializer/deserializer in another language.

What is the use of serialization? [duplicate]

Possible Duplicate:
What is object serialization?
I know the details of how the JVM does the serialization of object graphs. What I am more interested in is the use of serialization. What was the necessity that drove the specification of serialization? More specifically, what practical uses does serialization have these days? Is it used as a means to store data? Or is it used to send data across the network?
I will appreciate any links to this question if a full answer is not possible.
Thanks in advance.
Very simple. Suppose you have an object graph and want to store it in a file and then read it back. The luck of the Java programmer is that he/she does not have to implement the gory details of field-by-field writing and reading of the data. If the whole graph consists of serializable objects, Java does this work for you.
The same applies if two applications exchange data.
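As a small illustration (the Player class and the file name are made up):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.List;

// Hypothetical serializable class: marking it Serializable is all that is
// required; the JVM handles the field-by-field writing.
class Player implements Serializable {
    String name;
    List<String> inventory;    // the rest of the graph must be serializable too
    Player(String name, List<String> inventory) { this.name = name; this.inventory = inventory; }
}

public class SaveAndLoad {
    public static void main(String[] args) throws Exception {
        Player player = new Player("alice", List.of("sword", "potion"));

        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("player.bin"))) {
            out.writeObject(player);                       // store the object graph
        }

        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("player.bin"))) {
            Player restored = (Player) in.readObject();    // read it back
            System.out.println(restored.name + " " + restored.inventory);
        }
    }
}
```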
Serialization is mainly used to send objects across the network. It's used in RMI, Spring's HttpInvoker, and other binary remote method invocation systems.
Using it for durable persistent storage is questionable: it's impossible to query, it's binary and thus hard to inspect with a simple text editor, and it's hard to maintain when the classes change and their serialization format changes with them. So a more open format is often chosen (XML, JSON, etc.).
Yes and yes! You can use it to send objects across the network, cache them, save them to disk, whatever you like. It is used for things like session replication between clustered JVM instances. Much of the time it is used under the covers by libraries that you use as well.

how to design messages in a java client-server model

I have set up a basic client and a basic server using Java sockets, and can successfully send strings between them.
Now I want to design some basic messages.
Could you give me any recommendations on how to lay them out?
Should I use Java's serialization to send classes?
Or should I just encode the information I need in a custom string and decode it on the other side?
What about recognizing the type of messages? Is there some convention for this, like the first 4 characters of each message being an identifier for the message?
Thanks!
I would recommend that you not reinvent the wheel. If Java serialization suits you, just use it.
Also take into account that there are some nice serialization frameworks around:
Thrift, from Facebook, and Protocol Buffers, from Google.
Thrift is also an RPC mechanism, so you could use it instead of opening and reading raw sockets, but this, of course, depends on your problem domain.
Edit: And to answer your question about message formatting: yes, if you want to implement your own protocol and you have more than one type of message, you should implement a header. But I warn you that implementing a protocol is hard and very error-prone. Just create an object containing the different inner objects and methods you need, add a version field if you want, and make it implement the java.io.Serializable interface.
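A hedged sketch of such a message object (the field names and message types are invented):

```java
import java.io.Serializable;

// One message object carrying whatever the client and server exchange.
// The version field travels with the message; serialVersionUID keeps
// Java's own compatibility check stable across recompiles.
public class GameMessage implements Serializable {
    private static final long serialVersionUID = 1L;

    private final int version = 1;     // hypothetical protocol version field
    private final String type;         // e.g. "CHAT", "MOVE" (made-up types)
    private final Object payload;      // inner object; must itself be Serializable

    public GameMessage(String type, Object payload) {
        this.type = type;
        this.payload = payload;
    }

    public int getVersion() { return version; }
    public String getType() { return type; }
    public Object getPayload() { return payload; }
}
```

On the wire it just goes through the ObjectOutputStream/ObjectInputStream you attach to the socket's streams, so there is no hand-written header format to get wrong.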
Maybe JMS would help you, it's hard to say without knowing the details. But JMS is standard, well thought out and versatile, and there are an impressive number of implementations available, open source and commercial. We use Sun's OpenMQ implementation and we're quite happy with it. It's fast enough for our needs, very mature and reliable.
Mind you, JMS is not a lightweight affair by any standard so it may very well be overkill for your needs.
If you're going to deploy this in a production environment, I'd advise you to look at either RMI or XML web services. (Google's Protocol Buffers are interesting too, but do not include a standard protocol for message transport, although third-party implementations exist.)
If you're doing this for the pleasure of learning, there are tons of ways to go about this. In general, a message in a generic messaging system will have some kind of "envelope format" which contains not only the message body, but also metadata about the message. A bare minimum for the header is something that identifies the intended receiver - either an integer identifier, a string representing a method name or a file, or something like it.
A simple example is HTTP, a plain-text format where the envelope is made up of all the lines up to the first blank line. The first line identifies the protocol version and the intended receiver (≈ the file requested), the following lines are metadata about the request, and the message body follows the first blank line.
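To make the "4 characters" idea from the question concrete, here is one possible, entirely made-up envelope for the string messages already in place: a fixed 4-character type code followed by the body, one message per line.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.PrintWriter;

// Made-up envelope format: 4-character type identifier, then the body,
// one message per line (works because the question already sends strings).
public final class SimpleEnvelope {
    public static void send(PrintWriter out, String type, String body) {
        if (type.length() != 4) {
            throw new IllegalArgumentException("type must be exactly 4 characters");
        }
        out.println(type + body);       // e.g. "CHAT" + "hello there"
        out.flush();
    }

    // Returns { type, body }, or null if the connection closed or the line was malformed.
    public static String[] receive(BufferedReader in) throws IOException {
        String line = in.readLine();
        if (line == null || line.length() < 4) {
            return null;
        }
        return new String[] { line.substring(0, 4), line.substring(4) };
    }
}
```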
In general, XML is a common format for distributed services (mostly because of its good schema capabilities and cross-platform support), although some schemes use other formats for simplicity and/or performance. RMI uses standard Java object serialization, for example.
What you choose to use is ultimately based on your needs. If you want to make it easy to interact with your system from a large amount of platforms, use XML web services (or REST). For communication between distributed Java subsystems, use RMI. If your system is extremely transaction intensive, maybe a custom binary format is best for faster processing and smaller messages - but before doing this "optimization", remember that it requires a lot more work to get it working properly and that most apps won't benefit a lot from it.
