Sending potentially large messages over Netty in parallel - java

I want to implement a server/client application using Netty. As an example, suppose it needs to upload and download files and receive notifications when new files are uploaded. The problem is that the client must receive notifications even while downloading (or uploading) a file. I can see a few options:
1. Only send small messages over TCP containing URLs to files; download and upload over HTTP.
2. Open several parallel TCP connections, using one for small messages and one for large transfers (or one per large message).
3. Write a chunking handler which automatically splits messages into chunks under some limit (e.g. 64 KB) and allows chunks from different messages to be interleaved. From the documentation, it seems ChunkedWriteHandler does not do this.
What I like about option 3 is that the client only needs to authenticate once, and there is no possibility of one connection breaking while another is maintained. But is it reasonable? And if so, does such a solution already exist?
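Option 3 can be sketched without any Netty machinery: split each message into fixed-size chunks tagged with a message id, so chunks from different messages can be interleaved on one connection and reassembled on the other side. This is a minimal illustration, not an existing library; all names (`Chunk`, `Reassembler`) are made up for the sketch.

```java
import java.io.ByteArrayOutputStream;
import java.util.*;

public class ChunkingSketch {
    static final int CHUNK_SIZE = 64 * 1024;

    // A chunk carries its message id, its payload, and a last-chunk flag.
    record Chunk(long messageId, byte[] payload, boolean last) {}

    // Split one message into chunks of at most CHUNK_SIZE bytes.
    static List<Chunk> split(long messageId, byte[] message) {
        List<Chunk> chunks = new ArrayList<>();
        for (int off = 0; off < message.length || chunks.isEmpty(); off += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, message.length - off);
            byte[] payload = Arrays.copyOfRange(message, off, off + len);
            chunks.add(new Chunk(messageId, payload, off + len >= message.length));
        }
        return chunks;
    }

    // Reassemble chunks that may arrive interleaved across messages.
    static class Reassembler {
        private final Map<Long, ByteArrayOutputStream> partial = new HashMap<>();

        // Returns the complete message when the last chunk arrives, else null.
        byte[] accept(Chunk c) {
            ByteArrayOutputStream buf =
                partial.computeIfAbsent(c.messageId(), id -> new ByteArrayOutputStream());
            buf.writeBytes(c.payload());
            if (!c.last()) return null;
            partial.remove(c.messageId());
            return buf.toByteArray();
        }
    }
}
```

A real handler would also need to frame each chunk on the wire (length prefix, id, flag), but the interleaving logic is no more than this.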

Chunks are just HTTP messages. Try using a socket client which buffers your file and writes it to Netty chunk by chunk over a single connection, then use Netty's HTTP chunk aggregator handler to decode the chunks on the server. The client implementation is pretty simple, and most of the server-side implementation can be found under org.jboss.netty.example.http.upload .

If you have control of both client and server, use WebSockets. You are free to invent your own file transfer protocol on top of them, including notifications and whatnot. Kermit goes WebSocket ;-)

Related

HttpServer streaming request body in Java

I am curious where all the data is stored until I read the request body.
For example, suppose a file is being uploaded to the server and a Java program receives it. It is impossible to hold the whole file content in buffers if the file is very big, say 100 GB.
Does Java stream this file from the remote computer? That is, the remote computer sends a small part of the data, and Java receives this part and waits for the next one; when the remote computer determines that the server has read the first part, it sends the second part, and so on.
Does Java's HttpServer work this way, or does it store the whole file on disk as Apache+PHP does?
The mechanism you're looking for is implemented by the TCP stack of the operating system. Buffers are used both on the sending and the receiving side.
TCP works roughly like this: the receiving machine replies to the sender with "OK, got it, now send the next part", known as an ACK packet. This mechanism is also responsible for adjusting the transfer speed to the capacity of your connection, instead of sending data too fast and causing packet loss.
It is a well-oiled machine, but when something goes wrong it usually manifests as a timeout. (In your example, if you wait a long time before reading the request body, the sending machine will eventually give up.)
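To see the streaming behavior concretely, here is a small sketch using the JDK's built-in com.sun.net.httpserver.HttpServer. The handler reads the request body through an InputStream in fixed-size pieces, so memory stays bounded regardless of the upload size; the OS-level TCP window keeps the sender from running ahead of the reader. The context path and buffer size are arbitrary choices for the example.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class StreamingUpload {
    public static HttpServer start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/upload", (HttpExchange exchange) -> {
            long total = 0;
            byte[] buf = new byte[8192];            // fixed-size buffer, never the whole body
            try (InputStream in = exchange.getRequestBody()) {
                int n;
                while ((n = in.read(buf)) != -1) {  // each read pulls at most 8 KB
                    total += n;                      // process the piece here (e.g. append to a file)
                }
            }
            byte[] reply = Long.toString(total).getBytes();
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(reply);
            }
        });
        server.start();
        return server;
    }
}
```

Nothing here ever holds more than one buffer's worth of the upload in the handler; everything else waits in the kernel's socket buffer or on the sender's side.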

Is there any way to read from one Netty channel only as fast as you can write to another?

We're experiencing an issue in LittleProxy where OutOfMemoryErrors are popping up when reading from a fast server LittleProxy is proxying access to and writing to a slow client configured to use the proxy. The problem is that the data coming in from the server buffers up in memory faster than we can write it to the client. LittleProxy is just a simple HTTP proxy built atop Netty.
Is there any easy way to throttle the read from the remote server to be exactly the same speed as the client is able to read it?
See:
https://github.com/adamfisk/LittleProxy/issues/53
and
https://github.com/adamfisk/LittleProxy
You could have a look at the source code of org.jboss.netty.example.proxy.HexDumpProxyInboundHandler.
It sets the inbound channel's readable flag according to the outbound channel's status. Hope this helps.
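For contrast, with plain blocking I/O this throttling happens by itself: a copy loop cannot issue the next read() until the previous write() has been accepted, so a slow consumer stalls the producer through a small fixed buffer. Netty's non-blocking writes never stall, which is exactly why the HexDumpProxy example has to toggle the readable flag instead. A minimal blocking sketch:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ThrottledCopy {
    // Reads from 'in' only as fast as 'out' accepts data: at most one
    // 8 KB buffer is ever in flight inside this loop.
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);  // blocks until the consumer keeps up
            total += n;
        }
        out.flush();
        return total;
    }
}
```

In Netty the equivalent backpressure is expressed with `channel.setReadable(false)` when the other channel reports it is not writable, and re-enabling reads once it drains.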

Streaming data in Java: RMI vs Socket

I have a server that needs to stream data to multiple clients as quickly as possible in the Observer pattern.
At least 200 messages need to be sent to each client per second until the client disconnects from the server, and each message consists of 8 values of several primitive types. Because each message needs to be sent as soon as it is created, messages cannot be combined into one large message. Both the server and the clients reside on the same LAN.
Which technology is more suitable to implement streaming under this situation, RMI or socket?
The overhead of RMI is significant so it would not be suitable. It would be better to create a simple protocol and use sockets to send the data.
Depending on the latency that is acceptable, you should configure the socket buffer sizes and turn off the Nagle algorithm.
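That setup can be sketched with the standard java.net.Socket options. The concrete buffer sizes below are placeholders to tune for your network, not recommended values; the OS may also round them.

```java
import java.net.Socket;
import java.net.SocketException;

public class LowLatencySocket {
    // Configure a socket for many small, latency-sensitive messages.
    // Call before connecting.
    public static Socket configure(Socket socket) throws SocketException {
        socket.setTcpNoDelay(true);                 // disable Nagle: flush small writes immediately
        socket.setSendBufferSize(64 * 1024);        // illustrative size, tune per deployment
        socket.setReceiveBufferSize(64 * 1024);
        socket.setPerformancePreferences(0, 1, 0);  // hint: prefer low latency over bandwidth
        return socket;
    }
}
```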
I would not use RMI for this; RMI is there for Remote Method Invocation, i.e. when the client wants to execute a method (some business logic) on the server side.
Sockets are OK for this, but you might want to consider JMS (Java Messaging Service) for this specific scenario. JMS supports something called a Topic, which is essentially a broadcast to all listeners interested in that topic. It is also generally optimised to be very fast.
You can use something like Apache ActiveMQ to achieve what you want. You also have lots of options such as persistence (in case the queue goes down messages remain in queue), message expiry (in case you want the messages to become outdated if a client does not pick them up) etc.
You can obviously implement all this using normal sockets and take care of everything yourself, but JMS provides it for you. You can send text or binary data, or even serialized objects (I don't personally recommend the latter).
RMI is a request/response protocol, not a streaming protocol. Use TCP.

netty sever-to-server data streams

I have two Java netty servers that need to pass lots of messages between themselves fairly frequently and I want it to happen fairly promptly.
I need a TCP socket between the two servers that I can send these messages over.
These messages are already-packed byte[] arrays and are self-contained.
The servers are currently both running HTTP interfaces.
What is the best way to do this?
For example, WebSockets might be a good fit, yet I am unable to find any WebSocket client examples in Netty...
I'm a Netty newbie, so I would need some strong, simple examples. It surely can't be that hard?!
Since you mentioned HTTP, you could look at the HttpStaticFileServer in the examples.
When established, a TCP connection is a Channel. To send your messages, you need to write them to a ChannelBuffer and call channel.write.
Of course, this does not cover message borders. The Telnet example shows a case where the messages are delimited by the newline character.
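Since the messages are already-packed, self-contained byte[] arrays, the simplest way to mark message borders is a length prefix: write a 4-byte big-endian length, then the payload. The sketch below shows the format with plain blocking streams; inside a Netty pipeline the equivalent is handled by LengthFieldPrepender on the writing side and LengthFieldBasedFrameDecoder on the reading side.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class Framing {
    // Write one message as [4-byte length][payload].
    public static void writeFrame(DataOutputStream out, byte[] message) throws IOException {
        out.writeInt(message.length);
        out.write(message);
        out.flush();
    }

    // Read one message back: the length prefix tells us exactly how
    // many payload bytes belong to this frame.
    public static byte[] readFrame(DataInputStream in) throws IOException {
        int length = in.readInt();
        byte[] message = new byte[length];
        in.readFully(message);  // blocks until the whole payload has arrived
        return message;
    }
}
```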

How to get Acknowlegement in TCP communication in Java

I have written a socket program in Java. Both server and client can send/receive data to each other. But I found that when the client sends data to the server over TCP, TCP internally sends an acknowledgement to the client once the data is received by the server. I want to detect or handle that acknowledgement. How can I read or write data in TCP so that I can handle the TCP acknowledgement? Thanks.
This is simply not possible, even if you were programming in C directly against the native OS sockets API. One of the points of the sockets API is that it abstracts this away for you.
The sending and receiving of data at the TCP layer doesn't necessarily correlate with your Java calls to send or receive data. The data you send in one Java call may be broken into several pieces which may be buffered, sent and received independently, or even received out of order.
See here for more discussion about this.
Any data sent over a TCP socket is acknowledged in both directions. Data sent from client to server is treated the same as data sent from server to client as far as TCP wire communication and application signaling are concerned. As @Eric mentions, there is no way to get at that signaling.
It may be that you are talking about timing out while waiting for the response from the server. That you'd like to detect if a response is taking too long. Is it possible that the client's message is larger than the server's response so the buffering is getting in the way of the response but not the initial request? Have you tried to use non-blocking sockets?
You might want to take a look at the NIO code if you have not already done so. It has a number of classes that give you more fine grained control over socket communications.
This is not possible in pure Java, since Java's network API only exposes sockets, which hide all the TCP details.
You need an API that can access IP-layer data so you can read the TCP headers. DLPI is the most popular API for doing this:
http://www.opengroup.org/onlinepubs/9638599/chap1.htm
Unfortunately, there is no Java implementation of such an interface. You have to use native code through JNI to do this.
I want to detect or handle that acknowledgement.
There is no API for receiving or detecting the ACKs at any level above the protocol stack.
Rethink your requirement. Knowing that the data has reached the server isn't any use to an application. What you want to know is that the peer application has received it, in which case you have to get the peer application to acknowledge it at the application protocol level.
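An application-level acknowledgement can be as small as one reply line: the server sends an explicit "ACK" only after it has actually processed the message, which is the guarantee a TCP-level ACK can never give you. The line-based wire format below is an assumption for illustration.

```java
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;

public class AppLevelAck {
    // Server side: read one message line, process it, then acknowledge.
    public static void serveOnce(ServerSocket listener) throws IOException {
        try (Socket peer = listener.accept();
             BufferedReader in = new BufferedReader(new InputStreamReader(peer.getInputStream()));
             PrintWriter out = new PrintWriter(peer.getOutputStream(), true)) {
            String message = in.readLine();   // the application has now received the data
            out.println("ACK " + message.length());
        }
    }

    // Client side: send, then block until the application-level ACK arrives.
    public static String sendAndWait(String host, int port, String message) throws IOException {
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            out.println(message);
            return in.readLine();             // proof the peer application processed the message
        }
    }
}
```

Only this reply tells the client the peer application saw the data; everything below it (TCP ACKs, kernel buffers) is invisible and irrelevant to that question.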
