Netty dynamic pipeline configuration - java

This may be a "newb" question but here it goes anyway. We have a netty server up and running and we want it to support multiple different protocols like straight tcp, http, udp etc.. I am trying to write a class to be more dynamic what handlers/decoders/encoders we add to the pipeline on every request so we only add the layers we need depending on what type of traffic it is. I've got straight tcp figured out because we are encoding special bytes but I'm having a hard time coming up with a clever way to tell if its HTTP traffic vs straight tcp based off a ChannelBuffer or byte array.
My thoughts have been along the lines of reading in some bytes and looking for a string like 'GET' or 'POST'; I assume an HttpRequest would start with one of those. Is what I'm trying to do worth it? Does anyone have any helpful ideas?

I think you want to have a look at the port unification example, which does something very close to what you want. In short, it is possible. For more details, please check the example at [1].
[1.a (master_deprecated)] https://github.com/netty/netty/blob/master_deprecated/example/src/main/java/io/netty/example/portunification/PortUnificationServerHandler.java
[1.b (4.1)] https://github.com/netty/netty/blob/4.1/example/src/main/java/io/netty/example/portunification/PortUnificationServerHandler.java
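For reference, the core idea of that example is a first handler that peeks at the initial bytes and rewires the pipeline before removing itself. Below is a minimal sketch against the Netty 4.x API; the HTTP check and the handlers being added are illustrative, not the exact code from the example.

```java
import java.util.List;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.handler.codec.http.HttpServerCodec;

public class ProtocolDetector extends ByteToMessageDecoder {

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < 5) {
            return; // not enough bytes yet to decide
        }

        ChannelPipeline p = ctx.pipeline();
        if (looksLikeHttp(in)) {
            p.addLast(new HttpServerCodec());
            // ... add your HTTP business handlers here ...
        } else {
            // ... add your custom TCP decoders/handlers here ...
        }
        p.remove(this); // the bytes buffered so far are handed on to the new handlers
    }

    // HTTP requests start with an ASCII method name such as GET, POST, PUT, ...
    private static boolean looksLikeHttp(ByteBuf in) {
        int i = in.readerIndex();
        char c1 = (char) in.getUnsignedByte(i);
        char c2 = (char) in.getUnsignedByte(i + 1);
        return (c1 == 'G' && c2 == 'E')                 // GET
            || (c1 == 'P' && (c2 == 'O' || c2 == 'U'))  // POST, PUT
            || (c1 == 'H' && c2 == 'E')                 // HEAD
            || (c1 == 'D' && c2 == 'E')                 // DELETE
            || (c1 == 'O' && c2 == 'P');                // OPTIONS
    }
}
```

The detector stays in the pipeline only long enough to make the decision; once it removes itself, the already-buffered bytes flow into whichever codec it installed.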

Related

netty: redirecting requests with serialization in between

I'm building a proxy with two endpoints and a custom protocol in between.
That means I receive the original request on one side, serialize it for the custom protocol, de-serialize it on the other end and send it to a defined target server. Same in reverse for the response.
This all works wonderfully. The problem is that the custom protocol in the middle has a maximum message size of about 5 MB, but I need to be able to post files bigger than that.
I now have an idea, but I'm not sure whether it is possible, and I would be very happy about some advice.
Right now, I'm collecting all the HttpObjects and sending the whole request at once over the custom protocol. On the other end, I'm parsing a FullHttpRequest, modifying the Host, URI and so on, and sending it to the target server. Then the same procedure again for the response. This is of course a waste of memory and time.
Here is what I'm not sure about:
I could immediately forward each HttpObject I receive over the custom protocol, without collecting the whole request first and sending it in one piece.
On the other end, I could still manipulate the single HttpRequest object and then just pump the remaining HttpObjects, one after the other, into the outgoing channel until the LastHttpContent object arrives. Would this work?
I thought I'd rather ask first before putting too much effort into it, just to find out it's a stupid idea.
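Leaving the custom-protocol hop aside, here is a rough sketch of the streaming idea on the outgoing side (assuming Netty 4.1; the handler name, the outboundChannel field and the target host are made up for illustration): rewrite only the HttpRequest head and pass every subsequent chunk straight through until LastHttpContent.

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpObject;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.LastHttpContent;
import io.netty.util.ReferenceCountUtil;

public class StreamingForwarder extends SimpleChannelInboundHandler<HttpObject> {

    private final Channel outboundChannel; // assumed: already connected to the target server
    private final String targetHost;       // assumed: configured elsewhere

    public StreamingForwarder(Channel outboundChannel, String targetHost) {
        this.outboundChannel = outboundChannel;
        this.targetHost = targetHost;
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) {
        if (msg instanceof HttpRequest) {
            // Only the head of the message needs modifying; the body chunks pass through untouched.
            HttpRequest req = (HttpRequest) msg;
            req.headers().set(HttpHeaderNames.HOST, targetHost);
        }
        ReferenceCountUtil.retain(msg); // keep the chunk alive after channelRead0 returns
        if (msg instanceof LastHttpContent) {
            outboundChannel.writeAndFlush(msg); // last chunk: flush the whole request
        } else {
            outboundChannel.write(msg);
        }
    }
}
```

In this shape, memory usage is bounded by one chunk at a time rather than by the full request.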

How do I start reading bytes through an input stream from a specific location in the stream?

I am using the URL class in Java and I want to read bytes through an InputStream from a specific byte offset in the stream, instead of using the skip() method, which takes a lot of time to get to that location.
I suppose it is not possible, and here is why: when you send a GET request, the remote server does not know that you are only interested in bytes 100 to 200 - it sends you the full document/file. So you need to read those bytes even though you don't want to handle them - that is why skip() is slow.
But I am sure you can tell the server (some of them support it, some don't) that you only want the file from byte 100 onward.
Also, see this question for in-depth knowledge about the skip() mechanics: How does the skip() method in InputStream work?
The nature of streams means you will need to read through all the data to get to the specific place you want to start from. You will not get faster than skip(), unfortunately.
The simple answer is that you can't.
If you perform a GET that requests the entire file, you will have to use skip() to get to the part that you want. (And in fact, the slowness is most likely because the server has to send all of the data that is being skipped to the client. That is how TCP/IP works ...)
However, there is a possible alternative. The HTTP 1.1 specification supports fetching parts of a document using the Range header. If your server supports this, you can ask it to send just the range of the document that you are interested in. However, you may need to deal with the case where the server ignores the Range header and sends the entire document anyway.
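As a rough illustration of the Range approach with plain HttpURLConnection (the URL and byte range are placeholders), including the fallback to skip() when the server ignores the header:

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangeRequest {

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/somefile.bin"); // placeholder URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Range", "bytes=100-199");    // ask for bytes 100..199 only

        int status = conn.getResponseCode();
        try (InputStream in = conn.getInputStream()) {
            if (status == HttpURLConnection.HTTP_PARTIAL) {
                // 206 Partial Content: the stream starts at byte 100
            } else {
                // 200 OK: server ignored the Range header; fall back to skip()
                long toSkip = 100;
                while (toSkip > 0) {
                    toSkip -= in.skip(toSkip);
                }
            }
            // ... read the bytes you are actually interested in ...
        }
    }
}
```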

What is the fastest way to output a large amount of data?

I have a JAX-RS web service that queries a DB2 z/OS database and returns about 240 MB of data in a result set. I then create an OutputStream to send this data to the client by looping through the result set and adding a few XML tags to the output.
I am confused about whether to use PrintWriter, BufferedWriter or OutputStreamWriter. I am looking for the fastest way to deliver the data. I also don't want the JVM to hold onto this data any longer than it needs to, so I don't use up its memory.
Any help is appreciated.
You should use a BufferedWriter.
Call flush() frequently.
Enable gzip for compression.
Also start thinking about a different way of doing this: can your data be paginated? Do you need all the data in one request?
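A minimal sketch of that advice in a JAX-RS resource, using StreamingOutput so the document is never built up in memory; the resource path and the fetchRows()/toXml() helpers are placeholders for the real result-set loop:

```java
import java.io.BufferedWriter;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import java.util.Collections;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.StreamingOutput;

@Path("/export")
public class ExportResource {

    @GET
    @Produces("application/xml")
    public Response export() {
        StreamingOutput body = (OutputStream raw) -> {
            BufferedWriter writer = new BufferedWriter(
                    new OutputStreamWriter(raw, StandardCharsets.UTF_8));
            writer.write("<rows>");
            int written = 0;
            for (String row : fetchRows()) {   // placeholder: iterate the JDBC ResultSet here
                writer.write(toXml(row));      // placeholder: format one row as XML
                if (++written % 1000 == 0) {
                    writer.flush();            // push data to the client, keep the heap small
                }
            }
            writer.write("</rows>");
            writer.flush();
        };
        return Response.ok(body).build();
    }

    // Stubs standing in for the real database access and formatting.
    private Iterable<String> fetchRows() { return Collections.emptyList(); }
    private String toXml(String row) { return "<row>" + row + "</row>"; }
}
```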
If you are sending large binary data, you probably don't want to use XML. When XML is used, binary data is usually represented in base64, which becomes larger than the original binary and uses quite a lot of CPU for the conversion to base64.
If I were you, I'd send the binary data separately from the XML. If you are using a web service, an MTOM attachment could help. Otherwise you could put a reference to the binary data in the XML and let the application download the binary data separately.
As for the fastest way to send binary data: if you are using WebLogic, just writing to the response's output stream is fine. That output stream is almost certainly buffered, and whatever you do probably won't change the performance much anyway.
Turning on gzip can also help, depending on what you are sending (e.g. if you are sending JPEGs or other data that is already compressed it won't help much, but if you are sending raw text it can help a lot).
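If the container does not handle compression for you, a rough sketch of doing it by hand with java.util.zip (you also have to set "Content-Encoding: gzip" on the response, which is container specific, and only do this when the client sent "Accept-Encoding: gzip"):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public final class GzipBody {

    // Wrap the raw response stream in a GZIPOutputStream and write through it.
    public static void writeCompressed(OutputStream rawOut, Iterable<String> rows) throws IOException {
        try (BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(
                new GZIPOutputStream(rawOut), StandardCharsets.UTF_8))) {
            for (String row : rows) {
                writer.write(row);
            }
        } // closing the writer finishes the gzip trailer and flushes everything
    }
}
```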
One solution (which might not work for you) is to spawn a job/thread that creates a file and then notifies the user when the file is ready to download. This way you're not tied to the bandwidth of the client connection (and you can even compress the file properly before the client downloads it).
Some business intelligence and data-crunching applications do this, especially if the process takes some time to generate the data.
The maximum output speed will be limited by network bandwidth, and I am sure any Java OutputStream will be fast enough that you won't notice the difference.
The choice depends on the data to send: if it is text (lines), a PrintWriter is easy; if it is a byte array, use an OutputStream.
To avoid holding too much data in the buffers, you should call flush() every x KB or so.
You should never use PrintWriter to output data over a network. First of all, it creates platform-dependent line breaks. Second, it silently catches all I/O exceptions, which makes it hard for you to deal with those exceptions.
And if you're sending 240 MB as XML, then you're definitely doing something wrong. Before you start worrying about which stream class to use, try to reduce the amount of data.
EDIT:
The advice about PrintWriter (and PrintStream) came from a book by Elliotte Rusty Harold. I can't remember which one, but it was a few years ago. I think that ServletResponse.getWriter() was added to the API after that book was written - so it looks like Sun didn't follow Rusty's advice. I still think it was good advice - for the reasons stated above, and because it can tempt implementation authors to violate the API contract in order to get predictable behavior.

SSL overhead in Java

I'm using org.apache.commons.ssl to make an SSL server in Java.
I'm facing a strange problem: I send 500 KB of data over the SSL stream and receive 500 KB of data on the client side, but the data transferred over the TCP connection is 20 times bigger.
What could be the cause? A bad configuration of the SSL parameters?
I'm using a real trusted SSL certificate for my tests.
I tried to sniff and decode the SSL stream with Wireshark, but it didn't work; I wasn't able to see the decrypted data. Or maybe the stream was encrypted in more than one pass?
The TCP packets were 1525 bytes each - nothing abnormal as far as I could see.
If somebody has an idea ...
Thanks!
Olivier
Sounds like you are only sending one byte at a time over the wire. The overhead is then the TCP/IP packet encapsulation.
Renegotiations won't account for your 20x explosion. Are you using BufferedOutputStreams around the SSL socket's output streams in both directions, i.e. at the server and the client? If you don't use buffered output and your code writes one byte at a time, you can see a 40x explosion due to the SSL record protocol and, geometrically, another 40x explosion due to TCP segment overhead; the latter is usually mitigated by the Nagle algorithm, but some people turn that off, a little too keenly IMHO.
@EJP: you were right, I made a mistake in my code: I was wrapping a BufferedOutputStream around a SomeStuffOutputStream, instead of wrapping a SomeStuffOutputStream around a BufferedOutputStream.
The BufferedOutputStream must be at the lowest level, just above the raw socket's OutputStream.
Now it's working perfectly!
It was a misconception on my part, and I'm just beginning to understand why I still saw "normal" packet sizes - because of the SSL record protocol. I'll be more careful next time :)
Thanks to all.
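For later readers, a small sketch of the wrapping order described above; SomeStuffOutputStream stands in for the poster's own stream class, and the host, port and buffer size are placeholders:

```java
import java.io.BufferedOutputStream;
import java.io.FilterOutputStream;
import java.io.OutputStream;

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SslBufferingSketch {

    // Stand-in for the poster's own framing/encoding stream.
    static class SomeStuffOutputStream extends FilterOutputStream {
        SomeStuffOutputStream(OutputStream out) { super(out); }
    }

    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443)) { // placeholder host
            OutputStream raw = socket.getOutputStream();

            // Right: buffer directly above the SSL stream, application stream on top,
            // so the SSL layer sees large writes and produces few records.
            OutputStream out = new SomeStuffOutputStream(new BufferedOutputStream(raw, 8 * 1024));

            // Wrong (the 20x blow-up): new BufferedOutputStream(new SomeStuffOutputStream(raw)),
            // because the inner stream then writes tiny chunks straight into SSL records.

            out.write(new byte[500 * 1024]); // 500 KB test payload
            out.flush();
        }
    }
}
```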

send real time voice using udp

Hi everybody, could you please help me? I have written Java code for sending string messages between a client and a server using a UDP socket, but now I want to send real-time voice. Could you please give me some notes on how to do it?
I can point you part of the way: you probably want to use the Real-time Transport Protocol (RTP), which is more or less the standard for sending audio or video in real time over the net. However, the implementation is not straightforward, and you should use a helper library like jlibrtp. There is also an RTP packetizer in the Java Media Framework (JMF), but you don't want to go there...
UDP has no quality-of-service guarantee, so when sending your packets of data you will need to add some sort of sequence number to determine how to put the data back together. For example, you could send 3 datagram packets in order from the server, yet the client may get them in a different order (2, 1, 3). Or it may not get one of them at all, in which case you either want it resent (doubtful for voice) or simply ignore it and move on after some timeout.
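A bare-bones sketch of that sequence-number idea with a plain DatagramSocket (host, port and chunk size are placeholders; RTP does all of this properly for you):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class SequencedSender {

    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            InetAddress target = InetAddress.getByName("127.0.0.1"); // placeholder receiver
            int seq = 0;
            byte[] audioChunk = new byte[320]; // e.g. 20 ms of 8 kHz 16-bit mono audio

            // Prepend a 4-byte sequence number so the receiver can reorder
            // packets and drop late or duplicate ones.
            ByteBuffer buf = ByteBuffer.allocate(4 + audioChunk.length);
            buf.putInt(seq++);
            buf.put(audioChunk);
            socket.send(new DatagramPacket(buf.array(), buf.position(), target, 5004)); // placeholder port
        }
    }
}
```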
Look into using the Real-time Transport Protocol, RFC 3550 (http://en.wikipedia.org/wiki/Real-time_Transport_Protocol), as the transport over UDP, with RTCP as the companion control protocol (usually carried over UDP on the adjacent port).
