Netty - Second packet is ignored - java

If I send two packets immediately, the second packet is usually ignored by the client. There are some cases in which the client will pick it up, but usually it's as if the packet was never sent at all. From my debugging it seems the server is definitely sending it, but the client never has a clue.
I'm using a ByteBuf, which is Netty 4's replacement for ChannelBuffer.
Here's how I'm sending the information over the network:
getChannel().writeAndFlush(buffer.retain());
Now the strange part is, nothing is getting mixed up if I send these together. The original packet data is all together like it should be, no problems.
The second packet just doesn't come through at all.
So, for example:
ByteBuf bufferA = Unpooled.buffer();
ByteBuf bufferB = Unpooled.buffer();
bufferA.writeInt(1);
bufferB.writeInt(2);
send(bufferA);
send(bufferB);
The client will only read bufferA and completely ignore bufferB; it won't register a single byte of it.
If I space these out using Thread.sleep, the client reads them fine.
Not sure what to do?
Server and Client are both using netty.
EDIT: I'm currently using a "cheat" fix that runs a scheduled executor for this case (where I need to send two packets one after another), but this doesn't fix the underlying issue at all.

Maybe you are trying to send the same buffer twice? In that case you need to call duplicate() on it first, because otherwise both writes share the same readerIndex and writerIndex, which are adjusted when a ByteBuf is written to the underlying socket.
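A minimal sketch of that fix, assuming Netty 4 (the SendTwice helper and its names are invented for illustration, not part of the question's code):

import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;

final class SendTwice {
    // Hypothetical helper: writes the same payload to a channel twice.
    // duplicate() gives each write its own readerIndex/writerIndex, and
    // retain() bumps the shared reference count so the content is not
    // freed when Netty releases the buffer after the first write.
    static void sendTwice(Channel channel, ByteBuf payload) {
        channel.writeAndFlush(payload.duplicate().retain());
        channel.writeAndFlush(payload.duplicate().retain());
        payload.release(); // drop our own reference once both writes are queued
    }
}

In Netty 4.1, payload.retainedDuplicate() combines the duplicate() and retain() calls.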

Related

Ensure data delivery in socket programming

How can I be sure that data is successfully delivered to the other end in socket programming?
outStream.write() doesn't guarantee that bytes are received on the other end. I can force the server to send back some confirmation data, but how long should the client wait for it? If the wait is too short, the data may be delivered to the server just as I throw the timeout exception in the client (which then shows an error dialog, even though the server actually received the data). On the other hand, I don't want to wait too long.
Should the client wait some time and, if confirmation is received, send a third "commit" message to the server, which then passes the data on for further processing (so first the client writes, then the server replies, then the client confirms)? But then again, if the commit message is not received by the server, the client thinks the data was sent successfully while the server eventually discards it, because it never received the commit message. And so on; the bouncing never ends...
How is this situation generally handled?
Every tutorial that I read is just about creating/closing sockets, and sending data on client side and receiving it on server side.
If you have links to blogs which explain this problem (or even books), that would be good too.
[EDIT]
I should clarify some things. I'm using Java for the client and server, and later I will create a C# client. Everything is working perfectly for now. Both client and server are on the same LAN and I have never had any real problems. The scenario explained above is just theoretical, because I would like to cover as much as possible, including error handling.
I know TCP guarantees delivery, but in Java, out.write() doesn't block until the underlying TCP stack has delivered the data or failed; it simply returns, so I don't know whether sending succeeded or not. There is no callback function. I'm starting with socket programming, so maybe there is a very simple solution which I don't know about. All I need is for the client to know that the server received the message (if that is even possible).
If you have this kind of extreme need for reliability, you need to build that into your application and protocol. One way I have done that in the past is as follows.
Say you have a stream of "objects" (objects here defined in whatever way makes sense to your application) that need to be communicated from client C to server S. Associate a unique identifier with each object on the client side. Then have C send each object along with its identifier to S. But have C keep its copy of the object for now (in memory, or on disk, or whatever makes sense).
For each object S receives, it stores the object together with its unique identifier in its own local data store, and sends back an acknowledgment to C that it received the object (using the identifier to communicate that). C can now delete that object from its data store (strictly speaking it can delete all the ones it sent prior to that object as well -- since TCP guarantees sequenced delivery -- but that slightly complicates things).
This process can continue indefinitely and C never needs to explicitly wait for a confirmation for any one object. It simply maintains a local copy of each object. As long as the connection stays up, S will continually acknowledge every object it has received.
If the connection is broken for any reason, C assumes that S has not received any object it sent since the most recently received acknowledgment. When the connection is re-established, C may therefore resend a few objects that S previously received but since S stored the unique identifier along with each object, it simply acknowledges again that it received the object.
If S hangs for some reason, then eventually buffers between client and server will fill up and C's send will block. The client may need to be prepared for this eventuality.
At the end of the stream of objects -- if there is an end -- C will need to wait for the last object to be acknowledged. There's simply no way around that, and so you will need to decide how long it's appropriate to wait before C gives up and declares an error.
(Of course, this is all essentially duplicating at the application layer what TCP is doing at the transport layer: acknowledging what was actually received with the ability for the sender to re-transmit anything that was lost.)
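A minimal client-side sketch of this scheme (the class and method names are invented for illustration; serialization of the "objects" and the loop that reads acknowledgments are assumed to exist elsewhere):

import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

class ReliableSender {
    // Sorted map so that a resend after reconnect goes out in the original order.
    private final ConcurrentSkipListMap<Long, byte[]> unacked = new ConcurrentSkipListMap<>();
    private long nextId = 0;

    // Send one object: remember it locally, then write id + length + payload.
    synchronized void send(DataOutputStream out, byte[] object) throws IOException {
        long id = nextId++;
        unacked.put(id, object);
        out.writeLong(id);
        out.writeInt(object.length);
        out.write(object);
        out.flush();
    }

    // S acknowledged this id; since TCP delivery is sequenced, every
    // earlier id is implicitly acknowledged too.
    void acknowledged(long id) {
        unacked.headMap(id, true).clear();
    }

    // After reconnecting, resend everything not yet acknowledged. S stores
    // ids alongside objects, so a duplicate is re-acknowledged, not re-stored.
    synchronized void resendUnacked(DataOutputStream out) throws IOException {
        for (Map.Entry<Long, byte[]> e : unacked.entrySet()) {
            out.writeLong(e.getKey());
            out.writeInt(e.getValue().length);
            out.write(e.getValue());
        }
        out.flush();
    }
}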
TCP:
TCP guarantees delivery at layer 4 of the OSI model. TCP is built on acknowledgments: the receiving party must confirm receipt of each segment. So if data isn't arriving, either something is wrong in your code or your network is malfunctioning. If the packets aren't making it to their destination, make sure you have properly bound the TCP server to the port and that the destination address is correct. While waiting for a packet to arrive, put a receive timeout in place to prevent your application from hanging on the receive.
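For the receive timeout, java.net.Socket has setSoTimeout(); a sketch (the host, port, and 5-second value are placeholders):

import java.io.DataInputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class AckWithTimeout {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 9000)) {
            socket.setSoTimeout(5000); // a read now fails after 5 s instead of blocking forever
            DataInputStream in = new DataInputStream(socket.getInputStream());
            try {
                int ack = in.readInt(); // wait for the server's confirmation
                System.out.println("server confirmed: " + ack);
            } catch (SocketTimeoutException e) {
                // No confirmation in time: decide whether to retry or report an error.
                System.err.println("timed out waiting for confirmation");
            }
        }
    }
}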

How does Java NIO break up messages?

I'm writing a toy Java NIO server paired with a normal Java client. The client sends a string message to the server using plain Socket. The server receives the message and dumps the content to terminal.
I've noticed that the same message from the client is broken up into ByteBuffers differently every single time. I understand this is intended behaviour of NIO, but I'd like to find out roughly how NIO decides where to chop up a message.
Example: Sending string "this is a test message" to server. The following are excerpts of server loggings (each line represents 1 bytebuffer received).
Run 1:
Server receiving: this is a test message
Run 2:
Server receiving: t
Server receiving: his is a test message
Run 3:
Server receiving: this is
Server receiving: a test message
UPDATE - Issue Resolved
I have installed Wireshark to analyse the packets and it has become apparent that the random "break up" was due to me using DataOutputStream for the writer, which sends the message character by character! So there was a packet for each character...
After changing the writer to BufferedWriter, my short message is now sent as a single packet, as expected. So the truth is Java NIO actually did the clever thing and merged my tiny packets into 1 to 2 ByteBuffers!
UPDATE2 - Clarification
Thank you all for your replies. Thank you @StephenC for pointing out that unless I encode the message myself (yes, I did call flush() after writing to the BufferedWriter), there's always the possibility of my message arriving across multiple packets.
So the truth is Java NIO actually did the clever thing and merged my tiny packets into 1 to 2 ByteBuffers!
Actually, no. The merging is happening in the BufferedWriter layer. The buffered writer will only deliver a "bunch" of bytes to the NIO layer when the application flushes or closes the stream, or when the BufferedWriter's buffer fills up.
I was in fact referring to my first attempt with DataOutputStream (I got it from an example online, which obviously is incorrect use of the class now that you've pointed it out). BufferedWriter was not involved. My simple writer in that case went like
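// writeBytes() emits the string one character (low byte) at a time, and with
// nothing buffering between it and the socket, each byte can leave in its own packet: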
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
out.writeBytes("this is a test message");
Wireshark confirmed that this message was sent (server on localhost) one character per packet: 22 packets in total for the actual message, not counting the ACKs etc.
I'm probably wrong, but this behaviour seems to suggest that the NIO server combined these 22 packets into 1-2 ByteBuffers?
The end game I'm trying to achieve here is a simple Java NIO server capable of receiving requests and data streams over TCP from various clients, some of which may be written in C++ or C# by third parties. It's not time critical, so the clients can send all the data in one go and the server can process it at its own pace. That's why I've written a toy client in Java using a plain Socket rather than an NIO client. The client in this case can't really manipulate a ByteBuffer directly, so I probably need some sort of message format. Could I make this work?
If you are sending data over a TCP/IP socket, then there are no "messages" as such. What you send and receive is a stream of bytes.
If you are asking if you can send a chunk of N bytes, and have the receiver get exactly N bytes in a single read call, then the answer is that there is no guarantee that will happen. However, it is the TCP/IP stack that is "breaking up" the "messages". Not NIO. Not Java.
Data sent over a TCP/IP connection is ultimately broken into network packets for transmission. This typically erases any "message" structure based on the original write request sizes.
If you want a reliable message structure over the top of the TCP/IP byte stream, you need to encode it in the stream itself; e.g. using an "end-of-message" marker or prefixing each message with a byte count. (If you want to use fancy words, you need to implement a "message protocol" over the top of the TCP/IP stream.)
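A sketch of the byte-count variant over plain streams (the class name is illustrative and UTF-8 is an assumption):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// A simple message protocol: each message is a 4-byte big-endian length
// prefix followed by exactly that many payload bytes.
class LengthPrefixed {
    static void writeMessage(DataOutputStream out, String message) throws IOException {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        out.writeInt(payload.length); // the byte-count prefix
        out.write(payload);
        out.flush();
    }

    static String readMessage(DataInputStream in) throws IOException {
        int length = in.readInt();      // read the prefix...
        byte[] payload = new byte[length];
        in.readFully(payload);          // ...then block until the whole message has arrived
        return new String(payload, StandardCharsets.UTF_8);
    }
}

Note that readFully() hides the packet boundaries entirely: it keeps reading until the byte count is satisfied, however the network split the data.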
Concerning your update, I think there are still some misconceptions:
... it became apparent that the random "break up" was due to me using DataOutputStream for the writer, which sends the message character by character! So there was a packet for each character...
Yes, lots of small writes to a socket stream may result in severe fragmentation at the network level. However, it won't always. If there is sufficient "back pressure" due to either network bandwidth constraints or the receiver reading slowly, then this will lead to larger packets.
After changing the writer to BufferedWriter, my short message is now sent as a single packet, as expected.
Yes. Adding buffering to the stack is good. However, you are probably doing something else; e.g. calling flush() after each message. If you didn't then I would expect a network packet to contain a sequence of messages and partial messages.
What is more, if the messages are too large to fit into a single network packet, or if there is severe back-pressure (see above) then you are liable to get multiple / partial messages in a packet anyway. Either way, the receiver should not rely on getting one (whole) message each time it reads.
In short, you may not have really resolved your issue!!
So the truth is Java NIO actually did the clever thing and merged my tiny packets into 1 to 2 ByteBuffers!
Actually, no. The merging is happening in the BufferedWriter layer. The buffered writer will only deliver a "bunch" of bytes to the NIO layer when the application flushes or closes the stream, or when the BufferedWriter's buffer fills up.
FWIW - given your description of what you are doing, it is unlikely using NIO is helping performance. If you wanted to maximize performance, you should stop using BufferedWriter and DataOutputStream. Instead do your message encoding "by hand", putting the bytes or characters directly into the ByteBuffer or CharBuffer.
(Also DataOutputStream is for binary data, not text. Putting one in front of a Writer doesn't seem right ... if that is what you are really doing.)
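For the "by hand" NIO route, a sketch of the same length-prefix scheme done directly with a ByteBuffer (the class name and buffer handling are assumptions; channel setup is omitted):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

class NioFraming {
    // Encode a length-prefixed message straight into a ByteBuffer, ready
    // for channel.write(buf).
    static ByteBuffer encode(String message) {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
        buf.putInt(payload.length);
        buf.put(payload);
        buf.flip();
        return buf;
    }

    // Call after each channel.read(acc), repeatedly, until it returns null.
    // 'acc' stays in write mode between calls, so partial reads accumulate
    // until a whole message is available.
    static String tryDecode(ByteBuffer acc) {
        acc.flip(); // switch to read mode
        String result = null;
        if (acc.remaining() >= 4) {
            acc.mark();
            int length = acc.getInt();
            if (acc.remaining() >= length) {
                byte[] payload = new byte[length];
                acc.get(payload);
                result = new String(payload, StandardCharsets.UTF_8);
            } else {
                acc.reset(); // the whole message hasn't arrived yet
            }
        }
        acc.compact(); // back to write mode, keeping any unread bytes
        return result;
    }
}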

Java Socket fast reconnect

I am facing a little problem with sockets.
This method takes about 100 ms or even more, depending on the server.
socket.connect(dest);
Then I communicate through DataInput/Output streams with a sophisticated piece of software, so there is a query phase, a handshake phase, a login request phase, etc.
Is there any way I can "reset" the data stream back to the handshake phase, so the server forgets everything and the socket is in the first phase again, without doing socket.connect(dest) again?
Thanks.
This is entirely protocol dependent, it has nothing to do with sockets per se.
There is nothing stopping you from passing as many messages as you like back and forth through a socket; except maybe your protocol (or the lack of a clearly defined one) if it doesn't indicate where a message starts and ends.
When using a DataInput/OutputStream you could just define a Message class containing whatever data and both sides would just run in an infinite loop reading a Message, processing and possibly generating a response message.
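A sketch of that pattern (the Message layout and the RESET type are invented for illustration):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical message: a 1-byte type plus a length-prefixed payload. A
// dedicated type such as RESET can tell the server to discard its session
// state and return to the handshake phase without reconnecting.
class Message {
    static final byte RESET = 0;
    final byte type;
    final byte[] payload;

    Message(byte type, byte[] payload) {
        this.type = type;
        this.payload = payload;
    }

    void writeTo(DataOutputStream out) throws IOException {
        out.writeByte(type);
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    static Message readFrom(DataInputStream in) throws IOException {
        byte type = in.readByte();
        byte[] payload = new byte[in.readInt()];
        in.readFully(payload);
        return new Message(type, payload);
    }
}

Both sides then loop on readFrom(); when the server sees RESET it clears its state for that connection, so the conversation restarts without paying for socket.connect(dest) again.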

NETTY ISSUE Client 1024 byte message

I wanted to report this error directly, but have not yet found a way to do so from the main page at netty.io.
I have noticed an error while sending data to a channel. It doesn't happen always, only in 10-20% of cases, but it happens.
As an example: I get a first connection with a message of 1024 bytes of data. Everything is fine up to this point. Then I create a socket to the forwarded address, doing it with HexDumpProxyInboundHandler.
Everything is fine here too, except for one thing: I have created a listener on the forwarded address with traffic logging, where I get the messages that were sent by Netty. I would expect the 1024 bytes of data on it, but that is not always what happens.
Sometimes, exactly sometimes, the nightmare begins here: if I get the next message on the same channel right after the 1024-byte message, the data gets written in one of the following possible forms:
3.1 Either the first and second messages are merged, and the data I get on my port listener is correct, 1024 + 72 bytes (as an example) in the correct byte order too, but in merged form, which is already wrong for me;
3.2 or the two messages are merged but in a different order, 72 (as an example) + 1024 bytes, even though the data was correctly received, in the correct order, by the server socket, so the sending order was wrong as well;
3.3 or, finally, the first message of 1024 bytes is sent as-is, followed by the second message sent as-is, so everything is fine; this is the correct and expected behavior.
Also, the error does not always happen, but when it does, it is always the case that the first message on the connection was 1024 bytes long and the second message was sent immediately after the first, before any data had been received in between.
Now my question to the community: is it possible to switch off this strange buffering behavior in Netty, so that all messages received on the server socket are sent to the client socket channel exactly as they arrived, without merging the data?
Thank you in advance!
This "strange" behavior has nothing to do with Netty. It's up to the network layer how many bytes get transferred at once, so it's really expected to see this. If you need to have all 1024 bytes, you will need to buffer them until you have received enough.
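In Netty that buffering is normally done by a frame decoder in the pipeline. For the fixed 1024-byte messages in the example, the stock FixedLengthFrameDecoder would do; a sketch assuming Netty 4 (for variable-length messages, LengthFieldBasedFrameDecoder is the usual equivalent):

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.FixedLengthFrameDecoder;

// The decoder re-slices the raw byte stream into 1024-byte frames, however
// the network split or merged the packets in transit; handlers added after
// it always see whole messages.
class ProxyInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(new FixedLengthFrameDecoder(1024));
        // ... add the forwarding handler after the decoder ...
    }
}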
Ok, after a long night I have finally solved my problem. It seems the Netty project is still buggy in this respect and will accept incoming messages for sending in the wrong order.
So what I do is fill a buffer with incoming messages until the client's remote connection gets opened, and then I send the full, correctly ordered buffer instead of leaving it to Netty.

How to detect retransmitted packets to discard them in TCP Sockets in java

I sometimes receive packets that have already been received (I used a sniffer and the system ACKs them). Right now I read all the data (until the socket timeout) and then send a new request, but this is ugly. I was thinking about using sequence numbers, but I didn't find them in the Socket interface. Any clues?
No, you don't. If the receiving TCP stack misses a packet, it will re-request it, but it can't have delivered the original one to you, because it missed it. And if it receives a packet it has already received, it will drop it.
TCP will deliver all the bytes that are sent, in the order they are sent. Nothing else (well, except some edge cases around disconnects).
Something else is going on.
EDIT:
To be clear, I'm talking about the bytes that are delivered to your application through the socket's InputStream. What happens on the wire is largely irrelevant unless you have some horrific network retransmission problem that you're trying to investigate. And if the receiving stack does get a duplicate packet, it will ACK it, because if it didn't then the sender would re-send it... again.
It sounds like you're trying to account for things that TCP already takes care of. It has sequence numbers built in and will handle any lost data for you, and on the receiving end you should wait until you have received all your expected data rather than reissuing the request. If you don't want to wait for a response to complete before issuing a new request, consider pipelining requests with multiple connections.
