I have a Red5 client implementation which publishes streams, loaded from video files, to our Wowza media server. The problem is that if the stream name is too long (approximately more than 90 characters) the client does not publish it and fails silently. All other actions expected of the client are fulfilled: it connects to the server and creates a stream, but it never publishes the stream. I don't see a corresponding RTMP message, and I don't see any resulting reaction in the Wowza logs.
I tried to debug the client and tracked the execution until it starts writing to the SocketChannel. Everything is the same for a stream with a shorter name (which publishes fine) and for the stream with the long name, whose RTMP "publish" command is never sent.
My questions are:
What is going on here?
If I have written some bytes to a SocketChannel without any exceptions thrown, does that guarantee that the corresponding message was sent?
If I have written some bytes to a SocketChannel without any exceptions thrown, can I check by means of my OS (macOS in my case) whether the bytes were really written somewhere? I already know, by means of Wireshark, that this piece of data was never sent.
UPDATE
...and what is even stranger: after sending "the big" packet, sending a smaller one doesn't help. No packets can be sent at all after a packet of greater length has been submitted to the socket.
If I have written some bytes to a SocketChannel without any exceptions thrown, does that guarantee that the corresponding message was sent?
It guarantees that the data has been buffered locally in the socket send buffer, up to the count returned by write(). Nothing more.
As you can't send further data, it sounds to me as though the receiver isn't reading the large piece of data. Is it possibly failing with an exception and ceasing to read altogether?
Related
I'm writing a toy Java NIO server paired with a normal Java client. The client sends a string message to the server using plain Socket. The server receives the message and dumps the content to terminal.
I've noticed that the same message from the client is broken up into ByteBuffers differently every single time. I understand this is intended behaviour of NIO, but I would like to find out roughly how NIO decides to chop up a message.
Example: Sending string "this is a test message" to server. The following are excerpts of server loggings (each line represents 1 bytebuffer received).
Run 1:
Server receiving: this is a test message
Run 2:
Server receiving: t
Server receiving: his is a test message
Run 3:
Server receiving: this is
Server receiving: a test message
UPDATE - Issue Resolved
I have installed Wireshark to analyse the packets and it has become apparent that the random "break up" was due to me using DataOutputStream for the writer, which sends the message character by character! So there was a packet for each character...
After changing the writer to BufferedWriter, my short message is now sent as a single packet, as expected. So the truth is Java NIO actually did the clever thing and merged my tiny packets to 1 to 2 bytebuffers!
UPDATE2 - Clarification
Thank you all for your replies. Thank you @StephenC for pointing out that unless I encode the message myself (yes, I did call flush() after writing to the BufferedWriter), there's always the possibility of my message arriving across multiple packets.
So the truth is Java NIO actually did the clever thing and merged my tiny
Actually, no. The merging is happening in the BufferedWriter layer. The buffered writer will only deliver a "bunch" of bytes to the NIO layer when the application flushes or closes the stream, or when the BufferedWriter's buffer fills up.
I was in fact referring to my first attempt with DataOutputStream (I got it from an example online, which is obviously an incorrect use of the class, now that you've pointed it out). BufferedWriter was not involved. My simple writer in that case went like:
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
out.writeBytes("this is a test message");
Wireshark confirmed that this message was sent (server on localhost) one character per packet: 22 packets in total for the actual message, not including all the ACKs etc.
I'm probably wrong, but this behaviour seems to suggest that the NIO server combined these 22 packets into 1-2 bytebuffers?
The end game I'm trying to achieve here is a simple Java NIO server capable of receiving requests and data streams over TCP from various clients, some of which may be written in C++ or C# by third parties. It's not time-critical, so the clients can send all the data in one go and the server can process it at its own pace. That's why I've written a toy client in Java using a plain Socket rather than an NIO client. The client in this case can't really manipulate a ByteBuffer directly, so I probably need some sort of message format. Could I make this work?
If you are sending data over a TCP/IP socket, then there are no "messages" as such. What you send and receive is a stream of bytes.
If you are asking if you can send a chunk of N bytes, and have the receiver get exactly N bytes in a single read call, then the answer is that there is no guarantee that will happen. However, it is the TCP/IP stack that is "breaking up" the "messages". Not NIO. Not Java.
Data sent over a TCP/IP connection is ultimately broken into network packets for transmission. This typically erases any "message" structure based on the original write request sizes.
If you want a reliable message structure over the top of the TCP/IP byte stream, you need to encode it in the stream itself; e.g. using an "end-of-message" marker or prefixing each message with a byte count. (If you want to use fancy words, you need to implement a "message protocol" over the top of the TCP/IP stream.)
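As a concrete illustration of the byte-count approach, here is a minimal sketch of a writer that prefixes each message with a 4-byte length. The header format and class name are assumptions for the example, not part of any existing protocol:

```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Minimal sketch of a length-prefixed "message protocol" over a TCP stream.
// The 4-byte big-endian length header is an assumed convention; any format
// both ends agree on would do.
public final class MessageWriter {
    private final DataOutputStream out;

    public MessageWriter(OutputStream raw) {
        this.out = new DataOutputStream(raw);
    }

    public void sendMessage(String message) throws IOException {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        out.writeInt(payload.length);  // length prefix first
        out.write(payload);            // then the payload bytes
        out.flush();                   // hand the whole frame to the TCP stack
    }
}
```

The receiver then reads the 4-byte count first and keeps reading until it has that many payload bytes, regardless of how TCP split the frame into packets.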
Concerning your update, I think there are still some misconceptions:
... it became apparent that the random "break up" was due to me using DataOutputStream for the writer, which sends the message character by character! So there was a packet for each character...
Yes, lots of small writes to a socket stream may result in severe fragmentation at the network level. However, it won't always. If there is sufficient "back pressure" due to either network bandwidth constraints or the receiver reading slowly, then this will lead to larger packets.
After changing the writer to BufferedWriter, my short message is now sent as a single packet, as expected.
Yes. Adding buffering to the stack is good. However, you are probably doing something else as well; e.g. calling flush() after each message. If you didn't, then I would expect a network packet to contain a sequence of messages and partial messages.
What is more, if the messages are too large to fit into a single network packet, or if there is severe back-pressure (see above) then you are liable to get multiple / partial messages in a packet anyway. Either way, the receiver should not rely on getting one (whole) message each time it reads.
In short, you may not have really resolved your issue!!
So the truth is Java NIO actually did the clever thing and merged my tiny
Actually, no. The merging is happening in the BufferedWriter layer. The buffered writer will only deliver a "bunch" of bytes to the NIO layer when the application flushes or closes the stream, or when the BufferedWriter's buffer fills up.
FWIW - given your description of what you are doing, it is unlikely using NIO is helping performance. If you wanted to maximize performance, you should stop using BufferedWriter and DataOutputStream. Instead do your message encoding "by hand", putting the bytes or characters directly into the ByteBuffer or CharBuffer.
(Also DataOutputStream is for binary data, not text. Putting one in front of a Writer doesn't seem right ... if that is what you are really doing.)
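For what the "encode by hand" suggestion might look like in practice, here is a minimal sketch that builds a length-prefixed frame directly in a ByteBuffer, ready to hand to SocketChannel.write(). The 4-byte header and the class name are assumptions for illustration:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch of encoding a message directly into a ByteBuffer, with no
// Writer/OutputStream layers in between. Assumes a 4-byte big-endian
// length prefix as the (hypothetical) wire format.
public final class FrameEncoder {
    public static ByteBuffer encode(String message) {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        ByteBuffer frame = ByteBuffer.allocate(4 + payload.length);
        frame.putInt(payload.length);  // length prefix
        frame.put(payload);            // payload bytes
        frame.flip();                  // ready for channel.write(frame)
        return frame;
    }
}
```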
I am currently using java.net.Socket to send messages from the client and reading messages from the server. All my messages are fairly short so far, and I have never had any problems.
One of my friends noticed that I was not handling message fragmentation, where the data could come in pieces, and has advised that I should create a buffer to handle this. I insisted that TCP handles this for me, but I'm not 100% sure.
Who is right?
Also, I plan on creating a client in C as well in the future. Do Berkeley sockets handle message fragmentation?
Details: Currently, in Java, the server creates a socket and reads the first byte of the message with InputStream#read(). That first byte determines the length of the entire message; the server then creates a byte array of the appropriate length and calls InputStream#read(byte[]) once, assuming that the entire message has been read.
If you are talking about WebSockets, you may be mixing different concepts.
One thing is TCP/IP message fragmentation.
Another thing is how buffering works. You read buffers of data, and you need a framing protocol that tells you when you have a complete "message" (or frame). Basically you:
1. Read buffer.
2. Has a complete header? No → go to 1; yes → continue.
3. Read until you have all the bytes that the header indicates as the message length.
4. Has a complete message? No → go to 3; yes → continue.
5. Yield the message.
6. Go to 1.
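Under the assumption of a 4-byte big-endian length header, the loop above could be sketched in Java roughly like this (the class and method names are made up for the example):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Sketch of the framing loop: feed() is called with whatever chunk the
// socket happened to deliver, and it yields zero or more complete
// messages per call. Assumes each frame is a 4-byte big-endian length
// header followed by that many payload bytes.
public final class FrameAccumulator {
    private ByteBuffer pending = ByteBuffer.allocate(0);

    public List<byte[]> feed(byte[] chunk) {
        // Append the new chunk to whatever was left over from earlier reads.
        ByteBuffer merged = ByteBuffer.allocate(pending.remaining() + chunk.length);
        merged.put(pending).put(chunk).flip();

        List<byte[]> messages = new ArrayList<>();
        while (true) {
            if (merged.remaining() < 4) break;   // incomplete header -> wait
            merged.mark();
            int length = merged.getInt();        // header: payload length
            if (merged.remaining() < length) {   // incomplete body -> wait
                merged.reset();                  // keep the header for next time
                break;
            }
            byte[] message = new byte[length];
            merged.get(message);
            messages.add(message);               // yield message, then loop
        }
        pending = merged.slice();                // carry leftovers forward
        return messages;
    }
}
```

Note that a single feed() call may return several messages (if chunks were merged in transit) or none (if a frame is still incomplete); the caller must cope with both.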
A different thing again is WebSocket message fragmentation. WebSocket already has a framing protocol, and messages can be split across different data frames; control frames can be interleaved with data frames: https://developer.mozilla.org/en-US/docs/WebSockets/Writing_WebSocket_servers#Message_Fragmentation
If you are writing a WebSocket client or server you have to be ready for this situation.
Expanding on what nos said, TCP will break up large messages into smaller chunks, if the message is large enough. Often, it isn't. Often, the data you write is already split into parts (by you), into meaningful chunks like discrete messages.
The stuff about the reads/writes taking different amounts of calls comes from how the data is written, how it travels over the wire, and how you read it.
If you write 2 bytes 100 times, and then 20 seconds later go to read, it will say there are 200 bytes to be read, which you can read all at once if you want. If you pass a massive 2 MB buffer to be written (I don't even know if that's possible), it would take longer to write out, giving the reading program more of a chance to see different read calls.
Details: Currently, in Java, the server creates a socket and reads the first byte from the message with InputStream#read(). That first byte determines the length of the entire message, and creates a byte array of the appropriate length, and calls InputStream#read(byte[]) once and assumes that the entire message has been read.
That won't work. Have a look at the contract for InputStream.read(byte[]). It isn't obliged to transfer more than one byte. The correct technique is to read the length byte and then use DataInputStream.readFully(), which has the obligation to fill the buffer.
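A minimal sketch of that technique, assuming the one-byte length prefix described in the question (the class name is made up for the example):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Read the one-byte length, then let readFully() block until the whole
// message has arrived, however TCP fragmented it in transit.
public final class LengthPrefixedReader {
    public static byte[] readMessage(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        int length = data.readUnsignedByte();  // first byte = message length
        byte[] message = new byte[length];
        data.readFully(message);               // fills the whole array or throws
        return message;
    }
}
```

Unlike a single read(byte[]), readFully() either fills the entire array or throws EOFException, so a short read can never be mistaken for a complete message.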
I wanted to report this error directly, but I have not yet found any way to do so from the main page, netty.io.
I have noticed an error while sending data to a channel. It does not always happen, only in 10-20% of cases, but it happens.
Here is an example. If I get a first connection with a message of 1024 bytes of data, everything is fine so far; then I create a socket to the forwarded address, doing it with HexDumpProxyInboundHandler,
and everything is fine here too, except for one thing: I have created a listener on the forwarded address with traffic logging, where I get the messages that were sent by Netty. I would expect the 1024 bytes of data on it, but that is not what happens in 100% of cases.
Sometimes...
exactly sometimes, the nightmare begins here:
if I get the next message on the same channel after the 1024-byte message, the data gets written in one of the following possible forms:
3.1 Either the first and second messages are merged, and the data that I get on my port listener is correct, 1024 + 72 bytes (as an example) and in the correct byte order too (but in merged form, which is already wrong for me);
3.2 or the first and second messages are merged, but with one little difference: in a different order, 72 (as an example) + 1024 bytes, even though the data was correctly received by the server socket, and in the correct order; so the sending order was incorrect as well;
3.3 or, finally, the first message of 1024 bytes gets sent as is, followed by the second message, which gets sent as is too; everything is fine here, and that is the correct and expected behavior.
Also, the error does not always happen, but when it does, it happens only if, on the first connection, the first message was 1024 bytes long and the second message was sent immediately after the first, before any data was received in response.
Now the question to the community: is it possible to switch off this strange buffering behavior in Netty, so that all messages received on the server socket are sent to the client socket channel exactly as they arrived, without merging the data?
Thank you in advance!
This "strange" behavior has nothing to do with Netty. It's up to the network layer how many bytes get transferred at once, so it's really expected to see this. If you need to have all 1024 bytes, you will need to buffer them until you have received enough.
OK, after a long night I have finally solved my problem. It seems the Netty project is still buggy in this respect and will accept incoming messages for sending in the wrong order.
So what I do is fill a buffer with the incoming messages until the remote connection to the client is opened; then I send the full, correctly ordered buffer myself instead of leaving it to Netty.
I have a problem with a J2ME client app that sends data to a J2SE server and immediately closes the sending socket. On the J2ME side, I use an ordinary OutputStream on a SocketConnection and repeatedly call write with small packets of data (~30 bytes). Afterwards, I flush and finally close the stream and the connection.
When running the client in the emulator, everything works fine. But with the real device I get some problems...
What I noticed is that the connection is not correctly closed, no matter what I do on the client. I always get a "Connection reset" exception on the server, which according to TCP indicates an error in the connection or the sender, meaning that all subsequent data is to be abandoned and the connection no longer used. (With the emulator, the read on the server eventually returns -1, indicating that the connection was correctly closed; no exception at all.)
I tried playing with the total packet size (1024, 2048, ...) and with the client-side socket options (delay, linger, keep-alive). I also tried a Thread.sleep between flush and close. On the server side, different things happen:
Only the first packet of around 30 bytes is received, then the exception (without delay).
Part of the data is received (~1500 bytes), then no more data is read and no exception is thrown, blocking in the read method forever (with delay and a total size around 2048).
All data is correctly received, then the exception (with delay and a total size around 1024).
In all cases, the client has successfully sent the whole data.
What is the best way to ensure that all the data is received by the other side? As I said, the J2ME client states that all data was successfully written. (The total size can't be fixed to a specific value.)
Thanks in advance!
That looks like a bug in your device. You could send a message from the J2SE side to acknowledge reception, and wait on the J2ME side until you receive this message before closing the socket.
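A rough sketch of such an acknowledgement step, using plain streams. The ACK byte value and the class name are arbitrary choices for the example, not part of any existing protocol:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of the acknowledgement workaround: after sending, the client
// blocks on a single ack byte from the server before closing, so the
// close cannot race ahead of the data. 0x06 (ASCII ACK) is an arbitrary
// choice of marker byte.
public final class AckHandshake {
    public static final int ACK = 0x06;

    // Server side: call after all client data has been read.
    public static void sendAck(OutputStream toClient) throws IOException {
        toClient.write(ACK);
        toClient.flush();
    }

    // Client side: call after flush(), before close(); true means acked.
    public static boolean awaitAck(InputStream fromServer) throws IOException {
        return fromServer.read() == ACK;
    }
}
```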
I'm making my own custom server software for a game in Java (the game and original server software were written with Java). There isn't any protocol documentation available, so I am having to read the packets with Wireshark.
While a client is connecting the server sends it the level file in Gzip format. At about 94 packets into sending the level, my server crashes the client with an ArrayIndexOutOfBoundsException. According to the capture file from the original server, it sends a TCP Window Update at about that point. What is a TCP Window Update, and how would I send one using a SocketChannel?
TCP windows are used for flow control between the peers on a connection. With each ACK packet, a host will send a "window size" field. This field says how many bytes of data that host can receive before it's full. The sender is not supposed to send more than that amount of data.
The window might get full if the client isn't receiving data fast enough. In other words, the TCP buffers can fill up while the application is off doing something other than reading from its socket. When that happens, the client would send an ACK packet with the "window full" bit set. At that point, the server is supposed to stop sending data. Any packets sent to a machine with a full window will not be acknowledged. (This will cause a badly behaved sender to retransmit. A well-behaved sender will just buffer the outgoing data. If the buffer on the sending side fills up too, then the sending app will block when it tries to write more data to the socket!)
This is a TCP stall. It can happen for a lot of reasons, but ultimately it just means the sender is transmitting faster than the receiver is reading.
Once the app on the receiving end gets back around to reading from the socket, it will drain some of the buffered data, which frees up some space. The receiver will then send a "window update" packet to tell the sender how much data it can transmit. The sender starts transmitting its buffered data and traffic should flow normally.
Of course, you can get repeated stalls if the receiver is consistently slow.
I've worded this as if the sender and receiver are different, but in reality, both peers are exchanging window updates with every ACK packet, and either side can have its window fill up.
The overall message is that you don't need to send window update packets directly. It would actually be a bad idea to spoof one up.
Regarding the exception you're seeing... it's not likely to be either caused or prevented by the window update packet. However, if the client is not reading fast enough, you might be losing data. In your server, you should check the return value from your SocketChannel.write() calls: it can be less than the number of bytes you're trying to write. This happens when the sender's transmit buffer gets full, which can happen during a TCP stall.
For example, if you're trying to write 8192 bytes with each call to write, but one of the calls returns 5691, then you need to send the remaining 2501 bytes on the next call. Otherwise, the client won't see the remainder of that 8K block and your file will be shorter on the client side than on the server side.
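A sketch of that retry loop, written against WritableByteChannel so it also covers SocketChannel (the class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

// Handle short writes: keep calling write() until the buffer has no bytes
// remaining, so the leftover bytes of a partially accepted write (e.g. the
// remaining 2501 of an 8192-byte block) are retried instead of dropped.
public final class ChannelWriter {
    public static void writeFully(WritableByteChannel channel, ByteBuffer buffer)
            throws IOException {
        while (buffer.hasRemaining()) {
            channel.write(buffer);  // may transfer fewer bytes than remaining
        }
    }
}
```

On a blocking channel this loop simply finishes the job; on a non-blocking channel it would busy-spin, so there you would instead register for OP_WRITE with a Selector and resume when the channel becomes writable.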
This happens really deep in the TCP/IP stack; in your application (server and client) you don't have to worry about TCP windows. The error must be something else.
TCP WindowUpdate - This indicates that the segment was a pure WindowUpdate segment. A WindowUpdate occurs when the application on the receiving side has consumed already received data from the RX buffer causing the TCP layer to send a WindowUpdate to the other side to indicate that there is now more space available in the buffer. Typically seen after a TCP ZeroWindow condition has occurred. Once the application on the receiver retrieves data from the TCP buffer, thereby freeing up space, the receiver should notify the sender that the TCP ZeroWindow condition no longer exists by sending a TCP WindowUpdate that advertises the current window size.
https://wiki.wireshark.org/TCP_Analyze_Sequence_Numbers
A TCP Window Update has to do with communicating the available buffer size between the sender and the receiver. An ArrayIndexOutOfBoundsException is not the likely cause of this. Most likely is that the code is expecting some kind of data that it is not getting (quite possibly well before this point that it is only now referencing). Without seeing the code and the stack trace, it is really hard to say anything more.
You can dive into this web site http://www.tcpipguide.com/free/index.htm for lots of information on TCP/IP.
Do you get any details with the exception?
It is not likely related to the TCP Window Update packet (have you seen it repeat in exactly the same way across multiple instances?). It is more likely related to the code that processes the received data.
This is normally just a trigger, not the cause of your problem.
For example, if you use NIO selector, a window update may trigger the wake up of a writing channel. That in turn triggers the faulty logic in your code.
Get a stacktrace and it will show you the root cause.