Measuring actual bytes written through Java sockets - java

I have written a small program which sends/receives files from one client to another. I've set up progress bars for both the receiver and the sender, but the problem is that the sender's progress bar finishes much quicker than the actual transfer. The problem lies with how it calculates how many bytes have been written. I assume it's counting how many bytes I've read into the buffer rather than how many were actually sent over the network, so how can I solve this? The receiver is calculating its received bytes at the correct rate, but the sender is not.
Setting a lower buffer size offsets the difference a bit, but it's still not correct. I've tried wrapping the output stream with a CountingOutputStream, but it returns the same result as the code snippet below. The transfer eventually completes correctly, but I need the proper "sent" values to update my progress bar, i.e. what was actually received and written to disk on the receiver side. I've included a very stripped-down code snippet which shows how I calculate transferred bytes. Any example of a solution would be very helpful.
try
{
    int sent = 0;
    int bytesRead;
    Socket sk = new Socket(ip, port);
    OutputStream output = sk.getOutputStream();
    FileInputStream file = new FileInputStream(filepath);
    byte[] buffer = new byte[8092];
    while ((bytesRead = file.read(buffer)) > 0)
    {
        output.write(buffer, 0, bytesRead);
        sent += bytesRead;
        System.out.println(sent); // Shows incorrect values for the actual speed.
    }
}
catch (IOException e)
{
    e.printStackTrace();
}

In short, I don't think you can get the sort of accurate visibility you're looking for solely from the "sender" side, given the number of buffers between you and the "wire" itself. But also, I don't think that matters. Here's why:
Bytes count as "sent" when they are handed to the network stack. When you are sending a small number of bytes (such as your 8K example) those bytes are going to be buffered and the write() calls will return quickly.
Once you've reached network saturation, your write() calls will start to block as the various network buffers fill up, and then you'll get a real sense of the timings.
If you really must have some sort of "how many bytes have you received?" count, you'll have to have the receiving end send that data back periodically via an out-of-band mechanism (such as the one suggested by glowcoder).

Get the input stream from the socket, and on the other side, when you've written a selection of bytes to disk, write the result to the output stream. Spawn a second thread to handle the reading of this information, and link it to your counter.
Your variable is sent - it is accurate. What you need is a received or processed variable, and for that you will need two-way communication.
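A minimal sketch of that two-way idea, assuming the receiver reports its running total back over the same socket as an 8-byte long after each chunk hits disk. The names updateProgressBar, fileLength, in, fileOut and buffer are placeholders, not from the original code:

// Sender side: a second thread reads back "bytes written to disk so far" reports.
final DataInputStream acks = new DataInputStream(sk.getInputStream());
Thread ackReader = new Thread(() -> {
    try {
        long acked = 0;
        while (acked < fileLength) {              // fileLength: total file size, known to the sender
            acked = acks.readLong();
            updateProgressBar(acked);             // placeholder for your progress-bar update
        }
    } catch (IOException e) {
        // connection closed or lost; stop updating
    }
});
ackReader.start();

// Receiver side: after each chunk is written to disk, report the running total back.
DataOutputStream ackOut = new DataOutputStream(socket.getOutputStream());
long receivedTotal = 0;
int n;
while ((n = in.read(buffer)) > 0) {
    fileOut.write(buffer, 0, n);
    receivedTotal += n;
    ackOut.writeLong(receivedTotal);
    ackOut.flush();
}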

Related

Java nio Only Reading 8192/433000 bytes

I have a project that I'm working on to better understand Java NIO and network programming. I'm trying to send a 400,000+ byte file through netcat to my server, where it will be read and written out to a file.
The Problem:
The program works perfectly when the file is below 10,000 bytes, or when I place a Thread.sleep(timeout) before the select(). Otherwise the first pass reads only 8192 bytes, drops out of the read loop, and goes back to select() to capture the rest of the data; the remainder does arrive afterwards. I need the complete data for further expansion of the project.
Things I've Tried:
I've tried loading the data into another byte array, which evidently works, but it skips over the first 8192 bytes (since select() has been called again) and only reads the remaining 391,000 bytes. When comparing the files, the first 8192 bytes are missing.
I've tried various other things, but I don't know NIO well enough to understand what I'm getting wrong.
My Code
This is where I believe the code is going wrong (after debugging):
private void startServer() {
    File temp = new File("Filepath");
    try {
        Selector selector = Selector.open();
        ServerSocketChannel serverSocketChannel = ServerSocketChannel.open();
        serverSocketChannel.configureBlocking(false);
        serverSocketChannel.socket().bind(listenAddress);
        serverSocketChannel.register(selector, SelectionKey.OP_ACCEPT);
        log.info("Server Socket Channel Started");
        while (!stopRequested) {
            selector.select();
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                if (key.isAcceptable()) {
                    try {
                        serverSocketChannel = (ServerSocketChannel) key.channel();
                        SocketChannel socket = serverSocketChannel.accept();
                        socket.configureBlocking(false);
                        socket.register(selector, SelectionKey.OP_READ);
                    } catch (IOException e) {
                        log.error("IOException caught: ", e);
                    }
                }
                if (key.isReadable()) {
                    read(key);
                }
                keys.remove();
            }
        }
    } catch (IOException e) {
        log.error("Error: ", e);
    }
}

private void read(SelectionKey key) {
    int count = 0;
    File tmp = new File("Path");
    try {
        SocketChannel channel = (SocketChannel) key.channel();
        byteBuffer.clear();
        while ((count = channel.read(byteBuffer)) > 0) {
            byteBuffer.flip();
            // append to the data byte array via the ByteArrayOutputStream
            byteArrayOutputStream.write(byteBuffer.array(), byteBuffer.arrayOffset(), count);
            byteBuffer.compact();
        }
        data = byteArrayOutputStream.toByteArray();
        FileUtils.writeByteArrayToFile(tmp, data);
    } catch (IOException e) {
        log.error("Error: ", e);
    }
}
The above code is what I'm working with. There is more in this class, but I believe these two functions are where the problem lies; I'm not sure what steps to take next. The file I use to test my program contains many TCP messages, about 400,000 bytes in total. The select() collects the initial 8192 bytes and then runs read() (which I thought shouldn't happen until all of the data in the stream has arrived), then comes back and gathers the rest. I've allocated the byteBuffer to be 30720 bytes.
If anything is unclear, I can post the rest of the code; let me know what your suggestions are.
Question
Why does this code only grab 8192 bytes when the allocated space is 30720? Why does it work in debug mode or with Thread.sleep()?
A previous commenter advised me to move the byteBuffer.clear() outside of the loop; even after doing so, the problem persists.
The non-blocking API merely promises that the 'readable' state is raised if there are more than 0 bytes. It makes no guarantee that it'll wait until all the bytes you're interested in have arrived; there isn't even a way to say 'don't mark this channel as isReadable until at least X bytes are in'. There is no way to fix that directly; your code must instead be capable of dealing with a half-filled buffer, for example by reading that data away and buffering it yourself so that the 'isReadable' state clears until more bytes arrive.
Using the raw non-blocking APIs is rocket science (as in, it is very tricky to write your code correctly: it is easy to make a CPU core spin to 100% because you're mismanaging your flags, and it is easy to end up with all threads frozen and the app reduced to handling only a percent or two of what a normal threaded variant could have done, due to accidental invocation of blocking methods).
I strongly suggest you first reconsider whether you need non-blocking I/O at all (it is almost always slower, and orders of magnitude harder to develop for. After all, you cannot make a single potentially blocking call anywhere in any handler code or your app will be dog slow under load, and Java is not great at await/yield stuff – the only real benefit is that you get more fine-grained control over buffer sizes, which is irrelevant unless you are very RAM constrained and can get away with tiny buffers of per-connection state). If you then conclude that this truly is the only way, use a library that makes this API easier to use, such as Netty.
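To illustrate the "deal with a half-filled buffer" point: a minimal sketch of one way to accumulate data per connection until the sender closes its end, assuming each client sends exactly one file. Attaching a ByteArrayOutputStream to the key as shown here is an assumption for this sketch, not part of the original code:

// On accept: give each connection its own accumulation buffer.
SocketChannel socket = serverSocketChannel.accept();
socket.configureBlocking(false);
socket.register(selector, SelectionKey.OP_READ, new ByteArrayOutputStream());

// On read: drain whatever happens to be available right now and append it.
private void read(SelectionKey key) throws IOException {
    SocketChannel channel = (SocketChannel) key.channel();
    ByteArrayOutputStream received = (ByteArrayOutputStream) key.attachment();
    ByteBuffer buf = ByteBuffer.allocate(8192);
    int count;
    while ((count = channel.read(buf)) > 0) {
        buf.flip();
        received.write(buf.array(), buf.arrayOffset(), buf.limit());
        buf.clear();
    }
    if (count == -1) {
        // Sender closed the connection: the message is complete, write it out once.
        FileUtils.writeByteArrayToFile(new File("Path"), received.toByteArray());
        channel.close();
    }
    // If count == 0 we simply return; select() will fire again when more bytes arrive.
}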

Can Java NIO.2 read the ByteBuffer out of order?

For anyone interested, the answer to this question is no, the socket won't read the buffer out of order.
Is it possible for the AsynchronousSocketChannel to read bytes out of order? I'm struggling to debug where my issue starts. My protocol serializes objects up to 32k and writes them to the socket like this:
AsynchronousSocketChannel socket; ...
// serialize packet
ByteBuffer base; // serialized buffer (unknown size, grows as needed with a limit of 32k)
for (int j1 = 0; j1 < 6 && base.remaining() > 0; j1++) { // limit to 6 tries
    socket.write(base).get(15, TimeUnit.SECONDS);
    if (base.remaining() > 0) {
        // apparently, if the write operation could write everything at once, we wouldn't have an issue
    }
}
This write operation is not concurrent; it is synchronized with locks. I use the standard read operation like this:
AsynchronousSocketChannel socket; ...
Future<Integer> reading = socket.read(getReadBuffer()); // the read buffer is 8k big
// consume the buffer also not concurrently
I can write up to 1000 packets per second with up to 1000 bytes each without issues, but eventually one client or another will break. The bigger the packet, the lower the frequency it can handle without breaking; packets of 40,000 bytes will break if I write around 8 per second.
Example: I write 5 bytes (1,2,3,4,5). The buffer is big enough to write everything at once, but the operation decides to stop with bytes remaining in the buffer (this should be normal TCP behavior). So let's say it wrote 1,2,3, stopped, and then wrote the remaining 4,5 (while buf.remaining > 0 { write }). When reading, it is most likely that I will read 4,5 first and 1,2,3 later, which should not happen.
On localhost everything works fine, but as soon as I go outside the same machine (still the same network/routers) it doesn't work.
I do not flip the buffers to write/read. I can ensure it's not an issue with the serialization, and both the server and client are single-threaded. Am I forgetting to do something? Any suggestions on how to fix this?
It isn't clear why you're using asynchronous I/O if all you really want is synchronous I/O, which is what you are getting from that code. You'd be better off with an ordinary SocketChannel.
I do not flip the buffers to write/read.
You must. You must flip() before write(), and compact() afterwards, where 'afterwards' in this case means the same as above.
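A minimal sketch of that pattern, assuming base is the buffer you have just finished serializing into (names carried over from the question; exception handling around get() omitted):

// base has just been filled by the serializer, so its position is at the end of the data.
base.flip();                                        // limit = end of data, position = 0
while (base.hasRemaining()) {
    socket.write(base).get(15, TimeUnit.SECONDS);   // write as much as the channel accepts
}
base.compact();                                     // keep any unwritten bytes, ready to be filled again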

android - how to read a specific number of bytes from socket?

In my system, I am using TCP to transfer messages between an Android application and a desktop application (developed in Qt).
I am using the msgpack library to serialize/deserialize structures in both applications. In order to make sure that each structure is always received as a whole, I always send the number of bytes (encoded as a 32-bit big-endian unsigned integer, not using msgpack, so that it always occupies the first 4 bytes) before the message itself (which is encoded with msgpack). If there is a better way, please tell me.
In the Android app, I read the first 4 bytes into a byte[] and decode it into a long msgSize. Next, I need a way for the thread to keep blocking until the next msgSize bytes are received. After reading other questions and answers, I think I can write something like this:
InputStream is = sock.getInputStream();
byte[] msgSizeBuff = new byte[4];
is.read(msgSizeBuff, 0, 4);
long msgSize = MyDecodeFunction(msgSizeBuff);

DataInputStream dis = new DataInputStream(is);
byte[] msg = new byte[(int) msgSize];
dis.readFully(msg);
After that I can use the msg array with msgpack, and I am sure that the whole message is received.
So:
Is my usage of the DataInputStream class alright, and am I guaranteed that it will wait until I receive the specified number of bytes? Because in the reference they say:
If insufficient bytes are available, EOFException is thrown.
Also, I want a way to tell this thread to cancel the operation: what would happen if another thread calls socket.close() or socket.shutdownInput()? Will I get an exception?
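For reference, a minimal sketch of that framing approach with the 4 header bytes also read through readFully(), so the length prefix cannot be split across reads either (method names are the question's own placeholders):

DataInputStream dis = new DataInputStream(sock.getInputStream());

// readFully() blocks until all 4 header bytes have arrived,
// or throws EOFException if the stream ends first.
byte[] msgSizeBuff = new byte[4];
dis.readFully(msgSizeBuff);
long msgSize = MyDecodeFunction(msgSizeBuff);   // your existing decode helper

// Then read exactly msgSize bytes of msgpack payload the same way.
byte[] msg = new byte[(int) msgSize];
dis.readFully(msg);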

Why does input stream read data in chunks?

I am trying to read some data from a network socket using the following code -
Socket s = new Socket(address, 502);
byte[] response = new byte[1024];
InputStream is = s.getInputStream();
int count = is.read(response, 0, 100);
The amount of data isn't large; it is 16 bytes in total. However, the read() statement does not read all the data in one go. It reads only 8 bytes of data into my buffer.
I have to make multiple calls to read() like this in order to read all the data -
Socket s = new Socket(address, 502);
byte[] response = new byte[1024];
InputStream is = s.getInputStream();
int count = is.read(response, 0, 100);
count += is.read(response, count, 100-count);
Why is this happening? Why does read() not read the entire stream in one go?
Please note that the data is not arriving gradually. If I wait for 2 seconds before reading the data by making a call to Thread.sleep(2000) the behavior remains the same.
Why does read() not read the entire stream in one go?
Because it isn't specified to do so. See the Javadoc. It blocks until at least one byte is available, then returns some number between 1 and the supplied length, inclusive.
That in turn is because the data doesn't necessarily arrive all in one go. You have no control over how TCP sends and receives data. You are obliged to just treat it as a byte stream.
I understand that it blocks until data arrives. "That in turn is because the data doesn't necessarily arrive all in one go." My question is: why not?
The data doesn't necessarily all arrive in one go because the network typically breaks it up into packets. IP is a packet switching protocol.
Does TCP transmit in blocks of 8 bytes?
Possibly, but probably not. The packet size depends on the network / networks that the data has traversed, but a typical internet packet size is around 1500 bytes.
If you are getting 8 bytes at a time, either your data is coming through a network with an unusually small packet size, or (more likely) the sender is sending the data 8 bytes at a time. The second explanation more or less jibes with what your other comments say.
And since I explicitly specify 100, a number much larger than the data in the buffer, shouldn't it attempt to read at least 100 bytes?
Well no. It is not specified to work that way, and it doesn't work that way. You need to write your code according to what the spec says.
It is possible that this has something to do with the way the device is being "polled". But without looking at the specs for the device (or even knowing what it is exactly) this is only a guess.
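In practice, writing "according to what the spec says" means looping until you have all the bytes you expect or the stream ends. A minimal sketch, assuming you know the message is 16 bytes (as stated in the question):

byte[] response = new byte[1024];
int expected = 16;                    // known message length, from the question
int count = 0;
while (count < expected) {
    int n = is.read(response, count, expected - count);
    if (n == -1) {
        throw new EOFException("Connection closed after " + count + " bytes");
    }
    count += n;
}
// response[0..15] now holds the complete message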
Maybe the data is arriving gradually not because of your reading but because of the sender.
The sender should use a BufferedOutputStream (in the middle) to make big chunks before sending (and use flush only when it's needed).
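If the sender is under your control, a minimal sketch of that suggestion (assuming socket and message exist on the sending side):

// Wrap the socket's stream so many small writes are coalesced into larger chunks.
BufferedOutputStream out = new BufferedOutputStream(socket.getOutputStream(), 8192);
out.write(message);   // small writes accumulate in the 8 KB buffer
out.flush();          // push the complete message out in one go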

Java sockets are not reading data

I'm programming a server (Java) / client (Android/Java) application. The server runs on Windows 7. All the communication goes well until one particular read in the client, which freezes and does not return until I send the data two more times.
The data that is not read is a byte array. I repeat, all the communication goes well until this point.
Here's the code that I use to send the data:
Long lLength = new Long(length);
byte [] bLength = this.longToBytes(lLength.longValue());
dos.write(bLength);
dos.flush();
dos.write(bLength);
dos.flush();
dos.write(bLength);
dos.flush();
This code transforms a long value into an 8-byte array. As I said, when the first write is executed (and the client is waiting for data), the read does not complete. It does not complete until I execute the last write().
And here's the code to read it:
byte length[] = {0,0,0,0,0,0,0,0};
dis.read(length);
I've used Wireshark to sniff the traffic, and I can see that the byte array is sent and the client answers with an ACK, but the read still does not complete.
In the client and the server, the sockets are set up like this:
socket = new Socket(sIP, oiPort.intValue());
dos = new DataOutputStream(socket.getOutputStream());
dis = new DataInputStream(socket.getInputStream());
This is driving me mad... I don't know why, at this one point, the application stops reading the data when I send it the same way as always.
I suppose the problem may be in the input buffers of the client socket... but I don't know what to do or what to try...
Note that I've also tested the server on Windows XP SP3 and it still doesn't work.
First thing I'd look at is the code for your longToBytes method. Is it really creating a byte array of 8 bytes? If it is generating an array of less than 8 bytes, then that explains the problem. (Your client is expecting 8 bytes, and will block until they all arrive.)
Next thing I'd ask myself is why I'm not just using writeLong and readLong. It would simplify your code, and quite possibly cure the problem.
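A minimal sketch of that simplification, reusing the dos/dis streams from the question:

// Sender: writeLong() emits the length as exactly 8 big-endian bytes.
dos.writeLong(length);
dos.flush();

// Receiver: readLong() blocks until all 8 bytes have arrived and reassembles the value.
long length = dis.readLong();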
