Is Java's DataInputStream's readByte() faster than readInt()? - java

Background: I'm currently creating an application in which two Java programs communicate over a network using a DataInputStream and DataOutputStream.
Before every communication, I'd like to send an indication of what type of data is being sent, so the program knows how to handle it. I was thinking of sending an integer for this, but a byte has enough possible combinations.
So my question is, is Java's DataInputStream's readByte() faster than readInt()?
Also, on the other side, is Java's DataOutputStream's writeByte() faster than writeInt()?

If one byte will be enough for your data, then readByte and writeByte will indeed be faster (they read/write less data). The difference won't be noticeable, though, because the amount of data is very small in both cases - 1 byte vs 4 bytes.
If you have lots of data coming from the stream, then choosing readByte over readInt will not make a speed difference either - for example, calling readByte 4 times instead of readInt once. Just use whichever matches the kind of data you expect and makes your code easier to understand. You have to read the whole thing anyway :)
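For concreteness, a minimal sketch of the one-byte tag idea (the socket variable, the TYPE_TEXT constant and the tag value are made up for illustration):

// Sender: write a one-byte type tag, then the payload.
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
out.writeByte(TYPE_TEXT);   // hypothetical constant, e.g. 1
out.writeUTF("hello");      // the payload itself
out.flush();

// Receiver: read the tag back and dispatch on it.
DataInputStream in = new DataInputStream(socket.getInputStream());
byte type = in.readByte();  // blocks until the tag byte arrives
if (type == TYPE_TEXT) {
    String text = in.readUTF();
    // ... handle the text message
} else {
    // ... handle (or reject) other message types
}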


Byte array outputstream [closed]

Stupid question probably, but what is the ideal length for a byte array to send over an OutputStream? I couldn't find anything about this online.
I have found a lot of examples that set their array size to 2^X or something similar. What is the purpose of this?
There is no optimal size. OutputStream is an abstract concept; there are a million implementations of it. (Not just 'FileOutputStream is an implementation', but 'FileOutputStream, on OpenJDK11, on Windows 10, with this servicepack, with this CPU and this much system memory, under these circumstances').
The reason you see that is buffer efficiency. Sending 1 byte is usually no problem in itself, but sometimes sending 1 (or very few) bytes at a time results in this nasty scenario:
You send one byte.
The underlying outputstream isn't designed to buffer that byte, it doesn't have the storage for it, so the only thing it can do is send it onwards to the actual underlying resource. Let's say the OutputStream represents a file on the filesystem.
The kernel driver for this works similarly. (Most OSes do buffer internally, but you can ask the OS not to do this when you open the file.)
Therefore, that one byte now needs to be written to disk. However, it is an SSD, and you can't do that to an SSD, you can only write an entire cell at once*. That's just how SSDs work: You can only write an entire block's worth. They aren't bits in sequence on a big platter.
So, the kernel reads the entire cell out, updates the one byte you are writing, and writes the entire cell back to the SSD.
Your actual loop writes, say, about 50,000 bytes in total, so something that should have taken only a few SSD reads and writes now takes 50,000 reads and 50,000 writes, burning through your SSD's cell longevity and taking 50,000 times longer than needed.
Similar issues occur for networking: you end up sending a single byte wrapped in HTTP headers and a couple of TCP/IP packets, so ~1000 bytes go over the network for each byte you .write(singleValue). Many other such systems behave the same way.
So why don't these streams buffer?
Because there are cases where you don't actually want them to do this; there are plenty of reasons to write I/O with specific efficiency in mind.
Is there no way to just let something do this for me?
Ah, you're in luck, there is! BufferedWriter and friends (BufferedOutputStream exists as well) wrap around an underlying stream and buffer for you:
var file = new FileOutputStream("/some/path");
var wrapped = new BufferedOutputStream(file);
file.write(1); // this is a bad idea
wrapped.write(1); // this is fine
Here, the wrapped write doesn't result in anything happening except some memory being shoved around. No bytes are written to disk (with the downside that if someone trips over a power cable, they're just lost). Only after you close wrapped, call flush() on wrapped, or write a sufficient number of bytes to wrapped, will wrapped actually send a whole bunch of bytes to the underlying stream. This is what you should use if making a byte array is unwieldy. Why reinvent the wheel?
But I want to write to the underlying raw stream
Well, you're using too few bytes if the amount of bytes is less than what a single TCP/IP packet can hold, or an unfortunate size otherwise (imagine the TCP/IP packet can hold exactly 1000 bytes and you send 1001 bytes: that's one full packet, and then a second packet with just 1 byte, giving you only 50% efficiency. 50% is still better than the 0.1% efficiency which byte-at-a-time would get you in this hypothetical). But if you send, say, 5001 bytes, that's 5 full packets and one regrettable 1-byte packet, for 83.35% efficiency. Unfortunate that it's not near 100, but not nearly as bad. The same applies to disk (if an SSD cell holds 2048 bytes and you send 65537 bytes, it's still ~97% efficient).
You're using too many bytes if the impact on your own java process is such that this becomes problematic: It's causing excessive garbage collection, or, worse, out of memory errors.
So where is the 'sweet spot'? Depends a little bit, but 65536 is common and is unlikely to be 'too low'. Unless you run many thousands of simultaneous threads, it's not too high either.
It's usually a power of 2 mostly because of superstition, but there is some sense in it: Those underlying buffer things are often a power of two (computers are binary things, after all). So if the cell size happens to be, say, 2048, well, then you are 100% efficient if you send 65536 bytes (that's exactly 32 cells worth of data).
But, the only thing you're really trying to avoid is that 0.1% efficiency rate which occurs if you write one byte at a time to a packetizing (SSD, network, etc) underlying stream. Hence, it doesn't matter, as long as it's more than 2048 or so, you should already have avoided the doom scenario.
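To make that concrete, a minimal sketch that simply picks a generous buffer size up front (the path, the 64k figure and someBytes are illustrative):

// 64 KiB buffer: comfortably above any plausible cell/packet size, still cheap on memory.
try (OutputStream out = new BufferedOutputStream(
        new FileOutputStream("/some/path"), 64 * 1024)) {
    out.write(someBytes);   // buffered; actually written out in big chunks and on close()
}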
*) I'm oversimplifying; the point is that a single byte read or write can take as long as a whole chunk of them, and to give some hint as to why that is, not to do a complete deep-dive on SSD technology.

Java InputStream automatically splits socket messages

I have a really strange behavior in Java and I can't tell whether this happens on purpose or by chance.
I have a socket connection to a server that sends me a response to a request. I am reading this response from the socket with the following loop, which is encapsulated in a try-with-resources.
try (BufferedInputStream remoteInput = new BufferedInputStream(remoteSocket.getInputStream())) {
    final byte[] response = new byte[512];
    int bytes_read;
    while ((bytes_read = remoteInput.read(response, 0, response.length)) != -1) {
        // message parsing, which does not affect the behaviour
    }
}
According to my understanding, the read method fills as many bytes as possible into the byte array. The limiting factor is either the number of bytes received or the size of the array.
Unfortunately, this is not what's happening: the protocol I'm using answers my request with several smaller answers, which are sent one after another over the same socket connection.
In my case the read method always returns with exactly one of those smaller answers in the array. The length of the answers varies, but the 512 bytes that fit into the array are always enough. That means my array always contains only one message, and the rest/unneeded part of the array remains untouched.
If I intentionally make the byte array smaller than my messages, read returns several completely filled arrays and one last array that contains the remaining bytes of the message.
(A 100-byte answer with an array length of 30 returns three completely filled arrays and one with only 10 bytes used.)
An InputStream, or a socket connection in general, shouldn't interpret the transmitted bytes in any way, which is why I am very confused right now. My program is not aware of the protocol being used in any way. In fact, my entire program is only this loop and the code needed to establish a socket connection.
If I could rely on this behavior it would make parsing the response extremely easy, but since I do not know what causes it in the first place, I don't know whether I can count on it.
The protocol I'm transmitting is LDAP but since my program is completely unaware of that, that shouldn't matter.
According to my understanding, the read method fills as many bytes as possible into the byte array.
Your understanding is incorrect. The whole point of that method returning the "number of bytes read" is that it might return any number. To be precise: for a blocking read, when the method returns it has read something, so it will return a number >= 1.
In other words: you should never ever rely on read() reading a specific number of bytes. You always, always, always check the returned number; and if you are waiting for a certain amount of data, then you have to handle that in your code (for example by buffering yourself, until you have "enough" bytes in your own buffer to proceed).
Thing is: there is a whole, huge stack of elements involved in such read operations: the network, the operating system, the JVM. You can't control what exactly happens, and thus you can not and should not build implicit assumptions like this into your code.
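A minimal sketch of that "buffer until you have enough" idea, reusing remoteInput from the question (MESSAGE_LENGTH is a placeholder for whatever your protocol defines, it is not something the stream can tell you):

byte[] message = new byte[MESSAGE_LENGTH];
int filled = 0;
while (filled < message.length) {
    int n = remoteInput.read(message, filled, message.length - filled);
    if (n == -1) {
        throw new EOFException("stream ended mid-message");
    }
    filled += n;             // keep reading until the whole message has arrived
}
// or, equivalently: new DataInputStream(remoteInput).readFully(message);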
While you might see this behaviour on a given machine, especially over loopback, once you start using real networks and different hardware this can change.
If messages are sent with enough of a delay, and you read them quickly enough, you will see one message at a time. However, if messages are sent close enough together, or your reader is delayed in any way, you can get multiple messages in a single read.
Also, if your message is large enough, e.g. around the MTU or more, a single message can be broken up even if your buffer is more than large enough.

TCP packet sizing at application level for max throughput

At the application level, say using Java, how much do I have to worry about the actual TCP packet size? So, for example, I am trying to write an application that sends data over a TCP socket's OutputStream; do I always have to keep the size of the data written to the stream in mind? Since Java sockets are streaming sockets, I haven't actually considered the size of data units, but if TSO (TCP Segmentation Offload) is turned on for the OS/NIC, then I could write a 64KB data slice or MSS to the OutputStream and thus try to save the precious CPU time spent slicing the data into chunks of less than 1500 bytes (< MTU). How effective could my programming be, in terms of being able to take care of this dynamically? I know we can use NetworkInterface.getMTU() to determine the OS/NIC MTU size, but I'm not sure how that can help.
So, I can say that overall, I am a bit confused about how to maximize my throughput when writing bytes to the OutputStream.
how much do I have to worry about the actual TCP packet size?
Almost never. You can call setTcpNoDelay(true), but this rarely makes a difference.
So, for example, I am trying to write an application that sends data over a TCP socket's OutputStream; do I always have to keep the size of the data written to the stream in mind?
I doubt it. If you have a 1 Gb connection or slower, you will have trouble writing a program so inefficient it can't use this bandwidth.
Since Java sockets are streaming sockets, I haven't actually considered the size of data units, but if TSO (TCP Segmentation Offload) is turned on for the OS/NIC, then I could write a 64KB data slice or MSS to the OutputStream and thus try to save the precious CPU time spent slicing the data into chunks of less than 1500 bytes (< MTU).
I don't see how, given that most decent network adapters support TCP offloading.
How effective could my programming be, in terms of being able to take care of this dynamically?
Java doesn't support it in any case.
I know we can use NetworkInterface.getMTU() to determine the OS/NIC MTU size, but I'm not sure how that can help.
Me neither.
So, I can say that overall, I am a bit confused about how to maximize my throughput when writing bytes to the OutputStream.
The most significant change you can make in Java is to use NIO. I suggest blocking NIO, as this is the simplest change from plain IO. If you use direct ByteBuffers, this can save redundant memory copies from Java to native memory.
Do you know that you actually have a problem using the maximum bandwidth of your network? If you haven't measured that this is the cause of your problem, it's just a guess.
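For what it's worth, a rough sketch of that blocking-NIO suggestion (host, port and buffer size are placeholders, and filling the buffer is elided):

try (SocketChannel channel = SocketChannel.open(new InetSocketAddress("host", 1234))) {
    // SocketChannel is blocking by default, i.e. this is "blocking NIO".
    ByteBuffer buffer = ByteBuffer.allocateDirect(64 * 1024); // direct buffer avoids an extra copy
    // ... put your data into buffer, then:
    buffer.flip();
    while (buffer.hasRemaining()) {
        channel.write(buffer);   // write() is not guaranteed to drain the buffer in one call
    }
}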
TCP buffers, paces, and decides segment sizes etc. behind the scenes for you. There is nothing you can do to help except write as much as possible as fast as possible, and use a large socket send buffer at the sender and a large socket receive buffer at the receiver.
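As a sketch of that last point, the socket buffer sizes can be requested up front (the 256 KB figure is arbitrary, and the OS may adjust it):

Socket socket = new Socket();                  // not yet connected, so the sizes apply before the handshake
socket.setSendBufferSize(256 * 1024);          // hint for the send side
socket.setReceiveBufferSize(256 * 1024);       // hint for the receive side
socket.connect(new InetSocketAddress("host", 1234));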

Java BufferedOutputStream: How many bytes to write

This is more a matter of conscience than a technological issue :p
I'm writing some Java code to download files from a server... For that, I'm using the BufferedOutputStream method write() and the BufferedInputStream method read().
So my question is: if I use a buffer to hold the bytes, what should the number of bytes to read be? Sure, I could read byte by byte using just int b = read() and then write(b), or I could use a buffer. If I take the second approach, are there any aspects I must pay attention to when defining the number of bytes to read/write each time? What will this number affect in my program?
Thanks
Unless you have a really fast network connection, the size of the buffer will make little difference. I'd say that 4k buffers would be fine, though there's no harm in using buffers a bit bigger.
The same probably applies to using read() versus read(byte[]) ... assuming that you are using a BufferedInputStream.
Unless you have an extraordinarily fast / low-latency network connection, the bottleneck is going to be the data rate that the network and your computers' network interfaces can sustain. For a typical internet connection, the application can move the data two or more orders of magnitude faster than the network can. So unless you do something silly (like doing 1 byte reads on an unbuffered stream), your Java code won't be the bottleneck.
BufferedInputStream and BufferedOutputStream typically rely on System.arraycopy for their implementations. System.arraycopy has a native implementation, which likely relies on memmove or bcopy. The amount of memory that is copied will depend on the available space in your buffer, but regardless, the implementation down to the native code is pretty efficient, unlikely to affect the performance of your application regardless of how many bytes you are reading/writing.
However, with respect to BufferedInputStream, if you set a mark with a high limit, a new internal buffer may need to be created. If you do use a mark, reading more bytes than are available in the old buffer may cause a temporary performance hit, though the amortized performance is still linear.
As Stephen C mentioned, you are more likely to see performance issues due to the network.
What is the MTU (maximum transmission unit) of your network connection? If you are using UDP, for example, you can check this value and use a smaller array of bytes. If that doesn't matter in your case, you should check how much memory your program eats; I think 1024-4096 would be a reasonable range for holding the data while you continue to receive.
If you pump data you normally do not need to use any Buffered streams. Just make sure you use a decently sized (8-64k) temporary byte[] buffer passed to the read method (or use a pump method which does it). The default buffer size is too small for most usages (and if you use a larger temp array it will be ignored anyway)
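A hand-rolled pump of the kind mentioned above might look like this (sketch only; in and out are whatever streams you already have, and on Java 9+ in.transferTo(out) does the same job):

byte[] buffer = new byte[32 * 1024];   // somewhere in the 8-64k range suggested above
int n;
while ((n = in.read(buffer)) != -1) {
    out.write(buffer, 0, n);           // write exactly as many bytes as were read
}
out.flush();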

write(byte[] b) optimal usage for large byte array

I have a large byte array already in memory, received from a SOAP response.
I have to write this byte array into an OutputStream.
Is it OK just to use write:
byte [] largeByteArray=...;
outputstream.write(largeByteArray);
...
outputstream.flush();
...
or is it better to split the byte array into small chunks and write those to the OutputStream?
If you've already got the large array, then just write it out - if the output stream implementation chooses to chunk it, it can make that decision. I can't see a benefit in you doing that for it - which may well make it less efficient, if it's able to handle large chunks.
If you want to make this more efficient, I would write the data as you get it rather than building a large byte[] (and waiting until the end to start writing). If this is an option, it can be faster and more efficient. However, if this is not an option, use one large write.
What type of output stream are you using?
There are output streams that can write the array in chunks.
In general I believe if you issue an I/O operation (write) for each single byte, the performance may be poor, because I/O operations are expensive.
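If you do end up chunking the existing array yourself, the three-argument write avoids copying it; a sketch reusing the names from the question (the chunk size is arbitrary):

int chunk = 64 * 1024;
for (int off = 0; off < largeByteArray.length; off += chunk) {
    int len = Math.min(chunk, largeByteArray.length - off);
    outputstream.write(largeByteArray, off, len);   // writes a slice, no extra copy needed
}
outputstream.flush();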
I can think of no conceivable reason it would be better without getting truly bizarre and absurd. Generally, if you can pass data between layers in larger chunks without additional effort, then you should do so. Often, it's even worth additional effort to do things that way, so why would you want to put in extra effort to make more work?
If largeByteArray is really large, the write job takes a long time, and memory is a real constraint:
Split the array into parts; after writing one part, set part = null. This releases the reference to that part, so the JVM can GC it as soon as possible.
By splitting and releasing you can handle more concurrent write(largeByteArray) jobs before an OutOfMemoryError occurs.
Notice:
during the split stage, the JVM needs double the array size in memory, but after the split the original array will eventually get GC'd and you are back to using the same amount of memory as before.
Example: a server has 1GB of memory. It can run at most 100 threads, each holding and sending 10MB of data to a client at the same time.
If you use one big 10MB array per thread, memory use is always 1GB, with no headroom even when a thread has only 1MB left to send.
My suggestion is to split the 10MB into 10 x 1MB parts. After some parts have been sent, the sent parts can be GC'd, so each thread uses less memory on average over its lifetime and the server may run more tasks.
