I'm writing a server for my app, which must get data from the client and do something with it. The communication is done using SocketChannel, but there is a problem: I can only read a previously specified number of bytes from it (as per the javadoc for channel.read(ByteBuffer dst)):
An attempt is made to read up to r bytes from the channel, where r is the number of bytes remaining in the buffer
Is there any way to get the size of the data that is currently in the channel and read all of it into a byte[]?
there is a problem: I can only read a previously specified number of bytes from it
That's not correct.
(as per the javadoc for channel.read(ByteBuffer dst))
An attempt is made to read up to r bytes from the channel, where r is the number of bytes remaining in the buffer
You've missed the words 'up to'. There is no problem. If one byte is available, one byte will be read, regardless of the amount of room left in the ByteBuffer.
Is there any way to get the size of the data that is currently in the channel and read all of it into a byte[]?
That's exactly what happens. The amount of data 'currently in the channel' equals the amount of data that will be read, provided the room left in the ByteBuffer is >= the size of the data in the socket receive buffer, and that is the value returned by read().
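For illustration, a minimal sketch of that behaviour, assuming a blocking channel (the host and port are placeholders, not from the question): each read() returns however many bytes were available, up to the space left in the ByteBuffer, and that count plus flip() is all you need to copy exactly what arrived.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class ChannelReadDemo {
    public static void main(String[] args) throws IOException {
        try (SocketChannel channel = SocketChannel.open(new InetSocketAddress("example.com", 7))) {
            ByteBuffer dst = ByteBuffer.allocate(8192);
            int n;
            while ((n = channel.read(dst)) != -1) { // n = whatever was available, up to 8192
                dst.flip();
                byte[] data = new byte[dst.remaining()];
                dst.get(data);                      // exactly the bytes that arrived
                // ... handle data ...
                dst.clear();
            }
        }
    }
}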
I think there is none. You have to manage this yourself. Are you using serialized objects, or did you create your own protocol?
I am opening a TargetDataLine to accept audio input in a given format.
I open and start the line, and I have a buffer which fills with bytes. This runs in a constant loop until an external parameter is changed.
Now, for a fixed sample rate and buffer size, I would expect this to always take the same amount of time to fill, i.e. if my buffer size was 48000 for an 8-bit stream and my sample rate was 48kHz, I would expect my buffer to always take 1 second to fill. However, I am finding this varies greatly.
The following is the code I have used:
DataLine.Info info1 = new DataLine.Info(TargetDataLine.class, format1);
try (TargetDataLine line = (TargetDataLine) m1.getLine(info1)) { // m1 is the chosen Mixer
    line.open(format1);
    line.start();
    while (!pauseInput) {
        long time1 = System.currentTimeMillis();
        // buffer1 is a byte[4096], as described below
        int numBytesRead1 = line.read(buffer1, 0, buffer1.length);
        //chan1double = deinterleaveAudio(buffer1, chan1selectedchannel, chan1totalchannels);
        long time2 = System.currentTimeMillis();
        System.out.println(threadName + " Capture time = " + (time2 - time1));
    }
    line.stop();
}
The commented line is a process I want to run each time the buffer is full. I realise I cannot place this here as it will interrupt the stream, so I need to find a different way to call this; hence I have commented it out.
For testing purposes I have a buffer size of 4096. My audio format is 48kHz 16-bit, so I would expect my byte buffer to be filled in 42.6ms ((1/48000) * 2048; this is half the buffer size because each sample is two bytes). However, using currentTimeMillis() to measure each pass, it comes back with 123ms and 250ms and varies between those times.
Is there something I am missing out here that I have not done?
EDIT: I have copied just the code into a brand new application that doesn't even have a GUI or anything attached to it, purely to output to the console and see what is happening, making sure there are no background threads to interfere, and sure enough the same thing happens. 95% of the time the buffer with a predicted fill time of 250ms fills within 255-259ms. However, occasionally this drops to 127ms (which is physically impossible unless there is some weird buffer thing going on). Is this a bug in Java somewhere?
I don't think it is a good idea to rely on timing this way. It depends on many things, e.g. bufferSize, mixer, etc. Moreover, your application is sharing the line's buffer with the mixer. If you have real-time processing, store your data in a circular buffer with a length that is large enough to hold the amount of data that you need. In another thread, read the desired amount of data from the circular buffer and do your processing at a constant time interval. Thus you may sometimes overlap or miss some bytes between two consecutive processings, but you will always have the expected amount of bytes.
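A minimal sketch of that producer/consumer arrangement (not the answerer's code; an ArrayBlockingQueue of blocks stands in for the circular buffer, and the class and method names are made up):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import javax.sound.sampled.TargetDataLine;

class CaptureBuffer {
    private final BlockingQueue<byte[]> fifo = new ArrayBlockingQueue<>(64);

    // Producer thread: keeps the line drained, independent of processing time.
    void capture(TargetDataLine line, int blockSize) throws InterruptedException {
        byte[] block = new byte[blockSize];
        while (!Thread.currentThread().isInterrupted()) {
            int n = line.read(block, 0, block.length); // blocks until the block is filled
            byte[] copy = new byte[n];
            System.arraycopy(block, 0, copy, 0, n);
            fifo.put(copy);                            // hand the block to the consumer
        }
    }

    // Consumer thread: processes blocks at its own, constant rate.
    void process() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            byte[] block = fifo.take();
            // ... deinterleave / analyse the block here ...
        }
    }
}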
When you open the line, you can specify the line's buffer size by using open(format, bufferSize), or you can check the actual buffer size by calling DataLine.getBufferSize(). Then you need to choose the size of the smaller buffer that you provide when you retrieve data through TargetDataLine.read(). That buffer has to be smaller than the line's buffer; I would make it 1/4th, 1/8th, 1/16th or so of the line's buffer size. Another idea is to check the available bytes with DataLine.available() before calling read(). Note that read() is a blocking call (although it doesn't block the line's buffer), i.e. it does not return until the requested amount of bytes has been read.
For low-latency direct communication between your application and the audio interface, you may consider ASIO.
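A sketch of that advice, assuming a mono 48kHz 16-bit format like the one in the question: open the line with an explicit buffer size, check what you actually got, and read in blocks of a quarter of it.

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

public class LineBufferDemo {
    public static void main(String[] args) throws LineUnavailableException {
        AudioFormat format = new AudioFormat(48000f, 16, 1, true, false); // assumed mono, little-endian
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        try (TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info)) {
            line.open(format, 16384);                          // request a specific line buffer size
            System.out.println("actual line buffer: " + line.getBufferSize());
            line.start();
            byte[] block = new byte[line.getBufferSize() / 4]; // read in smaller blocks
            for (int i = 0; i < 100; i++) {
                System.out.println("available before read: " + line.available());
                int n = line.read(block, 0, block.length);     // blocks until the block is filled
                // ... process n bytes ...
            }
            line.stop();
        }
    }
}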
For anyone looking at the same issue, I have been given an answer which half explains what is happening.
The thread scheduler decides when the code can run, and this can cause the timing to vary by 10-20ms. In the earlier days this was as much as 70ms.
This does not mean the stream is missing samples, just that this buffer will not provide a continuous stream. So any application looking to process this data in real time and pass it on to an audio output stream needs to be aware of this extra potential latency.
I am still looking at the reason for the short buffer fill time every four or five passes. I was told it could be to do with the TargetDataLine buffer size being different from my buffer size and just the remainder of that buffer being delivered on that pass; however, I have changed the two to be exactly the same and still no luck.
I have a really strange behavior in Java and I can't tell whether this happens on purpose or by chance.
I have a socket connection to a server that sends me a response to a request. I am reading this response from the socket with the following loop, which is encapsulated in a try-with-resources statement.
try (BufferedInputStream remoteInput = new BufferedInputStream(remoteSocket.getInputStream())) {
    final byte[] response = new byte[512];
    int bytes_read;
    while ((bytes_read = remoteInput.read(response, 0, response.length)) != -1) {
        // message parsing stuff which does not affect the behaviour
    }
}
According to my understanding, the read method fills as many bytes as possible into the byte array. The limiting factor is either the number of received bytes or the size of the array.
Unfortunately, this is not what's happening: the protocol I'm using answers my request with several smaller answers which are sent one after another over the same socket connection.
In my case the read method always returns with exactly one of those smaller answers in the array. The length of the answers varies, but the 512 bytes that fit into the array are always enough. That means my array always contains only one message and the rest/unneeded part of the array remains untouched.
If I intentionally define the byte array smaller than my messages, it will return several completely filled arrays and one last array that contains the rest of the bytes until the message is complete.
(A 100-byte answer with an array length of 30 returns three completely filled arrays and one with only 10 bytes used.)
The InputStream, or a socket connection in general, shouldn't interpret the transmitted bytes in any way, which is why I am very confused right now. My program is not aware of the protocol being used in any way. In fact, my entire program is only this loop and the code needed to establish a socket connection.
If I can rely on this behavior it would make parsing the response extremely easy, but since I do not know what causes this behavior in the first place, I don't know whether I can count on it.
The protocol being transmitted is LDAP, but since my program is completely unaware of that, it shouldn't matter.
According to my understanding, the read method fills as many bytes as possible into the byte array.
Your understanding is incorrect. The whole point of that method returning the "number of bytes read" is that it might return any number. To be precise, for a blocking read: when the method returns, it has read something, so it will return a number >= 1.
In other words: you should never ever rely on read() reading a specific amount of bytes. You always, always, always check the returned number; and if you are waiting for a certain amount of data, then you have to handle that in your code (for example by buffering yourself, until you have "enough" bytes in your own buffer to proceed).
Thing is: there is a whole, huge stack of elements involved in such read operations: the network, the operating system, the JVM. You can't control what exactly happens, and thus you cannot and should not build implicit assumptions like this into your code.
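A minimal sketch of that "buffer until you have enough" idea (the method name and the assumption that you already know the expected length are mine, not the answerer's):

import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

final class StreamUtil {
    static byte[] readExactly(InputStream in, int messageLength) throws IOException {
        byte[] buf = new byte[messageLength];
        int filled = 0;
        while (filled < messageLength) {
            int n = in.read(buf, filled, messageLength - filled); // may return any count >= 1
            if (n == -1) {
                throw new EOFException("stream ended after " + filled + " bytes");
            }
            filled += n;
        }
        return buf;
    }
}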
While you might see this behaviour on a given machine, especially over loopback, once you start using real networks and different hardware this can change.
If messages are sent with enough of a delay, and you read them fast enough, you will see one message at a time. However, if messages are written close enough together, or your reader is delayed in any way, you can get multiple messages in a single read.
Also, if your message is large enough, e.g. around the MTU or more, a single message can be broken up even if your buffer is more than large enough.
I have a file containing data that is meaningful only in chunks of a certain size, and that size is written at the start of each chunk, e.g.
{chunk_1_size}
{chunk_1}
{chunk_2_size}
{chunk_2}
{chunk_3_size}
{chunk_3}
{chunk_4_size}
{chunk_4}
{chunk_5_size}
{chunk_5}
.
.
{chunk_n_size}
{chunk_n}
The file is really big, ~2GB, and the chunk size is ~20MB (which is the buffer size I would like to have).
I would like to buffer-read this file to reduce the number of calls to the actual hard disk.
But I am not sure how much buffer to have because the chunk size may vary.
Pseudocode of what I have in mind:
while (!EOF) {
    /* the chunk size is an integer, i.e. 4 bytes */
    readChunkSize();
    /* according to the chunk size, read that number of bytes from the file */
    readChunk(chunkSize);
}
If, let's say, I use a random buffer size then I might run into situations like:
The first buffer contains chunkSize_1 + chunk_1 + partialChunk_2 --- I have to keep track of the leftover, then get the remaining part of the chunk from the next buffer and concatenate it to the leftover to complete the chunk.
The first buffer contains chunkSize_1 + chunk_1 + partialChunkSize_2 (the chunk size is an integer, i.e. 4 bytes, so let's say I get only two of those bytes from the first buffer) --- I have to keep track of partialChunkSize_2 and then get the remaining bytes from the next buffer to form the integer that actually gives me the next chunkSize.
The buffer might not even hold one whole chunk at a time --- I have to keep calling read until the first chunk is completely read into memory.
You don't have much control over the number of calls to the hard disk. There are several layers between you and the hard disk (OS, driver, hardware buffering) that you cannot control.
Set a reasonable buffer size in your Java code (e.g. 1MB) and forget about it unless and until you can prove there is a performance issue that is directly related to buffer sizes. In other words, do not fall into the trap of premature optimization.
See also https://stackoverflow.com/a/385529/18157
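A sketch of one way to do this, assuming the 4-byte size prefix is a big-endian int (the question doesn't say): let BufferedInputStream handle the disk buffering and let DataInputStream handle the partial-read bookkeeping.

import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;

public class ChunkReader {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(args[0]), 1 << 20))) { // 1MB buffer
            while (true) {
                int chunkSize;
                try {
                    chunkSize = in.readInt();   // the 4-byte size prefix
                } catch (EOFException eof) {
                    break;                      // clean end of file
                }
                byte[] chunk = new byte[chunkSize];
                in.readFully(chunk);            // loops internally until the chunk is complete
                // ... process the chunk ...
            }
        }
    }
}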
You might need to do some analysis to get an idea of the average chunk size before deciding how much data to read at a time.
You are saying you want to keep a buffer and read data until a chunk is complete, so that you have meaningful data.
Are you copying the file somewhere else, or are you sending this data to another place?
For some of these activities the Java NIO packages have better options than reading data into JVM buffers.
The buffer size should be large enough to read the biggest chunks of data.
If you plan to hold the data in memory, reading it through buffers and keeping it there is still a memory-costly operation; buffers can be freed in several ways, e.g. with basic flush operations.
Please also check the Apache commons-io FileUtils class for reading/writing data.
I'm using an event which is triggered when X bytes have arrived in the buffer, using the typical buffer(), available() and read() serial-port methods. My question is: when you send a packet via wireless (or whatever medium), can you expect the packet to arrive with its total length all at once, or do the bytes arrive sequentially through the buffer, gradually forming the packet? I ask because I don't know whether I need to size buffer() for the total packet length or for the individual bytes that arrive and form the packet.
My guess is that the firmware first uses a checksum operation to ensure that the packet arrived completely and then moves it to the buffer. Is that right?
Serial ports and TCP connections are byte streams. There are no message-boundaries larger than one byte. You cannot transfer messages any larger than one byte without another protocol on top.
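A minimal sketch of such a protocol on top of the byte stream, here a simple 4-byte length prefix (the class and method names are mine; the same idea works for TCP or serial streams):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

final class Framing {
    static void writeMessage(OutputStream out, byte[] message) throws IOException {
        DataOutputStream dout = new DataOutputStream(out);
        dout.writeInt(message.length);   // length prefix marks the message boundary
        dout.write(message);             // payload
        dout.flush();
    }

    static byte[] readMessage(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        int length = din.readInt();      // read the prefix first
        byte[] message = new byte[length];
        din.readFully(message);          // then exactly that many payload bytes
        return message;
    }
}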
I'm confused about the use of Socket's setReceiveBufferSize() from java.net.
From the API documentation, I know that setting the receive buffer size for the socket defines (or gives a hint about) the limit on the amount of data that the socket can receive at a time. However, every time I try to read from the socket's input stream, I've found that it can hold more than what I set with setReceiveBufferSize().
Consider the following code:
InputStream input_stream = socket.getInputStream();
socket.setReceiveBufferSize(1024);
byte[] byte_array = new byte[4096];
input_stream.read(byte_array);
Every time I read from input_stream, I've observed that I can actually read more than 1024 bytes at a time (and fill the 4096-byte array), as long as the sender side has already sent that much data.
Can anyone give an explanation as to why this happens? Am I just missing something? Thank you.
From the API documentation, I know that setting the receive buffer size for the socket defines (or gives a hint about) the limit on the amount of data that the socket can receive at a time.
No it doesn't. It gives a hint to TCP as to the total receive buffer size, which in turn affects the maximum receive window that can be advertised. 'Receive at a time' doesn't really have anything to do with it.
However, every time I try to read from the socket's input stream, I've found that it can hold more than what I set with setReceiveBufferSize().
TCP is free to adjust the hint up or down. In this case, 1024 is a ludicrously small size that any implementation would increase to at least 8192. You can find out how much TCP actually used with getReceiveBufferSize().
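A small sketch of checking that, assuming a plain blocking Socket (the host and port are placeholders): set the hint before connecting, then ask what the implementation actually chose.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ReceiveBufferDemo {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket()) {
            socket.setReceiveBufferSize(1024);                 // only a hint to TCP
            socket.connect(new InetSocketAddress("example.com", 80));
            System.out.println("requested 1024, got " + socket.getReceiveBufferSize());
        }
    }
}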