I have a problem and can't find answers. I would like to measure internet bandwidth with Java, but I don't know how.
It would be great to get some hints; I know that I have to open a socket, send data to a defined server, receive it back, and then measure the time.
But how would I code this?
Well, I'd implement this simply by downloading a fixed-size file. Not tested, but something along these lines should work just fine:
byte[] buffer = new byte[BUFFERSIZE];
Socket s = new Socket(hostOfKnownFile, port); // Socket takes a host and port, not a URL
InputStream is = s.getInputStream();
long start = System.nanoTime();
while (is.read(buffer) != -1) continue;
long end = System.nanoTime();
long time = end - start;
// Now we know that it took about `time` ns to download <filesize>.
// If you don't know the correct filesize, you can obviously total up the return values of is.read().
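Fleshing out the sketch above into something self-contained: the helper below drains any InputStream and reports throughput. A ByteArrayInputStream stands in for the real download here so the example runs anywhere; in practice you would pass a stream opened from a known server, so the printed number is only illustrative.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BandwidthMeter {
    /** Drains the stream and returns the observed throughput in bytes per second. */
    static double measure(InputStream in) throws IOException {
        byte[] buffer = new byte[8192];
        long totalBytes = 0;
        long start = System.nanoTime();
        int n;
        while ((n = in.read(buffer)) != -1) {
            totalBytes += n; // total up the return values of read()
        }
        long elapsedNanos = System.nanoTime() - start;
        return totalBytes / (elapsedNanos / 1_000_000_000.0);
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for e.g. new URL("http://example.com/testfile").openStream()
        InputStream in = new ByteArrayInputStream(new byte[1024 * 1024]);
        System.out.printf("%.0f bytes/s%n", measure(in));
    }
}
```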
How about fixing an arbitrary amount of time and sending the data at a rate that respects it?
For example, let's say I want my server to limit its bandwidth usage to 100 bytes/s.
So I fix a window of 1 second and send data only as long as I stay within that second and under 100 bytes.
Here's some pseudocode to show what I'm talking about:
timer_get(a);
sent_data = 0;
while (not_finished_sending_data)
{
    timer_get(b);
    if ((b - a) < 1) // 1 second
    {
        if (sent_data < 100) // 100 bytes
        {
            // We actually send here
            sent_data += send();
        }
    }
    else
    {
        timer_get(a);
        sent_data = 0;
    }
}
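A minimal Java translation of this pseudocode might look like the following sketch; the class name, the one-second window, and writing to a plain OutputStream are all illustrative choices, not a fixed API.

```java
import java.io.IOException;
import java.io.OutputStream;

public class ThrottledSender {
    /** Writes data to out, never exceeding ratePerSecond bytes in any one-second window. */
    static void send(byte[] data, OutputStream out, int ratePerSecond)
            throws IOException, InterruptedException {
        long windowStart = System.currentTimeMillis();
        int sentInWindow = 0;
        int pos = 0;
        while (pos < data.length) {
            long now = System.currentTimeMillis();
            if (now - windowStart >= 1000) {    // window expired: start a fresh one
                windowStart = now;
                sentInWindow = 0;
            }
            if (sentInWindow < ratePerSecond) { // budget left in this window
                int chunk = Math.min(ratePerSecond - sentInWindow, data.length - pos);
                out.write(data, pos, chunk);
                pos += chunk;
                sentInWindow += chunk;
            } else {                            // budget spent: wait out the window
                Thread.sleep(1000 - (now - windowStart));
            }
        }
    }
}
```

Sending 250 bytes at 100 bytes/s should therefore take a little over two seconds.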
I have a song that has the different tracks in separate ogg files. I can play these ogg files together using Vorbis SPI to get the complete song. However I'm doing this by playing each ogg in a separate thread and I have no protection against the playback of each ogg file getting out of sync.
How would you go about ensuring they stay in sync? Ideally all tracks would pause if there was a delay reading any.
Once I've got this working I'll also be wanting to sync with graphics being displayed on screen.
I feel like what I need is some sort of time/location record so I know how far each of the ogg playbacks has got. I could then delay the other threads until everything is back in sync.
I'm in early days so I'm happy to consider alternative playback libraries.
EDIT:
Following JimmyB's advice I'm now able to read into 4-byte buffers. I then add the buffers together using the following code:
for (int i = 0; i < byteLength; i++) {
    for (int j = 1; j < numInputs; j++) {
        data[0][i] += data[j][i];
    }
}
This does work, but the output is distorted. I suspect clipping is occurring. How can I combine the sound bytes without this effect?
EDIT 2:
Dividing the byte by the number of inputs stops the clipping, but greatly reduces the quality of the playback.
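A common way to avoid both the clipping and the quality loss is to stop adding raw bytes and instead mix whole samples: decode each pair of bytes into a signed 16-bit sample, sum the samples as ints, and clamp the result back into the 16-bit range. The sketch below assumes 16-bit little-endian PCM (check your AudioFormat); PcmMixer is a made-up name, not part of your code.

```java
public class PcmMixer {
    /** Mixes several 16-bit little-endian PCM buffers by summing samples and clamping. */
    static byte[] mix(byte[][] tracks, int byteLength) {
        byte[] out = new byte[byteLength];
        for (int i = 0; i < byteLength; i += 2) {
            int sum = 0;
            for (byte[] track : tracks) {
                // reassemble the signed 16-bit sample from its two bytes
                sum += (short) ((track[i] & 0xFF) | (track[i + 1] << 8));
            }
            // clamp instead of letting the arithmetic wrap around (the wrap is the distortion)
            int sample = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
            out[i] = (byte) (sample & 0xFF);
            out[i + 1] = (byte) (sample >> 8);
        }
        return out;
    }
}
```

Clamping only kicks in when the mix actually overflows, so quiet passages keep full resolution, unlike dividing every sample by the track count.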
EDIT 3:
I think I'm also losing sync between the tracks. My read method looks like this:
// return true if there is more to come
private boolean fillBuffer(AudioInputStream audioInputStream, byte[] soundBytes) throws IOException {
    byte empty = 0;
    int offset = 0;
    while (offset != soundBytes.length) {
        try {
            int bytesRead = audioInputStream.read(soundBytes, offset, soundBytes.length - offset);
            if (bytesRead == -1) { // check for EOF before advancing the offset
                Arrays.fill(soundBytes, offset, soundBytes.length, empty);
                return false;
            }
            offset += bytesRead;
        } catch (Exception e) {
            e.printStackTrace();
            LOGGER.info("Boom");
        }
    }
    return true;
}
It seems like the relationship between bytes read and time may not be linear? Perhaps that's in my head though.
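For what it's worth, once the audio is decoded to uncompressed PCM the bytes-to-time relationship is exactly linear: each frame of frameSize bytes covers 1/frameRate seconds, and both values come from javax.sound.sampled.AudioFormat. A tiny helper (hypothetical, just to make the arithmetic concrete):

```java
public class PcmTiming {
    /** For uncompressed PCM the byte/time relationship is exactly linear:
     *  each frame (frameSize bytes) covers 1/frameRate seconds. */
    static double secondsForBytes(long bytes, int frameSize, float frameRate) {
        return bytes / (frameSize * (double) frameRate);
    }
}
```

For CD-quality stereo (frameSize 4, frameRate 44100), 176400 bytes is exactly one second, so any drift between tracks points at buffering, not the format.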
Today I got confused by the results of Visual VM profiling I got.
I have the following simple Java method:
public class Encoder {
    ...
    private BitString encode(InputStream in, Map<Character, BitString> table)
            throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        BitString result = new BitString();
        int i;
        while ((i = reader.read()) != -1) {
            char ch = (char) i;
            BitString string = table.get(ch);
            result = result.append(string);
        }
        return result;
    }
}
This method reads characters from a stream, one at a time. For each character it looks up its bit-string representation and joins these bit-strings to represent the whole stream.
A BitString is a custom data structure that represents a sequence of bits using an underlying byte array.
The method performs very poorly. The problem lies with BitString#append - that method creates a new byte array, copies the bits from both input BitStrings and returns it as a new BitString instance.
public BitString append(BitString other) {
    BitString result = new BitString(size + other.size);
    int pos = 0;
    for (byte b : this) {
        result.set(pos, b);
        pos++;
    }
    for (byte b : other) {
        result.set(pos, b);
        pos++;
    }
    return result;
}
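As an aside on the performance problem itself: because append copies the whole result every call, encoding n characters is O(n²). One common fix is to accumulate into a mutable buffer that grows by doubling, making each append amortized O(1). BitBuffer below is a hypothetical stand-in for BitString, storing one bit per byte purely for simplicity.

```java
import java.util.Arrays;

/** A hypothetical growable bit buffer: appending is amortized O(1) instead of
 *  copying the whole result on every call, as the immutable append above does. */
public class BitBuffer {
    private byte[] bits = new byte[16]; // one bit per byte, kept simple for the sketch
    private int size = 0;

    void append(byte bit) {
        if (size == bits.length) {
            bits = Arrays.copyOf(bits, bits.length * 2); // doubling gives amortized O(1)
        }
        bits[size++] = bit;
    }

    void appendAll(BitBuffer other) {
        for (int i = 0; i < other.size; i++) {
            append(other.bits[i]);
        }
    }

    int size() { return size; }
    byte get(int i) { return bits[i]; }
}
```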
However, when I tried to use VisualVM to verify what's happening, here's what I got:
I have very little experience with VisualVM and profiling in general. To my understanding, this looks as if the problem lay somewhere in the encode method itself, not in append.
To be sure, I surrounded the whole encode method and the append call with custom time measuring, like this:
public class Encoder {
    private BitString encode(InputStream in, Map<Character, BitString> table)
            throws IOException {
>>      long startTime = System.currentTimeMillis();
>>      long appendDuration = 0;
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        BitString result = new BitString();
        int i;
>>      long count = 0;
        while ((i = reader.read()) != -1) {
            char ch = (char) i;
            BitString string = table.get(ch);
>>          long appendStartTime = System.currentTimeMillis();
            result = result.append(string);
>>          long appendEndTime = System.currentTimeMillis();
>>          appendDuration += appendEndTime - appendStartTime;
>>          count++;
>>          if (count % 1000 == 0) {
>>              log.info(">>> CHARACTERS PROCESSED: " + count);
>>              long endTime = System.currentTimeMillis();
>>              log.info(">>> TOTAL ENCODE DURATION: " + (endTime - startTime) + " ms");
>>              log.info(">>> TOTAL APPEND DURATION: " + appendDuration + " ms");
>>          }
        }
        return result;
    }
}
And I got the following results:
CHARACTERS PROCESSED: 102000
TOTAL ENCODE DURATION: 188276 ms
TOTAL APPEND DURATION: 188179 ms
This seems in contradiction with the results from Visual VM.
What am I missing?
You're seeing this behavior because VisualVM can only sample the call stack at safepoints, and the JVM is optimizing the safepoints out of your code. This causes the samples to be lumped together under "Self time" instead, which makes it artificially inflated and misleading. There are two possible fixes:
To make VisualVM work better, add JVM options to keep more safepoints, like -XX:-Inline and -XX:+UseCountedLoopSafepoints. These will slow your code down some, but will make the profiling results much more accurate. This solution is easy, and it's usually good enough. Just remember to remove these options when you're not profiling!
If you don't mind switching profilers, you can use Java Mission Control or honest-profiler. These have the ability to take samples at places other than safepoints, using a special API of the JVM. This solution is a little more accurate, since you're profiling the fully optimized code, but in my opinion those profilers are harder to use than VisualVM.
In your specific case, the method call to BitString.append() is probably being inlined by the JVM for performance. This causes the safepoint that's normally at the end of a method call to be removed, which means that method won't show up in the profiler anymore.
There's an excellent blog post here with more details about what safepoints are and how they work, and another one here that talks in more detail about the interaction between safepoints and sampling profilers.
[THIS ANSWER IS INVALID, but I'll keep it until the OP gets some help from another member, because the comments on this post help others understand the problem.]
In this case VisualVM measured actual CPU time, but the value you measured is elapsed (wall-clock) time.
If the executing threads had to wait for I/O or the network, that time is not counted as CPU time.
Currently, I am relying on the ObjectInputStream.available() method to tell me how many bytes are left in a stream. Reason for this -- I am writing some unit/integration tests on certain functions that deal with streams and I am just trying to ensure that the available() method returns 0 after I am done.
Unfortunately, upon testing for failure (i.e., I have sent about 8 bytes down the stream), my assertion available() == 0 is coming up true when it should be false. It should show > 0, i.e. 8 bytes!
I know that the available() method is classically unreliable, but I figured it would show something at least > 0!
Is there a more reliable way of checking if a stream is empty or not? (That is my main goal here, after all.) Perhaps in the Apache Commons IO domain or some other library out there?
Does anyone know why the available() method is so profoundly unreliable; what is the point of it? Or, is there a specific, proper way of using it?
Update:
So, as many of you can read from the comments, the main issue I am facing is that on one end of a stream, I am sending a certain number of bytes but on the other end, not all the bytes are arriving!
Specifically, I am sending 205498 bytes on one end and only getting 204988 on the other, consistently. I am controlling both sides of this operation between threads over a socket, but that shouldn't matter.
Here is the code I have written to collect all the bytes.
public static int copyStream(InputStream readFrom, OutputStream writeTo, int bytesToRead)
        throws IOException {
    int bytesReadTotal = 0, bytesRead = 0, countTries = 0, available = 0, bufferSize = 1024 * 4;
    byte[] buffer = new byte[bufferSize];
    while (bytesReadTotal < bytesToRead) {
        if (bytesToRead - bytesReadTotal < bufferSize)
            buffer = new byte[bytesToRead - bytesReadTotal];
        if (0 < (available = readFrom.available())) {
            bytesReadTotal += (bytesRead = readFrom.read(buffer));
            writeTo.write(buffer, 0, bytesRead);
            countTries = 0;
        } else if (countTries < 1000)
            try {
                countTries++;
                Thread.sleep(1L);
            } catch (InterruptedException ignore) {}
        else
            break;
    }
    return bytesReadTotal;
}
I put the countTries variable in there just to see what happens. Even without countTries in there, it will block forever before it reaches bytesToRead.
What would cause the stream to suddenly block indefinitely like that? I know the other end fully sends the bytes (it actually uses the same method, and I can see its call complete with bytesToRead matching bytesReadTotal in the end). But the receiver doesn't. In fact, when I look at the arrays, they match up perfectly until the end as well.
UPDATE2
I noticed that when I added a writeTo.flush() at the end of my copyStream method, it seems to work again. Hmm... why are flushes so vital in this situation? That is, why would omitting one cause a stream to block forever?
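Flushes matter here because buffered decorators (BufferedOutputStream, ObjectOutputStream, and similar) hold written bytes in memory until the buffer fills or flush()/close() is called; until then the receiver has nothing to read and blocks. A self-contained sketch of the effect, using a ByteArrayOutputStream as the "wire"; note that 510 bytes is, perhaps not coincidentally, exactly your 205498 - 204988 shortfall:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class FlushDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        BufferedOutputStream out = new BufferedOutputStream(sink, 8192);
        out.write(new byte[510]);        // fits entirely in the 8 KB buffer...
        System.out.println(sink.size()); // ...so nothing has reached the sink yet: 0
        out.flush();
        System.out.println(sink.size()); // after flush the sink sees all 510 bytes
    }
}
```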
The available() method only returns how many bytes can be read without blocking (which may be 0). In order to see if there are any bytes left in the stream, you have to read() or read(byte[]) which will return the number of bytes read. If the return value is -1 then you have reached the end of file.
This little code snippet will loop through an InputStream until it gets to the end (read() returns -1). I don't think it can ever return 0, because read() should block until it can either read at least 1 byte or discover there is nothing left to read (and therefore return -1).
int currentBytesRead = 0;
int totalBytesRead = 0;
byte[] buf = new byte[1024];
while ((currentBytesRead = in.read(buf)) > 0) {
    totalBytesRead += currentBytesRead;
}
This very well may just be a KISS moment, but I feel like I should ask anyway.
I have a thread reading from a socket's InputStream. Since I am dealing in particularly small data sizes (the data I can expect to receive is on the order of 100 - 200 bytes), I set the buffer array size to 256. As part of my read function I have a check that will ensure that when I read from the InputStream I got all of the data. If I didn't, then I recursively call the read function again. For each recursive call I merge the two buffer arrays back together.
My problem is, while I never anticipate using more than the 256-byte buffer, I want to be safe. But if sheep begin to fly and the data significantly exceeds the buffer, the read function will (by my estimation) start taking exponentially more time to complete.
How can I increase the efficiency of the read function and/or the buffer merging?
Here is the read function as it stands.
int BUFFER_AMOUNT = 256;
private int read(byte[] buffer) throws IOException {
    int bytes = mInStream.read(buffer); // Read the input stream
    if (bytes == -1) { // If bytes == -1 then we didn't get all of the data
        byte[] newBuffer = new byte[BUFFER_AMOUNT]; // Try to get the rest
        int newBytes;
        newBytes = read(newBuffer); // Recurse until we have all the data
        byte[] oldBuffer = new byte[bytes + newBytes]; // make the final array size
        // Merge buffer into the beginning of oldBuffer.
        // We do this so that once the method finishes, we can just add the
        // modified buffer to a queue later in the class for processing.
        for (int i = 0; i < bytes; i++)
            oldBuffer[i] = buffer[i];
        for (int i = bytes; i < bytes + newBytes; i++) // Merge newBuffer into the latter half of oldBuffer
            oldBuffer[i] = newBuffer[i];
        // Used for the recursion
        buffer = oldBuffer; // And now we set buffer to the new buffer full of all the data.
        return bytes + newBytes;
    }
    return bytes;
}
EDIT: Am I being paranoid (unjustifiably) and should I just set the buffer to 2048 and call it done?
BufferedInputStream, as noted by Roland, and DataInputStream.readFully(), which replaces all the looping code.
int BUFFER_AMOUNT = 256;
Should be final if you don't want it changing at runtime.
if (bytes == -1) {
Should be !=
Also, I'm not entirely clear on what you're trying to accomplish with this code. Do you mind shedding some light on that?
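To illustrate the readFully() suggestion above: DataInputStream.readFully(byte[]) blocks until the array is completely filled (or throws EOFException if the stream ends first), which replaces the manual recursion-and-merge entirely. A byte-array stream stands in for the socket in this sketch:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class ReadFullyDemo {
    public static void main(String[] args) throws IOException {
        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(new byte[]{1, 2, 3, 4, 5}));
        byte[] header = new byte[4];
        in.readFully(header); // blocks until all 4 bytes arrive, or throws EOFException
        System.out.println(header[3]);
    }
}
```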
I have no idea what you mean by "small data sizes". You should measure whether the time is spent in kernel mode (then you are issuing too many reads directly on the socket) or in user mode (then your algorithm is too complicated).
In the former case, just wrap the input with a BufferedInputStream with 4096 bytes of buffer and read from it.
In the latter case, just use this code:
/**
 * Reads as much as possible from the stream.
 * @return The number of bytes read into the buffer, or -1
 *         if nothing has been read because the end of file has been reached.
 */
static int readGreedily(InputStream is, byte[] buf, int start, int len)
        throws IOException {
    int nread;
    int ptr = start; // index at which the data is put into the buffer
    int rest = len;  // number of bytes that we still want to read
    while ((nread = is.read(buf, ptr, rest)) > 0) {
        ptr += nread;
        rest -= nread;
    }
    int totalRead = len - rest;
    return (nread == -1 && totalRead == 0) ? -1 : totalRead;
}
This code completely avoids creating new objects and calling unnecessary methods; furthermore, it is straightforward.
Is there an article/algorithm on how I can read a long file at a certain rate?
Say I do not want to pass 10 KB/sec while issuing reads.
A simple solution: create a ThrottledInputStream.
This should be used like this:
final InputStream slowIS = new ThrottledInputStream(new BufferedInputStream(new FileInputStream("c:\\file.txt"),8000),300);
300 is the number of kilobytes per second. 8000 is the block size for BufferedInputStream.
This should of course be generalized by implementing read(byte b[], int off, int len), which will spare you a ton of System.currentTimeMillis() calls. As written, System.currentTimeMillis() is called once for each byte read, which adds a bit of overhead. It should also be possible to track the number of bytes that can safely be read without calling System.currentTimeMillis() at all.
Be sure to put a BufferedInputStream in between, otherwise the FileInputStream will be polled in single bytes rather than blocks. This reduces the CPU load from 10% to almost 0. You risk exceeding the data rate by at most the number of bytes in the block size.
import java.io.IOException;
import java.io.InputStream;

public class ThrottledInputStream extends InputStream {
    private final InputStream rawStream;
    private long totalBytesRead;
    private long startTimeMillis;

    private static final int BYTES_PER_KILOBYTE = 1024;
    private static final int MILLIS_PER_SECOND = 1000;
    private final int ratePerMillis;

    public ThrottledInputStream(InputStream rawStream, int kBytesPerSecond) {
        this.rawStream = rawStream;
        ratePerMillis = kBytesPerSecond * BYTES_PER_KILOBYTE / MILLIS_PER_SECOND;
    }

    @Override
    public int read() throws IOException {
        if (startTimeMillis == 0) {
            startTimeMillis = System.currentTimeMillis();
        }
        long now = System.currentTimeMillis();
        long interval = now - startTimeMillis;
        // see if we are too fast..
        if (interval * ratePerMillis < totalBytesRead + 1) { // +1 because we are reading 1 byte
            try {
                // sleep until (totalBytesRead + 1) bytes are allowed by the rate
                final long sleepTime = (totalBytesRead + 1) / ratePerMillis - interval;
                Thread.sleep(Math.max(1, sleepTime));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve the interrupt status
            }
        }
        totalBytesRead += 1;
        return rawStream.read();
    }
}
The crude solution is just to read a chunk at a time and then sleep, e.g. read 10 KB then sleep a second. But the first question I have to ask is: why? There are a couple of likely answers:
You don't want to create work faster than it can be done; or
You don't want to create too great a load on the system.
My suggestion is not to control it at the read level. That's kind of messy and inaccurate. Instead control it at the work end. Java has lots of great concurrency tools to deal with this. There are a few alternative ways of doing this.
I tend to like using a producer-consumer pattern for solving this kind of problem. It gives you great options for monitoring progress, such as having a reporting thread, and it can be a really clean solution.
Something like an ArrayBlockingQueue can be used for the kind of throttling needed for both (1) and (2). With a limited capacity the reader will eventually block when the queue is full so won't fill up too fast. The workers (consumers) can be controlled to only work so fast to also throttle the rate covering (2).
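A minimal sketch of that bounded-queue idea (class name, capacity, and chunk sizes are all illustrative): the reader blocks as soon as it gets a few chunks ahead, and the consumer's own pace becomes the effective throttle.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueThrottle {
    /** Drains chunks at the consumer's own pace; an empty chunk marks end-of-stream.
     *  Returns the number of data chunks consumed. */
    static int consume(BlockingQueue<byte[]> queue, long pauseMillis) throws InterruptedException {
        int count = 0;
        while (queue.take().length > 0) {
            count++;
            Thread.sleep(pauseMillis); // the consumer's pace is what limits throughput
        }
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        // Capacity 8 bounds how far the reader can run ahead of the worker.
        BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(8);
        Thread reader = new Thread(() -> {
            try {
                for (int i = 0; i < 32; i++) {
                    queue.put(new byte[1024]); // put() blocks while the queue is full
                }
                queue.put(new byte[0]);        // end-of-stream marker
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        reader.start();
        System.out.println(consume(queue, 10));
        reader.join();
    }
}
```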
while !EOF
    store System.currentTimeMillis() + 1000 (1 sec) in a long variable
    read a 10K buffer
    check if the stored time has passed
    if it hasn't, Thread.sleep() for (stored time - current time)
Creating ThrottledInputStream that takes another InputStream as suggested would be a nice solution.
If you have used Java I/O then you should be familiar with decorating streams. I suggest an InputStream subclass that takes another InputStream and throttles the flow rate. (You could subclass FileInputStream but that approach is highly error-prone and inflexible.)
Your exact implementation will depend upon your exact requirements. Generally you will want to note the time your last read returned (System.nanoTime). On the current read, after the underlying read, wait until sufficient time has passed for the amount of data transferred. A more sophisticated implementation may buffer and return (almost) immediately with only as much data as rate dictates (be careful that you should only return a read length of 0 if the buffer is of zero length).
You can use a RateLimiter (e.g. Guava's) and make your own implementation of read in InputStream. An example of this can be seen below:
public class InputStreamFlow extends InputStream {
    private final InputStream inputStream;
    private final RateLimiter maxBytesPerSecond;

    public InputStreamFlow(InputStream inputStream, RateLimiter limiter) {
        this.inputStream = inputStream;
        this.maxBytesPerSecond = limiter;
    }

    @Override
    public int read() throws IOException {
        maxBytesPerSecond.acquire(1);
        return inputStream.read();
    }

    @Override
    public int read(byte[] b) throws IOException {
        maxBytesPerSecond.acquire(b.length);
        return inputStream.read(b);
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        maxBytesPerSecond.acquire(len);
        return inputStream.read(b, off, len);
    }
}
If you want to limit the flow to 1 MB/s you can get the input stream like this:
final RateLimiter limiter = RateLimiter.create(1024 * 1024); // permits are bytes per second
final InputStreamFlow inputStreamFlow = new InputStreamFlow(originalInputStream, limiter);
It depends a little on whether you mean "don't exceed a certain rate" or "stay close to a certain rate."
If you mean "don't exceed", you can guarantee that with a simple loop:
while not EOF do
    read a buffer
    Thread.sleep(time)
    write the buffer
od
The amount of time to wait is a simple function of the size of the buffer; if the buffer size is 10K bytes, you want to wait a second between reads.
If you want to get closer than that, you probably need to use a timer.
create a Runnable to do the reading
create a Timer with a TimerTask to do the reading
schedule the TimerTask n times a second.
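Those steps might be sketched as follows; the chunk size and period are picked so that 1 KB every 100 ms approximates 10 KB/s, and a byte-array stream stands in for the real file. The method name and latch-based completion signal are illustrative choices.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CountDownLatch;

public class PacedReader {
    /** Schedules fixed-size reads on a Timer; the returned latch opens at EOF. */
    static CountDownLatch readAtRate(InputStream in, int chunkSize, long periodMillis) {
        CountDownLatch done = new CountDownLatch(1);
        byte[] buffer = new byte[chunkSize];
        Timer timer = new Timer(true); // daemon timer thread
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override public void run() {
                try {
                    if (in.read(buffer) == -1) { // EOF: stop the schedule
                        timer.cancel();
                        done.countDown();
                    }
                } catch (IOException e) {
                    timer.cancel();
                    done.countDown();
                }
            }
        }, 0, periodMillis);
        return done;
    }

    public static void main(String[] args) throws InterruptedException {
        // 1 KB chunks every 100 ms, i.e. roughly 10 KB/s
        InputStream in = new ByteArrayInputStream(new byte[5 * 1024]);
        readAtRate(in, 1024, 100).await();
    }
}
```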
If you're concerned about the speed at which you're passing the data on to something else, instead of controlling the read, put the data into a data structure like a queue or circular buffer, and control the other end; send data periodically. You need to be careful with that, though, depending on the data set size and such, because you can run into memory limitations if the reader is very much faster than the writer.