Calculating network download speed - java

I have written the following code to calculate download speed using Java.
But it is not giving correct results. What is the problem? Is it a flaw in my logic, or in my usage of the Java networking classes? I suspect it is the latter. Can anybody tell me what exactly the problem is?
/* Author: Jinu Joseph Daniel */
import java.io.*;
import java.net.*;

class bwCalc {
    static class CalculateBw {
        public void calculateUploadBw() {}

        public float calculateDownloadRate(int waitTime) throws Exception {
            int bufferSize = 1;
            byte[] data = new byte[bufferSize]; // buffer
            BufferedInputStream in = new BufferedInputStream(new URL("https://www.google.co.in/").openStream());
            int count = 0;
            long startedAt = System.currentTimeMillis();
            long stoppedAt;
            float rate;
            while (((stoppedAt = System.currentTimeMillis()) - startedAt) < waitTime) {
                if (in.read(data, 0, bufferSize) != -1) {
                    count++;
                } else {
                    System.out.println("Finished");
                    break;
                }
            }
            in.close();
            rate = 1000 * (((float) count * bufferSize * 8 / (stoppedAt - startedAt))) / (1024 * 1024); // rate in Mbps
            return rate;
        }

        public float calculateAverageDownloadRate() throws Exception {
            int times[] = {100, 200, 300, 400, 500};
            float bw = 0, curBw;
            int i = 0, len = times.length;
            while (i < len) {
                curBw = calculateDownloadRate(times[i++]);
                bw += curBw;
                System.out.println("Current rate : " + Float.toString(curBw));
            }
            bw /= len;
            return bw;
        }
    }

    public static void main(String argc[]) throws Exception {
        CalculateBw c = new CalculateBw();
        System.out.println(Float.toString(c.calculateAverageDownloadRate()));
    }
}

There are many problems with your code...
you're not checking how many bytes you are reading
testing with Google's home page is useless, since the content size is very small and most of the download time is related to network latency; you should try downloading a large file (10+ MB) - UNLESS you actually want to measure latency rather than bandwidth, in which case you can simply run ping
you also need to give it more than 500ms if you want to get any relevant result - I'd say at least 5 sec
plenty of code style issues, but those are less important
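A minimal sketch along those lines: it counts the bytes actually returned by read() and keeps reading for roughly five seconds (the URL is only a placeholder; substitute any sufficiently large file):
import java.io.BufferedInputStream;
import java.net.URL;

public class DownloadRateSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; point this at any 10+ MB file.
        String bigFile = "http://example.com/large-file.bin";
        byte[] buffer = new byte[8192];
        long totalBytes = 0;
        long start = System.currentTimeMillis();
        long deadline = start + 5000; // measure for roughly 5 seconds
        try (BufferedInputStream in = new BufferedInputStream(new URL(bigFile).openStream())) {
            int read;
            // Count the bytes actually read, not the number of read() calls.
            while ((read = in.read(buffer)) != -1 && System.currentTimeMillis() < deadline) {
                totalBytes += read;
            }
        }
        double seconds = (System.currentTimeMillis() - start) / 1000.0;
        System.out.printf("%.2f Mbit/s%n", totalBytes * 8 / seconds / 1e6);
    }
}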

Here is code that will calculate the download rate for you in KB/s and MB/s; you can multiply by 8 to get the rate in bits per second.
public static void main(String argc[]) throws Exception {
    long totalDownload = 0; // total bytes downloaded
    final int BUFFER_SIZE = 1024; // size of the buffer
    byte[] data = new byte[BUFFER_SIZE]; // buffer
    BufferedInputStream in = new BufferedInputStream(
            new URL("http://kernel.ubuntu.com/~kernel-ppa/mainline/v2.6.15/linux-headers-2.6.15-020615_2.6.15-020615_all.deb")
                    .openStream());
    int dataRead = 0; // data read in each pass
    long startTime = System.nanoTime(); // starting time of the download
    while ((dataRead = in.read(data, 0, BUFFER_SIZE)) > 0) {
        totalDownload += dataRead; // add the data just read to the total
    }
    in.close();
    /* elapsed time in seconds, as a float so sub-second downloads don't divide by zero */
    float elapsedSecs = (System.nanoTime() - startTime) / 1e9f;
    /* download rate in bytes per second */
    float bytesPerSec = totalDownload / elapsedSecs;
    System.out.println(bytesPerSec + " Bps");
    /* download rate in kilobytes per second */
    float kbPerSec = bytesPerSec / 1024;
    System.out.println(kbPerSec + " KBps");
    /* download rate in megabytes per second */
    float mbPerSec = kbPerSec / 1024;
    System.out.println(mbPerSec + " MBps");
}

Related

Java FileInputStream FileOutputStream difference in the run

Could someone tell me why the first run is wrong? (The return code is 0, but the file written is only half the size of the original one.)
Thanks in advance!
public class FileCopyFisFos {
    public static void main(String[] args) throws IOException {
        FileInputStream fis = new FileInputStream("d:/Test1/OrigFile.MP4");
        FileOutputStream fos = new FileOutputStream("d:/Test2/DestFile.mp4");
        // 1. run
        // while (fis.read() != -1) {
        //     int len = fis.read();
        //     fos.write(len);
        // }
        // 2. run
        // int len;
        // while ((len = fis.read()) != -1) {
        //     fos.write(len);
        // }
        fis.close();
        fos.close();
    }
}
FileInputStream's read() method is documented as follows:
Reads a byte of data from this input stream. This method blocks if no input is yet available.
So assigning its return value to a variable, as in:
while ((len = fis.read()) != -1)
ensures the byte just read from the stream is not lost, because the result of every read() call is stored in your len variable.
The first run, in contrast, throws away every other byte from the stream: the read() executed in the while condition is never assigned to anything, so the stream advances without half of the bytes ever being stored in len:
while (fis.read() != -1) {   // reads a byte of data (but never saves it)
    int len = fis.read();    // next byte of data saved
    fos.write(len);          // a -1 (EOF) may even be written here
}
@aran and others have already pointed out the solution to your problem.
However, there are more sides to this, so I extended your example:
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class FileCopyFisFos {

    public static void main(final String[] args) throws IOException {
        final File src = new File("d:/Test1/OrigFile.MP4");
        final File sink = new File("d:/Test2/DestFile.mp4");
        {
            final long startMS = System.currentTimeMillis();
            final long bytesCopied = copyFileSimple(src, sink);
            System.out.println("Simple copy transferred " + bytesCopied + " bytes in " + (System.currentTimeMillis() - startMS) + "ms");
        }
        {
            final long startMS = System.currentTimeMillis();
            final long bytesCopied = copyFileSimpleFaster(src, sink);
            System.out.println("Simple+Fast copy transferred " + bytesCopied + " bytes in " + (System.currentTimeMillis() - startMS) + "ms");
        }
        {
            final long startMS = System.currentTimeMillis();
            final long bytesCopied = copyFileFast(src, sink);
            System.out.println("Fast copy transferred " + bytesCopied + " bytes in " + (System.currentTimeMillis() - startMS) + "ms");
        }
        System.out.println("Test completed.");
    }

    static public long copyFileSimple(final File pSourceFile, final File pSinkFile) throws IOException {
        try (
                final FileInputStream fis = new FileInputStream(pSourceFile);
                final FileOutputStream fos = new FileOutputStream(pSinkFile);) {
            long totalBytesTransferred = 0;
            while (true) {
                final int readByte = fis.read();
                if (readByte < 0) break;
                fos.write(readByte);
                ++totalBytesTransferred;
            }
            return totalBytesTransferred;
        }
    }

    static public long copyFileSimpleFaster(final File pSourceFile, final File pSinkFile) throws IOException {
        try (
                final FileInputStream fis = new FileInputStream(pSourceFile);
                final FileOutputStream fos = new FileOutputStream(pSinkFile);
                BufferedInputStream bis = new BufferedInputStream(fis);
                BufferedOutputStream bos = new BufferedOutputStream(fos);) {
            long totalBytesTransferred = 0;
            while (true) {
                final int readByte = bis.read();
                if (readByte < 0) break;
                bos.write(readByte);
                ++totalBytesTransferred;
            }
            return totalBytesTransferred;
        }
    }

    static public long copyFileFast(final File pSourceFile, final File pSinkFile) throws IOException {
        try (
                final FileInputStream fis = new FileInputStream(pSourceFile);
                final FileOutputStream fos = new FileOutputStream(pSinkFile);) {
            long totalBytesTransferred = 0;
            final byte[] buffer = new byte[20 * 1024];
            while (true) {
                final int bytesRead = fis.read(buffer);
                if (bytesRead < 0) break;
                fos.write(buffer, 0, bytesRead);
                totalBytesTransferred += bytesRead;
            }
            return totalBytesTransferred;
        }
    }
}
The hints that come along with that code:
There is the java.nio package that usually does those things a lot faster and in less code; see the sketch after the timings below.
Copying single bytes is 1'000-40'000 times slower than bulk copy.
Using try-with-resources is the best way to avoid problems with reserved/locked resources such as files.
If you solve something that is quite commonplace, I suggest you put it in a utility class of your own or even your own library.
There are helper classes like BufferedInputStream and BufferedOutputStream that take care of efficiency greatly; see example copyFileSimpleFaster().
But as usual, it is the quality of the concept that has the most impact on the implementation; see example copyFileFast().
There are even more advanced concepts (similar to java.nio), that take into account concepts like OS caching behaviour etc, which will give performance another kick.
Check my outputs, or run it on your own, to see the differences in performance:
Simple copy transferred 1608799 bytes in 12709ms
Simple+Fast copy transferred 1608799 bytes in 51ms
Fast copy transferred 1608799 bytes in 4ms
Test completed.
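As a rough sketch of the first hint above (not a benchmark; it reuses the same source and destination paths), the whole copy collapses into a single java.nio call:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class FileCopyNio {
    public static void main(String[] args) throws IOException {
        Path src = Paths.get("d:/Test1/OrigFile.MP4");
        Path sink = Paths.get("d:/Test2/DestFile.mp4");
        // Files.copy lets the JDK (and ultimately the OS) choose an efficient transfer strategy.
        Files.copy(src, sink, StandardCopyOption.REPLACE_EXISTING);
    }
}
FileChannel.transferTo() is the other common java.nio route if you need a stream-level copy rather than a path-level one.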

About Linux IO performance

I wrote a program to test IO performance in Java using FileChannel. It writes data and calls force(false) immediately. My Linux server has 12 SSD drives, sda~sdl, and when I test writing data to the different drives, the performance varies widely, and I don't know why.
code:
public static void main(String[] args) throws IOException, InterruptedException {
    RandomAccessFile aFile = new RandomAccessFile(args[0], "rw");
    int count = Integer.parseInt(args[1]);
    int idx = count;
    FileChannel channel = aFile.getChannel();
    long time = 0;
    long bytes = 0;
    while (--idx > 0) {
        String newData = "New String to write to file..." + System.currentTimeMillis();
        String buff = "";
        for (int i = 0; i < 100; i++) {
            buff += newData;
        }
        bytes += buff.length();
        ByteBuffer buf = ByteBuffer.allocate(buff.length());
        buf.clear();
        buf.put(buff.getBytes());
        buf.flip();
        while (buf.hasRemaining()) {
            channel.write(buf);
        }
        long st = System.nanoTime();
        channel.force(false);
        long et = System.nanoTime();
        System.out.println("force time : " + (et - st));
        time += (et - st);
    }
    System.out.println("write " + count + " record, " + bytes + " bytes, force avg time : " + time / count);
}
The results look like this:
sda: write 1000000 record, 4299995700 bytes, force avg time : 273480 ns
sdb: write 100000 record, 429995700 bytes, force avg time : 5868387 ns
The average times vary significantly.
Here is some IO monitor data.
sda:
iostat data image
sdb:
iostat data image
You should start by measuring your SSD disks' performance with a standard tool like fio.
Then you can test your utility again and compare its numbers against fio's output.
It looks like you are writing into the Linux write cache, which would explain your results :)

Reading a file twice is extremely fast on the second read

I'm currently writing a small program to frequently test my internet speed.
To test the computational overhead I changed the read source to a file on my disk. There I noticed that bytewise reading limits the speed to about 31 MB/s, so I changed it to reading 512 KB blocks.
Now I see a really strange behavior: after reading a 1 GB file for the first time, every following read of it finishes in less than one second. But there is no way my normal HDD reads at over 1 GB/s, and I also can't imagine that the whole file is cached in RAM.
Here's my code:
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.text.SimpleDateFormat;
import java.util.Date;

public class Main {

    public static void main(String[] args) {
        SimpleDateFormat sdf = new SimpleDateFormat("dd.MM.yyyy HH:mm");
        try {
            System.out.println("Starting test...");
            InputStream in = (new FileInputStream(new File("path/to/testfile")));
            long startTime = System.currentTimeMillis();
            long initTime = startTime + 8 * 1000; // start measuring after 8 seconds
            long stopTime = initTime + 15 * 1000; // stop after 15 seconds testing
            boolean initiated = false;
            boolean stopped = false;
            long bytesAfterInit = 0;
            long bytes = 0;
            byte[] b = new byte[524288];
            int bytesRead = 0;
            while ((bytesRead = in.read(b)) > 0) {
                bytes += bytesRead;
                if (!initiated && System.currentTimeMillis() > initTime) {
                    initiated = true;
                    System.out.println("initiated");
                    bytesAfterInit = bytes;
                }
                if (System.currentTimeMillis() > stopTime) {
                    stopped = true;
                    System.out.println("stopped");
                    break;
                }
            }
            long endTime = System.currentTimeMillis();
            in.close();
            long duration = 0;
            long testBytes = 0;
            if (initiated && stopped) { // if initiated and stopped, calculate for the test window
                duration = endTime - initTime;
                testBytes = bytes - bytesAfterInit;
            } else { // otherwise calculate for the whole process
                duration = endTime - startTime;
                testBytes = bytes;
            }
            if (duration == 0) // prevent dividing by zero
                duration = 1;
            String result = sdf.format(new Date()) + "\t" + (testBytes / 1024 / 1024) / (duration / 1000d) + " MB/s";
            System.out.println(duration + " ms");
            System.out.println(testBytes + " bytes");
            System.out.println(result);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Output:
Starting test...
302 ms
1010827264 bytes
09.02.2015 10:20 3192.0529801324506 MB/s
I don't see that behavior if I change the source to some file on the internet or to a much bigger file on my SSD.
How is it possible that all the bytes are read in such a short time?

Java: Create large text file with random numbers?

I'm trying to create a text file with random numbers on each line.
I have managed to do this, but for some reason the largest file I seem to be able to generate is 768 MB, and I need files of up to 15 GB.
Any ideas why this is happening? My guess is some sort of size limitation or memory issue.
This is the code I have written:
public static void main(String[] args) throws FileNotFoundException, UnsupportedEncodingException {
    // Size in GBs of the file that I want
    double wantedSize = Double.parseDouble("1.5");
    Random random = new Random();
    PrintWriter writer = new PrintWriter("AvgNumbers.txt", "UTF-8");
    boolean keepGoing = true;
    int counter = 0;
    while (keepGoing) {
        counter++;
        StringBuilder stringValue = new StringBuilder();
        for (int i = 0; i < 100; i++) {
            double value = 0.1 + (100.0 - 0.1) * random.nextDouble();
            stringValue.append(value);
            stringValue.append(" ");
        }
        writer.println(stringValue.toString());
        // Check to see if the current size is what we want it to be
        if (counter == 10000) {
            File file = new File("AvgNumbers.txt");
            double currentSize = file.length();
            double gbs = (currentSize / 1000000000.00);
            if (gbs > wantedSize) {
                keepGoing = false;
                writer.close();
            } else {
                writer.flush();
                counter = 0;
            }
        }
    }
}
This is how I would code it. It produces the size you want as well.
public static void main(String... ignored) throws FileNotFoundException, UnsupportedEncodingException {
    // Size in GBs of the file that I want
    double wantedSize = Double.parseDouble(System.getProperty("size", "1.5"));
    Random random = new Random();
    File file = new File("AvgNumbers.txt");
    long start = System.currentTimeMillis();
    PrintWriter writer = new PrintWriter(new BufferedWriter(new OutputStreamWriter(new FileOutputStream(file), "UTF-8")), false);
    int counter = 0;
    while (true) {
        String sep = "";
        for (int i = 0; i < 100; i++) {
            int number = random.nextInt(1000) + 1;
            writer.print(sep);
            writer.print(number / 1e3);
            sep = " ";
        }
        writer.println();
        // Check to see if the current size is what we want it to be
        if (++counter == 20000) {
            System.out.printf("Size: %.3f GB%n", file.length() / 1e9);
            if (file.length() >= wantedSize * 1e9) {
                writer.close();
                break;
            } else {
                counter = 0;
            }
        }
    }
    long time = System.currentTimeMillis() - start;
    System.out.printf("Took %.1f seconds to create a file of %.3f GB", time / 1e3, file.length() / 1e9);
}
and finally prints:
Took 58.3 seconds to create a file of 1.508 GB
If you reuse a single StringBuilder across iterations without clearing it, it keeps accumulating every random-number string you append, so reset it right after each write. Note that StringBuilder has no clear() method; use setLength(0) instead (your posted code sidesteps this by creating a new builder for every line).
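If you do go with a single reusable builder, here is a small sketch of that pattern (the file name matches the question; the row count is arbitrary and only keeps the example runnable):
import java.io.PrintWriter;
import java.util.Random;

public class ReuseBuilderSketch {
    public static void main(String[] args) throws Exception {
        Random random = new Random();
        StringBuilder line = new StringBuilder(); // created once, outside the loop
        try (PrintWriter writer = new PrintWriter("AvgNumbers.txt", "UTF-8")) {
            for (int row = 0; row < 10000; row++) { // arbitrary number of lines for the sketch
                line.setLength(0); // reset instead of allocating a new builder
                for (int i = 0; i < 100; i++) {
                    line.append(0.1 + (100.0 - 0.1) * random.nextDouble()).append(' ');
                }
                writer.println(line);
            }
        }
    }
}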

Measuring Download Speed Java

I'm working on a file download feature in a piece of software. This is what I have: it downloads successfully, and I can also get the progress, but there is still one thing I don't know how to do: measure the download speed. I would appreciate your help. Thanks.
This is the current download method code:
public void run()
{
    OutputStream out = null;
    URLConnection conn = null;
    InputStream in = null;
    try
    {
        URL url1 = new URL(url);
        out = new BufferedOutputStream(new FileOutputStream(sysDir + "\\" + where));
        conn = url1.openConnection();
        in = conn.getInputStream();
        byte[] buffer = new byte[1024];
        int numRead;
        long numWritten = 0;
        double progress1;
        while ((numRead = in.read(buffer)) != -1)
        {
            out.write(buffer, 0, numRead);
            numWritten += numRead;
            this.speed = (int) (((double) buffer.length) / 8);
            progress1 = (double) numWritten;
            this.progress = (int) progress1;
        }
    }
    catch (Exception ex)
    {
        echo("Unknown Error: " + ex);
    }
    finally
    {
        try
        {
            if (in != null)
            {
                in.close();
            }
            if (out != null)
            {
                out.close();
            }
        }
        catch (IOException ex)
        {
            echo("Unknown Error: " + ex);
        }
    }
}
The same way you would measure anything.
System.nanoTime() returns a long you can use to measure how long something takes:
long start = System.nanoTime();
// do your read
long end = System.nanoTime();
Now you have the number of nanoseconds it took to read X bytes. Do the math and you have your download rate.
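The math, continuing from the snippet above (totalBytes here is a stand-in for however many bytes your read actually transferred; it is not a variable from the original snippet):
long elapsedNanos = end - start;                        // nanoseconds spent reading
double seconds = elapsedNanos / 1e9;                    // convert to seconds
double bytesPerSecond = totalBytes / seconds;           // raw download rate
double megabitsPerSecond = bytesPerSecond * 8 / 1e6;    // the same rate in Mbit/s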
More than likely you're looking for bytes per second. Keep track of the total number of bytes you've read, checking to see if one second has elapsed. Once one second has gone by figure out the rate based on how many bytes you've read in that amount of time. Reset the total, repeat.
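A sketch of that bookkeeping as a self-contained helper (the class and method names, and the 1024-byte buffer, are just choices for the example):
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class SpeedMeter {
    // Copies in to out and prints the current rate roughly once per second.
    static void copyWithSpeed(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[1024];
        long windowStart = System.currentTimeMillis();
        long bytesInWindow = 0;
        int numRead;
        while ((numRead = in.read(buffer)) != -1) {
            out.write(buffer, 0, numRead);
            bytesInWindow += numRead;
            long now = System.currentTimeMillis();
            if (now - windowStart >= 1000) { // one second has elapsed
                double kbPerSec = bytesInWindow / 1024.0 / ((now - windowStart) / 1000.0);
                System.out.printf("current speed: %.1f KB/s%n", kbPerSec);
                bytesInWindow = 0;   // reset the total
                windowStart = now;   // ...and repeat
            }
        }
    }
}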
Here is my implementation:
while (mStatus == DownloadStatus.DOWNLOADING) {
    /*
     * Size buffer according to how much of the file is left to
     * download.
     */
    byte buffer[];
    // handle the resume case
    if ((mSize < mDownloaded ? mSize : mSize - mDownloaded <= 0 ? mSize : mSize - mDownloaded) > MAX_BUFFER_SIZE) {
        buffer = new byte[MAX_BUFFER_SIZE];
    } else {
        buffer = new byte[(int) (mSize - mDownloaded)];
    }
    // Read from server into buffer.
    int read = stream.read(buffer);
    if (read == -1)
        break; // EOF, break while loop
    // Write buffer to file.
    file.write(buffer, 0, read);
    mDownloaded += read;
    double speedInKBps = 0.0D;
    try {
        long timeInSecs = (System.currentTimeMillis() - startTime) / 1000; // converting millis to seconds (1000 ms in a second)
        speedInKBps = (mDownloaded / timeInSecs) / 1024D;
    } catch (ArithmeticException ae) {
    }
    this.mListener.publishProgress(this.getProgress(), this.getTotalSize(), speedInKBps);
}
I can give you a general idea. Start a timer at the beginning of the download. Now multiply the percentage downloaded by the download size and divide it by the time elapsed. That gives you your average download rate. Hope I got you on the right track!
You can use System.nanoTime(), as suggested by Brian.
Put long startTime = System.nanoTime(); before your while loop; then long estimatedTime = System.nanoTime() - startTime; inside the loop gives you the elapsed time so far.
