About Linux I/O performance in Java

I wrote a program to test I/O performance in Java using FileChannel: write data and call force(false) immediately. My Linux server has 12 SSDs, sda through sdl, and when I write data to different drives, the performance varies widely. I don't know why.
Code:
public static void main(String[] args) throws IOException, InterruptedException {
    RandomAccessFile aFile = new RandomAccessFile(args[0], "rw");
    int count = Integer.parseInt(args[1]);
    int idx = count;
    FileChannel channel = aFile.getChannel();
    long time = 0;
    long bytes = 0;
    while (--idx > 0) { // note: this runs count - 1 times, so the average below is slightly off
        String newData = "New String to write to file..." + System.currentTimeMillis();
        StringBuilder sb = new StringBuilder(); // StringBuilder avoids O(n^2) string concatenation
        for (int i = 0; i < 100; i++) {
            sb.append(newData);
        }
        String buff = sb.toString();
        bytes += buff.length();
        ByteBuffer buf = ByteBuffer.wrap(buff.getBytes()); // wrap is simpler than allocate/clear/put/flip
        while (buf.hasRemaining()) {
            channel.write(buf);
        }
        long st = System.nanoTime();
        channel.force(false); // flush file data (but not metadata) to the device
        long et = System.nanoTime();
        System.out.println("force time : " + (et - st));
        time += (et - st);
    }
    System.out.println("write " + count + " record, " + bytes + " bytes, force avg time : " + time / count);
    channel.close();
    aFile.close();
}
The results look like this:
sda: write 1000000 record, 4299995700 bytes, force avg time : 273480 ns
sdb: write 100000 record, 429995700 bytes, force avg time : 5868387 ns
The average force time varies significantly between drives.
Here is some I/O monitoring data.
sda: [iostat output image]
sdb: [iostat output image]

Start by measuring your SSDs' performance with a standard tool like fio.
Then test your utility again and compare it against the numbers from the fio output.
It looks like you are writing into the Linux write cache, which could explain your results :)
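For example, a minimal fio run that roughly mimics this workload (sequential small writes with a data sync after each one, which is approximately what force(false) does via fdatasync) could look like the sketch below. The path is a placeholder; point --filename at a file on the drive under test:

fio --name=fdatasync-lat --ioengine=sync --rw=write --bs=4k --size=1g --fdatasync=1 --filename=/mnt/sda/fio.test

fio reports sync latency percentiles per run, which you can compare directly against your per-drive force() timings.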

Related

Is there a reason why DataInputStream.read() only reads the first few bytes of really big arrays (>100,000 bytes)?

I'm trying to write software that sends a set of data (a portion of a video game) in different formats (chunked, compressed, raw) and measures the speed of each. However, I'm running into an issue while sorting out the CHUNKED method. I've found that, when reading byte arrays larger than about 140,000 bytes, the client only reads up to around 131,072 bytes, no matter how much bigger the array actually is. Is there a reason for this, or potentially a better way to do this? My code is shown below. I'm using the read() method of DataInputStream (and its return value).
SERVER
/**
 * @return Time taken to complete transfer.
 */
public int start(String mode, int length) throws IOException, InterruptedException {
    if (mode.equals("RAW")) {
        byte[] all = new ByteCollector(ServerMain.FILES, length).collect();
        output.writeUTF("SENDING " + mode + " " + all.length);
        expect("RECEIVING " + mode);
        long start = System.currentTimeMillis();
        echoSend(all);
        return (int) (System.currentTimeMillis() - start);
    } else if (mode.equals("CHUNKED")) { /* the important part */
        // split into chunks
        byte[] all = new ByteCollector(ServerMain.FILES, length).collect();
        int chunks = maxChunks(all);
        output.writeUTF("SENDING " + mode + " " + chunks);
        System.out.println("Expecting RECEIVING " + chunks + "...");
        expect("RECEIVING " + chunks);
        int ms = 0;
        for (int i = 0; i < chunks; i++) {
            byte[] currentChunk = getChunk(i, all);
            System.out.println("My chunk length is " + currentChunk.length);
            long start = System.currentTimeMillis();
            System.out.println("Sending...");
            echoSend(currentChunk);
            ms += System.currentTimeMillis() - start;
        }
        if (chunks == 0) expect("0"); // still need to confirm, even though no data was sent
        return ms;
    } else if (mode.equals("COMPRESSED")) {
        byte[] compressed = new ByteCollector(ServerMain.FILES, length).collect();
        compressed = ExperimentUtils.compress(compressed);
        output.writeUTF("SENDING " + mode + " " + compressed.length);
        expect("RECEIVING " + mode);
        long start = System.currentTimeMillis();
        echoSend(compressed, length);
        return (int) (System.currentTimeMillis() - start);
    }
    return -1;
}
public static void main(String[] args) throws IOException, InterruptedException {
    FILES = Files.walk(Paths.get(DIRECTORY)).filter(Files::isRegularFile).toArray(Path[]::new);
    SyncServer server = new SyncServer(new ServerSocket(12222).accept());
    System.out.println("--------[CH UNK ED]--------");
    short[] chunkedSpeeds = new short[FOLDER_SIZE_MB + 1 /* for "zero" or origin */];
    for (int i = 0; i <= FOLDER_SIZE_MB; i++) {
        chunkedSpeeds[i] = (short) server.start("CHUNKED", i * MB);
        System.out.println(i + "MB, Chunked: " + chunkedSpeeds[i]);
    }
    short[] compressedSpeeds = new short[FOLDER_SIZE_MB + 1];
    for (int i = 0; i <= FOLDER_SIZE_MB; i++) {
        compressedSpeeds[i] = (short) server.start("COMPRESSED", i * MB);
    }
    short[] rawSpeeds = new short[FOLDER_SIZE_MB + 1];
    for (int i = 0; i <= FOLDER_SIZE_MB; i++) {
        rawSpeeds[i] = (short) server.start("RAW", i * MB);
    }
    System.out.println("Raw speeds: " + Arrays.toString(rawSpeeds));
    System.out.println("\n\nCompressed speeds: " + Arrays.toString(compressedSpeeds));
    System.out.println("\n\nChunked speeds: " + Arrays.toString(chunkedSpeeds));
}
CLIENT
public static void main(String[] args) throws IOException, InterruptedException {
    Socket socket = new Socket("localhost", 12222);
    DataInputStream input = new DataInputStream(socket.getInputStream());
    DataOutputStream output = new DataOutputStream(socket.getOutputStream());
    while (socket.isConnected()) {
        String response = input.readUTF();
        String[] content = response.split(" ");
        if (response.startsWith("SENDING CHUNKED")) {
            int chunks = Integer.parseInt(content[2]);
            System.out.println("Read chunk amount of " + chunks);
            output.writeUTF("RECEIVING " + chunks);
            for (int i = 0; i < chunks; i++) {
                byte[] chunk = new byte[32 * MB];
                System.out.println("Ready to receive...");
                int read = input.read(chunk);
                System.out.println("Echoing read length of " + read);
                output.writeUTF(String.valueOf(read));
            }
            if (chunks == 0) output.writeUTF("0");
        } else if (response.startsWith("SENDING COMPRESSED")) {
            byte[] compressed = new byte[Integer.parseInt(content[2])];
            output.writeUTF("RECEIVING " + compressed.length);
            input.read(compressed);
            decompress(compressed);
            output.writeInt(decompress(compressed).length);
        } else if (response.startsWith("SENDING RAW")) {
            int length = Integer.parseInt(content[2]);
            output.writeUTF("RECEIVING " + length);
            byte[] received = new byte[length];
            input.read(received);
            output.writeInt(received.length);
        }
    }
}
public static byte[] decompress(byte[] in) throws IOException {
    try {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        InflaterOutputStream infl = new InflaterOutputStream(out);
        infl.write(in);
        infl.flush();
        infl.close();
        return out.toByteArray();
    } catch (Exception e) {
        System.out.println("Error decompressing byte array with length " + in.length);
        throw e;
    }
}
Using JDK 17.
I tried switching around the byte amount and found the cutoff was right where I stated above. I even replicated this in a test client/server project with no frills (find that here) and found that the cutoff was even lower! I really hope this isn't an actual issue with Java...
The read() method of DataInputStream doesn't correspond one-to-one with the write() method of DataOutputStream. If you want to know how many bytes were sent in a single method call, the server has to inform the client explicitly.
This is because read(), not being given an expected length, treats its job as done once some bytes have been read: it has no way of knowing how many you want, so on a TCP stream it typically returns whatever is currently buffered, which is why you see a cutoff near the socket buffer size (131072 bytes = 128 KB).
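As a minimal sketch of the client-side fix (assuming, as in the RAW branch above, that the server announces the exact byte count first): either loop on read() until the announced number of bytes has arrived, or let DataInputStream.readFully() do that loop for you.

// 'length' is the byte count the server announced beforehand.
byte[] chunk = new byte[length];

// Option 1: readFully blocks until exactly chunk.length bytes are read,
// or throws EOFException (from java.io) if the stream ends first.
input.readFully(chunk);

// Option 2: the equivalent manual loop using read()'s return value.
int off = 0;
while (off < chunk.length) {
    int n = input.read(chunk, off, chunk.length - off);
    if (n == -1) throw new EOFException("stream ended after " + off + " bytes");
    off += n;
}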

How to read a File character-by-character in reverse without running out-of-memory?

The Story
I've been having a problem lately...
I have to read a file in reverse character by character without running out of memory.
I can't read it line by line and reverse it with StringBuilder because it's a single-line file that can be up to a gigabyte (GB) in size.
Hence it would take up too much of the JVM's (and the system's) memory.
I've decided to just read it character by character from end-to-start (back-to-front) so that I could process as much as I can without consuming too much memory.
What I've Tried
I know how to read a file in one go:
(MappedByteBuffer + FileChannel + Charset, which gave me OutOfMemoryErrors)
and read a file character-by-character with UTF-8 character support
(FileInputStream+InputStreamReader).
The problem is that FileInputStream's #read() only calls #read0() which is a native method!
Because of that I have no idea about the underlying code...
Which is why I'm here today (or at least until this is done)!
This will do it (but as written it is not very efficient).
Just skip to the last location read, less one, then read and print the character.
Then reset the location to the mark, adjust size, and continue.
File f = new File("Some File name");
int size = (int) f.length();
int bsize = 1;
byte[] buf = new byte[bsize];
try (BufferedInputStream b =
        new BufferedInputStream(new FileInputStream(f))) {
    while (size > 0) {
        b.mark(size);
        b.skip(size - bsize);
        int k = b.read(buf);
        System.out.print((char) buf[0]);
        size -= k;
        b.reset();
    }
} catch (IOException ioe) {
    ioe.printStackTrace();
}
This could be improved by increasing the buffer size and making equivalent adjustments in the mark and skip arguments.
Updated Version
I wasn't fully satisfied with my answer, so I made it more general. Some variables could have served double duty, but using meaningful names helps clarify how they are used.
Mark must be used so that reset can be used. However, it only needs to be set once, and it is set to position 0 outside of the main loop. I do not know whether marking closer to the read point is more efficient or not.
skipCnt - initially set to fileLength, it is the number of bytes to skip before reading. If the number of bytes remaining is greater than the buffer size, the skip count will be skipCnt - bsize. Otherwise it will be 0.
remainingBytes - a running total of how many bytes are still to be read. It is updated by subtracting the current readCnt.
readCnt - how many bytes to read. If remainingBytes is greater than bsize it is set to bsize, otherwise it is set to remainingBytes.
The while loop repeatedly reads the file starting near the end and then prints the just-read information in reverse order. All variables are updated and the process repeats until remainingBytes reaches 0.
File f = new File("some file");
int bsize = 16;
int fileSize = (int) f.length();
int remainingBytes = fileSize;
int skipCnt = fileSize;
byte[] buf = new byte[bsize];
try (BufferedInputStream b =
        new BufferedInputStream(new FileInputStream(f))) {
    b.mark(0);
    while (remainingBytes > 0) {
        skipCnt = skipCnt > bsize ? skipCnt - bsize : 0;
        b.skip(skipCnt);
        int readCnt = remainingBytes > bsize ? bsize : remainingBytes;
        b.read(buf, 0, readCnt);
        for (int i = readCnt - 1; i >= 0; i--) {
            System.out.print((char) buf[i]);
        }
        remainingBytes -= readCnt;
        b.reset();
    }
} catch (IOException ioe) {
    ioe.printStackTrace();
}
This doesn't support multi-byte UTF-8 characters.
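If multi-byte characters matter, one possible extension (a sketch, not part of the code above) is to back up to a UTF-8 character boundary before decoding. In UTF-8, continuation bytes always match the bit pattern 10xxxxxx, so you can step backwards until you reach a lead byte:

// Sketch: given a chunk of bytes read from the file, returns the index of the
// lead byte of the last UTF-8 character in buf[0..end).
static int lastCharStart(byte[] buf, int end) {
    int i = end - 1;
    while (i > 0 && (buf[i] & 0xC0) == 0x80) {
        i--; // 10xxxxxx is a continuation byte, keep backing up
    }
    return i;
}

The bytes from that boundary onward can then be decoded with new String(buf, i, end - i, StandardCharsets.UTF_8) before being printed.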
Using a RandomAccessFile you can easily read a file in chunks from the end to the beginning, and reverse each of the chunks.
Here's a simple example:
import java.io.FileWriter;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.stream.IntStream;
class Test {
    private static final int BUF_SIZE = 10;
    private static final int FILE_LINE_COUNT = 105;

    public static void main(String[] args) throws Exception {
        // create a large file
        try (FileWriter fw = new FileWriter("largeFile.txt")) {
            IntStream.range(1, FILE_LINE_COUNT).mapToObj(Integer::toString).forEach(s -> {
                try {
                    fw.write(s + "\n");
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            });
        }
        // reverse the file
        try (RandomAccessFile raf = new RandomAccessFile("largeFile.txt", "r")) {
            long size = raf.length();
            byte[] buf = new byte[BUF_SIZE];
            for (long i = size - BUF_SIZE; i > -BUF_SIZE; i -= BUF_SIZE) {
                long offset = Math.max(0, i);
                long readSize = Math.min(i + BUF_SIZE, BUF_SIZE);
                raf.seek(offset);
                raf.read(buf, 0, (int) readSize);
                for (int j = (int) readSize - 1; j >= 0; j--) {
                    System.out.print((char) buf[j]);
                }
            }
        }
    }
}
This uses a very small file and very small chunks so that you can test it easily. Increase those constants to see it work on a larger scale.
The input file contains newlines to make it easy to read the output, but the reversal doesn't depend on the file "having lines".

Reading a file twice is extremely fast on the second read

I'm currently writing a small program to frequently test my internet speed.
To test the computational overhead I changed the read source to a file on my disk. There I noticed that bytewise reading limits the speed to about 31 MB/s, so I changed it to reading 512 KB blocks.
Now I see really strange behavior: after reading a 1 GB file for the first time, every following read operation finishes in less than one second. But there is no way that my normal HDD reads at over 1 GB/s, and I also can't imagine that the whole file is cached in RAM.
Here's my code:
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.text.SimpleDateFormat;
import java.util.Date;
public class Main {
    public static void main(String[] args) {
        SimpleDateFormat sdf = new SimpleDateFormat("dd.MM.yyyy HH:mm");
        try {
            System.out.println("Starting test...");
            InputStream in = (new FileInputStream(new File("path/to/testfile")));
            long startTime = System.currentTimeMillis();
            long initTime = startTime + 8 * 1000; // start measuring after 8 seconds
            long stopTime = initTime + 15 * 1000; // stop after 15 seconds testing
            boolean initiated = false;
            boolean stopped = false;
            long bytesAfterInit = 0;
            long bytes = 0;
            byte[] b = new byte[524288];
            int bytesRead = 0;
            while ((bytesRead = in.read(b)) > 0) {
                bytes += bytesRead;
                if (!initiated && System.currentTimeMillis() > initTime) {
                    initiated = true;
                    System.out.println("initiated");
                    bytesAfterInit = bytes;
                }
                if (System.currentTimeMillis() > stopTime) {
                    stopped = true;
                    System.out.println("stopped");
                    break;
                }
            }
            long endTime = System.currentTimeMillis();
            in.close();
            long duration = 0;
            long testBytes = 0;
            if (initiated && stopped) { // if initiated and stopped, calculate over the test window
                duration = endTime - initTime;
                testBytes = bytes - bytesAfterInit;
            } else { // otherwise calculate over the whole process
                duration = endTime - startTime;
                testBytes = bytes;
            }
            if (duration == 0) // prevent dividing by zero
                duration = 1;
            String result = sdf.format(new Date()) + "\t" + (testBytes / 1024 / 1024) / (duration / 1000d) + " MB/s";
            System.out.println(duration + " ms");
            System.out.println(testBytes + " bytes");
            System.out.println(result);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Output:
Starting test...
302 ms
1010827264 bytes
09.02.2015 10:20 3192.0529801324506 MB/s
I don't see that behavior if I change the source to some file on the internet, or to a much bigger file on my SSD.
How is it possible that all the bytes are read in such a short period?

Calculating network download speed

I have written the following code to calculate download speed using Java, but it is not giving correct results. What is the problem? Is it a problem with my logic, or with my usage of the Java networking classes? I think it is a problem with how I use the networking classes. Can anybody tell me what exactly the problem is?
/*Author:Jinu Joseph Daniel*/
import java.io.*;
import java.net.*;
class bwCalc {
    static class CalculateBw {
        public void calculateUploadBw() {}

        public float calculateDownloadRate(int waitTime) throws Exception {
            int bufferSize = 1;
            byte[] data = new byte[bufferSize]; // buffer
            BufferedInputStream in = new BufferedInputStream(new URL("https://www.google.co.in/").openStream());
            int count = 0;
            long startedAt = System.currentTimeMillis();
            long stoppedAt;
            float rate;
            while (((stoppedAt = System.currentTimeMillis()) - startedAt) < waitTime) {
                if (in.read(data, 0, bufferSize) != -1) {
                    count++;
                } else {
                    System.out.println("Finished");
                    break;
                }
            }
            in.close();
            rate = 1000 * (((float) count * bufferSize * 8 / (stoppedAt - startedAt))) / (1024 * 1024); // rate in Mbps
            return rate;
        }

        public float calculateAverageDownloadRate() throws Exception {
            int times[] = {100, 200, 300, 400, 500};
            float bw = 0, curBw;
            int i = 0, len = times.length;
            while (i < len) {
                curBw = calculateDownloadRate(times[i++]);
                bw += curBw;
                System.out.println("Current rate : " + Float.toString(curBw));
            }
            bw /= len;
            return bw;
        }
    }

    public static void main(String argc[]) throws Exception {
        CalculateBw c = new CalculateBw();
        System.out.println(Float.toString(c.calculateAverageDownloadRate()));
    }
}
There are many problems with your code...
you're not checking how many bytes you are reading
testing with Google's home page is useless, since the content size is very small and most of the download time is related to network latency; you should try downloading a large file (10+ MB) - UNLESS you actually want to measure latency rather than bandwidth, in which case you can simply run ping
you also need to give it more than 500ms if you want to get any relevant result - I'd say at least 5 sec
plenty of code style issues, but those are less important
Here is code that will calculate the average download rate for you in KB/s and MB/s; multiply by 8 to get the rate in bits per second.
public static void main(String argc[]) throws Exception {
    long totalDownload = 0; // total bytes downloaded
    final int BUFFER_SIZE = 1024; // size of the buffer
    byte[] data = new byte[BUFFER_SIZE]; // buffer
    BufferedInputStream in = new BufferedInputStream(
            new URL(
                    "http://kernel.ubuntu.com/~kernel-ppa/mainline/v2.6.15/linux-headers-2.6.15-020615_2.6.15-020615_all.deb")
                    .openStream());
    int dataRead = 0; // data read in each try
    long startTime = System.nanoTime(); // starting time of download
    while ((dataRead = in.read(data, 0, BUFFER_SIZE)) > 0) {
        totalDownload += dataRead; // adding data downloaded to total data
    }
    in.close();

    /* download rate in bytes per second; divide by a double so the
       elapsed time isn't truncated by integer division (which could
       even be zero seconds for a fast download) */
    float bytesPerSec = (float) (totalDownload
            / ((System.nanoTime() - startTime) / 1000000000d));
    System.out.println(bytesPerSec + " Bps");

    /* download rate in kilobytes per second */
    float kbPerSec = bytesPerSec / 1024;
    System.out.println(kbPerSec + " KBps ");

    /* download rate in megabytes per second */
    float mbPerSec = kbPerSec / 1024;
    System.out.println(mbPerSec + " MBps ");
}

How can I get the count of line in a file in an efficient way? [duplicate]

This question already has answers here:
Number of lines in a file in Java
(19 answers)
Closed 6 years ago.
I have a big file. It contains approximately 3,000-20,000 lines. How can I get the total count of lines in the file using Java?
BufferedReader reader = new BufferedReader(new FileReader("file.txt"));
int lines = 0;
while (reader.readLine() != null) lines++;
reader.close();
Update: To answer the performance question raised here, I made a measurement. First thing: 20,000 lines are too few for the program to run for a noticeable time, so I created a text file with 5 million lines. This solution (started with java, without parameters like -server or -XX options) needed around 11 seconds on my box. The same with wc -l (the UNIX command-line tool for counting lines): 11 seconds. The solution reading every single character and looking for '\n' needed 104 seconds, 9-10 times as much.
Files.lines
Java 8+ has a nice and short way using NIO's Files.lines. Note that you have to close the stream using try-with-resources:
long lineCount;
try (Stream<String> stream = Files.lines(path, StandardCharsets.UTF_8)) {
    lineCount = stream.count();
}
If you don't specify a character encoding, UTF-8 is used by default. You may specify an alternate encoding to match your particular data file, as shown in the example above.
Use LineNumberReader. Something like:
public static int countLines(File aFile) throws IOException {
    LineNumberReader reader = null;
    try {
        reader = new LineNumberReader(new FileReader(aFile));
        while ((reader.readLine()) != null);
        return reader.getLineNumber();
    } catch (Exception ex) {
        return -1;
    } finally {
        if (reader != null)
            reader.close();
    }
}
I found a solution for this; it might be useful for you.
Below is a code snippet that counts the number of lines in a file.
File file = new File("/mnt/sdcard/abc.txt");
LineNumberReader lineNumberReader = new LineNumberReader(new FileReader(file));
lineNumberReader.skip(Long.MAX_VALUE);
int lines = lineNumberReader.getLineNumber();
lineNumberReader.close();
Read the file through and count the number of newline characters. An easy way to read a file in Java, one line at a time, is the java.util.Scanner class.
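A minimal sketch of that Scanner approach ("file.txt" is a placeholder):

// Counts lines by reading the file one line at a time with Scanner.
Scanner scanner = new Scanner(new File("file.txt"));
int lines = 0;
while (scanner.hasNextLine()) {
    scanner.nextLine();
    lines++;
}
scanner.close();
System.out.println(lines + " lines");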
This is about as efficient as it can get: a buffered binary read with no string conversion.
FileInputStream stream = new FileInputStream("/tmp/test.txt");
byte[] buffer = new byte[8192];
int count = 0;
int n;
while ((n = stream.read(buffer)) > 0) {
    for (int i = 0; i < n; i++) {
        if (buffer[i] == '\n') count++;
    }
}
stream.close();
System.out.println("Number of lines: " + count);
Do you need the exact number of lines, or only an approximation? I happen to process large files in parallel, and often I don't need to know the exact line count, so I revert to sampling: split the file into ten 1 MB chunks, count the lines in each chunk, then extrapolate to the full file size, and you'll get a pretty good approximation of the line count.
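A rough sketch of that sampling idea (the block size and sample count are arbitrary choices here): count the newlines in a few blocks spread across the file, compute the average bytes per line, and divide the file size by it.

// Estimates the line count by sampling SAMPLES blocks of BLOCK bytes
// spread evenly across the file and extrapolating from the newline density.
static long estimateLineCount(File file) throws IOException {
    final int SAMPLES = 10;
    final int BLOCK = 1024 * 1024; // 1 MB per sample
    long size = file.length();
    long sampledBytes = 0;
    long newlines = 0;
    try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
        byte[] buf = new byte[BLOCK];
        for (int s = 0; s < SAMPLES; s++) {
            raf.seek(size * s / SAMPLES); // spread the samples evenly
            int n = raf.read(buf);
            if (n <= 0) break;
            for (int i = 0; i < n; i++) {
                if (buf[i] == '\n') newlines++;
            }
            sampledBytes += n;
        }
    }
    if (newlines == 0) return 0; // no newlines seen; no sensible estimate
    return Math.round(size / ((double) sampledBytes / newlines));
}

For files smaller than the total sample size the regions overlap and the estimate degrades, so this only makes sense for genuinely large files.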
All previous answers suggest reading through the whole file and counting the number of newlines you find while doing so. You commented that some are "not effective", but that's the only way to do it. A "line" is nothing more than a simple character inside the file, and to count that character you must look at every single character in the file.
I'm sorry, but you have no choice. :-)
This solution is about 3.6× faster than the top rated answer when tested on a file with 13.8 million lines. It simply reads the bytes into a buffer and counts the \n characters. You could play with the buffer size, but on my machine, anything above 8KB didn't make the code faster.
private int countLines(File file) throws IOException {
    int lines = 0;
    FileInputStream fis = new FileInputStream(file);
    byte[] buffer = new byte[BUFFER_SIZE]; // BUFFER_SIZE = 8 * 1024
    int read;
    while ((read = fis.read(buffer)) != -1) {
        for (int i = 0; i < read; i++) {
            if (buffer[i] == '\n') lines++;
        }
    }
    fis.close();
    return lines;
}
If the already posted answers aren't fast enough you'll probably have to look for a solution specific to your particular problem.
For example, if these text files are logs that are only appended to and you regularly need to know the number of lines in them, you could create an index. This index would contain the number of lines in the file, when the file was last modified, and how large the file was then. This would allow you to recalculate the number of lines by skipping over all the bytes you had already seen and reading only the new ones, as sketched below.
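A sketch of that index idea, using a hypothetical LineIndex record (Java 16+) to hold the three values described above; only the bytes appended since the last scan are read:

// Hypothetical index record: line count, file size and mtime at the last scan.
record LineIndex(long lines, long size, long lastModified) {}

static LineIndex update(File file, LineIndex old) throws IOException {
    long newSize = file.length();
    if (old != null && old.size() == newSize && old.lastModified() == file.lastModified()) {
        return old; // file unchanged since the last scan
    }
    // If the file shrank it was rewritten, so fall back to a full rescan.
    boolean incremental = old != null && newSize >= old.size();
    long start = incremental ? old.size() : 0;
    long lines = incremental ? old.lines() : 0;
    try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
        raf.seek(start); // skip everything already counted
        byte[] buf = new byte[8192];
        int n;
        while ((n = raf.read(buf)) > 0) {
            for (int i = 0; i < n; i++) {
                if (buf[i] == '\n') lines++;
            }
        }
    }
    return new LineIndex(lines, newSize, file.lastModified());
}

Persisting the record between runs is left out; any small serialization would do.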
Old post, but I have a solution that could be useful for the next person.
Why not just use the file length to track the progress? Of course, the lines have to be roughly the same size, but it works very well for big files:
public static void main(String[] args) throws IOException {
    File file = new File("yourfilehere");
    double fileSize = file.length();
    System.out.println("=======> File size = " + fileSize);
    InputStream inputStream = new FileInputStream(file);
    InputStreamReader inputStreamReader = new InputStreamReader(inputStream, "iso-8859-1");
    BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
    int totalRead = 0;
    try {
        while (bufferedReader.ready()) {
            String line = bufferedReader.readLine();
            // LINE PROCESSING HERE
            totalRead += line.length() + 1; // we add +1 byte for the newline char.
            System.out.println("Progress ===> " + ((totalRead / fileSize) * 100) + " %");
        }
    } finally {
        bufferedReader.close();
    }
}
It lets you see the progress without doing any full read of the file first. I know it depends on a lot of factors, but I hope it will be useful :).
[Edit]
Here is a version with estimated time. I put in some printlns to show the progress and the estimate. The time estimate becomes accurate once enough lines have been processed (I tried with 10M lines, and after 1% of the processing, the estimate was 95% accurate).
I know some values should be moved into variables. This code was quickly written but has been useful for me. I hope it will be useful for you too :).
long startProcessLine = System.currentTimeMillis();
int totalRead = 0;
long progressTime = 0;
double percent = 0;
int i = 0;
int j = 0;
int fullEstimation = 0;
try {
    while (bufferedReader.ready()) {
        String line = bufferedReader.readLine();
        totalRead += line.length() + 1;
        progressTime = System.currentTimeMillis() - startProcessLine;
        percent = (double) totalRead / fileSize * 100;
        if ((percent > 1) && i % 10000 == 0) {
            int estimation = (int) ((progressTime / percent) * (100 - percent));
            fullEstimation += progressTime + estimation;
            j++;
            System.out.print("Progress ===> " + percent + " %");
            System.out.print(" - current progress : " + (progressTime) + " milliseconds");
            System.out.print(" - Will be finished in ===> " + estimation + " milliseconds");
            System.out.println(" - estimated full time => " + (progressTime + estimation));
        }
        i++;
    }
} finally {
    bufferedReader.close();
}
System.out.println("Ended in " + progressTime + " milliseconds"); // progressTime is in ms, not seconds
System.out.println("Estimative average ===> " + (fullEstimation / j));
System.out.println("Difference: " + ((((double) 100 / (double) progressTime)) * (progressTime - (fullEstimation / j))) + "%");
Feel free to improve this code if you think it's a good solution.
Quick and dirty, but it does the job:
import java.io.*;

public class Counter {
    public final static void main(String[] args) throws IOException {
        if (args.length > 0) {
            File file = new File(args[0]);
            System.out.println(countLines(file));
        }
    }

    public final static int countLines(File file) throws IOException {
        ProcessBuilder builder = new ProcessBuilder("wc", "-l", file.getAbsolutePath());
        Process process = builder.start();
        InputStream in = process.getInputStream();
        LineNumberReader reader = new LineNumberReader(new InputStreamReader(in));
        String line = reader.readLine();
        if (line != null) {
            return Integer.parseInt(line.trim().split(" ")[0]);
        } else {
            return -1;
        }
    }
}
Read the file line by line and increment a counter for each line until you have read the entire file.
Try the Unix "wc" command. I don't mean use it; I mean download the source and see how they do it. It's probably in C, but you can easily port the behavior to Java. The problem with making your own is accounting for the ending CR/LF problem, that is, how the last line and the line terminators are counted; see the sketch below.
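A sketch of one reading of that edge case (an assumption about the desired semantics, since wc -l itself only counts newline bytes): count '\n' bytes, then add one when the file is non-empty and doesn't end with a newline, so a final unterminated line still counts.

// Counts lines the way a text editor would: a trailing line without '\n' counts.
static long countLinesIncludingLast(File file) throws IOException {
    long lines = 0;
    int lastByte = -1;
    try (InputStream in = new BufferedInputStream(new FileInputStream(file))) {
        int b;
        while ((b = in.read()) != -1) {
            if (b == '\n') lines++;
            lastByte = b;
        }
    }
    if (lastByte != -1 && lastByte != '\n') {
        lines++; // the file doesn't end with a newline: count the final line
    }
    return lines;
}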
The BufferedReader is overkill:
Reader r = new FileReader("f.txt");
int count = 0;
int nextchar = 0;
while (nextchar != -1) {
    nextchar = r.read();
    // Compare against the newline char itself; the original used
    // Character.getNumericValue('\n'), which returns -1 and would match EOF instead.
    if (nextchar == '\n') {
        count++;
    }
}
r.close();
My search for a simple example has created one that's actually quite poor: calling read() repeatedly for a single character is far from optimal. See here for examples and measurements.
