I read text data from a big file line by line.
But I need to read only the first n-x lines (i.e., not read the last x lines).
How can I do that without reading the whole file more than once?
(I read a line and immediately process it, so I can't go back.)
In this post I'll provide two completely different approaches to solving your problem; depending on your use case, one of them will fit better than the other.
Alternative #1
This method is memory efficient but fairly complex. If you are going to skip a lot of content, it is the recommended approach, since you only ever hold one line in memory during processing.
The implementation in this post might not be super optimized, but the theory behind it stands clear.
You start by reading the file backwards, searching for N line breaks. Once you've located where in the file your processing should stop later on, you jump back to the beginning of the file.
Alternative #2
This method is easy to comprehend and very straightforward. During execution you will have N lines stored in memory, where N is the number of lines you'd like to skip at the end.
The lines are stored in a FIFO container (First In, First Out): you append each newly read line to the FIFO, and once it holds more than N entries you remove and process the first one. This way you always process lines that are at least N entries away from the end of the file.
Alternative #1
This might sound odd, but it's definitely doable and it's the way I'd recommend: start by reading the file backwards.
Seek to the end of the file
Read (and discard) bytes (towards the beginning of the file) until you've found SKIP_N line breaks
Save this position
Seek to the beginning of the file
Read (and process) lines until you reach the position you stored
Example code:
The code below strips off the last 42 lines of /tmp/sample_file and prints the rest, using the method described above.
import java.io.RandomAccessFile;
import java.io.File;

public class Example {
    protected static final int SKIP_N = 42;

    public static void main (String[] args)
        throws Exception
    {
        File fileHandle = new File ("/tmp/sample_file");
        RandomAccessFile rafHandle = new RandomAccessFile (fileHandle, "r");

        String s1;
        long currentOffset = 0;
        long endOffset = findEndOffset (SKIP_N, rafHandle);

        rafHandle.seek (0);

        // Process lines until we reach the newline that terminates the last
        // line we want to keep. This assumes '\n' line endings; "\r\n"
        // endings would need + 2 below instead of + 1.
        while (currentOffset < endOffset && (s1 = rafHandle.readLine ()) != null) {
            System.out.println (s1);
            currentOffset += s1.length () + 1; // the line plus its '\n'
        }
    }

    protected static long findEndOffset (int skipNLines, RandomAccessFile rafHandle)
        throws Exception
    {
        long currentOffset = rafHandle.length ();
        long endOffset = 0;
        int foundLines = 0;

        byte[] buffer = new byte[(int) Math.min (1024L, rafHandle.length ())];

        // A trailing newline merely terminates the last line, so don't count
        // it as a line separator.
        if (currentOffset > 0) {
            rafHandle.seek (currentOffset - 1);
            if (rafHandle.read () == '\n')
                --currentOffset;
        }

        while (foundLines < skipNLines && currentOffset != 0) {
            long previousOffset = currentOffset;
            currentOffset = Math.max (currentOffset - buffer.length, 0);
            // Only read up to the previous chunk, so the shorter chunk at the
            // start of the file is never scanned twice.
            int readLength = (int) (previousOffset - currentOffset);

            rafHandle.seek (currentOffset);
            rafHandle.readFully (buffer, 0, readLength);

            for (int i = readLength - 1; i >= 0; --i) {
                if (buffer[i] == '\n') {
                    ++foundLines;
                    if (foundLines == skipNLines) {
                        // Offset of the newline that ends the last kept line.
                        endOffset = currentOffset + i;
                        break;
                    }
                }
            }
        }

        return endOffset;
    }
}
Alternative #2
Read from your file line by line
On every successfully read line, insert the line at the back of your LinkedList<String>
If your LinkedList<String> contains more lines than you'd like to skip, remove the first entry and process it
Repeat until there are no more lines to be read
Example code:
import java.io.InputStreamReader;
import java.io.FileInputStream;
import java.io.BufferedReader;
import java.util.LinkedList;

public class Example {
    protected static final int SKIP_N = 42;

    public static void main (String[] args)
        throws Exception
    {
        String line;
        LinkedList<String> lli = new LinkedList<String> ();

        FileInputStream fis = new FileInputStream ("/tmp/sample_file");
        InputStreamReader isr = new InputStreamReader (fis);
        BufferedReader bre = new BufferedReader (isr);

        while ((line = bre.readLine ()) != null) {
            lli.addLast (line);

            // Only start processing once the FIFO holds more than SKIP_N
            // lines; the last SKIP_N lines never leave the list.
            if (lli.size () > SKIP_N) {
                System.out.println (lli.removeFirst ());
            }
        }

        bre.close ();
    }
}
You need simple read-ahead logic.
Read x lines first and put them in a buffer. Then you can repeatedly read one line at a time, add it to the end of the buffer, and process the first line in the buffer. When you reach EOF, the x unprocessed lines left in the buffer are exactly the ones to skip.
Update: I noticed the comments on the question and on my own answer, so to clarify: my suggestion works when n is unknown. x must be known, of course. All you need to do is create a simple buffer, fill it with x lines, and then start your processing.
Regarding the implementation of the buffer: as long as we are talking about Java's built-in collections, a simple LinkedList is all you need. Since you'll be pulling one line out of the buffer for every line you put in, an ArrayList won't perform well due to the constant shifting of array indices. Generally speaking, an array-backed buffer has to be circular to avoid that cost.
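For illustration, here is a minimal sketch of that read-ahead buffer using ArrayDeque, which is exactly such a circular array-backed structure; the file path and the value of x are placeholders:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayDeque;

public class ReadAhead {
    public static void main(String[] args) throws IOException {
        int x = 42; // number of trailing lines to skip
        ArrayDeque<String> buffer = new ArrayDeque<>(x);

        try (BufferedReader reader = new BufferedReader(new FileReader("/tmp/sample_file"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                buffer.addLast(line);
                // Once the buffer holds more than x lines, the head is
                // guaranteed not to be one of the last x lines.
                if (buffer.size() > x) {
                    process(buffer.removeFirst());
                }
            }
        }
        // The x lines left in the buffer are the ones we skip.
    }

    static void process(String line) {
        System.out.println(line);
    }
}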
Just read x lines ahead; that is, keep a queue of x lines.
So I'm trying to create a 2D character array from a .txt file. The first while loop calculates the number of columns and rows. The second while loop enters the chars into the 2D array. However, when I create BufferedReader br2, use readLine(), and then try to print the line, it prints out "null". Why does the second BufferedReader start at the end of the file?
// cols, rows, str, line, row and maze are instance fields
public Maze(FileReader reader) {
    try {
        BufferedReader br = new BufferedReader(reader);
        cols = 0;
        rows = 0;
        str = br.readLine();
        while (str != null) {
            if (str.length() > cols) {
                cols = str.length();
            }
            rows++;
            str = br.readLine();
        }
    }
    catch (IOException e) {
        System.out.println("Error");
    }
    maze = new char[getNumRows()][getNumColumns()];
    try {
        BufferedReader br2 = new BufferedReader(reader);
        line = br2.readLine();
        System.out.println(line);
        while ((line = br2.readLine()) != null) {
            System.out.println(line);
            for (int i = 0; i < getNumColumns(); i++) {
                maze[row][i] = line.charAt(i);
            }
            row++;
        }
    }
    catch (IOException e) {
        System.out.println("Error");
    }
}
This is how I call it from main:
public class RobotTest {
    public static void main(String[] args) throws IOException {
        File file = new File(args[0]);
        Maze maze = new Maze(new FileReader(file));
    }
}
You are using the same underlying reader to initialize both BufferedReaders, so after the first one finishes reading, the second one continues from the EOF. You must return to the beginning of the file before iterating through it again.
In principle you can rewind a reader with reset(), and mark() is worth checking out as well, but note that FileReader does not support mark/reset (the inherited Reader.reset() simply throws an IOException). BufferedReader does support them, as long as the read-ahead limit passed to mark() covers everything read in between; otherwise, just open a fresh reader for the second pass.
Source: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/io/Reader.html#reset()
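A minimal sketch of the two-pass approach with a single BufferedReader, assuming the whole file fits within the read-ahead limit given to mark():
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class TwoPassRead {
    public static void main(String[] args) throws IOException {
        try (BufferedReader br = new BufferedReader(new FileReader(args[0]))) {
            br.mark(1 << 20); // reset() only works if at most this many chars are read

            int rows = 0;
            while (br.readLine() != null) {
                rows++; // first pass: count lines
            }

            br.reset(); // rewind to the mark for the second pass

            String line;
            while ((line = br.readLine()) != null) {
                System.out.println(line); // second pass sees the file again
            }
            System.out.println(rows + " rows");
        }
    }
}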
As the name indicates, a 'BufferedReader' uses a buffer.
There's a reason for that.
Harddisks, network communications, SSDs - these are all concepts that tend to operate in terms of packets. They write or read largish chunks. For example, with networking, you can't just 'send a bunch of bytes down a wire' - you need to send a packet, because the packet includes information about where the packet is supposed to go and to solve the ordering issue (when you send packets on the internet, one you sent later may arrive earlier, so packets need an index number on them so the receiver can re-order them in the right way).
If you send one byte, okay, but that'll be ~200 bytes on the wire. Hence, sending 1 byte 5000 times puts ~1 million bytes on the wire, whereas sending all 5000 bytes in one go costs only ~5200 bytes: nearly a 200x difference!
Similar principles apply elsewhere; thus, 'send 1 byte' or 'read 1 byte' is often inefficient by a factor of 100x or more.
Hence, buffers. You ASK for one character or one line (which can be quite a short line) from your BufferedReader and it will dutifully give you this, but under the hood it has read an entire largish chunk (because that is efficient), and will be fielding your further requests for e.g. another line from this buffer until it runs out and then it grabs another chunk.
The upshot of all that is that you CAN NEVER use a reader again once you wrap it in a BufferedReader. You are 'committed' to the buffer now: that BufferedReader is the only thing you can read from, from here on out, until the stream is done.
You're creating another one, and thus your code is buggy: you're now effectively skipping whatever the first BufferedReader buffered. Given that you're getting null out, it evidently buffered the entire contents of the file; on another system, with a bigger file, it might not return null but instead some line deep into the file. Either way, you cannot use that FileReader anymore once you have created a BufferedReader around it.
The solution is simple enough: make the BufferedReader once, and pass that around. Don't keep making new BufferedReader instances out of the same reader.
Also, resources need to be 'protected': you must close them no matter how your code exits. If your code throws an error, you still need to close the resources; failure to do so means your program will eventually become incapable of opening files at all, and the only way out is to completely close the app.
Finally, FileReader is basically broken; it uses the 'platform default charset encoding', which is anyone's guess. You want to hardcode which encoding the file uses, and usually the right answer is UTF-8. This doesn't matter if the only characters are simple ASCII, but it's 2021: people use emojis, snowmen, and almost every language on the planet needs more than just a to z. If your encoding settings are off, the text comes out as mangled gobbledygook.
The newer Files API (java.io.File is outdated and you probably don't want to use it anymore) defaults to UTF-8, which is great, saves us some typing.
thus:
public static void main(String[] args) throws IOException {
    // Note: have Maze accept this BufferedReader (or a plain Reader)
    // instead of a FileReader.
    try (var reader = Files.newBufferedReader(Paths.get(args[0]))) {
        Maze maze = new Maze(reader);
    }
}
I am trying to read information from a file, but for each line it just returns null.
String[] quotes = new String[numberOfLines];
String myLine;

for (int i = 0; i < numberOfLines; i++) {
    myLine = readFile.readLine();
    System.out.println(myLine);
    quotes[i] = myLine;
}
numberOfLines is the number of lines in the file that actually have characters on them.
BufferedReader.readLine returns null if and only if you have read to the end of the file / stream: See javadoc.
Therefore, you have reached the end of file.
Therefore the problem is somewhere else in your code:
how readFile is instantiated / used (e.g. have you opened the right file?), or
how you get the value for numberOfLines.
Unfortunately, we can't go further without seeing the code that does those things. Or better still, an MCVE.
UPDATE
One possibility: the code you use to count the lines has read the file via readFile and left the BufferedReader positioned at the end of file.
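For illustration, a hypothetical sketch of that failure mode (the file name and the counting loop are assumptions, not the asker's actual code):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class Quotes {
    public static void main(String[] args) throws IOException {
        BufferedReader readFile = new BufferedReader(new FileReader("quotes.txt"));

        // Counting pass: this consumes the whole stream...
        int numberOfLines = 0;
        while (readFile.readLine() != null) {
            numberOfLines++;
        }

        // ...so without re-opening, every readLine() below would return null.
        readFile.close();
        readFile = new BufferedReader(new FileReader("quotes.txt")); // the fix

        String[] quotes = new String[numberOfLines];
        for (int i = 0; i < numberOfLines; i++) {
            quotes[i] = readFile.readLine();
            System.out.println(quotes[i]);
        }
        readFile.close();
    }
}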
Is there a way that I could read lines from line number n to line number m from a file?
To put it another way: I have a file with over 100k entries. I would like to load 10k lines at a time, process them, and then load the next 10k lines, so as to run with limited memory resources. Is there any way to accomplish this?
You cannot start reading at an arbitrary line number, but that's not actually what the second part of your question asks for. What you want is the following, if it's fine to hold the resource open for the whole process:
int batchSize = 10000;
try (BufferedReader br = Files.newBufferedReader(file.toPath())) {
    boolean eof = false;
    while (!eof) {
        List<String> batch = new ArrayList<>(batchSize);
        for (int i = 0; i < batchSize; i++) {
            String line = br.readLine();
            if (eof = line == null) break; // assignment on purpose: remember EOF
            batch.add(line);
        }
        if (!batch.isEmpty()) { // the final batch can come up empty
            processBatch(batch);
        }
    }
}
If you want to release the resource as soon as possible, a better idea might be to have a producer splitting the file into batches of 10,000 lines while a consumer processes them in order. This can be achieved very easily with two threads and a BlockingQueue<File>.
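As an illustration, here is a minimal sketch of that producer/consumer idea. Instead of the BlockingQueue<File> of temporary files suggested above, it hands batches of lines directly over a BlockingQueue<List<String>>; the processBatch method and the queue capacity are placeholders:
import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BatchPipeline {
    // Poison pill: a shared sentinel telling the consumer there is nothing left.
    private static final List<String> EOF = new ArrayList<>();

    public static void main(String[] args) throws Exception {
        BlockingQueue<List<String>> queue = new ArrayBlockingQueue<>(2);
        int batchSize = 10000;

        Thread producer = new Thread(() -> {
            try (BufferedReader br = Files.newBufferedReader(Paths.get(args[0]))) {
                List<String> batch = new ArrayList<>(batchSize);
                String line;
                while ((line = br.readLine()) != null) {
                    batch.add(line);
                    if (batch.size() == batchSize) {
                        queue.put(batch); // blocks while the consumer is behind
                        batch = new ArrayList<>(batchSize);
                    }
                }
                if (!batch.isEmpty()) {
                    queue.put(batch);
                }
                queue.put(EOF);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        producer.start();

        List<String> batch;
        while ((batch = queue.take()) != EOF) {
            processBatch(batch); // consumer works while the producer reads ahead
        }
        producer.join();
    }

    private static void processBatch(List<String> batch) {
        System.out.println("processing " + batch.size() + " lines");
    }
}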
I am trying to solve UVa problem 458 (Decoder) and came up with the following algorithm, which gives me the correct output for the sample input data but runs longer than allowed.
import java.util.Scanner;

public class Decoder {
    public void decoder() {
        Scanner sc = new Scanner(System.in);
        while (sc.hasNext()) {
            String line = sc.nextLine();
            for (int i = 0; i < line.length(); i++) {
                if (line.charAt(i) >= 32 && line.charAt(i) <= 126) {
                    System.out.print((char) (line.charAt(i) - 7));
                }
            }
            System.out.println();
        }
    }
}
What I've looked into
Well, I have read the forums and most of the solutions were pretty similar. I have been researching whether there is a way of avoiding the for loop that runs through the string and prints out the new chars, but this loop is inevitable: the algorithm has to touch every character of the input.
The problem also says to only change ASCII printable values, which is why I check that each char is between 32 and 126 inclusive. According to Wikipedia, that is the range of printable values.
http://ideone.com/XkByW9
Avoid decoding the stream to characters. It's ok to use bytes if you only have to support ASCII.
Read and write the data in big chunks to avoid function/system call overhead.
Avoid unnecessary allocations. Currently you are allocating a new String for every line.
Do not split the input into lines; that avoids bad performance on very short lines.
Example:
public static void main(String[] args) throws IOException {
    byte[] buffer = new byte[2048];
    while (true) {
        int len = System.in.read(buffer);
        if (len <= 0) {
            break;
        }
        for (int i = 0; i < len; i++) {
            // Same transformation as in the question: shift printable ASCII
            // values back by 7. Other bytes (such as '\n') pass through
            // unchanged, which preserves the line structure.
            if (buffer[i] >= 32 && buffer[i] <= 126) {
                buffer[i] -= 7;
            }
        }
        System.out.write(buffer, 0, len);
    }
}
It processes the input as you would normally process a binary file. Each iteration reads up to 2048 bytes into a buffer, processes them, and writes them to standard output. The program ends when EOF is reached and read returns -1. 2048 is usually a good buffer size, but you might want to try different sizes and see which one works best.
Never use Scanner for long inputs; it is far slower than other means of reading input in Java, such as BufferedReader. This UVa problem looks like one with quite a long input.
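For illustration, here is a sketch of the question's algorithm with Scanner swapped for BufferedReader, buffering the output in a StringBuilder so it is written in one go at the end:
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class Decoder {
    public static void main(String[] args) throws Exception {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        StringBuilder out = new StringBuilder();
        String line;
        while ((line = br.readLine()) != null) {
            for (int i = 0; i < line.length(); i++) {
                char c = line.charAt(i);
                if (c >= 32 && c <= 126) {
                    out.append((char) (c - 7)); // same decoding as the question
                }
            }
            out.append('\n');
        }
        System.out.print(out); // one write instead of many println calls
    }
}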
File tempFile = new File(loadedFileName);
FileInputStream datStream = new FileInputStream(tempFile);
InputStreamReader readDat = new InputStreamReader(datStream);
int data = readDat.read();
String temp = "";

// keeps reading one character at a time; read() returns -1 when there are
// no more characters
while (data != -1) {
    char datChar = (char) data;
    if (temp.length() > 2) {
        if ((temp.substring(temp.length() - 1)).equals("\n")) {
            String[] arrayTemp = temp.split("\\|");
            if (Float.valueOf(arrayTemp[columnNumber - 1]) > value) {
                System.out.print(temp);
            }
            temp = "";
        }
    }
    temp = temp + datChar;
    data = readDat.read();
}
The code reads the file in character by character and appends each character to a string. Once it reaches a newline, it splits the string into an array, checks whether a value in that array is greater than a given value, and if so prints the string the array was split from.
The problem with this code is that, even though it gets most of the job done, a buffered line is only processed when a newline shows up, and the loop stops as soon as read() returns -1 at the end of the file. Because the file has no newline after its last line, the loop dies before that line is ever printed.
An example of a few lines being read in.
This | world | is | brown | and | dirty|
24 | hours | are | in | day| Hello|
I can't store the whole file in an array, and I can't use a BufferedReader. I've tried counting the number of "|" characters instead, the idea being that once it counted 6 pipes it would split and check the string before printing, which I think would solve the problem of not printing the last line, but I can't get that to work. Here is how I tried to implement the count of the |:
while (data != -1) {
    char datChar = (char) data;
    // checks to make sure that temp isn't an empty string first
    if (temp.length() > 2) {
        // checks to see if a new line started and if it did splits the current string into an array
        if ((temp.substring(temp.length() - 1)).equals("\\|")) {
            if (count == 6) {
                String[] arrayTemp = temp.split("\\|");
                // then checks the variable in the array col and compares it with a value and prints if it is greater
                if (Float.valueOf(arrayTemp[columnNumber - 1]) > value) {
                    System.out.print(temp);
                }
                temp = "";
                count = 0;
            }
        }
    }
    temp = temp + datChar;
    data = readDat.read();
}
You'd be better off using a BufferedReader and readLine. That will be more efficient in terms of IO, and means you don't need to worry about handling the line breaks yourself:
BufferedReader reader = new BufferedReader(readDat);
String line;
while ((line = reader.readLine()) != null) {
    // Handle a line of data - check for it being empty, split it etc.
}
This will give you all the lines in the file, whether or not there's a terminating newline.
EDIT: If you really can't use BufferedReader, I would make your code significantly simpler by checking whether the current character is \n or the end of the file, and processing the "line so far" if so:
StringBuilder line = new StringBuilder();
while (true) {
    int next = readDat.read();
    if (next == '\n' || next == -1) {
        // Handle line here
        line = new StringBuilder();
        if (next == -1) {
            break;
        }
    } else {
        line.append((char) next);
    }
}
Note the use of StringBuilder instead of repeated concatenation.
Maybe you can try counting the number of lines in the file and checking against that count, in addition to checking for newlines.
Also, you could rewrite your code to read a whole line at a time and split that, instead of reading the file character by character. If you do that, you don't have to check for newlines at all (and I think you don't even have to special-case the last line of the file). I'm not sure it's faster (but I'm betting it is), and it would definitely result in more readable, and as a result more maintainable, code.
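A minimal sketch of that line-based rewrite; the file name, columnNumber, and value are hypothetical stand-ins for the question's variables:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ColumnFilter {
    public static void main(String[] args) throws IOException {
        int columnNumber = 1; // placeholder values
        float value = 10f;

        try (BufferedReader reader = new BufferedReader(new FileReader("data.dat"))) {
            String line;
            // readLine() returns the last line whether or not the file ends
            // with a newline, so no special-casing is needed.
            while ((line = reader.readLine()) != null) {
                String[] arrayTemp = line.split("\\|");
                // Assumes the chosen column holds numeric values, as the
                // question's code does; trim() strips the padding spaces.
                if (Float.valueOf(arrayTemp[columnNumber - 1].trim()) > value) {
                    System.out.println(line);
                }
            }
        }
    }
}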
I may be misunderstanding your question, but it seems to me like some line based scanning using a basic Scanner would be much easier in this situation. Would the following simplify things for you?
Scanner input = new Scanner(new File("FileName.txt"));
while (input.hasNextLine()) {
    String[] arrayTemp = input.nextLine().split("\\|");
    // etc etc.
}