So I'm trying to create a 2D character array from a .txt file. The first while loop calculates the number of columns and rows. The second while loop enters chars into the 2D array. However, when I create BufferedReader br2, call readLine(), and then try to print the result, it prints "null". Why does the second BufferedReader start at the end of the file?
public Maze(FileReader reader) {
    try {
        BufferedReader br = new BufferedReader(reader);
        cols = 0;
        rows = 0;
        str = br.readLine();
        while (str != null) {
            if (str.length() > cols) {
                cols = str.length();
            }
            rows++;
            str = br.readLine();
        }
    }
    catch (IOException e) {
        System.out.println("Error");
    }
    maze = new char[getNumRows()][getNumColumns()];
    try {
        BufferedReader br2 = new BufferedReader(reader);
        line = br2.readLine();
        System.out.println(line);
        while ((line = br2.readLine()) != null) {
            System.out.println(line);
            for (int i = 0; i < getNumColumns(); i++) {
                maze[row][i] = line.charAt(i);
            }
            row++;
        }
    }
    catch (IOException e) {
        System.out.println("Error");
    }
}
This is how I call it from main:
public class RobotTest {
public static void main(String[] args) throws IOException{
File file = new File(args[0]);
Maze maze = new Maze(new FileReader(file));
}
}
You are using the same reader to initialize both BufferedReaders, so after the first one finishes reading, the next one continues reading at EOF. You must return to the beginning of the file before iterating through it again.
In principle you can rewind a Reader with reset() (paired with mark(); check out the documentation). Note, though, that FileReader itself does not support mark()/reset() (its markSupported() returns false), so in practice you would either call mark()/reset() on a single shared BufferedReader, or simply open a fresh FileReader for the second pass.
Source: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/io/Reader.html#reset()
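For completeness, here is what mark()/reset() on a BufferedReader looks like; this is a sketch over an in-memory StringReader rather than your file, but the calls are the same:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class MarkResetDemo {
    public static void main(String[] args) throws IOException {
        // BufferedReader supports mark/reset; a bare FileReader does not.
        BufferedReader br = new BufferedReader(new StringReader("one\ntwo\nthree\n"));
        br.mark(1 << 16);                  // remember this position; the limit must cover all chars read before reset()
        System.out.println(br.readLine()); // one
        System.out.println(br.readLine()); // two
        br.reset();                        // rewind to the marked position
        System.out.println(br.readLine()); // one again
    }
}
```

The readAheadLimit passed to mark() must be at least as large as the amount of data you read before calling reset(), or reset() may fail.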
As the name indicates, a 'BufferedReader' uses a buffer.
There's a reason for that.
Hard disks, network communications, SSDs - these all tend to operate in terms of packets. They write or read largish chunks. For example, with networking, you can't just 'send a bunch of bytes down a wire' - you need to send a packet, because the packet includes information about where it is supposed to go and solves the ordering issue (when you send packets on the internet, one you sent later may arrive earlier, so packets need an index number on them so the receiver can re-order them the right way).
If you send one byte, okay - but that'll be ~200 bytes on the wire. Hence, sending 1 byte 5000 times is ~1 million bytes sent, whereas sending 5000 bytes in one go is only ~5200 bytes; nearly a 200x difference!
Similar principles apply elsewhere; thus, 'send 1 byte' or 'read 1 byte' at a time is often inefficient by a factor of hundreds.
Hence, buffers. You ASK for one character or one line (which can be quite a short line) from your BufferedReader and it will dutifully give you this, but under the hood it has read an entire largish chunk (because that is efficient), and will be fielding your further requests for e.g. another line from this buffer until it runs out and then it grabs another chunk.
The upshot of all that is that you can NEVER use a reader again once you wrap it in a BufferedReader. You are 'committed' to the buffer now: that BufferedReader is the only thing you can read from, from here on out, until the stream is done.
You're creating another one, and thus your code is buggy: you're now effectively skipping whatever the first BufferedReader buffered. Given that you're getting null out, that means right now it buffered the entire contents of the file; on another system, with a bigger file, it might not return null but some line deep into the file. Either way, you cannot use that FileReader anymore once you have wrapped it in a BufferedReader.
The solution is simple enough: Make the bufferedreader, once, and pass that around. Don't keep making BufferedReader instances out of it.
Also, resources need to be 'protected' - you must close them no matter how your code exits. If your code throws an error, you still need to close the resources; failure to do so means your program will eventually be unable to open files at all, and the only way out is to completely close the app.

Finally, FileReader is basically broken: it uses the 'platform default charset encoding', which is anyone's guess. You want to hardcode the encoding, and usually the right answer is "UTF-8". This doesn't matter if the only characters are simple ASCII, but it's 2021. People use emojis, snowmen, and almost every language on the planet needs more than just a to z. If your encoding settings are off, the text will be mangled gobbledygook.
The newer Files API (java.io.File is outdated and you probably don't want to use it anymore) defaults to UTF-8, which is great, saves us some typing.
thus:
public static void main(String[] args) throws IOException {
    try (var reader = Files.newBufferedReader(Paths.get(args[0]))) {
        Maze maze = new Maze(reader);
    }
}
So, I am writing a program where I am reading from a file one character at a time, doing an operation with the character, then writing the output to a different file.
For some reason I get a different result when I hard-code the file path (I did that just so I didn't have to keep typing the file name while debugging) and when I pass the files from the command line.
When I pass the file from the command line it will skip input lines sometimes, so if I had a file with 10 lines I may only get 8 lines being processed.
I have a feeling it has something to do with whether or not there are spaces at the end of the input lines but I can't seem to figure it out. Any help would be much appreciated.
Also, I was using NetBeans when I hardcoded the file path, and ran the program from the terminal when I used command-line arguments. I have pasted the I/O code below.
while ((i = buffRead.read()) != -1) {
    try {
        char c = (char) i;
        if (Character.isWhitespace(c)) {
            if (converter.getStackSize() > 1) {
                converter.resetConverter();
                throw new IncorrectNumOfOperandsException();
            }
            buffRead.readLine();
            converter.resetConverter();
            writeOut.println();
        } else {
            converter.register(c);
        }
    }
    catch (InvalidCharException j) {
        writeOut.println("Invalid Character Entered\n");
        buffRead.readLine();
    }
    catch (IncorrectNumOfOperatorsException k) {
        writeOut.println("Too Many Operators for Number of Operands\n");
        buffRead.readLine();
    }
    catch (IncorrectNumOfOperandsException m) {
        writeOut.println("Too Many Operands for Number of Operators\n");
        buffRead.readLine();
    }
}
buffRead.close();
writeOut.close();
I think I see the problem.
You test c to see if it is a whitespace character, and if it is, you then call readLine(). What readLine() does is read everything up to and including the next end-of-line sequence.
So what happens when c contains a newline character?
newline is a whitespace character (look it up)
so you read a line, starting at the first character after the newline that you just read
and discard the line.
So you have (accidentally) thrown away a complete line of input.
The solution ... I will leave to you.
When I pass the file from the command line it will skip input lines sometimes ...
I suspect that the same behavior was happening when you were typing the input ... but you didn't notice it. But it is possible that there is something going on with platform specific line termination sequences.
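To make the failure mode above concrete: one possible shape of the fix (a sketch with a reduced, hypothetical version of the asker's loop; converter and writeOut are omitted) is to treat a line terminator as the end of the current line directly, and only call readLine() for whitespace that appears mid-line:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class LineEndDemo {
    public static void main(String[] args) throws IOException {
        // Two lines of input; "ab cd" has mid-line whitespace, "ef" does not.
        BufferedReader buffRead = new BufferedReader(new StringReader("ab cd\nef\n"));
        int i;
        int lines = 0;
        while ((i = buffRead.read()) != -1) {
            char c = (char) i;
            if (c == '\n') {
                lines++;                 // end of line reached; do NOT call readLine() here
            } else if (Character.isWhitespace(c)) {
                buffRead.readLine();     // mid-line whitespace: skipping the rest of the line is intentional
                lines++;
            }
        }
        System.out.println(lines);       // 2 - every line accounted for, none silently dropped
    }
}
```

With the original logic ('\n' also triggers readLine()), the readLine() call would swallow the following line. A Windows-style "\r\n" terminator would need the same special-casing for '\r'.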
Unfortunately the code you provided seems to have nothing to do with the question! Where are the two different ways of obtaining the File?
Also, try using the try-with-resources statement. Something like this:
try (final Reader rdr = new InputStreamReader(System.in);
     final BufferedReader brd = new BufferedReader(rdr)) {
    /*
     * Resources declared above will be automatically closed.
     */
    brd.readLine();
}
...it will ensure all files are closed.
I am having some problems debugging my code; I have managed to fix all the bugs but one:
The analyzer (SpotBugs) reports "Method ignores results of InputStream.read()" on reader.read(buffer, 0, n) and advises me to check the return value, because otherwise the caller will not be able to correctly handle the case where fewer bytes were read than requested.
char[] buffer = new char[n];
try {
    reader = new BufferedReader(new InputStreamReader(new FileInputStream(file), "UTF-8"));
    reader.read(buffer, 0, n);
    reader.close();
}
catch (RuntimeException e) {
    throw e;
}
catch (Exception e) {
    System.out.println("Something went wrong");
}
for (int i = 0; i < buffer.length; i++) {
    int swap = i % 2;
    if (Integer.toString(swap).equals(Integer.toString(1))) {
        buffer[i] = ' ';
    }
}
System.out.print(buffer);
How can I fix this bug?
read(), when used with a buffer, returns the actual number of characters read (or -1 for end of stream). It's possible that the buffer isn't filled completely by a single read (though small buffers often are, since data is transferred in blocks), so you need to make sure (i.e. use a while loop) that you've read the number of characters you intended to.
R. Castro gave you a good explanation, but what's missing (in plain words) is that you are not checking how many characters are actually read from the file. The number can differ from the size of your buffer; that's what SpotBugs is trying to tell you. Your file can be longer or shorter than the buffer, and you are not handling the case where the counts differ. The loop you have also needs to iterate over the number of characters actually read rather than buffer.length.
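Both answers boil down to the same loop. A sketch of reading exactly n characters (or as many as the stream holds), shown over an in-memory StringReader for the demo:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class ReadFullyDemo {
    // Loop until n chars are read or EOF; returns the number actually read,
    // which is what the caller must use instead of assuming n.
    static int readFully(Reader reader, char[] buffer, int n) throws IOException {
        int total = 0;
        while (total < n) {
            int got = reader.read(buffer, total, n - total);
            if (got == -1) break; // end of stream before n chars were available
            total += got;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        char[] buffer = new char[10];
        int read = readFully(new StringReader("hello"), buffer, 10);
        System.out.println(read);                      // 5 - fewer than requested; the caller must handle this
        System.out.println(new String(buffer, 0, read)); // hello
    }
}
```

Using read instead of buffer.length in the later processing loop is exactly the fix SpotBugs is asking for.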
I'm writing a simple file server that lets the user telnet in, supply a username, looks up the access level given to that user, and then shows a list of files, only allowing the user to see a file's contents if they have a sufficient access level. This code is for the server side; I'm just using PuTTY for the client.
I read the userfile in (delimited with colons to separate name and access level)
paul:10
schemm:8
bobbarker:0
with this code
static Map<String, Integer> users = new HashMap<String, Integer>();

file = new File("userfile.txt");
try {
    out.println("Reading userfile.txt");
    Scanner scannerusers = new Scanner(file);
    while (scannerusers.hasNextLine()) {
        String line = scannerusers.nextLine();
        line.trim();
        String field[] = line.split(":");
        users.put(field[0], Integer.parseInt(field[1]));
    }
    scannerusers.close();
}
catch (FileNotFoundException e) {
    out.println("userfile.txt not found!");
}
catch (FileNotFoundException e)
{
out.println("userfile.txt not found!");
}
But my actual problem is here (at least I think). Both the uncommented and the commented code fail on the first attempt but succeed on the second.
// socket connection stuff here, and all this is nested in a try-catch to get
// connection errors
while (!check) {
    outToClient.writeBytes("What is your username?");
    clientinput = inFromClient.readLine();
    String username = clientinput;

    if (users.get(username) == null) {
        outToClient.writeBytes("Invalid username");
    } else {
        check = true;
    }

    //try
    //{
    //    accesslevel = users.get(username);
    //    check = true; // my thinking was that the NullPointerException would be thrown
    //                  // before this point, but either way doesn't fix the problem
    //}
    //catch (NullPointerException e)
    //{
    //    outToClient.writeBytes("Incorrect username");
    //}
}
Edit: I put the full source on pastebin here
I agree this does sound like a race condition, but I don't see where one would come from in your code. Perhaps it's the way you open your streams? I haven't played with Java in a while, but if I remember right I always had the base stream, then opened an InputStreamReader (I think) from the base stream, then wrapped a BufferedReader around that.
Try printing out the string you receive from the client, in ASCII codes. Printing out the actual string might not show some unprintable characters you might be receiving.
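A minimal sketch of that diagnostic idea: dump each character of the received string as its numeric code, which exposes invisible characters like '\r' (13) that a plain println hides. (The dump helper here is hypothetical, not part of the asker's code.)

```java
public class DumpChars {
    // Render each char of s as its numeric code, space-separated.
    static String dump(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            sb.append((int) c).append(' ');
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        // A trailing "\r\n" (as a telnet client might send) shows up plainly:
        System.out.println(dump("hi\r\n")); // 104 105 13 10
    }
}
```

If the first line read from PuTTY contains terminal negotiation bytes or a stray '\r', they will show up in this output immediately.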
If it ALWAYS fails the first time, and ALWAYS succeeds the second, then it sounds like a race condition. Perhaps the file read code is triggered by the first read operation, but done in a separate thread so the calling thread continues instead of synchronously waiting for the result. We need to see the calling code to be sure.
Perhaps inFromClient.readLine() is reading more (or less) than you expect the first time around. Are you sure it is returning what you expect? I haven't used PuTTY in a while, perhaps it is sending some terminal control characters on initial connect that are getting picked up by your first read? You also don't show us the connection code, are you sending data back and forth prior to this point that is not being fully processed?
I read text data from a big file line by line.
But I need to read just n-x lines (i.e., skip the last x lines).
How can I do it without reading the whole file more than once?
(I read a line and immediately process it, so I can't go back.)
In this post I'll provide you with two completely different approaches to solving your problem, and depending on your use case one of the solutions will fit better than the other.
Alternative #1
This method is memory efficient though quite complex. If you are going to skip a lot of content, this method is recommended, since you will only store one line at a time in memory during processing.
The implementation of it in this post might not be super optimized, but the theory behind it stands clear.
You will start by reading the file backwards, searching for N number of line breaks. When you've successfully located where in the file you'd like to stop your processing later on you will jump back to the beginning of the file.
Alternative #2
This method is easy to comprehend and is very straight forward. During execution you will have N number of lines stored in memory, where N is the number of lines you'd like to skip in the end.
The lines will be stored in a FIFO container (First In, First Out). You'll append the last read line to your FIFO and then remove and process the first entry. This way you will always process lines at least N entries away from the end of your file.
Alternative #1
This might sound odd but it's definitely doable and the way I'd recommend you to do it; start by reading the file backwards.
Seek to the end of the file
Read (and discard) bytes (towards the beginning of the file) until you've found SKIP_N line breaks
Save this position
Seek to the beginning of the file
Read (and process) lines until you've come down to the position you've stored away
Example code:
The code below will strip off the last 42 lines from /tmp/sample_file and print the rest using the method described earlier in this post.
import java.io.File;
import java.io.RandomAccessFile;

public class Example {
    protected static final int SKIP_N = 42;

    public static void main(String[] args) throws Exception {
        File fileHandle = new File("/tmp/sample_file");
        RandomAccessFile rafHandle = new RandomAccessFile(fileHandle, "r");
        String s1;
        long currentOffset = 0;
        long endOffset = findEndOffset(SKIP_N, rafHandle);

        rafHandle.seek(0);
        while ((s1 = rafHandle.readLine()) != null) {
            currentOffset += s1.length() + 1; // (s1 + "\n").length; assumes single-byte chars and '\n' endings
            if (currentOffset >= endOffset) {
                break;
            }
            System.out.println(s1);
        }
    }

    protected static long findEndOffset(int skipNLines, RandomAccessFile rafHandle) throws Exception {
        long prevOffset = rafHandle.length();
        long currentOffset = prevOffset;
        long endOffset = 0;
        int foundLines = 0;
        byte[] buffer = new byte[(int) Math.min(1024, rafHandle.length())];

        while (foundLines < skipNLines && currentOffset != 0) {
            currentOffset = Math.max(prevOffset - buffer.length, 0);
            int chunk = (int) (prevOffset - currentOffset); // read only bytes we haven't scanned yet
            rafHandle.seek(currentOffset);
            rafHandle.readFully(buffer, 0, chunk);
            for (int i = chunk - 1; i > -1; --i) {
                if (buffer[i] == '\n') {
                    ++foundLines;
                    if (foundLines == skipNLines) {
                        endOffset = currentOffset + i - 1; // we want the end to be BEFORE the newline
                        break;
                    }
                }
            }
            prevOffset = currentOffset;
        }
        return endOffset;
    }
}
Alternative #2
Read from your file line by line
On every successfully read line, insert the line at the back of your LinkedList<String>
If your LinkedList<String> contains more lines than you'd like to skip, remove the first entry and process it
Repeat until there are no more lines to be read
Example code
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.util.LinkedList;

public class Example {
    protected static final int SKIP_N = 42;

    public static void main(String[] args) throws Exception {
        String line;
        LinkedList<String> lli = new LinkedList<String>();
        FileInputStream fis = new FileInputStream("/tmp/sample_file");
        // Explicit charset; the DataInputStream wrapper in between was unnecessary.
        InputStreamReader isr = new InputStreamReader(fis, "UTF-8");
        BufferedReader bre = new BufferedReader(isr);

        while ((line = bre.readLine()) != null) {
            lli.addLast(line);
            if (lli.size() > SKIP_N) {
                System.out.println(lli.removeFirst());
            }
        }
        bre.close();
    }
}
You need to use a simple read-ahead logic.
Read x lines first and put them in a buffer. Then you can repeatedly read one line at a time, add it to the end of the buffer, and process the first line in the buffer. When you reach EOF, you have x unprocessed lines in the buffer.
Update: I noticed the comments on the question and my own answer, so just to clarify: my suggestion works when n is unknown. x should be known, of course. All you need to do is create a simple buffer, and then fill up the buffer with x lines, and then start your processing.
Regarding the implementation of the buffer, as long as we are talking about Java's built-in collections, a simple LinkedList is all you need. Since you'll be pulling one line out of the buffer for every line that you place in it, ArrayList won't perform well due to the constant shifting of array elements. Generally speaking, an array-backed buffer would have to be circular (like ArrayDeque) to avoid bad performance.
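ArrayDeque is exactly such an array-backed circular buffer, so the read-ahead idea can be sketched with it directly (x = 2 here, and the input lines are hypothetical stand-ins for a file):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayDeque;

public class SkipLastX {
    public static void main(String[] args) throws IOException {
        final int x = 2; // number of trailing lines to skip
        BufferedReader in = new BufferedReader(new StringReader("a\nb\nc\nd\ne\n"));
        ArrayDeque<String> buffer = new ArrayDeque<>(x + 1);
        String line;
        while ((line = in.readLine()) != null) {
            buffer.addLast(line);
            if (buffer.size() > x) {
                // This line is guaranteed to be at least x lines away from EOF.
                System.out.println(buffer.removeFirst()); // prints a, b, c in turn
            }
        }
        // The x lines left in 'buffer' ("d" and "e") are the ones skipped.
    }
}
```

Both addLast and removeFirst are O(1) on ArrayDeque, so the overhead per processed line is constant regardless of x.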
Just read x lines ahead. That is, keep a queue of x lines.
I’m working on the UVa Online Judge problem set archive as a way to practice Java, and as a way to practice data structures and algorithms in general.
They give an example input file to submit to the online judge to use as a starting point (it’s the solution to problem 100).
Input from the standard input stream (java.lang.System.in) is required as part of any solution on this site, but I can’t understand the implementation of reading from System.in they give in their example solution. It’s true that the input file could consist of any variation of integers, strings, etc, but every solution program requires reading basic lines of text input from System.in, one line at a time. There has to be a better (simpler and more robust) method of gathering data from the standard input stream in Java than this:
public static String readLn(int maxLg) {
    byte lin[] = new byte[maxLg];
    int lg = 0, car = -1;
    String line = "";
    try {
        while (lg < maxLg) {
            car = System.in.read();
            if ((car < 0) || (car == '\n')) {
                break;
            }
            lin[lg++] += car;
        }
    } catch (java.io.IOException e) {
        return (null);
    }
    if ((car < 0) && (lg == 0)) {
        return (null); // eof
    }
    return (new String(lin, 0, lg));
}
I’m really surprised by this. It looks like something pulled directly from K&R’s “C Programming Language” (a great book regardless), minus the access level modifier and exception handling, etc. Even though I understand the implementation, it just seems like it was written by a C programmer and bypasses most of Java’s object-oriented nature. Isn’t there a better way to do this, using the StringTokenizer class, the split method of String, or maybe the java.util.regex package instead?
You definitely don't have to read one byte at a time (you don't in C either, that's what fgets is for). Depending on what you're doing, you might use BufferedReader or Scanner:
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
Scanner sc = new Scanner(System.in);
BufferedReader has a readLine method, while Scanner has a variety of useful methods, including nextLine, nextInt, nextDouble, etc. which handle conversions for you. It also has a regex-based delimiter for reading arbitrary tokens.
One thing to understand about Java is that it has a very clear distinction between binary data (Streams) and character data (Readers and Writers). There are default decoders and encoders (as used above), but you always have the flexibility to choose the encoding.
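Putting those pieces together, a sketch of a readLn replacement with an explicit UTF-8 decoder (the lineReader helper and the fake input bytes are illustrative, not part of the judge's template):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ReadLines {
    // Wrap any byte stream as a line-oriented reader, decoding UTF-8 explicitly
    // instead of relying on the platform default charset.
    static BufferedReader lineReader(InputStream in) {
        return new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        // In a judge solution this would be lineReader(System.in);
        // input bytes are faked here so the demo is self-contained.
        BufferedReader in = lineReader(
                new ByteArrayInputStream("héllo\nworld\n".getBytes(StandardCharsets.UTF_8)));
        String line;
        while ((line = in.readLine()) != null) { // null signals EOF, like readLn
            System.out.println(line);
        }
    }
}
```

Unlike the byte-at-a-time readLn, this needs no maximum line length and decodes multi-byte characters correctly.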