Java BufferedReader misses some values

I have a problem with BufferedReader.
My source code mostly works, but when I read values from a named pipe, some of them come through incomplete.
delim = "\t";
reader = new BufferedReader(new FileReader("/tmp/base.pip"));
while ((line = reader.readLine()) != null) {
    try {
        timestamp = Long.parseLong(line.split(delim)[0]);
    } catch (Exception e) {
        continue;
    }
}
I need to read the whole line so that the first column value is parsed properly.
Example:
original line: 12345678 A B
line as read: 2345678 A B (the first digit is missing)
Is there any suggestion to solve this problem?
P.S. It mostly works fine; only a few lines have a problem like the example above.

I've tested your program and it works fine on my computer.
Check your delimiter: String delim = "\t"
Check that your file really has a tab separator between the columns.
Check the line value in your program.
If you don't have a tab between the columns, consider using a regular expression that accepts any amount of whitespace:
String delim = "\\s+";
delim = '\t'
split cannot take a char as its delimiter. Please check that; it has to be delim = "\t".

Try splitting on whitespace and taking the first element out of the array, like:
delim = "\\s";
timestamp = Long.parseLong(line.split(delim)[0]);
I think this should solve your problem.
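Putting the pieces together, a minimal sketch of the full read loop with a whitespace split (the pipe path /tmp/base.pip comes from the question; everything else is illustrative):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class PipeReader {
    public static void main(String[] args) throws IOException {
        String delim = "\\s+"; // one or more whitespace characters (tab or spaces)
        try (BufferedReader reader = new BufferedReader(new FileReader("/tmp/base.pip"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                try {
                    long timestamp = Long.parseLong(line.split(delim)[0]);
                    System.out.println(timestamp); // first column parsed as a long
                } catch (NumberFormatException e) {
                    continue; // skip lines whose first column is not a number
                }
            }
        }
    }
}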

Related

Double.valueOf() java.lang.NumberFormatException

I encountered the following issue when trying to read numbers from a CSV file.
The numbers are well formed and decimal points are also correct (dots):
10;111.1;0.94
9.5;111.1;0.94
9;111.4;0.94
8.5;110.7;0.94
I read the file line by line and split each line into three tokens, free of whitespace etc. (e.g. "10", "111.1", "0.94"). In spite of this, I get the exception when calling a parsing function:
Double pwr = Double.parseDouble(tokens[1]);
Double control = Double.parseDouble(tokens[0]);
Double cos = Double.parseDouble(tokens[2]);
java.lang.NumberFormatException: For input string: "10"
When I change the order of the lines, e.g. 1 <--> 2, the problem persists, but now I get java.lang.NumberFormatException: For input string: "9.5"
What is interesting, every time I make the above calls from the debugger, I obtain correct values with no exception. It looks like a problem related to the first line of the file.
Have you any idea where the problem source is?
It's probably a non-printable character.
To remove it, you can simply use the replaceAll method with the regex \\P{Print}, like this:
BufferedReader br = new BufferedReader(new FileReader(""));
String str = br.readLine();
str = str.replaceAll("\\P{Print}", "");
After running the above regex you should be able to parse the value.
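For instance, a small sketch that applies that cleanup to every line before parsing (the path is a placeholder and the semicolon delimiter follows the data in the question; this assumes the offending characters really are non-printable):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class CleanAndParse {
    public static void main(String[] args) throws IOException {
        try (BufferedReader br = new BufferedReader(new FileReader("/path/to/some.csv"))) {
            String line;
            while ((line = br.readLine()) != null) {
                // strip non-printable characters, then split on the semicolon delimiter
                String[] tokens = line.replaceAll("\\P{Print}", "").split(";");
                double control = Double.parseDouble(tokens[0]);
                double pwr = Double.parseDouble(tokens[1]);
                double cos = Double.parseDouble(tokens[2]);
                System.out.println(control + " " + pwr + " " + cos);
            }
        }
    }
}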
===========================================================================
To see which character it is you can try this.
1) Read the line and print it as it is like this.
import java.io.BufferedReader;
import java.io.FileReader;

public class Test {
    public static void main(String[] args) {
        try {
            BufferedReader br = new BufferedReader(new FileReader("/path/to/some.csv"));
            String str = br.readLine();
            System.out.println(str);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
OUTPUT:
2) Now copy the output exactly as it is and paste it inside double quotes ("") in your editor.
The special character will then be visible.
[SOLVED] Presumably it was a problem with some "hidden" zero-length character at the beginning of the file (BTW, thank you for the helpful suggestions!). I changed the file encoding to UTF-8 (Menu > File > File Encoding) and that resolved the issue.
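For anyone who needs to handle this in code rather than by re-saving the file, here is a minimal sketch that reads the file as UTF-8 and strips a leading byte order mark; treating the hidden zero-length character as a BOM (U+FEFF) is an assumption:

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class BomSafeReader {
    public static void main(String[] args) throws IOException {
        try (BufferedReader br = new BufferedReader(
                new InputStreamReader(new FileInputStream("/path/to/some.csv"), StandardCharsets.UTF_8))) {
            String line;
            boolean first = true;
            while ((line = br.readLine()) != null) {
                if (first) {
                    line = line.replace("\uFEFF", ""); // drop a UTF-8 BOM if the first line carries one
                    first = false;
                }
                System.out.println(line);
            }
        }
    }
}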

How to Read csv file with line breaks when "\r\n" does not work

I am new to Java and have been reading the Java docs and other threads (1, 2) but couldn't make it work.
Basically my csv file has a few records which read like this:
How are
you
so I want my code to read it as one line
How are you
My code looks like this:
BufferedReader bReader = new BufferedReader(new InputStreamReader(new FileInputStream(csv), "utf-8"));
while ((line = bReader.readLine()) != null) {
    String lines = line.replaceAll("\r\n", " ");
    System.out.println(lines);
}
Manually, when I press backspace at "you" it goes back up to "are", and then I press space; that fixes it. But I have a big csv file with 29k records, so there must be a way to do this programmatically. Can you please point me in the right direction? Thank you.
[Edit]
This is how it appears.
Fav: Beaver tails.
Least fav: HST not included in prices.
Edit 2:
-3166,1054,CF ,5992841,15:37.5,en,13007,12,12,Comments: Favorite and/or least favorite things,0,"Cafe Fun
-Least favourite - cabs"
"Cafe Fun Least Favourite - cabs" should be on the same line.
readLine() will return the next line in the file, without the line separator. So on the first iteration of your loop, lines is "How are" and on the second iteration, lines is "you". Neither of these contains "\r\n", so your calls to replaceAll(...) just return the same string.
Then, System.out.println(...) prints the text with a line separator appended, so you get back to what you started with.
You can collect all the lines into a list:
List<String> lines = Files.readAllLines(csv);
and then concatenate them using String.join(...):
String allLines = String.join(" ", lines);
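Put together (note that Files.readAllLines(...) takes a Path, so if csv is a File as in the question you would pass csv.toPath(); the file name below is hypothetical):

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.List;

public class JoinBrokenLines {
    public static void main(String[] args) throws IOException {
        File csv = new File("records.csv"); // hypothetical file name
        List<String> lines = Files.readAllLines(csv.toPath()); // reads as UTF-8 by default
        String allLines = String.join(" ", lines);
        System.out.println(allLines);
    }
}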
BufferedReader.readLine() doesn't read the newline, so your String lines doesn't have a line break.
You only print a newline with System.out.println(lines); change it to System.out.print(lines); and invoke System.out.println(); after the while-loop.
BufferedReader bReader = new BufferedReader(new InputStreamReader(new FileInputStream(csv), "utf-8"));
while ((line = bReader.readLine()) != null) {
    System.out.print(line);
}
System.out.println();
To start with, csv files (as the name implies) are separated by commas, not by spaces. But setting that aside, readLine only reads the line the "pointer" is on, and in this case "you" is on a different line than "How are". I think that's where your problem lies. One way to solve it would be to use a StringBuilder and its append(String) method, adding everything together. Regards.
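A rough sketch of that StringBuilder approach (the file name and the choice to rejoin lines with a single space are assumptions):

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class AppendLines {
    public static void main(String[] args) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader bReader = new BufferedReader(
                new InputStreamReader(new FileInputStream("records.csv"), StandardCharsets.UTF_8))) {
            String line;
            while ((line = bReader.readLine()) != null) {
                if (sb.length() > 0) {
                    sb.append(" "); // rejoin the broken lines with a space
                }
                sb.append(line);
            }
        }
        System.out.println(sb.toString());
    }
}

Note that this glues every record onto one line; records that legitimately span lines inside quoted fields really need a CSV parser that understands quotes.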

String search breaking when wrapping .jar with JSmooth

I've got an oddball problem here. I've got a little java program that filters Minecraft log files to make them easier to read. On each line of these logs, there are usually multiple instances of the character "§", which returns a hex value of FFFD.
I am filtering out this character (as well as the character following it) using:
currentLine = currentLine.replaceAll("\uFFFD.", "");
Now, when I run the program through NetBeans, it works swell. My lines get outputted looking like this:
CxndyAnnie: Mhm
CxndyAnnie: Sorry
But when I build the .jar file and wrap it into a .exe file using JSmooth, that character no longer gets filtered out when I run the .exe, and my lines come out looking like this:
§e§7[§f$65§7] §1§nCxndyAnnie§e: Mhm
§e§7[§f$65§7] §1§nCxndyAnnie§e: Sorry
(note: the additional square brackets and $65 show up because their filtering depends on the special character and its following character being removed first)
Any ideas why this would no longer work after putting it through JSmooth? Is there a different way to do the text replace that would preserve its function?
By the way, I also attempted to remove this character using
currentLine = currentLine.replaceAll("§.", "");
but that didn't work in Netbeans nor as a .exe.
I'll go ahead and paste the full method below:
public static String[] filterLines(String[] allLines, String filterType, Boolean timeStamps) throws IOException {
    String currentLine = null;
    FileWriter saveFile = new FileWriter("readable.txt");
    String heading;
    String string1 = "[L]";
    String string2 = "[A]";
    String string3 = "[G]";
    if (filterType.equals(string1)) {
        heading = "LOCAL CHAT LOGS ONLY \r\n\r\n";
    } else if (filterType.equals(string2)) {
        heading = "ADVERTISING CHAT LOGS ONLY \r\n\r\n";
    } else if (filterType.equals(string3)) {
        heading = "GLOBAL CHAT LOGS ONLY \r\n\r\n";
    } else {
        heading = "CHAT LINES CONTAINING \"" + filterType + "\" \r\n\r\n";
    }
    saveFile.write(heading);
    for (int i = 0; i < allLines.length; i++) {
        if ((allLines[i] != null) && (allLines[i].contains(filterType))) {
            currentLine = allLines[i];
            if (!timeStamps) {
                currentLine = currentLine.replaceAll("\\[..:..:..\\].", "");
            }
            currentLine = currentLine.replaceAll("\\[Client thread/INFO\\]:.", "");
            currentLine = currentLine.replaceAll("\\[CHAT\\].", "");
            currentLine = currentLine.replaceAll("\uFFFD.", "");
            currentLine = currentLine.replaceAll("\\[A\\].", "");
            currentLine = currentLine.replaceAll("\\[L\\].", "");
            currentLine = currentLine.replaceAll("\\[G\\].", "");
            currentLine = currentLine.replaceAll("\\[\\$..\\].", "");
            currentLine = currentLine.replaceAll(".>", ":");
            currentLine = currentLine.replaceAll("\\[\\$100\\].", "");
            saveFile.write(currentLine + "\r\n");
            //System.out.println(currentLine);
        }
    }
    saveFile.close();
    ProcessBuilder openFile = new ProcessBuilder("Notepad.exe", "readable.txt");
    openFile.start();
    return allLines;
}
FINAL EDIT
Just in case anyone stumbles across this and needs to know what finally worked, here's the snippet of code where I pull the lines from the file and re-encode it to work:
BufferedReader fileLines;
fileLines = new BufferedReader(new FileReader(file));
String[] allLines = new String[numLines];
int i = 0;
while ((line = fileLines.readLine()) != null) {
    byte[] bLine = line.getBytes();
    String convLine = new String(bLine, Charset.forName("UTF-8"));
    allLines[i] = convLine;
    i++;
}
I also had a problem like this in the past with Minecraft logs. I don't remember the exact details, but the issue came down to a file-encoding problem, where UTF-8 encoding worked correctly but some other encodings, including the system default, did not.
First:
Make sure that you specify UTF-8 encoding when reading the byte array from the file, so that allLines contains the correct info, like so:
Path fileLocation = Paths.get("C:/myFileLocation/logs.txt");
byte[] data = Files.readAllBytes(fileLocation);
String allLines = new String(data , Charset.forName("UTF-8"));
Second:
Using \uFFFD is not going to work, because \uFFFD is only used to replace an incoming character whose value is unknown or unrepresentable in Unicode.
However, if you use the correct encoding (shown in my first point), then \uFFFD is not necessary, because the value § is known in Unicode, so you can simply use
currentLine.replaceAll("§", "");
or specifically use the actual Unicode escape for that character, U+00A7, like so
currentLine.replaceAll("\u00A7", "");
or just use both those lines in your code.
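Putting both points together, a small sketch (it assumes the log file really is UTF-8 on disk, uses the example path from above, and follows the question's pattern of also removing the formatting character that follows each §):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class StripColorCodes {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get("C:/myFileLocation/logs.txt"),
                StandardCharsets.UTF_8);
        for (String currentLine : lines) {
            // remove the section sign plus the single formatting character after it
            System.out.println(currentLine.replaceAll("\u00A7.", ""));
        }
    }
}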

Reading From File With Comma Delimiter Error

Here's the .txt file i'm trying to read from
20,Dan,09/05/1990,3,Here
5,Danezo,04/09/1990,99,There
And here's how I'm doing it. Whenever the .txt file has only one line, it seems to read from the file fine. Whenever more than one line is being read, I get this error:
Exception in thread "main" java.lang.NumberFormatException: For input string: "Danezo"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at AttackMonitor.readFromFile(AttackMonitor.java:137)
at AttackMonitor.monitor(AttackMonitor.java:57)
at MonsterAttackDriver.main(MonsterAttackDriver.java:14)
Java Result: 1
Here's the readFromFile code.
private void readFromFile() throws FileNotFoundException, IOException {
    monsterAttacks.clear();
    Scanner read = new Scanner(new File("Attacks.txt"));
    read.useDelimiter(",");
    String fullDateIn = "";
    int attackIdIn = 0;
    int attackVictimsIn = 0;
    String monsterNameIn = "";
    String attackLocationIn = "";
    while (read.hasNext()) {
        attackIdIn = Integer.parseInt(read.next());
        monsterNameIn = read.next();
        fullDateIn = read.next();
        attackVictimsIn = Integer.parseInt(read.next());
        attackLocationIn = read.next();
        monsterAttacks.add(new MonsterAttack(fullDateIn, attackIdIn, attackVictimsIn, monsterNameIn, attackLocationIn));
    }
    read.close();
}
What is happening is that at the end of each line there is a newline character, which is currently not a delimiter. So your code is attempting to read it as the first integer of the next line, which it is not. This is causing the parse exception.
To remedy this, you can try adding newline to the list of delimiters for which to scan:
Scanner read = new Scanner(new File("Attacks.txt"));
read.useDelimiter("[,\r\n]+"); // use just \n on Linux
An alternative to this would be to just read in each entire line from the file and split on comma:
String[] parts = read.nextLine().split(",");
attackIdIn = Integer.parseInt(parts[0]);
monsterNameIn = parts[1];
fullDateIn = parts[2];
attackVictimsIn = Integer.parseInt(parts[3]);
attackLocationIn = parts[4];
You can use Biegeleisen's suggestion, or else you can do as follows.
In your while loop you are using hasNext as the condition. Instead, use while (read.hasNextLine()), get the next line inside the loop, and then split it on your delimiter and do the processing. That would be a more appropriate approach, e.g.:
while (read.hasNextLine()) {
    String[] values = read.nextLine().split(",");
    // do the rest of your logic here
}
Put the while loop content in a try/catch and catch NumberFormatException. Whenever execution falls into the catch block, you know you tried to convert a non-numeric string to an int.
I could help more if your business logic were explained.
attackLocationIn = read.next(); takes the value "Here\n5", because there is no comma between Here and 5, only a newline character.
So on the 2nd iteration, attackIdIn = Integer.parseInt(read.next()); gets "Danezo" from read.next(), which is a String that you are trying to parse to an Integer. That's why you are getting this exception.
What I suggest is to use a BufferedReader to read line by line and split each line on the comma. It will also be fast.
Another solution: add a comma at the end of each line and use read.next().trim() in your code. That will work with minimal changes to your current code.
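A rough sketch of that line-by-line suggestion (the file name comes from the question; the fields are just printed here, but you would build your MonsterAttack objects the same way):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class AttackFileReader {
    public static void main(String[] args) throws IOException {
        try (BufferedReader br = new BufferedReader(new FileReader("Attacks.txt"))) {
            String line;
            while ((line = br.readLine()) != null) {
                String[] parts = line.split(",");
                int attackIdIn = Integer.parseInt(parts[0].trim());
                String monsterNameIn = parts[1];
                String fullDateIn = parts[2];
                int attackVictimsIn = Integer.parseInt(parts[3].trim());
                String attackLocationIn = parts[4];
                System.out.println(attackIdIn + " " + monsterNameIn + " " + fullDateIn
                        + " " + attackVictimsIn + " " + attackLocationIn);
            }
        }
    }
}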

How to read from a file into a JTextArea (GUI) line by line?

I am reading in from a file (which is a list of names and their contact numbers) and displaying it in a textarea in a GUI. The file reads in fine and the text is displayed. But I want each line from the file to be on a new line on the GUI. So each name and address to be on a new line. How do I do this?
This is my code so far, but it doesn't display each line from the file on a new line on the GUI.
public void books() throws IOException {
    String result = " ";
    String line;
    LineNumberReader lnr = new LineNumberReader(new FileReader(new File("books2.txt")));
    while ((line = lnr.readLine()) != null) {
        result += line;
    }
    area1 = new JTextArea(" label 1 ");
    area1.setText(result);
    area1.setBounds(50, 50, 900, 300);
    area1.setForeground(Color.black);
    panelMain.add(area1);
}
You don't really need to read it line by line. Something like this will do:
String result = new String(Files.readAllBytes(Paths.get("books2.txt")),
StandardCharsets.UTF_8);
This, of course, will require more memory: first to read bytes, and then to create a string. But if memory is a concern, then reading the whole file at once is probably a bad idea anyway, not to mention displaying it in a JTextArea!
It may not handle different line endings properly. When you use readLine(), it strips the line of all endings, be it CR LF, LF or CR. The way above will read the string as-is. So maybe reading it line-by-line is not a bad idea after all. But I've just checked—JTextArea seems to handle CR LF all right. It may cause other problems, though.
With line-by-line approach, I'd do something like
String result = String.join("\n",
Files.readAllLines(Paths.get("books2.txt"),
StandardCharsets.UTF_8));
This still strips the last line of EOL. If that's important (e. g., you want to be able to put text cursor on the line after the last one), just do one more + "\n".
All of the above requires Java 7/8.
If you're using Java 6 or something, then the way you do it is probably OK, except that:
Replace LineNumberReader with BufferedReader—you don't need line numbers, do you?
Replace String result with StringBuilder result = new StringBuilder(), and += with result.append(line).append('\n').
In the end, use result.toString() to get the string.
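For the Java 6 route, a minimal sketch of those three changes (the file name comes from the question; the JTextArea wiring is left out):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class BookListLoader {
    public static void main(String[] args) throws IOException {
        StringBuilder result = new StringBuilder();
        BufferedReader reader = new BufferedReader(new FileReader("books2.txt"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                result.append(line).append('\n'); // re-add the newline that readLine() strips
            }
        } finally {
            reader.close(); // Java 6: no try-with-resources
        }
        // area1.setText(result.toString()); // in the GUI code from the question
        System.out.println(result.toString());
    }
}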
