Restart File writing after certain number of lines - java

I'm working in Android, trying to write some lines to a file. After a certain number of lines, say 100, I want the file to delete its first line and then append a new line at the end. So basically, I want to keep writing to the file while keeping it capped at 100 lines. From what I've read, Java's file APIs aren't too friendly to what I'm trying to do, and I haven't found anything here either: http://docs.oracle.com/javase/tutorial/essential/io/file.html
Is there a better way to keep a file limited to 100 lines, deleting the oldest lines as new ones are added?
More specifically, I want a TextView to display the 100 most recent events that a service has sent.
As of now, I have this method to display my STORETEXT file,
public void readFileInEditor() {
    try {
        InputStream in = openFileInput(STORETEXT);
        if (in != null) {
            InputStreamReader tmp = new InputStreamReader(in);
            BufferedReader reader = new BufferedReader(tmp);
            String str;
            StringBuilder buf = new StringBuilder();
            while ((str = reader.readLine()) != null) {
                buf.append(str).append("\n");
            }
            in.close();
            writelog.setText(buf.toString());
        }
    } catch (java.io.FileNotFoundException e) {
        // that's OK, we probably haven't created it yet
    } catch (Throwable t) {
        Toast.makeText(this, "Exception: " + t.toString(),
                Toast.LENGTH_LONG).show();
    }
}
and I write to the file like this...
OutputStreamWriter out = new OutputStreamWriter(openFileOutput(
        STORETEXT, MODE_APPEND));
out.write("Some User Activity");
out.write("\n");
out.close();
I want to modify my code to only write the 100 most recent activities, and then set that to my textView. Thanks for any help.
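One possible direction (a minimal sketch, not tested on a device; appendEvent and MAX_LINES are made-up names) is to read the existing lines into a list, trim it to the most recent entries, add the new event, and rewrite the file with MODE_PRIVATE so the old contents are replaced:
// Sketch: keep only the most recent MAX_LINES entries in STORETEXT.
// Assumes this runs inside a Context (Activity/Service); appendEvent is a hypothetical helper.
private static final int MAX_LINES = 100;

private void appendEvent(String event) {
    List<String> lines = new ArrayList<String>();
    try {
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(openFileInput(STORETEXT)));
        String line;
        while ((line = reader.readLine()) != null) {
            lines.add(line);
        }
        reader.close();
    } catch (java.io.FileNotFoundException e) {
        // first run: no file yet, start with an empty list
    } catch (IOException e) {
        e.printStackTrace();
    }

    lines.add(event);
    // drop the oldest lines so at most MAX_LINES remain
    while (lines.size() > MAX_LINES) {
        lines.remove(0);
    }

    try {
        // MODE_PRIVATE truncates the file, so it is rewritten from scratch
        OutputStreamWriter out = new OutputStreamWriter(
                openFileOutput(STORETEXT, MODE_PRIVATE));
        for (String l : lines) {
            out.write(l);
            out.write("\n");
        }
        out.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
With something along those lines, readFileInEditor() could stay as it is, since the file would never hold more than 100 lines.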

Related

Spring Boot java: Process/Compare lines of very large file

I have this app where I process a very large file: I extract the lines that share the same first 5 characters (I call this currentlineId), use them to create an object, and do something with it. A sample of the file contents:
AZDFS12345678998765432345678
AZDFS09876545432345678987654
AZDFS34568987654567890987654
AZDFS12345670987654345678998
AZDFS12345098734567765123456
// the lines above have the same first 5 characters, they create Object1.
FGHJUY121324
FGHJUY089909
FGHJUYTTUTUU
//same for the lines above, they create Object2.
NB: the lines will always be in an order where lines with the same first 5 characters are together (above/below each other), so I won't have lines all over the place.
My current function code:
private void processScpFile(File file) {
    LOGGER.info("Processing File: {} ", file.getName());
    try (var br = new BufferedReader(new FileReader(file))) {
        String currentLine;
        String lastLineId = null;
        List<String> similarLineIdsList = new ArrayList<>();
        while ((currentLine = br.readLine()) != null) {
            if (StringUtils.isEmpty(lastLineId)) {
                lastLineId = currentLine.substring(0, 5);
            }
            if (lastLineId.equals(currentLine.substring(0, 5))) {
                similarLineIdsList.add(currentLine);
            } else if (!lastLineId.equals(currentLine.substring(0, 5))) {
                doSomethinsWithTheList(similarLineIdsList);
                similarLineIdsList.clear();
                similarLineIdsList.add(currentLine);
                lastLineId = currentLine.substring(0, 5);
            }
        }
        doSomethinsWithTheList(similarLineIdsList);
    } catch (IOException e) {
        LOGGER.error("Couldn't read file, {}", e.getMessage(), e);
    }
}
Now this has worked well up until now, but going forward I have to process files where, for instance, over 100k lines share the same first 5 characters, which makes this process very slow.
Please, do you have any suggestion on how to make this process faster? Thank you.
Edit: just to be precise, it's generating the list of lines with the same first 5 characters that gets slower as the number of similar lines grows.
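If the downstream work can be done incrementally, one direction to explore (a sketch under that assumption; handleGroupLine and finishGroup are hypothetical methods standing in for a split-up doSomethinsWithTheList) is to hand each line to the group handler as it is read, instead of buffering 100k lines in a list:
// Sketch: process each group incrementally instead of buffering it in a List.
// Assumes the per-group work can be split into a per-line step and a finishing step;
// handleGroupLine and finishGroup are made-up names.
private void processScpFileStreaming(File file) {
    try (var br = new BufferedReader(new FileReader(file))) {
        String currentLine;
        String lastLineId = null;
        while ((currentLine = br.readLine()) != null) {
            String lineId = currentLine.substring(0, 5);
            if (lastLineId != null && !lastLineId.equals(lineId)) {
                finishGroup(lastLineId);              // group boundary reached
            }
            handleGroupLine(lineId, currentLine);     // do the per-line work right away
            lastLineId = lineId;
        }
        if (lastLineId != null) {
            finishGroup(lastLineId);                  // flush the final group
        }
    } catch (IOException e) {
        LOGGER.error("Couldn't read file, {}", e.getMessage(), e);
    }
}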

Creating an inverted index with limited memory in java

I'm curious how to create an inverted index on data that doesn't fit into memory. Right now I'm reading a file directory and indexing the files based on the contents inside each file, using a HashMap to store the index. The code below is a snippet from a function I call on the entire directory. What do I do if the directory were massive and the HashMap couldn't fit all the entries? Yes, this does sound like premature optimization; I'm just having fun. I don't want to use Lucene, so please don't mention it, because I'm tired of seeing it as the majority answer to "index" questions. This HashMap is my only constraint; everything else is stored in files so I can easily reference things later on.
I'm just curious how I can do this, since the index is stored in the map like so:
keyword -> file1,file2,file3,etc..(locations)
keyword2 -> file9,file11,file13,etc..(locations)
My thought was to create a file that could somehow be updated to match the format above, but I feel that's not efficient.
Code Snippet
br = new BufferedReader(new FileReader(file));
while ((line = br.readLine()) != null) {
    for (String _word : line.split("\\W+")) {
        word = _word.toLowerCase();
        if (!ignore_words.contains(word)) {
            fileLocations = index.get(word);
            if (fileLocations == null) {
                fileLocations = new LinkedList<Long>();
                index.put(word, fileLocations);
            }
            fileLocations.add(file_offset);
        }
    }
}
br.close();
Update:
So I managed to come up with something, but performance-wise I feel it is slow, especially if there is a large amount of data. I basically created a file that has the word and its offset on each line where the word appeared. Let's name it index.txt.
It has a format like so:
word1:offset
word2:offset
word1:offset <-encountered again.
word3:offset
etc...
I then created a separate file for each word and appended the offset to that file each time the word was encountered in the index.txt file.
So basically the format of the word files are like so
word1.txt -- Format
word1:offset1:offset2:offset3:offset4...and so on
Each time word1 is encountered in the index.txt file, its offset is appended to the end of the word1.txt file.
Then, finally, I go through all the word files I created and overwrite index.txt with the final output, which looks like so:
word1:offset1:offset2:offset3:offset4:...
word2:offset9:offset11:offset13:offset14:...
etc..
Then to finish it up, I delete all the word files.
The nasty code snippet for this is below; it's a fair amount.
public void createIndex(String word, long file_offset) {
    PrintWriter writer;
    try {
        writer = new PrintWriter(new FileWriter(this.file, true));
        writer.write(word + ":" + file_offset + "\n");
        writer.close();
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }
}
public void mergeFiles() {
    String line;
    String wordLine;
    String[] contents;
    BufferedReader reader;
    BufferedReader mergeReader;
    PrintWriter writer;
    PrintWriter mergeWriter;
    try {
        reader = new BufferedReader(new FileReader(this.file));
        while ((line = reader.readLine()) != null) {
            contents = line.split(":");
            writer = new PrintWriter(new FileWriter(
                    new File(contents[0] + ".txt"), true));
            if (this.words.get(contents[0]) == null) {
                this.words.put(contents[0], contents[0]);
                writer.write(contents[0] + ":");
            }
            writer.write(contents[1] + ":");
            writer.close();
        }
        reader.close();
        // This could be put in its own method below.
        mergeWriter = new PrintWriter(new FileWriter(this.file));
        for (String word : this.words.keySet()) {
            mergeReader = new BufferedReader(
                    new FileReader(new File(word + ".txt")));
            while ((wordLine = mergeReader.readLine()) != null) {
                mergeWriter.write(wordLine + "\n");
            }
            mergeReader.close();
        }
        mergeWriter.close();
        deleteFiles();
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }
}
public void deleteFiles() {
    File toDelete;
    for (String word : this.words.keySet()) {
        toDelete = new File(word + ".txt");
        if (toDelete.exists()) {
            toDelete.delete();
        }
    }
}
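For the memory-bounded case, a classic alternative to the one-file-per-word scheme above is to spill sorted partial indexes ("runs") to disk whenever the in-memory map grows past a threshold, and then merge the runs in a single streaming pass. A rough sketch of that idea (all class and method names here are made up; the word:offset1:offset2 line format matches the one described above):
// Sketch of a spill-and-merge inverted index; MAX_ENTRIES and file names are arbitrary.
import java.io.*;
import java.util.*;

public class SpillingIndexer {
    private static final int MAX_ENTRIES = 100_000;          // spill threshold, tunable
    private final TreeMap<String, StringBuilder> buffer = new TreeMap<>();
    private final List<File> runs = new ArrayList<>();

    public void add(String word, long fileOffset) throws IOException {
        buffer.computeIfAbsent(word, k -> new StringBuilder())
              .append(':').append(fileOffset);
        if (buffer.size() >= MAX_ENTRIES) {
            spill();
        }
    }

    // Write the buffered postings to a sorted run file and clear the buffer.
    private void spill() throws IOException {
        File run = File.createTempFile("index-run-", ".txt");
        try (PrintWriter w = new PrintWriter(new FileWriter(run))) {
            for (Map.Entry<String, StringBuilder> e : buffer.entrySet()) {
                w.println(e.getKey() + e.getValue());
            }
        }
        runs.add(run);
        buffer.clear();
    }

    // Merge all runs into the final index file. Since every run is sorted by word,
    // a priority queue over the run readers lets us stream the merge without
    // holding the whole index in memory.
    public void merge(File out) throws IOException {
        if (!buffer.isEmpty()) {
            spill();
        }
        List<BufferedReader> readers = new ArrayList<>();
        PriorityQueue<String[]> heap =
                new PriorityQueue<>(Comparator.comparing((String[] e) -> e[0]));
        for (int i = 0; i < runs.size(); i++) {
            readers.add(new BufferedReader(new FileReader(runs.get(i))));
            push(heap, readers.get(i), i);
        }
        try (PrintWriter w = new PrintWriter(new FileWriter(out))) {
            String currentWord = null;
            StringBuilder offsets = new StringBuilder();
            while (!heap.isEmpty()) {
                String[] top = heap.poll();                   // {word, offsets, readerIndex}
                if (!top[0].equals(currentWord)) {
                    if (currentWord != null) {
                        w.println(currentWord + offsets);     // flush the previous word
                    }
                    currentWord = top[0];
                    offsets.setLength(0);
                }
                offsets.append(top[1]);
                int idx = Integer.parseInt(top[2]);
                push(heap, readers.get(idx), idx);            // advance that run
            }
            if (currentWord != null) {
                w.println(currentWord + offsets);
            }
        }
        for (BufferedReader r : readers) {
            r.close();
        }
    }

    private static void push(PriorityQueue<String[]> heap, BufferedReader r, int idx)
            throws IOException {
        String line = r.readLine();
        if (line != null) {
            int sep = line.indexOf(':');
            heap.add(new String[] {
                    line.substring(0, sep), line.substring(sep), String.valueOf(idx) });
        }
    }
}
The merge stays memory-bounded because each run is already sorted by word, so only one line per run needs to be held at a time.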

Split file into multiple files

I want to cut a text file.
I want to split it 50 lines at a time.
For example, if the file is 1010 lines, I should get 21 files.
I know how to count the number of files and the number of lines, but as soon as I write, it doesn't work.
I use Camel Simple (Talend), but it's Java code.
private void ExtractOrderFromBAC02(ProducerTemplate producerTemplate, InputStream content, String endpoint, String fileName, HashMap<String, Object> headers) {
    ArrayList<String> list = new ArrayList<String>();
    BufferedReader br = new BufferedReader(new InputStreamReader(content));
    String line;
    long numSplits = 50;
    int sourcesize = 0;
    int nof = 0;
    int number = 800;
    try {
        while ((line = br.readLine()) != null) {
            sourcesize++;
            list.add(line);
        }
        System.out.println("Lines in the file: " + sourcesize);
        double numberFiles = (sourcesize / numSplits);
        int numberFiles1 = (int) numberFiles;
        if (sourcesize <= 50) {
            nof = 1;
        } else {
            nof = numberFiles1 + 1;
        }
        System.out.println("No. of files to be generated :" + nof);
        for (int j = 1; j <= nof; j++) {
            number++;
            String Filename = "" + number;
            System.out.println(Filename);
            StringBuilder builder = new StringBuilder();
            for (String value : list) {
                builder.append("/n" + value);
            }
            producerTemplate.sendBodyAndHeader(endpoint, builder.toString(), "CamelFileName", Filename);
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        try {
            if (br != null) br.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
For people who don't know Camel, this line is used to send the file:
producerTemplate.sendBodyAndHeader(endpoint, line.toString(), "CamelFileName", Filename);
endpoint ==> destination (it works fine with other code)
line.toString() ==> the values
and then the file name (also fine with other code)
You count the lines first:
while ((line = br.readLine()) != null) {
    sourcesize++;
}
and then you're at the end of the file: you read nothing more in
for (int i = 1; i <= numSplits; i++) {
    while ((line = br.readLine()) != null) {
You have to seek back to the start of the file before reading again.
But that's a waste of time and power, because you'll read the file twice.
It's better to read the file once and for all, put it in a List<String> (resizable), and proceed with your split using the lines stored in memory.
EDIT: it seems you followed my advice and stumbled on the next issue. You should maybe have asked another question, but anyway: this creates a buffer with all the lines.
for (String value : list) {
    builder.append("/n" + value);
}
You have to use indexes on the list to build small files.
for (int k = 0; k < numSplits; k++) {
    builder.append("/n" + list.get(current_line++));
}
current_line being the global line counter in your file. That way you create files of 50 different lines each time :)
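Putting the pieces together, a self-contained sketch of the chunking logic (independent of Camel; writeChunk is a placeholder for producerTemplate.sendBodyAndHeader, and the 800 base for file names is taken from the question) might look like this; note the bounds check so the last partial chunk doesn't run past the end of the list:
// Sketch: split the buffered lines into files of at most 50 lines each.
// writeChunk(...) stands in for producerTemplate.sendBodyAndHeader(...).
import java.util.ArrayList;
import java.util.List;

public class Splitter {
    private static final int LINES_PER_FILE = 50;

    public static void split(List<String> list) {
        int number = 800;                       // base for generated file names, as in the question
        int currentLine = 0;
        while (currentLine < list.size()) {
            number++;
            StringBuilder builder = new StringBuilder();
            for (int k = 0; k < LINES_PER_FILE && currentLine < list.size(); k++) {
                builder.append(list.get(currentLine++)).append("\n");
            }
            writeChunk(String.valueOf(number), builder.toString());
        }
    }

    // Placeholder: in the Camel route this would be
    // producerTemplate.sendBodyAndHeader(endpoint, body, "CamelFileName", filename);
    private static void writeChunk(String filename, String body) {
        System.out.println("Would send file " + filename);
    }

    public static void main(String[] args) {
        List<String> lines = new ArrayList<>();
        for (int i = 0; i < 1010; i++) {
            lines.add("line " + i);
        }
        split(lines);                           // 1010 lines -> 21 files
    }
}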

jTextArea saves only first line of text in text file using BufferedReader?

I am trying to save multi-line output from my jTextArea (named "outputarea" in the code) to a text file at my desired path. Everything works, but the saved file does not contain the whole output, only the first line of text. I am using "\n" to break lines in the jTextArea when producing the multi-line output; does that make any difference, or is there another problem in this code? This is just the code behind the saveAs button; the output comes from other methods I've created. Thanks in advance!
private void saveAs() {
    FileDialog fd = new FileDialog(home.this, "Save", FileDialog.SAVE);
    fd.show();
    if (fd.getFile() != null) {
        fn = fd.getFile();
        dir = fd.getDirectory();
        filename = dir + fn + ".txt";
        setTitle(filename);
        try {
            DataOutputStream d = new DataOutputStream(new FileOutputStream(filename));
            holdText = outputarea.getText();
            BufferedReader br = new BufferedReader(new StringReader(holdText));
            while ((holdText = br.readLine()) != null) {
                d.writeBytes(holdText + "\r\n");
                d.close();
            }
        } catch (Exception e) {
            System.out.println("File not found");
        }
        outputarea.requestFocus();
        save(filename);
    }
}
You should put the d.close(); after the while loop completes, because right after writing the first line with the DataOutputStream you are closing it, so it never gets to finish the whole job.
You can even see an error written to your console:
File not found
This is not because it can't find your file; it's because in the iterations after the first one, it tries to write into a closed stream. So only the first line gets written. Change your code like this:
while ((holdText = br.readLine()) != null) {
    d.writeBytes(holdText + "\r\n");
}
d.close();
Also, I'd advise using a PrintWriter instead of a DataOutputStream. Then you can simply change writeBytes into the println method, and you don't need to append \r\n manually to each line you write.
Another good hint is to use try-with-resources (if you're on Java 7 or later), or at least a finally block, to close your streams either way:
String holdText = outputarea.getText();
try (PrintWriter w = new PrintWriter(new File(filename));
        BufferedReader br = new BufferedReader(new StringReader(holdText))) {
    while ((holdText = br.readLine()) != null) {
        w.println(holdText);
    }
} catch (Exception e) {
    System.out.println("File not found");
}
Good Luck.

How to read a file in Android

I have a text file called "high.txt". I need the data inside it for my Android app, but I have absolutely no idea how to read it into an ArrayList of Strings. I tried the normal way of doing it in Java, but apparently that doesn't work in Android since it can't find the file. So how do I go about doing this? I have put it in my res folder. How do you take the input stream you get from opening the file within Android and read it into an ArrayList of Strings? I am stuck on that part.
Basically it would look something like this:
3. What do you do for an upcoming test?
L: make sure I know what I'm studying and really review and study for this thing. Its what Im good at. Understand the material really well.
CL: Time to study. I got this, but I really need to make sure I know it,
M: Tests can be tough, but there are tips and tricks. Focus on the important, interesting stuff. Cram in all the little details just to get past this test.
CR: -sigh- I don't like these tests. Hope I've studied enough to pass or maybe do well.
R: Screw the test. I'll study later, day before should be good.
This is for a sample question, and all the lines will be stored as separate Strings in the ArrayList.
If you put the text file in your assets folder, you can use code like this, which I've taken and modified from one of my projects:
public static void importData(Context context) {
    try {
        BufferedReader br = new BufferedReader(new InputStreamReader(context.getAssets().open("high.txt")));
        String line;
        while ((line = br.readLine()) != null) {
            String[] columns = line.split(",");
            Model model = new Model();
            model.date = DateUtil.getCalendar(columns[0], "MM/dd/yyyy");
            model.name = columns[1];
            dbHelper.insertModel(model);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Within the loop you can do anything you need with the columns; what this example does is create an object from each row and save it in the database.
For this example the text file would look something like this:
15/04/2013,Bob
03/03/2013,John
21/04/2013,Steve
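For the original question, which only needs the raw lines in an ArrayList of Strings, a stripped-down variant of the same assets-based approach (a sketch, assuming high.txt sits in the assets folder) could be:
// Sketch: read every line of assets/high.txt into an ArrayList<String>.
public static ArrayList<String> readLines(Context context) {
    ArrayList<String> lines = new ArrayList<String>();
    try {
        BufferedReader br = new BufferedReader(
                new InputStreamReader(context.getAssets().open("high.txt")));
        String line;
        while ((line = br.readLine()) != null) {
            lines.add(line);   // each line becomes one entry in the list
        }
        br.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return lines;
}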
If you want to read a file from external storage, then use the method below.
public void readFileFromExternal() {
    String path = Environment.getExternalStorageDirectory().getPath()
            + "/AppTextFile.txt";
    try {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        String line, results = "";
        while ((line = reader.readLine()) != null) {
            results += line;
        }
        reader.close();
        Log.d("FILE", "Data in your file : " + results);
    } catch (Exception e) {
    }
}
// find all files in the folder /assets/txt/
String[] elements = new String[0];
try {
    elements = getAssets().list("txt");
} catch (IOException e) {
    e.printStackTrace();
}
// for every file, read the text line by line
for (String fileName : elements) {
    Log.d("xxx", "File: " + fileName);
    try {
        InputStream open = getAssets().open("txt/" + fileName);
        InputStreamReader inputStreamReader = new InputStreamReader(open);
        BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
        String line = "";
        while ((line = bufferedReader.readLine()) != null) {
            Log.d("xxx", line);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
