Best way to continuously write to file (50 times per second) - java

I am building an Android app which records accelerometer and gyroscope data to a text file. Most of the tutorials use a method that involves creating two text files and opening and closing each of them 50 times per second, i.e.:
private static void writeToFile(File file, String data) {
    FileOutputStream stream = null;
    try {
        stream = new FileOutputStream(file, true);
        stream.write(data.getBytes());
    } catch (FileNotFoundException e) {
        Log.e("History", "In catch");
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        // Close in finally so the stream is released on every path.
        if (stream != null) {
            try {
                stream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
That is, on every SensorEvent you open the file, write the values, then close the file, then open it again 20 milliseconds later.
It all seems to be working fine; I was just wondering whether there is a better way of going about it. I tried some changes using a boolean flag to say whether the stream is already open, and a different writeToFile for when the flag is set to true, but apparently the FileOutputStream can sometimes close itself within the 20-millisecond window, and the app crashes.
So I guess my question is: how many system resources does it take to open, write to, and close a file that many times? Is it fine and not something I should worry about, or is there a better way of doing things? Bear in mind that continuous sensor logging already takes a toll on battery life, so I would like to do things as efficiently as possible.
Thanks

It's not a good way of doing it. A better way would be to create the FileOutputStream once, save it as an instance member of whatever class this is, and just write to it (with an occasional call to flush() to make sure it actually reaches disk).
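A minimal sketch of that approach, assuming a wrapper class (the SensorLogger name and the flush-every-100-writes policy are illustrative, not from the answer):

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class SensorLogger {
    private final BufferedOutputStream stream;
    private int writesSinceFlush = 0;

    public SensorLogger(File file) throws IOException {
        // Open once, in append mode, and keep the stream as a member.
        stream = new BufferedOutputStream(new FileOutputStream(file, true));
    }

    public synchronized void write(String data) throws IOException {
        stream.write(data.getBytes());
        // Flush occasionally so data reaches disk without paying that
        // cost on every one of the ~50 writes per second.
        if (++writesSinceFlush >= 100) {
            stream.flush();
            writesSinceFlush = 0;
        }
    }

    public synchronized void close() throws IOException {
        stream.close(); // flushes any remaining buffered bytes
    }
}

Call write() from onSensorChanged() and close() when logging stops; the per-event cost is then just a buffered memory write instead of a file open and close.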

Related

Writing an object to a file. Best way

I have a problem with timing.
I am currently developing a Java app in which I have to build a network analyzer.
For that I use JPCAP to capture all the packets and write them to a file; from there I will bulk-insert them into a DB.
When I write the entire object to the file, like this,
UDPPacket udpPacket = (UDPPacket) packet;
wtf.writeToFile("packets.txt", udpPacket + "\n");
everything works nice and smooth, but when I try to write like this:
String str=""+udpPacket.src_ip+" "+udpPacket.dst_ip+""
+udpPacket.src_port+" "+udpPacket.dst_port+" "+udpPacket.protocol +
" Wi-fi "+udpPacket.dst_ip.getCanonicalHostName()+"\n";
wtf.writeToFile("packets.txt",str +"\n");
writing to the file takes a lot more time.
The function that writes to the file is this:
public void writeToFile(String name, String str) {
    try {
        PrintWriter writer = new PrintWriter(new FileOutputStream(new File(name), this.restart));
        if (!str.equalsIgnoreCase("0")) {
            writer.append(str);
            this.restart = true;
        } else {
            this.restart = false;
            writer.print("");
        }
        writer.close();
    } catch (IOException e) {
        System.out.println(e);
    }
}
Can anyone give me a hint, what's the best way to do this?
Thanks a lot
EDIT:
7354.120266 ns - packet print
241471.110451 ns - with StringBuilder
Keep the PrintWriter open. Don't open and close it for every line you want to write to the file. And don't flush it either: just close it when you exit. Basically you should remove your writeToFile() method and just call PrintWriter.write() or whatever directly when necessary.
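A minimal sketch of that advice (the PacketLogger name is illustrative):

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class PacketLogger {
    private final PrintWriter writer;

    public PacketLogger(String name) throws IOException {
        // Open once, in append mode; keep the writer as a member.
        writer = new PrintWriter(new FileWriter(name, true));
    }

    public void log(String str) {
        writer.println(str); // no open/close per packet
    }

    public void close() {
        writer.close(); // flushes whatever is still buffered
    }
}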
NB You are writing text, not objects.
I found the problem.
As @KevinO said, getCanonicalHostName() was the problem: it performs a reverse DNS lookup for every packet.
Thanks a lot.

Open and close of the same file multiple times vs. keeping the file open for a long time

I am writing to a file whenever the content of a JTextArea field changes. I have decided to open and close the file on each change event.
Something like:
public void addToLogFile(String changeContent) {
    try {
        PrintWriter pw = new PrintWriter(new BufferedWriter(new FileWriter(currentLogFile, true)));
        pw.print(changeContent);
        pw.close();
    } catch (FileNotFoundException ex) {
        Logger.getLogger(Main.class.getName()).log(Level.SEVERE, null, ex);
    } catch (IOException ex) {
        Logger.getLogger(Main.class.getName()).log(Level.SEVERE, null, ex);
    }
}
Instead of opening and closing the file each time, I thought we could open it in an initial phase, write to it whenever required, and finally close it at the end.
At the initial phase of the program:
PrintWriter pw = new PrintWriter(new BufferedWriter(new FileWriter(currentLogFile,true)));
Then somewhere in the code, wherever needed:
pw.print(changeContent); // Most frequent usage
At the final phase of the program:
pw.close();
Which one will be more efficient? Under what conditions should I choose one over the other?
Opening the file once would definitely be more efficient; opening the file every time is quite costly.
One case where open-per-write may be useful is when new entries happen only once in a long while, so the OS doesn't have to hold an open file handle.
Another case in which I would consider opening and closing it each time is when writes happen rarely and you want to let other processes write to the file. Or when you want to ensure each entry is visible just after writing it; but then you should rather simply flush the buffer, as in the sketch below.
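A minimal sketch of that last variant, assuming a long-lived pw field like the one in the question:

pw.print(changeContent);
pw.flush(); // push the buffered text to the OS now, so other processes see it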
Not keeping the file open would be an option if you have many of those text fields, where each one is associated with a different file. Then, if the number of text fields approaches the open-file limit, chances are that your program cannot open any other files, sockets, or whatever, since each of the fields would occupy one file descriptor.
But, of course, this is a purely theoretical consideration. The open-files limit is usually around 2000, and I can hardly imagine an application with 2000 text input fields.
That being said, early versions of the Unix find utility took care to close and later re-open traversed directories, to avoid running out of file descriptors. But this was in the early days, when the limit was 20 or so.

Rolling file implementation

I have always been curious how rolling files are implemented in logging.
How would one even start creating a file-writing class, in any language, that ensures a file size limit is not exceeded?
The only possible solution I can think of is this:
write method:
    size = file size + size of string to write
    if (size > limit)
        close the file writer
        open file reader
        read the file
        close file reader
        open file writer (clears the whole file)
        remove the size from the beginning to accommodate the new string to write
        write the new truncated string
    write the string we received
This seems like a terrible implementation, but I cannot think of anything better.
Specifically, I would love to see a solution in Java.
EDIT: By "remove the size from the beginning" I mean: say I have a 20-byte string (which is the limit) and I want to write another 3-byte string. I remove 3 bytes from the beginning, leaving 17 bytes, and by appending the new string I am back at 20 bytes.
Because your question made me look into it, here's an example from the logback logging framework. The RollingFileAppender#rollover() method looks like this:
public void rollover() {
    synchronized (lock) {
        // Note: This method needs to be synchronized because it needs exclusive
        // access while it closes and then re-opens the target file.
        //
        // make sure to close the hereto active log file! Renaming under windows
        // does not work for open files
        this.closeOutputStream();
        try {
            rollingPolicy.rollover(); // this actually does the renaming of files
        } catch (RolloverFailure rf) {
            addWarn("RolloverFailure occurred. Deferring roll-over.");
            // we failed to roll-over, let us not truncate and risk data loss
            this.append = true;
        }
        try {
            // update the currentlyActiveFile
            currentlyActiveFile = new File(rollingPolicy.getActiveFileName());
            // This will also close the file. This is OK since multiple
            // close operations are safe.
            // COMMENT MINE: this also sets the new OutputStream for the new file
            this.openFile(rollingPolicy.getActiveFileName());
        } catch (IOException e) {
            addError("setFile(" + fileName + ", false) call failed.", e);
        }
    }
}
As you can see, the logic is pretty similar to what you posted. They close the current OutputStream, perform the rollover, then open a new one (openFile()). Obviously, this is all done in a synchronized block, since many threads use the logger but only one rollover should occur at a time.
A RollingPolicy decides how to perform a rollover, and a TriggeringPolicy decides when to perform one. With logback, you usually base these policies on file size or time.
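For comparison, here is a stripped-down, size-based rollover in plain Java. The class name and single-archive policy are made up for illustration; a real framework also handles concurrency, errors, and compression far more carefully:

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class SimpleRollingWriter {
    private final File file;
    private final long limit;
    private PrintWriter out;

    public SimpleRollingWriter(File file, long limit) throws IOException {
        this.file = file;
        this.limit = limit;
        this.out = new PrintWriter(new FileWriter(file, true));
    }

    public synchronized void write(String line) throws IOException {
        if (file.length() + line.length() > limit) {
            rollover();
        }
        out.println(line);
        out.flush(); // keep file.length() roughly up to date
    }

    private void rollover() throws IOException {
        out.close(); // close before renaming; required on Windows
        File archive = new File(file.getPath() + ".1");
        archive.delete();       // drop the previous archive, if any
        file.renameTo(archive); // the current file becomes the archive
        out = new PrintWriter(new FileWriter(file, false)); // start fresh
    }
}

This avoids the read-truncate-rewrite from the question entirely: instead of trimming bytes off the front, you rename the full file away and start a new one, which is what the logback code above does via its RollingPolicy.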

Files not being written to device

I have been trying a hundred different methods to solve my problem, but for some reason they simply won't work.
I'm trying to make a quick and dirty way for my application to be persistent. It basically has a lot of objects it needs to save when it is destroyed, so I thought I would put the objects into an ArrayList and then write the ArrayList to a file using an ObjectOutputStream.
public void onStop() {
    super.onStop();
    Log.d("Event", "Stopped");
    FileOutputStream fos = null;
    ObjectOutputStream oos = null;
    try {
        fos = openFileOutput("Flights", MODE_WORLD_WRITEABLE);
        oos = new ObjectOutputStream(fos);
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    ArrayList<Flight> alFlightList = new ArrayList<Flight>();
    Iterator it = flightMap.entrySet().iterator();
    while (it.hasNext()) {
        Map.Entry pairs = (Map.Entry) it.next();
        alFlightList.add((Flight) pairs.getValue());
    }
    try {
        oos.writeObject(alFlightList);
        oos.close();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        Log.d("Info", "File created!");
    }
}
I have a similar routine for reading it back out again, but it complains that there is no file to read from.
I know using files for persistence is not best practice, but as previously mentioned, this was supposed to be a quick and dirty solution. (Though the time I have spent on it now might as well have gone into making a database. ._.)
Thanks!
From the documentation on Saving Persistent State,
There are generally two kinds of persistent state that an activity will deal with: shared document-like data (typically stored in a SQLite database using a content provider) and internal state such as user preferences.
For content provider data, we suggest that activities use an "edit in place" user model. That is, any edits a user makes are effectively made immediately without requiring an additional confirmation step. Supporting this model is generally a simple matter of following two rules:
When creating a new document, the backing database entry or file for it is created immediately. For example, if the user chooses to write a new e-mail, a new entry for that e-mail is created as soon as they start entering data, so that if they go to any other activity after that point this e-mail will now appear in the list of drafts.
When an activity's onPause() method is called, it should commit to the backing content provider or file any changes the user has made. This ensures that those changes will be seen by any other activity that is about to run. You will probably want to commit your data even more aggressively at key times during your activity's lifecycle: for example before starting a new activity, before finishing your own activity, when the user switches between input fields, etc.
So if you want to do it "correctly", I would save the data in onPause(), and I'd probably save the state using an SQLite database of some sort. You should also perform file I/O on a separate thread using an AsyncTask, as this sort of thing could potentially block the UI thread and crash your app.
If you want a quick and dirty way to do it (i.e. if you are not releasing this application on the Android market), then I am betting that the problem is that you are trying to perform the file I/O in onDestroy(), which is not guaranteed to be called. This is another reason to perform the file reads and writes in onPause().
The last thing I would suggest is reading through the documentation on internal/external storage. It could be that you aren't writing to the correct directory because you don't have the file permissions to do so. You should perform the file I/O like so:
String FILENAME = "FLIGHTS";
FileOutputStream fos = openFileOutput(FILENAME, Context.MODE_PRIVATE);
fos.write(...);
fos.close();
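For reading the list back, the counterpart would be openFileInput wrapped in an ObjectInputStream, along these lines (note that Flight must implement Serializable, otherwise the writeObject() call above fails with a NotSerializableException that your catch block only prints):

FileInputStream fis = openFileInput("Flights");
ObjectInputStream ois = new ObjectInputStream(fis);
@SuppressWarnings("unchecked")
ArrayList<Flight> alFlightList = (ArrayList<Flight>) ois.readObject();
ois.close();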
Replace "Flights" with "/sdcard/Flights", or else it creates a file in no real location. :)
I am not sure if it will work.
Why don't you use a database, and load everything back from the database when the app is created or restarted?
You can also use the onPause() or onStop() methods to store all the data into the database.
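A minimal sketch of the database route (the table layout is made up; you would map the real Flight fields to columns):

import android.content.ContentValues;
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class FlightDbHelper extends SQLiteOpenHelper {
    public FlightDbHelper(Context context) {
        super(context, "flights.db", null, 1);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // One row per flight; replace the single TEXT column with real fields.
        db.execSQL("CREATE TABLE flight (_id INTEGER PRIMARY KEY, data TEXT)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS flight");
        onCreate(db);
    }

    public void saveFlight(String data) {
        ContentValues values = new ContentValues();
        values.put("data", data);
        getWritableDatabase().insert("flight", null, values);
    }
}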

Recovering from IOException: network name no longer available

I'm trying to read in a large (700GB) file and incrementally process it, but the network I'm working on will occasionally go down, cutting off access to the file. This throws a java.io.IOException telling me that "The specified network name is no longer available". Is there a way that I can catch this exception and wait for, say, fifteen minutes and then retry the read, or is the Reader object fried once access to the file is lost?
If the Reader is rendered useless once the connection is lost, is there a way that I can rewrite this in such a way as to allow me to "save my place" and then begin my read from there without having to read and discard all the data before it? Even just munching data without processing it takes a long time when there's 500GB of it to get through.
Currently, the code looks something like this (edited for brevity):
class Processor {
    BufferedReader br;

    Processor(String fname) throws IOException {
        br = new BufferedReader(new FileReader(fname));
    }

    void process() {
        try {
            String line;
            while ((line = br.readLine()) != null) {
                // ...code for processing the line goes here...
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Thank you for your time.
You can keep track of how far you have read in a variable. For example, here I keep track in a variable called read, where buff is a char[] (note that this counts chars rather than bytes). I'm not sure this is possible using the readLine method.
read+=br.read(buff);
Then if you need to restart, you can skip that many chars on a fresh reader:
br.skip(read);
Then you can keep processing away. Good luck
I doubt that the underlying file descriptor will still be usable after this error, but you would have to try it. More probably you will have to reopen the file and skip to where you were up to.
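A sketch of that reopen-and-skip strategy, combining both answers (the fifteen-minute wait comes from the question; the position bookkeeping assumes single-character "\n" line endings, so adjust for "\r\n"):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

class ResilientProcessor {
    private long charsRead = 0; // chars consumed so far, across attempts

    void process(String fname) throws InterruptedException {
        while (true) {
            try (BufferedReader br = new BufferedReader(new FileReader(fname))) {
                br.skip(charsRead); // resume where the last attempt stopped
                String line;
                while ((line = br.readLine()) != null) {
                    // ...code for processing the line goes here...
                    charsRead += line.length() + 1; // +1 for the '\n'
                }
                return; // reached end of file
            } catch (IOException e) {
                // Network share dropped: wait fifteen minutes, then retry.
                Thread.sleep(15 * 60 * 1000L);
            }
        }
    }
}

Note that Reader.skip() counts chars, not bytes, so this bookkeeping only lines up for single-byte encodings; for a byte-exact resume you would track a byte offset on a FileInputStream instead and wrap it in a reader after skipping.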
