Today I was working on a servlet that writes some information to a file on my hard disk, and I was using the following code to perform the write operation:
File f = new File("c:/users/dell/desktop/ja/MyLOgs.txt");
PrintWriter out = new PrintWriter(new FileWriter(f, true));
out.println("the name of the user is " + name + "\n");
out.println("the email of the user is " + email + "\n");
out.close(); // **my question is about this statement**
When I left that statement out, the servlet compiled fine but nothing was written to the file; when I included it, the write operation succeeded. My questions are:
Why was the data not written to the file when that statement was missing (even though the servlet compiled without any errors)?
To what extent does the close operation matter for streams?
Calling close() causes all the data to be flushed. You have constructed a PrintWriter without enabling auto-flush (a second argument to one of the constructors), which means you have to call flush() manually; close() does that for you.
Closing also frees up any system resources used by having the file open. Although the VM and the operating system will eventually close the file, it is good practice to close it yourself as soon as you are finished with it, so those resources are released promptly.
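For illustration, here is a minimal sketch (reusing the file and variable from the question) of the auto-flush constructor mentioned above; with the second argument set to true, every println() flushes for you:

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class AutoFlushDemo {
    public static void main(String[] args) throws IOException {
        String name = "test"; // stand-in for the question's variable
        File f = new File("c:/users/dell/desktop/ja/MyLOgs.txt");
        // second constructor argument = true enables auto-flush on println/printf/format
        PrintWriter out = new PrintWriter(new FileWriter(f, true), true);
        out.println("the name of the user is " + name); // written to disk immediately
        out.close(); // still close it, to release the file handle
    }
}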
You may also wish to put the close() inside a finally block to ensure it always gets called. Such as:
PrintWriter out = null;
try {
    File f = new File("c:/users/dell/desktop/ja/MyLOgs.txt");
    out = new PrintWriter(new FileWriter(f, true));
    out.println("the name of the user is " + name + "\n");
    out.println("the email of the user is " + email + "\n");
} finally {
    if (out != null) { // the constructor may have thrown before out was assigned
        out.close();
    }
}
See: PrintWriter
Sanchit also makes a good point about getting the Java 7 VM to close your streams automatically the moment you no longer need them.
When you close a PrintWriter, it flushes all of its data out to wherever you want the data to go. It doesn't do this automatically on every write because that would be very inefficient: each flush costs an I/O operation.
You could achieve the same effect with flush(), but you should always close streams anyway - see here: http://www.javapractices.com/topic/TopicAction.do?Id=8 and here: http://docs.oracle.com/javase/tutorial/jndi/ldap/close.html. Always call close() on streams when you are done using them. Additionally, to make sure the stream is always closed regardless of exceptions, you can do this:
try {
    // do stuff
} finally {
    outputStream.close();
}
It is because the PrintWriter buffers your data to avoid performing an I/O operation for every single write (which is very expensive). When you call close(), the buffer is flushed into the file. You can also call flush() to force the data to be written without closing the stream.
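As a minimal sketch (the file name is invented for illustration), flush() pushes the buffered data to disk while leaving the stream open for further writes:

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class FlushDemo {
    public static void main(String[] args) throws IOException {
        PrintWriter out = new PrintWriter(new FileWriter("demo.log", true));
        out.println("first entry");
        out.flush();                 // data is on disk now, stream still open
        out.println("second entry"); // buffered again until the next flush or close
        out.close();                 // flushes the remaining data, then releases the file
    }
}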
Streams automatically flush their data before closing. So you can either flush the data manually every once in a while using out.flush(), or just close the stream once you are done with it. Note that relying on the program's exit to flush for you is risky: if the JVM terminates abruptly, buffered data that was never flushed is lost.
Using Java 7 you can use try-with-resources, as below, which will automatically close your streams (in the reverse of the order in which you opened them).
public static void main(String[] args) {
    String name = "";
    String email = "";
    File f = new File("c:/users/dell/desktop/ja/MyLOgs.txt");
    try (FileWriter fw = new FileWriter(f, true); PrintWriter out = new PrintWriter(fw)) {
        out.println("the name of the user is " + name + "\n");
        out.println("the email of the user is " + email + "\n");
    } catch (IOException e) {
        e.printStackTrace();
    }
}
PrintWriter buffers the data to be written, so it will not write to disk until its buffer is full. Calling close() ensures that any remaining data is flushed, as well as closing the OutputStream.
close() statements typically appear in finally blocks.
Why was the data not being written to the file when I was not including that statement?
When the process terminates, the unmanaged resources will be released. For InputStreams this is fine. For OutputStreams, you could lose any buffered data, so you should at least flush the stream before exiting the program.
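One way to guard against that, sketched here as an assumption rather than anything the answer above prescribes, is to register a JVM shutdown hook that closes the writer on normal termination (a hook will not run if the process is killed forcibly):

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class ShutdownFlushDemo {
    public static void main(String[] args) throws IOException {
        // "app.log" is a made-up file name for this sketch
        final PrintWriter out = new PrintWriter(new FileWriter("app.log", true));
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                out.close(); // flushes buffered data, then releases the file
            }
        }));
        out.println("some log entry"); // flushed at exit even if close() is never reached
    }
}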
Related
I'm trying to keep a log of HTTP responses by writing them to a txt file.
I'm using FileWriter in Java, but unfortunately when the number of lines (e.g. 1000 lines) or the size of the txt file (e.g. 80 kB) is exceeded, the previous content is removed and the new content is written in its place.
This happens every time the limit is exceeded.
try {
    File file = new File("response.txt");
    file.createNewFile();
    FileWriter writer = new FileWriter(file, true);
    writer.write(System.currentTimeMillis() + "\t" + response + "\n");
    writer.flush();
    writer.close();
} catch (IOException ioe) {
    System.out.println("\nError");
}
file.createNewFile();
Here you are (redundantly) creating the file, if it does not already exist, every time you call this method.
FileWriter writer = new FileWriter(file, true);
Here you are appending to the file. Note that new FileWriter(file, true) already creates the file if it is missing, so the createNewFile() call buys you nothing. Remove it.
This kind of second-guessing is always and everywhere a complete waste of time and space. new FileWriter() already has to do all that anyway, and you're just forcing it to happen twice.
In fact you should try to keep the file open rather than reopening and reclosing it every time you call this method. What you're doing is horrifically inefficient, as well as not working.
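A rough sketch of that advice (the class and method names are mine, not from the answer): open the writer once, reuse it for every log call, and close it only when the application shuts down:

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

// hypothetical ResponseLog class illustrating "open once, write many"
public class ResponseLog {
    private final BufferedWriter writer;

    public ResponseLog(String path) throws IOException {
        writer = new BufferedWriter(new FileWriter(path, true)); // opened once
    }

    public synchronized void log(String response) throws IOException {
        writer.write(System.currentTimeMillis() + "\t" + response);
        writer.newLine();
        writer.flush(); // optional: trades throughput for per-entry durability
    }

    public synchronized void close() throws IOException {
        writer.close(); // called once, at shutdown
    }
}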
NB When you get an exception, print the exception, not just "error". Otherwise the next thing you know you will be asking here why it prints "error", just because you didn't write your code properly.
I am guessing it's either because:
There's not enough disk space.
file.createNewFile() is being used for every line and it's not reliable.
You open and close the stream for every line.
You call this code in a multithreaded environment without synchronising.
Try the following:
BufferedWriter out = null;
try {
    out = new BufferedWriter(new FileWriter(file, true));
    out.append(response);
    out.newLine();
} catch (Exception e) {
    e.printStackTrace(); // if needed
    if (out != null) {
        try {
            out.append("Error"); // best-effort marker in the log
            out.newLine();
        } catch (IOException ignored) {
        }
    }
} finally {
    if (out != null) {
        try {
            out.close(); // quiet close; also flushes
        } catch (IOException ignored) {
        }
    }
}
I'm writing a logger for my Java program (in CSV format).
The logger works fine, but I ran into one problem.
Logically enough, the program fails when I try to write to the file while the file is open in another program.
When I do that, I get this exception: "The process cannot access the file because it is being used by another process".
My question is whether there is any way to continue writing even if someone has the file open?
Thanks.
UPDATE:
I think I solved the problem.
Every time after I write to the file (with BufferedWriter and FileWriter), I call a close() function that closes the BufferedWriter and the FileWriter.
I changed the close() function:
1. Added a FileChannel and a FileLock.
2. Skipped the line bw.close();
Is it OK not to close the BufferedWriter (bw)? Can there be any problems later on?
private void close() throws IOException {
RandomAccessFile rf;
rf = new RandomAccessFile(file, "rw");
fileChannel = rf.getChannel();
lock = fileChannel.lock();
try {
if (bw != null) {
// bw.close(); (the line I skipped)
bw = null;
}
if (fw != null) {
fw.close();
fw = null;
}
} catch (IOException ex) {
ex.printStackTrace();
}
lock.release();
}
UPDATE 2:
Now I found that if I change the function to the following (close changed to flush), it works:
private void close() {
try {
if (bw != null) {
bw.flush();
bw = null;
}
if (fw != null) {
fw.flush();
fw = null;
}
} catch (IOException ex) {
ex.printStackTrace();
}
}
What is the best option?
Reverse the problem: try to open the file while continuing to write:
if you want a fixed snapshot of the data, you can copy the file (via the shell) and then read the copy;
if you also want data written later, you must keep the same output: try to redirect the normal output to something you can both store and read.
Perhaps a library exists for this; it would work like tee or tpipe.
see for example:
Could I duplicate or intercept an output stream in Java?
for redirecting log4j to what you want, see this for example:
How do I redirect log4j output to my HttpServletResponse output stream?
Is there any way to continue writing even if something else has opened the file?
Not in Java.
To write a file, you must first open it. If you cannot open it because the OS won't permit it ... because something else has opened it ... then you cannot get to the point where you can write it.
In this scenario, you should consider opening a different log file.
Note that this scenario happens on Windows because Java is following normal Windows practice and opening the file with an exclusive (mandatory) lock by default. Short of changing Java ... and every other Windows application that opens files like this ... you are stuck.
UPDATE
It turns out that there may be a way.
Read this Q&A: https://stackoverflow.com/a/22648514/139985
Use FileChannel.open as described, but use flags that allow you to write without forbidding other writers. For example
FileChannel.open(path, WRITE)
or
FileChannel.open(path, WRITE, APPEND)
The trick is that you don't want any of the NOSHARE_* options.
CAVEAT: I haven't tried this.
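For concreteness, here is an untested sketch along those lines (the file name is invented); CREATE, WRITE and APPEND are standard options, and no NOSHARE_* option is passed, so other processes can still open the file:

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;
import static java.nio.file.StandardOpenOption.*;

public class SharedAppend {
    public static void main(String[] args) throws Exception {
        Path path = Paths.get("log.csv"); // made-up log file
        // CREATE + WRITE + APPEND, and crucially no NOSHARE_* option,
        // so another process holding the file open should not block us
        try (FileChannel ch = FileChannel.open(path, CREATE, WRITE, APPEND)) {
            ch.write(ByteBuffer.wrap("a,b,c\n".getBytes(StandardCharsets.UTF_8)));
        }
    }
}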
As #guillaume said, you can use a library like log4j.
But if you want to implement your own solution in Java, you can use the observer pattern and write your logs asynchronously.
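A sketch of that idea (all the names here are mine): producers enqueue log lines, and a single background thread owns the file, so callers never block on the file and never contend for it:

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// hypothetical AsyncLogger: one worker thread drains a queue of log lines
public class AsyncLogger {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();

    public AsyncLogger(final String path) {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    BufferedWriter out = new BufferedWriter(new FileWriter(path, true));
                    while (true) {
                        out.write(queue.take()); // blocks until a line is available
                        out.newLine();
                        out.flush();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        worker.setDaemon(true); // don't keep the JVM alive just for the logger
        worker.start();
    }

    public void log(String line) {
        queue.offer(line); // returns immediately; the worker does the I/O
    }
}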
I have always been curious how a rolling file is implemented by logging frameworks.
How would one even start creating a file-writing class, in any language, that ensures the file size is never exceeded?
The only possible solution I can think of is this:
write method:
    size = file size + size of string to write
    if (size > limit)
        close the file writer
        open file reader
        read the file
        close file reader
        open file writer (clears the whole file)
        remove the size from the beginning to accommodate the new string to write
        write the new truncated string
    write the string we received
This seems like a terrible implementation, but I can not think up of anything better.
Specifically I would love to see a solution in java.
EDIT: By "remove size from the beginning" I mean: let's say I have a 20-byte string (which is the limit) and I want to write another 3-byte string. I remove 3 bytes from the beginning, so I am left with the last 17 bytes, and by appending the new string I am back at 20 bytes.
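A literal (and, as the question suspects, inefficient) sketch of that pseudocode, with invented names and a made-up capacity; it rewrites the whole file whenever the limit would be exceeded:

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;

// naive fixed-capacity log: drops bytes from the front to stay under `limit`
public class TruncatingLog {
    private final String path;
    private final int limit; // e.g. 20 bytes, as in the EDIT above

    public TruncatingLog(String path, int limit) {
        this.path = path;
        this.limit = limit;
    }

    public synchronized void write(byte[] entry) throws IOException {
        byte[] old;
        RandomAccessFile raf = new RandomAccessFile(path, "rw"); // creates the file if absent
        try {
            old = new byte[(int) raf.length()];
            raf.readFully(old);
        } finally {
            raf.close();
        }
        int keep = Math.min(old.length, Math.max(0, limit - entry.length));
        FileOutputStream out = new FileOutputStream(path); // truncates the file
        try {
            out.write(old, old.length - keep, keep); // only the last `keep` bytes survive
            out.write(entry, 0, Math.min(entry.length, limit));
        } finally {
            out.close();
        }
    }
}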
Because your question made me look into it, here's an example from the logback logging framework. The RollingFileAppender#rollover() method looks like this:
public void rollover() {
synchronized (lock) {
// Note: This method needs to be synchronized because it needs exclusive
// access while it closes and then re-opens the target file.
//
// make sure to close the hereto active log file! Renaming under windows
// does not work for open files
this.closeOutputStream();
try {
rollingPolicy.rollover(); // this actually does the renaming of files
} catch (RolloverFailure rf) {
addWarn("RolloverFailure occurred. Deferring roll-over.");
// we failed to roll-over, let us not truncate and risk data loss
this.append = true;
}
try {
// update the currentlyActiveFile
currentlyActiveFile = new File(rollingPolicy.getActiveFileName());
// This will also close the file. This is OK since multiple
// close operations are safe.
// COMMENT MINE this also sets the new OutputStream for the new file
this.openFile(rollingPolicy.getActiveFileName());
} catch (IOException e) {
addError("setFile(" + fileName + ", false) call failed.", e);
}
}
}
As you can see, the logic is pretty similar to what you posted. They close the current OutputStream, perform the rollover, then open a new one (openFile()). Obviously, this is all done in a synchronized block since many threads are using the logger, but only one rollover should occur at a time.
A RollingPolicy is a policy on how to perform a rollover and a TriggeringPolicy is when to perform a rollover. With logback, you usually base these policies on file size or time.
I actually checked other posts that could be related to this and couldn't find any answer to my question, so I had to create this new one:
The file does not get created in the given location with this code:
File as = new File("C:\\Documents and Settings\\<user>\\Desktop\\demo1\\One.xls");
if (!as.exists()) {
    as.createNewFile();
}
FileOutputStream fod = new FileOutputStream(as);
BufferedOutputStream dob = new BufferedOutputStream(fod);
byte[] asd = {65, 22, 123};
byte a1 = 87;
dob.write(asd);
dob.write(a1);
dob.flush();
if (dob != null) {
    dob.close();
}
if (fod != null) {
    fod.close();
}
The code runs fine and I don't get any FileNotFoundException!!
Is there anything that I'm missing out here?
You can rewrite your code like this:
BufferedOutputStream dob = null;
try {
    File file = new File("C:\\Documents and Settings\\<user>\\Desktop\\demo1\\One.xls");
    System.out.println("file exists: " + file.exists());
    FileOutputStream fod = new FileOutputStream(file);
    System.out.println("file exists: " + file.exists());
    dob = new BufferedOutputStream(fod); // assign, don't redeclare, or the finally block can't see it
    byte[] asd = {65, 22, 123};
    byte a1 = 87;
    dob.write(asd);
    dob.write(a1);
    //dob.flush();
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    if (dob != null) {
        try {
            dob.close();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
In this case it is only necessary to call the close() method of the topmost stream handler, the BufferedOutputStream:
Closes this output stream and releases any system resources associated with the stream.
The close method of FilterOutputStream calls its flush method, and then calls the close method of its underlying output stream.
So, the dob.flush() in the try block is commented out because the dob.close() call in the finally block flushes the stream. It also releases the system resources (e.g. "closes the file"), as stated in the apidoc quote above. Using the finally block is good practice:
The finally block always executes when the try block exits. This ensures that the finally block is executed even if an unexpected exception occurs. But finally is useful for more than just exception handling — it allows the programmer to avoid having cleanup code accidentally bypassed by a return, continue, or break. Putting cleanup code in a finally block is always a good practice, even when no exceptions are anticipated.
The FileOutputStream constructor creates an empty file on the disk:
Creates a file output stream to write to the file represented by the specified File object. A new FileDescriptor object is created to represent this file connection.
First, if there is a security manager, its checkWrite method is called with the path represented by the file argument as its argument.
If the file exists but is a directory rather than a regular file, does not exist but cannot be created, or cannot be opened for any other reason then a FileNotFoundException is thrown.
Where a FileDescriptor is:
Instances of the file descriptor class serve as an opaque handle to the underlying machine-specific structure representing an open file, an open socket, or another source or sink of bytes. The main practical use for a file descriptor is to create a FileInputStream or FileOutputStream to contain it.
Applications should not create their own file descriptors.
This code should either produce a file or throw an exception. You have even confirmed that none of the conditions for throwing an exception are met, e.g. you are replacing the <user> placeholder and the demo1 directory exists. Please copy this into a new, empty file and run it.
If it still behaves the same, then unless I have missed something this might be a bug. In that case, add this line to the code and post the output:
System.out.println(System.getProperty("java.vendor")+" "+System.getProperty("java.version"));
Judging from the path, I'd say you are using Win 7, am I right? What version?
Then it means there is already a file with that name in your directory.
I'm trying to read in a large (700 GB) file and incrementally process it, but the network I'm working on will occasionally go down, cutting off access to the file. This throws a java.io.IOException telling me that "The specified network name is no longer available". Is there a way that I can catch this exception and wait for, say, fifteen minutes and then retry the read, or is the Reader object fried once access to the file is lost?
If the Reader is rendered useless once the connection is lost, is there a way that I can rewrite this in such a way as to allow me to "save my place" and then begin my read from there without having to read and discard all the data before it? Even just munching data without processing it takes a long time when there's 500GB of it to get through.
Currently, the code looks something like this (edited for brevity):
class Processor {
    BufferedReader br;

    Processor(String fname) throws IOException {
        br = new BufferedReader(new FileReader(fname)); // was new FileReader("fname"), a literal, by mistake
    }

    void process() {
        try {
            String line;
            while ((line = br.readLine()) != null) {
                // ...code for processing the line goes here...
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Thank you for your time.
You can keep track of how much you have read in a variable. For example, here I keep a running total in a variable called read, where buff is a char[] (so this counts characters, not bytes). I'm not sure this is possible using the readLine method.
read+=br.read(buff);
Then if you need to restart, you can skip that many characters:
br.skip(read);
Then you can keep processing away. Good luck
I doubt that the underlying fd will still be usable after this error, but you would have to try it. More probably you will have to reopen the file and skip to where you were up to.
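Putting both answers together, here is an untested sketch (the names and the retry delay are invented) that tracks how many characters have been consumed and, on an IOException, reopens the file and skips back to that position:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// hypothetical resumable line processor: reopen + skip after a network failure
public class ResumableProcessor {
    public static void main(String[] args) throws InterruptedException {
        String fname = "huge-file.txt"; // made-up path
        long consumed = 0;              // characters successfully processed so far
        while (true) {
            try {
                BufferedReader br = new BufferedReader(new FileReader(fname));
                br.skip(consumed);      // fast-forward past what was already handled
                String line;
                while ((line = br.readLine()) != null) {
                    process(line);
                    consumed += line.length() + 1; // +1 for '\n'; adjust for "\r\n" files
                }
                br.close();
                break;                  // reached the end of the file
            } catch (IOException e) {
                System.err.println("Lost the file, retrying in 15 min: " + e);
                Thread.sleep(15 * 60 * 1000L);
            }
        }
    }

    private static void process(String line) {
        // ...code for processing the line goes here...
    }
}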