Despite closing streams in finally clauses, I constantly run into cleanup problems when using Java. File.delete() fails to delete files, and Windows Explorer fails too. Running System.gc() helps sometimes, but nothing short of terminating the VM helps consistently, and that is not an option.
Does anyone have any other ideas I could try? I use Java 1.6 on Windows XP.
UPDATE: FLAC code sample removed, the code worked if I isolated it.
UPDATE:
More info: this happens in Apache Tomcat. Commons FileUpload is used to upload the file and could be the culprit. I also use Runtime.exec() to run LAME in a separate process to encode the file, but that seems unlikely to be the cause, since Process Explorer clearly indicates that java.exe has a RW lock on the file, and LAME terminates fine.
UPDATE: I am working with the assumption that there is a missing close(), or a close() that does not get called, somewhere in my code or an external library. I just can't find it!
The code you posted looks good - it should not cause the issues you are describing. I understand you posted just a piece of the code you have - can you try extracting just this part to a separate program, run it and see if the issue still happens?
My guess is that there is some other place in the code that does new FileInputStream(path); and does not close the stream properly. You might just be seeing the results here when you try to delete the file.
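To illustrate, the kind of leak I mean looks something like this (a hypothetical sketch, not your code):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

class HeaderPeeker {
    // Hypothetical leak: a stream opened for a quick peek and never closed.
    // On Windows the open handle keeps the file locked, so File.delete() fails later.
    static byte[] peekHeader(String path) throws IOException {
        InputStream in = new FileInputStream(path);
        byte[] header = new byte[4];
        in.read(header);
        return header; // no in.close(): the handle leaks until the VM exits
    }
}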
I assume you're using jFlac. I downloaded jFlac 1.3 and tried your sample code on a FLAC freshly downloaded from the Internet's live music archive. For me, it worked. I even monitored it with Process Explorer and saw the file handles being opened and then released. Is your test code truly as simple as what you gave us, or is that a simplified version of your code? For me, once close() was called, the handle was released and the file was subsequently successfully deleted.
Try changing your infinite loop to:
File toDelete = new File(path);
if (!toDelete.delete()) {
    System.out.println("Could not delete " + path);
    System.out.println("Does it exist? " + toDelete.exists());
}
or if you want to keep looping, then put a 1 second sleep between attempts to delete the file. I tried this with JDK6 on WinXP Pro.
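A sketch of that looping variant (assuming the enclosing method can throw InterruptedException):

File toDelete = new File(path);
while (!toDelete.delete() && toDelete.exists()) {
    System.out.println("Could not delete " + path + ", retrying in 1s");
    Thread.sleep(1000); // give the OS (or an AV scanner) time to release the handle
}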
Don't forget to put a try/catch around your close() and log errors if the close throws an exception.
Make sure your close calls are in the finally block, not in the try block. If there is no try/finally because the method throws the exception, then add a try/finally and put the close in there (see the sketch after this list).
Look at the Windows Task Manager. For the Processes add the "Handles" column (under the View menu). Watch to see if the handles keep going up without ever dropping.
Use a profiler to see if you have Stream/Reader/Writer objects around that you do not think you should have.
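For the first two points, the pattern I have in mind looks roughly like this (a sketch):

InputStream is = new FileInputStream(path);
try {
    // ... read from is ...
} finally {
    try {
        is.close();
    } catch (IOException e) {
        System.err.println("Failed to close " + path + ": " + e); // log it, don't swallow it silently
    }
}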
EDIT:
Thanks for posting the code... off to look at it. One thing: your close methods are not both guaranteed to execute; the first close might throw, and then the second won't run.
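A sketch of one way to guarantee both closes run (stream names are placeholders):

try {
    // ... work with both streams ...
} finally {
    try {
        in.close(); // may throw...
    } finally {
        out.close(); // ...but this still runs either way
    }
}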
EDIT 2:
final WavWriter wavWriter = new WavWriter(os);
FLACDecoder decoder = new FLACDecoder(is);
The above two lines will presumably cause the streams to be kept in instance variables. As a test, see if you can set the stream references to null after the decoder.decode() call (make a decoder.cleanup() method, perhaps). See if holding onto the closed streams is causing a problem.
Also, do you do any wrapping of the streams passed into the above constructors? If so you might have to close the streams via the wrappers.
Your code sample should definitely work. In fact, I ran it on Java 1.6/Vista with jFlac 1.3 and the source file was deleted, without any looping.
I'm guessing that in your case another process is keeping the file open, perhaps a desktop search indexer or an antivirus. You can use Process Explorer (procexp) to find out which process is actually holding onto the file.
Isn't that an empty while loop?
you have:
try
{
...code
}
finally
{
}
while (something);
put some whitespace in there, and you actually have:
try
{
...code
}
finally
{
}
while (something)
;
Your while loop isn't related to your try/finally. If your original try statement fails and the file isn't created, that while loop will never complete, because the try/finally will never execute a second time.
Did you intend to make that a do { all your code } while (your while condition)?
Because that isn't what you have there.
EDIT to clarify:
My suggestion would be to change your while loop to report why it can't delete:
while (!file.delete())
{
    if (!file.exists())
        break; // the file doesn't even exist, of course delete will fail
    if (!file.canRead())
        break; // the file isn't readable, delete will fail
    if (!file.canWrite())
        break; // the file isn't writable, delete will fail
}
because if delete fails once, it's just going to fail over and over and over; of course it's going to hang there. You aren't changing the state of the file in the loop.
Now that you've added other info (Tomcat, etc.), is this a permissions issue? Are you trying to write a file that the user the Tomcat VM runs as (nobody?) can't create, or to delete a file that the Tomcat process can't delete?
If Process Explorer etc. say Java has a lock on the file, then something still has an open stream using it. Someone might not have properly called close() on whatever streams are writing to the file.
If you are out of clues and ideas: In cygwin, cd to your javaroot and run something like:
find . -name '*.java' -print0 | xargs -0 grep "new.*new.*putStream"
It might provide a few suspects...
Another thing to try, since you're using Tomcat: in your Context Descriptor (typically Tomcat/conf/Catalina/localhost/your-context.xml), you can set antiResourceLocking="true", which is designed to "avoid resource locking on Windows". The default (if you don't specify it) is false. Worth a try.
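For example (a sketch; your-context.xml stands for your actual context descriptor):

<!-- Tomcat/conf/Catalina/localhost/your-context.xml -->
<Context antiResourceLocking="true">
    <!-- the rest of your context configuration -->
</Context>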
My post got a little too long, sorry. Here is a summary:
A file on disk cannot be deleted ("the JVM holds the file" error), both when deleting from the Java code and when trying to manually delete the file from Windows.
All streams to that file are closed and set to null. All file objects set to null.
The program does nothing at that point, but waiting 30 minutes allows me to delete the file from Windows. Weird: is the file not used by Java anymore? Plus, since nothing happens in the program, it can't be some stream I forgot (besides, I triple-checked that nothing is open).
Invoking System.gc() seemed to work when the files were small. It did not help once they got to about 20MB.
[EDIT2] - I tried writing some basic code to explain, but it's tricky. I'm sorry; I know it's difficult to answer like that. I can, however, show how I open and close streams:
BufferedWriter bw = new BufferedWriter(new FileWriter(new File("C:\\folder\\myFile.txt")));
for (int i = 0; i < 10; i++)
{
    bw.write("line " + i);
    bw.newLine();
}
bw.close();
bw = null;
If I've used a File object:
File f = new File("C:\\folder\\myFile.txt");
// use it...
f = null;
Basic code, I know. But this is essentially what I do.
I know for a fact I've closed all streams in this exact way.
I know for a fact that nothing happens in the program in that 30-minutes interval in which I cannot delete the file, until I somehow magically can.
Thank you for your input, even though I couldn't provide coherent code. I appreciate it.
Sorry for not providing any specific code here, since I can't pinpoint the problem (it's not exactly specific-code-related). In any case, here is the thing:
I have written a program which reads, writes and modifies files on disk. For several reasons, the handling of the read/write is done in a different thread, which is constantly operating.
At some point, I terminate the "read/write" thread, keeping only the main thread; it waits for input from a socket, totally unrelated to the file, and does nothing else. Then I try to delete the file (using either File.delete() or the java.nio.file.Files delete option).
The thing is - and it's very weird - sometimes it works, sometimes it doesn't. Even manually going to the folder and trying to delete the file via Windows gives me the "The file is open by the JVM" message.
Now, I am well aware that keeping references from all kinds of streams to the file prevents me from deleting it. Well past that by now :)
I have made sure that all streams are closed. I even set their references to null, including any File objects I used (even though that shouldn't make any difference). All set to null, all closed. And the thread which created all of them - the "read/write" thread - has terminated, since it got to the end of its run() method.
Usually, if I wait about 30 minutes, while the JVM still operates, I can delete the file manually from windows. The error magically disappears. When the JVM is closed, I can always delete the file right away.
I am lost here. Tried specifically invoking System.gc() before trying to delete the file, even called it like 10 times (not that it should matter). Sometimes it helped, but on other occasions, for example, when the file got larger (say 20MB), that didn't help.
What am I missing here?
Obviously, this couldn't be my implicit fault (not closing some stream), since the read/write thread is dead, the main thread awaits something unrelated (so the program is at a "standstill"), I have explicitly closed all streams, even nullified the references (inStream = null), invoked the garbage collector.
What am I missing? Why is the file "deletable" after 30 minutes (nothing happens at that time - not something in my code). Am I missing some gentle reference/garbage collection thingy?
What you're doing just calls for problems. You say that "if an IOException occurred, it is printed immediately", and that may be true, but given that something inexplicable is happening, let's doubt it anyway.
I'd first ensure that everything gets always closed, and then I'd care about related logic (logging, exiting, ...).
Anyway, what you did is not how resources should be managed. The answer above is not exactly correct either. try-with-resources is (besides @lombok.Cleanup) about the only way to show clearly that nothing ever gets left open. Anything else is more complicated and more error-prone. I'd strongly recommend using it everywhere. This may be quite some work, but it also forces you to re-inspect all the critical pieces of code.
Things like nullifying references and calling the GC should not help... and if they seem to, it may be coincidence.
Some ideas:
Are you using memory mapped files? A mapping keeps the file locked on Windows until the buffer is garbage collected, which would fit both the System.gc() behavior and the delayed delete (see the sketch after this list).
Are you sure System.exit is not disabled by a security manager?
Are you running an antivirus? They love to scan files just after they get written.
Btw., locked files are one reason why WoW never started for me. Sometimes the locks persisted long after the culprit was gone, at least according to the tools I could use.
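On the memory-mapped files point, a minimal sketch of the trap (the file name is illustrative): on Windows the mapping keeps the file locked even after the channel is closed, until the MappedByteBuffer is garbage collected.

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedLockDemo {
    public static void main(String[] args) throws Exception {
        File f = new File("mapped.bin"); // illustrative name
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            buf.put(0, (byte) 42);
        }
        // The channel is closed, but on Windows the mapping still locks the
        // file; delete() typically fails here until the buffer is collected.
        System.out.println("deleted immediately? " + f.delete());
    }
}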
Are you closing your streams in a try...finally or a try (A a = new A()) block? If not, the streams may not be closed.
I would strongly recommend using either automatic resource management ( try (A a = new A()) { ... } ) or a try...finally block for all external resources. For example:
try (BufferedWriter br = new BufferedWriter(new FileWriter(new File("C:\\folder\\myFile.txt")))) {
    for (int i = 0; i < 10; i++) {
        br.write("line " + i);
        br.newLine();
    }
} // br is closed here automatically, even if an exception is thrown
When my program starts, it opens a file and writes to it periodically. (It's not a log file; it's one of the outputs of the program.) I need to have the file available for the length of the program, but I don't need to do anything in particular to end the file; just close it.
I gather that for file I/O in Java I'm supposed to implement AutoCloseable and wrap it in a try-with-resources block. However, because this file is long-lived, and it's one of a few outputs of the program, I'm finding it hard to organize things such that all the files I open are wrapped in try-with-resources blocks. Furthermore, the top-level classes (where my main() function lies) don't know about this file.
Here's my code; note the lack of writer.close():
public class WorkRecorder {

    public WorkRecorder(String recorderFile) throws FileNotFoundException {
        writer = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(recorderFile)));
    }

    private Writer writer;

    public void record(Data data) throws Exception {
        // format the Data object to match the expected file format
        // ...
        writer.write(data.toString());
        writer.write(System.lineSeparator());
        writer.flush();
    }
}
tl;dr: do I need to implement AutoCloseable and call writer.close() if the resource is an opened output file that I never need to close until the program is done? Can I assume the JVM and the OS (Linux) will clean things up for me automatically?
Bonus (?): I struggled with this in C#'s IDisposable too. The using block, like Java's try-with-resources construct, is a nice feature when I have something that I'm going to open, do something with quickly, and close right away. But often that's not the case, particularly with files, when access to the resource hangs around for a while, or when I need to manage multiple such resources. If the answer to my question is "always use try-with-resources blocks", I'm stuck again.
I have similar code that doesn't lend itself to being wrapped in a try-with-resources statement. I think that is fine, as long as you close it when the program is done.
Just make sure you account for any Exceptions that may happen. For example, in my program, there is a cleanup() method that gets called when the program is shut down. This calls writer.close(). This is also called if there is any abnormal behavior that would cause the program to shut down.
If this is just a simple program, and you're expecting the Writer to be open for its duration, I don't think it's really a big deal for it to not be closed when the program terminates...but it is good practice to make sure your resources are closed, so I would go ahead and add that to wherever your program may shut down.
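One way to wire up that cleanup is a JVM shutdown hook; a sketch, assuming you give WorkRecorder a close() method that delegates to writer.close() (the file name is illustrative):

WorkRecorder recorder = new WorkRecorder("out.rec");
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    try {
        recorder.close(); // the assumed close() delegating to writer.close()
    } catch (Exception e) {
        e.printStackTrace(); // last-chance logging; the JVM is exiting anyway
    }
}));

Note that shutdown hooks run on normal termination and Ctrl-C, but not if the JVM is killed forcibly.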
You should always close resources, or set them to null so they can be picked up by the garbage collector in Java. Using try-with-resources blocks is a great way to have Java automatically close resources when you're done with them. Even if you use a resource for the duration of the program, it is good programming practice to close it at the end. Some might say you don't need to; I personally would say just go ahead and do it, and here's why:
"When a stream is no longer needed, always close it using the close() method or automatically close it using a try-with-resource statement. Not closing streams may cause data corruption in the output file, or other programming errors."
-Introduction to Java Programming 10th Edition, Y. Daniel Liang
If possible, just run the .close() method on the resource at the very end of the program.
I (now) think a better answer is "It depends" :-). A detailed treatment is provided by Lukas Eder here. Also check out the Lambda EG group post.
But in general, it's a good idea to return the resource back to the operating system when you are done with it and use try-with-resources all the time (except when you know what you are doing).
I have implemented a listener that notifies when we receive a new file in a particular directory. This is implemented by polling, using a TimerTask.
Now the program is set up so that once it receives a new file, it calls another Java program that opens the file and validates whether it is the correct file. My problem is that, since the polling happens a specified number of seconds later, a case can arise in which a file is still being copied into that directory and hence is locked by Windows.
This throws an IOException, since the other Java program that tries to open it for validation cannot ("File is being used by another process").
Is there a way I can know from Java when Windows has finished copying, and then call the second program to do the validation?
I will be more than happy to post code snippets if someone needs them in order to help.
Thanks
Thanks a lot for all the help, I was having the same problem with WatchEvent.
Unfortunately, as you said, file.canRead() and file.canWrite() both return true even if the file is still locked by Windows. So I discovered that if I try to "rename" it to the same name, I can tell whether Windows is still working on it. This is what I did:
while (!sourceFile.renameTo(sourceFile)) {
    // Rename to the same name failed: Windows is still working on the file.
    Thread.sleep(10);
}
This one is a bit tricky. It would have been a piece of cake if you could control, or at least communicate with, the program copying the file, but that won't be possible with Windows, I guess. I had to deal with a similar problem a while ago with SFU software; I resolved it by looping, trying to open the file for writing until it became available.
To avoid high CPU usage while looping, check the file with exponentially increasing delays.
EDIT A possible solution:
File fileToCopy = new File(pathname);
int sleepTime = 1000; // start by sleeping 1 second
while (!fileToCopy.canWrite()) {
    // Cannot write to the file, Windows is still working on it
    Thread.sleep(sleepTime); // (assumes the enclosing method can throw InterruptedException)
    sleepTime *= 2; // double the delay each time (not truly exponential, but it does the trick)
    if (sleepTime > 30000) {
        // Cap the sleep time to ensure we are not sleeping forever between checks :)
        sleepTime = 30000;
    }
}
// Here, we have access to the file, go process it
processFile(fileToCopy);
I think you can create the File object and then use canRead() or canWrite() to know whether the file is ready to be used by the other Java program.
http://docs.oracle.com/javase/6/docs/api/java/io/File.html
The other option is to try to open the file in the first program, and if that throws an exception, don't call the other Java program. But I'd recommend the 'File' option above.
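If you do go the open-probe route, a sketch (append mode so nothing gets truncated; names are illustrative):

File file = new File(path);
boolean ready;
try (FileOutputStream probe = new FileOutputStream(file, true)) {
    ready = true; // we could open it for writing, so Windows has released it
} catch (IOException e) {
    ready = false; // still locked by the copying process
}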
I'm doing some file I/O with multiple files (writing to 19 files, as it happens). After writing to them a few hundred times, I get the Java IOException: Too many open files. But I actually have only a few files open at once. What is the problem here? I can verify that the writes were successful.
On Linux and other UNIX / UNIX-like platforms, the OS places a limit on the number of open file descriptors that a process may have at any given time. In the old days, this limit used to be hardwired¹, and relatively small. These days it is much larger (hundreds / thousands), and subject to a "soft" per-process configurable resource limit. (Look up the ulimit shell builtin ...)
Your Java application must be exceeding the per-process file descriptor limit.
You say that you have 19 files open, and that after a few hundred times you get an IOException saying "too many files open". Now this particular exception can ONLY happen when a new file descriptor is requested, i.e. when you are opening a file (or a pipe or a socket). You can verify this by printing the stack trace of the IOException.
Unless your application is being run with a small resource limit (which seems unlikely), it follows that it must be repeatedly opening files / sockets / pipes, and failing to close them. Find out why that is happening and you should be able to figure out what to do about it.
FYI, the following pattern is a safe way to write to files that is guaranteed not to leak file descriptors.
Writer w = new FileWriter(...);
try {
    // write stuff to the file
} finally {
    try {
        w.close();
    } catch (IOException ex) {
        // Log error writing file and bail out.
    }
}
¹ Hardwired, as in compiled into the kernel. Changing the number of available fd slots required a recompilation ... and could result in less memory being available for other things. In the days when Unix commonly ran on 16-bit machines, these things really mattered.
UPDATE
The Java 7 way is more concise:
try (Writer w = new FileWriter(...)) {
    // write stuff to the file
} // the `w` resource is automatically closed
UPDATE 2
Apparently you can also encounter a "too many files open" while attempting to run an external program. The basic cause is as described above. However, the reason that you encounter this in exec(...) is that the JVM is attempting to create "pipe" file descriptors that will be connected to the external application's standard input / output / error.
For UNIX:
As Stephen C has suggested, changing the maximum file descriptor value to a higher value avoids this problem.
Try looking at your present file descriptor capacity:
$ ulimit -n
Then change the limit according to your requirements.
$ ulimit -n <value>
Note that this just changes the limits in the current shell and any child / descendant process. To make the change "stick" you need to put it into the relevant shell script or initialization file.
You're obviously not closing your file descriptors before opening new ones. Are you on Windows or Linux?
Although in most general cases the error quite clearly means that file handles have not been closed, I just encountered an instance with JDK 7 on Linux that, well... is sufficiently ****ed up to explain here.
The program opened a FileOutputStream (fos), a BufferedOutputStream (bos), and a DataOutputStream (dos). After writing to the DataOutputStream, the dos was closed, and I thought everything went fine.
Internally, however, the dos tried to flush the bos, which returned a disk-full error. That exception was eaten by the DataOutputStream, and as a consequence the underlying bos was not closed, hence the fos was still open.
At a later stage that file was renamed (from something with a .tmp suffix) to its real name. Thereby, the Java file descriptor tracking lost track of the original .tmp, yet it was still open!
To solve this, I had to first flush the DataOutputStream myself, catch the IOException, and then close the FileOutputStream myself.
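Roughly, the fix looked like this (a sketch; the stream setup mirrors the fos/bos/dos above, and the file name is illustrative):

FileOutputStream fos = new FileOutputStream("out.tmp");
DataOutputStream dos = new DataOutputStream(new BufferedOutputStream(fos));
try {
    // ... write data via dos ...
    dos.flush(); // a disk-full IOException surfaces here instead of being eaten
} finally {
    fos.close(); // release the underlying file descriptor no matter what
}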
I hope this helps someone.
If you're seeing this in automated tests: it's best to properly close all files between test runs.
If you're not sure which file(s) you have left open, a good place to start is the "open" calls which are throwing exceptions! 😄
If you have a file handle that should be open exactly as long as its parent object is alive, you could add a finalize method on the parent that calls close on the file handle, and call System.gc() between tests.
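A sketch of that finalizer safety net (a last resort; finalize() is deprecated in newer JDKs and is not guaranteed to run promptly):

class Parent {
    private final FileInputStream in;

    Parent(String path) throws IOException {
        in = new FileInputStream(path);
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            in.close(); // close the handle if the owner forgot to
        } finally {
            super.finalize();
        }
    }
}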
Recently, I had a program batch-processing files. I was certain I closed each file in the loop, but the error was still there.
Later, I resolved the problem by garbage-collecting eagerly every hundred files:
int index = 0;
while (/* more files to process */) {
    try {
        // do with outputStream...
    } finally {
        out.close();
    }
    if (index++ % 100 == 0)
        System.gc();
}
I have an Eclipse plugin which connects to a COM component using Jacob. But after I close the plugin entirely, the .exe file stays hanging around in the Windows process list.
I use ComThread.InitMTA(true) for initialization and make sure that SafeRelease() is called for every COM object I create before closing the app, and I call ComThread.Release() at the very end.
Have I left something undone?
Some further suggestions:
Move the call to ComThread.Release() into a finally block, otherwise the thread will remain attached if an exception is thrown.
Check that you are calling ComThread.InitMTA and ComThread.Release in every thread that uses a COM object. If you forget to do this in a worker thread then that thread will be attached automatically and never detached.
Avoid InitSTA and stick to InitMTA. Even when there is only one thread using COM, I have found InitSTA to be flaky. I don't know how JACOB's internal marshalling mechanism works but I have ended up with "ghost" objects that appear to be valid but do nothing when their methods are invoked.
Fortunately I have never yet needed to modify any code in the JACOB library.
I ran into this issue myself. After messing with InitMTA etc., I found a simple fix: when you start Java, add the following to your command line:
-Dcom.jacob.autogc=true
This causes the ROT class to use a WeakHashMap instead of a HashMap, and that solves the problem.
You can also use -Dcom.jacob.debug=true to see lots of informative debug spew and watch the size of the ROT map.
I had the same problem with the TD2JIRA converter. Eventually I had to patch one of the Jacob files to release the objects. After that, everything went smoothly.
The code in my client logout() method now looks like this:
try {
    Class<?> rot = ROT.class;
    Method clear = rot.getDeclaredMethod("clearObjects", new Class[]{});
    clear.setAccessible(true);
    clear.invoke(null, new Object[]{});
} catch (Exception ex) {
    ex.printStackTrace();
}
The ROT class wasn't accessible initially, AFAIR.
Update
The correct way to release resources in Jacob is to call
ComThread.InitSTA(); // or ComThread.InitMTA()
...
ComThread.Release();
The bad thing, though, is that sometimes it doesn't help. Even though Jacob calls the native release() method, the memory (not even Java heap memory, but JVM process memory) grows uncontrollably.