Java IOException "Too many open files"

I'm doing some file I/O with multiple files (writing to 19 files, it so happens). After writing to them a few hundred times I get the Java IOException: Too many open files. But I actually have only a few files opened at once. What is the problem here? I can verify that the writes were successful.

On Linux and other UNIX / UNIX-like platforms, the OS places a limit on the number of open file descriptors that a process may have at any given time. In the old days, this limit used to be hardwired¹, and relatively small. These days it is much larger (hundreds / thousands), and subject to a "soft" per-process configurable resource limit. (Look up the ulimit shell builtin ...)
Your Java application must be exceeding the per-process file descriptor limit.
You say that you have 19 files open, and that after a few hundred times you get an IOException saying "too many files open". Now this particular exception can ONLY happen when a new file descriptor is requested; i.e. when you are opening a file (or a pipe or a socket). You can verify this by printing the stacktrace for the IOException.
Unless your application is being run with a small resource limit (which seems unlikely), it follows that it must be repeatedly opening files / sockets / pipes, and failing to close them. Find out why that is happening and you should be able to figure out what to do about it.
FYI, the following pattern is a safe way to write to files that is guaranteed not to leak file descriptors.
Writer w = new FileWriter(...);
try {
    // write stuff to the file
} finally {
    try {
        w.close();
    } catch (IOException ex) {
        // Log error writing file and bail out.
    }
}
¹ Hardwired, as in compiled into the kernel. Changing the number of available fd slots required a recompilation ... and could result in less memory being available for other things. In the days when Unix commonly ran on 16-bit machines, these things really mattered.
UPDATE
The Java 7 way is more concise:
try (Writer w = new FileWriter(...)) {
    // write stuff to the file
} // the `w` resource is automatically closed
UPDATE 2
Apparently you can also encounter a "too many files open" error while attempting to run an external program. The basic cause is as described above. However, the reason you encounter this in exec(...) is that the JVM is attempting to create the "pipe" file descriptors that will be connected to the external application's standard input / output / error.
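As an illustration, here is a hedged sketch of cleaning up after each exec(...) call; the method and command are placeholders, and the point is simply that all three pipe descriptors get closed and the child gets reaped:

import java.io.IOException;
import java.io.InputStream;

public class ExecCleanup {
    static void runAndReap(String command) throws IOException, InterruptedException {
        Process p = Runtime.getRuntime().exec(command);
        p.getOutputStream().close();        // we send the child no input
        InputStream out = p.getInputStream();
        InputStream err = p.getErrorStream();
        try {
            while (out.read() != -1) { }    // drain stdout so the child cannot block
        } finally {
            out.close();                    // these are the "pipe" descriptors
            err.close();                    // mentioned above
        }
        p.waitFor();                        // reap the child process
    }
}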

For UNIX:
As Stephen C has suggested, raising the maximum file descriptor limit avoids this problem.
Try looking at your present file descriptor capacity:
$ ulimit -n
Then change the limit according to your requirements.
$ ulimit -n <value>
Note that this just changes the limits in the current shell and any child / descendant process. To make the change "stick" you need to put it into the relevant shell script or initialization file.
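If you want to watch descriptor usage from inside the JVM, here is a sketch using the com.sun.management extension; this MXBean is HotSpot-specific and UNIX-only, so treat its availability as an assumption:

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdUsage {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
            // Descriptors currently held by this process vs. the soft limit.
            System.out.println("open fds: " + unixOs.getOpenFileDescriptorCount());
            System.out.println("max fds:  " + unixOs.getMaxFileDescriptorCount());
        }
    }
}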

You're obviously not closing your file descriptors before opening new ones. Are you on Windows or Linux?

Although in most general cases the error is quite clearly that file handles have not been closed, I just encountered an instance with JDK 7 on Linux that is, well... sufficiently ****ed up to be worth explaining here.
The program opened a FileOutputStream (fos), a BufferedOutputStream (bos) and a DataOutputStream (dos). After writing to the DataOutputStream, the dos was closed and I thought everything went fine.
Internally, however, the dos tried to flush the bos, which failed with a disk-full error. That exception was swallowed by the DataOutputStream, and as a consequence the underlying bos was not closed, hence the fos was still open.
At a later stage that file was then renamed from (something with a .tmp) to its real name. Thereby, the Java file descriptor trackers lost track of the original .tmp, yet it was still open!
To solve this, I had to first flush the DataOutputStream myself, catch the IOException, and close the FileOutputStream myself.
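For illustration, a minimal sketch of that workaround (path and payload are hypothetical): flush the DataOutputStream explicitly so a write error surfaces where you can catch it, and close the bottom FileOutputStream in a finally of its own:

import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class SafeStackedClose {
    static void write(String path) throws IOException {
        FileOutputStream fos = new FileOutputStream(path);
        try {
            DataOutputStream dos = new DataOutputStream(new BufferedOutputStream(fos));
            dos.writeInt(42); // hypothetical payload
            dos.flush();      // a "disk full" IOException surfaces here ...
            dos.close();      // ... instead of being swallowed inside close()
        } finally {
            fos.close();      // the bottom stream is closed no matter what
        }
    }
}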
I hope this helps someone.

If you're seeing this in automated tests: it's best to properly close all files between test runs.
If you're not sure which file(s) you have left open, a good place to start is the "open" calls which are throwing exceptions! 😄
If you have a file handle that should be open exactly as long as its parent object is alive, you could add a finalize method on the parent that calls close on the file handle, and call System.gc() between tests.
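A sketch of that last-resort pattern, with hypothetical names (finalizers are only a test-time safety net, not a fix, and are not guaranteed to run promptly):

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

class ParentResource {
    private final FileInputStream in;

    ParentResource(File f) throws IOException {
        in = new FileInputStream(f);
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            in.close(); // safety net in case close() was never called explicitly
        } finally {
            super.finalize();
        }
    }
}

Then between tests, call System.gc() followed by System.runFinalization().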

Recently, I had a program batch-processing files. I was certainly closing each file in the loop, but the error was still there.
Later, I resolved the problem by garbage-collecting eagerly every hundred files or so:
int index = 0;
while (hasMoreFiles()) { // hypothetical loop condition of the batch job
    OutputStream out = openNextFile(); // hypothetical helper opening the next file
    try {
        // write to out ...
    } finally {
        out.close();
    }
    if (index++ % 100 == 0) {
        System.gc();
    }
}

Related

File cannot be deleted because the JVM holds it - a tricky one

My post got a little too long, sorry. Here is a summary:
A file on disk cannot be deleted ("the JVM holds the file" error), both when deleting from the Java code and when trying to manually delete the file from Windows.
All streams to that file are closed and set to null. All File objects are set to null.
The program does nothing at that point, but waiting 30 minutes allows me to delete the file from Windows. Weird. Is the file no longer used by Java? Plus, since nothing happens in the program, it cannot be some stream I forgot (and I triple-checked that nothing is open).
Invoking System.gc() seemed to work when files were small. It did not help when they got to about 20 MB.
[EDIT2] - I tried writing some basic code to explain, but it's tricky. I am sorry, I know it's difficult to answer like that. I can however write how I open and close streams, of course:
BufferedWriter bw = new BufferedWriter(new FileWriter(new File("C:\\folder\\myFile.txt")));
for (int i = 0; i < 10; i++)
{
    bw.write("line " + i);
    bw.newLine();
}
bw.close();
bw = null;
If I've used a file object:
File f = new File("C:\\folder\\myFile.txt");
// use it...
f = null;
Basic code, I know. But this is essentially what I do.
I know for a fact I've closed all streams in this exact way.
I know for a fact that nothing happens in the program in that 30-minutes interval in which I cannot delete the file, until I somehow magically can.
Thank you for your input even without the coherent code; I appreciate it.
Sorry for not providing any specific code here, since I can't pinpoint the problem (not exactly specific-code related). In any case, here is the thing:
I have written a program which reads, writes and modifies files on disk. For several reasons, the handling of the read/write is done in a different thread, which is constantly operating.
At some point, I terminate the "read/write" thread, keeping only the main thread - it waits for input from a socket, totally unrelated to the file, and does nothing. Then, I try to delete the file (using File.delete(); I even tried the java.nio.file.Files delete option).
The thing is - and it's very weird - sometimes it works, sometimes it doesn't. Even manually, going to the folder and trying to delete the file via Windows gives me the "The file is open by the JVM" message.
Now, I am well aware that keeping references from all kinds of streams to the file prevents me from deleting it. Well past that by now :)
I have made sure that all streams are closed. I even set their values to null, including any File objects I have used (even though it shouldn't make any difference). All set to null, all closed. And the thread which generates all of them - the "read/write" thread - well, it's terminated, since it got to the end of its run() method.
Usually, if I wait about 30 minutes while the JVM still operates, I can delete the file manually from Windows. The error magically disappears. When the JVM is closed, I can always delete the file right away.
I am lost here. Tried specifically invoking System.gc() before trying to delete the file, even called it like 10 times (not that it should matter). Sometimes it helped, but on other occasions, for example, when the file got larger (say 20MB), that didn't help.
What am I missing here?
Obviously, this couldn't be my implicit fault (not closing some stream), since the read/write thread is dead, the main thread awaits something unrelated (so the program is at a "standstill"), I have explicitly closed all streams, even nullified the references (inStream = null), and invoked the garbage collector.
What am I missing? Why is the file "deletable" only after 30 minutes (nothing happens at that time - not something in my code)? Am I missing some soft reference / garbage collection thingy?
What you're doing just calls for problems. You say that "if an IOException occurred, it is printed immediately", and that may be true, but given that something inexplicable is happening, it's better to doubt it.
I'd first ensure that everything gets always closed, and then I'd care about related logic (logging, exiting, ...).
Anyway, what you did is not how resources should be managed. The answer above is not exactly correct either. try-with-resources is (besides @lombok.Cleanup) about the only way that clearly shows nothing ever gets left open. Anything else is more complicated and more error-prone. I'd strongly recommend using it everywhere. This may be quite some work, but it also forces you to re-inspect all the critical code pieces.
Things like nullifying references and calling the GC should not help... and if they seem to, it may be by chance.
Some ideas:
Are you using memory mapped files?
Are you sure System.exit is not disabled by a security manager?
Are you running an antivirus? They love to scan files just after they get written.
Btw., locking files is one reason why the WOW never started for me. Sometimes the locks persisted long after the culprit was gone, at least according to tools I could use.
Are you closing your streams in a try...finally or try (A a = new A()) block? If not, the streams may not be closed.
I would strongly recommend using either Automatic Resource Block Management ( try(A a = new A()) ) or a try...finally block for all external resources.
try (BufferedWriter br = new BufferedWriter(new FileWriter(new File("C:\\folder\\myFile.txt")))) {
    for (int i = 0; i < 10; i++) {
        br.write("line " + i);
        br.newLine();
    }
}

How to open several MappedByteBuffer in parallel in Java

I need to write an algorithm which downloads parts of a file from different locations and merges them all into a single file on my local drive. The file may be huge (several gigabytes), but each part is small.
Each part has a header which says the file it's part of and also the offset byte where it's located in the file.
Every part is downloaded in its own thread.
Just after the header has been decoded, I open a MappedByteBuffer:
MappedByteBuffer memoryMappedFile;
try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
    memoryMappedFile = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, offset, mappedSize);
}
The problem is that several threads could execute this code at the same time (while trying to map different parts of the same file), and it causes an IOException (thrown by the above map method) with the message "This operation cannot be performed on a file having an open user mapped section" (a translation from a localized message).
If I synchronize the whole block, the exception is not thrown. So I guess it's OK to use several MappedByteBuffers on the same file as long as they're not being opened at the same time. Is it possible to achieve this result without synchronizing this part, and if not, is there a better solution?
The program runs on Windows 8.1.
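For reference, a minimal sketch of the synchronized workaround described above; the shared lock object is an assumption (use one lock per target file so unrelated files do not serialize each other's map(...) calls):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;

public class PartMapper {
    private static final Object MAP_LOCK = new Object(); // hypothetical per-file lock

    static MappedByteBuffer mapPart(Path file, long offset, long mappedSize) throws IOException {
        synchronized (MAP_LOCK) {
            try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
                return raf.getChannel().map(FileChannel.MapMode.READ_WRITE, offset, mappedSize);
            }
        }
    }
}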

How to wait for a Windows process to finish before opening a file in Java

I have implemented a listener that notifies if we receive a new file in a particular directory. This is implemented by polling and using a TimerTask.
Now the program is set up so that once it receives a new file, it calls another Java program that opens the file and validates whether it is the correct file. My problem is that, since the polling happens a specified number of seconds later, a case can arise in which a file is still being copied into that directory and hence is locked by Windows.
This throws an IOException, since the other Java program that tries to open it for validation cannot ("File is being used by another process").
Is there a way I can know when Windows has finished copying, and then call the second program to do the validations from Java?
I will be more than happy to post code snippets if someone needs them in order to help.
Thanks
Thanks a lot for all the help, I was having the same problem with WatchEvent.
Unfortunately, as you said, file.canRead() and file.canWrite() both return true even if the file is still locked by Windows. So I discovered that if I try to "rename" it with the same name, I can tell whether Windows is still working on it. So this is what I did:
while (!sourceFile.renameTo(sourceFile)) {
    // Cannot rename the file to itself; Windows is still working on it.
    Thread.sleep(10);
}
This one is a bit tricky. It would have been a piece of cake if you could control or at least communicate with the program copying the file, but that won't be possible with Windows, I guess. I had to deal with a similar problem a while ago with SFU software; I resolved it by looping on trying to open the file for writing until it becomes available.
To avoid high CPU usage while looping, the check can be done with an exponentially increasing delay.
EDIT: A possible solution:
File fileToCopy = new File(pathname); // pathname as in the original question
int sleepTime = 1000; // Sleep 1 second
while (!fileToCopy.canWrite()) {
    // Cannot write to the file; Windows is still working on it.
    Thread.sleep(sleepTime);
    sleepTime *= 2; // Double the sleep time (not really exponential, but it will do the trick)
    if (sleepTime > 30000) {
        // Cap the sleep time to ensure we are not sleeping forever :)
        sleepTime = 30000;
    }
}
// Here we have access to the file; go process it
processFile(fileToCopy);
I think you can create the File object and then use canRead or canWrite to know whether the file is ready to be used by the other Java program.
http://docs.oracle.com/javase/6/docs/api/java/io/File.html
Another option is to try to open the file in the first program and, if it throws the exception, not call the other Java program. But I'd recommend the File option above.

Java - (Android) Reuse a process after flushing its OutputStream

I'm trying to do this on Android:
Process p = Runtime.getRuntime().exec("sh");
DataOutputStream out = new DataOutputStream(p.getOutputStream());
out.writeBytes("something useful\n");
out.close();
p.waitFor();
out = new DataOutputStream(p.getOutputStream());
out.writeBytes("something useful\n");
out.close();
p.waitFor();
The second time I execute out.writeBytes(...), I get a Java IOException: "Bad file number".
My app has to execute several native programs, but must always use the same process.
Anyone know why this does not work?
Note that the shell is not part of the public SDK (note it is not documented anywhere in the SDK documentation), so this code is in effect relying on private APIs.
Also this puts you outside of the normal application model -- we have no guarantee what will happen to a process you have forked and is not being managed by the platform. It may get killed at any time.
This is also a very inefficient way to do things, compared to doing whatever the command is doing in your own process. And starting a separate process for a command won't let it do anything more than you can, because it still runs as your uid.
So basically... for 99.99% of apps please don't do this. If you are writing a terminal app... well, okay, only geeks are going to care about that anyway, and it isn't going to be of much use since it runs as your uid, but okay. But otherwise, please no. :)
When you call out.close(), it will automatically call close() on the OutputStream of your process.
Each time you call p.getOutputStream() you get the same OutputStream, so on your second use of out, p.getOutputStream() returns an already closed OutputStream.
Basically with your code, you don't really need to close the first DataOutputStream.
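A minimal sketch of the alternative: flush between commands instead of closing, and close only once you are done (the commands are placeholders):

import java.io.DataOutputStream;
import java.io.IOException;

public class ShellSession {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process p = Runtime.getRuntime().exec("sh");
        DataOutputStream out = new DataOutputStream(p.getOutputStream());
        out.writeBytes("echo first\n");  // placeholder command
        out.flush();                     // flush keeps the pipe open, unlike close()
        out.writeBytes("echo second\n"); // placeholder command
        out.flush();
        out.writeBytes("exit\n");        // tell the shell to terminate ...
        out.close();                     // ... then it is safe to close the pipe
        p.waitFor();                     // ... and wait for the shell to exit
    }
}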
Sources:
Sources of DataOutputStream extends FilterOutputStream
Sources of FilterOutputStream.close()

Java keeps lock on files for no apparent reason

Despite closing streams in finally clauses I seem to constantly run into cleaning up problems when using Java. File.delete() fails to delete files, Windows Explorer fails too. Running System.gc() helps sometimes but nothing short of terminating the VM helps consistently and that is not an option.
Does anyone have any other ideas I could try? I use Java 1.6 on Windows XP.
UPDATE: FLAC code sample removed, the code worked if I isolated it.
UPDATE:
More info: this happens in Apache Tomcat. Commons FileUpload is used to upload the file and could be the culprit. I also use Runtime.exec() to execute LAME in a separate process to encode the file, but that seems unlikely to cause this, since Process Explorer clearly indicates that java.exe has a RW lock on the file, and LAME terminates fine.
UPDATE: I am working with the assumption that there is a missing close() or a close() that does not get called somewhere in my code or external library. I just can't find it!
The code you posted looks good - it should not cause the issues you are describing. I understand you posted just a piece of the code you have - can you try extracting just this part to a separate program, run it and see if the issue still happens?
My guess is that there is some other place in the code that does new FileInputStream(path); and does not close the stream properly. You might be just seeing the results here when you try to delete the file.
I assume you're using jFlac. I downloaded jFlac 1.3 and tried your sample code on a FLAC file freshly downloaded from the Internet live music archive. For me, it worked. I even monitored it with Process Explorer and saw the file handles be opened and then released. Is your test code truly as simple as what you gave us, or is that a simplified version of your code? For me, once close() was called, the handle was released and the file was subsequently successfully deleted.
Try changing your infinite loop to:
File toDelete = new File(path);
if (!toDelete.delete()) {
    System.out.println("Could not delete " + path);
    System.out.println("Does it exist? " + toDelete.exists());
}
or if you want to keep looping, then put a 1 second sleep between attempts to delete the file. I tried this with JDK6 on WinXP Pro.
Don't forget to put a try/catch around your close() and log errors if the close throws an exception.
Make sure you have your close calls in the finally block, not in the try block. If there is no try/finally because the method throws the exception, then add a try/finally and put the close in there.
Look at the Windows Task Manager. On the Processes tab, add the "Handles" column (under the View menu). Watch to see if the handles keep going up without ever dropping.
Use a profiler to see if you have Stream/Reader/Writer objects around that you do not think you should have.
EDIT:
Thanks for posting the code... off to see it. One thing - your close methods are not both guaranteed to execute - the first close might throw and then the second won't run.
EDIT 2:
final WavWriter wavWriter = new WavWriter(os);
FLACDecoder decoder = new FLACDecoder(is);
The above two lines will presumably cause the streams to be kept in instance variables.
Also, do you do any wrapping of the streams passed into the above constructors? If so you might have to close the streams via the wrappers.
Your code sample should definitely work. In fact, I ran it on Java 1.6/Vista with jFlac 1.3 and the source file was deleted, without any looping.
I'm guessing in your case another process is keeping the file open, perhaps a desktop search indexer or an antivirus. You can use procexp to find which process is actually holding onto the file.
Isn't that an empty while loop?
You have:
try
{
    ...code
}
finally
{
}
while (something);
Put some whitespace in there, and you actually have:
try
{
    ...code
}
finally
{
}
while (something)
;
Your while loop isn't related to your try/finally. If your original try statement fails and the file isn't created, that while loop will never complete, because the try/finally will never execute a second time.
Did you intend to make that a do { all your code } while (your while condition)?
Because that isn't what you have there.
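Presumably the intended shape was something like this sketch:

do
{
    // ... all your code: open the file, write to it, close the streams in a finally ...
}
while (something); // i.e. loop until your delete condition is satisfied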
EDIT to clarify:
My suggestion would be to change your while loop so it reports more info about why it can't delete:
while (!file.delete())
{
    if (!file.exists())
        break; // the file doesn't even exist, of course delete will fail
    if (!file.canRead())
        break; // the file isn't readable, delete will fail
    if (!file.canWrite())
        break; // the file isn't writable, delete will fail
}
Because if delete fails once, it's just going to fail over and over again; of course it's going to hang there. You aren't changing the state of the file in the loop.
Now that you've added other info, like Tomcat, etc.: is this a permissions issue? Are you trying to write a file that the user the Tomcat VM runs as (nobody?) can't create, or delete a file that the Tomcat process can't delete?
If Process Explorer etc. say Java has a lock on the file, then something still has an open stream using it. Someone might not have properly called close() on whatever streams are writing to the file.
If you are out of clues and ideas: in Cygwin, cd to your Java source root and run something like:
find . -name '*.java' -print0 | xargs -0 grep "new.*new.*putStream"
It might provide a few suspects...
Another thing to try since you're using Tomcat: in your Context Descriptor (typically Tomcat/conf/Catalina/localhost/your-context.xml), you can set antiResourceLocking="true", which is designed to "avoid resource locking on Windows". The default (if you don't specify it) is false. Worth a try.
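For reference, a sketch of what that looks like in the descriptor (the attribute goes on the Context element; everything else in your descriptor stays as-is):

<!-- Tomcat/conf/Catalina/localhost/your-context.xml -->
<Context antiResourceLocking="true">
    <!-- ... your existing settings ... -->
</Context>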
