Why do I get "Too many open files" errors?

I find myself having to explicitly call System.gc() in my Groovy scripts to prevent errors like the one below. Why doesn't the garbage collector do this for me? Is there something I can do to cause it to garbage collect to prevent these errors (maybe JAVA_OPTS)?
Caught: java.util.concurrent.ExecutionException: org.codehaus.groovy.runtime.InvokerInvocationException: java.io.IOException: Cannot run program "ls": java.io.IOException: error=24, Too many open files
at groovyx.gpars.GParsPool.runForkJoin(GParsPool.groovy:305)
at UsageAnalyzer$_run_closure2_closure6.doCall(UsageAnalyzer.groovy:36)
at groovyx.gpars.GParsPool$_withExistingPool_closure1.doCall(GParsPool.groovy:170)
at groovyx.gpars.GParsPool$_withExistingPool_closure1.doCall(GParsPool.groovy)
at groovyx.gpars.GParsPool.withExistingPool(GParsPool.groovy:169)
at groovyx.gpars.GParsPool.withPool(GParsPool.groovy:141)
at groovyx.gpars.GParsPool.withPool(GParsPool.groovy:117)
at groovyx.gpars.GParsPool.withPool(GParsPool.groovy:96)
at UsageAnalyzer$_run_closure2.doCall(<removed>)
at UsageAnalyzer.run(<removed>)
This stack trace is from a parallel program but it happens in sequential programs as well.

As you're using Groovy, you can use the convenient methods such as File.withReader(), File.withWriter(), File.withInputStream(), and InputStream.withStream() to ensure resources get closed cleanly. This is less cumbersome than Java's try .. finally idiom, as there's no need to explicitly call close() or to declare a variable outside the try block.
For example, to read from a file:
File f = new File('/mumble/mumble/')
f.withReader { r ->
    // do stuff with reader here
}

Definitely look for any place you open files or streams and make sure you close them. It's often beneficial to wrap them like this:
final InputStream in = ...;
try
{
    // Do whatever.
}
finally
{
    // Will always close the stream, regardless of exceptions, return statements, etc.
    in.close();
}
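On Java 7 and later, try-with-resources gives you the same guarantee with less boilerplate. A minimal sketch (the file name here is just a placeholder):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadExample {
    public static void main(String[] args) throws IOException {
        // try-with-resources closes the stream automatically, even if an exception
        // is thrown while reading ("some-file.txt" is a placeholder path)
        try (InputStream in = new FileInputStream("some-file.txt")) {
            int b;
            while ((b = in.read()) != -1) {
                // Do whatever with the byte that was read.
            }
        }
    }
}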

Resource leak in Files.list(Path dir) when stream is not explicitly closed?

I recently wrote a small app that periodically checked the content of a directory. After a while, the app crashed because of too many open file handles. After some debugging, I found the error in the following line:
Files.list(Paths.get(destination)).forEach(path -> {
    // Do stuff
});
I then checked the javadoc (I probably should have done that earlier) for Files.list and found:
* <p> The returned stream encapsulates a {@link DirectoryStream}.
* If timely disposal of file system resources is required, the
* {@code try}-with-resources construct should be used to ensure that the
* stream's {@link Stream#close close} method is invoked after the stream
* operations are completed.
To me, "timely disposal" still sounds like the resources are going to be released eventually, before the app quits. I looked through the JDK (1.8.60) code but I wasn't able to find any hint about the file handles opened by Files.list being released again.
I then created a small app that explicitly calls the garbage collector after using Files.list like this:
while (true) {
    Files.list(Paths.get("/")).forEach(path -> {
        System.out.println(path);
    });
    Thread.sleep(5000);
    System.gc();
    System.runFinalization();
}
When I checked the open file handles with lsof -p <pid> I could still see the list of open file handles for "/" getting longer and longer.
My question now is: Is there any hidden mechanism that should eventually close no longer used open file handles in this scenario? Or are these resources in fact never disposed and the javadoc is a bit euphemistic when talking about "timely disposal of file system resources"?
If you close the Stream, Files.list() does close the underlying DirectoryStream it uses to stream the files, so there should be no resource leak as long as you close the Stream.
You can see where the DirectoryStream is closed in the source code for Files.list() here:
return StreamSupport.stream(Spliterators.spliteratorUnknownSize(it, Spliterator.DISTINCT), false)
                    .onClose(asUncheckedRunnable(ds));
The key thing to understand is that a Runnable is registered with the Stream using Stream::onClose, and that Runnable is called when the stream itself is closed. It is created by a factory method, asUncheckedRunnable, which produces a Runnable that closes the resource passed into it, translating any IOException thrown during close() into an UncheckedIOException.
You can make sure the DirectoryStream is closed by ensuring the Stream itself is closed, like this:
try (Stream<Path> files = Files.list(Paths.get(destination))) {
    files.forEach(path -> {
        // Do stuff
    });
}
Regarding the IDE part: Eclipse performs resource leak analysis based on local variables (and explicit resource allocation expressions), so you only have to extract the stream to a local variable:
Stream<Path> files = Files.list(Paths.get(destination));
files.forEach(path -> {
    // Do stuff
});
Then Eclipse will tell you
Resource leak: 'files' is never closed
Behind the scenes the analysis works with a cascade of exceptions:
All Closeables need closing
java.util.stream.Stream (which is Closeable) does not need closing
All streams produced by methods in java.nio.file.Files do need closing
This strategy was developed in coordination with the library team when they discussed whether or not Stream should be AutoCloseable.
List<String> fileList = null;
try (Stream<Path> list = Files.list(Paths.get(path.toString()))) {
    fileList = list.filter(Files::isRegularFile)
                   .map(Path::toFile)
                   .map(File::getAbsolutePath)
                   .collect(Collectors.toList());
} catch (IOException e) {
    logger.error("Error occurred while reading email files: ", e);
}

FileOutputStream: Stream closed

Solved. In short: the problem was that I wrote to an already closed FileOutputStream.
I noticed some strange semantics using the FileOutputStream class.
If I create a FileOutputStream using this code:
try {
    File astDumpFile = new File(dumpASTPath);
    if (!astDumpFile.exists()) {
        astDumpFile.createNewFile();
    }
    astDumpStream = new FileOutputStream(dumpASTPath);
} catch (IOException e) {
    dumpAST = false;
    //throw new IOException("Failed to open file for dumping AST: " + dumpASTPath);
    System.out.println("Failed to open file for dumping AST: " + dumpASTPath);
}
at the beginning of the program (astDumpStream is a member variable), and then later (~3 seconds later) write string data to the file, I get an IOException: stream closed:
try {
    String dotGraph = gpvisitor.getDotGraph();
    astDumpStream.write(dotGraph.getBytes("UTF8"));
    astDumpStream.flush();
    astDumpStream.close();
} catch (IOException e) {
    System.out.println("Failed to dump AST to file: " + e.getMessage());
    e.printStackTrace();
}
However, if I copy the exact code I use to create the FileOutputStream to directly before writing to it, it works as expected.
Now I wonder why I get that exception when I create the object earlier, but not when I create it directly before I use it.
EDIT: The exception:
java.io.IOException: Stream Closed
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:305)
at MyClass.function(MyClass.java:208)
I just noticed that even though I get an exception, some data was still written to the file. Interestingly, the first line is written completely, then all following lines except the last line are missing.
If I replace the written String dotGraph with something shorter everything is written correctly, however I still get that exception.
EDIT: Environment Information:
[~]> lsb_release -a
Distributor ID: Debian
Description: Debian GNU/Linux testing (wheezy)
Release: testing
Codename: wheezy
[~]> java -version
java version "1.7.0_09"
Java(TM) SE Runtime Environment (build 1.7.0_09-b05)
Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode)
The only reason to get an IOException complaining that the stream is closed is because the stream was closed. You'll have to trace through your code to find out where that's happening. Some not-so-obvious places include calls into other methods and finally blocks of try statements. Another thing to look for is reassignment of the variable astDumpStream to a different stream (that was closed before the IOException was raised—possibly even before the first assignment to astDumpStream).
The time doesn't seem relevant unless you have a separate thread that might close the stream after a delay.
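For illustration, a hypothetical pattern (not your actual code; the class and file path are invented) that produces exactly this exception on the second write:

import java.io.FileOutputStream;
import java.io.IOException;

class DumpExample {
    private final FileOutputStream astDumpStream;

    DumpExample(String dumpPath) throws IOException {
        astDumpStream = new FileOutputStream(dumpPath);   // assumed path
    }

    void writeHeader() throws IOException {
        try {
            astDumpStream.write("digraph G {\n".getBytes("UTF8"));
        } finally {
            astDumpStream.close();   // closes the member stream as a side effect
        }
    }

    void writeBody(String dotGraph) throws IOException {
        // Fails with java.io.IOException: Stream Closed, because writeHeader() already closed it
        astDumpStream.write(dotGraph.getBytes("UTF8"));
    }
}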
The only way this can happen is if the close() function gets called more than once. My guess is that for some reason the second block of code is being called more than once.
To prevent indentation errors, there are two good pieces of advice I've received:
Always indent consistently. Preferably use a tool that does this for you (like Eclipse).
Always use curly braces, even if you don't think you need them. This helps prevent quite a few minor bugs that take forever to find, so the extra half-second it takes to type each one is more than made up by the hours you don't spend looking for these bugs.
To second Tedd: in case you happen to use a nested try-with-resources block and use the stream outside of it, you can run into this situation as well, because once control comes out of the nested try-with-resources block the stream will be closed.
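A hypothetical sketch of that pitfall (the file name is invented): the stream reference escapes the inner try-with-resources block, which closes it, so the later write fails.

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

class NestedTryPitfall {
    public static void main(String[] args) throws IOException {
        OutputStream out;
        try (FileOutputStream fos = new FileOutputStream("dump.txt")) {
            out = fos;   // the reference escapes the try-with-resources block
            out.write("first line\n".getBytes("UTF8"));
        }
        // fos was closed when the block above exited, so this write fails with
        // java.io.IOException: Stream Closed
        out.write("second line\n".getBytes("UTF8"));
    }
}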

git java wrapper - git pull never ends

I'm creating a simple Java wrapper for the git executable that I want to use in my app.
A small code example:
public static void main(String[] args) {
    String gitpath = "C:/eclipse/git/bin/git.exe";
    File folder = new File("C:/eclipse/teste/ssadasd");
    try {
        folder.mkdirs();
        Runtime.getRuntime().exec(
                gitpath + " clone git@192.168.2.15:test.git", null,
                folder);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
The code simply never finishes executing; it seems to get stuck inside exec.
If I run the git clone via the command line, it works as expected.
If I try another repository, e.g. from GitHub, it works too.
Does anyone have an idea of what is going on here?
Thanks in advance.
This isn't a direct answer to your question, but you may want to take a look at JGit, which is a direct Java implementation of Git operations (no wrapping of command-line git). JGit gets a lot of use and stabilization work as it is the foundation for EGit (Eclipse Git integration).
Runtime.getRuntime().exec returns a Process object that you can use to interact with the process and see what's going on. My suspicion is that you just need to do something like this:
Process p = Runtime.getRuntime().exec(
        gitpath + " clone git@192.168.2.15:test.git", null,
        folder);
p.waitFor();
If not, you can also read from getErrorStream() or getInputStream() on the process to see what it's writing out; that might be helpful in debugging.
Runtime.exec() can cause hanging under various circumstances - see this article which quotes the Javadoc, which says (in JDK 7):
Because some native platforms only provide limited buffer size for standard input and output streams, failure to promptly write the input stream or read the output stream of the subprocess may cause the subprocess to block, and even deadlock.
The article gives some example solutions, which consume the output and error streams, although I think the ProcessBuilder class was introduced after the article was written, so may be more satisfactory: the newer Javadoc adds:
Where desired, subprocess I/O can also be redirected using methods of the ProcessBuilder class.
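A minimal sketch of that approach, using ProcessBuilder to merge stderr into stdout and draining the output before waiting (the git path, folder and repository URL are the ones from the question):

import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;

class GitCloneExample {
    public static void main(String[] args) throws IOException, InterruptedException {
        File folder = new File("C:/eclipse/teste/ssadasd");
        folder.mkdirs();

        ProcessBuilder pb = new ProcessBuilder(
                "C:/eclipse/git/bin/git.exe", "clone", "git@192.168.2.15:test.git");
        pb.directory(folder);
        pb.redirectErrorStream(true);   // merge stderr into stdout

        Process p = pb.start();
        // Drain the combined output so the subprocess cannot block on a full buffer
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);
            }
        }
        int exitCode = p.waitFor();
        System.out.println("git exited with code " + exitCode);
    }
}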

JUnit tests fail when creating new Files

We have several JUnit tests that rely on creating new files and reading them. However there are issues with the files not being created properly. But this fault comes and goes.
This is the code:
@Test
public void test_3() throws Exception {
    // Deletes files in tmp test dir
    File tempDir = new File(TEST_ROOT, "tmp.dir");
    if (tempDir.exists()) {
        for (File f : tempDir.listFiles()) {
            f.delete();
        }
    } else {
        tempDir.mkdir();
    }
    File file_1 = new File(tempDir, "file1");
    FileWriter out_1 = new FileWriter(file_1);
    out_1.append("# File 1");
    out_1.close();
    File file_2 = new File(tempDir, "file2");
    FileWriter out_2 = new FileWriter(file_2);
    out_2.append("# File 2");
    out_2.close();
    File file_3 = new File(tempDir, "fileXXX");
    FileWriter out_3 = new FileWriter(file_3);
    out_3.append("# File 3");
    out_3.close();
    ....
The failure is that the second file, file_2, sometimes never gets created. When we then try to write to it, a FileNotFoundException is thrown.
If we run only this testcase, everything works fine.
If we run this testfile with some ~40 testcases, it can both fail and work depending on the current lunar cycle.
If we run the entire testsuite, consisting of some 10*40 testcases, it always fails.
We have tried
adding sleeps (5 sec) after new File, which changed nothing
adding a while loop until file_2.exists() is true, but the loop never stopped
catching SecurityException, IOException and even Throwable around the new File(..), but caught nothing
At one point we got all files to be created, but file_2 was created before file_1 and a test that checked creation time failed.
We've also tried adding file_1.createNewFile() and it always returns true.
So what is going on? How can we make tests that depend on actual files and always be sure they exist?
This has been tested with both Java 1.5 and 1.6, on Windows 7 and Linux. The only difference that can be observed is that sometimes a similar test case before this one fails, and sometimes it is file_1 that isn't created instead.
Update
We tried a new variation:
File file_2 = new File(tempDir, "file2");
while (!file_2.canRead()) {
    Thread.sleep(500);
    try {
        file_2.createNewFile();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
This results in a lot of exceptions of the type:
java.io.IOException: Access is denied
at java.io.WinNTFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:883)
... but eventually it works, the file is created.
Are there multiple instances of your program running at once?
Check for any extra instances of javaw.exe running. If multiple programs have handles to the same file at once, things can get very wonky very quickly.
Do you have antivirus software or anything else running that could be getting in the way of file creation/deletion, by handle?
Don't hardcode your file names, use random names. It's the only way to abstract yourself from the various external situations that can occur (multiple access to the same file, permissions, file system error, locking problems, etc...).
One thing for sure: using sleep() or retrying is guaranteed to cause weird errors at some point in the future, avoid doing that.
I did some googling, and this Lucene bug and this board question seem to indicate that there could be an issue with file locking and other processes using the file.
Since we are running this on ClearCase it seems plausible that ClearCase does some indexing or something similar when the files are being created. Adding loops that repeat until the file is readable solved the issue, so we are going with that. Very ugly solution though.
Try File#createTempFile; this at least guarantees that there are no other files by the same name that might still hold a lock.
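A minimal sketch of that approach (the names are illustrative; try/finally is used since the question mentions Java 1.5/1.6):

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

class TempFileExample {
    public static void main(String[] args) throws IOException {
        // createTempFile generates a unique name, so concurrent tests cannot collide
        File file_1 = File.createTempFile("test-", ".tmp");
        file_1.deleteOnExit();   // clean up when the JVM exits

        FileWriter out_1 = new FileWriter(file_1);
        try {
            out_1.append("# File 1");
        } finally {
            out_1.close();
        }
        System.out.println("wrote " + file_1.getAbsolutePath());
    }
}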

How to delete files from a USB stick? Using File.delete() doesn't work

After creating a file and populating it with a thread, Java can't delete the file if it is on a USB stick; when I try on the local disk, it deletes the file fine!
Here is the part of the code that creates the file and then, after an exception, tries to delete it.
if (canExport && fileCreated)
{
    //Create the file
    this.file.createNewFile();
    //Export the data
    this.run();
    if (possible == false) { // in case writing fails delete the file created.
        file.delete();
        Export novaTentativa = new Export(plan);
        novaTentativa.fileCreator(plan);
    }
}
The file is created when the this.file.createNewFile() acts.
When this.run() runs, there are a lot of methods that populate the data and handle exceptions; if an exception is caught, it sets the global variable possible to false, so I know the file was created but is empty on the USB stick. After that I try to delete it with file.delete();
You mention that you're trying to delete the file "after an exception" - consequently, your approach is on the wrong track and isn't going to work as-is.
If an exception is thrown by earlier methods (e.g. the createNewFile() call), then that exception will immediately propagate upwards, so your file.delete() call won't get a chance to execute. You'd need to wrap the earlier statements in a try block, and put the delete call in the corresponding catch or finally block in order for it to execute when an exception was thrown.
Here's an example of what you might try to do:
if (canExport && fileCreated)
{
    //Create the file
    this.file.createNewFile();
    try
    {
        this.run();
    }
    catch (IOException e)
    {
        // Try to clean up; File.delete() returns false rather than throwing,
        // so it won't mask the real exception
        file.delete();
        // Rethrow the actual exception from run() so callers can handle it
        throw e;
    }
}
An alternative approach rather than catching IOExceptions would be to have a finally block (which is always run) and then check a condition there, such as your possible flag.
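A minimal sketch of that finally-based variant (assuming possible is the flag your run() method sets to false on failure):

if (canExport && fileCreated)
{
    // Create the file first; if this throws, there is nothing to delete yet
    this.file.createNewFile();
    try
    {
        this.run();
    }
    finally
    {
        // Always runs; only delete the file if run() reported a failure
        if (!possible)
        {
            file.delete();
        }
    }
}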
Note as well that I start the try block after the call to createNewFile() - if an exception is thrown in the create file call then the file won't exist to delete at all!
As a final note, adding "a lot of code that asks for the thread to start over" in your error-handling block is probably not the best design. It would be more appropriate to simply consider recovering from IO problems here, and let the exception bubble up to the top and cause the thread/runnable to die. The logic around restarting tasks and/or resurrecting threads would be better positioned in the class that started the threads in the first place (e.g. a thread pool/task executor/etc.). Scattering that logic throughout the code makes it harder to see what any individual class is doing (not to mention that having a class marshal resources to resurrect itself just seems wrong from an OO standpoint).
Try explicitly stating the drive letter, path and folder when accessing the USB device to create, write, read or delete the file. If that does not work, then it is possible that only a specific operating system utility or proprietary utility can delete the file.
How certain are you that you closed the file when the write failed? I'll bet money that you are missing a finally block somewhere in this.run(). That would result in exactly the behavior you describe: delete() will fail if the file is open (you should check its return value; File.delete() doesn't throw an exception if it is unable to delete the file).
If you want to test this, replace this.run() with a super, crazy simple implementation that writes 100 bytes to the file, sets 'possible' to false, then returns. If the file still won't delete, post the code you are using for this simplified version of run() and maybe someone can spot what's going on.
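A hypothetical version of that simplified run() (possible and this.file are the fields from your class; note the finally block that closes the stream, which is exactly what I suspect is missing):

public void run() {
    FileOutputStream out = null;
    try {
        out = new FileOutputStream(this.file);
        out.write(new byte[100]);   // write 100 bytes
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        possible = false;           // simulate a failed export
        if (out != null) {
            try {
                out.close();        // without this, delete() on the file can fail
            } catch (IOException ignored) {
            }
        }
    }
}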
