Java file management on process forking

My guesses were wrong, and had nothing to do with the answer. This question is no longer valid. See my answer. Sorry about this poor question.
Tl;dr Version
Why can't a Java process find a certain file until the process that created that file has finished executing? Can this be worked around?
Longer Version
I have an application that needs to restart itself (it just needs to, okay?). The first time it runs, it creates a temporary file and serializes an object into it. This is done with a FileOutputStream/ObjectOutputStream combination, as shown below:
private static File serializeBootstrapInfoIntoFile(
        final BootstrapInfo info) throws IOException {
    final File tempFile = File.createTempFile("BobBootstrapInfo", null);
    final FileOutputStream fos = new FileOutputStream(tempFile);
    final ObjectOutputStream oos = new ObjectOutputStream(fos);
    oos.writeObject(info);
    // just being thorough
    oos.flush();
    oos.close();
    fos.flush();
    fos.close();
    return tempFile;
}
After this, I create another Java process with a Runtime.exec() call, to which I pass the absolute path of the returned tempFile as a system property. The other Java process should then open the file and deserialize the contained object. The first process stays alive until the spawned process exits, since it handles the new process's output and error streams.
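For reference, the child's side would look roughly like the sketch below. This is a hypothetical reconstruction, since the question does not show the deserialization code; the property name cache.location is taken from the exec arguments quoted further down.

// Hypothetical child-side counterpart: read the path from the system
// property and deserialize the BootstrapInfo object from that file.
private static BootstrapInfo deserializeBootstrapInfoFromFile()
        throws IOException, ClassNotFoundException {
    final File file = new File(System.getProperty("cache.location"));
    final ObjectInputStream ois =
            new ObjectInputStream(new FileInputStream(file));
    try {
        return (BootstrapInfo) ois.readObject();
    } finally {
        ois.close();
    }
}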
The problem, however, is that the second process doesn't seem to find the file, and always dies with a FileNotFoundException during deserialization (I've confirmed this with file.exists()).
When I check manually afterwards, the file does indeed exist. Also, if I manually run the exact same command line that is passed to Runtime.exec(), it runs fine. So I'm guessing the first process somehow manages to hide the file from the new process, or fails to actually write the file to the file system, even though the streams are flushed and closed. I also tried a Thread.sleep(10000) in the first process to let the I/O operations finish, but that didn't help at all.
Am I doing something wrong? Or is this a Java thing, or maybe an OS X thing (which is what I'm running at the moment)?
Answers to Comments
I'm running OS X 10.6.3, and the Java version is 1.6.0_20.
The Runtime.exec() arguments are
java
-classpath
/var/folders/dr/drDlHsguGvq0zF2Jtgn4S++++TI/-Tmp-/bob7168396245507677201.tmp:/Users/wolfie/Documents/workspace/Bob/dist/lib/junit.jar:/Users/wolfie/Documents/workspace/Bob/dist/lib/bob.jar
-Dcache.location="/var/folders/dr/drDlHsguGvq0zF2Jtgn4S++++TI/-Tmp-/BobBootstrapInfo4944987280015634213.tmp"
com.github.wolfie.bob.Bob
where each line is an element in a String array. The /var/folders/dr/drDlHsguGvq0zF2Jtgn4S++++TI/-Tmp-/BobBootstrapInfo4606780571989099166.tmp file is the one that was created and is being read by the other process. The envp and dir arguments are null.
The full stack trace is:
Exception in thread "main" com.github.wolfie.bob.BootstrapError: java.io.FileNotFoundException: "/var/folders/dr/drDlHsguGvq0zF2Jtgn4S++++TI/-Tmp-/BobBootstrapInfo4606780571989099166.tmp" (No such file or directory)
at com.github.wolfie.bob.Bob.getBootstrapInfo(Bob.java:186)
at com.github.wolfie.bob.Bob.run(Bob.java:138)
at com.github.wolfie.bob.Bob.main(Bob.java:95)
Caused by: java.io.FileNotFoundException: "/var/folders/dr/drDlHsguGvq0zF2Jtgn4S++++TI/-Tmp-/BobBootstrapInfo4606780571989099166.tmp" (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:106)
at com.github.wolfie.bob.Bob.deserializeBootstrapInfoFromFile(Bob.java:265)
at com.github.wolfie.bob.Bob.getBootstrapInfo(Bob.java:184)
... 2 more

The answer isn't visible from the question: I defined the system property like so:
-Dcache.location="/foo/file.ext"
But the property should've been instead
-Dcache.location=/foo/file.ext
i.e. without the quotes. The mistake went unnoticed when the exact same arguments were typed on the command line, because Bash strips the quotes before the JVM ever sees the value; Runtime.exec() does not involve a shell, so the quotes were passed to the child verbatim and became part of the path it tried to open.
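A minimal sketch of the corrected parent-side call; the classpath value is abbreviated here as a placeholder variable:

// Sketch only: "classpath" stands in for the full classpath string quoted
// above. Runtime.exec() passes each array element to the child verbatim, so
// any quotes would end up inside the property value rather than being stripped.
final String[] cmd = new String[] {
        "java",
        "-classpath", classpath,
        "-Dcache.location=" + tempFile.getAbsolutePath(),
        "com.github.wolfie.bob.Bob" };
final Process child = Runtime.getRuntime().exec(cmd, null, null);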
Sorry about the poor question.

Related

How to resolve java.nio.file.FileSystemException The process cannot access the file because it is being used by another process

I am getting a java.nio.file.FileSystemException when I run this code:
public String getScreenShotAsBase64() throws IOException {
    File source = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
    String path = System.getProperty("user.dir") + "/Screenshots/image.png";
    FileUtils.copyFile(source, new File(path));
    byte[] imageBytes = IOUtils.toByteArray(new FileInputStream(path));
    return Base64.getEncoder().encodeToString(imageBytes);
}
When I try to run the method, it does not work and throws the exception above.
The cause of your problem is that Windows won't let your application open the "Screenshots/image.png" file for writing because something else already has it open. It just won't. See File Locking for an overview of Windows file locks and their purpose.
This SuperUser Q&A gives a number of ways to figure out which other application holds the file lock:
Find out which process is locking a file or folder in Windows
Your use of Selenium in this instance is (probably) not apropos.
You will most likely need to do one of the following to resolve this.
Change your application to write the screenshot to another file if the first target file it chooses is locked (see the sketch after this list).
Tell the user that your application can't write the file. The user message could suggest that they need to close whatever other application it is that has the image file open at the moment.
If the other application is Windows itself (for some reason) you will probably need to rethink what you are trying to do.
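A minimal sketch of the first option, assuming it is acceptable to change the file name: writing each screenshot to a uniquely named file sidesteps whatever process is holding image.png open (java.nio.file imports omitted for brevity).

// Sketch only: the timestamped naming scheme is an assumption, not part of
// the original code. A lock on an earlier screenshot can no longer block the copy.
public String getScreenShotAsBase64() throws IOException {
    File source = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
    Path target = Paths.get(System.getProperty("user.dir"), "Screenshots",
            "image-" + System.currentTimeMillis() + ".png");
    Files.createDirectories(target.getParent());
    Files.copy(source.toPath(), target, StandardCopyOption.REPLACE_EXISTING);
    byte[] imageBytes = Files.readAllBytes(target);
    return Base64.getEncoder().encodeToString(imageBytes);
}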

Java program can't use file write after it's written

The situation is this: I am creating a file that an XML resource needs right after it is created. When the program is done executing, the file should be deleted. This is what happens:
I run the program; the file does not exist yet...
The file should be created using FileWriter:
File file = new File("src/main/resources/org/avalin/optaplanner/solver/employeeShiftsScoreRules.drl");
try (FileWriter fileWriter = new FileWriter(file))
{
    fileWriter.write("Content...");
    fileWriter.flush();
    fileWriter.close();
}
catch (IOException e)
{
    e.printStackTrace();
}
And then I have this code:
private static synchronized Solver buildSolver()
{
    SolverFactory solverFactory =
            SolverFactory.createFromXmlResource(SOLVER_CONFIG_XML);
    return solverFactory.buildSolver();
}
The file writing is also wrapped in a synchronized method, but I assume that since the two methods don't access the same variable directly, the synchronization has no effect whatsoever. The file is referenced from the SOLVER_CONFIG_XML seen above.
When the program ends, it deletes the file at the given path, so that the next time it runs the file will be created according to the parameters given to the program.
Now this is what happens...
The first time I run the program I get an error, saying the file isn't written.
Exception in thread "main" java.lang.IllegalArgumentException: The scoreDrl (org/avalin/optaplanner/solver/employeeShiftsScoreRules.drl) does not exist as a classpath resource in the classLoader
I can add print statements right after the file writing which confirm that the method HAS run the first time around, but for some reason the file still isn't "created" until the program finishes executing that first run...
The second time, the program runs fine, because the file was created before the program reached the exception?
Is there a way to make sure the file is "completely written" before the next part of my program executes? The file differs in length each time, as it is dynamically created from the user's input, so I can't check on that. I would assume it is completely written, since the prints I placed after fileWriter.close() did execute, but apparently not.
It looks like you are writing your file to the src/main/resources folder, which is the standard location for resource sources: it's where your build system reads files from, not where your running program reads them from.
Although it's possible to add your source folders to the classpath of your running program, it is bad practice. Try to find out where your build system writes its output to (probably separate folders for class files and copied resources) and write your file there, as sketched below.
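As a sketch only: resolve the running program's own classpath root and write the file under it, so the resource is visible in the same run. The class name Main, the helper name writeScoreRules, and the Maven-style target/classes layout are assumptions on my part.

// Writes the .drl under the folder the classes are actually loaded from
// (e.g. target/classes in a Maven build) so it can be found as a classpath
// resource in the same run.
static Path writeScoreRules(final String content) throws Exception {
    Path classpathRoot = Paths.get(Main.class.getProtectionDomain()
            .getCodeSource().getLocation().toURI());
    Path drl = classpathRoot.resolve(
            "org/avalin/optaplanner/solver/employeeShiftsScoreRules.drl");
    Files.createDirectories(drl.getParent());
    return Files.write(drl, content.getBytes(StandardCharsets.UTF_8));
}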

Create a text file if it doesn't exist and append to it if it does using Java BufferedWriter

This is probably ridiculously simple for gun Java programmers, yet the fact that I (a relative newbie to Java) couldn't find a simple, straightforward example of how to do it means that I'm going to use the self-answer option to hopefully prevent others going through similar frustration.
I needed to output error information to a simple text file. These actions would be infrequent and small (and sometimes not needed at all) so there is no point keeping a stream open for the file; the file is opened, written to and closed in the one action.
Unlike other "append" questions that I've come across, this one requires that the file be created on the first call to the method in that run of the Java application. The file will not exist before that.
The original code was:
Path pathOfLog = Paths.get(gsOutputPathUsed + gsOutputFileName);
Charset charSetOfLog = Charset.forName("US-ASCII");
bwOfLog = Files.newBufferedWriter(pathOfLog, charSetOfLog);
bwOfLog.append(stringToWrite, 0, stringToWrite.length());
iReturn = stringToWrite.length();
bwOfLog.newLine();
bwOfLog.close();
The variables starting with gs are pre-populated string variables showing the output location, and stringToWrite is an argument which is passed in.
So the .append method should be enough to show that I wanted to append content, right?
But it isn't; each time the procedure was called the file was left containing only the string of the most recent call.
The answer is that you also need to specify open options when calling the newBufferedWriter method. What gets you is the set of default options specified in the documentation:
If no options are present then this method works as if the CREATE,
TRUNCATE_EXISTING, and WRITE options are present.
Specifically, it's TRUNCATE_EXISTING that causes the problem:
If the file already exists and it is opened for WRITE access, then its
length is truncated to 0.
The solution, then, is to change
bwOfLog = Files.newBufferedWriter(pathOfLog, charSetOfLog);
to
bwOfLog = Files.newBufferedWriter(pathOfLog, charSetOfLog, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
Probably obvious to long time Java coders, less so to new ones. Hopefully this will help someone avoid a bit of head banging.
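For completeness, a self-contained variant of the fix that also guarantees the writer is closed; the variable names follow the question, but the method signature is mine:

// CREATE + APPEND creates the file on the first call and appends afterwards;
// try-with-resources closes the writer even if a write fails.
static int appendLine(final Path pathOfLog, final String stringToWrite)
        throws IOException {
    try (BufferedWriter bwOfLog = Files.newBufferedWriter(pathOfLog,
            Charset.forName("US-ASCII"),
            StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
        bwOfLog.append(stringToWrite);
        bwOfLog.newLine();
        return stringToWrite.length();
    }
}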
You can also try this:
Path path = Paths.get("C:\\Users", "textfile.txt");
String text = "\nHello how are you ?";
try (BufferedWriter writer = Files.newBufferedWriter(path, StandardCharsets.UTF_8,
        StandardOpenOption.APPEND, StandardOpenOption.CREATE)) {
    writer.write(text);
} catch (IOException e) {
    e.printStackTrace();
}

Xuggle can't open in-memory input

I am working on a program that integrates Hadoop's MapReduce framework with Xuggle. For that, I am implementing an IURLProtocolHandlerFactory class that reads and writes from and to in-memory Hadoop data objects.
You can see the relevant code here:
https://gist.github.com/4191668
The idea is to register each BytesWritable object in the IURLProtocolHandlerFactory with a UUID, so that when I later refer to that name while opening the file, it returns an IURLProtocolHandler instance attached to that BytesWritable object, and I can read from and write to memory.
The problem is that I get an exception like this:
java.lang.RuntimeException: could not open: byteswritable:d68ce8fa-c56d-4ff5-bade-a4cfb3f666fe
at com.xuggle.mediatool.MediaReader.open(MediaReader.java:637)
(see also under the posted link)
When debugging I see that the objects are correctly found in the factory, what's more, they are even being read from in the protocol handler. If I remove the listeners from/to the output file, the same happens, so the problem is already with the input. Digging deeper in the code of Xuggle I reach the JNI code (which tries to open the file) and I can't get further than this. This apparently returns an error code.
XugglerJNI.IContainer_open__SWIG_0
I would really appreciate some hint where to go next, how should I continue debugging. Maybe my implementation has a flaw, but I can't see it.
I think the problem you are running into is that a lot of the types of inputs/outputs are converted to a native file descriptor in the IContainer JNI code, but the thing you are passing cannot be converted. It may not be possible to create your own IURLProtocolHandler in this way, because it would, after a trip through XuggleIO.map(), just end up calling IContainer again and then into the IContainer JNI code which will probably try to get a native file descriptor and call avio_open().
However, there may be a couple of things that you can open in IContainer which are not files and have no file descriptors, and which would be handled correctly. The things you can open can be seen in the IContainer code, namely java.io.DataOutput and java.io.DataOutputStream (and the corresponding inputs). I recommend making a DataInput/DataOutput implementation that wraps your BytesWritable objects, and opening that in IContainer.
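A sketch of that wrapping step only; whether IContainer accepts the result depends on the overloads available in your Xuggler version, so the actual open call is left out.

// Exposes the valid portion of a BytesWritable as a java.io.DataInput.
// getBytes() returns a backing array that is usually longer than
// getLength(), hence the copy.
static DataInput asDataInput(final BytesWritable writable) {
    final byte[] valid =
            Arrays.copyOf(writable.getBytes(), writable.getLength());
    return new DataInputStream(new ByteArrayInputStream(valid));
}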
If that doesn't work, then write your inputs to a temp file and read the outputs from a temp file :)
You can copy the file to the local file system first and then try to open the container:
filePath = split.getPath();
final FileSystem fileSystem = filePath.getFileSystem(job);
Path localFile = new Path(filePath.getName());
fileSystem.createNewFile(localFile);
fileSystem.copyToLocalFile(filePath, localFile);
int result = container.open(filePath.getName(), IContainer.Type.READ, null);
This code works for me in the RecordReader class.
In your case you may copy the file to the local file system first and then try to create the MediaReader.

Unable to access Java-created file -- sometimes

In Java, I'm working with code running under WinXP that creates a file like this:
public synchronized void store(Properties props, byte[] data) {
    try {
        File file = filenameBasedOnProperties(props);
        if (file.exists()) {
            return;
        }
        File temp = File.createTempFile("tempfile", null);
        FileOutputStream out = new FileOutputStream(temp);
        out.write(data);
        out.flush();
        out.close();
        file.getParentFile().mkdirs();
        temp.renameTo(file);
    }
    catch (IOException ex) {
        // Complain and whine and stuff
    }
}
Sometimes, when a file is created this way, it's just about totally inaccessible from outside the code (though the code responsible for opening and reading the file has no problem), even when the application isn't running. When accessed via Windows Explorer, I can't move, rename, delete, or even open the file. Under Cygwin, I get the following when I ls -l the directory:
ls: cannot access [big-honkin-filename]
total 0
?????????? ? ? ? ? ? [big-honkin-filename]
As implied, the filenames are big, but under the 260-character max for XP (though they are slightly over 200 characters).
To further add to the sense that my computer just wants me to feel stupid, sometimes the files created by this code are perfectly normal. The only pattern I've spotted is that once one file in the directory "locks", the rest are screwed.
Anybody ever run into something like this before, or have any insights into what's going on here?
Make sure you always close the stream in a finally block. In your case, if an exception is thrown, the stream might not get closed and will leak a file handle. You could use Process Explorer (procexp) from SysInternals to see which process holds the handle to the file.
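A sketch of that suggestion applied to the method from the question, using Java 7 try-with-resources (on older JVMs, call close() in a finally block instead):

public synchronized void store(Properties props, byte[] data) {
    try {
        File file = filenameBasedOnProperties(props);
        if (file.exists()) {
            return;
        }
        File temp = File.createTempFile("tempfile", null);
        // The stream is closed even if write() throws, so no handle can leak.
        try (FileOutputStream out = new FileOutputStream(temp)) {
            out.write(data);
        }
        file.getParentFile().mkdirs();
        temp.renameTo(file);
    } catch (IOException ex) {
        // Complain and whine and stuff
    }
}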
Although NTFS itself should handle paths up to 2^15-1 characters, in practice path length is limited to roughly 260 characters (the Win32 MAX_PATH limit).
You can create files with a longer path name (filename including parent folder names), but you cannot access them afterwards. The error I get in these cases is that the file could not be found. To get rid of these files, I have to shorten the names of parent folders, until the path length is short enough.
