I have an app that serializes and reads/writes some custom objects in Java.
One of my clients has a particular file (only one) that throws an EOFException whenever it is passed to the ObjectInputStream constructor.
java.io.ObjectInputStream$PeekInputStream.readFully(Unknown Source)
java.io.ObjectInputStream$BlockDataInputStream.readShort(Unknown Source)
java.io.ObjectInputStream.readStreamHeader(Unknown Source)
java.io.ObjectInputStream.&lt;init&gt;(Unknown Source)
EDIT: Sorry, my mistake. I forgot to mention that I am receiving the file through this code:
File folder = new File(path);
File[] files = folder.listFiles();
So the file does exist, at least as far as File#listFiles() can see.
So file in the code below is received from the loop:
for(File file : files)
Thus, the IOException shouldn't be caused by the file being missing (why would listFiles() return it otherwise?).
END-EDIT
I figured this may be due to a glitch from a failed partial write of the object, so I added code to delete the problem file when there is an EOFException:
try (InputStream is = new FileInputStream(file);
     ObjectInputStream ois = new ObjectInputStream(is)) {
    // Do stuff...
} catch (IOException e) {
    if (e instanceof EOFException) {
        file.delete();
    }
    ErrorHandler.handleError(e);
}
Although this code executes without complaint, it does not actually delete the file (I still see the error in the logs constantly). So I opted to have my client manually search for and delete the file. He searched, found it, and deleted it, and confirmed to me that the deletion succeeded. However, even after he manually deleted it, the error still pops up!
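In hindsight, java.nio.file.Files.delete (Java 7+) would at least report why the delete fails, since it throws an IOException instead of returning false; a minimal sketch:
import java.io.IOException;
import java.nio.file.Files;

// Unlike File.delete(), which just returns false, Files.delete()
// throws an IOException describing why the file could not be removed
// (still in use, access denied, and so on).
try {
    Files.delete(file.toPath());
} catch (IOException deleteFailure) {
    ErrorHandler.handleError(deleteFailure);
}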
Although this is a Java program, my suspicion is that this is a Windows file-system glitch, so Java may not have much to do with it. Does anyone have experience with "ghost" files that seem to be there but aren't? Or that seem to get deleted but don't?
This is a confusing problem, and it's impossible for me to reproduce.
The file is empty, or doesn't contain a complete object stream header. In either event it is corrupt, and you should have detected that when you wrote it.
Probably you failed to close the ObjectOutputStream when you created the file.
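As a minimal sketch of the write side (writeObjectToFile here is a hypothetical helper, not the asker's code), try-with-resources guarantees the ObjectOutputStream is flushed and closed even when writing fails:
import java.io.*;

// Hypothetical write helper: the try-with-resources block guarantees
// the ObjectOutputStream is flushed and closed, even on failure.
static void writeObjectToFile(File file, Serializable obj) throws IOException {
    try (ObjectOutputStream oos =
            new ObjectOutputStream(new FileOutputStream(file))) {
        oos.writeObject(obj);
    }
}
Writing to a temp file and renaming it into place, as in the last question below, goes one step further and keeps readers from ever seeing a half-written file.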
Related
I need to write a custom batch file renamer. I've got the bulk of it done, except I can't figure out how to check whether a file is already open. I'm just using the java.io.File class, and there is a canWrite() method, but that doesn't seem to test whether the file is in use by another program. Any ideas on how I can make this work?
Using the Apache Commons IO library...
boolean isFileUnlocked = false;
try {
    org.apache.commons.io.FileUtils.touch(yourFile);
    isFileUnlocked = true;
} catch (IOException e) {
    isFileUnlocked = false;
}

if (isFileUnlocked) {
    // Do stuff you need to do with a file that is NOT locked.
} else {
    // Do stuff you need to do with a file that IS locked.
}
(The Q&A is about how to deal with Windows "open file" locks ... not how implement this kind of locking portably.)
This whole issue is fraught with portability issues and race conditions:
You could try to use FileLock, but it is not necessarily supported for your OS and/or filesystem.
It appears that on Windows you may be unable to use FileLock if another application has opened the file in a particular way.
Even if you did manage to use FileLock or something else, you've still got the problem that something may come in and open the file between you testing the file and doing the rename.
A simpler though non-portable solution is to just try the rename (or whatever it is you are trying to do) and diagnose the return value and / or any Java exceptions that arise due to opened files.
Notes:
If you use the Files API instead of the File API, you will get more information in the event of a failure (see the sketch after these notes).
On systems (e.g. Linux) where you are allowed to rename a locked or open file, you won't get any failure result or exceptions. The operation will just succeed. However, on such systems you generally don't need to worry if a file is already open, since the OS doesn't lock files on open.
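For example, a minimal sketch of the "just try it and diagnose" approach with the Files API (renameOrReport is a hypothetical helper, not code from the question):
import java.io.IOException;
import java.nio.file.FileSystemException;
import java.nio.file.Files;
import java.nio.file.Path;

// Attempt the rename and diagnose the failure, rather than testing first.
static void renameOrReport(Path source, Path target) {
    try {
        Files.move(source, target);
    } catch (FileSystemException e) {
        // On Windows this is typically a sharing violation: "The process
        // cannot access the file because it is being used by another process."
        System.err.println("Rename failed: " + e.getReason());
    } catch (IOException e) {
        System.err.println("Rename failed: " + e);
    }
}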
// To check whether a file is opened or not (not for .txt files).
// The file we want to check:
String fileName = "C:\\Text.xlsx";
File file = new File(fileName);

// Try to rename the file to its own name.
File sameFileName = new File(fileName);
if (file.renameTo(sameFileName)) {
    // The rename succeeded, so nothing else has the file open.
    System.out.println("file is closed");
} else {
    // The file didn't accept the renaming operation, so something has it open.
    System.out.println("file is opened");
}
On Windows I found the answer https://stackoverflow.com/a/13706972/3014879 using
fileIsLocked = !file.renameTo(file)
most useful, as it avoids false positives when processing write-protected (or read-only) files.
org.apache.commons.io.FileUtils.touch(yourFile) doesn't check if your file is open or not. Instead, it changes the timestamp of the file to the current time.
I used the IOException and it works just fine:
try {
    String filePath = "C:\\sheet.xlsx";
    // Note: if the open succeeds, the FileWriter truncates the file,
    // so only use this check on a file you are about to overwrite anyway.
    FileWriter fw = new FileWriter(filePath);
    fw.close();
} catch (IOException e) {
    System.out.println("File is open");
}
I don't think you'll ever get a definitive solution for this, the operating system isn't necessarily going to tell you if the file is open or not.
You might get some mileage out of java.nio.channels.FileLock, although the javadoc is loaded with caveats.
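If you do want to try it, a minimal sketch with FileChannel.tryLock (Java 7+; canLockExclusively is a hypothetical helper, and all the javadoc caveats still apply):
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Returns true if we could grab an exclusive lock, i.e. nothing else
// appeared to hold one. tryLock() returns null when another program
// holds the lock, and throws if this JVM already holds one.
static boolean canLockExclusively(Path path) {
    try (FileChannel channel = FileChannel.open(path, StandardOpenOption.WRITE);
         FileLock lock = channel.tryLock()) {
        return lock != null;
    } catch (OverlappingFileLockException | IOException e) {
        return false;
    }
}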
I hope this helps.
I tried all the options above and none really worked on Windows. The only thing that let me accomplish this was trying to move the file, even to the same place, with an ATOMIC_MOVE. If the file is being written by another program or Java thread, this will definitely produce an exception.
try {
    Files.move(Paths.get(currentFile.getPath()),
               Paths.get(currentFile.getPath()),
               StandardCopyOption.ATOMIC_MOVE);
    // Do your stuff here, since the file is not being written by another program.
} catch (Exception e) {
    // Do not write: the file is being written by another program.
}
If the file is in use, new FileOutputStream(file) throws a java.io.FileNotFoundException with 'The process cannot access the file because it is being used by another process' in the exception message.
I have a temporary file which I want to send to the client from a controller in the Play Framework. Can I delete the file after opening a FileInputStream on it? For example, can I do something like this:
File file = getFile();
InputStream is = new FileInputStream(file);
file.delete();
renderBinary(is, "name.txt");
What if the file is large? If I delete the file, will subsequent read()s on the InputStream give an error? I have tried with files of around 1 MB and I don't get an error.
Sorry if this is a very naive question, but I could not find anything related to this and I am pretty new to Java.
I just encountered this exact same scenario in some code I was asked to work on. The programmer was creating a temp file, getting an input stream on it, deleting the temp file and then calling renderBinary. It seems to work fine even for very large files, even into the gigabytes.
I was surprised by this and am still looking for some documentation that indicates why this works.
UPDATE: We did finally encounter a file that caused this thing to bomb. I think it was over 3 GB. At that point, it became necessary NOT to delete the file while the rendering was in progress. I actually ended up using the Amazon Queue service to queue up messages for these files. The messages are then retrieved by a scheduled deletion job. Works out nicely, even with clustered servers behind a load balancer.
It seems counter-intuitive that the FileInputStream can still read after the file is removed.
DiskLruCache, a popular library in the Android world originating from the libcore of the Android platform, even relies on this "feature", as follows:
// Open all streams eagerly to guarantee that we see a single published
// snapshot. If we opened streams lazily then the streams could come
// from different edits.
InputStream[] ins = new InputStream[valueCount];
try {
    for (int i = 0; i < valueCount; i++) {
        ins[i] = new FileInputStream(entry.getCleanFile(i));
    }
} catch (FileNotFoundException e) {
    ....
As #EJP pointed out in his comment on a similar question, "That's how Unix and Linux behave. Deleting a file is really deleting its name from the directory: the inode and the data persist while any processes have it open."
But I don't think it is a good idea to rely on it.
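A minimal self-contained sketch of the behaviour (assuming a Unix-like system; on Windows the delete itself will typically fail while the stream is open):
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class DeleteWhileReading {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, "hello".getBytes());
        try (InputStream is = new FileInputStream(tmp.toFile())) {
            // On Unix/Linux this only unlinks the name; the inode and
            // its data survive while the stream holds the file open.
            Files.delete(tmp);
            System.out.println((char) is.read()); // still prints 'h'
        }
    }
}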
I have Java code which is passed a list of Zip files, one of which is purposely badly formatted. This Zip file is placed at the end of the list.
My code looks somewhat like:
System.out.println("Hi Stinky Pete ");
try
{
for (File files : file)
{
zip_str = new ZipInputStream(new BufferedInputStream(new FileInputStream(file)));
yada;
}
}
catch(Exception)
{
}
It never prints "Hi Stinky Pete" or processes any file before it gets to the bad Zip file, which is the 4th or 20th file in the list; it just throws the ZipException. Also, I cannot catch the ZipException! It always bubbles up and terminates my program.
Any help would be great.
Is this malformed Zip file on your classpath by any chance? Or do you have a static initializer in your class that tries to open it?
Take a close look at the exception stack trace to see where it's being thrown. If you can't interpret it, then post the stack trace in your question.
I apologize, but I had inherited this code. The code I inherited looped through the file list, constructing a ZipFile for each, in a separate running thread. That is why I could not catch the exception or get the stack trace. Basically it was:
for (File f : files) {
    ZipFile zip = new ZipFile(f);
}
They were doing this to validate the Zip files but weren't catching the exception. Sorry for the post!
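For what it's worth: an exception thrown on another thread never reaches this thread's catch block. A default uncaught-exception handler (a generic sketch, assuming Java 8+ for the lambda; not the inherited code) at least makes such failures visible:
// Install once at startup: logs exceptions that would otherwise
// silently kill worker threads (or the whole program).
Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
    System.err.println("Uncaught exception on thread " + thread.getName());
    throwable.printStackTrace();
});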
I have a problem. The current code works fine when I run it through IntelliJ,
but it fails with an exception when I run it in Maven 3.
public static boolean isZipContent(InputStream inputstream) throws IOException {
    BufferedInputStream bis = new BufferedInputStream(inputstream);
    ZipInputStream zis = new ZipInputStream(bis);
    ZipEntry ze = zis.getNextEntry();
    if (ze == null) {
        return false;
    }
    zis.closeEntry();
    zis.close();
    bis.close();
    return true;
}
Exception:
java.util.zip.ZipException: invalid literal/lengths set
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164) ~[na:1.7.0_06]
at java.util.zip.ZipInputStream.read(ZipInputStream.java:193) ~[na:1.7.0_06]
at java.util.zip.ZipInputStream.closeEntry(ZipInputStream.java:139) ~[na:1.7.0_06]
The Zip files look just fine when I open them manually using WinZip or whatever - and as I said, everything works perfectly in IntelliJ.
I have debugged and checked file encoding, class loaders, and everything else; it all looks identical, but the code still fails consistently when I run the test under Maven 3, while it works in IntelliJ.
It fails on zis.closeEntry() with the exception above.
I have made sure the stream is still open during debugging.
I'm using Java 1.6, on Win7. Maven 3.0.4. I've tried other versions of Java with the same result.
Does anyone have an idea of what is going on?
You don't need the closeEntry(), as you're not interested in the next one. Remove it. You also don't need bis.close(): it's already closed by zis.close().
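With those two calls removed, the method shrinks to something like this (a sketch assuming Java 7+ for try-with-resources; the question mentions Java 1.6, where an explicit close would be needed instead):
public static boolean isZipContent(InputStream inputstream) throws IOException {
    // Closing the ZipInputStream also closes the BufferedInputStream it wraps.
    try (ZipInputStream zis = new ZipInputStream(new BufferedInputStream(inputstream))) {
        return zis.getNextEntry() != null;
    }
}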
The problem was a corrupt Zip file...
What threw me off was that the table of contents, with all its entries, looked just fine, so I assumed the Zip file was fine.
Once I tried to actually unzip one of the files it failed.
In Java, I'm working with code running under WinXP that creates a file like this:
public synchronized void store(Properties props, byte[] data) {
    try {
        File file = filenameBasedOnProperties(props);
        if (file.exists()) {
            return;
        }
        File temp = File.createTempFile("tempfile", null);
        FileOutputStream out = new FileOutputStream(temp);
        out.write(data);
        out.flush();
        out.close();
        file.getParentFile().mkdirs();
        temp.renameTo(file);
    } catch (IOException ex) {
        // Complain and whine and stuff
    }
}
Sometimes, when a file is created this way, it's just about totally inaccessible from outside the code (though the code responsible for opening and reading the file has no problem), even when the application isn't running. When accessed via Windows Explorer, I can't move, rename, delete, or even open the file. Under Cygwin, I get the following when I ls -l the directory:
ls: cannot access [big-honkin-filename]
total 0
?????????? ? ? ? ? ? [big-honkin-filename]
As implied, the filenames are big, but under the 260-character max for XP (though they are slightly over 200 characters).
To further add to the sense that my computer just wants me to feel stupid, sometimes the files created by this code are perfectly normal. The only pattern I've spotted is that once one file in the directory "locks", the rest are screwed.
Anybody ever run into something like this before, or have any insights into what's going on here?
Make sure you always close the stream in a finally block. In your case, if an exception is thrown, the stream might not get closed and will leak a file handle. You could use Process Explorer (procexp) from Sysinternals to see which process holds the handle to the file.
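For example, a sketch of the write path with that fixed (try-with-resources assumes Java 7+; on the older Java this question likely used, a finally block achieves the same), plus a check of the rename result:
File temp = File.createTempFile("tempfile", null);
try (FileOutputStream out = new FileOutputStream(temp)) {
    out.write(data);
} // closed here even if write() throws, so no handle leaks

file.getParentFile().mkdirs();
if (!temp.renameTo(file)) {
    // Complain: renameTo can also fail quietly, e.g. when the target
    // exists or the source is still held open by another handle.
}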
Although NTFS itself supports path lengths up to 2^15-1 characters, in practice the Win32 API limits a path to MAX_PATH, about 260 characters.
You can create files with a longer path (filename including parent folder names), but you cannot access them afterwards. The error I get in these cases is that the file could not be found. To get rid of these files, I have to shorten the names of the parent folders until the path is short enough.