I am using a RandomAccessFile object to access files with:
RandomAccessFile file = new RandomAccessFile(path, "r");
My problem is that if the file at path gets removed from disk and I then perform a
file.seek(...);
or a
file.readLine()
no exception is thrown; I do not get any exception at all.
Is it possible to get an exception in the case of a dangling pointer, i.e. if this file has been removed from disk?
Is there another way to detect that the file is inaccessible?
EDIT: clarification for Windows (thanks to pingw33n)
It is perfectly normal that you get no exception when:
you open a file
you or someone else deletes the file
you still access the file, read what it contained before the delete, or write to it
In fact, removing a file does nothing to the file itself. What is removed is an entry in a directory. The file will actually be destroyed (and the sectors it uses on disk released) only when:
no more directory entries point to it
no file descriptors keep it opened
So even if the byte you ask for is not buffered in memory, the file system still knows how to get it from disk. By the way, this is a common pattern for temporary files, that is, files that will be deleted on last close.
Of course, you can do what merlin2011 suggests, that is, test the presence of the file via its path. But you must be aware that if the file is deleted and then created again, the path (that was used to open the file) is present, but points to a totally different object.
So if you really need the open file to reflect the current content of the directory, you cannot keep it open and must reopen it on each and every access ... If this is not an acceptable option, you can still:
ignore modifications to the directory and file system; you have a file and you use it, full stop. There are many use cases where this is correct.
state in your documentation that the directory is yours and nobody else should delete files in it. After all, you cannot prevent an admin from breaking their system or killing your app.
This is true for all normal filesystems: those of Linux or other Unix-like systems, NTFS, etc. I am not sure it still holds for older ones such as CP/M or FAT, but they are no longer used in production :-). Note that under Windows, it should not be possible to delete a file that is currently open in a Java application.
To answer your two questions precisely:
your pointer is not dangling but still points to a real file (even if nobody else can see it)
an exception will be thrown in case of actual file inaccessibility (physical damage to the disk or connections, file system errors, etc.); but if only the directory entry was removed, the file is still accessible, as the sketch below illustrates
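A minimal sketch of that behaviour on a Unix-like system (it assumes a small pre-existing text file at the hypothetical path /tmp/demo.txt; this is an illustration, not code from the question):

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class DeletedButOpenDemo {
    public static void main(String[] args) throws IOException {
        File f = new File("/tmp/demo.txt"); // hypothetical pre-existing file
        try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
            // Remove the directory entry while the file is still open.
            System.out.println("deleted = " + f.delete());

            // On Linux/Unix the open descriptor still reaches the old data,
            // so this seek/read succeeds and throws no exception.
            raf.seek(0);
            System.out.println("first line = " + raf.readLine());

            // Checking the path, however, now reports that the file is gone.
            System.out.println("exists = " + f.exists());
        }
    }
}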
There are two answers to your question.
Based on the Javadoc, you should get an IOException if any byte cannot be read for any reason.
If any byte cannot be read for any reason other than end-of-file, an
IOException other than EOFException is thrown. In particular, an
IOException may be thrown if the stream has been closed.
You can explicitly check for file deletion before trying to read, using the method described in this answer.
File f = new File(filePathString);
if (f.exists() && !f.isDirectory()) { /* do something */ }
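If you are on Java 7 or later, the same check can be written with java.nio (a sketch reusing filePathString from the snippet above; it is an alternative, not part of the original answer):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

Path p = Paths.get(filePathString);
if (Files.isRegularFile(p)) {
    // the path currently exists and is not a directory
}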
Related
I've already created a method that uses createNewFile to create a new file, and it does so successfully. I've also made a method that's supposed to open files using RandomAccessFile. Due to some issues, I checked whether a new file is created when I pass a new name as a parameter to RandomAccessFile, and it is. I was wondering if that's actually the case and, if so, what I can replace it with in order to open files and read/write them. I can't change much of the "general idea" of my program, since this is part of an assignment.
The documentation of the RandomAccessFile states about the mode parameter to the class's two constructors:
"r" Open for reading only. Invoking any of the write methods of the
resulting object will cause an IOException to be thrown.
"rw" Open for reading and writing. If the file does not already exist then an
attempt will be made to create it.
The file is only created or modified if you supply a "w" in the file mode. The file will be created if it doesn't exist, but the contents will not be changed if the file does exist, because you are opening the file for both reading and writing.
There is no write mode that causes a file to be opened only if it exists, failing otherwise. To get that functionality in your code, you'd want to first check for the existence of the file, and have your logic do whatever is appropriate when the file does not exist.
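A minimal sketch of that check (the file name and the handling of the missing-file case are illustrative assumptions, not from the original question):

import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;

public class OpenExistingOnly {
    public static void main(String[] args) throws IOException {
        File f = new File("records.dat"); // hypothetical file name
        if (!f.isFile()) {
            // Decide what "file does not exist" means for your program:
            // report it, create it explicitly, throw, etc.
            throw new FileNotFoundException(f + " does not exist");
        }
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.seek(0); // read/write the existing file
            // ...
        }
    }
}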
Linux machine, Java standalone application
I have the following situation:
I have:
a consecutive file write (which creates the destination file and writes some content to it) and file move.
I also have a power outage problem, which instantly cuts off the power of the computer during these operations.
As a result, I find that the file was created and moved as well, but the file content is empty.
The question is what, under the hood, can be causing this exact outcome? Considering the time sensitivity, maybe the hard drive is disabled before the processor and RAM during the cut-off, but in that case, how is it possible that the file is created and moved afterwards, but the write before the move is not successful?
I tried catching and logging the exception and debug information, but the problem is that the power outage disables the logging abilities (I/O) as well.
try {
    FileUtils.writeStringToFile(file, JsonUtils.toJson(object));
} finally {
    if (file.exists()) {
        FileUtils.moveFileToDirectory(file, new File(path), true);
    }
}
Linux file systems don't necessarily write things to disk immediately, or in exactly the order that you wrote them. That includes both file content and file / directory metadata.
So if you get a power failure at the wrong time, you may find that the file data and metadata are inconsistent.
Normally this doesn't matter. (If the power fails and you don't have a UPS, the applications go away without getting a chance to finish what they were doing.)
However, if it does matter, you can force the file contents to be "synced" to disk before you move it:
FileOutputStream fos = ...
// write to file
fos.getFD().sync();
fos.close();
// now move it
You need to read the javadoc for sync() carefully to understand what the method actually does.
You also need to read the javadoc for the method you are using to move the file regarding atomicity.
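Putting those two pieces together, here is a rough sketch (it uses plain NIO instead of Commons IO for the move, and all names and paths are illustrative, not the code from the question):

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class DurableWriteThenMove {
    public static void main(String[] args) throws IOException {
        Path tmp = Paths.get("data.json.tmp");  // illustrative paths
        Path dest = Paths.get("out/data.json");

        try (FileOutputStream fos = new FileOutputStream(tmp.toFile())) {
            fos.write("{\"key\":\"value\"}".getBytes(StandardCharsets.UTF_8));
            fos.getFD().sync(); // force the content to disk before the move
        }

        // ATOMIC_MOVE asks the file system for an atomic rename; it can throw
        // AtomicMoveNotSupportedException if tmp and dest are on different file systems.
        Files.createDirectories(dest.getParent());
        Files.move(tmp, dest, StandardCopyOption.ATOMIC_MOVE);
    }
}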
I have a code to work with some file:
Path path = ...;
if (!path.toFile().exists() || Files.size(path) == 0L) {
    Files.write(path, data, StandardOpenOption.CREATE);
}
It's working fine almost always, but in some cases it overwrites the existing file, so I'm getting a corrupted file with the old data overwritten by the new data. For example, if the file content was 00000000000000 and data in the code above is AAA, I'll get a file with content AAA00000000000.
File access is properly synchronized, so only one thread can access the file, and only one instance of the application can be started at the same time. The application is running on Heroku (it's a Heroku-managed filesystem); I can't reproduce the same behavior on my laptop.
Is it possible that Files.size(path) returns zero for a file with some data? How can I rewrite this code to make it work correctly? Is it possible to use other StandardOpenOption flags to fail (throw an exception) if the file is not empty or doesn't exist?
What is the desired behavior for an existing file with data?
Discard existing data
You can use CREATE and TRUNCATE_EXISTING together. Actually, maybe you should use nothing, since the default for write() is CREATE, TRUNCATE_EXISTING, WRITE, per the documentation.
Keep existing data
You can open it in APPEND mode rather than WRITE mode.
Do nothing if file already exists and is not empty.
This is tricky. The spurious zero-size report is troubling. I'd suggest using CREATE_NEW (fail if the file exists) and, if you get the failure exception, reading the file to see if it's non-empty, as sketched below.
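A sketch of that approach (it assumes data is a byte[], matching the Files.write overload shown; path and data come from the question's snippet):

import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.StandardOpenOption;

try {
    // CREATE_NEW makes the existence check and the creation a single atomic step:
    // it throws if the file is already there, so existing content is never touched.
    Files.write(path, data, StandardOpenOption.CREATE_NEW);
} catch (FileAlreadyExistsException e) {
    if (Files.size(path) > 0) {
        // the file already has content; leave it alone (or inspect it here)
    } else {
        // the file exists but is empty; decide explicitly how to handle this case
    }
}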
Your code contains a race hazard because it performs a "look before you leap" check that cannot be relied upon. In between your predicate
!path.toFile().exists() || Files.size(path) == 0L
giving true, which you think means the file has no previous content, and executing the Files.write to write to the file, a different process (or thread) could have written to the file.
I'm adding code to a large JSP web application, integrating functionality to convert CGM files to PDFs (or PDFs to CGMs) to display to the user.
It looks like I can create the converted files and store them in the directory designated by System.getProperty("java.io.tmpdir"). How do I manage their deletion, though? The program resides on a Linux-based server. Will the OS automatically delete from /tmp or will I need to come up with functionality myself? If it's the latter scenario, what are good ways to go about doing it?
EDIT: I see I can use deleteOnExit() (relevant answer elsewhere), but I think the JVM runs more or less continuously in the background so I'm not sure if the exits would be frequent enough.
I don't think I need to cache any converted files--just convert a file anew every time it's needed.
You can do this:
File file = File.createTempFile("base_name", ".tmp", new File(temporaryFolderPath));
file.deleteOnExit();
The file will be deleted when the virtual machine terminates.
Edit:
If you want to delete it after the job is done, just do it:
File file = null;
try {
    file = File.createTempFile("webdav", ".tmp", new File(temporaryFolderPath));
    // do something with the file
} finally {
    if (file != null) {
        file.delete();
    }
}
There are ways to have the JVM delete files when it exits, using deleteOnExit(), but I think there are known memory leaks with that method. Here is a blog explaining the leak: http://www.pongasoft.com/blog/yan/java/2011/05/17/file-dot-deleteOnExit-is-evil/
A better solution would be either to delete old files using a cron job or, if you know you aren't going to use the file again, simply to delete it after processing.
From your comment:
Also, could I just create something that checks to see if the size of my files exceeds a certain amount, and then deletes the oldest ones if that's true? Or am I overthinking it?
You could create a class that keeps track of the created files with a size limit. When the size of the created files, after creating a new one, goes over the limit, it deletes the oldest one. Beware that this may delete a file that still needs to exist even if it is the oldest one. You might need a way to know which files still need to be kept and delete only those that are not needed anymore.
You could have a timer in the class to check periodically, instead of after each creation. This solution is tied to your application, while using a cron job isn't.
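A rough sketch of such a cleanup (the directory, the size budget, and when you call it are all assumptions, not part of the original answer):

import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

public class TempDirCleaner {
    private static final long MAX_TOTAL_BYTES = 100L * 1024 * 1024; // assumed 100 MB budget

    /** Deletes the oldest files in dir until the total size is under the budget. */
    public static void enforceLimit(File dir) {
        File[] files = dir.listFiles(File::isFile);
        if (files == null) {
            return; // dir does not exist or is not a directory
        }
        // Oldest first, by last-modified time.
        Arrays.sort(files, Comparator.comparingLong(File::lastModified));

        long total = Arrays.stream(files).mapToLong(File::length).sum();
        for (File f : files) {
            if (total <= MAX_TOTAL_BYTES) {
                break;
            }
            long length = f.length();
            if (f.delete()) { // beware: the oldest file may still be in use elsewhere
                total -= length;
            }
        }
    }
}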
I want to save a video file in C:\ by incrementing the file name, e.g. video001.avi, video002.avi, video003.avi, etc. I want to do this in Java. The program is on
Problem in java programming on windows7 (working well in windows xp)
How do I increment the file name so that it saves without replacing the older file?
Using File.createNewFile() you can atomically create a file and determine whether the current thread indeed created it: the return value of that method is a boolean that tells you whether a new file was created. Simply checking whether a file exists before you create it will help, but it will not guarantee that the current thread is the one that created the file it then writes to.
You have two options:
just increment a counter, and rely on the fact that you're the only running process writing these files (and none exist already). So you don't need to check for clashes. This is (obviously) prone to error.
Use the File object (or Apache Commons FileUtils) to get the list of files, then increment a counter and determine whether the corresponding file exists. If it doesn't, write to it and exit (see the sketch after this list). This is a brute-force approach, but unless you're writing thousands of files, it is quite acceptable performance-wise.
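A minimal sketch of the second option, combined with the atomic createNewFile() check (the directory and the name pattern are illustrative assumptions):

import java.io.File;
import java.io.IOException;

public class NextVideoFile {
    /** Creates and returns a fresh file like video001.avi, video002.avi, ... */
    public static File nextFile(File dir) throws IOException {
        for (int i = 1; i <= 999; i++) {
            File candidate = new File(dir, String.format("video%03d.avi", i));
            // createNewFile() returns true only if this call actually created the
            // file, so an existing video with the same number is never replaced.
            if (candidate.createNewFile()) {
                return candidate;
            }
        }
        throw new IOException("No free file name left in " + dir);
    }

    public static void main(String[] args) throws IOException {
        File out = nextFile(new File("C:\\")); // directory from the question
        System.out.println("Writing to " + out);
    }
}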