I have a recurring Java task (packaged as a JAR) that tries to modify a file every 60 seconds.
The problem is that if the user is viewing the file, the Java program cannot modify it, and I get the typical IOException.
Does anyone know a way in Java to modify a file that is currently in use? Or what would be the best way to solve this problem?
I was thinking of using the File canRead() and canWrite() methods to check whether the file is in use. If it is, I'm thinking of making a backup copy of the data that could not be written. Then, after 60 seconds, add some logic to check whether the backup file is empty or not. If the backup file is not empty, append its contents to the main file; if it is empty, just add the new data to the main file. Of course, the first thing I will always do is check whether the file is in use.
Thanks for all your ideas.
"I was thinking of using the File canRead(), canWrite() methods to check if file is in use."
Not a good idea - you'll run into race conditions: for example, your code calls those check methods and gets true return values, but the file is then locked by a different application (possibly the user) just before you open it for writing.
Instead, try to get a FileLock on the file and use the "backup file" when that fails.
You can hold a lock on the file. This should guarantee you are able to write to the file.
See here on how to use the FileLock class.
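A minimal sketch of the lock-or-fall-back approach from the two answers above (the file name is illustrative):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

void updateOrBackup() throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile("main.txt", "rw")) {
        FileLock lock = raf.getChannel().tryLock();   // null if another process holds a lock
        if (lock != null) {
            try {
                // safe to modify the file here
            } finally {
                lock.release();
            }
        } else {
            // file is in use: write the pending data to the backup file instead
        }
    }
}

Keep in mind that whether a FileLock actually blocks other programs is system-dependent (it can be advisory), so it reliably coordinates cooperating processes but may not stop an application that never checks the lock.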
If the user is viewing the file, you should still be able to read it. In this case, make an exact copy of the file and make your changes to the new file.
Then, after the next 60 seconds, you can either:
1) Check whether the file is still being viewed; if not, delete it, replace it with the copy you made earlier, and update that file directly from then on.
2) If it is still being viewed, continue making changes to the copy of the file.
EDIT: As Michael mentioned, when working with the main file, get a lock on it first.
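A minimal sketch of the copy-then-edit approach (file names are illustrative):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

void updateViaCopy() throws IOException {
    Path main = Paths.get("data.txt");
    Path working = Paths.get("data.copy.txt");

    // make an exact copy and apply this round of changes to the copy
    Files.copy(main, working, StandardCopyOption.REPLACE_EXISTING);
    // ... modify 'working' ...

    // 60 seconds later, if the main file is free (lock it first!), swap the copy in
    Files.move(working, main, StandardCopyOption.REPLACE_EXISTING);
}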
I have some code that is designed to open a local master file, make additions, and save the file both by overwriting the master file and by overwriting a write-protected copy in an accessible network location. This is done by saving the modified file to a temp file and then copying it over the other two files.
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

String tempFileName = "File.tmp";
String fileName = "File.xlsm";
String serverPath = "\\\\network path\\";
File serverFile = new File(serverPath + fileName);

// Overwrite the local master file with the temp file
Files.copy(Paths.get(tempFileName), Paths.get(fileName),
        StandardCopyOption.COPY_ATTRIBUTES, StandardCopyOption.REPLACE_EXISTING);

// Clear the read-only flag on the network copy, overwrite it, then restore the flag
if (serverFile.exists()) { serverFile.setWritable(true, false); }
Files.copy(Paths.get(tempFileName), Paths.get(serverPath + fileName),
        StandardCopyOption.COPY_ATTRIBUTES, StandardCopyOption.REPLACE_EXISTING);
serverFile.setWritable(false, false);
Files.delete(Paths.get(tempFileName));
This code works well most of the time. Occasionally, however, the code completes successfully without an exception but with the network file deleted. The local master file is saved and updated correctly, but the file that should exist on the network is simply gone.
What makes this more difficult is that I have been unable to reproduce the problem under any controlled circumstances, so I ask you for any guidance on how this could occur from a file copy/overwrite operation.
Thank you
UPDATE:
I had a hunch and checked network access logs for the server file path. The deletion occurs if and only if the file is being accessed by a user other than the creator, but not every time. Again, though, the file is opened read-only, so a user having it open should not affect overwriting it with a new version, and most of the time it does not. Digging deeper, it seems that occasionally, if and only if the file is open by another user while Java is trying to overwrite it, an AccessDeniedException is thrown and the file is deleted.
I believe this must be a bug in setWritable() or Files.copy (or a combination), as the file should not be deleted in any case, and isWritable() returns true every time. I have tried other methods for setting/unsetting the read-only permission and have come up empty. The current workaround simply catches the exception and loops until the stale file is deleted and a fresh copy is in place. This works but is really a hack, so if anyone has better solutions/suggestions I welcome them.
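For reference, a minimal sketch of that catch-and-retry workaround, reusing the variables from the snippet above (the backoff interval is arbitrary):

import java.nio.file.AccessDeniedException;

// Keep retrying the overwrite until no reader blocks it (a hack, as noted above)
boolean copied = false;
while (!copied) {
    try {
        serverFile.setWritable(true, false);
        Files.copy(Paths.get(tempFileName), Paths.get(serverPath + fileName),
                StandardCopyOption.COPY_ATTRIBUTES, StandardCopyOption.REPLACE_EXISTING);
        copied = true;
    } catch (AccessDeniedException e) {
        try {
            Thread.sleep(1000);   // brief backoff before the next attempt
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            break;
        }
    }
}
serverFile.setWritable(false, false);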
See How does FileLock work?; you could do something like:
Wait for file to become available
Lock file
Overwrite/delete/other
Unlock (if applicable)
This should prevent access by other users during the process of modifying the file.
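A minimal sketch of that sequence, assuming it is acceptable to block until the lock is granted (the file name is illustrative):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

void modifyExclusively() throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile("data.txt", "rw");
         FileChannel channel = raf.getChannel();
         FileLock lock = channel.lock()) {   // blocks until the lock becomes available
        // overwrite/delete/other: modify the file through 'channel' or 'raf' here
    }   // the lock is released automatically when the channel is closed
}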
In my file-monitoring process, as soon as a file appears it is processed immediately; there is no check for whether the file is still open and being written to. How can I prevent the file from being moved before it has been closed?
Do you have control over the program that's putting the files in the directory? Put something like ".partial" on the end of the filename while the file is still being written, and then rename it to remove the ".partial" when the writing is done. If you make the Java file-monitoring program ignore files whose names end in ".partial", it'll only see files after they've been fully written out.
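A minimal sketch of that convention, assuming you control the producer (directory and file names are illustrative):

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

void produceAndConsume() throws IOException {
    // Producer: write under a ".partial" name, then rename once fully written
    Path partial = Paths.get("incoming/data.csv.partial");
    Files.write(partial, "a,b,c\n".getBytes("UTF-8"));
    Files.move(partial, Paths.get("incoming/data.csv"),
            StandardCopyOption.ATOMIC_MOVE);   // rename is atomic within one filesystem

    // Consumer: a glob that never matches ".partial" names
    try (DirectoryStream<Path> dir =
             Files.newDirectoryStream(Paths.get("incoming"), "*.csv")) {
        for (Path p : dir) {
            // only fully written files show up here
        }
    }
}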
My Android application downloads its data only on the first launch. The data is ~50 MB across ~2500 files.
1. Is it a good idea to store whether the files were downloaded in SharedPreferences? The problem is that if the user clears the application data (maybe by mistake), he has to redownload everything. I manually copy a prepackaged database to /data/data/../databases/; is it a good idea to check whether the db exists, and if not, download everything?
if (new File("/data/data/../databases/myDB.db").exists()) { // don't download }
2. Is getting the folder size and checking whether it is the same a good way to verify that the folder and its data are intact? Or is there a better way to check whether two folders are the same?
Thanks.
No, do not put 50 MB of data into SharedPreferences. That will fall over and die. SharedPreferences are stored as XML on disk and loaded entirely into RAM when opened. This also won't keep the user from clearing the data.
For determining whether the data has been downloaded, I would suggest just creating a file once the download is complete to indicate that it is done. The user can't selectively remove files: they can clear your data, but that will also clear the sentinel file, and you will know you need to re-download. (Also keep in mind that you will need to deal with restarting the download if it gets interrupted in the middle.)
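A minimal sketch of the sentinel-file idea (the file name and the downloadAllData() helper are illustrative, not a real API):

import java.io.File;
import java.io.IOException;

// Run inside a Context (e.g. an Activity or Service), which provides getFilesDir()
void ensureDataDownloaded() throws IOException {
    File sentinel = new File(getFilesDir(), "download_complete");
    if (!sentinel.exists()) {
        downloadAllData();            // hypothetical: fetch the ~2500 files
        sentinel.createNewFile();     // mark completion only after everything succeeded
    }
}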
Also be sure you correctly handle filesystem operations as described here: http://android-developers.blogspot.com/2010/12/saving-data-safely.html
An alternate idea if you're worried about missing data files... If at any point your app looks for a file and it doesn't exist, throw an exception, pass it to a handler that shows a dialog and 'verifies' your data. You can keep a list of all needed data files, and then only download ones that don't exist. Something like a system check, if you will.
That way they don't end up downloading 50MB if they were only missing a couple files they accidentally deleted in root explorer ;-)
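A minimal sketch of that verification pass, assuming you keep a manifest of expected file names (all names and the downloadFile() helper are illustrative):

import java.io.File;
import java.util.Arrays;
import java.util.List;

// Re-fetch only the files that are actually missing
void verifyData() {
    List<String> needed = Arrays.asList("level1.dat", "level2.dat", "textures.bin");
    File dataDir = getFilesDir();    // Context method; assumes this runs in a component
    for (String name : needed) {
        if (!new File(dataDir, name).exists()) {
            downloadFile(name);      // hypothetical: download just this one file
        }
    }
}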
I'm adding autosave functionality to a graphics application in Java. The application periodically autosaves the current document and also autosaves on exit. When the user starts the application, the autosave file is reloaded.
If the autosave file is corrupted in any way (I assume a power cut when the file is in the middle of being saved would do this?), the user will lose their work. How can I prevent such situations and do all I can to guarantee that the autosave document is in a consistent state?
To further complicate matters, to autosave the document I need to save one .xml file and several .png files. Also, the .png saving occurs in C code over JNI.
My current strategy is to write each .png with the extension .png.tmp, write the .xml file with the extension .xml.tmp, and then rename each file to remove the .tmp part leaving the .xml until last. On startup, I only load the autosave document if I can find a .xml file and ignore .xml.tmp files. I also don't delete the previous autosave document until the .xml.tmp file for the new document is renamed.
I guess my knowledge of what happens when you write to disk is poor. I know you can have software read/write buffers when using files, as well as OS and hardware buffers, and that all of these need to be flushed. I'm confused about how I can know for sure that something really has been written to disk, and what I can do to protect myself. Does the renaming operation do anything to ensure buffers are flushed?
"If the autosave file is corrupted in any way (I assume a power cut when the file is in the middle of being saved would do this?), the user will lose their work. How can I prevent such situations and do all I can to guarantee that the autosave document is in a consistent state?"
To prevent loss of data due to partially written autosave file, don't overwrite the autosave file. Instead, write to a new file each time, and then rename it once the file has been safely written.
To guard against not noticing that an autosave file has not been correctly written:
Pay attention to the exceptions thrown as the autosave file is written and closed, in case of a disc error, a full file system, etc.
Keep a running checksum of the file as it is written and write it at the end of the file. Then when you load the autosave file, check that the checksum is there and is correct (see the sketch after this list).
If the checkpointed state involves multiple files, make sure that you write the files in a well known order (without overwriting!), and write the checksum on the autosave file after all of the other files have been safely closed. You might want to create a directory for each checkpoint.
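A minimal sketch of the running-checksum idea using CRC32, with the checksum appended as the last 8 bytes (the file name and payload format are illustrative):

import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.CRC32;
import java.util.zip.CheckedOutputStream;

void writeAutosave(byte[] xmlBytes) throws IOException {
    FileOutputStream fos = new FileOutputStream("autosave.xml.tmp");
    try {
        CheckedOutputStream cos = new CheckedOutputStream(fos, new CRC32());
        cos.write(xmlBytes);                          // document payload
        long crc = cos.getChecksum().getValue();
        new DataOutputStream(fos).writeLong(crc);     // trailing 8-byte checksum
        fos.getFD().sync();                           // force to disk before renaming
    } finally {
        fos.close();
    }
}

On load, recompute the CRC over everything except the last 8 bytes and compare it with the stored value before trusting the file.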
FOLLOW UP
No. I'm not saying that rename always succeeds. However, it is atomic - it either succeeds (and completes) or the file system is not changed. So, if you do this:
write "file.new" and close,
delete "file",
rename "file.new" to "file"
then provided the first step succeeds you are guaranteed to have the latest "file" safely on disc. And it is simple to add a couple of steps so that you have a backup of "file" at all times. (If the 3rd step fails, you are left with "file.new" and no "file". This can be recovered manually, or automatically by the application next time you run it.)
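A minimal sketch of those steps using java.nio.file, with the small addition of keeping the old version as a backup instead of deleting it (names are illustrative):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

void replaceSafely() throws IOException {
    Path target = Paths.get("file");          // assumes "file" already exists;
    Path fresh = Paths.get("file.new");       // handle the first run separately

    // step 1 happens elsewhere: "file.new" has been written and closed
    Files.move(target, Paths.get("file.bak"), StandardCopyOption.REPLACE_EXISTING);
    Files.move(fresh, target, StandardCopyOption.ATOMIC_MOVE);   // atomic rename
}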
Also, I'm not saying that writes always succeed, or that applications don't crash, or that the power never goes off. And the point of the checksum is to allow you to detect the cases where these things have happened and the autosave file is incomplete.
Finally, it is a good idea to have two autosaves in case your application gets itself into a state where its data structures are messed up and the last autosave is nonsensical as a result. (The checksum won't protect against this.) Be cautious about autosaving when the application crashes for the same reason.
As an aside, since you have several different files as part of this one document, consider using either a project directory to hold them all together, or using some encapsulation format (like .zip) to put them all inside one file.
What you want to do is atomically replace the old backup files with new ones. Unfortunately, I don't believe that Java gives you enough control to do this directly. You also need to reason about what operations are atomic in the underlying operating system. I know Linux file systems, so my answer will be biased towards a Java program running on that system. I would be shocked if Windows didn't do the same thing, but I can't say for certain.
Most Linux file systems (e.g. the meta-data journaled ones) let you rename files atomically. If the system crashes half-way through a rename, when you restart, it will be as if you never renamed a file in the first place. For this reason, a common way to atomically update an existing file F is to write your new data to a temporary file T and then rename T to F. Any system or application crash up to that rename will not affect F, so it will always be consistent.
Of course, before you rename, you need to make sure that your temporary file is consistent. Make sure that all streaming buffers for the file are flushed to the OS (Channel.force() or OutputStream.flush()) and that the OS buffers are flushed to the disk (FileOutputStream.getFD().sync()). Of course, unless your OS disables the write cache on the hard disk itself (it probably doesn't), there's still a chance that your data can be corrupted. Add a checksum to the XML if you really want to be sure. If you're truly paranoid, you should flush the OS and hard disk buffer caches and re-read the file to verify that it is consistent. This is beyond any reasonable expectation for normal consumer applications.
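A minimal sketch of that flush-then-sync-then-rename sequence (file names and payload are illustrative):

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

void atomicWrite(byte[] xmlBytes) throws IOException {
    FileOutputStream out = new FileOutputStream("main.xml.tmp");
    try {
        out.write(xmlBytes);
        out.flush();            // stream buffers -> OS
        out.getFD().sync();     // OS buffers -> disk
    } finally {
        out.close();
    }
    Files.move(Paths.get("main.xml.tmp"), Paths.get("main.xml"),
            StandardCopyOption.ATOMIC_MOVE);   // atomic on POSIX filesystems
}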
But that's just how to atomically write a single file. Your problem is more complex: you have many files to update atomically. For example, say you have two files, img.png and main.xml. I'd do one of these:
The easy solution is to make a per-savefile directory. You wouldn't need to worry about renaming each individual file; you just swap the new backup dir in place of the old one. That is, if your old backup is bak/img.png and bak/main.xml, write bak.tmp/img.png and bak.tmp/main.xml, rename bak to bak.old, rename bak.tmp to bak, and then delete bak.old. (A rename cannot atomically replace a non-empty directory, hence the extra step.)
Name the new auxiliary files something else and let them coexist with the old ones for a little while. That is, write img.2.png and main.xml.tmp (which should refer to img.2.png, not img.png) and only rename main.xml.tmp to main.xml. Then delete img.png.
Addition: if you don't have atomic renames, the next best thing builds on #2. Whenever you save the project, give it a new name (e.g. ver342.xml). When you load, just find the most recent XML that is consistent (i.e. its checksum verifies). Keep around 2 or 3 to be safe. Only delete an auto-save once you have successfully restored from a more recent copy.
I'm working on a small Java application (Java 1.6, Solaris) that will use multiple background threads to monitor a series of text files for output lines that match a particular regex pattern and then make use of those lines. I have one thread per file; they write the lines of interest into a queue and another background thread simply monitors the queue to collect all the lines of interest across the whole collection of files being monitored.
One problem I have is when one of the files I'm monitoring is reopened. Many of the applications that create the files I'm monitoring will simply restart their logfile when they are restarted; they don't append to what's already there.
I need my Java application to detect that the file has been reopened and restart following the file.
How can I best do this?
Could you keep a record of the length of each file? When the current length subsequently goes back to zero, or is smaller than it was the last time you recorded it, you know the file has been restarted by the app.
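A minimal sketch of that length check (Java 1.6 compatible; the class and method names are illustrative):

import java.io.File;
import java.util.HashMap;
import java.util.Map;

class TruncationDetector {
    private final Map<String, Long> lastLengths = new HashMap<String, Long>();

    // Returns true when the file is shorter than last time, i.e. it was recreated
    boolean wasTruncated(File f) {
        long current = f.length();
        Long previous = lastLengths.put(f.getPath(), current);
        return previous != null && current < previous;
    }
}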
Using a lockfile is a solution, as Jurassic mentioned.
Another way is to re-check the file's attributes while you're following it, to see whether it has a new size and creation time. If the creation time is NOT the same as when you first found the file, you can be sure that it has been recreated.
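A minimal sketch of the creation-time check via java.nio.file (note this API needs Java 7+, whereas the question targets Java 1.6, and some filesystems do not record a creation time at all):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;
import java.nio.file.attribute.FileTime;

void followLoop() throws IOException {
    Path log = Paths.get("app.log");
    FileTime firstSeen = Files.readAttributes(log, BasicFileAttributes.class).creationTime();
    // ... later, while tailing the file ...
    FileTime now = Files.readAttributes(log, BasicFileAttributes.class).creationTime();
    if (!now.equals(firstSeen)) {
        // the file was recreated: reopen it and start reading from the beginning
    }
}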
You could put a marker on the filesystem to indicate that you are reading a given file. Suppose that next to the file being read (a.txt) you create a file (a.txt.lock) that indicates a.txt is being read. When your process is done with it, a.txt.lock is deleted. Every time a process goes to open a file to read it, it checks for the lock file beforehand. If there is no lockfile, the file is not being used. I hope that makes sense and answers your question. cheers!
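A minimal sketch of that lockfile convention (names are illustrative; note that it is only a convention, since nothing forces unrelated processes to honor it):

import java.io.File;
import java.io.IOException;

void processIfFree() throws IOException {
    File lock = new File("a.txt.lock");
    // createNewFile() is atomic: it returns false if the lock file already exists
    if (lock.createNewFile()) {
        try {
            // read/process a.txt here
        } finally {
            lock.delete();    // release so other processes can proceed
        }
    }
}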