My Android application downloads data only on the first launch. The data is ~50 MB in ~2500 files.
1. Is it a good idea to record whether the files have been downloaded in SharedPreferences? The problem is that if a user clears the application data (maybe by mistake), he has to redownload everything. I manually copy a prepackaged database to /data/data/../databases/. Is it a good idea to check whether the db exists, and if not, download everything? Something like:
if (new File("/data/data/../databases/myDB.db").exists()) { /* don't download */ }
2. Is getting the folder size and checking that it is unchanged a good way to verify that the folder and its data are intact? Or is there a better way to check whether two folders are the same?
Thanks.
No, do not put 50 MB of data into SharedPreferences. That will fall over and die. A set of SharedPreferences is stored as XML on disk and loaded entirely into RAM when opened. It also won't keep the user from clearing the data.
For determining whether the data has been downloaded, I would suggest simply creating a file once the download is complete to indicate that it is done. The user can't selectively remove files. They can clear your data, but that will also clear the sentinel file, and you will know you need to re-download. (Also keep in mind that you will need to deal with restarting the download if it gets interrupted in the middle.)
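A minimal sketch of that sentinel-file check, assuming a hypothetical downloadEverything() helper and the app's private files directory:

import java.io.File;
import java.io.IOException;

// Sketch only: downloadEverything() is a placeholder for the real download logic.
void ensureDataDownloaded(File filesDir) throws IOException {
    File sentinel = new File(filesDir, "download_complete");
    if (sentinel.exists()) {
        return; // a previous run finished the download
    }
    downloadEverything(); // safe to re-run if a prior attempt was interrupted
    if (!sentinel.createNewFile()) { // created only after the download fully succeeded
        throw new IOException("could not create sentinel " + sentinel);
    }
}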
Also be sure you correctly handle filesystem operations as described here: http://android-developers.blogspot.com/2010/12/saving-data-safely.html
An alternative idea if you're worried about missing data files: if at any point your app looks for a file and it doesn't exist, throw an exception and pass it to a handler that shows a dialog and 'verifies' your data. You can keep a list of all needed data files and then download only the ones that don't exist. Something like a system check, if you will.
That way they don't end up downloading 50 MB if they were only missing a couple of files they accidentally deleted in a root explorer ;-)
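A sketch of that "system check", assuming a hypothetical manifest listing every required file name:

import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Sketch: requiredFiles would come from a manifest shipped with the app.
List<String> findMissingFiles(File dataDir, List<String> requiredFiles) {
    List<String> missing = new ArrayList<>();
    for (String name : requiredFiles) {
        if (!new File(dataDir, name).exists()) {
            missing.add(name); // only these need to be re-downloaded
        }
    }
    return missing;
}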
I want to move two files to a different directory in the same filesystem.
As a concrete example, I want to move /var/bigFile to /var/path/bigFile, and /var/smallFile to /var/path/smallFile.
Currently I use Files.move(source, target) without any options, moving the small file first and the big file second. I need this order because there is another process waiting for these files to arrive, and the order is important.
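For reference, a minimal sketch of that move sequence (paths as in the example; error handling omitted):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class MoveInOrder {
    public static void main(String[] args) throws IOException {
        // No copy options: within the same filesystem each move is a simple rename.
        Files.move(Paths.get("/var/smallFile"), Paths.get("/var/path/smallFile")); // small file first
        Files.move(Paths.get("/var/bigFile"), Paths.get("/var/path/bigFile"));     // big file second
    }
}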
The problem is that sometimes I see a creation date for the small file that is later than the creation date for the big file, as if the move order were not followed.
Initially I thought I had to do a sync, but that does not make sense.
Given that the move is actually a simple rename, there are no system buffers involved that would need to be flushed to disk.
The timestamps were checked using the ls -alrt command.
Does anyone have any idea what could be wrong?
I am writing a program that parses XML files that hold tourist attractions for cities. Each city has its own XML file, and the nodes have info like cost, address, etc. I want a thread on a timer to check a specific directory for new XML files, or for more recent versions of existing ones. Creating the thread is not the problem; I just have no idea what the best way to check for these new or changed files is. Does anyone have suggestions for an easy way to do that? I was thinking of creating a CSV file with the name and date-altered info for each file processed, and then checking against this CSV file when I go to check for new or altered XML, but that seems overly complicated and I would like a better solution. I have no code to offer at this point for this mechanism; I am just looking for a direction to go in.
The idea is that as I get XML files for different cities fitting the schema, the program will update my db automatically the next time it runs, or periodically if it is already running.
To avoid polling you should watch the directory containing the XML files. Oracle has extensive documentation on the topic at Watching a Directory for Changes.
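A minimal sketch of that approach using java.nio.file.WatchService (the city-xml directory name is a placeholder):

import java.nio.file.*;

public class XmlDirWatcher {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("city-xml"); // hypothetical directory holding the city XML files
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE,
                              StandardWatchEventKinds.ENTRY_MODIFY);
        while (true) {
            WatchKey key = watcher.take(); // blocks until events arrive
            for (WatchEvent<?> event : key.pollEvents()) {
                Path changed = (Path) event.context();
                if (changed.toString().endsWith(".xml")) {
                    System.out.println(event.kind() + ": " + changed);
                    // parse the file and update the db here
                }
            }
            if (!key.reset()) break; // directory is no longer accessible
        }
    }
}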
What you are describing looks like asynchronous feeding of new info. One common pitfall with such problems is a race condition: what happens if you try to read a file while it is being modified, or if something else tries to write a file while you are reading it? What happens if your app (or the app that edits your XML files) breaks in the middle of processing? To avoid such problems you should move files (change their name or directory) to reflect their status, because moves are atomic operations on normal file systems. If you want a bulletproof solution, you should have:
files being edited or transferred by an external party
files fully edited or transferred and ready to be read by your app
files being processed
files completely processed
files containing errors (tried to process them but could not complete processing)
The first two are under external responsibility (you just define an interface contract); the others are under yours. The cost is 4 or 5 directories (if you choose that solution), and a sketch of the workflow follows the list below; the gains are:
if there is any problem while editing or transferring an XML file, the external app just has to restart its operation
if a file can't be processed (syntax error, oversized, ...) it is put aside for further analysis but does not prevent the processing of other files
you only have to watch almost-empty directories
if your app breaks in the middle of processing a file, it can restart that processing at the next start
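A sketch of that status-directory workflow (directory names and the parse step are placeholders):

import java.io.IOException;
import java.nio.file.*;

public class FeedProcessor {
    static final Path READY = Paths.get("feed/ready");     // fully transferred, ready for us
    static final Path WORK  = Paths.get("feed/processing");
    static final Path DONE  = Paths.get("feed/processed");
    static final Path FAIL  = Paths.get("feed/errors");

    static void processOne(Path file) throws IOException {
        // An atomic rename claims the file, so a crash never leaves it half-read in READY.
        Path claimed = Files.move(file, WORK.resolve(file.getFileName()),
                                  StandardCopyOption.ATOMIC_MOVE);
        try {
            parse(claimed); // placeholder for the real XML parsing
            Files.move(claimed, DONE.resolve(claimed.getFileName()));
        } catch (Exception e) {
            Files.move(claimed, FAIL.resolve(claimed.getFileName())); // set aside for analysis
        }
    }

    static void parse(Path p) { /* placeholder */ }
}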
Let's say I have a database with a few hundred blobs stored in separate files that I want to move to the SD card. What is the best way of programmatically doing this while ensuring nothing gets left behind?
I realize I can only copy files across different mounts.
So can I copy all the files to a subdirectory in "cache" and then move the subdirectory to the correct spot atomically? Or can I write all the files to a directory with a "temp" prefix and rename it in place once verified?
Whether the folder can be moved (I mean, relocated just by renaming) depends on the file system, and it is probably better not to rely on it. Also, even when it works, it only works inside the boundaries of a single mount. I would propose simply renaming the folder after all files have been successfully copied.
If a folder with the "temp" prefix is discovered after a previous crash, it can simply be removed ("rollback").
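A sketch of that copy-then-rename idea (directory names and the copy/delete helpers are placeholders):

import java.io.File;
import java.io.IOException;

public class SdCardExport {
    // Sketch: copy everything into a "temp"-prefixed directory, then rename it into place.
    static void exportBlobs(File sdRoot) throws IOException {
        File tmpDir = new File(sdRoot, "blobs.tmp");
        File finalDir = new File(sdRoot, "blobs");
        deleteRecursively(tmpDir); // rollback of any earlier crashed attempt
        if (!tmpDir.mkdirs()) {
            throw new IOException("could not create " + tmpDir);
        }
        copyAllBlobsInto(tmpDir); // placeholder: copy each file and verify it
        if (!tmpDir.renameTo(finalDir)) { // same mount, so this is a rename, not a copy
            throw new IOException("rename failed");
        }
    }

    static void deleteRecursively(File f) { /* placeholder */ }
    static void copyAllBlobsInto(File dir) { /* placeholder */ }
}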
I would recommend finding a library with an algorithm that is designed for problems such as this. You might want to start by reviewing this question Any good rsync library for Java?
I am stuck with a problem when reading files from an FTP server. It appears that I get empty files, but I know (kind of for sure :-) ) that no empty files are uploaded. My strong suspicion is that I start downloading before a file has been completely uploaded.
Unfortunately I do not have the possibility to change the way files are uploaded, so I need to find a workaround on my side.
My idea was to check the mtime (last modification date) of the file: if it is more than 30 s in the past, it should be safe to start downloading. During my tests I uploaded a file and checked its mtime; unfortunately it was 13 s in the future.
Now, finally, my question:
Is there a way to get the current system time of the FTP server, so that I could calculate an offset? In the SFTP framework I am using (com.jcraft.jsch) there are functions like getExtension(), but I cannot find any useful information on that method.
Cheers,
Christian
Before getting files from the remote server, do this:
1. Put an empty file in the remote location.
2. Get the last modified time of the file put in step 1. This roughly gives the server's system time.
3. List the actual files you want to get, get their last modified times, and compare them with the system time from step 2.
4. Delete the empty file created in step 1, if you do not like it being there.
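A sketch of those four steps with com.jcraft.jsch (host, credentials, and paths are placeholders; JSch reports mtimes in epoch seconds):

import com.jcraft.jsch.*;
import java.io.ByteArrayInputStream;

public class SftpClockOffset {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "ftp.example.com", 22);
        session.setPassword("secret");
        session.setConfig("StrictHostKeyChecking", "no"); // fine for a sketch, not for production
        session.connect();
        ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
        sftp.connect();

        // Steps 1-2: put an empty probe file and read its mtime -> the server's "now".
        sftp.put(new ByteArrayInputStream(new byte[0]), "/remote/dir/.probe");
        int serverNow = sftp.stat("/remote/dir/.probe").getMTime();

        // Step 3: a file is safe to download if it is older than 30 s by the server's clock.
        SftpATTRS attrs = sftp.stat("/remote/dir/data.xml");
        boolean safe = attrs.getMTime() < serverNow - 30;
        System.out.println("safe to download: " + safe);

        sftp.rm("/remote/dir/.probe"); // step 4: clean up the probe file
        sftp.disconnect();
        session.disconnect();
    }
}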
I'm adding autosave functionality to a graphics application in Java. The application periodically autosaves the current document and also autosaves on exit. When the user starts the application, the autosave file is reloaded.
If the autosave file is corrupted in any way (I assume a power cut when the file is in the middle of being saved would do this?), the user will lose their work. How can I prevent such situations and do all I can to guarantee that the autosave document is in a consistent state?
To further complicate matters, to autosave the document I need to save one .xml file and several .png files. Also, the .png saving occurs in C code over JNI.
My current strategy is to write each .png with the extension .png.tmp, write the .xml file with the extension .xml.tmp, and then rename each file to remove the .tmp part leaving the .xml until last. On startup, I only load the autosave document if I can find a .xml file and ignore .xml.tmp files. I also don't delete the previous autosave document until the .xml.tmp file for the new document is renamed.
I guess my knowledge of what happens when you write to disk is poor. I know you can have software read/write buffers when using files, as well as OS and hardware buffers, and that all of these need to be flushed. I'm confused about how I can know for sure when something really has been written to disk, and about what I can do to protect myself. Does the renaming operation do anything to make sure buffers are flushed?
To prevent loss of data due to partially written autosave file, don't overwrite the autosave file. Instead, write to a new file each time, and then rename it once the file has been safely written.
To guard against not noticing that an autosave file has not been correctly written:
Pay attention to the exceptions thrown as the autosave file is written and closed, in case of a disc error, a full file system, etc.
Keep a running checksum of the file as it is written and write it at the end of the file. Then, when you load the autosave file, check that the checksum is there and is correct (a sketch follows this list).
If the checkpointed state involves multiple files, make sure that you write the files in a well-known order (without overwriting!), and write the checksum on the autosave file after all of the other files have been safely closed. You might want to create a directory for each checkpoint.
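A sketch of the running-checksum idea using java.util.zip.CRC32; the layout here (document bytes followed by an 8-byte checksum) is just one possible convention:

import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.zip.CRC32;

public class ChecksummedAutosave {
    static void save(byte[] document, Path target) throws IOException {
        CRC32 crc = new CRC32();
        crc.update(document);
        try (FileOutputStream fos = new FileOutputStream(target.toFile());
             DataOutputStream out = new DataOutputStream(fos)) {
            out.write(document);
            out.writeLong(crc.getValue()); // checksum goes at the very end
            out.flush();
            fos.getFD().sync(); // push OS buffers to the disk before any rename
        }
    }

    static byte[] load(Path source) throws IOException {
        byte[] all = Files.readAllBytes(source);
        if (all.length < 8) throw new IOException("truncated autosave");
        byte[] document = Arrays.copyOf(all, all.length - 8);
        long stored = ByteBuffer.wrap(all, all.length - 8, 8).getLong();
        CRC32 crc = new CRC32();
        crc.update(document);
        if (crc.getValue() != stored) throw new IOException("corrupt autosave");
        return document;
    }
}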
FOLLOW UP
No. I'm not saying that rename always succeeds. However, it is atomic - it either succeeds (and completes) or the file system is not changed. So, if you do this:
write "file.new" and close,
delete "file",
rename "file.new" to "file"
then, provided the first step succeeds, you are guaranteed to have the latest "file" safely on disc. And it is simple to add a couple of steps so that you have a backup of "file" at all times. (If the 3rd step fails, you are left with "file.new" and no "file". This can be recovered manually, or automatically by the application the next time it runs.)
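A sketch of those three steps with java.nio.file (ATOMIC_MOVE is only a request; whether the rename is truly atomic still depends on the file system):

import java.io.IOException;
import java.nio.file.*;

public class AtomicReplace {
    static void replaceAutosave(byte[] data, Path file) throws IOException {
        Path fresh = file.resolveSibling(file.getFileName() + ".new");
        Files.write(fresh, data);   // step 1: write "file.new" and close
        Files.deleteIfExists(file); // step 2: delete "file"
        Files.move(fresh, file,     // step 3: rename "file.new" to "file"
                   StandardCopyOption.ATOMIC_MOVE);
    }
}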
Also, I'm not saying that writes always succeed, or that applications don't crash, or that the power never goes off. And the point of the checksum is to allow you to detect the cases where these things have happened and the autosave file is incomplete.
Finally, it is a good idea to have two autosaves in case your application gets itself into a state where its data structures are messed up and the last autosave is nonsensical as a result. (The checksum won't protect against this.) Be cautious about autosaving when the application crashes for the same reason.
As an aside, since you have several different files as part of this one document, consider either a project directory that holds them all together, or some encapsulation format (like .zip) that puts them all inside one file.
What you want to do is atomically replace the old backup files with new ones. Unfortunately, I don't believe that Java gives you enough control to do this directly. You also need to reason about which operations are atomic in the underlying operating system. I know Linux file systems, so my answer will be biased towards a Java program running on that system. I would be shocked if Windows didn't do the same thing, but I can't say for certain.
Most Linux file systems (e.g. the meta-data journaled ones) let you rename files atomically. If the system crashes half-way through a rename, when you restart, it will be as if you never renamed a file in the first place. For this reason, a common way to atomically update an existing file F is to write your new data to a temporary file T and then rename T to F. Any system or application crash up to that rename will not affect F, so it will always be consistent.
Of course, before you rename, you need to make sure that your temporary file is consistent. Make sure that all streaming buffers for the file are flushed to the OS (Channel.force() or OutputStream.flush()) and that the OS buffers are flushed to the disk (FileOutputStream.getFD().sync()). Of course, unless your OS disables the write cache on the hard disk itself (it probably hasn't), there's still a chance that your data can be corrupted. Add a checksum to the XML if you really want to be sure. If you're truly paranoid, you should flush the OS and hard disk buffer caches and re-read the file to verify that it is consistent. This is beyond any reasonable expectation for normal consumer applications.
But that's just to atomically write a single file. Your problem is more complex: you have many files to update atomically. For example, say you have two files, img.png and main.xml. I'd do one of these:
The easy solution is to make a per-savefile directory. You wouldn't need to worry about renaming each individual file, and you could still replace the old backup dir with the new one by renames. That is, if your old backup is bak/img.png and bak/main.xml, write bak.tmp/img.png and bak.tmp/main.xml, move the old bak aside, and rename bak.tmp to bak.
Name the new auxiliary files something else and let them coexist with the old ones for a little while. That is, write img.2.png and main.xml.tmp (which should refer to img.2.png, not img.png), and only then rename main.xml.tmp to main.xml. Then delete img.png.
Addition: if you don't have atomic renames, the next best thing extends on #2. Whenever you save the project, give it a new name (e.g. ver342.xml). When you load, just find the most recent XML file that is consistent (i.e. its checksum verifies). Keep around 2 or 3 to be safe, and only delete an autosave once you have successfully restored from a more recent copy.
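A sketch of that recovery scan, assuming the ver342.xml naming above and a placeholder checksum check:

import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

public class AutosavePicker {
    // Find the most recent versioned autosave whose checksum verifies.
    static File newestValidAutosave(File dir) {
        File[] saves = dir.listFiles((d, name) -> name.matches("ver\\d+\\.xml"));
        if (saves == null || saves.length == 0) return null;
        Arrays.sort(saves, Comparator.comparingInt(AutosavePicker::versionOf).reversed());
        for (File f : saves) {
            if (isConsistent(f)) return f; // first hit is the newest valid save
        }
        return null; // nothing usable; fall back to a blank document
    }

    static int versionOf(File f) {
        return Integer.parseInt(f.getName().replaceAll("\\D", "")); // "ver342.xml" -> 342
    }

    static boolean isConsistent(File f) { /* placeholder: verify the checksum */ return true; }
}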