Avoiding fragmentation when saving files to BlackBerry filesystem. Best practice? - java

In my application I need to save a file (a PDF) to the filesystem. My current method involves creating a directory for storing the files:
FileConnection fc = (FileConnection) Connector.open("file:///SDCard/BlackBerry/pdfs/");
if (!fc.exists()) {
    fc.mkdir();
}
fc.close();
I then write my file into that directory:
fc = (FileConnection) Connector.open("file:///SDCard/BlackBerry/pdfs/" + filename, Connector.READ_WRITE);
if (!fc.exists()) {
    fc.create();
}
OutputStream outStream = fc.openOutputStream();
outStream.write(pdf);
outStream.close();
fc.close();
This all works fine, and my PDF arrives in my created directory. My question is: will I run into trouble because I have hard-coded a file path as my save destination? With the BlackBerry API, is it possible to retrieve a writable folder that exists on all models/configurations?

You can query the system for the available roots using FileSystemRegistry.listRoots(). Note that it is not guaranteed that there will be an sdcard, or that it will be visible even if there is one (when in mass storage mode, for instance). I think that the only root guaranteed to be on all devices is internal storage ("file:///Store").
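A minimal sketch of that approach (the "BlackBerry/pdfs/" subdirectory is carried over from the question; the home/user/ path for internal storage is an assumption about typical device layouts):
import java.util.Enumeration;
import javax.microedition.io.file.FileSystemRegistry;

// Prefer the SD card root when it is listed; otherwise fall back to
// internal storage ("store/"), which should exist on every device.
String root = "store/";
Enumeration roots = FileSystemRegistry.listRoots();
while (roots.hasMoreElements()) {
    String candidate = (String) roots.nextElement();
    if (candidate.equalsIgnoreCase("SDCard/")) {
        root = candidate;
        break;
    }
}
// On internal storage the writable area is typically under home/user/.
String dirUrl = "SDCard/".equalsIgnoreCase(root)
        ? "file:///" + root + "BlackBerry/pdfs/"
        : "file:///" + root + "home/user/pdfs/";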
There's (a little) more information here.

Related

Amazon Kindle Fire HD Saving Files onto MicroSD Cards

Before anyone points out that this question has already been asked, I would like to state that I have tried around five other options and possible solutions with no result.
Here is a snippet of my code (just a snippet). With my code as it currently stands, a file is being saved in the main storage directory, /ScoutingApp. However, I would like the files to save into a /ScoutingApp/ folder on the MicroSD card so I can eject the data more quickly.
if (Environment.MEDIA_MOUNTED.equals(state)) {
    File root = Environment.getExternalStorageDirectory();
    File dir = new File(root.getAbsolutePath() + "/ScoutingApp");
    if (!dir.exists()) {
        dir.mkdir();
    }
    // create the file whether or not the directory already existed
    filename = UUID.randomUUID().toString() + ".sql";
    File file = new File(dir, filename);
If the Android that your Fire OS is based on is Android 4.4+, you can try getExternalFilesDirs() on any Context (such as an Activity). Note the plural form — if this method returns 2+ items, the second and subsequent ones are on removable storage. Those locations will be specific for your app, and you can read from and write to those locations without permissions.
Note, though, that Fire OS is not completely compliant with the Play ecosystem's compatibility requirements, and so YMMV.
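A minimal sketch of that approach, assuming API level 19+ and code running inside an Activity; the UUID-based filename is carried over from the question:
import java.io.File;
import java.util.UUID;

// getExternalFilesDirs(null) returns the app-specific directories on all
// storage volumes: index 0 is primary storage, later entries (if any) are
// removable media. Entries can be null while the media is ejected.
File[] dirs = getExternalFilesDirs(null);
File target = dirs[0];
for (int i = 1; i < dirs.length; i++) {
    if (dirs[i] != null) {
        target = dirs[i];
        break;
    }
}
File file = new File(target, UUID.randomUUID().toString() + ".sql");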

Do I need to delete tmp files created by my java application?

I output several temporary files from my application to tmp directories, but I was wondering: is it best practice to delete them on close, or should I expect the host OS to handle this for me?
I am pretty new to Java. I can handle the delete, but I want to keep the application as multi-OS and Linux-friendly as possible, and I'd rather avoid deleting files if I don't need to.
This is the method I am using to output the tmp file:
try {
    // Load the bundled PDF from the classpath. IOUtils.toByteArray
    // (Apache Commons IO) already reads the entire stream, so no
    // additional read() call is needed afterwards.
    java.io.InputStream iss = getClass().getResourceAsStream("/nullpdf.pdf");
    byte[] data = IOUtils.toByteArray(iss);
    iss.close();

    File temp = File.createTempFile("file", ".pdf");
    FileOutputStream fos = new FileOutputStream(temp);
    fos.write(data);
    fos.flush();
    fos.close();

    nopathbrain = temp.getAbsolutePath();
    System.out.println(nopathbrain);
} catch (IOException ex) {
    ex.printStackTrace();
    System.out.println("TEMP FILE NOT CREATED - ERROR");
}
createTempFile() only creates a new file with a unique name, but does not mark it for deletion. Use deleteOnExit() on the created file to achieve that. Then, if the JVM shuts down properly, the temporary files should be deleted.
edit:
Sample for creating a 'true' temporary file in java:
File temp = File.createTempFile("temporary-", ".pdf");
temp.deleteOnExit();
This will create a file in the default temporary folder with a unique random name (temporary-{randomness}.pdf) and delete it when the JVM exits.
This should be sufficient for programs with a short to medium run time (e.g. scripts, simple GUI applications) that do something and then exit. If the program runs longer or indefinitely (a server application, a monitoring client, ...) and the JVM won't exit, this method may clog the temporary folder with files. In such a situation the temporary files should be deleted by the application as soon as they are no longer needed (see delete() or the Files helper class in JDK 7).
As Java already abstracts away OS-specific file system details, both approaches are as portable as Java itself. To ensure interoperability, have a look at the new Path abstraction for file names in Java 7.
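For a long-running application, a sketch of the explicit-cleanup variant using the JDK 7 java.nio.file API (the data byte array stands in for whatever content you are writing):
import java.nio.file.Files;
import java.nio.file.Path;

Path temp = Files.createTempFile("temporary-", ".pdf");
try {
    Files.write(temp, data);
    // ... use the file while it is needed ...
} finally {
    // delete as soon as the file is no longer needed,
    // instead of waiting for the JVM to exit
    Files.deleteIfExists(temp);
}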

How to load a random access or memory mapped file hosted online from an applet

I have a Processing script in which I'm plotting a small subset of data from a very large file that is constantly being updated. The data will be randomly accessed based on user input, and so to increase speed without loading the whole file into memory I've been memory mapping the file as a ByteBuffer. This works exceptionally well when run on a local machine, but unfortunately the end goal of the project is a web applet. Is there a good way of creating memory mapped or random access file structures from files accessed within an applet? I looked a lot at URL.openStream(), but couldn't figure out how to turn the InputStream into what I need. Here is what I had for the offline app:
// within my GraphicPlot class:
ByteBuffer bbuf;
FileChannel fc;

SetUpPlot(String fileName) {
    try {
        FileInputStream instream = new FileInputStream(fileName);
        fc = instream.getChannel();
        bbuf = fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());
    }
    catch (IOException e) {
        e.printStackTrace();
        exit();
    }
}
No matter how I send the URL to the function (absolute URL, relative URL, etc) I get a file not found error.
Any help would be greatly appreciated!
EDIT: I should probably mention that the applet will be signed, and the file in question will be hosted in the same directory as the applet, so there shouldn't be any applet permission problems.
FileInputStream and memory mapping only work for files on your local system. You can't memory-map a file that is only available via HTTP or some other protocol. To use FileInputStream or memory mapping, you must first take a copy on the local system.
This is not a limitation of Java but a limitation of the operating system.
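A sketch of that workaround: stream the remote file into a local temporary copy, then memory-map the copy. getCodeBase() assumes the code runs inside the applet, and fileName is the name from the question:
import java.io.*;
import java.net.URL;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

URL url = new URL(getCodeBase(), fileName); // file hosted next to the applet
InputStream in = url.openStream();
File local = File.createTempFile("plotdata", ".bin");
local.deleteOnExit();
OutputStream out = new FileOutputStream(local);
byte[] buf = new byte[8192];
int len;
while ((len = in.read(buf)) > 0) {
    out.write(buf, 0, len);
}
out.close();
in.close();

// The local copy can now be mapped exactly as in the offline version.
FileChannel fc = new FileInputStream(local).getChannel();
ByteBuffer bbuf = fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());
Note that this does download the whole file once, which is the price of the answer above: mapping requires a real local file.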

Creating and reading a one time created file

I want to store a list of strings in a file.
I need to create it just one time, and after that I will read and write to it programmatically.
My question is: where in the file system should I create the file (manually) so that it will be best for reading and writing?
Thanks.
You can create your file in your app's private directory, so no one can access it but your app:
getApplicationContext().getFilesDir().getAbsolutePath();
or on the SD card, if you want others to be able to access it or, maybe, if your file is very big:
File externalStorage = Environment.getExternalStorageDirectory();
If you intend to create your file manually, then I think the SD card is the only option, unless you have a rooted phone or are working with the emulator.
if (Environment.MEDIA_MOUNTED.equals(Environment.getExternalStorageState())) {
    // SD card is present
    File f = new File("/sdcard/YOURFILE.txt");
    if (!f.exists()) {
        // file is created only the first time
        f.createNewFile();
        // copy from an existing InputStream into the new file
        OutputStream out = new FileOutputStream(f);
        byte[] buf = new byte[1024];
        int len;
        while ((len = inputStream.read(buf)) > 0) {
            out.write(buf, 0, len);
        }
        out.close();
        inputStream.close();
        System.out.println("\nData Written");
    } else {
        // read/write on subsequent runs
    }
}
It really depends.
The problem with creating the file on the SD card is that a special permission is required in order to access it. If the app is only for yourself, that's cool. If you want to distribute it through Google MarketPlay (or whatever it is called these days), please know that some people (myself included) tend to look at the permissions and ask "why would an app doing X require permission to do Y?", and sometimes not install the app because of it.
If the manual part is done by the app's user, by all means store it on the SD card. It's the only place a standard, non-root user even has access to.
Generally speaking, however, a better place to store data is in /data/data/packagename. See Android's data storage for more details.
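A minimal sketch of that approach, run from an Activity or other Context; the filename "strings.txt" is just illustrative, and exception handling is omitted for brevity:
import java.io.*;
import android.content.Context;

// Written under /data/data/<package>/files with no permissions needed.
FileOutputStream out = openFileOutput("strings.txt", Context.MODE_PRIVATE);
out.write("first\nsecond\n".getBytes("UTF-8"));
out.close();

// Read the strings back, one per line.
BufferedReader in = new BufferedReader(
        new InputStreamReader(openFileInput("strings.txt"), "UTF-8"));
String line;
while ((line = in.readLine()) != null) {
    // use each stored string
}
in.close();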
Shachar
Add the file to the assets folder; then it will be in place cleanly after a new install. (Note that assets are read-only at runtime, so you would need to copy the file to writable storage before writing to it.)

Java I/O over an NFS mount

I have a bit of Java code that outputs an XML file to an NFS-mounted filesystem. On another server that has the filesystem mounted as a Samba share, there is a process running that polls for new XML files every 30 seconds. If a new file is found, it is processed and then renamed as a backup file. 99% of the time, the files are written without an issue. However, every now and then the backup file contains a partially written file.
After some discussion with some other people, we guessed that the process running on the external server was interfering with the Java output stream when it read the file. They suggested first creating a file of type .temp, which is then renamed to .xml after the file write is complete, a common industry practice. After the change, the rename fails every time.
Some research turned up that Java file I/O is buggy when working with NFS-mounted filesystems.
Help me Java gurus! How do I solve this problem?
Here is some relevant information:
My process is Java 1.6.0_16 running on Solaris 10
Mounted filesystem is a NAS
Server with polling process is Windows Server 2003 R2 Standard, Service Pack 2
Here is a sample of my code:
// Write the file
XMLOutputter serializer = new XMLOutputter(Format.getPrettyFormat());
FileOutputStream os = new FileOutputStream(outputDirectory + fileName + ".temp");
serializer.output(doc, os); // doc is a constructed XML document using JDOM
os.flush();
os.close();

// Rename the file
File oldFile = new File(outputDirectory + fileName + ".temp");
File newFile = new File(fileName + ".xml");
boolean success = oldFile.renameTo(newFile);
if (!success) {
    // File was not successfully renamed.
    throw new IOException("The file " + fileName + ".temp could not be renamed.");
}
You probably have to specify the complete path in the new file name:
File newFile = new File(outputDirectory + fileName + ".xml");
This looks like a bug to me:
File oldFile = new File(outputDirectory + fileName + ".temp");
File newFile = new File(fileName + ".xml");
I would have expected this:
File oldFile = new File(outputDirectory + fileName + ".temp");
File newFile = new File(outputDirectory + fileName + ".xml");
In general, it sounds like there is a race condition between the writing of the XML file and the read/process/rename task. Can you have the read/process/rename task only operate on files > 1 minute old or something similar?
Or, have the Java program write out an additional, empty file once it has completed writing out the XML file that signals that the writing to the XML file has completed. Only read/process/rename the XML file when the signal file is present. Then delete the signal file.
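A sketch of that signal-file idea, reusing outputDirectory and fileName from the question; the ".done" suffix is just an illustrative convention:
// Writer: create an empty marker only after the XML has been
// completely written and closed.
// ... write and close the .xml file as before ...
new File(outputDirectory + fileName + ".done").createNewFile();

// Poller: only touch XML files whose marker exists.
File done = new File(outputDirectory + fileName + ".done");
if (done.exists()) {
    // process and rename the .xml file ...
    done.delete();
}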
The original bug definitely sounds like an issue with concurrent access to the file -- your solution should have worked, but there are alternate solutions too.
For example, put a timer on your auto-read process so that when a new file is detected it records the file size, sleeps X seconds, and then, if the sizes don't match, restarts the timer. That should avoid problems with partial file transfers.
EDIT: or check the timestamp, as per the answer above, but make sure the file is old enough that any imprecision in the timestamp doesn't matter (say, 10 seconds to 1 minute since last modified).
Alternately, try this:
File f = new File("foo.xml");
FileOutputStream fos = new FileOutputStream(f);
FileChannel fc = fos.getChannel();
FileLock lock = fc.lock();
// (do the file write here)
fos.flush();
lock.release();
fos.close();
This SHOULD use native OS file locking to prevent concurrent access by other programs (such as your XML reader daemon).
As far as NFS glitches go: there is a documented "feature" (bug) whereby files can't be moved between filesystems via rename in Java. Could there be confusion, since it is on an NFS filesystem?
Some information on NFS in general: depending on your NFS settings, locks might not work at all, and many big NFS installations are tuned for read performance, so new data might turn up later than expected due to caching effects.
I have seen setups where a file's creation and initial data were visible from another machine, but everything written after that appeared only with a 30-second delay.
The best solution, by the way, is a rotating file scheme: the newest file is assumed to still be in the process of being written, while the one before it is known to be safely written and can be read. I would not work on a single file and use it as a "pipe".
You can alternatively use an empty file that is written after the large file has been written and closed properly. So if the small one is there, the big one is definitely done and can be read.
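A sketch of the rotating scheme, with illustrative sequence-numbered names; nextSequenceNumber() is a hypothetical helper (e.g. backed by a scan of the existing files):
import java.io.File;
import java.util.Arrays;

// Writer: always start a brand-new file with the next number.
int seq = nextSequenceNumber();
File current = new File(outputDirectory, String.format("report-%04d.xml", seq));
// ... write and close current ...

// Reader: sort by name and skip the newest file, which may still be in
// progress; everything older is complete and safe to read.
File[] files = new File(outputDirectory).listFiles();
Arrays.sort(files);
for (int i = 0; i < files.length - 1; i++) {
    // process files[i], then rename or delete it
}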
Possibly due to "The rename operation might not be able to move a file from one filesystem to another" from http://java.sun.com/j2se/1.5.0/docs/api/java/io/File.html#renameTo%28java.io.File%29
Try using Apache Commons IO's FileUtils.copyFileToDirectory (http://commons.apache.org/io/api-release/org/apache/commons/io/FileUtils.html#copyFileToDirectory(java.io.File,%20java.io.File)) instead.
