Practical Use of Temp Files - Java

What would be a practical use for temporary files (see code below)?
File temp = File.createTempFile("temp-file-name", ".tmp");
Why can't you just keep the data you would store in the file in some variables instead? If the file is (probably) going to be deleted on program exit (as "temp" implies), why even create it?
One example: when downloading a file, it often appears as a temporary file until the download completes.

The two reasons I know of:
As overflow storage for large chunks of data you don't need in memory at the moment, e.g. when doing memory-intensive tasks like video editing
As a somewhat hacky way of doing interprocess communication

Aside from the RAM-versus-disk point above, you may use temp files as precursor files, i.e. files about to be processed or served. For example, a server may generate a large PDF for a browser. That PDF would be stored as a temp file while the (possibly slow) browser downloads it. Once the communication is complete, the temp file can be destroyed.
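A minimal sketch of that pattern, assuming hypothetical generatePdf() and sendToClient() helpers:

import java.io.File;
import java.io.IOException;

public class ReportServer {
    // Generate a report into a temp file, serve it, then clean up.
    public void serveReport() throws IOException {
        File temp = File.createTempFile("report-", ".pdf");
        try {
            generatePdf(temp);   // possibly slow, large output
            sendToClient(temp);  // possibly slow client download
        } finally {
            temp.delete();       // destroy once the communication is complete
        }
    }

    private void generatePdf(File out) throws IOException { /* ... */ }
    private void sendToClient(File pdf) throws IOException { /* ... */ }
}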

For our little 'imagefilesystem' project (http://code.google.com/p/imagefilesystem/) we actually use the /tmp directory to store the thumbnails we create from the images in the local filesystem. The thumbnails are created on demand and are, as the name /tmp itself says, temporary in nature, so they don't build up GBs of permanent data.
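A rough sketch of that on-demand approach using the JDK's built-in ImageIO (the 128x128 size and the naming scheme are illustrative choices, not the project's actual code):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ThumbnailCache {
    // Return a cached thumbnail from the temp directory, creating it on demand.
    public static File thumbnailFor(File image) throws IOException {
        File cached = new File(System.getProperty("java.io.tmpdir"),
                               image.getName() + ".thumb.png");
        if (cached.exists()) {
            return cached; // generated on an earlier request
        }
        BufferedImage src = ImageIO.read(image);
        BufferedImage thumb = new BufferedImage(128, 128, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = thumb.createGraphics();
        g.drawImage(src, 0, 0, 128, 128, null); // scale into a 128x128 thumbnail
        g.dispose();
        ImageIO.write(thumb, "png", cached);
        return cached;
    }
}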

Related

Large file transfer over the network

I have a requirement where large zipped files (GBs in size) arrive in a directory on a Unix server (let's say server1), and I have to write an application that polls that directory and copies the files to another Unix server (let's say server2) as they arrive. I have a way to know when a single file has been completely copied into the directory (a corresponding metadata file only appears once that file's copy operation is complete). Since there are hundreds of files, we don't want to wait for all of them to be copied. Once the files are on server2, I have to unzip them and run some validations before landing them in my final repository.
Questions
What would be the appropriate technology for this scenario in terms of speed: shell scripting, Java, or something else?
Since we will be transferring the files one by one, how do we achieve parallelism (other than multithreading, if we use Java)?
Is there any existing library/package/tool that fits this scenario?
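A rough sketch of the polling side in plain Java 7 NIO.2 (the paths and the ".meta" naming convention are assumptions based on the question's description):

import java.nio.file.*;
import static java.nio.file.StandardWatchEventKinds.ENTRY_CREATE;

public class DirectoryPoller {
    public static void main(String[] args) throws Exception {
        Path incoming = Paths.get("/data/incoming");   // hypothetical directory on server1
        Path target = Paths.get("/mnt/server2");       // hypothetical mount of server2
        WatchService watcher = FileSystems.getDefault().newWatchService();
        incoming.register(watcher, ENTRY_CREATE);
        while (true) {
            WatchKey key = watcher.take();             // blocks until new files appear
            for (WatchEvent<?> event : key.pollEvents()) {
                Path created = incoming.resolve((Path) event.context());
                // The metadata file signals that its zip has been fully written.
                if (created.toString().endsWith(".meta")) {
                    Path zip = Paths.get(created.toString().replace(".meta", ".zip"));
                    Files.copy(zip, target.resolve(zip.getFileName()),
                            StandardCopyOption.REPLACE_EXISTING);
                }
            }
            key.reset();
        }
    }
}

Each Files.copy() call could be submitted to an ExecutorService to copy several files in parallel instead of running inline.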

Looking for an efficient file caching system

I'm currently developing an MMO which uses numerous sprites (image files), and I plan to store these files in a compressed state on the user's hard drive. I was wondering whether there is an existing implementation of an efficient, directory-based cache system that I can use to store these image files in different folders, compressed into either one file or multiple files. I have also been researching LZ4 (de)compression, and I suppose that would be useful as well, but it does not solve the directory issue.
Thanks!
EDIT: For example, one file should hold numerous image files.
If something like this does not exist, what would be the fastest way to compress multiple image files into one file, and then decompress them into memory when the program starts?
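One possibility, assuming nothing off the shelf fits: the JDK's built-in ZIP support (DEFLATE rather than LZ4) can pack many images into one archive file and read individual entries back into memory. A minimal sketch:

import java.io.*;
import java.nio.file.Files;
import java.util.zip.*;

public class SpritePack {
    // Pack several image files into a single compressed archive.
    public static void pack(File[] images, File archive) throws IOException {
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(archive))) {
            for (File img : images) {
                zos.putNextEntry(new ZipEntry(img.getName()));
                Files.copy(img.toPath(), zos);
                zos.closeEntry();
            }
        }
    }

    // Decompress one named entry back into memory at startup.
    public static byte[] load(File archive, String name) throws IOException {
        try (ZipFile zf = new ZipFile(archive);
             InputStream in = zf.getInputStream(zf.getEntry(name))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }
}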

Does saving files in a ".zip" folder speed up file write time to a network drive?

I know that when I write a new file to a folder that ends in ".zip" it compresses the file. This is when using BufferedOutputStream in Java and saving to a Windows file system. I'm saving these files to a network drive, so the write time is dependent on network speed.
Will saving to a .zip folder speed up write time? In other words, does it transfer the data uncompressed and then compress it (so it wouldn't speed up write time), or does it compress the data and then write out the file? Sorry if this is an ignorant question.
There are so many misconceptions in the question that I think it is worth going through them one at a time.
I know that when I write a new file to a folder that ends in ".zip" it compresses the file.
That is not correct. Creating a file with a ".zip" suffix does not automatically make it compressed. Neither does writing files to a directory whose name ends in ".zip" (?!?). Not in Java, and not in other languages.
In order to get compression, the application needs to take steps to make it happen. In Java you could use ZipOutputStream to write a file in ZIP format. However, ZIP is actually an "archive" format, designed to hold multiple files in a single ZIP file. If you are simply trying to compress a single file, there are better alternatives, e.g. GZIPOutputStream.
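For the single-file case, a minimal sketch (the file names are illustrative):

import java.io.*;
import java.util.zip.GZIPOutputStream;

public class GzipWriter {
    // Compress a single file as it is written out.
    public static void compress(File source, File target) throws IOException {
        try (InputStream in = new FileInputStream(source);
             OutputStream out = new GZIPOutputStream(new FileOutputStream(target))) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n); // bytes are compressed before reaching the disk
            }
        }
    }
}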
(It is also possible that this so-called "ZIP folder" you are talking about is a normal ZIP file that has been "mounted" as a loopback file system. You / someone else would have had to set that up explicitly. Anyhow, if this is what is going on here, it is nothing to do with Java. It is all happening in external software and in the operating system where the ZIP is "mounted".)
This is when using BufferedOutputStream in Java and saving to a Windows file system.
Erm ... no. See above. However, you are correct that it may be better to use a BufferedOutputStream to write files, though it only really helps if your application writes the files in small chunks, e.g. a byte at a time. (Stream compression complicates the issue, so it is difficult to give a simple, general answer on this.)
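To illustrate the small-chunk case, a sketch where buffering makes a real difference (the one-byte writes are deliberately pathological):

import java.io.*;

public class BufferedWriteDemo {
    public static void main(String[] args) throws IOException {
        // Without the BufferedOutputStream wrapper, each write(...) call below
        // would hit the file system individually; with it, the bytes accumulate
        // in an in-memory buffer and are flushed to disk in large blocks.
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream("data.bin"))) {
            for (int i = 0; i < 1_000_000; i++) {
                out.write(i & 0xFF);
            }
        }
    }
}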
I'm saving these files to a network drive, so the write time is dependent on network speed.
Correct. It is also dependent on network latency, the protocols used and the load on the remote file server. (If you have a ZIP "mounted", then that is going to add overheads too.)
Will saving to a .zip folder speed up write time?
Maybe. See above. It depends what you mean by a ZIP folder.
Ignoring that, writing the files (the right way) in compressed and / or archive form from Java may speed up writes. There are actually two things to consider:
For plain compression, you are trading off the time it takes the application (!!) to compress and decompress the data against the time (and disk space) you save by moving and storing fewer bytes.
For ZIP files (and similar archive formats) there is a second potential saving. Storing and retrieving lots of individual small files from a file system is slow compared with storing and retrieving a single ZIP file containing those files.
And if you are looking for optimal compression, then ZIP is not the best option.
In other words, does it transfer the data uncompressed and then compress it (so it wouldn't speed up write time), or does it compress the data and then write out the file?
There are so many variables that it is hard to say for sure. But unless you have done something odd, it is likely that the bytes are sent over the network in compressed form.
Finally, I would advise you NOT to try to combine mounted ZIP files and network shares:
The combination of the two could potentially interact in ways that makes performance worse.
There is a risk that you will end up with a corrupted ZIP or lost files if the network share goes offline at an inconvenient point.

What is the best way to transfer bytes between files on a network in Java [duplicate]

This question already exists:
Does Java FileChannel.transferTo() work cleverly when files are on a network?
Closed 7 years ago.
The code is written in Java 1.7
I want to make some major modifications to a binary file on a slow network. To protect against the network connection being lost, instead of writing directly to the file I write to a new file. When I have finished writing the new file, I delete the old file and rename the new file to the old name.
My question is: is it better for the new file to be
1. In the same location as the original file, or
2. Local to the computer?
With 1, writing to the file could be slower, but the rename should be quicker; in fact, on most OSes it would be immediate. With 2, writing to the file should be quicker, but then renaming the file would be slower.
I feel the answer is 1.
Actually, if I open a FileChannel to both files and transfer bytes directly from one channel to the other, do the bytes have to come over the network to my computer and back, or can they be copied directly from one place on the network to the other?
I'm guessing here, but the files are probably mounted on your computer via some network file system (NFS, SMB). So you can access them like local files; they are just slower.
As for the first question: you're not gaining anything by first writing the file locally. In the end, you always have to move the file to the correct place on the network, and that always involves a "copy all bytes" operation. For example, Java's File.renameTo() will fail when the two files aren't on the same hard disk / mount. So you have to manually copy the bytes to the destination folder anyway. Some IO frameworks do that for you when necessary, but it always happens.
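A sketch of that rename-first, copy-on-failure behaviour with Java 7's NIO.2 (whether an atomic move can replace an existing target is file-system specific, so treat this as an outline):

import java.io.IOException;
import java.nio.file.*;

public class SafeReplace {
    // Try an atomic rename first; if source and target are on different
    // file systems the rename cannot work, so copy the bytes instead.
    public static void replace(Path newFile, Path oldFile) throws IOException {
        try {
            Files.move(newFile, oldFile, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            Files.copy(newFile, oldFile, StandardCopyOption.REPLACE_EXISTING);
            Files.delete(newFile);
        }
    }
}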
As for directly copying data between two remote hosts: there are a few network file systems which support such operations, but it's a special feature. The usual culprits (NFS and SMB) don't; they always download the whole file from the source and then upload it to the target.

What are the alternatives to safely keep files in an Android app?

My app downloads files from a server. There could be lots of these files, and each one is about 100 MB. I need some way to keep them safely inside my app.
First I tried to encrypt the files. However, this is a bad solution, because encrypting and decrypting a 100 MB file (it's a PDF) takes some time. Also, when I need to read the file I have to decrypt it and write the decrypted content to some other file, and during that time the file is reachable.
Furthermore, I can't keep the file in memory because of its size. So maybe there is a way to encrypt the directory in internal storage where the file is saved? Or is that a bad idea, since I would then have to encrypt every file in the directory?
As my files are PDFs, I could put a password on them, but how would I do that? I could also try to check whether the device is rooted, but I think someone would find a workaround.
So what would you suggest ?
Thanks
It seems like you have 3 options: encrypt your data; store the PDFs in a private folder; or don't store the files on-device.
1) Encrypt your data: As you've said, there are disadvantages because the PDFs are quite big, and since you can't hold them in memory you would need to write the decrypted files to disk anyway before displaying them, so this doesn't really solve your problem.
2) Store the PDFs in a private folder: Alternatively, you could store the PDFs in a private folder only accessible through your app. This can be done using
FileOutputStream fos = openFileOutput(FILENAME, Context.MODE_PRIVATE);
as noted here. "MODE_PRIVATE will create the file (or replace a file of the same name) and make it private to your application". The only problem I see with this is that people using rooted phones can access your app's private folders. The only way around that (as far as I know) is option 3.
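A short sketch of the write/read round trip inside an Activity (the file name and the pdfBytes variable are hypothetical, and exception handling is omitted):

// Write the downloaded bytes to app-private internal storage.
FileOutputStream fos = openFileOutput("document.pdf", Context.MODE_PRIVATE);
fos.write(pdfBytes);
fos.close();
// Later, read the file back; only this app (on an unrooted device) can open it.
FileInputStream fis = openFileInput("document.pdf");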
3) Don't store the files on the device: You could download the data, or parts of it, each time. This guarantees that people can't copy the files, because they never persist on the device. You could use Google Docs to stream only portions of the document to reduce the download requirements if you want. The problem with this is the huge data requirement.
I think you need to weigh up the pros and cons and decide which is best for you. I'd personally go with option 2. I don't think you'll find a solution that addresses all the problems.
