I would like to write a simple Java downloader for my backup website. What is important: the applet should be able to download many files at once.
So, here is my problem. Such an applet seems to me easy to hack or infect. What is more, it will surely need a lot of system resources to run. So I would like to hear your opinions on the best, most efficient and most secure way to do it.
I thought about something like this:
// user chose a directory to download his files to
// fc is a JFileChooser
// fc.showSaveDialog(this) == JFileChooser.APPROVE_OPTION
try {
    for (int i = 0; i < urls.length; i++) {
        String fileName = "..."; // obtaining filename and extension
        fileName = fileName.replaceAll(" ", "_");
        // I am not sure if the line above resolves all problems with file names...
        String path = file.getAbsolutePath() + File.separator + fileName;
        try {
            InputStream is = null;
            FileOutputStream os = new FileOutputStream(path);
            URLConnection uc = urls[i].openConnection();
            is = uc.getInputStream();
            int a = is.read();
            while (a != -1) {
                os.write(a);
                a = is.read();
            }
            is.close();
            os.close();
        } catch (InterruptedIOException iioe) {
            // TODO user cancelled
        } catch (IOException ioe) {
            // TODO
        }
    }
} // (enclosing catch/finally omitted in this excerpt)
but I am sure that there is a better solution.
There is one more thing - when a user wants to download a really huge number of files (e.g. 1000 files of between 10 MB and 1 GB each), there will be several problems. So I thought about setting a limit, but I don't really know how to decide how many files at once is OK. Should I check the user's Internet connection or the computer's load?
Thanks in advance
BroMan
I would like to write a simple Java downloader for my backup website. What is important: the applet should be able to download many files at once.
I hope you mean sequentially, as your code is written. There would be no advantage in this situation to running multiple download streams in parallel.
Such an applet seems to me easy to hack or infect.
Make sure to encrypt your communication stream. Since it looks like you are just accessing URLs on the server, maybe configure your server to use HTTPS.
What is more, it will surely need a lot of system resources to run.
Why do you assume that? The network bandwidth will be the limiting factor; you are not going to tax your other resources very much. Maybe you meant avoiding saturating the user's bandwidth. You can implement simple throttling by giving the user a configurable delay that you insert between every file, or even every iteration of your read/write loop; use Thread.sleep to implement the delay.
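A minimal sketch of that kind of throttling (the method name, buffer size and delay value are illustrative, not from the original code):
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Copies in to out in chunks, pausing delayMillis between chunks as a crude throttle.
static void copyThrottled(InputStream in, OutputStream out, long delayMillis)
        throws IOException, InterruptedException {
    byte[] buf = new byte[8192];
    int count;
    while ((count = in.read(buf)) != -1) {
        out.write(buf, 0, count);
        Thread.sleep(delayMillis); // the user-configurable delay mentioned above
    }
}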
So, I thought about setting a limit for it, but I don't
really know how to decide how many files at once is OK.
Assuming you are downloading sequentially, setting limits isn't really a technical question; it's more about what kind of service you want to provide. More files just mean the download takes longer.
int a=is.read();
Your implementation of the stream read/write is very inefficient. You want to read/write in chunks rather than single bytes. See the versions of the read/write methods that take a byte[].
Here is the basic logic flow to copy data from an input stream to an output stream.
InputStream in = null;
OutputStream out = null;
try
{
    in = ...
    out = ...
    final byte[] buf = new byte[ 1024 ];
    for( int count = in.read( buf ); count != -1; count = in.read( buf ) )
    {
        out.write( buf, 0, count );
    }
}
finally
{
    if( in != null )
    {
        in.close();
    }
    if( out != null )
    {
        out.close();
    }
}
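On Java 7 and later, the same flow can be written with try-with-resources, so both streams are closed even if one of the close() calls throws (a sketch with the same placeholders as above):
try( InputStream in = ...;      // open your input stream here
     OutputStream out = ... )   // open your output stream here
{
    final byte[] buf = new byte[ 1024 ];
    for( int count = in.read( buf ); count != -1; count = in.read( buf ) )
    {
        out.write( buf, 0, count );
    }
}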
I'm trying to write an InputStream that is an mp4 that I get from calling an external SOAP service. When I do so, it always generates these tmp files in my chosen temporary directory (java.io.tmpdir) that aren't removable and stay there after the writing is done.
Writing images that I also get from the SOAP service works fine, without the permanent tmp files in the directory. I'm using Java 1.8 with Spring Boot.
This is what I'm doing:
File targetFile = new File("D:/archive/video.mp4");
targetFile.getParentFile().mkdirs();
targetFile.setWritable(true);
InputStream inputStream = filesToWrite.getInputStream();
OutputStream outputStream = new FileOutputStream(targetFile);
try {
    int byteRead;
    while ((byteRead = inputStream.read()) != -1) {
        outputStream.write(byteRead);
    }
} catch (IOException e) {
    logger.fatal("Error# SaveFilesThread for guid: " + guid, e);
} finally {
    try {
        inputStream.close();
        outputStream.flush();
        outputStream.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
also tried:
byte data[] = IOUtils.toByteArray(inputStream);
Path file = Paths.get("video.mp4");
Files.write(file, data);
And from apache commons IO:
FileUtils.copyInputStreamToFile(initialStream, targetFile);
When your code starts, the damage is already done. Your code is not the source of the temporary files (it's also a ton of work for something that could be done much more simply, though; see below). It's the framework that ends up handing you that filesToWrite variable.
It is somewhat likely that you can hook in at an earlier point and get the raw input stream representing the socket or HTTP connection, and start saving the files straight from there. Alternatively, perhaps filesToWrite has a way to get at the files themselves, in which case you can just move them into place instead of copying them over.
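If the framework can hand you the Path of the file it has already written (whether it can depends on the SOAP framework, so treat this as a hypothetical), moving it into place could look roughly like this:
import java.io.IOException;
import java.nio.file.*;

// tempFile: the file the framework already wrote; target: where you want it to end up.
static void moveIntoPlace(Path tempFile, Path target) throws IOException {
    Files.createDirectories(target.getParent());
    try {
        // ATOMIC_MOVE avoids a copy when both paths are on the same file system
        Files.move(tempFile, target, StandardCopyOption.ATOMIC_MOVE);
    } catch (AtomicMoveNotSupportedException e) {
        // fall back to a regular move (copy + delete) across file systems
        Files.move(tempFile, target, StandardCopyOption.REPLACE_EXISTING);
    }
}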
But your code to do this is a mess: it has bad exception handling, leaks resources, is way too much code for a simple job, and is possibly 2000x to 10000x slower than needed, depending on your hard disk (I'm not exaggerating; calling single-byte read() on unbuffered streams is thousands of times slower!)
// add `throws IOException` to your method signature.
// it saves files, it's supposed to throw IOException,
// 'doing I/O' is in the very definition of your method!
try (InputStream in = filesToWrite.getInputStream();
OutputStream out = new FileOutputStream(targetFile)) {
in.transferTo(out);
}
That's it. That solves all the problems - no leaks, no speed loss, a tiny amount of code, and it fixes the deplorable error handling (which, here, is 'log something to the log, then print something to standard out, then potentially leak a bunch of resources, then don't tell the calling code anything went wrong and return exactly as if the copy operation had succeeded').
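One caveat: InputStream.transferTo(OutputStream) was only added in Java 9, and the question mentions Java 1.8. On Java 8 the same idea can be kept with a small buffered copy loop, still inside try-with-resources (a sketch reusing the question's variables):
// Java 8 equivalent of in.transferTo(out): a chunked copy inside try-with-resources.
try (InputStream in = filesToWrite.getInputStream();
     OutputStream out = new FileOutputStream(targetFile)) {
    byte[] buf = new byte[8192];
    int count;
    while ((count = in.read(buf)) != -1) {
        out.write(buf, 0, count);
    }
}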
There are many examples on the internet showing how to use StandardOpenOption.DELETE_ON_CLOSE, such as this:
Files.write(myTempFile, ..., StandardOpenOption.DELETE_ON_CLOSE);
Other examples similarly use Files.newOutputStream(..., StandardOpenOption.DELETE_ON_CLOSE).
I suspect all of these examples are probably flawed. The purpose of writing a file is that you're going to read it back at some point; otherwise, why bother writing it? But wouldn't DELETE_ON_CLOSE cause the file to be deleted before you have a chance to read it?
If you create a work file (to work with large amounts of data that are too large to keep in memory) then wouldn't you use RandomAccessFile instead, which allows both read and write access? However, RandomAccessFile doesn't give you the option to specify DELETE_ON_CLOSE, as far as I can see.
So can someone show me how DELETE_ON_CLOSE is actually useful?
First of all, I agree with you: in the example Files.write(myTempFile, ..., StandardOpenOption.DELETE_ON_CLOSE) the use of DELETE_ON_CLOSE is meaningless. After a (not so intense) search through the internet, the only example I could find showing that usage was the one you might have got it from (http://softwarecave.org/2014/02/05/create-temporary-files-and-directories-using-java-nio2/).
This option is not intended to be used with Files.write(...) only. The API makes it quite clear:
This option is primarily intended for use with work files that are used solely by a single instance of the Java virtual machine. This option is not recommended for use when opening files that are open concurrently by other entities.
Sorry, I can't give you a meaningful short example, but think of such a file like a swap file/partition used by an operating system: the current JVM needs to temporarily store data on disc, and after shutdown the data is of no use anymore. As a practical example, it is similar to a JEE application server which might decide to serialize some entities to disc to free up memory.
edit: Maybe the following (oversimplified) code can be taken as an example to demonstrate the principle (so please, nobody should start a discussion about how this "data management" could be done differently, that using a fixed temporary file name is bad, and so on):
in the try-with-resources block you need, for some reason, to externalize data (the reasons are not the subject of the discussion)
you have random read/write access to this externalized data
this externalized data is only of use inside the try-with-resources block
with the StandardOpenOption.DELETE_ON_CLOSE option you don't need to handle the deletion after use yourself; the JVM will take care of it (the limitations and edge cases are described in the API)
static final int RECORD_LENGTH = 20;
static final String RECORD_FORMAT = "%-" + RECORD_LENGTH + "s";

// add exception handling, left out only for the example
public static void main(String[] args) throws Exception {
    EnumSet<StandardOpenOption> options = EnumSet.of(
            StandardOpenOption.CREATE,
            StandardOpenOption.WRITE,
            StandardOpenOption.READ,
            StandardOpenOption.DELETE_ON_CLOSE
    );
    Path file = Paths.get("/tmp/enternal_data.tmp");
    try (SeekableByteChannel sbc = Files.newByteChannel(file, options)) {
        // during your business processing the below two cases might happen
        // several times in random order

        // example of a huge data structure to externalize
        String[] sampleData = {"some", "huge", "datastructure"};
        for (int i = 0; i < sampleData.length; i++) {
            byte[] buffer = String.format(RECORD_FORMAT, sampleData[i])
                    .getBytes();
            ByteBuffer byteBuffer = ByteBuffer.wrap(buffer);
            sbc.position(i * RECORD_LENGTH);
            sbc.write(byteBuffer);
        }

        // example of processing which needs the externalized data
        Random random = new Random();
        byte[] buffer = new byte[RECORD_LENGTH];
        ByteBuffer byteBuffer = ByteBuffer.wrap(buffer);
        for (int i = 0; i < 10; i++) {
            sbc.position(RECORD_LENGTH * random.nextInt(sampleData.length));
            sbc.read(byteBuffer);
            byteBuffer.flip();
            System.out.printf("loop: %d %s%n", i, new String(buffer));
        }
    }
}
DELETE_ON_CLOSE is intended for working temp files.
If an operation needs data to be stored temporarily in a file, but you don't need to use the file outside of the current execution, DELETE_ON_CLOSE is a good solution for that.
An example is when you need to store information that can't be kept in memory, for example because it is too large.
Another example is when you need to store the information temporarily, only need it at a later point, and don't want to occupy memory for it in the meantime.
Imagine also a situation in which a process takes a long time to complete. You store information in a file and only use it later (perhaps many minutes or hours afterwards). This guarantees that memory is not used for that information while you don't need it.
DELETE_ON_CLOSE tries to delete the file when you explicitly close it by calling close(), or when the JVM shuts down if it was not closed manually before.
Here are two possible ways it can be used:
1. When calling Files.newByteChannel
This method returns a SeekableByteChannel suitable for both reading and writing, in which the current position can be modified.
Seems quite useful for situations where some data needs to be stored out of memory for read/write access and doesn't need to be persisted after the application closes.
2. Write to a file, read back, delete:
An example using an arbitrary text file:
Path p = Paths.get("C:\\test", "foo.txt");
System.out.println(Files.exists(p));
try {
    Files.createFile(p);
    System.out.println(Files.exists(p));

    try (BufferedWriter out = Files.newBufferedWriter(p, Charset.defaultCharset(), StandardOpenOption.DELETE_ON_CLOSE)) {
        out.append("Hello, World!");
        out.flush();

        try (BufferedReader in = Files.newBufferedReader(p, Charset.defaultCharset())) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
} catch (IOException ex) {
    ex.printStackTrace();
}

System.out.println(Files.exists(p));
This outputs (as expected):
false
true
Hello, World!
false
This example is obviously trivial, but I imagine there are plenty of situations where such an approach may come in handy.
However, I still believe the old File.deleteOnExit method may be preferable, as you won't need to keep the output stream open for the duration of any read operations on the file.
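A minimal sketch of that deleteOnExit alternative (the paths mirror the example above and are purely illustrative):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DeleteOnExitExample {
    public static void main(String[] args) throws IOException {
        Path p = Paths.get("C:\\test", "foo.txt");
        p.toFile().deleteOnExit();                  // registered once; no stream has to stay open
        Files.write(p, "Hello, World!".getBytes()); // write and close immediately
        System.out.println(new String(Files.readAllBytes(p))); // read back later; the file is still there
        // the file is removed when the JVM exits normally
    }
}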
I've made two apps designed to run concurrently (I do not want to combine them); one reads from a certain file and the other writes to it. When only one or the other is running there are no errors, but if they are both running I get an "access is denied" error.
Relevant code of the first:
class MakeImage implements Runnable {

    @Override
    public void run() {
        File file = new File("C:/Users/jeremy/Desktop/New folder (3)/test.png");
        while (true) {
            try {
                // make image
                if (image != null) {
                    file.createNewFile();
                    ImageIO.write(image, "png", file);
                    hello.repaint();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
Relevant code of the second:
BufferedImage image = null;
try {
    // Read from a file
    image = ImageIO.read(new File("C:/Users/jeremy/Desktop/New folder (3)/test.png"));
} catch (Exception e) {
    e.printStackTrace();
}
if (image != null) {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ImageIO.write(image, "png", baos);
    baos.flush();
    byte[] imageInByte = baos.toByteArray();
    baos.close();
    returns = Base64.encodeBase64String(imageInByte);
}
I looked at this: "Java: how to handle two processes trying to modify the same file", but that is about when both are writing to the file, whereas here only one is. I tried the retry-later method suggested in that question's answer without any luck. Any help would be greatly appreciated.
Unless you use OS-level file locking of some sort and check for the locks, you're not going to be able to do this reliably very easily. A fairly reliable way to manage it would be to use another file in the directory as a semaphore: "touch" it when you're writing or reading, and remove it when you're done. Check for the existence of the semaphore before accessing the file. Otherwise you will need to use a database of some sort to store the file lock (for guaranteed consistency) and check for it there.
That said, you really should just combine this into 1 program.
Try RandomAccessFile.
This is a useful but very dangerous feature. It goes like this: if you create different instances of RandomAccessFile for the same file, you can concurrently write to different parts of the file.
You can create multiple threads pointing to different parts of the file using the seek method, and multiple threads can update the file at the same time. seek allows you to move to any part of the file even if it doesn't exist yet (past EOF), hence you can move to any location in the newly created file and write bytes at that location. You can open multiple instances of the same file, seek to different locations, and write to multiple locations at the same time.
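A minimal sketch of the idea (the file name and offsets are illustrative; note this only demonstrates seek-based writes, not the coordination the question actually needs):
import java.io.IOException;
import java.io.RandomAccessFile;

public class SeekWriteExample {
    public static void main(String[] args) throws IOException {
        // Two independent handles on the same file, each writing to its own region.
        try (RandomAccessFile first = new RandomAccessFile("data.bin", "rw");
             RandomAccessFile second = new RandomAccessFile("data.bin", "rw")) {
            first.seek(0);                 // start of the file
            first.write(new byte[]{1, 2, 3, 4});
            second.seek(1024);             // past EOF is fine; the file simply grows
            second.write(new byte[]{5, 6, 7, 8});
        }
    }
}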
Use synchronized on the method that modifies the file.
Edited:
As per the definition of a thread-safe class: "A class is said to be thread safe if it works correctly in the presence of the underlying OS's interleaving and scheduling, with no synchronization mechanism required from the client side."
I believe the file is to be accessed from a different machine, so there must be some client-server mechanism. If there is, let the server side have the synchronization mechanism; then it doesn't matter how many clients access it.
If not, synchronized is more than enough.
Let's say you had an external process writing files to some directory, and you had a separate process periodically trying to read files from this directory. The problem to avoid is reading a file that the other process is currently in the middle of writing out, so it would be incomplete. Currently, the process that reads uses a minimum file age timer check, so it ignores all files unless their last modified date is more than XX seconds old.
I'm wondering if there is a cleaner way to solve this problem. If the filetype is unknown (could be a number of different formats) is there some reliable way to check the file header for the number of bytes that should be in the file, vs the number of bytes currently in the file to confirm they match?
Thanks for any thoughts or ideas!
The way I've done this in the past is that the process writing the file writes to a "temp" file, and then moves the file to the read location when it has finished writing the file.
So the writing process would write to info.txt.tmp. When it's finished, it renames the file to info.txt. The reading process then just had to check for the existence of info.txt - and it knows that if it exists, it has been written completely.
Alternatively you could have the write process write info.txt to a different directory, and then move it to the read directory if you don't like using weird file extensions.
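A minimal sketch of that write-then-rename handoff on the writer side (file names are illustrative; ATOMIC_MOVE makes the rename all-or-nothing where the file system supports it):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class WriteThenRename {
    public static void main(String[] args) throws IOException {
        Path tmp = Paths.get("info.txt.tmp");
        Path done = Paths.get("info.txt");
        Files.write(tmp, "payload".getBytes());                // write everything under the temp name
        Files.move(tmp, done, StandardCopyOption.ATOMIC_MOVE); // readers only ever see a complete info.txt
    }
}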
You could use an external marker file. The writing process could create a file XYZ.lock before it starts creating file XYZ, and delete XYZ.lock after XYZ is completed. The reader would then easily know that it can consider a file complete only if the corresponding .lock file is not present.
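A sketch of the reader-side check for that marker convention (names are illustrative):
import java.nio.file.Files;
import java.nio.file.Path;

// A file XYZ is considered complete only when XYZ exists and XYZ.lock does not.
static boolean isComplete(Path dataFile) {
    Path lockFile = dataFile.resolveSibling(dataFile.getFileName() + ".lock");
    return Files.exists(dataFile) && !Files.exists(lockFile);
}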
I had no option of using temp markers etc., as the files are being uploaded by clients over key-pair SFTP; they can be very large in size.
It's quite hacky, but I compare the file size before and after sleeping a few seconds.
It's obviously not ideal to block the thread, but in our case it is merely running as a background system process, so it seems to work fine:
private boolean isCompletelyWritten(File file) throws InterruptedException {
    Long fileSizeBefore = file.length();
    Thread.sleep(3000);
    Long fileSizeAfter = file.length();
    System.out.println("comparing file size " + fileSizeBefore + " with " + fileSizeAfter);
    if (fileSizeBefore.equals(fileSizeAfter)) {
        return true;
    }
    return false;
}
Note: as mentioned below, this might not work on Windows. It was used in a Linux environment.
One simple solution I've used in the past for this scenario with Windows is to use boolean File.renameTo(File) and attempt to move the original file to a separate staging folder:
boolean success = potentiallyIncompleteFile.renameTo(stagingAreaFile);
If success is false, then the potentiallyIncompleteFile is still being written to.
This is possible to do using the Apache Commons IO library's FileUtils.copyFile() method. If you try to copy the file and get an IOException, it means the file is not completely saved yet.
Example:
public static void copyAndDeleteFile(File file, String destinationFile) {
    try {
        FileUtils.copyFile(file, new File(destinationFile));
    } catch (IOException e) {
        e.printStackTrace();
        // the file is still being written: retry (ideally after a delay)
        copyAndDeleteFile(file, destinationFile);
    }
}
Or periodically check, with some delay, the size of the folder that contains this file:
FileUtils.sizeOfDirectory(folder);
Even if the number of bytes is equal, the content of the file may be different.
So I think you have to compare the old and the new file byte by byte.
2 options that seem to solve this issue:
the best option: the writer process notifies the reading process somehow that the writing has finished.
write the file to {id}.tmp, then when finished rename it to {id}.java, and have the reading process run only on *.java files. Renaming takes much less time, and the chance that the two processes work on the file at the same time decreases.
First, there's "Why doesn't OS X lock files like Windows does when copying to a Samba share?", but that's a variation of what you're already doing.
As far as reading arbitrary files and looking for sizes: some formats carry that information and some do not, but even those that do have no common way of representing it. You would need specific knowledge of each format and handle each one independently.
If you absolutely must act on the file the "instant" it's done, then your writing process would need to send some kind of notification. Otherwise, you're pretty much stuck polling the files, and reading the directory is quite cheap in terms of I/O compared to reading random blocks from random files.
One more method to test that a file is completely written:
private void waitUntilIsReadable(File file) throws InterruptedException {
    boolean isReadable = false;
    int loopsNumber = 1;
    while (!isReadable && loopsNumber <= MAX_NUM_OF_WAITING_60) {
        try (InputStream in = new BufferedInputStream(new FileInputStream(file))) {
            log.trace("InputStream readable. Available: {}. File: '{}'",
                    in.available(), file.getAbsolutePath());
            isReadable = true;
        } catch (Exception e) {
            log.trace("InputStream is not readable yet. File: '{}'", file.getAbsolutePath());
            loopsNumber++;
            TimeUnit.MILLISECONDS.sleep(1000);
        }
    }
}
Use this for Unix if you are transferring files using FTP or WinSCP:
public static void isFileReady(File entry) throws Exception {
    long realFileSize = entry.length();
    long currentFileSize = 0;
    do {
        try (FileInputStream fis = new FileInputStream(entry)) {
            currentFileSize = 0;
            while (fis.available() > 0) {
                byte[] b = new byte[1024];
                int nResult = fis.read(b);
                if (nResult == -1) {
                    break;
                }
                currentFileSize += nResult;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.out.println("currentFileSize=" + currentFileSize + ", realFileSize=" + realFileSize);
    } while (currentFileSize != realFileSize);
}
EDIT:
Got the directory to work. Now there's another issue in sight: the files in the storage are stored with their DB id as a prefix to their file names. Of course I don't want the users to see those. Is there a way to combine the response.redirect and the header setting for filename and size?
best,
A
Hi again,
new approach: is it possible to create an IIS-like virtual directory within Tomcat in order to avoid streaming and only make use of a header redirect? I played around with contexts but couldn't get it going...
any ideas?
thx
A
Hi %,
I'm facing a weird issue with the Java heap space which is close to bringing me to the ropes.
The short version is: I've written a ContentManagementSystem which needs to handle huge files (>600 MB) too. Tomcat heap settings:
-Xmx700m
-Xms400m
The issue is that uploading huge files works, even though it's slow. Downloading files results in a Java heap space exception. Trying to download a 370 MB file makes Tomcat jump to a 500 MB heap (which should be OK) and end in a Java heap space exception.
I don't get it - why does upload work and download not?
Here's my download code:
byte[] byt = new byte[1024 * 1024 * 2];
response.setHeader("Content-Disposition", "attachment;filename=\"" + fileName + "\"");
FileInputStream fis = null;
OutputStream os = null;
fis = new FileInputStream(new File(filePath));
os = response.getOutputStream();
BufferedInputStream buffRead = new BufferedInputStream(fis);
int read;
while ((read = buffRead.read(byt)) > 0) {
    os.write(byt, 0, read);
    os.flush();
}
buffRead.close();
os.close();
If I'm getting it right, the buffered reader should take care of any memory issues, right?
Any help would be highly appreciated, since I've run out of ideas.
Best regards,
W
If I'm getting it right, the buffered reader should take care of any memory issues, right?
No, that has nothing to do with memory issues, it's actually unnecessary since you're already using a buffer to read the file. Your problem is with writing, not with reading.
I can't see anything immediately wrong with your code. It looks as though Tomcat is buffering the entire response instead of streaming it. I'm not sure what could cause that.
What does response.getBufferSize() return? And you should try setting response.setContentLength() to the file's size; I vaguely remember that a web container under certain circumstances buffers the entire response in order to determine the content length, so maybe that's what's happening. It's good practice to do it anyway since it enables clients to display the download size and give an ETA for the download.
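A sketch of that suggestion applied to the download code from the question (it reuses the question's filePath and fileName variables; note that setContentLength takes an int, so this assumes files under 2 GB):
// Announce the exact size up front so the container does not have to buffer
// the whole response just to compute the Content-Length header.
File f = new File(filePath);
response.setContentLength((int) f.length()); // Servlet 3.1+ also offers setContentLengthLong(long)
response.setHeader("Content-Disposition", "attachment;filename=\"" + fileName + "\"");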
Try using the setBufferSize and flushBuffer methods of the ServletResponse.
You'd better use java.nio for that, so you can read resources partially and free resources that have already been streamed!
Otherwise you'll end up with memory problems despite the settings you've applied to the JVM environment.
My suggestions:
The Quick-n-easy: Use a smaller array! Yes, it loops more, but this will not be a problem. 5 kilobytes is just fine. You'll know if this works adequately for you in minutes.
byte[] byt = new byte[1024*5];
A little bit harder: If you have access to sendfile (like in Tomcat with the Http11NioProtocol -- documentation here), then use it
A little bit harder, again: Switch your code to Java NIO's FileChannel. I have very, very similar code running on equally large files with hundreds of concurrent connections and similar memory settings with no problem. NIO is faster than plain old Java streams in these situations. It uses the magic of DMA (Direct Memory Access), allowing the data to go from disk to the NIC without being copied through the JVM heap or churned through the CPU. Here is a code snippet from my own code base; I've ripped out much of it to show the basics. FileChannel.transferTo() is not guaranteed to send every byte, so it is in this loop.
WritableByteChannel destination = Channels.newChannel(response.getOutputStream());
FileChannel source = file.getFileInputStream().getChannel();
long start = 0;               // offset of the first byte to send
long length = source.size();  // number of bytes to send
long total = 0;
while (total < length) {
    long sent = source.transferTo(start + total, length - total, destination);
    total += sent;
}
The following code is able to stream data to the client while allocating only a small buffer (BUFFER_SIZE; this is a soft point, since you may want to adjust it):
private static final int OUTPUT_SIZE = 1024 * 1024 * 50; // 50 MB
private static final int BUFFER_SIZE = 4096;

@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    String fileName = "42.txt";

    // build response headers
    response.setStatus(200);
    response.setContentLength(OUTPUT_SIZE);
    response.setContentType("text/plain");
    response.setHeader("Content-Disposition",
            "attachment;filename=\"" + fileName + "\"");
    response.flushBuffer(); // write HTTP headers to the client

    // streaming result
    InputStream fileInputStream = new InputStream() { // fake input stream
        int i = 0;

        @Override
        public int read() throws IOException {
            if (i++ < OUTPUT_SIZE) {
                return 42;
            } else {
                return -1;
            }
        }
    };
    ReadableByteChannel input = Channels.newChannel(fileInputStream);
    WritableByteChannel output = Channels.newChannel(
            response.getOutputStream());
    ByteBuffer buffer = ByteBuffer.allocate(BUFFER_SIZE);
    while (input.read(buffer) != -1) {
        buffer.flip();
        output.write(buffer);
        buffer.clear();
    }
    input.close();
    output.close();
}
Are you required to serve the files using Tomcat? For this kind of task we have used a separate download mechanism. We chained Apache -> Tomcat -> storage and then added rewrite rules for downloads. You then bypass Tomcat and Apache serves the file to the client (Apache -> storage). This only works if you have the content stored as files; if you read from a DB or another type of non-file storage, this solution cannot be used. The overall scenario is that you generate download links for files as e.g. domain/binaries/xyz... and write a redirect rule for domain/files using Apache mod_rewrite.
Do you have any filters in the application, or do you use the tcnative library? You could try to profile it with jvisualvm.
Edit: Small remark: note that you have an HTTP response splitting attack possibility in the setHeader call if you do not sanitize fileName.
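A minimal sketch of such sanitization before building the Content-Disposition header (the exact policy is up to you; this just replaces CR, LF and double quotes):
// Strip characters that could break out of the header value.
String safeName = fileName.replaceAll("[\\r\\n\"]", "_");
response.setHeader("Content-Disposition", "attachment;filename=\"" + safeName + "\"");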
Why don't you use Tomcat's own FileServlet?
It can surely serve files much better than you could possibly imagine.
A 2 MB buffer is way too large! A few KB should be ample. Megabyte-sized objects are a real issue for the garbage collector, since they often need to be treated separately from "normal" objects (normal == much smaller than a heap generation). To optimize I/O, your buffer only needs to be slightly larger than your I/O buffer size, i.e. at least as large as a disk block or network packet.