Check real file size while streaming files with FileItemStream - java

I'm writing an API using Spring + apache commons file upload.
https://commons.apache.org/proper/commons-fileupload/
There is a problem I've run into: I need to validate the file size. If it's bigger than a configured limit, the user should get an error.
For now, I've implemented the upload without this check, and it looks like this:
public ResponseEntity insertFile(@PathVariable Long profileId, HttpServletRequest request) throws Exception {
    ServletFileUpload upload = new ServletFileUpload();
    FileItemIterator uploadItemIterator = upload.getItemIterator(request);
    if (!uploadItemIterator.hasNext()) {
        throw new FileUploadException("FileItemIterator was empty");
    }
    while (uploadItemIterator.hasNext()) {
        FileItemStream fileItemStream = uploadItemIterator.next();
        if (fileItemStream.isFormField()) {
            continue;
        }
        //do stuff
    }
    return new ResponseEntity(HttpStatus.OK);
}
It does exactly what I need: it doesn't require loading the file completely into memory. I use the InputStream I get to transfer the data on to another service, so the file is never held in memory in full at any point in time.
However, that prevents me from getting the total number of bytes that were uploaded.
Is there a way to handle such validation without downloading the file completely or saving it somewhere?
I tried FileItem, but it does require complete loading of the file.

ServletFileUpload has a setSizeMax method that controls the maximum size accepted for each request (and a setFileSizeMax for each individual file). To mitigate memory consumption issues you can use a DiskFileItemFactory so that larger files are stored on disk. You should always check the actual bytes received, because trusting the headers alone is not reliable, but I think this will do the job :)
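A minimal sketch of how that could combine with the streaming code from the question (the limit values are illustrative); with setFileSizeMax set, the per-item stream fails as soon as the limit is crossed, so the file is never fully buffered:
public ResponseEntity insertFile(@PathVariable Long profileId, HttpServletRequest request) throws Exception {
    ServletFileUpload upload = new ServletFileUpload();
    // Illustrative limits: reject any single file over 10 MB, and cap the whole request.
    upload.setFileSizeMax(10 * 1024 * 1024);
    upload.setSizeMax(12 * 1024 * 1024);
    FileItemIterator it = upload.getItemIterator(request);
    while (it.hasNext()) {
        FileItemStream item = it.next();
        if (item.isFormField()) {
            continue;
        }
        try (InputStream in = item.openStream()) {
            // Reading past the limit aborts with FileSizeLimitExceededException
            // (surfaced as an IOException), which you can map to an error response.
            // ... stream 'in' to the other service ...
        }
    }
    return new ResponseEntity(HttpStatus.OK);
}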

Related

How to download (stream) large (generated) file in Micronaut

In my app I'm generating large PDF/CSV files. I'm wondering: is there any way to stream large files in Micronaut without keeping them fully in memory before sending them to the client?
You can use StreamedFile, e.g.:
@Get
public StreamedFile download() {
    InputStream inputStream = ...
    return new StreamedFile(inputStream, "large.csv");
}
Be sure to check the official documentation about file transfers.
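For what it's worth, newer Micronaut versions dropped the String-filename constructor; a hedged sketch of the equivalent using the MediaType constructor plus attach() (controller and class names here are illustrative):
import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.server.types.files.StreamedFile;
import java.io.ByteArrayInputStream;
import java.io.InputStream;

@Controller("/files")
public class DownloadController {

    @Get("/download")
    public StreamedFile download() {
        // Placeholder stream; in practice this would be your generated CSV/PDF.
        InputStream inputStream = new ByteArrayInputStream("a,b,c\n".getBytes());
        // attach() sets Content-Disposition so the browser saves it under this name.
        return new StreamedFile(inputStream, MediaType.TEXT_CSV_TYPE).attach("large.csv");
    }
}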

Creating Zip file while client is downloading

I'm trying to develop something like Dropbox (a very basic one). Downloading one file is really easy: just use a ServletOutputStream. What I want is: when the client asks for multiple files, I zip them on the server side and then send them to the user. But if the files are big, it takes too long to zip them and send them.
Is there any way to send the files while they are being compressed?
Thanks for your help.
Part of the Java API for ZIP files is actually designed to provide "on the fly" compression. It fits nicely with both the java.io API and the servlet API, which means this is even... kind of easy (no multithreading required, even for performance reasons, because your CPU will usually be faster at zipping than your network will be at sending the contents).
The part you'll be interacting with is ZipOutputStream. It is a FilterOutputStream (which means it is designed to wrap an output stream that already exists, in your case the response's OutputStream), and it will compress every byte you send it, using ZIP compression.
So, say you have a GET request:
protected void doGet(HttpServletRequest req, HttpServletResponse response)
        throws ServletException, IOException {
    // Your code to handle the request
    List<YourFileObject> responseFiles = ... // Whatever you need to do
    // We declare that the response will contain raw bytes
    response.setContentType("application/octet-stream");
    // We open a ZIP output stream
    try (ZipOutputStream zipStream = new ZipOutputStream(response.getOutputStream())) { // This is Java 7, but not that different from Java 6
        // We need to loop over each file you want to send
        for (YourFileObject fileToSend : responseFiles) {
            // We give a name to the file
            zipStream.putNextEntry(new ZipEntry(fileToSend.getName()));
            // and we copy its content
            copy(fileToSend, zipStream);
        }
    }
}
Of course, you should do proper exception handling. A couple of quick notes though:
The ZIP file format mandates that each file has a name, so you must create a new ZipEntry each time you start a new file (you'll probably get an IllegalStateException if you do not, anyway).
Proper use of the API would be to close each entry once you are done writing to it (at the end of the file). BUT: the Java implementation does that for you; each time you call putNextEntry it closes the previous one (if need be) all by itself.
Likewise, you must not forget to close the ZIP stream, because this will properly close the last entry AND flush everything that is needed to create a proper ZIP file. Failure to do so will result in a corrupt file. Here, the try-with-resources statement does this: it closes the ZipOutputStream once everything is written to it.
The copy method here is just what you would use to transfer all the bytes from the original file to the output stream; there is nothing ZIP-specific about it. Just call outputStream.write(byte[] bytes).
**EDIT:** to clarify...
For example, given a YourFileType that has the following methods:
public interface YourFileType {
    public byte[] getContent();
    public InputStream getContentAsStream();
}
Then the copy method could look like this (this is all very basic Java IO; you could use a library such as Commons IO to avoid reinventing the wheel...):
public void copy(YourFileType file, OutputStream os) throws IOException {
    os.write(file.getContent());
}
Or, for a full streaming implementation:
public void copy(YourFileType file, OutputStream os) throws IOException {
    try (InputStream fileContent = file.getContentAsStream()) {
        byte[] buffer = new byte[4096]; // 4096 is kind of a magic number
        int readBytesCount = 0;
        while ((readBytesCount = fileContent.read(buffer)) >= 0) {
            os.write(buffer, 0, readBytesCount);
        }
    }
}
Using this kind of implementation, your client will start receiving the response almost as soon as you start writing to the ZipOutputStream (the only delay being that of internal buffers), meaning it should not time out (unless you spend too long building the content to send, but that would not be the zipping part's fault).
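One hedged addition that pairs naturally with the example above (it is not in the original answer): set a Content-Disposition header before writing the body, so the browser saves the stream under a sensible name ("files.zip" is illustrative):
// Suggest a download filename to the client; must be set before the body is written.
response.setHeader("Content-Disposition", "attachment; filename=\"files.zip\"");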

OutOfMemoryError while serving multiple download requests with Spring

I'm getting an OutOfMemoryError while trying to download several files.
All of them are being downloaded simultaneously, and each is roughly 200 MB.
I'm using Spring 3.2.3 and Java 7. This is a call from a REST request.
This is the code:
@RequestMapping(value = "/app/download", method = RequestMethod.GET, produces = MediaType.MULTIPART_FORM_DATA_VALUE)
public void getFile(@PathVariable String param, HttpServletResponse response) {
    byte[] fileBytes = null;
    String fileLength = null;
    try {
        // Firstly looking for the file from disk
        Path fileFromDisk = getFileFromDisk(param);
        InputStream is = null;
        long fileLengthL = Files.size(fileFromDisk);
        fileLength = String.valueOf(fileLengthL);
        // Preparing data for response
        String fileName = "Some file name.zip";
        response.setHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");
        response.setHeader("Content-Length", fileLength);
        is = Files.newInputStream(fileFromDisk);
        IOUtils.copy(is, response.getOutputStream());
        response.flushBuffer();
    } catch (Exception e) {
        // Exception treatment
    }
}
IOUtils is from Apache Commons IO.
The code works perfectly until we have several requests at a time.
I think the problem is that the response is filled with all the data from the file, and it is not freed from the JVM until the download is completed.
I would like to know if there is a way to chunk the response or similar, to avoid filling the heap space with all the data at once.
Any ideas?
Thank you very much in advance.
Have you given your dev environment enough memory?
I use Eclipse, and its default memory allocation is 512m, which has caused me issues when using Spring.
If you are using Eclipse, go into Eclipse's main folder and open a file called eclipse.ini.
There will be a line in there that says -Xmx512m.
Change that to whatever memory you would like to allocate to your dev environment; I would normally go with at least -Xmx1024m.
I hope this helps.
The content type set with the 'produces' attribute looks to be incorrect. Set the proper content type directly on the response object with the setContentType method. Also try using the setContentLength method to set the content length.
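A minimal sketch of that, reusing the names from the question's handler ("application/zip" is illustrative):
response.setContentType("application/zip"); // the file's real type, not multipart/form-data
// setContentLengthLong needs Servlet 3.1+; on older containers use setContentLength(int) or the header.
response.setContentLengthLong(Files.size(fileFromDisk));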
After a lot of reading, I've reached this conclusion: the output stream of the response object has to be completely filled; it can't be returned to the browser or client as little blocks of data. So the whole file gets loaded, whatever its size.
My personal solution is to let a third party do the hard work. My requirements include multiple simultaneous downloads of big files: since my memory is not enough, I'm using an external entity that provides those files to me as a temporary URL.
I don't know if it is the best way, but it is working for me.
Thank you anyway for your responses.

Servlet File Uploading

I'm using a servlet to do a multiple file upload (using Apache Commons FileUpload). A portion of my code is posted below. My problem is that if I upload files again and again, the memory consumption of the app server jumps rather drastically. The Apache Tomcat server seems to hang on to the memory and never return it, and eventually the heap runs out of memory and throws a java heap space error.
I closed all the input streams, so I think the problem is in the ServletFileUpload. Could anyone help me figure out how to close it?
ServletContext context = this.getServletConfig().getServletContext();
DiskFileItemFactory factory = new DiskFileItemFactory();
FileCleaningTracker fileCleaningTracker = FileCleanerCleanup.getFileCleaningTracker(context);
factory.setFileCleaningTracker(fileCleaningTracker);
if (isMultiPart) {
    upload = new ServletFileUpload(factory);
    try {
        itr = upload.getItemIterator(request);
        while (itr.hasNext()) {
            item = itr.next();
            if (item.isFormField()) {
                ...
You're using FileCleaningTracker, and there are versions of Apache Commons FileUpload with a bug in that component (see this: http://blog.novoj.net/2012/09/19/commons-file-upload-contains-a-severe-memory-leak/).
It seems it has already been fixed: https://issues.apache.org/jira/browse/FILEUPLOAD-189
So try using the latest available version.
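If upgrading is not an option, a hedged alternative is to skip the tracker and delete each item's temporary file yourself. A sketch using the non-streaming API, since FileItem exposes delete():
DiskFileItemFactory factory = new DiskFileItemFactory();
ServletFileUpload upload = new ServletFileUpload(factory);
List<FileItem> items = upload.parseRequest(request);
for (FileItem item : items) {
    try {
        // ... process the item ...
    } finally {
        item.delete(); // removes the backing temp file immediately
    }
}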

How to upload a directory via Grails or Java?

What is the best way to upload a directory in Grails?
I tried this code:
def upload = {
    if (request.method == 'POST') {
        Iterator itr = request.getFileNames();
        while (itr.hasNext()) {
            MultipartFile file = request.getFile(itr.next());
            File destination = new File(file.getOriginalFilename())
            if (!file.isEmpty()) {
                file.transferTo(destination)
                // success
            } else {
                // failure
            }
        }
        response.sendError(200, 'Done');
    }
}
Unfortunately, I can only upload files one by one.
I would like to point at my directory and upload all its files directly.
Any ideas?
There is one major misconception here. The code you posted will only work if both the server and the client run on physically the same machine (which won't occur in the real world), and if you're using the MSIE browser, which has the misbehaviour of sending the full path along with the filename.
You should in fact get the contents of the uploaded file as an InputStream and write it to any OutputStream the usual Java IO way. The filename can be used to create a file with the same name on the server side, but you'll want to strip the path that MSIE incorrectly sends along with the filename.
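A minimal sketch of that stripping step (Java syntax, which Groovy accepts), assuming Commons IO's FilenameUtils is on the classpath; uploadDir is a hypothetical target directory:
String fileName = FilenameUtils.getName(file.getOriginalFilename()); // drops any MSIE-sent path
File destination = new File(uploadDir, fileName);
file.transferTo(destination);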
As to your actual functional requirement: HTML doesn't provide facilities to upload complete directories or multiple files with a single <input type="file"> element. You'll need to create a client application which is capable of this and serve it from your webpage, like a Java applet using Swing's JFileChooser. There are third-party solutions for this, like JumpLoader.
