I am using commons-fileupload to read image file from the POST request using classes DiskFileItemFactory and ServletFileUpload.
Can someone tell me which of the above objects can be re-used and accessed concurrently among threads, and which have to be created anew for each request?
Thanks in advance
I could not find a straight answer on this.
However, I did find the Streaming API of Commons FileUpload here.
Since it only creates a single ServletFileUpload object, I think this might be a better approach:
// Create a new file upload handler
ServletFileUpload upload = new ServletFileUpload();
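To illustrate, a complete per-request handler using the streaming API might look roughly like this (the class and variable names are mine, not from the documentation):
import java.io.IOException;
import java.io.InputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.commons.fileupload.FileItemIterator;
import org.apache.commons.fileupload.FileItemStream;
import org.apache.commons.fileupload.FileUploadException;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

public class StreamingUploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        try {
            // A fresh, stateless handler per request; nothing is shared between threads.
            ServletFileUpload upload = new ServletFileUpload();
            FileItemIterator iter = upload.getItemIterator(request);
            while (iter.hasNext()) {
                FileItemStream item = iter.next();
                if (item.isFormField()) {
                    continue;
                }
                try (InputStream stream = item.openStream()) {
                    // read the uploaded file's bytes from the stream here
                }
            }
        } catch (FileUploadException e) {
            throw new ServletException(e);
        }
    }
}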
The documentation is not informative in this respect. The available examples suggest that all of the objects should be recreated on each request, except for the FileCleaningTracker, for which a static helper method is suggested when building the DiskFileItemFactory instances, as shown in https://commons.apache.org/proper/commons-fileupload/using.html:
public static DiskFileItemFactory newDiskFileItemFactory(ServletContext context,
                                                         File repository) {
    FileCleaningTracker fileCleaningTracker
        = FileCleanerCleanup.getFileCleaningTracker(context);
    DiskFileItemFactory factory
        = new DiskFileItemFactory(DiskFileItemFactory.DEFAULT_SIZE_THRESHOLD,
                                  repository);
    factory.setFileCleaningTracker(fileCleaningTracker);
    return factory;
}
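Building on that, per-request usage inside the same servlet might look roughly like this (the repository attribute and the processing loop are assumptions for illustration):
// Hypothetical per-request usage of the factory method above; the repository
// directory and the request handling are assumptions for illustration only.
protected void doPost(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    try {
        File repository = (File) getServletContext()
                .getAttribute("javax.servlet.context.tempdir");
        DiskFileItemFactory factory = newDiskFileItemFactory(getServletContext(), repository);
        // The upload handler is also created anew for each request.
        ServletFileUpload upload = new ServletFileUpload(factory);
        List<FileItem> items = upload.parseRequest(request);
        for (FileItem item : items) {
            if (!item.isFormField()) {
                // process the uploaded file
            }
        }
    } catch (FileUploadException e) {
        throw new ServletException(e);
    }
}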
I'm trying to programmatically upload a file to an endpoint via RestEasyClient.
File file = new File("/Users/michele/path/file.txt");
MultipartOutput multipartOutput = new MultipartOutput();
multipartOutput.addPart(file, MediaType.APPLICATION_OCTET_STREAM_TYPE, "file.txt");
Entity<MultipartOutput> entity = Entity.entity(multipartOutput, MediaType.MULTIPART_FORM_DATA_TYPE);
//client is an instance of org.jboss.resteasy.client.jaxrs.ResteasyClient
client
.target("http://localhost:8080/endpoint")
.request()
.post(entity);
The problem is that the backend does not "find" the file that I uploaded.
Backend code:
DiskFileItemFactory factory = new DiskFileItemFactory();
ServletFileUpload fileUpload = new ServletFileUpload(factory);
List<FileItem> items = fileUpload.parseRequest(httpReq);
items is always empty.
Using MultipartFormDataOutput::addFormData, as described in many articles, works but does not fit my use case.
Also using apache.http.client.HttpClient works, but I prefer to avoid adding dependencies to my client.
Any ideas?
Found it.
The trick was to use MultipartFormDataOutput and to set the filename when adding a part:
MultipartFormDataOutput multipartOutput = new MultipartFormDataOutput();
multipartOutput.addFormData("uploaded file", file, MediaType.APPLICATION_OCTET_STREAM_TYPE, "file.txt");
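Putting the fix together with the client code from the question, the working call looks roughly like this (the URL and file path are the question's placeholders, not real values):
// Sketch of the full client call using MultipartFormDataOutput.
File file = new File("/Users/michele/path/file.txt");

MultipartFormDataOutput multipartOutput = new MultipartFormDataOutput();
multipartOutput.addFormData("uploaded file", file,
        MediaType.APPLICATION_OCTET_STREAM_TYPE, "file.txt");

Entity<MultipartFormDataOutput> entity =
        Entity.entity(multipartOutput, MediaType.MULTIPART_FORM_DATA_TYPE);

// client is an instance of org.jboss.resteasy.client.jaxrs.ResteasyClient
Response response = client
        .target("http://localhost:8080/endpoint")
        .request()
        .post(entity);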
I'm writing an API using Spring + Apache Commons FileUpload.
https://commons.apache.org/proper/commons-fileupload/
There is a problem that I've faced: I need to validate the file size. If it's bigger than the limit that I configure, the user should get an error.
For now, I implemented the upload without this check and it looks like this:
public ResponseEntity insertFile(@PathVariable Long profileId, HttpServletRequest request) throws Exception {
    ServletFileUpload upload = new ServletFileUpload();
    FileItemIterator uploadItemIterator = upload.getItemIterator(request);
    if (!uploadItemIterator.hasNext()) {
        throw new FileUploadException("FileItemIterator was empty");
    }
    while (uploadItemIterator.hasNext()) {
        FileItemStream fileItemStream = uploadItemIterator.next();
        if (fileItemStream.isFormField()) {
            continue;
        }
        //do stuff
    }
    return new ResponseEntity(HttpStatus.OK);
}
It does exactly what I need: it doesn't require the file to be loaded completely into memory. I use the InputStream I get to transfer the data on to another service, so the complete file is never held in memory at any point in time.
However, that prevents me from getting the total number of bytes that were uploaded.
Is there a way to handle such validation without downloading file completely or saving it somewhere?
I tried FileItem, but it does require complete loading of the file.
ServletFileUpload has a method setSizeMax that controls the maximum request size accepted for each request. To mitigate memory consumption issues you can use a DiskFileItemFactory so that larger files are stored on disk. You must always actually read the files, because trusting the headers alone is not reliable, but I think this will do the job :)
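For example, applied to the streaming code from the question it could look roughly like this (the 10 MB values are just assumed limits, not a recommendation):
// Sketch of adding size limits to the streaming upload from the question.
ServletFileUpload upload = new ServletFileUpload();
upload.setSizeMax(10 * 1024 * 1024);     // limit for the whole request
upload.setFileSizeMax(10 * 1024 * 1024); // limit for each individual file

FileItemIterator uploadItemIterator = upload.getItemIterator(request);
while (uploadItemIterator.hasNext()) {
    FileItemStream fileItemStream = uploadItemIterator.next();
    if (fileItemStream.isFormField()) {
        continue;
    }
    try (InputStream in = fileItemStream.openStream()) {
        // stream the bytes on to the other service; if a limit is exceeded,
        // reading/iterating fails with a size-limit exception that can be
        // mapped to an error response
    }
}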
I have a requirement where I want to pass multiple thumbnails to UI (javascript) as the response of GET Request.
Each thumbnail is a separate file, so essentially I want to pass multiple files.
Is downloading multiple files even a sane idea? If yes, how can we do it using CXF JAX-RS?
I tried the code below, which is not working.
@GET
@Path("/streamThumbnails")
@Produces("multipart/mixed")
public MultipartBody getBooks2() {
    List<Attachment> atts = new LinkedList<Attachment>();
    File thumbnail1 = new File("//D:/pdf2.pdf");
    File thumbnail2 = new File("//D:/pdf3.pdf");
    atts.add(new Attachment("thumbnail1", "application/pdf", thumbnail1));
    atts.add(new Attachment("thumbnail2", "application/pdf", thumbnail2));
    return new MultipartBody(atts, true);
}
Since you are talking about thumbnails, I guess you want to download images, not PDFs as in your example. Why not stitch them together into a single image and have the client use them as CSS sprites?
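For what it's worth, a rough sketch of stitching thumbnails into one sprite image on the server side (the file names and the PNG output format are assumptions for illustration):
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.imageio.ImageIO;

public class SpriteBuilder {
    public static void main(String[] args) throws IOException {
        List<File> thumbnails = Arrays.asList(
                new File("thumbnail1.png"), new File("thumbnail2.png"));

        // Load all thumbnails and compute the sprite dimensions.
        List<BufferedImage> images = new ArrayList<>();
        int totalWidth = 0;
        int maxHeight = 0;
        for (File f : thumbnails) {
            BufferedImage img = ImageIO.read(f);
            images.add(img);
            totalWidth += img.getWidth();
            maxHeight = Math.max(maxHeight, img.getHeight());
        }

        // Draw the thumbnails side by side into one image.
        BufferedImage sprite = new BufferedImage(totalWidth, maxHeight, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = sprite.createGraphics();
        int x = 0;
        for (BufferedImage img : images) {
            g.drawImage(img, x, 0, null);
            x += img.getWidth();
        }
        g.dispose();

        // The client can then address each thumbnail via CSS background-position offsets.
        ImageIO.write(sprite, "png", new File("sprite.png"));
    }
}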
I'm using a servlet to do a multiple file upload (using Apache Commons FileUpload). A portion of my code is posted below. My problem is that if I upload files again and again, the memory consumption of the app server jumps rather drastically. Apache Tomcat seems to hang on to the memory and never return it, until the heap space eventually runs out and an OutOfMemoryError ("Java heap space") is thrown.
I closed all the input streams. I think the problem is in the ServletFileUpload; could anyone help me figure out how to close or release it?
ServletContext context = this.getServletConfig().getServletContext();
DiskFileItemFactory factory = new DiskFileItemFactory();
FileCleaningTracker fileCleaningTracker = FileCleanerCleanup.getFileCleaningTracker(context);
factory.setFileCleaningTracker(fileCleaningTracker);
if (isMultiPart) {
    upload = new ServletFileUpload(factory);
    try {
        itr = upload.getItemIterator(request);
        while (itr.hasNext()) {
            item = itr.next();
            if (item.isFormField()) {
                ...
You're using FileCleaningTracker, and there are versions of Apache Commons FileUpload with a memory-leak bug in that component (see this: http://blog.novoj.net/2012/09/19/commons-file-upload-contains-a-severe-memory-leak/).
It seems it has already been fixed: https://issues.apache.org/jira/browse/FILEUPLOAD-189
So try using the latest available version.
I have a temporary file with data that's returned as part of a SOAP response via a MTOM binary attachment. I would like to trash it as soon as the method call "ends" (i.e., finishes transferring). What's the best way for me to do this? The best way I can figure out how to do this is to delete them when the session is destroyed, but I'm not sure if there's a more 'immediate' way to do this.
FYI, I'm NOT using Axis, I'm using jax-ws, if that matters.
UPDATE: I'm not sure the answerers are really understanding the issue. I know how to delete a file in Java. My problem is this:
@javax.jws.WebService
public class MyWebService {
    ...

    @javax.jws.WebMethod
    public MyFileResult getSomeObject() {
        File mytempfile = new File("tempfile.txt");
        MyFileResult result = new MyFileResult();
        result.setFile(mytempfile); // sets mytempfile as MTOM attachment

        // mytempfile.delete() is WRONG here:
        // can't delete mytempfile because it hasn't been returned to the web service client
        // yet. So how do I remove it?
        return result;
    }
}
I ran into this same problem. The issue is that the JAX-WS stack manages the file. It is not possible to determine in your code when JAX-WS is done with the file, so you do not know when to delete it.
In my case, I am using a DataHandler on my object model rather than a file. MyFileResult would have the following field instead of a file field:
private DataHandler handler;
My solution was to create a customized version of FileDataSource. Instead of returning a FileInputStream to read the contents of the file, I return the following extension of FileInputStream:
private static class TemporaryFileInputStream extends FileInputStream {

    private final File file;

    public TemporaryFileInputStream(File file) throws FileNotFoundException {
        super(file);
        this.file = file;
    }

    @Override
    public void close() throws IOException {
        super.close();
        file.delete(); // remove the temp file once JAX-WS has finished streaming it
    }
}
Essentially the datasource allows reading only once. After the stream is closed, the file is deleted. Since the JAX-WS stack only reads the file once, it works.
The solution is a bit of a hack but seems to be the best option in this case.
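For completeness, the customized FileDataSource described above could look roughly like this, assuming the TemporaryFileInputStream from the snippet is visible to it (the class name and details here are my own sketch, not the original code):
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import javax.activation.FileDataSource;

public class TemporaryFileDataSource extends FileDataSource {

    public TemporaryFileDataSource(File file) {
        super(file);
    }

    @Override
    public InputStream getInputStream() throws IOException {
        // Hand JAX-WS a stream that deletes the backing file when it is closed.
        return new TemporaryFileInputStream(getFile());
    }
}
The DataHandler field on MyFileResult would then wrap this data source, e.g. something like new DataHandler(new TemporaryFileDataSource(mytempfile)).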
Are you using standard Java temp files? If so, you can do this:
File script = File.createTempFile("temp", ".tmp", new File("./"));
// ... use the file ...
script.delete(); // delete when done.
You're talking about the work folder that is set up in the context for this webapp. Can you set this work directory to a known location? If yes, then you can find the temp file within that known work directory and, once you find it, delete it.