This question already has an answer here:
Uploading files in Spring with Tomcat related to the maximum size allowed
(1 answer)
Closed 7 years ago.
I've read the article "File upload using Spring MVC and annotation configuration"
(http://www.raistudies.com/spring/spring-mvc/file-upload-spring-mvc-annotation)
I really learned some useful things from it and thanks for this article!
It works fine in Tomcat-8.0.20 when I upload a small file.
But when I upload a large file (larger than about 3 MB), the MaxUploadSizeExceededException is caught "twice", and then the connection between the browser and the server is broken. The browser reports an ERR_CONNECTION_RESET error and no error info (or page) is shown; it looks as if someone had simply pulled the network cable out of my computer.
My system environment is :
JRE1.8+Tomcat-8.0.20+WAR_package
and my Tomcat is a brand-new, clean installation in a new folder, which contains only this WAR.
I also tried Spring 4.1.4, but the problem remains.
By the way, the lazy mode of file uploading is not suitable in my case, so I need to report the result to the user immediately when MaxUploadSizeExceededException is caught. That is the behavior I expect.
How can I resolve this "disconnection" problem caused by uploading a large file?
Thanks a lot and best regards!
@Controller
@RequestMapping(value = "/FileUploadForm.htm")
public class UploadFormController implements HandlerExceptionResolver {
    // This class is both the controller and the exception handler.

    @RequestMapping(method = RequestMethod.GET)
    public String showForm(ModelMap model) {
        UploadForm form = new UploadForm();
        model.addAttribute("FORM", form);
        return "FileUploadForm";
    }

    @RequestMapping(method = RequestMethod.POST)
    public String processForm(@ModelAttribute(value = "FORM") UploadForm form, BindingResult result) {
        if (!result.hasErrors()) {
            FileOutputStream outputStream = null;
            String filePath = System.getProperty("java.io.tmpdir") + "/" + form.getFile().getOriginalFilename();
            try {
                outputStream = new FileOutputStream(new File(filePath));
                outputStream.write(form.getFile().getFileItem().get());
                outputStream.close();
            } catch (Exception e) {
                System.out.println("Error while saving file");
                return "FileUploadForm";
            }
            return "success";
        } else {
            return "FileUploadForm";
        }
    }

    // MaxUploadSizeExceededException can be caught here... but twice.
    // Then something odd happens: the connection between browser and server is broken.
    @Override
    public ModelAndView resolveException(HttpServletRequest arg0,
            HttpServletResponse arg1, Object arg2, Exception exception) {
        Map<Object, Object> model = new HashMap<Object, Object>();
        if (exception instanceof MaxUploadSizeExceededException) {
            model.put("errors", "File size should be less than "
                    + ((MaxUploadSizeExceededException) exception).getMaxUploadSize() + " bytes.");
        } else {
            model.put("errors", "Unexpected error: " + exception.getMessage());
        }
        model.put("FORM", new UploadForm());
        // The program reaches this line and returns a ModelAndView object normally.
        return new ModelAndView("/FileUploadForm", (Map) model);
    }
}
I've confirmed that the problem I ran into is a mechanism of Tomcat 7/8 for aborted upload requests. A newer connector attribute, "maxSwallowSize", is the key to dealing with this situation. It shows up when you upload a file larger than 2 MB, because 2 MB is the default value of this attribute: Tomcat 7/8 will not swallow the remaining bytes still being uploaded by the browser, so it simply drops the connection. Please visit http://tomcat.apache.org/tomcat-8.0-doc/config/http.html and search for "maxSwallowSize".
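For reference, a sketch of the relevant Connector attribute in Tomcat's conf/server.xml (the other attributes shown are just the defaults, adjust to your setup); the value is in bytes, and -1 removes the limit entirely:
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxSwallowSize="-1" />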
Change the CommonsMultipartResolver configuration to use a larger upload size. For example, the following bean definition sets the maximum upload size to roughly 100 MB (100,000,000 bytes).
<bean id="filterMultipartResolver" class="org.springframework.web.multipart.commons.CommonsMultipartResolver">
<property name="maxUploadSize" value="100000000"/>
</bean>
If you use a web server between the browser and Tomcat, make sure to change the upload size and connection timeout on the web server as well, otherwise the request may time out before reaching Tomcat.
Related
I have an endpoint in a Spring MVC controller that allows the user to start downloading a file while I am still processing the necessary data on the server. The endpoint looks like this:
@GetMapping(value = "/download.csv")
public void download(HttpServletResponse response) {
    try {
        response.setContentType(APPLICATION_OCTET_STREAM);
        response.addHeader("Content-Disposition", "attachment; filename=name.csv");
        final List<Long> listOfIds = getListOfIds();   // id type assumed to be Long
        for (Long id : listOfIds) {
            final String data = getAdditionalData(id);
            write(data, response.getOutputStream());
            response.flushBuffer();
        }
    } catch (Exception ex) {
        // handle error
    }
}
This code works fine: when the user accesses the download endpoint, the web browser immediately asks whether the user wants to save the file. I can then run the expensive getAdditionalData(id) calls on the server while the user is downloading, instead of making the user wait until all the ids have been processed, which could take a long time.
However, there is one problem: when getAdditionalData(id) fails and throws an exception, the web browser shows the download as "completed" instead of "failed" and leaves a partial result with the user.
My question is: is there any way, from this endpoint, to tell the web browser that the download has failed, so it won't show "completed" to the user? Thanks.
I get an exception when trying to upload a file to Amazon S3 from my Java Spring application. The method is pretty simple:
private void productionFileSaver(String keyName, File f) throws InterruptedException {
    String bucketName = "{my-bucket-name}";
    TransferManager tm = new TransferManager(new ProfileCredentialsProvider());
    // TransferManager processes all transfers asynchronously,
    // so this call will return immediately.
    Upload upload = tm.upload(
            bucketName, keyName, new File("/mypath/myfile.png"));
    try {
        // Or you can block and wait for the upload to finish
        upload.waitForCompletion();
        System.out.println("Upload complete.");
    } catch (AmazonClientException amazonClientException) {
        System.out.println("Unable to upload file, upload was aborted.");
        amazonClientException.printStackTrace();
    }
}
It is basically the same code that Amazon provides here, and the same exception with exactly the same message ("profile file cannot be null") appears when trying this other version.
The problem is not related to the file not existing or being null (I have already checked, in a thousand ways, that the File argument received by the TransferManager.upload method exists before calling it).
I cannot find any info about my exception message "profile file cannot be null". The first lines of the error log are the following:
com.amazonaws.AmazonClientException: Unable to complete transfer: profile file cannot be null
at com.amazonaws.services.s3.transfer.internal.AbstractTransfer.unwrapExecutionException(AbstractTransfer.java:281)
at com.amazonaws.services.s3.transfer.internal.AbstractTransfer.rethrowExecutionException(AbstractTransfer.java:265)
at com.amazonaws.services.s3.transfer.internal.AbstractTransfer.waitForCompletion(AbstractTransfer.java:103)
at com.fullteaching.backend.file.FileController.productionFileSaver(FileController.java:371)
at com.fullteaching.backend.file.FileController.handlePictureUpload(FileController.java:247)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
My S3 policy allows getting and putting objects for all kinds of users.
What's happening?
ProfileCredentialsProvider() creates a new profile credentials provider that returns the AWS security credentials configured for the default profile.
So, if you don't have any configuration for the default profile in ~/.aws/credentials, it yields that error when you try to put an object.
If you run your code on the Lambda service, that file is not provided. In that case you also do not need to provide credentials: just assign the right IAM role to your Lambda function, and using the default client constructor should solve the issue.
You may want to change how the TransferManager is constructed according to your needs.
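For illustration only (the builder classes below come from the 1.11.x AWS SDK for Java and may differ in your version), constructing the S3 client explicitly and handing it to the TransferManager avoids depending on a ~/.aws/credentials profile file:
// Illustration: build the client from the default provider chain (environment
// variables, system properties, profile file, or instance/role credentials)
// and pass it to the TransferManager instead of relying on a profile file.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
        .withRegion(Regions.EU_WEST_1)   // assumption: substitute your region
        .build();
TransferManager tm = TransferManagerBuilder.standard()
        .withS3Client(s3Client)
        .build();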
The solution was pretty simple: I was trying to implement this communication without an AmazonS3 bean for Spring.
This link will help with the configuration:
http://codeomitted.com/upload-file-to-s3-with-spring/
My code worked fine, as below:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
        .withRegion(clientRegion)
        .build();
I have an application with a Java Spring backend, a MySQL database and an AngularJS frontend. It's hosted on an Amazon EC2 m4.xlarge instance.
I use the HTML5 camera capture capability to take a photograph and send the base64-encoded image plus some other metadata to the backend via a RESTful web service. At the backend I convert the base64 data to a PNG file, save it to disk, and make an entry in the MySQL database about the status of the file.
This worked fine until many users started uploading images at the same time. There are 4000+ users in the system, and at peak there can be around 1000 concurrent users trying to upload image data simultaneously. With that many users my application slows down and takes 10-15 seconds to return any page (normally it's under 2 seconds). I checked my server stats: CPU utilization is under 20%, and no swap memory is used.
I am not sure where the bottleneck is or how to measure it. Any suggestions on how to approach the problem? I know that autoscaling EC2 and queueing the image processing at the backend might help, but before doing anything I want to get to the root cause of the problem.
Java code to process base64 image:
/**
 * POST /rest/upload/studentImage -> Upload a photo of the user and update the Student (Student History and System Logs)
 */
@RequestMapping(value = "/rest/upload/studentImage",
        method = RequestMethod.POST,
        produces = "application/json")
@Timed
public Integer create(@RequestBody StudentPhotoDTO studentPhotoDTO) {
    log.debug("REST request to save Student : {}", studentPhotoDTO);
    Boolean success = false;
    Integer returnValue = 0;
    final String baseDirectory = "/Applications/MAMP/htdocs/studentPhotos/";
    Long studentId = studentPhotoDTO.getStudent().getStudentId();
    String base64Image = studentPhotoDTO.getImageData().split(",")[1];
    byte[] imageBytes = DatatypeConverter.parseBase64Binary(base64Image);
    try {
        BufferedImage image = ImageIO.read(new ByteArrayInputStream(imageBytes));
        String filePath = baseDirectory + "somestring";
        log.info("Saving uploaded file to: " + filePath);
        File f = new File(filePath);
        Boolean bool = f.mkdirs();
        File outputfile = new File(filePath + studentId + ".png");
        success = ImageIO.write(image, "png", outputfile);
    } catch (IOException e) {
        success = false;
        e.printStackTrace();
    } catch (Exception ex) {
        success = false;
        ex.printStackTrace();
    }
    if (success) {
        returnValue = 1;
        // update student
        studentPhotoDTO.getStudent().setPhotoAvailable(true);
        studentPhotoDTO.getStudent().setModifiedOn(new Date());
        studentRepository.save(studentPhotoDTO.getStudent());
    }
    return returnValue;
}
Here is an image of my CloudWatch network monitoring.
Update (06/17/2015):
I am serving the Spring Boot Tomcat application behind Apache using an AJP ProxyPass. I tried serving the Tomcat app directly, without Apache in front, and that seemed to significantly improve the performance: my app didn't slow down as before. Still looking for the root cause.
Instead of using a purely technical approach to solve this, I would recommend a design that takes care of the problem. In my opinion, based entirely on the information you have provided, it is a design problem. To find the root cause I would need to have a look at your code.
As it stands, though, the design should use a queue that processes images at the back end, while the front end notifies the user that their image has been queued for processing. Once it is processed, it should say that it has been processed (or failed because of some error); a minimal sketch of such a queue follows.
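Purely as a sketch (the executor setup and the saveToDisk/markProcessed helpers are placeholders, not code from the question), the upload endpoint could decode the image, drop the work onto a bounded in-memory queue from java.util.concurrent, and return at once, while a small worker pool does the ImageIO and database work:
// Sketch: bounded work queue plus a fixed worker pool.
private final ExecutorService imageWorkers = new ThreadPoolExecutor(
        4, 4, 0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>(1000)); // cap on pending uploads

public Integer create(StudentPhotoDTO dto) {
    byte[] imageBytes = DatatypeConverter.parseBase64Binary(
            dto.getImageData().split(",")[1]);    // decode on the request thread (cheap)
    imageWorkers.submit(() -> {
        saveToDisk(dto.getStudent().getStudentId(), imageBytes); // slow ImageIO work
        markProcessed(dto.getStudent());                         // DB status update
    });
    return 1; // meaning "queued", not "done"; the UI reports completion later
}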
If you can show code, I can try to figure out the problem if there is any.
I've read a request for an HTML document from my browser, parsed the requested file from it, found the specified file, and now all that's left is to send the contents of the HTML file back to the browser. What I'm currently doing seems like it should work just fine; however, the contents of the HTML file are not received by the browser.
public void sendResponse(File resource) {
    System.out.println(resource.getAbsolutePath());
    Scanner fileReader;
    try {
        fileReader = new Scanner(resource);
        while (fileReader.hasNext()) {
            socketWriter.println(fileReader.nextLine());
        }
    } catch (FileNotFoundException e) {
        System.out.println("File not found!");
        e.printStackTrace();
    }
}
What am I doing incorrectly? There is no exception thrown, but the browser just keeps loading and loading.
That suggests your code is stuck in an infinite loop. Check your while loop: is nextLine() not moving the file pointer ahead?
It's hard to tell without knowing what type socketWriter is, but I imagine you'll need to close the connection. Look for a close() method or something similar on socketWriter and call it when you're done.
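For illustration only, assuming socketWriter is a PrintWriter over the socket's OutputStream (its type isn't shown in your code), something along these lines inside the existing try block would give the browser a proper status line and headers, and then end the response by closing the connection:
// Sketch only: status line and headers first, a blank line, then the body;
// closing the writer (and with it the socket) tells the browser the response
// is complete, since no Content-Length is sent.
socketWriter.println("HTTP/1.1 200 OK");
socketWriter.println("Content-Type: text/html");
socketWriter.println("Connection: close");
socketWriter.println();                       // blank line ends the headers
Scanner fileReader = new Scanner(resource);
while (fileReader.hasNextLine()) {
    socketWriter.println(fileReader.nextLine());
}
fileReader.close();
socketWriter.flush();
socketWriter.close();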
It is not evident from your code where socketWriter is going. Low-level operations such as sockets are best handled by the web server itself. Normally, when we have to write a response back to the browser, we make use of the HttpServletResponse object which is available in the doGet / doPost methods of your servlet. Refer to the javadocs for more details; a rough example follows.
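A rough sketch of that servlet approach (the servlet mapping and file path below are made up), where the container handles the HTTP framing and connection lifecycle for you:
// Sketch: let the servlet container send the status line and headers.
@WebServlet("/page")
public class HtmlServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/html");
        // hypothetical location of the file to serve
        Files.copy(Paths.get("/path/to/index.html"), resp.getOutputStream());
    }
}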
This should be a fairly simple one... as is so often the case...
I'm writing a desktop application in Java, which must keep a cache of images from Facebook.
I don't seem to be able to reliably determine when I have a cache hit or miss as I would expect to based on the resource modification date.
I've noticed that the Last-Modified header in Facebook's responses is almost always (in the case of profile pictures retrieved by the method below) midnight on 1st Jan 2009... Browsing through a few pages on the site with Chrome's resource monitor open, I see that many images do indeed have this date set as Last-Modified, and that in cases where this is true they also tend to have X-Backend, X-Blockid and X-Cache-by (or sometimes X-N) fields. There are other cases in which Last-Modified is set to some older date and the X-* headers are not present, and other permutations too (mostly, it seems, on interface and ad graphics).
Thus far, searching for info on these X-* headers, combined with my limited basic knowledge of HTTP, has left me little the wiser as to how to properly implement my cache. Clearly, between an ordinary browser and the Facebook servers, there is no trouble ascertaining whether or not the images in the cache are up to date, but the mechanism used to determine this is not clear to me.
At the moment, I have a method that opens a URLConnection to the appropriate Facebook resource, setting If-Modified-Since in cases where a corresponding file already exists in the cache. If I get a 304 response code, then I return, after calling setLastModified on the cache file for good measure (partly because, for now, the cache of images is going into SVN and they all get lastModified set to whenever they happened to be checked out).
public static void downloadPicture(String uid) {
    try {
        URL url = new URL("http://graph.facebook.com/" + uid + "/picture?type=large");
        String fileName = "data/images/" + uid + ".jpg";
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        File cachedPic = new File(fileName);
        long lastModified = cachedPic.lastModified();
        if (cachedPic.exists()) {
            conn.setIfModifiedSince(lastModified);
        }
        conn.connect();
        if (conn.getResponseCode() == 304) {
            System.out.println("Using cache for " + uid);
            cachedPic.setLastModified(conn.getLastModified());
            return;
        }
        System.out.println("Downloading new for " + uid);
        ReadableByteChannel rbc = Channels.newChannel(conn.getInputStream());
        FileOutputStream fos = new FileOutputStream(fileName);
        fos.getChannel().transferFrom(rbc, 0, 1 << 24);
        fos.close();
    } catch (MalformedURLException e) {
        System.err.println("This really shouldn't happen...");
        e.printStackTrace();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
There are undoubtedly other improvements that should be made to this process (the code above takes an appreciable amount of time for each individual item, which it clearly shouldn't, and I'm sure I can resolve that; comments on best practice are still welcome), but what I need to find out is why I always seem to get a 304 whenever a cached file is present, and what the correct method is for ensuring that the proper requests are sent.
I think part of the problem here is the redirect that FB does when you use the Graph API URL to get the picture. The URL http://graph.facebook.com/user_uid/picture actually redirects you to another URL for the picture. From what I have seen, the redirected URL changes when you change the profile picture. So the Graph API picture URL itself would not change very often, but the URL that you get redirected to would change.
I cannot point to any "official" documentation to back up these claims; this is just what I have experienced.
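Building on that observation, here is a rough sketch (not tested against Facebook's current behaviour) of resolving the redirect target yourself and using it as the cache key, so a changed profile picture shows up as a new URL:
// Sketch: ask for the picture without following the redirect, and treat the
// Location header (the real image URL) as the thing to compare against.
public static String resolvePictureUrl(String uid) throws IOException {
    URL graphUrl = new URL("http://graph.facebook.com/" + uid + "/picture?type=large");
    HttpURLConnection conn = (HttpURLConnection) graphUrl.openConnection();
    conn.setInstanceFollowRedirects(false);          // we want the Location header ourselves
    int code = conn.getResponseCode();
    if (code == HttpURLConnection.HTTP_MOVED_TEMP || code == HttpURLConnection.HTTP_MOVED_PERM) {
        return conn.getHeaderField("Location");      // changes when the profile picture changes
    }
    return graphUrl.toString();                      // no redirect: fall back to the graph URL
}
If the returned URL differs from the one stored alongside the cached image, download again; if it matches, reuse the cached copy without relying on Last-Modified at all.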