I have an application with a Java Spring backend, a MySQL database, and an AngularJS frontend. It's hosted on an Amazon EC2 m4.xlarge instance.
I use the HTML5 camera capture capability to take a photograph and send the base64-encoded image plus some other metadata to the backend via a RESTful web service. At the backend I convert the base64 data to a PNG file, save it to disk, and make an entry in the MySQL database recording the status of the file. This worked fine until many users started uploading images at the same time. There are 4000+ users in the system, and at peak there could be around 1000 concurrent users trying to upload image data simultaneously. Under that load the application slows down and takes 10-15 seconds to return any page (normally it's under 2 seconds). I checked my server stats: CPU utilization is under 20% and no swap is used. I am not sure where the bottleneck is or how to measure it. Any suggestions on how to approach the problem? I know that autoscaling EC2 and queueing the image processing at the backend might help, but before doing anything I want to get to the root cause.
Java code to process base64 image:
/**
 * POST /rest/upload/studentImage -> Upload a photo of the user and update the Student (Student History and System Logs)
 */
@RequestMapping(value = "/rest/upload/studentImage",
        method = RequestMethod.POST,
        produces = "application/json")
@Timed
public Integer create(@RequestBody StudentPhotoDTO studentPhotoDTO) {
    log.debug("REST request to save Student : {}", studentPhotoDTO);
    boolean success = false;
    Integer returnValue = 0;
    final String baseDirectory = "/Applications/MAMP/htdocs/studentPhotos/";
    Long studentId = studentPhotoDTO.getStudent().getStudentId();
    String base64Image = studentPhotoDTO.getImageData().split(",")[1];
    byte[] imageBytes = DatatypeConverter.parseBase64Binary(base64Image);
    try {
        BufferedImage image = ImageIO.read(new ByteArrayInputStream(imageBytes));
        String filePath = baseDirectory + "somestring";
        log.info("Saving uploaded file to: " + filePath);
        File f = new File(filePath);
        boolean dirsCreated = f.mkdirs();
        File outputfile = new File(filePath + studentId + ".png");
        success = ImageIO.write(image, "png", outputfile);
    } catch (IOException e) {
        success = false;
        e.printStackTrace();
    } catch (Exception ex) {
        success = false;
        ex.printStackTrace();
    }
    if (success) {
        returnValue = 1;
        // update student
        studentPhotoDTO.getStudent().setPhotoAvailable(true);
        studentPhotoDTO.getStudent().setModifiedOn(new Date());
        studentRepository.save(studentPhotoDTO.getStudent());
    }
    return returnValue;
}
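As a side note on the handler above: if the data URL payload is already a PNG (image/png), decoding it with ImageIO.read and re-encoding it with ImageIO.write costs CPU and memory on every request for no gain. A minimal sketch of writing the decoded bytes straight to disk instead, reusing the names from the handler (an untested variant, not the original code):
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// imageBytes already holds the Base64-decoded PNG payload, as above.
Path dir = Paths.get(baseDirectory);
Files.createDirectories(dir);                              // no-op if it already exists
Files.write(dir.resolve(studentId + ".png"), imageBytes);  // raw bytes, no re-encode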
Here is an image of my CloudWatch network monitoring.
Update (06/17/2015):
I am serving the Spring Boot Tomcat application behind Apache using an AJP ProxyPass. I tried serving the Tomcat app directly, without Apache in front, and that seemed to significantly improve performance; the app didn't slow down as before. Still looking for the root cause.
Instead of reaching for a purely technical fix, I would recommend a design that takes care of this problem. In my opinion, based entirely on the information you have provided, this is a design problem. To find the root cause I would need to look at your code.
But as of now, the design should be to use a queue that processes images at the back end, while the front end notifies the user that their image has been queued for processing. Once it's processed, it should report that it has been processed (or not processed, because of some error).
If you can show your code, I can try to figure out the problem, if there is any.
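For illustration, a minimal sketch of that queued design using a plain ExecutorService (the class and method names here are hypothetical; for production a durable queue such as SQS or RabbitMQ would be safer, since an in-memory queue loses pending jobs on restart):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.stereotype.Service;

@Service
public class ImageQueueService {
    // A small fixed pool keeps image work off the request threads
    // and bounds how much CPU the background processing can consume.
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    public void enqueue(StudentPhotoDTO dto) {
        workers.submit(() -> processImage(dto)); // returns immediately
    }

    private void processImage(StudentPhotoDTO dto) {
        // Decode the Base64 payload, write the PNG, update the DB status row.
    }
}
The upload endpoint would then call enqueue(...) and return a "queued" status right away, and the client polls (or is otherwise notified) for the final result.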
Related
I have an endpoint in a Spring MVC controller that allows the user to start downloading a file while I am still processing the necessary data on the server. The endpoint looks like this:
@GetMapping(value = "/download.csv")
public void download(HttpServletResponse response) {
    try {
        response.setContentType(APPLICATION_OCTET_STREAM);
        response.addHeader("Content-Disposition", "attachment; filename=name.csv");
        final List<String> listOfIds = getListOfIds();
        for (final String id : listOfIds) {
            final String data = getAdditionalData(id);
            write(data, response.getOutputStream());
            response.flushBuffer();
        }
    } catch (Exception ex) {
        // handle error
    }
}
This code works fine: when the user hits the download endpoint, the web browser immediately asks whether to save the file. I can then run the expensive getAdditionalData(id) calls on the server while the user is already downloading, instead of making the user wait until every id has been processed, which could take a long time.
However, there is one problem: when getAdditionalData(id) fails and throws an exception, the web browser shows the download as "completed" instead of "failed" and leaves a partial result with the user.
My question is: is there any way, from this endpoint, to tell the web browser that the download has failed, so it won't show "completed" to the user? Thanks.
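One approach worth trying (behavior may vary by container and browser): with chunked transfer encoding, the terminating zero-length chunk is only sent when the handler completes normally, and most browsers treat a response that ends without it as a failed transfer (Chrome reports ERR_INCOMPLETE_CHUNKED_ENCODING). So letting the exception propagate after streaming has begun, instead of catching it, typically makes Tomcat abort the connection and the browser mark the download as failed. A sketch reusing the helper names from the snippet above:
@GetMapping(value = "/download.csv")
public void download(HttpServletResponse response) throws IOException {
    response.setContentType(APPLICATION_OCTET_STREAM);
    response.addHeader("Content-Disposition", "attachment; filename=name.csv");
    for (final String id : getListOfIds()) {
        // If getAdditionalData throws, deliberately do NOT catch it here:
        // the aborted chunked response is what signals failure to the browser.
        final String data = getAdditionalData(id);
        write(data, response.getOutputStream());
        response.flushBuffer();
    }
}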
I have been trying to upload to a Java Spring server running on my laptop, from an app written in Xamarin.Forms on a physical Redmi Android device.
But when I send out the multipart request, if the file is bigger than about 2MB, its size is off by a few hundred bytes by the time it reaches the server.
For example, the original video file has 8,268,891 bytes. Sometimes the file that reaches the server has 8,267,175 bytes, sometimes 8,269,279, or some other random number.
I don't know whether it's related to my Xamarin code, because this seems to happen whether I use multipart requests or send the file as a base64 string in the request body.
But just in case, here is my multipart Xamarin code:
var multipartContent = new MultipartFormDataContent();
var videoBytes = new ByteArrayContent(file.GetStream().ToByteArray());
multipartContent.Add(videoBytes, "file", file.Path.FileName());
multipartContent.Add(new StringContent(serializedRequest, Encoding.UTF8, "application/json"), "request");
content = multipartContent;

switch (type)
{
    case RequestType.Post:
        result = await client.PostAsync(_siteUrl + apiPath, content, cancellationToken);
        break;
}
And my controller on the Spring server:
@RequestMapping(value = { RequestMappingConstants.MOBILE + RequestMappingConstants.UPLOAD + RequestMappingConstants.UPLOAD_VIDEO }, method = RequestMethod.POST)
public @ResponseBody VideoUploadResponse uploadVideo(@RequestPart(value = "request") VideoUploadRequest request, @RequestPart(value = "file") MultipartFile file, HttpServletRequest httpRequest) {
    LOG.info("Inside video upload");
    return uploadService.uploadWelcomeVideo(request, file, httpRequest);
}
Also, my settings on the server:
multipart.maxFileSize=100MB
multipart.maxRequestSize=100MB
spring.servlet.multipart.enabled=true
spring.servlet.multipart.file-size-threshold=2KB
spring.servlet.multipart.max-file-size=200MB
spring.servlet.multipart.max-request-size=215MB
spring.servlet.multipart.resolve-lazily=false
Again, this happens whenever the video file exceeds about 2MB, and the corrupted file that reaches the server is unplayable.
The server and client are running on the same Wi-Fi network.
I would be very grateful if you could help.
I have found that you also need to adjust the Tomcat and/or Jetty settings (whichever applies):
server.jetty.max-http-form-post-size: 100MB
server.tomcat.max-http-form-post-size: 100MB
server.tomcat.max-swallow-size: -1
It turned out to be something wrong with my laptop or wireless network that was causing packet loss. It had nothing to do with the code, which worked fine when I tried it against a production server.
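Silent corruption like this is exactly what an end-to-end checksum catches early. For future debugging, a minimal server-side sketch (sha256Hex is a hypothetical helper; the client would compute the same hash over the bytes it sends, e.g. in the "request" part, so the two can be compared):
import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import org.springframework.web.multipart.MultipartFile;

// Hex-encoded SHA-256 of the bytes the server actually received.
static String sha256Hex(MultipartFile file) throws IOException, NoSuchAlgorithmException {
    byte[] digest = MessageDigest.getInstance("SHA-256").digest(file.getBytes());
    StringBuilder hex = new StringBuilder();
    for (byte b : digest) {
        hex.append(String.format("%02x", b));
    }
    return hex.toString();
}
If the hashes differ, the bytes changed in transit (as happened here); if they match, the bug is in how either side builds or reads the multipart body.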
I've read the article "File upload using Spring MVC and annotation configuration"
(http://www.raistudies.com/spring/spring-mvc/file-upload-spring-mvc-annotation)
I really learned some useful things from it and thanks for this article!
It works fine in Tomcat 8.0.20 when I upload a small file.
But when I upload a large file of more than 3M bytes, the MaxUploadSizeExceededException is caught "twice", and then the connection between browser and server breaks. The browser reports an ERR_CONNECTION_RESET error and no error info (or page) is shown; it looks as if someone had pulled the network cable out of my computer.
My system environment is:
JRE 1.8 + Tomcat 8.0.20 + WAR package
and my Tomcat is a brand-new, clean Tomcat in a new folder, which contains only this WAR.
I tried Spring 4.1.4; the problem remains.
By the way, lazy-mode file uploading is not suitable in my case: I need to report the result to the user immediately when MaxUploadSizeExceededException is caught, and that is the behavior I expect.
How do I resolve this "disconnection" problem caused by uploading a large file?
Thanks a lot and best regards!
@Controller
@RequestMapping(value = "/FileUploadForm.htm")
public class UploadFormController implements HandlerExceptionResolver {
    // This class is both the controller and the exception handler.

    @RequestMapping(method = RequestMethod.GET)
    public String showForm(ModelMap model) {
        UploadForm form = new UploadForm();
        model.addAttribute("FORM", form);
        return "FileUploadForm";
    }

    @RequestMapping(method = RequestMethod.POST)
    public String processForm(@ModelAttribute(value = "FORM") UploadForm form, BindingResult result) {
        if (!result.hasErrors()) {
            FileOutputStream outputStream = null;
            String filePath = System.getProperty("java.io.tmpdir") + "/" + form.getFile().getOriginalFilename();
            try {
                outputStream = new FileOutputStream(new File(filePath));
                outputStream.write(form.getFile().getFileItem().get());
                outputStream.close();
            } catch (Exception e) {
                System.out.println("Error while saving file");
                return "FileUploadForm";
            }
            return "success";
        } else {
            return "FileUploadForm";
        }
    }

    // MaxUploadSizeExceededException can be caught here... but twice.
    // Then something weird happens: the connection between browser and server breaks.
    @Override
    public ModelAndView resolveException(HttpServletRequest arg0,
            HttpServletResponse arg1, Object arg2, Exception exception) {
        Map<Object, Object> model = new HashMap<Object, Object>();
        if (exception instanceof MaxUploadSizeExceededException) {
            model.put("errors", "File size should be less than "
                    + ((MaxUploadSizeExceededException) exception).getMaxUploadSize() + " bytes.");
        } else {
            model.put("errors", "Unexpected error: " + exception.getMessage());
        }
        model.put("FORM", new UploadForm());
        // The program reaches this line and returns a ModelAndView object normally.
        return new ModelAndView("/FileUploadForm", (Map) model);
    }
}
I've confirmed that the problem I ran into is a Tomcat 7/8 mechanism for aborted upload requests. A newer connector attribute, "maxSwallowSize", is the key to handling this situation, and it kicks in when you upload a file larger than 2MB, because 2MB is the attribute's default value: Tomcat 7/8 refuses to swallow the remaining bytes still being uploaded by the browser and simply drops the connection. Please visit http://tomcat.apache.org/tomcat-8.0-doc/config/http.html and search for "maxSwallowSize".
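On a standalone Tomcat, this attribute goes on the HTTP connector in conf/server.xml. For example (the other attributes shown are the stock defaults; -1 disables the limit, so Tomcat reads the whole aborted upload instead of resetting the connection):
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxSwallowSize="-1" />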
Change CommonsMultipartResolver configuration to use a larger upload size. For example, the following line will set the maximum upload size to 100MB.
<bean id="filterMultipartResolver" class="org.springframework.web.multipart.commons.CommonsMultipartResolver">
    <property name="maxUploadSize" value="100000000"/>
</bean>
If you use a web server between the browser and Tomcat, make sure to change the upload size and connection timeout on the web server as well, otherwise the request may time out before reaching Tomcat.
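For instance, if Apache httpd is fronting Tomcat, the directives to check would be along these lines (the values here are illustrative, not prescriptive):
# Allow request bodies up to ~100MB (0 removes the limit entirely)
LimitRequestBody 104857600
# Give slow uploads time to finish before the proxy gives up
ProxyTimeout 300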
I'm using jsoup in my Android app, but the problem is that the HTML source takes too long to download. Here is my code:
long t = System.currentTimeMillis();
String url = "http://www.stackoverflow.com/";
Document doc = null;
try {
    Connection c = Jsoup.connect(url);
    doc = c.get();
    System.out.println(System.currentTimeMillis() - t);
} catch (IOException e) {
    e.printStackTrace();
}
Executing this code takes 1.265 seconds, which feels really weird, because I can download the whole website (with images and all that good stuff) using a web browser in under 0.5 seconds on the same device. Did I do something wrong? Or is there a faster way to get the HTML source of a website? Thanks in advance.
Where are you running this code? On your device? If you are on an LTE/3G network, that timing wouldn't be too far off.
The other reason I can think of is that your wireless router is poorly placed relative to your device, if you are on Wi-Fi.
From that code I don't see anything that would add delay. 1.2 seconds may not be that bad if you don't have the host's DNS entry cached and the server is far away from you.
Also, try setting the user agent to the same one your browser uses when comparing times. The server may assign different priorities based on the user agent, and here you are using the default Java user agent.
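Something like this, for example (the UA string below is just an illustration, and the timeout is optional):
Document doc = Jsoup.connect(url)
        .userAgent("Mozilla/5.0 (Windows NT 10.0; Win64; x64)") // match your browser's UA
        .timeout(10000)                                          // ms; fail fast instead of hanging
        .get();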
This should be a fairly simple one... as is so often the case...
I'm writing a desktop application in Java, which must keep a cache of images from Facebook.
I don't seem to be able to reliably determine when I have a cache hit or miss as I would expect to based on the resource modification date.
I've noticed that the Last-Modified header in Facebook's responses is almost always (in the case of profile pictures retrieved by the method below) midnight on 1st Jan 2009. Browsing through a few pages on the site with Chrome's resource monitor open, I see that many images do indeed have this date set as Last-Modified, and that where this is true they also tend to have X-Backend, X-Blockid and X-Cache-by (or sometimes X-N) fields. In other cases Last-Modified is set to some older date and the X-* headers are not present, and there are other permutations too (mostly, it seems, on interface and ad graphics).
So far, searching for information on these X-* headers, combined with my limited basic knowledge of HTTP, has left me little the wiser about how to properly implement my cache. Clearly, between an ordinary browser and the Facebook servers there is no trouble ascertaining whether the images in cache are up to date, but the mechanism used to determine this is not clear to me.
At the moment, I have a method that opens a URLConnection to the appropriate Facebook resource, setting If-Modified-Since when a corresponding file already exists in the cache. If I get a 304 response code, I return, after calling setLastModified on the cache file for good measure (partly because, for now, the cache of images goes into SVN, where every file's lastModified is set to whenever it happened to be checked out).
public static void downloadPicture(String uid) {
    try {
        URL url = new URL("http://graph.facebook.com/" + uid + "/picture?type=large");
        String fileName = "data/images/" + uid + ".jpg";
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        File cachedPic = new File(fileName);
        long lastModified = cachedPic.lastModified();
        if (cachedPic.exists()) {
            conn.setIfModifiedSince(lastModified);
        }
        conn.connect();
        if (conn.getResponseCode() == 304) {
            System.out.println("Using cache for " + uid);
            cachedPic.setLastModified(conn.getLastModified());
            return;
        }
        System.out.println("Downloading new for " + uid);
        ReadableByteChannel rbc = Channels.newChannel(conn.getInputStream());
        FileOutputStream fos = new FileOutputStream(fileName);
        fos.getChannel().transferFrom(rbc, 0, 1 << 24);
        fos.close();
    } catch (MalformedURLException e) {
        System.err.println("This really shouldn't happen...");
        e.printStackTrace();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
There are undoubtedly other improvements I should make to this process (the code above takes an appreciable amount of time per item, which it clearly shouldn't, and I'm sure I can resolve that, though comments on best practice are still welcome). But what I need to find out is why I seem to always get a 304 whenever any cached file is present, and what the correct method is for ensuring the proper requests are sent.
I think part of the problem here is the redirect that Facebook performs when you use the Graph API URL to fetch the picture. The URL http://graph.facebook.com/user_uid/picture actually redirects you to another URL for the picture, and from what I have seen, that redirect target changes when the profile picture changes. So the Graph API URL itself rarely changes, but the URL you get redirected to does.
I cannot point to any "official" documentation to back up these claims; this is just what I have experienced.
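If that observation holds, one way to use it for cache validation is to stop following the redirect and compare the Location header against the URL remembered for the cached file. A sketch under that assumption:
import java.net.HttpURLConnection;
import java.net.URL;

HttpURLConnection conn = (HttpURLConnection) new URL(
        "http://graph.facebook.com/" + uid + "/picture?type=large").openConnection();
conn.setInstanceFollowRedirects(false);              // inspect the redirect instead of following it
String redirectTarget = conn.getHeaderField("Location");
// If redirectTarget equals the URL stored alongside the cached file,
// the picture is unchanged and the cached copy can be reused;
// otherwise download from redirectTarget and remember the new URL.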