Here's what I'm doing. I want to upload a multipart file via Ajax to my Spring web app. When the server receives the POST request, it creates a ticket number in the database. It then starts a thread that handles the actual file upload. The server then returns the ticket number.
I am using the CommonsMultipartResolver to handle the request and I have set the resolveLazily flag to true so that the Multipart isn't resolved right away.
So here's something along the lines of what I have
@Controller
public class MyController {

    @RequestMapping(value = "/upload", method = RequestMethod.POST)
    @ResponseStatus(value = HttpStatus.OK)
    @ResponseBody
    public String upload(final MultipartHttpServletRequest request, String fileName) {
        final String ticket = dao.createUploadTicket(fileName);
        Runnable run = new Runnable() {
            @Override
            public void run() {
                dao.writeUpdate(ticket, "Getting data from request");
                final MultipartFile file = request.getFile("theFile");
                dao.writeUpdate(ticket, "Multipart file processed");
                try {
                    dao.writeUpdate(ticket, "Saving file to disk");
                    // transferTo needs a destination file, not just a directory
                    file.transferTo(new File("/myDirectory", file.getOriginalFilename()));
                    dao.writeUpdate(ticket, "File saved to disk");
                } catch (Exception e) {
                    dao.writeUpdate(ticket, "File upload failed with the exception " + e.toString());
                }
            }
        };
        Thread t = new Thread(run);
        t.start();
        return ticket;
    }
}
So the point here is that the ticket number can be used to get progress updates. Say a large file is being uploaded. The client that made the file upload POST (say, in this instance, an Ajax request) can do it asynchronously and get back a ticket number. The client can then use that ticket number to determine the stage of the file upload and display information in another page.
One other use is that I can have an HTML page that makes a request to the server for all ticket numbers, then shows a "live" view of all the file uploads that are taking place on the server.
I haven't been able to get this to work because as soon as the controller returns, Spring calls cleanupMultipart() in the CommonsMultipartResolver. Since the resolveLazily flag is set to true, the multipart files haven't been resolved yet, so when cleanupMultipart() is called, it begins to resolve and initialize them. This leads to a race condition between the call to request.getFile("theFile") in the runnable and the cleanupMultipart() call, eventually producing an exception.
Anyone have any ideas? Am I breaking some kind of HTTP contract here by wanting to do back-end asynchronous file handling?
Each HTTP request is already executed in its own thread, and the client can make a few requests in parallel, asynchronously. So you don't need to start a new thread. Just save/process the file as usual, in the main thread, and make the 'async file upload' asynchronous only on the client side.
Also, you should send the HTTP response only when you've processed the input. You can't read the input headers, send an HTTP response, and then continue reading data from the browser. Consume input -> process it -> send output; that's how the HTTP/1.0 and 1.1 protocols work.
If you need a ticket number for the upload, you could create it before the actual upload by using a two-step upload:

1. Ajax GET request to obtain a ticket number
2. POST the file content together with the ticket number (received in the previous step)
3. Plus an Ajax GET for the current status of the ticket, any time later, asynchronously
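The server-side state behind that flow can be as simple as a thread-safe ticket registry. A minimal plain-Java sketch (the class and method names are hypothetical, not from the question):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory registry backing the two-step upload:
// step 1 creates a ticket, step 2 updates it as the upload progresses,
// and the status endpoint reads it at any time.
public class TicketRegistry {
    private final Map<String, String> status = new ConcurrentHashMap<>();

    // Step 1: GET /ticket -> a fresh ticket number
    public String createTicket() {
        String ticket = UUID.randomUUID().toString();
        status.put(ticket, "CREATED");
        return ticket;
    }

    // Step 2: the upload handler reports progress under the ticket
    public void update(String ticket, String state) {
        status.replace(ticket, state);
    }

    // Status endpoint: GET /status?ticket=... -> current state
    public String statusOf(String ticket) {
        return status.getOrDefault(ticket, "UNKNOWN");
    }
}
```

A real implementation would persist the tickets (as the question's dao does), but the concurrency story is the same: the upload thread writes, the status endpoint reads.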
Related
I use Java and Spring framework to create, inside a REST controller class, a method bound to GET requests.
However, the result returned by this method is sent as a stream which is fed asynchronously by another service (using InfluxDB).
Therefore, it immediately returns code 200 to the client, even though a timeout or any exception can occur afterwards.
I would like to notify the client about this.
/**
 * InfluxDB service
 */
@Inject
InfluxDBService influxDBService;

/**
 * @return CSV file containing the data
 */
@RequestMapping(value = "/dump", method = RequestMethod.GET, produces = "application/csv")
public @ResponseBody void getDump(
        HttpServletResponse response,
        @RequestParam(value = "app", required = false) String appFilter,
        @RequestParam(value = "context", required = false) String contextFilter,
        @RequestParam(value = "path", required = false) String pathRegex
) throws DataAnalysisException {
    [...]
    InputStream dump = influxDBService.dump( ... filters after treatment ...);

    response.setContentType("application/csv");
    long currentTime = System.currentTimeMillis() / 1000;
    String fileName = "influxdb-dump_" + currentTime + ".csv";
    response.setHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");

    try {
        FileCopyUtils.copy(dump, response.getOutputStream());
    } catch (IOException e) {
        throw new DataAnalysisException("Could not get output from request results", e);
    }
}
In the dump() method, an OkHttpClient creates a remote connection to an InfluxDB server and returns an InputStream of data. This client has a default timeout of 10 seconds.
If there is not too much data, everything works fine and the client downloads a CSV with correct data.
But if the InfluxDB server doesn't answer in time (too much data), then an empty CSV file is downloaded, even though HTTP code 200 is returned.
The thing is, when I debug, execution goes through the FileCopyUtils.copy line (which commits the 200 response), but after 10 seconds it lands in the catch block that throws DataAnalysisException. By then the client has already downloaded an empty CSV and received code 200.
DataAnalysisException is a custom exception returning HTTP code 500.
My question is: after the timeout, is there a way to notify the client that we actually had an issue even though it got a 200? That would help me build an error page for it.
Thanks to you all.
I solved it.
Instead of FileCopyUtils.copy, I used StreamUtils.copy, which is basically the same thing, except it doesn't automatically close the input and output streams.
Then, in a catch clause, I do response.reset() then response.sendError(code, "msg"), and throw an exception.
And in a finally clause, I manually close both input and output streams.
Therefore, the CSV headers and remaining data are cleared, and when the streams close it doesn't tell the browser to download a CSV file.
Don't hesitate to contact me if you need more info or precise code.
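The essential part of that fix is copying without closing either stream, so the response can still be reset on failure, and closing both in a finally block. A plain-Java sketch of the pattern (the copy method below mimics what Spring's StreamUtils.copy does; the "OK"/"ERROR" return values stand in for the real controller's sendError and exception):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of the fix: copy without closing either stream, so that on
// failure the response can still be reset, then close both in finally.
public class SafeDump {

    // Copies input to output but leaves both streams open,
    // like Spring's StreamUtils.copy.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[4096];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        out.flush();
        return total;
    }

    // Mirrors the controller body: try to stream, report errors, always clean up.
    static String dump(InputStream in, OutputStream out) {
        try {
            copy(in, out);
            return "OK";
        } catch (IOException e) {
            // The real controller calls response.reset() and
            // response.sendError(500, "msg") here before rethrowing.
            return "ERROR";
        } finally {
            try { in.close(); } catch (IOException ignored) {}
            try { out.close(); } catch (IOException ignored) {}
        }
    }
}
```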
I have a Java backend with REST-API and an Angularjs frontend. Users can use the frontend to request information from the backend. When this happens, the backend generates a file on the fly and sends it to the frontend. So far so good.
The problem occurs when users request large amounts of information. This makes generating the file take so long that the frontend times out and aborts the connection.
Is there a way to reassure the client that a response is actually going to come, or increase the timeout limit for this one endpoint only? Alternatively, is there a way for the server to send two responses, one immediately after receiving the request and one after the file is generated?
The API endpoint looks like this:
@Path("download")
@Produces(MediaType.APPLICATION_OCTET_STREAM)
@Consumes(MediaType.APPLICATION_JSON)
public Response download() {
    StreamingOutput stream = // stream containing the file
    return Response.ok(stream)
            .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"download.xlsx\"")
            .build();
}
The frontend makes the request by doing window.open(download url, '_blank', '') (the content of the file depends on previous input from the user).
I am writing a Spring controller that handles an HTTP PUT request from a client, generates an S3 pre-signed URL, and issues an HTTP 307 (Temporary Redirect) status code. So basically I am authenticating the client, and if that succeeds, I ask it to write to an S3 folder. The client is able to write to the signed URL location.
Now my concern is that the client will have to upload twice: once to my application server and then to S3, so the operation will take double the time.
Is my understanding correct? Does the client actually do two writes in this case? Or is the client smart enough to push part of the payload first and, only if that succeeds, push the entire payload?
I read about the HTTP 100 (Continue) status code, but it looks like the app server/Tomcat already issues it and it is not in my control.
Here is my Spring controller:
@RequestMapping("/upload")
public ResponseEntity<Void> execute(HttpServletRequest request) throws IOException, ServletException {
    HttpHeaders headers = new HttpHeaders();
    String redirectUrl = getRedirectUrl(request.getRequestURI(), request.getMethod());
    headers.setLocation(URI.create(redirectUrl));
    return new ResponseEntity<Void>(null, headers, HttpStatus.TEMPORARY_REDIRECT);
}
How can I prevent the client from uploading the entire payload to my app server?
And is my understanding correct?
The answer is YES. The server sends the response to a PUT request only after reading the full request, including the body. When your client repeats the request in response to the 307 (Temporary Redirect), it is a brand-new HTTP request.
Also, an important point from the spec on using the 307 response code (see below) should be considered for this approach:
If the 307 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.
On the point:
How can I prevent the client from uploading the entire payload to my app server?
You may do the upload to S3 in the background from your controller and return a redirect response (301?) pointing to a URL which will return the status of the upload request.
This just isn't how HTTP works. HTTP has no mechanism to halt a file upload other than closing the connection, but if you close the connection you can't return the redirect information.
If you want the client to upload directly to S3, you will need to do it in two steps.
Have the client request a URL for the file transfer, then have it initiate the transfer against that URL.
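From the client side, that two-step flow can be sketched with Java's built-in java.net.http client. The URLs and paths below are placeholders, and step 1's response parsing is omitted; this only shows how the payload goes straight to the signed URL, bypassing the app server:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Hypothetical two-step upload: first GET a pre-signed URL from the app
// server, then PUT the file body directly to S3 at that URL, so the
// payload never passes through the app server.
public class TwoStepUpload {

    // Step 1: ask the app server for a signed upload URL (no payload yet).
    static HttpRequest buildSignedUrlRequest(String appServer) {
        return HttpRequest.newBuilder(URI.create(appServer + "/upload-url"))
                .GET()
                .build();
    }

    // Step 2: send the actual bytes directly to the signed URL.
    static HttpRequest buildUploadRequest(String signedUrl, byte[] body) {
        return HttpRequest.newBuilder(URI.create(signedUrl))
                .PUT(HttpRequest.BodyPublishers.ofByteArray(body))
                .build();
    }
}
```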
I have an HTML form that will upload a file (among other form fields) to my server. There, I am using Java and Jersey, and have created a filter which will throw an exception if the file is too large (>10MB). But I can't seem to get the browser to stop the file upload as soon as I throw the exception. I can see that the reading from the InputStream stops after 10MB on the server side, but the browser will still continue to upload the file until it is finished. Only then will the error be shown.
Is there something about the way HTTP and browsers work that prevents this from working?
The user experience becomes worse if the user has to wait for the whole file to be uploaded before getting the error.
This is the Jersey-filter I am using. This seems to be functioning correctly on the server side, so the problem seems to be with the browser.
import java.io.IOException;

import org.apache.commons.fileupload.util.LimitedInputStream;
import org.springframework.stereotype.Component;

import com.sun.jersey.spi.container.ContainerRequest;
import com.sun.jersey.spi.container.ContainerRequestFilter;
import com.sun.jersey.spi.container.ContainerResponseFilter;
import com.sun.jersey.spi.container.ResourceFilter;

@Component
public class SizeLimitFilter implements ResourceFilter, ContainerRequestFilter {

    @Override
    public ContainerRequest filter(final ContainerRequest request) {
        LimitedInputStream limitedInputStream =
                new LimitedInputStream(request.getEntityInputStream(), 1024 * 1024 * 10) {
            @Override
            protected void raiseError(final long pSizeMax, final long pCount) throws IOException {
                // Throw error here :)
            }
        };
        request.setEntityInputStream(limitedInputStream);
        return request;
    }

    @Override
    public ContainerRequestFilter getRequestFilter() {
        return this;
    }

    @Override
    public ContainerResponseFilter getResponseFilter() {
        return null;
    }
}
(And yes, I know there exist more elegant and modern solutions for uploading files (JavaScript functions), but I need an approach that works in IE7 :/ )
Edit:
To clarify: everything works; the only problem is that I can't get the browser to show the error before the whole file is uploaded. Hence if I try to upload a 200MB file, the error won't be shown before the whole file is uploaded, even though my code throws an error after just 10MB...
This is because the browser doesn't look for a server response until AFTER it has sent the complete request.
Here are two ways to stop large file uploads before they arrive at your server:
1) Client-side scripting. An applet (Java file upload) or browser plug-in (Flash file upload, ActiveX, or whatever) can check the size of the file and compare it to what you declare to be the limit. (You can use an OPTIONS request for this, or just include the maximum allowed size in your form as either a hidden input or a non-standard attribute on the file input element.)
2) Use a 3rd-party data service such as Amazon S3 that allows uploads directly from the browser. Set up the form on the page so that it uploads to the data service, and on success invoke a JavaScript function you write that notifies your server about an available file upload (with the instance details). Your server then asks the data service for the size of the file. If it's small enough, you retrieve it server-side; if it's too large, you send a delete request to the data service and respond to the browser (via your JavaScript function) that the file was too large and must be uploaded again. Your JavaScript function then updates the display to notify the user.
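The server-side cap that the question's filter implements can be reproduced in plain Java without commons-fileupload. This sketch (names are illustrative) wraps any InputStream and fails as soon as more than the allowed number of bytes has been read, which is the same behavior LimitedInputStream's raiseError hook provides:

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of a size-capped stream, similar in spirit to commons-fileupload's
// LimitedInputStream: it counts bytes as they are read and raises an error
// once the cap is exceeded. Note this only stops server-side reading; as
// the answer explains, the browser keeps sending regardless.
public class CappedInputStream extends FilterInputStream {
    private final long max;
    private long count;

    public CappedInputStream(InputStream in, long max) {
        super(in);
        this.max = max;
    }

    private void check() throws IOException {
        if (count > max) {
            throw new IOException("Upload exceeds " + max + " bytes");
        }
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b != -1) { count++; check(); }
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0) { count += n; check(); }
        return n;
    }
}
```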
I am making an HTTP POST request from a GWT client to an HttpServlet. This servlet creates a PDF file from the request content and writes it to the response stream.
The headers of the response stream are:
Content-Disposition: attachment; filename=report.pdf
I want to open this PDF in a new window of the user's browser or prompt the user to download it.
import com.google.gwt.http.client.*;
...
String url = "http://www.myserver.com/getData?type=3";
RequestBuilder builder = new RequestBuilder(RequestBuilder.POST, URL.encode(url));
try {
    Request request = builder.sendRequest(data, new RequestCallback() {
        public void onError(Request request, Throwable exception) {
            // Couldn't connect to server (could be timeout, SOP violation, etc.)
        }

        public void onResponseReceived(Request request, Response response) {
            if (200 == response.getStatusCode()) {
                // Process the response in response.getText()
                // Window.open(url, "_blank", "");
            } else {
                // Handle the error. Can get the status text from response.getStatusText()
            }
        }
    });
} catch (RequestException e) {
    // Couldn't connect to server
}
How should I handle the response in onResponseReceived?
I think in this case you should not use a single RequestBuilder Ajax call. You can rely on the default browser behavior by making a normal call and letting the browser handle the PDF response (displaying it with a PDF viewer plugin or opening a Save dialog).
There are several alternatives to achieve this:

1. If you can pass your data in a GET request (only possible for a small data volume), you can build the URL with the data as GET parameters and then open a new browser window with Window.open(), passing the URL with the data.
2. For larger amounts of data, first POST your data with RequestBuilder to the server to store it temporarily, and in RequestCallback.onResponseReceived() open a new browser window with a short URL, as in alternative 1. On the server side you have to split the PDF generation servlet into two parts: a data store servlet with a POST method (i.e., storing the data in the web session) and a PDF render servlet with a GET method, which takes the data out of the session (deleting it) and doesn't need large parameters.
3. Create a form with method POST, hidden fields for your data, and the PDF generation servlet URL as the action. Fill the hidden fields with your data and submit the form programmatically (i.e., FormPanel.submit()). If you create your FormPanel with a target name, the browser opens a new window or uses the specified frame to handle the response.
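Alternative 2's server-side split can be sketched with a session-scoped store. The class below stands in for the two servlets, using a map in place of the real web session (all names are hypothetical):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of splitting PDF generation into a POST "store" step and a GET
// "render" step: POST saves the request data under a short key (this would
// live in the web session in a real servlet), and GET retrieves and removes
// it so the browser can fetch the PDF from a short URL.
public class PdfHandoff {
    private final Map<String, String> pending = new ConcurrentHashMap<>();

    // POST /pdf-data: store the payload, return the key used in the GET URL.
    public String store(String requestData) {
        String key = UUID.randomUUID().toString();
        pending.put(key, requestData);
        return key;
    }

    // GET /pdf?key=...: consume the stored data exactly once.
    public String take(String key) {
        return pending.remove(key);
    }
}
```

The one-time removal keeps stale payloads from accumulating in the session, matching the "taking the data out of the session (deleting it)" step in the answer.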
On the client side, use an Anchor instead of a RequestBuilder and invoke the servlet directly, for example with Window.Location.replace(URL.encode(formActionUrl));