what is delay between HttpPost being sent to server and server responding - java

I'm uploading a zip file from a Java desktop application to an HTTP server (running Tomcat 7). I'm using Apache HttpClient 4.5.3 and I display a progress bar showing upload progress using this wrapper solution: https://github.com/x2on/gradle-hockeyapp-plugin/blob/master/src/main/groovy/de/felixschulze/gradle/util/ProgressHttpEntityWrapper.groovy
So in my code I'm updating the progress bar every time the callback gets called:
HttpEntity reqEntity = MultipartEntityBuilder.create()
.addPart("email", comment)
.addPart("bin", binaryFile)
.build();
ProgressHttpEntityWrapper.ProgressCallback progressCallback = new ProgressHttpEntityWrapper.ProgressCallback() {
@Override
public void progress(final float progress) {
SwingUtilities.invokeLater(
new Runnable()
{
public void run()
{
MainWindow.logger.severe("progress:"+progress);
Counters.getUploadSupport().set((int)progress);
SongKong.refreshProgress(CreateAndSendSupportFilesCounters.UPLOAD_SUPPORT_FILES);
}
}
);
}
};
httpPost.setEntity(new ProgressHttpEntityWrapper(reqEntity, progressCallback));
HttpResponse response = httpclient.execute(httpPost);
HttpEntity resEntity = response.getEntity();
MainWindow.logger.severe("HttpResponse:"+response.getStatusLine());
This reports the upload as a percentage, but there is a sizeable delay between it reporting 100% and the client actually receiving the HTTP status from the server.
07/07/2017 14.23.54:BST:CreateSupportFile$4$1:run:SEVERE: progress:99.19408
07/07/2017 14.23.54:BST:CreateSupportFile$4$1:run:SEVERE: progress:99.40069
07/07/2017 14.23.54:BST:CreateSupportFile$4$1:run:SEVERE: progress:99.6073
07/07/2017 14.23.54:BST:CreateSupportFile$4$1:run:SEVERE: progress:99.81391
07/07/2017 14.23.54:BST:CreateSupportFile$4$1:run:SEVERE: progress:99.99768
07/07/2017 14.23.54:BST:CreateSupportFile$4$1:run:SEVERE: progress:99.99778
07/07/2017 14.23.54:BST:CreateSupportFile$4$1:run:SEVERE: progress:99.99789
07/07/2017 14.23.54:BST:CreateSupportFile$4$1:run:SEVERE: progress:99.999794
07/07/2017 14.23.54:BST:CreateSupportFile$4$1:run:SEVERE: progress:99.9999
07/07/2017 14.23.54:BST:CreateSupportFile$4$1:run:SEVERE: progress:100.0
07/07/2017 14.24.11:BST:CreateSupportFile:sendAsHttpPost:SEVERE: HttpResponse:HTTP/1.1 200 OK
07/07/2017 14.24.11:BST:CreateSupportFile:sendAsHttpPost:SEVERE: Unknown Request
Note this is not due to my Tomcat code doing much, since I haven't yet implemented the server-side handling for this function; it just falls through to the "Unknown Request" branch.
protected void doPost(javax.servlet.http.HttpServletRequest request,
javax.servlet.http.HttpServletResponse response)
throws javax.servlet.ServletException, java.io.IOException
{
String createMacUpdateLicense = request.getParameter(RequestParameter.CREATEMACUPDATELICENSE.getName());
if(createMacUpdateLicense!=null)
{
createMacUpdateLicense(response, createMacUpdateLicense);
}
else
{
response.setCharacterEncoding("UTF-8");
response.setContentType("text/plain; charset=UTF-8");
response.getWriter().println("Unknown Request");
response.getWriter().close();
}
}
How can I more accurately report to the user when it will complete?
Update
I have now fully implemented the server side; this has increased the discrepancy:
@Override
protected void doPost(javax.servlet.http.HttpServletRequest request, javax.servlet.http.HttpServletResponse response)
throws javax.servlet.ServletException, java.io.IOException
{
String uploadSupportFiles = request.getParameter(RequestParameter.UPLOADSUPPORTFILES.getName());
if(uploadSupportFiles!=null)
{
uploadSupportFiles(request, response, uploadSupportFiles);
}
else
{
response.setCharacterEncoding("UTF-8");
response.setContentType("text/plain; charset=UTF-8");
response.getWriter().println("Unknown Request");
response.getWriter().close();
}
}
private void uploadSupportFiles(HttpServletRequest request, HttpServletResponse response, String email) throws IOException
{
Part filePart;
response.setCharacterEncoding("UTF-8");
response.setContentType("text/plain; charset=UTF-8");
try
{
filePart = request.getPart("bin");
String fileName = getSubmittedFileName(filePart);
response.getWriter().println(email+":File:" + fileName);
//Okay now save the zip file somewhere and email notification
File uploads = new File("/home/jthink/songkongsupport");
File supportFile = new File(uploads, email+".zip");
int count =0;
while(supportFile.exists())
{
supportFile = new File(uploads, email+"("+count+").zip");
count++;
}
InputStream input;
input = filePart.getInputStream();
Files.copy(input, supportFile.toPath());
Email.sendAlert("SongKongSupportUploaded:" + supportFile.getName(), "SongKongSupportUploaded:" + supportFile.getName());
response.getWriter().close();
}
catch(ServletException se)
{
response.getWriter().println(email+":"+se.getMessage());
response.getWriter().close();
}
}

Assuming your server-side code just writes the uploaded file somewhere and responds something like "DONE" at the end, here is a rough timeline of what happens:
Bytes written to socket OutputStream
============================|
<--> Buffering |
Bytes sent by TCP stack |
============================
<------> Network latency|
Bytes received by Tomcat
============================
| (Tomcat waits for all data to finish uploading
| before handing it out as "parts" for your code)
| File written to local file on server
| =====
|
| Response "DONE" written by servlet to socket output
| ==
| <---> Network latency
| == Response "DONE" received by client
| |
| |
"100%" for entity wrapper ^ Actual 100% ^
Discrepancy
<----------------------->
"Twilight Zone" : part of discrepancy you cannot do much about.
(progress feedback impossible without using much lower level APIs)
<--------------------->
The scales are of course completely arbitrary, but it shows that there are several factors that can participate into the discrepancy.
Your server writes the file after receiving all bytes, but it does not make a big difference here.
So, the factors:
(client side) Buffering (possibly at several levels) between the Java I/O layer and the OS network stack
Network latency
(server-side) Buffering (possibly at several levels) between the OS network stack and the Java I/O layer
Time to write (or finish writing) zip file on disk
Time to print response (negligible)
Network latency
(client side) Time to read response (negligible)
So you could take that discrepancy into account and adjust the "upload complete" step to 90% of the total progress, and jump from 90 to 100 when you get the final response. From 0% to 90% the user would see "Uploading", with a nice progress bar moving, then you show "Processing...", perhaps with a throbber, and when done, jump to 100%.
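That scaling idea can be sketched with a small helper (a minimal illustration; the class name and the 90% threshold are arbitrary choices, not from the question):

```java
// Map the wrapper's 0-100 upload progress onto 0-90 of the overall bar,
// reserving the final 10% for the server round-trip and processing.
public class ScaledProgress {
    private static final float UPLOAD_SHARE = 90f; // portion given to the upload phase

    // During upload: raw wrapper progress (0-100) -> overall progress (0-90).
    public static int duringUpload(float uploadProgress) {
        return Math.round(uploadProgress * UPLOAD_SHARE / 100f);
    }

    // Once the HTTP response arrives, jump straight to 100.
    public static int onResponseReceived() {
        return 100;
    }
}
```

So when the wrapper reports 50%, you would display 45%, and the bar only hits 100 when `httpclient.execute()` returns.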
That's what many other tools do. Even when I download a file with my browser, there is a small lag towards the end, the download seems stuck at "almost" 100% for a second (or more on my old computer) before the file is actually usable.
If the "twilight zone" time is much higher than the upload time as perceived by your progress wrapper, you might have a problem, and your question would thus be "where does this delay come from?" (for now I don't know). In that case, please provide complete timings (& make sure client & server machines have their clocks synchronized).
If you really need a more accurate/smooth progress report towards the end, you will need a much more involved setup. You will probably need more low-level APIs on the server side (e.g. not using @MultipartConfig etc.), so that your server writes to disk as data is received (which makes error handling much more difficult), and prints a dot to the output and flushes for every 1% of the file that is written to disk (or any other kind of progress you want, provided it's actual progress on the server side). Your client side would then be able to read that response progressively and get an accurate progress report. You can avoid threading on the client side; it's fine to do this sequentially:
POST data, report progress but scaled to 90% (i.e. if the wrapper says 50%, you report 45%)
when done, start reading output from server, and report 91%, 95%, whatever, up until 100%.
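The client side of that sequential read could look roughly like this (a stdlib sketch; the class name, the dot-per-percent protocol and the 90-100 mapping are all illustrative assumptions, not from the question):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class ServerProgressReader {
    // Reads '.' characters from the (hypothetical) response body, where the
    // server flushes one dot per unit of server-side progress, and returns
    // the sequence of overall-progress values that would be reported,
    // mapped onto the final 10% of the bar.
    public static List<Integer> readProgress(InputStream responseBody, int totalDots) throws IOException {
        List<Integer> reported = new ArrayList<>();
        int dotsSeen = 0;
        int b;
        while ((b = responseBody.read()) != -1) {
            if (b == '.') {
                dotsSeen++;
                reported.add(90 + Math.round(10f * dotsSeen / totalDots));
            }
        }
        return reported;
    }
}
```

Note the server must actually flush after each dot, otherwise the dots arrive in one buffered burst and the "progress" is no smoother than before.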
Even with that I'm not sure it's possible to display progress info for all the steps (especially between 100% sent and first byte the server can possibly send), so maybe even that extremely complex setup would be useless (it could very well stall at 90% for a moment, then go 91/92/...99/100 in an instant).
So really at this point it's probably not worth it. If you really have a 17s step between last byte sent by client, and response received, something else is off. Initially I was assuming it was for humongous files, but since then you said your files were up to 50MB, so you might have something else to look at.

Some of the server-side code might change depending on how the chunk data is represented, but the concept is roughly the same. Let's say you are uploading a 10MB file and you have your chunk size set to 1MB. You will send 10 requests to the server with 1MB of data each. The client is actually responsible for breaking all of this up. That is what you will do in Javascript. Then, each request is sent up via HttpRequest along with some other data about the file: chunk number and number of chunks. Again, I use the plupload plugin which handles this for me, so some of the Request data may differ between implementations.
The method I am showing you is part of a Webservice which outputs JSON
data back to the client. Your javascript can then parse the JSON and
look for an error or success message and act appropriately. Depending
on your implementation, the data you send back might be different.
The javascript will ultimately handle the progress bar or percentage
or whatever, increasing it as it gets successful chunk uploads. My
implementation for my project lets plupload deal with all that, but
maybe that article I gave you will give you more control over the
client-side.
protected void Upload()
{
HttpPostedFile file = Request.Files[0];
String relativeFilePath = "uploads/";
try
{
if(file == null)
throw new Exception("Invalid Request.");
//plupload uses "chunk" to indicate which chunk number is being sent
int chunk = int.Parse(Request.Form["chunk"]);
//plupload uses "chunks" to indicate how many total chunks are being sent
int chunks = int.Parse(Request.Form["chunks"]);
//plupload uses "name" to indicate the original filename for the file being uploaded
String filename = Request.Form["name"];
relativeFilePath += filename;
//Create a File Stream to manage the uploaded chunk using the original filename
//Note that if chunk == 0, we are using FileMode.Create because it is the first chunk
//otherwise, we use FileMode.Append to add to the byte array that was previously saved
using (FileStream fs = new FileStream(Server.MapPath(relativeFilePath), chunk == 0 ? FileMode.Create : FileMode.Append))
{
//create the byte array based on the data uploaded and save it to the FileStream
var buffer = new byte[file.InputStream.Length];
file.InputStream.Read(buffer, 0, buffer.Length);
fs.Write(buffer, 0, buffer.Length);
}
if((chunks == 0) || ((chunks > 0)&&(chunk == (chunks - 1))))
{
//This is final cleanup. Either there is only 1 chunk because the file size
//is less than the chunk size or there are multiple chunks and this is the final one
//At this point the file is already saved and complete, but maybe the path is only
//temporary and you want to move it to a final location
//in my code I rename the file to a GUID so that there is never a duplicate file name
//but that is based on my application's needs
Response.Write("{\"success\":\"File Upload Complete.\"}");
}
else
Response.Write("{\"success\":\"Chunk "+chunk+" of "+chunks+" uploaded.\"}");
}
catch(Exception ex)
{
//write a JSON object to the page and HtmlEncode any quotation marks/HTML tags
Response.Write("{\"error\":\""+HttpContext.Current.Server.HtmlEncode(ex.Message)+"\"}");
}
}
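The client is responsible for the splitting step described above. As a rough, hypothetical illustration in Java (the rest of this thread's client code is Java; the class name is invented), the slicing itself is just:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Chunker {
    // Split `data` into ordered chunks of at most `chunkSize` bytes.
    // Each chunk would then be POSTed with its index and the total count,
    // matching the "chunk"/"chunks" form fields the server expects.
    public static List<byte[]> split(byte[] data, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < data.length; offset += chunkSize) {
            int end = Math.min(offset + chunkSize, data.length);
            chunks.add(Arrays.copyOfRange(data, offset, end));
        }
        return chunks;
    }
}
```

With a 10MB file and a 1MB chunk size this yields 10 chunks, and the progress bar can advance by 10% per acknowledged chunk.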

Related

Prevent client timing out while a servlet generates a large download

I have a Java servlet that generates some arbitrary report file and returns it as a download to the user's browser. The file is written directly to the servlet's output stream, so if it is very large then it can successfully download in chunks. However, sometimes the resulting data is not large enough to get split into chunks, and either the client connection times out, or it runs successfully but the download doesn't appear in the browser's UI until it's 100% done.
This is what my servlet code looks like:
@Override
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
response.setContentType("application/pdf");
response.setHeader("Content-Disposition", "attachment; filename=\"" + report.getFileName(params) + "\"");
try(OutputStream outputStream = response.getOutputStream()) {
// Send the first response ASAP to suppress client timeouts
response.flushBuffer(); // This doesn't seem to change anything??
// This calls some arbitrary function that writes data directly into the given stream
generateFile(outputStream);
}
}
I ran a particularly bad test where the file generation took 110,826ms. Once it got to the end, the client had downloaded a 0 byte file - I assume this is the result of a timeout. I am expecting this specific result to be somewhere between 10 and 30 KB - smaller than the servlet's buffer. When I ran a different test, it generated a lot of data quickly (up to 80MB total), so the download appeared in my browser after the first chunk was filled up.
Is there a way to force a downloaded file to appear in the browser (and prevent a timeout from occurring) before any actual data has been generated? Am I on the right track with that flushBuffer() call?
Well, it looks like shrinking the size of my output buffer with response.setBufferSize(1000); allowed my stress test file to download successfully. I still don't know why response.flushBuffer() didn't seem to do anything, but at least as long as I generate data quickly enough to fill that buffer size before timing out, the download will complete.
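The behaviour here is the generic buffered-stream one: nothing reaches the client until the buffer fills or is flushed. A stdlib sketch of that mechanism (BufferedOutputStream stands in for the servlet container's response buffer; sizes and names are arbitrary):

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class BufferDemo {
    // Writes `n` single bytes into a buffer of `bufferSize` and reports how
    // many bytes have actually been passed downstream (i.e. "sent") so far.
    // The buffer is deliberately not flushed or closed, mimicking a servlet
    // that is still generating data.
    public static int bytesSent(int bufferSize, int n) throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        BufferedOutputStream buffered = new BufferedOutputStream(wire, bufferSize);
        for (int i = 0; i < n; i++) {
            buffered.write('x');
        }
        return wire.size(); // bytes forwarded downstream so far
    }
}
```

With a large buffer and a slow generator, zero bytes leave the server for a long time, which is consistent with the browser seeing nothing until the buffer fills; shrinking the buffer (as with response.setBufferSize(1000)) makes the first bytes go out much earlier.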

Uploading mp4 from Android phone to Spring server results in file missing a few hundred bytes if bigger than 2MB

I have been trying to upload to a Java Spring server running on my laptop, using an app written in Xamarin forms, using a physical Redmi Android device.
But when I send out the multi-part request, if the file is bigger than about 2MB, its size differs by a few hundred bytes by the time it reaches the server.
For example, the original video file has 8,268,891 bytes. Sometimes the file that reaches the server will have 8,267,175 and sometimes 8,269,279 or some other random number.
I don't know if it's related to my Xamarin code, because this seems to happen whether I use multi-part requests or send it as a base64 string in a request.
But just in case, here is my multi-part Xamarin code
var multipartContent = new MultipartFormDataContent();
var videoBytes = new ByteArrayContent(file.GetStream().ToByteArray());
multipartContent.Add(videoBytes, "file", file.Path.FileName());
multipartContent.Add(new StringContent(serializedRequest, Encoding.UTF8, "application/json"), "request");
content = multipartContent;
}
switch (type)
{
case RequestType.Post:
result = await client.PostAsync(_siteUrl + apiPath, content, cancellationToken);
break;
And my controller on the Spring server
@RequestMapping(value = { RequestMappingConstants.MOBILE + RequestMappingConstants.UPLOAD + RequestMappingConstants.UPLOAD_VIDEO }, method = RequestMethod.POST)
public @ResponseBody VideoUploadResponse uploadVideo(@RequestPart(value="request") VideoUploadRequest request, @RequestPart(value="file") MultipartFile file, HttpServletRequest httpRequest) {
LOG.info("Inside video upload");
return uploadService.uploadWelcomeVideo(request, file, httpRequest);
}
Also, my settings on the server:
multipart.maxFileSize= 100MB
multipart.maxRequestSize= 100MB
spring.servlet.multipart.enabled=true
spring.servlet.multipart.file-size-threshold=2KB
spring.servlet.multipart.max-file-size=200MB
spring.servlet.multipart.max-request-size=215MB
spring.servlet.multipart.resolve-lazily=false
Again, this happens as long as the video file exceeds about 2MB. The corrupted file that reaches the server is unplayable.
The server and client are running on the same wi-fi network.
I would be very grateful if you could help.
I have found that you also need to adjust the Tomcat and/or Jetty (as appropriate) settings:
server.jetty.max-http-form-post-size: 100MB
server.tomcat.max-http-form-post-size: 100MB
server.tomcat.max-swallow-size: -1
It turned out to be something wrong with my laptop or wireless network that was causing packet loss. Nothing to do with the code, as it worked when I tried it on a production server.

Java Streaming Objects via HTTP: some Best Practises for Exception Handling?

For some larger data we use HTTP streaming of objects in our Java-/Spring-Boot-Application. For that we need to bypass Spring MVC a bit like this:
@GetMapping("/report")
public void generateReport(HttpServletResponse response) throws TransformEntryException {
response.addHeader("Content-Disposition", "attachment; filename=report.json");
response.addHeader("Content-Type", "application/stream+json");
response.setCharacterEncoding("UTF-8");
OutputStream out = response.getOutputStream();
Long count = reportService.findReportData()
.map(entry -> transfromEntry(entry))
.map(entry -> om.writeValueAsBytes(entry))
.peek(entry -> out.write(entry))
.count();
LOGGER.info("Generated Report with {} entries.", count);
}
(...I know this code won't compile - just for illustration purposes...)
This works great so far - except if something goes wrong: let's say after streaming 12 entries successfully, the 13th entry triggers a TransformEntryException during transfromEntry().
The stream stops there, and the client is told its download finished successfully, even though it only received part of the file.
We can log this server-side and even attach a warning or stack trace to the downloaded file, but the client is still told the download finished successfully, while what it got is a partial or even corrupt file.
I know that the HTTP status code gets sent with the header - which is already out. Is there any other way to indicate to the client a failed download?
("Client" in most cases means some web browser)
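Since the status line is gone once streaming has started, one common workaround (an assumption here, not taken from the question) is an in-band end-of-stream marker: the server appends a known sentinel only on success, and the client treats any response that lacks it as truncated or failed. A minimal sketch of the client-side check:

```java
public class StreamIntegrity {
    // Sentinel line the server would append only after the stream completed
    // without error; the exact shape is an arbitrary choice.
    public static final String SENTINEL = "{\"status\":\"complete\"}";

    // Client-side check: did the streamed body end with the success marker?
    public static boolean isComplete(String body) {
        return body.trim().endsWith(SENTINEL);
    }
}
```

This only helps clients that know the convention; a plain browser download will still look "successful", so for browsers the main options remain buffering the whole response before sending, or abruptly closing the connection mid-chunk so the transfer is visibly incomplete.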

Weird issues with gzip encoded responses

Ok, so I'm running my own fork of NanoHttpd (a minimalist java web server, the fork is quite complex though), and I had to implement gzip compression on top of it.
It has worked fine, but it just turned out that Firefox 33.0 on Linux Mint 17.1 will not execute gzipped JS files at all, although they load just fine and the headers look OK etc. This does not happen on the same PC with Chrome, or with any other browser I've tried, but it is still something I must get fixed.
Also, the js resources execute just fine if I disable gzipping. I also tried removing Connection: keep-alive, but that did not have any effect.
Here's the code responsible for gzipping:
private void sendAsFixedLength(OutputStream outputStream) throws IOException {
int pending = data != null ? data.available() : 0; // This is to support partial sends, see serveFile()
headerLines.add("Content-Length: "+pending+"\r\n");
boolean acceptEncoding = shouldAcceptEnc();
if(acceptEncoding){
headerLines.add("Content-Encoding: gzip\r\n");
}
headerLines.add("\r\n");
dumpHeaderLines(outputStream);//writes header to outputStream
if(acceptEncoding)
outputStream = new java.util.zip.GZIPOutputStream(outputStream);
if (requestMethod != Method.HEAD && data != null) {
int BUFFER_SIZE = 16 * 1024;
byte[] buff = new byte[BUFFER_SIZE];
while (pending > 0) {
int read = data.read(buff, 0, ((pending > BUFFER_SIZE) ? BUFFER_SIZE : pending));
if (read <= 0) {
break;
}
outputStream.write(buff, 0, read);
pending -= read;
}
}
outputStream.flush();
outputStream.close();
}
Fwiw, the example I copied this from did not close the outputStream, but without doing that the gzipped resources did not load at all, while non-gzipped resources still loaded ok. So I'm guessing that part is off in some way.
EDIT: Firefox won't give any errors, it just does not execute the script, e.g.:
index.html:
<html><head><script src="foo.js"></script></head></html>
foo.js:
alert("foo");
Does not do anything, despite that the resources are loaded OK. No warnings in console, no nothing. Works fine when gzip is disabled and on other browsers.
EDIT 2:
If I request foo.js directly, it loads just fine.
EDIT 3:
Tried checking the responses & headers with TemperData while having gzipping on/off.
The only difference was that when gzipping is turned on, there is Content-Encoding: gzip in the response header, which is not very surprising. Other than that, 100% identical responses.
EDIT 4:
Turns out that removing content-length from the header made it work again... Not sure of the side effects tho, but at least this pinpoints it better.
I think the cause of your problem is that you are writing the Content-Length header before compressing the data, so the browser receives inconsistent information (a length that doesn't match the compressed body). I guess that depending on the browser implementation, it handles this situation one way or another, and it seems that Firefox does it the strict way.
If you don't know the size of the compressed data (which is understandable), you'd better avoid writing the Content-Length header, which is not mandatory.
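If you do want to keep Content-Length, one option (an assumption, not part of the answer above) is to compress the whole payload in memory first and send the compressed length:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class GzipResponse {
    // Gzip the payload up front; the caller then sets
    // "Content-Length: " + result.length, so the declared length matches
    // the bytes actually put on the wire.
    public static byte[] gzip(byte[] raw) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw);
        } // close() writes the gzip trailer; without it the stream is truncated
        return bos.toByteArray();
    }
}
```

The trade-off is holding the compressed body in memory, which is fine for small static resources like JS files but not for large streamed responses.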

Play framework 2 Java, return chunked response

Based on the following code example that returns chunked data from Play server,
shouldn't out.write create a response chunk each time it is called?
When using a POST client I get only one response which contains the whole data.
I need to return a huge file from the server in chunks, which should be downloaded by the client.
Any ideas?
public static Result index(){
// Prepare a chunked text stream
Chunks<String> chunks = new StringChunks()
{
// Called when the stream is ready
public void onReady(Chunks.Out<String> out)
{
out.write("kiki");
out.write("foo");
out.write("bar");
out.close();
}
};// Serves this stream with 200 OK
return ok(chunks);
}
The amount of chunks depends on the chunk size, which usually defaults to 1024 KB throughout the framework. Your output is too small and thus only one chunk is sent.
