Google Cloud Platform load balancer aborts large file upload requests - Java

When uploading a file (multipart POST) that the backend application (Tomcat with the Apache Commons FileUpload library) considers too large and answers with HTTP status 413, the original request is aborted on the load balancer with the "aborted_request_due_to_backend_early_response" status. This means the user is presented with a built-in system error page instead of the application's error page.
How should the system be configured to deliver the application's error response to the end user through the load balancer?
Some other details:
when the limit is, for example, 2MB and the uploaded file is ~5MB, everything works fine, but when the file is >6MB the described behaviour occurs (the exact threshold depends on the user's connection quality)
Tomcat's / the servlet's maxPostSize, maxFileSize, ... settings make no difference
returning 200 instead of 413 makes no difference either
I assume that the response is (for those large ~6MB files) sent back before the file has been fully uploaded. That is actually what I want: I don't want to write, say, gigabyte-sized files to the filesystem and only return a 413 response afterwards - I want to cut them off before any processing beyond detecting their size.
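For illustration, the trade-off above can be sketched in servlet terms: a backend that avoids the early-response abort would have to drain (read and discard) the rest of the body before answering. This is an assumption on my part, not a confirmed fix, and it still transfers the whole upload over the wire; the class name and MAX_UPLOAD_BYTES constant below are made up.

import java.io.IOException;
import java.io.InputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hedged sketch: drain the body while counting bytes and only answer once
// the stream is exhausted, so the 413 is not an "early" response.
public class SizeGateServlet extends HttpServlet {
    private static final long MAX_UPLOAD_BYTES = 2L * 1024 * 1024; // assumed 2MB limit

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        InputStream in = request.getInputStream();
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n; // bytes are discarded, never written to disk
        }
        if (total > MAX_UPLOAD_BYTES) {
            response.sendError(413, "upload too large");
            return;
        }
        // otherwise fall through to normal multipart processing (omitted)
    }
}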
So how can this be accomplished in a GCP (load balancer - Apache - Tomcat) environment?

Related

How to make nginx wait for a response when the response time can vary?

The project's server side is Java. I have some endpoints that generate CSV files (let's say user records from the DB). The file size varies because filters can be applied. When filters are applied and the file is not too big, it works correctly, but when the file is big it takes about 1.5 minutes to generate, while the nginx timeout is 30 seconds, so nginx does not wait for the server's response and reports a 500. Of course the nginx timeout can be increased, but this is not safe. Is it possible to somehow make nginx wait longer without touching its timeout parameter?

How to handle incoming request error 'Request Too Large' in Google App Engine?

We provided our GAE servlet POST URL as a webhook to a third party service that sends data. According to GAE: "Each incoming HTTP request can be no larger than 32MB."
Sometimes the third party service sends more than 40MB of data, which gets rejected by the GAE server with a 'Request Too Large' error.
The service on the other end retries continuously, up to 100 times, and blocks further requests until the retries are completed if the webhook URL doesn't return a 200 HTTP response code.
Is it possible to handle such requests and send a 200 HTTP response code with GAE?
You cannot circumvent the 32MB upload limit in GAE directly. But you can build a working solution with one of these options:
Use Blobstore uploads. The docs say that you cannot serve more than a 32MB chunk, but you should be able to upload a larger file than that.
Let the third party service upload into Cloud Storage.
Let the third party service upload chunks of 30MB and reassemble the complete request after the last chunk. The Compose API endpoint should be useful for that (see the sketch after this list).
Sadly, all those options require the third party service to make changes to their requests. You could contact Google support and ask if they would increase the limit for your app. I have my doubts that they will, since limiting the request size and the request duration (deadline) is a great way of keeping your app scalable.
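For the chunked-upload option, here is a minimal sketch using the google-cloud-storage Java client; the bucket and object names are made up for illustration:

import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class ChunkAssembler {
    public static void main(String[] args) {
        Storage storage = StorageOptions.getDefaultInstance().getService();
        // Reassemble chunk-1..chunk-3 (each uploaded separately, under 30MB)
        // into one object via the Compose API.
        Storage.ComposeRequest request = Storage.ComposeRequest.newBuilder()
                .setTarget(BlobInfo.newBuilder("my-bucket", "complete-request.bin").build())
                .addSource("chunk-1")
                .addSource("chunk-2")
                .addSource("chunk-3")
                .build();
        Blob composed = storage.compose(request);
        System.out.println("Composed object size: " + composed.getSize());
    }
}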

SOAP Web Service - Java Server - iOS Client - SudzC send or receive large files

My system:
Server: Java SOAP web service generated by JAX-WS 2.2.3 (wsgen)
Client: iOS - SOAP web service client generated by SudzC
I am using SudzC on iOS to communicate with a Java web service. I want to upload NSData files from the iOS client to the Java server, or download byte[] files from the Java server to the client. For small amounts of data the web service runs fine, but when the data is larger than 4MB there are problems: if I send a file larger than 4MB, an HTTP 500 Internal Server Error occurs, or everything gets stuck and my application crashes.
Any suggestions? Perhaps I should try something other than SudzC?
I know that, to send and receive large files, SOAP has an option called MTOM.
MTOM extracts the base64Binary data from the SOAP message and packages it as separate binary HTTP attachments within the MIME message, in a similar manner to e-mail attachments.
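For context, this is roughly what enabling MTOM looks like on a JAX-WS server (a hedged sketch; the service class and method names below are made up, not the actual server code):

import javax.activation.DataHandler;
import javax.jws.WebService;
import javax.xml.bind.annotation.XmlMimeType;
import javax.xml.ws.Endpoint;
import javax.xml.ws.soap.MTOM;

// Hedged sketch of "MTOM activated on the Java side" with JAX-WS.
@MTOM(threshold = 4096) // binary parts above 4KB become MIME attachments
@WebService
public class FileTransferService {
    public void upload(@XmlMimeType("application/octet-stream") DataHandler data) {
        // stream data.getInputStream() to disk instead of buffering base64 in memory
    }

    public static void main(String[] args) {
        Endpoint.publish("http://0.0.0.0:8080/files", new FileTransferService());
    }
}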
So my problem is: how can I implement this option in the SudzC-generated Objective-C code on the iOS client?
On the Java server side the MTOM option is activated, but do I have to implement this option on the iOS client as well?
Perhaps someone can help?
I use WSClient++ to generate the classes and have never had a problem.
http://wsclient.neurospeech.com/
I don't like SudzC; I had problems whenever the returned XML contained a list of lists.
I've used SudzC to upload larger files (20+ megabytes), so the issue probably isn't caused by SudzC. I remember having an issue with file uploads at the beginning as well: the server didn't accept anything over X bytes and returned an error.
However, what I have seen is that SudzC has a lot of memory issues when uploading large files, so I switched to wsdl2objc for file uploads.

apache commons FileUpload - cutting off large files before the whole file is uploaded

Using Tomcat 6, I am using Apache Commons FileUpload to allow image uploads. I can set the maximum sizes using setSizeMax and setFileSizeMax, but it seems that an entire large file is uploaded first and only then checked to see if it is too big. According to another post, setSizeMax should cut off the upload, but that is not the behavior I get.
To test this, I set sizeMax and fileSizeMax very low and uploaded a rather large file. It took 15 seconds to upload instead of being cut off almost instantaneously.
Any ideas? Here is some code, with a simplified exception clause.
import java.util.List;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.FileItemFactory;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

FileItemFactory factory = new DiskFileItemFactory();
ServletFileUpload upload = new ServletFileUpload(factory);
upload.setFileSizeMax(30); // max size per file, in bytes (set very low for testing)
upload.setSizeMax(28);     // max size of the whole request, in bytes
List<FileItem> items = null;
try {
    items = upload.parseRequest(request);
} catch (Exception e) {    // simplified exception clause
    out.println("exceeded max file size..");
    return;
}
MORE INFO: Using Tomcat 6. Setting maxPostSize does not work for content-type multipart/form-data. Also, checking the request's content length still requires the entire file to be uploaded. Finally, using the streaming API of FileUpload does not seem to work either, as it also appears to require the entire file to be uploaded before the stream can be closed; the upload continues even if the servlet does not write the bytes to disk. There has to be a way to prevent huge uploads with Tomcat 6, as uploading images and the like is a very common task for web apps.
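For reference, the streaming variant referred to above looks roughly like this (a sketch; MAX_FILE_BYTES and the class name are made up, and as noted, closing the stream early does not stop the client from sending):

import java.io.IOException;
import java.io.InputStream;
import javax.servlet.http.HttpServletRequest;
import org.apache.commons.fileupload.FileItemIterator;
import org.apache.commons.fileupload.FileItemStream;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

// Sketch of the FileUpload streaming API: count bytes per item and bail out
// as soon as the limit is crossed.
public class StreamingCheck {
    private static final long MAX_FILE_BYTES = 30;

    public static void check(HttpServletRequest request) throws Exception {
        ServletFileUpload upload = new ServletFileUpload(); // no factory needed for streaming
        FileItemIterator iter = upload.getItemIterator(request);
        while (iter.hasNext()) {
            FileItemStream item = iter.next();
            InputStream stream = item.openStream();
            byte[] buf = new byte[8192];
            long read = 0;
            int n;
            while ((n = stream.read(buf)) != -1) {
                read += n;
                if (read > MAX_FILE_BYTES) {
                    stream.close(); // the client typically keeps sending anyway
                    throw new IOException("file exceeds " + MAX_FILE_BYTES + " bytes");
                }
            }
        }
    }
}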
The client sends the bits whether you save them on the server or not. There is no way for the server to tell the client to stop sending bits, short of forcibly closing the input stream (because of the nature of HTTP: the response follows the request; they are not simultaneous). You can try that, but most application servers will drain the input stream from the client when you perform a close(), so nothing will change.
If you can control the client and require the use of 100-Continue (via an Expect header), you may be able to snoop the Content-Length of the request and reject it by sending a negative reply instead of 100-Continue.
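A minimal sketch of that check (the constant and class name are made up; this assumes a client that honors 100-Continue):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hedged sketch of the Content-Length check described above.
public class LengthCheckServlet extends HttpServlet {
    private static final int MAX_REQUEST_BYTES = 2 * 1024 * 1024; // assumed limit

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        int declared = request.getContentLength(); // -1 if no length was sent
        if (declared > MAX_REQUEST_BYTES) {
            // Sent before the input stream is touched; a client that used
            // "Expect: 100-Continue" should then skip sending the body.
            response.sendError(HttpServletResponse.SC_REQUEST_ENTITY_TOO_LARGE,
                    "request too large");
            return;
        }
        // ... parse the multipart request as usual ...
    }
}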
Tomcat has a maxPostSize attribute on its HTTP Connector; see the Tomcat docs.
Edit: Too bad that doesn't work for multipart/form-data. Have you considered using a signed applet or Java Web Start to manage the upload?

URL fetch request size in Google App Engine

I am doing a web application project using GWT in Eclipse.
I have a file on the client side which is to be sent to the project's server, and from the server to an external repository.
File
|
V
Client-->Server-->Repository
I am using the default SDK (appengine-java-sdk-1.6.3.1 - 1.6.3) and GWT 2.4.0.
According to the Google App Engine documentation, the limit for a URL fetch request is 5MB.
But my URL fetch fails once the request exceeds about 3.8MB.
If I try to fetch with more than 3.8MB, it gives me an error:
Cannot access http://URL: The request to API call urlfetch.Fetch() was too large.
Can somebody explain the reason for this?
I also have to download the file from the repository and save it on the client side.
So is there any size limitation on getting the content of a file from the repository to the server side?
If it's a binary file being sent over HTTP, it's probably being encoded as base64 before being transferred. That adds about 33% to the file size (base64 emits 4 output bytes for every 3 input bytes), so a ~3.8MB binary payload grows to just over 5MB once encoded, which lines up with the 5MB URL fetch limit.
http://en.wikipedia.org/wiki/Base64
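To check the arithmetic, here is a small JDK-only illustration (the 3.8MB figure is just the size reported in the question):

import java.util.Base64;

public class Base64Overhead {
    public static void main(String[] args) {
        byte[] raw = new byte[3_800_000];                     // ~3.8MB of binary data
        int encoded = Base64.getEncoder().encode(raw).length; // 4 output bytes per 3 input
        System.out.printf("raw: %d bytes, base64: %d bytes (%.0f%% larger)%n",
                raw.length, encoded, 100.0 * (encoded - raw.length) / raw.length);
        // Prints roughly: raw: 3800000 bytes, base64: 5066668 bytes (33% larger)
    }
}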
