I'm writing a Java servlet on App Engine. This servlet generates PNG images, and I would like to gzip the response. I do it this way:
resp.setHeader("Content-Encoding","gzip");
resp.setContentType("image/png");
// ... png generation ...
GZIPOutputStream gzos = new GZIPOutputStream(resp.getOutputStream());
gzos.write(myPNGdata);
gzos.close();
But: on the development server it's OK, the PNG displays fine and the response is properly gzipped. Then I test on the production server (App Engine) and all I get is a "broken" image...
What could be wrong with my code? Is it related to the dev/prod environment?
Of course, if I don't gzip the output, it's fine in both environments.
Thanks for any help.
Edit: I tried this too:
GZIPOutputStream gzos = new GZIPOutputStream(resp.getOutputStream());
gzos.write(ImagesServiceFactory.makeImage(readImage("somePicture.png")).getImageData());
gzos.flush();
gzos.finish();
gzos.close();
and it doesn't work either.
Edit 2: in fact, the response is gzipped. I fetched the servlet's output with "curl theUrl > tmp.gz", then gunzipped "tmp.gz", and the image is fine. But no browser can display it correctly :( What's wrong with my gzip?
The App Engine infrastructure will take care of gzipping content for you when appropriate. You shouldn't do it yourself.
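In other words, a minimal version of the servlet would just set the content type and write the PNG bytes, leaving compression to the front end; PNG data is already deflate-compressed, so gzip would gain very little anyway. A rough sketch using the question's resp and myPNGdata:
resp.setContentType("image/png");
resp.setContentLength(myPNGdata.length);
// No Content-Encoding header and no GZIPOutputStream: App Engine's front end
// decides whether to gzip the response based on the client's Accept-Encoding.
resp.getOutputStream().write(myPNGdata);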
Check the size of your downloaded image. If it is smaller than you expected, most likely you need to flush the stream before closing.
We're currently working on a service that archives data and returns it to the user as a ZipOutputStream. What we're looking for is a way to completely terminate the operation if something goes wrong on the server side. With our current implementation (just closing the response output stream), errors result in a malformed zip on the user's side, but there is no way to tell whether the archive is malformed without attempting to unzip it. The desired behavior would be something like download termination: from a browser perspective, for instance, it would result in an unsuccessful-download indication (a red cross icon or something similar, depending on the browser) explicitly telling the user that something went wrong. We're using Spring Boot, so any Java code examples would really be appreciated, but if you know the underlying HTTP mechanism responsible for this kind of behavior and can point us in the right direction, that would be much appreciated too.
Here's what we have as of now (outputStream being the response output stream of a Spring REST controller, i.e. HttpServletResponse.getOutputStream()):
try (ZipOutputStream zipOutputStream = new ZipOutputStream(outputStream)) {
    try {
        for (ZipRecordFile fileInfo : zipRecord.listZipFileOverride()) {
            ZipEntry zipEntry = new ZipEntry(fileInfo.fileName());
            zipOutputStream.putNextEntry(zipEntry);
            try (InputStream fileStream = getFileStream(fileInfo.s3region(), fileInfo.s3bucket(),
                    fileInfo.s3key())) {
                fileStream.transferTo(zipOutputStream);
            }
            zipOutputStream.closeEntry();
        }
    } catch (Exception e) {
        // Just closing the stream; this is what leaves the client with a truncated zip.
        outputStream.close();
    }
}
There isn't a (clean) way to do what you want:
Once you have started writing the ZIP file to the output stream, it is too late to change the HTTP response code. The response code is sent at the start of the response.
Therefore, there is no proper way for the HTTP server to tell the HTTP client: "Hey ... ignore that ZIP file I sent you 'cos it is corrupt".
So what are the alternatives?
1. On the server side, create the entire ZIP as an in-memory object or write it to a temporary file. If you succeed, send a 2xx response followed by the ZIP data. If you fail, send a 4xx or 5xx response. The main problem is that you need enough memory or file system space to hold the ZIP file. (A rough sketch of this approach follows the list.)
2. Redesign your HTTP API so that the client can send a second request to check whether the first request's response contained a complete ZIP file.
3. You might be able to exploit MIME multipart encoding; see RFC 1341. Each part of a well-formed MIME multipart has a start marker and an end marker. What you could try is to have your web app construct the multipart stream containing the ZIP "by hand". If it decides it must abort the ZIP, it could just close the output stream without adding the required end marker. The main problem with this is that you are depending on the HTTP stack on the client side to tell the browser (or whatever) that the multipart is corrupted. Furthermore, the browser (or whatever) must not pass the partial (i.e. corrupt) ZIP file on to the user. I'm not sure if you can rely on (particular) web browsers to do that.
4. If you are running the download via custom code on the client side, you could conceivably implement your own encapsulation protocol. The effect would be the same as for 3, but you wouldn't be abusing the MIME spec.
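For illustration, here is a rough sketch of alternative 1, reusing the question's own names (zipRecord, getFileStream(...), ZipRecordFile) and assuming response is the injected HttpServletResponse; it is a sketch under those assumptions, not a drop-in implementation:
java.nio.file.Path tmp = java.nio.file.Files.createTempFile("export", ".zip");
try {
    // Build the complete archive in a temp file first; nothing touches the response yet.
    try (ZipOutputStream zos = new ZipOutputStream(java.nio.file.Files.newOutputStream(tmp))) {
        for (ZipRecordFile fileInfo : zipRecord.listZipFileOverride()) {
            zos.putNextEntry(new ZipEntry(fileInfo.fileName()));
            try (InputStream in = getFileStream(fileInfo.s3region(), fileInfo.s3bucket(), fileInfo.s3key())) {
                in.transferTo(zos);
            }
            zos.closeEntry();
        }
    }
    // Only now commit the response: the status line has not been sent yet.
    response.setStatus(HttpServletResponse.SC_OK);
    response.setContentType("application/zip");
    response.setContentLengthLong(java.nio.file.Files.size(tmp));
    java.nio.file.Files.copy(tmp, response.getOutputStream());
} catch (Exception e) {
    // The response is still uncommitted, so the browser sees a real error instead of a truncated zip.
    response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
} finally {
    java.nio.file.Files.deleteIfExists(tmp);
}
The obvious trade-off, as noted above, is the temporary disk space (or memory, if you buffer to a ByteArrayOutputStream instead) needed to hold the whole archive.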
I am stuck on a strange issue. I am posting some image data to a server; I created the request using curl first and then traced it back.
The next step was to create a similar request in Java.
The code posts raw binary image data to the server, but when I compare the binary data sent by Java to that posted by curl, there is a minute difference, due to which I am getting a 400 response code from the server.
The difference, I think, is in a few dots.
Below is the request generated by curl (Linux), in the linked capture ("Generate by curl").
Now here is the request generated by Java when I read the bytes, in the linked capture ("Click here").
Java code looks something like this:
PrintWriter out = new PrintWriter(os);
out.println("POST /1izb0rx1 HTTP/1.1");
out.println("User-Agent: curl/7.35.0");
out.println("Host: requestb.in");
out.println("Accept: */*");
out.println("Content-Disposition:inline; filename=test.png");
out.println("Authorization: Basic YW5kcm9pZDpUZXN0dGVzdDExISE=");
out.println("Content-Length: "+"24143");
out.println("Content-Type: application/x-www-form-urlencoded");
out.println();
out.println("."+imgBytes);
Any idea what could be causing this issue?
Thanks
So,
I got it working. The problem was that certain classes on Android are broken and do not behave the way they do in core Java.
The same code that was working in plain Java wasn't working here, the reason being a change in a header occurring on Android.
This issue is also mentioned here:
https://github.com/WP-API/WP-API/issues/1862
So I was facing a similar issue, and adding external, updated jars conflicted with the ones bundled with Android.
Finally I used a small HTTP request library: https://github.com/kevinsawicki/http-request
The code is below:
HttpRequest request = HttpRequest.post(url);
request.authorization("Basic " + ah);
request.part("file", fName + ".png", "image/png", new File(file));
request.part("title", "test");
if (request.code() == 201) {
    StringWriter sw = new StringWriter();
    request.receive(sw);
    onMedia(Media.parse(new JsonParser().parse(sw.toString()).getAsJsonObject()));
}
Thanks
Do not use PrintWriter to send raw bytes.
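For instance, a sketch of the same POST that lets HttpURLConnection build the request line and headers and writes imgBytes through the raw OutputStream; the URL and the Authorization value are taken from the question, while the Content-Type is switched to application/octet-stream to match a raw binary body:
HttpURLConnection conn = (HttpURLConnection) new URL("http://requestb.in/1izb0rx1").openConnection();
conn.setRequestMethod("POST");
conn.setDoOutput(true);
conn.setRequestProperty("Authorization", "Basic YW5kcm9pZDpUZXN0dGVzdDExISE=");
conn.setRequestProperty("Content-Disposition", "inline; filename=test.png");
conn.setRequestProperty("Content-Type", "application/octet-stream");
conn.setFixedLengthStreamingMode(imgBytes.length);   // also sets the Content-Length header
try (OutputStream out = conn.getOutputStream()) {
    out.write(imgBytes);                              // raw bytes, no toString() conversion
}
int status = conn.getResponseCode();                  // commits the request and reads the status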
I am using the Java client library for the Google Drive API to upload some text files and convert them to Google Docs format. The code runs on Google App Engine. The main code segment looks like this:
File fileMetadata = new File();
fileMetadata.setTitle("Document title");
fileMetadata.setDescription("Desc goes here");
fileMetadata.setMimeType("text/plain; charset=utf-8");
ByteArrayContent byteArrayContent = ByteArrayContent.fromString(
"text/plain; charset=utf-8", "Text file's content goes here");
Drive.Files.Insert insertRequest = driveService.files()
.insert(fileMetadata, byteArrayContent).setConvert(true);
File insertedFile = insertRequest.execute();
For small text files, the above code works fine. But when the text files are big (perhaps there are some other factors that I am not aware of), it throws the following exception:
Uncaught exception from servlet
java.net.SocketTimeoutException: Timeout while fetching URL: https://www.googleapis.com/upload/drive/v2/files?convert=true&uploadType=resumable&upload_id=AEnB2UpPDz6jvM8zO2zPFxFHmoCiisplOf1Ui5fngZdI4qqoK6hwt_wtOt89RcW3QautW9FPlHKfMznYA4gmo95qdWthJQgWpQ
at com.google.appengine.api.urlfetch.URLFetchServiceImpl.convertApplicationException(URLFetchServiceImpl.java:145)
at com.google.appengine.api.urlfetch.URLFetchServiceImpl.fetch(URLFetchServiceImpl.java:45)
at com.google.apphosting.utils.security.urlfetch.URLFetchServiceStreamHandler$Connection.fetchResponse(URLFetchServiceStreamHandler.java:419)
at com.google.apphosting.utils.security.urlfetch.URLFetchServiceStreamHandler$Connection.getInputStream(URLFetchServiceStreamHandler.java:298)
at com.google.apphosting.utils.security.urlfetch.URLFetchServiceStreamHandler$Connection.getResponseCode(URLFetchServiceStreamHandler.java:151)
at com.google.api.client.http.javanet.NetHttpResponse.<init>(NetHttpResponse.java:36)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:94)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:965)
at com.google.api.client.googleapis.media.MediaHttpUploader.executeCurrentRequestWithoutGZip(MediaHttpUploader.java:545)
at com.google.api.client.googleapis.media.MediaHttpUploader.resumableUpload(MediaHttpUploader.java:417)
at com.google.api.client.googleapis.media.MediaHttpUploader.upload(MediaHttpUploader.java:336)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:418)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:343)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:460)
Some people said it's due to the URLConnection's timeout. But I am using the Java client library, which doesn't allow me to set the timeout explicitly. Also, the above exception happens at least 30 seconds after the insertRequest.execute() call.
I suspect the Java client library has set an internal timeout of 30 seconds... I hope some Google software engineers can take a look.
Any idea will be appreciated. Thanks very much.
This error is expected when you are uploading a big file. Try implementing a resumable upload. You can read more about the method, as well as a sample, on the following page: https://developers.google.com/drive/web/manage-uploads#resumable
Using the Java client library, you should configure the uploader before calling insertRequest.execute():
MediaHttpUploader uploader = insertRequest.getMediaHttpUploader();
uploader.setDirectUploadEnabled(false);   // false = resumable (chunked) upload
uploader.setProgressListener(new FileUploadProgressListener());   // your own MediaHttpUploaderProgressListener implementation
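If the roughly 30-second ceiling itself is the problem, one option (a sketch, not verified against your exact setup) is to wrap your existing request initializer when building the Drive client and raise the per-request timeouts; httpTransport, jsonFactory and credential stand for whatever you already construct:
HttpRequestInitializer initializer = new HttpRequestInitializer() {
    @Override
    public void initialize(HttpRequest httpRequest) throws IOException {
        credential.initialize(httpRequest);        // your existing Credential / initializer
        httpRequest.setConnectTimeout(3 * 60000);  // milliseconds
        httpRequest.setReadTimeout(3 * 60000);     // on App Engine this maps to the URLFetch deadline
    }
};
Drive driveService = new Drive.Builder(httpTransport, jsonFactory, initializer)
        .setApplicationName("drive-upload-sample") // placeholder name
        .build();
Keep in mind that App Engine's URLFetch service still enforces its own maximum deadline, so very large uploads are better served by the resumable, chunked approach above regardless of the timeout settings.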
I'm using this great snippet from How to download and save a file from Internet using Java? to download a file from a URL:
URL website = new URL("http://www.website.com/information.asp");
ReadableByteChannel rbc = Channels.newChannel(website.openStream());
FileOutputStream fos = new FileOutputStream("information.html");
fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
But instead of Long.MAX_VALUE, I'd prefer to limit the download to 2MB for security reasons, so I replaced it with
fos.getChannel().transferFrom(rbc, 0, 2097152);
But now, I'm wondering how can I handle the case where the file size is greater than 2mb?
What can I do to check if the file is corrupt or not?
Have you considered checking the Content-Length header as per the RFC? You could then check if this exceeds some acceptable value -- in your case 2MB -- and reject further processing. You could accomplish this with an initial HTTP HEAD request and then a GET if you're happy, or by reading the headers of just the GET response and proceeding with further streaming if acceptable.
Alternatively (but admittedly ugly), you could use a BufferedReader passing in a buffer of 2MB and comparing that with the headers.
As for corruption, you're better off using a checksum as stated in other comments. Of course, this requires you knowing the checksum for the resource up-front, and is not something you're likely to get from the HTTP response itself.
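A rough sketch of that Content-Length check, using the question's URL (and remembering that the header is optional and may be absent or wrong):
HttpURLConnection head = (HttpURLConnection) new URL("http://www.website.com/information.asp").openConnection();
head.setRequestMethod("HEAD");
long declared = head.getContentLengthLong();   // -1 if the server sent no Content-Length
head.disconnect();
if (declared < 0 || declared > 2097152L) {
    throw new IOException("Refusing download: unknown size or larger than 2 MB");
}
// ...then proceed with the GET and the transferFrom() call from the question.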
There are actually two aspects to this Question:
how do you know if you've downloaded the entire file, and
how do you know if what you have downloaded is corrupt.
The first thing to note is that if you "chop" the file transfer at 2MB and the apparent transferred file size is exactly 2MB, you can be pretty sure that the file won't be complete. (By the looks of it, your current code will give you the bytes after any transfer encoding has been decoded ... which simplifies things.)
The next thing to note is that an HTTP response will often include a Content-Length header that tells the client how many bytes of (transfer-encoded) content to expect in the response body. However, that won't tell you whether the bytes you actually received (after decoding) are correct. (And besides, this header is optional ... you can't rely on it being there.)
As #ato notes, you would be better off checking the Content-Length in the GET (or a HEAD) response before you actually try to read the data.
However, the only sure-fire way to know if you've got a complete / non-corrupt file is to check it against a checksum or (ideally) a crypto-hash that you obtained separately from the transfer. There is no standard way of obtaining a checksum or hash using the HTTP protocol.
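If you do have a hash published alongside the file, the verification itself is short; a sketch assuming expectedSha256Hex is a hex-encoded SHA-256 value you obtained out of band:
byte[] data = java.nio.file.Files.readAllBytes(java.nio.file.Paths.get("information.html"));
byte[] digest = java.security.MessageDigest.getInstance("SHA-256").digest(data);
StringBuilder hex = new StringBuilder();
for (byte b : digest) {
    hex.append(String.format("%02x", b));   // hex-encode the digest
}
boolean intact = hex.toString().equalsIgnoreCase(expectedSha256Hex);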
I am attempting to have my Android phone connect to my servlet and send it a certain image. The way I figured I would do this is to use the copyPixelsToBuffer() function and then attempt to send the buffer to the servlet through some output stream (similar to how I would do it in a normal standalone Java application). Will this work? If so, what kind of stream do I use exactly? Should I just use DataOutputStream and do something like the following:
Bitmap bm = BitmapFactory.decodeResource(getResources(), R.drawable.icon);
ByteBuffer imgbuff = ByteBuffer.allocate(bm.getRowBytes() * bm.getHeight());
bm.copyPixelsToBuffer(imgbuff);
...code...
URLConnection sc = server.openConnection();
sc.setDoOutput(true);
DataOutputStream out = new DataOutputStream(sc.getOutputStream());
out.write(imgbuff.array());
out.flush();
out.close();
Note: I understand that this may not be the proper way of connecting to a server using the Android OS, but at the moment I'm working on just how to send the image, not the connection (unless this is relevant to how the image is sent).
If this is not a way you'd recommend sending the image to the servlet (I figured a byte buffer would be best, but I could be wrong), how would you recommend this be done?
Since an HttpServlet normally listens to HTTP requests, you'd like to use multipart/form-data encoding to send binary data over HTTP, instead of raw (unformatted) data like that.
On the client side, you can use URLConnection for this, as outlined in this mini tutorial, but it's going to be pretty verbose. You can also use Apache HttpComponents Client for this; however, that adds extra dependencies, and I am not sure if you'd like to have that on Android.
Then, on the server side, you can use Apache Commons FileUpload to parse the items out of a multipart/form-data encoded request body. You can find a code example in this answer of how the doPost() of the servlet should look.
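For reference, the streaming side of Commons FileUpload looks roughly like this; it is a sketch, and writing each file part under /tmp is just a placeholder for wherever you actually keep the images:
protected void doPost(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    try {
        ServletFileUpload upload = new ServletFileUpload();
        FileItemIterator iterator = upload.getItemIterator(request);
        while (iterator.hasNext()) {
            FileItemStream item = iterator.next();
            if (!item.isFormField()) {             // a file part, e.g. the uploaded image
                try (InputStream in = item.openStream()) {
                    java.nio.file.Files.copy(in, java.nio.file.Paths.get("/tmp", item.getName()),
                            java.nio.file.StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    } catch (FileUploadException e) {
        throw new ServletException("Cannot parse multipart request", e);
    }
}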
As to your code example: wrapping in a DataOutputStream is unnecessary. You aren't taking advantage of DataOutputStream's facilities; you are just using the write(byte[]) method, which is already provided by the basic OutputStream as returned by URLConnection#getOutputStream(). Further, Bitmap has a compress() method which you can use to compress it, in a more standard and widely understood format (PNG, JPG, etc.), into an arbitrary OutputStream. E.g.
output = connection.getOutputStream();
// ...
bitmap.compress(CompressFormat.JPEG, 100, output);
Do this instead of output.write(bytes) as in your code.
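If you'd rather not pull in extra dependencies on the client, a rough sketch of building the multipart body by hand over HttpURLConnection, continuing from the snippet above; the URL, part name and filename here are placeholders:
java.nio.charset.Charset utf8 = java.nio.charset.StandardCharsets.UTF_8;
String boundary = "----AndroidImageUpload" + System.currentTimeMillis();
HttpURLConnection connection = (HttpURLConnection) new URL("http://example.com/upload").openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("POST");
connection.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);
connection.setChunkedStreamingMode(0);   // stream instead of buffering the whole body in memory
try (OutputStream output = connection.getOutputStream()) {
    output.write(("--" + boundary + "\r\n"
            + "Content-Disposition: form-data; name=\"image\"; filename=\"icon.png\"\r\n"
            + "Content-Type: image/png\r\n\r\n").getBytes(utf8));
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, output);   // the quality argument is ignored for PNG
    output.write(("\r\n--" + boundary + "--\r\n").getBytes(utf8));
}
int status = connection.getResponseCode();   // commits the request and reads the servlet's reply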