URLConnection does not handle content length via proxy correctly - java

I ran into the following problem: when URLConnection is used via a proxy, the content length is always set to -1.
First I checked that the proxy really returns the Content-Length header (lynx and wget also work via the proxy; there is no other way to reach the internet from the local network):
$ lynx -source -head ftp://ftp.wipo.int/pub/published_pct_sequences/publication/2003/1218/WO03_104476/WO2003-104476-001.zip
HTTP/1.1 200 OK
Last-Modified: Mon, 09 Jul 2007 17:02:37 GMT
Content-Type: application/x-zip-compressed
Content-Length: 30745
Connection: close
Date: Thu, 02 Feb 2012 17:18:52 GMT
$ wget -S -X HEAD ftp://ftp.wipo.int/pub/published_pct_sequences/publication/2003/1218/WO03_104476/WO2003-104476-001.zip
--2012-04-03 19:36:54-- ftp://ftp.wipo.int/pub/published_pct_sequences/publication/2003/1218/WO03_104476/WO2003-104476-001.zip
Resolving proxy... 10.10.0.12
Connecting to proxy|10.10.0.12|:8080... connected.
Proxy request sent, awaiting response...
HTTP/1.1 200 OK
Last-Modified: Mon, 09 Jul 2007 17:02:37 GMT
Content-Type: application/x-zip-compressed
Content-Length: 30745
Connection: close
Age: 0
Date: Tue, 03 Apr 2012 17:36:54 GMT
Length: 30745 (30K) [application/x-zip-compressed]
Saving to: `WO2003-104476-001.zip'
In Java I wrote:
URL url = new URL("ftp://ftp.wipo.int/pub/published_pct_sequences/publication/2003/1218/WO03_104476/WO2003-104476-001.zip");
int length = url.openConnection().getContentLength();
logger.debug("Got length: " + length);
and I get -1. I started to debug FtpURLConnection, and it turned out that the necessary information is present in the underlying HttpURLConnection.responses field, but it is never properly propagated from there (the debugger shows Content-Length: 30745 among those headers). The content length is not updated when you start reading the stream, or even after the stream has been read. Code:
URL url = new URL("ftp://ftp.wipo.int/pub/published_pct_sequences/publication/2003/1218/WO03_104476/WO2003-104476-001.zip");
URLConnection connection = url.openConnection();
logger.debug("Got length (1): " + connection.getContentLength());
InputStream input = connection.getInputStream();
byte[] buffer = new byte[4096];
int count = 0, len;
while ((len = input.read(buffer)) > 0) {
    count += len;
}
logger.debug("Got length (2): " + connection.getContentLength() + " but wanted " + count);
Output:
Got length (1): -1
Got length (2): -1 but wanted 30745
It seems to be a bug in JDK 6, so I have opened bug #7168608.
I would appreciate help writing code that returns the correct content length for a direct FTP connection, an FTP connection via a proxy, and local file: URLs.
If this problem cannot be worked around in JDK 6, please suggest any other library that definitely works for all the cases I've mentioned (Apache HttpClient?).

Remember that proxies will often change the representation of the underlying entity. In your case I suspect the proxy is altering the transfer encoding, which in turn makes the Content-Length meaningless even if supplied.
You are falling afoul of the following two sections of the HTTP 1.1 spec:
4.4 Message Length
...
...
If a Content-Length header field (section 14.13) is present, its decimal value in OCTETs represents both the entity-length and the transfer-length. The Content-Length header field MUST NOT be sent if these two lengths are different (i.e., if a Transfer-Encoding header field is present). If a message is received with both a Transfer-Encoding header field and a Content-Length header field, the latter MUST be ignored.
14.41 Transfer-Encoding
The Transfer-Encoding general-header field indicates what (if any) type of transformation has been applied to the message body in order to safely transfer it between the sender and the recipient. This differs from the content-coding in that the transfer-coding is a property of the message, not of the entity.
Transfer-Encoding = "Transfer-Encoding" ":" 1#transfer-coding
Transfer-codings are defined in section 3.6. An example is:
Transfer-Encoding: chunked
If multiple encodings have been applied to an entity, the transfer- codings MUST be listed in the order in which they were applied. Additional information about the encoding parameters MAY be provided by other entity-header fields not defined by this specification.
Many older HTTP/1.0 applications do not understand the Transfer- Encoding header.
So URLConnection is ignoring the Content-Length header, as per the spec, because it is meaningless in the presence of chunked transfers.
In your debugger screenshot it's not clear whether the Transfer-Encoding header is present. Please let us know...
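In the meantime, here's a minimal sketch (same URL as in your question) that dumps every header the connection reports, so you can check for Transfer-Encoding directly; if the ftp:// connection reports no headers at all, that is telling in itself:
import java.net.URL;
import java.net.URLConnection;
import java.util.List;
import java.util.Map;

public class HeaderDump {
    public static void main(String[] args) throws Exception {
        URL url = new URL("ftp://ftp.wipo.int/pub/published_pct_sequences/publication/2003/1218/WO03_104476/WO2003-104476-001.zip");
        URLConnection connection = url.openConnection();
        // Print every header the connection parsed; if Transfer-Encoding: chunked
        // shows up here, a -1 content length is exactly what the spec mandates.
        for (Map.Entry<String, List<String>> header : connection.getHeaderFields().entrySet()) {
            System.out.println(header.getKey() + ": " + header.getValue());
        }
    }
}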
On further investigation, it seems that lynx does not show all the headers returned when you issue lynx -head; it omits the Transfer-Encoding header critical to this discussion.
Here's proof of the discrepancy with a publicly visible website:
Ξ▶ lynx -useragent='dummy' -source -head http://www.bbc.co.uk
HTTP/1.1 302 Found
Server: Apache
X-Cache-Action: PASS (non-cacheable)
X-Cache-Age: 0
Content-Type: text/html; charset=iso-8859-1
Date: Tue, 03 Apr 2012 13:33:06 GMT
Location: http://www.bbc.co.uk/mobile/
Connection: close
Ξ▶ wget -useragent='dummy' -S -X HEAD http://www.bbc.co.uk
--2012-04-03 14:33:22-- http://www.bbc.co.uk/
Resolving www.bbc.co.uk... 212.58.244.70
Connecting to www.bbc.co.uk|212.58.244.70|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Server: Apache
Cache-Control: private, max-age=15
Etag: "7e0f292b2e5e4c33cac1bc033779813b"
Content-Type: text/html
Transfer-Encoding: chunked
Date: Tue, 03 Apr 2012 13:33:22 GMT
Connection: keep-alive
X-Cache-Action: MISS
X-Cache-Age: 0
X-LB-NoCache: true
Vary: Cookie
Since I am obviously not inside your network I can't replicate your exact circumstances, but please validate that you really aren't getting a Transfer-Encoding header when passing through a proxy.

I think it's a "bug" in the JDK related to handling proxied FTP connections. The FtpURLConnection delegates to an HttpURLConnection when a proxy is in use; however, the FtpURLConnection doesn't seem to delegate any of the header management to that HttpURLConnection in this situation. Thus you can correctly get the streams, but I don't think you can access any "header" values like content length or content type. (This is based on a quick glance over the OpenJDK source for 1.6; I could have missed something.)
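A quick way to see this from the outside (a sketch, using the proxy address from your wget output) is to check which connection class the JDK hands back:
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import java.net.URLConnection;

public class ProxyCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("ftp://ftp.wipo.int/pub/published_pct_sequences/publication/2003/1218/WO03_104476/WO2003-104476-001.zip");
        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("10.10.0.12", 8080));
        URLConnection connection = url.openConnection(proxy);
        // Even with an HTTP proxy, I'd expect this to print
        // sun.net.www.protocol.ftp.FtpURLConnection, whose header accessors
        // don't consult the HttpURLConnection it delegates the streams to.
        System.out.println(connection.getClass().getName());
    }
}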

One thing I would check is to actually read the response first (writing off the top of my head, so expect mistakes):
URLConnection connection = url.openConnection();
InputStream input = connection.getInputStream();
byte[] buffer = new byte[4096];
while (input.read(buffer) != -1)
    ; // drain the stream
logger.debug("Got length: " + connection.getContentLength());
If the size you are getting is good, then look for a way to make URLConnection read the headers but not the data, to avoid reading the whole response.

Related

Trying to download html page via url in JAVA. Getting some weird symbols instead

So I am trying to download this page http://www.csfd.cz/film/895-28-dni-pote/prehled/.
I am using this code:
URL url = new URL("http://www.csfd.cz/film/895-28-dni-pote/prehled/");
try (BufferedReader br = new BufferedReader(
        new InputStreamReader(url.openStream(), Charset.forName("UTF-8")))) {
    String line = br.readLine();
    while (line != null) {
        System.out.println(line);
        line = br.readLine();
    }
}
It worked on some other pages, but now it is giving me some weird symbols. For example, the second line I get is: "�\�?�����c��n��". (It has not been copied exactly as I see it in the Eclipse console.)
I think I am using UTF-8 encoding, which is also what the page uses.
In case you are wondering it is in Czech.
Thanks for help.
$ curl -D- http://www.csfd.cz/film/895-28-dni-pote/prehled/
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 01 Feb 2016 08:11:36 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: close
X-Frame-Options: SAMEORIGIN
X-Powered-By: Nette Framework
Vary: X-Requested-With
X-From-Cache: TRUE
Content-Encoding: gzip
▒}I▒▒▒▒^▒▒29B▒▒▒$R▒M▒$nER▒▒4X, #
etc....
Notice Content-Encoding: gzip - the content is compressed using gzip, and you will need to decompress it in order to use it.
Study the classes in java.util.zip, especially GZIPInputStream, which you can wrap around a regular input stream.
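For instance, a minimal sketch (same URL as in the question) that checks Content-Encoding and only decompresses when the server says the body is gzipped:
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;

public class Download {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.csfd.cz/film/895-28-dni-pote/prehled/");
        URLConnection connection = url.openConnection();
        InputStream raw = connection.getInputStream();
        // Wrap in GZIPInputStream only when Content-Encoding says gzip
        InputStream decoded = "gzip".equalsIgnoreCase(connection.getContentEncoding())
                ? new GZIPInputStream(raw)
                : raw;
        try (BufferedReader br = new BufferedReader(
                new InputStreamReader(decoded, StandardCharsets.UTF_8))) {
            String line;
            while ((line = br.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}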

OneDrive API download in parts

I am currently trying to develop a Java-based app to access OneDrive.
Today I tried to implement the download as described here: https://dev.onedrive.com/items/download.htm
I wanted to use the range parameter to offer the user the ability to pause large downloads. But no matter how I send the parameter, be it within the HTTP request header or in the URL as a GET parameter, it always sends me the complete file.
Things I tried so far:
https:/ /api.onedrive.com/v1.0/drive/items/***/content?range=0-8388607
(OAuth via HTTP header)
https:/ /api.onedrive.com/v1.0/drive/items/***/content:
Header: Authorization: ***
range: 0-8388607
https:/ /api.onedrive.com/v1.0/drive/items/***/content:
Header: Authorization: ***
range: bytes=0-8388607
I also tried Content-Range and various variations of lower and upper case, without success. Any reason why this does not work?
PS:
The links are broken because I am using a new account that only allows 2 links per post; I am aware that there is a space between the two // in my post ;)
Requesting the range of the file is supported. You might want to use fiddler or some other tool to see if the original headers are being passed after the 302 redirect is performed. Below are the HTTP requests and responses when I provide the range header which is being passed on after the 302 redirect occurs. You'll notice that a HTTP 206 partial content response is returned. Additionally, to resume a download, you can use "Range: bytes=1025-" or whatever the last byte received was. I hope that helps.
GET https://api.onedrive.com/v1.0/drive/items/item-id/content HTTP/1.1
Authorization: Bearer
Range: bytes=0-1024
Host: api.onedrive.com
HTTP/1.1 302 Found
Content-Length: 0
Location: https://kplnyq.dm2302.livefilestore.com/edited_location
Other headers removed
GET https://kplnyq.dm2302.livefilestore.com/edited_location
Range: bytes=0-1024
Host: kplnyq.dm2302.livefilestore.com
HTTP/1.1 206 Partial Content
Cache-Control: public
Content-Length: 1025
Content-Type: audio/mpeg
Content-Location: https://kplnyq.dm2302.livefilestore.com/edited_location
Content-Range: bytes 0-1024/4842585
Expires: Tue, 11 Aug 2015 21:34:52 GMT
Last-Modified: Mon, 12 Dec 2011 21:33:41 GMT
Accept-Ranges: bytes
Server: Microsoft-HTTPAPI/2.0
Other headers removed
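For reference, a rough plain-Java sketch of the same request (item-id and accessToken are placeholders; which request headers your HTTP stack re-sends after the 302 is exactly what a tool like Fiddler will show you):
URL url = new URL("https://api.onedrive.com/v1.0/drive/items/item-id/content");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestProperty("Authorization", "Bearer " + accessToken); // placeholder token
conn.setRequestProperty("Range", "bytes=0-8388607"); // first 8 MiB, as in your attempts
// HttpURLConnection follows the 302 on its own; a 206 here means the
// Range header survived the redirect
System.out.println(conn.getResponseCode());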

Cache Control gets ignored

I am currently working with Retrofit and Okhttp and I am trying to cache some GET responses.
My Code is:
OkHttpClient okHttpClient = new OkHttpClient();
File cacheDir = new File(System.getProperty("java.io.tmpdir"), "ddcache");
HttpResponseCache cache = new HttpResponseCache(cacheDir, 2024);
okHttpClient.setResponseCache(cache);
OkClient cl = new OkClient(okHttpClient);
restAdapter = new RestAdapter.Builder().setEndpoint(API_URL)
        .setLogLevel(RestAdapter.LogLevel.FULL)
        .setClient(cl).build();
And the log shows these headers:
HTTP/1.1 200 OK
Cache-Control: max-age=7200
Connection: Keep-Alive
Content-Type: text/html
Date: Tue, 18 Mar 2014 18:38:16 GMT
Keep-Alive: timeout=3, max=100
OkHttp-Received-Millis: 1395167895452
OkHttp-Response-Source: NETWORK 200
OkHttp-Sent-Millis: 1395167895378
Server: Apache/2.2.26 (Unix)
Transfer-Encoding: chunked
X-Powered-By: PHP/5.3.28
I check the response by returning the server's Unix time on every call, and it always returns a new one, which means
Cache-Control: max-age=7200
gets totally ignored.
The journal file in the cache also gets updated with "CLEAN" and "DIRTY" entries, but nothing gets cached.
Is there something obvious I do not see?
I think I had a similar problem. The size of the cache is set in bytes, and you are setting it to only 2024 bytes. That's not enough space for almost anything. Try setting it to 10L * 1024 * 1024 (10 MB) and see if that helps.
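Something like this, against the same OkHttp API you're already using:
OkHttpClient okHttpClient = new OkHttpClient();
File cacheDir = new File(System.getProperty("java.io.tmpdir"), "ddcache");
// maxSize is a byte count: 10L * 1024 * 1024 = 10 MB
HttpResponseCache cache = new HttpResponseCache(cacheDir, 10L * 1024 * 1024);
okHttpClient.setResponseCache(cache);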

Override the HTTP response status text

How can I override the text of the HTTP status in Tomcat 7?
I'm using HttpServletResponse.sendError(401, "Invalid username or Password"), but when I look at the response status in the client it gives 401 Unauthorized.
Is there any way to override it?
Tomcat no longer supports the USE_CUSTOM_STATUS_MSG_IN_HEADER property.
Changelog from 8.5.0:
RFC 7230 states that clients should ignore reason phrases in HTTP/1.1
response messages. Since the reason phrase is optional, Tomcat no
longer sends it. As a result the system property
org.apache.coyote.USE_CUSTOM_STATUS_MSG_IN_HEADER is no longer used
and has been removed. (markt)
RFC 7230, Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing, June 2014. Section 3.1.2:
The reason-phrase element exists for the sole purpose of providing
a textual description associated with the numeric status code,
mostly out of deference to earlier Internet application protocols
that were more frequently used with interactive text clients. A
client SHOULD ignore the reason-phrase content.
On Tomcat 7, edit catalina.properties and add the property:
org.apache.coyote.USE_CUSTOM_STATUS_MSG_IN_HEADER=true
With that set in my dev environment, when I do:
response.sendError(HttpServletResponse.SC_BAD_REQUEST,
        "A very very very bad request");
I see:
HTTP/1.1 400 A very very very bad request
Server: Apache-Coyote/1.1
Content-Type: text/html;charset=utf-8
Content-Language: en
Content-Length: 1024
Date: Fri, 20 Dec 2013 11:09:54 GMT
Connection: close
Also discussed here and here
No - the response status codes are set according to RFC 2616. If you want to communicate a message to the user (to the API client), either write it in the body or in a response header.
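For example, a sketch that keeps the standard reason phrase but carries the detail in a custom header (the header name here is just illustrative):
response.setStatus(HttpServletResponse.SC_UNAUTHORIZED); // 401, standard phrase
// clients read the human-readable detail from the header (or the body) instead
response.setHeader("X-Error-Message", "Invalid username or password");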

http: conditional get does not give a chance to refresh headers without sending body again

I don't know if this is a bug or a feature of the HTTP spec, or whether I'm not understanding things correctly.
I have a resource that changes at most once a week, at the week's beginning. If it didn't change, then the previous week's resource continues to be valid for the whole week.
(For all our tests we have modified the one week period for five minutes, but I think our observations are still valid).
First we send the resource with the header Expires: next Monday. The whole week the browser retrieves from the cache. If on Monday we have a new resource then it is retrieved with its new headers and everything is ok.
The problem occurs when the resource is not renewed. In response to the conditional GET, our app (Java + Tomcat) sends new headers with Expires: next Monday, but without the body. But our frontend server (Apache) removes this header, because the spec says you should not send new headers if the resource did not change. So now, forever (until the resource changes), the browser will send a conditional GET when we would like it to keep serving straight from the cache.
Is there a spec compliant way to update the headers without updating the body? (or sending it again)
And subquestion: how to make apache pass along tomcat's headers?
Just an Expires header is not enough. According to RFC 2616 section 13.3.4, a server needs to respond with two headers, Last-Modified and ETag, to do conditional GET right:
In other words, the preferred behavior for an HTTP/1.1 origin server is to send both a strong entity tag and a Last-Modified value.
And if the client is HTTP/1.1 compliant, it should send If-Modified-Since. The server is then supposed to respond as follows (quoted from Roy Fielding's proposal to add conditional GET):
If resource is inaccessible (for whatever reason), then the server should return a 4XX message just like it does now.
If resource no longer exists, the server should return a 404 Not Found response (i.e. same as now).
If resource is accessible but its last modification date is earlier (less than) or equal to the date passed, the server should return a 304 Not Modified message (with no body).
If resource is accessible and its last modification date is later than the date passed, the server should return a 200 OK message (i.e. same as now) with body.
So, I guess you don't need to configure Apache and/or Tomcat the way you described. You need to make your application HTTP/1.1 compliant.
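For instance, a minimal doGet sketch of a compliant conditional GET (lastModifiedOfResource() is a hypothetical helper returning the resource's timestamp in epoch millis):
protected void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
    long lastModified = lastModifiedOfResource();
    long ifModifiedSince = req.getDateHeader("If-Modified-Since"); // -1 if absent
    resp.setDateHeader("Last-Modified", lastModified);
    // compare at one-second granularity, since HTTP dates carry no millis
    if (ifModifiedSince != -1 && lastModified / 1000 <= ifModifiedSince / 1000) {
        resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED); // 304, no body
        return;
    }
    resp.setContentType("text/html");
    // ... write the full 200 response body here
}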
Try sending a valid HTTP-Date for the Expires header?
One way to solve the problem is using separate URIs for each week. The canonical url redirects to the appropriate url for the week, and instructs the browser to cache the redirect for a week. Also, URLs that have a date in them will instruct the browser to cache forever.
Canonical URL : /path/to/resource
Status Code : 301
Location : /path/to/resource/12-dec or /path/to/resource/19-dec
Expires : Next Monday
Week 1 : /path/to/resource/12-dec
Status code : 200
Expires : Never
Week 2 : /path/to/resource/19-dec
Status code : 200
Expires : Never
When the cache expires on Monday, you just send a redirect response. You either send last weeks URL or this weeks, but you never send the entire response body.
With this approach, you have eliminated conditional gets. You have also made your resources "unmodifiable-once-published", and you also get versioned resources.
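A sketch of the Monday response for the canonical URL (nextMondayMillis() is a hypothetical helper; paths and dates are illustrative):
protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
    resp.setStatus(HttpServletResponse.SC_MOVED_PERMANENTLY); // 301
    resp.setHeader("Location", "/path/to/resource/12-dec");
    resp.setDateHeader("Expires", nextMondayMillis()); // cache the redirect for a week
}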
The only caveat - redirects aren't cached by all browsers, even though the HTTP spec requires it. Notably, IE8 and below don't cache them. For details, look at the column "cache browser redirects" in Browserscope.
The Expires header was essentially deprecated with HTTP/1.1; use Cache-Control: max-age instead.
Make sure you are including Last-Modified.
It's optional, but you may also want to specify Cache-Control: must-revalidate, to make sure intermediate proxies don't deliver potentially stale content.
You don't need to set ETag.
Example request:
GET http://localhost/images/logo.png HTTP/1.1
Accept: image/png, image/svg+xml, image/*;q=0.8, */*;q=0.5
Referer: http://localhost/default.aspx
Accept-Language: en-US
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
Accept-Encoding: gzip, deflate
Host: localhost
Connection: Keep-Alive
The response includes the requested content:
HTTP/1.1 200 OK
Cache-Control: max-age=10
Content-Type: image/png
Last-Modified: Sat, 21 Feb 2009 11:28:18 GMT
Accept-Ranges: bytes
Date: Sun, 18 Dec 2011 05:48:34 GMT
Content-Length: 2245
Requests made before the 10 second timeout are resolved from cache, with no HTTP request. After the timeout:
GET http://localhost/images/logo.png HTTP/1.1
Accept: image/png, image/svg+xml, image/*;q=0.8, */*;q=0.5
Referer: http://localhost/default.aspx
Accept-Language: en-US
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
If-Modified-Since: Sat, 21 Feb 2009 11:28:18 GMT
Host: localhost
The response is just headers, without content:
HTTP/1.1 304 Not Modified
Cache-Control: max-age=10
Last-Modified: Sat, 21 Feb 2009 11:28:18 GMT
Accept-Ranges: bytes
Date: Sun, 18 Dec 2011 05:49:04 GMT
Subsequent requests are again resolved from the browser's cache until the specified cache expiration time.
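In servlet terms, setting these headers is just (a sketch; lastModified is the resource's timestamp in epoch millis, and the max-age value would be your own expiration period):
resp.setHeader("Cache-Control", "max-age=10, must-revalidate");
resp.setDateHeader("Last-Modified", lastModified);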
