How can I override the text of the HTTP status in Tomcat 7?
I'm using HttpServletResponse.sendError(401, "Invalid username or Password"), but when I look at the response status on the client it gives 401 Unauthorized.
Is there any way to override it?
Tomcat no longer supports the USE_CUSTOM_STATUS_MSG_IN_HEADER property.
Changelog from 8.5.0:
RFC 7230 states that clients should ignore reason phrases in HTTP/1.1
response messages. Since the reason phrase is optional, Tomcat no
longer sends it. As a result the system property
org.apache.coyote.USE_CUSTOM_STATUS_MSG_IN_HEADER is no longer used
and has been removed. (markt)
RFC 7230, Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing, June 2014. Section 3.1.2:
The reason-phrase element exists for the sole purpose of providing
a textual description associated with the numeric status code,
mostly out of deference to earlier Internet application protocols
that were more frequently used with interactive text clients. A
client SHOULD ignore the reason-phrase content.
On Tomcat 7, however, the property is still supported. Edit catalina.properties and add the property:
org.apache.coyote.USE_CUSTOM_STATUS_MSG_IN_HEADER=true
With that set in my dev environment, when I do:
response.sendError(HttpServletResponse.SC_BAD_REQUEST,
"A very very very bad request");
I see:
HTTP/1.1 400 A very very very bad request
Server: Apache-Coyote/1.1
Content-Type: text/html;charset=utf-8
Content-Language: en
Content-Length: 1024
Date: Fri, 20 Dec 2013 11:09:54 GMT
Connection: close
Also discussed here and here
No - the response codes are set according to RFC 2616. If you want to communicate a message to the user (to the API client), either write it in the body or in a response header.
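For example, a minimal servlet sketch that does both (the class name and the X-Error-Message header are just illustrative choices, not anything standard):
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LoginServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Send only the status code; carry the human-readable message in a
        // custom header and in the body instead of the reason phrase.
        resp.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
        resp.setHeader("X-Error-Message", "Invalid username or password"); // illustrative header name
        resp.setContentType("application/json");
        resp.getWriter().write("{\"error\":\"Invalid username or password\"}");
    }
}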
I am currently trying to develop a Java-based app to access OneDrive.
Today I tried to implement the download as described here: https://dev.onedrive.com/items/download.htm
I wanted to use the range parameter to offer the user the ability to pause large downloads. But no matter how I send the parameter, be it within the HTTP request header or in the URL as a GET parameter, it always sends me the complete file.
Things I tried so far:
https://api.onedrive.com/v1.0/drive/items/***/content?range=0-8388607
(OAuth via HTTP header)
https://api.onedrive.com/v1.0/drive/items/***/content:
Header: Authorization: ***
range: 0-8388607
https://api.onedrive.com/v1.0/drive/items/***/content:
Header: Authorization: ***
range: bytes=0-8388607
I also tried Content-Range and various variations of upper and lower case, without success. Any reason why this does not work?
Requesting a range of the file is supported. You might want to use Fiddler or some other tool to see if the original headers are being passed after the 302 redirect is performed. Below are the HTTP requests and responses when I provide the Range header, which is passed on after the 302 redirect occurs. You'll notice that an HTTP 206 Partial Content response is returned. Additionally, to resume a download, you can use "Range: bytes=1025-" or whatever the last byte received was. I hope that helps.
GET https://api.onedrive.com/v1.0/drive/items/item-id/content HTTP/1.1
Authorization: Bearer
Range: bytes=0-1024
Host: api.onedrive.com
HTTP/1.1 302 Found
Content-Length: 0
Location: https://kplnyq.dm2302.livefilestore.com/edited_location
Other headers removed
GET https://kplnyq.dm2302.livefilestore.com/edited_location
Range: bytes=0-1024
Host: kplnyq.dm2302.livefilestore.com
HTTP/1.1 206 Partial Content
Cache-Control: public
Content-Length: 1025
Content-Type: audio/mpeg
Content-Location: https://kplnyq.dm2302.livefilestore.com/edited_location
Content-Range: bytes 0-1024/4842585
Expires: Tue, 11 Aug 2015 21:34:52 GMT
Last-Modified: Mon, 12 Dec 2011 21:33:41 GMT
Accept-Ranges: bytes
Server: Microsoft-HTTPAPI/2.0
Other headers removed
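If you are building the requests with java.net.HttpURLConnection, a rough sketch along these lines should reproduce the trace above; the token and item id are placeholders, and the redirect is followed manually so it is obvious whether the Range header is re-sent to the download host:
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangeDownloadSketch {
    public static void main(String[] args) throws Exception {
        String token = "<access token>"; // placeholder
        String itemId = "<item id>";     // placeholder
        URL api = new URL("https://api.onedrive.com/v1.0/drive/items/" + itemId + "/content");

        // First request: ask the API for the content, but do not follow the
        // redirect automatically so the headers can be re-sent explicitly.
        HttpURLConnection first = (HttpURLConnection) api.openConnection();
        first.setRequestProperty("Authorization", "Bearer " + token);
        first.setRequestProperty("Range", "bytes=0-8388607");
        first.setInstanceFollowRedirects(false);
        System.out.println(first.getResponseCode());          // expect 302
        String location = first.getHeaderField("Location");   // pre-authenticated download URL

        // Second request: the download URL from the Location header; re-send Range.
        HttpURLConnection download = (HttpURLConnection) new URL(location).openConnection();
        download.setRequestProperty("Range", "bytes=0-8388607");
        System.out.println(download.getResponseCode());       // expect 206 Partial Content
        InputStream in = download.getInputStream();
        // ... read the first 8 MiB chunk from 'in' ...
        in.close();
    }
}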
I work on a REST-like API that will support bulk operations on some resources. As it may take some time to finish such a request, I would like to return the statuses of the operations in a chunked response. The media type should be JSON. How can I do this with JAX-RS?
(I know that there is StreamingOutput, but it needs to manually serialize the data.)
Chunked Transfer encoding is usually used in cases where the content length is unknown when the sender starts transmitting the data. The receiver can handle each chunk while the server is still producing new ones.
This implies that the server is sending the whole time. I don't think that it makes much sense to send I'm still working|I'm still working|I'm still working| in chunks, and as far as I know chunked transfer encoding is handled transparently by most application servers. They switch automatically when the response is bigger than a certain size.
A common pattern for your use case looks like this:
The client triggers a bulk operation:
POST /batch-jobs HTTP/1.1
The server creates a resource which describes the status of the job and returns the URI in the Location header:
HTTP/1.1 202 Accepted
Location: /batch-jobs/stats/4711
The client checks this resource and receives a 200:
GET /batch-jobs/stats/4711 HTTP/1.1
This example uses JSON, but you could also return plain text or add caching headers that tell the client how long it should wait before the next poll.
HTTP/1.1 200 OK
Content-Type: application/json
{ "status" : "running", "nextAttempt" : "3000ms" }
If the job is done, the server should answer with a 303 and the URI of the resource it has created:
HTTP/1.1 303 See other
Location: /batch-jobs/4711
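A rough JAX-RS sketch of that pattern; the paths match the example above, but JobStore is a made-up helper standing in for however you track your bulk jobs:
import java.net.URI;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/batch-jobs")
public class BatchJobResource {

    @POST
    public Response submit(String body) {
        // Kick off the bulk operation asynchronously and answer 202 with the
        // status resource in the Location header. JobStore is hypothetical.
        String id = JobStore.create(body);
        return Response.status(Response.Status.ACCEPTED)
                       .location(URI.create("/batch-jobs/stats/" + id))
                       .build();
    }

    @GET
    @Path("/stats/{id}")
    public Response status(@PathParam("id") String id) {
        if (JobStore.isDone(id)) {
            // Finished: 303 pointing at the resource the job created.
            return Response.seeOther(URI.create("/batch-jobs/" + id)).build();
        }
        // Still running: report the status as JSON.
        return Response.ok("{ \"status\" : \"running\", \"nextAttempt\" : \"3000ms\" }",
                           "application/json")
                       .build();
    }
}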
I am having a strange issue parsing some POP3 messages with JavaMail 1.4.4 on Java 1.4, and also on Java 1.6.
I am parsing a com.sun.mail.pop3.POP3Message retrieved from a Windows 2003 POP3 service mailbox. When I go through the getAllHeaderLines() Enumeration and compare them to the source message, I see that the Reply-To header is cut off in the middle of the email address and all remaining headers are missing (specifically Subject, To, In-Reply-To, MIME-Version, Content-Type, Return-Path and X-OriginalArrivalTime). The getContentType() method returns text/plain and the getContent() method returns the entire multipart/mixed message as a String.
Everything about the message looks normal and matches the source message file when I turn on JavaMail debug mode.
Any ideas would be appreciated.
Here is a snippet from the source message file in the POP3 mailbox:
Message-ID: <1345995532.54860.YahooMailNeo#web111910.mail.gq1.yahoo.com>
Date: Sun, 26 Aug 2012 08:38:52 -0700
From: Secure Comfort <securecomforttransportation#ymail.com>
Reply-To: Secure Comfort <securecomforttransportation#ymail.com>
Subject: Language & Transportation Service
To: "xxxxxx#xxxxxx.com"
< xxxxxx # xxxxxx.com>
In-Reply-To: <1345995390.53486.YahooMailNeo#web111908.mail.gq1.yahoo.com>
MIME-Version: 1.0
Content-Type: multipart/mixed;
boundary="1816409020-1433069823-1345995533=:54860"
Return-Path: securecomforttransportation#ymail.com
X-OriginalArrivalTime: 26 Aug 2012 15:39:22.0287 (UTC) FILETIME=[F6D67BF0:01CD83A0]
--1816409020-1433069823-1345995533=:54860
Content-Type: multipart/alternative;
boundary="1816409020-520494517-1345995533=:54860"
--1816409020-520494517-1345995533=:54860
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Here are most of the getters for the MimeMessage:
Content ID=null
Content Language=null
Content MD5=null
Content Type=text/plain
Data Handler=javax.mail.internet.MimeBodyPart$MimePartDataHandler
Description=null
Disposition=null
Encoding=null
File Name=null
Line Count=-1
Message ID=<1345995532.54860.YahooMailNeo#web111910.mail.gq1.yahoo.com>
Received Date=null
Sent Date=Sun Aug 26 10:38:52 CDT 2012
Size=7480850
Subject=null
What does the debug output from JavaMail show? (If you don't want to post it here, send it to me at javamail_ww#oracle.com.)
There's no header size limit in JavaMail.
Possibly you have a firewall or anti-virus software that's intercepting the conversation with the server and (accidentally) introducing this break in the message headers.
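For reference (assuming the usual javax.mail imports), the debug trace can be enabled before the message is fetched:
Properties props = new Properties();
props.setProperty("mail.debug", "true");  // protocol-level trace to System.out
Session session = Session.getInstance(props);
session.setDebug(true);                   // equivalent programmatic switch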
Just using the core Java API, I am trying to send a response with correctly formatted HTTP headers. With my current code, I can see the client receives my response, but I think I am not including all the information, because when I return a 404 error to a web browser client, the browser just displays a blank page. If I try with curl, I also do not see curl letting me know about the error. Here is what I am currently doing with the response:
outputStream.write("HTTP/1.1 404 Not Found".getBytes());
An HTTP response is made up of three parts:
The initial status line, which is what you are sending
The response headers (like Content-Type, Content-Length, Last-Modified, Cookie, etc.)
The response body (the actual HTML with user-friendly content that tells the user what happened)
You are only sending the first.
An example of what you should be sending is
HTTP/1.1 404 Not Found
Date: Fri, 18 Aug 2012 13:24:45 GMT
Content-Type: text/html
Content-Length: 91

<html><head><title>Page not found</title></head><body>The page was not found.</body></html>
Also, don't forget to close the socket at the end.
This page describes the HTTP response format in an easy-to-understand way.
EDIT
It's important to note that each line in the response must be separated by both the CR and LF characters. So you should also be writing \r\n to represent each newline before your call to getBytes().
When you are returning a 404 error, you should include content for the browser to display. Include a Content-Length and Content-Type header, and then some HTML content for the browser to display. Also remember to flush the stream after writing to it. A proper response might look like this:
String response = "HTTP/1.1 404 Not Found\r\n" +
"Content-Length: 22\r\n" +
"Content-Type: text/html\r\n\r\n" +
"<h1>404 Not Found</h1>";
outputStream.write(response.getBytes());
outputStream.flush();
There should be a carriage return and a line feed (\r\n) after each header, and two \r\n's after the headers are done, to indicate that the content is coming up.
Adding a newline and closing the connection helps my browser and curl to understand the response:
outputStream.write("HTTP/1.1 404 Not Found\n".getBytes());
outputStream.close();
...and a minimal "working" response:
outputStream.write("HTTP/1.1 200 OK\n\nHello, world!".getBytes())
outputStream.close()
I faced the following problem: when URLConnection is used via a proxy, the content length is always set to -1.
First I checked that the proxy really returns the Content-Length (lynx and wget also work via the proxy; there is no other way to reach the internet from the local network):
$ lynx -source -head ftp://ftp.wipo.int/pub/published_pct_sequences/publication/2003/1218/WO03_104476/WO2003-104476-001.zip
HTTP/1.1 200 OK
Last-Modified: Mon, 09 Jul 2007 17:02:37 GMT
Content-Type: application/x-zip-compressed
Content-Length: 30745
Connection: close
Date: Thu, 02 Feb 2012 17:18:52 GMT
$ wget -S -X HEAD ftp://ftp.wipo.int/pub/published_pct_sequences/publication/2003/1218/WO03_104476/WO2003-104476-001.zip
--2012-04-03 19:36:54-- ftp://ftp.wipo.int/pub/published_pct_sequences/publication/2003/1218/WO03_104476/WO2003-104476-001.zip
Resolving proxy... 10.10.0.12
Connecting to proxy|10.10.0.12|:8080... connected.
Proxy request sent, awaiting response...
HTTP/1.1 200 OK
Last-Modified: Mon, 09 Jul 2007 17:02:37 GMT
Content-Type: application/x-zip-compressed
Content-Length: 30745
Connection: close
Age: 0
Date: Tue, 03 Apr 2012 17:36:54 GMT
Length: 30745 (30K) [application/x-zip-compressed]
Saving to: `WO2003-104476-001.zip'
In Java I wrote:
URL url = new URL("ftp://ftp.wipo.int/pub/published_pct_sequences/publication/2003/1218/WO03_104476/WO2003-104476-001.zip");
int length = url.openConnection().getContentLength();
logger.debug("Got length: " + length);
and I get -1. I started to debug FtpURLConnection and it turned out that the necessary information is in the underlying HttpURLConnection.responses field, but it is never properly populated from there (there is Content-Length: 30745 in the headers). The content length is not updated when you start reading the stream, or even after the stream has been read. Code:
URL url = new URL("ftp://ftp.wipo.int/pub/published_pct_sequences/publication/2003/1218/WO03_104476/WO2003-104476-001.zip");
URLConnection connection = url.openConnection();
logger.debug("Got length (1): " + connection.getContentLength());
InputStream input = connection.getInputStream();
byte[] buffer = new byte[4096];
int count = 0, len;
while ((len = input.read(buffer)) > 0) {
count += len;
}
logger.debug("Got length (2): " + connection.getContentLength() + " but wanted " + count);
Output:
Got length (1): -1
Got length (2): -1 but wanted 30745
It seems like a bug in JDK 6, so I have opened a new bug, #7168608.
If somebody can help me write code that returns the correct content length for a direct FTP connection, an FTP connection via a proxy, and local file:/ URLs, I would appreciate it.
If the given problem cannot be worked around with JDK 6, please suggest any other library that definitely works for all the cases I've mentioned (Apache HttpClient?).
Remember that proxies will often change the representation of the underlying entity. In your case I suspect the proxy is probably altering the transfer encoding, which in turn makes the Content-Length meaningless even if supplied.
You are falling afoul of the following two sections of the HTTP 1.1 spec:
4.4 Message Length
...
...
If a Content-Length header field (section 14.13) is present, its decimal value in OCTETs represents both the entity-length and the transfer-length. The Content-Length header field MUST NOT be sent if these two lengths are different (i.e., if a Transfer-Encoding header field is present). If a message is received with both a Transfer-Encoding header field and a Content-Length header field, the latter MUST be ignored.
14.41 Transfer-Encoding
The Transfer-Encoding general-header field indicates what (if any) type of transformation has been applied to the message body in order to safely transfer it between the sender and the recipient. This differs from the content-coding in that the transfer-coding is a property of the message, not of the entity.
Transfer-Encoding = "Transfer-Encoding" ":" 1#transfer-coding
Transfer-codings are defined in section 3.6. An example is:
Transfer-Encoding: chunked
If multiple encodings have been applied to an entity, the transfer- codings MUST be listed in the order in which they were applied. Additional information about the encoding parameters MAY be provided by other entity-header fields not defined by this specification.
Many older HTTP/1.0 applications do not understand the Transfer- Encoding header.
So the URLConnection is ignoring the Content-Length header, as per the spec, because it is meaningless in the presence of chunked transfers.
In your debugger screenshot it's not clear whether the Transfer-Encoding header is present. Please let us know...
On further investigation - it seems that lynx does not show all the headers returned when you issue a lynx -head. It is not showing the Transfer-Encoding header critical to this discussion.
Here's the proof of the discrepancy with a publicly visible website:
Ξ▶ lynx -useragent='dummy' -source -head http://www.bbc.co.uk
HTTP/1.1 302 Found
Server: Apache
X-Cache-Action: PASS (non-cacheable)
X-Cache-Age: 0
Content-Type: text/html; charset=iso-8859-1
Date: Tue, 03 Apr 2012 13:33:06 GMT
Location: http://www.bbc.co.uk/mobile/
Connection: close
Ξ▶ wget -useragent='dummy' -S -X HEAD http://www.bbc.co.uk
--2012-04-03 14:33:22-- http://www.bbc.co.uk/
Resolving www.bbc.co.uk... 212.58.244.70
Connecting to www.bbc.co.uk|212.58.244.70|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Server: Apache
Cache-Control: private, max-age=15
Etag: "7e0f292b2e5e4c33cac1bc033779813b"
Content-Type: text/html
Transfer-Encoding: chunked
Date: Tue, 03 Apr 2012 13:33:22 GMT
Connection: keep-alive
X-Cache-Action: MISS
X-Cache-Age: 0
X-LB-NoCache: true
Vary: Cookie
Since I am obviously not inside your network I can't replicate your exact circumstances, but please validate that you really aren't getting a Transfer-Encoding header when passing through a proxy.
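One way to check from the Java side is to dump every header the connection exposes and look for Transfer-Encoding (and Content-Length); given the behaviour described in the question, whether anything shows up at all is itself informative. A sketch, using the same URL as above:
import java.net.URL;
import java.net.URLConnection;
import java.util.List;
import java.util.Map;

public class DumpHeaders {
    public static void main(String[] args) throws Exception {
        URLConnection conn = new URL(
                "ftp://ftp.wipo.int/pub/published_pct_sequences/publication/2003/1218/WO03_104476/WO2003-104476-001.zip")
                .openConnection();
        // Print every header field the connection reports; the null key, if
        // present, holds the status line.
        for (Map.Entry<String, List<String>> e : conn.getHeaderFields().entrySet()) {
            System.out.println(e.getKey() + ": " + e.getValue());
        }
    }
}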
I think it's a "bug" in the jdk related to handling ftp connections which are proxied. The FtpURLConnection delegates to an HttpURLConnection when a proxy is in use. however, the FtpURLConnection doesn't seem to delegate any of the header management to this HttpURLConnection in this situation. thus, you can correctly get the streams, but i don't think you can access any "header" values like content length or content type. (this is based on a quick glance over the openjdk source for 1.6, i could have missed something).
One thing I would do is to actually read the response (written off the top of my head, so expect mistakes):
URLConnection connection = url.openConnection();
InputStream input = connection.getInputStream();
byte[] buffer = new byte[4096];
while (input.read(buffer) > 0)
    ;
logger.debug("Got length: " + connection.getContentLength());
If the size you are getting is good, then look for a way to make URLConnection read the headers but not the data, to avoid reading the whole response.