Does the Single Sign On Paradigm respect the HTTP protocol rules? - java

We are using the CAS server for a Java web application.
The authentication system is as follows:
A user tries to access a resource (1) on the system
The user gets redirected (302 Found) to the login page
The user enters the username and password
The server answers with a cookie and redirects to the original page (1)
I am debating whether this interaction respects the HTTP protocol.
If I do not have the authorization to access a resource,
shouldn't the system answer with a 401 Unauthorized, or even better a 407 Proxy Authentication Required?
And couldn't the authorization credential be a full SSL-level authorization key instead of a cookie string?
Added:
Header dump using curl -L -D
HTTP/1.1 301 Moved Permanently
Server: nginx/0.8.54
Date: Sat, 10 Dec 2011 02:07:55 GMT
Content-Type: text/html
Content-Length: 185
Connection: keep-alive
Location: https://server.com/service/
HTTP/1.1 302 Found
Server: nginx/0.8.54
Date: Sat, 10 Dec 2011 02:07:55 GMT
Transfer-Encoding: chunked
Connection: keep-alive
Cache-Control: no-store, max-age=0, must-revalidate
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Set-Cookie: JSESSIONID=q7rjikj4spvd1fxaowjl9XXX
Location: https://server.com/login/
HTTP/1.1 200 OK
Server: nginx/0.8.54
Date: Sat, 10 Dec 2011 02:07:55 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
X-Powered-By: PHP/5.3.2
Content-Length: 6650

You're right - the SSO system in your example really operates at a different level than the HTTP protocol itself. If the application returned a 401, the browser itself would likely handle the authentication (e.g. prompt the user for a username/password, then send the next request with these Base64-encoded in the HTTP Authorization header). In the case of HTTP Basic authentication, the username and password would then be sent with every request to your system. You could also authenticate with Kerberos or NTLM, in which case the authentication could be transparent if the user is already logged into the network.
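For illustration only (this is not what CAS does), a servlet that issues that 401 challenge for HTTP Basic authentication would look roughly like the sketch below; the realm name and the missing credential check are placeholders.

import java.io.IOException;
import java.util.Base64;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BasicAuthServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String auth = req.getHeader("Authorization");
        if (auth == null || !auth.startsWith("Basic ")) {
            // No credentials yet: challenge the browser, which will prompt the user itself.
            resp.setHeader("WWW-Authenticate", "Basic realm=\"example\"");
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        // With Basic auth the credentials arrive Base64-encoded as "user:password" on every request.
        String decoded = new String(Base64.getDecoder().decode(auth.substring("Basic ".length())));
        String user = decoded.split(":", 2)[0];
        resp.getWriter().println("Hello, " + user);
    }
}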
That said - many SSO systems take the approach of redirecting the user to an HTML login form and then maintaining session state with a cookie. One of the main benefits is that you have more control over the look and feel of the login interface. And because the SSO system holds its own session cookie, it can take ownership of maintaining that state in its own (likely proprietary) way.
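The redirect-to-form approach usually boils down to a filter along these lines; this is only a sketch, and the session attribute name and login URL are invented for the example.

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class LoginRedirectFilter implements Filter {
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse resp = (HttpServletResponse) response;
        // The session is tracked through the JSESSIONID cookie seen in the header dump above.
        HttpSession session = req.getSession(false);
        if (session == null || session.getAttribute("authenticatedUser") == null) {
            // Not authenticated yet: 302 to the login form, remembering where the user wanted to go
            // (URL-encoding of the service parameter is omitted for brevity).
            resp.sendRedirect("/login/?service=" + req.getRequestURL());
            return;
        }
        chain.doFilter(request, response);
    }
    public void init(FilterConfig config) {}
    public void destroy() {}
}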

Related

Spring-Boot how to disable "Transfer-Encoding: chunked" in java

I have a problem with the Tomcat included with Spring Boot. I want to disable chunked encoding, or even use HTTP 1.0 since that version doesn't support it. I don't have a Java class for this; I just want to change the properties.
Below is the part of the HTTP response header that I receive with chunked encoding:
HTTP/1.1 200
Date: Thu, 26 Sep 2019 15:46:57 GMT
Content-Type: application/xml;charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
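For what it's worth, Tomcat falls back to Transfer-Encoding: chunked on a keep-alive connection whenever it doesn't know the response length before the response is committed. I'm not aware of a single property that turns it off, but a hedged sketch of a filter that buffers the response and sets Content-Length (which removes the need for chunking) could look like this, using Spring's ContentCachingResponseWrapper:

import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;
import org.springframework.web.util.ContentCachingResponseWrapper;

@Component
public class ContentLengthFilter extends OncePerRequestFilter {
    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        // Buffer the whole response so its size is known before anything is sent.
        ContentCachingResponseWrapper wrapper = new ContentCachingResponseWrapper(response);
        chain.doFilter(request, wrapper);
        // copyBodyToResponse() writes the buffered body and sets Content-Length,
        // so Tomcat no longer needs Transfer-Encoding: chunked.
        wrapper.copyBodyToResponse();
    }
}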

Websocket connection attempt fails, returns "Connection: Upgrade, close."

I'm building a Java websocket server using Tomcat. On my dev build, it works perfectly. However when I deploy it to production, the server is automatically appending "close" to the connection response header, immediately closing the socket (which never seems to connect to the server in the first place).
Here's some context for the production environment:
Tomcat 7, Java 8 on RHEL
Communications are encrypted by SSL, websocket uses wss
The server is behind an institutional firewall (but I expect that the encryption should make this a non-issue)
My local dev environment is not an exact clone (as it's used for multiple projects). It's running Tomcat 8, but I believe Tomcat 7 should feature comparable websocket support.
Here's the request/response (as captured by Chrome dev tools) when the websocket is sent to the production server:
General:
Request URL: wss://example.com/WSServer
Request Method: GET
Status Code: 101 Switching Protocols
Response:
HTTP/1.1 101 Switching Protocols
Date: Thu, 10 May 2018 17:04:39 GMT
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Upgrade: websocket
Connection: upgrade, close
Sec-WebSocket-Accept: JFNyciPc/Cza8PFaXWVct6f21qw=
Sec-WebSocket-Extensions: permessage-deflate;client_max_window_bits=15
Content-Length: 0
Content-Type: text/plain; charset=UTF-8
Request:
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
Cache-Control: no-cache
Connection: Upgrade
Cookie: *redacted*
Host: example.com
Origin: https://example.com
Pragma: no-cache
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits
Sec-WebSocket-Key: OvMcwMxIYqBLrx9ijlFK/w==
Sec-WebSocket-Version: 13
Upgrade: websocket
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36
As far as I can tell, the most revealing part of this is Connection: upgrade, close, which explains the client-side behavior below.
Here's a snippet of the client-side Javascript:
var socket = new WebSocket((window.location.protocol==="http:"?"ws:":"wss:") + "//" + window.location.host + "/WSServer");
socket.onopen = function wsOpen() {
    socket.send("Hello!");
};
socket.onclose = function wsClose(reason) {
    log(JSON.stringify(reason)); //debug
};
socket.onopen gets called first. Executed normally, this doesn't produce any console message, but if I delay its execution with a breakpoint I get an error message: "Websocket is already in CLOSED or CLOSING state."
socket.onclose gets called immediately after. The reason code is 1006 with no explanation.
I've also put some debug logging in the ServerEndpointConfig.Configurator.modifyHandshake method, but it never reaches that point, nor does it reach the @OnOpen-annotated method.
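For reference, the server side is wired roughly like this (a simplified sketch with made-up class names, not the actual code):

import javax.websocket.HandshakeResponse;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.HandshakeRequest;
import javax.websocket.server.ServerEndpoint;
import javax.websocket.server.ServerEndpointConfig;

@ServerEndpoint(value = "/WSServer", configurator = WSServer.LoggingConfigurator.class)
public class WSServer {

    public static class LoggingConfigurator extends ServerEndpointConfig.Configurator {
        @Override
        public void modifyHandshake(ServerEndpointConfig sec, HandshakeRequest request,
                                    HandshakeResponse response) {
            // This debug logging is never reached in production.
            System.out.println("Handshake headers: " + request.getHeaders());
            super.modifyHandshake(sec, request, response);
        }
    }

    @OnOpen
    public void onOpen(Session session) {
        System.out.println("Opened session " + session.getId()); // never reached either
    }
}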
Any idea what's causing the connection to fail? Again, the server and client code works in dev, so I'm confident that it's not a code issue. Is it a Tomcat configuration issue (as far as I can tell, there's nothing unusual about the way it's set up)? Is there something obvious I'm missing?
Thanks in advance for any help!
HTTP/1.1 enables keep-alive connections by default.
A request such as:
GET / HTTP/1.1
Host: example.com
Connection: close
tells the server to disable keep-alive on the connection (the opposite of the HTTP/1.1 default).
Upgrade is a hop-by-hop header, just like Connection, and Upgrade is only valid if listed in Connection, e.g. Connection: Upgrade.
When a client makes an HTTP/1.1 request containing Upgrade, the server receiving the request is not required to upgrade, and can instead simply respond with an HTTP/1.1 response.
Connection: upgrade, close asks the server either to upgrade to (one of) the protocol(s) in the Upgrade header, or else to respond with HTTP/1.1 and close the connection. If the server does upgrade, the close token in Connection is ignored: the connection switches to the new protocol (the one named in the Upgrade response header) immediately after the HTTP/1.1 101 Switching Protocols response.
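If it helps to see the keep-alive versus close behaviour in isolation, here is a small stand-alone sketch (plain java.net.Socket; the host name is just an example) that sends Connection: close and reads until the server closes the connection:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ConnectionCloseDemo {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80)) {
            OutputStream out = socket.getOutputStream();
            // "Connection: close" opts out of HTTP/1.1's default keep-alive, so the
            // server terminates the TCP connection once this response has been sent.
            out.write(("GET / HTTP/1.1\r\n"
                    + "Host: example.com\r\n"
                    + "Connection: close\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
            out.flush();
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) { // EOF arrives only because of Connection: close
                System.out.write(buf, 0, n);
            }
        }
    }
}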

How can I get Jetty to return an error response instead of assuming an HTTP/0.9 request?

A broken HTTP client sent some requests to our Jetty-based HTTP server with a newline in the URL. Jetty sees this as an HTTP/0.9 request, truncates the URL at the newline, ignores the request headers, and sends back a response with no headers or status line.
I believe this is mostly correct according to spec, although Jetty doesn't require CRLF and will happily do this for requests other than GET. But newer specs note that HTTP/0.9 requests mainly indicate confused clients. In our case, the client (and we) could have avoided some confused troubleshooting if an error message had been sent instead.
How can I get Jetty to return an error response to requests with a newline in the URL? I'm happy to use either Jetty-level configuration or webapp-level code.
First, support for HTTP/0.9 has been completely removed in Jetty 9.3+.
Let's see what the behavior is...
Jetty Distribution 9.2.7.v20150116, running demo-base:
Normal HTTP/1.0 Request:
$ printf "GET / HTTP/1.0\r\n\r\n" | nc localhost 8080
HTTP/1.1 200 OK
Set-Cookie: visited=yes
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Accept-Ranges: bytes
Content-Type: text/html
Last-Modified: Sat, 17 Jan 2015 00:25:03 GMT
Content-Length: 2773
Server: Jetty(9.2.7.v20150116)
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
Got the headers there, looks like HTTP/1.0 response headers too.
Normal HTTP/1.1 Request:
$ printf "GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n" | nc localhost 8080
HTTP/1.1 200 OK
Set-Cookie: visited=yes
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Accept-Ranges: bytes
Content-Type: text/html
Last-Modified: Sat, 17 Jan 2015 00:25:03 GMT
Content-Length: 2773
Connection: close
Server: Jetty(9.2.7.v20150116)
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
Looks normal as well.
Even includes the HTTP/1.1 specific headers.
Now let's try HTTP/1.0 with an embedded CRLF:
$ printf "GET /\r\nHTTP/1.0\r\n\r\n" | nc localhost 8080
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
No response headers.
Why is this happening?
Well, there is no HTTP version that Jetty can determine, so there is no valid set of headers it can respond with, and it responds with none. Which, surprisingly, is how the HTTP spec prior to 1.0 behaved.
Now let's try Jetty Distribution 9.3.x, with the same demo-base configuration and the same CRLF issue.
$ printf "GET /\r\nHTTP/1.0\r\n\r\n" | nc localhost 8080
HTTP/1.1 400 HTTP/0.9 not supported
Content-Length: 0
Connection: close
Server: Jetty(9.3.0-SNAPSHOT)
Now, in the modern era, with HTTP/2 just around the corner, this makes a lot more sense.
HttpServletRequest#getProtocol() will return an empty String for HTTP/0.9 requests in Jetty 8. Thus a simple filter can return a Bad Request response for such requests.
As the other answer indicates, HTTP/0.9 requests are no longer supported in recent versions of Jetty 9.
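A sketch of such a filter (the class name is made up), written against the Servlet 3.0 API that Jetty 8 provides, might look like this:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class RejectHttp09Filter implements Filter {
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        String protocol = request.getProtocol();
        // On Jetty 8 an HTTP/0.9 request reports an empty protocol string.
        if (protocol == null || protocol.isEmpty() || "HTTP/0.9".equals(protocol)) {
            ((HttpServletResponse) response).sendError(HttpServletResponse.SC_BAD_REQUEST,
                    "HTTP/0.9 not supported");
            return;
        }
        chain.doFilter(request, response);
    }
    public void init(FilterConfig filterConfig) {}
    public void destroy() {}
}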

Cross Domain $.ajax POST to REST Web Service

I have following scenario:
App1:
My web service is hosted on a Tomcat server:
192.168.100.123
App2:
Another application, which communicates with this web service, is hosted on another machine and server:
192.168.100.456
REQUEST and RESPONSE HEADER
Allow OPTIONS,POST
Content-Length 511
Content-Type application/vnd.sun.wadl+xml
Date Thu, 02 May 2013 22:53:17 GMT
Server Apache-Coyote/1.1
----------------------------
Request Headers
Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding gzip, deflate
Accept-Language en-US,en;q=0.5
Access-Control-Request-Headers content-type,x-requested-with
Access-Control-Request-Method POST
Cache-Control no-cache
Connection keep-alive
DNT 1
Host 192.168.200.164:8080
Origin http://192.168.200.157
Pragma no-cache
User-Agent Mozilla/5.0 (Windows NT 6.1; WOW64; rv:20.0) Gecko/20100101 Firefox/20.0
After debugging the whole scenario using Firebug, I am sure that the issue is a cross-domain policy problem. Kindly help me figure a way out of this.
HTTP 302 is a status code carrying redirection information. Maybe App2 tried to log in to App1, and App1 sent the logged-in URL back as the response.
Once App2 receives such a response, the redirection URL can probably be extracted from it, and App2 should then request that URL again.
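Separately, the request headers above show a CORS preflight (an OPTIONS request carrying Origin and Access-Control-Request-Method: POST), so the web service needs to answer it. A sketch of a servlet filter that does so could look like this; the allowed origin is copied from the Origin header above and should be adjusted to your setup:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CorsFilter implements Filter {
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        // Allow the calling application's origin and the headers/methods from the preflight.
        response.setHeader("Access-Control-Allow-Origin", "http://192.168.200.157");
        response.setHeader("Access-Control-Allow-Methods", "POST, OPTIONS");
        response.setHeader("Access-Control-Allow-Headers", "Content-Type, X-Requested-With");
        if ("OPTIONS".equalsIgnoreCase(request.getMethod())) {
            // Answer the preflight directly; the browser then sends the actual POST.
            response.setStatus(HttpServletResponse.SC_OK);
            return;
        }
        chain.doFilter(req, res);
    }
    public void init(FilterConfig config) {}
    public void destroy() {}
}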

Implementing If-Match HTTP Header in Spring

The ShallowEtagHeaderFilter which is part of Spring processes the If-None-Match header on an HTTP request. As part of the HTTP 1.1 spec, this returns an HTTP status of 304 Not Modified if the contents of the If-None-Match header sent on the request are the same as the ETag header. This is helpful for caching, as it means that if the ETag is the same on the client and server then the contents will be identical.
This is fine.
However, my question is this - does Spring have support for the If-Match header (again part of HTTP 1.1) rather than If-None-Match? As far as the docs go, it looks like the ShallowEtagHeaderFilter only processes the If-None-Match header. I need the If-Match header to prevent simultaneous requests from overwriting one another, i.e. I only want a request to be processed if the ETags are the same and hence the client has the latest version of the entity.
It doesn't look like the ShallowEtagHeaderFilter supports If-Match:
curl "Accept: application/json" -H 'If-Match: "somevalue"' -i http://localhost:8080/rest-sec/api/resources/1
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
ETag: "03cb37ca667706c68c0aad4cb04c3a211"
Content-Type: application/json;charset=UTF-8
Content-Length: 56
Date: Fri, 11 Jan 2013 14:58:40 GMT
I opened a JIRA issue to track this:
https://jira.springsource.org/browse/SPR-10164
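Until that is addressed, one way to get the lost-update protection is to check If-Match in the controller itself against the current version of the entity. The sketch below is only an illustration: the in-memory store and the version-derived ETag are invented, and it uses a newer Spring MVC style than the version in the question.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
public class ResourceController {

    // Illustrative in-memory store: id -> { version, body }.
    private final ConcurrentMap<Long, String[]> store = new ConcurrentHashMap<Long, String[]>();

    @RequestMapping(value = "/api/resources/{id}", method = RequestMethod.PUT)
    public ResponseEntity<String> update(@PathVariable long id,
                                         @RequestHeader(value = "If-Match", required = false) String ifMatch,
                                         @RequestBody String newBody) {
        String[] current = store.get(id);
        String currentEtag = (current == null) ? null : "\"" + current[0] + "\"";
        if (currentEtag != null && ifMatch != null && !ifMatch.equals(currentEtag)) {
            // The client was working from a stale version: refuse instead of overwriting.
            return new ResponseEntity<String>(HttpStatus.PRECONDITION_FAILED);
        }
        long nextVersion = (current == null) ? 1 : Long.parseLong(current[0]) + 1;
        store.put(id, new String[] { String.valueOf(nextVersion), newBody });
        // Hand the new ETag back so the client can send it in the next If-Match.
        return ResponseEntity.ok().eTag("\"" + nextVersion + "\"").body(newBody);
    }
}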
