GSSAPI: The Security Context Loop - java

The Oracle GSSAPI Java examples and various SPNEGO/GSSAPI IETF RFCs indicate that both the GSS initiator (client) and the acceptor (server) should run a loop to establish a security context, and that the client may need to make multiple passes with GSS tokens before the security context is established.
Oracle GSSAPI example:
https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/BasicClientServer.html
Structure of the Generic Security Service (GSS) Negotiation Loop:
https://www.rfc-editor.org/rfc/rfc7546
SPNEGO-based Kerberos and NTLM HTTP Authentication in Microsoft Windows:
https://www.rfc-editor.org/rfc/rfc4559
For instance, RFC 4559 gives this example:
Pass 1: Fails because the request does not have a token:
C: GET dir/index.html
S: HTTP/1.1 401 Unauthorized
S: WWW-Authenticate: Negotiate
Pass 2: Fails, but the request has a token:
C: GET dir/index.html
C: Authorization: Negotiate a87421000492aa874209af8bc028
S: HTTP/1.1 401 Unauthorized
S: WWW-Authenticate: Negotiate 749efa7b23409c20b92356
Pass 3: Succeeds
C: GET dir/index.html
C: Authorization: Negotiate 89a8742aa8729a8b028
S: HTTP/1.1 200 Success
S: WWW-Authenticate: Negotiate ade0234568a4209af8bc0280289eca
Here the security context is established, and thus the request authenticated, on the third pass, i.e. on the second pass in which the client (C) sends a token to the server (S).
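For reference, the initiator side of such an exchange is driven by a loop like the one in the Oracle tutorial. Below is a minimal sketch of it; the service principal name and the sendTokenAndReadReply() helper (which would carry the Base64 token in an Authorization: Negotiate header and return the token from the server's WWW-Authenticate reply) are placeholders I've made up, not anything defined by the RFCs.

import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;

public class InitiatorLoopSketch {

    // Hypothetical transport helper: would send the token to the acceptor
    // (e.g. as "Authorization: Negotiate <base64>") and return the token the
    // acceptor sent back in its WWW-Authenticate header, or empty if none.
    static byte[] sendTokenAndReadReply(byte[] token) {
        return new byte[0];
    }

    public static void main(String[] args) throws GSSException {
        GSSManager manager = GSSManager.getInstance();
        // "HTTP@server.example.com" is an illustrative service principal name.
        GSSName serverName = manager.createName(
                "HTTP@server.example.com", GSSName.NT_HOSTBASED_SERVICE);
        GSSContext context = manager.createContext(
                serverName, null, null, GSSContext.DEFAULT_LIFETIME);

        byte[] inToken = new byte[0];
        while (!context.isEstablished()) {
            // Each call may consume a token from the acceptor and/or produce a
            // new token to send back - hence the loop and the multiple passes.
            byte[] outToken = context.initSecContext(inToken, 0, inToken.length);
            if (outToken != null && outToken.length > 0) {
                inToken = sendTokenAndReadReply(outToken);
            }
        }
        context.dispose();
    }
}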
Question 1:
Why might multiple passes from initiator to acceptor with tokens be required before the security context is successfully established?
Why might pass 2 above fail, but pass 3 succeed?
Does something change on either the initiator or the acceptor between these 2 passes?
Question 2:
My instinct is that both the initiator and acceptor loops should have protection against endless looping.
For instance, the initiator could abort if the context is not established within x attempts.
Are there any rules-of-thumb / metrics on the number of passes that might reasonably be expected to establish the security context?
e.g. if the security context has not been established by the 5th pass --> abort.
Question 3:
In the Oracle GSSAPI examples the client and server communicate over sockets.
The server builds a GSSContext object which is dedicated to a single client, is kept until the server closes, and is thus available for multiple passes to establish the security context.
But how might this work for an HTTP RESTful web server with multiple clients?
My assumption is that:
a) each pass of a request to establish a security context should be made against the same GSSContext object (and not against a new GSSContext instance);
b) the HTTP server should establish a new GSSContext instance for each new client request
(i.e. a GSSContext object should not be shared / reused across multiple clients / requests).
If my assumptions are correct the server must distinguish between:
i) a subsequent pass for an existing request for which the security context has not yet been established --> the existing GSSContext object and loop should be used;
ii) the first pass of a totally new request (either from the same or from a different client) --> a new GSSContext object and loop should be used.
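For concreteness, here is one possible shape of that bookkeeping, assuming the server can correlate passes through some identifier (RFC 4559 negotiation is connection-oriented, so a connection or session id would be a natural key). The negotiationId parameter and the map are illustrative only, not a standard API.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;

public class AcceptorContextRegistry {

    private final GSSManager manager = GSSManager.getInstance();

    // In-progress negotiations, keyed by whatever lets the server correlate
    // passes (connection id, servlet session id, ...). Purely illustrative.
    private final ConcurrentMap<String, GSSContext> pending = new ConcurrentHashMap<>();

    /** Feeds the token carried by one HTTP pass into the right GSSContext. */
    public byte[] acceptToken(String negotiationId, byte[] token) throws GSSException {
        // Case (ii): first pass of a new negotiation -> create a fresh context.
        GSSContext context = pending.computeIfAbsent(negotiationId, id -> {
            try {
                return manager.createContext((GSSCredential) null); // default acceptor credentials
            } catch (GSSException e) {
                throw new IllegalStateException(e);
            }
        });

        // Case (i): a subsequent pass -> reuse the same context object.
        byte[] reply = context.acceptSecContext(token, 0, token.length);

        if (context.isEstablished()) {
            pending.remove(negotiationId); // negotiation finished; context can now be used/stored
        }
        return reply; // may be null when there is nothing to send back to the client
    }
}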

Using Negotiate as the example protocol, it's useful to consider how it operates.
The server indicates to the client that it can support negotiation.
The client agrees and infers what the server might support.
The client creates a token based on what it really thinks the server supports (e.g. Kerberos), and then creates a list of other possible token types (e.g. NTLM).
The client sends both the token and list to the server.
The server either accepts the initial token or decides to pick something else from the list.
The server indicates to the client that it wants something else.
The client then sends another token of the preferred type.
The server accepts or declines and responds to the client appropriately.
This requires up to three roundtrips and may fail or complete after one. Other protocols may choose to do whatever they want.
You will probably want to track the number of roundtrips and kill the negotiation after an arbitrarily high number. The resources required aren't that high, but under load they can exhaust the system.
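As a rough sketch of such a guard on the initiator side (the same idea applies to the acceptor); the transport abstraction is hypothetical and the limit of 5 passes is as arbitrary as the question's example:

import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSException;

public class BoundedNegotiation {

    private static final int MAX_PASSES = 5; // arbitrary cut-off, not a standard value

    /** Hypothetical transport: send our token, return the peer's answering token. */
    interface TokenTransport {
        byte[] exchange(byte[] token);
    }

    /** Drives the initiator loop but gives up after MAX_PASSES token exchanges. */
    static void establishOrAbort(GSSContext context, TokenTransport transport) throws GSSException {
        byte[] inToken = new byte[0];
        int pass = 0;
        while (!context.isEstablished()) {
            if (++pass > MAX_PASSES) {
                context.dispose();
                throw new SecurityException(
                        "GSS context not established after " + MAX_PASSES + " passes");
            }
            byte[] outToken = context.initSecContext(inToken, 0, inToken.length);
            if (outToken != null && outToken.length > 0) {
                inToken = transport.exchange(outToken);
            }
        }
    }
}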

Related

What is the difference between making HttpRequest and Forwarding HttpRequest?

Let's say my client (a browser) requests my Java service (Service A).
http://localhost:8080/getDataFromB
Based on the request, my Service A needs to make another HttpRequest to either Service B or Service C to get the data.
getDataFromB: http://serverb.com/getDataFromB
getDataFromC: http://serverc.com/getDataFromC
I tried making an HttpRequest to Service B or Service C based on the request. But should I do it that way, or should I forward the request to Service B or Service C? If so, I save some TCP connections on my side.
What will be the difference between making an HttpRequest and forwarding the request?
If you don't want your client to know that you're actually serving the response from B or C, you should forward the request to either B or C.
If you want your client to know that your server will not be handling A directly, but instead will do B or C – so perhaps in the future the client can ask for B or C directly instead of asking for A – then you should send a redirect to the client.
You could instead do what you're suggesting - your server handles the incoming request, then makes a separate HTTP request to B or C - but that would just add more complexity to how your server communicates back with the original client. If your server logic somehow "fits" with this approach, I would consider stepping back and re-thinking your server logic to either handle requests directly, or handle them with either a redirect or a forward.
Unless your server is unable to handle new inbound requests due to excessive TCP connections, I wouldn't worry about optimizing for that.
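To make the client-visible options concrete: a servlet RequestDispatcher forward only works within the same container, so for external hosts like serverb.com the realistic choices are a redirect or a server-side call that relays the body. A hedged sketch of both, using the URLs from the question (the redirect switch and class name are illustrative):

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GetDataFromBServlet extends HttpServlet {

    private final HttpClient client = HttpClient.newHttpClient();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        boolean redirect = "true".equals(req.getParameter("redirect")); // illustrative switch

        if (redirect) {
            // Option 1: the client learns about Service B and can talk to it directly next time.
            resp.sendRedirect("http://serverb.com/getDataFromB");
            return;
        }

        // Option 2: Service A calls Service B itself and relays the response,
        // so the client never sees B. This costs A an extra outbound connection.
        HttpRequest backend = HttpRequest.newBuilder(URI.create("http://serverb.com/getDataFromB"))
                .GET().build();
        try {
            HttpResponse<String> backendResp = client.send(backend, HttpResponse.BodyHandlers.ofString());
            resp.setStatus(backendResp.statusCode());
            resp.getWriter().write(backendResp.body());
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            resp.sendError(HttpServletResponse.SC_BAD_GATEWAY);
        }
    }
}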

Securing REST Web Service using token (Java)

This question is in some way related to the below linked question. However, I need a little more clarity on some aspects and some additional information. Refer:
REST Web Service authentication token implementation
Background:
I need to implement security for a REST web service using a token.
The web service is intended for use with a Java client; hence, form authentication and popups for credentials are not useful.
I'm new to REST security and encryption.
This is what I have understood till now:
For the first request:
User establishes an https connection (or the container ensures https using a 301).
User POSTs username and password to the login service.
If credentials are valid we:
Generate a random temporary token.
Store the random token on the server, mapping it to the actual username.
Encrypt the token using a symmetric key known only to the server.
Hash the encrypted token.
Send the encrypted token and the hash to the client.
For subsequent requests:
Client sends this encrypted token and hash combination (using the username field of basic auth?).
We make sure the encrypted token has not been tampered with, using the hash, and then decrypt it.
We check the decrypted token in the session-tracking table for a non-expired entry and get the actual username (expiry to be managed by code?).
If the username is found, allowed operations are configured based on allowed roles.
More details:
Since the client is a Java client, the first request can be a POST containing the credentials. However, this looks like it may expose the credentials before https gets established. Hence, should there be a dummy GET to a secured resource so that https is established first?
Assuming the above is required, the second request is a LoginAction POST with credentials. This request is handled manually (not using the container's authorisation). Is this right?
The above LoginAction returns to the user the combination of encrypted token + hash.
The user sets it in the header that is used by the BASIC authentication mechanism (the username field).
We implement a JAASRealm to decrypt and validate the token, and find the roles allowed.
The rest of the authorisation process is taken care of by the container with the WebResourceCollection defined in web.xml.
Is this the correct approach?
Why not simplify it to the following?
For the first request:
User establishes an HTTPS connection to the server (the service does not listen on any other ports) and POSTs credentials to the login service.
Server replies with an HSTS header to ensure all further communication is HTTPS.
If credentials are valid we:
Generate a random temporary token using a CSPRNG. Make it long enough to be secure (128 bits).
Store the random token on the server, mapping it to the actual username.
Send the random token to the client.
For subsequent requests:
Client sends the token in a custom HTTP header over HTTPS.
The token is looked up in the DB and mapped to the username. If found, access is configured based on allowed roles and allowed operations.
If not found, the user is considered unauthenticated and will have to authenticate with the login service again to get a new token.
On the server side the token will be stored with an expiry date. On each access to the service this date will be updated to create a sliding expiration. A job will run every few minutes to delete expired tokens, and the query that checks the token for a valid session will only check those that are not deemed to have expired (to prevent permanent sessions if the scheduled job fails for any reason).
There is no need to hash and encrypt the tokens within the database - it adds no real value apart from a touch of security through obscurity. You could just hash them, though: that would prevent an attacker who managed to get at the session data table from hijacking existing user sessions.
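A minimal sketch of that token handling, assuming java.security.SecureRandom as the CSPRNG and SHA-256 as the optional at-rest hash (class and method names are illustrative):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Base64;

public final class SessionTokens {

    private static final SecureRandom RANDOM = new SecureRandom();

    /** 128-bit random token, Base64url-encoded so it can travel in a header. */
    public static String newToken() {
        byte[] bytes = new byte[16];                 // 16 bytes = 128 bits
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    /** Optional: store only this hash server-side so a leaked table can't be replayed. */
    public static String hashForStorage(String token) {
        try {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            byte[] digest = sha256.digest(token.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }
}

On login the server would store hashForStorage(token) next to the username and expiry date, return the raw token to the client, and on each request hash the presented token again before the DB lookup.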
The approach looks ok. Not very secure.
Let me highlight some of the attacks possible with the request.
Man-in-the-middle attack: in a POST request an attacker can tamper with the request, and the server does not have any way to ensure the data has not been tampered with.
Replay attack: here the attacker does not tamper with the request. The attacker taps the request and sends it to the server multiple times in a short duration; though it is a valid request, the server processes it multiple times, which is not intended.
Please read about Nonce.
In the first step, the user sends his credentials, i.e. username and password, to the login service. If you have a web-based application that also uses the same password this can be dangerous: if the password is compromised, both the API and the web application are exposed, so please use a different PIN for API access. Also, ensure the decrypted token, as you specified, expires after a certain time.
Ensure the application server (Tomcat, JBoss) never returns a default server error page in case of an internal error; this gives the attacker extra information about the server where the app is deployed.
-- MODIFIED BASED ON SECOND POST --
Yes, you're correct if you're using mutual SSL, but in the case of one-way access you don't have client certificates. It would be good if you double-checked everything in the request, just like signed (signature) SOAP, one of the stronger data-transfer mechanisms. But a replay attack is still a possibility with HTTPS, so handle that. For the rest, using token encryption is good. And why not ask the client to decrypt the token with the password and return the output of the decryption? That way you can validate the output by checking whether it is present in your database. With this approach the user does not send the password over the wire, even though it is HTTPS.

Java: Can I redirect clients to the same session /node behind load-balancer through application?

I have a webapp where a user should be able to access the same session (jsession) through different clients (i.e. from a pc-browser and from smartphone-browser) at the same time.
Example:
A) Person X will access the system through his PC, for example by browsing http://example.com/testapp?workon=123. The server will create a new JSESSIONID, send it back to the client, and store some value - say 'abc' - inside his session.
B) Now the same Person X will access the same URL from his smartphone and be able to retrieve the value 'abc' from the session in subsequent requests.
This will not work out of the box, because in B) the client will get a new Session and JSESSIONID which is different than the one provided in A).
When I now force the server to supply the same JSESSIONID to B) as it did in A) will they both be able to access the same session? Is this possible?
I'm asking this, because I want to achieve the following:
The application is running behind a load balancer that has sticky sessions enabled by using the JSESSIONID.
I want to achieve that B) will be directed to the same cluster node as A) on subsequent requests of B).
The request-parameter "workon" here is just an example. In reality this is a token that the load-balancer cannot understand. Only the application is able to understand and decode the content of the "workon" parameter.
It is no problem that the first request of B) will go to any node. Any node is able to decode the "workon" parameter and supply the correct JSESSIONID for it to the client. But subsequent requests should be redirected to the same node as requests from A) go to.
I do not want to use session sharing across nodes because of performance issues; the session data is rather large. I want to redirect B) to the same node as A) based on the first request of B).
Any ideas?
Edit to reflect questions in comments:
On request A) there is the request parameter "workon"; this identifies some record inside a map. This record contains the user and the jsessionid for secure binding, so the load balancer cannot find out the user for the request. The user is not authenticated using any login.
On request B) (from the smartphone) the phone sends a userid and a token on the first request (inside the json/xml payload). The application checks that the token is valid for that user (again using some map), then finds the latest "workon" for that user and sends this "workon" back to the smartphone. On subsequent requests (those should go to the same node as A)) the smartphone sends the token and the workon parameter.
You cannot use the JSESSIONID token directly if you want the node assignment to persist across different browsers (a PC and a mobile, as in your example).
You would need authentication for that: after authentication you set a cookie on the client which is unique for each user. Do not use this cookie to check whether the user is authenticated; that would open you up to all kinds of security issues. Just make sure that after logon a user always gets the same cookie value.
Use that cookie to implement sticky sessions in your load balancer. Specifics will change according to the balancer, but most of them should understand cookies.
The specific name and content of the cookie vary across load balancers.
Here is a sample with the Apache server:
apache load balancer: http://httpd.apache.org/docs/current/mod/mod_proxy_balancer.html
http://www.markround.com/archives/33-Apache-mod_proxy-balancing-with-PHP-sticky-sessions.html/
Here is another one with HAProxy:
Load Balancing (HAProxy or other) - Sticky Sessions
Notice that for HAProxy you should enable the 'preserve' option in the configuration, so that the server is in control of the cookie content and can stick the same user to the same backend (after authentication).
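On the application side this can be as simple as setting a stable per-user cookie after logon for the balancer to key on. A hedged sketch (the cookie name ROUTEUSER and the way the value is derived are illustrative; the balancer-side configuration lives in the Apache/HAProxy documentation linked above):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class StickyCookie {

    /** Call once the application has identified the user (e.g. after token/"workon" validation). */
    public static void addStickyCookie(HttpServletResponse resp, String userId) {
        // Derive a stable, opaque value from the user id so the same user always gets the
        // same cookie value; do NOT treat this cookie as proof of authentication.
        Cookie cookie = new Cookie("ROUTEUSER", stableValueFor(userId));
        cookie.setPath("/");
        cookie.setHttpOnly(true);
        resp.addCookie(cookie);
    }

    private static String stableValueFor(String userId) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(userId.getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}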

Can not understand the session object behavior

I am confused on the documentation of the javax.servlet.http.HttpSession.
It says:
Sessions are used to maintain state and user identity across multiple
page requests. A session can be maintained either by using cookies or
by URL rewriting.
Now both cookies and URL rewriting are handled by application code in server (i.e. our code).
Then it says relating to when a session is considered as new:
The server considers a session to be new until it has been joined by the client. Until the client joins the session, the isNew method returns true. A value of true can indicate one of these three cases:
1. the client does not yet know about the session
2. the session has not yet begun
3. the client chooses not to join the session. This case will occur if the client supports only cookies and chooses to reject any cookies sent by the server. If the server supports URL rewriting, this case will not commonly occur.
I am not clear on when it is considered/meant that the client has joined the session.
I mean if I don't use cookies from my web application (or URL rewriting) and I have the following:
POST from IP A to server
200 OK from server to A
POST from IP A to server
In step 3 will the session.isNew() return true or false? It is not clear to me from the doc.
Will it return false (i.e. the session is not new) and I will have to call session.invalidate() in order to create a new session?
The reason this confuses me more is that I am debugging a piece of code where the client is an HTTP application but not a web browser, and I see that in step 3 session.isNew() does not return true although there is no cookie handling or URL rewriting in the server code.
So I cannot figure out what is going on under the hood.
Any info that could help understand this?
Here is a nice example of Session Tracking
"Client has joined the session" means that the client made a subsequent request and included the session id, which can be recognized by your web server. If cookies are enabled, the jsessionid will be passed in a cookie; otherwise it should be included in the URL itself, like this: http://localhost:8080/bookstore1/cashier;jsessionid=c0o7fszeb1.
In JSP, c:url from the Core Tag Library will handle URL rewriting for you.
In the case of B2B communication you have to obtain the session id yourself and include it in subsequent requests manually.
Example:
POST from IP A to server
200 OK from server to A
A obtains session id from the response
POST from IP A to server, including the obtained session id
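A hedged sketch of that non-browser flow using java.net.http.HttpClient: read the JSESSIONID from the first response's Set-Cookie header and send it back in a Cookie header on the next request (the URL is a placeholder):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ManualSessionClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        URI uri = URI.create("http://localhost:8080/testapp/resource"); // placeholder URL

        // 1st request: no session yet; the server replies with Set-Cookie: JSESSIONID=...
        HttpResponse<String> first = client.send(
                HttpRequest.newBuilder(uri).POST(HttpRequest.BodyPublishers.noBody()).build(),
                HttpResponse.BodyHandlers.ofString());

        String sessionCookie = first.headers().firstValue("Set-Cookie")
                .map(v -> v.split(";", 2)[0])          // keep just "JSESSIONID=..."
                .orElseThrow(() -> new IllegalStateException("no Set-Cookie header"));

        // 2nd request: explicitly join the session by sending the cookie back.
        HttpResponse<String> second = client.send(
                HttpRequest.newBuilder(uri)
                        .header("Cookie", sessionCookie)
                        .POST(HttpRequest.BodyPublishers.noBody())
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        // On the server, request.getSession().isNew() should now be false for this request.
        System.out.println(second.statusCode());
    }
}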
UPDATE:
Consider reading a great article - "Web Based Session Management: Best practices in managing HTTP-based client sessions." It's a general overview of how HTTP sessions can be emulated and is not tied to Java.

Shiro authentication with sessionId or username+password

I do not have much experience in Java authentication frameworks and authentication workflow in general (only some theoretical knowledge), so for educational purposes I'm trying to create this type of authentication for my HTTP application:
Client Posts login+password to /login.
Shiro logs in the user by given credentials. Server returns client his sessionId.
Client requests some kind of resource /myresource?sessionId=1234567.
Shiro logs in the Subject by given sessionId. Then server does the regular workflow of getting the /myresource (with Shiro managing method-level access rights).
Basically I have these questions:
I guess I have no need for HTTP sessions nor Servlet sessions. Shiro has its own session manager which is enough for my needs. Am I wrong?
Is it good practice to give client the real sessionId or should I send some kind of sessionToken (which is resolved to sessionId on server side)?
How do I login the Subject using sessionId (which the client should store locally)?
Are there any other things I need to know before doing this kind of authentication?
Thanks in advance.
I guess I have no need for HTTP sessions nor Servlet sessions. Shiro has its own session manager which is enough for my needs. Am I wrong?
No, you're right. That's why Shiro is awesome. From documentation:
Shiro's Session support is much simpler to use and manage than either of these two [web container or EJB Stateful Session Beans] mechanisms, and it is available in any application, regardless of container.
e.g.
Subject currentUser = SecurityUtils.getSubject();
Session session = currentUser.getSession();
session.setAttribute( "someKey", someValue);
quoting from the doc: getSession calls work in any application, even non-web applications
Is it good practice to give client the real sessionId or should I send some kind of sessionToken (which is resolved to sessionId on server side)?
It is a bad idea to send the plain sessionId, especially if you're sending data over an unencrypted network. Either use something like HTTPS or use something like a NONCE.
And, as a side note, if over http/s, POST the data instead of having it in the URL.
How do I login the Subject using sessionId (which the client should store locally)?
You mean how you could authenticate the subject once you have the session ID? You can simply,
from the doc,
Subject requestSubject = new Subject.Builder().sessionId(sessionId).buildSubject();
Are there any other things I need to know before doing this kind of authentication?
Yes.
Read Shiro's Session Management
Learn about MITM Attack
About HTTPS and SSL
Some on hash functions: this, Apache Commons DigestUtils, and maybe this
Updates
About that subject authentication part - will it make the newly created Subject the currently authenticated subject? If not, how do I make it the "current" subject?
If you're talking about new Subject.Builder().sessionId(sessionId).buildSubject(), it will not. And I do not know how to set it as currentUser for the thread. Shiro's JavaDoc says,
[this way] returned Subject instance is not automatically bound to the application (thread) for further use. That is, SecurityUtils.getSubject() will not automatically return the same instance as what is returned by the builder. It is up to the framework developer to bind the built Subject for continued use if desired.
So it's up to you how you bind the subject to the current thread for further use.
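One way to do that binding, sketched from Shiro's Subject API: Subject.execute(...) associates the subject with the calling thread for the duration of the work, so SecurityUtils.getSubject() inside it returns the rebuilt subject. The class and method names around it are illustrative:

import org.apache.shiro.SecurityUtils;
import org.apache.shiro.subject.Subject;

public class SessionIdSubjectDemo {

    /** Rebuild a Subject from a client-supplied session id and run work as that subject. */
    public static void handleRequest(String sessionId) {
        Subject requestSubject = new Subject.Builder().sessionId(sessionId).buildSubject();

        // execute(...) binds the subject to the calling thread only for the duration
        // of the Runnable, so SecurityUtils.getSubject() inside it returns this subject.
        Runnable work = () -> {
            Subject current = SecurityUtils.getSubject();
            System.out.println("authenticated: " + current.isAuthenticated()
                    + ", principal: " + current.getPrincipal());
        };
        requestSubject.execute(work);
    }
}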
If you were wondering how the SecurityUtils.getSubject() thingy works: in a web-container context it uses a simple cookie to store your session data. When your request comes through the Shiro filter, it attaches the current subject to the request for its life cycle (the current thread). And when you ask for getSubject(), it simply gets the Subject from the request. I found an interesting thread here.
About the nonce part: if he sends me some kind of hash instead of his sessionId, I won't be able to decode it to get the real sessionId (to authorize him with it). Am I missing something here?
The nonce part: it's a pain in the neck. On rethinking, I think doing a NONCE is just overkill. Let me explain anyway:
1. The user logs in the first time with his username and password. Set the userid, a nonce (say, a UUID), and HASH(sessionId + nonce), call it hash1, on the client side, say in a cookie. Store this nonce on the server side, maybe in a DB or in a map as user_id <--> nonce, session_id.
2. On subsequent requests, make sure you pass back the userid, the nonce and the hash.
3. On the server side, the first thing you will do is validate the request. Get the sessionId and nonce stored in the hashmap or DB based on the user_id that was sent by the client. Create a hash, HASH(sessionId_from_db + nonce_from_db), call it hash2.
4. Now, if hash1 matches hash2, you can validate the request, and since you have stored the current sessionId on the server side, you can use it. On request completion, set a new nonce in the cookie and on the server side.
If you go through 1 -- 4, you'll realize you wouldn't require Shiro for authentication. (:
So, I am taking my words back: a NONCE is not needed in this case unless you are really freaky about security over performance.
Why does a MITM attack matter to me? My client (JavaScript Ajax code) fetches data from its server via Ajax. So I do not think I should care about MITM in any way.
I think it should matter to you. A MITM attack means your requests/responses are being chained via a machine (the MITM) on the way to your router. If it's an unencrypted request, it's all plain text to the MITM. He can see all your requests... and possibly spoof the requests and may hijack the session. Let me find some example: http://michael-coates.blogspot.com/2010/03/man-in-middle-attack-explained.html
