Issue with SAML response notBeforeDate - java

I am getting a SAMLException "Current date is before the notBeforeDate" during authentication. The current date and the "notBeforeDate" are the same for 90% of login attempts, yet it still results in the error. What could be the reason for this error?

In short: this is most likely caused by time drift on the IdP/SP servers.
If you have access to these servers, make sure they are properly synchronized with NTP servers, or manually adjust the time to the correct value.
If you don't, inform the IT department responsible for the IdP or SP side and ask them to check the server time synchronization.
The error refers to this part of the SAML response:
<saml:Subject>
  <saml:NameID SPNameQualifier="http://sp.example.com/demo1/metadata.php" Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient">_ce3d2948b4cf20146dee0a0b3dd6f69b6cf86f62d7</saml:NameID>
  <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
    <saml:SubjectConfirmationData NotOnOrAfter="2014-07-18T06:21:48Z" Recipient="http://sp.example.com/demo1/index.php?acs" InResponseTo="ONELOGIN_4fee3b046395c4e751011e97f8900b5273d56685"/>
  </saml:SubjectConfirmation>
</saml:Subject>
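If the drift cannot be fixed immediately and the SP side happens to be using the Spring Security SAML extension (an assumption, not something stated in the question), the profile consumer accepts a configurable clock-skew allowance that can mask small differences while NTP is being sorted out. A minimal sketch:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.saml.websso.WebSSOProfileConsumer;
import org.springframework.security.saml.websso.WebSSOProfileConsumerImpl;

@Configuration
public class SamlSkewConfig {

    @Bean
    public WebSSOProfileConsumer webSSOprofileConsumer() {
        WebSSOProfileConsumerImpl consumer = new WebSSOProfileConsumerImpl();
        // Tolerate up to 120 seconds of difference between IdP and SP clocks
        // when validating NotBefore / NotOnOrAfter conditions.
        consumer.setResponseSkew(120);
        return consumer;
    }
}

Treat this as a stop-gap; the real fix is still NTP synchronization on both sides.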

Related

Error in backend of REST API: "INFO: The connection was broken. It was probably closed by the client. Reason: Closed"

So I am trying out a simple full-stack project of my own that involves a Java backend implementation of a REST API, for which I am using the Restlet framework (org.restlet) and Jetty as the server.
While I was testing my API using Postman I noticed something weird: every time I started the server, only the first POST/PUT/DELETE HTTP request would get an answer, while the next ones would not receive one, and this error message would appear on the console:
/* Timestamp-not-important */ org.restlet.engine.adapter.ServerAdapter commit
INFO: The connection was broken. It was probably closed by the client.
Reason: Closed
The GET HTTP requests, however, do not share that problem.
I said "Fair enough, it's probably Postman's fault"... after all, the requests made it to the server and their effects were applied. However, now that I am building the front end this problem blocks the server's response: instead of a JSON object I get an undefined (edit: actually I get 204 No Content) on the front end, and the same "INFO" message on the back end for every POST/PUT/DELETE after the first one.
I have no idea what it is or what I am doing wrong. It has to be the backend's problem, right? But what should I look for?
Never mind, it was the stupidest thing ever. I tried to be "smart" about returning the same Representation object (with only a 'success' JSON field) on multiple occasions by keeping one instance in a static final field of a class. It turns out a new instance must be returned each time.
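To illustrate the fix, here is a hedged sketch of what the corrected resource might look like; the class and method names are made up for the example, and the key point is simply that each call builds a fresh Representation instead of reusing a shared static one:

import org.restlet.data.MediaType;
import org.restlet.representation.Representation;
import org.restlet.representation.StringRepresentation;
import org.restlet.resource.Post;
import org.restlet.resource.ServerResource;

public class DemoResource extends ServerResource {

    // The problematic pattern: one shared Representation for every response.
    // A Representation is effectively a one-shot entity tied to a single response,
    // so reusing it leaves later responses without a usable body.
    // private static final Representation SUCCESS =
    //         new StringRepresentation("{\"success\":true}", MediaType.APPLICATION_JSON);

    @Post
    public Representation accept(Representation entity) {
        // ... apply the side effects of the request ...

        // The fix: build a fresh Representation for every request.
        return new StringRepresentation("{\"success\":true}", MediaType.APPLICATION_JSON);
    }
}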

WebHDFS Java client not handling Kerberos Tokens correctly

I'm trying to run a long-lived WebHDFS client (I'm actually building a framework in front of HDFS), but my tokens are expiring after one day (the default Kerberos configuration here). At first I tried running a thread which would call
UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();
However, even though I see the TGT re-login around the 21-hour mark, after 24 hours my WebHDFS FileSystem is stuck on "token not found in the cache" (an error meaning that the server has already deleted my token).
Looking inside the code at https://github.com/apache/hadoop/blob/release-2.7.1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
I found the method "replaceExpiredDelegationToken", but after looking at "runWithRetry" it will only be called if GETDELEGATIONTOKEN fails (because for all other operations getRequireAuth() is false), which basically forces my client to run getDelegationToken at least once each day so that the token gets renewed.
For now I'll be checking whether the FS is a WebHDFS filesystem and then, each hour, I'll do:
if (hdfsFileSystem instanceof WebHdfsFileSystem) {
    WebHdfsFileSystem tmpFS = (WebHdfsFileSystem) hdfsFileSystem;
    tmpFS.setDelegationToken(tmpFS.getDelegationToken(null));
}
Is there a better way to force delegation token renewal? (or to have long-lived clients)
Thanks!
After two days of testing (so the Kerberos ticket would expire):
Calling
if (hdfsFileSystem instanceof WebHdfsFileSystem) {
    WebHdfsFileSystem tmpFS = (WebHdfsFileSystem) hdfsFileSystem;
    tmpFS.setDelegationToken(tmpFS.getDelegationToken(null));
}
once each hour seems to work fine. IMO this should be done at the HDFS level, but well... it will be at the framework level for us :)
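For completeness, a minimal sketch of how that hourly renewal could be scheduled; it assumes hdfsFileSystem is an already-initialized org.apache.hadoop.fs.FileSystem obtained after a Kerberos login, and the class name and error handling are illustrative only:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;

public class WebHdfsTokenRenewer {

    public static void scheduleHourlyRenewal(final FileSystem hdfsFileSystem) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                if (hdfsFileSystem instanceof WebHdfsFileSystem) {
                    WebHdfsFileSystem tmpFS = (WebHdfsFileSystem) hdfsFileSystem;
                    // Fetch a fresh delegation token and install it on the client
                    // so subsequent WebHDFS calls stop using the expired one.
                    tmpFS.setDelegationToken(tmpFS.getDelegationToken(null));
                }
            } catch (Exception e) {
                // Log and retry on the next tick rather than killing the scheduler.
                e.printStackTrace();
            }
        }, 1, 1, TimeUnit.HOURS);
    }
}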

Session is getting overwritten in Java

I am facing a strange issue:
I have a page with an email field in it. When I submit the page, control goes to a servlet where I save the email value in the session using
request.getSession().setAttribute("email_Value", request.getParameter("email_Value"));
Now, based on this email value I look up the database and extract the information for this user. If the information is found, I remove the session attribute with
request.getSession().removeAttribute("email_Value");
If not, I redirect the request to the same page with an error message and the prefilled email value, which I extract from the session using
if (null != request.getSession().getAttribute("email_Value")) {
    String email = (String) request.getSession().getAttribute("email_Value");
    request.getSession().removeAttribute("email_Value");
}
It works fine in our development and UAT environments, but the problem occurs only in PROD, where we have a load balancer.
The issue is that when coming back to the same page, the email address field is changed to some different email value which I have not even entered on my machine, i.e. it is accessing someone else's session.
Could someone provide any pointers to resolve this issue? As this is a production issue, any help would be appreciated.
Thanks
It looks like you need to use sticky sessions. This must be configured in the Apache load balancer.
HTTP is a stateless protocol, meaning the server does not know how to identify a client over a period of time.
When a client makes a call to the server (load balanced, say server_1 and server_2), it could reach either server_1 or server_2. Assume the request reaches server_1; your code creates a session and adds the email to it.
When the same client makes another call to the server, this time it hits server_2. The email which is in server_1's session is not available to server_2, and server_2 might have an email from another session; that is why you are seeing another email address.
Hope it's clear.
Solution:
URL Rewriting
Cookies
If your application is deployed on multiple servers, chances are that your sessions may get transferred between servers. Also, in such scenarios, if you are storing any objects in sessions, they HAVE TO implement the Serializable interface. If they don't, the data will not be persisted when the session gets migrated.
Also, it seems that the session gets interchanged with another one. Are you storing anything at the application level?
I would also advise you to look into HttpSessionActivationListener for your case.
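As a hedged illustration of that last point, this is what a session attribute might look like when it is both Serializable (so it survives replication/migration between nodes) and activation-aware; the class and field names are hypothetical and only mirror the email example from the question:

import java.io.Serializable;

import javax.servlet.http.HttpSessionActivationListener;
import javax.servlet.http.HttpSessionEvent;

public class EmailFormState implements Serializable, HttpSessionActivationListener {

    private static final long serialVersionUID = 1L;

    private String emailValue;

    public String getEmailValue() { return emailValue; }
    public void setEmailValue(String emailValue) { this.emailValue = emailValue; }

    @Override
    public void sessionWillPassivate(HttpSessionEvent se) {
        // Called just before the container serializes the session, e.g. when it
        // is about to be migrated to another node or written to disk.
    }

    @Override
    public void sessionDidActivate(HttpSessionEvent se) {
        // Called after the session has been deserialized on the receiving node.
    }
}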

Scribe - multiple callback simultaneously

I am making a module for a server software that adds support for Facebook.
The problem is with the callback URL. One client starts the authorization process, then another client starts the process at the same time, or before the first user finishes. How can I check which user finished first?
I need a way to check which client's callback I'm getting. One solution would be to lock others out from registering until the first one has finished, but I don't want to do that. Is there another way? I have thought about including ?client=clientid at the end of the callback, but I heard Facebook only allows the exact URL specified in the app settings on Facebook.
UPDATE
It didn't work to add client=clientid to the callback. Any other ideas?
After some more searching I found that Facebook allows a state parameter (thanks to jacob: https://stackoverflow.com/a/6470835/1104307).
So I just did ?state=clientId.
For anyone using Scribe, the code is this:
service.getAuthorizationUrl(null) + "&state=" + clientId;
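On the callback side, a minimal servlet sketch of how the state value can be read back to tell which client just finished; all names here are illustrative and not part of the original answer:

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FacebookCallbackServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Facebook echoes back whatever was sent in the state parameter,
        // so this identifies which client's authorization just completed.
        String clientId = req.getParameter("state");
        String code = req.getParameter("code");
        // ... exchange `code` for an access token on behalf of this client ...
    }
}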
I think there is no problem with adding a GET parameter like client=clientID. Facebook will redirect you to the URL you have specified, and using the request parameters you can check who completed the request. The problem exists only if you have specified the URL as http://yoursite.com and redirect to http://some-sub-domain.yoursite.com or an entirely different location.
If you are using the server-side flow, then the OAuth 2 flow will be:
redirect the user to Facebook
Facebook then redirects the user to your specified callback
your server uses something like curl to get the access token
your server does some more curl calls to get more user data or update the user's data
My recommendation would be to set a session cookie in step 1 and simultaneously store this session ID on your server (see the sketch below). The session cookie will then automatically be sent to the callback URL in step 2, and you can identify the session in the database this way.
This will work for all service providers (Google, Twitter, LinkedIn, etc.) and is the preferred way of maintaining session continuity.
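A small sketch of that session-cookie approach, assuming a plain servlet front end; the helper class and the attribute name are hypothetical:

import java.io.IOException;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class OAuthSessionHelper {

    // Step 1: remember who is starting the flow, then send them to the provider.
    public static void startAuthorization(HttpServletRequest req, HttpServletResponse resp,
                                          String authorizationUrl, String clientId) throws IOException {
        req.getSession(true).setAttribute("oauth.client.id", clientId);
        resp.sendRedirect(authorizationUrl);
    }

    // Step 2: the session cookie comes back with the callback request,
    // so the same session (and therefore the same client) is available here.
    public static String clientForCallback(HttpServletRequest req) {
        HttpSession session = req.getSession(false);
        Object id = (session != null) ? session.getAttribute("oauth.client.id") : null;
        return (id != null) ? id.toString() : null;
    }
}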

Session management between thick client and server?

My application is an Eclipse Rich Client, and I would like to add authentication and authorization features to it. My users and roles are stored in a database, and my application also has a web-based admin console which lets me manage users and roles. I am leveraging Spring Security in this admin console.
So here's my requirement:
I would like my thick client to provide users with a login dialog box. The authentication would need to be performed on the server side (it could be a web service), and the roles have to flow back to the thick client. I would also like to manage sessions on the server side, somehow.
I really can't think of any easy way of doing this. I know that if I were to use Spring Rich Client, it would integrate pretty well with Spring Security on the server side.
But that is not an option for me at this point.
Please share your thoughts on how to achieve this. I appreciate your help.
Since you're leaning toward web services (it sounds like you are), I'd think about taking the user information from your rich client (I assume user ID and password), using WS-Security to send the encrypted info to a web service, and having the web service do the authentication. I'd also think about having the web service return any info about the user that you want to go back to the rich client (first/last name, etc.).
I developed a similar application recently using challenge-response authentication. Basically you have three methods in your web service or on your server:
getChallenge(username) : challenge
getSession(username, response) : key
getData(username, action?) : data
getChallenge returns a value (some random value or a timestamp, for instance) that the client hashes with his/her password and sends back to getSession. The server stores the username and the challenge, in a map for instance.
In getSession the server calculates the same hash and compares it against the response from the client. If it matches, a session key is generated, stored, and sent to the client encrypted with the user's password. Now every call to getData can encrypt the data with the session key, and since the client has already been validated in getSession, they don't have to "log in" again.
The good thing about this is that the password is never sent in plain text, and since the password is hashed with a random value, the call to getSession is hard for an eavesdropper to fake (by replaying a call, for instance). Since the key from getSession is sent encrypted with the user's password, a perpetrator would have to know the password to decipher it. And lastly, you only have to validate a user once, since the call to getData enciphers the data with the user's session key and doesn't have to "care" anymore.
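A minimal server-side sketch of this flow, under the assumption of in-memory maps and a bare SHA-256 hash; a real implementation would use a proper password hash/HMAC and would encrypt the returned key as described above. All names are illustrative:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class ChallengeResponseAuth {

    private final Map<String, String> pendingChallenges = new ConcurrentHashMap<>();
    private final Map<String, String> sessions = new ConcurrentHashMap<>();

    // getChallenge(username): hand out a random value and remember it for this user.
    public String getChallenge(String username) {
        byte[] nonce = new byte[16];
        new SecureRandom().nextBytes(nonce);
        String challenge = Base64.getEncoder().encodeToString(nonce);
        pendingChallenges.put(username, challenge);
        return challenge;
    }

    // getSession(username, response): recompute hash(challenge + password) and compare.
    // The returned key would be sent back encrypted with the user's password.
    public String getSession(String username, String response, String storedPassword) throws Exception {
        String challenge = pendingChallenges.remove(username);
        if (challenge == null) {
            return null; // no outstanding challenge for this user
        }
        String expected = sha256(challenge + storedPassword);
        if (!MessageDigest.isEqual(expected.getBytes(StandardCharsets.UTF_8),
                                   response.getBytes(StandardCharsets.UTF_8))) {
            return null; // response did not match
        }
        String sessionKey = UUID.randomUUID().toString();
        sessions.put(username, sessionKey);
        return sessionKey;
    }

    private static String sha256(String input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return Base64.getEncoder().encodeToString(md.digest(input.getBytes(StandardCharsets.UTF_8)));
    }
}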
I have a similar requirement, I think. In our case:
the user provides a username and password at login
we check this against a USER table (the password is not stored in plain text, btw)
if valid, we want a session to last, say, 20 minutes; we don't want to check the username and password every time the thick client does a retrieve-data or store-data call (we could do that, and in fact it wouldn't be the end of the world, but it's an extra DB operation that's unnecessary)
In our case we have many privileges to consider, not just a boolean "has or has not got access". What I am thinking of doing is generating a globally unique session token/key (e.g. a java.util.UUID) that the thick client retains in a local ThickClientSession object of some sort.
Every time the thick client initiates an operation, e.g. calls getLatestDataFromServer(), this session key gets passed to the server.
The app server (e.g. a Java webapp running under Tomcat) is essentially stateless, except for the record of this session key. If I log in at 10am, then the app server records the session key as being valid until 10:20am. If I request data at 10:05am, the session key validity extends to 10:25am. The various privilege levels accompanying the session are held in state as well. This could be done via a simple Map collection keyed on the UUID.
As for how to make these calls: I recommend Spring HTTP Invoker. It's great. You don't need a full-blown Spring Rich Client infrastructure; it can be very readily integrated into any Java client technology (I'm using Swing to do so, for example). This can be combined with SSL for security purposes.
Anyway that's roughly how I plan to tackle it. Hope this is of some use!
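To make that concrete, here is a hedged sketch of the UUID-keyed map with a sliding 20-minute expiry described above; the class name, the Set-of-strings privilege model, and the method names are all illustrative, not the answerer's actual code:

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class SessionRegistry {

    private static final Duration TIMEOUT = Duration.ofMinutes(20);

    private static final class Entry {
        final Set<String> privileges;
        volatile Instant expiresAt;
        Entry(Set<String> privileges, Instant expiresAt) {
            this.privileges = privileges;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<UUID, Entry> sessions = new ConcurrentHashMap<>();

    // Called after a successful login: issue a token the thick client keeps locally.
    public UUID login(Set<String> privileges) {
        UUID token = UUID.randomUUID();
        sessions.put(token, new Entry(privileges, Instant.now().plus(TIMEOUT)));
        return token;
    }

    // Called on every server operation: validate the token and slide the expiry forward.
    public Set<String> touch(UUID token) {
        Entry entry = sessions.get(token);
        if (entry == null || entry.expiresAt.isBefore(Instant.now())) {
            sessions.remove(token);
            return null; // expired or unknown; the client must log in again
        }
        entry.expiresAt = Instant.now().plus(TIMEOUT);
        return entry.privileges;
    }
}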
Perhaps this will help you out:
http://prajapatinilesh.wordpress.com/2009/01/14/manually-set-php-session-timeout-php-session/
Notice especially this (for forcing garbage collection):
ini_set('session.gc_maxlifetime', 30);
ini_set('session.gc_probability', 1);
ini_set('session.gc_divisor', 1);
There is also another variable called session.cookie_lifetime which you may have to alter as well.
IIRC, there are at least 2, possibly more, variables that you have to set. I can't remember for the life of me what they were, but I do remember there was more than 1.
