Implementing secure native Play Framework 2.3.x (Java style) authentication - java

First of all, I am fully aware of the authentication modules that are available for Play. That said, I am unable to get even the simplest example code from, say, SecureSocial to work. With a little research it became clear that a lot of things in their example code (provided here) broke when the Play Framework updated to version 2.3.x.
With the help of the online docs and the excellent video tutorial by Philip Johnson on implementing standard (unsafe) authentication, I successfully implemented the following:
// Class which is used by the @Security.Authenticated annotation
public class Secured extends Security.Authenticator {

    @Override
    public String getUsername(Context ctx) {
        return ctx.session().get("auth");
    }

    @Override
    public Result onUnauthorized(Context ctx) {
        return redirect(routes.Application.login());
    }
}
// Controller class that serves routes
public class Application extends Controller {

    @Security.Authenticated(Secured.class)
    public static Result index() {
        return ok(index.render("Your new application is ready."));
    }

    public static Result login() {
        session().clear();
        session("auth", "a1234"); // dummy data simulating a successful login
        return redirect(routes.Application.index());
    }
}
I need to ultimately implement a safe login system to authenticate users.
My question is two-sided. Which of the following would be better: 'reinventing the wheel' (at least partly) by taking this working code base and improving it, or giving one of the authentication modules another shot?
None of us likes reinventing the wheel; that said, it seems I have a much better chance of getting something to compile successfully when I write it myself...
I am aware that for proper security in depth (a.k.a. layered security) a secure connection is also needed (HTTPS with TLS 1.2 at the time of writing). This is beyond the scope of my question.

I don't know if there's a right answer to this question. Whether to build your own framework or to try an existing framework (which might not work perfectly) is a matter for your own judgement. Personally, I'd probably use SecureSocial as a starting point but then write my own code if I couldn't get it working. It sounds like this is the approach you've already tried.
To use SecureSocial you'd probably need to check out the master branch and build from source. It might be hard to use if the examples are out of date, but then again writing your own auth code is difficult too.
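If you do end up improving your own code base, a rough sketch of what a non-dummy login action could look like in Play 2.3 Java style is below. It is only an illustration: the User model with findByEmail and passwordHash, and the BCrypt class (e.g. jBCrypt), are assumptions and not part of your code, and it needs play.data.Form / play.data.DynamicForm imports.

// Rough sketch only: User.findByEmail, the passwordHash field and the BCrypt
// class are assumptions, not part of the code above.
public static Result authenticate() {
    DynamicForm form = Form.form().bindFromRequest();
    String email = form.get("email");
    String password = form.get("password");

    User user = User.findByEmail(email); // hypothetical finder
    if (user != null && BCrypt.checkpw(password, user.passwordHash)) {
        session().clear();
        session("auth", user.email); // store an identifier, never the password
        return redirect(routes.Application.index());
    }
    // Same response for "unknown user" and "wrong password" so the login form
    // does not reveal which accounts exist.
    return redirect(routes.Application.login());
}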

Related

Java Play Controller Session Retrieval Understanding

The company I work for uses the Java Play Framework. What I find mysterious is how Play can retrieve the current session for me. As I see from the source code,
package play.mvc;

public abstract class Controller extends Results implements Status, HeaderNames {

    public static Request request() {
        return Http.Context.current().request();
    }

    public static Session session() {
        return Http.Context.current().session();
    }

    // ...
}
I find this class very strange, as all the methods are static. I do not understand how Play can get the correct session when handling concurrent requests (a multi-threaded environment?).
In that case, how does Play handle / retrieve the session information correctly? Note that I can retrieve the session at the beginning or at the end of each Action (which also affects timing). How can Play retrieve the correct session using a static method?
Or am I missing something here? It would be great if someone could tell me how Play makes such retrieval work. Thanks.
P.S. I am using quite a legacy Play version, Play 2.2.0, at my workplace, and I have not learned Scala yet.
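For reference, the mechanism behind those static methods is a thread-local: Play binds one Http.Context to the thread executing the current action, so Http.Context.current() resolves to the request that thread is handling. A simplified illustration of the pattern (not the actual Play source):

// Simplified illustration of the pattern only, not the real Play code.
public class FakeHttp {

    public abstract static class Context {

        // One Context per request-handling thread: the framework sets it before
        // the action body runs and removes it afterwards.
        public static final ThreadLocal<Context> current = new ThreadLocal<Context>();

        public static Context current() {
            Context ctx = current.get();
            if (ctx == null) {
                throw new RuntimeException("There is no HTTP Context available from here.");
            }
            return ctx;
        }

        public abstract Session session();
    }

    public interface Session {
        String get(String key);
    }
}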

Is it possible to create necessary / required interfaces?

I have a little knot in my brain about structuring our code. We have a REST backend based on Spring Boot. To handle security checks on requests we use HandlerInterceptors. In some specific cases we need a specific interceptor instead of our default one. The default one is registered in a 3rd-party lib so that no one can forget it. But I want all coders to think about this specific interceptor.
Actually, I just told them about it to achieve this.
Here's my question: is there a way to create required (or necessary) interfaces which must be implemented? This would be a way to provide our security code via a lib and to have the assurance that every coder implemented our specific interface (even if they just do nothing with it).
pseudo code:
public interface ThinkForIt {
    SecBean specificSecBean = null; // some security bean the implementer must know about
    void methodToThinkOn();
}

public class SecImpl implements ThinkForIt {
    @Override
    public void methodToThinkOn() {
        // I thought about it, but I do not need to do anything!
    }
}
If the interface ThinkForIt had an annotation like @Required, users could get a warning or error if they did not implement it...
Looking for a solution and thanks for your comments in advance!
Your overall design is questionable; you are reinventing security code, which is always a red flag. Use Spring Security instead.
However, there's a simple way to ensure that "some bean of type Foo" has been registered with the context:
@Component
@RequiredArgsConstructor
public class ContextConfigurationVerifier {
    final Foo required;
}
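The same idea without Lombok, as a sketch (Foo stands for whichever bean must not be forgotten): constructor injection makes the application context fail at startup, typically with an UnsatisfiedDependencyException, if no Foo bean was registered.

import org.springframework.stereotype.Component;

@Component
public class ContextConfigurationVerifier {

    private final Foo required;

    // Spring can only build this component if a Foo bean exists, so a missing
    // registration is caught at startup instead of at request time.
    public ContextConfigurationVerifier(Foo required) {
        this.required = required;
    }
}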

Restrict execution of a Method with Java Annotations

Do you know if there is a way to check who is calling a method and to restrict whether they are allowed to execute it, using Java annotations?
For example, suppose you have a client and a server. There are several users with different roles who log in to the client. Then the same client, with different users, wants to call a getMethod on the server.
Can I restrict who is allowed to call this method with Java annotations?
Like:
@Role(role="AllowedRole")
public ReturnType getMethod() {
    ...
}
Well, I used to achieve this with Seam/DeltaSpike in JBoss Server. It's pretty straightforward.
Basically, you have a method which you annotate with your annotation. For example, mine is @User:
public class MyClass {

    @User
    public Object getMethod() {
        // implementation
    }
}
Next, you need a class where you define how you check your annotations:
public class Restrictions {

    @Secures @User
    public boolean isOk(Identity identity) {
        return "Peter".equals(identity.getUsername());
    }
}
That's it! Of course, you need some libraries and have to define this interception stuff in certain XML files (like beans.xml), but it can easily be done with a little googling.
Start from these links:
Seam framework
Questions I asked on JBoss community when I was starting with this
This seems to be a good case for Method Security of Spring Security.
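For illustration, a rough sketch of how that looks with Spring Security's method security (the annotations are Spring Security's; the bean, method and role names are made up for the example, and how the role string maps to stored authorities depends on your configuration):

import org.springframework.context.annotation.Configuration;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.stereotype.Service;

@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true) // turns on @PreAuthorize handling
class MethodSecurityConfig {
}

@Service
class ReportService {

    // Spring Security proxies this bean and rejects callers whose
    // Authentication does not carry the required role.
    @PreAuthorize("hasRole('AllowedRole')")
    public String getMethod() {
        return "restricted data";
    }
}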
Annotations do not include code and are not processed magically. They just define metadata, so you need some kind of engine that processes the annotations and performs the access validation.
There are a lot of frameworks and tools that do this. For example, you can implement it using AspectJ or the Spring Framework, and Java EE supports similar annotations.
You can also implement this logic yourself using dynamic proxy, byte code engineering or other technique.
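To make the do-it-yourself option concrete, here is a minimal sketch of the dynamic-proxy approach (the @Role annotation, the interface and the role strings are invented for the example; a real system would take the caller's role from its authentication layer):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Hypothetical annotation, not taken from any framework.
@Retention(RetentionPolicy.RUNTIME)
@interface Role {
    String value();
}

interface OrderService {
    @Role("AllowedRole")
    String getMethod();
}

public class RoleCheckingProxy implements InvocationHandler {

    private final Object target;
    private final String callerRole; // in a real app this would come from the auth layer

    private RoleCheckingProxy(Object target, String callerRole) {
        this.target = target;
        this.callerRole = callerRole;
    }

    @SuppressWarnings("unchecked")
    public static <T> T secure(T target, Class<T> iface, String callerRole) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface },
                new RoleCheckingProxy(target, callerRole));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // The proxy is called through the interface, so the annotation on the
        // interface method is visible here.
        Role required = method.getAnnotation(Role.class);
        if (required != null && !required.value().equals(callerRole)) {
            throw new SecurityException("Caller lacks role " + required.value());
        }
        return method.invoke(target, args);
    }

    public static void main(String[] args) {
        OrderService real = () -> "restricted data";

        OrderService allowed = secure(real, OrderService.class, "AllowedRole");
        System.out.println(allowed.getMethod()); // prints the data

        OrderService denied = secure(real, OrderService.class, "OtherRole");
        denied.getMethod(); // throws SecurityException
    }
}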
In any case, please explain better what kind of application you are implementing, and we can probably give you better advice.

Writing a custom tomcat realm using bcrypt

I'm working on a Java-based web app using Tomcat 7.0 as the application server. After the helpful responses to a prior question, I've decided to use bcrypt to securely store passwords in my HSQLDB. However, Tomcat's default Realm implementations can't handle bcrypt, so I need to write my own; that's the only reason I'm writing a custom realm, though, as in all other ways a plain JDBCRealm would work. I've been googling and looking at examples, and I'm rather confused on a couple of points.
First, should I extend RealmBase, or JDBCRealm? Most examples I found use RealmBase, but I've successfully been using JDBCRealm for the app up to this point (as it's still in development I started off with storing the passwords in plaintext and just using JDBCRealm to handle authentication), and one answer to a question on Code Ranch recommended just extending that. I'm not exactly sure which methods I'd need to override in that case, though. Just the authenticate method, or something more? If I did this, would JDBCRealm still be able to handle and manage user roles, getPrincipal, and all that?
Second, in the CodeRanch example linked above, unless I'm missing something, the getPassword method seems to be returning the unencrypted password. Since I'm going to be using bcrypt, that won't be possible, and it seems kind of inadvisable anyway, I would think. In other examples, like on this blog post, getPassword seems to just return the password directly from the database. So which way is correct? I can't find what exactly getPassword is used for; the documentation doesn't say. Will it be OK to just return the encrypted value stored in the database for this?
If anybody can tell me what class I should extend, what methods I should override, and what getPassword should return, I would really appreciate it.
Well after some trial and error I figured out how to do this. I extended JDBCRealm and only overrode the authenticate method and it works perfectly. I put BCrypt.java in the same directory as my custom realm, and this code is what worked:
import java.security.Principal;

import org.apache.catalina.realm.JDBCRealm;

public class BCryptRealm extends JDBCRealm
{
    @Override
    public Principal authenticate(String username, String credentials)
    {
        String hashedPassword = getPassword(username);

        // Added this check after discovering checkpw generates a null pointer
        // error if the hashedPassword is null, which happens when the user doesn't
        // exist. I'm assuming returning null immediately would be bad practice as
        // it would let an attacker know which users do and don't exist, so I added
        // a call to hashpw. No idea if that completely solves the problem, so if
        // your application has more stringent security needs this should be
        // investigated further.
        if (hashedPassword == null)
        {
            BCrypt.hashpw("fakePassword", BCrypt.gensalt());
            return null;
        }

        if (BCrypt.checkpw(credentials, hashedPassword))
        {
            return getPrincipal(username);
        }

        return null;
    }
}
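For completeness, a sketch of the write side (the users/user_name/user_pass table and column names are placeholders, and BCrypt is the same jBCrypt class placed next to the realm above): hash the password once when the account is created or changed and store only the hash.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PasswordStore {

    // Store only the bcrypt hash; the realm above later verifies candidates
    // against it with BCrypt.checkpw(). 12 is the work factor (cost).
    public static void savePassword(Connection connection, String username,
                                    String plainPassword) throws SQLException {
        String hashed = BCrypt.hashpw(plainPassword, BCrypt.gensalt(12));
        try (PreparedStatement ps = connection.prepareStatement(
                "UPDATE users SET user_pass = ? WHERE user_name = ?")) {
            ps.setString(1, hashed);
            ps.setString(2, username);
            ps.executeUpdate();
        }
    }
}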

What's the "proper" and right way to keep Jersey Client API functions and REST (Jersey API) Server functions linked?

I was wondering how people with more experience and more complex projects get along with this "ugliness" in REST communication. Imagine the following problem:
We need a fair number of functionalities for one specific resource within our REST infrastructure; in my case that's about 50+ functions that result in different queries and different responses. I tried to think of a meaningful resource tree and assigned these to methods that will do "stuff". Afterwards, the server resource class looks like this:
#Path("/thisResource")
public class SomeResource {
#GET/POST/PUT/DELETE
#Path("meaningfulPath")
public Response resourceFunction1 ( ...lots of Params) {
... logic ....
}
//
// lots of functions ...
//
#GET/POST/PUT/DELETE
#Path("meaningfulPath")
public Response resourceFunctionN ( ...lots of Params) {
... logic ....
}
}
To construct the URLs my client will call, I made a little function to prevent typos and to make better use of constants,
so my client looks like this:
public class Client {

    public ReturnType function1() {
        client.resource = ResourceClass.build(Constants.Resource, "meaningfulPath");
        ...
        return response.getEntity(ReturnType.class);
    }
}
Now the question that bothers me is: how could I link the client function and the server function better?
The only connection between these two blocks of code is the URL that will be called by the client and mapped by the server, and if even that URL is generated somewhere else, this leads to a lot of confusion.
When one of my colleagues needs to get into this code, he has a hard time figuring out which of the 50+ client functions leads to which server function. Also, it is hard to determine whether there are obsolete functions in the code, etc. I guess most of you know the problems of unclean code better than I do.
How do you deal with this? How would you keep this code clean, maintainable and gorgeous?
Normally, this would be addressed by EJB or similar technologies.
Or at least by "real" web services, which would provide at least WSDL and schemas (with a kind of mapping to Java interfaces, or "ports").
But REST communication is very loosely typed and loosely structured.
The only thing I can think of right now is to define a project (let's call it "Definitions") which would be referenced (and hence known) by both client and server. In this project you could define a class with a lot of public static final Strings, such as:
public final class Definitions {
    public static final String SOME_METHOD_NAME = "/someMethodName";
    public static final String SOME_OTHER_METHOD_NAME = "/someOtherMethodName";
}
Note: a static final String can very well be referenced by an annotation (in that case it is considered constant by the compiler). So use the "constants" to annotate your @Path, such as:
@Path(Definitions.SOME_METHOD_NAME)
Same for the client:
ResourceClass.build(Constants.Resource, Definitions.SOME_METHOD_NAME);
You are missing the idea behind REST. What you are doing is not REST but RPC over HTTP. Generally you are not supposed to construct URLs using out-of-band knowledge. Instead, you should follow links received in the responses from the server. Read about HATEOAS:
http://en.wikipedia.org/wiki/HATEOAS
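To make that concrete, here is a rough sketch using the standard JAX-RS 2.0 client and Link support (newer than the Jersey 1.x-style client in the question; the paths, relation names and base URI are invented for the example): the server advertises where the client can go next, and the client follows the relation instead of building URLs from shared constants.

import java.net.URI;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Link;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

@Path("/orders")
public class OrdersResource {

    @GET
    public Response list(@Context UriInfo uriInfo) {
        // Advertise the next possible step as a Link header instead of making
        // clients assemble "/orders/open" from shared constants.
        URI openOrders = uriInfo.getBaseUriBuilder().path("orders").path("open").build();
        return Response.ok("...order list...")
                .links(Link.fromUri(openOrders).rel("open-orders").build())
                .build();
    }
}

class FollowLinkExample {

    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        Response response = client.target("http://localhost:8080/api")
                .path("orders").request().get();

        // The client only knows the relation name, not the URL layout.
        Link next = response.getLink("open-orders");
        Response openOrders = client.target(next).request().get();
        System.out.println(openOrders.getStatus());
        client.close();
    }
}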
