I have a RESTful service that uses an internal business service to save and fetch information from the database. The web service URI looks something like /api/entity/{id}, which the web UI application calls with the ID of the entity. The problem is that the {id} can be literally anyone's ID, that is, a record belonging to someone else that the current user should not be able to see. To solve this with Spring Security, I wrote a SpEL expression, something like:
@Service
interface EntityService {

    @PostAuthorize("returnObject.userId == principal.id")
    public ReturnObject get(long id);
}
Is the above approach the right way to solve this? (In my earlier projects the developers used to write all this security logic directly into their business methods.) As the security requirements for an entity grow more complex, say, the administrator of the user who created the record can see the object but other administrators cannot, or group administrators can see the record, and so on, this approach also gets complicated.
Besides, after digging through many of the Spring ACL library classes, I have somehow managed to configure Spring ACL and have its permission evaluator run the hasPermission check in the SpEL expression, with the entity's authorization and unique identifier loaded from the database.
@Service
interface EntityService {

    @PreAuthorize("hasPermission(#id, 'com.project.domain.ReturnObject', 'read')")
    public ReturnObject get(long id);
}
But the problem I see with this approach is: shouldn't the application also create an ACL permission (ACL_ENTRY) for the object every time a user creates a record? Is this the right way to approach the problem, or is there some other way I didn't see? I know this is not a new problem, and someone out there must have solved it already in many ways. I want to know how the problem has been solved, not in the traditional way, in the service logic or in the queries, but using frameworks such as Spring ACL or Apache Shiro.
I can't answer if Shiro uses a different approach, but as Neil commented earlier,
Spring Security does not provide any special integration to automatically create, update or delete ACLs as part of your DAO or repository operations. Instead, you will need to write code [...] for your individual domain objects
— Spring Security Reference: Domain Object-based ACLs
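For illustration, here is a minimal sketch of what that per-object code could look like with Spring Security's ACL support, assuming a configured JdbcMutableAclService bean and the ReturnObject class from the question; the EntityAclManager class and its method name are made up for the example.

import org.springframework.security.acls.domain.BasePermission;
import org.springframework.security.acls.domain.ObjectIdentityImpl;
import org.springframework.security.acls.domain.PrincipalSid;
import org.springframework.security.acls.model.MutableAcl;
import org.springframework.security.acls.model.MutableAclService;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.transaction.annotation.Transactional;

public class EntityAclManager {

    private final MutableAclService aclService; // e.g. a configured JdbcMutableAclService

    public EntityAclManager(MutableAclService aclService) {
        this.aclService = aclService;
    }

    // Call this right after persisting a new ReturnObject so its creator gets READ access.
    @Transactional
    public void grantReadToCreator(long entityId) {
        MutableAcl acl = aclService.createAcl(
                new ObjectIdentityImpl(ReturnObject.class, entityId));

        String currentUser = SecurityContextHolder.getContext()
                .getAuthentication().getName();

        // Append an ACE granting READ to the creator; this is the row that ends up in ACL_ENTRY.
        acl.insertAce(acl.getEntries().size(), BasePermission.READ,
                new PrincipalSid(currentUser), true);
        aclService.updateAcl(acl);
    }
}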
The open-source OACC security framework (disclosure: I'm a maintainer and co-author), on the other hand, has the concept of create-permissions, which let you define exactly what permissions a user gets on an object after creating it.
OACC is an API designed to help you manage and query your permissions and security topology programmatically.
It supports fine-grained permissions on your individual domain objects, lets you model coarse-grained roles, and with its concepts of domains, permission inheritance and impersonation, can handle pretty much anything in between.
Related
I am using a Spring layered architecture, performing authorization of the requests in the service classes. A service could look like this:
@Service
public class SomeService {

    public void findOne(Long id) {
        assertPrivilege("READ");
        // ...
    }
}
Now, assertPrivilege() uses the SecurityContextHolder to obtain a list of GrantedAuthority objects. By putting the authorization logic into the services, the controllers don't have to worry about it, even when calling multiple services from one controller.
The problem is that nothing else can call that method now unless it is authorized. However, there are some threads (schedulers, for example) that call exactly that method at some point without having an Authentication object available. In some cases, if it is the same thread, the SecurityContext will still return the current authentication.
Now, how do I refactor this logic so that internal threads can call the method without authorization? Is a design change needed, possibly a second wrapper class, like SomeService (no authorization) and SomeClientService (with authorization)? Another possibility would be to access the repositories directly.
It's better to use @PreAuthorize("hasRole('ROLE_VIEWER') or hasRole('ROLE_EDITOR') or #id == authentication.principal.username") instead of checks inside the method; see a tutorial, there are many of them.
Generally, your business logic should not be mixed with the security framework; that's why there are @PreAuthorize and @PostAuthorize. A developer should be able to define user access independently from the business logic; they are two different requirements.
There is no easy way to disable Spring Security for one caller (an inner call, an external app, etc.), and it's not recommended either (bugs, etc.). You can create a special role for the external app, or use Concurrency Support to perform the inner call on behalf of a user, as sketched below.
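As a rough sketch of that Concurrency Support route, assuming the SomeService from the question and an illustrative ROLE_SYSTEM authority that assertPrivilege() would accept, a scheduled job could run the call with a pre-built SecurityContext:

import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.concurrent.DelegatingSecurityContextRunnable;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.authority.AuthorityUtils;
import org.springframework.security.core.context.SecurityContext;
import org.springframework.security.core.context.SecurityContextHolder;

public class ScheduledJob {

    private final SomeService someService;

    public ScheduledJob(SomeService someService) {
        this.someService = someService;
    }

    public void runNightlyJob() {
        // Build a synthetic "system" authentication with the authority the service checks for.
        Authentication systemAuth = new UsernamePasswordAuthenticationToken(
                "system", "N/A", AuthorityUtils.createAuthorityList("ROLE_SYSTEM"));

        SecurityContext context = SecurityContextHolder.createEmptyContext();
        context.setAuthentication(systemAuth);

        // DelegatingSecurityContextRunnable copies the given context into the worker
        // thread before running the task and clears it afterwards.
        Runnable work = new DelegatingSecurityContextRunnable(
                () -> someService.findOne(1L), context);

        new Thread(work).start();
    }
}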
The proxy service approach seems to be the easiest thing to do, as you don't need to move authorization logic out of this layer. Managing authorization as an aspect would be the key to that solution. Otherwise, you can always perform these checks directly in the controllers, so every endpoint handles it according to the type of client it was designed to be used by.
I do not understand Java annotations with retention policy RUNTIME that well. What I'm trying to do is create an annotation named @Authorize and use it on methods that need user authorization in order to perform some action (the user is already authenticated at this point).
E.g. I have an order service with a getOrder() method. I want only the user who created the order to be able to access it.
public void getOrder(User user) {
    // current code does something like this
    // (the order has already been loaded elsewhere)
    if (order.getCreatedBy().equals(user)) {
        // then proceed
    }
}
I do not want to mix this logic with the business logic. Instead, I'm looking to have something like this:
@Authorize
public void getOrder(User user) {
    // business logic
}
There are several methods, but not all of them need such authorization. Could someone please explain to me how I can fit the pieces together here?
What I don't understand at this point is how an AnnotationProcessor would help me here, since it does its magic at compile time. As far as I understand, it will help me generate some code at compile time, but I have no clue how to use that generated code. I went through numerous examples of AnnotationProcessors but I'm still missing something.
These links helped me a bit to understand annotation processing so far-
http://hannesdorfmann.com/annotation-processing/annotationprocessing101
https://equaleyes.com/blog/2017/09/04/annotation-processing/
Even if I go with reflection, where should I place the reflection logic? And is it counterproductive to what I'm trying to achieve?
At this point, I'm open to other solutions as well which do not involve annotations but will help me separating out business logic with such resource specific authorization.
To implement authorization controls on methods in Java, I highly recommend Spring Security with an eXtensible Access Control Markup Language (XACML) implementation that has a Spring Security API.
Spring Security
Spring Security provides two main means to protect access to methods:
Preauthorization: this allows certain conditions/constraints to be checked before the execution of the method is allowed. Failure to verify these conditions results in a failure to call the method.
Postauthorization: this allows certain conditions/constraints to be checked after the method returns. It is used less often than the preauthorization check, but can be used to provide extra security around complex, interconnected business-tier methods, especially around constraints related to the object returned by the method.
Say, for example, that one of the access control rules is that a user must have the ROLE_ADMIN authority before being able to invoke the method getEvent(). The way to do that within the Spring Security framework would be to use the @PreAuthorize annotation as below:
public interface Sample {
    // ...
    @PreAuthorize("hasRole('ROLE_ADMIN')")
    Event getEvent();
}
In essence, Spring Security uses a runtime Aspect Oriented Programming (AOP) pointcut to execute a before advice on the method and throw an o.s.s.access.AccessDeniedException if the specified security constraints are not met.
More can be found about Spring Security's Method Level Security in section 27.3 of this documentation.
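If you prefer to keep your own @Authorize annotation instead of Spring Security's annotations, the same runtime AOP idea applies: give the annotation RUNTIME retention and intercept annotated methods with an aspect. The sketch below assumes Spring AOP with @EnableAspectJAutoProxy (or Spring Boot's auto-configuration); the package name, the User type and the ownership lookup are placeholders taken from the question, not an existing API.

package com.example.security;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.security.access.AccessDeniedException;
import org.springframework.stereotype.Component;

// RUNTIME retention is what lets the proxy see the annotation when the method is called.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Authorize {
}

@Aspect
@Component
class AuthorizeAspect {

    // Matches any method annotated with @Authorize.
    @Around("@annotation(com.example.security.Authorize)")
    public Object checkOwnership(ProceedingJoinPoint pjp) throws Throwable {
        Object[] args = pjp.getArgs();
        User caller = (User) args[0]; // assumes the first argument is the calling user

        if (!ownsTargetResource(caller, args)) {
            throw new AccessDeniedException("User may not access this resource");
        }
        return pjp.proceed(); // authorization passed, run the business logic
    }

    private boolean ownsTargetResource(User caller, Object[] args) {
        // placeholder: look up the order and compare order.getCreatedBy() with the caller
        return true;
    }
}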
eXtensible Access Control Markup Language (XACML) - a policy language for ABAC
Spring Security does a great job with its expression-based access control, but attribute-based access control (ABAC) allows finer-grained control of access and is recommended by the National Institute of Standards and Technology.
To address the limitations of Role-Based Access Control (RBAC), NIST came up with a new model called ABAC (Attribute-Based Access Control). In ABAC, you can use much more metadata / many more parameters. You can, for instance, consider:
a user's identity, role, job title, location, department, date of birth...
a resource's type, location, owner, value, department...
contextual information, e.g. the time of day or the action the user is attempting on the resource
All these are called attributes. Attributes are the foundation of ABAC, hence the name. You can assemble these attributes into policies. Policies are a bit like the secret sauce of ABAC. Policies can grant and deny access. For instance:
An employee can view a record if the employee and the record are in the same region
Deny access to reading records between 5pm and 8am.
Policies can be used to express advanced scenarios e.g.
segregation of duty
time-based constraints (see above)
relationship-based access control (see above)
delegation rules, e.g. delegate Bob access to Alice's document.
There are two main syntaxes available for writing policies:
the Abbreviated Language for Authorization (ALFA), which is based on XACML
the eXtensible Access Control Markup Language (XACML)
ABAC also comes with an architecture to define how the policies will get evaluated and enforced.
The architecture contains the following components:
the Policy Enforcement Point (PEP): the component that secures the API / application you want to protect. The PEP intercepts the flow, analyzes it, and sends an authorization request to the PDP (see below). It then receives a decision (Permit/Deny) which it enforces.
the Policy Decision Point (PDP): receives an authorization request (e.g. can Alice view record #123?) and evaluates it against the set of policies it has been configured with. It eventually reaches a decision which it sends back to the PEP. During the evaluation process, the PDP may need additional metadata, e.g. a user's job title. To that effect, it can turn to policy information points (PIPs).
the Policy Information Point (PIP): the interface between the PDP and underlying data sources, e.g. an LDAP directory, a database, or a REST service, which contain metadata about users, resources, or other subjects. You can use PIPs to retrieve information the PDP may need at runtime, e.g. a risk score, a record's location, or other attributes.
Implementations of XACML
Full disclosure - I am on the XACML Technical Committee and work for Axiomatics, a provider of dynamic authorization that implements XACML.
Axiomatics provides a Spring Security SDK for their Axiomatics Policy Server, and it provides four expressions that can be used to query the PDP as part of protecting a method invocation:
xacmlDecisionPreAuthz, called with @PreAuthorize
xacmlDecisionPostAuthz, called with @PostAuthorize
xacmlDecisionPreFilter, called with @PostFilter
xacmlDecisionPostFilter, called with @PreFilter
The exact signatures for these methods are as follows:
xacmlDecisionPreAuthz(Collection<String> attributeCats, Collection<String> attributeTypes, Collection<String> attributeIds, ArrayList<Object> attributeValues)
xacmlDecisionPostAuthz(Collection<String> attributeCats, Collection<String> attributeTypes, Collection<String> attributeIds, ArrayList<Object> attributeValues)
xacmlDecisionPreFilter(Collection<String> attributeCats, Collection<String> attributeTypes, Collection<String> attributeIds, ArrayList<Object> attributeValues)
xacmlDecisionPostFilter(Collection<String> attributeCats, Collection<String> attributeTypes, Collection<String> attributeIds, ArrayList<Object> attributeValues)
For an entire list of XACML implementations, you can check this list on Wikipedia.
I'm (trying to :) use spring-boot-starter-data-rest in my Spring Boot app to quickly serve the model through a true, full-blown, RESTful API. That works great.
Question 1 (Security):
The advantage of Spring's JpaRepository is that I don't need to code basic functions (save, findAll, etc.). Is it possible to secure these auto-implemented methods without overriding all of them (wasting what Spring provided for me)? I.e.:
public interface BookRepository extends JpaRepository<Book, Long> {

    @PreAuthorize("hasRole('ROLE_ADMIN')")
    <S extends Book> S save(S book);
}
Question 2 (Security):
How do I secure a JpaRepository to prevent updating items that the logged-in user does not own?
i.e.: User is allowed to modify only his/her own properties.
i.e.2: User is allowed to modify/delete only the Posts he/she created.
Sample code is highly welcome here.
Question 3 (DTOs):
Some time ago I had an argument with a developer friend: he insisted that DTOs MUST be returned from Spring MVC controllers, even if the DTO is a 1-to-1 copy of the model object. Then I researched, asked other people, and confirmed it: DTOs are required to divide/segregate the application layers.
How does this relate to JpaRepositories? How do I use DTOs with Spring's auto-served REST repos? Should I use DTOs at all?
Thanks for your hints/answers in advance!
Question 1: Security
Some old docs mention:
[...] you expose a pre-defined set of operations to clients that are not under your control, it's pretty much all or nothing until now. There's seemingly no way to only expose read operations while hiding state-changing operations entirely.
which implies that all methods are automatically inherited (also, as per standard java inheritance behavior).
As per the @PreAuthorize docs, you can place the annotation also on a class / interface declaration.
So you could just have one base interface extend Repository (or JpaRepository)
@NoRepositoryBean // tell Spring not to create instances of this one
@PreAuthorize("hasRole('ROLE_ADMIN')") // all methods will inherit this behavior
interface BaseRepository<T, ID extends Serializable> extends Repository<T, ID> {}
and then have all of your repositories extend BaseRepository.
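A concrete repository could then look like the sketch below (the Book entity and the selectively re-declared methods are illustrative); everything it declares is guarded by the inherited class-level @PreAuthorize.

import java.util.List;
import java.util.Optional;

public interface BookRepository extends BaseRepository<Book, Long> {

    // Repository is only a marker interface, so declare the operations you actually need;
    // all of them inherit @PreAuthorize("hasRole('ROLE_ADMIN')") from BaseRepository.
    Book save(Book book);

    Optional<Book> findById(Long id);

    List<Book> findAll();
}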
Question 2: Security
I'm going to be a little more general on this one.
In order to correctly regulate access to entities within your application and define what-can-see-what, you should always separate your project into different layers.
A good starting point would be:
layer-web (or presentation-layer): access to layer-business, no access to the db-layer. Can see DTO models but not DB models
layer-business (or business-layer): access to the db-layer but no access to the DAO
layer-db (or data-layer): convert DTO -> DB model. Persist objects and provide query results
In your case, I believe that the right thing to do, would be therefore to check the role in the layer-business, before the request even reaches the Repository class.
@Service
public interface BookService {

    @PreAuthorize("hasRole('ROLE_ADMIN')")
    ActionResult saveToDatabase(final BookDTO book);
}
or, as seen before
@Service
@PreAuthorize("hasRole('ROLE_ADMIN')")
public interface BookService {

    ActionResult saveToDatabase(final BookDTO book);
}
Also, ensuring that a user can modify only his or her own objects can be done in many ways.
Spring provides all necessary resources to do that, as this answer points out.
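For example, a minimal sketch using a SpEL check on the service method; the ownerUsername field on BookDTO is an assumption, not something from the question:

import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;

@Service
public interface BookService {

    // Only an admin or the owner of the book may save it (field name is illustrative).
    @PreAuthorize("hasRole('ROLE_ADMIN') or #book.ownerUsername == authentication.name")
    ActionResult saveToDatabase(BookDTO book);
}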
Or, if you are familiar with AOP you can implement your own logic.
E.g. (dummy code):
@Service
public interface BookService {

    // custom annotation here
    @RequireUserOwnership(allowAdmin = false)
    ActionResult saveToDatabase(final BookDTO book);
}
And the check:
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.springframework.beans.factory.annotation.Autowired;

public class EnsureUserOwnershipInterceptor implements MethodInterceptor {

    @Autowired
    private AuthenticationService authenticationService;

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        // 1. get the BookDTO argument from the invocation
        // 2. get the current user from the auth service
        // 3. ensure the owner ID and the current user ID match
        // ...
        return invocation.proceed();
    }
}
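One way such an interceptor could be wired up is with an annotation-matching pointcut; this is only a sketch reusing the RequireUserOwnership annotation and the interceptor from the dummy code above:

import org.springframework.aop.Advisor;
import org.springframework.aop.support.DefaultPointcutAdvisor;
import org.springframework.aop.support.annotation.AnnotationMatchingPointcut;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;

@Configuration
@EnableAspectJAutoProxy
public class OwnershipAopConfig {

    @Bean
    public EnsureUserOwnershipInterceptor ensureUserOwnershipInterceptor() {
        return new EnsureUserOwnershipInterceptor();
    }

    // Every bean method annotated with @RequireUserOwnership gets intercepted.
    @Bean
    public Advisor ensureUserOwnershipAdvisor(EnsureUserOwnershipInterceptor interceptor) {
        return new DefaultPointcutAdvisor(
                AnnotationMatchingPointcut.forMethodAnnotation(RequireUserOwnership.class),
                interceptor);
    }
}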
Useful resources about AOP can be found here and here.
Question 3: DTO's and DB models
Should I use DTOs at all?
Yes, yes you should. Even if your project has only a few models and you are just programming for fun (deploying only on localhost, learning, ...).
The sooner you get into the habit of separating your models, the better it is.
Also, conceptually, one is an object coming from an unknown source, the other represents a table in your database.
How does this relate to JpaRepositories?
How do I use DTOs with Spring's auto-served REST repos?
Now that's the point! You can't put DTOs into @Repository interfaces. You are forced to convert one to the other. At the same time you are also forced to verify that the conversion is valid.
You are basically ensuring that DTOs (dirty data) will not touch the database in any way, and you are placing a wall made of logical constraints between the database and the rest of the application.
I am also aware that Spring integrates well with model-conversion frameworks.
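As a trivial illustration of such a conversion, hand-rolled rather than via a mapping framework (the Book and BookDTO fields are assumptions):

// Minimal hand-written converter between the JPA entity and its DTO.
public final class BookMapper {

    private BookMapper() {
    }

    public static BookDTO toDto(Book entity) {
        BookDTO dto = new BookDTO();
        dto.setId(entity.getId());
        dto.setTitle(entity.getTitle());
        // expose only the author's id, not the whole mapped @Entity graph
        dto.setAuthorId(entity.getAuthor().getId());
        return dto;
    }

    public static Book toEntity(BookDTO dto) {
        Book entity = new Book();
        entity.setTitle(dto.getTitle());
        // validation / lookups of referenced entities would happen here
        return entity;
    }
}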
So, what are the advantages of a multi-layer / modular web-application?
Applications can grow very quickly, especially when you have many developers working on them. Some developers tend to look for the quickest solution and implement dirty tricks or change access modifiers to finish the job ASAP. You should force people to gain access to certain resources only through explicitly defined channels.
The more rules you set from the beginning, the longer the correct programming pattern will be followed. I have seen a banking application become a complete mess after less than a year. When a hotfix was required, changing some code would create two or three other bugs.
You may reach a point where the application is consuming too many OS resources. If you, let's say, have a module module-batch containing background jobs for your application, it will be way easier to extract it and move it into another application. If your module contains logic that queries the database, accesses any type of data, provides an API for the front-end, etc., you will basically be forced to export all your code into the new application. Refactoring will be a pain in the neck at that point.
Imagine you want to hire some database experts to analyze the queries your application runs. With well-defined and separated logic you can give them access only to the necessary modules instead of the whole application. The same applies to front-end freelancers, etc. I have lived through this situation as well: the company wanted database experts to fix the queries issued by the application but did not want them to have access to the whole code base. In the end, they gave up on the database optimization because it would have exposed too much sensitive information externally.
And what are the advantages of DTO / DB model separation?
DTOs will not touch the database. This gives you more security against attacks coming from the outside.
You can decide what goes to the other side. Your DTOs do not need to have all the fields of the DB model. Actually, you can even map one DAO to many DTOs, or the other way around. There is a lot of information that shouldn't reach the front-end, and with DTOs you can easily control that.
DTOs are in general lighter than @Entity models. Whereas entities are mapped (e.g. @OneToMany) to other entities, DTOs may just contain the id field of the mapped objects.
You do not want to have database objects hanging around for too long, nor being passed around by methods of your application. Many frameworks commit database transactions at the end of each method, which means any involuntary change made to a database entity may be committed to the DB.
Personally, I believe that any respectable web application should strongly separate layers, each with its own responsibility and limited visibility into other layers.
Differentiation between database models and data transfer objects is also a good pattern to follow.
In the end this is only my opinion, though; many argue that the DTO pattern is outdated and causes unnecessary code repetition, and many argue that too much separation leads to difficulty in maintaining the code. So you should always consult different sources and then apply what works best for you.
Also interesting:
SE: What is the point of using DTO (Data Transfer Objects)?
Lessons Learned: Don't Expose EF Entities to the Client Directly
Guice Tutorial – method interception (old but gold)
SO: Large Enterprise Java Application - Modularization
Microsoft Docs: Layered Application Guidelines
The 5-layer architecture
I have configured a LDAP realm in Glassfish, and authentication works just fine.
Now I am wondering how could I match the Principal.getName() return to a certain attribute of my LDAP user object. I thought it would use something such as "givenName" by default, but it returns the username used for authentication.
I don't mind making an extra trip to the LDAP server to obtain the additional information, but instead of keeping the LDAP connection attributes in my application, I'd like to inject the security realm (if such a thing is possible) and use its own connection.
So, in short, the questions are:
1) Can I map additional attributes to the Principal returned by the realm?
2) If number one is not possible, then how could I reuse the realm's information in order to connect to the LDAP server and obtain the data I need?
Thanks in advance for any help or suggestions.
The JAAS Subject often contains many principals, each one representing a different attribute.
For Java EE, one, and only one, of these Principals is selected as the one returned when you call HttpServletRequest#getUserPrincipal and similar methods. The other Principals are, as far as the Java EE API is concerned, simply lost.
You can determine which of those Principals to select by writing a JASPIC authentication module if the login happens via HTTP or SOAP.
You can preserve the entire Subject by putting it into the HTTP session from within the JASPIC authentication module. Other code can pick it up from there.
Edited: I was under the impression that the following used to work, at least with GlassFish 4.0. Unfortunately, that doesn't (any longer) seem to be the case. A workaround can be found in the comments of this issue.
Not really a solution per se; just a little detail I kept overlooking for a while, and which was quite a relief for me to finally become aware of. So, skipping the boring specifics, I realized that a CallerPrincipalCallback(Subject s, Principal p) constructor is additionally available which, when supplied with my custom Principal, causes the server to actually retain it, instead of wrapping it in or transforming it into an internal GlassFish implementation instance, as I previously thought it would. From "userspace" I was then able to access my "enriched" (more Subject- than Principal-like, to be honest) version the usual way (e.g. ExternalContext#getUserPrincipal, etc.), cast it, and enjoy the convenience of not having to derive custom Principals from generic ones in each application from now on :).
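To make that concrete, here is a bare-bones sketch of a JASPIC ServerAuthModule that hands the container a custom Principal through that constructor. The EnrichedPrincipal class and the hard-coded user are placeholders; real credential validation and the LDAP lookup would happen inside validateRequest.

import java.security.Principal;
import java.util.Map;

import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.message.AuthException;
import javax.security.auth.message.AuthStatus;
import javax.security.auth.message.MessageInfo;
import javax.security.auth.message.MessagePolicy;
import javax.security.auth.message.callback.CallerPrincipalCallback;
import javax.security.auth.message.callback.GroupPrincipalCallback;
import javax.security.auth.message.module.ServerAuthModule;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CustomPrincipalAuthModule implements ServerAuthModule {

    private CallbackHandler handler;

    @Override
    public void initialize(MessagePolicy requestPolicy, MessagePolicy responsePolicy,
                           CallbackHandler handler, Map options) throws AuthException {
        this.handler = handler;
    }

    @Override
    public AuthStatus validateRequest(MessageInfo messageInfo, Subject clientSubject,
                                      Subject serviceSubject) throws AuthException {
        try {
            // Authenticate the caller here (e.g. against the LDAP realm), then hand
            // the container YOUR Principal instance instead of a plain name.
            Principal enriched = new EnrichedPrincipal("jdoe", "John");
            handler.handle(new Callback[] {
                    new CallerPrincipalCallback(clientSubject, enriched),
                    new GroupPrincipalCallback(clientSubject, new String[] { "users" })
            });
            return AuthStatus.SUCCESS;
        } catch (Exception e) {
            throw new AuthException(e.getMessage());
        }
    }

    @Override
    public AuthStatus secureResponse(MessageInfo messageInfo, Subject serviceSubject)
            throws AuthException {
        return AuthStatus.SEND_SUCCESS;
    }

    @Override
    public void cleanSubject(MessageInfo messageInfo, Subject subject) throws AuthException {
        // nothing to clean up in this sketch
    }

    @Override
    public Class[] getSupportedMessageTypes() {
        return new Class[] { HttpServletRequest.class, HttpServletResponse.class };
    }

    // Hypothetical enriched Principal carrying an extra LDAP attribute.
    public static class EnrichedPrincipal implements Principal {

        private final String name;
        private final String givenName;

        public EnrichedPrincipal(String name, String givenName) {
            this.name = name;
            this.givenName = givenName;
        }

        @Override
        public String getName() {
            return name;
        }

        public String getGivenName() {
            return givenName;
        }
    }
}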
Well, I could not extend the Principal attribute mapping without using a custom LoginModule, so instead I opted for the solution described here: http://docs.oracle.com/cd/E19798-01/821-1751/abllk/index.html
What I do is, upon authentication, use the injected LDAP context to go back to the LDAP server and obtain the attributes I want. The downsides are obvious: two trips to the server instead of one, and the extra code to probe the attributes and tie them to the Principal (or another POJO) in some way.
I am developing a web application with Java EE 6. In order to minimize calls to the database, would it be a good idea to have these classes:
A data access class (DAO) that exposes only basic CRUD methods: getAllClients, getAllProducts, getAllOrders, delete, update.
A service class that calls the CRUD methods but adds filter methods, e.g. findClientByName, findProductByType, findProductByYear, findOrderFullyPaid/NotPaid, etc., which are built on top of the basic DAO methods.
Thank you
In my experience (albeit limited), DAO classes tend to contain all the database operations the application is allowed to perform. So in your case, the DAO would have methods such as getAllClients() and getClientByName(String name), etc.
Getting all the clients in your DAO and iterating over them until you find the one you need results in an unneeded waste of computation time and memory.
If you want to reduce the number of times your database is hit you could, maybe, implement some caching mechanism. An ORM framework such as Hibernate should be able to provide what you need, as shown here.
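For example, with JPA/Hibernate the entity can be made eligible for the second-level cache; this is only a sketch, and it also requires a cache provider such as Ehcache plus shared-cache-mode configured in persistence.xml:

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Repeated lookups of the same Client by id can then be served from the
// second-level cache instead of hitting the database.
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Client {

    @Id
    private Long id;

    private String name;

    // getters and setters omitted
}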
EDIT:
As per your comment question: no, your service will not be made redundant. What one usually does is use a Service layer to expose the DAO functionality. This, basically, keeps the DAO from being visible to the front end of your application. It also usually allows for extra methods, such as, for instance, public String getUserFormatted(String userName). This will make use of the getUserByName function offered by the DAO but provide some extra functionality.
The Service layer will also make itself useful should there be a change in specification and you now also need a web service to interface with your application. Having a service layer in between will allow the web service to query the DAO through the Service layer.
So basically, the DAO layer will still worry about the database stuff (CRUD Operations) while the service will adapt the data returned by the DAO without exposing the DAO.
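In code, that separation might look roughly like this (UserDao, getUserByName and the formatting are illustrative, using Java EE 6 annotations to match the question):

import javax.ejb.Stateless;
import javax.inject.Inject;

@Stateless
public class UserService {

    @Inject
    private UserDao userDao; // the DAO stays hidden behind the service

    // Extra behaviour layered on top of the plain DAO lookup, as described above.
    public String getUserFormatted(String userName) {
        User user = userDao.getUserByName(userName);
        return user.getLastName() + ", " + user.getFirstName();
    }
}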
It's hard to say without more information, but I think it's probably a good idea to leverage your database for more than just CRUD operations. Databases are good at searching, provided you configure them correctly, so IMHO it's a good idea to let your database handle the searching in your find methods. This means that your find methods would probably go in your DAOs...
It's good to think about and be aware of the implications of DB access on performance, but don't go overboard. Also, your approach implies that since your services are going to do the filtering, you are going to load a large amount of DB data into your application, which is a bad idea. The bottom line is that you should use your RDBMS as it is intended to be used, and worry about performance due to over-access when you can show it's a problem. I doubt you will run into that scenario.
I would say that you're better off having your DAO be more fine grained than you've specified.
I'd suggest putting findClientByName, findProductByType, findProductByYear, and findOrderFullyPaid/NotPaid on your DAO as well, in some form, because your database will most likely be better at filtering and sorting data than your in-memory code.
Imagine you have 10 years of data and you call findProductsByYear on your service class; it then calls getAllProducts and throws away 9 years of data in memory. You're far better off getting your database to return only the year you are interested in.
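As a sketch of pushing that filter down to the database with JPA (entity and field names are assumptions):

import java.util.List;

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class ProductDao {

    @PersistenceContext
    private EntityManager em;

    // The database returns only the requested year instead of the whole table.
    public List<Product> findProductsByYear(int year) {
        return em.createQuery(
                        "select p from Product p where p.year = :year", Product.class)
                 .setParameter("year", year)
                 .getResultList();
    }
}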
Yes, this is the right way to do it.
The service will own the transactions. You should write these as POJOs; that way you can expose them as SOAP or REST web services, EJBs, or anything else that you want later on.