I have a Spring Boot application exposing several REST API endpoints. I want to enable Google OAuth authentication (authorization code grant) on it. I am wondering which of the following options is the correct way to do this:
Have a separate application as the OAuth 2 client (i.e. with the spring-boot-starter-oauth2-client dependency) and make the existing app a resource server (i.e. with the spring-boot-starter-oauth2-resource-server dependency).
a. This Udemy course keeps the two applications separate: a resource server and an OAuth 2 client. It then seems to need a proxy REST endpoint in the OAuth 2 client project for every REST endpoint in the resource server. The endpoint in the OAuth 2 client retrieves an access token and adds it to every request it forwards to the corresponding REST endpoint in the resource server.
b. This Stack Overflow thread talks about making the same application both an OAuth 2 client and a resource server.
Make the existing app an OAuth 2 client (that is, include the spring-boot-starter-oauth2-client dependency) and simply require the user to be authenticated to access the REST endpoint URLs.
I have the following doubts:
Q1. Should a REST API always be exposed as a resource server? And if yes, is approach 2 a not-so-recommended way (since it does not expose the existing REST API as a resource server, but as part of an OAuth client with restricted access to those APIs)?
Q2. If approach 2 is not fine, which of approaches (1.a) and (1.b) is preferred, or when should one be preferred over the other? (I believe (1.a) is more suitable when we want a single OAuth client as the point of access for several different resource servers.)
In the OAuth2 world, a REST API is a resource-server by definition.
In your scenario, Google currently is the authorization-server. You could hide it behind a Keycloak instance or something else capable of user identity federation if you need to extend to other identity providers (GitHub, Facebook, etc.) or want to define roles, but as Google serves JWT access-tokens, you can use it for your resource-server security (if Google IDs are enough for your security rules).
In both cases (Google directly or with an OIDC authorization-server in the middle), you can find sample configuration here (or there if you prefer to stick to spring-boot-starter-oauth2-resource-server, but it requires more Java configuration, as you can see in the tutorials).
I personally don't like to merge client(s), resource-server(s) and authorization-server(s). My clients are generally mobile and web with client-side rendering (Angular), but even for Spring clients I'd keep them separate.
There is a special case, though: when a resource-server delegates some of its processing to another, then, by definition, it is a client too. In that case, the security requirements and mechanisms can be quite different:
is your API authenticating in its own name (using the client credentials flow)? In that case, you might use spring-boot-starter-oauth2-client to negotiate the access-token to be used when issuing requests to the other service (see the sketch after this list).
is your API issuing requests in the name of the authenticated user, and does the other service know about the authorization-server which issued the user authentication? In that case, you can forward the token you received
is the service you are consuming not OAuth2 at all (it just requires a basic auth header, for instance)?
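For the first case (client_credentials), here is a minimal sketch of what this could look like with spring-boot-starter-oauth2-client, assuming a hypothetical registration id "downstream" declared under spring.security.oauth2.client.registration in the application properties, and spring-webflux on the classpath for WebClient:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.OAuth2AuthorizedClientManager;
import org.springframework.security.oauth2.client.OAuth2AuthorizedClientProvider;
import org.springframework.security.oauth2.client.OAuth2AuthorizedClientProviderBuilder;
import org.springframework.security.oauth2.client.registration.ClientRegistrationRepository;
import org.springframework.security.oauth2.client.web.DefaultOAuth2AuthorizedClientManager;
import org.springframework.security.oauth2.client.web.OAuth2AuthorizedClientRepository;
import org.springframework.security.oauth2.client.web.reactive.function.client.ServletOAuth2AuthorizedClientExchangeFilterFunction;
import org.springframework.web.reactive.function.client.WebClient;

@Configuration
public class DownstreamClientConfig {

    // A manager that knows how to obtain tokens with the client_credentials grant.
    @Bean
    OAuth2AuthorizedClientManager authorizedClientManager(ClientRegistrationRepository registrations,
                                                          OAuth2AuthorizedClientRepository authorizedClients) {
        OAuth2AuthorizedClientProvider provider = OAuth2AuthorizedClientProviderBuilder.builder()
                .clientCredentials()
                .build();
        DefaultOAuth2AuthorizedClientManager manager =
                new DefaultOAuth2AuthorizedClientManager(registrations, authorizedClients);
        manager.setAuthorizedClientProvider(provider);
        return manager;
    }

    // A WebClient that attaches the access-token negotiated for the "downstream" registration
    // to every outgoing request towards the other resource-server.
    @Bean
    WebClient downstreamWebClient(OAuth2AuthorizedClientManager authorizedClientManager) {
        ServletOAuth2AuthorizedClientExchangeFilterFunction oauth2 =
                new ServletOAuth2AuthorizedClientExchangeFilterFunction(authorizedClientManager);
        oauth2.setDefaultClientRegistrationId("downstream");
        return WebClient.builder()
                .apply(oauth2.oauth2Configuration())
                .build();
    }
}

Any component calling the other resource-server can then inject downstreamWebClient; the exchange filter function takes care of obtaining and renewing the client_credentials token.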
I started playing with Keycloak, but I have a question. While reading articles, I always found examples where a client (let's say Angular) logs in on Keycloak, gets a bearer token and then sends it to the Spring Boot application. The backend then validates that the bearer token is valid and, if so, allows access to the desired endpoint.
But that's not enough in my opinion. I don't just need to log in, I need the entire functionality - let's say I have a backend application and I need a user. In a basic todo application, how do I know for which backend user I am actually accessing an endpoint?
Straight question: how can I bind my own backend user (stored in the backend's DB) to the one from Keycloak?
What is the best way to do it? The only thing that I found online and in the Keycloak documentation is that I could move the login logic from the client (Angular) to the backend (Spring Boot). Is this the way to go?
Imagine I'm creating my own /login endpoint on the backend, from which I would call the Keycloak server (Keycloak REST client?) and then pass the bearer token back to the client myself.
Please help me with an explanation of whether I'm right or wrong and what the best practice is, and maybe point me to an online example, because the ones I found are too simplistic.
OpenID tokens are rich
Keycloak is an OpenID provider and emits JWTs. You already have the standard OpenID info about user identity in the token (matching the requested scopes), plus some Keycloak-specific stuff like roles, plus whatever you add with "mappers".
All the data required for User Authentication (identity) and Authorization (access-control) should be embedded in access-tokens.
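As an illustration, here is a minimal sketch (one possible way, not the only one) of turning the Keycloak realm roles carried in the token's realm_access.roles claim into Spring Security authorities on a resource-server:

import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.oauth2.jwt.Jwt;
import org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationConverter;

public class KeycloakJwtConverters {

    // Builds a converter that turns Keycloak realm roles into ROLE_-prefixed authorities.
    public static JwtAuthenticationConverter realmRolesConverter() {
        JwtAuthenticationConverter converter = new JwtAuthenticationConverter();
        converter.setJwtGrantedAuthoritiesConverter(KeycloakJwtConverters::realmRoles);
        return converter;
    }

    private static Collection<GrantedAuthority> realmRoles(Jwt jwt) {
        Map<String, Object> realmAccess = jwt.getClaimAsMap("realm_access"); // Keycloak-specific claim
        if (realmAccess == null) {
            return List.of();
        }
        @SuppressWarnings("unchecked")
        Collection<String> roles = (Collection<String>) realmAccess.getOrDefault("roles", List.of());
        return roles.stream()
                .<GrantedAuthority>map(role -> new SimpleGrantedAuthority("ROLE_" + role))
                .collect(Collectors.toList());
    }
}

The converter would then be registered on the resource-server with http.oauth2ResourceServer(rs -> rs.jwt(jwt -> jwt.jwtAuthenticationConverter(KeycloakJwtConverters.realmRolesConverter()))).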
How to bind user data between Keycloak and your backend
In my opinion, the best option is to leave user management to Keycloak (do not duplicate what Keycloak already provides). An exception is if you already have a large user database; in that case, read the docs or blog posts on binding Keycloak to this DB instead of letting it use its own.
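If you still need your own user table for data that does not belong in Keycloak, the binding itself is simple: key your record on the token's stable subject claim. A minimal sketch, where the in-memory map is a hypothetical stand-in for whatever persistence you actually use:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.security.oauth2.jwt.Jwt;

// Binds the backend's own user record to the Keycloak identity, using the stable "sub" claim
// as the key. The map below stands in for a real repository (JPA, JDBC, ...).
public class UserBinding {

    record LocalUser(String keycloakId, String email) {}

    private final Map<String, LocalUser> usersByKeycloakId = new ConcurrentHashMap<>();

    // Typically called with the Jwt injected via @AuthenticationPrincipal in a controller.
    public LocalUser resolve(Jwt jwt) {
        String keycloakId = jwt.getSubject();              // Keycloak user id ("sub" claim)
        String email = jwt.getClaimAsString("email");      // standard OIDC claim
        return usersByKeycloakId.computeIfAbsent(keycloakId, id -> new LocalUser(id, email));
    }
}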
Spring clients and resource-servers configuration
I have detailed that for Spring Boot 3 in this other answer: Use Keycloak Spring Adapter with Spring Boot 3
In addition to explaining configuration with Spring Boot client and resource-server starters, it links to alternate Spring Boot starters which are probably easier to use and more portable (while building on top of spring-boot-starter-oauth2-resource-server).
I also have a set of tutorials there, ranging from the most basic RBAC to advanced access-control involving the accessed resource itself as well as standard and private OpenID claims from the token (user details).
Tokens' private claims
For performance reasons, it is a waste to query a DB (or call a web-service) when evaluating access-control rules after decoding a JWT: this happens for each request.
It is much more efficient to put this data in the tokens as private claims: this happens only once for each access-token issuance.
Keycloak provides quite a few "mappers" you can configure to enrich tokens and also allows you to write your own. There is a sample project with a custom Keycloak mapper here. It is a multi-module Maven project composed of:
a custom "mapper" responsible for adding a private claim to the tokens
a web-service which exposes the data used to set the value of this claim
a resource-server reading this private claim to take access-control decisions (see the sketch below)
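As a minimal sketch of that last module, here is what reading a private claim for an access-control decision could look like, assuming a hypothetical "proxies" claim added by the custom mapper and method security enabled with @EnableMethodSecurity:

import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationToken;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ProxiedResourceController {

    // Access is granted only if the (hypothetical) "proxies" private claim is present in the token.
    @GetMapping("/proxied")
    @PreAuthorize("authentication.tokenAttributes['proxies'] != null")
    public String proxied(JwtAuthenticationToken auth) {
        // Standard Keycloak/OIDC claim, used here only for the response message.
        return "Granted to " + auth.getTokenAttributes().get("preferred_username");
    }
}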
The simplest way to do it is to consider that the job of storing users is delegated to your Keycloak server. But you can also implement some roles and checks manually, with an in-memory store or any database of your preference.
I invite you to follow some documentation about OAuth 2 and Keycloak, to make requests to get a token valid for a time period and to make other requests inside that time period to get data. You can use curl to make requests, or web/desktop tools like Postman.
Be careful: a lot of the Keycloak adapters have been deprecated for some months now.
I would echo BendaThierry's comments. Look into OAuth2 and Keycloak. The Bearer token you receive from Keycloak will have user information in it (typically in the Claims). This way you can have user preferences or features in your backend without needing to manage the authorization and authentication that Keycloak does.
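For example, on a Spring resource-server the claims of the validated token are directly available in controllers; a minimal sketch with a hypothetical /preferences endpoint:

import org.springframework.security.core.annotation.AuthenticationPrincipal;
import org.springframework.security.oauth2.jwt.Jwt;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PreferencesController {

    @GetMapping("/preferences")
    public String preferences(@AuthenticationPrincipal Jwt jwt) {
        String userId = jwt.getSubject();                               // Keycloak "sub" claim
        String username = jwt.getClaimAsString("preferred_username");   // standard Keycloak/OIDC claim
        return "Preferences for " + username + " (" + userId + ")";
    }
}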
There are lots of great resources, including Spring's website tutorials (like https://spring.io/guides/tutorials/spring-boot-oauth2/) and Baeldung (https://www.baeldung.com/).
I'm using Keycloak as an auth server; my client app is a Spring Boot one with the Keycloak client adapter dependency.
One challenge I have not yet tackled is the insertion of specific scopes into the request before an authorization request executes (towards the auth server - the Keycloak auth endpoint).
Right now I've tested my endpoints (using access-tokens with capabilities limited by scopes) via curl and/or Postman and they behave as expected, so I know they work. But I don't know when/how I can append them as a "scope" parameter on that request when using Spring Boot (mainly because that's all plumbing code that runs under the hood in Spring Boot).
I assume I would need to use some kind of interceptor/filter that gives me access to that request object just before "executing", but I haven't been able to find a concrete example.
Any suggestion/guidance or pointing towards relevant documentation would be greatly appreciated.
thanks in advance.
UPDATE:
Since my last post, I've tested several combinations to achieve this, and sadly none have worked. It is quite amazing that something as basic to OAuth2 as injecting scopes on the authorization request isn't easily supported out of the box by the Spring Boot Keycloak adapter. I've tried the following approaches:
1 - Using a custom implementation of Spring's "ClientHttpRequestInterceptor". This doesn't help because the interceptor provides access to the front-channel requests (requests reaching the app through the front controller); it doesn't provide access to the back-channel requests (of which the auth request is part).
2 - "ClientHttpRequestInterceptor" is usually also used in conjunction with a custom "RestTemplate". The problem here is that this would only work for requests executed through that RestTemplate instance, which is not what happens with the back-channel requests issued by the Spring adapter.
3 - Using configuration objects based on Spring Security. Spring Security offers useful configuration components that could help here (@Configuration, @EnableWebSecurity, @ComponentScan(basePackageClasses = KeycloakSecurityComponents.class) or @KeycloakConfiguration). This global type of configuration basically mixes your Spring Boot Keycloak adapter app with Spring Security code, and while this works fine for most cases, if you happen to use/need "policies" (via the "keycloak.policy-enforcer-config" type of configs), then your policies will stop working and a whole set of new issues will arise.
FROM: https://oauth.net/2/scope/
OAuth Scopes
tools.ietf.org/html/rfc6749#section-3.3
Scope is a mechanism in OAuth 2.0 to limit an application's access to a user's account. An application can request one or more scopes, this information is then presented to the user in the consent screen, and the access token issued to the application will be limited to the scopes granted.
The OAuth spec allows the authorization server or user to modify the scopes granted to the application compared to what is requested, although there are not many examples of services doing this in practice.
OAuth does not define any particular values for scopes, since it is highly dependent on the service's internal architecture and needs.
It is clearly stated that scopes can be requested by the application to receive tokens with limited access, yet online documentation on how to achieve this with the Keycloak adapters (and particularly Spring Boot) is almost (completely?) non-existent.
Could some Red Hat Keycloak expert offer a suggestion?
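For comparison, outside the (deprecated) Keycloak adapter, the plain spring-boot-starter-oauth2-client carries the requested scopes on the ClientRegistration, and they are then sent as the scope parameter of the authorization request. A minimal sketch, with hypothetical realm, client and scope values:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.registration.ClientRegistration;
import org.springframework.security.oauth2.client.registration.ClientRegistrationRepository;
import org.springframework.security.oauth2.client.registration.ClientRegistrations;
import org.springframework.security.oauth2.client.registration.InMemoryClientRegistrationRepository;

@Configuration
public class KeycloakClientConfig {

    @Bean
    ClientRegistrationRepository clientRegistrations() {
        // Endpoints are resolved from the realm's OIDC discovery document at startup.
        ClientRegistration keycloak = ClientRegistrations
                .fromIssuerLocation("https://keycloak.example.com/realms/my-realm") // hypothetical realm
                .registrationId("keycloak")
                .clientId("my-client")                                              // hypothetical client
                .clientSecret("my-secret")
                .scope("openid", "profile", "my-custom-scope")                      // requested scopes
                .build();
        return new InMemoryClientRegistrationRepository(keycloak);
    }
}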
For my recent project, I have created a separate resource server using Spring Boot. The resource server is configured so that it checks for 2-legged and 3-legged access to an API and also validates the JWT token. The resource server is an independent Spring Boot jar running in its own container.
We have created several microservices using Spring Boot which are executable jars, deployed and running independently in their own containers. The resource server protects the endpoints exposed by these microservices. For that, I have created a RestController in the resource server which exposes an endpoint that calls the microservice endpoint when a request comes in, e.g.
Microservice.java - Running at port 8080
#RequestMapping("/getUser")
public String getUserName(){
return "xyz";
}
Resource Server - Running at port 8090
ResourceServerController.java
#RequestMapping("/userInfo")
public String getUserName(){
// calling above microservice using rest template
}
There can be several endpoints across several microservices, and as we have to protect them, is it right to proxy every endpoint in the rest controller of the resource server? I am not sure whether this is the correct approach. The other approach we are thinking of is to build the resource server as a jar and deploy it as a dependency with every microservice. That way, we would not need to proxy endpoints in the RestController of the resource server.
Just wanted to know the best way to protect microservices using a separate resource server.
Sharing libraries is not a recommended option. A huge benefit of microservices is independence, and that would go for a toss if you do this.
A better option would be to see if you can provide access to the API based on scope. That way, when your authorization server issues a JWT token, it includes all the scopes applicable to the user.
Then, in your resource server(s), you can enable access to a microservice using the following annotation in Spring Boot:
#PreAuthorize("#oauth2.hasScope('read')")
Another approach is to create roles and use PreAuthorize with roles.
If the above options don't work out, the current approach that you are following based on a proxy service is perfectly fine. The only aspect you should consider is whether the JWT token validation can be moved to the respective microservices so that all your services are protected.
Again, code duplication is perfectly fine when you are implementing microservices, and if that is your main concern, don't hesitate to add the same logic in every service. With microservices, duplication is better than the wrong abstraction.
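A minimal sketch of that per-service JWT validation, assuming spring-boot-starter-oauth2-resource-server is on each microservice's classpath and spring.security.oauth2.resourceserver.jwt.issuer-uri (or jwk-set-uri) points at your authorization server:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
@EnableWebSecurity
public class MicroserviceSecurityConfig {

    // Every request must carry a valid JWT; signature and issuer are checked against
    // the authorization server configured in the application properties.
    @Bean
    SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            .oauth2ResourceServer(rs -> rs.jwt(Customizer.withDefaults()));
        return http.build();
    }
}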
A JWT is a self-contained token which can store the scope of access for REST-service users. The JWT is issued by the authorization server. In your resource servers, you should validate every request from REST-service users. If your resource server(s) run on Spring Boot, you can use the following annotation:
#PreAuthorize("hasRole('ROLE_VIEW')")
As for the part where your services call each other, there is no need to use JWT because no REST-service users are involved.
Microservices can be secured with simple basic auth or CORS, or by locating them in one network without access from an external network.
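For instance, a minimal sketch of a service-to-service call over basic auth, with hypothetical credentials and a hypothetical internal URL:

import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.web.client.RestTemplate;

public class InternalClient {

    // RestTemplate that sends an Authorization: Basic header on every internal call.
    private final RestTemplate restTemplate = new RestTemplateBuilder()
            .basicAuthentication("internal-service", "internal-secret") // hypothetical service credentials
            .build();

    public String fetchUser() {
        // Hypothetical internal endpoint, reachable only inside the private network.
        return restTemplate.getForObject("http://user-service:8080/getUser", String.class);
    }
}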
I would like to use pac4j in a Java application to create a Tomcat filter or Tomcat valve. The filter or valve must support OpenID Connect using the standard OAuth2 authorization code flow. Even better would be to build on top of j2e-pac4j to do this, but I cannot use the OAuth2 filter in j2e-pac4j. My filter or valve needs to modify the request object by adding an HTTP header. The web app behind the filter/valve requires the HTTP header, and it's third party so I can't change this.
I will use OpenID Connect so I can get the JWT, but other than using the JWT and its contents, and perhaps the OpenID Connect userinfo endpoint, I am simply doing a standard OAuth2 flow. Here's the complication: the server I will redirect to for authorization supports multiple tenants. This means that the URL I redirect to will have the tenant name in it, so instead of a URL like
https://authorization.server.com/oauth/authorize
(Note, leaving off ?response_type=code&scope=openid&client_id=myclientname&state=nonce&redirect_uri=... for the sake of clarity.) I will be required to redirect to a URL like one of:
https://authorization.server.com/tenantname/oauth/authorize
https://authorization.server.com/oauth/tenantname/authorize
https://authorization.server.com/oauth/authorize?realm=tenantname
How do you do this with pac4j, or better yet, j2e-pac4j? I've been told that pac4j will not support this scenario, where every client potentially uses a different redirect URL for authorization.
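Not a definitive answer, but one way to express a tenant-specific endpoint with pac4j's OIDC support could be to build a separate OidcClient per tenant, each pointed at that tenant's discovery document (assuming the server publishes one per tenant; the URL below is hypothetical):

import org.pac4j.oidc.client.OidcClient;
import org.pac4j.oidc.config.OidcConfiguration;

public class TenantClients {

    // One OidcClient per tenant: the authorization endpoint resolved from the tenant's
    // (hypothetical) discovery document already contains the tenant name, so the redirect
    // for the authorization code flow goes to the tenant-specific URL.
    public static OidcClient forTenant(String tenantName) {
        OidcConfiguration config = new OidcConfiguration();
        config.setClientId("myclientname");
        config.setSecret("myclientsecret");
        config.setDiscoveryURI(
                "https://authorization.server.com/" + tenantName + "/.well-known/openid-configuration");
        OidcClient client = new OidcClient(config);
        client.setName("oidc-" + tenantName);
        return client;
    }
}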
I have a RESTful Java web application which is to be deployed to a number of different environments (outside of my control) that will be using a SAML 2.0 SSO solution.
My application (which I believe is the "service provider") needs to store state generated by the user, and uses internal business logic to work out which users are allowed to view or update other users' data. For this to work, we need to know who the user is and what groups the user is part of. But how do I get this information?
Ideally my web app would be SSO-agnostic and would look for some configurable key headers in the HTTP requests to get this information, e.g. a SAML token in a request which could be parsed, or perhaps some custom headers specific to my "service provider".
Many Thanks
You are correct, your application is the Service Provider and you will have an external Identity Provider (IdP) to authenticate to.
Basically you need to issue an authentication request to the IdP (via either front-channel HTTP POST or back-channel SOAP/whatever they support) and use the authentication response from the IdP to decide whether the user is who they say they are. As a rule you should be able to get the subject principal (i.e. username) and any group memberships from the authn response; however, exactly how this works will depend on what the IdP is or isn't configured to do.
Before you do this you will need to exchange SAML metadata with the IdP (this is generally part of being registered as an SP with the IdP), which gives both parties things like the public X.509 cert for signing and validating requests.
There's a good Spring library for SAML SP support: http://docs.spring.io/autorepo/docs/spring-security-saml/1.0.x-SNAPSHOT/reference/htmlsingle
You could run a reverse proxy in front of the Java web application to handle the SSO protocol part and relay user identity information to the application in HTTP headers; for SAML 2.0 there is mod_auth_mellon: https://github.com/UNINETT/mod_auth_mellon
If this is done in Java and running in a web container (Tomcat, JBoss, etc.), then the agent could be implemented as a web authentication (servlet) filter (added in web.xml). The user would typically be derived from the SAML auth response's <saml:NameID> or from the <saml:Attribute> matching the userid attribute (uid, email, etc.). This user should be validated against the web app's identity repository (LDAP, database, etc.) and the corresponding groups computed. Instead of using arbitrary headers or a custom representation for the authenticated user, consider using Java Principals (for users and groups) in a Subject.
The filter can then run the rest of the filter chain inside a Subject.doAs(). That way, the Subject can be looked up anywhere in the downstream code using Subject.getSubject(AccessController.getContext()).
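A minimal sketch of that pattern, with the SAML validation and attribute-to-Principal mapping left out (the javax.servlet Filter API and a placeholder principal are assumed):

import java.io.IOException;
import java.security.Principal;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;
import java.util.HashSet;
import java.util.Set;
import javax.security.auth.Subject;
import javax.security.auth.x500.X500Principal;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class SamlSubjectFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Building the Subject (NameID -> user Principal, saml:Attribute values -> group Principals,
        // validation against the identity repository) is omitted here.
        Subject subject = buildSubjectFromSamlResponse(req);
        try {
            // Run the rest of the chain as the authenticated Subject.
            Subject.doAs(subject, (PrivilegedExceptionAction<Void>) () -> {
                chain.doFilter(req, res);
                return null;
            });
        } catch (PrivilegedActionException e) {
            throw new ServletException(e.getCause());
        }
    }

    private Subject buildSubjectFromSamlResponse(ServletRequest req) {
        // Placeholder principal standing in for the value extracted from <saml:NameID>.
        Principal user = new X500Principal("CN=alice");
        Set<Principal> principals = new HashSet<>();
        principals.add(user);
        return new Subject(true, principals, new HashSet<>(), new HashSet<>());
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}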