Using a managed identity with Azure Service Bus - Java

I want to use a managed identity to connect to Azure Service Bus. In the docs they mention the DefaultAzureCredentialBuilder. I don't really get how this would use my managed identity to authenticate to the Service Bus.
Does anyone know this?

DefaultAzureCredential is a chained credential; internally it considers multiple authorization sources, including managed identities. More information can be found in the Azure.Identity overview.
Service Bus can use any of the Azure.Identity credentials for authorization. DefaultAzureCredentialBuilder is demonstrated only because it allows for success in a variety of scenarios.
If you'd prefer to restrict authorization to only a managed identity, you can do so by using ManagedIdentityCredentialBuilder rather than the default credential. An example of creating the credential can be found here. It can then be passed to Service Bus in the same manner as the default credential.
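As a sketch, assuming the azure-identity and azure-messaging-servicebus dependencies; the namespace, queue name, and client id below are placeholders:

```java
import com.azure.identity.ManagedIdentityCredential;
import com.azure.identity.ManagedIdentityCredentialBuilder;
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusMessage;
import com.azure.messaging.servicebus.ServiceBusSenderClient;

public class ManagedIdentitySender {
    public static void main(String[] args) {
        // Authenticate with the managed identity only (no credential chain).
        // For a user-assigned identity, set its client id; for a
        // system-assigned identity, omit the clientId(...) call.
        ManagedIdentityCredential credential = new ManagedIdentityCredentialBuilder()
                .clientId("<user-assigned-identity-client-id>") // placeholder
                .build();

        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
                .credential("<your-namespace>.servicebus.windows.net", credential)
                .sender()
                .queueName("<your-queue>") // placeholder
                .buildClient();

        sender.sendMessage(new ServiceBusMessage("sent with a managed identity"));
        sender.close();
    }
}
```

Note that a managed identity can only acquire tokens while running on an Azure host that has the identity assigned; locally the token request will fail, which is exactly why the docs demonstrate the more forgiving DefaultAzureCredential.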

What is the preferred way to configure GCP credentials in Spring boot application

I am in the process of writing a Spring Boot based microservice, which will be deployed on GKE. To configure service account credentials I see there are multiple options available. What is the most preferred and safest option? I have tried the approaches below; kindly suggest other ways:
CredentialsProvider interface with spring.cloud.gcp.credentials.location
spring.cloud.gcp.credentials.encoded-key
GCP Secret Manager
In a cloud environment, generally the safest and best option with the least administrative overhead is to use the corresponding service from the cloud provider - in your case that would be Secret Manager. Of course, if you are planning not to tie your applications to a specific cloud provider, or to cater to on-prem environments as well, you can use third-party secret management providers like HashiCorp Vault.
However, even with Secret Manager, if you interact with the API directly you will have to provide keys to call the API somewhere, which creates another problem. The generally recommended solution is to have the application authenticate as a service account and interact with Secret Manager directly, as outlined here. Alternatively, there are ways of mounting secrets from Secret Manager on a GKE volume using the CSI driver, as explained here.
Running a secure cluster and container applications is a critical requirement, and here are further recommendations for GKE security hardening, which cover secret management as well. More specifically, you can check the recommendation in section "CIS GKE Benchmark Recommendation: 6.3.1".
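For completeness, accessing a secret from application code with the google-cloud-secretmanager client library looks roughly like this (a sketch; the project and secret ids are placeholders, and the client authenticates via the application's service account, so no key file is needed):

```java
import com.google.cloud.secretmanager.v1.AccessSecretVersionResponse;
import com.google.cloud.secretmanager.v1.SecretManagerServiceClient;
import com.google.cloud.secretmanager.v1.SecretVersionName;

public class SecretAccess {
    public static String readSecret(String projectId, String secretId) throws Exception {
        // The client picks up credentials automatically (Application Default
        // Credentials) -- on GKE this is the node service account, or the
        // bound service account when Workload Identity is configured.
        try (SecretManagerServiceClient client = SecretManagerServiceClient.create()) {
            SecretVersionName name = SecretVersionName.of(projectId, secretId, "latest");
            AccessSecretVersionResponse response = client.accessSecretVersion(name);
            return response.getPayload().getData().toStringUtf8();
        }
    }
}
```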
Although @Shailendra gives you a good solution, as you are using GKE you can store sensitive information as Kubernetes Secrets.
Both the Kubernetes and GKE documentation provide several examples of creating secrets.
You can later use the configured secrets in multiple ways; in the simplest use case, as environment variables that can be consumed by the application. Please see this article as well.
The Spring Cloud Kubernetes project provides support for consuming these secrets as property sources.
This approach allows you to test your application deployment locally, with minikube or kind, and later deploy the same artifacts to the cloud provider. In addition, it is cloud provider agnostic as you are using out-of-the-box Kubernetes artifacts.
I am afraid that we were so focused on providing you further alternatives that in the end we did not actually answer your question.
Previously, I gave you the advice of using Kubernetes Secrets, and it is still perfectly fine, but please allow me to come back to it later.
Judging by the different properties you are trying to set, you are trying to configure the credentials with which your application will interact with other services deployed in GCP.
For that purpose the first thing you need is a service account.
In a nutshell, a service account is a software identity that bundles several permissions.
This service account can later be assigned to a certain GCP resource or service, which allows that resource to act with the configured permissions when interacting with other GCP resources and services.
Every service account has an associated set of keys which identify it - the information you are trying to keep safe.
There are different types of service accounts, mainly, default service accounts, created by GCP when you enable or use some Google Cloud services - one for Compute Engine and one for App Engine - and user defined ones.
You can modify the permissions associated with these service accounts: the important thing to keep in mind is always follow the principle of least privilege, only grant the service account the necessary permissions for performing its task, nothing else.
By default, your GKE cluster will use the default Compute Engine service account and the scopes defined for it. These permissions will be inherited by your pods when contacting other services.
As a consequence, one possible option is just configuring an appropriate service account for GKE and use these permissions in your code.
You can use the default Compute Engine service account, but, as indicated in the GCP docs when describing how to harden the cluster security:
Each GKE node has an Identity and Access Management (IAM) Service Account associated with it. By default, nodes are given the Compute Engine default service account, which you can find by navigating to the IAM section of the Cloud Console. This account has broad access by default, making it useful to wide variety of applications, but it has more permissions than are required to run your Kubernetes Engine cluster. You should create and use a minimally privileged service account to run your GKE cluster instead of using the Compute Engine default service account.
So you will probably need to create a service account with the minimum permissions to run your cluster and application. The aforementioned link provides all the necessary information.
As an alternative, you can configure a different service account for your application and this is where, as a possible alternative, you can use Kubernetes Secrets.
Please:
Do not provide your own implementation of CredentialsProvider directly; I think it will not give you any additional benefit compared with the other solutions.
If you want to use the spring.cloud.gcp.credentials.location configuration property, create a Kubernetes secret and expose it as a file, and set the value of this property to that file location.
In a very similar way, using Kubernetes Secrets, and as exemplified for instance in this article, you can expose the service account credentials under the environment variable GOOGLE_APPLICATION_CREDENTIALS; both Spring GCP and the different GCP client libraries will look for this variable in order to obtain the required credentials.
I would not use the configuration property spring.cloud.gcp.credentials.encoded-key; in my opinion this approach leaves the key more exposed to threats - you will probably have to deal with VCS problems, etc.
Secret Manager... as I said, it is a suitable solution, as indicated by @Shailendra in his answer.
The options provided by Guillaume are very good as well.
The preferred way is hard to answer. It depends on your wishes...
Personally, I prefer to keep a high level of security, it's related to service account authentication and a breach can be a disaster.
Therefore, to keep the secrets secret, I prefer not having secrets. Neither in a K8S Secret nor in Secret Manager! Problem solved!!
You can achieve that with ADC (Application Default Credentials). But like that, out of the box, ADC uses the node identity. The problem here is that if you run several pods on the same node, all will have the same identity and thus the same permissions.
A cool feature is Workload Identity. It allows you to bind a service account to your K8S deployment. The ADC principle is the same, except that it binds at the pod level rather than at the node level (a proxy is created that intercepts the ADC requests).
If your application is running on GCP, the preferred way would be to use the default credentials provided by the Google GCP clients. When using the default credentials provider, the clients will use the service account that is associated with your application.
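In Java, the default credentials the answers above refer to can be obtained explicitly with the google-auth-library (a sketch; client libraries normally resolve this for you, so the explicit call is rarely needed):

```java
import com.google.auth.oauth2.GoogleCredentials;

public class AdcExample {
    public static void main(String[] args) throws Exception {
        // Resolves credentials in order: the GOOGLE_APPLICATION_CREDENTIALS
        // key file, gcloud user credentials, then the metadata server
        // (the node identity on GKE, or the bound service account when
        // Workload Identity is enabled).
        GoogleCredentials credentials = GoogleCredentials.getApplicationDefault();
        System.out.println("Loaded credentials: " + credentials.getClass().getSimpleName());
    }
}
```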

Create a custom identity provider and configure it with keycloak

I am working on a project where I need to create an application that shall act as an OIDC mediator between a client which only supports OIDC for authentication and a REST api. The REST api is able to generate tokens and give user info but does not support OIDC.
To achieve this, I am thinking of using Keycloak to handle the OIDC communication with the client and implementing my own Java application that Keycloak can trigger to realize the authorization, token and userinfo endpoints (a sort of custom, home-made identity provider) handling the communication with the REST api.
I have created a realm in Keycloak and configured the realm to use an Identity Provider Redirector with an Identity Provider I added in Keycloak (user-defined OpenID Connect v1.0). In the identity provider configuration I have set all the URLs to point to my Java application, but the initial OIDC authorization call from the client just redirects to the redirect_uri with a #error=login_required, without any of the endpoints in my Java application being triggered.
I guess there is something I have missed. I need to intercept the authorization flow so that I can pick up a query param from the authorization request that needs to be handled in my Java application. I also need to map the token from the REST api into the token request (when this request comes from the backend of the client app), and finally map the userinfo object as a response to the userinfo request.
I really hope someone have time to point me in the right direction. Thank you so much in advance.
Edit:
I have added a sequence diagram to explain it better:
I need to intercept the authorization request call to pick up a custom query param (endUserString) that identifies the user. There will be no user login form. I need the param in my code that uses this towards the REST API. Both the token and the userinfo must be received from my APP and not from keycloak itself.
The Java mediator may ask for a token in advance (A) and use this to access the REST API (using a predefined clientId and client secret). Alternatively, this token may be fetched for each method. The token must be used to retrieve customer info from the REST API (B). I want to wrap this with OIDC support without any login form. A browser will just redirect to the authorization flow with the endUserString identifying the end user. The customer info will be returned from the Java mediator to Keycloak, which responds with it in the GetUserInfoRsp.
I think there might be a simpler solution than what you envisioned: implementing your own custom authenticator for Keycloak.
Keycloak has a notion of an authentication flow, which is a tree of authenticators that are provided by Keycloak or custom made. Each authenticator can be called to try to authenticate the user.
The most common one is the Username/Password Form which displays a login page to the user and authenticates the user if the provided credentials are valid. But you could imagine any type of authenticator such as an SMS authenticator or a magic link one.
You can find the existing Keycloak's authenticators on their repo and the documentation on how to create your own here.
In your case, you would need to implement your own logic where your authenticator would get the endUserString param from the request and call the REST API to validate the user's identity. You could fetch the REST API token at initialisation or for each request. You could also modify the user stored in Keycloak with data coming from the REST API's user info endpoint (common OIDC attributes or custom attributes).
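A minimal sketch of such an authenticator, assuming the Keycloak server SPI dependencies (keycloak-server-spi, keycloak-services); the class name and the lookupOrCreateUser helper are made up, and a real deployment also needs an AuthenticatorFactory plus the SPI registration files:

```java
import org.keycloak.authentication.AuthenticationFlowContext;
import org.keycloak.authentication.AuthenticationFlowError;
import org.keycloak.authentication.Authenticator;
import org.keycloak.models.KeycloakSession;
import org.keycloak.models.RealmModel;
import org.keycloak.models.UserModel;

public class EndUserStringAuthenticator implements Authenticator {

    @Override
    public void authenticate(AuthenticationFlowContext context) {
        // Pick up the custom query parameter from the authorization request.
        String endUserString = context.getUriInfo()
                .getQueryParameters()
                .getFirst("endUserString");
        if (endUserString == null) {
            context.failure(AuthenticationFlowError.INVALID_CREDENTIALS);
            return;
        }
        // Hypothetical helper: call your REST API here to validate the user,
        // then resolve or create the corresponding Keycloak user.
        UserModel user = lookupOrCreateUser(context, endUserString);
        context.setUser(user);
        context.success();
    }

    private UserModel lookupOrCreateUser(AuthenticationFlowContext context, String id) {
        // Placeholder: query your REST API and map its user info
        // (common OIDC attributes or custom attributes) onto the user.
        return context.getSession().users().addUser(context.getRealm(), id);
    }

    @Override public void action(AuthenticationFlowContext context) { }
    @Override public boolean requiresUser() { return false; }
    @Override public boolean configuredFor(KeycloakSession s, RealmModel r, UserModel u) { return true; }
    @Override public void setRequiredActions(KeycloakSession s, RealmModel r, UserModel u) { }
    @Override public void close() { }
}
```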
Please note that the dev team announced Keycloak X, a sort of reboot of the project which will probably bring breaking changes to their APIs.
Also, please consider all the security impacts of your design: from what you provided, it seems the authentication of a user will rely only on a simple query parameter which, if it doesn't change over time for example, feels like a big security hole.

Tracking of who created or changed an entity in microservices

Usually in Spring Boot applications, we can use JPA auditing to do the tracking.
Spring Boot Jpa Auditing
In a microservices architecture, though, I'd try to avoid involving security in the core microservices. Instead, we can do authentication/authorization at the API gateway.
However, if the core service doesn't get the current login user, we have to find a way to pass the current operator to the core services. It could be a user identifier header on the request. Or maybe we can pass the token to the core services and let them fetch the login user from the auth server.
I am wondering if anyone has handled such a case and can give some suggestions.
If I understand the question correctly ...
You have an API gateway in which authentication/authorisation is implemented
On successful negotiation through the API gateway the call is passed on to a core service
The core services perform some auditing of 'who does what'
In order to perform this auditing the core services need the identity of the calling user
I think the possible approaches here are:
Implement auditing in the API gateway. I suspect this is not a runner because the auditing is likely to be more fine grained than can be implemented in the API gateway. I suspect the most you could audit in the API gateway is something like User A invoked Endpoint B, whereas you probably want to audit something like User A inserted item {...} at time {...}, and this could only be done within a core service.
Pass the original caller's credentials through to the core service and let it authenticate again. This will ensure that no unauthenticated calls can reach the core service and would also have the side effect of providing the user identity to the core service which it can then use for auditing. However, if your API gateway is the only entrypoint for the core services then authenticating again within a core service only serves to provide the user identity in which case it could be deemed overkill.
Pass the authenticated user identity from the API gateway to the core service and let the core service use this in its auditing. If your API gateway is the only entry point for the core services then there is no need to re-authenticate in the core service, and the provision of the authenticated user identity could be deemed part of the core service's API. As for how this identity should be propagated from the API gateway to the core service, there are several options depending on the nature of the interop between them. It sounds like these are HTTP calls; if so, a request header would make sense since this state is request scoped. It's possible that you are already propagating some 'horizontal state' (i.e. state which is related to the call but is not a caller-supplied parameter) such as a correlationId (which allows you to trace the call through the API gateway down into the core service and back again); if so, the authenticated user identity could be added to that state and provided to the core service in the same way.
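To make the header-propagation option concrete, here is a minimal, framework-free sketch of the receiving side in a core service: the gateway sets a header (the name X-Authenticated-User is an assumption), a servlet filter or interceptor copies it into a ThreadLocal, and the auditing code reads it from there.

```java
// Request-scoped holder for the user identity propagated by the gateway.
public final class CurrentUser {
    public static final String HEADER = "X-Authenticated-User"; // assumed header name
    private static final ThreadLocal<String> USER = new ThreadLocal<>();

    private CurrentUser() { }

    // Called once per request, e.g. from a servlet filter, with the
    // value of the X-Authenticated-User header.
    public static void set(String userId) { USER.set(userId); }

    // Called by auditing code (e.g. a Spring Data AuditorAware bean).
    public static String get() {
        String user = USER.get();
        return user != null ? user : "anonymous";
    }

    // Must be called when the request completes to avoid leaking state
    // across pooled threads.
    public static void clear() { USER.remove(); }
}
```

A JPA AuditorAware implementation can then simply return CurrentUser.get(), keeping the core service free of any security machinery.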
Do you have a code example? I have tried to pass the token from Zuul to another module, but I always get null in the header in the other module.

Integrating Java Web App with SAML SSO

I have a Restful Java Web application which is to be deployed to a number of different environments (outside of my control) which will be using a SAML 2.0 SSO solution.
My application (which I believe is the "service provider") needs to store state generated by the user, and uses internal business logic to work out which users are allowed to view or update other users' data. In order for this to work we need to know who the user is and what groups the user is part of. But how do I get this information?
Ideally my web app will be SSO agnostic, and would look for some configurable key headers in the http requests to get this information e.g. a SAML token in a request which could be parsed, or perhaps some custom headers specific to my "service provider".
Many Thanks
You are correct, your application is the Service Provider and you will have an external Identity Provider (IdP) to authenticate to.
Basically you need to issue an authentication request to the IdP (via either a front-channel HTTP POST or a back-channel SOAP call/whatever they support) and use the authentication response from the IdP to decide whether they are who they say they are. As a rule you should be able to get the subject principal (i.e. username) and any group memberships from the authnResponse; however, exactly how this works will depend on what the IdP is or isn't configured to do.
Before you do this you will need to exchange SAML metadata with the IdP (this is generally part of being registered as a SP with the IdP) which gives both parties things like the public X509 cert for signing and validating requests.
There's a good Spring library for SAML SP support: http://docs.spring.io/autorepo/docs/spring-security-saml/1.0.x-SNAPSHOT/reference/htmlsingle
You could run a reverse proxy in front of the Java web application to handle the SSO protocol part and relay user identity information to the application in HTTP headers; for SAML 2.0 there is mod_auth_mellon: https://github.com/UNINETT/mod_auth_mellon
If this is done in Java and running on a web container (Tomcat, JBoss, etc.), then the agent could be implemented as a web authentication (servlet) filter (added in web.xml). The user would typically be derived from the SAML auth response's <saml:NameID> or from the <saml:Attribute> matching the userid attribute (uid, email, etc.). This user should be validated against the web app's identity repository (could be LDAP, a database, etc.) and the corresponding groups computed. Instead of using arbitrary headers or a custom representation for the authenticated user, consider using java Principals (for users and groups) in a Subject.
The filter can then run the rest of the filter chain in a Subject.doAs(). That way, the Subject can be looked up anywhere in the downstream code using Subject.getSubject().
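A sketch of such a filter, assuming the javax.servlet API; buildSubject is a hypothetical helper standing in for the actual SAML response validation and principal mapping:

```java
import java.io.IOException;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;
import javax.security.auth.Subject;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class SamlAuthFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Validate the SAML response and build a Subject carrying the
        // user and group Principals.
        Subject subject = buildSubject(req);
        try {
            // Run the rest of the chain as the authenticated Subject so that
            // downstream code can recover it via Subject.getSubject(...).
            Subject.doAs(subject, (PrivilegedExceptionAction<Void>) () -> {
                chain.doFilter(req, res);
                return null;
            });
        } catch (PrivilegedActionException e) {
            throw new ServletException(e.getCause());
        }
    }

    private Subject buildSubject(ServletRequest req) {
        // Placeholder: parse and validate the SAML authn response here,
        // then add user and group Principals to the Subject.
        return new Subject();
    }

    @Override public void init(FilterConfig cfg) { }
    @Override public void destroy() { }
}
```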

Authentication and Session management with Apache CXF DOSGi

I have a client - server application which uses cxf DOSGi [1]. Now I want to authenticate the clients from the server and create a session for the client. The client will have a cookie which is used to access the service once authenticated. I would like to know what is the best way for the server to access the HTTP session and the best way to store a cookie at the client end once authenticated.
I was thinking of making a custom Session object at the application level once authenticated and sending a Cookie object to the client. So when the client accesses the service methods, it will pass the cookie as an argument. The client would be validated in every service method. But I don't think this is the best way to handle this, since every service method must have a separate argument to pass the cookie.
I came across this when I was googling [2]. Is it possible to get "WebServiceContext" in the service in DOSGi? Even if I get it, how would I store the cookie at client end and make sure the client sends the cookie in every subsequent web service call?
[1] http://cxf.apache.org/distributed-osgi-greeter-demo-walkthrough.html
[2] How can I manage users' sessions when I use web services?
Any help is highly appreciated.
Thanks.
You can use a custom intent to control authentication. Basically, an intent is a CXF feature that is applied to the web service by DOSGi. You create the feature in a separate bundle and then publish it with a special property for its name: see the DOSGi reference guide.
In a project we created a feature that read a ThreadLocal containing the authentication context and used the credentials stored there to populate the CXF authentication. So you just have to store the credentials once in the ThreadLocal at the start of your application and all calls work.
Currently there is no simple documentation or example for this case, but I plan to create some in the near future, as authentication is a common problem. I plan to use Shiro as an authentication framework and write a generic adapter for CXF. I will add a comment or another answer as soon as I get it ready. In the meantime you can try to do the same yourself.
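Until that documentation exists, a sketch of such a feature might look like the following (assuming the cxf-core dependency; AuthContext is a hypothetical ThreadLocal holder, and sending a bearer token in a protocol header is an illustrative assumption - a real implementation might instead populate a WS-Security token). The feature bundle would then be registered as an OSGi service under the intent-name property described in the DOSGi reference guide.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

import org.apache.cxf.Bus;
import org.apache.cxf.feature.AbstractFeature;
import org.apache.cxf.interceptor.InterceptorProvider;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

// Hypothetical ThreadLocal holder filled at the start of each client call.
class AuthContext {
    private static final ThreadLocal<String> TOKEN = new ThreadLocal<>();
    static void setToken(String t) { TOKEN.set(t); }
    static String getToken() { return TOKEN.get(); }
    static void clear() { TOKEN.remove(); }
}

// A CXF feature (usable as a DOSGi intent) that copies the credentials
// from the ThreadLocal onto every outgoing request.
public class AuthIntentFeature extends AbstractFeature {

    @Override
    protected void initializeProvider(InterceptorProvider provider, Bus bus) {
        provider.getOutInterceptors().add(new AbstractPhaseInterceptor<Message>(Phase.SETUP) {
            @Override
            public void handleMessage(Message message) {
                String token = AuthContext.getToken();
                if (token == null) {
                    return; // no credentials stored for this thread
                }
                @SuppressWarnings("unchecked")
                Map<String, Object> headers =
                        (Map<String, Object>) message.get(Message.PROTOCOL_HEADERS);
                if (headers == null) {
                    headers = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
                    message.put(Message.PROTOCOL_HEADERS, headers);
                }
                headers.put("Authorization", List.of("Bearer " + token));
            }
        });
    }
}
```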
