I am trying to call Google Cloud DocumentAI through a Google service account. I have the JSON key that was generated for it, and I load it into my application via a FixedCredentialsProvider and a GoogleCredentials object, since loading it via environment variables is not possible for my use case. This method used to work, but now it throws an UNAUTHORIZED exception with a message about not having valid OAuth2 tokens. When I test the same scenario using the GOOGLE_APPLICATION_CREDENTIALS environment variable, it works fine. Has there been a change that no longer allows the FixedCredentialsProvider method? I have not updated the SDK; it simply stopped working on its own. Is there a new way to load the credentials JSON key programmatically?
Ended up inspecting the SDK source to find the answer. The difference between loading via environment variables and constructing a GoogleCredentials object is that in the latter case no OAuth2 scopes are provided, and since my last test Google has made scopes mandatory for the DocumentAI service. Loading the key via the environment variable goes through a different code path that supplies default scopes. We can provide the same scopes manually when loading via GoogleCredentials like so:
GoogleCredentials googleCredentials = GoogleCredentials
        .fromStream(new FileInputStream(jsonKeyFile))
        .createScoped(DocumentUnderstandingServiceSettings.getDefaultServiceScopes());

DocumentUnderstandingServiceClient client = DocumentUnderstandingServiceClient.create(
        DocumentUnderstandingServiceSettings.newBuilder()
                .setCredentialsProvider(FixedCredentialsProvider.create(googleCredentials))
                .build()
);
DocumentUnderstandingServiceSettings.getDefaultServiceScopes() returns a static list containing the same scopes used by the environment-variable loading path, which in turn makes DocumentAI usable with the manually created GoogleCredentials.
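For reference, for most Google Cloud services these default scopes boil down to the broad cloud-platform scope, so if your SDK version does not expose getDefaultServiceScopes(), a manually scoped equivalent (a sketch, not verified against every SDK version) would be:

GoogleCredentials googleCredentials = GoogleCredentials
        .fromStream(new FileInputStream(jsonKeyFile))
        // assumption: cloud-platform is the scope the env-variable code path grants
        .createScoped(Collections.singletonList("https://www.googleapis.com/auth/cloud-platform"));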
Related
Our application needs to connect to Confluent Kafka, and thus we have the following setup inside the application.yaml file:
kafka:
  properties:
    sasl:
      mechanism: PLAIN
      jaas:
        config: org.apache.kafka.common.security.plain.PlainLoginModule required username={userName} password={passWord};
The {userName} and {passWord} placeholders need to be replaced by values fetched from AWS Secrets Manager. This is what I have done so far.
Step 1: Add the following Maven dependency:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-secretsmanager</artifactId>
</dependency>
Step 2: Create a configuration class with a method annotated with @Bean to initialize an AWSSecretsManager client object. We can then fetch key/value pairs using the AWSSecretsManager object:
// Create a Secrets Manager client
AWSSecretsManager client = AWSSecretsManagerClientBuilder.standard()
        .withRegion(region)
        .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretKey)))
        .build();
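Reading the actual secret string with the V1 client then looks roughly like this (a sketch; secretName is a placeholder, and the returned string is typically a JSON document holding the key/value pairs):

import com.amazonaws.services.secretsmanager.model.GetSecretValueRequest;
import com.amazonaws.services.secretsmanager.model.GetSecretValueResult;

// fetch the secret by name; the result holds the JSON string with the key/value pairs
GetSecretValueRequest request = new GetSecretValueRequest().withSecretId(secretName);
GetSecretValueResult result = client.getSecretValue(request);
String secretJson = result.getSecretString();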
I have the following questions:
How can we inject the values we get from Secrets Manager to replace the placeholders in the application.yml file?
To access AWSSecretsManager we need to pass an AWS accessKey and secretKey. What is a good practice for providing those two values?
Some more info:
Our application will be running on AWS ECS.
I wouldn't recommend doing this via Java code at all. I would remove the aws-java-sdk-secretsmanager dependency entirely and use the ECS support for injecting Secrets Manager values as environment variables.
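For example (a sketch; the ARNs and names are placeholders), the container definition in your ECS task definition can reference the secret's JSON keys directly, and ECS injects them as environment variables:

"secrets": [
  { "name": "KAFKA_USERNAME", "valueFrom": "arn:aws:secretsmanager:<region>:<account-id>:secret:<secret-name>:username::" },
  { "name": "KAFKA_PASSWORD", "valueFrom": "arn:aws:secretsmanager:<region>:<account-id>:secret:<secret-name>:password::" }
]

The placeholders in application.yaml then become ordinary environment variable references, e.g. username=${KAFKA_USERNAME} password=${KAFKA_PASSWORD}, with no Secrets Manager code in the application at all.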
My answer here will focus on the Secrets Manager API part of your question.
I recommend that you move from AWS SDK for Java V1 to AWS SDK for Java V2. You can find V2 Secrets Manager examples here:
https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2/example_code/secretsmanager
Here is the service client for V2:
SecretsManagerClient secretsClient = SecretsManagerClient.builder()
        .region(region)
        .credentialsProvider(ProfileCredentialsProvider.create())
        .build();
In this example, I am using a ProfileCredentialsProvider that reads credentials from ~/.aws/credentials. You can learn more about how V2 handles credentials in the AWS SDK for Java V2 Developer Guide:
Using credentials
You cannot use ProfileCredentialsProvider in an app deployed to a container, as this file structure is not part of the container. Instead, you can use Amazon ECS container credentials:
The SDK uses the ContainerCredentialsProvider class to load credentials from the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI system environment variable.
See point 5 in the above Doc.
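Putting those pieces together, a sketch of the V2 code as it could look inside ECS (secretName and the region are placeholders):

import software.amazon.awssdk.auth.credentials.ContainerCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;

// the container provider reads credentials via AWS_CONTAINER_CREDENTIALS_RELATIVE_URI inside ECS
SecretsManagerClient secretsClient = SecretsManagerClient.builder()
        .region(Region.US_EAST_1)
        .credentialsProvider(ContainerCredentialsProvider.builder().build())
        .build();

String secretJson = secretsClient
        .getSecretValue(GetSecretValueRequest.builder().secretId(secretName).build())
        .secretString();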
How can I create a custom data source credentials provider that, for example, reads the credentials from a file on disk? I need a way to set the credentials from code, and I guess a credentials provider is the way to go in Quarkus:
quarkus.datasource.username=I want to set this in the code
quarkus.datasource.password=I want to set this in the code
I only see a HashiCorp Vault integration. I need a way to do this with a custom credentials provider. I can see that there is a way to set the class that represents your provider, but what interface should that class implement?
From the docs:
quarkus.datasource.credentials-provider=?
quarkus.datasource.credentials-provider-type=?
The credentials provider type. It is the @Named value of the credentials provider bean. It is used to discriminate if multiple CredentialsProvider beans are available. For Vault it is vault-credentials-provider. Not necessary if there is only one credentials provider available.
Can somebody please help with this?
This pattern is now officially supported in https://github.com/quarkusio/quarkus/pull/9032 and documented in https://github.com/quarkusio/quarkus/pull/9552.
Interesting. We designed that contract with only Vault in mind, so the interface is called io.quarkus.vault.CredentialsProvider and lives in the quarkus-vault-spi module.
That being said, I think you can just add that module to your project (it doesn't have any Vault dependency), implement the interface, and things should be OK.
Your CredentialsProvider needs to be a CDI bean, so you should make it either @Singleton or @ApplicationScoped.
Then you just need to define a value for quarkus.datasource.credentials-provider=<value here>. The name is passed to the credentials provider and is used in the case of Vault.
In your case, it just needs to be defined.
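To make this concrete, here is a minimal sketch of a file-backed provider written against the io.quarkus.credentials.CredentialsProvider interface introduced by the PRs above (the exact package and method signature vary across Quarkus versions, and the file path is hypothetical):

import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import javax.enterprise.context.ApplicationScoped;

import io.quarkus.credentials.CredentialsProvider;

@ApplicationScoped
public class FileCredentialsProvider implements CredentialsProvider {

    @Override
    public Map<String, String> getCredentials(String credentialsProviderName) {
        // read user/password from a properties file on disk (hypothetical path)
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(Paths.get("/etc/myapp/db-credentials.properties"))) {
            props.load(in);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        Map<String, String> credentials = new HashMap<>();
        credentials.put(USER_PROPERTY_NAME, props.getProperty("user"));
        credentials.put(PASSWORD_PROPERTY_NAME, props.getProperty("password"));
        return credentials;
    }
}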
If it works for you, could you open an issue in our tracker? I think we should make that interface part of the datasource extension and not Vault specific.
UPDATE: I created an example project here: https://github.com/gsmet/quarkus-credentials-provider . Just run mvn clean install (you need Docker) and you'll see your CredentialsProvider being called.
Yes, io.quarkus.vault.CredentialsProvider is meant to be HashiCorp Vault neutral.
Please see this issue for some guidance: https://github.com/quarkusio/quarkus/issues/6896#issuecomment-581014674
Due to some new security requirements, the API I'm developing is now required to store several URLs, Azure account names, etc. in Azure Key Vault rather than in the application.yml config file.
The issue is that I'm having trouble authenticating to and accessing the Key Vault client in a local environment. I have very limited access to the Azure functions / Key Vault itself, so testing the new code I'm writing is nearly impossible at the moment:
public String getSecretFromKeyVault(String key) {
    // Breaks in the constructor call, as the MSI_ENDPOINT and MSI_SECRET
    // environment variables are null when running locally.
    AppServiceMSICredentials credentials = new AppServiceMSICredentials(AzureEnvironment.AZURE);
    KeyVaultClient client = new KeyVaultClient(credentials);
    SecretBundle secret = client.getSecret("url-for-key-vault", key);
    return secret.value();
}
I'm aware that the variables will be set on the cloud server, but my question is how I can best verify that the vault calls have been implemented properly (unit, integration, and e2e local tests), and how I can use Key Vault calls during local development / runtime.
The alternative to MSI would be to enter the client id and key manually, following authentication against Active Directory. This could be a solution for local development, but it would still require the declaration of confidential information in the source code.
I've also tried logging in to Azure using az login before running the server, but that didn't work either.
Does anyone have any suggestions on how I might resolve this issue, or what my best options are going forward?
Notes on application:
Java version: 8
Spring boot
Azure / vsts development and deployment environment
Since you're using Spring Boot, you may be better off using Microsoft's property source implementation, which maps Key Vault secrets into Spring properties; for local development and testing you can then set equivalent properties in property files.
Use Spring profiles. Let's say you have azure and local profiles. In your application-azure.yml file, configure your app to use Key Vault:
# endpoint on the Azure internal network for getting the identity access token
MSI_ENDPOINT: http://169.254.169.254/metadata/identity/oauth2/token
MSI_SECRET: unused
# this property triggers the use of Key Vault for properties
azure.keyvault:
  uri: https://<your-keyvault-name>.vault.azure.net/
Now you can inject secret properties from the Spring context into your variables, and they will be read from Key Vault:
@Value("${superSecretValue}")
String secretValue;
To make this work locally for testing, set the secret property to something appropriate in your application-local.yml file:
superSecretValue: dummy-for-testing-locally
The Azure dependency you need to add to build.gradle is:
implementation "com.microsoft.azure:azure-keyvault-secrets-spring-boot-starter:2.1.6"
Run your Spring Boot jar with azure as the active profile when deployed, and with local when testing and developing away from Azure. This is tested and working with Azure Java containers.
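Profile selection is standard Spring Boot, e.g. (assuming the jar is named app.jar):

java -jar app.jar --spring.profiles.active=azure   # deployed to Azure
java -jar app.jar --spring.profiles.active=local   # local development and testing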
I am using Xero's Java SDK to build my application. My application now has to work with several Xero private apps, so I need to manage and perform authentication (OAuth) via the key certificate file and the appropriate consumer key and secret.
I was thinking to simply store these details in a database table and retrieve them appropriately, more or less as in the following:
// create a Xero config instance
Config config = JsonConfig.getInstance();
// populate the config - these details will be fetched from the database
config.setConsumerKey("key");
config.setConsumerSecret("secret");
// this line authenticates with the Xero service using the config built above
XeroClient client = new XeroClient(config);
The problem with this approach is that I am not pointing at the public_privatekey.pfx key file, which is another essential element required to authenticate.
The reason I am not doing so is that the SDK does not seem to support this via the Config instance shown above - there is no option to select the appropriate public_private.pfx file (nor an option to just load the contents of the file). It doesn't make sense to me that an SDK would be missing such a feature, which makes me question my approach: have I overlooked a detail, or am I approaching the problem incorrectly?
Take a look at the README under the heading Customize Request Signing:
https://github.com/XeroAPI/Xero-Java/blob/master/README.md
You can provide your own signing mechanism by using the public XeroClient(Config config, SignerFactory signerFactory) constructor. Simply implement the SignerFactory interface with your implementation.
You can also provide a RsaSignerFactory using the public RsaSignerFactory(InputStream privateKeyInputStream, String privateKeyPassword) constructor to fetch keys from any InputStream.
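Based on those two constructors, a sketch of wiring per-app details from the database could look like this (the *FromDb variables are placeholders for values you load yourself, e.g. the private key from a BLOB column):

import java.io.ByteArrayInputStream;
import java.io.InputStream;

// consumer key and secret fetched from the database
Config config = JsonConfig.getInstance();
config.setConsumerKey(consumerKeyFromDb);
config.setConsumerSecret(consumerSecretFromDb);

// private key material loaded from the database instead of a .pfx file on disk
InputStream privateKeyStream = new ByteArrayInputStream(privateKeyBytesFromDb);
SignerFactory signerFactory = new RsaSignerFactory(privateKeyStream, privateKeyPassword);

XeroClient client = new XeroClient(config, signerFactory);

One caveat: JsonConfig.getInstance() returns a singleton, so when juggling several private apps you may want a non-shared Config implementation per app.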
I'm having a hard time authenticating to the Object Storage service in IBM Cloud from an external Java application using the OpenStack4j library (version 3.1.0). Here's how I'm trying:
Identifier domainIdentifier = Identifier.byName("DOMAIN");
Identifier projectIdentifier = Identifier.byName("PROJECT");
OSClient.OSClientV3 os = OSFactory.builderV3()
        .endpoint("https://identity.open.softlayer.com/v3")
        .credentials("USER", "PASS")
        .scopeToProject(projectIdentifier, domainIdentifier)
        .authenticate();
References:
https://github.com/acloudfan/IBM-Object-Storage-Sample/
https://github.com/ibm-bluemix-mobile-services/bluemix-objectstorage-sample-liberty
The problem seems to be that I can't figure out where to get the DOMAIN and PROJECT information mentioned above, and perhaps the endpoint. The documentation says to obtain them from the Object Storage page under Service Credentials and View Credentials. I do see a JSON output with the following fields:
{
  "apikey": "...",
  "endpoints": "...",
  "iam_apikey_description": "...",
  "iam_apikey_name": "...",
  "iam_role_crn": "...",
  "iam_serviceid_crn": "...",
  "resource_instance_id": "..."
}
None of these seem to relate to domain or project information, at least by name. I even created a separate web app with an Object Storage connector and tried to obtain the information from the Environment Variables page, as some of the documentation suggested, but with no luck.
What I ultimately want to achieve is to be able to ingest files to a container I created, and use the data & analytics services on top (Data Science Experience).
The reason for the confusion is that there are (or used to be) two different object storage services on Bluemix: Object Storage and Cloud Object Storage. The bluemix-mobile-services SDK is written for the Object Storage service rather than the one you have provisioned.
The App Service page has a starter kit that makes it pretty easy to get starter code and set up a toolchain for a Liberty project. This has the domain field for the credentials. Here is a link to the starter kits, where I added the Object Storage service, which injects the credentials: https://console.bluemix.net/developer/appservice/starter-kits. Or you can create a project with just the service and no code: https://console.bluemix.net/developer/appservice/create-project?services=Object-Storage
Here is the documentation for the Java SDK for Cloud Object Storage if you would like to use that service instead:
https://console.bluemix.net/docs/services/cloud-object-storage/libraries/java.html#java
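For reference, a sketch using the ibm-cos-sdk-java library from those docs (endpointUrl, location, bucketName, and the credential variables are placeholders; the apikey and resource_instance_id fields from your service credentials map to the constructor arguments, and the endpoint URL comes from the endpoints field):

import java.io.File;

import com.ibm.cloud.objectstorage.auth.AWSCredentials;
import com.ibm.cloud.objectstorage.auth.AWSStaticCredentialsProvider;
import com.ibm.cloud.objectstorage.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.ibm.cloud.objectstorage.oauth.BasicIBMOAuthCredentials;
import com.ibm.cloud.objectstorage.services.s3.AmazonS3;
import com.ibm.cloud.objectstorage.services.s3.AmazonS3ClientBuilder;

// authenticate via IAM using the apikey and resource_instance_id from the service credentials
AWSCredentials credentials = new BasicIBMOAuthCredentials(apiKey, resourceInstanceId);

AmazonS3 cos = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(credentials))
        .withEndpointConfiguration(new EndpointConfiguration(endpointUrl, location))
        .build();

// ingest a local file into an existing bucket (container)
cos.putObject(bucketName, "my-object.csv", new File("local-file.csv"));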
Here is a comparison of the Object Storage services:
https://console.bluemix.net/catalog/infrastructure/object-storage-group