I am writing a Java application that uses Spring for dependency injection and AWS for various services. I will be deploying the application to EC2. The issue I am having is setting the AWS credentials in a secure way during development/deployment. Because the service is running on EC2, I would like to use the InstanceProfileCredentialsProvider in production. However, these credentials are not available during development.
Almost all the AWS clients are currently injected using Spring. Here is an example using DynamoDB:
@Lazy
@Configuration
public class SpringConfiguration {

    @Bean(name = "my.dynamoDB")
    public DynamoDB dynamoDB() {
        return new DynamoDB(new AmazonDynamoDBClient(
                new AWSCredentialsProvider() /* What should go here? */));
    }
}
Any thoughts?
Try creating a separate bean that returns a credentials provider. Within that method, switch between the two credential sources based on stage or host type.
/**
 * @return an AWSCredentialsProvider appropriate for the stage.
 */
@Bean
public AWSCredentialsProvider awsCredentialsProvider() {
    if (isProd() /* define what this means in your configuration code */) {
        return new InstanceProfileCredentialsProvider();
    } else {
        // For development, read credentials from the local ~/.aws/credentials file
        return new ProfileCredentialsProvider();
    }
}
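The DynamoDB bean from the question can then receive the provider as a method parameter instead of constructing one inline (a minimal sketch reusing the question's bean name):
@Bean(name = "my.dynamoDB")
public DynamoDB dynamoDB(AWSCredentialsProvider awsCredentialsProvider) {
    // Spring injects whichever provider awsCredentialsProvider() returned for this stage
    return new DynamoDB(new AmazonDynamoDBClient(awsCredentialsProvider));
}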
I have an application module that installs a DynamoDB module
install(new DynamoDBModule());
In DynamoDBModule we have some code to build the DynamoDB client and initialize the mapper:
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
.withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://prod-endpoint:8000", "us-west-2"))
.build();
Now, when I am writing tests, I have to replace this DynamoDB endpoint with a local endpoint, and I was wondering what the simplest way to do this is. I am aware of this Stack Overflow question, but it would mean writing a lot of code for just a single small change. I would have to create a mock DynamoDB module and a mock application module, and then I could run something like this in my tests:
Guice.createInjector(Modules.override(new AppModule()).with(new TestAppModule()));
Is there a simple way to somehow use or override the test endpoint when running tests and continue using the prod endpoint otherwise?
Configure the EndpointConfiguration as a binding and override it in TestAppModule, e.g.:
class DynamoDBModule extends AbstractModule {

    @Provides
    @Singleton
    AmazonDynamoDB provideAmazonDynamoDB(EndpointConfiguration endpointConfiguration) {
        return AmazonDynamoDBClientBuilder.standard()
                .withEndpointConfiguration(endpointConfiguration)
                .build();
    }

    @Provides
    @Singleton
    EndpointConfiguration provideEndpointConfiguration() {
        return new AwsClientBuilder.EndpointConfiguration("http://prod-endpoint:8000", "us-west-2");
    }
}
class TestAppModule extends AbstractModule {

    @Provides
    @Singleton
    EndpointConfiguration provideTestEndpointConfiguration() {
        return new AwsClientBuilder.EndpointConfiguration("test-value", "us-west-2");
    }
}
Then use your approach with Modules.override and it should work:
Guice.createInjector(Modules.override(new AppModule()).with(new TestAppModule()));
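In a test you can then build the injector with the override, and the client will point at the local endpoint (a minimal sketch; AppModule and the binding follow the question):
Injector injector = Guice.createInjector(
        Modules.override(new AppModule()).with(new TestAppModule()));
// This client is built with the EndpointConfiguration bound in TestAppModule
AmazonDynamoDB client = injector.getInstance(AmazonDynamoDB.class);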
I created a simple Spring Boot app to retrieve secrets from Key Vault.
I added the following dependency to work with it:
<dependency>
<groupId>com.azure.spring</groupId>
<artifactId>azure-spring-boot-starter-keyvault-secrets</artifactId>
<version>3.5.0</version>
</dependency>
and added the following in application.properties
azure.keyvault.enabled=true
azure.keyvault.uri=<URL>
#keys
mySecretProperty=secret
and my main application,
@SpringBootApplication
public class KeyVaultSample implements CommandLineRunner {

    @Value("${mySecretProperty}")
    private String mySecretProperty;

    public static void main(String[] args) {
        SpringApplication.run(KeyVaultSample.class, args);
    }

    @Override
    public void run(String... args) {
        System.out.println("property mySecretProperty value is: " + mySecretProperty);
    }
}
But every time I run the above app locally, it tries to use ManagedIdentityCredential to connect. So I added a configuration class that creates a SecretClient bean with AzureCliCredential, but the results are the same.
My Configuration class,
@Configuration
public class AppConfiguration {

    @Bean
    public SecretClient secretClient() {
        AzureCliCredential az = new AzureCliCredentialBuilder().build();
        SecretClient sec = new SecretClientBuilder().vaultUrl("<url>")
                .credential(az).buildClient();
        return sec;
    }
}
I'm looking for ways I could use/test this Key Vault locally.
Is there any configuration I could put in the properties file which would make it use AzureCliCredential instead of ManagedIdentityCredential?
azure-spring-boot-starter-keyvault-secrets uses MSI / Managed identities.
If you would like to authenticate with Azure CLI, just use azure-identity and azure-security-keyvault-secrets.
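For reference, the dependencies would look something like this (the version numbers here are illustrative; check for the latest releases):
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-identity</artifactId>
    <version>1.4.0</version>
</dependency>
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-security-keyvault-secrets</artifactId>
    <version>4.3.0</version>
</dependency>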
public void getSecretWithAzureCliCredential() {
    AzureCliCredential cliCredential = new AzureCliCredentialBuilder().build();

    // Azure SDK client builders accept the credential as a parameter
    SecretClient client = new SecretClientBuilder()
            .vaultUrl("https://{YOUR_VAULT_NAME}.vault.azure.net")
            .credential(cliCredential)
            .buildClient();

    KeyVaultSecret secret = client.getSecret("<secret-name>");
    System.out.printf("Retrieved secret with name \"%s\" and value \"%s\"%n", secret.getName(), secret.getValue());
}
If you don't necessarily need the real thing locally (a test double can be fine instead of Azure Key Vault), you could try Lowkey Vault too! It supports keys and secrets using a local container, and you can fake the authentication using simple basic auth.
Project home: https://github.com/nagyesta/lowkey-vault
Java POC (although not using the Spring Boot starter): https://github.com/nagyesta/lowkey-vault-example
I have an ElastiCache setup with one master and two slaves. I am still not sure how to pass in a list of master/slave RedisURIs to construct a StatefulRedisMasterSlaveConnection for LettuceConnectionFactory. I only see support for standaloneConfiguration with a single host and port.
LettuceClientConfiguration configuration = LettuceTestClientConfiguration.builder().readFrom(ReadFrom.SLAVE).build();
LettuceConnectionFactory factory = new LettuceConnectionFactory(SettingsUtils.standaloneConfiguration(),configuration);
I know there is a similar question: Configuring Spring Data Redis with Lettuce for Redis master/slave.
But I don't think it works for an ElastiCache master/slave setup, as the above code would currently try to use MasterSlaveTopologyProvider to discover slave IPs. However, those slave IP addresses are not reachable. So what's the right way to configure Spring Data Redis to make it compatible with a master/slave ElastiCache? It seems to me LettuceConnectionFactory needs to take in a list of endpoints and use StaticMasterSlaveTopologyProvider in order to work.
There have been further improvements in AWS and Lettuce making it easier to support Master/Slave.
One recent improvement in AWS is the launch of reader endpoints for Redis, which distribute load among replicas: Amazon ElastiCache launches reader endpoints for Redis.
Hence the best way to connect to Redis using Spring Data Redis is to use the primary endpoint (master) and the reader endpoint (for replicas) of the Redis cluster.
You can get both of them from the AWS console. Here is some sample code:
@Bean
public LettuceConnectionFactory redisConnectionFactory() {
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .readFrom(ReadFrom.SLAVE_PREFERRED)
            .build();

    RedisStaticMasterReplicaConfiguration redisStaticMasterReplicaConfiguration =
            new RedisStaticMasterReplicaConfiguration(REDIS_CLUSTER_PRIMARY_ENDPOINT, redisPort);
    redisStaticMasterReplicaConfiguration.addNode(REDIS_CLUSTER_READER_ENDPOINT, redisPort);
    redisStaticMasterReplicaConfiguration.setPassword(redisPassword);
    return new LettuceConnectionFactory(redisStaticMasterReplicaConfiguration, clientConfig);
}
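Any template built on this factory then follows that read/write routing (a minimal sketch):
@Bean
public StringRedisTemplate stringRedisTemplate(LettuceConnectionFactory redisConnectionFactory) {
    // Writes hit the primary endpoint; reads are served by the reader
    // endpoint because of ReadFrom.SLAVE_PREFERRED configured above.
    return new StringRedisTemplate(redisConnectionFactory);
}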
Right now, static Master/Slave with provided endpoints is not supported by Spring Data Redis. I filed a ticket to add support for that.
You can implement this functionality yourself by subclassing LettuceConnectionFactory and creating your own configuration.
You would start with something like:
public static class MyLettuceConnectionFactory extends LettuceConnectionFactory {

    private final MyMasterSlaveConfiguration configuration;

    public MyLettuceConnectionFactory(MyMasterSlaveConfiguration standaloneConfig,
            LettuceClientConfiguration clientConfig) {
        super(standaloneConfig, clientConfig);
        this.configuration = standaloneConfig;
    }

    @Override
    protected LettuceConnectionProvider doCreateConnectionProvider(AbstractRedisClient client, RedisCodec<?, ?> codec) {
        return new ElasticacheConnectionProvider((RedisClient) client, codec, getClientConfiguration().getReadFrom(),
                this.configuration);
    }
}
static class MyMasterSlaveConfiguration extends RedisStandaloneConfiguration {

    private final List<RedisURI> endpoints;

    public MyMasterSlaveConfiguration(List<RedisURI> endpoints) {
        this.endpoints = endpoints;
    }

    public List<RedisURI> getEndpoints() {
        return endpoints;
    }
}
You can find all the code in this gist; I'm not posting it all here as it would be a wall of code.
I have a Spring-based module that I need to integrate with an existing/legacy Java "standalone" app. The Spring-based module is a simple implementation of an AuthenticationProvider (spring-security).
What I would like to do is hook up the Spring-based module in a Java application in such a way that I can simply call the authenticate method on that provider from the Java code.
Is that possible? What is required?
Is it possible to wrap the Spring module in a plain Java library and use that as an API interface for my standalone Java app?
I already searched for specific tutorials, but it seems there isn't one that fits this requirement.
OK, I have to use the ApplicationContext directly so I can manage the Spring container from my application:
public class MyApplication {

    private static final Logger LOGGER = LoggerFactory.getLogger(MyApplication.class);

    public static void main(String[] args) {
        try (ConfigurableApplicationContext springApplicationContext = new AnnotationConfigApplicationContext(
                MySpringConfiguration.class)) {
            springApplicationContext.registerShutdownHook();
            MyAuthenticationProvider authProvider = springApplicationContext.getBean(MyAuthenticationProvider.class);

            // Use the bean while the context is still open
            Authentication request = new UsernamePasswordAuthenticationToken("foo", "foo");
            Authentication result = authProvider.authenticate(request);
            if (result.isAuthenticated()) {
                LOGGER.debug("User is authenticated");
            } else {
                LOGGER.debug("Cannot authenticate user.");
            }
        }
    }
}
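For completeness, MySpringConfiguration can be a plain @Configuration class exposing the provider as a bean (a minimal sketch; it assumes MyAuthenticationProvider has a no-arg constructor):
@Configuration
public class MySpringConfiguration {

    @Bean
    public MyAuthenticationProvider myAuthenticationProvider() {
        // Wire in whatever collaborators your provider needs here
        return new MyAuthenticationProvider();
    }
}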
In my Spring Boot application I'm listening to a message queue. When a message appears, I need to process it synchronously (one by one) in some task executor.
I'm using Amazon SQS; this is my config:
/**
 * AWS Credentials Bean
 */
@Bean
public AWSCredentials awsCredentials() {
    return new BasicAWSCredentials(accessKey, secretAccessKey);
}

/**
 * AWS Client Bean
 */
@Bean
public AmazonSQS amazonSQSAsyncClient() {
    AmazonSQS sqsClient = new AmazonSQSClient(awsCredentials());
    sqsClient.setRegion(Region.getRegion(Regions.US_EAST_1));
    return sqsClient;
}

/**
 * AWS Connection Factory
 */
@Bean
public SQSConnectionFactory connectionFactory() {
    SQSConnectionFactory.Builder factoryBuilder = new SQSConnectionFactory.Builder(
            Region.getRegion(Regions.US_EAST_1));
    factoryBuilder.setAwsCredentialsProvider(new AWSCredentialsProvider() {

        @Override
        public AWSCredentials getCredentials() {
            return awsCredentials();
        }

        @Override
        public void refresh() {
        }
    });
    return factoryBuilder.build();
}

/**
 * Registering QueueListener for queueName
 */
@Bean
public DefaultMessageListenerContainer defaultMessageListenerContainer() {
    DefaultMessageListenerContainer messageListenerContainer = new DefaultMessageListenerContainer();
    messageListenerContainer.setConnectionFactory(connectionFactory());
    messageListenerContainer.setMessageListener(new MessageListenerAdapter(new QueueListener()));
    messageListenerContainer.setDestinationName(queueName);
    return messageListenerContainer;
}
Also, I need to be able to check the status of this task executor, for example the number of scheduled tasks.
Is it a good idea to use Spring's SyncTaskExecutor for this purpose? If so, could you please show an example of how it can be used with Spring Boot?
EDIT:
Now that you have revealed your messaging technology and the Spring configuration for it, the simplest way is to configure a SyncTaskExecutor (or Executors.newFixedThreadPool(1) would also do the job) as the executor for your DefaultMessageListenerContainer, via DefaultMessageListenerContainer.setTaskExecutor().
You can register the task executor as a separate bean (via the @Bean annotation) and autowire it into the defaultMessageListenerContainer() method (just add a TaskExecutor parameter), as in the sketch below.
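A minimal sketch of that wiring (using ConcurrentTaskExecutor to wrap the single-threaded pool; bean and class names follow the question's configuration):
@Bean
public TaskExecutor queueTaskExecutor() {
    // A single-threaded pool processes messages strictly one at a time
    return new ConcurrentTaskExecutor(Executors.newFixedThreadPool(1));
}

@Bean
public DefaultMessageListenerContainer defaultMessageListenerContainer(TaskExecutor queueTaskExecutor) {
    DefaultMessageListenerContainer messageListenerContainer = new DefaultMessageListenerContainer();
    messageListenerContainer.setConnectionFactory(connectionFactory());
    messageListenerContainer.setMessageListener(new MessageListenerAdapter(new QueueListener()));
    messageListenerContainer.setDestinationName(queueName);
    messageListenerContainer.setTaskExecutor(queueTaskExecutor);
    return messageListenerContainer;
}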
The answer below is relevant for JMS messaging. It was written before the use of AWS SQS was revealed in the question:
You didn't mention which messaging technology you are using, so I assume JMS.
If synchronous execution is a requirement, I believe you can't use native JMS listeners (you need to avoid SimpleJmsListenerContainerFactory and SimpleMessageListenerContainer).
Instead, I would suggest using the @JmsListener annotation with DefaultJmsListenerContainerFactory (this uses long polling instead of native JMS listeners) and configuring a SyncTaskExecutor (or Executors.newFixedThreadPool(1) would also do the job) as the executor for the mentioned container factory: DefaultJmsListenerContainerFactory.setTaskExecutor().
This is a simple Spring Boot JMS example with DefaultJmsListenerContainerFactory configured. You just need to plug in a suitable task executor, along the lines of the sketch below.
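A minimal sketch of the JMS variant (the queue name and bean names are placeholders):
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // Single-threaded executor: messages are processed one by one
    factory.setTaskExecutor(Executors.newFixedThreadPool(1));
    factory.setConcurrency("1");
    return factory;
}

@JmsListener(destination = "my-queue", containerFactory = "jmsListenerContainerFactory")
public void onMessage(String message) {
    // Process the message synchronously here
}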