Configuring Spring Data Redis with Lettuce for ElastiCache Master/Slave

I have an ElastiCache setup with one master and two slaves. I am still not sure how to pass a list of master/slave RedisURIs to construct a StatefulRedisMasterSlaveConnection for LettuceConnectionFactory. I only see support for a standalone configuration with a single host and port:
LettuceClientConfiguration configuration = LettuceTestClientConfiguration.builder()
        .readFrom(ReadFrom.SLAVE)
        .build();
LettuceConnectionFactory factory = new LettuceConnectionFactory(SettingsUtils.standaloneConfiguration(), configuration);
I know there is a similar question: Configuring Spring Data Redis with Lettuce for Redis master/slave.
But I don't think that works for an ElastiCache master/slave setup, because the code above would use MasterSlaveTopologyProvider to discover the slave IPs, and those IP addresses are not reachable. So what's the right way to configure Spring Data Redis to make it compatible with a master/slave ElastiCache cluster? It seems to me LettuceConnectionFactory needs to take a list of endpoints and use StaticMasterSlaveTopologyProvider in order to work.
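For reference, plain Lettuce (5.x) already behaves this way when given a static list of URIs; a minimal sketch, with hypothetical endpoint names:

import java.util.Arrays;
import java.util.List;

import io.lettuce.core.ReadFrom;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.masterslave.MasterSlave;
import io.lettuce.core.masterslave.StatefulRedisMasterSlaveConnection;

public class ElastiCacheStaticTopologyExample {
    public static void main(String[] args) {
        // Hypothetical node endpoints copied from the AWS console
        List<RedisURI> nodes = Arrays.asList(
                RedisURI.create("redis://primary.example.use1.cache.amazonaws.com:6379"),
                RedisURI.create("redis://replica-1.example.use1.cache.amazonaws.com:6379"),
                RedisURI.create("redis://replica-2.example.use1.cache.amazonaws.com:6379"));

        RedisClient client = RedisClient.create();
        // A static URI list makes Lettuce use StaticMasterSlaveTopologyProvider:
        // only the listed endpoints are contacted to determine master/replica roles
        StatefulRedisMasterSlaveConnection<String, String> connection =
                MasterSlave.connect(client, StringCodec.UTF8, nodes);
        connection.setReadFrom(ReadFrom.SLAVE);
    }
}

The question is how to get LettuceConnectionFactory to do the same.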

There have been further improvements in AWS and Lettuce since then that make Master/Slave easier to support.
One recent improvement on the AWS side is the launch of reader endpoints for Redis, which distribute load among the replicas: Amazon ElastiCache launches reader endpoints for Redis.
Hence the best way to connect to Redis with Spring Data Redis is to use the primary endpoint (master) and the reader endpoint (replicas) of the cluster.
You can get both from the AWS console. Here is some sample code:
@Bean
public LettuceConnectionFactory redisConnectionFactory() {
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .readFrom(ReadFrom.SLAVE_PREFERRED)
            .build();

    RedisStaticMasterReplicaConfiguration redisStaticMasterReplicaConfiguration =
            new RedisStaticMasterReplicaConfiguration(REDIS_CLUSTER_PRIMARY_ENDPOINT, redisPort);
    redisStaticMasterReplicaConfiguration.addNode(REDIS_CLUSTER_READER_ENDPOINT, redisPort);
    redisStaticMasterReplicaConfiguration.setPassword(redisPassword);

    return new LettuceConnectionFactory(redisStaticMasterReplicaConfiguration, clientConfig);
}
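A typical way to consume this factory is a template bean; a minimal sketch:

@Bean
public StringRedisTemplate stringRedisTemplate(LettuceConnectionFactory redisConnectionFactory) {
    // Reads are routed via SLAVE_PREFERRED to the reader endpoint; writes go to the primary
    return new StringRedisTemplate(redisConnectionFactory);
}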

Right now, static Master/Slave with provided endpoints is not supported by Spring Data Redis. I filed a ticket to add support for it.
You can implement this functionality yourself by subclassing LettuceConnectionFactory and creating your own configuration and connection provider.
You would start with something like:
public static class MyLettuceConnectionFactory extends LettuceConnectionFactory {

    private final MyMasterSlaveConfiguration configuration;

    public MyLettuceConnectionFactory(MyMasterSlaveConfiguration standaloneConfig,
                                      LettuceClientConfiguration clientConfig) {
        super(standaloneConfig, clientConfig);
        this.configuration = standaloneConfig;
    }

    @Override
    protected LettuceConnectionProvider doCreateConnectionProvider(AbstractRedisClient client, RedisCodec<?, ?> codec) {
        return new ElasticacheConnectionProvider((RedisClient) client, codec, getClientConfiguration().getReadFrom(),
                this.configuration);
    }
}

static class MyMasterSlaveConfiguration extends RedisStandaloneConfiguration {

    private final List<RedisURI> endpoints;

    public MyMasterSlaveConfiguration(List<RedisURI> endpoints) {
        this.endpoints = endpoints;
    }

    public List<RedisURI> getEndpoints() {
        return endpoints;
    }
}
You can find all the code in this gist; I'm not posting it all here as it would be a wall of code.

Related

Spring Cloud Stream Kafka Binder Configuration update at runtime

I am using Spring Cloud Stream along with the Kafka binder to connect to a Kafka cluster using SASL. The SASL config looks as follows:
spring.cloud.stream.kafka.binder.configuration.sasl.mechanism=SCRAM-SHA-512
spring.cloud.stream.kafka.binder.configuration.sasl.jaas.config= .... required username="..." password="..."
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
I want to update the username and password programmatically at runtime. How can I do that in Spring Cloud Stream with the Spring Kafka binder?
Side note:
Using BinderFactory I can get a reference to KafkaMessageChannelBinder, which has KafkaBinderConfigurationProperties; in its configuration map I can see those properties. But how can I update the configuration at runtime so that the changes are reflected in the connections as well?
@Autowired
BinderFactory binderFactory;

// ...

public void foo() {
    KafkaMessageChannelBinder k = (KafkaMessageChannelBinder) binderFactory.getBinder(null, MessageChannel.class);
    // Using a debugger I inspected k.configurationProperties.configuration,
    // which has the SASL properties I need to update
}
The JAAS username and password can be provided via configuration, which also means they can be overridden with the same properties at runtime.
Here is an example: https://github.com/spring-cloud/spring-cloud-stream-samples/blob/master/multi-binder-samples/kafka-multi-binder-jaas/src/main/resources/application.yml#L26
At runtime you can override the values set in application.properties. For example, if you run the app with java -jar, you can simply pass the property (e.g. spring.cloud.stream.kafka.binder.jaas.options.username) on the command line; the new value then takes effect for the duration of that run.
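For instance, with placeholder values:

java -jar my-app.jar \
  --spring.cloud.stream.kafka.binder.jaas.options.username=newUser \
  --spring.cloud.stream.kafka.binder.jaas.options.password=newSecret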
I came across this problem yesterday and spent about 3-4 hours figuring out how to programmatically update the username and password in Spring Cloud Stream with the Spring Kafka binder, since one cannot/should not store passwords in Git (Spring Boot version 2.5.2).
Overriding the KafkaBinderConfigurationProperties bean works:
@Bean
@Primary
public KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties(KafkaBinderConfigurationProperties properties) {
    // username/password fetched from an external system such as Vault;
    // note that the JAAS config string must end with a semicolon
    String saslJaasConfigString = "org.apache.kafka.common.security.scram.ScramLoginModule required"
            + " username=\"" + USERNAME_FROM_EXTERNAL_SYSTEM_LIKE_VAULT + "\""
            + " password=\"" + PASSWORD_FROM_EXTERNAL_SYSTEM_LIKE_VAULT + "\";";
    Map<String, String> configMap = properties.getConfiguration();
    configMap.put(SaslConfigs.SASL_JAAS_CONFIG, saslJaasConfigString);
    return properties;
}

JUnit 5 functional testing of a Micronaut messaging-driven application

I have a RabbitMQ Micronaut messaging-driven application. The application contains only the consumer side; the producer side is in another REST API application.
Now I want to write JUnit 5 tests for the consumer side only, and I'm trying to find the best way to test a messaging-driven application that contains only RabbitMQ listeners:
@RabbitListener
public record CategoryListener(IRepository repository) {

    @Queue(ConstantValues.ADD_CATEGORY)
    public CategoryViewModel Create(CategoryViewModel model) {
        LOG.info("Listener --> Adding the product to the product collection");
        Category category = new Category(model.name(), model.description());
        return Single.fromPublisher(this.repository.getCollection(ConstantValues.PRODUCT_CATEGORY_COLLECTION_NAME, Category.class)
                        .insertOne(category))
                .map(success -> new CategoryViewModel(
                        success.getInsertedId().asObjectId().getValue().toString(),
                        category.getName(),
                        category.getDescription()))
                .blockingGet();
    }
}
After some research I found that we can use Testcontainers for integration testing. In my case the producer and the receiver are in different applications, so do I need to create a RabbitClient for each RabbitListener in the test environment, or is there a way to mock the RabbitClient?
@MicronautTest
@Testcontainers
public class CategoryListenerTest {

    @Container
    private static final RabbitMQContainer RABBIT_MQ_CONTAINER = new RabbitMQContainer("rabbitmq")
            .withExposedPorts(5672, 15672);

    @Test
    @DisplayName("Rabbit MQ container should be running")
    void rabbitMqContainerShouldBeRunning() {
        Assertions.assertTrue(RABBIT_MQ_CONTAINER.isRunning());
    }
}
What is the best way to perform functional tests of a Micronaut messaging-driven application? In this case the PRODUCER lives in another application, so I can't inject a PRODUCER client. How can I test this function on the LISTENER side?
Create producers with @RabbitClient, or use the Java API directly.
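A minimal sketch of such a test-side producer, assuming the micronaut-rabbitmq module is on the test classpath; the interface name is hypothetical, and it targets the listener's queue through the default exchange:

import io.micronaut.rabbitmq.annotation.Binding;
import io.micronaut.rabbitmq.annotation.RabbitClient;

@RabbitClient // default exchange; the binding below is the queue name
public interface CategoryTestClient {

    // A non-void return type makes Micronaut publish and wait for the
    // listener's reply (RPC over direct reply-to)
    @Binding(ConstantValues.ADD_CATEGORY)
    CategoryViewModel create(CategoryViewModel model);
}

Placing this interface under src/test/java and injecting it into the Testcontainers-backed test above lets you publish a message and assert on the reply from CategoryListener.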

Managing Kafka Topics with Spring

We are planning to use Kafka for queueing in our application. I have a bit of experience with RabbitMQ and Spring.
With RabbitMQ and Spring, we used to manage queue creation while starting up the Spring service.
With Kafka, I'm not sure what the best way to create topics is. Is there a way to manage topics with Spring?
Or should we write a separate script that creates the topics? Maintaining a separate script for topic creation seems a bit odd to me.
Any suggestions will be appreciated.
In Spring it is possible to create topics during application startup using beans:
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
            StringUtils.arrayToCommaDelimitedString(kafkaEmbedded().getBrokerAddresses()));
    return new KafkaAdmin(configs);
}

@Bean
public NewTopic topic1() {
    return new NewTopic("foo", 10, (short) 2);
}
Alternatively, you can write your own topic-creation logic by autowiring the KafkaAdmin, for instance to read the topic list from an input file or to set advanced properties such as the number of partitions (see the sketch below):
@Autowired
private KafkaAdmin admin;

// ...your implementation
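A sketch of such an implementation, assuming the topic names come from an external source; the broker address is a placeholder:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public void createTopics(List<String> topicNames) throws Exception {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker

    try (AdminClient adminClient = AdminClient.create(configs)) {
        List<NewTopic> topics = topicNames.stream()
                .map(name -> new NewTopic(name, 10, (short) 2)) // 10 partitions, replication factor 2
                .collect(Collectors.toList());
        adminClient.createTopics(topics).all().get(); // block until the broker confirms creation
    }
}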
Also note that the broker setting auto.create.topics.enable is true by default (see Broker configs), so topics may also be created implicitly on first use.
For more information, refer to the spring-kafka docs.
To automatically create a Kafka topic in Spring Boot, only this is required:
@Bean
public NewTopic topic1() {
    // "foo": topic name, 10: number of partitions, 2: replication factor
    return new NewTopic("foo", 10, (short) 2);
}
The KafkaAdmin bean is automatically created and configured by Spring Boot.
Version 2.3 of Spring Kafka introduced a TopicBuilder class to make building topics more fluent and intuitive:
@Bean
public NewTopic topic() {
    return TopicBuilder.name("foo")
            .partitions(10)
            .replicas(2)
            .build();
}

Problems connecting to an existing Elasticsearch instance in a Spring Boot application

I have an Elasticsearch instance running locally, and a Spring Boot application.
In my application I have a service, ServiceX, which contains an Elasticsearch repository that extends ElasticsearchRepository. So:
ServiceX contains
    YRepository extends ElasticsearchRepository
My Elasticsearch settings are:
# ELASTICSEARCH (ElasticsearchProperties)
spring.data.elasticsearch.properties.http.enabled=true
spring.data.elasticsearch.properties.host=localhost
spring.data.elasticsearch.properties.port=9300
When the application starts, an Elasticsearch template is created.
The client that is used is a NodeClient.
The settings for the NodeClient are:
"http.enabled" -> "true"
"port" -> "9300"
"host" -> "localhost"
"cluster.name" -> "elasticsearch"
"node.local" -> "true"
"name" -> "Human Robot"
"path.logs" -> "C:/dev/git/xxx/logs"
The node name of the Elasticsearch client ("Human Robot" in this case) does not match the local Elasticsearch instance that is running ("Nikki" in this case).
It looks like it 1) creates a new instance of Elasticsearch and 2) runs it as an embedded instance, instead of connecting to my local one.
I have searched through a lot of information but cannot find any documentation to help.
Could people please advise on what settings to use?
Thanks.
I believe that you do not want to use the NodeClient but the TransportClient, unless you want your application to become part of the cluster.
I believe you have the following dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
Then you need to create a configuration class as follows:
@Configuration
@PropertySource(value = "classpath:config/elasticsearch.properties")
public class ElasticsearchConfiguration {

    @Resource
    private Environment environment;

    @Bean
    public Client client() {
        // Elasticsearch 1.x API; in 2.x, use TransportClient.builder().build() instead
        TransportClient client = new TransportClient();
        TransportAddress address = new InetSocketTransportAddress(
                environment.getProperty("elasticsearch.host"),
                Integer.parseInt(environment.getProperty("elasticsearch.port"))
        );
        client.addTransportAddress(address);
        return client;
    }

    @Bean
    public ElasticsearchOperations elasticsearchTemplate() {
        return new ElasticsearchTemplate(client());
    }
}
Also check the Elasticsearch section of the Spring Boot guide, especially the part about spring.data.elasticsearch.cluster-nodes: if you supply a comma-separated list of host:port pairs, a TransportClient is created for you instead. Your choice.
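For example:

spring.data.elasticsearch.cluster-nodes=localhost:9300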
Try it, hope it helps.
Thanks. Would you believe I had actually just started trying to use a configuration file before I saw your post? I added a configuration class:
@Configuration
public class ElasticSearchConfig {

    @Bean
    public Client client() {
        TransportClient client = new TransportClient();
        TransportAddress address = new InetSocketTransportAddress("localhost", 9300);
        client.addTransportAddress(address);
        return client;
    }
}
The client is now being injected into the Elasticsearch template, so I don't need the elasticsearchTemplate bean.
I had an error when I tried to connect, but that turned out to be due to Elasticsearch 2.2.0; I tried it with Elasticsearch 1.7.3 and it worked, so now on to the next problem!

What is the best way to switch AWSCredentialsProviders?

I am writing a Java application that uses Spring for dependency injection and AWS for various services. I will be deploying the application to EC2. The issue I am having is setting the AWS credentials in a secure way during development and deployment. Because the service runs on EC2, I would like to use the InstanceProfileCredentialsProvider in production. However, those credentials are not available during development.
Almost all the AWS clients are currently injected using Spring. Here is an example using DynamoDB:
@Lazy
@Configuration
public class SpringConfiguration {

    @Bean(name = "my.dynamoDB")
    public DynamoDB dynamoDB() {
        return new DynamoDB(new AmazonDynamoDBClient(
                new AWSCredentialsProvider() /* What should go here? */));
    }
}
Any thoughts?
Try creating a separate bean that returns a credentials provider. Within that method, switch between the two credential sources based on stage or host type:
/**
 * @return an AWSCredentialsProvider appropriate for the stage.
 */
@Bean
public AWSCredentialsProvider awsCredentialsProvider() {
    if (isProd() /* define what this means in your configuration code */) {
        return new InstanceProfileCredentialsProvider();
    } else {
        // AWSCredentialsProvider is an interface and cannot be instantiated directly;
        // for development, return a concrete provider such as ProfileCredentialsProvider,
        // which reads credentials from ~/.aws/credentials
        return new ProfileCredentialsProvider();
    }
}
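The DynamoDB bean from the question can then take the provider as a dependency; a sketch reusing the question's bean name:

@Bean(name = "my.dynamoDB")
public DynamoDB dynamoDB(AWSCredentialsProvider awsCredentialsProvider) {
    // Spring injects whichever provider awsCredentialsProvider() chose for this stage
    return new DynamoDB(new AmazonDynamoDBClient(awsCredentialsProvider));
}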
