I have a Spring Boot application which needs to make use of CosmosDB. My goal is to load the CosmosDB connection key from Key Vault and use that to connect to CosmosDB.
I have placed the key as a secret in Key Vault, but it seems there is an ordering issue going on, as the Cosmos bean is created before the Key Vault one. I have successfully connected to Key Vault and retrieved several keys this way before, and I am also able to connect to Cosmos if I hard-code the connection key.
Is it possible to load the key from Key Vault and use it to create the Cosmos bean?
What I have tried is the following, but I receive a connection error from Cosmos (because the key is not set), probably because the bean loads before Key Vault. Is there a robust way to connect to Cosmos, or are there any proper examples available for Spring Boot?
Dependencies I am using:
azure-cosmosdb-spring-boot-starter (from com.microsoft.azure)
azure-identity (from com.azure)
azure-security-keyvault-secrets (from com.azure)
CosmosConfiguration.java class:
@Slf4j
@Configuration
@Profile("!local")
public class CosmosConfiguration extends AbstractCosmosConfiguration {

    @Value("${cosmosPrimaryKey}")
    private String key;

    @Override
    public CosmosClient cosmosClient(CosmosDBConfig config) {
        return CosmosClient
                .builder()
                .endpoint(config.getUri())
                .cosmosKeyCredential(new CosmosKeyCredential(key))
                .consistencyLevel(ConsistencyLevel.STRONG)
                .build();
    }
}
The application.properties (only the relevant parts):
azure.keyvault.enabled=true
azure.keyvault.uri=https://mykeyvault.vault.azure.net
azure.keyvault.secrets-keys=cosmosPrimaryKey
cosmosdb.keyname=cosmosPrimaryKey
azure.cosmosdb.uri=https://mycosmos.documents.azure.com:443
azure.cosmosdb.repositories.enabled=true
spring.main.allow-bean-definition-overriding=true
My suggestion for your case is to add a check when creating the CosmosClient. Here is my code.
@Autowired
private CosmosProperties properties;

@Bean
public CosmosClientBuilder cosmosClientBuilder() {
    DirectConnectionConfig directConnectionConfig = DirectConnectionConfig.getDefaultConfig();
    if (true) { // replace with your own condition for when the value must come from Key Vault
        String uri = getConnectUriFromKeyvault();
        properties.setUri(uri);
    }
    return new CosmosClientBuilder()
            .endpoint(properties.getUri())
            .key(properties.getKey())
            .directMode(directConnectionConfig);
}
public String getConnectUriFromKeyvault() {
    SecretClient secretClient = new SecretClientBuilder()
            .vaultUrl("https://vauxxxxen.vault.azure.net/")
            .credential(new DefaultAzureCredentialBuilder().build())
            .buildClient();
    KeyVaultSecret secret = secretClient.getSecret("cosmosdbScanWithwrongkey");
    return secret.getValue();
}
CosmosProperties entity:
import org.springframework.boot.context.properties.ConfigurationProperties;

@ConfigurationProperties(prefix = "cosmos")
public class CosmosProperties {

    private String uri;
    private String key;
    private String secondaryKey;
    private boolean queryMetricsEnabled;

    // getters and setters
    // ...
}
application.properties:
cosmos.uri=https://txxxb.documents.azure.com:443/
cosmos.key=gdvBggxxxxxWA==
cosmos.secondaryKey=wDcxxxfinXg==
dynamic.collection.name=spel-property-collection
# Populate query metrics
cosmos.queryMetricsEnabled=true
I followed this doc to get the Key Vault secret.
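For the original goal of the question, the same pattern can fetch the Cosmos connection key itself rather than the URI. A minimal sketch, assuming the vault URL and the cosmosPrimaryKey secret name from the question's properties:

import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;

public class CosmosKeyLoader {

    // Reads the Cosmos connection key from Key Vault at the moment the Cosmos bean is built,
    // sidestepping the bean-ordering problem. Vault URL and secret name are assumptions.
    public static String getCosmosKeyFromKeyVault() {
        SecretClient secretClient = new SecretClientBuilder()
                .vaultUrl("https://mykeyvault.vault.azure.net/")
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();
        return secretClient.getSecret("cosmosPrimaryKey").getValue();
    }
}

Calling this from inside cosmosClientBuilder() (for example .key(CosmosKeyLoader.getCosmosKeyFromKeyVault())) means the secret is read only when the Cosmos client is actually created.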
What we have at the start
Let's say there is a simple Spring Boot application which provides an API for some frontend. The tech stack is quite ordinary: Kotlin, Gradle, Spring WebMVC, PostgreSQL, Keycloak.
The frontend interacts with the app synchronously via HTTP. Clients authenticate with a JWT token.
The business task
There is a list of events that can be raised somewhere in the system. Users should be notified about them.
A user is able to subscribe to one or more event notifications. A subscription is just a pair of user_id + event_type_id persisted in a dedicated Postgres table.
When event X is raised, we should find all the users subscribed to it and send them some data via Websocket.
Configuration
Let's configure Spring first. Spring uses the STOMP protocol over Websocket.
Add dependencies to build.gradle.kts
implementation("org.springframework.boot:spring-boot-starter-websocket")
implementation("org.springframework:spring-messaging")
Add a Websocket config
@Configuration
@EnableWebSocket
@EnableWebSocketMessageBroker
class WebsocketConfig(
    private val websocketAuthInterceptor: WebsocketAuthInterceptor
) : WebSocketMessageBrokerConfigurer {

    override fun configureMessageBroker(config: MessageBrokerRegistry) {
        config.enableSimpleBroker("/queue/")
        config.setUserDestinationPrefix("/user")
    }

    override fun registerStompEndpoints(registry: StompEndpointRegistry) {
        registry.addEndpoint("/ws").setAllowedOrigins("*")
        registry.addEndpoint("/ws").setAllowedOrigins("*").withSockJS()
    }

    override fun configureClientInboundChannel(registration: ChannelRegistration) {
        registration.interceptors(websocketAuthInterceptor) // we'll talk about this later
    }
}
registerStompEndpoints is about how the connection with our websocket is established. The config allows the frontend to interact with us via either the SockJS or the Websocket library. What they are and how they differ is not today's topic.
configureMessageBroker is about how we will interact with the frontend after the connection is established.
configureClientInboundChannel is about message interception. Let's talk about it later.
Add the /ws/** path to the ignored patterns in the Spring Security config, as shown in the sketch below.
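A minimal sketch of that exclusion, shown in Java and assuming the classic WebSecurityConfigurerAdapter style (the project above is Kotlin, so adapt as needed):

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.WebSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class WebsocketSecurityConfig extends WebSecurityConfigurerAdapter {

    // Let the initial /ws handshake through unauthenticated;
    // the user is authenticated later, on the STOMP CONNECT frame.
    @Override
    public void configure(WebSecurity web) throws Exception {
        web.ignoring().antMatchers("/ws/**");
    }
}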
Establishing the connection
On the frontend side it should look something like this (JavaScript):
const socket = new WebSocket('ws://<api.host>/ws')
//const socket = new SockJS('http://<api.host>/ws') // alternative way
const headers = {Authorization: 'JWT token here'};
const stompClient = Stomp.over(socket, {headers});
stompClient.connect(
headers,
function (frame) {
stompClient.subscribe('/user/queue/notification',
function (message) { ...message processing... });
});
The key moment is passing the authorization header. It is not an HTTP header, and the initial HTTP handshake itself will not be authorized. This is the reason for adding /ws/** to the ignored patterns.
We need the header and the token because we want to allow websocket connections only for authorized users, and we also want to know exactly which user is connected.
Authentication
Now let's add the authentication mechanism:
@Component
class WebsocketAuthInterceptor(
    private val authService: AuthService, // your implementation
    private val sessionStore: WebsocketUserSessionStore
) : ChannelInterceptor {

    override fun preSend(message: Message<*>, channel: MessageChannel): Message<*>? {
        val accessor: StompHeaderAccessor = MessageHeaderAccessor.getAccessor(message, StompHeaderAccessor::class.java)
        val sessionId: String = accessor.messageHeaders["simpSessionId"].toString()

        if (StompCommand.CONNECT == accessor.command) {
            val jwtToken: String = accessor.getFirstNativeHeader("Authorization")
            val token: AccessToken = authService.verifyToken(jwtToken)
            val userId: Long = token.otherClaims["user_id"].toString().toLong()
            sessionStore.add(sessionId, userId)
        } else if (StompCommand.DISCONNECT == accessor.command) {
            sessionStore.remove(sessionId)
        }
        return message
    }
}
The point is to link the randomly generated websocket session ID with a user ID from our Spring Security store and keep that pair for the lifetime of the session. The JWT token should be parsed from the message headers.
Then a user ID should be obtained from the given token. The implementation of that part depends on your exact Spring Security config. In the case of Keycloak there is a useful static method, org.keycloak.adapters.rotation.AdapterTokenVerifier::verifyToken (a sketch follows below).
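A minimal Java sketch of that verification (the surrounding code is Kotlin, but the adapter API is the same; it assumes a KeycloakDeployment built from your adapter config):

import org.keycloak.adapters.KeycloakDeployment;
import org.keycloak.adapters.rotation.AdapterTokenVerifier;
import org.keycloak.common.VerificationException;
import org.keycloak.representations.AccessToken;

public class TokenVerifier {

    // Verifies the token signature against the realm's keys and checks expiry;
    // throws VerificationException if the token is invalid.
    public static AccessToken verify(String jwtToken, KeycloakDeployment deployment)
            throws VerificationException {
        return AdapterTokenVerifier.verifyToken(jwtToken, deployment);
    }
}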
WebsocketUserSessionStore is just a map linking session_id with user_id. It may look like the following code. Remember to handle concurrent access, of course.
@Component
class WebsocketUserSessionStore {

    private val lock = ReentrantLock()
    private val store = HashMap<String, Long>()

    fun add(sessionId: String, userId: Long) = lock.withLock {
        store.compute(sessionId) { _, _ -> userId }
    }

    fun remove(sessionId: String) = lock.withLock {
        store.remove(sessionId)
    }

    fun remove(userId: Long) = lock.withLock {
        store.values.remove(userId)
    }

    // Used by the publisher below to find the live sessions of the given users
    fun getSessionIds(userIds: Collection<Long>): List<String> = lock.withLock {
        store.filterValues { it in userIds }.keys.toList()
    }
}
Notification actually
So, event A was raised somewhere inside the business logic. Let's implement a websocket publisher.
@Component
class WebsocketPublisher(
    private val messagingTemplate: SimpMessagingTemplate,
    private val objectMapper: ObjectMapper,
    private val sessionStore: WebsocketUserSessionStore,
    private val userNotificationRepository: UserNotificationRepository
) {

    suspend fun publish(eventType: EventType, eventData: Any) {
        val userIds = userNotificationRepository.getUserSubscribedTo(eventType)
        val sessionIds = sessionStore.getSessionIds(userIds)
        sessionIds.forEach {
            messagingTemplate.convertAndSendToUser(
                it,
                "/queue/notification",
                objectMapper.writeValueAsString(eventData),
                it.toMessageHeaders()
            )
        }
    }

    private fun String.toMessageHeaders(): MessageHeaders {
        val headerAccessor = SimpMessageHeaderAccessor.create(SimpMessageType.MESSAGE)
        headerAccessor.sessionId = this
        headerAccessor.setLeaveMutable(true)
        return headerAccessor.messageHeaders
    }
}
EventType is an enumeration of event types the system has.
UserNotificationRepository is just part of the data persistence layer (a Hibernate or jOOQ repository, MyBatis mapper, or similar). The getUserSubscribedTo function should do something like select user_id from user_subscription where event_type_id = X; a plain-JDBC sketch follows below.
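A minimal plain-JDBC sketch of that lookup (shown in Java; the table and column names come from the text, the method shape is an assumption):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class UserSubscriptionDao {

    // Returns the IDs of all users subscribed to the given event type.
    public List<Long> getUserSubscribedTo(Connection conn, int eventTypeId) throws SQLException {
        List<Long> userIds = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(
                "select user_id from user_subscription where event_type_id = ?")) {
            ps.setInt(1, eventTypeId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    userIds.add(rs.getLong("user_id"));
                }
            }
        }
        return userIds;
    }
}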
The rest of the code is fairly straightforward. For the given userIds it is possible to obtain the live websocket sessions. Then the convertAndSendToUser function is called for every session.
I think this can serve as a tutorial for building the WebSocket push-notification service.
A small question regarding how to connect to a Cassandra cluster that is SSL-enabled, please.
Currently, I am connecting to a Cassandra cluster that is not SSL enabled by doing the following, and it is working perfectly fine.
@Configuration
public class BaseCassandraConfiguration extends AbstractReactiveCassandraConfiguration {

    @Value("${spring.data.cassandra.username}")
    private String username;

    @Value("${spring.data.cassandra.password}")
    private String passPhrase;

    @Value("${spring.data.cassandra.keyspace-name}")
    private String keyspace;

    @Value("${spring.data.cassandra.local-datacenter}")
    private String datacenter;

    @Value("${spring.data.cassandra.contact-points}")
    private String contactPoints;

    @Value("${spring.data.cassandra.port}")
    private int port;

    @Bean
    @NonNull
    @Override
    public CqlSessionFactoryBean cassandraSession() {
        final CqlSessionFactoryBean cqlSessionFactoryBean = new CqlSessionFactoryBean();
        cqlSessionFactoryBean.setContactPoints(contactPoints);
        cqlSessionFactoryBean.setKeyspaceName(keyspace);
        cqlSessionFactoryBean.setLocalDatacenter(datacenter);
        cqlSessionFactoryBean.setPort(port);
        cqlSessionFactoryBean.setUsername(username);
        cqlSessionFactoryBean.setPassword(passPhrase);
        return cqlSessionFactoryBean;
    }
}
I have another Cassandra cluster that is SSL-enabled.
I was expecting to see something like cqlSessionFactoryBean.setSSLEnabled(true), or something of that sort. Unfortunately, it seems there is no such method.
May I ask what the proper way is to set up this bean in order to connect to Cassandra with SSL, please?
Thank you.
The CqlSessionFactoryBean doesn't have a method for SSL connections, so you might have to change it and use CqlSession instead.
SSLContext sslContext = ...
CqlSession session = CqlSession.builder()
.withSslContext(sslContext)
.build();
or
SslEngineFactory yourFactory = ...
CqlSession session = CqlSession.builder()
.withSslEngineFactory(yourFactory)
.build();
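For the first variant, here is a minimal sketch of building that SSLContext from a JKS truststore (the truststore path and password are placeholders, not part of the original answer):

import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class CassandraSslContext {

    // Builds an SSLContext that trusts the certificates in the given truststore.
    public static SSLContext build(String truststorePath, char[] truststorePassword) throws Exception {
        KeyStore truststore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(truststorePath)) {
            truststore.load(in, truststorePassword);
        }
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(truststore);
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, tmf.getTrustManagers(), null);
        return sslContext;
    }
}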
I am working on a Spring Boot application where I have to store OTPs in ElastiCache (Redis).
Is ElastiCache the right choice for storing OTPs?
Using Redis to store OTP
To run Redis locally I used "sudo apt-get install redis-server". It installed and ran successfully.
I created a RedisConfig that reads the hostname and port from the application config file. My thought was to later use this hostname and port to connect to AWS ElastiCache, but right now I am running locally.
@Configuration
public class RedisConfig {

    @Value("${redis.hostname}")
    private String redisHostName;

    @Value("${redis.port}")
    private int redisPort;

    @Bean
    protected JedisConnectionFactory jedisConnectionFactory() {
        return new JedisConnectionFactory();
    }

    @Bean
    public RedisTemplate<String, Integer> redisTemplate() {
        final RedisTemplate<String, Integer> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(jedisConnectionFactory());
        return redisTemplate;
    }
}
Then I used the RedisTemplate and ValueOperations to write and read data in the Redis cache:
public class OtpService {

    private RedisTemplate<String, Integer> redisTemplate;
    private ValueOperations<String, Integer> valueOperations;

    public OtpService(RedisTemplate<String, Integer> redisTemplate) {
        super();
        this.redisTemplate = redisTemplate;
        valueOperations = redisTemplate.opsForValue();
    }

    public int generateOTP(String key) throws Exception {
        try {
            Random random = new Random();
            int otp = 1000 + random.nextInt(9000);
            // keep the OTP for two minutes
            valueOperations.set(key, otp, 120, TimeUnit.SECONDS);
            return otp;
        } catch (Exception e) {
            throw new Exception("Exception while setting otp" + e.getMessage());
        }
    }

    public int getOtp(String key) {
        try {
            return valueOperations.get(key);
        } catch (Exception e) {
            return 0;
        }
    }
}
This is what I have done, and it runs perfectly locally.
Questions I have:
What changes do I need when deploying the application to an EC2 instance? Do we need to configure the hostname and port in the code?
If so, is there a way to test locally what would happen when we deploy? Can we simulate that environment somehow?
I have read that to access AWS ElastiCache (Redis) locally we have to set up a proxy server, which is not good practice, so how can we easily build the app locally and deploy it to the cloud?
Why doesn't ValueOperations have a "delete" method when it has set and get methods? How can I invalidate a cache entry once it has been used, before the expiry time?
Accessing the AWS cache locally:
When I tried to access AWS ElastiCache (Redis) by putting the port and hostname into the creation of the JedisConnectionFactory instance,
@Bean
protected JedisConnectionFactory jedisConnectionFactory() {
    RedisStandaloneConfiguration configuration = new RedisStandaloneConfiguration(redisHostName, redisPort);
    JedisConnectionFactory factory = new JedisConnectionFactory(configuration);
    return factory;
}
I got an error while setting the key value:
Cannot get Jedis connection; nested exception is
redis.clients.jedis.exceptions.JedisConnectionException: Could not get
a resource from the pool
I have tried to explain what I have done and what I need to know.
If anybody knows any blogs or resources where these things are explained in detail, please direct me there.
After posting the question, I tried things myself.
As per Amazon:
Your Amazon ElastiCache instances are designed to be accessed through
an Amazon EC2 instance.
To connect to Redis locally on Linux:
Run "sudo apt-get install redis-server". It will install the Redis server.
Run "redis-cli". Redis will now be running successfully on localhost:6379.
To connect to the server in Java (Spring Boot):
RedisConfig
For local development, in application.properties: redis.hostname=localhost, redis.port=6379
For the cloud, or when deployed to EC2: redis.hostname=<Amazon ElastiCache endpoint>, redis.port=6379
@Configuration
public class RedisConfig {

    @Value("${redis.hostname}")
    private String redisHostName;

    @Value("${redis.port}")
    private int redisPort;

    @Bean
    protected JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration configuration = new RedisStandaloneConfiguration(redisHostName, redisPort);
        JedisConnectionFactory factory = new JedisConnectionFactory(configuration);
        return factory;
    }

    @Bean
    public RedisTemplate<String, Integer> redisTemplate() {
        final RedisTemplate<String, Integer> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(jedisConnectionFactory());
        return redisTemplate;
    }
}
With this, whether you are running locally or in the cloud, you just need to change the URL and things will work perfectly.
After this, use RedisTemplate and ValueOperations to write and read data in the Redis cache, the same as I mentioned in the question above. No other changes are needed.
Answers to the questions:
We need to change the hostname when deploying to an EC2 instance.
Running the Redis server locally is exactly the same as running Redis when the application is deployed on EC2; no code changes are needed, just use the Redis config I am using.
Yes, don't create a proxy server; this defeats the very idea of the cache. Run the Redis server locally and change the hostname when deploying.
I still need to find a way to invalidate the cache when using ValueOperations; a sketch follows below.
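One option, as a minimal sketch: deletion is exposed on RedisTemplate itself rather than on ValueOperations, so a hypothetical method like this could be added to the OtpService above:

public void invalidateOtp(String key) {
    // Removes the entry immediately, before its two-minute TTL expires
    redisTemplate.delete(key);
}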
I'm studying the Vertx MongoClient API. I previously installed RESTHeart from Docker along with its own copy of MongoDB, so now I have the default configuration for RESTHeart and the default configuration of Mongo in docker-compose.yml:
MONGO_INITDB_ROOT_USERNAME: restheart
MONGO_INITDB_ROOT_PASSWORD: R3ste4rt!
I put the Vertx MongoClient into a Verticle:
public class MongoClientVerticle extends AbstractVerticle {

    MongoClient mongoClient;
    String db = "monica";
    String collection = "sessions";
    String uri = "mongodb://localhost:27017";
    String username = "admin";
    String password = "password";
    MongoAuth authProvider;

    @Override
    public void start() throws Exception {
        JsonObject config = Vertx.currentContext().config();
        JsonObject mongoconfig = new JsonObject()
                .put("connection_string", uri)
                .put("db_name", db);
        mongoClient = MongoClient.createShared(vertx, mongoconfig);

        JsonObject authProperties = new JsonObject();
        authProvider = MongoAuth.create(mongoClient, authProperties);
        // authProvider.setHashAlgorithm(HashAlgorithm.SHA512);

        JsonObject authInfo = new JsonObject()
                .put("username", username)
                .put("password", password);

        authProvider.authenticate(authInfo, res -> {
            if (res.succeeded()) {
                User user = res.result();
                System.out.println("User " + user.principal() + " is now authenticated");
            } else {
                res.cause().printStackTrace();
            }
        });
    }
and I built a simple query:
public void find(int limit) {
    JsonObject query = new JsonObject();
    FindOptions options = new FindOptions();
    options.setLimit(limit);

    mongoClient.findWithOptions(collection, query, options, res -> {
        if (res.succeeded()) {
            List<JsonObject> result = res.result();
            result.forEach(System.out::println);
        } else {
            res.cause().printStackTrace();
        }
    });
}
but when I access the db I get this error:
MongoQueryException: Query failed with error code 13 and error message 'there are no users authenticated' on server localhost:27017
What am I missing in the authentication process?
I'm using the latest RESTHeart + MongoDB and Vert.x 3.5.3.
To be clear, RESTHeart doesn't come with its own copy of MongoDB but connects to any existing instance of MongoDB. The instance you can start via docker-compose is for demo purposes only.
This question really relates to Vert.x + MongoDB. I'm not an expert on it, but apparently Vert.x Auth Mongo does not use database accounts to authenticate users; it uses a specific collection (by default the "user" collection). You could double-check the Vert.x docs to be sure about this.
However, note that RESTHeart's main purpose is to provide direct HTTP access to Mongodb, without the need to program any specific client or driver. So the side point is that if you are using Vertx then you presumably don't need RESTHeart, and vice versa. Otherwise, you could simply connect to RESTHeart via Vertx's HTTP client, entirely skipping the MongoClient API.
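That said, if the goal is simply to authenticate against MongoDB itself (the server, not Vert.x Auth's user collection), a minimal sketch is to put the credentials from the docker-compose.yml above into the connection string. The authSource=admin parameter is an assumption, based on MONGO_INITDB_ROOT_* users being created in the admin database:

import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.mongo.MongoClient;

public class MongoConnectSketch {

    // Authenticates to MongoDB via the connection string instead of MongoAuth.
    public static MongoClient connect(Vertx vertx) {
        JsonObject mongoconfig = new JsonObject()
                .put("connection_string", "mongodb://restheart:R3ste4rt!@localhost:27017/monica?authSource=admin")
                .put("db_name", "monica");
        return MongoClient.createShared(vertx, mongoconfig);
    }
}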
Currently I'm using Amazon EC2 to host my Mongo database. Below is the code for the MongoConfig file in Java, using the Java MongoDB driver, and it's working fine.
@Configuration
@EnableMongoRepositories
public class MongoConfig extends AbstractMongoConfiguration
{
    @Value("my_amazone_ec2_host")
    private String host;

    @Value("27017")
    private Integer port;

    @Value("my_database_name")
    private String database;

    @Value("database_admin")
    private String username;

    @Value("admin_pass")
    private String password;

    @Override
    public String getDatabaseName()
    {
        return database;
    }

    @Override
    @Bean
    public Mongo mongo() throws Exception
    {
        return new MongoClient(
                singletonList( new ServerAddress( host, port ) ),
                singletonList( MongoCredential.createCredential( username,
                        database, password.toCharArray() ) ) );
    }
}
Now I want to use MongoLab to host my database, and MongoLab provides a URI to connect to the Mongo DB, something like this:
mongodb://<dbuser>:<dbpassword>@ser_num.mongolab.com:port/database_name
I tried to modify my host name with this URI, but it was not successful. Can anyone help me configure this file?
I'm using only java configuration, not XML configuration; MongoDB version 3.
I just found the solution by filling in the relevant information from the MongoLab URI:
#Value("ser_num.mongolab.com")
private String host;
#Value("port")
private Integer port;
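Alternatively, a minimal sketch that lets the driver parse the full MongoLab URI instead of splitting it into host and port (the placeholders are the same ones used in the URI above; MongoClientURI is part of the 3.x Java driver):

import com.mongodb.MongoClient;
import com.mongodb.MongoClientURI;

public class MongoFromUri {

    // Builds the client from the complete connection string, including credentials and database name.
    public static MongoClient create() {
        MongoClientURI uri = new MongoClientURI(
                "mongodb://<dbuser>:<dbpassword>@ser_num.mongolab.com:port/database_name");
        return new MongoClient(uri);
    }
}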