Spring Boot Cassandra - CqlSessionFactoryBean with SSL

Small question regarding how to connect to a Cassandra cluster that is SSL enabled please.
Currently, I am connecting to a Cassandra cluster that is not SSL enabled by doing the following, and it is working perfectly fine.
@Configuration
public class BaseCassandraConfiguration extends AbstractReactiveCassandraConfiguration {
    @Value("${spring.data.cassandra.username}")
    private String username;
    @Value("${spring.data.cassandra.password}")
    private String passPhrase;
    @Value("${spring.data.cassandra.keyspace-name}")
    private String keyspace;
    @Value("${spring.data.cassandra.local-datacenter}")
    private String datacenter;
    @Value("${spring.data.cassandra.contact-points}")
    private String contactPoints;
    @Value("${spring.data.cassandra.port}")
    private int port;

    @Bean
    @NonNull
    @Override
    public CqlSessionFactoryBean cassandraSession() {
        final CqlSessionFactoryBean cqlSessionFactoryBean = new CqlSessionFactoryBean();
        cqlSessionFactoryBean.setContactPoints(contactPoints);
        cqlSessionFactoryBean.setKeyspaceName(keyspace);
        cqlSessionFactoryBean.setLocalDatacenter(datacenter);
        cqlSessionFactoryBean.setPort(port);
        cqlSessionFactoryBean.setUsername(username);
        cqlSessionFactoryBean.setPassword(passPhrase);
        return cqlSessionFactoryBean;
    }
}
I have another Cassandra cluster, that is SSL enabled.
I was expecting to see something like cqlSessionFactoryBean.setSSLEnabled(true). Unfortunately, it seems there is no such method.
May I ask what is the proper way to set up this bean in order to connect to a Cassandra with SSL please?
Thank you.

The CqlSessionFactoryBean doesn't expose a method for SSL connections, so you might have to drop it and build a CqlSession directly instead.
SSLContext sslContext = ...
CqlSession session = CqlSession.builder()
.withSslContext(sslContext)
.build();
or
SslEngineFactory yourFactory = ...
CqlSession session = CqlSession.builder()
.withSslEngineFactory(yourFactory)
.build();
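As a hedged sketch of the first option: the SSLContext itself can be built with plain JDK APIs. Passing null to TrustManagerFactory.init (an assumption for illustration, not from the answer) uses the JVM's default cacerts truststore; a real cluster usually needs a truststore containing its CA certificate instead.

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import java.security.KeyStore;

public class CassandraSslContext {

    // Builds an SSLContext with standard JDK APIs. Passing null to
    // TrustManagerFactory.init uses the JVM's default cacerts truststore;
    // in a real setup, load a KeyStore containing your cluster's CA
    // certificate and init the factory with that instead.
    public static SSLContext build() throws Exception {
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null); // default truststore; swap in your own

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, tmf.getTrustManagers(), null);
        return sslContext;
    }

    public static void main(String[] args) throws Exception {
        SSLContext ctx = build();
        // The context would then be handed to the driver:
        // CqlSession.builder().withSslContext(ctx). ... .build()
        System.out.println(ctx.getProtocol()); // prints "TLS"
    }
}
```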

Related

Response from postgres r2dbc taking a lot of time, leading to connection time out

We are using Spring Boot (2.3.1) reactive programming in our project. The DB used is r2dbc-postgres (0.8.7). We are unable to find the root cause of why the APIs written reactively stop responding once they connect to the DB.
For example in the following code:
@Autowired
PlanPackageCurrencyPriceRepository planPackageCurrencyPriceRepository;

public Mono<Object> viewBySkuCodeAndCountryCode(String skuCode, String countryCode) {
    Mono<PlanPackageCurrencyPrice> planPackagePriceInfo = planPackageCurrencyPriceRepository
            .findBySkuCodeAndCountryCode(skuCode, countryCode);
    return planPackagePriceInfo.map(planInfo -> {
        PlanPackageCurrencyPriceDTO currencyPriceDTO = PlanPackageCurrencyPriceDTO.builder()
                .skuCode(planInfo.getSkuCode())
                .countryCode(planInfo.getCountryCode())
                .currencyCode(planInfo.getCurrencyCode())
                .price(planInfo.getPrice())
                .status(planInfo.getStatus())
                .build();
        if (planInfo.getStatus() == Status.ACTIVE) {
            final Mono<Boolean> monovalue = redisTemplate.opsForHash().put("getplanpackagecurrencycodeprice",
                    skuCode + countryCode, currencyPriceDTO);
            logger.info(REDIS_VALUE, monovalue.subscribe(System.out::println));
            return currencyPriceDTO;
        } else {
            logger.debug(serviceName.concat(LoggerConstants.PLAN_PACKAGE_GROUP_INFO_VIEW_DEBUG_LOG)
                    .concat(" No items found for Plan/Package Group Info for the sku code {} "), skuCode);
            throw new CustomException("VIEW_ERRORMESSAGE", HttpStatus.MULTI_STATUS, 10006);
        }
    });
}
When a query is made to the DB using planPackageCurrencyPriceRepository, the logs stop at this query; the following is the output seen right before the timeout:
2021-03-07 10:52:47.427 DEBUG 1 --- [tor-tcp-epoll-4] o.s.d.r.c.R2dbcTransactionManager : Acquired Connection [MonoRetry] for R2DBC transaction
2021-03-07 10:52:47.427 DEBUG 1 --- [tor-tcp-epoll-4] o.s.d.r.c.R2dbcTransactionManager : Switching R2DBC Connection [PooledConnection[PostgresqlConnection{client=io.r2dbc.postgresql.client.ReactorNettyClient#7d1a251f, codecs=io.r2dbc.postgresql.codec.DefaultCodecs#7925be64}]] to manual commit
After some time, the API responds with an error saying the connection timed out.
It works fine again if we restart our Docker container, but then the same behaviour is observed after some time. We are not able to find a solution for this intermittent behaviour.
Following is the DB configuration used:
@Configuration
@EnableR2dbcRepositories(basePackages = "com.crm.smsauth.postgresrepo")
public class DatabaseConfig extends AbstractR2dbcConfiguration {
    @Value("${postgres.host}")
    private String host;
    @Value("${postgres.protocol}")
    private String protocol;
    @Value("${postgres.username}")
    private String username;
    @Value("${postgres.password}")
    private String password;
    @Value("${postgres.database}")
    private String database;

    @Override
    @Bean
    public ConnectionFactory connectionFactory() {
        final ConnectionFactory connectionFactory = ConnectionFactories.get(ConnectionFactoryOptions.builder()
                .option(ConnectionFactoryOptions.DRIVER, "pool")
                .option(ConnectionFactoryOptions.PROTOCOL, protocol)
                .option(ConnectionFactoryOptions.HOST, host)
                .option(ConnectionFactoryOptions.USER, username)
                .option(ConnectionFactoryOptions.PASSWORD, password)
                .option(ConnectionFactoryOptions.DATABASE, database)
                .option(MAX_SIZE, 1000)
                .option(INITIAL_SIZE, 1)
                .build());
        return connectionFactory;
    }

    @Bean
    ReactiveTransactionManager transactionManager(ConnectionFactory connectionFactory) {
        return new R2dbcTransactionManager(connectionFactory);
    }
}
Please let me know if the issue is with the way the reactive code is written or with the DB configuration.
EDIT 2:
Postgres logs : The DB name is planPackage.
2021-03-07 16:26:47.389 IST [24368] postgres@planpackage LOG: could not receive data from client: Connection timed out
The timestamps of the two logs don't match because our deployment VM has its timezone set to GMT, but the Postgres one is IST.

Connect to Cosmos using key from Key Vault

I have a Spring Boot application which needs to make use of CosmosDB. My goal is to load the CosmosDB connection key from Key Vault and use that to connect to CosmosDB.
I have placed the key as a secret in Key Vault, but it seems there is an ordering issue, as the Cosmos bean is created before the Key Vault one. I am able to connect successfully to Key Vault and have retrieved several keys this way before, and I am also able to connect to Cosmos if I hard-code the connection key.
Is it possible to load the key from Key Vault and use it to create the Cosmos bean?
What I have tried is the following, but I receive a connection error from Cosmos (due to the key not being set), probably because the bean loads before Key Vault. Is there a robust way to connect to Cosmos, or are any proper examples available for Spring Boot?
Dependencies I am using:
azure-cosmosdb-spring-boot-starter (from com.microsoft.azure)
azure-identity (from com.azure)
azure-security-keyvault-secrets (from com.azure)
CosmosConfiguration.java class:
@Slf4j
@Configuration
@Profile("!local")
public class CosmosConfiguration extends AbstractCosmosConfiguration {
    @Value("${cosmosPrimaryKey}")
    private String key;

    @Override
    public CosmosClient cosmosClient(CosmosDBConfig config) {
        return CosmosClient
                .builder()
                .endpoint(config.getUri())
                .cosmosKeyCredential(new CosmosKeyCredential(key))
                .consistencyLevel(ConsistencyLevel.STRONG)
                .build();
    }
}
The application.properties (only the relevant parts):
azure.keyvault.enabled=true
azure.keyvault.uri=https://mykeyvault.azure.net
azure.keyvault.secrets-keys=cosmosPrimaryKey
cosmosdb.keyname=cosmosPrimaryKey
azure.cosmosdb.uri=https://mycosmos.documents.azure.com:443
azure.cosmodb.repositories.enabled=true
spring.main.allow-bean-definition-overriding=true
My suggestion for your case is to add a check when creating the CosmosClient. Here's my code:
@Autowired
private CosmosProperties properties;

public CosmosClientBuilder cosmosClientBuilder() {
    DirectConnectionConfig directConnectionConfig = DirectConnectionConfig.getDefaultConfig();
    String uri = properties.getUri();
    if (true) {
        String temp = getConnectUriFromKeyvault();
        properties.setUri(temp);
    }
    return new CosmosClientBuilder()
            .endpoint(properties.getUri())
            .key(properties.getKey())
            .directMode(directConnectionConfig);
}

public String getConnectUriFromKeyvault() {
    SecretClient secretClient = new SecretClientBuilder()
            .vaultUrl("https://vauxxxxen.vault.azure.net/")
            .credential(new DefaultAzureCredentialBuilder().build())
            .buildClient();
    KeyVaultSecret secret = secretClient.getSecret("cosmosdbScanWithwrongkey");
    return secret.getValue();
}
CosmosProperties entity:
import org.springframework.boot.context.properties.ConfigurationProperties;

@ConfigurationProperties(prefix = "cosmos")
public class CosmosProperties {
    private String uri;
    private String key;
    private String secondaryKey;
    private boolean queryMetricsEnabled;
    // getters and setters
    // ...
}
application.properties:
cosmos.uri=https://txxxb.documents.azure.com:443/
cosmos.key=gdvBggxxxxxWA==
cosmos.secondaryKey=wDcxxxfinXg==
dynamic.collection.name=spel-property-collection
# Populate query metrics
cosmos.queryMetricsEnabled=true
I followed this doc to get key vault secret.

AWS ElastiCache (Redis) failed to connect (Jedis connection error) when accessed locally through Spring Boot Java

I am working on a Spring Boot application where I have to store OTPs in AWS ElastiCache (Redis).
Is ElastiCache the right choice to store OTPs?
Using Redis to store OTP
To connect to Redis locally I ran "sudo apt-get install redis-server". It installed and ran successfully.
I created a RedisConfig where I read the port and hostname from the application config file. The idea was to use this hostname and port to connect to AWS ElastiCache later, but right now I am running locally.
public class RedisConfig {
    @Value("${redis.hostname}")
    private String redisHostName;
    @Value("${redis.port}")
    private int redisPort;

    @Bean
    protected JedisConnectionFactory jedisConnectionFactory() {
        return new JedisConnectionFactory();
    }

    @Bean
    public RedisTemplate<String, Integer> redisTemplate() {
        final RedisTemplate<String, Integer> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(jedisConnectionFactory());
        return redisTemplate;
    }
}
Now I used the RedisTemplate and ValueOperations to put and read the data in the Redis cache:
public class MyService {
    private RedisTemplate<String, Integer> redisTemplate;
    private ValueOperations<String, Integer> valueOperations;

    public MyService(RedisTemplate<String, Integer> redisTemplate) {
        super();
        this.redisTemplate = redisTemplate;
        valueOperations = redisTemplate.opsForValue();
    }

    public int generateOTP(String key) throws Exception {
        try {
            Random random = new Random();
            int otp = 1000 + random.nextInt(9000);
            valueOperations.set(key, otp, 120, TimeUnit.SECONDS);
            return otp;
        } catch (Exception e) {
            throw new Exception("Exception while setting otp" + e.getMessage());
        }
    }

    public int getOtp(String key) {
        try {
            return valueOperations.get(key);
        } catch (Exception e) {
            return 0;
        }
    }
}
Now this is what I have done, and it is running perfectly locally.
Questions I have:
What changes do I need when deploying the application to an EC2 instance? Do we need to configure the hostname and port in the code?
If we need to configure them, is there a way to test locally what would happen when we deploy? Can we simulate that environment somehow?
I have read that to access AWS ElastiCache (Redis) locally we have to set up a proxy server, which is not good practice; so how can we easily build the app locally and deploy it to the cloud?
Why does ValueOperations not have a "delete" method when it has set and put methods? How can I invalidate the cache once its usage is done, before the expiry time?
Accessing the AWS cache locally:
When I tried to access AWS ElastiCache (Redis) by putting the port and hostname into the creation of the JedisConnectionFactory instance:
@Bean
protected JedisConnectionFactory jedisConnectionFactory() {
    RedisStandaloneConfiguration configuration = new RedisStandaloneConfiguration(redisHostName, redisPort);
    JedisConnectionFactory factory = new JedisConnectionFactory(configuration);
    return factory;
}
I got an error while setting the key value:
Cannot get Jedis connection; nested exception is
redis.clients.jedis.exceptions.JedisConnectionException: Could not get
a resource from the pool
I have tried to explain what I have done and what I need to know.
If anybody knows any blog or resource where these things are covered in detail, please direct me there.
After posting the question, I tried things myself.
As per Amazon:
Your Amazon ElastiCache instances are designed to be accessed through
an Amazon EC2 instance.
To connect to Redis locally on Linux:
Run "sudo apt-get install redis-server". It will install the Redis server.
Run "redis-cli". Redis will now be running on localhost:6379.
To connect to the server in Java (Spring Boot):
RedisConfig
For local, in application.properties: redis.hostname = localhost, redis.port = 6379
For cloud, or when deployed to EC2: redis.hostname = "Amazon ElastiCache endpoint", redis.port = 6379
public class RedisConfig {
    @Value("${redis.hostname}")
    private String redisHostName;
    @Value("${redis.port}")
    private int redisPort;

    @Bean
    protected JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration configuration = new RedisStandaloneConfiguration(redisHostName, redisPort);
        JedisConnectionFactory factory = new JedisConnectionFactory(configuration);
        return factory;
    }

    @Bean
    public RedisTemplate<String, Integer> redisTemplate() {
        final RedisTemplate<String, Integer> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(jedisConnectionFactory());
        return redisTemplate;
    }
}
With this, whether you are running locally or in the cloud, you just need to change the URL and things will work perfectly.
After this, use RedisTemplate and ValueOperations to put and read the data in the Redis cache, the same as mentioned in the question above. No other changes are needed.
Answers to the questions:
We need to change the hostname when deploying to the EC2 instance.
Running the Redis server locally is exactly the same as running Redis when the application is deployed on EC2; no changes are needed beyond the hostname, using the Redis config above.
Yes, don't create a proxy server; that defeats the very idea of the cache. Run locally with the Redis server and change the hostname when deploying.
I still need to find a way to invalidate the cache when using ValueOperations.
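On that last invalidation point: as far as I know, deletion is exposed on RedisTemplate itself rather than on ValueOperations, so a sketch along these lines (reusing the template bean from the question; untested here, and the method name is made up) might work:

```java
// Sketch only: assumes the RedisTemplate<String, Integer> bean shown earlier.
// RedisTemplate#delete removes the key immediately, before its TTL expires.
public boolean invalidateOtp(String key) {
    Boolean deleted = redisTemplate.delete(key); // returns a Boolean in Spring Data Redis 2.x
    return Boolean.TRUE.equals(deleted);
}
```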

How to configure a connection to a mongo database on MongoLab using the Java MongoDB driver and Java configuration (not XML configuration)?

Currently I'm using Amazon EC2 to host my mongo database. Below is the code for my MongoConfig file in Java using the Java MongoDB driver, and it's working fine.
@Configuration
@EnableMongoRepositories
public class MongoConfig extends AbstractMongoConfiguration
{
    @Value("my_amazone_ec2_host")
    private String host;
    @Value("27017")
    private Integer port;
    @Value("my_database_name")
    private String database;
    @Value("database_admin")
    private String username;
    @Value("admin_pass")
    private String password;

    @Override
    public String getDatabaseName()
    {
        return database;
    }

    @Override
    @Bean
    public Mongo mongo() throws Exception
    {
        return new MongoClient(
                singletonList( new ServerAddress( host, port ) ),
                singletonList( MongoCredential.createCredential( username,
                        database, password.toCharArray() ) ) );
    }
}
Now I want to use MongoLab to host my database, and MongoLab provides a URI to connect to the mongo db, something like this:
mongodb://<dbuser>:<dbpassword>@ser_num.mongolab.com:port/database_name
I tried to modify my host name with this URI but was not successful. Can anyone help me configure this file?
I'm using only Java configuration, not XML configuration; MongoDB version 3.
I just found the solution by substituting the relevant information from the MongoLab URI:
@Value("ser_num.mongolab.com")
private String host;
@Value("port")
private Integer port;
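An alternative that may be worth noting (a sketch, not verified against this setup): driver 3.x also accepts the whole MongoLab URI via MongoClientURI, so the bean could pass the connection string straight through, placeholders and all:

```java
@Override
@Bean
public Mongo mongo() throws Exception
{
    // Placeholders as given by MongoLab; substitute your own values.
    return new MongoClient(
            new MongoClientURI("mongodb://<dbuser>:<dbpassword>@ser_num.mongolab.com:port/database_name"));
}
```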

Check MQ queue depth

Can anyone help with Java code for getting the depth of queues? We have 4 queues in IBM WebSphere MQ, and there are messages inside them.
I want to write a JSP that reads the queue names and their depths while running the report.
How do I do that?
See http://blog.guymahieu.com/2008/06/11/getting-the-depth-of-an-mqseries-queue-from-java/.
I re-implemented this as follows:
import com.ibm.mq.*;

public class QueueManager {
    private final String host;
    private final int port;
    private final String channel;
    private final String manager;
    private final MQQueueManager qmgr;

    public QueueManager(String host, int port, String channel, String manager) throws MQException {
        this.host = host;
        this.port = port;
        this.channel = channel;
        this.manager = manager;
        this.qmgr = createQueueManager();
    }

    public int depthOf(String queueName) throws MQException {
        MQQueue queue = qmgr.accessQueue(queueName, MQC.MQOO_INQUIRE | MQC.MQOO_INPUT_AS_Q_DEF, null, null, null);
        return queue.getCurrentDepth();
    }

    @SuppressWarnings("unchecked")
    private MQQueueManager createQueueManager() throws MQException {
        MQEnvironment.channel = channel;
        MQEnvironment.port = port;
        MQEnvironment.hostname = host;
        MQEnvironment.properties.put(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES);
        return new MQQueueManager(manager);
    }
}
Put the following jars on your classpath:
com.ibm.mq*jar
j2ee.jar
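For completeness, a hypothetical usage sketch of the class above from the JSP/report side (the host, channel, manager, and queue names here are made up, not from the question):

```java
// Hypothetical connection details; replace with your own MQ settings.
QueueManager qm = new QueueManager("mqhost.example.com", 1414, "SYSTEM.DEF.SVRCONN", "QM1");
int depth = qm.depthOf("MY.QUEUE.NAME");
System.out.println("Depth of MY.QUEUE.NAME: " + depth);
```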
I saw a response suggesting to access the queue with the WebSphere MQ API.
Have you looked at the MBeans accessible in a JMX context? If I had to do this, I would expose it in an MBean.
See also "IBM JMS Topologies".
For monitoring and checking the status of resources, such as queue depths, there are a couple of options. The preferred option is to use the JMX MBeans provided with Application Server for monitoring: JMSBasicFunction, JMSAdministration, and EmbeddedJMSAdministration.
You can access these MBeans through wsadmin or programmatically. Secondly, you can use the traditional WMQ administration utilities, such as runmqsc or MQExplorer, to look at queues and other resources. If you do use these utilities, it is essential that you do not make any configuration changes to the Application Server queue manager and queues. These resources are under the control of Application Server; making changes to them using the MQ utilities results in a non-functioning configuration.
I don't know if you are on a WAS server or whether these are still the same MBeans, but you should find equivalent MBeans on your application server.
