I have a MongoDB cluster with two redundant mongos router hosts. When using org.springframework.data.mongo to create a MongoTemplate and MongoClient, I can only add a single host. In the event that the host in use falls over, there is no failover to the alternate router host.
I initially referenced https://dzone.com/articles/multiple-mongodb-connectors-with-spring-boot , but the use case there is for two entirely different repositories, whereas my case is a single database with dual routers.
In the code below, we would like to add a redundant second host in case the first host fails during runtime.
public class MongoConfiguration extends AbstractMongoConfiguration {

    @Value("${mongo.database}")
    private String databaseName;

    @Value("${mongo.host}")
    private String host;

    @Value("${mongo.readFromSecondary}")
    private String readFromSecondary;

    @Value("${mongo.port}")
    private int port;

    @VaultKey("vault.mongo_username")
    private String username;

    @VaultKey("vault.mongo_password")
    private String password;

    @Override
    protected String getDatabaseName() {
        return databaseName;
    }

    @Override
    @Primary
    public MongoClient mongoClient() {
        final ServerAddress serverAddress = new ServerAddress(host, port);
        final MongoCredential credential = MongoCredential.createCredential(username,
                getDatabaseName(), password.toCharArray());
        return new MongoClient(serverAddress, credential,
                MongoClientOptions.builder().build());
    }

    @Override
    @Primary
    @Bean(name = "mongoTemplate")
    public MongoTemplate mongoTemplate() throws Exception {
        final MongoTemplate template = super.mongoTemplate();
        if (this.readFromSecondary != null && Boolean.valueOf(this.readFromSecondary)) {
            template.setReadPreference(ReadPreference.secondary());
        }
        return template;
    }
}
Currently, at startup, a connection to the host in the config file is loaded without error; we would like to rotate in a backup host.
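For context, a minimal sketch of the direction taken in the answer below, applied to the configuration above. It assumes a hypothetical comma-separated mongo.hosts property (not in the original config) bound to a String hosts field listing both routers as host:port pairs:

    // Sketch only: replaces mongoClient() above. Assumes a hypothetical property such as
    // mongo.hosts=router-a:27017,router-b:27017 bound to a String field named hosts.
    // Requires java.util.ArrayList, java.util.Arrays and java.util.List imports.
    @Override
    @Primary
    public MongoClient mongoClient() {
        final List<ServerAddress> seeds = new ArrayList<>();
        for (final String entry : hosts.split(",")) {
            final String[] parts = entry.trim().split(":");
            seeds.add(new ServerAddress(parts[0], Integer.parseInt(parts[1])));
        }
        final MongoCredential credential = MongoCredential.createCredential(username,
                getDatabaseName(), password.toCharArray());
        // With a seed list the driver monitors both mongos routers and fails over automatically.
        return new MongoClient(seeds, credential, MongoClientOptions.builder().build());
    }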
You can achieve this in two ways:
1. Multiple server addresses (hosts) for a single MongoClient:
A MongoClient has internal connection pooling. For most applications, you should have one MongoClient instance for the entire JVM.
The following are equivalent, and all connect to the local database running on the default port:
MongoClient mongoClient1 = new MongoClient();
MongoClient mongoClient2 = new MongoClient("localhost");
MongoClient mongoClient3 = new MongoClient("localhost", 27017);
MongoClient mongoClient4 = new MongoClient(new ServerAddress("localhost"));
MongoClient mongoClient5 = new MongoClient(new ServerAddress("localhost"),
        new MongoClientOptions.Builder().build());
You can connect to a replica set using the Java driver by passing a ServerAddress list to the MongoClient constructor. For example:
MongoClient mongoClient = new MongoClient(Arrays.asList(
new ServerAddress("localhost", 27017),
new ServerAddress("localhost", 27018),
new ServerAddress("localhost", 27019)));
You can connect to a sharded cluster using the same constructor. MongoClient will auto-detect whether the servers are a list of replica set members or a list of mongos servers.
By default, all read and write operations will be made on the primary, but it's possible to read from secondaries by changing the read preference:
mongoClient.setReadPreference(ReadPreference.secondaryPreferred());
By default, all write operations will wait for acknowledgment by the server, as the default write concern is WriteConcern.ACKNOWLEDGED.
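As a small illustration (a sketch, not part of the original answer), both preferences can also be fixed once at client construction via MongoClientOptions; the hosts below are placeholders:

    // Sketch: configure read preference and write concern once, on the client itself.
    // Hosts are placeholders; substitute the real mongos/replica-set members.
    MongoClientOptions options = MongoClientOptions.builder()
            .readPreference(ReadPreference.secondaryPreferred()) // allow reads from secondaries
            .writeConcern(WriteConcern.ACKNOWLEDGED)             // the driver default, shown explicitly
            .build();
    MongoClient client = new MongoClient(Arrays.asList(
            new ServerAddress("localhost", 27017),
            new ServerAddress("localhost", 27018)), options);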
2. Using Multiple Mongo Connectors and Multiple Mongo Templates:
First of all create the following @ConfigurationProperties class.
@ConfigurationProperties(prefix = "mongodb")
public class MultipleMongoProperties {

    private MongoProperties primary = new MongoProperties();
    private MongoProperties secondary = new MongoProperties();

    public MongoProperties getPrimary() { return primary; }
    public MongoProperties getSecondary() { return secondary; }
}
And then add the following properties in the application.yml
mongodb:
  primary:
    host: localhost
    port: 27017
    database: second
  secondary:
    host: localhost
    port: 27017
    database: second
Now it’s necessary to create the MongoTemplates that bind to the configuration given in the previous step.
@Configuration
@EnableConfigurationProperties(MultipleMongoProperties.class)
public class MultipleMongoConfig {

    private final MultipleMongoProperties mongoProperties;

    public MultipleMongoConfig(MultipleMongoProperties mongoProperties) {
        this.mongoProperties = mongoProperties;
    }

    @Primary
    @Bean(name = "primaryMongoTemplate")
    public MongoTemplate primaryMongoTemplate() throws Exception {
        return new MongoTemplate(primaryFactory(this.mongoProperties.getPrimary()));
    }

    @Bean(name = "secondaryMongoTemplate")
    public MongoTemplate secondaryMongoTemplate() throws Exception {
        return new MongoTemplate(secondaryFactory(this.mongoProperties.getSecondary()));
    }

    @Bean
    @Primary
    public MongoDbFactory primaryFactory(final MongoProperties mongo) throws Exception {
        return new SimpleMongoDbFactory(new MongoClient(mongo.getHost(), mongo.getPort()),
                mongo.getDatabase());
    }

    @Bean
    public MongoDbFactory secondaryFactory(final MongoProperties mongo) throws Exception {
        return new SimpleMongoDbFactory(new MongoClient(mongo.getHost(), mongo.getPort()),
                mongo.getDatabase());
    }
}
With the configuration above you'll be able to have two different MongoTemplates based on the custom configuration properties provided earlier in this guide.
In the previous step we created two MongoTemplates: primaryMongoTemplate and secondaryMongoTemplate.
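As a small illustration (not from the linked guide), either template can then be injected by bean name wherever it's needed; CustomerService and the "customers" collection below are made-up examples:

    // Sketch: injecting the two templates by bean name. CustomerService and the
    // "customers" collection name are placeholders for illustration.
    @Service
    public class CustomerService {

        private final MongoTemplate primary;
        private final MongoTemplate secondary;

        public CustomerService(@Qualifier("primaryMongoTemplate") MongoTemplate primary,
                               @Qualifier("secondaryMongoTemplate") MongoTemplate secondary) {
            this.primary = primary;
            this.secondary = secondary;
        }

        public long countEverywhere() {
            // Same count issued against both connectors.
            return primary.count(new Query(), "customers")
                    + secondary.count(new Query(), "customers");
        }
    }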
More details: https://blog.marcosbarbero.com/multiple-mongodb-connectors-in-spring-boot/
Related
We are using Spring Boot (2.3.1) reactive programming in our project. The DB used is r2dbc-postgres (0.8.7). We are unable to find the root cause of why the APIs written using reactive code stop responding once they connect to the DB.
For example in the following code:
@Autowired
PlanPackageCurrencyPriceRepository planPackageCurrencyPriceRepository;

public Mono<Object> viewBySkuCodeAndCountryCode(String skuCode, String countryCode) {
    Mono<PlanPackageCurrencyPrice> planPackagePriceInfo = planPackageCurrencyPriceRepository
            .findBySkuCodeAndCountryCode(skuCode, countryCode);
    return planPackagePriceInfo.map(planInfo -> {
        PlanPackageCurrencyPriceDTO currencyPriceDTO = PlanPackageCurrencyPriceDTO.builder()
                .skuCode(planInfo.getSkuCode())
                .countryCode(planInfo.getCountryCode())
                .currencyCode(planInfo.getCurrencyCode())
                .price(planInfo.getPrice())
                .status(planInfo.getStatus())
                .build();
        if (planInfo.getStatus() == Status.ACTIVE) {
            final Mono<Boolean> monovalue = redisTemplate.opsForHash().put("getplanpackagecurrencycodeprice",
                    skuCode + countryCode, currencyPriceDTO);
            logger.info(REDIS_VALUE, monovalue.subscribe(System.out::println));
            return currencyPriceDTO;
        } else {
            logger.debug(serviceName.concat(LoggerConstants.PLAN_PACKAGE_GROUP_INFO_VIEW_DEBUG_LOG)
                    .concat(" No items found for Plan/Package Group Info for the sku code {} "), skuCode);
            throw new CustomException("VIEW_ERRORMESSAGE", HttpStatus.MULTI_STATUS, 10006);
        }
    });
}
When a query is made to the DB using planPackageCurrencyPriceRepository, the logs stop at this query; the following is the output seen right before the timeout:
2021-03-07 10:52:47.427 DEBUG 1 --- [tor-tcp-epoll-4] o.s.d.r.c.R2dbcTransactionManager : Acquired Connection [MonoRetry] for R2DBC transaction
2021-03-07 10:52:47.427 DEBUG 1 --- [tor-tcp-epoll-4] o.s.d.r.c.R2dbcTransactionManager : Switching R2DBC Connection [PooledConnection[PostgresqlConnection{client=io.r2dbc.postgresql.client.ReactorNettyClient@7d1a251f, codecs=io.r2dbc.postgresql.codec.DefaultCodecs@7925be64}]] to manual commit
After some time, the API responds with an error saying the connection timed out.
It then works fine if we restart our Docker container, but the same behaviour is observed again after some time. We are not able to find a solution for this intermittent behaviour.
Following is the DB configuration used:
@Configuration
@EnableR2dbcRepositories(basePackages = "com.crm.smsauth.postgresrepo")
public class DatabaseConfig extends AbstractR2dbcConfiguration {

    @Value("${postgres.host}")
    private String host;

    @Value("${postgres.protocol}")
    private String protocol;

    @Value("${postgres.username}")
    private String username;

    @Value("${postgres.password}")
    private String password;

    @Value("${postgres.database}")
    private String database;

    @Override
    @Bean
    public ConnectionFactory connectionFactory() {
        final ConnectionFactory connectionFactory = ConnectionFactories.get(ConnectionFactoryOptions.builder()
                .option(ConnectionFactoryOptions.DRIVER, "pool")
                .option(ConnectionFactoryOptions.PROTOCOL, protocol)
                .option(ConnectionFactoryOptions.HOST, host)
                .option(ConnectionFactoryOptions.USER, username)
                .option(ConnectionFactoryOptions.PASSWORD, password)
                .option(ConnectionFactoryOptions.DATABASE, database)
                .option(MAX_SIZE, 1000)
                .option(INITIAL_SIZE, 1)
                .build());
        return connectionFactory;
    }

    @Bean
    ReactiveTransactionManager transactionManager(ConnectionFactory connectionFactory) {
        return new R2dbcTransactionManager(connectionFactory);
    }
}
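For reference, a sketch of the same connectionFactory() bean with additional pool options from io.r2dbc.pool.PoolingConnectionFactoryProvider that cap idle time and validate pooled connections before reuse; the chosen values are illustrative assumptions, not a confirmed fix for the timeout:

    // Sketch: drop-in variant of connectionFactory() above. MAX_IDLE_TIME and
    // VALIDATION_QUERY are static imports from io.r2dbc.pool.PoolingConnectionFactoryProvider
    // (like MAX_SIZE/INITIAL_SIZE already used above); Duration is java.time.Duration.
    // The values chosen here are illustrative assumptions.
    @Override
    @Bean
    public ConnectionFactory connectionFactory() {
        return ConnectionFactories.get(ConnectionFactoryOptions.builder()
                .option(ConnectionFactoryOptions.DRIVER, "pool")
                .option(ConnectionFactoryOptions.PROTOCOL, protocol)
                .option(ConnectionFactoryOptions.HOST, host)
                .option(ConnectionFactoryOptions.USER, username)
                .option(ConnectionFactoryOptions.PASSWORD, password)
                .option(ConnectionFactoryOptions.DATABASE, database)
                .option(INITIAL_SIZE, 1)
                .option(MAX_SIZE, 20)                         // sized to the workload instead of 1000
                .option(MAX_IDLE_TIME, Duration.ofMinutes(5)) // recycle idle pooled connections
                .option(VALIDATION_QUERY, "SELECT 1")         // validate a connection before reuse
                .build());
    }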
Please let me know if the issue is with the way the reactive code is written or with the DB configuration.
EDIT 2:
Postgres logs : The DB name is planPackage.
2021-03-07 16:26:47.389 IST [24368] postgres@planpackage LOG: could not receive data from client: Connection timed out
The timestamps of the two logs don't match because our deployment VM's timezone is set to GMT, but the Postgres one is IST.
Issue:
1. I am not using Spring Boot MongoAutoConfiguration because we need Mongo to be optional.
2. Other applications in the same namespace can access MongoDB, and the network namespace is the same for the application namespace and the database namespace.
3. When I try to connect, I get a timeout exception.
4. The same setup worked on my local machine.
POINTS ALREADY VERIFIED:
1. Checked that mongod is up and running; another app in the same namespace is able to access it, but it uses the Spring Mongo implementation.
2. No network issue.
3. There are Stack Overflow posts for the same exception; those suggestions have already been tested and did not work.
a) application.properties:
mongo.hosts = mongo-node-1.database, mongo-node-2.database, mongo-node-3.database
mongo.port = 27017
mongo.database = database
isMongoEnabled = true
b) MongoClient bean:
@Configuration
public class MongoConfiguration {

    @Value("#{'${mongo.hosts}'.split(',')}")
    private List<String> hosts;

    @Value("${mongo.port}")
    private int port;

    @Value("${isMongoEnabled}")
    private boolean isMongoEnabled;

    @Value("${mongo.database}")
    private String database;

    private Mongo createMongo() throws Exception {
        final List<ServerAddress> serverList = new ArrayList<>();
        for (final String host : hosts) {
            serverList.add(new ServerAddress(host, port));
        }
        return new MongoClient(serverList);
    }

    @Bean
    public Mongo mongoClient() throws Exception {
        final Mongo mongo = createMongo();
        return mongo;
    }
}
c) Template bean:
@Configuration
@EnableMongoRepositories(
        basePackages = "com.abc.test",
        mongoTemplateRef = "customMongoNodeTemplate"
)
@Import(MongoConfiguration.class)
public class TemplateConfiguration {

    @Value("${mongo.database}")
    private String database;

    @Bean
    public MongoTemplate customMongoNodeTemplate(@Qualifier("mongoClient") Mongo mongo) {
        final MongoDbFactory factory = new SimpleMongoDbFactory((MongoClient) mongo, database);
        return new MongoTemplate(factory);
    }
}
d) Exception:
com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]
UPDATE:
After trying SimpleMongoClientDBFactory(uri), I got the exception below.
Application.properties:
mongo.uri= mongodb://mongo-node-1.database:27017,mongo-node-2.database:27017,mongo-node-3.database:27017/database
Exception:
com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]
Spring Boot version: 2.1.6
MongoDB version: 4.0.6
Mongo driver version: 3.8.2
Spring Boot: 2.3.0.RELEASE
Disable auto-configuration with @SpringBootApplication(exclude = MongoAutoConfiguration.class).
To initiate a MongoTemplate we can use either:
MongoDatabaseFactory
com.mongodb.client.MongoClient [interface] (do not confuse it with the com.mongodb.MongoClient [class])
Implementation with MongoDatabaseFactory
application.properties:
mongo.uri=mongodb://mongo-node-1.database:27017,mongo-node-2.database:27017,mongo-node-3.database:27017/database
isMongoEnabled=true
Template bean
@Configuration
@EnableMongoRepositories(basePackages = "com.abc.test", mongoTemplateRef = "customMongoNodeTemplate")
public class TemplateConfiguration {

    @Value("${mongo.uri}")
    private String uri;

    @Bean
    public MongoTemplate customMongoNodeTemplate() {
        return new MongoTemplate(new SimpleMongoClientDatabaseFactory(uri));
    }
}
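For reference, a sketch of the second option listed above, building the template from a com.mongodb.client.MongoClient instead; the database name "database" simply mirrors the URI above, and this is a drop-in alternative to the class just shown:

    // Sketch: equivalent template built from the com.mongodb.client.MongoClient interface.
    @Configuration
    @EnableMongoRepositories(basePackages = "com.abc.test", mongoTemplateRef = "customMongoNodeTemplate")
    public class TemplateConfiguration {

        @Value("${mongo.uri}")
        private String uri;

        @Bean
        public MongoClient mongoClient() {
            // com.mongodb.client.MongoClients, not the legacy com.mongodb.MongoClient constructor.
            return MongoClients.create(uri);
        }

        @Bean
        public MongoTemplate customMongoNodeTemplate(MongoClient mongoClient) {
            return new MongoTemplate(mongoClient, "database");
        }
    }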
So this was an issue on my end. It turns out someone had recently committed a mongoClient to git, and when I took a pull it was a new MongoClient(), which on local went to localhost but on the cluster failed. The code was fine from the start. Thank you very much for bearing with me.
One of the applications uses MongoClient as the core for interacting with MongoDB, on which authentication has been enabled recently. In this app the mongoClient is initialized as:
mongoClient = new MongoClient(serverAddress, Arrays.asList(MongoCredential.createCredential(userName, dbName, password.toCharArray())));
However, in many places the app uses MongoTemplate to query the data. Now if the MongoTemplate is created as:
new MongoTemplate(mongoClient, dbName);
It leads to an authentication failure.
The only way to pass user credentials to a MongoTemplate seems to be via the UserCredentials class.
However, if we pass UserCredentials through
public MongoTemplate(Mongo mongo, String databaseName, UserCredentials userCredentials)
it results in:
Usage of 'UserCredentials' with 'MongoClient' is no longer supported. Please use 'MongoCredential' for 'MongoClient' or just 'Mongo'.
It seems like two different APIs exist in parallel. What's the best way to make both of them live together?
This app uses Spring Data MongoDB version 1.10.6.RELEASE.
Try this:
MongoCredential mongoCredential = MongoCredential.createCredential("user", "database","password".toCharArray());
ServerAddress address = new ServerAddress("mymongo.mycompany.com", 62797);
MongoClient mongoClient = new MongoClient(address, Arrays.asList(mongoCredential));
MongoTemplate mongoTemplate = new MongoTemplate(mongoClient, "database");
Try this configuration:
@Configuration
public class MongoConfiguration {

    @Bean
    public MongoDbFactory mongoDbFactory() throws Exception {
        UserCredentials userCredentials = new UserCredentials("YOUR_USER_NAME", "YOUR_PASSWORD");
        return new SimpleMongoDbFactory(new Mongo(), "YOUR_DATABASE", userCredentials);
    }

    @Bean
    public MongoTemplate mongoTemplate() throws Exception {
        return new MongoTemplate(mongoDbFactory());
    }
}
And to create database repositories just use MongoRepository like this:
public interface UserRepository extends MongoRepository<User, Serializable> {
    User findById(String id);
}
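For illustration (a sketch; UserService is a made-up class), the repository can then be injected like any other Spring bean:

    // Sketch: injecting and using the repository above. UserService is hypothetical.
    @Service
    public class UserService {

        private final UserRepository userRepository;

        public UserService(UserRepository userRepository) {
            this.userRepository = userRepository;
        }

        public User lookup(String id) {
            // Delegates to the derived query declared on the repository interface.
            return userRepository.findById(id);
        }
    }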
In this case it turned out to be a problem with how user authentication was applied on the mongod server end.
The needed authentication was applied to Mongo and was validated with the
db.auth('user','pass');
command, which returned 1. However, the database did not exist at that time. Afterwards the database was created first by inserting a dummy record, and the permissions were assigned.
The app uses an altogether different DB for unit test cases, and it looks like the configuration was not correctly applied there, which is where this issue was arising.
Once correctly applied,
new MongoClient(serverAddress, Arrays.asList(MongoCredential.createCredential(userName, dbName, password.toCharArray())));
seems to work fine. That said, the Mongo driver errors are a bit cryptic, without much explanation, which makes debugging time-consuming.
Currently I'm using Amazon EC2 to host my Mongo database. Below is the code for my MongoConfig file in Java, using the Java MongoDB driver, and it works fine.
@Configuration
@EnableMongoRepositories
public class MongoConfig extends AbstractMongoConfiguration {

    @Value("my_amazone_ec2_host")
    private String host;

    @Value("27017")
    private Integer port;

    @Value("my_database_name")
    private String database;

    @Value("database_admin")
    private String username;

    @Value("admin_pass")
    private String password;

    @Override
    public String getDatabaseName() {
        return database;
    }

    @Override
    @Bean
    public Mongo mongo() throws Exception {
        return new MongoClient(
                singletonList(new ServerAddress(host, port)),
                singletonList(MongoCredential.createCredential(username,
                        database, password.toCharArray())));
    }
}
Now I want to use MongoLab to host my database, and MongoLab provides a URI to connect to the Mongo DB, something like this:
mongodb://<dbuser>:<dbpassword>@ser_num.mongolab.com:port/database_name
I tried to modify my host name with this URI but was not successful. Can anyone help me configure this file?
I'm using only Java configuration, not XML configuration; MongoDB version 3.
I just found the solution by replacing the relevant information from the MongoLab URI:
@Value("ser_num.mongolab.com")
private String host;

@Value("port")
private Integer port;
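Alternatively (a sketch, not part of the original answer), the whole MongoLab URI can be handed to the driver directly, so the credentials and database name come from the connection string; the placeholder values mirror the URI format shown above:

    // Sketch: build the client straight from the MongoLab connection string.
    // The URI below is a placeholder in the same format as shown above.
    @Override
    @Bean
    public Mongo mongo() throws Exception {
        MongoClientURI uri = new MongoClientURI(
                "mongodb://dbuser:dbpassword@ser_num.mongolab.com:port/database_name");
        return new MongoClient(uri);
    }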
I'm writing a Java application with Apache Camel and use a JDBC DataSource (Apache Commons DBCP).
I can create many connections, but all of them are handled by a single Oracle process, which I can see in the sessions view (select * from v$session).
The question is how to connect from one Java application to different Oracle processes to improve performance. If you know a way to do it using other Java technologies not used in my example, that is also very interesting.
public static void main(String... args) throws Exception {
    Main main = new Main();
    String url = "some url";
    String user = "user";
    String password = "password";
    DataSource dataSource = setupDataSource(url, user, password);
    // bind dataSource into the registry
    main.bind("oraDataSource", dataSource);
    main.enableHangupSupport();
    main.addRouteBuilder(new MyRouteBuilder());
    main.run(args);
}

private static DataSource setupDataSource(String connectURI, String user, String password) {
    BasicDataSource ds = new BasicDataSource();
    ds.setDriverClassName("oracle.jdbc.driver.OracleDriver");
    ds.setMaxActive(20);
    LOG.info("max active conn: " + ds.getMaxActive());
    ds.setUsername(user);
    ds.setPassword(password);
    ds.setUrl(connectURI);
    return ds;
}
public class MyRouteBuilder extends RouteBuilder {

    Processor logProcessor = new LogProcessor();
    Processor createAnswer = new CreateAnswerProc();
    Processor dbPaymentProcessor = new DbPaymentQueryProcessor();

    /**
     * Let's configure the Camel routing rules using Java code...
     */
    public void configure() {
        from("rabbitmq://localhost:5672/ps.vin_test_send?exchangeType=topic&autoDelete=false&queue=ps.test_send_queue&concurrentConsumers=20&threadPoolSize=20")
            .unmarshal().json(JsonLibrary.Jackson, Payment[].class)
            .process(dbPaymentProcessor)
            .to("jdbc:oraDataSource")
            .process(logProcessor)
            .process(createAnswer)
            .to("rabbitmq://localhost:5672/ps.vin_test?username=test&password=test&exchangeType=topic&autoDelete=false&routingKey=ps.vin_test_key&queue=vin_test_queue&concurrentConsumers=20");
    }
}
So, the way to do this for Oracle is to change the DB server mode from dedicated to shared; it will then share server processes between users. But it is not recommended and does not give any performance gain.
Creating multiple data sources only reduces performance.
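For completeness, a sketch (assumptions only: the DBA must already have dispatchers/shared servers configured on the instance) of how a JDBC URL can request a shared-server connection instead of a dedicated one; everything else in setupDataSource stays the same:

    // Sketch: connect descriptor requesting SERVER=SHARED so sessions are multiplexed
    // over Oracle's shared server processes. Host and service names are placeholders,
    // and the instance must have DISPATCHERS/SHARED_SERVERS configured on the DB side.
    String sharedUrl = "jdbc:oracle:thin:@(DESCRIPTION="
            + "(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521))"
            + "(CONNECT_DATA=(SERVICE_NAME=orclsvc)(SERVER=SHARED)))";
    DataSource dataSource = setupDataSource(sharedUrl, "user", "password");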