One of our applications uses MongoClient as its core interface to MongoDB, on which authentication has recently been enabled. The MongoClient is initialized as:
mongoClient = new MongoClient(serverAddress, Arrays.asList(MongoCredential.createCredential(userName, dbName, password.toCharArray())));
However, in many places the app queries data through MongoTemplate. If the MongoTemplate is created as:
new MongoTemplate(mongoClient, dbName);
it leads to an authentication failure.
The only way to pass user credentials to MongoTemplate appears to be via the UserCredentials class, i.e. the constructor:
public MongoTemplate(Mongo mongo, String databaseName, UserCredentials userCredentials)
However, passing UserCredentials results in:
Usage of 'UserCredentials' with 'MongoClient' is no longer supported. Please use 'MongoCredential' for 'MongoClient' or just 'Mongo'.
It seems two different APIs exist in parallel. What is the best way to make them work together?
The app uses spring-data-mongodb version 1.10.6.RELEASE.
Try this:
MongoCredential mongoCredential = MongoCredential.createCredential("user", "database","password".toCharArray());
ServerAddress address = new ServerAddress("mymongo.mycompany.com", 62797);
MongoClient mongoClient = new MongoClient(address, Arrays.asList(mongoCredential));
MongoTemplate mongoTemplate = new MongoTemplate(mongoClient, "database");
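Alternatively, the same credentials can travel in a MongoDB connection string (which the driver's MongoClientURI accepts). A minimal sketch of assembling that URI; the user, password, host, port, and database below are placeholders, and a real password would need URL-encoding if it contains special characters:

```java
// Sketch: assembling a MongoDB connection URI that carries the credentials,
// so a client can be built from a single string instead of a
// ServerAddress + MongoCredential pair. All values below are placeholders.
public class MongoUriSketch {

    static String buildUri(String user, String password, String host, int port, String db) {
        // Format: mongodb://user:password@host:port/database
        return String.format("mongodb://%s:%s@%s:%d/%s", user, password, host, port, db);
    }

    public static void main(String[] args) {
        System.out.println(buildUri("appUser", "s3cret", "mymongo.mycompany.com", 62797, "database"));
    }
}
```

The resulting string would then be wrapped as `new MongoClient(new MongoClientURI(uri))`, and the MongoTemplate built on top of that client as in the snippet above.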
Try this configuration:
@Configuration
public class MongoConfiguration {

    @Bean
    public MongoDbFactory mongoDbFactory() throws Exception {
        UserCredentials userCredentials = new UserCredentials("YOUR_USER_NAME", "YOUR_PASSWORD");
        return new SimpleMongoDbFactory(new Mongo(), "YOUR_DATABASE", userCredentials);
    }

    @Bean
    public MongoTemplate mongoTemplate() throws Exception {
        return new MongoTemplate(mongoDbFactory());
    }
}
And to create database repositories, just use MongoRepository like this:
public interface UserRepository extends MongoRepository<User, Serializable> {
    User findById(String id);
}
In this case the problem turned out to be on the mongod server side, in how user authentication was applied. The required authentication had been set up and validated with the
db.auth('user', 'pass');
command, which returned 1. However, the database did not exist at that time. The database was later created by inserting a dummy record, and permissions were then assigned.
The app uses an entirely different DB for unit tests, and it looks like the configuration had not been applied correctly there, which is where this issue was arising. Once that was corrected,
new MongoClient(serverAddress, Arrays.asList(MongoCredential.createCredential(userName, dbName, password.toCharArray())));
works fine. That said, the Mongo driver's errors are a bit cryptic and offer little explanation, which makes debugging time-consuming.
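For reference, the server-side setup described above can be sketched as a mongo shell session; the database name, user, and role below are illustrative, not the actual values used:

```
use appdb                          // switch to the target database
db.dummy.insert({ probe: 1 })      // create the database by inserting a dummy record
db.createUser({
  user: "user",
  pwd: "pass",
  roles: [ { role: "readWrite", db: "appdb" } ]
})
db.auth('user', 'pass')            // returns 1 on success
```

The key detail is that the user must be created in (and authenticated against) the database the driver passes to MongoCredential.createCredential.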
Related
I'm trying to modify an existing Java app (WildFly, JBoss, Oracle) which currently works fine using a persistence-unit and EntityManager to connect to an Oracle database (via standalone.xml and persistence.xml). However, I need to create a new connection to the database every time a user calls a new GET API endpoint, using credentials from the HttpHeaders. Currently I'm creating a new EntityManager object whose session is committed, rolled back, and closed. Unfortunately, the response time for every call becomes higher and higher, there is a warning about "PersistenceUnitUser" being already registered, and memory usage is constantly growing. So that is a bad solution.
Is there any proper way to do it which works without any harm?
P.S.
Currently the app uses standalone.xml and persistence.xml, and that works fine. I'm calling a Java API endpoint whose EntityManager is connected as the Admin user/pass, but I need to create a new connection using the user/pass from the HttpHeaders and call one SQL statement to see proper results, because Oracle uses reserved words such as 'user'. For instance: select * from table where create_usr = user. When done, the main EntityManager will use data from it to continue some process.
Please see the code example below:
@GET
@Path("/todo-list-enriched")
@Produces(MediaType.APPLICATION_JSON)
public Response getToDoListEnriched(@Context HttpHeaders httpHeaders, @QueryParam("skip") int elementNumber, @QueryParam("take") int pageSize, @QueryParam("orderby") String orderBy)
{
    String userName = httpHeaders.getHeaderString(X_USER_NAME);
    String password = httpHeaders.getHeaderString(X_PASSWORD);
    EntityManager entityManager = null;
    try {
        Map<String, String> persistenceMap = new HashMap<String, String>();
        persistenceMap.put("hibernate.dialect", "org.hibernate.dialect.Oracle8iDialect");
        persistenceMap.put("hibernate.connection.username", userName);
        persistenceMap.put("hibernate.connection.password", password);
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("PersistenceUnitUser", persistenceMap);
        entityManager = emf.createEntityManager();
        if (!entityManager.getTransaction().isActive()) {
            entityManager.getTransaction().begin();
        }
        // do some work: select, update, select
        // and after that
        if (entityManager.getTransaction().isActive()) {
            entityManager.getTransaction().commit();
        }
    }
    catch (Exception ex)
    {
        if (entityManager != null && entityManager.getTransaction().isActive()) {
            entityManager.getTransaction().rollback();
        }
    }
    finally {
        if (entityManager != null && entityManager.isOpen()) {
            entityManager.close();
        }
    }
}
Best Regards
Marcin
You should define a connection pool and a datasource in standalone.xml (cf. https://docs.wildfly.org/26.1/Admin_Guide.html#DataSource), then use it in your persistence.xml and inject the EntityManager in your REST service class (cf. https://docs.wildfly.org/26.1/Developer_Guide.html#entity-manager).
You may look at this example application: https://github.com/wildfly/quickstart/tree/main/todo-backend
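A minimal sketch of what that wiring looks like; the JNDI name, pool sizes, URL, and credentials below are placeholders, and an Oracle JDBC driver module is assumed to be installed:

```xml
<!-- standalone.xml: container-managed, pooled datasource -->
<datasource jndi-name="java:jboss/datasources/AppDS" pool-name="AppDS">
    <connection-url>jdbc:oracle:thin:@dbhost:1521/ORCLPDB</connection-url>
    <driver>oracle</driver>
    <pool>
        <min-pool-size>5</min-pool-size>
        <max-pool-size>20</max-pool-size>
    </pool>
    <security>
        <user-name>admin</user-name>
        <password>admin-password</password>
    </security>
</datasource>

<!-- persistence.xml: the unit points at the datasource; the container then
     injects a managed EntityManager via @PersistenceContext, so no
     EntityManagerFactory is built per request -->
<persistence-unit name="PersistenceUnitUser">
    <jta-data-source>java:jboss/datasources/AppDS</jta-data-source>
</persistence-unit>
```

This avoids the per-call Persistence.createEntityManagerFactory from the question, which is what causes the growing response times and the "already registered" warning.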
I have a MongoDB with two redundant mongos router hosts. When using org.springframework.data.mongo to create a MongoTemplate and MongoClient, I can only add a single host, so if the host in use falls over, there is no failover to the alternate router host.
I initially referenced https://dzone.com/articles/multiple-mongodb-connectors-with-spring-boot, but the use case there is for two entirely different repositories, whereas my case is a single database with dual routers.
In the code below, we would like to add a redundant second host in case the first host fails during runtime.
public class MongoConfiguration extends AbstractMongoConfiguration {

    @Value("${mongo.database}")
    private String databaseName;

    @Value("${mongo.host}")
    private String host;

    @Value("${mongo.readFromSecondary}")
    private String readFromSecondary;

    @Value("${mongo.port}")
    private int port;

    @VaultKey("vault.mongo_username")
    private String username;

    @VaultKey("vault.mongo_password")
    private String password;

    @Override
    protected String getDatabaseName() {
        return databaseName;
    }

    @Override
    @Primary
    public MongoClient mongoClient() {
        final ServerAddress serverAddress = new ServerAddress(host, port);
        final MongoCredential credential = MongoCredential.createCredential(username,
                getDatabaseName(), password.toCharArray());
        return new MongoClient(serverAddress, credential,
                MongoClientOptions.builder().build());
    }

    @Override
    @Primary
    @Bean(name = "mongoTemplate")
    public MongoTemplate mongoTemplate() throws Exception {
        final MongoTemplate template = super.mongoTemplate();
        if (this.readFromSecondary != null && Boolean.valueOf(this.readFromSecondary)) {
            template.setReadPreference(ReadPreference.secondary());
        }
        return template;
    }
}
Currently at startup, a connection to the host in the config file is loaded without error; we would like to rotate in a backup host.
You can achieve this in two ways:
1. One MongoClient with multiple server addresses (hosts):
MongoClient is a MongoDB client with internal connection pooling. For most applications, you should have one MongoClient instance for the entire JVM.
The following are equivalent, and all connect to the local database running on the default port:
MongoClient mongoClient1 = new MongoClient();
MongoClient mongoClient2 = new MongoClient("localhost");
MongoClient mongoClient3 = new MongoClient("localhost", 27017);
MongoClient mongoClient4 = new MongoClient(new ServerAddress("localhost"));
MongoClient mongoClient5 = new MongoClient(new ServerAddress("localhost"),
        new MongoClientOptions.Builder().build());
You can connect to a replica set using the Java driver by passing a ServerAddress list to the MongoClient constructor. For example:
MongoClient mongoClient = new MongoClient(Arrays.asList(
new ServerAddress("localhost", 27017),
new ServerAddress("localhost", 27018),
new ServerAddress("localhost", 27019)));
You can connect to a sharded cluster using the same constructor. MongoClient will auto-detect whether the servers are a list of replica set members or a list of mongos servers.
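The questioner's configuration reads a single `mongo.host` property. One way to keep that shape while seeding both routers is to accept a comma-separated host list and split it into host/port pairs, each of which would then back one `ServerAddress` in the seed-list constructor above. A minimal stdlib sketch of the parsing step; the property value, host names, and default-port handling are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: parsing a comma-separated "host[:port]" list (e.g. the value of
// a single mongo.hosts property) into host/port pairs. Entries without an
// explicit port fall back to MongoDB's default port.
public class SeedListParser {

    static final int DEFAULT_PORT = 27017;

    static List<String[]> parse(String hosts) {
        List<String[]> out = new ArrayList<>();
        for (String entry : hosts.split(",")) {
            String[] parts = entry.trim().split(":");
            String host = parts[0];
            String port = parts.length > 1 ? parts[1] : String.valueOf(DEFAULT_PORT);
            out.add(new String[] { host, port });
        }
        return out;
    }

    public static void main(String[] args) {
        for (String[] hp : parse("mongos-a:27017, mongos-b")) {
            System.out.println(hp[0] + ":" + hp[1]);
        }
    }
}
```

Each resulting pair would feed `new ServerAddress(host, Integer.parseInt(port))`, and the collected list would be passed to the `MongoClient(List<ServerAddress>, ...)` constructor shown above so the driver can fail over between the two mongos routers.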
By default, all read and write operations will be made on the primary, but it's possible to read from secondaries by changing the read preference:
mongoClient.setReadPreference(ReadPreference.secondaryPreferred());
By default, all write operations will wait for acknowledgment by the server, as the default write concern is WriteConcern.ACKNOWLEDGED.
2. Using multiple Mongo connectors and multiple MongoTemplates:
First of all, create the following @ConfigurationProperties class:
@ConfigurationProperties(prefix = "mongodb")
public class MultipleMongoProperties {

    private MongoProperties primary = new MongoProperties();
    private MongoProperties secondary = new MongoProperties();

    public MongoProperties getPrimary() { return primary; }
    public MongoProperties getSecondary() { return secondary; }
}
And then add the following properties in the application.yml
mongodb:
  primary:
    host: localhost
    port: 27017
    database: second
  secondary:
    host: localhost
    port: 27017
    database: second
Now it's necessary to create the MongoTemplates to bind the configuration given in the previous step.
@Configuration
@EnableConfigurationProperties(MultipleMongoProperties.class)
public class MultipleMongoConfig {

    private final MultipleMongoProperties mongoProperties;

    public MultipleMongoConfig(MultipleMongoProperties mongoProperties) {
        this.mongoProperties = mongoProperties;
    }

    @Primary
    @Bean(name = "primaryMongoTemplate")
    public MongoTemplate primaryMongoTemplate() throws Exception {
        return new MongoTemplate(primaryFactory(this.mongoProperties.getPrimary()));
    }

    @Bean(name = "secondaryMongoTemplate")
    public MongoTemplate secondaryMongoTemplate() throws Exception {
        return new MongoTemplate(secondaryFactory(this.mongoProperties.getSecondary()));
    }

    @Bean
    @Primary
    public MongoDbFactory primaryFactory(final MongoProperties mongo) throws Exception {
        return new SimpleMongoDbFactory(new MongoClient(mongo.getHost(), mongo.getPort()),
                mongo.getDatabase());
    }

    @Bean
    public MongoDbFactory secondaryFactory(final MongoProperties mongo) throws Exception {
        return new SimpleMongoDbFactory(new MongoClient(mongo.getHost(), mongo.getPort()),
                mongo.getDatabase());
    }
}
With the configuration above you'll be able to have two different MongoTemplates, primaryMongoTemplate and secondaryMongoTemplate, based on the custom configuration properties provided earlier in this guide.
More details: https://blog.marcosbarbero.com/multiple-mongodb-connectors-in-spring-boot/
I'm studying the Vert.x MongoClient API. I previously installed RESTHeart from Docker along with its own copy of MongoDB, so now I have the default configuration for RESTHeart and the default configuration of Mongo in docker-compose.yml:
MONGO_INITDB_ROOT_USERNAME: restheart
MONGO_INITDB_ROOT_PASSWORD: R3ste4rt!
I put the Vert.x MongoClient into a Verticle:
public class MongoClientVerticle extends AbstractVerticle {

    MongoClient mongoClient;
    String db = "monica";
    String collection = "sessions";
    String uri = "mongodb://localhost:27017";
    String username = "admin";
    String password = "password";
    MongoAuth authProvider;

    @Override
    public void start() throws Exception {
        JsonObject config = Vertx.currentContext().config();
        JsonObject mongoconfig = new JsonObject()
                .put("connection_string", uri)
                .put("db_name", db);
        mongoClient = MongoClient.createShared(vertx, mongoconfig);
        JsonObject authProperties = new JsonObject();
        authProvider = MongoAuth.create(mongoClient, authProperties);
        // authProvider.setHashAlgorithm(HashAlgorithm.SHA512);
        JsonObject authInfo = new JsonObject()
                .put("username", username)
                .put("password", password);
        authProvider.authenticate(authInfo, res -> {
            if (res.succeeded()) {
                User user = res.result();
                System.out.println("User " + user.principal() + " is now authenticated");
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}
and I built a simple query:
public void find(int limit) {
    JsonObject query = new JsonObject();
    FindOptions options = new FindOptions();
    options.setLimit(limit);
    mongoClient.findWithOptions(collection, query, options, res -> {
        if (res.succeeded()) {
            List<JsonObject> result = res.result();
            result.forEach(System.out::println);
        } else {
            res.cause().printStackTrace();
        }
    });
}
but when I access the db I get this error:
MongoQueryException: Query failed with error code 13 and error message 'there are no users authenticated' on server localhost:27017
What am I missing in the authentication process?
I'm using the latest RESTHeart + MongoDB and Vert.x 3.5.3.
To be clear, RESTHeart doesn't come with its own copy of MongoDB; it connects to any existing MongoDB instance. The instance you can start via docker-compose is for demo purposes only.
This question is really about Vert.x + MongoDB. I'm not an expert on it, but apparently Vert.x Auth Mongo does not use database accounts to authenticate users; it uses a specific collection (by default, the "user" collection). You could double-check the Vert.x docs to be sure about this.
However, note that RESTHeart's main purpose is to provide direct HTTP access to MongoDB, without the need to program any specific client or driver. So the side point is that if you are using Vert.x then you presumably don't need RESTHeart, and vice versa. Otherwise, you could simply connect to RESTHeart via Vert.x's HTTP client, entirely skipping the MongoClient API.
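If the goal is to authenticate the Vert.x MongoClient itself against the server's database accounts (rather than app-level users in a collection), one common approach is to embed the credentials in the connection string passed in the client config. A sketch with placeholder credentials; the authSource depends on where the account was created (the root user from MONGO_INITDB_ROOT_USERNAME lives in the admin database):

```
{
  "connection_string": "mongodb://myuser:mypassword@localhost:27017/?authSource=admin",
  "db_name": "monica"
}
```

With credentials in the connection string, the "there are no users authenticated" error from the query should no longer occur, and the MongoAuth provider is only needed if you also want application-level users.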
I'm writing a Java application with Apache Camel and a JDBC DataSource (Apache Commons DBCP).
I can create many connections, but all connections are handled by one Oracle process, which I can see in the sessions view (select * from v$session).
The question is how to connect from one Java application to different Oracle processes to improve performance. If you know a way to do it using other Java technologies not used in my example, that is also very interesting.
public static void main(String... args) throws Exception {
    Main main = new Main();
    String url = "some url";
    String user = "user";
    String password = "password";
    DataSource dataSource = setupDataSource(url, user, password);
    // bind dataSource into the registry
    main.bind("oraDataSource", dataSource);
    main.enableHangupSupport();
    main.addRouteBuilder(new MyRouteBuilder());
    main.run(args);
}

private static DataSource setupDataSource(String connectURI, String user, String password) {
    BasicDataSource ds = new BasicDataSource();
    ds.setDriverClassName("oracle.jdbc.driver.OracleDriver");
    ds.setMaxActive(20);
    LOG.info("max active conn: " + ds.getMaxActive());
    ds.setUsername(user);
    ds.setPassword(password);
    ds.setUrl(connectURI);
    return ds;
}
public class MyRouteBuilder extends RouteBuilder {

    Processor logProcessor = new LogProcessor();
    Processor createAnswer = new CreateAnswerProc();
    Processor dbPaymentProcessor = new DbPaymentQueryProcessor();

    /**
     * Let's configure the Camel routing rules using Java code...
     */
    public void configure() {
        from("rabbitmq://localhost:5672/ps.vin_test_send?exchangeType=topic&autoDelete=false&queue=ps.test_send_queue&concurrentConsumers=20&threadPoolSize=20")
            .unmarshal().json(JsonLibrary.Jackson, Payment[].class)
            .process(dbPaymentProcessor)
            .to("jdbc:oraDataSource")
            .process(logProcessor)
            .process(createAnswer)
            .to("rabbitmq://localhost:5672/ps.vin_test?username=test&password=test&exchangeType=topic&autoDelete=false&routingKey=ps.vin_test_key&queue=vin_test_queue&concurrentConsumers=20");
    }
}
So, the way to do this in Oracle is to change the DB properties from dedicated to shared server mode, which shares server processes between sessions. But it is not recommended and does not give any performance gain.
Creating multiple datasources only reduces performance.
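For reference, the dedicated-versus-shared choice can also be requested per connection in the thin-driver connect descriptor via `(SERVER=SHARED)` in the CONNECT_DATA section. A sketch of assembling such a URL; the host, port, and service name are placeholders:

```java
// Sketch: building an Oracle thin-driver URL whose CONNECT_DATA requests
// a shared server process instead of a dedicated one. Host, port, and
// service name below are placeholders.
public class OracleUrlSketch {

    static String sharedServerUrl(String host, int port, String service) {
        return "jdbc:oracle:thin:@(DESCRIPTION="
                + "(ADDRESS=(PROTOCOL=TCP)(HOST=" + host + ")(PORT=" + port + "))"
                + "(CONNECT_DATA=(SERVICE_NAME=" + service + ")(SERVER=SHARED)))";
    }

    public static void main(String[] args) {
        System.out.println(sharedServerUrl("dbhost", 1521, "orclpdb"));
    }
}
```

The resulting string would be passed to ds.setUrl(...) in the DBCP setup above; the server must of course be configured with shared server (dispatcher) support for the request to take effect.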
I'm using the Spring Framework with MongoTemplate. Bean initialization:
@Bean
public MongoTemplate mongoTemplate() throws Exception {
    MongoTemplate mongoTemplate = new MongoTemplate(mongoDbFactory());
    mongoTemplate.setWriteResultChecking(WriteResultChecking.EXCEPTION);
    return mongoTemplate;
}
In short, this code does not fail on a duplicate key:
collection = mTemplate.getCollection("col");
try {
    final WriteResult writeResult = collection.insert(edge);
} catch (DuplicateKeyException e) {
    log.warn("#error> edge already exists");
    return null;
}
writeResult._lastErrorResult is not null and has the relevant errors.
The unique index and the document I'm trying to insert:
collection.createIndex(new BasicDBObject("a", 1).append("b", 1), unique);
DBObject edge = new BasicDBObject("a", "123").append("b", "345");
I've also tried to catch Exception e, without success.
You need to set the WriteConcern of the MongoDB driver to ACKNOWLEDGED.
From the docs:
Write operations that use this write concern will wait for acknowledgement from the primary server before returning. Exceptions are raised for network issues and server errors.
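With the Spring setup from the question, that can be applied to the template after it is created; a one-line sketch, assuming the API names from spring-data-mongodb 1.x with the mongo-java-driver:

```java
// Make writes wait for server acknowledgement, so errors such as
// duplicate-key violations surface as exceptions on insert.
mongoTemplate.setWriteConcern(WriteConcern.ACKNOWLEDGED);
```

Combined with WriteResultChecking.EXCEPTION from the bean definition, the DuplicateKeyException catch block should then actually fire.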