I have a multi-tenant MongoDB application, and let's assume that the right connection to the right database is chosen based on the tenant name from an HTTP request header (I use a previously prepared properties file that lists the tenant names).
When the application starts, MongoDB is configured, but at that point I have no tenant information because no request has been sent yet, so I don't know which database to connect to. Is it possible to configure the MongoDB connection dynamically, at the moment I try to fetch data from a Mongo repository (when I do have the tenant name from the HTTP request)?
MongoDbConfiguration:
@Configuration
public class MongoDbConfiguration {

    private final MongoConnector mongoConnector;

    @Autowired
    public MongoDbConfiguration(MongoConnector mongoConnector) {
        this.mongoConnector = mongoConnector;
    }

    @Bean
    public MongoDbFactory mongoDbFactory() {
        return new MultiTenantSingleMongoDbFactory(mongoConnector, new MongoExceptionTranslator());
    }

    @Bean
    public MongoTemplate mongoTemplate() {
        return new MongoTemplate(mongoDbFactory());
    }
}
@Component
@Slf4j
public class MultiTenantMongoDbFactory extends SimpleMongoDbFactory {

    private static final Logger logger = LoggerFactory.getLogger(MultiTenantMongoDbFactory.class);

    private Map<String, DbConfig> tenantToDbConfig;
    private Map<String, MongoDatabase> tenantToMongoDatabase;

    @Autowired
    public MultiTenantMongoDbFactory(
            final @Qualifier("sibTenantContexts") Map<String, DbConfig> dbConfigs,
            final SibEnvironment env) {
        super(new MongoClientURI(env.getDefaultDatabase()));
        this.tenantToDbConfig = dbConfigs;
        // Initialize the tenantToMongoDatabase map.
        buildTenantDbs();
    }

    @Override
    public MongoDatabase getDb() {
        String tenantId = (!StringUtils.isEmpty(TenantContext.getId()) ? TenantContext.getId()
                : SibConstant.DEFAULT_TENANT);
        return this.tenantToMongoDatabase.get(tenantId);
    }

    /**
     * Create the tenantToMongoDatabase map.
     */
    @SuppressWarnings("resource")
    private void buildTenantDbs() {
        log.debug("Building tenant DB configuration.");
        this.tenantToMongoDatabase = new HashMap<>();
        /*
         * For each tenant, fetch its DbConfig, initialize a MongoClient and put the
         * resulting MongoDatabase into tenantToMongoDatabase.
         */
        for (Entry<String, DbConfig> idToDbconfig : this.tenantToDbConfig.entrySet()) {
            try {
                this.tenantToMongoDatabase.put(idToDbconfig.getKey(),
                        new MongoClient(new MongoClientURI(idToDbconfig.getValue().getUri()))
                                .getDatabase(idToDbconfig.getValue().getDatabase()));
            } catch (MongoException e) {
                log.error(e.getMessage(), e.getCause());
            }
        }
    }
}
Here, tenantToDbConfig is a bean that I create at application boot time, where I store the DB configuration (URL/database name) for every tenant. There is one default database which is required at boot time, and for every request I expect a tenantId in the request header.
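The factory resolves the tenant per call through TenantContext, which is not shown above. A minimal sketch of such a holder, plus a servlet filter that populates it from the request header, could look like the following (the class names, the X-TenantID header name and the use of OncePerRequestFilter are assumptions, not part of the original code):

import java.io.IOException;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

// Thread-local holder read by MultiTenantMongoDbFactory.getDb().
public final class TenantContext {

    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    public static void setId(String tenantId) {
        CURRENT.set(tenantId);
    }

    public static String getId() {
        return CURRENT.get();
    }

    public static void clear() {
        CURRENT.remove();
    }
}

// Separate file: copies the header value into the ThreadLocal before the
// controller/repository call and clears it afterwards.
@Component
public class TenantHeaderFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        try {
            TenantContext.setId(request.getHeader("X-TenantID")); // assumed header name
            filterChain.doFilter(request, response);
        } finally {
            TenantContext.clear();
        }
    }
}

With something like this in place, the database is effectively chosen lazily: nothing tenant-specific happens at startup beyond building the map, and getDb() picks the right MongoDatabase only when a repository call is made within a request.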
Related
I have a Spring Boot application with multi-schema multi-tenancy implemented. Without multi-tenancy, the response time of the same API was 300-400 ms. After implementing multi-tenancy, the response time jumped to 6-7 seconds (on the same server and the same schema).
I understand that additional processing is required to read the header, switch the database based on the header, and so on, but I feel it should not take 6-7 seconds. Can someone suggest how I can reduce this response time? Below are the classes added for multi-tenancy.
public class TenantAwareRoutingSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return ThreadLocalStorage.getTenantName();
    }
}
public class TenantNameInterceptor extends HandlerInterceptorAdapter {

    @Value("${schemas.list}")
    private String schemasList;

    private Gson gson = new Gson();

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        String tenantName = request.getHeader("tenant-id");
        if (StringUtils.isBlank(schemasList)) {
            response.setContentType("application/json");
            response.setCharacterEncoding("UTF-8");
            response.getWriter().write(gson.toJson(new Error("Tenants not initialized...")));
            response.setStatus(HttpServletResponse.SC_BAD_REQUEST);
            return false;
        }
        // Guard against a missing header before the (substring-based) schema check.
        if (StringUtils.isBlank(tenantName) || !schemasList.contains(tenantName)) {
            response.setContentType("application/json");
            response.setCharacterEncoding("UTF-8");
            response.getWriter().write(gson.toJson(new Error("User not allowed to access data")));
            response.setStatus(HttpServletResponse.SC_BAD_REQUEST);
            return false;
        }
        ThreadLocalStorage.setTenantName(tenantName);
        return true;
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception {
        ThreadLocalStorage.setTenantName(null);
    }

    @Setter
    @Getter
    @AllArgsConstructor
    public static class Error {
        private String message;
    }
}
public class ThreadLocalStorage {

    private static ThreadLocal<String> tenant = new ThreadLocal<>();

    public static void setTenantName(String tenantName) {
        tenant.set(tenantName);
    }

    public static String getTenantName() {
        return tenant.get();
    }
}
@Configuration
public class AutoDDLConfig {

    @Value("${spring.datasource.username}")
    private String username;

    @Value("${spring.datasource.password}")
    private String password;

    @Value("${schemas.list}")
    private String schemasList;

    @Value("${db.host}")
    private String dbHost;

    @Bean
    public DataSource dataSource() {
        AbstractRoutingDataSource multiDataSource = new TenantAwareRoutingSource();
        if (StringUtils.isBlank(schemasList)) {
            return multiDataSource;
        }
        String[] tenants = schemasList.split(",");
        Map<Object, Object> targetDataSources = new HashMap<>();
        for (String tenant : tenants) {
            System.out.println("####" + tenant);
            tenant = tenant.trim();
            DriverManagerDataSource dataSource = new DriverManagerDataSource();
            dataSource.setDriverClassName("com.mysql.jdbc.Driver"); // change here to the MySQL driver you use
            dataSource.setSchema(tenant);
            dataSource.setUrl("jdbc:mysql://" + dbHost + "/" + tenant
                    + "?autoReconnect=true&characterEncoding=utf8&useSSL=false&useTimezone=true&serverTimezone=Asia/Kolkata&useLegacyDatetimeCode=false&allowPublicKeyRetrieval=true");
            dataSource.setUsername(username);
            dataSource.setPassword(password);
            targetDataSources.put(tenant, dataSource);

            LocalContainerEntityManagerFactoryBean emfBean = new LocalContainerEntityManagerFactoryBean();
            emfBean.setDataSource(dataSource);
            emfBean.setPackagesToScan("com"); // mention your JPA entity package here; leaving it like this scans all packages
            emfBean.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
            emfBean.setPersistenceProviderClass(HibernatePersistenceProvider.class);
            Map<String, Object> properties = new HashMap<>();
            properties.put("hibernate.hbm2ddl.auto", "update");
            properties.put("hibernate.default_schema", tenant);
            properties.put("hibernate.dialect", "org.hibernate.dialect.MySQL5InnoDBDialect");
            emfBean.setJpaPropertyMap(properties);
            emfBean.setPersistenceUnitName(dataSource.toString());
            emfBean.afterPropertiesSet();
        }
        multiDataSource.setTargetDataSources(targetDataSources);
        multiDataSource.afterPropertiesSet();
        return multiDataSource;
    }
}
Snippet from application.properties
spring.datasource.username=<<username>>
spring.datasource.password=<<password>>
schemas.list=suncitynx,kalpavrish,riddhisiddhi,smartcity,businesspark
db.host=localhost
########## JPA Config ###############
spring.jpa.open-in-view=false
spring.jpa.show-sql=false
spring.jpa.hibernate.naming.physical-strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
spring.jpa.database=mysql
spring.datasource.initialize=false
spring.jpa.hibernate.ddl-auto=none
spring.jpa.database-platform=org.hibernate.dialect.MySQLDialect
spring.jpa.properties.hibernate.jdbc.time_zone = Asia/Kolkata
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect
##############Debug Logging#########################
#logging.level.org.springframework=DEBUG
#logging.level.org.hibernate.SQL=DEBUG
#logging.level.org.hibernate.type.descriptor.sql.BasicBinder=TRACE
######### HIkari Pool ##############
spring.datasource.hikari.maximum-pool-size=20
######### Jackson ############
spring.jackson.serialization.WRITE_ENUMS_USING_TO_STRING=true
spring.jackson.deserialization.READ_ENUMS_USING_TO_STRING=true
spring.jackson.time-zone: Asia/Kolkata
#common request logger
logging.level.org.springframework.web.filter.CommonsRequestLoggingFilter=DEBUG
#Multi part file size
spring.servlet.multipart.max-file-size = 15MB
spring.servlet.multipart.max-request-size = 15MB
Are you sure you maintain the connection pool per tenant?
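That is the key point: DriverManagerDataSource is not a connection pool, it opens a brand-new physical connection on every getConnection() call, so every query pays the full connect/handshake cost. A sketch of the same dataSource() bean backed by one HikariCP pool per tenant could look like this (HikariDataSource is com.zaxxer.hikari.HikariDataSource, already on the classpath with the Spring Boot JDBC/JPA starters; the pool size and the trimmed URL parameters are assumptions):

@Bean
public DataSource dataSource() {
    AbstractRoutingDataSource multiDataSource = new TenantAwareRoutingSource();
    Map<Object, Object> targetDataSources = new HashMap<>();
    for (String tenant : schemasList.split(",")) {
        tenant = tenant.trim();
        HikariDataSource ds = new HikariDataSource();
        ds.setPoolName(tenant + "-pool");
        ds.setDriverClassName("com.mysql.cj.jdbc.Driver");
        ds.setJdbcUrl("jdbc:mysql://" + dbHost + "/" + tenant + "?useSSL=false&serverTimezone=Asia/Kolkata");
        ds.setUsername(username);
        ds.setPassword(password);
        ds.setMaximumPoolSize(10); // connections are created once and reused across requests
        targetDataSources.put(tenant, ds);
    }
    multiDataSource.setTargetDataSources(targetDataSources);
    multiDataSource.afterPropertiesSet();
    return multiDataSource;
}

With pooled DataSources behind the AbstractRoutingDataSource, switching tenants only selects an already-open pool, which should bring the per-request overhead back down to the cost of the header lookup.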
I am trying to seed my multi-tenant (single database, multiple schemas) system with data, but I am running into an issue that wasn't present when I was using the same code with a single database. I fully expect that I have missed something obvious during my research.
Each schema contains exactly the same table structure.
Here is my Tenant Context
public class TenantContext {

    public static final String DEFAULT_TENANT_IDENTIFIER = "public";

    private static final ThreadLocal<String> TENANT_IDENTIFIER = new ThreadLocal<>();

    public static void setTenant(String tenantIdentifier) {
        TENANT_IDENTIFIER.set(tenantIdentifier);
    }

    public static void reset(String tenantIdentifier) {
        TENANT_IDENTIFIER.remove();
    }

    @Component
    public static class TenantIdentifierResolver implements CurrentTenantIdentifierResolver {

        @Override
        public String resolveCurrentTenantIdentifier() {
            String currentTenantId = TENANT_IDENTIFIER.get();
            return currentTenantId != null ? currentTenantId : DEFAULT_TENANT_IDENTIFIER;
        }

        @Override
        public boolean validateExistingCurrentSessions() {
            return false;
        }
    }
}
And my HibernateConfig
@Configuration
public class HibernateConfig {

    @Autowired
    private JpaProperties jpaProperties;

    @Bean
    public JpaVendorAdapter jpaVendorAdapter() {
        return new HibernateJpaVendorAdapter();
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource,
            MultiTenantConnectionProvider multiTenantConnectionProvider,
            CurrentTenantIdentifierResolver currentTenantIdentifierResolver) {
        Map<String, Object> jpaPropertiesMap = new HashMap<>();
        jpaPropertiesMap.putAll(jpaProperties.getProperties());
        jpaPropertiesMap.put(Environment.MULTI_TENANT, MultiTenancyStrategy.SCHEMA);
        jpaPropertiesMap.put(Environment.MULTI_TENANT_CONNECTION_PROVIDER, multiTenantConnectionProvider);
        jpaPropertiesMap.put(Environment.MULTI_TENANT_IDENTIFIER_RESOLVER, TenantContext.TenantIdentifierResolver.class);

        LocalContainerEntityManagerFactoryBean entityManagerFactoryBean = new LocalContainerEntityManagerFactoryBean();
        entityManagerFactoryBean.setDataSource(dataSource);
        entityManagerFactoryBean.setPackagesToScan(UppStudentAppBeApplication.class.getPackage().getName());
        entityManagerFactoryBean.setJpaVendorAdapter(jpaVendorAdapter());
        entityManagerFactoryBean.setJpaPropertyMap(jpaPropertiesMap);
        return entityManagerFactoryBean;
    }
}
And my TenantConnectionProvider
@Component
public class TenantConnectionProvider implements MultiTenantConnectionProvider {

    private static Logger logger = LoggerFactory.getLogger(TenantConnectionProvider.class);

    @Autowired
    private DataSource dataSource;

    public TenantConnectionProvider(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public Connection getAnyConnection() throws SQLException {
        return dataSource.getConnection();
    }

    @Override
    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }

    @Override
    public Connection getConnection(String tenantIdentifier) throws SQLException {
        logger.info("Get connection for tenant " + String.join(":", tenantIdentifier));
        final Connection connection = getAnyConnection();
        try {
            //connection.createStatement().execute(String.format("SET SCHEMA \"%s\";", tenantIdentifier));
            connection.setSchema(tenantIdentifier);
        } catch (SQLException e) {
            throw new HibernateException(
                    "Could not alter JDBC connection to specified schema [" + tenantIdentifier + "]", e);
        }
        return connection;
    }

    @Override
    public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException {
        try {
            //connection.createStatement().execute(String.format("SET SCHEMA \"%s\";", TenantContext.DEFAULT_TENANT_IDENTIFIER));
            connection.setSchema(TenantContext.DEFAULT_TENANT_IDENTIFIER);
        } catch (SQLException e) {
            throw new HibernateException(
                    "Could not alter JDBC connection to specified schema [" + tenantIdentifier + "]", e);
        }
        releaseAnyConnection(connection);
    }

    @Override
    public boolean supportsAggressiveRelease() {
        return false;
    }

    @Override
    public boolean isUnwrappableAs(Class unwrapType) {
        return false;
    }

    @Override
    public <T> T unwrap(Class<T> unwrapType) {
        return null;
    }
}
I call my seeding class, which builds out my tenants and schemas using Flyway migrations.
I then loop through the saved tenants, switching the TenantContext, which appears to work when debugging. However, when I try to do anything with the repository, I get the following error:
o.h.engine.jdbc.spi.SqlExceptionHelper : ERROR: column campus0_.createdat does not exist
Hint: Perhaps you meant to reference the column "campus0_.created_at".
Position: 32
As I said earlier, this worked fine previously when it was a single database and schema. I am not 100% sure where I have gone wrong. Am I supposed to register the schemas somehow? If so, how can I onboard new tenants without redeploying? Should I use a custom query at this stage that uses the schema in the repository?
Thank you in advance for any help or advice.
EDIT 1
I have now got past my initial hurdle by checking the Hibernate properties, i.e. by changing the Hibernate config as follows:
@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource,
        MultiTenantConnectionProvider multiTenantConnectionProvider,
        HibernateProperties hibernateProperties) {
    Map<String, Object> jpaPropertiesMap = hibernateProperties.determineHibernateProperties(
            jpaProperties.getProperties(), new HibernateSettings());
    //jpaPropertiesMap.putAll(jpaProperties.getProperties());
    jpaPropertiesMap.put(Environment.MULTI_TENANT, MultiTenancyStrategy.SCHEMA);
    jpaPropertiesMap.put(Environment.MULTI_TENANT_CONNECTION_PROVIDER, multiTenantConnectionProvider);
    jpaPropertiesMap.put(Environment.MULTI_TENANT_IDENTIFIER_RESOLVER, TenantContext.TenantIdentifierResolver.class);

    LocalContainerEntityManagerFactoryBean entityManagerFactoryBean = new LocalContainerEntityManagerFactoryBean();
    entityManagerFactoryBean.setDataSource(dataSource);
    entityManagerFactoryBean.setPackagesToScan(UppStudentAppBeApplication.class.getPackage().getName());
    entityManagerFactoryBean.setJpaVendorAdapter(jpaVendorAdapter());
    entityManagerFactoryBean.setJpaPropertyMap(jpaPropertiesMap);
    return entityManagerFactoryBean;
}
This has removed the naming error above. However, it is now saving to my default schema rather than the schema set in the TenantIdentifierResolver.
Have you implemented AsyncHandlerInterceptor (a Spring interceptor)? It should also be registered in a WebMvcConfigurer.
@Component
public class TenantRequestInterceptor implements AsyncHandlerInterceptor {

    private SecurityDomain securityDomain;

    public TenantRequestInterceptor(SecurityDomain securityDomain) {
        this.securityDomain = securityDomain;
    }

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        return Optional.ofNullable(request)
                .map(req -> securityDomain.getTenantIdFromJwt(req))
                .map(tenant -> setTenantContext(tenant))
                .orElse(false);
    }

    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) {
        TenantContext.reset();
    }

    private boolean setTenantContext(String tenant) {
        TenantContext.setCurrentTenant(tenant);
        return true;
    }
}
This is important, because this is where you fill the TenantContext with the tenant.
Have you debugged the getConnection(String tenantIdentifier) method to see what value actually arrives as tenantIdentifier?
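For reference, a minimal registration of the interceptor above in a WebMvcConfigurer could look like this (the configurer class name is an assumption):

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class WebConfig implements WebMvcConfigurer {

    private final TenantRequestInterceptor tenantRequestInterceptor;

    public WebConfig(TenantRequestInterceptor tenantRequestInterceptor) {
        this.tenantRequestInterceptor = tenantRequestInterceptor;
    }

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Run the tenant interceptor for every incoming request.
        registry.addInterceptor(tenantRequestInterceptor);
    }
}

Without this registration the interceptor never runs, the ThreadLocal stays empty, and Hibernate silently falls back to the default tenant/schema.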
My problem: I am stuck on implementing a change of schema after user login, following a Stack Overflow answer.
Description: I'm using the class below, but I have no idea how to use it. I'm reading every tutorial but I'm stuck. The results I'm expecting are:
1. Spring initializes with the default URL so the user can log in.
2. After a successful login, it switches to the schema based on the UserDetails class.
I'm following the Stack Overflow solution at: Change database schema during runtime based on logged in user
The Spring version I'm using is Spring Boot v2.3.3.RELEASE.
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import java.sql.Connection;
import java.sql.ConnectionBuilder;
import java.sql.SQLException;
import java.util.concurrent.TimeUnit;

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.core.env.Environment;
import org.springframework.jdbc.datasource.AbstractDataSource;

public class UserSchemaAwareRoutingDataSource extends AbstractDataSource {

    @Autowired
    UsuarioProvider customUserDetails;

    @Autowired
    Environment env;

    private LoadingCache<String, DataSource> dataSources = createCache();

    public UserSchemaAwareRoutingDataSource() {
    }

    public UserSchemaAwareRoutingDataSource(UsuarioProvider customUserDetails, Environment env) {
        this.customUserDetails = customUserDetails;
        this.env = env;
    }

    private LoadingCache<String, DataSource> createCache() {
        return CacheBuilder.newBuilder()
                .maximumSize(100)
                .expireAfterWrite(10, TimeUnit.MINUTES)
                .build(new CacheLoader<String, DataSource>() {
                    public DataSource load(String key) throws Exception {
                        return buildDataSourceForSchema(key);
                    }
                });
    }

    private DataSource buildDataSourceForSchema(String schema) {
        System.out.println("schema:" + schema);
        String url = "jdbc:mysql://REDACTED.com/" + schema;
        String username = env.getRequiredProperty("spring.datasource.username");
        String password = env.getRequiredProperty("spring.datasource.password");
        System.out.println("Flag A");
        DataSource build = (DataSource) DataSourceBuilder.create()
                .driverClassName(env.getRequiredProperty("spring.datasource.driverClassName"))
                .username(username)
                .password(password)
                .url(url)
                .build();
        System.out.println("Flag B");
        return build;
    }

    @Override
    public Connection getConnection() throws SQLException {
        return determineTargetDataSource().getConnection();
    }

    @Override
    public Connection getConnection(String username, String password) throws SQLException {
        return determineTargetDataSource().getConnection(username, password);
    }

    private DataSource determineTargetDataSource() {
        try {
            Usuario usuario = customUserDetails.customUserDetails();
            String db_schema = usuario.getTunnel().getDb_schema();
            return dataSources.get(db_schema);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        return null;
    }

    @Override
    public ConnectionBuilder createConnectionBuilder() throws SQLException {
        return super.createConnectionBuilder();
    }
}
References:
https://spring.io/blog/2007/01/23/dynamic-datasource-routing/
How to create Dynamic connections (datasource) in spring using JDBC
Spring Boot Configure and Use Two DataSources
Edit (additional information requested in the comments):
I have one database. This database has n schemas, and each schema pertains to one company. One user pertains to one company. The login logic is as follows:
- The user inputs a username and password.
- When successful, the UserDetails contains the name of the 'schema' of this user, i.e. which company/schema this user belongs to.
After that, the application should connect directly to that schema, so the user can work with the data of his own company.
I hope this clarifies things as much as possible.
Edit 2:
@Component
public class UsuarioProvider {

    @Bean
    @Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS) // or just @RequestScope
    public Usuario customUserDetails() {
        return (Usuario) SecurityContextHolder.getContext().getAuthentication().getPrincipal();
    }
}
public class UserSchemaAwareRoutingDataSource extends AbstractDataSource {

    @Autowired
    private UsuarioProvider usuarioProvider;

    @Autowired // This references the primary datasource, because no qualifier is given
    private DataSource companyDependentDataSource;

    @Autowired
    @Qualifier(value = "loginDataSource")
    private DataSource loginDataSource;

    @Autowired
    Environment env;

    private LoadingCache<String, DataSource> dataSources = createCache();

    public UserSchemaAwareRoutingDataSource() {
    }

    private LoadingCache<String, DataSource> createCache() {
        return CacheBuilder.newBuilder()
                .maximumSize(100)
                .expireAfterWrite(10, TimeUnit.MINUTES)
                .build(new CacheLoader<String, DataSource>() {
                    public DataSource load(String key) throws Exception {
                        return buildDataSourceForSchema(key);
                    }
                });
    }

    private DataSource buildDataSourceForSchema(String schema) {
        System.out.println("schema:" + schema);
        String url = "jdbc:mysql://REDACTED.com/" + schema;
        String username = env.getRequiredProperty("spring.datasource.username");
        String password = env.getRequiredProperty("spring.datasource.password");
        System.out.println("Flag A");
        DataSource build = (DataSource) DataSourceBuilder.create()
                .driverClassName(env.getRequiredProperty("spring.datasource.driverClassName"))
                .username(username)
                .password(password)
                .url(url)
                .build();
        System.out.println("Flag B");
        return build;
    }

    @Override
    public Connection getConnection() throws SQLException {
        return determineTargetDataSource().getConnection();
    }

    @Override
    public Connection getConnection(String username, String password) throws SQLException {
        return determineTargetDataSource().getConnection(username, password);
    }

    private DataSource determineTargetDataSource() {
        try {
            System.out.println("Flag G");
            Usuario usuario = usuarioProvider.customUserDetails(); // request-scoped answer!
            String db_schema = usuario.getTunnel().getDb_schema();
            return dataSources.get(db_schema);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        return null;
    }

    @Override
    public ConnectionBuilder createConnectionBuilder() throws SQLException {
        return super.createConnectionBuilder();
    }
}
Do I need to put @Configuration on top of this class? I'm not able to make Spring Boot aware of these settings, and I'm a bit confused about how to tell Spring Boot what the loginDataSource URL is. I was using the application.properties default values to log in.
Your setup seems to be the classical situation for two different DataSources.
Here is a Baeldung blog post on how to configure Spring Data JPA.
First thing to notice: they use @Primary. This helps you and stands in your way at the same time. You can only have ONE primary bean of a certain type. This causes trouble for some people, since they try to "override" a Spring bean by making their testing Spring beans primary, which results in two primary beans of the same type. So be careful when setting up your tests.
But it also eases things up if you mostly refer to one DataSource and only in a few cases to the other. This seems to be your case, so let's adopt it.
Your DataSource configuration could look like this:
@Configuration
public class DataSourceConfiguration {

    @Bean(name = "loginDataSource")
    public DataSource loginDataSource(Environment env) {
        String url = env.getRequiredProperty("spring.logindatasource.url");
        return DataSourceBuilder.create()
                .driverClassName(env.getRequiredProperty("spring.logindatasource.driverClassName"))
                [...]
                .url(url)
                .build();
    }

    @Bean(name = "companyDependentDataSource")
    @Primary // use with caution, I'd recommend to use name-based autowiring. See @Qualifier
    public DataSource companyDependentDataSource(Environment env) {
        return new UserSchemaAwareRoutingDataSource(); // autowiring is done afterwards by Spring
    }
}
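Since the configuration above reads custom spring.logindatasource.* keys via getRequiredProperty, application.properties needs matching entries; a rough sketch (only the keys actually referenced above and in buildDataSourceForSchema are shown, values are placeholders):

# Fixed login database used for authentication
spring.logindatasource.url=jdbc:mysql://localhost/login_schema
spring.logindatasource.driverClassName=com.mysql.cj.jdbc.Driver

# Still used by UserSchemaAwareRoutingDataSource.buildDataSourceForSchema(...)
spring.datasource.username=<<username>>
spring.datasource.password=<<password>>
spring.datasource.driverClassName=com.mysql.cj.jdbc.Driver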
These two DataSources can now be used in your repositories/DAOs, or however you structure your program:
@Autowired // This references the primary datasource, because no qualifier is given. UserSchemaAwareRoutingDataSource is its implementation
// @Qualifier("companyDependentDataSource") if @Primary is omitted
private DataSource companyDependentDataSource;

@Autowired
@Qualifier("loginDataSource") // reference by bean name
private DataSource loginDataSource;
Here is an example of how to configure Spring Data JPA with a DataSource referenced by name:
@Configuration
@EnableJpaRepositories(
        basePackages = "<your entity package>",
        entityManagerFactoryRef = "companyEntityManagerFactory",
        transactionManagerRef = "companyTransactionManager"
)
public class CompanyPersistenceConfiguration {

    @Autowired
    @Qualifier("companyDependentDataSource")
    private DataSource companyDependentDataSource;

    @Bean(name = "companyEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean companyEntityManagerFactory() {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(companyDependentDataSource);
        // ... see the Baeldung blog post
        return emf;
    }

    @Bean(name = "companyTransactionManager")
    public PlatformTransactionManager companyTransactionManager() {
        JpaTransactionManager tm = new JpaTransactionManager();
        tm.setEntityManagerFactory(companyEntityManagerFactory().getObject());
        return tm;
    }
}
As described in the SO answer of mine that you referred to, there is an important assumption:
The current schema name to be used for the current user is accessible through a Spring JSR-330 Provider like private javax.inject.Provider<User> user; String schema = user.get().getSchema();. This is ideally a ThreadLocal-based proxy.
This is the trick which makes the UserSchemaAwareRoutingDataSource implementation possible. Spring beans are mostly singletons and therefore stateless. This also applies to the normal usage of DataSources: they are treated as stateless singletons and references to them are passed around the whole program. So we need a way to provide a single instance of the companyDependentDataSource that nevertheless behaves differently per user. To get that behavior I suggest using a request-scoped bean.
In a web application, you can use @Scope(WebApplicationContext.SCOPE_REQUEST) to create such objects. There is also a Baeldung post on that topic. As usual, @Bean-annotated methods reside in @Configuration-annotated classes.
@Configuration
public class UsuarioConfiguration {

    @Bean
    @Scope(value = WebApplicationContext.SCOPE_REQUEST,
            proxyMode = ScopedProxyMode.TARGET_CLASS) // or just @RequestScope
    public Usuario usuario() {
        // based on your edit 2
        return (Usuario) SecurityContextHolder.getContext().getAuthentication().getPrincipal();
    }
}
Now you can use this request-scoped object via its proxy inside your singleton DataSource, so that it behaves according to the logged-in user:
@Autowired
private Usuario usuario; // this is now a request-scoped proxy which will create the corresponding bean (see UsuarioConfiguration.usuario())

private DataSource determineTargetDataSource() {
    try {
        String db_schema = this.usuario.getTunnel().getDb_schema();
        return dataSources.get(db_schema);
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return null;
}
I hope this helps you understand the request scope concept of Spring.
So your login process would look something like this:
- The user inputs a username and password.
- A normal Spring bean, referencing the loginDataSource by name, checks the login and puts the user information into the session/security context/cookie/... (see the sketch after this list).
- When successful, during the next request the companyDependentDataSource is capable of retrieving a properly set-up Usuario object.
- You can now use this DataSource to do user-specific work.
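A minimal sketch of such a login-check bean referencing the loginDataSource by name (the service class, the users table and its columns are assumptions purely for illustration):

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;

@Service
public class LoginService {

    private final JdbcTemplate loginJdbcTemplate;

    public LoginService(@Qualifier("loginDataSource") DataSource loginDataSource) {
        // JdbcTemplate bound to the fixed login DataSource, not the routing one.
        this.loginJdbcTemplate = new JdbcTemplate(loginDataSource);
    }

    public boolean credentialsValid(String username, String passwordHash) {
        // Hypothetical users table; replace with your real authentication query.
        Integer matches = loginJdbcTemplate.queryForObject(
                "select count(*) from users where username = ? and password_hash = ?",
                Integer.class, username, passwordHash);
        return matches != null && matches > 0;
    }
}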
To verify that your DataSource is working properly, you could create a small Spring MVC endpoint:
@RestController
public class DataSourceVerificationController {

    @Autowired
    private Usuario usuario;

    @Autowired
    @Qualifier("companyDependentDataSource") // omit this annotation if you use @Primary
    private DataSource companyDependentDataSource;

    @GetMapping("/test")
    public String test() throws Exception {
        String schema = usuario.getTunnel().getDb_schema();
        Connection con = companyDependentDataSource.getConnection();
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("select name from Employee"); // just a random guess
        rs.next();
        String name = rs.getString("name");
        rs.close();
        stmt.close();
        con.close();
        return "name = '" + name + "', schema = '" + schema + "'";
    }
}
Take your favorite browser, go to your login page, do a valid login, and call http://localhost:8080/test afterwards.
I have an API we wrote using Spring 4 with a MongoDB database. When the application loads into my local WAS, I can see the app go out and connect to the database. However, when I execute a function that should open a query, I get a "socket closed" error.
My Configuration:
@Bean
public MongoDbFactory mongoDbFactory() throws Exception {
    logger.info("loading MongoDBFactory bean");
    String PROCESS_ID_MONGO_KEY = "PROCESS_ID_MONGO";
    Credentials credentials = credentialsManager().getCredentialsFor(PROCESS_ID_MONGO_KEY);
    MongoClient mongoClient = new MongoClient(
            Arrays.asList(new ServerAddress(PropertiesManagerUtility.getKeyValue(CollectionType.CREDENTIAL, "mongo.url"), 27017)),
            Arrays.asList(MongoCredential.createPlainCredential(credentials.getUserid(), "$external", credentials.getPassword().toCharArray())),
            MongoClientOptions.builder()
                    .sslEnabled(true)
                    .connectTimeout(30)
                    .writeConcern(WriteConcern.MAJORITY)
                    .socketKeepAlive(true)
                    .build());
    return new SimpleMongoDbFactory(mongoClient, PropertiesManagerUtility.getKeyValue(CollectionType.CREDENTIAL, "mongo.db"));
}

@Bean
public MongoTemplate mongoTemplate() throws Exception {
    logger.info("loading MongoTemplate bean");
    // MongoTemplate mongoTemplate = new MongoTemplate(mongoDbFactory());
    return new MongoTemplate(mongoDbFactory());
}
My Dao
@Component("achResponseDMDao")
public class AchResponseDMDaoImpl implements IBasicDao<AchResponseDM> {

    @Autowired
    MongoTemplate mongoTemplate;

    public AchResponseDMDaoImpl(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Override
    public AchResponseDM findByResponseCode(String responseCode) {
        Query query = new Query(Criteria.where("responseCode").is(responseCode));
        return mongoTemplate.findOne(query, AchResponseDM.class);
    }

    ...
}
My question: I thought Spring would give me a new connection via the MongoDbFactory, but it appears that the original connection gets closed and no more are created. What do I need to do? Thanks in advance.
Injecting the factory instead of the MongoTemplate instance created new connections as needed. The corresponding DAO implementation has @Autowired MongoDbFactory mongoFactory; and each method creates a template on demand with new MongoTemplate(mongoFactory).find(...) or similar.
The resulting DAO looks like:
@Component("achResponseDMDao")
public class AchResponseDMDaoImpl implements IBasicDao<AchResponseDM> {

    @Autowired
    MongoDbFactory mongoFactory;

    @Override
    public AchResponseDM findByResponseCode(String responseCode) {
        Query query = new Query(Criteria.where("responseCode").is(responseCode));
        List<AchResponseDM> listOfResponses = new MongoTemplate(mongoFactory).find(query, AchResponseDM.class);
        return (listOfResponses != null && !listOfResponses.isEmpty()) ? listOfResponses.get(0) : defaultNonNullResponse();
    }

    ...
}
I am using the code below to connect to Cassandra using Spring Data, but it's painful to create the connection every time.
try {
    cluster = Cluster.builder().addContactPoint(host).build();
    session = cluster.connect("digitalfootprint");
    CassandraOperations cassandraOps = new CassandraTemplate(session);

    Select usersQuery = QueryBuilder.select(userColumns).from("Users");
    usersQuery.where(QueryBuilder.eq("username", username));
    List<Users> userResult = cassandraOps.select(usersQuery, Users.class);
    userList = userResult;
} catch (Exception e) {
    e.printStackTrace();
} finally {
    cluster.close();
}
Is there a way to have a common static connection or some utility for this? I am using this in a web application where there will be lots of CRUD operations, so it would be painful to repeat this code everywhere.
Just instantiate the appropriate beans at startup time of your Spring web application. An example would be:
@Configuration
public class CassandraConfig {

    @Bean
    public CassandraClusterFactoryBean cluster() throws UnknownHostException {
        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints(InetAddress.getLocalHost().getHostName());
        cluster.setPort(9042);
        return cluster;
    }

    @Bean
    public CassandraMappingContext mappingContext() {
        return new BasicCassandraMappingContext();
    }

    @Bean
    public CassandraConverter converter() {
        return new MappingCassandraConverter(mappingContext());
    }

    @Bean
    public CassandraSessionFactoryBean session() throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setKeyspaceName("mykeyspace");
        session.setConverter(converter());
        session.setSchemaAction(SchemaAction.NONE);
        return session;
    }

    @Bean
    public CassandraOperations cassandraTemplate() throws Exception {
        return new CassandraTemplate(session().getObject());
    }
}
Now inject or autowire the CassandraOperations bean any time you want it.
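For example, a DAO could reuse that template; the cluster and session created by the factory beans are built once at startup and shared, so no new connection is made per call. A sketch (the repository class name is an assumption; the Users entity and the username column follow the snippet in the question):

import java.util.List;

import com.datastax.driver.core.querybuilder.QueryBuilder;
import com.datastax.driver.core.querybuilder.Select;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.cassandra.core.CassandraOperations;
import org.springframework.stereotype.Repository;

@Repository
public class UserRepository {

    @Autowired
    private CassandraOperations cassandraTemplate;

    public List<Users> findByUsername(String username) {
        // Reuses the single session behind the template; no Cluster.builder()/connect per call.
        Select select = QueryBuilder.select().all().from("Users");
        select.where(QueryBuilder.eq("username", username));
        return cassandraTemplate.select(select, Users.class);
    }
}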