My problem: I'm stuck implementing a change of schema after user login, following a Stack Overflow answer.
Description: I'm using the class below. However, I have no idea how to use it. I'm reading every tutorial I can find, but I'm stuck. The results I'm expecting are:
1- Spring initializes with the default URL so the user can log in.
2- After a successful login, it switches to the schema based on the UserDetails class.
I'm following the Stack Overflow solution at: Change database schema during runtime based on logged in user
The Spring version I'm using is:
Spring Boot :: (v2.3.3.RELEASE)
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.sql.Connection;
import java.sql.ConnectionBuilder;
import java.sql.SQLException;
import java.util.concurrent.TimeUnit;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.core.env.Environment;
import org.springframework.jdbc.datasource.AbstractDataSource;
public class UserSchemaAwareRoutingDataSource extends AbstractDataSource {
@Autowired
UsuarioProvider customUserDetails;
@Autowired
Environment env;
private LoadingCache<String, DataSource> dataSources = createCache();
public UserSchemaAwareRoutingDataSource() {
}
public UserSchemaAwareRoutingDataSource(UsuarioProvider customUserDetails, Environment env) {
this.customUserDetails = customUserDetails;
this.env = env;
}
private LoadingCache<String, DataSource> createCache() {
return CacheBuilder.newBuilder()
.maximumSize(100)
.expireAfterWrite(10, TimeUnit.MINUTES)
.build(
new CacheLoader<String, DataSource>() {
public DataSource load(String key) throws Exception {
return buildDataSourceForSchema(key);
}
});
}
private DataSource buildDataSourceForSchema(String schema) {
System.out.println("schema:" + schema);
String url = "jdbc:mysql://REDACTED.com/" + schema;
String username = env.getRequiredProperty("spring.datasource.username");
String password = env.getRequiredProperty("spring.datasource.password");
System.out.println("Flag A");
DataSource build = (DataSource) DataSourceBuilder.create()
.driverClassName(env.getRequiredProperty("spring.datasource.driverClassName"))
.username(username)
.password(password)
.url(url)
.build();
System.out.println("Flag B");
return build;
}
@Override
public Connection getConnection() throws SQLException {
return determineTargetDataSource().getConnection();
}
@Override
public Connection getConnection(String username, String password) throws SQLException {
return determineTargetDataSource().getConnection(username, password);
}
private DataSource determineTargetDataSource() {
try {
Usuario usuario = customUserDetails.customUserDetails();
//
String db_schema = usuario.getTunnel().getDb_schema();
//
String schema = db_schema;
return dataSources.get(schema);
} catch (Exception ex) {
ex.printStackTrace();
}
return null;
}
@Override
public ConnectionBuilder createConnectionBuilder() throws SQLException {
return super.createConnectionBuilder();
}
}
References:
https://spring.io/blog/2007/01/23/dynamic-datasource-routing/
How to create Dynamic connections (datasource) in spring using JDBC
Spring Boot Configure and Use Two DataSources
Edit (additional information requested in the comments):
I have 1 database.
This database has n schemas. Each schema pertains to one company, and one user pertains to one company. The login logic is as follows:
-User inputs username and password.
-When successful, the UserDetails will contain the name of the 'schema' of this user, i.e. which company/schema this user pertains to.
It should, after that, connect directly to that schema, so the user can work with the data of his own company.
I hope this clarifies things as much as possible.
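For context, here is roughly the shape of the principal I have in mind; the Usuario and Tunnel names and the getTunnel()/getDb_schema() accessors match what my code below uses, while the rest of this sketch is hypothetical (the real class also implements Spring Security's UserDetails):
// Hypothetical sketch of the authenticated principal stored in the SecurityContext.
public class Usuario {

    private Tunnel tunnel; // carries the company/schema this user belongs to

    public Tunnel getTunnel() {
        return tunnel;
    }

    public static class Tunnel {

        private String db_schema; // e.g. "company_a"

        public String getDb_schema() {
            return db_schema;
        }
    }
}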
Edit 2:
@Component
public class UsuarioProvider {
@Bean
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS) // or just @RequestScope
public Usuario customUserDetails() {
return (Usuario) SecurityContextHolder.getContext().getAuthentication().getPrincipal();
}
}
public class UserSchemaAwareRoutingDataSource extends AbstractDataSource {
@Autowired
private UsuarioProvider usuarioProvider;
@Autowired // This references the primary datasource, because no qualifier is given
private DataSource companyDependentDataSource;
@Autowired
@Qualifier(value = "loginDataSource")
private DataSource loginDataSource;
@Autowired
Environment env;
private LoadingCache<String, DataSource> dataSources = createCache();
public UserSchemaAwareRoutingDataSource() {
}
private LoadingCache<String, DataSource> createCache() {
return CacheBuilder.newBuilder()
.maximumSize(100)
.expireAfterWrite(10, TimeUnit.MINUTES)
.build(
new CacheLoader<String, DataSource>() {
public DataSource load(String key) throws Exception {
return buildDataSourceForSchema(key);
}
});
}
private DataSource buildDataSourceForSchema(String schema) {
System.out.println("schema:" + schema);
String url = "jdbc:mysql://REDACTED.com/" + schema;
String username = env.getRequiredProperty("spring.datasource.username");
String password = env.getRequiredProperty("spring.datasource.password");
System.out.println("Flag A");
DataSource build = (DataSource) DataSourceBuilder.create()
.driverClassName(env.getRequiredProperty("spring.datasource.driverClassName"))
.username(username)
.password(password)
.url(url)
.build();
System.out.println("Flag B");
return build;
}
@Override
public Connection getConnection() throws SQLException {
return determineTargetDataSource().getConnection();
}
@Override
public Connection getConnection(String username, String password) throws SQLException {
return determineTargetDataSource().getConnection(username, password);
}
private DataSource determineTargetDataSource() {
try {
System.out.println("Flag G");
Usuario usuario = usuarioProvider.customUserDetails(); // request scoped answer!
String db_schema = usuario.getTunnel().getDb_schema();
return dataSources.get(db_schema);
} catch (Exception ex) {
ex.printStackTrace();
}
return null;
}
@Override
public ConnectionBuilder createConnectionBuilder() throws SQLException {
return super.createConnectionBuilder();
}
}
Do I need to put @Configuration on top of this class? I'm not able to make Spring Boot aware of these settings. I'm also confused about how to tell Spring Boot what the loginDataSource URL is; I was using the default application.properties values to log in.
Your setup seems like the classic situation for two different DataSources.
Here is a Baeldung blog post on how to configure Spring Data JPA.
First thing to notice: they are using @Primary. This helps and gets in your way at the same time. You can only have ONE primary bean of a certain type. This causes trouble for some people, since they try to "override" a Spring bean by making their testing beans primary, which results in two primary beans of the same type. So be careful when setting up your tests.
But it also eases things up if you are mostly referring to one DataSource and only in a few cases to the other. This seems to be your case, so let's adopt it.
Your DataSource configuration could look like this:
@Configuration
public class DataSourceConfiguration {
@Bean(name="loginDataSource")
public DataSource loginDataSource(Environment env) {
String url = env.getRequiredProperty("spring.logindatasource.url");
return DataSourceBuilder.create()
.driverClassName(env.getRequiredProperty("spring.logindatasource.driverClassName"))
[...]
.url(url)
.build();
}
#Bean(name="companyDependentDataSource")
#Primary // use with caution, I'd recommend to use name based autowiring. See #Qualifier
public DataSource companyDependentDataSource(Environment env) {
return new UserSchemaAwareRoutingDataSource(); // Autowiring is done afterwards by Spring
}
}
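The spring.logindatasource.* keys read above are not Spring Boot defaults, so you declare them yourself in application.properties. A minimal sketch, assuming the same MySQL setup as in the question (all values are placeholders, and the username/password keys are only a suggestion since that part of the builder is elided above):
# custom keys read by DataSourceConfiguration above
spring.logindatasource.url=jdbc:mysql://REDACTED.com/login_schema
spring.logindatasource.driverClassName=com.mysql.cj.jdbc.Driver
spring.logindatasource.username=<<username>>
spring.logindatasource.password=<<password>>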
These two DataSources can now be used in your repositories/DAOs or however you structure your program:
@Autowired // This references the primary datasource, because no qualifier is given. UserSchemaAwareRoutingDataSource is its implementation
// @Qualifier("companyDependentDataSource") if @Primary is omitted
private DataSource companyDependentDataSource;
@Autowired
@Qualifier("loginDataSource") // reference by bean name
private DataSource loginDataSource;
Here is an example of how to configure Spring Data JPA with a DataSource referenced by name:
@Configuration
@EnableJpaRepositories(
basePackages = "<your entity package>",
entityManagerFactoryRef = "companyEntityManagerFactory",
transactionManagerRef = "companyTransactionManager"
)
public class CompanyPersistenceConfiguration {
@Autowired
@Qualifier("companyDependentDataSource")
private DataSource companyDependentDataSource;
@Bean(name="companyEntityManagerFactory")
public LocalContainerEntityManagerFactoryBean companyEntityManagerFactory() {
LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
emf.setDataSource(companyDependentDataSource);
// ... see Baeldung Blog Post
return emf;
}
#Bean(name="companyTransactionManager")
public PlatformTransactionManager companyTransactionManager() {
JpaTransactionManager tm = new JpaTransactionManager();
tm.setEntityManagerFactory(companyEntityManagerFactory().getObject());
return tm;
}
}
As described in my SO answer that you referred to, there is an important assumption:
The current schema name to be used for the current user is accessible through a Spring JSR-330 Provider, e.g. private javax.inject.Provider<User> user; String schema = user.get().getSchema();. This is ideally a ThreadLocal-based proxy.
This is the trick that makes the UserSchemaAwareRoutingDataSource implementation possible. Spring beans are mostly singletons and therefore stateless. This also applies to the normal usage of DataSources: they are treated as stateless singletons and references to them are passed around the whole program. So we need to find a way to provide a single instance of the companyDependentDataSource that nevertheless behaves differently per user. To get that behavior I suggest using a request-scoped bean.
In a web application, you can use @Scope(REQUEST_SCOPE) to create such objects. There is also a Baeldung post talking about that topic. As usual, @Bean-annotated methods reside in @Configuration-annotated classes.
@Configuration
public class UsuarioConfiguration {
@Bean
@Scope(value = WebApplicationContext.SCOPE_REQUEST,
proxyMode = ScopedProxyMode.TARGET_CLASS) // or just @RequestScope
public Usuario usuario() {
// based on your edit2
return (Usuario) SecurityContextHolder.getContext().getAuthentication().getPrincipal();
}
}
Now you can use this request-scoped object inside your singleton DataSource to behave differently according to the logged-in user:
@Autowired
private Usuario usuario; // this is now a request-scoped proxy which will create the corresponding bean (see UsuarioConfiguration.usuario())
private DataSource determineTargetDataSource() {
try {
String db_schema = this.usuario.getTunnel().getDb_schema();
return dataSources.get(db_schema);
} catch (Exception ex) {
ex.printStackTrace();
}
return null;
}
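If you prefer the JSR-330 style from the assumption quoted above, the same lookup works with a Provider instead of injecting the scoped proxy directly. A minimal sketch, assuming javax.inject is on the classpath:
// Sketch: resolve the request-scoped Usuario lazily through a JSR-330 Provider.
@Autowired
private javax.inject.Provider<Usuario> usuarioProvider;

private DataSource determineTargetDataSource() {
    try {
        // get() returns the Usuario bound to the current request
        String db_schema = usuarioProvider.get().getTunnel().getDb_schema();
        return dataSources.get(db_schema);
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return null;
}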
I hope this helps you understand Spring's request scope concept.
So your login process would look something like this:
User inputs username and password.
A normal Spring bean, referencing the loginDataSource by name, checks the login and puts the user information into the session/security context/cookie/...
When successful, during the next request the companyDependentDataSource is capable of retrieving a properly set up Usuario object.
You can now use this DataSource to do user-specific work.
To verify your DataSource is working properly, you could create a small Spring MVC endpoint:
@RestController
public class DataSourceVerificationController {
@Autowired
private Usuario usuario;
@Autowired
@Qualifier("companyDependentDataSource") // omit this annotation if you use @Primary
private DataSource companyDependentDataSource;
@GetMapping("/test")
public String test() throws Exception {
String schema = usuario.getTunnel().getDb_schema();
Connection con = companyDependentDataSource.getConnection();
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select name from Employee"); // just a random guess
rs.next();
String name = rs.getString("name");
rs.close();
stmt.close();
con.close();
return "name = '" + name + "', schema = '" + schema + "'";
}
}
Take your favorite browser, go to your login page, perform a valid login, and call http://localhost:8080/test afterwards.
Related
I have a Spring Boot application which has multi-schema multi-tenancy implemented. Without multi-tenancy, the same API's response time was 300-400 ms. But after implementing multi-tenancy, the response time jumped to 6-7 seconds (on the same server and the same schema).
I understand that additional processing is required to read the header, switch the database based on the header, etc., but I feel it should not take 6-7 seconds. Can someone suggest how I can reduce this response time? Below are the classes added for multi-tenancy.
public class TenantAwareRoutingSource extends AbstractRoutingDataSource {
@Override
protected Object determineCurrentLookupKey() {
return ThreadLocalStorage.getTenantName();
}
}
public class TenantNameInterceptor extends HandlerInterceptorAdapter {
#Value("${schemas.list}")
private String schemasList;
private Gson gson = new Gson();
@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
String tenantName = request.getHeader("tenant-id");
if(StringUtils.isBlank(schemasList)) {
response.setContentType("application/json");
response.setCharacterEncoding("UTF-8");
response.getWriter().write(gson.toJson(new Error("Tenants not initialized...")));
response.setStatus(HttpServletResponse.SC_BAD_REQUEST);
return false;
}
if(!schemasList.contains(tenantName)) {
response.setContentType("application/json");
response.setCharacterEncoding("UTF-8");
response.getWriter().write(gson.toJson(new Error("User not allowed to access data")));
response.setStatus(HttpServletResponse.SC_BAD_REQUEST);
return false;
}
ThreadLocalStorage.setTenantName(tenantName);
return true;
}
@Override
public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception {
ThreadLocalStorage.setTenantName(null);
}
@Setter
@Getter
@AllArgsConstructor
public static class Error {
private String message;
}
}
public class ThreadLocalStorage {
private static ThreadLocal<String> tenant = new ThreadLocal<>();
public static void setTenantName(String tenantName) {
tenant.set(tenantName);
}
public static String getTenantName() {
return tenant.get();
}
}
@Configuration
public class AutoDDLConfig
{
#Value("${spring.datasource.username}")
private String username;
#Value("${spring.datasource.password}")
private String password;
#Value("${schemas.list}")
private String schemasList;
#Value("${db.host}")
private String dbHost;
#Bean
public DataSource dataSource()
{
AbstractRoutingDataSource multiDataSource = new TenantAwareRoutingSource();
if (StringUtils.isBlank(schemasList))
{
return multiDataSource;
}
String[] tenants = schemasList.split(",");
Map<Object, Object> targetDataSources = new HashMap<>();
for (String tenant : tenants)
{
System.out.println("####" + tenant);
tenant = tenant.trim();
DriverManagerDataSource dataSource = new DriverManagerDataSource();
dataSource.setDriverClassName("com.mysql.jdbc.Driver"); // Change here to MySql Driver
dataSource.setSchema(tenant);
dataSource.setUrl("jdbc:mysql://" + dbHost + "/" + tenant
+ "?autoReconnect=true&characterEncoding=utf8&useSSL=false&useTimezone=true&serverTimezone=Asia/Kolkata&useLegacyDatetimeCode=false&allowPublicKeyRetrieval=true");
dataSource.setUsername(username);
dataSource.setPassword(password);
targetDataSources.put(tenant, dataSource);
LocalContainerEntityManagerFactoryBean emfBean = new LocalContainerEntityManagerFactoryBean();
emfBean.setDataSource(dataSource);
emfBean.setPackagesToScan("com"); // set the base package of your JPA entities here; "com" scans everything under it
emfBean.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
emfBean.setPersistenceProviderClass(HibernatePersistenceProvider.class);
Map<String, Object> properties = new HashMap<>();
properties.put("hibernate.hbm2ddl.auto", "update");
properties.put("hibernate.default_schema", tenant);
properties.put("hibernate.dialect", "org.hibernate.dialect.MySQL5InnoDBDialect");
emfBean.setJpaPropertyMap(properties);
emfBean.setPersistenceUnitName(dataSource.toString());
emfBean.afterPropertiesSet();
}
multiDataSource.setTargetDataSources(targetDataSources);
multiDataSource.afterPropertiesSet();
return multiDataSource;
}
}
Snippet from application.properties
spring.datasource.username=<<username>>
spring.datasource.password=<<password>>
schemas.list=suncitynx,kalpavrish,riddhisiddhi,smartcity,businesspark
db.host=localhost
########## JPA Config ###############
spring.jpa.open-in-view=false
spring.jpa.show-sql=false
spring.jpa.hibernate.naming.physical-strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
spring.jpa.database=mysql
spring.datasource.initialize=false
spring.jpa.hibernate.ddl-auto=none
spring.jpa.database-platform=org.hibernate.dialect.MySQLDialect
spring.jpa.properties.hibernate.jdbc.time_zone = Asia/Kolkata
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect
##############Debug Logging#########################
#logging.level.org.springframework=DEBUG
#logging.level.org.hibernate.SQL=DEBUG
#logging.level.org.hibernate.type.descriptor.sql.BasicBinder=TRACE
######### HIkari Pool ##############
spring.datasource.hikari.maximum-pool-size=20
######### Jackson ############
spring.jackson.serialization.WRITE_ENUMS_USING_TO_STRING=true
spring.jackson.deserialization.READ_ENUMS_USING_TO_STRING=true
spring.jackson.time-zone: Asia/Kolkata
#common request logger
logging.level.org.springframework.web.filter.CommonsRequestLoggingFilter=DEBUG
#Multi part file size
spring.servlet.multipart.max-file-size = 15MB
spring.servlet.multipart.max-request-size = 15MB
Are you sure you are maintaining a connection pool per tenant?
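One likely culprit: DriverManagerDataSource does not pool at all, it opens a brand-new physical connection on every getConnection() call, so every tenant-routed query pays the full connection setup cost. Below is a minimal sketch of just the DataSource part of your loop, using a pooled HikariDataSource per tenant instead (assuming HikariCP is on the classpath, which the spring.datasource.hikari.* properties above suggest; the entity-manager setup from your loop is left out here):
// Sketch: one small connection pool per tenant instead of DriverManagerDataSource.
for (String tenant : tenants) {
    tenant = tenant.trim();
    HikariConfig config = new HikariConfig();
    config.setDriverClassName("com.mysql.jdbc.Driver");
    config.setJdbcUrl("jdbc:mysql://" + dbHost + "/" + tenant + "?autoReconnect=true&useSSL=false");
    config.setUsername(username);
    config.setPassword(password);
    config.setMaximumPoolSize(5); // tune per tenant load
    targetDataSources.put(tenant, new HikariDataSource(config));
}
multiDataSource.setTargetDataSources(targetDataSources);
multiDataSource.afterPropertiesSet();
return multiDataSource;
With pooling in place, the per-request overhead of the tenant switch itself should be negligible.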
I am trying to integrate Spring Boot and Shiro. When I tried to call SecurityUtils.getSubject() in one of my controllers, an exception occurred:
org.apache.shiro.UnavailableSecurityManagerException: No SecurityManager accessible to the calling code, either bound to the org.apache.shiro.util.ThreadContext or as a vm static singleton. This is an invalid application configuration.
I just followed some tutorials and docs to configure Shiro and here is my ShiroConfig class:
@Configuration
public class ShiroConfig {
@Bean
public Realm realm() {
return new UserRealm();
}
@Bean
public HashedCredentialsMatcher hashedCredentialsMatcher() {
HashedCredentialsMatcher hashedCredentialsMatcher = new HashedCredentialsMatcher();
hashedCredentialsMatcher.setHashAlgorithmName(PasswordEncoder.getALGORITHM());
hashedCredentialsMatcher.setHashIterations(PasswordEncoder.getITERATION());
return hashedCredentialsMatcher;
}
@Bean
public UserRealm shiroRealm() {
UserRealm userRealm = new UserRealm();
userRealm.setCredentialsMatcher(hashedCredentialsMatcher());
return userRealm;
}
@Bean
public SessionsSecurityManager securityManager() {
DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager();
securityManager.setRealm(shiroRealm());
return securityManager;
}
@Bean
public DefaultAdvisorAutoProxyCreator defaultAdvisorAutoProxyCreator() {
DefaultAdvisorAutoProxyCreator defaultAdvisorAutoProxyCreator = new DefaultAdvisorAutoProxyCreator();
defaultAdvisorAutoProxyCreator.setUsePrefix(true);
return defaultAdvisorAutoProxyCreator;
}
@Bean
public ShiroFilterChainDefinition shiroFilterChainDefinition() {
DefaultShiroFilterChainDefinition definition = new DefaultShiroFilterChainDefinition();
definition.addPathDefinition("/login", "anon");
definition.addPathDefinition("/register", "anon");
definition.addPathDefinition("/api/**", "user");
return definition;
}
}
And this is the code which caused the exception:
@PostMapping("/login")
@ResponseBody
public Object login(@RequestParam("username") String username,
@RequestParam("password") String password) {
if (username.equals("") || password.equals("")) {
return "please provide complete information";
}
UsernamePasswordToken token = new UsernamePasswordToken(username, password);
Subject subject = SecurityUtils.getSubject(); // this line caused exception
...
}
I am very confused about this exception. Could anyone help?
EDIT
I am using Spring Boot 2.1.6.RELEASE and shiro-spring-boot-starter 1.4.0.
Are you using the shiro-spring-boot-web-starter dependency instead of the shiro-spring-boot-starter dependency?
It looks like the web starter is required for Spring Boot web applications, according to this doc:
https://shiro.apache.org/spring-boot.html#Spring-WebApplications
I have a multi-tenant MongoDB application. Let's assume that
the right connection to the right database is chosen based on the tenant name from an HTTP request header (I use a previously prepared properties file with the tenant names).
When the application starts, MongoDB is configured and I don't have any tenant information yet, because no request has been sent to the application, so I don't know which database I should connect to. Is it possible for the MongoDB connection to be configured dynamically when I try to get some data from a Mongo repository (at that point I have the tenant name from the HTTP request)?
MongoDbConfiguration:
@Configuration
public class MongoDbConfiguration {
private final MongoConnector mongoConnector;
@Autowired
public MongoDbConfiguration(MongoConnector mongoConnector) {
this.mongoConnector = mongoConnector;
}
@Bean
public MongoDbFactory mongoDbFactory() {
return new MultiTenantSingleMongoDbFactory(mongoConnector, new MongoExceptionTranslator());
}
@Bean
public MongoTemplate mongoTemplate() {
return new MongoTemplate(mongoDbFactory());
}
}
@Component
@Slf4j
public class MultiTenantMongoDbFactory extends SimpleMongoDbFactory {
private static final Logger logger = LoggerFactory.getLogger(MultiTenantMongoDbFactory.class);
private Map<String, DbConfig> tenantToDbConfig;
private Map<String, MongoDatabase> tenantToMongoDatabase;
@Autowired
public MultiTenantMongoDbFactory(
final @Qualifier("sibTenantContexts") Map<String, DbConfig> dbConfigs,
final SibEnvironment env) {
super(new MongoClientURI(env.getDefaultDatabase()));
this.tenantToDbConfig = dbConfigs;
// Initialize tenantToMongoDatabase map.
buildTenantDbs();
}
@Override
public MongoDatabase getDb() {
String tenantId = (!StringUtils.isEmpty(TenantContext.getId()) ? TenantContext.getId()
: SibConstant.DEFAULT_TENANT);
return this.tenantToMongoDatabase.get(tenantId);
}
/**
* Create tenantToMongoDatabase map.
*/
@SuppressWarnings("resource")
private void buildTenantDbs() {
log.debug("Building tenantDB configuration.");
this.tenantToMongoDatabase = new HashMap<>();
/*
* for each tenant fetch dbConfig and intitialize MongoClient and set it to
* tenantToMongoDatabase
*/
for (Entry<String, DbConfig> idToDbconfig : this.tenantToDbConfig.entrySet()) {
try {
this.tenantToMongoDatabase.put(idToDbconfig.getKey(),
new MongoClient(new MongoClientURI(idToDbconfig.getValue()
.getUri())).getDatabase(idToDbconfig.getValue()
.getDatabase()));
} catch (MongoException e) {
log.error(e.getMessage(), e.getCause());
}
}
}
}
Here, tenantToDbConfig is a bean which I create at application boot time, where I store the DB configuration (URL/database name) for every tenant. There is one default database which is required at boot time, and for every request I expect a tenantId in the request header.
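TenantContext, referenced in getDb() above, is not shown; it would be a simple ThreadLocal holder filled by a request interceptor, along the lines of the ThreadLocalStorage class earlier in this thread. A hypothetical sketch of such a holder, matching only the setId()/getId() calls used above:
// Hypothetical ThreadLocal tenant holder; an interceptor would call setId()
// with the tenant taken from the request header and clear() after the request.
public class TenantContext {

    private static final ThreadLocal<String> TENANT_ID = new ThreadLocal<>();

    public static void setId(String tenantId) {
        TENANT_ID.set(tenantId);
    }

    public static String getId() {
        return TENANT_ID.get();
    }

    public static void clear() {
        TENANT_ID.remove();
    }
}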
I've been trying, unsuccessfully, to solve the problem of using 2 databases with the same schema in Spring. The problem I'm trying to solve is creating a web page for a restaurant that is based in 2 different cities, so I thought using a separate database for each city would be the best solution.
I am only getting results from one database; the other one is for some reason never used. The databases are named BA and KE after the cities, and I'm using a City enum with the same values.
BAConfig.java
@Configuration
@PropertySources(
{@PropertySource("classpath:jpa.properties"),
@PropertySource("classpath:jdbc.properties")})
@EnableTransactionManagement // Enable use of the @Transactional annotation
@ComponentScan(basePackages = "dao")
@EnableAspectJAutoProxy(proxyTargetClass = true)
public class BAConfig {
@Autowired
private Environment environment;
@Bean(name="dataSourceBA")
public DataSource buildDataSource()
{
HikariConfig hkcfg = new HikariConfig();
hkcfg.setJdbcUrl(environment.getRequiredProperty("jdbc.urlBA"));
hkcfg.setDriverClassName(environment.getRequiredProperty("jdbc.driverClassName"));
hkcfg.setUsername(environment.getRequiredProperty("jdbc.username"));
hkcfg.setPassword(environment.getRequiredProperty("jdbc.password"));
HikariDataSource ds = new HikariDataSource(hkcfg);
return ds;
}
public static LocalContainerEntityManagerFactoryBean entityManagerFactoryBuilder(DataSource ds)
{
LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
emf.setDataSource(ds);
emf.setJpaVendorAdapter(new EclipseLinkJpaVendorAdapter());
emf.setPackagesToScan("model"); // Look for entities in this package
Properties props = new Properties();
props.setProperty("databasePlatform", "org.eclipse.persistence.platform.database.PostgreSQLPlatform");
props.setProperty("generateDdl", "true");
props.setProperty("showSql", "true");
props.setProperty("eclipselink.weaving", "false");
props.setProperty("eclipselink.ddl-generation", "create-tables");
emf.setJpaProperties(props);
return emf;
}
#Bean(name="BAEM")
public LocalContainerEntityManagerFactoryBean entityManagerFactory(#Qualifier("dataSourceBA") DataSource ds) {
return entityManagerFactoryBuilder(ds);
}
#Bean(name = "txManagerBA")
JpaTransactionManager transactionManager(#Qualifier("BAEM") EntityManagerFactory em) {
JpaTransactionManager transactionManager = new JpaTransactionManager(em);
return transactionManager;
}
}
KEConfig.java
@Configuration
@PropertySources(
{@PropertySource("classpath:jpa.properties"),
@PropertySource("classpath:jdbc.properties")})
@EnableTransactionManagement // Enable use of the @Transactional annotation
@ComponentScan(basePackages = "dao")
@EnableAspectJAutoProxy(proxyTargetClass = true)
public class KEConfig {
@Autowired
private Environment environment;
@Bean("dataSourceKE")
public DataSource buildDataSource()
{
HikariConfig hkcfg = new HikariConfig();
hkcfg.setJdbcUrl(environment.getRequiredProperty("jdbc.urlKE"));
hkcfg.setDriverClassName(environment.getRequiredProperty("jdbc.driverClassName"));
hkcfg.setUsername(environment.getRequiredProperty("jdbc.username"));
hkcfg.setPassword(environment.getRequiredProperty("jdbc.password"));
HikariDataSource ds = new HikariDataSource(hkcfg);
return ds;
}
#Bean(name="KEEM")
public LocalContainerEntityManagerFactoryBean entityManagerFactory(#Qualifier("dataSourceKE")DataSource ds) {
return BAConfig.entityManagerFactoryBuilder(ds);
}
#Bean(name = "txManagerKE")
JpaTransactionManager transactionManager(#Qualifier("KEEM") EntityManagerFactory em) {
JpaTransactionManager transactionManager = new JpaTransactionManager(em);
return transactionManager;
}
}
These are both imported into the MainConfig.java class and use the following properties file.
jdbc.properties
jdbc.driverClassName=org.postgresql.Driver
jdbc.urlBA=jdbc:postgresql://localhost:5432/BambooBA
jdbc.urlKE=jdbc:postgresql://localhost:5432/BambooKE
Here is the REST controller for the given entity.
ReservationsController.java
@RestController
@RequestMapping("/reservations")
public class ReservationsController {
@Autowired
private ReservationsService reservationsService;
@RequestMapping(value = "/getAll/{c}", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<List<Reservations>> getAll(@PathVariable City c) {
try {
List<Reservations> reservations = new ArrayList<Reservations>();
switch(c)
{
case BA: reservations = reservationsService.findAllBA(); break;
case KE: reservations = reservationsService.findAllKE(); break;
}
return new ResponseEntity<List<Reservations>>(reservations, HttpStatus.OK);
} catch (NoSuchElementException e)
{
return new ResponseEntity<List<Reservations>>(HttpStatus.NOT_FOUND);
}
}
}
Here is the reservations service, where I've been trying to pull dummy data (in both DBs the id is 1).
ReservationsService.java
@Service
public class ReservationsService {
@Autowired
private ReservationsDao reservationsDao;
@Transactional("txManagerBA")
public List<Reservations> findAllBA() throws NoSuchElementException {
reservationsDao.setEM(City.BA);
List<Reservations> reservations = new ArrayList<Reservations>();
reservations.add(reservationsDao.find(1));
if(reservations.size() == 0)
{
throw new NoSuchElementException();
}
return reservations;
}
#Transactional("txManagerKE")
public List<Reservations> findAllKE() throws NoSuchElementException {
reservationsDao.setEM(City.KE);
List<Reservations> reservations = new ArrayList<Reservations>();
reservations.add(reservationsDao.find(1));
if(reservations.size() == 0)
{
throw new NoSuchElementException();
}
return reservations;
}
}
And here is the DAO superclass (the particular DAO inherits from this class and only contains a call to the super constructor).
BaseDao.java
public abstract class BaseDao<T>{
@PersistenceContext(unitName = "BAEM")
EntityManager emBA;
@PersistenceContext(unitName = "KEEM")
EntityManager emKE;
EntityManager em;
protected final Class<T> type;
protected BaseDao(Class<T> type) {
this.type = type;
}
public void setEM(City c)
{
switch(c) {
case BA: em = emBA; break;
case KE: em = emKE; break;
}
}
public T find(Integer id) {
return em.find(type, id);
}
public List<T> findAll() {
return em.createQuery("SELECT e FROM " + type.getSimpleName() + " e", type).getResultList();
}
}
The debugger (with a breakpoint set in BaseDao's find() function) shows that the correct persistence unit is being used to retrieve data (when I drill all the way down to persistenceUnitInfo.nonJtaDataSource.jdbcUrl, the URL is correct).
Yet only one of the databases is ever used, no matter the request. I have also tried using an AbstractRoutingDataSource, but with the same problem - the database would get set on the first request and from then on only that database would be used, regardless of the request.
Here is the configuration we are using in our Spring 4 application with a Hikari pool.
The @Qualifier annotation is useful to differentiate between multiple database DataSources, and @Primary makes one of them the default DataSource when injected with @Autowired; when you need the other DataSource, use the @Qualifier annotation along with @Autowired.
@Bean(destroyMethod = "close")
@Primary
@Qualifier("tspDataSource")
public DataSource dataSource() {
HikariConfig config = new HikariConfig("/hikari-tsp.properties");
config.addDataSourceProperty("cachePrepStmts", "true");
config.addDataSourceProperty("prepStmtCacheSize", "250");
config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
final HikariDataSource ds = new HikariDataSource(config);
return ds;
}
and the second one:
@Bean(destroyMethod = "close")
@Qualifier("fdxDataSource")
public DataSource fdxDataSource() {
HikariConfig config = new HikariConfig("/hikari-fdx.properties");
config.addDataSourceProperty("cachePrepStmts", "true");
config.addDataSourceProperty("prepStmtCacheSize", "250");
config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
final HikariDataSource ds = new HikariDataSource(config);
return ds;
}
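With those two beans in place, consumers pick a pool by name; the primary one is injected without a qualifier. A short usage sketch:
// Default pool: resolved via @Primary, no qualifier needed.
@Autowired
private DataSource dataSource;

// Second pool: selected explicitly by its qualifier.
@Autowired
@Qualifier("fdxDataSource")
private DataSource fdxDataSource;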
I am using the code below to connect to Cassandra using Spring Data, but it's painful to create the connection every time.
try {
cluster = Cluster.builder().addContactPoint(host).build();
session = cluster.connect("digitalfootprint");
CassandraOperations cassandraOps = new CassandraTemplate(session);
Select usersQuery = QueryBuilder.select(userColumns).from("Users");
usersQuery.where(QueryBuilder.eq("username", username));
List<Users> userResult = cassandraOps
.select(usersQuery, Users.class);
userList = userResult;
} catch(Exception e) {
e.printStackTrace();
} finally {
cluster.close();
}
Is there any way to have a common static connection or some kind of utility? I am using this in a web application where there will be lots of CRUD operations, so it would be painful to repeat the code everywhere.
Just instantiate the appropriate beans at startup time in your Spring web application. An example would be:
@Configuration
public class CassandraConfig {
@Bean
public CassandraClusterFactoryBean cluster() throws UnknownHostException {
CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
cluster.setContactPoints(InetAddress.getLocalHost().getHostName());
cluster.setPort(9042);
return cluster;
}
@Bean
public CassandraMappingContext mappingContext() {
return new BasicCassandraMappingContext();
}
@Bean
public CassandraConverter converter() {
return new MappingCassandraConverter(mappingContext());
}
@Bean
public CassandraSessionFactoryBean session() throws Exception {
CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
session.setCluster(cluster().getObject());
session.setKeyspaceName("mykeyspace");
session.setConverter(converter());
session.setSchemaAction(SchemaAction.NONE);
return session;
}
@Bean
public CassandraOperations cassandraTemplate() throws Exception {
return new CassandraTemplate(session().getObject());
}
}
Now inject or autowire the CassandraOperations bean any time you want.
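For example, the query from the question can then move into a DAO that simply reuses the shared template instead of building a Cluster and Session per call (class and method names here are just illustrative):
// Illustrative DAO reusing the shared CassandraOperations bean.
@Repository
public class UserDao {

    @Autowired
    private CassandraOperations cassandraOps;

    public List<Users> findByUsername(String username) {
        Select usersQuery = QueryBuilder.select().all().from("Users");
        usersQuery.where(QueryBuilder.eq("username", username));
        return cassandraOps.select(usersQuery, Users.class);
    }
}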