I now have two tables in my database:
User
user_database
In User I store login, password and role.
In user_database I store the database driver, URL, password and user.
[Database diagram]
I want the user to log in to my page, and each subsequent query they make should be sent to their own user database. Why do I need this? I plan to integrate popular e-commerce platforms and create an Android application where a user logs in, sees store data, and can add and view product orders.
Now it's time for practice. My knowledge of Spring is small, so please point out where I am doing something wrong.
All the examples on the web for AbstractRoutingDataSource either declare the data sources in a persistence file or create data source beans and then start using AbstractRoutingDataSource.
In my project I don't know the user's connection details up front; I need to get them from the database. I tried to fetch them using a repository, following this example:
https://stackoverflow.com/a/17575648/3037869
but I get null on @Autowired in the controller; I think the connection for the repository is null. How do I set the connection for this repository and set the route? A defect of this method is that when I add a user, I need to restart the server to refresh the connections.
My next attempt, which I am using now, is a User class implementing UserDetails; after the user logs in I can get the user's connection details from getPrincipal() and add them to the map.
private void setDataSources() {
    HashMap<Object, Object> targetDataSources = new HashMap<>();
    DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create();
    dataSourceBuilder.driverClassName("org.h2.Driver");
    dataSourceBuilder.url("jdbc:h2:mem:AZ;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE");
    dataSourceBuilder.username("sa");
    dataSourceBuilder.password("");
    targetDataSources.put("auth", dataSourceBuilder.build());
    setDefaultTargetDataSource(dataSourceBuilder.build());

    if (SecurityContextHolder.getContext().getAuthentication() != null) {
        User user = (User) SecurityContextHolder.getContext().getAuthentication().getPrincipal();
        System.out.println(user.getUserDatabase().getDriver());
        dataSourceBuilder.driverClassName(user.getUserDatabase().getDriver());
        dataSourceBuilder.url(user.getUserDatabase().getUrl());
        dataSourceBuilder.username("3450_Menadzer");
        dataSourceBuilder.password(user.getUserDatabase().getPassword());
        targetDataSources.put("user", dataSourceBuilder.build());
    }
    setTargetDataSources(targetDataSources);
    afterPropertiesSet(); // the map is only refreshed when I call this
}
I run this method in the constructor and in determineCurrentLookupKey:
@Override
protected Object determineCurrentLookupKey() {
    if (SecurityContextHolder.getContext().getAuthentication() != null) {
        setDataSources();
        return "user";
    }
    return "auth";
}
This works, but when I refresh the page 3-4 times, requests to the user database start failing with:
User 3450_Menadzer already has more than 'max_user_connections' active connections
If I set the connection map manually and don't refresh it on every determineCurrentLookupKey call, I don't have this problem. I think my method is not closing connections. How can I clean this up? Is there a better method to route connections?
EDIT
@SergeBallesta I changed some of my code based on your examples.
This is my class for the map:
@Component
@Scope(value = "singleton")
public class DataSourceMap {

    private Map<Object, Object> dataSourceMap;

    public DataSourceMap() {
        DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create();
        dataSourceBuilder.driverClassName("org.h2.Driver");
        dataSourceBuilder.url("jdbc:h2:mem:AZ;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE");
        dataSourceBuilder.username("sa");
        dataSourceBuilder.password("");
        dataSourceMap = new HashMap<Object, Object>();
        dataSourceMap.put("auth", dataSourceBuilder.build());
    }

    public void addDataSource(String session, DataSource dataSource) {
        this.dataSourceMap.put(session, dataSource);
    }

    public Map<Object, Object> getDataSourceMap() {
        return dataSourceMap;
    }

    public void removeSource(String session) {
        dataSourceMap.remove(session);
    }
}
For AbstractRoutingDataSource I made some changes; I added afterPropertiesSet() because the data source map was not refreshing. I refreshed several times and am no longer getting the error, so I think this works. I still need to test it with more databases in the future.
@Component
public class CustomRoutingDataSource extends AbstractRoutingDataSource {

    @Autowired
    DataSourceMap dataSources;

    @Override
    protected Object determineCurrentLookupKey() {
        setDataSources(dataSources);
        afterPropertiesSet();
        System.out.println("test");
        if (SecurityContextHolder.getContext().getAuthentication() != null) {
            HttpServletRequest request = ((ServletRequestAttributes)
                    RequestContextHolder.getRequestAttributes()).getRequest();
            return request.getSession().getId();
        }
        return "auth";
    }

    @Autowired
    public void setDataSources(DataSourceMap dataSources) {
        System.out.println("data adding");
        setTargetDataSources(dataSources.getDataSourceMap());
    }
}
First, a per-user database is a very uncommon design. If all those databases will end up with the same structure, please do not do that in a real-world application; just add a user_id column to your tables and queries.
Next, I gave another (not full) example of a dynamic AbstractRoutingDataSource in another answer of mine.
And one big difference between my code (beware, never tested) and your question is that I use a SessionListener to close the databases, to prevent the number of open databases from growing indefinitely.
If you are doing this to learn Spring, you could try the following pattern (bottom-up description):
- a session-scoped bean that holds the actual database connection for a user; the connection should be created on first request (to be sure that the user id is present in the session) and cached for subsequent uses. A destroy method (automatically called by Spring when the session is closed) should close the connection.
- an AbstractRoutingDataSource that is injected with a proxy to the above holder, and asks the holder for the actual data source.
As in the other answer, if the same user is likely to have many simultaneous sessions, you could have a singleton bean injected into the session holders that keeps the actual database connections along with the number of active sessions. That way you get one single connection per user, no matter how many concurrent sessions they might have.
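A minimal, untested sketch of the first two items (the bean wiring, class names and the UserDatabase.getUser() accessor are my assumptions; on Boot 1.x, DataSourceBuilder lives in org.springframework.boot.autoconfigure.jdbc instead):

    import java.util.HashMap;

    import javax.annotation.PreDestroy;
    import javax.sql.DataSource;

    import org.springframework.boot.jdbc.DataSourceBuilder;
    import org.springframework.context.annotation.Scope;
    import org.springframework.context.annotation.ScopedProxyMode;
    import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
    import org.springframework.security.core.context.SecurityContextHolder;
    import org.springframework.stereotype.Component;
    import org.springframework.web.context.WebApplicationContext;

    @Component
    @Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
    class SessionDataSourceHolder {

        private DataSource dataSource;

        // Built lazily on first use, when the principal is guaranteed to be
        // present, then cached for the rest of the session.
        DataSource getDataSource() {
            if (dataSource == null) {
                User user = (User) SecurityContextHolder.getContext()
                        .getAuthentication().getPrincipal();
                dataSource = DataSourceBuilder.create()
                        .driverClassName(user.getUserDatabase().getDriver())
                        .url(user.getUserDatabase().getUrl())
                        .username(user.getUserDatabase().getUser()) // accessor assumed
                        .password(user.getUserDatabase().getPassword())
                        .build();
            }
            return dataSource;
        }

        // Spring calls this when the HTTP session ends, so the pool is closed
        // instead of leaking connections.
        @PreDestroy
        void close() throws Exception {
            if (dataSource instanceof AutoCloseable) {
                ((AutoCloseable) dataSource).close();
            }
        }
    }

    // Register this as the application's DataSource via a @Bean method,
    // passing it the session-scoped proxy of the holder.
    class PerUserRoutingDataSource extends AbstractRoutingDataSource {

        private final SessionDataSourceHolder holder;
        private final DataSource authDataSource;

        PerUserRoutingDataSource(DataSource authDataSource, SessionDataSourceHolder holder) {
            this.authDataSource = authDataSource;
            this.holder = holder;
            setTargetDataSources(new HashMap<>()); // required by afterPropertiesSet()
            setDefaultTargetDataSource(authDataSource);
        }

        // Bypass the static key/map lookup and ask the session holder directly.
        @Override
        protected DataSource determineTargetDataSource() {
            return SecurityContextHolder.getContext().getAuthentication() != null
                    ? holder.getDataSource()
                    : authDataSource;
        }

        @Override
        protected Object determineCurrentLookupKey() {
            return null; // unused, determineTargetDataSource() is overridden
        }
    }

The point of overriding determineTargetDataSource() directly is that there is no shared map to rebuild, so afterPropertiesSet() never has to be re-run per request; each session owns exactly one pool, and @PreDestroy closes it when the session dies.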
I have the following scenario:
I have one controller which, based on a path variable, calls a different service.
In every service there is a transactional method where some import logic happens (call an external API, get a CSV file, parse it, convert it to entities and save them in the database).
Additionally, in every service I want to keep statistics of how many entities were updated, inserted and deleted. For that reason I am using org.hibernate.SessionFactory. One example of how I am using it:
@Service
@Slf4j
public class MarketReportImporterImpl extends Support implements MarketReportImporter {

    @Override
    @Transactional
    public void importMarketReports(ImporterLog importerLog) {
        try {
            String export = getCsvFile();
            Session session = getCurrentSessionAndClearSessionFactoryStatistics();
            // parse the csv and save the entities
            flushSession(session);
            setSuccessfulImport(session, importerLog);
        } catch (Exception e) {
            log.error("Failed to import market reports. Unable to parse export", e);
            getTelemetryClient().trackException(e);
            importerLogService.setFailedImport(importerLog, e.getMessage());
        }
    }
}
and the methods getCurrentSessionAndClearSessionFactoryStatistics() and setSuccessfulImport(session, importerLog) are in the Support class:
@Component
public abstract class Support {

    private final ImporterLogService importerLogService;

    @PersistenceContext
    private EntityManager entityManager;

    public Support(ImporterLogService importerLogService) {
        this.importerLogService = importerLogService;
    }

    public void flushSession(Session session) {
        session.flush();
    }

    public void setSuccessfulImport(Session session, ImporterLog importerLog) {
        Statistics statistics = session.getSessionFactory().getStatistics();
        int entityInsertCount = (int) statistics.getEntityInsertCount();
        int entityDeleteCount = (int) statistics.getEntityDeleteCount();
        int entityUpdateCount = (int) statistics.getEntityUpdateCount();
        importerLogService.setSuccessfulImport(importerLog, entityUpdateCount, entityDeleteCount, entityInsertCount);
    }

    public Session getCurrentSessionAndClearSessionFactoryStatistics() {
        Session session = entityManager.unwrap(Session.class);
        SessionFactory sessionFactory = session.getSessionFactory();
        sessionFactory.getStatistics().clear();
        return session;
    }
}
This works perfectly fine when calling one importer. But if I quickly trigger two importers from the frontend (meaning two threads run in parallel), the results get mixed: session.getSessionFactory().getStatistics() will contain combined results from the first and the second importer, and I want a clean result for the current session only. For example, service A and service B run in parallel; in service A I save entities of type aa and in service B entities of type bb. In each service I want to know how many entities were saved, updated or deleted, meaning in service A how many of type aa, and in service B how many of type bb. As I understand it, every thread opens a session of its own, and then for every session I should get the correct results. But as it turns out, the SessionFactory (I am not sure about this statement) is shared by every session, and that is why I get mixed results.
My question is whether there is a way to separate the statistics somehow and get a clear view of which entity was saved, deleted or updated how many times, even in a multithreaded environment.
If you want the statistics of a session, call the getStatistics() method on the Session itself, which will give you SessionStatistics. A SessionFactory should only exist once, and its statistics are collected across all sessions.
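For illustration, a sketch of the per-session route. One caveat: SessionStatistics exposes the entities and collections currently associated with the session, not insert/update/delete counters, so the counting in setSuccessfulImport() would need to be rethought around it:

    import org.hibernate.Session;
    import org.hibernate.stat.SessionStatistics;

    public final class SessionScopedCounts {

        // Unlike SessionFactory statistics, these numbers belong to this
        // Session only, so two importers running in parallel cannot pollute
        // each other's results.
        public static int managedEntityCount(Session session) {
            SessionStatistics stats = session.getStatistics();
            return stats.getEntityCount();
        }

        public static int managedCollectionCount(Session session) {
            return session.getStatistics().getCollectionCount();
        }
    }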
I am having a hard time changing the Oracle datasource schema for my Spring Boot app, which will eventually be used by my Camel routes. I am logging in as user readonly, but all the data is in schema mydata. readonly has read rights to the mydata schema.
I have tried running ALTER SESSION SET CURRENT_SCHEMA=mydata against the datasource (by autowiring it and then getting the connection object from the datasource), and it doesn't work, although I have no issue running selects from statement objects I create off the connection (see code below).
If I create a REST endpoint that executes ALTER SESSION SET CURRENT_SCHEMA=mydata and call it from Postman or a browser, that will change my schema and my other endpoints will work, but I would prefer not to do it that way, since I would have to call that endpoint first. I guess I could call it when my Spring Boot app loads, but it just seems like the wrong way to do it.
I also do not want to hardcode/prefix all my tables with the schema name, since different regions have different schema names; I'd like to configure the schema name in the properties file.
Here is my application.properties. I have tried various ways to set the schema in the properties file based on other Stack Overflow posts, and so far none of them work.
spring.datasource.first.url=jdbc:oracle:thin:@myserver:10100:db9
spring.datasource.first.username=readonly
spring.datasource.first.password=readonlypass
## DOESN'T WORK -> spring.datasource.hikari.schema=mydata
## DOESN'T WORK -> spring.datasource.hikari.first.schema=mydata
# sync database
spring.datasource.second.driverClassName=oracle.jdbc.OracleDriver
spring.datasource.second.url=jdbc:oracle:thin:@myserver2:10100:db15
spring.datasource.second.username=eam
spring.datasource.second.password=eampass
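As an aside (a hedged guess, not verified against this setup): since the code below binds spring.datasource.first.configuration straight onto HikariDataSource, Hikari-specific settings such as schema would presumably have to live under that prefix, not under spring.datasource.hikari:

    # Bound onto HikariDataSource via @ConfigurationProperties("spring.datasource.first.configuration")
    spring.datasource.first.configuration.schema=mydata
    # HikariCP can also run a statement on each new connection:
    spring.datasource.first.configuration.connection-init-sql=ALTER SESSION SET CURRENT_SCHEMA=mydata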
Here is the code from my Spring Boot application:
/**
 * A spring-boot application that includes a Camel route builder to setup the Camel routes
 */
@SpringBootApplication
@ImportResource({"classpath:spring/camel-context.xml"})
public class Application extends RouteBuilder {

    int workorderSyncFrequency = 5000;

    // Autowired the first datasource in attempts to alter the session to set my schema name.
    @Autowired
    DataSource firstDataSource;

    // must have a main method spring-boot can run
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    // setup first datasource
    @Bean
    @Primary
    @ConfigurationProperties("spring.datasource.first")
    public DataSourceProperties firstDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    @Primary
    @ConfigurationProperties("spring.datasource.first.configuration")
    public DataSource firstDataSource() {
        return firstDataSourceProperties().initializeDataSourceBuilder()
                .type(HikariDataSource.class).build();
    }

    // setup second data source
    @Bean
    @ConfigurationProperties("spring.datasource.second")
    public DataSourceProperties secondDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    @ConfigurationProperties("spring.datasource.second.configuration")
    public DataSource secondDataSource() {
        // note: the original snippet called firstDataSourceProperties() here,
        // which looks like a copy-paste slip
        return secondDataSourceProperties().initializeDataSourceBuilder()
                .type(HikariDataSource.class).build();
    }

    @Override
    public void configure() throws Exception {
        Connection con = DataSourceUtils.getConnection(firstDataSource);
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("select count(*) from mydata.ASSET");
        rs.next();
        // simply testing I am using the correct datasource and can query the second schema; this works
        System.out.println("++++++++++++++++++++++ASSET COUNT+++++++++++++++++++" + rs.getInt(1));

        // Tried both of these statements, neither works.
        //stmt.executeQuery("ALTER SESSION SET CURRENT_SCHEMA=mydata");
        //stmt.executeUpdate("ALTER SESSION SET CURRENT_SCHEMA=mydata");

        // Connection is defaulted to autocommit; tried this just in case.
        con.commit();

        // The ASSET table doesn't exist on the readonly schema, only on the mydata schema.
        // If I call test3 I get "table or view does not exist", unless I first call the
        // "schema" endpoint below.
        rest()
            .get("test3")
            .produces(MediaType.APPLICATION_JSON_VALUE)
            .route()
            .to("sql:SELECT * FROM ASSET where rownum < 10"
                + "?dataSource=firstDataSource&outputType=SelectList");

        // This works if I call this route first, but it's a weird way to make this work.
        rest()
            .get("schema")
            .produces(MediaType.APPLICATION_JSON_VALUE)
            .route()
            .to("sql:ALTER SESSION SET CURRENT_SCHEMA=mydata"
                + "?dataSource=firstDataSource&outputType=SelectList");
    }
}
I have started converting an existing Spring Boot (1.5.4.RELEASE) application to support multi-tenant capabilities. I am using MySQL as the database and Spring Data JPA as the data access mechanism, and I am using the schema-based multi-tenant approach, as the Hibernate documentation suggests:
https://docs.jboss.org/hibernate/orm/4.2/devguide/en-US/html/ch16.html
I have implemented the MultiTenantConnectionProvider and CurrentTenantIdentifierResolver interfaces, and I am using a ThreadLocal variable to maintain the current tenant for the incoming request.
public class TenantContext {

    final public static String DEFAULT_TENANT = "master";

    private static ThreadLocal<Tenant> tenantConfig = new ThreadLocal<Tenant>() {
        @Override
        protected Tenant initialValue() {
            Tenant tenant = new Tenant();
            tenant.setSchemaName(DEFAULT_TENANT);
            return tenant;
        }
    };

    public static Tenant getTenant() {
        return tenantConfig.get();
    }

    public static void setTenant(Tenant tenant) {
        tenantConfig.set(tenant);
    }

    public static String getTenantSchema() {
        return tenantConfig.get().getSchemaName();
    }

    public static void clear() {
        tenantConfig.remove();
    }
}
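For context, a minimal sketch of how the CurrentTenantIdentifierResolver side of this typically reads the ThreadLocal (the actual implementation is not shown in the question, so the class name here is assumed):

    import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

    public class SchemaTenantIdentifierResolver implements CurrentTenantIdentifierResolver {

        @Override
        public String resolveCurrentTenantIdentifier() {
            // Consulted by Hibernate when a Session is opened -- not on every
            // query, which is why changing TenantContext mid-request has no
            // effect on an already-open session.
            return TenantContext.getTenantSchema();
        }

        @Override
        public boolean validateExistingCurrentSessions() {
            return true;
        }
    }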
Then I implemented a filter, where I set the tenant dynamically by looking at a request header, as below:
String targetTenantName = request.getHeader(TENANT_HTTP_HEADER);
Tenant tenant = new Tenant();
tenant.setSchemaName(targetTenantName);
TenantContext.setTenant(tenant);
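Since servlet containers reuse pooled threads, it is worth wrapping that in try/finally so the ThreadLocal is cleared when the request completes; a sketch of the surrounding filter (class and header names assumed):

    import java.io.IOException;

    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.springframework.web.filter.OncePerRequestFilter;

    public class TenantFilter extends OncePerRequestFilter {

        private static final String TENANT_HTTP_HEADER = "X-TenantID"; // header name assumed

        @Override
        protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                        FilterChain filterChain) throws ServletException, IOException {
            try {
                Tenant tenant = new Tenant();
                tenant.setSchemaName(request.getHeader(TENANT_HTTP_HEADER));
                TenantContext.setTenant(tenant);
                filterChain.doFilter(request, response);
            } finally {
                // A pooled thread must not carry the previous request's tenant.
                TenantContext.clear();
            }
        }
    }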
This works fine, and now my application points to a different schema based on the request header value.
However, there is a master schema where I store some global settings, and I need to access that schema in the middle of a request for a tenant. Therefore I tried to hardcode the ThreadLocal variable just before that database call, as below:
Tenant tenant = new Tenant();
tenant.setSchemaName("master");
TenantContext.setTenant(tenant);
However, this does not point to the master schema; instead it still accesses the original schema set during the filter. What is the reason for this?
As per my understanding, Hibernate invokes openSession() during the first database call for a tenant, and when I later invoke another database call for "master" it still uses the previous tenant, since CurrentTenantIdentifierResolver is only consulted during openSession(). However, these different database calls are not invoked within a single transaction.
Can you please help me understand the issue with my approach, and suggest how to fix it?
Thanks
Keth
@JonathanJohx Actually, I am trying to override the TenantContext set by the filter in one of the controllers. First I log in to a tenant, where TenantContext is set to that particular tenant. While the request is in that tenant, I request data from master. In order to do that I simply hardcode the tenant, as below:
@RestController
@RequestMapping("/jobTemplates")
public class JobTemplateController {

    @Autowired
    JobTemplateService jobTemplateService;

    @GetMapping
    public JobTemplateList list(Pageable pageable) {
        Tenant tenant = new Tenant();
        tenant.setSchemaName(multitenantMasterDb);
        TenantContext.setTenant(tenant);
        return jobTemplateService.list(pageable);
    }
}
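One hedged way out (untested sketch; the service, repository and entity names are hypothetical): run the master lookup in its own transaction, so Hibernate opens a fresh session and consults CurrentTenantIdentifierResolver again, and switch the ThreadLocal before the call rather than inside it, restoring it afterwards:

    @Service
    public class MasterSettingsReader {

        @Autowired
        private SettingsRepository settingsRepository; // hypothetical repository

        // REQUIRES_NEW suspends the tenant's transaction and opens a new
        // session, which is the moment the tenant resolver is consulted again.
        @Transactional(propagation = Propagation.REQUIRES_NEW)
        public Settings readGlobalSettings() {
            return settingsRepository.findOne(1L);
        }
    }

    // Caller side: switch the tenant BEFORE entering the REQUIRES_NEW method,
    // because the resolver runs at session-open time, not on every query.
    Tenant previous = TenantContext.getTenant();
    try {
        Tenant master = new Tenant();
        master.setSchemaName(TenantContext.DEFAULT_TENANT);
        TenantContext.setTenant(master);
        Settings settings = masterSettingsReader.readGlobalSettings();
        // ... use settings ...
    } finally {
        TenantContext.setTenant(previous);
    }

Note that the caller must be a different bean than MasterSettingsReader; a self-invocation would bypass the transactional proxy and no new session would be opened.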
I work for a company that has multiple brands, so we have a couple of MongoDB instances on different hosts, holding the same document model for our Customer on each of these brands (same structure, not the same data).
For the sake of simplicity, let's say we have an Orange brand with a database instance serving on port 27017, and a Banana brand with a database instance serving on port 27018.
Currently I'm developing a fraud detection service which is required to connect to all the databases and analyze all the customers' behavior together, regardless of the brand.
So my "model" has a shared entity for Customer, annotated with @Document (org.springframework.data.mongodb.core.mapping.Document).
Next thing, I have two MongoRepository interfaces such as:
public interface BananaRepository extends MongoRepository<Customer, String> {
    List<Customer> findAllByEmail(String email);
}

public interface OrangeRepository extends MongoRepository<Customer, String> {
    List<Customer> findAllByEmail(String email);
}
with some stub methods for finding customers by id, email, and so on. Spring is responsible for generating the implementation classes for such interfaces (pretty standard Spring Data stuff).
In order to hint each of these repositories to connect to the right MongoDB instance, I need two Mongo configs, such as:
@Configuration
@EnableMongoRepositories(basePackageClasses = {Customer.class})
public class BananaConfig extends AbstractMongoConfiguration {

    @Value("${database.mongodb.banana.username:}")
    private String username;

    @Value("${database.mongodb.banana.database}")
    private String database;

    @Value("${database.mongodb.banana.password:}")
    private String password;

    @Value("${database.mongodb.banana.uri}")
    private String mongoUri;

    @Override
    protected Collection<String> getMappingBasePackages() {
        return Collections.singletonList("com.acme.model");
    }

    @Override
    protected String getDatabaseName() {
        return this.database;
    }

    @Override
    @Bean(name = "bananaClient")
    public MongoClient mongoClient() {
        final String authString;
        // todo: Use MongoCredential
        // todo: Use ServerAddress
        // (See https://docs.spring.io/spring-data/mongodb/docs/current/reference/html/#repositories) 10.3.4
        if (valueIsPresent(username) || valueIsPresent(password)) {
            authString = String.format("%s:%s@", username, password);
        } else {
            authString = "";
        }
        String connectionString = "mongodb://" + authString + mongoUri + "/" + database;
        System.out.println("Going to connect to: " + connectionString);
        return new MongoClient(new MongoClientURI(connectionString, builder()
                .connectTimeout(5000)
                .socketTimeout(8000)
                .readPreference(ReadPreference.secondaryPreferred())
                .writeConcern(ACKNOWLEDGED)));
    }

    @Bean(name = "bananaTemplate")
    public MongoTemplate mongoTemplate(@Qualifier("bananaFactory") MongoDbFactory mongoFactory) {
        return new MongoTemplate(mongoFactory);
    }

    @Bean(name = "bananaFactory")
    public MongoDbFactory mongoFactory() {
        return new SimpleMongoDbFactory(mongoClient(), getDatabaseName());
    }

    private static int sizeOfValue(String value) {
        if (value == null) return 0;
        return value.length();
    }

    private static boolean valueIsMissing(String value) {
        return sizeOfValue(value) == 0;
    }

    private static boolean valueIsPresent(String value) {
        return !valueIsMissing(value);
    }
}
I also have a similar config for Orange, pointing to the proper Mongo instance.
Then I have my service, like this:
public List<? extends Customer> findAllByEmail(String email) {
    return Stream.concat(
            bananaRepository.findAllByEmail(email).stream(),
            orangeRepository.findAllByEmail(email).stream())
        .collect(Collectors.toList());
}
Notice that I'm calling both repositories and then collecting the results back into one single list.
What I would expect to happen is that each repository connects to its corresponding Mongo instance and queries for the customer by email.
But this doesn't happen. The query is always executed against the same Mongo instance.
Yet in the database log I can see both connections being made by Spring.
It just uses one connection to run the queries for both repositories.
This is not surprising, as both Mongo configs point to the same model package here. Right. But I also tried other approaches, such as creating a BananaCustomer extends Customer in its own model.banana package, and an OrangeCustomer extends Customer in its model.orange package, along with specifying the proper basePackageClasses in each config. That didn't work either; I ended up with both queries running against the same database.
:(
After scavenging the official Spring Data MongoDB documentation for hours, and looking through thousands of lines of code here and there, I've run out of options: it seems nobody has done what I'm trying to accomplish before.
Except for this guy here, who had to do the same thing but using JPA instead of MongoDB: Link to article
Well, while it's still Spring Data, it's not for MongoDB.
So here is my question:
How can I explicitly tell each repository to use a specific Mongo config?
Magical autowiring rules, except when they don't work and nobody understands the magic.
Thanks in advance.
Well, I had a very detailed answer, but Stack Overflow complained about it looking like spam and didn't allow me to post it.
The full answer is still available as a Gist file here.
The bottom line is that both the MongoRepository interface and the model object must be placed in the same package.
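For comparison, a sketch of the usual alternative (package names assumed, and the bean bodies elided since they are shown in the question): give each config its own repository base package and point it at its template via mongoTemplateRef, so each generated repository proxy is built on the right template:

    @Configuration
    @EnableMongoRepositories(
            basePackages = "com.acme.repository.banana", // only BananaRepository lives here
            mongoTemplateRef = "bananaTemplate")          // matches @Bean(name = "bananaTemplate")
    public class BananaConfig extends AbstractMongoConfiguration {
        // getDatabaseName(), mongoClient(), bananaTemplate and bananaFactory
        // exactly as in the question's BananaConfig
    }

    // ...and symmetrically:
    @Configuration
    @EnableMongoRepositories(
            basePackages = "com.acme.repository.orange",
            mongoTemplateRef = "orangeTemplate")
    public class OrangeConfig extends AbstractMongoConfiguration {
        // Orange equivalents of the beans above
    }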
I am trying to implement logging to a DB table using Spring AOP. By "logging to a table" I mean writing, to a special log table, information about the record that was CREATED/UPDATED/DELETED in the usual table for the domain object.
I wrote part of the code and everything works well except one thing: when the transaction is rolled back, the changes in the log table are still committed successfully. This is strange to me, because my AOP advice uses the same transaction as my business and DAO layers. (From my AOP advice I call methods of a special manager class with transaction propagation MANDATORY, and I also checked the transaction name with TransactionSynchronizationManager.getCurrentTransactionName() in the business layer, the DAO layer and the AOP advice, and it is the same.)
Has anyone tried to implement something similar in practice? Is it possible to use the same transaction in the AOP advice as in the business layer, and to roll back the changes made in the AOP advice if an error occurs in the business layer?
Thank you in advance for answers.
EDIT
I want to clarify that the problem with rollback occurs only for the changes made from the AOP advice. All changes made in the DAO layer are rolled back successfully. For example, if an exception is thrown, the changes made in the DAO layer are rolled back, but the information in the log table is still saved (committed). I can't understand why, because as I wrote above, the AOP advice uses the same transaction.
EDIT 2
I checked with the debugger the piece of code where I write to the log table in the AOP advice, and it seems that JdbcTemplate's update method executes outside the transaction, because the changes were committed to the DB directly after execution of the statement, before the transactional method finished.
EDIT 3
I solved this problem. Actually, it was my own fault. I'm using MySQL, and after creating the log table I didn't change the DB engine; HeidiSQL had set MyISAM by default. MyISAM doesn't support transactions, so I changed the engine to InnoDB (as for all the other tables) and now everything works perfectly.
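(For reference, switching an existing table over is a single statement per table, e.g. ALTER TABLE operation_log ENGINE=InnoDB; — the table name here is hypothetical.)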
Thank you all for your help, and sorry for the disturbance.
If anyone is interested, here is a simplified example that illustrates my approach.
Consider a DAO class that has a save method:
@Repository(value = "jdbcUserDAO")
@Transactional(propagation = Propagation.SUPPORTS, readOnly = true, rollbackFor = Exception.class)
public class JdbcUserDAO implements UserDAO {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @LoggedOperation(affectedRows = AffectedRows.ONE, loggedEntityClass = User.class, operationName = OperationName.CREATE)
    @Transactional(propagation = Propagation.REQUIRED, readOnly = false, rollbackFor = Exception.class)
    @Override
    public User save(final User user) {
        if (user == null || user.getRole() == null) {
            throw new IllegalArgumentException("Input User object or nested Role object should not be null");
        }

        KeyHolder keyHolder = new GeneratedKeyHolder();

        jdbcTemplate.update(new PreparedStatementCreator() {
            @Override
            public PreparedStatement createPreparedStatement(Connection connection)
                    throws SQLException {
                PreparedStatement ps = connection.prepareStatement(SQL_INSERT_USER, new String[]{"ID"});
                ps.setString(1, user.getUsername());
                ps.setString(2, user.getPassword());
                ps.setString(3, user.getFullName());
                ps.setLong(4, user.getRole().getId());
                ps.setString(5, user.geteMail());
                return ps;
            }
        }, keyHolder);

        user.setId((Long) keyHolder.getKey());

        VacationDays vacationDays = user.getVacationDays();
        vacationDays.setId(user.getId());

        // Create related vacation days record.
        vacationDaysDAO.save(vacationDays);
        user.setVacationDays(vacationDays);

        return user;
    }
}
Here is how the aspect looks:
@Component
@Aspect
@Order(2)
public class DBLoggingAspect {

    @Autowired
    private DBLogManager dbLogManager;

    @Around(value = "execution(* com.crediteuropebank.vacationsmanager.server.dao..*.*(..)) " +
            "&& @annotation(loggedOperation)", argNames = "loggedOperation")
    public Object doOperation(final ProceedingJoinPoint joinPoint,
                              final LoggedOperation loggedOperation) throws Throwable {
        Object[] arguments = joinPoint.getArgs();

        /*
         * This should be called before the logging operation.
         */
        Object retVal = joinPoint.proceed();

        // Execute logging action
        dbLogManager.logOperation(arguments, loggedOperation);

        return retVal;
    }
}
And here is how my DB log manager class looks:
#Component("dbLogManager")
public class DBLogManager {
#Autowired
private JdbcTemplate jdbcTemplate;
#InjectLogger
private Logger logger;
#Transactional(rollbackFor={Exception.class}, propagation=Propagation.MANDATORY, readOnly=false)
public void logOperation(final Object[] inputArguments, final LoggedOperation loggedOperation) {
try {
/*
* Prepare query and array of the arguments
*/
jdbcTemplate.update(insertQuery.toString(),
insertedValues);
} catch (Exception e) {
StringBuilder sb = new StringBuilder();
// Prepare log string
logger.error(sb.toString(), e);
}
}
It could be to do with the order of the advice: you would want your @Transactional-related advice to take effect around (or before and after) your logging-related advice. If you are using Spring AOP, you can probably control this using the order attribute of the advice; give your transaction-related advice the highest precedence so that it executes last on the way out.
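A hedged sketch of that with annotation-style configuration (XML users would set the order attribute on tx:annotation-driven instead):

    import org.springframework.context.annotation.Configuration;
    import org.springframework.core.Ordered;
    import org.springframework.transaction.annotation.EnableTransactionManagement;

    @Configuration
    // Lowest order value = highest precedence: the transaction interceptor
    // becomes the outermost advice, so it commits or rolls back only after the
    // @Order(2) DBLoggingAspect has written its log row inside the same
    // transaction.
    @EnableTransactionManagement(order = Ordered.HIGHEST_PRECEDENCE)
    public class TransactionOrderConfig {
    }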
Nothing to do with AOP; if you are using XML configuration, set the datasource property autoCommit to false, like:
<bean id="datasource" ...>
    <property name="autoCommit" value="false"/>
</bean>