My project is on spring-boot-starter-parent "1.5.9.RELEASE" and I'm migrating it to spring-boot-starter-parent "2.3.1.RELEASE".
It is a multi-tenant application in which one database has multiple schemas and, based on the tenant id, execution switches between schemas.
I had achieved this schema switching using SimpleNativeJdbcExtractor, but in the latest Spring Boot version NativeJdbcExtractor is no longer available.
Code snippet for the existing implementation:
@Bean
@Scope(
    value = ConfigurableBeanFactory.SCOPE_PROTOTYPE,
    proxyMode = ScopedProxyMode.TARGET_CLASS)
public JdbcTemplate jdbcTemplate() {
    JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
    SimpleNativeJdbcExtractor simpleNativeJdbcExtractor = new SimpleNativeJdbcExtractor() {
        @Override
        public Connection getNativeConnection(Connection con) throws SQLException {
            LOGGER.debug("Set schema for getNativeConnection " + Utilities.getTenantId());
            con.setSchema(Utilities.getTenantId());
            return super.getNativeConnection(con);
        }

        @Override
        public Connection getNativeConnectionFromStatement(Statement stmt) throws SQLException {
            LOGGER.debug("Set schema for getNativeConnectionFromStatement " + Utilities.getTenantId());
            Connection nativeConnectionFromStatement = super.getNativeConnectionFromStatement(stmt);
            nativeConnectionFromStatement.setSchema(Utilities.getTenantId());
            return nativeConnectionFromStatement;
        }
    };
    simpleNativeJdbcExtractor.setNativeConnectionNecessaryForNativeStatements(true);
    simpleNativeJdbcExtractor.setNativeConnectionNecessaryForNativePreparedStatements(true);
    jdbcTemplate.setNativeJdbcExtractor(simpleNativeJdbcExtractor);
    return jdbcTemplate;
}
Here Utilities.getTenantId() (a value stored in a ThreadLocal) returns the schema name based on the REST request.
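For context, a minimal sketch of what such a ThreadLocal holder might look like (hypothetical; the original post does not show this class, and a servlet filter or interceptor would call setTenantId per request):
public final class Utilities {

    private static final ThreadLocal<String> TENANT_ID = new ThreadLocal<>();

    public static void setTenantId(String tenantId) {
        TENANT_ID.set(tenantId);
    }

    public static String getTenantId() {
        return TENANT_ID.get();
    }

    public static void clear() {
        TENANT_ID.remove(); // avoid leaking tenant ids across pooled threads
    }
}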
Questions:
What are the alternatives to NativeJdbcExtractor, so that the schema can be changed dynamically for JdbcTemplate?
Is there any other way to set the schema based on the request while creating the JdbcTemplate bean?
Any help, code snippet, or guidance to solve this issue is deeply appreciated.
Thanks.
When I ran the application in debug mode I saw that Spring had selected HikariDataSource.
I had to intercept the getConnection call and update the schema.
So I did the following.
I created a custom class which extends HikariDataSource:
public class CustomHikariDataSource extends HikariDataSource {
    @Override
    public Connection getConnection() throws SQLException {
        Connection connection = super.getConnection();
        connection.setSchema(Utilities.getTenantId());
        return connection;
    }
}
Then, in the config class, I created a bean for my CustomHikariDataSource class:
@Bean
public DataSource customDataSource(DataSourceProperties properties) {
    final CustomHikariDataSource dataSource = (CustomHikariDataSource) properties
            .initializeDataSourceBuilder().type(CustomHikariDataSource.class).build();
    if (properties.getName() != null) {
        dataSource.setPoolName(properties.getName());
    }
    return dataSource;
}
This will be used by the JdbcTemplate bean:
@Bean
@Scope(
    value = ConfigurableBeanFactory.SCOPE_PROTOTYPE,
    proxyMode = ScopedProxyMode.TARGET_CLASS)
public JdbcTemplate jdbcTemplate() throws SQLException {
    return new JdbcTemplate(dataSource);
}
With this approach, the DataSource bean is created only once, and for every JdbcTemplate access the proper schema is set at runtime.
There's no need to get rid of JdbcTemplate. NativeJdbcExtractor was removed in Spring Framework 5 as it isn't needed with JDBC 4.
You should replace your usage of NativeJdbcExtractor with calls to connection.unwrap(Class). The method is inherited by Connection from JDBC's Wrapper interface.
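For illustration, a minimal sketch assuming an Oracle driver (substitute your own driver's connection interface; isWrapperFor and unwrap are standard JDBC 4 methods on java.sql.Wrapper):
void useNativeConnection(DataSource dataSource) throws SQLException {
    try (Connection con = dataSource.getConnection()) {
        if (con.isWrapperFor(oracle.jdbc.OracleConnection.class)) {
            // the unwrapped object exposes the driver-specific API
            oracle.jdbc.OracleConnection oraCon = con.unwrap(oracle.jdbc.OracleConnection.class);
        }
    }
}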
You may also want to consider using AbstractRoutingDataSource, which is designed to route connection requests to different underlying data sources based on a lookup key.
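A minimal sketch of that approach, assuming one DataSource per tenant and reusing the ThreadLocal-backed Utilities.getTenantId() from the question:
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        // matched against the keys of the map passed to setTargetDataSources(...)
        return Utilities.getTenantId();
    }
}
The routing bean is then configured once, at startup, with a map from tenant ids to the per-tenant DataSources.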
Related
I am having a hard time changing the Oracle datasource schema for my Spring Boot app, which will eventually be used by my Camel routes. I am logging in as user readonly, but all the data is in schema mydata. readonly has read rights to the mydata schema.
I have tried calling ALTER SESSION SET CURRENT_SCHEMA=mydata against the datasource (by autowiring it and then getting the connection object from it), and it doesn't work. I have no issue running selects from statement objects I create off the connection (see the code below).
If I create a REST endpoint that executes ALTER SESSION SET CURRENT_SCHEMA=mydata and call it from Postman or a browser, that changes my schema and my other endpoints work, but I would prefer not to do it that way since I would have to remember to call that endpoint. I guess I could call it when my Spring Boot app loads, but that just seems like the wrong way to do it.
I also do not want to hardcode/prefix all my tables with the schema name, since different regions have different schema names; I'd like to configure the schema name in the properties file.
Here is my application.properties. I have tried various ways to set the schema in the properties file based on other Stack Overflow posts, and so far none of them work.
spring.datasource.first.url=jdbc:oracle:thin:@myserver:10100:db9
spring.datasource.first.username=readonly
spring.datasource.first.password=readonlypass
## DOESN'T WORK -> spring.datasource.hikari.schema=mydata
## DOESN'T WORK -> spring.datasource.hikari.first.schema=mydata

# sync database
spring.datasource.second.driverClassName=oracle.jdbc.OracleDriver
spring.datasource.second.url=jdbc:oracle:thin:@myserver2:10100:db15
spring.datasource.second.username=eam
spring.datasource.second.password=eampass
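One untested possibility (an assumption on my part, not something from the original post): HikariCP exposes its own schema pool property (HikariConfig.setSchema, available since HikariCP 2.7), and because the first pool binds its settings from the spring.datasource.first.configuration prefix in the Java config below, the property would live under that prefix:
# Hypothetical: binds to HikariConfig.setSchema for the first pool
spring.datasource.first.configuration.schema=mydata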
Here is the code from my Spring Boot application:
/**
 * A spring-boot application that includes a Camel route builder to set up the Camel routes.
 */
@SpringBootApplication
@ImportResource({"classpath:spring/camel-context.xml"})
public class Application extends RouteBuilder {

    int workorderSyncFrequency = 5000;

    // Autowired the first datasource in an attempt to alter the session and set my schema name.
    @Autowired
    DataSource firstDataSource;

    // must have a main method spring-boot can run
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    // set up the first datasource
    @Bean
    @Primary
    @ConfigurationProperties("spring.datasource.first")
    public DataSourceProperties firstDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    @Primary
    @ConfigurationProperties("spring.datasource.first.configuration")
    public DataSource firstDataSource() {
        return firstDataSourceProperties().initializeDataSourceBuilder()
                .type(HikariDataSource.class).build();
    }

    // set up the second datasource
    @Bean
    @ConfigurationProperties("spring.datasource.second")
    public DataSourceProperties secondDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    @ConfigurationProperties("spring.datasource.second.configuration")
    public DataSource secondDataSource() {
        return secondDataSourceProperties().initializeDataSourceBuilder()
                .type(HikariDataSource.class).build();
    }

    @Override
    public void configure() throws Exception {
        Connection con = DataSourceUtils.getConnection(firstDataSource);
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("select count(*) from mydata.ASSET");
        rs.next();
        // simply testing that I am using the correct datasource and can query the mydata schema; this works
        System.out.println("++++++++++++++++++++++ASSET COUNT+++++++++++++++++++" + rs.getInt(1));

        // Tried both of these statements; neither works.
        //stmt.executeQuery("ALTER SESSION SET CURRENT_SCHEMA=mydata");
        //stmt.executeUpdate("ALTER SESSION SET CURRENT_SCHEMA=mydata");

        // The connection defaults to autocommit; tried this just in case.
        con.commit();

        // The ASSET table doesn't exist on the readonly schema, only on the mydata schema.
        // If I call test3 I get "table or view does not exist", unless I first call the
        // "schema" endpoint below.
        rest()
            .get("test3")
            .produces(MediaType.APPLICATION_JSON_VALUE)
            .route()
            .to("sql:SELECT * FROM ASSET where rownum < 10"
                + "?dataSource=firstDataSource&outputType=SelectList");

        // This works if I call this route first, but it's a weird way to make this work.
        rest()
            .get("schema")
            .produces(MediaType.APPLICATION_JSON_VALUE)
            .route()
            .to("sql:ALTER SESSION SET CURRENT_SCHEMA=mydata"
                + "?dataSource=firstDataSource&outputType=SelectList");
    }
}
I use jOOQ to handle SQL queries on a PostgreSQL database in a WildFly web application. The DSLContext is injected via CDI into my beans, following the example at http://awolski.com/integrating-jooq-easy/. A bean looks like this:
public class Foo {

    private @Inject DSLContext jooq;
    ...

    public boolean update... {
        ...
        try {
            jooq.doStuff();
        }
        catch (Exception e) {
            System.out.println(e.getCause().getMessage());
        }
        finally {
            jooq.close();
        }
        ...
    }
}
When I run the application, every connection leaks. What am I missing?
The constructor was the critical point.
Since the documentation at https://www.jooq.org/doc/3.10/manual/sql-building/dsl-context/connection-vs-datasource/ says that jOOQ will manage the connections itself, passing the DataSource object instead of the Connection object solves the issue:
public DSLContext getDSLContext() throws SQLException {
    return DSL.using(ds, SQLDialect.POSTGRES); // dialect for the PostgreSQL database described in the question
}
as shown at http://awolski.com/integrating-jooq-easy/.
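For completeness, a minimal sketch of a DataSource-backed CDI producer along those lines (the JNDI name is hypothetical):
import javax.annotation.Resource;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.sql.DataSource;
import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

@ApplicationScoped
public class DSLContextProducer {

    @Resource(lookup = "java:jboss/datasources/MyDS") // hypothetical JNDI name
    private DataSource ds;

    @Produces
    public DSLContext dslContext() {
        // Backed by a DataSource, jOOQ acquires and releases a connection per query,
        // so the beans that inject the DSLContext have nothing to close manually.
        return DSL.using(ds, SQLDialect.POSTGRES);
    }
}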
I am using Spring to configure transactions in my application. I have two transaction managers defined for two RabbitMQ servers.
....
#Bean(name = "devtxManager")
public PlatformTransactionManager devtxManager() {
return new RabbitTransactionManager(devConnectionFactory());
}
#Bean(name = "qatxManager")
public PlatformTransactionManager qatxManager() {
return new RabbitTransactionManager(qaConnectionFactory());
}
#Bean
public ConnectionFactory devConnectionFactory() {
CachingConnectionFactory factory = new CachingConnectionFactory();
factory.setHost(propertyLoader.loadProperty("dev.rabbit.host"));
factory.setPort(Integer.parseInt(propertyLoader.loadProperty("dev.rabbit.port")));
factory.setVirtualHost("product");
factory.setUsername(propertyLoader.loadProperty("dev.sender.rabbit.user"));
factory.setPassword(propertyLoader.loadProperty("dev.sender.rabbit.password"));
return factory;
}
#Bean
public ConnectionFactory qaConnectionFactory() {
CachingConnectionFactory factory = new CachingConnectionFactory();
factory.setHost(propertyLoader.loadProperty("qa.rabbit.host"));
factory.setPort(Integer.parseInt(propertyLoader.loadProperty("qa.rabbit.port")));
factory.setVirtualHost("product");
factory.setUsername(propertyLoader.loadProperty("qa.sender.rabbit.user"));
factory.setPassword(propertyLoader.loadProperty("qa.sender.rabbit.password"));
return factory;
}
...
In my service class I need to pick the right transaction manager based on the 'env' variable passed in (i.e. if env == "qa" I need to choose 'qatxManager'; if env == "dev" I need to choose 'devtxManager').
....
@Transactional(value = "qatxManager")
public String requeue(String env, String sourceQueue, String destQueue) {
    // read from the queue
    List<Message> messageList = sendReceiveImpl.receive(env, sourceQueue);
    ....
How can I get it done?
I think you need a facade. Define an interface and create two classes implementing it, each with a different @Transactional(value = ...).
Then define one facade class which holds the two implementations (use @Qualifier to distinguish them). The facade receives the env string and calls the method of the proper bean, as shown below.
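A minimal sketch of that idea (the names are illustrative, not from the question):
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

public interface RequeueService {
    String requeue(String sourceQueue, String destQueue);
}

@Service("qaRequeueService")
class QaRequeueService implements RequeueService {
    @Override
    @Transactional(value = "qatxManager")
    public String requeue(String sourceQueue, String destQueue) {
        // QA-side requeue logic goes here
        return "qa";
    }
}

@Service("devRequeueService")
class DevRequeueService implements RequeueService {
    @Override
    @Transactional(value = "devtxManager")
    public String requeue(String sourceQueue, String destQueue) {
        // dev-side requeue logic goes here
        return "dev";
    }
}

@Service
class RequeueFacade {

    private final RequeueService qa;
    private final RequeueService dev;

    RequeueFacade(@Qualifier("qaRequeueService") RequeueService qa,
                  @Qualifier("devRequeueService") RequeueService dev) {
        this.qa = qa;
        this.dev = dev;
    }

    public String requeue(String env, String sourceQueue, String destQueue) {
        // picking the implementation also picks the transaction manager
        return ("qa".equals(env) ? qa : dev).requeue(sourceQueue, destQueue);
    }
}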
This is related to:
MongoDB and SpEL Expressions in @Document annotations
This is the way I am creating my Mongo template:
@Bean
public MongoDbFactory mongoDbFactory() throws UnknownHostException {
    String dbname = getCustid();
    return new SimpleMongoDbFactory(new MongoClient("localhost"), "mydb");
}

@Bean
MongoTemplate mongoTemplate() throws UnknownHostException {
    MappingMongoConverter converter =
            new MappingMongoConverter(mongoDbFactory(), new MongoMappingContext());
    return new MongoTemplate(mongoDbFactory(), converter);
}
I have a tenant provider class:
@Component("tenantProvider")
public class TenantProvider {
    public String getTenantId() {
        // custom ThreadLocal logic for getting the tenant name
    }
}
And my domain class:
@Document(collection = "#{#tenantProvider.getTenantId()}_device")
public class Device {
    // my fields here
}
As you can see, I have created my MongoTemplate as specified in the post, but I still get the below error:
Exception in thread "main" org.springframework.expression.spel.SpelEvaluationException: EL1057E:(pos 1): No bean resolver registered in the context to resolve access to bean 'tenantProvider'
What am I doing wrong?
Finally figured out why I was getting this issue.
When using Servlet 3 initialization, make sure that you add the application context to the Mongo mapping context as follows:
@Autowired
private ApplicationContext appContext;

public MongoDbFactory mongoDbFactory() throws UnknownHostException {
    return new SimpleMongoDbFactory(new MongoClient("localhost"), "apollo-mongodb");
}

@Bean
MongoTemplate mongoTemplate() throws UnknownHostException {
    final MongoDbFactory factory = mongoDbFactory();
    final MongoMappingContext mongoMappingContext = new MongoMappingContext();
    mongoMappingContext.setApplicationContext(appContext);
    // Learned from the web: prevents Spring from including the _class attribute
    final MappingMongoConverter converter = new MappingMongoConverter(factory, mongoMappingContext);
    converter.setTypeMapper(new DefaultMongoTypeMapper(null));
    return new MongoTemplate(factory, converter);
}
Check the autowiring of the context and also
mongoMappingContext.setApplicationContext(appContext);
With these two lines I was able to get the component wired correctly and use it in multi-tenant mode.
The above answers only worked partially in my case.
I've been struggling with the same problem and finally realized that under some runtime execution paths (when RepositoryFactorySupport relies on AbstractMongoQuery to query MongoDB, instead of SimpleMongoRepository, which as far as I know backs the "out of the box" methods provided by Spring Data), the metadata object of type MongoEntityMetadata that belongs to the MongoQueryMethod used in AbstractMongoQuery is updated only once, in a method named getEntityInformation().
Because the MongoQueryMethod object that holds a reference to this stateful bean seems to be pooled/cached by the infrastructure code, @Document annotations with SpEL do not always work.
As far as I know, our only choice as developers is to use MongoOperations directly from the @Repository bean, in order to be able to specify the right collection name, evaluated at runtime with SpEL.
I've tried to use AOP to modify this behaviour by setting a null collection name in MongoEntityMetadata, but this does not help, because the AbstractMongoQuery inner classes that implement the Execution interface would also need to be changed to check whether the MongoEntityMetadata collection name is null and, in that case, use a different MongoTemplate method signature.
MongoTemplate is smart enough to guess the right collection name by using its private method
private <T> String determineEntityCollectionName(T obj)
I've created a ticket in Spring's JIRA: https://jira.spring.io/browse/DATAMONGO-1043
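To illustrate the MongoOperations workaround mentioned above, a minimal sketch using names from the question where possible (DeviceRepository itself is illustrative):
import java.util.List;
import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.stereotype.Repository;

@Repository
public class DeviceRepository {

    private final MongoOperations mongoOperations;
    private final TenantProvider tenantProvider;

    public DeviceRepository(MongoOperations mongoOperations, TenantProvider tenantProvider) {
        this.mongoOperations = mongoOperations;
        this.tenantProvider = tenantProvider;
    }

    public List<Device> findAll() {
        // resolve the tenant-specific collection name at call time rather than
        // relying on the SpEL expression in @Document
        return mongoOperations.findAll(Device.class, tenantProvider.getTenantId() + "_device");
    }
}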
If you have the mongoTemplate configured as in the related issue, the only thing I can think of is this:
<context:component-scan base-package="com.tenantprovider.package" />
Or if you want to use annotations:
@ComponentScan(basePackages = "com.tenantprovider.package")
You might not be scanning the tenant provider package.
Ex:
@ComponentScan(basePackages = "com.tenantprovider.package")
@Document(collection = "#{#tenantProvider.getTenantId()}_device")
public class Device {
    // my fields here
}
Firstly, I can't use the declarative @Transactional approach because the application has multiple JDBC data sources. I won't bore you with the details, but suffice it to say the DAO method is passed the correct data source on which to perform the logic. All the JDBC data sources share the same schema; they're separate because I'm exposing REST services for an ERP system.
Because of this legacy system, there are a lot of long-lived locked records which I have no control over, so I want dirty reads.
Using JDBC I would perform the following:
private Customer getCustomer(DataSource ds, String id) {
    Customer c = null;
    PreparedStatement stmt = null;
    Connection con = null;
    try {
        con = ds.getConnection();
        con.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
        stmt = con.prepareStatement(SELECT_CUSTOMER);
        stmt.setString(1, id);
        ResultSet res = stmt.executeQuery();
        c = buildCustomer(res);
    } catch (SQLException ex) {
        // log errors
    } finally {
        // close resources
    }
    return c;
}
Okay, lots of boilerplate, I know. So I've tried out JdbcTemplate, since I'm using Spring.
Using JdbcTemplate:
private Customer getCustomer(JdbcTemplate t, String id) {
    return t.queryForObject(SELECT_CUSTOMER, new CustomerRowMapper(), id);
}
Much nicer, but it still uses the default transaction isolation. I need to change this somehow, so I thought about using a TransactionTemplate.
private Customer getCustomer(final TransactionTemplate tt,
                             final JdbcTemplate t,
                             final String id) {
    return tt.execute(new TransactionCallback<Customer>() {
        @Override
        public Customer doInTransaction(TransactionStatus ts) {
            return t.queryForObject(SELECT_CUSTOMER, new CustomerRowMapper(), id);
        }
    });
}
But how do I set the transaction isolation here? I can't find anything on the callback or on the TransactionTemplate itself to do this.
I'm reading Spring in Action, Third Edition, which explains up to the point I've reached; the chapter on transactions continues on to declarative transactions with annotations, but as mentioned I can't use those, because my DAO needs to determine at runtime which data source to use based on the provided arguments, in my case a country code.
Any help would be greatly appreciated.
I've currently solved this by using the DataSourceTransactionManager directly, though it seems I'm not saving as much boilerplate as I had first hoped. Don't get me wrong, it's cleaner, though I still can't help feeling there must be a simpler way. I don't need a transaction for the read; I just want to set the isolation level.
private Customer getCustomer(final DataSourceTransactionManager txMan,
                             final JdbcTemplate t,
                             final String id) {
    DefaultTransactionDefinition def = new DefaultTransactionDefinition();
    def.setIsolationLevel(TransactionDefinition.ISOLATION_READ_UNCOMMITTED);
    TransactionStatus status = txMan.getTransaction(def);
    Customer c = null;
    try {
        c = t.queryForObject(SELECT_CUSTOMER, new CustomerRowMapper(), id);
    } catch (Exception ex) {
        txMan.rollback(status);
        throw ex;
    }
    txMan.commit(status);
    return c;
}
I'm still going to keep this one unanswered for a while, as I truly believe there must be a better way.
Refer to Spring 3.1.x Documentation - Chapter 11 - Transaction Management
Using the TransactionTemplate helps you here; you just need to configure it appropriately. The transaction template also carries the transaction configuration: TransactionTemplate actually extends DefaultTransactionDefinition.
So somewhere in your configuration you should have something like this:
<bean id="txTemplate" class=" org.springframework.transaction.support.TransactionTemplate">
<property name="isolationLevelName" value="ISOLATION_READ_UNCOMMITTED"/>
<property name="readOnly" value="true" />
<property name="transactionManager" ref="transactionManager" />
</bean>
If you then inject this bean into your class, you should be able to use the TransactionTemplate-based code you posted earlier.
However, there might be a nicer solution that can clean up your code. For one of the projects I worked on, we had a similar setup to yours (single app, multiple databases). For this we wrote some Spring code which basically switches the datasource when needed. More information can be found here.
If that is too far-fetched or overkill for your application, you can also try Spring's AbstractRoutingDataSource, which selects the proper datasource to use based on a lookup key (the country code in your case).
By using either of those two solutions you can start using Spring's declarative transaction management approach (which should clean up your code considerably).
Define a proxy data source of class org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy and set the transaction isolation level on it. Inject the actual data source either through a setter or the constructor.
<bean id="yourDataSource" class="org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy">
<constructor-arg index="0" ref="targetDataSource"/>
<property name="defaultTransactionIsolationName" value="TRANSACTION_READ_UNCOMMITTED"/>
</bean>
I'm not sure you can do it without working at the 'transactional' abstraction level provided by Spring.
A more XML-free way to build your TransactionTemplate could be something like this:
private TransactionTemplate getTransactionTemplate(String executionTenantCode, boolean readOnlyTransaction) {
    TransactionTemplate tt = new TransactionTemplate(transactionManager);
    tt.setReadOnly(readOnlyTransaction);
    tt.setIsolationLevel(TransactionDefinition.ISOLATION_READ_UNCOMMITTED);
    tt.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    return tt;
}
In any case, I would leverage the @Transactional annotation, specifying the appropriate transaction manager bound to a separate data source. I've done this for a multi-tenant application.
The usage:
@Transactional(transactionManager = CATALOG_TRANSACTION_MANAGER,
        isolation = Isolation.READ_UNCOMMITTED,
        readOnly = true)
public void myMethod() {
    //....
}
The bean(s) declaration:
@Configuration
public class CatalogDataSourceConfiguration {

    @Bean(name = "catalogDataSource")
    @ConfigurationProperties("catalog.datasource")
    public DataSource catalogDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = ENTITY_MANAGER_FACTORY)
    public EntityManagerFactory entityManagerFactory(
            @Qualifier("catalogEntityManagerFactoryBean") LocalContainerEntityManagerFactoryBean emFactoryBean) {
        return emFactoryBean.getObject();
    }

    @Bean(name = CATALOG_TRANSACTION_MANAGER)
    public PlatformTransactionManager catalogTM(@Qualifier(ENTITY_MANAGER_FACTORY) EntityManagerFactory emf) {
        JpaTransactionManager transactionManager = new JpaTransactionManager();
        transactionManager.setEntityManagerFactory(emf);
        return transactionManager;
    }

    @Bean
    public NamedParameterJdbcTemplate catalogJdbcTemplate() {
        return new NamedParameterJdbcTemplate(catalogDataSource());
    }
}