I have started converting an existing Spring Boot (1.5.4.RELEASE) application to support multi-tenancy. I am using MySQL as the database and Spring Data JPA as the data access mechanism, and I am following the schema-based multi-tenant approach, as the Hibernate documentation suggests:
https://docs.jboss.org/hibernate/orm/4.2/devguide/en-US/html/ch16.html
I have implemented the MultiTenantConnectionProvider and CurrentTenantIdentifierResolver interfaces, and I am using a ThreadLocal variable to maintain the current tenant for the incoming request.
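For reference, the resolver essentially delegates to the TenantContext shown next; a minimal sketch (the class name is illustrative):

public class TenantIdentifierResolver implements CurrentTenantIdentifierResolver {

    @Override
    public String resolveCurrentTenantIdentifier() {
        // Ask the ThreadLocal-backed context for the current schema
        return TenantContext.getTenantSchema();
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}

And here is the TenantContext itself: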
public class TenantContext {

    public static final String DEFAULT_TENANT = "master";

    private static ThreadLocal<Tenant> tenantConfig = new ThreadLocal<Tenant>() {
        @Override
        protected Tenant initialValue() {
            Tenant tenant = new Tenant();
            tenant.setSchemaName(DEFAULT_TENANT);
            return tenant;
        }
    };

    public static Tenant getTenant() {
        return tenantConfig.get();
    }

    public static void setTenant(Tenant tenant) {
        tenantConfig.set(tenant);
    }

    public static String getTenantSchema() {
        return tenantConfig.get().getSchemaName();
    }

    public static void clear() {
        tenantConfig.remove();
    }
}
Then I implemented a filter, where I set the tenant dynamically by looking at a request header, as below (a full sketch of the filter follows the snippet):
String targetTenantName = request.getHeader(TENANT_HTTP_HEADER);
Tenant tenant = new Tenant();
tenant.setSchemaName(targetTenantName);
TenantContext.setTenant(tenant);
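For completeness, the full filter looks roughly like this (a sketch; the header name and the OncePerRequestFilter base class are my choices). Clearing the ThreadLocal in finally matters, because servlet threads are pooled and would otherwise leak a stale tenant into the next request:

public class TenantFilter extends OncePerRequestFilter {

    private static final String TENANT_HTTP_HEADER = "X-TenantID"; // illustrative name

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        try {
            Tenant tenant = new Tenant();
            tenant.setSchemaName(request.getHeader(TENANT_HTTP_HEADER));
            TenantContext.setTenant(tenant);
            filterChain.doFilter(request, response);
        } finally {
            // Reset to the default tenant so the pooled thread starts clean
            TenantContext.clear();
        }
    }
}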
This works fine, and now my application points to a different schema based on the request header value.
However, there is a master schema where I store some global settings, and I need to access that schema in the middle of a request for a tenant. Therefore I tried to hard-code the ThreadLocal variable just before that database call, as below:
Tenant tenant = new Tenant();
tenant.setSchemaName("master");
TenantContext.setTenant(tenant);
However, this does not point to the master schema; instead it still accesses the original schema set during the filter. What is the reason for this?
As per my understanding, Hibernate invokes openSession() during the first database call for a tenant, and when I later invoke another database call for "master" it still uses the previous tenant, because CurrentTenantIdentifierResolver is consulted only during openSession(). However, these different database calls are not invoked within a single transaction.
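If that understanding is correct, I am wondering whether forcing a brand-new transaction (and hence a new session) for the master lookup would fix it. An untested sketch (the service, repository, and entity names are hypothetical):

@Service
public class MasterSettingsService {

    @Autowired
    private SettingsRepository settingsRepository; // hypothetical repository for the master schema

    // REQUIRES_NEW suspends any transaction/session already bound to the thread,
    // so Hibernate should open a new session and consult
    // CurrentTenantIdentifierResolver again with the updated ThreadLocal.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public Settings loadGlobalSettings() {
        Tenant master = new Tenant();
        master.setSchemaName(TenantContext.DEFAULT_TENANT);
        TenantContext.setTenant(master);
        return settingsRepository.findOne(1L); // hypothetical master-schema query
    }
}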
Can you please help me understand the issue with my approach, and suggest how to fix it?
Thanks
Keth
@JonathanJohx Actually I am trying to override the TenantContext set by the filter in one of the controllers. First I log in to a tenant, where the TenantContext is set to that particular tenant. While the request is in that tenant, I request data from master. In order to do that I simply hard-code the tenant as below:
@RestController
@RequestMapping("/jobTemplates")
public class JobTemplateController {

    @Autowired
    JobTemplateService jobTemplateService;

    @GetMapping
    public JobTemplateList list(Pageable pageable) {
        Tenant tenant = new Tenant();
        tenant.setSchemaName(multitenantMasterDb);
        TenantContext.setTenant(tenant);
        return jobTemplateService.list(pageable);
    }
}
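To keep the rest of the request on the original tenant, the override could also be scoped with a save-and-restore, e.g. (sketch):

@GetMapping
public JobTemplateList list(Pageable pageable) {
    Tenant previous = TenantContext.getTenant(); // remember the tenant set by the filter
    Tenant master = new Tenant();
    master.setSchemaName(multitenantMasterDb);
    TenantContext.setTenant(master);
    try {
        return jobTemplateService.list(pageable);
    } finally {
        TenantContext.setTenant(previous); // restore for the rest of the request
    }
}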
I’m trying to implement a multi-tenant microservice using Spring Boot. I have already implemented the web layer and the persistence layer. On the web layer, I implemented a filter which sets the tenant id in a prototype bean (using ThreadLocalTargetSource); on the persistence layer I used the Hibernate multi-tenancy configuration (schema per tenant). They work fine: data is persisted in the appropriate schema. Currently I am implementing the same behaviour on the messaging layer, using the spring-kafka library. So far it works the way I expected, but I’d like to know if there is a better way to do it.
Here is my code:
This is the class that manages a KafkaMessageListenerContainer:
@Component
public class MessagingListenerContainer {

    private final MessagingProperties messagingProperties;
    private KafkaMessageListenerContainer<String, String> container;

    @PostConstruct
    public void init() {
        ContainerProperties containerProps = new ContainerProperties(
                messagingProperties.getConsumer().getTopicsAsList());
        containerProps.setMessageListener(buildCustomMessageListener());
        container = createContainer(containerProps);
        container.start();
    }

    @Bean
    public MessageListener<String, String> buildCustomMessageListener() {
        return new CustomMessageListener();
    }

    private KafkaMessageListenerContainer<String, String> createContainer(
            ContainerProperties containerProps) {
        Map<String, Object> props = consumerProps();
        …
        return container;
    }

    private Map<String, Object> consumerProps() {
        Map<String, Object> props = new HashMap<>();
        …
        return props;
    }

    @PreDestroy
    public void finish() {
        container.stop();
    }
}
This is the CustomMessageListener:
@Slf4j
public class CustomMessageListener implements MessageListener<String, String> {

    @Autowired
    private TenantStore tenantStore; // Prototype bean

    @Autowired
    private List<ServiceListener> services;

    @Override
    public void onMessage(ConsumerRecord<String, String> record) {
        log.info("Tenant {} | Payload: {} | Record: {}", record.key(),
                record.value(), record.toString());
        tenantStore.setTenantId(record.key()); // Currently the tenant is set as the key
        services.stream().forEach(sl -> sl.onMessage(record.value()));
    }
}
This is a test service which would use the message data and tenant:
@Slf4j
@Service
public class ConsumerService implements ServiceListener {

    private final MessagesRepository messages;
    private final TenantStore tenantStore;

    @Override
    public void onMessage(String message) {
        log.info("ConsumerService {}, tenant {}", message, tenantStore.getTenantId());
        messages.save(new Message(message));
    }
}
Thanks for your time!
Just to be clear (correct me if I'm wrong): you are using the same topic(s) for all your tenants, and the way you distinguish the messages of each tenant is by the message key, which in your case is the tenant id.
A slight improvement can be made by using message headers to store the tenant id instead of the key. By doing this you will not be limited to partitioning messages by tenant.
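For example, assuming the producer adds a "tenantId" header (the header name is an assumption), onMessage could read it roughly like this, using org.apache.kafka.common.header.Header:

@Override
public void onMessage(ConsumerRecord<String, String> record) {
    // Read the tenant from a record header instead of the key
    Header tenantHeader = record.headers().lastHeader("tenantId");
    if (tenantHeader != null) {
        tenantStore.setTenantId(new String(tenantHeader.value(), StandardCharsets.UTF_8));
    }
    services.forEach(sl -> sl.onMessage(record.value()));
}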
Although the model you describe works, it has a major security issue: if someone gets access to your topic, you will be leaking the data of all your tenants.
A more secure approach is using topic naming conventions and ACLs (access control lists); you can find a short explanation here. In a nutshell, you can include the name of the tenant in the topic's name, either as a suffix or as a prefix.
e.g. orders_tenantA, orders_tenantB or tenantA_orders, tenantB_orders
Then, using ACLs, you can restrict which applications can connect to those specific topics. This scenario is also helpful if one of your tenants needs to connect one of their applications directly to your Kafka cluster.
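For illustration, ACLs can also be created programmatically through the Kafka AdminClient. A rough sketch, given an existing AdminClient instance named adminClient (the principal and topic names are examples; the classes come from org.apache.kafka.common.acl and org.apache.kafka.common.resource):

// Allow tenant A's principal to read only its own topic
AclBinding binding = new AclBinding(
        new ResourcePattern(ResourceType.TOPIC, "orders_tenantA", PatternType.LITERAL),
        new AccessControlEntry("User:tenantA", "*", AclOperation.READ, AclPermissionType.ALLOW));
adminClient.createAcls(Collections.singleton(binding));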
So far, the only way I know to set the database name to use with Spring Data ArangoDB is by hardcoding it in a database() method while extending AbstractArangoConfiguration, like so:
@Configuration
@EnableArangoRepositories(basePackages = { "com.company.mypackage" })
public class MyConfiguration extends AbstractArangoConfiguration {

    @Override
    public ArangoDB.Builder arango() {
        return new ArangoDB.Builder();
    }

    @Override
    public String database() {
        // Name of the database to be used
        return "example-database";
    }
}
What if I'd like to implement multi-tenancy, where each tenant has data in a separate database, and use e.g. the subdomain to determine which database name should be used?
Can the database used by Spring Data ArangoDB be determined at runtime, dynamically?
This question is related to the discussion here: Manage multi-tenancy ArangoDB connection - but is Spring Data ArangoDB specific.
Turns out this is delightfully simple: just change the ArangoConfiguration database() method @Override to return a Spring Expression Language (SpEL) expression:
@Override
public String database() {
    return "#{tenantProvider.getDatabaseName()}";
}
which in this example references a TenantProvider @Component, which can be implemented like so:
@Component
public class TenantProvider {

    private final ThreadLocal<String> databaseName;

    public TenantProvider() {
        super();
        databaseName = new ThreadLocal<>();
    }

    public String getDatabaseName() {
        return databaseName.get();
    }

    public void setDatabaseName(final String databaseName) {
        this.databaseName.set(databaseName);
    }
}
This component can then be @Autowired anywhere in your code to set the database name, such as in a servlet filter (see the sketch below), or in my case in an Apache Camel route Processor and in database service methods.
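For example, a servlet filter deriving the database name from the request's subdomain might look like this (a sketch, under the assumption that the subdomain maps one-to-one to a database name):

@Component
public class TenantFilter extends OncePerRequestFilter {

    @Autowired
    private TenantProvider tenantProvider;

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        try {
            // e.g. "acme.example.com" -> database "acme"
            String subdomain = request.getServerName().split("\\.")[0];
            tenantProvider.setDatabaseName(subdomain);
            filterChain.doFilter(request, response);
        } finally {
            tenantProvider.setDatabaseName(null); // don't leak the name to the pooled thread
        }
    }
}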
P.S. I became aware of this possibility by reading the ArangoTemplate code and a section of the Spring Expression support documentation (via), and one merged pull request.
I work for a company that has multiple brands, so we have a couple of MongoDB instances on different hosts holding the same document model for our Customer on each of these brands (same structure, not same data).
For the sake of simplicity, let's say we have an Orange brand with a database instance serving on port 27017, and a Banana brand with a database instance serving on port 27018.
Currently I'm developing a fraud detection service which is required to connect to all databases and analyze all the customers' behavior together, regardless of the brand.
So my "model" has a shared entity for Customer, annotated with #Document (org.springframework.data.mongodb.core.mapping.Document)
The next thing I have is two MongoRepositories, such as:
public interface BananaRepository extends MongoRepository<Customer, String> {
    List<Customer> findAllByEmail(String email);
}

public interface OrangeRepository extends MongoRepository<Customer, String> {
    List<Customer> findAllByEmail(String email);
}
With some stub methods for finding customers by id, email, and so on. Spring is responsible for generating the implementation classes for such interfaces (pretty standard Spring stuff).
In order to point each of these repositories at the right MongoDB instance, I need two Mongo configurations, such as:
@Configuration
@EnableMongoRepositories(basePackageClasses = {Customer.class})
public class BananaConfig extends AbstractMongoConfiguration {

    @Value("${database.mongodb.banana.username:}")
    private String username;

    @Value("${database.mongodb.banana.database}")
    private String database;

    @Value("${database.mongodb.banana.password:}")
    private String password;

    @Value("${database.mongodb.banana.uri}")
    private String mongoUri;

    @Override
    protected Collection<String> getMappingBasePackages() {
        return Collections.singletonList("com.acme.model");
    }

    @Override
    protected String getDatabaseName() {
        return this.database;
    }

    @Override
    @Bean(name = "bananaClient")
    public MongoClient mongoClient() {
        final String authString;
        // todo: Use MongoCredential
        // todo: Use ServerAddress
        // (See https://docs.spring.io/spring-data/mongodb/docs/current/reference/html/#repositories) 10.3.4
        if (valueIsPresent(username) || valueIsPresent(password)) {
            authString = String.format("%s:%s@", username, password);
        } else {
            authString = "";
        }
        String connectionString = "mongodb://" + authString + mongoUri + "/" + database;
        System.out.println("Going to connect to: " + connectionString);
        return new MongoClient(new MongoClientURI(connectionString, builder()
                .connectTimeout(5000)
                .socketTimeout(8000)
                .readPreference(ReadPreference.secondaryPreferred())
                .writeConcern(ACKNOWLEDGED)));
    }

    @Bean(name = "bananaTemplate")
    public MongoTemplate mongoTemplate(@Qualifier("bananaFactory") MongoDbFactory mongoFactory) {
        return new MongoTemplate(mongoFactory);
    }

    @Bean(name = "bananaFactory")
    public MongoDbFactory mongoFactory() {
        return new SimpleMongoDbFactory(mongoClient(), getDatabaseName());
    }

    private static int sizeOfValue(String value) {
        if (value == null) return 0;
        return value.length();
    }

    private static boolean valueIsMissing(String value) {
        return sizeOfValue(value) == 0;
    }

    private static boolean valueIsPresent(String value) {
        return !valueIsMissing(value);
    }
}
I also have a similar config for Orange, which points to the proper Mongo instance.
Then I have my service like this:
public List<? extends Customer> findAllByEmail(String email) {
    return Stream.concat(
            bananaRepository.findAllByEmail(email).stream(),
            orangeRepository.findAllByEmail(email).stream())
        .collect(Collectors.toList());
}
Notice that I'm calling both repositories and then collecting the results back into one single list.
What I would expect to happen is that each repository would connect to its corresponding mongo instance and query for the customer by its email.
But this doesn't happen: the queries are always executed against the same Mongo instance. In the database log I can see both connections being made by Spring, but it uses just one connection to run the queries for both repositories.
This is not surprising, as both Mongo configs point to the same model package. Right. But I also tried other approaches, such as creating a BananaCustomer extends Customer in its own model.banana package, and an OrangeCustomer extends Customer in a model.orange package, along with specifying the proper basePackageClasses in each config. That didn't work either; I still ended up with both queries running against the same database.
:(
After scavenging the official Spring Data MongoDB documentation for hours, and looking through thousands of lines of code here and there, I've run out of options: it seems nobody has done what I'm trying to accomplish before.
Except for this guy, who had to do the same thing but using JPA instead of MongoDB: Link to article. While that's still Spring Data, it's not for MongoDB.
So here is my question:
How can I explicitly tell each repository to use a specific Mongo config?
Magical autowiring rules, except when it doesn't work and nobody understands the magic.
Thanks in advance.
Well, I had a very detailed answer, but StackOverflow complained about it looking like spam and didn't allow me to post it.
The full answer is still available as a Gist file here.
The bottom line is that both the MongoRepository (interface) and the model object must be placed in the same package.
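In other words, the separation is wired by giving each configuration its own repository package and binding it to its own template. A rough sketch (the package names are examples; mongoTemplateRef is the @EnableMongoRepositories attribute doing the work):

// In BananaConfig: scan only the banana package and bind its repositories
// to the banana template
@Configuration
@EnableMongoRepositories(basePackages = "com.acme.repository.banana",
        mongoTemplateRef = "bananaTemplate")
public class BananaConfig extends AbstractMongoConfiguration {
    // mongoClient()/mongoFactory()/"bananaTemplate" beans as shown above
}

// In OrangeConfig: a disjoint package bound to the orange template
@Configuration
@EnableMongoRepositories(basePackages = "com.acme.repository.orange",
        mongoTemplateRef = "orangeTemplate")
public class OrangeConfig extends AbstractMongoConfiguration {
    // the Orange counterparts of the beans above
}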
I am using AWS ECS to host my application and DynamoDB for all database operations, so I'll have the same database with different table names for different environments, such as "dev_users" (for the dev environment), "test_users" (for the test environment), etc. (This is how our company uses the same DynamoDB account for different environments.)
So I would like to change the "tableName" of the model class using an environment variable passed through the "AWS ECS task definition" environment parameters.
For example, my model class is:
@DynamoDBTable(tableName = "dev_users")
public class User {
Now I need to replace "dev" with "test" when I deploy my container in the test environment. I know I can use
@Value("${DOCKER_ENV:dev}")
to access environment variables, but I'm not sure how to use variables outside the class. Is there any way I can use the Docker env variable to select my table prefix?
My intent is to use it like this:
I know this is not possible as written, but is there any other way or workaround for this?
Edit 1:
I am working on Rahul's answer and facing some issues. Before describing the issues, I'll explain the process I followed.
Process:
I have created the beans in my config class (com.myapp.users.config).
As I don't have repositories, I have given my model class package name as the "basePackage" path. (Please check the image.)
For 1) I replaced the "table name over-rider bean injection" to avoid the error.
For 2) I printed the name that is passed to this method, but it is null. So I am checking all the possible ways to pass the value here.
Check the image for error:
I haven't changed anything in my user model class, as the beans should replace the name of the DynamoDBTable when they are executed. But the table name overriding is not happening: data is pulled only from the table name given at the model class level.
What am I missing here?
The table names can be altered via an overridden DynamoDBMapperConfig bean.
For your case, where you have to prefix each table with a literal, you can add the bean as shown below. Here the prefix can be the environment name.
@Bean
public TableNameOverride tableNameOverrider() {
    String prefix = ... // Use @Value to inject values via Spring or use any logic to define the table prefix
    return TableNameOverride.withTableNamePrefix(prefix);
}
For more details, check out the complete documentation here:
https://github.com/derjust/spring-data-dynamodb/wiki/Alter-table-name-during-runtime
I was able to achieve table names prefixed with the active profile name.
First I added a TableNameResolver class as below:
@Component
public class TableNameResolver extends DynamoDBMapperConfig.DefaultTableNameResolver {

    private String envProfile;

    public TableNameResolver() {}

    public TableNameResolver(String envProfile) {
        this.envProfile = envProfile;
    }

    @Override
    public String getTableName(Class<?> clazz, DynamoDBMapperConfig config) {
        String stageName = envProfile.concat("_");
        String rawTableName = super.getTableName(clazz, config);
        return stageName.concat(rawTableName);
    }
}
Then I set up the DynamoDBMapper bean as below:
@Bean
@Primary
public DynamoDBMapper dynamoDBMapper(AmazonDynamoDB amazonDynamoDB) {
    return new DynamoDBMapper(amazonDynamoDB,
            new DynamoDBMapperConfig.Builder()
                    .withTableNameResolver(new TableNameResolver(envProfile))
                    .build());
}
The envProfile variable holds the active profile property value, accessed from the application.properties file:
#Value("${spring.profiles.active}")
private String envProfile;
We had the same issue with regard to the need to change table names at runtime. We are using spring-data-dynamodb 5.0.2, and the following configuration seems to provide the solution we need.
First, I annotated my bean accessor:
@EnableDynamoDBRepositories(dynamoDBMapperConfigRef = "getDynamoDBMapperConfig", basePackages = "my.company.base.package")
I also set up an environment variable called ENV_PREFIX, which is wired by Spring via SpEL:
#Value("#{systemProperties['ENV_PREFIX']}")
private String envPrefix;
Then I set up a TableNameOverride bean:
@Bean
public DynamoDBMapperConfig.TableNameOverride getTableNameOverride() {
    return DynamoDBMapperConfig.TableNameOverride.withTableNamePrefix(envPrefix);
}
Finally, I set up the DynamoDBMapperConfig bean using TableNameOverride injection. In 5.0.2, we had to set a standard DynamoDBTypeConverterFactory in the DynamoDBMapperConfig builder to avoid an NPE:
@Bean
public DynamoDBMapperConfig getDynamoDBMapperConfig(DynamoDBMapperConfig.TableNameOverride tableNameOverride) {
    DynamoDBMapperConfig.Builder builder = new DynamoDBMapperConfig.Builder();
    builder.setTableNameOverride(tableNameOverride);
    builder.setTypeConverterFactory(DynamoDBTypeConverterFactory.standard());
    return builder.build();
}
In hindsight, I could have set up a DynamoDBTypeConverterFactory bean that returns a standard DynamoDBTypeConverterFactory and injected that into the getDynamoDBMapperConfig() method via the DynamoDBMapperConfig builder, roughly as sketched below. But this will also do the job.
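That alternative would look roughly like this (an untested sketch):

@Bean
public DynamoDBTypeConverterFactory typeConverterFactory() {
    return DynamoDBTypeConverterFactory.standard();
}

@Bean
public DynamoDBMapperConfig getDynamoDBMapperConfig(
        DynamoDBMapperConfig.TableNameOverride tableNameOverride,
        DynamoDBTypeConverterFactory typeConverterFactory) {
    return new DynamoDBMapperConfig.Builder()
            .withTableNameOverride(tableNameOverride)
            .withTypeConverterFactory(typeConverterFactory)
            .build();
}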
I upvoted the other answer, but here is an idea.
Create a base class with all your user details:
@MappedSuperclass
public abstract class AbstractUser {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String firstName;
    private String lastName;
}
Create two implementations with different table names and Spring profiles:
@Profile(value = {"dev", "default"})
@Entity(name = "dev_user")
public class DevUser extends AbstractUser {
}

@Profile(value = {"prod"})
@Entity(name = "prod_user")
public class ProdUser extends AbstractUser {
}
Create a single JPA repository that uses the mapped superclass:
public interface UserRepository extends CrudRepository<AbstractUser, Long> {
}
Then switch the implementation with the Spring profile:
@RunWith(SpringJUnit4ClassRunner.class)
@DataJpaTest
@Transactional
public class UserRepositoryTest {

    @Autowired
    protected DataSource dataSource;

    @BeforeClass
    public static void setUp() {
        System.setProperty("spring.profiles.active", "prod");
    }

    @Test
    public void test1() throws Exception {
        DatabaseMetaData metaData = dataSource.getConnection().getMetaData();
        ResultSet tables = metaData.getTables(null, null, "PROD_USER", new String[] { "TABLE" });
        tables.next();
        assertEquals("PROD_USER", tables.getString("TABLE_NAME"));
    }
}
I now have two tables in the database: User and user_database.
In User I store the login, password, and role.
In user_database I store the database driver, URL, password, and user.
[Database diagram]
I want the user to log in to my page, and from then on every connection they make should go to their own user database. Why do I need this? I plan to map popular e-commerce stores and create an Android application where a user logs in, sees store data, and can add and view product orders.
Now, time for practice. My knowledge of Spring is limited, so please explain if I am doing something wrong.
All the examples on the web for AbstractRoutingDataSource declare the datasources in a persistence file, or create datasource beans, and then start using AbstractRoutingDataSource.
In my project I don't know the user connections up front; I need to get them from the database. I tried getting them using a repository and this example:
https://stackoverflow.com/a/17575648/3037869
but I get null on the @Autowired field in the controller; I think the connection for the repository is null. How do I set the connection for this repository and set up the routing? The drawback of this method is that when I add a user, I need to restart the server to refresh the connections.
My next attempt, which I am using now, is a User class implementing UserDetails; after the user logs in I can get the user's connection details from getPrincipal() and add them to the map:
private void setDataSources() {
    HashMap<Object, Object> targetDataSources = new HashMap<>();
    DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create();
    dataSourceBuilder.driverClassName("org.h2.Driver");
    dataSourceBuilder.url("jdbc:h2:mem:AZ;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE");
    dataSourceBuilder.username("sa");
    dataSourceBuilder.password("");
    targetDataSources.put("auth", dataSourceBuilder.build());
    setDefaultTargetDataSource(dataSourceBuilder.build());
    if (SecurityContextHolder.getContext().getAuthentication() != null) {
        User user = (User) SecurityContextHolder.getContext().getAuthentication().getPrincipal();
        System.out.println(user.getUserDatabase().getDriver());
        dataSourceBuilder.driverClassName(user.getUserDatabase().getDriver());
        dataSourceBuilder.url(user.getUserDatabase().getUrl());
        dataSourceBuilder.username("3450_Menadzer");
        dataSourceBuilder.password(user.getUserDatabase().getPassword());
        targetDataSources.put("user", dataSourceBuilder.build());
    }
    setTargetDataSources(targetDataSources);
    afterPropertiesSet(); // the map is refreshed when I add this
}
I run this method in the constructor and in determineCurrentLookupKey:
@Override
protected Object determineCurrentLookupKey() {
    if (SecurityContextHolder.getContext().getAuthentication() != null) {
        setDataSources();
        return "user";
    }
    return "auth";
}
This works, but when I refresh a request for the user database 3-4 times, I get:
User 3450_Menadzer already has more than 'max_user_connections' active connections
When I set the connection map manually and do not refresh it on every run of determineCurrentLookupKey, I don't have this problem. I think my method is not closing connections. How can I clean this up? Is there a better way to route connections?
EDIT
@SergeBallesta, I changed some of the code based on your examples.
This is my class for the map:
@Component
@Scope(value = "singleton")
public class DataSourceMap {

    private Map<Object, Object> dataSourceMap;

    public DataSourceMap() {
        DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create();
        dataSourceBuilder.driverClassName("org.h2.Driver");
        dataSourceBuilder.url("jdbc:h2:mem:AZ;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE");
        dataSourceBuilder.username("sa");
        dataSourceBuilder.password("");
        dataSourceMap = new HashMap<Object, Object>();
        dataSourceMap.put("auth", dataSourceBuilder.build());
    }

    public void addDataSource(String session, DataSource dataSource) {
        this.dataSourceMap.put(session, dataSource);
    }

    public Map<Object, Object> getDataSourceMap() {
        return dataSourceMap;
    }

    public void removeSource(String session) {
        dataSourceMap.remove(session);
    }
}
For the AbstractRoutingDataSource I made some changes; I added afterPropertiesSet() because the datasource map was not refreshing. After refreshing several times I no longer get the error, so I think this is working. I still need to test it with more databases in the future.
@Component
public class CustomRoutingDataSource extends AbstractRoutingDataSource {

    @Autowired
    DataSourceMap dataSources;

    @Override
    protected Object determineCurrentLookupKey() {
        setDataSources(dataSources);
        afterPropertiesSet();
        System.out.println("test");
        if (SecurityContextHolder.getContext().getAuthentication() != null) {
            HttpServletRequest request = ((ServletRequestAttributes)
                    RequestContextHolder.getRequestAttributes()).getRequest();
            return request.getSession().getId();
        }
        return "auth";
    }

    @Autowired
    public void setDataSources(DataSourceMap dataSources) {
        System.out.println("data adding");
        setTargetDataSources(dataSources.getDataSourceMap());
    }
}
First, a per-user database is a very uncommon design. If all those databases end up with the same structure, please do not do this in a real-world application; just add a user_id column to your tables and queries.
Next, you can find another (partial) example of a dynamic AbstractRoutingDataSource in another answer of mine.
One big difference between my code (beware: never tested) and your question is that I use a SessionListener to close the databases, to prevent the number of open databases from growing indefinitely.
If you are doing this to learn Spring, you could try the following pattern (bottom-up description; a sketch of the session-scoped holder follows the list):
a session-scoped bean that holds the actual database connection for a user; the connection should be created on the first request (to be sure that the user id is present in the session) and cached for subsequent uses. A destroy method (automatically called by Spring when the session is closed) should close the connection.
an AbstractRoutingDataSource that is injected with a proxy to the above holder and asks the holder for the actual datasource.
As in the other answer, if the same user is likely to have many simultaneous sessions, you could have a singleton bean injected into the session holders that keeps the actual database connections along with the number of active sessions. That way you would get one single connection per user, no matter how many concurrent sessions they might have.
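A minimal sketch of the session-scoped holder described above (the names are illustrative, HikariCP is my choice of pool, the getUser() accessor is assumed from your user_database columns, and the code is untested):

@Component
@Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
public class UserDataSourceHolder {

    private HikariDataSource dataSource; // created on first use, cached for the session

    public DataSource getDataSource() {
        if (dataSource == null) {
            // First call: build the pool from the connection details of the logged-in user
            User user = (User) SecurityContextHolder.getContext()
                    .getAuthentication().getPrincipal();
            HikariDataSource ds = new HikariDataSource();
            ds.setDriverClassName(user.getUserDatabase().getDriver());
            ds.setJdbcUrl(user.getUserDatabase().getUrl());
            ds.setUsername(user.getUserDatabase().getUser());
            ds.setPassword(user.getUserDatabase().getPassword());
            dataSource = ds;
        }
        return dataSource;
    }

    // Called by Spring when the HTTP session ends; closing the pool releases its
    // connections and avoids the max_user_connections error
    @PreDestroy
    public void close() {
        if (dataSource != null) {
            dataSource.close();
        }
    }
}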