How To Make Components Initialize After JPA Creates Schemas?

Context
The server runs on spring-boot and utilizes spring-data. The database
being used is postgresql.
Problem
Some of the components read from information_schema, pg_user,
pg_policies, and pg_catalog. These components' @PostConstruct methods
currently run before JPA schema creation, so the information they try
to fetch hasn't been created by JPA yet and the components crash.
Prior Research
No errors are being thrown by Hibernate itself. Running the server
twice makes the problematic components run correctly, which implies
that these components are running before JPA.
My properties file includes spring.jpa.hibernate.ddl-auto = update. I
tried to find the code behind spring.jpa.hibernate.ddl-auto so I could
make the components require it by way of @DependsOn, but I have yet to
find anything on it.
I can't simply wait for ApplicationReadyEvent with an event listener,
as that would break the dependencies between these components.
Code
These are my Hikari data sources:
@RequiredArgsConstructor
@Configuration
@EnableConfigurationProperties
public class DatabaseConfiguration {

    @Bean(name = "server")
    @ConfigurationProperties(prefix = "server.datasource")
    public HikariDataSource server() {
        return (HikariDataSource) DataSourceBuilder.create().build();
    }

    @Bean(name = "client")
    @ConfigurationProperties(prefix = "client.datasource")
    public HikariDataSource client() {
        return (HikariDataSource) DataSourceBuilder.create().build();
    }
}
I have a custom DataSource component.
@Component
public class DatabaseRouterBean {

    private final AwsCognitoConfiguration cognitoConfiguration;
    private final DatabaseService databaseService;
    private final HikariDataSource server;
    private final HikariDataSource client;
    private final ModelSourceInformation modelSourceInformation;

    public DatabaseRouterBean(
            @Qualifier("server") final HikariDataSource server,
            @Qualifier("client") final HikariDataSource client,
            final AwsCognitoConfiguration cognitoConfiguration,
            final DatabaseService databaseService,
            final ModelSourceInformation modelSourceInformation
    ) {
        this.server = server;
        this.client = client;
        this.cognitoConfiguration = cognitoConfiguration;
        this.databaseService = databaseService;
        this.modelSourceInformation = modelSourceInformation;
    }

    @Bean
    @Primary
    public DatabaseRouter dataSource() {
        return new DatabaseRouter(cognitoConfiguration, databaseService, server, client, modelSourceInformation);
    }
}
The following is the implementation of the data source.
// could have a better name
@RequiredArgsConstructor
@Log4j2
public class DatabaseRouter implements DataSource {

    private final AwsCognitoConfiguration config;
    private final DatabaseService databaseService;
    private final HikariDataSource superuser;
    private final HikariDataSource user;
    private final ModelSourceInformation modelSourceInformation;
The custom data source component is used to create connections for entity managers using one of two accounts on the database for the purpose of multi-tenancy. One account is superuser while the other is a limited user account. Multi-tenancy is achieved through the use of policies. The custom data source runs SET_CONFIG on the connection.
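For illustration only, the set_config call on a freshly obtained connection might look roughly like the fragment below. The configuration variable name app.tenant_id and the helper methods isSuperuser() and currentTenantId() are assumptions, not taken from the original code:

// Sketch of a getConnection() override inside DatabaseRouter (assumed helpers):
@Override
public Connection getConnection() throws SQLException {
    HikariDataSource target = isSuperuser() ? superuser : user;
    Connection connection = target.getConnection();
    try (PreparedStatement ps = connection.prepareStatement(
            "SELECT set_config(?, ?, false)")) {
        ps.setString(1, "app.tenant_id");   // hypothetical variable name
        ps.setString(2, currentTenantId()); // hypothetical lookup
        ps.execute();
    }
    return connection;
}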
DatabaseService is a very low-level service class that supports reading from information_schema, pg_user, pg_policies, and pg_catalog.
@Service
@Log4j
public class DatabaseServiceImpl implements DatabaseService {

    private final HikariDataSource server;
    private final HikariDataSource client;
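The snippet is truncated above. Purely as an illustration, a low-level helper on such a service might look like this; the method name and SQL are assumptions, not from the original post (pg_policies does expose policyname and tablename columns in PostgreSQL):

public List<String> policyNamesFor(String table) {
    String sql = "SELECT policyname FROM pg_policies WHERE tablename = ?";
    try (Connection c = server.getConnection();
         PreparedStatement ps = c.prepareStatement(sql)) {
        ps.setString(1, table);
        try (ResultSet rs = ps.executeQuery()) {
            List<String> names = new ArrayList<>();
            while (rs.next()) {
                names.add(rs.getString(1));
            }
            return names;
        }
    } catch (SQLException e) {
        throw new IllegalStateException("Could not read pg_policies", e);
    }
}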
ModelSourceInformation has no dependencies. It is used to convert a class type into a configuration variable name and vice versa. It is used by the custom data source to populate SET_CONFIG based on the type of user. It supports defining configuration variables and tying them to models by way of annotations.
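As a guess at the shape of that annotation-based mapping (every name here is hypothetical, not from the original post):

// Hypothetical annotation tying a model class to a configuration variable:
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface ConfigVariable {
    String value(); // e.g. "app.tenant_id"
}

// Hypothetical lookup from model class to configuration variable name:
public class ModelSourceInformation {
    public String variableFor(Class<?> model) {
        ConfigVariable annotation = model.getAnnotation(ConfigVariable.class);
        if (annotation == null) {
            throw new IllegalArgumentException(model + " has no @ConfigVariable");
        }
        return annotation.value();
    }
}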
AwsCognitoConfiguration is simply a Configuration class that reads the cognito settings from the properties file.
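A minimal sketch of what such a class typically looks like (the prefix and fields are assumptions, not from the original post):

@Configuration
@ConfigurationProperties(prefix = "aws.cognito")
public class AwsCognitoConfiguration {

    private String region;     // hypothetical property
    private String userPoolId; // hypothetical property

    // getters and setters omitted for brevity
}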
Defined Execution Order By Dependency
1. DatabaseConfiguration, ModelSourceInformation, AwsCognitoConfiguration
2. DatabaseService
3. DatabaseRouter
4. JPA
5. Rest of beans
The following components are initialized before JPA but need to be initialized after it. There are dependencies among them.
ModelDynamismInformation
ModelEntityInformation
ModelInformation
ModelPrimaryKeyInformation
ModelSchemaInformation
ModelSecurityInformation
PolicyInitializer

You can use @DependsOn to control the order in which beans get initialized. A bean depending on the EntityManagerFactory will get initialized after Hibernate has done its schema creation.
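For example, a minimal sketch, assuming the default entityManagerFactory bean name that Spring Boot's JPA auto-configuration registers (the init body is hypothetical):

@Component
@DependsOn("entityManagerFactory")
public class ModelSecurityInformation {

    private final DatabaseService databaseService;

    public ModelSecurityInformation(final DatabaseService databaseService) {
        this.databaseService = databaseService;
    }

    @PostConstruct
    void init() {
        // By this point Hibernate's ddl-auto=update has run, so the catalog
        // entries this component reads (pg_policies etc.) already exist.
    }
}

Since the listed components depend on each other, annotating the root of that dependency chain should be enough; the rest follow transitively.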

Related

How to pass a datasource to a library?

I am writing a library which retrieves data from a specific data schema. This library holds a DataSource object which can be anything. Right now I have defined the name of the datasource within the library, which I would like to avoid.
import javax.sql.DataSource;

public class MyLibraryDao {

    private static final String DS_NAME = "MY_DS_NAME";

    @Resource(name = "default", lookup = DS_NAME, type = DataSource.class)
    protected DataSource dataSource;
}
The DAO class is not directly exposed to the client. There is a service layer in between:
import javax.inject.Inject;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Model;

@ApplicationScoped
@Model
public class MyLibraryService {

    @Inject
    MyLibraryDao dao;
}
Now, how would I pass the datasource object to the library?
I assume I need to create a constructor in the DAO which takes a DataSource, but what about the service?
The library will be used in a CDI environment.
First things first: your library needs a datasource, so let's declare the dependency:
public class MyLibraryDao {

    @Inject
    protected DataSource dataSource;
}
Now the rest of the application that is using the library is responsible for providing a datasource to CDI; a simple way is:
// Example; your implementation may vary
public class AppDatasourceProducer {

    private static final String DS_NAME = "MY_APP_DS_NAME";

    @Resource(name = "default", lookup = DS_NAME, type = DataSource.class)
    protected DataSource dataSource;

    @Produces
    @ApplicationScoped
    public DataSource getDatasource() {
        return dataSource;
    }
}
What's changed? Now your application is responsible for knowing the datasource name AND providing the datasource itself. The example above can work in JEE environments that honor the @Resource annotation. Using a different implementation for the provider would work in e.g. a desktop environment (standalone application), making your library reusable.
The exact datasource name may be fixed, just like in the example, or read from configuration, e.g. from system properties (as mentioned in a comment):
// Example 2: DS name from system properties
@ApplicationScoped
public class AppDatasourceProducer {

    protected DataSource dataSource;

    @PostConstruct
    void init() throws Exception {
        String dsName = System.getProperty("XXXXX");
        InitialContext ic = new InitialContext();
        dataSource = (DataSource) ic.lookup(dsName);
    }

    @Produces
    @ApplicationScoped
    public DataSource getDatasource() {
        return dataSource;
    }
}
Going further:
An application that uses your library may be using several datasources, for whatever reason. You may want to provide a qualifier to specify the datasource to be used by your app.
I used field injection in MyLibraryDao for simplicity. If you change to constructor injection, then at least MyLibraryDao will be usable in non-CDI environments as well, i.e. if you have obtained a DataSource somehow, you can now do new MyLibraryDao(datasource). Even more reusability.
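A minimal sketch of that constructor-injection variant, using the same names as above:

import javax.inject.Inject;
import javax.sql.DataSource;

public class MyLibraryDao {

    protected final DataSource dataSource;

    @Inject
    public MyLibraryDao(final DataSource dataSource) {
        this.dataSource = dataSource;
    }
}

Outside of CDI, the same class can then be constructed directly with any DataSource you already have: new MyLibraryDao(dataSource).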

How can I inject and use multiple datasources dynamically in a Spring Boot application?

I have two profiles, uat-nyc and uat-ldn.
The uat-nyc datasource is Oracle and the uat-ldn datasource is MySQL Server.
This configuration is set up in application-uat-nyc.yml and application-uat-ldn.yml.
I have the configuration class below:
@Profile({"uat-nyc", "uat-ldn"})
@Configuration
@EnableConfigurationProperties(DataSourceProperties.class)
public class DataSourceConfig {

    private DataSourceProperties properties; // server, username, password are set here

    DataSource getDataSource() {
        // gets datasource based on profiles
    }
}
If my application is run with spring.profiles.active: uat-nyc,uat-ldn, will it create two datasources,
one with the configuration from uat-nyc and another from uat-ldn?
In the getProducts method below I fetch data from a third-party service and, depending on whether a product belongs to ldn or nyc, I need to persist it into the ldn or nyc database. How can I make the if/else section dynamic? How can I get the respective datasources, i.e. ldn and nyc, inside it?
class Product {
    String name;
    int price;
    String region;
}

@Component
class ProductLoader {

    JdbcTemplate jdbcTemplate;

    public ProductLoader(DataSource ds) {
        jdbcTemplate = new JdbcTemplate(ds);
    }

    public void getProducts() {
        List<Product> products = // rest service to get products
        for (Product product : products) {
            if (product.getRegion().equals("LONDON")) {
                // write to LONDON database
                // How can I get the ldn datasource here?
            } else if (product.getRegion().equals("NewYork")) {
                // write to NewYork database
                // How can I get the NewYork datasource here?
            } else {
                // Unknown location
            }
        }
    }
}
Questions:
If my application is run with spring.profiles.active: uat-nyc,uat-ldn, will it create two datasources?
How can I inject the datasources dynamically into ProductLoader and use the specific datasource for ldn and nyc?
First you need to tell Spring that you are going to use two datasources managed by the Spring context: declare them with @Bean methods in a @Configuration class, then use @Autowired to inject the managed beans.
You can use @Qualifier to choose which bean gets injected where.
@Configuration
public class ConfigDataSource {

    // example for DataSource
    @Bean("dataSourceWithPropOnCode") // this name will qualify on @Autowired
    public DataSource dataSourceWithPropOnCode() {
        return DataSourceBuilder.create().url("").password("").username("").driverClassName("").build();
    }

    @Bean("dataSourceWithPropFromProperties") // this name will qualify on @Autowired
    @ConfigurationProperties(prefix = "spring.datasource.yourname-datasource") // prefix for the datasource in the .properties settings
    public DataSource dataSourceFromProperties() {
        return DataSourceBuilder.create().build();
    }

    // example for JdbcTemplate
    @Bean("jdbcTemplateWithPropFromProperties") // this name will qualify on @Autowired
    public JdbcTemplate jdbcTemplateFromProperties(@Qualifier("dataSourceWithPropFromProperties") DataSource dataSource) {
        return new JdbcTemplate(dataSource);
    }

    @Bean("jdbcTemplateWithPropOnCode") // this name will qualify on @Autowired
    public JdbcTemplate jdbcTemplateOnCode(@Qualifier("dataSourceWithPropOnCode") DataSource dataSource) {
        return new JdbcTemplate(dataSource);
    }
}
Settings in the properties file:
spring.datasource.yourname-datasource.url=...
spring.datasource.yourname-datasource.jdbcUrl=${spring.datasource.yourname-datasource.url}
spring.datasource.yourname-datasource.username=user
spring.datasource.yourname-datasource.password=pass
spring.datasource.yourname-datasource.driver-class-name=your.driver
Usage in services:
@Qualifier("jdbcTemplateWithPropFromProperties")
@Autowired
private JdbcTemplate jdbcTemplate1;

@Qualifier("jdbcTemplateWithPropOnCode")
@Autowired
private JdbcTemplate jdbcTemplate2;

@Qualifier("dataSourceWithPropOnCode")
@Autowired
private DataSource dataSource1;

private DataSource dataSource2;

// or use constructor injection if you prefer:
public SomeService(@Qualifier("dataSourceWithPropFromProperties") DataSource dataSource2) {
    this.dataSource2 = dataSource2;
}
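To connect this back to the original question, one way to pick a datasource per region inside ProductLoader is to qualify two JdbcTemplate beans and route by the product's region. This is a sketch: the bean names follow the pattern above but are assumptions, and getters on Product are assumed.

@Component
public class ProductLoader {

    private final Map<String, JdbcTemplate> templatesByRegion;

    public ProductLoader(@Qualifier("ldnJdbcTemplate") final JdbcTemplate ldn,
                         @Qualifier("nycJdbcTemplate") final JdbcTemplate nyc) {
        // Route by the region value carried on each product.
        this.templatesByRegion = Map.of("LONDON", ldn, "NewYork", nyc);
    }

    public void saveProduct(final Product product) {
        final JdbcTemplate template = templatesByRegion.get(product.getRegion());
        if (template == null) {
            throw new IllegalArgumentException("Unknown region: " + product.getRegion());
        }
        template.update("INSERT INTO product (name, price, region) VALUES (?, ?, ?)",
                product.getName(), product.getPrice(), product.getRegion());
    }
}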

Issue with Declarative Transactions and TransactionAwareDataSourceProxy in combination with JOOQ

I have a data source configuration class that looks as follows, with separate DataSource beans for testing and non-testing environments, using jOOQ. In my code, I do not use DSLContext.transaction(ctx -> {...}) but rather mark the method as transactional, so that jOOQ defers to Spring's declarative transactions. I am using Spring 4.3.7.RELEASE.
I have the following issue:
During testing (JUnit), @Transactional works as expected. A single method is transactional no matter how many times I use the DSLContext's store() method, and a RuntimeException triggers a rollback of the entire transaction.
During actual production runtime, @Transactional is completely ignored. A method is no longer transactional, and TransactionSynchronizationManager.getResourceMap() holds two separate values: one pointing to my connection pool (which is not transactional), and one pointing to the TransactionAwareDataSourceProxy.
In this case, I would have expected only a single resource of type TransactionAwareDataSourceProxy which wraps my DB CP.
After much trial and error with the second set of configuration changes (noted below as "AFTER"), @Transactional works correctly even during runtime, though TransactionSynchronizationManager.getResourceMap() holds the following value:
In this case, my DataSourceTransactionManager seems to not even know about the TransactionAwareDataSourceProxy (most likely because I pass it the plain DataSource, not the proxy object), which seems to completely 'skip' the proxy anyway.
My question is: the initial configuration I had seemed correct, but did not work. The proposed 'fix' works, but IMO should not work at all (since the transaction manager does not seem to be aware of the TransactionAwareDataSourceProxy).
What is going on here? Is there a cleaner way to fix this issue?
BEFORE (not transactional during runtime)
@Configuration
@EnableTransactionManagement
@RefreshScope
@Slf4j
public class DataSourceConfig {

    @Bean
    @Primary
    public DSLContext dslContext(org.jooq.Configuration configuration) throws SQLException {
        return new DefaultDSLContext(configuration);
    }

    @Bean
    @Primary
    public org.jooq.Configuration defaultConfiguration(DataSourceConnectionProvider dataSourceConnectionProvider) {
        org.jooq.Configuration configuration = new DefaultConfiguration()
                .derive(dataSourceConnectionProvider)
                .derive(SQLDialect.POSTGRES_9_5);
        configuration.set(new DeleteOrUpdateWithoutWhereListener());
        return configuration;
    }

    @Bean
    public DataSourceTransactionManager transactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    @Bean
    public DataSourceConnectionProvider dataSourceConnectionProvider(DataSource dataSource) {
        return new DataSourceConnectionProvider(dataSource);
    }

    @Configuration
    @ConditionalOnClass(EmbeddedPostgres.class)
    static class EmbeddedDataSourceConfig {

        @Value("${spring.jdbc.port}")
        private int dbPort;

        @Bean(destroyMethod = "close")
        public EmbeddedPostgres embeddedPostgres() throws Exception {
            EmbeddedPostgres embeddedPostgres = EmbeddedPostgresHelper.startDatabase(dbPort);
            return embeddedPostgres;
        }

        @Bean
        @Primary
        public DataSource dataSource(EmbeddedPostgres embeddedPostgres) throws Exception {
            DataSource dataSource = embeddedPostgres.getPostgresDatabase();
            return new TransactionAwareDataSourceProxy(dataSource);
        }
    }

    @Configuration
    @ConditionalOnMissingClass("com.opentable.db.postgres.embedded.EmbeddedPostgres")
    @RefreshScope
    static class DefaultDataSourceConfig {

        @Value("${spring.jdbc.url}")
        private String url;

        @Value("${spring.jdbc.username}")
        private String username;

        @Value("${spring.jdbc.password}")
        private String password;

        @Value("${spring.jdbc.driverClass}")
        private String driverClass;

        @Value("${spring.jdbc.MaximumPoolSize}")
        private Integer maxPoolSize;

        @Bean
        @Primary
        @RefreshScope
        public DataSource dataSource() {
            log.debug("Connecting to datasource: {}", url);
            HikariConfig hikariConfig = buildPool();
            DataSource dataSource = new HikariDataSource(hikariConfig);
            return new TransactionAwareDataSourceProxy(dataSource);
        }

        private HikariConfig buildPool() {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl(url);
            config.setUsername(username);
            config.setPassword(password);
            config.setDriverClassName(driverClass);
            config.setConnectionTestQuery("SELECT 1");
            config.setMaximumPoolSize(maxPoolSize);
            return config;
        }
    }
}
AFTER (transactional during runtime, as expected, all non-listed beans identical to above)
@Configuration
@EnableTransactionManagement
@RefreshScope
@Slf4j
public class DataSourceConfig {

    @Bean
    public DataSourceConnectionProvider dataSourceConnectionProvider(TransactionAwareDataSourceProxy dataSourceProxy) {
        return new DataSourceConnectionProvider(dataSourceProxy);
    }

    @Bean
    public TransactionAwareDataSourceProxy transactionAwareDataSourceProxy(DataSource dataSource) {
        return new TransactionAwareDataSourceProxy(dataSource);
    }

    @Configuration
    @ConditionalOnMissingClass("com.opentable.db.postgres.embedded.EmbeddedPostgres")
    @RefreshScope
    static class DefaultDataSourceConfig {

        @Value("${spring.jdbc.url}")
        private String url;

        @Value("${spring.jdbc.username}")
        private String username;

        @Value("${spring.jdbc.password}")
        private String password;

        @Value("${spring.jdbc.driverClass}")
        private String driverClass;

        @Value("${spring.jdbc.MaximumPoolSize}")
        private Integer maxPoolSize;

        @Bean
        @Primary
        @RefreshScope
        public DataSource dataSource() {
            log.debug("Connecting to datasource: {}", url);
            HikariConfig hikariConfig = buildPoolConfig();
            DataSource dataSource = new HikariDataSource(hikariConfig);
            return dataSource; // not returning the proxy here
        }
    }
}
I'll turn my comments into an answer.
The transaction manager should NOT be aware of the proxy. From the documentation:
Note that the transaction manager, for example
DataSourceTransactionManager, still needs to work with the underlying
DataSource, not with this proxy.
The class TransactionAwareDataSourceProxy is a special-purpose class that is not needed in most cases. Anything that interfaces with your data source through the Spring framework infrastructure should NOT have the proxy in its chain of access. The proxy is intended for code that cannot interface with the Spring infrastructure, for example a third-party library that was already set up to work with JDBC and does not accept any of Spring's JDBC templates. This is stated in the same docs as above:
This proxy allows data access code to work with the plain JDBC API and
still participate in Spring-managed transactions, similar to JDBC code
in a J2EE/JTA environment. However, if possible, use Spring's
DataSourceUtils, JdbcTemplate or JDBC operation objects to get
transaction participation even without a proxy for the target
DataSource, avoiding the need to define such a proxy in the first
place.
If you do not have any code that needs to bypass the Spring framework, then do not use TransactionAwareDataSourceProxy at all. If you do have legacy code like this, then you will need to do what you already configured in your second setup: create two beans, one being the data source and one being the proxy, then give the data source to all of the Spring-managed types and the proxy to the legacy types.
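In other words, the second setup boils down to something like the sketch below; pool construction is elided and buildPoolConfig() is assumed to exist:

@Configuration
@EnableTransactionManagement
public class DataSourceConfig {

    @Bean
    @Primary // Spring-managed infrastructure (transaction manager, jOOQ) gets the raw pool
    public DataSource dataSource() {
        return new HikariDataSource(buildPoolConfig()); // buildPoolConfig() assumed
    }

    @Bean // only legacy, plain-JDBC code should use the proxy
    public TransactionAwareDataSourceProxy transactionAwareDataSourceProxy(DataSource dataSource) {
        return new TransactionAwareDataSourceProxy(dataSource);
    }

    @Bean
    public DataSourceTransactionManager transactionManager(DataSource dataSource) {
        // Works with the underlying DataSource, never with the proxy.
        return new DataSourceTransactionManager(dataSource);
    }
}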

Producing EntityManager in a Hybrid Tenancy Context

I have a requirement in which every client must have their data stored individually in a separate database.
I would like to achieve the following structure:
A global microservice handles authentication and also provides information about the database in which the client's data is stored.
The other microservices, when requested, query the auth service for the client's database information; only then is the entity manager produced.
I am struggling to properly manage the state of the EntityManagerFactory instances.
I've tried storing them in a WeakHashMap, but some buggy things started to happen, like a simple findById throwing exceptions.
I am actually using JEE with DeltaSpike Data running on a Payara server.
Has anyone ever done this using a similar stack?
If you are using bean-managed transactions, then it becomes even easier to use CDI to manage this kind of entity manager factory resource.
First, create a datasource qualifier annotation.
@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({TYPE, PARAMETER, FIELD, METHOD})
public @interface Datasource {

    /**
     * This may be the database url or whatever.
     */
    @Nonbinding
    String value() default "";
}
@SuppressWarnings("AnnotationAsSuperInterface")
public class DatasourceLiteral extends AnnotationLiteral<Datasource> implements Datasource {

    private static final long serialVersionUID = 7485753390480718735L;

    private final String dbName;

    public DatasourceLiteral(final String dbName) {
        this.dbName = dbName;
    }

    @Override
    public String value() {
        return dbName;
    }
}
@ApplicationScoped
public class EntityManagerFactoryProvider {

    @Produces
    @Datasource
    @ApplicationScoped
    public EntityManagerFactory entityManagerFactory(final InjectionPoint ip) {
        final Annotated annotated = ip.getAnnotated();
        final Datasource datasource = annotated.getAnnotation(Datasource.class);

        /**
         * Add relevant jpa properties.
         */
        final Map<String, String> jpaProperties = new HashMap<>();

        /**
         * The main point is here.
         */
        jpaProperties.put("javax.persistence.jdbc.url", datasource.value());
        return Persistence.createEntityManagerFactory("persistence-unit-jpa", jpaProperties);
    }

    public void dispose(@Disposes @Datasource final EntityManagerFactory emf) {
        emf.close();
    }
}
@ApplicationScoped
public class ExampleUserDatasource {

    @Any
    @Inject
    private Instance<EntityManagerFactory> entityManagerFactories;

    public void doSomething(final String user) {
        final UserInfo userInfo = authenticationService.getUser(user);
        final Datasource datasource = new DatasourceLiteral(userInfo.getDatasource());
        final EntityManagerFactory entityManagerFactory = entityManagerFactories.select(datasource).get();

        /**
         * You could also actually inject this.
         * Do whatever you want with it inside a transaction, and close it too.
         */
        final EntityManager entityManager = entityManagerFactory.createEntityManager();
    }
}
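With bean-managed transactions, as the answer assumes, the created EntityManager would then be used roughly like this (a sketch of standard JPA resource-local transaction handling, not code from the original answer):

final EntityManager entityManager = entityManagerFactory.createEntityManager();
try {
    entityManager.getTransaction().begin();
    // ... work with the tenant's data ...
    entityManager.getTransaction().commit();
} finally {
    entityManager.close();
}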

Single API, multiple Elasticsearch instances

We have a Spring Boot RESTful API that needs to get data from two different Elasticsearch instances (on different servers): one for "shared" data (with about 5 different indexes on it) and one for "private" data (with about 3 different indexes). Currently, running against just the "private" data instance, everything is good. But we now need to get at the "shared" data.
In our Spring Boot application, we have enabled Elasticsearch repositories like this:
@SpringBootApplication
@EnableElasticsearchRepositories(basePackages = {
    "com.company.core.repositories", // <- private repos here...
    "com.company.api.repositories"   // <- shared repos here...
})
public class Application { // ...
}
Then we access the "private" data with an ElasticsearchRepository like:
package com.company.core.repositories;

public interface DocRepository extends ElasticsearchRepository<Doc, Integer> { ... }
In our endpoint, we have...
@RestController
@CrossOrigin
@RequestMapping("/v2/statuses/")
public class StatusEndpoint {

    @Resource
    private ElasticsearchTemplate template;

    @Autowired
    private DocRepository docRepository;

    @Autowired
    private Validator validator;

    //...
}
Now we want to add another repository like:
package com.company.api.repositories;

public interface LookupRepository extends ElasticsearchRepository<Lookup, Integer> { ... }
Then in our API layer we would add an auto-wired instance:
@Autowired
private LookupRepository lookupRepo;

We were thinking that we could define multiple beans with different names, but how do we associate each of the elasticsearchTemplate beans with the different ElasticsearchRepository instances that need them? Also, how do we associate the "private" bean/configuration with injected instances of
@Resource
private ElasticsearchTemplate template;

where we need to use it natively?
You can solve this with 2 unique Elasticsearch configuration beans and an @Resource(name="XXX") annotation for the template injection in your StatusEndpoint controller.
If you segregate your repositories into different packages depending on which Elasticsearch cluster they should use, you can associate them with different configurations using the @EnableElasticsearchRepositories annotation.
For example:
If you have these packages and classes (note that private is a reserved word in Java, so the package is named privatedata here):
com.company.data.repositories.privatedata.YourPrivateRepository
com.company.data.repositories.shared.YourSharedRepository
And then these Configurations:
@Configuration
@EnableElasticsearchRepositories(
        basePackages = {"com.company.data.repositories.privatedata"},
        elasticsearchTemplateRef = "privateElasticsearchTemplate")
public class PrivateElasticsearchConfiguration {

    @Bean(name = "privateElasticsearchTemplate")
    public ElasticsearchTemplate privateTemplate() {
        // code to create connection to private ES cluster
    }
}

@Configuration
@EnableElasticsearchRepositories(
        basePackages = {"com.company.data.repositories.shared"},
        elasticsearchTemplateRef = "sharedElasticsearchTemplate")
public class SharedElasticsearchConfiguration {

    @Bean(name = "sharedElasticsearchTemplate")
    public ElasticsearchTemplate sharedTemplate() {
        // code to create connection to shared ES cluster
    }
}
Because of the elasticsearchTemplateRef parameter in the @EnableElasticsearchRepositories annotation, the Spring Data code that implements the repositories will use the specified template for repositories in the basePackages list.
For the StatusEndpoint portion, you would just provide your @Resource annotation with the correct template bean name. Your StatusEndpoint would look like this:
@RestController
@CrossOrigin
@RequestMapping("/v2/statuses/")
public class StatusEndpoint {

    @Resource(name = "privateElasticsearchTemplate")
    private ElasticsearchTemplate template;

    @Autowired
    private DocRepository docRepository;

    @Autowired
    private Validator validator;

    //...
}
There might be multiple ways to do this. Here is one that takes advantage of the @Bean name and the @Resource name.
@Configuration
public class MyElasticConfig {

    @Bean // this is your private template
    public ElasticsearchTemplate template() {
        // construct your template
        return template;
    }

    @Bean // this is your public template
    public ElasticsearchTemplate publicTemplate() {
        // construct your template
        return template;
    }
}
Then you can get them like this:
@Resource
private ElasticsearchTemplate template;

@Resource
private ElasticsearchTemplate publicTemplate;

or
@Resource(name = "template")
private ElasticsearchTemplate anyName;

@Resource(name = "publicTemplate")
private ElasticsearchTemplate anyOtherName;
You can also name your @Beans directly instead of relying on the @Bean's method name:
@Bean(name = "template")
public ElasticsearchTemplate myPrivateTemplate() {
    // construct your template
    return template;
}

@Bean(name = "publicTemplate")
public ElasticsearchTemplate myPubTemplate() {
    // construct your template
    return template;
}
Check out these wonderful resources on the topic:
Spring Injection with @Resource, @Autowired and @Inject
Bean Annotation Type
Autowired vs Resource
