I have an app which, depending on its deployment config, can write the same data into one or more databases that share the same schema. Basically, think of it as an A environment and a B environment, where in A we write to both the A AND the B databases so that data from both environments is there.
Further confusing the issue, the DBs in the environments MAY (and often do) have different schemas. Though they differ only in that a change may have been made to A but not yet to B.
Currently I am handling this by creating basic "entity" POJOs that match the table structures, writing specific JDBC queries for each side, and running them conditionally based on a config check.
I would love to be able to use something like Spring Data JPA to handle all the queries and allow me to do some DI for testing, but the only way I can see to do this is to create a separate repository ("AXyzRepository" and "BXyzRepository") for each entity, and perhaps even different entities for the different schemas.
Is there a way to tell a repository something like "hey, this time I want you to use /this/ ConfigurationProperty"? And to have it ignore any fields missing from an entity?
You can use Spring profiles to set up different DB sources, with schema.sql (to initialize the DB) and data.sql (to load initial data).
For example, say you need two profiles: 'dev' and 'prod'. In application.properties you set the parameter spring.profiles.active to prod as the default (along with other options common to both profiles):
application.properties
spring.profiles.active=prod
spring.datasource.initialize=true
spring.jpa.hibernate.ddl-auto=none
Then you create two application-${profile}.properties files, one each for the dev and prod profiles, in your application resources folder:
application-dev.properties
spring.datasource.url=jdbc:h2:mem:mydb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.datasource.platform=h2
application-prod.properties
spring.datasource.url=jdbc:mysql://myhost:3306/mydb
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.username=user
spring.datasource.password=password
spring.datasource.platform=mysql
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
Then you have to create schema-${platform}.sql and data-${platform}.sql files for each platform (h2 and mysql):
schema-h2.sql
CREATE TABLE...
...
schema-mysql.sql
DROP TABLE...
...
CREATE TABLE...
...
data-h2.sql
INSERT INTO TABLE...
...
data-mysql.sql
INSERT INTO TABLE...
...
In your dev environment you can set the command-line parameter -Dspring.profiles.active=dev (or set it up right in your IDE; in IntelliJ IDEA, for example, you can set the Spring profile in the Run/Debug Configurations dialog) and work with the H2 DB.
When you deploy your app to the prod server, it will use the prod profile as the default.
More info is in the documentation.
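Tying this back to the original question: with profile-specific datasource properties, the repository layer itself does not change between environments. A minimal sketch, assuming a hypothetical Customer table that exists in both databases (all names here are illustrative, not from the question):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;

// Illustrative entity; the table is assumed to exist in both environments.
@Entity
public class Customer {
    @Id
    private Long id;
    private String name;
    // getters and setters omitted for brevity
}

// The same repository works against whichever DataSource the active profile
// configures, and can be mocked or pointed at H2 for testing via DI.
interface CustomerRepository extends JpaRepository<Customer, Long> {
}
```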
Is there any way to load the database schema from an .sql, JSON, or text file to create the ORM mapping with JPA/Hibernate in Spring Boot while the server is starting up?
Spring Boot lets you use database migration tools such as Liquibase and Flyway; you can read more about that in Spring's official documentation.
Edit: From the docs
85.5 Use a Higher-level Database Migration Tool
Spring Boot supports two higher-level migration tools: Flyway and Liquibase.
85.5.1 Execute Flyway Database Migrations on Startup
To automatically run Flyway database migrations on startup, add org.flywaydb:flyway-core to your classpath.
The migrations are scripts in the form V<VERSION>__<NAME>.sql (with an underscore-separated version, such as ‘1’ or ‘2_1’). By default, they are in a folder called classpath:db/migration, but you can modify that location by setting spring.flyway.locations. This is a comma-separated list of one or more classpath: or filesystem: locations. For example, the following configuration would search for scripts in both the default classpath location and the /opt/migration directory:
spring.flyway.locations=classpath:db/migration,filesystem:/opt/migration
You can also add a special {vendor} placeholder to use vendor-specific scripts. Assume the following:
spring.flyway.locations=classpath:db/migration/{vendor}
Rather than using db/migration, the preceding configuration sets the folder to use according to the type of the database (such as db/migration/mysql for MySQL). The list of supported databases is available in DatabaseDriver.
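For example, with the {vendor} placeholder configured as above, the resources folder might be organised like this (script names are illustrative):

```
src/main/resources/
└── db/migration/
    ├── h2/
    │   └── V1__create_tables.sql
    └── mysql/
        └── V1__create_tables.sql
```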
FlywayProperties provides most of Flyway’s settings and a small set of additional properties that can be used to disable the migrations or switch off the location checking. If you need more control over the configuration, consider registering a FlywayConfigurationCustomizer bean.
Spring Boot calls Flyway.migrate() to perform the database migration. If you would like more control, provide a @Bean that implements FlywayMigrationStrategy.
Flyway supports SQL and Java callbacks. To use SQL-based callbacks, place the callback scripts in the classpath:db/migration folder. To use Java-based callbacks, create one or more beans that implement Callback. Any such beans are automatically registered with Flyway. They can be ordered by using @Order or by implementing Ordered. Beans that implement the deprecated FlywayCallback interface can also be detected; however, they cannot be used alongside Callback beans.
By default, Flyway autowires the (@Primary) DataSource in your context and uses that for migrations. If you would like to use a different DataSource, you can create one and mark its @Bean as @FlywayDataSource. If you do so and want two data sources, remember to create another one and mark it as @Primary. Alternatively, you can use Flyway’s native DataSource by setting spring.flyway.[url,user,password] in external properties. Setting either spring.flyway.url or spring.flyway.user is sufficient to cause Flyway to use its own DataSource. If any of the three properties has not been set, the value of its equivalent spring.datasource property will be used.
There is a Flyway sample so that you can see how to set things up.
You can also use Flyway to provide data for specific scenarios. For example, you can place test-specific migrations in src/test/resources and they are run only when your application starts for testing. Also, you can use profile-specific configuration to customize spring.flyway.locations so that certain migrations run only when a particular profile is active. For example, in application-dev.properties, you might specify the following setting:
spring.flyway.locations=classpath:/db/migration,classpath:/dev/db/migration
With that setup, migrations in dev/db/migration run only when the dev profile is active.
85.5.2 Execute Liquibase Database Migrations on Startup
To automatically run Liquibase database migrations on startup, add org.liquibase:liquibase-core to your classpath.
By default, the master change log is read from db/changelog/db.changelog-master.yaml, but you can change the location by setting spring.liquibase.change-log. In addition to YAML, Liquibase also supports JSON, XML, and SQL change log formats.
By default, Liquibase autowires the (@Primary) DataSource in your context and uses that for migrations. If you need to use a different DataSource, you can create one and mark its @Bean as @LiquibaseDataSource. If you do so and you want two data sources, remember to create another one and mark it as @Primary. Alternatively, you can use Liquibase’s native DataSource by setting spring.liquibase.[url,user,password] in external properties. Setting either spring.liquibase.url or spring.liquibase.user is sufficient to cause Liquibase to use its own DataSource. If any of the three properties has not been set, the value of its equivalent spring.datasource property will be used.
See LiquibaseProperties for details about available settings such as contexts, the default schema, and others.
There is a Liquibase sample so that you can see how to set things up.
Spring also supports database initialization on its own; the official docs are here.
Spring Boot can automatically create the schema (DDL scripts) of your DataSource and initialize it (DML scripts). It loads SQL from the standard root classpath locations: schema.sql and data.sql, respectively.
I have a Spring Boot batch application. In application.properties, I specify my data source details as follows:
spring.datasource.url=jdbc:jtds:sqlserver://1*.2**.6*.25:14**
spring.datasource.database=MYDB_DEV
spring.datasource.username=username
spring.datasource.password=password
The problem is, when I run the batch job, all user-defined tables are taken from MYDB_DEV, but metadata tables like BATCH_JOB_EXECUTION and BATCH_JOB_EXECUTION_CONTEXT are taken from the MASTER schema, even though I have created the same tables in MYDB_DEV. Why does this happen? Is there any workaround to make the application read the metadata tables from the user-defined schema?
I have debugged through jobLauncher.run(myjob, jobParameters) but could not find any lead on where it picks up the master schema.
Use the property below in application.properties or application.yml:
spring.batch.table-prefix=MYDB_DEV.BATCH_
(The camel-case form spring.batch.tablePrefix also works through Spring Boot's relaxed binding.)
I have the following application.properties:
## Spring DATASOURCE (DataSourceAutoConfiguration & DataSourceProperties)
spring.datasource.url= ${DATASOURCE_URL}
spring.datasource.username= ${DATASOURCE_USERNAME}
spring.datasource.password= ${DATASOURCE_PASSWORD}
## Other Database
second.datasource.url=jdbc:oracle:thin:@localhost:1521:XE
second.datasource.username=usr
second.datasource.password=password
second.datasource.driver-class-name=oracle.jdbc.OracleDriver
second.jpa.show-sql=true
## Hibernate Properties
# The SQL dialect makes Hibernate generate better SQL for the chosen database
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5InnoDBDialect
spring.jpa.hibernate.ddl-auto = update
spring.jpa.database=default
Goal and things that currently work: the Spring datasource at the top works fine; I am able to use it for all of my main needs. The second one, below it, is going to query a legacy system and get data from there.
Problem: I have no idea how to get that second datasource to work at all. I need it to perform a query and return something. Ideally I would love to see a working example of this. I have looked at a few blog posts and googled around, and I am clearly missing some vital information.
The datasource above is the default, and Spring can find it on its own. To create a different DataSource you need to set up a DataSource bean somewhere and read its values from config.
The easiest way would be to create a class with the @Configuration annotation and define beans for both DataSources.
I'd suggest HikariDataSource; you can read more about how to set it up here: https://www.baeldung.com/hikaricp
For configuration you can use Environment by autowiring it and reading your properties from there, for example:
environment.getProperty("second.datasource.url")
Check the instructions in this link: https://www.baeldung.com/spring-data-jpa-multiple-databases
1) You need to provide a JPA configuration for each of your datasources, 2) with independent entity packages for each of these sources, and 3) you must specify one of the datasources as primary.
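As a sketch of the first and third points, a configuration class along these lines binds each property prefix to its own pool (the prefixes come from the question; the class and bean names are illustrative):

```java
import javax.sql.DataSource;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class DataSourceConfig {

    // Binds the spring.datasource.* properties (the main datasource).
    @Bean
    @Primary
    @ConfigurationProperties("spring.datasource")
    public DataSourceProperties primaryDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    @Primary
    public DataSource primaryDataSource() {
        return primaryDataSourceProperties().initializeDataSourceBuilder().build();
    }

    // Binds the second.datasource.* properties (the legacy Oracle system).
    @Bean
    @ConfigurationProperties("second.datasource")
    public DataSourceProperties secondDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    public DataSource secondDataSource() {
        return secondDataSourceProperties().initializeDataSourceBuilder().build();
    }
}
```

You can then inject the second pool where needed, e.g. with @Qualifier("secondDataSource") on a constructor parameter and wrapping it in a JdbcTemplate for queries.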
We use hierarchically organised Spring Boot property files in our application.
For example,
Our application.properties will just contain a single line.
spring.profiles.include = logging, kafka, oracle, misc
where all the comma-separated values refer to other property files (namely application-logging.properties and so on). We chose this for reusability across different environments.
I also have another properties file, application-h2.properties, that can be included while testing. So while I test, my application.properties will look like this:
spring.profiles.include = logging, kafka, h2, misc
The problem that's been bugging me here is that my application always uses the H2 database when it starts up, even though I include oracle.
Here's what my application-oracle.properties file looks like:
spring.datasource.url=${ORACLE_URL}
spring.datasource.username=${ORACLE_USERNAME}
spring.datasource.password=${ORACLE_PASSWORD}
spring.jpa.show-sql=true
spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
spring.jpa.database-platform=org.hibernate.dialect.Oracle10gDialect
spring.jpa.properties.hibernate.jdbc.time_zone = UTC
The only way I have found to get Oracle enabled is to remove the h2 properties file and also remove the H2 dependency from the Gradle build file.
Appreciate your help!
I would like to initialize my Postgres database with a data.sql file. I have created queries like:
insert into network_hashrate (
rep_date, hashrate
)
select
date_from - (s.a || ' hour')::interval,
s.a::double precision
from generate_series(0, 9999, 1) AS s(a);
Is it even possible to populate the database using Postgres functions in Spring? If not, what are my other options? I need about 10k sample records.
According to the Spring Boot docs:
Spring Boot can automatically create the schema (DDL scripts) of your DataSource and initialize it (DML scripts). It loads SQL from the standard root classpath locations: schema.sql and data.sql, respectively.
So if you only need to populate data, just create a data.sql file with your SQL scripts, place it in the resources folder, then check that spring.jpa.hibernate.ddl-auto in application.properties is set to none.
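Assuming Spring Boot 2.x, the relevant properties would be something like this (in later Boot versions the second property was replaced by spring.sql.init.mode):

```properties
# let the SQL scripts, not Hibernate, manage the schema and data
spring.jpa.hibernate.ddl-auto=none
# run schema.sql/data.sql even against a non-embedded database such as Postgres
spring.datasource.initialization-mode=always
```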
If you need a more flexible solution, you can use Flyway. To use it, add its dependency to your project:
<dependency>
<groupId>org.flywaydb</groupId>
<artifactId>flyway-core</artifactId>
</dependency>
Set spring.jpa.hibernate.ddl-auto to validate.
Add spring.flyway.enabled=true to application.properties.
Place your 'migration' SQL scripts in the default location, the resources/db/migration folder, and name them like this, for example:
V1__schema_initialization.sql
V2__data_population.sql
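For instance, V1__schema_initialization.sql could hold the DDL for the table from the question (column types are an assumption based on the insert statement):

```sql
-- V1__schema_initialization.sql
CREATE TABLE network_hashrate (
    rep_date  TIMESTAMP NOT NULL,
    hashrate  DOUBLE PRECISION NOT NULL
);
```

and V2__data_population.sql could then contain the generate_series insert unchanged, since Flyway runs plain Postgres SQL.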
When your Spring Boot app starts, Flyway checks your database for missing schema and data and then runs these scripts sequentially.
More info about Flyway is here.
It seems you can run an SQL script after the DB schema is validated/created.
Just name the SQL query file import.sql and Spring should run it according to this doc (note that Hibernate only executes import.sql when ddl-auto is set to create or create-drop).
You need something that keeps track of which queries ran and when. It should also run only once, not every time the application starts up.
Liquibase is an option that can be used for that.
It allows DDL as well as DML.
This link gives details on how you can configure Liquibase with Spring:
https://medium.com/@harittweets/evolving-your-database-using-spring-boot-and-liquibase-844fcd7931da
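As a minimal sketch, assuming Spring Boot's default changelog location (src/main/resources/db/changelog/db.changelog-master.yaml), a YAML changelog that runs plain SQL files might look like this (ids, author, and paths are illustrative):

```yaml
databaseChangeLog:
  - changeSet:
      id: 1
      author: dev
      changes:
        - sqlFile:
            path: db/changelog/sql/schema.sql
  - changeSet:
      id: 2
      author: dev
      changes:
        - sqlFile:
            path: db/changelog/sql/data.sql
```

Liquibase records each changeSet in its DATABASECHANGELOG table, so each one runs exactly once per database.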