So I have two Spring Boot applications that are independent. Both use distinct MariaDB databases with distinct schema names.
To make it simple:
For some reason, my app1 has to be able to access db2's data. I've been able to add a second datasource to app1; however, due to differences between Flyway and Liquibase, it fails to start.
How can I make my Flyway configuration get ignored for datasource 2 (db2)? App1 should just access the data (read/write, whatever), while the whole database structure is handled by App2.
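One way to picture the arrangement being asked for is the following rough sketch, with made-up bean names and property prefixes; by default Spring Boot only runs Flyway against the primary DataSource (see the docs quoted further down), so the db2 datasource is left alone:

import javax.sql.DataSource;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class DataSourceConfig {

    // db1: owned by app1 and migrated by Flyway, because it is the primary DataSource.
    @Bean
    @Primary
    @ConfigurationProperties("app.datasource.db1")
    public DataSourceProperties db1Properties() {
        return new DataSourceProperties();
    }

    @Bean
    @Primary
    public DataSource db1DataSource() {
        return db1Properties().initializeDataSourceBuilder().build();
    }

    // db2: schema owned by App2; no Flyway or Liquibase configuration points at it,
    // so app1 only reads and writes data here and never touches the structure.
    @Bean
    @ConfigurationProperties("app.datasource.db2")
    public DataSourceProperties db2Properties() {
        return new DataSourceProperties();
    }

    @Bean
    public DataSource db2DataSource() {
        return db2Properties().initializeDataSourceBuilder().build();
    }
}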
Related
I have two environments, on-prem and AWS. For on-prem we connect to an Oracle database and the required configuration is done, but for AWS we are using Aurora PostgreSQL, and whenever I add the PostgreSQL dependency and deploy to the AWS environment it gives an error. So is it possible to have multiple database dependencies in the same pom file and use them for different environments?
Quarkus version: 1.13.3.Final
Sample configuration:
Oracle:
-------
quarkus.datasource.db-kind=oracle
quarkus.datasource.username=user-default
quarkus.datasource.password=password-default
quarkus.datasource.reactive.url=jdbc:oracle:thin://localhost:5432/default
Postgresql:
-----------
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=user-default
quarkus.datasource.password=password-default
quarkus.datasource.reactive.url=postgresql://localhost:5432/default
Code or issue:
The issue I'm getting is as follows:
2021-11-11 07:18:45,173 WARN [org.hib.eng.jdb.env.int.JdbcEnvironmentInitiator] (main) HHH000342: Could not obtain connection to query metadata: java.sql.SQLException: Driver does not support the provided URL: jdbc:postgresql://
at io.agroal.pool.ConnectionFactory.connectionSetup(ConnectionFactory.java:220)
at io.agroal.pool.ConnectionFactory.createConnection(ConnectionFactory.java:204)
at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:490)
at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:472)
Dependencies:
quarkus-jdbc-oracle
quarkus-jdbc-postgresql
According to the official documentation, you can use Multiple Datasources.
However, if you have more than one, you must give each additional datasource a name; otherwise Quarkus cannot know which driver to use for which datasource.
The format is quarkus.datasource.<datasourceCustomName>.*; only the default datasource is configured without a name.
Important to Notice: For now, multiple datasources are only supported for JDBC and the Agroal extension. So it is not currently possible to create multiple reactive datasources.
https://quarkus.io/guides/datasource#multiple-datasources
Multiple Datasources
Configuring Multiple Datasources
For now, multiple datasources are only supported for JDBC and the
Agroal extension. So it is not currently possible to create multiple
reactive datasources.
The Hibernate ORM extension supports defining several persistence
units using configuration properties. For each persistence unit, you
can point to the datasource of your choice.
Defining multiple datasources works exactly the same way as defining a
single datasource, with one important change: you define a name.
In the following example, you have 3 different datasources:
The default one,
A datasource named users,
A datasource named inventory,
each with its own configuration.
quarkus.datasource.db-kind=h2
quarkus.datasource.username=username-default
quarkus.datasource.jdbc.url=jdbc:h2:mem:default
quarkus.datasource.jdbc.max-size=13
quarkus.datasource.users.db-kind=h2
quarkus.datasource.users.username=username1
quarkus.datasource.users.jdbc.url=jdbc:h2:mem:users
quarkus.datasource.users.jdbc.max-size=11
quarkus.datasource.inventory.db-kind=h2
quarkus.datasource.inventory.username=username2
quarkus.datasource.inventory.jdbc.url=jdbc:h2:mem:inventory
quarkus.datasource.inventory.jdbc.max-size=12
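Not part of the quoted guide, but as a rough sketch of how such named datasources are then consumed in code with the Agroal/JDBC extension (assuming Quarkus 1.13.x with the javax CDI imports):

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import io.agroal.api.AgroalDataSource;
import io.quarkus.agroal.DataSource;

@ApplicationScoped
public class Repositories {

    // The unnamed, default datasource (quarkus.datasource.*).
    @Inject
    AgroalDataSource defaultDataSource;

    // The datasource configured as quarkus.datasource.users.*.
    @Inject
    @DataSource("users")
    AgroalDataSource usersDataSource;

    // The datasource configured as quarkus.datasource.inventory.*.
    @Inject
    @DataSource("inventory")
    AgroalDataSource inventoryDataSource;
}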
quarkus.datasource.db-kind is fixed at build time. The property cannot change at runtime.
You have to build the app once for Oracle and separately for PostgreSQL.
https://quarkus.io/guides/all-config#quarkus-datasource_quarkus.datasource.db-kind
Is there any way to load the database schema from a .sql, JSON, or text file to create the ORM mapping with JPA/Hibernate in Spring Boot while the server is starting up?
Spring Boot enables you to use database migration tools such as Liquibase and Flyway; you can read more about that in Spring's official documentation.
Edit: From the docs
85.5 Use a Higher-level Database Migration Tool
Spring Boot supports two higher-level migration tools: Flyway and Liquibase.
85.5.1 Execute Flyway Database Migrations on Startup
To automatically run Flyway database migrations on startup, add the org.flywaydb:flyway-core to your classpath.
The migrations are scripts in the form V<VERSION>__<NAME>.sql (with <VERSION> an underscore-separated version, such as ‘1’ or ‘2_1’). By default, they are in a folder called classpath:db/migration, but you can modify that location by setting spring.flyway.locations. This is a comma-separated list of one or more classpath: or filesystem: locations. For example, the following configuration would search for scripts in both the default classpath location and the /opt/migration directory:
spring.flyway.locations=classpath:db/migration,filesystem:/opt/migration
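For example, with the default location the first migration might live at src/main/resources/db/migration/V1__create_tables.sql and a later one at V2_1__add_index.sql (these file names are only illustrative).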
You can also add a special {vendor} placeholder to use vendor-specific scripts. Assume the following:
spring.flyway.locations=classpath:db/migration/{vendor}
Rather than using db/migration, the preceding configuration sets the folder to use according to the type of the database (such as db/migration/mysql for MySQL). The list of supported databases is available in DatabaseDriver.
FlywayProperties provides most of Flyway’s settings and a small set of additional properties that can be used to disable the migrations or switch off the location checking. If you need more control over the configuration, consider registering a FlywayConfigurationCustomizer bean.
Spring Boot calls Flyway.migrate() to perform the database migration. If you would like more control, provide a @Bean that implements FlywayMigrationStrategy.
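The docs don't show such a bean at this point; a minimal sketch could look like the following (the repair-before-migrate behaviour is purely an illustrative choice):

import org.springframework.boot.autoconfigure.flyway.FlywayMigrationStrategy;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FlywayStrategyConfig {

    @Bean
    public FlywayMigrationStrategy migrationStrategy() {
        // Spring Boot calls this instead of plain Flyway.migrate();
        // here we repair the schema history table first, then migrate.
        return flyway -> {
            flyway.repair();
            flyway.migrate();
        };
    }
}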
Flyway supports SQL and Java callbacks. To use SQL-based callbacks, place the callback scripts in the classpath:db/migration folder. To use Java-based callbacks, create one or more beans that implement Callback. Any such beans are automatically registered with Flyway. They can be ordered by using @Order or by implementing Ordered. Beans that implement the deprecated FlywayCallback interface can also be detected, however they cannot be used alongside Callback beans.
By default, Flyway autowires the (@Primary) DataSource in your context and uses that for migrations. If you would like to use a different DataSource, you can create one and mark its @Bean as @FlywayDataSource. If you do so and want two data sources, remember to create another one and mark it as @Primary. Alternatively, you can use Flyway’s native DataSource by setting spring.flyway.[url,user,password] in external properties. Setting either spring.flyway.url or spring.flyway.user is sufficient to cause Flyway to use its own DataSource. If any of the three properties has not been set, the value of its equivalent spring.datasource property will be used.
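A rough sketch of that @FlywayDataSource arrangement (assuming Spring Boot 2.x; bean names and property prefixes are made up):

import javax.sql.DataSource;

import org.springframework.boot.autoconfigure.flyway.FlywayDataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class MigrationDataSourceConfig {

    // Used by the application itself (JPA, JdbcTemplate, ...).
    @Bean
    @Primary
    @ConfigurationProperties("app.datasource.main")
    public DataSource mainDataSource() {
        return DataSourceBuilder.create().build();
    }

    // Used only by Flyway to run the migrations.
    @Bean
    @FlywayDataSource
    @ConfigurationProperties("app.datasource.migration")
    public DataSource flywayDataSource() {
        return DataSourceBuilder.create().build();
    }
}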
There is a Flyway sample so that you can see how to set things up.
You can also use Flyway to provide data for specific scenarios. For example, you can place test-specific migrations in src/test/resources and they are run only when your application starts for testing. Also, you can use profile-specific configuration to customize spring.flyway.locations so that certain migrations run only when a particular profile is active. For example, in application-dev.properties, you might specify the following setting:
spring.flyway.locations=classpath:/db/migration,classpath:/dev/db/migration
With that setup, migrations in dev/db/migration run only when the dev profile is active.
85.5.2 Execute Liquibase Database Migrations on Startup
To automatically run Liquibase database migrations on startup, add the org.liquibase:liquibase-core to your classpath.
By default, the master change log is read from db/changelog/db.changelog-master.yaml, but you can change the location by setting spring.liquibase.change-log. In addition to YAML, Liquibase also supports JSON, XML, and SQL change log formats.
By default, Liquibase autowires the (@Primary) DataSource in your context and uses that for migrations. If you need to use a different DataSource, you can create one and mark its @Bean as @LiquibaseDataSource. If you do so and you want two data sources, remember to create another one and mark it as @Primary. Alternatively, you can use Liquibase’s native DataSource by setting spring.liquibase.[url,user,password] in external properties. Setting either spring.liquibase.url or spring.liquibase.user is sufficient to cause Liquibase to use its own DataSource. If any of the three properties has not been set, the value of its equivalent spring.datasource property will be used.
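The Liquibase side mirrors the Flyway sketch above; again the property prefix is made up, Spring Boot 2.x is assumed, and one of your other DataSource beans is expected to be marked @Primary:

import javax.sql.DataSource;

import org.springframework.boot.autoconfigure.liquibase.LiquibaseDataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LiquibaseDataSourceConfig {

    // Dedicated connection used only by Liquibase to apply the change log.
    @Bean
    @LiquibaseDataSource
    @ConfigurationProperties("app.datasource.liquibase")
    public DataSource liquibaseDataSource() {
        return DataSourceBuilder.create().build();
    }
}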
See LiquibaseProperties for details about available settings such as contexts, the default schema, and others.
There is a Liquibase sample so that you can see how to set things up.
Spring also supports database initialization on its own; the official docs are here.
Spring Boot can automatically create the schema (DDL scripts) of your DataSource and initialize it (DML scripts). It loads SQL from the standard root classpath locations: schema.sql and data.sql, respectively.
How could I define some schema and data to be inserted into the database for a SQL database in Spring Boot?
Could I also do this for embedded databases?
For example, I am using two databases and I want to populate some data or define some schema and apply it to the different databases before the application starts.
A file named import.sql in the root of the classpath is executed on startup if Hibernate creates the schema from scratch (that is, if the ddl-auto property is set to create or create-drop). This can be useful for demos and for testing if you are careful but is probably not something you want to be on the classpath in production. It is a Hibernate feature (and has nothing to do with Spring).
You can take a look at the Spring docs.
I want to generate DB structure from my Java classes
jpa.generate-ddl: true
jpa.ddl-auto: true
Also, I need to run a SQL script before the application comes up, because I have @PostConstruct methods where I use this data.
Can you show an example how to do it in Spring Boot?
A simple Spring Boot app with the required functionality can be found at:
https://github.com/salilotr89/Spring-boot-postgres-dbinit
Spring JDBC has a DataSource initializer feature. Spring Boot enables it by default and loads SQL from the standard locations schema.sql and data.sql (in the root of the classpath).
In addition Spring Boot will load the schema-${platform}.sql and data-${platform}.sql files (if present), where platform is the value of spring.datasource.platform, e.g. you might choose to set it to the vendor name of the database (hsqldb, h2, oracle, mysql, postgresql etc.).
Spring Boot enables the fail-fast feature of the Spring JDBC initializer by default, so if the scripts cause exceptions the application will fail to start. The script locations can be changed by setting spring.datasource.schema and spring.datasource.data, and neither location will be processed if spring.datasource.initialize=false.
To disable the fail-fast you can set spring.datasource.continue-on-error=true. This can be useful once an application has matured and been deployed a few times, since the scripts can act as ‘poor man’s migrations’ — inserts that fail mean that the data is already there, so there would be no need to prevent the application from running, for instance.
If you want to use the schema.sql initialization in a JPA app (with Hibernate) then ddl-auto=create-drop will lead to errors if Hibernate tries to create the same tables. To avoid those errors set ddl-auto explicitly to "" (preferable) or "none". Whether or not you use ddl-auto=create-drop you can always use data.sql to initialize new data.
http://docs.spring.io/spring-boot/docs/current/reference/html/howto-database-initialization.html
For Reference: Spring Boot - Loading Initial Data
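Relating this back to the @PostConstruct requirement in the question: the classpath scripts normally run while the DataSource is initialized, so a bean that depends on the DataSource should already see the seeded data. A small sketch (the users table and the bean are made up, and spring-boot-starter-jdbc is assumed so that a JdbcTemplate is auto-configured):

import javax.annotation.PostConstruct;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class StartupDataCheck {

    private final JdbcTemplate jdbcTemplate;

    public StartupDataCheck(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @PostConstruct
    public void verifySeedData() {
        // schema.sql / data.sql have already been executed against the DataSource,
        // so the rows they inserted are visible here.
        Integer count = jdbcTemplate.queryForObject("select count(*) from users", Integer.class);
        System.out.println("Seeded users: " + count);
    }
}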
I am working on an application where we have decided to go for a multi-tenant architecture using the solution provided by Spring, so we route the data to each datasource depending on the value of a parameter. Let's say this parameter is a number from 1 to 10, depending on our clients' ids.
However, this requires altering the application context each time we add a new datasource, so to start we have thought of the following solution:
Start with 10 datasources (or more) pointing to different IPs and the same schema, but in the end all routed to the same physical database. No matter the datasource we use, the data will be sent to the same schema in this first scenario.
The data would be in the same schema, so the same table would be shared among datasources, but each row would only be visible to each datasource (using a fixed where clause in every CRUD operation)
When we have performance problems, we will create another database, migrate some clients to the new schema, and reroute the IP of one of the datasources to the new database, so this new database gets part of the load of the old one
Are there any drawbacks with this approach? I am concerned about:
ACID properties lost
Problems with the Hibernate SessionFactory and the second-level cache
Table locking issues
We are using Spring 3.1, Hibernate 4.1 and MySQL 5.5
I think your Spring link is a little outdated; Hibernate 4 can handle multi-tenancy pretty well on its own. I would suggest using the multiple-schemas approach, because setting up and initializing a new schema is relatively easy to do programmatically (for example at registration time). If you have that much load, though (and your database vendor does not provide a solution to make this transparent to your application), you need the multiple-database approach; in that case you should try to incorporate the tenant id into the database URL or something similar. http://docs.jboss.org/hibernate/orm/4.1/devguide/en-US/html/ch16.html
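To make that suggestion slightly more concrete, here is a rough sketch of the tenant-resolution half of Hibernate 4's schema-based multi-tenancy (the class and the ThreadLocal holder are made up; the matching MultiTenantConnectionProvider and the hibernate.multiTenancy setting are left out):

import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

public class ThreadLocalTenantResolver implements CurrentTenantIdentifierResolver {

    // Hypothetical holder populated per request, e.g. from the client id (1..10).
    public static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<String>();

    @Override
    public String resolveCurrentTenantIdentifier() {
        String tenant = CURRENT_TENANT.get();
        // Fall back to a default schema when no tenant has been set yet.
        return tenant != null ? tenant : "default_schema";
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        // Require that an already-open session keeps using the same tenant.
        return true;
    }
}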