How does Liquibase pick up Hibernate transactions? - java

My J2EE application performs DB transactions through Hibernate, and I want to integrate Liquibase into the project.
I completed the Liquibase setup using the link.
I forced Hibernate to be read-only by setting hibernate.hbm2ddl.auto to none.
Now when I run the server and perform insert/update operations, Hibernate doesn't save to the database.
I don't understand how Liquibase is supposed to pick up the Hibernate DB transactions issued earlier via the configuration file specified in this link.
Am I missing some logic?
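For reference, the setup being described boils down to something like the sketch below, with placeholder values; the actual changelog and connection settings come from the linked guide:

# Hibernate configuration property
hibernate.hbm2ddl.auto=none

# liquibase.properties (all values are placeholders)
changeLogFile=db/changelog/db.changelog-master.xml
url=jdbc:mysql://localhost:3306/mydb
username=dbuser
password=dbpass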

Related

Flyway & Hibernate : Cannot populate data to initial database

In a Spring Boot app, I am using Hibernate and two tables are created properly. However, I also need to insert data into one of these tables, and for this purpose I thought I should use Flyway.
So I added insert statements to a Flyway migration and used the following parameters for Hibernate and Flyway in application.properties:
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect
spring.jpa.hibernate.ddl-auto= update # also tried none
spring.flyway.url=jdbc:mysql://localhost:3306
spring.flyway.schemas=demo-db
spring.flyway.user=root
spring.flyway.password=******
I have not used Flyway to initialize a database before, and I am not sure if I can use Flyway alongside Hibernate as described above. Or should I disable Hibernate's table creation and add another migration script for the table creation?
If you are using Flyway only to insert data, don't do that. Try one of these instead:
With Hibernate:
In addition, a file named import.sql in the root of the classpath is executed on startup if Hibernate creates the schema from scratch (that is, if the ddl-auto property is set to create or create-drop).
With Basic SQL Scripts:
Spring Boot can automatically create the schema (DDL scripts) of your JDBC DataSource or R2DBC ConnectionFactory and initialize it (DML scripts). It loads SQL from the standard root classpath locations: schema.sql and data.sql
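A minimal sketch of both options (the customer table and values are illustrative, not from the question): for the Hibernate route, put an import.sql at the root of the classpath and keep ddl-auto at create or create-drop; for the script route, put a data.sql (and optionally a schema.sql) in src/main/resources:

-- src/main/resources/import.sql (run by Hibernate only when it creates the schema)
INSERT INTO customer (id, name) VALUES (1, 'Example customer');

-- src/main/resources/data.sql (run by Spring Boot's SQL script initialization)
INSERT INTO customer (id, name) VALUES (1, 'Example customer');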
The issue here is that Hibernate does not automatically create the tables. Additionally, if you are using Spring Boot, Flyway will run before the service using Hibernate has started. As a result, your Flyway scripts are interacting with tables that do not exist yet.
The recommended way to do this is to use Flyway to manage both your database structure (your create-table statements and so on) and your static data. This means your database is versioned and provisioned ready for your service, and Hibernate can simply connect.
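A minimal sketch of that approach, assuming Flyway's default db/migration location and made-up table names: one versioned script creates the table, the next one inserts the static data, and Hibernate is told to only validate the schema that Flyway built.

-- src/main/resources/db/migration/V1__create_customer.sql
CREATE TABLE customer (
    id   BIGINT PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);

-- src/main/resources/db/migration/V2__insert_reference_data.sql
INSERT INTO customer (id, name) VALUES (1, 'Example customer');

# application.properties: Flyway owns the schema, Hibernate only checks it
spring.jpa.hibernate.ddl-auto=validate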

spring boot sql database DML and DDL scripts

How can I define a schema and some data to be inserted into a SQL database in Spring Boot?
Can I also do this for embedded databases?
For example, I am using two databases and I want to populate some data or define a schema and apply it to the different databases before the application starts.
A file named import.sql in the root of the classpath is executed on startup if Hibernate creates the schema from scratch (that is, if the ddl-auto property is set to create or create-drop). This can be useful for demos and for testing if you are careful but is probably not something you want to be on the classpath in production. It is a Hibernate feature (and has nothing to do with Spring).
You can take a look at the Spring docs.
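As a sketch of what those docs describe (table and columns are made up): put both files in src/main/resources, and on startup Spring Boot runs schema.sql first and data.sql afterwards.

-- src/main/resources/schema.sql
CREATE TABLE person (
    id   BIGINT PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);

-- src/main/resources/data.sql
INSERT INTO person (id, name) VALUES (1, 'Example person');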

Spring Boot schema.sql - drop db schema on restart

Hi, I'm using Spring Boot version 1.5.9.
When using Spring Boot to initialize a MySQL database from schema.sql, it works fine and the database schema is created successfully. But on restart of the application the schema.sql script executes again, and the application fails to start because the tables already exist.
I tried the spring.jpa.hibernate.ddl-auto=create-drop option in application.properties, but it has no effect (probably because it only works for Hibernate entities, which I'm not using).
Is there a way to have Spring Boot re-create the schema from schema.sql on every restart, even if the database is not an in-memory one?
GitHub:
https://github.com/itisha/spring-batch-demo/tree/database-input
According to the documentation, you can simply ignore such exceptions by setting the spring.datasource.continue-on-error property to true:
Spring Boot enables the fail-fast feature of the Spring JDBC initializer by default, so if the scripts cause exceptions the application will fail to start. You can tune that using spring.datasource.continue-on-error.
Or you can even turn initialization off entirely with spring.datasource.initialize set to false:
You can also disable initialization by setting spring.datasource.initialize to false.
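In property form, these are the Spring Boot 1.x names used by the version in the question (newer Spring Boot releases moved this configuration under spring.sql.init.*):

# keep running the scripts but ignore errors such as "table already exists"
spring.datasource.continue-on-error=true

# or skip schema.sql/data.sql execution entirely
spring.datasource.initialize=false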
A workaround could be to change the create statements in your schema.sql
from
CREATE TABLE test .....
to
CREATE TABLE IF NOT EXISTS test ...
Use IF NOT EXISTS statements, and turn off automatic schema creation to avoid conflicts by adding this line to your application.properties:
spring.jpa.hibernate.ddl-auto=none
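Putting both suggestions together, schema.sql might look like this (the columns are illustrative; only the IF NOT EXISTS clause matters), so re-running the script on every restart becomes harmless:

CREATE TABLE IF NOT EXISTS test (
    id   BIGINT PRIMARY KEY,
    name VARCHAR(255)
);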

Spring Auto configuration resulting in old MySQL dialect

I created a small POC app with Spring Boot, using Hibernate (5.2.9) and MariaDB (10.1.19).
I had some SQL dialect issues where my create/drop table SQL was using TYPE=MyISAM, but I resolved that locally by setting spring.jpa.properties.hibernate.dialect. However, when I deploy to the cloud (PCF), all of the cloud profile stuff kicks in, and Hibernate decides its dialect is going to be org.hibernate.dialect.MySQLDialect,
which results in invalid SQL being generated for creating new tables.
Note that I'm not really sure what else could be happening. This is a Spring Boot app (1.5.3), and the cloud profile is kicking in to do auto-configuration. There is a bunch of properties injected, and I can't seem to get my dialect property to be respected.
This feels like it should be a crushingly easy problem, yet it is escaping me.
Any ideas what I need to set, or provide as dependencies?
I tried removing all of the MySQL dependencies, but then the injected connection string is jdbc:mysql..., which I think may be part of the problem.
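For reference, the dialect override being attempted locally amounts to one of these two standard Spring Boot properties; the dialect class shown is just a plausible InnoDB-aware value, and whether the PCF auto-configuration respects it is exactly what is in question here:

# JPA-level setting picked up by Spring Boot
spring.jpa.database-platform=org.hibernate.dialect.MySQL5InnoDBDialect

# or the raw Hibernate property
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect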

writing to 2 schemas in one database with one datasource, using Spring and Hibernate

We have two schemas in one Oracle database. We are writing a Spring/Hibernate application which will write to tables in both schemas in one operation.
My question is: can one datasource write to both schemas in one transaction, and roll back all updates in both schemas if required?
We are in a non-Java-EE environment, using just Tomcat, so there is no out-of-the-box support for global transactions/JTA. I know that if global transactions are required, we could use Spring's support for JTA (and Atomikos).
However, are global transactions required in the above situation, given that both schemas are in one database? Is this a use case for JTA?
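Not an answer to the JTA question itself, but a sketch of the write path being described, using plain JPA and Spring annotations (entity, table, and schema names are made up). Both entities go through the same EntityManager, so one DataSource and one local transaction cover writes to both schemas:

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;
import javax.persistence.Table;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Entity, table, and schema names are illustrative only.
@Entity
@Table(name = "ORDERS", schema = "SCHEMA_A")
class OrderRecord {
    @Id
    Long id;
}

@Entity
@Table(name = "AUDIT_LOG", schema = "SCHEMA_B")
class AuditRecord {
    @Id
    Long id;
}

@Service
class TwoSchemaWriter {

    @PersistenceContext
    private EntityManager em;

    // One DataSource -> one JDBC connection -> one local transaction:
    // if either persist fails, both inserts roll back on that same connection.
    @Transactional
    public void writeBoth(OrderRecord order, AuditRecord audit) {
        em.persist(order);
        em.persist(audit);
    }
}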
