Is it possible to restore a Postgres DB to a local Liquibase-managed database? - Java

I am new to Liquibase. The backend app runs locally with some basic changelog files.
I would like to get some test data from a server using pg_dump and pg_restore, and restore it to my local Postgres DB.
How do I get that to work with Liquibase? If that is not a good option, what would be a better one?
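For context, the dump/restore round-trip I have in mind looks roughly like this (host names, database names, and credentials are placeholders):

```shell
# On the server: dump only the data, since the schema is managed by Liquibase.
# -Fc produces the custom format that pg_restore expects.
pg_dump -Fc --data-only -h server-host -U app_user -d app_db -f testdata.dump

# Locally: restore the data into the Liquibase-managed schema.
# --disable-triggers avoids foreign-key ordering problems during a data-only load
# (this typically requires superuser rights).
pg_restore --data-only --disable-triggers -h localhost -U postgres -d app_db testdata.dump
```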

I found that a good way to export an entire database's data to CSV is to use a pg function to create one CSV file per table, configure the changelog to load the CSV files, and map column headers explicitly if records contain a lot of null data.
The steps are:
Export the database into one CSV file per table.
Add a loadData changeSet (the file path is relative to src/main/resources), for example:
<changeSet author="programmer" id="mock_user_data">
    <loadData tableName="user_data" file="db/csv/public.user_data.csv" separator=";">
        <column name="default_user" type="BOOLEAN"/>
        <column name="username" type="STRING"/>
        <column name="store_id" type="NUMERIC"/>
    </loadData>
</changeSet>
If you get error messages caused by null values, look at your CSV file, insert NULL between the delimiters where values are missing, and make sure the problematic column is mapped in the loadData section.
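As a sketch, the per-table export can be done with psql's \copy; the semicolon delimiter and the literal NULL marker below match the loadData configuration above (database name, table name, and output path are placeholders):

```shell
# Export one table to CSV with a header row, ';' as delimiter, and the
# literal string NULL for missing values, matching separator=";" in the
# changeSet and the NULL-handling advice above.
psql -d app_db -c "\copy public.user_data TO 'src/main/resources/db/csv/public.user_data.csv' WITH (FORMAT csv, HEADER, DELIMITER ';', NULL 'NULL')"
```

Repeat the command once per table (or loop over the table names) to get one CSV file per table.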

You could add a changelog to load the data: export from your DB as CSV and load it with Liquibase's loadData change type.

Related

Start liquibase after activiti under Spring Boot 2

We have a project with Activiti and Liquibase dependencies.
Activiti is configured automatically (except for the 'spring.activiti.async-executor-activate' and 'spring.activiti.database-schema-update' params in application.yml), and so is Liquibase (except for 'spring.liquibase.change-log').
Now we need to rebuild indexes on Activiti tables (ACT_*) with special Liquibase changesets like <sql>alter index ... rebuild tablespace ...</sql>.
There is no problem on an existing database, but the app crashes on first start with a fresh DB installation, because Liquibase tries to alter nonexistent ACT_* indexes.
How can I make Liquibase run after the Activiti DB installation, considering Spring Boot autoconfiguration?
You could control the execution of these changesets by using preconditions. For example:
<changeSet id="1" author="bob">
    <preConditions onFail="MARK_RAN">
        <indexExists indexName="your_act_index"/>
    </preConditions>
    <sql>alter index ... rebuild tablespace ...</sql>
</changeSet>
This way the indexes are only rebuilt if they exist, which is not the case on an empty database.

Liquibase: How to identify a change set by its ID only?

As per the liquibase documentation:
Each changeSet tag is uniquely identified by the combination of the
“id” tag, the “author” tag, and the changelog file classpath name.
This seems to be a very poor design choice. The identity of a changeset shouldn't be linked to its location. If the changelog is run via automatic application deployment the changeset would come from a classpath location within a JAR file. If I want to run the same changesets from commandline manually, the location might be the current directory.
In this case, instead of recognizing the changeset as the same based on its ID, Liquibase will try to apply it twice. Is there a way to change this behavior and have it identify changesets by the specified ID only?
I would suggest using the logicalFilePath attribute of the databaseChangeLog tag.
This gives you more freedom to change the directory structure of your project.
Also it prevents the file name from being stored as an absolute path (which might happen in some circumstances).
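For example (a sketch; the logicalFilePath value is arbitrary and up to you):

```xml
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.4.xsd"
        logicalFilePath="db/changelog/master.xml">
    <!-- changeSets in this file are identified by the logical path above,
         regardless of where the file physically lives -->
</databaseChangeLog>
```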
binoternary's answer works. But the problem is that logicalFilePath is only available in XML changesets, whereas I was using SQL changesets. The workaround is to create an XML changeset and include the SQL file from it, like this:
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.4.xsd">
    <changeSet id="new-tables" author="kshitiz" logicalFilePath="new_tables.sql">
        <sqlFile path="new_tables.sql" relativeToChangelogFile="true"/>
    </changeSet>
</databaseChangeLog>
Only if you modify the source code and recompile your own version of Liquibase.
Actually, the design is fine; you are just using it incorrectly.
If, for example, you have a big organization where each team maintains its changesets in a separate Liquibase file, it would be fatal not to take the filename into account, since different teams could use the same ID.
Just make sure you always invoke Liquibase the same way, and the identities of the changesets will not change.

Specify a regular expression for LocalSessionFactoryBean mappingLocations

In my application, I have specified the below configuration for automatically picking up all the HBM files under a specific folder in the classpath.
<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
    <property name="mappingLocations">
        <list>
            <value>classpath:hbms/**/*.hbm.xml</value>
        </list>
    </property>
</bean>
Now, for a new requirement, there is a need to create multiple HBM files with named queries specific to database. The HBM file names will be of the pattern test.DB.hbm.xml. For example, test.oracle.hbm.xml and test.db2.hbm.xml. In addition to these, there are the old regular HBM files (for mapping to tables) with name format as table1.hbm.xml, table2.hbm.xml, etc. also present in the same folder.
Using the above pattern, Hibernate fails to load the files because of duplicate named queries in the new hbm files (since the query names are the same in all such files).
The requirement is now to load the regular HBM files and also the DB specific HBM files. Is it possible to achieve this by using a regular expression as below?
classpath:hbms/**/*.(.${dbType}).hbm.xml
In the above example, dbType is available as a Spring environment property. My attempt with these changes resulted in none of the HBM files being loaded (including the old ones).
Am I doing something wrong with the regular expression or is it not possible to do this via XML configuration?
You have two options:
You can store the database-specific config files in a separate folder, and then your config looks like this:
classpath:hbms/**/${dbType}/*.hbm.xml
You can have them follow the pattern you provided, but change the configuration to:
classpath:hbms/**/*${dbType}.hbm.xml
To load common files, you need to rename them to include something you can match, like:
one.hbm.xml
becoming:
common-one.hbm.xml
Then the configuration might look like this:
classpath:hbms/**/common-*.hbm.xml
classpath:hbms/**/*${dbType}.hbm.xml
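Putting it together, the original bean definition might then become (a sketch, assuming the dbType Spring property resolves at startup):

```xml
<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
    <property name="mappingLocations">
        <list>
            <!-- shared table mappings, renamed with a common- prefix -->
            <value>classpath:hbms/**/common-*.hbm.xml</value>
            <!-- database-specific named-query mappings, e.g. test.oracle.hbm.xml -->
            <value>classpath:hbms/**/*${dbType}.hbm.xml</value>
        </list>
    </property>
</bean>
```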

Hibernate Mapping Changes not taking effect

I have a project using Hibernate and external XML mapping files. I switched from MySQL to Oracle. Some of my fields have the name 'date', which is okay in MySQL but not in Oracle, where DATE is a reserved word. It does not like
<property name="date" column="date" type="string" />
so I changed it to
<property name="sdate" column="sdate" type="string" />
When I re-run the code to generate the schema, it still follows the old version of the mapping file and does not take the new changes into account. I have even created a similar but different XML file and pointed my Hibernate config to this new file, and it has the same problem.
Does anyone know why it could be following the old version of my mapping file and refusing to follow my updates?

Hibernate 3.2.5 with Play Framework 1.2.5

I am trying to use Hibernate 3.2.5 with Play Framework 1.2.5.
In Hibernate I have two files:
1) a cfg.xml file (containing the DB config details along with some additional properties)
2) an hbm.xml file (containing the mapping between the Java bean and the DB table)
To connect to the Oracle 10g DB, I am providing the DB details in the application.conf file like this, and the connection is successful when I start the server:
db.url=jdbc:oracle:thin:@localhost:1521/orcl
db.driver=oracle.jdbc.OracleDriver
db.user=system
db.pass=tiger
I want to know where to place the hbm.xml file (for the mapping details) and the cfg.xml file (for the remaining properties other than the DB connection details).
Starting from the root directory of your application:
the hibernate.cfg.xml must be placed inside the app directory
the mapping files (the hbm files), where your model classes are defined, usually go inside the app/models/ directory
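So the layout would look roughly like this (file names are placeholders):

```
myapp/
├── app/
│   ├── hibernate.cfg.xml
│   └── models/
│       ├── User.hbm.xml
│       └── Order.hbm.xml
└── conf/
    └── application.conf
```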
Inside your hibernate.cfg.xml, the mapping entries for the hbm files should look something like:
<mapping resource="models/YourHbmFile1.hbm.xml"/>
<mapping resource="models/YourHbmFile2.hbm.xml"/>
By the way, I find it easier to use Hibernate annotations instead of the hbm XML mapping; they are easier to write and to maintain.
If you prefer to annotate your model classes, you can delete the hbm files and directly map your annotated classes in your hibernate.cfg.xml.
In application.conf you have to specify the data you have already added:
db.url=jdbc:oracle:thin:@localhost:1521/orcl
db.driver=oracle.jdbc.OracleDriver
db.user=system
db.pass=tiger
Also in the hibernate.cfg.xml you need to specify the connection data:
<property name="hibernate.dialect">...</property>
<property name="hibernate.connection.driver_class">...</property>
<property name="hibernate.connection.url">...</property>
<property name="hibernate.connection.username">...</property>
<property name="hibernate.connection.password">...</property>
