I am just starting to use Liquibase, and I am wondering: when I run ./mvnw compile liquibase:diff, why does it generate change sets that first drop existing indexes and then recreate them, if the indexes already exist?
Example:
<changeSet author="me (generated)" id="1486157347995-13">
<dropIndex indexName="my_idx" tableName="notification"/>
<createIndex indexName="my_idx" tableName="notification">
<column name="index_col"/>
</createIndex>
</changeSet>
Probably out of "laziness": this is a simple way to make sure the created index is the same (not only the name, but also the columns used) as the one in the reference database.
It handles two diff cases in one:
an index name that is missing in the target db,
the same index name but with a different definition.
My initial change set was:
<changeSet id="1.2.0-01" author="Arya">
<createIndex tableName="org_message" indexName="ix_org_message_userid_peerid">
<column name="user_id"/>
<column name="peer_id"/>
</createIndex>
</changeSet>
It was executed successfully without any warning.
Then I deleted the executed 1.2.0-01 record from the DATABASECHANGELOG table (note: the created index still exists) and added an indexExists precondition to the changeSet:
<changeSet id="1.2.0-01" author="Arya">
<preConditions onFail="MARK_RAN">
<not>
<indexExists indexName="ix_org_message_userid_peerid"/>
</not>
</preConditions>
<createIndex tableName="org_message" indexName="ix_org_message_userid_peerid">
<column name="user_id"/>
<column name="peer_id"/>
</createIndex>
</changeSet>
During the execution, I saw this log:
JdbcDatabaseSnapshot$CachingDatabaseMetaData -| Liquibase needs to
access the DBA_RECYCLEBIN table so we can automatically handle the
case where constraints are deleted and restored. Since Oracle doesn't
properly restore the original table names referenced in the
constraint, we use the information from the DBA_RECYCLEBIN to
automatically correct this issue.
The user you used to connect to the database (ORG_PLATFORM) needs to
have "SELECT ON SYS.DBA_RECYCLEBIN" permissions set before we can
perform this operation. Please run the following SQL to set the
appropriate permissions, and try running the command again.
GRANT SELECT ON SYS.DBA_RECYCLEBIN TO ORG_PLATFORM;
But the change set was executed successfully: a 1.2.0-01 record with 'MARK_RAN' was added to the DATABASECHANGELOG table.
Is this warning an important issue that should be fixed, or is it just default logging (like the one mentioned in the CORE-2940 issue)? I'm using Liquibase 3.8.9 and Oracle 12c.
It's a warning you can ignore. You can disable the warning using the property:
liquibase.oracle.ignoreRecycleBin=true
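For example, with the Maven setup from the first question you could pass it as a JVM system property when invoking the build (a sketch; it assumes Liquibase picks the property up from Java system properties, so check the docs for your version):
./mvnw compile liquibase:update -Dliquibase.oracle.ignoreRecycleBin=true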
I recently upgraded my Java Liquibase version from 3.5.3 to 3.6.3.
I have a very heavy environment with lots of databases and tables (I am using Oracle).
On this environment, I am trying to execute a huge changelog file in which I create tables and indices.
Below is a small part of the changelog.
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.2.xsd">
...
...
...
<changeSet author="me" id="tableCreation78">
<preConditions onFail="MARK_RAN">
<not>
<tableExists tableName="MY_TABLE_NAME" />
</not>
</preConditions>
<comment>Creating table MY_TABLE_NAME</comment>
<createTable tableName="MY_TABLE_NAME">
<column name="M_ID" type="bigint">
<constraints nullable="false" primaryKey="true" primaryKeyName="PK_MY_TABLE_NAME_190" />
</column>
<column name="M_FORMAT" type="int" />
</createTable>
</changeSet>
...
...
...
<changeSet author="me" id="indexCreation121">
<preConditions onFail="MARK_RAN">
<tableExists tableName="MY_TABLE_NAME"/>
<not>
<indexExists tableName="MY_TABLE_NAME" columnNames="M_FEEDER_ID"/>
</not>
</preConditions>
<comment>Creating index for MY_TABLE_NAME</comment>
<createIndex tableName="MY_TABLE_NAME" indexName="MY_INDEX_NAME">
<column name="M_ID_INDEX"/>
</createIndex>
</changeSet>
...
...
...
</databaseChangeLog>
On Liquibase 3.5.3, creating the indices used to be quick.
When I migrated to Liquibase 3.6.3, I hit a severe performance regression:
what used to run in 1-2 minutes now takes up to 20 minutes to complete.
The changelog does not define any unique constraints.
While debugging, I noticed one of the many differences between the two versions: in 3.5.3, the listConstraints and listColumns methods from UniqueConstraintSnapshotGenerator are not called.
In 3.6.3, these methods are called a lot, even though no unique constraints are defined in the changelog. I am guessing they come from the tables that already existed in the environment.
Some of these queries (see below) are executed multiple times with the exact same parameters. I don't know whether this is a maintenance step that was added in 3.6.3.
2020-08-13 17:03:52,270 INFO [main] select ucc.owner as constraint_container, ucc.constraint_name as constraint_name, ucc.column_name, f.validated as constraint_validate from all_cons_columns ucc INNER JOIN all_constraints f ON ucc.owner = f.owner AND ucc.constraint_name = f.constraint_name where ucc.constraint_name='UC' and ucc.owner='DB' and ucc.table_name not like 'BIN$%' order by ucc.position
I am not sure if this is the cause of the regression but honestly, I am out of ideas.
Does anybody know if this might be the cause of this regression?
Did they add new maintenance steps in Liquibase 3.6.3 that might be causing this big performance degradation?
Thank you so much!
You may need to perform maintenance on your Oracle data dictionary. Databases that use Liquibase tend to drop and create more objects than the average Oracle database, which can cause performance problems with metadata queries.
First, gather optimizer statistics for fixed objects (V$ objects) and the data dictionary (ALL_ objects). This information helps Oracle build good execution plans for metadata queries. The below statement will take a few minutes but may only need to be run once a year:
begin
dbms_stats.gather_fixed_objects_stats;
dbms_stats.gather_dictionary_stats;
end;
/
Another somewhat-common reason for data dictionary query problems is a large number of objects in the recycle bin. The recycle bin is great on production systems, where it lets you instantly recover from dropping the wrong table. But on a development environment, if thousands of objects are constantly dropped but not purged, those old objects can slow down some metadata queries.
--Count the number of objects in the recycle bin.
select count(*) from dba_recyclebin;
--Purge all of them if you don't need them. Must be run as SYS.
purge dba_recyclebin;
Those are two quick and painless solutions to some data dictionary problems. If that doesn't help, you may need to tune specific SQL statements, which may require a lot of information. For example - exactly how long does it take your system to run that query against ALL_CONS_COLUMNS? (On my database, it runs in much less than a second.)
Run Liquibase and then use a query like the one below to find the slowest metadata queries:
select elapsed_time/1000000 seconds, executions, sql_id, sql_fulltext, gv$sql.*
from gv$sql
order by elapsed_time desc;
I'm migrating from Oracle DB 11g to MSSQL 2014.
Currently I am getting the following error when trying to save new data:
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Cannot
insert the value NULL into column 'ID', table 'testDB.FILE_SETTINGS';
column does not allow nulls. INSERT fails.
My interpretation is that this is caused by the "native" ID generator differing between Oracle and MSSQL (sequence vs. identity).
In Oracle we had small customization to HIBERNATE_SEQUENCE:
alter sequence hibernate_sequence increment by 5;
...but that's all.
The hibernate-mapping is originally like this:
<id name="id" column="ID" type="java.lang.Long">
<generator class="native">
</generator>
</id>
In MSSQL I have tried it like this with no luck:
<id name="id" column="ID" type="java.lang.Long">
<generator class="sequence">
<param name="sequence">HIBERNATE_SEQUENCE</param>
</generator>
</id>
And in the MSSQL server I have a sequence (created by the migration tool):
HIBERNATE_SEQUENCE in testDB -> Views -> sys.sequences
It is also found in (also created by the migration tool):
testDB -> Views -> INFORMATION_SCHEMA.SEQUENCES
How should this be done properly, given that I want to retain the same way of generating identifiers as in Oracle? Is something wrong in MSSQL or in the Hibernate settings?
The Hibernate version is quite old: 2.1.8.
Your interpretation of the error is wrong. This error tells you nothing about sequences. It tells you that testDB.FILE_SETTINGS has its ID column defined as NOT NULL, but you are trying to insert a NULL value into it.
I don't see your code but I think there is something like this:
create table dbo.MyTbl_wrong (id int NOT NULL, col1 varchar(100) );
insert into dbo.MyTbl_wrong(col1) values ('1 str'), ('2 str'), ('3 str');
Cannot insert the value NULL into column 'id', table
'db2.dbo.MyTbl_wrong'; column does not allow nulls. INSERT fails. The
statement has been terminated.
What you should do instead is use the sequence as the default for your id column, like this:
create sequence dbo.MySeq
start with 1;
create table dbo.MyTbl (id int NOT NULL default(next value for dbo.MySeq), col1 varchar(100) );
insert into dbo.MyTbl(col1) values ('1 str'), ('2 str'), ('3 str');
--select *
--from dbo.MyTbl;
-------
--id col1
--1 1 str
--2 2 str
--3 3 str
Answering my own question in case someone has similar issues some day:
Apparently there was nothing wrong with the settings I mentioned in the original question; it's just that Hibernate 2.1.8 does not seem to support the SEQUENCE id generator. The old Hibernate obviously also did not support the needed SQLServer2012Dialect.
I ended up updating the Hibernate version to 4.3.11 (with its dependencies). This version got selected because it required the least amount of refactoring.
I had some issues with the dependencies, as this old project was not using Maven.
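For completeness, with Hibernate 4.3 the dialect switch amounts to one property in the Hibernate configuration. A minimal sketch (your configuration file and the surrounding settings will differ):
<property name="hibernate.dialect">org.hibernate.dialect.SQLServer2012Dialect</property>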
I also faced this error whenever any DB query was attempted:
org.hibernate.LazyInitializationException: could not initialize proxy - no Session
I found out that the new Hibernate version defaults to lazy loading, which the old version did not use. So I ended up fixing it by setting lazy="false" in the Hibernate mapping files.
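For reference, a minimal sketch of what that looks like in a mapping file (the class and table names here are placeholders, not from the real project):
<hibernate-mapping default-lazy="false">
    <class name="com.example.FileSettings" table="FILE_SETTINGS" lazy="false">
        ...
    </class>
</hibernate-mapping>
Setting default-lazy="false" on the root element restores the pre-Hibernate-3 behaviour for the whole file, while lazy="false" on an individual class or collection limits the change to that mapping.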
You may see "org.hibernate.LazyInitializationException: could not initialize proxy - no Session" while upgrading from Hibernate 2.1 to Hibernate 3.0. You will suddenly find yourself puzzling over what happened; it was working before the update. The reason is that Hibernate 3 introduced lazy loading as the default, i.e. lazy="true". If you want it to work the same as before, you can mark everything as lazy="false". Alternatively, you'll have to start eagerly initialising your entities and associations.
Read more: http://javarevisited.blogspot.com/2014/04/orghibernatelazyinitializationException-Could-not-initialize-proxy-no-session-hibernate-java.html#ixzz56ahBmiBl
I managed to integrate Liquibase into our Maven build to initialize an H2 in-memory database with a few entries. Those rows have their primary keys generated using a sequence, which works as expected (BIGINT values incremented starting from 1).
My issue is that when I try to persist a new entity into that table from within a JUnit integration test, I get a "unique key constraint violation", because the new entity has the same primary key as the very first row inserted by the Liquibase changelog XMLs.
So the initialisation itself works perfectly fine as expected; the Maven build uses the Liquibase changelog XMLs.
For now I just wipe the affected tables completely before any integration tests with my own Runner... but that won't be a possibility in the future. It's currently quite a challenge to investigate such issues, since there is not yet much specific information on Liquibase available.
Update: workaround
While I'd prefer the answer below, using H2 brings up the problem that the following changeSet won't work, because the required minValue attribute is not supported:
<changeSet author="liquibase-docs" id="alterSequence-example">
<alterSequence
incrementBy="1"
maxValue="371717"
minValue="40"
ordered="true"
schemaName="public"
sequenceName="seq_id"/>
As a simple workaround, I now just drop the existing sequence that was used to insert my test data and recreate it in a second changeSet:
<changeSet id="2" author="Me">
<dropSequence
sequenceName="SEQ_KEY_MY_TBL"/>
<createSequence
sequenceName="SEQ_KEY_MY_TBL"
incrementBy="1"
startValue="40"/>
</changeSet>
This way, the values configured in the changelog-*.xml are inserted using the sequence with an initial value of 1. I insert 30 rows, so keys 1-30 are used. After that, the sequence gets dropped and recreated with a higher startValue. This way, when persisting entities from within a JUnit-based integration test, the new entities get primary keys starting from 40, and the previous unique constraint problem is solved.
Note: H2 will probably soon release a version supporting minValue/maxValue, since the corresponding patch already exists.
Update:
Maybe we should mention that this is still just a workaround. Does anyone know whether H2 supports a sequence with Liquibase that won't start over after DB init?
You should instruct Liquibase to set the start value of those sequences to a value beyond those you have used for the entries you created. Liquibase has an alterSequence element for this; you can add such elements at the end of your current Liquibase script.
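If alterSequence won't accept the attributes you need on H2 (as noted in the question), a possible fallback is a raw <sql> change; H2 understands ALTER SEQUENCE ... RESTART WITH. This is a sketch reusing the sequence name from the workaround above:
<changeSet id="3" author="Me">
    <sql>ALTER SEQUENCE SEQ_KEY_MY_TBL RESTART WITH 40</sql>
</changeSet>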
I am writing a query but it always says "No matching index found". I don't know why. My code is as below:
Query query = pm.newQuery(Classified.class);
query.setFilter("emp_Id == emp");
query.setOrdering("upload_date desc");
query.declareParameters("String emp");
List<Classified> results = (List<Classified>)query.execute(session.getAttribute("emp_Id").toString());
<?xml version="1.0" encoding="utf-8"?>
<datastore-indexes autoGenerate="true">
<datastore-index kind="Classified" ancestor="false">
<property name="emp_Id" direction="asc" />
<property name="category" direction="asc" />
<property name="upload_date" direction="desc" />
</datastore-index>
</datastore-indexes>
I have added the above index, but it did not help.
I believe you need to configure a Datastore Index. There's probably one already generated for you in Eclipse at WEB-INF/appengine-generated/datastore-indexes-auto.xml that you just need to copy to WEB-INF/datastore-indexes.xml and deploy again.
Because this needs to be somewhere on the internet...
I kicked myself when I found this out.
The error means you do not have an index matching what the query would like to perform. You can have multiple indexes for each entity.
The error in the log will tell you exactly what index to create and what order the elements need to be in.
I.e., if the error says it wants (it won't be nicely formatted):
<datastore-index kind="Classified" ancestor="false">
<property name="category" direction="desc" />
<property name="upload_date" direction="desc" />
</datastore-index>
then go to Project -> war -> WEB-INF -> appengine-generated -> datastore-indexes-auto.xml and add exactly that. Then redeploy the project.
Next, go into your Google Cloud Console and look at Datastore -> Indexes. It should say that the index is being prepared (this goes quicker if you can kill all connected apps and shut down the instance in the console).
Once it has moved into the list of the other indexes, rerun your application and it won't error out with regard to the index anymore.
Go get it Gentlemen/Ladies
The index you define must hold all possible results in the order they will be returned. Your query asks for a particular emp_Id ordered by upload_date, but within each emp_Id your index is ordered by category before upload_date.
Try removing the category line from your index definition, or swapping the order of category and upload_date, to make upload_date the primary sort order for the index. If another part of your code relies on the category line, you may have to make two separate indices (which incurs some computational cost).
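For the query in the question (an equality filter on emp_Id plus an ordering on upload_date), the matching definition would look something like this; a sketch derived from the query above, not a tested configuration:
<datastore-index kind="Classified" ancestor="false">
    <property name="emp_Id" direction="asc" />
    <property name="upload_date" direction="desc" />
</datastore-index>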
Edit: see comment below by Nick Johnson re. extra parameters.
I am running into this issue at the moment when doing a single-property query such as:
const query = datastore
.createQuery('Emailing_dev')
.filter('status', '=', 'Scheduled')
In my case I should not be getting any errors at all, yet I get "Error 9: No matching index found".
If I define the single-property index twice in the YAML, it works:
indexes:
- kind: Emailing_dev
properties:
- name: status
- name: status
but this surely must be a bug!