I managed to integrate Liquibase into our Maven build to initialize an H2 in-memory database with a few entries. Those rows have their primary keys generated using a sequence, which works as expected (BIGINT values incremented starting from 1).
My issue is that when I try to persist a new entity into that table from within a JUnit integration test, I get a "unique key constraint violation" because the new entity gets the same primary key as the very first row inserted via the Liquibase changelog XMLs.
So the initialization itself works exactly as expected; the Maven build applies the Liquibase changelog XMLs correctly.
For now I simply wipe the affected tables completely before the integration tests using a custom Runner, but that won't be an option in the future. It is currently quite a challenge to investigate such issues, since there is not yet much specific information on Liquibase available.
Update: Workaround
While I'd prefer the answer below, using H2 brings up the problem that the following changeset won't work, because the required minValue attribute is not supported:
<changeSet author="liquibase-docs" id="alterSequence-example">
    <alterSequence
        incrementBy="1"
        maxValue="371717"
        minValue="40"
        ordered="true"
        schemaName="public"
        sequenceName="seq_id"/>
</changeSet>
As a simple workaround I now just drop the existing sequence that was used to insert my test data, and recreate it, in a second changeSet:
<changeSet id="2" author="Me">
    <dropSequence
        sequenceName="SEQ_KEY_MY_TBL"/>
    <createSequence
        sequenceName="SEQ_KEY_MY_TBL"
        incrementBy="1"
        startValue="40"/>
</changeSet>
This way the values configured in the changelog-*.xml files are inserted using the sequence with an initial value of 1. I insert 30 rows, so keys 1-30 are used. After that the sequence is dropped and recreated with a higher startValue. When persisting entities from within a JUnit-based integration test, the new entities then get primary keys starting from 40, and the previous unique constraint problem is solved.
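For illustration, here is a minimal JUnit sketch of the kind of check this enables; the entity, field, and injection details (MyTblEntity, entityManager) are hypothetical placeholders for whatever maps to MY_TBL, not the actual project code:

import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Minimal sketch, assuming MyTblEntity maps to MY_TBL and its id is drawn from
// SEQ_KEY_MY_TBL; entityManager injection depends on the test setup in use.
public class SequenceStartValueIT {
    javax.persistence.EntityManager entityManager;   // provided by the test runner

    @Test
    public void newEntityGetsKeyAboveSeededRange() {
        MyTblEntity entity = new MyTblEntity();
        entityManager.persist(entity);   // id drawn from the recreated sequence
        entityManager.flush();

        // The Liquibase test data occupies keys 1-30; the sequence restarts at 40.
        assertTrue(entity.getId() >= 40L);
    }
}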
Note: H2 will probably soon release a version supporting minValue/maxValue, since the corresponding patch already exists.
Update:
I should mention that this is still just a workaround. Does anyone know whether H2 with Liquibase supports a sequence that won't start over after DB initialization?
You should instruct Liquibase to set the start value of those sequences to a value beyond the ones you have used for the entries you created. Liquibase has an alterSequence element for this; you can add such elements at the end of your current Liquibase script.
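If alterSequence attributes such as minValue are not accepted by the H2 version in use, a rough alternative sketch is to push the sequence forward with plain SQL from the test setup, assuming H2's ALTER SEQUENCE ... RESTART WITH syntax and the connection details of the in-memory database; the same statement could also live in a Liquibase sql changeSet:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: restart the sequence beyond the seeded keys before the tests run.
// JDBC URL, credentials and sequence name are assumptions about the setup.
public class SequenceBump {
    public static void bumpSequence() throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", "");
             Statement stmt = con.createStatement()) {
            stmt.execute("ALTER SEQUENCE SEQ_KEY_MY_TBL RESTART WITH 40");
        }
    }
}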
I'm running the Liquibase command "dropAllForeignKey" on a Sybase database with more than 12,000 tables and more than 380,000 columns. I'm getting an out-of-memory exception, since the Liquibase code tries to query all the columns in the database.
The JVM is launched with -Xms64M -Xmx512M (if I increase it to 5 GB it works, but I don't see why we have to query all the columns in the database).
The script I'm using:
<dropAllForeignKeyConstraints baseTableName="Table_Name"/>
When I checked the Liquibase code, I found the following:
In DropAllForeignKeyConstraintsChange, a snapshot is created for the table mentioned in the XML:
Table target = SnapshotGeneratorFactory.getInstance().createSnapshot(
        new Table(catalogAndSchema.getCatalogName(), catalogAndSchema.getSchemaName(),
                database.correctObjectName(getBaseTableName(), Table.class)),
        database);
In JdbcDatabaseSnapshot, when getColumns is called, bulkFetchQuery() is used instead of fastFetchQuery() because the table is neither the DatabaseChangeLog table nor the DatabaseChangeLogLock table. In this case, bulkFetchQuery does not filter on the table given in the dropAllForeignKey XML. Instead, it uses SQL_FILTER_MATCH_ALL, so it retrieves all the columns in the database. (Querying all the columns already takes a long time.)
In ColumnMapRowMapper, for each table a LinkedHashMap is created with a size equal to the number of columns, and this is where I get the out-of-memory error.
Is it normal that all the columns are queried when dropping all the foreign keys of a single table? If so, why is that needed, and is there a solution to my problem that does not involve increasing the JVM heap size?
PS: There is another command called dropForeignKey to drop a foreign key, but it needs the name of the foreign key as input, and I don't have it. I can find the name of the foreign key for a given database, but I'm running this command against different databases, the foreign key name changes from one to another, and I need a generic Liquibase change. So I can't use dropForeignKey and I have to use dropAllForeignKey.
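One possible workaround sketch, in case it fits these constraints: look up the foreign-key names of the single table via standard JDBC metadata (DatabaseMetaData.getImportedKeys reports an FK_NAME per imported key) and drop them by name, so no full-schema snapshot is needed. Connection handling and the exact DDL syntax are assumptions that would need checking against your Sybase version:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch: collect the FK names of one table from JDBC metadata and drop them,
// avoiding a whole-database column snapshot. Catalog/schema/table are placeholders.
public class DropForeignKeysForTable {
    public static void dropAll(Connection con, String catalog, String schema, String table) throws Exception {
        Set<String> fkNames = new LinkedHashSet<>();
        DatabaseMetaData meta = con.getMetaData();
        try (ResultSet rs = meta.getImportedKeys(catalog, schema, table)) {
            while (rs.next()) {
                fkNames.add(rs.getString("FK_NAME"));
            }
        }
        try (Statement stmt = con.createStatement()) {
            for (String fk : fkNames) {
                // The exact DDL may differ per Sybase version; verify before use.
                stmt.executeUpdate("ALTER TABLE " + table + " DROP CONSTRAINT " + fk);
            }
        }
    }
}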
Here is the stack trace:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.LinkedHashMap.newNode(LinkedHashMap.java:256)
at java.base/java.util.HashMap.putVal(HashMap.java:637)
at java.base/java.util.HashMap.put(HashMap.java:607)
at liquibase.executor.jvm.ColumnMapRowMapper.mapRow(ColumnMapRowMapper.java:35)
at liquibase.executor.jvm.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:72)
at liquibase.snapshot.ResultSetCache$ResultSetExtractor.extract(ResultSetCache.java:297)
at liquibase.snapshot.JdbcDatabaseSnapshot$CachingDatabaseMetaData$3.extract(JdbcDatabaseSnapshot.java:774)
at liquibase.snapshot.ResultSetCache$ResultSetExtractor.extract(ResultSetCache.java:288)
at liquibase.snapshot.JdbcDatabaseSnapshot$CachingDatabaseMetaData$3.bulkFetchQuery(JdbcDatabaseSnapshot.java:606)
at liquibase.snapshot.ResultSetCache$SingleResultSetExtractor.bulkFetch(ResultSetCache.java:353)
at liquibase.snapshot.ResultSetCache.get(ResultSetCache.java:59)
at liquibase.snapshot.JdbcDatabaseSnapshot$CachingDatabaseMetaData.getColumns(JdbcDatabaseSnapshot.java:539)
at liquibase.snapshot.jvm.ColumnSnapshotGenerator.addTo(ColumnSnapshotGenerator.java:106)
at liquibase.snapshot.jvm.JdbcSnapshotGenerator.snapshot(JdbcSnapshotGenerator.java:79)
at liquibase.snapshot.SnapshotGeneratorChain.snapshot(SnapshotGeneratorChain.java:49)
at liquibase.snapshot.DatabaseSnapshot.include(DatabaseSnapshot.java:286)
at liquibase.snapshot.DatabaseSnapshot.init(DatabaseSnapshot.java:102)
at liquibase.snapshot.DatabaseSnapshot.<init>(DatabaseSnapshot.java:59)
at liquibase.snapshot.JdbcDatabaseSnapshot.<init>(JdbcDatabaseSnapshot.java:38)
at liquibase.snapshot.SnapshotGeneratorFactory.createSnapshot(SnapshotGeneratorFactory.java:217)
at liquibase.snapshot.SnapshotGeneratorFactory.createSnapshot(SnapshotGeneratorFactory.java:246)
at liquibase.snapshot.SnapshotGeneratorFactory.createSnapshot(SnapshotGeneratorFactory.java:230)
at liquibase.change.core.DropAllForeignKeyConstraintsChange.generateChildren(DropAllForeignKeyConstraintsChange.java:90)
at liquibase.change.core.DropAllForeignKeyConstraintsChange.generateStatements(DropAllForeignKeyConstraintsChange.java:59)
I am trying to prepare an integration test with test data. I read insert queries from an external file and execute them as native queries. After the insertions I execute select setval('vlan_id_seq', 2000, true );. Here is the entity ID definition:
@Id
@Column(name = "id", unique = true, nullable = false)
@GeneratedValue(strategy = IDENTITY)
private Integer id;
When I try to persist a new entry, I get a Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "vlan_pkey"
Detail: Key (id)=(1) already exists. exception. The sequence's current value is 2000. The column definition was created by the serial macro and is id integer NOT NULL DEFAULT nextval('vlan_id_seq'::regclass).
I executed the native queries in a user transaction, so all test entries are stored in the PostgreSQL database, but it seems that Hibernate does not sync with the sequence. entityManager.flush(); also didn't force a sequence synchronization. It seems that Hibernate does not consult the sequence with @GeneratedValue(strategy = IDENTITY). I use an XA datasource and WildFly 13.
I have now tested another initialization method. I defined an SQL data script (generated with Jailer) in the persistence.xml (javax.persistence.sql-load-script-source) and ended the script with select pg_catalog.setval('vlan_id_seq', (SELECT max(id) FROM vlan), true );. I set a breakpoint before the first persist command and checked the sequence in the PostgreSQL DB: the sequence has the max id value 16. Now persisting works, and the new entry gets the id 17. The script is executed before the entity manager starts, and Hibernate reads the updated sequence while starting. But this solution does not answer my question.
Is there a way to make Hibernate re-read the sequences so that it uses the next value?
With strategy IDENTITY, the ID values come from the database itself (on PostgreSQL, the sequence backing the serial column). By inserting with native SQL and supplying your own id values, you bypass that generator and the sequence is never advanced. So you have two options:
1. Insert via Hibernate itself, which is fairly easy: in your integration test, inject your DAOs and let Hibernate do the insertion for you. This is the recommended approach, since you don't have to re-handle what Hibernate already handles; a rough sketch follows below.
2. Update the sequence yourself whenever you do a manual insert, by advancing its value accordingly, which I do not recommend.
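Here is a rough sketch of the first option, with Vlan, setName(...), and the persistence unit name as placeholders for the real setup; transaction handling is shown resource-local for brevity, while a container-managed (JTA/XA) test would use a UserTransaction instead:

import javax.persistence.EntityManager;
import javax.persistence.Persistence;
import org.junit.Before;

// Sketch: seed the test data through JPA so the IDENTITY column assigns the ids.
// Vlan and setName(...) are hypothetical; adapt to the real entity and test setup.
public class VlanSeedingIT {
    private EntityManager entityManager =
            Persistence.createEntityManagerFactory("test-pu").createEntityManager(); // "test-pu" is a placeholder

    @Before
    public void seedTestData() {
        entityManager.getTransaction().begin();
        Vlan vlan = new Vlan();        // leave the id unset; the database assigns it
        vlan.setName("test-vlan");     // hypothetical field
        entityManager.persist(vlan);
        entityManager.getTransaction().commit();
    }
}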
The TL;DR is that I am not able to delete, using Java, a row previously created with an upsert.
Basically I have a table like this:
CREATE TABLE transactions (
key text PRIMARY KEY,
created_at timestamp
);
Then I execute:
String sql = "update transactions set created_at = toTimestamp(now()) where key = 'test' if created_at = null";
session.execute(sql);
As expected the row is created:
cqlsh:thingleme> SELECT * FROM transactions ;
key | created_at
------+---------------------------------
test | 2018-01-30 16:35:16.663000+0000
But (this is what is making me crazy) if I execute:
sql = "delete from transactions where key = 'test'";
ResultSet resultSet = session.execute(sql);
Nothing happens. I mean: no exception is thrown and the row is still there!
Some other weird stuff:
- If I replace the upsert with a plain insert, then the delete works.
- If I directly run the SQL code (update and delete) using cqlsh, it works.
- If I run this code against an EmbeddedCassandraService, it works (this is very bad, because it means my integration tests stay green!)
My environment:
cassandra: 3.11.1
datastax java driver: 3.4.0
docker image: cassandra:3.11.1
Any idea/suggestion on how to tackle this problem is really appreciated ;-)
I think the issue you are encountering might be explained by the mixing of lightweight transactions (LWTs) (update transactions set created_at = toTimestamp(now()) where key = 'test' if created_at = null) and non-LWTs (delete from transactions where key = 'test').
Cassandra uses timestamps to determine which mutations (deletes, updates) are the most recently applied. When using LWTs, timestamp assignment works differently than when not using LWTs:
Lightweight transactions will block other lightweight transactions from occurring, but will not stop normal read and write operations from occurring. Lightweight transactions use a timestamping mechanism different than for normal operations and mixing LWTs and normal operations can result in errors. If lightweight transactions are used to write to a row within a partition, only lightweight transactions for both read and write operations should be used.
Source: How do I accomplish lightweight transactions with linearizable consistency?
Further complicating things is that, by default, the Java driver uses client timestamps, meaning the write timestamp is determined by the client rather than the coordinating Cassandra node. However, when you use LWTs, the client timestamp is bypassed. In your case, unless you disable client timestamps, your non-LWT queries use client timestamps, while your LWT queries use a timestamp assigned by the Paxos logic in Cassandra. In any case, even if the driver weren't assigning client timestamps, this could still be a problem, because the timestamp assignment logic also differs on the C* side between LWT and non-LWT operations.
To fix this, you could alter your delete statement to include IF EXISTS, i.e.:
delete from transactions where key = 'test' if exists
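With the Java driver from the question, that change would look roughly like this (wasApplied() reports whether the conditional delete actually removed a row):

// Making the delete a lightweight transaction as well, so both the update and
// the delete go through the same Paxos-based timestamp path.
String delete = "delete from transactions where key = 'test' if exists";
ResultSet resultSet = session.execute(delete);
System.out.println("applied: " + resultSet.wasApplied());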
Similar issue from the java driver mailing list
I'm updating old libraries in a legacy system. Right now I'm trying to update Hibernate 3.4.0.GA to 4.3.11.Final; I only needed to change small things in the code, and everything seemed fine. But when I run the system, I receive a "schema "FOO" does not exist" error while executing a query. Trying to isolate the problem, I discovered this happens between Hibernate 3.5.1 and 3.5.2, and found the reason.
When generating the SQL, Hibernate is prefixing functions with an alias, which PostgreSQL then treats as a schema. Here is the difference between the two versions.
protocolo_1 is the alias of the main table; the following is a subquery added via @Formula in Protocolo.java (the schema is also named protocolo).
@Formula
select max (pm2.id) from protocolo.protocolomovimento pm2 where pm2.id_protocolo = id
Hibernate 3.5.1 SQL generated
select max (pm2.id) from protocolo.protocolomovimento pm2 where pm2.id_protocolo = protocolo1_.id
Hibernate 3.5.2 SQL generated
select protocolo_1.max (pm2.id) from protocolo.protocolomovimento pm2 where pm2.id_protocolo = protocolo1_.id
I'm using PostgreSQL 9.4.12 with the corresponding driver and org.hibernate.dialect.PostgreSQLDialect (in these Hibernate versions it is the only dialect available for PostgreSQL).
I found someone with a similar problem here: Why is Hibernate adding schema name to Hsql functions? But I think it is only similar; it's not my case.
Why is Hibernate doing this? How can I fix it?
It looks like Hibernate does not understand the space character between max and ( in the expression max (pm2.id), so it treats max as a column name and prefixes it with the table alias.
Removing the space solves the problem.
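In other words, the mapping would become something like the following; the field name is a placeholder, and the formula is the one quoted above with only the space removed:

// Without the space between max and "(", Hibernate no longer treats max as a
// column name, so it stops prepending the table alias to it.
@Formula("select max(pm2.id) from protocolo.protocolomovimento pm2 where pm2.id_protocolo = id")
private Long ultimoMovimento;   // hypothetical field name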
I have, let's say, two PCs: PC-a and PC-b, which both have the same application installed with Java DB support. From time to time I want to copy the data from the database on PC-a to the database on PC-b and vice versa, so that the two PCs have the same data at all times.
Is there an already-implemented API in the database layer for this (i.e. 1. export/backup the database from PC-a, 2. import/merge the databases on PC-b), or do I have to do this in the SQL layer (manually)?
As you mention in the comments that you want to "merge" the databases, this sounds like you need to write custom code to do it, as presumably there could be conflicts: the same key in both databases, but with different details against it, for example.
In short: you can't do this without some work on your side. SalesLogix fixed this problem by giving everything a site code, so your table would look like this:
Customer:
    SiteCode varchar,
    CustomerID varchar,
    ....
    primary key (SiteCode, CustomerID)
So now you would take your databases and match up each record by primary key. Where there are conflicts, you would have to provide a report to the end user on what data was different.
Say machine1:
   SiteCode | CustomerID | CustName  | phone        | email
1  XXX      | 0001       | Customer1 | 555.555.1212 | darth@example.com
and on machine2:
   SiteCode | CustomerID | CustName  | phone        | email
2  XXY      | 0001       | customer2 | 555.555.1213 | darth@nowhere.com
3  XXX      | 0001       | customer1 | 555.555.1212 | darth@nowhere.com
When performing a resolution:
Records 1 and 3 are in conflict, because the PK matches but the data doesn't (the email is different).
Record 2 is unique, and can freely exist in both databases.
There is NO way to do this automatically without error or data corruption or referential integrity issues.
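A rough sketch of what that matching step could look like in plain JDBC, following the hypothetical Customer table above; the "conflict" rule is simplified to the email comparison from the example:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

// Sketch: compare two Customer tables by the composite PK (SiteCode, CustomerID)
// and collect rows present in both databases whose data differs, so they can be
// reported to the end user instead of being merged blindly.
public class CustomerMergeCheck {
    public static List<String> findConflicts(Connection dbA, Connection dbB) throws Exception {
        List<String> conflicts = new ArrayList<>();
        String selectAll = "select SiteCode, CustomerID, email from Customer";
        String lookup = "select email from Customer where SiteCode = ? and CustomerID = ?";
        try (Statement stmtA = dbA.createStatement();
             ResultSet a = stmtA.executeQuery(selectAll);
             PreparedStatement stmtB = dbB.prepareStatement(lookup)) {
            while (a.next()) {
                stmtB.setString(1, a.getString("SiteCode"));
                stmtB.setString(2, a.getString("CustomerID"));
                try (ResultSet b = stmtB.executeQuery()) {
                    if (b.next() && !a.getString("email").equals(b.getString("email"))) {
                        // Same primary key on both machines, different data: a conflict.
                        conflicts.add(a.getString("SiteCode") + "/" + a.getString("CustomerID"));
                    }
                }
            }
        }
        return conflicts;
    }
}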
I guess you are using Java DB (aka Derby) - in which case, assuming you just can't use a single instance, you can do a backup/restore.
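A minimal sketch of the backup side, using Derby's built-in online backup procedure (the JDBC URL and backup path are placeholders); the copy written to the backup directory can then be opened on the other PC with Derby's restoreFrom/createFrom connection attributes:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

// Sketch: run Derby's online backup system procedure from the application.
// Database name and backup path are assumptions about the local setup.
public class DerbyBackup {
    public static void backup() throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:derby:myAppDb");
             CallableStatement cs = con.prepareCall("CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE(?)")) {
            cs.setString(1, "/backups/myAppDb");
            cs.execute();
        }
    }
}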
Why don't you keep the database on one PC and have all other PCs request data from that host PC?