I have specified the IDENTITY generation strategy on my entity class for the primary key of a table in an AS400 system.
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name = "SEQNO")
private Integer seqNo;
The table's primary key column is defined as GENERATED ALWAYS AS IDENTITY in the database:
SEQNO BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY(START WITH 1, INCREMENT BY 1)
My understanding of the IDENTITY strategy is that it leaves primary key generation to the table itself.
The problem I am facing is that in one environment, inserting a record into the table somehow fails with [SQL0803] Duplicate Key value specified.
Now there are a couple of questions in my mind:
Is my understanding of @GeneratedValue(strategy=GenerationType.IDENTITY) correct?
In which scenario would the table generate a duplicate key?
I figured out that there are sequence values missing in the table, i.e. after 4 the values up to 20 are missing, and I do not know whether someone manually deleted them, but could this be related to the duplicate key?
YES. IDENTITY means use in-datastore features like AUTO_INCREMENT, SERIAL, IDENTITY. So any INSERT should omit the IDENTITY column, and the JPA provider will pull the generated value back (into memory, for that object) after the INSERT is executed.
You should never get a duplicate key. Check the INSERT statement being used.
Is some external process using the same table? Use the logs to see the SQL and work it out.
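As a minimal sketch of the IDENTITY flow described above (the entity type and getter here are illustrative, not from the question), assuming a standard JPA EntityManager:

import javax.persistence.EntityManager;
import javax.persistence.EntityTransaction;

public class SeqNoInsertSketch {
    // Persist without setting seqNo; the database generates SEQNO and the
    // JPA provider reads it back into the entity after the INSERT.
    static Integer insert(EntityManager em, MyEntity entity) {
        EntityTransaction tx = em.getTransaction();
        tx.begin();
        em.persist(entity);       // the INSERT omits SEQNO
        tx.commit();
        return entity.getSeqNo(); // populated by the provider after the INSERT
    }
}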
I don't use JPA, but what you have seems reasonable to me.
As far as the DB2 for i side...
Are you sure you're getting the duplicate key error on the identity column? Are there no other columns defined as unique?
It is possible to have a duplicate key error on an identity column.
What you need to realize is that the next identity value is stored in the table object, not calculated on the fly. When I started using identities, I got bitten by a CMS package that routinely used CPYF to move data between newly created versions of a table. The new version of the table would have a next identity value of 1, even though there might be 100K records in it. (The package has since gotten smarter. :) But the point remains that CPYF, for instance, doesn't play nice with identity columns.
Additionally, it is possible to override the GENERATED ALWAYS via the OVERRIDING SYSTEM VALUE or OVERRIDING USER VALUE clauses of the INSERT statement. But inserting with an override has no effect on the stored next identity value. I suppose one could consider CPYF as using OVERRIDING SYSTEM VALUE.
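For illustration, a hedged JDBC sketch of such an override (the table and column names here are made up, not from the question):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class OverrideIdentitySketch {
    // Supplies an explicit value for a GENERATED ALWAYS identity column.
    // Note: this does not advance the stored "next identity" value.
    static void insertWithOverride(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(
                "INSERT INTO MYLIB.MYTABLE (SEQNO, DATA) " +
                "OVERRIDING SYSTEM VALUE VALUES (42, 'copied row')");
        }
    }
}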
Now, as far as your missing identities...
1. Data was deleted
2. Data was copied in with overridden identities
3. Somebody ran ALTER TABLE <...> ALTER COLUMN <...> RESTART WITH
4. You lost the use of some values
Let me explain #4. For performance reasons, DB2 for i by default will cache 20 identity values for a process to use. So if you have two processes adding records, one will get values 1-20 and the other 21-40. This allows both processes to insert concurrently. However, if process 1 only inserts 10 records, then identity values 11-20 will be lost. If you absolutely must have continuous identity values, then specify NO CACHE during the creation of the identity:
create table test (
  myid int generated always as identity
    (start with 1, increment by 1, no cache)
)
Finally, with respect to the caching of identity values: while confirming a few things for this answer, I noticed that using ALTER TABLE to add a new column seemed to cause a loss of the cached values. I inserted 3 rows, did the ALTER TABLE, and the next row got an identity value of 21.
Related
I'm trying to insert a new record using UpdatableRecords in jOOQ 3.4.2. The pattern is extremely concise and pleasant to use, except that the INSERT reads null values as no value and ignores default values or a generated index. How can I use the UpdatableRecord to do an insert that respects default values and generated indexes?
Here's my table:
CREATE TABLE aragorn_sys.org_person (
org_person_id SERIAL NOT NULL,
first_name CHARACTER VARYING(128) NOT NULL,
last_name CHARACTER VARYING(128) NOT NULL,
created_time TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp NOT NULL,
created_by_user_id INTEGER,
last_modified_time TIMESTAMP WITH TIME ZONE,
last_modified_by_user_id INTEGER,
org_id INTEGER NOT NULL,
CONSTRAINT PK_org_person PRIMARY KEY (org_person_id)
);
Note my primary key and default values. Now here's my jOOQ code:
// orgPerson represents a POJO filled with my values to be inserted and null for everything else
// Note that orgPerson.orgPersonId is null
OrgPersonRecord orgPersonRecord = create.newRecord( ORG_PERSON, orgPerson );
Integer orgPersonId = create.executeInsert( orgPersonRecord );
But when I run this, I get the error null value in column "org_person_id" violates not-null constraint.
I noticed the jOOQ docs say that calling newRecord automatically sets all the internal "changed" flags to true on the UpdatableRecord. So then I tried this:
// orgPerson represents a POJO filled with my values to be inserted and null for everything else
// Note that orgPerson.orgPersonId is null
OrgPersonRecord orgPersonRecord = create.newRecord( ORG_PERSON, orgPerson );
orgPersonRecord.changed( ORG_PERSON.ORG_PERSON_ID, false );
orgPersonRecord.changed( ORG_PERSON.CREATED_TIME, false );
orgPersonRecord.insert();
Integer orgPersonId = orgPersonRecord.getOrgPersonId();
But that gives me the error ERROR: duplicate key value violates unique constraint "pk_org_person". And when I do this repeatedly, the values seem to keep increasing by 1. This doesn't really make sense to me, but my greater question is: Is there a good way I can do an INSERT based on my object values, or better yet, simply include only the non-null columns?
I saw JOOQ ignoring database columns with default values, but that doesn't seem to resolve this. Any recommendations on the most concise way to handle this?
By the way, jOOQ has been fantastic to work with so far. Lukas, thank you for this awesome tool!
UPDATE #1:
The "not null issue" is addressed by Lukas's answer below, and that's an easy fix.
For the duplicate primary keys, I am definitely not confusing INSERT with UPDATE. When I run the above code (slight update since original post), jOOQ seems to arbitrarily pick a "starting" primary key value for OrgPersonId. For example, when I first load up my environment, jOOQ might start with "11" for OrgPersonId.
Then, when I do an INSERT, jOOQ will attempt to supply a value of "11" for OrgPersonId, I'll get the ERROR: duplicate key value and the INSERT will fail. If I then repeat the INSERT, jOOQ uses "12", then "13". It succeeds or fails based on whether that ID is available, but it's not "starting" with the right ID.
The manual (http://www.jooq.org/doc/3.4/manual/sql-execution/crud-with-updatablerecords/identity-values/) says that if you're using jOOQ's code generator, the above table will generate an org.jooq.UpdatableRecord with an IDENTITY column. This information is used by jOOQ internally to update IDs after calling store().
UPDATE #2:
Ok, I just tried the generated query directly in Postgres and it fails there, too, with the same issue. So, clearly this is a Postgres issue and not a jOOQ issue. I'll post the final resolution on that when I find it in case anyone else runs into this.
UPDATE #3:
Issue has been resolved. We use FlywayDB (another awesome tool) to automate our database schema migration, and we had a bunch of INSERT statements in our Flyway scripts that manually specified the id value. This was convenient because we wanted to create a bunch of dummy data and wanted to guarantee the right foreign key relationships.
But manually specifying the primary key value does not advance the Postgres sequence! Hence, we had to cycle through the Postgres sequence before (correctly operating) jOOQ would get the right sequence value.
The solution was to remove all the manual primary key values from our demo data migration scripts.
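For reference, a hedged plain-JDBC sketch of the other way out (keeping the seeded ids and advancing the sequence to match instead), assuming the column uses its default SERIAL-generated sequence:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class SyncSequenceSketch {
    // Bumps the sequence behind org_person_id up to MAX(org_person_id), so the
    // next generated value no longer collides with the manually inserted rows.
    static void syncSequence(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute(
                "SELECT setval(pg_get_serial_sequence('aragorn_sys.org_person', 'org_person_id'), "
                + "(SELECT COALESCE(MAX(org_person_id), 1) FROM aragorn_sys.org_person))");
        }
    }
}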
violates not-null constraint
The first part that you're describing is a flaw (#3582), which is related to a previous issue (#2700), which enforced storing null values loaded from POJOs into jOOQ Records for database columns that are NOT NULL. The fix will be in jOOQ 3.5.0, 3.4.3, 3.3.4, and 3.2.7
duplicate key value violates unique constraint "pk_org_person"
The second part is probably caused by the fact that you are really loading an existing record and then calling executeInsert() on it (note the Insert in the name: it will always execute an INSERT statement). You might want to call executeUpdate() instead.
I have two tables, user and userinfo. The userinfo table contains a user_id column (the id of the user table) which has a UNIQUE constraint.
Now I have two users (primaryUser and secondaryUser) which have records in the user and userinfo tables.
The primaryInfo object contains primaryUser's id and the secondaryInfo object contains secondaryUser's id.
I want to swap the userinfo data of primaryUser to secondaryUser and vice versa. I am doing it like this:
primaryInfo.setUserId(secondaryUser.getId());
secondaryInfo.setUserId(primaryUser.getId());
session.update(primaryInfo);
session.update(secondaryInfo);
but when committing the transaction it gives an error like
ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper:147 ERROR: duplicate key value violates unique constraint "user_infos_unique_user"
Detail: Key (ui_user_id)=(52560087) already exists.
Can you please tell me how to do this? Thanks.
You can use the DEFERRABLE and INITIALLY DEFERRED properties on the constraint and update both records in a single transaction. DEFERRED means the constraint will not be evaluated until the transaction is committed, at which time it should be valid again.
However, I have not figured out how to use Hibernate annotations to specify the DEFERRED properties, so you will have to use LiquiBase to maintain the database schema (not a bad idea anyway), or use "raw" SQL, which is not so good an idea.
See this question for more about the annotations (alas I cannot use LiquiBase on the project I ask about there.)
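To make that concrete, a rough sketch with raw SQL and JDBC (table and column names are guessed from the error message; the constraint DDL would normally live in a LiquiBase changeset):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DeferredSwapSketch {
    // Assumes the constraint was recreated as:
    //   ALTER TABLE user_infos DROP CONSTRAINT user_infos_unique_user;
    //   ALTER TABLE user_infos ADD CONSTRAINT user_infos_unique_user
    //       UNIQUE (ui_user_id) DEFERRABLE INITIALLY DEFERRED;
    // With the check deferred, both updates run in one transaction and
    // uniqueness is only verified at commit.
    static void swap(Connection conn, long primaryInfoId, long secondaryInfoId,
                     long primaryUserId, long secondaryUserId) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE user_infos SET ui_user_id = ? WHERE user_info_id = ?")) {
            ps.setLong(1, secondaryUserId);   // primaryInfo now points at the secondary user
            ps.setLong(2, primaryInfoId);
            ps.executeUpdate();
            ps.setLong(1, primaryUserId);     // secondaryInfo now points at the primary user
            ps.setLong(2, secondaryInfoId);
            ps.executeUpdate();
        }
        conn.commit();                        // constraint evaluated here
    }
}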
For an Oracle database you can create the unique constraint with the special attributes DEFERRABLE INITIALLY DEFERRED:
ALTER TABLE table_name ADD CONSTRAINT constraint_name UNIQUE (table_field) DEFERRABLE INITIALLY DEFERRED
A possible trick to work around the unique constraint is to do 3 updates (see the sketch after this list):
update row A with a value for the column that no other row can use. NULL may be used if not forbidden by a not-NULL constraint, otherwise 0 if not forbidden and assuming it's an integer, otherwise a negative value.
then update row B with its final value (the previous value from row A)
then update row A with its real final value (the previous value from row B)
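A minimal JDBC sketch of those three updates (table and column names are guesses based on the error message; -1 is used as the temporary value no real row holds):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ThreeStepSwapSketch {
    static void swap(Connection conn, long infoAId, long infoBId,
                     long userIdA, long userIdB) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE user_infos SET ui_user_id = ? WHERE user_info_id = ?")) {
            ps.setLong(1, -1L);      // step 1: park row A on a value nobody uses
            ps.setLong(2, infoAId);
            ps.executeUpdate();

            ps.setLong(1, userIdA);  // step 2: row B takes row A's old value
            ps.setLong(2, infoBId);
            ps.executeUpdate();

            ps.setLong(1, userIdB);  // step 3: row A takes row B's old value
            ps.setLong(2, infoAId);
            ps.executeUpdate();
        }
        conn.commit();
    }
}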
As the error shows:
there is a unique constraint on the userinfo table, which means the user id must be unique. So if you want to swap the two user ids, you have to perform the following steps (a rough sketch of the DDL follows the list):
1. Remove the constraint
2. Swap the two ids (same code as you currently have)
3. Add the constraint back
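A rough sketch of steps 1 and 3 (constraint and column names taken from the error message; step 2 is the existing Hibernate code, run while the constraint is absent, so be aware that nothing protects the table from other writers in that window):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ConstraintToggleSketch {
    static void dropConstraint(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("ALTER TABLE user_infos DROP CONSTRAINT user_infos_unique_user");
        }
    }

    static void reAddConstraint(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("ALTER TABLE user_infos ADD CONSTRAINT user_infos_unique_user UNIQUE (ui_user_id)");
        }
    }
}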
The problem
I have a table for some data that has an ID column of type integer (which is also the primary key).
When a new data entry is added to the table, it should get a new ID; the ID is not known by the application that inserts the row but should be assigned by the database. For example, the IDs should be assigned as 0, 1, 2, ...
Assuming that I have all the other data for the new entry, how would I do the insert? Normally:
insert into T values(123, 'data');
But now I don't know what to put instead of 123. Would you create some kind of global variable NEXTID in the database that provides the IDs and query/update this value each time before inserting into T?
The questions
How should this kind of problem be handled? A concurrency-safe solution is preferable.
How do I achieve this with Java/myBatis? I have a Java class that corresponds to the table structure, and a new object should be added to the database, getting a new ID automatically.
Update
What I searched for was auto-increment.
Is there a standard SQL way (database independent) of declaring a column as auto-increment? I am using Apache Derby and GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1) is suggested here.
What does the insert into a table that contains auto-increment columns look like?
What is the best way to get the created auto-increment value after an insert when simultaneous access to the database is possible?
I'll accept an answer that includes explanation and SQL instructions for declaration and insertion :)
If you are using SQL Server, making the column an identity type will solve the problem, something like this:
ALTER TABLE [dbo].[T] ADD [Column1] INT IDENTITY (1, 1)
For others like Oracle you can go for a simple database sequence.
In MySQL you can use
ALTER TABLE table_name ADD id INT AUTO_INCREMENT PRIMARY KEY;
This auto-increments the id column; you don't have to supply it in the insert (note that an AUTO_INCREMENT column must be defined as a key).
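For the Java side of the question (getting the generated value back safely when several clients insert at the same time), a minimal JDBC sketch; the Derby DDL and the column name data are assumptions:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class AutoIncrementInsertSketch {
    // Assumed table: CREATE TABLE T (
    //   id INT NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
    //   data VARCHAR(100), PRIMARY KEY (id))
    static int insertAndGetId(Connection conn, String data) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO T (data) VALUES (?)",   // the id column is omitted
                Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, data);
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                keys.next();
                return keys.getInt(1);  // the key generated for this insert,
                                        // even with concurrent inserts
            }
        }
    }
}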
I have a table "groups" with four columns. The database is postgres and the group_id column is a Serial. So in reality it is an Integer with a default to get the next value.
I have a use case where I need to use #SQLInsert (using the normal persist method is not an option), but I can't get it to work with the default. Here is what I have:
#SQLInsert(sql="INSERT INTO groups (group_id, parent_id, group_name, version) VALUES (DEFAULT,?,?,?)")
I set the entity attributes to values where group_id and version are null, and the other two are correctly populated. group_id is not nullable in the DB, version can be null.
I get this exception:
WARNING: SQL Error: 0, SQLState: 22023
SEVERE: The column index is out of range: 4, number of columns: 3.
SEVERE: Could not synchronize database state with session
If I enter the following DML directly on the database, it works:
INSERT INTO groups (group_id, parent_id, group_name, version) VALUES (DEFAULT, 3, 'abcd', null);
Is there some way to make the same thing happen using @SQLInsert?
If the class members you want to save are not reference types, they cannot hold a null value. That may be the cause of the failure to synchronize with the database records. Try to use reference types like Integer and Double, and make sure that the default values are applied with a direct insert query.
Another thing about your error messages: it may be that the default value is out of the range of the Java type you are using for that column. Check that the default value is in range. If a value out of range is set for your class member, it can't be synced.
EDIT: Sorry, the second part is not true in this case.
So the short answer is "it can't be done this way". Despite quite a few places I've seen this asked, the Hibernate people have not provided for this use case.
My solution was to decouple the Postgres sequence from the table. That is, I removed the default constraint that selects the nextval from the sequence and populates one of the two primary key fields.
I then manually grab the nextval using a native query (yep, forced to un-abstract the database) and use that value to manually populate the primary key field. It works. It's kludgy, but I might use it more often. It certainly is a lot more understandable as to what is happening than using the pure ORM methods. This can be debugged without a wizard's hat. :)
import java.math.BigInteger;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.Query;

public class ... {

    @PersistenceContext(unitName = "persistence_unit")
    private EntityManager em;

    ...

    void myMethod() {
        ...
        // Grab the next sequence value manually, since the column default was removed
        Query q = em.createNativeQuery("SELECT nextval('groups_group_id_seq')");
        BigInteger groupId = (BigInteger) q.getSingleResult();
        BigInteger parentId = methodToGetParentId();
        GroupsPK gpk = new GroupsPK(groupId, parentId);
        Groups grps = new Groups(gpk, "other parameters");
        ...
    }
}
I have a table (in an Oracle DB) containing two columns: a VARCHAR unique key and a NUMBER unique key generated from a sequence.
I need my Java code to constantly (and in parallel) add records to this table whenever it gets a new VARCHAR key, returning the newly generated NUMBER key, or to return the existing NUMBER key when it gets an existing VARCHAR (it doesn't insert then; that would of course throw an exception due to the unique key violation).
Such a procedure would be executed by many (Java) clients working in parallel.
Hope my English is understandable :)
What is the best (maybe using PL/SQL block instead of Java code...) way to do it?
I do not think you can do better than
SELECT the_number FROM the_table where the_key = :key
if found, return it
if not found, INSERT INTO the_table (the_key, the_number) VALUES (:key, the_seq.NEXTVAL) RETURNING the_number INTO :number, and COMMIT
this could raise an ORA-00001 (unique constraint violated)
if the timing is unlucky. In this case, SELECT again.
Not sure if JDBC supports RETURNING, so you might need to wrap it into a stored procedure (also saves database roundtrips).
You can use an index-organized table (with the_key as primary key), makes the lookup faster.
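A hedged plain-JDBC sketch of the select-then-insert-with-retry approach (table, column, and sequence names as used above; transaction handling is simplified); instead of RETURNING it simply re-selects after the insert:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class KeyLookupSketch {
    static long getOrCreateNumber(Connection conn, String key) throws SQLException {
        conn.setAutoCommit(false);
        while (true) {
            // 1. Try to find an existing row.
            try (PreparedStatement sel = conn.prepareStatement(
                    "SELECT the_number FROM the_table WHERE the_key = ?")) {
                sel.setString(1, key);
                try (ResultSet rs = sel.executeQuery()) {
                    if (rs.next()) {
                        return rs.getLong(1);
                    }
                }
            }
            // 2. Not found: insert with the next sequence value, commit, then
            //    loop back and re-select the number just generated.
            try (PreparedStatement ins = conn.prepareStatement(
                    "INSERT INTO the_table (the_key, the_number) VALUES (?, the_seq.NEXTVAL)")) {
                ins.setString(1, key);
                ins.executeUpdate();
                conn.commit();
            } catch (SQLException e) {
                conn.rollback();
                // ORA-00001: another session won the race; re-select its number.
                if (e.getErrorCode() != 1) {
                    throw e;
                }
            }
        }
    }
}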