I'm using a unique constraint to avoid duplicates, except for null values, because this column can remain null (it's not a mandatory field, but it helps in searches, e.g. searching by email, etc.).
In the above situation, is it right to choose a unique constraint or not?
Alternative
As unique allows only one null value, is it possible to generate different default values for the unique constraint, i.e. a value unique to each row?
You tagged the question with both sqlite and mysql, so I'll cover both.
SQLite
In SQLite, a UNIQUE constraint will work as you want.
The documentation for UNIQUE constraint says:
For the purposes of UNIQUE constraints, NULL values are considered distinct from all other values, including other NULLs.
The documentation for CREATE INDEX says:
If the UNIQUE keyword appears between CREATE and INDEX then duplicate index entries are not allowed. Any attempt to insert a duplicate entry will result in an error. For the purposes of unique indices, all NULL values are considered different from all other NULL values and are thus unique. This is one of the two possible interpretations of the SQL-92 standard (the language in the standard is ambiguous) and is the interpretation followed by PostgreSQL, MySQL, Firebird, and Oracle. Informix and Microsoft SQL Server follow the other interpretation of the standard.
However, in most other databases, the columns of a UNIQUE constraint cannot be NULL, so I would recommend using a UNIQUE INDEX instead, for consistency, and so as not to confuse people.
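For example (a sketch; the person table and its email column are placeholders, not from the question):
CREATE UNIQUE INDEX person_email ON person (email);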
MySQL
In MySQL, a UNIQUE constraint will work as you want.
The documentation for Unique Indexes says:
A UNIQUE index permits multiple NULL values for columns that can contain NULL.
A UNIQUE KEY is a synonym for a UNIQUE INDEX.
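With the same hypothetical person table and unique index as in the SQLite section above, both databases behave like this:
-- Both succeed: NULL values never collide with each other
INSERT INTO person (email) VALUES (NULL);
INSERT INTO person (email) VALUES (NULL);
-- The second of these fails with a duplicate key error
INSERT INTO person (email) VALUES ('a@example.com');
INSERT INTO person (email) VALUES ('a@example.com');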
SQL Server
As mentioned in the SQLite documentation, Microsoft SQL Server follows a different interpretation of NULL handling for UNIQUE indexes.
The documentation for UNIQUE INDEX says:
Columns that are used in a unique index should be set to NOT NULL, because multiple null values are considered duplicates when a unique index is created.
To work around that, use a filtered index, e.g.
CREATE UNIQUE INDEX Person_Email
ON Person ( Email )
WHERE Email IS NOT NULL;
You can use a trick with generated columns. Something like this:
alter table t add unique_val varchar(255) as
( concat(coalesce(col, ''), ':', coalesce(case when col is null then pk end, '')) ) unique;
Here pk is the primary key column; MySQL also requires an explicit data type for the generated column, hence the varchar(255).
This replaces the NULL values with something known to be unique on each row: a non-NULL col yields 'col:', while a NULL col yields ':pk'. Note the coalesce around the case expression: concat() returns NULL as soon as any argument is NULL, so without it every row with a non-NULL col would end up with a NULL generated value and escape the uniqueness check.
Related
I have specified the IDENTITY sequence generation strategy on my entity class for the primary key of a table in an AS400 system.
@Id
@GeneratedValue(strategy=GenerationType.IDENTITY)
@Column(name = "SEQNO")
private Integer seqNo;
The table's primary key column is defined as GENERATED ALWAYS AS IDENTITY in database.
SEQNO BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY(START WITH 1, INCREMENT BY 1)
My understanding of IDENTITY strategy is that it will leave the primary key generation responsibility to the table itself.
The problem that I am facing is that somehow in one environment, while inserting record in the table it gives me [SQL0803] Duplicate Key value specified.
Now there are couple of questions in my mind:
Is my understanding correct for @GeneratedValue(strategy=GenerationType.IDENTITY)?
In which scenario table will generate Duplicate key?
I found that there are sequence values missing in the table, i.e. after 4 the values up to 20 are missing, and I do not know whether someone deleted them manually, but could this be related to the duplicate key generation?
YES. IDENTITY means using in-datastore features like "AUTO_INCREMENT", "SERIAL", or "IDENTITY". So any INSERT should omit the IDENTITY column, and the provider will pull the generated value back (into memory, for that object) after the INSERT is executed (see the sketch after these points).
Should never get a duplicate key. Check the INSERT statement being used.
Some external process using the same table? Use the logs to see SQL and work it out.
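For example, with the SEQNO column from the question, the generated SQL should look something like the first statement below (a sketch; the table name and the name column are assumed):
-- Correct: SEQNO is omitted, DB2 for i generates it
INSERT INTO mytable (name) VALUES ('example');
-- Fails for a GENERATED ALWAYS column (unless an OVERRIDING clause is used)
INSERT INTO mytable (seqno, name) VALUES (5, 'example');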
I don't use JPA, but what you have seems reasonable to me.
As far as the DB2 for i side...
Are you sure you're getting the duplicate key error on the identity column? Are there no other columns defined as unique?
It is possible to have a duplicate key error on an identity column.
What you need to realize is that the next identity value is stored in the table object; not calculated on the fly. When I started using Identities, I got bit by a CMS package that routinely used CPYF to move data between newly created versions of a table. The new version of the table would have a next identity value of 1, even though there might be 100K records in it. (the package has since gotten smarter :) But the point remains that CPYF for instance, doesn't play nice with identity columns.
Additionally, it is possible to override the GENERATED ALWAYS via the OVERRIDING SYSTEM VALUE or OVERRIDING USER VALUE clauses of the INSERT statement. But inserting with an override has no effect on the stored next identity value. I suppose one could consider CPYF as using OVERRIDING SYSTEM VALUE.
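For illustration, such an override looks like this (a sketch; the orders table and its columns are hypothetical):
-- Supplying an explicit value for a GENERATED ALWAYS identity column
-- requires OVERRIDING SYSTEM VALUE, and does NOT advance the stored
-- next-identity value.
INSERT INTO orders (order_id, customer)
  OVERRIDING SYSTEM VALUE
  VALUES (42, 'ACME');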
Now, as far as your missing identities...
Data was deleted
Data was copied in with overridden identities
Somebody ran ALTER TABLE <...> ALTER COLUMN <...> RESTART WITH
You lost the use of some values
Let me explain #4. For performance reasons, DB2 for i will by default cache 20 identity values for a process to use. So if you have two processes adding records, one will get values 1-20, the other 21-40. This allows both processes to insert concurrently. However, if process 1 only inserts 10 records, then identity values 11-20 will be lost. If you absolutely must have continuous identity values, then specify NO CACHE when creating the identity.
create table test (
  myid int generated always
    as identity
    (start with 1, increment by 1, no cache)
)
Finally, with respect to the caching of identity values. While confirming a few things for this answer, I noticed that the use of ALTER TABLE to add a new column seemed to cause a loss of the cached values. I inserted 3 rows, did the alter table and the next row got an identity value of 21.
I'm trying to insert a new record using UpdatableRecords in jOOQ 3.4.2. The pattern is extremely concise and pleasant to use, except that the INSERT reads null values as no value and ignores default values or a generated index. How can I use the UpdatableRecord to do an insert that respects default values and generated indexes?
Here's my table:
CREATE TABLE aragorn_sys.org_person (
org_person_id SERIAL NOT NULL,
first_name CHARACTER VARYING(128) NOT NULL,
last_name CHARACTER VARYING(128) NOT NULL,
created_time TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp NOT NULL,
created_by_user_id INTEGER,
last_modified_time TIMESTAMP WITH TIME ZONE,
last_modified_by_user_id INTEGER,
org_id INTEGER NOT NULL,
CONSTRAINT PK_org_person PRIMARY KEY (org_person_id)
);
Note my primary key and default values. Now here's my jOOQ code:
// orgPerson represents a POJO filled with my values to be inserted and null for everything else
// Note that orgPerson.orgPersonId is null
OrgPersonRecord orgPersonRecord = create.newRecord( ORG_PERSON, orgPerson );
Integer orgPersonId = create.executeInsert( orgPersonRecord );
But when I run this, I get the error null value in column "org_person_id" violates not-null constraint.
I noticed the jOOQ docs say that calling newRecord automatically sets all the internal "changed" flags to true on the UpdatableRecord. So then I tried this:
// orgPerson represents a POJO filled with my values to be inserted and null for everything else
// Note that orgPerson.orgPersonId is null
OrgPersonRecord orgPersonRecord = create.newRecord( ORG_PERSON, orgPerson );
orgPersonRecord.changed( ORG_PERSON.ORG_PERSON_ID, false );
orgPersonRecord.changed( ORG_PERSON.CREATED_TIME, false );
orgPersonRecord.insert();
Integer orgPersonId = orgPersonRecord.getOrgPersonId();
But that gives me the error ERROR: duplicate key value violates unique constraint "pk_org_person". And when I do this repeatedly, the values seem to keep increasing by 1. This doesn't really make sense to me, but my greater question is: Is there a good way I can do an INSERT based on my object values, or better yet, simply include only the non-null columns?
I saw JOOQ ignoring database columns with default values, but that doesn't seem to resolve this. Any recommendations on the most concise way to handle this?
By the way, jOOQ has been fantastic to work with so far. Lukas, thank you for this awesome tool!
UPDATE #1:
The "not null issue" is addressed by Lukas's answer below, and that's an easy fix.
For the duplicate primary keys, I am definitely not confusing INSERT with UPDATE. When I run the above code (slight update since original post), jOOQ seems to arbitrarily pick a "starting" primary key value for OrgPersonId. For example, when I first load up my environment, jOOQ might start with "11" for OrgPersonId.
Then, when I do an INSERT, jOOQ will attempt to supply a value of "11" for OrgPersonId, I'll get the ERROR: duplicate key value and the INSERT will fail. If I then repeat the INSERT, jOOQ uses "12", then "13". It succeeds or fails based on whether that ID is available, but it's not "starting" with the right ID.
The manual (http://www.jooq.org/doc/3.4/manual/sql-execution/crud-with-updatablerecords/identity-values/) says: "If you're using jOOQ's code generator, the above table will generate a org.jooq.UpdatableRecord with an IDENTITY column. This information is used by jOOQ internally, to update IDs after calling store()."
UPDATE #2:
Ok, I just tried the generated query directly in Postgres and it fails there, too, with the same issue. So, clearly this is a Postgres issue and not a jOOQ issue. I'll post the final resolution on that when I find it in case anyone else runs into this.
UPDATE #3:
The issue has been resolved. We use FlywayDB (another awesome tool) to automate our database schema migration, and we had a bunch of INSERT statements in our Flyway scripts that manually inserted the id number. This was convenient because we wanted to create a bunch of dummy data and guarantee the right foreign key relationships.
But manually specifying the primary key increment does not advance the Postgres sequence! Hence, we had to cycle through the Postgres sequence before (correctly operating) jOOQ would get the right sequence value.
The solution is to remove all the manual inserts of primary keys from our demo data migration scripts.
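For reference, the resynchronization itself can be done in one statement (a sketch using the table from above; pg_get_serial_sequence resolves the sequence that backs the SERIAL column):
-- Advance the sequence to the current maximum key, so the next
-- nextval() returns an unused value.
SELECT setval(pg_get_serial_sequence('aragorn_sys.org_person', 'org_person_id'),
              (SELECT max(org_person_id) FROM aragorn_sys.org_person));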
violates not-null constraint
The first part that you're describing is a flaw (#3582), which is related to a previous issue (#2700), which enforced storing null values loaded from POJOs into jOOQ Records for database columns that are NOT NULL. The fix will be in jOOQ 3.5.0, 3.4.3, 3.3.4, and 3.2.7.
duplicate key value violates unique constraint "pk_org_person"
The second part is probably caused by the fact that you are really loading an existing record and then calling executeInsert() on it (observe the INSERT, which will always execute an INSERT statement). You might want to call executeUpdate() instead.
I have 2 tables, user and userinfo. The userinfo table contains a user_id column (the id of the user table) which has a UNIQUE constraint.
Now I have 2 users (primaryUser and secondaryUser) which have records in the user and userinfo tables.
The primaryInfo object contains the primaryUserId, and the secondaryInfo object contains the secondaryUserId.
I want to swap the userinfo data of primaryUser to secondaryUser and vice versa. I am doing it like this:
primaryInfo.setUserId(secondaryUser.getId());
secondaryInfo.setUserId(primaryUser.getId());
session.update(primaryInfo);
session.update(secondaryInfo);
but when committing the transaction it gives an error like:
ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper:147 ERROR: duplicate key value violates unique constraint "user_infos_unique_user"
Detail: Key (ui_user_id)=(52560087) already exists.
Can you please tell me how to do this? Thanks.
You can use the DEFERRABLE and INITIALLY DEFERRED properties on the constraint and update both records in a single transaction. DEFERRED means the constraint will not be evaluated until the transaction is committed -- at which time it should be valid again.
However: I have not figured out how to use Hibernate annotations to specify the DEFERRED properties, so you will have to use LiquiBase to maintain the database schema (not a bad idea anyway.) (Or use "raw" SQL which is not so good an idea.)
See this question for more about the annotations (alas I cannot use LiquiBase on the project I ask about there.)
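In raw SQL, a PostgreSQL version might look like this (a sketch; the table, constraint, and column names are inferred from the error message):
ALTER TABLE user_infos
  DROP CONSTRAINT user_infos_unique_user;
ALTER TABLE user_infos
  ADD CONSTRAINT user_infos_unique_user
    UNIQUE (ui_user_id) DEFERRABLE INITIALLY DEFERRED;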
For an Oracle database you can create the unique constraint with the special attributes 'DEFERRABLE INITIALLY DEFERRED':
ALTER TABLE table_name ADD CONSTRAINT constraint_name UNIQUE (table_field) DEFERRABLE INITIALLY DEFERRED
A possible trick to work around the unique constraint is to do 3 updates (see the sketch after the list):
update row A with a value for the column that no other row can use. NULL may be used if not forbidden by a not-NULL constraint, otherwise 0 if not forbidden and assuming it's an integer, otherwise a negative value.
then update row B with its final value (the previous value from row A)
then update row A with its real final value (the previous value from row B)
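A sketch in plain SQL, using the names from the error message; the two ids are illustrative, and -1 is assumed to be a value no real row can have:
-- Step 1: park row A on the impossible value
UPDATE user_infos SET ui_user_id = -1 WHERE ui_user_id = 52560087;
-- Step 2: row B takes its final value (row A's old value)
UPDATE user_infos SET ui_user_id = 52560087 WHERE ui_user_id = 52560088;
-- Step 3: row A takes its final value (row B's old value)
UPDATE user_infos SET ui_user_id = 52560088 WHERE ui_user_id = -1;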
As the error shows, there is a unique constraint on the userinfo table; that means the user id must be unique. So if you want to swap the two user ids, you have to perform the following steps:
1. Remove the constraint.
2. Swap the two ids (same code as you currently have).
3. Add the constraint back.
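In SQL, steps 1 and 3 would be something like this (a sketch; names taken from the error message):
ALTER TABLE user_infos DROP CONSTRAINT user_infos_unique_user;
-- ... run the two updates and commit ...
ALTER TABLE user_infos
  ADD CONSTRAINT user_infos_unique_user UNIQUE (ui_user_id);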
I have a table (in an Oracle database) containing two columns: a VARCHAR unique key and a NUMBER unique key generated from a sequence.
I need my Java code to constantly (and in parallel) add records to this table whenever it gets a new VARCHAR key, returning the newly generated NUMBER key, or to return the existing NUMBER key when it gets an existing VARCHAR (it doesn't insert then; that would of course throw an exception due to the unique key violation).
Such a procedure would be executed from many (Java) clients working in parallel.
Hope my English is understandable :)
What is the best way to do it (maybe using a PL/SQL block instead of Java code)?
I do not think you can do better than
SELECT the_number FROM the_table where the_key = :key
if found, return it
if not found, INSERT INTO the_table (the_key, the_number) VALUES (:key, the_seq.NEXTVAL) RETURNING the_number INTO :number and COMMIT
this could raise an ORA-00001 (unique constraint violated) if the timing is unlucky. In this case, SELECT again.
Not sure if JDBC supports RETURNING, so you might need to wrap it into a stored procedure (also saves database roundtrips).
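A sketch of such a stored function, reusing the table, column, and sequence names from the pseudo-code above:
CREATE OR REPLACE FUNCTION get_or_create_number(p_key IN VARCHAR2)
RETURN NUMBER
IS
  v_number NUMBER;
BEGIN
  LOOP
    BEGIN
      -- fast path: the key already exists
      SELECT the_number INTO v_number FROM the_table WHERE the_key = p_key;
      RETURN v_number;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        BEGIN
          INSERT INTO the_table (the_key, the_number)
          VALUES (p_key, the_seq.NEXTVAL)
          RETURNING the_number INTO v_number;
          COMMIT;
          RETURN v_number;
        EXCEPTION
          -- another session inserted the same key first: loop and SELECT again
          WHEN DUP_VAL_ON_INDEX THEN
            NULL;
        END;
    END;
  END LOOP;
END;
/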
You can use an index-organized table (with the_key as the primary key) to make the lookup faster.
I've written a Java servlet which inserts items into a table; however, it fails. I think it might be due to my inserting and deleting, which got me into trouble somehow. The servlet runs an INSERT statement against SQL Server. My error log says:
com.microsoft.sqlserver.jdbc.sqlserverexception: cannot insert duplicate key row in object 'dbo.timitem' with unique index 'XAK1timitem'.
Any ideas?
UPDATE: I found there is an index called "XAK1timItem (Unique, Non-Clustered)", which I'm not really sure what to do with. Hope this helps clarify the question.
The unique index enforces uniqueness for the combination of the columns included in the index. So if there is already a row in the database which has, for the indexed columns, values equal to those you are trying to insert, you will get an error back from the database.
The AK part indicates that this is an alternate key, which probably means that the table has a regular primary key and does not need to rely on the AK for unique identification of a row.
Some options:
drop the index if not needed
add another column to the unique index
make the index not unique, so that it allows duplicate values
check if there is already a row that matches the one you are about to insert, and abort the insert, but I guess you don't want to do this (see the sketch below)
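For the last option, a conditional insert in T-SQL might look like this (a sketch; col_a and col_b are hypothetical stand-ins for whatever columns XAK1timitem actually covers):
-- Insert only if no row with the same indexed values exists yet.
-- Note: still racy under concurrency without appropriate locking.
INSERT INTO dbo.timitem (col_a, col_b)
SELECT @col_a, @col_b
WHERE NOT EXISTS (
  SELECT 1 FROM dbo.timitem
  WHERE col_a = @col_a AND col_b = @col_b
);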