I have a table (in an Oracle DB) containing two columns: a VARCHAR unique key and a NUMBER unique key generated from a sequence.
I need my Java code to constantly (and in parallel) add records to this table whenever it gets a new VARCHAR key, returning the newly generated NUMBER key, or to return the existing NUMBER key when it gets an existing VARCHAR key (it doesn't insert it then; that would of course throw an exception due to the unique key violation).
This procedure would be executed from many (Java) clients working in parallel.
Hope my English is understandable :)
What is the best way to do it (maybe using a PL/SQL block instead of Java code...)?
I do not think you can do better than
SELECT the_number FROM the_table WHERE the_key = :key
if found, return it
if not found, INSERT INTO the_table (the_key, the_number) VALUES (:key, the_seq.NEXTVAL) RETURNING the_number INTO :number, and COMMIT
this could raise an ORA-00001 (unique constraint violated) if the timing is unlucky. In this case, SELECT again.
Not sure if JDBC supports RETURNING, so you might need to wrap it into a stored procedure (also saves database roundtrips).
You can use an index-organized table (with the_key as the primary key), which makes the lookup faster.
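In Java, the whole flow could look roughly like this (a sketch using plain JDBC and the names from the pseudo-code above; it assumes auto-commit is off and uses the JDBC generated-keys API in place of RETURNING):

import java.sql.*;

// Sketch of the select-then-insert-then-retry pattern described above.
public static long getOrCreateNumber(Connection con, String key) throws SQLException {
    while (true) {
        // 1) Look for an existing mapping first.
        try (PreparedStatement sel = con.prepareStatement(
                "SELECT the_number FROM the_table WHERE the_key = ?")) {
            sel.setString(1, key);
            try (ResultSet rs = sel.executeQuery()) {
                if (rs.next()) return rs.getLong(1);
            }
        }
        // 2) Not found: insert and read the generated number back.
        try (PreparedStatement ins = con.prepareStatement(
                "INSERT INTO the_table (the_key, the_number) VALUES (?, the_seq.NEXTVAL)",
                new String[] { "THE_NUMBER" })) {
            ins.setString(1, key);
            ins.executeUpdate();
            try (ResultSet rs = ins.getGeneratedKeys()) {
                rs.next();
                con.commit();
                return rs.getLong(1);
            }
        } catch (SQLIntegrityConstraintViolationException e) {
            // ORA-00001: another client inserted the same key first; SELECT again.
        }
    }
}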
Related
I am unable to grasp the concept of the lookup table.
I am currently working on a project wherein I am using two tables.
The first table consists of two columns: name (varchar) and value (varchar).
The second table also has two columns: Result (varchar) and value (varchar).
Result is used to store the values which are obtained from the Java code. Whenever the Result of the Java code matches the name in the first table, I need to update the second table with the corresponding value from the first table.
Does using a lookup table help in any way? If it does, can it be explained with an example? If not, is there any other way?
Just imagine a table person with a column GenderIsMale BIT. You can set this value to 1 (yes, it is a boy) or to 0 (no, a girl). This was easy in earlier days.
Now we have more categories. According to this link, Facebook offers more than 50 different categories...
This is where the lookup table comes into play: you create a table which has, at minimum, a unique key and a value. In most cases this is an ID INT IDENTITY and a Content VARCHAR(100) NOT NULL. You can add more columns such as Abbreviation or any other additional content directly bound to this value (e.g. other languages, or codes of external code systems; read about mapping tables too).
The next step is to remove the GenderIsMale column and replace it with a
GenderID INT NOT NULL
CONSTRAINT FK_Person_GenderID FOREIGN KEY REFERENCES GenderLookUpTable(GenderID)
The person table will store the GenderID only, the related values are stored in the side table and can be looked up.
The simple lookup table is the basic building block for creating a relational database model in at least 3NF or BCNF (which should be a minimum requirement for professional database design).
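To make the usage concrete, here is a small hypothetical JDBC sketch (the person column Name and the open Connection con are assumptions; the rest follows the tables above): the application resolves the lookup ID once and stores only that ID in the person row.

// Resolve the GenderID from the lookup table, then store only the ID.
try (PreparedStatement lookup = con.prepareStatement(
        "SELECT GenderID FROM GenderLookUpTable WHERE Content = ?")) {
    lookup.setString(1, "female");
    try (ResultSet rs = lookup.executeQuery()) {
        if (rs.next()) {
            try (PreparedStatement ins = con.prepareStatement(
                    "INSERT INTO person (Name, GenderID) VALUES (?, ?)")) {
                ins.setString(1, "Alice");
                ins.setInt(2, rs.getInt(1));
                ins.executeUpdate();
            }
        }
    }
}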
Whenever the Result of the Java code matches the name in the first table, I need to update the second table with the corresponding value in the first table.
That's a perfect use case for a database trigger, which can be used to perform various actions when a change (insert, update, delete) happens in a table.
Assuming you're inserting the results of your Java calculations into your (result, value) table (let's call it foo, and the lookup table bar), you can write a trigger that replaces the value being written with the value from the lookup table. The example below is for Postgres; if you're using another DB, refer to your particular RDBMS manual for the syntax.
CREATE FUNCTION get_value_from_lookup_table() RETURNS trigger AS $$
BEGIN
  IF EXISTS (SELECT 1 FROM bar WHERE name = NEW.result) THEN
    -- replace the incoming value with the one from the lookup table
    SELECT value INTO NEW.value FROM bar WHERE name = NEW.result;
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER lookup_value
BEFORE INSERT ON foo
FOR EACH ROW
EXECUTE PROCEDURE get_value_from_lookup_table();
Every time an INSERT is done on foo, a check is done to see if a row exists in bar where name = result. If so, the value from bar replaces the value being written; otherwise the insert goes on unchanged. (Note that a BEFORE trigger is used rather than INSTEAD OF, since Postgres only allows INSTEAD OF triggers on views.) That's the basic gist of it. The actual solution depends on table constraints, whether you need to handle updates as well as inserts, etc.
I have specified a sequence generation strategy of IDENTITY on my entity class for the primary key of a table in an AS400 system.
@Id
@GeneratedValue(strategy=GenerationType.IDENTITY)
@Column(name = "SEQNO")
private Integer seqNo;
The table's primary key column is defined as GENERATED ALWAYS AS IDENTITY in database.
SEQNO BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY(START WITH 1, INCREMENT BY 1)
My understanding of IDENTITY strategy is that it will leave the primary key generation responsibility to the table itself.
The problem that I am facing is that somehow, in one environment, inserting a record into the table gives me [SQL0803] Duplicate Key value specified.
Now there are couple of questions in my mind:
Is my understanding correct for @GeneratedValue(strategy=GenerationType.IDENTITY)?
In which scenario will the table generate a duplicate key?
I figured out there are sequence values missing from the table, i.e. after 4 the values up to 20 are missing, and I do not know whether someone deleted them manually, but could this be related to the duplicate key generation?
YES. IDENTITY means use in-datastore features like "AUTO_INCREMENT", "SERIAL", "IDENTITY". So any INSERT should omit the IDENTITY column, and will pull the value back (into memory, for that object) after the INSERT is executed.
Should never get a duplicate key. Check the INSERT statement being used.
Some external process using the same table? Use the logs to see SQL and work it out.
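For illustration, a sketch of point 1 from the JPA side (the entity name and accessor are assumptions based on the mapping in the question):

// With GenerationType.IDENTITY the INSERT omits SEQNO entirely; JPA reads
// the database-assigned value back once the INSERT has been executed.
MyEntity entity = new MyEntity();       // seqNo left null, never set by hand
entityManager.persist(entity);
entityManager.flush();                  // INSERT ... (SEQNO omitted) runs here at the latest
Integer generated = entity.getSeqNo();  // now populated from the database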
I don't use JPA, but what you have seems reasonable to me.
As far as the DB2 for i side...
Are you sure you're getting the duplicate key error on the identity column? Are there no other columns defined as unique?
It is possible to have a duplicate key error on an identity column.
What you need to realize is that the next identity value is stored in the table object; not calculated on the fly. When I started using Identities, I got bit by a CMS package that routinely used CPYF to move data between newly created versions of a table. The new version of the table would have a next identity value of 1, even though there might be 100K records in it. (the package has since gotten smarter :) But the point remains that CPYF for instance, doesn't play nice with identity columns.
Additionally, it is possible to override the GENERATED ALWAYS via the OVERRIDING SYSTEM VALUE or OVERRIDING USER VALUE clauses of the INSERT statement. But inserting with an override has no effect on the stored next identity value. I suppose one could consider CPYF as using OVERRIDING SYSTEM VALUE.
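For illustration, a hedged sketch of such an override (table, column, and value are made up; the clause is the standard OVERRIDING syntax mentioned above):

// Explicitly supplying a value for a GENERATED ALWAYS identity column.
// As noted, this does not advance the stored next-identity value.
try (Statement st = con.createStatement()) {
    st.executeUpdate(
        "INSERT INTO test (myid) OVERRIDING SYSTEM VALUE VALUES (9999)");
}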
Now, as far as your missing identities...
1. Data was deleted
2. Data was copied in with overridden identities
3. Somebody ran ALTER TABLE <...> ALTER COLUMN <...> RESTART WITH
4. You lost the use of some values
Let me explain #4. For performance reasons, DB2 for i will by default cache 20 identity values for a process to use. So if you have two processes adding records, one will get values 1-20, the other 21-40. This allows both processes to insert concurrently. However, if process 1 only inserts 10 records, then identity values 11-20 will be lost. If you absolutely must have continuous identity values, then specify NO CACHE during the creation of the identity.
create table test
  (myid int generated always
    as identity
    (start with 1, increment by 1, no cache))
Finally, with respect to the caching of identity values: while confirming a few things for this answer, I noticed that using ALTER TABLE to add a new column seemed to cause a loss of the cached values. I inserted 3 rows, did the ALTER TABLE, and the next row got an identity value of 21.
I'm trying to insert a new record using UpdatableRecords in jOOQ 3.4.2. The pattern is extremely concise and pleasant to use, except that the INSERT writes my null values as actual values instead of treating them as absent, ignoring default values and the generated identity. How can I use the UpdatableRecord to do an insert that respects default values and generated identities?
Here's my table:
CREATE TABLE aragorn_sys.org_person (
org_person_id SERIAL NOT NULL,
first_name CHARACTER VARYING(128) NOT NULL,
last_name CHARACTER VARYING(128) NOT NULL,
created_time TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp NOT NULL,
created_by_user_id INTEGER,
last_modified_time TIMESTAMP WITH TIME ZONE,
last_modified_by_user_id INTEGER,
org_id INTEGER NOT NULL,
CONSTRAINT PK_org_person PRIMARY KEY (org_person_id)
);
Note my primary key and default values. Now here's my jOOQ code:
// orgPerson represents a POJO filled with my values to be inserted and null for everything else
// Note that orgPerson.orgPersonId is null
OrgPersonRecord orgPersonRecord = create.newRecord( ORG_PERSON, orgPerson );
Integer orgPersonId = create.executeInsert( orgPersonRecord );
But when I run this, I get the error null value in column "org_person_id" violates not-null constraint.
I noticed the jOOQ docs say that calling newRecord automatically sets all the internal "changed" flags to true on the UpdatableRecord. So then I tried this:
// orgPerson represents a POJO filled with my values to be inserted and null for everything else
// Note that orgPerson.orgPersonId is null
OrgPersonRecord orgPersonRecord = create.newRecord( ORG_PERSON, orgPerson );
orgPersonRecord.changed( ORG_PERSON.ORG_PERSON_ID, false );
orgPersonRecord.changed( ORG_PERSON.CREATED_TIME, false );
orgPersonRecord.insert();
Integer orgPersonId = orgPersonRecord.getOrgPersonId();
But that gives me the error ERROR: duplicate key value violates unique constraint "pk_org_person". And when I do this repeatedly, the values seem to keep increasing by 1. This doesn't really make sense to me, but my greater question is: Is there a good way I can do an INSERT based on my object values, or better yet, simply include only the non-null columns?
I saw JOOQ ignoring database columns with default values, but that doesn't seem to resolve this. Any recommendations on the most concise way to handle this?
By the way, jOOQ has been fantastic to work with so far. Lukas, thank you for this awesome tool!
UPDATE #1:
The "not null issue" is addressed by Lukas's answer below, and that's an easy fix.
For the duplicate primary keys, I am definitely not confusing INSERT with UPDATE. When I run the above code (slight update since original post), jOOQ seems to arbitrarily pick a "starting" primary key value for OrgPersonId. For example, when I first load up my environment, jOOQ might start with "11" for OrgPersonId.
Then, when I do an INSERT, jOOQ will attempt to supply a value of "11" for OrgPersonId; I'll get the ERROR: duplicate key value, and the INSERT will fail. If I then repeat the INSERT, jOOQ uses "12", then "13". It succeeds or fails based on whether that ID is available, but it's not "starting" with the right ID.
The manual (http://www.jooq.org/doc/3.4/manual/sql-execution/crud-with-updatablerecords/identity-values/) says that if you're using jOOQ's code generator, the above table will generate an org.jooq.UpdatableRecord with an IDENTITY column. This information is used by jOOQ internally to update IDs after calling store().
UPDATE #2:
Ok, I just tried the generated query directly in Postgres and it fails there, too, with the same issue. So, clearly this is a Postgres issue and not a jOOQ issue. I'll post the final resolution on that when I find it in case anyone else runs into this.
UPDATE #3:
Issue has been resolved. We use FlywayDB (another awesome tool) to automate our database schema migration, and we had a bunch of INSERT statements in our Flyway scripts that manually inserted the id numbers. This was convenient because we wanted to create a bunch of dummy data and guarantee the right foreign key relationships.
But manually specifying the primary key values does not advance the Postgres sequence! Hence, we had to cycle through the Postgres sequence before the (correctly operating) jOOQ would get the right sequence value.
The solution was to remove all the manual inserts of primary keys from our demo data migration scripts.
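In case it helps anyone else: a hedged sketch of how the sequence can be resynchronised after such manual inserts (pg_get_serial_sequence and setval are standard Postgres functions; the JDBC wrapping is just illustrative):

// Point the serial sequence at the current maximum id, so the next
// nextval() no longer collides with the manually inserted rows.
try (Statement st = con.createStatement()) {
    st.execute(
        "SELECT setval(pg_get_serial_sequence('aragorn_sys.org_person', 'org_person_id'), " +
        "COALESCE(MAX(org_person_id), 1)) FROM aragorn_sys.org_person");
}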
violates not-null constraint
The first part that you're describing is a flaw (#3582), which is related to a previous issue (#2700) that enforced storing null values loaded from POJOs into jOOQ Records for database columns that are NOT NULL. The fix will be in jOOQ 3.5.0, 3.4.3, 3.3.4, and 3.2.7.
duplicate key value violates unique constraint "pk_org_person"
The second part is probably caused by the fact that you are really loading an existing record and then calling executeInsert() on it (which, as the name suggests, will always execute an INSERT statement). You might want to call executeUpdate() instead.
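A hypothetical sketch of the distinction (assuming orgPersonRecord was fetched from the database, so its primary key is set):

// Update the loaded record in place instead of inserting a copy of it.
orgPersonRecord.setFirstName("New name");  // change what needs changing
create.executeUpdate(orgPersonRecord);     // UPDATE ... WHERE org_person_id = ?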
I am working on designing the Cassandra column family schema for my use case below, and I am not sure what the best way to design it is.
Below are my use case and the sample schema that I have designed for now:
SCHEMA_ID RECORD_NAME SCHEMA_VALUE TIMESTAMP
1 ABC some value t1
2 ABC some_other_value t2
3 DEF some value again t3
4 DEF some other value t4
5 GHI some new value t5
6 IOP some values again t6
Now what I will be looking for from the above table is something like this:
The first time my application runs, I will ask for everything from the above table, meaning: give me everything.
Then every 5 or 10 minutes, my background thread will check this table and ask only for whatever has changed (the full row if anything in that row changed); that is the reason I am using a timestamp as one of the columns here.
But I am not sure how to design the query pattern so that both of my use cases are satisfied easily, and what the proper way of designing the table for this would be. I am thinking of using SCHEMA_ID as the primary key...
I will be using CQL and the Datastax Java driver for this.
Update:
If I am using something like this, then is there any problem with this approach?
CREATE TABLE TEST (SCHEMA_ID TEXT, RECORD_NAME TEXT, SCHEMA_VALUE TEXT, LAST_MODIFIED_DATE TIMESTAMP, PRIMARY KEY (SCHEMA_ID));
INSERT INTO TEST (SCHEMA_ID, RECORD_NAME, SCHEMA_VALUE, LAST_MODIFIED_DATE) VALUES ('1', 't26', 'SOME_VALUE', 1382655211694);
Because in my use case I don't want anybody to insert the same SCHEMA_ID twice; SCHEMA_ID should be unique whenever we insert any new row into this table. So with your example (@omnibear), it might be possible for somebody to insert the same SCHEMA_ID twice? Am I correct?
And also, regarding the extra type column you introduced: that type column can be record_name in my example.
Regarding 1)
Cassandra is used for heavy writing, with lots of data on multiple nodes. Retrieving ALL data from this kind of set-up is daring, since it might involve huge amounts that have to be handled by one client. A better approach is to use pagination, which is natively supported in 2.0.
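For example, with the 2.0 Java driver, paging happens transparently while you iterate (a sketch; session is an open com.datastax.driver.core.Session and the fetch size is an arbitrary choice):

Statement stmt = new SimpleStatement("SELECT * FROM test");
stmt.setFetchSize(500);                 // rows per page, not a result limit
for (Row row : session.execute(stmt)) {
    // process row; the driver fetches further pages from the cluster on demand
}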
Regarding 2)
The point is that partition keys only support EQ or IN queries. For LT or GT (< / >) you use column keys. So if it makes sense to group your entries by some ID like "type", you can use this for your partition key and a timeuuid as a column key. This allows you to query for all entries newer than X, like so:
create table test
(type int, SCHEMA_ID int, RECORD_NAME text,
SCHEMA_VALUE text, TIMESTAMP timeuuid,
primary key (type, timestamp));
select * from test where type IN (0,1,2,3) and timestamp > 58e0a7d7-eebc-11d8-9669-0800200c9a66;
Update:
You asked:
somebody can insert same SCHEMA_ID twice? Am I correct?
Yes, you can always make an insert with an existing primary key; the values at that primary key will simply be updated. Therefore, to preserve uniqueness, a UUID is often used in the primary key, for instance a timeuuid. It is a unique value containing a timestamp and the MAC address of the client. There is excellent documentation on this topic.
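With the Java driver you can generate such values client-side, e.g.:

import com.datastax.driver.core.utils.UUIDs;
import java.util.UUID;

UUID id = UUIDs.timeBased();  // a timeuuid for "now": timestamp plus node part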
General advice:
Write down your queries first, then design your model. (Use case!)
Your queries define your data model which in turn is primarily defined by your primary keys.
So, in your case, I'd just adapt my schema above, like so:
CREATE TABLE TEST (SCHEMA_ID TEXT, RECORD_NAME TEXT, SCHEMA_VALUE TEXT,
LAST_MODIFIED_DATE TIMEUUID, PRIMARY KEY (RECORD_NAME, LAST_MODIFIED_DATE));
Which allows this query:
select * from test where RECORD_NAME IN ('componentA', 'componentB')
and LAST_MODIFIED_DATE > 1688f180-4141-11e3-aa6e-0800200c9a66;
The uuid corresponds to Wednesday, October 30, 2013 8:55:55 AM GMT, so you would fetch everything modified after that.
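Issued through the Datastax Java driver that you mention, the whole thing could look like this (a sketch; contact point and keyspace name are assumptions):

import com.datastax.driver.core.*;

Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
Session session = cluster.connect("my_keyspace");
ResultSet rs = session.execute(
    "SELECT * FROM test WHERE record_name IN ('componentA', 'componentB') " +
    "AND last_modified_date > 1688f180-4141-11e3-aa6e-0800200c9a66");
for (Row row : rs) {
    System.out.println(row.getString("schema_value"));
}
cluster.close();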
I'm having a little trouble using Hibernate with a char(6) column in Oracle. Here's the structure of the table:
CREATE TABLE ACCEPTANCE
(
USER_ID char(6) PRIMARY KEY NOT NULL,
ACCEPT_DATE date
);
For records whose user id has less than 6 characters, I can select them without padding the user id when running queries using SQuirreL. That is, the following returns a record if there's a record with a user id of "abc":
select * from acceptance where user_id = 'abc'
Unfortunately, when doing the select via Hibernate (JPA), the following returns null:
em.find(Acceptance.class, "abc");
If I pad the value to six characters, though, it returns the correct record:
em.find(Acceptance.class, "abc   ");
The module that I'm working on gets the user id unpadded from other parts of the system. Is there a better way to get Hibernate working other than putting in code to adapt the user id to a certain length before giving it to Hibernate? (which could present maintenance issues down the road if the length ever changes)
That's God's way of telling you to never use CHAR() for primary key :-)
Seriously, however: since your user_id is mapped as a String in your entity, Hibernate's Oracle dialect translates that into varchar. Since Hibernate uses prepared statements for all its queries, that semantics carries over (unlike in SQuirreL, where the value is specified as a literal and thus converted differently).
Based on Oracle's type conversion rules, the column value is then promoted to varchar2 and compared as such; thus you get back no records.
If you can't change the underlying column type, your best option is probably to use an HQL query with the rtrim() function, which is supported by the Oracle dialect.
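A hedged sketch of that approach via JPA (entity and property names are assumptions based on the question; JPQL's trim(trailing from ...) translates to a trailing trim, i.e. rtrim, on Oracle). Note that trimming the column generally prevents the primary key index from being used:

// Compare the trimmed column value instead of relying on CHAR padding.
Acceptance acceptance = em.createQuery(
        "select a from Acceptance a where trim(trailing from a.userId) = :userId",
        Acceptance.class)
    .setParameter("userId", "abc")
    .getSingleResult();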
How come your module gets an unpadded value from other parts of the system?
To my understanding, if the other parts of the system don't alter the PK, they should read 6 chars from the DB and pass 6 chars all along the way; that would be OK. The only exception would be when a PK is generated, in which case it may need to be padded.
You can circumvent the problem (by trimming or padding the value each time it's necessary), but that won't fix the underlying problem that your PK is not handled consistently. To solve the problem upfront you must either
always receive 6 chars from the other parts of the module
use varchar2 to deal with dynamic size correctly
If you can't solve the problem upfront, then you will indeed need to do one of the following:
add trimming/padding all around the place when necessary
add trimming/padding in the DAO if you have one
add trimming/padding in the user type if this works (suggestion from N. Hughes)
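For instance, a tiny hypothetical helper for the padding variant, applied before the value reaches Hibernate:

// Right-pad the key with spaces to the CHAR(6) width of the column.
static String padKey(String userId) {
    return String.format("%-6s", userId);
}

// usage:
Acceptance acceptance = em.find(Acceptance.class, padKey("abc"));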