I have a table named GROUPS which has a column GROUP_ID whose values are
GRP001, GRP002, GRP003
and so on. Every time I insert a new row, I have to give it GROUP_ID = (highest existing GROUP_ID) + 1; for example, if the highest GROUP_ID is GRP003, I have to generate GRP004 when I insert a new row.
How can I do this using Java?
I am currently using Hibernate along with Struts 2 in my program.
Is there any way to handle this with Hibernate? Or will I have to write additional code to lock the table, query the db for the max id (and then increment it), and finally release the lock?
I remember solving a problem similar to this once. What I did was create a custom primary key generator, as supported by Hibernate.
This guy explains it clearly here: "Custom Hibernate Primary Key Generator"
Basically you just need to implement org.hibernate.id.IdentifierGenerator and all should be set.
Just be aware that the solution implemented in the example above is database-dependent. But I think sometimes common sense should prevail over over-engineering.
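For illustration, a minimal sketch of such a generator for the GRPnnn question above (class and generator names are made up; the Hibernate 5 signature is assumed, and the MAX-based lookup is database-dependent and only safe if concurrent inserts are serialized):
import java.io.Serializable;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.hibernate.HibernateException;
import org.hibernate.engine.spi.SharedSessionContractImplementor;
import org.hibernate.id.IdentifierGenerator;

public class GroupIdGenerator implements IdentifierGenerator {
    @Override
    public Serializable generate(SharedSessionContractImplementor session, Object entity)
            throws HibernateException {
        // Read the current highest id, e.g. "GRP003", and increment its numeric part.
        try (PreparedStatement ps = session.connection()
                .prepareStatement("SELECT MAX(GROUP_ID) FROM GROUPS");
             ResultSet rs = ps.executeQuery()) {
            int next = 1;
            if (rs.next() && rs.getString(1) != null) {
                next = Integer.parseInt(rs.getString(1).substring(3)) + 1; // strip "GRP"
            }
            return String.format("GRP%03d", next); // e.g. "GRP004"
        } catch (Exception e) {
            throw new HibernateException("Could not generate GROUP_ID", e);
        }
    }
}
The generator is then wired to the entity with Hibernate's @GenericGenerator(name = "group_gen", strategy = "your.package.GroupIdGenerator") together with @GeneratedValue(generator = "group_gen").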
Related
Is there a way to tell Hibernate to first check if the current primary key generated by a Table Generator is usable or outdated?
I have an application which uses Hibernate to create new entries in several tables in my database, but sometimes these generated values are outdated and already in use. This happens because the database is shared by quite a few applications and scripts, and some of these use the "SELECT MAX(ID)+1" key-generation strategy. It is not really an option to change all the other components to use the table generator (although that would solve the problem), so I have to make sure that the values I get from the table generator are really usable.
Is there any way to tell Hibernate to check the validity of the generated values before it tries to insert a new record into the database (and throws a ConstraintViolationException)?
Or, alternatively, is there a way to manually update the generator tables before Hibernate uses them to generate new ids?
The obvious way would be to run a native query like UPDATE pk_generator SET value = (SELECT MAX(ID) + 1 FROM members) WHERE column = 'members'.
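For what it's worth, a minimal sketch of issuing that update through Hibernate before relying on the generator (table and column names are taken from the question; createNativeQuery assumes Hibernate 5's Session API):
session.createNativeQuery(
        "UPDATE pk_generator SET value = (SELECT MAX(ID) + 1 FROM members) "
      + "WHERE column = 'members'")
    .executeUpdate();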
When you save an object with saveOrUpdate(), the object's id field is updated with the auto-generated id if it was a create operation, so it will never conflict with an id that was already generated and used.
Just a quick question about locking tables in a Postgres database using JDBC. I have a table I want to add new records to, and for the primary key I use an increasing integer value.
I want to retrieve the max value of this column in Java and store it in a variable to be used as the new primary key when adding the next row.
This gives me a small problem: since this is going to be modelled as a multi-user system, what happens when two locations request the same max value? Both would then try to insert the same primary key.
I realise that I should be using an EXCLUSIVE lock on the table to prevent reading or writing while getting the key and adding the new row. However, I can't seem to find any way to deal with table locking in JDBC, just standard transactions.
Pseudocode as such:
primaryKey = "SELECT MAX(id) FROM table1;"
primaryKey++;
// a second client may retrieve the same max here
"INSERT INTO table1 VALUES (primaryKey, value1, value2);"
You're absolutely right: if two locations make the request at around the same time, you'll run into a race condition.
The way to handle this is to create a sequence in Postgres and select its nextval as the primary key.
I don't know exactly what direction you're heading in or how you handle your data, but you could also declare the column as serial and leave it out of your insert query entirely; the column will then auto-increment by itself.
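A minimal sketch of the sequence route through JDBC, assuming an open java.sql.Connection conn and a sequence created once with CREATE SEQUENCE table1_id_seq (the table name is from the pseudocode, the rest is made up). nextval is atomic, so two clients can never receive the same value:
try (Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT nextval('table1_id_seq')")) {
    rs.next();
    long id = rs.getLong(1); // guaranteed unique, safe to use as the new key
    // ... use id in the INSERT exactly as in the pseudocode ...
}
If you really do want the explicit lock instead, it is issued as ordinary SQL through the same Statement, e.g. LOCK TABLE table1 IN EXCLUSIVE MODE inside a transaction, and it only helps if every writer takes the same lock; the sequence approach avoids blocking writers entirely.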
I have several databases and need to exchange data between them. When I export from db A and import into db B, id conflicts happen. I have thought of two approaches, but neither satisfies me.
The first: select max(id), then create a new id to avoid the conflict. But one column stores a JSON structure that contains the id too (for historical reasons), so I would have to create the new id (primary key) and then modify all the ids inside that JSON column.
The second: add batch info to each data import. When I import data, I find every id in the SQL and prepend a batch id to it. For example:
The original db looks like:
ID COL_JSON
11 {id:11,name:xx ...}
Now I want to insert a new record with id 11. After the insert, I prepend the batch info "1000" to the id, so the db now looks like:
ID COL_JSON
11 {id:11,name:xx ...}
100011 {id:100011,name:xx ...}
The next batches will be 1001, 1002, 1003, ..., so if another record with id 11 needs to be inserted, the db looks like:
ID COL_JSON
11 {id:11,name:xx ...}
100011 {id:100011,name:xx ...}
100111 {id:100111,name:xx ...}
Although both approaches resolve the conflict, I feel they are both stupid. Is there a more graceful scheme?
For a legacy system, there is no better approach than to align with it. Since the legacy system cannot be changed much, your 2nd approach seems good. Frankly, it's not stupid; it's just the right way to go.
I don't understand exactly what your databases exchange.
If you need the data from both databases in both of them, you could use something similar to your batch idea, but with characters: ids like A11 and B11.
This way you won't have conflicts even if your databases grow a lot.
Edit: You could also make a composite primary key with two fields: the auto-increment integer ID and a varchar for the originating database name.
When I have a table that should be synchronized (not on the fly), I use this approach:
The main table that will be overwritten gets a big auto-increment start (e.g. AUTO_INCREMENT = 100000).
The secondary table that will be merged into the main one keeps a normal auto-increment starting at 1.
The only requirement is to ensure that the main table's auto-increment start is big enough that the secondary table's ids never reach the main table's first id.
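A sketch of the one-time setup, assuming MySQL syntax, made-up table names, and an open java.sql.Connection conn:
try (Statement stmt = conn.createStatement()) {
    // Main table hands out ids from 100000 upwards...
    stmt.executeUpdate("ALTER TABLE main_table AUTO_INCREMENT = 100000");
    // ...so ids 1..99999 generated in the secondary table can be merged as-is.
}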
Can I put a MAX value on a database table's primary key, either via JPA or at the database level? If that is not possible, I was thinking of:
1. Create a random key between 0 and 9999999999 (9999999999 is my MAX).
2. Do a SELECT on the database with the newly created key; if the returned object is null, INSERT; if not, go back to step 1.
So if I do the above, two questions. Please keep in mind that the environment is highly concurrent:
Q1: Is the overhead of the SELECT-then-INSERT check significant? What I really mean is: is this process normal, given that usually I let the DB create a unique PK for me?
Q2: If Q1 does not create significant performance degradation, can I run into concurrency issues? For example, P1 checks the table for Id1; Id1 is not there, so P1 is ready to insert; P2 sneaks in and inserts Id1 before P1 can. So when P1 inserts Id1, it fails. I don't want the process to fail at that point; I want it to go back up the loop, find a new id, and repeat the process. How do I do that?
My environment is SQL and MySQL databases. I use JPA with the EclipseLink implementation.
NOTE: Some people question my decision to implement it this way; the answer is exactly what TravisJ suggests below. I have a very highly concurrent environment. When a process kicks off, I need to create a request to another process, passing it a unique 10-character id. Since the environment is highly concurrent, I want to leverage the unique, not-null nature of a PK. The request contains a lot of information, so I created a Request table with the request id as my PK. I know that since all DBs index their PKs, querying by the PK is fast. If there is a better way, please let me know.
You can implement a Check Constraint in your table definition:
CREATE TABLE P
(
  -- 9999999999 exceeds the 32-bit int range, so the key must be BIGINT
  P_Id BIGINT PRIMARY KEY NOT NULL,
  ...
  CONSTRAINT chk_P_Id CHECK (P_Id >= 0 AND P_Id <= 9999999999)
)
EDIT: As stated in the comments, MySQL does not honor CHECK constraints. This is a six-year-old defect in the bug log, and the MySQL team has yet to fix it. As MySQL is now overseen by Oracle Corp, it may never be fixed (simply considered a "documented limitation", and people who don't like it can upgrade to the paid DBMS). However, this syntax, and the check constraint feature itself, do work in Oracle, MS SQL Server (including SQLExpress/MSDE), MS Access, PostgreSQL, and SQLite.
Why not start at 1 and use auto-increment? This will be much more efficient because you will not get collisions, which you must cycle through. If you run out of numbers, you will be in the same boat either way, but at least going sequentially, you won't have to deal with collisions.
Imagine trying to find an unused key when you have used up 90% of your available numbers. That will take some time, and there is always a possibility that it never (in your lifetime) finds an unused key if you are generating them randomly.
Also, using auto-increment, it's easy to tell if you're close to the limit (SELECT MAX(col)). You could script an alert to let you know when you need to reset. For the random method, what would that query look like?
If you're using InnoDB, then you still might not want to use the random value as the primary key. Inserting random values into a clustered index is a performance hit, since the actual table data must be reordered. Instead, use a unique key on that column (with an auto-increment primary key).
Using a unique index on the column in question, simply generate a random number in the range and attempt to insert it. If the insertion fails, then generate a new number and try again. If the insert succeeds, then proceed. This accounts for the concurrency issue.
Still, the sequential auto-increment key is going to yield better performance.
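A sketch of that insert-and-retry loop, assuming an open java.sql.Connection conn, a UNIQUE index on request_id, and a driver that reports duplicates as SQLIntegrityConstraintViolationException (all names are illustrative):
long id;
while (true) {
    // draw a candidate in 0..9999999999, the asker's stated range
    id = java.util.concurrent.ThreadLocalRandom.current().nextLong(10_000_000_000L);
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO request (request_id, payload) VALUES (?, ?)")) {
        ps.setLong(1, id);
        ps.setString(2, "request data");
        ps.executeUpdate();
        break; // no exception: the random id was free and is now ours
    } catch (java.sql.SQLIntegrityConstraintViolationException e) {
        // collision with an existing id: loop and draw a new value
    }
}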
See,
http://en.wikibooks.org/wiki/Java_Persistence/Identity_and_Sequencing
and,
http://en.wikibooks.org/wiki/Java_Persistence/Identity_and_Sequencing#Advanced_Sequencing
JPA already has good Id generation support, it does not make sense to implement your own.
If you are concerned about concurrency and performance and are using MySQL, I would recommend using a TABLE generator with a large preallocation size (on other databases I would recommend a SEQUENCE generator). If you have a lot of data, ensure you use a long for your id.
If you really think you need more than this, then consider UUID id generation. EclipseLink 2.4 will provide a @UUIDGenerator.
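A sketch of such a mapping (entity, table, and generator names are illustrative, not from the answer):
import javax.persistence.*;

@Entity
public class Request {
    @Id
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "req_gen")
    @TableGenerator(name = "req_gen", table = "ID_GEN",
                    pkColumnValue = "REQUEST_ID", allocationSize = 500)
    private long id; // long, as recommended above for large data sets

    public long getId() { return id; }
}
With allocationSize = 500 the generator hits the database only once per 500 ids, which is what keeps the table strategy fast under concurrency.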
Hello, and happy new year to everyone.
I need to insert a record at the end of a table (the table does not have auto-increment set) using JPA.
I know I could get the last id (an integer) and set it on the entity before the insert, but how could that be done? Which way would be most effective?
There is no such thing as "the end of the table". Rows in a relational table are not sorted.
Simply insert your new row. If you need any particular order, you need to apply an ORDER BY when selecting the rows from the table.
If you are talking about generating a new ID, then use an Oracle sequence. It guarantees uniqueness.
I would not recommend using a "counter table".
That solution is either not scalable (if it's correctly implemented) or not safe (if it's scalable).
That's what sequences were created for. I don't know JPA, but if you can't get the ID from a sequence then I suggest you find a better ORM.
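A sketch of a sequence-backed id in JPA (sequence and generator names assumed):
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "entry_seq")
@SequenceGenerator(name = "entry_seq", sequenceName = "ENTRY_ID_SEQ", allocationSize = 1)
private Integer id;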
Well, while I do not know where the end of a table really is, JPA has a lot of options for plugging in ID generators.
One common option is to use a dedicated table with a counter for each entity you need an ID for (from http://download.oracle.com/docs/cd/B32110_01/web.1013/b28221/cmp30cfg001.htm):
@Id
@GeneratedValue(strategy = GenerationType.TABLE, generator = "ADDRESS_TABLE_GENERATOR")
@TableGenerator(
    name = "ADDRESS_TABLE_GENERATOR",
    table = "EMPLOYEE_GENERATOR_TABLE",
    pkColumnValue = "ADDRESS_SEQ"
)
@Column(name = "ADDRESS_ID")
public Integer getId() {
    return id;
}
...other "Generator" strategies to be googled...
EDIT
I dare to reference @a_horse_with_no_name, as he says he does not know JPA: if you want to use native mechanisms like sequences (which are not available in every DB), you can declare such a generator in JPA, too.
I do not know what issues he encountered with the table approach - I know of large installations running it successfully. But anyway, this depends on a lot of factors besides scalability, for example whether you want it to be portable. Just look up the different strategies and select the appropriate one.