OpenJPA Sequence generator with negative values - java

Environment: WebSphere 8.5, OpenJPA 2.0, DB2 for z/OS
There are two tables: one with verified data and another with draft data (a staging table), plus a view that displays information from both tables.
To avoid primary key clashes I decided that the staging table would use negative values as its primary key. This worked in plain SQL, but my approach failed when I tried to define a generator for the staging table in Java code.
The sequence for the negative keys was defined like this:
CREATE SEQUENCE X AS INTEGER START WITH -1 INCREMENT BY -1
MINVALUE -999999 MAXVALUE 0
On the entity side:
@Id
@SequenceGenerator(name="X", sequenceName="X")
@GeneratedValue(strategy=GenerationType.SEQUENCE, generator="X")
@Column(name = "ID")
The first element was created successfully (with value -1), but inserting the second element failed with:
THE RANGE OF VALUES FOR THE IDENTITY COLUMN OR SEQUENCE IS EXHAUSTED. SQLCODE=-359, SQLSTATE=23522
Can you help me define the @SequenceGenerator? Is this possible under OpenJPA 2.0? Or was the sequence definition itself wrong (MINVALUE/MAXVALUE)?

First, I think your option of 'START WITH -99999 INCREMENT BY 1' is the best one. I'm not sure why you feel it is "not pretty". If you do this:
CREATE SEQUENCE SEQ_MYSEQ AS INTEGER START WITH -99999 INCREMENT BY 1 MINVALUE -999999 MAXVALUE 0
You are still range bound between -99999 and 0, right?
As I'll explain below, I think OpenJPA and EclipseLink like to count up. So I think you'll have better luck with this.
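For illustration, here is a hedged sketch (not part of the original answer) of the mapping that would pair with that count-up sequence; the generator name is arbitrary, and the initialValue/allocationSize values are assumptions that must match the CREATE SEQUENCE above:
// Sketch only: pairs with CREATE SEQUENCE SEQ_MYSEQ ... START WITH -99999 INCREMENT BY 1.
// "IDGENERATOR" is an arbitrary name; allocationSize = 1 keeps the provider from doing any
// in-memory counting that could disagree with the database increment.
@Id
@SequenceGenerator(name = "IDGENERATOR", sequenceName = "SEQ_MYSEQ",
                   allocationSize = 1, initialValue = -99999)
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "IDGENERATOR")
private int id;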
That said, let me answer your opening question. I ran a test on both OpenJPA and EclipseLink (since WebSphere uses EclipseLink in WAS v9 and Liberty). I could not get your scenario to work with EclipseLink, but I could get it to work with OpenJPA (though it wasn't pretty). Let me state what I did: this is the SQL I defined my sequence with (just as you listed in the description):
CREATE SEQUENCE SEQ_MYSEQ AS INTEGER START WITH -1 INCREMENT BY -1 MINVALUE -999999 MAXVALUE 0;
I defined my sequence generator in my entity as:
@Id
@SequenceGenerator(name = "IDGENERATOR", sequenceName = "SEQ_MYSEQ", allocationSize = 1, initialValue = -1)
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "IDGENERATOR")
private int id;
Notice that I'm telling JPA to start at -1 and to use an allocationSize of 1. The JavaDocs state the default allocationSize is 50. One drawback (but not a show stopper) of an allocationSize of 1 is that the JPA provider will go to the database for each sequence value (i.e. local caching will not be used). However, if this is not done, it seems both OpenJPA and EclipseLink want to count up by the allocation size. It is hard coded to count up: either one will ask the DB for the next value and then count up from there by allocationSize, NOT count down. On OpenJPA, you also need to set a DBDictionary property that stops it from adjusting the sequence (see the sketch below).
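A minimal sketch of how such a setting could be passed when creating the EntityManagerFactory; the dictionary option name disableAlterSeqenceIncrementBy (the missing 'u' is OpenJPA's own spelling) is my assumption rather than something stated in this answer, so verify it against your OpenJPA version and documentation:
// Sketch, assuming OpenJPA's DBDictionary option "disableAlterSeqenceIncrementBy" exists in
// your OpenJPA release; "myPU" is a placeholder persistence-unit name.
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class OpenJpaBootstrap {
    public static EntityManagerFactory create() {
        Map<String, String> props = new HashMap<String, String>();
        // Ask OpenJPA not to issue ALTER SEQUENCE to force INCREMENT BY = allocationSize.
        props.put("openjpa.jdbc.DBDictionary", "disableAlterSeqenceIncrementBy=true");
        return Persistence.createEntityManagerFactory("myPU", props);
    }
}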
Otherwise, by default OpenJPA executes an 'ALTER SEQUENCE' to make certain the INCREMENT BY defined in the SequenceGenerator matches what is in the database. If I don't add this property, I get the same exception about the range exhaustion. Anyway, with this, all works well on OpenJPA. On EclipseLink, I get this exception:
Exception Description: The sequence named [SEQ_MYSEQ] is setup incorrectly. Its increment does not match its pre-allocation size.
at org.eclipse.persistence.internal.jpa.EntityManagerImpl.persist(EntityManagerImpl.java:510)
at hat.test.MySeqTest.main(MySeqTest.java:28)
I didn't dig enough into EclipseLink to figure this out, but I did play around with the sequence a bit and it seems like EclipseLink simply doesn't like negative values.
Thanks,
Heath Thomann

Related

Hibernate GenerationType.IDENTITY not generating sequence ids

In my Spring Boot application, I noticed a strange issue when inserting new rows.
My IDs are generated by a sequence, but after I restart the application the numbering starts from 21.
Example:
First launch: I insert 3 rows; the sequence generates IDs 1, 2, 3.
Second launch (after a restart): I insert 3 rows and the IDs are generated from 21, so they are 21, 22, ...
Every restart the value jumps by 20 - the gap is always 20.
See my database table (1, 2, then 21 after the restart).
My JPA entity
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(unique = true, nullable = false)
private Long id;
I tried some Stack Overflow solutions, but they did not work.
I also tried this, without success:
spring.jpa.properties.hibernate.id.new_generator_mappings=false
I want rows to get sequential IDs like 1, 2, 3, 4, not 1, 2, 21, 22. How can I resolve this problem?
Although I think the question comments already provide all the information necessary to understand the problem, please let me try to explain some things and fix some inaccuracies.
According to your source code you are using the IDENTITY id generation strategy:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(unique = true, nullable = false)
private Long id;
You are using an Oracle database, and this is very relevant information for the question.
Support for IDENTITY columns was introduced in Oracle 12c, probably Release 1, and in Hibernate around version 5.1, although here on SO it is indicated that you need at least 5.3.
Either way, IDENTITY columns in Oracle are supported by the use of database SEQUENCEs: i.e., for every IDENTITY column a corresponding sequence is created. As you can read in the Oracle documentation, this explains why, among other things, all the options for creating sequences can be applied to the IDENTITY column definition: min and max ranges, cache size, etc.
By default a sequence in Oracle has a cache size of 20 as indicated in a tiny note in the aforementioned Oracle documentation:
Note: When you create an identity column, Oracle recommends that you
specify the CACHE clause with a value higher than the default of 20 to
enhance performance.
This default cache size is what explains the non-consecutive numbers you are seeing in your ID values.
This behavior is not exclusive to Hibernate: just issue a simple JDBC insert statement or SQL commands with any suitable tool and you will experience the same thing.
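For example, here is a minimal plain-JDBC sketch (the connection URL, credentials, table and column names are placeholders) that inserts a row with no Hibernate involved and prints the identity value Oracle assigned; the same jumps show up here too:
// Placeholder URL/credentials; assumes a table with an Oracle IDENTITY "ID" column,
// as in the CREATE TABLE shown below in this answer.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PlainJdbcInsert {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO your_table (some_column) VALUES (?)",
                     new String[] {"ID"})) { // ask the driver to return the generated ID
            ps.setString(1, "demo row");
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                if (keys.next()) {
                    System.out.println("assigned id = " + keys.getLong(1));
                }
            }
        }
    }
}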
To solve the issue, create your table specifying NOCACHE for your IDENTITY column:
CREATE TABLE your_table (
id NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY NOCACHE,
--...
)
Note that you need to use NOCACHE and not CACHE 0, as suggested in the question comments and in a previous version of other answers; that is an error, because the value for the CACHE option must be at least 2.
You can probably also modify the column without recreating the whole table:
ALTER TABLE your_table MODIFY (ID GENERATED BY DEFAULT ON NULL AS IDENTITY NOCACHE);
Having said all that, please be aware that the cache mechanism is in fact an optimization and not a drawback: in the end, and this is just my opinion, these are only surrogate (non-natural) IDs and, in the general use case, the cache benefits outweigh the drawbacks.
Please consider reading this great article about IDENTITY columns in Oracle.
The answer related to the use of the hilo optimizer could be right, but it requires explicitly configuring that optimizer in your id field declaration, which does not seem to be the case here.
It is related to the Hi/Lo algorithm that Hibernate uses for incrementing the sequence value. Read more in this example: https://www.baeldung.com/hi-lo-algorithm-hibernate.
This is an optimization used by Hibernate: it pre-fetches some values from the DB sequence into a pool (in the Java runtime) and uses them while executing the INSERT statements on the table. If this optimization is turned off by setting allocationSize=1, then the desired behavior (no gaps in the IDs) is mostly achievable (not in every case), but at the price of making two requests to the DB for each INSERT, as sketched below.
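A hedged sketch of what that looks like in the mapping, assuming a database sequence named MY_SEQ that increments by 1:
// Sketch: sequence-backed id with no in-memory pooling. MY_SEQ is an assumed sequence name
// and must exist with INCREMENT BY 1; every insert now costs an extra round trip for nextval.
@Id
@SequenceGenerator(name = "mySeqGen", sequenceName = "MY_SEQ", allocationSize = 1)
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "mySeqGen")
private Long id;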
The examples below give an idea of what is going on at a high level of abstraction (the internal implementation is more complex, but that doesn't matter here).
Scenario: user makes 21 inserts during some period of time
Example 1 (current behavior allocationSize=20)
#1 insert: // first cycle
- need next MY_SEQ value, but MY_SEQ_PREFETCH_POOL is empty
- select 20 values from MY_SEQ into MY_SEQ_PREFETCH_POOL // call DB
- take it from MY_SEQ_PREFETCH_POOL >> remaining=20-1
- execute INSERT // call DB
#2-#20 insert:
- need next MY_SEQ value,
- take it from MY_SEQ_PREFETCH_POOL >> remaining=20-i
- execute INSERT // call DB
#21 insert: // new cycle
- need next MY_SEQ value, but MY_SEQ_PREFETCH_POOL is empty
- select 20 values from MY_SEQ into MY_SEQ_PREFETCH_POOL // call DB
- take it from MY_SEQ_PREFETCH_POOL >> remaining=19
- execute INSERT // call DB
Example 2 (current behavior allocationSize=1)
#1-21 insert:
- need next MY_SEQ value, but MY_SEQ_PREFETCH_POOL is empty
- select 1 value from MY_SEQ into MY_SEQ_PREFETCH_POOL // call DB
- take it from MY_SEQ_PREFETCH_POOL >> remaining=0
- execute INSERT // call DB
Example#1: total calls to DB is 23
Example#2: total calls to DB is 42
Manual declaration of the sequence in the database will not help in this case because, for instance, in this statement
CREATE SEQUENCE ABC START WITH 1 INCREMENT BY 1 CYCLE NOCACHE;
we control only the "cache" used internally by the DB, which is not visible to Hibernate. That cache affects sequence gaps when the DB is stopped and started again, which is not the case here.
When Hibernate consumes values from the sequence, the state of the sequence changes on the DB side. Think of it as hotel room booking: a company (Hibernate) booked 20 rooms for a conference in a hotel (the DB), but only 2 participants arrived. The other 18 rooms stay empty and cannot be used by other guests, and in this case the "booking period" is forever.
More details on how to configure Hibernate to work with sequences are here:
https://ntsim.uk/posts/how-to-use-hibernate-identifier-sequence-generators-properly
Here is a short answer for an older version of Hibernate; it still has relevant ideas:
https://stackoverflow.com/a/5346701/2774914

Hibernate GeneratorSequence fails after 50 entries

I have an entity with the following id configuration:
public class Publication implements Serializable, Identifiable {
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "sequenceGenerator")
@SequenceGenerator(name = "sequenceGenerator")
private Long id;
}
with this generator (Liquibase syntax):
<createSequence incrementBy="10" sequenceName="sequence_generator" startValue="1" cacheSize="10"/>
and a Spring Data JPA Repository:
#Repository
public interface PublicationRepository extends JpaRepository<Publication, Long>, JpaSpecificationExecutor<Publication> {
// ...
}
Now I have a part in my application where I create about 250 new Publication objects without an ID and then call publicationRepository.saveAll(). I get the following exception:
Caused by: javax.persistence.EntityExistsException: A different object with the same identifier value was already associated with the session : [mypackage.Publication#144651]
I debugged with breakpoints and found that this always happens with the 50th object, where the assigned ID is suddenly one that is already present in the set of already saved objects, so the generator seems to return the wrong value. For collections with fewer than 50 objects, it works fine.
What is also strange: the created objects' IDs increase by 1, while if I execute NEXT VALUE FOR sequence_generator on my database I get values in increments of 10.
Am I using the generator wrong?
You need to sync the SequenceGenerator's allocationSize with your sequence's incrementBy. The default value for allocationSize is 50, which means that after every 50th insert Hibernate will generate select nextval('sequence_name') (or something similar depending on the dialect) to get the next starting value for IDs.
What happens in your case is that:
for the first insert, Hibernate fetches the next value of the sequence, which is 1. By "first insert" I mean the first insert after the service/application is (re)started.
then it performs 50 inserts (the default allocationSize) without asking the DB for the next sequence value. The generated IDs will be from 1 to 50.
the 51st insert fetches the next value of the sequence, which is 11 (startValue + incrementBy). You previously inserted an entity with ID=11, which is why inserting the new entity fails (PK constraint violation).
Also, every time you call select nextval on the sequence, it simply returns currentValue + incrementBy. For your sequence, that is 1, 11, 21, 31, etc.
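Concretely, a hedged sketch of a fixed mapping for the entity in the question; the sequence name must match the one the Liquibase changeset creates, and allocationSize must equal its incrementBy of 10:
// Sketch: allocationSize matches the sequence's INCREMENT BY (10) from the Liquibase changeset.
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "sequenceGenerator")
@SequenceGenerator(name = "sequenceGenerator", sequenceName = "sequence_generator", allocationSize = 10)
private Long id;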
If you enable SQL logs, you'll see the following:
Calling repository.save(entity) the first time would generate
select nextval('sequence_name');
insert into table_name(...) values (...);
Saving the second entity with repository.save(entity) would generate only
insert into table_name(...) values (...);
After allocationSize inserts you would again see:
select nextval('sequence_name');
insert into table_name(...) values (...);
The advantage of using sequences this way is to minimize the number of times Hibernate needs to talk to the DB to get the next ID. Depending on your use case, you can adjust the allocationSize to get the best results.
Note: one of the comments suggested using allocationSize = 1, which is very bad and will have a huge impact on performance. For Hibernate it means issuing select nextval every time it performs an insert; in other words, you'll have 2 SQL statements for every insert.
Note 2: also keep in mind that you need to keep the SequenceGenerator's initialValue and the sequence's startValue in sync as well; allocationSize and initialValue are the two values the sequence generator uses to calculate the next ID.
Note 3: it is worth mentioning that, depending on the algorithm used to generate IDs (hi-lo, pooled, pooled-lo, etc.), gaps may occur across service/application restarts.
Useful resources:
Hibernate pooled and pooled-lo identifier generators - in case you wish to change the algorithm the sequence generator uses to calculate the next value. There may be cases (e.g. in a concurrent environment) where two services use the same DB sequence to generate values and their generated values could collide; for cases like that, one strategy is better than the other.

nextVal from PostgreSQL sequence in Hibernate fetches the sequence multiple times

For business logic we need to update a record with the next value from a sequence defined directly in the database and not in Hibernate (because it is not always applied on inserts/updates).
For this purpose we have a sequence defined in PostgreSQL whose DDL is:
CREATE SEQUENCE public.facturaproveedor_numeracionperiodofiscal_seq
INCREMENT 1 MINVALUE 1
MAXVALUE 9223372036854775807 START 1
CACHE 1;
Then in the DAO, when some conditions are true, we get the nextval via:
Query query = sesion.createSQLQuery("SELECT nextval('facturaproveedor_numeracionperiodofiscal_seq')");
Long siguiente = ((BigInteger) query.uniqueResult()).longValue();
But the values assigned aren't consecutive. Looking at the Hibernate output log we see four fetches of the sequence in the same transaction:
Hibernate: SELECT nextval('facturaproveedor_numeracionperiodofiscal_seq') as num
Hibernate: SELECT nextval('facturaproveedor_numeracionperiodofiscal_seq') as num
Hibernate: SELECT nextval('facturaproveedor_numeracionperiodofiscal_seq') as num
Hibernate: SELECT nextval('facturaproveedor_numeracionperiodofiscal_seq') as num
Why is this happening? Is this for caching purposes? Is there a way to disable it? Or is this workaround simply not correct?
Hibernate won't usually generate standalone nextval calls that look like that, so I wouldn't be too surprised if it's your application doing the multiple fetches. You'll need to collect more tracing information to be sure.
I think you may have a bigger problem, though. If you care about sequences skipping values or leaving holes, then you're using the wrong tool for the job: you should be using a counter in a table that you UPDATE, probably UPDATE my_id_generator SET id = id + 1 RETURNING id. This locks out concurrent transactions and also ensures that if the transaction rolls back, the update is undone.
Sequences by contrast operate in parallel, which means that it's impossible to roll back a sequence increment when a transaction rolls back (see the PostgreSQL documentation). So they're generally not suitable for accounting purposes like invoice numbering and other things where you require a gapless sequence.
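A minimal JDBC sketch of that counter-table approach (the table name my_id_generator and column id come from the statement above; everything else is a placeholder), run inside the same transaction as the business update:
// Sketch: gapless counter held in a single-row table. The UPDATE locks the row, so concurrent
// transactions serialize on it, and a rollback also rolls the counter back. Assumes the
// PostgreSQL JDBC driver, where a statement with RETURNING can be run via executeQuery.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public final class GaplessCounter {
    public static long next(Connection con) throws SQLException {
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                     "UPDATE my_id_generator SET id = id + 1 RETURNING id")) {
            rs.next();
            return rs.getLong(1);
        }
    }
}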
For other readers who don't have the specific requirement to only sometimes generate a value: don't generate the sequence values manually; use a @GeneratedValue annotation.
In Hibernate 3.6 and newer, you should set hibernate.id.new_generator_mappings=true in your Hibernate properties.
Assuming you're mapping a generated key from a PostgreSQL SERIAL column, use the mapping:
@Id
@SequenceGenerator(name="mytable_id_seq", sequenceName="mytable_id_seq", allocationSize=1)
@GeneratedValue(strategy=GenerationType.SEQUENCE, generator="mytable_id_seq")
If you omit the allocationSize, Hibernate assumes its bizarre default of 50 without checking that the sequence actually increments by 50, so it tries to insert already-used IDs and fails.
Hibernate/JPA isn't able to automatically create a value for your non-id properties. The @GeneratedValue annotation is only used in conjunction with @Id to create auto-numbering.
Hibernate caches query results.
Notice the "as num"; it looks like Hibernate is altering the SQL as well.
You should be able to tell Hibernate not to cache results with
query.setCacheable(false);

Generated sequence starts with 1 instead of 1000 which is set in annotation

I would like to ask for some help regarding a database sequence created by Hibernate.
I have the annotation below in my entity class in order to have an individual sequence for the partners table. I expect the sequence to start with 1000, because I insert test data into my database using import.sql during deployment and I would like to avoid constraint violations. But when I try to persist data I get a constraint violation exception telling me that partner_id = 2 already exists. It looks like I missed something.
@Id
@Column(name = "partner_id")
@SequenceGenerator(initialValue=1000,
allocationSize=1,
name = "partner_sequence",
sequenceName="partner_sequence")
@GeneratedValue(generator="partner_sequence")
private Long partnerId;
The generated sequence looks like this:
CREATE SEQUENCE partner_sequence
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 1
CACHE 1;
ALTER TABLE partner_sequence
OWNER TO postgres;
I use PostgreSQL 9.1.
Did I miss something? Is this the right way to approach what I want?
Thanks for any help in advance!
According to this article, initialValue is supported if hibernate.id.new_generator_mappings=true is specified. I had the same problem as stated in this post, and I solved it by following this recipe. Sequences are generated correctly now.
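For reference, a sketch of passing that flag programmatically (the persistence-unit name "myPU" is a placeholder; the same property can go into persistence.xml, hibernate.cfg.xml, or Spring configuration instead):
// Sketch: enable Hibernate's newer id generators so that @SequenceGenerator's initialValue
// and allocationSize are honoured both when the schema is generated and when ids are assigned.
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class HibernateBootstrap {
    public static EntityManagerFactory create() {
        Map<String, String> props = new HashMap<String, String>();
        props.put("hibernate.id.new_generator_mappings", "true");
        return Persistence.createEntityManagerFactory("myPU", props);
    }
}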
initialValue and allocationSize are specific to the hilo algorithm that uses a sequence. According to this, initialValue is not even supported. I don't see how it could be supported from the Java layer anyway, since sequence values are generated in the database.
Also see hibernate oracle sequence produces large gap

Hibernate using sequence generator and sequence in Oracle

I have the following sequence (as seen now in Toad):
CREATE SEQUENCE LOG_ID_SEQ
START WITH 787585
MAXVALUE 1000000000000000000000000000
MINVALUE 1
NOCYCLE
NOCACHE
NOORDER
/
I have the following table sequence generator:
@SequenceGenerator(name="LOG_ID_SEQ", sequenceName="LOG_ID_SEQ")
@Id
@Column(name = "log_id", nullable = false)
@GeneratedValue(strategy=GenerationType.SEQUENCE, generator="LOG_ID_SEQ")
Long id;
The highest value of log_id is currently 39379151
Now the weird problem: the client created a dump of the production database
and imported it into the test database.
When I tested the application I got an ORA-00001 unique constraint error on this table.
When I imported the same dump and tested the application on my own machine, I did not get that error.
How is this possible with Hibernate? I have no idea where or what to look for.
[UPDATED]:
To be accurate: after I imported the dump into a new schema locally, the last sequence value in the dump was 39354002. Without resetting the sequence, my next value is 39379151.
You're getting values back from your sequence that collide with data in your table.
max(log_id) = 39,379,151, which is higher than your sequence "start with" value = 787,585.
Re-create your sequence with "start with" higher than max(log_id) and you should be all set.
The error may not be consistent because you may not be using every sequence value, so it's possible that on occasion inserts may succeed if you get a value that falls in a gap between existing rows.
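A hedged JDBC sketch of that fix; the table name log_table is a placeholder for whatever table owns log_id, and dropping/re-creating a sequence invalidates grants and synonyms, so an ALTER SEQUENCE based approach may be preferable in a real environment:
// Sketch: point LOG_ID_SEQ above the current max(log_id). "log_table" is a placeholder.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public final class FixLogIdSequence {
    public static void fix(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            long max;
            try (ResultSet rs = st.executeQuery("SELECT NVL(MAX(log_id), 0) FROM log_table")) {
                rs.next();
                max = rs.getLong(1);
            }
            // Re-create the sequence so the next value is strictly greater than max(log_id).
            st.execute("DROP SEQUENCE LOG_ID_SEQ");
            st.execute("CREATE SEQUENCE LOG_ID_SEQ START WITH " + (max + 1)
                    + " MINVALUE 1 NOCYCLE NOCACHE NOORDER");
        }
    }
}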
If the current highest value of log_id is 39379151, but you're re-creating LOG_ID_SEQ in a new schema/database with a starting value of 787585, then the next row inserted will get a log_id value that already exists. You should alter your CREATE SEQUENCE statement to reflect the new maximum value of log_id.
