GenerationType.IDENTITY with transaction - java

Is it possible to use GenerationType.IDENTITY with a transaction in Hibernate/Spring Data?
I have an existing database with an identity column in every table, so I have to use GenerationType.IDENTITY. But when I create a new entity instance and change its state to managed with the someRepository.save(...) method, the persistence provider can't acquire a new ID for that entity, because that only happens at flush time, at the end of the transaction.
If I create one entity, everything works as expected: after save(), the entity goes to the managed state, the id changes from NULL to 0 (zero), and at flush time the database generates the new ID for the row.
But what if we create two instances of the same entity class inside one transaction? An exception is thrown, precisely because we have two different objects with the same ID = 0. So, is there a way to deal with this without changing the strategy from IDENTITY to something else?
@Entity
public class Customer {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long id;
    ...
}
@Transactional
public void brokenCode() {
    Customer one = new Customer();
    Customer two = new Customer();
    someRepository.save(one);
    someRepository.save(two); // <-- org.springframework.dao.DataIntegrityViolationException: A different object with the same identifier value was already associated with the session
}
CREATE TABLE [dbo].[Customer]
(
[id] [int] IDENTITY (1,1) NOT NULL,
...
CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED ([id] ASC)
WITH (PAD_INDEX = OFF
, STATISTICS_NORECOMPUTE = OFF
, IGNORE_DUP_KEY = OFF
, ALLOW_ROW_LOCKS = ON
, ALLOW_PAGE_LOCKS = ON
, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [DATA]
)
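For comparison, here is the behaviour I would expect from a plain identity column, as a minimal sketch (the repository call and the getId() getter are illustrative, not from my actual code):
@Transactional
public void expectedBehaviour() {
    // With a working IDENTITY mapping, save() returns an entity whose id has
    // already been generated by the database, so the two instances never collide.
    Customer one = someRepository.save(new Customer());
    Customer two = someRepository.save(new Customer());
    assert one.getId() != null && !one.getId().equals(two.getId());
}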
UPD:
The reason is that the table has an INSTEAD OF INSERT trigger. In that trigger some fields are calculated based on other fields, and the identity value is not returned to Hibernate.
ALTER trigger [dbo].[Customer] on [dbo].[Customer]
instead of insert
as
if @@rowcount = 0 return
set nocount on
Set dateformat dmy
Insert into dbo.Customer
(f1, f2, ....)
Select
I.f1,
isnull(f2,newid()),
...
from Inserted I
So, is there a way to somehow return the identity to Hibernate from that trigger?
Thanks in advance!

Related

JPA/Hibernate: Update Parent column value before inserting child column value

This is my scenario. I have a Parent table Files_Info and a child table Files_Versions.
create table files_info(
id bigint primary key,
name varchar(255) not null,
description varchar(255) not null,
last_modified TIMESTAMP,
latest_version integer default 0 not null
);
create table files_versions(
id bigint primary key,
file_id bigint references files_info(id),
version integer not null,
location text not null,
created TIMESTAMP,
unique(file_id, version)
);
This is mainly to track a file and its various versions. When the user initiates a new file creation (not yet uploaded any version of the file), an entry is made to the files_info table with basic info like name, description. The latest_version will be 0 initially.
Then when the user uploads the first version, an entry is created in the files_versions table for that file_id and the version
value is set as parent's latest_version + 1. Parent's latest_version is now set to 1.
The user can also upload an initial version of the file when he/she initiates a new file creation. In that case, the parent record
will be created with latest_version as 1, along with the corresponding version 1 child record.
I do not know how to design this using JPA / Hibernate.
I wrote my Entity and Repository classes, and the save methods seem to work independently. But I do not know how to do the latest_version updates at the same time.
Can this be done using JPA / Hibernate? Or should it be a database trigger?
A trigger is a valid option, but it can also be done using JPA/Hibernate.
I suggest using the @PrePersist annotation on a method defined in the files_versions entity. This method will be called by JPA when you execute EntityManager.persist(fileVersion), and it can be used to update the entity's derived attributes. In your case, that will be the parent file's latest_version + 1. Example:
@Entity
@Table(name = "files_info")
public class FileInfo {
}

@Entity
@Table(name = "files_versions")
public class FileVersion {

    ... // some attributes

    @Column(name = "version")
    private int version;

    @ManyToOne
    @JoinColumn(name = "file_id")
    private FileInfo fileInfo;

    ... // some getters and setters

    @PrePersist
    private void setupVersion() {
        // fileInfo must be set before calling persist()!
        // fileInfo must have increased its latest version before calling persist()!
        this.version = this.fileInfo.getLastVersion();
    }
}
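A sketch of how the calling code might tie the two together (the accessor names setLastVersion/getLastVersion, setFileInfo and setLocation are assumptions matching the callback above, not code from the question):
import javax.persistence.EntityManager;

public class FileVersionService {

    // Sketch only: bump the parent's latest version first, then persist the child;
    // the @PrePersist callback copies the version from the parent into the child.
    public FileVersion addVersion(EntityManager em, long fileId, String location) {
        FileInfo file = em.find(FileInfo.class, fileId);
        file.setLastVersion(file.getLastVersion() + 1); // parent latest_version := n + 1

        FileVersion newVersion = new FileVersion();
        newVersion.setFileInfo(file);    // must be set before persist()
        newVersion.setLocation(location);
        em.persist(newVersion);          // setupVersion() runs here
        return newVersion;
    }
}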

Violation of UNIQUE KEY constraint in MS-SQL using Hibernate when deleting objects

Our application used MySQL until today and everything was fine. Now we need to use MSSQL.
A lot of our unit tests are now failing. A sample is as follows:
Caused by: java.sql.BatchUpdateException:
Violation of UNIQUE KEY constraint 'UQ__field_ty__5068257C6DE5E37D'.
Cannot insert duplicate key in object 'dbo.field_type_mapping'.
The duplicate key value is (<NULL>, -11).
As I said, this test is successful when using MySQL.
The table field_type_mapping has a constraint:
/****** Object: Index [UQ__field_ty__5068257C6DE5E37D]
ALTER TABLE [dbo].[field_type_mapping] ADD UNIQUE NONCLUSTERED
(
[mapping_entity_id] ASC,
[field_type_id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
The test is as follows and the exception is thrown at the last line of this test:
Invoice document = documentDao.get(5000);
assertEquals("Document should have exactly one reference field!", 1, document.getFieldTypeMappings().size());
assertEquals("Document should have exactly one item!", 1, document.getDocumentItems().size());
Set<InvoiceItem> items = document.getDocumentItems();
InvoiceItem item = items.iterator().next();
assertEquals("Document's item should have no reference field!", 0, item.getFieldTypeMappings().size());
ReferenceFieldType referenceFieldType = referenceFieldTypeDao.get(-11L);
FieldTypeMapping documentFieldType = new FieldTypeMapping();
documentFieldType.setFieldType(referenceFieldType);
documentFieldType.setFieldValue("a value");
document.addFieldTypeMapping(documentFieldType);
FieldTypeMapping documentItemFieldType = new FieldTypeMapping();
documentItemFieldType.setFieldType(referenceFieldType);
documentItemFieldType.setFieldValue("another value");
item.addFieldTypeMapping(documentItemFieldType);
documentDao.save(document);
flush();
document = documentDao.get(id);
assertEquals("Reference object for document not added!", 2, document.getFieldTypeMappings().size());
items = document.getDocumentItems();
item = items.iterator().next();
assertEquals("Reference object for document item not added!", 1, item.getFieldTypeMappings().size());
document.addFieldTypeMapping(documentFieldType);
item.addFieldTypeMapping(documentItemFieldType);
documentDao.save(document);
flush();
document = documentDao.get(id);
assertEquals("Number of reference object should not have changed for document!", 2, document.getFieldTypeMappings().size());
items = document.getDocumentItems();
item = items.iterator().next();
assertEquals("Number of reference object should not have changed for document' item!", 1, item.getFieldTypeMappings().size());
document.getFieldTypeMappings().remove(documentFieldType);
item.getFieldTypeMappings().remove(documentItemFieldType);
documentDao.save(document);
flush(); // Exception is thrown at this point..
My understanding is something is wrong with:
item.getFieldTypeMappings().remove(documentItemFieldType);
as the exception mentions id -11?
The hibernate code for removal is as follows:
@OneToMany(cascade = CascadeType.ALL, orphanRemoval = true, fetch = FetchType.LAZY)
@Cascade({org.hibernate.annotations.CascadeType.SAVE_UPDATE,
        org.hibernate.annotations.CascadeType.DETACH,
        org.hibernate.annotations.CascadeType.LOCK})
@JoinColumn(name = "mapping_entity_id")
@XmlTransient
@Fetch(FetchMode.SELECT)
public Set<FieldTypeMapping> getFieldTypeMappings() {
    return fieldTypeMappings;
}
As I am a novice with this, I do not even understand what might be wrong. How can I fix this issue? Is it an issue with Hibernate and how it handles the queries? I also want to mention that the whole database is created by Hibernate as well; no manual SQL execution or DB creation is done.
You usually need the ability to have a null FK when you may not know the value at the time of entering the data, even though you already know the other values to be entered.
To allow nulls in an FK, generally all you have to do is allow nulls on the field that holds the FK. The null value is separate from the idea of it being an FK. This is what I believe you need to do.
Whether it is unique or not relates to whether the table has a one-to-one or a one-to-many relationship to the parent table.
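Purely as an illustration (the question's mapping is a unidirectional @OneToMany, so the exact placement will differ, and the MappingEntity type here is hypothetical), a nullable FK at the mapping level just means leaving the join column nullable:
// Hypothetical owning-side mapping: the FK column may stay NULL until the
// relationship is known; uniqueness comes only from the table-level index
// on (mapping_entity_id, field_type_id).
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "mapping_entity_id", nullable = true)
private MappingEntity mappingEntity;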

How to implement persisting related (cascading) objects via JDBC

I have related objects consisting of parent entities such as Organisation.java, which has object-typed child attributes as @OneToMany lists, such as activities (i.e. List<Activity> activityList). Activity.java has its own object-typed attributes.
It is very easy to use JPA persistence to do CRUD operations on these objects against a database, but my current requirement forbids me from using JPA and requires implementing the same functionality using JDBC only, which I'm not sure how to do.
How could the same functionality be implemented via JDBC when both parent and child objects are created for the first time (i.e. with all of the objects having null IDs)?
Assuming you have a foreign key relationship between Organisation and Activity, you must create the parent first, then the child rows with the parent id.
You can do this with Spring; here's an old post, but the principles remain the same.
To implement manually, your database must provide a mechanism by which to generate primary keys for a given table without having to create a row first. Oracle supports sequence.nextVal, so your database should support something similar.
I'm pseudo-coding this, you can fill in the blanks:
try {
    connection.setAutoCommit(false);

    // get organisation id first
    String nextOrgIdSql = "select orgSeq.nextval from someVirtualTable"; // depends on database
    ResultSet orgIdRs = statement.executeQuery(nextOrgIdSql);
    int orgId = -1;
    if (orgIdRs.next())
        orgId = orgIdRs.getInt(1);

    // create organisation first
    String orgSql =
        "Insert into ORGANISATION (ORGID, ...) values (" + orgId + ", ...)";
    statement.execute(orgSql);

    // create activities
    for (Activity activity : organisation.getActivityList()) {
        String nextActvIdSql = "select activitySeq.nextval from someVirtualTable";
        ResultSet actvIdRs = statement.executeQuery(nextActvIdSql);
        int actvId = -1;
        if (actvIdRs.next())
            actvId = actvIdRs.getInt(1);
        statement.execute(
            "Insert INTO ACTIVITY (ACTVID, ORGID) values (" + actvId + "," + orgId + ")");
    }

    connection.commit();
} catch (SQLException e) {
    connection.rollback();
}
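If your database generates keys with identity/auto-increment columns instead of sequences, an alternative (a sketch, not part of the original answer; the ORGANISATION/NAME/ORGID names and the Organisation accessors are assumptions) is to let the INSERT generate the key and read it back through JDBC's getGeneratedKeys(). Using a PreparedStatement also avoids building SQL by string concatenation:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class OrganisationJdbcDao {

    // Sketch: insert the parent, read back the generated ORGID, and reuse it
    // for the child ACTIVITY rows.
    public void insert(Connection connection, Organisation organisation) throws SQLException {
        String sql = "insert into ORGANISATION (NAME) values (?)";
        try (PreparedStatement ps =
                 connection.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, organisation.getName());
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                if (keys.next()) {
                    organisation.setId(keys.getLong(1)); // database-generated ORGID
                }
            }
        }
    }
}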

Verify if Insertion query was completed in hibernate

So say I did something like this:
Employee emp=new Employee();
emp.setId(1); // PK
emp.setName("Earl");
emp.setAge("73");
session.save();
According to the tutorial here: http://www.mkyong.com/hibernate/hibernate-one-to-one-relationship-example/ , this should produce an insert into the Employee table, assuming all mappings are correct. However, I keep getting a serialization error. Is it because I am giving the ID a value? The query is generated (however, it takes the Id as NULL... why is this?)
Is there any way I can verify that the insert was done, other than checking the database? Also, please do look at my other queries. I am very new to Hibernate.
This can be done by keeping your id as a generated value; you can then cast the return value of save() to Integer. In the model class, place this annotation on the id:
@GeneratedValue(strategy = GenerationType.AUTO)
private int id;
Then there is no need to set the Employee id; Hibernate automatically generates the value and inserts it. To check that the values were inserted:
int i = (Integer) session.save(emp);
if (i > 0) {
    System.out.println("Values inserted");
}
First of all check that you pass a value to the function session.save():
session.save(emp);
If it still fails...
Maybe your primary key is set to auto-increment, so you need not specify a value for it.
Also try wrapping it in a transaction.
Just try this :
Transaction txn = session.beginTransaction();
Employee emp=new Employee();
//emp.setId(1); // PK
emp.setName("Earl");
emp.setAge("73");
session.save(emp);
txn.commit();
And best wishes..

Tomcat cayenne fetching transient object from relationship

Why is my persistent object returning transient objects when fetching via a relationship?
ObjectContext context = BaseContext.getThreadObjectContext();

// Delete some employee schedules
List<EmployeeSchedule> employeeSchedules = this.getEmployeeSchedules();
for (EmployeeSchedule employeeSchedule : employeeSchedules) {
    context.deleteObject(employeeSchedule);
}

// Add new schedules
for (int i = 0; i < someCondition; i++) {
    EmployeeSchedule employeeSchedule = context.newObject(EmployeeSchedule.class);
    addToEmployeeSchedules(employeeSchedule);
}

context.commitChanges();

List<EmployeeSchedule> es = getEmployeeSchedules(); // returns transient objects
It is inserting the data correctly into the database. Would this be an issue with stale data in the cache?
I'm answering my own question in case someone else gets tripped up by this in the future.
I have a many-to-many relationship.
Employee - EmployeeSchedule - Schedule
According to the delete rules here: http://cayenne.apache.org/docs/3.0/delete-rules.html, I set the employee_id and schedule_id fields in EmployeeSchedule to the Nullify rule on delete.
I also had to configure the join table EmployeeSchedule by making employee_id and schedule_id primary keys in the Modeler and checking the "to Dep PK" checkbox in the employee and schedule dbEntity.
Relevant links: http://objectstyle.org/cayenne/lists/cayenne-user/2004/02/0017.html
http://grokbase.com/t/cayenne/user/085d70sysk/to-dep-checkbox-was-one-to-many-problem
