Hibernate Many-To-One Foreign Key Default 0

I have a table where the parent object has an optional many-to-one relationship. The problem is that the table is set up to default the fkey column to 0.
When selecting with fetch="join", the default of 0 in the fkey column causes Hibernate to try, over and over, to select the row with ID 0 from the other table. Of course that row doesn't exist. How can I tell Hibernate to treat a value of 0 the same as NULL, so it doesn't cycle through 20+ fetches of a relationship that doesn't exist?
<many-to-one name="device" lazy="false" class="Device" not-null="true"
             access="field" cascade="none" not-found="ignore">
    <column name="DEVICEID" default="0" not-null="false"/>
</many-to-one>

There are two ways of doing this, the way that can get ugly performance-wise and the way that is painful and awkward.
The potentially ugly way is done on the ToOne end. Using Hibernate Annotations it would be:
@Entity
public class Foo
{
    ...
    @ManyToOne
    @JoinColumn( name = "DEVICEID" )
    @NotFound( action = NotFoundAction.IGNORE )
    private Device device;
    ...
}
Unfortunately, this forces a preemptive database hit (no lazy loading) because device can be null, and if Hibernate created a lazy Device then "device == null" would never be true.
The other way involves creating a custom UserType that intercepts requests for the ID 0 and returns null for them, and then assigning that to the primary key of Device with #Type. This forces the 0 ~ null interpretation on everyone with a foreign key into Device.

I was able to fix this by creating an id-long type which extends the built-in Long type but, if the id returned from SQL was 0, returns null instead. This kept the allowance of default 0s in our DB while getting Hibernate to stop doing the repeated fetches.
public class IdentifierLongType extends LongType implements IdentifierType {

    @Override
    public Object get(ResultSet rs, String name) throws SQLException {
        long i = rs.getLong(name);
        if (i == 0) {
            return null;
        } else {
            return Long.valueOf(i);
        }
    }
}
The reason for enforcing an explicit default of 0 is that Oracle handles indexing of null values oddly; explicit values can give better query performance than 'where col is [not] null'.
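For completeness, a minimal sketch of how such a type might be referenced from an hbm.xml mapping, assuming a Hibernate 3.x setup that resolves a named Type class directly (class and column names here are illustrative, not from the original answer):

<!-- hypothetical mapping: com.example.IdentifierLongType must be on the classpath -->
<id name="id" type="com.example.IdentifierLongType">
    <column name="DEVICEID"/>
</id>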

I think you are using primitive types for the primary/foreign key columns in your object. If so, try using the wrapper classes instead, because primitive types cannot hold null as a default value.
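To illustrate the difference, a minimal sketch (entity name hypothetical):

@Entity
public class Device {
    @Id
    private long id;    // primitive: an unsaved instance carries 0, never null
    // @Id
    // private Long id; // wrapper: an unsaved instance carries null, which Hibernate can detect
}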

org.springframework.orm.jpa.JpaSystemException: identifier of an instance of com.cc.domain.User was altered from 90 to null; [duplicate]

org.hibernate.HibernateException: identifier of an instance of org.cometd.hibernate.User altered from 12 to 3
In fact, my user table really must change its values dynamically; my Java app is multithreaded. Any ideas how to fix it?
Are you changing the primary key value of a User object somewhere? You shouldn't do that. Check that your mapping for the primary key is correct.
What does your mapping XML file or mapping annotations look like?
You must detach your entity from session before modifying its ID fields
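A minimal sketch of that approach, assuming a JPA EntityManager (entity and id values are hypothetical):

User user = entityManager.find(User.class, oldId);
entityManager.detach(user);   // Hibernate stops tracking this instance
user.setId(newId);            // safe now: no "identifier was altered" on flush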
In my case, the PK field in hbm.xml was of type "integer", but in the bean code it was long.
In my case the getter and setter names were different from the variable name.
private Long stockId;

public Long getStockID() {
    return stockId;
}

public void setStockID(Long stockID) {
    this.stockId = stockID;
}

where it should be

public Long getStockId() {
    return stockId;
}

public void setStockId(Long stockID) {
    this.stockId = stockID;
}
In my case, I solved it by changing the @Id field type from long to Long.
In my particular case, this was caused by a method in my service implementation that needed the Spring @Transactional(readOnly = true) annotation. Once I added that, the issue was resolved. Odd, though, since it was just a select statement.
Make sure you aren't trying to use the same User object more than once while changing the ID. In other words, if you were doing something in a batch type operation:
User user = new User(); // Using the same one over and over won't work
List<Customer> customers = fetchCustomersFromSomeService();
for (Customer customer : customers) {
    // User user = new User(); <-- This would work; you get a new one each time
    user.setId(customer.getId());
    user.setName(customer.getName());
    saveUserToDB(user);
}
In my case, a template had a typo: instead of checking for equality (==) it was using an assignment equals (=).
So I changed the template logic from:
if (user1.id = user2.id) ...
to
if (user1.id == user2.id) ...
and now everything is fine. So, check your views as well!
It is a problem in your update method. Just instantiate a new User before you save changes and you will be fine. If you use mapping between DTO and entity classes, then do this before mapping.
I had this error too. I had a User object and was trying to change its Location; Location was a FK in the User table. I solved this problem with:
@Transactional
public void update(User input) throws Exception {
    User userDB = userRepository.findById(input.getUserId()).orElse(null);
    userDB.setLocation(new Location());
    userMapper.updateEntityFromDto(input, userDB);
    User user = userRepository.save(userDB);
}
Also ran into this error message, but the root cause was of a different flavor from those referenced in the other answers here.
Generic answer:
Make sure that once hibernate loads an entity, no code changes the primary key value in that object in any way. When hibernate flushes all changes back to the database, it throws this exception because the primary key changed. If you don't do it explicitly, look for places where this may happen unintentionally, perhaps on related entities that only have LAZY loading configured.
In my case, I am using a mapping framework (MapStruct) to update an entity, and in the process other referenced entities were being updated as well, as mapping frameworks tend to do by default. I was later replacing the original referenced entity with a new one (in DB terms, changing the value of the foreign key to reference a different row in the related table), but the primary key of the previously-referenced entity had already been updated, and Hibernate attempted to persist that update on flush.
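One hedged way to prevent a mapper from reaching into referenced entities is to exclude the association from the update mapping; a MapStruct sketch (method and field names are assumptions):

@Mapper
public interface UserMapper {
    @Mapping(target = "location", ignore = true)  // don't map into the referenced entity
    void updateEntityFromDto(UserDto dto, @MappingTarget User entity);
}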
I was facing this issue, too.
The target table is a relation table wiring together two IDs from different tables. I have a UNIQUE constraint on the value combination, which replaces the PK.
When updating one of the values of a tuple, this error occurred.
This is what the table looks like (MySQL):
CREATE TABLE my_relation_table (
    mrt_left_id BIGINT NOT NULL,
    mrt_right_id BIGINT NOT NULL,
    UNIQUE KEY uix_my_relation_table (mrt_left_id, mrt_right_id),
    FOREIGN KEY (mrt_left_id) REFERENCES left_table(lef_id),
    FOREIGN KEY (mrt_right_id) REFERENCES right_table(rig_id)
);
The Entity class for the RelationWithUnique entity looks basically like this:
@Entity
@IdClass(RelationWithUnique.class)
@Table(name = "my_relation_table")
public class RelationWithUnique implements Serializable {

    ...

    @Id
    @ManyToOne
    @JoinColumn(name = "mrt_left_id", referencedColumnName = "lef_id")
    private LeftTableEntity leftId;

    @Id
    @ManyToOne
    @JoinColumn(name = "mrt_right_id", referencedColumnName = "rig_id")
    private RightTableEntity rightId;

    ...
I fixed it by
// Usually we would need to detach the object, as we are updating the PK
// (rightId being part of the UNIQUE constraint => part of the PK),
// but that would produce a duplicate entry, so we simply delete the
// old tuple and add the new one.
final RelationWithUnique newRelation = new RelationWithUnique();
newRelation.setLeftId(oldRelation.getLeftId());
newRelation.setRightId(rightId);  // here the value is actually updated
entityManager.remove(oldRelation);
entityManager.persist(newRelation);
Thanks a lot for the hint about the PK; I had just missed it.
The problem can also be a mismatch between the type of your object's PK ("User" in your case) and the type you ask Hibernate for in session.get(type, id);.
In my case the error was identifier of an instance of <skipped> was altered from 16 to 32.
The object's PK type was Integer; Hibernate was asked for a Long.
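In other words, a hedged sketch of the mismatch (entity assumed):

// Entity declares: @Id private Integer id;
User u1 = (User) session.get(User.class, 16L); // Long key for an Integer-typed id: mismatch
User u2 = (User) session.get(User.class, 16);  // Integer key matches the mapped type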
In my case it was because the property was long on the object but int in the mapping XML. This exception should be clearer.
If you are using Spring MVC or Spring Boot, try to avoid @ModelAttribute("user") in one controller while another controller does
model.addAttribute("user", userRepository.findOne(someId));
This situation can produce such an error.
This is an old question, but I'm going to add the fix for my particular issue (Spring Boot, JPA using Hibernate, SQL Server 2014) since it doesn't exactly match the other answers included here:
I had a foreign key, e.g. my_id = '12345', but the value in the referenced column was my_id = '12345 ', with an extra space at the end, which Hibernate didn't like. I removed the space, fixed the part of my code that allowed the extra space, and everything works fine.
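If you suspect the same problem, one hedged way to find offending rows in SQL Server is to compare LEN (which ignores trailing spaces) with DATALENGTH (which does not); table and column names here are placeholders:

-- for a varchar column, DATALENGTH counts one byte per character
SELECT my_id FROM referenced_table WHERE LEN(my_id) <> DATALENGTH(my_id);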
Faced the same issue.
I had an association between 2 beans. In bean A I had defined the variable type as Integer, and in bean B I had defined the same variable as Long.
I changed both of them to Integer, which solved my issue.
I solved this by creating a new instance of the depending object. For example:
instanceA.setInstanceB(new InstanceB());
instanceA.setInstanceB(YOUR_NEW_VALUE);
In my case I had a primary key in the database that contained an accented character, but the corresponding foreign key in another table didn't. For some reason, MySQL allowed this.
It looks like you have changed the identifier of an org.cometd.hibernate.User instance that is managed by the JPA entity context. In this case, create a new User entity object with the appropriate id and set it in place of the original User object.
Are you using multiple transaction managers from the same service class, i.e. does your project have two or more transaction configurations? If so, separate them first.
I got the issue when I fetched an existing DB entity, modified a few fields, and executed
session.save(entity)
instead of
session.merge(entity)
Since the entity already exists in the DB, we should merge() instead of save(). You may have modified the primary key of the fetched entity and then tried to save it within the same transaction to create a new record from the existing one.
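A minimal sketch of the difference (names assumed):

// entity fetched earlier, then modified
User user = (User) session.get(User.class, userId);
user.setEmail(newEmail);
session.merge(user);    // correct for an entity that already exists in the DB
// session.save(user);  // what caused the error: save() treats it as a new record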

How to do ON DUPLICATE KEY UPDATE in Spring Data JPA? [duplicate]

MySQL supports an "INSERT ... ON DUPLICATE KEY UPDATE ..." syntax that allows you to "blindly" insert into the database, and fall back to updating the existing record if one exists.
This is helpful when you want quick transaction isolation and the values you want to update to depend on values already in the database.
As a contrived example, let's say you want to count the number of times a story is viewed on a blog. One way to do that with this syntax might be:
INSERT INTO story_count (id, view_count) VALUES (12345, 1)
ON DUPLICATE KEY UPDATE view_count = view_count + 1
This will be more efficient and more effective than starting a transaction, and handling the inevitable exceptions that occur when new stories hit the front page.
How can we do the same, or accomplish the same goal, with Hibernate?
First, Hibernate's HQL parser will throw an exception because it does not understand the database-specific keywords. In fact, HQL doesn't like any explicit inserts unless it's an "INSERT ... SELECT ....".
Second, Hibernate limits SQL to selects only. Hibernate will throw an exception if you attempt to call session.createSQLQuery("sql").executeUpdate().
Third, Hibernate's saveOrUpdate does not fit the bill in this case. Your tests will pass, but then you'll get production failures if you have more than one visitor per second.
Do I really have to subvert Hibernate?
Have you looked at the Hibernate @SQLInsert annotation?
@Entity
@Table(name = "story_count")
@SQLInsert(sql = "INSERT INTO story_count(id, view_count) VALUES (?, ?) "
        + "ON DUPLICATE KEY UPDATE view_count = view_count + 1")
public class StoryCount
This is an old question, but I was having a similar issue and figured I would add to this topic. I needed to add a log to an existing StatelessSession audit log writer. The existing implementation used a StatelessSession because the caching behavior of the standard session implementation was unnecessary overhead, and we did not want our Hibernate listeners to fire for audit log writing. The implementation was about achieving the highest possible write performance with no extra session interactions.
However, the new log type needed insert-else-update behavior, where we intend to update existing log entries with a transaction time as a "flagging" mechanism. In a StatelessSession, saveOrUpdate() is not offered, so we needed to implement the insert-else-update manually.
In light of these requirements:
You can use the MySQL "insert ... on duplicate key update" behavior via a custom sql-insert for the Hibernate persistent object. You can define the custom sql-insert clause either via annotation (as in the above answer) or via a sql-insert element in a Hibernate xml mapping, e.g.:
<class name="SearchAuditLog" table="search_audit_log"
       persister="com.marin.msdb.vo.SearchAuditLog$UpsertEntityPersister">

    <composite-id name="LogKey" class="SearchAuditLog$LogKey">
        <key-property name="clientId" column="client_id" type="long"/>
        <key-property name="objectType" column="object_type" type="int"/>
        <key-property name="objectId" column="object_id"/>
    </composite-id>

    <property name="transactionTime" column="transaction_time" type="timestamp" not-null="true"/>

    <!-- the ordering of the properties is intentional and explicit in the upsert sql below -->
    <sql-insert><![CDATA[
        insert into search_audit_log (transaction_time, client_id, object_type, object_id)
        values (?,?,?,?) ON DUPLICATE KEY UPDATE transaction_time=now()
    ]]></sql-insert>
</class>
The original poster asks about MySQL specifically. When I implemented the insert-else-update behavior with MySQL, I was getting exceptions when the 'update path' of the SQL executed. Specifically, MySQL reports 2 rows affected when an existing row is updated this way (ostensibly because the existing row is deleted and the new row is inserted). See this issue for more detail on that particular feature.
So when the update returned 2x the number of rows affected to Hibernate, Hibernate threw a BatchedTooManyRowsAffectedException, rolled back the transaction, and propagated the exception. Even if you were to catch and handle the exception, the transaction had already been rolled back by that point.
After some digging I found that this was an issue with the entity persister Hibernate was using. In my case Hibernate was using SingleTableEntityPersister, which defines an Expectation that the number of rows updated should match the number of rows defined in the batch operation.
The final tweak necessary to get this behavior to work was to define a custom persister (as shown in the above xml mapping). In this instance all we had to do was extend SingleTableEntityPersister and 'override' the insert Expectation. E.g. I just tacked this static class onto the persistence object and defined it as the custom persister in the Hibernate mapping:
public static class UpsertEntityPersister extends SingleTableEntityPersister {

    public UpsertEntityPersister(PersistentClass arg0, EntityRegionAccessStrategy arg1,
            SessionFactoryImplementor arg2, Mapping arg3) throws HibernateException {
        super(arg0, arg1, arg2, arg3);
        this.insertResultCheckStyles[0] = ExecuteUpdateResultCheckStyle.NONE;
    }
}
It took quite a while digging through Hibernate code to find this; I wasn't able to find any topics on the net with a solution.
If you are using Grails, I found this solution, which did not require moving your domain class into the Java world and using @SQLInsert annotations:
Create a custom Hibernate Configuration
Override the PersistentClass map
Add your custom INSERT sql to the persistent classes you want, using ON DUPLICATE KEY.
For example, if you have a domain object called Person and you want its INSERTs to be INSERT ... ON DUPLICATE KEY UPDATE, you would create a configuration like so:
public class MyCustomConfiguration extends GrailsAnnotationConfiguration {

    public MyCustomConfiguration() {
        super();
        classes = new HashMap<String, PersistentClass>() {
            @Override
            public PersistentClass put(String key, PersistentClass value) {
                if (Person.class.getName().equalsIgnoreCase(key)) {
                    value.setCustomSQLInsert(
                        "insert into person (version, created_by_id, date_created, last_updated, name) "
                            + "values (?, ?, ?, ?, ?) on duplicate key update id=LAST_INSERT_ID(id)",
                        true, ExecuteUpdateResultCheckStyle.COUNT);
                }
                return super.put(key, value);
            }
        };
    }
}
and add this as your Hibernate Configuration in DataSource.groovy:
dataSource {
    pooled = true
    driverClassName = "com.mysql.jdbc.Driver"
    configClass = 'MyCustomConfiguration'
}
Just a note: be careful using LAST_INSERT_ID, as it will NOT be set correctly if the UPDATE is executed instead of the INSERT, unless you set it explicitly in the statement, e.g. id=LAST_INSERT_ID(id). I haven't checked where GORM gets the ID from, but I'm assuming it uses LAST_INSERT_ID somewhere.
Hope this helps.

Using not-found attribute of one-to-many mapping of hibernate

As per the Hibernate docs for the one-to-many XML mapping tag, there is an attribute called not-found:
http://docs.jboss.org/hibernate/orm/3.3/reference/en-US/html/collections.html#collections-onetomany
The doc says:
not-found (optional - defaults to exception): specifies how cached identifiers that reference missing rows will be handled. ignore will treat a missing row as a null association.
What is the use of this attribute? I tried to create a mapping between Product and Parts, with Product having a set of Parts, using the mapping details below:
<set name="parts" cascade="all">
    <key column="productSerialNumber" not-null="true"/>
    <one-to-many class="Part" not-found="ignore"/>
</set>
Then I wrote my Java code as:
public static void main(String[] args) {
    Session session = HibernateUtil.getSessionFactory().getCurrentSession();
    session.beginTransaction();
    Product prod = (Product) session.get(Product.class, 1);
    session.getTransaction().commit();
    System.out.println(prod);
    HibernateUtil.getSessionFactory().close();
}
I was expecting null for the missing Parts in my set, since I configured not-found="ignore" in my mapping file. But I got the regular exception instead - org.hibernate.LazyInitializationException.
Please help me understand the use of this attribute. What are cached identifiers here?
The not-found attribute has nothing to do with lazy loading. It's used to handle incoherences in your database.
Suppose you know nothing about good database practices, and have an order_line table containing an order_id column, supposed to reference the order it belongs to. And suppose that since you know nothing about good practices, you don't have a foreign key constraint on this column.
Deleting an order will thus be possible even if it has order lines referencing it. When loading such an OrderLine, Hibernate will load the Order and fail with an exception, because the Order is supposed to exist but doesn't.
Using not-found="ignore" makes Hibernate ignore the dangling order_id in the OrderLine and initialize the order field to null.
In a well-designed database, this attribute should never be used.

Hibernate handle long 0 value instead of NULL in ManyToOne relations

I use Hibernate to access a legacy DB. For some tables the parent-child referential integrity is not enforced, and the long value 0 is used instead of NULL in some "parent" columns of child tables to denote "no parent".
I still want to use these relations in @ManyToOne and @OneToMany fields, but I get an EntityNotFound error since the 0 value does not correspond to any record in the master table.
What are my options?
Use the @NotFound annotation:
@NotFound(action = NotFoundAction.IGNORE)
See http://docs.jboss.org/hibernate/core/3.6/reference/en-US/html_single/#mapping-declaration-manytoone
Instead of @JoinColumn, @JoinFormula can be used, like this:
@JoinFormula(value = "CASE the0isNullColumn"
    + " WHEN 0"
    + " THEN NULL"
    + " ELSE the0isNullColumn"
    + " END")
The expression checks the column and, if it is 0, returns NULL; Hibernate then doesn't search for the related entity.
You can map it to java.lang.Long, whose default value is null. Or you can use a @PostLoad callback and null the field if it is 0. You can also use a @Formula and ignore the 0.
The @Formula, as described in the documentation, can be used for join conditions.
Since I don't know your data model, providing a valid example is tricky. Try a condition like:
id_fk is not null or id_fk <> 0
If that does not suit your needs, you can write your own query loader.
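For the @PostLoad option mentioned above, a minimal hedged sketch (column and field names are assumptions):

@Column(name = "PARENT_ID")
private Long parentId;

@PostLoad
private void zeroToNull() {
    if (parentId != null && parentId == 0L) {
        parentId = null;  // legacy convention: 0 means "no parent"
    }
}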
If you are using some sort of logging, enable the show_sql property and set the org.hibernate.SQL logger to DEBUG in your config.
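A hedged sketch of that configuration (log4j syntax assumed; adjust for your logging framework):

# Hibernate property
hibernate.show_sql=true

# log4j.properties: log the generated SQL
log4j.logger.org.hibernate.SQL=DEBUG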

JPA or Hibernate - Joining tables on columns of different types

Is there a way to tell Hibernate to wrap a column in a to_char when using it to join to another table, or conversely to convert a NUMBER to a VARCHAR? I have a situation where one table contains a generic key column of type VARCHAR which stores the id of another table, where it is a NUMBER. I am getting a SQL exception when Hibernate executes the SQL it generates, because it uses '=' to compare the two columns.
Thanks...
P.S. I know this is not ideal but I am stuck with the schema so I have to deal with it.
This should be possible using a formula in your many-to-one. From section 5.1.22, "Column and formula elements" (a solution also mentioned in this previous answer):
column and formula attributes can even be combined within the same property or association mapping to express, for example, exotic join conditions.
<many-to-one name="homeAddress" class="Address"
             insert="false" update="false">
    <column name="person_id" not-null="true" length="10"/>
    <formula>'MAILING'</formula>
</many-to-one>
With annotations (if you are using Hibernate 3.5.0-Beta-2+, see HHH-4382):
@ManyToOne
@Formula(value = "( select v_pipe_offerprice.offerprice_fk from v_pipe_offerprice where v_pipe_offerprice.id = id )")
public OfferPrice getOfferPrice() { return offerPrice; }
Or maybe check @JoinColumnsOrFormulas:
@ManyToOne
@JoinColumnsOrFormulas({
    @JoinColumnOrFormula(formula = @JoinFormula(
        value = "SUBSTR(product_idnf, 1, 3)",
        referencedColumnName = "product_idnf")) })
@Fetch(FetchMode.JOIN)
private Product productFamily;
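Applied to the original question (a VARCHAR generic key referencing a NUMBER id on Oracle), a hedged sketch along the same lines, with column and class names as assumptions:

@ManyToOne
@JoinColumnsOrFormulas({
    @JoinColumnOrFormula(formula = @JoinFormula(
        value = "TO_NUMBER(generic_key)",  // convert the VARCHAR key so the join compares NUMBER to NUMBER
        referencedColumnName = "id")) })
private Target target;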
