Can Hibernate work with MySQL's "ON DUPLICATE KEY UPDATE" syntax?

MySQL supports an "INSERT ... ON DUPLICATE KEY UPDATE ..." syntax that allows you to "blindly" insert into the database, and fall back to updating the existing record if one exists.
This is helpful when you want quick transaction isolation and when the values you want to write depend on values already in the database.
As a contrived example, let's say you want to count the number of times a story is viewed on a blog. One way to do that with this syntax might be:
INSERT INTO story_count (id, view_count) VALUES (12345, 1)
ON DUPLICATE KEY UPDATE view_count = view_count + 1
This will be more efficient and more effective than starting a transaction, and handling the inevitable exceptions that occur when new stories hit the front page.
How can we do the same, or accomplish the same goal, with Hibernate?
First, Hibernate's HQL parser will throw an exception because it does not understand database-specific keywords. In fact, HQL doesn't allow explicit inserts at all unless they take the form "INSERT ... SELECT ...".
Second, Hibernate limits native SQL to selects; it will throw an exception if you attempt to call session.createSQLQuery("sql").executeUpdate().
Third, Hibernate's saveOrUpdate does not fit the bill here: your tests will pass, but you'll get production failures as soon as you have more than one visitor per second, because of the read-modify-write race.
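For reference, this is roughly the racy read-modify-write that saveOrUpdate forces on you (a sketch; StoryCount, its accessors, and an in-progress transaction are assumed, not from the original post):
StoryCount count = (StoryCount) session.get(StoryCount.class, 12345L);
if (count == null) {
    // Two concurrent requests can both take this branch; the second
    // insert then dies with a duplicate-key violation.
    count = new StoryCount(12345L, 1L);
} else {
    // Or both read the same value here and one increment is lost.
    count.setViewCount(count.getViewCount() + 1);
}
session.saveOrUpdate(count);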
Do I really have to subvert Hibernate?

Have you looked at the Hibernate @SQLInsert annotation?
@Entity
@Table(name = "story_count")
@SQLInsert(sql = "INSERT INTO story_count (id, view_count) VALUES (?, ?) "
        + "ON DUPLICATE KEY UPDATE view_count = view_count + 1")
public class StoryCount
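With this mapping in place, recording a view is just a save; every save runs the custom upsert SQL. A minimal sketch (sessionFactory and the StoryCount constructor are assumed, not part of the answer above):
// Runs the custom INSERT ... ON DUPLICATE KEY UPDATE: the row is created
// with view_count = 1, or the existing counter is incremented atomically.
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
session.save(new StoryCount(12345L, 1L));
tx.commit();
session.close();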

This is an old question, but I was having a similar issue and figured I would add to this topic. I needed to add a new log type to an existing StatelessSession audit log writer. The existing implementation used a StatelessSession because the caching behavior of the standard Session was unnecessary overhead, and we did not want our Hibernate listeners to fire for audit log writing. The implementation was about achieving the highest write performance possible, with no session-level interactions.
However, the new log type needed insert-else-update behavior: we update existing log entries with a transaction time as a kind of flag. A StatelessSession does not offer saveOrUpdate(), so we needed to implement the insert-else-update manually.
In light of these requirements:
You can use MySQL's "insert ... on duplicate key update" behavior via a custom sql-insert for the Hibernate persistent object. You can define the custom sql-insert clause either via annotation (as in the above answer) or via a sql-insert element in a Hibernate XML mapping, e.g.:
<class name="SearchAuditLog" table="search_audit_log" persister="com.marin.msdb.vo.SearchAuditLog$UpsertEntityPersister">
<composite-id name="LogKey" class="SearchAuditLog$LogKey">
<key-property
name="clientId"
column="client_id"
type="long"
/>
<key-property
name="objectType"
column="object_type"
type="int"
/>
<key-property
name="objectId"
column="object_id"
/>
</composite-id>
<property
name="transactionTime"
column="transaction_time"
type="timestamp"
not-null="true"
/>
<!-- the ordering of the properties is intentional and explicit in the upsert sql below -->
<sql-insert><![CDATA[
insert into search_audit_log (transaction_time, client_id, object_type, object_id)
values (?,?,?,?) ON DUPLICATE KEY UPDATE transaction_time=now()
]]>
</sql-insert>
The original poster asks about MySQL specifically. When I implemented the insert-else-update behavior with MySQL, I was getting exceptions when the 'update path' of the SQL executed. Specifically, MySQL reports 2 affected rows when a single existing row is updated (this is documented behavior for ON DUPLICATE KEY UPDATE: 1 affected row for an insert, 2 for an update). See the MySQL documentation for more detail on that particular feature.
So when the update path reported twice the expected number of affected rows to Hibernate, Hibernate threw a BatchedTooManyRowsAffectedException, rolled back the transaction, and propagated the exception. Even if you catch and handle the exception, the transaction has already been rolled back by that point.
After some digging I found that this was an issue with the entity persister Hibernate was using. In my case Hibernate was using SingleTableEntityPersister, which defines an Expectation that the number of rows updated must match the number of rows defined in the batch operation.
The final tweak necessary to get this behavior to work was to define a custom persister (as shown in the XML mapping above). All we had to do was extend SingleTableEntityPersister and override the insert Expectation. I just tacked this static class onto the persistence object and defined it as the custom persister in the Hibernate mapping:
public static class UpsertEntityPersister extends SingleTableEntityPersister {
    public UpsertEntityPersister(PersistentClass arg0, EntityRegionAccessStrategy arg1,
            SessionFactoryImplementor arg2, Mapping arg3) throws HibernateException {
        super(arg0, arg1, arg2, arg3);
        // Disable the affected-row-count check on the insert so MySQL's
        // "2 rows affected" on the update path is not treated as an error.
        this.insertResultCheckStyles[0] = ExecuteUpdateResultCheckStyle.NONE;
    }
}
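With the persister in place, the write path stays a plain stateless insert. A sketch (the SearchAuditLog constructor and sessionFactory are assumed):
// Either inserts a new row or, via the custom <sql-insert>, refreshes
// transaction_time on the existing row; the NONE check style stops
// Hibernate from rejecting MySQL's "2 rows affected" on the update path.
StatelessSession session = sessionFactory.openStatelessSession();
Transaction tx = session.beginTransaction();
session.insert(new SearchAuditLog(clientId, objectType, objectId));
tx.commit();
session.close();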
It took quite a while digging through the Hibernate code to find this; I wasn't able to find any topics on the net with a solution to it.

If you are using Grails, I found this solution, which does not require moving your domain class into the Java world and using @SQLInsert annotations:
Create a custom Hibernate Configuration
Override the PersistentClass Map
Add your custom INSERT SQL, using ON DUPLICATE KEY UPDATE, to the persistent classes you want.
For example, if you have a domain object called Person and you want its INSERTs to be INSERT ... ON DUPLICATE KEY UPDATE, you would create a configuration like so:
public class MyCustomConfiguration extends GrailsAnnotationConfiguration {

    public MyCustomConfiguration() {
        super();
        classes = new HashMap<String, PersistentClass>() {
            @Override
            public PersistentClass put(String key, PersistentClass value) {
                if (Person.class.getName().equalsIgnoreCase(key)) {
                    // Swap in the upsert SQL; COUNT keeps Hibernate's
                    // affected-row check in place.
                    value.setCustomSQLInsert(
                        "insert into person (version, created_by_id, date_created, "
                            + "last_updated, name) values (?, ?, ?, ?, ?) "
                            + "on duplicate key update id = LAST_INSERT_ID(id)",
                        true,
                        ExecuteUpdateResultCheckStyle.COUNT);
                }
                return super.put(key, value);
            }
        };
    }
}
Then add this as your Hibernate configuration in DataSource.groovy:
dataSource {
    pooled = true
    driverClassName = "com.mysql.jdbc.Driver"
    configClass = 'MyCustomConfiguration'
}
Just a note: be careful using LAST_INSERT_ID, as it will NOT be set correctly when the UPDATE path executes instead of the INSERT, unless you set it explicitly in the statement, e.g. id=LAST_INSERT_ID(id). I haven't checked where GORM gets the ID from, but I'm assuming it uses LAST_INSERT_ID somewhere.
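To make that concrete, the MySQL-documented trick folds LAST_INSERT_ID into the update path itself, so it refers to the existing row's id whichever path runs (hypothetical table with an auto-increment id and a unique key on name):
INSERT INTO person (name) VALUES ('Alice')
ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id);
-- After either path, SELECT LAST_INSERT_ID() returns this row's id.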
Hope this helps.

Related

Custom SQL for Order in JPA Criteria API

I'm switching from the (unfortunately) deprecated Hibernate Criteria API to the JPA Criteria API. We have a custom implementation of Hibernate's Order interface that redefines the SQL generated for it. The case is quite involved, as we need to use a giant SELECT with subqueries. We implemented the toSqlString method of the interface to return this huge SQL, and we need a way to migrate it to the JPA Criteria API.
The question is: is there a way in JPA Criteria API to redefine the SQL generated? Or is there a weird way to use Hibernate Order with JPA Criteria API?
Thank you!
UPDATE: Although @Tobias Liefke's suggestion is quite interesting, my SQL varies too much to create a function class per SQL statement. I tried implementing a single function class and passing the SQL to it as an argument, but that didn't work: the rendered SQL was enclosed in single quotes, so it was sent to the database as a parameter and not as part of the generated query.
You can't use SQL fragments in JPQL or criteria queries...
... except when ...
1. Calling a function
JPA and Hibernate allow you to use functions in their expressions, for example:
... ORDER BY trim(entity.label) ASC
Or, with the Criteria API:
query.orderBy(criteriaBuilder.asc(
criteriaBuilder.function("trim", String.class, root.get(ExampleEntity_.label))));
The problem is that this is not really a call to the SQL function trim, but a call to a JPA function, which must be registered (Hibernate already does this for the most common SQL functions).
Fortunately you can define your own JPA functions in a DialectResolver:
public class MyDialectResolver implements DialectResolver {
    @Override
    public Dialect resolveDialect(final DialectResolutionInfo info) {
        Dialect dialect = StandardDialectResolver.INSTANCE.resolveDialect(info);
        dialect.registerFunction("myOrderFunction", ...);
        return dialect;
    }
}
registerFunction takes two parameters: the first is the name of the function in JPA, the second is the mapping to SQL.
Don't forget to declare your dialect resolver in your persistence.xml:
<persistence-unit name="database">
    <properties>
        <property name="hibernate.dialect_resolvers"
                  value="my.package.MyDialectResolver"/>
    </properties>
</persistence-unit>
You could now create your own function in your SQL server which contains your huge SQL and register that as function:
dialect.registerFunction("myOrderFunction",
new StandardSQLFunction("myOrderFunctionInSQL", StringType.INSTANCE));
Or you could write your own mapping, which includes your huge SQL:
public class MyOrderFunction implements SQLFunction {
    @Override
    public String render(Type firstArgumentType, List arguments,
            SessionFactoryImplementor factory) throws QueryException {
        return my_huge_SQL;
    }
    // ...
}
And register that one:
dialect.registerFunction("myOrderFunction", new MyOrderFunction());
Another advantage of this solution: you could define different SQL depending on the actual database dialect.
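For instance, something like this inside resolveDialect (a hypothetical variant of MyOrderFunction that takes its SQL as a constructor argument; the SQL constants are placeholders):
// Pick the SQL body per dialect family before registering the function.
if (dialect instanceof MySQLDialect) {
    dialect.registerFunction("myOrderFunction", new MyOrderFunction(MYSQL_ORDER_SQL));
} else {
    dialect.registerFunction("myOrderFunction", new MyOrderFunction(ANSI_ORDER_SQL));
}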
2. Using a formula
You could use an additional attribute for your entity:
@Formula("my huge SQL")
private String orderAttribute;
You could now sort by this attribute:
... ORDER BY entity.orderAttribute ASC
Or:
query.orderBy(criteriaBuilder.asc(root.get(ExampleEntity_.orderAttribute)));
I only recommend this solution if you need the result of the huge SQL in your model anyway. Otherwise it will only pollute your entity model and add the SQL to every query of your entity (unless you mark it with @Basic(fetch = FetchType.LAZY) and use bytecode instrumentation).
A similar solution would be to define a @Subselect entity with the huge SQL, with the same drawbacks.
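For reference, a @Subselect mapping might look like this (a sketch with hypothetical names; the subselect body would be your huge SQL):
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Immutable;
import org.hibernate.annotations.Subselect;

// Hibernate inlines the subselect like a view; the entity is read-only.
@Entity
@Immutable
@Subselect("select e.id as id, 0 as sortKey /* your huge SQL */ from example_entity e")
public class ExampleOrdering {

    @Id
    private Long id;

    private Long sortKey;

    public Long getId() { return id; }
    public Long getSortKey() { return sortKey; }
}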

Using not-found attribute of one-to-many mapping of hibernate

As per the Hibernate docs for the one-to-many XML mapping tag, there is an attribute called not-found:
http://docs.jboss.org/hibernate/orm/3.3/reference/en-US/html/collections.html#collections-onetomany
The docs say:
not-found (optional - defaults to exception): specifies how cached identifiers that reference missing rows will be handled. ignore will treat a missing row as a null association.
What is the use of this attribute? I tried to create a mapping between Product and Part, with Product having a set of Parts, using the mapping below:
<set name="parts" cascade="all">
<key column="productSerialNumber" not-null="true" />
<one-to-many class="Part" not-found="ignore"/>
</set>
Then I wrote my Java code as:
public static void main(String[] args) {
    Session session = HibernateUtil.getSessionFactory().getCurrentSession();
    session.beginTransaction();
    Product prod = (Product) session.get(Product.class, 1);
    session.getTransaction().commit();
    System.out.println(prod);
    HibernateUtil.getSessionFactory().close();
}
I was expecting null for my set of Parts, since I configured not-found="ignore" in my mapping file. But I got the usual org.hibernate.LazyInitializationException instead.
Please help me understand the use of this attribute. What are "cached identifiers" here?
The not-found attribute has nothing to do with lazy loading. It's used to handle inconsistencies in your database.
Suppose you know nothing about good database practices and have an order_line table containing an order_id column that is supposed to reference the order it belongs to. And suppose that, since you know nothing about good practices, you don't have a foreign key constraint on this column.
Deleting an order will thus be possible even if order lines still reference it. When loading such an OrderLine, Hibernate tries to load the Order and fails with an exception, because the row is supposed to exist but doesn't.
Using not-found="ignore" makes Hibernate ignore such a dangling order_id in the OrderLine, and it will thus initialize the order field to null.
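In mapping terms, the example would look something like this (hypothetical classes and columns, for illustration only):
<class name="OrderLine" table="order_line">
    <id name="id" column="id">
        <generator class="native"/>
    </id>
    <!-- order_id has no FK constraint, so the referenced order may be
         gone; ignore the dangling id instead of throwing, which leaves
         orderLine.getOrder() == null -->
    <many-to-one name="order" class="Order" column="order_id" not-found="ignore"/>
</class>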
In a well-designed database, this attribute should never be used.

Hibernate/Spring HibernateTemplate.findByCriteria(DetachedCriteria dc) executes a SQL update on a view

I am trying to search a view based on given criteria. This view has a few fields for multiple different entities in my application that a user may want to search for.
When I enter the name of an entity I want to search for, I add a restriction for the name field to the detached criteria before calling .findByCriteria(). This causes .findByCriteria() to retrieve a list of results with the name I am looking for.
Also, when I look through my log, I can see Hibernate issuing a select statement.
I have now added another entity to my view, with a few searchable fields. When I try to search for a field related to this new entity, I get an exception in my log.
When I look through the log with the exception, I can see Hibernate issuing a select statement with an update statement right after the select (I am not trying to update a record, just retrieve a list).
So why is hibernate calling an update when I am calling .findByCriteria() for my new entity?
org.hibernate.exception.SQLGrammarException: Could not execute JDBC batch update
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:90)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:266)
SQL that is executed:
Hibernate:
select
*
from
( select
this_.SEARCH_ID as SEARCH1_35_0_,
this_.ST_NM as ST24_35_0_
from
SEARCH_RESULT this_
where
this_.LOAN_TYPE=? )
where
rownum <= ?
DEBUG 2012-03-21 11:37:19,332 142195 (http-8181-3:org.springframework.orm.hibernate3.HibernateTemplate):
[org.springframework.orm.hibernate3.HibernateAccessor.flushIfNecessary(HibernateAccessor.java:389)]
Eagerly flushing Hibernate session
DEBUG 2012-03-21 11:37:19,384 142247 (http-8181-3:org.hibernate.SQL):
[org.hibernate.jdbc.util.SQLStatementLogger.logStatement(SQLStatementLogger.java:111)]
update
SEARCH_RESULT
set
ADDR_LINE1=?,
ASSGND_REGION=?,
BASE_DEAL_ID=?,
ST_NM=?
where
SEARCH_ID=?
There is probably an update happening because Hibernate is set up to autoflush before executing queries, so if the persistence context thinks it has dirty data, it will try to update it. Without seeing the code I can't be sure, but I'd guess that even though SEARCH_RESULT is a view, your corresponding Java object is annotated on the getters and has matching setters. Hibernate doesn't distinguish between tables and views, and if you call a setter, Hibernate will assume it has data changes to flush.
You can tweak how you build your Java objects for views by adding the @Immutable annotation (or Hibernate's @Entity(mutable = false), depending on which version you're using). This should be enough to tell Hibernate not to flush changes. You can also annotate the fields directly and get rid of your setters, so that consumers of the SearchResult object know that it's read-only.
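A minimal sketch of such a read-only mapping (entity and column names are inferred from the log above, so treat them as assumptions):
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import org.hibernate.annotations.Immutable;

// @Immutable means the entity can never be dirty, so the autoflush
// before a query will not schedule an UPDATE against the view.
@Entity
@Immutable
@Table(name = "SEARCH_RESULT")
public class SearchResult {

    @Id
    @Column(name = "SEARCH_ID")
    private Long searchId;

    @Column(name = "ST_NM")
    private String stateName;

    public Long getSearchId() { return searchId; }
    public String getStateName() { return stateName; }
    // No setters: the object is read-only by construction.
}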

Hibernate - apply locks to parent tables in polymorphic queries

I have two objects:
public class ParentObject {
    // some basic bean info
}

public class ChildObject extends ParentObject {
    // more bean info
}
Each of these objects corresponds to a different table in the database. I am using Hibernate to query the ChildObject, which will in turn populate the parent object's values.
I have defined my mapping file as so:
<hibernate-mapping>
    <class name="ParentObject" table="PARENT_OBJECT">
        <id name="id" column="parent_id">
            <generator class="assigned"/>
        </id>
        <property name="beaninfo"/>
        <!-- more properties -->
        <joined-subclass name="ChildObject" table="CHILD_OBJECT">
            <key column="CHILD_ID"/>
            <!-- properties again -->
        </joined-subclass>
    </class>
</hibernate-mapping>
I can use Hibernate to query the two tables without issue. I use:
session.createQuery("from ChildObject as child");
This is all basic Hibernate stuff. However, the part I am having issues with is that I need to apply locks to all the tables in the query.
I can set the lock mode for the child object by using query.setLockMode("child", LockMode.?). However, I cannot seem to find a way to place a lock on the parent table.
I am new to Hibernate, and am still working around a few mental roadblocks. The question is: how can I place a lock on the parent table?
I was wondering if there was a way around having to do this without undoing the Polymorphic structure that I have set up.
Why do you have to lock both tables? I'm asking because depending on what you're trying to do there may be alternative solutions to achieve what you want.
The way things are, Hibernate normally only locks the root table unless you're using some exotic database / dialect. So, chances are you're already locking your ParentObject table rather than ChildObject.
Update (based on comment):
Since you are using an exotic database :-) which doesn't support FOR UPDATE syntax, Hibernate locks the "primary" tables as they are specified in the query ("primary" here being the table mapped for the entity listed in the FROM clause, not the root of the hierarchy - e.g. ChildObject, not ParentObject). Since you want to lock both tables, I'd suggest you try one of the following:
Call session.lock() on the entities after you've obtained them from the query (sketched after the next option). This should lock the root table of the hierarchy; however, I'm not 100% sure whether it'll work, because technically you're trying to "upgrade" a lock that's already being held on a given entity.
Try to cheat by explicitly naming the ParentObject table in your query and requesting a lock mode for it:
String hql = "select c from ChildObject c, ParentObject p where c.id = p.id";
session.createQuery(hql)
       .setLockMode("c", LockMode.READ)
       .setLockMode("p", LockMode.READ)
       .list();
