I have two objects:
public class ParentObject {
// some basic bean info
}
public class ChildObject extends ParentObject {
// more bean info
}
Each of these objects corresponds to a different table in the database. I am using Hibernate to query the ChildObject, which will in turn populate the parent object's values.
I have defined my mapping file like so:
<hibernate-mapping>
<class name="ParentObject"
table="PARENT_OBJECT">
<id name="id"
column="parent"id">
<generator class="assigned"/>
</id>
<property name="beaninfo"/>
<!-- more properties -->
<joined-subclass name="ChildObject" table="CHILD_OBJECT">
<key column="CHILD_ID"/>
<!--properties again-->
</joined-subclass>
</class>
</hibernate-mapping>
I can use Hibernate to query the two tables without issue.
I use
session.createQuery("from ChildObject as child ");
This is all basic Hibernate stuff. However, the part I am having issues with is that I need to apply locks to all the tables in the query.
I can set the lock mode for the child object by using query.setLockMode("child", LockMode.?). However, I cannot seem to find a way to place a lock on the parent table.
I am new to Hibernate, and am still working around a few mental roadblocks. The question is: how can I place a lock on the parent table?
I was wondering if there was a way to do this without undoing the polymorphic structure that I have set up.
Why do you have to lock both tables? I'm asking because depending on what you're trying to do there may be alternative solutions to achieve what you want.
The way things are, Hibernate normally only locks the root table unless you're using some exotic database / dialect. So, chances are you're already locking your ParentObject table rather than ChildObject.
Update (based on comment):
Since you are using an exotic database :-) which doesn't support FOR UPDATE syntax, Hibernate is locking the "primary" tables as they are specified in the query ("primary" in this case being the table mapped for the entity listed in the FROM clause, not the root of the hierarchy - e.g. ChildObject, not ParentObject). Since you want to lock both tables, I'd suggest you try one of the following:
Call session.lock() on entities after you've obtained them from the query. This should lock the root table of the hierarchy, however I'm not 100% sure on whether it'll work because technically you're trying to "upgrade" the lock that's already being held on a given entity.
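For the first option, a minimal sketch (assuming Hibernate 3.x; the alias and entity name are taken from the question, and LockMode.UPGRADE is just one possible choice of mode):
// run the polymorphic query first, then try to lock each result;
// as noted above, upgrading a lock that is already held may or may not work
List<?> children = session.createQuery("from ChildObject as child").list();
for (Object child : children) {
    session.lock(child, LockMode.UPGRADE);
}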
Try to cheat by explicitly naming ParentObject table in your query and requesting lock mode for it:
String hql = "select c from ChildObject c, ParentObject p where c.id = p.id";
session.createQuery(hql)
.setLockMode("c", LockMode.READ)
.setLockMode("p", LockMode.READ).list();
MySQL supports an "INSERT ... ON DUPLICATE KEY UPDATE ..." syntax that allows you to "blindly" insert into the database, and fall back to updating the existing record if one exists.
This is helpful when you want quick transaction isolation and the values you want to update depend on values already in the database.
As a contrived example, let's say you want to count the number of times a story is viewed on a blog. One way to do that with this syntax might be:
INSERT INTO story_count (id, view_count) VALUES (12345, 1)
ON DUPLICATE KEY UPDATE view_count = view_count + 1
This will be more efficient and more effective than starting a transaction and handling the inevitable exceptions that occur when new stories hit the front page.
How can we do the same, or accomplish the same goal, with Hibernate?
First, Hibernate's HQL parser will throw an exception because it does not understand the database-specific keywords. In fact, HQL doesn't allow explicit inserts at all unless they take the form "INSERT ... SELECT ...".
Second, Hibernate limits SQL to selects only. Hibernate will throw an exception if you attempt to call session.createSQLQuery("sql").executeUpdate().
Third, Hibernate's saveOrUpdate does not fit the bill in this case. Your tests will pass, but then you'll get production failures if you have more than one visitor per second.
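To make the race concrete, the read-modify-write that saveOrUpdate implies looks roughly like this (a sketch only; the StoryCount constructor and accessors are hypothetical):
// two requests can both read view_count = N and both write N + 1, losing an increment,
// or both try to insert the same id and one fails with a constraint violation
StoryCount count = (StoryCount) session.get(StoryCount.class, 12345L);
if (count == null) {
    count = new StoryCount(12345L, 1L); // hypothetical constructor
} else {
    count.setViewCount(count.getViewCount() + 1); // hypothetical accessors
}
session.saveOrUpdate(count);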
Do I really have to subvert Hibernate?
Have you looked at the Hibernate @SQLInsert annotation?
@Entity
@Table(name = "story_count")
@SQLInsert(sql = "INSERT INTO story_count (id, view_count) VALUES (?, ?) "
        + "ON DUPLICATE KEY UPDATE view_count = view_count + 1")
public class StoryCount { ... }
This is an old question, but I was having a similar issue and figured I would add to this topic. I needed to add a new log type to an existing StatelessSession-based audit log writer. The existing implementation used a StatelessSession because the caching behavior of the standard Session was unnecessary overhead and we did not want our Hibernate listeners to fire for audit log writing. The implementation was about achieving the highest write performance possible.
However, the new log type needed insert-else-update behavior, where we update existing log entries with a transaction time as a kind of "flag". A StatelessSession does not offer saveOrUpdate(), so we needed to implement the insert-else-update manually.
In light of these requirements:
You can use the MySQL "insert ... on duplicate key update" behavior via a custom sql-insert for the Hibernate persistent object. You can define the custom sql-insert clause either via annotation (as in the above answer) or via a sql-insert element in a Hibernate XML mapping, e.g.:
<class name="SearchAuditLog" table="search_audit_log" persister="com.marin.msdb.vo.SearchAuditLog$UpsertEntityPersister">
<composite-id name="LogKey" class="SearchAuditLog$LogKey">
<key-property
name="clientId"
column="client_id"
type="long"
/>
<key-property
name="objectType"
column="object_type"
type="int"
/>
<key-property
name="objectId"
column="object_id"
/>
</composite-id>
<property
name="transactionTime"
column="transaction_time"
type="timestamp"
not-null="true"
/>
<!-- the ordering of the properties is intentional and explicit in the upsert sql below -->
<sql-insert><![CDATA[
insert into search_audit_log (transaction_time, client_id, object_type, object_id)
values (?,?,?,?) ON DUPLICATE KEY UPDATE transaction_time=now()
]]>
</sql-insert>
</class>
The original poster asks about MySQL specifically. When I implemented the insert-else-update behavior with MySQL, I was getting exceptions when the 'update path' of the SQL executed. Specifically, MySQL was reporting that 2 rows were changed when only 1 row was updated (ostensibly because the existing row is deleted and the new row is inserted). See this issue for more detail on that particular feature.
So when the update reported twice the expected number of affected rows to Hibernate, Hibernate threw a BatchedTooManyRowsAffectedException, rolled back the transaction, and propagated the exception. Even if you caught and handled the exception, the transaction had already been rolled back by that point.
After some digging I found that this was an issue with the entity persister that Hibernate was using. In my case Hibernate was using SingleTableEntityPersister, which defines an Expectation that the number of rows updated should match the number of rows defined in the batch operation.
The final tweak necessary to get this behavior to work was to define a custom persister (as shown in the above XML mapping). In this instance all we had to do was extend SingleTableEntityPersister and 'override' the insert Expectation. E.g. I just tacked this static class onto the persistence object and defined it as the custom persister in the Hibernate mapping:
public static class UpsertEntityPersister extends SingleTableEntityPersister {
    public UpsertEntityPersister(PersistentClass arg0, EntityRegionAccessStrategy arg1, SessionFactoryImplementor arg2, Mapping arg3) throws HibernateException {
        super(arg0, arg1, arg2, arg3);
        // disable the row-count Expectation for inserts so the "2 rows affected" result
        // of the ON DUPLICATE KEY UPDATE path is not treated as an error
        this.insertResultCheckStyles[0] = ExecuteUpdateResultCheckStyle.NONE;
    }
}
It took quite a while digging through Hibernate code to find this - I wasn't able to find any topics on the net with a solution.
If you are using Grails, I found this solution, which does not require moving your domain class into the Java world and using @SQLInsert annotations:
Create a custom Hibernate Configuration
Override the PersistentClass map
Add your custom INSERT SQL, using ON DUPLICATE KEY UPDATE, to the persistent classes you want
For example, if you have a domain object called Person and you want its INSERTs to be INSERT ... ON DUPLICATE KEY UPDATE, you would create a configuration like so:
public class MyCustomConfiguration extends GrailsAnnotationConfiguration {
    public MyCustomConfiguration() {
        super();
        classes = new HashMap<String, PersistentClass>() {
            @Override
            public PersistentClass put(String key, PersistentClass value) {
                if (Person.class.getName().equalsIgnoreCase(key)) {
                    value.setCustomSQLInsert("insert into person (version, created_by_id, date_created, last_updated, name) values (?, ?, ?, ?, ?) on duplicate key update id=LAST_INSERT_ID(id)", true, ExecuteUpdateResultCheckStyle.COUNT);
                }
                return super.put(key, value);
            }
        };
    }
}
and add this as your Hibernate Configuration in DataSource.groovy:
dataSource {
pooled = true
driverClassName = "com.mysql.jdbc.Driver"
configClass = 'MyCustomConfiguration'
}
Just a note: be careful using LAST_INSERT_ID, as it will NOT be set correctly if the UPDATE is executed instead of the INSERT, unless you set it explicitly in the statement, e.g. id=LAST_INSERT_ID(id). I haven't checked where GORM gets the ID from, but I'm assuming it uses LAST_INSERT_ID somewhere.
Hope this helps.
As per the Hibernate docs, the one-to-many XML mapping tag has an attribute called not-found:
http://docs.jboss.org/hibernate/orm/3.3/reference/en-US/html/collections.html#collections-onetomany
The doc says:
not-found (optional - defaults to exception): specifies how cached identifiers that reference missing rows will be handled. ignore will treat a missing row as a null association.
What is the use of this attribute? I tried to create a mapping between Product and Parts, with Product having a set of Parts, using the mapping details below:
<set name="parts" cascade="all">
<key column="productSerialNumber" not-null="true" />
<one-to-many class="Part" not-found="ignore"/>
</set>
Then I wrote my Java code as:
public static void main(String[] args) {
    Session session = HibernateUtil.getSessionFactory().getCurrentSession();
    session.beginTransaction();
    Product prod = (Product) session.get(Product.class, 1);
    session.getTransaction().commit();
    System.out.println(prod);
    HibernateUtil.getSessionFactory().close();
}
I was expecting null for my set of Parts, since I configured not-found="ignore" in my mapping file. But I got the usual exception instead: org.hibernate.LazyInitializationException.
Please help me understand: what is the use of this attribute? What are cached identifiers here?
The not-found attribute has nothing to do with lazy loading. It's used to handle inconsistencies in your database.
Suppose you know nothing about good database practices, and have an order_line table containing an order_id column, supposed to reference the order it belongs to. And suppose that since you know nothing about good practices, you don't have a foreign key constraint on this column.
Deleting an order will thus be possible, even if the order has order lines referencing it. When loading such an OrderLine, Hibernate will try to load the Order and fail with an exception, because the Order is supposed to exist but doesn't.
Using not-found=ignore makes Hibernate ignore the order_id in the OrderLine, and will thus initialize the order field to null.
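A short sketch of the difference, reusing the hypothetical Order/OrderLine example above (the class names and the getOrder() accessor are illustrative):
// the ORDER row referenced by this line's order_id has been deleted
OrderLine line = (OrderLine) session.get(OrderLine.class, 42L); // 42L is an arbitrary id
// with the default not-found="exception", Hibernate fails when it resolves the association;
// with not-found="ignore", the association is simply left null
if (line.getOrder() == null) {
    // handle the orphaned line however the business logic requires
}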
In a well-designed database, this attribute should never be used.
I am working on an enterprise application where we use Hibernate and a many-to-many relationship with a join table. We are seeing very sporadic database deadlocks in production (with high volume) that we cannot recreate.
Category.java
public class Category {
....
private Set<Product> products = new HashSet<Product>();
...
}
Category.hbm.xml
<class
name="Category"
table="CATEGORY"
>
...
<!-- uni-directional many-to-many association to Product -->
<set
name="products"
table="CATEGORY_PRODUCT_ASSC"
lazy="false"
cascade="none"
>
<key column="CATEGORY_ID" />
<many-to-many class="Product" column="PRODUCT_ID" />
</set>
</class>
Product.java and Product.hbm.xml do not have a set of Categories, as this is a uni-directional many-to-many.
The CATEGORY_PRODUCT_ASSC table is a simple join table that only has 2 columns: CATEGORY_ID and PRODUCT_ID.
Right now, we are calling Session.saveOrUpdate on the Category instance object for the sole purpose of getting the inserts in the CATEGORY_PRODUCT_ASSC join table (nothing changed on the Category)
I turn on Hibernate show_sql and see the following:
update CATEGORY set NAME=?, DESCRIPTION=? where CATEGORY_ID=?
insert into CATEGORY_PRODUCT_ASSC (CATEGORY_ID, PRODUCT_ID) values (?, ?)
The problem is that we have many products being created at the exact same second on multiple servers, all for the same Category.
When we see deadlocks, the update CATEGORY call is inevitably involved. We need to prevent these update CATEGORY SQL statements from being executed.
Option 1: Is there any way that I can call Session.saveOrUpdate(category) and have it not update Category (since that has not changed), but still do the insert into the join table CATEGORY_PRODUCT_ASSC?
Option 2: If not, we have thought about just doing a straight INSERT of the CATEGORY_PRODUCT_ASSC rows via JDBC. However, one concern is stale Hibernate objects (Category objects) in the cache. Any ideas/recommendations on this possible approach?
Thank you very much in advance for your help. :-)
We resolved this issue. It did turn out to be the update CATEGORY statement. Instead of using the CATEGORY_PRODUCT_ASSC table as a join-through for the many-to-many relationship, we created a Hibernate-managed entity that represents this join table ... CategoryProductAssc.
This way, we could directly persist the relationship without having to call Session.saveOrUpdate on the Category instance object for the sole purpose of getting the inserts in the CATEGORY_PRODUCT_ASSC join table when nothing changed on the Category object.
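A rough sketch of what that can look like (the class, accessors, and usage here are illustrative, not the actual production code):
// the join-table row modeled as its own entity, typically mapped with a
// composite-id over CATEGORY_ID and PRODUCT_ID
public class CategoryProductAssc implements java.io.Serializable {
    private Long categoryId;
    private Long productId;
    // getters/setters and equals()/hashCode() over both columns omitted
}

// inserting the association no longer touches the CATEGORY row at all
CategoryProductAssc assc = new CategoryProductAssc();
assc.setCategoryId(category.getId());
assc.setProductId(product.getId());
session.save(assc);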
I created Cactus tests that spun up 20 simultaneous executions against the old code and then the new code; our DBAs monitored and saw lock contention with the old code and none with the new code.
I have two tables, Item and Property, and one item can have multiple properties. I have modeled this correctly (I think) in Hibernate, and when loading the ItemModel object, all the properties load properly.
The problem is that when I delete properties and then save the item, the new properties just get added to the existing ones.
ItemModel m = ...;
m.getPropertySet().size(); // returns 5 initially
m.getPropertySet().clear();
// some update function which adds properties
m.getPropertySet().size(); // returns 1
...currentSession().saveOrUpdate(m);
What happens is that now the database has 6 properties for that item instead of 1. What should I do to make this work?
The model for Item's mapping to properties looks something like this
<set name="propertySet" cascade="all">
<key column="item_id" not-null="true"/>
<one-to-many class="Property"/>
</set>
Use cascade="all-delete-orphan". See the first example in the reference guide for a walkthrough of relationships like this. Also, if this is a bidirectional one-to-many, then this side (the set) should be mapped with inverse="true" so that the relationship is determined solely based on the other side of the relationship.
I would like to evaluate JPA on an existing project. The database model and the Java classes exist and are currently mapped via self-generated code. The database model and the Java classes do not fit together ideally, but the custom mapping works well. Nevertheless, using JPA in general seems worth a try.
As you can see, I am new to JPA and have to do the work with XML configuration. Currently I am working on a unidirectional one-to-many relationship using a join table (please do not discuss this scenario here).
A (one - relationship owner) <-> AB (JoinTable) <-> B (many)
The tables look like this
A
--
ID
BREF
...
B
--
ID
...
AB
--
A_BREF (foreign key to a reference column in A which is NOT the id)
B_ID
I would like to define a unidirectional one-to-many relationship for class A.
class A {
private List<B> bs;
}
and did it like this:
<one-to-many name="bs">
<join-table name="ab">
<join-column name="a_bref">
<referenced-column-name name="bref" />
</join-column>
<inverse-join-column name="b_id">
<referenced-column-name name="id" />
</inverse-join-column>
</join-table>
</one-to-many>
Although this does not cause an error, it is not working: the join does not use the BREF column of A. The query that selects the "B" entities uses the A.ID column value instead of the A.BREF column value.
(How) can I make this mapping work? (I use EclipseLink 2.2.0.)
Thanks for any suggestion!
EDIT:
After looking at a link provided in SJuan76's answer, I slightly modified my mapping to:
<one-to-many name="bs">
<join-table name="ab">
<join-column name="a_bref" referenced-column-name="bref" />
<inverse-join-column name="b_id" referenced-column-name="id" />
</join-table>
</one-to-many>
This now causes the following errors (tested with EclipseLink 2.1.0 and 2.2.0):
EclipseLink 2.1.0:
Exception Description: The parameter name [bref] in the query's selection criteria does not match any parameter name defined in the query.
EclipseLink 2.2.0:
Exception Description: The reference column name [bref] mapped on the element [field bs] does not correspond to a valid field on the mapping reference.
By the way - if I remove the referenced-column-name="bref" from the definition, I get the same exception for referenced-column-name="id" on the inverse-join-column element. So I doubt that I have understood referenced-column-name correctly. I used it to specify the database column name of the tables that are related to the join table. Is this correct?
SOLUTION:
The final error in my scenario was that I did not have the BREF field defined in my class:
class A {
private long bref; // missing !
private List<B> bs;
}
and in my orm.xml mapping file for this class
<basic name="bref">
<column name="bref" />
</basic>
I was not aware that I had to define the columns referenced by the join mapping's referenced-column-name attributes somewhere in my mapped classes (just as I do not have the join table itself, or the name attributes of join-column/inverse-join-column, mapped to a class or class member).
Also, the tip to check for case issues was helpful. I now feel quite verbose in specifying my mapping, as I override all of the default (uppercase) mappings with lowercase values. As my database is not case sensitive, I will use uppercase notation whenever special mapping is needed, to go with the defaults.
+1 for all!
Can you try defining the field as "BREF", or with the same exact case you used on the attribute mapping? Or you can try setting the eclipselink.jpa.uppercase-column-names persistence property to true. This is likely the issue with "id" when referenced-column-name="bref" is removed, since the field in the entity likely defaults to "ID".
In general JPA requires that the foreign keys/join columns reference the primary key/Id of the Entity. But, this should work with EclipseLink, so please include the SQL that is being generated, and if it is wrong, please log a bug.
How is the Id of A defined, is it just ID or ID and BREF?
You can use a DescriptorCustomizer to customize the ManyToManyMapping for the relationship and set the correct foreign key field name.
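For reference, a DescriptorCustomizer along those lines might look like the sketch below. This is only a sketch: the exact ManyToManyMapping methods for overriding the relation-key fields should be verified against your EclipseLink version, and the attribute, table, and column names simply follow the tables above.
import org.eclipse.persistence.config.DescriptorCustomizer;
import org.eclipse.persistence.descriptors.ClassDescriptor;
import org.eclipse.persistence.mappings.ManyToManyMapping;

// registered on class A, e.g. via @Customizer(ABsRelationCustomizer.class)
// or the <customizer> element in eclipselink-orm.xml
public class ABsRelationCustomizer implements DescriptorCustomizer {
    public void customize(ClassDescriptor descriptor) throws Exception {
        // the join-table based one-to-many is held internally as a ManyToManyMapping
        ManyToManyMapping mapping =
                (ManyToManyMapping) descriptor.getMappingForAttributeName("bs");
        // point the source side of the relation table at BREF instead of the primary key
        mapping.setRelationTableName("AB");
        mapping.addSourceRelationKeyFieldName("AB.A_BREF", "A.BREF");
        mapping.addTargetRelationKeyFieldName("AB.B_ID", "B.ID");
    }
}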