I have a parent -> child relationship, mapped with a @ManyToOne / @OneToMany pair.
I'm processing updates to the parent, in code that goes roughly like this:
Get the parent (retrieved from, in order: Ehcache, the db, or created if not found)
Process an update, creating a child on the parent if not found
Save to the db
Store in the cache
When running through, I find the following sequence occurs:
First update completes - parent & child both created and cached
Second update - parent retrieved from the cache, new child is added
When the second update completes, the child's id is still null. However, the update did complete successfully (verified against both the Hibernate logs and the db).
Third update - a DataIntegrityViolationException is thrown, as the child from the second update is INSERTed again.
I assume this must be related to the fact that the parent comes from the cache rather than from the database, but I'm not sure what the correct process here should be.
Relevant information:
The parent <-> child back references are defined and annotated correctly.
After the initial INSERT of the parent, I've tried re-fetching the parent from the db and caching that instead, to see if it made a difference; it didn't.
Transactional boundaries must be playing a role here, as this initially didn't fail in my tests that were annotated with @Transactional. (A lesson hard learnt.)
What's the correct way to handle this, specifically to avoid having to load the parent from the db every time while still having child entities tracked correctly?
Code example shown below.
@Entity // Parent
class Fixture {

    @OneToMany(cascade = CascadeType.ALL, mappedBy = "fixture", fetch = FetchType.EAGER)
    @Getter @Setter
    @MapKey(name = "instrumentPriceId")
    private Map<String, Instrument> instruments = Maps.newHashMap();

    private Instrument addInstrument(Instrument instrument) {
        instruments.put(instrument.getInstrumentPriceId(), instrument);
        instrument.setFixture(this);
        log.info("Created instrument {}", instrument.getInstrumentPriceId());
        return instrument;
    }

    /**
     * Returns the instrument with the matching instrumentId.
     * If the instrument does not exist, it is created, appended to the internal collection,
     * and then returned.
     *
     * This method is guaranteed to always return an instrument.
     * This method is thread-safe.
     *
     * @param instrumentId
     * @return
     */
    public Instrument getInstrument(String instrumentId) {
        if (!instruments.containsKey(instrumentId)) {
            addInstrument(new Instrument(instrumentId));
        }
        return instruments.get(instrumentId);
    }
}
@Entity // Child
public class Instrument {

    @Column(unique = true)
    @Getter @Setter
    private String instrumentPriceId;

    @ManyToOne(optional = false)
    @Getter @Setter @JsonIgnore
    private Fixture fixture;

    protected Instrument() {} // no-arg constructor required by JPA

    public Instrument(String instrumentPriceId) {
        this.instrumentPriceId = instrumentPriceId;
    }
}
And, the update processor code:
class Processor {

    @Autowired
    @Qualifier("FixtureCache")
    private Ehcache fixtureCache;

    @Autowired
    private FixtureRepository fixtureRepository;

    void update(String fixtureId, String instrumentId) {
        Fixture fixture = getFixture(fixtureId);
        // Get the instrument, creating it & appending
        // to the collection, if it doesn't exist
        fixture.getInstrument(instrumentId);
        // do some updates... omitted
        fixtureRepository.save(fixture);
        fixtureCache.put(new Element(fixtureId, fixture));
    }

    /**
     * Returns a fixture.
     * Returns from the cache first, if present.
     * If not present in the cache, the db is checked.
     * Finally, if the fixture does not exist, a new one is
     * created and returned.
     */
    Fixture getFixture(String fixtureId) {
        Fixture fixture;
        Element element = fixtureCache.get(fixtureId);
        if (element != null) {
            fixture = (Fixture) element.getObjectValue();
        } else {
            fixture = fixtureRepository.findOne(fixtureId);
            if (fixture == null) {
                fixture = new Fixture(fixtureId);
            }
        }
        return fixture;
    }
}
The answer to this was frustratingly simple.
In the update method, I was ignoring the result of the save() operation.
Often this is fine, if you're not planning on using the object again (which is common, as you save right at the end of your unit of work).
However, as I was continuing to use my 'parent', I needed to observe the returned value:
So this:
fixtureRepository.save(fixture);
fixtureCache.put(new Element(fixtureId, fixture));
becomes this:
fixture = fixtureRepository.save(fixture);
fixtureCache.put(new Element(fixtureId, fixture));
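For clarity, here is the full corrected update method (a sketch using the same names as above); the instance returned by save() is the managed copy that carries the generated child ids, and that is what belongs in the cache:
void update(String fixtureId, String instrumentId) {
    Fixture fixture = getFixture(fixtureId);
    // get or create the instrument on the fixture
    fixture.getInstrument(instrumentId);
    // do some updates... omitted
    fixture = fixtureRepository.save(fixture); // keep the returned, managed instance
    fixtureCache.put(new Element(fixtureId, fixture));
}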
Update: The issue seems to be the id that I'm using twice, or in other words, the id from the Product entity that I want to reuse for the ProductInventory entity. As soon as I generate a new id for the ProductInventory entity, it seems to work fine. But I want to have the same id for both, since they're the same product.
I have 2 services:
ProductManagementService (saves a Product entity with product details)
1.) For saving the Product entity, I implemented an EventHandler that listens to ProductCreatedEvent and saves the product to a MySQL database.
ProductInventoryService (saves a ProductInventory entity with stock quantities of a product, for a productId defined in ProductManagementService)
2.) For saving the ProductInventory entity, I also implemented an EventHandler, which listens to ProductInventoryCreatedEvent and saves the product inventory to a MySQL database.
What I want to do:
When a new Product is created in ProductManagementService, I want to create a ProductInventory entity in ProductInventoryService directly afterwards and save it to my MySQL table. The new ProductInventory entity shall have the same id as the Product entity.
To accomplish that, I created a saga which listens to ProductCreatedEvent and sends a new CreateProductInventoryCommand. As soon as the CreateProductInventoryCommand triggers a ProductInventoryCreatedEvent, the EventHandler described in 2.) should catch it. Except it doesn't.
The only thing that gets saved is the Product entity. So, in summary:
1.) works, 2.) doesn't. A ProductInventory aggregate does get created, but it doesn't get saved, since the saving process that is connected to an EventHandler isn't triggered.
I also get an Exception, the application doesn't crash though: Command 'com.myApplication.apicore.command.CreateProductInventoryCommand' resulted in org.axonframework.commandhandling.CommandExecutionException(OUT_OF_RANGE: [AXONIQ-2000] Invalid sequence number 0 for aggregate 3cd71e21-3720-403b-9182-130d61760117, expected 1)
My Saga:
@Saga
@ProcessingGroup("ProductCreationSaga")
public class ProductCreationSaga {

    @Autowired
    private transient CommandGateway commandGateway;

    @StartSaga
    @SagaEventHandler(associationProperty = "productId")
    public void handle(ProductCreatedEvent event) {
        System.out.println("ProductCreationSaga, SagaEventHandler, ProductCreatedEvent");
        String productInventoryId = event.productId;
        SagaLifecycle.associateWith("productInventoryId", productInventoryId);
        // takes the id from the product entity and sets all 3 stock attributes to zero
        commandGateway.send(new CreateProductInventoryCommand(productInventoryId, 0, 0, 0));
    }

    @SagaEventHandler(associationProperty = "productInventoryId")
    public void handle(ProductInventoryCreatedEvent event) {
        System.out.println("ProductCreationSaga, SagaEventHandler, ProductInventoryCreatedEvent");
        SagaLifecycle.end();
    }
}
The EventHandler that works as intended and saves a Product Entity:
@Component
public class ProductPersistenceService {

    @Autowired
    private ProductEntityRepository productRepository;

    // works as intended
    @EventHandler
    void on(ProductCreatedEvent event) {
        System.out.println("ProductPersistenceService, EventHandler, ProductCreatedEvent");
        ProductEntity entity = new ProductEntity(event.productId, event.productName, event.productDescription, event.productPrice);
        productRepository.save(entity);
    }

    @EventHandler
    void on(ProductNameChangedEvent event) {
        System.out.println("ProductPersistenceService, EventHandler, ProductNameChangedEvent");
        ProductEntity existingEntity = productRepository.findById(event.productId).get();
        ProductEntity entity = new ProductEntity(event.productId, event.productName, existingEntity.getProductDescription(), existingEntity.getProductPrice());
        productRepository.save(entity);
    }
}
The EventHandler that should save a ProductInventory Entity, but doesn't:
@Component
public class ProductInventoryPersistenceService {

    @Autowired
    private ProductInventoryEntityRepository productInventoryRepository;

    // doesn't work
    @EventHandler
    void on(ProductInventoryCreatedEvent event) {
        System.out.println("ProductInventoryPersistenceService, EventHandler, ProductInventoryCreatedEvent");
        ProductInventoryEntity entity = new ProductInventoryEntity(event.productInventoryId, event.physicalStock, event.reservedStock, event.availableStock);
        System.out.println(entity.toString());
        productInventoryRepository.save(entity);
    }
}
Product-Aggregate:
@Aggregate
public class Product {

    @AggregateIdentifier
    private String productId;
    private String productName;
    private String productDescription;
    private double productPrice;

    public Product() {
    }

    @CommandHandler
    public Product(CreateProductCommand command) {
        System.out.println("Product, CommandHandler, CreateProductCommand");
        AggregateLifecycle.apply(new ProductCreatedEvent(command.productId, command.productName, command.productDescription, command.productPrice));
    }

    @EventSourcingHandler
    protected void on(ProductCreatedEvent event) {
        System.out.println("Product, EventSourcingHandler, ProductCreatedEvent");
        this.productId = event.productId;
        this.productName = event.productName;
        this.productDescription = event.productDescription;
        this.productPrice = event.productPrice;
    }
}
ProductInventory-Aggregate:
@Aggregate
public class ProductInventory {

    @AggregateIdentifier
    private String productInventoryId;
    private int physicalStock;
    private int reservedStock;
    private int availableStock;

    public ProductInventory() {
    }

    @CommandHandler
    public ProductInventory(CreateProductInventoryCommand command) {
        System.out.println("ProductInventory, CommandHandler, CreateProductInventoryCommand");
        AggregateLifecycle.apply(new ProductInventoryCreatedEvent(command.productInventoryId, command.physicalStock, command.reservedStock, command.availableStock));
    }

    @EventSourcingHandler
    protected void on(ProductInventoryCreatedEvent event) {
        System.out.println("ProductInventory, EventSourcingHandler, ProductInventoryCreatedEvent");
        this.productInventoryId = event.productInventoryId;
        this.physicalStock = event.physicalStock;
        this.reservedStock = event.reservedStock;
        this.availableStock = event.availableStock;
    }
}
What you are noticing right now is the uniqueness requirement of the [aggregate identifier, sequence number] pair within a given event store. This requirement is in place to safeguard you from potential concurrent access to the same aggregate instance, as several events for the same aggregate all need to have a unique overall sequence number. This number is furthermore used to identify the order in which events need to be handled, to guarantee the aggregate is recreated in the same order consistently.
So, you might think this amounts to "sorry, there is no solution in place", but that is luckily not the case. There are roughly three things you can do in this setup:
Live with the fact that both aggregates will have unique identifiers.
Use distinct bounded contexts between both applications.
Change the way aggregate identifiers are written.
Option 1 is arguably the most pragmatic and the one used by the majority. You have however noted that reusing the identifier is necessary, so I am assuming you have already disregarded this as an option entirely. Regardless, I would try to revisit this approach, as defaulting to UUIDs for each new entity you create can save you from trouble in the future.
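For illustration, a minimal sketch of option 1 against the command shown in the question; the inventory simply receives its own identifier instead of reusing the product's:
// option 1 (sketch): let each aggregate have its own identifier
String productInventoryId = UUID.randomUUID().toString();
commandGateway.send(new CreateProductInventoryCommand(productInventoryId, 0, 0, 0));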
Option 2 reflects the Bounded Context notion pulled in by DDD. Letting the Product aggregate and ProductInventory aggregate reside in distinct contexts means you will have distinct event stores for both. Thus, the uniqueness constraint is kept, as no single store contains both aggregates' event streams. Whether this approach is feasible, however, depends on whether both aggregates actually belong to distinct contexts. If they do, you could for example use Axon Server's multi-context support to create two distinct applications.
Option 3 requires a little insight into what Axon does. When it stores an event, it will invoke the toString() method on the @AggregateIdentifier annotated field within the aggregate. As your @AggregateIdentifier annotated field is a String, you are given the identifier as is. What you could do is introduce typed identifiers, whose toString() method doesn't return only the identifier but appends the aggregate type to it. Doing so makes the stored aggregateIdentifier unique, whereas from the usage perspective it still seems like you are reusing the identifier.
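A minimal sketch of such a typed identifier (the class name and the type prefix are illustrative, not taken from the question's code):
public class ProductInventoryId {

    private final String identifier;

    public ProductInventoryId(String identifier) {
        this.identifier = identifier;
    }

    @Override
    public String toString() {
        // appending the aggregate type keeps the stored identifier unique,
        // even when the underlying id is shared with the Product aggregate
        return "ProductInventory-" + identifier;
    }
}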
Which of the three options suits your solution best is hard to deduce from my perspective. What I did do is order them from most to least reasonable, as I see it.
Hoping this will help you further, @Jan!
Current stack:
Spring Boot 1.5.1
Spring Data JPA 1.11.0
Hibernate Core 5.2.6
Let's say we have the following @Entity structure:
@Entity
class Root {
    @Id
    private Long id;

    @OneToMany
    @JoinColumn(name = "root_id")
    private Set<Child> children;
}

@Entity
class Child {
    @Id
    private Long id;

    @OneToMany
    @JoinColumn(name = "child_id")
    private Set<Grandchild> grandchildren;
}

@Entity
class Grandchild {
    @Id
    private Long id;
}
When I query for all/specific Root objects, Hibernate selects only from the corresponding table, and the resulting objects' children Set is not null but an uninitialized Hibernate proxy - as it should be.
When I call getChildren() Hibernate correctly initializes the collection but also (unwarrantedly) fetches each Child object's grandchildren Set.
Can someone please explain exactly why this recursive fetching is happening and is there a way to disable it?
I did some more digging, and this is what I came up with: it seems to be related to the way Hibernate maps @OneToMany depending on whether the target collection is a List or a Set.
private final RootRepo repo;
If the collections are Sets
public void test() {
    List<Root> all = repo.findAll(); // SELECT root0_.* FROM root root0_
    all.forEach(root -> {
        System.out.println(root.getChildren() == null);                   // false
        System.out.println(Hibernate.isInitialized(root.getChildren()));  // false
        root.getChildren().forEach(child -> {
            // SELECT child0_.* FROM children child0_
            // SELECT grandchild0_.* FROM grandchildren grandchild0_
            System.out.println(child.getGrandchildren() == null);                   // false
            System.out.println(Hibernate.isInitialized(child.getGrandchildren()));  // true
            child.getGrandchildren().forEach(grandchild -> {});
        });
    });
}
However, with Lists
public void test() {
    List<Root> all = repo.findAll(); // SELECT root0_.* FROM root root0_
    all.forEach(root -> {
        System.out.println(root.getChildren() == null);                   // false
        System.out.println(Hibernate.isInitialized(root.getChildren()));  // false
        root.getChildren().forEach(child -> {
            // SELECT child0_.* FROM children child0_
            System.out.println(child.getGrandchildren() == null);                   // false
            System.out.println(Hibernate.isInitialized(child.getGrandchildren()));  // false
            child.getGrandchildren().forEach(grandchild -> {
                // SELECT grandchild0_.* FROM grandchildren grandchild0_
            });
        });
    });
}
I am a certifiable idiot.
I'm using Lombok to generate getters/setters and the like for my POJOs, and its default implementation of the @EqualsAndHashCode annotation generates both methods taking into account every field, including the subcollections.
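The likely mechanism, for anyone hitting the same Set-vs-List difference: when Hibernate initializes a PersistentSet it hashes each element into a HashSet, which invokes the Lombok-generated hashCode() and thereby touches the nested collection; a List never needs to hash its elements. A minimal sketch of the fix, assuming a reasonably recent Lombok (older versions use @EqualsAndHashCode(exclude = "grandchildren") instead):
@Entity
@EqualsAndHashCode
class Child {
    @Id
    private Long id;

    @EqualsAndHashCode.Exclude // keep the lazy collection out of equals()/hashCode()
    @OneToMany
    @JoinColumn(name = "child_id")
    private Set<Grandchild> grandchildren;
}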
I am quite surprised that the children of Root are actually null.
The way it works in your situation (please double-check whether the children are actually set to null) is that when you access getChildren() (by invoking size() on it, for example), that collection is fetched from the database along with all its eager dependencies.
All the lazy dependencies (grandchildren in this particular case) are instantiated as proxy objects, but there should be no SQL query performed against the database for those (please check that).
Additionally:
It never happened to me, but just a little thing to remember: according to the JPA specification, lazy loading is just a hint to the persistence provider. Even when you set the fetch type to LAZY, or in general expect your collection dependencies to be lazy-loaded by default (which can be done while configuring the session factory), the implementation may still decide to do an EAGER fetch:
Defines strategies for fetching data from the database. The EAGER strategy is a requirement on the persistence provider runtime that data must be eagerly fetched. The LAZY strategy is a hint to the persistence provider runtime that data should be fetched lazily when it is first accessed. The implementation is permitted to eagerly fetch data for which the LAZY strategy hint has been specified.
I have an entity (Layer) that maps a list of other entities (Member). This list may have no entries or be null. Yet, when I query for the entity, I get a NOT NULL check constraint error from the database.
It seems to be connected to the NamedQueries, as I can read the entity from the db if I query by id.
@Entity
@NamedQueries({
    @NamedQuery(name = "getChildLayers",
        query = "SELECT la FROM Layer la WHERE la.parent = :parent AND la.deletedDate IS NULL")})
public class Layer extends CommonModel {

    /* ... other fields */

    @ManyToOne(fetch = FetchType.LAZY, targetEntity = Layer.class, optional = true)
    private Layer parent;

    @ManyToMany(fetch = FetchType.LAZY, targetEntity = MyUser.class)
    private List<MyUser> members;

    public List<MyUser> getMembers() {
        return members;
    }

    public void setMembers(List<MyUser> members) {
        this.members = members;
    }

    /* ... other getters and setters */
}
I get this error: integrity constraint violation: NOT NULL check constraint; SYS_CT_10298 table: LAYER_MYUSER column: MEMBERS_ID
I am able to create the entry, though.
When I run my tests, all tests that read the entity fail (but creation works). If I add the following line in the creation method:
layer.setMembers(new ArrayList<MyUser>());
then the methods that test the alteration of the members work (meaning I can create a Layer and alter its members by adding and removing elements from the list).
It seems to me that reading the entity from the database fails whenever there are no members on the Layer.
I did try adding @JoinColumn(nullable = true) to the field, but it changed nothing.
I import javax.persistence classes.
Example of how I access the variable (in LayerService):
// this method works as expected
public Layer getById(Long id) {
    Session s = sessionFactory.getCurrentSession();
    return (Layer) s.get(Layer.class, id);
}

// this does not
public List<Layer> getChildren(Layer layer) {
    Query childrenQuery = sessionFactory.getCurrentSession().getNamedQuery("getChildLayers");
    childrenQuery.setParameter("parent", layer);
    return (List<Layer>) childrenQuery.list();
}
Code changed after Jason C's answer:
Layer:
...
private final List<OCWUser> members = new ArrayList<>();
...
public void setMembers(List<OCWUser> members) {
    this.members.clear();
    this.members.addAll(members);
}
Problem still exists.
It can be so simple. I forgot to add @JoinTable:
@JoinTable(name = "LAYER_USER", joinColumns = @JoinColumn(nullable = true))
One important thing to be aware of: you shouldn't replace this.members with another list in setMembers unless you know you are doing it before you call persist(). Instead, you need to clear this.members and then add all the specified elements to it. The reason is that Hibernate can and will use its own proxied / instrumented collection classes when [de]serializing an entity, and you blow that away when you overwrite the collection field. You should declare members as final and always initialize it to a non-null empty List.
See for example (3.6 but still relevant): http://docs.jboss.org/hibernate/core/3.6/reference/en-US/html/collections.html#collections-persistent, In particular:
Notice how in Example 7.2, "Collection mapping using @OneToMany and @JoinColumn", the instance variable parts was initialized with an instance of HashSet. This is the best way to initialize collection valued properties of newly instantiated (non-persistent) instances. When you make the instance persistent, by calling persist(), Hibernate will actually replace the HashSet with an instance of Hibernate's own implementation of Set.
As long as you are messing with collection fields in this way, any number of strange things can happen.
Also, in general, you want to be careful about stating your invariants when accessing collections this way, as it's easy to, e.g., create two Layers that reference the same collection internally, so that actions on one affect the other, or to have external actions on the passed-in collection affect the layer. For example, the following code probably doesn't behave like you want it to:
List<MyUser> u = new ArrayList<MyUser>();
Layer a = new Layer();
Layer b = new Layer();
u.add(...);
a.setMembers(u);
b.setMembers(u);
u.clear();
Further, when you persist() one of the layers there, and Hibernate overwrites the field with its own collection class, the behavior changes again, because the objects no longer reference the same collection:
// not only did u.clear() [possibly undesirably] affect a and b above, but:
session.persist(a);
u.add(...); // ... now it only affects b.
I have a bidirectional one-to-many relationship.
0 or 1 client <-> a list of 0 or more product orders.
That relationship should be set or unset on both entities:
On the client side, I want to set the list of product orders assigned to the client; the client should then be set/unset on the chosen orders automatically.
On the product order side, I want to set the client to which the order is assigned; that product order should then be removed from its previously assigned client's list and added to the new assigned client's list.
I want to use pure JPA 2.0 annotations and only one "merge" call to the entity manager (with cascade options). I've tried the following code pieces, but it doesn't work (I use EclipseLink 2.2.0 as the persistence provider):
@Entity
public class Client implements Serializable {

    @OneToMany(mappedBy = "client", cascade = CascadeType.ALL)
    private List<ProductOrder> orders = new ArrayList<>();

    public void setOrders(List<ProductOrder> orders) {
        for (ProductOrder order : this.orders) {
            order.unsetClient();
            // don't use order.setClient(null);
            // (ConcurrentModificationEx on array)
            // TODO doesn't work!
        }
        for (ProductOrder order : orders) {
            order.setClient(this);
        }
        this.orders = orders;
    }

    // other fields / getters / setters
}
@Entity
public class ProductOrder implements Serializable {

    @ManyToOne(cascade = CascadeType.ALL)
    private Client client;

    public void setClient(Client client) {
        // remove from previous client
        if (this.client != null) {
            this.client.getOrders().remove(this);
        }
        this.client = client;
        // add to new client
        if (client != null && !client.getOrders().contains(this)) {
            client.getOrders().add(this);
        }
    }

    public void unsetClient() {
        client = null;
    }

    // other fields / getters / setters
}
Facade code for persisting client:
// call setters on entity by JSF frontend...
getEntityManager().merge(client);
Facade code for persisting product order:
// call setters on entity by JSF frontend...
getEntityManager().merge(productOrder);
When changing the client assignment on the order side, it works well: on the client side, the order is removed from the previous client's list and added to the new client's list (if re-assigned).
BUT when changing on the client side, I can only add orders (on the order side, assignment to the new client is performed); it just ignores it when I remove orders from the client's list (after saving and refreshing, they are still in the list on the client side, and on the order side they are also still assigned to the previous client).
Just to clarify, I DO NOT want to use a "delete orphan" option: when removing an order from the list, it should not be deleted from the database; instead its client assignment should be updated (that is, set to null), as defined in the Client#setOrders method. How can this be achieved?
EDIT: Thanks to the help I received here, I was able to fix this problem. See my solution below:
The client ("One" / "owned" side) stores the orders that have been modified in a temporary field.
@Entity
public class Client implements Serializable, EntityContainer {

    @OneToMany(mappedBy = "client", cascade = CascadeType.ALL)
    private List<ProductOrder> orders = new ArrayList<>();

    @Transient
    private List<ProductOrder> modifiedOrders = new ArrayList<>();

    public void setOrders(List<ProductOrder> orders) {
        if (orders == null) {
            orders = new ArrayList<>();
        }
        modifiedOrders = new ArrayList<>();
        for (ProductOrder order : this.orders) {
            order.unsetClient();
            modifiedOrders.add(order);
            // don't use order.setClient(null);
            // (ConcurrentModificationEx on array)
        }
        for (ProductOrder order : orders) {
            order.setClient(this);
            modifiedOrders.add(order);
        }
        this.orders = orders;
    }

    @Override // defined by my EntityContainer interface
    public List getContainedEntities() {
        return modifiedOrders;
    }
}
On the facade, when persisting, it checks whether there are any contained entities that must be merged, too. Note that I used an interface to encapsulate this logic, as my facade is actually generic.
// call setters on entity by JSF frontend...
getEntityManager().merge(entity);
if (entity instanceof EntityContainer) {
    EntityContainer entityContainer = (EntityContainer) entity;
    for (Object childEntity : entityContainer.getContainedEntities()) {
        getEntityManager().merge(childEntity);
    }
}
JPA does not do this, and as far as I know there is no JPA implementation that does this either. JPA requires you to manage both sides of the relationship. When only one side of the relationship is updated, this is sometimes referred to as "object corruption".
JPA does define an "owning" side in a two-way relationship (for a OneToMany this is the side that does NOT have the mappedBy attribute), which it uses to resolve a conflict when persisting to the database (there is only one representation of this relationship in the database, compared to the two in memory, so a resolution must be made). This is why changes to the ProductOrder class are realized but not changes to the Client class.
Even with the "owning" relationship, you should always update both sides. This often leads people to rely on updating only one side, and they get in trouble when they turn on the second-level cache. In JPA the conflicts mentioned above are only resolved when an object is persisted and reloaded from the database. Once the second-level cache is turned on, that may be several transactions down the road, and in the meantime you'll be dealing with a corrupted object.
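In practice this usually means giving the inverse side small helpers that keep both sides in sync in one call, roughly like this sketch against the question's classes:
public void addOrder(ProductOrder order) {
    orders.add(order);
    order.setClient(this); // keep the owning side in sync
}

public void removeOrder(ProductOrder order) {
    orders.remove(order);
    order.unsetClient(); // and unset the owning side on removal
}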
You have to also merge the orders that you removed; just merging the Client is not enough.
The issue is that although you are changing the orders that were removed, you are never sending these orders to the server, and never calling merge on them, so there is no way for your changes to be reflected.
You need to call merge on each Order that you remove, or process your changes locally, so you don't need to serialize or merge any objects.
EclipseLink does have a bidirectional relationship maintenance feature which may work for you in this case, but it is not part of JPA.
Another possible solution is to add a new property on your ProductOrder; I named it detached in the following example.
When you want to detach the order from the client, you can use a callback on the order itself:
@Entity
public class ProductOrder implements Serializable {
    /* ... */

    // in your case this could probably be @Transient
    private boolean detached;

    @PreUpdate
    public void detachFromClient() {
        if (this.detached) {
            client.getOrders().remove(this);
            client = null;
        }
    }
}
Instead of deleting the orders you want to remove, you set detached to true. When you merge & flush the client, the entity manager will detect the modified order and execute the @PreUpdate callback, effectively detaching the order from the client.
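Usage would then look roughly like this (a sketch; setDetached is the assumed setter for the field above):
// instead of client.getOrders().remove(order):
order.setDetached(true); // assumed setter for the 'detached' flag
getEntityManager().merge(client); // the cascade reaches the order; @PreUpdate unlinks it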
If I have a @OneToMany relationship with @Cascade(CascadeType.SAVE_UPDATE) as follows:
public class One {

    private Integer id;
    private List<Many> manyList = new ArrayList<Many>();

    @Id
    @GeneratedValue
    public Integer getId() {
        return this.id;
    }

    @OneToMany
    @JoinColumn(name = "ONE_ID", updatable = false, nullable = false)
    @Cascade(CascadeType.SAVE_UPDATE)
    public List<Many> getManyList() {
        return this.manyList;
    }
}
And the Many class:
public class Many {

    private Integer id;

    /**
     * required no-arg constructor
     */
    public Many() {}

    public Many(Integer uniqueId) {
        this.id = uniqueId;
    }

    /**
     * Without the @GeneratedValue annotation,
     * Hibernate will use the assigned strategy.
     */
    @Id
    public Integer getId() {
        return this.id;
    }
}
If I have the following scenario:
One one = new One();

/**
 * The generateUniqueId method takes care of assigning a unique id to each Many instance.
 */
one.getManyList().add(new Many(generateUniqueId()));
one.getManyList().add(new Many(generateUniqueId()));
one.getManyList().add(new Many(generateUniqueId()));
one.getManyList().add(new Many(generateUniqueId()));
And I call:
sessionFactory.getCurrentSession().save(one);
Before going on, note what the Transitive persistence section of the Hibernate reference documentation says:
If a parent is passed to save(), update() or saveOrUpdate(), all children are passed to saveOrUpdate()
OK. Now let's see what the Java Persistence with Hibernate book says about the saveOrUpdate method:
Hibernate queries the MANY table for the given id, and if it is found, Hibernate updates the row. If it is not found, insertion of a new row is required and done.
Which translates to:
INSERT INTO ONE (ID) VALUES (?)
/**
* I have four Many instances added To One instance
* So four select-before-saving
*
* I DO NOT NEED select-before-saving
* Because i know i have a Fresh Transient instance
*/
SELECT * FROM MANY WHERE MANY.ID = ?
SELECT * FROM MANY WHERE MANY.ID = ?
SELECT * FROM MANY WHERE MANY.ID = ?
SELECT * FROM MANY WHERE MANY.ID = ?
INSERT INTO MANY (ID, ONE_ID) VALUES (?, ?)
INSERT INTO MANY (ID, ONE_ID) VALUES (?, ?)
INSERT INTO MANY (ID, ONE_ID) VALUES (?, ?)
INSERT INTO MANY (ID, ONE_ID) VALUES (?, ?)
Is there any workaround to avoid the select-before-saving? Yes, you can either:
Add a @Version column (not applicable here)
Implement the isTransient method provided by a Hibernate Interceptor (the option I chose)
So, as a way to avoid the select-before-saving default behavior when using this kind of cascading, I improved my code by assigning a Hibernate Interceptor to a Hibernate Session whose transaction is managed by Spring.
Here goes my repository.
Before (without any Hibernate Interceptor): it works fine!
@Repository
public class SomeEntityRepository extends AbstractRepository<SomeEntity, Integer> {

    @Autowired
    private SessionFactory sessionFactory;

    @Override
    public void add(SomeEntity instance) {
        sessionFactory.getCurrentSession().save(instance);
    }
}
After (with Hibernate Interceptor): something goes wrong (no SQL query is performed - neither the INSERT nor the select-before-saving):
@Repository
public class SomeEntityRepository extends AbstractRepository<SomeEntity, Integer> {

    @Autowired
    private SessionFactory sessionFactory;

    @Override
    public void add(SomeEntity instance) {
        sessionFactory.openSession(new EmptyInterceptor() {
            /**
             * To avoid select-before-saving
             */
            @Override
            public Boolean isTransient(Object o) {
                return true;
            }
        }).save(instance);
    }
}
My question is: why does Spring not persist my entity and its relationships when using the Hibernate Interceptor, and what should I do as a workaround to make it work?
Spring maintains an association between the current session and the current transaction (see SessionFactoryUtils.java). Since there is already a session associated with the current DAO method call, you have to use this session, or take the plunge of getting involved with the murky details of associating the new session with the previous transaction context. It's probably possible, but with considerable risk, and is definitely not recommended. In Hibernate, if you have a session already open, then it should be used.
Having said that, you may be able to get Spring to create a new session for you and associate it with the current transaction context. Use SessionFactoryUtils.getNewSession(SessionFactory, Interceptor). If you use this rather than Hibernate's sessionFactory, it should keep the association with the transaction.
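A sketch of what that could look like in the DAO, with Spring's hibernate3 support classes (treat the exact signatures as an assumption to check against your Spring version):
Session session = SessionFactoryUtils.getNewSession(sessionFactory, new EmptyInterceptor() {
    @Override
    public Boolean isTransient(Object entity) {
        return Boolean.TRUE; // skip the select-before-save check
    }
});
try {
    session.save(instance);
    session.flush(); // this session is not flushed for you
} finally {
    SessionFactoryUtils.closeSession(session);
}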
Initially, you can code this up directly in the DAO. When it's tried and tested and hopefully found to be working, you can then take steps to move the Spring code out of your DAO, such as using AOP around advice on the add() methods to create and clean up the new session.
Another alternative is to use a global Interceptor. Even though it's global, you can give it locally controllable behaviour. The TransientInterceptor below contains a ThreadLocal<Boolean>: the flag for the current thread indicating whether the interceptor should return true for isTransient. You set it to true at the start of the add() method and clear it at the end. E.g.:
class TransientInterceptor extends EmptyInterceptor {

    private static final ThreadLocal<Boolean> transientFlag = new ThreadLocal<>();

    @Override
    public Boolean isTransient(Object entity) {
        // null tells Hibernate to fall back to its default check
        return transientFlag.get() == Boolean.TRUE ? Boolean.TRUE : null;
    }

    public static void setTransient(boolean b) {
        transientFlag.set(b);
    }
}
And then in your DAO:
@Override
public void add(SomeEntity instance) {
    try {
        TransientInterceptor.setTransient(true);
        sessionFactory.getCurrentSession().save(instance);
    } finally {
        TransientInterceptor.setTransient(false);
    }
}
You can then set up the TransientInterceptor as a global interceptor on the SessionFactory (e.g. via LocalSessionFactoryBean). To make this less invasive, you could create AOP around advice that applies this behaviour to all your DAO add methods, where appropriate.
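For example, the around advice could be a small Spring AOP aspect; the pointcut expression below is illustrative and would need adjusting to your packages:
@Aspect
public class TransientFlagAspect {

    @Around("execution(* *..*Repository.add(..))")
    public Object withTransientFlag(ProceedingJoinPoint pjp) throws Throwable {
        TransientInterceptor.setTransient(true);
        try {
            return pjp.proceed();
        } finally {
            TransientInterceptor.setTransient(false);
        }
    }
}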
In the 'after' method you are creating a new session and not flushing it, therefore no update is sent to the database. This has nothing to do with Spring, but is pure Hibernate behavior.
What you probably want is to add an (entity) interceptor to the sessionFactory, probably configured using Spring. You can then just keep your repository's add() method as before.
See http://static.springsource.org/spring/docs/2.5.x/api/org/springframework/orm/hibernate3/LocalSessionFactoryBean.html#setEntityInterceptor%28org.hibernate.Interceptor%29
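In Java terms, the wiring amounts to something like this (a sketch; in practice the session factory is usually defined as a Spring bean):
LocalSessionFactoryBean sessionFactoryBean = new LocalSessionFactoryBean();
// register the interceptor globally; every session then uses it
sessionFactoryBean.setEntityInterceptor(new TransientInterceptor());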