I want to know if there is a way to let Hazelcast automatically sync the cache with all the updates that an entity receives. Here's an example:
I want to update an entity that has been stored in a cache and is also used in other caches. I do a get() to retrieve the value, and then perform an update changing some properties. The problem is that Hazelcast does not automatically serialize the item back into the cache after every change; I need to force a put to actually propagate the update to the cache. I would also need to perform a put for every cache that my entity is stored in. Is there some way to let Hazelcast manage those situations automatically?
Edit:
Just to clarify the problem, here is an example:
Imagine I have two IMaps: map1 stores objects of Entity A, and map2 contains objects of Entity B, where Entity B represents a complex object which in turn contains an Entity A. If I want to edit one of the objects contained in map1 (which will be an instance of Entity A), I do a map1.get(key) and retrieve the object. Then I modify this object in some way and persist the changes in map1 by doing a map1.put(key, entityA).
The problem arises because in map2, all the Entity B instances which contain the just-modified Entity A will not show the update, and I have to manually do a get and a put in this map as well.
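Roughly, the manual pattern I end up with looks like this (EntityA, EntityB, and their accessors are hypothetical, just for illustration):

// Every map that embeds EntityA needs its own explicit write-back.
void updateEverywhere(IMap<String, EntityA> map1, IMap<String, EntityB> map2,
                      String keyA, String keyB) {
    EntityA a = map1.get(keyA);          // returns a deserialized copy
    a.setName("updated");                // in-memory change only; the map is untouched
    map1.put(keyA, a);                   // must write back explicitly

    EntityB b = map2.get(keyB);          // repeat for the map holding Entity B
    b.getEntityA().setName("updated");
    map2.put(keyB, b);
}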
You can use an EntryProcessor to update the content of the entity in the map directly on the server (without having to perform a get() first). That way a common update routine can be used for all the caches, and you save all the network and serialization/deserialization overhead too.
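A minimal sketch of that approach, assuming Hazelcast 4.x and a hypothetical serializable EntityA with a single name field:

import java.io.Serializable;
import java.util.Map;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.EntryProcessor;
import com.hazelcast.map.IMap;

public class EntryProcessorDemo {

    // Hypothetical entity; must be serializable so Hazelcast can store it.
    static class EntityA implements Serializable {
        String name;
    }

    // Runs on the member that owns the key: no client-side get()/put(),
    // and the same processor can be executed against any map holding EntityA.
    static class RenameProcessor implements EntryProcessor<String, EntityA, Void> {
        private final String newName;

        RenameProcessor(String newName) {
            this.newName = newName;
        }

        @Override
        public Void process(Map.Entry<String, EntityA> entry) {
            EntityA value = entry.getValue();
            value.name = newName;
            entry.setValue(value); // setValue() writes the change back into the map
            return null;
        }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, EntityA> map1 = hz.getMap("map1");
        map1.put("key-1", new EntityA());
        map1.executeOnKey("key-1", new RenameProcessor("updated"));
        hz.shutdown();
    }
}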
Related
Consider the following class:
class MainObject {
    // ...
    ChildObject1 child1;
    ChildObject2 child2;
    ChildObject3 child3;
    ChildObject4 child4;
}
I need to save a newly created object of type MainObject using Hibernate. I added two newly created objects to it, child1 and child2, since they were not present in the DB.
Let us say child3 and child4 were already present in the DB. However, I would prefer not to load those entire objects from the DB just to add them to the MainObject, since they may have too many attributes that are not needed here.
So I just fetch the DB keys for the two objects and set those keys on newly instantiated ChildObject3 and ChildObject4 objects (which have all other fields null). I add these objects to MainObject.
Now, if I try to save MainObject in cascaded mode, ChildObject3 and ChildObject4 will also get updated in the DB with all-null values, which I do not want. What I want is a way of cascading by which only non-persisted objects are saved to the DB (in this case ChildObject1 and ChildObject2). The other objects, which hold a DB key, would not be updated and would only serve to save MainObject.
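To make the situation concrete, here is a sketch of the setup just described (the constructors, setters, and key variables are hypothetical):

// Sketch: saving MainObject with two new and two key-only children.
void saveMainObject(Session session, long existingChild3Key, long existingChild4Key) {
    MainObject main = new MainObject();
    main.setChild1(new ChildObject1());   // genuinely new: should be INSERTed
    main.setChild2(new ChildObject2());   // genuinely new: should be INSERTed

    ChildObject3 c3 = new ChildObject3();
    c3.setId(existingChild3Key);          // only the DB key is set; every other field is null
    main.setChild3(c3);

    ChildObject4 c4 = new ChildObject4();
    c4.setId(existingChild4Key);
    main.setChild4(c4);

    session.save(main);                   // with cascade, c3 and c4 get UPDATEd with nulls
}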
I know cascading cannot function this way. Is there another way to do it, such as refreshing the whole object hierarchy so that ChildObject3 and ChildObject4 get refreshed from the DB before I save MainObject in cascaded mode? Or can I put conditions in callback methods that decide whether or not to update an object by checking if it holds a DB key?
I feel this is a needed operation: whenever we save a new object, we may not always want to load all attributes of all the inner objects just to be able to save the main object in cascaded mode. All we need is the DB primary key, and we may prefer not to stress memory by loading all fields of all the inner objects.
Any good solutions or best practices? Please suggest.
In a container-managed transaction I get a detached object and merge it, so that the detached object is brought into the managed state. My initial question: in terms of cost and time, is it better to get the object into the session by caching the POJOs and merging them, or by fetching the data from the DB? And if I perform a merge at the start to get the object into the session context and then modify the merged object, will Hibernate take care of generating all the required SQL statements at the end?
Please comment on which is the better approach for getting the entity into the session: merging a cached detached object, or fetching the data from the DB?
When you detach an entity and then merge it, merge returns the attached entity in the context. It is a common mistake to keep using the entity that was passed to merge, hoping it is now managed, but that is not the case. You have to use the entity returned by merge, which is managed by Hibernate; subsequent changes to it are flushed automatically at transaction end.
It does not matter much when you load your entity, because Hibernate will fire a select anyway if it is not already loaded in the context. Also, even if you keep making changes to your managed entity, Hibernate fires the update only when you exit your transaction or call flush() explicitly.
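A minimal sketch of that point, assuming a hypothetical Customer entity and a placeholder persistence unit named demo-pu:

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Id;
import javax.persistence.Persistence;

@Entity
class Customer {
    @Id Long id;
    String name;
}

public class MergeDemo {
    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("demo-pu");
        EntityManager em = emf.createEntityManager();
        em.getTransaction().begin();

        Customer detached = new Customer();
        detached.id = 1L;                       // pretend this row already exists
        Customer managed = em.merge(detached);  // merge returns the MANAGED copy

        detached.name = "ignored";              // not tracked: detached stays detached
        managed.name = "persisted";             // tracked: flushed at commit

        em.getTransaction().commit();           // the UPDATE is fired here, not earlier
        em.close();
        emf.close();
    }
}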
Copy the state of the given object onto the persistent object with the same identifier. If there is no persistent instance currently associated with the session, it will be loaded. Return the persistent instance. If the given instance is unsaved, save a copy of and return it as a newly persistent instance. The given instance does not become associated with the session. This operation cascades to associated instances if the association is mapped with cascade="merge".
According to the API, merge saves a copy when you perform it and returns a new, managed instance. Based on my experience it is always better to merge at the end, after you have performed all the updates on the objects in the detached state: you call the merge operation only once, when the object state is ready to be persisted.
This also performs better, because the object is moved into the persistence context only at the end, so Hibernate does not have to come into the picture until then.
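In other words, something like the following (Order and its setters are hypothetical):

// Sketch: mutate the detached object first, merge once at the end.
void updateOrder(EntityManager em, Order order) {
    order.setStatus("CONFIRMED");     // plain field changes; no session involved yet
    order.setNotes("agreed on phone");

    em.getTransaction().begin();
    em.merge(order);                  // single point where Hibernate gets involved
    em.getTransaction().commit();     // the UPDATE is flushed here
}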
I have an entity that represents an order sent by a customer. This order might be updated after some discussion with the customer on the phone, but the initial order sent by the customer must be persisted without updates.
How can I persist the same entity twice? Is it efficient to use deep cloning?
I have tried to detach the entity so that the persistence context would persist a new one, but the persistence context still updates the first entry.
You cannot persist one object twice in one session, so you need to copy your order and persist the copy.
hibernate copy object values into new object with new generated ID
That's an interesting question. I think the quickest solution would probably be to use a multi-part ID: the first part would be the original order number, and every change would increment the second part of the key. In your code you would find the object, make sure it is detached, alter the second part of the key, and then persist it. As long as it has been detached, it will be saved as a new order.
This post shows you how to use a composite key.
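A sketch of such a composite key in JPA, assuming a hypothetical CustomerOrder entity (the linked post covers the mapping details; transaction demarcation is omitted):

import java.io.Serializable;
import javax.persistence.Embeddable;
import javax.persistence.EmbeddedId;
import javax.persistence.Entity;
import javax.persistence.EntityManager;

@Embeddable
class OrderId implements Serializable {
    long orderNumber; // constant across revisions of the same order
    int revision;     // incremented on every saved change
}

@Entity
class CustomerOrder {
    @EmbeddedId
    OrderId id;
    String details;
}

class OrderRevisions {
    // Saves the current state as a new revision, leaving the old row untouched.
    static void saveRevision(EntityManager em, OrderId currentKey) {
        CustomerOrder current = em.find(CustomerOrder.class, currentKey);
        em.detach(current);                      // stop tracking the loaded row

        OrderId next = new OrderId();
        next.orderNumber = currentKey.orderNumber;
        next.revision = currentKey.revision + 1; // bump the second part of the key
        current.id = next;

        em.persist(current);                     // inserted as a brand-new row
    }
}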
You need to clone/copy the object and ensure it has a unique id (or a null id, if ids are generated).
In EclipseLink there is an API to copy objects,
http://wiki.eclipse.org/EclipseLink/Examples/JPA/AttributeGroup#Copy_Examples
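For example, using EclipseLink's CopyGroup API from the linked page (the Order entity is hypothetical; treat the signatures as indicative rather than definitive):

import javax.persistence.EntityManager;

import org.eclipse.persistence.jpa.JpaEntityManager;
import org.eclipse.persistence.sessions.CopyGroup;

class OrderCopier {
    // Persists a copy of the order as a brand-new row.
    static Order persistCopy(EntityManager em, Order original) {
        CopyGroup group = new CopyGroup();
        group.setShouldResetPrimaryKey(true); // the copy comes back with no id
        group.setShouldResetVersion(true);    // and no optimistic-lock version

        Order copy = (Order) em.unwrap(JpaEntityManager.class).copy(original, group);
        em.persist(copy);                     // inserted with a freshly generated id
        return copy;
    }
}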
I have three layers: Action, Service, and DAO. I have loaded an Employee object from the DB using Hibernate with id 123 and made some modifications to it. Later, I created a new business object and modified that as well:
Employee e = service.getEmp(123);
e.setName("Ashok");
Order o = new Order();
o.setNumber(567);
service.saveOrUpdate(o);
In this scenario, why is Hibernate trying to update the employee object even though I am not asking it to save it? How can I detach the object from the session? I don't want Hibernate to update the employee object.
In this scenario, why is Hibernate trying to update the employee object even though I am not asking it to save it?
I quote from the Hibernate docs:
Transactional persistent instances (i.e. objects loaded, saved, created or queried by the Session) can be manipulated by the application, and any changes to persistent state will be persisted when the Session is flushed. [...] There is no need to call a particular method (like update(), which has a different purpose) to make your modifications persistent.
And
How can I detach the object from the session?
Mark collections with cascade="evict", then call Session.evict(Object) on your object before flushing (if you are using FlushMode.AUTO, consider setting it to MANUAL until you have done what you want).
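A sketch of that sequence, assuming the classic Hibernate Session API, with Employee and Order being the question's classes and the service layer inlined for clarity:

import org.hibernate.FlushMode;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

class OrderService {
    void saveOrderOnly(SessionFactory sessionFactory) {
        Session session = sessionFactory.getCurrentSession();
        session.setFlushMode(FlushMode.MANUAL);   // optional: no auto-flush while we work

        Employee e = (Employee) session.get(Employee.class, 123);
        e.setName("Ashok");                       // would otherwise be flushed as an UPDATE
        session.evict(e);                         // detach: the pending change is discarded

        Order o = new Order();
        o.setNumber(567);
        session.saveOrUpdate(o);
        session.flush();                          // only the order is written
    }
}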
I have a problem: whenever I load a parent entity (User in my case) and put it into the cache, all its children (in an owned relationship) are cached as well.
If I'm not wrong, the explanation is simple: the serialization process touches all properties of the object, which causes all child objects to be fetched as well. Eventually, the whole entity group is fetched.
How do I avoid that? The User entity group is planned to contain quite a lot of information, and I don't want to cache it all at once. Not to mention that fetching all the child objects at once would be really demanding.
I came across the transient modifier and was happy for a while, until I realized that it not only stops certain fields from being cached, it also prevents those fields from being persisted.
So the answer is to use the detached version of the entity. I load all entities using one function, which right now looks something like this:
@SuppressWarnings("unchecked")
E cachedEntity = (E) cache.get(cacheKey);
if (cachedEntity != null) {
    entity = cachedEntity;
} else {
    entity = pm.getObjectById(Eclass, key);
    // cache a detached copy, so serializing it does not pull in the children
    cache.put(cacheKey, pm.detachCopy(entity));
}
The disadvantage is that when I want to get the child objects, I have to explicitly attach the entity back using entity = pm.makePersistent(entity), which generates a Datastore.GET RPC. However, this doesn't happen very often; most of the time I just want to access the entity itself, not its child objects, so it is quite efficient.
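The re-attach step described here is just the following (entity and pm as in the snippet above):

// entity came from the cache as a detached copy (see the loader above)
entity = pm.makePersistent(entity);   // re-attach: costs one Datastore.GET RPC
// the owned children are now reachable through the managed entity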
I came across an even better solution. The single RPC call when attaching the entity is there because JDO checks whether the entity really exists in the datastore, and according to the DataNucleus documentation this can be turned off by setting datanucleus.attachSameDatastore to false in the PMF. However, it doesn't work for me: Datastore.GET is always called when attaching the object. If it worked, I could implicitly attach every object right after fetching it from the cache at zero cost, and I wouldn't have to do it manually when needed.