I'm having trouble figuring out the proper way to update "nested" data using Google App Engine
and JDO. I have a RecipeJDO and an IngredientJDO.
I want to be able to completely replace the ingredients in a given recipe instance with a new list of ingredients. Then, when that recipe is (re)persisted, any previously attached ingredients will be deleted totally from the datastore, and the new ones will be persisted and associated with the recipe.
Something like:
// retrieve from GAE datastore
RecipeJDO recipe = getRecipeById();
// fetch new ingredients from the user
List<IngredientJDO> newIngredients = getNewIngredients();
recipe.setIngredients(newIngredients);
// update the recipe w/ new ingredients
saveUpdatedRecipe(recipe);
This works fine when I update (detached) recipe objects directly, as returned from the datastore. However, if I copy a RecipeJDO and then make the aforementioned updates, it ends up appending the new ingredients, which are returned along with the old ones when the recipe is re-fetched from the datastore. (Why bother with the copy at all? I'm using GWT on the front end, so I copy the JDO objects to DTOs, the user edits them on the front end, and they are sent back for updating the datastore.)
Why do I get different results with objects that I create by hand (setting all the fields, including the id) vs operating on instances returned by the PersistenceManager? Obviously
JDO's bytecode enhancement is involved somehow.
Am I better off just explicitly deleting the old ingredients before persisting the updated
recipe?
(Side question- does anyone else get frustrated with ORM and wish we could go back to plain old RDBMS? :-)
Short answer. Change RecipeJDO.setIngredients() to this:
public void setIngredients(List<IngredientJDO> ingredients) {
    this.ingredients.clear();
    this.ingredients.addAll(ingredients);
}
When you fetch the RecipeJDO, the ingredients list is not a real ArrayList, it is a dynamic proxy that handles the persistence of the contained elements. You shouldn't replace it.
While the persistence manager is open, you can iterate through the ingredients list, add items or remove items, and the changes will be persisted when the persistence manager is closed (or the transaction is committed, if you are in a transaction). Here's how you would do the update without a transaction:
public void updateRecipe(String id, List<IngredientDTO> newIngredients) {
    List<IngredientJDO> ingredients = convertIngredientDtosToJdos(newIngredients);
    PersistenceManager pm = PMF.get().getPersistenceManager();
    try {
        RecipeJDO recipe = pm.getObjectById(RecipeJDO.class, id);
        recipe.setIngredients(ingredients);
    } finally {
        pm.close();
    }
}
If you never modify the IngredientJDO objects (only replace them and read them), you might want to make them Serializable objects instead of JDO objects. If you do that, you may be able to reuse the Ingredient class in your GWT RPC code.
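For illustration, a minimal sketch of such a plain Serializable class, assuming the ingredients are embedded in the recipe rather than persisted as their own JDO entities (the field names here are made up, not from the original code):

import java.io.Serializable;

public class Ingredient implements Serializable {
    private static final long serialVersionUID = 1L;
    // plain fields, no JDO annotations
    private String name;
    private String quantity;
    // GWT RPC serialization requires a no-arg constructor
    public Ingredient() {}
    public Ingredient(String name, String quantity) {
        this.name = name;
        this.quantity = quantity;
    }
    public String getName() { return name; }
    public String getQuantity() { return quantity; }
}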
Incidentally, even if Recipe was not a JDO object, you would want to make a copy in the setIngredients() method, otherwise someone could do this:
List<IngredientJDO> ingredients = new ArrayList<IngredientJDO>();
// add items to ingredients
recipe.setIngredients(ingredients);
ingredients.clear(); // Whoops! Modifies Recipe!
I am facing the same problem!
I would like to update an existing entity by calling makePersistent() and assigning an existing id/key. The update works fine except for nested objects: the nested objects are appended to the old ones instead of being replaced. I don't know whether this is the intended behaviour or a bug, but I expect overwriting to have the same effect as inserting a new entity!
How about first deleting the old entity and persisting the new one in the same transaction? Does that work? I tried it, but it resulted in deleting the entity completely! I don't know why (even though I tried flushing directly after deleting).
@NamshubWriter, not sure if you'll catch this post... regarding your comment,
(if you used Stripes and JSP, you could avoid the GWT RPC and GWT model representations of Recipe and Ingredient)
I am using Stripes and JSP, but I face the same problem. When the user submits the form back, Stripes instantiates my entity objects from scratch, so JDO is completely ignorant of them. When I call PersistenceManager.makePersistent on the root object, the previous version is correctly overwritten - with one exception: its child objects are appended to the List<child> of the previous version.
If you could suggest any solution (better than manually copying the object fields), I would greatly appreciate it.
(seeing as Stripes is so pluggable, I wonder if I can override how it instantiates the entity objects...)
Related
I'm not really even sure how to google this, so if it's a commonly asked question, please direct me to the answer.
A general description of the issue is that one or more parent records has a set of child records of type 'Document'. The object graph is actually deeper and more complicated than that, but those are the relevant bits for my issue.
Generally we update the whole object graph with merge() when the user hits Save. But we have a requirement to save the Document record...AND ONLY the Document record as soon as they hit add or remove on the document. They can do as many adds/removes as they want. There is no instant update for anything else, so when they want to update the description, that happens on the main 'Save'.
The instant update works fine. The problem is that on a later request the user hits 'Save', thereby updating the parent and child records, and if there was a remove during the process, I get javax.persistence.EntityNotFoundException despite the fact that I DID remove the object from the set on its parent.
On remove, I did this:
item.getDocuments().remove(document);
documentService.deleteDocumentByID(document.getId());
The first line removes it from the parent record, which I thought would tell Hibernate not to freak out about my deleting it in the next line, when I run an HQL 'delete' on that ID.
Then, in the Save, it's basically just a merge() on the whole object graph.
Is there a way to make Hibernate okay with me adding/deleting JUST THAT DOCUMENT outside the merge()? Note that I do NOT want to save the whole object graph on that document add/remove. Just the document record.
When you use merge, all basic attributes of that object and all associations that have CascadeType.MERGE will be flushed to the database. So in order to change what is flushed, you need to configure this correctly.
If you require different update/flush graphs because you have multiple use cases, you will have to come up with a different solution, e.g. introduce a DTO and apply the changes from the DTO to the managed entities instead of using merge.
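As a rough sketch of that DTO approach (the entity and field names here are assumptions), you would load the managed entities and copy over only the state this use case is allowed to change, rather than merging the whole detached graph:

public void applyDocumentChanges(EntityManager em, List<DocumentDto> dtos) {
    for (DocumentDto dto : dtos) {
        // look up each managed instance and copy only the editable fields
        Document doc = em.find(Document.class, dto.getId());
        doc.setDescription(dto.getDescription());
    }
    // the updates are flushed on commit; nothing else in the graph is touched
}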
If you would also like to avoid all the unnecessary select statements for state synchronization, I can recommend that you take a look at what Blaze-Persistence Entity-Views has to offer.
You can create an updatable entity view that will be just about updating the documents collection.
@EntityView(Item.class)
@UpdatableEntityView
public interface ItemUpdateView {
    @IdMapping
    Integer getId();
    @UpdatableMapping
    Set<DocumentView> getDocuments();
}

@EntityView(Document.class)
public interface DocumentView {
    @IdMapping
    Integer getId();
    String getName();
}
Querying is a matter of applying the entity view to a query, the simplest being just a query by id.
ItemUpdateView dto = entityViewManager.find(entityManager, ItemUpdateView.class, id);
And flushing the changes can be done like this:
entityViewManager.save(entityManager, dto);
You will see that it only flushes what really changed, without having to do select statements, thanks to its dirty tracking capabilities.
I have an entity that was loaded from the database in a previous request and has now been modified. It is detached from the persistence context.
When I'm submitting and my save() method is entered, entityManager.load() is called first to get the previous state of the object and make some comparisons, computations, etc. (I'm now working with entity and entityBefore.)
Saving entity now results in an error: I'm trying to save a different object with the same id.
The solution at the moment is to just detach entityBefore and then use saveOrUpdate, which seemed to work like a charm.
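In Hibernate Session terms, that workaround looks roughly like this (the entity and method names are mine, not from the question):

Session session = sessionFactory.getCurrentSession();
// load the previous state for the comparisons/computations
MyEntity entityBefore = (MyEntity) session.get(MyEntity.class, entity.getId());
compareAndCompute(entity, entityBefore);
session.evict(entityBefore);   // detach the snapshot so the ids no longer clash
session.saveOrUpdate(entity);  // only one instance with this id is now managed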
However, it only leads to another problem. The entity contains a list of other objects (1-n). Removing one of those from the list returns an error, since they have previously been detached as well.
At the moment I'm not sure what the best approach is to solve this whole thing. How can I manage two different versions of the same object without storing both in the database? Is there a way to get the old state from the database without modifying the context? Do I need to refresh every object in the list one by one?
Thanks for any suggestions.
I have a problem: whenever I load a parent entity (User in my case) and put it in the cache, all its children (in an owned relationship) are cached as well.
If I'm not wrong, the explanation is simple: the serialization process touches all properties of the object, which causes all child objects to be fetched as well. Eventually, the whole entity group is fetched.
How do I avoid that? The User entity group is planned to contain quite a lot of information and I don't want to cache it all at once. Not to mention that fetching all the child objects at once would be really demanding.
I came across the transient modifier and was happy for a while, until I realized that not only does it stop certain fields from being cached, it also prevents those fields from being persisted.
So the answer is to use the detached version of the entity. I load all entities using one function, which right now looks something like this:
@SuppressWarnings("unchecked")
E cachedEntity = (E) cache.get(cacheKey);
if (cachedEntity != null) {
    entity = cachedEntity;
} else {
    entity = pm.getObjectById(Eclass, key);
    cache.put(cacheKey, pm.detachCopy(entity));
}
The disadvantage is that when I want to get the child objects, I have to explicitly attach the entity back using entity = pm.makePersistent(entity), which generates a Datastore.GET RPC. However, this doesn't happen very often, and most likely I just want to access the entity itself, not its child objects, so it's quite efficient.
I came across an even better solution. The one RPC call when attaching the entity happens because JDO checks whether the entity really exists in the datastore, and according to the DataNucleus documentation this check can be turned off just by setting datanucleus.attachSameDatastore to false in the PMF. However, it doesn't work for me; Datastore.GET is always called when attaching the object. If it worked, I could implicitly attach every object right after fetching it from the cache at zero cost, and I wouldn't have to do it manually when needed.
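For reference, setting that property programmatically when constructing the PMF would look something like the sketch below. Treat it as untested: the GAE factory class name is an assumption from my setup, and as noted above the property did not change the behaviour for me.

import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;

Properties props = new Properties();
props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
        "org.datanucleus.store.appengine.jdo.DatastoreJDOPersistenceManagerFactory");
// ask DataNucleus to skip the existence check when attaching
props.setProperty("datanucleus.attachSameDatastore", "false");
PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);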
I've got a Hibernate-interfaced MySQL database with a load of different types of objects, some of which are periodically retrieved and altered by other pieces of code running in JADE agents. Because of the way the objects are retrieved (in queries, returning collections of objects), they don't seem to be managed by the entity manager, and they definitely aren't managed when they're passed to agents that have no entity manager factory or manager.
The objects from the database are passed around between agents before arriving back at the database. At this point I want to update the version of the object in the database, but each time I merge the object, it creates a new object in the database.
I'm fairly sure that I'm not using the merge method properly. Can anyone suggest a good way to combine the updated object with the existing database object without knowing in advance which properties of the object have changed? Possibly something along the lines of searching for the existing object and deleting it, then adding the new one, but I'm not sure how to do this without messing up primary keys, etc.
Hibernate has a saveOrUpdate() method, which either saves the object or updates it depending on whether an object with the same ID already exists:
http://docs.jboss.org/hibernate/core/3.3/reference/en/html/objectstate.html#objectstate-saveorupdate
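A minimal usage sketch, assuming a typical Session/Transaction setup (sessionFactory and incomingObject are placeholder names):

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
try {
    // inserts if the identifier is unsaved, otherwise issues an UPDATE
    session.saveOrUpdate(incomingObject);
    tx.commit();
} catch (RuntimeException e) {
    tx.rollback();
    throw e;
} finally {
    session.close();
}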
I have the following use case: there's a class called Template, and with that class I can create instances of the ActualObject class (an ActualObject copies its initial data from the Template). The Template class has a list of Products.
Now here comes the tricky part: the user should be able to delete Products from the database, but these deletions may not affect the content of a Template. In other words, even if a Product is deleted, the Template should still have access to it. This could be solved by adding a "deleted" flag to the Product. If a Product is deleted, it may not be found by an explicit database search, but it can still be fetched implicitly (for example via the reference in the Template class).
The idea behind this is that when an ActualObject is created from a template, the user is notified in the user interface that "The Template X had a Product Z with the parameters A, B and C, but this product has been deleted and cannot be added as such in ActualObject Z".
My problem is how I should mark these deleted objects as deleted. Before someone suggests just updating the delete flag instead of doing an actual delete query: my problem is not that simple. The delete flag and its behaviour should exist in all POJOs, not just in Product. This means I'll be getting cascade problems. For example, if I delete a Template, then its Products should also be deleted, and each Product has a reference to a Price object which should also be deleted, and each Price may have a reference to a VAT object, and so forth. All these cascaded objects should be marked as deleted.
My question is how I can accomplish this in a sensible manner. Going through every object being deleted, checking each field for references which should be deleted, going through their references, and so on, is quite laborious, and bugs can easily slip in.
I'm using Hibernate, and I was wondering whether it has any built-in features for this. Another idea I came up with was to use Hibernate interceptors to modify an actual SQL delete query into an update query (I'm not even 100% sure this is possible). My only concern is whether Hibernate relies on the foreign keys' cascades, in other words, whether the cascaded deletes are done by the database and not by Hibernate.
My problem is how I should mark these deleted objects as deleted.
I think you have chosen a very complex way to solve the task. It would be easier to introduce a ProductTemplate. Place into this object all the properties you need, plus a reference to a Product instance. Then, instead of marking the Product, you can just delete it (and delete all other entities, such as prices), and of course clear the reference in the ProductTemplate. When you create an instance of ActualObject, you will still be able to notify the user with an appropriate message.
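A rough sketch of that idea (all names and fields here are assumptions):

import java.math.BigDecimal;

public class ProductTemplate {
    // snapshot of the Product data the Template needs to keep
    private String productName;
    private BigDecimal price;
    // reference to the live Product; cleared when the Product is deleted
    private Product product;

    public boolean isProductDeleted() {
        return product == null; // drives the "has been deleted" notification
    }
}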
I think you're trying to make things much more complicated than they should be... anyway, what you're trying to do is handle Hibernate events; take a look at Chapter 12 of the Hibernate Reference, where you can choose between using interceptors and the event system.
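If you do go the "turn delete into update" route, note that Hibernate also offers the @SQLDelete annotation, which overrides the SQL issued when an entity is deleted, with no interceptor needed. A sketch under assumed table/column names:

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.SQLDelete;

@Entity
@SQLDelete(sql = "UPDATE product SET deleted = 1 WHERE id = ?")
public class Product {
    @Id
    private Long id;
    private boolean deleted;
    // note: explicit queries must still filter on 'deleted' themselves
}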
In any case... well good luck :)
public interface Deletable {
    public void delete();
}
Have all your deletable objects implement this interface. In their implementations, update the deleted flag and have them call their children's delete() method also - which implies that the children must be Deletable too.
Of course, upon implementation you'll have to manually figure which children are Deletable. But this should be straightforward, at least.
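For example (the class names here are assumed), a parent's implementation might look like:

import java.util.List;

public class Template implements Deletable {
    private boolean deleted;
    private List<Product> products; // Product implements Deletable too

    public void delete() {
        this.deleted = true;
        // cascade the soft delete down the object graph by hand
        for (Product product : products) {
            product.delete();
        }
    }
}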
If I understand what you are asking for: if you add an @OneToMany relationship between the Template and the Product and select your cascade rules, you will be able to delete all associated Products for a given Template. In your Product class, you can add the "deleted" flag as you suggested. This deleted flag would be leveraged by your service/DAO layer, e.g. you could use a getProducts(boolean includeDeleted) type of method to determine whether the "deleted" records should be included in the result. In this fashion you can control what end users see but still expose full functionality to internal business users.
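That service/DAO method could be sketched like this (the query and names are assumptions):

public List<Product> getProducts(Session session, boolean includeDeleted) {
    String hql = includeDeleted
            ? "from Product"
            : "from Product p where p.deleted = false";
    @SuppressWarnings("unchecked")
    List<Product> result = session.createQuery(hql).list();
    return result;
}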
The flag to delete should be part of the Template class itself, so that all the objects you create have a way to be flagged as alive or deleted. The marking of an object as deleted should go higher up, into the base class.