How to properly update entities using REST and JPA/Hibernate

I have an entity Document, which has lots of columns and one-to-one, one-to-many, and many-to-many mappings to other entities.
Example:
Document:
id,
title,
body,
authors,
viewers,
...
Using REST, I want to update a particular document. The controller receives a serialized Document object, and calling EntityManager's merge method persists nulls to the database: if the controller received, for instance, only body, then I want only the body to be updated, but merge also wipes out the records for title, authors, viewers, etc.
I understand that this is the standard behavior of EntityManager, but I am asking for the preferred technique for updating entities without receiving the whole entity from the front-end or some other endpoint. Should I load the entity from the database using the id I received, set MANUALLY all of the fields, and then save it back, or should I use another technique?
I don't have any problem with manually writing all of the setters to copy the changes, but the entities are really big, with lots of relations. I'm asking for the best practice in this case.
I know about DTOs, but I want an alternative approach that keeps using entities in controllers and service methods.

For a partial entity update you will need to use either the Criteria API or JPQL. If you are using an older version with no criteria update, or an old query parser where a JPQL UPDATE is not allowed, you will have to read from the database first, update, then write back. You can also make use of updatable = false for columns that should only be set on creation (like CREATION_DATE), and there is also a nice feature in Hibernate called @DynamicUpdate which I haven't tried but looks brilliant: it only updates the modified fields (check Vlad's post here). Concerning the DTO pattern, you might always need it if you want to hide or modify some data for the client regardless of how you store it, and it's always a good way to separate concerns (though it comes with the mapping headache between domain and DTO objects, which is much relieved thanks to Spring converters).
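For illustration, a minimal sketch of those two techniques, assuming a simplified Document entity (the fields shown are illustrative, not the asker's real model):

import java.time.LocalDateTime;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.DynamicUpdate;

@Entity
@DynamicUpdate // Hibernate-specific: generated UPDATE statements include only the columns that actually changed
public class Document {
    @Id
    private Long id;
    private String title;
    private String body;
    @Column(updatable = false) // written on INSERT only, never included in UPDATE statements
    private LocalDateTime creationDate;
    // getters and setters omitted
}

// A targeted JPQL update that touches only the body column:
int updated = em.createQuery(
        "UPDATE Document d SET d.body = :body WHERE d.id = :id")
        .setParameter("body", newBody)
        .setParameter("id", documentId)
        .executeUpdate();

Note that a bulk JPQL UPDATE bypasses the persistence context and optimistic-lock version checks, so it suits simple column updates rather than anything touching the mapped relations.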

There are two options. One is an update query, which works fine, but you may feel you are losing some Hibernate features and the simplicity of the code. Alternatively, you can do it the Hibernate way, like below:
AuditorBean auditorBean = (AuditorBean) session.get(AuditorBean.class, auditorId); // load the managed instance by its id
auditorBean.setFirstName("aa");
auditorBean.setLastName("bb");
auditorBean.setTrainLevel("ISO");
auditorBean.setAccessLevel(4);
Here you should not call any method like saveOrUpdate() or merge(). The object is attached to the transaction, so it is flushed and committed automatically at the end of the transaction.
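The same dirty-checking pattern works with plain JPA. A minimal sketch (entity and field names are illustrative):

// Inside an active transaction, a managed entity is dirty-checked automatically
em.getTransaction().begin();
Document doc = em.find(Document.class, documentId); // find() returns a managed instance
doc.setBody(newBody); // mutate only what actually changed
// no merge() or persist() needed: the pending change is flushed on commit
em.getTransaction().commit();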

Related

Does Objectify have an equivalent of Datastore.add() to insert non-existing entities only?

I would like to insert multiple entities (under the same entity group) into Datastore as a batch, with only the missing ones inserted and the rest left unmodified. Datastore.add(Entities...) seems to support this, as explained in this client issue and in the docs.
I don't see an alternative on ofy(), as the save() operation eventually converts into a datastore.put() as seen here, which will overwrite all the entities.
The alternative would be to open a new transaction in which I get these entities by their keys, find the missing ones in the list, and insert them. But I assume that would be more expensive than the earlier option, given that this transaction has a broader concurrency scope than the row-level one required by add().
Not currently, but it looks easy enough to support. Add a feature request at https://github.com/objectify/objectify/issues
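In the meantime, the transactional workaround sketched in the question might look roughly like this (a sketch assuming Objectify 5-style APIs; MyEntity and the surrounding class are illustrative):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import com.googlecode.objectify.Key;
import com.googlecode.objectify.Work;
import static com.googlecode.objectify.ObjectifyService.ofy;

public class AddMissing {
    // Insert only those candidates whose keys are not already present in Datastore
    public static void addMissing(final List<MyEntity> candidates) {
        ofy().transact(new Work<Void>() {
            @Override
            public Void run() {
                List<Key<MyEntity>> keys = new ArrayList<>();
                for (MyEntity e : candidates) {
                    keys.add(Key.create(e));
                }
                // load().keys() returns a map containing only the entities that exist
                Map<Key<MyEntity>, MyEntity> existing = ofy().load().keys(keys);
                List<MyEntity> missing = new ArrayList<>();
                for (MyEntity e : candidates) {
                    if (!existing.containsKey(Key.create(e))) {
                        missing.add(e);
                    }
                }
                ofy().save().entities(missing).now();
                return null;
            }
        });
    }
}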

How to merge input from a web service to a JPA entity

I'm trying to figure out the best way to use JPA in the context of a RESTful web service. The input comes in as JSON, and I can use Jackson/JAX-RS to convert that to a POJO. This gets passed to a service where I need to somehow merge it into a JPA entity.
These are the options I've found so far with pros and cons.
1. JPA merge()
The first thing I tried was probably the simplest. The GET action returns the JPA entity, which is easily serialized into JSON. On update, the object passed back is JSON, which can be used to populate a detached entity. This can be saved to the DB using the JPA merge() method.
Pros
Simple architecture with less code duplication (i.e. no DTOs)
Cons
As far as I can tell this only works if you pass the whole model around. If you try to hide certain fields, like maybe the password on a User entity, then merge thinks you're trying to set those fields to null in the DB. Not good!
2. DTOs using JPA find() and Dozer
Next I thought I'd look at using data transfer objects. Apparently an anti-pattern, but worth a look. The service now creates a DTO instance based on the entity, and it is this DTO that is serialized to JSON. The update then gets the entity from the DB using the find() method, and the values need to be copied across from the DTO to the entity. I tried automating this mapping using the Dozer framework.
Pros
You don't have to return the entire model. If you have certain fields you don't want updated, you can leave them off the DTO and they can't be copied to the entity by mistake. Using Dozer means you don't have to manually copy attributes from DTO to entity and vice versa.
Cons
It feels like repeating yourself when writing the DTOs. Somehow you have to map between entities and DTOs. I tried to automate this with Dozer, but it was a bit disappointing: it was nulling out things it shouldn't have been, and to get full control you have to write XML.
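For reference, the basic Dozer call is small. A minimal sketch (User and UserDto are illustrative; fine-grained control would go in Dozer's XML mapping files):

import javax.persistence.EntityManager;
import org.dozer.DozerBeanMapper;
import org.dozer.Mapper;

public User update(EntityManager em, UserDto dto) {
    Mapper mapper = new DozerBeanMapper();
    User entity = em.find(User.class, dto.getId()); // managed instance
    mapper.map(dto, entity); // copies matching properties in place; null DTO fields overwrite unless configured otherwise
    return entity;
}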
3. DTOs using manual merge
A third way would be to abandon Dozer and just copy the properties across from the DTO to the entity in the service. Everybody seems to say anti-pattern, but it's pretty much how every non-trivial application I've seen in the past has worked.
Summary
It seems to be a decision between keeping things simple for the developer but not having control over the input/output, or making a more robust web service but having to use an anti-pattern in the process...
Have I missed anything? Perhaps there's an elusive alternative?
Using JPA merge looks the simplest and cleanest and takes very little effort, but as you correctly discovered, it creates problems by setting detached entity attributes to null.
Another problem, which turned out to be big in one of my experiences, is that if you rely on the JPA merge operation you must be using the Cascade feature as well.
For simple, shallowly nested relations this works reasonably well, but for deeply nested domain objects with lots of relations it becomes a big performance hit. The reason is that the ORM tool (Hibernate in my experience) caches upfront the SQL used to load the merged entity (the 'merge path' in Hibernate parlance), and if the nesting is too deep with Cascade mappings, the joins in the SQL become too big. Marking relations Lazy does not help here, as the merge path is determined by the Cascades on the relations. This problem becomes apparent slowly as your model evolves. Plus, the prospect of an angry DBA waving a huge join query in our faces prompted us to do something different :-)
There is an interesting issue related to Hibernate regarding Merging of Lazy relations still unresolved (actually rejected but the discussion is very enjoyable to read) in Hibernate JIRA.
We then moved towards the DTO approach, where we refrained from using merge and relied on doing it manually. Yes, it was tedious and required knowing what state was actually coming in on the detached entity, but to us it was worth it. This way we do not touch the lazy relations and the attributes not meant to change, and set only what is required. Hibernate's automatic dirty checking does the rest on transaction commit.
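A minimal sketch of that manual approach (entity and DTO names are illustrative; only the fields the request is allowed to change are copied):

// Load the managed entity and copy over only the permitted, present fields
public Document update(DocumentDto dto) {
    Document entity = em.find(Document.class, dto.getId()); // managed instance
    if (dto.getTitle() != null) {
        entity.setTitle(dto.getTitle());
    }
    if (dto.getBody() != null) {
        entity.setBody(dto.getBody());
    }
    // lazy relations (authors, viewers) stay untouched unless the DTO carries them
    return entity; // dirty checking flushes the changes on commit
}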
This is the approach I am using:
suppress serialization of certain fields with the @XmlTransient annotation
when updating a record from the client, get the entity from the database and use ModelMapper with a custom property map to copy the updated values without touching the fields that are not in the JSON representation
For example:
public class User {
    @Id
    private long id;
    private String email;
    @XmlTransient
    private String password;
    ...
}

public class UserService {
    ...
    public User updateUser(User dto) {
        User entity = em.find(User.class, dto.getId());
        ModelMapper modelMapper = new ModelMapper();
        modelMapper.addMappings(new UserMap());
        modelMapper.map(dto, entity); // copies the incoming values onto the managed entity
        return entity;
    }
}

public class UserMap extends PropertyMap<User, User> {
    @Override
    protected void configure() {
        skip().setPassword(null); // never copy the password from the client payload
    }
}
BeanUtils is an alternative to ModelMapper.
It would be nice if these libraries could recognize the @XmlTransient annotation so the programmer could avoid creating the custom property map.

How would I audit the changes to a list of JPA entities?

I've got two lists of entities: one that is the current state of the rows in the DB, the other the changes that were made to that list. How do I audit the rows that were deleted or added, and the changes made to the entities? My audit table is used by all the entities.
Entity listeners and Callback methods look like a perfect fit, until you notice the sentence that says: A callback method must not invoke EntityManager or Query methods! Because of this restriction, I can collect audits, but I can't persist them to the database :(
My solution has been a complex algorithm to discover the audits.
If the entity is in the change list and has no key, it's an add
If the entity is in the db but not the changes list, it's a delete
If the entity is in both list, recursively compare their fields to find differences to audit (if any)
I collect these and insert them into the DB in the same transaction in which I merge the changes list. But I hate the fact that I'm writing this by hand. It seems like JPA should be able to do this logic for me.
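A compact sketch of that detection logic (Item and AuditRecord, with its factory methods, are hypothetical names, and the recursive field comparison is reduced to a single field for brevity):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// Classify each row as an add, a delete, or an update by comparing the two lists
List<AuditRecord> audits = new ArrayList<>();
Map<Long, Item> dbById = new HashMap<>();
for (Item db : dbItems) {
    dbById.put(db.getId(), db);
}
for (Item changed : changedItems) {
    if (changed.getId() == null) {
        audits.add(AuditRecord.added(changed)); // no key yet: an add
    } else {
        Item db = dbById.remove(changed.getId());
        if (db != null && !Objects.equals(db.getName(), changed.getName())) {
            audits.add(AuditRecord.updated(db, changed)); // a field differs: an update
        }
    }
}
for (Item deleted : dbById.values()) {
    audits.add(AuditRecord.deleted(deleted)); // in the DB but not in the changes: a delete
}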
One solution we've come up with is to use an Entity Listener that posts the audits to a JMS queue; the queue consumer then inserts the audits into the database. But I don't like this solution, because I think setting up a JMS queue is a pain. It's currently the best solution we've got, though.
I'm using eclipselink (ideally, that's not relevant) and have found these two things that look helpful but the JMS queue is a better solution than them:
http://wiki.eclipse.org/EclipseLink/FAQ/JPA#How_to_access_what_changed_in_an_object_or_transaction.3F This looks really difficult to use: you search for the fields by string, so if I refactor my entity and forget to update this, it'll throw a runtime error.
http://wiki.eclipse.org/EclipseLink/Examples/JPA/History This isn't consistent with the way we currently audit. It expects a special entity_history table.
The EntityListener looks like a good approach since you are able to collect the audit information.
Have you tried persisting the information in a different transaction than the one persisting the changes? Perhaps obtain a reference to a stateless EJB (assuming you are using EJBs) and use methods marked with @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW). That way the transaction persisting the original changes is put on hold while the audit transaction completes. Note that you will not be able to access the updated information in this separate audit transaction, since the original one has not committed yet.
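A minimal sketch of such an audit bean (the class and the AuditRecord entity are illustrative):

import java.util.List;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class AuditWriter {

    @PersistenceContext
    private EntityManager em;

    // Runs in its own transaction, independent of the caller's, so the audits
    // are persisted even though the originating transaction is still open
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void write(List<AuditRecord> audits) {
        for (AuditRecord audit : audits) {
            em.persist(audit);
        }
    }
}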

JPA merge in a RESTful web application with DTOs and Optimistic Locking?

My question is this: Is there ever a role for JPA merge in a stateless web application?
There is a lot of discussion on SO about the merge operation in JPA. There is also a great article on the subject which contrasts JPA merge with a more manual do-it-yourself process (where you find the entity via the entity manager and make your changes).
My application has a rich domain model (ala domain-driven design) that uses the #Version annotation in order to make use of optimistic locking. We have also created DTOs to send over the wire as part of our RESTful web services. The creation of this DTO layer also allows us to send to the client everything it needs and nothing it doesn't.
So far, I understand this is a fairly typical architecture. My question is about the service methods that need to UPDATE (i.e. HTTP PUT) existing objects. In this case we have two approaches: 1) JPA merge, and 2) DIY.
What I don't understand is how JPA merge can even be considered an option for handling updates. Here's my thinking and I am wondering if there is something I don't understand:
1) In order to properly create a detached JPA entity from a wire DTO, the version number must be set correctly...else an OptimisticLockException is thrown. But the JPA spec says:
An entity may access the state of its version field or property or export a method for use by the application to access the version, but must not modify the version value[30]. Only the persistence provider is permitted to set or update the value of the version attribute in the object.
2) Merge doesn't handle bi-directional relationships ... the back-pointing fields always end up as null.
3) If any fields or data is missing from the DTO (due to a partial update), then the JPA merge will delete those relationships or null-out those fields. Hibernate can handle partial updates, but not JPA merge. DIY can handle partial updates.
4) The first thing the merge method will do is query the database for the entity ID, so there is no performance benefit over DIY to be had.
5) In a DIY update, we load the entity and make the changes according to the DTO; there is no call to merge, or to persist for that matter, because the JPA context implements the unit-of-work pattern out of the box.
Do I have this straight?
Edit:
6) Merge behavior with regard to lazy-loaded relationships can differ amongst providers.
Using merge does require you to either send and receive a complete representation of the entity or maintain server-side state. For trivial CRUD-y operations, it is easy and convenient. I have used it plenty in stateless web apps where there is no meaningful security hazard in letting the client see the entire entity.
However, if you've already reduced operations to only passing the immediately relevant information, then you need to also manually write the corresponding services.
Just remember that when doing your 'DIY' update you still need to pass a version number around on the DTO and manually compare it to the one that comes out of the database. Otherwise you don't get the optimistic locking that spans 'user think-time' which you would have with the simpler merge approach.
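That manual version check might look like this sketch (entity and DTO names are illustrative, and the version is assumed to be a primitive long):

import javax.persistence.OptimisticLockException;

// Compare the version the client last saw against the current row
Document entity = em.find(Document.class, dto.getId());
if (entity.getVersion() != dto.getVersion()) {
    // another transaction committed during the user's think-time
    throw new OptimisticLockException(entity);
}
entity.setBody(dto.getBody()); // safe to apply; the @Version field is bumped on commit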
You can't change the version on an entity created by the provider, but when you have made your own instance of the entity class with the new keyword it is fine and expected to set the version on it.
It will make the persistent representation match the in-memory representation you provide, and this can include making things null. Remember that when an object is merged, that object is supposed to be discarded and replaced with the one returned by merge. You are not supposed to merge an object and then continue using it; its state is not defined by the spec.
True.
Most likely, as long as your DIY solution is also using the entity ID and not an arbitrary query. (There are other benefits to using the 'find' method over a query.)
True.
I would add:
7) Merge translates to an insert or an update depending on whether the record exists in the DB; hence it does not deal correctly with update-vs-delete optimistic concurrency. That is, if another user concurrently deletes the record and you update it, merge must (1) throw a concurrency exception... but it does not; it just inserts the record as a new one.
(1) At least, in most cases, in my opinion, it should. I can imagine some cases where I would want this use case to trigger a new insert, but they are far from usual. At least, I would like the developer to think twice about it, not just accept that "merge() == updateWithConcurrencyControl()", because it is not.

JPA2/Hibernate - Stop lazy loading?

I'm having a problem where JPA is trying to lazily load my data when I don't want it to. Essentially what is happening is that I'm using a service to retrieve some data, and when I go to parse that data into JSON, the JSON library triggers Hibernate's lazy loading. Is there any way to stop this? I've given an example below.
// Web controller method
public String getEmployeesByQuery(String query) {
    Gson gson = new Gson();
    List<Employee> employees = employeeService.findEmployeesByQuery(query);
    // Here is where the problem occurs: gson.toJson() is (I imagine) reflecting
    // over the object graph to format the JSON output, which triggers Hibernate
    // to try to lazily load my data...
    return gson.toJson(employees);
}
Is it possible to tell JPA/Hibernate not to lazily load the data?
UPDATE: I realize that you can use FetchType.EAGER, but what if I don't want to eager-load that data? I just want to stop Hibernate from trying to retrieve more data; I already have the data I want. Right now, whenever I access a getter, Hibernate throws a "no session or session is closed" error, which makes sense because my transaction was already committed by my service.
Thanks!
There are several options:
If you always need to load your collection eagerly, you can specify fetch = FetchType.EAGER in your mapping, as suggested in other answers.
Otherwise you can enable eager fetching for a particular query:
By using JOIN FETCH clause in HQL/JPQL query:
SELECT e FROM Employee e JOIN FETCH e.children WHERE ...
By using fetch profiles (in JPA you can access Hibernate Session via em.unwrap(Session.class))
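A minimal fetch-profile sketch (the profile, entity, and association names are illustrative):

import javax.persistence.Entity;
import org.hibernate.Session;
import org.hibernate.annotations.FetchMode;
import org.hibernate.annotations.FetchProfile;

@Entity
@FetchProfile(name = "employee-with-children", fetchOverrides = {
    @FetchProfile.FetchOverride(entity = Employee.class, association = "children", mode = FetchMode.JOIN)
})
public class Employee { /* ... */ }

// Enable the profile before loading, and the association is fetched eagerly:
Session session = em.unwrap(Session.class);
session.enableFetchProfile("employee-with-children");
Employee e = em.find(Employee.class, employeeId); // children loaded via JOIN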
You really have two options:
You can copy the data from the employee to an object that is not being proxied by Hibernate.
See if there is a way to not have the toJSON library reflect the entire object graph. I know some JSON libraries allow you to only serialize some properties of an object to JSON.
Personally I would think #1 would be easier if your library only uses reflection.
As others have stated, this is not an issue with JPA/Hibernate but rather with the JSON serialization library you are using. You should instruct Gson to exclude the properties you don't want traversed.
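With Gson, one way is the @Expose annotation (a minimal sketch; the field names are illustrative):

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.annotations.Expose;

public class Employee {
    @Expose
    private String name;           // serialized
    private Department department; // lazy relation without @Expose: skipped below
}

// Only @Expose-annotated fields are traversed, so the lazy proxy is never touched
Gson gson = new GsonBuilder()
        .excludeFieldsWithoutExposeAnnotation()
        .create();
String json = gson.toJson(employees);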
Yes:
@*ToMany(fetch = FetchType.EAGER)
I suggest you make a fetched copy of the entities you want to use outside of a transaction. That way, the lazy loading occurs from within a transaction and you can pass Gson a plain, non-enhanced POJO.
You can use Dozer to do this. It is very flexible, and with a little configuration (read: you're going to lose your hair configuring it) you can even retrieve only part of the data you want to send to Gson.
You could always change the fetch attribute to FetchType.EAGER, but it is also worth considering whether your transactions have the right scope. Collections are loaded correctly if they are accessed within a transaction.
Your problem is that you are serializing the data. We ran into the same sort of problem with Flex and JPA/Hibernate. The trick is, depending on how much you want to mangle things, either:
Change your data model to not chase after the data you don't want.
Copy the data you do want into some sort of DTO that has no relationships to worry about.
Assuming you're using Hibernate, add the Open Session in View filter... it will keep the session open while you serialize the entire database. ;)
Option one is what we did for the first big project we tackled, but it ruined the data access library we had for any sort of general-purpose use. Since that time we've tended more toward option two.
YMMV
The easy and straightforward thing to do is create new data classes (something like DTOs)
use Hibernate.isInitialized() to check whether an object has been initialized by Hibernate
I am checking whether Gson lets me override anything; I will post here if I find anything new.
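A minimal sketch of that check (the DTO shape is illustrative; employee and gson are assumed from the earlier controller):

import org.hibernate.Hibernate;

// Copy into a plain DTO, touching a lazy relation only if it is already loaded
EmployeeDto dto = new EmployeeDto();
dto.setName(employee.getName());
if (Hibernate.isInitialized(employee.getChildren())) {
    dto.setChildren(employee.getChildren()); // safe: already fetched inside the session
}
String json = gson.toJson(dto);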
