Let's pretend a RESTful service receives a PATCH request to update one or more fields of an entity that might have tens of fields.
@Entity
public class SomeEntity {
    @Id
    @GeneratedValue
    private Long id;
    // many other fields
}
One dirty way to patch the corresponding entity is to write something like this:
SomeEntity patch = deserialize(json);
SomeEntity existing = findById(patch.getId());
if (existing != null) {
    if (patch.getField1() != null) {
        existing.setField1(patch.getField1());
    }
    if (patch.getField2() != null) {
        existing.setField2(patch.getField2());
    }
    if (patch.getField3() != null) {
        existing.setField3(patch.getField3());
    }
}
But this is insane! And if I want to patch one-to-many and other associations of the entity, the insanity could even become hazardous!
Is there a sane and elegant way to achieve this task?
Modify the getters of SomeEntity and apply a check: if the patch value is blank or null, just return the corresponding value from the existing entity.
class SomeEntity {
    transient SomeEntity existing;
    private String name;

    public String getName() {
        if ((name != null && name.length() > 0) || existing == null) {
            return name;
        }
        return existing.getName();
    }
}
You can send an array containing the names of the fields you are going to patch. Then, on the server side, set each field on the entity by reflection or any field-mapping mechanism (a sketch of this follows below). I have already implemented that and it works, though my best advice is this:
Don't publish an endpoint to perform a "generic" PATCH (modification), but one that performs a specific operation. For instance, if you want to modify an employee's address, publish an endpoint like:
PUT /employees/3/move
that expects a JSON with the new address {"address" : "new address"}.
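For the reflection-based variant mentioned above, a minimal sketch could look like this; the helper name applyPatch and the fieldNames list (taken from the request) are illustrative assumptions, not an existing API:
import java.lang.reflect.Field;
import java.util.List;

public class PatchUtil {

    // Copies only the listed fields from the patch object onto the existing entity.
    // "fieldNames" would come from the request, e.g. {"fields": ["field1", "field3"], ...}
    public static <T> void applyPatch(T existing, T patch, List<String> fieldNames) {
        for (String fieldName : fieldNames) {
            try {
                Field field = existing.getClass().getDeclaredField(fieldName);
                field.setAccessible(true);
                field.set(existing, field.get(patch));
            } catch (NoSuchFieldException | IllegalAccessException e) {
                throw new IllegalArgumentException("Cannot patch field: " + fieldName, e);
            }
        }
    }
}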
Instead of reinventing the wheel by writing the logic yourself, why don't you use a mapping library like Dozer? You want to use the 'map-null' mapping property: http://dozer.sourceforge.net/documentation/exclude.html
EDIT: I am not sure whether it would be possible to map a class onto itself. You could use an intermediary DTO, though.
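If self-mapping does work, a rough sketch of the Dozer-based approach could look like this; the mapping-file name is an assumption, and map-null="false" would be declared there so that null fields in the patch are skipped:
import java.util.Arrays;
import org.dozer.DozerBeanMapper;
import org.dozer.Mapper;

// "dozer-mapping.xml" is a hypothetical mapping file that declares
// <mapping map-null="false"> for SomeEntity, so null source fields are not copied.
Mapper mapper = new DozerBeanMapper(Arrays.asList("dozer-mapping.xml"));

SomeEntity patch = deserialize(json);
SomeEntity existing = findById(patch.getId());

// Copies only the non-null fields of "patch" onto "existing".
mapper.map(patch, existing);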
Hi, consider the following example:
Resource:
@PUT
@Path("{id}")
public Response update(@PathParam(value = "id") final String id, final Person person) {
    final Person existing = service.getPerson(id);
    final EntityTag etag = new EntityTag(Integer.toString(existing.hashCode()));
    // If-Match is required
    ResponseBuilder builder = request.evaluatePreconditions(etag);
    if (builder != null) {
        throw new DataHasChangedException("Person data has changed: " + id);
    }
    service.updatePerson(id, person.getName());
    ....
}
Service:
public void updatePerson(final String id, final String name) {
    final Query<Person> findQuery = morphiaDataStore.createQuery(Person.class).filter("id ==", id);
    UpdateOperations<Person> operation = morphiaDataStore.createUpdateOperations(Person.class).set("name", name);
    morphiaDataStore.findAndModify(findQuery, operation);
}
Person:
#Entity("person")
public class Person {
#Id
private ObjectId id;
#Version
private Long version;
private String name;
...
}
I do check whether the ETag provided matches the person in the database. However, this check is done in the resource itself. I don't think that this is safe, since the update happens after the check and another thread could have gone through the check in the meantime. How can this be solved correctly? Any example or advice is appreciated.
Morphia already implements optimistic locking via the @Version annotation.
http://mongodb.github.io/morphia/1.3/guides/annotations/#version
@Version marks a field in an entity to control optimistic locking. If the versions change in the database while modifying an entity (including deletes) a ConcurrentModificationException will be thrown. This field will be automatically managed for you – there is no need to set a value and you should not do so. If another name beside the Java field name is desired, a name can be passed to this annotation to change the document’s field name.
I see you have already used the annotation in your example. Make sure the clients include the version of the document as part of the request so you can also pass it to Morphia.
I'm not sure whether findAndModify will be able to handle it (I would think it does), but at least I'm sure save does handle it.
Assuming the object person contains the new name and the version that the client was looking at, you can directly do something like this to update the record:
morphiaDataStore.save(person);
If there was another save before this client could pick it up, the versions will no longer match and a ConcurrentModificationException will be thrown with this message:
Entity of class %s (id='%s',version='%d') was concurrently updated
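A minimal sketch of that flow, assuming the incoming person is deserialized from the request body with the id, the new name, and the version the client originally read (the getId() getter is assumed for the error message):
// "person" carries the id, the new name, and the version value the client read.
try {
    // save() checks the @Version field; a stale version triggers the exception below.
    morphiaDataStore.save(person);
} catch (java.util.ConcurrentModificationException e) {
    // Someone else updated the document in the meantime; map this to e.g. HTTP 412,
    // or rethrow it like in the resource above.
    throw new DataHasChangedException("Person data has changed: " + person.getId());
}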
I have a bidirectional one-to-many relationship.
0 or 1 client <-> List of 0 or more product orders.
That relationship should be set or unset on both entities:
On the client side, I want to set the List of product orders assigned to the client; the client should then be set / unset to the orders chosen automatically.
On the product order side, I want to set the client to which the order is assigned; that product order should then be removed from its previously assigned client's list and added to the new assigned client's list.
I want to use pure JPA 2.0 annotations and only one "merge" call to the entity manager (with cascade options). I've tried the following code pieces, but it doesn't work (I use EclipseLink 2.2.0 as the persistence provider):
@Entity
public class Client implements Serializable {

    @OneToMany(mappedBy = "client", cascade = CascadeType.ALL)
    private List<ProductOrder> orders = new ArrayList<>();

    public void setOrders(List<ProductOrder> orders) {
        for (ProductOrder order : this.orders) {
            order.unsetClient();
            // don't use order.setClient(null);
            // (ConcurrentModificationEx on array)
            // TODO doesn't work!
        }
        for (ProductOrder order : orders) {
            order.setClient(this);
        }
        this.orders = orders;
    }

    // other fields / getters / setters
}
@Entity
public class ProductOrder implements Serializable {

    @ManyToOne(cascade = CascadeType.ALL)
    private Client client;

    public void setClient(Client client) {
        // remove from previous client
        if (this.client != null) {
            this.client.getOrders().remove(this);
        }
        this.client = client;
        // add to new client
        if (client != null && !client.getOrders().contains(this)) {
            client.getOrders().add(this);
        }
    }

    public void unsetClient() {
        client = null;
    }

    // other fields / getters / setters
}
Facade code for persisting client:
// call setters on entity by JSF frontend...
getEntityManager().merge(client);
Facade code for persisting product order:
// call setters on entity by JSF frontend...
getEntityManager().merge(productOrder);
When changing the client assignment on the order side, it works well: On the client side, the order gets removed from the previous client's list and is added to the new client's list (if re-assigned).
BUT when changing on the client side, I can only add orders (on the order side, assignment to the new client is performed), but it just ignores it when I remove orders from the client's list (after saving and refreshing, they are still in the list on the client side, and on the order side they are also still assigned to the previous client).
Just to clarify, I DO NOT want to use a "delete orphan" option: when removing an order from the list, it should not be deleted from the database, but its client assignment should be updated (that is, set to null), as defined in the Client#setOrders method. How can this be achieved?
EDIT: Thanks to the help I received here, I was able to fix this problem. See my solution below:
The client ("One" / "owned" side) stores the orders that have been modified in a temporary field.
@Entity
public class Client implements Serializable, EntityContainer {

    @OneToMany(mappedBy = "client", cascade = CascadeType.ALL)
    private List<ProductOrder> orders = new ArrayList<>();

    @Transient
    private List<ProductOrder> modifiedOrders = new ArrayList<>();

    public void setOrders(List<ProductOrder> orders) {
        if (orders == null) {
            orders = new ArrayList<>();
        }
        modifiedOrders = new ArrayList<>();
        for (ProductOrder order : this.orders) {
            order.unsetClient();
            modifiedOrders.add(order);
            // don't use order.setClient(null);
            // (ConcurrentModificationEx on array)
        }
        for (ProductOrder order : orders) {
            order.setClient(this);
            modifiedOrders.add(order);
        }
        this.orders = orders;
    }

    @Override // defined by my EntityContainer interface
    public List getContainedEntities() {
        return modifiedOrders;
    }
}
On the facade, when persisting, it checks if there are any entities that must be persisted, too. Note that I used an interface to encapsulate this logic as my facade is actually generic.
// call setters on entity by JSF frontend...
getEntityManager().merge(entity);
if (entity instanceof EntityContainer) {
EntityContainer entityContainer = (EntityContainer) entity;
for (Object childEntity : entityContainer.getContainedEntities()) {
getEntityManager().merge(childEntity);
}
}
JPA does not do this, and as far as I know there is no JPA implementation that does this either. JPA requires you to manage both sides of the relationship. When only one side of the relationship is updated, this is sometimes referred to as "object corruption".
JPA does define an "owning" side in a two-way relationship (for a OneToMany this is the side that does NOT have the mappedBy attribute) which it uses to resolve a conflict when persisting to the database (there is only one representation of this relationship in the database compared to the two in memory, so a resolution must be made). This is why changes to the ProductOrder class are realized but not changes to the Client class.
Even with the "owning" relationship you should always update both sides. This often leads people to rely on only updating one side, and they get in trouble when they turn on the second-level cache. In JPA the conflicts mentioned above are only resolved when an object is persisted and reloaded from the database. Once the second-level cache is turned on, that may be several transactions down the road, and in the meantime you'll be dealing with a corrupted object.
You also have to merge the Orders that you removed; just merging the Client is not enough.
The issue is that although you are changing the Orders that were removed, you are never sending these orders to the server, and never calling merge on them, so there is no way for your changes to be reflected.
You need to call merge on each Order that you remove. Or process your changes locally, so you don't need to serialize or merge any objects.
EclipseLink does have a bidirectional relationship maintenance feature which may work for you in this case, but it is not part of JPA.
Another possible solution is to add a new property on your ProductOrder; I named it detached in the following examples.
When you want to detach the order from the client you can use a callback on the order itself:
@Entity
public class ProductOrder implements Serializable {
    /*...*/

    // in your case this could probably be @Transient
    private boolean detached;

    @PreUpdate
    public void detachFromClient() {
        if (this.detached) {
            client.getOrders().remove(this);
            client = null;
        }
    }
}
Instead of deleting the orders you want to remove, you set detached to true. When you merge & flush the client, the entity manager will detect the modified orders and execute the @PreUpdate callback, effectively detaching each order from the client.
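A rough usage sketch of that idea, assuming a setDetached setter exists on ProductOrder; the entity-manager calls mirror the facade code used earlier in this question:
// Mark the orders that should no longer belong to the client.
for (ProductOrder order : ordersToRemove) {
    order.setDetached(true);   // assumed setter for the "detached" flag
}

// Merging and flushing the client cascades to its orders; the @PreUpdate
// callback then clears the client reference on each detached order.
getEntityManager().merge(client);
getEntityManager().flush();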
In my data model I have many entities whose attributes are mapped to enumerations like this:
@Enumerated(EnumType.STRING)
private MySpecialEnum enumValue;
MySpecialEnum defines some fixed values. The mapping works fine, and if the database holds a NULL value for a column I get null in the enumValue attribute too.
The problem is that my backend module (which I have no influence on) uses spaces in CHAR columns to indicate that no value is set. So I get an IllegalArgumentException instead of a null value.
So my question is: is there a JPA event where I can change the value read from the database before it is mapped to the enum attribute?
For write access there is @PrePersist, where I can change null values to spaces. I know there is the @PostLoad event, but that is handled after mapping.
Btw: I am using OpenJPA as shipped with WebSphere Application Server.
You could map the enum-type field as @Transient (it will not be persisted) and map another field directly as String, synchronizing them in @PostLoad:
@Transient
private MyEnum fieldProxy;

private String fieldDB;

@PostLoad
public void postLoad() {
    if (" ".equals(fieldDB))
        fieldProxy = null;
    else
        fieldProxy = MyEnum.valueOf(fieldDB);
}
Use get/setFieldProxy() in your Java code.
As for synchronizing the other way, I'd do it in a setter, not in a @PreUpdate, as changes to @Transient fields probably do not mark the entity as modified and the update operation might not be triggered (I'm not sure of this):
public void setFieldProxy(MyEnum value) {
fieldProxy = value;
if (fieldProxy == null)
fieldDB = " ";
else
fieldDB = value.name();
}
OpenJPA offers @Externalizer and @Factory to handle "special" database values.
See this: http://ci.apache.org/projects/openjpa/2.0.x/manual/manual.html#ref_guide_pc_extern_values
You might end up with something like this (not tested):
@Factory("MyClass.mySpecialEnumFactory")
private MySpecialEnum special;
...
public static MySpecialEnum mySpecialEnumFactory(String external) {
    if (StringUtils.isBlank(external)) return null; // or why not MySpecialEnum.NONE;
    return MySpecialEnum.valueOf(external);
}
I have a Java class which has one field with a getter and setter, and a second pair of getter and setter that access this field in another way:
public class NullAbleId {
private static final int NULL_ID = -1;
private int internalId;
getter & setter for internalId
public Integer getId() {
if(this.internalId == NULL_ID) {
return null;
} else {
return Integer.valueOf(internalId);
}
}
public void setId(Integer id) {
if (id == null) {
this.internalId = NULL_ID;
} else {
this.internalId = id.intValue();
}
}
}
(the reason for this construction is that I want a way to handle nullable Integers)
On the Flash/Flex client side, I have a class with two properties: id and internalId (the id property is only for testing; in the end it should return the internalId value).
BlazeDS seems to transfer both values, id and internalId, because both have a complete getter/setter pair. I want BlazeDS not to transfer id; only internalId should be transferred. But I have no idea how to configure that.
All the rules for BlazeDS serialization are here:
http://livedocs.adobe.com/blazeds/1/blazeds_devguide/help.html?content=serialize_data_3.html
Here is a quote: "Fields that are static, transient, or nonpublic, as well as bean properties that are nonpublic or static, are excluded."
So if you can make your id property fit those criteria, it will be excluded. Another option would be to create a custom serializer that explicitly does not include your id property.
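A minimal sketch of that first option, assuming it is acceptable to make the id bean property nonpublic; per the rule quoted above, BlazeDS should then skip it:
public class NullAbleId {
    private static final int NULL_ID = -1;
    private int internalId;

    // public bean property: serialized by BlazeDS
    public int getInternalId() { return internalId; }
    public void setInternalId(int internalId) { this.internalId = internalId; }

    // nonpublic (package-private) bean property: excluded per the quoted rule
    Integer getId() {
        return internalId == NULL_ID ? null : Integer.valueOf(internalId);
    }

    void setId(Integer id) {
        this.internalId = (id == null) ? NULL_ID : id.intValue();
    }
}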
Besides transient / a custom marshaller, you can implement the Externalizable interface and create your own custom serialization.
See serialization rules
It may be a little bit old, but it could help some: there is a nice ticket about excluding properties from Java to Flex via BlazeDS.
EDIT: a better solution is to use the @AmfIgnore annotation (or @AmfIgnoreField if your serialization works directly on the fields) present in spring-flex-core.jar (I've used the 1.5.2-RELEASE).
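If the spring-flex route is used, the annotation could be applied roughly like this (a sketch, assuming spring-flex-core's @AmfIgnore is on the classpath and Spring's AMF converters are in use):
import org.springframework.flex.core.io.AmfIgnore;

public class NullAbleId {
    private static final int NULL_ID = -1;
    private int internalId;

    public int getInternalId() { return internalId; }
    public void setInternalId(int internalId) { this.internalId = internalId; }

    @AmfIgnore   // skip this property when serializing to AMF
    public Integer getId() {
        return internalId == NULL_ID ? null : Integer.valueOf(internalId);
    }

    @AmfIgnore   // skip this property when deserializing from AMF
    public void setId(Integer id) {
        this.internalId = (id == null) ? NULL_ID : id.intValue();
    }
}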
I defined a generator for a JPA class:
<sequence-generator name="MY_SEQ" allocation-size="-1"
sequence-name="MY_SEQ"
initial-value="100000000" />
There are cases where I already have an ID for an entity, but when I insert the entity the ID gets generated by the generator anyway.
Is it possible to define a generator that will only generate an Id when one does not exist?
I am using Hibernate as a JPA Provider.
Thank you
I couldn't find a way to do this in JPA, so I used Hibernate EJB3 event listeners. I overrode saveWithGeneratedId to use reflection to check the entity for an @Id annotation and then to check that field for a value. If it has a value, I call saveWithRequestedId instead. Otherwise I let it generate the ID. This worked well because I can still use the sequence set up for Hibernate if I need an ID. The reflection might add overhead, so I might change it a little. I was thinking of having a getId() or getPK() method in all entities so I don't have to search for which field is the @Id (see the sketch at the end of this answer).
Before I used reflection, I tried calling session.getIdentifier(entity) to check, but I was getting TransientObjectException("The instance was not associated with this session"). I couldn't figure out how to get the entity into the session without saving it first, so I gave up. Below is the listener code I wrote.
public class MergeListener extends org.hibernate.ejb.event.EJB3MergeEventListener {

    @Override
    protected Serializable saveWithGeneratedId(Object entity, String entityName, Object anything, EventSource source, boolean requiresImmediateIdAccess) {
        Integer id = null;
        Field[] declaredFields = entity.getClass().getDeclaredFields();
        for (Field field : declaredFields) {
            Id annotation = field.getAnnotation(javax.persistence.Id.class);
            if (annotation != null) {
                try {
                    Method method = entity.getClass().getMethod("get" + field.getName().substring(0, 1).toUpperCase() + field.getName().substring(1));
                    Object invoke = method.invoke(entity);
                    id = (Integer) invoke;
                } catch (Exception ex) {
                    // something failed (method not found, etc.); keep going anyway
                }
                break;
            }
        }
        if (id == null || id == 0) {
            return super.saveWithGeneratedId(entity, entityName, anything, source, requiresImmediateIdAccess);
        } else {
            return super.saveWithRequestedId(entity, id, entityName, anything, source);
        }
    }
}
I then had to add the listener to my persistence.xml
<property name="hibernate.ejb.event.merge" value="my.package.MergeListener"/>
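The getId()-interface idea mentioned above could look roughly like this; the names are hypothetical, and it would simply replace the reflection loop in the listener:
// Hypothetical marker interface each entity would implement instead of
// being inspected via reflection.
public interface Identifiable {
    Integer getId();
}

// The listener could then resolve the id without any reflection:
public final class IdResolver {
    private IdResolver() {}

    public static Integer resolveId(Object entity) {
        return (entity instanceof Identifiable) ? ((Identifiable) entity).getId() : null;
    }
}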
It's not a good idea: sequences are used for surrogate keys, which are meaningless in the business sense but assure that there won't be duplicates, and thus no errors at insert time.