I have a JPA entity (but this question is interesting in general) that consists of multiple child classes (aggregation).
I need to create a new entry in the DB that is 90% identical to the existing one (a few business values and of course the IDs need to be different).
Since we already use MapStruct for mapping between entity and TO, I was thinking: "Can MapStruct do this for me?" After creating a deep copy I could simply update the remaining fields and persist the object.
Writing a copy constructor by hand is error-prone (newly added fields could be forgotten), so a generator-based approach would be much appreciated.
Yes, you can use MapStruct's DeepClone mapping control:
This Javadoc contains an example:
https://mapstruct.org/documentation/dev/api/org/mapstruct/control/MappingControl.html
import org.mapstruct.Mapper;
import org.mapstruct.control.DeepClone;
import org.mapstruct.factory.Mappers;

@Mapper(mappingControl = DeepClone.class)
public interface CloningMapper {
    CloningMapper INSTANCE = Mappers.getMapper(CloningMapper.class);
    MyDTO clone(MyDTO in);
}
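Usage would then look roughly like this (a sketch only; the setter names are placeholders, not taken from the question, and a generated ID is assumed):
MyDTO copy = CloningMapper.INSTANCE.clone(original);
copy.setId(null);                  // placeholder: clear the ID so a new one is assigned on persist
copy.setSomeBusinessValue("new");  // placeholder: adjust the few values that differ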
Yes, but be careful: if MapStruct discovers the same type on source and target, it will simply take over (not clone) the source instance, unless you define a method signature for that type.
In other words: check the generated code carefully.
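For example, a sketch of what such an explicit signature could look like (ChildEntity is a made-up nested type, just for illustration):
@Mapper(mappingControl = DeepClone.class)
public interface CloningMapper {
    CloningMapper INSTANCE = Mappers.getMapper(CloningMapper.class);

    MyDTO clone(MyDTO in);

    // without a signature like this, a property of the same type on both sides
    // may simply be taken over by reference instead of being cloned
    ChildEntity clone(ChildEntity in);
}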
It is not easy to explain my issue.
We have some complex JPA objects that are used for calculations and stored in a database.
We decided to set the results in a working copy of these objects.
This means that for each object model we created a separate working-copy model file with the same fields, but with some different LocalDate values and new result fields.
When the calculation starts, the working copies are instantiated.
I don't think this approach is the best one.
I am considering the prototype pattern to clone the objects.
But then I run into the problem of how to add the new fields.
Instantiation has a cost, and it creates lots of additional model class files.
The only alternative I can think of is to put the result fields into the calculation data models as transient fields.
Maybe an inner class or a local class?
I also tried to use an interface as a data bucket,
but that is not really what interfaces are for, and it only works with a number of odd tricks.
For unit tests and user input I think it is best to use the builder pattern and then tell JPA to store the parent object. Or is that wrong?
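For reference, the transient-field idea mentioned above would look roughly like this minimal sketch (CalculationModel, validFrom and resultValue are made-up names):
import java.math.BigDecimal;
import java.time.LocalDate;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Transient;

@Entity
public class CalculationModel {

    @Id
    private Long id;

    private LocalDate validFrom;

    // not persisted; only filled in during a calculation run
    @Transient
    private BigDecimal resultValue;
}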
Sorry, but my answer was too long for a comment :(
There is a big, complex object relationship with Lists, Sets, One-To-Many relationships, etc. When I set the result in a new class I can't determine the right object, e.g. in a list. So we built the same structure for these results and separated those classes into their own package. Maybe it is possible not to build the structure a second time with references back to the "basic classes"; it should be sufficient to reference one result class from each basic class, and it would only take a little more navigation to get values from deeper classes. For a similar use case there must be a best practice, right? Interfaces or something. I really dislike the many result classes. Is it not possible to clone a class and add members to it for the result, or to group things logically in an easier way, or something like that?
It could be a solution for somebody:
http://help.eclipse.org/luna/index.jsp?topic=%2Forg.eclipse.jdt.doc.isv%2Freference%2Fapi%2Forg%2Feclipse%2Fjdt%2Fcore%2FIWorkingCopy.html
Here you will work with the Eclipse API and create IWorkingCopies.
For the described task it is probably far too much, though.
I have a quite complex object structure (with a bunch of primitive fields and object references) and want to test all fields except a few of them. As an example:
ComplexObject actual = generateMagically("someInput");
ComplexObject expected = ActualFunction.instance.workMagically(actual);
// we want to be sure that workMagically() creates a new ComplexObject
// with some fields different from those of the "actual" object.
// assertThat(actual, samePropertyValuesAs(expected)); would check all fields.
// What I actually want is the following - note that "fieldName1" and "fieldName2"
// are primitives belonging to ComplexObject:
assertThat(actual, samePropertyValuesExceptAs(expected, "fieldName1", "fieldName2"));
Since I don't want to check all fields manually, I believe there must be a way to write that test elegantly. Any ideas?
Cheers.
You should have a look at shazamcrest, a great Hamcrest extension that offers what you need.
assertThat(actual, sameBeanAs(expected).ignoring("fieldName1").ignoring("fieldName2"));
See https://github.com/shazam/shazamcrest#ignoring-fields
Just pass the properties to ignore as additional (vararg) parameters to samePropertyValuesAs.
Hamcrest matcher API
public static <B> Matcher<B> samePropertyValuesAs(B expectedBean, String... ignoredProperties)
e.g.
samePropertyValuesAs(salesRecord,"id")
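Applied to the example from the question, that would be roughly (assuming Hamcrest 2.x, where this overload is available):
assertThat(actual, samePropertyValuesAs(expected, "fieldName1", "fieldName2"));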
In general I see two solutions if ComplexObject can be modified by yourself.
You could introduce an interface that represents the properties of ComplexObject that are being changed by ActualFunction. Then you can test that all properties of that new interface have changed. This would require that ComplexObject implements that new interface.
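A rough sketch of that first suggestion (the ChangedByMagic name and the getter types are made up for illustration):
// Interface listing only the properties that ActualFunction changes.
public interface ChangedByMagic {
    String getFieldName1();
    int getFieldName2();
}

// ComplexObject implements ChangedByMagic, and the test checks just those getters:
// assertThat(expected.getFieldName1(), not(equalTo(actual.getFieldName1())));
// assertThat(expected.getFieldName2(), not(equalTo(actual.getFieldName2())));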
Another approach would be to replace the properties of ComplexObject that are changed by ActualFunction with a single new property of a new type that contains all those properties. A better design would then be to let ActualFunction return an instance of that new type.
The last time I had a similar requirement, I came to the conclusion that manually writing both the code and the tests asserting that some values are updated is inherently fragile and error-prone.
I externalized the fields into a bag object and generated the Java source files for both the bag class itself and the copier at compile time. This way you can test actual code (the generator) and have the definition of the domain in exactly one place, so the copy code can't get out of date.
The language used to describe the properties can be anything you are comfortable with, from JSON Schema to XML to Java itself (a Java example follows; the custom annotations are meant to be consumed by the generator):
// @Prop is the custom annotation read by the source generator
public class MyBag {
    @Prop public int oh;
    @Prop public String yeah;
}
I need to add a new String value to each object in a list of custom-type objects, ServiceOrderEntity in this case. I know that this somewhat breaks the integrity of ServiceOrderEntity, but I have to access this field from a JSP. What is the best way to do it?
DAO class:
...
SQLQuery localSQLQuery = localSession.createSQLQuery(query).addEntity(ServiceOrderEntity.class);
List<ServiceOrderEntity> localList = localSQLQuery.list();
Iterator<ServiceOrderEntity> itr = localList.iterator();
while (itr.hasNext()) {
    String field = "some value";
    itr.next().append(field); // something like that maybe....
}
return to Service class
...
Service class
...
List<ServiceOrderEntity> serviceOrderList = /* perform DAO request */;
model.addAttribute("serviceOrderList", serviceOrderList);
....
UPDATE
I have all models generated by Hibernate and I don't want to touch them. I need to add the value to a custom object, in this case ServiceOrderEntity, or find a workaround. I think I can make a copy of it and append a new field to it (using Dozer)? The new field is the result of other, more complex subqueries.
List of ServiceOrderEntity objects at runtime:
-list
--[0]model.ServiceOrderEntity#d826d3c7
---createdBy = {....}
---serviceRequestFK {java.lang.Integer} // << this one
--[1]
....
etc
I need to get the name via the serviceRequestFK in ServiceOrderEntity. Since Java doesn't allow hot-fixing (adding a custom field to an already created object), I need to find a way to pass the name field to the JSP as well. What is the right way?
I really don't want to call DAO methods from the JSP...
Create a separate list of names?...
Since Java does not allow mix-ins (aka monkey-patching), you'll have to do one of the following:
Add the field to the base entity.
Return a sub-class that includes this field.
If you'd like to add the field so that the Service class can do its job, then fair enough. However, if the new field is part of the payload in/out for that particular service, then consider instead:
Making use-case specific payloads for each service call.
Map the results of these onto your reusable object model (you can use something like Dozer for this); a small sketch follows at the end of this answer.
The rationale behind this suggestion is to follow the principles of contract-first development.
Your model will be more general purpose, and thus reusable. You can add reusable behaviors to your model classes, and your services will use these behaviors to orchestrate processes (as opposed to having 'anaemic' entities).
Your service payloads can remain relatively stable over time, so changes to your model won't affect all of your service subscribers (aka "don't spill your guts").
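As a rough sketch of such a use-case-specific payload (ServiceOrderView and requestName are made-up names, not part of the generated model):
// Hypothetical view object for this one page: it carries the generated entity
// plus the separately resolved name, without touching the entity itself.
public class ServiceOrderView {

    private final ServiceOrderEntity order;
    private final String requestName; // looked up separately, e.g. via a subquery

    public ServiceOrderView(ServiceOrderEntity order, String requestName) {
        this.order = order;
        this.requestName = requestName;
    }

    public ServiceOrderEntity getOrder() {
        return order;
    }

    public String getRequestName() {
        return requestName;
    }
}
The service would then wrap each ServiceOrderEntity in a ServiceOrderView and put that list into the model, so the JSP can read the name without any DAO calls.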
One of my goals is to create an engine that will set values in POJOs from JPA objects dynamically using reflection. One of the matching criteria is that the field names should match.
I was able to implement this successfully for two POJOs. But when I tried using a JPA object as one of the parameters, it didn't work. Based on my research I found that Class.getDeclaredFields() does not give me the names of the fields but rather the getter/setter method names of the member variables for JPA objects.
Can anyone please give me a lead or direction as to where/what I should look at to accomplish this task?
JPA providers will often use dynamic proxy classes of your concrete JPA classes, so you have no guarantee of the field names in the proxy. The only guarantee about a proxy is that the methods are the same. Use a debugger to inspect the runtime class of the JPA class instances that you're trying to use and you'll see the problem.
The best you'll be able to do is use reflection to call methods on JPA-returned objects.
All that aside, I don't really see why you'd need to POJO-ify an entity class anyway, since an entity is primarily an annotated... POJO.
One of the matching criteria is, that the field names should match.
I think that this is the root of your problem. There is simply no guarantee that a Java object's field names will match the names of its getters and setters ... or anything else. If you make this assumption, you will run into cases where it doesn't work.
The best solution is simply not to use this approach. Make it a requirement that the POJO classes conform to the JavaBeans spec and rely on the setters to set the properties. This is likely to work more often than making assumptions about (private) field names.
In fact, the state of a generic JPA object implemented using dynamic proxies could well be held in a hash map, and the fields you can see could simply be constants used for something else.
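A minimal sketch of that getter/setter-based approach, assuming both sides follow the JavaBeans conventions (the BeanCopier name is made up for illustration):
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.HashMap;
import java.util.Map;

public final class BeanCopier {

    // Copies every readable property of 'source' into the same-named writable
    // property of 'target', matching on property name and identical property type.
    public static void copyMatchingProperties(Object source, Object target) throws Exception {
        Map<String, PropertyDescriptor> writable = new HashMap<>();
        for (PropertyDescriptor pd : Introspector.getBeanInfo(target.getClass()).getPropertyDescriptors()) {
            if (pd.getWriteMethod() != null) {
                writable.put(pd.getName(), pd);
            }
        }
        for (PropertyDescriptor pd : Introspector.getBeanInfo(source.getClass()).getPropertyDescriptors()) {
            PropertyDescriptor targetPd = writable.get(pd.getName());
            if (pd.getReadMethod() != null && targetPd != null
                    && targetPd.getPropertyType().equals(pd.getPropertyType())) {
                targetPd.getWriteMethod().invoke(target, pd.getReadMethod().invoke(source));
            }
        }
    }
}
Calling the getters also has the side effect of initializing a lazy Hibernate proxy, so the copy sees the real values rather than the proxy internals.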
I'm developing a Scala extension to an existing Java ORM (Ebean). The goal of this project is to add as much type safety as possible to the ORM.
Instead of
Ebean.find(Product.class).fetch("name", "unit").findList()
I would finally like to be able to write something like
(objects of entity[Product] with attributes name and unit) getIt
(note that this is just a very first DSL approach).
The ORM model is already defined as
@Entity
public class Product {
    public String name;
    public String unit;
}
In order to achieve type safety at compile time for the attributes in the query, I would need to access them on e.g. a dummy object like (new Product()).name.
I think this is the best way to ensure that only model members that actually exist on that class are used, but, at runtime, I need a way to recognize that this variable was accessed. Otherwise I would just get the member's value and wouldn't know, in my query, which member was accessed.
Does anybody know a way how to achieve this? Is there a possibility to trace when a variable is accessed and to give that information, at runtime, to any other object?
I have already thought about hooking into getters and setters instead of using public members in the model classes, but this would make either the query or the model very ugly. Another problem is that any additional specific methods would have to be added manually for each model.
I would be happy if anyone could suggest possible solutions. Thanks!
If you are willing to define the fields of your model objects as something like Record Fields, what Emil suggested could work, but if you're building your solution on top of a Java ORM, using custom types might be an issue. If you need to track field access, I think your best bet will be runtime bytecode instrumentation using a library like CGLib or Javassist. You can pass an instrumented "dummy" object into the body of your function, then track which field was accessed in a thread-local. That's how it's done in Squeryl.
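A rough Java sketch of that idea, using Javassist's ProxyFactory (the AccessTracker name is made up; it assumes the entity has a no-arg constructor and JavaBean-style getters, since plain field reads cannot be intercepted by a method proxy):
import java.util.LinkedHashSet;
import java.util.Set;
import javassist.util.proxy.MethodHandler;
import javassist.util.proxy.ProxyFactory;

public final class AccessTracker {

    // Property names touched on the current thread.
    private static final ThreadLocal<Set<String>> ACCESSED =
            ThreadLocal.withInitial(LinkedHashSet::new);

    // Builds a tracking "dummy" of the given entity class.
    public static <T> T trackingDummy(Class<T> entityClass) throws Exception {
        ProxyFactory factory = new ProxyFactory();
        factory.setSuperclass(entityClass);
        MethodHandler handler = (self, method, proceed, args) -> {
            String name = method.getName();
            if (name.startsWith("get") && name.length() > 3) {
                // de-capitalize "getName" -> "name" and record the access
                ACCESSED.get().add(Character.toLowerCase(name.charAt(3)) + name.substring(4));
            }
            return proceed.invoke(self, args);
        };
        @SuppressWarnings("unchecked")
        T dummy = (T) factory.create(new Class<?>[0], new Object[0], handler);
        return dummy;
    }

    public static Set<String> accessedProperties() {
        return ACCESSED.get();
    }
}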
You could take a gander at how the Lift folks have implemented Mapper and Record. It allows for type-safe queries using companion objects (as well as raw SQL). It does require inheriting traits into your model, and the fields are specified as objects rather than regular vals. Might be helpful though. You can find the source for the persistence stuff here.