Best way to use ModelMapper in my scenario - java

I have started working on Java Spring Boot for a project requirement.
I want to understand the best way to use ModelMapper for my scenario.
I have a model InputMsg:
public class InputMsg {
String EquipmentNumber;
//getters setters
}
Further, I have two DTOs OutputMsg and ErrorDesc
public class OutputMsg {
public ErrorDesc EquipmentNumber;
//getters setters
}
public class ErrorDesc {
String Value;
//getters setters
}
My requirement is to use the incoming input message and finally return a result in the OutputMsg format which I can further take to do some other calculations.
eg. InputMsg -- "ABCD1234"
OutputMsg : EquipmentNumber.value = "ABCD1234"
What I have used with ModelMapper is something like this:
ModelMapper modelMapper = new ModelMapper();
PropertyMap<InputMsg, OutputMsg> orderMap =
new PropertyMap<InputMsg, OutputMsg>() {
protected void configure() {
map().getEquipmentNumber()
.setValue(source.getEquipmentNumber());
}
};
modelMapper.addMappings(orderMap);
OutputMsg output = modelMapper.map(inputMsg, OutputMsg.class);
The problem I see is that if I have hundreds of properties, I will have to write hundreds of lines of code to map them.
How can I do this in a better way, so that the InputMsg values are automatically mapped to OutputMsg?
Help is appreciated
Regards

First of all: you should follow Java conventions in your field naming and make the fields private (there are getters and setters anyway), so declare your fields like:
private String equipmentNumber;    // in InputMsg
private ErrorDesc equipmentNumber; // in OutputMsg
private String value;              // in ErrorDesc
However, this in itself is not so important (my example simply uses such naming). The following point matters more.
The most powerful feature (at least IMO) is that ModelMapper can associate the data without any configuration when a suitable convention is used in the field naming. For example, if you had your InputMsg like:
@Getter @Setter
public class InputMsg {
private String equipmentNumberValue;
}
the mapping would be straightforward and work without any configuration. So with a proper design you can reduce a bunch of boilerplate code.
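For instance, a minimal sketch (assuming OutputMsg keeps its ErrorDesc equipmentNumber field and ErrorDesc keeps its String value field, all with conventional getters and setters):
ModelMapper modelMapper = new ModelMapper();

InputMsg input = new InputMsg();
input.setEquipmentNumberValue("ABCD1234");

// ModelMapper's intelligent matching projects equipmentNumberValue
// onto the nested equipmentNumber.value property without any PropertyMap
OutputMsg output = modelMapper.map(input, OutputMsg.class);
// output.getEquipmentNumber().getValue() is now "ABCD1234"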
Now, (IMO) people usually use the terms mapper and adapter interchangeably, but they are not exactly the same thing; see for example this question. A mapper can use an adapter, as you have configured in your ModelMapper configuration.
The thing is that no mapper can make complex/arbitrary adaptations from class A to class B without being explicitly told how to adapt, and that requires an adapter. There is no workaround other than writing an adapter that ModelMapper uses, and if it needs to handle 100 fields, it has to. Depending on the case it might be possible to do this in some generic way, but that could make the code harder to understand.
Also, if you need to write lots of adapter code for ModelMapper to use, maybe the whole thing would be clearer if it were done without ModelMapper.
So, if possible, change your design so that ModelMapper can associate the fields by itself instead of you writing lots of adapting code.
With a proper design that follows conventions the mapper understands, a good mapper can adapt things automatically, as in my example.

Related

ModelMapper - Converter/ AbstractConverter vs Provider

I'm using ModelMapper to convert some objects to complex DTOs and vice-versa.
Even though I've tried to understand the documentation, I've found it hard to work out when to use a Converter, a Provider, or an AbstractConverter.
Now, for example, if I want to convert String properties to little DTOs inside of the destination DTO, I'm doing it manually inside an abstract converter.
For instance:
dest.setAddressDTO(new AddressDTO(source.getStreet(), source.getNumber()));
Is that, though, the right way to do it?
When should I use a provider?
And if I want to set properties conditionally, can I use a Conditional from within the converter, or is that only possible when using a PropertyMap?
Additionally, is it a good practice to use the same modelMapper instance to convert several different types of objects?
Thanks in advance
The right way to work with this is to use Converters.
For example, let's say I want to create a converter to convert a dto into a domain object.
So first you define a converter:
private Converter<CompanyDto, Company> companyDtoToCompany = new AbstractConverter<CompanyDto, Company>() {
@Override
protected Company convert(CompanyDto source) {
Company dest = new Company();
dest.setName(source.getName());
dest.setAddress(source.getAddress());
dest.setContactName(source.getContactName());
dest.setContactEmail(source.getContactEmail());
(...)
dest.setStatus(source.getStatus());
return dest;
}
};
Then you add it to the mapper in the configureMappings() method:
private void configureMappings() {
modelMapper = new ModelMapper();
// Using STRICT mode to prevent strange entity mappings
modelMapper.getConfiguration()
.setMatchingStrategy(MatchingStrategies.STRICT);
modelMapper.addConverter(companyDtoToCompany);
// modelMapper.addConverter(otherConverter);
}
And finally you just need to add the mapping methods for those types you can use from your code:
public Company convertCompanyReqDtoToCompany(CompanyDto dto, Class<Company> destinationType) {
return modelMapper.map(dto, destinationType);
}
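A usage sketch from calling code (companyDto is a hypothetical variable here, just to show the call site):
// Map an incoming DTO to the domain object via the wrapper method
Company company = convertCompanyReqDtoToCompany(companyDto, Company.class);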

How to customize ModelMapper

I want to use ModelMapper to convert an entity to a DTO and back. Mostly it works, but how do I customize it? It has so many options that it's hard to figure out where to start. What's best practice?
I'll answer it myself below, but if another answer is better I'll accept it.
First here are some links
modelmapper getting started
api doc
blog post
random code examples
My impression of mm is that it is very well engineered. The code is solid and a pleasure to read. However, the documentation is very terse, with very few examples. Also, the API is confusing because there seem to be 10 ways to do anything, and no indication of why you'd do it one way or another.
There are two alternatives: Dozer is the most popular, and Orika gets good reviews for ease of use.
Assuming you still want to use mm, here’s what I’ve learned about it.
The main class, ModelMapper, should be a singleton in your app. For me, that meant a @Bean using Spring. It works out of the box for simple cases. For example, suppose you have two classes:
class DogData
{
private String name;
private int mass;
}
class DogInfo
{
private String name;
private boolean large;
}
with appropriate getters/setters. You can do this:
ModelMapper mm = new ModelMapper();
DogData dd = new DogData();
dd.setName("fido");
dd.setMass(70);
DogInfo di = mm.map(dd, DogInfo.class);
and the "name" will be copied from dd to di.
There are many ways to customize mm, but first you need to understand how it works.
The mm object contains a TypeMap for each ordered pair of types; for example, <DogInfo, DogData> and <DogData, DogInfo> would be two distinct TypeMaps.
Each TypeMap contains a PropertyMap with a list of mappings. So in the example the mm will automatically create a TypeMap<DogData, DogInfo> that contains a PropertyMap that has a single mapping.
We can write this
TypeMap<DogData, DogInfo> tm = mm.getTypeMap(DogData.class, DogInfo.class);
List<Mapping> list = tm.getMappings();
for (Mapping m : list)
{
System.out.println(m);
}
and it will output
PropertyMapping[DogData.name -> DogInfo.name]
When you call mm.map() this is what it does:
1. see if the TypeMap exists yet; if not, create the TypeMap for the <S, D> source/destination types
2. call the TypeMap Condition; if it returns FALSE, do nothing and STOP
3. call the TypeMap Provider to construct a new destination object if necessary
4. call the TypeMap PreConverter if it has one
5. do one of the following:
5a. if the TypeMap has a custom Converter, call it
5b. or, generate a PropertyMap (based on Configuration flags plus any custom mappings that were added), and use it
(Note: the TypeMap also has optional custom Pre/PostPropertyConverters that I think will run at this point before and after each mapping.)
6. call the TypeMap PostConverter if it has one
Caveat: This flowchart is sort of documented but I had to guess a lot, so it might not be all correct!
You can customize every single step of this process. But the two most common are
step 5a. – write custom TypeMap Converter, or
step 5b. – write custom Property Mapping.
Here is a sample of a custom TypeMap Converter:
Converter<DogData, DogInfo> myConverter = new Converter<DogData, DogInfo>()
{
public DogInfo convert(MappingContext<DogData, DogInfo> context)
{
DogData s = context.getSource();
DogInfo d = context.getDestination();
d.setName(s.getName());
d.setLarge(s.getMass() > 25);
return d;
}
};
mm.addConverter(myConverter);
Note the converter is one-way. You have to write another if you want to customize DogInfo to DogData.
Here is a sample of a custom PropertyMap:
Converter<Integer, Boolean> convertMassToLarge = new Converter<Integer, Boolean>()
{
public Boolean convert(MappingContext<Integer, Boolean> context)
{
// If the dog weighs more than 25, then it must be large
return context.getSource() > 25;
}
};
PropertyMap<DogData, DogInfo> mymap = new PropertyMap<DogData, DogInfo>()
{
protected void configure()
{
// Note: this is not normal code. It is "EDSL" so don't get confused
map(source.getName()).setName(null);
using(convertMassToLarge).map(source.getMass()).setLarge(false);
}
};
mm.addMappings(mymap);
The pm.configure function is really funky. It’s not actual code. It is dummy EDSL code that gets interpreted somehow. For instance the parameter to the setter is not relevant, it is just a placeholder. You can do lots of stuff in here, such as
when(condition).map(getter).setter
when(condition).skip().setter – safely ignore field.
using(converter).map(getter).setter – custom field converter
with(provider).map(getter).setter – custom field constructor
Note the custom mappings are added to the default mappings, so you do not need, for example, to specify
map(source.getName()).setName(null);
in your custom PropertyMap.configure().
In this example, I had to write a Converter to map Integer to Boolean. In most cases this will not be necessary because mm will automatically convert Integer to String, etc.
I'm told you can also create mappings using Java 8 lambda expressions. I tried, but I could not figure it out.
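For reference, a rough sketch of how the lambda-based API looks in newer ModelMapper versions (treat the exact method names as an assumption; this reuses the convertMassToLarge converter from above):
mm.typeMap(DogData.class, DogInfo.class).addMappings(mapper -> {
// copy the name straight across
mapper.map(DogData::getName, DogInfo::setName);
// apply the Integer -> Boolean converter to derive "large" from "mass"
mapper.using(convertMassToLarge).map(DogData::getMass, DogInfo::setLarge);
});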
Final Recommendations and Best Practice
By default mm uses MatchingStrategies.STANDARD which is dangerous. It can easily choose the wrong mapping and cause strange, hard to find bugs. And what if next year someone else adds a new column to the database? So don't do it. Make sure you use STRICT mode:
mm.getConfiguration().setMatchingStrategy(MatchingStrategies.STRICT);
Always write unit tests and ensure that all mappings are validated.
DogInfo di = mm.map(dd, DogInfo.class);
mm.validate(); // make sure nothing in the destination is accidentally skipped
Fix any validation failures with mm.addMappings() as shown above.
Put all your mappings in a central place, where the mm singleton is created.
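With Spring, that central place could be a configuration class, for example (a sketch that assumes myConverter and mymap from the samples above are available here):
@Configuration
public class MapperConfig {
@Bean
public ModelMapper modelMapper() {
ModelMapper mm = new ModelMapper();
mm.getConfiguration().setMatchingStrategy(MatchingStrategies.STRICT);
mm.addConverter(myConverter); // custom TypeMap Converter from above
mm.addMappings(mymap);        // custom PropertyMap from above
return mm;
}
}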
I faced a problem while mapping with ModelMapper: not only the properties but also my source and destination types were different. I solved it as follows.
If the source and destination types are different, for example:
@Entity
class Student {
private Long id;
@OneToOne
@JoinColumn(name = "laptop_id")
private Laptop laptop;
}
And Dto ->
class StudentDto {
private Long id;
private LaptopDto laptopDto;
}
Here, the source and destination types are different. So, if your matching strategy is STRICT, you won't be able to map between these two different types.
To solve this, simply put the code below in the constructor of your controller class, or any other class where you want to use ModelMapper:
private ModelMapper modelMapper;
public StudentController(ModelMapper modelMapper) {
this.modelMapper = modelMapper;
this.modelMapper.typeMap(Student.class, StudentDto.class).addMapping(Student::getLaptop, StudentDto::setLaptopDto);
}
That's it. Now you can use modelMapper.map(source, destination) easily, and it will map automatically:
modelMapper.map(student, studentDto);
I've been using it for the last 6 months, and I'm going to share some of my thoughts about it:
First of all, it is recommended to use it as a single instance (singleton, Spring bean, ...); that's explained in the manual, and I think everyone agrees with that.
ModelMapper is a great mapping library and very flexible. Due to its flexibility, there are many ways to get the same result, which is why the manual should include best practices on when to use one way or another to do the same thing.
Getting started with ModelMapper is a little difficult; it has a steep learning curve, and sometimes it is not easy to understand the best way to do something, or how to do some other thing. So, to start, you need to read and understand the manual carefully.
You can configure your mapping as you want using the following settings:
Access level
Field matching
Naming convention
Name transformer
Name tokenizer
Matching strategy
The default configuration is simply the best (http://modelmapper.org/user-manual/configuration/), but if you want to customise it, you can.
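If you do customise it, it looks roughly like this (a sketch; these setters live on the Configuration object returned by getConfiguration()):
modelMapper.getConfiguration()
.setFieldMatchingEnabled(true)
.setFieldAccessLevel(org.modelmapper.config.Configuration.AccessLevel.PRIVATE)
.setSourceNamingConvention(NamingConventions.JAVABEANS_ACCESSOR)
.setMatchingStrategy(MatchingStrategies.STRICT);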
One thing related to the Matching Strategy configuration: I think this is the most important setting, and you need to be careful with it. I would use Strict or Standard but never Loose. Why?
Because Loose is the most flexible and intelligent strategy, it could map some properties you do not expect. So, definitely, be careful with it. I think it is better to create your own PropertyMap and use Converters if needed, instead of configuring it as Loose.
It is also important to validate all property matches and verify that everything works. With ModelMapper this is even more necessary, because the intelligent mapping is done via reflection, so you do not have the compiler's help: the code will keep compiling, but the mapping can fail without you realising it. That's one of the things I like least, but it is the cost of avoiding boilerplate and manual mapping.
Finally, if you decide to use ModelMapper in your project, use it the way it proposes: don't mix it with manual mappings (for example), just use ModelMapper, and if you don't know how to do something, be sure it is possible (investigate, ...). Sometimes it is harder to do something with ModelMapper than by hand (I don't like that either), but that is the price you pay to avoid boilerplate mappings in your POJOs.
import org.modelmapper.ModelMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
@Service
public class EntityDtoConversionUtil {
@Autowired
private ModelMapper modelMapper;
public Object convert(Object object, Class<?> type) {
Object mapped = modelMapper.map(object, type);
return mapped;
}
}
Here is how you can make a customized conversion class and then autowire it wherever you want to convert an object to a DTO and vice versa.
@Component
public class ConversionUtil {
@Bean
public ModelMapper modelMapper() {
return new ModelMapper();
}
public <T,D> D mapItem(T item,Class<D> cl){
return modelMapper().map(item,cl);
}
public <T,D> List<D> map(List<T> list, Class<D> cl){
return list.stream()
.map(item -> modelMapper().map(item, cl))
.collect(Collectors.toList());
}
}
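A usage sketch (Student and StudentDto are just example types, borrowed from the earlier answer):
// in some Spring-managed class:
@Autowired
private ConversionUtil conversionUtil;

public List<StudentDto> toDtos(List<Student> students) {
// delegates to modelMapper.map for each element
return conversionUtil.map(students, StudentDto.class);
}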

Patterns: Populate instance from Parameters and export it to XML

I'm building a simple RESTFul Service; and for achieve that I need two tasks:
Get an instance of my resource (i.e Book) from request parameters, so I can get that instance to be persisted
Build an XML document from that instance to send the representation to the clients
Right now, I'm doing both things in my POJO class:
public class Book implements Serializable {
private Long id;
public Book(Form form) {
//Initializing attributes
id = Long.parseLong(form.getFirstValue(Book.CODE_ELEMENT));
}
public Element toXml(Document document) {
// Getting an XML representation of the Book
Element bookElement = document.createElement(BOOK_ELEMENT);
return bookElement;
}
}
I remembered an OO principle that says behavior should live where the data is, but now my POJO depends on the Request and XML APIs, and that doesn't feel right (also, that class has persistence annotations).
Is there any standard approach/pattern to solve that issue?
EDIT:
The libraries I'm using are Restlets and Objectify.
I agree with you when you say that the behavior should be where the data is. But at the same time, as you say, I just don't feel comfortable polluting a POJO interface with specific methods used for serialization purposes (which can grow considerably depending on the way you want to do it - JSON, XML, etc.).
1) Build an XML document from that instance to send the representation to the clients
In order to decouple the object from serialization logic, I would adopt the Strategy Pattern:
interface BookSerializerStrategy {
String serialize(Book book);
}
public class XmlBookSerializerStrategy implements BookSerializerStrategy {
public String serialize(Book book) {
// Do something to serialize your book.
}
}
public class JsonBookSerializerStrategy implements BookSerializerStrategy {
public String serialize(Book book) {
// Do something to serialize your book.
}
}
Your POJO interface would become:
public class Book implements Serializable {
private Long id;
private BookSerializerStrategy serializer;
public String serialize() {
return serializer.serialize(this);
}
public void setSerializer(BookSerializerStrategy serializer) {
this.serializer = serializer;
}
}
Using this approach you will be able to isolate the serialization logic in just one place and won't pollute your POJO with it. Additionally, by returning a String, you won't need to couple your POJO to the Document and Element classes.
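Usage would then look something like this (a sketch, using the XmlBookSerializerStrategy from above):
Book book = new Book(form); // form is the incoming Restlet Form
book.setSerializer(new XmlBookSerializerStrategy());
String xml = book.serialize();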
2) Get an instance of my resource (i.e Book) from request parameters, so I can get that instance to be persisted
Finding a pattern to handle the deserialization is more complex, in my opinion. I really don't see a better way than to create a Factory with static methods in order to remove this logic from your POJO.
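Such a factory could look roughly like this (BookFactory is a made-up name, and it assumes Book gains a no-arg constructor plus a setId setter):
public class BookFactory {
// Keeps request-parsing logic out of the POJO itself
public static Book fromForm(Form form) {
Book book = new Book(); // assumed no-arg constructor
book.setId(Long.parseLong(form.getFirstValue(Book.CODE_ELEMENT)));
return book;
}
}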
Another approach to answer your two questions would be something like what JAXB uses: two different objects, an Unmarshaller in charge of deserialization and a Marshaller for serialization. Since Java 1.6, JAXB comes with the JDK by default.
Finally, those are just suggestions. I've become really interested in your question actually and curious about other possible solutions.
Are you using Spring, or any other framework, in your project? If you used Spring, it would take care of serialization for you, as well as assigning request params to method params (parsing as needed).

How can I tell jackson to ignore a property for which I don't have control over the source code?

Long story short, one of my entities has a GeometryCollection that throws an exception when you call "getBoundary" (the why of this is another book, for now let's say this is the way it works).
Is there a way I can tell Jackson not to include that specific getter? I know I can use @JsonIgnore when I own/control the code, but that is not the case here; Jackson ends up reaching this point through continuous serialization of the parent objects. I saw a filtering option in the Jackson documentation. Is that a plausible solution?
Thanks!
You can use Jackson Mixins. For example:
class YourClass {
public int ignoreThis() { return 0; }
}
With this Mixin
abstract class MixIn {
@JsonIgnore abstract int ignoreThis(); // we don't need it!
}
With this:
objectMapper.getSerializationConfig().addMixInAnnotations(YourClass.class, MixIn.class);
Edit:
Thanks to the comments, with Jackson 2.5+, the API has changed and should be called with objectMapper.addMixIn(Class<?> target, Class<?> mixinSource)
One other possibility is, if you want to ignore all unknown properties, you can configure the mapper as follows:
mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
Using Java Class
new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
Using Annotation
@JsonIgnoreProperties(ignoreUnknown=true)
The annotation-based approach is better, but sometimes a manual operation is needed. For this purpose you can use the withoutAttribute method of ObjectWriter.
ObjectMapper mapper = new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
ObjectWriter writer = mapper.writer().withoutAttribute("property1").withoutAttribute("property2");
String jsonText = writer.writeValueAsString(sourceObject);
Mix-in annotations work pretty well here as already mentioned. Another possibility beyond per-property @JsonIgnore is to use @JsonIgnoreType if you have a type that should never be included (i.e. if all instances of GeometryCollection properties should be ignored). You can then either add it directly (if you control the type), or via a mix-in, like:
@JsonIgnoreType abstract class MixIn { }
// and then register mix-in, either via SerializationConfig, or by using SimpleModule
This can be more convenient if you have lots of classes that all have a single 'IgnoredType getContext()' accessor or so (which is the case for many frameworks)
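Registering the mix-in could look like this (a sketch using the newer addMixIn API; GeometryCollection stands in for the third-party type you don't control):
ObjectMapper mapper = new ObjectMapper();
// every property whose declared type is GeometryCollection will now be skipped
mapper.addMixIn(GeometryCollection.class, MixIn.class);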
I had a similar issue, but it was related to Hibernate's bi-directional relationships. I wanted to show one side of the relationship and programmatically ignore the other, depending on what view I was dealing with. If you can't do that, you end up with nasty StackOverflowExceptions. For instance, if I had these objects
public class A{
Long id;
String name;
List<B> children;
}
public class B{
Long id;
A parent;
}
I would want to programmatically ignore the parent field in B if I were looking at A, and ignore the children field in A if I were looking at B.
I started off using mixins to do this, but that very quickly becomes horrible; you have so many useless classes laying around that exist solely to format data. I ended up writing my own serializer to handle this in a cleaner way: https://github.com/monitorjbl/json-view.
It allows you programmatically specify what fields to ignore:
ObjectMapper mapper = new ObjectMapper();
SimpleModule module = new SimpleModule();
module.addSerializer(JsonView.class, new JsonViewSerializer());
mapper.registerModule(module);
List<A> list = getListOfA();
String json = mapper.writeValueAsString(JsonView.with(list)
.onClass(B.class, match()
.exclude("parent")));
It also lets you easily specify very simplified views through wildcard matchers:
String json = mapper.writeValueAsString(JsonView.with(list)
.onClass(A.class, match()
.exclude("*")
.include("id", "name")));
In my original case, the need for simple views like this was to show the bare minimum about the parent/child, but it also became useful for our role-based security. Less privileged views of objects needed to return less information about the object.
All of this comes from the serializer, but I was using Spring MVC in my app. To get it to properly handle these cases, I wrote an integration that you can drop in to existing Spring controller classes:
@Controller
public class JsonController {
private JsonResult json = JsonResult.instance();
@Autowired
private TestObjectService service;
@RequestMapping(method = RequestMethod.GET, value = "/bean")
@ResponseBody
public List<TestObject> getTestObject() {
List<TestObject> list = service.list();
return json.use(JsonView.with(list)
.onClass(TestObject.class, Match.match()
.exclude("int1")
.include("ignoredDirect")))
.returnValue();
}
}
Both are available on Maven Central. I hope it helps someone else out there, this is a particularly ugly problem with Jackson that didn't have a good solution for my case.
If you want to ALWAYS exclude certain properties for any class, you could use setMixInResolver method:
#JsonIgnoreProperties({"id", "index", "version"})
abstract class MixIn {
}
mapper.setMixInResolver(new ClassIntrospector.MixInResolver(){
@Override
public Class<?> findMixInClassFor(Class<?> cls) {
return MixIn.class;
}
@Override
public ClassIntrospector.MixInResolver copy() {
return this;
}
});
One more good option here is to use @JsonFilter.
Some details here: Feature: JSON Filter
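A minimal sketch of that approach (the filter id "myFilter", the class MyEntity, and the property name "boundary" are made up for illustration):
@JsonFilter("myFilter")
public class MyEntity {
// fields, getters, setters
}

// serialize everything except the problematic property
FilterProvider filters = new SimpleFilterProvider()
.addFilter("myFilter", SimpleBeanPropertyFilter.serializeAllExcept("boundary"));
String json = new ObjectMapper().writer(filters).writeValueAsString(entity);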

How-to dynamically fill a annotation

Sadly, I forgot to take the code from work with me today. But maybe this little example will clarify things.
I use hibernate to map a bean to a table.
Example:
import javax.persistence.Column;
….
String columnameA;
….
@Column(name="columnameA")
public String getColumname(){
return columnameA;
}
….
I do not want to hardcode the column name (“columnameA”) in my source code, because I need to be able to switch the column name without rebuilding the entire project.
I wanted to use something like:
@Column(name=getColumnName())
This does not work. The idea is to write the column name somewhere in the JNDI tree and use it at startup, so I only need to restart the application to change the column name.
The only way around this problem – which I can think of – is to write my own annotation, which extends the hibernate class. Is there a simpler way of doing this?
You can't achieve this with annotations, but a solution to your specific problem is to implement a custom NamingStrategy:
public class NamingStrategyWrapper implements NamingStrategy {
private NamingStrategy target;
public NamingStrategyWrapper(NamingStrategy target) {
this.target = target;
}
public String columnName(String arg0) {
if ("columnameA".equals(arg0)) return getColumnName();
else return target.columnName(arg0);
}
...
}
-
AnnotationConfiguration cfg = new AnnotationConfiguration();
cfg.setNamingStrategy(new NamingStrategyWrapper(cfg.getNamingStrategy()));
factory = cfg.configure().buildSessionFactory();
The only values you can assign to attributes are constant values, specified by hand, or stored in public static final variables.
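For example (illustrative only; Columns is a made-up holder class):
public class Columns {
public static final String EQUIPMENT = "columnameA"; // compile-time constant
}

@Column(name = Columns.EQUIPMENT)   // allowed: constant expression
public String getColumname() {
return columnameA;
}

// @Column(name = getColumnName())  // not allowed: not a compile-time constant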
Annotations do not define behavior, only meta-information about classes, methods and the like. You can specify behavior in annotation processors, which read your annotations and generate new source code or other files.
Writing an annotation processor is beyond my knowledge, but you can find more information in the Annotation Processing Tool guide by Sun.
