Orika Generic Collection Custom Mapping - java

Orika has support for generic types, but I have trouble getting it to work with generic collections. Since Orika does not support different collection strategies (cumulative, non-cumulative, orphan removal) I need to write a custom mapper to handle my requirements.
The problem is that Orika does not apply this mapper, but instead tries to use the normal collection mapping logic.
Type<List<Document>> DOCUMENT_LIST = new TypeBuilder<List<Document>>() {}.build();
Type<List<DocumentRepresentation>> DOCUMENT_REP_LIST = new TypeBuilder<List<DocumentRepresentation>>() {}.build();
mapperFactory.classMap(DOCUMENT_LIST, DOCUMENT_REP_LIST)
.mapNulls(true)
.mapNullsInReverse(true)
.customize(new NonCumulativeListMapperDocumentToDocumentRepresentation())
.register();
public class NonCumulativeListMapperDocumentToDocumentRepresentation
extends CustomMapper<List<Document>, List<DocumentRepresentation>> {
//mapping logic
}
I also tried explicitly setting the element types in the parent mapping
.fieldMap("documents", "documents")
.aElementType(Document.class)
.bElementType(DocumentRepresentation.class)
.add()
but this was also not picked up.
Any hints as to what I'm missing?

This could be done by registering your custom mapper:
mapperFactory.registerMapper(new NonCumulativeListMapperDocumentToDocumentRepresentation());
It will then be used whenever Orika has to map DOCUMENT_LIST to DOCUMENT_REP_LIST. The fieldMap configuration shown last is not needed.
For more information about merging collections in Orika, please refer to this simple test (CustomMergerTest).
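For illustration, here is a sketch of what the registered mapper could look like; the "clear and re-map" strategy and the element mapping through the inherited mapperFacade are assumptions about the non-cumulative requirement, not code from the question:
public class NonCumulativeListMapperDocumentToDocumentRepresentation
        extends CustomMapper<List<Document>, List<DocumentRepresentation>> {

    @Override
    public void mapAtoB(List<Document> source, List<DocumentRepresentation> target,
                        MappingContext context) {
        target.clear(); // non-cumulative: replace whatever was in the destination
        for (Document document : source) {
            target.add(mapperFacade.map(document, DocumentRepresentation.class));
        }
    }

    @Override
    public void mapBtoA(List<DocumentRepresentation> source, List<Document> target,
                        MappingContext context) {
        target.clear();
        for (DocumentRepresentation representation : source) {
            target.add(mapperFacade.map(representation, Document.class));
        }
    }
}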

Related

DefaultKafkaProducerFactory with multiple JsonSerializer mappings

I'm going through the Spring documentation and found that we can have multiple type mappings for a single producer factory (spring-docs):
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
senderProps.put(JsonSerializer.TYPE_MAPPINGS, "foo:com.myfoo.Foo, bar:com.mybar.bar");
But it is unclear to me how to create a ProducerFactory like the one below:
@Bean
public ProducerFactory<Foo, Bar> kafkaProducerFactory(KafkaProperties properties,
        JsonSerializer customSerializer) {
    return new DefaultKafkaProducerFactory<>(properties.buildProducerProperties(),
            customSerializer, customSerializer);
}
To my knowledge, Foo must be the key and Bar must be the value, right? And what is this customSerializer? I'm looking for a clear example with much more info.
My question is: I wish to have a single ProducerFactory and KafkaTemplate that produce multiple message types (for example Foo, Bar, Car) to Kafka. Is that possible?
No; this
senderProps.put(JsonSerializer.TYPE_MAPPINGS, "foo:com.myfoo.Foo, bar:com.mybar.bar");
is only used when the serializer/deserializer is created by the factory from properties.
When using the DefaultKafkaConsumerFactory and DefaultKafkaProducerFactory constructors that take fully built serializer/deserializer objects directly, you must configure the deserializer yourself.
typeMapper = new DefaultJackson2JavaTypeMapper();
typeMapper.setIdClassMapping(myTypeMappingsMap);
deserializer = new JsonDeserializer();
deserializer.setTypeMapper(typeMapper);
(and similarly for the serializer).
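Putting that together on the producer side, a sketch of such a factory; using Object as the value type (so one KafkaTemplate can send Foo, Bar and Car) and the bean shape are assumptions, not the only way to wire it:
@Bean
public ProducerFactory<String, Object> kafkaProducerFactory(KafkaProperties properties) {
    Map<String, Class<?>> typeMappings = new HashMap<>();
    typeMappings.put("foo", Foo.class);
    typeMappings.put("bar", Bar.class);

    DefaultJackson2JavaTypeMapper typeMapper = new DefaultJackson2JavaTypeMapper();
    typeMapper.setIdClassMapping(typeMappings);

    JsonSerializer<Object> valueSerializer = new JsonSerializer<>();
    valueSerializer.setTypeMapper(typeMapper);

    return new DefaultKafkaProducerFactory<>(properties.buildProducerProperties(),
            new StringSerializer(), valueSerializer);
}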

ModelMapper - Converter/ AbstractConverter vs Provider

I'm using ModelMapper to convert some objects to complex DTOs and vice-versa.
Even though I've tried to understand the documentation, I've found it hard to understand when to use a Converter, a Provider, or an AbstractConverter.
Now, for example, if I want to convert String properties to little DTOs inside of the destination DTO, I'm doing it manually inside an abstract converter.
For instance:
dest.setAddressDTO(new AddressDTO(source.getStreet(), source.getNumber()));
Is that the right way to do it, though?
When should I use a Provider?
And if I want to set properties conditionally, can I use a Conditional from within the converter, or is that only possible when using a PropertyMap?
Additionally, is it a good practice to use the same modelMapper instance to convert several different types of objects?
Thanks in advance
The right way to work with this is to use Converters.
For example, let's say I want to create a converter to convert a dto into a domain object.
So first you define a converter:
private Converter<CompanyDto, Company> companyDtoToCompany = new AbstractConverter<CompanyDto, Company>() {
    @Override
    protected Company convert(CompanyDto source) {
        Company dest = new Company();
        dest.setName(source.getName());
        dest.setAddress(source.getAddress());
        dest.setContactName(source.getContactName());
        dest.setContactEmail(source.getContactEmail());
        (...)
        dest.setStatus(source.getStatus());
        return dest;
    }
};
Then you add it to the mapper in the configureMappings() method:
private void configureMappings() {
    modelMapper = new ModelMapper();
    // Using STRICT mode to prevent strange entity mappings
    modelMapper.getConfiguration()
            .setMatchingStrategy(MatchingStrategies.STRICT);
    modelMapper.addConverter(companyDtoToCompany);
    // modelMapper.addConverter(otherConverter);
}
And finally you just need to add the mapping methods for those types, which you can call from your code:
public Company convertCompanyReqDtoToCompany(CompanyDto dto, Class<Company> destinationType) {
    return modelMapper.map(dto, destinationType);
}

How to customize ModelMapper

I want to use ModelMapper to convert an entity to a DTO and back. Mostly it works, but how do I customize it? It has so many options that it's hard to figure out where to start. What's best practice?
I'll answer it myself below, but if another answer is better I'll accept it.
First here are some links
modelmapper getting started
api doc
blog post
random code examples
My impression of mm is that it is very well engineered. The code is solid and a pleasure to read. However, the documentation is very terse, with very few examples. Also, the API is confusing because there seem to be 10 ways to do anything, and no indication of why you'd do it one way or another.
There are two alternatives: Dozer is the most popular, and Orika gets good reviews for ease of use.
Assuming you still want to use mm, here’s what I’ve learned about it.
The main class, ModelMapper, should be a singleton in your app. For me, that meant a @Bean using Spring. It works out of the box for simple cases. For example, suppose you have two classes:
class DogData
{
    private String name;
    private int mass;
}

class DogInfo
{
    private String name;
    private boolean large;
}
with appropriate getters/setters. You can do this:
ModelMapper mm = new ModelMapper();
DogData dd = new DogData();
dd.setName("fido");
dd.setMass(70);
DogInfo di = mm.map(dd, DogInfo.class);
and the "name" will be copied from dd to di.
There are many ways to customize mm, but first you need to understand how it works.
The mm object contains a TypeMap for each ordered pair of types; for example, <DogInfo, DogData> and <DogData, DogInfo> would be two different TypeMaps.
Each TypeMap contains a PropertyMap with a list of mappings. So in the example the mm will automatically create a TypeMap<DogData, DogInfo> that contains a PropertyMap that has a single mapping.
We can write this
TypeMap<DogData, DogInfo> tm = mm.getTypeMap(DogData.class, DogInfo.class);
List<Mapping> list = tm.getMappings();
for (Mapping m : list)
{
    System.out.println(m);
}
and it will output
PropertyMapping[DogData.name -> DogInfo.name]
When you call mm.map(), this is what it does:
1. see if the TypeMap exists yet; if not, create the TypeMap for the <S, D> source/destination types
2. call the TypeMap Condition; if it returns FALSE, do nothing and STOP
3. call the TypeMap Provider to construct a new destination object if necessary
4. call the TypeMap PreConverter if it has one
5. do one of the following:
5a. if the TypeMap has a custom Converter, call it
5b. or, generate a PropertyMap (based on Configuration flags plus any custom mappings that were added), and use it
(Note: the TypeMap also has optional custom Pre/PostPropertyConverters that I think will run at this point before and after each mapping.)
6. call the TypeMap PostConverter if it has one
Caveat: This flowchart is sort of documented but I had to guess a lot, so it might not be all correct!
You can customize every single step of this process. But the two most common are
step 5a. – write custom TypeMap Converter, or
step 5b. – write custom Property Mapping.
Here is a sample of a custom TypeMap Converter:
Converter<DogData, DogInfo> myConverter = new Converter<DogData, DogInfo>()
{
    public DogInfo convert(MappingContext<DogData, DogInfo> context)
    {
        DogData s = context.getSource();
        DogInfo d = context.getDestination();
        d.setName(s.getName());
        d.setLarge(s.getMass() > 25);
        return d;
    }
};
mm.addConverter(myConverter);
Note the converter is one-way. You have to write another if you want to customize DogInfo to DogData.
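For completeness, a sketch of the reverse direction, reusing the same classes; since the mass cannot be recovered from the large flag, the values below are placeholders (and isLarge() is assumed to be the boolean getter):
Converter<DogInfo, DogData> myReverseConverter = new Converter<DogInfo, DogData>()
{
    public DogData convert(MappingContext<DogInfo, DogData> context)
    {
        DogInfo s = context.getSource();
        DogData d = new DogData();
        d.setName(s.getName());
        d.setMass(s.isLarge() ? 30 : 10); // placeholder masses, the information was lost
        return d;
    }
};
mm.addConverter(myReverseConverter);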
Here is a sample of a custom PropertyMap:
Converter<Integer, Boolean> convertMassToLarge = new Converter<Integer, Boolean>()
{
    public Boolean convert(MappingContext<Integer, Boolean> context)
    {
        // If the dog weighs more than 25, then it must be large
        return context.getSource() > 25;
    }
};

PropertyMap<DogData, DogInfo> mymap = new PropertyMap<DogData, DogInfo>()
{
    protected void configure()
    {
        // Note: this is not normal code. It is "EDSL" so don't get confused
        map(source.getName()).setName(null);
        using(convertMassToLarge).map(source.getMass()).setLarge(false);
    }
};
mm.addMappings(mymap);
The PropertyMap.configure() method is really funky. It's not actual code; it is dummy EDSL code that gets interpreted somehow. For instance, the parameter to the setter is not relevant; it is just a placeholder. You can do lots of stuff in here, such as
when(condition).map(getter).setter
when(condition).skip().setter – safely ignore field.
using(converter).map(getter).setter – custom field converter
with(provider).map(getter).setter – custom field constructor
Note the custom mappings are added to the default mappings, so you do not need, for example, to specify
map(source.getName()).setName(null);
in your custom PropertyMap.configure().
In this example, I had to write a Converter to map Integer to Boolean. In most cases this will not be necessary because mm will automatically convert Integer to String, etc.
I'm told you can also create mappings using Java 8 lambda expressions. I tried, but I could not figure it out.
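For what it's worth, the lambda-based API in ModelMapper 2.x looks roughly like this sketch, reusing the convertMassToLarge converter from above; this is an assumption about a newer ModelMapper version, not something verified against the setup above:
mm.typeMap(DogData.class, DogInfo.class).addMappings(mapper ->
        mapper.using(convertMassToLarge).map(DogData::getMass, DogInfo::setLarge));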
Final Recommendations and Best Practice
By default mm uses MatchingStrategies.STANDARD, which is dangerous. It can easily choose the wrong mapping and cause strange, hard-to-find bugs. And what if next year someone else adds a new column to the database? So don't do it. Make sure you use STRICT mode:
mm.getConfiguration().setMatchingStrategy(MatchingStrategies.STRICT);
Always write unit tests and ensure that all mappings are validated.
DogInfo di = mm.map(dd, DogInfo.class);
mm.validate(); // make sure nothing in the destination is accidentally skipped
Fix any validation failures with mm.addMappings() as shown above.
Put all your mappings in a central place, where the mm singleton is created.
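A minimal JUnit-style sketch of such a test; MappingConfig.modelMapper() is a hypothetical accessor for wherever you keep the singleton:
public class DogMappingTest {

    private final ModelMapper mm = MappingConfig.modelMapper(); // hypothetical singleton accessor

    @Test
    public void allDogInfoPropertiesAreMapped() {
        DogData dd = new DogData();
        dd.setName("fido");
        dd.setMass(70);

        mm.map(dd, DogInfo.class);

        // throws ValidationException (failing the test) if any destination property is unmapped
        mm.validate();
    }
}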
I faced a problem while mapping with ModelMapper: not only the properties but also my source and destination types were different. Here is how I solved it when the source and destination types are different. For example,
@Entity
class Student {
    private Long id;

    @OneToOne
    @JoinColumn(name = "laptop_id")
    private Laptop laptop;
}
And the DTO:
class StudentDto {
    private Long id;
    private LaptopDto laptopDto;
}
Here, the source and destination types are different. So, if your matching strategy is STRICT, you won't be able to map between these two different types.
Now, to solve this, just put the code below in the constructor of your controller class, or of any class where you want to use ModelMapper:
private ModelMapper modelMapper;

public StudentController(ModelMapper modelMapper) {
    this.modelMapper = modelMapper;
    this.modelMapper.typeMap(Student.class, StudentDto.class)
            .addMapping(Student::getLaptop, StudentDto::setLaptopDto);
}
That's it. Now you can use modelMapper.map(source, destination) easily. It will map automatically:
modelMapper.map(student, studentDto);
I've been using it for the last 6 months, and I'm going to share some of my thoughts about it:
First of all, it is recommended to use it as a single instance (singleton, Spring bean, ...); that's explained in the manual, and I think everyone agrees with that.
ModelMapper is a great and widely flexible mapping library. Due to its flexibility, there are many ways to get the same result, and that's why the manual should include best practices on when to use one way or another to do the same thing.
Getting started with ModelMapper is a little bit difficult; it has a steep learning curve, and sometimes it is not easy to understand the best way to do something, or how to do something else. So, to start, you need to read and understand the manual carefully.
You can configure your mapping as you want using the following settings:
Access level
Field matching
Naming convention
Name transformer
Name tokenizer
Matching strategy
The default configuration is simply the best (http://modelmapper.org/user-manual/configuration/), but if you want to customise it, you can.
Just one thing related to the Matching Strategy configuration: I think this is the most important setting and you need to be careful with it. I would use Strict or Standard, but never Loose. Why?
Because Loose is the most flexible and "intelligent" strategy, it could map some properties you do not expect. So, definitely, be careful with it. I think it is better to create your own PropertyMap and use Converters where needed instead of configuring it as Loose.
It is also important to validate all property matches and verify that everything works. With ModelMapper this is even more necessary because the intelligent mapping is done via reflection, so you will not have the compiler's help: the code will keep compiling, but the mapping can fail without you realising it. That's one of the things I like least, but it is needed to avoid boilerplate and manual mapping.
Finally, if you are sure you want to use ModelMapper in your project, you should use it the way it proposes; don't mix it with manual mappings (for example), just use ModelMapper. If you don't know how to do something, make sure it is possible (investigate, ...). Sometimes it is harder to do with ModelMapper (I also don't like that) than by hand, but that is the price you pay to avoid boilerplate mappings in other POJOs.
import org.modelmapper.ModelMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
@Service
public class EntityDtoConversionUtil {

    @Autowired
    private ModelMapper modelMapper;

    public Object convert(Object object, Class<?> type) {
        Object mappedObject = modelMapper.map(object, type);
        return mappedObject;
    }
}
Here is how you can make a customized conversion class that you can then autowire wherever you would like to convert an object to a DTO and vice versa.
@Component
public class ConversionUtil {

    @Bean
    public ModelMapper modelMapper() {
        return new ModelMapper();
    }

    public <T, D> D mapItem(T item, Class<D> cl) {
        return modelMapper().map(item, cl);
    }

    public <T, D> List<D> map(List<T> list, Class<D> cl) {
        return list.stream()
                .map(item -> modelMapper().map(item, cl))
                .collect(Collectors.toList());
    }
}
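A usage sketch for the class above; the Student/StudentDto types are borrowed from the earlier answer and the injection point is only an example:
@Autowired
private ConversionUtil conversionUtil;

public List<StudentDto> toDtos(List<Student> students) {
    return conversionUtil.map(students, StudentDto.class);
}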

Map a list of object to another list using Dozer's custom converters

What I am trying to do is to map a List of entities to a list of their String ids (more or less) using Dozer.
Obviously, it implies a custom converter. My first idea was to make a converter from MyEntity to a String, and then tell Dozer something like "map every object of this collection using this converter". But I couldn't figure out how to do so.
So my second idea was to make a converter from a list of entities to a list of strings directly. My problem with this idea is that I was struggling with something ridiculous, which is getting the type of my list in the constructor, as below (which doesn't work at all):
public MyEntityListConverter() {
super(List<MyEntity>.class, List<String>.class);
}
I don't know how to pass an instantiated list's class in a single row without declaring anything.
So, does someone know either:
How to specify to dozer an object convertor to use in collection mapping
How to get instantiated list type
A third/better solution to try
The way you tried is not possible because of generic type erasure. And even if it were, Dozer cannot detect the element types at runtime.
1st solution with List<>
Your converter :
public class MyEntityToStringConverter extends DozerConverter<MyEntity, String> {
// TODO constructor + impl
}
Your mapping :
mapping(MyEntityA.class, MyEntityB.class)
.fields("myEntityList", "myStringList",
hintA(MyEntity.class),
hintB(String.class));
mapping(MyEntity.class, String.class)
.fields(this_(), this_(), customConverter(MyEntityToStringConverter.class));
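For reference, a sketch of how the element converter stub above could be filled in; the id-based conversion is only an assumption about what MyEntity exposes:
public class MyEntityToStringConverter extends DozerConverter<MyEntity, String> {

    public MyEntityToStringConverter() {
        super(MyEntity.class, String.class);
    }

    @Override
    public String convertTo(MyEntity source, String destination) {
        return source == null ? null : String.valueOf(source.getId());
    }

    @Override
    public MyEntity convertFrom(String source, MyEntity destination) {
        if (source == null) {
            return null;
        }
        MyEntity entity = new MyEntity();
        entity.setId(source); // assumption: MyEntity can be rebuilt from its id
        return entity;
    }
}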
2nd solution with list wrappers
You can try to create your custom classes extending a list impl.
public class MyEntityList extends ArrayList<MyEntity> {
}
public class MyStringList extends ArrayList<String> {
}
Change your field in the parent classes you want to map.
Your converter :
public class MyEntityToStringConverter extends DozerConverter<MyEntityList, MyStringList> {
// TODO constructor + impl
}
Your mapping :
mapping(MyEntityA.class, MyEntityB.class)
.fields("myEntityList", "myStringList", customConverter(MyEntityToStringConverter.class));
Another option would be
super((Class<List<MyEntity>>) (Class<?>) List.class,(Class<List<String>>) (Class<?>) List.class);
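In context, that cast would sit in the converter's constructor, roughly like this sketch (the class name is illustrative and the conversion bodies are left as stubs):
public class MyEntityListToStringListConverter
        extends DozerConverter<List<MyEntity>, List<String>> {

    @SuppressWarnings("unchecked")
    public MyEntityListToStringListConverter() {
        super((Class<List<MyEntity>>) (Class<?>) List.class,
                (Class<List<String>>) (Class<?>) List.class);
    }

    @Override
    public List<String> convertTo(List<MyEntity> source, List<String> destination) {
        // TODO map each MyEntity to its String id
        return destination;
    }

    @Override
    public List<MyEntity> convertFrom(List<String> source, List<MyEntity> destination) {
        // TODO map each String id back to a MyEntity
        return destination;
    }
}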
I'm very much inclined to @Ludovic's solution, but there might be a catch, as mentioned in my comment up there.
A slight tweak works for me though: register the custom converter at the "configuration" level rather than at field level. I'm using XML config but it should work with code-based config:
<configuration>
  <custom-converters>
    <converter type="f.q.c.n.MyEntityToStringConverter">
      <class-a>java.lang.String</class-a>
      <class-b>f.q.c.n.MyEntity</class-b>
    </converter>
  </custom-converters>
</configuration>

BeanUtils.copyProperties() vs DozerBeanMapper.map()

I am using BeanUtils.copyProperties() for bean-to-DTO mapping when I need to map all fields and the field names are the same. But when I don't need all fields of the source bean mapped into the destination DTO, I use DozerBeanMapper.map(), because I have no idea how to use BeanUtils in this situation.
So I think both methods have their own functionality, and there is no similarity between them. Am I right? Please guide me.
You might check out my ModelMapper. It will intelligently map properties (fields/methods) even if the names are not exactly the same. Defining specific properties to be mapped or skipped is simple and uses real code instead of XML:
ModelMapper modelMapper = new ModelMapper();
modelMapper.addMappings(new PropertyMap<Order, OrderDTO>() {
    protected void configure() {
        map().setBillingStreet(source.getBillingStreetAddress());
        skip().setBillingCity(null);
    }
});
Check out the project homepage for more info:
http://modelmapper.org
We considered MapStruct for our use case. See a sample below:
@Mapper
public interface MyMapper {

    MyMapper INSTANCE = Mappers.getMapper(MyMapper.class);

    To to(From from);
}
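A usage sketch for the generated mapper, using the placeholder From/To types from the sample above:
From from = new From();
To result = MyMapper.INSTANCE.to(from);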
Here is a performance comparison of MapStruct against Selma, Orika, ModelMapper, Dozer and manual mapping:
Selma vs. MapStruct
