Are API calls in a mapper considered a bad practice? - java

It's quite common to use DTOs as API models. Often you need to map those DTOs to other models afterwards. I will keep it really simple with the following example:
class RequestDto {
    private String companyId;
    // more fields ..
    // getter, setter etc..
}

class SomeModel {
    private Company company;
    // more fields ..
    // getter, setter etc..
}
So in the above case, RequestDto is the model that is used in the API and SomeModel is the model that is used internally by the server for the business logic. Usually you would create a class to map from one object to the other, e.g.:
class RequestMapper {
    public SomeModel mapRequestToSomeModel(RequestDto request) {
        Company company = fetchCompanyFromApi(request.getCompanyId()); // makes a request to another service
        SomeModel someModel = new SomeModel();
        someModel.setCompany(company);
        // map more fields..
        return someModel;
    }
}
Question
Is it a good practice to put external API call logic (like fetchCompanyFromApi) inside such mapper functions?
Are there better alternatives? (I like to keep mappers very very simple, but maybe that's just me)

It seems a little smelly to me. My (personal) expectation for a mapper, particularly if the other mappers are trivial, is for it to be very, very cheap: no database or API calls involved. I would prefer to create some kind of conversion service, which performs the same steps but is called differently.
A similar question often arises for functions named get…, where I would never expect expensive operations.

From my point of view, in this case you break the single-responsibility principle: you map some data and fetch something from another resource. In that case you should name your method fetchDataAndMapRequestToModel, because names should be clear to understand.
I also expect a mapper class to do simple stuff: pure get/set, or logic based on some primitive rule (if A is null, set B).
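
To make that concrete, here is a minimal sketch of the conversion-service idea from these answers. The CompanyClient wrapper and the two-argument mapper signature are assumptions, not part of the question: the point is that the service owns the external call while the mapper stays a pure field-copying function.

class RequestConversionService {
    private final CompanyClient companyClient; // hypothetical client for the remote company service
    private final RequestMapper requestMapper;

    RequestConversionService(CompanyClient companyClient, RequestMapper requestMapper) {
        this.companyClient = companyClient;
        this.requestMapper = requestMapper;
    }

    public SomeModel convert(RequestDto request) {
        // The expensive remote call is explicit here, not hidden in the mapper.
        Company company = companyClient.fetchCompany(request.getCompanyId());
        return requestMapper.mapRequestToSomeModel(request, company);
    }
}

class RequestMapper {
    // Pure mapping: everything the mapper needs is passed in, no I/O.
    public SomeModel mapRequestToSomeModel(RequestDto request, Company company) {
        SomeModel someModel = new SomeModel();
        someModel.setCompany(company);
        // map more fields..
        return someModel;
    }
}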

Related

Generic - call actual methods of object passed in parameter

There are around 6 POJO classes (domain entities, DTOs, DMOs), all of which have almost the same fields. To convert from one object to another, I'm passing one object and calling its getters to set the values on another object.
private UserTemp convertDmoToUserTempEntity(final UserDmo userDmo) {
    final UserTemp userTemp = new UserTemp();
    userTemp.setUsername(userDmo.getUsername());
    userTemp.setPassword(userDmo.getPassword());
    userTemp.setStatus(userDmo.getStatus());
    return userTemp;
}

private UserDmo convertEntityToUserDmo(final UserTemp userTemp) {
    final UserDmo userDmo = new UserDmo();
    userDmo.setUserId(userTemp.getUserId());
    userDmo.setUsername(userTemp.getUsername());
    userDmo.setStatus(userTemp.getStatus());
    return userDmo;
}
There are lots of these conversions: from one entity to another, DTO to DMO, DMO to DTO, etc. I believe a better way to handle this would be generics, passing a source object and a destination object.
public static <E, T> T convert(E e, T t) {
    // call getters of the source object to set values on the destination object.
    return t;
}

UserConverterImpl.convertFromTempToUser(userTemp, user);
I need help with this. When I pass an object as a parameter, I need a way to call its methods. Is there any better way to achieve this?
You could try to use a framework for this. In the past I have used Dozer to do it.
In the following post, other frameworks are mentioned as well:
any tool for java object to object mapping?
(Some advice: when mapping JPA entity objects, watch out for lazy fields being automatically mapped by the frameworks.)
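As a rough illustration of the Dozer route, here is a sketch against the classic org.dozer API; it assumes matching property names between the two classes:

import org.dozer.DozerBeanMapper;
import org.dozer.Mapper;

Mapper mapper = new DozerBeanMapper();

// Dozer copies same-named properties by reflection; differently named
// fields (e.g. username -> userId) require explicit mapping configuration.
UserDmo userDmo = mapper.map(userTemp, UserDmo.class);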
Perhaps you don't want to, or cannot, use a separate framework for whatever reason.
Since you seem to have strict layering, you will probably have mappers for each layer. Then I would go for a separate mapping for each object. This way you can easily handle mappings from username to userId etc. The chances that all objects in the layers have exactly the same names for their methods are not that high, and they are likely to change anyway.
Ended up using Spring's BeanUtils:
BeanUtils.copyProperties(Object source, Object target)
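With that, the generic convert method from the question collapses to a one-liner (a sketch; note that Spring's argument order is (source, target), the reverse of Apache Commons' BeanUtils):

import org.springframework.beans.BeanUtils;

public static <E, T> T convert(E source, T target) {
    // Copies all same-named, type-compatible properties from source to target.
    BeanUtils.copyProperties(source, target);
    return target;
}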

Correct way of finding what was modified by a POST in a spring-mvc controller?

It is a rather general question, but I will give a stripped-down example. Say I have a web CRUD application that manages simple entities stored in a database, nothing but the classics: JSP view, @RequestMapping-annotated controller, transactional service layer and DAO.
On an update, I need to know the previous values of my fields, because a business rule asks for a test involving the old and new values.
So I am searching for a best practice on that use case.
I think that Spring's code is way more extensively tested and more robust than my own, and I would like to do it the Spring way as much as possible.
Here is what I have tried:
1/ Load an empty object in the controller and manage the update in the service:
Data.java:
class Data {
    int id; // primary key
    String name;
    // ... other fields, getters, and setters omitted for brevity
}
DataController
...
@RequestMapping(value = "/data/edit/{id}", method = RequestMethod.GET)
public String edit(@PathVariable("id") int id, Model model) {
    model.addAttribute("data", service.getData(id));
    return "/data/edit";
}

@RequestMapping(value = "/data/edit/{id}", method = RequestMethod.POST)
public String update(@PathVariable("id") int id, @ModelAttribute Data data, BindingResult result) {
    // binding result tests omitted ..
    service.update(id, data);
    return "redirect:/data/show";
}
DataService
@Transactional
public void update(int id, Data form) {
    Data data = dataDao.find(id);
    // ok, I have old values in data and new values in form -> do test stuff ...
    // and MANUALLY copy fields from form to data
    data.setName(form.getName());
    ...
}
It works fine, but in a real case, with many domain objects and many fields in each, it is quite easy to forget one ... when Spring's WebDataBinder already does all of it, including validation, in the controller, without my having to write a single thing other than @ModelAttribute!
2/ I tried to preload the Data from the database by declaring a Converter:
DataConverter
public class DataConverter implements Converter<String, Data> {
    @Override
    public Data convert(String strid) {
        return dataService.getData(Integer.valueOf(strid));
    }
}
Absolutely magic! The data is fully initialized from the database and the fields present in the form are properly updated. But ... no way to get the previous values ...
So my question is: what could be the way to use Spring's DataBinder magic and still have access to the previous values of my domain objects?
You have already found the possible choices, so I will just add some ideas here ;)
I will start with your option of using an empty bean and copying the values over to a loaded instance:
As you have shown in your example, it's an easy approach, and it's quite easily adapted into a generalized solution.
You do not need to copy the properties manually! Take a look at the BeanWrapperImpl class. This Spring object allows you to copy properties and is in fact the one used by Spring itself to achieve its magic. It's used by the ParameterResolvers, for example.
So copying properties is the easy part. Clone the loaded object, fill the loaded object, and compare the two somehow.
If you have one service or just a few, this is the way to go.
In my case we needed this feature on each entity. Using Hibernate, we have the issue that an entity might not only change inside a specific service call, but theoretically all over the place ...
So I decided to create a @MappedSuperclass which all entities need to extend. This entity has a @PostLoad event listener which clones the entity into a transient field directly after loading. (This works if you don't have to load thousands of entities in a request.) Then you also need the @PostPersist and @PostUpdate listeners to clone the new state again, as you probably don't reload the entity before another modification.
To facilitate the controller mapping, I have implemented a StringToEntityConverter doing exactly what you did, just generalized to support any entity type.
Finding the changes in a generalized approach will involve quite a bit of reflection. It's not that hard, and I don't have the code available right now, but you can also use the BeanWrapper for that:
Create a wrapper for both objects. Get all PropertyDescriptors and compare the results. The hardest part is finding out when to stop: do you compare only the first level, or do you need deep comparison?
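A minimal sketch of that comparison with Spring's BeanWrapperImpl, assuming first-level properties with standard getters (oldData being the instance loaded from the DAO, newData the form-filled one):

import java.beans.PropertyDescriptor;
import java.util.Objects;
import org.springframework.beans.BeanWrapper;
import org.springframework.beans.BeanWrapperImpl;

BeanWrapper oldWrap = new BeanWrapperImpl(oldData);
BeanWrapper newWrap = new BeanWrapperImpl(newData);

for (PropertyDescriptor pd : oldWrap.getPropertyDescriptors()) {
    String name = pd.getName();
    if ("class".equals(name)) continue; // skip the Object.getClass() pseudo-property
    Object before = oldWrap.getPropertyValue(name);
    Object after = newWrap.getPropertyValue(name);
    if (!Objects.equals(before, after)) {
        // the property 'name' was modified: before -> after
    }
}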
One other solution could be to rely on Hibernate Envers. This would work if you do not need the changes during the same transaction. As Envers tracks the changes during a flush and creates a Revision, you can "simply" fetch two revisions and compare them.
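Fetching those revisions looks roughly like this (a sketch; it assumes the entity is annotated with @Audited and em is the current EntityManager):

import java.util.List;
import javax.persistence.EntityManager;
import org.hibernate.envers.AuditReader;
import org.hibernate.envers.AuditReaderFactory;

AuditReader reader = AuditReaderFactory.get(em);
List<Number> revisions = reader.getRevisions(Data.class, dataId);

// Load the two most recent revisions and compare them field by field.
Data after = reader.find(Data.class, dataId, revisions.get(revisions.size() - 1));
Data before = reader.find(Data.class, dataId, revisions.get(revisions.size() - 2));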
In all scenarios you will have to write comparison code. I'm not aware of a library, but probably there is something around in the Java world :)
Hope that helps a bit.

DTO objects for each entity

I have inherited an application written in Java that uses JPA to access a database. The application uses a design pattern that I haven't come across before, and I would really appreciate some guidance on why this pattern is used. Like many applications, we have a front end, middleware, and a back-end database. The database is accessed via DAOs. Each method on the DAO loads an entity-DTO, which is just a POJO with nothing but getters and setters, and that entity-DTO is then passed into an entity proper that has other methods that change the entity state. An example [class names changed to protect the innocent]:
enum Gender
{
    Male,
    Female
}

class PersonDTO
{
    private String mFirstName;
    private String mLastName;
    private Gender mGender;
    ...
    String getFirstName() { return this.mFirstName; }
    void setFirstName(String name) { this.mFirstName = name; }
    // etc
}

class Person
{
    PersonDTO mDTO;

    Person(PersonDTO dto)
    {
        mDTO = dto;
    }

    String getFirstName() { return mDTO.getFirstName(); }
    void setFirstName(String name) { mDTO.setFirstName(name); }
    // and so on

    void marry( Person aNotherPerson )
    {
        if( this.getGender()==Gender.Female &&
            aNotherPerson.getGender()==Gender.Male )
        {
            this.setLastName( aNotherPerson.getLastName() );
        }
        aNotherPerson.marry( this ); // note: as written, this mutual call recurses without terminating
    }
}
This is repeated across 30 or so entity classes, doubled to 60 with the DTOs, and I just can't get my head around why. I understand (bits) about separation of concerns, and I also understand (bits) about the difference between an EAO-based design and, say, an active-record-based design.
But does it really have to go this far? Should there always be at least one "DB" object that contains nothing but getters and setters that map to the DB fields?
Disclaimer: there are varying opinions on this subject and depending on your system's architecture you might not have a choice.
With that said ... I've seen this pattern implemented before and am not a huge fan of it; in my opinion it duplicates large amounts of code without adding any real value. It seems to be particularly popular in systems with XML APIs like SOAP, where it might be difficult to map the XML structure directly to your object structure. In your particular case it seems to be even worse, because on top of the duplicate getFirstName()/getLastName() methods, there is business logic (which belongs in the service layer) coded right into a POJO (which should be a simple data transfer object like the DTO). Why should the POJO know that only people of the opposite sex can get married?
To help better understand why, can you explain where these DTOs come from? Is there a front-end submitting data to a controller which then converts it to a DTO, which is then used to populate your entity-proper with data?
It could also be that they are using this just to separate the JPA annotations from the rich domain object.
So I'm guessing that somebody didn't like having JPA annotations and the rich domain object behaviour in one class. Somebody could also have argued that the JPA annotations and the rich domain object should not be in the same layer (because the annotations mix the concerns), so you would get this kind of separation if you won that argument.
Another place where you'd see this kind of thing happening is when you want to abstract similar annotations away from the rich domain objects (like jaxb annotations in web services for example).
So the intent might be that the DTO serves as a sort of serialization mechanism from code to the database, which is very similar to the intent mentioned here by Martin Fowler.
This doesn't appear to be a known pattern.
In general
it is common to maintain a separate object to represent the record in the database, referred to as a domain object.
the CRUD operations on the object are part of a DAO class and other business operations would be part of a Manager class, but none of these classes store the domain object as a member variable, i.e. neither DAO nor Manager carries state. They are just processing elements working on domain objects passed in as parameters.
a DTO is used for communication between the front end and back end, to render data from the DB or to accept input from the end user.
DTOs are transformed to domain objects by the Manager class, where validations and modifications are performed per business rules. Such domain objects are persisted in the DB using the DAO class.
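
As a sketch of that layering (PersonManager and PersonDao are hypothetical names, reusing the Person classes from the question): the Manager converts the DTO, applies the business rules, and hands the domain object to a stateless DAO.

class PersonManager {
    private final PersonDao personDao; // stateless: no domain object stored as a member

    PersonManager(PersonDao personDao) {
        this.personDao = personDao;
    }

    public void register(PersonDTO dto) {
        Person person = new Person(dto); // DTO -> domain object
        // validations and modifications per business rules go here, not in the POJO
        personDao.save(person);          // the DAO only performs CRUD
    }
}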
I have worked on one project where we had DTOs for the sole purpose of transferring information from the front-end controller to a facade layer. The facade layer was then responsible for converting these DTOs to domain objects.
The idea behind this layering is to decouple the front end (view) from the domain. Sometimes DTOs can contain multiple domain objects for an aggregated view. But the domain layer always presents clean, reusable, cacheable (if required) objects.

design of building DTO from domain object

I have a graph of domain objects and I need to build a DTO to send to the view. How do I design this properly? I see two options for where to put the DTO-building code:
1) Into the DTO constructor. But then the domain object has to expose all fields to the DTO via getters, so it's not DDD.
public DTO(DomainObject domain) {
    // access internal fields of different domain objects.
}
2) Into the domain object. There will be no problem with accessing fields, but the domain object will grow very fast as new views are added.
public DTO1 createDTO1() {
    ...
}

public DTO2 createDTO2() {
    ...
}

// and so on...
How should I build DTOs properly?
I think there is a bigger issue at play here. You should not be querying your domain. Your domain should be focused on behaviour and, as such, will quite possibly not contain the data in a format suitable for a view, especially for display purposes.
If you are sending back your entire, say, Customer object to Edit then you are performing entity-based interactions that are very much data focused. You may want to try and place more attention on task-based interactions.
So to get data to your view I'd suggest a simple query layer. Quite often you will need some denormalized data to improve query performance and that will not be present in your domain anyway. If you do need DTOs then map them directly from your data source. If you can get away with a more generic data container structure then that is first prize.
Variants:
A constructor with simple types in the DTO: public DTO(Long id, String title, int year, double price)
A separate converter class with methods like DTO1 createDTO1(DomainObject domain)
A framework for copying properties from one object to another, like Dozer: http://dozer.sourceforge.net/
1) ... the domain object has to present all fields to DTO via getters ...
2) ... the domain object will grow very fast ...
As you can see, the problem is that both alternatives are coupling your model with your DTOs, so you need to decouple them: introduce a layer between them in charge of performing the mapping/translation.
You may find this SO question useful.
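A sketch of such a mapping layer (the DomainObjectAssembler name and the getters it calls are assumptions, following @pasha701's second variant): a dedicated converter keeps the DTO-building code out of both the DTO and the domain object.

class DomainObjectAssembler {
    public DTO1 createDTO1(DomainObject domain) {
        // reads only what this particular view needs
        return new DTO1(domain.getId(), domain.getTitle());
    }

    public DTO2 createDTO2(DomainObject domain) {
        return new DTO2(domain.getId(), domain.getPrice());
    }
}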
domain object has to present all fields to DTO via getters so it's not a DDD
Just because a domain object has getters doesn't make it anemic or anti-DDD. You can have getters and behavior.
Besides, since you want to publish to the View a DTO containing particular data, I can't see how this data can remain private inside the domain object. You have to expose it somehow.
Solution 2) breaks separation of concerns and layering (it's not a domain object's business to create a DTO), so I'd stick with 1) or one of @pasha701's alternatives.

NoSQL Schemaless data and statically typed language

One of the key benefits of NoSQL data stores like MongoDB is that they're schemaless. With dynamically typed languages this seem to be a natural fit. You can receive some arbitrary JSON inputs, perform business logic on the known fields, and persist the whole thing without first having to define the object.
What if your choice of language is limited to the statically typed, say Java? How could I achieve the same level of flexibility?
A typical data flow looks like the following:
JSON input
Deserialize into a Java object to perform business logic
Serialize into BSON to persist in Mongo
where the deserialization-to-object step is necessary since you want to perform business logic with POJOs, not JSON strings. However, before I can deserialize the input into objects, I must define them first. What if the input contains additional fields undefined in the object? While they may not be used in the business logic, I may still want to be able to persist them. I have seen implementations where the undefined fields are put into a map, but I am not sure if that's the best approach. For one, the undefined fields may be complex objects as well.
Schemaless data doesn't necessarily mean structureless data; the fields are typically known in advance, and some type-safe pattern can be applied on top of them to avoid the Magic Container anti-pattern. But this is not always the case. Sometimes keys are entered by the user and cannot be known in advance.
I've used the Role Object Pattern several times to give coherence to a dynamic structure. I think it is well suited here for both cases.
The Role Object Pattern defines a way to access different views of an object. The canonical example is a User that can assume several roles, such as Customer, Vendor, and Seller. Each of these views has different operations it can perform, and each can be accessed from any of the other views. Common fields are typically available at the interface level (especially userId(), or in your case toJson()).
Here's an example of using the pattern:
public void displayPage(User user) {
    display(user.getName());

    if (user.hasView(Customer.class))
        displayShoppingCart(user.getView(Customer.class));

    if (user.hasView(Seller.class))
        displayProducts(user.getView(Seller.class));
}
In the case of data with a known structure, you can have several views bringing different sets of keys into cohesive units. These different views can read the JSON data on construction.
In the case of data with a dynamic structure, an authoritative RawDataView can hold the data in its dynamic form (i.e. a Magic Container like a HashMap<String, Object>). This can be used to query the dynamic data. At the same time, type-safe wrappers can be created lazily and can delegate to the RawDataView to assist program readability/maintainability:
public class Customer implements User {
    private final RawDataView data;

    public Customer(User source) {
        this.data = source.getView(RawDataView.class);
    }

    // All User views must specify this
    @Override
    public long id() {
        return data.getId();
    }

    @Override
    public <T extends User> T getView(Class<T> view) {
        // construct or look up view
    }

    @Override
    public Json toJson() {
        return data.toJson();
    }

    //
    // Specific to Customer
    //
    public List<Item> shoppingCart() {
        return (List<Item>) data.getValue("items", List.class);
    }

    // etc....
}
I've had success with both of these approaches. Here are some extra pointers that I've discovered along the way:
Have a static structure to your data as much as possible. This makes things a lot easier to maintain. I had to break this rule and use the RawDataView approach when working on a legacy system. You may also have to break it with dynamically entered user data, as mentioned above. In that case, use a convention for non-dynamic field names, such as a leading underscore (_userId).
Have equals() and hashcode() implemented such that user.getView(A.class).equals(user.getView(B.class)) is always true for the same user.
Have a UserCore class that does all the heavy lifting of common code, such as creating views; performing common operations (like toJson()); returning common fields (like userId()); and implementing equals() and hashcode(). Have all views delegate to this core object.
Have an AbstractUserView that delegates to the UserCore and implements equals() and hashcode()
Use a type-safe heterogeneous container (like ClassToInstanceMap) for constructing/caching views.
Allow the existence of a view to be queried. This can be done either with a hasView() method or by having getView() return Optional<T>.
You can always have a class which provides both:
easy access to attributes you know about, with optional fallbacks to older formats (for example, it can return "name" if it exists, or the older "name.first" + "name.last" if it doesn't, or some similar scenario)
easy access to unknown elements, simulating the map interface
Whether you do full validation or not, and whether you allow extra undefined attributes or not, depends on what you want to achieve. But I think that creating an abstraction which allows both ways of accessing the data is the best solution.
Hopefully over time you'll get to the stage where your schema is pretty much stable and messing directly with the attributes is no longer needed.
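
A minimal sketch of such a class (all names hypothetical): typed accessors for the known attributes, a fallback to the older format, and a map-style escape hatch for everything else.

import java.util.Map;

public class FlexibleRecord {
    private final Map<String, Object> raw;

    public FlexibleRecord(Map<String, Object> raw) {
        this.raw = raw;
    }

    // Known attribute, falling back to the older "name.first" + "name.last" shape.
    public String getName() {
        Object name = raw.get("name");
        if (name != null) {
            return name.toString();
        }
        return raw.get("name.first") + " " + raw.get("name.last");
    }

    // Unknown elements stay accessible, simulating the map interface.
    public Object get(String key) {
        return raw.get(key);
    }
}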
This is not well solved in Java due to the lack of dynamic types. One way it can be handled is with maps:
Map<String, Object>
where the values can in turn be maps of objects. This is not an elegant way, but it works in Java. An example: the SnakeYaml library for YAML allows traversal in this way.
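As an illustration with Jackson (an assumption; the question does not name a JSON library), arbitrary input can be read into nested maps without defining a class first:

import java.util.Map;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

ObjectMapper objectMapper = new ObjectMapper();
String json = "{\"someKnownField\": 42, \"undefinedField\": {\"nested\": true}}";

// Undefined fields survive because nothing is statically declared;
// nested objects come back as Map<String, Object>, arrays as List<Object>.
Map<String, Object> doc = objectMapper.readValue(json,
        new TypeReference<Map<String, Object>>() {});

Object known = doc.get("someKnownField"); // known field used by the business logic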
