Map DTOs to backend entities - Java

I am developing a REST application where the data in the DB is loaded into entities, transformed while being mapped into the corresponding DTOs, and then returned to the consumer.
Depending on the consumer and some other parameters, a different subset of the data should be returned; for example, if a user is inquiring about his own personal info, the level of detail returned will be different than if a manager is inquiring about the data of his employees, etc.
My question:
Is there any framework to handle this custom mapping (i.e. an XML-based file that determines which field in which backend entity should be mapped to which DTO under which condition), instead of writing custom code for each case? Thanks in advance.
I am using Spring REST + Hibernate.

About XML-file-based mapping, I do not know of any. But what I find really useful and highly customizable is MapStruct. It is a very handy library, and the docs and examples are very good.
A simple example:
@Mapper
public interface CarMapper {

    CarMapper INSTANCE = Mappers.getMapper( CarMapper.class );

    @Mapping(source = "numberOfSeats", target = "seatCount") // here is one of the features you wanted
    CarDto carToCarDto(Car car);
}
There is also IDE and Lombok support.
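Using the generated mapper is then a one-liner, via the INSTANCE constant from the example above:

CarDto carDto = CarMapper.INSTANCE.carToCarDto(car);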

Related

DDD implementation with Spring Data and JPA + Hibernate problem with identities

So I'm trying, for the first time in a not-so-complex project, to implement Domain-Driven Design by separating all my code into application, domain, infrastructure and interfaces packages.
I also went with the whole separation of the JPA entities into domain models that hold my business logic as rich models, using the Builder pattern for instantiation. This approach has given me a headache, and I can't figure out if I'm doing it all wrong when using JPA + ORM and Spring Data with DDD.
Process explanation
The application is a REST API consumer (without any user interaction) that processes a fairly big amount of data resources daily through scheduled tasks and stores or updates them in MySQL. I'm using RestTemplate to fetch and convert the JSON responses into domain objects, and from there I apply any business logic within the domain itself, e.g. validation, events, etc.
From what I have read, the aggregate root object should have an identity for its whole lifecycle, and it should be unique. I have used the id of the REST API object because it is already something I use to identify and track things in my business domain. I have also created a property for the technical id, so that when I convert entities to domain objects they hold a reference for the update process.
When I need to persist the domain objects to the data source (MySQL) for the first time, I convert them into entity objects and persist them using the save() method. So far so good.
Now, when I need to update those records in the data source, I first fetch them from the data source as a List<Employee>, converting entity objects to domain objects, and then I fetch the list of employees from the REST API as domain models. Up until now I have two lists of the same domain object type, List<Employee>. I iterate over them using streams and check whether any objects are not equal() between the lists; if so, a third list is created containing the Employee objects that need to be updated. At this point I have already passed the technical id to the domain objects in the third list, so Hibernate can identify them and use it to update the records that already exist.
Up to here it is all fairly simple, until I use the saveAll() method to update the records.
Questions
I always see Hibernate using INSERT instead of updating the list of records. If I'm correct, the Hibernate session is not recognising the objects I'm throwing at it, because I detached them when I converted them to domain objects?
Does anyone have a better idea how I can implement this differently, or fix this problem?
Or should I stop using this approach of two different objects and continue using them as rich entity models?
Simple classes to explain it with code
EmployeeDO.java
@Entity
@Table(name = "employees")
public class EmployeeDO implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    public EmployeeDO() {}

    ...omitted getters/setters
}
Employee.java
public class Employee {

    private Long persistId;
    private Long employeeId;
    private String name;

    private Employee() {}

    ...omitted getters and Builder
}
EmployeeConverter.java
public class EmployeeConverter {

    public static EmployeeDO serialize(Employee employee) {
        EmployeeDO target = new EmployeeDO();
        if (employee.getPersistId() != null) {
            target.setId(employee.getPersistId());
        }
        target.setName(employee.getName());
        return target;
    }

    public static Employee deserialize(EmployeeDO employee) {
        return new Employee.Builder(employee.getEmployeeId())
                .withPersistId(employee.getId()) // <-- technical id setter
                .withName(employee.getName())
                .build();
    }
}
EmployeeRepository.java
@Component
public class EmployeeRepositoryImpl implements EmployeeRepository {

    @Autowired
    EmployeeJpaRepository db;

    @Override
    public List<Employee> findAll() {
        return db.findAll().stream()
                .map(employee -> EmployeeConverter.deserialize(employee))
                .collect(Collectors.toList());
    }

    @Override
    public void saveAll(List<Employee> employees) {
        db.saveAll(employees.stream()
                .map(employee -> EmployeeConverter.serialize(employee))
                .collect(Collectors.toList()));
    }
}
EmployeeJpaRepository.java
@Repository
public interface EmployeeJpaRepository extends JpaRepository<EmployeeDO, Long> {
}
I use the same approach on my project: two different models for the domain and the persistence.
First, I would suggest that you don't use the converter approach, but use the Memento pattern instead. Your domain entity exports a memento object and can be restored from that same object. Yes, the domain then has two functions that aren't related to the domain (they exist just to fulfil a non-functional requirement), but, on the other hand, you avoid exposing functions, getters and constructors that the domain business logic never uses.
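A minimal sketch of that idea applied to the Employee model from the question (the Memento class and its fields are illustrative, not from any library):

public class Employee {

    private Long persistId;
    private Long employeeId;
    private String name;

    private Employee() {}

    // export: the only place where internal state leaves the domain object
    public Memento toMemento() {
        Memento m = new Memento();
        m.persistId = persistId;
        m.employeeId = employeeId;
        m.name = name;
        return m;
    }

    // restore: rebuild the domain object without exposing setters or a public constructor
    public static Employee fromMemento(Memento m) {
        Employee e = new Employee();
        e.persistId = m.persistId;
        e.employeeId = m.employeeId;
        e.name = m.name;
        return e;
    }

    // a dumb state holder with no behaviour
    public static class Memento {
        public Long persistId;
        public Long employeeId;
        public String name;
    }
}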
As for the persistence part, I don't use JPA, exactly for this reason: you have to write a lot of code to reload, update and persist the entities correctly. I write SQL directly: I can write and test it fast, and once it works I'm sure it does what I want. With the memento object I have directly what I will use in the insert/update query, and I save myself a lot of headaches with JPA handling complex table structures.
Anyway, if you want to use JPA, the only solution is the following (a sketch comes after the list):
load the persistence entities and transform them into domain entities
update the domain entities according to the changes you have to make in your domain
save the domain entities, which means:
reload the persistence entities
apply to them the changes from the updated domain entities, creating new entities where needed
save the persistence entities
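A rough sketch of those steps, reusing the EmployeeDO entity and the EmployeeJpaRepository (db) from the question; assume it runs inside a transaction:

public void update(List<Employee> changedEmployees) {
    List<EmployeeDO> toPersist = new ArrayList<>();
    for (Employee employee : changedEmployees) {
        // reload the persistence entity, or create a fresh one for new records
        EmployeeDO target = employee.getPersistId() != null
                ? db.findById(employee.getPersistId()).orElseGet(EmployeeDO::new)
                : new EmployeeDO();
        // apply the changes coming from the updated domain entity
        target.setName(employee.getName());
        toPersist.add(target);
    }
    // reloaded entities are updated, fresh ones are inserted
    db.saveAll(toPersist);
}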
I've tried a mixed solution, where the domain entities are extended by the persistence ones (a bit complex to do). A lot of care must be taken to avoid the domain model adapting to JPA restrictions that come from the persistence model.
Here is an interesting read about splitting the two models.
Finally, my suggestion is to consider how complex the domain is and use the simplest solution for the problem:
Is it big, with a lot of complex behaviour? Is it expected to grow? Use two models, domain and persistence, and manage the persistence directly with SQL. It avoids a lot of chaos in the read/update/save phases.
Is it simple? Then, first: should you use the DDD approach at all? If the answer is really yes, I would let the JPA annotations slip inside the domain. Yes, it's not pure DDD, but we live in the real world, and the time to do something simple the pure way should not be orders of magnitude bigger than the time needed to do it with some compromises. On the other hand, you can put all this mapping in an XML file in the infrastructure layer, avoiding cluttering the domain with it, as is done in the Spring DDD sample here.
When you want to update an existing object, you first have to load it through entityManager.find() and apply the changes to that object, or use entityManager.merge(), since you are working with detached entities.
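Both options in code, as a sketch; it assumes an open transaction and an Employee that already carries its technical id:

// option 1: load the managed instance and modify it; Hibernate issues an UPDATE at flush
EmployeeDO managed = entityManager.find(EmployeeDO.class, employee.getPersistId());
managed.setName(employee.getName());

// option 2: merge the detached instance that carries the technical id
EmployeeDO detached = EmployeeConverter.serialize(employee);
EmployeeDO merged = entityManager.merge(detached); // results in an UPDATE, not an INSERT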
Anyway, modelling rich domain models based on JPA is the perfect use case for Blaze-Persistence Entity Views.
Blaze-Persistence is a query builder on top of JPA which supports many of the advanced DBMS features on top of the JPA model. I created Entity Views on top of it to allow easy mapping between JPA models and custom interface-defined models, something like Spring Data Projections on steroids. The idea is that you define your target structure the way you like and map attributes (getters) via JPQL expressions to the entity model. Since the attribute name is used as the default mapping, you mostly don't need explicit mappings, as 80% of the use cases are DTOs that are a subset of the entity model.
The interesting point here is that entity views can also be updatable and support automatic translation back to the entity/DB model.
A mapping for your model could look as simple as the following:
@EntityView(EmployeeDO.class)
@UpdatableEntityView
interface Employee {

    @IdMapping("persistId")
    Long getId();

    Long getEmployeeId();

    String getName();
    void setName(String name);
}
Querying is a matter of applying the entity view to a query, the simplest being just a query by id.
Employee dto = entityViewManager.find(entityManager, Employee.class, id);
The Spring Data integration allows you to use it almost like Spring Data Projections (see https://persistence.blazebit.com/documentation/entity-view/manual/en_US/index.html#spring-data-features), and it can also be saved back. Here is a sample repository:
@Repository
interface EmployeeRepository {
    Employee findOne(Long id);
    void save(Employee e);
}
It will only fetch the mappings that you tell it to fetch and also only update the state that you make updatable through setters.
With the Jackson integration you can deserialize your payload onto a loaded entity view, or you can avoid loading altogether and use the Spring MVC integration to capture just the state that was transferred and flush that. It could look like the following:
@RequestMapping(path = "/employee/{id}", method = RequestMethod.PUT, consumes = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<String> updateEmp(@EntityViewId("id") @RequestBody Employee emp) {
    employeeRepository.save(emp);
    return ResponseEntity.ok(emp.getId().toString());
}
Here you can see an example project: https://github.com/Blazebit/blaze-persistence/tree/master/examples/spring-data-webmvc

Spring Boot and Hibernate: Still fetching one-to-many relationships even when filtered out from serializing

I have a Spring Boot project using a MySQL database. It has a data model (Project) with a one-to-many relationship to another model (Files), specified with a fetch type of FetchType.LAZY.
@Entity
public class Project extends BaseEntity {

    @OneToMany(fetch = FetchType.LAZY)
    @JoinColumn(name = "projectId")
    private List<Files> files;
    ...
}
I have an endpoint, /projects, that serializes a list of projects to JSON, and another endpoint, /projects/[projectid], that provides the details of a single project. I do not need the list of Files for each and every Project in the first endpoint, but I do in the latter.
To let me decide at the controller level whether to fetch the Files associated with a project, I created a filter for the /projects endpoint that strips out the files JSON field, following some of the guidance in this question: Ignore fields from Java object dynamically while sending as JSON from Spring MVC.
Making my Project model look like this:
@JsonFilter("FilterOutFiles")
@Entity
public class Project extends BaseEntity {
    ...
}
and the controller's endpoint like so:
@Slf4j
@RestController
public class ProjectsController {
    ...

    @JsonRequestMapping(value = "/projects", method = RequestMethod.GET)
    public MappingJacksonValue getUserProjects(@CurrentAccount final UserRuntime userRuntime) {
        User user = userRuntime.getUser();
        List<Project> ownProjectList = projectRepository.findAllByOwner_UserId(user.getId());

        SimpleFilterProvider filters = new SimpleFilterProvider();
        filters.addFilter("FilterOutFiles", SimpleBeanPropertyFilter.serializeAllExcept("files"));

        MappingJacksonValue mapping = new MappingJacksonValue(ownProjectList);
        mapping.setFilters(filters);
        return mapping;
    }
    ...
}
This works fine in stripping the list of files out of the JSON. No problem there. My assumption was that, with the fetch type set to lazy, retrieval of the files would only happen during serialization (if required!), and since the field is filtered out by the serializer it wouldn't be required, and therefore wouldn't be fetched.
In the course of executing the controller's function, I can see one SQL request to get the list of projects, issued when projectRepository.findAllByOwner_UserId is called. That is as expected. However, it seems that after returning from my controller function (I presume during serialization), Hibernate makes queries to retrieve the list of files for each project, one query per project. Because files themselves also have one-to-many relationships with other models, this quickly balloons into hundreds, sometimes thousands, of SELECT statements instead of just one.
How can I prevent Hibernate from resolving this lazy fetch of data I do not require, which has now been filtered out by the serializer?

How to use a getById method to get the corresponding name in Spring MVC?

I'm learning Spring MVC and I want to find a car via its id but get the name in return.
In my service class I call a generic method, getXXXById. This is something JPA gives me out of the box.
I know that I get the whole entity, but how can I receive just the name corresponding to the id?
Example: I call getCarById(2) and it gives me back Tesla.
My Table:
id | Name
----------
1 | Ford
2 | Tesla
My Service:
class CarService {
    // code ...
    public Optional<CarEntity> getCarById(int id) {
        return carRepository.findById(id);
    }
}
There are two options to do that.
Making your own query
You could write your own query in JPQL to retrieve only the names.
For example, you could create a method like this in your repository:
@Query("select c.name from CarEntity c where c.id = ?1")
public String findNameById(Integer id);
More information on this feature of Spring Data JPA is available HERE.
Projections
The second option is to use a projection. As it is written in the documentation:
Spring Data query methods usually return one or multiple instances of the aggregate root managed by the repository. However, it might sometimes be desirable to rather project on certain attributes of those types. Spring Data allows to model dedicated return types to more selectively retrieve partial views onto the managed aggregates.
In simple words, it allows you to aggregate your query results into a limited set of attributes rather than the whole entity.
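As a sketch of this option, assuming a CarEntity with a name attribute (the interface and method names are illustrative):

// closed projection: only the name attribute is selected
public interface CarName {
    String getName();
}

// Spring Data derives the query and applies the projection to the result
public interface CarRepository extends JpaRepository<CarEntity, Integer> {
    Optional<CarName> findProjectedById(int id);
}

// usage: carRepository.findProjectedById(2).map(CarName::getName) -> Optional[Tesla]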
Specifically for your needs I'd suggest the first approach, but it is worth knowing both.

How to make dynamic queries at run-time in Spring Boot and Data?

I am new to Java and started with Spring Boot and Spring Data JPA, so I know two ways to fetch data:
via the repository layer, with literal method naming: findOneByCity(String city);
via a custom repo, with the @Query annotation: @Query("select * from table where city like ?1")
Both approaches are static by design.
How can I fetch data with a query that I have to build at runtime?
What I am trying to achieve is the ability to create dynamic reports without touching the code. A table would hold report records with names and SQL queries with default parameters like begin_date, end_date etc., but with a variety of bodies. Example:
"Sales report by payment method" | select * from sales where met_pay = %pay_method% and date is between %begin_date% and %end_date%;
The Criteria API is designed for exactly that.
It provides an alternative way to define JPA queries.
With it, you can build dynamic queries based on data provided at runtime.
To use it, you will need to create a custom repository implementation, not just an interface.
You will indeed need to inject an EntityManager to create the objects needed to build and execute the CriteriaQuery.
You will, of course, have to write some boilerplate code to build and execute the query.
This section explains how to create a custom repository with Spring Boot.
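A minimal sketch of such a custom repository method; the Sale entity and its fields are invented here to match the sales-report example from the question:

import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.criteria.*;

public class ReportRepositoryImpl {

    @PersistenceContext
    private EntityManager entityManager;

    // the WHERE clause is assembled at runtime from whichever parameters are present
    public List<Sale> salesReport(String payMethod, LocalDate beginDate, LocalDate endDate) {
        CriteriaBuilder cb = entityManager.getCriteriaBuilder();
        CriteriaQuery<Sale> query = cb.createQuery(Sale.class);
        Root<Sale> sale = query.from(Sale.class);

        List<Predicate> predicates = new ArrayList<>();
        if (payMethod != null) {
            predicates.add(cb.equal(sale.get("metPay"), payMethod));
        }
        if (beginDate != null && endDate != null) {
            predicates.add(cb.between(sale.<LocalDate>get("date"), beginDate, endDate));
        }
        query.select(sale).where(predicates.toArray(new Predicate[0]));
        return entityManager.createQuery(query).getResultList();
    }
}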
About your edit:
What I am trying to achieve is the possibility to create dynamic reports without touching the code. A table would have records of reports with names and SQL queries with default parameters like begin_date, end_date etc., but with a variety of bodies.
If the queries are written by hand in a plain text file, Criteria will not be the best choice, as JPQL/SQL queries and Criteria queries are really not written the same way.
In the Java code, mapping the JPQL/SQL queries defined in a plain text file to a Map<String, String> structure would be more suitable.
But I have some doubts about the feasibility of what you want to do.
Queries may have specific parameters; in some cases you would have no choice but to modify the code. Specificities in parameters will make query maintenance very hard and error-prone. Personally, I would implement the need by allowing the client to select, for each field, whether a condition should be applied.
Then, on the implementation side, I would use this user input to build my CriteriaQuery.
And there Criteria does an excellent job: less code duplication, more adaptability in query building and, in addition, more type checks at compile time.
Spring Data repositories use an EntityManager underneath; repository classes are just another layer so that the user need not worry about the details. But if a user wants to get their hands dirty, then of course Spring wouldn't mind.
That is when you can use the EntityManager directly.
Let us assume you have a repository class like AbcRepository:
interface AbcRepository extends JpaRepository<Abc, String> {
}
You can create a custom repository like
interface CustomizedAbcRepository {
    void someCustomMethod(User user);
}
The implementation class looks like
class CustomizedAbcRepositoryImpl implements CustomizedAbcRepository {

    @Autowired
    EntityManager entityManager;

    public void someCustomMethod(User user) {
        // You can build your custom query using Criteria or CriteriaBuilder
        // and then use it in EntityManager methods
    }
}
Just a word of caution: the naming of the customized interface and the customized implementing class is very important.
In recent versions of Spring Data, the ability to use the JPA Criteria API was added. For more information, see this blog post: https://jverhoelen.github.io/spring-data-queries-jpa-criteria-api/.
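That Criteria integration is exposed through Specifications; here is a small sketch, with Sale and metPay as placeholder names:

import org.springframework.data.jpa.domain.Specification;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.JpaSpecificationExecutor;

// the repository additionally extends JpaSpecificationExecutor
public interface SaleRepository extends JpaRepository<Sale, Long>, JpaSpecificationExecutor<Sale> {
}

class SaleSpecifications {
    static Specification<Sale> payMethodIs(String payMethod) {
        return (root, query, cb) -> cb.equal(root.get("metPay"), payMethod);
    }
}

// composed at runtime: saleRepository.findAll(SaleSpecifications.payMethodIs("card"));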

OData Service with Olingo V4 and MySQL database connection

I was following an example in which we build an OData service with Olingo from Java (a Maven project). The provided example doesn't have any database interaction; it uses a Storage class which contains hard-coded data.
You can find the sample code on git; please refer to example p0_all at the provided URL.
Does anyone know how we can connect the git example to a database and, furthermore, perform CRUD operations?
Please help me with some good examples or concepts.
Thanks in advance.
I recently built an OData producer using Olingo and found myself similarly frustrated. I think part of the issue is that there really are a lot of different ways to build an OData service with Olingo, and the data access piece is entirely up to the developer to sort out in their own project.
Firstly, you need an application that has a database connection set up. So, completely disregarding Olingo, you should have an app that connects to and can query a database. If you are uncertain how to build a Java application that can query a MySQL datasource, you should Google for tutorials related to that problem, which has nothing to do with Olingo.
Next, you need to write the methods and queries to perform the CRUD operations in your application. Again, these methods have nothing to do with Olingo; see the DAO sketch below.
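For instance, a plain JDBC DAO that Olingo never sees; Foo, FooDao and the table layout are invented for this example, but they match the getAllFoos() call used later in this answer:

import java.sql.*;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

// minimal value class used by the DAO and, later, by the Olingo conversion code
class Foo {
    private final long id;
    private final String fooName;
    Foo(long id, String fooName) { this.id = id; this.fooName = fooName; }
    long getId() { return id; }
    String getFooName() { return fooName; }
}

public class FooDao {

    private final DataSource dataSource;

    public FooDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // ordinary CRUD method; nothing Olingo-specific in here
    public List<Foo> getAllFoos() throws SQLException {
        List<Foo> foos = new ArrayList<>();
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT id, foo_name FROM foo");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                foos.add(new Foo(rs.getLong("id"), rs.getString("foo_name")));
            }
        }
        return foos;
    }
}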
Where Olingo starts to come into play is in your implementation of the processor classes: EntityCollectionProcessor, EntityProcessor, etc. (Note that there are other concerns, such as setting up your CsdlEntityTypes and schema/service document, but those are outside the scope of your question.)
Let's start by looking at EntityCollectionProcessor. By implementing the EntityCollectionProcessor interface you need to override the readEntityCollection() function. The purpose of this function is to parse the OData URI for the entity name, fetch an EntityCollection for that entity, and then serialize the EntityCollection into an OData-compliant response. Here's the implementation of readEntityCollection() from your example link:
public void readEntityCollection(ODataRequest request, ODataResponse response, UriInfo uriInfo, ContentType responseFormat)
        throws ODataApplicationException, SerializerException {

    // 1st: retrieve the requested EntitySet from the uriInfo object
    // (representation of the parsed service URI)
    List<UriResource> resourcePaths = uriInfo.getUriResourceParts();
    UriResourceEntitySet uriResourceEntitySet = (UriResourceEntitySet) resourcePaths.get(0);
    // in our example, the first segment is the EntitySet
    EdmEntitySet edmEntitySet = uriResourceEntitySet.getEntitySet();

    // 2nd: fetch the data from the backend for this requested EntitySetName;
    // it has to be delivered as an EntityCollection object
    EntityCollection entitySet = getData(edmEntitySet);

    // 3rd: create a serializer based on the requested format (json)
    ODataSerializer serializer = odata.createSerializer(responseFormat);

    // 4th: serialize the content: transform from the EntitySet object to an InputStream
    EdmEntityType edmEntityType = edmEntitySet.getEntityType();
    ContextURL contextUrl = ContextURL.with().entitySet(edmEntitySet).build();

    final String id = request.getRawBaseUri() + "/" + edmEntitySet.getName();
    EntityCollectionSerializerOptions opts = EntityCollectionSerializerOptions.with().id(id).contextURL(contextUrl).build();
    SerializerResult serializerResult = serializer.entityCollection(serviceMetadata, edmEntityType, entitySet, opts);
    InputStream serializedContent = serializerResult.getContent();

    // Finally: configure the response object: set the body, headers and status code
    response.setContent(serializedContent);
    response.setStatusCode(HttpStatusCode.OK.getStatusCode());
    response.setHeader(HttpHeader.CONTENT_TYPE, responseFormat.toContentTypeString());
}
You can ignore (and reuse) everything in this example except for the "2nd" step:
EntityCollection entitySet = getData(edmEntitySet);
This line of code is where Olingo finally starts to interact with our underlying system, and the pattern we see here informs how we should set up the rest of our CRUD operations.
The function getData(edmEntitySet) can be anything you want, in any class you want. The only restriction is that it must return an EntityCollection. So what you need to do is call a function that queries your MySQL database and returns all records for the given entity (using the string name of the entity). Then, once you have a List or Set (or whatever) of your records, you need to convert it into an EntityCollection.
As an aside, I think this is probably where the disconnect between the Olingo examples and real-world applications comes from. The code behind that getData(edmEntitySet) call can be architected in infinitely many ways, depending on the design pattern used in the underlying system (MVC etc.), styling choices, scalability requirements, and so on.
Here's an example of how I created an EntityCollection from a List<Foo> returned by my query (keep in mind that I am assuming you know how to query your MySQL datasource and have already coded a function that retrieves all records for a given entity):
private List<Foo> getAllFoos(){
    // ... code that queries the datasource and retrieves all Foo records
}

// loop over List<Foo>, converting each instance of Foo into an Olingo Entity
private EntityCollection makeEntityCollection(List<Foo> fooList){
    EntityCollection entitySet = new EntityCollection();
    for (Foo foo : fooList){
        entitySet.getEntities().add(createEntity(foo));
    }
    return entitySet;
}

// convert an instance of the Foo object into an Olingo Entity
private Entity createEntity(Foo foo){
    Entity tmpEntity = new Entity()
            .addProperty(createPrimitive(Foo.FIELD_ID, foo.getId()))
            .addProperty(createPrimitive(Foo.FIELD_FOO_NAME, foo.getFooName()));
    return tmpEntity;
}
Just for added clarity, getData(edmEntitySet) might look like this:
public EntityCollection getData(String edmEntitySet){
    // ... code to determine which query to call based on the entity name
    List<Foo> foos = getAllFoos();
    EntityCollection entitySet = makeEntityCollection(foos);
    return entitySet;
}
If you can find an Olingo example that uses a DataProvider class, there are some basic examples of how you might set up the // ...code to determine which query to call based on entity name. I ended up modifying that pattern heavily using Java reflection, but that is totally unrelated to your question.
So getData(edmEntitySet) is a function that takes an entity name, queries the datasource for all records of that entity (returning a List<Foo>), and then converts that List<Foo> into an EntityCollection. The EntityCollection is made by calling the createEntity() function, which takes an instance of my Foo object and turns it into an Olingo Entity. The EntityCollection is then returned to the readEntityCollection() function, where it can be properly serialized and returned as an OData response.
This example exposes a bit of the architecture problem that Olingo has with its own examples. In my example, Foo is an object with constants that identify the field names, which Olingo uses to generate the OData schema and service document. The object has a method to return its own CsdlEntityType, as well as a constructor, its own properties, getters/setters, etc. You don't have to set your system up this way, but for the scalability requirements of my project this is how I chose to do things.
This is the general pattern that Olingo uses: override the methods of an interface, then call functions in a separate part of your system that interact with your data in the desired manner, and convert the data into Olingo-readable objects so that whatever "OData stuff" needs to happen in the response can be done. If you want to implement CRUD for a single entity, you need to implement EntityProcessor and its various CRUD methods, and inside those methods call the functions in your system (totally separate from any Olingo code) that create(), read() (a single entity), update(), or delete(). A skeleton of that pattern is sketched below.
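To make that concrete, here is a hedged skeleton of such a single-entity processor. Only delete is filled in; FooDao.deleteFooById() is a hypothetical method on your own data access class, and only the method signatures come from Olingo's EntityProcessor interface:

import java.util.Locale;
import org.apache.olingo.commons.api.format.ContentType;
import org.apache.olingo.commons.api.http.HttpStatusCode;
import org.apache.olingo.server.api.*;
import org.apache.olingo.server.api.processor.EntityProcessor;
import org.apache.olingo.server.api.uri.UriInfo;
import org.apache.olingo.server.api.uri.UriParameter;
import org.apache.olingo.server.api.uri.UriResourceEntitySet;

public class FooEntityProcessor implements EntityProcessor {

    private final FooDao fooDao; // your own data access class, nothing Olingo in it
    private OData odata;
    private ServiceMetadata serviceMetadata;

    public FooEntityProcessor(FooDao fooDao) {
        this.fooDao = fooDao;
    }

    @Override
    public void init(OData odata, ServiceMetadata serviceMetadata) {
        this.odata = odata;
        this.serviceMetadata = serviceMetadata;
    }

    @Override
    public void deleteEntity(ODataRequest request, ODataResponse response, UriInfo uriInfo)
            throws ODataApplicationException {
        // parse the key predicate from the URI, e.g. Foos(42)
        UriResourceEntitySet uriResource = (UriResourceEntitySet) uriInfo.getUriResourceParts().get(0);
        UriParameter key = uriResource.getKeyPredicates().get(0);

        // call your own, Olingo-free delete method (hypothetical)
        fooDao.deleteFooById(Long.parseLong(key.getText()));

        // a successful OData DELETE answers with 204 No Content
        response.setStatusCode(HttpStatusCode.NO_CONTENT.getStatusCode());
    }

    // readEntity, createEntity and updateEntity follow the same pattern:
    // parse the URI, call your own query code, convert between Foo and
    // Olingo Entity objects, then serialize the response.
    @Override
    public void readEntity(ODataRequest request, ODataResponse response, UriInfo uriInfo,
            ContentType responseFormat) throws ODataApplicationException {
        throw new ODataApplicationException("Not implemented",
                HttpStatusCode.NOT_IMPLEMENTED.getStatusCode(), Locale.ROOT);
    }

    @Override
    public void createEntity(ODataRequest request, ODataResponse response, UriInfo uriInfo,
            ContentType requestFormat, ContentType responseFormat) throws ODataApplicationException {
        throw new ODataApplicationException("Not implemented",
                HttpStatusCode.NOT_IMPLEMENTED.getStatusCode(), Locale.ROOT);
    }

    @Override
    public void updateEntity(ODataRequest request, ODataResponse response, UriInfo uriInfo,
            ContentType requestFormat, ContentType responseFormat) throws ODataApplicationException {
        throw new ODataApplicationException("Not implemented",
                HttpStatusCode.NOT_IMPLEMENTED.getStatusCode(), Locale.ROOT);
    }
}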
