Data transfer object in DAO design pattern - Java

I am a bit confused about what data a DTO should contain.
For example, let's assume that we have two tables: User and Orders.
The Orders table contains id_users, which is a foreign key to the User table.
Obviously I have two DAOs, MysqlUserDao and MysqlOrdersDao, with CRUD operations, and two transfer objects, User and Order, in which I store the JDBC rowset.
If I want to get the list of users and, for each user, all of his orders, how should I do it:
1) In my MysqlUserDao, create a function getUsersAndOrders() (select users.*, orders.* from users join orders), and give my User DTO an OrderList property where I put the orders?
2) In my MysqlUserDao, create a function getAllUsers() (select * from users), and for each user call the MysqlOrdersDao function getOrder(id_user)?
And some clarifications:
1) Do I need to create a DAO object for each table in the database, or just for complex ones? For example, products and images: should that be two DAOs or just one?
2) Should a DTO object have only properties and getters/setters, or can it also have other methods, like convertEuroToUsd, etc.?
Thanks

In your scenario, #1 is the best option, because #2 generates too much overhead: it is the classic N+1 query problem (one query for the users, plus one extra query per user).
1) In my MysqlUserDao create a function: getUsersAndOrders() (select users.*, orders.* from users join orders). And my User DTO should have an OrderList property where I put the orders?
Clarifications:
1: If your database has a good design, then a DAO for each table is a good approach. There are some cases where you can merge DAOs together (e.g., inheritance).
2: Yes. It should be a plain bean (or POJO if you want). I suggest creating another layer where you can define your workflow. I've seen people call this extra layer the model, sometimes DataManager, sometimes just Manager.
For instance: when creating an order you should insert a record in the Order table and also insert a record in the Notification table (because end users are notified via email every time an order is created):
class OrderManager {
    private OrderDAO oDao;
    private NotificationDAO nDao;

    public void saveOrder(OrderDTO o) {
        // Persist the order and keep its generated id
        Long orderId = oDao.save(o);

        // Insert the matching notification record
        NotificationDTO n = new NotificationDTO();
        n.setType(NotificationType.ORDER_CREATED);
        n.setEntityId(orderId);
        nDao.save(n);
    }
}
UPDATE:
In most cases we can say that:
"Managers" may handle many DAOs;
DAOs should not contain other DAOs and are tied to a DTO;
DTOs can contain other DTOs.
There is an important idea of LAZY vs. EAGER loading when it comes to handling collections, but that is another subject :D
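To make the last two rules concrete, here is a minimal sketch of what the User DTO from option #1 could look like (the names are illustrative, and the accessors are trimmed to the essentials):

import java.util.ArrayList;
import java.util.List;

// Plain bean: state plus accessors only, no business logic.
public class UserDTO {
    private Long id;
    private String name;
    // A DTO may contain other DTOs: this list is filled by the join in getUsersAndOrders()
    private List<OrderDTO> orders = new ArrayList<>();

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public List<OrderDTO> getOrders() { return orders; }
    public void setOrders(List<OrderDTO> orders) { this.orders = orders; }
}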

Disclaimer:
+ The following assumes that these DTOs are used mainly for persistence, i.e., for use with DAOs.
+ This approach is heavily oriented towards relational database persistence.
+ It is assumed that a user can have placed many orders, but that an order can have at most one user.
+ It is also assumed that you want to query/process orders and users separately.
I would have done the following:
a DTO for User (UserDTO + UserDAO)
a DTO for Orders (OrderDTO + OrderDAO)
a DTO to connect both (UserOrderDTO + UserOrderDAO)
I would not keep references in the UserDTO to any OrderDTO.
I might keep a reference to the user in the OrderDTO, as an attribute holding a string id (the string id being the user id), but I might also not. I assume the latter.
a Service Application to manage the different DAOs associated with the Order (OrderSA)
The resulting code would be as follows:
class OrderManagerServiceApplication {
    private OrderDAO oDao;
    private UserDAO uDao;
    private UserOrderDAO uoDao;

    public void saveOrder(OrderDTO o, String userId) {
        // Save the order
        Long orderId = oDao.save(o);
        // Save the association to the user who placed the order
        UserOrderDTO uodto = new UserOrderDTO(orderId, userId);
        uoDao.save(uodto);
    }

    public List<OrderDTO> getOrdersForUser(String userId) {
        // Get the ids of the orders associated with the user
        List<Long> orderIds = uoDao.getAllForUser(userId);
        // Retrieve the order DTOs
        List<OrderDTO> result = new ArrayList<>();
        for (Long orderId : orderIds) {
            result.add(oDao.getOrder(orderId));
        }
        return result;
    }

    public UserDTO getUserForOrder(Long orderId) {
        // Get the user associated with the order
        String userId = uoDao.getUserForOrder(orderId);
        // Retrieve the user DTO
        return uDao.getUser(userId);
    }
}
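For completeness, the connecting DTO is nothing more than the pair of ids, mirroring the link table. A minimal sketch (class and field names follow the code above; a users_orders(order_id, user_id) link table is assumed):

// Immutable pair linking one order to the user who placed it.
public class UserOrderDTO {
    private final Long orderId;
    private final String userId;

    public UserOrderDTO(Long orderId, String userId) {
        this.orderId = orderId;
        this.userId = userId;
    }

    public Long getOrderId() { return orderId; }
    public String getUserId() { return userId; }
}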

Related

DDD implementation with Spring Data and JPA + Hibernate problem with identities

So I'm trying, for the first time and in a not-so-complex project, to implement Domain-Driven Design by separating all my code into application, domain, infrastructure and interfaces packages.
I also went with the full separation of the JPA entities from the domain models that hold my business logic as rich models, and I used the Builder pattern to instantiate them. This approach gave me a headache, and I can't figure out whether I'm doing it all wrong when using JPA + ORM and Spring Data with DDD.
Process explanation
The application is a REST API consumer (without any user interaction) that processes a fairly big amount of data resources daily through scheduled tasks and stores or updates them in MySQL. I'm using RestTemplate to fetch and convert the JSON responses into domain objects, and from there I'm applying any business logic within the domain itself, e.g. validation, events, etc.
From what I have read, the aggregate root object should have an identity for its whole lifecycle, and it should be unique. I have used the id of the REST API object because it is already what I use to identify and track objects in my business domain. I have also created a property for the technical id, so that when I convert entities to domain objects they hold a reference for the update process.
When I need to persist the domain objects to the data source (MySQL) for the first time, I convert them into entity objects and persist them using the save() method. So far so good.
Now, when I need to update those records in the data source, I first fetch them from the data source as a list of employees and convert the entity objects to domain objects; then I fetch the list of employees from the REST API as domain models. Up to this point I have two lists of the same domain object type, List<Employee>. I iterate over them using streams, checking whether objects are not equal() between the two lists; if so, a third list is built with the Employee objects that need to be updated. At this point I have already passed the technical id to the domain objects in the third list, so that Hibernate can identify the records that already exist and update them.
Up to here it is all fairly simple stuff, until I use the saveAll() method to update the records.
Questions
I always see Hibernate doing an INSERT instead of updating the list of records. So, if I'm right, the Hibernate session is not recognising the objects I'm throwing at it, because I detached them when I converted them to domain objects?
Does anyone have a better idea how I can implement this differently, or fix this problem?
Or should I stop using this two-object approach and keep working with rich entity models?
Simple classes to explain it with code
EmployeeDO.java
@Entity
@Table(name = "employees")
public class EmployeeDO implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    public EmployeeDO() {}

    ...omitted getters/setters
}
Employee.java
public class Employee {

    private Long persistId;
    private Long employeeId;
    private String name;

    private Employee() {}

    ...omitted getters and Builder
}
EmployeeConverter.java
public class EmployeeConverter {

    public static EmployeeDO serialize(Employee employee) {
        EmployeeDO target = new EmployeeDO();
        if (employee.getPersistId() != null) {
            target.setId(employee.getPersistId());
        }
        target.setName(employee.getName());
        return target;
    }

    public static Employee deserialize(EmployeeDO employee) {
        return new Employee.Builder(employee.getEmployeeId())
                .withPersistId(employee.getId()) // <-- Technical ID setter
                .withName(employee.getName())
                .build();
    }
}
EmployeeRepository.java
@Component
public class EmployeeRepositoryImpl implements EmployeeRepository {

    @Autowired
    EmployeeJpaRepository db;

    @Override
    public List<Employee> findAll() {
        return db.findAll().stream()
                .map(employee -> EmployeeConverter.deserialize(employee))
                .collect(Collectors.toList());
    }

    @Override
    public void saveAll(List<Employee> employees) {
        db.saveAll(employees.stream()
                .map(employee -> EmployeeConverter.serialize(employee))
                .collect(Collectors.toList()));
    }
}
EmployeeJpaRepository.java
@Repository
public interface EmployeeJpaRepository extends JpaRepository<EmployeeDO, Long> {
}
I use the same approach in my project: two different models, one for the domain and one for the persistence.
First, I would suggest you don't use the converter approach, but the Memento pattern instead. Your domain entity exports a memento object and can be restored from that same object. Yes, the domain then has two functions that aren't related to the domain itself (they exist just to satisfy a non-functional requirement), but, on the other side, you avoid exposing functions, getters and constructors that the domain business logic never uses.
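A minimal sketch of that idea, reusing the names from the question (the memento class itself is illustrative, not part of the original code):

// Dumb snapshot of the aggregate's state; no behaviour at all.
public final class EmployeeMemento {
    public final Long persistId;   // technical id, null before the first insert
    public final Long employeeId;  // business id coming from the REST API
    public final String name;

    public EmployeeMemento(Long persistId, Long employeeId, String name) {
        this.persistId = persistId;
        this.employeeId = employeeId;
        this.name = name;
    }
}

public class Employee {
    private Long persistId;
    private Long employeeId;
    private String name;

    private Employee() {}

    // The two non-domain functions mentioned above:
    public EmployeeMemento toMemento() {
        return new EmployeeMemento(persistId, employeeId, name);
    }

    public static Employee fromMemento(EmployeeMemento m) {
        Employee e = new Employee();
        e.persistId = m.persistId;
        e.employeeId = m.employeeId;
        e.name = m.name;
        return e;
    }
}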
As for persistence, I don't use JPA, exactly for this reason: you have to write a lot of code to reload, update and persist the entities correctly. I write the SQL code directly: I can write and test it fast, and once it works I'm sure that it does what I want. With the memento object I have exactly what I will use in the insert/update query, and I save myself a lot of the headaches JPA brings when handling complex table structures.
Anyway, if you want to use JPA, the only solution is to:
load the persistence entities and transform them into domain entities
update the domain entities according to the changes that you have to make in your domain
save the domain entities, which means:
reload the persistence entities
apply to them the changes you get from the updated domain entities, creating new ones where needed
save the persistence entities
I've also tried a mixed solution, where the domain entities are extended by the persistence ones (a bit complex to do). A lot of care must be taken to prevent the domain model from adapting itself to the restrictions that JPA imposes through the persistence model.
There's an interesting read here about splitting the two models.
Finally, my suggestion is to think about how complex the domain is and use the simplest solution for the problem:
is it big and full of complex behaviour? Is it expected to grow into something big? Use two models, domain and persistence, and manage the persistence directly with SQL; it avoids a lot of chaos in the read/update/save phases.
is it simple? Then, first of all, should you use the DDD approach at all? If the answer is really yes, I would let the JPA annotations slip into the domain. Yes, it's not pure DDD, but we live in the real world, and the time needed to do something simple the pure way should not be orders of magnitude bigger than the time needed to do it with some compromises. And, on the other hand, you can put all this mapping into an XML file in the infrastructure layer, avoiding cluttering the domain with it, as is done in the Spring DDD sample here.
When you want to update an existing object, you first have to load it through entityManager.find() and apply your changes to that object, or use entityManager.merge(), since you are working with detached entities.
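Applied to the converter from the question, that could look roughly like this (a sketch; merge() returns the managed copy and schedules an UPDATE at flush time):

// Update path for a detached EmployeeDO whose technical id is already set.
EmployeeDO detached = EmployeeConverter.serialize(employee); // persistId was copied into id
EmployeeDO managed = entityManager.merge(detached);          // reattached: Hibernate issues UPDATE, not INSERT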
Anyway, modelling rich domain models based on JPA is the perfect use case for Blaze-Persistence Entity Views.
Blaze-Persistence is a query builder on top of JPA which supports many of the advanced DBMS features on top of the JPA model. I created Entity Views on top of it to allow easy mapping between JPA models and custom interface-defined models, something like Spring Data Projections on steroids. The idea is that you define your target structure the way you like and map attributes (getters) via JPQL expressions to the entity model. Since the attribute name is used as the default mapping, you mostly don't need explicit mappings, as 80% of the use cases are DTOs that are a subset of the entity model.
The interesting point here is that entity views can also be updatable and support automatic translation back to the entity/DB model.
A mapping for your model could look as simple as the following:

@EntityView(EmployeeDO.class)
@UpdatableEntityView
interface Employee {
    @IdMapping("persistId")
    Long getId();
    Long getEmployeeId();
    String getName();
    void setName(String name);
}
Querying is a matter of applying the entity view to a query, the simplest being just a query by id.
Employee dto = entityViewManager.find(entityManager, Employee.class, id);
The Spring Data integration allows you to use it almost like Spring Data Projections (https://persistence.blazebit.com/documentation/entity-view/manual/en_US/index.html#spring-data-features), and it can also be saved back. Here is a sample repository:
@Repository
interface EmployeeRepository {
    Employee findOne(Long id);
    void save(Employee e);
}
It will only fetch the mappings that you tell it to fetch and also only update the state that you make updatable through setters.
With the Jackson integration you can deserialize your payload onto a loaded entity view, or you can avoid loading altogether and use the Spring MVC integration to capture just the state that was transferred and flush that. This could look like the following:

@RequestMapping(path = "/employee/{id}", method = RequestMethod.PUT, consumes = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<String> updateEmp(@EntityViewId("id") @RequestBody Employee emp) {
    employeeRepository.save(emp);
    return ResponseEntity.ok(emp.getId().toString());
}
Here you can see an example project: https://github.com/Blazebit/blaze-persistence/tree/master/examples/spring-data-webmvc

jOOQ entity mapping

I have following schema:
Projects (ID, NAME)
Projects_Users (PROJECT_ID, USERS_ID)
Users (NAME, ID)
The entities are as follows:

public class Projects {
    private String name;
    private long id;
    private List<User> users;

    public Projects() {
    }
}

public class User {
    private String name;
    private Long id;
}
so clearly one-to-many: projects can have multiple users.
Now my goal is to write a jOOQ query that fetches project objects together with their corresponding users:

.select(PROJECT.NAME, PROJECT.ID, USERS)
.from(PROJECT)
.join(USERS_PROJECT).on(USERS_PROJECT.PROJECT_ID.eq(PROJECT.ID))
.join(USERS).on(USERS.ID.eq(USERS_PROJECT.USER_ID))
.fetchInto(Project.class);

but the query returns thousands of results when I expect ~15.
You're doing two things:
Run a jOOQ query:
.select(PROJECT.NAME, PROJECT.ID, USERS)
.from(PROJECT)
.join(USERS_PROJECT).on(USERS_PROJECT.PROJECT_ID.eq(PROJECT.ID))
.join(USERS).on(USERS.ID.eq(USERS_PROJECT.USER_ID))
.fetch();
This is straightforward. jOOQ transforms the above expression into a SQL string, sends it to JDBC, receives the result set, and provides you with a jOOQ Result. Obviously, the result is denormalised (because you wrote joins), and thus you get many rows (about as many as there are rows in the USERS_PROJECT table).
Map the Result to your POJO
Now, this is what's confusing you. You called fetchInto(Project.class), which is just syntactic sugar for calling fetch().into(Class). These are two separate steps, and in particular, the into(Class) step has no knowledge of your query nor of your intent. Thus, it doesn't "know" that you're expecting ~15 unique projects and a nested collection of users.
But there are ways to map things more explicitly, e.g. by using intoGroups(Class, Class). This may not return nested types exactly as you designed them, but something like a

Map<Project, List<User>> map = query.fetch().intoGroups(Project.class, User.class);

You can take it from there, manually. Or, you can write a RecordMapper and use that:

List<Project> list = query.fetch().map(record -> new Project(...));
Or, you could use any of the recommended third party mappers, e.g.:
http://modelmapper.org
http://simpleflatmapper.org
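Putting the intoGroups() approach together, a sketch (assuming a DSLContext named ctx, the generated tables from the question, and a hypothetical Project.setUsers() on your POJO):

// Group the denormalised join rows into distinct projects, each with its users.
Map<Project, List<User>> grouped = ctx
        .select(PROJECT.NAME, PROJECT.ID, USERS.NAME, USERS.ID)
        .from(PROJECT)
        .join(USERS_PROJECT).on(USERS_PROJECT.PROJECT_ID.eq(PROJECT.ID))
        .join(USERS).on(USERS.ID.eq(USERS_PROJECT.USER_ID))
        .fetch()
        .intoGroups(Project.class, User.class);

List<Project> projects = new ArrayList<>();
for (Map.Entry<Project, List<User>> e : grouped.entrySet()) {
    Project p = e.getKey();
    p.setUsers(e.getValue()); // attach the nested collection manually
    projects.add(p);
}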

Preserve Hibernate Lazy loading in data transfer object design pattern

I usually work on 3-tier applications using Hibernate in the persistence layer, and I take care not to use the domain model classes in the presentation layer. This is why I use the DTO (Data Transfer Object) design pattern.
But I always face a dilemma in my entity-DTO mapping: either I lose the lazy-loading benefit, or I add complexity to the code by introducing filters that decide whether or not to call the domain model getters.
Example: consider a DTO UserDto that corresponds to the entity User.
public UserDto toDto(User entity, OptionList... optionList) {
    if (entity == null) {
        return null;
    }

    UserDto userDto = new UserDto();
    userDto.setId(entity.getId());
    userDto.setFirstname(entity.getFirstname());

    if (optionList.length == 0 || optionList[0].contains(User.class, UserOptionList.AUTHORIZATION)) {
        IGenericEntityDtoConverter<Authorization, AuthorizationDto> authorizationConverter = converterRegistry.getConverter(Authorization.class);
        List<AuthorizationDto> authorizations = new ArrayList<>(authorizationConverter.toListDto(entity.getAuthorizations(), optionList));
        userDto.setAuthorizations(authorizations);
    }
    ...
}
OptionList is used to filter the mapping and map just what is wanted.
Although this last solution allows lazy loading, it is very heavy, because the optionList must be specified in the service layer.
Is there any better solution to preserve lazy loading with the DTO design pattern?
For the same entity persistent state, I don't like having fields of an object uninitialized in some execution paths while the same fields might be initialized in others. This causes too many maintenance headaches:
it will cause a NullPointerException in the better cases;
if null is also a valid value (and thus does not cause a NullPointerException), it could be taken to mean the data was removed, and might trigger unexpected removal business rules, while the data is in fact still there.
I would rather create a DTO hierarchy of interfaces and/or classes, starting with UserDto. All of the fields of each concrete DTO implementation are filled to mirror the persistent state: if there is data, the field of the DTO is not null.
So then you just need to ask the service layer for the implementation of the DTO you want:

public <T extends UserDto> T toDto(User entity, Class<T> dtoClass) {
    ...
}

Then, in the service layer, you could have a:

Map<Class<? extends UserDto>, UserDtoBuilder> userDtoBuilders = ...

where you register the different builders that create and initialize the various UserDto implementations.
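A minimal sketch of how that registry could be used (UserDtoBuilder and its build() method are assumptions, not part of the original answer):

public <T extends UserDto> T toDto(User entity, Class<T> dtoClass) {
    UserDtoBuilder builder = userDtoBuilders.get(dtoClass);
    if (builder == null) {
        throw new IllegalArgumentException("No builder registered for " + dtoClass.getName());
    }
    // Each builder fully initializes every field its DTO type declares,
    // so a given DTO class always mirrors the same persistent state.
    return dtoClass.cast(builder.build(entity));
}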
I'm not sure why you would want lazy loading, but I guess it is because your UserDto serves multiple representations through the optionList configuration?
I don't know what your presentation layer code looks like, but I guess you have lots of if-else code for each element in optionList?
How about having different representations, i.e. subclasses, instead? I'm asking because I'd like to suggest giving Blaze-Persistence Entity Views a try. Here is a little code example that fits your domain:
@EntityView(User.class)
public interface SimpleUserView {
    // The id of the user entity
    @IdMapping("id") int getId();
    String getFirstname();
}

@EntityView(Authorization.class)
public interface AuthorizationView {
    // The id of the authorization entity
    @IdMapping("id") int getId();
    // Whatever properties you want
}

@EntityView(User.class)
public interface AuthorizationUserView extends SimpleUserView {
    List<AuthorizationView> getAuthorizations();
}
These are the DTOs with some metadata about the mapping to the entity model. And here comes the usage:
@Transactional
public <T> List<T> findByName(String name, EntityViewSetting<T, CriteriaBuilder<T>> setting) {
    // Omitted DAO part for brevity
    EntityManager entityManager = // JPA entity manager
    CriteriaBuilderFactory cbf = // query builder factory from Blaze-Persistence
    EntityViewManager evm = // manager that can apply entity views to query builders

    CriteriaBuilder<User> builder = cbf.create(entityManager, User.class)
        .where("name").eq(name);
    List<T> result = evm.applySetting(builder, setting)
        .getResultList();
    return result;
}
Now, if you use it like service.findByName("someName", EntityViewSetting.create(SimpleUserView.class)), it will generate a query like

SELECT u.id, u.firstname
FROM User u
WHERE u.name = :param_1

and if you use the other view like service.findByName("someName", EntityViewSetting.create(AuthorizationUserView.class)), it will generate

SELECT u.id, u.firstname, a.id
FROM User u LEFT JOIN u.authorizations a
WHERE u.name = :param_1

Apart from getting rid of the manual object mapping, performance will improve because of the optimized queries!

How to retrieve @Reference fields / which is best, @Reference or @Embedded, and subquery in a Morphia query

I have a MongoDB with two model classes, say User and UserInfo. The requirement is that from the User class I have to retrieve multiple fields, around 10, like firstName, lastName, etc., and from the UserInfo model class I'd like to retrieve only one field, say age.
At the moment I reference the UserInfo object from the User class, as in the structure below, and it is stored in the DB as {"firstName":"John"}, {"lastName":"Nash"}, {userInfo: userInfoID}. If I made it an embedded relation instead, it would store all of UserInfo's fields, and since I only want to retrieve one field (age), embedding all of UserInfo's fields seems unnecessary and, I think, would in turn make the application slow.
So which should I use, @Reference or @Embedded? I think Embedded will slow down my DB responses, but the websites say the Reference annotation only slows down query time and needs some sort of lazy loading and all. My structure is like below:
class User extends Model {
    public String firstName;
    public String lastName;
    public String loginTime;
    public String logoutTime;
    public String emailId;
    // ...some 10 more fields like this, plus the userInfo reference object
    @Reference
    public UserInfo userInfo;
}

class UserInfo extends Model {
    public String emailId;
    public String age;
    public String sex;
    public String address;
    public String bank;
    // ...some 10 more fields like this
}
As stated above, I want only the age field from UserInfo and all the fields of User. So which annotation is best, @Reference or @Embedded? It would be most helpful to have a single query on the User class that retrieves all fields of User and only the age field of UserInfo. In short, with a @Reference relationship I need a query like this:

field("userInfo.age") for userInfo.emailId = (MorphiaQuery q = User.createMorphiaQuery;
    q.field("firstName").equal("John");
    q.field("lastName").equal("Nash");
    q.field("loginTime").greaterThan("sometime"))
// the complex part: I need the age of a particular userInfo, but I have only the ID
// of the userInfo (since I am using @Reference), and that ID itself comes from a subquery...

Please don't write two queries; I need a single query, or maybe a query with a subquery. To be clearer, in SQL it would be:

SELECT age FROM UserInfo WHERE emailId =
    (SELECT emailId FROM User
     WHERE firstName = 'John' AND lastName = 'Nash' AND loginTime = 'someTime');

I need this exact query without writing two Morphia queries, which would consume more time by hitting two collections.
Mongo does not support queries across tables/collections, and this page should answer you:
MongoDB and "joins"
As in SQL, a join query also builds an intermediate result set and queries again:
Understanding how JOIN works when 3 or more tables are involved. [SQL]
When you build your model, you should not think so much about single queries, but about structural modeling:
http://docs.mongodb.org/manual/core/data-modeling/
For your case, if you are using embedding, you can do this in one query and specify the fields you need with a query like:

db.User.find({"some_field":"some_query"},{"firstName":1,....,"userInfo.age":1})

Check projections here:
http://docs.mongodb.org/manual/reference/method/db.collection.find/
If you are using a reference, or even a soft link such as a Morphia Key<> to lazy-load the UserInfo, it requires two queries.
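With @Reference, those two queries look roughly like this in Morphia (a sketch against the classic Morphia API; datastore is assumed):

// One query for the user; Morphia fires a second, hidden query to resolve
// the @Reference before handing the object back.
User user = datastore.createQuery(User.class)
        .field("firstName").equal("John")
        .field("lastName").equal("Nash")
        .get();
String age = user.userInfo.age; // already loaded by that second query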
If it's not a real-time application, you can also try Mongo map-reduce to merge the collections when handling big data, though map-reduce performs quite badly in Mongo.
I'm reasonably sure you can't do it with just one query.

Object mapping when using MongoDB

I am new to MongoDB and I have a few questions.
I have the following code:
public class User
{
    private String id;
    private String name;
    private List<Order> orders;
}

public class Order
{
    private String id;
    private String orderName;
    private Date orderDate;
}
What's the best persisting strategy for the User object?
Should I create collections for both User and Order, or just for User?
Should I save the Order first and then the User?
I am using Spring Data's MongoRepository.
Thank you.
I would consider how you're accessing the data when modeling. Some questions to ask yourself:
Do I need to get a user with his orders in one call?
How many orders does a user have on average? If it's a lot, it may not be best to denormalize users and orders.
How will my front end access this information? Will most calls for a user even need the orders? Will that information be too heavy/slow on the wire?
In general I would err on the side of denormalizing instead of following the relational instinct to normalize. It's OK to have redundant data, and it's OK to have some inconsistent data.
Mongo doesn't do joins in real time; at best you can do map/reduce.
The joining of data therefore needs to happen either in the database (denormalized) or in the UI.
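If you go the denormalized route, a minimal sketch with Spring Data MongoDB (the annotations and repository interface are standard Spring Data; the collection name is an assumption):

import java.util.List;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.repository.MongoRepository;

// Orders live inside the user document, so one find() returns both.
@Document(collection = "users")
public class User {
    @Id
    private String id;
    private String name;
    private List<Order> orders; // embedded sub-documents, no separate Order collection
}

public interface UserRepository extends MongoRepository<User, String> {
}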
