I'm facing an API design issue. Consider the following flow:
As you can see, I have 2 classes to represent my model (SomethingDTO and SomethingResponse) and 2 classes to represent the 3rd party model (3rdPartyRequest and 3rdPartyResponse). I'm using a mapper to translate the 3rdParty model classes into my model classes.
The problem is: all these 4 classes have exactly the same attributes.
Should I repeat these attributes through all these classes? Should I have only one DTO class for the whole flow?
What is the best practice (or pattern) to solve this?
Thanks
As I previously answered, using DTOs helps to decouple the persistence models from the API models. So you seem to be doing the right thing.
The problem is: all these 4 classes have exactly the same attributes.
It's always a good idea to decouple the models of your API from the models of a third party API. If the third party API changes their contract, you won't break your clients. So use different models for each API.
And stick to mapping frameworks, such as MapStruct, to reduce the boilerplate code. You may also want to consider Lombok to generate getters, setters, equals(), hashCode() and toString() methods for you.
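For illustration, a minimal MapStruct sketch using the class names from the question (3rdParty renamed to ThirdParty, since Java identifiers can't start with a digit):

    import org.mapstruct.Mapper;
    import org.mapstruct.factory.Mappers;

    // MapStruct generates the field-by-field copy at compile time;
    // attributes with matching names are mapped automatically.
    @Mapper
    public interface SomethingMapper {

        SomethingMapper INSTANCE = Mappers.getMapper(SomethingMapper.class);

        SomethingResponse toResponse(ThirdPartyResponse response);

        ThirdPartyRequest toRequest(SomethingDTO dto);
    }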
Should I repeat these attributes through all these classes? Should I have only one DTO class for the whole flow?
If both request and response models contain the same set of fields, then you could start with a single class for representing both request and response payloads of each API. When the fields start to differ (request payload different from the response payload), then you create new models for representing each payload.
In the long run, using distinct models for the request and response will give you flexibility, ensuring you'll expose and receive only the attributes you want to.
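A minimal sketch of that progression, with hypothetical payment payloads:

    import java.math.BigDecimal;

    // Start with one model shared by request and response...
    class Payment {
        private String cardNumber;
        private BigDecimal amount;
        // getters and setters omitted
    }

    // ...and split only once the payloads diverge, e.g. when the response
    // gains server-generated fields the client must never send:
    class PaymentRequest {
        private String cardNumber;
        private BigDecimal amount;
    }

    class PaymentResponse {
        private String cardNumber;
        private BigDecimal amount;
        private String transactionId; // server-generated
    }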
This is a tough one: pragmatism vs. correctness.
The correct approach (in my view) is to have a different class for each request/response, which is what you've done. This is because I try to design applications using a Ports and Adapters architecture and also use Domain-Driven Design. The benefit is that this gives more flexibility and clarity in the scenario where the objects start diverging.
If the classes have the exact same attributes, I would take a pragmatic approach of having one class per layer. So one for your Web request/response and one for the 3rd party. But in no case would I mix two integration layers (the front end and the 3rd party).
Having a single class for the whole thing smells really bad because, as I mentioned above, that mixes layers (or Ports).
Related
With organizations that are slow to adopt modern technology finally junking EJBs and getting ready to move to Spring Boot, microservices, REST and Angular, there are some questions about application design. One is about Transfer Objects and Business Objects.
When a call comes to the REST Controller, is it still popular to populate a TO (POJO) and then make a Service call, which in turn populates a BusinessObject and then calls a Repository service?
OR
At the REST Controller layer, do we directly populate the BO and send it to the Service? (This does not make any sense to me, because a BO is populated only during the execution of business logic.)
If nowadays it's still Option 1, then how do we avoid writing two almost identical POJO classes in most cases (in order to use BeanUtils.copyProperties()), with the BO decorated with @Id, @Column etc.?
To elaborate on @Turing85's comments...
Option 1 usually (see the end of my answer) makes infinitely more sense. It's a question of responsibility (purpose) and change; the two logical components you refer to, a REST API and a repository / system service:
Responsibility: a REST service cares about working with its callers, so ideally when designing a REST API you should be involving someone from the client-side (client as in caller), because if the API doesn't work for them it's not going to be an effective API. On the other hand, repositories are somewhat self-centered, and may need to consider things that are of no interest to API callers (and vice versa).
Change: if you pay attention to design principles, like SOLID, you'll know that each part of a system should do one job - as a way of limiting the reasons why it needs to change (see: SRP). Trying to use one object across both outward-facing APIs and inward-facing repositories is asking for trouble because it's trying to do too much - it's trying to help solve problems in two very different parts of the wider solution, and these both have very different change drivers working against them. Turing85's comment about the persistence layer stems from the same idea.
"Option 1 usually makes infinitely more sense":
One case where the REST API's objects will / can bear a very close resemblance to those that hit the actual repository (or even be reused, I guess) is when the REST API is a System API - i.e. a dedicated façade / proxy to the repository. In this case, the System API is largely driven by the repository i.e. the main change driver is just the repository.
After researching a bit, I agree that keeping things simple will result in simpler code. I found a nice, simple way to take care of this manual work:
https://www.baeldung.com/entity-to-and-from-dto-for-a-java-spring-application
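A minimal sketch, assuming the ModelMapper approach that article describes (Post and PostDto are stand-in names for your entity and DTO):

    import org.modelmapper.ModelMapper;

    public class PostConverter {

        private final ModelMapper modelMapper = new ModelMapper();

        public PostDto toDto(Post post) {
            return modelMapper.map(post, PostDto.class);
        }

        public Post toEntity(PostDto dto) {
            return modelMapper.map(dto, Post.class);
        }
    }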
What are the differences between using a Spring Data REST repository alone and implementing the "service" pattern around it (that is, ItemService, ItemServiceImpl and so on)?
At first glance the functionality is more or less the same, with the difference that the service approach allows for better customization, but it also produces loads of boilerplate code (the implementation and the controller). Here is an example (look at the Payment and CreditCard entities) of using both approaches: RESTBucks by Oliver Drotbohm.
The payment abstraction there uses the "service" pattern (PaymentService, PaymentImpl and then PaymentController with all methods in the web folder), while the orders are exposed via Spring Data REST directly.
tl;dr
The payment functionality lives at a higher level of abstraction as it doesn't follow established HTTP resource patterns (collection resource, item resource, in general: the ones described here) and thus warrants a custom service implementation. In contrast, the lifecycle of the order aggregate does indeed follow those patterns and thus doesn't need anything but Spring Data REST exposure plus a few customizations. Find a conceptual overview about how the two implementation parts relate to each other here.
Details
That's a great question. The sample application is designed to showcase how different parts of an API can be driven by different requirements and how you can use Spring Data REST to take care of the parts that follow established patterns but at the same time augment it with higher level aspects that are needed to express business processes.
The application is split into two major parts: the order handling that's centered around the Order aggregate that is taken through different stages. A conceptual overview about those can be found here. So parts of our API for the orders will be following standard patterns: filterable collection resources to see all orders, add new orders etc. This is where Spring Data REST shines.
The payment part is different. It somehow needs to blend into both the URI and functional space of the order handling. We achieve that by the following steps:
We implement the required functionality in a dedicated service. The repository interaction doesn't match the necessary level of abstraction as we have to verify business constraints on both the Order and Payment aggregates. That logic needs to live somewhere: in the service.
We expose that functionality via a Spring MVC controller as we (currently) don't need standard patterns like listing all payments. Remember, the example is centered around modeling the ordering process; it's not an accounting backend. The payment resources are blended into the URI space of the orders: /orders/{id}/payment. (A sketch of the service and controller follows after these steps.)
We use hypermedia elements to indicate when the functionality can be triggered, by conditionally adding a link pointing to those resources, so that clients can use the presence or absence of those elements to decide which UI affordances to offer for triggering that functionality.
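Not the actual RESTBucks code, but a minimal sketch of how such a dedicated service and controller could look (all names are hypothetical):

    import java.net.URI;

    import org.springframework.http.ResponseEntity;
    import org.springframework.stereotype.Service;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.PutMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RestController;

    class Payment {}         // hypothetical domain types
    class PaymentRequest {}

    @Service
    class PaymentService {

        Payment pay(Long orderId, PaymentRequest request) {
            // Constraints spanning both the Order and Payment aggregates
            // (e.g. rejecting payment for an already-paid order) live here.
            throw new UnsupportedOperationException("sketch only");
        }
    }

    @RestController
    class PaymentController {

        private final PaymentService payments;

        PaymentController(PaymentService payments) {
            this.payments = payments;
        }

        // Blends payment into the URI space of the orders.
        @PutMapping("/orders/{id}/payment")
        ResponseEntity<Payment> submitPayment(@PathVariable("id") Long orderId,
                                              @RequestBody PaymentRequest request) {
            Payment payment = payments.pay(orderId, request);
            return ResponseEntity
                    .created(URI.create("/orders/" + orderId + "/payment"))
                    .body(payment);
        }
    }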
Here's what I think is nice about this approach:
You only manually code the parts that are important from the business point of view. No need to implement a lot of boilerplate code for the parts of the API that follow well established patterns.
Clients don't need to care where exactly that seam is. Using hypermedia elements, the API just looks like one thing to the client. The server could even move the payment resources to a different URI space, or to a different service entirely.
Resources
This deck discusses what I described in detail. Here's a video recording of it. If you're interested in the higher-level ideas, especially the drive towards hypermedia, I suggest this slide deck, too.
Your service contains all the logic, while the repository layer is as dumb as possible. Its task is a specific operation (e.g. save, edit).
Spring Data is an additional convenient mechanism for interacting with database entities: organizing them in a repository, extracting data, changing it. In some cases, it will be enough to declare the interface and a method in it, without implementing it.
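A minimal sketch of such a declared-but-not-implemented repository method (Item is a hypothetical entity):

    import java.util.List;

    import org.springframework.data.jpa.repository.JpaRepository;

    public interface ItemRepository extends JpaRepository<Item, Long> {

        // No implementation required: Spring Data derives the query
        // ("... where name like %?%") from the method name.
        List<Item> findByNameContaining(String name);
    }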
P.S.
It's a good choice if you're creating a simple CRUD application.
I built a web application using Spring Boot, Spring MVC and Hibernate. I used the DAOs in the UI directly by just wrapping them in other objects. It makes my DAL and presentation layer quite tightly coupled.
As per my understanding, MVC architecture reduces coupling by separating out each component, and I worked against that. :(
Is it okay to do what I did? It saves converting presentation-layer objects to DAOs to persist them in the DB.
What is the recommended and best way to design this? What will be the pay-off with the current (quite tightly coupled) design?
I'm not able to figure it out; could anyone please help me understand?
Thanks in advance!!
I used to do it like that:
I create several layers: the UI layer, the BLL layer and the DAL layer. Then I create models for each of them, for example MyUser_UI.java, MyUser_Bll.java and MyUser_Dal.java. These models are so-called POJOs; they are used to carry data between layers. As you can see, the MyUser_xxx.java classes have similar properties, so I use an automatic object mapper named DozerBeanMapper to help me transmit data from one to another. That's what I have done.
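A minimal sketch of the Dozer-based copying, assuming the property names match across the MyUser_xxx classes:

    import org.dozer.DozerBeanMapper;
    import org.dozer.Mapper;

    public class UserMapping {

        private final Mapper mapper = new DozerBeanMapper();

        public MyUser_Bll toBll(MyUser_UI uiUser) {
            // Dozer copies same-named properties between the two POJOs.
            return mapper.map(uiUser, MyUser_Bll.class);
        }
    }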
I admit that is a practical method; however, it is obviously far from the best. There are too many classes to maintain. Think about it: one day I want to add a new property to MyUser_xxx.java, and I must change three places. I often miss something and get errors. So I changed to another way.
I extract the POJOs to a separate package that all three layers can access. In doing so, I feel better, but it also brings some other problems: the POJO requirements of each layer are often a bit different. So I have to create a base class, MyUser.java, and a MyUserEx.java derived from the base.
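A sketch of that base/derived arrangement:

    class MyUser {                      // shared properties for all layers
        protected long id;
        protected String name;
    }

    class MyUserEx extends MyUser {     // layer-specific extras
        private String displayName;    // hypothetical: only the UI needs it
    }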
It is a bit disappointing, but I don't think there is a single best design. We can, however, combine many methods to make our code better. Which do you prefer? It's up to you.
Martin Fowler has a fairly seminal article on layering from his P of EAA book:
http://martinfowler.com/eaaCatalog/serviceLayer.html
I am building a service that depends on another service, a typical service-oriented architecture. The service I depend on exposes an API and data types. I am confused about whether I should be converting the object types exposed by that service into specific objects that my service understands. I do expect their service to change with time, as these are two different services. I have two options:
Directly use those data types in my service and pass them around in methods.
Transform them into specific data types that only my service understands (the objects will look exactly the same if I do this, with zero changes).
I tried to answer these questions but still could not make the final call. I need help making this decision.
Why should I have encapsulated/transformed types?
To avoid rebuilding every time they make changes in their service.
To prevent widespread changes (adapter pattern): changes to the wire format will lead me to change only the encapsulating classes (see the sketch after this list).
Why should I not encapsulate the types?
The classes will look exactly the same as the wire-format classes (wasted effort maintaining extra classes).
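For concreteness, a hypothetical sketch of the adapter approach from the list above (all types and field names are made up):

    // Hypothetical types and field names, for illustration only.
    class ThirdPartyCustomer {
        String fullName;
        String email;
    }

    class Customer {
        String name;
        String email;
    }

    // The adapter: wire-format changes touch only this class.
    final class CustomerAdapter {

        static Customer fromThirdParty(ThirdPartyCustomer external) {
            Customer customer = new Customer();
            customer.name = external.fullName;
            customer.email = external.email;
            return customer;
        }
    }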
As I understand it, the impact will be the same with either approach. Help?
I am no architect or SOA specialist, so excuse me if I am saying anything stupid :-)
But I really think the way here is to keep your services simple.
In your shoes, I'd just directly use the existing API. I would not spend any time wrapping or adapting the methods into another API. The business logic of your second service (the one that uses the existing first service) should take care of this conversion, IMO, unless you're being forced to do something that is really expensive with the existing API.
Remember that services are mutable. They're software. They have bugs, business logic changes as time goes and you'll have to change the API and sometimes you'll have to keep older methods compatible for other service consumers. You probably don't want to maintain two APIs that provide the same information without any good practical reason. Not for twice the maintenance work.
Creating another API just to adapt the data format sounds to me a little like that old "DTOs are evil" flame war. And I think very few people write about the advantages of using DTOs nowadays :-)
This is a sort of opinion-based question, so my opinion is: you should make your own data types to let your code make clear what should be contained in which variable.
I think of a service as a data provider: it accepts certain requests, fulfills our needs and in return may give us some data. The role of a service is just to provide services to clients.
It should be the responsibility of the client to accept the data returned by the service and store it in a certain data structure, as there can be n different clients for a single service, with n different requirements, which may lead them to design client-specific data structures to hold the data.
Also, as you said, the service is subject to change over time; if you make your own data structure, you will need to make changes in only one place, and the rest of your code will be safe.
Starting a new GWT application and wondering if I can get some advice from someone's experience.
I have a need for a lot of server-side functionality through RPC services...but I am wondering where to draw the line.
I can make a service for every little call or I can make fewer services which handle more operations.
Let's say I have Customer, Vendor and Administration services. I could make 3 services or a service for each function in each category.
I noticed that much of the service implementation does not provide compile-time help and is at times troublesome to get going, but it provides good modularity. When I have a larger service, I don't have the modularity I described, but I avoid the service-creation issues and reduce the number of entries in my web.xml file.
Is there a resource issue with using a lot of services? What is the best practice to determine what level of granularity to use?
In my opinion, you should make an RPC service for "logical" things.
In your example:
one for customers, another for vendors and a third one for admin.
That way, you keep several services grouped by meaning, and you will have only a few lines to maintain in the web.xml file (and that is good news :-)
More seriously, RPC services are usually wrappers around database calls, so you could even make a single 'MagicBlackBoxRpc' with a single web.xml entry and thousands of operations!
But making a separate RPC service for admin operations, as you suggest, seems like a good thing.
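For illustration, a hypothetical sketch of one such "logical" service, assuming standard GWT-RPC (CustomerService and Customer are made-up names):

    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    // Customer would be a serializable DTO shared with the client.
    @RemoteServiceRelativePath("customer")
    public interface CustomerService extends RemoteService {

        Customer findCustomer(long id);

        void saveCustomer(Customer customer);
    }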
Read general advice on "how big should a class be?", which is available in any decent software engineering book.
In my opinion:
One class = one subject (i.e. a group of related functions or behaviours)
A class should not deal with more than one subject. For example:
Class PersonDao -> Subject: interface between the DB and Java code.
It WILL NOT:
- cache Person instances
- update fields automatically (for example, update the field 'lastModified')
- find the database
Why?
Because for all these other things, there will be other classes doing it! Respectively:
- a cache around the PersonDao is concerned with the efficient storage of information to avoid hitting the DB more often than necessary
- the Service class which uses the DAO is responsible for modifying anything that needs to be modified automagically.
- Finding the database is the responsibility of the DataSource (usually part of a framework like Spring), and your DAO should NOT be worried about that. It's not part of its subject.
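A minimal sketch of that division of subjects (all names hypothetical):

    import java.time.Instant;

    class Person {
        private Instant lastModified;
        void setLastModified(Instant t) { this.lastModified = t; }
    }

    interface PersonDao {
        Person findById(long id);   // DB <-> Java translation, nothing more
        void save(Person person);
    }

    class PersonService {

        private final PersonDao dao;

        PersonService(PersonDao dao) {
            this.dao = dao;
        }

        void update(Person person) {
            // The "automagic" modification lives in the service, not the DAO.
            person.setLastModified(Instant.now());
            dao.save(person);
        }
    }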
TDD is the answer
The need for this kind of separation becomes really clear when you do TDD (Test-Driven Development). Try to do TDD on bad code where a single class does all sorts of things! You can't even get started with one unit test! So this is my final hint: use TDD and that will tell you how big a class should be.
I think the thing to optimize for is that you can accomplish a result in one round trip to the server. I have an ad-hoc collection of methods on my service object, one for each situation the client finds itself in when it has to get something done. You do not want the client to RPC to the server several times in a row while the user is sitting there waiting.
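For example, a hypothetical coarse-grained GWT-RPC interface along those lines (all names are made up):

    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    // DashboardView would be a composite DTO bundling everything the
    // screen needs, so the client makes a single round trip.
    @RemoteServiceRelativePath("dashboard")
    public interface DashboardService extends RemoteService {

        DashboardView loadDashboard(long customerId);
    }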
REST makes things orthogonal, but orthogonality has a cost: there is a reason that the frequently used verbs in languages are irregular. In terms of maintaining a clean orthogonal structure in your app, make sure your schema is well designed; that is where each class should have semantics orthogonal to those of the other classes. When the semantics of each RPC call can be stated cleanly in the schema, there will be no confusion as to what they mean, even if they aren't RESTfully ideal.