In modern application design, how do you map / convert between TransferObject and BusinessObject - Java

With organizations that were slow to adopt modern technology finally junking EJBs and getting ready to move to Spring Boot, microservices, REST, and Angular, some questions about application design come up. One is about TransferObjects and BusinessObjects:
When the call comes in to the REST Controller, is it still common practice to populate a TO (POJO) and then make a Service call, which in turn populates a BusinessObject and then calls a Repository service?
OR
At the REST Controller layer, do we directly populate the BO and send it to the Service? (This does not make any sense to me, because a BO is populated only during the execution of business logic.)
If nowadays it's still Option 1, then how do we avoid writing two nearly identical POJO classes in most cases (in order to use BeanUtils.copyProperties()), with the BO decorated with @Id, @Column, etc.?
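For concreteness, the duplication in question looks something like this (a minimal sketch; the User classes and the mapper are hypothetical, while @Id/@Column and Spring's BeanUtils.copyProperties() are real APIs):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.springframework.beans.BeanUtils;

@Entity
class UserBo {                     // business object, tied to persistence
    @Id
    private Long id;

    @Column(name = "user_name")
    private String userName;
    // getters and setters omitted
}

class UserTo {                     // transfer object with the exact same shape
    private Long id;
    private String userName;
    // getters and setters omitted
}

class UserMapper {
    UserTo toTransferObject(UserBo bo) {
        UserTo to = new UserTo();
        BeanUtils.copyProperties(bo, to); // copies matching getter/setter pairs
        return to;
    }
}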

To elaborate on @Turing85's comments...
Option 1 usually (see the end of my answer) makes infinitely more sense. It's a question of responsibility (purpose) and change. Consider the two logical components you refer to, a REST API and a repository / system service:
Responsibility: a REST service cares about working with its callers, so ideally, when designing a REST API, you should involve someone from the client side (client as in caller), because if the API doesn't work for them, it's not going to be an effective API. Repositories, on the other hand, are somewhat self-centered and may need to consider things that are of no interest to API callers (and vice versa).
Change: if you pay attention to design principles like SOLID, you'll know that each part of a system should do one job - as a way of limiting the reasons why it needs to change (see: SRP). Trying to use one object across both an outward-facing API and an inward-facing repository is asking for trouble, because that object is trying to do too much: it's trying to help solve problems in two very different parts of the wider solution, and these both have very different change drivers working against them. @Turing85's comment about the persistence layer stems from the same idea.
"Option 1 usually makes infinitely more sense":
One case where the REST API's objects will / can bear a very close resemblance to those that hit the actual repository (or even be reused, I guess) is when the REST API is a System API - i.e. a dedicated façade / proxy to the repository. In this case, the System API is largely driven by the repository; i.e., the main change driver is just the repository.

After researching a bit, I agree: keeping things simple will result in simpler code. I found a nice, simple way to take care of this manual work:
https://www.baeldung.com/entity-to-and-from-dto-for-a-java-spring-application
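For reference, the approach in that article boils down to something like the following (a minimal sketch; the Post / PostDto classes are hypothetical stand-ins, while ModelMapper and its map() method are the real API the article uses):

import org.modelmapper.ModelMapper;

public class PostConverter {

    private final ModelMapper modelMapper = new ModelMapper();

    // Entity -> DTO: matching field names are mapped automatically
    public PostDto toDto(Post post) {
        return modelMapper.map(post, PostDto.class);
    }

    // DTO -> Entity
    public Post toEntity(PostDto dto) {
        return modelMapper.map(dto, Post.class);
    }
}

One ModelMapper instance can be shared (e.g. registered as a Spring bean), so the conversion code stays in one place instead of being scattered across controllers.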


How to avoid duplicate POJOs in different web services, Java

I have a webapp with multiple Spring services (they each have their own EAR and web controllers) and they talk to each other via REST calls. Some of the different services use the same data POJOs. The existing code just duplicates those data objects across the different services.
For example, my /users service has a myApp.users.UserData object, and my /emails service calls /users/{userId} and holds the result as a myApp.emails.UserData object. Both of these objects are identical.
The problem here is that I have to keep myApp.emails.UserData and myApp.users.UserData in sync, since in reality they represent the same info and are meant to be the "same" class. Say I update the name of a field in emails.UserData; I'd better remember to update it in users.UserData, otherwise things will break.
I know I could make a shared dependency called something like SharedDataObjects, define myApp.sharedDataObjects.UserData there, and just have both versions refer to that. For some reason, though, my gut feeling is that this is not a good solution... (maybe it is though?)
Are there any better ways of approaching this issue? Is there something fundamentally wrong with the way the webapp is structured, and if so, how could that be addressed?
It's always tempting to spot duplicated (or nearly duplicated) code and try to eliminate the duplication. An obvious path in that direction is to make a shared library (or module) and have different services (or applications) depend on it.
Doing so, however, increases coupling between services/applications/modules, which has its own set of drawbacks in many contexts. Especially in a microservices architecture*, that kind of coupling often leads to headaches. It can even lead to loss of the value of using microservices* in the first place.
If two services* are supposed to be independent but integrated, they have an integration protocol of some kind. For REST services, for example, that's almost always HTTP and JSON**. By introducing a shared library, you have coupled them in a binary way, separate from (and more binding than) HTTP and JSON. Not a good situation; experience has taught many of us that painful lesson.
Instead, focus on the public interfaces that each service exposes, and use appropriate versioning to evolve those interfaces when needed. Don't worry about some duplicate-looking classes, especially if they're anemic POJOs; that's a minor concern compared to a tightly-coupled set of services that are supposed to be independent.
Not that shared code libraries are inherently bad - they're not, and they have real value in many ways. Rather, my point is that you need to make sure each service maintains its own definition of the things that matter to it, so that each can remain as independent as possible - and evolve independently as much as possible.
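As a minimal sketch of that idea (the class and field names are hypothetical; Jackson's ObjectMapper is a real API): the emails service keeps its own UserData, mirroring only the fields it needs from the JSON contract, rather than importing a class owned by the users service.

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

// Owned entirely by the emails service
public class UserData {
    private String userId;
    private String name;
    // getters and setters omitted
}

// The coupling is to the JSON wire format, not to another service's binary
class UsersClient {
    private final ObjectMapper mapper = new ObjectMapper()
            .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

    UserData parseUser(String jsonFromUsersService) throws JsonProcessingException {
        return mapper.readValue(jsonFromUsersService, UserData.class);
    }
}

If the users service later adds fields the emails service doesn't care about, nothing here needs to change.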
By the way, this is somewhat related to the concept of Bounded Contexts; you might want to read more about it if you're working with microservices*.
*or whatever you call your architecture of independently-managed services/modules/apps
** Could also be some kind of messaging/queueing platform, as another example

What are the differences between using the service pattern and using a standalone Spring Data REST repository?

What are the differences between using a Spring Data REST repository alone and implementing the "service" pattern around it (that is, ItemService, ItemServiceImpl, and so on)?
At first glance the functionality is more or less the same, with the difference that the service approach allows for better customization, but it also produces loads of boilerplate code (the implementation and the controller). Here is an example of using both approaches (look at the Payment and CreditCard entities) - Oliver Drotbohm's RESTBucks.
The payment abstraction there uses the "service" pattern (PaymentService, PaymentImpl and then PaymentController, with all methods in the web folder) while the orders are exposed via Spring Data REST directly.
tl;dr
The payment functionality lives at a higher level of abstraction as it doesn't follow established HTTP resource patterns (collection resource, item resource, in general: the ones described here) and thus warrants a custom service implementation. In contrast, the lifecycle of the order aggregate does indeed follow those patterns and thus doesn't need anything but Spring Data REST exposure plus a few customizations. Find a conceptual overview about how the two implementation parts relate to each other here.
Details
That's a great question. The sample application is designed to showcase how different parts of an API can be driven by different requirements, and how you can use Spring Data REST to take care of the parts that follow established patterns while at the same time augmenting it with the higher-level aspects needed to express business processes.
The application is split into two major parts. The first is the order handling, centered around the Order aggregate, which is taken through different stages; a conceptual overview of those stages can be found here. So the parts of our API dealing with orders will follow standard patterns: filterable collection resources to see all orders, adding new orders, etc. This is where Spring Data REST shines.
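As a minimal sketch of that exposure (the repository below is illustrative, not the actual RESTBucks code; @RepositoryRestResource is Spring Data REST's real annotation):

import org.springframework.data.repository.CrudRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

// Exposes /orders as a collection resource with item resources underneath,
// without any hand-written controller code
@RepositoryRestResource(path = "orders")
public interface OrderRepository extends CrudRepository<Order, Long> {
}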
The payment part is different. It somehow needs to blend into both the URI and functional space of the order handling. We achieve that by the following steps:
We implement the required functionality in a dedicated service. The repository interaction doesn't match the necessary level of abstraction as we have to verify business constraints on both the Order and Payment aggregates. That logic needs to live somewhere: in the service.
We expose that functionality via a Spring MVC controller (sketched after these steps), as we (currently) don't need standard patterns like listing all payments. Remember, the example is centered around modeling the ordering process; it's not an accounting backend. The payment resources are blended into the URI space of the orders: /orders/{id}/payment.
We use hypermedia elements to indicate when the functionality can be triggered by adding a link pointing to those resources conditionally so that clients can use the presence or absence of those elements to decide what UI affordances to offer to trigger that functionality.
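A minimal sketch of what that controller might look like (illustrative only, not the actual RESTBucks implementation; PaymentService and PaymentForm stand in for the classes from the first step):

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
class PaymentController {

    private final PaymentService payments;

    PaymentController(PaymentService payments) {
        this.payments = payments;
    }

    // Blends the payment resource into the orders' URI space
    @PutMapping("/orders/{id}/payment")
    ResponseEntity<?> submitPayment(@PathVariable Long id, @RequestBody PaymentForm form) {
        // The service enforces the business constraints across both aggregates
        return ResponseEntity.ok(payments.pay(id, form));
    }
}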
Here's what I think is nice about this approach:
You only manually code the parts that are important from the business point of view. No need to implement a lot of boilerplate code for the parts of the API that follow well established patterns.
Clients don't need to care where exactly that seam is. Using hypermedia elements, the API just looks like one thing to the client. The server could even move the payment resources to a different URI space, or even to a different service.
Resources
This deck discusses what I described in detail. Here's a video recording of it. If you're interested in the higher-level ideas, especially the drive towards hypermedia, I suggest this slide deck, too.
Your service contains all the logic, while the repository layer is as dumb as possible. Its task is a specific operation (for example, save or edit).
Spring Data is an additional, convenient mechanism for interacting with database entities: organizing them in repositories, extracting data, changing it. In some cases it is enough to declare the interface and a method in it, without implementing it.
P.S. And it's a good choice if you're creating a simple CRUD application.
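To illustrate the point about declaring only an interface (a minimal sketch; Item is a hypothetical entity), Spring Data derives the query from the method name, so no implementation is required:

import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

public interface ItemRepository extends JpaRepository<Item, Long> {

    // Spring Data generates the query from the method name at runtime
    List<Item> findByNameContainingIgnoreCase(String name);
}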

Converting an object from one format to another in Java (design pattern)

I am building a service that depends on another service - a typical service-oriented architecture. The service I depend on exposes some API and data types. I am unsure whether I should be converting the object types exposed by that service into specific objects which my service understands. I do expect their service to change with time, as these are two different services. I have two options:
Directly use those data types in my service and pass them around in methods.
Transform those into specific data types which only my service understands (the objects will look exactly the same if I do this, with zero changes).
I tried to answer these questions but still could not make the final call. I need help in making this decision.
Why should I have encapsulated/transformed types?
To avoid rebuilding every time they make changes in their service.
To prevent widespread changes (adapter pattern): changes to the wire format will lead me to change only the encapsulating classes.
Why should I not have the types encapsulated?
The classes will look exactly the same as the wire-format classes (useless effort to maintain extra classes).
As I understand it, the impact will be the same with either approach. Help?
I am no architect or SOA specialist, so excuse me if I am saying anything stupid :-)
But I really think the way here is to keep your services simple.
In your shoes, I'd just use the existing API directly. I would not spend any time wrapping or adapting the methods into another API. Your second service's business logic (the service that uses the existing first one) should take care of this conversion, IMO, unless you're being forced to do something really expensive with the existing API.
Remember that services are mutable. They're software. They have bugs, business logic changes as time goes by, and you'll have to change the API; sometimes you'll have to keep older methods compatible for other service consumers. You probably don't want to maintain two APIs that provide the same information without a good practical reason. Not for twice the maintenance work.
Creating another API just to adapt the data format sounds to me a little like that old "DTOs are evil" flame war. And I think very few people write about the advantages of using DTOs nowadays :-)
This is a somewhat opinion-based question, so my opinion is: you should make your own data types to let your piece of code understand what should be contained in which variable.
I think of a service as a data provider: it accepts certain requests, fulfills our needs, and in return may give us some data. The role of the service is just to provide services to clients.
It should be the responsibility of the client to accept the data returned by the service and store it in a certain data structure, as there can be n different clients for a single service, and they can have n different requirements, which may lead them to design client-specific data structures to hold the data.
Also, as you said, the client service is subject to change over time; if you make your own data structure, you will only need to make changes in one single place, and the rest of your code will be safe.
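A minimal sketch of that single conversion point (all names here are hypothetical): one mapper translates the other service's wire type into the client's own type, so a wire-format change touches only the mapper.

// The client's own type; the rest of the client code depends only on this
public class ClientUser {
    private final String displayName;

    public ClientUser(String displayName) {
        this.displayName = displayName;
    }

    public String getDisplayName() {
        return displayName;
    }
}

// The one place that knows about the other service's wire type
class VendorUserMapper {
    ClientUser toClientUser(VendorUser wireUser) {
        // If the vendor renames getFullName(), only this line changes
        return new ClientUser(wireUser.getFullName());
    }
}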

How many GWT services

Starting a new GWT application and wondering if I can get some advice from someone's experience.
I have a need for a lot of server-side functionality through RPC services...but I am wondering where to draw the line.
I can make a service for every little call or I can make fewer services which handle more operations.
Let's say I have Customer, Vendor and Administration services. I could make 3 services or a service for each function in each category.
I noticed that much of the service implementation does not provide compile-time help and is at times troublesome to get going, but it provides good modularity. When I have a larger service, I don't have the modularity I described, but I also don't have the service-creation issues, and I reduce the entries in my web.xml file.
Is there a resource issue with using a lot of services? What is the best practice to determine what level of granularity to use?
In my opinion, you should make an RPC service for "logical" things.
In your example:
one for customers, another for vendors, and a third one for admin.
That way, you keep operations grouped by meaning, and you will have only a few lines to maintain in the web.xml file (and that is good news :-)
More seriously, RPC services are usually wrappers that call database stuff, so you could even make a single 'MagicBlackBoxRpc' with a single web.xml entry and thousands of operations!
But making a separate RPC service for admin operations, as you suggest, seems like a good thing.
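A minimal sketch of that per-area grouping with GWT-RPC (RemoteService and @RemoteServiceRelativePath are GWT's real API; the Customer type and the methods are made up):

import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

// One service per logical area - one servlet mapping in web.xml,
// rather than one per operation
@RemoteServiceRelativePath("customerService")
public interface CustomerService extends RemoteService {
    Customer getCustomer(long id);
    void saveCustomer(Customer customer);
    void deleteCustomer(long id);
}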
Read general advice on "how big should a class be?", which is available in any decent software engineering book.
In my opinion:
One class = one subject (i.e. a group of functions or behaviours that are related)
A class should not deal with more than one subject. For example:
Class PersonDao -> Subject: interface between the DB and Java code.
It WILL NOT:
- cache Person instances
- update fields automatically (for example, update the field 'lastModified')
- find the database
Why?
Because for all these other things, there will be other classes doing it! Respectively:
- a cache around the PersonDao is concerned with the efficient storage of information, to avoid hitting the DB more often than necessary
- the Service class which uses the DAO is responsible for modifying anything that needs to be modified automagically
- finding the database is the responsibility of the DataSource (usually part of a framework like Spring), and your DAO should NOT be worried about that; it's not part of its subject
TDD is the answer
The need for this kind of separation becomes really clear when you do TDD (Test-Driven Development). Try to do TDD on bad code where a single class does all sorts of things - you can't even get started with one unit test! So this is my final hint: use TDD, and it will tell you how big a class should be.
I think the thing to optimize for is being able to accomplish a result in one round trip to the server. I have an ad-hoc collection of methods on my service object, one for each situation the client finds itself in when it has to get something done. You do not want the client to make several RPC calls to the server in a row while the user sits there waiting.
REST makes things orthogonal, but orthogonality has a cost: there is a reason the frequently used verbs in natural languages are irregular. In terms of maintaining a clean, orthogonal structure in your app, make sure your schema is well designed; that is where each class should have semantics orthogonal to those of the other classes. When the semantics of each RPC call can be stated cleanly in the schema, there will be no confusion as to what they mean, even if they aren't REST-fully ideal.

Need help improving a tightly coupled design

I have an in-house enterprise application (EJB2) that works with a certain BPM vendor. The current implementation of the in-house application involves pulling in an object that is only exposed by the vendor's API and making changes to it through the exposed methods in the API.
I'm thinking that I need to somehow map an internal object to this external one, but that seems too simple and I'm not quite sure of the best strategy to go about doing this. Can anyone shed some light on how they have handled such a situation in the past?
I want to "black box" this vendor's software so I can replace it easily if needed. What would be the best approach, from a design point of view, to map an internal object to this exposed API object? Keep in mind that my in-house app still needs to talk to the API, so there is going to be some dependency between the two, but I want to reduce it so I can also test in isolation from this software using JUnit.
Thanks,
Jason
Create an interface for the service layer; internally, all your code can work with that. Then make a class that implements that interface and calls the third-party API methods, acting as the API façade.
i.e.
interface IAPIEndpoint {
    MyDomainDataEntity getData();
}

class MyAPIEndpoint implements IAPIEndpoint {
    @Override
    public MyDomainDataEntity getData() {
        MyDomainDataEntity dataEntity = new MyDomainDataEntity();
        // Call the third-party API here and populate the entity from its response
        return dataEntity;
    }
}
It is always a good idea to put an interface in front of third-party APIs so you don't get their funk invading your app domain, and you can swap them out as needed. You could make another class implementation that uses a different service entirely.
To use it in code you just call:
IAPIEndpoint endpoint = new MyAPIEndpoint(); // or obtain it however is idiomatic in your setup, e.g. via dependency injection
Basing your design on interfaces when it spans multiple implementations is the way to go. It works great for TDD as well: you can swap in a local test implementation of the interface and inspect your domain code entirely separately from the third-party API.
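For example, a minimal test-double sketch under those assumptions (JUnit 4 style; IAPIEndpoint and MyDomainDataEntity are the hypothetical types from the snippet above):

import static org.junit.Assert.assertNotNull;
import org.junit.Test;

// A local fake that never touches the vendor API
class FakeAPIEndpoint implements IAPIEndpoint {
    @Override
    public MyDomainDataEntity getData() {
        return new MyDomainDataEntity(); // canned data instead of a real call
    }
}

public class DomainLogicTest {

    @Test
    public void domainLogicWorksWithoutTheVendorApi() {
        IAPIEndpoint endpoint = new FakeAPIEndpoint();
        MyDomainDataEntity data = endpoint.getData();
        // assert against whatever domain logic consumes the data
        assertNotNull(data);
    }
}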
Abstraction: implement a DAL (data access layer) which provides the transition from internal to external and back.
Then, if you switched vendors, your internals would remain valuable and you could swap out the vendor-specific code, assuming the vendors provide the same functionality and their data types can be related to each other.
I will be the black sheep here and advocate for the YAGNI principle. The problem is that if you build an abstraction layer now, it will look so close to the third-party API that it will just be a redundant layer. Since you don't know now what a hypothetical future second vendor's API will look like, you don't know what differences you need to account for, and any future port is likely to require rework for those unforeseen differences anyway.
If you need a test framework, my recommendation is to make your own test implementation using the same API as the BPM vendor. Even better, almost all reputable API providers provide some sort of sandbox mode for testing. If they don't, you should ask for one.
