Business delegate handling an exception - java

I'm confused about one of the lines about the Business Delegate pattern, which says:
Business delegate handle & abstract
any remote exception
What do they mean by the word "abstract" here? Is it just about providing the details but not how to implement them?

Have you checked out the Sun documentation yet?
http://java.sun.com/blueprints/patterns/BusinessDelegate.html
If you tie a client directly to a business service interface, that client may potentially have to change every time the business service changes. In the scenario where you have one type of client using a service, that's not a big deal, but when you have a bunch of potentially different clients that all want to use the same service, it becomes more of a problem. On top of that, all of your clients that want to use the service probably want to handle looking up the service and handling exceptions from the service in a similar fashion.
In order to mitigate this scenario, you pull all the details of exception handling and distributed lookup out of the individual clients ("abstract" them away) and move them into a business delegate object. All your clients can now use a business delegate to access the business service in a uniform way, and when the business service changes, only your business delegate object has to change rather than all your individual clients.
That's kind of my understanding of the scenario. Hopefully that clears things up for you.
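To make that concrete, here is a minimal Business Delegate sketch in Java. All the names (OrderService, OrderDelegate, OrderServiceException) are invented for illustration; the point is only that the lookup and the remote-exception handling live in the delegate, not in the clients.

import java.rmi.RemoteException;

// Minimal Business Delegate sketch; all names are invented for illustration.
interface OrderService {
    // the remote business service interface
    String findOrder(String orderId) throws RemoteException;
}

class OrderServiceException extends RuntimeException {
    OrderServiceException(String message, Throwable cause) { super(message, cause); }
}

public class OrderDelegate {
    private final OrderService service;

    public OrderDelegate(OrderService service) {
        // in a real delegate, the lookup (JNDI, EJB home, ...) would also be hidden here
        this.service = service;
    }

    public String findOrder(String orderId) {
        try {
            return service.findOrder(orderId);
        } catch (RemoteException e) {
            // "handle & abstract": callers only ever see the delegate's own exception,
            // never the remote/plumbing exception types
            throw new OrderServiceException("Order lookup failed", e);
        }
    }
}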

Related

How do we choose the business logic when building a JHipster application?

When creating an entity with the JHipster helper, it asks:
? Do you want to use separate service class for your business logic? (Use arrow keys)
> No, the REST controller should use the repository directly
Yes, generate a separate service class
Yes, generate a separate service interface and implementation
In which case should I use which option?
What are the benefits and flaws of each solution?
Is it possible to easily change the architecture once everything is set up?
IMHO it depends on how complex your application is going to be and how long you plan on having to maintain it.
If your domain model is quite simple and your REST controllers are straightforward CRUD operations without complex mapping, you can get away without using a separate service layer.
If your domain model or interactions get more complex, you might need a 'Separation of Concerns': your controller classes should just map REST calls from/to the correct DTOs for the REST API, and business logic and coordination between different entities should go in a service class that does not have anything to do with the REST API. In the long term, that makes it easier to change the REST API separately from the business logic.
Some blog posts to read:
https://www.petrikainulainen.net/software-development/design/understanding-spring-web-application-architecture-the-classic-way/
https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-architecture.html
Then there is the decision whether to use interfaces or not. The main advantages of using interfaces used to be that they allowed better testing and avoided coupling modules too closely. But since around 2010 there has been a lot of discussion about whether they are worth the overhead. Maybe start by reading the discussion underneath Adam Bien's original post:
https://www.adam-bien.com/roller/abien/entry/service_s_new_serviceimpl_why
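For illustration, here is a rough sketch of what the "separate service interface and implementation" option looks like in a Spring REST application. BookResource, BookService and BookDTO are invented names, not actual JHipster output; they just show the controller delegating to a service that owns the business logic.

import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Sketch only: names and the "rent a book" rule are invented for illustration.
class BookDTO {
    public final long bookId;
    public final long userId;
    BookDTO(long bookId, long userId) { this.bookId = bookId; this.userId = userId; }
}

interface BookService {
    // the business rule lives here, not in the controller
    BookDTO rentBook(long bookId, long userId);
}

@Service
class BookServiceImpl implements BookService {
    @Override
    public BookDTO rentBook(long bookId, long userId) {
        // coordinate repositories, check availability, update stock, etc.
        return new BookDTO(bookId, userId);
    }
}

@RestController
@RequestMapping("/api/books")
class BookResource {
    private final BookService bookService;

    BookResource(BookService bookService) {
        this.bookService = bookService;
    }

    // the controller only maps the HTTP call to/from DTOs and delegates
    @PostMapping("/{id}/rent")
    public BookDTO rent(@PathVariable("id") long id, @RequestParam long userId) {
        return bookService.rentBook(id, userId);
    }
}

With the "No, the REST controller should use the repository directly" option, the repository call would simply sit inside the controller method instead.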

Persisting related data after client has completed some web service calls in a chain

Consider that our application has some configs that the user sets, and we need to keep a backup of that data in order to restore it later.
The configs are lists of different objects. I have created a web service for each list of objects, and the application calls them in a chain: after getting a success response from one service, it calls the next one.
Now here is the problem:
I need to store each service's data somewhere, and after the last service call finishes on the front end, I will create the final object from the data received from the client and persist it in the database (here MongoDB).
What is the best way to implement this strategy? Note that I don't want to persist each list of objects per service; I need to persist the whole object once.
Is there a way to store the body of a request somewhere until the other services have been called?
What is the best approach for that?
I would appreciate any clue or solution that helps!
BEST WAY:
Store all the objects on the client side and send only one request to the server.
This reduces resource usage on the server side.
ALTERNATIVE:
If you really want to handle it with several requests (which I do not recommend), then one strategy is: store the objects of each request under an identifier tied to that session (the best candidate is the JSESSIONID) in a temporary_objects_table, and after the final request store them in the main tables.
If any service fails for that session, remove the records with that session id from temporary_objects_table.
This has much more complexity compared to the first approach.
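A minimal sketch of that alternative, assuming Spring Data's MongoTemplate; PartialConfig, FinalConfig, the method names and the collection names are invented for illustration.

import java.util.List;
import java.util.stream.Collectors;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.stereotype.Service;

class PartialConfig {
    @Id String id;
    String sessionId;   // e.g. the caller's JSESSIONID
    Object payload;     // the body of one request in the chain
}

class FinalConfig {
    @Id String id;
    List<Object> parts; // the whole configuration, persisted once
}

@Service
class ConfigBackupService {
    private final MongoTemplate mongo;

    ConfigBackupService(MongoTemplate mongo) { this.mongo = mongo; }

    // called by each intermediate service: stash the request body under the session id
    public void stashPartial(String sessionId, Object payload) {
        PartialConfig partial = new PartialConfig();
        partial.sessionId = sessionId;
        partial.payload = payload;
        mongo.save(partial, "temporary_objects");
    }

    // called after the last request in the chain: assemble the final object and persist it once
    public void persistFinal(String sessionId) {
        Query bySession = new Query(Criteria.where("sessionId").is(sessionId));
        List<PartialConfig> stashed = mongo.find(bySession, PartialConfig.class, "temporary_objects");

        FinalConfig whole = new FinalConfig();
        whole.parts = stashed.stream().map(p -> p.payload).collect(Collectors.toList());
        mongo.save(whole, "configs");

        mongo.remove(bySession, "temporary_objects"); // clean up the temporary records
    }

    // called when any service in the chain fails for this session
    public void discard(String sessionId) {
        mongo.remove(new Query(Criteria.where("sessionId").is(sessionId)), "temporary_objects");
    }
}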
After some research I found my answer:
REST and transaction rollbacks
and
https://stackoverflow.com/a/1390393/607033
You cannot use transactions because with REST the client maintains the client state and the server maintains the resource state. So if you want the resource state to be maintained by the client, then it is not REST, because it would violate the stateless constraint. Violating the stateless constraint usually causes bad scalability. In this case it would cause bad horizontal scalability, because you would have to sync ongoing transactions between the instances. So please don't try to build multi-phase commits on top of REST services.
Possible solutions:
You can stick with immediate consistency and use only a single web service instead of two. With resources like a database, filesystem, etc., a multi-phase commit is a necessity. When you break a bigger REST service up and spread the usage of these resources across multiple smaller REST services, problems can occur if you do the split wrongly. This is because one of the REST services will require a resource it does not have access to, so it has to use another REST service to access that resource. This forces the multi-phase commit code to move to a higher abstraction level, the level of the REST services. You can fix this by merging those two REST services and moving the code back to the lower abstraction level where it belongs.
Another workaround is to use REST with eventual consistency: you respond with 202 Accepted immediately and process the accepted request later. If you choose this solution, you must be aware while developing your application that the REST services are not always in sync. Of course, this approach works only with internal REST services for which you are sure the client will retry if a REST service is not available, i.e. when you write and run the client code yourself.
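As a rough illustration of the second workaround, here is a hypothetical Spring controller that replies with 202 Accepted right away and hands the payload off to an asynchronous processor. ConfigBackupController, BackupProcessor and the URI are invented names, and real code would also expose a status resource the client can poll.

import java.util.Map;
import java.util.UUID;

import org.springframework.http.ResponseEntity;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
class ConfigBackupController {
    private final BackupProcessor processor;

    ConfigBackupController(BackupProcessor processor) { this.processor = processor; }

    @PostMapping("/api/config-backups")
    public ResponseEntity<Map<String, String>> accept(@RequestBody Map<String, Object> payload) {
        String jobId = UUID.randomUUID().toString();
        processor.process(jobId, payload);                 // runs in the background
        // reply immediately; the client can poll a status resource for the job later
        return ResponseEntity.accepted().body(Map.of("jobId", jobId));
    }
}

@Service
class BackupProcessor {
    @Async // requires @EnableAsync on a configuration class
    public void process(String jobId, Map<String, Object> payload) {
        // do the eventually consistent work here and record the job status somewhere
    }
}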

Is it bad to take HttpRequest and HttpResponse to the implementation level in Spring

I heard that passing the HttpRequest and HttpResponse from the controller down to the implementation level is not good from a security standpoint. Is that true, and if so, how can I avoid it? Please advise.
Thank you in advance.
The main aim of the service layer is reusability and separation of concerns, i.e. the service layer should be able to process the business logic coming from various sources such as web-tier controllers or other web services (i.e. different endpoints).
So, if your web-tier objects (FormBean objects, HttpRequest, HttpSession objects, etc.) are scattered throughout the service layer, then the services are tightly coupled to the controller layer. If you want to expose or reuse the same service for other endpoints or channels, you will end up making changes to the service layer (removing web-tier objects or placing if-else conditions in the code) to support the different end systems, which is not good.
In an n-tier (or 3-tier) architecture, the service layer (along with the DAOs) should only use domain/entity objects and should not be mixed with front-end (web-tier) objects. Otherwise, the application cannot easily be supported or extended to multiple endpoints.
HttpServletRequest should not be passed to the service layer.
If you need the request explicitly, you can place that logic in the web layer, or extend the library so it takes a Map of parameters (if possible). You can also wrap the HttpRequest and HttpResponse in your own classes implementing interfaces and make the service layer rely on those interfaces.
Your application should be designed in such a way that the component responsible for a task does the required job and delegates the relevant objects/variables to the next set of methods/classes. It is better to extract the header/body/attachments from the HTTP request/response and pass them on to the business class for further processing.
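A small sketch of that idea, assuming Spring MVC with the javax.servlet API (newer Spring Boot versions use jakarta.servlet instead). PaymentController, PaymentService and the header name are invented for illustration; the controller pulls what it needs out of the request, and the service only ever sees plain values.

import javax.servlet.http.HttpServletRequest;

import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
class PaymentController {
    private final PaymentService paymentService;

    PaymentController(PaymentService paymentService) { this.paymentService = paymentService; }

    @PostMapping("/payments")
    public String create(HttpServletRequest request) {
        // web-tier concern: pull the header (or body, attachments, ...) out of the request here
        String clientId = request.getHeader("X-Client-Id");
        // the service never sees HttpServletRequest, only domain-level values
        return paymentService.createPayment(clientId);
    }
}

@Service
class PaymentService {
    public String createPayment(String clientId) {
        // business logic works with plain inputs and is reusable from any endpoint
        return "payment-for-" + clientId;
    }
}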

Best Practice - Multi Layer Architecture and DTOs [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
After reading some of the Q&As here on Stack Overflow, I am still confused about the correct implementation of DTOs in my web application. My current implementation is a (Java EE based) multi-tier architecture (with persistence, service and presentation layers), but with a "common" package used by all layers, containing (amongst others) the domain objects. In this case the layers cannot really be considered independent.
I am planning to remove the common package step by step, but I encounter various challenges/questions:
Assume the persistence layer would use a class myproject.persistence.domain.UserEntity (a JPA based entity) to store and load data to/from the database. To show data in the view I would provide another class myproject.service.domain.User. Where do I convert them? Would the service for the users be responsible to convert between the two classes? Would this really help to improve the coupling?
What should the User class look like? Should it contain only getters, to be immutable? Wouldn't it be cumbersome for the views to edit existing users (create a new User, use the getters of the existing User object, etc.)?
Should I use the same DTO-classes (User) to send a request to the service to modify an existing user/create a new user or should I implement other classes?
Wouldn't the presentation layer be very dependent on the service layer by using all the DTOs in myproject.service.domain?
How should I handle my own exceptions? My current approach rethrows most "severe" exceptions until they are handled by the presentation layer (usually they are logged and the user is informed that something went wrong). On the one hand I again have the problem of a shared package. On the other hand I am still not sure that this can be considered "best practice". Any ideas?
Thank you for any answers.
Having some packages shared among different layers is not uncommon; however, it is usually done only for cross-cutting concerns such as logging. Your model should not be shared by different layers, or changes to the model would require changes in all those layers. Typically, your model is a lower layer, close to the data layer (over, under, or intertwined with it, depending on the approach).
Data Transfer Objects, as their name imply, are simple classes used to transfer data. As such, they are usually used to communicate between layers, specially when you have a SOA architecture which communicates through messages and not objects. DTOs should be immutable since they merely exist for the purpose of transferring information, not altering it.
Your domain objects are one thing, your DTOs are a different thing, and the objects you need in your presentation layer are yet another thing. However, in small projects it may not be worth the effort of implementing all those different sets and converting between them. That just depends on your requirements.
You are designing a web application but it may help your design to ask yourself, "could I switch my web application by a desktop application? Is my service layer really unaware of my presentation logic?". Thinking in these terms will guide you towards a better architecture.
On to your questions:
Assume the persistence layer would use a class myproject.persistence.domain.UserEntity (a JPA based entity) to store and load data to/from the database. To show data in the view I would provide another class myproject.service.domain.User. Where do I convert them? Would the service for the users be responsible to convert between the two classes? Would this really help to improve the coupling?
The service layer knows its classes (DTOs) and the layer below it (let's say persistence). So yes, the service is responsible for translating between persistence and itself.
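A minimal sketch of that conversion, using the class names from the question (UserEntity as the JPA entity, User as the DTO); UserRepository and the field names are invented for illustration.

import javax.persistence.Entity;
import javax.persistence.Id;

// Sketch of the translation living inside the service layer.
@Entity
class UserEntity {                 // myproject.persistence.domain.UserEntity
    @Id Long id;
    String name;
    String passwordHash;           // persistence-only detail that never leaves this layer
}

class User {                       // myproject.service.domain.User (the DTO)
    private final Long id;
    private final String name;

    User(Long id, String name) { this.id = id; this.name = name; }

    public Long getId() { return id; }
    public String getName() { return name; }
}

interface UserRepository {
    UserEntity find(Long id);
}

class UserService {
    private final UserRepository repository;

    UserService(UserRepository repository) { this.repository = repository; }

    public User findUser(Long id) {
        UserEntity entity = repository.find(id);
        // the service is responsible for translating between the two representations
        return new User(entity.id, entity.name);
    }
}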
What should the User class look like? Should it contain only getters, to be immutable? Wouldn't it be cumbersome for the views to edit existing users (create a new User, use the getters of the existing User object, etc.)?
The idea behind DTOs is that you only use them for transfer, so operations like creating a new user are not required. For that you need different objects.
Should I use the same DTO-classes (User) to send a request to the service to modify an existing user/create a new user or should I implement other classes?
The service methods might express the operation, with the DTOs as its parameters containing just the data. Another option is using commands which represent the operation and also contain the DTOs. This is popular in SOA architectures where your service may be a mere command processor, for instance having one single Execute operation taking an ICommand interface as parameter (as opposed to having one operation per command).
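A tiny sketch of that command-processor style; ICommand, CreateUserCommand and CommandProcessor are purely illustrative names.

// One entry point for every operation, instead of one service method per operation.
interface ICommand {
}

class CreateUserCommand implements ICommand {
    final String name;             // the DTO-like data the operation needs
    CreateUserCommand(String name) { this.name = name; }
}

class CommandProcessor {
    public void execute(ICommand command) {
        if (command instanceof CreateUserCommand) {
            CreateUserCommand create = (CreateUserCommand) command;
            // ... create the user from create.name ...
        }
        // further command types are dispatched here
    }
}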
Wouldn't the presentation layer be very dependent on the service layer by using all the DTOs in myproject.service.domain?
Yes, the layer over the service layer will be dependent on it. That is the idea. The upside is that only that layer is dependent on it, no upper or lower layers so changes only affect that layer (unlike what happens if you use your domain classes from every layer).
How should I handle my own exceptions? My current approach rethrows most "severe" exceptions until they are handled by the presentation layer (usually they are logged and the user is informed that something went wrong). On the one hand I again have the problem of a shared package. On the other hand I am still not sure that this can be considered "best practice". Any ideas?
Each layer can have its own exceptions. They flow from one layer to another, encapsulated into the next layer's kind of exception. Sometimes they will be handled by one layer, which will do something (logging, for instance) and maybe then throw a different exception that an upper layer must handle. Other times they might be handled and the problem might be solved. Think for instance of a problem connecting to the database. It would throw an exception. You could handle it, decide to retry after a second, and maybe then succeed, so the exception would not flow upwards. Should the retry also fail, the exception would be re-thrown and it may flow all the way up to the presentation layer, where you gracefully notify the user and ask them to retry later.
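A short sketch of that wrapping, with invented names (AccountDao, AccountService, ServiceException): the persistence detail is caught in the service layer and re-thrown as the service layer's own exception type.

import java.sql.SQLException;

class ServiceException extends RuntimeException {
    ServiceException(String message, Throwable cause) { super(message, cause); }
}

interface AccountDao {
    String findOwnerName(long accountId) throws SQLException;
}

class AccountService {
    private final AccountDao dao;

    AccountService(AccountDao dao) { this.dao = dao; }

    public String loadOwnerName(long accountId) {
        try {
            return dao.findOwnerName(accountId);
        } catch (SQLException e) {
            // handle what can be handled here (logging, retry, ...) and let the rest
            // flow upwards wrapped in the service layer's own exception type
            throw new ServiceException("Could not load owner of account " + accountId, e);
        }
    }
}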
Loose coupling is indeed the recommended way to go, which means you will end up with huge, boring to write, painful to maintain converters in your business logic. Yes, they belong in the business logic: the layer between the DAOs and the views. So the business layer will end up depending on both the DAO DTOs and the view DTOs. And will be full of Converter classes, diluting your view of the actual business logic...
If you can get away with having immutable view DTOs, that's great. A library you use for serializing them might require them to have setters though. Or you might find them easier to build if they have setters.
I have gotten away just fine with using the same DTO classes for both the views and the DAOs. It is bad, but honestly, I did not have the feeling that the system was more decoupled otherwise, since business logic, the most essential part, has to depend on everything anyway. This tight coupling provided for great conciseness, and made it easier to sync the view and DAO layers. I could still have things specific just to one of the layers and not seen in the other by using composition.
Finally, regarding exceptions: it is the responsibility of the outermost layer, the view layer (the Controllers if you are using Spring), to catch errors propagated from the inner layers, be it via exceptions or via special DTO fields. This outermost layer then needs to decide whether to inform the client of the error, and how. The fact is that down to the innermost layer, you need to distinguish between the different types of errors that the outermost layer will need to handle. For example, if something happens in the DAO layer and the view layer needs to know whether to return 400 or 500, the DAO layer has to provide the view layer with the information needed to decide which one to use, and this information has to pass through all intermediary levels, which should be able to add their own errors and error types. Propagating an IOException or SQLException to the outermost layer is not enough; the inner layer also needs to tell the outer layer whether this is an expected error or not. Sad but true.
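One way to carry that information upward is an exception type that labels the error, so only the outermost layer decides on the HTTP status. A sketch with invented names (ErrorKind, AppException, ErrorMapper):

// Inner layers classify the error when they throw it; the view layer maps it to a status.
enum ErrorKind { CLIENT_ERROR, SERVER_ERROR }

class AppException extends RuntimeException {
    private final ErrorKind kind;

    AppException(ErrorKind kind, String message, Throwable cause) {
        super(message, cause);
        this.kind = kind;
    }

    ErrorKind getKind() { return kind; }
}

class ErrorMapper {
    // only the outermost layer turns the classification into an HTTP status code
    static int toHttpStatus(AppException e) {
        return e.getKind() == ErrorKind.CLIENT_ERROR ? 400 : 500;
    }
}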

Transform Webservice Request into internal representation?

I am implementing a SOAP web service which receives different requests. Should my Manager class transform these request objects into an internal representation before delegating them to the implementation classes?
I think this would be a good idea as far as decoupling is concerned. But to do that, I would have to create a copy of each RequestObject class, named something like InternalRequestObject, which stores the same data as the original request.
Does this make sense?
It makes sense if you intend to reuse those implementation classes which I would call your business layer.
In your current setup, you have the business layer exposed as a web service. The skeleton of the web service is - if you will - a client for your business layer.
Now the question that arises is: should your business layer care what kind of clients it will have? Should the data contract of the business layer be dictated by the clients, or should the clients respect the data contract exposed by the business layer?
The obvious response is the clients should respect the data contract of the business layer, so the answer to your question would be: Yes, you should map the SOAP requests to an internal request type used by the implementation classes in order to obtain a better decoupling between the two.
There is only one case in which I would consider using the request types directly all the way down into my business layer: if I were absolutely (101%) sure that I will never have to expose my business layer as anything other than a SOAP web service.
The idea is that you have only two main options here:
1. Keep the same request type all over the place. The disadvantage is that you will suffer a lot of rewrites ripping through your business layer if at some point you have to add other (non-SOAP) clients.
OR
2. Map the SOAP request type to an internal type. The disadvantage is that you risk duplicating code and doing extra work, only to find out in the end that it was all for nothing because you never needed to add other clients after all.
Think about your situation and choose carefully! But I must tell you that, personally, I haven't encountered the disadvantage of number 2 so far. I always ended up adding new clients to the thing and at that point it helped having mapped the types from the very beginning.
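For completeness, a small sketch of option 2 with purely illustrative class names: the generated SOAP type is mapped to an internal request type before the business layer ever sees it.

// CreateCustomerSoapRequest stands in for a WSDL-generated type; the rest is invented.
class CreateCustomerSoapRequest {
    String firstName;
    String lastName;
}

class CreateCustomerRequest {         // internal representation owned by the business layer
    final String fullName;
    CreateCustomerRequest(String fullName) { this.fullName = fullName; }
}

class CustomerRequestMapper {
    // the web service "Manager" class calls this before delegating to the implementation classes
    CreateCustomerRequest toInternal(CreateCustomerSoapRequest soap) {
        return new CreateCustomerRequest(soap.firstName + " " + soap.lastName);
    }
}

class CustomerService {               // the business layer only ever sees the internal type
    void createCustomer(CreateCustomerRequest request) {
        // ... business logic ...
    }
}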
