For quite some time I have been trying to figure out where validation of user input should take place in a Spring MVC application. Many online blogs and tutorials basically say that a controller should validate the user's input and, if it is invalid, respond by showing a page containing the error message. My current understanding of the Spring and Spring MVC layering, however, is that a controller is only a shallow interface between the application logic (service layer) and the "web world", allowing the service layer to be used from the web. Also, as far as I can see, Spring MVC only provides reasonable tools for validation in a controller.
If validation takes place in a controller and at some later point I want to untie the application logic from the "web world", the validation logic must be reimplemented in the new environment (e.g. a desktop application using Swing). In my opinion, the ability to decide which operations are "valid" on domain objects, and what "valid" states such objects may have, is a core part of the service layer, and not the concern of some other part of the application (e.g. controllers).
In this context, why is it "good practice" to place input validation logic in the controller layer and not the service layer?
A common approach is to do validation in both places. But if you are talking about @Valid, from my experience it is nicer to put it at the controller level.
It also depends on what kind of validation logic we are talking about. Let's say you have a bean:
import java.util.UUID;
import javax.validation.constraints.NotEmpty;
import javax.validation.constraints.NotNull;
import lombok.Data;
@Data
public class MyBean {
    @NotNull private UUID someId;
    @NotEmpty private String someName;
}
It would make sense for this bean to be annotated with @Valid at the controller level so that an invalid instance doesn't even reach the service. There is little benefit in putting @Valid on the service method: why propagate the bean further when the controller can immediately decide whether it is valid in that sense or not?
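For illustration, a minimal controller sketch (hypothetical service and endpoint names) where bean validation happens before the service is ever called:

import javax.validation.Valid;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/beans")
public class MyBeanController {

    private final MyBeanService service; // hypothetical service

    public MyBeanController(MyBeanService service) {
        this.service = service;
    }

    @PostMapping
    public void create(@Valid @RequestBody MyBean bean) {
        // If @NotNull/@NotEmpty fail, Spring throws MethodArgumentNotValidException
        // (resulting in a 400 response) and the service is never invoked.
        service.process(bean);
    }
}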
Then there is a second type of validation: business logic validation. Let's say, for the same bean, that the someId property is a time-based UUID and its timestamp needs to be at most 2 days after some event occurred; otherwise the bean should be discarded by the service.
That is a business logic validation case, because just by looking at the bean you would not be able to validate it unless you apply some logic to it.
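A rough sketch of how that rule could look in the service, continuing the hypothetical MyBeanService from the controller sketch above (the event lookup is made up; the constant converts a version-1 UUID timestamp to Unix milliseconds):

import java.time.Duration;
import java.time.Instant;
import org.springframework.stereotype.Service;

@Service
public class MyBeanService {

    // 100-ns intervals between the UUID epoch (1582-10-15) and the Unix epoch (1970-01-01)
    private static final long UUID_EPOCH_OFFSET_100NS = 0x01b21dd213814000L;

    public void process(MyBean bean) {
        Instant eventTime = lookupEventTime(bean); // hypothetical lookup of the related event

        // UUID.timestamp() only works for time-based (version 1) UUIDs
        long millis = (bean.getSomeId().timestamp() - UUID_EPOCH_OFFSET_100NS) / 10_000;
        Instant idTime = Instant.ofEpochMilli(millis);

        // Business rule: the timeUUID must be at most 2 days after the event
        if (Duration.between(eventTime, idTime).compareTo(Duration.ofDays(2)) > 0) {
            throw new IllegalArgumentException("someId is too far after the event");
        }
        // ... continue with normal processing
    }

    private Instant lookupEventTime(MyBean bean) {
        // hypothetical: fetch the related event's timestamp from wherever it lives
        return Instant.now();
    }
}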
Since the two kinds of validation check different things, each of your MVC components (Model, View and Controller) should do its own validation, and it should be reasonable about what it validates without introducing dependencies on the other components.
As for showing the error to the user: yes, the Errors object is indeed intended to be used for bean validation at the controller level, but you can also design a filter that catches exceptions from any layer and formats them nicely for the user. There are many approaches to this, and I am not sure that Spring prescribes any one as better than the others.
Depending on the view resolution mechanism (JSTL, Jackson, or something else), you would probably be inclined to deal with validation in a different way. For example, a traditional JSTL view resolver works nicely with a setup that uses Errors, while a Jackson-based setup would probably work better with a combination of @ResponseBody and some handler that catches errors and puts them in a predefined error part of the response object.
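For the Jackson/@ResponseBody style, one possible shape of such an error-catching piece, here sketched with @RestControllerAdvice rather than a raw filter (the response format is made up):

import java.util.List;
import java.util.stream.Collectors;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.MethodArgumentNotValidException;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@RestControllerAdvice
public class ValidationErrorHandler {

    // Simple response shape: { "errors": ["field: message", ...] }
    public static class ErrorResponse {
        public List<String> errors;
        public ErrorResponse(List<String> errors) { this.errors = errors; }
    }

    @ExceptionHandler(MethodArgumentNotValidException.class)
    @ResponseStatus(HttpStatus.BAD_REQUEST)
    public ErrorResponse onValidationError(MethodArgumentNotValidException ex) {
        List<String> messages = ex.getBindingResult().getFieldErrors().stream()
                .map(fe -> fe.getField() + ": " + fe.getDefaultMessage())
                .collect(Collectors.toList());
        return new ErrorResponse(messages);
    }
}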
In one of our previous projects, we had huge forms with very complex logic, which meant a lot of validation code. So we used a third kind of solution: for every controller, we autowired a helper class.
Example:
myview <-> MyController <- MyService <- MyDAO
                ^
                |
             MyHelper
Controllers handled the view resolving,
Services handled mapping from DTOs to model objects for the view and vice versa,
DAOs handled database transactions, and
Helpers handled everything else, including validation.
If someone had later wanted to change the frontend from the web to something else, it would have been a lot easier, and at the same time we didn't bloat the service implementation classes.
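A hedged sketch of what that helper wiring might have looked like (all names hypothetical):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Controller;
import org.springframework.validation.Errors;

@Component
class MyHelper {
    // Everything that is neither view resolving, DTO mapping nor persistence,
    // including the form validation rules, lives here.
    void validate(MyForm form, Errors errors) {
        if (form.getFromDate() != null && form.getToDate() != null
                && form.getFromDate().isAfter(form.getToDate())) {
            errors.rejectValue("fromDate", "range.invalid", "fromDate must not be after toDate");
        }
    }
}

@Controller
class MyController {
    @Autowired
    private MyHelper myHelper; // the controller delegates validation (and more) to the helper
}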
We got this design:
Rest Controller -> Calling Service -> Calling JPA DAOs
Let's say we save an employee, and we have a requirement to validate the employee name.
IEmployeeService.saveEmployee(Employee emp)
EmployeeController.saveEmployee(Employee emp)
Option 1: I can put JSR-303 annotations on Employee and let the REST controller validate it as part of automatic validation.
Option 2: I validate in the service method, raise an exception, and let the controller return proper JSON for that exception.
It seems like the service should have validation anyway... but in the presence of JSR-303 annotations the controller is doing everything, so there seems to be duplication of logic if the service does those checks as well.
How do you approach this? Everybody's comments are welcome and will be appreciated.
Thanks
Babar
IMHO, the standard way is to use both Opt 1 and Opt 2.
But, you have to define what's going to be validated in each layer.
This is my favorite approach:
Validate field constraints in the Controller. This includes date format, string length, min, max, null... validation.
Validate business logic constraints in the Service. This includes uniqueness, existence checks, from date < to date, max items per group of entities...
One side note: the entity should never be exposed to the outside world. We should have some kind of converter logic to turn the entity into an output JSON/model (a small sketch of this split follows below).
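A compact sketch of that split, with made-up names: field constraints checked in the controller, business constraints in the service, and the entity kept behind an output model:

import javax.validation.Valid;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.Size;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Input model: field constraints live here and are checked by @Valid in the controller.
class CreateItemRequest {
    @NotBlank @Size(max = 100) public String name;
    @Min(0) public int quantity;
}

@RestController
class ItemController {
    private final ItemService itemService;
    ItemController(ItemService itemService) { this.itemService = itemService; }

    @PostMapping("/items")
    public ItemView create(@Valid @RequestBody CreateItemRequest request) {
        return itemService.create(request); // only reached if the field constraints pass
    }
}

@Service
class ItemService {
    public ItemView create(CreateItemRequest request) {
        // Business constraints: uniqueness, existence, from date < to date, limits per group...
        // Map request -> entity, persist, then map entity -> ItemView; the entity never leaves here.
        return new ItemView();
    }
}

class ItemView { /* output JSON/model, decoupled from the entity */ }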
Ideally, your web-facing part should implement the input validation logic; the service layer should be purely for the business part of the project. Validating before the request reaches your service layer code is also a security best practice. Nowadays people mix up layers and want to do everything in the same layer. The JSR-303 annotations are bean-level validation, so they are normally applied where the model comes into the picture.
So you can do that at
controller layer: to validate your input, min/max ranges and other basic constraints.
service layer: to have any service-specific validation, by autowiring javax.validation.Validator (see the sketch below).
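A minimal sketch of that service-layer part, assuming a Validator bean is available (e.g. via LocalValidatorFactoryBean) and reusing the Employee from the question above:

import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
import javax.validation.Validator;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class EmployeeService {

    @Autowired
    private Validator validator; // the JSR-303 validator exposed as a Spring bean

    public void saveEmployee(Employee emp) {
        // Defensive re-validation: the service cannot assume every caller is a web controller.
        Set<ConstraintViolation<Employee>> violations = validator.validate(emp);
        if (!violations.isEmpty()) {
            throw new ConstraintViolationException(violations);
        }
        // ... service-specific business checks and persistence
    }
}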
I would do both.
In your example you seem to have both methods refer to Employee, which might be possible in very simple scenarios, but in practice it usually is not.
It is more likely that you are going to have an EmployeeDTO which has JSON properties mapped to Java object fields. You might even have more specific DTOs for specific employee operations (for example change password), depending on what forms your UI is exposing. This will be received by your controller, and JSR-303 can help do syntactic validations, like checking that a string is not empty, that a name does not contain numbers, etc.
The DTO should not bleed into the service; its data should be translated to whatever the service expects, which should be decoupled from the inputs of the controller.
Your service will then receive an Employee. This could be a JPA entity if that makes sense, or some other intermediate domain object related to the operation being performed. The creation of the Employee should ideally already enforce some simple checks (for example non-null checks) to ensure that the object is consistent after construction. The service itself should then validate what it is receiving, independently of what the controller validates; the service could be used by other controllers, or invoked by other services in the future, so you should always program defensively. The service can also do more logical checks (for example, does another employee with the same ID but different details exist?). This is business logic validation that enforces further robustness.
If you see a lot of common code between the DTO and the Entity, or intermediate objects, there are also mapper utility libraries which can help you avoid repetition, like MapStruct.
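As an illustration of that last point, a small MapStruct sketch (the mapped fields are whatever Employee and EmployeeDTO happen to share):

import org.mapstruct.Mapper;
import org.mapstruct.factory.Mappers;

@Mapper
public interface EmployeeMapper {
    EmployeeMapper INSTANCE = Mappers.getMapper(EmployeeMapper.class);

    // MapStruct generates the field-by-field mapping code at compile time,
    // so the DTO-to-entity translation is not hand-written and not duplicated.
    Employee toEntity(EmployeeDTO dto);

    EmployeeDTO toDto(Employee entity);
}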
I'm currently in the process of adding Spring and Hibernate to an existing application, but after reading lots of tutorials there are still a couple of things (aka a lot of things) that either seem strange to me or that I'm missing...
All the tutorials that I found are straightforward ones (like most tutorials should be), as seen in Example A: one controller to handle the requests (JSP or WS) and an autowired manager class to interact with the DB.
In my case this doesn't apply, since the application has a class to handle the requests, which then instantiates a handler class, which in turn creates a new class to handle something else, which creates a new class to handle (....)* and then handles the database connection, as seen in Example B.
My question is: how can I make my Business Logic Class n "Springable", i.e. able to have a Database Manager autowired inside it?
From all the examples that I've seen, I've come up with these alternatives:
Autowire ALL the DbManagers inside the Controller, and then use IoC through all the Business Classes until it reaches Business Logic class n. This would follow the Spring standards, but would imply the most code refactoring.
Transform ALL the Business Logic classes into beans
Add SpringBeanAutowiringSupport.processInjectionBasedOnCurrentContext(this); to the Business Logic class n and use @Autowired to access the DbManager
Am I missing something or is there any other alternative?
This is just my opinion, but you may be interested.
The basic philosophy of Spring is that the creation and configuration of objects is handled by the container, not by the business objects themselves; this is known as IoC (Inversion of Control) or Dependency Injection. Based on the configuration, the container creates the objects and associates (injects) them with each other. This allows you to remove from the business classes the code related to instantiation and configuration (which can be quite complex). Your classes become simpler and cleaner, and can focus on the business logic and nothing else.
I believe that business objects do not need to create each other. Let Spring do it; it does it perfectly.
Just mark your business logic classes, depending on their role, with one of the stereotypes @Component, @Service or @Controller (the meaning of the stereotypes you can find here), and inject them with @Autowired. And if you need the Database Manager in these classes, inject it the same way.
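A tiny sketch of that, reusing the DbManager from the question (otherwise hypothetical names):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class BusinessLogicClassN {

    // Spring creates and injects the manager; the business class never instantiates it.
    @Autowired
    private DbManager dbManager;

    public void doBusinessWork() {
        // use dbManager directly here, no 'new' chains needed
        // e.g. dbManager.save(...);
    }
}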
So, my choice corresponds to point number two: "2. Transform ALL the Business Logic classes into beans..."
You can (and should!) use Spring Stereotypes for this.
Refer to my previous answer to a similar question for details about the proposed application structure.
I'm building a web application that primarily consists of CRUD operations on data from the back end/database. There are instances where I have to write business logic (and I'm sure we will have more business logic as we go deeper into development). Currently, for each UI screen I create a model class, a service class, a DAO class, a controller (essentially a servlet) and a bunch of JSP pages. In most cases the service class just calls the methods of the DAO, passing in model objects. Essentially we use model classes to map data from UI screens, so the controller has the model objects populated when a form is submitted. I have started using service classes to keep a separation layer between the web layer and the DAO layer. But at times I feel that the service class is just adding an unnecessary level of API calls, and I would think that I could just inject the DAO into the controller and complete the task faster. I want to use the service class only when there is additional business logic to be performed. If you had to design an application, what factors would you consider when choosing a controller->DAO vs. controller->Service->DAO control flow?
DAOs are more granular and deal with one specific entity. Services provide macro-level functionality and can end up using more than one DAO. Typically, services are used for defining transaction boundaries to gain atomicity. In other words, if you end up updating multiple tables using multiple DAOs, defining the transaction boundary at the service will help in either committing or rolling back all the changes made to the DB.
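A brief sketch of that transaction-boundary idea (service, DAOs and entity are hypothetical):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final OrderDao orderDao;         // hypothetical DAO for the orders table
    private final InventoryDao inventoryDao; // hypothetical DAO for the inventory table

    public OrderService(OrderDao orderDao, InventoryDao inventoryDao) {
        this.orderDao = orderDao;
        this.inventoryDao = inventoryDao;
    }

    // One transaction spans both DAO calls: either both updates commit or both roll back.
    @Transactional
    public void placeOrder(Order order) {
        orderDao.insert(order);
        inventoryDao.decreaseStock(order.getProductId(), order.getQuantity());
    }
}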
In your design, since you are primarily doing CRUD for various entities, it may seem that services are not adding much value. However, think of the web-based front end as just one way of updating data. Using services will allow you to expose the same capabilities later, for example as a web service, to other kinds of clients such as third-party integrators.
So, in summary, your design seems to be in line with conventional practice. If you feel that you can combine multiple services into one based on some common theme, such that it reduces code overhead, then you should go ahead and do it. At the end of the day, the ultimate goal is to create maintainable code which no one is afraid to change when the need arises.
In the Pro Spring 3 book, they mention the following about controllers with JPA 2:
Once the EntityManagerFactory had been properly configured, injecting it into your service layer classes is very simple.
and they are using the same class as both service and repository, as shown below:
package com.apress.prospring3.ch10.service.jpa;
// Import statements omitted
#Service("jpaContactService")
#Repository
#Transactional
public class ContactServiceImpl implements ContactService {
private Log log = LogFactory.getLog(ContactServiceImpl.class);
#PersistenceContext
private EntityManager em;
// Other code omitted
}
but if you are going to use Spring Data's CrudRepository or JpaRepository, then your DAO will just be an interface, and you have to put your code in a service layer.
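For that Spring Data style, a hedged sketch of how the split typically ends up (the Contact entity and its lastName field are assumed, the repository method is a made-up example):

import java.util.List;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// The DAO is now just an interface; Spring Data generates the implementation.
interface ContactRepository extends CrudRepository<Contact, Long> {
    List<Contact> findByLastName(String lastName);
}

@Service
@Transactional
class ContactService {

    private final ContactRepository contactRepository;

    ContactService(ContactRepository contactRepository) {
        this.contactRepository = contactRepository;
    }

    // Business logic, validation and transaction boundaries live here,
    // since there is no hand-written repository class to put them in.
    public Contact register(Contact contact) {
        // e.g. business checks before saving
        return contactRepository.save(contact);
    }
}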
I'd reference my answer here
The long and short of it is that the advantage of using a service layer is that it gives you room to move in the future, for example if you want to do anything with Spring Security and roles. It also allows you to handle transactions more atomically, and Spring itself has really nice annotations for this.
Use a service class when dealing with more than one aggregate root.
Inject repositories (i.e. a DAO that returns a collection) or DAOs directly into the controller; there is no need for an extra layer/class to do a basic get.
Only use service classes where necessary, otherwise you have twice as much code as required.
You can make the repository generic and annotate it with @Transactional(propagation = Propagation.REQUIRED), which enforces that a transaction is present but won't create a new one if one already exists. So if you later use multiple repositories in one service class method, you will only have the one transaction.
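A rough sketch of that generic, transactional repository idea (JPA-based; class and method names are hypothetical):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

// Generic base repository: REQUIRED joins an existing transaction if one is already
// open (e.g. started by a service method), otherwise it starts a new one.
@Transactional(propagation = Propagation.REQUIRED)
public class GenericRepository<T, ID> {

    @PersistenceContext
    private EntityManager em;

    private final Class<T> entityClass;

    public GenericRepository(Class<T> entityClass) {
        this.entityClass = entityClass;
    }

    public T find(ID id) {
        return em.find(entityClass, id);
    }

    public T save(T entity) {
        return em.merge(entity);
    }
}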
I have looked up a lot of information about the DAO pattern and I get the point of it. But I feel like most explanations aren't telling the whole story, by which I mean where you would actually use your DAO. So for example, if I have a User class and a corresponding UserDAO that is able to save and restore users for me, which is the correct way:
The controller creates the User object and passes it to the UserDAO to save it to the database
The controller creates the User object and in its constructor the user object makes a call to the userDAO in order to save itself into the database
This is a code smell and you are missing an extra class "UserManager" which the controller will ask to create the user. The UserManager is responsible for creating the user and asking the UserDAO to save it
I really feel like the third option is the best, because all that the controller is responsible for is delegating the request to the correct model object.
What is your favorite way? Am I missing something here ?
From my experience with DAOs, the first approach is the only correct one. The reason is that it has the clearest responsibilities and produces the least clutter (well, some very respectable programmers regard DAOs themselves as clutter; Adam Bien sees the original DAO pattern as already implemented in the EntityManager and further DAOs as mostly unnecessary "pipes").
Approach 2 binds the model to the DAO, creating an "upstream dependency". What I mean is that usually the models are distributed as separate packages and are (and should be) ignorant of the details of their persistence. A similar pattern to what you are describing is the Active Record pattern. It is widely used in Ruby on Rails but has not been implemented with equal elegance and simplicity in Java.
Approach 3 - what is supposed to be the point of the UserManager? In your example the Manager performs 2 tasks - it has the duties of a User factory and is a proxy for persistence requests. If it is a factory and you need one, you should name it UserFactory without imposing additional tasks on it. As for the proxy - why should you need it?
IMHO most classes named ...Manager have a smell. The name itself suggests that the class has no clear purpose. Whenever I have an urge to name a class ...Manager, it's a signal for me to find a better fitting name or to think hard about my architecture.
For the first approach: IMHO, a controller calling a method on a DAO object is not good design. Controllers should ask "service"-level objects for business operations. How these services persist the data is not a concern of the controller.
For the second approach: sometimes you may want to just create the object, so the constructor duty and the persisting duty must not be tightly coupled like this.
Lastly, manager or service objects are a good abstraction for a layered architecture. This way you can group the business flows in the appropriate classes and methods.
But for Play, companion objects of case classes are also good candidates to use as DAOs. The singleton nature of these objects makes them a good fit.
import anorm._
import play.api.db.DB
import play.api.libs.json.Json
import play.api.Play.current

case class TicketResponse(appId: String, ticket: String, ts: String)

object TicketResponse {
  // JSON serializer for the case class
  implicit val ticketWrites = Json.writes[TicketResponse]

  // The companion object acts as the DAO: it persists TicketResponse rows
  def save(response: TicketResponse) = {
    DB.withConnection { implicit connection =>
      SQL("insert into tickets(ticket, appid, ts)"
        + " values ({ticket}, {appid}, {ts})")
        .on('ticket -> response.ticket, 'appid -> response.appId, 'ts -> response.ts)
        .executeInsert()
    }
  }
}
The Data Access Object (DAO) should be used closer to the data access layer of your application.
The data access object actually does the data access activities. So it is part of data access layer.
The architecture layers before DAO could vary in projects.
Controllers are basically for controlling the request flow. So they are kind of close to UI.
Although a generic Manager or Handler can be a bad idea, we could still add a layer between the controller and the DAO. The controller will pre-process the data that is coming in from a request or going out (data sanity, security, localization, i18n, transforming to JSON, etc.). It sends data to the service in the form of domain objects (User in this case). The service will invoke some business logic on this user or use it for some business logic, and it would then pass it to the DAO.
Having the business logic in the controller layer is not good if you are supporting multiple clients like JSPs, web services, handheld devices, etc.
Assuming Controller means the "C" in MVC, your third option is the right approach. Generally speaking, controller code extends or follows the conventions of a framework. One of the ideals of MVC is that swapping frameworks (which really means swapping the Controller) should be relatively easy. Controllers should just move data back and forth between the model and view layers.
From a model perspective, controllers should interact with a service layer (a contextual boundary) sitting in front of the domain model. The UserManager object would be an example of a piece that you would consider part of your service layer, that is, the domain model's public API.
For a typical webapp I would prefer the Play framework with Play's JPA and database implementation. It is a much more productive way to work.
Please take a look here: http://www.playframework.org/documentation/1.2.5/jpa
and here
http://www.playframework.org/documentation/1.2.5/guide1 and http://www.playframework.org/documentation/1.2.5/guide2
That's it))
I'd like to have your expert explanations about an architectural question. Imagine a Spring MVC webapp with the validation API (JSR-303). For a request, I have a controller that handles it, then passes it to the service layer, which passes it to the DAO layer.
Here's my question. At which layer should the validation occur, and how ?
My thought is that the controller has to handle basic validation (are mandatory fields empty? Is the field length OK? etc.). Then the service layer can do trickier checks that involve other objects. The DAO does no validation at all.
BUT, if I want to implement some unit testing (i.e. test the layers below the service, not the controllers), I'll end up with unexpected behavior, because some validations should have been done in the controller layer. As we don't use the controller in these unit tests, there is a problem.
What is the best way to deal with this ? I know there is no universal answer, but your personal experience is very welcomed.
Thanks a lot.
Regards.
In your service unit tests, craft your test data as though it passed all controller level validation checks - essentially assume that the controller level validation will work perfectly and your service will only ever receive data that is valid from the controller's perspective. Then, test each of those validation cases in the controller's unit tests. Finally, include integration tests which fail validation on both levels and make sure none get through.
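A hedged sketch of that split in test code, using MockMvc standalone setup (controller and stub service names are hypothetical; assumes Hibernate Validator is on the test classpath so that @Valid is applied):

import org.junit.jupiter.api.Test;
import org.springframework.http.MediaType;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

class EmployeeControllerValidationTest {

    private final MockMvc mockMvc =
            MockMvcBuilders.standaloneSetup(new EmployeeController(new StubEmployeeService())).build();

    @Test
    void rejectsEmployeeWithBlankName() throws Exception {
        // Controller-level test: JSR-303 validation should stop this request with a 400,
        // so the stubbed service is never reached.
        mockMvc.perform(post("/employees")
                        .contentType(MediaType.APPLICATION_JSON)
                        .content("{\"name\": \"\"}"))
                .andExpect(status().isBadRequest());
    }
}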