In Spring Boot, are @PreAuthorize("@bean.checkAuthorization(#argument)") calls cached on a per-request basis?
As a minimal example, suppose a @Controller has the following REST endpoint:
@GetMapping("/api/test")
public String testAuthorization() {
    String argument = "..."; // placeholder for a value derived from the request
    String foo = service.foo(argument);
    String bar = service.bar(argument); // same argument
    return bar + foo;
}
and the foo and bar methods in service are defined as follows:
@PreAuthorize("@bean.checkAuthorization(#argument)")
public String foo(String argument) {
    return "hello";
}

@PreAuthorize("@bean.checkAuthorization(#argument)")
public String bar(String argument) {
    return "world";
}
Use case. Given that bean.checkAuthorization(argument) is a costly function to run (e.g. it makes a database call) but is stateless (i.e. its result for a given argument will not change across calls), it only needs to run once per request. Is it therefore possible to cache the result for the whole request, while still ensuring that the check is performed on every new request? (No long-term cache.)
This is not a good idea.
In implementing something like this, you make yourself vulnerable to a cache poisoning attack; that is, you expose yourself to the risk that someone your database call would not authorize ends up being authorized in a way you don't want.
It's preferable to take the slight performance hit of determining whether a user should be allowed access to a protected resource than to have that resource leak.
The better option may be to investigate what's making the database call so expensive, or to convince the business that this is an expense that must be paid to safeguard the resource.
Just enable caching: @EnableCaching, and put @Cacheable("checkauth") on top of the checkAuthorization method.
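A minimal sketch of that setup, assuming the bean carries the name referenced as @bean in the SpEL expression (the method body and return type are placeholders):

@Configuration
@EnableCaching
public class CacheConfig {
    // cache provider configuration (e.g. a TTL) would go here
}

@Component("bean")
public class AuthorizationBean {

    // Results are keyed by argument. By default this cache is
    // application-wide, not per-request; a short TTL configured on the
    // cache manager bounds how long entries live.
    @Cacheable("checkauth")
    public boolean checkAuthorization(String argument) {
        return expensiveDatabaseCheck(argument); // the costly call from the question
    }

    private boolean expensiveDatabaseCheck(String argument) {
        return false; // placeholder for the real database lookup
    }
}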
Edit: I realised that my answer wasn't per-request, as the comment below states. There are definitely some fancy ways of doing that, such as here: Caching per request with Spring, which uses AOP.
But that is so complicated! I imagine most requests take under a second in a modern web application, so a retention policy along the lines of @Cached(expire = 1, timeUnit = TimeUnit.MINUTES) would do the trick; premature optimisation is the devil.
But I find this approach really non-standard for such a standard problem. Instead, I suggest that you have roles FOO and BAR, which will allow you to simply annotate your methods with hasRole('FOO') and hasRole('BAR'):
@Service
public class MyUserDetailsService implements UserDetailsService {

    @Autowired
    private UserRepository userRepository;

    @Override
    public UserDetails loadUserByUsername(String username) {
        User user = userRepository.findByUsername(username);
        if (checkAuthorizationFoo(user)) user.addRole("FOO");
        if (checkAuthorizationBar(user)) user.addRole("BAR");
        return user;
    }
}
@PreAuthorize("hasRole('FOO')")
public String foo(String argument) { ... }

@PreAuthorize("hasRole('BAR')")
public String bar(String argument) { ... }
Realise, though, that this isn't lazy: your checkAuthorization methods are always called, which is a downside of this approach. You could probably make it lazy with a bit of work, but again, be careful how much optimisation you go out and do.
Related
Our application calls many external APIs which take the current user's session token as input. So what we currently do is: in a controller, get the session token for the user and pass it into a service, which in turn might call another service or some API client. To give an idea, we end up with something like this (the example is .NET, but something similar is, I think, possible in Java):
public IActionResult DoSomething(string something)
{
this.someService.DoSomethingForUser(this.HttpContext.SessionToken, something);
return View();
}
And then we have
public class SomeService
{
private readonly IApiClient apiClient;
public SomeService(IApiClient apiClient)
{
this.apiClient = apiClient;
}
public void DoSomethingForUser(string sessionToken, string something)
{
this.apiClient.DoSomethingForUser(sessionToken, something);
}
}
It can also happen that another service is injected into SomeService, which in turn calls the IApiClient instead of SomeService calling IApiClient directly, basically adding another "layer".
We had a discussion in the team about whether, instead of passing the session token around, it wouldn't be better to inject it using DI, so you get something like this:
public IActionResult DoSomething(string something)
{
this.someService.DoSomethingForUser(something);
return View();
}
And then we have
public class SomeService
{
private readonly IUserService userService;
private readonly IApiClient apiClient;
public SomeService(IUserService userService, IApiClient apiClient)
{
this.userService = userService;
this.apiClient = apiClient;
}
public void DoSomethingForUser(string something)
{
this.apiClient.DoSomethingForUser(userService.SessionToken, something);
}
}
The IUserService would have an IHttpContextAccessor injected:
public class UserService : IUserService
{
private readonly IHttpContextAccessor httpContextAccessor;
public UserService(IHttpContextAccessor httpContextAccessor)
{
this.httpContextAccessor = httpContextAccessor;
}
public string SessionToken => httpContextAccessor.HttpContext.SessionToken;
}
The benefits of this pattern are, I think, pretty clear. Especially with many services, it keeps the code "cleaner" and you end up with less boilerplate code for passing a token around.
Still, I don't like it. To me the downsides of this pattern are more important than its benefit:
I like that passing the token in the methods is explicit. It is clear that the service needs some sort of authentication token for it to function. I'm not sure if you can call it a side effect, but the fact that a session token is magically injected three layers deep is impossible to tell just by reading the code.
Unit testing is a bit more tedious if you have to mock the IUserService.
You run into problems when calling the service from another thread, e.g. calling SomeService from a background thread. Although these problems can be mitigated by injecting another concrete type of IUserService which gets the token from somewhere else, it feels like a chore.
To me it strongly feels like an anti-pattern, but apart from the arguments above it is mostly a feeling. There was a lot of discussion and not everybody was convinced that it was a bad idea. Therefore, my question is: is it an anti-pattern, or is it perfectly valid? What are some strong arguments for and against it, so that there can hopefully be little debate about whether this pattern is indeed perfectly valid or something to avoid?
I would say the main point is to enable your desired separation of concerns. I think it is a good question if expressed in those terms. As Kit says, different people may prefer different solutions.
REQUEST SCOPED OBJECTS
These occur quite naturally in APIs. Consider the following example, where a UI calls an Orders API, then the Orders API forwards the JWT to an upstream Billing API. A unique Request ID is also sent, in case the flow experiences a temporary problem. If the flow is retried, the Request ID can be used by APIs to prevent data duplication. Yet business logic should not need to know about either the Request ID or the JWT.
BUSINESS LOGIC CLASS DESIGN
I would start by designing my logic classes with my desired inputs, then work out the DI later. In my example, the OrderService class might use claims to get the user identity and also for authorization, but I would not want it to know about HTTP-level concerns:
public class OrderService
{
    private readonly IBillingApiClient billingClient;
    private readonly ClaimsPrincipal user;

    public OrderService(IBillingApiClient billingClient, ClaimsPrincipal user)
    {
        this.billingClient = billingClient;
        this.user = user;
    }

    public async Task CreateOrder(OrderInput data)
    {
        this.Authorize();

        var order = this.BuildOrder(data);
        await this.billingClient.CreateInvoice(order);
    }
}
DI SETUP
To enable my preferred business logic, I would write a little DI plumbing, so that I could inject request scoped dependencies in my preferred way. First, when the app starts, I would create a small middleware class. This will run early in the HTTP request pipeline:
private void ConfigureApiMiddleware(IApplicationBuilder api)
{
api.UseMiddleware<ClientContextMiddleware>();
}
In the middleware class I would then create a ClientContext object from runtime data. The OrderService class will run later, after next() is called:
public class ClientContextMiddleware
{
    private readonly RequestDelegate next;

    public ClientContextMiddleware(RequestDelegate next)
    {
        this.next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        var jwt = readJwt(context.Request);
        var requestId = readRequestId(context.Request);

        var holder = context.RequestServices.GetService<ClientContextHolder>();
        holder.ClientContext = new ClientContext(jwt, requestId);

        await this.next(context);
    }
}
In my DI composition at application startup I would express that the API client should be created when it is first referenced. In the HTTP request pipeline, the OrderService request scoped object will be constructed after the middleware has run. The below lambda will then be invoked:
private void RegisterDependencies(IServiceCollection services)
{
    services.AddScoped<IApiClient>(
        ctx =>
        {
            var holder = ctx.GetService<ClientContextHolder>();
            return new ApiClient(holder.ClientContext);
        });

    services.AddScoped<ClientContextHolder>();
}
The holder object is just due to a technology limitation: the MS stack does not allow you to create new request-scoped injectable objects at runtime, so you have to update an existing one. In a previous .NET tech stack, the concept of a child container per request was made available to developers, so the holder object was not needed.
ASYNC AWAIT
Request-scoped objects are stored against the HTTP request object, which is the correct behaviour when using async await. The current thread ID may switch, e.g. from 4 to 6, after the call to the Billing API.
If the OrderService class has a transient scope, it could get recreated when the flow resumes on thread 6. If this is the case, then resolution will continue to work.
SUMMARY
Designing inputs first, then writing some support code if needed is a good approach I think, and it is also useful to know the DI techniques. Personally I think natural request scoped objects that need to be created at runtime should be usable in DI. Some people may prefer a different approach though.
In .NET, which is the area where I have expertise, this is not an anti-pattern; on the contrary, it is the model many adopt. But it is not a model I would follow, for the following reasons:
it is not clear to someone reading and using the code where the token comes from, which works against clean code
you load important information into a place that is frequently accessed by the framework (in the case of .NET Core)
your classes will reference a large object carrying a lot of unnecessary information, when you could have created a leaner model that costs less memory and allocation time; I say this because the IHttpContextAccessor carries all the information relevant to your request
Here is how I would take care of readability (clean code) and improve performance:
I would add a middleware or filter to my MVC flow where I would do the authentication part, and create a class like:
public class TokenAuthenticationValues
{
    public string TokenClient { get; set; }
    public string TokenValue { get; set; }
}
Of course my class is just an example, but in my middleware I would load its token values after calling the necessary APIs (of course this model needs an interface, and it needs to be registered as .AddScoped() in the case of .NET).
That way I would use it in my methods by only injecting my ITokenAuthenticationValues in the constructor, and I would have clear, clean information loaded in memory during the entire request.
If it is necessary to change the token in the middle of the request, any class can access it and change its value.
I would have less unused memory allocated in my classes, since in contrast to the IHttpContextAccessor, the ITokenAuthenticationValues contract only carries relevant information.
Hope this helps
In my Spring project I have the following aspect class for logging:
@Aspect
@Component
public class BaseLoggingAspect {

    private static final Logger logger = LoggerFactory.getLogger(BaseLoggingAspect.class);

    @Target({ ElementType.FIELD, ElementType.PARAMETER })
    public @interface NonLoggingField {
    }

    @Pointcut("execution(public * *(..))")
    private void allPublicMethods() {
    }

    @Pointcut("within(img.imaginary.service.*)")
    private void inServices() {
    }

    @Pointcut("within(img.imaginary.dao.*)")
    private void inDao() {
    }

    @Before("allPublicMethods() && (inServices() || inDao())")
    public void logBeforeCall(JoinPoint joinPoint) {
        if (logger.isDebugEnabled()) {
            logger.debug("begin method {} in {} class with arguments: {}", joinPoint.getSignature().getName(),
                    joinPoint.getTarget().getClass().getSimpleName(), joinPoint.getArgs());
        }
    }
}
This aspect simply intercepts all the public methods of the service and dao layers and, at the beginning of execution, logs the name of the method, the name of the class, and the values of the method's arguments.
In this aspect I created a NonLoggingField annotation that I want to apply to some fields of the objects that can be passed as parameters to these logged methods, for example:
public class User {

    @NonLoggingField
    public String userEmail;

    public String name;

    public User(String userEmail, String name) {
        this.userEmail = userEmail;
        this.name = name;
    }

    @Override
    public String toString() {
        return String.format("user name: %s and his email: %s", name, userEmail);
    }
}
The point is that such objects are written to the log through their toString methods, but the email must somehow be kept out of the log by means of the NonLoggingField annotation. I have been thinking of doing this through reflection, but it is not clear how to do it without overly difficult reflection code, especially considering that objects may contain objects of other types, which may in turn have annotated fields of their own, or collections of objects with such fields. Perhaps the AspectJ library can help, but I can't find such mechanisms in it. Please help me come up with something.
At runtime, a method parameter is just a value. The JVM does not know at this point whether the caller called the method using constants, literals, fields or the results of other method calls. That kind of information you only see in the source code. In byte code, whatever dereferencing operation or computation is necessary to determine the parameter's value (or a reference to the corresponding object) is done before the method is called. So there is no connection to the field annotation.
Would annotating method parameters be an alternative for you?
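To sketch that idea (my own adaptation of the aspect above, not taken from the question): give the annotation runtime retention and put it on parameters, and the advice can mask annotated arguments before logging. MethodSignature comes from org.aspectj.lang.reflect.

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.PARAMETER)
public @interface NonLoggingField {
}

@Before("allPublicMethods() && (inServices() || inDao())")
public void logBeforeCall(JoinPoint joinPoint) {
    MethodSignature signature = (MethodSignature) joinPoint.getSignature();
    Annotation[][] parameterAnnotations = signature.getMethod().getParameterAnnotations();
    Object[] args = joinPoint.getArgs();
    Object[] safeArgs = new Object[args.length];
    for (int i = 0; i < args.length; i++) {
        // mask any argument whose parameter carries the annotation
        boolean hidden = Arrays.stream(parameterAnnotations[i])
                .anyMatch(a -> a.annotationType() == NonLoggingField.class);
        safeArgs[i] = hidden ? "***" : args[i];
    }
    logger.debug("begin method {} with arguments: {}",
            signature.getName(), Arrays.toString(safeArgs));
}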
If your requirement is very specific, e.g. intercept field accesses from toString methods and return dummy values instead if the field is annotated, that would be possible. But it would not be fool-proof. Imagine, for example, that toString calls a getter method instead of directly accessing the field, or that a method other than toString logs the field. You do not always want to falsify the field value on read access, because other parts of the application might rely on it working correctly. Not every toString call is made in order to log something.
I think you should solve the problem in another way, e.g. by applying filter rules in the logging tool you use. Or, if you really want to solve it at the application level, you could create an interface like
public interface PrivacyLogger {
String toStringSensitive();
}
and make each class containing sensitive information implement that interface. The logging aspect could then, for each printed object, determine whether it is an instanceof PrivacyLogger. If so, it would log the result of toStringSensitive() instead of toString(), i.e. in the simplest case something like
Object toBeLogged = whatever();
logger.debug("{}",
    toBeLogged instanceof PrivacyLogger
        ? ((PrivacyLogger) toBeLogged).toStringSensitive()
        : toBeLogged
);
Of course, you need to iterate over getArgs() and determine the correct log string for each object. Probably, you want to write a utility method doing that for the whole parameters array.
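Such a utility could look roughly like this (the method name is mine):

private static Object[] maskSensitive(Object[] args) {
    Object[] result = new Object[args.length];
    for (int i = 0; i < args.length; i++) {
        // replace sensitive objects by their privacy-safe representation
        result[i] = args[i] instanceof PrivacyLogger
                ? ((PrivacyLogger) args[i]).toStringSensitive()
                : args[i];
    }
    return result;
}

It could then be used in the aspect as, e.g., logger.debug("arguments: {}", Arrays.toString(maskSensitive(joinPoint.getArgs()))).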
Moreover, in a complex class, the toStringSensitive() implementation should of course also check whether its own fields are PrivacyLogger instances and, in that case, fold the values of their respective toStringSensitive() methods into its own, so that it works recursively.
I am sorry I have no better news for you, but privacy is something which needs to be built into an application from the ground up. There is no simple, fool-proof way to do that with one little aspect. The aspect can utilise the existing application infrastructure and avoid scattering and tangling, but it cannot decide on its own what needs to be prevented from getting logged and what not.
I read "Clean Code" book ((c) Robert C. Martin) and try to use SRP(single responsibility principle). And I have some questions about it. I have some service in my application, and I do not know how can I refactor it so it matched the right approach. For example, I have service:
public interface SendRequestToThirdPartySystemService {
void sendRequest();
}
What does it do, judging by the class name? It sends a request to a third-party system. But I have this implementation:
@Slf4j
@Service
public class SendRequestToThirdPartySystemServiceImpl implements SendRequestToThirdPartySystemService {

    @Value("${topic.name}")
private String topicName;
private final EventBus eventBus;
private final ThirdPartyClient thirdPartyClient;
private final CryptoService cryptoService;
private final Marshaller marshaller;
public SendRequestToThirdPartySystemServiceImpl(EventBus eventBus, ThirdPartyClient thirdPartyClient, CryptoService cryptoService, Marshaller marshaller) {
this.eventBus = eventBus;
this.thirdPartyClient = thirdPartyClient;
this.cryptoService = cryptoService;
this.marshaller = marshaller;
}
@Override
public void sendRequest() {
try {
ThirdPartyRequest thirdPartyRequest = createThirdPartyRequest();
Signature signature = signRequest(thirdPartyRequest);
thirdPartyRequest.setSignature(signature);
ThirdPartyResponse response = thirdPartyClient.getResponse(thirdPartyRequest);
byte[] serialize = SerializationUtils.serialize(response);
eventBus.sendToQueue(topicName, serialize);
} catch (Exception e) {
log.error("Send request failed with exception: {}", e.getMessage());
}
}
private ThirdPartyRequest createThirdPartyRequest() {
...
return thirdPartyRequest;
}
private Signature signRequest(ThirdPartyRequest thirdPartyRequest) {
byte[] elementForSignBytes = marshaller.marshal(thirdPartyRequest);
Element element = cryptoService.signElement(elementForSignBytes);
Signature signature = new Signature(element);
return signature;
    }
}
What does it actually do? It creates a request -> signs the request -> sends the request -> sends the response to a queue.
This service injects 4 other services: eventBus, thirdPartyClient, cryptoService and marshaller. And the sendRequest method calls each of these services.
If I want to create a unit test for this service, I need to mock 4 services. I think that's too much.
Can somebody indicate how this service could be changed?
Change the class name and leave as is?
Split into several classes?
Something else?
The SRP is a tricky one.
Let's ask two questions:
What is a responsibility?
What are the different types of responsibilities?
One important thing about responsibilities is that they have a Scope, you can define them at different levels of Granularity, and they are hierarchical in nature.
Everything in your application can have a responsibility.
Let's start with Modules. Each module has responsibilities and can adhere to the SRP.
Then this Module can be made of Layers. Each Layer has a responsibility and can adhere to the SRP.
Each Layer is made of different Objects, Functions etc. Each Object and/or Function has responsibilities and can adhere to the SRP.
Each Object has Methods. Each Method can adhere to the SRP. Objects can contain other objects and so on.
Each Function or Method in an Object is made of statements and can be broken down into more Functions/Methods. Each statement can have responsibilities too.
Let's give an example. Say we have a Billing module. If this module is implemented as a single huge class, does it adhere to the SRP?
From the point of view of the system, the module does indeed adhere to the SRP. The fact that it's a mess doesn't affect this fact.
From the point of view of the module, the class that represents this module doesn't adhere to the SRP, as it will do a lot of other things, like communicate with the DB, send emails, do business logic etc.
Let's take a look at the different types of responsibilities.
When something should be done
How it should be done
Let's take an example.
public class UserService_v1 {

    public void SomeOperation(Guid userID) {
        var user = GetUserByID(userID);
        // do something with the user
    }

    public User GetUserByID(Guid userID) {
        var query = $"SELECT * FROM USERS WHERE ID = {userID}";
        var dbResult = db.ExecuteQuery(query);
        return CreateUserFromDBResult(dbResult);
    }

    public User CreateUserFromDBResult(DbResult result) {
        // parse and return User
    }
}
public class UserService_v2 {
public void SomeOperation(Guid userID) {
var user = UserRepository.getByID(userID);
// do something with the user
}
}
Let's take a look at these two implementations.
UserService_v1 and UserService_v2 do exactly the same thing, but in different ways. From the point of view of the System, these services adhere to the SRP, as they contain operations related to Users.
Now let's take a look at what they actually do to complete their work.
UserService_v1 does these things:
Builds a SQL query string.
Calls the db to execute the query
Takes the specific DbResult and creates a User from it.
Does the operation on the User
UserService_v2 does these things:
Requests the User by ID from the repository
Does the operation on the User
UserService_v1 contains:
How specific query is build
How the specific DbResult is mapped to a User
When this query needs to be called (at the beginning of the operation, in this case)
UserService_v2 contains:
When a User should be retrieved from the DB
UserRepository contains:
How specific query is build
How the specific DbResult is mapped to a User
What we do here is to move the responsibility of How from the Service to the Repository. This way each class has one reason to change. If how changes, we change the Repository. If when changes, we change the Service.
This way we create objects that collaborate with each other to do specific work by dividing responsibilities. The tricky part is: which responsibilities do we divide?
If we have a UserService and an OrderService, we don't divide when and how here. We divide what, so we can have one service per Entity in our system.
It's natural for these services to need other objects to do their work. We can of course add all of the responsibilities of what, when and how to a single object, but that just makes the object messy, unreadable and hard to change.
In this regard, the SRP helps us achieve cleaner code by having many smaller parts that collaborate with and use each other.
Let's take a look at your specific case.
If you can move the responsibility for how the ThirdPartyRequest is created and signed into the ThirdPartyClient, your SendRequestToThirdPartySystemService will only tell when this request should be sent. This will remove Marshaller and CryptoService as dependencies from your SendRequestToThirdPartySystemService.
Also, you have SerializationUtils, which you could probably rename to Serializer to capture the intent better, as Utils is a name we stick on objects that we just don't know how to name, and such classes tend to accumulate a lot of logic (and probably multiple responsibilities).
This will reduce the number of dependencies and your tests will have less things to mock.
Here's a version of the sendRequest method with fewer responsibilities:
@Override
public void sendRequest() {
    try {
        // params are not clear, as you don't show them in your code
        ThirdPartyResponse response = thirdPartyClient.sendRequest(param1, param2);

        byte[] serializedMessage = SerializationUtils.serialize(response);
        eventBus.sendToQueue(topicName, serializedMessage);
    } catch (Exception e) {
        log.error("Send request failed with exception: {}", e.getMessage());
    }
}
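For illustration, the creation and signing might move into the client along these lines (a sketch of my own; the private helpers are the methods that previously lived on the service, and getResponse is the client's existing call):

public class ThirdPartyClient {

    private final CryptoService cryptoService;
    private final Marshaller marshaller;

    public ThirdPartyClient(CryptoService cryptoService, Marshaller marshaller) {
        this.cryptoService = cryptoService;
        this.marshaller = marshaller;
    }

    public ThirdPartyResponse sendRequest(String param1, String param2) {
        ThirdPartyRequest request = createThirdPartyRequest(param1, param2); // moved from the service
        request.setSignature(signRequest(request));                          // moved from the service
        return getResponse(request);
    }

    // createThirdPartyRequest, signRequest and getResponse omitted
}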
From your code I'm not sure whether you can also move the responsibility for serialization and deserialization into the EventBus, but if you can, it will remove serialization from your service as well. This will make the EventBus responsible for how it serializes and stores the things inside it, making it more cohesive. Other objects that collaborate with it will just tell it to send an object to the queue, not caring how that object gets there.
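Sketched with the names from the question (the actual queueing is left out), that might look like:

public class EventBus {

    // Callers hand over plain objects; the bus owns the wire format.
    public void sendToQueue(String topic, Serializable message) {
        byte[] payload = SerializationUtils.serialize(message);
        // ... enqueue payload under topic
    }
}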
We created a resource, like:
#Path("whatever")
public class WhateverResource {
#POST
public Response createWhatever(CreateBean bean) { ...
#DELETE
#Path("/{uuid}")
public void deleteWhatever(#PathParam("uuid") UUID uuid) { ...
and so on for GET, PUT, HEAD.
Now we figured out that we need to check whether the underlying feature is actually enabled. It is a single check, and when it fails, all operations should simply result in a 501.
My first thought was to duplicate the existing resource, like:
#Path("whatever")
public class WhateverResourceIsntAvailable {
#POST
public Response createWhatever(CreateBean bean) {
throw 501
#DELETE
#Path("/{uuid}")
public void deleteWhatever(#PathParam("uuid") UUID uuid) {
throw 501
So: two resources, both specifying the exact same operations. This leads to the problem that we can't (easily) invoke that check at the point in time when the resource needs to be registered.
Beyond that, this duplication doesn't look very elegant, and I am wondering if there is a "more canonical" way of solving this?
EDIT: another option would be to add the check into the existing resource, but that means doing the check in each operation, which can easily be forgotten when adding new operations.
I envision something like having:
a "base resource", that gets registered
when any operation is invoked on that resource, the request should be "delegated", depending on that underlying feature
either to a resource that just gives 501 always
or to the "real" resource that does the real work
And ideally, without duplicating checking code, or duplicating operation end point specs.
Following the suggestion given by user Samsotha, I implemented a simple filter, which is then "connected" via name binding, like:
#Path("whatever")
#MyNewFilter
public class WhateverResource {
...
And:
@MyNewFilter
@Provider // so the JAX-RS runtime discovers the filter
public class MyNewFilterImpl implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext context) {
        if ( /* feature is enabled */ ) {
            // nothing to do
        } else {
            context.abortWith(
                Response.status(Response.Status.NOT_IMPLEMENTED).entity("not implemented").build());
        }
    }
}
The major advantage of this approach is that one can annotate individual operations, but also a whole resource, such as my WhateverResource. The latter makes sure that every operation within that resource goes through the filter!
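For completeness, the name-binding annotation itself is just a custom annotation marked with @NameBinding; a minimal sketch:

// Binds MyNewFilterImpl to any resource class or method carrying this annotation.
@NameBinding
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.TYPE, ElementType.METHOD })
public @interface MyNewFilter {
}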
(Further details can be found in any decent Jersey tutorial, like the one at Baeldung.)
I have an immutable User entity:
public class User {
final LocalDate lastPasswordChangeDate;
// final id, name, email, etc.
}
I need to add a method that will return information on whether the user's password must be changed, i.e. whether it has not been changed for more than the passwordValidIntervalInDays system setting.
The current approach:
public class UserPasswordService {
private SettingsRepository settingsRepository;
@Inject
public UserPasswordService(SettingsRepository settingsRepository) {
this.settingsRepository = settingsRepository;
}
public boolean passwordMustBeChanged(User user) {
return user.lastPasswordChangeDate.plusDays(
settingsRepository.get().passwordValidIntervalInDays
).isBefore(LocalDate.now());
}
}
The question is how to make the above code more object-oriented and avoid the anemic domain model antipattern. Should the passwordMustBeChanged method be moved to User? If so, how should it access SettingsRepository: should the repository be injected into User's constructor, should a Settings instance be provided to the constructor, or should the passwordMustBeChanged method require a Settings instance as a parameter?
The code of Settings and SettingsRepository is not important, but for completeness, here it is:
public class Settings {
int passwordValidIntervalInDays;
public Settings(int passwordValidIntervalInDays) {
this.passwordValidIntervalInDays = passwordValidIntervalInDays;
}
}
public class SettingsRepository {
public Settings get() {
// load the settings from the persistent storage
return new Settings(10);
}
}
For a system-wide password expiration policy your approach is not that bad, as long as your UserPasswordService is a domain service, not an application service. Embedding the password expiration policy within User would be a violation of the SRP IMHO, which is not much better.
You could also consider something like the following (where the factory was initialized with the correct settings):
PasswordExpirationPolicy policy = passwordExpirationPolicyFactory.createDefault();
boolean mustChangePassword = user.passwordMustBeChanged(policy);
//class User
public boolean passwordMustBeChanged(PasswordExpirationPolicy policy) {
return policy.hasExpired(currentDate, this.lastPasswordChangeDate);
}
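A minimal sketch of what such a policy object might look like, assuming the settings shown in the question (the class itself is my own illustration):

public class PasswordExpirationPolicy {

    private final int passwordValidIntervalInDays;

    public PasswordExpirationPolicy(int passwordValidIntervalInDays) {
        this.passwordValidIntervalInDays = passwordValidIntervalInDays;
    }

    // True when the last change lies further back than the allowed interval.
    public boolean hasExpired(LocalDate currentDate, LocalDate lastPasswordChangeDate) {
        return lastPasswordChangeDate
                .plusDays(passwordValidIntervalInDays)
                .isBefore(currentDate);
    }
}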
If eventually the policy can be specified for individual users then you can simply store policy objects on User.
You could also make use of the ISP (Interface Segregation Principle) with your current design and implement a PasswordExpirationPolicy interface on your UserPasswordService. That will give you the flexibility to refactor into real policy objects later on without having to change how the User interacts with the policy.
If you had a Password value object you could also make things slightly more cohesive, with something like the following (the password creation date would be embedded in the password VO):
//class User
public boolean passwordMustBeChanged(PasswordExpirationPolicy policy) {
return this.password.hasExpired(policy);
}
Just to throw out another possible solution: you could implement a long-running process that does the expiration check and sends a command to a PasswordExpiredHandler, which would mark the user as having an expired password.
I have stumbled upon a document that provides an answer to my question:
A common problem in applying DDD is when an entity requires access to data in a repository or other gateway in order to carry out a business operation. One solution is to inject repository dependencies directly into the entity, however this is often frowned upon. One reason for this is because it requires the plain-old-(C#, Java, etc…) objects implementing entities to be part of an application dependency graph. Another reason is that it makes reasoning about the behavior of entities more difficult, since the Single-Responsibility Principle is violated. A better solution is to have an application service retrieve the information required by an entity, effectively setting up the execution environment, and provide it to the entity.
http://gorodinski.com/blog/2012/04/14/services-in-domain-driven-design-ddd/