I am building a REST API with Spring Boot and am using this controller.
@RestController
class EmployeeController {

    private final EmployeeRepository repository;

    EmployeeController(EmployeeRepository repository) {
        this.repository = repository;
    }

    @GetMapping("/employees")
    List<Employee> all() {
        return repository.findAll();
    }

    @PostMapping("/employees")
    Employee newEmployee(@RequestBody Employee newEmployee) {
        return repository.save(newEmployee);
    }
}
I want to ensure that API consumers cannot spam multiple concurrent POST requests with the same Employee. I know that I can check whether the entity already exists in the database before saving it, but I am afraid the performance will be bad. I have also noticed that you can use annotations like @Version in your entity to make updates on existing entities safer.
But is there also a way, or a best practice in Spring, for how to handle such POST requests with a potentially new entity?
What kind of request throughput are you expecting to the POST /employees endpoint? While performance is important, premature optimization is almost always going to make your code uglier than it needs to be, for little gain.
As your code currently stands, multiple concurrent POST /employees requests would be handled on a first-come, first-served basis: the first request for a given UNIQUE constraint (which is hopefully enforced by your underlying DBMS; see the sketch below) succeeds, and every later request for the same employee fails with e.g. a ConstraintViolationException (mapped to e.g. a DataIntegrityViolationException). From this point of view (as long as you do not have a complicated distributed DBMS setup), the consistency of the data is still guaranteed.
The downside, of course, is that the error messages that would be returned would be:
Vendor-specific and leak the underlying implementation (e.g. we're showing the client that we're using Hibernate)
Potentially difficult for the client to parse.
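For reference, a minimal sketch of what enforcing such a constraint at the entity level could look like (assuming the employee name is the natural key that must be unique; adapt to your actual model):
@Entity
public class Employee {

    @Id
    @GeneratedValue
    private Long id;

    // JPA/Hibernate generates a UNIQUE constraint on this column, so the
    // database rejects concurrent duplicates regardless of request timing
    @Column(nullable = false, unique = true)
    private String name;

    // constructors, getters and setters omitted
}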
If you instead change the implementation to something like the following:
@PostMapping("/employees")
Employee newEmployee(@RequestBody Employee newEmployee) {
    verifyEmployeeDoesNotExist(newEmployee);
    return repository.save(newEmployee);
}

private void verifyEmployeeDoesNotExist(Employee employee) {
    if (repository.exists(employee)) {
        throw new EmployeeAlreadyExistsException("Employee " + employee.getName() + " already exists");
    }
}
then you could more easily control the flow of your endpoint and the underlying process, which would allow for more easily digestible exception handling. This could be further improved by adding e.g. custom exceptions that also carry a pre-defined error code, such as error 409 code 1010 Employee already exists.
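As a sketch of what that could look like (EmployeeAlreadyExistsException is the custom exception from the snippet above; the code 1010 payload shape is purely illustrative):
@RestControllerAdvice
class EmployeeExceptionHandler {

    @ExceptionHandler(EmployeeAlreadyExistsException.class)
    ResponseEntity<Map<String, Object>> handleAlreadyExists(EmployeeAlreadyExistsException ex) {
        // 409 Conflict plus an application-level error code the client can parse
        Map<String, Object> body = Map.of(
                "code", 1010,
                "message", ex.getMessage());
        return ResponseEntity.status(HttpStatus.CONFLICT).body(body);
    }
}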
Of course, Spring's built-in exception translation for Hibernate (e.g. HibernateExceptionTranslator) might already be good enough for your use case; it can even be extended, and that extension generalized.
In the end, the best practice is making your code clean, readable and maintainable. Then start adding functionality to monitor your code. After that, and only if you have a problem with performance, you can still optimize it.
Our application calls many external APIs which take a session token of the current user as input. So what we currently do is: in a controller, get the session token for the user and pass it into a service, which in turn might call another service or some API client. To give an idea, we end up with something like this (the example is .NET, but I think something similar is possible in Java):
public IActionResult DoSomething(string something)
{
this.someService.DoSomethingForUser(this.HttpContext.SessionToken, something);
return View();
}
And then we have
public class SomeService
{
private readonly IApiClient apiClient;
public SomeService(IApiClient apiClient)
{
this.apiClient = apiClient;
}
public void DoSomethingForUser(string sessionToken, string something)
{
this.apiClient.DoSomethingForUser(sessionToken, something);
}
}
It can also happen that in SomeService another service is injected which in turn calls the IApiClient instead of SomeService calling IApiClient directly, basically adding another "layer".
We had a discussion with the team if it isn't better to instead of passing the session token, inject it using DI so you get something like this:
public IActionResult DoSomething(string something)
{
this.someService.DoSomethingForUser(something);
return View();
}
And then we have
public class SomeService
{
private readonly IUserService userService;
private readonly IApiClient apiClient;
public SomeService(IUserService userService, IApiClient apiClient)
{
this.userService = userService;
this.apiClient = apiClient;
}
public void DoSomethingForUser(string something)
{
this.apiClient.DoSomethingForUser(userService.SessionToken, something);
}
}
The IUserService would have an IHttpContextAccessor injected:
public class UserService : IUserService
{
private readonly IHttpContextAccessor httpContextAccessor;
public UserService(IHttpContextAccessor httpContextAccessor)
{
this.httpContextAccessor = httpContextAccessor;
}
public string SessionToken => httpContextAccessor.HttpContext.SessionToken;
}
The benefits of this pattern are I think pretty clear. Especially with many services, it keeps the code "cleaner" and you end up with less boilerplate code to pass a token around.
Still, I don't like it. To me the downsides of this pattern are more important than its benefit:
I like that passing the token in the methods is explicit. It is clear that the service needs some sort of authentication token for it to function. I'm not sure you can call it a side effect, but the fact that a session token is magically injected three layers deep is impossible to tell just by reading the code.
Unit testing is a bit more tedious if you have to mock the IUserService.
You run into problems when calling this on another thread, e.g. calling SomeService from another thread. Although these problems can be mitigated by injecting another concrete type of IUserService which gets the token from somewhere else, it feels like a chore.
To me it strongly feels like an anti-pattern, but apart from the arguments above it is mostly a feeling. There was a lot of discussion and not everybody was convinced that it was a bad idea. Therefore, my question is: is it an anti-pattern, or is it perfectly valid? What are some strong arguments for and against it, so that hopefully there can be little debate that this pattern is either perfectly valid or something to avoid?
I would say the main point is to enable your desired separation of concerns. I think it is a good question if expressed in those terms. As Kit says, different people may prefer different solutions.
REQUEST SCOPED OBJECTS
These occur quite naturally in APIs. Consider the following example, where a UI calls an Orders API, then the Orders API forwards the JWT to an upstream Billing API. A unique Request ID is also sent, in case the flow experiences a temporary problem. If the flow is retried, the Request ID can be used by APIs to prevent data duplication. Yet business logic should not need to know about either the Request ID or the JWT.
BUSINESS LOGIC CLASS DESIGN
I would start by designing my logic classes with my desired inputs, then work out the DI later. In my example the OrderService class might use claims to get the user identity and also for authorization. But I would not want it to know about HTTP level concerns:
public class OrderService
{
    private readonly IBillingApiClient billingClient;
    private readonly ClaimsPrincipal user;

    public OrderService(IBillingApiClient billingClient, ClaimsPrincipal user)
    {
        this.billingClient = billingClient;
        this.user = user;
    }

    public async Task CreateOrder(OrderInput data)
    {
        this.Authorize();

        // build the domain order, then bill it
        var order = this.BuildOrder(data);
        await this.billingClient.CreateInvoice(order);
    }
}
DI SETUP
To enable my preferred business logic, I would write a little DI plumbing, so that I could inject request scoped dependencies in my preferred way. First, when the app starts, I would create a small middleware class. This will run early in the HTTP request pipeline:
private void ConfigureApiMiddleware(IApplicationBuilder api)
{
api.UseMiddleware<ClientContextMiddleware>();
}
In the middleware class I would then create a ClientContext object from runtime data. The OrderService class will run later, after next() is called:
public class ClientContextMiddleware
{
    private readonly RequestDelegate next;

    public ClientContextMiddleware(RequestDelegate next)
    {
        this.next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        var jwt = readJwt(context.Request);
        var requestId = readRequestId(context.Request);

        var holder = context.RequestServices.GetService<ClientContextHolder>();
        holder.ClientContext = new ClientContext(jwt, requestId);

        await this.next(context);
    }
}
In my DI composition at application startup I would express that the API client should be created when it is first referenced. In the HTTP request pipeline, the OrderService request scoped object will be constructed after the middleware has run. The below lambda will then be invoked:
private void RegisterDependencies(IServiceCollection services)
{
    services.AddScoped<IApiClient>(
        ctx =>
        {
            var holder = ctx.GetService<ClientContextHolder>();
            return new ApiClient(holder.ClientContext);
        });

    services.AddScoped<ClientContextHolder>();
}
The holder object is just due to a technology limitation. The MS stack does not allow you to create new request scoped injectable objects at runtime, so you have to update an existing one. In a previous .NET tech stack, the concept of child container per request was made available to developers, so the holder object was not needed.
ASYNC AWAIT
Request scoped objects are stored against the HTTP request object, which is the correct behaviour when using async await. The current thread ID may switch, e.g. from 4 to 6, after the call to the Billing API.
If the OrderService class has a transient scope, it could get recreated when the flow resumes on thread 6. If this is the case, then resolution will continue to work.
SUMMARY
Designing inputs first, then writing some support code if needed is a good approach I think, and it is also useful to know the DI techniques. Personally I think natural request scoped objects that need to be created at runtime should be usable in DI. Some people may prefer a different approach though.
In .NET, which is the area where I am an expert, this is not an anti-pattern; on the contrary, it is the model that many adopt. It is not, however, a model I would follow, for the following reasons:
It is not clear to whoever reads and uses the code where the token comes from, which works against clean code.
You load important information into a place that is frequently accessed by the framework (in the case of .NET Core).
Your classes will reference a large object carrying a lot of unnecessary information, when you could have created a leaner model that costs less memory and allocation time. I say this because the IHttpContextAccessor carries all the information relevant to your request.
Here is how I would take care of readability (clean code) and improve performance:
I would add a middleware or filter in my MVC flow where I would do the authentication part, and create a class like:
public class TokenAuthenticationValues
{
    public string TokenClient { get; set; }
    public string TokenValue { get; set; }
}
Of course my class is just an example, but in my middleware I would populate its token values after calling the necessary APIs (of course this model needs an interface, and it needs to be registered with .AddScoped() in the case of .NET).
That way I would use it in my methods by only injecting ITokenAuthenticationValues in the constructor, and I would have clear and clean information loaded in memory during the entire request.
If it is necessary to change the token in the middle of the request, any class can access it and change its value.
I would also have less unused memory allocated in my classes, since unlike the IHttpContextAccessor contract, ITokenAuthenticationValues only holds relevant information.
Hope this helps
I am migrating an existing CRUD application to Axon, and I have some concerns with the following scenario. I have the API given below to create groups:
@PostMapping
public Mono<IdDto> createGroup(@RequestBody @Valid CreateGroupCommand command) {
    log.trace("Create GroupResponseInfoDto request {}", command);
    return commandGateway.send(command)
            .map((o) -> new IdDto((UUID) o));
}
The command looks like,
@Data
public class CreateGroupCommand {

    @NotBlank
    private String name;

    @NotBlank
    private String description;
}
and the main requirement here is that the group name must be unique.
Therefore, in the Aggregate I have the following code to check this logic.
@CommandHandler
public GroupAggregate(CreateGroupCommand command, GroupRepository groupRepository, ModelMapper mapper) {
    log.trace("Handle create group command {}", command);
    groupRepository.findByName(command.getName())
            .ifPresent((g) -> {
                throw new ApplicationException(UserMsError.GROUP_ALREADY_EXISTS.name());
            });
    GroupCreatedEvent event = mapper.map(command, GroupCreatedEvent.class);
    event.setId(UUID.randomUUID());
    AggregateLifecycle.apply(event);
}
And once the validation passes, the event is persisted by a projector into the db.
@EventSourcingHandler
public void on(GroupCreatedEvent event) {
    log.trace("Group create event {}", event);
    groupRepository.findByName(event.getName())
            .ifPresent((g) -> {
                throw new ApplicationException(UserMsError.GROUP_ALREADY_EXISTS.name());
            });
    Group group = modelMapper.map(event, Group.class);
    groupRepository.save(group);
}
The problem now is that there is some lag between the command execution and the persistence of the event's results into the group table. If another user creates a group in that time, the command does not fail, as the record does not yet exist in the db. Now, I see on the Axon site that there is a way to put the command execution into some temporary table which we can use for validation purposes, but that requires additional coding and quite some extra effort for each such requirement. It also means that if we persist the details on command execution and the process then fails for some reason, the record will exist in our validation table but not in the system. If we instead try to validate the scenario on event execution, that extra effort might not be required, but then the problem is that I am not able to fail the API call so that the user knows the result. Could you please recommend whether there is an alternative approach to validate the input without an intermediate check?
The problem you are looking at is set-based validation. Whenever you're dealing with CQRS, it's the sets that will require extra work to be validated.
Although uncertain, I assume you're talking about the Set-Based Consistency Validation blog? That is, for a reason, the suggested approach to deal with set validation. Note that the implementation used in the blog can be found here.
In addition, it has quite recently seen an update that no longer suffers from the problem you describe as follows:
It also means, if we persist the details on command execution, and for some reason, the process fails, then the record will exist in our validation table but not on the system.
Axon's transaction logic, supported through the UnitOfWork, will roll back the entire transaction if something fails. This includes anything you do inside the UnitOfWork, such as updates to another table used for validation.
I get that it's some boilerplate code, but it is the predicament of having the uniqueness requirement on a set. What might be something you can look into is forcing the uniqueness through the Aggregate Identifier. Axon's Event Store logic ensures no two events are using the same aggregate identifier. So, if you try to input a new aggregate (hence a new event) for an already existing aggregate identifier, the operation will fail.
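A minimal sketch of that idea, assuming the group name is acceptable as the aggregate identifier (which also means GroupCreatedEvent would carry the name as its identifying field rather than a random UUID):
@Aggregate
public class GroupAggregate {

    @AggregateIdentifier
    private String name; // the unique group name doubles as the aggregate identifier

    @CommandHandler
    public GroupAggregate(CreateGroupCommand command, ModelMapper mapper) {
        // No repository lookup needed: the Event Store rejects a second
        // creation event for an identifier that already exists.
        GroupCreatedEvent event = mapper.map(command, GroupCreatedEvent.class);
        AggregateLifecycle.apply(event);
    }

    @EventSourcingHandler
    public void on(GroupCreatedEvent event) {
        this.name = event.getName();
    }

    protected GroupAggregate() {
        // required by Axon
    }
}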
This approach is typically not feasible whenever the set-based consistency validation issue is described, though, so I am guessing it won't help you out.
Concluding, I'd take your win from the shared repository on the blog to minimize your personal effort on the matter.
I've been trying to start a REST API with Spring Boot, and I'm struggling a bit with the separation of my resources and which endpoint should be in which file.
Let's say we have API endpoints to deal with a user and the achievements of this user:
/user/{id} GET - to fetch user by id
/achievement/{id} GET - to fetch achievement by id
Both are in their separate resource files:
UserResource
@RestController
public class UserResource {

    public UserResource() {...}

    @GetMapping("/users/{id}")
    public UserDTO getUser(@PathVariable String id) {
        log.debug("REST request to get User : {}", id);
        return userService.getUserWithAuthoritiesById(id).map(UserDTO::new).orElseThrow();
    }
}
And AchievementResource
@RestController
public class AchievementResource {

    public AchievementResource(...) {...}

    @GetMapping("/achievements/{id}")
    public ResponseEntity<Achievement> getAchievement(@PathVariable Long id) {
        return ResponseEntity.of(achievementRepository.findById(id));
    }
}
So far so good, pretty simple. My problem comes when I must get all achievements of a user. Naming convention says I should have an endpoint such as:
/user/{id}/achievements GET
But where should this endpoint be? I feel like both resource files could be a good fit: for the UserResource, the root of the endpoint is the user, but the AchievementResource could be logical too, since we are returning achievements.
Easy answer: you have the wrong problem
But where should this endpoint be?
The definition of the resource should be in your machine readable api definition. You produce the class files you need by feeding your definition into a code generator for your choice of language. The generator will put the classes it creates in files somewhere, and you leave them in this default arrangement until some point in the future when you have a compelling reason to arrange them differently (at which point, you fork the code generator and make your preferred design the default).
That said, when designing by hand there's nothing particularly special about "REST endpoints". The guidelines for where resource classes belong are no different from those for any other classes in Java....
That said, I find the literature around file layout heuristics rather disappointing. There doesn't seem to be a lot of material discussing the trade-offs of different designs, or the contexts in which one choice might be more compelling than another.
For your specific situation, I would advise putting the new resource into a file of its own. The argument here being that your UserResource has User dependencies, and your AchievementsResource has Achievement dependencies, but your new thing has both; as a matter of (hand waves) principle, we should avoid bringing unneeded Achievement dependencies into the namespace of the UserResource (and vice versa).
In other words, if we find ourselves adding imports to an existing file to implement a new thing, that's a hint that the new thing may be better placed somewhere else.
Using separate files also has nice mechanical advantages - it reduces merge collisions, each file will have its own source control history (meaning that the history of Users isn't cluttered with a bunch of commits that are exclusively about new thing). See Adam Tornhill's work over at CodeScene, for example.
Separating the controllers the way you did is not wrong; you should classify the methods by their general entity. If I need to recover a user's achievements, the endpoint is related to both, but where does the data come from? From the achievements. Knowing that each achievement must have a relationship in the database with the user, you could very well place the lookup in the achievement controller, with a method like List<Achievement> returnAchievementsByUser(Integer id).
It depends on your point of view and the business behind the scenes. You can use just one endpoint in many cases; if "users" are the main resources who have achievements, then /users/{user-id} and /users/{user-id}/achievements/{achievement-id} get the user by id and a specific achievement of that user:
@RestController
@RequestMapping("users")
public class UsersRestController {

    @GetMapping("/{user-id}")
    public UserDTO getUser(@PathVariable("user-id") String id) {
        // code...
    }

    @GetMapping("/{user-id}/achievements/{achievement-id}")
    public AchievementDTO getAchievement(@PathVariable("user-id") String userId,
                                         @PathVariable("achievement-id") String achievementId) {
        // code...
    }
}
And if locating "achievements" above "users" in the entity hierarchy has meaning for you and your business, then /achievements/{achievement-id}/users/{user-id} can be a REST representation:
@RestController
@RequestMapping("achievements")
public class AchievementsRestController {

    @GetMapping("/{achievement-id}")
    public AchievementDTO getAchievement(@PathVariable("achievement-id") String id) {
        // code
    }

    @GetMapping("/{achievement-id}/users/{user-id}")
    public UserDTO getUser(@PathVariable("achievement-id") String achievementId,
                           @PathVariable("user-id") String userId) {
        // code
    }
}
Finally, whenever they are not in an entity hierarchy, you can pass the userId to /achievements/{achievement-id} (or the achievementId to /users/{user-id}) as a RequestParam.
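For that last option, a small sketch (the achievementService lookup is an assumed helper):
@GetMapping("/achievements/{achievement-id}")
public AchievementDTO getAchievement(@PathVariable("achievement-id") String achievementId,
                                     @RequestParam("user-id") String userId) {
    // resolve the achievement scoped to the given user
    return achievementService.findByIdForUser(achievementId, userId);
}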
In short, when @CacheEvict is called on a method and the key for the entry is not found, GemFire throws an EntryNotFoundException.
Now in detail,
I have a class
class Person {
String mobile;
int dept;
String name;
}
I have two cache regions defined, personRegion and personByDeptRegion, and the service is as below:
@Service
class PersonServiceImpl {

    @Cacheable(value = "personRegion")
    public Person findByMobile(String mobile) {
        return personRepository.findByMobile(mobile);
    }

    @Cacheable(value = "personByDeptRegion")
    public List<Person> findByDept(int deptCode) {
        return personRepository.findByDept(deptCode);
    }

    @Caching(
        evict = { @CacheEvict(value = "personByDeptRegion", key = "#p0.dept") },
        put = { @CachePut(value = "personRegion", key = "#p0.mobile") }
    )
    public Person updatePerson(Person p1) {
        return personRepository.save(p1);
    }
}
When there is a call to updatePerson and there are no entries in the personByDeptRegion, this throws an EntryNotFoundException for the key 1 (or whatever the dept code is). There is a very good chance that this method will be called before the @Cacheable methods are called, and I want to avoid this exception.
Is there any way we could tweak the GemFire behavior to gracefully return when the key does not exist in a given region?
Alternatively, I am also eager to know if there is a better implementation of the above scenario using GemFire as a cache.
Spring Data Gemfire : 1.7.4
Gemfire Version : v8.2.1
Note: The above code is for representation purposes only; I have multiple services with the same issue in the actual project.
First, I commend you for using Spring's caching annotations on your application @Service components. All too often developers enable caching in their Repositories, which I think is bad form, especially if complex business rules (or even additional IO, e.g. calling a web service from a service component) are involved prior to or after the Repository interaction(s), particularly in cases where caching behavior should not be affected (or determined).
I also think your caching use case (updating one cache (personRegion) while invalidating another (personByDeptRegion) on a data store update) by combining a @CachePut with a @CacheEvict seems reasonable to me. Though, I would point out that the seemingly intended use of the @Caching annotation is to combine multiple caching annotations of the same type (e.g. multiple @CacheEvict or multiple @CachePut), as explained in the core Spring Framework Reference Guide. Still, there is nothing preventing your intended use.
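For completeness, the same-type combination that @Caching is primarily meant for looks like this (the second region, personByNameRegion, is hypothetical):
@Caching(evict = {
    @CacheEvict(value = "personByDeptRegion", key = "#p0.dept"),
    @CacheEvict(value = "personByNameRegion", key = "#p0.name") // hypothetical second region
})
public void deletePerson(Person person) {
    personRepository.delete(person);
}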
I created a similar test class here, modeled after your example above, to verify the problem. Indeed the jonDoeUpdateSuccessful test case fails (with the GemFire EntryNotFoundException, shown below) since no people in Department "R&D" were previously cached in the "DepartmentPeople" GemFire Region prior to the update, unlike the janeDoeUpdateSuccessful test case, which causes the cache to be populated before the update (even if the entry has no values, which is of no consequence).
com.gemstone.gemfire.cache.EntryNotFoundException: RESEARCH_DEVELOPMENT
at com.gemstone.gemfire.internal.cache.AbstractRegionMap.destroy(AbstractRegionMap.java:1435)
NOTE: My test uses GemFire as both a "cache provider" and a System of Record (SOR).
The problem really lies in SDG's use of Region.destroy(key) in the GemfireCache.evict(key) implementation rather than, and perhaps more appropriately, Region.remove(key).
GemfireCache.evict(key) has been implemented with Region.destroy(key) since inception. However, Region.remove(key) was not introduced until GemFire v5.0. Still, I can see no discernible difference between Region.destroy(key) and Region.remove(key) other than the EntryNotFoundException thrown by Region.destroy(key). Essentially, they both destroy the local entry (both key and value) as well as distribute the operation to other caches in the cluster (providing a non-LOCAL Scope is used).
So, I have filed SGF-539 to change SDG to call Region.remove(key) in GemfireCache.evict(key) rather than Region.destroy(key).
As for a workaround, well, there are basically only two things you can do:
Restructure your code and your use of the @CacheEvict annotation, and/or...
Make use of the condition on @CacheEvict.
It is unfortunate that a condition cannot be specified using a class type, something akin to Spring's Condition interface (in addition to SpEL), but that interface is intended for another purpose, and the @CacheEvict condition attribute does not accept a class type.
At the moment, I don't have a good example of how this might work so I am moving forward on SGF-539.
You can follow this ticket for more details and progress.
Sorry for the inconvenience.
-John
I have made a simple application for study purposes, and I want to write some unit/integration tests. I have read that I can mock the database instead of creating a new db for the tests. I will copy the code which I wrote; I hope that someone will explain to me how to mock the database.
public class UserServiceImpl implements UserService {

    @Autowired
    private UserOptionsDao uod;

    @Override
    public User getUser(int id) throws Exception {
        if (id < 1) {
            throw new InvalidParameterException();
        }
        return uod.getUser(id);
    }

    @Override
    public User changeUserEmail(int id, String email) {
        if (id < 1) {
            throw new InvalidParameterException();
        }
        String[] emailParts = email.split("@");
        if (emailParts[0].length() < 5) {
            throw new InvalidParameterException();
        } else if (!emailParts[1].equals("email.com")) {
            throw new InvalidParameterException();
        }
        return uod.changeUserEmail(id, email);
    }
}
The above is the part of the code that I want to test with the mocked database.
Generally you have three options:
Mock the data returned by UserOptionsDao, as @Betlista suggested, thus creating a "fake" DAO object.
Use an in-memory database like HSQLDB to create a database with mock data when the test starts, or
Use something like a Docker container to spin up an instance of MySQL or the like and populate it with data, so you can restart it as necessary.
None of these solutions are perfect.
With #1, your test will skip the intermediate steps of authenticating to the database and looking for data. That leaves a part of your code untested, and as they say, "the devil is in the details." Often people who mock DAOs like this run into problems when they try to deploy.
With #2, you connect to an actual database, but you have to make sure that you are using either the exact same type of database as in your production code or something compatible. It also makes debugging a pain, because you have to pause the test to see the contents of the database if something goes wrong.
With #3, you avoid all the problems of #1 and #2, but then you have to wire up all the Docker stuff. (I'm doing this right now, and I'm having problems too.) The advantage, though, is that like #2 you can set up all of your test data at once, and be guaranteed that the database you test against is exactly the same as the one in production.
In your case, I would go with #2, since the application is for study purposes. Yes, I know this is a long-winded answer, but as you gain experience, you will probably want to know how to "scale up."
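If you go with #2, a minimal sketch of a Spring Boot test against an embedded database could look like this (assuming H2 or HSQLDB is on the test classpath and the schema can be created automatically; class and method names are illustrative):
@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureTestDatabase // swaps the real DataSource for an embedded in-memory one
public class UserServiceImplIntegrationTest {

    @Autowired
    private UserService userService;

    @Test(expected = InvalidParameterException.class)
    public void getUser_rejectsInvalidId() throws Exception {
        userService.getUser(0); // fails validation before touching the database
    }
}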
What you can do very easily is to have your own implementation of UserOptionsDao in the test package and set this one on UserServiceImpl. This new implementation can return a fixed set of data, for example...
This is a high-level idea. You probably do not want to have many implementations (a different one for each test, in general), so you should use a mocking framework like Mockito or EasyMock; look at the documentation for more details.
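For example, with Mockito the DAO can be stubbed without writing a dedicated test implementation (a sketch in JUnit 4 style; the no-arg User constructor is assumed):
@RunWith(MockitoJUnitRunner.class)
public class UserServiceImplTest {

    @Mock
    private UserOptionsDao uod; // the "fake" DAO

    @InjectMocks
    private UserServiceImpl service; // Mockito injects the mock into the @Autowired field

    @Test
    public void changeUserEmail_delegatesToDao() {
        User expected = new User();
        Mockito.when(uod.changeUserEmail(1, "johnd@email.com")).thenReturn(expected);

        Assert.assertSame(expected, service.changeUserEmail(1, "johnd@email.com"));
    }

    @Test(expected = InvalidParameterException.class)
    public void changeUserEmail_rejectsShortLocalPart() {
        service.changeUserEmail(1, "ab@email.com"); // local part shorter than 5 characters
    }
}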