Say I've got an entity class Person and a controller PersonController. I've got a custom REST endpoint I want to implement and cannot use a CrudRepository method for.
This is what my PersonController looks like:
@RepositoryRestController
@RequestMapping("/people")
public class PersonController {

    @Autowired
    private PeopleRestResource peopleRestResource; // @RepositoryRestResource extending CrudRepository

    @GetMapping("/custom")
    public ResponseEntity<?> getCustomPeople(PersistentEntityResourceAssembler persistentEntityResourceAssembler) {
        Set<Person> people = stream(this.peopleRestResource.findAll().spliterator(), true)
                .filter(/* Filter logic */)
                .collect(toSet());
        return ok(persistentEntityResourceAssembler.toFullResource(people));
    }
}
This will throw an IllegalArgumentException with the message PersistentEntity must not be null. people will actually contain a set of 2 Person objects, so this error message was a bit confusing at first. However, I assume the message actually means that a Set is not a persistent entity, because if I return just one person, the code runs just fine:
@GetMapping("/custom")
public ResponseEntity<?> getCustomPeople(PersistentEntityResourceAssembler persistentEntityResourceAssembler) {
    Person person = stream(this.peopleRestResource.findAll().spliterator(), true)
            .filter(/* Filter logic */)
            .findFirst()
            .orElseThrow(() -> new IllegalStateException());
    return ok(persistentEntityResourceAssembler.toFullResource(person));
}
Is there a way to make use of the PersistentEntityResourceAssembler to construct a HAL resource for a list of entities?
Preferably I would not have to construct a Resources object and build all the links myself.
To return a list of entities you can use CollectionModel (https://docs.spring.io/spring-hateoas/docs/current/reference/html/#fundamentals). You can call toCollectionModel() on the PersistentEntityResourceAssembler:
@GetMapping("/custom")
public ResponseEntity<?> getCustomPeople(PersistentEntityResourceAssembler persistentEntityResourceAssembler) {
    Iterable<Person> persons = this.peopleRestResource.findAll(); // CrudRepository#findAll returns an Iterable, which toCollectionModel accepts
    return ok(persistentEntityResourceAssembler.toCollectionModel(persons));
}
I have read a lot about this but can't come to a conclusion about the best way to test a method that depends on the results of other method calls to perform its actions.
Some of the questions I've read include:
Testing methods that depend on each other
Unit testing a method that calls other methods
Unit testing a method that calls another method
Some of the answers suggest that we should only test the methods that perform a single action, and then test the methods that call them for conditional behaviour (for example, verifying whether a given method was called or not). That's fine, I get it, but I'm struggling with another scenario.
I have a service with a REST api.
The controller has a create method that receives a DTO and calls the Service class create method with this argument (DTO).
I'm trying to practice TDD, and for this I'm using a project I'm building without a database.
The code is as follows:
@Service
public class EntityService implements FilteringInterface {

    private MemoryDatabase db = MemoryDatabase.getInstance();

    // Create method called from the controller: receives a DTO to create a new
    // Entity after validating that its name and code parameters are unique
    public EntityDTO create(EntityDTO dto) throws Exception {
        validateUniqueFields(dto);
        Entity entity = Entity.toEntity(dto, "id1"); // maps DTO to Entity object
        db.add(entity);
        return new EntityDTO.Builder(entity); // maps Entity to DTO
    }

    public void validateUniqueFields(EntityDTO dto) throws Exception {
        Set<Entity> foundEntities = filterEntityByNameOrCode(dto.getName(),
                dto.getCode(), db.getEntities());
        if (!foundEntities.isEmpty()) {
            throw new Exception("Already exists");
        }
    }
}
This is the interface with methods reused by other service classes:
public interface FilteringInterface {

    default Set<Entity> filterEntityByNameOrCode(String name, String code, Set<Entity> list) {
        return list.stream().filter(e -> e.getSiteId().equals(siteId)
                && (e.getName().equals(name)
                || e.getCode().equals(code))).collect(Collectors.toSet());
    }

    default Optional<Entity> filterEntityById(String id, Set<Entity> list) {
        return list.stream().filter(e -> e.getId().equals(id)).findAny();
    }
}
So, I'm testing this service class and I need to test the create() method because it can have different behaviors:
If the received DTO has a name that already exists on the list of entities -> throws Exception
If the received DTO has a code that already exists on the list of entities -> throws Exception
If the received DTO has a name and a code that already exist on the list of entities -> throws Exception
If both name and code are different, then everything is OK and the entity is created -> adds the entity to the existing list -> converts the entity to a DTO and returns it.
Problem:
To test any of the scenarios, take scenario 1: I need to make the filterEntityByNameOrCode() method return a set containing an Entity that has the same name as the Entity I'm trying to create. This method is called inside the validateUniqueFields() method.
Problem is: I can't call Mockito's when() for any of these methods because, for that, I would have to mock the service class, which is the class I'm testing, and thus that would be the wrong approach.
I've also read that using a Spy for this is also the wrong approach.
So, where does that leave me?
Also: if this code is not the correct approach, and that's why it can't be tested correctly, then what should the correct approach be?
This service will have other methods (delete, update, etc.). All of these methods will make use of the FilteringInterface as well, so I will have the same problems.
What is the correct way of testing a service class?
I would apply a DI (dependency injection) pattern in your service, in order to mock and control the db variable.
@Service
public class EntityService implements FilteringInterface {

    private Persistence db;

    public EntityService(Persistence db) {
        this.db = db;
    }
}
After that, you will be able to stub the returned Set of entities according to your scenarios:
@ExtendWith(MockitoExtension.class)
class EntityServiceTest {

    @Mock
    private Persistence persistence;

    @InjectMocks
    private EntityService entityService;

    @BeforeEach
    void before() {
        final Set<Entity> existentEntity = Set.of(new Entity(1L, 1L, "name", "code"));
        when(persistence.getEntities()).thenReturn(existentEntity);
    }

    @Test
    void shouldThrowWhenNameAlreadyExists() {
        final EntityDTO dto = new EntityDTO(1L, "name", "anything");
        // create() declares a checked Exception, so expect Exception rather than RuntimeException
        assertThrows(Exception.class, () -> entityService.create(dto));
    }

    @Test
    void shouldThrowWhenCodeAlreadyExists() {
        final EntityDTO dto = new EntityDTO(1L, "anything", "code");
        assertThrows(Exception.class, () -> entityService.create(dto));
    }

    @Test
    void shouldThrowWhenNameAndCodeAlreadyExists() {
        final EntityDTO dto = new EntityDTO(1L, "name", "code");
        assertThrows(Exception.class, () -> entityService.create(dto));
    }

    @Test
    void shouldNotThrowWhenUnique() throws Exception {
        final EntityDTO dto = new EntityDTO(1L, "diff", "diff");
        final EntityDTO entityDTO = entityService.create(dto);
        assertNotNull(entityDTO);
    }
}
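For the happy path you might also verify the interaction with the mocked persistence layer. A minimal sketch of one more test you could add to the class above, assuming Persistence exposes the add(Entity) method the service calls:

@Test
void shouldAddEntityWhenUnique() throws Exception {
    final EntityDTO dto = new EntityDTO(1L, "diff", "diff");

    entityService.create(dto);

    // verify that the service handed a newly mapped Entity to the persistence layer
    verify(persistence).add(any(Entity.class));
}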
In my microservice written with Spring Boot I have the following DTO and entity:
@Entity
class Order {

    @Id
    private Long id;

    @OneToMany
    private List<Product> products;

    // getters, setters

    public Product getMainProduct() {
        final String someBusinessValue = "A";
        return products.stream()
                .filter(product -> someBusinessValue.equals(product.getSomeBusinessValue()))
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("No product found with someBusinessValue 'A'"));
    }
}
class OrderRequest {

    private List<ProductDto> productDtos;

    // getters, setters

    public ProductDto getMainProductDto() {
        final String someBusinessValue = "A";
        return productDtos.stream()
                .filter(productDto -> someBusinessValue.equals(productDto.getSomeBusinessValue()))
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("No productDto found with someBusinessValue 'A'"));
    }
}
As seen, both the entity and the DTO contain a business logic method for picking the "main" product from the list of all products. This "main" product is needed in many parts of the code (only in the service layer). I received the following comment after adding these methods:
Design-wise you made it (in the DTO) and the one in the DB entity tightly coupled through all layers. This method is used in services only, so the general architecture rules for layers must apply; in this particular case, the "separation of concerns" rule. Essentially it means that the service layer does not need to know about the request parameter of the controller, just as the controller shouldn't be coupled with what's returned from the service layer. Otherwise tight coupling occurs. Here's a schematic example of how it should be:
class RestController {

    @Autowired
    private ItemService itemService;

    public CreateItemResponse createItem(CreateItemRequest request) {
        CreateItemDTO createItemDTO = toCreateItemDTO(request);
        ItemDTO itemDTO = itemService.createItem(createItemDTO);
        return toResponse(itemDTO);
    }
}
In fact, it is proposed to create another intermediate DTO (OrderDto), which additionally contains the main product field:
class OrderDto {

    private List<ProductDto> productDtos;
    private ProductDto mainProductDto;

    // getters, setters
}
The OrderRequest would be converted into this OrderDto in the controller before being passed to the service layer, and the Order entity would likewise be converted to it in the service layer itself before being worked with. The method for obtaining the main product would then live in the mapper. Schematically it looks like this:
---OrderRequest---> Controller(convert to OrderDto)
---OrderDto--> Service(Convert OrderEntity to OrderDto) <---OrderEntity--- JpaRepository
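For clarity, here is a minimal sketch of what the proposed mapper might look like; the OrderMapper class and its method names are just my illustration of the suggestion, not part of the review comment:

public class OrderMapper {

    private static final String MAIN_BUSINESS_VALUE = "A";

    public OrderDto toDto(OrderRequest request) {
        OrderDto dto = new OrderDto();
        dto.setProductDtos(request.getProductDtos());
        // the "main product" selection logic now lives here instead of in the entity/request DTO
        dto.setMainProductDto(findMainProduct(request.getProductDtos()));
        return dto;
    }

    private ProductDto findMainProduct(List<ProductDto> productDtos) {
        return productDtos.stream()
                .filter(p -> MAIN_BUSINESS_VALUE.equals(p.getSomeBusinessValue()))
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("No productDto found with someBusinessValue 'A'"));
    }
}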
In practice, this leads to a lot of additional conversions within the service and some awkward work. For example, when updating an entity you now have two intermediate DTOs (one coming from the controller for the update, the other produced in the service layer from the entity found in the repository); you need to merge the state of one into the other, then map the result back to the entity and perform the update.
In addition, it takes a lot of time to refactor the entire microservice.
The question is: is it a bad approach to have such methods in the entity and the incoming DTO, given that these methods are used only in the service layer, and is there another way to refactor this?
If you need any clarification please let me know, thank you in advance!
I have a User entity and a Role entity. The fields are not important other than the fact that the User entity has a role_id field that corresponds to the id of its respective role. Since Spring Data R2DBC doesn't do any form of relations between entities, I am turning to the DTO approach. I am very new to R2DBC and reactive programming as a whole, and I cannot for the life of me figure out how to convert the Flux<User> my repository's findAll() method returns into a Flux<UserDto>. My UserDto class is extremely simple:
@Data
@RequiredArgsConstructor
public class UserDto
{
    private final User user;
    private final Role role;
}
Here is the UserMapper class I'm trying to make:
@Service
@RequiredArgsConstructor
public class UserMapper
{
    private final RoleRepository roleRepo;

    public Flux<UserDto> map(Flux<User> users)
    {
        // ???
    }
}
How can I get this mapper to convert a Flux<User> into a Flux<UserDto> containing the user's respective role?
Thanks!
Assuming your RoleRepository has a findById() method or similar to find a Role given its ID, and your user object has a getRoleId(), you can just do it via a standard map call:
return users.map(u -> new UserDto(u, roleRepo.findById(u.getRoleId())));
Or in the case where findById() returns a Mono:
return users.flatMap(u -> roleRepo.findById(u.getRoleId()).map(r -> new UserDto(u, r)));
You may of course want to add additional checks if it's possible that getRoleId() could return null.
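For example, a minimal sketch of one way to handle a possibly-null getRoleId(), assuming a user without a role should simply map to a UserDto with a null Role:

return users.flatMap(u -> u.getRoleId() == null
        // no role assigned: emit the DTO without looking anything up
        ? Mono.just(new UserDto(u, null))
        // otherwise fetch the role reactively and combine it with the user
        : roleRepo.findById(u.getRoleId()).map(r -> new UserDto(u, r)));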
Converting the data from a business object to a database object:
private static UserDAO covertUserBOToBUserDAO(UserBO userBO) {
    return new UserDAO(userBO.getUserId(), userBO.getName(), userBO.getMobileNumber(),
            userBO.getEmailId(), userBO.getPassword());
}
Converting the data from a database object to a business object:
private static Mono<UserBO> covertUserDAOToBUserBO(UserDAO userDAO) {
    return Mono.just(new UserBO(userDAO.getUserId(), userDAO.getName(),
            userDAO.getMobileNumber(), userDAO.getEmailId(), userDAO.getPassword()));
}
Now in the service, getting all users asynchronously:
public Flux<UserBO> getAllUsers() {
    return userRepository.findAll().flatMap(UserService::covertUserDAOToBUserBO);
}
Since flatMap is asynchronous, we also benefit from asynchronous execution when converting the object from DAO to BO.
Similarly, when saving data I tried the following:
public Mono<UserBO> saveUser(UserBO userBO)
{
    return userRepository.save(covertUserBOToBUserDAO(userBO))
            .flatMap(UserService::covertUserDAOToBUserBO);
}
I'm currently messing around with a Spring Boot REST API project for instructional purposes. I have a rather large table with 22 columns loaded into a MySQL database and am trying to give the user the ability to filter the results by multiple columns (let's say 6 for the purposes of this example).
I am currently extending a Repository and have defined methods such as findByParam1, findByParam2, findByParam1OrderByParam2Desc, etc., and have verified that they are working as intended. My question is about the best way to let the user leverage all 6 optional RequestParams without writing a ridiculous number of conditionals/repository method variants. For example, I want to give the user the ability to hit the URL home/get-data/ to get all results, home/get-data?param1=xx to filter based on param1, and potentially home/get-data?param1=xx&param2=yy...&param6=zz to filter on all the optional parameters.
For reference, here is what the relevant chunk of my controller looks like (roughly).
@RequestMapping(value = "/get-data", method = RequestMethod.GET)
public List<SomeEntity> getData(@RequestParam Map<String, String> params) {
    String p1 = params.get("param1");
    if (p1 != null) {
        return this.someRepository.findByParam1(p1);
    }
    return this.someRepository.findAll();
}
My issue so far is that proceeding this way means I will basically need a combinatorial number of methods in my repository to support this functionality, growing with the number of fields/columns I want to filter on. Is there a better way to handle this, perhaps where I can filter 'in-place' as I check the Map to see which filters the user actually populated?
EDIT: So I'm currently implementing a 'hacky' solution that might be related to J. West's comment below. I assume that the user will specify all n parameters in the request URL, and if they do not (for example, they specify p1-p4 but not p5 and p6), I generate SQL that simply matches the missing params against LIKE '%'. It would look something like...
@Query("select u from User u where u.p1 like :p1 and u.p2 like :p2 ... and u.p6 like :p6")
List<User> findWithComplicatedQueryAndSuch(@Param("p1") String p1, ..., @Param("p6") String p6);
and in the Controller, I would detect if p5 and p6 were null in the Map and if so, simply change them to the String '%'. I'm sure there is a more precise and intuitive way to do this, although I haven't been able to find anything of the sort yet.
You can do this easily with a JpaSpecificationExecutor and a custom Specification: https://spring.io/blog/2011/04/26/advanced-spring-data-jpa-specifications-and-querydsl/
I would replace the HashMap with a DTO containing all optional GET params and build the Specification based on that DTO; obviously you can also keep the HashMap and build the Specification based on it.
Basically:
public class VehicleFilter implements Specification<Vehicle>
{
    private String art;
    private String userId;
    private String vehicle;
    private String identifier;

    @Override
    public Predicate toPredicate(Root<Vehicle> root, CriteriaQuery<?> query, CriteriaBuilder cb)
    {
        ArrayList<Predicate> predicates = new ArrayList<>();

        if (StringUtils.isNotBlank(art))
        {
            predicates.add(cb.equal(root.get("art"), art));
        }
        if (StringUtils.isNotBlank(userId))
        {
            predicates.add(cb.equal(root.get("userId"), userId));
        }
        if (StringUtils.isNotBlank(vehicle))
        {
            predicates.add(cb.equal(root.get("vehicle"), vehicle));
        }
        if (StringUtils.isNotBlank(identifier))
        {
            predicates.add(cb.equal(root.get("identifier"), identifier));
        }

        return predicates.isEmpty() ? null : cb.and(predicates.toArray(new Predicate[predicates.size()]));
    }

    // getters & setters
}
And the controller:
@RequestMapping(value = "/{ticket}/count", method = RequestMethod.GET)
public long getItemsCount(
        @PathVariable String ticket,
        VehicleFilter filter,
        HttpServletRequest request
) throws Exception
{
    return vehicleService.getCount(filter);
}
Service:
@Override
public long getCount(VehicleFilter filter)
{
    return vehicleRepository.count(filter);
}
Repository:
@Repository
public interface VehicleRepository extends JpaRepository<Vehicle, Integer>, JpaSpecificationExecutor<Vehicle>
{
}
Just a quick example adapted from company code, you get the idea!
Another solution with less coding would be to use QueryDsl integration with Spring MVC.
By using this approach, all your request parameters will be automatically resolved against your domain properties and appended to your query.
For reference check the documentation https://spring.io/blog/2015/09/04/what-s-new-in-spring-data-release-gosling#querydsl-web-support and the example project https://github.com/spring-projects/spring-data-examples/tree/master/web/querydsl
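A rough sketch of what this can look like, assuming a User entity with a generated Querydsl type and a userRepository (the names are illustrative, taken from the question's EDIT):

@GetMapping("/get-data")
public Iterable<User> getData(@QuerydslPredicate(root = User.class) Predicate predicate) {
    // request parameters such as ?param1=xx&param2=yy are bound to User properties
    // of the same name and combined into the Predicate automatically
    return userRepository.findAll(predicate);
}

public interface UserRepository extends JpaRepository<User, Long>, QuerydslPredicateExecutor<User> {
}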
You can do it even more easily using the Query By Example (QBE) technique, if your repository interface extends JpaRepository: that interface extends QueryByExampleExecutor, which provides a findAll method that takes an Example<T> as an argument.
This approach fits your scenario well, since your entity has a lot of fields and you want the user to be able to fetch the rows matching a filter represented as a subset of the entity's fields and their corresponding values.
Let's say the entity is User (as in your example) and you want to create an endpoint for fetching users whose attribute values are equal to the ones specified. That could be accomplished with the following code:
Entity class:
@Entity
public class User implements Serializable {

    @Id
    private Long id;
    private String firstName;
    private String lastName;
    private Integer age;
    private String city;
    private String state;
    private String zipCode;
}
Controller class:
@Controller
public class UserController {

    private UserRepository repository;

    private UserController(UserRepository repository) {
        this.repository = repository;
    }

    @GetMapping
    public List<User> getMatchingUsers(@RequestBody User userFilter) {
        return repository.findAll(Example.of(userFilter));
    }
}
Repository class:
@Repository
public interface UserRepository extends JpaRepository<User, Integer> {
}
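If you later need partial or case-insensitive matching instead of strict equality, an ExampleMatcher can be supplied along with the probe; a small sketch (this particular matcher configuration is just an illustration):

return repository.findAll(Example.of(userFilter,
        ExampleMatcher.matching()
                .withIgnoreCase() // ignore case for String fields
                .withStringMatcher(ExampleMatcher.StringMatcher.CONTAINING) // match substrings instead of exact values
                .withIgnoreNullValues())); // skip fields left null in the filter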
I'm currently working on a Spring Boot API to interface with a MongoRepository, but I'm having trouble understanding how the JSON being passed becomes a Document for storage within Mongo. I currently have a simple API that stores a group of users:
@Document
@JsonInclude
public class Group {

    @Id
    @JsonView(Views.Public.class)
    private String id;

    @JsonView(Views.Public.class)
    private String name;

    @JsonView(Views.Public.class)
    private Set<GroupMember> groupMembers = new HashSet<>();
}
There are also setter and getter methods for each of the fields, although I don't know how necessary those are either (see questions at the end).
Here is the straightforward component I'm using:
@Component
@Path("/groups")
@Api(value = "/groups", description = "Group REST")
public class Groups {

    @Autowired
    private GroupService groupService;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    @ApiOperation(value = "Get all Groups", response = Group.class, responseContainer = "List")
    @JsonView(Views.Public.class)
    public List<Group> getAllGroups() {
        return groupService.getAllGroups();
    }

    @POST
    @Produces(MediaType.APPLICATION_JSON)
    @Consumes(MediaType.APPLICATION_JSON)
    @ApiOperation(value = "Create a Group", response = Group.class)
    @JsonView(Views.Detailed.class)
    public Group submitGroup(Group group) {
        return groupService.addGroup(group);
    }
}
Finally, I have a Service class:
@Service
public class GroupServiceImpl implements GroupService {

    @Autowired
    private GroupRepository groupRepository;

    @Override
    public Group addGroup(Group group) {
        group.setId(null);
        return groupRepository.save(group);
    }

    @Override
    public List<Group> getAllGroups() {
        return groupRepository.findAll();
    }
}
The GroupRepository is simply an interface which extends MongoRepository<Group, String>.
Now, when I actually make a call to the POST method, with a body containing:
{
    "name": "group001",
    "groupMembers": []
}
I see that it properly inserts this group with a random Mongo UUID. However, if I try to insert GroupMember objects inside the list, I receive a null pointer exception. From this, I have two questions:
How does Spring Boot (Jackson?) know which fields to deserialize from the JSON being passed? I tested this after deleting the getter and setter methods, and it still works.
How does Spring Boot handle nested objects, such as the Set inside the class? I tested with List instead of Set, and it worked, but I have no idea why. My guess is that for each object that is both declared in my class and listed in my JSON object, Spring Boot is calling a constructor that it magically created behind the scenes, and one doesn't exist for the Set interface.
Suppose I'm adamant on using Set (the same user shouldn't show up twice anyway). What tools can I use to get SpringBoot to work as expected?
It seems to me that a lot of the things that happen in Spring are very behind-the-scenes, which makes it difficult for me to understand why things work when they do. Not knowing why things work makes it difficult to construct things from scratch, which makes it feel as though I'm hacking together a project rather than actually engineering one. So my last question is something like, is there a guide that explains the wiring behind the scenes?
Finally, this is my first time working with Spring... so please excuse me if my questions are entirely off the mark, but I would appreciate any answers nonetheless.