Hello everyone, I'm new to the Spring world. I want to know how to use a converter to update an object instead of updating each field one by one using getters and setters. Right now in my controller I have:
@PostMapping("/edit/{userId}")
public Customer updateCustomer(@RequestBody Customer newCustomer, @PathVariable final String userId)
{
return customerService.update(userId, newCustomer);
}
and this is how I'm updating the customer object :
@Override
public Customer update(String id, Customer newCustomer) {
Customer customer = customerRepository.findById(id).get();
customer.setFirstName(newCustomer.getFirstName());
customer.setLastName(newCustomer.getLastName());
customer.setEmail(newCustomer.getEmail());
return customerRepository.save(customer);
}
Instead of calling set and get each time, I want to use a converter.
The approach of passing the entity's id as a path variable when you're updating it isn't really right. Think about this: you have a @RequestBody, why don't you include the id inside this body too? Why do you want to specify a path variable for it?
Now, if you receive the full Customer with its id in the body, you don't have to make any calls to your repository, because Hibernate brings it into the persistent state based on its id, and a simple
public Customer update(Customer newCustomer) {
return customerRepository.save(newCustomer);
}
should work.
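For completeness, here is a minimal sketch of a matching controller (the mapping path and HTTP verb are my assumptions, not taken from the original code); the id now travels inside the request body instead of the URL:
@PutMapping("/edit")
public Customer updateCustomer(@RequestBody Customer newCustomer)
{
    // the client must fill in newCustomer's id for save() to perform an update rather than an insert
    return customerService.update(newCustomer);
}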
Q: What is a persistent state?
A: A persistent entity has been associated with a database table row and is being managed by the currently running Persistence Context. (customerRepository.findById() simply asks the database whether an entity with the specified id exists and, if it does, moves it into the persistent state. Hibernate manages this whole process as long as you have an @Id-annotated field and it is filled in; in other words:
Customer customer = new Customer();
customer.setId(1);
is ALMOST the same thing as :
Customer customer = customerRepository.findById(1).get();
)
TIP: Anyway, in case you didn't know, you shouldn't expose a model (entity) in the controller layer. Why? Let's say that your Customer model can have multiple permissions. One possible structure could look like this:
@Entity
public class Customer{
//private fields here;
@OneToMany(mappedBy="customer",--other configs here--)
private List<Permission> permissions;
}
and
@Entity
public class Permission{
@Id
private Long id;
private String name;
private String creationDate;
@ManyToOne(--configs here--)
private Customer customer;
}
You can see that there is a cross reference between the Customer and Permission entities, which will eventually lead to a stack overflow exception (if you don't understand this, think of a recursive function that has no stopping condition and is called over and over again => stack overflow; the same thing happens here).
What can you do? Create a so-called DTO class that the client receives instead of the model. How do you design this DTO? Think about what the user NEEDS to know.
1) Is "creationDate" from Permission a necessary field for the user? Not really.
2) Is "id" from Permission a necessary field for the user? In some cases yes, in others, no.
A possible CustomerDTO could look like this:
public class CustomerDTO
{
private String firstName;
private String lastName;
private List<String> permissions;
}
and you can notice that I'm using a List<String> instead of a List<Permission> for the customer's permissions; the strings are in fact the permissions' names.
public CustomerDTO convertModelToDto(Customer customer)
{
    // hard way
    CustomerDTO customerDTO = new CustomerDTO();
    customerDTO.setFirstName(customer.getFirstName());
    customerDTO.setLastName(customer.getLastName());
    customerDTO.setPermissions(
            customer.getPermissions()
                    .stream()
                    .map(permission -> permission.getName())
                    .collect(Collectors.toList())
    );

    // easy way => using a ModelMapper
    customerDTO = modelMapper.map(customer, CustomerDTO.class);
    return customerDTO;
}
Use ModelMapper to map one model into another.
First, define a function that can map source data into the target model. Keep it as a small utility you can reuse whenever you want.
public static <T> void merge(T source, T target) {
ModelMapper modelMapper = new ModelMapper();
modelMapper.getConfiguration().setMatchingStrategy(MatchingStrategies.STRICT);
modelMapper.map(source, target);
}
Use merge to map the data:
Customer customer = customerRepository.findById(id).get();
merge(newCustomer, customer);
customerRepository.save(customer);
Add the ModelMapper dependency in pom.xml:
<dependency>
<groupId>org.modelmapper</groupId>
<artifactId>modelmapper</artifactId>
<version>2.3.4</version>
</dependency>
Related
In my microservice written in Spring Boot I have the following DTO and entity:
@Entity
class Order {
@Id
private Long id;
@OneToMany
private List<Product> products;
// getters setters
public Product getMainProduct() {
final String someBusinessValue = "A";
return products.stream()
.filter(product -> someBusinessValue.equals(product.getSomeBusinessValue()))
.findFirst()
.orElseThrow(() -> new IllegalStateException("No product found with someBusinessValue 'A'"));
}
}
class OrderRequest {
private List<ProductDto> productDtos;
// getters setters
public ProductDto getMainProductDto() {
final String someBusinessValue = "A";
return productDtos.stream()
.filter(productDto -> someBusinessValue.equals(productDto.getSomeBusinessValue()))
.findFirst()
.orElseThrow(() -> new IllegalStateException("No productDto found with someBusinessValue 'A'"));
}
}
As you can see, both the entity and the DTO contain a business-logic method for picking the "main" product from the list of all products. This "main" product is needed in many parts of the code (only in the service layer). I received the following comment after adding these methods:
Design-wise you made it (in the DTO) and the one in the DB entity tightly coupled through all layers. This method is used in services only. That means that the general architecture rules for layers must apply, in this particular case the "separation of concerns" rule. Essentially it means that the service layer does not need to know about the request parameter of the controller, just as the controller shouldn't be coupled with what's returned from the service layer. Otherwise tight coupling occurs. Here's a schematic example of how it should be:
class RestController {

    @Autowired
    private ItemService itemService;

    public CreateItemResponse createItem(CreateItemRequest request) {
        CreateItemDTO createItemDTO = toCreateItemDTO(request);
        ItemDTO itemDTO = itemService.createItem(createItemDTO);
        return toResponse(itemDTO);
    }
}
In fact, it is proposed to create another intermediate DTO (OrderDto), which additionally contains the main product field:
class OrderDto {
private List<ProductDto> productDtos;
private ProductDto mainProductDto;
// getters setters
}
The OrderRequest would be converted into this OrderDto in the controller before being passed to the service layer, and the Order entity would be converted in the service layer itself before working with it. The method for obtaining the main product should then be placed in a mapper (see the sketch after the diagram). Schematically it looks like this:
---OrderRequest---> Controller(convert to OrderDto)
---OrderDto--> Service(Convert OrderEntity to OrderDto) <---OrderEntity--- JpaRepository
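A rough sketch of that mapper (the OrderMapper class and its method names are my assumptions; only the stream logic is carried over from the original methods) could look like this:
import java.util.List;

public class OrderMapper {

    private static final String MAIN_PRODUCT_MARKER = "A";

    public OrderDto toOrderDto(OrderRequest request) {
        OrderDto dto = new OrderDto();
        dto.setProductDtos(request.getProductDtos());
        dto.setMainProductDto(findMainProduct(request.getProductDtos()));
        return dto;
    }

    // the same selection logic that previously lived in Order and OrderRequest
    private ProductDto findMainProduct(List<ProductDto> productDtos) {
        return productDtos.stream()
                .filter(p -> MAIN_PRODUCT_MARKER.equals(p.getSomeBusinessValue()))
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("No productDto found with someBusinessValue 'A'"));
    }
}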
Actually, this leads to a lot of additional conversions within the service and some awkward work. For example, when updating an entity you now have two intermediate DTOs: one from the controller (the update payload) and one built from the entity found for the update and converted to the intermediate DTO in the service layer. You have to copy the state of one into the other, then map the result back to the entity and perform the update.
In addition, it takes a lot of time to refactor the entire microservice.
The question is: is it a bad approach to have such methods in the entity and the incoming DTO, given that these methods are used only in the service layer, and is there another way to refactor this?
If you need any clarification please let me know, thank you in advance!
I have a User entity and a Role entity. The fields are not important other than the fact that the User entity has a role_id field that corresponds to the id of its respective role. Since Spring Data R2DBC doesn't handle relations between entities, I am turning to the DTO approach. I am very new to R2DBC and reactive programming as a whole, and I cannot for the life of me figure out how to convert the Flux<User> that my repository's findAll() method returns into a Flux<UserDto>. My UserDto class is extremely simple:
@Data
@RequiredArgsConstructor
public class UserDto
{
private final User user;
private final Role role;
}
Here is the UserMapper class I'm trying to make :
@Service
@RequiredArgsConstructor
public class UserMapper
{
private final RoleRepository roleRepo;
public Flux<UserDto> map(Flux<User> users)
{
//???
}
}
How can I get this mapper to convert a Flux<User> into a Flux<UserDto> containing the user's respective role?
Thanks!
Assuming your RoleRepository has a findById() method or similar to find a Role given its ID, and your user object has a getRoleId(), you can just do it via a standard map call:
return users.map(u -> new UserDto(u, roleRepo.findById(u.getRoleId())));
Or in the case where findById() returns a Mono:
return users.flatMap(u -> roleRepo.findById(u.getRoleId()).map(r -> new UserDto(u, r)));
You may of course want to add additional checks if it's possible that getRoleId() could return null.
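For example, one hedged way to guard against a null role id (assuming the Mono-returning findById() from the previous snippet, and that a UserDto with a null role is acceptable):
return users.flatMap(u -> u.getRoleId() == null
        ? Mono.just(new UserDto(u, null))
        : roleRepo.findById(u.getRoleId()).map(r -> new UserDto(u, r)));
If findById() can also complete empty for an existing id, you may want a switchIfEmpty/defaultIfEmpty fallback as well, so the user is not silently dropped from the resulting Flux.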
Converting the data from business object to database object :
private static UserDAO covertUserBOToBUserDAO(UserBO userBO){
return new UserDAO(userBO.getUserId(), userBO.getName(), userBO.getMobileNumber(),
userBO.getEmailId(), userBO.getPassword());
}
Converting the data from database object to business object :
private static Mono<UserBO> covertUserDAOToBUserBO(UserDAO userDAO){
return Mono.just(new UserBO(userDAO.getUserId(), userDAO.getName(),
userDAO.getMobileNumber(), userDAO.getEmailId(), userDAO.getPassword()));
}
Now in service (getAllUsers) asynchronously:
public Flux<UserBO> getAllUsers(){
return userRepository.findAll().flatMap(UserService::covertUserDAOToBUserBO);
}
Since flatMap is asynchronous, we get the benefit of asynchronous execution even for converting the object from DAO to BO.
Similarly, for saving data I tried the following:
public Mono<UserBO> saveUser(UserBO userBO)
{
return
userRepository.save(covertUserBOToBUserDAO(userBO)).flatMap(UserService::covertUserDAOToBUserBO);
}
I am using Spring Boot with Spring Data JPA and PostgreSQL. I have an "item" entity that has a price, a quantity, an auto-generated int id, and the order it belongs to.
I've searched for how to edit that entity, changing only its price and quantity without creating a new entity, and the only answer I found is to get the entity from the DB, set each property to the new value, and then save it. But if I have 6 other properties besides price and quantity, that means the update method will call a setter 8 times, which seems like way too much boilerplate code for Spring. My question is: is there a better/default way to do that?
You can provide a copy constructor:
public Item(Item item) {
this(item.price, item.quantity);
}
or use org.springframework.beans.BeanUtils method:
BeanUtils.copyProperties(sourceItem, targetItem, "id");
Then in controller:
@RestController
@RequestMapping("/items")
public class ItemController {

    @Autowired
    private ItemRepo repo;

    @PutMapping("/{id}")
    public ResponseEntity<?> update(@PathVariable("id") Item targetItem, @RequestBody Item sourceItem) {
        BeanUtils.copyProperties(sourceItem, targetItem, "id");
        return ResponseEntity.ok(repo.save(targetItem));
    }
}
No, you don't need to set anything 8 times. If you want to change price and quantity only, just change those two. Put it in a @Transactional method:
@Transactional
public void updateItem(Item item){
// ....
// EntityManager em;
// ....
// Get 'item' into 'managed' state
if(!em.contains(item)){
item = em.merge(item);
}
item.price = newPrice;
item.quantity = newQuantity;
// You don't even need to call save(), JPA provider/Hibernate will do it automatically.
}
This example will generate a SELECT and an UPDATE query. And that's all.
Try using the @Query annotation and define your update statement:
@Modifying
@Transactional
@Query("update Site site set site.name=:name where site.id=:id")
void updateJustNameById(@Param("id")Long id, @Param("name")String name);
You should use Spring Data REST, which handles all of this by itself. You just have to send a PATCH request to the specified URL and provide the changed entity properties. If you have some knowledge of Spring Data REST, have a look at https://github.com/ArslanAnjum/angularSpringApi.
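A minimal sketch of that setup (assuming spring-boot-starter-data-rest is on the classpath; the resource path and id type are assumptions):
@RepositoryRestResource(path = "items")
public interface ItemRepository extends JpaRepository<Item, Integer> {
    // Spring Data REST exposes GET/POST/PUT/PATCH/DELETE endpoints under /items automatically
}
A PATCH request to /items/{id} whose body contains only price and quantity then updates just those two properties.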
Just use @DynamicUpdate on your entity class:
@Entity
@DynamicUpdate
public class Item {
}
How can one configure their JPA entities to not fetch related entities unless a certain execution parameter is provided?
According to Spring's documentation, 4.3.9. Configuring Fetch- and LoadGraphs, you need to use the @EntityGraph annotation to specify fetch policy for queries, however this doesn't let me decide at runtime whether I want to load those entities.
I'm okay with getting the child entities in a separate query, but in order to do that I would need to configure my repository or entities to not retrieve any children. Unfortunately, I cannot seem to find any strategies on how to do this. FetchPolicy is ignored, and EntityGraph is only helpful when specifying which entities I want to eagerly retrieve.
For example, assume Account is the parent and Contact is the child, and an Account can have many Contacts.
I want to be able to do this:
if(fetchPolicy.contains("contacts")){
account.setContacts(contactRepository.findByAccountId(account.getAccountId()));
}
The problem is spring-data eagerly fetches the contacts anyways.
The Account Entity class looks like this:
@Entity
@Table(name = "accounts")
public class Account
{
protected String accountId;
protected Collection<Contact> contacts;
@OneToMany
//@OneToMany(fetch=FetchType.LAZY) --> doesn't work, Spring Repositories ignore this
@JoinColumn(name="account_id", referencedColumnName="account_id")
public Collection<Contact> getContacts()
{
return contacts;
}
//getters & setters
}
The AccountRepository class looks like this:
public interface AccountRepository extends JpaRepository<Account, String>
{
//@EntityGraph ... <-- has type= LOAD or FETCH, but neither can help me prevent retrieval
Account findOne(String id);
}
Lazy fetching should work properly as long as no method is called on the object returned by getContacts().
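A minimal sketch to verify this (assuming the mapping above stays lazy, an injected accountRepository, and that the code runs inside a transaction; Hibernate.isInitialized() is only used here to inspect the proxy state):
@Transactional(readOnly = true)
public void checkContactsAreLazy(String accountId) {
    Account account = accountRepository.findOne(accountId);

    // prints false: no query for contacts has been issued yet
    System.out.println(org.hibernate.Hibernate.isInitialized(account.getContacts()));

    // touching the collection (size(), iteration, ...) triggers the extra SELECT
    account.getContacts().size();

    // prints true: the collection is now initialized
    System.out.println(org.hibernate.Hibernate.isInitialized(account.getContacts()));
}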
If you prefer more manual work and really want to have control over this (maybe more contexts depending on the use case), I would suggest you remove contacts from the Account entity and map the account on the Contact side instead. One way to tell Hibernate to ignore that field is to map it using the @Transient annotation.
@Entity
@Table(name = "accounts")
public class Account
{
protected String accountId;
protected Collection<Contact> contacts;
@Transient
public Collection<Contact> getContacts()
{
return contacts;
}
//getters & setters
}
Then in your service class, you could do something like:
public Account getAccountById(int accountId, Set<String> fetchPolicy) {
Account account = accountRepository.findOne(accountId);
if(fetchPolicy.contains("contacts")){
account.setContacts(contactRepository.findByAccountId(account.getAccountId()));
}
return account;
}
Hope this is what you are looking for. Btw, the code is untested, so you should probably check again.
You can use @Transactional for that.
For that you need to fetch your Account entity lazily.
@Transactional annotations should be placed around all operations that are inseparable.
Write a method in your service layer that accepts a flag for fetching contacts eagerly.
@Transactional
public Account getAccount(String id, boolean fetchEagerly){
    Account account = accountRepository.findOne(id);
    // If you want to fetch contacts, pass fetchEagerly as true
    if(fetchEagerly){
        // Touching the collection initializes the contacts while the session is still open
        account.getContacts().size();
    }
    return account;
}
@Transactional lets a service method make multiple calls in a single transaction without closing the connection to the endpoint.
Hope you find this useful. :)
For more details refer this link
Please find an example which runs with JPA 2.1.
Set only the attribute(s) you want to load (using the attributeNodes list).
Your entity, with the entity graph annotations:
@Entity
@NamedEntityGraph(name = "accountGraph", attributeNodes = {
@NamedAttributeNode("accountId")})
@Table(name = "accounts")
public class Account {
protected String accountId;
protected Collection<Contact> contacts;
@OneToMany(fetch=FetchType.LAZY)
@JoinColumn(name="account_id", referencedColumnName="account_id")
public Collection<Contact> getContacts()
{
return contacts;
}
}
Your custom interface :
public interface AccountRepository extends JpaRepository<Account, String> {
@EntityGraph("accountGraph")
Account findOne(String id);
}
Only the "accountId" property will be loaded eagerly. All others properties will be loaded lazily on access.
Spring Data does not ignore fetch = FetchType.LAZY.
My problem was that I was using Dozer mapping to convert my entities into graphs. Evidently Dozer calls the getters and setters to map two objects, so I needed to add a custom field mapper configuration to ignore PersistentCollections...
GlobalCustomFieldMapper.java:
public class GlobalCustomFieldMapper implements CustomFieldMapper
{
    public boolean mapField(Object source, Object destination, Object sourceFieldValue, ClassMap classMap, FieldMap fieldMapping)
    {
        if (!(sourceFieldValue instanceof PersistentCollection)) {
            // Not a Hibernate collection proxy: allow dozer to map as normal
            return false;
        }
        if (((PersistentCollection) sourceFieldValue).wasInitialized()) {
            // Collection is already initialized: allow dozer to map as normal
            return false;
        }
        // Set destination to null, and tell dozer that the field is mapped
        destination = null;
        return true;
    }
}
If you are trying to send the result set of your entities to a client, I recommend you use data transfer objects (DTOs) instead of the entities. You can create the DTO directly within the HQL/JPQL.
For example
"select new com.test.MyTableDto(my.id, my.name) from MyTable my"
and if you want to pass the child
"select new com.test.MyTableDto(my.id, my.name, my.child) from MyTable my"
That way you have full control over what is being created and passed to the client.
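A minimal sketch of how the pieces fit together (the DTO fields and the repository are my assumptions; the DTO constructor must match the select list, and the fully qualified name in the query must match the DTO's package):
public class MyTableDto {

    private final Long id;
    private final String name;

    public MyTableDto(Long id, String name) {
        this.id = id;
        this.name = name;
    }
    // getters ...
}

public interface MyTableRepository extends JpaRepository<MyTable, Long> {

    @Query("select new com.test.MyTableDto(my.id, my.name) from MyTable my")
    List<MyTableDto> findAllAsDtos();
}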
I am trying to query spring data elasticsearch repositories for nested properties. My Repository looks like this:
public interface PersonRepository extends
ElasticsearchRepository<Person, Long> {
List<Person> findByAddressZipCode(String zipCode);
}
The domain objects Person and Address (without getters/setters) are defined as follows:
@Document(indexName="person")
public class Person {
@Id
private Long id;
private String name;
@Field(type=FieldType.Nested, store=true, index = FieldIndex.analyzed)
private Address address;
}
public class Address {
private String zipCode;
}
My test saves one Person document and tries to read it with the repository method. But no results are returned. Here is the test method:
@Test
public void testPersonRepo() throws Exception {
Person person = new Person();
person.setName("Rene");
Address address = new Address();
address.setZipCode("30880");
person.setAddress(address);
personRepository.save(person);
elasticsearchTemplate.refresh(Person.class,true);
assertThat(personRepository.findByAddressZipCodeContaining("30880"), hasSize(1));
}
Does spring data elasticsearch support the default spring data query generation?
Elasticsearch indexes the new document asynchronously...near real time. The default refresh interval is typically 1s, I think. So you must explicitly request a refresh (to force a flush and make the document available for search) if you want the document to be immediately searchable, as in a unit test. Your unit test therefore needs the ElasticsearchTemplate bean so that you can explicitly call refresh. Make sure you set waitForOperation to true to force a synchronous refresh. See this related answer. Kinda like this:
elasticsearchTemplate.refresh("myindex",true);