GraphQL and Data Loader using the graphql-java-kickstart library

I am attempting to use the DataLoader feature within the graphql-java-kickstart library:
https://github.com/graphql-java-kickstart
My application is a Spring Boot application using 2.3.0.RELEASE, and I am using version 7.0.1 of the graphql-spring-boot-starter library.
The library is pretty easy to use and it works when I don't use the data loader. However, I am plagued by the N+1 SQL problem and as a result need to use the data loader to help alleviate this issue. When I execute a request, I end up getting this:
Can't resolve value (/findAccountById[0]/customers) : type mismatch error, expected type LIST got class com.daluga.api.account.domain.Customer
I am sure I am missing something in the configuration but really don't know what that is.
Here is my graphql schema:
type Account {
    id: ID!
    accountNumber: String!
    customers: [Customer]
}
type Customer {
    id: ID!
    fullName: String
}
I have created a CustomGraphQLContextBuilder:
@Component
public class CustomGraphQLContextBuilder implements GraphQLServletContextBuilder {

    private final CustomerRepository customerRepository;

    public CustomGraphQLContextBuilder(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    @Override
    public GraphQLContext build(HttpServletRequest httpServletRequest, HttpServletResponse httpServletResponse) {
        return DefaultGraphQLServletContext.createServletContext(buildDataLoaderRegistry(), null).with(httpServletRequest).with(httpServletResponse).build();
    }

    @Override
    public GraphQLContext build(Session session, HandshakeRequest handshakeRequest) {
        return DefaultGraphQLWebSocketContext.createWebSocketContext(buildDataLoaderRegistry(), null).with(session).with(handshakeRequest).build();
    }

    @Override
    public GraphQLContext build() {
        return new DefaultGraphQLContext(buildDataLoaderRegistry(), null);
    }

    private DataLoaderRegistry buildDataLoaderRegistry() {
        DataLoaderRegistry dataLoaderRegistry = new DataLoaderRegistry();
        dataLoaderRegistry.register("customerDataLoader",
                new DataLoader<Long, Customer>(accountIds ->
                        CompletableFuture.supplyAsync(() ->
                                customerRepository.findCustomersByAccountIds(accountIds), new SyncTaskExecutor())));
        return dataLoaderRegistry;
    }
}
I have also created an AccountResolver:
public CompletableFuture<List<Customer>> customers(Account account, DataFetchingEnvironment dfe) {
    final DataLoader<Long, List<Customer>> dataloader = ((GraphQLContext) dfe.getContext())
            .getDataLoaderRegistry().get()
            .getDataLoader("customerDataLoader");
    return dataloader.load(account.getId());
}
And here is the Customer Repository:
public List<Customer> findCustomersByAccountIds(List<Long> accountIds) {
    Instant begin = Instant.now();
    MapSqlParameterSource namedParameters = new MapSqlParameterSource();
    String inClause = getInClauseParamFromList(accountIds, namedParameters);
    String sql = StringUtils.replace(SQL_FIND_CUSTOMERS_BY_ACCOUNT_IDS, "__ACCOUNT_IDS__", inClause);
    List<Customer> customers = jdbcTemplate.query(sql, namedParameters, new CustomerRowMapper());
    Instant end = Instant.now();
    LOGGER.info("Total Time in Millis to Execute findCustomersByAccountIds: " + Duration.between(begin, end).toMillis());
    return customers;
}
I can put a breakpoint in the Customer Repository and see the SQL execute, and it returns a List of Customer objects. You can also see that the schema expects an array of customers. If I remove the code above and instead have the resolver fetch the customers one by one, it works, but it is really slow.
What am I missing in the configuration that would cause this?
Can't resolve value (/findAccountById[0]/customers) : type mismatch error, expected type LIST got class com.daluga.api.account.domain.Customer
Thanks for your help!
Dan

Thanks, @Bms bharadwaj! The issue was on my side in understanding how the data is returned by the data loader. I ended up using a MappedBatchLoader to return the data in a map, with the accountId as the key.
private DataLoader<Long, List<Customer>> getCustomerDataLoader() {
    MappedBatchLoader<Long, List<Customer>> customerMappedBatchLoader = accountIds -> CompletableFuture.supplyAsync(() -> {
        List<Customer> customers = customerRepository.findCustomersByAccountId(accountIds);
        Map<Long, List<Customer>> groupByAccountId = customers.stream().collect(Collectors.groupingBy(cust -> cust.getAccountId()));
        return groupByAccountId;
    });
    // }, new SyncTaskExecutor());
    return DataLoader.newMappedDataLoader(customerMappedBatchLoader);
}
This seems to have done the trick: before I was issuing hundreds of SQL statements, and now it is down to 2 (one for the driver SQL that loads the accounts and one for the customers).
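For completeness, a minimal sketch of how that mapped loader might be wired into the buildDataLoaderRegistry() method from the question (same loader name that the AccountResolver looks up):
private DataLoaderRegistry buildDataLoaderRegistry() {
    DataLoaderRegistry dataLoaderRegistry = new DataLoaderRegistry();
    // register the mapped loader under the name the AccountResolver requests
    dataLoaderRegistry.register("customerDataLoader", getCustomerDataLoader());
    return dataLoaderRegistry;
}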

In the CustomGraphQLContextBuilder,
I think you should have registered the DataLoader as:
...
dataLoaderRegistry.register("customerDataLoader",
        new DataLoader<Long, List<Customer>>(accountIds ->
...
because you are expecting a list of Customers for one account ID.
That should work I guess.
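For reference, a plain (non-mapped) BatchLoader of that type has to return one List of Customers per requested account id, in the same order as the incoming keys. A sketch, assuming the repository method from the question and a getAccountId() accessor on Customer as used in the accepted approach above:
dataLoaderRegistry.register("customerDataLoader",
        new DataLoader<Long, List<Customer>>(accountIds ->
                CompletableFuture.supplyAsync(() -> {
                    List<Customer> customers = customerRepository.findCustomersByAccountIds(accountIds);
                    Map<Long, List<Customer>> byAccountId = customers.stream()
                            .collect(Collectors.groupingBy(Customer::getAccountId));
                    // one entry per requested id, in the same order as accountIds
                    return accountIds.stream()
                            .map(id -> byAccountId.getOrDefault(id, Collections.emptyList()))
                            .collect(Collectors.toList());
                })));
This ordering requirement is what the mapped variant avoids, since the map keys do the matching instead.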

Related

How to refactor this GraphQL endpoint to use BatchMapping instead of SchemaMapping?

I am using Spring for GraphQL to create a small microservice project which consists of 2 apps, a customer service and an order service.
My order service app is running on port 8081 and it contains an OrderData model:
public record OrderData(@Id Integer id, Integer customerId) {}
It also contains an OrderDataRepository interface:
@Repository
public interface OrderDataRepository extends ReactiveCrudRepository<OrderData, Integer> {
    Flux<OrderData> getByCustomerId(Integer customerId);
}
And it exposes a single endpoint
@RestController
@RequestMapping(path = "/api/v1/orders")
public class OrderDataController {

    private final OrderDataRepository orderDataRepository;

    public OrderDataController(OrderDataRepository orderDataRepository) {
        this.orderDataRepository = orderDataRepository;
    }

    @GetMapping
    Flux<OrderData> getByCustomerId(@RequestParam Integer customerId) {
        return orderDataRepository.getByCustomerId(customerId);
    }
}
My customer service app defines the following graphql schema:
type Query {
customers: [Customer]
customersByName(name: String): [Customer]
customerById(id: ID): Customer
}
type Mutation {
addCustomer(name: String): Customer
}
type Customer {
id: ID
name: String
orders: [Order]
}
type Order {
id: ID
customerId: ID
}
And it exposes a few graphql endpoints for querying and mutating customer data, one of which is used to fetch customer orders by using a WebClient to call the endpoint exposed by my order service app:
@Controller
public class CustomerController {

    private final CustomerRepository customerRepository;
    private final WebClient webClient;

    public CustomerController(CustomerRepository customerRepository, WebClient.Builder webClientBuilder) {
        this.customerRepository = customerRepository;
        this.webClient = webClientBuilder.baseUrl("http://localhost:8081").build();
    }

    // ...

    @QueryMapping
    Mono<Customer> customerById(@Argument Integer id) {
        return this.customerRepository.findById(id);
    }

    @SchemaMapping(typeName = "Customer")
    Flux<Order> orders(Customer customer) {
        return webClient
                .get()
                .uri("/api/v1/orders?customerId=" + customer.id())
                .retrieve()
                .bodyToFlux(Order.class);
    }
}
record Order(Integer id, Integer customerId) {}
My question is: how would I refactor this @SchemaMapping endpoint to use @BatchMapping and keep the app non-blocking?
I tried the following:
@BatchMapping
Map<Customer, Flux<Order>> orders(List<Customer> customers) {
    return customers
            .stream()
            .collect(Collectors.toMap(customer -> customer,
                    customer -> webClient
                            .get()
                            .uri("/api/v1/orders?customerId=" + customer.id())
                            .retrieve()
                            .bodyToFlux(Order.class)));
}
But I get this error...
Can't resolve value (/customerById/orders) : type mismatch error, expected type LIST got class reactor.core.publisher.MonoFlatMapMany
... because the Customer type has an orders LIST field and my order service is returning a Flux.
How can I resolve this problem so I can return a Map<Customer, List<Order>> from my @BatchMapping endpoint and keep it non-blocking?
I assume it's a pretty simple solution but I don't have a lot of experience with Spring Webflux.
Thanks in advance!
I believe that what's missing from your method signature is an additional Mono wrapping it all. You should perform the necessary logic to transform what you return as well. For example, I have this @BatchMapping:
@BatchMapping(typeName = "Artista")
public Mono<Map<Artista, List<Obra>>> obras(List<Artista> artistas) {
    var artistasIds = artistas.stream()
            .map(Artista::id)
            .toList();
    var todasLasObras = obtenerObras(artistasIds); // service method
    return todasLasObras.collectList()
            .map(obras -> {
                Map<Long, List<Obra>> obrasDeCadaArtistaId = obras.stream()
                        .collect(Collectors.groupingBy(Obra::artistaId));
                return artistas.stream()
                        .collect(Collectors.toMap(
                                unArtista -> unArtista, // K, the Artista
                                unArtista -> obrasDeCadaArtistaId.get(Long.parseLong(unArtista.id().toString())))); // V, the Obra list
            });
}
You should replace my Artista with your Customer, and my Obra with your Order. Therefore you will return a Mono<Map<Customer, List<Order>>>.
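Adapted to the question's Customer and Order types, a sketch that keeps the question's WebClient call could look like this. Note it still issues one HTTP call per customer (the order service endpoint only accepts a single customerId), but the calls are made concurrently and composed without blocking:
@BatchMapping
Mono<Map<Customer, List<Order>>> orders(List<Customer> customers) {
    return Flux.fromIterable(customers)
            // one non-blocking call per customer; results are merged into a single Mono<Map>
            .flatMap(customer -> webClient
                    .get()
                    .uri("/api/v1/orders?customerId=" + customer.id())
                    .retrieve()
                    .bodyToFlux(Order.class)
                    .collectList()
                    .map(orders -> Map.entry(customer, orders)))
            .collectMap(Map.Entry::getKey, Map.Entry::getValue);
}
A true batch would additionally need the order service to expose an endpoint that accepts a list of customer ids, but the signature above is the shape Spring for GraphQL expects from an asynchronous @BatchMapping.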

How would I find the existing objects id in the database using spring boot?

The application has 2 functionalities: I can create a new script file and also edit an existing script file. The logic I have for both create and edit will basically be the same, the only major difference being that for edit, instead of creating a new script, I will have to find the existing script and overwrite it. I am not sure how I should go about finding the script by id in Spring Boot.
Here is my create script functionality:
Controller Class:
@Controller
@RequestMapping("/api/auth/")
public class TestServerController {

    @Autowired
    ScriptsRepository scriptsRepository;

    @Autowired
    LineItemRepository lineItemRepository;

    @Autowired
    LineItemDataRepository lineItemDataRepository;

    public TestServerController() {
    }

    @CrossOrigin(origins = "*")
    @RequestMapping(value = "createScript")
    public ResponseEntity<?> createScript(@RequestBody String body, @RequestParam String name) throws Exception {
        JSONArray script = new JSONArray(body);
        Scripts new_script = new Scripts(name);
        scriptsRepository.save(new_script);
        for (int i = 0; i < script.length(); i++) {
            JSONObject current_line_item = (JSONObject) script.get(i);
            if (!current_line_item.has("keyword")) {
                return ResponseEntity
                        .badRequest()
                        .body(new MessageResponse("Invalid Json Object"));
            }
            String keyword = current_line_item.getString("keyword");
            LineItem new_li = new LineItem(keyword, i);
            new_li.setSid(new_script);
            lineItemRepository.save(new_li);
            int order = 0;
            var keys = current_line_item.keys();
            for (Iterator<String> it = keys; it.hasNext(); ) {
                var key = it.next();
                if (key.equals("keyword")) {
                    continue;
                }
                var value = current_line_item.getString(key);
                // we will need to do something about this if the column values aren't strings
                LineItemData vb = new LineItemData(key, value, order);
                vb.setLid(new_li);
                lineItemDataRepository.save(vb);
                order++;
            }
        }
        return ResponseEntity.ok(new MessageResponse("Script Uploaded Successfully"));
    }
For the edit script, I was thinking to remove Scripts new_script = new Scripts(name);
and use findById to find the script in the database and pass that into scriptsRepository.save(new_script);
where instead of new_script, I would just have the script fetched by its id. This hasn't worked for me and I'm getting errors. Is there another way to go about this?
When you use scriptsRepository.save(script), it will save the script to your database. But if you already have a script in the database with the same ID as the one you are passing as a parameter, it will not create a new one; it will just update it with the information of the one you are trying to save.
A Script controller (only with the update method), following REST API design, would be something like this:
@RestController
@RequestMapping("/scripts")
public class ScriptController {

    @Autowired
    ScriptRepository scriptRepository;

    @PutMapping("/{id}")
    @ResponseStatus(HttpStatus.OK)
    public Script update(@RequestBody Script script, @PathVariable long id) {
        if (scriptRepository.findById(id).isEmpty()) {
            throw new RecordNotFoundException();
        }
        script.setId(id);
        return scriptRepository.save(script);
    }
}
To access it you would need to use the endpoint localhost:8080/scripts/{id} (in case you use the default Spring Boot localhost:8080).
You don't need to throw the exception, but in case you want to, you would need to create a simple class like this:
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;

@ResponseStatus(HttpStatus.NOT_FOUND)
public class RecordNotFoundException extends RuntimeException {
}
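Applied back to the question's controller, the edit endpoint would then fetch the existing script instead of creating a new one. A rough sketch, assuming ScriptsRepository is a Spring Data repository exposing findById (the editScript name and the reuse of RecordNotFoundException are illustrative):
@CrossOrigin(origins = "*")
@RequestMapping(value = "editScript")
public ResponseEntity<?> editScript(@RequestBody String body, @RequestParam Long id) throws Exception {
    // load the existing script instead of creating a new one
    Scripts existing_script = scriptsRepository.findById(id)
            .orElseThrow(RecordNotFoundException::new);
    // ... same JSON parsing loop as in createScript, attaching line items to existing_script ...
    // saving an entity whose ID already exists updates the row rather than inserting a new one
    scriptsRepository.save(existing_script);
    return ResponseEntity.ok(new MessageResponse("Script Updated Successfully"));
}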

Spring Batch pass data from writer to next job or step

I'm fairly new to Spring Batch and I would like to know whether it is possible to pass data saved by the writer of one job to the next job, or to save it in the processor and pass it to the next step.
I've found a similar question here where it says you can use a JobStep to launch the second job from within the first job, but I'm not clear how data can be passed by looking at the examples.
Consider the following scenario:
First, from MongoDB I need to query a list of contact details and save it to PostgreSQL. I've achieved this by writing a contact job with reader -> processor -> writer steps.
Now, based on the contacts that were saved earlier, I need to query MongoDB to get the list of accounts for each contact, set the new contact ID as the FK on the account, and then save the processed accounts to PostgreSQL. Note: I can't do a cascade save to save accounts automatically while saving contacts, so I need to set contact_id on the account manually.
The saving of a contact and its respective accounts should be transactional and able to re-process any failed rows.
Currently, I'm able to achieve 1 and 2 by writing a combined reader -> processor -> writer which does the same thing by mapping both contact and account to different Input/Output classes, as below:
public class AllReader implements ItemReader<InputRecord> {

    private MongoTemplate mongoTemplate;
    private Iterator<InputRecord> inputRecordIterator;

    @BeforeStep
    public void before(StepExecution stepExecution) {
        List<InputRecord> inputRecords = Lists.newArrayList();
        List<Contact> contacts = mongoTemplate.findAll(Contact.class);
        contacts.forEach(crmContact -> {
            InputRecord inputRecord = new InputRecord();
            inputRecord.setContact(crmContact);
            List<Account> accounts = mongoTemplate.find(new Query().addCriteria(Criteria.where("refContactId").is(crmContact.getId())), Account.class);
            inputRecord.setAccounts(accounts);
            inputRecords.add(inputRecord);
        });
        inputRecordIterator = inputRecords.iterator();
    }

    @Override
    public InputRecord read() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException {
        if (inputRecordIterator != null && inputRecordIterator.hasNext()) {
            return inputRecordIterator.next();
        } else {
            return null;
        }
    }
}
public class AllWriter implements ItemWriter<OutputRecord> {

    @Override
    public void write(List<? extends OutputRecord> outputRecords) throws Exception {
        List<Account> accounts = Lists.newArrayList();
        for (OutputRecord outputRecord : outputRecords) {
            Person contact = contactRepository.save(outputRecord.getContact());
            accounts = outputRecords.stream()
                    .map(rec -> {
                        Account account = rec.getAccount();
                        account.setContactId(contact.getId());
                        return account;
                    })
                    .collect(Collectors.toList());
        }
        accountRepository.saveAll(accounts);
    }
}
But I want to accomplish something similar by chaining the jobs or steps, as there are many other similar jobs/steps that need the same type of chaining. Since the write method of ItemWriter returns void, I'm wondering whether there could be a way of adding multiple processors instead, where the ContactProcessor would save the contacts and then pass them on to the next step for the account reader. Or is there any way to chain jobs/steps/flows by passing data, something like below:
@Override
public void write(List<? extends Contact> contacts) throws Exception {
    contactRepository.saveAll(contacts);
}

@Bean
public Job job() {
    return this.jobBuilderFactory.get("job")
            .start(contactSteps())
            .next(accountSteps())
            .end()
            .build();
}

@Bean
public Step contactSteps(ContactReader reader,
                         ContactWriter writer,
                         ContactProcessor processor) {
    return stepBuilderFactory.get("step1")
            .<MContact, Contact>chunk(100)
            .reader(reader)
            .processor(processor)
            .writer(writer)
            .build();
}

@Bean
public Step accountSteps(AccountReader reader,
                         AccountWriter writer,
                         AccountProcessor processor) {
    return stepBuilderFactory.get("step1")
            .<MAccount, Account>chunk(100)
            .reader(reader(contacts))
            .processor(processor)
            .writer(writer)
            .build();
}
Some examples with code samples would be helpful. What I want to achieve is like below:
Job:
  Step 1: ContactMigrationStep
    -> ContactReader (reads all the contacts from MongoDB)
    -> ContactProcessor
    -> ContactWriter
  Step 2: AccountMigrationStep
    -> AccountReader (should read the respective accounts for each contact from the Step 1 ContactWriter)
    -> AccountProcessor
    -> AccountWriter
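One way the Step 2 reader described above could be shaped is to start from the contacts that Step 1 has already written to PostgreSQL, rather than passing them in memory between steps. A rough sketch (not from this thread), assuming the persisted Contact keeps its original MongoDB id in a hypothetical getSourceContactId() accessor and that contactRepository can read back the Step 1 output; setNewContactId() is likewise hypothetical:
public class AccountReader implements ItemReader<MAccount> {

    private final ContactRepository contactRepository; // reads the contacts persisted by Step 1 (assumed)
    private final MongoTemplate mongoTemplate;
    private Iterator<MAccount> accountIterator;

    public AccountReader(ContactRepository contactRepository, MongoTemplate mongoTemplate) {
        this.contactRepository = contactRepository;
        this.mongoTemplate = mongoTemplate;
    }

    @BeforeStep
    public void before(StepExecution stepExecution) {
        List<MAccount> accounts = new ArrayList<>();
        // for every contact written by Step 1, load its accounts from MongoDB and
        // remember the new PostgreSQL contact id so the processor can set the FK
        contactRepository.findAll().forEach(contact -> {
            List<MAccount> contactAccounts = mongoTemplate.find(
                    new Query().addCriteria(Criteria.where("refContactId").is(contact.getSourceContactId())),
                    MAccount.class);
            contactAccounts.forEach(account -> account.setNewContactId(contact.getId())); // hypothetical setter
            accounts.addAll(contactAccounts);
        });
        accountIterator = accounts.iterator();
    }

    @Override
    public MAccount read() {
        return (accountIterator != null && accountIterator.hasNext()) ? accountIterator.next() : null;
    }
}
The AccountProcessor can then map each MAccount to an Account with the contact_id FK already set, and the AccountWriter persists it, which keeps the two steps independently restartable.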

How to configure spring-data-mongodb to use 2 different mongo instances sharing the same document model

I work for a company that has multiple brands, so we have a couple of MongoDB instances on different hosts, each holding the same document model for our Customer for each of these brands (same structure, not the same data).
For the sake of simplicity, let's say we have an Orange brand, with a database instance serving on port 27017, and a Banana brand, with a database instance serving on port 27018.
Currently I'm developing a fraud detection service which is required to connect to all databases and analyze all the customers' behavior together, regardless of the brand.
So my "model" has a shared entity for Customer, annotated with @Document (org.springframework.data.mongodb.core.mapping.Document).
Next thing I have is two MongoRepositories such as:
public interface BananaRepository extends MongoRepository<Customer, String> {
    List<Customer> findAllByEmail(String email);
}

public interface OrangeRepository extends MongoRepository<Customer, String> {
    List<Customer> findAllByEmail(String email);
}
With some stub methods for finding customers by id, email, and so on. Spring is responsible for generating the implementation classes for such interfaces (pretty standard Spring stuff).
In order to point each of these repositories to the right MongoDB instance, I need two Mongo configs, such as:
@Configuration
@EnableMongoRepositories(basePackageClasses = {Customer.class})
public class BananaConfig extends AbstractMongoConfiguration {

    @Value("${database.mongodb.banana.username:}")
    private String username;

    @Value("${database.mongodb.banana.database}")
    private String database;

    @Value("${database.mongodb.banana.password:}")
    private String password;

    @Value("${database.mongodb.banana.uri}")
    private String mongoUri;

    @Override
    protected Collection<String> getMappingBasePackages() {
        return Collections.singletonList("com.acme.model");
    }

    @Override
    protected String getDatabaseName() {
        return this.database;
    }

    @Override
    @Bean(name = "bananaClient")
    public MongoClient mongoClient() {
        final String authString;
        //todo: Use MongoCredential
        //todo: Use ServerAddress
        //(See https://docs.spring.io/spring-data/mongodb/docs/current/reference/html/#repositories) 10.3.4
        if (valueIsPresent(username) || valueIsPresent(password)) {
            authString = String.format("%s:%s@", username, password);
        } else {
            authString = "";
        }
        String connectionString = "mongodb://" + authString + mongoUri + "/" + database;
        System.out.println("Going to connect to: " + connectionString);
        return new MongoClient(new MongoClientURI(connectionString, builder()
                .connectTimeout(5000)
                .socketTimeout(8000)
                .readPreference(ReadPreference.secondaryPreferred())
                .writeConcern(ACKNOWLEDGED)));
    }

    @Bean(name = "bananaTemplate")
    public MongoTemplate mongoTemplate(@Qualifier("bananaFactory") MongoDbFactory mongoFactory) {
        return new MongoTemplate(mongoFactory);
    }

    @Bean(name = "bananaFactory")
    public MongoDbFactory mongoFactory() {
        return new SimpleMongoDbFactory(mongoClient(), getDatabaseName());
    }

    private static int sizeOfValue(String value) {
        if (value == null) return 0;
        return value.length();
    }

    private static boolean valueIsMissing(String value) {
        return sizeOfValue(value) == 0;
    }

    private static boolean valueIsPresent(String value) {
        return !valueIsMissing(value);
    }
}
I also have similar config for Orange which points to the proper mongo instance.
Then I have my service like this:
public List<? extends Customer> findAllByEmail(String email) {
    return Stream.concat(
                bananaRepository.findAllByEmail(email).stream(),
                orangeRepository.findAllByEmail(email).stream())
            .collect(Collectors.toList());
}
Notice that I'm calling both repositories and then collecting the results back into one single list.
What I would expect to happen is that each repository would connect to its corresponding Mongo instance and query for the customer by its email.
But this doesn't happen. The query is always executed against the same Mongo instance.
Yet in the database log I can see both connections being made by Spring.
It just uses one connection to run the queries for both repositories.
This is not surprising, as both Mongo configs point to the same model package here. Right. But I also tried other approaches, such as creating a BananaCustomer extends Customer in its own model.banana package, and an OrangeCustomer extends Customer in its model.orange package, along with specifying the proper basePackageClasses in each config. But that didn't work either; I ended up getting both queries run against the same database.
:(
After scavenging the official spring-data-mongodb documentation for hours, and looking through thousands of lines of code here and there, I've run out of options: it seems like nobody has done what I'm trying to accomplish before.
Except for this guy here who had to do the same thing, but using JPA instead of MongoDB: Link to article
Well, while it's still Spring Data, it's not for MongoDB.
So here is my question:
How can I explicitly tell each repository to use a specific Mongo config?
Magical autowiring rules, except when it doesn't work and nobody understands the magic.
Thanks in advance.
Well, I had a very detailed answer, but StackOverflow complained about it looking like spam and didn't allow me to post it.
The full answer is still available as a Gist file here
The bottom line is that both MongoRepository (interface) and the model object must be placed in the same package.
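To illustrate that bottom line (a sketch of the layout it implies, not a quote of the linked Gist): each brand gets its own package containing both the document class and the repository interface, and each config scopes repository scanning to that package and binds it to its own template via the mongoTemplateRef attribute of @EnableMongoRepositories. Package and class names here are hypothetical:
// hypothetical package: com.acme.banana (document and repository live together)
@Document(collection = "customers")
public class BananaCustomer extends Customer {
}

public interface BananaRepository extends MongoRepository<BananaCustomer, String> {
    List<BananaCustomer> findAllByEmail(String email);
}

// BananaConfig then scans only that package and points its repositories at the banana template
@Configuration
@EnableMongoRepositories(basePackageClasses = BananaRepository.class, mongoTemplateRef = "bananaTemplate")
public class BananaConfig extends AbstractMongoConfiguration {
    // ... the "bananaClient", "bananaTemplate" and "bananaFactory" beans from the question go here
}
The Orange side mirrors this with its own package and "orangeTemplate", so each repository resolves against its own MongoTemplate instead of the shared one.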

Spring REST partial update with @PATCH method

I'm trying to implement a partial update of the Manager entity based on the following:
Entity
public class Manager {
    private int id;
    private String firstname;
    private String lastname;
    private String username;
    private String password;

    // getters and setters omitted
}
SaveManager method in Controller
@RequestMapping(value = "/save", method = RequestMethod.PATCH)
public @ResponseBody void saveManager(@RequestBody Manager manager) {
    managerService.saveManager(manager);
}
Save object manager in Dao impl.
@Override
public void saveManager(Manager manager) {
    sessionFactory.getCurrentSession().saveOrUpdate(manager);
}
When I save the object, the username and password are changed correctly, but the other values are empty.
So what I need to do is update the username and password and keep all the remaining data.
If you are truly using a PATCH, then you should use RequestMethod.PATCH, not RequestMethod.POST.
Your patch mapping should contain the id with which you can retrieve the Manager object to be patched. Also, it should only include the fields that you want to change. In your example you are sending the entire entity, so you can't discern which fields are actually changing (does empty mean leave this field alone, or actually change its value to empty?).
Perhaps an implementation as such is what you're after?
@RequestMapping(value = "/manager/{id}", method = RequestMethod.PATCH)
public @ResponseBody void saveManager(@PathVariable Long id, @RequestBody Map<Object, Object> fields) {
    Manager manager = someServiceToLoadManager(id);
    // Map key is field name, v is value
    fields.forEach((k, v) -> {
        // use reflection to get field k on manager and set it to value v
        Field field = ReflectionUtils.findField(Manager.class, (String) k);
        field.setAccessible(true);
        ReflectionUtils.setField(field, manager, v);
    });
    managerService.saveManager(manager);
}
Update
I want to provide an update to this post as there is now a project that simplifies the patching process.
The artifact is
<dependency>
    <groupId>com.github.java-json-tools</groupId>
    <artifactId>json-patch</artifactId>
    <version>1.13</version>
</dependency>
The implementation to patch the Manager object in the OP would look like this:
Controller
@Operation(summary = "Patch a Manager")
@PatchMapping("/{managerId}")
public Manager patchManager(@PathVariable Long managerId, @RequestBody JsonPatch jsonPatch)
        throws JsonPatchException, JsonProcessingException {
    return managerService.patch(managerId, jsonPatch);
}
Service
public Manager patch(Long managerId, JsonPatch jsonPatch) throws JsonPatchException, JsonProcessingException {
    Manager manager = managerRepository.findById(managerId).orElseThrow(EntityNotFoundException::new);
    JsonNode patched = jsonPatch.apply(objectMapper.convertValue(manager, JsonNode.class));
    return managerRepository.save(objectMapper.treeToValue(patched, Manager.class));
}
The patch request follows the specification in RFC 6902 (JSON Patch), so this is a true PATCH implementation. Details can be found here.
With this, you can patch your changes.
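For example, a JSON Patch request body that updates only the username and password of the Manager from the original question might look like this (the values are placeholders):
[
    { "op": "replace", "path": "/username", "value": "new-username" },
    { "op": "replace", "path": "/password", "value": "new-password" }
]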
1. Autowire `ObjectMapper` in the controller;
2. @PatchMapping("/manager/{id}")
   ResponseEntity<?> saveManager(@RequestBody Map<String, String> manager) {
       Manager toBePatchedManager = objectMapper.convertValue(manager, Manager.class);
       managerService.patch(toBePatchedManager);
       return ResponseEntity.ok().build();
   }
3. Create a new method `patch` in `ManagerService`
4. Autowire `NullAwareBeanUtilsBean` in `ManagerService`
5. public void patch(Manager toBePatched) {
       Optional<Manager> optionalManager = managerRepository.findById(toBePatched.getId());
       if (optionalManager.isPresent()) {
           Manager fromDb = optionalManager.get();
           // bean utils will copy non null values from toBePatched to fromDb manager.
           beanUtils.copyProperties(fromDb, toBePatched);
           updateManager(fromDb);
       }
   }
You will have to extend BeanUtilsBean to implement copying of non null values behaviour.
public class NullAwareBeanUtilsBean extends BeanUtilsBean {

    @Override
    public void copyProperty(Object dest, String name, Object value)
            throws IllegalAccessException, InvocationTargetException {
        if (value == null)
            return;
        super.copyProperty(dest, name, value);
    }
}
and finally, mark NullAwareBeanUtilsBean as @Component
or
register NullAwareBeanUtilsBean as a bean:
@Bean
public NullAwareBeanUtilsBean nullAwareBeanUtilsBean() {
    return new NullAwareBeanUtilsBean();
}
First, you need to know if you are doing an insert or an update. Insert is straightforward. On update, use get() to retrieve the entity. Then update whatever fields. At the end of the transaction, Hibernate will flush the changes and commit.
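A minimal sketch of that update path, assuming the SessionFactory-based DAO from the question and that only the username and password are being changed:
@Override
public void saveManager(Manager incoming) {
    // load the managed entity and change only the fields being updated;
    // Hibernate flushes the dirty fields when the transaction commits
    Manager existing = sessionFactory.getCurrentSession().get(Manager.class, incoming.getId());
    existing.setUsername(incoming.getUsername());
    existing.setPassword(incoming.getPassword());
}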
You can write a custom update query which updates only particular fields:
@Override
public void saveManager(Manager manager) {
    Query query = sessionFactory.getCurrentSession().createQuery("update Manager set username = :username, password = :password where id = :id");
    query.setParameter("username", manager.getUsername());
    query.setParameter("password", manager.getPassword());
    query.setParameter("id", manager.getId());
    query.executeUpdate();
}
ObjectMapper.updateValue provides all you need to partially map your entity with values from the DTO.
In addition, you can use either of the two here: Map<String, Object> fields or String json, so your service method may look like this:
@Autowired
private ObjectMapper objectMapper;

@Override
@Transactional
public Foo save(long id, Map<String, Object> fields) throws JsonMappingException {
    Foo foo = fooRepository.findById(id)
            .orElseThrow(() -> new ResourceNotFoundException("Foo not found for this id: " + id));
    return objectMapper.updateValue(foo, fields);
}
As a second solution, and an addition to Lane Maxwell's answer, you could use reflection to map only the properties that exist in the Map of values that was sent, so your service method may look like this:
@Override
@Transactional
public Foo save(long id, Map<String, Object> fields) {
    Foo foo = fooRepository.findById(id)
            .orElseThrow(() -> new ResourceNotFoundException("Foo not found for this id: " + id));
    fields.keySet()
            .forEach(k -> {
                Method method = ReflectionUtils.findMethod(Foo.class, "set" + StringUtils.capitalize(k));
                if (method != null) {
                    ReflectionUtils.invokeMethod(method, foo, fields.get(k));
                }
            });
    return foo;
}
The second solution allows you to insert additional business logic into the mapping process, such as conversions or calculations, etc.
Also, unlike finding a reflection field (Field field = ReflectionUtils.findField(Foo.class, k);) by name and then making it accessible, finding the property's setter actually calls the setter method, which might contain additional logic to be executed, and avoids setting values on private properties directly.
