I am using r2dbc, r2dbc-h2 and the experimental spring-boot-starter-data-r2dbc
implementation 'org.springframework.boot.experimental:spring-boot-starter-data-r2dbc:0.1.0.M1'
implementation 'org.springframework.data:spring-data-r2dbc:1.0.0.RELEASE' // starter-data provides old version
implementation 'io.r2dbc:r2dbc-h2:0.8.0.RELEASE'
implementation 'io.r2dbc:r2dbc-pool:0.8.0.RELEASE'
I have created reactive repositories
public interface IJsonComparisonRepository extends ReactiveCrudRepository<JsonComparisonResult, String> {}
I also added a custom script that creates a table in H2 on startup:
@SpringBootApplication
public class JsonComparisonApplication {
public static void main(String[] args) {
SpringApplication.run(JsonComparisonApplication.class, args);
}
@Bean
public CommandLineRunner startup(DatabaseClient client) {
return (args) -> client
.execute(() -> {
var resource = new ClassPathResource("ddl/script.sql");
try (var is = new InputStreamReader(resource.getInputStream())) {
return FileCopyUtils.copyToString(is);
} catch (IOException e) {
throw new RuntimeException(e);
} })
.then()
.block();
}
}
My r2dbc configuration looks like this
@Configuration
@EnableR2dbcRepositories
public class R2dbcConfiguration extends AbstractR2dbcConfiguration {
@Override
public ConnectionFactory connectionFactory() {
return new H2ConnectionFactory(
H2ConnectionConfiguration.builder()
.url("mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE")
.username("sa")
.build());
}
}
My service where I perform the logic looks like this
@Override
public Mono<JsonComparisonResult> updateOrCreateRightSide(String comparisonId, String json) {
return updateComparisonSide(comparisonId, storedComparisonResult -> {
storedComparisonResult.setRightSide(json);
return storedComparisonResult;
});
}
private Mono<JsonComparisonResult> updateComparisonSide(String comparisonId,
Function<JsonComparisonResult, JsonComparisonResult> updateSide) {
return repository.findById(comparisonId)
.defaultIfEmpty(createResult(comparisonId))
.filter(result -> ComparisonDecision.NONE == result.getDecision()) // if not NONE - it means it was found and completed
.switchIfEmpty(Mono.error(new NotUpdatableCompleteComparisonException(comparisonId)))
.map(updateSide)
.flatMap(repository::save);
}
private JsonComparisonResult createResult(String comparisonId) {
LOGGER.info("Creating new comparison result: {}.", comparisonId);
var newResult = new JsonComparisonResult();
newResult.setDecision(ComparisonDecision.NONE);
newResult.setComparisonId(comparisonId);
return newResult;
}
The domain looks like this
#Table("json_comparison")
public class JsonComparisonResult {
#Column("comparison_id")
#Id
private String comparisonId;
#Column("left")
private String leftSide;
#Column("right")
private String rightSide;
// @Enumerated(EnumType.STRING) - no support for now
@Column("decision")
private ComparisonDecision decision;
private String differences;
The problem is that when I try to add any object to the database it fails with the exception
org.springframework.dao.TransientDataAccessResourceException: Failed to update table [json_comparison]. Row with Id [4] does not exist.
at org.springframework.data.r2dbc.repository.support.SimpleR2dbcRepository.lambda$save$0(SimpleR2dbcRepository.java:91) ~[spring-data-r2dbc-1.0.0.RELEASE.jar:1.0.0.RELEASE]
at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:96) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:73) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.MonoUsingWhen$MonoUsingWhenSubscriber.deferredComplete(MonoUsingWhen.java:276) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.FluxUsingWhen$CommitInner.onComplete(FluxUsingWhen.java:536) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onComplete(Operators.java:1858) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.Operators.complete(Operators.java:132) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.MonoEmpty.subscribe(MonoEmpty.java:45) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
For some reason, during save the SimpleR2dbcRepository library class does not consider objectToSave to be new, but the subsequent update fails because the row does not actually exist.
// SimpleR2dbcRepository#save
@Override
@Transactional
public <S extends T> Mono<S> save(S objectToSave) {
Assert.notNull(objectToSave, "Object to save must not be null!");
if (this.entity.isNew(objectToSave)) { // not new
....
}
}
Why is this happening, and what is the problem?
TL;DR: How should Spring Data know if your object is new or whether it should exist?
Relational Spring Data repositories (both JDBC and R2DBC) must determine on [Reactive]CrudRepository.save(…) whether the given object is new or already exists in your database. Performing a save(…) operation results in either an INSERT or an UPDATE statement. Issuing the wrong statement causes either a primary key violation or a no-op, as standard SQL has no way to express an upsert.
By default, Spring Data JDBC and R2DBC use the presence or absence of the @Id value. Generated primary keys are a widely used mechanism: if the primary key value is provided, the entity is considered existing; if the id value is null, the entity is considered new.
Read more in the reference documentation about Entity State Detection Strategies.
You have to implement Persistable because you provide the @Id value yourself. The library needs to figure out whether the row is new or whether it should already exist. If your entity implements Persistable, then save(…) will use the outcome of isNew() to decide whether to issue an INSERT or an UPDATE.
For example:
public class Product implements Persistable<Integer> {
@Id
private Integer id;
private String description;
private Double price;
@Transient
private boolean newProduct;
@Override
@Transient
public boolean isNew() {
return this.newProduct || id == null;
}
public Product setAsNew() {
this.newProduct = true;
return this;
}
}
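Applied to the entity from the question, a minimal sketch could look like this (the fields are the ones shown above; the newEntry flag and the setAsNew() helper are additions that the createResult(...) factory would use):
@Table("json_comparison")
public class JsonComparisonResult implements Persistable<String> {

    @Column("comparison_id")
    @Id
    private String comparisonId;

    // ... left/right/decision/differences columns as in the question ...

    @Transient // not persisted, only used for new/existing detection
    private boolean newEntry;

    @Override
    public String getId() {
        return comparisonId;
    }

    @Override
    public boolean isNew() {
        return newEntry;
    }

    public JsonComparisonResult setAsNew() {
        this.newEntry = true;
        return this;
    }
}
With that in place, createResult(...) would call setAsNew() on the freshly built instance, so the subsequent save(...) issues an INSERT instead of trying to UPDATE a row that does not exist.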
Maybe you should consider this:
Choose the data type of your id/primary key as INT/LONG and set it to AUTO_INCREMENT (something like below):
CREATE TABLE PRODUCT(id INT PRIMARY KEY AUTO_INCREMENT NOT NULL, modelname VARCHAR(30) , year VARCHAR(4), owner VARCHAR(50));
In your post request body, do not include id field.
Removing the @Id made it issue an INSERT statement.
I have a services layer and a repository layer in my Spring Boot application (I also use Spring Data, MVC etc.).
Before deleting an entity from the database, I want to check whether such an entity exists and, if not, throw an EntityNotFoundException.
For example, my repository:
public interface RoomRepository extends CrudRepository<Room, Long> {
#Query("from Room r left join fetch r.messages where r.id = :rId")
Optional<Room> findByIdWithMessages(#Param("rId") long id);
#Override
List<Room> findAll();
}
and service:
@Service
@Loggable
public class RoomService implements GenericService<Room> {
private final RoomRepository roomRepository;
private final RoomDtoMapper roomMapper;
public RoomService(RoomRepository roomRepository, RoomDtoMapper roomMapper) {
this.roomRepository = roomRepository;
this.roomMapper = roomMapper;
}
@Override
public Room getById(long id) {
return roomRepository.findById(id).orElseThrow(
() -> new EntityNotFoundException(String.format("room with id = %d wasn't found", id)));
}
@Override
public void delete(Room room) {
getById(room.getId());
roomRepository.delete(room);
}
}
In this example, in the delete method I call
getById(room.getId())
(so that it throws an EntityNotFoundException if the entity does not exist)
before
roomRepository.delete(room);
It seems to me that such code is not thread-safe and the operation is not atomic
(because at the moment of the check, another request from another thread may have already deleted the same entity),
and I don't know if I'm doing the right thing.
Maybe I should add the @Transactional annotation?
Would that make the method atomic?
Like this:
@Override
@Transactional
public void delete(Room room) {
getById(room.getId());
roomRepository.delete(room);
}
Maybe I should set some kind of isolation level?
You can test whether the object you need exists or not by autowiring the injected repository (in your case RoomRepository; instead of User in my example you can use Room). For example:
public ResponseEntity<Object> deletUserById(Long id) {
if (userrRepository.findById(id).isPresent()) {
userrRepository.deleteById(id);
return ResponseEntity.ok().body("User deleted with success");
} else {
return ResponseEntity.unprocessableEntity().body("user to be deleted not exist");
}
}
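If you prefer not to do a separate existence check at all, another option is to let the database decide atomically: issue a single modifying delete query and look at how many rows it affected. This is only a sketch; the deleteRoomById method name and query are made up for illustration:
// added to the existing RoomRepository
@Modifying
@Query("delete from Room r where r.id = :id")
int deleteRoomById(@Param("id") long id);

// and in the service
@Override
@Transactional
public void delete(Room room) {
    if (roomRepository.deleteRoomById(room.getId()) == 0) {
        throw new EntityNotFoundException(String.format("room with id = %d wasn't found", room.getId()));
    }
}
Because the check and the delete are a single statement, there is no window in which another thread can remove the row between the check and the delete.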
I would like to have documents stored with a UUID id and createdAt / updatedAt fields. My solution was working with Spring Boot 2.1.x. After I upgraded from Spring Boot 2.1.11.RELEASE to 2.2.0.RELEASE, my test for MongoAuditing failed with createdAt = null. What do I need to do to get the createdAt field filled again?
This is not just a test problem. I ran the application and it shows the same behaviour as my test: all auditing fields stay null.
I have a Configuration to enable MongoAuditing and UUID generation:
@Configuration
@EnableMongoAuditing
public class MongoConfiguration {
@Bean
public GenerateUUIDListener generateUUIDListener() {
return new GenerateUUIDListener();
}
}
The listener hooks into onBeforeConvert - I guess that's where the trouble starts.
public class GenerateUUIDListener extends AbstractMongoEventListener<IdentifiableEntity> {
@Override
public void onBeforeConvert(BeforeConvertEvent<IdentifiableEntity> event) {
IdentifiableEntity entity = event.getSource();
if (entity.isNew()) {
entity.setId(UUID.randomUUID());
}
}
}
The document itself (I dropped the getter and setters):
@Document
public class MyDocument extends InsertableEntity {
private String name;
}
public abstract class InsertableEntity extends IdentifiableEntity {
@CreatedDate
@JsonIgnore
private Instant createdAt;
}
public abstract class IdentifiableEntity implements Persistable<UUID> {
@Id
private UUID id;
@JsonIgnore
public boolean isNew() {
return getId() == null;
}
}
A complete minimal example can be found here (including a test): https://github.com/mab/auditable
With 2.1.11.RELEASE the test succeeds; with 2.2.0.RELEASE it fails.
For me the best solution was to switch from event-based UUID generation to a callback-based one. By implementing Ordered we can make the new callback execute after the AuditingEntityCallback.
public class IdEntityCallback implements BeforeConvertCallback<IdentifiableEntity>, Ordered {
@Override
public IdentifiableEntity onBeforeConvert(IdentifiableEntity entity, String collection) {
if (entity.isNew()) {
entity.setId(UUID.randomUUID());
}
return entity;
}
@Override
public int getOrder() {
return 101;
}
}
I registered the callback with the MongoConfiguration. For a more general solution you might want to take a look at how the AuditingEntityCallback is registered by the MongoAuditingBeanDefinitionParser.
@Configuration
@EnableMongoAuditing
public class MongoConfiguration {
@Bean
public IdEntityCallback registerCallback() {
return new IdEntityCallback();
}
}
MongoTemplate works in the following way on doInsert():
this.maybeEmitEvent - emits an event (onBeforeConvert, onBeforeSave and such) that any AbstractMongoEventListener can catch and act upon, as you did with GenerateUUIDListener
this.maybeCallBeforeConvert - calls the before-convert callbacks, such as the Mongo auditing one
as you can see in the source code of MongoTemplate.class (src, lines 831-832):
protected <T> T doInsert(String collectionName, T objectToSave, MongoWriter<T> writer) {
BeforeConvertEvent<T> event = new BeforeConvertEvent(objectToSave, collectionName);
T toConvert = ((BeforeConvertEvent)this.maybeEmitEvent(event)).getSource(); //emit event
toConvert = this.maybeCallBeforeConvert(toConvert, collectionName); //call some before convert handlers
...
}
Mongo auditing marks createdAt only on new entities, by checking entity.isNew() == true.
Because your code (the UUID listener) has already set the id, createdAt is not populated (the entity is not considered new).
You can do the following (ordered from best to worst):
forget about the UUID and use String for your id, letting Mongo itself create and manage entity ids (this is how MongoTemplate actually works, lines 811-812)
keep the UUID at the code level and convert from/to String when inserting into and retrieving from the db
create a custom repository like in this post
stay with 2.1.11.RELEASE
set createdAt in GenerateUUIDListener as well as the id (rename it NewEntityListener or similar), i.e. implement the auditing yourself (a rough sketch of this option appears below)
implement a new isNew() logic that doesn't depend only on the entity id
In version 2.1.11.RELEASE the order of these methods was flipped (MongoTemplate.class, lines 804-805), which is why your code worked fine.
As a general point, events are by nature send-and-forget (async-compatible), so it is bad practice to mutate the object itself; there is NO guarantee about the order of computation, if any.
This is why the auditing is built on callbacks rather than events, and why Pivotal doesn't (need to) keep the order stable between versions.
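A rough sketch of that do-the-auditing-yourself option (assuming a setCreatedAt setter exists on InsertableEntity, which was not shown in the original entity) might look like this:
public class NewEntityListener extends AbstractMongoEventListener<InsertableEntity> {

    @Override
    public void onBeforeConvert(BeforeConvertEvent<InsertableEntity> event) {
        InsertableEntity entity = event.getSource();
        if (entity.isNew()) {
            // assign the id and the audit field in one place instead of relying on MongoAuditing
            entity.setId(UUID.randomUUID());
            entity.setCreatedAt(Instant.now()); // assumed setter
        }
    }
}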
Let's say we use soft-delete policy: nothing gets deleted from the storage; instead, a 'deleted' attribute/column is set to true on a record/document/whatever to make it 'deleted'. Later, only non-deleted entries should be returned by query methods.
Let's take MongoDB as an example (although JPA is also interesting).
For standard methods defined by MongoRepository, we can extend the default implementation (SimpleMongoRepository), override the methods of interest and make them ignore 'deleted' documents.
But, of course, we'd also like to use custom query methods like
List<Person> findByFirstName(String firstName)
In a soft-delete environment, we are forced to do something like
List<Person> findByFirstNameAndDeletedIsFalse(String firstName)
or write queries manually with @Query (adding the same boilerplate condition about 'not deleted' all the time).
Here comes the question: is it possible to add this 'non-deleted' condition to any generated query automatically? I did not find anything in the documentation.
I'm looking at Spring Data (Mongo and JPA) 2.1.6.
Similar questions
Query interceptor for spring-data-mongodb for soft deletions - here they suggest Hibernate's @Where annotation, which only works for JPA+Hibernate, and it is not clear how to override it if you still need to access deleted items in some queries
Handling soft-deletes with Spring JPA - here people either suggest the same @Where-based approach, or the solution's applicability is limited to the already-defined standard methods, not the custom ones.
It turns out that for Mongo (at least, for spring-data-mongo 2.1.6) we can hack into the standard QueryLookupStrategy implementation to add the desired 'soft-deleted documents are not visible by finders' behavior:
public class SoftDeleteMongoQueryLookupStrategy implements QueryLookupStrategy {
private final QueryLookupStrategy strategy;
private final MongoOperations mongoOperations;
public SoftDeleteMongoQueryLookupStrategy(QueryLookupStrategy strategy,
MongoOperations mongoOperations) {
this.strategy = strategy;
this.mongoOperations = mongoOperations;
}
@Override
public RepositoryQuery resolveQuery(Method method, RepositoryMetadata metadata, ProjectionFactory factory,
NamedQueries namedQueries) {
RepositoryQuery repositoryQuery = strategy.resolveQuery(method, metadata, factory, namedQueries);
// revert to the standard behavior if requested
if (method.getAnnotation(SeesSoftlyDeletedRecords.class) != null) {
return repositoryQuery;
}
if (!(repositoryQuery instanceof PartTreeMongoQuery)) {
return repositoryQuery;
}
PartTreeMongoQuery partTreeQuery = (PartTreeMongoQuery) repositoryQuery;
return new SoftDeletePartTreeMongoQuery(partTreeQuery);
}
private Criteria notDeleted() {
return new Criteria().orOperator(
where("deleted").exists(false),
where("deleted").is(false)
);
}
private class SoftDeletePartTreeMongoQuery extends PartTreeMongoQuery {
SoftDeletePartTreeMongoQuery(PartTreeMongoQuery partTreeQuery) {
super(partTreeQuery.getQueryMethod(), mongoOperations);
}
@Override
protected Query createQuery(ConvertingParameterAccessor accessor) {
Query query = super.createQuery(accessor);
return withNotDeleted(query);
}
@Override
protected Query createCountQuery(ConvertingParameterAccessor accessor) {
Query query = super.createCountQuery(accessor);
return withNotDeleted(query);
}
private Query withNotDeleted(Query query) {
return query.addCriteria(notDeleted());
}
}
}
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface SeesSoftlyDeletedRecords {
}
We just add an 'and not deleted' condition to all the queries unless @SeesSoftlyDeletedRecords asks us to avoid it.
Then we need the following infrastructure to plug in our QueryLookupStrategy implementation:
public class SoftDeleteMongoRepositoryFactory extends MongoRepositoryFactory {
private final MongoOperations mongoOperations;
public SoftDeleteMongoRepositoryFactory(MongoOperations mongoOperations) {
super(mongoOperations);
this.mongoOperations = mongoOperations;
}
@Override
protected Optional<QueryLookupStrategy> getQueryLookupStrategy(QueryLookupStrategy.Key key,
QueryMethodEvaluationContextProvider evaluationContextProvider) {
Optional<QueryLookupStrategy> optStrategy = super.getQueryLookupStrategy(key,
evaluationContextProvider);
return optStrategy.map(this::createSoftDeleteQueryLookupStrategy);
}
private SoftDeleteMongoQueryLookupStrategy createSoftDeleteQueryLookupStrategy(QueryLookupStrategy strategy) {
return new SoftDeleteMongoQueryLookupStrategy(strategy, mongoOperations);
}
}
public class SoftDeleteMongoRepositoryFactoryBean<T extends Repository<S, ID>, S, ID extends Serializable>
extends MongoRepositoryFactoryBean<T, S, ID> {
public SoftDeleteMongoRepositoryFactoryBean(Class<? extends T> repositoryInterface) {
super(repositoryInterface);
}
@Override
protected RepositoryFactorySupport getFactoryInstance(MongoOperations operations) {
return new SoftDeleteMongoRepositoryFactory(operations);
}
}
Then we just need to reference the factory bean in an @EnableMongoRepositories annotation like this:
@EnableMongoRepositories(repositoryFactoryBeanClass = SoftDeleteMongoRepositoryFactoryBean.class)
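A repository that uses this then needs no extra boilerplate. As a purely hypothetical example (PersonRepository and its query methods are made up here):
public interface PersonRepository extends MongoRepository<Person, String> {

    // gets the 'and not deleted' criteria appended automatically
    List<Person> findByFirstName(String firstName);

    // opts out and also sees soft-deleted documents
    @SeesSoftlyDeletedRecords
    List<Person> findByLastName(String lastName);
}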
If it is required to determine dynamically whether a particular repository needs to be a 'soft-delete' or a regular 'hard-delete' repository, we can introspect the repository interface (or the domain class) and decide whether to change the QueryLookupStrategy or not, as sketched below.
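For example, resolveQuery(...) already receives the RepositoryMetadata, so one possible sketch (using a made-up SoftDeletable marker interface on the domain classes) is to wrap only the queries whose domain type opts in:
@Override
public RepositoryQuery resolveQuery(Method method, RepositoryMetadata metadata, ProjectionFactory factory,
        NamedQueries namedQueries) {
    RepositoryQuery repositoryQuery = strategy.resolveQuery(method, metadata, factory, namedQueries);
    // hypothetical marker interface: only soft-deletable domain types get the extra criteria
    if (!SoftDeletable.class.isAssignableFrom(metadata.getDomainType())
            || method.getAnnotation(SeesSoftlyDeletedRecords.class) != null
            || !(repositoryQuery instanceof PartTreeMongoQuery)) {
        return repositoryQuery;
    }
    return new SoftDeletePartTreeMongoQuery((PartTreeMongoQuery) repositoryQuery);
}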
As for JPA, this approach does not work without rewriting (possibly duplicating) a substantial part of the code in PartTreeJpaQuery.
I'm currently using Redis (3.2.100) with Spring Data Redis (1.8.9) and the Jedis connector.
When I use the save() function on an existing entity, Redis deletes my entity and re-creates it.
In my case I need to keep the existing entity and only update its attributes (I have another thread which reads the same entity at the same time).
In the Spring documentation (https://docs.spring.io/spring-data/data-redis/docs/current/reference/html/#redis.repositories.partial-updates) I found the partial update feature. Unfortunately, the example in the documentation uses the update() method of RedisTemplate, but this method does not exist.
So, did you ever use Spring Data Redis partial updates?
Is there another way to update a Redis entity without deleting it first?
Thanks
To get RedisKeyValueTemplate, you can do:
@Autowired
private RedisKeyValueTemplate redisKVTemplate;
redisKVTemplate.update(entity)
You should use RedisKeyValueTemplate to make partial updates.
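A minimal sketch of such a partial update (the Person hash and its property names are just placeholders here) could be:
PartialUpdate<Person> update = new PartialUpdate<>("person-1", Person.class)
        .set("firstname", "Bob")   // change a single property
        .del("nickname");          // or remove one

redisKVTemplate.update(update);
Only the listed properties are touched; the rest of the hash stays as it is.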
Note that the docs link above and the Spring Data tests (link) actually contributed nothing to the resulting solution.
Consider following entity
@RedisHash(value = "myservice/lastactivity")
@Data
@AllArgsConstructor
@NoArgsConstructor
@Builder
public class LastActivityCacheEntity implements Serializable {
@Id
@Indexed
@Size(max = 50)
private String user;
private long lastLogin;
private long lastProfileChange;
private long lastOperation;
}
Let's assume that:
we don't want to do a complex read-then-write exercise on every update:
entity = lastActivityCacheRepository.findByUser(userId);
lastActivityCacheRepository.save(LastActivityCacheEntity.builder()
.user(entity.getUser())
.lastLogin(entity.getLastLogin())
.lastProfileChange(entity.getLastProfileChange())
.lastOperation(entity.getLastOperation()).build());
what if some 100 rows pop up? Then on each update the entity has to be fetched and saved - quite inefficient, but it would still work.
we don't actually want the complex opsForHash + ObjectMapper + bean-configuration approach either - it's quite hard to implement and maintain (for example link)
So we're about to use something like:
@Autowired
private RedisKeyValueTemplate redisTemplate;
void partialUpdate(LastActivityCacheEntity update) {
var partialUpdate = PartialUpdate
.newPartialUpdate(update.getUser(), LastActivityCacheEntity.class);
if (update.getLastLogin() > 0)
partialUpdate.set("lastlastLogin", update.getLastLogin());
if (update.getLastProfileChange() > 0)
partialUpdate.set("lastProfileChange", update.getLastProfileChange());
if (update.getLastOperation() > 0)
partialUpdate.set("lastOperation", update.getLastOperation());
redisTemplate.update(partialUpdate);
}
And the thing is - it doesn't really work for this case.
That is, the values get updated, but you cannot query the new property later on via a repository entity lookup: lastActivityCacheRepository.findAll() will return the unchanged properties.
Here's the solution:
LastActivityCacheRepository.java:
@Repository
public interface LastActivityCacheRepository extends CrudRepository<LastActivityCacheEntity, String>, LastActivityCacheRepositoryCustom {
Optional<LastActivityCacheEntity> findByUser(String user);
}
LastActivityCacheRepositoryCustom.java:
public interface LastActivityCacheRepositoryCustom {
void updateEntry(String userId, String key, long date);
}
LastActivityCacheRepositoryCustomImpl.java
@Repository
public class LastActivityCacheRepositoryCustomImpl implements LastActivityCacheRepositoryCustom {
@Autowired
private RedisKeyValueTemplate redisKeyValueTemplate;
@Override
public void updateEntry(String userId, String key, long date) {
redisKeyValueTemplate.update(new PartialUpdate<>(userId, LastActivityCacheEntity.class)
.set(key, date));
}
}
And finally working sample:
void partialUpdate(LastActivityCacheEntity update) {
if ((lastActivityCacheRepository.findByUser(update.getUser()).isEmpty())) {
lastActivityCacheRepository.save(LastActivityCacheEntity.builder().user(update.getUser()).build());
}
if (update.getLastLogin() > 0) {
lastActivityCacheRepository.updateEntry(update.getUser(),
"lastlastLogin",
update.getLastLogin());
}
if (update.getLastProfileChange() > 0) {
lastActivityCacheRepository.updateEntry(update.getUser(),
"lastProfileChange",
update.getLastProfileChange());
}
if (update.getLastOperation() > 0) {
lastActivityCacheRepository.updateEntry(update.getUser(),
"lastOperation",
update.getLastOperation());
}
all credits to Chris Richardson and his src
If you don't want to type your field names as strings in the updateEntry method, you can use the Lombok annotation @FieldNameConstants on your entity class. This creates field name constants for you, and then you can access your field names like this:
...
if (update.getLastOperation() > 0) {
lastActivityCacheRepository.updateEntry(update.getUser(),
LastActivityCacheEntity.Fields.lastOperation, // <- instead of "lastOperation"
update.getLastOperation());
...
This makes refactoring the field names more bug-proof.
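The annotation itself goes on the entity class; Lombok then generates the nested Fields constants used above. A sketch, assuming the entity from earlier in this answer:
@FieldNameConstants
@RedisHash(value = "myservice/lastactivity")
public class LastActivityCacheEntity implements Serializable {
    // ... fields as before; Lombok generates LastActivityCacheEntity.Fields.lastOperation etc.
}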
I'm trying to implement a partial update of the Manager entity based on the following:
Entity
public class Manager {
private int id;
private String firstname;
private String lastname;
private String username;
private String password;
// getters and setters omitted
}
SaveManager method in Controller
#RequestMapping(value = "/save", method = RequestMethod.PATCH)
public #ResponseBody void saveManager(#RequestBody Manager manager){
managerService.saveManager(manager);
}
Save object manager in Dao impl.
@Override
public void saveManager(Manager manager) {
sessionFactory.getCurrentSession().saveOrUpdate(manager);
}
When I save the object, the username and password are changed correctly but the other values are empty.
So what I need to do is update the username and password and keep all the remaining data.
If you are truly using a PATCH, then you should use RequestMethod.PATCH, not RequestMethod.POST.
Your patch mapping should contain the id with which you can retrieve the Manager object to be patched. Also, it should only include the fields with which you want to change. In your example you are sending the entire entity, so you can't discern the fields that are actually changing (does empty mean leave this field alone or actually change its value to empty).
Perhaps an implementation as such is what you're after?
#RequestMapping(value = "/manager/{id}", method = RequestMethod.PATCH)
public #ResponseBody void saveManager(#PathVariable Long id, #RequestBody Map<Object, Object> fields) {
Manager manager = someServiceToLoadManager(id);
// Map key is field name, v is value
fields.forEach((k, v) -> {
// use reflection to get field k on manager and set it to value v
Field field = ReflectionUtils.findField(Manager.class, k);
field.setAccessible(true);
ReflectionUtils.setField(field, manager, v);
});
managerService.saveManager(manager);
}
Update
I want to provide an update to this post as there is now a project that simplifies the patching process.
The artifact is
<dependency>
<groupId>com.github.java-json-tools</groupId>
<artifactId>json-patch</artifactId>
<version>1.13</version>
</dependency>
The implementation to patch the Manager object in the OP would look like this:
Controller
@Operation(summary = "Patch a Manager")
@PatchMapping("/{managerId}")
public Manager patchManager(@PathVariable Long managerId, @RequestBody JsonPatch jsonPatch)
throws JsonPatchException, JsonProcessingException {
return managerService.patch(managerId, jsonPatch);
}
Service
public Manager patch(Long managerId, JsonPatch jsonPatch) throws JsonPatchException, JsonProcessingException {
Manager manager = managerRepository.findById(managerId).orElseThrow(EntityNotFoundException::new);
JsonNode patched = jsonPatch.apply(objectMapper.convertValue(manager, JsonNode.class));
return managerRepository.save(objectMapper.treeToValue(patched, Manager.class));
}
The patch request follows the specification in RFC 6902, so this is a true PATCH implementation. Details can be found here.
With this, you can patch your changes
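For example, a request body that changes only the username (the value is chosen arbitrarily here) would be [{ "op": "replace", "path": "/username", "value": "new-username" }], typically sent with the Content-Type application/json-patch+json.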
1. Autowire `ObjectMapper` in controller;
2. @PatchMapping("/manager/{id}")
ResponseEntity<?> saveManager(@RequestBody Map<String, String> manager) {
Manager toBePatchedManager = objectMapper.convertValue(manager, Manager.class);
managerService.patch(toBePatchedManager);
}
3. Create new method `patch` in `ManagerService`
4. Autowire `NullAwareBeanUtilsBean` in `ManagerService`
5. public void patch(Manager toBePatched) {
Optional<Manager> optionalManager = managerRepository.findById(toBePatched.getId());
if (optionalManager.isPresent()) {
Manager fromDb = optionalManager.get();
// bean utils will copy non null values from toBePatched to fromDb manager.
beanUtils.copyProperties(fromDb, toBePatched);
updateManager(fromDb);
}
}
You will have to extend BeanUtilsBean to implement the copy-only-non-null-values behaviour.
public class NullAwareBeanUtilsBean extends BeanUtilsBean {
@Override
public void copyProperty(Object dest, String name, Object value)
throws IllegalAccessException, InvocationTargetException {
if (value == null)
return;
super.copyProperty(dest, name, value);
}
}
and finally, mark NullAwareBeanUtilsBean as @Component
or
register NullAwareBeanUtilsBean as bean
@Bean
public NullAwareBeanUtilsBean nullAwareBeanUtilsBean() {
return new NullAwareBeanUtilsBean();
}
First, you need to know whether you are doing an insert or an update. An insert is straightforward. On an update, use get() to retrieve the entity, then update whatever fields you need; at the end of the transaction Hibernate will flush the changes and commit.
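A sketch of that update path, using the session API shown elsewhere in this thread (the choice of fields is just an example):
@Override
public void saveManager(Manager manager) {
    Session session = sessionFactory.getCurrentSession();
    // load the attached instance for the given id
    Manager existing = session.get(Manager.class, manager.getId());
    // change only the fields you actually want to update
    existing.setUsername(manager.getUsername());
    existing.setPassword(manager.getPassword());
    // no explicit save/update call needed: dirty checking flushes the changes on commit
}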
You can write a custom update query which updates only particular fields:
@Override
public void saveManager(Manager manager) {
Query query = sessionFactory.getCurrentSession().createQuery("update Manager set username = :username, password = :password where id = :id");
query.setParameter("username", manager.getUsername());
query.setParameter("password", manager.getPassword());
query.setParameter("id", manager.getId());
query.executeUpdate();
}
ObjectMapper.updateValue provides all you need to partially map your entity with values from the DTO.
In addition, you can use either of the two here: Map<String, Object> fields or String json, so your service method may look like this:
@Autowired
private ObjectMapper objectMapper;
@Override
@Transactional
public Foo save(long id, Map<String, Object> fields) throws JsonMappingException {
Foo foo = fooRepository.findById(id)
.orElseThrow(() -> new ResourceNotFoundException("Foo not found for this id: " + id));
return objectMapper.updateValue(foo, fields);
}
As a second solution, and an addition to Lane Maxwell's answer, you could use reflection to map only the properties present in the Map of values that was sent, so your service method may look like this:
@Override
@Transactional
public Foo save(long id, Map<String, Object> fields) {
Foo foo = fooRepository.findById(id)
.orElseThrow(() -> new ResourceNotFoundException("Foo not found for this id: " + id));
fields.keySet()
.forEach(k -> {
Method method = ReflectionUtils.findMethod(Foo.class, "set" + StringUtils.capitalize(k));
if (method != null) {
ReflectionUtils.invokeMethod(method, foo, fields.get(k));
}
});
return foo;
}
The second solution allows you to insert some additional business logic into the mapping process, e.g. conversions or calculations.
Also, unlike finding a field by name with Field field = ReflectionUtils.findField(Foo.class, k); and then making it accessible, finding the property's setter actually calls the setter method, which might contain additional logic to be executed, and avoids writing directly to private fields.