Modify entity before flush - java

Is it possible to update some column before every flush to database? I have modifiedOn and modifiedBy columns and want to update them on every DB update, similar to DB trigger. Is it possible with JPA?

Disclaimer: The following works only with Hibernate / JPA.
You can update the modifiedOn property like this with JPA:
public class Entity {

    private Date modifiedOn;

    @PreUpdate
    @PrePersist
    public void updateModified() {
        modifiedOn = new Date();
    }
}
As for the modifiedBy, it is a little bit trickier since the JPA spec discourages references to other entities in the lifecycle callback methods. Furthermore, you would need some knowledge of the current user, which probably belongs to the service layer.
You could use an EntityListener like this (however, this still uses the callback methods)
@Entity
@EntityListeners({MyListener.class})
public class MyEntity {
    Date modifiedOn;
    User modifiedBy;
    ...
}
And in the EntityListener:
public class MyListener {

    CurrentUserProvider provider; // Implement this and make sure it is set

    @PreUpdate
    @PrePersist
    public void updateModifier(MyEntity entity) {
        entity.setModifiedOn(new Date());
        entity.setModifiedBy(provider.getCurrentUser());
    }
}
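If the current user is available from Spring Security, the provider can be a thin wrapper around the security context. A minimal sketch, assuming Spring Security is on the classpath and the principal can be mapped to your User entity (CurrentUserProvider itself is the interface assumed by the listener above, not a standard API):
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

public class SecurityContextCurrentUserProvider implements CurrentUserProvider {

    @Override
    public User getCurrentUser() {
        // Read the authentication bound to the current thread, if any.
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        if (auth == null || !auth.isAuthenticated()) {
            return null; // e.g. batch jobs or unauthenticated requests
        }
        // Mapping the principal to your own User entity is application-specific.
        return (User) auth.getPrincipal();
    }
}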

JPA defines entity lifecycle callbacks in the spec, such as @PrePersist and @PreUpdate. A quick review of the spec will give you more details.
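For reference, a minimal sketch listing the lifecycle callbacks the spec defines (the entity and method names here are only illustrative):
import javax.persistence.*;

@Entity
public class AuditedEntity {

    @Id
    private Long id;

    @PrePersist  void beforeInsert() { /* runs before the entity is inserted */ }
    @PostPersist void afterInsert()  { /* runs after the insert */ }
    @PreUpdate   void beforeUpdate() { /* runs before an update caused by a flush */ }
    @PostUpdate  void afterUpdate()  { /* runs after the update */ }
    @PreRemove   void beforeDelete() { /* runs before the entity is removed */ }
    @PostRemove  void afterDelete()  { /* runs after the removal */ }
    @PostLoad    void afterLoad()    { /* runs after the entity has been loaded */ }
}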

Related

Why can't a column be set to null using JPA?

Let's start with this entity:
@Entity
public class MyEntity {
    ...
    @Column(length = 80)
    private String description;

    @Column(name = "enum_column", precision = 18)
    @Convert(converter = EnumColumnConverter.class)
    private MyEnum enumColumn;
    ...
}
Here, you see two columns that are nullable (in my entity and in the database). The converter replaces the enum with a Long value in the database. A repository class is defined accordingly:
@Repository
public interface MyEntityRepository extends JpaRepository<MyEntity, Long> {}
A DTO is defined in a service package:
public class MyEntityDto {
    ...
    private String description;
    private MyEnum enumColumn;
    ...
}
Mapping between DTOs and entities is done with Dozer. DTOs are modified from a JavaFX UI, and a service sits between the UI and the persistence layer to save modified entities.
@Service
@Transactional
public class MyEntityService {

    @Autowired MyEntityRepository myEntityRepository;
    ...
    public List<MyEntityDto> save(List<MyEntityDto> dtosToSave) {
        List<MyEntityDto> results = Collections.emptyList();
        if (dtosToSave != null && !dtosToSave.isEmpty()) {
            Iterable<MyEntity> entities = convertDtosWithDozer(dtosToSave);
            List<MyEntity> savedEntities = myEntityRepository.saveAll(entities);
            results = convertEntitiesWithDozer(savedEntities);
        }
        return results;
    }
}
From the UI, I modify an existing row where both description and enumColumn are not null. Both values are set to null.
The problem is that none of them is set to null in the database. In the logs, the update request generated by Hibernate does not include these columns. When I debug the code, these columns are null in dtosToSave, entities, savedEntities and results.
I created a unit test for MyEntityRepository where I save an entity with a non-null description and enumColumn. I reload the entity from the database using the repository to be sure these columns are not null. I set them to null, save the entity, and load it back from the database. Now both columns are indeed null, which is what I expected.
My question: what am I missing here? Why does the repository not save null columns? If I set any non-null values, it works perfectly.
Thanks in advance.
UPDATE: could my problem be related to this? Jpa Repository save() doesn't update existing data
You convert your DTOs to entities via Dozer, but at this point the entities are still in the detached state. To update existing entities you first need to load them from the database via your repository, with something like repository.findById(id).
Then you get the entity in the "attached" (managed) state, so state transitions (updates on fields) will be applied.
During save(), all your entity state transitions are translated to the corresponding DML, and your update should now work.
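A minimal sketch of that load-then-map flow inside the save() method shown above (this assumes the DTO exposes its id via getId() and that a Dozer Mapper bean called mapper is available; with Dozer's default map-null behaviour the null description and enumColumn are copied onto the managed entity):
public List<MyEntityDto> save(List<MyEntityDto> dtosToSave) {
    List<MyEntityDto> results = new ArrayList<>();
    if (dtosToSave == null) {
        return results;
    }
    for (MyEntityDto dto : dtosToSave) {
        // Load the managed ("attached") entity, or create a fresh one for new rows.
        MyEntity entity = dto.getId() != null
                ? myEntityRepository.findById(dto.getId()).orElseGet(MyEntity::new)
                : new MyEntity();
        // Copy the DTO state onto the managed entity (nulls included),
        // so Hibernate's dirty checking picks up the cleared columns.
        mapper.map(dto, entity);
        MyEntity saved = myEntityRepository.save(entity);
        results.add(mapper.map(saved, MyEntityDto.class));
    }
    return results;
}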
And regarding this statement
I reload the entity from the database using the repository to be sure these columns are not null. I set them to null, save the entity, and load it back from the database. Now both columns are indeed null, which is what I've been expecting.
As you said, there you reload the entity from the database first, which is why it works.

How to use Mongo Auditing and a UUID as id with Spring Boot 2.2.x?

I would like to have documents stored with a UUID id and createdAt / updatedAt fields. My solution was working with Spring Boot 2.1.x. After I upgraded from Spring Boot 2.1.11.RELEASE to 2.2.0.RELEASE, my test for MongoAuditing failed with createdAt = null. What do I need to do to get the createdAt field filled again?
This is not just a test problem. I ran the application and it shows the same behaviour as my test: all auditing fields stay null.
I have a Configuration to enable MongoAuditing and UUID generation:
@Configuration
@EnableMongoAuditing
public class MongoConfiguration {

    @Bean
    public GenerateUUIDListener generateUUIDListener() {
        return new GenerateUUIDListener();
    }
}
The listener hooks into onBeforeConvert - I guess that's where the trouble starts.
public class GenerateUUIDListener extends AbstractMongoEventListener<IdentifiableEntity> {

    @Override
    public void onBeforeConvert(BeforeConvertEvent<IdentifiableEntity> event) {
        IdentifiableEntity entity = event.getSource();
        if (entity.isNew()) {
            entity.setId(UUID.randomUUID());
        }
    }
}
The document itself (I dropped the getters and setters):
@Document
public class MyDocument extends InsertableEntity {
    private String name;
}

public abstract class InsertableEntity extends IdentifiableEntity {

    @CreatedDate
    @JsonIgnore
    private Instant createdAt;
}

public abstract class IdentifiableEntity implements Persistable<UUID> {

    @Id
    private UUID id;

    @JsonIgnore
    public boolean isNew() {
        return getId() == null;
    }
}
A complete minimal example (including a test) can be found here: https://github.com/mab/auditable
With 2.1.11.RELEASE the test succeeds; with 2.2.0.RELEASE it fails.
For me the best solution was to switch from event-based UUID generation to a callback-based one. By implementing Ordered we can make the new callback execute after the AuditingEntityCallback.
public class IdEntityCallback implements BeforeConvertCallback<IdentifiableEntity>, Ordered {

    @Override
    public IdentifiableEntity onBeforeConvert(IdentifiableEntity entity, String collection) {
        if (entity.isNew()) {
            entity.setId(UUID.randomUUID());
        }
        return entity;
    }

    @Override
    public int getOrder() {
        return 101;
    }
}
I registered the callback with the MongoConfiguration. For a more general solution you might want to take a look at how the AuditingEntityCallback is registered by the MongoAuditingBeanDefinitionParser.
@Configuration
@EnableMongoAuditing
public class MongoConfiguration {

    @Bean
    public IdEntityCallback registerCallback() {
        return new IdEntityCallback();
    }
}
MongoTemplate works in the following way in doInsert():
this.maybeEmitEvent - emits an event (onBeforeConvert, onBeforeSave and so on) that any AbstractMongoEventListener can catch and act upon, like you did with GenerateUUIDListener
this.maybeCallBeforeConvert - calls the before-convert callbacks, such as Mongo auditing
as you can see in the source code of MongoTemplate (lines 831-832)
protected <T> T doInsert(String collectionName, T objectToSave, MongoWriter<T> writer) {
    BeforeConvertEvent<T> event = new BeforeConvertEvent(objectToSave, collectionName);
    T toConvert = ((BeforeConvertEvent) this.maybeEmitEvent(event)).getSource(); // emit event
    toConvert = this.maybeCallBeforeConvert(toConvert, collectionName); // call some before-convert handlers
    ...
}
Mongo auditing marks createdAt only on new entities, by checking whether entity.isNew() == true.
Because your code already set the id (the UUID), the entity is not considered new, so createdAt is not populated.
You can do one of the following (ordered from best to worst):
forget about the UUID and use String for your id; let Mongo itself create and manage its entities' ids (this is how MongoTemplate actually works, lines 811-812)
keep the UUID at the code level, convert from/to String when inserting and retrieving from the db
create a custom repository like in this post
stay with 2.1.11.RELEASE
set the createdAt/updatedAt in GenerateUUIDListener as well as the id (rename it NewEntityListener or something), i.e. implement the auditing yourself
implement new isNew() logic that doesn't depend only on the entity id (see the sketch after these notes)
in version 2.1.11.RELEASE the order of the methods was flipped (MongoTemplate.class 804-805) so your code worked fine
As an abstract point, events are by nature send-and-forget (async compatible), so it is very bad practice to mutate the object itself; there is NO guarantee about the order of computation, if any.
This is why the auditing is built on callbacks and not events, and that's why Pivotal doesn't (need to) keep the order stable between versions.
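If you go with the last option, new isNew() logic, a minimal sketch could look like the following (the transient flag, the markLoaded() helper and the extra listener are assumptions for illustration; the listener would have to be registered as a bean just like GenerateUUIDListener):
public abstract class IdentifiableEntity implements Persistable<UUID> {

    @Id
    private UUID id;

    // org.springframework.data.annotation.Transient - not persisted, defaults to "new"
    @Transient
    private boolean loadedFromDb = false;

    @Override
    public boolean isNew() {
        // No longer depends only on the id, so a pre-assigned UUID does not hide newness
        return !loadedFromDb;
    }

    public void markLoaded() {
        this.loadedFromDb = true;
    }
}

// Flags every entity materialized from MongoDB as "not new"
public class MarkLoadedListener extends AbstractMongoEventListener<IdentifiableEntity> {

    @Override
    public void onAfterConvert(AfterConvertEvent<IdentifiableEntity> event) {
        event.getSource().markLoaded();
    }
}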

Prevent hibernate entity changes from being persisted

I am updating my application from Spring Boot 1.4.5 / Hibernate 4.3.5 to Spring Boot 2.0.9 / Hibernate 5.2.18 and code that used to work in the previous configuration is no longer working.
The scenario is as follows:
Start a transaction by entering a method annotated with #Transactional
Hydrate the entity
Change the entity
Make another query
Detect a problem. As a result of this problem, determine that changes should not persist.
Evict the entity
Exit the method / transaction
With Hibernate 4.3.5, calling entityManager.detach() would prevent the changes from being persisted. However, with Hibernate 5.2.18, I'm finding that changes are persisted even with this call. I have also tried to evict() from the session and I have tried to clear() all entities from the session (just to see what would happen).
So I ask - is it possible to discard entity changes in Hibernate 5.2.18 the way that I was able to do in Hibernate 4.3.5?
The relevant code is below...
@Entity
public class Agreement {

    private Long agreementId;
    private Integer agreementStateId;

    @Id
    @Column(name = "agreement_id")
    public Long getAgreementId() {
        return agreementId;
    }

    public void setAgreementId(Long agreementId) {
        this.agreementId = agreementId;
    }

    @Basic
    @Column(name = "agreement_state_id", nullable = false)
    public Integer getAgreementStateId() {
        return agreementStateId;
    }

    public void setAgreementStateId(Integer agreementStateId) {
        this.agreementStateId = agreementStateId;
    }
}
@Component
public class Repo1 {

    @PersistenceContext(unitName = "rights")
    private EntityManager entityManager;

    public void evict(Object entity) {
        entityManager.detach(entity);
    }

    public Agreement getAgreement(Long agreementId) {
        // Code to get entity is here.
        // Agreement with an agreementStateId of 5 is returned.
    }

    public void anotherQuery() {
        // Code to make another query is here.
    }
}
@Component
public class Service1 {

    @Autowired
    Repo1 repo;

    @Transactional
    public void doSomething() {
        Agreement agreement = repo.getAgreement(1L);
        // Change agreementStateId. Very simple for purposes of example.
        agreement.setAgreementStateId(100);
        // Make another query
        repo.anotherQuery();
        // Detect a problem here. Simplified for purposes of example.
        if (agreement.getAgreementStateId() == 100) {
            repo.evict(agreement);
        }
    }
}
I have found the problem and it has nothing to do with evict(). It turns out that an additional query was causing the session to flush prior to the evict() call.
In general, the application uses QueryDSL to make queries. Queries made in this way did not result in the session flushing prior to making a query. However in this case, the query was created via Session.createSQLQuery(). This uses the FlushMode already assigned to the session which was FlushMode.AUTO.
I was able to prevent the flush by calling setHibernateFlushMode(FlushMode.COMMIT) on the query prior to making the query. This causes the session FlushMode to temporarily change until after the query has been run. After that, the evict() call worked as expected.
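For reference, a minimal sketch of that fix inside the repository's query method (the SQL string and method placement are illustrative; the relevant part is setting the flush mode on the query before executing it):
public void anotherQuery() {
    org.hibernate.Session session = entityManager.unwrap(org.hibernate.Session.class);
    // Setting COMMIT flush mode on this query only, so the dirty Agreement
    // is not flushed before the query runs and can still be evicted later.
    Number count = (Number) session.createSQLQuery("select count(*) from agreement") // illustrative SQL
            .setHibernateFlushMode(org.hibernate.FlushMode.COMMIT)
            .uniqueResult();
}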

Update entity in redis with spring-data-redis

I'm currently using Redis (3.2.100) with Spring Data Redis (1.8.9) and the Jedis connector.
When I use the save() function on an existing entity, Redis deletes my entity and recreates it.
In my case I need to keep the existing entity and only update its attributes (I have another thread which reads the same entity at the same time).
In the Spring documentation (https://docs.spring.io/spring-data/data-redis/docs/current/reference/html/#redis.repositories.partial-updates) I found the partial update feature. Unfortunately, the example in the documentation uses the update() method of RedisTemplate, but this method does not exist.
So, have you ever used Spring Data Redis partial updates?
Is there another way to update an entity in Redis without deleting it first?
Thanks
To get a RedisKeyValueTemplate, you can do:
@Autowired
private RedisKeyValueTemplate redisKVTemplate;

redisKVTemplate.update(entity)
You should use RedisKeyValueTemplate to make partial updates.
Well, note that the following docs link and also the Spring Data tests (link) actually made zero contribution to the resulting solution.
Consider the following entity:
@RedisHash(value = "myservice/lastactivity")
@Data
@AllArgsConstructor
@NoArgsConstructor
@Builder
public class LastActivityCacheEntity implements Serializable {

    @Id
    @Indexed
    @Size(max = 50)
    private String user;

    private long lastLogin;
    private long lastProfileChange;
    private long lastOperation;
}
Let's assume that:
we don't want to do a complex read-then-write exercise on every update, like this:
entity = lastActivityCacheRepository.findByUser(userId);
lastActivityCacheRepository.save(LastActivityCacheEntity.builder()
        .user(entity.getUser())
        .lastLogin(entity.getLastLogin())
        .lastProfileChange(entity.getLastProfileChange())
        .lastOperation(entity.getLastOperation()).build());
What if some 100 rows popped up? Then on each update the entity has to be fetched and saved - quite inefficient, but it would still work.
we also don't want the complex opsForHash + ObjectMapper + bean-configuration approach - it's quite hard to implement and maintain (for example, link)
So we're about to use something like:
@Autowired
private RedisKeyValueTemplate redisTemplate;

void partialUpdate(LastActivityCacheEntity update) {
    var partialUpdate = PartialUpdate
            .newPartialUpdate(update.getUser(), LastActivityCacheEntity.class);
    if (update.getLastLogin() > 0)
        partialUpdate.set("lastLogin", update.getLastLogin());
    if (update.getLastProfileChange() > 0)
        partialUpdate.set("lastProfileChange", update.getLastProfileChange());
    if (update.getLastOperation() > 0)
        partialUpdate.set("lastOperation", update.getLastOperation());
    redisTemplate.update(partialUpdate);
}
And the thing is - it doesn't really work for this case.
That is, the values get updated, but you cannot query the new property values later via a repository entity lookup: for instance, lastActivityCacheRepository.findAll() will return the unchanged properties.
Here's the solution:
LastActivityCacheRepository.java:
@Repository
public interface LastActivityCacheRepository extends CrudRepository<LastActivityCacheEntity, String>, LastActivityCacheRepositoryCustom {
    Optional<LastActivityCacheEntity> findByUser(String user);
}
LastActivityCacheRepositoryCustom.java:
public interface LastActivityCacheRepositoryCustom {
    void updateEntry(String userId, String key, long date);
}
LastActivityCacheRepositoryCustomImpl.java
@Repository
public class LastActivityCacheRepositoryCustomImpl implements LastActivityCacheRepositoryCustom {

    @Autowired
    private RedisKeyValueTemplate redisKeyValueTemplate;

    @Override
    public void updateEntry(String userId, String key, long date) {
        redisKeyValueTemplate.update(new PartialUpdate<>(userId, LastActivityCacheEntity.class)
                .set(key, date));
    }
}
And finally working sample:
void partialUpdate(LastActivityCacheEntity update) {
    if (lastActivityCacheRepository.findByUser(update.getUser()).isEmpty()) {
        lastActivityCacheRepository.save(LastActivityCacheEntity.builder().user(update.getUser()).build());
    }
    if (update.getLastLogin() > 0) {
        lastActivityCacheRepository.updateEntry(update.getUser(),
                "lastLogin",
                update.getLastLogin());
    }
    if (update.getLastProfileChange() > 0) {
        lastActivityCacheRepository.updateEntry(update.getUser(),
                "lastProfileChange",
                update.getLastProfileChange());
    }
    if (update.getLastOperation() > 0) {
        lastActivityCacheRepository.updateEntry(update.getUser(),
                "lastOperation",
                update.getLastOperation());
    }
}
all credits to Chris Richardson and his src
If you don't want to type your field names as strings in the updateEntry method, you can use the Lombok annotation @FieldNameConstants on your entity class. This creates field name constants for you, and then you can access your field names like this:
...
if (update.getLastOperation() > 0) {
    lastActivityCacheRepository.updateEntry(update.getUser(),
            LastActivityCacheEntity.Fields.lastOperation, // <- instead of "lastOperation"
            update.getLastOperation());
}
...
This makes refactoring the field names more bug-proof.

Spring Data JPARepository: How to conditionally fetch children entities

How can one configure their JPA entities so that related entities are not fetched unless a certain execution parameter is provided?
According to Spring's documentation, 4.3.9. Configuring Fetch- and LoadGraphs, you need to use the @EntityGraph annotation to specify the fetch policy for queries; however, this doesn't let me decide at runtime whether I want to load those entities.
I'm okay with getting the child entities in a separate query, but in order to do that I would need to configure my repository or entities to not retrieve any children. Unfortunately, I cannot seem to find any strategies on how to do this. FetchPolicy is ignored, and EntityGraph is only helpful when specifying which entities I want to eagerly retrieve.
For example, assume Account is the parent and Contact is the child, and an Account can have many Contacts.
I want to be able to do this:
if (fetchPolicy.contains("contacts")) {
    account.setContacts(contactRepository.findByAccountId(account.getAccountId()));
}
The problem is spring-data eagerly fetches the contacts anyways.
The Account Entity class looks like this:
@Entity
@Table(name = "accounts")
public class Account
{
    protected String accountId;
    protected Collection<Contact> contacts;

    @OneToMany
    //@OneToMany(fetch=FetchType.LAZY) --> doesn't work, Spring Repositories ignore this
    @JoinColumn(name="account_id", referencedColumnName="account_id")
    public Collection<Contact> getContacts()
    {
        return contacts;
    }
    //getters & setters
}
The AccountRepository class looks like this:
public interface AccountRepository extends JpaRepository<Account, String>
{
    //@EntityGraph ... <-- has type = LOAD or FETCH, but neither can help me prevent retrieval
    Account findOne(String id);
}
Lazy fetching should work properly as long as no method is called on the collection returned by getContacts().
If you prefer more manual work and really want control over this (maybe more contexts depending on the use case), I would suggest removing contacts from the Account entity and mapping the account on the Contact side instead. One way to tell Hibernate to ignore the field is to map it with the @Transient annotation.
@Entity
@Table(name = "accounts")
public class Account
{
    protected String accountId;
    protected Collection<Contact> contacts;

    @Transient
    public Collection<Contact> getContacts()
    {
        return contacts;
    }
    //getters & setters
}
Then in your service class, you could do something like:
public Account getAccountById(String accountId, Set<String> fetchPolicy) {
    Account account = accountRepository.findOne(accountId);
    if (fetchPolicy.contains("contacts")) {
        account.setContacts(contactRepository.findByAccountId(account.getAccountId()));
    }
    return account;
}
Hope this is what you are looking for. Btw, the code is untested, so you should probably check again.
You can use @Transactional for that.
For that you need to fetch your Account entity lazily.
@Transactional annotations should be placed around all operations that are inseparable.
Write a method in your service layer that accepts a flag for fetching contacts eagerly.
@Transactional
public Account getAccount(String id, boolean fetchEagerly) {
    Account account = accountRepository.findOne(id);
    // If you want to fetch contacts then pass fetchEagerly as true
    if (fetchEagerly) {
        // Accessing the collection inside the transaction initializes the lazy contacts
        account.getContacts().size();
    }
    return account;
}
@Transactional means the service can make multiple calls in a single transaction without closing the connection to the endpoint.
Hope you find this useful. :)
For more details, refer to this link.
Please find below an example which runs with JPA 2.1.
Set only the attribute(s) you want to load (with the attributeNodes list).
Your entity with entity graph annotations:
@Entity
@NamedEntityGraph(name = "accountGraph", attributeNodes = {
        @NamedAttributeNode("accountId")})
@Table(name = "accounts")
public class Account {

    protected String accountId;
    protected Collection<Contact> contacts;

    @OneToMany(fetch=FetchType.LAZY)
    @JoinColumn(name="account_id", referencedColumnName="account_id")
    public Collection<Contact> getContacts()
    {
        return contacts;
    }
}
Your custom interface :
public interface AccountRepository extends JpaRepository<Account, String> {

    @EntityGraph("accountGraph")
    Account findOne(String id);
}
Only the "accountId" property will be loaded eagerly. All other properties will be loaded lazily on access.
Spring Data does not ignore fetch = FetchType.LAZY.
My problem was that I was using Dozer mapping to convert my entities to object graphs. Evidently Dozer calls the getters and setters to map two objects, so I needed to add a custom field mapper configuration to ignore uninitialized PersistentCollections...
GlobalCustomFieldMapper.java:
public class GlobalCustomFieldMapper implements CustomFieldMapper
{
    public boolean mapField(Object source, Object destination, Object sourceFieldValue, ClassMap classMap, FieldMap fieldMapping)
    {
        if (!(sourceFieldValue instanceof PersistentCollection)) {
            // Allow dozer to map as normal
            return false;
        }
        if (((PersistentCollection) sourceFieldValue).wasInitialized()) {
            // Allow dozer to map as normal
            return false;
        }
        // Uninitialized lazy collection: leave the destination value unset (null)
        // and tell dozer that the field has already been mapped
        return true;
    }
}
If you are trying to send the result set of your entities to a client, I recommend using data transfer objects (DTOs) instead of the entities. You can create the DTO directly within the HQL/JPQL.
For example
"select new com.test.MyTableDto(my.id, my.name) from MyTable my"
and if you want to pass the child
"select new com.test.MyTableDto(my.id, my.name, my.child) from MyTable my"
That way you have full control over what is created and passed to the client.
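A minimal sketch of how that can look with a Spring Data repository (the MyTableDto constructor and the query method name are illustrative assumptions):
// DTO with a constructor matching the JPQL constructor expression
public class MyTableDto {

    private final Long id;
    private final String name;

    public MyTableDto(Long id, String name) {
        this.id = id;
        this.name = name;
    }
    // getters omitted
}

// Repository method using the constructor expression, so no entity
// (and no lazy child collection) is materialized at all
public interface MyTableRepository extends JpaRepository<MyTable, Long> {

    @Query("select new com.test.MyTableDto(my.id, my.name) from MyTable my where my.id = :id")
    MyTableDto findDtoById(@Param("id") Long id);
}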
