The onFlushDirty Hibernate Interceptor method is never called - java

Question: Why is MyInterceptor#onFlushDirty never called?
I extend AbstractEntityManagerFactoryBean and wire it in XML config like this:
<bean id="myEntityManagerFactory" parent="abstractEntityManagerFactoryBean" abstract="true">
    <property name="entityInterceptor">
        <bean class="xxxx.MyInterceptor"/>
    </property>
</bean>
<bean id="abstractEntityManagerFactoryBean" class="xxxx.MyEntityManagerFactoryBean"/>
MyEntityManagerFactoryBean
public class MyEntityManagerFactoryBean extends AbstractEntityManagerFactoryBean implements LoadTimeWeaverAware {

    private Interceptor entityInterceptor;

    public Interceptor getEntityInterceptor() {
        return entityInterceptor;
    }

    public void setEntityInterceptor(Interceptor interceptor) {
        entityInterceptor = interceptor;
    }
}
MyInterceptor:
public class MyInterceptor extends EmptyInterceptor {

    public MyInterceptor() {
        System.out.println("init"); // Works well
    }

    // PROBLEM - is never called
    @Override
    public boolean onFlushDirty(Object entity,
                                Serializable id,
                                Object[] currentState,
                                Object[] previousState,
                                String[] propertyNames,
                                Type[] types) {
        if (entity instanceof File) {
            .....
        }
        return false;
    }
}
UPDATE: [explanation of why a custom dirty policy does not look like the way to go for me]
I want to update the modified timestamp each time I change anything in the Folder entity EXCEPT folderPosition. At the same time, folderPosition should be persistent and not transient (meaning a change to it still makes the entity dirty).
Since I use Spring @Transactional and Hibernate templates, there are some nuances:
1) I can't update the modified timestamp at the end of each setter, like:
public void setXXX(XXX xxx) {
    // PROBLEM: Hibernate templates populate objects via setters, so even a
    // simple get query would cause multiple 'modified' timestamp updates
    this.xxx = xxx;
    this.modified = new Date();
}
2) I can't call setModified manually: the entity has about 25 fields, the setXXX calls for each field are scattered across the whole app, and I am not in a position to refactor it.
@Entity
public class Folder {

    /**
     * GOAL: changing any of these fields except 'folderPosition' should cause
     * a 'modified' timestamp update
     */
    private long id;
    private String name;
    private Date created;
    private Date modified;
    private Integer folderPosition;

    @PreUpdate
    public void preUpdate() {
        // PROBLEM: changes 'modified' even if only folderPosition has been changed!
        // PROBLEM: need to know which fields have been updated!
        modified = new Date();
    }
    ....
}

You need to override the findDirty method, not onFlushDirty. Check this tutorial for a detailed explanation with a reference to a working GitHub example.
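As a rough sketch of the idea (not the tutorial's exact code), a findDirty override can bump the modified timestamp only when something other than folderPosition changed; the property names below are assumed to match the question's Folder mapping:
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Objects;

import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

public class MyInterceptor extends EmptyInterceptor {

    @Override
    public int[] findDirty(Object entity, Serializable id,
                           Object[] currentState, Object[] previousState,
                           String[] propertyNames, Type[] types) {
        if (!(entity instanceof Folder) || previousState == null) {
            return null; // null = let Hibernate run its default dirty check
        }
        List<Integer> dirty = new ArrayList<>();
        boolean relevantChange = false;
        int modifiedIndex = -1;
        for (int i = 0; i < propertyNames.length; i++) {
            if ("modified".equals(propertyNames[i])) {
                modifiedIndex = i;
            } else if (!Objects.equals(currentState[i], previousState[i])) {
                dirty.add(i); // property i really changed
                if (!"folderPosition".equals(propertyNames[i])) {
                    relevantChange = true; // something besides the position changed
                }
            }
        }
        if (relevantChange && modifiedIndex >= 0) {
            currentState[modifiedIndex] = new Date(); // bump the timestamp in-flight
            dirty.add(modifiedIndex);
        }
        return dirty.stream().mapToInt(Integer::intValue).toArray();
    }
}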

Related

How to use Mongo Auditing and a UUID as id with Spring Boot 2.2.x?

I would like to have Documents stored with a UUID id and createdAt / updatedAt fields. My solution was working with Spring Boot 2.1.x. After I upgraded from Spring Boot 2.1.11.RELEASE to 2.2.0.RELEASE, my test for MongoAuditing failed with createdAt = null. What do I need to do to get the createdAt field filled again?
This is not just a test problem. I ran the application and it shows the same behaviour as my test: all auditing fields stay null.
I have a Configuration to enable MongoAuditing and UUID generation:
@Configuration
@EnableMongoAuditing
public class MongoConfiguration {

    @Bean
    public GenerateUUIDListener generateUUIDListener() {
        return new GenerateUUIDListener();
    }
}
The listener hooks into onBeforeConvert - I guess that's where the trouble starts.
public class GenerateUUIDListener extends AbstractMongoEventListener<IdentifiableEntity> {

    @Override
    public void onBeforeConvert(BeforeConvertEvent<IdentifiableEntity> event) {
        IdentifiableEntity entity = event.getSource();
        if (entity.isNew()) {
            entity.setId(UUID.randomUUID());
        }
    }
}
The document itself (I dropped the getters and setters):
@Document
public class MyDocument extends InsertableEntity {
    private String name;
}

public abstract class InsertableEntity extends IdentifiableEntity {

    @CreatedDate
    @JsonIgnore
    private Instant createdAt;
}

public abstract class IdentifiableEntity implements Persistable<UUID> {

    @Id
    private UUID id;

    @JsonIgnore
    public boolean isNew() {
        return getId() == null;
    }
}
A complete minimal example (including a test) can be found here: https://github.com/mab/auditable
With 2.1.11.RELEASE the test succeeds; with 2.2.0.RELEASE it fails.
For me the best solution was to switch from event-based UUID generation to a callback-based one. By implementing Ordered we can make the new callback execute after the AuditingEntityCallback.
public class IdEntityCallback implements BeforeConvertCallback<IdentifiableEntity>, Ordered {

    @Override
    public IdentifiableEntity onBeforeConvert(IdentifiableEntity entity, String collection) {
        if (entity.isNew()) {
            entity.setId(UUID.randomUUID());
        }
        return entity;
    }

    @Override
    public int getOrder() {
        return 101; // executes after the AuditingEntityCallback
    }
}
I registered the callback with the MongoConfiguration. For a more general solution you might want to take a look at the registration of the AuditingEntityCallback with the MongoAuditingBeanDefinitionParser.
@Configuration
@EnableMongoAuditing
public class MongoConfiguration {

    @Bean
    public IdEntityCallback registerCallback() {
        return new IdEntityCallback();
    }
}
MongoTemplate works in the following way in doInsert():
1) this.maybeEmitEvent - emits an event (onBeforeConvert, onBeforeSave and such) that any AbstractMappingEventListener can catch and act upon, like you did with GenerateUUIDListener
2) this.maybeCallBeforeConvert - calls the before-convert callbacks, like the Mongo auditing one
as you can see in the source code of MongoTemplate.class (lines 831-832):
protected <T> T doInsert(String collectionName, T objectToSave, MongoWriter<T> writer) {
    BeforeConvertEvent<T> event = new BeforeConvertEvent(objectToSave, collectionName);
    T toConvert = ((BeforeConvertEvent) this.maybeEmitEvent(event)).getSource(); // emit event
    toConvert = this.maybeCallBeforeConvert(toConvert, collectionName); // call before-convert callbacks
    ...
}
Mongo auditing sets createdAt only on new entities, checking entity.isNew() == true.
Because your code (the UUID listener) already set the id, createdAt is not populated (the entity is not considered new).
You can do one of the following (ordered from best to worst):
- forget about the UUID and use String for your id; let Mongo itself create and manage entity ids (this is how MongoTemplate actually works, lines 811-812)
- keep the UUID at the code level and convert from/to String when inserting into and retrieving from the db
- create a custom repository like in this post
- stay with 2.1.11.RELEASE
- set updatedAt in GenerateUUIDListener as well as the id (rename it NewEntityListener or something), basically implementing the audit yourself
- implement new isNew() logic that doesn't depend only on the entity id, as sketched below
In version 2.1.11.RELEASE the order of the two methods was flipped (MongoTemplate.class, lines 804-805), so your code worked fine.
More abstractly, events are by nature send-and-forget (async-compatible), so it is very bad practice to mutate the object itself; there is NO guarantee about the order of computation, if any.
This is why the auditing is built on callbacks and not events, and why Pivotal doesn't (need to) keep the order stable between versions.
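For that last option, a minimal sketch that derives newness from the audit timestamp instead of the id (keeping the original event-based GenerateUUIDListener, which runs before the auditing callback); this is an illustration under those assumptions, not the library's prescribed approach:
public abstract class InsertableEntity extends IdentifiableEntity {

    @CreatedDate
    @JsonIgnore
    private Instant createdAt;

    @Override
    public boolean isNew() {
        // The UUID listener assigns the id before auditing runs, so the id can
        // no longer signal newness; a missing audit timestamp means the entity
        // has never been stored.
        return createdAt == null;
    }
}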

reactive repository throws exception when saving a new object

I am using r2dbc, r2dbc-h2 and the experimental spring-boot-starter-data-r2dbc:
implementation 'org.springframework.boot.experimental:spring-boot-starter-data-r2dbc:0.1.0.M1'
implementation 'org.springframework.data:spring-data-r2dbc:1.0.0.RELEASE' // starter-data provides old version
implementation 'io.r2dbc:r2dbc-h2:0.8.0.RELEASE'
implementation 'io.r2dbc:r2dbc-pool:0.8.0.RELEASE'
I have created a reactive repository:
public interface IJsonComparisonRepository extends ReactiveCrudRepository<JsonComparisonResult, String> {}
I also added a custom script that creates a table in H2 on startup:
@SpringBootApplication
public class JsonComparisonApplication {

    public static void main(String[] args) {
        SpringApplication.run(JsonComparisonApplication.class, args);
    }

    @Bean
    public CommandLineRunner startup(DatabaseClient client) {
        return (args) -> client
            .execute(() -> {
                var resource = new ClassPathResource("ddl/script.sql");
                try (var is = new InputStreamReader(resource.getInputStream())) {
                    return FileCopyUtils.copyToString(is);
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            })
            .then()
            .block();
    }
}
My r2dbc configuration looks like this
@Configuration
@EnableR2dbcRepositories
public class R2dbcConfiguration extends AbstractR2dbcConfiguration {

    @Override
    public ConnectionFactory connectionFactory() {
        return new H2ConnectionFactory(
            H2ConnectionConfiguration.builder()
                .url("mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE")
                .username("sa")
                .build());
    }
}
My service where I perform the logic looks like this
@Override
public Mono<JsonComparisonResult> updateOrCreateRightSide(String comparisonId, String json) {
    return updateComparisonSide(comparisonId, storedComparisonResult -> {
        storedComparisonResult.setRightSide(json);
        return storedComparisonResult;
    });
}

private Mono<JsonComparisonResult> updateComparisonSide(String comparisonId,
        Function<JsonComparisonResult, JsonComparisonResult> updateSide) {
    return repository.findById(comparisonId)
        .defaultIfEmpty(createResult(comparisonId))
        .filter(result -> ComparisonDecision.NONE == result.getDecision()) // if not NONE - it means it was found and completed
        .switchIfEmpty(Mono.error(new NotUpdatableCompleteComparisonException(comparisonId)))
        .map(updateSide)
        .flatMap(repository::save);
}

private JsonComparisonResult createResult(String comparisonId) {
    LOGGER.info("Creating new comparison result: {}.", comparisonId);
    var newResult = new JsonComparisonResult();
    newResult.setDecision(ComparisonDecision.NONE);
    newResult.setComparisonId(comparisonId);
    return newResult;
}
The domain looks like this
@Table("json_comparison")
public class JsonComparisonResult {

    @Column("comparison_id")
    @Id
    private String comparisonId;

    @Column("left")
    private String leftSide;

    @Column("right")
    private String rightSide;

    // @Enumerated(EnumType.STRING) - no support for now
    @Column("decision")
    private ComparisonDecision decision;

    private String differences;
}
The problem is that when I try to add any object to the database it fails with the exception
org.springframework.dao.TransientDataAccessResourceException: Failed to update table [json_comparison]. Row with Id [4] does not exist.
at org.springframework.data.r2dbc.repository.support.SimpleR2dbcRepository.lambda$save$0(SimpleR2dbcRepository.java:91) ~[spring-data-r2dbc-1.0.0.RELEASE.jar:1.0.0.RELEASE]
at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:96) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:73) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.MonoUsingWhen$MonoUsingWhenSubscriber.deferredComplete(MonoUsingWhen.java:276) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.FluxUsingWhen$CommitInner.onComplete(FluxUsingWhen.java:536) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onComplete(Operators.java:1858) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.Operators.complete(Operators.java:132) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.MonoEmpty.subscribe(MonoEmpty.java:45) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
For some reason, during save the SimpleR2dbcRepository library class doesn't consider the objectToSave new, but the subsequent update fails because the row doesn't actually exist.
// SimpleR2dbcRepository#save
@Override
@Transactional
public <S extends T> Mono<S> save(S objectToSave) {
    Assert.notNull(objectToSave, "Object to save must not be null!");
    if (this.entity.isNew(objectToSave)) { // not new
        ....
    }
}
Why is this happening, and what is the problem?
TL;DR: How should Spring Data know if your object is new or whether it should exist?
Relational Spring Data repositories (both JDBC and R2DBC) must decide on [Reactive]CrudRepository.save(…) whether the given object is new or whether it already exists in your database. Performing a save(…) operation results in either an INSERT or an UPDATE statement. Issuing the wrong statement causes either a primary key violation or a no-op, as standard SQL has no way to express an upsert.
By default, Spring Data JDBC and R2DBC use the presence or absence of the @Id value. Generated primary keys are a widely used mechanism: if the primary key is provided, the entity is considered existing; if the id value is null, the entity is considered new.
Read more in the reference documentation about entity state detection strategies.
You have to implement Persistable because you've provided the @Id yourself. The library needs to figure out whether the row is new or whether it should already exist. If your entity implements Persistable, then save(…) will use the outcome of isNew() to determine whether to issue an INSERT or an UPDATE.
For example:
public class Product implements Persistable<Integer> {

    @Id
    private Integer id;

    private String description;
    private Double price;

    @Transient
    private boolean newProduct;

    @Override
    @Transient
    public boolean isNew() {
        return this.newProduct || id == null;
    }

    public Product setAsNew() {
        this.newProduct = true;
        return this;
    }
}
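A brief usage sketch (the repository name and setters are assumed): a manually keyed entity must be flagged as new before its first save, otherwise the repository issues an UPDATE and fails exactly as in the question:
// First save of a manually keyed entity: force an INSERT.
Product product = new Product();                         // assumes a no-arg constructor
product.setId(42);                                       // assumes setters exist
product.setDescription("Widget");
productRepository.save(product.setAsNew()).subscribe();  // INSERT, not UPDATE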
Maybe you should consider this:
Choose INT/LONG as the data type of your id/primary key and set it to AUTO_INCREMENT (something like below):
CREATE TABLE PRODUCT(id INT PRIMARY KEY AUTO_INCREMENT NOT NULL, modelname VARCHAR(30), year VARCHAR(4), owner VARCHAR(50));
In your POST request body, do not include the id field.
With no @Id value present, save issues an INSERT statement.

Hibernate LazyInitializationException if entity is fetched in JWTAuthorizationFilter

I'm using Spring Rest. I have an Entity called Operator that goes like this:
@Entity
@Table(name = "operators")
public class Operator {

    // various properties

    private List<OperatorRole> operatorRoles;

    // various getters and setters

    @LazyCollection(LazyCollectionOption.TRUE)
    @OneToMany(mappedBy = "operator", cascade = CascadeType.ALL)
    public List<OperatorRole> getOperatorRoles() {
        return operatorRoles;
    }

    public void setOperatorRoles(List<OperatorRole> operatorRoles) {
        this.operatorRoles = operatorRoles;
    }
}
I also have the corresponding OperatorRepository, which extends JpaRepository.
I defined a controller that exposes this API:
@RestController
@RequestMapping("/api/operators")
public class OperatorController {

    private final OperatorRepository operatorRepository;

    @Autowired
    public OperatorController(OperatorRepository operatorRepository) {
        this.operatorRepository = operatorRepository;
    }

    @GetMapping(value = "/myApi")
    @Transactional(readOnly = true)
    public MyResponseBody myApi(@ApiIgnore @AuthorizedConsumer Operator operator) {
        if (operator.getOperatorRoles() != null) {
            for (OperatorRole current : operator.getOperatorRoles()) {
                // do things
            }
        }
    }
}
This used to work before I made the OperatorRoles list lazy; now if I try to iterate through the list it throws LazyInitializationException.
The Operator parameter is fetched from the DB by a filter that extends Spring's BasicAuthenticationFilter, and is then somehow autowired into the API call.
I can get other, non-lazily initialized properties without a problem. If I do something like operator = operatorRepository.getOne(operator.getId());, everything works, but I would need to change this in too many places in the code.
From what I understand, the problem is that the session used to fetch the Operator in the BasicAuthenticationFilter is no longer open by the time I reach the actual API in OperatorController.
I managed to wrap everything in an OpenSessionInViewFilter, but it still doesn't work.
Does anyone have any ideas?
I had this very same problem for a long time and was working around it with FetchType.EAGER, but today something clicked in my head...
@Transactional didn't work, so I thought: if declarative transactions don't work, maybe programmatic ones do. And they do!
Based on the Spring programmatic transactions docs:
public class JwtAuthorizationFilter extends BasicAuthenticationFilter {

    private final TransactionTemplate transactionTemplate;

    public JwtAuthorizationFilter(AuthenticationManager authenticationManager,
                                  PlatformTransactionManager transactionManager) {
        super(authenticationManager);
        this.transactionTemplate = new TransactionTemplate(transactionManager);
        // Set your desired propagation behavior, isolation level, readOnly, etc.
        this.transactionTemplate.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);
    }

    private void doSomething() {
        transactionTemplate.execute(transactionStatus -> {
            // execute your queries here, inside an active transaction
            return null; // TransactionCallback expects a result
        });
    }
}
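Applied to the question's filter, the lazy collection can be initialized while the transaction (and thus the Hibernate session) is still open - a sketch with a hypothetical lookup method:
// Inside the filter, e.g. in doFilterInternal:
Operator operator = transactionTemplate.execute(status -> {
    // hypothetical finder; use whatever lookup your filter already performs
    Operator op = operatorRepository.findByUsername(username);
    op.getOperatorRoles().size(); // touch the collection to initialize it
    return op;
});
// operatorRoles is now initialized and safe to use after the session closes.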
It may be too late for you, but I hope it helps others.

Spring Data: default 'not deleted' logic for automatic method-based queries when using soft-delete policy

Let's say we use a soft-delete policy: nothing gets deleted from the storage; instead, a 'deleted' attribute/column is set to true on a record/document/whatever to make it 'deleted'. Later, only non-deleted entries should be returned by query methods.
Let's take MongoDB as an example (although JPA is also interesting).
For standard methods defined by MongoRepository, we can extend the default implementation (SimpleMongoRepository), override the methods of interest and make them ignore 'deleted' documents.
But, of course, we'd also like to use custom query methods like
List<Person> findByFirstName(String firstName)
In a soft-delete environment, we are forced to do something like
List<Person> findByFirstNameAndDeletedIsFalse(String firstName)
or write queries manually with #Query (adding the same boilerplate condition about 'not deleted' all the time).
Here comes the question: is it possible to add this 'non-deleted' condition to any generated query automatically? I did not find anything in the documentation.
I'm looking at Spring Data (Mongo and JPA) 2.1.6.
Similar questions
Query interceptor for spring-data-mongodb for soft deletions - here they suggest Hibernate's @Where annotation, which only works for JPA+Hibernate, and it is not clear how to override it if you still need to access deleted items in some queries
Handling soft-deletes with Spring JPA - here people either suggest the same @Where-based approach, or the solution's applicability is limited to the already-defined standard methods, not the custom ones.
It turns out that for Mongo (at least for spring-data-mongo 2.1.6) we can hook into the standard QueryLookupStrategy implementation to add the desired 'soft-deleted documents are not visible by finders' behavior:
public class SoftDeleteMongoQueryLookupStrategy implements QueryLookupStrategy {

    private final QueryLookupStrategy strategy;
    private final MongoOperations mongoOperations;

    public SoftDeleteMongoQueryLookupStrategy(QueryLookupStrategy strategy,
            MongoOperations mongoOperations) {
        this.strategy = strategy;
        this.mongoOperations = mongoOperations;
    }

    @Override
    public RepositoryQuery resolveQuery(Method method, RepositoryMetadata metadata, ProjectionFactory factory,
            NamedQueries namedQueries) {
        RepositoryQuery repositoryQuery = strategy.resolveQuery(method, metadata, factory, namedQueries);

        // revert to the standard behavior if requested
        if (method.getAnnotation(SeesSoftlyDeletedRecords.class) != null) {
            return repositoryQuery;
        }

        if (!(repositoryQuery instanceof PartTreeMongoQuery)) {
            return repositoryQuery;
        }
        PartTreeMongoQuery partTreeQuery = (PartTreeMongoQuery) repositoryQuery;

        return new SoftDeletePartTreeMongoQuery(partTreeQuery);
    }

    private Criteria notDeleted() {
        return new Criteria().orOperator(
                where("deleted").exists(false),
                where("deleted").is(false)
        );
    }

    private class SoftDeletePartTreeMongoQuery extends PartTreeMongoQuery {

        SoftDeletePartTreeMongoQuery(PartTreeMongoQuery partTreeQuery) {
            super(partTreeQuery.getQueryMethod(), mongoOperations);
        }

        @Override
        protected Query createQuery(ConvertingParameterAccessor accessor) {
            Query query = super.createQuery(accessor);
            return withNotDeleted(query);
        }

        @Override
        protected Query createCountQuery(ConvertingParameterAccessor accessor) {
            Query query = super.createCountQuery(accessor);
            return withNotDeleted(query);
        }

        private Query withNotDeleted(Query query) {
            return query.addCriteria(notDeleted());
        }
    }
}

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface SeesSoftlyDeletedRecords {
}
We just add an 'and not deleted' condition to all the queries unless @SeesSoftlyDeletedRecords asks us to avoid it.
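A hypothetical repository showing the effect (entity and method names assumed):
public interface PersonRepository extends MongoRepository<Person, String> {

    // Gets the 'and not deleted' criteria appended automatically.
    List<Person> findByFirstName(String firstName);

    // Opts out and sees soft-deleted documents as well.
    @SeesSoftlyDeletedRecords
    List<Person> findByLastName(String lastName);
}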
Then we need the following infrastructure to plug in our QueryLookupStrategy implementation:
public class SoftDeleteMongoRepositoryFactory extends MongoRepositoryFactory {

    private final MongoOperations mongoOperations;

    public SoftDeleteMongoRepositoryFactory(MongoOperations mongoOperations) {
        super(mongoOperations);
        this.mongoOperations = mongoOperations;
    }

    @Override
    protected Optional<QueryLookupStrategy> getQueryLookupStrategy(QueryLookupStrategy.Key key,
            QueryMethodEvaluationContextProvider evaluationContextProvider) {
        Optional<QueryLookupStrategy> optStrategy = super.getQueryLookupStrategy(key,
                evaluationContextProvider);
        return optStrategy.map(this::createSoftDeleteQueryLookupStrategy);
    }

    private SoftDeleteMongoQueryLookupStrategy createSoftDeleteQueryLookupStrategy(QueryLookupStrategy strategy) {
        return new SoftDeleteMongoQueryLookupStrategy(strategy, mongoOperations);
    }
}

public class SoftDeleteMongoRepositoryFactoryBean<T extends Repository<S, ID>, S, ID extends Serializable>
        extends MongoRepositoryFactoryBean<T, S, ID> {

    public SoftDeleteMongoRepositoryFactoryBean(Class<? extends T> repositoryInterface) {
        super(repositoryInterface);
    }

    @Override
    protected RepositoryFactorySupport getFactoryInstance(MongoOperations operations) {
        return new SoftDeleteMongoRepositoryFactory(operations);
    }
}
Then we just need to reference the factory bean in an @EnableMongoRepositories annotation like this:
@EnableMongoRepositories(repositoryFactoryBeanClass = SoftDeleteMongoRepositoryFactoryBean.class)
If it is required to determine dynamically whether a particular repository needs to be 'soft-delete' or a regular 'hard-delete' repository, we can introspect the repository interface (or the domain class) and decide whether we need to change the QueryLookupStrategy or not.
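As a sketch of that dynamic decision (with a hypothetical SoftDeletable marker interface on soft-deleted domain classes), the check could go at the top of resolveQuery:
// In SoftDeleteMongoQueryLookupStrategy#resolveQuery:
RepositoryQuery repositoryQuery = strategy.resolveQuery(method, metadata, factory, namedQueries);
// Hypothetical marker interface: wrap only repositories of soft-deletable domains.
if (!SoftDeletable.class.isAssignableFrom(metadata.getDomainType())) {
    return repositoryQuery;
}
// ...otherwise continue with the soft-delete wrapping shown above.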
As for JPA, this approach does not work without rewriting (possibly duplicating) a substantial part of the code in PartTreeJpaQuery.

How can I validate a field as required depending on another field's value in SEAM?

I'm trying to create a simple custom validator for my project, and I can't seem to find a way of getting Seam to validate things conditionally.
Here's what I've got:
A helper/backing bean (that is NOT an entity)
@RequiredIfSelected
public class AdSiteHelper {
    private Date start;
    private Date end;
    private boolean selected;
    /* getters and setters implied */
}
What I need is for "start" and "end" to be required if and only if selected is true.
I tried creating a custom validator at the TYPE target, but Seam doesn't seem to want to pick it up and validate it. (Maybe because it's not an entity?)
Here's the general idea of my custom annotation, for starters:
@ValidatorClass(RequiredIfSelectedValidator.class)
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface RequiredIfSelected {
    String message();
}

public class RequiredIfSelectedValidator implements Validator<RequiredIfSelected>, Serializable {

    public boolean isValid(Object value) {
        AdSiteHelper ash = (AdSiteHelper) value;
        return !ash.isSelected() || (ash.getStart() != null && ash.getEnd() != null);
    }

    public void initialize(RequiredIfSelected parameters) { }
}
I had a similar problem covered by this post. If the bean holding these values is always the same, then you could just load the current instance of it into your validator with:
// Assuming you have the @Name annotation populated on your bean and a scope of CONVERSATION or higher
AdSiteHelper helper = (AdSiteHelper) Component.getInstance("adSiteHelper");
Also, as you're using Seam, your validators don't need to be so complex. You don't need the interface, and it can be as simple as:
@Name("requiredIfSelectedValidator")
@Validator
public class RequiredIfSelectedValidator implements javax.faces.validator.Validator {

    public void validate(FacesContext context, UIComponent component, Object value) throws ValidatorException {
        // do stuff
    }
}
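A hedged sketch of what that validate body might look like for this case, assuming the backing bean is resolvable by name as shown earlier:
public void validate(FacesContext context, UIComponent component, Object value)
        throws ValidatorException {
    // Hypothetical: resolve the current backing bean from the Seam context
    AdSiteHelper ash = (AdSiteHelper) Component.getInstance("adSiteHelper");
    // Require start and end only when this site is selected
    if (ash.isSelected() && (ash.getStart() == null || ash.getEnd() == null)) {
        throw new ValidatorException(new FacesMessage(FacesMessage.SEVERITY_ERROR,
                "Start and end dates are required when the site is selected.", null));
    }
}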
