I was tasked with creating an annotation for custom validation, due to some problems with handling database constraint violations nicely. What I did in response was relatively simple: I created a class-level custom constraint specifically for the one domain class that required it. This is my current result:
The UniqueLocation annotation:
@Target({ TYPE, ANNOTATION_TYPE })
@Retention(RUNTIME)
@Constraint(validatedBy = UniqueLocationValidator.class)
@Documented
public @interface UniqueLocation {
    String message() default "must be unique!";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}
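For context, the constraint is then applied at the type level of the domain class. The entity below is a hypothetical sketch; only the two fields the validator reads are shown:

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Hypothetical sketch of the annotated domain class; only the fields the
// validator reads (id, locationName) are shown.
@Entity
@UniqueLocation
public class Location {

    @Id
    @GeneratedValue
    private long id;

    private String locationName;

    public long getId() { return id; }

    public String getLocationName() { return locationName; }

    public void setLocationName(String locationName) { this.locationName = locationName; }
}
```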
This is not spectacular; in fact it's copied almost verbatim from the Hibernate documentation.
I proceeded to create my UniqueLocationValidator and ran into a problem with using the persistence context in there. I wanted to run a defensive select, and thus tried to inject my application-wide @Produces @PersistenceContext EntityManager.
Therefore I included JBoss Seam to use its InjectingConstraintValidatorFactory, configuring my validation.xml as follows:
<?xml version="1.0" encoding="UTF-8"?>
<validation-config
        xmlns="http://jboss.org/xml/ns/javax/validation/configuration"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://jboss.org/xml/ns/javax/validation/configuration validation-configuration-1.0.xsd">
    <constraint-validator-factory>
        org.jboss.seam.validation.InjectingConstraintValidatorFactory
    </constraint-validator-factory>
</validation-config>
After running into some issues with creating constraint violations, this is how my validator actually looks:
@ManagedBean
public class UniqueLocationValidator implements
        ConstraintValidator<UniqueLocation, Location> {

    // must not return a result for name-equality on the same id
    private static final String QUERY_STRING =
            "SELECT * FROM Location WHERE locationName = :value AND id <> :id";

    @Inject
    EntityManager entityManager;

    private String constraintViolationMessage;

    @Override
    public void initialize(final UniqueLocation annotation) {
        constraintViolationMessage = annotation.message();
    }

    @Override
    public boolean isValid(final Location instance,
            final ConstraintValidatorContext context) {
        if (instance == null) {
            // Recommended; instead use an explicit @NotNull annotation for
            // validating non-nullable instances
            return true;
        }
        if (duplicateLocationExists(instance)) {
            createConstraintViolations(context);
            return false;
        } else {
            return true;
        }
    }

    private void createConstraintViolations(
            final ConstraintValidatorContext context) {
        context.disableDefaultConstraintViolation();
        context.buildConstraintViolationWithTemplate(constraintViolationMessage)
                .addNode("locationName").addConstraintViolation();
    }

    private boolean duplicateLocationExists(final Location location) {
        final String checkedValue = location.getLocationName();
        final long id = location.getId();
        Query defensiveSelect = entityManager.createNativeQuery(QUERY_STRING)
                .setParameter("value", checkedValue).setParameter("id", id);
        return !defensiveSelect.getResultList().isEmpty();
    }
}
So much for my current configuration; now to the real beef, the problem:
When I run the following code after receiving an action from a user, it works wonderfully and correctly marks a duplicate location name as invalid. Persisting also works just fine when the locationName is not duplicated.
public long add(@Valid final Location location) {
    entityManager.persist(location);
    return location.getId();
}
Mind that the entityManager here and the entityManager in the UniqueLocationValidator are both injected via Weld CDI from the aforementioned @PersistenceContext EntityManager.
What does not work is the following:
public long update(@Valid final Location location) {
    entityManager.merge(location);
    return location.getId();
}
When calling this code, I get a relatively short stack trace with a ConcurrentModificationException as the root cause.
I neither understand why that's the case, nor how I would go about fixing it. I have nowhere attempted to explicitly multithread my application, so this should have been managed by the JBoss 7.1.1-Final I am using as application server.
What you're trying to do is not possible via the EntityManager. Well, not normally.
Your validator is called during the processing of the updates. Queries sent via the EntityManager affect its internal storage, the ActionQueue of the EntityManager. This is what causes the ConcurrentModificationException: the results of your query alter the list that the EntityManager is iterating over while flushing changes.
A workaround for this would be to bypass the EntityManager.
How can we do this?
Well, it's a bit dirty, since you're effectively adding a dependency on the Hibernate implementation, but you can get the connection from the Session or EntityManager in various ways. And once you have a java.sql.Connection object, you can use something like a PreparedStatement to execute your query anyway.
Example fix:
Session session = entityManager.unwrap(Session.class);
SessionFactoryImplementor sessionFactoryImplementation =
        (SessionFactoryImplementor) session.getSessionFactory();
ConnectionProvider connectionProvider = sessionFactoryImplementation.getConnectionProvider();
Connection connection = null;
try {
    connection = connectionProvider.getConnection();
    try (PreparedStatement ps = connection.prepareStatement(
            "SELECT 1 FROM Location WHERE id <> ? AND locationName = ?")) {
        ps.setLong(1, id);
        ps.setString(2, checkedValue);
        try (ResultSet rs = ps.executeQuery()) {
            return rs.next(); // found any results? if we can retrieve a row: yes!
        }
    }
} // catch SQLException etc...
// finally, hand the connection back via connectionProvider.closeConnection(connection);
// the PreparedStatement and ResultSet are closed by try-with-resources
Related
Let's say we use a soft-delete policy: nothing gets deleted from storage; instead, a 'deleted' attribute/column is set to true on a record/document/whatever to mark it 'deleted'. Later, only non-deleted entries should be returned by query methods.
Let's take MongoDB as an example (although JPA is also interesting).
For standard methods defined by MongoRepository, we can extend the default implementation (SimpleMongoRepository), override the methods of interest and make them ignore 'deleted' documents.
But, of course, we'd also like to use custom query methods like
List<Person> findByFirstName(String firstName)
In a soft-delete environment, we are forced to do something like
List<Person> findByFirstNameAndDeletedIsFalse(String firstName)
or write queries manually with @Query (adding the same boilerplate 'not deleted' condition all the time).
Here comes the question: is it possible to add this 'non-deleted' condition to any generated query automatically? I did not find anything in the documentation.
I'm looking at Spring Data (Mongo and JPA) 2.1.6.
Similar questions
Query interceptor for spring-data-mongodb for soft deletions: here they suggest Hibernate's @Where annotation, which only works for JPA+Hibernate, and it is not clear how to override it if you still need to access deleted items in some queries.
Handling soft-deletes with Spring JPA: here people either suggest the same @Where-based approach, or the solution's applicability is limited to the already-defined standard methods, not the custom ones.
It turns out that for Mongo (at least for spring-data-mongo 2.1.6) we can hook into the standard QueryLookupStrategy implementation to add the desired 'soft-deleted documents are not visible by finders' behavior:
public class SoftDeleteMongoQueryLookupStrategy implements QueryLookupStrategy {

    private final QueryLookupStrategy strategy;
    private final MongoOperations mongoOperations;

    public SoftDeleteMongoQueryLookupStrategy(QueryLookupStrategy strategy,
            MongoOperations mongoOperations) {
        this.strategy = strategy;
        this.mongoOperations = mongoOperations;
    }

    @Override
    public RepositoryQuery resolveQuery(Method method, RepositoryMetadata metadata, ProjectionFactory factory,
            NamedQueries namedQueries) {
        RepositoryQuery repositoryQuery = strategy.resolveQuery(method, metadata, factory, namedQueries);

        // revert to the standard behavior if requested
        if (method.getAnnotation(SeesSoftlyDeletedRecords.class) != null) {
            return repositoryQuery;
        }

        if (!(repositoryQuery instanceof PartTreeMongoQuery)) {
            return repositoryQuery;
        }
        PartTreeMongoQuery partTreeQuery = (PartTreeMongoQuery) repositoryQuery;

        return new SoftDeletePartTreeMongoQuery(partTreeQuery);
    }

    private Criteria notDeleted() {
        return new Criteria().orOperator(
                where("deleted").exists(false),
                where("deleted").is(false)
        );
    }

    private class SoftDeletePartTreeMongoQuery extends PartTreeMongoQuery {
        SoftDeletePartTreeMongoQuery(PartTreeMongoQuery partTreeQuery) {
            super(partTreeQuery.getQueryMethod(), mongoOperations);
        }

        @Override
        protected Query createQuery(ConvertingParameterAccessor accessor) {
            Query query = super.createQuery(accessor);
            return withNotDeleted(query);
        }

        @Override
        protected Query createCountQuery(ConvertingParameterAccessor accessor) {
            Query query = super.createCountQuery(accessor);
            return withNotDeleted(query);
        }

        private Query withNotDeleted(Query query) {
            return query.addCriteria(notDeleted());
        }
    }
}

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface SeesSoftlyDeletedRecords {
}
We just add an 'and not deleted' condition to all the queries unless @SeesSoftlyDeletedRecords asks us to avoid it.
Then we need the following infrastructure to plug in our QueryLookupStrategy implementation:
public class SoftDeleteMongoRepositoryFactory extends MongoRepositoryFactory {

    private final MongoOperations mongoOperations;

    public SoftDeleteMongoRepositoryFactory(MongoOperations mongoOperations) {
        super(mongoOperations);
        this.mongoOperations = mongoOperations;
    }

    @Override
    protected Optional<QueryLookupStrategy> getQueryLookupStrategy(QueryLookupStrategy.Key key,
            QueryMethodEvaluationContextProvider evaluationContextProvider) {
        Optional<QueryLookupStrategy> optStrategy = super.getQueryLookupStrategy(key,
                evaluationContextProvider);
        return optStrategy.map(this::createSoftDeleteQueryLookupStrategy);
    }

    private SoftDeleteMongoQueryLookupStrategy createSoftDeleteQueryLookupStrategy(QueryLookupStrategy strategy) {
        return new SoftDeleteMongoQueryLookupStrategy(strategy, mongoOperations);
    }
}

public class SoftDeleteMongoRepositoryFactoryBean<T extends Repository<S, ID>, S, ID extends Serializable>
        extends MongoRepositoryFactoryBean<T, S, ID> {

    public SoftDeleteMongoRepositoryFactoryBean(Class<? extends T> repositoryInterface) {
        super(repositoryInterface);
    }

    @Override
    protected RepositoryFactorySupport getFactoryInstance(MongoOperations operations) {
        return new SoftDeleteMongoRepositoryFactory(operations);
    }
}
Then we just need to reference the factory bean in an @EnableMongoRepositories annotation like this:
@EnableMongoRepositories(repositoryFactoryBeanClass = SoftDeleteMongoRepositoryFactoryBean.class)
If it is required to determine dynamically whether a particular repository needs to be 'soft-delete' or a regular 'hard-delete' repository, we can introspect the repository interface (or the domain class) and decide whether we need to change the QueryLookupStrategy or not.
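One way to sketch that dynamic decision is inside SoftDeleteMongoQueryLookupStrategy.resolveQuery, where the RepositoryMetadata is already at hand. The @SoftDeletes marker annotation below is an assumption for illustration, not part of the code above:

```java
// Hypothetical variant of resolveQuery: only repositories explicitly marked
// with an assumed @SoftDeletes annotation get the soft-delete behavior.
@Override
public RepositoryQuery resolveQuery(Method method, RepositoryMetadata metadata, ProjectionFactory factory,
        NamedQueries namedQueries) {
    RepositoryQuery repositoryQuery = strategy.resolveQuery(method, metadata, factory, namedQueries);

    // introspect the repository interface to decide whether to wrap the query
    boolean softDeleting = metadata.getRepositoryInterface()
            .isAnnotationPresent(SoftDeletes.class);
    if (!softDeleting
            || method.getAnnotation(SeesSoftlyDeletedRecords.class) != null
            || !(repositoryQuery instanceof PartTreeMongoQuery)) {
        return repositoryQuery;
    }
    return new SoftDeletePartTreeMongoQuery((PartTreeMongoQuery) repositoryQuery);
}
```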
As for JPA, this approach does not work without rewriting (possibly duplicating) a substantial part of the code in PartTreeJpaQuery.
I have made a JAX-RS endpoint with JPA integration, where I try to build a query from a generic entity name to get data from the database.
@Override
public Set<E> get() {
    EntityManager em = emf.createEntityManager();
    List<E> results = null;
    try {
        results = em.createQuery("SELECT e FROM " + entityClass.getSimpleName() + " e", entityClass)
                .getResultList();
    } finally {
        em.close();
        return new HashSet<E>(results);
    }
}
When I make an instance of my repository, I specify the entity class and the primary key type in the SQL database (usually an integer):
public class BaseRepository<E, PK> implements CRUDOperations<E, PK> {
    private Class<E> entityClass;
    protected EntityManagerFactory emf;
}
I tried this out for a dummy class with just a String in it, and it works fine; I tested it in the debugger.
However, when I try it for an actual class I created, I just get null back (not even an empty set).
Lastly, I checked the database, and the tables have the same names in the database, so that matched.
The reason was that I had not added a no-argument constructor to the entity class, and therefore it failed to fetch a set of entities from the database.
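For illustration, JPA providers instantiate entities reflectively through the no-argument constructor and only then populate the fields. A minimal sketch of that mechanism (the Person class is a hypothetical stand-in for the entity):

```java
// Demonstrates why JPA entities need a no-arg constructor: the provider
// creates instances reflectively, then populates the fields afterwards.
class Person {
    private String name;

    protected Person() { }              // required by the JPA provider

    Person(String name) { this.name = name; }

    String getName() { return name; }
}

public class NoArgConstructorDemo {
    public static void main(String[] args) throws Exception {
        // Roughly what the provider does under the hood:
        java.lang.reflect.Constructor<Person> ctor =
                Person.class.getDeclaredConstructor();
        ctor.setAccessible(true);
        Person p = ctor.newInstance();
        // Fields are still unset until the provider copies column values in.
        System.out.println(p.getName() == null); // prints "true"
    }
}
```

Without the no-arg constructor, getDeclaredConstructor() throws NoSuchMethodException, which is effectively what made the fetch fail.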
We are working on a web application using Spring Data JPA with Hibernate.
In the application there is a compid field in each entity, which means every DB call (Spring Data method) has to be checked against the compid.
I need a way for this "where compid = ?" check to be injected automatically into every find method, so that we won't have to bother about compid checks explicitly.
Is this possible to achieve with the Spring Data JPA framework?
Maybe Hibernate's annotation @Where will help you. It adds the given condition to any JPA queries related to the entity. For example:
@Entity
@Where(clause = "isDeleted='false'")
public class Customer {
    //...
    @Column
    private Boolean isDeleted;
}
More info: 1, 2
Agree with Abhijit Sarkar.
You can achieve your goal with Hibernate filters and aspects. I can suggest the following:
1) Create an annotation @Compable (or whatever you call it) to mark service methods.
2) Create a CompAspect, which should be a bean and an @Aspect. It should contain something like this:
@Around("@annotation(compable)")
public Object enableClientFilter(ProceedingJoinPoint pjp, Compable compable) throws Throwable {
    Session session = (Session) em.getDelegate();
    try {
        if (session.isOpen()) {
            session.enableFilter("compid_filter_name")
                    .setParameter("comp_id", your_comp_id);
        }
        return pjp.proceed();
    } finally {
        if (session.isOpen()) {
            session.disableFilter("compid_filter_name");
        }
    }
}
Here em is the EntityManager.
3) Also you need to provide the Hibernate filters. With annotations this can look like this:
@FilterDef(name = "compid_filter_name", parameters = @ParamDef(name = "comp_id", type = "long"))
@Filters(@Filter(name = "compid_filter_name", condition = "comp_id = :comp_id"))
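For completeness, those filter annotations sit on the entity class; a sketch, with the entity name and compid column assumed for illustration:

```java
// Sketch: placement of the filter definitions on the entity.
// The entity name and the compid column are assumptions for illustration.
@Entity
@FilterDef(name = "compid_filter_name",
        parameters = @ParamDef(name = "comp_id", type = "long"))
@Filters(@Filter(name = "compid_filter_name", condition = "comp_id = :comp_id"))
public class YourEntity {

    @Id
    private Long id;

    @Column(name = "comp_id")
    private Long compId;

    // ...
}
```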
So your condition where compid = ? will be applied inside the @Service method below:

@Compable
void someServiceMethod() {
    List<YourEntity> l = someRepository.findAllWithNamesLike("test");
}
That's basically it for selects.
For updates/deletes this scheme requires an EntityListener.
Like other people have said, there is no built-in method for this.
One option is to look at Query by Example; from the Spring Data documentation:
Person person = new Person();
person.setFirstname("Dave");
Example<Person> example = Example.of(person);
So you could default compid in the probe object, or in a parent JPA object.
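A sketch of how that could look, assuming the probe has a compid field and a personRepository exists (both names are assumptions here):

```java
// Sketch: Query by Example with compid defaulted on the probe object.
Person probe = new Person();
probe.setCompid(currentCompId); // assumed field and value source
probe.setFirstname("Dave");

Example<Person> example = Example.of(probe);
// all non-null probe properties are matched, so compid is applied implicitly
List<Person> matches = personRepository.findAll(example);
```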
Another option is a custom repository.
I can contribute a 50% solution: 50% because it seems not easy to wrap query methods, and custom @Query JPA queries are also an issue for this global approach. If the standard finders are sufficient, it is possible to extend your own SimpleJpaRepository:
public class CustomJpaRepositoryIml<T, ID extends Serializable> extends
        SimpleJpaRepository<T, ID> {

    private JpaEntityInformation<T, ?> entityInformation;

    @Autowired
    public CustomJpaRepositoryIml(JpaEntityInformation<T, ?> entityInformation,
            EntityManager entityManager) {
        super(entityInformation, entityManager);
        this.entityInformation = entityInformation;
    }

    private Sort applyDefaultOrder(Sort sort) {
        if (sort == null) {
            return null;
        }
        if (sort.isUnsorted()) {
            return Sort.by("insert whatever is a default").ascending();
        }
        return sort;
    }

    private Pageable applyDefaultOrder(Pageable pageable) {
        if (pageable.getSort().isUnsorted()) {
            Sort defaultSort = Sort.by("insert whatever is a default").ascending();
            pageable = PageRequest.of(pageable.getPageNumber(), pageable.getPageSize(), defaultSort);
        }
        return pageable;
    }

    // filterOperatorUserAccess() supplies the restricting Specification;
    // its implementation is not shown in this answer
    @Override
    public Optional<T> findById(ID id) {
        Specification<T> filterSpec = filterOperatorUserAccess();
        if (filterSpec == null) {
            return super.findById(id);
        }
        return findOne(filterSpec.and((Specification<T>) (root, query, criteriaBuilder) -> {
            Path<?> path = root.get(entityInformation.getIdAttribute());
            return criteriaBuilder.equal(path, id);
        }));
    }

    @Override
    protected <S extends T> TypedQuery<S> getQuery(Specification<S> spec, Class<S> domainClass, Sort sort) {
        sort = applyDefaultOrder(sort);
        Specification<T> filterSpec = filterOperatorUserAccess();
        if (filterSpec != null) {
            spec = (Specification<S>) filterSpec.and((Specification<T>) spec);
        }
        return super.getQuery(spec, domainClass, sort);
    }
}
This implementation is picked up e.g. by registering it in Spring Boot:
@SpringBootApplication
@EnableJpaRepositories(repositoryBaseClass = CustomJpaRepositoryIml.class)
public class ServerStart {
    ...
If you need this kind of filtering also for Querydsl it is also possible to implement and register a QuerydslPredicateExecutor.
I use the following technologies:
TestNG (6.9.10)
Spring (4.3.2.RELEASE)
Hibernate (5.1.0.Final)
Java 8
I test some functionality with integration tests, and I need to check the entity for correct save/update/delete or any other changes. There is a sessionFactory configuration in my .xml:
<bean id="sessionFactory" class="org.springframework.orm.hibernate5.LocalSessionFactoryBean"
      p:dataSource-ref="dataSource" p:hibernateProperties-ref="jdbcProperties">
    <property name="packagesToScan" value="my.package"/>
</bean>
and a test class example:
@ContextConfiguration(locations = {"classpath:/applicationContext-test.xml",
        "classpath:/applicationContext-dao.xml",
        "classpath:/applicationContext-orm.xml"})
public class AccountServiceTest extends AbstractTransactionalTestNGSpringContextTests {

    @Autowired
    private SomeService someService;

    @Autowired
    private SessionFactory sessionFactory;

    @Test
    public void updateEntity() {
        //given
        Long entityId = 1L;
        SomeClass expected = someService.get(entityId);
        String newPropertyValue = "new value";

        //when
        someService.changeEntity(expected, newPropertyValue);
        // Manual flush is required to avoid false positive in test
        sessionFactory.getCurrentSession().flush();

        //then
        expected = someService.get(entityId);
        Assert.assertEquals(expected.getChangedProperty(), newPropertyValue);
    }
service method:
@Transactional
@Override
public int changeEntity(SomeClass entity, String newPropertyValue) {
    return dao().executeNamedQuery(REFRESH_ACCESS_TIME_QUERY,
            CollectionUtils.arrayToMap("id", entity.getId(), "myColumn", newPropertyValue));
}
dao:
@Override
public int executeNamedQuery(final String query, final Map<String, Object> parameters) {
    Query queryObject = sessionFactory.getCurrentSession().getNamedQuery(query);
    if (parameters != null) {
        for (Map.Entry<String, Object> entry : parameters.entrySet()) {
            NamedQueryUtils.applyNamedParameterToQuery(queryObject, entry.getKey(), entry.getValue());
        }
    }
    return queryObject.executeUpdate();
}
But my entity property didn't change after flush().
As described here, replacing the @Autowired SessionFactory with a @PersistenceContext EntityManager, I should use an EntityManager to flush(). But I can't do this: I can't transform the sessionFactory into an EntityManager, and I don't want to introduce an EntityManager into my application, because that would mean changing my .xml config file and more.
Are there any other solutions to this problem?
Your code is actually working as expected.
Your test method is transactional, and thus your Session is alive during the whole execution of the test method. The Session is also the first-level cache for Hibernate: when an entity is loaded from the database, it is put into the Session.
So the line SomeClass expected = someService.get(entityId); will load the entity from the database and also put it into the Session.
Now the line expected = someService.get(entityId); (well, actually the DAO method underneath) first checks whether an entity of the requested type with that id is already present in the Session; if so, it simply returns it. It will not query the database!
The main problem is that you are using Hibernate in a wrong way: you are basically bypassing Hibernate with the way you update your database. You should update your entity and persist it. You should not write queries to update the database!
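Conceptually, the Session's first-level cache behaves like an identity map keyed by id. The plain-Java sketch below (not Hibernate code; all names are illustrative) shows why a repeated get() never reaches the database until the cache is cleared:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of a first-level cache (identity map), illustrating why
// a repeated get() returns the cached instance without hitting the database.
class SessionSketch {
    private final Map<Long, Object> cache = new HashMap<>();
    private int dbHits = 0;

    Object get(Long id) {
        // return the cached entity if present; only query the "database" on a miss
        return cache.computeIfAbsent(id, key -> {
            dbHits++;
            return loadFromDatabase(key);
        });
    }

    private Object loadFromDatabase(Long id) {
        return new Object(); // stand-in for mapping a real row
    }

    int databaseHits() { return dbHits; }

    void clear() { cache.clear(); } // what Session.clear() does to the cache
}
```

Two consecutive get(1L) calls cause a single database hit and return the same instance; only after clear() does the next get() load again, which is why the test needs a clear() to observe the updated row.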
The annotated test method:
@Test
public void updateEntity() {
    //given
    Long entityId = 1L;
    SomeClass expected = someService.get(entityId); // load from db and put in Session
    String newPropertyValue = "new value";

    //when
    someService.changeEntity(expected, newPropertyValue); // update directly in the database, bypassing Session and entity
    // Manual flush is required to avoid false positive in test
    sessionFactory.getCurrentSession().flush();

    //then
    expected = someService.get(entityId); // returns entity from Session
    Assert.assertEquals(expected.getChangedProperty(), newPropertyValue);
}
To only fix the test, add a call to clear() after the flush():
sessionFactory.getCurrentSession().clear();
However what you actually should do is stop writing code like that and use Hibernate and persistent entities in the correct way.
@Test
public void updateEntity() {
    //given
    Long entityId = 1L;
    String newPropertyValue = "new value";
    SomeClass expected = someService.get(entityId);
    expected.setMyColumn(newPropertyValue);

    //when
    someService.changeEntity(expected);
    sessionFactory.getCurrentSession().flush();

    // now you should use a SQL query to verify the state in the DB
    Map<String, Object> dbValues = getJdbcTemplate().queryForMap("select * from someClass where id=?", entityId);

    //then
    Assert.assertEquals(dbValues.get("myColumn"), newPropertyValue);
}
Your DAO method should look something like this:
public void changeEntity(SomeClass entity) {
    sessionFactory.getCurrentSession().saveOrUpdate(entity);
}
I am trying to implement logging to a DB table using Spring AOP. By "logging in a table" I mean writing to a special log table information about the record that was CREATED/UPDATED/DELETED in the usual table for the domain object.
I wrote some code, and everything works well except one thing: when the transaction is rolled back, the changes in the log table are still committed successfully. That's strange to me, because my AOP advice uses the same transaction as my business and DAO layers. (From my AOP advice I call methods of a special manager class with transaction propagation MANDATORY, and I also checked the transaction name TransactionSynchronizationManager.getCurrentTransactionName() in the business layer, DAO layer and AOP advice, and it is the same.)
Has anyone implemented similar things in practice? Is it possible to use in an AOP advice the same transaction as in the business layer, and roll back changes made in the AOP advice if some error occurs in the business layer?
Thank you in advance for answers.
EDIT
I want to clarify that the rollback problem occurs only for changes made from the AOP advice. All changes made in the DAO layer are rolled back successfully. For example, if some exception is thrown, the changes made in the DAO layer will be rolled back, but the information in the log table will be saved (committed). I can't understand why, because, as I wrote above, the AOP advice uses the same transaction.
EDIT 2
I checked with the debugger the piece of code where I am writing to the log table in the AOP advice, and it seems that JdbcTemplate's update method executes outside the transaction, because the changes were committed to the DB directly after execution of the statement, before the transactional method finished.
EDIT 3
I solved this problem. Actually, it was my own fault. I'm using MySQL. After creating the log table I didn't change the DB engine, and HeidiSQL set MyISAM by default. But MyISAM doesn't support transactions, so I changed the DB engine to InnoDB (as for all other tables) and now everything works perfectly.
Thank you all for the help, and sorry for the disturbance.
If someone is interested, here is a simplified example that illustrates my approach.
Consider a DAO class that has a save method:
@Repository(value = "jdbcUserDAO")
@Transactional(propagation = Propagation.SUPPORTS, readOnly = true, rollbackFor = Exception.class)
public class JdbcUserDAO implements UserDAO {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @LoggedOperation(affectedRows = AffectedRows.ONE, loggedEntityClass = User.class, operationName = OperationName.CREATE)
    @Transactional(propagation = Propagation.REQUIRED, readOnly = false, rollbackFor = Exception.class)
    @Override
    public User save(final User user) {
        if (user == null || user.getRole() == null) {
            throw new IllegalArgumentException("Input User object or nested Role object should not be null");
        }

        KeyHolder keyHolder = new GeneratedKeyHolder();

        jdbcTemplate.update(new PreparedStatementCreator() {
            @Override
            public PreparedStatement createPreparedStatement(Connection connection)
                    throws SQLException {
                PreparedStatement ps = connection.prepareStatement(SQL_INSERT_USER, new String[]{"ID"});
                ps.setString(1, user.getUsername());
                ps.setString(2, user.getPassword());
                ps.setString(3, user.getFullName());
                ps.setLong(4, user.getRole().getId());
                ps.setString(5, user.geteMail());
                return ps;
            }
        }, keyHolder);

        user.setId((Long) keyHolder.getKey());

        VacationDays vacationDays = user.getVacationDays();
        vacationDays.setId(user.getId());

        // Create related vacation days record.
        vacationDaysDAO.save(vacationDays);

        user.setVacationDays(vacationDays);

        return user;
    }
}
Here is how the aspect looks:
@Component
@Aspect
@Order(2)
public class DBLoggingAspect {

    @Autowired
    private DBLogManager dbLogManager;

    @Around(value = "execution(* com.crediteuropebank.vacationsmanager.server.dao..*.*(..)) " +
            "&& @annotation(loggedOperation)", argNames = "loggedOperation")
    public Object doOperation(final ProceedingJoinPoint joinPoint,
            final LoggedOperation loggedOperation) throws Throwable {

        Object[] arguments = joinPoint.getArgs();

        /*
         * This should be called before logging operation.
         */
        Object retVal = joinPoint.proceed();

        // Execute logging action
        dbLogManager.logOperation(arguments, loggedOperation);

        return retVal;
    }
}
And here is how my DBLogManager class looks:
@Component("dbLogManager")
public class DBLogManager {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @InjectLogger
    private Logger logger;

    @Transactional(rollbackFor = {Exception.class}, propagation = Propagation.MANDATORY, readOnly = false)
    public void logOperation(final Object[] inputArguments, final LoggedOperation loggedOperation) {
        try {
            /*
             * Prepare query and array of the arguments
             */
            jdbcTemplate.update(insertQuery.toString(), insertedValues);
        } catch (Exception e) {
            StringBuilder sb = new StringBuilder();
            // Prepare log string
            logger.error(sb.toString(), e);
        }
    }
}
It could be to do with the order of the advice: you would want your @Transactional-related advice to take effect around (or before and after) your logging-related advice. If you are using Spring AOP, you can probably control this via the order attribute of the advice; give your transaction-related advice the highest precedence so that it executes last on the way out.
Nothing to do with AOP; set the datasource property autoCommit to false, like:
<bean id="datasource" ...>
    <property name="autoCommit" value="false"/>
</bean>
if you are using XML configuration.