How to specify @Lock timeout for a query?
I am using Oracle 11g, and I hope I can use something like 'select id from table where id = ?1 for update wait 5'.
I defined the method like this:
@Lock(LockModeType.PESSIMISTIC_WRITE)
Stock findById(String id);
It seems to lock forever.
When I set javax.persistence.lock.timeout=0 in LocalContainerEntityManagerFactoryBean.jpaProperties, it has no effect.
To lock entities pessimistically, set the lock mode to
PESSIMISTIC_READ, PESSIMISTIC_WRITE, or
PESSIMISTIC_FORCE_INCREMENT.
If a pessimistic lock cannot be obtained, but the locking failure
doesn’t result in a transaction rollback, a LockTimeoutException is
thrown.
Pessimistic Locking Timeouts
The length of time in milliseconds the persistence provider should
wait to obtain a lock on the database tables may be specified using
the javax.persistence.lock.timeout property. If the time it takes to
obtain a lock exceeds the value of this property, a
LockTimeoutException will be thrown, but the current transaction
will not be marked for rollback. If this property is set to 0, the
persistence provider should throw a LockTimeoutException if it
cannot immediately obtain a lock.
If javax.persistence.lock.timeout is set in multiple places, the
value will be determined in the following order:
The argument to one of the EntityManager or Query methods.
The setting in the @NamedQuery annotation.
The argument to the Persistence.createEntityManagerFactory method.
The value in the persistence.xml deployment descriptor.
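As a rough illustration of the two most local scopes in that list (the entity type, id variable, and EntityManager em here are just placeholders taken loosely from the question, and whether a provider honours the hint on a query can vary):
// Per-operation: pass the timeout as a property to find()
Map<String, Object> props = new HashMap<>();
props.put("javax.persistence.lock.timeout", 5000); // milliseconds
Stock stock = em.find(Stock.class, id, LockModeType.PESSIMISTIC_WRITE, props);

// Per-query: set the hint before executing the query
Stock stock2 = em.createQuery("select s from Stock s where s.id = :id", Stock.class)
        .setParameter("id", id)
        .setLockMode(LockModeType.PESSIMISTIC_WRITE)
        .setHint("javax.persistence.lock.timeout", 5000)
        .getSingleResult();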
For Spring Data 1.6 or greater
@Lock is supported on CRUD methods as of version 1.6 of Spring Data JPA (in fact, there's already a milestone available). See this ticket for more details.
With that version you simply declare the following:
interface WidgetRepository extends Repository<Widget, Long> {

    @Lock(LockModeType.PESSIMISTIC_WRITE)
    Widget findOne(Long id);
}
This will cause the CRUD implementation part of the backing repository proxy to apply the configured LockModeType to the find(…) call on the EntityManager.
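For illustration, here is a minimal sketch of how such a repository method might be consumed; the service class and the setter are invented for this example, and the key point is that the pessimistic lock only exists within an active transaction:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class WidgetService {

    @Autowired
    private WidgetRepository repository;

    @Transactional // the row lock is held until this transaction commits or rolls back
    public void renameWidget(Long id, String newName) {
        Widget widget = repository.findOne(id); // issues SELECT ... FOR UPDATE (or the DB's equivalent)
        widget.setName(newName);                // hypothetical setter, for illustration only
    }
}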
On the other hand,
For versions of Spring Data prior to 1.6
The Spring Data pessimistic @Lock annotations only apply (as you pointed out) to queries. There are no annotations I know of which can affect an entire transaction. You can either create a findByOnePessimistic method which calls findByOne with a pessimistic lock, or you can change findByOne to always obtain a pessimistic lock.
If you wanted to implement your own solution, you probably could. Under the hood the @Lock annotation is processed by LockModePopulatingMethodIntercceptor, which does the following:
TransactionSynchronizationManager.bindResource(method, lockMode == null ? NULL : lockMode);
You could create some static lock manager which had a ThreadLocal<LockMode> member variable, and then have an aspect wrapped around every method in every repository which called bindResource with the lock mode set in the ThreadLocal. This would allow you to set the lock mode on a per-thread basis. You could then create your own @MethodLockMode annotation which would wrap the method in an aspect that sets the thread-specific lock mode before running the method and clears it after running the method.
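A very rough sketch of that idea follows; the @MethodLockMode annotation and the aspect are invented for illustration (they are not part of Spring Data), and a second aspect around the repository methods would still have to read the ThreadLocal and call TransactionSynchronizationManager.bindResource as shown above:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.persistence.LockModeType;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// Hypothetical annotation marking methods that should run with a given lock mode
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface MethodLockMode {
    LockModeType value();
}

@Aspect
class MethodLockModeAspect {

    // per-thread lock mode that a repository-level aspect would pick up and bind as a resource
    static final ThreadLocal<LockModeType> CURRENT_LOCK_MODE = new ThreadLocal<>();

    @Around("@annotation(methodLockMode)")
    public Object applyLockMode(ProceedingJoinPoint pjp, MethodLockMode methodLockMode) throws Throwable {
        CURRENT_LOCK_MODE.set(methodLockMode.value());
        try {
            return pjp.proceed();
        } finally {
            CURRENT_LOCK_MODE.remove(); // always clear so the mode doesn't leak to later calls on this thread
        }
    }
}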
Resource Links:
How to enable LockModeType.PESSIMISTIC_WRITE when looking up entities with Spring Data JPA?
How to add custom method to Spring Data JPA
Spring Data Pessimistic Lock timeout with Postgres
JPA Query API
Various Examples of Pessimistic Lock Timeout
Setting a Pessimistic Lock
An entity object can be locked explicitly by the lock method:
em.lock(employee, LockModeType.PESSIMISTIC_WRITE);
The first argument is an entity object. The second argument is the requested lock mode.
A TransactionRequiredException is thrown if there is no active transaction when lock is called because explicit locking requires an active transaction.
A LockTimeoutException is thrown if the requested pessimistic lock cannot be granted:
A PESSIMISTIC_READ lock request fails if another user (which is
represented by another EntityManager instance) currently holds a
PESSIMISTIC_WRITE lock on that database object.
A PESSIMISTIC_WRITE lock request fails if another user currently
holds either a PESSIMISTIC_WRITE lock or a PESSIMISTIC_READ lock on
that database object.
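A small hedged sketch of handling that failure (the employee entity and EntityManager em are assumed to come from an active transaction, as in the example above):
try {
    em.lock(employee, LockModeType.PESSIMISTIC_WRITE);
} catch (LockTimeoutException e) {
    // another EntityManager instance holds a conflicting lock; per the quote above this
    // exception does not mark the current transaction for rollback, so we may retry or back off
}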
Setting Query Hint (Scopes)
Query hints can be set in the following scopes (from global to local):
For the entire persistence unit - using a persistence.xml property:
<properties>
<property name="javax.persistence.query.timeout" value="3000"/>
</properties>
For an EntityManagerFactory - using the createEntityManagerFactory method:
Map<String,Object> properties = new HashMap<>();
properties.put("javax.persistence.query.timeout", 4000);
EntityManagerFactory emf =
Persistence.createEntityManagerFactory("pu", properties);
For an EntityManager - using the createEntityManager method:
Map<String,Object> properties = new HashMap<>();
properties.put("javax.persistence.query.timeout", 5000);
EntityManager em = emf.createEntityManager(properties);
or using the setProperty method:
em.setProperty("javax.persistence.query.timeout", 6000);
For a named query definition - using the hints element:
@NamedQuery(name="Country.findAll", query="SELECT c FROM Country c",
    hints={@QueryHint(name="javax.persistence.query.timeout", value="7000")})
For a specific query execution - using the setHint method (before query execution):
query.setHint("javax.persistence.query.timeout", 8000);
Resource Links:
Locking in JPA
Pessimistic Lock Timeout
You can use @QueryHints in Spring Data:
@Lock(LockModeType.PESSIMISTIC_WRITE)
@QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "5000")})
Stock findById(String id);
For Spring Data 1.6 or greater, we can use the @Lock annotation provided by Spring Data JPA.
The lock timeout can be set as well by using @QueryHints. Originally there was no support for query-hint annotations on the default CRUD methods, but it has been available since the 1.6 M1 fix:
https://jira.spring.io/browse/DATAJPA-173
Below is an example of a pessimistic lock with the PESSIMISTIC_WRITE mode type, which is an exclusive lock.
@Lock(LockModeType.PESSIMISTIC_WRITE)
@QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "5000")})
Customer findByCustomerId(Long customerId);
javax.persistence.lock.timeout doesn't seem to be working for me either when provided like this:
@QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "15000")})
But then I tried something else that worked. Instead of using @Repository and CrudRepository, I am now configuring Hibernate through the EntityManager directly, using createQuery with a lock mode and a lock timeout hint. This configuration works as expected.
I have two transactions running in parallel, both trying to lock exactly the same row in the database. The first transaction acquires the WRITE lock and holds it for around 10 seconds before releasing it. Meanwhile, the second transaction tries to acquire a lock on the same row, but since javax.persistence.lock.timeout is set to 15 seconds, it waits for the lock to be released and then acquires its own lock, making the flow serialized.
@Component
public class Repository {

    @PersistenceContext
    private EntityManager em;

    public Optional<Cache> getById(int id) {
        List<Cache> list = em.createQuery("select c from Cache c where c.id = ?1", Cache.class)
                .setParameter(1, id)
                .setHint("javax.persistence.lock.timeout", 15000)
                .setLockMode(LockModeType.PESSIMISTIC_WRITE)
                .getResultList();
        return Optional.ofNullable(list.get(0));
    }

    public void save(Cache cache) {
        cache = em.find(Cache.class, cache.getId());
        em.merge(cache);
    }
}
Related
The JPA optimistic locking doesn't throw an OptimisticLockException/StaleStateException where I would expect it.
Here is my setup:
I am using Spring Boot with Spring Data Envers, so my repositories are versioned, which should not influence the optimistic locking behaviour. In my entities the version property (Long) is annotated with @Version. My application consists of 3 layers:
persistence-layer
business-layer
transfer-layer
To map objects between the layers I use MapStruct.
When a request is received by the controller in the transfer-layer, the JSON payload is mapped to a business-layer object to apply business rules to it. The version is always mapped through the whole lifecycle.
When I reach the persistence-layer, I use the ID of the object to find the corresponding entity in my database. The signature of my save method looks like this:
@Transactional
public Entity saveEntity(BOEntity boEntity) {
    Entity e = entityRepository.findById(boEntity.getId());
    entityMapper.updateEntity(boEntity, e);
    return entityRepository.save(e);
}
When the same entity is loaded by my clients (e.g. two browser tabs), each of them has the same version of the entity. Changes are made and saved in both clients.
The version is contained in the boEntity object and mapped into the entity.
Due to the findById call the entity is managed. The EntityManager will try to merge the entity and succeeds in doing so for both requests.
The state of the entity of the first request is merged (with version 1). Hibernate calls the executeUpdate method and writes to the database. The version is increased to 2.
Now the second request delivers the entity in the former state with version 1. The save-method is called and the entity is retrieved from the persistence-context. It has the version 2, which is overwritten by the boEntity object with version 1.
When the entityManager now merges the entity, no exception is thrown.
My expectation is the second request to fail because of an old version.
Isn't it possible to overwrite the version of the entity?
I have already read a lot of blog entries but couldn't find any hint that does the trick.
The default JPA optimistic locking mechanism only works when a managed object is flushed but was changed in the meantime. What you want has to be coded manually. Just add the logic to your saveEntity method:
@Transactional
public Entity saveEntity(BOEntity boEntity) {
    Entity e = entityRepository.findById(boEntity.getId());
    if (!boEntity.getVersion().equals(e.getVersion())) { // compare Long values, not references
        throw new OptimisticLockException();
    }
    entityMapper.updateEntity(boEntity, e);
    return entityRepository.save(e);
}
Let's suppose we have this situation:
We have Spring Data configured in the standard way: there is a Repository object, an Entity object, and everything works well.
Now for some complex motivations I have to use the EntityManager (or JdbcTemplate, whatever is at a lower level than Spring Data) directly to update the table associated with my Entity, with a native SQL query. So I'm not using the Entity object, but simply doing a manual database update on the table I use as the entity (more precisely, the table from which I get the values; see the next rows).
The reason is that I had to bind my Spring Data Entity to a MySQL view that makes a UNION of multiple tables, not directly to the table I need to update.
What happens is:
In a functional test, I call the "manual" update method (on the table from which the MySQL view is created) as previously described (through the EntityManager), and if I then do a simple Repository.findOne(objectId), I get the old object (not the updated one). I have to call EntityManager.refresh(object) to get the updated object.
Why?
Is there a way to "synchronize" (out of the box) objects (or force some refresh) in Spring Data? Or am I asking for a miracle?
I'm not being ironic; maybe I'm just not expert enough, and maybe (or probably) it is my own ignorance. If so, please explain why and (if you want) share some advanced knowledge about this amazing framework.
If I make a simple Repository.findOne(objectId) I get the old object (not the updated one). I have to call EntityManager.refresh(object) to get the updated object.
Why?
The first-level cache is active for the duration of a session. Any entity previously retrieved in the context of a session will be served from the first-level cache unless there is a reason to go back to the database.
Is there a reason to go back to the database after your SQL update? Well, as the book Pro JPA 2 notes (p199) regarding bulk update statements (either via JPQL or SQL):
The first issue for developers to consider when using these [bulk update] statements
is that the persistence context is not updated to reflect the results
of the operation. Bulk operations are issued as SQL against the
database, bypassing the in-memory structures of the persistence
context.
which is what you are seeing. That is why you need to call refresh to force the entity to be reloaded from the database as the persistence context is not aware of any potential modifications.
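As a rough illustration (the entity, table, and column names are placeholders, not taken from the question), the symptom and the fix look like this when a native update runs while the entity is already managed:
MyEntity entity = em.find(MyEntity.class, id);           // entity is now in the persistence context

em.createNativeQuery("UPDATE my_table SET status = 'DONE' WHERE id = ?1")
        .setParameter(1, id)
        .executeUpdate();                                 // bypasses the persistence context entirely

// em.find(...) would still return the stale in-memory state here;
// refresh forces a reload from the database
em.refresh(entity);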
The book also notes the following about using Native SQL statements (rather than JPQL bulk update):
■ CAUTION Native SQL update and delete operations should not be
executed on tables mapped by an entity. The JP QL operations tell the
provider what cached entity state must be invalidated in order to
remain consistent with the database. Native SQL operations bypass such
checks and can quickly lead to situations where the in-memory cache is
out of date with respect to the database.
Essentially then, should you have a second-level cache configured, updating any entity currently in that cache via a native SQL statement is likely to result in stale data in the cache.
In Spring Boot JpaRepository:
If our modifying query changes entities contained in the persistence context, then this context becomes outdated.
In order to fetch the latest state of the entities from the database, use @Modifying(clearAutomatically = true).
The @Modifying annotation has a clearAutomatically attribute which defines whether it should clear the underlying persistence context after executing the modifying query.
Example:
@Modifying(clearAutomatically = true)
@Query("UPDATE NetworkEntity n SET n.network_status = :network_status WHERE n.network_id = :network_id")
int expireNetwork(@Param("network_id") Integer network_id, @Param("network_status") String network_status);
Based on the way you described your usage, fetching from the repository should retrieve the updated object without the need to refresh it, as long as the method which used the EntityManager to merge is annotated with @Transactional.
Here's a sample test:
@DirtiesContext(classMode = ClassMode.AFTER_CLASS)
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ApplicationConfig.class)
@EnableJpaRepositories(basePackages = "com.foo")
public class SampleSegmentTest {

    @Resource
    SampleJpaRepository segmentJpaRepository;

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    @Test
    public void test() {
        Segment segment = new Segment();
        ReflectionTestUtils.setField(segment, "value", "foo");
        ReflectionTestUtils.setField(segment, "description", "bar");
        segmentJpaRepository.save(segment);

        assertNotNull(segment.getId());
        assertEquals("foo", segment.getValue());
        assertEquals("bar", segment.getDescription());

        ReflectionTestUtils.setField(segment, "value", "foo2");
        entityManager.merge(segment);

        Segment updatedSegment = segmentJpaRepository.findOne(segment.getId());
        assertEquals("foo2", updatedSegment.getValue());
    }
}
I have a critical section of code where I need to read and lock an entity by id with a pessimistic lock.
This section of code looks like this right now:
MyEntity entity = entityManager.find(MyEntity.class, key);
entityManager.refresh(entity, LockModeType.PESSIMISTIC_WRITE);
It works OK, but as I understand it, when there is no entity in Hibernate's cache we end up with two read queries against the database: the first to find the entity by id and another to refresh and lock the entity.
Is it possible to use only one query in such a scenario?
I would imagine something like:
boolean skipCache = true;
MyEntity entity = entityManager.find(MyEntity.class, key,
LockModeType.PESSIMISTIC_WRITE, skipCache);
But there is no such parameter like skipCache. Is there another approach to read an entity by id directly from the database by using EntityManager?
UPDATE:
This query will hit the first-level cache if the entity already exists in the cache. Thus it may potentially return outdated data, which is why it isn't suitable for critical sections where any read should be blocked:
MyEntity entity = entityManager.find(MyEntity.class, key, LockModeType.PESSIMISTIC_WRITE);
The question is about skipping the cache and not about locking.
I've just found the method getReference in the EntityManager, which gets an instance whose state may be lazily fetched. As the documentation says:
Get an instance, whose state may be lazily fetched. If the requested
instance does not exist in the database, the EntityNotFoundException
is thrown when the instance state is first accessed. (The persistence
provider runtime is permitted to throw the EntityNotFoundException
when getReference is called.) The application should not expect that
the instance state will be available upon detachment, unless it was
accessed by the application while the entity manager was open.
As a possible solution to find and lock an up-to-date entity by id in one query, we can use the following code:
MyEntity entity = entityManager.getReference(MyEntity.class, key);
entityManager.refresh(entity, LockModeType.PESSIMISTIC_WRITE);
This will create an entity proxy (no database query) and then refresh and lock the entity.
Why not directly pass the requested lock along with the query itself?
MyEntity entity = entityManager.find(MyEntity.class, key, LockModeType.PESSIMISTIC_WRITE);
As far as I understand this is doing exactly what you wanted. (documentation)
You can also set an EntityManager property just before you use the find method so that it does not hit the cache:
Specifying the Cache Mode
entityManager.setProperty("javax.persistence.cache.storeMode", CacheStoreMode.REFRESH);
MyEntity entity = entityManager.find(MyEntity.class, key);
I am trying to perform batch inserts with data that is currently being inserted into the DB one statement per transaction. The transaction code looks similar to the snippet below. Currently, the addHolding() method is called for each quote that comes in from an external feed, and these quote updates happen about 150 times per second.
public class HoldingServiceImpl {

    @Autowired
    private HoldingDAO holdingDao;

    @Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
    public void addHolding(Quote quote) {
        Holding holding = transformQuote(quote);
        holdingDao.addHolding(holding);
    }
}
And the DAO gets the current session from the Hibernate SessionFactory and calls save on the object.
public class HoldingDAOImpl {

    @Autowired
    private SessionFactory sessionFactory;

    public void addHolding(Holding holding) {
        sessionFactory.getCurrentSession().save(holding);
    }
}
I have looked at the Hibernate batching documentation, but it is not clear from the document how I would organize the code for batch inserting in this case, since I don't have the full list of data at hand but am instead waiting for it to stream in.
Does merely setting the Hibernate batching properties in the properties file (e.g. hibernate.jdbc.batch_size=20) "magically" batch insert these? Or will I need to, say, capture each quote update in a synchronized list, then insert the list contents and clear the list when the batch size limit is reached?
Also, the whole purpose of implementing batching is to see if performance improves. If there is a better way to handle inserts in this scenario, let me know.
Setting the property hibernate.jdbc.batch_size=20 is an indication for Hibernate to flush the objects after every 20. In your case Hibernate automatically flushes the session after 20 records have been saved.
When you call session.save(), the insert is only applied to Hibernate's in-memory cache. Only once flush is called does Hibernate synchronize these changes with the database. Hence setting the Hibernate batch size is enough to do batch inserts. Fine-tune the batch size according to your needs.
Also make sure your transactions are handled properly; committing a transaction also forces Hibernate to flush the session.
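If you prefer to control the flush boundaries explicitly as well (for example because the quotes stream in continuously), a commonly used sketch, assuming hibernate.jdbc.batch_size=20 is set and the holdings have been collected into a list first, looks like this (method and list names are illustrative):
public void addHoldings(List<Holding> holdings) {
    Session session = sessionFactory.getCurrentSession();
    int batchSize = 20; // keep in sync with hibernate.jdbc.batch_size
    for (int i = 0; i < holdings.size(); i++) {
        session.save(holdings.get(i));
        if ((i + 1) % batchSize == 0) {
            session.flush();  // push the batched INSERTs to the database
            session.clear();  // detach the saved entities so the session doesn't grow unbounded
        }
    }
}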
Maybe somebody can help me with a transactional issue in Spring (3.1) / PostgreSQL (8.4.11).
My transactional service is as follows:
@Transactional(isolation = Isolation.SERIALIZABLE, readOnly = false)
@Override
public Foo insertObject(Bar bar) {
    // these methods are just examples
    int x = firstDao.getMaxNumberOfAllowedObjects(bar);
    int y = secondDao.getNumerOfExistingObjects(bar);
    // comparison
    if (x - y > 0) {
        secondDao.insertNewObject(...);
    }
    ....
}
The Spring webapp configuration contains:
@Configuration
@EnableTransactionManagement
public class ....{

    @Bean
    public DataSource dataSource() {
        org.apache.tomcat.jdbc.pool.DataSource ds = new DataSource();
        // ....configuration details
        return ds;
    }

    @Bean
    public DataSourceTransactionManager txManager() {
        return new DataSourceTransactionManager(dataSource());
    }
}
Let us say a request "x" and a request "y" execute concurrently and both arrive at the comment "comparison" (method insertObject). Then both of them are allowed to insert a new object and their transactions are committed.
Why am I not getting a RollbackException? As far as I know, that is what the SERIALIZABLE isolation level is for. Coming back to the previous scenario: if "x" manages to insert a new object and commits its transaction, then "y"'s transaction should not be allowed to commit, since there is a new object it did not read.
That is, if "y" could read the value of secondDao.getNumerOfExistingObjects(bar) again, it would realize that there is one more new object. A phantom read?
The transaction configuration seems to be working fine:
For each request I can see the same connection for firstDao and secondDao
A transaction is created every time insertObject is invoked
Both first and second DAOs are as follows:
@Autowired
public void setDataSource(DataSource dataSource) {
    this.jdbcTemplate = new JdbcTemplate(dataSource);
}

@Override
public Object daoMethod(Object param) {
    // uses jdbcTemplate
}
I am sure I am missing something. Any idea?
Thanks for your time,
Javier
TL;DR: Detection of serializability conflicts improved dramatically in Pg 9.1, so upgrade.
It's tricky to figure out from your description what the actual SQL is and why you expect to get a rollback. It looks like you've seriously misunderstood serializable isolation, perhaps thinking it perfectly tests all predicates, which it doesn't, especially not in Pg 8.4.
SERIALIZABLE doesn't perfectly guarantee that the transactions execute as if they were run in series - as doing so would be prohibitively expensive from a performance point of view, if it were possible at all. It only provides limited checking. Exactly what is checked and how varies from database to database and version to version, so you need to read the docs for your version of your database.
Anomalies are possible, where two transactions executing in SERIALIZABLE mode produce a different result to if those transactions truly executed in series.
Read the documentation on transaction isolation in Pg to learn more. Note that SERIALIZABLE changed behaviour dramatically in Pg 9.1, so make sure to read the version of the manual appropriate for your Pg version. Here's the 8.4 version. In particular read 13.2.2.1. Serializable Isolation versus True Serializability. Now compare that to the greatly improved predicate locking based serialization support described in the Pg 9.1 docs.
It looks like you're trying to perform logic something like this pseudocode:
count = query("SELECT count(*) FROM the_table");
if (count < threshold):
query("INSERT INTO the_table (...) VALUES (...)");
If so, that's not going to work in Pg 8.4 when executed concurrently - it's pretty much the same as the anomaly example used in the documentation linked above. Amazingly it actually works on Pg 9.1; I didn't expect even 9.1's predicate locking to catch use of aggregates.
You write that:
Coming back to the previous scenario, if x manages to insert a new
object and commits its transaction, then "y"'s transaction should not
be allowed to commit since there is a new object he did not read.
but 8.4 won't detect that the two transactions are interdependent, something you can trivially prove by using two psql sessions to test it. It's only with the true-serializability stuff introduced in 9.1 that this will work - and frankly, I was surprised it works in 9.1.
If you want to do something like enforce a maximum row count in Pg 8.4, you need to LOCK the table to prevent concurrent INSERTs, doing the locking either manually or via a trigger function. Doing it in a trigger will inherently require a lock promotion and thus will frequently deadlock, but will successfully do the job. It's better done in the application, where you can issue the LOCK TABLE my_table IN EXCLUSIVE MODE before even SELECTing from the table, so it already has the highest lock mode it will need on the table and thus shouldn't need deadlock-prone lock promotion. The EXCLUSIVE lock mode is appropriate because it permits SELECTs but nothing else.
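A rough sketch of that application-side approach with Spring's JdbcTemplate, reusing the ser_test table from the test below (the DAO class and threshold parameter are invented for illustration):
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.annotation.Transactional;

public class RowLimitDao {

    private final JdbcTemplate jdbcTemplate;

    public RowLimitDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Transactional
    public void insertIfBelowLimit(int threshold) {
        // EXCLUSIVE mode still allows plain SELECTs from other sessions,
        // but blocks concurrent INSERT/UPDATE/DELETE until this transaction commits
        jdbcTemplate.execute("LOCK TABLE ser_test IN EXCLUSIVE MODE");
        Integer count = jdbcTemplate.queryForObject("SELECT count(*) FROM ser_test", Integer.class);
        if (count != null && count < threshold) {
            jdbcTemplate.update("INSERT INTO ser_test(x) VALUES (?)", "bob");
        }
    }
}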
Here's how to test the serializable behaviour itself in two psql sessions:
SESSION 1: create table ser_test( x text );
SESSION 1: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SESSION 2: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SESSION 1: SELECT count(*) FROM ser_test;
SESSION 2: SELECT count(*) FROM ser_test;
SESSION 1: INSERT INTO ser_test(x) VALUES ('bob');
SESSION 2: INSERT INTO ser_test(x) VALUES ('bob');
SESSION 1: COMMIT;
SESSION 2: COMMIT;
When run on Pg 9.1, the first COMMIT succeeds, then the second COMMIT fails with:
regress=# COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
DETAIL: Reason code: Canceled on identification as a pivot, during commit attempt.
HINT: The transaction might succeed if retried.
but when run on 8.4 both commits succeed, because 8.4 didn't have the predicate locking code for serializability that was added in 9.1.