I stumbled upon a problem with locking a row in an Oracle DB. The purpose of the lock is to prevent more than one transaction from reading data from the DB, because this data influences the generation of new data and is changed within the transaction.
In order to acquire the lock, I've put the @Lock annotation on the Spring Data find method which retrieves the data that participates in the transaction.
@Lock(LockModeType.PESSIMISTIC_WRITE)
User findUserById(@Param("id") String operatorId);
After implementing this code, I get the log message:
org.hibernate.loader.Loader - HHH000444: Encountered request for locking however dialect reports that database prefers locking be done in a separate select (follow-on locking); results will be locked after initial query executes
Besides, it has no effect and causes
org.springframework.dao.DataIntegrityViolationException: could not execute batch; SQL [insert into ...]
The issue can be solved by rewriting the lock using the entity manager:
entityManager.lock(userByIdWithLockOnReadWrite, LockModeType.PESSIMISTIC_WRITE);
or
entityManager.unwrap(Session.class).lock(userByIdWithLockOnReadWrite, LockMode.PESSIMISTIC_WRITE);
The issue doesn't appear on MariaDB (MySQL).
Are there some special rules for using the annotation?
You said that:
The purpose of the lock is to prevent more than one transaction
reading data from the DB because this data influences the generation
of new data and is changed in terms of a transaction.
Oracle uses MVCC (Multiversion Concurrency Control) so Readers don't block Writers and Writers don't block Readers. Even if you acquire a row-level lock with Oracle, and you modify that row without committing, other transactions can still read the last committed value.
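To make that concrete, here is a minimal JDBC sketch (the dataSource, the USERS table and its columns are illustrative, not from the original post): a writer locks a row by updating it without committing, and a reader on a second connection still gets the last committed value instead of blocking.
try (Connection writer = dataSource.getConnection();
     Connection reader = dataSource.getConnection()) {
    writer.setAutoCommit(false);
    try (Statement s = writer.createStatement()) {
        // Row 1 is now exclusively locked by the writer's transaction, but not committed.
        s.executeUpdate("UPDATE users SET name = 'locked' WHERE id = 1");
    }
    try (Statement s = reader.createStatement();
         ResultSet rs = s.executeQuery("SELECT name FROM users WHERE id = 1")) {
        rs.next();
        // On Oracle this does not block: the reader sees the last committed value.
        System.out.println(rs.getString("name"));
    }
    writer.rollback();
}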
Related to this log message:
org.hibernate.loader.Loader - HHH000444: Encountered request for locking however dialect reports that database prefers locking be done in a separate select (follow-on locking); results will be locked after initial query executes
The follow-on locking mechanism kicks in because Oracle cannot apply the lock when using Oracle 11g pagination, DISTINCT, or UNION ALL.
If you're using Oracle 12c, then you can update the Hibernate dialect to Oracle12cDialect, and pagination and locking will work fine, since Oracle 12c uses the SQL-standard pagination and no longer requires a derived table query.
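For example, with a plain JPA bootstrap the dialect can be switched like this (a sketch only; the persistence unit name "my-unit" is illustrative, and with Spring Boot you would set spring.jpa.properties.hibernate.dialect instead):
Map<String, Object> props = new HashMap<>();
props.put("hibernate.dialect", "org.hibernate.dialect.Oracle12cDialect");
EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit", props);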
This does not happen in MariaDB or any other database. It's just an Oracle pre-12 limitation.
In Hibernate 5.2.1, we added a new hint, HINT_FOLLOW_ON_LOCKING, which disables this mechanism.
So, your Spring Data query becomes:
@QueryHints(value = { @QueryHint(name = "hibernate.query.followOnLocking", value = "false")}, forCounting = false)
@Lock(LockModeType.PESSIMISTIC_WRITE)
User findUserById(@Param("id") String operatorId);
You can also apply it manually:
User user = (User) entityManager.createQuery(
        "select u from User u where id = :id", User.class)
    .setParameter("id", id)
    .unwrap(org.hibernate.query.Query.class)
    .setLockOptions(
        new LockOptions(LockMode.PESSIMISTIC_WRITE)
            .setFollowOnLocking(false))
    .getSingleResult();
Is there any option to make a transaction (TxB) wait for some time (without throwing a lock acquisition exception) for another transaction (TxA) to release the lock?
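One option worth trying is the standard JPA javax.persistence.lock.timeout hint; whether and how it is honored depends on the database and dialect (Hibernate's Oracle dialects translate it into FOR UPDATE WAIT), so treat this as a sketch rather than guaranteed behaviour:
Map<String, Object> hints = new HashMap<>();
hints.put("javax.persistence.lock.timeout", 5000); // wait up to ~5 seconds for the row lock
User user = entityManager.find(User.class, id, LockModeType.PESSIMISTIC_WRITE, hints);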
I am trying to read records from an SAP ASE 16 database table concurrently using Java to increase performance. I am using a SELECT ... FOR UPDATE query to read the table records concurrently. Two threads are trying to read records from a single table.
I am executing this in a microservice-based environment.
I have done the following configuration on the database:
• I have enabled select for update using: sp_configure "select for update", 1
• To set locking scheme I have used: alter table poll lock datarows
Table name: poll
These are the SQL queries that I am trying to execute:
SELECT e_id, e_name, e_status FROM poll WHERE (e_status = 'new') FOR UPDATE
UPDATE poll SET e_status = 'polled' WHERE (e_id = @e_id)
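Roughly, each worker thread does the equivalent of the following (a simplified JDBC sketch; the real code runs inside the microservice, the connection handling is illustrative, and the type of e_id is assumed to be numeric):
try (Connection conn = dataSource.getConnection()) {
    conn.setAutoCommit(false);

    List<Long> ids = new ArrayList<>();
    try (Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
                 "SELECT e_id, e_name, e_status FROM poll WHERE (e_status = 'new') FOR UPDATE")) {
        while (rs.next()) {
            ids.add(rs.getLong("e_id")); // e_id assumed to be numeric
        }
    }

    try (PreparedStatement ps = conn.prepareStatement(
            "UPDATE poll SET e_status = 'polled' WHERE (e_id = ?)")) {
        for (Long id : ids) {
            ps.setLong(1, id);
            ps.executeUpdate();
        }
    }

    conn.commit(); // locks taken by the FOR UPDATE are held until this point
}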
Problem:
For some reason I am getting duplicate records when executing the above queries, for the majority of records, sometimes more than 200 or 300.
It seems like locks are not being acquired during execution of the above commands. Is there any configuration that I am missing on the database side? Does it have anything to do with shared locks and exclusive locks?
I'm currently working on a legacy application that uses a stored procedure in a Sybase DB to generate IDs incrementally. The procedure is defined as:
CREATE PROC getId
(@val int = -1 output)
AS
BEGIN
UPDATE ID_TABLE SET LAST_VALUE = LAST_VALUE + 1
SELECT @val = LAST_VALUE FROM ID_TABLE
RETURN @val
END
This procedure is being called from a Java application using Spring's TransactionTemplate to handle the transaction in a declarative manner.
public Integer getId() {
TransactionTemplate txTemplate = new TransactionTemplate(txManager); // txManager is an autowired instance of PlatformTransactionManager.
txTemplate.setIsolationLevel(TransactionDefinition.ISOLATION_SERIALIZABLE);
txTemplate.setTimeout(-1);
return (Integer) txTemplate.execute((TransactionCallback) status -> idDao.generateId());
}
Internally, idDao uses a JdbcTemplate to call the Stored Procedure using a CallableStatementCreatorFactory. Nothing too out of the ordinary there.
The stored procedure is called ~10k times/day. From time to time, we see some ID collisions. My understanding was that setting the isolation level to SERIALIZABLE should prevent this from happening, and I can't seem to reproduce it even when calling getId simultaneously from several threads. Does anybody have a hint on what might be happening here?
I can't speak to the Java/Spring coding, but on a purely SQL basis it looks and sounds like the update/select is not being performed within a transaction.
Without a transaction wrapper (eg, explicit begin/commit tran, autocommit=false, set chained on) the update/select is a prime target for a race condition and duplicate key errors.
NOTE: the isolation level has no effect if the update/select is performed outside of a transaction wrapper
Tracking down a 'missing transaction wrapper' issue is a good idea as there could be other pieces of SQL code suffering from poor/missing transaction management. Having said that ...
For this particular case one easy fix would be to add a transaction wrapper in the proc, eg:
CREATE PROC getId
(@val int = -1 output)
AS
BEGIN
begin tran
UPDATE ID_TABLE SET LAST_VALUE = LAST_VALUE + 1
SELECT @val = LAST_VALUE FROM ID_TABLE
commit tran
RETURN @val
END
Even better would be a rewrite of the update that eliminates the need for the select, eg:
CREATE PROC getId
(@val int = -1 output)
AS
BEGIN
UPDATE ID_TABLE SET @val = LAST_VALUE + 1, LAST_VALUE = LAST_VALUE + 1
RETURN @val
END
NOTE: as a standalone operation this eliminates the need for a transaction wrapper and should eliminate the generation of duplicate keys (assuming this proc is the root cause of the duplicate key issue)
Assuming this is Sybase ASE I'd want to look at making sure the table is configured to use datarows locking; with allpages locking, and to a limited extent datapages locking, there's a chance of a race condition on index updates (which in turn could lead to a deadlock scenario).
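For completeness, a plain-JDBC sketch of how the proc's output parameter can be read (the real application goes through Spring's JdbcTemplate and a CallableStatementCreatorFactory; dataSource here is illustrative):
try (Connection conn = dataSource.getConnection();
     CallableStatement cs = conn.prepareCall("{call getId(?)}")) {
    cs.registerOutParameter(1, java.sql.Types.INTEGER);
    cs.execute();
    int newId = cs.getInt(1); // value assigned to @val inside the proc
    System.out.println("generated id: " + newId);
}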
The TL;DR is that I am not able to delete a row previously created with an upsert using Java.
Basically I have a table like this:
CREATE TABLE transactions (
key text PRIMARY KEY,
created_at timestamp
);
Then I execute:
String sql = "update transactions set created_at = toTimestamp(now()) where key = 'test' if created_at = null";
session.execute(sql)
As expected the row is created:
cqlsh:thingleme> SELECT * FROM transactions ;
key | created_at
------+---------------------------------
test | 2018-01-30 16:35:16.663000+0000
But (this is what is making me crazy) if I execute:
sql = "delete from transactions where key = 'test'";
ResultSet resultSet = session.execute(sql);
Nothing happens. I mean: no exception is thrown and the row is still there!
Some other weird stuff:
• If I replace the upsert with a plain insert, then the delete works
• If I directly run the statements (update and delete) using cqlsh, it works
• If I run this code against an EmbeddedCassandraService, it works (this is very bad, because my integration tests are just green!)
My environment:
cassandra: 3.11.1
datastax java driver: 3.4.0
docker image: cassandra:3.11.1
Any idea/suggestion on how to tackle this problem is really appreciated ;-)
I think the issue you are encountering might be explained by the mixing of lightweight transactions (LWTs) (update transactions set created_at = toTimestamp(now()) where key = 'test' if created_at = null) and non-LWTs (delete from transactions where key = 'test').
Cassandra uses timestamps to determine which mutations (deletes, updates) are the most recently applied. When using LWTs, the timestamp assignment is different than when not using LWTs:
Lightweight transactions will block other lightweight transactions from occurring, but will not stop normal read and write operations from occurring. Lightweight transactions use a timestamping mechanism different than for normal operations and mixing LWTs and normal operations can result in errors. If lightweight transactions are used to write to a row within a partition, only lightweight transactions for both read and write operations should be used.
Source: How do I accomplish lightweight transactions with linearizable consistency?
Further complicating things is that by default the Java driver uses client timestamps, meaning the write timestamp is determined by the client rather than the coordinating Cassandra node. However, when you use LWTs, the client timestamp is bypassed. In your case, unless you disable client timestamps, your non-LWT queries are using client timestamps, whereas your LWT queries are using a timestamp assigned by the Paxos logic in Cassandra. In any case, even if the driver wasn't assigning client timestamps this could still be a problem, because the timestamp assignment logic also differs on the C* side for LWT and non-LWT operations.
To fix this, you could alter your delete statement to include IF EXISTS, i.e.:
delete from transactions where key = 'test' if exists
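With the 3.x DataStax Java driver that would look roughly like this (session is the same Session used in the question; ResultSet.wasApplied() reports whether the conditional delete actually removed a row):
ResultSet rs = session.execute("DELETE FROM transactions WHERE key = 'test' IF EXISTS");
boolean deleted = rs.wasApplied(); // false if no row matched the condition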
Similar issue from the java driver mailing list
In our production application we recently started getting a weird error from DB2:
Caused by: com.ibm.websphere.ce.cm.StaleConnectionException: [jcc][t4][2055][11259][4.13.80] The database manager is not able to accept new requests, has terminated all requests in progress, or has terminated your particular request due to an error or a force interrupt. ERRORCODE=-4499, SQLSTATE=58009
This occurs when Hibernate tries to select data from one big table (more than 6 million records and 320 columns).
I observed that when the ResultSet has fewer than 10 elements, Hibernate selects successfully.
Our architecture:
Spring 4.0.3
Hibernate 4.3.5
DB2 v10 z/Os
Websphere 7.0.0.31(with JDBC V9.7FP5)
This select works when I execute it in Data Studio or when the app is started locally from Tomcat (connected to the production Data Source). I suppose that the Data Source on WebSphere is not correctly configured, but I tried some modifications without results. I also tried updating the JDBC driver, but that didn't help; I then got ERRORCODE = -1244.
Ok, so now I'm looking for any help ;).
I can obviously provide additional information when needed.
Maybe someone has fought with this problem before?
Thanks in advance!
We had the same problem and finally solved it by running REORG and RUNSTATS on the table(s). In our case, the database and tables were damaged, and after running both operations the problem was resolved.
This occurs when Hibernate tries to select data from one big table (more than 6 million records and 320 columns)
6 million records with 320 columns seems huge to read at once through Hibernate. Have you tried creating a database cursor and streaming a few records at a time? In plain JDBC it is done as follows:
Statement stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(50); //fetch only 50 records at a time
With Hibernate you would need code like the following:
Query query = session.createQuery(hql); // hql is your HQL query string
query.setReadOnly(true);
query.setFetchSize(50);
ScrollableResults results = query.scroll(ScrollMode.FORWARD_ONLY);
// iterate over results
while (results.next()) {
Object row = results.get();
// process row then release reference
// you may need to flush() as well
}
results.close();
This allows you to stream over the result set; however, Hibernate will still cache results in the Session, so you’ll need to call session.flush() every so often. If you are only reading data, you might consider using a StatelessSession, though you should read its documentation beforehand.
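As a sketch of that alternative (reusing the hql string and fetch size from above; a StatelessSession has no first-level cache, so there is nothing to flush or clear):
StatelessSession statelessSession = sessionFactory.openStatelessSession();
try {
    Query query = statelessSession.createQuery(hql);
    query.setFetchSize(50);
    ScrollableResults results = query.scroll(ScrollMode.FORWARD_ONLY);
    while (results.next()) {
        Object row = results.get();
        // process row; nothing accumulates in a session-level cache here
    }
    results.close();
} finally {
    statelessSession.close();
}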
Analyze the database table locking impact when using this approach.
I need to insert a lot of data into a database using Hibernate. I was looking at Hibernate's batch inserts; what I am using is similar to the example in the manual:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for ( int i=0; i<100000; i++ ) {
Customer customer = new Customer(.....);
session.save(customer);
if ( i % 20 == 0 ) { //20, same as the JDBC batch size
//flush a batch of inserts and release memory:
session.flush();
session.clear();
}
}
tx.commit();
session.close();
But I see that flush doesn't write the data to the database.
Reading about it, if the code is inside a transaction then nothing will be committed to the database until the transaction performs a commit.
So what is the need for flush/clear? It seems useless: if the data is not written to the database then it is still in memory.
How can I force Hibernate to write data to the database?
Thanks
The data is sent to the database, and is not in memory anymore. It's just not made definitively persistent until the transaction commits. It's exactly the same as if you executed the following sequence of statements in any database tool:
begin;
insert into ...
insert into ...
insert into ...
-- here, three inserts have been done on the database. But they will only be made
-- definitively persistent at commit time
...
commit;
The flush consists in executing the insert statements.
The commit consists in executing the commit statement.
The data will be written to the database, but according to the transaction isolation level you will not see them (in other transactions) until the transaction is committed.
Use an SQL statement logger that prints the statements sent over the database connection; then you will see that the statements are indeed sent to the database.
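For example, with plain Hibernate you can turn on statement logging via standard configuration properties (a sketch; with a logging framework you would usually set the org.hibernate.SQL logger to DEBUG instead):
Configuration configuration = new Configuration()
    .setProperty("hibernate.show_sql", "true")    // print each SQL statement
    .setProperty("hibernate.format_sql", "true"); // pretty-print for readability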
For best performance you also have to commit transactions. Flushing and clearing the session clears Hibernate's caches, but the data is only moved to the JDBC connection's buffers and is still uncommitted (different RDBMSs/drivers show different behaviour); you are just shifting the problem to another place without a real improvement in performance.
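If partial inserts are acceptable for your use case, one common variant (just a sketch of the idea, with an arbitrary chunk size) is to commit per chunk in addition to flushing and clearing:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++) {
    session.save(new Customer(/* ... */));
    if (i > 0 && i % 1000 == 0) {
        session.flush();
        session.clear();
        tx.commit();                     // make this chunk durable
        tx = session.beginTransaction(); // start the next chunk
    }
}
tx.commit();
session.close();
The trade-off is that a failure halfway through leaves the earlier chunks committed, so this only fits jobs that can tolerate or resume from partial inserts.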
Having flush() at the location mentioned also saves you memory, as your session will be cleared regularly. Otherwise you will have 100,000 objects in memory and might run out of memory for larger counts. Check out this article.