I'm facing some lock timeout issues and I need better tools to find the root cause. Consider an IBM stack (WebSphere 8.5 and DB2 10.5) and lock information like:
Lock Information:
Lock Name: 000301230000000008C0000252
Lock Type: Basic RECORD lock(DMS/IXM)
Lock Specifics: (obj={4;511}, rid=d(0;2440;6), x0000000002A00001)
Lock Requestor:
...
Requesting Agent ID: 28648
Coordinator Agent ID: 28648
...
Lock Owner (Representative):
...
Requesting Agent ID: 295623
Coordinator Agent ID: 295623
...
and given I have two JDBC transactions, one holding the lock and another waiting for it to be released, how can I programmatically obtain transactional information (the agent ID, for example) from the JDBC connection, so that I can diagnose which instance of the JDBC connection in my apps is holding the lock? Assume a multi-threaded, multi-server environment.
I saw a similar question covering Oracle, SQL Server, and PostgreSQL at this link: How to get the current database transaction id using JDBC or Hibernate?
But I didn't find any information for DB2.
To troubleshoot the lock cause I need to find:
The SQL statement holding the lock on the table
The SQL statement waiting for the lock to be released
The bind parameters (data) involved in those statements
Start from the SYSIBMADM.MON_LOCKWAITS view.
If you need more information on the participating applications, you can use the monitoring table functions that this view is based on directly:
MON_GET_APPL_LOCKWAIT
MON_GET_CONNECTION
SELECT
-- locked table
CASE WHEN L.TBSP_ID > 0 THEN T.TABSCHEMA ELSE S.TABSCHEMA END AS TABSCHEMA
, CASE WHEN L.TBSP_ID > 0 THEN T.TABNAME ELSE S.TABNAME END AS TABNAME
, CASE WHEN L.TBSP_ID > 0 THEN T.DATA_PARTITION_ID ELSE -1 END AS DATA_PARTITION_ID
--, L.* -- lock info
, H.CLIENT_HOSTNAME -- holder connection info
, HC.STMT_TEXT AS HLD_STMT_TEXT_CURR -- holder's currently executing statement
, HL.STMT_TEXT AS HLD_STMT_TEXT_LAST -- holder's last executed statement
, R.CLIENT_HOSTNAME -- requester connection info
, RC.STMT_TEXT AS REQ_STMT_TEXT_CURR -- requester's current statement
FROM TABLE (MON_GET_APPL_LOCKWAIT (NULL, -2)) L
LEFT JOIN TABLE (MON_GET_TABLE (NULL, NULL, L.HLD_MEMBER)) T ON T.TBSP_ID = L.TBSP_ID AND T.TAB_FILE_ID = L.TAB_FILE_ID
LEFT JOIN SYSCAT.TABLES S ON S.TBSPACEID = L.TBSP_ID AND S.TABLEID = L.TAB_FILE_ID
-- Holder's info
LEFT JOIN TABLE (MON_GET_CONNECTION (L.HLD_APPLICATION_HANDLE, L.HLD_MEMBER)) H ON 1=1
LEFT JOIN TABLE (MON_GET_ACTIVITY (L.HLD_APPLICATION_HANDLE, L.HLD_MEMBER)) HC ON 1=1
LEFT JOIN TABLE (MON_GET_UNIT_OF_WORK (L.HLD_APPLICATION_HANDLE, L.HLD_MEMBER)) HU ON 1=1
LEFT JOIN TABLE (MON_GET_PKG_CACHE_STMT (NULL, HU.LAST_EXECUTABLE_ID, NULL, L.HLD_MEMBER)) HL ON 1=1
-- Requester's info
LEFT JOIN TABLE (MON_GET_CONNECTION (L.REQ_APPLICATION_HANDLE, L.REQ_MEMBER)) R ON 1=1
LEFT JOIN TABLE (MON_GET_PKG_CACHE_STMT (NULL, L.REQ_EXECUTABLE_ID, NULL, L.REQ_MEMBER)) RC ON 1=1
Notes:
You can't get the statement that placed the lock the requester is waiting on: in Db2 there is no explicit link between a lock and the statement that placed it. As shown above, you can get the holder's currently executing statement (if any) and its last completed statement, but neither of these is necessarily the statement that placed the lock.
You can't get parameter values for the returned statements with the query above.
To get more information on lock wait / timeout events, you can create an event monitor for locking - a kind of "logger" for lock wait, lock timeout, and deadlock events. The corresponding information is written to database tables created for this monitor; refer to the Information written to tables for a locking event monitor topic. The amount of data you get in these tables depends on the settings of these database configuration parameters:
Lock timeout events (MON_LOCKTIMEOUT) = HIST_AND_VALUES
Deadlock events (MON_DEADLOCK) = HIST_AND_VALUES
Lock wait events (MON_LOCKWAIT) = HIST_AND_VALUES
Lock wait event threshold (MON_LW_THRESH) = 5000000
For example, with the parameters set as above, very detailed information (including all statement parameter values) on all three event types is written to the event monitor tables; a lock wait event is generated if the requester waits more than 5 seconds.
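As a sketch of the setup, assuming a database named MYDB and a free monitor name LOCKEVMON (both names are placeholders, not from the original post), the parameters above can be set and a locking event monitor created from the DB2 command line processor:

```sql
-- Hypothetical names: database MYDB, event monitor LOCKEVMON.
-- Run from the DB2 CLP while connected to the database.
UPDATE DB CFG FOR MYDB USING MON_LOCKTIMEOUT HIST_AND_VALUES
                             MON_DEADLOCK    HIST_AND_VALUES
                             MON_LOCKWAIT    HIST_AND_VALUES
                             MON_LW_THRESH   5000000;

-- Create the monitor, writing its output to regular tables, then activate it.
CREATE EVENT MONITOR LOCKEVMON FOR LOCKING WRITE TO TABLE;
SET EVENT MONITOR LOCKEVMON STATE 1;
```

The WRITE TO TABLE clause creates one table per logical data group; check the created tables (and prune them periodically) in a long-running system.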
If you have such an active event monitor with at least MON_DEADLOCK = HISTORY, you additionally get the whole transaction history for all applications that currently have open transactions when db2pd -apinfo -db mydb is run on the server.
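To tie this back to the original question - identifying from the application which JDBC connection holds the lock - a sketch that should work on Db2 9.7 and later is to have each connection ask for its own application handle and log it alongside your application's context (server, thread, transaction). The handle can then be matched against the HLD_APPLICATION_HANDLE / REQ_APPLICATION_HANDLE columns of the monitoring functions (verify these scalar functions are available at your fixpack level):

```sql
-- Run on the JDBC connection you want to identify and log the result.
SELECT MON_GET_APPLICATION_HANDLE() AS APPL_HANDLE,
       APPLICATION_ID()             AS APPL_ID
FROM SYSIBM.SYSDUMMY1;
```

Logging this pair at connection checkout gives you a mapping from lock-monitor output back to the owning application instance.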
What I've found so far to better correlate DB2 and WebSphere locks requires the following.
First, I need a clue about which tables are involved in the deadlock. I can find it by issuing one of the following commands:
> db2pd -locks showlocks -db SAMPLE
Locks:
Address TranHdl Lockname Type Mode Sts Owner Dur HoldCount Att ReleaseFlg rrIID TableNm SchemaNm
0x0000A338B812C780 15 01008800050000000000000052 RowLock ..X G 15 1 0 0x00200000 0x40000000 0 TABLE DB2INST1 03008800050000000000000052 SQLP_RECORD (obj={2;136}, rid=d(0;0;5), x0500000000000000)
0x0000A338B812C780 15 01008800000000000000000054 TableLock .IX G 15 1 0 0x00202000 0x40000000 0 TABLE DB2INST1 03008800000000000000000054 SQLP_TABLE (obj={2;136})
Then I can query the ROWID based on the result of the previous command:
> db2 connect to SAMPLE
> db2 "select rid(TABLE), COL1, COL2 from TABLE with ur" | grep 0500000000000000
Based on the row data in the table (which, while locked, won't have been committed), I can then correlate the row with the application to discover which server is causing the issue.
Related
I am trying to read records from an SAP ASE 16 database table concurrently using Java to increase performance. I am using a SELECT … FOR UPDATE query to read the table's records concurrently; two threads are trying to read records from a single table.
I am executing this command on a microservice based environment.
I have done the following configuration on the database:
• I have enabled select for update using: sp_configure "select for update", 1
• To set locking scheme I have used: alter table poll lock datarows
Table name: poll
This is the query that I am trying to execute:
SQL Query:
SELECT e_id, e_name, e_status FROM poll WHERE (e_status = 'new') FOR UPDATE
UPDATE poll SET e_status = 'polled' WHERE (e_id = #e_id)
Problem:
For some reason I am getting duplicate records when executing the above queries - sometimes for the majority of records, beyond 200 or 300.
It seems like locks are not being acquired during execution of the above commands. Is there any configuration I am missing on the database side? Does it have anything to do with shared locks and exclusive locks?
There are 5 coupon codes in my table, and these coupon codes are the same. If 10 customers apply coupon code [FIRST5] simultaneously, I need to update the coupon to "LOCKED" with the respective CUST_ID for only 5 of them. For this case I have tried the SQL below to lock the row and get the P_KEY, so that I can update the status and customer ID when a customer applies the coupon. But I was not able to update the latest P_KEY for the respective customer. Please advise on the correct way of doing this.
SELECT P_KEY FROM
(SELECT P_KEY FROM COUPON_DETAILS WHERE COUPON_CODE = 'FIRST5'
AND (STATUS = 'UNLOCK' OR STATUS IS NULL))
WHERE ROWNUM = 1 FOR UPDATE;
P_KEY COUPON_CODE STATUS CUST_ID
1 FIRST5 UNLOCK
2 FIRST5 UNLOCK
3 FIRST5 UNLOCK
4 FIRST5 UNLOCK
5 FIRST5 UNLOCK
If 10 customer's will apply coupon code [FIRST5] simultaneously, then
i need to update coupon as "LOCKED" and CUST_ID respectively only for
5 customers.
I don't know of a good, pure-SQL way to do this, because the FOR UPDATE clause does not affect the query's result set. It only affects how rows are fetched.
So, you might think to try this:
SELECT p_key
FROM coupon_details
WHERE coupon_code = 'FIRST5'
AND (status = 'UNLOCK' OR status IS NULL)
AND rownum = 1
FOR UPDATE SKIP LOCKED;
It's reasonable to think this will cause Oracle to read all the matching coupon_details rows, skip any that are locked, and then stop after the first. But that would only work if the rownum = 1 condition were applied after the FOR UPDATE clause.
Unfortunately, the rownum = 1 condition is applied first, because the FOR UPDATE only happens during fetching. So what winds up happening is that every session looks at the first row only. If it is not locked, the session returns that p_key; but if that first row is locked, the session returns no data. (Or, in the case of the query you posted, which did not include SKIP LOCKED, the sessions after the first one would just wait.)
What you really need to do is select all the rows and then fetch them (skipping locked rows) and then stop after the first one.
You need PL/SQL for that. Here is an example:
DECLARE
  c     SYS_REFCURSOR;
  l_key coupon_details.p_key%TYPE;
BEGIN
  -- Open a cursor on all the coupon details that are available to lock
  OPEN c FOR
    SELECT p_key
    FROM   coupon_details
    WHERE  coupon_code = 'FIRST5'
    AND    (status = 'UNLOCK' OR status IS NULL)
    FOR UPDATE SKIP LOCKED;

  -- Fetch the first one. The (FOR UPDATE SKIP LOCKED) will ensure that
  -- the one we fetch is not locked by another user and, after fetching,
  -- will be locked by the current session.
  FETCH c INTO l_key;

  -- Do what you need with the locked row. In this example, we'll
  -- just print some debug messages.
  IF l_key IS NULL THEN
    DBMS_OUTPUT.PUT_LINE('No free locks!');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Locked key ' || l_key);
  END IF;

  -- Close the cursor
  CLOSE c;
END;
... be sure to UPDATE coupon_details SET status = 'LOCKED' WHERE p_key = l_key before you commit.
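Putting the answer's advice together, a minimal sketch of the whole claim flow might look like this (the variable l_cust_id and its value are illustrative assumptions, not from the original post):

```sql
DECLARE
  c         SYS_REFCURSOR;
  l_key     coupon_details.p_key%TYPE;
  l_cust_id coupon_details.cust_id%TYPE := 'CUST42'; -- hypothetical customer id
BEGIN
  -- Lock one available coupon row, skipping rows locked by other sessions
  OPEN c FOR
    SELECT p_key
    FROM   coupon_details
    WHERE  coupon_code = 'FIRST5'
    AND    (status = 'UNLOCK' OR status IS NULL)
    FOR UPDATE SKIP LOCKED;

  FETCH c INTO l_key;

  IF l_key IS NOT NULL THEN
    -- Claim the coupon we just locked
    UPDATE coupon_details
    SET    status = 'LOCKED', cust_id = l_cust_id
    WHERE  p_key = l_key;
  END IF;

  CLOSE c;
  COMMIT;  -- releases the row lock and makes the claim permanent
END;
```

Committing only after the UPDATE guarantees that no other session can claim the same P_KEY in between.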
I stumbled upon a problem with locking a row in Oracle DB. The purpose of the lock is to prevent more than one transaction from reading data from the DB, because this data influences the generation of new data and is changed within the transaction.
In order to take the lock, I've put the @Lock annotation over the Spring Data find method which retrieves the data that participates in the transaction.
@Lock(LockModeType.PESSIMISTIC_WRITE)
User findUserById(@Param("id") String operatorId);
After this code is implemented I get the log message
org.hibernate.loader.Loader - HHH000444: Encountered request for locking however dialect reports that database prefers locking be done in a separate select (follow-on locking); results will be locked after initial query executes
Besides, it has no effect and causes
org.springframework.dao.DataIntegrityViolationException: could not execute batch; SQL [insert into ...]
The issue can be solved by rewriting the lock using the entity manager
entityManager.lock(userByIdWithLockOnReadWrite, LockModeType.PESSIMISTIC_WRITE);
or
entityManager.unwrap(Session.class).lock(userByIdWithLockOnReadWrite, LockMode.PESSIMISTIC_WRITE);
The issue doesn't appear on MariaDB (MySQL).
Maybe there are some special rules for using the annotation?
You said that:
The purpose of the lock is to prevent more than one transaction
reading data from the DB because this data influences the generation
of new data and is changed in terms of a transaction.
Oracle uses MVCC (Multiversion Concurrency Control) so Readers don't block Writers and Writers don't block Readers. Even if you acquire a row-level lock with Oracle, and you modify that row without committing, other transactions can still read the last committed value.
Related to this log message:
org.hibernate.loader.Loader - HHH000444: Encountered request for locking however dialect reports that database prefers locking be done in a separate select (follow-on locking); results will be locked after initial query executes
The follow-on locking mechanism is due to Oracle not being able to apply the lock when using Oracle 11g-style pagination, DISTINCT, or UNION ALL.
If you're using Oracle 12c, then you can update the Hibernate dialect to Oracle12cDialect, and pagination and locking will work fine, since Oracle 12 uses SQL-standard pagination and no longer requires a derived table query.
This does not happen in MariaDB or any other database; it's an Oracle pre-12 limitation.
If you are using Hibernate 5.2.1, there is a new hint, HINT_FOLLOW_ON_LOCKING, which disables this mechanism.
So, your Spring Data query becomes:
@QueryHints(value = { @QueryHint(name = "hibernate.query.followOnLocking", value = "false")}, forCounting = false)
@Lock(LockModeType.PESSIMISTIC_WRITE)
User findUserById(@Param("id") String operatorId);
You can also apply it manually:
User user = entityManager.createQuery(
        "select u from User u where id = :id", User.class)
    .setParameter("id", id)
    .unwrap( Query.class )
    .setLockOptions(
        new LockOptions( LockMode.PESSIMISTIC_WRITE )
            .setFollowOnLocking( false ) )
    .getSingleResult();
Is there any option to make a transaction (TxB) wait for some time (without throwing a lock acquisition exception) for another transaction (TxA) to release the lock?
I have a very simple table SEQ_NO in MS SQL server 2014, the table structure is as following:
CREATE TABLE SEQ_NO(
KEY_CODE varchar(30) NOT NULL,
CURR_SEQ_NO numeric(38, 0) NOT NULL,
PRIMARY KEY (KEY_CODE)
)
This table has only 4 records, each containing a sequence number. I have a program with many threads that access this table through Hibernate to increase a sequence number by 1 and retrieve the increased value.
For example, threads 1-10 increase and retrieve the sequence number for key_code_1, threads 11-20 for key_code_2, and so on.
I handled the exception for threads accessing the same record, but not for threads accessing different records, since the table is row-locked (i.e. contention between threads 1-10 is properly handled, but contention between thread 1 and thread 11 is not).
It worked fine with Oracle 10g and Hibernate 3. Recently the database was upgraded to SQL Server 2014 and Hibernate to version 4, and the program no longer works: sometimes thread 1 and thread 11 deadlock even though they are accessing different rows of the table! I am not sure why this happens or how to resolve it.
I have used the following script to check the table is row locked:
Query1 to lock one row:
begin tran T1;
update SEQ_NO set CURR_SEQ_NO = CURR_SEQ_NO+1 where KEY_CODE = 'KEY_CODE_1';
select CURR_SEQ_NO from SEQ_NO where KEY_CODE = 'KEY_CODE_1';
Query2 to check another row:
update SEQ_NO set CURR_SEQ_NO = CURR_SEQ_NO+1 where KEY_CODE = 'KEY_CODE_2';
select CURR_SEQ_NO from SEQ_NO where KEY_CODE = 'KEY_CODE_2';
I am able to get the sequence number for the 2nd query.
The problem was solved by disabling lock escalation and turning off page locks:
ALTER TABLE SEQ_NO SET (LOCK_ESCALATION = DISABLE);
ALTER INDEX SEQ_NO_PK ON SEQ_NO SET (ALLOW_PAGE_LOCKS = OFF);
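To verify that the two settings took effect, something like the following should work on SQL Server 2008 and later (table and index names taken from the question):

```sql
-- Check lock escalation on the table (expect DISABLE)
SELECT name, lock_escalation_desc
FROM   sys.tables
WHERE  name = 'SEQ_NO';

-- Check that page locks are disallowed on the table's indexes (expect 0)
SELECT name, allow_page_locks
FROM   sys.indexes
WHERE  object_id = OBJECT_ID('SEQ_NO');
```

Note that disabling page locks forces row-level locking at the cost of more lock memory under heavy concurrency, which is acceptable for a 4-row sequence table.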
Question:
Why does inserting a row into table A with a foreign key constraint to table B and then updating the row in table B that the inserted row in table A references in a transaction cause a deadlock?
Scenario:
reservation.time_slot_id has a foreign key constraint to time_slot.id.
When a reservation is made the following SQL is run:
BEGIN TRANSACTION
INSERT INTO reservations (..., time_slot_id) VALUES (..., $timeSlotID)
UPDATE time_slot SET num_reservations = 5 WHERE id = $timeSlotID
COMMIT
I am load testing my server with about 100 concurrent users, each making a reservation for the same time slot (same $timeSlotID for each user).
If I don't use a transaction (remove cn.setAutoCommit(false), cn.commit(), etc.) this problem does not occur.
Environment:
PostgreSQL 9.2.4
Tomcat v7.0
JDK 1.7.0_40
commons-dbcp-1.4.jar
commons-pool-1.6.jar
postgresql-9.2-1002.jdbc4.jar
Code:
// endpoint start
// there are some other SELECT ... LEFT JOIN ... WHERE ... queries up here but they don't seem to be related
...
// create a reservation in the time slot then increment the count
cn.setAutoCommit(false);
try
{
st = cn.prepareStatement("INSERT INTO reservation (time_slot_id, email, created_timestamp) VALUES (?, ?, ?)");
st.setInt (1, timeSlotID); // timeSlotID is the same for every user
st.setString(2, email);
st.setInt (3, currentTimestamp);
st.executeUpdate();
st.close();
st = cn.prepareStatement("UPDATE time_slot SET num_reservations = 5 WHERE id = ?"); // set to 5 instead of incrementing for testing
st.setInt(1, timeSlotID); // timeSlotID is the same for every user
st.executeUpdate();
st.close();
cn.commit();
}
catch (SQLException e)
{
cn.rollback();
...
}
finally
{
cn.setAutoCommit(true);
}
...
// endpoint end
PSQL Error:
ERROR: deadlock detected
DETAIL: Process 27776 waits for ExclusiveLock on tuple (2,179) of relation 49817 of database 49772; blocked by process 27795.
Process 27795 waits for ShareLock on transaction 3962; blocked by process 27777.
Process 27777 waits for ExclusiveLock on tuple (2,179) of relation 49817 of database 49772; blocked by process 27776.
Process 27776: UPDATE time_slot SET num_reservations = 5 WHERE id = $1
Process 27795: UPDATE time_slot SET num_reservations = 5 WHERE id = $1
Process 27777: UPDATE time_slot SET num_reservations = 5 WHERE id = $1
HINT: See server log for query details.
STATEMENT: UPDATE time_slot SET num_reservations = 5 WHERE id = $1
How the foreign key can cause a deadlock (in PostgreSQL 9.2 and below).
Let's say there is a child table referencing a parent table:
CREATE TABLE time_slot(
id int primary key,
num_reservations int
);
CREATE TABLE reservation(
time_slot_id int,
created_timestamp timestamp,
CONSTRAINT time_slot_fk FOREIGN KEY (time_slot_id)
REFERENCES time_slot( id )
);
INSERT INTO time_slot values( 1, 0 );
INSERT INTO time_slot values( 2, 0 );
Suppose the FK column in the child table is modified in session one, which fires an ordinary INSERT statement (to test this behavior, open one session in SQL Shell (psql) and turn autocommit off, or start the transaction with a BEGIN statement):
BEGIN;
INSERT INTO reservation VALUES( 2, now() );
When the FK column in the child table is modified, the DBMS has to look up the parent table to ensure the existence of the parent record.
If the inserted value doesn't exist in the referenced (parent) table, the DBMS aborts the transaction and reports an error.
If the value exists, the record is inserted into the child table, but the DBMS has to ensure transactional integrity - no other transaction can delete or modify the referenced record in the parent table until the transaction ends (until the INSERT into the child table is committed).
PostgreSQL 9.2 (and below) ensures database integrity in such a case by placing a shared read lock on the record in the parent table. A shared read lock doesn't prevent readers from reading the locked record, but it does prevent writers from modifying it.
OK - now we have a new record in the child table inserted by session 1 (with a write lock placed on it by session 1), and a shared read lock placed on record 2 in the parent table. The transaction is not yet committed.
Suppose that session 2 starts the same transaction, referencing the same record in the parent table:
BEGIN;
INSERT INTO reservation VALUES( 2, now() );
The query executes fine, without any errors - it inserts a new record into the child table, and also places a shared read lock on record 2 in the parent table. Shared locks don't conflict: many transactions can lock a record in shared read mode without waiting for each other (only write locks conflict).
Now (a few milliseconds later) session 1 fires, as part of the same transaction, this command:
UPDATE time_slot
SET num_reservations = num_reservations + 1
WHERE id = 2;
In Postgres 9.2 the above command "hangs", waiting for the shared lock placed by session 2.
And now suppose that the same command, a few milliseconds later, is run in session 2:
UPDATE time_slot
SET num_reservations = num_reservations + 1
WHERE id = 2;
This command should simply "hang", waiting for the write lock placed on the record by the UPDATE from session 1.
But the result is:
ERROR: deadlock detected
DETAIL: Process 5604 waits for ExclusiveLock on tuple (0,2) of relation 41363 of database 16393; blocked by process 3816.
Process 3816 waits for ShareLock on transaction 1036; blocked by process 5604.
HINT: See server log for query details.
(message translated from Polish; the original began "BŁĄD: wykryto zakleszczenie")
The UPDATE from session 2 is trying to place a write lock on record 2, which is locked by session 1.
Session 1 is trying to place a write lock on the same record, locked (in shared mode) by session 2.
----> deadlock.
The deadlock can be prevented by placing a write lock on the parent record up front using SELECT FOR UPDATE.
The above test case will not cause a deadlock in PostgreSQL 9.3 (try it) - locking behaviour in such cases was improved in 9.3.
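Applied to this test case, a deadlock-free version of the transaction takes the write lock on the parent row before anything else:

```sql
BEGIN;
-- Take a write lock on the parent row first
SELECT * FROM time_slot WHERE id = 2 FOR UPDATE;
-- The rest of the transaction proceeds as before
INSERT INTO reservation VALUES( 2, now() );
UPDATE time_slot SET num_reservations = num_reservations + 1 WHERE id = 2;
COMMIT;
```

Because write locks conflict, a second session running the same block waits at its SELECT FOR UPDATE until the first transaction ends, so the shared/exclusive upgrade that caused the deadlock never happens.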
------------ EDIT - additional questions -------------------
why does the insert statement not release the lock after it is done? Or does it remain for the entire transaction which is why not using a transaction does not cause a deadlock?
All statements that modify data within a transaction (INSERT, UPDATE, DELETE) place locks on the modified records. These locks remain active until the transaction ends - by issuing COMMIT or ROLLBACK.
Because autocommit is turned off on the JDBC connection, successive SQL commands are automatically grouped into one transaction.
The explanation is here:
http://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html#setAutoCommit%28boolean%29
If a connection is in auto-commit mode, then all its SQL statements will be executed and committed as individual transactions. Otherwise, its SQL statements are grouped into transactions that are terminated by a call to either the method commit or the method rollback.
How does the SELECT FOR UPDATE prevent the deadlock?
SELECT FOR UPDATE places a write lock on the record. It is the first command in the whole transaction, so the lock is placed at the beginning. When another transaction starts in another session, it also executes SELECT FOR UPDATE, trying to lock the same record. Write locks conflict - two transactions cannot lock the same record at the same time - so the second transaction's SELECT FOR UPDATE blocks and waits until the first transaction releases the lock (by issuing COMMIT or ROLLBACK); in effect, the second transaction waits until the whole first transaction ends.
In the first scenario, the INSERT statement places two locks:
- a write lock on the inserted record in the reservation table
- and a shared read lock on the record in the time_slot table referenced by the foreign key constraint
Shared read locks don't conflict - two or more transactions can lock the same record in shared mode and continue execution without waiting for each other. But later, when the UPDATE is issued within the same transaction and tries to place a write lock on the record already locked in shared mode, this causes the deadlock.
Would placing the increment first also prevent the deadlock?
Yes, you are right. This prevents the deadlock, because a write lock is placed on the record at the beginning of the transaction. Another transaction that also tries to update the same record at the beginning has to wait at that point, because the record is already locked (in write mode) by the first session.
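In other words, for this schema the reordered transaction would be:

```sql
BEGIN;
-- Write-lock the parent row first by updating it ...
UPDATE time_slot SET num_reservations = num_reservations + 1 WHERE id = 2;
-- ... then insert the child row; the FK share lock no longer matters,
-- because this session already holds the stronger write lock.
INSERT INTO reservation VALUES( 2, now() );
COMMIT;
```

Either ordering trick (UPDATE first, or SELECT FOR UPDATE first) works for the same reason: the strongest lock is acquired before any shared lock can be taken by a competing session.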
While I still don't fully understand it, I added:
SELECT * FROM time_slot WHERE id = ? FOR UPDATE
as the first statement in the transaction. This seems to have solved my problem, as I no longer get a deadlock.
I would still love for someone to give a proper answer and explain this to me.