I am trying to read records from an SAP ASE 16 database table concurrently using Java to increase performance. I am using a SELECT ... FOR UPDATE query to read the table records concurrently: two threads read records from a single table.
I am executing this in a microservice-based environment.
I have done the following configuration on the database:
• I enabled select for update using: sp_configure "select for update", 1
• To set the locking scheme I used: alter table poll lock datarows
Table name: poll
These are the SQL statements that I am trying to execute:
SELECT e_id, e_name, e_status FROM poll WHERE (e_status = 'new') FOR UPDATE
UPDATE poll SET e_status = 'polled' WHERE (e_id = #e_id)
Problem:
For some reason I am getting duplicate records when executing the above statements; the majority of records are duplicated, sometimes more than 200 or 300 of them.
It seems like the locks are not being acquired while the statements execute. Is there any configuration that I am missing on the database side? Does it have anything to do with shared locks versus exclusive locks?
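For reference, here is a minimal sketch of the claim-and-update pattern as I understand it, assuming plain JDBC. The key point is that both statements must run inside one transaction: if each statement runs in autocommit mode, the FOR UPDATE locks are released as soon as the SELECT completes, and two threads can then read the same rows.
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: claim 'new' rows and flip them to 'polled' inside ONE
// transaction. With autocommit on, the FOR UPDATE locks would be dropped as
// soon as the SELECT finishes, letting two threads read the same rows.
public static List<Long> claimNewEvents(Connection conn) throws SQLException {
    conn.setAutoCommit(false);
    List<Long> claimed = new ArrayList<>();
    try {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT e_id FROM poll WHERE e_status = 'new' FOR UPDATE")) {
            while (rs.next()) {
                claimed.add(rs.getLong(1)); // rows stay exclusively locked
            }
        }
        try (PreparedStatement upd = conn.prepareStatement(
                 "UPDATE poll SET e_status = 'polled' WHERE e_id = ?")) {
            for (Long id : claimed) {
                upd.setLong(1, id);
                upd.executeUpdate();
            }
        }
        conn.commit(); // locks are released only here
    } catch (SQLException e) {
        conn.rollback();
        throw e;
    }
    return claimed;
}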
Related
I have an update query which I am trying to execute through the batchUpdate method of Spring's JdbcTemplate. This update query can potentially match thousands of rows in the EVENT_DYNAMIC_ATTRIBUTE table which need to be updated. Will updating thousands of rows in a table cause any issue in a production database apart from a timeout? For example, will it crash the database or slow down the entire database engine for other connections, etc.?
Is there a better way to achieve this instead of firing a single update query through the Spring JdbcTemplate or JPA? I have the following settings for the JdbcTemplate:
this.jdbc = new JdbcTemplate(ds);
jdbc.setFetchSize(1000);
jdbc.setQueryTimeout(0); // zero means there is no limit
The update query:
UPDATE EVENT_DYNAMIC_ATTRIBUTE eda
   SET eda.ATTRIBUTE_VALUE = 'claim',
       eda.LAST_UPDATED_DATE = SYSDATE,
       eda.LAST_UPDATED_BY = 'superUsers'
 WHERE eda.DYNAMIC_ATTRIBUTE_NAME_ID = 4002
   AND eda.EVENT_ID IN
       (WITH category_data AS
            (SELECT c.CATEGORY_ID
               FROM CATEGORY c
              START WITH c.CATEGORY_ID = 495984
            CONNECT BY c.PARENT_ID = PRIOR c.CATEGORY_ID)
        SELECT e.EVENT_ID
          FROM EVENT e
         WHERE EXISTS
             (SELECT 't'
                FROM category_data cd
               WHERE cd.CATEGORY_ID = e.PRIMARY_CATEGORY_ID))
If it is a one-time thing, I normally first select the records which need to be updated and put them in a temporary table or in a CSV file, making sure to save the primary key of those records. Then I read the records in batches from the temporary table or the CSV, and do the update in the target table using the primary key. This way tables are not locked for a long time, each batch contains a fixed set of records that need updating, and the updates go through the primary key, so they are very fast. And if any update fails, you know which records failed by logging the failed records' primary keys in a log file or in an error table. I have followed this approach many times for updating millions of records in a PROD database, as it is a very safe approach.
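For the query above, a hedged sketch of that approach with JdbcTemplate could look like the following; the key-collection query is simplified to the attribute-id filter only (in practice you would reuse the full category-subtree SELECT from the original statement), and the batch size of 1000 is an assumption.
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: collect the primary keys first, then update in fixed-size
// batches by key. The simplified key query and the batch size are assumptions.
List<Long> eventIds = jdbc.queryForList(
    "SELECT eda.EVENT_ID FROM EVENT_DYNAMIC_ATTRIBUTE eda "
    + "WHERE eda.DYNAMIC_ATTRIBUTE_NAME_ID = 4002", Long.class);

int batchSize = 1000;
for (int from = 0; from < eventIds.size(); from += batchSize) {
    List<Object[]> args = new ArrayList<>();
    for (Long id : eventIds.subList(from, Math.min(from + batchSize, eventIds.size()))) {
        args.add(new Object[] { id });
    }
    jdbc.batchUpdate(
        "UPDATE EVENT_DYNAMIC_ATTRIBUTE SET ATTRIBUTE_VALUE = 'claim', "
        + "LAST_UPDATED_DATE = SYSDATE, LAST_UPDATED_BY = 'superUsers' "
        + "WHERE EVENT_ID = ? AND DYNAMIC_ATTRIBUTE_NAME_ID = 4002", args);
    // each batch commits separately (assuming default auto-commit), so no
    // single long-running transaction holds locks on the whole table
}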
My JDeveloper version: 11.1.1.7
In our ADF application we have a requirement to upload heavy CSV files (10k-100k rows), process/validate each row, and update the table with the process/validation statuses.
The update happens for each row by applying a view criteria with the primary key as a bind variable and committing each updated row.
All of the above happens concurrently using java.util.concurrent utilities.
Everything works fine, but a few rows encounter oracle.jbo.JboException: JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[254 ].
I have tried updating the table at the end of the whole executor process and committing all updated rows in one batch, which works fine, but this contradicts one of the requirements: the user then has to wait till the end of the process to see the number of updated records in the UI.
My questions:
1. How can I implement a thread-safe DB commit operation in ADF in such a scenario?
2. How can each processed/validated row be committed to the DB so that the updated records can be viewed in the UI by the user?
After every commit operation, call executeQuery() or closeRowSet() on your view object.
e.g.:
public void closemaster() {
    this.getMasterView().closeRowSet();
}
or you can use:
public void closemaster() {
    this.getMasterView().executeQuery();
}
Both variants should work; I think your problem will be solved. A sketch of where the call fits in the per-row loop follows below.
Please update with what happens.
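A hedged sketch of where this fits (findRowByKey, RowData, and validate are hypothetical placeholders; getDBTransaction() and the closemaster() helper above are the ADF pieces):
// Hedged sketch: commit each validated row, then refresh the view object so
// later iterations see fresh row versions instead of stale cached ones.
// findRowByKey, RowData, and validate are hypothetical placeholders.
for (RowData csvRow : parsedRows) {
    Row dbRow = findRowByKey(csvRow.getKey()); // view criteria on the primary key
    dbRow.setAttribute("STATUS", validate(csvRow));
    getDBTransaction().commit(); // commit this row's status
    closemaster();               // executeQuery()/closeRowSet() from above
}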
I stumbled upon a problem with locking a row in an Oracle DB. The purpose of the lock is to prevent more than one transaction from reading data from the DB, because this data influences the generation of new data and is changed within a transaction.
In order to take the lock, I've put the @Lock annotation on the Spring Data find method which retrieves the data that participates in the transaction.
@Lock(LockModeType.PESSIMISTIC_WRITE)
User findUserById(@Param("id") String operatorId);
After this code is implemented, I get the log message:
org.hibernate.loader.Loader - HHH000444: Encountered request for locking however dialect reports that database prefers locking be done in a separate select (follow-on locking); results will be locked after initial query executes
Besides, the lock has no effect and causes:
org.springframework.dao.DataIntegrityViolationException: could not execute batch; SQL [insert into ...]
The issue can be solved by rewriting the lock using the entity manager:
entityManager.lock(userByIdWithLockOnReadWrite, LockModeType.PESSIMISTIC_WRITE);
or
entityManager.unwrap(Session.class).lock(userByIdWithLockOnReadWrite, LockMode.PESSIMISTIC_WRITE);
The issue doesn't appear on MariaDB (MySQL).
Maybe there are some special rules for using the annotation?
You said that:
The purpose of the lock is to prevent more than one transaction
reading data from the DB because this data influences the generation
of new data and is changed in terms of a transaction.
Oracle uses MVCC (Multiversion Concurrency Control) so Readers don't block Writers and Writers don't block Readers. Even if you acquire a row-level lock with Oracle, and you modify that row without committing, other transactions can still read the last committed value.
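A hedged illustration of that point in plain JDBC (ds and the users table are assumptions):
// While transaction A holds a row lock via an uncommitted UPDATE, a plain
// SELECT in transaction B is not blocked and still sees the last committed value.
try (Connection a = ds.getConnection(); Connection b = ds.getConnection()) {
    a.setAutoCommit(false);
    a.createStatement().executeUpdate(
        "UPDATE users SET name = 'draft' WHERE id = 1"); // row now locked by A
    try (Statement st = b.createStatement();
         ResultSet rs = st.executeQuery("SELECT name FROM users WHERE id = 1")) {
        rs.next();
        System.out.println(rs.getString(1)); // prints the old, committed name
    }
    a.rollback();
}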
Related to this log message:
org.hibernate.loader.Loader - HHH000444: Encountered request for locking however dialect reports that database prefers locking be done in a separate select (follow-on locking); results will be locked after initial query executes
The follow-on locking mechanism is due to Oracle not being able to apply the lock when using Oracle 11g-style pagination, DISTINCT, or UNION ALL.
If you're using Oracle 12c, then you can update the Hibernate dialect to Oracle12cDialect, and pagination and locking will work fine, since Oracle 12c uses the SQL-standard pagination and no longer requires a derived-table query.
This does not happen in MariaDB or any other database; it's just a pre-12c Oracle limitation.
If you are using Hibernate 5.2.1, we added a new hint, HINT_FOLLOW_ON_LOCKING, which disables this mechanism.
So, your Spring Data query becomes:
@QueryHints(value = { @QueryHint(name = "hibernate.query.followOnLocking", value = "false") }, forCounting = false)
@Lock(LockModeType.PESSIMISTIC_WRITE)
User findUserById(@Param("id") String operatorId);
You can also apply it manually:
User user = (User) entityManager.createQuery(
        "select u from User u where u.id = :id", User.class)
    .setParameter("id", id)
    .unwrap(Query.class)
    .setLockOptions(
        new LockOptions(LockMode.PESSIMISTIC_WRITE)
            .setFollowOnLocking(false))
    .getSingleResult();
Is there any option to make a transaction (TxB) wait for some time (without throwing a lock acquisition exception) for another transaction (TxA) to release the lock?
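One hedged option, assuming JPA/Hibernate as in the question above, is the standard javax.persistence.lock.timeout hint; actual support and granularity depend on the database (on Oracle, Hibernate translates it to FOR UPDATE WAIT n):
import java.util.HashMap;
import java.util.Map;
import javax.persistence.LockModeType;

// Ask the provider to wait up to 5 seconds for the row lock instead of
// failing immediately; User and id are placeholders from the earlier example.
Map<String, Object> hints = new HashMap<>();
hints.put("javax.persistence.lock.timeout", 5000); // milliseconds
User user = entityManager.find(User.class, id, LockModeType.PESSIMISTIC_WRITE, hints);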
On our production application we recently started getting a weird error from DB2:
Caused by: com.ibm.websphere.ce.cm.StaleConnectionException: [jcc][t4][2055][11259][4.13.80] The database manager is not able to accept new requests, has terminated all requests in progress, or has terminated your particular request due to an error or a force interrupt. ERRORCODE=-4499, SQLSTATE=58009
This occurs when Hibernate tries to select data from one big table (more than 6 million records and 320 columns).
I observed that when the ResultSet has fewer than 10 elements, Hibernate selects successfully.
Our architecture:
Spring 4.0.3
Hibernate 4.3.5
DB2 v10 z/Os
Websphere 7.0.0.31(with JDBC V9.7FP5)
This select works when I try to execute it in Data Studio or when the app is started locally from Tomcat (connected to the production data source). I suppose that the data source on WebSphere is not correctly configured, but I tried some modifications without results. I also tried to update the JDBC driver, but that did not help; I then get ERRORCODE = -1244 instead.
OK, so now I'm looking for any help ;).
I can obviously provide additional information when needed.
Maybe someone has fought this problem before?
Thanks in advance!
We had the same problem and finally solved it by running REORG and RUNSTATS on the table(s). In our case, the database and tables were damaged, and after running both of those operations the issue was resolved.
This occurs when Hibernate tries to select data from one big table (more than 6 million records and 320 columns)
Six million records with 320 columns seems huge to read at once through Hibernate. Have you tried creating a database cursor and streaming a few records at a time? In plain JDBC it is done as follows:
Statement stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                                      java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(50); // fetch only 50 records at a time
ResultSet rs = stmt.executeQuery(sql); // sql is your big-table select
while with Hibernate you would need code like the below:
Query query = session.createQuery(hql); // hql is your query string
query.setReadOnly(true);
query.setFetchSize(50);
ScrollableResults results = query.scroll(ScrollMode.FORWARD_ONLY);
// iterate over the results
while (results.next()) {
    Object row = results.get();
    // process the row, then release the reference;
    // you may need to call session.clear() periodically as well
}
results.close();
This allows you to stream over the result set; however, Hibernate will still cache the loaded entities in the Session, so you'll need to call session.clear() every so often to evict them. If you are only reading data, you might consider using a StatelessSession, though you should read its documentation beforehand; a sketch follows below.
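A hedged sketch of the StatelessSession variant (BigEntity and sessionFactory are assumptions):
// A StatelessSession has no first-level cache, so nothing accumulates in
// memory and no periodic clear() is needed while scrolling.
StatelessSession ss = sessionFactory.openStatelessSession();
try {
    Query query = ss.createQuery("from BigEntity");
    query.setReadOnly(true);
    query.setFetchSize(50);
    ScrollableResults results = query.scroll(ScrollMode.FORWARD_ONLY);
    while (results.next()) {
        BigEntity row = (BigEntity) results.get(0);
        // process the row; it is never cached by the session
    }
    results.close();
} finally {
    ss.close();
}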
Also analyze the impact on database table locking when using this approach.
The snapshot below shows the current application flow.
Current Flow
When a user logs in at these multiple deployments, the respective SMSAgent (a Java class) inserts the user info into the database. SMSHelper is a Java scheduler which reads data from the database into its local queue, sends the SMS, and then updates the user status in the database.
Issue with this flow
In the above scenario, multiple SMS messages get sent to a single user, because the database is shared and both notification helpers take contact details from the database (which may be the same rows) and send an SMS to that user.
Existing Solution
Currently, a solution to this problem is only available in Oracle 11g, where the select query supports FOR UPDATE SKIP LOCKED (illustrated just below).
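For reference, a hedged illustration of that query-level, Oracle-only alternative (the connection, table, and column names are assumptions):
// Locked rows are simply skipped, so each instance claims different rows.
ResultSet rs = conn.createStatement().executeQuery(
    "SELECT * FROM USER_INFO WHERE STATUS = 'NEW' FOR UPDATE SKIP LOCKED");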
Expectation
How can the same be achieved with all databases at the application level, and not at the query level?
First, you have to RESERVE the rows with an update, and then do the select.
Suppose you have 200 rows.
First, RESERVE some of them with a value that is unique per instance (you can also limit the number of rows updated by your query), and then select the rows that were reserved by your update:
UPDATE TABLE_NAME SET SERVER_INSTANCE_ID = UNIQUE_VAL
 WHERE SERVER_INSTANCE_ID IS NULL AND ROWNUM <= RECORD_RESERVATION_LIMIT

SELECT * FROM TABLE_NAME WHERE SERVER_INSTANCE_ID = UNIQUE_VAL
With this approach, you don't need to hold a lock on the row or the table. In Java it might look like the sketch below.
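A hedged sketch of the reserve-then-read pattern over plain JDBC (ds, the UUID instance id, and the limit of 50 are assumptions; ROWNUM follows the Oracle flavor used above):
import java.sql.*;
import java.util.UUID;

// Reserve up to 50 unclaimed rows for this instance, then read back only
// the rows this instance reserved.
String instanceId = UUID.randomUUID().toString(); // unique per server instance
try (Connection conn = ds.getConnection()) {
    try (PreparedStatement reserve = conn.prepareStatement(
             "UPDATE TABLE_NAME SET SERVER_INSTANCE_ID = ? "
             + "WHERE SERVER_INSTANCE_ID IS NULL AND ROWNUM <= 50")) {
        reserve.setString(1, instanceId);
        reserve.executeUpdate();
    }
    try (PreparedStatement read = conn.prepareStatement(
             "SELECT * FROM TABLE_NAME WHERE SERVER_INSTANCE_ID = ?")) {
        read.setString(1, instanceId);
        try (ResultSet rs = read.executeQuery()) {
            while (rs.next()) {
                // process this instance's reserved rows only
            }
        }
    }
}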