The snapshot below shows the current application flow.
Current Flow
When a user logs in at any of these multiple deployments, the respective SMSAgent (a Java class) inserts the user's info into the database. SMSHelper is a Java scheduler that reads data from the database into its local queue, sends the SMS, and then updates the user's status in the database.
Issue with this flow
In the above scenario, multiple SMS messages get sent to a single user because the database is common: both notification helpers take contact details from the database (which may be the same) and send an SMS to that user.
Existing Solution
Currently, a solution to this problem is only available on Oracle 11g, where the SELECT query supports FOR UPDATE SKIP LOCKED.
Expectation
How can the same be achieved on all databases, at the application level rather than at the query level?
First, you have to RESERVE the rows with an UPDATE, and only then do the SELECT.
Suppose you have 200 rows.
The first thing to do is RESERVE them with some value that is unique per instance; you can also limit the number of rows updated by your query, and then select only the rows that your query reserved:
UPDATE TABLE_NAME SET SERVER_INSTANCE_ID = UNIQUE_VAL WHERE SERVER_INSTANCE_ID IS NULL AND ROWNUM <= RECORD_RESERVATION_LIMIT
SELECT * FROM TABLE_NAME WHERE SERVER_INSTANCE_ID=UNIQUE_VAL
With this approach, you don't need to obtain a lock on the row or the table.
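For example, a minimal JDBC sketch of this reserve-then-select pattern (the table SMS_QUEUE and columns SERVER_INSTANCE_ID and ID are illustrative placeholders, and ROWNUM is Oracle syntax, so substitute LIMIT or TOP on other databases):

import java.sql.*;
import java.util.*;

public class SmsReserver {
    // Unique per running instance, e.g. host name + PID, or a UUID.
    private static final String INSTANCE_ID = UUID.randomUUID().toString();
    private static final int RESERVATION_LIMIT = 200;

    // Step 1: reserve unclaimed rows for this instance; step 2: select
    // only the rows this instance reserved. No explicit row/table locks.
    public static List<Long> reserveAndFetch(Connection con) throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement upd = con.prepareStatement(
                "UPDATE SMS_QUEUE SET SERVER_INSTANCE_ID = ? " +
                "WHERE SERVER_INSTANCE_ID IS NULL AND ROWNUM <= ?")) {
            upd.setString(1, INSTANCE_ID);
            upd.setInt(2, RESERVATION_LIMIT);
            upd.executeUpdate();
        }
        con.commit(); // publish the reservation to the other instances

        List<Long> ids = new ArrayList<>();
        try (PreparedStatement sel = con.prepareStatement(
                "SELECT ID FROM SMS_QUEUE WHERE SERVER_INSTANCE_ID = ?")) {
            sel.setString(1, INSTANCE_ID);
            try (ResultSet rs = sel.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getLong(1));
                }
            }
        }
        return ids;
    }
}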
Related
I am trying to read records from a SAP ASE 16 database table concurrently using Java, to increase performance. I am using a SELECT ... FOR UPDATE query so that two threads can read records from a single table concurrently.
I am executing this in a microservice-based environment.
I have done the following configuration on the database:
• I have enabled select for update using: sp_configure "select for update", 1
• To set locking scheme I have used: alter table poll lock datarows
Table name: poll
This is the query that I am trying to execute:
SQL Query:
SELECT e_id, e_name, e_status FROM poll WHERE (e_status = 'new') FOR UPDATE
UPDATE poll SET e_status = 'polled' WHERE (e_id = #e_id)
Problem:
For some reason I am getting duplicate records when executing the above queries, for a majority of the records, sometimes beyond 200 or 300.
It seems like locks are not being acquired during execution of the above commands. Is there any configuration that I am missing on the database side? Does it have anything to do with shared locks and exclusive locks?
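One thing worth checking is whether both statements run on the same connection inside a single transaction; with autocommit on, the locks taken by FOR UPDATE are released before the UPDATE runs, and two threads can then read the same 'new' rows. A hedged JDBC sketch of combining the two statements (table and columns as in the question):

import java.sql.*;

public class PollWorker {
    // Claim 'new' rows and mark them 'polled' in one transaction so the
    // FOR UPDATE locks are held until the status change is committed.
    public static void pollOnce(Connection con) throws SQLException {
        con.setAutoCommit(false); // keep locks until commit
        try {
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT e_id, e_name, e_status FROM poll " +
                     "WHERE e_status = 'new' FOR UPDATE");
                 PreparedStatement upd = con.prepareStatement(
                     "UPDATE poll SET e_status = 'polled' WHERE e_id = ?")) {
                while (rs.next()) {
                    upd.setInt(1, rs.getInt("e_id"));
                    upd.addBatch();
                }
                upd.executeBatch();
            }
            con.commit(); // releases the row locks for the other thread
        } catch (SQLException e) {
            con.rollback();
            throw e;
        }
    }
}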
I have an Oracle table which holds my application configurations. I load them at application startup and use them as my application cache. Now I have a requirement to update my configuration whenever there is an update in the table. I implemented Oracle's CQN database change notification framework by registering for ROW_ID events. It works for INSERT and UPDATE: I get the row id in the notification, look it up in the table using the row id, and update my cache. But when a row is deleted, the row id in the notification has already been deleted from my table, so how do I know which configuration to remove from the cache? Should I keep the row id as the key in my cache to do this? Or is there any other efficient and safe way to achieve the same? Please suggest. Thanks in advance.
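Keying the cache by ROWID is the straightforward answer here: the DELETE notification still carries the ROWID of the vanished row, and that is all you need to evict. A minimal sketch (Config is a placeholder for whatever a cached configuration row holds):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConfigCache {
    // Keyed by ROWID string so a DELETE notification, which carries only
    // the ROWID of the already-deleted row, can still evict the entry.
    private final Map<String, Config> byRowId = new ConcurrentHashMap<>();

    public void onInsertOrUpdate(String rowId, Config loadedFromTable) {
        byRowId.put(rowId, loadedFromTable); // after the table lookup
    }

    public void onDelete(String rowId) {
        byRowId.remove(rowId); // no table lookup needed, or possible
    }
}

// Placeholder for a cached configuration entry.
class Config {
    String name;
    String value;
}

One caveat: ROWIDs are not stable across operations such as ALTER TABLE ... MOVE or table reorganizations, so it is worth keeping a full-reload path as a fallback.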
I'm trying to run this query from multiple instances (on one server) of the same application.
When I run the query, I get deadlocks.
set transaction isolation level serializable
go
begin transaction
if not exists (select name from sys.sysobjects where name like 'xyp')
begin
CREATE TABLE xyp( id varchar(1), name varchar(5));
end
commit transaction
go
Is there anything that I can do?
Why do you need to lock the table during creation? I don't think it can be done in SQL Server; you need to make some changes in your app.
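A common app-level workaround is to drop the serializable transaction, attempt the CREATE unconditionally, and treat SQL Server's "there is already an object named ..." error (code 2714) as success, so concurrent instances never block each other. A hedged JDBC sketch:

import java.sql.*;

public class SchemaInit {
    // Create the table without an "if not exists" check-then-act race:
    // let the database arbitrate, and swallow only the duplicate error.
    public static void ensureXypTable(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            st.executeUpdate("CREATE TABLE xyp (id varchar(1), name varchar(5))");
        } catch (SQLException e) {
            if (e.getErrorCode() != 2714) { // 2714 = object already exists
                throw e; // any other failure is real
            }
        }
    }
}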
I have an update query which I am trying to execute through the batchUpdate method of Spring's JDBC template. This update query can potentially match thousands of rows in the EVENT_DYNAMIC_ATTRIBUTE table that need to be updated. Will updating thousands of rows in a table cause any issue in a production database apart from a timeout? For example, will it crash the database or slow down the entire database engine for other connections, etc.?
Is there a better way to achieve this instead of firing a single update query through the Spring JDBC template or JPA? I have the following settings for the JDBC template.
this.jdbc = new JdbcTemplate(ds);
jdbc.setFetchSize(1000);
jdbc.setQueryTimeout(0); // zero means there is no limit
The update query:
UPDATE EVENT_DYNAMIC_ATTRIBUTE eda
SET eda.ATTRIBUTE_VALUE = 'claim',
eda.LAST_UPDATED_DATE = SYSDATE,
eda.LAST_UPDATED_BY = 'superUsers'
WHERE eda.DYNAMIC_ATTRIBUTE_NAME_ID = 4002
AND eda.EVENT_ID IN
(WITH category_data
AS ( SELECT c.CATEGORY_ID
FROM CATEGORY c
START WITH CATEGORY_ID = 495984
CONNECT BY PARENT_ID = PRIOR CATEGORY_ID)
SELECT event_id
FROM event e
WHERE EXISTS
(SELECT 't'
FROM category_data cd
WHERE cd.CATEGORY_ID = e.PRIMARY_CATEGORY_ID))
If it is a one-time thing, I normally first select the records that need to be updated and put them in a temporary table or a CSV, making sure to save the primary keys of those records in the table or CSV. Then I read records in batches from the temporary table or CSV and update the main table using the primary keys. This way tables are not locked for a long time, each batch contains a fixed set of records that need updating, and since the updates are done by primary key they are very fast. And if any update fails, you know which records failed by logging the failed records' primary keys to a log file or an error table. I have followed this approach many times for updating millions of records in a PROD database, as it is a very safe approach.
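A hedged sketch of that approach with Spring's JdbcTemplate, assuming EVENT_DYNAMIC_ATTRIBUTE has a single-column primary key named ID (adjust to your schema); the hierarchical IN subquery is the one from the original query above, left as a comment placeholder here:

import java.util.ArrayList;
import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;

public class AttributeUpdater {
    private static final int BATCH_SIZE = 1000;
    private final JdbcTemplate jdbc;

    public AttributeUpdater(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    public void updateInBatches() {
        // Step 1: capture the primary keys of the rows to update, using
        // the same filter (the hierarchical subquery goes where noted).
        List<Long> ids = jdbc.queryForList(
            "SELECT eda.ID FROM EVENT_DYNAMIC_ATTRIBUTE eda "
                + "WHERE eda.DYNAMIC_ATTRIBUTE_NAME_ID = 4002 "
                + "AND eda.EVENT_ID IN (/* hierarchical subquery from above */)",
            Long.class);

        // Step 2: update in fixed-size batches keyed by primary key, so
        // each statement touches few rows and locks are held briefly.
        String update = "UPDATE EVENT_DYNAMIC_ATTRIBUTE "
            + "SET ATTRIBUTE_VALUE = 'claim', LAST_UPDATED_DATE = SYSDATE, "
            + "LAST_UPDATED_BY = 'superUsers' WHERE ID = ?";
        for (int from = 0; from < ids.size(); from += BATCH_SIZE) {
            List<Object[]> args = new ArrayList<>();
            for (Long id : ids.subList(from, Math.min(from + BATCH_SIZE, ids.size()))) {
                args.add(new Object[] { id });
            }
            jdbc.batchUpdate(update, args);
        }
    }
}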
My JDeveloper version: 11.1.1.7
In our ADF application we have a requirement to upload heavy CSV files (10k-100k rows), process/validate each row, and update the table with the process/validation statuses.
The update happens for each row by applying a view criteria with the primary key as a bind variable and committing each updated row.
All of the above processing happens concurrently using java.util.concurrent utilities.
Everything works fine, but a few rows encounter oracle.jbo.JboException: JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[254 ].
I have tried updating the table at the end of the whole executor process and committing all updated rows in a batch, which works fine, but this contradicts one of the requirements: the user has to wait until the end of the process to see the number of updated records in the UI.
My queries:
1. How can I implement a thread-safe DB commit operation in ADF in such a scenario?
2. Each processed/validated row should be committed to the DB so that the updated records can be viewed in the UI by the user.
After every commit operation, call executeQuery() or closeRowSet() on your view object.
E.g.:
public void closemaster() {
    this.getMasterView().closeRowSet();
}
or you can use:
public void closemaster() {
    this.getMasterView().executeQuery();
}
Both approaches will work.
I think your problem will be solved.
Update with what happens.
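For example, a hedged usage sketch (assuming the per-row commit happens in application-module code with access to the view object):

// After each row-level commit, refresh the view object so the next
// update works against current row versions instead of stale ones.
getDBTransaction().commit();
closemaster();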