I am currently investigating a problem with my DB and it seems very strange to me. I have a table with two columns, status and fighter_name, and a unique constraint idx_status_fighter: I cannot have two fighters with the same name and ACTIVE status. I have 2 records in the DB: "Zed" with status ACTIVE and "Zed" with status DELETED. In a transactional method I first set the status of the active Zed to DELETED and then the status of the deleted Zed to ACTIVE, and Spring tells me that I have a constraint violation on idx_status_fighter. I really cannot find any information about that.
Edit: As far as I know, @Transactional commits to the DB after the whole method ends without errors. How can I instruct it which statements to send to the DB first?
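One likely explanation (an assumption; confirm it by logging the SQL) is that Hibernate flushes UPDATE statements in its own order, not in the order the setters were called, while the unique index is checked per statement. A plain-Java simulation of per-statement constraint checking shows why the statement order matters:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FlushOrderDemo {
    // the "unique index": at most one ACTIVE row per fighter name
    static boolean violates(List<String[]> rows) {
        Set<String> activeNames = new HashSet<>();
        for (String[] row : rows)
            if ("ACTIVE".equals(row[1]) && !activeNames.add(row[0])) return true;
        return false;
    }

    public static void main(String[] args) {
        String[] activeZed  = {"Zed", "ACTIVE"};
        String[] deletedZed = {"Zed", "DELETED"};
        List<String[]> table = new ArrayList<>(List.of(activeZed, deletedZed));

        // intended order: deactivate first, then activate -- no statement ever violates
        activeZed[1] = "DELETED";
        System.out.println("after deactivate: violation=" + violates(table)); // false
        deletedZed[1] = "ACTIVE";
        System.out.println("after activate:   violation=" + violates(table)); // false

        // flipped order (what the ORM may actually flush): activating first
        // leaves two ACTIVE "Zed" rows for one statement -> violation
        activeZed[1] = "ACTIVE";
        deletedZed[1] = "DELETED";
        deletedZed[1] = "ACTIVE";
        System.out.println("flipped order:    violation=" + violates(table)); // true
    }
}
```

In JPA terms, calling entityManager.flush() (or saveAndFlush with Spring Data) after the first status change would force the first UPDATE to be sent before the second is queued; whether this is the actual cause here should be verified against your SQL log.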
Context:
Spring Boot application with Spring JPA and MS SQL DB.
User registration process with insuranceNumber
insuranceNumber is not globally unique in the DB; it must only be unique within certain statuses (PROSPECTIVE, PARTICIPANT)
there is a duplication check in a service to enforce this
REST Controller ->
RegistrationService (@Transactional)
- do duplicate check (select * from customer where status in (PROSPECTIVE,PARTICIPANT) and insuranceNumber = XYZ -> no results good)
- insert new customer to DB
Issue:
When tested, the duplication check works, but sometimes I get duplicates of insuranceNumber in PROSPECTIVE status
Assumption:
due to multiple REST requests in a short time I have multiple threads (let's assume 3)
Thread 1 "duplicate check" - all fine
Thread 2 "duplicate check" - all fine (Thread 1 is not committed yet)
Thread 1 inserts into DB, commits TX
Thread 2 inserts into DB, commits TX ## the issue: now there are customers with the same insurance number in the same status
Thread 3 "duplicate check" - fails - as expected
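The window described above can be replayed deterministically in plain Java (an in-memory list stands in for the CUSTOMER table, which has no unique index):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CheckThenActRace {
    // stand-in for the CUSTOMER table; no unique index, so duplicates are possible
    static final List<String> customerTable = new ArrayList<>();

    static boolean duplicateCheck(String insuranceNumber) {
        return customerTable.contains(insuranceNumber);
    }

    static void insert(String insuranceNumber) {
        customerTable.add(insuranceNumber);
    }

    public static void main(String[] args) {
        // the interleaving from the assumption, replayed step by step:
        boolean t1SeesDuplicate = duplicateCheck("XYZ"); // Thread 1: all fine
        boolean t2SeesDuplicate = duplicateCheck("XYZ"); // Thread 2: all fine (T1 not committed)
        insert("XYZ");                                   // Thread 1 inserts, commits
        insert("XYZ");                                   // Thread 2 inserts, commits
        System.out.println("T1 saw duplicate: " + t1SeesDuplicate); // false
        System.out.println("T2 saw duplicate: " + t2SeesDuplicate); // false
        System.out.println("rows with XYZ: " + Collections.frequency(customerTable, "XYZ")); // 2
    }
}
```

Because the check and the insert are not one atomic step, the window exists regardless of how fast the check runs; it can only be closed by making the two steps atomic somewhere (in the DB, or behind a lock).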
Possible Solutions:
Frontend: prevent these multiple requests. Out of scope; I want to be safe on the backend.
DB: create something on the DB side (a database trigger) to do the same duplication check again. Feels wrong, as it duplicates the logic of the duplication check. It would also raise a different exception than one raised in Java.
Java code: RegistrationService with a synchronized method. That would slow down registration for everybody; it would be enough for me if only one thread per insurance number were allowed to enter the registration method.
Are there more ideas?
Play around with isolation levels on the DB?
Prevent entering the registration method if another thread has already entered it with the same insurance number?
The only reliable approach to prevent duplicates in the DB is to create a unique index; in your particular case that should be a filtered unique index:
CREATE UNIQUE NONCLUSTERED INDEX CUSTOMERS_UK
ON CUSTOMERS(insuranceNumber)
WHERE status IN ('PROSPECTIVE','PARTICIPANT')
Other options are:
application locks in MS SQL (sp_getapplock)
locking by a key in Java; however, that won't work in the case of a multiple-instance deployment
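For the single-JVM "lock by a key" option, here is a minimal sketch (the class and method names are mine, not from any library) that keeps one ReentrantLock per insurance number; as noted, it does not help across multiple instances, so the filtered unique index should remain the backstop:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

public class KeyedLocks {
    private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    // serializes callers that use the same key; different keys proceed in parallel
    public <T> T withLock(String key, Supplier<T> action) {
        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        try {
            return action.get();
        } finally {
            lock.unlock();
        }
    }
}
```

A registration service could wrap its check-then-insert in withLock(insuranceNumber, ...). Note that the map grows by one entry per key ever seen; evicting unused locks without reintroducing the race needs extra care.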
We are implementing connection and flush retry logic for the database, with auto-commit=true:
RetryPolicy retryPolicy = new RetryPolicy()
.retryOn(DataAccessException.class)
.withMaxRetries(maxRetry)
.withDelay(retryInterval, TimeUnit.SECONDS);
result = Failsafe.with(retryPolicy)
.onFailure(throwable -> LOG.warn("Flush failure, will not retry. {} {}"
, throwable.getClass().getName(), throwable.getMessage()))
.onRetry(throwable -> LOG.warn("Flush failure, will retry. {} {}"
, throwable.getClass().getName(), throwable.getMessage()))
.get(cntx -> {
return batch.execute();
});
We test failures while storing, updating, inserting, and deleting records by stopping the MS SQL DB service in the backend. At some point, even though we get an org.jooq.exception.DataAccessException, some of the records in the batch (a subset of the batch) have already been loaded into the DB.
Is there any way to find the failed and the successfully loaded records using the jOOQ API?
The jOOQ API cannot help you here out of the box, because such functionality is definitely out of scope for the relatively low-level jOOQ API, which helps you write type-safe embedded SQL. It does not make any assumptions about your business logic or infrastructure logic.
Ideally, you will run your own diagnostic here. For example, you already have a BATCHID column which should make it possible to detect which records were inserted/updated with which process. When you re-run the batch, you need to detect that you've already attempted this batch, remember the previous BATCHID, and fetch the IDs of the previous attempt to do whatever needs to be done prior to a re-run.
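The re-run diagnostic described above boils down to a set difference: the IDs you attempted minus the IDs actually present in the DB for the previous BATCHID. A sketch of that logic (the table and column names in the comment are assumptions):

```java
import java.util.Set;
import java.util.TreeSet;

public class BatchDiagnostics {
    // failed = attempted - loaded
    static Set<Long> failedIds(Set<Long> attempted, Set<Long> loaded) {
        Set<Long> failed = new TreeSet<>(attempted);
        failed.removeAll(loaded);
        return failed;
    }

    public static void main(String[] args) {
        Set<Long> attempted = Set.of(1L, 2L, 3L, 4L, 5L);
        // e.g. the result of: SELECT ID FROM MY_TABLE WHERE BATCHID = :previousBatchId
        Set<Long> loaded = Set.of(1L, 2L, 4L);
        System.out.println("to retry: " + failedIds(attempted, loaded)); // to retry: [3, 5]
    }
}
```

The "loaded" set would be fetched with an ordinary query (in jOOQ or plain SQL); the diagnostic itself is application logic, which is exactly the point of the answer.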
I've written a task scheduler in Java that calls a method every minute. The application is deployed to a SIT server that has 2 instances running. Let me describe the scenario I have built.
<task:scheduled-tasks scheduler="myScheduler">
<task:scheduled ref="myBean" method="takeLunch" fixed-delay="60000" />
</task:scheduled-tasks>
<task:scheduler id="myScheduler"/>
The flow is
1. Get the employees who are ready to take lunch. This is the eligibility condition.
SELECT EMP_ID FROM EMPLOYEES WHERE WORK_STATUS='COMPLETED'
(Can there be a deadlock here because both instances try to fire the query at the same time?)
2. I've another table called "LUNCH_STATUS" where I will keep track of their lunch.
INSERT INTO LUNCH_STATUS(EMP_ID,STATUS) .....
Here all employee IDs will be inserted with an empty status.
3. I will get the first employee from LUNCH_STATUS whose status is empty and update the status in the same table to "LUNCH IN PROGRESS".
4. While taking lunch there is some business logic; once the lunch is done, I will update the status to "COMPLETED".
UPDATE LUNCH_STATUS SET STATUS='COMPLETED' WHERE EMP_ID = ?
5. Once this update is done, I should update the main table EMPLOYEES
UPDATE EMPLOYEES SET WORK_STATUS='WORK RESUMED' WHERE EMP_ID=?
This works fine when I run it on my local machine, but sometimes not on the SIT server.
The problem is that sometimes, when multiple employees are eligible to take their lunch, the application does not update the status to "COMPLETED" even though the process is done. Somewhere the record is getting locked. Any ideas what steps I should have considered?
I'm using the @Transactional annotation with the isolation property set to SERIALIZABLE for all these DAO methods (INSERT, SELECT & UPDATE).
Please guide me on which locking mechanism to use, or whether the flow should be redesigned with respect to isolation.
You have to run a cluster-aware Quartz scheduler backed by the database, so that only one instance fires each job. For an example, see https://github.com/faizakram/Application
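In clustered mode, Quartz instances coordinate through shared database tables, so each trigger fires on exactly one node. A minimal quartz.properties sketch (values are illustrative; the data-source name is an assumption, and the JDBC job-store tables must already exist in the shared DB):

```properties
org.quartz.scheduler.instanceId = AUTO
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource = myDS
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
```

With this in place, the plain Spring `<task:scheduled>` setup above would be replaced by Quartz-managed jobs, since Spring's default scheduler has no cross-instance coordination.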
We have a table in Postgres that we use for our system configurations, from which I want to delete configurations with certain IDs. The problem is that just before the deletion, some code fetches some of the configuration entries, and in the next step (the deletion) I can't delete them because the rows are locked. I've checked the pg_locks table, and each configuration retrieval's transaction stays in Postgres for around a minute, in status "idle in transaction", while the deletion step waits on it.
This is how we retrieve the configuration entities
Query query = getEntityManager().createQuery("some query");
...
list = query.getResultList();
There's no explicit transaction involved; we don't add any transactional JTA or EJB annotations. I guess one is added automatically, and it is after this query that the rows get locked.
And this is how we try to delete the rows
Query query = getEntityManager().createNamedQuery(namedQuery);
query.executeUpdate();
That's it. Nothing special, yet since some rows are locked, the query fails waiting for the transaction to finish. Any ideas?
I've tried seemingly everything: setting Hibernate's autocommit mode to true, setting the lock type to pessimistic read/write when executing the queries, sending query hints with maximum transaction times, executing the queries in separate JTA transactions, executing them in the same transaction; nothing seems to work, and the delete query always hangs.
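Before changing anything on the JPA side, it may help to confirm which session is holding things up. The offending connection shows up in pg_stat_activity with state "idle in transaction" (a transaction that was opened for the read and never committed); recent PostgreSQL versions (9.6+) can also abort such sessions automatically as a safety net:

```sql
-- which sessions are sitting in an open transaction, and what they last ran
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE state = 'idle in transaction';

-- safety net, PostgreSQL 9.6+: abort sessions idle in a transaction for over 60 s
ALTER SYSTEM SET idle_in_transaction_session_timeout = '60s';
SELECT pg_reload_conf();
```

The timeout is a mitigation, not a fix; the real fix is still to make the reading code commit or close its transaction (for example, by giving the fetch method an explicit transaction boundary) before the delete runs.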
I am using Hibernate Search (4.5.1) with Lucene in my application in a cloud environment. A separate Hibernate configuration is maintained for each tenant; all properties are the same except hibernate.search.default.indexBase, since each tenant has a separate filesystem location (e.g. d:/dbindex/tenant1/, d:/dbindex/tenant2/). While starting the application, I index some table data for each tenant by calling Search.getFullTextSession(session).createIndexer().startAndWait(). For the first tenant everything is fine and the index is built perfectly. For the second tenant, startAndWait() sometimes completes and sometimes never returns. After some serious debugging I found that BatchIndexingWorkspace has two threads, a producer and a consumer, where the producer takes the list of IDs from the DB and puts them in a queue, and the consumer takes them and indexes them. On the producer side (IdentifierProducer), a method named inTransactionWrapper has the statements
Transaction transaction = Helper.getTransactionAndMarkForJoin( session );
transaction.begin();
The statement transaction.begin() hangs and the transaction never begins, so the producer produces nothing, the consumer indexes nothing, and startAndWait() freezes. After a long search, some posts say that a small connection pool size can cause a deadlock, but I am using BoneCPConnectionProvider with maxConnectionsPerPartition set to 50 (per tenant). I monitored active connections during startup and the count never exceeded 10, so more connections were available. I don't know what the problem is.