I've written a task scheduler in Java that calls a method every one minute. This application is deployed to a SIT server which has 2 instances running on it. Let me describe the scenario I have built.
<task:scheduled-tasks scheduler="myScheduler">
<task:scheduled ref="myBean" method="takeLunch" fixed-delay="60000" />
</task:scheduled-tasks>
<task:scheduler id="myScheduler"/>
The flow is:
1. Get the employees who are ready to take lunch. This is the eligibility condition.
SELECT EMP_ID FROM EMPLOYEES WHERE WORK_STATUS='COMPLETED'
(Can there be a deadlock here because both instances try to fire the query at the same time?)
2. I have another table called "LUNCH_STATUS" where I keep track of their lunch.
INSERT INTO LUNCH_STATUS(EMP_ID,STATUS) .....
Here all employee IDs are inserted with an empty status.
3. I get the first employee from LUNCH_STATUS whose status is empty and update the status in the same table to "LUNCH IN PROGRESS" (see the sketch after this list).
4. While taking lunch there is some business logic; once the lunch is done, I update the status to "COMPLETED":
UPDATE LUNCH_STATUS SET STATUS='COMPLETED' WHERE EMP_ID = ?
5. Once this update is done, I update the main table EMPLOYEES:
UPDATE EMPLOYEES SET WORK_STATUS='WORK RESUMED' WHERE EMP_ID=?
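A minimal sketch of how step 3 could claim an employee atomically with plain JDBC (table and column names are taken from the flow above; connection handling is assumed). The conditional UPDATE guarantees that when both instances race for the same row, only one of them sees an update count of 1:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Returns true only for the instance that wins the race for this employee.
boolean claimEmployee(Connection con, long empId) throws SQLException {
    String claim = "UPDATE LUNCH_STATUS SET STATUS = 'LUNCH IN PROGRESS' "
                 + "WHERE EMP_ID = ? AND STATUS = ''";
    try (PreparedStatement ps = con.prepareStatement(claim)) {
        ps.setLong(1, empId);
        return ps.executeUpdate() == 1; // 0 means the other instance claimed it first
    }
}

Only the instance that gets true proceeds with the lunch business logic; the other simply skips the employee, so no SERIALIZABLE isolation or long-held locks are needed for this step.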
This works fine when I run it on my local machine, but sometimes fails on the SIT server.
The problem is that when multiple employees are eligible to take their lunch, the application sometimes does not update the status to COMPLETED even though the process is done; somewhere the record is getting locked. Any ideas what steps I should have considered?
I'm using the @Transactional annotation with the isolation property set to SERIALIZABLE for all these DAO methods (INSERT, SELECT & UPDATE).
Please guide me: should I go for a locking mechanism, or should the flow be redesigned in terms of how isolation is used?
You need to run the Quartz scheduler in clustered mode, backed by the database, so that only one instance fires the job at a time. For an example with the database, see https://github.com/faizakram/Application
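A minimal sketch of a clustered Quartz setup configured from Java. The data source name "myDS" and its connection settings are assumptions, and the Quartz tables must already exist in the shared database (created with the DDL scripts that ship with Quartz):

import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class ClusteredSchedulerBootstrap {
    public static Scheduler start() throws SchedulerException {
        Properties p = new Properties();
        p.put("org.quartz.scheduler.instanceName", "lunchScheduler");
        p.put("org.quartz.scheduler.instanceId", "AUTO"); // unique id per running instance
        p.put("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        p.put("org.quartz.threadPool.threadCount", "5");
        // JDBC job store: both SIT instances share trigger state in the DB,
        // so each firing is executed by exactly one instance.
        p.put("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        p.put("org.quartz.jobStore.driverDelegateClass",
              "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        p.put("org.quartz.jobStore.isClustered", "true");
        p.put("org.quartz.jobStore.dataSource", "myDS"); // assumed data source name
        // org.quartz.dataSource.myDS.driver / .URL / .user / .password must also
        // be set here for your actual database (omitted in this sketch).
        Scheduler scheduler = new StdSchedulerFactory(p).getScheduler();
        scheduler.start();
        return scheduler;
    }
}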
Context:
Spring Boot application with Spring JPA and MS SQL DB.
User registration process with insuranceNumber
insuranceNumber is not unique in the DB, only for certain statuses (PROSPECTIVE, PARTICIPANT)
there is a duplication check in a service to enforce this
REST Controller ->
RegistrationService (@Transactional)
- do duplicate check (select * from customer where status in (PROSPECTIVE,PARTICIPANT) and insuranceNumber = XYZ -> no results = good)
- insert new customer into DB
Issue:
When tested, the duplication check works, but sometimes I still get duplicates of insuranceNumber in PROSPECTIVE status
Assumption:
due to multiple REST requests in a short time I have multiple threads (let's assume 3)
Thread 1 "duplicate check" - all fine
Thread 2 "duplicate check" - all fine (Thread 1 is not committed yet)
Thread 1 inserts into DB, commits TX
Thread 2 inserts into DB, commits TX ## the issue: now there are customers with the same insurance number in the same status
Thread 3 "duplicate check" - fails - as expected
Possible Solutions:
Frontend: prevent these multiple requests. Out of scope; I want to be safe on the backend.
DB: create something on the DB side (a database trigger) to do the same duplication check again. Feels wrong, as it duplicates the logic of the duplication check. It would also raise a different exception than one raised in Java.
Java code: RegistrationService with a synchronized method. That would slow down registration for everybody. For me it would be enough that only one thread per insurance number is allowed to enter the registration method.
Are there more ideas?
Play around with isolation levels for the DB?
Prevent entering the registration method if another thread has already entered it with the same insurance number?
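For the last idea, a minimal sketch of a per-insurance-number lock (class and method names are hypothetical). Two caveats: it only serializes threads within one JVM, so it does not help across multiple application instances, and the lock must be held until the transaction commits, otherwise the race above reappears:

import java.util.concurrent.ConcurrentHashMap;

public class RegistrationGuard {
    private final ConcurrentHashMap<String, Object> locks = new ConcurrentHashMap<>();

    // Only one thread per insuranceNumber runs checkAndInsertTx at a time (in this JVM).
    public void register(String insuranceNumber, Runnable checkAndInsertTx) {
        Object lock = locks.computeIfAbsent(insuranceNumber, k -> new Object());
        synchronized (lock) {
            // checkAndInsertTx must open AND commit its own transaction here;
            // a surrounding @Transactional that commits only after this block
            // exits would re-introduce the interleaving shown above.
            checkAndInsertTx.run();
        }
    }
}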
The only reliable approach to prevent duplicates in the DB is to create a unique index; in your particular case that should be a filtered unique index:
CREATE UNIQUE NONCLUSTERED INDEX CUSTOMERS_UK
ON CUSTOMERS(insuranceNumber)
WHERE status IN ('PROSPECTIVE','PARTICIPANT')
Other options are:
application locks in MSSQL (sp_getapplock)
locking by a key in Java; however, that won't work in the case of a multi-instance deployment
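With the filtered unique index in place, the service only needs to translate the violation into a domain error. A minimal sketch assuming Spring Data JPA (the repository and the exception type are hypothetical names):

import org.springframework.dao.DataIntegrityViolationException;

public Customer register(Customer customer) {
    try {
        // flush immediately so the index violation surfaces here,
        // not at some later commit point
        return customerRepository.saveAndFlush(customer);
    } catch (DataIntegrityViolationException e) {
        throw new DuplicateInsuranceNumberException(customer.getInsuranceNumber(), e);
    }
}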
I am currently investigating a problem with my DB and it seems very strange to me. I have a table with 2 columns - status and fighter_name - and a constraint idx_status_fighter: I cannot have fighters with the same name and ACTIVE status. I have 2 records in the DB: "Zed" with status ACTIVE and "Zed" with status DELETED. In a transactional method I first set the status of the active Zed to DELETED and then the status of the deleted Zed to ACTIVE, and Spring tells me I have a constraint violation on idx_status_fighter. I really cannot find any information about this.
Edit: As far as I know, @Transactional commits to the DB after the whole method ends without errors. How can I control the order in which the updates are sent to the DB?
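A likely cause is that Hibernate flushes the two UPDATE statements in an order of its own choosing at commit time, so the unique index momentarily sees two ACTIVE "Zed" rows. A minimal sketch of forcing the order with an explicit flush (entity, enum, and service names are assumptions):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.transaction.annotation.Transactional;

public class FighterService {
    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void swapActive(Fighter activeZed, Fighter deletedZed) {
        activeZed.setStatus(Status.DELETED);
        entityManager.flush(); // push the first UPDATE to the DB right now
        deletedZed.setStatus(Status.ACTIVE); // flushed at commit, after the first
    }
}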
I am designing a system in which, when a request comes to the web server, the request data is inserted into the DB and an auto-increment id is generated.
Now there are some x number of tasks to be completed for each request, where each task takes roughly 0-20 sec.
Using the auto-generated id as a reference, the web server publishes a message to a message broker (RabbitMQ) for each task, to be completed by workers running concurrently. When any task is complete, its status is updated in the DB by inserting a row.
I want to update the table (e.g. set the status of the overall request) when all tasks are complete. How should I proceed in my scenario? Is there a way to run a query when all pending tasks are complete?
I have tried checking, after each task completes, whether all other tasks are complete; if so, I update the table, else I do nothing. But this fails if there are 2 tasks remaining and both complete and check at the same time: each sees 1 task still pending, and neither updates the table.
I am using JDBC with a MySQL database at the web server.
Please help.
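A minimal sketch of one way around that race, assuming a hypothetical REQUESTS(ID, TOTAL_TASKS, COMPLETED_TASKS, STATUS) table: each worker bumps a counter atomically, and a conditional UPDATE lets exactly one worker flip the overall status:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

void onTaskComplete(Connection con, long requestId) throws SQLException {
    // Atomic increment: safe even when two workers finish at the same moment.
    try (PreparedStatement inc = con.prepareStatement(
            "UPDATE REQUESTS SET COMPLETED_TASKS = COMPLETED_TASKS + 1 WHERE ID = ?")) {
        inc.setLong(1, requestId);
        inc.executeUpdate();
    }
    // At most one worker matches this WHERE clause and sees an update count of 1.
    try (PreparedStatement fin = con.prepareStatement(
            "UPDATE REQUESTS SET STATUS = 'COMPLETED' "
          + "WHERE ID = ? AND COMPLETED_TASKS = TOTAL_TASKS AND STATUS <> 'COMPLETED'")) {
        fin.setLong(1, requestId);
        if (fin.executeUpdate() == 1) {
            // all tasks are done; run any "request complete" logic here
        }
    }
}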
I have a service which runs using an executor in Java. The main method of that service is as follows:
public void method() {
    List<Record> records = fetchRecordsFromDatabase(); // will get some records from the database
    for (Record record : records) {
        process(record); // process records one by one
    }
}
For example, in my database I have 100 records; after 49 records were processed I stopped my server. When I restart the server, it runs from the beginning again, i.e. from the 1st record.
Is there any possibility to start the service from the 50th record?
Possible solution:
whenever the server starts, check how many records were processed in the previous run by looking into the database (by maintaining a flag). Once I find those records, I can skip them.
Is there any alternative to this, or any framework in Java which can handle this properly? Please correct me if my solution is not correct or a better solution is available.
NOTE: we don't require any transaction management here.
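The flag approach is sound. A minimal sketch, assuming a PROCESSED flag column and hypothetical table/column names; selecting only unprocessed rows makes a restart resume at record 50 automatically. (Spring Batch is a Java framework that provides exactly this kind of restartable, checkpointed processing out of the box.)

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public void method(Connection con) throws SQLException {
    try (PreparedStatement select = con.prepareStatement(
             "SELECT ID, PAYLOAD FROM RECORDS WHERE PROCESSED = 0 ORDER BY ID");
         ResultSet rs = select.executeQuery()) {
        while (rs.next()) {
            processRecord(rs.getLong("ID"), rs.getString("PAYLOAD")); // existing per-record logic
            try (PreparedStatement mark = con.prepareStatement(
                     "UPDATE RECORDS SET PROCESSED = 1 WHERE ID = ?")) {
                mark.setLong(1, rs.getLong("ID"));
                mark.executeUpdate(); // checkpoint: survives a server restart
            }
        }
    }
}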
I am using Hibernate Search with Lucene (4.5.1) in my application in a cloud environment. For each tenant a separate Hibernate configuration is maintained (all the properties are the same except hibernate.search.default.indexBase; each tenant has a separate filesystem location). While starting the application, I index some table data to a unique location for each tenant (e.g. d:/dbindex/tenant1/, d:/dbindex/tenant2/) by calling Search.getFullTextSession(session).createIndexer().startAndWait().
For the first tenant everything is fine and the index is built correctly. For the second tenant, startAndWait() does not complete - sometimes it works, but sometimes it never comes out of startAndWait(). After some serious debugging I found that BatchIndexingWorkspace has two kinds of threads, producer and consumer: the producer takes the list of ids from the DB and puts them in a queue, and the consumer takes them and indexes them. On the producer side (IdentifierProducer), a method named inTransactionWrapper has these statements:
Transaction transaction = Helper.getTransactionAndMarkForJoin( session );
transaction.begin();
The transaction.begin() hangs, the transaction never begins, so the producer produces nothing, the consumer indexes nothing, and startAndWait() freezes. Some posts say that the connection pool size can cause a deadlock, but I am using BoneCPConnectionProvider with maxConnectionsPerPartition set to 50 (per tenant). I monitored the active connections while starting and they never exceed 10; more connections are available. But I don't know what the problem is.
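Since the symptom is a thread waiting while beginning a transaction during mass indexing, one thing worth trying is capping the MassIndexer's parallelism so the per-tenant indexing cannot starve the pool. A minimal sketch against the Hibernate Search 4.x API (the numbers are assumptions to tune):

import org.hibernate.Session;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;

public void rebuildIndex(Session session) throws InterruptedException {
    FullTextSession fullTextSession = Search.getFullTextSession(session);
    fullTextSession.createIndexer()
        .threadsToLoadObjects(4)     // fewer loader threads -> fewer concurrent connections
        .batchSizeToLoadObjects(25)  // smaller batches per loader transaction
        .startAndWait();             // same call as before, now with bounded parallelism
}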