Restart a stopped service after a server restart in the Java executor framework - java

I have a service which runs using an executor in Java. The main method of that service looks like this:

public void method() {
    List<Record> records = fetchRecordsFromDatabase(); // get some records from the database
    for (Record record : records) {
        process(record); // process records one by one
    }
}
For example, my database has 100 records. After 49 records had been processed, I stopped my server. When I restart the server, it runs from the start again, i.e. from the 1st record.
Is there any possibility to start the service from the 50th record?
Possible solution:
Whenever the server starts, check how many records were processed in the previous run by looking into the database (by maintaining a flag). Once I find those records, I can skip them.
Is there an alternative to this, or any framework in Java which can handle it properly? Please correct me if my solution is not correct or a better solution is available.
NOTE: We don't require any transaction management here.
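
A minimal sketch of that flag/checkpoint idea, assuming records have a monotonically increasing ID and a one-row checkpoint table holding the last processed ID (the table, column, and method names here are assumptions, not from the original post):

import java.sql.*;

public class ResumableProcessor {
    private final Connection conn;

    public ResumableProcessor(Connection conn) {
        this.conn = conn;
    }

    public void run() throws SQLException {
        long lastProcessed = readCheckpoint();
        // only fetch records after the checkpoint, so a restart skips finished work
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT id, payload FROM records WHERE id > ? ORDER BY id")) {
            ps.setLong(1, lastProcessed);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    process(rs.getString("payload"));
                    writeCheckpoint(rs.getLong("id")); // persist progress per record
                }
            }
        }
    }

    private long readCheckpoint() throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT last_id FROM checkpoint")) {
            return rs.next() ? rs.getLong(1) : 0L;
        }
    }

    private void writeCheckpoint(long id) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE checkpoint SET last_id = ?")) {
            ps.setLong(1, id);
            ps.executeUpdate();
        }
    }

    private void process(String payload) {
        // domain-specific processing goes here
    }
}

For an off-the-shelf alternative, Spring Batch persists exactly this kind of progress in its job repository, so a restarted job resumes from the last committed chunk instead of the 1st record.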

Related

Locking Mechanism if pod crashes while processing mongodb record

We have a Java/Spring application which runs on EKS pods, and we have records stored in a MongoDB collection.
Each record has a STATUS field: READY, STARTED, COMPLETED.
The application needs to pick the records which are in READY status and update the status to STARTED. Once the processing of a record is completed, the status is updated to COMPLETED.
Once a record is STARTED, it may take a few hours to complete; until then, other pods (other instances of the same app) should not pick this record. If some exception occurs, the app changes the status back to READY so that other pods (or the same pod) can pick the READY record for processing.
Requirement: if a pod crashes while a record is processing (STARTED), i.e. before the status is changed to READY/COMPLETED, another pod should be able to pick this record and start processing again.
We have some solutions in mind but are trying to find the best one. Could you suggest some good approaches?
You can use a shutdown hook from Spring:

@Component
public class Bean1 {

    @PreDestroy
    public void destroy() {
        // handle database change
        System.out.println("Status changed to ready");
    }
}
Beyond that, this kind of job could run better in a messaging architecture, using SQS for example. Instead of using the status column in the database to handle and orchestrate the task, you can publish the messages that need to be consumed (the ones that were in READY state) to an SQS queue and have a pool of workers consuming messages from it. If something crashes, or the pod running one of these workers needs to be reclaimed, the message goes back to SQS and can be consumed by another pod.
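
A shutdown hook does not cover hard crashes, though, which is the stated requirement. A hedged sketch for that case, assuming Spring Data MongoDB and a heartbeat field that the processing pod refreshes periodically (the entity and field names are assumptions): a stale heartbeat makes a STARTED record claimable again.

import java.time.Duration;
import java.time.Instant;
import org.springframework.data.mongodb.core.FindAndModifyOptions;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

public class RecordClaimer {

    // hypothetical entity mapped to the records collection
    public static class TaskRecord {
        public String id;
        public String status;
        public Instant heartbeat;
        public String owner;
    }

    private final MongoTemplate mongoTemplate;
    private final String podId;

    public RecordClaimer(MongoTemplate mongoTemplate, String podId) {
        this.mongoTemplate = mongoTemplate;
        this.podId = podId;
    }

    /** Atomically claim a READY record, or a STARTED one whose owner stopped heartbeating. */
    public TaskRecord claimNext() {
        Instant staleBefore = Instant.now().minus(Duration.ofMinutes(5));
        Query query = new Query(new Criteria().orOperator(
                Criteria.where("status").is("READY"),
                Criteria.where("status").is("STARTED").and("heartbeat").lt(staleBefore)));
        Update update = new Update()
                .set("status", "STARTED")
                .set("owner", podId)
                .set("heartbeat", Instant.now());
        // findAndModify is atomic per document, so two pods cannot claim the same record
        return mongoTemplate.findAndModify(query, update,
                FindAndModifyOptions.options().returnNew(true), TaskRecord.class);
    }
}

The processing pod then refreshes the heartbeat every minute or so; a crash simply stops the refreshes, and the record becomes claimable again after the staleness window, with no shutdown hook required.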

How to find the status of records loaded when we forcefully interrupt batch execution by stopping the MSSQL database

We are implementing connection/flush retry logic for the database.
Auto-commit=true;
RetryPolicy retryPolicy = new RetryPolicy()
        .retryOn(DataAccessException.class)
        .withMaxRetries(maxRetry)
        .withDelay(retryInterval, TimeUnit.SECONDS);

result = Failsafe.with(retryPolicy)
        .onFailure(throwable -> LOG.warn("Flush failure, will not retry. {} {}",
                throwable.getClass().getName(), throwable.getMessage()))
        .onRetry(throwable -> LOG.warn("Flush failure, will retry. {} {}",
                throwable.getClass().getName(), throwable.getMessage()))
        .get(cntx -> batch.execute());
We want to interrupt the storing, updating, inserting, and deleting of records by stopping the MSSQL DB service in the backend. At some point, even if we get an org.jooq.exception.DataAccessException, some of the records in the batch (a subset of the batch) have already been loaded into the DB.
Is there any way to find the failed and the successfully loaded records using the jOOQ API?
The jOOQ API cannot help you here out of the box, because such functionality is definitely out of scope for the relatively low-level jOOQ API, which helps you write type-safe embedded SQL. It does not make any assumptions about your business logic or infrastructure logic.
Ideally, you will run your own diagnostic here. For example, you already have a BATCHID column, which should make it possible to detect which records were inserted/updated by which process. When you re-run the batch, you need to detect that you've already attempted this batch, remember the previous BATCHID, and fetch the IDs of the previous attempt to do whatever needs to be done prior to a re-run.
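
A hedged sketch of that diagnostic, assuming a RECORDS table with ID and BATCHID columns (these names are assumptions, not from the original answer):

import static org.jooq.impl.DSL.field;
import static org.jooq.impl.DSL.table;

import java.util.List;
import org.jooq.DSLContext;

public class BatchDiagnostics {

    /** IDs that a previous (partially failed) batch attempt already loaded. */
    public static List<Long> idsFromPreviousAttempt(DSLContext ctx, long previousBatchId) {
        return ctx.select(field("ID", Long.class))
                  .from(table("RECORDS"))
                  .where(field("BATCHID", Long.class).eq(previousBatchId))
                  .fetch(field("ID", Long.class));
    }
}

Records whose IDs appear here were loaded before the failure; the remainder of the batch can be retried, or the partial rows deleted by BATCHID before a clean re-run.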

Report to database only once from multiple machines

I have a Spring Boot app which has a scheduler that inserts data into a remote database at 2 a.m. every day:
@Scheduled(cron = "0 0 2 * * ?")
public void reportDataToDB() {
    // code omitted
}
The problem is, the app runs on multiple machines, so the database would receive multiple duplicate insertions of data.
What is the idiomatic way to solve this?
We solved such a problem by using a central scheduler. In our case we use Rundeck, which calls a URL on our service (going through the load balancer), which then executes the task (in our case, data cleanup). This way you can make sure that the logic is only executed on one instance of the service.
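
If a central scheduler is not available, a distributed-lock library such as ShedLock is another common approach. A hedged sketch (not from the original answer; it additionally needs @EnableSchedulerLock and a LockProvider bean backed by a shared database):

import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class NightlyReport {

    @Scheduled(cron = "0 0 2 * * ?")
    @SchedulerLock(name = "reportDataToDB", lockAtMostFor = "30m", lockAtLeastFor = "5m")
    public void reportDataToDB() {
        // runs only on the instance that wins the lock; the others skip this execution
    }
}

lockAtLeastFor guards against instances firing at slightly different times due to clock skew; since only the lock holder inserts, the remote database sees the data exactly once.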

How to run update query after all concurrent pending queries have executed in JDBC or MySQL?

I am designing a system in which, when a request comes to the web server, the request data is inserted into the DB and an auto-increment ID is generated.
Now there are some x tasks to be completed for each request, where each task roughly takes 0-20 seconds.
Using the auto-generated ID as a reference, the web server publishes a message to a message broker (RabbitMQ) for each task, to be completed by workers running concurrently. When any task is complete, its status is updated in the DB by inserting a row.
I want to update the table (e.g. set the status of the overall request) when all tasks are complete. How should I proceed in my scenario? Is there a way to run a query once all pending tasks are complete?
I have tried checking, after each task completes, whether all the other tasks are complete, and updating the table only if they are. But this fails if the last 2 tasks complete and run the check at the same time: both see that 1 task is still pending, and neither updates the table.
I am using JDBC with a MySQL database at the web server.
Please help.
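
One way around that race (a sketch added here, not from the original post): keep a pending-task counter on the request row, decrement it atomically as each task finishes, and guard the final status flip so that exactly one worker performs it. The table and column names (requests, pending_tasks) are assumptions.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TaskCompletion {

    /** Called by a worker when one task of the given request finishes. */
    public static void onTaskComplete(Connection conn, long requestId) throws SQLException {
        // atomic decrement: MySQL serializes this row update, so there is no read-then-write race
        try (PreparedStatement dec = conn.prepareStatement(
                "UPDATE requests SET pending_tasks = pending_tasks - 1 WHERE id = ?")) {
            dec.setLong(1, requestId);
            dec.executeUpdate();
        }
        // the status guard lets only one worker succeed, even if two completions race
        try (PreparedStatement fin = conn.prepareStatement(
                "UPDATE requests SET status = 'COMPLETE' " +
                "WHERE id = ? AND pending_tasks = 0 AND status <> 'COMPLETE'")) {
            fin.setLong(1, requestId);
            if (fin.executeUpdate() == 1) {
                // this worker observed the last completion; run any finalization here
            }
        }
    }
}

Even if the last two tasks finish simultaneously, both decrements apply, both workers may attempt the second UPDATE, but only one matches the status guard and returns an update count of 1.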

Hibernate Search (Lucene) hangs while calling startAndWait()

I am using Hibernate Search (Lucene, 4.5.1) in my application in a cloud environment. For each tenant a separate Hibernate configuration is maintained (all the properties are the same except hibernate.search.default.indexBase; each tenant has a separate filesystem location). While starting the application, I added logic to index some table data at a unique location for each tenant (e.g. d:/dbindex/tenant1/, d:/dbindex/tenant2/) by calling Search.getFullTextSession(session).createIndexer().startAndWait(). For the first tenant everything is fine and the index is built perfectly. For the second tenant, startAndWait() sometimes completes and sometimes never does: it simply doesn't come out of startAndWait().
After some serious debugging I found that BatchIndexingWorkspace has two kinds of threads, producer and consumer: the producer takes the list of IDs from the DB and puts them in a queue, and the consumer takes them and indexes them. On the producer side (IdentifierProducer), a method named inTransactionWrapper contains:
Transaction transaction = Helper.getTransactionAndMarkForJoin( session );
transaction.begin();
The statement transaction.begin() hangs and the transaction never begins, so the producer produces nothing, the consumer never indexes, and startAndWait() freezes. After a long search, some posts say the pool size can cause a deadlock. But I am using BoneCPConnectionProvider with maxConnectionsPerPartition set to 50 (per tenant). I monitored the active connections during startup and the count never exceeds 10; more connections are available. I don't know what the problem is.
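
As a debugging aid only, a hedged sketch against the Hibernate Search 4.x MassIndexer API (an assumption-laden illustration, not a confirmed fix): the mass indexer opens its own loader threads, each needing a connection and a transaction, so capping them can help rule out pool or transaction contention when startAndWait() blocks in transaction.begin().

import org.hibernate.Session;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;

public class TenantIndexer {

    /** Rebuild one tenant's index while bounding the indexer's own DB usage. */
    public static void reindex(Session session) throws InterruptedException {
        FullTextSession fullTextSession = Search.getFullTextSession(session);
        fullTextSession.createIndexer()
                .threadsToLoadObjects(2)     // fewer loader threads -> fewer connections/transactions
                .batchSizeToLoadObjects(25)  // entities fetched per loader transaction
                .idFetchSize(150)            // how the IdentifierProducer streams IDs
                .startAndWait();             // blocks until this tenant's indexing completes
    }
}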
