In one of the steps of my Spring Batch job, I'm trying to configure things so that when an ObjectOptimisticLockingFailureException occurs, the step is retried and hopefully the retry succeeds.
@Bean
public Step myStep(StaxEventItemReader<Response> staxEventResponseReader,
                   ItemWriter<Response> itemWriter,
                   ItemProcessor<? super Response, ? extends Response> responseProcessor) {
    return stepBuilderFactory.get("myStep")
            .<Response, Response>chunk(1)
            .reader(staxEventResponseReader)
            .processor(responseProcessor)
            .writer(itemWriter)
            //.faultTolerant().retryLimit(3).retry(Exception.class)
            .build();
}
The logic of the writer for the step is pretty simple: it tries to read a row from the database, and once it finds the row, it updates it. I was able to reproduce the ObjectOptimisticLockingFailureException by setting a breakpoint right after the find method, manually bumping the version column for the row in the database and committing it, then resuming.
However, after uncommenting the retry definition in my step, no retries were attempted. After some debugging, it seems that the Spring retry logic runs inside the chunk's transaction; since the ObjectOptimisticLockingFailureException is thrown not by my code in the writer but by Spring's chunk transaction commit logic, no retries happen at all:
Chunk Transaction Begin
Begin Retry loop in FaultTolerantChunkProcessor.write()
Writer logic in my Step
End Retry loop
Chunk Transaction Commit - Throws ObjectOptimisticLockingFailureException
When I tried to explicitly throw ObjectOptimisticLockingFailureException in my writer, the retry logic worked perfectly as expected. My questions are:
How can I make the retry logic work when the exception is thrown not from my writer code in the step, but at the time the chunk transaction is committed by Spring Batch?
Another weird behavior: when I manually cause the ObjectOptimisticLockingFailureException by bumping the version column in the database, with the retry definition commented out in the step, the final status of the step is FAILED, which is expected. But with the retry definition uncommented, the final status of the step is COMPLETE. Why is that?
How can I make the retry logic work when the exception is thrown not from my writer code in the step, but at the time the chunk transaction is committed by Spring Batch?
There is an open issue for that here: https://github.com/spring-projects/spring-batch/issues/1826. The workaround is to (try to anticipate and) throw any exception that might happen at commit time in the writer. This is what you tried already and confirmed works when you said: "When I tried to explicitly throw ObjectOptimisticLockingFailureException in my writer, the retry logic worked perfectly as expected."
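In practice, one way to anticipate it is to flush the persistence context at the end of the writer so the versioned UPDATE, and therefore the optimistic-lock check, runs inside the retry scope. A rough sketch, assuming a JPA-based writer; the entity lookup, getId() and setStatus() are illustrative, not the actual code from the question:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.batch.item.ItemWriter;
import org.springframework.stereotype.Component;

@Component
public class ResponseItemWriter implements ItemWriter<Response> {

    @PersistenceContext
    private EntityManager entityManager;

    @Override
    public void write(List<? extends Response> items) {
        for (Response item : items) {
            // illustrative "find then update" logic, mirroring what the question describes
            Response existing = entityManager.find(Response.class, item.getId());
            existing.setStatus(item.getStatus());
        }
        // Flushing here forces the versioned UPDATE to execute inside the writer, so the
        // optimistic-locking failure (OptimisticLockException, or ObjectOptimisticLockingFailureException
        // once translated by Spring) surfaces within the retry scope instead of only at chunk commit time.
        entityManager.flush();
    }
}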
Another weird behavior: when I manually cause the ObjectOptimisticLockingFailureException by bumping the version column in the database, with the retry definition commented out in the step, the final status of the step is FAILED, which is expected. But with the retry definition uncommented, the final status of the step is COMPLETE. Why is that?
This is related to the previous issue, but is caused by a different one: https://github.com/spring-projects/spring-batch/issues/1189. That said, it is OK to play with the version field during a debugging session to understand how things work, but I would not recommend changing the version column in your code. Spring Batch relies heavily on this column in its optimistic locking strategy, and user code is not expected to change the values of this column; otherwise unexpected behaviour might happen.
Related
My goal is to validate updates before they are sent to the database. This validation should not roll back or fail the request; it should just skip the failed iteration in my for-loop (logging would be enough) and move on to the next iteration.
I created a custom Hibernate interceptor that validates the request "before the SQL is issued", at the end of the transaction. That's how I understand it.
I compare the current and previous field values in the overridden boolean onFlushDirty method and throw org.hibernate.CallbackException (as declared in the Interceptor interface's method signature) if validation fails.
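Roughly, the interceptor looks like this (a simplified sketch; the class name, the "status" property and the validation rule are just examples, not my real logic):

import java.io.Serializable;
import org.hibernate.CallbackException;
import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

public class ValidationInterceptor extends EmptyInterceptor {

    @Override
    public boolean onFlushDirty(Object entity, Serializable id, Object[] currentState,
                                Object[] previousState, String[] propertyNames, Type[] types)
            throws CallbackException {
        for (int i = 0; i < propertyNames.length; i++) {
            // example rule: reject the update when the "status" transition is not allowed
            if ("status".equals(propertyNames[i])
                    && !isAllowedTransition(previousState[i], currentState[i])) {
                throw new CallbackException("Invalid update for entity with id " + id);
            }
        }
        return false; // the interceptor does not modify the state itself
    }

    private boolean isAllowedTransition(Object previous, Object current) {
        // illustrative check only
        return previous == null || !previous.equals(current);
    }
}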
In my non-transactional service method I have a for-loop that calls another method which can cause this exception. The repository.save() method is transactional, and it throws JpaSystemException instead of CallbackException when validation fails...
So I catch it and don't rethrow anything, just log. That way the failed iteration is skipped and the service method continues its work.
But I need the service method to be transactional so that it can roll back for other exceptions while still skipping the Hibernate interceptor exception when it is caught and finishing the method successfully.
I tried to add
@Transactional(noRollbackFor = JpaSystemException.class)
But it doesn't work for me. I also tried noRollbackFor with other exceptions, and I tried throwing custom exceptions instead of CallbackException. It didn't fix the problem.
What should I do in my situation?
I need to insert a record in my database so another system can read that record. After that, I will write (with my ItemWriter) the response I receive from that system to a CSV file.
My problem is that the other system can't read the record because Spring Batch is transactional. How can I disable that behaviour?
To disable the Spring Batch transaction for the step:
Use the .transactionAttribute(...) method from AbstractTaskletStepBuilder, passing it a DefaultTransactionAttribute.
Build the DefaultTransactionAttribute with the desired transaction propagation behavior.
Example:
@Bean
public Step springBatchStep() {
    return this.stepBuilderFactory.get("springBatchStep")
            ...
            .reader()
            .processor()
            .writer()
            .transactionAttribute(attributesForTransaction())
            ...
            .build();
}

private DefaultTransactionAttribute attributesForTransaction() {
    DefaultTransactionAttribute attribute = new DefaultTransactionAttribute();
    attribute.setPropagationBehavior(Propagation.NOT_SUPPORTED.value());
    attribute.setIsolationLevel(Isolation.DEFAULT.value());
    return attribute;
}
You do not want to disable transaction enforcement; that can open you up to serious data integrity problems. Even a 1% error rate can easily produce tens of thousands, or even tens of millions, of incomplete or bad records, especially if, say, one of the databases you are interacting with or file systems you are writing to becomes unavailable, which over time WILL happen. It would also break the job retry features.
A better option would be to break the process up into multiple steps or jobs so that the transaction boundaries fit your process: one step or job writes out to this other database, and the next one does the reading and writing to the file.
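For example, a rough sketch of the split (assuming the usual JobBuilderFactory and two hypothetical step beans; the names are illustrative):

@Bean
public Job twoStepJob(Step insertRecordStep, Step writeCsvStep) {
    return jobBuilderFactory.get("twoStepJob")
            // step 1 inserts the record; its chunk transactions are committed
            // (and therefore visible to the other system) before step 2 starts
            .start(insertRecordStep)
            // step 2 reads the response and writes the CSV file
            .next(writeCsvStep)
            .build();
}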
Spring Batch is a highly opinionated framework. You can split the step, do your work inside a REQUIRES_NEW transaction, or choose an alternative, less restrictive framework.
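If you go the REQUIRES_NEW route, a minimal sketch (the service, entity and method names are illustrative) is to do the insert in a service method that runs in its own transaction, called from the step:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class RecordPublisher {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void publish(OutboundRecord record) {
        // committed in its own transaction as soon as this method returns,
        // independently of the surrounding chunk transaction
        entityManager.persist(record);
    }
}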
I currently have a Spring Batch process that has a composite skip policy implemented for a few custom exception types. The issue I am now running into is that I don't always just want to skip when I get an exception.
For some database-related exceptions I would like to retry a few times and then, if it still fails, move on and skip the record. Unfortunately I don't see a way to do this.
I tried implementing my own RetryPolicy, but the only option for canRetry is true or false (rather than returning false, I would like to throw my skippable exception).
So am I missing something here, or is this not really functionality that Spring Batch has?
Thanks
From a StepBuilderFactory, you can do this:
stepBuilder.reader(reader).writer(writer).faultTolerant().retryPolicy(retryPolicy).skipPolicy(skipPolicy)
And yes, it works. I had the same issue, and after testing I can see that my items are retried according to my RetryPolicy and then skipped according to my SkipPolicy.
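For instance, a sketch of a fault-tolerant step that combines retry and skip for the same exception type (the item type, the exception choice, the limits, and the stepBuilderFactory usage are illustrative assumptions):

@Bean
public Step retryThenSkipStep(ItemReader<MyItem> reader, ItemWriter<MyItem> writer) {
    Map<Class<? extends Throwable>, Boolean> retryable = new HashMap<>();
    retryable.put(TransientDataAccessException.class, true);
    SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy(3, retryable);

    return stepBuilderFactory.get("retryThenSkipStep")
            .<MyItem, MyItem>chunk(10)
            .reader(reader)
            .writer(writer)
            .faultTolerant()
            .retryPolicy(retryPolicy)                  // retry the failing item up to 3 times...
            .skip(TransientDataAccessException.class)  // ...then skip it once retries are exhausted
            .skipLimit(10)
            .build();
}

A custom SkipPolicy can be plugged in via .skipPolicy(...) instead of .skip()/.skipLimit() if you need more control over which records are skipped.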
I am adding a record to the database using Hibernate, but when I try to fetch the same record a few milliseconds later, it returns 0 results.
This is the flow:
Create a PUT request.
Put the result in the database.
A 202 Accepted response is received.
Then the same controller sends a request to another controller, which tries to update that record.
It returns a failure result.
Environment:
JDK 8
Spring Boot 1.2.5
Hibernate 4.3.11.Final
I tried the following ways (sketched below):
Set session flush mode to ALWAYS and COMMIT.
Manually called session.flush() and session.clear().
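Roughly what the manual flush/clear attempt looked like (a sketch; the session handling and entity are just examples):

Session session = sessionFactory.getCurrentSession();
session.setFlushMode(FlushMode.ALWAYS); // also tried FlushMode.COMMIT
session.saveOrUpdate(record);
session.flush(); // push the INSERT to the database immediately
session.clear(); // detach everything so later reads go back to the database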
Please provide the solution as soon as possible.
I would like to set a timeout on a javax.persistence.TypedQuery.
I've found this easy method:
TypedQuery<Foo> query = ... ;
query.setHint("javax.persistence.query.timeout", 1000);
query.getResultList();
But it seems that it does not work; it's just ignored.
From the PRO JPA 2 book:
"Unfortunately, setting a query timeout is not portable behavior. It
may not be supported by all database platforms nor is it a requirement
to be supported by all persistence providers. Therefore, applications
that want to enable query timeouts must be prepared for three
scenarios.
The first is that the property is silently ignored and has no effect.
The second is that the property is enabled and any select, update, or
delete operation that runs longer than the specified timeout value is
aborted, and a QueryTimeoutException is thrown. This exception may be
handled and will not cause any active transaction to be marked for
rollback.
The third scenario is that the property is enabled, but in doing so
the database forces a transaction rollback when the timeout is
exceeded. In this case, a PersistenceException will be thrown and the
transaction marked for rollback. In general, if enabled the
application should be written to handle the QueryTimeoutException, but
should not fail if the timeout is exceeded and the exception is not
thrown."
Does anyone know of another way to specify a timeout on a TypedQuery?
Or how can I make this "hint" work?
Thanks
EDIT: Oracle 11.2.0.4.0 and PostgreSql 9.2.9
with JPA 2.1 / Hibernate
I know it's late to reply, but we faced a similar problem with Oracle 11g and JPA 2.0, and this hint wasn't working.
Actually, the problem was that we were using it as a @NamedQuery hint and were calling the function inside a @Transactional aspect. As a @NamedQuery gets loaded and compiled at context load time, this timeout was overridden by the transaction timeout.
You can find more info at http://javadeveloperz0ne.blogspot.in/2015/07/why-jpa-hints-on-namedquery-wont-work.html.
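For reference, this is roughly the kind of setup that did not work for us (the entity and query are illustrative): the hint declared on the @NamedQuery ends up being overridden.

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import javax.persistence.QueryHint;

@Entity
@NamedQuery(
        name = "NamedQueryName",
        query = "SELECT f FROM Foo f",
        // compiled at context load time; effectively overridden by the transaction timeout
        hints = @QueryHint(name = "javax.persistence.query.timeout", value = "5000"))
public class Foo {

    @Id
    private Long id;
}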
The solution was fetching the named query again and then applying the timeout:
Query query = entityManager.createNamedQuery("NamedQueryName");
query.setHint("org.hibernate.timeout", "5");
query.getSingleResult();
Hope it helps!
Yes, the hint is ignored and it does not work. You should set the timeout via javax.persistence.query.timeout; please review this question:
Set timeout on EntityManager query
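A minimal sketch of setting that standard hint directly on the query (whether it is honoured still depends on the provider and database, as noted above; the entity and JPQL are illustrative):

TypedQuery<Foo> query = entityManager.createQuery("SELECT f FROM Foo f", Foo.class);
query.setHint("javax.persistence.query.timeout", 5000); // milliseconds
List<Foo> results = query.getResultList();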