We have a table in Postgres that we use for our system configurations, from which I want to delete configurations with certain IDs. The problem is that just before the deletion, we have some code that fetches some of the configuration entries, and in the next step (the deletion) I can't delete them because the rows are locked. I've checked the pg_locks table: each configuration retrieval's transaction stays in Postgres for around a minute, in status "idle in transaction", while the deletion step is waiting on it.
This is how we retrieve the configuration entities:
Query query = getEntityManager().createQuery("some query");
...
list = query.getResultList();
There's no explicit transaction involved; we don't add any transactional JTA or EJB annotations, so I guess one is started automatically, and it's after this query that the rows end up locked.
And this is how we try to delete the rows:
Query query = getEntityManager().createNamedQuery(namedQuery);
query.executeUpdate();
That's it. Nothing special, yet since some rows are locked, the delete query fails while waiting for the other transaction to finish. Any ideas?
I've tried seemingly everything: setting Hibernate's autocommit mode to true, setting the lock mode to pessimistic read/write when executing the queries, sending query hints with maximum transaction times, executing the queries in separate JTA transactions, executing them in the same transaction. Nothing seems to work; the delete query always hangs.
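For reference, the "separate JTA transactions" variant mentioned above corresponds roughly to the sketch below. The injected UserTransaction and the Configuration entity/query are assumptions; the only point is that the reading transaction commits immediately instead of lingering as "idle in transaction".
import javax.annotation.Resource;
import javax.transaction.UserTransaction;

// Sketch only: demarcate the read explicitly so its transaction commits right away
// instead of staying "idle in transaction" while the delete step runs.
@Resource
private UserTransaction utx; // assumed to be injectable in this environment

public List<Configuration> loadConfigurations() throws Exception {
    utx.begin();
    try {
        List<Configuration> list = getEntityManager()
                .createQuery("SELECT c FROM Configuration c", Configuration.class)
                .getResultList();
        utx.commit(); // the reading session holds nothing once the delete step runs
        return list;
    } catch (Exception e) {
        utx.rollback();
        throw e;
    }
}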
Related
I want to fetch a list of Relation records using .fetchInto(RELATION), then iterate over the list and commit each iteration to the database. This doesn't seem to be working for me, because I get Caused by: java.sql.SQLException: Connection is closed while updating a record. I don't have this problem with regular jOOQ queries, and it doesn't seem that any connections are actually closed.
When I use contact.attach(jooq().configuration()) it seems to be working again. How can I prevent it from detaching?
I start and commit a transaction through JPA.em().getTransaction().*.
org.jooq.exception.DataAccessException: SQL [update `Relation` set `Relation`.`organizationName` = ? where `Relation`.`id` = ?]; Connection is closed
at org.jooq_3.15.1.MYSQL.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:2979)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:643)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:349)
at org.jooq.impl.UpdatableRecordImpl.storeMergeOrUpdate0(UpdatableRecordImpl.java:331)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate0(UpdatableRecordImpl.java:228)
at org.jooq.impl.UpdatableRecordImpl.lambda$storeUpdate$1(UpdatableRecordImpl.java:220)
at org.jooq.impl.RecordDelegate.operate(RecordDelegate.java:143)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate(UpdatableRecordImpl.java:219)
at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:156)
at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:151)
at worker.taskprocessors.BulkContactEditWorker.execute(BulkContactEditWorker.java:144)
Example:
var contacts = jooq()
    .selectFrom(
        RELATION
            .join(BULK_CONTACT_EDIT_CONTACTS)
            .on(BULK_CONTACT_EDIT_CONTACTS.CONTACT_ID.eq(RELATION.ID)
                .and(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
                .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())))
    .limit(batchSize)
    .fetchInto(RELATION);
if (!JPA.em().getTransaction().isActive()) {
    JPA.em().getTransaction().begin();
}

for (RelationRecord contact : contacts) {
    contact.attach(jooq().configuration()); // I have to add this line to make it work.
    contact.setOrganizationName("OrganizationName");
    contact.update();

    JPA.em().getTransaction().commit();
    if (!JPA.em().getTransaction().isActive()) {
        JPA.em().getTransaction().begin();
    }
}
What's the problem
You're fetching your jOOQ RelationRecord values outside of a JPA transactional context, so that fetching runs in its own transaction (this is independent of jOOQ). jOOQ will always try to acquire() and release() a connection for every query, and when that happens outside of a transactional context, then the connection will be effectively closed (e.g. returned to the connection pool). You should switch the order of starting a transaction and running the jOOQ query:
// start transaction
// run jOOQ query
// iterate jOOQ results
// run updates
// commit transaction
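A minimal sketch of that ordering, reusing the entity manager and jooq() helper from the question (resource-local JPA transactions, as above):
// Sketch: begin the JPA transaction first, so the jOOQ fetch and the subsequent
// record updates all run on the same transactional connection.
JPA.em().getTransaction().begin();

var contacts = jooq()
    .selectFrom(RELATION
        .join(BULK_CONTACT_EDIT_CONTACTS)
        .on(BULK_CONTACT_EDIT_CONTACTS.CONTACT_ID.eq(RELATION.ID)
            .and(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
            .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())))
    .limit(batchSize)
    .fetchInto(RELATION);

for (RelationRecord contact : contacts) {
    contact.setOrganizationName("OrganizationName");
    contact.update(); // no attach() needed: the record is already attached
}

JPA.em().getTransaction().commit();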
If you're using declarative transactions (e.g. Spring's @Transactional annotation), then you're less likely to run into this problem, as your initial jOOQ query will more likely already be in the same transaction.
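With Spring, the same flow could be wrapped declaratively, for example as in this sketch; fetchUnprocessedContacts() stands in for the query above and is a hypothetical helper:
// Sketch: with @Transactional, the fetch and the updates share the surrounding transaction,
// provided the jOOQ DSLContext is configured to participate in Spring-managed transactions.
@Transactional
public void renameContacts(long bulkContactEditId, int batchSize) {
    for (RelationRecord contact : fetchUnprocessedContacts(bulkContactEditId, batchSize)) {
        contact.setOrganizationName("OrganizationName");
        contact.update();
    }
}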
Why did attaching work
When you explicitly attach the RelationRecord to the configuration, you re-attach a record that was previously attached to the now-closed connection to the new configuration, which holds the transactional connection. I'm assuming that your jooq() method produces a new DSLContext instance, which wraps the currently active DataSource or Connection.
Switch to bulk updates
However, if your example is all there is to your actual logic (i.e. you haven't simplified it for this Stack Overflow question), then why not just run a bulk update? It will be simpler and much faster.
jooq()
    .update(RELATION)
    .set(RELATION.ORGANIZATION_NAME, "OrganizationName")
    .where(RELATION.ID.in(
        select(BULK_CONTACT_EDIT_CONTACTS.CONTACT_ID)
        .from(BULK_CONTACT_EDIT_CONTACTS)
        .where(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
        .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())
    ))
    // Since you're using MySQL, you have native UPDATE .. LIMIT support
    .limit(batchSize)
    .execute();
I'm assuming your actual example is a bit more complex, because you need to set the PROCESSED flag to true somewhere, too, but it's always good to keep this option in mind.
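If setting that flag is indeed the only missing piece, a companion bulk statement in the same transaction might look like the sketch below; note that in practice its predicate has to match exactly the rows touched by the preceding update (for example via an ID list), which this sketch glosses over.
// Sketch only: mark the corresponding join rows as processed.
jooq()
    .update(BULK_CONTACT_EDIT_CONTACTS)
    .set(BULK_CONTACT_EDIT_CONTACTS.PROCESSED, true)
    .where(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
    .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())
    .limit(batchSize)
    .execute();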
I am currently investigating a problem with my DB and it seems very strange to me. I have a table with two columns, status and fighter_name, and a unique constraint idx_status_fighter: I cannot have two fighters with the same name and active status. I have two records in the DB: "Zed" with status ACTIVE and "Zed" with status DELETED. In a transactional method I first set the status of the active Zed to DELETED and then the status of the deleted Zed to ACTIVE, and Spring tells me that I have a constraint violation on idx_status_fighter. I really cannot find any information about that.
Edit: As far as I know, @Transactional commits to the DB only after the whole method ends without errors. How can I instruct it which updates to apply to the DB first?
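For illustration, the scenario described above might look roughly like this sketch; the entity, repository, and enum names are hypothetical:
// Sketch of the described scenario; names are made up for illustration.
@Transactional
public void swapZedStatuses() {
    Fighter activeZed = fighterRepository.findByFighterNameAndStatus("Zed", Status.ACTIVE);
    Fighter deletedZed = fighterRepository.findByFighterNameAndStatus("Zed", Status.DELETED);

    activeZed.setStatus(Status.DELETED);  // step 1, as described
    deletedZed.setStatus(Status.ACTIVE);  // step 2, as described

    // The idx_status_fighter violation is reported when these changes reach the database.
}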
We are implementing connection/flush retry logic for the database. Auto-commit is set to true.
RetryPolicy retryPolicy = new RetryPolicy()
    .retryOn(DataAccessException.class)
    .withMaxRetries(maxRetry)
    .withDelay(retryInterval, TimeUnit.SECONDS);

result = Failsafe.with(retryPolicy)
    .onFailure(throwable -> LOG.warn("Flush failure, will not retry. {} {}",
        throwable.getClass().getName(), throwable.getMessage()))
    .onRetry(throwable -> LOG.warn("Flush failure, will retry. {} {}",
        throwable.getClass().getName(), throwable.getMessage()))
    .get(cntx -> {
        return batch.execute();
    });
We want to interrupt the storing, updating, inserting, and deleting of records by stopping the MSSQL DB service in the backend. At some point, even if we get an org.jooq.exception.DataAccessException, some of the records in the batch (a subset of the batch) have already been loaded into the DB.
Is there any way to find the failed and the successfully loaded records using the jOOQ API?
The jOOQ API cannot help you here out of the box, because such functionality is definitely out of scope for the relatively low-level jOOQ API, which helps you write type-safe embedded SQL. It does not make any assumptions about your business logic or infrastructure logic.
Ideally, you will run your own diagnostic here. For example, you already have a BATCHID column which should make it possible to detect which records were inserted/updated with which process. When you re-run the batch, you need to detect that you've already attempted this batch, remember the previous BATCHID, and fetch the IDs of the previous attempt to do whatever needs to be done prior to a re-run.
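A sketch of such a diagnostic query; the DSLContext (ctx), MY_TABLE and its ID column are assumed names here, only the BATCHID column is taken from the comment above:
// Sketch: fetch the IDs already written in the previous attempt of this batch,
// so a re-run can skip or repair them before retrying.
Set<Long> alreadyLoadedIds = ctx
    .select(MY_TABLE.ID)
    .from(MY_TABLE)
    .where(MY_TABLE.BATCHID.eq(previousBatchId))
    .fetchSet(MY_TABLE.ID);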
On host 'A' I start a transaction by inserting values into a table in the DB. As soon as I insert, I call ProcessBuilder to refresh host 'B', which in turn should load the updated values from the same DB table into its cache. But the values are not getting loaded.
Is there any relation between ProcessBuilder and the transaction? Because the transaction is not yet complete on the host from which I am calling ProcessBuilder.
I tried fetching values from the DB before calling ProcessBuilder on host 'A', and it returns the recently inserted values (the result set has 10 rows), whereas the same select statement on host 'B' returns 9 rows.
"as soon as I insert" red flag.
Yes, there is a relationship between the transaction and processBuilder. If the transaction is not committed, then all other sessions will not be able to see the changes. If you're used to programming in a DB app environment where autocommit is enabled and switch to a DB app environment where autocommit is disabled, then you are likely to run across this kind of problem.
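A minimal sketch of the intended ordering, assuming plain JDBC on host 'A'; the table, columns, and the refresh command are placeholders:
// Sketch: commit on host 'A' before asking host 'B' to reload its cache, so the newly
// inserted row is visible to host 'B''s session.
void insertAndRefresh(Connection connection) throws SQLException, IOException {
    connection.setAutoCommit(false);
    try (PreparedStatement ps = connection.prepareStatement(
            "INSERT INTO config_values (name, value) VALUES (?, ?)")) {
        ps.setString(1, "someName");
        ps.setString(2, "someValue");
        ps.executeUpdate();
    }
    connection.commit();                               // commit first...
    new ProcessBuilder("refresh-host-b.sh").start();   // ...then trigger the refresh
}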
I am using Hibernate Search with Lucene (4.5.1) in my application in a cloud environment. A separate Hibernate configuration is maintained for each tenant (all the properties are the same except hibernate.search.default.indexBase; each tenant has a separate filesystem location). At application startup I index some table data into a unique location per tenant (e.g. d:/dbindex/tenant1/, d:/dbindex/tenant2/) by calling Search.getFullTextSession(session).createIndexer().startAndWait(). For the first tenant everything is fine and the index is built perfectly. For the second tenant, startAndWait() does not complete; sometimes it works, and sometimes it never returns. After some serious debugging I found that BatchIndexingWorkspace has producer and consumer threads, where the producer takes the list of IDs from the DB and puts them in a queue, and the consumer takes them and indexes them. On the producer side (IdentifierProducer), a method named inTransactionWrapper contains the statements
Transaction transaction = Helper.getTransactionAndMarkForJoin( session );
transaction.begin();
The call to transaction.begin() hangs, so the transaction never begins, the producer produces nothing, the consumer indexes nothing, and startAndWait() freezes. After a long search, some posts say that a too-small connection pool can cause a deadlock, but I am using BoneCPConnectionProvider with maxConnectionsPerPartition set to 50 (per tenant). I monitored the active connections during startup and they never exceed 10, so more connections are available. But I don't know what the problem is.
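For reference, the per-tenant startup indexing described above boils down to something like this sketch; how tenants and their SessionFactory instances are obtained is application-specific and assumed here, only the MassIndexer call is from the question:
// Sketch of the per-tenant startup indexing; the tenant lookup is hypothetical.
for (String tenant : tenants) {
    Session session = sessionFactoryFor(tenant).openSession();  // hypothetical per-tenant factory
    try {
        Search.getFullTextSession(session)
              .createIndexer()      // MassIndexer
              .startAndWait();      // the call that never returns for the second tenant
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    } finally {
        session.close();
    }
}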