I have a REST API in Spring Boot using Hikari for connection pooling. Hikari is used with the default configuration (10 connections in the pool, 30-second timeout waiting for a connection). The API itself is very simple:
It first makes a JPA repository query to fetch some data from a Postgres DB. This part takes about 15-20 milliseconds.
It then sends this data to a remote REST API that is slow and can take upwards of 120 seconds.
Once the remote API responds, my API returns the result to the client. A simplified version is shown below.
public ResponseEntity<Analysis> analyseData(int companyId) {
    Company company = companyRepository.findById(companyId).orElseThrow(); // takes ~20 ms
    Analysis analysis = callRemoteRestAPI(company.data); // takes ~120 seconds
    return ResponseEntity.status(200).body(analysis);
}
The code does not have any @Transactional annotations. I find that the JDBC connection is held for the entire duration of my API call (i.e. ~120 s), and hence if we get more than 10 concurrent requests, they time out waiting on the Hikari connection pool (30 s). But strictly speaking, my API does not need the connection after the JPA query is done (step 1 above).
Is there a way to get Spring to release this connection immediately after the query, instead of holding it until the entire API call finishes processing? Can Spring be configured to acquire a connection for every JPA query? That way, if I have multiple JPA queries interspersed with very slow operations, server throughput is not affected and it can handle more than 10 concurrent API requests.
Essentially the problem is caused by Spring's OpenSessionInViewFilter, which "binds a Hibernate Session to the thread for the entire processing of the request". This effectively acquires a connection from the pool when the first JPA query is executed and then holds on to it until the request has been processed.
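If, as here, the request doesn't need lazy loading after the initial query, Open Session in View can simply be switched off. A minimal sketch, assuming Spring Boot's standard property:

```
# application.properties: disable Open Session in View so the
# EntityManager (and its connection) is not bound to the whole request
spring.jpa.open-in-view=false
```

With this off, the connection goes back to the pool when the repository call's (implicit) transaction ends, not when the HTTP response is written.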
This page - https://www.baeldung.com/spring-open-session-in-view - provides a clear and concise explanation of the feature. It has its pros and cons, and opinion currently seems to be divided on its use.
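The arithmetic of the failure can be reproduced without any database at all. The sketch below (my own model, not Spring or Hikari code) treats the pool as a plain Semaphore: 2 permits held for 200 ms stand in for 10 connections held for 120 s, and a 50 ms tryAcquire stands in for Hikari's 30 s connection timeout:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolStarvationDemo {

    // Returns how many "requests" timed out waiting for a pooled connection.
    static int run(int poolSize, int requests) throws InterruptedException {
        Semaphore pool = new Semaphore(poolSize);      // the connection pool
        AtomicInteger timedOut = new AtomicInteger();
        ExecutorService workers = Executors.newFixedThreadPool(requests);

        for (int i = 0; i < requests; i++) {
            workers.submit(() -> {
                try {
                    // Hikari's connectionTimeout, scaled down to 50 ms
                    if (pool.tryAcquire(50, TimeUnit.MILLISECONDS)) {
                        try {
                            Thread.sleep(200);         // connection held across the slow remote call
                        } finally {
                            pool.release();
                        }
                    } else {
                        timedOut.incrementAndGet();    // would surface as a pool timeout exception
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
        return timedOut.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 3 concurrent requests against a pool of 2: the third cannot get
        // a permit within the timeout because both are held for 200 ms.
        System.out.println("timedOut=" + run(2, 3));
    }
}
```

Releasing the connection before the slow call, which is the point of disabling OSIV and keeping transactions short, is the equivalent of moving pool.release() above the sleep; then no request times out.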
I want to fetch a list of Relation records using .fetchInto(RELATION), then iterate over the list and commit each iteration to the database. This doesn't seem to be working for me, because I get Caused by: java.sql.SQLException: Connection is closed while updating the record. I don't have this problem with regular jOOQ queries, and it doesn't seem that any connections are closed.
When I use contact.attach(jooq().configuration()) it seems to work again. How can I prevent the record from detaching?
I start and commit a transaction through JPA.em().getTransaction().*.
org.jooq.exception.DataAccessException: SQL [update `Relation` set `Relation`.`organizationName` = ? where `Relation`.`id` = ?]; Connection is closed
at org.jooq_3.15.1.MYSQL.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:2979)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:643)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:349)
at org.jooq.impl.UpdatableRecordImpl.storeMergeOrUpdate0(UpdatableRecordImpl.java:331)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate0(UpdatableRecordImpl.java:228)
at org.jooq.impl.UpdatableRecordImpl.lambda$storeUpdate$1(UpdatableRecordImpl.java:220)
at org.jooq.impl.RecordDelegate.operate(RecordDelegate.java:143)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate(UpdatableRecordImpl.java:219)
at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:156)
at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:151)
at worker.taskprocessors.BulkContactEditWorker.execute(BulkContactEditWorker.java:144)
Example:
var contacts = jooq()
    .selectFrom(
        RELATION
            .join(BULK_CONTACT_EDIT_CONTACTS)
            .on(
                BULK_CONTACT_EDIT_CONTACTS
                    .CONTACT_ID
                    .eq(RELATION.ID)
                    .and(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
                    .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())))
    .limit(batchSize)
    .fetchInto(RELATION);

if (!JPA.em().getTransaction().isActive()) {
    JPA.em().getTransaction().begin();
}

for (RelationRecord contact : contacts) {
    contact.attach(jooq().configuration()); // I have to add this line to make it work.
    contact.setOrganizationName("OrganizationName");
    contact.update();

    JPA.em().getTransaction().commit();
    if (!JPA.em().getTransaction().isActive()) {
        JPA.em().getTransaction().begin();
    }
}
What's the problem
You're fetching your jOOQ RelationRecord values outside of a JPA transactional context, so the fetch runs in its own transaction (this is independent of jOOQ). jOOQ always tries to acquire() and release() a connection for every query, and when that happens outside of a transactional context, the connection is effectively closed (e.g. returned to the connection pool). You should switch the order of starting the transaction and running the jOOQ query:
// start transaction
// run jOOQ query
// iterate jOOQ results
// run updates
// commit transaction
If you're using declarative transactions (e.g. Spring's @Transactional annotation), then you're less likely to run into this problem, as your initial jOOQ query will more likely already be in the same transaction.
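With jOOQ's programmatic transaction API, the reordering could look roughly like this (a sketch, assuming jooq() returns a pool-backed DSLContext; the select is the same join as above, elided here):

```java
jooq().transaction(configuration -> {
    var ctx = DSL.using(configuration);

    // Fetch INSIDE the transaction, so the records are attached
    // to the transactional configuration from the start.
    var contacts = ctx.selectFrom(...).limit(batchSize).fetchInto(RELATION);

    for (RelationRecord contact : contacts) {
        contact.setOrganizationName("OrganizationName");
        contact.update();   // no explicit attach() needed
    }
});  // commits on success, rolls back on exception
```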
Why did attaching work
When you explicitly attach the RelationRecord to the configuration, you re-attach a record that was attached to the previously closed connection to the new configuration, which holds the transactional connection. I'm assuming that your jooq() method produces a new DSLContext instance, which wraps the currently active DataSource or Connection.
Switch to bulk updates
However, if your example is all there is to your actual logic (i.e. you haven't simplified it for this Stack Overflow question), then why not just run a bulk update? It will be simpler and much faster.
jooq()
    .update(RELATION)
    .set(RELATION.ORGANIZATION_NAME, "OrganizationName")
    .where(RELATION.ID.in(
        select(BULK_CONTACT_EDIT_CONTACTS.CONTACT_ID)
        .from(BULK_CONTACT_EDIT_CONTACTS)
        .where(BULK_CONTACT_EDIT_CONTACTS.BULK_CONTACT_EDIT_ID.eq(bulkContactEditId))
        .and(BULK_CONTACT_EDIT_CONTACTS.PROCESSED.isFalse())
    ))
    // Since you're using MySQL, you have native UPDATE .. LIMIT support
    .limit(batchSize)
    .execute();
I'm assuming your actual example is a bit more complex, because you need to set the PROCESSED flag to true somewhere, too, but it's always good to keep this option in mind.
I have a Spring Boot service and I get an error when it works with the database and REST.
spring.datasource.hikari.maximum-pool-size=5
When the application sends a REST request to another service and that service responds with a delay (e.g. runs into a timeout), HikariCP continues to hold the connection the whole time.
As an example, three threads each take something from the database, send it over REST, and simulate the delay:
Thread.sleep(10000);
restTemplate.exchange(url, HttpMethod.POST, request, String.class);
As a result, I get
HikariPool-1 - Pool stats (total=5, active=3, idle=2, waiting=0)
If there are more than 5 threads, I get an error.
What is the best way to solve this problem: configure the REST call to wait less, or somehow release the HikariCP connection sooner?
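For what it's worth, both knobs exist as Spring Boot Hikari properties; the values below are only illustrative:

```
# fail fast instead of queueing for the default 30 s (milliseconds)
spring.datasource.hikari.connection-timeout=5000
# size the pool for the number of threads that hold a connection
# across the slow REST call
spring.datasource.hikari.maximum-pool-size=10
```

That said, sizing alone only postpones the problem; not holding the connection across the REST call at all removes it.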
Problem Statement
We have been using H2 in embedded mode for a while now. It has a connection pool configured above it. Following is the current pool configuration:
h2.datasource.min-idle=10
h2.datasource.initial-size=10
h2.datasource.max-active=200
h2.datasource.max-age=600000
h2.datasource.max-wait=3000
h2.datasource.min-evictable-idle-time-millis=60000
h2.datasource.remove-abandoned=true
h2.datasource.remove-abandoned-timeout=60
h2.datasource.log-abandoned=true
h2.datasource.abandonWhenPercentageFull=100
H2 config:
spring.h2.console.enabled=true
spring.h2.console.path=/h2
h2.datasource.url=jdbc:h2:file:~/h2/cartdb
h2.server.properties=webAllowOthers
spring.h2.console.settings.web-allow-others=true
h2.datasource.driver-class-name=org.h2.Driver
*skipping username and password properties.
We have verified that the above configuration takes effect by logging the pool properties.
The issue with this setup is that we observe regular (though intermittent) connection pool exhaustion, and once the pool hits the max limit it starts throwing the following exception for some queries.
SqlExceptionHelper.logExceptions(SqlExceptionHelper.java:129) - [http-apr-8080-exec-38] Timeout: Pool empty. Unable to fetch a connection in 3 seconds, none available[size:200; busy:200; idle:0; lastwait:3000].
And thereafter it fails to recover from this state, even after many hours, until we restart the web server (Tomcat in this case).
H2 driver dependency:
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<version>1.4.196</version>
<scope>runtime</scope>
</dependency>
Query Pattern & Throughput
We use H2 to load some data for every request, then execute a few (about 50) SELECT queries, and finally delete the data. This results in a consistent 30k-40k calls per minute (except during off hours) on H2 (according to New Relic monitoring).
Every read operation acquires a new connection and releases it after execution.
EntityManager entityManager = null;
try {
    entityManager = entityManagerFactory.createEntityManager();
    Query query = entityManager.createNativeQuery(sqlQuery);
    query.setParameter("cartId", cartId);
    List<String> resultList = query.getResultList();
    return resultList;
} finally {
    if (null != entityManager) {
        entityManager.close();
    }
}
Observations
After an application restart, pool utilization is minimal until, at some moment, it abruptly shoots up and eventually reaches the max limit. This happens over the course of 1-2 days.
Once the pool hits the maximum connection limit, the borrowed connection count increases at a faster pace than the returned connection count; before that, the two remain very close to one another.
At the same time, the abandoned connection count also starts increasing, along with the abandon logs.
Interestingly, query response times remain the same after pool exhaustion, so this pretty much rules out slow queries.
This issue has happened even at the oddest hours, when traffic is at a minimum, so it has no relation to traffic.
Please guide us in the right direction to solve this issue.
UPDATE
Recently we discovered the following causes in our stack trace when one such incident occurred:
Caused by: org.h2.jdbc.JdbcSQLException: Database may be already in use: null. Possible solutions: close all other connection(s); use the server mode [90020-196]
Caused by: java.lang.IllegalStateException: The file is locked: nio:/root/h2/cartdb.mv.db [1.4.196/7]
Caused by: java.nio.channels.OverlappingFileLockException
So after digging into this, we have decided to move to in-memory mode, as we don't need to persist the data beyond the application's lifetime. As a result, the file lock should not occur, thereby reducing or eradicating this issue.
Will come back and update in either case.
Since the last update on the question:
After observing the performance for quite some time, we have come to the conclusion that using H2 in file mode (embedded) was somehow leading to file lock exceptions periodically (though irregularly).
Since our application does not need to persist data beyond the application's lifetime, we decided to move to pure in-memory mode.
The mystery of the file lock exceptions still remains to be solved, though.
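For reference, the switch is just a URL change; DB_CLOSE_DELAY=-1 (a standard H2 URL setting) keeps the in-memory database alive for the lifetime of the JVM rather than closing it with the last connection:

```
h2.datasource.url=jdbc:h2:mem:cartdb;DB_CLOSE_DELAY=-1
```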
I am using Hibernate Search (4.5.1) with Lucene in my application in a cloud environment. A separate Hibernate configuration is maintained for each tenant (all the properties are the same except hibernate.search.default.indexBase; each tenant has a separate filesystem location). While starting the application, I added logic to index some table data at a unique location per tenant (e.g. d:/dbindex/tenant1/, d:/dbindex/tenant2/) by calling Search.getFullTextSession(session).createIndexer().startAndWait(). For the first tenant everything is fine; the index is built perfectly. For the second tenant, startAndWait() does not complete. Sometimes it works, but often it never comes out of startAndWait(). After some serious debugging I found that BatchIndexingWorkspace has two kinds of threads, a producer and a consumer: the producer takes the list of IDs from the DB and puts them in a queue, and the consumer takes them and indexes them. On the producer side (IdentifierProducer), a method named inTransactionWrapper has the statements:
Transaction transaction = Helper.getTransactionAndMarkForJoin( session );
transaction.begin();
The statement transaction.begin() hangs and the transaction never begins, so the producer produces nothing, the consumer indexes nothing, and startAndWait() freezes. After a long search, some posts suggested that the pool size can cause a deadlock. But I am using BoneCPConnectionProvider with maxConnectionsPerPartition set to 50 (per tenant). I monitored the active connections during startup; the count never exceeds 10, so more connections are available. I don't know what the problem is.
I'm working on a Java project that incorporates a PostgreSQL 9.0 database tier, accessed using JDBC. SQL is wrapped in functions, executed from Java like stored procedures via JDBC.
The database requires a header-detail scheme, with detail records requiring the foreign-key ID to the header. Thus, the header row is written first, then a couple thousand detail records. I need to prevent the user from accessing the header until the details have completed writing.
You may suggest wrapping the entire transaction so that the header record cannot be committed until the detail records have finished writing. However, as you can see below, I've isolated the transactions to calls in Java: write the header, then loop through the details (writing detail rows). Due to the sheer size of the data, it is not feasible to pass all the detail data to one function to perform a single transaction.
My question is: how do I wrap the transaction at the JDBC level, so that the header is not committed until the detail records have finished writing?
The best solution metaphor would be SQL Server's named transactions, where the transaction can be started in the data-access-layer code (outside other transactions) and completed in a later DB call.
The following (simplified) code executes without error, but doesn't resolve the isolation problem:
DatabaseManager mgr = DatabaseManager.getInstance();
Connection conn = mgr.getConnection();
CallableStatement proc = null;

conn.setAutoCommit(false);
proc = conn.prepareCall("BEGIN TRANSACTION");
proc.execute();

// Write header details
writeHeader(....);

for (Fault fault : faultList) {
    writeFault(fault, buno, rsmTime, dnld, faultType, verbose);
}

proc = conn.prepareCall("COMMIT TRANSACTION");
proc.execute();
Your brilliant answer will be much appreciated!
Are you using the same connection for writeHeader and writeFault?
conn.setAutoCommit(false);

headerProc = conn.prepareCall("headerProc...");
headerProc.setString(...);
headerProc.execute();

detailProc = conn.prepareCall("detailProc...");
for (Fault fault : faultList) {
    detailProc.setString(...);
    detailProc.execute();
    detailProc.clearParameters();
}

conn.commit();
And then you should really look at "addBatch" for that detail loop.
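The addBatch variant of the detail loop could look roughly like this (a sketch with the same elided parameters as above); it does one round trip per batch instead of one per row:

```java
detailProc = conn.prepareCall("detailProc...");
for (Fault fault : faultList) {
    detailProc.setString(...);
    detailProc.addBatch();      // queue the row locally
}
detailProc.executeBatch();      // send all detail rows in one go
conn.commit();                  // header and details commit together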
While it seems you've solved your immediate issue, you may want to look into JTA if you're running inside a Java EE container. JTA combined with EJB3.1* lets you do declarative transaction control and greatly simplifies transaction management in my experience.
*Don't worry, EJB3.1 is much simpler and cleaner and less horrid than prior EJB specs.