I am using MS SQL Server, and my program recently started losing the DB connection randomly. I am using a non-XA driver.
The most likely suspect is the asynchronous database logging I added.
The sneaky thing is, I have used a thread pool:
ExecutorService ruleLoggingExecutor = Executors.newFixedThreadPool(10);
and in the finally block of my process I submit a task to the pool that calls down to the addLogs() method.
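Roughly like this (the real arguments differ, but this is the shape; dao is the DAO instance shown below):
ruleLoggingExecutor.submit(new Runnable() {
    @Override
    public void run() {
        // asynchronous write of the rule log
        dao.addLogs(context, key, logs, responseXml);
    }
});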
The code works for hours, days, and then during a totally unrelated query, it will lose the DB connection. I have an inkling that the problem is that two concurrent inserts are being attempted. But I don't know if putting 'synchronized' on the addLogs method would fix it, or if I need transactional code, or what. Any advice?
In the DAO:
private EntityManager getEntityManager(InitialContext context) {
    try {
        // lazily look up the factory; note this null check is not synchronized,
        // so two pool threads may race through it
        if (emf == null) {
            emf = (EntityManagerFactory) context.lookup("java:jboss/persistence/db");
        }
        return emf.createEntityManager();
    } catch (Exception e) {
        logger.error("Error finding EntityManagerFactory in JNDI: " + e.getMessage(), e);
        return null;
    }
}
public void addLogs(InitialContext context, String key, String logs,
        String responseXml) {
    EntityManager em = getEntityManager(context);
    if (em == null) {
        return; // lookup failure was already logged
    }
    try {
        TblRuleLog log = new TblRuleLog();
        log.setAuthKey(key);
        log.setLogMessage(logs);
        log.setDateTime(new Timestamp(new Date().getTime()));
        log.setResponseXml(responseXml);
        em.persist(log);
        em.flush();
    } catch (Exception e) {
        logger.error(e.getMessage(), e);
    } finally {
        em.close();
    }
}
It seems the connection is closed after a timeout, perhaps because the transaction is never committed or rolled back (so locks on the tables/rows are not released).
The manual flushing looks suspicious. I'd wrap the work in entityManager.getTransaction().begin()/commit() and remove the em.flush().
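For example (a sketch, assuming a resource-local persistence unit; with a JTA-managed EntityManager you would join the container-managed transaction instead of calling getTransaction()):
public void addLogs(InitialContext context, String key, String logs,
        String responseXml) {
    EntityManager em = getEntityManager(context);
    if (em == null) {
        return; // lookup failure was already logged
    }
    try {
        em.getTransaction().begin();
        TblRuleLog log = new TblRuleLog();
        log.setAuthKey(key);
        log.setLogMessage(logs);
        log.setDateTime(new Timestamp(new Date().getTime()));
        log.setResponseXml(responseXml);
        em.persist(log);
        em.getTransaction().commit(); // commit flushes automatically
    } catch (Exception e) {
        if (em.getTransaction().isActive()) {
            em.getTransaction().rollback(); // release any locks promptly
        }
        logger.error(e.getMessage(), e);
    } finally {
        em.close();
    }
}
That way every insert commits (or rolls back) immediately, and no lock is left dangling until the pool times the connection out.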
I am working on an application that has about 15 threads running the entire time. We recently started using HikariCP for connection pooling.
These threads are restarted every 24 hours. When the threads are restarted, we explicitly close the Hikari datasource by calling dataSource.close(). Before we started using connection pooling, one connection object was passed around within each thread to all functions. Now, when the datasource is closed and an old connection object had already been passed to a method, that method got an error saying the datasource has already been closed, which makes sense.
To get around this, instead of passing the same connection object around within a thread, we started creating connections inside the methods of our DBUtils class (basically the functions holding the queries).
This is how the run method of a thread in our application looks:
@Override
public void run() {
    consumer.subscribe(this.topics);
    while (!isStopped.get()) {
        try {
            // (assumed) poll for records; the original snippet omitted this line
            ConsumerRecords<Integer, String> records = consumer.poll(100);
            for (ConsumerRecord<Integer, String> record : records) {
                try {
                    /* some code */
                } catch (JsonProcessingException ex) {
                    ex.printStackTrace();
                }
            }
            DBUtils.Messages(LOGGER.getName(), entryExitList);
        } catch (IOException | SQLException exception) {
            this.interrupt();
        }
    }
    // the consumer is closed once the loop exits, not on every iteration
    consumer.close();
}
Now, after starting to use HikariCP, instead of passing a connection object to DBUtils.Messages, we get a connection from the pool inside the method itself, i.e.:
public static final void Messages(String threadName, List<EntryExit> entryExitMessages) throws SQLException {
    Connection connection = DBUtils.getConnection(threadName);
    /* code */
    // note: close() is not in a finally block, so an exception above leaks the connection
    try {
        connection.close();
    } catch (SQLException se) {
    }
}
This is what the getConnection method of DBUtils looks like:
public static synchronized Connection getConnection(String threadName) {
    Connection connection = null;
    try {
        // recreate the pool if it was closed by the 24-hour restart
        if (ds == null || ds.isClosed()) {
            config.setJdbcUrl(getProperty("postgres.url"));
            config.setUsername(getProperty("postgres.username"));
            config.setPassword(getProperty("postgres.password"));
            config.setDriverClassName(getProperty("postgres.driver"));
            config.setMaximumPoolSize(getProperty("postgres.max-pool-size"));
            config.setMetricRegistry(ApplicationUtils.getMetricRegistry());
            config.setConnectionTimeout(getProperty("postgres.connection-timeout"));
            config.setLeakDetectionThreshold(getProperty("postgres.leak-detection-threshold"));
            config.setIdleTimeout(getProperty("postgres.idle-timeout"));
            config.setMaxLifetime(getProperty("postgres.max-lifetime"));
            config.setValidationTimeout(getProperty("postgres.validation-timeout"));
            config.setMinimumIdle(getProperty("postgres.minimum-idle"));
            config.setPoolName("PostgresConnectionPool");
            ds = new HikariDataSource(config);
        }
        connection = ds.getConnection();
    } catch (Exception exception) {
        exception.printStackTrace();
    }
    return connection; // may be null if the pool could not be created
}
But since the call to this method is inside the while loop of each thread, the PostgresConnectionPool.pool.Wait metric keeps increasing. What's the best way to deal with this?
Edit: PostgresConnectionPool is the pool name; PostgresConnectionPool.pool.Wait comes from the Dropwizard metrics integration:
https://github.com/brettwooldridge/HikariCP/wiki/Dropwizard-Metrics
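One way to keep connections from being held longer than needed (a sketch, assuming the pool is created once at startup rather than lazily inside getConnection) is try-with-resources, so each call returns its connection to the pool even if the body throws:
public static final void Messages(String threadName, List<EntryExit> entryExitMessages)
        throws SQLException {
    // the connection is returned to the pool when the block exits, even on error
    try (Connection connection = DBUtils.getConnection(threadName)) {
        /* code */
    }
}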
My problem is that JPA/Hibernate returns true for a call of entityManager.getTransaction().isActive() even when I did not explicitly start a transaction (see code below).
The problem here is that I want to read something from the database, and a SerializationException is OK in this scenario because it just indicates that the persisted object no longer fits the current code and needs to be recalculated. Instead of just returning null, the code below throws the following exception:
Transaction rollback failed.
org.hibernate.TransactionException: Unable to rollback against JDBC Connection
This shows me there must have been a transaction somewhere which I did not start in my code. Here is the read code; its finally block delegates to the EntityManagerCloser shown further below:
final EntityManager entityManager = Persistence.createEntityManagerFactory("test").createEntityManager();
try {
    final TypedQuery<Test> query = entityManager.createQuery("SELECT t FROM test t", Test.class);
    return query.getResultList();
} catch (final PersistenceException e) {
    if (e.getCause() instanceof SerializationException) {
        LOG.debug("Implementation changed meanwhile. That's ok - we return null.");
        return null;
    }
    throw e;
} finally {
    EntityManagerCloser.closeEntityManager(entityManager);
}
And the EntityManagerCloser looks like this:
public final class EntityManagerCloser {

    private static final Logger LOG = LoggerFactory.getLogger(EntityManagerCloser.class);

    public static void closeEntityManager(EntityManager entityManager) {
        if (entityManager.getTransaction().isActive()) {
            try {
                entityManager.getTransaction().rollback();
            } catch (PersistenceException | IllegalStateException e) {
                LOG.error("Transaction rollback failed.", e);
            }
        }
        if (entityManager.isOpen()) {
            try {
                entityManager.close();
            } catch (IllegalStateException e) {
                LOG.error("Closing entity manager failed.", e);
            }
        }
    }
}
The Hibernate docs say "Always use clear transaction boundaries, even for read-only operations". So do I really need to insert a
entityManager.getTransaction().begin();
....
<do read here>
....
entityManager.getTransaction().commit();
around every read operation I perform on the database?
I could implement another closeEntityManager method for read-only operations without the rollback transaction block but I want to understand why there IS a transaction at all. Thanks for any help!
The problem is that each time you call entityManager.getTransaction(), a new transaction object may be created. So it is better to save the transaction reference in a variable, as shown below:
EntityTransaction txn = entityManager.getTransaction();
if (txn.isActive()) {
    try {
        txn.rollback();
    } catch (PersistenceException | IllegalStateException e) {
        LOG.error("Transaction rollback failed.", e);
    }
}
Thanks to Jobin I quickly found the solution to my problem:
I think I need to call entityManager.isJoinedToTransaction() in my closeEntityManager method before calling entityManager.getTransaction().isActive().
This prevents the EntityManagerCloser from creating its own transaction, which I could not roll back later because I never explicitly called transaction.begin() on it.
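With that change, the closeEntityManager method becomes (a sketch of the fix described above; isJoinedToTransaction() requires JPA 2.1):
public static void closeEntityManager(EntityManager entityManager) {
    // only inspect the transaction when the EntityManager is actually joined to one,
    // so getTransaction() is never the first call to touch it
    if (entityManager.isJoinedToTransaction()
            && entityManager.getTransaction().isActive()) {
        try {
            entityManager.getTransaction().rollback();
        } catch (PersistenceException | IllegalStateException e) {
            LOG.error("Transaction rollback failed.", e);
        }
    }
    if (entityManager.isOpen()) {
        try {
            entityManager.close();
        } catch (IllegalStateException e) {
            LOG.error("Closing entity manager failed.", e);
        }
    }
}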
I use an injected EntityManagerFactory for a scheduled operation:
@PersistenceUnit
private EntityManagerFactory entityManagerFactory;

@Scheduled(cron = "0 0/10 * * * *")
private void scheduledOperation() {
    int rows = 0;
    try {
        EntityManager em = entityManagerFactory.createEntityManager();
        em.getTransaction().begin();
        rows = em.createNativeQuery("UPDATE table SET ...").executeUpdate();
        em.getTransaction().commit();
    } catch (Exception ex) {
        logger.error("Exception while scheduledOperation. Details: " + ex.getMessage());
    }
    DateTime now = new DateTime(DateTimeZone.UTC);
    logger.info("Scheduled operation completed. Rows affected: {}. UTC time: {}", rows, now);
}
When the application starts, the scheduled operation runs every 10 minutes. The first several runs work fine, but after some time it fails with this error:
ERROR - ConnectionHandle - Database access problem. Killing off this
connection and all remaining connections in the connection pool. SQL State = 08S01
What happens here? How can I keep the connection alive, or get a working connection for each scheduled run?
That's because you never close the EntityManager, so the associated connections can hang around indefinitely.
Change your code to this instead:
EntityManager em = null;
try {
    em = entityManagerFactory.createEntityManager();
    em.getTransaction().begin();
    rows = em.createNativeQuery("UPDATE table SET ...").executeUpdate();
    em.getTransaction().commit();
} catch (Exception ex) {
    logger.error("Exception while scheduledOperation. Details: " + ex.getMessage());
    // roll back only if a transaction was actually started
    if (em != null && em.getTransaction().isActive()) {
        em.getTransaction().rollback();
    }
} finally {
    if (em != null) {
        em.close();
    }
}
And always call rollback on failure. Don't assume transactions will roll back automatically on error; that behavior is database specific.
Probably this is a stupid or weird question, but I've been facing a strange bug in the app for many weeks and do not know how to solve it.
There are some Quartz jobs that periodically update the database (change the status of orders and so on). There is only Hibernate 3.1.3, without Spring, and all transactions are handled manually in code, with an explicit session.close() in a finally block.
It all worked fine with Hibernate's built-in connection pool. However, after I changed the built-in connection pool to a c3p0 pool, some bugs appeared related to database transaction/session management. There are no exceptions or anything in the log files, so it is hard to tell what exactly the reason was.
Is there any way to make the c3p0 connection pool behave like the built-in pool via configuration? I need only one c3p0 feature: checking for dead connections. Before this pool was introduced there was an idle SQL connection reset issue; to keep connections alive I decided to use c3p0 pooling instead of the built-in pool.
Here is my current c3p0 configuration:
c3p0.min_size=0
c3p0.max_size=100
c3p0.max_statements=0
c3p0.idle_test_period=600
c3p0.testConnectionOnCheckout=true
c3p0.preferredTestQuery=select 1 from dual
c3p0.autoCommitOnClose=true
c3p0.checkoutTimeout=120000
c3p0.acquireIncrement=2
c3p0.privilegeSpawnedThreads=true
c3p0.numHelperThreads=8
Thanks in advance.
The c3p0 version is old (0.9.0.4) because I am on a Java 1.4 environment.
Here is a rough example of how transactions are managed in the code:
SessionFactory sessionFactory = null;
Context ctx = null;
Object obj = null;
ctx = new InitialContext();
obj = ctx.lookup("HibernateSessionFactory");
sessionFactory = (SessionFactory) obj;
Session session = sessionFactory.openSession();
Transaction tx = null;
try {
    tx = session.beginTransaction();
    // some transactional code
    if (!tx.wasRolledBack() && !tx.wasCommitted()) {
        tx.commit();
    }
} catch (Exception ex) {
    if (tx != null && !tx.wasRolledBack() && !tx.wasCommitted()) {
        // if it is a runtime exception then roll back
        if (ex instanceof RuntimeException) {
            logger.log(Level.ERROR, "10001", ex);
            try {
                tx.rollback();
            } catch (RuntimeException rtex) {
                logger.log(Level.ERROR, "10002", rtex);
            }
        } else {
            // attempt to commit on any other exception
            try {
                tx.commit();
            } catch (RuntimeException rtex) {
                logger.log(Level.ERROR, "10004", rtex);
                // if the commit fails then try to roll back
                try {
                    tx.rollback();
                } catch (RuntimeException rtex2) {
                    logger.log(Level.ERROR, "10002", rtex2);
                }
                // then rethrow this exception
                throw rtex;
            }
        }
    }
} finally {
    session.close();
}
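If the only feature needed is dead-connection checking, a stripped-down configuration that keeps just the testing settings from the list above might behave closer to the built-in pool (a sketch; the values are the ones from the question and may need tuning):
c3p0.min_size=0
c3p0.max_size=100
# keep only the dead-connection checking features
c3p0.idle_test_period=600
c3p0.testConnectionOnCheckout=true
c3p0.preferredTestQuery=select 1 from dual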
We have been using annotation-based JPA with Hibernate as the JpaVendorAdapter.
We had to schedule a job to refresh an entire table's data using the Spring scheduler.
The code is as follows:
@Scheduled(fixedDelay = 120000)
public void refreshTable() {
    EntityManager em = null;
    try {
        EntityManagerFactory emf = entityManager.getEntityManagerFactory();
        em = emf.createEntityManager();
        em.getTransaction().begin();
        // do something like delete and refill the table
        // (this block is done this way for the sake of batch-mode operation)
        em.getTransaction().commit();
    } catch (Exception e) {
        logger.error("Failed to refresh table", e);
    } finally {
        if (em != null && em.getTransaction().isActive()) {
            em.getTransaction().rollback();
            logger.info("Failure while refreshing table, rolling back transaction.");
        }
    }
}
This kept building up memory and eventually caused the application to hang.
We added, at the end of the finally block:
if (em != null) {
    em.close();
}
which solved the memory problem.
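Putting the two pieces together, the finally block ends up like this (a sketch of the combined fix):
} finally {
    if (em != null) {
        if (em.getTransaction().isActive()) {
            em.getTransaction().rollback();
            logger.info("Failure while refreshing table, rolling back transaction.");
        }
        // EntityManagers are never closed by the GC; release explicitly
        em.close();
    }
}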
So, why doesn't the EntityManager call close() itself when it is garbage collected?
The JPA connector has various connections and resources associated with it and does not close all of them on its own; waiting for the garbage collector to do so is not a wise approach.
Closing these resources (i.e. the EntityManager and the EntityManagerFactory) as soon as they are no longer needed is the best way to address this issue.
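The factory itself follows the same rule; a typical place to release it is application shutdown (a sketch, assuming emf is the application-wide factory):
// on application shutdown, e.g. from a shutdown hook or context listener
if (emf != null && emf.isOpen()) {
    emf.close(); // closes all resources held by the factory
}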
Hope this helps!
Good luck!