Hibernate open-close SessionFactory on every query vs on app instance

I am currently working on a Java Swing application in NetBeans with Hibernate, guided by this wonderful repo from GitHub.
The example code found there basically urges new programmers to open and close the SessionFactory every time certain queries are executed:
try {
    HibernateSessionFactory.Builder.configureFromDefaultHibernateCfgXml()
            .createSessionFactory();
    new MySqlExample().doSomeDatabaseStuff();
} catch (Throwable th) {
    th.printStackTrace();
} finally {
    HibernateSessionFactory.closeSessionFactory();
}

private void doSomeDatabaseStuff() {
    deleteAllUsers();
    insertUsers();
    countUsers();
    User user = findUser(USER_LOGIN_A);
    LOG.info("User A: " + user);
}
Is this good programming practice? Isn't it more efficient to open the SessionFactory at app startup and close it on the WindowClosing event? What are the drawbacks of each approach?

Using a persistent connection means you will have as many open connections on your database as you have open clients, and you'll have to make sure each one stays open (very often it will be closed if it stays idle for a long time).
On the other hand, executing a query is significantly faster if the connection is already open.
So it really depends on how often your clients use the database. If they use it only rarely, a persistent connection is useless.
Note also that the SessionFactory itself is not a connection: it is a heavyweight, thread-safe object that Hibernate intends to be built once per application, while individual Sessions (and the pooled JDBC connections behind them) are cheap to open and close per unit of work.
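A common way to get the "open once at startup, close at shutdown" behaviour the question asks about is a lazily initialized application-scoped holder. Below is a minimal sketch of that pattern; the `ExpensiveFactory` class is a hypothetical stand-in so the example is self-contained (with Hibernate you would build the real SessionFactory in the same place):

```java
// Stand-in for an expensive-to-build, closeable factory (e.g. Hibernate's SessionFactory).
class ExpensiveFactory implements AutoCloseable {
    static int buildCount = 0;
    ExpensiveFactory() { buildCount++; }
    String openSession() { return "session"; }
    @Override public void close() { }
}

// Initialization-on-demand holder: the factory is built exactly once,
// on first use, and shared by every caller afterwards.
final class FactoryHolder {
    private FactoryHolder() { }
    private static class Lazy {
        static final ExpensiveFactory INSTANCE = new ExpensiveFactory();
    }
    static ExpensiveFactory get() { return Lazy.INSTANCE; }
    // Call once on shutdown, e.g. from a WindowClosing listener or a JVM shutdown hook.
    static void shutdown() { Lazy.INSTANCE.close(); }
}

public class Main {
    public static void main(String[] args) {
        FactoryHolder.get().openSession();
        FactoryHolder.get().openSession();
        System.out.println(ExpensiveFactory.buildCount); // built once despite two uses
        FactoryHolder.shutdown();
    }
}
```

In a Swing app, `FactoryHolder.shutdown()` would be invoked from the WindowClosing event handler, so the factory lives for the whole application run.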

How to fix 'SQLRecoverableException: Closed Connection' in Java

We are working on an ecommerce site built with the Hybris framework, and we currently have an issue with the database connection (I suppose) and no idea how to solve it. It happens only in the production environment and only on the servers that are used by the ESB (2 servers out of a total of 40).
Basically, sometimes (1-3 times/day), we discover sessions waiting on an idle session (SQL*Net message from client). We can only manually kill the holder in order to free these sessions.
All the servers share the same application code and the main difference between ESB and Frontend servers is in the controllers that are called and in the requests count.
ESB Server: 10 requests per minute
Frontend Server: 300 requests per minute
In the application log I found a lot of Closed Connection errors on these 2 servers, and I think this is related to our problem, but I don't actually know why.
In access.log I have this request:
[26/Mar/2019:09:04:39 +0100] "GET /blockorder?orderCode=XXXX&access_token=XXXX HTTP/1.1" 400 122 "-" "AHC/1.0"
and in the console.log I have this:
hybrisHTTP8 2019-03-26 09:04:39,184 ERROR [[10.125.31.2] ] () [de.hybris.platform.jdbcwrapper.ConnectionImpl] error resetting AutoCommit
java.sql.SQLRecoverableException: Closed Connection
at oracle.jdbc.driver.PhysicalConnection.setAutoCommit(PhysicalConnection.java:3763)
at de.hybris.platform.jdbcwrapper.ConnectionImpl.doSetAutoCommit(ConnectionImpl.java:431)
at de.hybris.platform.jdbcwrapper.ConnectionImpl.restoreAutoCommit(ConnectionImpl.java:185)
at de.hybris.platform.jdbcwrapper.ConnectionImpl.unsetTxBound(ConnectionImpl.java:175)
at de.hybris.platform.tx.Transaction.unsetTxBoundConnection(Transaction.java:920)
at de.hybris.platform.tx.Transaction.clearTxBoundConnectionAndNotify(Transaction.java:897)
at de.hybris.platform.tx.Transaction.clearTxBoundConnectionAndNotifyRollback(Transaction.java:887)
at de.hybris.platform.tx.Transaction.rollbackOuter(Transaction.java:1084)
at de.hybris.platform.tx.Transaction.rollback(Transaction.java:1028)
at de.hybris.platform.tx.Transaction.commit(Transaction.java:690)
at de.hybris.platform.tx.Transaction.finishExecute(Transaction.java:1218)
at de.hybris.platform.tx.Transaction.execute(Transaction.java:1205)
at de.hybris.platform.tx.Transaction.execute(Transaction.java:1160)
at de.hybris.platform.jalo.Item.setAllAttributes(Item.java:2082)
at de.hybris.platform.jalo.Item.setAllAttributes(Item.java:2057)
at de.hybris.platform.servicelayer.internal.converter.impl.ItemModelConverter.storeAttributes(ItemModelConverter.java:1503)
at de.hybris.platform.servicelayer.internal.converter.impl.ItemModelConverter.save(ItemModelConverter.java:730)
at de.hybris.platform.servicelayer.internal.model.impl.wrapper.ModelWrapper.save(ModelWrapper.java:336)
at de.hybris.platform.servicelayer.internal.model.impl.ResolvingModelPersister.saveOthers(ResolvingModelPersister.java:64)
at de.hybris.platform.servicelayer.internal.model.impl.ResolvingModelPersister.persist(ResolvingModelPersister.java:49)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.saveViaJalo(DefaultModelService.java:1059)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.doJaloPersistence(DefaultModelService.java:648)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.persistWrappers(DefaultModelService.java:1002)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.performPersistenceOperations(DefaultModelService.java:626)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.saveAllInternal(DefaultModelService.java:620)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.saveAll(DefaultModelService.java:600)
at de.hybris.platform.servicelayer.internal.model.impl.DefaultModelService.save(DefaultModelService.java:548)
at com.test.fulfilment.process.impl.DefaultOrderProcessService.requestForcedOrderCancellation(DefaultOrderProcessService.java:131)
at com.test.application.order.facades.impl.DefaultOrderFacade.forcedOrderCancel(DefaultOrderFacade.java:62)
at com.test.application.controllers.OrderController.blockOrder(OrderController.java:520)
Our pool config is the following:
{
"maxIdle": 90,
"minIdle": 2,
"maxActive": 90,
"maxWait": 10000,
"whenExhaustedAction": 1,
"testOnBorrow": true,
"testOnReturn": true,
"testWhileIdle": true,
"timeBetweenEvictionRunsMillis": 10000,
"numTestsPerEvictionRun": 100,
"minEvictableIdleTimeMillis": 300000,
"softMinEvictableIdleTimeMillis": -1,
"lifo": true
}
Our tomcat config is:
tomcat.generaloptions.JDBC=-Doracle.jdbc.ReadTimeout=60000
tomcat.generaloptions.TIMEOUT=-Dsun.net.client.defaultConnectTimeout\=60000 -Dsun.net.client.defaultReadTimeout\=60000
tomcat.ajp.acceptCount=100
tomcat.ajp.maxThreads=400
tomcat.maxthreads=400
tomcat.minsparethreads=50
tomcat.maxidletime=10000
tomcat.connectiontimeout=120000
tomcat.acceptcount=100
We tried removing oracle.jdbc.ReadTimeout, but then we started to see Closed Connection errors on the other servers as well.
The code that triggers this error is pretty simple (and it works 95% of the time):
@Override
public boolean requestForcedOrderCancellation(final OrderModel order) {
    Transaction.current().begin();
    try {
        modelService.lock(order.getPk());
        modelService.refresh(order);
        order.setForcedCancelled(true);
        modelService.save(order);
        Transaction.current().commit();
        return true;
    } catch (Exception e) {
        LOG.error(e.getMessage(), e);
        Transaction.current().rollback();
        return false;
    }
}
We also tried without explicit locking, and the problem is exactly the same.
It seems the connection is already closed, so we cannot roll back (or commit) the transactions that are still waiting in the DB.
I want to avoid this lock and these closed-connection errors.
Your connection pool is probably already fixing this for you. Try increasing the logging to see whether it does.
Background: Databases hate long-lived connections because they can starve them, so they tend to close a connection after some time. Other culprits are firewalls, which tend to drop idle connections from their tables. Connection pools know how to handle this by testing connections (all those test* options in your config above).
Sometimes you need to tell your pool how to test a connection; check the documentation. For Oracle, a good test is select 1 from dual.
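With a Commons DBCP-style pool (which the test* options above resemble), the test statement is usually supplied as a validation query. The exact property names depend on your pool implementation, so treat this fragment as an assumption to check against its documentation:

```
# Assumed DBCP-style pool properties - verify the exact keys for your pool.
validationQuery=select 1 from dual
testOnBorrow=true
testWhileIdle=true
timeBetweenEvictionRunsMillis=10000
```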
I think your real problem is those stuck sessions. Find out what they are waiting for by looking at a Java thread dump, which you can create using the jstack tool that ships with the JDK.
We found that the issue was due to an uncaught exception/error in transactional code.
The server answered with an error, and Hybris did not roll back the transaction, which stayed open.
The same thread is reused some time later (maybe days), and the old transaction is still open.
When this corrupted thread is then used to lock rows in the database, even if we commit the transaction in code, it is not committed to the database. Internally, Hybris keeps a transaction counter to handle nested transactions (e.g. in called methods): the transaction is only committed/rolled back to the DB when commit/rollback is called while the counter is 1.
Request 1:
Transaction.begin()   // Hybris counter = 1
doSomething()         // throws an exception, method exits, Hybris counter is still 1
try {
    Transaction.commit()
} catch (Exception e) {
    Transaction.rollback();
}
Request 2 on the same thread:
Transaction.begin()   // Hybris counter is now 2
doSomething()         // works fine, Hybris counter is still 2
try {
    Transaction.commit()   // Hybris counter -= 1
    // Not committed to the DB because the Hybris counter is now 1
} catch (Exception e) {
    Transaction.rollback();
}
Request 3 on the same thread:
Transaction.begin()   // Hybris counter is now 2
lockRow()
// The row is locked for the whole transaction (the same one opened in Request 1)
// Everything seems OK
try {
    Transaction.commit()   // Hybris counter -= 1
    // Not committed to the DB because the Hybris counter is now 1
    // The row is still locked
    // Subsequent requests for the same row will wait on the lock forever
} catch (Exception e) {
    Transaction.rollback();
}

Do SQL databases get closed if an app crashes due to other unhandled exceptions?

After reading up on database SQL best practices, I have extracted two try/catches into two separate methods. The only thing I can think of that would create a database leak is an external unhandled exception after calling getReadableDatabase(). So my question is: if an app crashes and restarts, does the app clean itself up, for example by closing this database session? As you can see, I don't fully understand what happens after an app crashes/restarts.
try {
    SQLiteDatabase database = helper.getReadableDatabase();
    doALongDatabaseTask(database);
    database.close();
} catch (SQLException e) {
    Log.w(Util.TAG, "getButtonPairsForSending db cannot be opened " + e);
}

public void doALongDatabaseTask(SQLiteDatabase database) {
    try {
        // Do long database query task
    } catch (Exception e) {
    }
}
In your example method doALongDatabaseTask(SQLiteDatabase database), the exception is handled; it's just that nothing is done with it. So in this case your code would still reach the close() call on the database.
However, if your method were declared to throw an exception:
public void doALongDatabaseTask(SQLiteDatabase database) throws Exception {...}
then your call to close() would be skipped, as control jumps into the catch block in the first part of the example.
Provided you're using a high enough SDK version: if you look at the docs for SQLiteDatabase, you can see that it implements the AutoCloseable interface.
This means you can use try-with-resources, which looks like this:
try (SQLiteDatabase database = helper.getReadableDatabase()) {
    doALongDatabaseTask(database);
} catch (Exception e) {
    e.printStackTrace();
}
This is the equivalent of calling database.close() in a finally block (which you should be doing anyway), but much more readable. In most cases this cleans up the connections when your app crashes, and on restart fresh connections can be obtained.
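The key property of try-with-resources is that close() runs before the catch block, even when the body throws. This can be demonstrated in plain Java (no Android dependencies) with a hypothetical `Resource` class standing in for SQLiteDatabase:

```java
public class Main {
    static boolean closed = false;

    // Stand-in for any AutoCloseable resource, e.g. SQLiteDatabase.
    static class Resource implements AutoCloseable {
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        try (Resource r = new Resource()) {
            throw new IllegalStateException("simulated crash inside the block");
        } catch (IllegalStateException e) {
            // By the time we get here, close() has already run.
        }
        System.out.println(closed);
    }
}
```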

Java, trigger function when oracle database is updated [duplicate]

This question already has answers here:
How to implement a db listener in Java
(5 answers)
Closed 8 years ago.
Currently I'm using JDBC templates to query the database for information. I'm constantly polling the Oracle DB to check whether a particular table has been updated; if it has, I run a function, if not, I wait a bit and poll again.
ReportsDao rDao = new ReportsDao();
while (true) {
    List<ReportRequest> rr = rDao.selectAll();
    for (ReportRequest r : rr) {
        if (!r.getDone()) {
            //do stuff
        }
    }
    try {
        Thread.sleep(10000);
    } catch (InterruptedException ex) {
        Thread.currentThread().interrupt();
    }
}
So my question is, how can I avoid this constant pinging of the database for new information? Is it possible to have a listener sit around that triggers what I want it to do once the table is updated?
When I was in college, I heard that Oracle provided a way to execute Java code on the server side.
This link seems to prove that my memory isn't as bad as I thought.
With this in mind, you could implement a socket listener on your db-client side and have a socket client on the Oracle server send a notification event, triggered by a database event.
As you may find in the documentation linked above, it is possible to register a Java class method as a procedure in the database server.
If you don't want to use sockets, you could try RMI.
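The socket approach above can be sketched end-to-end in plain Java. Here the "database side" is simulated by a thread; in the real setup that code would run inside Oracle as a Java stored procedure fired by a trigger, and the message name `TABLE_UPDATED` is an arbitrary assumption:

```java
import java.io.*;
import java.net.*;

public class Main {
    public static void main(String[] args) throws Exception {
        // Client-side listener: blocks until a notification arrives instead of polling.
        try (ServerSocket listener = new ServerSocket(0)) { // port 0 = any free port
            int port = listener.getLocalPort();

            // Stand-in for the database-side trigger code (would run inside Oracle).
            Thread dbSide = new Thread(() -> {
                try (Socket s = new Socket("localhost", port);
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("TABLE_UPDATED");
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            dbSide.start();

            try (Socket s = listener.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                System.out.println(in.readLine()); // react to the event here
            }
            dbSide.join();
        }
    }
}
```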

Transaction control for MySQL in Swing Desktop Application with Hibernate

I have a Swing desktop application that uses Hibernate. More than one computer can use the application and access the same MySQL database concurrently.
I thought that using a Hibernate Transaction would prevent two different computers from accessing the same data at the same time. Apparently it does not.
The problem I'm facing is similar to this:
//Suppose I'm persisting a product sale
Session session = getSession();
Transaction transaction = session.getTransaction();
try {
    transaction.begin();
    MyProduct product = getMyProduct();
    product.setActualInventory(product.getActualInventory() - amountSold);
    session.merge(product);
    transaction.commit();
} catch (Exception ex) {
    ex.printStackTrace();
    transaction.rollback();
    logger.error(ex);
    throw new DefaultException("Erro ao salvar a venda.");
}
The problem is that sometimes the ActualInventory ends up incorrect, because two machines access it at the same time.
I thought that with this code, when one computer called transaction.begin(), the other would have to wait for the first to finish with transaction.commit().
But now I realize this does not happen, because I have two different computers, each with its own Hibernate instance, that just talk to the same database.
How can I solve this problem? Is there perhaps a way to perform this "transaction control" directly in MySQL?
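One standard remedy for this kind of lost update (not discussed in the post itself) is optimistic locking: each row carries a version number, and an update only takes effect if the version it read is still current, otherwise the client re-reads and retries. Hibernate supports this with a @Version-annotated field; the sketch below shows the underlying idea in self-contained plain Java, with an in-memory `Snapshot` standing in for the versioned row:

```java
import java.util.concurrent.atomic.AtomicReference;

public class Main {
    // Immutable snapshot of a row: inventory value plus its version.
    static final class Snapshot {
        final int inventory, version;
        Snapshot(int inventory, int version) { this.inventory = inventory; this.version = version; }
    }

    static final AtomicReference<Snapshot> row = new AtomicReference<>(new Snapshot(10, 0));

    // Decrement inventory only if nobody changed the row since we read it;
    // retry on conflict, like a versioned "UPDATE ... WHERE version = ?" would.
    static void sell(int amount) {
        while (true) {
            Snapshot read = row.get();
            Snapshot updated = new Snapshot(read.inventory - amount, read.version + 1);
            if (row.compareAndSet(read, updated)) return; // version matched, write wins
            // else: another client committed first - re-read and retry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> sell(3));
        Thread b = new Thread(() -> sell(4));
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(row.get().inventory); // both sales applied: 10 - 3 - 4
    }
}
```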

Hibernate Delayed Write

I am wondering if there is a possibility of Hibernate delaying its writes to the DB. I have Hibernate configured for MySQL. The scenario I hope to support is 80% reads and 20% writes. So I do not want to optimize my writes; I would rather have the client wait until the DB has been written to than return a bit earlier. My tests currently run 100 clients in parallel, and the CPU does sometimes max out. I need this flush method to write to the DB immediately and return only when the data is written.
On the client side, I send a write request and then a read request, but the read request sometimes returns null. I suspect Hibernate is not writing to the db immediately.
public final ThreadLocal<Session> session = new ThreadLocal<>();

public Session currentSession() {
    Session s = session.get();
    // Open a new Session if this thread has none yet
    if (s == null || !s.isOpen()) {
        s = sessionFactory.openSession();
        // Store it in the ThreadLocal variable
        session.set(s);
    }
    return s;
}

public synchronized void flush(Object dataStore) throws DidNotSaveRequestSomeRandomError {
    Session session = currentSession();
    Transaction txD = session.beginTransaction();
    session.save(dataStore);
    try {
        txD.commit();
    } catch (ConstraintViolationException e) {
        e.printStackTrace();
        throw new DidNotSaveRequestSomeRandomError(dataStore, feedbackManager);
    } catch (TransactionException e) {
        log.debug("txD state isActive " + txD.isActive() + " txD isParticipating " + txD.isParticipating());
        log.debug(e);
    } finally {
        // session.flush();
        session.close();
    }
    // mySession.clear();
}
@Siddharth Hibernate does not really delay its writes, and your code does not suggest otherwise either. I have faced a similar issue before and suspect you might be hitting the same thing: when there are numerous write requests, many threads share the same db session instance, and even with consecutive commits from Hibernate you don't see any changes.
You may also catch this by simply looking at your MySQL logs during the transaction to see what exactly went wrong!
Thanks for your hint. It took me some time to debug; MySQL logs are amazing.
This is what I run to check the timestamps of my inserts against MySQL's writes. MySQL logs all db operations in a binlog, and to read it we use the tool mysqlbinlog. my.cnf also needs to be configured to enable the logs: http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=%2Fcom.ibm.ztpf-ztpfdf.doc_put.cur%2Fgtpm7%2Fm7enablelogs.html
I check which is the latest MySQL binlog file and run this, grepping with 1 line of context around the match, to get the timestamp. Then in Java I call Calendar.getInstance().getTimeInMillis() to compare against that timestamp.
sudo mysqlbinlog mysql/mysql-bin.000004 | grep "mystring" -1
So I debugged my problem: it was a delayed-write problem. I therefore implemented a synchronous write instead of all-async ones; in other words, the server call won't return until the db has been flushed for this object.