Creating multiple transactions in a single Hibernate session - Java

I have created a Quartz job which runs in the background in my JBoss server and is responsible for updating some statistical data at regular intervals (coupled with some database flags).
To load and persist the data I am using Hibernate 4. Everything works fine except for one hiccup.
The entire thread, i.e. the job, is wrapped in a single transaction which over time (as the amount of data increases) becomes huge and worrisome. I am trying to break this single large transaction into multiple small ones, such that each transaction processes only a subgroup of the data.
Problem: I tried, rather naively, to wrap the code in a loop and start/end a transaction at the start/end of the loop. As I expected, it didn't work. I have been looking around various forums to figure out a solution but have not come across anything that explains managing multiple transactions in a single session (where only one transaction is active at a time).
I am relatively new to Hibernate and would appreciate any help that points me in a direction for achieving this.
Update: Adding code to demonstrate what I am trying to achieve and what I mean by breaking the work into multiple transactions, along with the stack trace produced when it is executed.
log.info("Starting Calculation Job.");
List<GroupModel> groups = Collections.emptyList();
DAOFactory hibDaoFactory = null;
try {
hibDaoFactory = DAOFactory.hibernate();
hibDaoFactory.beginTransaction();
OrganizationDao groupDao = hibDaoFactory.getGroupDao();
groups = groupDao.findAll();
hibDaoFactory.commitTransaction();
} catch (Exception ex) {
hibDaoFactory.rollbackTransaction();
log.error("Error in transaction", ex);
}
try {
hibDaoFactory = DAOFactory.hibernate();
StatsDao statsDao = hibDaoFactory.getStatsDao();
StatsScaledValuesDao statsScaledDao = hibDaoFactory.getStatsScaledValuesDao();
for (GroupModel grp : groups) {
try {
hibDaoFactory.beginTransaction();
log.info("Performing computation for Group " + grp.getName() + " ["
+ grp.getId() + "]");
List<Stats> statsDetail = statsDao.loadStatsGroup(grp.getId());
// Coputing Steps here
for (Entry origEntry : statsEntries) {
entry.setCalculatedItem1(origEntry.getCalculatedItem1());
entry.setCalculatedItem2(origEntry.getCalculatedItem2());
entry.setCalculatedItem3(origEntry.getCalculatedItem3());
StatsDetailsScaledValues scValues = entry.getScaledValues();
if (scValues == null) {
scValues = new StatsDetailsScaledValues();
scValues.setId(origEntry.getScrEntryId());
scValues.setValues(origEntry.getScaledValues());
} else {
scValues.setValues(origEntry.getScaledValues());
}
statsScaledDao.makePersistent(scValues);
}
hibDaoFactory.commitTransaction();
} catch (Exception ex) {
hibDaoFactory.rollbackTransaction();
log.error("Error in transaction", ex);
} finally {
}
}
} catch (Exception ex) {
log.error("Error", ex);
} finally {
}
log.info("Job Complete.");
Following is the exception stack trace I am getting when this job is executed:
org.hibernate.SessionException: Session is closed!
at org.hibernate.internal.AbstractSessionImpl.errorIfClosed(AbstractSessionImpl.java:127)
at org.hibernate.internal.SessionImpl.createCriteria(SessionImpl.java:1555)
at sun.reflect.GeneratedMethodAccessor469.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.hibernate.context.internal.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:352)
at $Proxy308.createCriteria(Unknown Source)
at com.blueoptima.cs.dao.impl.hibernate.GenericHibernateDao.findByCriteria(GenericHibernateDao.java:132)
at com.blueoptima.cs.dao.impl.hibernate.ScrStatsManagementHibernateDao.loadStatsEntriesForOrg(ScrStatsManagementHibernateDao.java:22)
... 3 more
To my understanding, from what I have read so far about Hibernate sessions and transactions, it seems that when a session is created it is attached to the thread and lives either for the thread's lifetime or until commit or rollback is called. Thus, when the first transaction is committed the session is closed and is unavailable for the rest of the thread's life.
My question remains: how can we have multiple transactions in a single session?

More detail and some examples would be great, but I think I can help with what you have written here.
Have one static SessionFactory (it is heavy on memory).
For your transactions you want something like this:
SomeClass object = new SomeClass();

Session session = sessionFactory().openSession(); // create the session object
session.beginTransaction();        // begin the transaction
session.save(object);              // queues the save, but REMEMBER it isn't persisted until commit
session.getTransaction().commit(); // actually persists the object
session.close();                   // closes the session
This is how I use my transactions; I am not sure I run as many transactions at a time as you do, but the Session object is lightweight in memory compared to the SessionFactory.
If you want to save several objects at a time you can do it in one transaction, for example:
SomeClass object1 = new SomeClass();
SomeClass object2 = new SomeClass();
SomeClass object3 = new SomeClass();

session.beginTransaction();
session.save(object1);
session.save(object2);
session.save(object3);
session.getTransaction().commit(); // when commit is called it will save all 3 objects
session.close();
Hope this helps in some way or points you in the right direction.
I think you could configure your program to condense transactions as well. :)
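Applied to the job in your question, here is a minimal sketch of the same idea with one short-lived session and transaction per group, so no single transaction grows with the size of the data set. HibernateUtil.getSessionFactory() is an assumed helper and the DAO wiring is left out; this is a sketch, not your actual API:
// Hedged sketch: session-per-chunk instead of one session/transaction for the whole job.
SessionFactory sessionFactory = HibernateUtil.getSessionFactory(); // assumed helper
for (GroupModel grp : groups) {
    Session session = sessionFactory.openSession();
    Transaction tx = null;
    try {
        tx = session.beginTransaction();
        // load this group's stats, run the computation, and persist the results
        // using DAOs bound to this session
        tx.commit();
    } catch (RuntimeException ex) {
        if (tx != null) {
            tx.rollback();
        }
        log.error("Error processing group " + grp.getId(), ex);
    } finally {
        session.close(); // the session lives only for this one chunk of work
    }
}
Because every iteration opens and closes its own session, committing one chunk can no longer leave the rest of the loop with a closed, thread-bound session.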
Edit
Here is a great YouTube tutorial. This guy really broke it down for me.
Hibernate Tutorials

Related

Apache Jena SPARQL query will not abort

I'm having a problem in my Java application where Apache Jena will never stop a SPARQL query until it's finished, even if I explicitly tell it to stop. Here's the code that gets called to run a query:
try {
    Model union = null;
    union = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_RULE_INF);
    if (ontologies != null)
        for (OntModel om : ontologies)
            union = ModelFactory.createUnion(union, om);

    Reasoner reasoner = ReasonerRegistry.getOWLReasoner();
    reasoner = reasoner.bindSchema(union);
    InfModel infmodel = ModelFactory.createInfModel(reasoner, triples);

    query_running = true;
    Query query = QueryFactory.create(query_string);
    query_execution = QueryExecutionFactory.create(query, infmodel);
    ResultSet rs = query_execution.execSelect();
    for ( ; rs.hasNext(); ) {
        // do stuff with results
    }
} catch (Exception e) {}
finally {
    stopQuery();
}
stopQuery() gets called at the end, but the method is also called when the user hits a "cancel" button. Here's what that method looks like:
public void stopQuery() {
    try {
        if (query_execution != null) {
            //query_execution.close();
            query_execution.abort();
            query_execution = null;
            System.out.println("STOPPED");
        }
    } catch (Exception e) { e.printStackTrace(); }
    query_running = false;
}
When the cancel button is hit while a query is running (10+ minutes on a relatively small dataset...?), the method gets called, but the query continues to run in the background. I know it's still running because I can see the application in Task Manager taking up 30%+ CPU until the query presumably completes. I've tried .abort(), .close(), and both at the same time, but I cannot figure out how to stop the query mid-execution.

I've even tried wrapping the query code in a separate thread, but that doesn't work either. It makes sense that threading wouldn't solve the problem, because the thread needs to see the interrupt request but the code is freezing on a particular line. The code seems to freeze on rs.hasNext(), but not on the first check. It runs quickly through the first x results of the ResultSet (which are likely explicit statements it finds easily), but then it seems to freeze for a long while, likely searching for implicit results with the reasoner.

How can I force the query to stop? I don't want to use a timeout -- I want the user to have the option to stop the query or let it play out. This problem is not specific to any one query or dataset.
Thanks.

Result Set closes automatically

I have a problem with my music bot for Discord.
I want to send an embed message when a track is started, but the ResultSet always closes.
So it can't get past the if check.
Here is my code (class "TrackScheduler"):
try {
    file = new URL("https://img.youtube.com/vi/" + videoID + "/hqdefault.jpg").openStream();
    builder.setImage("attachment://thumbnail.png");
    System.out.println("4");

    ResultSet set = LiteSQL.onQuery("SELECT * FROM musicchannel WHERE guildid = " + guildid);
    try {
        System.out.println("3");
        if (set.next()) {
            long channelid = set.getLong("channelid");
            TextChannel channel;
            System.out.println("2");
            if ((channel = guild.getTextChannelById(channelid)) != null) {
                System.out.println("1");
                channel.sendTyping().queue();
                channel.sendFile(file, "thumbnail.png").embed(builder.build()).queue();
            }
        }
    }
    catch (SQLException e) {
        e.printStackTrace();
    }
}
catch (IOException e) {
    e.printStackTrace();
}
} // closing brace of the surrounding method (its signature is not shown)
My LiteSQL.onQuery (class "LiteSQL"):
private static Connection c;
private static Statement s;

public static ResultSet onQuery(String sql) {
    try {
        return s.executeQuery(sql);
    }
    catch (SQLException e) {
        e.printStackTrace();
    }
    return null;
}
Here is the error:
java.sql.SQLException: ResultSet closed
at org.sqlite.core.CoreResultSet.checkOpen(CoreResultSet.java:76)
at org.sqlite.jdbc3.JDBC3ResultSet.findColumn(JDBC3ResultSet.java:39)
at org.sqlite.jdbc3.JDBC3ResultSet.getLong(JDBC3ResultSet.java:423)
at de.nameddaniel.bot.musik.TrackScheduler.onTrackStart(TrackScheduler.java:79)
at com.sedmelluq.discord.lavaplayer.player.event.AudioEventAdapter.onEvent(AudioEventAdapter.java:72)
at com.sedmelluq.discord.lavaplayer.player.DefaultAudioPlayer.dispatchEvent(DefaultAudioPlayer.java:368)
at com.sedmelluq.discord.lavaplayer.player.DefaultAudioPlayer.startTrack(DefaultAudioPlayer.java:117)
at com.sedmelluq.discord.lavaplayer.player.DefaultAudioPlayer.playTrack(DefaultAudioPlayer.java:80)
at de.nameddaniel.bot.musik.AudioLoadResult.trackLoaded(AudioLoadResult.java:20)
at com.sedmelluq.discord.lavaplayer.player.DefaultAudioPlayerManager.checkSourcesForItemOnce(DefaultAudioPlayerManager.java:443)
at com.sedmelluq.discord.lavaplayer.player.DefaultAudioPlayerManager.checkSourcesForItem(DefaultAudioPlayerManager.java:419)
at com.sedmelluq.discord.lavaplayer.player.DefaultAudioPlayerManager.lambda$createItemLoader$0(DefaultAudioPlayerManager.java:218)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
I'm new here, so if there is any missing information, please tell me.
As well, I'm sorry for the bad formatting.
Greetings, Daniel :)
tl;dr
Do not use static on your Statement and Connection fields.
Details
This code has a security leak. Look up SQL injection. The basic gist is: Statement is almost entirely useless. You want PreparedStatement, and you want your SQL queries to be solely string literals. Don't, ever, 'make the query string' by concatenating user input in. The Query string should be, say, SELECT * FROM musicchannel WHERE guildid = ? (yes, with a literal question mark in the string), then use the setInt method of PreparedStatement to set the guild id. Or better yet, as the JDBC API is not really designed for consumption like this, use something like JDBI.
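As a short sketch of what that looks like (assuming a plain java.sql.Connection named con is in scope; resource closing is covered by the try-with-resources point below):
// Sketch: the same lookup with a bound parameter instead of string concatenation.
PreparedStatement ps = con.prepareStatement("SELECT * FROM musicchannel WHERE guildid = ?");
ps.setLong(1, guildid);   // guildid appears to be numeric in the original code
ResultSet set = ps.executeQuery();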
This is bad exception handling. If you don't know what to do, the right "I don't know" catch block is throw new RuntimeException("Uncaught", e); and not e.printStackTrace();. Better yet, have these methods just throw SQLException; methods that obviously do DB things should be declared to throw it. Note that your main method can (and should) be declared to throw Exception.
Connection, PreparedStatement, and ResultSets are all resources and need to be opened via try-with-resources. Not doing so means your app has a leak and will break something if it runs long enough. For DB code, the DB will eventually run out of connections and become entirely inaccessible until you close the java app. That's why you need try-with-resources.
You have a single Statement and Connection (the fields are static). Presumably your discord bot can receive more than one message, so if you try to send more than one, the system goes down in flames. Don't use 'static' here. The code you pasted does not itself contain anything that would close your ResultSet, but by redesigning away from static the problem is likely to go away by itself.
(Apart from the other answer, which is actually all very good suggestions and you should follow) I presume the following is the problem:
return s.executeQuery(sql);
I don't think that will work if it's a static and is being used multiple times by other objects. It will be cleaned up eventually. Rather than doing that there, you should be returning just an object with the data that you need. Look up the DAO class pattern.
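A rough sketch of those suggestions combined (no static fields, try-with-resources, a bound parameter, and plain data returned instead of the live ResultSet). The connection URL, method name, and the Optional return type are my choices for illustration, not part of the original code:
// Sketch: a small DAO-style method that returns data, not JDBC objects.
public static Optional<Long> findChannelId(long guildId) throws SQLException {
    String sql = "SELECT channelid FROM musicchannel WHERE guildid = ?";
    try (Connection con = DriverManager.getConnection("jdbc:sqlite:bot.db"); // URL is an assumption
         PreparedStatement ps = con.prepareStatement(sql)) {
        ps.setLong(1, guildId);
        try (ResultSet rs = ps.executeQuery()) {
            if (rs.next()) {
                return Optional.of(rs.getLong("channelid"));
            }
            return Optional.empty();
        }
    } // everything JDBC-related is closed here; the caller only ever sees a Long
}
The caller checks the Optional and never touches statements or result sets whose lifetime it does not control.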

Hibernate optimistic lock mechanism

I am curious about Hibernate's optimistic lock (the dedicated version-column approach). I checked the Hibernate source code, which shows that it checks the version before the current transaction commits. But if another transaction happens to commit after the current one has queried the version column from the DB (in that very short time gap), then the current transaction considers that nothing has changed, and the newer data would be wrongly overwritten.
EntityVerifyVersionProcess.java
@Override
public void doBeforeTransactionCompletion(SessionImplementor session) {
    final EntityPersister persister = entry.getPersister();
    if ( !entry.isExistsInDatabase() ) {
        // HHH-9419: We cannot check for a version of an entry we ourselves deleted
        return;
    }
    final Object latestVersion = persister.getCurrentVersion( entry.getId(), session );
    if ( !entry.getVersion().equals( latestVersion ) ) {
        throw new OptimisticLockException(
                object,
                "Newer version [" + latestVersion +
                        "] of entity [" + MessageHelper.infoString( entry.getEntityName(), entry.getId() ) +
                        "] found in database"
        );
    }
}
Is such a case possible?
I hope there are DB domain experts who can help me with this.
Many thanks.
Based on a quick glance at the code, EntityVerifyVersionProcess is used for read transactions, so there is no potential for data loss involved. It only checks that, when the transaction commits, it is not returning data that is already stale. With a READ COMMITTED transaction, I suppose this might return data that is instantly going stale, but it is hard to say without going into the details.
Write transactions on the other hand use EntityIncrementVersionProcess, which is a completely different beast and leaves no chance for race conditions.
public void doBeforeTransactionCompletion(SessionImplementor session) {
    final EntityPersister persister = entry.getPersister();
    final Object nextVersion = persister.forceVersionIncrement( entry.getId(), entry.getVersion(), session );
    entry.forceLocked( object, nextVersion );
}
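For context, the usual way version-based optimistic locking is applied is a @Version field on the entity; here is a minimal sketch with illustrative entity and field names. For ordinary updates Hibernate puts the expected version into the UPDATE's WHERE clause and increments it in the same statement, so a concurrent committed change makes the update affect zero rows and the flush fails with a stale-state/optimistic-lock exception instead of silently overwriting the newer data:
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {

    @Id
    private Long id;

    private long balance;

    @Version            // managed by Hibernate; do not set it yourself
    private int version;

    // getters and setters omitted
}
The generated SQL is roughly "update Account set balance=?, version=? where id=? and version=?", which is what makes the check-and-increment atomic at the database level.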

Neo4j Causal Cluster Bolt Driver Performance Too Low

We are evaluating Neo4J Enterprise Edition Causal Cluster using Bolt Driver for Java.
We have 3 node Core Cluster.
The performance we saw is too low.
We are creating just one node with two properties 1,000,000 times. When tracked, we are getting 300 TPS (i.e. only 300 nodes are created per second).
OS is Linux, RHEL.
Each core is running with 32GB.
We were estimating close to 50,000 TPS for the creation of just one node, but it is only 300 TPS, which is far too low.
I am sure we are missing something big.
This function is called 1,000,000 times by a thread pool of 64 threads.
Code Snippet:
@Override
public void createNode() throws InterruptedException {
    try (Session session = RTNeo4j.getInstance().getWriteDriver().session(AccessMode.WRITE)) {
        try (final Transaction tx = session.beginTransaction()) {
            try {
                tx.run("CREATE (a:Person {name: {name}, id: {id}})",
                        parameters("name", "king", "id", System.currentTimeMillis()));
                tx.success();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
Appreciate quick help for evaluation.
You do not have to create a new session each time inside the method. Move the creation of the session outside the method:
Session session = RTNeo4j.getInstance().getWriteDriver().session(AccessMode.WRITE)
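A rough sketch of that idea with the 1.x driver API already used in the question: reuse one session per worker thread and batch many CREATEs into each transaction instead of one transaction per node. The node count and batch size here are illustrative values, not measurements:
// Hedged sketch, not a drop-in replacement for the method above.
final int NODES_TO_CREATE = 1_000_000; // illustrative
final int BATCH_SIZE = 1_000;          // illustrative; tune for your workload

try (Session session = RTNeo4j.getInstance().getWriteDriver().session(AccessMode.WRITE)) {
    for (int created = 0; created < NODES_TO_CREATE; created += BATCH_SIZE) {
        try (Transaction tx = session.beginTransaction()) {
            for (int i = 0; i < BATCH_SIZE; i++) {
                tx.run("CREATE (a:Person {name: {name}, id: {id}})",
                        parameters("name", "king", "id", System.currentTimeMillis()));
            }
            tx.success(); // committed when the transaction is closed
        }
    }
}
Each commit (and the cluster coordination it triggers) then covers a whole batch of nodes rather than a single one, which is usually where most of the time goes.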

java.sql.SQLException Closed Statement

I have a problem. The code below runs fine if I run it without the autoCommit property, but I would prefer to run it as a transaction. The code inserts an article's header information and then the list of items associated with it (so it's like a one-to-many relationship), and I would like to commit everything in one go rather than first the article information and then its items. The issue is that when I reach the cn.commit() line, I get an exception that says "Closed Statement".
database insertion method
public static void addArticle(Article article) throws SQLException {
    Connection cn = null;
    PreparedStatement ps = null;
    PreparedStatement itemStatement = null; // declaration missing from the original snippet, but used below
    StringBuffer insert = new StringBuffer();
    StringBuffer itemsSQL = new StringBuffer();

    try {
        article.setArticleSortNum(getNextArticleNum(article.getShopId()));
        article.setArticleId(DAOHelper.getNextId("article_id_sequence"));

        cn = DBHelper.makeConnection();
        cn.setAutoCommit(false);

        insert.append("insert query for article goes here");
        ps = cn.prepareStatement(insert.toString());
        int i = 1;
        ps.setLong(i, article.getArticleId()); i++;
        ps.setLong(i, article.getShopId()); i++;
        ps.setInt(i, article.getArticleNum()); i++;
        // etcetera...
        ps.executeUpdate();

        itemsSQL.append("insert query for each line goes here");
        itemStatement = cn.prepareStatement(itemsSQL.toString());
        for (Article item : article.getArticlesList()) {
            item.setArticleId(article.getArticleId());
            i = 1;
            itemStatement.setLong(i, item.getArticleId()); i++;
            itemStatement.setInt(i, item.getItemsOnStock()); i++;
            itemStatement.setInt(i, item.getQuantity()); i++;
            // etcetera...
            itemStatement.executeUpdate();
        }

        cn.commit();
    } catch (SQLException e) {
        cn.rollback();
        log.error(e.getMessage());
        throw e;
    } finally {
        DBHelper.releasePreparedStatement(ps);
        DBHelper.releasePreparedStatement(itemStatement);
        DBHelper.releaseConnection(cn);
    }
}
I also tried the items insertion with addBatch() inside the for loop and then executeBatch(), but got the same Closed Statement error upon reaching cn.commit(). I don't understand why it is closing; all connections and statements are released in the finally clause, so I get the feeling I'm making some fundamental error I'm not aware of. Any ideas? Thanks in advance!
EDIT: Below is the stack trace:
java.sql.SQLException: Closed Statement
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:189)
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:231)
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:294)
    at oracle.jdbc.driver.OracleStatement.ensureOpen(OracleStatement.java:6226)
    at oracle.jdbc.driver.OraclePreparedStatement.sendBatch(OraclePreparedStatement.java:592)
    at oracle.jdbc.driver.OracleConnection.commit(OracleConnection.java:1376)
    at com.evermind.sql.FilterConnection.commit(FilterConnection.java:201)
    at com.evermind.sql.OrionCMTConnection.commit(OrionCMTConnection.java:461)
    at com.evermind.sql.FilterConnection.commit(FilterConnection.java:201)
    at com.dao.ArticlesDAO.addArticle(ArticlesDAO.java:571)
    at com.action.registry.CustomBaseAction.execute(CustomBaseAction.java:57)
    at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431)
    at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236)
    at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1196)
    at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:432)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
    at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:765)
    at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:317)
    at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:790)
    at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:270)
    at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:112)
    at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:192)
    at java.lang.Thread.run(Unknown Source)
EDIT 2:
These are the parameters in the driver's datasource config. I thought the debugging process might be making it time out, but even when everything finishes in less than a second the closed statement exception is thrown:
min-connections="20"
max-connections="200"
inactivity-timeout="20"
stmt-cache-size="40"/>
It's usually best to create a statement, use it, and close it as soon as possible, and it does no harm to do so before the transaction gets committed. From reading the Oracle tutorial about the batch model, it sounds like it could be a problem to have multiple statements open at one time. I would try closing the ps object before working with the itemStatement, then moving the initialization
itemStatement = cn.prepareStatement(itemsSQL.toString());
to directly above the for loop, and also moving where you close the itemStatement to immediately after the for loop:
PreparedStatement itemStatement = cn.prepareStatement(itemsSQL.toString());
try {
    for (Article item : article.getArticlesList()) {
        item.setArticleId(article.getArticleId());
        i = 1;
        itemStatement.setLong(i, item.getArticleId()); i++;
        itemStatement.setInt(i, item.getItemsOnStock()); i++;
        itemStatement.setInt(i, item.getQuantity()); i++;
        // etcetera...
        itemStatement.executeUpdate();
    }
} finally {
    DBHelper.releasePreparedStatement(itemStatement);
}
It looks like what is going on is that you have some batching parameter set on the connection which causes the connection, at commit time, to look for unfinished work in the statement to flush; it finds the statement already closed and complains about it. This is odd, because at the point where the commit blows up the code has not yet reached the finally block where the statement gets closed.
Reading up on Oracle batching models may be helpful. Also check the JDBC driver version and make sure it's right for the version of Oracle you're using, and see if there are any updates available for it.
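If you do go back to batching, the portable JDBC shape is to prepare once, call addBatch() per row, run a single executeBatch(), close the statement, and only then commit. A rough sketch based on the code above (a sketch of the standard API, not a guaranteed fix for the Oracle driver behaviour):
// Sketch: standard JDBC batching, with the statement finished and closed
// before the connection is committed.
try (PreparedStatement itemStatement = cn.prepareStatement(itemsSQL.toString())) {
    for (Article item : article.getArticlesList()) {
        item.setArticleId(article.getArticleId());
        int i = 1;
        itemStatement.setLong(i, item.getArticleId()); i++;
        itemStatement.setInt(i, item.getItemsOnStock()); i++;
        itemStatement.setInt(i, item.getQuantity()); i++;
        // etcetera...
        itemStatement.addBatch();     // queue this row
    }
    itemStatement.executeBatch();     // send all queued rows in one round trip
} // statement is closed here
cn.commit();                          // commit only after the statement is done and closed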
