Distributed transaction with multiple database servers seems unreliable - java

After googling for a few days I came to understand something about JTA, and then wrote an application that uses JTA, JDBC and MySQL to perform a distributed transaction.
Below is my code.
We are using 3 different database servers; res1, res2 and res3 (three different XAResources) denote them respectively.
try {
    res1.start(xid1, XAResource.TMNOFLAGS);
    dao.addEmployee(e, con);
    res1.end(xid1, XAResource.TMSUCCESS);

    res2.start(xid2, XAResource.TMNOFLAGS);
    dao.addDepartment(d, tournamentCon);
    res2.end(xid2, XAResource.TMSUCCESS);

    res3.start(xid3, XAResource.TMNOFLAGS);
    dao.addAsset(a, testCon);
    res3.end(xid3, XAResource.TMSUCCESS);

    // Phase 1: ask every resource to prepare
    int result1 = res1.prepare(xid1);
    int result2 = res2.prepare(xid2);
    int result3 = res3.prepare(xid3);

    // Phase 2: commit only if every resource voted OK
    if (result1 == XAResource.XA_OK && result2 == XAResource.XA_OK
            && result3 == XAResource.XA_OK) {
        res1.commit(xid1, false);
        res2.commit(xid2, false);
        res3.commit(xid3, false);
    }
} catch (Exception ex) { // renamed from e, which collides with the employee variable above
    res1.rollback(xid1);
    res2.rollback(xid2);
    res3.rollback(xid3);
}
Now, the problem with this approach:
suppose an exception occurs during res2.commit or res3.commit.
The commits that have already completed, i.e. res1.commit, can no longer be rolled back,
but we want all these transactions to complete entirely or not at all.
Could someone please let me know how to do this?
FYI: we are not using any application server.
Please also let me know what a transaction manager is. Is it a software component which needs to be used explicitly, or would the code I have posted be called a transaction manager?
Any help would be highly appreciated.
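For context: a transaction manager is exactly the component that addresses this gap. It is a coordinator that durably logs its commit decision before running phase 2, so a crash between commits can be recovered on restart instead of leaving res1 committed and res2/res3 in limbo; hand-rolled XA calls like the above cannot provide that. Outside an application server, a standalone JTA implementation such as Atomikos or Bitronix can be embedded in a plain Java program. A minimal sketch with Atomikos follows (class and property names are from its public API but not verified against a particular version; the connection details and MySQL XA class name are placeholders):

import java.sql.Connection;

import javax.transaction.UserTransaction;

import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.jdbc.AtomikosDataSourceBean;

public class DistributedTxSketch {
    public static void main(String[] args) throws Exception {
        // One XA-aware pool per database server (repeat for db2 and db3)
        AtomikosDataSourceBean ds1 = new AtomikosDataSourceBean();
        ds1.setUniqueResourceName("db1");
        ds1.setXaDataSourceClassName("com.mysql.cj.jdbc.MysqlXADataSource");
        ds1.getXaProperties().setProperty("url", "jdbc:mysql://host1:3306/db1");
        ds1.getXaProperties().setProperty("user", "app");
        ds1.getXaProperties().setProperty("password", "secret");

        UserTransaction utx = new UserTransactionImp();
        utx.begin();
        try {
            try (Connection c1 = ds1.getConnection()) {
                // dao.addEmployee(e, c1); ... connections obtained inside the
                // transaction are enlisted as XA branches automatically
            }
            utx.commit();   // the TM drives prepare/commit across all branches
        } catch (Exception ex) {
            utx.rollback(); // the TM rolls back every enlisted branch
            throw ex;
        }
    }
}

The key difference from the hand-rolled version is the recovery log: if the JVM dies between the first and second commit, the transaction manager resolves the in-doubt branches on restart.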

Related

Apache Jena SPARQL query will not abort

I'm having a problem in my Java application where Apache Jena will never stop a SPARQL query until it's finished, even if I explicitly tell it to stop. Here's the code that gets called to run a query:
try {
    Model union = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_RULE_INF);
    if (ontologies != null)
        for (OntModel om : ontologies)
            union = ModelFactory.createUnion(union, om);
    Reasoner reasoner = ReasonerRegistry.getOWLReasoner();
    reasoner = reasoner.bindSchema(union);
    InfModel infmodel = ModelFactory.createInfModel(reasoner, triples);
    query_running = true;
    Query query = QueryFactory.create(query_string);
    query_execution = QueryExecutionFactory.create(query, infmodel);
    ResultSet rs = query_execution.execSelect();
    while (rs.hasNext()) {
        // do stuff with results
    }
} catch (Exception e) {
} finally {
    stopQuery();
}
stopQuery() gets called at the end, but the method is also called when the user hits a "cancel" button. Here's what that method looks like:
public void stopQuery() {
    try {
        if (query_execution != null) {
            //query_execution.close();
            query_execution.abort();
            query_execution = null;
            System.out.println("STOPPED");
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    query_running = false;
}
When the cancel button is hit while a query is running (10+ minutes on a relatively small dataset...?), the method gets called, but the query continues to run in the background. I know it's still running because I can see the application in Task Manager taking up 30%+ CPU until the query presumably completes. I've tried .abort(), .close(), and both at the same time, but I cannot figure out how to stop the query mid-execution.

I've even tried wrapping the query code in a separate thread, but that doesn't work either. It makes sense that threading wouldn't solve the problem, because the thread needs to see the interrupt request but the code is freezing on a particular line. The code seems to freeze on rs.hasNext(), but not the first check: it runs quickly through the first x results of the ResultSet (which are likely explicit statements it finds easily), but then it seems to freeze for a long while, likely searching for implicit results with the reasoner.

How can I force the query to stop? I don't want to use a timeout -- I want the user to have the option to stop the query or let it play out. This problem is not specific to any one query or dataset.
Thanks.
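For reference, the usual cancellation pattern in Jena is the one the question already attempts: keep the QueryExecution visible to the cancelling thread and call abort() on it. A minimal self-contained sketch follows (modern org.apache.jena package names are assumed; the class and field names are illustrative). As described above, time spent inside the reasoner during hasNext() may not observe the cancellation flag until control returns to the query engine, so this pattern alone is not guaranteed to interrupt inference work:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;

public class CancellableQuery {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private volatile QueryExecution queryExecution; // shared with the UI thread

    public void runQuery(String queryString, Model model) {
        executor.submit(() -> {
            Query query = QueryFactory.create(queryString);
            try (QueryExecution qe = QueryExecutionFactory.create(query, model)) {
                queryExecution = qe; // publish so stopQuery() can reach it
                ResultSet rs = qe.execSelect();
                while (rs.hasNext()) {
                    QuerySolution row = rs.next();
                    // process row
                }
            } catch (QueryCancelledException cancelled) {
                // expected when stopQuery() aborts a running query
            } finally {
                queryExecution = null;
            }
        });
    }

    // Called from the cancel button's handler.
    public void stopQuery() {
        QueryExecution qe = queryExecution;
        if (qe != null) {
            qe.abort(); // the query engine checks this flag between solutions
        }
    }
}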

hibernate optimistic lock mechanism

I am curious about Hibernate's optimistic lock (the dedicated version-column approach). I checked the Hibernate source code, which shows that it checks the version before the current transaction commits. But if another transaction happens to commit after the current one queries the version column from the DB (in that very short time gap), then the current transaction considers that nothing has changed, and the other transaction's write would be wrongly overwritten.
EntityVerifyVersionProcess.java
@Override
public void doBeforeTransactionCompletion(SessionImplementor session) {
    final EntityPersister persister = entry.getPersister();
    if ( !entry.isExistsInDatabase() ) {
        // HHH-9419: We cannot check for a version of an entry we ourselves deleted
        return;
    }
    final Object latestVersion = persister.getCurrentVersion( entry.getId(), session );
    if ( !entry.getVersion().equals( latestVersion ) ) {
        throw new OptimisticLockException(
                object,
                "Newer version [" + latestVersion +
                "] of entity [" + MessageHelper.infoString( entry.getEntityName(), entry.getId() ) +
                "] found in database"
        );
    }
}
Is such a case possible?
I hope some DB domain experts can help me with this.
Many thanks.
Based on a quick glance at the code, EntityVerifyVersionProcess is used for read transactions, so there's no potential for data loss involved. It only checks that when the transaction commits, it's not returning data that's already stale. With a READ COMMITTED transaction, I suppose this might return data that is instantly going stale, but it's hard to say without going into details.
Write transactions, on the other hand, use EntityIncrementVersionProcess, which is a completely different beast and leaves no chance for race conditions:
public void doBeforeTransactionCompletion(SessionImplementor session) {
    final EntityPersister persister = entry.getPersister();
    final Object nextVersion = persister.forceVersionIncrement( entry.getId(), entry.getVersion(), session );
    entry.forceLocked( object, nextVersion );
}

Mongodb connection issue with java

I am having a MongoDB connection issue in Java. This is my connection class:
public MongoDbUtil() {
    try {
        System.out.println("1");
        String host = "127.0.0.1"; // note: never actually passed to Mongo() below
        String dbName = "m_prod";
        int port = 27017;          // note: never actually passed to Mongo() below
        System.out.println("2");
        Mongo m = new Mongo();
        System.out.println("3");
        ds = new Morphia().createDatastore(m, dbName);
        System.out.println("4");
        ds.ensureIndexes();
        System.out.println("5");
        ds.ensureCaps();
        System.out.println("1");
    } catch (Exception e) {
        System.out.println("catch");
    } finally {
        System.out.println("finally");
        System.out.println(ds == null);
    }
}
Only '1' and '2' are printed; after that 'finally' is printed, and 'ds' is null. No exception seems to happen ('catch' is not printed).
The Mongo server is up and running and I can access it from the command prompt (Linux). The other interesting thing is that it works fine when I call this method from a unit test, but in all other cases the above issue happens. What could be the reason?
Thanks
Mongo() is deprecated, you should use MongoClient() instead - see http://api.mongodb.org/java/2.11.0/com/mongodb/Mongo.html#Mongo()
Still it should find the deprecated constructor. Can you include the imports of your file, please?
If you're using the 3.0 driver, there's a driver-compat layer that will help you transition. You really should use the new API, though.
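A sketch of the non-deprecated setup the comments suggest, using MongoClient with Morphia (the org.mongodb.morphia package names of the pre-3.0 Morphia API are assumed here):

import com.mongodb.MongoClient;

import org.mongodb.morphia.Datastore;
import org.mongodb.morphia.Morphia;

public class MongoDbUtil {

    private final Datastore ds;

    public MongoDbUtil() throws Exception {
        // MongoClient replaces the deprecated Mongo(); this time host and port
        // are actually passed in rather than declared and left unused
        MongoClient client = new MongoClient("127.0.0.1", 27017);
        ds = new Morphia().createDatastore(client, "m_prod");
        ds.ensureIndexes();
        ds.ensureCaps();
    }
}

One more debugging note on the symptoms above: if 'catch' never prints but 'finally' does, something other than an Exception may have been thrown. A NoClassDefFoundError or similar Error skips a catch (Exception e) block entirely, so temporarily catching Throwable while debugging would reveal it.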

Creating multiple transactions in a single hibernate session

I have created a Quartz Job which runs in the background in my JBoss server and is responsible for updating some statistical data at regular intervals (coupled with some database flags).
To load and persist the data I am using Hibernate 4. Everything works fine except one hiccup.
The entire thread, i.e. the Job, is wrapped in a single transaction which over time (as the amount of data increases) becomes huge and worrisome. I am trying to break this single large transaction into multiple small ones, such that each transaction processes only a subgroup of the data.
Problem: I tried, very lamely, to wrap the code in a loop and start/end a transaction at the start/end of the loop. As I expected, it didn't work. I have been looking around various forums to figure out a solution but have not come across anything that addresses managing multiple transactions in a single session (where only 1 transaction is active at a time).
I am relatively new to Hibernate and would appreciate any help that points me in a direction for achieving this.
Update: Adding code to demonstrate what I am trying to achieve and what I mean by breaking into multiple transactions, plus the stack trace when this is executed.
log.info("Starting Calculation Job.");
List<GroupModel> groups = Collections.emptyList();
DAOFactory hibDaoFactory = null;
try {
hibDaoFactory = DAOFactory.hibernate();
hibDaoFactory.beginTransaction();
OrganizationDao groupDao = hibDaoFactory.getGroupDao();
groups = groupDao.findAll();
hibDaoFactory.commitTransaction();
} catch (Exception ex) {
hibDaoFactory.rollbackTransaction();
log.error("Error in transaction", ex);
}
try {
hibDaoFactory = DAOFactory.hibernate();
StatsDao statsDao = hibDaoFactory.getStatsDao();
StatsScaledValuesDao statsScaledDao = hibDaoFactory.getStatsScaledValuesDao();
for (GroupModel grp : groups) {
try {
hibDaoFactory.beginTransaction();
log.info("Performing computation for Group " + grp.getName() + " ["
+ grp.getId() + "]");
List<Stats> statsDetail = statsDao.loadStatsGroup(grp.getId());
// Coputing Steps here
for (Entry origEntry : statsEntries) {
entry.setCalculatedItem1(origEntry.getCalculatedItem1());
entry.setCalculatedItem2(origEntry.getCalculatedItem2());
entry.setCalculatedItem3(origEntry.getCalculatedItem3());
StatsDetailsScaledValues scValues = entry.getScaledValues();
if (scValues == null) {
scValues = new StatsDetailsScaledValues();
scValues.setId(origEntry.getScrEntryId());
scValues.setValues(origEntry.getScaledValues());
} else {
scValues.setValues(origEntry.getScaledValues());
}
statsScaledDao.makePersistent(scValues);
}
hibDaoFactory.commitTransaction();
} catch (Exception ex) {
hibDaoFactory.rollbackTransaction();
log.error("Error in transaction", ex);
} finally {
}
}
} catch (Exception ex) {
log.error("Error", ex);
} finally {
}
log.info("Job Complete.");
Following is the exception stack trace I get upon execution of this Job:
org.hibernate.SessionException: Session is closed!
at org.hibernate.internal.AbstractSessionImpl.errorIfClosed(AbstractSessionImpl.java:127)
at org.hibernate.internal.SessionImpl.createCriteria(SessionImpl.java:1555)
at sun.reflect.GeneratedMethodAccessor469.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.hibernate.context.internal.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:352)
at $Proxy308.createCriteria(Unknown Source)
at com.blueoptima.cs.dao.impl.hibernate.GenericHibernateDao.findByCriteria(GenericHibernateDao.java:132)
at com.blueoptima.cs.dao.impl.hibernate.ScrStatsManagementHibernateDao.loadStatsEntriesForOrg(ScrStatsManagementHibernateDao.java:22)
... 3 more
To my understanding from what I have read so far about Hibernate, sessions and transactions: when a session is created it is attached to the thread and lives until commit or rollback is called. Thus, when the first transaction is committed, the session is closed and is unavailable for the rest of the thread's life.
My question remains: how can we have multiple transactions in a single session?
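For what it's worth, the stack trace above points at ThreadLocalSessionContext, whose thread-bound sessions are closed automatically when a transaction completes, which matches this diagnosis. One way around it is to manage the session explicitly with openSession() and run several short, sequential transactions on it. A minimal sketch using org.hibernate.Session and org.hibernate.Transaction (the DAO wiring from the question is omitted):

Session session = sessionFactory.openSession(); // not the thread-bound getCurrentSession()
try {
    for (GroupModel grp : groups) {
        Transaction tx = session.beginTransaction();
        try {
            // load, compute and persist this group's stats via `session`
            tx.commit();   // commits this group's work; the session stays open
        } catch (RuntimeException ex) {
            tx.rollback(); // only this group's transaction is rolled back
            log.error("Error in transaction", ex);
        }
    }
} finally {
    session.close();       // closed once, after all transactions
}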
More detail and some examples would be great, but I think I should be able to help with what you have written here.
Have one static SessionFactory (it is big on memory).
Also, with your transactions you want something like this:
SomeClass object = new SomeClass();
Session session = sessionFactory.openSession(); // create the session object
session.beginTransaction(); // begins the transaction
session.save(object); // saves the object, but REMEMBER it isn't saved until commit()
session.getTransaction().commit(); // actually persists the object
session.close(); // closes the session
This is how I use my transactions; I am not sure I run as many transactions at a time as you do. But the Session object is lightweight in memory compared to the SessionFactory.
If you want to save more objects at a time you could do it in one transaction for example.
SomeClass object1 = new SomeClass();
SomeClass object2 = new SomeClass();
SomeClass object3 = new SomeClass();
session.beginTransaction();
session.save(object1);
session.save(object2);
session.save(object3);
session.getTransaction().commit(); // when commit is called it will save all 3 objects
session.close();
Hope this helps in some way or points you in the right direction.
I think you could configure your program to condense transactions as well. :)
Edit
Here is a great YouTube tutorial; this guy really broke it down for me.
Hibernate Tutorials

Android / Java: GC_CONCURRENT when creating many Strings

I've been stuck on a problem for a few days. I've found some similar posts but still don't understand what's wrong with my code.
I'm reading a file (18.4 KB) containing SQL queries. The only thing I want to do is read the file and execute the queries.
I have no problem reading the file; the problem occurs after all the queries have been executed (if I don't execute them, it works, but that's not the point!).
So here's my code (wrapped in try/catch for IOException):
InputStream in = ctx.getAssets().open("file.sql");
ByteArrayBuffer queryBuff = new ByteArrayBuffer(in.available());
String query = null;
int current;
while (-1 != (current = in.read())) {
    queryBuff.append((char) current);
    if (((char) current) == ';') {
        query = new String(queryBuff.toByteArray());
        db.execSQL(query);
        queryBuff.clear();
        query = null;
    }
}
in.close();
queryBuff.clear();
The GC_CONCURRENT messages occur when the `new String` is in the loop, and they appear after the loop has finished.
Thanks!
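As an aside, GC_CONCURRENT log lines report normal concurrent garbage collections rather than a leak; allocating one String per statement in a loop is enough to trigger them. If the per-byte read loop ever becomes a concern, one alternative sketch (same surrounding try/catch assumed; the reader and charset choices, and the assumption that no ';' appears inside string literals, are mine) reads the asset once and splits on ';':

// Read the whole asset, then split into statements on ';'.
InputStream in = ctx.getAssets().open("file.sql");
BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
StringBuilder sql = new StringBuilder();
String line;
while ((line = reader.readLine()) != null) {
    sql.append(line).append('\n');
}
reader.close();

for (String statement : sql.toString().split(";")) {
    String trimmed = statement.trim();
    if (!trimmed.isEmpty()) {
        db.execSQL(trimmed); // execute each statement individually
    }
}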
EDIT:
I'm a little embarrassed: the memory issue didn't occur in this part of the code but in code executed later (I don't know where yet), and my problem wasn't actually a problem; the app was working properly after all...
Sorry!
