I have a REST web service in Java, using Jersey, that connects to a database. I don't know what the ideal execution time should be, but the time it takes feels too long.
The actual call to the DB completes in the range of 0-3 milliseconds, but the overall REST request takes more than 9 milliseconds to complete.
Below is one of the methods:
private Connection connection;               // declared as an instance variable
private PreparedStatement preparedStatement; // declared as an instance variable
public int insertSubscription(ActiveWatchers activeWatchers) throws SQLException {
int index = 0;
try {
connection = DAOConnectionFactory.getConnection();
preparedStatement = connection.prepareStatement(INSERT_SUBS);
preparedStatement.setObject(++index, activeWatchers.getPresentityURI());
preparedStatement.setObject(++index, activeWatchers.getCallId());
preparedStatement.setObject(++index, activeWatchers.getToTag());
preparedStatement.setObject(++index, activeWatchers.getFromTag());
preparedStatement.setObject(++index, activeWatchers.getToUser());
preparedStatement.setObject(++index, activeWatchers.getToDomain());
preparedStatement.setObject(++index, activeWatchers.getWatcherUsername());
preparedStatement.setObject(++index, activeWatchers.getWatcherDomain());
preparedStatement.setObject(++index, activeWatchers.getEvent());
preparedStatement.setObject(++index, activeWatchers.getEventId());
preparedStatement.setObject(++index, activeWatchers.getLocalCseq());
preparedStatement.setObject(++index, activeWatchers.getRemoteCseq());
preparedStatement.setObject(++index, activeWatchers.getExpires());
preparedStatement.setObject(++index, activeWatchers.getStatus());
preparedStatement.setObject(++index, activeWatchers.getReason());
preparedStatement.setObject(++index, activeWatchers.getRecordRoute());
preparedStatement.setObject(++index, activeWatchers.getContact());
preparedStatement.setObject(++index, activeWatchers.getLocalContact());
preparedStatement.setObject(++index, activeWatchers.getVersion());
preparedStatement.setObject(++index, activeWatchers.getSocketInfo());
long start = System.currentTimeMillis();
int status = preparedStatement.executeUpdate();
long end = System.currentTimeMillis();
logger.debug("insertSubscription elasped time {}", (end - start));
logger.debug("Insert returned with status {}.", status);
return status;
} catch (SQLException ex) {
logger.error("Error while adding new subscription by {}#{} for {} into database.", activeWatchers.getWatcherUsername(), activeWatchers.getWatcherDomain(), activeWatchers.getPresentityURI(), ex);
throw ex;
} catch (Exception ex) {
logger.error("Error while adding new subscription by {}#{} for {} into database.", activeWatchers.getWatcherUsername(), activeWatchers.getWatcherDomain(), activeWatchers.getPresentityURI(), ex);
throw ex;
} finally {
DAOConnectionFactory.closeConnection(connection, preparedStatement, null);
}
}
The REST part
private SubscriptionDAO subscriptionDAO; // declared as an instance variable
@POST
@Consumes("application/json")
public Response addSubscription(ActiveWatchers activeWatchers) {
long start = System.currentTimeMillis();
logger.debug("addSubscription start time {}", start);
subscriptionDAO = new SubscriptionDAO();
try {
subscriptionDAO.insertSubscription(activeWatchers);
long end = System.currentTimeMillis();
logger.debug("addSubscription elasped time {}", (end - start));
return Response.status(201).build();
} catch (Exception ex) {
logger.error("Error while creating subscription.", ex);
return Response.status(500).entity("Server Error").build();
}
}
I have a lot of other similar methods for different operations, and each shows similar behaviour, which affects the overall performance of the system.
Thanks
The actual call to DB completes in range of 0-3 milliseconds but the overall time to complete the REST request takes >9 milliseconds.
I think if your web layer adds only 6 ms of overhead, then it is pretty fast. My guess is that the 6 ms is spent mostly on reflection-heavy JSON deserialization (into an ActiveWatchers instance).
First, you should profile your app with VisualVM (a GUI tool that ships with the JDK), because optimizing based on guesswork is a waste of effort.
If the JSON deserialization turns out to be the bottleneck, you can write a custom Jackson deserializer for your ActiveWatchers class, where hand-written code avoids the slower reflection-based binding.
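As an illustration (not from the original answer), a minimal sketch of such a deserializer could look like the code below. It assumes Jackson 2.x, that ActiveWatchers has plain setters, and that the JSON property names mirror the getters shown in the question (presentityURI, callId, expires, and so on); only a few fields are shown.
import java.io.IOException;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonNode;
public class ActiveWatchersDeserializer extends JsonDeserializer<ActiveWatchers> {
    @Override
    public ActiveWatchers deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        JsonNode node = p.getCodec().readTree(p);
        ActiveWatchers aw = new ActiveWatchers();
        aw.setPresentityURI(node.path("presentityURI").asText(null)); // assumed property name
        aw.setCallId(node.path("callId").asText(null));               // assumed property name
        aw.setToTag(node.path("toTag").asText(null));
        aw.setFromTag(node.path("fromTag").asText(null));
        aw.setExpires(node.path("expires").asInt());                  // assumes an int field
        // ... the remaining fields follow the same pattern
        return aw;
    }
}
You would then register it, for example with @JsonDeserialize(using = ActiveWatchersDeserializer.class) on the ActiveWatchers class, and profile again to see whether it actually moves the needle.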
But I still think that your 9ms is fast enough.
What Happened
All the data from last month was corrupted due to a bug in the system. So we have to delete and re-input these records manually. Basically, I want to delete all the rows inserted during a certain period of time. However, I found it difficult to scan and delete millions of rows in HBase.
Possible Solutions
I found two ways to bulk delete:
The first is to set a TTL, so that all outdated records are deleted automatically by the system. But I want to keep the records inserted before last month, so this solution does not work for me.
The second option is to write a client using the Java API:
public static void deleteTimeRange(String tableName, Long minTime, Long maxTime) {
Table table = null;
Connection connection = null;
try {
Scan scan = new Scan();
scan.setTimeRange(minTime, maxTime);
connection = HBaseOperator.getHbaseConnection();
table = connection.getTable(TableName.valueOf(tableName));
ResultScanner rs = table.getScanner(scan);
List<Delete> list = getDeleteList(rs);
if (list.size() > 0) {
table.delete(list);
}
} catch (Exception e) {
e.printStackTrace();
} finally {
if (null != table) {
try {
table.close();
} catch (IOException e) {
e.printStackTrace();
}
}
if (connection != null) {
try {
connection.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
private static List<Delete> getDeleteList(ResultScanner rs) {
List<Delete> list = new ArrayList<>();
try {
for (Result r : rs) {
Delete d = new Delete(r.getRow());
list.add(d);
}
} finally {
rs.close();
}
return list;
}
But in this approach, all the rows returned by the ResultScanner are collected into one huge list of deletes, so the heap usage would be huge. And if the program crashes, it has to start from the beginning.
So, is there a better way to achieve the goal?
Don't know how many 'millions' you are dealing with in your table, but the simplest thing is not to try to put them all into a List at once, but to work in more manageable steps by using the .next(n) function. Something like this:
for (Result row : rs.next(numRows))
{
Delete del = new Delete(row.getRow());
...
}
This way, you can control how many rows get returned from the server in a single RPC through the numRows parameter. Make sure it's large enough so as not to make too many round-trips to the server, but at the same time not so large that it kills your heap. You can also use the BufferedMutator to operate on multiple Deletes at once.
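As a rough sketch of that chunked approach (untested; numRows is illustrative, and HBaseOperator.getHbaseConnection() is the helper from the question):
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
public static void deleteTimeRangeChunked(String tableName, Long minTime, Long maxTime) throws IOException {
    Scan scan = new Scan();
    scan.setTimeRange(minTime, maxTime);
    try (Connection connection = HBaseOperator.getHbaseConnection();
         Table table = connection.getTable(TableName.valueOf(tableName));
         ResultScanner rs = table.getScanner(scan)) {
        int numRows = 1000;                  // rows fetched per call; tune against your heap
        Result[] chunk = rs.next(numRows);
        while (chunk.length > 0) {
            List<Delete> batch = new ArrayList<>(chunk.length);
            for (Result row : chunk) {
                batch.add(new Delete(row.getRow()));
            }
            table.delete(batch);             // delete this chunk before fetching the next one
            chunk = rs.next(numRows);
        }
    }
}
Each iteration only holds one chunk of deletes in memory, and if the job dies you have at least already removed the chunks processed so far.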
Hope this helps.
I would suggest two improvements:
Use BufferedMutator to batch your deletes; it does exactly what you need – it keeps an internal buffer of mutations and flushes it to HBase when the buffer fills up, so you do not have to worry about keeping, sizing, and flushing your own list.
Improve your scan:
Use KeyOnlyFilter – since you do not need the values, no need to retrieve them
use scan.setCacheBlocks(false) - since you do a full-table scan, caching all blocks on the region server does not make much sense
tune scan.setCaching(N) and scan.setBatch(N) – the right N depends on the size of your keys; balance caching more rows per trip against the memory that requires, but since you only transfer keys, N can probably be quite large.
Here's an updated version of your code:
public static void deleteTimeRange(String tableName, Long minTime, Long maxTime) {
try (Connection connection = HBaseOperator.getHbaseConnection();
final Table table = connection.getTable(TableName.valueOf(tableName));
final BufferedMutator mutator = connection.getBufferedMutator(TableName.valueOf(tableName))) {
Scan scan = new Scan();
scan.setTimeRange(minTime, maxTime);
scan.setFilter(new KeyOnlyFilter());
scan.setCaching(1000);
scan.setBatch(1000);
scan.setCacheBlocks(false);
try (ResultScanner rs = table.getScanner(scan)) {
for (Result result : rs) {
mutator.mutate(new Delete(result.getRow()));
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
Note the use of "try with resource" – if you omit that, make sure to .close() mutator, rs, table, and connection.
I defined the following aspect to measure the execution time of some methods:
#Around("execution(#Metrics * *.*(..))")
public Object metrics(ProceedingJoinPoint pointcut) {
Logger log = LoggerFactory.getLogger(pointcut.getSourceLocation().getWithinType());
long ms = System.currentTimeMillis();
try {
Object result = pointcut.proceed();
ms = System.currentTimeMillis() - ms;
log.info(String.format("Execution of method %s finished in %d ms", pointcut.getSignature().getName(), ms));
return result;
}
catch (Throwable e) {
log.error(String.format("Execution of method %s ended with an error", pointcut.getSignature().getName()), e);
}
return null;
}
The problem comes when I use it on the update method of my DAOs, which is @Transactional. The results I'm getting do not match the real times. I guess it is only measuring the execution time of the Java code, but not the database update performed by Hibernate.
Is it possible to measure the complete execution time?
For more information: I am using Spring 3.2.9 and Hibernate 3.5 in my application.
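A possible direction (unverified, and not part of the original question): with proxy-based Spring AOP the commit runs inside the transaction interceptor, so the timing aspect only includes the flush/commit if it runs at a higher precedence (a lower order value) than the transaction advice. A rough sketch with illustrative order values, where the transaction advice order is set where transactions are enabled (e.g. <tx:annotation-driven order="200"/>):
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.slf4j.LoggerFactory;
import org.springframework.core.annotation.Order;
@Aspect
@Order(100) // lower value = higher precedence, so this advice wraps the transaction advice
public class MetricsAspect {
    @Around("execution(@Metrics * *.*(..))")
    public Object metrics(ProceedingJoinPoint pointcut) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return pointcut.proceed();
        } finally {
            long ms = System.currentTimeMillis() - start;
            LoggerFactory.getLogger(pointcut.getSignature().getDeclaringType())
                    .info("Execution of method {} finished in {} ms", pointcut.getSignature().getName(), ms);
        }
    }
}
Whether this captures the Hibernate flush depends on the aspect actually being ordered outside the transaction proxy, so treat it as a starting point rather than a definitive answer.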
Code similar to the snippet below works in an app I developed three years back. Do I need to add any dependency files, or is there another way of implementing it? I have found this.
private void appLevel_Lang(final Context cntxt) {
final ParseQuery<ParseObject> query = ParseQuery.getQuery("appSupportedLanguages");
query.setLimit(100);
// Get last updated date of appSupportedLanguage table from sqllite
Date dbLastUpdatedDate = db.getLastUpdateDateOfTable("appSupportedLanguages");
if (dbLastUpdatedDate != null) {
query.whereGreaterThan("updatedAt", dbLastUpdatedDate);
}
query.orderByAscending("updatedAt");
// run in background
query.findInBackground(new FindCallback<ParseObject>() {
@Override
public void done(List<ParseObject> applvl_LangList, ParseException e) {
if (e == null) {
if (applvl_LangList.size() > 0) {
String lastUpdatedDate = ParseQueries.getNSDateFormatterUpdateAtForParse().format(applvl_LangList.get(applvl_LangList.size() - 1).getUpdatedAt());
for (ParseObject p : applvl_LangList) {
// Insert in DB
AppLevel appLevelLanguage = new AppLevel();
appLevelLanguage.objectID = p.getObjectId();
appLevelLanguage.key = p.getString("key");
appLevelLanguage.updatedAt = lastUpdatedDate;
ArrayList<String> arrLangColNames = (ArrayList<String>) ParseConfig.getCurrentConfig().get("supportedLanguages");
// Insert in local DB
db.insertOrUpdateAppSupportedLanguageTable(appLevelLanguage);
}
}
if (applvl_LangList.size() == query.getLimit()) {
appLevel_Lang(cntxt);
} else {
Log.d("", "AppSupportedLanguages is not equal to limit");
}
} else {
// Show parse exception here
Log.d("AppSupportedLanguages", "Error: " + e.getMessage());
}
}
});
}
Parse shut down their service on January 30, 2017.
From the blog post:
we will disable the Parse service on Monday, January 30, 2017.
Throughout the day we will be disabling the Parse API on an app-by-app
basis. When your app is disabled, you will not be able to access the
data browser or export any data, and your applications will no longer
be able to access the Parse API.
Alternate Solutions
Firebase
Buddy
Migration (requires your own server with Node.js application support)
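For illustration only (not from the original answer): a rough Firebase Realtime Database equivalent of the "fetch rows updated since a date" Parse query above might look like the sketch below. The node name appSupportedLanguages, the updatedAt child (stored as epoch milliseconds), and the dbLastUpdatedMillis variable are assumptions carried over from the Parse code.
import android.util.Log;
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.Query;
import com.google.firebase.database.ValueEventListener;
// Fetch entries whose updatedAt is newer than the last value saved locally, ascending.
DatabaseReference ref = FirebaseDatabase.getInstance().getReference("appSupportedLanguages");
Query query = ref.orderByChild("updatedAt").startAt(dbLastUpdatedMillis);
query.addListenerForSingleValueEvent(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot snapshot) {
        for (DataSnapshot child : snapshot.getChildren()) {
            String key = child.child("key").getValue(String.class);
            // insert or update the local DB here, as in the Parse callback
        }
    }
    @Override
    public void onCancelled(DatabaseError error) {
        Log.d("AppSupportedLanguages", "Error: " + error.getMessage());
    }
});
Whichever backend you pick, the data model and query syntax differ enough from Parse that the surrounding sync logic (last-updated tracking, paging) will need to be reworked rather than just translated.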
I have a problem: the code below runs fine without the autoCommit property, but I would prefer to run it as a transaction. The code inserts an article's header information and then the list of items associated with it (a one-to-many relationship), so I would like to commit everything in one go rather than first the article information and then its items. The issue is that when I reach the cn.commit() line, I get an exception that says "Closed Statement".
database insertion method
public static void addArticle(Article article) throws SQLException {
Connection cn = null;
PreparedStatement ps = null;
PreparedStatement itemStatement = null;
StringBuffer insert = new StringBuffer();
StringBuffer itemsSQL = new StringBuffer();
try {
article.setArticleSortNum(getNextArticleNum(article.getShopId()));
article.setArticleId(DAOHelper.getNextId("article_id_sequence"));
cn = DBHelper.makeConnection();
cn.setAutoCommit(false);
insert.append("insert query for article goes here");
ps = cn.prepareStatement(insert.toString());
int i = 1;
ps.setLong(i, article.getArticleId()); i++;
ps.setLong(i, article.getShopId()); i++;
ps.setInt(i, article.getArticleNum()); i++;
// etcetera...
ps.executeUpdate();
itemsSQL.append("insert query for each line goes here");
itemStatement = cn.prepareStatement(itemsSQL.toString());
for(Article item : article.getArticlesList()) {
item.setArticleId(article.getArticleId());
i= 1;
itemStatement.setLong(i, item.getArticleId()); i++;
itemStatement.setInt(i, item.getItemsOnStock()); i++;
itemStatement.setInt(i, item.getQuantity()); i++;
// etcetera...
itemStatement.executeUpdate();
}
cn.commit();
} catch (SQLException e) {
cn.rollback();
log.error(e.getMessage());
throw e;
}
finally {
DBHelper.releasePreparedStatement(ps);
DBHelper.releasePreparedStatement(itemStatement);
DBHelper.releaseConnection(cn);
}
}
I also tried the items insertion with addBatch() inside the for loop and then executeBatch(), but I get the same Closed Statement error upon reaching cn.commit(). I don't understand why it is closing; all connections and statements are released in the finally clause, so I get the feeling I'm making some fundamental error I'm not aware of. Any ideas? Thanks in advance!
EDIT: Below is the stack trace:
java.sql.SQLException: Closed Statement
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:189)
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:231)
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:294)
    at oracle.jdbc.driver.OracleStatement.ensureOpen(OracleStatement.java:6226)
    at oracle.jdbc.driver.OraclePreparedStatement.sendBatch(OraclePreparedStatement.java:592)
    at oracle.jdbc.driver.OracleConnection.commit(OracleConnection.java:1376)
    at com.evermind.sql.FilterConnection.commit(FilterConnection.java:201)
    at com.evermind.sql.OrionCMTConnection.commit(OrionCMTConnection.java:461)
    at com.evermind.sql.FilterConnection.commit(FilterConnection.java:201)
    at com.dao.ArticlesDAO.addArticle(ArticlesDAO.java:571)
    at com.action.registry.CustomBaseAction.execute(CustomBaseAction.java:57)
    at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431)
    at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236)
    at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1196)
    at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:432)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
    at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:765)
    at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:317)
    at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:790)
    at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:270)
    at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:112)
    at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:192)
    at java.lang.Thread.run(Unknown Source)
EDIT 2:
These are the parameters in the driver's datasource config. I thought the debugging process might be making it time out, but even when everything finishes in less than a second, the closed statement exception is still thrown:
min-connections="20"
max-connections="200"
inactivity-timeout="20"
stmt-cache-size="40"/>
It's usually best to create a statement, use it, and close it as soon as possible, and it does no harm to do so before the transaction gets committed. From reading the Oracle tutorial about the batch model, it sounds like having multiple statements open at one time could be a problem. I would try closing the ps object before working with the itemStatement, then moving the initialization
itemStatement = cn.prepareStatement(itemsSQL.toString());
to directly above the for loop, and also moving where you close the itemStatement to immediately after the for loop:
PreparedStatement itemStatement = cn.prepareStatement(itemsSQL.toString());
try {
for(Article item : article.getArticlesList()) {
item.setArticleId(article.getArticleId());
i= 1;
itemStatement.setLong(i, item.getArticleId()); i++;
itemStatement.setInt(i, item.getItemsOnStock()); i++;
itemStatement.setInt(i, item.getQuantity()); i++;
// etcetera...
itemStatement.executeUpdate();
}
} finally {
DBHelper.releasePreparedStatement(itemStatement);
}
It looks like some batching parameter set on the connection is causing the commit to look for unfinished work in the statement to flush; it finds the statement already closed and complains about it. This is odd, because at the point the commit blows up the code hasn't yet reached the finally block where the statement gets closed.
Reading up on Oracle batching models may be helpful. Also check the JDBC driver version and make sure it's right for the version of Oracle you're using, and see if there are any updates available for it.
I have created a Quartz job which runs in the background in my JBoss server and is responsible for updating some statistical data at regular intervals (coupled with some database flags).
To load and persist the data I am using Hibernate 4. Everything works fine except for one hiccup.
The entire thread, i.e. the job, is wrapped in a single transaction, which over time (as the amount of data increases) becomes huge and worrisome. I am trying to break this single large transaction into multiple small ones, such that each transaction processes only a subgroup of the data.
Problem: I tried, rather naively, to wrap the code in a loop and start/end a transaction at the start/end of the loop. As I expected, it didn't work. I have been looking around various forums for a solution but have not come across anything about managing multiple transactions in a single session (wherein only one transaction is active at a time).
I am relatively new to Hibernate and would appreciate any help that points me in a direction for achieving this.
Update: Adding code to demonstrate what I am trying to achieve and what I mean by breaking the work into multiple transactions, along with the stack trace produced when it is executed.
log.info("Starting Calculation Job.");
List<GroupModel> groups = Collections.emptyList();
DAOFactory hibDaoFactory = null;
try {
hibDaoFactory = DAOFactory.hibernate();
hibDaoFactory.beginTransaction();
OrganizationDao groupDao = hibDaoFactory.getGroupDao();
groups = groupDao.findAll();
hibDaoFactory.commitTransaction();
} catch (Exception ex) {
hibDaoFactory.rollbackTransaction();
log.error("Error in transaction", ex);
}
try {
hibDaoFactory = DAOFactory.hibernate();
StatsDao statsDao = hibDaoFactory.getStatsDao();
StatsScaledValuesDao statsScaledDao = hibDaoFactory.getStatsScaledValuesDao();
for (GroupModel grp : groups) {
try {
hibDaoFactory.beginTransaction();
log.info("Performing computation for Group " + grp.getName() + " ["
+ grp.getId() + "]");
List<Stats> statsDetail = statsDao.loadStatsGroup(grp.getId());
// Computing steps here
for (Entry origEntry : statsEntries) {
entry.setCalculatedItem1(origEntry.getCalculatedItem1());
entry.setCalculatedItem2(origEntry.getCalculatedItem2());
entry.setCalculatedItem3(origEntry.getCalculatedItem3());
StatsDetailsScaledValues scValues = entry.getScaledValues();
if (scValues == null) {
scValues = new StatsDetailsScaledValues();
scValues.setId(origEntry.getScrEntryId());
scValues.setValues(origEntry.getScaledValues());
} else {
scValues.setValues(origEntry.getScaledValues());
}
statsScaledDao.makePersistent(scValues);
}
hibDaoFactory.commitTransaction();
} catch (Exception ex) {
hibDaoFactory.rollbackTransaction();
log.error("Error in transaction", ex);
} finally {
}
}
} catch (Exception ex) {
log.error("Error", ex);
} finally {
}
log.info("Job Complete.");
The following is the exception stack trace I get upon execution of this job:
org.hibernate.SessionException: Session is closed!
at org.hibernate.internal.AbstractSessionImpl.errorIfClosed(AbstractSessionImpl.java:127)
at org.hibernate.internal.SessionImpl.createCriteria(SessionImpl.java:1555)
at sun.reflect.GeneratedMethodAccessor469.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.hibernate.context.internal.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:352)
at $Proxy308.createCriteria(Unknown Source)
at com.blueoptima.cs.dao.impl.hibernate.GenericHibernateDao.findByCriteria(GenericHibernateDao.java:132)
at com.blueoptima.cs.dao.impl.hibernate.ScrStatsManagementHibernateDao.loadStatsEntriesForOrg(ScrStatsManagementHibernateDao.java:22)
... 3 more
From what I have read so far about Hibernate, sessions, and transactions, my understanding is that when a session is created it is attached to the thread and lives throughout the thread's life, or until commit or rollback is called. Thus, when the first transaction is committed the session is closed and is unavailable for the rest of the thread's life.
My question remains: How can we have multiple transactions in a single session?
More detail and some examples would be great, but I think I can help with what you have written here.
Have one static SessionFactory (this is big on memory)
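For example, a minimal sketch of such a holder class (assuming a Hibernate 4.3-style bootstrap; the name HibernateUtil and the default hibernate.cfg.xml are just illustrative):
import org.hibernate.SessionFactory;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.Configuration;
import org.hibernate.service.ServiceRegistry;
public class HibernateUtil {
    // Built once; creating a SessionFactory is expensive, so keep a single static instance.
    private static final SessionFactory sessionFactory = buildSessionFactory();
    private static SessionFactory buildSessionFactory() {
        Configuration configuration = new Configuration().configure(); // reads hibernate.cfg.xml
        ServiceRegistry registry = new StandardServiceRegistryBuilder()
                .applySettings(configuration.getProperties())
                .build();
        return configuration.buildSessionFactory(registry);
    }
    public static SessionFactory getSessionFactory() {
        return sessionFactory;
    }
}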
Also with your transactions you want something like this.
SomeClass object = new SomeClass();
Session session = sessionFactory().openSession() // create the session object
session.beginTransaction(); //begins the transaction
session.save(object); // queues the save; REMEMBER it isn't persisted until the transaction is committed
session.getTransaction().commit(); // actually persisting the object
session.close(); // closes the session
This is how I use my transactions; I am not sure I run as many transactions at a time as you do. But the Session object is lightweight in memory compared to the SessionFactory.
If you want to save more objects at a time, you could do it in one transaction, for example:
SomeClass object1 = new SomeClass();
SomeClass object2 = new SomeClass();
SomeClass object3 = new SomeClass();
session.beginTransaction();
session.save(object1);
session.save(object2);
session.save(object3);
session.getTransaction().commit(); // when commit is called it will save all 3 objects
session.close();
Hope this helps in some way or points you in the right direction.
I think you could configure your program to condense transactions as well. :)
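Applied to the job in the question, a hedged sketch of several short transactions on one session could look like this (GroupModel, groups, and log are stand-ins from the question, HibernateUtil is the holder sketched above, and a plain openSession() is used so that committing does not close the session the way a thread-bound current session does):
import org.hibernate.Session;
import org.hibernate.Transaction;
Session session = HibernateUtil.getSessionFactory().openSession();
try {
    for (GroupModel grp : groups) {
        Transaction tx = session.beginTransaction();   // a fresh transaction per group
        try {
            // load, compute and persist this group's stats here, e.g. session.saveOrUpdate(...)
            tx.commit();                                // flushes only this group's work
        } catch (RuntimeException ex) {
            tx.rollback();                              // undoes only this group's work
            log.error("Error in transaction for group " + grp.getId(), ex);
        }
    }
} finally {
    session.close();
}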
Edit
Here is a great YouTube tutorial. This guy really broke it down for me.
Hibernate Tutorials