I am working with JPA. My web application takes 60 seconds to execute this method, and I want it to run faster. How can I achieve that?
public boolean evaluateStudentTestPaper (long testPostID, long studentID, long howManyTimeWroteExam) {
Gson uday = new Gson();
Logger custLogger = Logger.getLogger("StudentDao.java");
// custLogger.info("evaluateTestPaper test paper for testPostID: " +
// testPostID);
long subjectID = 0;
// checking in table
EntityManagerFactory EMF = EntityManagerFactoryProvider.get();
EntityManager em = EMF.createEntityManager();
List<StudentExamResponse> studentExamResponses = null;
try {
studentExamResponses = em
.createQuery(
"SELECT o FROM StudentExamResponse o where o.studentId=:studentId And o.testPostID=:testPostID and o.howManyTimeWroteExam=:howManyTimeWroteExam")
.setParameter("studentId", studentID).setParameter("testPostID", testPostID)
.setParameter("howManyTimeWroteExam", howManyTimeWroteExam).getResultList();
System.out.println("studentExamResponses--------------------------------------------------"
+ uday.toJson(studentExamResponses) + "---------------------------------------");
} catch (Exception e) {
custLogger.info("exception at getting student details:" + e.toString());
studentExamResponses = null;
}
int studentExamResponseSize = (studentExamResponses == null) ? 0 : studentExamResponses.size(); // guard: the catch block above sets the list to null
if (AppConstants.SHOWLOGS.equalsIgnoreCase("true")) {
custLogger.info("student questions list:" + studentExamResponseSize);
}
// Get all questions based on student id and test post id
List<ExamPaperRequest> examPaperRequestList = new ArrayList<ExamPaperRequest>();
List<Questions> questionsList = new ArrayList<Questions>();
// StudentExamResponse [] studentExamResponsesArgs =
// (StudentExamResponse[]) studentExamResponses.toArray();
// custLogger.info("Total questions to be evaluated: " +
// examPaperRequestList.size());
List<StudentTestResults> studentTestResultsList = new ArrayList<StudentTestResults>();
StudentTestResults studentTestResults = null;
StudentResults studentResults = null;
String subjectnames = "", subjectMarks = "";
int count = 0;
boolean lastIndex = false;
if (studentExamResponses != null && studentExamResponseSize > 0) {
// studentExamResponses.forEach(studentExamResponses->{
for (StudentExamResponse o : studentExamResponses) {
// ~900 lines of code here, including database queries executed per iteration
}
}
As @Nikos Paraskevopoulos mentioned, it is probably the ~900 * N database round trips inside that for loop.
I'd say avoid database calls as much as you can, especially inside a loop like that.
You can try to extend your current StudentExamResponse query with more clauses - mainly the ones you are applying inside the for loop - which could also reduce the number of items you iterate over, as sketched below.
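For example, a minimal sketch of that idea (the getQuestionId/getId accessors on the entities are assumptions for illustration): collect the IDs once, fetch the related rows in a single IN query, and turn the loop body's queries into map lookups.
// Hypothetical batching sketch: one IN query instead of N per-iteration queries.
List<Long> questionIds = studentExamResponses.stream()
        .map(StudentExamResponse::getQuestionId) // assumed accessor
        .distinct()
        .collect(java.util.stream.Collectors.toList());

List<Questions> questions = em.createQuery(
        "SELECT q FROM Questions q WHERE q.id IN :ids", Questions.class)
        .setParameter("ids", questionIds)
        .getResultList();

// Index by id so the loop body becomes a cheap in-memory lookup.
Map<Long, Questions> questionsById = questions.stream()
        .collect(java.util.stream.Collectors.toMap(Questions::getId, q -> q));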
My guess would be that your select query is taking the time.
If possible, set the query timeout to less than 60 seconds and confirm this.
Ways of setting the query timeout can be found here - How to set the timeout period on a JPA EntityManager query.
If the query is the cause, then you will need to work on making the select optimal.
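For instance, a minimal sketch using the standard JPA 2.0 query-timeout hint (value in milliseconds; support depends on the provider, and Hibernate honors it):
studentExamResponses = em
        .createQuery("SELECT o FROM StudentExamResponse o where o.studentId = :studentId "
                + "and o.testPostID = :testPostID and o.howManyTimeWroteExam = :howManyTimeWroteExam",
                StudentExamResponse.class)
        .setParameter("studentId", studentID)
        .setParameter("testPostID", testPostID)
        .setParameter("howManyTimeWroteExam", howManyTimeWroteExam)
        .setHint("javax.persistence.query.timeout", 10000) // fail after 10 s instead of hanging
        .getResultList();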
I have a piece of Java code that is accessed by multiple threads:
synchronized (this.getClass())
{
System.out.println("stsrt");
certRequest.setRequestNbr(
generateRequestNumber(
certInsuranceRequestAddRq.getAccountInfo().getAccountNumberId()));
System.out.println("outside funcvtion"+certRequest.getRequestNbr());
reqId = Utils.getUniqueId();
certRequest.setRequestId(reqId);
System.out.println(reqId);
ItemIdInfo itemIdInfo = new ItemIdInfo();
itemIdInfo.setInsurerId(certRequest.getRequestId());
certRequest.setItemIdInfo(itemIdInfo);
dao.insert(certRequest);
addAccountRel();
System.out.println("end");
}
The function generateRequestNumber() generates a request number based on data fetched from two database tables.
public String generateRequestNumber(String accNumber) throws Exception
{
String requestNumber = null;
if (accNumber != null)
{
String SQL_QUERY = "select CERTREQUEST.requestNbr from CertRequest as CERTREQUEST, "
+ "CertActObjRel as certActObjRel where certActObjRel.certificateObjkeyId=CERTREQUEST.requestId "
+ " and certActObjRel.certObjTypeCd=:certObjTypeCd "
+ " and certActObjRel.certAccountId=:accNumber ";
String[] parameterNames = { "certObjTypeCd", "accNumber" };
Object[] parameterValues = new Object[]
{
Constants.REQUEST_RELATION_CODE, accNumber
};
List<?> resultSet = dao.executeNamedQuery(SQL_QUERY,
parameterNames, parameterValues);
// List<?> resultSet = dao.retrieveTableData(SQL_QUERY);
if (resultSet != null && resultSet.size() > 0) {
requestNumber = (String) resultSet.get(0);
}
int maxRequestNumber = -1;
if (requestNumber != null && requestNumber.length() > 0) {
maxRequestNumber = maxValue(resultSet.toArray());
requestNumber = Integer.toString(maxRequestNumber + 1);
} else {
requestNumber = Integer.toString(1);
}
System.out.println("inside function request number"+requestNumber);
return requestNumber;
}
return null;
}
The tables CertRequest and CertActObjRel used in generateRequestNumber() are updated by the calls "dao.insert(certRequest);" and "addAccountRel();" in my initial code, respectively. Also, the System.out.println() statements in my initial code produce the following output:
stsrt
inside function request number73
outside funcvtion73
A1664886-5F84-45A9-AB5F-C69768B83EAD
end
stsrt
inside function request number73
outside funcvtion73
44DAD872-6A1D-4524-8A32-15741FAC0CA9
end
Notice that although both threads run in a synchronized manner, the generated request number is the same for both. My assumption is that the database updates for CertRequest and CertActObjRel are only committed after both threads finish their execution.
Could anyone help me to fix this?
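If that assumption holds, one way to confirm it is to commit the insert before the lock is released, so the second thread can see the first thread's row. A minimal sketch, assuming you have access to the EntityManager and can manage the transaction here (that wiring is an assumption, not shown in the question):
synchronized (this.getClass()) {
    em.getTransaction().begin();
    certRequest.setRequestNbr(generateRequestNumber(
            certInsuranceRequestAddRq.getAccountInfo().getAccountNumberId()));
    certRequest.setRequestId(Utils.getUniqueId());
    dao.insert(certRequest);
    addAccountRel();
    // Commit inside the synchronized block: the next thread's
    // generateRequestNumber() query can now see this row.
    em.getTransaction().commit();
}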
Hi, I am using Hibernate to update the records in a table, and I am inserting the same records into another table. This happens in a loop, but I am getting a "lock wait timeout" exception when updating the records. Could anybody help me resolve this problem? Thanks in advance!
try {
SalesInventoryDAO dao = new SalesInventoryDAO();
sess = HibernateUtil.getSessionFactory().openSession();
Transaction tx = sess.beginTransaction();
GoodsRecievedForm item = (GoodsRecievedForm) form;
GoodsRecieved bk = new GoodsRecieved();
bk.setGoodsId(item.getGoodsId());
InventoryOrder order = (InventoryOrder) sess.get(InventoryOrder.class, item.getOrderNo());
bk.setOrderNo(order);
// if (order.getQuotation().getQuotationNo() != null) {
// bk.setQuotation(order.getQuotation().getQuotationNo());
// } else {
// bk.setQuotation(null);
// }
java.util.Date temp = new SimpleDateFormat("MM/dd/yyyy", Locale.ENGLISH).parse(item.getRecievedDate());
java.sql.Date temp1 = new java.sql.Date(temp.getTime());
bk.setRecievedDate(temp1);
bk.setOrderQty(order.getTotalqty());
bk.setReceivedPersonName(item.getReceivedPersonName());
bk.setReceivedQty(item.getReceivedQty());
bk.setConditionOfMaterial(item.getConditionOfMaterial());
UserEntity msg;
HttpSession session = request.getSession(false);
msg = (UserEntity) session.getAttribute("user");
bk.setAddedBy(msg);
bk.setAddedDate(new Date());
int[] item1111 = item.getGoodsDetails();
String[] productre = item.getGoodsDetailsName();
float proqty[] = item.getGoodsDetailsQty();
float price[] = item.getGoodsDetailsPrice();
float receivedqty[] = item.getReceivedquantity();
GoodsReceivedDetails mb;
Set<GoodsReceivedDetails> purDetails = new HashSet();
for (int i = 0; i < productre.length; i++) {
mb = new GoodsReceivedDetails();
mb.setGoodsDetailsName(productre[i]);
mb.setGoodsDetailsQty(proqty[i]);
mb.setGoodsDetailsPrice(price[i]);
mb.setReceivedquantity(receivedqty[i]);
//System.out.println("productre" + productre[i]);
int id3 = item1111[i];
//System.out.println("id3id3id3id3" + id3);
// int id3 = Integer.parseInt(productre[i]);
Item idf = (Item) sess.get(Item.class, id3);
float qty = (idf.getItemStock() + mb.getReceivedquantity());
// mb.setItemId(idf);
// mb.setItemId(idf);
dao.updateitem(qty, idf);
//dao.updateitem(idf);
mb.setGoodsId(bk);
sess.save(mb);
purDetails.add(mb);
}
bk.setGoodsDetails(purDetails);
sess.save(bk);
tx.commit();
//System.out.println("comming");
// List ls = gdao.getOrderItems(order.getOrderId());
// for (Iterator it = ls.iterator(); it.hasNext();) {
// InventoryOrderDetails inv = (InventoryOrderDetails) it.next();
// gdao.updateitem(inv.getItemId().getItemStock() + bk.getReceivedQty(), inv.getItemId());
// }
} catch (Exception e) {
e.printStackTrace();
} finally {
sess.close();
}
This is my DAO code:
public void updateitem(float stock, Item itm) {
Session ses = HibernateUtil.getSessionFactory().openSession();
////System.out.println("itmitmitm" + itm.getItemId());
Transaction tx = ses.beginTransaction();
Query qry = ses.createQuery("UPDATE Item set itemStock='" + stock + "' where itemId='" + itm.getItemId() + "'");
qry.executeUpdate();
ses.close();
tx.commit();
}
You have initialized the transaction with sess.beginTransaction(); at the beginning, and before even committing it you are trying to start another one. This will lead to leaks, as the previous transaction has not been committed. So, before you begin another transaction, commit the previous one.
And here are some suggestions:
A 'lock wait timeout' typically occurs when a transaction is waiting on row(s) of data to update that are already locked by some other transaction.
Most of the time the problem lies on the database side. Possible causes include an inappropriate table design, a large amount of data, constraints, etc.
Please check out this for more details.
Commit transaction before opening new one
Transaction currentTx = sess.beginTransaction();
..
currentTx.commit();
..
currentTx = sess.beginTransaction();
EDIT:
In the DAO you open a new session and transaction instead of reusing the existing ones; a corrected sketch follows. You should also read some tutorials about transaction management in Java/Hibernate.
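A minimal sketch of that advice applied to the DAO (changing the signature to accept the caller's Session is my suggestion, not from the original post); the named parameters also avoid concatenating values into the HQL string:
public void updateitem(float stock, Item itm, Session sess) {
    // Reuse the caller's open session and transaction; do not open,
    // commit, or close anything here - the caller owns the transaction.
    sess.createQuery("update Item set itemStock = :stock where itemId = :id")
        .setParameter("stock", stock)
        .setParameter("id", itm.getItemId())
        .executeUpdate();
}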
How can I improve the search functionality? I have written some code to search for something, but the search is taking too much time. Here are the code snippets.
I am pulling the data from the database using this method:
OracleConnection connection = null;
OraclePreparedStatement ptmst = null;
OracleResultSet rs = null;
OracleCallableStatement cstmt = null;
StringBuffer strBfr = new StringBuffer();
ArrayList myList = new ArrayList();
try
{
connection = (OracleConnection) TransactionScope.getConnection();
strBfr.append("select distinct .......... ");
ptmst = (OraclePreparedStatement)connection.prepareStatement(strBfr.toString());
rs = (OracleResultSet)ptmst.executeQuery();
while (rs.next())
{
HashMap hashItems = new HashMap();
hashItems.put("first",rs.getString(1));
hashItems.put("second",rs.getString(2));
myList.add(hashItems);
}
}
catch (Exception e) {
}
finally {
try {
if (ptmst != null) {
ptmst.close();
}
} catch (Exception e) {
}
try {
if (connection != null) {
TransactionScope.releaseConnection(connection);
}
} catch (Exception e) {
}
}
return myList;
In my jsp:
ArrayList getValues = new ArrayList();
getValues = //calling Method here.
for(int i=0; i < getValues.size();i++)
{
HashMap quoteSrch = (HashMap) getValues.get(i);
first = (String)quoteSrch.get("first");
second = (String)quoteSrch.get("second");
}
Query:
SELECT DISTINCT(mtl.segment1),
mtl.description ,
mtl.inventory_item_id ,
mtl.attribute16
FROM mtl_system_items_b mtl,
mtl_system_items_tl k
WHERE 1 =1
AND mtl.organization_id = ?
AND k.inventory_item_id = mtl.inventory_item_id
AND NVL(orderable_on_web_flag,'N')= 'Y'
AND NVL(web_status,'UNPUBLISHED') = 'PUBLISHED'
AND mtl.SEGMENT1 LIKE ? --Here is the search term
Make sure organization_id, inventory_item_id, and especially SEGMENT1 are indexed in your tables (note that a LIKE pattern with a leading wildcard, e.g. '%term', cannot use a normal index on SEGMENT1).
Your query is pretty standard; if that doesn't help, it seems like your DB server is responding slowly, which could be due to a number of reasons such as low space, low memory, or slow disk reads.
You can then ask your DBA/server admins to check that.
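If you want to create the index yourself, a hypothetical JDBC sketch (the index name is an assumption; normally a DBA would run the DDL directly):
// One-off DDL: lets lookups on SEGMENT1 use an index
// (note: a leading-wildcard LIKE still cannot use it).
try (java.sql.Statement st = connection.createStatement()) {
    st.execute("CREATE INDEX mtl_items_seg1_idx ON mtl_system_items_b (segment1)");
}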
First you need to find out the real problem:
Is it the DB query?
Is it the network (are the app and the DB located on the same machine)?
Once you have identified that it is the DB query, it becomes more of a DB question:
What do the two tables look like?
Are any indexes used?
What does the data look like (how many rows, etc.)?
After you have analyzed this, you should be able to post the question differently and expect an answer. I am not a DB guy, but I am sure someone will be able to provide some pointers.
Tuning to be done:
Check that TransactionScope.getConnection() is giving a connection without any delay.
Instead of creating a new HashMap (hashItems = new HashMap();) per row, you can use
while (rs.next()){
myList.add(rs.getString(1) + "delimiter" + rs.getString(2));
}
in the JSP use
first = ((String) getValues.get(i)).split("delimiter")[0];
second = ((String) getValues.get(i)).split("delimiter")[1];
so that you can reduce memory.
If possible, use a row limit in your query, and use an index on SEGMENT1, as sketched below.
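For example, a minimal sketch of capping the Oracle result set with ROWNUM (the limit of 200 is an arbitrary assumption):
// Wrap the original statement so Oracle stops fetching after 200 rows.
strBfr.append("select * from ( ");
strBfr.append("select distinct .......... "); // the original query, unchanged
strBfr.append(" ) where ROWNUM <= 200");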
I want Solr to always retrieve all documents found by a search (I know Solr wasn't built for that, but anyway), and I am currently doing it with this code:
...
QueryResponse response = solr.query(query);
int offset = 0;
int totalResults = (int) response.getResults().getNumFound();
List<Article> ret = new ArrayList<Article>(totalResults);
query.setRows(FETCH_SIZE);
while(offset < totalResults) {
//requires an int? wtf?
query.setStart((int) offset);
int left = totalResults - offset;
if(left < FETCH_SIZE) {
query.setRows(left);
}
response = solr.query(query);
List<Article> current = response.getBeans(Article.class);
offset += current.size();
ret.addAll(current);
}
...
This works, but it is pretty slow if a query gets over 1000 hits (I've read about that on here; it is caused by Solr because I am setting the start every time, which - for some reason - takes some time). What would be a nicer (and faster) way to do this?
To improve on the suggested answer you could use a streamed response. This was added especially for the case where one fetches all results. As you can see in Solr's JIRA, that person wants to do the same thing you do. This has been implemented for Solr 4.
This is also described in Solrj's javadoc.
Solr will pack the response and create a whole XML/JSON document before it starts sending it. Then your client is required to unpack all of that and offer it to you as a list. By using streaming and parallel processing, which you can do with such a queued approach, the performance should improve further.
Yes, you will lose the automatic bean mapping, but as performance is a factor here, I think that is acceptable.
Here is a sample unit test:
public class StreamingTest {
@Test
public void streaming() throws SolrServerException, IOException, InterruptedException {
HttpSolrServer server = new HttpSolrServer("http://your-server");
SolrQuery tmpQuery = new SolrQuery("your query");
tmpQuery.setRows(Integer.MAX_VALUE);
final BlockingQueue<SolrDocument> tmpQueue = new LinkedBlockingQueue<SolrDocument>();
server.queryAndStreamResponse(tmpQuery, new MyCallbackHandler(tmpQueue));
SolrDocument tmpDoc;
do {
tmpDoc = tmpQueue.take();
} while (!(tmpDoc instanceof PoisonDoc));
}
private class PoisonDoc extends SolrDocument {
// marker to finish queuing
}
private class MyCallbackHandler extends StreamingResponseCallback {
private BlockingQueue<SolrDocument> queue;
private long currentPosition;
private long numFound;
public MyCallbackHandler(BlockingQueue<SolrDocument> aQueue) {
queue = aQueue;
}
@Override
public void streamDocListInfo(long aNumFound, long aStart, Float aMaxScore) {
// called before start of streaming
// probably use for some statistics
currentPosition = aStart;
numFound = aNumFound;
if (numFound == 0) {
queue.add(new PoisonDoc());
}
}
@Override
public void streamSolrDocument(SolrDocument aDoc) {
currentPosition++;
System.out.println("adding doc " + currentPosition + " of " + numFound);
queue.add(aDoc);
if (currentPosition == numFound) {
queue.add(new PoisonDoc());
}
}
}
}
You might improve performance by increasing FETCH_SIZE. Since you are getting all the results, pagination doesn't make sense unless you are concerned about memory or some such. If 1000 results are liable to cause a memory overflow, I'd say your current performance seems pretty outstanding, though.
So I would try getting everything at once, simplifying this to something like:
//WHOLE_BUNCHES is a constant representing a reasonable max number of docs we want to pull here.
//Integer.MAX_VALUE would probably invite an OutOfMemoryError, but that would be true of the
//implementation in the question anyway, since they were still being stored in the list at the end.
query.setRows(WHOLE_BUNCHES);
QueryResponse response = solr.query(query);
int totalResults = (int) response.getResults().getNumFound(); //If you even still need this figure.
List<Article> ret = response.getBeans(Article.class);
If you need to keep the pagination though:
You are performing this first query:
QueryResponse response = solr.query(query);
and are populating the number of found results from it, but you are not pulling any results with the response. Even if you keep pagination here, you could at least eliminate one extra query here.
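A minimal sketch of that saving: set the rows before the first query and keep its documents instead of discarding them:
query.setRows(FETCH_SIZE);
QueryResponse response = solr.query(query); // first page is now useful data
int totalResults = (int) response.getResults().getNumFound();
List<Article> ret = new ArrayList<Article>(totalResults);
ret.addAll(response.getBeans(Article.class));
int offset = ret.size();
// ...continue paging from offset as in the original loop...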
This:
int left = totalResults - offset;
if(left < FETCH_SIZE) {
query.setRows(left);
}
is unnecessary: setRows specifies a maximum number of rows to return, so asking for more than are available won't cause any problems.
Finally, apropos of nothing, but I have to ask: what argument would you expect setStart to take if not an int?
Use the logic below to fetch Solr data in batches, to optimize the performance of the Solr data fetch:
public List<Map<String, Object>> getData(int id, Set<String> fields) throws SolrServerException {
final int SOLR_QUERY_MAX_ROWS = 3;
long start = System.currentTimeMillis();
SolrQuery query = new SolrQuery();
String queryStr = "id:" + id;
LOG.info(queryStr);
query.setQuery(queryStr);
query.setRows(SOLR_QUERY_MAX_ROWS);
QueryResponse rsp = server.query(query, SolrRequest.METHOD.POST);
List<Map<String, Object>> mapList = null;
if (rsp != null) {
long total = rsp.getResults().getNumFound();
System.out.println("Total count found: " + total);
// Solr query batch
mapList = new ArrayList<Map<String, Object>>();
if (total <= SOLR_QUERY_MAX_ROWS) {
addAllData(mapList, rsp, fields);
} else {
// page through the results; the final (possibly partial) page is
// added before the loop exits, so no batch is lost
int marker = 0;
do {
addAllData(mapList, rsp, fields);
marker = marker + SOLR_QUERY_MAX_ROWS;
if (marker < total) {
query.setStart(marker);
rsp = server.query(query, SolrRequest.METHOD.POST);
}
} while (marker < total);
}
}
long end = System.currentTimeMillis();
LOG.debug("SOLR Performance: getData: " + (end - start));
return mapList;
}
private void addAllData(List<Map<String, Object>> mapList, QueryResponse rsp,Set<String> fields) {
for (SolrDocument sdoc : rsp.getResults()) {
Map<String, Object> map = new HashMap<String, Object>();
for (String field : fields) {
map.put(field, sdoc.getFieldValue(field));
}
mapList.add(map);
}
}
I'm trying to debug a method in Java using NetBeans.
That method is:
public Integer getNumberOfClamps(Locations paLocation) {
Integer ret = -1;
List list = new ArrayList();
String removeme = "ABC";
if (paLocation == null) {
return ret;
}
try {
IO io = new IO(this.getSchemaName());
Session session = io.getSession();
String sql = "select count(*) from assets a join assettypes at on (at.id = a.assettype_id) ";
sql += "where a.currentlocation_id = " + paLocation.getId() + " and at.clamp = 1 and at.active = 1;";
list = session.createQuery(sql).list();
// for some reason, list is empty yet MySQL reports 40 records
// and the following two lines are never reached!
ret = list.size();
removeme = "WHAT???";
} catch (Exception ex) {
ret = -1; // explicitly set return
} finally {
return ret;
}
}
Towards the middle of the method you will see list = session.createQuery(sql).list();
For some reason, this is returning an empty list even though when the SQL is run manually, I get 40 results.
But the odd part is that once the .list() is called, it jumps to the finally block and never reaches the rest! So for testing, 'removeme' should equal WHAT??? but the debugger reports it as still ABC.
What gives?
You are using the wrong method: createQuery() expects HQL syntax, so it throws on your native SQL; the catch block swallows the exception (leaving list empty and removeme as "ABC"), and the return in the finally block is why the debugger jumps straight there. Change the call to createSQLQuery().
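A minimal sketch of that fix (the named parameter replaces the string concatenation and is my addition). Note also that a count query returns a single scalar row, so list().size() would be 1 no matter how many assets match; read the value itself instead:
String sql = "select count(*) from assets a "
        + "join assettypes at on (at.id = a.assettype_id) "
        + "where a.currentlocation_id = :locId and at.clamp = 1 and at.active = 1";
Number count = (Number) session.createSQLQuery(sql)
        .setParameter("locId", paLocation.getId())
        .uniqueResult();
ret = count.intValue();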