I have a script that fetches around 25,000 different ID values and uses them to make changes in another table. The programmer wrote code that looks up each ID (dialid in the code) in a table of 10 million records, and every query in the loop takes around 1 second to execute. My idea is to fetch the last 30 days of records with one SQL query, put them into an array, and check only the array.
My question is: how do I do that in Java? Is there an equivalent of PHP's in_array function? I'm solid in PHP, but a beginner in Java...
private Integer getDialId(int predictiveId) {
    Integer dialid = null;
    StringBuilder sql = new StringBuilder("SELECT dialid from dial where PREDICTIVE_DIALID=");
    sql.append(predictiveId); // this predictiveId is calculated in another part of the code
    ResultSet rsDialId = null;
    Statement s1 = null;
    try {
        s1 = oracle.getConn().createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                ResultSet.CONCUR_UPDATABLE, ResultSet.CLOSE_CURSORS_AT_COMMIT);
        rsDialId = s1.executeQuery(sql.toString());
        if (rsDialId.next()) {
            dialid = rsDialId.getInt("dialid");
        }
    } catch (SQLException ex) {
        Logger.getLogger(MediatelCdrSync.class.getName()).log(Level.SEVERE, null, ex);
    } finally {
        try {
            if (s1 != null) {
                s1.close();
            }
            if (rsDialId != null) {
                rsDialId.close();
            }
        } catch (SQLException ex) {
            Logger.getLogger(MediatelCdrSync.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
    System.out.println("DIALID = " + dialid);
    return dialid;
}
Thanks
If you have a performance problem, I'd start by finding out why the query takes one second per execution. If that time is spent in the database because the dial table has no index on the PREDICTIVE_DIALID column, there is very little you can do at the Java level.
That said, the JDBC code reveals some problems, especially when used with an Oracle database.
The biggest issue is that you are hardcoding the query parameter, which forces Oracle to re-"hard parse" the query every time; a second, minor one is that the result set is scrollable and updatable while you only need to read the first row. If you want to make a small modification to your code, change it to something like this pseudocode:
PreparedStatement ps = connection.prepareStatement("SELECT dialid from dial where PREDICTIVE_DIALID=?");
for (int i = 0; i < 10; i++) { // your 25000-element loop is this one
    // this should be the start of the body of your getDialId function, which also takes a prepared statement
    ps.setInt(1, i);
    ResultSet rs = ps.executeQuery();
    if (rs.next()) {
        rs.getInt("dialid");
    }
    rs.close();
    // your getDialId ends here
}
ps.close();
With this minimal Java change you should see a performance increase, but you must still check the performance of the single query, since if an index is missing there is very little you can do in Java code.
Another, more complicated solution is to create a temporary table, fill it with all 25,000 predictiveId values, and then issue one query that joins dial with your temporary table; with one result set (and one query) you can find all the dialid values you need. A JDBC batch insert into the temp table speeds up insertion time noticeably.
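The temp-table approach could be sketched roughly like this. It assumes an Oracle global temporary table named tmp_predictive_ids created once outside the application (the table, column, and batch-size choices here are all illustrative, not from the question):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TempTableLookup {

    static final int BATCH_SIZE = 500; // example value; tune for your environment

    // Assumes a DBA has created, once:
    //   CREATE GLOBAL TEMPORARY TABLE tmp_predictive_ids (predictive_id NUMBER)
    //   ON COMMIT DELETE ROWS;
    static Map<Integer, Integer> lookupDialIds(Connection conn, List<Integer> predictiveIds)
            throws SQLException {
        Map<Integer, Integer> result = new HashMap<>();
        conn.setAutoCommit(false);
        try (PreparedStatement ins =
                 conn.prepareStatement("INSERT INTO tmp_predictive_ids (predictive_id) VALUES (?)")) {
            int n = 0;
            for (Integer id : predictiveIds) {
                ins.setInt(1, id);
                ins.addBatch();
                if (++n % BATCH_SIZE == 0) {
                    ins.executeBatch(); // send a full batch to the server
                }
            }
            ins.executeBatch();         // flush the remainder
        }
        // One join replaces 25,000 single-row lookups
        String join = "SELECT t.predictive_id, d.dialid "
                    + "FROM dial d JOIN tmp_predictive_ids t "
                    + "ON d.predictive_dialid = t.predictive_id";
        try (PreparedStatement ps = conn.prepareStatement(join);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                result.put(rs.getInt(1), rs.getInt(2));
            }
        }
        conn.commit(); // ON COMMIT DELETE ROWS clears the temp table
        return result;
    }

    // Pure helper: how many executeBatch() round trips a given id count needs.
    static int batchCount(int ids, int batchSize) {
        return (ids + batchSize - 1) / batchSize;
    }
}
```

With 25,000 ids and a batch size of 500 this is 50 insert round trips plus one query, instead of 25,000 queries.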
If you are planning to fetch fewer records and store the result in an array, then I think it is better to limit your search by creating a view in the database with a limited set of records (say, records for the last 2 years), and use that view in your select query:
"SELECT dialid from dial_view WHERE PREDICTIVE_DIALID = "
Hope it will help :)
I've encountered a perplexing problem. I've got this simple method for extracting data from a table using SELECT *. However, when it iterates through the table, it stops the iteration before it has gone through all the entries in said table. I've used the debugger to narrow the problem down to where the rows are added to the ArrayList. But still, it stops before it should. Any ideas?
public static ArrayList<Actors> acList() throws Exception {
    ArrayList<Actors> acList = new ArrayList<Actors>();
    try {
        getConnection();
        PreparedStatement st = conn.prepareStatement("SELECT * FROM Actors");
        ResultSet rst = st.executeQuery();
        Actors ac;
        while (rst.next()) {
            ac = new Actors(rst.getInt("ActorId"), rst.getString("fName"), rst.getString("eName"),
                    rst.getInt("Age"), rst.getInt("NoOfCredits"), rst.getString("Country"));
            acList.add(ac);
        }
    } catch (Exception e) {
        e.printStackTrace(); // an empty catch block here would silently swallow the error that stops the loop
    }
    return acList;
}
Found the answer; posting in case anyone else encounters a similar problem. One row in my MySQL table had an entry (which I believed I had deleted) whose type didn't match what I was trying to extract: the list expected an int but the value in the row was a varchar. I edited the table so the types were correct. Now it works ^^
I have an application that reads from an Excel sheet with more than 25,000 records. I measured the time to insert the records into the database at 15 minutes. I'm currently using MySQL, which may change to DB2 later on.
When I insert all statements directly into MySQL, the time taken is 14 minutes.
Is that normal? Are there any ways to increase performance, or code enhancements?
/**
 * insert records excel sheet in tables
 * @param dbConnection
 * @throws Exception
 */
void insertRecords(Connection dbConnection, Sheet sheet, int sizeColumns, String tableName) throws Exception {
    PreparedStatement preparedStatement = null;
    try {
        Sheet datatypeSheet = sheet;
        Iterator<Row> iterator = datatypeSheet.iterator();
        StringBuilder sbInsert = new StringBuilder(1024);
        // skip first row
        iterator.next();
        // iterate over the rows of the excel sheet
        while (iterator.hasNext()) {
            sbInsert.setLength(0);
            Row currentRow = iterator.next();
            sbInsert.append("insert into " + tableName.trim().replaceAll(" ", "_") + " values(");
            int currentCellLenght = 0;
            // iterate over the cells of the row
            for (int cn = 0; cn < sizeColumns; cn++) {
                Cell currentCell = currentRow.getCell(cn, MissingCellPolicy.CREATE_NULL_AS_BLANK);
                currentCell.setCellType(Cell.CELL_TYPE_STRING);
                String cellValue;
                cellValue = currentCell.getStringCellValue();
                sbInsert.append("'" + cellValue.replaceAll("\'", "") + "'");
                currentCellLenght++;
                if (currentCellLenght == sizeColumns) {
                    break;
                }
                // add insert rows
                if (currentCellLenght != sizeColumns) {
                    sbInsert.append(",");
                }
            }
            sbInsert.append(")");
            preparedStatement = dbConnection.prepareStatement(sbInsert.toString());
            preparedStatement.execute();
        }
    } catch (EncryptedDocumentException e) {
        e.printStackTrace();
    } catch (Exception e) {
        e.printStackTrace();
        throw new Exception(e.getMessage());
    } finally {
        if (preparedStatement != null) {
            preparedStatement.close();
        }
        dbConnection.close();
    }
}
When you naively hit an InnoDB table in MySQL with a series of insert statements, it automatically commits each statement before it takes the next one. That takes lots of extra time.
You can work around this by doing your inserts in multiple-row chunks.
One way is to chunk your inserts with transactions. At the beginning of your operation, do Connection.setAutoCommit(false);. Then, every few hundred rows do Connection.commit();. Don't forget to do a last Connection.commit(); after all your rows are processed. And, if you'll go on to use the same connection for other things, do Connection.setAutoCommit(true);.
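A minimal sketch of that transaction-chunking pattern, assuming a placeholder table imported_rows with two columns (the chunk size of 500 is just an example to tune):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class ChunkedInsert {

    static final int CHUNK = 500; // commit every 500 rows; tune for your setup

    // Sketch of the pattern described above; table and column names are placeholders.
    static void insertChunked(Connection conn, List<String[]> rows) throws SQLException {
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO imported_rows (col1, col2) VALUES (?, ?)")) {
            int n = 0;
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.executeUpdate();
                if (++n % CHUNK == 0) {
                    conn.commit(); // one transaction per CHUNK rows
                }
            }
            conn.commit();         // commit the final partial chunk
        } finally {
            conn.setAutoCommit(oldAutoCommit); // restore for later users of the connection
        }
    }

    // Pure helper: number of commit() calls issued for a given row count.
    static int commitCount(int rows, int chunk) {
        return rows / chunk + 1; // intermediate commits plus the final one
    }
}
```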
Another way is to issue multi-row inserts. They look something like this.
INSERT INTO table VALUES
(val1, val2, val3, val4),
(val5, val6, val7, val8),
...
(val9, vala, valb, valc);
Each set of values in parentheses is a single row. You can fit ten or even fifty rows in each of these insert statements. This itself is a way of chunking your inserts, because each multirow insert uses just one transaction.
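A multi-row insert can also be built with JDBC `?` placeholders instead of concatenated values, which avoids quoting problems as well. This is a sketch; the helper name is my own, and the caller is responsible for binding rowCount × columns parameters:

```java
public class MultiRowInsert {

    // Builds a parameterized multi-row INSERT like the one sketched above:
    //   INSERT INTO t (a, b) VALUES (?, ?), (?, ?), ...
    // Table and column names come from the caller; this only assembles placeholders.
    static String build(String table, String[] columns, int rowCount) {
        StringBuilder sql = new StringBuilder("INSERT INTO ").append(table).append(" (");
        sql.append(String.join(", ", columns)).append(") VALUES ");
        String oneRow = "(" + "?, ".repeat(columns.length - 1) + "?)";
        for (int r = 0; r < rowCount; r++) {
            if (r > 0) sql.append(", ");
            sql.append(oneRow);
        }
        return sql.toString();
    }
}
```

You would then bind value k of row r at parameter index r * columns.length + k + 1 before executing.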
Another, probably inferior, way to speed this up: use a MyISAM table rather than InnoDB. MyISAM doesn't have transactions, so it doesn't have their overhead. But transactions are good when you use tables in production.
Chunking makes a big difference to bulk insertion performance problems like yours.
First: with Java, the second run is always faster because of class loading and other initialization. Keep up the good work.
Code review:
You're evaluating the same thing twice; you could shave some time here with an else statement.
In reality you're iterating over sizeColumns cells, so there's no need to check for it: the first if statement is not needed.
Better yet, do the first column, then start the iterations and put a comma before each value, closing the statement out at the end; the second if statement is then no longer needed either.
if (currentCellLenght == sizeColumns) {
    break;
}
// add insert rows
if (currentCellLenght != sizeColumns) {
    sbInsert.append(",");
}
I'm writing a Java program that updates existing data in a local database with new details whenever a record with the same product model is found in another database. If no record is found, it inserts a new record instead.
I'm using an if-else statement to check whether the record exists. The current code works ONLY for a single record; I want it to keep updating the local database for as long as the product model exists in both databases.
Here's the code I'm currently using:
public static void checkExist() {
    try {
        Statement stmt = conn.createStatement();
        int prod_qnty = 0, prod_weight = 0; // defaults so the insert branch compiles
        float prod_price = 0f;
        String prod_model = "97801433"; // testing purposes
        ResultSet rs = stmt.executeQuery("SELECT * from products where products_model = '" + prod_model + "'");
        if (rs.next()) {
            //while (rs.next()) {
            prod_qnty = rs.getInt("products_quantity");
            prod_price = rs.getFloat("products_price");
            prod_weight = rs.getInt("products_weight");
            System.out.println(prod_qnty + "\t" + prod_price + "\t" + prod_weight);
            updDB(prod_model, prod_qnty, prod_price, prod_weight);
            //}
        } else {
            insertDB(prod_model, prod_qnty, prod_price, prod_weight);
        }
        stmt.close();
    } catch (SQLException ex) {
        ex.printStackTrace();
        //Logger.getLogger(Check.class.getName()).log(Level.SEVERE, null, ex);
    }
}
I've added a while loop inside the if branch where the record exists; however, after adding it, the code can't even print any record. It seems that once it passes the if (rs.next()) condition, it never enters the while loop.
Can anyone tell me what I'm doing wrong? Is it possible to use a while loop inside an if-else statement? Usually it is used the other way around: an if-else statement inside the while loop.
Any help will be greatly appreciated. Thank you.
You do not have to add a separate if statement; just use while (rs.next()) { /* CODE */ }.
If you call next() two times, the cursor moves two records forward, and if the query returns only one record, execution never enters the while loop.
CODE executes only while there are results remaining.
My Java program consists of 2 Java classes: RMS and queryRMS.
In the RMS class I call the method in the queryRMS class.
RMS Java class (I left out the start of the execution; below is just the method):
for (int i = 1; i <= itemCount; i++) {
    GlobalVariables.numberRow = i;
    JavaDatapool.settings();
    String item = queryRPM.connectDB_Multi(configFile,
            "SELECT ITEM FROM ORDSKU WHERE ORDER_NO = '" + orderNo + "' ORDER BY ITEM ASC", i);
    JavaDatapool.writeXLS("item", item, GlobalVariables.sheetXLS);
    sleep(1);
}
queryRMS Java class:
public static String connectDB_Multi(String configFile, String query, int i)
        throws FileNotFoundException, IOException, SQLException, ClassNotFoundException {
    Properties p = new Properties();
    p.load(new FileInputStream(configFile));
    String serverName = p.getProperty("RMS_DBServerName");
    String portNumber = p.getProperty("RMS_PortNumber");
    String sid = p.getProperty("RMS_SID");
    String url = "jdbc:oracle:thin:@//" + serverName + ":" + portNumber + "/" + sid;
    String username = p.getProperty("RMS_Username");
    String password = p.getProperty("RMS_Password");
    // jdbc:oracle:thin:@//localhost:1521/orcl
    Class.forName("oracle.jdbc.driver.OracleDriver");
    Connection connection = DriverManager.getConnection(url, username, password);
    String setr = null;
    try {
        Statement stmt = connection.createStatement();
        try {
            ResultSet rset = stmt.executeQuery(query);
            try {
                while (rset.absolute(i))
                    setr = rset.getString(1);
                return setr;
            } finally {
                try {
                    rset.close();
                } catch (Exception ignore) {}
            }
        } finally {
            try {
                stmt.close();
            } catch (Exception ignore) {}
        }
    } finally {
        try {
            connection.close();
        } catch (Exception ignore) {}
    }
}
So it calls the connectDB_Multi method, which returns a String; the next part saves it inside an Excel worksheet.
The loop should return all rows, one at a time, and save each inside the Excel worksheet.
The second time through the loop the query fails, even though the query should return 1 column consisting of 2 rows.
The original contained while(rset.next()) instead of while(rset.absolute(i)), but next() only returned the first row every time, so the script only worked when a single column and row were retrieved from the database.
Your logic looks a bit messed up.
Look at the first loop you posted. You are, effectively, executing:
SELECT ITEM FROM ORDSKU WHERE ORDER_NO = '" + orderNo + "' ORDER BY ITEM ASC
itemCount times. Each time you execute it, you attempt to access the nth row, n being the loop counter. Do you see the problem? How do you know the query will return itemCount rows? If it doesn't, the code fails, because you are attempting to access a row that doesn't exist.
What I suspect you WANT to do is something like this
Statement stmt = connection.createStatement();
ResultSet rset = stmt.executeQuery(query);
while (rset.next()) {
    JavaDatapool.writeXLS("item", rset.getString(1), GlobalVariables.sheetXLS);
}
You should also seriously consider using some form of connection pooling to avoid having to re-open new connections all the time as that is a pretty time-consuming operation.
This code seems very inefficient: for each row you want to fetch from the database, you read a properties file, create a connection, select all matching rows, skip ahead to the row you want, and return just that row (or at least I think that is what you are trying to do).
Your code

while (rset.absolute(i))
    setr = rset.getString(1);

is probably an infinite loop: it keeps going to the same row for as long as it is valid to go to that row, so either the row does not exist (and the while exits) or the row does exist (and the while continues forever).
You should probably restructure your program to only do one select and read all rows that you want and store them in your excel file. While doing this, you can debug to see if you actually are getting the data you expect.
Apart from the inefficient code of creating new connections and querying once for each row, how do you know how many rows you want?
I think in the end you want something like this
....
while (rset.next()) {
    JavaDatapool.writeXLS("item", item, GlobalVariables.sheetXLS);
}
And what is the sleep(1) supposed to accomplish?
FYI: if you open and close statements too often, as in your logic or pap's solution, you can get the error "java.sql.SQLException: ORA-01000: maximum open cursors exceeded".
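One way to avoid leaking cursors is Java 7's try-with-resources, which closes each resource even when an exception is thrown. A sketch (the query and column index are placeholders); the second method is a small self-contained demonstration that resources close in reverse order of opening:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class CursorSafety {

    // JDBC sketch: the ResultSet and Statement are guaranteed to close,
    // which keeps Oracle's open-cursor count from growing.
    static String fetchOne(Connection conn, String query, int param) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(query)) {
            ps.setInt(1, param);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }

    // Demonstrates close ordering without needing a database.
    static List<String> closeOrder() {
        List<String> closed = new ArrayList<>();
        class Tracked implements AutoCloseable {
            final String name;
            Tracked(String name) { this.name = name; }
            public void close() { closed.add(name); }
        }
        try (Tracked stmt = new Tracked("statement");
             Tracked rs = new Tracked("resultset")) {
            // work with stmt and rs here
        }
        return closed; // resources close last-opened-first
    }
}
```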
I also suggest you not over-generalize. I have seen a lot of OOP programmers overdo generalization, and it is painful. You should design toward a goal, and the goal should not be 'just alignment' or 'making the code look beautiful'; a design has to have a purpose.
I have the following code:
public boolean updateDatabase(long houseValue, List<Users> userList)
{
    boolean result = false;
    Connection conn = null;
    PreparedStatement stmtUpdateUsers = null;
    PreparedStatement stmtQueryHouse = null;
    PreparedStatement stmtUpdateHouse = null;
    ResultSet rs = null;
    String updateUsers = "UPDATE users SET money = ? WHERE username = ?";
    String queryHouse = "SELECT * FROM house WHERE house_id = ?";
    String updateHouse = "UPDATE house SET house_money = ? WHERE house_id = ?";
    try
    {
        conn = getConnectionPool().getConnection();
        conn.setAutoCommit(false);
        stmtUpdateUsers = conn.prepareStatement(updateUsers);
        ...
        // Here is some code that updates Users table in a short loop
        ...
        stmtQueryHouse = conn.prepareStatement(queryHouse);
        stmtQueryHouse.setInt(1, 1);
        rs = stmtQueryHouse.executeQuery();
        if (rs.next())
        {
            long houseMoney = rs.getLong("house_money");
            houseMoney += houseValue;
            stmtUpdateHouse = conn.prepareStatement(updateHouse);
            stmtUpdateHouse.setLong(1, houseMoney);
            stmtUpdateHouse.setInt(2, 1);
            stmtUpdateHouse.executeUpdate();
        }
        else
        {
            throw new SQLException("Failed to update house: unable to query house table");
        }
        conn.commit();
        result = true;
    }
    catch (SQLException e)
    {
        logger.warn(getStackTrace(e));
        try { conn.rollback(); }
        catch (SQLException excep)
        {
            logger.warn(getStackTrace(excep));
        }
    }
    finally
    {
        DbUtils.closeQuietly(rs);
        DbUtils.closeQuietly(stmtQueryHouse);
        DbUtils.closeQuietly(stmtUpdateUsers);
        DbUtils.closeQuietly(stmtUpdateHouse);
        try { conn.setAutoCommit(true); } catch (SQLException e) { /* quiet */ }
        DbUtils.closeQuietly(conn);
    }
    return result;
}
This method can be called from multiple threads. The house table is a one-row table which holds the total earned money; it gets updated by different threads.
The problem is that stmtQueryHouse.executeQuery() returns an empty set, which should not happen, because the house table has always had (since database creation) one single row that gets updated (only the house_money column is updated).
When I run this code on Windows (JDBC driver + MySQL 5.5.13) it works fine, but when I run it on CentOS (same JDBC driver + MySQL 5.1.57) it returns an empty result set very often (if not always). Any idea what is going wrong, or how I could check where the problem is? Maybe I should use SELECT ... FOR UPDATE, but then why does it work on Windows and not on Linux? I appreciate any help. Thanks in advance.
Have you looked in the MySQL general query log for any errors?
I realize this isn't your question per se, but if you have another table with just a single row for each house, it sounds to me that it would make more sense to move house_money into your main house table.
I'd say this one method is doing far too much.
I'd pass the Connection in to three separate methods and manage the transaction outside all of them.
I'd wonder if there's an optimization that would eliminate one of the UPDATEs.
I'd want to batch all of these so I didn't do a round trip for each and every user; it will perform poorly as the number of users increases.
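The batching point can be sketched with JDBC's addBatch/executeBatch, using the users table from the question (the row representation here is an assumption):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchedUserUpdate {

    // One executeBatch() round trip replaces one UPDATE per user.
    // Each row is { Long money, String username } — a placeholder shape.
    static int[] updateUsers(Connection conn, List<Object[]> moneyByUser) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("UPDATE users SET money = ? WHERE username = ?")) {
            for (Object[] row : moneyByUser) {
                ps.setLong(1, (Long) row[0]);
                ps.setString(2, (String) row[1]);
                ps.addBatch();
            }
            return ps.executeBatch(); // per-statement update counts
        }
    }

    // Pure helper: total rows touched, from the counts executeBatch returns.
    static int totalUpdated(int[] counts) {
        int total = 0;
        for (int c : counts) total += c;
        return total;
    }
}
```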