Android - Fastest way to search data in SQLite database (Java)

I have an image processing app. My app stores the already processed images in a database. Every time the user opens the app, the app starts to check the database to see what photos have already been processed. With my code this process is taking around 10-20 seconds, which for my needs is a lot of time.
The database only has one column, the path of the image. I take the full image list from the phone and then search every item of the list in the database.
My code is as follows:
public static ArrayList<String> getAlreadyProcessedPhotos(Context context, ArrayList<String> photos, SQLiteDatabase db)
{
    ArrayList<String> notAlreadyProcessedPhotos = new ArrayList<>();
    for (String path : photos)
    {
        File imgFile = new File(path);
        if (!Utils.isAlreadyProcessed(context, imgFile, db))
        {
            notAlreadyProcessedPhotos.add(path);
        }
    }
    return notAlreadyProcessedPhotos;
}
public static boolean isAlreadyProcessed(Context context, File imgFile, SQLiteDatabase photosDb) {
    if (photosDb == null || !photosDb.isOpen())
        photosDb = new DatabaseHelper(context).getReadableDatabase();
    String searchQuery = "SELECT * FROM " + DatabaseHelper.TABLE_NAME + " WHERE " + DatabaseHelper.PATH_COLUMN + "=?";
    Cursor cursor = photosDb.rawQuery(searchQuery, new String[] {imgFile.getAbsolutePath()});
    boolean result = cursor.moveToFirst();
    cursor.close();
    return result;
}

For each file that you want to check you are executing a separate SQLite query. No wonder it's slow! If there are 100 files, you will need to do 100 queries. But this can really be done with one simple query. You just need to combine your two methods into one:
public static ArrayList<String> getAlreadyProcessedPhotos(Context context, ArrayList<String> photos, SQLiteDatabase db)
{
    // Quote (and escape) each path so it can be used inside the IN (...) clause.
    ArrayList<String> quoted = new ArrayList<>();
    for (String item : photos) {
        quoted.add("'" + item.replace("'", "''") + "'");
    }
    String inClause = TextUtils.join(",", quoted);
    // One query: fetch every stored path that appears in the device's photo list.
    String searchQuery = "SELECT " + DatabaseHelper.PATH_COLUMN + " FROM " + DatabaseHelper.TABLE_NAME
            + " WHERE " + DatabaseHelper.PATH_COLUMN + " IN (" + inClause + ")";
    Cursor cursor = db.rawQuery(searchQuery, null);
    HashSet<String> processed = new HashSet<>(); // java.util.HashSet
    while (cursor.moveToNext()) {
        processed.add(cursor.getString(0));
    }
    cursor.close();
    // Everything not found in the database has not been processed yet.
    ArrayList<String> notAlreadyProcessedPhotos = new ArrayList<>();
    for (String path : photos) {
        if (!processed.contains(path)) {
            notAlreadyProcessedPhotos.add(path);
        }
    }
    return notAlreadyProcessedPhotos;
}
This is one query instead of one per file. I don't know where your photos list comes from, but I get the feeling there is room for further optimization there as well.

The answer to almost all SQL (SQLite, MySQL, ...) speed issues is to create an index on the table. See: https://www.sqlite.org/lang_createindex.html
My guess is you're doing a full table scan for each imgFile lookup, and that is as slow as it gets.
Other things you can do (but they won't help nearly as much as an index):
1) Since you are not using the row data returned from SQLite, change your SQL to 'SELECT count(*) FROM ...', which returns an integer that is greater than zero if the path is present.
2) Add a LIMIT clause to the SELECT statement ('SELECT ... LIMIT 1;'). This allows SQLite to return as soon as the first matching record is found.
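As a sketch, assuming the TABLE_NAME and PATH_COLUMN constants from the question's DatabaseHelper (the index name idx_photos_path is made up), the index could be created like this:

// In DatabaseHelper.onCreate() (or an onUpgrade() migration):
// a UNIQUE index makes the WHERE path = ? lookup an index search
// instead of a full table scan, and also blocks duplicate rows.
db.execSQL("CREATE UNIQUE INDEX IF NOT EXISTS idx_photos_path ON "
        + TABLE_NAME + " (" + PATH_COLUMN + ")");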

You already got the answers in the comments too!
First is the loop issue, as e4c5 suggested. Fixing that alone will give a huge boost.
Second, replace SELECT * FROM table with SELECT field1WhatIreallyNeed, field2WhatIreallyNeed FROM table.
It also helps to add the fields used in the WHERE clause to an index.
I have integrated sqlite3 with the NDK, so the queries run in C and are even faster, but that is only worth it once you have close to a million records in one table.
The best answer is in the comments: you don't need a database for this at all, and that would be the fastest. Think about what a database really is and where it is stored: it is a file too, just with extra constraints, parsing, and processing overhead.
I need the database, because my app overwrites the original photo, so
the photo always exists
No, you don't need a database for this!
There are eTags.
There is file metadata.
You can store a downloaded_timestamp and a processed_timestamp in a separate file and compute from them whether a photo still needs to be processed; that will take milliseconds, not 10-20 seconds.
So drop your database and use a simple file: read the data from that file all at once, not line by line.
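A minimal sketch of that file-based approach (the file name processed.txt and both method names are invented for illustration; one processed path per line is assumed to be enough bookkeeping):

import java.io.*;
import java.util.*;
import android.content.Context;

public class ProcessedLog {
    // Append one newly processed path to a private file, one path per line.
    public static void markProcessed(Context context, String path) throws IOException {
        try (FileWriter w = new FileWriter(new File(context.getFilesDir(), "processed.txt"), true)) {
            w.write(path + "\n");
        }
    }

    // Read the whole file once into a HashSet, then diff in memory.
    public static ArrayList<String> getNotProcessed(Context context, ArrayList<String> photos) throws IOException {
        HashSet<String> processed = new HashSet<>();
        File f = new File(context.getFilesDir(), "processed.txt");
        if (f.exists()) {
            try (BufferedReader r = new BufferedReader(new FileReader(f))) {
                for (String line; (line = r.readLine()) != null; ) {
                    processed.add(line);
                }
            }
        }
        ArrayList<String> notProcessed = new ArrayList<>();
        for (String path : photos) {
            if (!processed.contains(path)) {
                notProcessed.add(path);
            }
        }
        return notProcessed;
    }
}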

Related

Discord Java JDA | deleting data from SQLite / getting data from SQLite

I'm trying to make a ticket-based support system and I would like to know how to read and delete data from a SQLite table.
The system will work like this:
You click on a reaction and the bot checks whether you already have a dedicated channel; if not, it creates one.
If you close the ticket by clicking on a reaction in your personal channel, the channel and your data will be deleted.
That's my code so far:
public void onMessageReactionAdd(MessageReactionAddEvent event) {
    if (!event.getUser().isBot()) {
        if (event.getChannel().getIdLong() == 747412032281772033L && event.getReactionEmote().getEmoji().equals("\uD83C\uDFAB")) {
            ResultSet set = LiteSQL.onQuery("SELECT channelid FROM ticketchans WHERE guildid = " + event.getGuild().getIdLong() + " AND userid = " + event.getUserIdLong());
            try {
                Long user = set.getLong("userid");
                if (!(user == event.getUserIdLong())) {
                    Category cat = ((GuildChannel) event.getChannel()).getParent();
                    TextChannel chan = cat.createTextChannel(event.getMember().getEffectiveName() + "'s TicketChannel").complete();
                    EmbedBuilder builder = new EmbedBuilder();
                    builder.setDescription("Hi " + event.getMember().getAsMention() + ", bitte beschreibe hier detailiert dein Anliegen. Wenn du dein ticket schliessen willst klicke auf das X");
                    builder.setColor(Color.decode("#910cc9"));
                    chan.sendMessage(builder.build()).queue(message -> {
                        message.addReaction("\u274C").queue();
                    });
                    set.next();
                    LiteSQL.onUpdate("INSERT INTO ticketchans(guildid, channelid, userid) VALUES(" +
                            event.getGuild().getIdLong() + ", " + event.getChannel().getIdLong() + ", " + event.getUserIdLong() + ")");
                    event.getChannel().sendMessage(event.getUser().getAsMention() + " TicketChannel eröffnet!").complete().delete().queueAfter(4, TimeUnit.SECONDS);
                }
            } catch (SQLException e) {}
        }
        if (event.getReactionEmote().getEmoji().equals("\u274C")) {
            // delete data in table
            // event.getGuild().getGuildChannelById(event.getChannel().getIdLong()).delete().reason("").queue();
        }
    }
}
Getting Data from SQLite
Most of this applies to SQL in general and isn't specific to SQLite.
First off, a SELECT statement consists of different parts.
SELECT columns FROM table WHERE condition;
For columns you have to fill in the names of the columns you want to get from your table. Pretty self-explanatory.
If you want to select more than one column, you just have to list them with commas, like this:
SELECT column1, column2, column3 FROM table WHERE condition;
In order to select every column of your table you just write * instead of the columns.
SELECT * FROM table WHERE condition;
Note: You can only access columns in your ResultSet if you selected them in your statement. If you select channelid you won't be able to get userid, unless you select it as well. (SELECT channelid, userid FROM table WHERE condition;)
You seem to understand the WHERE part so I will skip it. In case you need some more help or want to expand your usage of SQLite even more, you may check out some tutorials online.
Now, after writing your correct SELECT statement, it's time to access the data in Java.
To do that, you have to loop through your ResultSet.
ResultSet rs = LiteSQL.onQuery(
    "SELECT channelid, userid"
    + " FROM ticketchans"
    + " WHERE guildid = " + event.getGuild().getIdLong()
    + " AND userid = " + event.getUserIdLong()
);

// loop through the result set
while (rs.next()) {
    Long userid = rs.getLong("userid");
    Long channelid = rs.getLong("channelid");
}
You now have the data you need and can use it for whatever you want.
Deleting Data from SQLite
Most of this applies to SQL in general and isn't specific to SQLite.
The DELETE statement has a similar structure to the SELECT statement although it lacks the columns (of course).
DELETE FROM table WHERE condition;
As explained in the first part, you have to choose the table you want to delete data from and then narrow it down using conditions.
In your case, deleting a specific ticket would be like this:
DELETE FROM ticketchans WHERE guildid = GID and userid = UID and channelid = CID;
If you don't use all three IDs in the condition, you might end up deleting all tickets of a guild or of a user. Since the channelid is always unique you could possibly skip the userid = UID part, but the details are up to you.
As already mentioned, if you want more specific statements or need some variations, check out a tutorial of your liking. (The one provided is just an example, use whatever you are comfortable with.)
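On that note, building SQL by concatenating IDs (as in the LiteSQL calls above) is fragile and open to injection; with plain JDBC, a parameterized statement is safer. A sketch, where connection is assumed to be the java.sql.Connection behind the LiteSQL wrapper:

// Hypothetical: 'connection' is the java.sql.Connection your LiteSQL wrapper uses.
String sql = "DELETE FROM ticketchans WHERE guildid = ? AND userid = ? AND channelid = ?";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setLong(1, event.getGuild().getIdLong());
    ps.setLong(2, event.getUserIdLong());
    ps.setLong(3, event.getChannel().getIdLong());
    ps.executeUpdate();
} catch (SQLException e) {
    e.printStackTrace(); // at least log it; an empty catch block hides failures
}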
On another note: I would advise using .queue() instead of .complete().
If you want to know why and how, check out this page.

Codename One SQL database storing wrong values

I am used to developing desktop applications with Java. Now I am trying Codename One to develop my first mobile app.
Trying to replicate my experiences with SQL databases I am running into a very odd storage behavior, which I cannot explain.
The database is created, but when I change the table input value, the new value gets ignored and just the old value is added. To save the new value, I have to delete the database.
I like the interface, and any kind of help would be appreciated.
Database db = Display.getInstance().openOrCreate("MyDB.db");
db.execute("CREATE TABLE IF NOT EXISTS Persons (Date NOT NULL, Event NOT NULL)");
String sql = "INSERT INTO Persons (Date, Event) " + "VALUES ( 'John', '10000.00' );";
db.execute(sql);
// adds "John" to the database every time I click the button

// then I change "John" to "James"
// I am not adding the lines twice, I just change the input
sql = "INSERT INTO Persons (Date, Event) " + "VALUES ( 'James', '10000.00' );";
db.execute(sql);
// keeps adding "John" to the database, even though the value has been changed to "James"

Cursor cur = db.executeQuery("select * from Persons;");
Row currentRow = cur.getRow();
String dataText = currentRow.getString(0);
while (cur.next()) {
    System.out.println(dataText);
}
You're not fetching the next row into dataText in your while() loop, so you're just repeatedly printing out the text from the first row.
It should be:
Cursor cur = db.executeQuery("select * from Persons;");
while (cur.next()) {
    Row currentRow = cur.getRow();
    String dataText = currentRow.getString(0); // column 0 is the Date column
    System.out.println(dataText);
}
If you examine the table with a separate query tool, you should see that it contains both rows.
I hope I got the syntax right. I'm not a Java programmer and I got it from a tutorial.

How to count the frequency of occurrence of each value in a column in table "A" and insert it into table "B" using SQLite in an Android app

I have a simple app that contains an Sqlite database containing 2 tables:
TABLE_CHAP that contains:
_id
Chapter_title
Number_of_flashcards
TABLE_Flash contains:
_id
Chap_id
flashcard content
I upload the database content in the assets folder; TABLE_FLASH contains a number of flashcards that belong to each chapter.
What I'm trying to do is count the frequency of Chap_id in TABLE_FLASH, insert this number into Number_of_flashcards in TABLE_CHAP, and afterwards display the number of flashcards in front of the corresponding Chapter_title.
The Number_of_flashcards is dynamic, as the user may add his own flashcards to each chapter.
public void nberOfFlahcards() {
    int xy = getChap_tableCount();
    ContentValues contentValues = new ContentValues();
    database = openHelper.getWritableDatabase();
    for (int i = 1; i <= xy; i++) {
        String countQuery = "SELECT * FROM TABLE_CHAP WHERE Chap_ID = " + i;
        Cursor cursor = database.rawQuery(countQuery, null);
        int total_count = cursor.getCount();
        Log.v(TAG, "Nber of Flashcards for chapter " + i + " is: " + total_count);
        contentValues.put(KEY_NBER_FLAHCARDS, Integer.toString(total_count));
        database.update(TABLE_CHAP, contentValues, KEY_NBER_FLAHCARDS, null);
    }
}
This code always gives me 0. Please check where the error is, and if you have better code or a better architecture for the database, please advise.
This update can be done with a single query
UPDATE TABLE_CHAP SET Number_of_flashcards =
(SELECT count(*)
FROM TABLE_Flash AS tf
WHERE tf.Chap_id=TABLE_CHAP._id)
But I'd better have a method int getNumberOfFlashcards(int chap_id) which computes the count for the given chap_id.
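A sketch of that method, assuming the table and column names from the question and an open SQLiteDatabase; android.database.DatabaseUtils.longForQuery is a stock Android helper for single-value queries:

// Counts the flashcards belonging to one chapter.
int getNumberOfFlashcards(int chapId) {
    return (int) DatabaseUtils.longForQuery(
            database,
            "SELECT count(*) FROM TABLE_Flash WHERE Chap_id = ?",
            new String[] { String.valueOf(chapId) });
}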

Do not update row in ResultSet if data has changed

We are extracting data from various database types (Oracle, MySQL, SQL Server, ...). Once the data is successfully written to a file, we want to mark it as transmitted, so we update a specific column.
Our problem is that a user can change the data in the meantime but might forget to commit. The record is then locked by a SELECT FOR UPDATE statement, so it can happen that we mark something as transmitted which actually is not.
This is an excerpt from our code:
Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
ResultSet extractedData = stmt.executeQuery(sql);
writeDataToFile(extractedData);
extractedData.beforeFirst();
while (extractedData.next()) {
    if (!extractedData.rowUpdated()) {
        extractedData.updateString("COLUMNNAME", "TRANSMITTED");
        // code will stop here if user has changed data but did not commit
        extractedData.updateRow();
        // once committed the changed data is marked as transmitted
    }
}
The method extractedData.rowUpdated() returns false, because technically the user didn't change anything yet.
Is there any way to not update the row and detect if data was changed at this late stage?
Unfortunately I cannot change the program the user is using to change the data.
So you want to
Run through all rows of the table that have not been exported
Export this data somewhere
Mark these rows exported so your next iteration will not export them again
As there might be pending changes on a row, you don't want to mess with that information
How about:
You iterate over all rows.

for every row
    generate a hash value for the contents of the row
    compare column "UPDATE_STATUS" with the calculated hash
    if no match
        export row
        store hash into "UPDATE_STATUS"
        if store fails (row locked)
            -> no worries, will be exported again next time
        if store succeeds (on data already changed by user)
            -> no worries, will be exported again as hash will not match
This might further slow your export, as you'll have to iterate over every row instead of only those WHERE UPDATE_STATUS IS NULL. But you might be able to split the work into two jobs: one fast job iterating over WHERE UPDATE_STATUS IS NULL, and one slow, thorough job over WHERE UPDATE_STATUS IS NOT NULL (with the hash re-checking in place).
If you want to avoid store failures/waits, you might want to keep the hash/updated information in a second table that copies the primary key plus the hash field value; that way, user locks on the main table would not interfere with your updates at all (as those would be on another table).
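A rough sketch of that hash-compare loop in plain JDBC (source_table, its columns, and exportRow are placeholders invented for illustration; MD5 over the concatenated data columns is just one possible row hash):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.sql.*;

public class ChangedRowExporter {
    // Exports rows whose content hash differs from the hash stored in UPDATE_STATUS.
    static void exportChangedRows(Connection conn) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, col1, col2, update_status FROM source_table")) {
            while (rs.next()) {
                // Hash the data columns; the delimiter avoids ambiguous concatenations.
                md5.reset();
                md5.update((rs.getString("col1") + "|" + rs.getString("col2")).getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : md5.digest()) {
                    hex.append(String.format("%02x", b));
                }
                String hash = hex.toString();
                if (!hash.equals(rs.getString("update_status"))) {
                    exportRow(rs); // placeholder: write this row to the export file
                    try (PreparedStatement ps = conn.prepareStatement(
                            "UPDATE source_table SET update_status = ? WHERE id = ?")) {
                        ps.setString(1, hash);
                        ps.setLong(2, rs.getLong("id"));
                        ps.executeUpdate(); // if this blocks or fails, the row is retried on the next run
                    }
                }
            }
        }
    }

    static void exportRow(ResultSet rs) { /* placeholder for the actual export */ }
}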
"a user [...] might forget to commit" > A user either commits or he doesn't. "Forgetting" to commit is tantamount to a bug in his software.
To work around that you need to either:
Start a transaction with isolation level SERIALIZABLE, and within that transaction:
Read the data and export it. Data read this way is blocked from being updated.
Update the data you processed. Note: don't do that with an updatable ResultSet, do that with an UPDATE statement. That way you don't need a CONCUR_UPDATABLE + TYPE_SCROLL_SENSITIVE ResultSet, which is much slower than CONCUR_READ_ONLY + TYPE_FORWARD_ONLY.
Commit the transaction.
That way the buggy software will be blocked from updating data you are processing.
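A minimal JDBC sketch of this approach (the table and column names source_table and status are invented for illustration; writeDataToFile is the method from the question):

conn.setAutoCommit(false);
conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
try (Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
     ResultSet rs = stmt.executeQuery("SELECT * FROM source_table WHERE status IS NULL")) {
    writeDataToFile(rs); // rows read here stay protected until commit
}
try (Statement upd = conn.createStatement()) {
    upd.executeUpdate("UPDATE source_table SET status = 'TRANSMITTED' WHERE status IS NULL");
}
conn.commit(); // releases the locks held by the SERIALIZABLE transaction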
Another way
Start a TRANSACTION at a lower isolation level (default READ COMMITTED) and within that transaction
Select the data with proper Table Hints Eg for SQL Server these: TABLOCKX + HOLDLOCK (large datasets), or ROWLOCK + XLOCK + HOLDLOCK (small datasets), or PAGLOCK + XLOCK + HOLDLOCK. Having HOLDLOCK as a table hint is practically equivalent to having a SERIALIZABLE transaction. Note that lock escalation may escalate the latter two to table locks if the number of locks becomes too high.
Update the data you processed; Note: use an UPDATE statement. Lose the updatable/scroll_sensitive resultset.
Commit the TRANSACTION.
Same deal, the buggy software will be blocked from updating data you are processing.
In the end we had to implement optimistic locking. In some tables we already have a column that stores the version number. Some other tables have a timestamp column that holds the time of the last change (set by a trigger).
While a timestamp might not always be a reliable source for optimistic locking, we went with it anyway; several changes within a single second are not very realistic in our environment.
Since we have to determine the primary key without describing it beforehand, we had to access the ResultSet metadata. Some of our databases do not support this (legacy DB/2 tables, for example), so we are still using the old system for those.
Note: The tableMetaData is an XML-config file where our description of the table is stored. This is not directly related to the metadata of the table in the database.
Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
ResultSet extractedData = stmt.executeQuery(sql);
writeDataToFile(extractedData);
extractedData.beforeFirst();
while (extractedData.next()) {
    if (tableMetaData.getVersion() != null) {
        markDataAsExported(extractedData, tableMetaData);
    } else {
        markResultSetAsExported(extractedData, tableMetaData);
    }
}

// new way: build an update statement that includes the version column in the where clause
private void markDataAsExported(ResultSet extractedData, TableMetaData tableMetaData) throws SQLException {
    ResultSet resultSetPrimaryKeys = null;
    PreparedStatement versionedUpdateStatement = null;
    try {
        ResultSetMetaData extractedMetaData = extractedData.getMetaData();
        resultSetPrimaryKeys = conn.getMetaData().getPrimaryKeys(null, null, tableMetaData.getTable());
        ArrayList<String> primaryKeyList = new ArrayList<String>();
        String sqlStatement = "update " + tableMetaData.getTable() + " set " + tableMetaData.getUpdateColumn()
                + " = ? where ";
        if (resultSetPrimaryKeys.isBeforeFirst()) {
            while (resultSetPrimaryKeys.next()) {
                primaryKeyList.add(resultSetPrimaryKeys.getString(4));
                sqlStatement += resultSetPrimaryKeys.getString(4) + " = ? and ";
            }
            sqlStatement += tableMetaData.getVersionColumn() + " = ?";
            versionedUpdateStatement = conn.prepareStatement(sqlStatement);
            while (extractedData.next()) {
                versionedUpdateStatement.setString(1, tableMetaData.getUpdateValue());
                for (int i = 0; i < primaryKeyList.size(); i++) {
                    versionedUpdateStatement.setObject(i + 2, extractedData.getObject(primaryKeyList.get(i)),
                            extractedMetaData.getColumnType(extractedData.findColumn(primaryKeyList.get(i))));
                }
                versionedUpdateStatement.setObject(primaryKeyList.size() + 2,
                        extractedData.getObject(tableMetaData.getVersionColumn()), tableMetaData.getVersionType());
                if (versionedUpdateStatement.executeUpdate() == 0) {
                    logger.warn(Message.COLLECTOR_DATA_CHANGED, tableMetaData.getTable());
                }
            }
        } else {
            logger.warn(Message.COLLECTOR_PK_ERROR, tableMetaData.getTable());
            markResultSetAsExported(extractedData, tableMetaData);
        }
    } finally {
        if (resultSetPrimaryKeys != null) {
            resultSetPrimaryKeys.close();
        }
        if (versionedUpdateStatement != null) {
            versionedUpdateStatement.close();
        }
    }
}

// the old way as fallback
private void markResultSetAsExported(ResultSet extractedData, TableMetaData tableMetaData) throws SQLException {
    while (extractedData.next()) {
        extractedData.updateString(tableMetaData.getUpdateColumn(), tableMetaData.getUpdateValue());
        extractedData.updateRow();
    }
}

Using a database API cursor with JDBC and SQLServer to select batch results

SOLVED (See answer below.)
I did not understand my problem within the proper context. The real issue was that my query was returning multiple ResultSet objects, and I had never come across that before. I have posted code below that solves the problem.
PROBLEM
I have an SQL Server database table with many thousand rows. My goal is to pull the data back from the source database and write it to a second database. Because of application memory constraints, I will not be able to pull the data back all at once. Also, because of this particular table's schema (over which I have no control) there is no good way for me to tick off the rows using some sort of ID column.
A gentleman over at the Database Administrators StackExchange helped me out by putting together something called a database API cursor, and basically wrote this complicated query that I only need to drop my statement into. When I run the query in SQL Management Studio (SSMS) it works great. I get all the data back, a thousand rows at a time.
Unfortunately, when I try to translate this into JDBC code, I get back the first thousand rows only.
QUESTION
Is it possible using JDBC to retrieve a database API cursor, pull the first set of rows from it, allow the cursor to advance, and then pull the subsequent sets one at a time? (In this case, a thousand rows at a time.)
SQL CODE
This gets complicated, so I'm going to break it up.
The actual query can be simple or complicated. It doesn't matter. I've tried several different queries during my experimentation and they all work. You just basically drop it into the SQL code in the appropriate place. So, let's take this simple statement as our query:
SELECT MyColumn FROM MyTable;
The actual SQL database API cursor is far more complicated. I will print it out below. You can see the above query buried in it:
-- http://dba.stackexchange.com/a/82806
DECLARE @cur INTEGER
        ,
        -- FAST_FORWARD | AUTO_FETCH | AUTO_CLOSE
        @scrollopt INTEGER = 16 | 8192 | 16384
        ,
        -- READ_ONLY, CHECK_ACCEPTED_OPTS, READ_ONLY_ACCEPTABLE
        @ccopt INTEGER = 1 | 32768 | 65536
        ,@rowcount INTEGER = 1000
        ,@rc INTEGER;

-- Open the cursor and return the first 1,000 rows
EXECUTE @rc = sys.sp_cursoropen @cur OUTPUT
        ,'SELECT MyColumn FROM MyTable'
        ,@scrollopt OUTPUT
        ,@ccopt OUTPUT
        ,@rowcount OUTPUT;

IF @rc <> 16 -- FastForward cursor automatically closed
BEGIN
    -- Name the cursor so we can use CURSOR_STATUS
    EXECUTE sys.sp_cursoroption @cur
            ,2
            ,'MyCursorName';

    -- Until the cursor auto-closes
    WHILE CURSOR_STATUS('global', 'MyCursorName') = 1
    BEGIN
        EXECUTE sys.sp_cursorfetch @cur
                ,2
                ,0
                ,1000;
    END;
END;
As I've said, the above creates a cursor in the database and asks the database to execute the statement, keep track (internally) of the data it's returning, and return the data a thousand rows at a time. It works great.
JDBC CODE
Here's where I'm having the problem. I have no compilation problems or run-time problems with my Java code. The problem I am having is that it returns only the first thousand rows. I don't understand how to utilize the database cursor properly. I have tried variations on the Java basics:
// Hoping to get all of the data, but I only get the first thousand.
ResultSet rs = stmt.executeQuery(fq.getQuery());
while (rs.next()) {
    System.out.println(rs.getString("MyColumn"));
}
I'm not surprised by the results, but all of the variations I've tried produce the same results.
From my research, it seems like JDBC does something special with database cursors when the database is Oracle, but you have to declare the data type returned in the result set as an Oracle cursor object. I'm guessing there is something similar for SQL Server, but I have been unable to find anything yet.
Does anyone know of a way?
I'm including example Java code in full (as ugly as that gets).
// FancyQuery.java

import java.sql.*;

public class FancyQuery {

    // Adapted from http://dba.stackexchange.com/a/82806
    String query = "DECLARE @cur INTEGER\n"
            + "        ,\n"
            + "        -- FAST_FORWARD | AUTO_FETCH | AUTO_CLOSE\n"
            + "        @scrollopt INTEGER = 16 | 8192 | 16384\n"
            + "        ,\n"
            + "        -- READ_ONLY, CHECK_ACCEPTED_OPTS, READ_ONLY_ACCEPTABLE\n"
            + "        @ccopt INTEGER = 1 | 32768 | 65536\n"
            + "        ,@rowcount INTEGER = 1000\n"
            + "        ,@rc INTEGER;\n"
            + "\n"
            + "-- Open the cursor and return the first 1,000 rows\n"
            + "EXECUTE @rc = sys.sp_cursoropen @cur OUTPUT\n"
            + "        ,'SELECT MyColumn FROM MyTable;'\n"
            + "        ,@scrollopt OUTPUT\n"
            + "        ,@ccopt OUTPUT\n"
            + "        ,@rowcount OUTPUT;\n"
            + "\n"
            + "IF @rc <> 16 -- FastForward cursor automatically closed\n"
            + "BEGIN\n"
            + "    -- Name the cursor so we can use CURSOR_STATUS\n"
            + "    EXECUTE sys.sp_cursoroption @cur\n"
            + "            ,2\n"
            + "            ,'MyCursorName';\n"
            + "\n"
            + "    -- Until the cursor auto-closes\n"
            + "    WHILE CURSOR_STATUS('global', 'MyCursorName') = 1\n"
            + "    BEGIN\n"
            + "        EXECUTE sys.sp_cursorfetch @cur\n"
            + "                ,2\n"
            + "                ,0\n"
            + "                ,1000;\n"
            + "    END;\n"
            + "END;\n";

    public String getQuery() {
        return this.query;
    }

    public static void main(String[] args) throws Exception {
        String dbUrl = "jdbc:sqlserver://tc-sqlserver:1433;database=MyBigDatabase";
        String user = "mario";
        String password = "p#ssw0rd";
        String driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver";

        FancyQuery fq = new FancyQuery();
        Class.forName(driver);
        Connection conn = DriverManager.getConnection(dbUrl, user, password);
        Statement stmt = conn.createStatement();

        // We expect to get 1,000 rows at a time.
        ResultSet rs = stmt.executeQuery(fq.getQuery());
        while (rs.next()) {
            System.out.println(rs.getString("MyColumn"));
        }

        // Alas, we've only gotten 1,000 rows, total.
        rs.close();
        stmt.close();
        conn.close();
    }
}
I figured it out.
stmt.execute(fq.getQuery());
ResultSet rs = null;

for (;;) {
    rs = stmt.getResultSet();
    while (rs.next()) {
        System.out.println(rs.getString("MyColumn"));
    }
    if ((stmt.getMoreResults() == false) && (stmt.getUpdateCount() == -1)) {
        break;
    }
}
if (rs != null) {
    rs.close();
}
After some additional googling, I found a bit of code posted back in 2004:
http://www.coderanch.com/t/300865/JDBC/databases/SQL-Server-JDBC-Registering-cursor
The gentleman who posted the snippet that I found helpful (Julian Kennedy) suggested: "Read the Javadoc for getUpdateCount() and getMoreResults() for a clear understanding." I was able to piece it together from that.
Basically, I don't think I understood my problem well enough at the outset in order to phrase it correctly. What it comes down to is that my query will be returning the data in multiple ResultSet instances. What I needed was a way to not merely iterate through each row in a ResultSet but, rather, iterate through the entire set of ResultSets. That's what the code above does.
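For reference, the same loop can be written a little more defensively. This is a general JDBC sketch, not specific to this query: it uses the boolean returned by execute() and getMoreResults() to tell result sets apart from update counts, so next() is never called on a null ResultSet:

boolean isResultSet = stmt.execute(fq.getQuery());
while (true) {
    if (isResultSet) {
        try (ResultSet rs = stmt.getResultSet()) {
            while (rs.next()) {
                System.out.println(rs.getString("MyColumn"));
            }
        }
    } else if (stmt.getUpdateCount() == -1) {
        break; // neither a result set nor an update count: we're done
    }
    isResultSet = stmt.getMoreResults();
}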
If you want all records from the table, just do "Select * from table".
The only reason to retrieve in chunks is if there is some intermediate place for the data: e.g. if you are showing it on the screen, or storing it in memory.
If you are simply reading from one database and inserting into another, just read everything from the first. You will not get any better performance by trying to retrieve in batches; if there is a difference, it will be negative. Frame your query in a way that brings back everything. The JDBC software will handle all the other breaking-up and reconstituting that you need.
However, you should batch the update/insert side of things.
The set-up would create two statements on the two connections:
Statement stmt = null;
ResultSet rs = null;
PreparedStatement insStmt = null;

stmt = conDb1.createStatement();
insStmt = conDb2.prepareStatement("insert into tgt_db2_table values (?,?,?,?,? ... etc. ?,?)");

rs = stmt.executeQuery("select * from src_db1_table");
Then, loop over the select as normal, but use batching on the target.
int batchedRecordCount = 0;

while (rs.next()) {
    // Here you read values from the cursor and set them on the insStmt ...
    String field1 = rs.getString(1);
    String field2 = rs.getString(2);
    int field3 = rs.getInt(3);
    //--- etc.

    insStmt.setString(1, field1);
    insStmt.setString(2, field2);
    insStmt.setInt(3, field3);
    //----- etc. for all the fields

    insStmt.addBatch();
    batchedRecordCount++;

    if (batchedRecordCount >= 1000) {
        insStmt.executeBatch();
        batchedRecordCount = 0; // start the next batch
    }
}
if (batchedRecordCount > 0) {
    // Finish off the final (partial) batch of records
    insStmt.executeBatch();
}

// Close resources...
