I want to create an array for each row.
while (result.next()) {
    String[] result1 = {
            "Dog ID: " + result.getLong("dogs.id"),
            ", Dog name: " + result.getString("dogs.first_name"),
            ", Owner name: " + result.getString("owners.first_name"),
            ", Owner phone: " + result.getString("owners.phone") };
    resultList.add(result1);
}
My code writes each row into one array.
Can I get the number of columns and put a limit on it?
while (resultset.next()) {
    int i = 1;
    while (i <= numberOfColumns) {
This is because I can't send the entire table as a result from the server to the client.
You can read a column by index with result.getLong(columnIndex), but that doesn't work well inside a loop in your case because the columns have different types (unless you complicate the code).
If you want to optimize the traffic from server to client, the way to go is to query for just the columns you need.
If you want to limit the rows returned, it might be better to put the limiting criteria into the SQL query and only return the rows you want to include.
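For example, something like this might do (a rough sketch; the join condition dogs.owner_id = owners.id is an assumption about your schema, and the LIMIT clause assumes MySQL-style SQL):
// Sketch: select only the columns you actually use and cap the row count in SQL.
// Adjust the join condition to your real foreign key.
String sql = "SELECT dogs.id, dogs.first_name, owners.first_name, owners.phone "
        + "FROM dogs JOIN owners ON dogs.owner_id = owners.id "
        + "LIMIT 100";
try (Statement st = conn.createStatement();
     ResultSet result = st.executeQuery(sql)) {
    while (result.next()) {
        // build your per-row array here, as in your snippet above
    }
}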
To get the number of columns in your ResultSet, you can use the following piece of code:
Statement stat = conn.createStatement();
ResultSet rs = stat.executeQuery(myQuery);
ResultSetMetaData metaData = rs.getMetaData();
int numOfColumns = metaData.getColumnCount();
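Putting the two together, here is a minimal sketch (assuming you only need string representations of the values, which is what your array holds anyway) that builds one String[] per row generically, whatever the column types:
// Sketch: one String[] per row, sized from the metadata.
// getString() renders every value as text, which sidesteps the type differences.
ResultSetMetaData md = rs.getMetaData();
int numOfColumns = md.getColumnCount();
List<String[]> resultList = new ArrayList<>();
while (rs.next()) {
    String[] row = new String[numOfColumns];
    for (int i = 1; i <= numOfColumns; i++) {
        row[i - 1] = md.getColumnLabel(i) + ": " + rs.getString(i);
    }
    resultList.add(row);
}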
Related
I have a table called [Elenco_Aziende] from which I extract all the records into a ResultSet. [Elenco_Aziende] has a one-to-many relation with two other tables, [Elenco_Autisti] and [Elenco_Veicoli], via a field called [Partita_IVA_Azienda], which is also the primary key of the [Elenco_Aziende] table.
After extracting all records from [Elenco_Aziende], I loop over each value of [Partita_IVA_Azienda], open a new ResultSet to read the related rows in the [Elenco_Autisti] and [Elenco_Veicoli] tables, and perform some operations on each of them.
And here comes the strange thing: as long as the [Partita_IVA_Azienda] values (the field is defined as Text in the Access DB) are all the same length, reading from [Elenco_Autisti] and [Elenco_Veicoli] works fine, but if some [Partita_IVA_Azienda] value has a different length I get this error:
net.ucanaccess.jdbc.UcanaccessSQLException: UCAExc:::3.0.1 data exception: numeric value out of range
More precisely here is the nested loop scenario:
Connection con = DriverManager.getConnection("jdbc:ucanaccess://"
        + filepath);
Statement stmt = con.createStatement();
String qry = "SELECT * FROM Elenco_Aziende";
ResultSet rs = stmt.executeQuery(qry);
String cognometest = "";
String nometest = "";
while (rs.next()) {
    String partitaiva = "Partita IVA: "
            + rs.getString("Partita_IVA_Azienda") + "\n\r";
    String partitaivazienda = rs.getString("Partita_IVA_Azienda");
    Statement stmtautisti = con.createStatement();
    System.out.println("About to run the query for partita IVA azienda = " + partitaivazienda + "\n\r");
    String qryautisti = "SELECT * FROM Elenco_Autisti WHERE Partita_IVA_Azienda="
            + partitaivazienda; /* !!!!! HERE, WHEN I EXECUTE THE NEXT QUERY, IS WHERE I GET THE EXCEPTION
                                   net.ucanaccess.jdbc.UcanaccessSQLException: UCAExc:::3.0.1 data exception: numeric value out of range !!!!! */
    ResultSet rsautisti = stmtautisti.executeQuery(qryautisti);
    while (rsautisti.next()) {
        // do something here
    }
    Statement stmtveicoli = con.createStatement();
    String qryveicoli = "SELECT * FROM Elenco_Veicoli WHERE Partita_IVA_Azienda="
            + rs.getString("Partita_IVA_Azienda");
    ResultSet rsveicoli = stmtveicoli.executeQuery(qryveicoli);
    while (rsveicoli.next()) {
        // do something else here
    }
}
That is, as soon as I execute the query
String qryautisti = "SELECT * FROM Elenco_Autisti WHERE Partita_IVA_Azienda=" + partitaivazienda;
with a [Partita_IVA_Azienda] value of a different length, I get the problem.
I even tried exporting the database to comma-separated values and reimporting it into a brand new one, but it did not help. Furthermore, the problem seems to happen only with a large number of records in the tables [Elenco_Autisti] (138 records) and [Elenco_Veicoli] (287 records), while it does not seem to happen with a small number of records. [Elenco_Aziende] is small (no more than 10 records).
From the little I know about SQL, a WHERE clause on a text field should have the value enclosed in apostrophes:
String qryautisti = "SELECT * FROM Elenco_Autisti WHERE Partita_IVA_Azienda='"
+ partitaivazienda + "'";
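Better still, use a PreparedStatement, which handles the quoting for you and avoids this class of error entirely. A minimal sketch, reusing the variables from your loop:
// Sketch: a parameterized query; the driver quotes and escapes the value,
// so values of any length or content are handled the same way.
String qryautisti = "SELECT * FROM Elenco_Autisti WHERE Partita_IVA_Azienda = ?";
try (PreparedStatement psAutisti = con.prepareStatement(qryautisti)) {
    psAutisti.setString(1, partitaivazienda);
    try (ResultSet rsautisti = psAutisti.executeQuery()) {
        while (rsautisti.next()) {
            // do something here
        }
    }
}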
SOLVED (See answer below.)
I did not understand my problem within the proper context. The real issue was that my query was returning multiple ResultSet objects, and I had never come across that before. I have posted code below that solves the problem.
PROBLEM
I have an SQL Server database table with many thousand rows. My goal is to pull the data back from the source database and write it to a second database. Because of application memory constraints, I will not be able to pull the data back all at once. Also, because of this particular table's schema (over which I have no control) there is no good way for me to tick off the rows using some sort of ID column.
A gentleman over at the Database Administrators StackExchange helped me out by putting together something called a database API cursor, and basically wrote this complicated query that I only need to drop my statement into. When I run the query in SQL Management Studio (SSMS) it works great. I get all the data back, a thousand rows at a time.
Unfortunately, when I try to translate this into JDBC code, I get back the first thousand rows only.
QUESTION
Is it possible using JDBC to retrieve a database API cursor, pull the first set of rows from it, allow the cursor to advance, and then pull the subsequent sets one at a time? (In this case, a thousand rows at a time.)
SQL CODE
This gets complicated, so I'm going to break it up.
The actual query can be simple or complicated; it doesn't matter. I've tried several different queries during my experimentation and they all work. You just drop it into the SQL code in the appropriate place. So, let's take this simple statement as our query:
SELECT MyColumn FROM MyTable;
The actual SQL database API cursor is far more complicated. I will print it out below. You can see the above query buried in it:
-- http://dba.stackexchange.com/a/82806
DECLARE @cur INTEGER
       ,
        -- FAST_FORWARD | AUTO_FETCH | AUTO_CLOSE
        @scrollopt INTEGER = 16 | 8192 | 16384
       ,
        -- READ_ONLY, CHECK_ACCEPTED_OPTS, READ_ONLY_ACCEPTABLE
        @ccopt INTEGER = 1 | 32768 | 65536
       ,@rowcount INTEGER = 1000
       ,@rc INTEGER;

-- Open the cursor and return the first 1,000 rows
EXECUTE @rc = sys.sp_cursoropen @cur OUTPUT
       ,'SELECT MyColumn FROM MyTable'
       ,@scrollopt OUTPUT
       ,@ccopt OUTPUT
       ,@rowcount OUTPUT;

IF @rc <> 16 -- FastForward cursor automatically closed
BEGIN
    -- Name the cursor so we can use CURSOR_STATUS
    EXECUTE sys.sp_cursoroption @cur
           ,2
           ,'MyCursorName';

    -- Until the cursor auto-closes
    WHILE CURSOR_STATUS('global', 'MyCursorName') = 1
    BEGIN
        EXECUTE sys.sp_cursorfetch @cur
               ,2
               ,0
               ,1000;
    END;
END;
As I've said, the above creates a cursor in the database and asks the database to execute the statement, keep track (internally) of the data it's returning, and return the data a thousand rows at a time. It works great.
JDBC CODE
Here's where I'm having the problem. I have no compilation problems or run-time problems with my Java code. The problem I am having is that it returns only the first thousand rows. I don't understand how to utilize the database cursor properly. I have tried variations on the Java basics:
// Hoping to get all of the data, but I only get the first thousand.
ResultSet rs = stmt.executeQuery(fq.getQuery());
while (rs.next()) {
System.out.println(rs.getString("MyColumn"));
}
I'm not surprised by the results, but all of the variations I've tried produce the same results.
From my research, it seems that JDBC does something with database cursors when the database is Oracle, but you have to treat the value returned in the result set as an Oracle cursor object. I'm guessing there is something similar for SQL Server, but I have been unable to find anything yet.
Does anyone know of a way?
I'm including example Java code in full (as ugly as that gets).
// FancyQuery.java
import java.sql.*;
public class FancyQuery {
// Adapted from http://dba.stackexchange.com/a/82806
String query = "DECLARE @cur INTEGER\n"
        + "       ,\n"
        + "        -- FAST_FORWARD | AUTO_FETCH | AUTO_CLOSE\n"
        + "        @scrollopt INTEGER = 16 | 8192 | 16384\n"
        + "       ,\n"
        + "        -- READ_ONLY, CHECK_ACCEPTED_OPTS, READ_ONLY_ACCEPTABLE\n"
        + "        @ccopt INTEGER = 1 | 32768 | 65536\n"
        + "       ,@rowcount INTEGER = 1000\n"
        + "       ,@rc INTEGER;\n"
        + "\n"
        + "-- Open the cursor and return the first 1,000 rows\n"
        + "EXECUTE @rc = sys.sp_cursoropen @cur OUTPUT\n"
        + "       ,'SELECT MyColumn FROM MyTable;'\n"
        + "       ,@scrollopt OUTPUT\n"
        + "       ,@ccopt OUTPUT\n"
        + "       ,@rowcount OUTPUT;\n"
        + "\n"
        + "IF @rc <> 16 -- FastForward cursor automatically closed\n"
        + "BEGIN\n"
        + "    -- Name the cursor so we can use CURSOR_STATUS\n"
        + "    EXECUTE sys.sp_cursoroption @cur\n"
        + "           ,2\n"
        + "           ,'MyCursorName';\n"
        + "\n"
        + "    -- Until the cursor auto-closes\n"
        + "    WHILE CURSOR_STATUS('global', 'MyCursorName') = 1\n"
        + "    BEGIN\n"
        + "        EXECUTE sys.sp_cursorfetch @cur\n"
        + "               ,2\n"
        + "               ,0\n"
        + "               ,1000;\n"
        + "    END;\n"
        + "END;\n";
public String getQuery() {
return this.query;
}
public static void main(String[] args) throws Exception {
String dbUrl = "jdbc:sqlserver://tc-sqlserver:1433;database=MyBigDatabase";
String user = "mario";
String password = "p#ssw0rd";
String driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
FancyQuery fq = new FancyQuery();
Class.forName(driver);
Connection conn = DriverManager.getConnection(dbUrl, user, password);
Statement stmt = conn.createStatement();
// We expect to get 1,000 rows at a time.
ResultSet rs = stmt.executeQuery(fq.getQuery());
while (rs.next()) {
System.out.println(rs.getString("MyColumn"));
}
// Alas, we've only gotten 1,000 rows, total.
rs.close();
stmt.close();
conn.close();
}
}
I figured it out.
stmt.execute(fq.getQuery());

ResultSet rs = null;
for (;;) {
    rs = stmt.getResultSet();
    if (rs != null) {
        while (rs.next()) {
            System.out.println(rs.getString("MyColumn"));
        }
    }
    // getMoreResults() advances to the next result; when it returns false
    // and getUpdateCount() is -1, there are no more results of any kind.
    if ((stmt.getMoreResults() == false) && (stmt.getUpdateCount() == -1)) {
        break;
    }
}
if (rs != null) {
    rs.close();
}
After some additional googling, I found a bit of code posted back in 2004:
http://www.coderanch.com/t/300865/JDBC/databases/SQL-Server-JDBC-Registering-cursor
The gentleman who posted the snippet that I found helpful (Julian Kennedy) suggested: "Read the Javadoc for getUpdateCount() and getMoreResults() for a clear understanding." I was able to piece it together from that.
Basically, I don't think I understood my problem well enough at the outset in order to phrase it correctly. What it comes down to is that my query will be returning the data in multiple ResultSet instances. What I needed was a way to not merely iterate through each row in a ResultSet but, rather, iterate through the entire set of ResultSets. That's what the code above does.
If you want all records from the table, just do "Select * from table".
The only reason to retrieve in chunks is if there is some intermediate place for the data: e.g. if you are showing it on the screen, or storing it in memory.
If you are simply reading from one database and inserting into another, just read everything from the first. You will not get any better performance by trying to retrieve in batches; if there is a difference, it will be negative. Frame your query in a way that brings back everything, and the JDBC driver will handle all the other breaking-up and reconstituting that you need.
However, you should batch the update/insert side of things.
The set-up would create two statements on the two connections:
Statement stmt = null;
ResultSet rs = null;
PreparedStatement insStmt = null;

stmt = conDb1.createStatement();
// One placeholder per column; the "etc." stands for the remaining ones.
insStmt = conDb2.prepareStatement("insert into tgt_db2_table values (?,?,?,?,? /* ......etc. */ ,?,?)");

rs = stmt.executeQuery("select * from src_db1_table");
Then, loop over the select as normal, but use batching on the target.
int batchedRecordCount = 0;
while (rs.next()) {
    // Read values from the source cursor and set them on insStmt...
    String field1 = rs.getString(1);
    String field2 = rs.getString(2);
    int field3 = rs.getInt(3);
    // --- etc.

    insStmt.setString(1, field1);
    insStmt.setString(2, field2);
    insStmt.setInt(3, field3);
    // ----- etc. for all the fields

    insStmt.addBatch();
    batchedRecordCount++;
    if (batchedRecordCount >= 1000) {
        // Flush a full batch to the target and start counting again.
        insStmt.executeBatch();
        batchedRecordCount = 0;
    }
}
if (batchedRecordCount > 0) {
    // Finish off the final (partial) batch of records.
    insStmt.executeBatch();
}
//Close resources...
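One more knob that may help on the read side (only a hint to the driver; whether the rows are actually streamed in chunks depends on the driver and connection settings): set a fetch size so the driver does not try to buffer the whole result at once.
// Optional: hint the driver to fetch the source rows in chunks of 1000
// instead of buffering the entire result set in memory.
stmt = conDb1.createStatement();
stmt.setFetchSize(1000);
rs = stmt.executeQuery("select * from src_db1_table");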
I know this may be a foolish question to ask, but I still need to do this.
This is a basic Java application where I want to use 3 queries simultaneously to print a table.
(I'm not using any primary key in this case, so please help me resolve this without making my attributes primary keys. I know this is not good practice, but for now I need to complete it.)
my code:
Connection con = null;
Statement stat1 = null, stat2 = null, stat3 = null;
ResultSet rs1, rs2, rs3;
stat1 = con.createStatement();
stat2 = con.createStatement();
stat3 = con.createStatement();
String str = "\nProduct\tC.P\tS.P.\tStock\tExpenditure\tSales";
info.setText(str);
String s1 = "SELECT type, cp, sp, stock FROM ts_items GROUP BY type ORDER BY type";
String s2 = "SELECT expenditure FROM ts_expenditure GROUP BY type ORDER BY type";
String s3 = "SELECT sales FROM ts_sales GROUP BY type ORDER BY type";
rs1 = stat1.executeQuery(s1);
rs2 = stat2.executeQuery(s2);
rs3 = stat3.executeQuery(s3);
String type;
int cp, sp, stock, expenditure, sales;
while( rs1.next() || rs2.next() || rs3.next() )
{
    type = rs1.getString("type");
    cp = rs1.getInt("cp");
    sp = rs1.getInt("sp");
    stock = rs1.getInt("stock");
    expenditure = rs2.getInt("expenditure");
    sales = rs3.getInt("sales");
    info.append("\n" + type + "\t" + cp + "\t" + sp + "\t" + stock + "\t" + expenditure + "\t" + sales);
}
Output:
Runtime Exception: Before start of result set
This is the problem:
while( rs1.next() || rs2.next() || rs3.next() )
If rs1.next() returns true, rs2.next() and rs3.next() won't be called due to short-circuiting. So rs2 and rs3 will both be before the first row. And if rs1.next() returns false, then you couldn't read from that anyway...
I suspect you actually want:
while (rs1.next() && rs2.next() && rs3.next())
After all, you only want to keep going while all three result sets have more information, right?
It's not clear why you're not doing an appropriate join, to be honest. That would make a lot more sense to me... Then you wouldn't be trying to use multiple result sets on a single connection, and you wouldn't be relying on there being the exact same type values in all the different tables.
You use an OR, so imagine only one ResultSet has a result: you end up trying to read from empty result sets.
Suppose rs1 has one row and rs3 has three rows. With your code, the second iteration will fail at rs1.getString("type") because rs1 has no more rows.
It is better to loop over each ResultSet separately.
This is going to go badly wrong, in the event that there is a type value that's missing from one of your three tables. Your code just assumes you'll get all of the types from all of the tables. It may be the case for your current data set, but it means that your code is not at all robust.
I would seriously recommend having just one SQL statement that combines your three selects (as subselects or joins) into a single result. Your Java code can then just iterate over the result of that one SQL statement.
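Something along these lines might work (a rough sketch; the table and column names come from your queries, but the assumption that "type" uniquely identifies a row in each table is mine):
// Sketch: one joined query instead of three parallel result sets.
String sql =
        "SELECT i.type, i.cp, i.sp, i.stock, e.expenditure, s.sales "
      + "FROM ts_items i "
      + "JOIN ts_expenditure e ON e.type = i.type "
      + "JOIN ts_sales s ON s.type = i.type "
      + "ORDER BY i.type";
try (Statement stat = con.createStatement();
     ResultSet rs = stat.executeQuery(sql)) {
    while (rs.next()) {
        info.append("\n" + rs.getString("type") + "\t" + rs.getInt("cp")
                + "\t" + rs.getInt("sp") + "\t" + rs.getInt("stock")
                + "\t" + rs.getInt("expenditure") + "\t" + rs.getInt("sales"));
    }
}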
I want to export data from table A and import it into table B. A and B are identical tables, and each has 100 columns. How can I do the export and import with JDBC? I want to do it dynamically; I do not want to write the copy column by column. (The two tables have the same columns, but table A is in Oracle and table B is in MySQL.)
Thank you.
Try:
insert into tableB
select * from tableA
This is also possible if the tables are in different databases, by creating a DB link between the databases (provided you have the permissions to do so).
You can otherwise copy a limited number of rows from TableA into memory and then insert them into TableB, but I strongly discourage this.
Unfortunately, in Java there is nothing similar to .NET's BulkCopy.
This might be of help:
ResultSet rs = st.executeQuery("SELECT * FROM A");
ResultSetMetaData rsmd = rs.getMetaData();
int columnCount = rsmd.getColumnCount();

// Column part of INSERT INTO B: "INSERT INTO B(col1, col2, ..., colN)"
String insert_string = "INSERT INTO B(";
for (int i = 1; i <= columnCount; i++) {
    insert_string += rsmd.getColumnName(i);
    if (i < columnCount) {
        insert_string += ", ";
    }
}
insert_string += ")";

// VALUES part, built from the current row of rs
// (so rs.next() must have been called first; see below)
insert_string += " VALUES (";
for (int i = 1; i <= columnCount; i++) {
    insert_string += "'" + rs.getString(i) + "'";
    if (i < columnCount) {
        insert_string += ", ";
    }
}
insert_string += ")"; // The VALUES part should be OK by now
So far we have one valid INSERT statement, but only for one row of the rs object. An iteration with rs.next() must be added around the VALUES part so that the INSERT string is built (and executed) for every row of A.
As for the performance, I honestly have no clue. I don't recommend this, but I think it's a fair way of addressing the question.
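A safer variant of the same idea is to build the INSERT once with placeholders and let a PreparedStatement handle quoting and type conversion. This is only a sketch: srcCon and tgtCon are assumed to be open connections to A's and B's databases, and cross-database type mapping (Oracle to MySQL) may still need per-column attention.
// Sketch: dynamic copy using metadata, placeholders and batching.
Statement st = srcCon.createStatement();
ResultSet rs = st.executeQuery("SELECT * FROM A");
ResultSetMetaData rsmd = rs.getMetaData();
int columnCount = rsmd.getColumnCount();

StringBuilder sql = new StringBuilder("INSERT INTO B VALUES (");
for (int i = 1; i <= columnCount; i++) {
    sql.append(i < columnCount ? "?, " : "?)");
}

PreparedStatement ins = tgtCon.prepareStatement(sql.toString());
while (rs.next()) {
    for (int i = 1; i <= columnCount; i++) {
        ins.setObject(i, rs.getObject(i)); // let the driver map the types
    }
    ins.addBatch();
}
ins.executeBatch();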
I am fetching records from a MySQL database using Java (JDBC). I have these tables:
Stop_Times with 1.5 million records, and
Stops with 100,000 (1 lakh) records.
I am using the following code:
ResultSet rs = stm.executeQuery("select distinct(stop_id) from Stop_Times force index (idx_stop_times) where agency_id = '" + agency_id + "' and route_type = " + route_type + " order by stop_id");
while(rs.next())
{
stop_id.add(rs.getString("stop_id"));
}
JSONArray jsonResult = new JSONArray();
String sql = "select * from Stops force index (idx_Stops) where stop_id = ? and agency_id = ? and location_type = 0 order by stop_name";
PreparedStatement pstm = con.prepareStatement(sql);
int rid = 0;
for(int r = 0; r < stop_id.size(); r++)
{
pstm.setString(1, stop_id.get(r).toString());
pstm.setString(2, agency_id);
rs = pstm.executeQuery();
if(rs.next())
{
JSONObject jsonStop = new JSONObject();
jsonStop.put("str_station_id", rs.getString("stop_id"));
jsonStop.put("str_station_name", rs.getString("stop_name") + "_" + rs.getString("stop_id"));
jsonStop.put("str_station_code", rs.getString("stop_code"));
jsonStop.put("str_station_desc", rs.getString("stop_desc"));
jsonStop.put("str_station_lat", rs.getDouble("stop_lat"));
jsonStop.put("str_station_lon", rs.getDouble("stop_lon"));
jsonStop.put("str_station_url", rs.getString("stop_url"));
jsonStop.put("str_location_type", rs.getString("location_type"));
jsonStop.put("str_zone_id", rs.getString("zone_id"));
jsonResult.put((rid++), jsonStop);
}
}
The first query returns 6871 records. But it is taking too much time: 8-10 seconds on the server side and 40-45 seconds on the client side.
I want to reduce these times to 300-500 milliseconds on the server side and around 10 seconds on the client side.
Can anybody please help me with how to do this?
Your strategy is to use a first query to get IDs, and then loop over these IDs and execute another query for each of the IDs found by the first query. You're in fact doing a "manual" join instead of letting the database do it for you. You could rewrite everything in a single query:
select distinct stops.*
from Stops stops
inner join Stop_Times stopTimes on stopTimes.stop_id = stops.stop_id
where stops.agency_id = ?
and stops.location_type = 0
and stopTimes.agency_id = ?
and stopTimes.route_type = ?
order by stops.stop_name
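Wired up in JDBC it could look roughly like this (a sketch that reuses the agency_id and route_type variables from your code and assumes route_type is numeric):
// Sketch: one parameterized query replaces the per-stop_id loop; each row
// already carries everything needed to build the JSON object.
String sql = "select distinct stops.* from Stops stops "
        + "inner join Stop_Times stopTimes on stopTimes.stop_id = stops.stop_id "
        + "where stops.agency_id = ? and stops.location_type = 0 "
        + "and stopTimes.agency_id = ? and stopTimes.route_type = ? "
        + "order by stops.stop_name";
PreparedStatement pstm = con.prepareStatement(sql);
pstm.setString(1, agency_id);
pstm.setString(2, agency_id);
pstm.setInt(3, route_type);
ResultSet rs = pstm.executeQuery();
while (rs.next()) {
    // build jsonStop from rs as before and add it to jsonResult
}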
Also, get the execution plan for your query (see http://dev.mysql.com/doc/refman/5.0/en/using-explain.html), avoid full table scans (join type ALL in the EXPLAIN output), then add the relevant indexes and retry.