Java JDBC - PreparedStatement executeUpdate() always returns 1

I'm currently working on Java code that retrieves data from XML files located in various folders and then uploads both the file itself and the extracted data to a SQL Server database. I don't want to upload any duplicate XML file, but since the files can have random names, I check for duplicates using each file's hash before uploading. I'm uploading the files to the following table:
XMLFiles
CREATE TABLE [dbo].[XMLFiles](
[PathID] [int] NOT NULL,
[FileID] [int] IDENTITY(1,1) NOT NULL,
[XMLFileName] [nvarchar](100) NULL,
[FileSize] [int] NULL,
[FileData] [varbinary](max) NULL,
[ModDate] [datetime2](7) NULL,
[FileHash] [nvarchar](100) NULL,
CONSTRAINT [PK_XMLFiles] PRIMARY KEY CLUSTERED
(
[FileID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
The code I'm using to upload the files is the following:
public int UploadFile(String Path, int pathID) throws SQLException, SAXException, IOException {
    int ID = -1;
    String hash;
    int len, rowCount = 0;
    String query;
    PreparedStatement pstmt;
    try {
        File file = new File(Path);
        hash = XMLRead.getFileChecksum(file);
        FileInputStream fis = new FileInputStream(file);
        len = (int) file.length();
        query = (" IF NOT EXISTS "
                + " (SELECT 1"
                + " FROM XMLFiles"
                + " WHERE FileSize=" + len + " AND FileHash='" + hash + "')"
                + " BEGIN"
                + " INSERT INTO XMLFiles (PathID,XMLFileName,FileSize,FileData,ModDate,FileHash) "
                + " VALUES(?,?,?,?,GETDATE(),?)"
                + " END;");
        pstmt = Con.prepareStatement(query);
        pstmt.setInt(1, pathID);
        pstmt.setString(2, file.getName());
        pstmt.setInt(3, len);
        pstmt.setBinaryStream(4, fis, len);
        pstmt.setString(5, hash);
        rowCount = pstmt.executeUpdate();
        System.out.println("ROWS AFFECTED:-" + rowCount);
        if (rowCount == 0) {
            System.out.println("THE FILE: " + file.getName() + " ALREADY EXISTS IN THE SERVER WITH THE NAME: ");
            System.out.println(GetFilename(hash));
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return rowCount;
}
I'm running the program against 28 files, 4 of which are duplicates with different names. I know the duplicate detection itself works, because at the end of each run only the 24 unique files have been uploaded. The problem is that I use rowCount to check whether the file was uploaded, and if it wasn't (because it was a duplicate) I skip uploading that file's data as well, like so (the following fragment illustrates the check I'm doing):
int rowCount = UploadFile(Path, pathID);
if (rowCount == 1) {
    //UPLOAD DATA
}
The problem is that executeUpdate() in the UploadFile method always returns 1, even when no rows in the database were affected. Is there something I'm missing here? I can't find anything wrong with my code; is it the IF NOT EXISTS check that's returning the 1?

The update count returned by a SQL statement is only well-defined for a plain DML statement (INSERT, UPDATE, or DELETE).
It is not defined for a SQL script.
The value is whatever the server chooses to return for a script. For MS SQL Server, it is likely the value of @@ROWCOUNT at the end of the statement/script:
Set @@ROWCOUNT to the number of rows affected or read.
Since you're executing a SELECT statement, it sets the @@ROWCOUNT value. If that value is zero, you then execute the INSERT statement, which overrides the @@ROWCOUNT value.
Assuming there will never be more than one row with that size/hash, you will always get a count of 1 back.
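You can see this from SSMS with a sketch along these lines (made-up literal values, against the XMLFiles table above):
IF NOT EXISTS
    (SELECT 1 FROM XMLFiles WHERE FileSize = 123 AND FileHash = 'abc')
BEGIN
    INSERT INTO XMLFiles (PathID, XMLFileName, FileSize, FileHash)
    VALUES (1, 'x.xml', 123, 'abc')
END;
SELECT @@ROWCOUNT;
-- 1 either way: either the EXISTS check read a row, or the INSERT added one.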

It could be that when the SELECT in your IF block finds the existing row, that row is counted and returned.
If no exception is thrown, you could try the INSERT without the IF NOT EXISTS check and see if this is the case. You may end up with duplicates if you do not have a key of some kind that prevents them from being inserted, or you may get an exception if you do have such a key. It's worth testing to see what you get.
If it is the SELECT returning the 1, you may need to split this into two statements and simply skip the second if the first finds a row. You can keep them in the same transaction; as currently written, your database is effectively executing two statements anyway. It's more code, but done in the same transaction it has the same effect on your database.
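A sketch of that split (reusing the names from the question; Con, file, fis, len, hash, pathID and rowCount are assumed to be set up as in the UploadFile method above):
Con.setAutoCommit(false);
try (PreparedStatement check = Con.prepareStatement(
        "SELECT 1 FROM XMLFiles WHERE FileSize = ? AND FileHash = ?")) {
    check.setInt(1, len);
    check.setString(2, hash);
    try (ResultSet dup = check.executeQuery()) {
        if (dup.next()) {
            rowCount = 0; // duplicate: skip both the file and its data
        } else {
            try (PreparedStatement ins = Con.prepareStatement(
                    "INSERT INTO XMLFiles (PathID,XMLFileName,FileSize,FileData,ModDate,FileHash)"
                    + " VALUES (?,?,?,?,GETDATE(),?)")) {
                ins.setInt(1, pathID);
                ins.setString(2, file.getName());
                ins.setInt(3, len);
                ins.setBinaryStream(4, fis, len);
                ins.setString(5, hash);
                rowCount = ins.executeUpdate(); // now a plain INSERT count: 0 or 1
            }
        }
    }
}
Con.commit();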

Related

Codename One SQL database storing wrong values

I am used to developing desktop applications with Java. Now I am trying Codename One to develop my first mobile app.
Trying to replicate my experience with SQL databases, I am running into a very odd storage behavior which I cannot explain.
The database is created, but when I change the value being inserted, the new value gets ignored and just the old value is added. To save the new value, I have to delete the database.
I like the interface, and any kind of help would be appreciated.
Database db = Display.getInstance().openOrCreate("MyDB.db");
db.execute("CREATE TABLE IF NOT EXISTS Persons (Date NOT NULL, Event NOT NULL)");
String sql = "INSERT INTO Persons (Date, Event) VALUES ('John', '10000.00');";
db.execute(sql);
// adds "John" to the database every time I click the button

// then I change "John" to "James"
// I am not adding the lines twice, just changing the input
sql = "INSERT INTO Persons (Date, Event) VALUES ('James', '10000.00');";
db.execute(sql);
// keeps adding "John" to the database, even though the value has been changed to "James"

Cursor cur = db.executeQuery("select * from Persons;");
Row currentRow = cur.getRow();
String dataText = currentRow.getString(0);
while (cur.next()) {
    System.out.println(dataText);
}
You're not fetching the next row into dataText in your while() loop, so you're just repeatedly printing out the text from the first row.
It should be:
Cursor cur = db.executeQuery("select * from Persons;");
while (cur.next()) {
    Row currentRow = cur.getRow();
    String dataText = currentRow.getString("Date");
    System.out.println(dataText);
}
If you examine the table with a separate query tool, like PhpMyAdmin, you should see that it contains both rows.
I hope I got the syntax right. I'm not a Java programmer and I got it from a tutorial.

Using a database API cursor with JDBC and SQLServer to select batch results

SOLVED (See answer below.)
I did not understand my problem within the proper context. The real issue was that my query was returning multiple ResultSet objects, and I had never come across that before. I have posted code below that solves the problem.
PROBLEM
I have an SQL Server database table with many thousand rows. My goal is to pull the data back from the source database and write it to a second database. Because of application memory constraints, I will not be able to pull the data back all at once. Also, because of this particular table's schema (over which I have no control) there is no good way for me to tick off the rows using some sort of ID column.
A gentleman over at the Database Administrators StackExchange helped me out by putting together something called a database API cursor, and basically wrote this complicated query that I only need to drop my statement into. When I run the query in SQL Management Studio (SSMS) it works great. I get all the data back, a thousand rows at a time.
Unfortunately, when I try to translate this into JDBC code, I get back the first thousand rows only.
QUESTION
Is it possible using JDBC to retrieve a database API cursor, pull the first set of rows from it, allow the cursor to advance, and then pull the subsequent sets one at a time? (In this case, a thousand rows at a time.)
SQL CODE
This gets complicated, so I'm going to break it up.
The actual query can be simple or complicated. It doesn't matter. I've tried several different queries during my experimentation and they all work. You just drop it into the SQL code in the appropriate place. So, let's take this simple statement as our query:
SELECT MyColumn FROM MyTable;
The actual SQL database API cursor is far more complicated. I will print it out below. You can see the above query buried in it:
-- http://dba.stackexchange.com/a/82806
DECLARE @cur INTEGER
        ,
        -- FAST_FORWARD | AUTO_FETCH | AUTO_CLOSE
        @scrollopt INTEGER = 16 | 8192 | 16384
        ,
        -- READ_ONLY, CHECK_ACCEPTED_OPTS, READ_ONLY_ACCEPTABLE
        @ccopt INTEGER = 1 | 32768 | 65536
        ,@rowcount INTEGER = 1000
        ,@rc INTEGER;

-- Open the cursor and return the first 1,000 rows
EXECUTE @rc = sys.sp_cursoropen @cur OUTPUT
        ,'SELECT MyColumn FROM MyTable'
        ,@scrollopt OUTPUT
        ,@ccopt OUTPUT
        ,@rowcount OUTPUT;

IF @rc <> 16 -- FastForward cursor automatically closed
BEGIN
    -- Name the cursor so we can use CURSOR_STATUS
    EXECUTE sys.sp_cursoroption @cur
            ,2
            ,'MyCursorName';

    -- Until the cursor auto-closes
    WHILE CURSOR_STATUS('global', 'MyCursorName') = 1
    BEGIN
        EXECUTE sys.sp_cursorfetch @cur
                ,2
                ,0
                ,1000;
    END;
END;
As I've said, the above creates a cursor in the database and asks the database to execute the statement, keep track (internally) of the data it's returning, and return the data a thousand rows at a time. It works great.
JDBC CODE
Here's where I'm having the problem. I have no compilation problems or run-time problems with my Java code. The problem I am having is that it returns only the first thousand rows. I don't understand how to utilize the database cursor properly. I have tried variations on the Java basics:
// Hoping to get all of the data, but I only get the first thousand.
ResultSet rs = stmt.executeQuery(fq.getQuery());
while (rs.next()) {
    System.out.println(rs.getString("MyColumn"));
}
I'm not surprised by the results, but all of the variations I've tried produce the same results.
From my research it seems that JDBC does something with database cursors when the database is Oracle, but you have to set the data type returned in the result set as an Oracle cursor object. I'm guessing there is something similar for SQL Server, but I have been unable to find anything yet.
Does anyone know of a way?
I'm including example Java code in full (as ugly as that gets).
// FancyQuery.java
import java.sql.*;

public class FancyQuery {

    // Adapted from http://dba.stackexchange.com/a/82806
    String query = "DECLARE @cur INTEGER\n"
            + "        ,\n"
            + "        -- FAST_FORWARD | AUTO_FETCH | AUTO_CLOSE\n"
            + "        @scrollopt INTEGER = 16 | 8192 | 16384\n"
            + "        ,\n"
            + "        -- READ_ONLY, CHECK_ACCEPTED_OPTS, READ_ONLY_ACCEPTABLE\n"
            + "        @ccopt INTEGER = 1 | 32768 | 65536\n"
            + "        ,@rowcount INTEGER = 1000\n"
            + "        ,@rc INTEGER;\n"
            + "\n"
            + "-- Open the cursor and return the first 1,000 rows\n"
            + "EXECUTE @rc = sys.sp_cursoropen @cur OUTPUT\n"
            + "        ,'SELECT MyColumn FROM MyTable;'\n"
            + "        ,@scrollopt OUTPUT\n"
            + "        ,@ccopt OUTPUT\n"
            + "        ,@rowcount OUTPUT;\n"
            + "\n"
            + "IF @rc <> 16 -- FastForward cursor automatically closed\n"
            + "BEGIN\n"
            + "    -- Name the cursor so we can use CURSOR_STATUS\n"
            + "    EXECUTE sys.sp_cursoroption @cur\n"
            + "            ,2\n"
            + "            ,'MyCursorName';\n"
            + "\n"
            + "    -- Until the cursor auto-closes\n"
            + "    WHILE CURSOR_STATUS('global', 'MyCursorName') = 1\n"
            + "    BEGIN\n"
            + "        EXECUTE sys.sp_cursorfetch @cur\n"
            + "                ,2\n"
            + "                ,0\n"
            + "                ,1000;\n"
            + "    END;\n"
            + "END;\n";

    public String getQuery() {
        return this.query;
    }

    public static void main(String[] args) throws Exception {
        String dbUrl = "jdbc:sqlserver://tc-sqlserver:1433;database=MyBigDatabase";
        String user = "mario";
        String password = "p#ssw0rd";
        String driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver";

        FancyQuery fq = new FancyQuery();
        Class.forName(driver);
        Connection conn = DriverManager.getConnection(dbUrl, user, password);
        Statement stmt = conn.createStatement();

        // We expect to get 1,000 rows at a time.
        ResultSet rs = stmt.executeQuery(fq.getQuery());
        while (rs.next()) {
            System.out.println(rs.getString("MyColumn"));
        }

        // Alas, we've only gotten 1,000 rows, total.
        rs.close();
        stmt.close();
        conn.close();
    }
}
I figured it out.
stmt.execute(fq.getQuery());

ResultSet rs = null;
for (;;) {
    // Each sp_cursorfetch batch comes back as its own ResultSet.
    rs = stmt.getResultSet();
    while (rs.next()) {
        System.out.println(rs.getString("MyColumn"));
    }
    // No more result sets and no more update counts: we're done.
    if ((stmt.getMoreResults() == false) && (stmt.getUpdateCount() == -1)) {
        break;
    }
}
if (rs != null) {
    rs.close();
}
After some additional googling, I found a bit of code posted back in 2004:
http://www.coderanch.com/t/300865/JDBC/databases/SQL-Server-JDBC-Registering-cursor
The gentleman who posted the snippet that I found helpful (Julian Kennedy) suggested: "Read the Javadoc for getUpdateCount() and getMoreResults() for a clear understanding." I was able to piece it together from that.
Basically, I don't think I understood my problem well enough at the outset to phrase the question correctly. What it comes down to is that my query returns its data in multiple ResultSet instances. What I needed was a way to iterate not merely through the rows of one ResultSet but through the entire series of ResultSets. That's what the code above does.
If you want all records from the table, just do "SELECT * FROM table".
The only reason to retrieve in chunks is if there is some intermediate place for the data: e.g. if you are showing it on the screen, or storing it in memory.
If you are simply reading from one database and inserting into another, just read everything from the first. You will not get better performance by trying to retrieve in batches; if there is a difference, it will be negative. Frame your query in a way that brings back everything, and the JDBC driver will handle the breaking-up and reconstituting for you.
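For what it's worth, a minimal sketch of that single-query approach with the Microsoft driver (the selectMethod=cursor connection property and setFetchSize are the knobs that keep the driver from buffering everything client-side; the URL and names are placeholders from the question):
// Let the driver page through the result instead of a hand-rolled cursor.
String url = "jdbc:sqlserver://tc-sqlserver:1433;database=MyBigDatabase;selectMethod=cursor";
try (Connection conn = DriverManager.getConnection(url, user, password);
     Statement stmt = conn.createStatement()) {
    stmt.setFetchSize(1000); // hint: roughly 1,000 rows per round trip
    try (ResultSet rs = stmt.executeQuery("SELECT MyColumn FROM MyTable")) {
        while (rs.next()) {
            System.out.println(rs.getString("MyColumn"));
        }
    }
}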
However, you should batch the update/insert side of things.
The set-up would create two statements on the two connections:
Statement stmt = null;
ResultSet rs = null;
PreparedStatement insStmt = null;

stmt = conDb1.createStatement();
insStmt = conDb2.prepareStatement(
        "insert into tgt_db2_table values (?, ?, ?, /* ... etc. */ ?)");
rs = stmt.executeQuery("select * from src_db1_table");
Then, loop over the select as normal, but use batching on the target.
int batchedRecordCount = 0;

while (rs.next()) {
    // Read values from the cursor and set them on insStmt...
    String field1 = rs.getString(1);
    String field2 = rs.getString(2);
    int field3 = rs.getInt(3);
    // --- etc.

    insStmt.setString(1, field1);
    insStmt.setString(2, field2);
    insStmt.setInt(3, field3);
    // ----- etc. for all the fields

    insStmt.addBatch();
    batchedRecordCount++;
    if (batchedRecordCount >= 1000) {
        insStmt.executeBatch();
        batchedRecordCount = 0;
    }
}
if (batchedRecordCount > 0) {
    // Finish off the final (partial) batch of records
    insStmt.executeBatch();
}
// Close resources...
//Close resources...

Program hangs after retrieving 100 rows containing CLOB

I am retrieving one text column (CLOB) from a table in a "remote" H2 database (actually on a local drive, but accessed over TCP), and after retrieving the first 100 rows the program hangs on retrieving the next row of the result set. If, on the other hand, I access the same database as an embedded database, there is no problem. If I try to display the table's rows using H2's console application, accessing the database via the Server (i.e. TCP) method, I get the following error message:
IO Exception: "java.io.IOException: org.h2.message.DbException: The object is already closed [90007-164]";
"lob: null table: 14 id: 1" [90031-164] 90031/90031
Here is the program. If I uncomment the call that sets the system property, the program works. I have also tried retrieving the column using a character stream or simply a call to getString, controlled by the constant USE_STREAM. There is no difference in the results:
import java.sql.*;
import java.util.*;
import java.io.*;

public class Jdbc4
{
    private static final boolean USE_STREAM = false;

    public static void main(String[] args) throws Exception
    {
        //System.setProperty("h2.serverResultSetFetchSize", "50");
        Connection conn = null;
        try {
            Class.forName("org.h2.Driver").newInstance();
            conn = DriverManager.getConnection(
                    "jdbc:h2:tcp://localhost/file:C:/h2/db/test/test;IFEXISTS=TRUE", "sa", "");
            Statement stmt = conn.createStatement();
            String sql = "select select_variables from ipm_queues";
            ResultSet rs = stmt.executeQuery(sql);
            int count = 0;
            while (rs.next()) {
                ++count;
                String s;
                if (USE_STREAM) {
                    Clob clob = rs.getClob(1);
                    Reader rdr = clob.getCharacterStream();
                    char[] cbuf = new char[1024];
                    StringBuffer sb = new StringBuffer();
                    int len;
                    while ((len = rdr.read(cbuf, 0, cbuf.length)) != -1)
                        sb.append(cbuf, 0, len);
                    rdr.close();
                    s = sb.toString();
                    clob.free();
                }
                else
                    s = rs.getString(1);
                System.out.println(count + ": " + s);
            }
        }
        finally {
            if (conn != null)
                conn.close();
        }
    }
}
Here is the DDL for creating the table (you can see it was originally a MySQL table):
CREATE TABLE `ipm_queues` (
`oid` bigint NOT NULL,
`queue_id` varchar(256) NOT NULL,
`store_id` bigint NOT NULL,
`creation_time` datetime NOT NULL,
`status` bigint NOT NULL,
`deleted` bigint NOT NULL,
`last_mod_time` datetime NOT NULL,
`queue_name` varchar(128),
`select_variables` text,
`where_clause` text,
`from_table` varchar(128),
`order_by` varchar(256),
`from_associate_table` varchar(256),
`from_view` varchar(128)
);
ALTER TABLE ipm_queues
ADD CONSTRAINT ipm_queues_pkey PRIMARY KEY (oid);
CREATE UNIQUE INDEX ipm_queues_key_idx ON ipm_queues(queue_id, store_id);
CREATE INDEX ipm_queues_str_idx ON ipm_queues(store_id);
I believe I understand the cause of the hang. I investigated the simplest case of using an h2.serverResultSetFetchSize value of 600, which is greater than the 523 rows I know that I have. As I mentioned, I can retrieve the first 3 rows (single CLOB column) okay, and then I either hang on the retrieval of the 4th row or I get a "The object is already closed" exception.
It turns out that the strings in the first three rows are rather short, and method getInputStream in class org.h2.value.ValueLobDb already has the data and simply returns a ByteArrayInputStream constructed on it. The 4th row's data is still on the server side, so an actual RemoteInputStream has to be built to fetch the data from the server-side LOB.
Here's what seems to be the problem: class org.h2.server.TcpServerThread caches these LOBs in an instance of SmallLRUCache. This cache is designed to retain only the most recently referenced LOBs! Its default size is given by system property h2.serverCachedObjects, which defaults to 64, whereas the default fetch size is 100. So even if I had not overridden the default h2.serverResultSetFetchSize property, if all of my rows had sufficiently large columns requiring cached LOBs, any fetch size > 64 would cause the LOB representing the first row to be flushed out of the cache, and I would not even be able to retrieve the first row.
An LRU cache seems to be the wrong structure for holding LOBs that belong to an active result set. Certainly a default cache size smaller than the default fetch size seems less than ideal.
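In the meantime, the workaround shown commented out in the program above is to keep the fetch size at or below the LOB cache size; a sketch:
// Workaround sketch: a fetch size <= h2.serverCachedObjects (default 64)
// keeps the active result set's LOBs from being evicted mid-read.
System.setProperty("h2.serverResultSetFetchSize", "50");
Connection conn = DriverManager.getConnection(
        "jdbc:h2:tcp://localhost/file:C:/h2/db/test/test;IFEXISTS=TRUE", "sa", "");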
You should probably give more details, but did you check your network connection? Maybe your database server is blocking connections (or network connections) as soon as they try to fetch too much data. This could be a sort of protection.

Delete Row from Oracle table by passing table name and column name

I have a JSP page where the user selects a table name, a column name, and a column value; with those three conditions I want to delete all matching rows from the database. Is there a way to pass a table name, column name, and column value to Oracle to delete certain rows from the table? Any example would help me. Thank you
I'd worry about SQL Injection attacks as you are supplying the table and column names.
You could create an Oracle function to remove the records required and test for certain conditions to be met before removing the row:
CREATE OR REPLACE
FUNCTION delete_record (
    p_table  IN VARCHAR2,
    p_column IN VARCHAR2,
    p_value  IN VARCHAR2
)
RETURN NUMBER
AS
    v_table  user_tables.table_name%TYPE;
    v_column user_tab_cols.column_name%TYPE;
BEGIN
    -- Check table exists in DB
    SELECT table_name
      INTO v_table
      FROM user_tables
     WHERE table_name = UPPER(p_table);

    -- Check column exists in DB table
    SELECT column_name
      INTO v_column
      FROM user_tab_cols
     WHERE table_name = UPPER(p_table)
       AND column_name = UPPER(p_column);

    EXECUTE IMMEDIATE
        'DELETE FROM '||DBMS_ASSERT.SIMPLE_SQL_NAME(p_table)||
        ' WHERE '||DBMS_ASSERT.SIMPLE_SQL_NAME(p_column)||' = :col_value'
        USING p_value;

    RETURN SQL%ROWCOUNT;
EXCEPTION
    WHEN NO_DATA_FOUND
    THEN
        -- Either return -1 (error) or log an error etc.
        RETURN -1;
    WHEN others
    THEN
        <Your exception handling here>
END delete_record;
/
This (or something like this) would check the table and column variables supplied exist in the database before then deleting the records and returning the number of records deleted.
If there is a problem with the number deleted you can issue a rollback statement, if it is OK then you can issue a commit.
Of course, if you want to supply a fully qualified table name (recommended) then you would use the DBMS_ASSERT.QUALIFIED_SQL_NAME function instead of the DBMS_ASSERT.SIMPLE_SQL_NAME function.
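For example, only the assert call on the table name changes (a sketch):
EXECUTE IMMEDIATE
    'DELETE FROM '||DBMS_ASSERT.QUALIFIED_SQL_NAME(p_table)||
    ' WHERE '||DBMS_ASSERT.SIMPLE_SQL_NAME(p_column)||' = :col_value'
    USING p_value;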
Hope it helps...
EDIT: In response to Jack's question about adding date from and date to.
If you add two new conditions that are passed in to the function as:
CREATE OR REPLACE
FUNCTION delete_record (
p_table IN VARCHAR2,
p_column IN VARCHAR2,
p_value IN VARCHAR2,
p_date_from IN DATE,
p_date_to IN DATE
)
Then you'd need to expand the EXECUTE IMMEDIATE with:
EXECUTE IMMEDIATE
'DELETE FROM '||DBMS_ASSERT.SIMPLE_SQL_NAME(p_table)||
' WHERE '||DBMS_ASSERT.SIMPLE_SQL_NAME(p_column)||' = :col_value'||
' AND date BETWEEN :date_from AND :date_to'
USING p_value,
p_date_from,
p_date_to;
N.B. This assumes your date column in the table is called "date".
I don't have a SQL interface in front of me at the moment but this should be close enough to what you need to get it working.
If you are passing the p_date_XXXX parameters in as VARCHAR2 rather than DATE types, then you'd need to TO_DATE the values before passing them into the dynamic SQL.
e.g.
EXECUTE IMMEDIATE
'DELETE FROM '||DBMS_ASSERT.SIMPLE_SQL_NAME(p_table)||
' WHERE '||DBMS_ASSERT.SIMPLE_SQL_NAME(p_column)||' = :col_value'||
' AND date BETWEEN :date_from AND :date_to'
USING p_value,
TO_DATE(p_date_from, <date_format>),
TO_DATE(p_date_to, <date_format>);
DELETE FROM table_name WHERE column_name = column_value
The problem is that you can't bind table or column names in PreparedStatement, only column values.
This should work (from memory; not tested):
Statement stmt = null;
try
{
    stmt = conn.createStatement();
    int deleted = stmt.executeUpdate(
        "DELETE FROM " + tableName + " WHERE " + columnName + " = '" + condition + "'");
}
catch (SQLException e)
{
    // ... report error
}
try
{
    if (stmt != null)
        stmt.close();
}
catch (SQLException ignore)
{
}
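Given the injection worry raised above, a more defensive sketch is to accept only known identifiers and bind the value (the table/column whitelists here are hypothetical; needs java.sql.* and java.util.Set, Java 9+ for Set.of):
private static final Set<String> ALLOWED_TABLES = Set.of("ORDERS", "CUSTOMERS");
private static final Set<String> ALLOWED_COLUMNS = Set.of("STATUS", "REGION");

static int deleteRows(Connection conn, String table, String column, String value)
        throws SQLException {
    // Identifiers cannot be bound, so refuse anything not on the whitelist.
    if (!ALLOWED_TABLES.contains(table.toUpperCase())
            || !ALLOWED_COLUMNS.contains(column.toUpperCase())) {
        throw new IllegalArgumentException("Unknown table or column");
    }
    String sql = "DELETE FROM " + table + " WHERE " + column + " = ?";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setString(1, value); // the value itself is bound, not concatenated
        return ps.executeUpdate();
    }
}
A call then looks like deleteRows(conn, "ORDERS", "STATUS", userSuppliedValue), with only the value coming from the user unchecked.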

Failing to load large dataset into h2 database

Here is the problem: at my company we have a large database that we want to perform some automated operations on. To test them, we got a small sample of that data, about six 10 MB CSV files. We want to use H2 to test the results of our program against it. H2 seemed to work fine with our previous CSVs, though they were at most 1,000 entries long. With any of our 10 MB files, the command
insert into myschema.mytable (select * from csvread('mycsvfile.csv'));
reports a failure because one of the records is supposedly duplicated and violates our primary key constraint.
Unique index or primary key violation: "PRIMARY_KEY_6 ON MYSCHEMA.MYTABLE(DATETIME, LARGENUMBER, KIND)"; SQL statement:
insert into myschema.mytable (select * from csvread('src/test/resources/h2/data/mycsvfile.csv')) [23001-148] 23001/23001
By breaking mycsvfile.csv into smaller pieces, I was able to see that the problem starts to appear after about 10,000 rows inserted (though the number varies depending on the data I used). However, I could insert more than 10,000 rows if I broke the file into pieces and ran the command on each piece individually. But even if I manage to insert all the data that way, I need an automated method to fill the database.
Since running the command would not tell me which row was causing the problem, I guessed that the problem could be some cache in the csvread routine.
Then I created a small Java program to insert the data into the H2 database manually. No matter whether I batched the commands, or closed and reopened the connection every 1,000 rows, H2 reported that I was trying to insert a duplicate entry into the database.
org.h2.jdbc.JdbcSQLException: Unique index or primary key violation: "PRIMARY_KEY_6 ON MYSCHEMA.MYTABLE(DATETIME, LARGENUMBER, KIND)"; SQL statement:
INSERT INTO myschema.mytable VALUES ( '1997-10-06 01:00:00.0',25485116,1.600,0,18 ) [23001-148]
Doing a plain-text search for that record using emacs, I can see that it is not duplicated: the datetime column is unique across the whole dataset.
I cannot give you the data to test with, since the company sells that information, but here is what my table definition looks like.
create table myschema.mytable (
datetime timestamp,
largenumber numeric(8,0) references myschema.largenumber(largecode),
value numeric(8,3) not null,
flag numeric(1,0) references myschema.flag(flagcode),
kind smallint references myschema.kind(kindcode),
primary key (datetime, largenumber, kind)
);
This is what our CSV looks like:
datetime,largenumber,value,flag,kind
1997-06-11 16:45:00.0,25485116,0.710,0,18
1997-06-11 17:00:00.0,25485116,0.000,0,18
1997-06-11 17:15:00.0,25485116,0.000,0,18
1997-06-11 17:30:00.0,25485116,0.000,0,18
And here is the Java code that fills our test database (forgive my ugly code, I got desperate :)
private static void insertFile(MyFile file) throws SQLException {
    int updateCount = 0;
    ResultSet rs = Csv.getInstance().read(file.toString(), null, null);
    ResultSetMetaData meta = rs.getMetaData();
    Connection conn = DriverManager.getConnection(
            "jdbc:h2:tcp://localhost/mytestdatabase", "sa", "pass");
    rs.next();
    while (rs.next()) {
        Statement stmt = conn.createStatement();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < meta.getColumnCount(); i++) {
            if (i == 0)
                sb.append("'" + rs.getString(i + 1) + "'");
            else
                sb.append(rs.getString(i + 1));
            sb.append(',');
        }
        updateCount++;
        if (sb.length() > 0)
            sb.deleteCharAt(sb.length() - 1);
        stmt.execute(String.format(
                "INSERT INTO myschema.mytable VALUES ( %s ) ",
                sb.toString()));
        if (updateCount == 1000) {
            conn.close();
            conn = DriverManager.getConnection(
                    "jdbc:h2:tcp://localhost/mytestdatabase", "sa", "pass");
            updateCount = 0;
        }
    }
    if (!conn.isClosed()) {
        conn.close();
    }
    rs.close();
}
I'll be glad to provide more information if requested.
EDIT
@Randy: I always check that the database is clean before running the command, and in my Java program I have a routine to delete all data from a file that fails to be inserted.
select * from myschema.mytable where largenumber = 25485116;
DATETIME LARGENUMBER VALUE FLAG KIND
(no rows, 8 ms)
The only thing that I can think of is that there is a trigger on the table that sets the timestamp to "now". Although that would not explain why you are successful with a few rows, it would explain why the primary key is being violated.
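One way to test that (and to rule out a hidden duplicate) is to ask H2 which key values in the CSV itself collide; a sketch:
-- Group the raw CSV by the primary-key columns; any row returned
-- here would explain the violation without any table trigger.
SELECT datetime, largenumber, kind, COUNT(*) AS n
FROM csvread('mycsvfile.csv')
GROUP BY datetime, largenumber, kind
HAVING COUNT(*) > 1;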
