I'm getting a weird SQLException on a function I run against a database using JDBC.
SQLException: Column 'Message' not found.
I have this in my function:
st = con.prepareStatement("SELECT NotificationID,UserIDFrom,UserIDTo,Message,Timestamp,isNotified FROM notification WHERE UserIDTo=? AND isNotified=?");
st.setInt(1, _UserID);
st.setBoolean(2, false);
System.out.println("st is: " + st);
rs = st.executeQuery();
And I got that error, so I added this right after the st.executeQuery() call:
ResultSetMetaData meta = rs.getMetaData();
for (int index = 1; index <= meta.getColumnCount(); index++) {
    System.out.println("Column " + index + " is named " + meta.getColumnName(index));
}
And when I run my code again this is what I get as a result:
Column 1 is named NotificationID
Column 2 is named UserIDFrom
Column 3 is named UserIDTo
Column 4 is named Message
Column 5 is named TimeStamp
Exception in thread "main" java.sql.SQLException: Column 'Message' not found.
Column 6 is named isNotified
And here is a screenshot of my table's design, from MySQL Workbench
And the data in the table
I really can't figure out what's going on here. Can anyone help out?
EDIT
I've replaced the * in the SELECT statement with an explicit column list, just to add something I noticed to the question.
If I remove the Message column from the SELECT, I get the same error for the TimeStamp column. And if I remove both columns, I get no errors at all.
EDIT2
OK, this is the part where I get the errors; I get them on both Message and Timestamp:
while (rs.next()) {
    NotificationID = rs.getInt("NotificationID");
    System.out.println("NotificationID: " + NotificationID);
    SenderID = rs.getInt("UserIDFrom");
    System.out.println("SenderID: " + SenderID);
    From = findUserName(SenderID);
    try {
        body = rs.getString("Message");
        System.out.println("body: " + body);
    } catch (Exception e) {
        System.out.println("Message error: " + e);
        e.printStackTrace();
    }
    try {
        time = rs.getString("Timestamp");
        System.out.println("time: " + time);
    } catch (Exception e) {
        System.out.println("Timestamp error: " + e);
        e.printStackTrace();
    }
}
I get the error on the getString() methods for each column
Stack trace for TimeStamp (the same for Message):
java.sql.SQLException: Column 'TimeStamp' not found.
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1078)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:989)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:975)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:920)
at com.mysql.jdbc.ResultSetImpl.findColumn(ResultSetImpl.java:1167)
at com.mysql.jdbc.ResultSetImpl.getString(ResultSetImpl.java:5733)
at NotifyMe_Server.Database.getUnNotified(Database.java:444)
at tests.Tests.main(Tests.java:39)
If you observe your code
try {
    time = rs.getString("Timestamp");
    System.out.println("time: " + time);
} catch (Exception e) {
    System.out.println("Timestamp error: " + e);
    e.printStackTrace();
}
you have used "Timestamp", but if you change it to "TimeStamp", matching the column name in your database, it will hopefully work.
Change the datatype of your isNotified column to TINYINT in the database and retry the insert:
isNotified TINYINT(1)
Bool, Boolean: These types are synonyms for TINYINT(1). A value of zero is considered false. Non-zero values are considered true.
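For what it's worth, a minimal sketch (assuming isNotified really is stored as TINYINT(1)) of how the JDBC code can keep treating it as a boolean:
// Sketch: with isNotified as TINYINT(1), 0 maps to false and non-zero to true.
st.setBoolean(2, false);                        // binds 0
boolean notified = rs.getBoolean("isNotified"); // reads 0/1 as false/true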
Can you change
System.out.println("Column " + index + " is named " + meta.getColumnName(index));
to
System.out.println("Column " + index + " is named '" + meta.getColumnName(index) + "'");
so that we can see if there is whitespace in the "Message" column name?
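As a further check (a hedged extension of the snippet above, not part of the original answer), you could also dump each character's code point to catch non-printing characters in the column labels:
// Sketch: print every character of each column name to spot hidden
// whitespace or other non-printing characters in the metadata.
for (int index = 1; index <= meta.getColumnCount(); index++) {
    String name = meta.getColumnName(index);
    System.out.print("Column " + index + " is named '" + name + "' chars:");
    for (char c : name.toCharArray()) {
        System.out.print(" " + (int) c);
    }
    System.out.println();
}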
The fact that the error message appears between columns 5 and 6 is not important, I think: one is standard output and the other standard error, and these output streams are not synchronized.
(Also see the previous answer about Timestamp vs TimeStamp.)
It sounds like the table metadata is corrupt. You should be able to correct this by dropping and recreating the table, although if the metadata is really borked you may not be able to drop the table. If that's the case or you need to keep the data, backing up and restoring the whole database is the way to go, but check the SQL dump file before restoring and/or restore to another database name before dropping the broken database. Depending on exactly what's wrong, your problem columns may be missing from the dump.
If refreshing the database is not an option, there are ways to perform targeted repairs, but I'm no expert so I can't advise you on that. Again, back up your database AND verify that the backup is complete (i.e. it has all your columns) before proceeding. If this is a production database, I would be very wary about taking advice from the internet on manipulating metadata. Minor differences in version, storage engine and environment can make or break you with this stuff, and given the nature of the problem you can't do a dry run.
Related
I'm truncating the table and then inserting new data. Somehow we got an error (a value was larger than the column allows). The error is thrown, but the TRUNCATE statement is not rolled back.
Could you please suggest what the issue is and how to roll it back?
try {
    utx.begin();
    List<Company> company = CompanyMapper.MAPPER.entityListToDaoList(query);
    logger.log(Level.INFO, "Truncating Table Company!!");
    emTarget.createNativeQuery("TRUNCATE TABLE Company").executeUpdate();
    logger.log(Level.INFO, "Table Companyrole Company!!");
    logger.log(Level.INFO, "Populating Table Company!! - " + company.size());
    for (Company row : company) {
        logger.log(Level.INFO, "ROW:" + row.getCompanyid()
                + "| Address:" + row.getVisitaddress1()
                + "| Size:" + (row.getVisitaddress1() != null ? row.getVisitaddress1().length() : "0"));
        emTarget.persist(row);
    }
    logger.log(Level.INFO, "Populated Table Company!!");
    utx.commit();
} catch (Exception e) {
    e.printStackTrace();
    utx.rollback();
    logger.log(Level.SEVERE, "Persist transaction failed. Rollback activated", e.getMessage());
    throw new PersistenceException("There was an error reading the source table");
}
I'm using Oracle.
As noted in the Oracle documentation of TRUNCATE TABLE, you cannot roll back:
Note: You cannot roll back a TRUNCATE TABLE statement, nor can you use a FLASHBACK TABLE statement to retrieve the contents of a
table that has been truncated.
If you want the ability to roll back, you will need to use DELETE instead (this may be a lot slower than truncate).
Try using DELETE instead of TRUNCATE
TRUNCATE is DDL while DELETE is DML.
There are several database implementations where DDL cannot be rolled back.
You cannot roll back TRUNCATE. If you need that ability, use DELETE instead.
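Applied to the JPA code above, the change is small (a hedged sketch; it assumes the same Company table and the same JTA transaction as in the question):
// Sketch: DELETE is DML and participates in the transaction, so a failure
// during the inserts rolls back the wipe as well.
utx.begin();
emTarget.createNativeQuery("DELETE FROM Company").executeUpdate();
// ... persist the new rows as before ...
utx.commit();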
Moving from GlassFish 3 to Payara 5, one piece of code stopped working. Our code saves a date to a set of records and then selects those updated records. On GlassFish 3 this works seamlessly; on Payara 5 the select returns no records (it seems as if it runs in another transaction). If the result differs we throw an exception, so the data is never saved.
The transaction isolation level is READ_COMMITTED.
try {
    String whereClause = " where "
            + "em.mailboxToBeUsed =:mailbox "
            + "and em.lastSendResult is null "
            + "and (em.errorsNumber is null or em.errorsNumber<4) ";
    Query updateQuery = em.createQuery("update EmailTobeSended em "
            + "set em.trysendsince=:dat " + whereClause);
    updateQuery.setParameter("mailbox", mb);
    updateQuery.setParameter("dat", key);
    int modificate = updateQuery.executeUpdate();
    em.flush();

    TypedQuery<EmailTobeSended> emlocksel = em.createQuery(
            "select em from EmailTobeSended em WHERE em.mailboxToBeUsed =:mailbox AND "
            + "em.trysendsince=:dat "
            + " order by em.emailId ", EmailTobeSended.class);
    emlocksel.setParameter("mailbox", mb);
    emlocksel.setParameter("dat", key);
    res = emlocksel.getResultList();

    if (modificate != res.size()) {
        throw new java.lang.AssertionError("Lock error on select emailtobesended");
    }
} catch (Exception ex) {
    gotError = true;
    res = null;
}
On GlassFish 3, after flushing, the second query finds the updated records. On Payara 5 there is no result.
EDIT
We use EclipseLink.
We solved the problem: it wasn't about persistence, it was about the MySQL version (we went from 5.5 to 5.6).
The DATE field is interpreted differently between the two versions: in 5.5 milliseconds are ignored, in 5.6 they are taken into account. Because the field was not configured to accept milliseconds, the dates were saved without them, so in the second query (a SELECT) the comparison was made against values with ".000" as the milliseconds, different from the value being searched for.
Updating the field to DATE(3) solved the problem.
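As an additional workaround (a hedged sketch, not from the original post), the key could also be truncated to whole seconds before it is bound, so the UPDATE and the following SELECT compare identical values even when the column stores no fractional seconds:
// Sketch: drop the fractional-second part of the key before binding it
// in both queries, so stored and searched values match exactly.
java.util.Date key = new java.util.Date();
key = new java.util.Date((key.getTime() / 1000L) * 1000L);
updateQuery.setParameter("dat", key);
emlocksel.setParameter("dat", key);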
I want to insert multiple rows into a table. From a console/tool (e.g. Data Studio) I get the following error message:
THE INSERT OR UPDATE VALUE OF FOREIGN KEY FK$MAR$S IS INVALID.
SQLCODE=-530, SQLSTATE=23503, DRIVER=4.13.111
This means I had a problem with a FOREIGN KEY constraint; I solved that later and it works fine now.
My problem is that when I run the same query from a Java application using PreparedStatement.executeBatch() (batch, because it could insert more than one row at a time), I get a different error message:
com.ibm.db2.jcc.am.wn: [jcc][t4][102][10040][3.57.82] Batch failure.
The batch was submitted, but at least one exception occurred on an
individual member of the batch. Use getNextException() to retrieve
the exceptions for specific batched elements. ERRORCODE=-4228,
SQLSTATE=null
When I use getNextException(), I get the following:
com.ibm.db2.jcc.am.co: A NON-ATOMIC INSERT STATEMENT ATTEMPTED TO
PROCESS MULTIPLE ROWS OF DATA, BUT ERRORS OCCURRED
And the error code is -4228.
Why the difference? I want the Java application to return the same error details as the console tool, so I can handle those exceptions in my Java code.
For example, if the returned error code is -803, which means a duplicate key, I would have my code perform an update instead of an insert; or if the returned message contains words like " FOREIGN KEY ", I would tell the user to check the lookup tables, and so on.
I use DB2 version 10.5.3 on z/OS, and the DB2 driver version is 3.65.92.
} catch (SQLException ex) {
    while (ex != null) {
        if (ex instanceof com.ibm.db2.jcc.DB2Diagnosable) {
            com.ibm.db2.jcc.DB2Diagnosable db2ex = (com.ibm.db2.jcc.DB2Diagnosable) ex;
            com.ibm.db2.jcc.DB2Sqlca sqlca = db2ex.getSqlca();
            if (sqlca != null) {
                System.out.println("SQLCODE: " + sqlca.getSqlCode());
                System.out.println("MESSAGE: " + sqlca.getMessage());
            } else {
                System.out.println("Error code: " + ex.getErrorCode());
                System.out.println("Error msg : " + ex.getMessage());
            }
        } else {
            System.out.println("Error code (non-db2): " + ex.getErrorCode());
            System.out.println("Error msg (non-db2): " + ex.getMessage());
        }
        ex = ex.getNextException();
    }
    ...
}
Above is an example of handling DB2 exceptions. Below is example output when two violations happen at the same time: a unique key violation on the table MYSCHEMA.MYTABLE receiving the batch inserts, and a foreign key violation against a parent table. I intentionally split the output into two parts:
Before getNextException():
Error code: -4229
Error msg : [jcc][t4][102][10040][4.19.66] ... getNextException().
ERRORCODE=-4229, SQLSTATE=null
After getNextException():
SQLCODE: -803
MESSAGE: One or more values in the INSERT statement,
UPDATE statement, or foreign key update caused by a DELETE statement
are not valid because the primary key, unique constraint or unique
index identified by "1" constrains table "MYSCHEMA.MYTABLE" from
having duplicate values for the index key.. SQLCODE=-803,
SQLSTATE=23505, DRIVER=4.19.66
SQLCODE: -530
MESSAGE: The insert or update value of the FOREIGN KEY
"MYSCHEMA.MYTABLE.MYTABLE_FK" is not equal to any value of the parent
key of the parent table.. SQLCODE=-530, SQLSTATE=23503, DRIVER=4.19.66
I think the batch exception message is pretty clear. Consider that different statements in a batch might fail or issue warnings for different reasons. The batch level error message is therefore generic and instructs you to use "getNextException() to retrieve the exceptions for specific" statements in the batch.
Though this is an old thread, I will share the code which worked for me:
try {
    // ... run the batch update here (the original used a framework call that
    //     sets the values and the batch size and wraps the JDBC exception) ...
} catch (Exception e) {
    if (e.getCause() instanceof BatchUpdateException) {
        BatchUpdateException be = (BatchUpdateException) e.getCause();
        SQLException current = be.getNextException();
        while (current != null) {
            current.printStackTrace();
            current = current.getNextException();
        }
    }
}
Here I'm unwrapping the BatchUpdateException and walking its chained exceptions.
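As a hedged follow-up (not part of the original answer): BatchUpdateException also exposes getUpdateCounts(), which can help identify exactly which elements of the batch failed:
// Sketch: per-statement results from the failed batch; Statement.EXECUTE_FAILED
// marks the elements that could not be executed.
int[] counts = be.getUpdateCounts();
for (int i = 0; i < counts.length; i++) {
    if (counts[i] == java.sql.Statement.EXECUTE_FAILED) {
        System.out.println("Batch element " + i + " failed");
    }
}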
SOLVED (See answer below.)
I did not understand my problem within the proper context. The real issue was that my query was returning multiple ResultSet objects, and I had never come across that before. I have posted code below that solves the problem.
PROBLEM
I have an SQL Server database table with many thousand rows. My goal is to pull the data back from the source database and write it to a second database. Because of application memory constraints, I will not be able to pull the data back all at once. Also, because of this particular table's schema (over which I have no control) there is no good way for me to tick off the rows using some sort of ID column.
A gentleman over at the Database Administrators StackExchange helped me out by putting together something called a database API cursor, and basically wrote this complicated query that I only need to drop my statement into. When I run the query in SQL Management Studio (SSMS) it works great. I get all the data back, a thousand rows at a time.
Unfortunately, when I try to translate this into JDBC code, I get back the first thousand rows only.
QUESTION
Is it possible using JDBC to retrieve a database API cursor, pull the first set of rows from it, allow the cursor to advance, and then pull the subsequent sets one at a time? (In this case, a thousand rows at a time.)
SQL CODE
This gets complicated, so I'm going to break it up.
The actual query can be simple or complicated. It doesn't matter. I've tried several different queries during my experimentation and they all work. You basically just drop it into the SQL code in the appropriate place. So, let's take this simple statement as our query:
SELECT MyColumn FROM MyTable;
The actual SQL database API cursor is far more complicated. I will print it out below. You can see the above query buried in it:
-- http://dba.stackexchange.com/a/82806
DECLARE @cur INTEGER
       ,
       -- FAST_FORWARD | AUTO_FETCH | AUTO_CLOSE
       @scrollopt INTEGER = 16 | 8192 | 16384
       ,
       -- READ_ONLY, CHECK_ACCEPTED_OPTS, READ_ONLY_ACCEPTABLE
       @ccopt INTEGER = 1 | 32768 | 65536
       ,@rowcount INTEGER = 1000
       ,@rc INTEGER;

-- Open the cursor and return the first 1,000 rows
EXECUTE @rc = sys.sp_cursoropen @cur OUTPUT
       ,'SELECT MyColumn FROM MyTable'
       ,@scrollopt OUTPUT
       ,@ccopt OUTPUT
       ,@rowcount OUTPUT;

IF @rc <> 16 -- FastForward cursor automatically closed
BEGIN
    -- Name the cursor so we can use CURSOR_STATUS
    EXECUTE sys.sp_cursoroption @cur
           ,2
           ,'MyCursorName';

    -- Until the cursor auto-closes
    WHILE CURSOR_STATUS('global', 'MyCursorName') = 1
    BEGIN
        EXECUTE sys.sp_cursorfetch @cur
               ,2
               ,0
               ,1000;
    END;
END;
As I've said, the above creates a cursor in the database and asks the database to execute the statement, keep track (internally) of the data it's returning, and return the data a thousand rows at a time. It works great.
JDBC CODE
Here's where I'm having the problem. I have no compilation problems or run-time problems with my Java code. The problem I am having is that it returns only the first thousand rows. I don't understand how to utilize the database cursor properly. I have tried variations on the Java basics:
// Hoping to get all of the data, but I only get the first thousand.
ResultSet rs = stmt.executeQuery(fq.getQuery());
while (rs.next()) {
    System.out.println(rs.getString("MyColumn"));
}
I'm not surprised by the results, but every variation I've tried produces the same thing.
From my research, it seems that JDBC does something with database cursors when the database is Oracle, but you have to set the data type returned in the result set to an Oracle cursor object. I'm guessing there is something similar for SQL Server, but I have been unable to find anything yet.
Does anyone know of a way?
I'm including example Java code in full (as ugly as that gets).
// FancyQuery.java
import java.sql.*;

public class FancyQuery {

    // Adapted from http://dba.stackexchange.com/a/82806
    String query = "DECLARE @cur INTEGER\n"
            + "       ,\n"
            + "       -- FAST_FORWARD | AUTO_FETCH | AUTO_CLOSE\n"
            + "       @scrollopt INTEGER = 16 | 8192 | 16384\n"
            + "       ,\n"
            + "       -- READ_ONLY, CHECK_ACCEPTED_OPTS, READ_ONLY_ACCEPTABLE\n"
            + "       @ccopt INTEGER = 1 | 32768 | 65536\n"
            + "       ,@rowcount INTEGER = 1000\n"
            + "       ,@rc INTEGER;\n"
            + "\n"
            + "-- Open the cursor and return the first 1,000 rows\n"
            + "EXECUTE @rc = sys.sp_cursoropen @cur OUTPUT\n"
            + "       ,'SELECT MyColumn FROM MyTable;'\n"
            + "       ,@scrollopt OUTPUT\n"
            + "       ,@ccopt OUTPUT\n"
            + "       ,@rowcount OUTPUT;\n"
            + "\n"
            + "IF @rc <> 16 -- FastForward cursor automatically closed\n"
            + "BEGIN\n"
            + "    -- Name the cursor so we can use CURSOR_STATUS\n"
            + "    EXECUTE sys.sp_cursoroption @cur\n"
            + "           ,2\n"
            + "           ,'MyCursorName';\n"
            + "\n"
            + "    -- Until the cursor auto-closes\n"
            + "    WHILE CURSOR_STATUS('global', 'MyCursorName') = 1\n"
            + "    BEGIN\n"
            + "        EXECUTE sys.sp_cursorfetch @cur\n"
            + "               ,2\n"
            + "               ,0\n"
            + "               ,1000;\n"
            + "    END;\n"
            + "END;\n";

    public String getQuery() {
        return this.query;
    }

    public static void main(String[] args) throws Exception {
        String dbUrl = "jdbc:sqlserver://tc-sqlserver:1433;database=MyBigDatabase";
        String user = "mario";
        String password = "p#ssw0rd";
        String driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver";

        FancyQuery fq = new FancyQuery();

        Class.forName(driver);
        Connection conn = DriverManager.getConnection(dbUrl, user, password);
        Statement stmt = conn.createStatement();

        // We expect to get 1,000 rows at a time.
        ResultSet rs = stmt.executeQuery(fq.getQuery());
        while (rs.next()) {
            System.out.println(rs.getString("MyColumn"));
        }

        // Alas, we've only gotten 1,000 rows, total.
        rs.close();
        stmt.close();
        conn.close();
    }
}
I figured it out.
stmt.execute(fq.getQuery());

ResultSet rs = null;
for (;;) {
    // Each fetch from the server-side cursor arrives as its own ResultSet.
    rs = stmt.getResultSet();
    while (rs.next()) {
        System.out.println(rs.getString("MyColumn"));
    }
    // Stop when there are no more result sets and no more update counts.
    if ((stmt.getMoreResults() == false) && (stmt.getUpdateCount() == -1)) {
        break;
    }
}
if (rs != null) {
    rs.close();
}
After some additional googling, I found a bit of code posted back in 2004:
http://www.coderanch.com/t/300865/JDBC/databases/SQL-Server-JDBC-Registering-cursor
The gentleman who posted the snippet that I found helpful (Julian Kennedy) suggested: "Read the Javadoc for getUpdateCount() and getMoreResults() for a clear understanding." I was able to piece it together from that.
Basically, I don't think I understood my problem well enough at the outset in order to phrase it correctly. What it comes down to is that my query will be returning the data in multiple ResultSet instances. What I needed was a way to not merely iterate through each row in a ResultSet but, rather, iterate through the entire set of ResultSets. That's what the code above does.
If you want all records from the table, just do "Select * from table".
The only reason to retrieve in chunks is if there is some intermediate place for the data: e.g. if you are showing it on the screen, or storing it in memory.
If you are simply reading from one and inserting into another, just read everything from the first. You will not get better performance by trying to retrieve in batches; if there is a difference, it will be negative. Frame your query in a way that brings back everything. The JDBC driver will handle all the other breaking-up and reconstituting that you need.
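(As a hedged aside, not in the original answer: a fetch-size hint is the usual way to let the driver stream rows in chunks while you iterate, though support and behaviour vary by driver.)
// Sketch: hint the driver to fetch rows in blocks rather than buffering everything.
stmt.setFetchSize(1000);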
However, you should batch the update/insert side of things.
The set-up would create two statements on the two connections:
Statement stmt = null;
ResultSet rs = null;
PreparedStatement insStmt = null;

stmt = conDb1.createStatement();
insStmt = conDb2.prepareStatement("insert into tgt_db2_table values (?,?,?,?,?, ...etc..., ?,?)");

rs = stmt.executeQuery("select * from src_db1_table");
Then, loop over the select as normal, but use batching on the target.
int batchedRecordCount = 0;

while (rs.next()) {
    System.out.println(rs.getString("MyColumn"));
    // Here you read values from the cursor and set them on the insStmt ...
    String field1 = rs.getString(1);
    String field2 = rs.getString(2);
    int field3 = rs.getInt(3);
    //--- etc.

    insStmt.setString(1, field1);
    insStmt.setString(2, field2);
    insStmt.setInt(3, field3);
    //----- etc. for all the fields

    batchedRecordCount++;
    insStmt.addBatch();

    if (batchedRecordCount >= 1000) {
        insStmt.executeBatch();
        batchedRecordCount = 0; // start a fresh batch
    }
}

if (batchedRecordCount > 0) {
    // Finish off the final (partial) batch of records
    insStmt.executeBatch();
}

// Close resources...
I am trying to figure out a piece of code that has existed as-is for several years but has recently started behaving differently (I'm not sure whether it is because of a newer JRE version or the environment the code is being run in; I don't know enough to tell). Anyway, the gist of the code pasted below is that I process one record at a time and insert it into a new table. The idea is that I only ever insert one row at a time. This code is from around 2000, and back then the engineers decided to throw in super-paranoid sanity checks to make sure that one and only one record got inserted into said table. All of a sudden this piece has been throwing exceptions indicating that the number of rows inserted was NOT 1 (unfortunately, I don't know how many rows actually got inserted, since the code lacks that piece of logging).
//
// If the result set is empty, then get the next master record id and insert a new record into the
// Holding_Tank_Master_Records.
//
if ( !mrRecords.next() )
{
    // a new master record is needed
    mrid = this.getMasterRecordId();
    log.debug("new master record id = " + mrid);

    insertSQL = new String("INSERT INTO Holding_Tank_Master_Records (" +
            "Master_Record_Id, Probe_Date, Import_Filename, " +
            "Pennies, Nickels, Dimes, Quarters, " +
            "Half_Dollars, SBA_Dollars, One_Dollar_Bills, " +
            "Five_Dollar_Bills, Ten_Dollar_Bills, Twenty_Dollar_Bills) " +
            "VALUES( " + mrid + ", '" + curProbeDate.toString() + "', 'Avail AVL APC', " +
            "0, 0, 0, 0, 0, 0, 0, 0, 0, 0 )");

    int result = mrStatement.executeUpdate(insertSQL);
    if ( result == 1 )
    {
        log.debug ( "NEW MASTER RECORD created for " + curProbeDate.toString() );
    }
    else
    {
        log.debug("Failed! SQL: " + insertSQL);
        String strErrMsg = "Failed to insert new record into Holding_Tank_Master_Records. " +
                "Master_Record_Id = " + mrid + " Probe_Date = " + curProbeDate.toString() +
                "Vehicle Farebox Id = " + Integer.toString(vehicleID) + ".";
        log.error( strErrMsg );
        throw new AvailFaretoolException( strErrMsg );
    }
}
So, what I am seeing is the 'Failed!' SQL message along with the custom exception being thrown, leading me to believe that the number of rows inserted was NOT 1.
Has anyone seen anything like this before? Can you spot an issue here? By the way, if I run the SQL via SQL Management Studio, it works just fine, with a message telling me 1 row was inserted. I know that the SQL running through the code isn't causing any SQL exceptions (which in any case would have caused the code flow to fall straight to my catch block, correct?).
Thanks for looking at this!
K
Edit:
Just wanted to add information regarding research I have done so far on this topic:
The API documentation for executeUpdate mentions that the 'update count' it returns is either (1) the row count for SQL Data Manipulation Language (DML) statements, or (2) 0 for SQL statements that return nothing.
Since the statement is a pure INSERT, I don't understand how it could return anything but a non-negative row count. I mean, the only possible outcome of an INSERT statement is a row count, right?
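For reference, a hedged sketch (not the production code) of the logging that is currently missing, so the actual return value of executeUpdate() gets captured:
// Hypothetical logging sketch: record what executeUpdate() actually returned
// before treating anything other than 1 as a failure.
int result = mrStatement.executeUpdate(insertSQL);
log.debug("executeUpdate returned " + result + " for SQL: " + insertSQL);
if (result != 1) {
    log.error("Unexpected update count: " + result + " (expected 1)");
}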