I implemented a Java application which queries a database for a given set of ids using the query:
select * from STUDENT where ID in (?)
The set of ids is used to fill in the ? placeholders. However, occasionally, I receive an exception:
Caused by: java.sql.SQLException: Numeric Overflow
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:199)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:263)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:271)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:445)
at oracle.jdbc.driver.NumberCommonAccessor.throwOverflow(NumberCommonAccessor.java:4319)
at oracle.jdbc.driver.NumberCommonAccessor.getInt(NumberCommonAccessor.java:187)
at oracle.jdbc.driver.OracleResultSetImpl.getInt(OracleResultSetImpl.java:712)
at oracle.jdbc.driver.OracleResultSet.getInt(OracleResultSet.java:434)
After some testing, I realized that if I divide the list of ids into several smaller sub-lists, the exception stops happening. For some reason, JDBC doesn't like putting too many values into IN (?). Has anyone seen this issue before and can explain it? As this issue never happens on the production environment but only on a local one (which has less powerful resources), I suspect it has something to do with the server's resources.
Thanks
Update: the source code that I'm using is:
// create a query with one placeholder per id
private String getQueryString(int numOfParams) {
    StringBuilder out = new StringBuilder();
    out.append("select * from STUDENT where ID in (");
    for (int i = 0; i < numOfParams; i++) {
        if (i == numOfParams - 1) {
            out.append("?");
        } else {
            out.append("?, ");
        }
    }
    out.append(")");
    return out.toString();
}
// set parameters
private void setParams(PreparedStatement ps, Set<String> params) throws SQLException {
    int index = 1;
    for (String param : params) {
        ps.setString(index++, param);
    }
}
public void queryStudent(Connection conn, Set<String> ids) throws Exception {
    String query = this.getQueryString(ids.size());
    PreparedStatement ps = conn.prepareStatement(query);
    this.setParams(ps, ids);
    ResultSet rs = ps.executeQuery();
    // do some operations with the result
}
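For reference, the batching workaround mentioned above looks roughly like this (a sketch reusing the queryStudent method from above and the java.util collections; the batch size of 500 is an arbitrary illustration, not a tuned value):

// Query in fixed-size batches instead of one huge IN list.
public void queryStudentsInBatches(Connection conn, Set<String> ids) throws Exception {
    final int batchSize = 500; // arbitrary; small enough to avoid the failure
    List<String> idList = new ArrayList<>(ids);
    for (int from = 0; from < idList.size(); from += batchSize) {
        int to = Math.min(from + batchSize, idList.size());
        queryStudent(conn, new LinkedHashSet<>(idList.subList(from, to)));
    }
}

Independently of this exception, Oracle caps IN lists at 1000 expressions (ORA-01795), so batching is advisable for large id sets anyway.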
The issue was caused by a conflict between the ojdbc driver bundled with GlassFish and the one packaged with the application. In order to fix it, I had to:
* Update the application's pom.xml (as I'm using Maven) to use the latest ojdbc, which is ojdbc6-11.2.0.3
* Add ojdbc6-11.2.0.3 to the GlassFish lib directory
* If necessary, manually remove the old ojdbc jar from the deployed application's lib in GlassFish (apparently this is not cleared by undeploy)
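If you suspect a similar clash, one quick way to see which jar actually supplies the Oracle driver class at runtime is this (a sketch using standard reflection APIs; it prints the code source of the loaded class):

public class WhichOjdbc {
    public static void main(String[] args) throws ClassNotFoundException {
        // Shows whether the container's copy or the application's copy of
        // the driver wins the classloading race.
        Class<?> driver = Class.forName("oracle.jdbc.OracleDriver");
        System.out.println(driver.getProtectionDomain().getCodeSource().getLocation());
    }
}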
Did you check MySQL and/or JDBC max packet size setting? That usually bites you with large IN (...) lists.
This occurs when the ID property, or some other integer property of your entity, is too small for the value returned. Look at your stack trace:
at oracle.jdbc.driver.NumberCommonAccessor.getInt(NumberCommonAccessor.java:187)
at oracle.jdbc.driver.OracleResultSetImpl.getInt(OracleResultSetImpl.java:712)
at oracle.jdbc.driver.OracleResultSet.getInt(OracleResultSet.java:434)
A value returned by the query does not fit in this property!
Change the Integer properties to wider integer types (long, Long, or BigInteger) for all Integer-typed fields of your entity.
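In plain JDBC terms, the same fix looks like this (a sketch; ID stands for whichever NUMBER column overflows):

import java.math.BigDecimal;
import java.sql.ResultSet;
import java.sql.SQLException;

class WideRead {
    // rs.getInt("ID") throws "Numeric Overflow" once the NUMBER value is
    // outside the int range; widening the Java type avoids that.
    static long readId(ResultSet rs) throws SQLException {
        return rs.getLong("ID");
    }

    // BigDecimal is safe for any Oracle NUMBER precision and scale.
    static BigDecimal readIdExact(ResultSet rs) throws SQLException {
        return rs.getBigDecimal("ID");
    }
}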
Related
In Announcing Hibernate 6, the Hibernate team claims that by switching from read-by-name to read-by-position in the JDBC ResultSet they gain a performance benefit:
High-load performance testing showed that Hibernate’s approach of
reading values from ResultSet by name to be its most limiting factor
in scaling through-put.
Does that mean they are changing calls from getString(String columnLabel) to getString(int columnIndex)?
Why is this faster?
As ResultSet is an interface, doesn't the performance gain depend on the JDBC driver implementing it?
How big are the gains?
Speaking as a JDBC driver maintainer (and, I admit, making some sweeping generalizations which do not necessarily apply to all JDBC drivers), row values will usually be stored in an array or list because that most naturally matches the way the data is received from the database server.
As a result, retrieving values by index will be the simplest. It might be as simple as something like (ignoring some of the nastier details of implementing a JDBC driver):
public Object getObject(int index) throws SQLException {
    checkValidRow();
    checkValidIndex(index);
    return currentRow[index - 1];
}
This is about as fast as it gets.
On the other hand, looking up by column name is more work. Column names need to be treated case-insensitively, which has an additional cost whether you normalize to lower or upper case, or use a case-insensitive lookup such as a TreeMap.
A simple implementation might be something like:
public Object getObject(String columnLabel) throws SQLException {
    return getObject(getIndexByLabel(columnLabel));
}

private int getIndexByLabel(String columnLabel) throws SQLException {
    Map<String, Integer> indexMap = createOrGetIndexMap();
    Integer columnIndex = indexMap.get(columnLabel.toLowerCase());
    if (columnIndex == null) {
        throw new SQLException("Column label " + columnLabel + " does not exist in the result set");
    }
    return columnIndex;
}
private Map<String, Integer> createOrGetIndexMap() throws SQLException {
    if (this.indexMap != null) {
        return this.indexMap;
    }
    ResultSetMetaData rsmd = getMetaData();
    Map<String, Integer> map = new HashMap<>(rsmd.getColumnCount());
    // reverse loop to ensure the first occurrence of a column label is retained
    for (int idx = rsmd.getColumnCount(); idx > 0; idx--) {
        String label = rsmd.getColumnLabel(idx).toLowerCase();
        map.put(label, idx);
    }
    return this.indexMap = map;
}
Depending on the API of the database and available statement metadata, it may require additional processing to determine the actual column labels of a query. Depending on the cost, this will likely only be determined when it is actually needed (when accessing column labels by name, or when retrieving result set metadata). In other words, the cost of createOrGetIndexMap() might be pretty high.
But even if that cost is negligible (e.g. when the statement prepare metadata from the database server includes the column labels), the overhead of mapping the column label to an index and then retrieving by index is obviously higher than retrieving by index directly.
Drivers could even just loop over the result set metadata each time and use the first whose label matches; this might be cheaper than building and accessing the hash map for result sets with a small number of columns, but the cost is still higher than direct access by index.
As I said, this is a sweeping generalization, but I would be surprised if this (lookup index by name, then retrieve by index) isn't how it works in the majority of JDBC drivers, which means that I expect that lookup by index will generally be quicker.
Taking a quick look at a number of drivers, this is the case for:
Firebird (Jaybird, disclosure: I maintain this driver)
MySQL (MySQL Connector/J)
PostgreSQL
Oracle
HSQLDB
SQL Server (Microsoft JDBC Driver for SQL Server)
I'm not aware of JDBC drivers where retrieval by column name would be equivalent in cost or even cheaper.
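A practical consequence for application code: if you must access a column by name inside a loop, you can pay the label lookup once with ResultSet.findColumn() and then use the index per row (a sketch; NAME is a hypothetical column):

import java.sql.ResultSet;
import java.sql.SQLException;

class IndexOnce {
    static void consume(ResultSet rs) throws SQLException {
        // findColumn() performs the same case-insensitive label lookup that
        // getString(String) would otherwise repeat on every single row.
        int nameIdx = rs.findColumn("NAME");
        while (rs.next()) {
            String name = rs.getString(nameIdx); // per-row access by index
        }
    }
}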
In the very early days of making jOOQ, I had considered both options, of accessing JDBC ResultSet values by index or by name. I chose accessing things by index for these reasons:
RDBMS support
Not all JDBC drivers actually support accessing columns by name. I forgot which ones didn't, and if they still don't, because I never touched that part of JDBC's API again in 13 years. But some didn't and that was already a show stopper for me.
Semantics of the name
Furthermore, among those that do support column names, there are different semantics to a column name, mainly two, what JDBC calls:
The column name as in ResultSetMetaData::getColumnName
The column label as in ResultSetMetaData::getColumnLabel
There is a lot of ambiguity with respect to implementations of the above two, although I think the intent is quite clear:
The column name is supposed to produce the name of the column irrespective of aliasing, e.g. TITLE if the projected expression is BOOK.TITLE AS X
The column label is supposed to produce the label (or alias) of the column, or the name if no alias is available, e.g. X if the projected expression is BOOK.TITLE AS X
So, this ambiguity about what a name/label means is already very confusing and concerning. It doesn't seem like something an ORM should rely on in general, although, in Hibernate's case, one can argue that Hibernate is in control of most of the SQL being generated, at least the SQL that is produced to fetch entities. But if a user writes an HQL or native SQL query, I would be reluctant to rely on the name/label - at least without looking things up in ResultSetMetaData first.
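If you want to see what a particular driver does, you can print both values for an aliased projection (a sketch; the BOOK table is the example from above, and drivers genuinely disagree here):

import java.sql.*;

class NameVsLabel {
    static void inspect(Connection con) throws SQLException {
        try (Statement s = con.createStatement();
             ResultSet rs = s.executeQuery("SELECT title AS x FROM book")) {
            ResultSetMetaData md = rs.getMetaData();
            System.out.println(md.getColumnName(1));  // intended semantics: TITLE
            System.out.println(md.getColumnLabel(1)); // intended semantics: X
        }
    }
}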
Ambiguities
In SQL, it's perfectly fine to have ambiguous column names at the top level, e.g.:
SELECT id, id, not_the_id AS id
FROM book
This is perfectly valid SQL. You can't nest this query as a derived table, where ambiguities aren't allowed, but in top level SELECT you can. Now, what are you going to do with those duplicate ID labels at the top level? You can't know for sure which one you'll get when accessing things by name. The first two may be identical, but the third one is very different.
The only way to clearly distinguish between the columns is by index, which is unique: 1, 2, 3.
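In JDBC terms (a sketch): findColumn() and the by-name getters are specified to resolve to the first matching column, so the third id above is simply unreachable by name:

import java.sql.ResultSet;
import java.sql.SQLException;

class AmbiguousLabels {
    static void read(ResultSet rs) throws SQLException {
        // For: SELECT id, id, not_the_id AS id FROM book
        int byName = rs.getInt("id"); // always resolves to column 1
        int third = rs.getInt(3);     // the only way to reach not_the_id
    }
}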
Performance
I had also tried performance at the time. I don't have the benchmark results anymore, but it's easy to write another benchmark quickly. In the below benchmark, I'm running a simple query on an H2 in-memory instance, and consume the ResultSet accessing things:
By index
By name
The results are staggering:
Benchmark                           Mode  Cnt        Score       Error  Units
JDBCResultSetBenchmark.indexAccess  thrpt   7  1130734.076 ±  9035.404  ops/s
JDBCResultSetBenchmark.nameAccess   thrpt   7   600540.553 ± 13217.954  ops/s
Despite the benchmark running an entire query on each invocation, the access by index is almost twice as fast! You can look at H2's code, it's open source. It does this (version 2.1.212):
private int getColumnIndex(String columnLabel) {
    checkClosed();
    if (columnLabel == null) {
        throw DbException.getInvalidValueException("columnLabel", null);
    }
    if (columnCount >= 3) {
        // use a hash table if more than 2 columns
        if (columnLabelMap == null) {
            HashMap<String, Integer> map = new HashMap<>();
            // [ ... ]
            columnLabelMap = map;
            if (preparedStatement != null) {
                preparedStatement.setCachedColumnLabelMap(columnLabelMap);
            }
        }
        Integer index = columnLabelMap.get(StringUtils.toUpperEnglish(columnLabel));
        if (index == null) {
            throw DbException.get(ErrorCode.COLUMN_NOT_FOUND_1, columnLabel);
        }
        return index + 1;
    }
    // [ ... ]
}
So, there's a hash map with upper-casing, and each lookup also performs upper-casing. At least it caches the map in the prepared statement, so:
You can reuse it on every row
You can reuse it on multiple executions of the statement (at least that's how I interpret the code)
So, for very large result sets, it might not matter as much anymore, but for small ones, it definitely does.
Conclusion for ORMs
An ORM like Hibernate or jOOQ is in control of a lot of SQL and the result set. It knows exactly what column is at what position, this work has already been done when generating the SQL query. So, there's absolutely no reason to rely on the column name any further when the result set comes back from the database server. Every value will be at the expected position.
Using column names must have been some historic thing in Hibernate. It's probably also why they used to generate these not so readable column aliases, to make sure that each alias is non-ambiguous.
It seems like an obvious improvement, irrespective of the actual gains in a real world (non-benchmark) query. Even if the improvement had been only 2%, it would have been worth it, because it affects every query execution by every Hibernate based application.
Benchmark code below, for reproduction
package org.jooq.test.benchmarks.local;

import java.io.*;
import java.sql.*;
import java.util.Properties;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.*;

@Fork(value = 1)
@Warmup(iterations = 3, time = 3)
@Measurement(iterations = 7, time = 3)
public class JDBCResultSetBenchmark {

    @State(Scope.Benchmark)
    public static class BenchmarkState {
        Connection connection;

        @Setup(Level.Trial)
        public void setup() throws Exception {
            try (InputStream is = BenchmarkState.class.getResourceAsStream("/config.properties")) {
                Properties p = new Properties();
                p.load(is);
                connection = DriverManager.getConnection(
                    p.getProperty("db.url"),
                    p.getProperty("db.username"),
                    p.getProperty("db.password")
                );
            }
        }

        @TearDown(Level.Trial)
        public void teardown() throws Exception {
            connection.close();
        }
    }

    @FunctionalInterface
    interface ThrowingConsumer<T> {
        void accept(T t) throws SQLException;
    }

    private void run(BenchmarkState state, ThrowingConsumer<ResultSet> c) throws SQLException {
        try (Statement s = state.connection.createStatement();
             ResultSet rs = s.executeQuery("select c as c1, c as c2, c as c3, c as c4 from system_range(1, 10) as t(c);")) {
            c.accept(rs);
        }
    }

    @Benchmark
    public void indexAccess(Blackhole blackhole, BenchmarkState state) throws SQLException {
        run(state, rs -> {
            while (rs.next()) {
                blackhole.consume(rs.getInt(1));
                blackhole.consume(rs.getInt(2));
                blackhole.consume(rs.getInt(3));
                blackhole.consume(rs.getInt(4));
            }
        });
    }

    @Benchmark
    public void nameAccess(Blackhole blackhole, BenchmarkState state) throws SQLException {
        run(state, rs -> {
            while (rs.next()) {
                blackhole.consume(rs.getInt("C1"));
                blackhole.consume(rs.getInt("C2"));
                blackhole.consume(rs.getInt("C3"));
                blackhole.consume(rs.getInt("C4"));
            }
        });
    }
}
I'm using JDBC with createStruct() to call a stored procedure on an Oracle database that accepts a custom type as a parameter. The stored procedure inserts the custom type fields into a table and when I SELECT from the table later I see that all the fields that I tried to insert are NULL.
The custom type looks like this:
type record_rec as object (owner_id varchar2 (7),
target_id VARCHAR2 (8),
IP VARCHAR2 (15),
PREFIX varchar2 (7),
port varchar2 (4),
description VARCHAR2 (35),
cost_id varchar2(10))
The stored procedure looks like this:
package body "PKG_RECORDS"
IS
procedure P_ADD_RECORD (p_target_id in out VARCHAR2,
p_record_rec in record_rec)
is
l_target_id targets.target_id%TYPE;
BEGIN
Insert into targets (target_id,
owner_id,
IP,
description,
prefix,
start_date,
end_date,
cost_id,
port,
server_name,
server_code)
values (f_sequence ('TARGETS'),
p_record_rec.owner_id,
p_record_rec.ip,
p_record_rec.description,
p_record_rec.prefix,
sysdate,
to_date ('01-JAN-2050'),
p_record_rec.cost_id,
p_record_rec.port,
'test-server',
'51')
returning target_id
into p_target_id;
END;
END PKG_RECORDS;
My Java code looks something like this:
try (Connection con = m_dataSource.getConnection()) {
    ArrayList<String> ids = new ArrayList<>();
    CallableStatement call = con.prepareCall("{call PKG_RECORDS.P_ADD_RECORD(?,?)}");
    for (Record r : records) {
        call.registerOutParameter("p_target_id", Types.VARCHAR);
        call.setObject("p_record_rec",
            con.createStruct("SCHEME_ADM.RECORD_REC", new Object[] {
                r.getTarget_id(),
                null, // will be populated by SP
                r.getIp(),
                r.getPrefix(),
                r.getPort(),
                r.getDescription(),
                r.getCost_id()
            }), Types.STRUCT);
        call.execute();
        ids.add(call.getString("p_target_id"));
    }
    return new QueryRunner().query(con,
        "SELECT * from TARGETS_V WHERE TARGET_ID IN (" +
            ids.stream().map(s -> "?").collect(Collectors.joining(",")) +
            ")",
        new BeanListHandler<Record>(Record.class),
        ids.toArray(new Object[] {})
    ).stream()
    .collect(Collectors.toList());
} catch (SQLException e) {
    throw new DataAccessException(e.getMessage());
}
Notes:
* That last part is using Apache Commons db-utils - I love their bean stream operations.
* The connection comes from a C3P0 connection pool - could that be related?
* Just to make it clear - it's not that the bean processor populates null values into the Record bean fields. If I use an SQL explorer to load the table (or view) directly, I can see that the fields in the database are indeed set to NULL.
There are no SQLExceptions when the process runs, or any other notice that something is wrong.
Any ideas what to check?
[Update]
After reading up on Oracle Objects and SQLData mappings, I rewrote the code to use SQLData.
The Record class now implements SQLData and its writeSQL() method looks like this:
@Override
public void writeSQL(SQLOutput stream) throws SQLException {
    stream.writeString(owner_id);
    stream.writeString(target_id);
    stream.writeString(Objects.isNull(ip) ? "0" : ip); // weird, but as specified
    stream.writeString(prefix);
    stream.writeString(String.valueOf(port));
    stream.writeString(description);
    stream.writeString(cost_id);
}
Then at the start of the calling code, I've added:
con.getTypeMap().put("SCHEME_ADM.RECORD_REC", Record.class);
And instead of using createStruct(), the setObject() call now looks simply like this:
call.setObject("p_record_rec", t, Types.STRUCT)
But the result is the same - no errors and all the passed values are read as NULL. I've traced through the writeSQL() implementation and I can see that it is called and all values are passed correctly into the Oracle code. I've tried to use Types.JAVA_OBJECT in the setObject() call, and got an error: Invalid column type.
[Update 2]
Bordering on desperation, I've implemented the OracleData pattern:
public class Record implements SQLData, OracleData, OracleDataFactory {
    ...
    @Override
    public Object toJDBCObject(Connection conn) throws SQLException {
        return conn.createStruct(getSQLTypeName(), new Object[] {
            Objects.isNull(owner_id) ? "" : owner_id,
            Objects.isNull(record_id) ? "" : record_id,
            Objects.isNull(ip) ? "0" : ip,
            Objects.isNull(prefix) ? "" : prefix,
            String.valueOf(port),
            Objects.isNull(description) ? "" : description,
            Objects.isNull(cost_id) ? "" : cost_id
        });
    }

    @Override
    public OracleData create(Object jdbcValue, int sqltype) throws SQLException {
        if (Objects.isNull(jdbcValue)) return null;
        LinkedList<Object> attr = new LinkedList<>(Arrays.asList(((OracleStruct) jdbcValue).getAttributes()));
        Record r = new Record();
        r.setOwner_id(attr.removeFirst().toString());
        r.setRecord_id(attr.removeFirst().toString());
        r.setIp(attr.removeFirst().toString());
        r.setPrefix(attr.removeFirst().toString());
        r.setPort(Integer.parseInt(attr.removeFirst().toString()));
        r.setDescription(attr.removeFirst().toString());
        r.setCost_id(attr.removeFirst().toString());
        return r;
    }

    public static OracleDataFactory getOracleDataFactory() {
        return new Record();
    }
}
Calling code:
...
// unwrap the Oracle statement from C3P0 (standard JDBC 4 API)
OracleCallableStatement ops = call.unwrap(OracleCallableStatement.class);
// I'm not sure why I even need to do this - it looks exactly like
// the standard JDBC code
for (Record r : records) {
    ops.registerOutParameter(1, Types.VARCHAR);
    ops.setObject(2, r);
    ops.execute();
    ids.add(ops.getString(1));
}
...
And again, the same result - no errors, a record is created in the table, but all provided values are null. I've traced through the code: the toJDBCObject() method is called correctly and does pass the values correctly into createStruct().
Found the problem. Annoyingly, it's about character encoding.
If, in the toJDBCObject() implementation, I run getAttributes() on the created struct, the resulting Object[] array has all fields set to "???". That looks like a character set transcoding failure (although it looks weird even for that - three question marks for every field regardless of value length, including empty string values).
According to Oracle's JDBC developer guide, "Globalization Support":
The basic Java Archive (JAR) file ojdbc7.jar, contains all the necessary classes to provide complete globalization support for:
Oracle character sets for CHAR, VARCHAR, LONGVARCHAR, or CLOB data that is not being retrieved or inserted as a data member of an Oracle object or collection type.
CHAR or VARCHAR data members of object and collection for the character sets US7ASCII, WE8DEC, WE8ISO8859P1, WE8MSWIN1252, and UTF8.
To use any other character sets in CHAR or VARCHAR data members of objects or collections, you must include orai18n.jar in the CLASSPATH environment variable:
ORACLE_HOME/jlib/orai18n.jar
And my setup was using the character set "WE8ISO8859P9" (I have no idea why, what it means, or even if it is selected by the client or the server - I just dumped the STRUCT object created by the OracleData API implementation and it was there somewhere).
So when Oracle says that it does not "provide complete globalization support", they mean "all character fields will be silently converted to NULL". Hmpph.
Anyway, adding orai18n.jar to the CLASSPATH indeed fixed the problem, and now records are added correctly to the database.
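For anyone hitting the same silent NULLs, you can check the database character set over JDBC before blaming the driver (a sketch; NLS_DATABASE_PARAMETERS is a standard Oracle dictionary view):

import java.sql.*;

class CharsetCheck {
    static String databaseCharset(Connection con) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                 "SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET'");
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getString(1) : null;
        }
    }
}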
I've got an entity with 201 fields (testId, test1...test200); testId is of type long and the others are Strings. I query it in Hibernate with the HQL:
this.getTestDao().getHibernateTemplate().find("from Test where testId<=10000")
The call then goes through spring-hibernate3.jar:
// Method from spring-hibernate3.jar
// org.springframework.orm.hibernate3.HibernateTemplate
public List find(final String queryString, final Object values[])
        throws DataAccessException {
    return (List) execute(new HibernateCallback() {
        public Object doInHibernate(Session session) throws HibernateException {
            Query queryObject = session.createQuery(queryString);
            prepareQuery(queryObject);
            if (values != null) {
                for (int i = 0; i < values.length; i++)
                    queryObject.setParameter(i, values[i]);
            }
            return queryObject.list();
        }
    }, true);
}
But Java VisualVM (a monitoring tool bundled with the JDK) showed that the method oracle.jdbc.driver.OracleStatement.getColumnIndex() cost 4404 ms for 10 thousand rows. I know Hibernate is slow, but that's really unacceptable, because the same SQL takes only 55 ms in SQL Developer.
I am sure that no exception is printed and every field is legal. Here is the decompiled code of oracle.jdbc.driver.OracleStatement.getColumnIndex:
// Method from Oracle 10.2 jdbc14.jar
// oracle.jdbc.driver.OracleStatement
int getColumnIndex(String s) throws SQLException {
    if (!describedWithNames) {
        synchronized (connection) {
            synchronized (this) {
                connection.needLine();
                doDescribe(true);
                described = true;
                describedWithNames = true;
            }
        }
    }
    for (int i = 0; i < numberOfDefinePositions; i++)
        if (accessors[i].columnName.equalsIgnoreCase(s))
            return i + 1;
    DatabaseError.throwSqlException(6);
    return 0;
}
Thanks everybody, and thanks for pointing out any grammar mistakes.
I ran into the same issue. We use Hibernate to load a very "wide" table (~200 columns) into objects. For each row in the result set, Hibernate will try to "extract" the value of each column using its name. To do this, it calls the OracleStatement.getColumnIndex method (above). This method looks up the index by iterating through all the fields, checking whether the name of each column matches the name passed in.
So if there are 200 columns and 100 rows, and getColumnIndex has to go halfway through the columns (on average) to find the one it is looking for, then...
200 * 100 * 100 = 2,000,000 string comparison operations...
As if the names of the columns in the result set could change between rows!
But this is the code in an OLD version of the driver, "ojdbc6.jar" (12.1.0.2.0) to be precise. So I decompiled "ojdbc8.jar" (18.3.0.0.0) to see if anything changed... and it did. Some clever person at Oracle decided to add a CACHE:
int getColumnIndex(String paramString) throws SQLException {
    ensureOpen();
    Integer integer = (Integer) this.columnNameCache.get(paramString);
    if (integer == null) {
        integer = Integer.valueOf(getColumnIndexPrimitive(paramString));
        if (this.columnNameCache.size() <= this.accessors.length)
            this.columnNameCache.put(paramString, integer);
    }
    return integer.intValue();
}
So @a_horse_with_no_name is somewhat correct in saying that your problem is that the Oracle driver is old.
UPDATE:
For my application, I did a test which compared the performance old (11.2.0.3) driver versus the newer (18.3.0.0) driver. This test performs an action which does a Hibernate query against a "wide" table and scrolls through the results, converting the result to XML as it goes. With the new driver, the elapsed time was 4X faster. If I could eliminate the XML logic from the equation, the performance improvement would be much more dramatic.
BOTTOM LINE:
If you have to use Hibernate to do a query against a "wide" table, or one with many rows in the results, make sure to use the latest JDBC driver. Newer drivers are compatible with older databases.
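To confirm which driver version a running application actually picked up, DatabaseMetaData can be queried (a sketch):

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.SQLException;

class DriverCheck {
    static void printDriverVersion(Connection con) throws SQLException {
        DatabaseMetaData md = con.getMetaData();
        // e.g. "Oracle JDBC driver / 18.3.0.0.0"
        System.out.println(md.getDriverName() + " / " + md.getDriverVersion());
    }
}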
I am trying to write Java code to access a table 'customer' with columns 'customer_id', 'email', 'deliverable', and 'create_date'
I have
Connection conn = DriverManager.getConnection(connectionUrl, connectionUser, connectionPassword);
Statement constat = conn.createStatement();
String query = "SELECT * FROM customer WHERE customer_id LIKE " + customerId;
ResultSet rtn = constat.executeQuery(query);
Customer cust = new Customer(rtn.getInt("customer_id"), rtn.getString("email"), rtn.getInt("deliverable"), rtn.getString("create_date"));
conn.close();
return cust;
I am receiving the error:
java.sql.SQLException: Before start of result set
As far as I can tell, my error is in the line where I am creating a new Customer object, but I cannot figure out what I am doing wrong. Can anyone offer me some help? Thanks!
You must always go to the next row by calling resultSet.next() (and checking it returns true), before accessing the data of the row:
Customer cust = null;
if (rtn.next()) {
cust = new Customer(rtn.getInt("customer_id"),
rtn.getString("email"),
rtn.getInt("deliverable"),
rtn.getString("create_date"));
}
Note that you should also:
* use prepared statements instead of String concatenation, to avoid SQL injection attacks and have more robust code
* close the connections, statements and result sets in a finally block, or use the try-with-resources construct if using Java 7 (see the sketch after this list)
* read the JDBC tutorial
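Putting those points together, the lookup could be written like this (a sketch; it assumes customer_id is numeric and reuses the Customer constructor from the question):

String query = "SELECT customer_id, email, deliverable, create_date"
             + " FROM customer WHERE customer_id = ?";
try (Connection conn = DriverManager.getConnection(connectionUrl, connectionUser, connectionPassword);
     PreparedStatement ps = conn.prepareStatement(query)) {
    ps.setInt(1, customerId); // bind instead of concatenating: no SQL injection
    try (ResultSet rtn = ps.executeQuery()) {
        if (rtn.next()) {
            return new Customer(rtn.getInt("customer_id"),
                                rtn.getString("email"),
                                rtn.getInt("deliverable"),
                                rtn.getString("create_date"));
        }
        return null; // no matching customer
    }
}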
You should call ResultSet.first() to move to the first row. A result set is, by design, not a container that retrieves the whole result of the query and keeps it in memory. As such, its interface is quite low-level and you must explicitly select a row via methods like first(), last() or next() (each returns true if the requested row exists in the set).
You need to add
rtn.next();
before you use the result set.
Usually this is done as
while (rtn.next()) {
<do something with the row>
}
I'm running a Tomcat WAR, which uses a MySQL database.
The application will run in foreign languages, so I had to change all database character parameters to utf8.
One application string (appPrefix) has to be empty (because the WAR is deployed in the root dir). This worked well, until I created a new database in UTF8 and migrated all the tables.
Now I get a NullPointerException because of the appPrefix being empty:
java.lang.NullPointerException
com.horizon.servlet.PageServlet.doMainPageRequest(PageServlet.java:177)
com.horizon.servlet.PageServlet.doRequest(PageServlet.java:53)
com.horizon.servlet.PageServlet.doGet(PageServlet.java:33)
javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
com.horizon.filters.P3PFilter.doFilter(P3PFilter.java:19)
The above is all the same error causing ripples throughout the application.
It's all caused by appPrefix being empty, but it is supposed to be empty.
Should I specify it as empty in another way? Or should I try to hardcode my way around this?
EDIT:
As per the request in the comment below, here is PageServlet.java:177
request.setAttribute("appPrefix", appManager.getAppStringById(11).getValue());
This references AppManager.java:
public static final int APP_STRING_APPLICATION_PREFIX = 11;
which is populated by
public AppString getAppStringById(int id) {
    AppString string = (AppString) stringCache.get(id);
    if (string == null) {
        String query = "SELECT * FROM app_strings WHERE id = ?";
        List<Object> params = new LinkedList<Object>();
        params.add(id);
        string = execQueryLoadSingleRecord(query, params, new LoadAppString());
        if (string != null) {
            populateCache(stringCache, id, string);
        }
    }
    return string;
}
As per
request.setAttribute("appPrefix", appManager.getAppStringById(11).getValue());
and
The database entry it's getting is empty, as it should be. So shouldn't it return null? Does this conflict with anything? The empty db string was no problem until I changed the database's encoding to UTF8 from the default latin1 Swedish!
Apparently appManager.getAppStringById(11) can possibly return null. In that case, you should check for it before calling getValue() on it:
AppString appString = appManager.getAppStringById(11);
if (appString != null) {
    request.setAttribute("appPrefix", appString.getValue());
}
As to why it returns null after you changed the table's charset: I have no idea. Perhaps it's just a big coincidence or a misinterpretation of the problem. Perhaps you added the getValue() call later on because you wanted to use ${appPrefix} instead of ${appPrefix.value} in EL or something. Or perhaps you rewrote execQueryLoadSingleRecord() so that it returns null instead of an empty string. Or perhaps the column's default value is null instead of an empty string. Or perhaps it's a bug in the JDBC driver used. Who knows. Using null as "no value" is perfectly fine and should be treated as such.