My database has a column ARR_ID with data type NUMBER(20); the stored value is 100013085001.
A Java application fetches this value using Integer as the datatype. In the application's output the value is displayed as 1228837193. I do not understand how this value is being converted in Java.
What happens when the data is too large to be contained in the datatype?
Shouldn't the application throw an error in that case?
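For what it's worth, the displayed number is exactly what Java's narrowing primitive conversion produces: casting the 64-bit value to int keeps only the low 32 bits. Whether a given driver narrows silently or throws is driver-specific, but the arithmetic itself can be checked with a minimal sketch:

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        long arrId = 100013085001L;   // the value stored in NUMBER(20)
        int narrowed = (int) arrId;   // narrowing keeps only the low 32 bits
        System.out.println(narrowed); // prints 1228837193
    }
}
```

So the "mystery" value is not random; it is the low-order half of the original number.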
It depends on the SQL driver you are using. I once saw the behaviour you describe when using the SQLite driver. I was also a bit concerned and checked the sources: the datatype is determined by the value in the column, and if it doesn't fit in an Integer, a Long is used.
I don't remember how this is working for a collection. You may want to check the sources of the driver which you are using (if it's also open source).
With JDBC and a 20-digit integer (e.g. 12345678901234567890), the program throws com.mysql.jdbc.exceptions.jdbc4.MySQLDataException: '1.2345678901234567E19' in column '1' is outside valid range for the datatype INTEGER. It seems you are using Oracle, which I don't have; this result is for MySQL, so you may want to try it yourself.
If I understand correctly (and have tested with sample JDBC code, using Jaybird for Firebird), even using the proper updater method (i.e. the one respecting the type mapping, e.g. ResultSet.updateString), or perhaps a PreparedStatement parameter, can raise a conversion exception.
Is it possible (and is it a good practice) to test before actually working with the result set (e.g. running an updater method) whether the actual Java Type/value can be safely converted to the target SQL data type?
Is the "problem" just one-way? I.e. when converting back from SQL to Java (using getter method), is it guaranteed that the correct getter method will never fail (due to conversion problems)?
My examples (Using Jaybird 3.0.2, JDK1.8):
I need to update field: NUMERIC(9,2). The corresponding updater is:
ResultSet.updateBigDecimal(int columnIndex, BigDecimal x). If I use x = new BigDecimal("123456789.1234") (bigger precision and scale), I (logically) get an exception:
Exception in thread "main" org.firebirdsql.jdbc.field.TypeConversionException: Error converting to big decimal.
I need to update field: VARCHAR(5). The corresponding updater is: ResultSet.updateString(int columnIndex, String x).
If I use x = "123456" (longer string 6 > 5), I (logically) get an exception: Exception in thread "main" java.sql.DataTruncation: Data truncation.
Is there some general elegant way (not depending on specific type) how to check, whether an actual Java value/object can be "saved" to certain SQL field, other than just trying to run the query and catching the exceptions?
I would like to check the values already in my data editing dialog (before actually running the update query). Simple test "VALUE OK / NOT OK" would be fine (knowing just the target SQL type).
It seems quite difficult to find all the rules I would have to check type by type (for VARCHAR check string length, for NUMERIC check precision and scale, etc.). What else is there, or would that be sufficient? For integer and float types, is there nothing to check at all?
I tried to go through the Jaybird source codes but the "conversion process" is very complicated (and type-specific), I could not find the answer myself.
JDBC does not provide anything to 'check' values before you actually set them, so Jaybird doesn't either: setting the value is the check. Exact behaviour is driver dependent, Jaybird attempts to validate on setting values, but other drivers might choose to defer this to the database itself (so the error would only occur on execute).
Normally, you would design your database and pick column types based on your business needs, which should naturally have led to validation before you even try to put values in the database.
If you haven't done that until now, start adding validation to your input forms, by restricting lengths, using things like Hibernate Validator, or the validation of your UI framework.
If you are working with highly dynamic requirements (e.g. user-provided queries), then you should use the features that JDBC does provide to create your own validation: the ParameterMetaData of a prepared statement and the ResultSetMetaData of a result set (also accessible from a prepared statement), specifically the getPrecision (and getScale) methods of these objects, or maybe even things like DatabaseMetaData.getColumns.
For a character type, getPrecision will indicate the max number of characters, for a numeric or decimal type you can use the max numbers of digits before the decimal point as precision - scale.
However, in Jaybird this is not 100% exact: for example, getPrecision may return 9 for a NUMERIC(8,2) if Jaybird can't identify the underlying column, and Jaybird (and Firebird) will actually allow up to precision 10 with some limitations (that is, an unscaled max value of Integer.MAX_VALUE, i.e. 21474836.47 for this type).
As to your second question if using getters could cause a conversion exception: normal cases will not, but for example calling getInt() on a BIGINT with a value larger than Integer.MAX_VALUE or Integer.MIN_VALUE will.
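The precision/scale rule above can be turned into a small pre-check. A sketch under stated assumptions: the helper names (fitsNumeric, fitsVarchar) are mine, and in real code the precision and scale values would come from ParameterMetaData.getPrecision/getScale rather than being hard-coded:

```java
import java.math.BigDecimal;

public class ColumnCheck {
    // Checks whether a value fits a NUMERIC(precision, scale) column:
    // at most 'scale' digits after the decimal point and at most
    // (precision - scale) digits before it.
    static boolean fitsNumeric(BigDecimal value, int precision, int scale) {
        BigDecimal abs = value.abs().stripTrailingZeros();
        if (abs.scale() > scale) {
            return false; // too many fractional digits
        }
        int integerDigits = abs.precision() - abs.scale();
        return integerDigits <= precision - scale;
    }

    // For character columns, getPrecision reports the max length in characters.
    static boolean fitsVarchar(String value, int precision) {
        return value.length() <= precision;
    }

    public static void main(String[] args) {
        // The two failing examples from the question:
        System.out.println(fitsNumeric(new BigDecimal("123456789.1234"), 9, 2)); // false
        System.out.println(fitsVarchar("123456", 5));                            // false
        // A value that does fit NUMERIC(9,2):
        System.out.println(fitsNumeric(new BigDecimal("1234567.89"), 9, 2));     // true
    }
}
```

Keep in mind the caveat above: the metadata a driver reports is not always exact, so treat this as a first-line check in the editing dialog, not a replacement for handling the exception.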
I have a foreign key column in my table that holds an unsigned integer value. I want to add a new value to this column, but I don't know how Java will deal with this unsigned value, since Java doesn't have native unsigned types.
If I do:
preparedStatement.setInt(2, myInt);
Will the database convert this signed int to its unsigned value automatically? Or will it throw an error saying that those are incompatible types? Should I step up and use a long like:
preparedStatement.setLong(2, myLong);
Or will this throw an exception as well, because the Database is not using BIGINT?
I am using MySQL and I just want to avoid surprises in the future as my table records grow.
Maybe MySQL behaves one way and SQL Server another.
Leaving aside the fact that SQL Server does not have unsigned integers, your instincts are correct that signed ⇄ unsigned behaviour could very well depend on the implementation of the particular JDBC driver being used. Therefore, the general answer you seek is really "too broad" because it could potentially require a description of implementation-specific details for all of the JDBC drivers whose databases support unsigned integer columns.
So, as suggested in the comments to the question, your best bet for MySQL (or any other particular JDBC driver) would be to
see if the JDBC specification itself defines the required behaviour (unlikely),
check your JDBC driver documentation for their definitive answer, or
test your desired configuration to see if it behaves in a way that will suit your needs.
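Whatever the driver does, the Java side of the problem is well defined: an unsigned 32-bit value fits in a long, and the unsigned helpers added to Integer in Java 8 convert between the two representations. A small sketch (the driver-facing setLong call is the common approach for MySQL's INT UNSIGNED, but verify that against the Connector/J documentation as suggested above):

```java
public class UnsignedDemo {
    public static void main(String[] args) {
        // Max value of MySQL's INT UNSIGNED is 4294967295, which does not
        // fit a signed Java int but fits comfortably in a long.
        long unsignedMax = 4294967295L;

        // If a driver hands back the raw 32-bit pattern as a signed int,
        // Integer.toUnsignedLong recovers the intended unsigned value.
        int rawBits = (int) unsignedMax;                  // -1 as a signed int
        long recovered = Integer.toUnsignedLong(rawBits); // 4294967295

        System.out.println(rawBits);
        System.out.println(recovered);
        // For writing, you would then use preparedStatement.setLong(2, unsignedMax);
    }
}
```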
How do I write a custom Long class to handle long values in Oracle, to avoid the following error?
Caused by: java.sql.SQLException: Stream has already been closed.
Thanks
Oracle recommends not using LONG and LONG RAW columns (since Oracle 8i); they are included in Oracle only for legacy reasons. If you really need to use them, then you should first handle these columns before attempting to touch any other columns in the ResultSet:
Docs:
When a query selects one or more LONG or LONG RAW columns, the JDBC driver transfers these columns to the client in streaming mode. After a call to executeQuery or next, the data of the LONG column is waiting to be read.
Do not create tables with LONG columns. Use large object (LOB) columns, CLOB, NCLOB, and BLOB, instead. LONG columns are supported only for backward compatibility. Oracle recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns.
As for hibernate - see this question.
The following doesn't answer the original question ('how to write a custom Long class to handle long values in Oracle') but may help avoid the 'Stream has already been closed' error when querying Oracle LONG RAW columns.
We faced this error with a legacy database where there was no chance of changing the column type. We use Spring with the Hibernate 3 session factory and transaction manager. The problem occurred when more than one task accessed the DAO concurrently.
We are using ojdbc14.jar driver and tried a newer one with no luck.
Setting useFetchSizeWithLongColumn = true in the connection properties for the OJDBC driver solved the problem. See the OracleDriver API
THIS IS A THIN ONLY PROPERTY. IT SHOULD NOT BE USED WITH ANY OTHER DRIVERS. If set to "true", the performance when retrieving data in a 'SELECT' will be improved but the default behavior for handling LONG columns will be changed to fetch multiple rows (prefetch size). It means that enough memory will be allocated to read this data. So if you want to use this property, make sure that the LONG columns you are retrieving are not too big or you may run out of memory. This property can also be set as a java property:
java -Doracle.jdbc.useFetchSizeWithLongColumn=true myApplication
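The same property can also be passed programmatically in the connection properties. A sketch under stated assumptions: the URL and credentials are placeholders, and the actual getConnection call is commented out because it needs a live Oracle instance and the Oracle JDBC driver on the classpath:

```java
import java.util.Properties;

public class OracleLongColumnConfig {
    // Builds the connection properties, including the thin-driver-only
    // useFetchSizeWithLongColumn property from the OracleDriver docs.
    static Properties oracleProps() {
        Properties props = new Properties();
        props.setProperty("user", "scott");     // placeholder credentials
        props.setProperty("password", "tiger"); // placeholder credentials
        props.setProperty("useFetchSizeWithLongColumn", "true");
        return props;
    }

    public static void main(String[] args) {
        Properties props = oracleProps();
        // Connection conn = DriverManager.getConnection(
        //         "jdbc:oracle:thin:@//host:1521/service", props);
        System.out.println(props.getProperty("useFetchSizeWithLongColumn"));
    }
}
```

As the quoted documentation warns, only do this if the LONG columns you retrieve are small enough to prefetch into memory.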
I think you get this message when you try to get an Oracle LONG value from the result set multiple times.
I had code like:
rs.getString(i+1) ;
if (rs.wasNull()) continue ;
set(queryAttr[i], rs.getString(i+1)) ;
And I started getting the "Stream has already been closed." error. I stopped getting the error when I changed the code to:
String str = rs.getString(i+1) ;
if (rs.wasNull()) continue ;
set(queryAttr[i], str) ;
This happens in a query of system tables:
SELECT * FROM all_tab_columns
WHERE owner = 'D_OWNER' AND COLUMN_NAME LIKE 'XXX%';
I'm working with third party user data that may or may not fit into our database. The data needs to be truncated if it is too long.
We are using iBATIS with Connector/J. If the data is too long, a SQL exception is thrown, so I have two choices: truncate the strings in Java or truncate them in SQL using SUBSTRING.
I don't like truncating the strings in SQL, because that means encoding table structure in our iBATIS XML; on the other hand, SQL knows about our database collation (which isn't consistent and would be expensive to make consistent) and can truncate strings in a multibyte-safe manner.
Is there a way to have the Connector/J just straight insert this SQL and if not which route would people recommend?
According to the MySQL documentation it's possible that inserting data that exceeds the length could be treated as a warning:
Inserting a string into a string column (CHAR, VARCHAR, TEXT, or BLOB) that exceeds the column's maximum length. The value is truncated to the column's maximum length.
One of the Connector/J properties is jdbcCompliantTruncation. This is its description:
This sets whether Connector/J should throw java.sql.DataTruncation exceptions when data is truncated. This is required by the JDBC specification when connected to a server that supports warnings (MySQL 4.1.0 and newer). This property has no effect if the server sql-mode includes STRICT_TRANS_TABLES. Note that if STRICT_TRANS_TABLES is not set, it will be set as a result of using this connection string option.
If I understand correctly, setting this property to false doesn't throw the exception but inserts the truncated data instead. This solution doesn't require you to truncate the data in program code or SQL statements, but delegates it to the database.
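If you do end up truncating on the Java side instead, truncating by code points rather than by chars at least avoids splitting a surrogate pair in half (it does not address byte-based column length semantics, which depend on your collation). A sketch; the method name is mine:

```java
public class SafeTruncate {
    // Truncates to at most maxCodePoints code points, never cutting a
    // surrogate pair (e.g. an emoji or a supplementary CJK character) in two.
    static String truncate(String s, int maxCodePoints) {
        if (s.codePointCount(0, s.length()) <= maxCodePoints) {
            return s; // already short enough
        }
        int end = s.offsetByCodePoints(0, maxCodePoints);
        return s.substring(0, end);
    }

    public static void main(String[] args) {
        String emoji = "ab\uD83D\uDE00cd"; // "ab<emoji>cd": 5 code points, 6 chars
        // Keeps "ab" plus the whole emoji, not half of its surrogate pair:
        System.out.println(truncate(emoji, 3));
    }
}
```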
I'm new to using JDBC + MySQL.
I have several 1/0 values which I want to insert into a database with a PreparedStatement. The destination column is BIT(M), with M != 1. I'm unclear on which of the setXXX methods to use. I can find references for what the data comes out as easily enough, but how it goes in is eluding me.
The values effectively live as an ordered collection of booleans in the objects used by the application. Also, I'll occasionally be importing data from flat text files with 1/0 characters.
To set a BIT(M) column in MySQL
For M==1
setBoolean(int parameterIndex, boolean x)
From the javadoc
Sets the designated parameter to the given Java boolean value. The driver converts this to an SQL BIT value when it sends it to the database.
For M>1
Support for BIT(M) where M != 1 is problematic with JDBC, since BIT(M) is only required by "full" SQL-92 and only a few databases support it.
Check here Mapping SQL and Java Types: 8.3.3 BIT
The following works for me with MySQL (at least with MySQL 5.0.45, Java 1.6 and MySQL Connector/J 5.0.8)
...
PreparedStatement insert = con.prepareStatement(
"INSERT INTO bittable (bitcolumn) values (b?)"
);
insert.setString(1,"111000");
...
This uses the special b'110101010' syntax of MySQL to set the value for BIT columns.
You can use get/setObject with a byte array (byte[]). 8 bits are packed into each byte with the least significant bit being in the last array element.
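Since the question starts from an ordered collection of booleans, packing them into such a byte[] might look like the sketch below. The layout (least significant bit in the last array element) follows the description above, but verify it against your driver before relying on it:

```java
public class BitPack {
    // Packs bits (index 0 = most significant) into a byte[] where the
    // least significant bit lands in the last array element.
    static byte[] pack(boolean[] bits) {
        int nBytes = (bits.length + 7) / 8;
        byte[] out = new byte[nBytes];
        for (int i = 0; i < bits.length; i++) {
            if (bits[bits.length - 1 - i]) {        // i = distance from the LSB
                int byteIndex = nBytes - 1 - i / 8; // count back from the end
                out[byteIndex] |= (byte) (1 << (i % 8));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // "111000" -> 0b00111000 = 56
        boolean[] bits = {true, true, true, false, false, false};
        System.out.println(pack(bits)[0]); // prints 56
        // Then, e.g.: preparedStatement.setObject(1, pack(bits));
    }
}
```

This also covers the flat-file import case: parse each '1'/'0' character into a boolean first, then pack.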