I am trying to insert data into an H2 database. While starting up the application server, I was getting a SQL syntax error exception. I am not sure whether H2 supports inserting an array into a column. Is there any issue with the SQL statements below? Does H2 support array data types such as float[] or String[]?
INSERT INTO weather (id,date,temperature) values ('1','2019-09-11','{"37.3","36.8","36.4"}');
CREATE TABLE WEATHER(
id INT AUTO_INCREMENT PRIMARY KEY,
date DATE,
temperature text[]
);
You can't use the PostgreSQL-style text[] as a data type in H2 or in most other databases. H2 has the ARRAY data type for arrays:
https://h2database.com/html/datatypes.html#array_type
H2 1.4.201 will also support the standard-compliant array data type with a component type:
componentDataType ARRAY[maximumCardinality]
You can build H2 from its current sources if you really need that functionality right now, but I don't think you really need it; the non-standard plain ARRAY will work too.
'{"37.3","36.8","36.4"}' is a character string literal. H2 uses the standard array literals:
ARRAY[element, …]
https://h2database.com/html/grammar.html#array
If you use an outdated version of H2, you need to use the non-standard (element, …) literal instead (but don't use that variant in recent versions, where it is parsed as a row value, as required by the standard).
It's not related to your question, but you really should use 1 instead of '1' as an integer literal and DATE '2019-09-11' instead of '2019-09-11' as a date literal, to avoid conversions from character strings to other data types.
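Putting this together, here is a minimal sketch against an in-memory H2 database (a recent H2 version is assumed; the JDBC URL is just for illustration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class H2ArrayExample {
    public static void main(String[] args) throws Exception {
        // In-memory database; adjust the URL for your setup.
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:test");
             Statement st = con.createStatement()) {
            // Plain ARRAY instead of PostgreSQL-style text[]
            st.execute("CREATE TABLE weather("
                    + "id INT AUTO_INCREMENT PRIMARY KEY, "
                    + "date DATE, "
                    + "temperature ARRAY)");
            // Standard array literal, integer literal, and date literal
            st.execute("INSERT INTO weather (id, date, temperature) VALUES "
                    + "(1, DATE '2019-09-11', ARRAY['37.3', '36.8', '36.4'])");
        }
    }
}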
I'm using a Spring jdbcTemplate.update(String sql, Object[] args) to execute a prepared insert statement on an Oracle database. One of the objects is a Character object containing the value 'Y', and the target column is of CHAR(1) type, but I'm receiving a
java.sql.SQLException: Invalid column type
exception.
I've debugged this backwards and forwards and there is no doubt that it is this one particular object that is causing the problem. The insert executes as expected when this Character Object is omitted.
I can also output the SQL and Object[] values, copy the SQL into SQL Developer, replace the value placeholders (the ?s) with the actual values of the objects, and the insert works fine.
The sql (obfuscated to protect the guilty):
INSERT INTO SCHEMA.TABLE(NUMBER_COLUMN,VARCHAR_COLUMN,DATE_COLUMN,CHAR_COLUMN) VALUES (?,?,?,?);
The object values:
values[0] = [123]
values[1] = [Some String]
values[2] = [2012-04-19]
values[3] = [Y]
The combination, run manually in SQL Developer, works just fine:
INSERT INTO SCHEMA.TABLE(NUMBER_COLUMN,VARCHAR_COLUMN,DATE_COLUMN,CHAR_COLUMN) VALUES (123,'Some String','19-Apr-2012','Y');
The prepared statement SQL itself is generated dynamically based on the non-null instance variables contained within a data transfer object (we want the database to handle generation of default values), so I can't accept any answers suggesting that I just rework the SQL or the insertion routine.
Has anyone ever encountered this, and can you explain to me what's going on and how to fix it? It's frustratingly bizarre that I can't seem to insert a Character object into a CHAR(1) field. Any help would be much appreciated.
Sincerely, Longtime Lurker First-time Poster
There is no PreparedStatement.setXxx() method that takes a char or Character value, and the Oracle docs state that all JDBC character types map to Java Strings. Also, see http://docs.oracle.com/javase/1.3/docs/guide/jdbc/getstart/mapping.html#1039196, which does not include a mapping from Java char or Character to a JDBC type.
You will have to convert the value to a String.
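If the argument array is built dynamically, you can normalize it just before the update call. A minimal sketch (the helper name is made up):

// Hypothetical helper: converts any Character arguments to String before
// handing them to jdbcTemplate.update(); other values pass through unchanged.
static Object[] toJdbcFriendly(Object[] args) {
    Object[] converted = new Object[args.length];
    for (int i = 0; i < args.length; i++) {
        converted[i] = (args[i] instanceof Character) ? args[i].toString() : args[i];
    }
    return converted;
}

// Usage: jdbcTemplate.update(sql, toJdbcFriendly(values));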
I'm using Hibernate, and the database is SQL Server.
SQL Server differentiates its data types that support Unicode from the ones that just support ASCII. For example, the character data types that support Unicode are nchar, nvarchar, and longnvarchar, whereas their ASCII counterparts are char, varchar, and longvarchar, respectively. By default, all of Microsoft's JDBC drivers send strings to SQL Server in Unicode format, irrespective of whether the data type of the corresponding column defined in SQL Server supports Unicode or not. When the data types of the columns support Unicode, everything is smooth. But when the data types of the columns do not support Unicode, serious performance issues arise, especially during data fetches: SQL Server tries to convert the non-Unicode data types in the table to Unicode before doing the comparison. Moreover, if an index exists on the non-Unicode column, it will be ignored. This ultimately leads to a whole-table scan during data fetches, slowing down search queries drastically.
The solution we used: we found that there is a property called sendStringParametersAsUnicode which gets rid of this Unicode conversion. This property defaults to 'true', which makes the JDBC driver send every string to the database in Unicode format. We switched this property off.
My question: now we cannot send data in Unicode at all. If, in the future, one varchar column (only one column, not all varchar columns) is changed to nvarchar, we would then need to send strings for that column in Unicode format.
Please suggest how to handle this scenario.
Thanks.
You need to specify the property sendStringParametersAsUnicode=false in the connection URL:
jdbc:sqlserver://localhost:1433;databaseName=mydb;sendStringParametersAsUnicode=false
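If a single column is later changed to nvarchar, one option is to keep the property set to false and send just that one parameter as a national-character string with PreparedStatement.setNString(). A sketch, assuming the Microsoft JDBC driver with JDBC 4 support (the table and column names are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class NVarcharInsert {
    // With sendStringParametersAsUnicode=false, setString() sends parameters
    // in the non-Unicode form, while setNString() still sends its parameter
    // as Unicode, so only the nvarchar column pays the conversion cost.
    static void insertRow(Connection con, String plain, String unicode) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO mytable (varchar_col, nvarchar_col) VALUES (?, ?)")) {
            ps.setString(1, plain);     // varchar column
            ps.setNString(2, unicode);  // nvarchar column
            ps.executeUpdate();
        }
    }
}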
Unicode is the native string representation for communication with SQL Server; if you are converting to MBCS (multibyte character sets), then you are doing two conversions for every string. If you are concerned with performance, I suggest using all Unicode instead of all MBCS.
ref: http://social.msdn.microsoft.com/Forums/en/sqldataaccess/thread/249c629f-b8f2-4a8a-91e8-aad0d83919ca
I'm working with third party user data that may or may not fit into our database. The data needs to be truncated if it is too long.
We are using iBatis with Connector/J. If the data is too long, a SQL exception is thrown. I have two choices: either truncate the strings in Java or truncate them in SQL using SUBSTRING.
I don't like truncating the strings in SQL, because that bakes table structure details into our iBatis XML; on the other hand, SQL knows about our database collation (which isn't consistent and would be expensive to make consistent) and can truncate strings in a multibyte-safe manner.
Is there a way to have Connector/J just insert the data as-is, and if not, which route would people recommend?
According to the MySQL documentation, inserting data that exceeds the column length can be treated as a warning:
Inserting a string into a string column (CHAR, VARCHAR, TEXT, or BLOB) that exceeds the column's maximum length. The value is truncated to the column's maximum length.
One of the Connector/J properties is jdbcCompliantTruncation. This is its description:
This sets whether Connector/J should throw java.sql.DataTruncation exceptions when data is truncated. This is required by the JDBC specification when connected to a server that supports warnings (MySQL 4.1.0 and newer). This property has no effect if the server sql-mode includes STRICT_TRANS_TABLES. Note that if STRICT_TRANS_TABLES is not set, it will be set as a result of using this connection string option.
If I understand correctly, setting this property to false doesn't throw the exception but inserts the truncated data. This solution doesn't require you to truncate the data in program code or in SQL statements; it delegates the truncation to the database.
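A minimal sketch of the connection setup (host, database name, and credentials are placeholders), assuming the server sql-mode does not include STRICT_TRANS_TABLES:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

class TruncatingConnection {
    static Connection open() throws SQLException {
        // jdbcCompliantTruncation=false: truncation is reported as a warning
        // instead of a java.sql.DataTruncation exception, so oversized strings
        // are silently cut to the column length.
        return DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb?jdbcCompliantTruncation=false",
                "user", "password");
    }
}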
I'm using Hibernate 3.2.7.GA criteria queries to select rows from an Oracle Enterprise Edition 10.2.0.4.0 database, filtering by a timestamp field. The field in question is of type java.util.Date in Java, and DATE in Oracle.
It turns out that the field gets mapped to java.sql.Timestamp, and Oracle converts all rows to TIMESTAMP before comparing to the passed in value, bypassing the index and thereby ruining performance.
One solution would be to use Hibernate's sqlRestriction() along with Oracle's TO_DATE function. That would fix performance, but requires rewriting the application code (lots of queries).
So is there a more elegant solution? Since Hibernate already does type mapping, could it be configured to do the right thing?
Update: The problem occurs in a variety of configurations, but here's one specific example:
Oracle Enterprise Edition 10.2.0.4.0
Oracle JDBC Driver 11.1.0.7.0
Hibernate 3.2.7.GA
Hibernate's Oracle10gDialect
Java 1.6.0_16
This might sound drastic, but when faced with this problem we ended up converting all DATE columns to TIMESTAMP types in the database. There's no drawback to this that I can see, and if Hibernate is your primary application platform then you'll save yourself future aggravation.
Notes:
The column types may be changed with a simple "ALTER TABLE tableName MODIFY columnName TIMESTAMP(precisionVal)".
I was surprised to find that indexes on these columns did NOT have to be rebuilt.
Again, this only makes sense if you're committed to Hibernate.
According to Oracle JDBC FAQ:
"11.1 drivers by default convert SQL DATE to Timestamp when reading from the database"
So this is an expected behaviour.
To me this means that actual values coming from DATE columns are converted to java.sql.Timestamp, not that bind variables with java.util.Date are converted to java.sql.Timestamp.
An EXPLAIN PLAN output would help identifying the issue. Also, an Oracle trace could tell you exactly what type is assigned to the bind variable in the query.
If that's really happening, it could be an Oracle bug.
You can work around it this way:
Create an FBI (Function Based Index) on the DATE column, casting it to a TIMESTAMP. For example:
CREATE INDEX tab_idx ON tab (CAST(date_col AS TIMESTAMP)) COMPUTE STATISTICS;
Create a View that contains the same CAST expression. You can keep the same column name if you want:
CREATE VIEW v AS
SELECT CAST(date_col AS TIMESTAMP) AS date_col, col_1, ... FROM tab;
Use the view instead of the table (it's often a good idea anyway; e.g., if you were already using a view, you wouldn't need to change the code at all). When a java.sql.Timestamp variable is used with date_col in the WHERE condition, the index will be used (if it is selective enough).
If you find out why there was a java.sql.Timestamp (or Oracle fixes the potential bug), you can always go back by just changing the view (and dropping the FBI), and it would be completely transparent to the code.
I'm new to using JDBC + MySQL.
I have several 1/0 values that I want to put into a database with a PreparedStatement. The destination column is a BIT(M), where M != 1. I'm unclear on which of the setXXX methods to use. I can find references for what the data comes out as easily enough, but how it goes in is eluding me.
The values effectively live as an ordered collection of booleans in the objects used by the application. Also, I'll occasionally be importing data from flat text files with 1/0 characters.
To set a BIT(M) column in MySQL
For M==1
setBoolean(int parameterIndex, boolean x)
From the javadoc
Sets the designated parameter to the given Java boolean value. The driver converts this to an SQL BIT value when it sends it to the database.
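For example, a minimal sketch for a BIT(1) column (table and column names are made up):

// The driver converts the Java boolean to an SQL BIT value.
PreparedStatement ps = con.prepareStatement(
    "INSERT INTO flagtable (flagcolumn) VALUES (?)"
);
ps.setBoolean(1, true);
ps.executeUpdate();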
For M>1
Support for BIT(M) where M != 1 is problematic with JDBC, as BIT(M) is only required by "full" SQL-92 and only a few databases support it.
Check here Mapping SQL and Java Types: 8.3.3 BIT
The following works for me with MySQL (at least with MySQL 5.0.45, Java 1.6 and MySQL Connector/J 5.0.8)
...
PreparedStatement insert = con.prepareStatement(
    "INSERT INTO bittable (bitcolumn) VALUES (b?)"
);
insert.setString(1, "111000");
...
This uses the special b'110101010' syntax of MySQL to set the value for BIT columns.
You can use get/setObject with a byte array (byte[]). 8 bits are packed into each byte with the least significant bit being in the last array element.
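A sketch of that approach for a hypothetical BIT(16) column (table and column names are made up):

// Two bytes cover BIT(16); the least significant bits go in the last
// array element, giving the bit value 00000001 10110100 here.
byte[] bits = { 0b0000_0001, (byte) 0b1011_0100 };
PreparedStatement ps = con.prepareStatement(
    "INSERT INTO bittable (bitcolumn) VALUES (?)"
);
ps.setObject(1, bits);
ps.executeUpdate();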