I am trying to run a SQL query using Hive as the underlying data store. The query invokes a BigDecimal function and throws the following error:
Method not supported at
org.apache.hadoop.hive.jdbc.HivePreparedStatement.setBigDecimal(HivePreparedStatement.java:317)
That is simply because Hive does not support it; the driver stub looks like this:
public void setBigDecimal(int parameterIndex, BigDecimal x) throws SQLException {
    // TODO Auto-generated method stub
    throw new SQLException("Method not supported");
}
Please suggest what other workarounds or fixes are available to counter this issue.
The original Hive JDBC driver only supported a few of the JDBC interfaces, see HIVE-48: Support JDBC connections for interoperability between Hive and RDBMS. So the commit left auto-generated "not supported" stubs for interfaces like CallableStatement or PreparedStatement.
With HIVE-2158: add the HivePreparedStatement implementation based on current HIVE supported data-type, some of the methods were fleshed out, see the commit. But types like Blob, AsciiStream, binary stream and ... BigDecimal were not added. When HIVE-2158 was resolved (2011-06-15), support for DECIMAL was not yet in Hive; it came with HIVE-2693: Add DECIMAL data type, on 2013-01-17. When support for DECIMAL was added, it looks like the JDBC driver interface was not updated.
So basically the JDBC driver needs to be updated with the newly supported types. You should file a JIRA for this. Workaround: don't use DECIMAL, or don't use PreparedStatement.
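A minimal sketch of the "don't use PreparedStatement" workaround: build the statement text yourself and run it through a plain Statement. The table and column names below are made up for illustration, and toPlainString() avoids scientific notation in the literal:
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch only: inline the decimal literal instead of binding it, since the
// Hive driver's setBigDecimal() throws "Method not supported".
static void queryByAmount(Connection connection, BigDecimal threshold) throws SQLException {
    String sql = "SELECT id, amount FROM sales WHERE amount > " + threshold.toPlainString();
    try (Statement stmt = connection.createStatement();
         ResultSet rs = stmt.executeQuery(sql)) {
        while (rs.next()) {
            System.out.println(rs.getString(1) + " -> " + rs.getString(2));
        }
    }
}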
I had a similar issue with the .setObject method, but after updating to version 1.2.1 it was resolved.
.setBigDecimal is currently not implemented; here is the implementation of the class. However, the .setObject method currently has a check like this, which in fact solves the case:
if (value instanceof BigDecimal) {
    st.setString(valueIndex, value.toString());
}
This worked for me, but you can lose precision without any warning!
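If you are talking to Hive through JDBC directly rather than through MetaModel, a hedged sketch of the same trick is to bind the BigDecimal as a string yourself, assuming setString() is among the methods the driver does implement; the table name and parameter index are illustrative only:
// Sketch only: side-step the unimplemented setBigDecimal() by binding the value
// as a string; as noted above, precision or type information may silently be lost.
BigDecimal threshold = new BigDecimal("42.75");
try (PreparedStatement ps = connection.prepareStatement(
        "SELECT id, amount FROM my_table WHERE amount > ?")) {   // hypothetical table
    ps.setString(1, threshold.toPlainString());
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            System.out.println(rs.getString(1) + " -> " + rs.getString(2));
        }
    }
}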
In general it seems that MetaModel supports decimals poorly. If you get all the columns with a statement like this
Column[] columnNames = table.getColumns();
and one of the columns is a decimal, you'll notice that there is no information about the precision.
Description of code
Database connection
I am trying to store Java objects in a local database, without using an external database.
For this I use JDBC with H2 via Hibernate:
/**
 * @param connection the connection to set
 */
public static void setConnectionHibernate() {
    Properties connectionProps = new Properties();
    connectionProps.put("user", "sa");
    try {
        Class.forName("org.h2.Driver");
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    }
    url = "jdbc:h2:mem:db1;DB_CLOSE_DELAY=-1;MODE=MySQL;";
}
Query
I store the procedure in a String with this code:
static final String CREATE_PROCEDURE_INITPSEUDOS =
        "CREATE OR REPLACE PROCEDURE init_pseudos (MaxPseudo INT) BEGIN WHILE MaxPseudo >= 0 DO" +
        " INSERT INTO Pseudos (indexPseudo)" +
        " VALUES (MaxPseudo);" +
        " SET MaxPseudo = MaxPseudo - 1;" +
        " END WHILE;" +
        " END init_pseudos;";
Query execution
And I execute the statement with this code:
public static void initBaseDonneePseudos() {
    try (Connection connection = DriverManager.getConnection(url, connectionProps);
         Statement stmt = connection.createStatement()) {
        stmt.execute(RequetesSQL.CREATE_TABLE_PSEUDOS);
        stmt.execute(RequetesSQL.CREATE_PROCEDURE_INITPSEUDOS);
        stmt.execute(RequetesSQL.CREATE_FUNCTION_RECUPEREPSEUDO);
        stmt.execute(RequetesSQL.INIT_TABLE_PSEUDOS);
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
Problem
Test
I execute this test to test the statement:
@Nested
class BaseDonneeInteractionTest {

    @BeforeEach
    public void setUp() {
        BaseDonnee.setConnectionHibernate();
    }

    @Test
    void testInitBaseDonnee() {
        assertDoesNotThrow(() -> BaseDonnee.initBaseDonneePseudos());
    }
}
Error
But I obtain this error
I can't find the problem with the query. Does anybody have a solution?
The "MySQL Compatibility Mode" doesn't make H2 100% compatible with MySQL. It just changes a few things. The documentation lists them:
Creating indexes in the CREATE TABLE statement is allowed using INDEX(..) or KEY(..). Example: create table test(id int primary key, name varchar(255), key idx_name(name));
When converting a floating point number to an integer, the fractional digits are not truncated, but the value is rounded.
ON DUPLICATE KEY UPDATE is supported in INSERT statements; due to this feature, VALUES has a special non-standard meaning in some contexts.
INSERT IGNORE is partially supported and may be used to skip rows with duplicate keys if ON DUPLICATE KEY UPDATE is not specified.
REPLACE INTO is partially supported.
Spaces are trimmed from the right side of CHAR values.
REGEXP_REPLACE() uses \ for back-references.
Datetime value functions return the same value within a command.
0x literals are parsed as binary string literals.
Unrelated expressions in ORDER BY clause of DISTINCT queries are allowed.
Some MySQL-specific ALTER TABLE commands are partially supported.
TRUNCATE TABLE restarts next values of generated columns.
If the value of an identity column was manually specified, its sequence is updated to generate values after the inserted one.
NULL value works like DEFAULT value in assignments to identity columns.
Referential constraints don't require an existing primary key or unique constraint on referenced columns and create a unique constraint automatically if such constraint doesn't exist.
LIMIT / OFFSET clauses are supported.
AUTO_INCREMENT clause can be used.
YEAR data type is treated like SMALLINT data type.
GROUP BY clause can contain 1-based positions of expressions from the SELECT list.
Unsafe comparison operators between numeric and boolean values are allowed.
That's all. There is nothing about procedures. As @jccampanero pointed out in the other answer, you must use the syntax specific to H2 if you want to create stored procedures.
The problem is that H2 does not have explicit procedures or functions of the kind you are trying to define.
For that purpose, H2 allows you to create user defined functions instead. Please consider reading the appropriate documentation.
Basically, you create a user defined function by declaring an ALIAS for a bunch of Java code.
For example, in your use case, your CREATE_PROCEDURE_INITPSEUDOS could look similar to this:
CREATE ALIAS INIT_PSEUDOS AS $$
import java.sql.Connection;
import java.sql.Statement;
import java.sql.SQLException;
@CODE
void init_pseudos(final Connection conn, int maxPseudo) throws SQLException {
    try (Statement stmt = conn.createStatement()) {
        while (maxPseudo >= 0) {
            stmt.execute("INSERT INTO Pseudos (indexPseudo) VALUES (" + maxPseudo + ")");
            maxPseudo = maxPseudo - 1;
        }
    }
}
$$;
Note the following:
As I said, you define a user defined function as Java code. That Java code should be enclosed between two $$ delimiters.
Although I included some imports explicitly, you can use any class from the java.util or java.sql packages in your code without an explicit import. If you want to include some imports explicitly, or if you require classes from packages other than those mentioned, the corresponding imports should be provided right after the first $$ token. In addition, you need to include @CODE to signal to H2 where your imports end and your actual Java method starts.
If you need a reference to a Connection to the database in your code, it should be the first argument of your method.
Prefer to raise exceptions rather than hide them: it will allow your transactions to be committed or rolled back as a whole, appropriately.
You can invoke such a function as usual:
CALL INIT_PSEUDOS (5);
Please, provide the appropriate value for the maxPseudo argument.
Please, consider the provided code as just an example of use: you can improve the code in different ways, like using PreparedStatements instead of Statements for efficiency purposes, checking parameters for nullability, etcetera.
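For example, a hedged variant of the same alias that uses a PreparedStatement with a bind variable instead of concatenating the counter into the SQL (still just a sketch, against the same Pseudos table as above):
CREATE ALIAS INIT_PSEUDOS_PS AS $$
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
@CODE
void init_pseudos_ps(final Connection conn, int maxPseudo) throws SQLException {
    // Same loop as above, but the value is bound instead of concatenated.
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO Pseudos (indexPseudo) VALUES (?)")) {
        while (maxPseudo >= 0) {
            ps.setInt(1, maxPseudo);
            ps.executeUpdate();
            maxPseudo--;
        }
    }
}
$$;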
Using H2 as a test database product
Historically, H2 has been used a lot as a test replacement for the actual production database product. But this means you're limiting yourself to the least common denominator between H2 and MySQL, in your case. Including, as others pointed out, the lack of support for stored procedures. You could re-implement all the stored procedures in H2 (very tedious and error prone), or, you could just use testcontainers and run actual integration tests on MySQL directly.
I really think that using H2 as an integration test database is an outdated concept as I've shown in this blog post (unless you're also using H2 in production). You'll be much happier developing and testing everything against MySQL directly!
Using recursive SQL instead
You don't really need that procedure, I think? You probably wrote it to avoid too many round trips to the database for that loop. But you could batch the inserts or generate the following recursive SQL:
INSERT INTO Pseudos (indexPseudo)
WITH RECURSIVE
p (i) AS (
SELECT MaxPseudo
UNION ALL
SELECT i - 1 FROM p WHERE i > 0
)
SELECT i FROM p;
Now there's no more need to place this in a procedure, you can run the query directly.
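If you want to run it from Java with the maximum value as a parameter, here is a minimal sketch, assuming the same connection handling as in initBaseDonneePseudos; I have not verified that every H2 version accepts a bind variable inside the recursive CTE, so treat it as a starting point:
// Sketch: execute the recursive INSERT directly, binding MaxPseudo as a parameter.
public static void initTablePseudos(Connection connection, int maxPseudo) throws SQLException {
    String sql =
            "INSERT INTO Pseudos (indexPseudo) " +
            "WITH RECURSIVE p (i) AS (" +
            "  SELECT CAST(? AS INT)" +
            "  UNION ALL" +
            "  SELECT i - 1 FROM p WHERE i > 0" +
            ") " +
            "SELECT i FROM p";
    try (PreparedStatement ps = connection.prepareStatement(sql)) {
        ps.setInt(1, maxPseudo);
        ps.executeUpdate();
    }
}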
Translating your procedure to H2
Just to be complete, for translation of simple procedural logic, jOOQ would offer the feature transparently. You can try it online, here. It's probably overkill, I recommend the other approaches, but perhaps worth a try.
Disclaimer: I work for the company behind jOOQ.
We have an Oracle database with the following charset settings
SELECT parameter, value FROM nls_database_parameters WHERE parameter like 'NLS%CHARACTERSET'
NLS_NCHAR_CHARACTERSET: AL16UTF16
NLS_CHARACTERSET: WE8ISO8859P15
In this database we have a table with a CLOB field, which has a record that starts with the following string, stored obviously in ISO-8859-15: X²ARB (here correctly converted to unicode, in particular that 2-superscript is important and correct).
Then we have the following trivial piece of code to get the value out, which is supposed to automatically convert the charset to unicode via globalization support in Oracle:
private static final String STATEMENT = "SELECT data FROM datatable d WHERE d.id=2562456";
public static void main(String[] args) throws Exception {
Class.forName("oracle.jdbc.driver.OracleDriver");
try (Connection conn = DriverManager.getConnection(DB_URL);
ResultSet rs = conn.createStatement().executeQuery(STATEMENT))
{
if (rs.next()) {
System.out.println(rs.getString(1).substring(0, 5));
}
}
}
Running the code prints:
with ojdbc8.jar and orai18n.jar: X�ARB -- incorrect
with ojdbc7.jar and orai18n.jar: X�ARB -- incorrect
with ojdbc-6.jar: X²ARB -- correct
By using UNISTR and changing the statement to SELECT UNISTR(data) FROM datatable d WHERE d.id=2562456 I can bring ojdbc7.jar and ojdbc8.jar to return the correct value, but this would require an unknown number of changes to the code as this is probably not the only place where the problem occurs.
Is there anything I can do to the client or server configurations to make all queries return correctly encoded values without statement modifications?
It definitely looks like a bug in the JDBC thin driver (I assume you're using thin). It could be related to LOB prefetch, where the CLOB's length, character set ID, and the first part of the LOB data are sent in-band. This feature was introduced in 11.2. As a workaround, you can disable LOB prefetch by setting the connection property
oracle.jdbc.defaultLobPrefetchSize
to "-1". Meanwhile I'll follow up on this bug to make sure that it gets fixed.
Please have a look at Database JDBC Developer's Guide - Globalization Support
The basic Java Archive (JAR) file ojdbc7.jar contains all the necessary classes to provide complete globalization support for:
CHAR or VARCHAR data members of object and collection for the character sets US7ASCII, WE8DEC, WE8ISO8859P1, WE8MSWIN1252, and UTF8.
To use any other character sets in CHAR or VARCHAR data members of objects or collections, you must include orai18n.jar in the CLASSPATH environment variable:
ORACLE_HOME/jlib/orai18n.jar
I am using jOOQ to populate CSV data into my DB.
If I provide a "String Value" instead of an Int, it does not enter the value into the DB, but at the same time it does not throw an error either.
How do I know whether the upload failed or not? How do I handle these types of exceptions? In addition, is there any way to check or throw a warning if I try to put a string into an int column?
version: 3.8.x
try (Connection connection = getConnection()) {
    DSLContext create = DSL.using(connection, SQLDialect.MYSQL);
    create.loadInto(Tables.PROCESS_QUEUE_MAP)
          .loadCSV(new File("/my/folder/testInput.csv"))
          .fields(Tables.PROCESS_QUEUE_MAP.PROCESS_QUEUE_ID,
                  Tables.PROCESS_QUEUE_MAP.PROCESS_NAME,
                  Tables.PROCESS_QUEUE_MAP.QUEUE_NAME,
                  Tables.PROCESS_QUEUE_MAP.MARKEPTLACE,
                  Tables.PROCESS_QUEUE_MAP.QUEUE_TYPE,
                  Tables.PROCESS_QUEUE_MAP.CREATED_BY,
                  Tables.PROCESS_QUEUE_MAP.CREATED_TIME,
                  Tables.PROCESS_QUEUE_MAP.LAST_MODIFIED_BY,
                  Tables.PROCESS_QUEUE_MAP.LAST_MODIFIED_TIME)
          .execute();
} catch (Exception ex) {
    ex.printStackTrace();
}
General failure handling with the Loader API
The loader API by default throws all kinds of exceptions that are raised from the underlying database or JDBC driver. This can be configured and overridden by specifying:
LoaderOptionsStep.onErrorAbort() (default)
LoaderOptionsStep.onErrorIgnore()
This only affects JDBC errors, not data loading "errors"
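A hedged example of where those options go in the fluent chain, and how to inspect the outcome afterwards; the field list is shortened, and the snippet belongs inside the same try/catch block as in the question:
// Sketch: ignore JDBC errors instead of aborting, then inspect the result.
Loader<?> loader = create.loadInto(Tables.PROCESS_QUEUE_MAP)
        .onErrorIgnore()                                   // default is onErrorAbort()
        .loadCSV(new File("/my/folder/testInput.csv"))
        .fields(Tables.PROCESS_QUEUE_MAP.PROCESS_QUEUE_ID,
                Tables.PROCESS_QUEUE_MAP.PROCESS_NAME)     // remaining fields as in the question
        .execute();

// The Loader result exposes counters and the collected errors.
System.out.println("processed: " + loader.processed()
        + ", stored: " + loader.stored()
        + ", errors: " + loader.errors().size());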
jOOQ's auto-conversion
For historic reasons, throughout the jOOQ API, the automatic conversion between data types is "lenient" instead of "fail-fast". All data type conversion passes through the Convert utility, which returns null in case a data type conversion fails. E.g. when calling Convert.convert(Object, Class), the following test will pass:
assertNull(Convert.convert("abc", int.class));
This has been criticised in the past, but cannot be changed easily in the jOOQ API due to backwards compatibility.
Workarounds include:
Parsing the CSV content yourself
Passing an Object[][] to LoaderSourceStep.loadArrays()
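A hedged sketch of the second workaround: validate each row yourself and hand the result to loadArrays(), so a bad int column fails in your own parsing code instead of being converted to null. Here parseCsv() is a hypothetical helper standing in for whatever CSV parser you use, and only the first three fields are shown:
// Sketch: pre-validate rows, then load them as Object[][].
List<Object[]> rows = new ArrayList<>();
for (String[] csvRow : parseCsv(new File("/my/folder/testInput.csv"))) {   // hypothetical helper
    rows.add(new Object[] {
            Integer.parseInt(csvRow[0]),   // throws NumberFormatException on "String Value"
            csvRow[1],
            csvRow[2]
    });
}

create.loadInto(Tables.PROCESS_QUEUE_MAP)
        .loadArrays(rows.toArray(new Object[0][]))
        .fields(Tables.PROCESS_QUEUE_MAP.PROCESS_QUEUE_ID,
                Tables.PROCESS_QUEUE_MAP.PROCESS_NAME,
                Tables.PROCESS_QUEUE_MAP.QUEUE_NAME)
        .execute();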
I have a table sensor_location:
CREATE TABLE public.sensor_location (
sensor_id INTEGER NOT NULL,
location_time TIMESTAMP WITHOUT TIME ZONE NOT NULL,
location_point public.geometry NOT NULL,
CONSTRAINT sensor_location_sensor_id_fkey FOREIGN KEY (sensor_id)
REFERENCES public.sensor(id)
ON DELETE NO ACTION
ON UPDATE NO ACTION
NOT DEFERRABLE
)
I want a query which will return sensor_ids of sensors and location_times within selected polygon.
The query should look something like:
SELECT
sensor_id,
location_time
FROM
public.sensor_location
WHERE
ST_Within(location_point, ST_Polygon(ST_GeomFromText('LINESTRING(-71.050316 48.422044,-71.070316 48.422044,-71.070316 48.462044,-71.050316 48.462044,-71.050316 48.422044)'), 0));
How can I do that using jOOQ? Is it even possible to use jOOQ with PostGIS? Do I have to write my own sql query and just execute it with jOOQ?
I found this but I have no idea how to use it. I'm still a novice Java programmer.
Using jOOQ 3.16 out-of-the-box GIS support
Starting with jOOQ 3.16 (see #982), jOOQ will offer out-of-the-box support for the most popular GIS implementations, including PostGIS.
As always with jOOQ, just translate your query to the equivalent jOOQ query:
ctx.select(SENSOR_LOCATION.SENSOR_ID, SENSOR_LOCATION.LOCATION_TIME)
   .from(SENSOR_LOCATION)
   .where(stWithin(
       SENSOR_LOCATION.LOCATION_POINT,
       // The ST_Polygon(...) wrapper isn't really needed
       stGeomFromText("LINESTRING(...)", 0)
   ))
   .fetch();
Historic answer, or when something is still missing
... then, using plain SQL will certainly do the trick. Here's one example, how to do that:
ctx.select(SENSOR_LOCATION.SENSOR_ID, SENSOR_LOCATION.LOCATION_TIME)
.from(SENSOR_LOCATION)
.where("ST_WITHIN({0}, ST_Polygon(ST_GeomFromText('...'), 0))",
SENSOR_LOCATION.LOCATION_POINT)
.fetch();
Note how you can still use some type safety by using the plain SQL templating mechanism, as shown above.
If you're running lots of GIS queries
In this case, you probably want to build your own API that encapsulates all the plain SQL usage. Here's an idea how to get started with that:
public static Condition stWithin(Field<?> left, Field<?> right) {
return DSL.condition("ST_WITHIN({0}, {1})", left, right);
}
public static Field<?> stPolygon(Field<?> geom, int value) {
return DSL.field("ST_Polygon({0}, {1})", Object.class, geom, DSL.val(value));
}
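These helpers can then be used in a query much like the plain SQL version above; a sketch, with the LINESTRING coordinates elided just as before:
ctx.select(SENSOR_LOCATION.SENSOR_ID, SENSOR_LOCATION.LOCATION_TIME)
   .from(SENSOR_LOCATION)
   .where(stWithin(
       SENSOR_LOCATION.LOCATION_POINT,
       stPolygon(DSL.field("ST_GeomFromText('LINESTRING(...)')"), 0)))
   .fetch();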
If you also want to support binding GIS data types to the JDBC driver, then indeed, custom data type bindings will be the way to go:
http://www.jooq.org/doc/latest/manual/sql-building/queryparts/custom-bindings
You will then use your custom data types rather than the above Object.class, and you can then use Field<YourType> rather than Field<?> for additional type safety.
I found jooq-postgis-spatial spatial support: https://github.com/dmitry-zhuravlev/jooq-postgis-spatial
It allows working with geometries using either JTS or PostGIS types.
It seems that storing timestamps with millisecond precision is a known issue with Hibernate.
My field in the DB was initially set as timestamp(3), but I've tried datetime(3) as well... unfortunately, it didn't make any difference.
I've tried using the Timestamp and Date classes, and recently I've started using the Joda-Time library. After all those efforts, I still was unable to save timestamps with millisecond accuracy.
My mapping for the class contains following property:
<property name="startTime" column="startTime" type="org.jadira.usertype.dateandtime.joda.PersistentDateTime" length="3" precision="3" />
and I have a custom Dialect class:
public class MySQLQustomDialect extends MySQL5InnoDBDialect {
    protected void registerColumnType(int code, String name) {
        if (code == Types.TIMESTAMP) {
            super.registerColumnType(code, "TIMESTAMP(3)");
        } else {
            super.registerColumnType(code, name);
        }
    }
}
If I enter the data manually into the DB, Hibernate manages to retrieve the sub-second part.
Is there any way to solve this issue?
Are you, by any chance, using the MySQL Connector/J JDBC driver with MariaDB 5.5?
Connector/J usually sends the milliseconds part to the server only when it detects that the server is new enough, and this detection checks that the server version is >= 5.6.4. This obviously does not work correctly for MariaDB 5.5.x.
You can see the relevant part of Connector/J source here:
http://bazaar.launchpad.net/~mysql/connectorj/5.1/view/head:/src/com/mysql/jdbc/PreparedStatement.java#L796
Using MariaDB's own JDBC driver (MariaDB Java Client) might help (I haven't tried), but I accidentally discovered that adding useServerPrepStmts=true to the connection string makes this work with Connector/J, too.
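For reference, a hedged sketch of where that flag goes; host, port, database, and credentials are placeholders, and with Hibernate you would add the same parameter to your configured connection URL (e.g. hibernate.connection.url):
// Placeholder MySQL/MariaDB JDBC URL with server-side prepared statements enabled,
// which reportedly makes Connector/J send the fractional seconds part.
String url = "jdbc:mysql://localhost:3306/mydb?useServerPrepStmts=true";
try (Connection conn = DriverManager.getConnection(url, "user", "password");
     PreparedStatement ps = conn.prepareStatement(
             "INSERT INTO events (startTime) VALUES (?)")) {   // hypothetical table
    ps.setTimestamp(1, new java.sql.Timestamp(System.currentTimeMillis()));
    ps.executeUpdate();
}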