No mapping for LONGVARCHAR in Hibernate 3.2 - java

I am running Hibernate 3.2.0 with MySQL 5.1. After updating the group_concat_max_len in MySQL (because of a group_concat query that was exceeding the default value), I got the following exception when executing a SQLQuery with a group_concat clause:
"No Dialect mapping for JDBC type: -1"
-1 is the java.sql.Types value for LONGVARCHAR. Evidently, increasing the group_concat_max_len value causes calls to group_concat to return a LONGVARCHAR value. This appears to be an instance of this bug:
http://opensource.atlassian.com/projects/hibernate/browse/HHH-3892
I guess there is a fix for this issue in Hibernate 3.5, but that is still a development version, so I am hesitant to put it into production, and I don't know if it would cause issues for other parts of my code base. I could also just use JDBC queries, but then I'd have to replace every instance of a SQLQuery with a group_concat clause.
Any other suggestions?

Yes, two suggestions. Either:
Patch Hibernate 3.2.0 with the changes from HHH-3892 (i.e. get the Hibernate sources, apply the patches for r16501, r16823 and r17332) and build Hibernate yourself.
Or use a custom dialect as suggested in HHH-1483:
import java.sql.Types;

import org.hibernate.Hibernate;

public class MySQL5Dialect extends org.hibernate.dialect.MySQL5Dialect {
    public MySQL5Dialect() {
        super();
        // Register additional Hibernate types for default use in scalar SQLQuery type auto-detection
        // http://opensource.atlassian.com/projects/hibernate/browse/HHH-1483
        registerHibernateType(Types.LONGVARCHAR, Hibernate.TEXT.getName());
    }
}
Option #2 is easy to implement and to test (I didn't), while option #1 is "cleaner" but requires (a bit) more work. Personally, I'd choose option #1 because that's what you will get with 3.5, which guarantees a seamless upgrade.
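If you go with option #2, you would then declare the custom dialect in your Hibernate configuration. A minimal sketch, assuming the class above lives in a hypothetical com.example package:

import org.hibernate.cfg.Configuration;

// Equivalent to setting hibernate.dialect in hibernate.cfg.xml or hibernate.properties
Configuration cfg = new Configuration();
cfg.setProperty("hibernate.dialect", "com.example.MySQL5Dialect");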

Pascal's answer sounds very good, but I took a shortcut, for now.
Calling addScalar for every query return value also alleviates this problem. As it turns out, there were not very many places in my code with a group_concat but no explicit calls to addScalar. Adding these makes the issue go away. (Note that you must have a call to addScalar for every return value, not just those coming from a group_concat.)
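A minimal sketch of that workaround against the Hibernate 3.2 API (the query, table names, and session variable are hypothetical):

import java.util.List;

import org.hibernate.Hibernate;
import org.hibernate.SQLQuery;

// Every scalar in the select list gets an explicit addScalar call, so
// Hibernate never has to auto-detect the LONGVARCHAR that group_concat
// returns once group_concat_max_len has been raised.
SQLQuery query = session.createSQLQuery(
        "select p.id as id, group_concat(t.name) as tags "
        + "from post p join tag t on t.post_id = p.id group by p.id");
query.addScalar("id", Hibernate.LONG);
query.addScalar("tags", Hibernate.TEXT); // the group_concat column
List<Object[]> rows = query.list();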

Related

Java JOOQ multiple tables query

I have a problem.
I have the following query:
SELECT
Agents.Owner,
Orders.*
FROM
Orders
INNER JOIN Agents ON Agents.id = Orders.agentid
WHERE
Agents.botstate = 'Active' AND Orders.state = 'Active' AND(
Orders.status = 'Failed' OR Orders.status = 'Processing' AND Orders.DateTimeInProgressMicro < DATE_SUB(NOW(), INTERVAL 10 SECOND))
ORDER BY
Orders.agentid
But now I need to convert this to jOOQ. This is what I came up with:
create.select()
      .from(DSL.table("Orders"))
      .join(DSL.table("Agents"))
      .on(DSL.table("Agents").field("Id").eq(DSL.table("Orders").field("AgentId")))
      .where(DSL.table("Agents").field("botstate").eq("Active")
          .and(DSL.table("Orders").field("state").eq("Active"))
          .and((DSL.table("Orders").field("status").eq("Failed"))
              .or(DSL.table("Orders").field("status").eq("Processing"))))
      .fetch()
      .sortAsc(DSL.table("Orders").field("AgentId"));
Now the first problem is that it doesn't like all the .eq() statements, giving me the error:
Cannot resolve method: eq(java.lang.String). And my second problem is that I don't know how to write this statement in jOOQ: Orders.DateTimeInProgressMicro < DATE_SUB(NOW(), INTERVAL 10 SECOND).
The first problem is caused by the fact that I can't just use:
.on(Agents.Id).eq(Orders.AgentId)
Instead, for every table I need to write:
DSL.table("table_name")
And for every column:
DSL.field("column_name")
Without that, it doesn't recognize my tables and columns.
How can I write this SQL correctly in jOOQ? Alternatively, can I just use plain SQL statements?
Why doesn't your code work?
Table.field(String) does not construct a path expression of the form table.field. It tries to dereference a known field from the Table. If the Table doesn't have any known fields (e.g. in the case of using DSL.table(String)), then there are no fields to dereference.
Correct plain SQL API usage
There are two types of API that allow for working with dynamic SQL fragments:
The plain SQL API to construct plain SQL fragments and templates
The Name API to construct identifiers and jOOQ types from identifiers
Most people use these only when generated code isn't possible (see below), or jOOQ is missing some support for vendor-specific functionality (e.g. some built-in function).
Here's how to write your query with each:
Plain SQL API
The advantage of this API is that you can use arbitrary SQL fragments, including vendor-specific function calls that are unknown to jOOQ. There's a certain risk of running into syntax errors, SQL injection (!), and simple data type problems, because jOOQ won't know the data types unless you tell it explicitly.
// as always, this static import is implied:
import static org.jooq.impl.DSL.*;
And then:
create.select()
      .from("orders") // or table("orders")
      .join("agents") // or table("agents")
      .on(field("agents.id").eq(field("orders.agentid")))
      .where(field("agents.botstate").eq("Active"))
      .and(field("orders.state").eq("Active"))
      .and(field("orders.status").in("Failed", "Processing"))
      .orderBy(field("orders.agentid"))
      .fetch();
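The DATE_SUB condition from the question fits in this same style. A sketch, assuming jOOQ 3.x (the column name is taken from the question; the plain SQL variant passes the fragment through verbatim, while the typed variant assumes the column is a timestamp):

import java.sql.Timestamp;

import org.jooq.Condition;
import org.jooq.DatePart;

// Plain SQL template, rendered as-is by jOOQ:
Condition recent = condition(
    "orders.DateTimeInProgressMicro < date_sub(now(), interval 10 second)");

// Or kept typed, using jOOQ's interval arithmetic:
Condition recentTyped = field("orders.DateTimeInProgressMicro", Timestamp.class)
    .lt(timestampSub(currentTimestamp(), 10, DatePart.SECOND));

Either condition can then be appended to the query above with .and(recent).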
Sometimes it is useful to tell jOOQ about data types explicitly, e.g. when using these expressions in SELECT, or when creating bind variables:
// Use the default SQLDataType for a Java class
field("agents.id", Integer.class);
// Use an explicit SQLDataType
field("agents.id", SQLDataType.INTEGER);
Name API
This API allows for constructing identifiers (by default quoted, but you can configure that, or use unquotedName()). If the identifiers are quoted, the SQL injection risk is avoided, but then in most dialects, you need to get case sensitivity right.
create.select()
      .from(table(name("orders")))
      .join(table(name("agents")))
      .on(field(name("agents", "id")).eq(field(name("orders", "agentid"))))
      .where(field(name("agents", "botstate")).eq("Active"))
      .and(field(name("orders", "state")).eq("Active"))
      .and(field(name("orders", "status")).in("Failed", "Processing"))
      .orderBy(field(name("orders", "agentid")))
      .fetch();
Using the code generator
Some use cases prevent using jOOQ's code generator, e.g. when working with dynamic schemas that are only known at runtime. In all other cases, it is very strongly recommended to use the code generator. Not only will building your SQL statements with jOOQ be much easier in general, but you will also avoid problems like the one you're describing here.
Your query would read:
create.select()
      .from(ORDERS)
      .join(AGENTS)
      .on(AGENTS.ID.eq(ORDERS.AGENTID))
      .where(AGENTS.BOTSTATE.eq("Active"))
      .and(ORDERS.STATE.eq("Active"))
      .and(ORDERS.STATUS.in("Failed", "Processing"))
      .orderBy(ORDERS.AGENTID)
      .fetch();
Benefits:
All tables and columns are type checked by your Java compiler
You can use IDE auto completion on your schema objects
You never run into SQL injection problems or syntax errors
Your code stops compiling as soon as you rename a column, or change a data type, etc.
When fetching your data, you already know the data type as well
Your bind variables are bound using the correct type without you having to specify it explicitly
Remember that both the plain SQL API and the identifier API were built for cases where the schema is not known at compile time, or where schema elements need to be accessed dynamically for any other reason. They are low-level APIs, to be avoided when code generation is an option.

SQLSyntaxErrorException: data type exception due to `COALESCE` in HSQLDB

I had to upgrade the org.hsqldb library from version 2.2.9 to 2.4.0 in order to support schemas with the IF NOT EXISTS keyword, as suggested in another question on this site, but then I encountered lots of JUnit tests failing with
Caused by: java.sql.SQLSyntaxErrorException:
data type cast needed for parameter or null literal in statement
[INSERT INTO xxx(y) VALUES (COALESCE(?,?))]
My HSQLDB table looks like:
CREATE TABLE xxx (
    y VARCHAR(32) NOT NULL
)
The only thing that worked for me was to move the COALESCE logic into my Java sources before inserting.
Questions:
It makes me think that the NOT NULL requirement in the schema didn't work until version 2.4.0, and that's why my tests fail only now. Besides deleting this requirement, is there anything I can do to avoid that restriction?
Is there a function like COALESCE which inserts a default value if all the parameters are null?
You can fix it with a cast:
INSERT INTO xxx(y) VALUES (COALESCE(CAST(? AS VARCHAR(32)), ?))
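A sketch of how that looks from JDBC (the connection and first value are hypothetical; note that the second question above can be answered by simply binding a literal default as the last COALESCE argument):

import java.sql.Connection;
import java.sql.PreparedStatement;

// The CAST tells HSQLDB the type of the first placeholder, so COALESCE
// can be resolved even when that parameter is null.
try (PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO xxx(y) VALUES (COALESCE(CAST(? AS VARCHAR(32)), ?))")) {
    ps.setString(1, possiblyNullValue); // may be null
    ps.setString(2, "default");         // fallback when the first argument is null
    ps.executeUpdate();
}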

Why has Hibernate switched to use LONG over CLOB?

It looks like Hibernate started using the LONG data type in version 3.5.5 (we upgraded from 3.2.7) instead of CLOB for properties of type="text".
This is causing problems, as the LONG data type in Oracle is an old, outdated type (see http://www.orafaq.com/wiki/LONG) that shouldn't be used, and tables can't have more than one column of type LONG.
Does anyone know why this has been changed?
I have tried setting the Oracle SetBigStringTryClob property to true (as suggested in Hibernate > CLOB > Oracle :(), but that does not affect the data type mapping, only the data transfer internals, which are irrelevant to my case.
One possible fix for this is to override the org.hibernate.dialect.Oracle9iDialect:
import java.sql.Types;

import org.hibernate.dialect.Oracle9iDialect;

public class Oracle9iDialectFix extends Oracle9iDialect {
    public Oracle9iDialectFix() {
        super();
        // Map both long character types back to CLOB
        registerColumnType(Types.LONGVARCHAR, "clob");
        registerColumnType(Types.LONGNVARCHAR, "clob");
    }
}
However, this is a last resort: overriding this class is a step closer to forking Hibernate, which I would rather avoid doing.
Can anybody explain why this was done?
Should this be raised as a bug?
[UPDATE]: I have created https://hibernate.atlassian.net/browse/HHH-5569, let's see what happens.
It looks like the resolution to this issue is to use materialized_clob; at least that's what Gail Badner says on HHH-5569.
This doesn't help me at all (and I left a relevant comment about that), but it might be helpful for someone else here. Anyway, the bug was rejected and there is very little I can do about it but use an overridden dialect :(
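For anyone who can use it, the materialized_clob mapping would look roughly like this (a sketch assuming Hibernate 3.6+ annotations; the entity and fields are hypothetical):

import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Type;

@Entity
public class Document {
    @Id
    private Long id;

    // "materialized_clob" maps a SQL CLOB column to a plain Java String
    @Type(type = "materialized_clob")
    private String body;
}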
Can anybody explain why this was done? Should this be raised as a bug?
This has been done for HHH-3892 - Improve support for mapping SQL LONGVARCHAR and CLOB to Java String, SQL LONGVARBINARY and BLOB to Java byte[] (update of the documentation is tracked by HHH-4878).
And according to the same issue, the old behavior was wrong.
(NOTE: currently, org.hibernate.type.TextType incorrectly maps "text" to java.sql.Types.CLOB; this will be fixed by this issue and updated in database dialects)
You can always raise an issue, but in short, my understanding is that you should use type="clob" if you want to get the property mapped to a CLOB.
PS: Providing your own Dialect and declaring it in your Hibernate configuration (which has nothing to do with a fork) is IMHO not a solution in the long term.
I cannot answer your question about why, but for Hibernate 6 it seems they're considering switching back to using CLOB.

DatabaseMetaData.getTables() returns how many columns?

I was playing around with the DatabaseMetaData class to see how it works. The javadoc comments seem to state one thing, while the code does something different. I know it is an interface, so it is really up to the vendor that supplied the JDBC driver to implement it correctly. But I was wondering if I am missing something?
I am using this with Oracle 10g. Basically, the javadoc implies that getTables() will return the following 10 columns in the result set:
TABLE_CAT
TABLE_SCHEM
TABLE_NAME
TABLE_TYPE
REMARKS
TYPE_CAT
TYPE_SCHEM
TYPE_NAME
SELF_REFERENCING_COL_NAME
REF_GENERATION
In reality I only get 5 columns in the result set:
TABLE_CAT
TABLE_SCHEM
TABLE_NAME
TABLE_TYPE
REMARKS
So what gives? Am I misreading the javadocs, or is this pretty much par for the course with JDBC drivers? For instance, if I swapped out Oracle for MySQL (with the appropriate driver), would I get a different number of columns?
The JDBC driver for Oracle 10g that you are using is just fulfilling an older spec. Here is a JavaDoc to which it conforms. You have to know the JDBC version of your JDBC drivers to work with them effectively when you do more than the absolute basics.
JDBC is a spec. Some features are required to conform to the spec; others are optional.
I don't know the complete spec, but this must be one of the features where Oracle has chosen not to return all the columns listed in the interface. Other vendors like MySQL may choose to do so.
You'll have to try it and see.
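A small sketch of how to try it (connection details are hypothetical): print the columns your driver actually returns from getTables().

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;

// Lists the column names the driver really exposes, which may be fewer
// than the ten the current javadoc describes.
try (Connection conn = DriverManager.getConnection(url, user, password)) {
    DatabaseMetaData md = conn.getMetaData();
    try (ResultSet rs = md.getTables(null, null, "%", new String[] {"TABLE"})) {
        ResultSetMetaData rsmd = rs.getMetaData();
        for (int i = 1; i <= rsmd.getColumnCount(); i++) {
            System.out.println(rsmd.getColumnName(i));
        }
    }
}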
Are the missing columns crucial to your app's operation? It seems like a trivial reason to switch database vendors.

What are the best workarounds for known problems with Hibernate's schema validation of floating point columns when using Oracle 10g?

I have several Java classes with double fields that I am persisting via Hibernate. For example, I have
@Entity
public class Node ...
    private double value;
When Hibernate's org.hibernate.dialect.Oracle10gDialect creates the DDL for the Node table, it maps the value field to a "double precision" type.
create table MDB.Node (... value double precision not null, ...
It would appear that in Oracle, "double precision" is an alias for "float". So, when I try to verify the database schema using the org.hibernate.cfg.AnnotationConfiguration.validateSchema() method, Oracle describes the value column as a "float", which causes Hibernate to throw the following exception:
org.hibernate.HibernateException: Wrong column type in DBO.ACL_RULE for column value. Found: float, expected: double precision
A very similar problem is listed in Hibernate's JIRA database as HHH-1961. I'd like to avoid doing anything that would break MySQL, Postgres, or SQL Server support, so extending the Oracle10gDialect appears to be the most promising of the workarounds mentioned in HHH-1961. But extending a Dialect is something I've never done before, and I'm afraid there may be some nasty gotchas. What is the best workaround for this problem that won't break our compatibility with MySQL, Postgres, and SQL Server?
This is a known limitation of the schema validator; check HHH-2315. So you have three options here (actually four, but I guess deactivating validation is not wanted). Either:
Use a float instead of a double at the Java level - this might not be an option, though.
Patch org.hibernate.mapping.Table.validateColumns(Dialect dialect, Mapping mapping, TableMetadata tableInfo) to add a special condition for this particular case - this isn't really a light option.
Extend org.hibernate.dialect.Oracle10gDialect to make it use float for the SQL type DOUBLE:
import java.sql.Types;

import org.hibernate.dialect.Oracle10gDialect;

public class MyOracle10gDialect extends Oracle10gDialect {
    @Override
    protected void registerNumericTypeMappings() {
        super.registerNumericTypeMappings();
        // Oracle describes "double precision" columns as "float"
        registerColumnType(Types.DOUBLE, "float");
    }
}
The latter option seems safe but will require some testing to confirm it doesn't introduce any regressions. I didn't look at Oracle's JDBC driver code, so I can't say how float and double precision differ at the driver level.
Just adding columnDefinition = "NUMBER(9,2)" works!
@Column(name = "CREDIT_AMOUNT", columnDefinition = "NUMBER(9,2)")
@Basic
private double creditAmount;
There was a similar problem HHH-1598 with HSQL mappings of boolean fields, and a discussion of it here.
The solution I chose to use was in the discussion referenced above, with an extension of HSQLDialect.
I saw no problems with this, though I only use HSQL in tests.
It certainly doesn't interfere with any other DB.
Use the 'scale' attribute on your member.
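A sketch of what that looks like (hypothetical member; JPA's precision/scale pair only affects generated DDL and is typically used with BigDecimal):

import java.math.BigDecimal;

import javax.persistence.Column;

// Generates a NUMBER(9,2) column on Oracle
@Column(precision = 9, scale = 2)
private BigDecimal creditAmount;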
