It looks like Hibernate started using the LONG data type in version 3.5.5 (we upgraded from 3.2.7) instead of CLOB for properties of type="text".
This is causing problems, as LONG is an old, deprecated data type in Oracle (see http://www.orafaq.com/wiki/LONG) that shouldn't be used, and a table can't have more than one column of type LONG.
Does anyone know why this has been changed?
I have tried setting the Oracle SetBigStringTryClob property to true (as suggested in Hibernate > CLOB > Oracle :(), but that does not affect the data type mapping, only the data transfer internals, which are irrelevant in my case.
One possible fix for this is to override org.hibernate.dialect.Oracle9iDialect:
import java.sql.Types;
import org.hibernate.dialect.Oracle9iDialect;

public class Oracle9iDialectFix extends Oracle9iDialect {
    public Oracle9iDialectFix() {
        super();
        // map LONGVARCHAR/LONGNVARCHAR back to CLOB instead of LONG
        registerColumnType(Types.LONGVARCHAR, "clob");
        registerColumnType(Types.LONGNVARCHAR, "clob");
    }
}
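For completeness, wiring the override in is just a matter of declaring it in place of the stock dialect; a minimal sketch using the programmatic configuration API (assuming the class above is on the classpath):

import org.hibernate.cfg.Configuration;

Configuration cfg = new Configuration().configure();
// use the fixed dialect instead of the stock Oracle9iDialect
cfg.setProperty("hibernate.dialect", Oracle9iDialectFix.class.getName());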
However, this is a last resort: overriding this class is a step closer to forking Hibernate, which I would rather avoid.
Can anybody explain why this was done?
Should this be raised as a bug?
[UPDATE]: I have created https://hibernate.atlassian.net/browse/HHH-5569, let's see what happens.
It looks like the resolution to this issue is to use materialized_clob, at least that's what Gail Badner says on HHH-5569.
This doesn't help me at all (and I left a comment there saying so), but it might be helpful for someone else here. Anyway, the bug was rejected and there is very little I can do about it but use the overridden dialect :(
Can anybody explain why this was done? Should this be raised as a bug?
This has been done for HHH-3892 - Improve support for mapping SQL LONGVARCHAR and CLOB to Java String, SQL LONGVARBINARY and BLOB to Java byte[] (update of the documentation is tracked by HHH-4878).
And according to the same issue, the old behavior was wrong.
(NOTE: currently, org.hibernate.type.TextType incorrectly maps "text" to java.sql.Types.CLOB; this will be fixed by this issue and updated in database dialects)
You can always raise an issue, but in short, my understanding is that you should use type="clob" if you want the property mapped to a CLOB.
PS: Providing your own Dialect and declaring it in your Hibernate configuration (which has nothing to do with forking) is IMHO not a long-term solution.
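For illustration, a hedged sketch of the materialized_clob mapping mentioned above, using annotations (entity and property names are made up; with hbm.xml you would set type="materialized_clob" on the property instead):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Type;

@Entity
public class Article {
    @Id
    private Long id;

    // maps the String to a CLOB column rather than LONG;
    // "materialized_clob" is the type suggested on HHH-5569
    @Type(type = "materialized_clob")
    private String body;
}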
I cannot answer your question about why, but for Hibernate 6, it seems they're considering switching back to using CLOB.
I'm currently evaluating jOOQ because I believe I started reinventing a wheel that looks very close to part of jOOQ :)
Now, while digging through the excellent jOOQ documentation, I found that my use case lies somewhere between "Using jOOQ as a SQL builder" and "Using jOOQ as a SQL builder with code generation", i.e. I would like to:
Create plain SQL strings as shown in the "Using jOOQ as a SQL builder" part
Instead of using hard-coded DSL.fieldByName("BOOK","TITLE") constructs, store the name of a table along with its column names and types, as shown in the "Using jOOQ as a SQL builder with code generation" part
I prefer not to use code generation (at least not on a regular basis), but rather to create a TableImpl myself whenever a new table is needed.
While digging through the manual, I found how a table implementation should look in the chapter "Generated tables". However, the TableImpl class, as well as the Table interface, must be parameterized with a record type, and the same goes for the TableField class. I believe this is done for easier type inference when querying the database directly and retrieving results, though I may be mistaken.
So my questions are:
Is there a guide in the manual on how to create Table and TableField implementations? Or can I simply generate them once for my database schema and use the generated code as a guideline?
How can I gracefully "discard" the record type parameters in the implementing classes? At first I thought about using java.lang.Void as the type parameter, but then I noticed that only subclasses of Record are allowed... The reason is that I don't need record types at all: I plan to use the SQL queries generated by jOOQ in something like Spring's JdbcTemplate, so the mapping is done by myself.
Thanks in advance for any help!
Given your use case, I'm not sure why you'd want to roll your own Table and TableField implementations rather than using the ones generated by jOOQ. As you stated yourself, you don't have to regenerate that code every time the DB schema changes. Many users just generate the schema once in a while and then put the generated artefacts under version control. This will help you keep track of newly added changes.
To answer your questions:
Yes, there are some examples around the use of CustomTable. You may also find some people sharing similar experiences on the user group
Yes, you can just use Record. Your minimal custom table type would then be:
import org.jooq.Record;
import org.jooq.impl.TableImpl;

// parameterizing with the generic Record type "discards" the record type
class X extends TableImpl<Record> {
    public X() {
        super("x");
    }
}
Note that you will be using jOOQ's internal API (TableImpl), which is not officially supported. While I'm positive it'll work, it might break in the future, e.g. if the super constructor signature changes.
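Tying this back to the SQL-builder use case from the question, a hedged sketch of rendering the SQL with jOOQ and executing it through Spring's JdbcTemplate (the column name and the configured jdbcTemplate instance are assumptions):

import java.util.List;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

// build the SQL string only; execution and mapping stay with Spring
String sql = DSL.using(SQLDialect.MYSQL)
                .select(DSL.fieldByName("TITLE"))
                .from(new X())
                .getSQL();
List<String> titles = jdbcTemplate.queryForList(sql, String.class);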
I recently had an issue similar to this SOer's, where I was using Hibernate's Query#list method and getting compiler warnings about type safety.
Someone pointed out to me that I could use EntityManager#createQuery(String, Class<?>) instead of Query#list to accomplish the same thing but have everything genericized correctly.
I've been searching for examples of using Hibernate directly with EntityManager, but so far no luck. So I ask: how can I use the EntityManager#createQuery method in lieu of the Query#list method when doing a SELECT with Hibernate?
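For what it's worth, a minimal sketch of the typed variant (the User entity and the em instance are assumptions; createQuery(String, Class) is standard JPA 2.0):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;

// returns a List<User> with no unchecked-cast warnings
TypedQuery<User> query = em.createQuery(
        "select u from User u where u.active = true", User.class);
List<User> users = query.getResultList();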
My application was working fine before I introduced a custom type converter. I need to convert jOOQ's UInteger to Integer, so I added a type converter to achieve this. After this change, I am getting a MySQL syntax error on limit and offset.
While debugging, I found that all Integer values being supplied (including the limit and offset values) are converted into UInteger (because of the type converter) and in turn into strings, since UInteger is not a default type.
I could solve this with the solution provided in the link jooq issue with limit and offset, but I want to understand some details.
If I use settings.setStatementType(StatementType.STATIC_STATEMENT), I no longer get a prepared statement, and I lose the advantages of PreparedStatement.
If I use Factory.inline to inline all integer bind values, I have to do this across my entire application, and if I miss a spot it could result in a serious issue.
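For reference, a hedged sketch of what such inlining looks like with the jOOQ 2.x Factory API (the create instance and the generated BOOK table are assumptions):

import static org.jooq.impl.Factory.inline;

// inline() renders the literal into the SQL instead of binding a
// parameter, so the converter never touches the limit/offset values
create.selectFrom(BOOK)
      .orderBy(BOOK.ID)
      .limit(inline(10))
      .offset(inline(20))
      .fetch();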
Kindly help me solve this issue, or give me some suggestions on it.
I think what you're looking for is a way to completely disable the generation of unsigned integer types. The relevant code generation flag is documented here:
http://www.jooq.org/doc/3.0/manual/code-generation/codegen-advanced
An excerpt:
<!-- Generate jOOU data types for your unsigned data types, which are
not natively supported in Java.
Defaults to true -->
<unsignedTypes>false</unsignedTypes>
Otherwise, there is an undocumented solution to force types onto another SQL type rather than onto a converter. The documentation task is this one here:
https://github.com/jOOQ/jOOQ/issues/2095
This isn't properly tested, but in the case of converting between UInteger and Integer it might work quite well. An example from the integration tests can be seen here:
<forcedType>
<name>UUID</name>
<expressions>(?i:(.*?.)?T_EXOTIC_TYPES.UU)</expressions>
</forcedType>
In your case:
<forcedType>
<name>INTEGER</name>
<expressions>YOUR_COLUMN_MATCHING_EXPRESSION_HERE</expressions>
</forcedType>
Note that you can always change your database schema to actually hold signed types, instead of unsigned ones.
I recently migrated a Java 1.4 application to a Java 6 environment. Unfortunately, I encountered a problem with BigDecimal storage in an Oracle database. To summarize, when I try to store a "7.65E+7" BigDecimal value (76,500,000.00), Oracle actually stores the value 7,650,000.00. This defect is due to the rewriting of the BigDecimal class in Java 1.5 (see here).
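The toString() change at the root of this is easy to see in isolation (a sketch, assuming Java 5 or later):

import java.math.BigDecimal;

// Java 1.4: BigDecimal.toString() never used scientific notation.
// Java 5+: toString() may emit an exponent; toPlainString() restores
// the old behaviour.
BigDecimal bd = new BigDecimal("" + 7.65E7);  // parses "7.65E7"
System.out.println(bd.toString());            // 7.65E+7
System.out.println(bd.toPlainString());       // 76500000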
In my code, the BigDecimal was created from a double using this kind of code:
BigDecimal myBD = new BigDecimal("" + someDoubleValue);
someObject.setAmount(myBD);
// Now let Hibernate persists my object in DB...
In more than 99% of cases everything works fine, but in a few rare cases the bug mentioned above occurs. And that's quite annoying.
If I change the previous code to avoid the String constructor of BigDecimal, then I do not encounter the bug in my use cases:
// the double constructor captures the binary double value exactly,
// so no scientific-notation string is ever produced
BigDecimal myBD = new BigDecimal(someDoubleValue);
someObject.setAmount(myBD);
// Now let Hibernate persist my object in DB...
However, how can I be sure that this solution is the correct way to handle BigDecimal?
So my question is how I should manage my BigDecimal values to avoid this issue:
Not use the new BigDecimal(String) constructor and use new BigDecimal(double) directly?
Force Oracle to use toPlainString() instead of toString() when dealing with BigDecimal (and in that case, how)?
Any other solution?
Environment information:
Java 1.6.0_14
Hibernate 2.1.8 (yes, it is a quite old version)
Oracle JDBC 9.0.2.0 and also tested with 10.2.0.3.0
Oracle database 10.2.0.3.0
Edit: I've tested the same failing code with Oracle JDBC version 10.2.0.4.0, and the bug did not occur! The value stored was indeed 76,500,000.00...
Looking at the changelog, it may be related to bug #4711863.
With modern Hibernate versions you can use a UserType to map any class to a database field. Just create a custom UserType and use it to map the BigDecimal object to its database column.
See http://i-proving.com/space/Technologies/Hibernate/User+Types+in+Hibernate
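To make that concrete, here is a hedged, untested sketch of such a UserType against the Hibernate 3.x interface; binding via toPlainString() is the assumed workaround:

import java.io.Serializable;
import java.math.BigDecimal;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;
import org.hibernate.HibernateException;
import org.hibernate.usertype.UserType;

public class PlainBigDecimalType implements UserType {

    public int[] sqlTypes() {
        return new int[] { Types.NUMERIC };
    }

    public Class returnedClass() {
        return BigDecimal.class;
    }

    public boolean equals(Object x, Object y) throws HibernateException {
        return x == null ? y == null : x.equals(y);
    }

    public int hashCode(Object x) throws HibernateException {
        return x.hashCode();
    }

    public Object nullSafeGet(ResultSet rs, String[] names, Object owner)
            throws HibernateException, SQLException {
        return rs.getBigDecimal(names[0]);
    }

    public void nullSafeSet(PreparedStatement st, Object value, int index)
            throws HibernateException, SQLException {
        if (value == null) {
            st.setNull(index, Types.NUMERIC);
        } else {
            // bind as plain text so the driver never sees "7.65E+7"
            st.setString(index, ((BigDecimal) value).toPlainString());
        }
    }

    public Object deepCopy(Object value) throws HibernateException {
        return value; // BigDecimal is immutable
    }

    public boolean isMutable() {
        return false;
    }

    public Serializable disassemble(Object value) throws HibernateException {
        return (Serializable) value;
    }

    public Object assemble(Serializable cached, Object owner) throws HibernateException {
        return cached;
    }

    public Object replace(Object original, Object target, Object owner)
            throws HibernateException {
        return original;
    }
}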
Confession: I don't personally use Hibernate, but could you just create a subclass MyBigDecimal whose toString() method calls toPlainString()?
I'm also not entirely sure of the merits of passing a double into the constructor of BigDecimal: a double is inherently inexact unless the number is composed entirely of sums of powers of 2 (within range restrictions). The whole point of BigDecimal is to circumvent those limitations of double.
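A minimal sketch of that subclass idea (whether Hibernate or the driver actually stringifies the value via toString() is an assumption worth verifying):

import java.math.BigDecimal;

// BigDecimal is not final, so the override is legal
public class MyBigDecimal extends BigDecimal {
    public MyBigDecimal(String val) {
        super(val);
    }

    @Override
    public String toString() {
        return toPlainString(); // never emit scientific notation
    }
}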
I am running Hibernate 3.2.0 with MySQL 5.1. After increasing group_concat_max_len in MySQL (because a group_concat query was exceeding the default value), I got the following exception when executing an SQLQuery with a group_concat clause:
"No Dialect mapping for JDBC type: -1"
-1 is the java.sql.Types value for LONGVARCHAR. Evidently, increasing the group_concat_max_len value causes calls to group_concat to return a LONGVARCHAR value. This appears to be an instance of this bug:
http://opensource.atlassian.com/projects/hibernate/browse/HHH-3892
I understand there is a fix for this issue in Hibernate 3.5, but that is still a development version, so I am hesitant to put it into production, and I don't know whether it would cause issues elsewhere in my code base. I could also just use plain JDBC queries, but then I would have to replace every SQLQuery containing a group_concat clause.
Any other suggestions?
Yes, two suggestions. Either:
Patch Hibernate 3.2.0 with the changes from HHH-3892, i.e. get the Hibernate sources, apply the patches for r16501, r16823 and r17332, and build Hibernate yourself.
Or use a custom dialect as suggested in HHH-1483:
import java.sql.Types;
import org.hibernate.Hibernate;

public class MySQL5Dialect extends org.hibernate.dialect.MySQL5Dialect {
    public MySQL5Dialect() {
        super();
        // register additional Hibernate types for default use in scalar
        // SQLQuery type auto-detection, see
        // http://opensource.atlassian.com/projects/hibernate/browse/HHH-1483
        registerHibernateType(Types.LONGVARCHAR, Hibernate.TEXT.getName());
    }
}
Option #2 is easy to implement and to test (I didn't), while option #1 is "cleaner" but requires (a bit) more work. Personally, I'd choose option #1 because that's what you will get with 3.5, and it thus guarantees a seamless upgrade.
Pascal's answer sounds very good, but I took a shortcut, for now.
Calling addScalar for every query return value also alleviates the problem. As it turns out, there were not many places in my code with a group_concat but no explicit calls to addScalar, and adding them makes the issue go away. (Note that you must call addScalar for every return value, not just those coming from a group_concat.)
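For anyone taking the same shortcut, a hedged sketch of what that looks like (the session instance and the table/column names are made up):

import java.util.List;
import org.hibernate.Hibernate;
import org.hibernate.SQLQuery;

SQLQuery query = session.createSQLQuery(
        "select author_id, group_concat(title) as titles "
        + "from book group by author_id");
// an explicit type for every scalar avoids the JDBC type -1 auto-detection
query.addScalar("author_id", Hibernate.LONG);
query.addScalar("titles", Hibernate.TEXT);
List results = query.list();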