Issue with jOOQ custom Type conversion - java

My application was working fine before I introduced a custom type converter. I need to convert jOOQ's UInteger to Integer, so I added a type converter to achieve this. After this change, I am getting a MySQL syntax error on LIMIT and OFFSET.
While debugging, I found that all Integer values being supplied (including the limit and offset values) are converted to UInteger (because of the type converter) and in turn to String, since UInteger is not a default type.
I could solve this with the solution provided in the link jooq issue with limit and offset, but I want to understand some details.
If I use settings.setStatementType(StatementType.STATIC_STATEMENT), I cannot get a prepared statement and I lose the advantages of PreparedStatement.
If I use Factory.inline to bind all integer values inline, I have to do this across my complete application, and if I miss anything, it will result in a serious issue.
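For reference, the Factory.inline workaround would look roughly like this (a sketch, not tested; "create" is an assumed jOOQ factory and "BOOK" an assumed generated table):

```java
// Sketch of the inline workaround: LIMIT/OFFSET values are rendered
// inline in the SQL instead of being passed through the (converting)
// bind-variable machinery. "create" and "BOOK" are assumed names.
create.select()
      .from(BOOK)
      .orderBy(BOOK.ID)
      .limit(inline(10))   // inline value, no bind variable
      .offset(inline(20))
      .fetch();
```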
Kindly help me solve this issue or give me some suggestions.

I think what you're looking for is a way to completely disable the generation of unsigned integer types. The relevant code generation flag is documented here:
http://www.jooq.org/doc/3.0/manual/code-generation/codegen-advanced
An excerpt:
<!-- Generate jOOU data types for your unsigned data types, which are
not natively supported in Java.
Defaults to true -->
<unsignedTypes>false</unsignedTypes>
Otherwise, there is an undocumented solution to force types onto another SQL type rather than onto a converter. The task to document it is tracked here:
https://github.com/jOOQ/jOOQ/issues/2095
This isn't properly tested, but in the case of converting between UInteger and Integer it might work quite well. An example from the integration tests can be seen here:
<forcedType>
<name>UUID</name>
<expressions>(?i:(.*?.)?T_EXOTIC_TYPES.UU)</expressions>
</forcedType>
In your case:
<forcedType>
<name>INTEGER</name>
<expressions>YOUR_COLUMN_MATCHING_EXPRESSION_HERE</expressions>
</forcedType>
Note that you can always change your database schema to actually hold signed types, instead of unsigned ones.

Related

How to create 'update' using multiple 'set' methods

Synopsis: I'm trying to create an SQL update using jOOQ
DSL.using(connection)
.update(DSL.table("dogs"))
.set(DSL.field("age"), DSL.field("age").add(1))
.set(DSL.field("rabies"), "true")
.where(DSL.field("id").eq("Kujo"))
.execute();
Issue:
The method set(Field<Object>, Object) is ambiguous for the type UpdateSetFirstStep<Record>
Question: How do I create this update using jOOQ?
You ran into this problem: Reference is ambiguous with generics
Fixing your query
It's always a good idea to attach data types to your jOOQ expressions. In your particular case, you can work around the problem by specifying things like:
DSL.field("age", SQLDataType.INTEGER)
Or, shorter, with the usual static imports:
field("age", INTEGER)
Using the code generator
However, jOOQ is best used with its code generator, see also this article here. Not only will you avoid problems like these, but you also get compile time type safety (of data types and meta data), advanced features like implicit joins and much more.
Your query would then look like this:
DSL.using(connection)
.update(DOGS)
.set(DOGS.AGE, DOGS.AGE.add(1))
.set(DOGS.RABIES, true)
.where(DOGS.ID.eq("Kujo"))
.execute();

Can not query table with type double precision[] from PostGis with geotools

Postgres 9
Postgis
GeoTools 12.2
In my Java backend I try to make a query via GeoTools. It works fine until I try to make a query with an attribute of type "double precision[]".
First, GeoTools logs a warning:
org.geotools.jdbc.JDBCFeatureSource buildFeatureType
WARNING: Could not find mapping for '<my column name>', ignoring the column and setting the feature type read only
And then somewhere deeper down, more or less at the PostGISDialect level, it throws a NullPointerException. I tried to debug the whole thing and found that for JDBCFeatureSource, "double precision[]" is "_float8" (typeName) and the sqlType is 2003 (which is ARRAY in java.sql.Types). And JDBCFeatureSource cannot find a binding for either _float8 or the SQL ARRAY type.
I tried to find information on whether I can extend GeoTools with my own data type, but I failed. Does anyone have any idea how I can use the "double precision[]" type with GeoTools?
Ciao,
simply put, you cannot right now.
Longer explanation: I don't think there is a mapper in the GeoTools PostGISDialect for that type of data. You might want to provide a patch for it; it should not be too hard.
Simone.

How to Change Hibernate Collection Mapping from Set to List

I am using Eclipse Tools to generate my annotated domain code classes. For the one-to-many and many-to-many relationships, the generated code uses the Set type for collections.
I want to change it to List or ArrayList. What should my configuration in reveng.xml be?
Also, what are the standard conversion types between MySQL and Java? I mean, like varchar is converted to String, int to int, etc.
Can anyone share a fairly standard reveng.xml file for type conversions?
You shouldn't use List by default instead of Set. But if you need it in specific places, this can help you:
public <T> List<T> fromSetToList(Set<T> set) {
    return new ArrayList<T>(set);
}
Also, what are the standard conversion types between MySQL and Java. I mean like varchar is converted to string, int to int etc.
For reference on Hibernate mappings, I found the following link helpful for basic scenarios. For more complex mappings, refer to the full hibernate documentation.
Hibernate Mapping Cheat Sheet
As for List vs. Set: Set is actually the collection type you should use. The difference is that a List implies an order of the elements, while a Set does not allow duplicates. A plain database record set has no specified order and no duplicates, so a Set is appropriate. A List would be useful only if your query specified an order and/or you wanted some kind of UNION, which may produce duplicates.
I don't know how to turn your Sets into Lists but I would encourage you to question if you actually want to do so.

How to map standard Java types to SQL types?

I want to write a program, which creates RDBMS-tables through JDBC automatically. To add columns to a table, I need to know the name of the column and the column's RDBMS datatype. I already have the names of the columns. I don't have the RDBMS types for the columns, but I have Java types for those column. So I need to map those Java types to RDBMS datatypes. The Java type can be one of the following:
primitive types
wrapper types of primitive types
String
So my question is: How to map those java types to RDBMS types?
Is there a part of JDBC or library that already handles this mapping?
Are there any classes which can help me partially?
I am especially working with PostgreSQL. So if there is no generic way to do it, it would be enough for the moment to get it running with PostgreSQL.
Thanks in advance
Well, there's always the java.sql.Types class, which contains the generic SQL type mappings, but you'd be better served using something like Hibernate to do all of this for you.
getTypeInfo() is intended to give the driver's view on which native DBMS type should be mapped to which JDBC type. But these mappings aren't always precise, so you will need to find some way of detecting the "best match".
Sun/Oracle's JDBC Guide proposes some mappings:
Mapping SQL and Java Types
I don't think there is any generic way to do it. The devil is in the details: do you want to impose any precision or scale? What's the maximum number of characters in your strings?
The mapping, in its simplest form, could be:
Java char --> varchar(1)
Java String --> varchar
Java number --> numeric
Java boolean --> boolean
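The simple mapping above could be sketched as a small lookup helper. This is an assumption-laden sketch, not part of JDBC: the class name and the chosen PostgreSQL type names are illustrative, and you would likely want to extend it with precision/scale handling.

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

public class JavaToPgType {

    // Naive Java class -> PostgreSQL type name mapping, following the
    // simple table proposed in the answer. Adjust to your needs.
    private static final Map<Class<?>, String> MAPPING = new HashMap<>();
    static {
        MAPPING.put(String.class, "varchar");
        MAPPING.put(Character.class, "varchar(1)");
        MAPPING.put(char.class, "varchar(1)");
        MAPPING.put(Integer.class, "integer");
        MAPPING.put(int.class, "integer");
        MAPPING.put(Long.class, "bigint");
        MAPPING.put(long.class, "bigint");
        MAPPING.put(Double.class, "double precision");
        MAPPING.put(double.class, "double precision");
        MAPPING.put(Boolean.class, "boolean");
        MAPPING.put(boolean.class, "boolean");
        MAPPING.put(BigDecimal.class, "numeric");
    }

    public static String sqlTypeFor(Class<?> javaType) {
        String sql = MAPPING.get(javaType);
        if (sql == null) {
            throw new IllegalArgumentException("No mapping for " + javaType);
        }
        return sql;
    }

    public static void main(String[] args) {
        System.out.println(sqlTypeFor(String.class)); // varchar
        System.out.println(sqlTypeFor(int.class));    // integer
    }
}
```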

Why has Hibernate switched to use LONG over CLOB?

It looks like Hibernate started using the LONG data type in version 3.5.5 (we upgraded from 3.2.7) instead of CLOB for properties of type="text".
This is causing problems as LONG data type in Oracle is an old outdated data type (see http://www.orafaq.com/wiki/LONG) that shouldn’t be used, and tables can’t have more than one column having LONG as a data type.
Does anyone know why this has been changed?
I have tried to set the Oracle SetBigStringTryClob property to true (as suggested in Hibernate > CLOB > Oracle :(), but that does not affect the data type mapping but only data transfer internals which are irrelevant to my case.
One possible fix for this is to override the org.hibernate.dialect.Oracle9iDialect:
import java.sql.Types;

import org.hibernate.dialect.Oracle9iDialect;

public class Oracle9iDialectFix extends Oracle9iDialect {
    public Oracle9iDialectFix() {
        super();
        registerColumnType(Types.LONGVARCHAR, "clob");
        registerColumnType(Types.LONGNVARCHAR, "clob");
    }
}
However, this is a last resort: overriding this class is a step closer to forking Hibernate, which I would rather avoid doing.
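For completeness, the overriding dialect then has to be declared in the Hibernate configuration. A hibernate.cfg.xml sketch (replace `your.package` with the actual package of the class):

```xml
<!-- Point Hibernate at the overriding dialect instead of Oracle9iDialect -->
<property name="hibernate.dialect">your.package.Oracle9iDialectFix</property>
```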
Can anybody explain why this was done?
Should this be raised as a bug?
[UPDATE]: I have created https://hibernate.atlassian.net/browse/HHH-5569, let's see what happens.
It looks like the resolution to this issue is to use materialized_clob, at least that's what's being said by Gail Badner on HHH-5569.
This doesn't help me at all (and I left a relevant comment about that), but it might be helpful for someone else here. Anyway, the bug was rejected and there is very little I can do about it but use the overridden dialect :(
Can anybody explain why this was done? Should this be raised as a bug?
This has been done for HHH-3892 - Improve support for mapping SQL LONGVARCHAR and CLOB to Java String, SQL LONGVARBINARY and BLOB to Java byte[] (update of the documentation is tracked by HHH-4878).
And according to the same issue, the old behavior was wrong.
(NOTE: currently, org.hibernate.type.TextType incorrectly maps "text" to java.sql.Types.CLOB; this will be fixed by this issue and updated in database dialects)
You can always raise an issue but in short, my understanding is that you should use type="clob" if you want to get the property mapped to a CLOB.
PS: Providing your own Dialect and declaring it in your Hibernate configuration (which has nothing to do with forking) is IMHO not a solution in the long term.
I cannot answer your question about why, but for Hibernate 6, it seems they're considering switching back to using CLOB
