DatabaseMetaData.getTables() returns how many columns?

I was playing around with the DatabaseMetaData class to see how it works. The javadoc comments seem to state one thing, while the code does something different. I know it is an interface, so it is really up to the vendor that supplied the JDBC driver to implement it correctly. But I was wondering if I am missing something or not?
I am using this with a version of Oracle 10g. Basically the javadoc implies that it will return the following 10 columns in the result set:
TABLE_CAT
TABLE_SCHEM
TABLE_NAME
TABLE_TYPE
REMARKS
TYPE_CAT
TYPE_SCHEM
TYPE_NAME
SELF_REFERENCING_COL_NAME
REF_GENERATION
In reality I only get 5 columns in the result set:
TABLE_CAT
TABLE_SCHEM
TABLE_NAME
TABLE_TYPE
REMARKS
So what gives? Am I misreading the javadocs, or is this pretty much par for the course with JDBC drivers? For instance, if I swapped out Oracle for MySQL (of course getting the appropriate driver), would I probably get a different number of columns?

The JDBC driver for Oracle 10g that you are using is just fulfilling an older spec. Here is a JavaDoc to which it conforms. The five columns you are missing (TYPE_CAT, TYPE_SCHEM, TYPE_NAME, SELF_REFERENCING_COL_NAME and REF_GENERATION) were only added to getTables() in JDBC 3.0; a JDBC 2.0 driver documents exactly the five you got. You have to know the JDBC version of your JDBC drivers to work with them effectively when you do more than the absolute basics.
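You can also ask the driver itself which JDBC version it implements, and count the columns it actually returns, rather than trusting whichever javadoc you happen to be reading. A minimal sketch; the connection URL, credentials and schema are placeholders, and getJDBCMajorVersion() itself only exists from JDBC 3.0 on:

import java.sql.*;

public class GetTablesColumns {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XE", "scott", "tiger")) {
            DatabaseMetaData md = conn.getMetaData();
            // Only available on JDBC 3.0+ drivers; an older driver may fail here.
            System.out.println("Driver implements JDBC "
                    + md.getJDBCMajorVersion() + "." + md.getJDBCMinorVersion());
            try (ResultSet rs = md.getTables(null, "SCOTT", "%", null)) {
                ResultSetMetaData rsmd = rs.getMetaData();
                System.out.println("getTables() returns "
                        + rsmd.getColumnCount() + " columns:");
                for (int i = 1; i <= rsmd.getColumnCount(); i++) {
                    System.out.println("  " + rsmd.getColumnName(i));
                }
            }
        }
    }
}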

JDBC is a spec. Some features are required to conform to the spec; others are optional.
I don't know the complete spec, but this must be one area where Oracle has chosen not to return all the columns described in the interface. Other vendors like MySQL may choose to do so.
You'll have to try it and see.
Are the missing columns crucial to your app's operation? It seems like a trivial reason to switch database vendors.

Related

User-defined types in Apache Derby as ENUM replacements

I'm using Apache Derby as an in-memory mock database for unit testing some code that works with MySQL using jOOQ.
The production database uses enums for certain fields (this is a given and out of scope of this question - I know enums are bad but I can't change this part now), so jOOQ generates code to handle the enums.
Unfortunately, Derby does not support enums and when I try to create the database in Derby (from jOOQ SQL generator), I get errors.
My solution was to use user-defined types that mimic the enum by wrapping the relevant jOOQ-generated enum Java class. So, for example, if I have an enum field kind in the table stuffs, the jOOQ SQL generator creates Derby table creation SQL that talks about stuffs_kind.
To support this I created the class my.project.tests.StuffsKindDerbyEnum that wraps the jOOQ-generated enum type my.project.model.StuffsKind. I then run the following SQL through Derby, before running the jOOQ database creation SQL:
CREATE TYPE stuffs_kind EXTERNAL NAME 'my.project.tests.StuffsKindDerbyEnum' LANGUAGE JAVA
When I then use jOOQ to insert new records, jOOQ generates SQL that looks somewhat like this:
insert into "schema"."stuffs" ("text", "kind")
values (cast(? as varchar(32672)), cast(? as stuffs_kind))
It binds a string value to the kind argument (as expected), and it works for MySQL, but with Derby I get an exception:
java.sql.SQLDataException: An attempt was made to get a data value of type
'"APP"."STUFFS_KIND"' from a data value of type 'VARCHAR'
After looking at all kinds of ways to solve this problem (including trying to treat enums as simple VARCHARs), and before I give up on being able to test my jOOQ-using code, is there a way to get Derby to "cast" a varchar into a user-defined type? If I could plug in some Java code to handle that, it would not be a problem, as I could simply do StuffsKind.valueOf(value) to convert a string to the correct enum type. But after perusing the (very minimal) Derby documentation, I can't figure out whether it is even supposed to be possible.
Any ideas are welcome!
Implementing a dialect-sensitive custom data type binding:
The proper way forward here would be to use a dialect-sensitive, custom data type binding:
https://www.jooq.org/doc/latest/manual/sql-building/queryparts/custom-bindings
The binding could then implement, e.g., the bind variable SQL generation as follows:
@Override
public void sql(BindingSQLContext<StuffsKindDerbyEnum> ctx) throws SQLException {
    // MySQL understands the enum bind variable natively; Derby needs
    // the bind variable explicitly cast to a character type.
    if (ctx.family() == MYSQL)
        ctx.render().visit(DSL.val(ctx.convert(converter()).value()));
    else if (ctx.family() == DERBY)
        ctx.render()
           .sql("cast(")
           .visit(DSL.val(ctx.convert(converter()).value()))
           .sql(" as varchar(255))");
    else
        throw new UnsupportedOperationException("Dialect not supported: " + ctx.family());
}
You'd obviously also have to implement the other methods that tell jOOQ how to bind your variable to a JDBC PreparedStatement, or how to fetch it from a ResultSet.
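For completeness, the statement and result set methods might look roughly like this. This is a sketch following the custom bindings manual page linked above; it assumes the enum is stored under its plain string representation and that java.util.Objects is imported:

@Override
public void set(BindingSetStatementContext<StuffsKindDerbyEnum> ctx) throws SQLException {
    // Bind the enum as a plain string (null-safe via Objects.toString).
    ctx.statement().setString(ctx.index(),
            Objects.toString(ctx.convert(converter()).value(), null));
}

@Override
public void get(BindingGetResultSetContext<StuffsKindDerbyEnum> ctx) throws SQLException {
    // Fetch the string and let the converter produce the enum.
    ctx.convert(converter()).value(ctx.resultSet().getString(ctx.index()));
}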
Avoiding the MySQL enum
Another, simpler way forward might be to avoid the vendor-specific feature and just use VARCHAR in both databases. You can still map that VARCHAR to a Java enum type using a jOOQ Converter that will work the same way in both databases.
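Such a Converter is only a few lines. A sketch, assuming the generated enum my.project.model.StuffsKind from the question and storing each value under its Java name():

import org.jooq.Converter;
import my.project.model.StuffsKind;

public class StuffsKindConverter implements Converter<String, StuffsKind> {
    @Override
    public StuffsKind from(String dbValue) {
        // database VARCHAR -> Java enum
        return dbValue == null ? null : StuffsKind.valueOf(dbValue);
    }

    @Override
    public String to(StuffsKind userValue) {
        // Java enum -> database VARCHAR
        return userValue == null ? null : userValue.name();
    }

    @Override
    public Class<String> fromType() { return String.class; }

    @Override
    public Class<StuffsKind> toType() { return StuffsKind.class; }
}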
Simplify testing by avoiding Derby
A much simpler way forward is to test your application directly on MySQL, e.g. on an in-memory Docker instance. There are a lot of differences between database vendors and their features, and at some point, working around those differences just to get slightly faster tests doesn't seem reasonable.
The exception is, of course, if you have to support both Derby and MySQL in production, in case of which the data type binding is again the best solution.

Understanding table constraint in SQL

I came across the system view table_constraints. Now I'm writing a class TableConstraint representing a table constraint itself. Is table_constraints a PostgreSQL-specific concept, or can I safely use the class if I migrate to, say, MSSQL or some other RDBMS?
public abstract class TableConstraint {
private String name;
private String tableName;
//GET, SET
}
Is table_constraints a PostgreSQL-specific concept?
No. It's a part of ANSI information_schema (https://en.wikipedia.org/wiki/Information_schema)
Can I safely use the class if I migrate to, say, MSSQL or some other RDBMS?
It depends. Not all RDBMSs support information_schema (Oracle, for instance, doesn't). However, a quick look at https://msdn.microsoft.com/en-us/library/ms186778.aspx shows that SQL Server implements it.
The table_constraints view is a part of the information schema, which is itself part of the SQL standard. It is rather safe to assume a modern DB will stick to the standard; however, like with all standards, that's not always true.
It certainly exists in latest versions of:
MySQL
Postgres
SQL Server
It doesn't exist (or I haven't found info about it) in:
SQLite
Oracle (as pointed out by Radek)
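Where the view does exist, reading it over plain JDBC is straightforward. A minimal sketch; the schema name is a placeholder ("public" being PostgreSQL's default), and connection is an open java.sql.Connection:

// List every constraint in a schema from the standard view.
String sql = "SELECT constraint_name, table_name "
           + "FROM information_schema.table_constraints "
           + "WHERE table_schema = ?";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setString(1, "public");
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            System.out.println(rs.getString("constraint_name")
                    + " on " + rs.getString("table_name"));
        }
    }
}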

Why does PreparedStatement.setNull requires sqlType?

According to the javadocs of PreparedStatement.setNull: "Note: You must specify the parameter's SQL type". What is the reason that the method requires the SQL type of the column?
I noticed that passing java.sql.Types.VARCHAR also works for non-varchar columns. Are there scenarios in which VARCHAR won't be suitable (certain column types or certain DB providers)?
Thanks.
According to the javadocs of PreparedStatement.setNull: "Note: You must specify the parameter's SQL type". What is the reason that the method requires the SQL type of the column?
For maximum compatibility; as per the specification, there are some databases which don't allow untyped NULL to be sent to the underlying data source.
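For example, the safe pattern is to pass the actual SQL type of the column being set. A sketch; the table and column names here are made up:

// Bind NULL with the column's real type, so drivers that cannot
// send untyped NULLs know exactly what to transmit.
PreparedStatement ps = conn.prepareStatement(
        "UPDATE employees SET hired_on = ? WHERE id = ?");
ps.setNull(1, java.sql.Types.DATE); // hired_on is a DATE column
ps.setLong(2, 42L);
ps.executeUpdate();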
I noticed that passing java.sql.Types.VARCHAR also works for non-varchar columns. Are there scenarios in which VARCHAR won't be suitable (certain column types or certain DB providers)?
I don't think that sort of behaviour is really part of the specification, and if it is, then I'm sure there is some sort of implicit coercion going on. In any case, relying on behaviour that might break when the underlying datastore changes is not recommended. Why not just specify the correct type?
JDBC drivers appear to be moving away from setNull. See Add support for setObject(<arg>, null).
My list of databases supporting the more logical behaviour is:
Oracle
MySQL
Sybase
MS SQL Server
HSQL
My list of databases NOT supporting this logical behaviour is:
Derby (Queries with guarded null Parameter fail)
PostgreSQL (Cannot pass null in Parameter in Query for ISNULL; suggested solution)
When it comes to Oracle, it would be very unwise to use VARCHAR2 binds against other datatypes. This might fool the optimizer and you could get a bad execution plan. For instance, when filtering on a DATE column with a TIMESTAMP bind, Oracle could end up reading all your rows, converting every date to a timestamp, and only then filtering out the wanted rows.
If you have an index on your date column, it could get even worse (if Oracle chose to use it), doing single-block reads on your Oracle blocks.
--Lasse

No mapping for LONGVARCHAR in Hibernate 3.2

I am running Hibernate 3.2.0 with MySQL 5.1. After updating the group_concat_max_len in MySQL (because of a group_concat query that was exceeding the default value), I got the following exception when executing a SQLQuery with a group_concat clause:
"No Dialect mapping for JDBC type: -1"
-1 is the java.sql.Types value for LONGVARCHAR. Evidently, increasing the group_concat_max_len value causes calls to group_concat to return a LONGVARCHAR value. This appears to be an instance of this bug:
http://opensource.atlassian.com/projects/hibernate/browse/HHH-3892
I guess there is a fix for this issue in Hibernate 3.5, but that is still a development version, so I am hesitant to put it into production, and don't know if it would cause issues for other parts of my code base. I could also just use JDBC queries, but then I have to replace every instance of a SQLQuery with a group_concat clause.
Any other suggestions?
Yes, two suggestions. Either:
Patch Hibernate 3.2.0 with the changes of HHH-3892 (i.e. get the Hibernate sources, apply the patches for r16501, r16823 and r17332, and build Hibernate yourself).
Or use a custom dialect as suggested in HHH-1483:
import java.sql.Types;
import org.hibernate.Hibernate;

public class MySQL5Dialect extends org.hibernate.dialect.MySQL5Dialect {
    public MySQL5Dialect() {
        super();
        // Register additional Hibernate types for default use in scalar
        // SQLQuery type auto-detection.
        // http://opensource.atlassian.com/projects/hibernate/browse/HHH-1483
        registerHibernateType(Types.LONGVARCHAR, Hibernate.TEXT.getName());
    }
}
Option #2 is easy to implement and to test (I didn't), while option #1 is "cleaner" but requires (a bit) more work. Personally, I'd choose option #1 because that's what you will get with 3.5, and it thus guarantees a seamless upgrade.
Pascal's answer sounds very good, but I took a shortcut, for now.
Calling addScalar for every query return value also alleviates this problem. As it turns out, there were not very many places in my code with a group_concat but no explicit calls to addScalar; adding these makes the issue go away. (Note that you must have a call to addScalar for every return value, not just the ones coming from a group_concat.)
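For illustration, the pattern looks like this in Hibernate 3.x; the table and column names are made up:

// Explicitly type every scalar, so Hibernate never has to map the
// LONGVARCHAR (-1) JDBC type of the group_concat column by itself.
SQLQuery query = session.createSQLQuery(
        "select s.id, group_concat(s.kind) as kinds from stuffs s group by s.id");
query.addScalar("id", Hibernate.LONG);
query.addScalar("kinds", Hibernate.TEXT); // the group_concat result
List<Object[]> rows = query.list();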

How to determine database type for a given JDBC connection?

I need to handle result sets returned by stored procedures/functions for three databases (Oracle, Sybase, MS SQL Server). The procedures/functions are generally the same, but the call is a little different in Oracle:
statement.registerOutParameter(1, oracle.jdbc.OracleTypes.CURSOR);
...
statement.execute();
ResultSet rs = (ResultSet)statement.getObject(1);
JDBC doesn't provide a generic way to handle this, so I'll need to distinguish the different types of DBs in my code. I'm given the connection but don't know the best way to determine whether the DB is Oracle. I can use the driver name but would rather find a cleaner way.
I suspect you would want to use the DatabaseMetaData class. Most likely DatabaseMetaData.getDatabaseProductName would be sufficient, though you may also want to use the getDatabaseProductVersion method if you have code that depends on the particular version of the particular database you're working with.
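For instance, a minimal sketch of branching on the reported product name (the exact strings vary by driver, so matching loosely is safer):

// Branch on the product name reported by the driver's metadata.
DatabaseMetaData md = connection.getMetaData();
String product = md.getDatabaseProductName();
if (product.toLowerCase().contains("oracle")) {
    // register the Oracle-specific CURSOR out parameter here
} else {
    // common path for Sybase and MS SQL Server
}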
You can use org.apache.ddlutils, class PlatformUtils:
String databaseName = new PlatformUtils().determineDatabaseType(dataSource);
