I have a table with a column defined like this:
Country VARCHAR(2) NOT NULL DEFAULT 'US'
When I try to detect this default with JDBC it fails. Basically when I use DatabaseMetaData.getColumns the result does not contain the COLUMN_DEFAULT column. It is there when I try this with H2.
Any ideas how to get the default with Derby?
Did you try for COLUMN_DEFAULT? Or for COLUMN_DEF? According to the Javadoc I think it should be COLUMN_DEF.
Also, what version of Java and of JDBC are you using? I think that Derby only added COLUMN_DEF as part of the JDBC 4.0 support, which may require Java 1.6.
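If it helps, here is a minimal sketch of reading the default through DatabaseMetaData using the COLUMN_DEF column; the schema, table and column names are just placeholders:
DatabaseMetaData md = con.getMetaData();
try (ResultSet rs = md.getColumns(null, "APP", "MYTABLE", "COUNTRY")) {
    while (rs.next()) {
        System.out.println(rs.getString("COLUMN_NAME") + " -> " + rs.getString("COLUMN_DEF"));
    }
}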
I wonder if it's possible to control the type of string (unicode or ANSI) while setting parameter value in the queries generated by Hibernate.
The problem is that most of the tables in my application have varchar/char columns, and these columns often appear in various filters. However, all queries generated by Hibernate set the parameter type to nvarchar/nchar, making all indexes built on varchar columns pretty much unusable (index scan or full table scan instead of index seeks/lookups).
As a workaround I set the sendStringParametersAsUnicode JDBC connection parameter to false, which solved the performance issues, but I hope there is a way to specify which string parameter has to be Unicode and which is just a plain ANSI string.
Thank you.
I wouldn't change anything in Hibernate or elsewhere, because this is not the right layer to do it in. It is a configuration issue: if you make the connection parameter (URL) part of your configuration, you don't need any conversions etc.; you only switch the URL.
e.g:
jdbc:sqlserver://localhost\MYSERVER;DatabaseName=MyDB;sendStringParametersAsUnicode=false
If you switch your database to MySQL or any other SQL server, you just change the configuration, not Hibernate. I don't think Hibernate would be the right place for this.
Hope that helps..
I found this list helpful:
Vendor              Parameter
-----------------------------------------
JSQLConnect         asciiStringParameters
JTDS                sendStringParametersAsUnicode
DataDirectConnect   sendStringParametersAsUnicode
Microsoft JDBC      sendStringParametersAsUnicode
http://emransharif.blogspot.de/2011/07/performance-issues-with-jdbc-drivers.html
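The same property can also be passed programmatically if you build the connection yourself; this is only a sketch with placeholder credentials, assuming the Microsoft JDBC driver:
Properties props = new Properties();
props.setProperty("user", "sa");
props.setProperty("password", "secret");
props.setProperty("sendStringParametersAsUnicode", "false"); // same switch as on the URL
Connection con = DriverManager.getConnection(
        "jdbc:sqlserver://localhost\\MYSERVER;DatabaseName=MyDB", props);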
cheers :)
I want to fetch column comments using JDBC metadata, but every time it returns null. I tested with Oracle and SQL Server, and in both cases it returns null.
DatabaseMetaData dmt = con.getMetaData();
colRs = dmt.getColumns(null, "dbo", "Student", null);
while (colRs.next()) {
    System.out.println(colRs.getString("REMARKS"));
}
While I am getting all other data like column name, length, etc. absolutely fine...
For Oracle you need to provide the connection property remarksReporting and set it to true, or call the method setRemarksReporting() to enable it.
OracleConnection oraCon = (OracleConnection)con;
oraCon.setRemarksReporting(true);
After that, getColumns() will return the column (or table) comments in the REMARKS column of the ResultSet.
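Alternatively, remarksReporting can be passed as a connection property when the connection is opened; a sketch with placeholder URL and credentials:
Properties props = new Properties();
props.setProperty("user", "scott");
props.setProperty("password", "tiger");
props.setProperty("remarksReporting", "true"); // same effect as setRemarksReporting(true)
Connection con = DriverManager.getConnection("jdbc:oracle:thin:@//localhost:1521/XE", props);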
See Oracle's JDBC Reference for more details
For SQL Server this is not possible at all.
Neither the Microsoft nor the jTDS driver exposes table or column comments, probably because there is no SQL support for that in SQL Server. The usual approach of using "extended properties" with the property name MS_DESCRIPTION is not reliable, mainly because there is no requirement to use MS_DESCRIPTION as the property name. Not even sp_help returns those remarks, and at least the jTDS driver simply calls sp_help to get the table columns. I don't know what the Microsoft driver does.
The only option you have there, is to use fn_listextendedproperty() to retrieve the comments:
e.g.:
SELECT objname, cast(value as varchar(8000)) as value
FROM fn_listextendedproperty ('MS_DESCRIPTION','schema', 'dbo', 'table', 'Student', 'column', null)
You need to replace MS_DESCRIPTION with whatever property name you use to store your comments.
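A minimal sketch of running that query over JDBC (the table name and property name are the ones from the example above; con is the same Connection as in the question):
String sql = "SELECT objname, CAST(value AS varchar(8000)) AS value "
           + "FROM fn_listextendedproperty('MS_DESCRIPTION', 'schema', 'dbo', 'table', 'Student', 'column', NULL)";
try (Statement st = con.createStatement();
     ResultSet rs = st.executeQuery(sql)) {
    while (rs.next()) {
        System.out.println(rs.getString("objname") + ": " + rs.getString("value"));
    }
}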
In a unit test I am trying to generate a table in an in-memory HSQLDB. The table contains a column with the definition @Column(name = "xxx", columnDefinition="NUMBER(10,0) default 0"). NUMBER is not recognized by HSQLDB (version 2.3.3), so I have added a script that runs this statement first: CREATE TYPE NUMBER AS NUMERIC;. Now it seems to recognize NUMBER, but I get the error unexpected token: ( instead. I cannot edit the column definition, so how do I correctly map Oracle NUMBER(10,0) to NUMERIC? If I remove the precision and scale from NUMBER it seems to work.
You do not need to define the NUMBER type, as it is supported by HSQLDB.
HSQLDB supports Oracle syntax in one of its compatibility modes. Run this statement to enable it:
SET DATABASE SQL SYNTAX ORA TRUE
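If I remember correctly, the same setting can also go on the JDBC URL as the connection property sql.syntax_ora, which is convenient for an in-memory test database (the database name here is just an example):
jdbc:hsqldb:mem:testdb;sql.syntax_ora=true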
Is it possible to specify the default schema for a PostgreSQL JDBC connection? Can I specify it on the connection URL? How do I do that?
I know this was answered already, but I just ran into the same issue trying to specify the schema to use for the Liquibase command line.
Update
As of JDBC driver version 9.4 you can specify the URL with the new currentSchema parameter like so:
jdbc:postgresql://localhost:5432/mydatabase?currentSchema=myschema
It appears to be based on an earlier patch:
http://web.archive.org/web/20141025044151/http://postgresql.1045698.n5.nabble.com/Patch-to-allow-setting-schema-search-path-in-the-connectionURL-td2174512.html
which proposed URLs like so:
jdbc:postgresql://localhost:5432/mydatabase?searchpath=myschema
As of version 9.4, you can use the currentSchema parameter in your connection string.
For example:
jdbc:postgresql://localhost:5432/mydatabase?currentSchema=myschema
If it is possible in your environment, you could also set the user's default schema to your desired schema:
ALTER USER user_name SET search_path to 'schema'
I don't believe there is a way to specify the schema in the connection string. It appears you have to execute
set search_path to 'schema'
after the connection is made to specify the schema.
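A minimal sketch of doing that right after connecting (the schema name is a placeholder):
try (Statement st = con.createStatement()) {
    st.execute("SET search_path TO myschema");
}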
DataSource – setCurrentSchema
When instantiating a DataSource implementation, look for a method to set the current/default schema.
For example, on the PGSimpleDataSource class call setCurrentSchema.
org.postgresql.ds.PGSimpleDataSource dataSource = new org.postgresql.ds.PGSimpleDataSource ( );
dataSource.setServerName ( "localhost" );
dataSource.setDatabaseName ( "your_db_here_" );
dataSource.setPortNumber ( 5432 );
dataSource.setUser ( "postgres" );
dataSource.setPassword ( "your_password_here" );
dataSource.setCurrentSchema ( "your_schema_name_here_" ); // <----------
If you leave the schema unspecified, Postgres defaults to a schema named public within the database. See the manual, section 5.9.2 The Public Schema. To quote that manual:
In the previous sections we created tables without specifying any schema names. By default such tables (and other objects) are automatically put into a schema named “public”. Every new database contains such a schema.
I submitted an updated version of a patch to the PostgreSQL JDBC driver to enable this a few years back. You'll have to build the PostgreSQL JDBC driver from source (after adding in the patch) to use it:
http://archives.postgresql.org/pgsql-jdbc/2008-07/msg00012.php
http://jdbc.postgresql.org/
In Go with "sql.DB" (note the search_path with underscore):
postgres://user:password@host/dbname?sslmode=disable&search_path=schema
Don't forget SET SCHEMA 'myschema' which you could use in a separate Statement
SET SCHEMA 'value' is an alias for SET search_path TO value. Only one
schema can be specified using this syntax.
And since 9.4, and possibly earlier versions of the JDBC driver, there is support for the setSchema(String schemaName) method.
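For example, a minimal sketch using that method (the URL, credentials and schema name are placeholders):
Connection con = DriverManager.getConnection(
        "jdbc:postgresql://localhost:5432/mydatabase", "user", "password");
con.setSchema("myschema"); // subsequent unqualified table names resolve against this schema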
I just found out that creating a connection to Oracle using the JDBC Thin driver version 10.2.0.4 fails when the default Locale is set to empty strings, for example:
Locale.setDefault(new Locale("",""));
DriverManager.registerDriver (new oracle.jdbc.driver.OracleDriver());
con = DriverManager.getConnection(url, userName, password);
The above code will produce ORA-12705: Cannot access NLS data files or invalid environment specified.
It works fine if I specify en_US, or just en.
But if I use Oracle driver 9.2.0.1, this exact piece of code works: the NLS is set to AMERICAN.
My question is: is this a normal, documented change of behavior?
Or is setting the default locale to empty strings a bad practice?
If you have access to Oracle Support, note 115001.1 seems to state there are changes around this from 10g onwards:
The 9i Thick JDBC (=OCI) driver will make use of the NLS_LANG to determine
how to convert characters. NOT defining the NLS_LANG will make it use the
default US7ASCII setting; any non-ASCII data from/to the database will be
lost.
You don't need to specify anything to your Java code for JDBC to pick
up the NLS_LANG.
From 10g onwards the Thick JDBC driver is ignoring the NLS_LANG and
uses Language, Territory and characterset settings from the JVM
locale.
In 11g the property -Doracle.jdbc.ociNlsLangBackwardCompatible=true is
settable on the command line. If set, it causes JDBC to get the characterset
from NLS_LANG instead of getting the client characterset id from the
locale.
Language and territory are taken from the locale regardless of the
property though.
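Based on that, a sketch of the workaround on the application side is simply to give the JVM a concrete default locale before opening the connection (using the url, userName and password variables from the question):
Locale.setDefault(Locale.US); // any locale with at least a language, e.g. new Locale("en")
DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
con = DriverManager.getConnection(url, userName, password);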