It seems that storing timestamps with millisecond precision is a known issue with Hibernate.
My field in the db was initially defined as timestamp(3), but I've tried datetime(3) as well; unfortunately, it didn't make any difference.
I've tried using the Timestamp and Date classes, and recently I've started using the Joda-Time library. After all those efforts, I still wasn't able to save timestamps with millisecond accuracy.
My mapping for the class contains the following property:
<property name="startTime" column="startTime" type="org.jadira.usertype.dateandtime.joda.PersistentDateTime" length="3" precision="3" />
and I've defined a custom Dialect class:
import java.sql.Types;

import org.hibernate.dialect.MySQL5InnoDBDialect;

public class MySQLCustomDialect extends MySQL5InnoDBDialect {

    @Override
    protected void registerColumnType(int code, String name) {
        if (code == Types.TIMESTAMP) {
            super.registerColumnType(code, "TIMESTAMP(3)");
        } else {
            super.registerColumnType(code, name);
        }
    }
}
If I enter the data manually into the db, Hibernate manages to retrieve the sub-second part.
Is there any way to solve this issue?
Are you, by any chance, using the MySQL Connector/J JDBC driver with MariaDB 5.5?
Connector/J usually sends the milliseconds part to the server only when it detects that the server is new enough, and this detection checks that the server version is >= 5.6.4. This obviously does not work correctly for MariaDB 5.5.x.
You can see the relevant part of Connector/J source here:
http://bazaar.launchpad.net/~mysql/connectorj/5.1/view/head:/src/com/mysql/jdbc/PreparedStatement.java#L796
Using MariaDB's own JDBC driver (MariaDB Java Client) might help (I haven't tried), but I accidentally discovered that adding useServerPrepStmts=true to the connection string makes this work with Connector/J, too.
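For example, the connection string would look something like this (host, port, and database name are placeholders):

jdbc:mysql://localhost:3306/mydb?useServerPrepStmts=true

With server-side prepared statements, the client-side literal-formatting path linked above is bypassed and the server interprets the bound values itself, which is presumably why the milliseconds survive.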
I am new to JOOQ and love learning and using it.
I am at the point where I want to do some testing, but instead of testing
on the 'real' database I want to use a 'copy' of that database.
When using jdbc all it really took was changing the database name in the
create statement and using that name when connecting to the database.
I quickly discovered that anything I tried to write to my test database
was ending up in the production database.
In the JOOQ documentation it looked like I could solve the problem with some mapping.
I added a Settings class and got the DSLContext using the settings, as shown below
My base database name is 'kpi'.
Connection conn = JooqUtil.getConnection("user", "pw", "kpitest", hostip, sb);

Settings settings = new Settings()
        .withRenderMapping(new RenderMapping()
            .withSchemata(new MappedSchema()
                .withInput("kpi")
                .withOutput("kpitest")));

// Add the settings to the DSLContext
if (sb.length() == 0) {
    dsl = DSL.using(conn, SQLDialect.MYSQL, settings);
}
The above resulted in the testName database being used,
BUT access to the tables and fields was still using the productionName.
Going back to the documentation, it looks like there is a way to map the tables
as well, but it looks like a lot of work.
Reading further, I found some settings in the configuration file using schemata.
My current JOOQ configuration XML file contains
<inputSchema>kpi</inputSchema>
It looks like I should change it to something like this (maybe dropping the top-level <inputSchema> altogether):
<schemata>
  <schema>
    <inputSchema>kpi</inputSchema>
    <outputSchema>kpi</outputSchema>
  </schema>
  <schema>
    <inputSchema>kpi</inputSchema>
    <outputSchema>kpitest</outputSchema>
  </schema>
</schemata>
In reading the manual I was not sure whether the DEV and PRODUCTION names had literal significance or not.
What would help is a (real-world) example where there was a production and a test database,
or one development database per developer, and how the tables and fields were accessed
if there was a change in syntax. A change in syntax would make unit-type testing difficult, though.
Thanks for any guidance and links to examples.
I am trying to run a SQL query using Hive as the underlying data store. The query invokes a BigDecimal function and throws the following error:
Method not supported at
org.apache.hadoop.hive.jdbc.HivePreparedStatement.setBigDecimal(HivePreparedStatement.java:317)
That is simply because Hive does not support it, as the implementation shows:
public void setBigDecimal(int parameterIndex, BigDecimal x) throws SQLException {
    // TODO Auto-generated method stub
    throw new SQLException("Method not supported");
}
Please suggest what other workarounds or fixes are available to counter such an issue.
The original Hive JDBC driver only supported a few of the JDBC interfaces; see HIVE-48: Support JDBC connections for interoperability between Hive and RDBMS. So the commit left auto-generated "not supported" code for interfaces like CallableStatement or PreparedStatement.
With HIVE-2158: add the HivePreparedStatement implementation based on current HIVE supported data-type, some of the methods were fleshed out; see the commit. But types like Blob, AsciiStream, binary stream and ... BigDecimal were not added. When HIVE-2158 was resolved (2011-06-15), support for DECIMAL in Hive was not in yet; it came with HIVE-2693: Add DECIMAL data type, on 2013-01-17. When support for DECIMAL was added, it looks like the JDBC driver interface was not updated.
So basically the JDBC driver needs to be updated with the newly supported types. You should file a JIRA for this. Workaround: don't use DECIMAL, or don't use PreparedStatement.
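If you need to keep PreparedStatement, one workaround is to bind the decimal through setString, which the HIVE-2158 commit did implement. A minimal sketch; the class and helper names here are mine, not part of the Hive driver:

```java
import java.math.BigDecimal;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class HiveDecimalBinding {

    // Render the decimal as a plain string; toPlainString avoids
    // scientific notation such as 1E+2, which would not read back
    // as the same numeric literal.
    static String decimalLiteral(BigDecimal value) {
        return value == null ? null : value.toPlainString();
    }

    // Hypothetical helper: bind a DECIMAL parameter via setString,
    // because the Hive driver throws "Method not supported"
    // for setBigDecimal.
    static void setDecimalAsString(PreparedStatement ps, int parameterIndex,
                                   BigDecimal value) throws SQLException {
        ps.setString(parameterIndex, decimalLiteral(value));
    }
}
```

Note that binding numerics as strings relies on Hive's implicit conversion and can silently lose type information, so check precision on round-trips.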
I had a similar issue with the .setObject method, but after updating to version 1.2.1 it was resolved.
.setBigDecimal is currently not implemented; here is the implementation of the class. However, the .setObject method currently has a line like this, which in fact solves the case:
if (value instanceof BigDecimal) {
    st.setString(valueIndex, value.toString());
}
This worked for me, but you can lose precision without any warning!
In general it seems that metamodel supports DECIMAL poorly. If you get all the columns with a statement like this
Column[] columnNames = table.getColumns();
and one of the columns is decimal you'll notice that there is no information about the precision.
From within Java code - where I already have a connection to a database - I need to find the default schema of the connection.
I have the following code that gives me a list of all schemas of that connection.
rs = transactionManager.getDataSource().getConnection().getMetaData().getSchemas();
while (rs.next()) {
    log.debug("The schema is {} and the catalogue is {}", rs.getString(1), rs.getString(2));
}
However, I don't want the list of all the schemas. I need the default schema of this connection.
Please help.
Note 1: I am using H2 and DB2 on Windows 7 (dev box) and Red Hat Linux (production box).
Note 2: I finally concluded that it was not possible to use the Connection object in Java to find the default schema of both H2 and DB2 using the same code. I fixed the problem with a configuration file. However, if someone can share a solution, I could go back and refactor the code.
Please use the connection.getMetaData().getURL() method, which returns a String like
jdbc:mysql://localhost:3306/?autoReconnect=true&useUnicode=true&characterEncoding=utf8
We can parse it easily and get the schema name, although note that not every JDBC driver includes the schema in its URL.
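A rough sketch of that parsing (the class and method names are mine; as the question's Note 2 observes, H2 and DB2 URLs may not carry a schema at all, so treat this as a best-effort fallback):

```java
public class JdbcUrlSchema {

    // Best-effort extraction of the database/schema name from a JDBC URL
    // of the common form jdbc:vendor://host:port/dbname?params.
    // Returns an empty string when the URL carries no database name.
    static String schemaFromUrl(String url) {
        int slash = url.lastIndexOf('/');
        if (slash < 0) {
            return "";
        }
        String tail = url.substring(slash + 1);
        int query = tail.indexOf('?');
        return query < 0 ? tail : tail.substring(0, query);
    }
}
```

For the example URL above this returns an empty string, since no database was selected in the connection string; for jdbc:mysql://localhost:3306/kpi?autoReconnect=true it would return kpi.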
I have a requirement to upgrade our company's legacy db system from MySQL 4.1 to 5.5. I found that if I insert an empty string into a decimal/integer field via a Java program, an exception is thrown, but if I run the same statement directly from the mysql command line, the record is inserted normally (the empty field becomes 0). This leads me to think there is some problem with the JDBC driver. Does the driver enforce some rule on the statement before passing it to the db? I really don't want to rewrite the old program to support this change.
Thanks in advance for your answer :)
You can assign null instead of an empty string.
You are changing your DB version, so not all of your code may be supported as-is. You will have to change it.
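If the binding code can be touched in one place, a low-impact option is to normalise empty strings to null at the JDBC boundary rather than rewriting the program's logic. A sketch; the class and helper names are mine:

```java
import java.math.BigDecimal;

public class EmptyStringNormaliser {

    // Hypothetical helper: the upgraded server rejects '' for numeric
    // columns when inserted through the driver (as described above),
    // so map empty strings to null before binding. Return
    // BigDecimal.ZERO instead of null if you want the old 4.1
    // behaviour of storing 0.
    static BigDecimal toDecimalOrNull(String raw) {
        if (raw == null || raw.trim().isEmpty()) {
            return null;
        }
        return new BigDecimal(raw.trim());
    }
}
```

You would then bind with ps.setBigDecimal(i, toDecimalOrNull(value)); passing null to setBigDecimal sends SQL NULL.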
I have just updated to Hibernate 3.6.5.Final from 3.3.0.GA and have run into a problem with a SQL formula call on an XML mapped property:
<property
name="endDate"
type="java.util.Date"
formula="TIMESTAMPADD(SECOND, (quantity*60*60), transactionDate)"
/>
I have changed nothing in the *.xml.hbm nor have I changed the database design. Where previously my endDate was nicely calculated I now get a MySQLSyntaxErrorException:
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'this_.SECOND,(this_.quantity*60*60),this_.transactionDate) as formula0_0_ from t' at line 1
The problem is pretty obvious in that this_.SECOND should be SECOND. It seems that Hibernate recognizes TIMESTAMPADD as a formula but not SECOND as a static parameter, and thus thinks it must be a column in the table. I am unsure how to tell Hibernate to use SECOND as is.
I've tried registerFunction and registerKeyword on my Dialect but without any luck as these seem related to HQL function definitions and not native SQL which is used here in the formula.
Could anyone point me in the right direction or tell me what Hibernate does different between these versions and how I can fix it?
I just upgraded to Hibernate 4.1.2 and this same problem started coming back. The solution of [SECOND] no longer works and I had to register the keyword in my own custom Dialect. Like:
public class ExtendedMySQL5InnoDBDialect extends MySQL5InnoDBDialect {

    public ExtendedMySQL5InnoDBDialect() {
        super();
        // make sure to register it in lowercase, as uppercase does not work
        // (took me 4 hours to realize)
        registerKeyword("second");
    }
}
I had the same type of problem in SQL Server; a similar solution might work here.
Here is what I found:
https://forum.hibernate.org/viewtopic.php?p=2427791
So, try putting quotes around SECOND (escaped as &quot; inside the XML attribute):
<property
    name="endDate"
    type="java.util.Date"
    formula="TIMESTAMPADD(&quot;SECOND&quot;, (quantity*60*60), transactionDate)"
/>
I was not initially sure how to escape double quotes in this XML attribute, but &quot; should work; if it doesn't, try \".