Timezones in SQL DATE vs java.sql.Date

I'm getting a bit confused by the behaviour of the SQL DATE data type vs. that of java.sql.Date. Take the following statement, for example:
select cast(? as date) -- in most databases
select cast(? as date) from dual -- in Oracle
Let's prepare and execute the statement with Java
PreparedStatement stmt = connection.prepareStatement(sql);
stmt.setDate(1, new java.sql.Date(0)); // GMT 1970-01-01 00:00:00
ResultSet rs = stmt.executeQuery();
rs.next();
// I live in Zurich, which is CET, not GMT. So the following prints -3600000,
// which is CET 1970-01-01 00:00:00
// ... or GMT 1969-12-31 23:00:00
System.out.println(rs.getDate(1).getTime());
In other words, the GMT timestamp I bind to the statement becomes the CET timestamp I get back. At what step is the timezone added and why?
Note:
I have observed this to be true for any of these databases:
DB2, Derby, H2, HSQLDB, Ingres, MySQL, Oracle, Postgres, SQL Server, Sybase ASE, Sybase SQL Anywhere
I have observed this to be false for SQLite (which doesn't really have true DATE data types)
All of this is irrelevant when using java.sql.Timestamp instead of java.sql.Date
This is a similar question, which doesn't answer this question, however: java.util.Date vs java.sql.Date

The JDBC specification does not define any details with regard to time zones. Nonetheless, most of us know the pains of having to deal with JDBC time zone discrepancies; just look at all the StackOverflow questions!
Ultimately, the handling of time zone for date/time database types boils down to the database server, the JDBC driver and everything in between. You're even at the mercy of JDBC driver bugs; PostgreSQL fixed a bug in version 8.3 where
Statement.getTime, .getDate, and .getTimestamp methods which are passed a Calendar object were rotating the timezone in the wrong direction.
When you create a new date using new Date(0) (let's assume you are using Oracle JavaSE's java.sql.Date), your date is created
using the given milliseconds time value. If the given milliseconds value contains time information, the driver will set the time components to the time in the default time zone (the time zone of the Java virtual machine running the application) that corresponds to zero GMT.
So, new Date(0) should be using GMT.
When you call ResultSet.getDate(int), you're executing a JDBC implementation. The JDBC specification does not dictate how a JDBC implementation should handle time zone details; so you're at the mercy of the implementation. Looking at the Oracle 11g oracle.sql.DATE JavaDoc, it doesn't seem Oracle DB stores time zone information, so it performs its own conversions to get the date into a java.sql.Date. I have no experience with Oracle DB, but I would guess the JDBC implementation is using the server's and your local JVM's time zone settings to do the conversion from oracle.sql.DATE to java.sql.Date.
You mention that multiple RDBMS implementations handle time zone correctly, with the exception of SQLite. Let's look at how H2 and SQLite work when you send date values to the JDBC driver and when you get date values from the JDBC driver.
The H2 JDBC driver PrepStmt.setDate(int, Date) uses ValueDate.get(Date), which calls DateTimeUtils.dateValueFromDate(long) which does a time zone conversion.
Using this SQLite JDBC driver, PrepStmt.setDate(int, Date) calls PrepStmt.setObject(int, Object) and does not do any time zone conversion.
The H2 JDBC driver JdbcResultSet.getDate(int) returns get(columnIndex).getDate(). get(int) returns an H2 Value for the specified column. Since the column type is DATE, H2 uses ValueDate. ValueDate.getDate() calls DateTimeUtils.convertDateValueToDate(long), which ultimately creates a java.sql.Date after a time zone conversion.
Using this SQLite JDBC driver, the RS.getDate(int) code is much simpler; it just returns a java.sql.Date using the long date value stored in the database.
So we see that the H2 JDBC driver is being smart about handling time zone conversions with dates while the SQLite JDBC driver is not (not to say this decision isn't smart, it might suit SQLite design decisions well). If you chase down the source for the other RDBMS JDBC drivers you mention, you will probably find that most are approaching date and time zone in a similar fashion as how H2 does.
Though the JDBC specifications do not detail time zone handling, it makes good sense that RDBMS and JDBC implementation designers took time zone into consideration and will handle it properly; especially if they want their products to be marketable in the global arena. These designers are pretty darn smart and I am not surprised that most of them get this right, even in the absence of a concrete specification.
I found this Microsoft SQL Server blog, Using time zone data in SQL Server 2008, which explains how time zone complicates things:
timezones are a complex area and each application will need to address how you are going to handle time zone data to make programs more user friendly.
Unfortunately, there is no current international standard authority for timezone names and values. Each system needs to use a system of their own choosing, and until there is an international standard, it is not feasible to try to have SQL Server provide one, and would ultimately cause more problems than it would solve.

It is the JDBC driver that does the conversion. It needs to convert the Date object to a format acceptable to the database/wire format, and when that format doesn't include a time zone, the tendency is to default to the local machine's time zone setting when interpreting the date. So the most likely scenario, given the list of drivers you specify, is this: you set the date to GMT 1970-01-01 00:00:00, but the interpreted date when you set it on the statement was CET 1970-01-01 01:00:00. Since a DATE is only the date portion, 1970-01-01 (without a time zone) is sent to the server and echoed back to you. When the driver gets the date back and you access it as a date, it sees 1970-01-01 and interprets that again with the local time zone, i.e. CET 1970-01-01 00:00:00, or GMT 1969-12-31 23:00:00. Hence, you have "lost" an hour compared to the original date.
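To make that arithmetic concrete, here is a minimal sketch of the round trip without any database involved (my own illustration of the mechanism, not the code of any particular driver), assuming the JVM default time zone is Europe/Zurich:
TimeZone.setDefault(TimeZone.getTimeZone("Europe/Zurich")); // simulate the asker's JVM

// The bound value: GMT 1970-01-01 00:00:00, which is CET 1970-01-01 01:00:00.
Calendar local = Calendar.getInstance(); // default zone = CET
local.setTimeInMillis(0);

// The driver keeps only the local calendar date ("1970-01-01") ...
local.set(Calendar.HOUR_OF_DAY, 0);
local.set(Calendar.MINUTE, 0);
local.set(Calendar.SECOND, 0);
local.set(Calendar.MILLISECOND, 0);

// ... and on retrieval that date is re-interpreted as local midnight.
System.out.println(local.getTimeInMillis()); // -3600000 = CET 1970-01-01 00:00:00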

Both java.util.Date and the Oracle/MySQL date objects are simply representations of a point in time, regardless of location. This means they are very likely stored internally as the number of milli/nanoseconds since the "epoch", GMT 1970-01-01 00:00:00.
When you read from the resultset, your call to "rs.getDate()" tells the resultset to take the internal data containing the point in time, and convert it to a java Date object. This date object is created on your local machine, so Java will choose your local timezone, which is CET for Zurich.
The difference you are seeing is a difference in representation, not a difference in time.
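To see that it really is only the representation that differs, here is a small sketch (my own illustration): format the very same millisecond value in two time zones.
Date instant = new Date(0); // one single point in time

SimpleDateFormat inGmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
inGmt.setTimeZone(TimeZone.getTimeZone("GMT"));

SimpleDateFormat inZurich = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
inZurich.setTimeZone(TimeZone.getTimeZone("Europe/Zurich"));

System.out.println(inGmt.format(instant));    // 1970-01-01 00:00:00
System.out.println(inZurich.format(instant)); // 1970-01-01 01:00:00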

The class java.sql.Date corresponds to SQL DATE, which does not store time or timezone information. The way this is accomplished is by 'normalizing' the date, as the javadoc puts it:
To conform with the definition of SQL DATE, the millisecond values wrapped by a java.sql.Date instance must be 'normalized' by setting the hours, minutes, seconds, and milliseconds to zero in the particular time zone with which the instance is associated.
This means that when you work in UTC+1 and ask the database for a DATE, a compliant implementation does exactly what you've observed: it returns a java.sql.Date with a millisecond value that corresponds to the date in question at 00:00:00 UTC+1, independently of how the data got into the database in the first place.
Database drivers may allow changing this behaviour through options if it's not what you want.
On the other hand, when you pass a java.sql.Date to the database, the driver will use the default time zone to separate the date and time components from the millisecond value. If you use 0 and you're in UTC+X, the date will be 1970-01-01 for X>=0 and 1969-12-31 for X<0.
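A quick sketch of that X>=0 / X<0 split (my own illustration): render millisecond 0 as a calendar date in a positive and in a negative offset.
SimpleDateFormat asDate = new SimpleDateFormat("yyyy-MM-dd");

asDate.setTimeZone(TimeZone.getTimeZone("GMT+1")); // e.g. CET in winter
System.out.println(asDate.format(new Date(0)));    // 1970-01-01

asDate.setTimeZone(TimeZone.getTimeZone("GMT-5")); // e.g. US Eastern in winter
System.out.println(asDate.format(new Date(0)));    // 1969-12-31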
Sidenote: It's odd to see that the documentation for the Date(long) constructor differs from the implementation. The javadoc says this:
If the given milliseconds value contains time information, the driver will set the time components to the time in the default time zone (the time zone of the Java virtual machine running the application) that corresponds to zero GMT.
However what is actually implemented in OpenJDK is this:
public Date(long date) {
// If the millisecond date value contains time info, mask it out.
super(date);
}
Apparently this "masking out" is not implemented. Just as well because the specified behaviour is not well specified, e.g. should 1970-01-01 00:00:00 GMT-6 = 1970-01-01 06:00:00 GMT be mapped to 1970-01-01 00:00:00 GMT = 1969-12-31 18:00:00 GMT-6, or to 1970-01-01 18:00:00 GMT-6?

JDBC 4.2 and java.time
PreparedStatement stmt = connection.prepareStatement(sql);
stmt.setObject(1, LocalDate.EPOCH); // The epoch year LocalDate, '1970-01-01'.
ResultSet rs = stmt.executeQuery();
rs.next();
// It doesn't matter where you live or what your JVM's time zone setting is,
// since a LocalDate doesn't have or use a time zone
System.out.println(rs.getObject(1, LocalDate.class));
I didn’t test, but unless there’s a bug in your JDBC driver, the output in all time zones is:
1970-01-01
LocalDate is a date without time of day and without time zone. So there is no millisecond value to get and no time of day to be confused about. LocalDate is part of java.time, the modern Java date and time API. JDBC 4.2 specifies that you can transfer java.time objects including LocalDate to and from your SQL database through the methods setObject and getObject I use in the snippet (so not setDate nor getDate).
I think that virtually all of us are using JDBC 4.2 drivers by now.
What went wrong in your code?
java.sql.Date is a hack trying (not very successfully) to disguise a java.util.Date as a date without time of day. I recommend that you don’t use that class.
From the documentation:
To conform with the definition of SQL DATE, the millisecond values
wrapped by a java.sql.Date instance must be 'normalized' by setting
the hours, minutes, seconds, and milliseconds to zero in the
particular time zone with which the instance is associated.
“Associated” may be vague. What this means, I believe, is that the millisecond value must denote the start of day in your JVM's default time zone (Europe/Zurich in your case, I suppose). So when you create new java.sql.Date(0), equal to GMT 1970-01-01 00:00:00, you are really creating an incorrect Date object, since this time is 01:00:00 in your time zone. JDBC ignores the time of day (we might have expected an error message, but don't get any). So SQL is getting and returning a date of 1970-01-01. JDBC correctly translates this back to a Date containing CET 1970-01-01 00:00:00, that is, a millisecond value of -3600000. Yes, it's confusing. As I said, don't use that class.
If you had been at a negative GMT offset (for example most time zones in the Americas), new java.sql.Date(0) would have been interpreted as 1969-12-31, so this would have been the date you had passed to and got back from SQL.
To make matters worse, think what happens when the JVM’s time zone setting is changed. Any part of your program and any other program running in the same JVM may do this at any time. This typically causes all java.sql.Date objects created before the change to become invalid and may sometimes shift their date by one day (in rare cases by two days).
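A minimal demonstration of that failure mode (my own sketch, assuming the JVM starts out in Europe/Zurich):
TimeZone.setDefault(TimeZone.getTimeZone("Europe/Zurich"));
java.sql.Date date = java.sql.Date.valueOf("1970-01-01"); // millisecond value -3600000

System.out.println(date); // 1970-01-01

// Some other code in the same JVM changes the default time zone ...
TimeZone.setDefault(TimeZone.getTimeZone("America/Los_Angeles"));

System.out.println(date); // 1969-12-31 -- same object, but the date has shifted by a day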
Links
Oracle tutorial: Date Time explaining how to use java.time.
JSR-000221 JDBC™ API Specification 4.2 Maintenance Release 2
Documentation of java.sql.Date

There is an overload of getDate that accepts a Calendar; could using this solve your problem? Alternatively, you could use a Calendar object to convert the information.
That is, assuming that retrieval is the problem.
Calendar gmt = Calendar.getInstance(TimeZone.getTimeZone("GMT"));
PreparedStatement stmt = connection.prepareStatement(sql);
stmt.setDate(1, new Date(0)); // I assume this is always GMT
ResultSet rs = stmt.executeQuery();
rs.next();
//This will output 0 as expected
System.out.println(rs.getDate(1, gmt).getTime());
Alternatively, assuming that storage is the problem.
Calendar gmt = Calendar.getInstance(TimeZone.getTimeZone("GMT"));
PreparedStatement stmt = connection.prepareStatement(sql);
gmt.setTimeInMillis(0);
stmt.setDate(1, new java.sql.Date(gmt.getTimeInMillis()), gmt); // the setDate(int, Date, Calendar) overload tells the driver to build the DATE in GMT
ResultSet rs = stmt.executeQuery();
rs.next();
//This will output 0 as expected
System.out.println(rs.getDate(1).getTime());
I don't have a test environment for this, so you will have to find out which is the correct assumption. Also note that Calendar.getTime() returns a plain java.util.Date, not a java.sql.Date, which is why the storage sketch above builds a java.sql.Date from the calendar's millisecond value.

Related

How can I get the UTC-converted Java timestamp of current local time?

Could somebody please help with getting a UTC-converted Java timestamp of the current local time?
The main goal is to get the current date and time, convert it into a UTC Timestamp, and then store it in the Ignite cache as a Timestamp yyyy-MM-dd hh:mm:ss[.nnnnnnnnn].
My attempt was Timestamp.from(Instant.now()). However, it still considers my local timezone +03:00. I am getting '2020-02-20 10:57:56' as a result instead of the desired '2020-02-20 07:57:56'.
How can I get a UTC-converted Timestamp?
You can do it like this:
LocalDateTime localDateTime = Instant.now().atOffset(ZoneOffset.UTC).toLocalDateTime();
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"); // HH = 24-hour clock
System.out.println(localDateTime.format(formatter));
Don’t use Timestamp
You most probably don’t need a Timestamp. Which is good because the Timestamp class is poorly designed, indeed a true hack on top of the already poorly designed Date class. Both classes are also long outdated. Instead nearly 6 years ago we got java.time, the modern Java date and time API. Since JDBC 4.2 this works with your JDBC driver too, and also with your modern JPA implementation.
Use OffsetDateTime
For a timestamp the recommended datatype in your database is timestamp with time zone. In this case in Java use an OffsetDateTime with an offset of zero (that is, UTC). For example:
OffsetDateTime now = OffsetDateTime.now(ZoneOffset.UTC);
System.out.println(now);
PreparedStatement statement = yourDatabaseConnection
.prepareStatement("insert into your_table (tswtz) values (?);");
statement.setObject(1, now);
int rowsInserted = statement.executeUpdate();
Example output from the System.out.println() just now:
2020-02-22T13:04:06.320Z
Or use LocalDateTime if your database timestamp is without time zone
From your question I get the impression that the datatype in your database is timestamp without time zone. It’s only the second best option, but you can pass a LocalDateTime to it.
LocalDateTime now = LocalDateTime.now(ZoneOffset.UTC);
The rest is the same as before. Example output:
2020-02-22T13:05:08.776
If you do need an old-fashioned java.sql.Timestamp
You asked for a Timestamp in UTC. A Timestamp is always in UTC. More precisely, it’s a point in time independent of time zone, so converting it into a different time zone does not make sense. Internally it’s implemented as a count of milliseconds and nanoseconds since the epoch. The epoch is defined as the first moment of 1970 in UTC.
The Timestamp class is a confusing class, though. One thing that might have confused you is that when you print it, you implicitly call its toString method. The toString method uses the default time zone of the JVM for rendering the string, so it prints the time in your local time zone. Confusing. If your datatype in SQL is timestamp without time zone, your JDBC driver most probably interprets the Timestamp in your time zone for the conversion into an SQL timestamp, which in your case is incorrect since your database uses UTC (a recommended practice). I can think of three possible solutions:
1) Some database engines allow you to set a time zone on the session (see the sketch after this list). I haven't got any experience with it myself, it's something I have read; but it may force the correct conversion from your Java Timestamp to your SQL timestamp in UTC to be performed.
2) You may make an incorrect conversion in Java to compensate for the opposite incorrect conversion being performed between Java and SQL. It's a hack, not something that I would want to have in my code. I present it as a last resort.
LocalDateTime now = LocalDateTime.now(ZoneOffset.UTC);
Timestamp ts = Timestamp.valueOf(now);
System.out.println(ts);
2020-02-22 13:05:08.776
You notice that it only appears to agree with the UTC time above. It's the same result you get from the answer by Vipin Sharma, except that (1) my code is simpler and (2) you get higher precision: the fraction of a second is included.
3) Have your database generate the current timestamp in UTC instead of generating it in Java.
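Going back to option 1), here is a hedged sketch of what setting the session time zone over plain JDBC could look like. The SQL shown is MySQL syntax and is my assumption, not something from the original answer; PostgreSQL, for example, uses SET TIME ZONE 'UTC', and some databases have no such statement at all.
// Sketch only: the statement text is database-specific (MySQL shown here).
java.sql.Statement s = yourDatabaseConnection.createStatement();
s.execute("SET time_zone = '+00:00'"); // ask the server to use UTC for this session
s.close();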
Links
Oracle tutorial: Date Time explaining how to use java.time.
Related question: Java - Convert java.time.Instant to java.sql.Timestamp without Zone offset
Despite what the Ignite docs say, you can pass in a 24-hour time.
The docs say yyyy-MM-dd hh:mm:ss[.nnnnnnnnn], so you may be tempted to use this pattern to format your dates in your code, but that will lead to times after midday being wrong. Instead, format your dates with yyyy-MM-dd HH:mm:ss[.nnnnnnnnn].
Notice the upper-case HH. If you're using ZonedDateTime or Joda's DateTime, calling now(UTC) and then toString("yyyy-MM-dd HH:mm:ss") will store the correct time in UTC.
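A quick illustration of the difference (my own sketch):
LocalDateTime afternoon = LocalDateTime.of(2020, 2, 20, 16, 30, 0);

// hh is the 12-hour clock hour, HH the 24-hour clock hour
System.out.println(afternoon.format(DateTimeFormatter.ofPattern("yyyy-MM-dd hh:mm:ss"))); // 2020-02-20 04:30:00
System.out.println(afternoon.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"))); // 2020-02-20 16:30:00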

How to gather timezone of operating system from Oracle database in string format? (Migrate/convert date to ts with tz)

I want to update a database table which uses the date type to the timestamp with time zone type, in such a way that the old dates get correct timezone information.
A plain cast is no good for me, because if the time zone is for example UTC+2 hours (UTC+1, plus 1 hour for daylight saving) and I try to cast dates to timestamp with time zone, all the dates in the table get the same +2 hour timezone offset, regardless of whether they are summer time or winter time dates.
I can already write an SQL query which can determine whether a date is in daylight saving time or not, IF I know the current time zone in string format, e.g. 'Europe/Berlin'. The problem is that dbtimezone and sessiontimezone can be stored in other formats, too (+02:00, CET, etc.). I cannot easily set the current sessiontimezone in a static way, because there are customers in several places on the globe with their own databases, all using a common update script.
The express method for timestamp cannot help either, because it cannot map the offset to named time zones.
I've seen a solution which uses a Java stored procedure to get the OS's timezone instead of Oracle's. Unfortunately we use Oracle 12c, which contains an older JRE (I think it's version 1.6). So, although Java 1.8 handles timezones and daylight saving well (it uses an updated tzmapping table), it does not work for me. I tried it, and if I run a test from NetBeans it gives me back the right time zone ID (in Europe/Berlin format), but even though it is accepted by Oracle SQL Developer and SQL*Plus (which we use for running update scripts), it displays only +02:00.
I've tried to use Joda-Time (recompiled for Java 1.6 in order to be accepted by SQL*Plus). The latest Joda-Time uses its own mapping table, in theory. I read here on StackOverflow that if it cannot gather the time zone from the user.timezone variable, then it falls back to java.util, which is not good, as I mentioned. And if that does not succeed, it uses UTC. But it's not clear to me why it cannot get the timezone from the user.timezone system variable. Is it a permission problem maybe?
Or how could I possibly solve this issue? Thank you!
If the data is already in an Oracle SQL table, and you must convert to a timestamp with time zone (for example, in a new column you created in the same table), you do not need to go explicitly to the OS, or to use Java or any other thing, other than the Oracle database itself.
It is not clear from your question if you must assume the "date" was meant to be in the server time zone (you mention "the database" which normally means the server) or the client time zone (you mention "session" which means the client). Either way:
update <your_table>
set <timestamp_with_time_zone_col> =
from_tz(cast(<date_col> as timestamp), dbtimezone)
;
or use sessiontimezone as the second argument, if that's what you need.
This assumes that the database (and/or the session) time zone is set up properly in the database and the client, respectively. If it isn't / they aren't, that needs to be fixed first. Oracle is perfectly capable of handling daylight saving time, if the parameters are set correctly in the first place. (And if they aren't, it's not clear why you would try to get your operation to be "more correct" than the database supports in the first place.)
Example: in the WITH clause below, I simulate a table with a column dt in data type date. Then I convert that to be a timestamp with time zone, in my session's (client) time zone.
with
my_table ( dt ) as (
select to_date('2018-06-20 14:30:00', 'yyyy-mm-dd hh24:mi:ss') from dual
)
select dt,
from_tz(cast(dt as timestamp), sessiontimezone) as ts_with_tz
from my_table
;
DT TS_WITH_TZ
------------------- -------------------------------------------------
2018-06-20 14:30:00 2018-06-20 14:30:00.000000000 AMERICA/LOS_ANGELES
The question in your title
How to gather timezone of operating system from Oracle database in string format?
is easy to answer. Run this statement:
SELECT TO_CHAR(SYSTIMESTAMP, 'tzr') FROM dual;
But I assume you have a different problem, though I don't fully understand it.
When you have a column of TIMESTAMP WITH TIME ZONE, then the time zone information is available. Time zones like +02:00 do not have any daylight saving; they are always 2 hours ahead of UTC, no matter whether it is summer or winter. Time zones like Europe/Berlin or CET apply daylight saving time.
If you have a time such as 2018-06-22 10:00:00+02:00, then you simply don't know whether this means Europe/Berlin with daylight saving time in effect or Africa/Cairo, which is always +02:00 hours ahead of UTC; there is no way to retrieve that information!
If you have data in a column of DATE (or TIMESTAMP), then you don't have any time zone information at all, so you cannot convert such values to TIMESTAMP WITH TIME ZONE without further information.
Storing times in the time zone of the operating system is rather useless. Either store them in UTC or use the data type TIMESTAMP WITH LOCAL TIME ZONE. Data in TIMESTAMP WITH LOCAL TIME ZONE is stored in DBTIMEZONE (which is recommended to be UTC, but that is not actually relevant for you) and always and only shown in the current user's SESSIONTIMEZONE.
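As a side note, once you do have a TIMESTAMP WITH TIME ZONE column, a JDBC 4.2 driver lets you read it straight into java.time types instead of java.sql.Timestamp. A minimal sketch (the connection, table and column names here are my assumptions, not from the answer):
java.sql.Statement stmt = connection.createStatement();
java.sql.ResultSet rs = stmt.executeQuery("select ts_with_tz from my_table");
while (rs.next()) {
    // JDBC 4.2 maps TIMESTAMP WITH TIME ZONE to java.time.OffsetDateTime
    java.time.OffsetDateTime ts = rs.getObject("ts_with_tz", java.time.OffsetDateTime.class);
    System.out.println(ts);
}
rs.close();
stmt.close();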

Oracle DB Timestamp to Java Timestamp: Confusion

I've been struggling for a couple of hours to understand what's going on with the timestamps in my code.
Both the Oracle DB and the Java application are in PDT.
Select from DB:
select id, time_stamp from some_Table where id = '3de392d69c69434eb907f1c0d2802bf0';
3de392d69c69434eb907f1c0d2802bf0 09-DEC-2014 12.45.41.354000000 PM
select id, time_stamp at time zone 'UTC' from some_Table where id = '3de392d69c69434eb907f1c0d2802bf0';
3de392d69c69434eb907f1c0d2802bf0 09-DEC-2014 12.45.41.354000000 PM
The field in the Oracle database is TimeStamp, hence no timezone information is stored.
Timestamp dbTimeStamp = dbRecord.getLastLoginTime();
System.out.println(dbTimeStamp.toString()); // 2014-12-09 12:16:50.365
System.out.println(dbTimeStamp.getTime()); // 1418156210365 --> Tue Dec 09 2014 20:16:50 UTC?
According to the documentation, getTime()
Returns the number of milliseconds since January 1, 1970, 00:00:00 GMT
represented by this Timestamp object.
Why are 8 hours (PDT - UTC) of extra time added to the response of getTime() ?
TimeStamp.toString() internally uses Date.getHours() whose javadoc states:
Returns the hour represented by this Date object. The
returned value is a number (0 through 23)
representing the hour within the day that contains or begins
with the instant in time represented by this Date
object, as interpreted in the local time zone.
So toString() is using your local time zone, whereas getTime() doesn't.
These two are consistent with each other. The getTime() method gives you the absolute millisecond value, which you chose to interpret in UTC. The toString() method gives you that same millisecond value interpreted in the associated timezone. So it is not getTime() which is adding the time, but toString() which is subtracting it. This is not really documented, but that is how it behaves.
The most important takeaway should be not to rely on Timestamp.toString(), because it is misleading. The whole timezone mechanism within Date (and Timestamp is a subclass) was deprecated a long time ago. Instead, use just the getTime() value and have it formatted by other APIs, such as the Java 8 Date/Time API.
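For example (my own sketch): take the millisecond value and render it explicitly in UTC with java.time, rather than trusting Timestamp.toString():
Timestamp dbTimeStamp = dbRecord.getLastLoginTime(); // as in the question

String inUtc = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
        .withZone(ZoneOffset.UTC)
        .format(Instant.ofEpochMilli(dbTimeStamp.getTime()));

System.out.println(inUtc); // the same instant, rendered in UTC instead of the JVM zone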
Update
Apparently, the toString() output is actually the correct one, which for me is just one more small addition to the thick catalog of all things wrong with Java's date/time handling. You probably receive the timestamp from the database as a formatted string, not as a millisecond value. JDBC then parses that into a millisecond value according to the timezone associated with the Timestamp instance, so that the output of toString() matches what was returned by the database, with the actual millisecond value being secondary.
Thanks to the answers above and the references on SO. This answer finally helped me understand where I was going wrong in my understanding of timestamps.
Quoting from the linked answer:
Note: Timestamp.valueOf("2010-10-23 12:05:16"); means "create a timestamp with the given time in the default timezone".
A Timestamp represents an instant of time. By default, that instant is interpreted in the current time zone.
The timestamps being written to the DB were UTC instants, i.e. the current UTC time was being written. Hence, no matter where the application was deployed, the value being written to the DB was the same timestamp.
However, when reading, the Timestamp generated assumes the default time zone of the deployment JVM. Hence, the value read was treated as an instant in the PST time zone, with the actual UTC value being 8 hours ahead of the PST time. Hence the difference.
Timestamp.getTime() returns milliseconds since the epoch (UTC).
Timestamp.toString() returns the representation of the time in the current time zone. Thanks @marko-topolnik.
To take an example,
Value in the DB : 2014-12-09 12:16:50.365
When this value is read in a TimeStamp, the instant is 2014-12-09 12:16:50.365 in PST
Convert this to UTC, it would be 2014-12-09 20:16:50
Hence, the solution was to add the TimeZone offset to the values read from the database to get the instants as UTC TimeStamps.
The key here was "TimeStamp is a time instant without TimeZone information. The timestamp is assumed to be relative to the default system TimeZone." - It took me a really long while to comprehend this.
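A hedged sketch of the adjustment described above (my own illustration; it relies on the JVM default time zone and on the driver having applied the opposite shift, so treat it as a workaround rather than a proper fix):
Timestamp read = dbRecord.getLastLoginTime();                  // driver interpreted the DB value in PST
long offset = TimeZone.getDefault().getOffset(read.getTime()); // -8 hours (in millis) for PST
Timestamp utcInstant = new Timestamp(read.getTime() + offset); // shift back to the intended UTC instant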

Why is my timestamp shifted in timezone?

I have this date in a PostgreSQL 9.1 database in a timestamp without time zone column:
2012-11-17 13:00:00
It's meant to be in UTC, and it is, which I've verified by selecting it as a UNIX timestamp (EXTRACT epoch).
int epoch = 1353157200; // from database
Date date = new Date((long)epoch * 1000);
System.out.println(date.toGMTString()); // output 17 Nov 2012 13:00:00 GMT
However, when I read this date using JPA/Hibernate, things go wrong. This is my mapping:
@Column(nullable=true,updatable=true,name="startDate")
@Temporal(TemporalType.TIMESTAMP)
private Date start;
The Date I get, however, is:
17 Nov 2012 12:00:00 GMT
Why is this happening, and more importantly, how can I stop it?
Note that I just want to store points in time, universally (as java.util.Date does), and I couldn't care less about timezones, except that I obviously don't want them to corrupt my data.
As you've probably deduced, the client application which connects to the database is in UTC+1 (Netherlands).
Also, the choice for the column type timestamp without time zone was made by Hibernate when it automatically generated the schema. Should that maybe be timestamp with time zone instead?
If a database does not provide timezone information, then the JDBC driver should treat it as if it is in the local timezone of the JVM (see PreparedStatement.setDate(int, Date)):
Sets the designated parameter to the given java.sql.Date value using the default time zone of the virtual machine that is running the application.
The Javadoc and the JDBC specification do not explicitly say anything about ResultSet etc, but to be consistent most drivers will also apply that rule to dates retrieved from the database. If you want explicit control over the timezone used, you will need to use the various set/getDate/Time/Timestamp methods that also accept a Calendar object in the right timezone.
Some drivers also provide a connection property allowing you to specify the timezone to use when converting to/from the database.
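For illustration, a minimal plain-JDBC sketch of the Calendar overload mentioned above (my own example; resultSet is assumed, and the startDate column comes from the mapping in the question), reading the stored value as UTC rather than in the JVM's zone:
Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));

// Interpret the "timestamp without time zone" value as UTC instead of the JVM default zone:
Timestamp start = resultSet.getTimestamp("startDate", utc);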
I've found that changing the column type to timestamp with time zone fixed the problem. I'll have to convert all my other timestamp columns to that as well then.
From this I conclude that Hibernate does not read timestamp without time zone columns as in UTC, but as in the local timezone. If there is a way to make it interpret them as UTC, please let me know.
There are several ways to solve the problem:
1) The easiest way is to set the time zone in the JDBC connect string - as long as the database supports this.
For MySQL you can use useGmtMillisForDatetimes=true to force the use of UTC in the database. As far as I know Postgres does not support such an option, but I might be wrong because I don't use Postgres.
2) Set the default time zone in your Java client program with
TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
Disadvantage: this also changes the time zone in places where you don't want it changed, for example if your program has a UI part.
3) Use a mapping with a special getter only for Hibernate:
In your Mapping file
<property name="myTimeWithTzConversion" type="timestamp" access="property">
<column name="..." />
</property>
and in your program you provide get/setMyTimeWithTzConversion(), used only by Hibernate to access the member variable Timestamp myTime; in this getter/setter you do the timezone conversion (a sketch follows below).
We finally went with 3) (which is a bit more programming work), because we didn't have to change the existing database, our database was in UTC+1 (which ruled out the JDBC connect string solution), and it didn't interfere with the existing UI.
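A hedged sketch of what such a Hibernate-only accessor pair could look like (the original answer doesn't show this code; the names follow the mapping above, and the shift direction assumes the column holds UTC wall-clock values while the JVM runs in local time):
// The member variable holds the value we actually want: a UTC instant.
private Timestamp myTime;

// Used only by Hibernate (access="property"): shift between the JVM default
// time zone and UTC so the "timestamp without time zone" column effectively stores UTC.
public Timestamp getMyTimeWithTzConversion() {
    long millis = myTime.getTime();
    return new Timestamp(millis - TimeZone.getDefault().getOffset(millis));
}

public void setMyTimeWithTzConversion(Timestamp dbValue) {
    long millis = dbValue.getTime();
    this.myTime = new Timestamp(millis + TimeZone.getDefault().getOffset(millis));
}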

How to store a java.util.Date into a MySQL timestamp field in the UTC/GMT timezone?

I used a new Date() object to fill a field in a MySQL DB, but the actual value stored in that field is in my local timezone.
How can I configure MySQL to store it in the UTC/GMT timezone?
I think configuring the connection string will help, but I don't know how. There are many properties in the connection string like useTimezone, serverTimezone, useGmtMillisForDatetimes, useLegacyDatetimeCode, ...
The short answer is:
add "default-time-zone=utc" to my.cnf
in your code, always "think" in UTC, except when displaying dates for your users
when getting/setting dates or timestamps with JDBC, always use the Calendar parameter, set to UTC:
resultset.getTimestamp("my_date", Calendar.getInstance(TimeZone.getTimeZone("UTC")));
either synchronize your servers with NTP, or rely only on the database server to tell you what time it is.
The long answer is this:
When dealing with dates and timezones in any database and with any client code, I usually recommend the following policy:
Configure your database to use UTC timezone, instead of using the server's local timezone (unless it is UTC of course).
How to do so depends on your database server. Instructions for MySQL can be found here: http://dev.mysql.com/doc/refman/5.0/en/time-zone-support.html. Basically you need to write this in my.cnf: default-time-zone=utc
This way you can host your database servers anywhere, change your hosting location easily, and more generally manipulate dates on your servers without any ambiguity.
If you really prefer to use a local timezone, I recommend at least turning off Daylight Saving Time, because having ambiguous dates in your database can be a real nightmare.
For example, if you are building a telephony service and you are using Daylight Saving Time on your database server then you are asking for trouble: there will be no way to tell whether a customer who called from "2008-10-26 02:30:00" to "2008-10-26 02:35:00" actually called for 5 minutes or for 1 hour and 5 minutes (supposing Daylight Saving occurred on Oct. 26th at 3am)!
Inside your application code, always use UTC dates, except when displaying dates to your users.
In Java, when reading from the database, always use:
Timestamp myDate = resultSet.getTimestamp("my_date", Calendar.getInstance(TimeZone.getTimeZone("UTC")));
If you do not do this, the timestamp will be assumed to be in your local TimeZone, instead of UTC.
Synchronize your servers or only rely on the database server's time
If you have your Web server on one server (or more) and your database server on some other server, then I strongly recommend you synchronize their clocks with NTP.
OR, only rely on one server to tell you what time it is. Usually, the database server is the best one to ask for time. In other words, avoid code such as this:
preparedStatement = connection.prepareStatement("UPDATE my_table SET my_time = ? WHERE [...]");
java.util.Date now = new java.util.Date(); // local time! :-(
preparedStatement.setTimestamp(1, new Timestamp(now.getTime()));
int result = preparedStatement.execute();
Instead, rely on the database server's time:
preparedStatement = connection.prepareStatement("UPDATE my_table SET my_time = NOW() WHERE [...]");
int result = preparedStatement.execute();
Hope this helps! :-)
MiniQuark gave some good answers for databases in general, but there are some MySql specific quirks to consider...
Configure your database to use UTC timezone
That actually won't be enough to fix the problem. If you pass a java.util.Date to MySql as the OP was asking, the MySql driver will change the value to make it look like the same local time in the database's time zone.
Example: your database is configured to UTC. Your application is EST. You pass a java.util.Date object for 5:00 (EST). The database will convert it to 5:00 UTC and store it. Awesome.
You'd have to adjust the time before you pass the data to "undo" this automatic adjustment. Something like...
long originalTime = originalDate.getTime();
Date newDate = new Date(originalTime - TimeZone.getDefault().getOffset(originalTime));
ps.setDate(1, newDate);
Reading the data back out requires a similar conversion:
long dbTime = rs.getTimestamp(1).getTime();
Date originalDate = new Date(dbTime + TimeZone.getDefault().getOffset(dbTime));
Here's another fun quirk...
In Java, when reading from the database, always use: Timestamp myDate
= resultSet.getTimestamp("my_date", Calendar.getInstance(TimeZone.getTimeZone("UTC")));
MySql actually ignores that Calendar parameter. This returns the same value regardless of what calendar you pass it.
I had the same problem, and it took me nearly a day to track down. I'm storing DateTime columns in MySQL. The RDS instance, running in Amazon's Cloud, is correctly set to have a UTC timestamp by default.
The Buggy Code is:
String startTime = "2013-02-01T04:00:00.000Z";
DateTime dt = ISODateTimeFormat.dateTimeParser().parseDateTime(startTime);
PreparedStatement stmt = connection.prepareStatement(insertStatementTemplate);
Timestamp ts = new Timestamp(dt.getMillis());
stmt.setTimestamp(1, ts, Calendar.getInstance(TimeZone.getTimeZone("UTC")));
In the code above, the ".setTimestamp" call would NOT take the date as a UTC date!
After hours of investigating, this turns out to be a known bug in the Java/MySQL driver. The call to setTimestamp literally just ignores the Calendar parameter.
To fix this, add "useLegacyDatetimeCode=false" to your database URI.
private final static String DatabaseName =
"jdbc:mysql://foo/?useLegacyDatetimeCode=false";
As soon as I did that, the date stored in the MySQL database was in proper UTC form, rather than in the timezone of my local workstation.
Well, if we're talking about using PreparedStatements, there's a form of setDate where you can pass in a Calendar set to a particular time zone.
For instance, if you have a PreparedStatement named stmt, a Date named date, and assuming the date is the second parameter:
stmt.setDate(2, date, Calendar.getInstance(TimeZone.getTimeZone("GMT")));
The best part is, even if GMT is an invalid time zone name, it still returns the GMT time zone because of the default behavior of getTimeZone.
A Java Date is timezone-agnostic. It ALWAYS represents a date in GMT (UTC) as milliseconds from the epoch.
The ONLY time that a timezone is relevant is when you are emitting a date as a string or parsing a date string into a Date object.
