We are using CXF in combination with Jackson (JacksonJaxbJsonProvider) to marshal domain objects into JSON. Everything is working well, with the exception that I cannot get dates to format the way I want them to. In short, what I want is to output dates as seconds since the epoch (also called Unix time). This is partly doable with SerializationConfig.Feature.WRITE_DATES_AS_TIMESTAMPS, but that gives me milliseconds, not seconds. As my dates don't have such high precision (and never will), I am wasting 4 bytes for every timestamp.
To my knowledge, the only way I can control the date format is by using setDateFormat() on ObjectMapper. This method accepts a DateFormat. However, it does not seem like a DateFormat can output seconds since the epoch, only milliseconds.
Are there any other ways of doing this?
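One approach worth exploring is registering a custom serializer for java.util.Date that writes whole seconds. Below is a minimal sketch, assuming the Jackson 2.x databind API (the class name EpochSecondsSerializer and the createMapper helper are made up for illustration; on Jackson 1.x the equivalent classes live under org.codehaus.jackson and registration differs):
import java.io.IOException;
import java.util.Date;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.module.SimpleModule;

// Serializes java.util.Date as whole seconds since the epoch instead of milliseconds.
public class EpochSecondsSerializer extends JsonSerializer<Date> {
    @Override
    public void serialize(Date value, JsonGenerator gen, SerializerProvider serializers)
            throws IOException {
        gen.writeNumber(value.getTime() / 1000L);
    }

    // Build an ObjectMapper with the serializer registered; hand this mapper
    // to the JacksonJaxbJsonProvider instead of letting it create a default one.
    public static ObjectMapper createMapper() {
        SimpleModule module = new SimpleModule();
        module.addSerializer(Date.class, new EpochSecondsSerializer());
        return new ObjectMapper().registerModule(module);
    }
}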
Related
My project uses JavaScript and Java (Android) for the client and Java for the backend.
When I started working on my project, I stored dates as days from the epoch (long) and all was good. I then found out that my project doesn't work well with time zones: suddenly dates were off by +1 or -1 days, depending on the client's location in the world.
After a short investigation, I saw that the foolproof way to avoid this was to store the dates as the String yyyy-MM-ddT00:00, so that JavaScript's new Date(dateStr) creates the date correctly, and all was good. Of course I could store the dates as yyyy-MM-dd and just send them to the client as yyyy-MM-ddT00:00, but that won't solve the question I have.
After that, I was wondering whether the Java (backend) side handles this correctly. I use LocalDate when I want to "play" with dates, and LocalDate.parse doesn't like the yyyy-MM-ddT00:00 format; it works with yyyy-MM-dd instead, so whenever I needed dates, I did LocalDate.parse(dateStr.substring(0,10)). LocalDateTime does work with yyyy-MM-ddT00:00, but I don't need the time part, and it had its own issues, which I don't remember at the moment.
So now I have a lot of String manipulation (inside loops) that actually creates more String objects. One could say it's not that much of a strain and I shouldn't pay attention to it, but I want to make sure I'm not missing something; maybe there's another way (perhaps simple enough that I've missed it) to overcome this.
Thanks
Update: The events are stored from a different source and only the date itself is important so if an event happened on 2020-06-17, this is the date all users should see, no matter where they are.
I'm using new Date(dateStr) in JavaScript. If dateStr is 2020-06-17, the date object uses the client's time zone and the date might be off by ±1 day depending on that time zone. If dateStr is 2020-06-17T00:00, then the date object is created as expected no matter where the client is located.
Assuming the above, which I hope is clearer now: is creating String objects over and over again a memory cost I should consider, or is it something Java handles with no problem and I shouldn't worry about?
My question was closed and I was told to edit it to be more focused. After editing my question, how can I get it re-opened for answers?
As you have discovered, storing dates in terms of days since some epoch only works if everyone who uses your system is using the same time zone. If two different users in different time zones have a different idea about the date on which some event occurred (e.g., the person in New York says that the system crashed on Sunday night, but the person in Hong Kong says it crashed on Monday morning), then you have to store the time zone in which the event occurred in order to show the date of that event accurately.
But if that's the situation you're in, why not just store the time zone along with the date? There's no compelling reason to combine the date and timezone into a string.
When you parse an ISO-formatted timestamp into a LocalDate using only the first 10 characters, be aware that you're losing the time zone information. Implicitly the LocalDate that you get is in the time zone of the original timestamp. So if the original timestamp is New York time, and you take the date part and add 1 day, then you'll get the next day in the New York time zone. But if you then take the date from a second timestamp, you can't compare it to the date you got from the first timestamp, in terms of determining whether it represents the "same day." You can only test for "same day" if both dates are implicitly in the same time zone.
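To illustrate, here is a minimal sketch (the instant, zone IDs, and class name are chosen purely for illustration) of how one and the same moment yields two different LocalDates depending on the zone:
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

public class SameInstantDifferentDates {
    public static void main(String[] args) {
        // One and the same moment in time...
        Instant crash = Instant.parse("2020-06-15T02:30:00Z");

        // ...is still Sunday evening in New York but already Monday morning in Hong Kong.
        LocalDate inNewYork = crash.atZone(ZoneId.of("America/New_York")).toLocalDate();  // 2020-06-14
        LocalDate inHongKong = crash.atZone(ZoneId.of("Asia/Hong_Kong")).toLocalDate();   // 2020-06-15

        // Comparing the two LocalDates for "same day" only makes sense
        // if both were derived using the same zone.
        System.out.println(inNewYork + " vs " + inHongKong);
    }
}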
UPDATE
After reading your additional comments, I realize that what's happening is this. You have a date stored in your database, like 2020-06-15. You send that to the UI as the string '2020-06-15' and then do new Date('2020-06-15') and then you're surprised when you render the date in the UI and get June 14!
This is the transformation that happens:
The string '2020-06-15' gets parsed into a JavaScript Date representing midnight UTC on June 15.
When you render the date, it gets converted into a string using the browser's local time zone, which (if you're in the United States) will give you June 14, because at midnight UTC on June 15 it's still June 14 in all time zones west of Greenwich.
You discovered that if you make the string "2020-06-15T00:00", it works, because now JavaScript uses the browser's local time zone to parse the string. In other words, this string means midnight local time, not UTC, on June 15. So now the sequence is:
'2020-06-15T00:00' gets parsed using the local time zone and becomes, for example, June 15 4:00 AM UTC (if the browser is in US Eastern Daylight Time).
When you render the date, it gets converted back to local time and is rendered as June 15.
The easiest way to avoid all this messiness is just to send the regular date string '2020-06-15' to the UI and render it using Intl.DateTimeFormat, specifying the time zone as UTC:
new Intl.DateTimeFormat('en-US', {timeZone: 'UTC'}).format(d)
Since dates in JavaScript are always UTC, and you're asking DateTimeFormat to output the date in UTC, no date shift occurs.
You could also use the Date methods getUTCFullYear, getUTCMonth, etc. to get the date components and format them however you like.
Once you're no longer sending dates back and forth with "T00:00" appended, you can just use LocalDate on the Java side.
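A minimal sketch of what the Java side then looks like (the date value is just an example):
import java.time.LocalDate;

public class EventDateExample {
    public static void main(String[] args) {
        // The plain ISO date string sent to and from the UI.
        String dateStr = "2020-06-15";

        // LocalDate parses yyyy-MM-dd directly; no substring() needed.
        LocalDate eventDate = LocalDate.parse(dateStr);

        // Date arithmetic stays zone-free.
        LocalDate nextDay = eventDate.plusDays(1);

        // toString() produces the same ISO format for the response.
        System.out.println(eventDate + " -> " + nextDay); // 2020-06-15 -> 2020-06-16
    }
}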
Don't spend even a second worrying about the time required to manipulate strings. Think about the incredible amount of string manipulation that is necessary to build even a simple web page. A few more strings here and there isn't going to make a difference.
I am deserializing some JSON which contains date fields from a web service. Almost all of the time the date comes in like this:
2017-01-18T07:20:00Z
But on very rare occasions the date comes in like this:
2017-01-18T07:20:42.9295582Z
In the second case it seems not to pick up the Z at the end. I'm just going by the logs at the moment, but I was wondering if anyone had any ideas.
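For reference, a minimal sketch (assuming java.time is available to you) showing that Instant.parse accepts both variants out of the box, with or without the fractional seconds:
import java.time.Instant;

public class ParseBothFormats {
    public static void main(String[] args) {
        // The usual form without a fraction of a second...
        Instant whole = Instant.parse("2017-01-18T07:20:00Z");

        // ...and the rare form with a 7-digit fraction; Instant handles up to 9 digits.
        Instant fractional = Instant.parse("2017-01-18T07:20:42.9295582Z");

        System.out.println(whole);      // 2017-01-18T07:20:00Z
        System.out.println(fractional); // 2017-01-18T07:20:42.929558200Z
    }
}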
I was trying to fetch the current time in UTC in Java and came across this post: Getting "unixtime" in Java
I checked that all of these solutions
new Date().getTime()
System.currentTimeMillis()
Instant.now().toEpochMilli()
return the same value. I wanted to know if there is any difference between the three of them, and if so, what?
There is no difference. This is just the result of the evolution of the date API over the years. There is now more than one way to do this.
As far as just getting epoch milliseconds, all three are fine. Things get more complicated as soon as formatting, calendars, timezones, durations and the like become involved.
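A quick sketch showing the three calls side by side; any differences in the printed values come only from the milliseconds that elapse between the calls:
import java.time.Instant;
import java.util.Date;

public class EpochMillisExample {
    public static void main(String[] args) {
        long fromDate = new Date().getTime();            // classic java.util.Date
        long fromSystem = System.currentTimeMillis();    // no object allocation
        long fromInstant = Instant.now().toEpochMilli(); // java.time

        // All three count milliseconds since 1970-01-01T00:00:00Z.
        System.out.println(fromDate + " / " + fromSystem + " / " + fromInstant);
    }
}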
I'm storing messages from an amazon cloud and ordering them by their timestamp in a sorted map.
I am parsing the timestamp from the cloud with the following code:
Date timestamp = new SimpleDateFormat("yyyy-MM-dd'T'hh:mm:ss.SSS'Z'", Locale.ENGLISH).parse(time);
and then I am storing in them in a sorted map with the key being the date.
The issue is that the date only comes down to seconds precision.
I can have several messages sent in 1 second, so I need them to be ordered with millisecond precision. Is there a data structure that allows this?
Well, it will work as long as your source has a higher resolution than 1 second. It looks like it does from the pattern, but you haven't shown us any example input.
Date is just a wrapper around a long holding milliseconds since 1970-01-01, so you already have that. Date.getTime() will return it, with millisecond precision.
Why would you think that Date only has one second precision? Date.compareTo(Date anotherDate) compares on a millisecond level.
So your SortedMap should work fine unless you are doing something strange.
I am not sure if you have done this, but you can create your own comparator and use that.
As a side note, depending on your application's setup you may want to be careful with how you use SimpleDateFormat; it is not thread-safe, among other issues.
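For illustration, a minimal sketch (the timestamps and message values are made up) of a TreeMap keyed by Date, which already sorts at millisecond precision; note that this sketch parses with HH (24-hour clock) and in UTC so the literal 'Z' really means UTC:
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.SortedMap;
import java.util.TimeZone;
import java.util.TreeMap;

public class MessageOrdering {
    public static void main(String[] args) throws ParseException {
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", Locale.ENGLISH);
        format.setTimeZone(TimeZone.getTimeZone("UTC"));

        SortedMap<Date, String> messages = new TreeMap<>();
        messages.put(format.parse("2014-06-14T08:55:56.789Z"), "second message");
        messages.put(format.parse("2014-06-14T08:55:56.123Z"), "first message");

        // Iteration order is by millisecond timestamp: "first message", then "second message".
        messages.values().forEach(System.out::println);
    }
}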
java.time
I am providing the modern answer: use java.time, the modern Java date and time API, for your date and time work. First of all because it is so much nicer to work with than the old date and time classes like Date and (oh, horrors) SimpleDateFormat, which are poorly designed. We’re fortunate that they are long outdated. An added advantage is: Your date-time string is in ISO 8601 format, and the classes of java.time parse this format as their default, that is, without any explicit formatter.
String stringFromCloud = "2014-06-14T08:55:56.789Z";
Instant timestamp = Instant.parse(stringFromCloud);
System.out.println("Parsed timestamp: " + timestamp);
Output:
Parsed timestamp: 2014-06-14T08:55:56.789Z
Now it’s clear to see that the string has been parsed with full millisecond precision (Instant can parse with nanosecond precision, up to 9 decimals on the seconds). Instant objects will work fine as keys for your SortedMap.
Corner case: if the fraction of seconds is 0, it is not printed.
String stringFromCloud = "2014-06-14T08:56:59.000Z";
Parsed timestamp: 2014-06-14T08:56:59Z
You will need to trust that when no fraction is printed, it is because it is 0. The Instant will still work nicely for your purpose, being sorted before instants with fraction .001, .002, etc.
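A minimal sketch of the resulting map (the message values are made up):
import java.time.Instant;
import java.util.SortedMap;
import java.util.TreeMap;

public class InstantKeys {
    public static void main(String[] args) {
        SortedMap<Instant, String> messages = new TreeMap<>();
        messages.put(Instant.parse("2014-06-14T08:56:59.002Z"), "two milliseconds later");
        messages.put(Instant.parse("2014-06-14T08:56:59Z"), "no fraction, sorts first");
        messages.put(Instant.parse("2014-06-14T08:56:59.001Z"), "one millisecond later");

        // Instant implements Comparable, so the TreeMap orders the keys chronologically.
        messages.forEach((k, v) -> System.out.println(k + " -> " + v));
    }
}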
What went wrong in your parsing?
First, you’ve got a problem that is much worse than missing milliseconds: you are parsing into the wrong time zone. The trailing Z in your incoming string is a UTC offset of 0 and needs to be parsed as such. What happened in your code is that SimpleDateFormat used the time zone setting of your JVM instead of UTC, giving rise to an error of up to 14 hours. In most cases your sorting would still be correct, but around the transition from summer time (DST) in your local time zone the time would be ambiguous, and parsing may therefore be incorrect, leading to a wrong sort order.
As Mattias Isegran Bergander says in his answer, parsing of milliseconds should work in your code. The reason why you didn't think so is probably due to just one of the many minor design problems of the old Date class: even though it internally has millisecond precision, its toString method only prints seconds and leaves out the milliseconds.
Links
Oracle tutorial: Date Time explaining how to use java.time.
Wikipedia article: ISO 8601
I have 2 different computers, each with a different time zone.
On one computer I'm printing System.currentTimeMillis(), and then I run the following line on both computers:
System.out.println(new Date(123456)); --> 123456 stands for the number that came from currentTimeMillis on computer #1.
The second print (though hardcoded) results in different output on the two computers.
why is that?
How about some pedantic detail.
java.util.Date is timezone-independent. Says so right in the javadoc.
You want something with respect to a particular timezone? That's java.util.Calendar.
The tricky part? When you print this stuff (with java.text.DateFormat or a subclass), that involves a Calendar (which involves a timezone). See DateFormat.setTimeZone().
It sure looks (haven't checked the implementation) like java.util.Date.toString() goes through a DateFormat. So even our (mostly) timezone-independent class gets messed up w/ timezones.
Want to get that timezone stuff out of our pure zoneless Date objects? There's Date.toGMTString(). Or you can create your own SimpleDateFormat and use setTimeZone() to control which zone is used yourself.
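A minimal sketch of that last option (the zone IDs are just examples):
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class SameMillisTwoZones {
    public static void main(String[] args) {
        Date date = new Date(123456L); // a fixed instant: 123,456 ms after the epoch

        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss z", Locale.ENGLISH);

        format.setTimeZone(TimeZone.getTimeZone("UTC"));
        System.out.println(format.format(date)); // 1970-01-01 00:02:03 UTC

        format.setTimeZone(TimeZone.getTimeZone("America/New_York"));
        System.out.println(format.format(date)); // 1969-12-31 19:02:03 EST
    }
}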
why is that?
Because something like "Oct 4th 2009, 14:20" is meaningless without knowing the timezone it refers to - which you can most likely see right now, because that's my time as I write this, and it probably differs by several hours from your time even though it's the same moment in time.
Computer timestamps are usually measured in UTC (basically the timezone of Greenwich, England), and the time zone has to be taken into account when formatting them into something human readable.
Because that milliseconds number is the number of milliseconds past 1/1/1970 UTC. If you then translate to a different timezone, the rendered time will be different.
e.g. a given millisecond value may correspond to midday at Greenwich (UTC), but that same instant will be a different time of day in New York.
To confirm this, use SimpleDateFormat with a time zone output, and/or change the timezone on the second computer to match the first.
The javadoc explains this well:
System.currentTimeMillis()
Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.
See https://docs.oracle.com/javase/7/docs/api/java/util/Date.html#toString().
Yes, it's using time zones. It should also print them out (the three-character zone abbreviation before the year).