I always use unix timestamps for everything, but am wondering if there is a better way.
What do you use to store timestamps and why?
However you choose to store a timestamp, it is important to avoid regional interpretation problems and time offset problems. A Unix timestamp is interpreted the same regardless of region, and is calculated from the same point in time regardless of time zone - these are good things.
Beware storing timestamps as ambiguous strings such as 01/02/2008, since that can be interpreted as January 02, 2008 or February 01, 2008, depending on locale.
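As a quick sketch of that ambiguity in PHP (strtotime() happens to pick the format based on the separator; other parsers, and humans, will guess differently):

    <?php
    date_default_timezone_set('UTC');

    // The same digits are read two different ways depending on the separator:
    // strtotime() assumes American m/d/Y for slashes and European d-m-Y for dashes.
    echo date('Y-m-d', strtotime('01/02/2008')), "\n";  // 2008-01-02 (January 2)
    echo date('Y-m-d', strtotime('01-02-2008')), "\n";  // 2008-02-01 (February 1)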
When storing hours/minutes/seconds, it is important to know "which" hour/minute/second is being specified. You can do this by including timezone information (not needed for a Unix timestamp, since it is assumed to be UTC).
However, note that Unix timestamps cannot uniquely represent some instants in time: when there is a leap second in UTC, the Unix timestamp does not change, so both 23:59:60 UTC and 00:00:00 the next day have the same Unix representation. So if you really need one second or better resolution, consider another format.
If you prefer a more human readable format for storage than a Unix timestamp, consider ISO 8601.
One technique that helps keep things straightforward is to store dates as UTC and only apply timezone or DST offsets when displaying a date to a user.
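A small sketch of that in PHP's DateTime (the 'Europe/Paris' zone is just a stand-in for whatever the user's zone happens to be):

    <?php
    date_default_timezone_set('UTC');

    // Store: capture the current moment in UTC, in an unambiguous ISO 8601 string.
    $now    = new DateTime('now', new DateTimeZone('UTC'));
    $stored = $now->format('c');                      // e.g. 2008-10-07T09:47:02+00:00

    // Display: convert to the user's time zone only at presentation time.
    $forUser = new DateTime($stored);
    $forUser->setTimezone(new DateTimeZone('Europe/Paris'));
    echo $forUser->format('Y-m-d H:i:s T'), "\n";     // local wall-clock time for display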
If you are storing a log file, please for the love of Pete make it something human-readable and lexically sortable.
2008-10-07 09:47:02 for example.
32-bit Unix timestamps will overflow in a few years (January 2038), so that might be a consideration. I generally use the DATETIME format in SQL, which is YYYY-MM-DD HH:MM:SS with the time on a 24-hour clock. I try to output to files in the same format, just to make my life easier.
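For example, in PHP a single format string covers both the log file and a DATETIME column (gmdate() is used here so the output stays in UTC):

    <?php
    // YYYY-MM-DD HH:MM:SS on a 24-hour clock, in UTC - human-readable, and it
    // sorts lexically in chronological order, so it works for logs and DATETIME columns alike.
    echo gmdate('Y-m-d H:i:s'), "\n";   // e.g. 2008-10-07 09:47:02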
What era do you need to store, and to what resolution? If you need microseconds, or dates in the stone age, time_t might not be the best choice. For general business purposes it's quite good (assuming 64-bit).
It depends on what you need the timestamps for.
A Unix timestamp cannot represent the time 1 second after 2008-12-31T23:59:59Z. If you compute '2009-01-01T09:00:00' - '2008-12-31T09:00:00' with Unix timestamps, the result is NOT correct: there will be a leap second between those two dates, and they are separated by 86401 seconds (not 86400, as Unix timestamps will tell you).
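You can see the wrong answer with any Unix-timestamp API; here is the same subtraction sketched with PHP's DateTime:

    <?php
    $before = new DateTime('2008-12-31T09:00:00', new DateTimeZone('UTC'));
    $after  = new DateTime('2009-01-01T09:00:00', new DateTimeZone('UTC'));

    // Unix timestamps pretend the leap second at the end of 2008 never happened,
    // so this prints 86400 even though 86401 SI seconds actually elapse.
    echo $after->getTimestamp() - $before->getTimestamp(), "\n";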
Other than that and what the other responders said, yes -- unix timestamps are the way to go :)
A Unix timestamp is not a good idea in databases, because it does not take daylight saving time or the current local time into account. In MySQL it is better to store it as a native date/time type, and then use the MySQL date and time functions to retrieve the parts you want or compare it to other dates.
timeval-style (time_t + microseconds) if I need sub-second accuracy, else just time_t. You can use a 64-bit integer value to store time_t * 1000000 + usec and you are overflow-proof for over +/- 292,000 years.
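A rough sketch of that encoding in PHP (whose integers are 64-bit on 64-bit builds); in C you would do the same arithmetic on the timeval fields:

    <?php
    // Encode seconds + microseconds into one 64-bit integer: time_t * 1000000 + usec.
    // A signed 64-bit integer covers roughly +/- 292,000 years at this resolution.
    list($usec, $sec) = explode(' ', microtime());      // e.g. "0.12345600 1223372400"
    $stamp = (int)$sec * 1000000 + (int)round($usec * 1000000);

    // Decode back into whole seconds and the microsecond remainder.
    $seconds      = intdiv($stamp, 1000000);
    $microseconds = $stamp % 1000000;
    echo $stamp, "\n";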
The 32-bit Unix timestamp problem can be pretty annoying for users who enter future dates in 2038 or later.
Either use the DATETIME type in MySQL, or store your dates as BIGINT(8) UNSIGNED (max: about 18 quintillion) or FLOAT so that you can store large numbers. You then cannot use PHP's date() function, for example, because it only accepts an integer timestamp as its parameter (and integers are limited on 32-bit systems).
The solution I found is to use the DateTime class introduced in PHP 5.2.0.
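A minimal sketch of that approach (the 2050 date is just an arbitrary post-2038 example):

    <?php
    // A date safely past the 32-bit rollover in January 2038.
    $d = new DateTime('2050-06-15 12:00:00', new DateTimeZone('UTC'));

    // DateTime does not squeeze the value through a 32-bit integer,
    // so formatting works where date() would fail on a 32-bit build.
    echo $d->format('Y-m-d H:i:s'), "\n";   // 2050-06-15 12:00:00

    // For a BIGINT(8) UNSIGNED column you can still get seconds-since-epoch
    // as a string of digits with the 'U' format character.
    echo $d->format('U'), "\n";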
There is no need to change the Unix timestamp format: as long as you use BIGINT(8) UNSIGNED as your MySQL storage for timestamps, you won't be limited by 32-bit systems anymore.
A timestamp is basically a representation of a single point in time.
And since a point in time has effectively endless resolution, the important thing when choosing a timestamp format is: does it have enough resolution?
For most applications I have had, nanoseconds were enough, so Java's Timestamp has had the right resolution for me so far.