I just finished reading "A Deepness in the Sky", a 1999 SF novel by Vernor Vinge. It's a great book with an unexpected reference to seconds since the epoch.
>Take the Traders' method of timekeeping. The frame corrections were incredibly complex - and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth's moon. But if you looked at it still more closely ... the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind's first computer operating systems.
That is one of my favorite books of all time. The use of subtle software references is really great.
I recommend the Bobiverse series for anyone who wants more "computer science in space", or Permutation City for anyone who wants more "exploration of humans + simulations and computers".
I’ll second the Bobiverse series, one of my favorites. Its descriptions of new technologies are at just the right level and depth, I think, and it’s subtly hilarious.
Just starting the third book, really fun series. Highly recommend for anyone interested in computing and science fiction.
The audio books are narrated brilliantly too. Strange fact: Bobiverse has no dedicated Wikipedia page.
Ray Porter, the narrator, is quite the talent. He does a brilliant job with ‘Project Hail Mary’ as well, which is another book from the author of ‘The Martian.’ It has quite a bit more science and humor than The Martian and is one of my favorites.
Thanks for the recommendation. Looks like they're on Kindle Unlimited so I'll definitely give them a try.
> There’s an ongoing effort to end leap seconds, hopefully by 2035.
I don't really like this plan.
The entire point of UTC is to be some integer number of seconds away from TAI to approximate mean solar time (MST).
If we no longer want to track MST, then we should just switch to TAI. Having UTC drift away from MST leaves it in a bastardized state where it still has historical leap seconds that need to be accounted for, but those leap seconds no longer serve any purpose.
In an ideal world you would be right: computer systems should have used TAI for timekeeping and converted it to UTC/local time using TZ databases.
But in the real world a lot of systems made the wrong choice (UNIX being the biggest offender), and it got deeply encoded in many systems and regulations, so it's practically impossible to "just switch to TAI".
So it's easier to just re-interpret UTC as "the new TAI". I won't be surprised if, some time in the future, we get the old UTC back under a different name.
I agree that deviating from MST costs more than it benefits.
---
However, this proposal is not entirely pointless. The point is:
1. Existing UTC timekeeping is unmodified. (profoundly non-negotiable)
2. Any two timestamps after 2035 differ by an accurate number of physical seconds.
---
Given that MST is already a feature of UTC, I agree removing it seems silly.
There is no such thing as TAI. TAI is what you get if you start with UTC and then subtract the number of leap seconds you care about. TAI is not maintained as some sort of separate standard quantity.
In most (all?) countries, civil time is based on UTC. Nobody is going to set all clocks in the world backwards by about half a minute because it is somewhat more pure.
GPS time also has an offset compared to TAI. Nobody cares about that. Just like nobody really cares about the Unix epoch. As long as results are consistent.
> There is no such thing as TAI. TAI is what you get if you start with UTC and then subtract the number of leap seconds you care about. TAI is not maintained as some sort of separate standard quantity.
There is, though? You can easily look at the BIPM's reports [0] to get the gist of how they do it. Some of the contributing atomic clocks are aligned to UTC, and others are aligned to TAI (according to the preferences of their different operators), but the BIPM averages all the contributing measurements into a TAI clock, then derives UTC from that by adding in the leap seconds.
[0] https://webtai.bipm.org/ftp/pub/tai/annual-reports/bipm-annu...
The only thing we can be certain of is that the Summer Solstice occurs when the midsummer sun shines through a trilithon at Stonehenge and strikes a certain point. From there we can work outwards.
The logical thing to do is to precisely model Stonehenge to the last micron in space. That will take a bit of work involving the various sea levels and so on. So on will include the thermal expansion of granite and the traffic density on the A303 and whether the Solstice is a bank holiday.
Oh bollocks ... mass. That standard kilo thing - is it sorted out yet? Those cars and lorries are going to need constant observation - we'll need a sort of dynamic weigh bridge that works at 60mph. If we slap it in the road just after (going west) the speed cameras should keep the measurements within parameters. If we apply now, we should be able to get Highways to change the middle of the road markings from double dashed to a double solid line and then we can simplify a few variables.
... more daft stuff ...
Right, we've got this. We now have a standard place and point in time to define place and time from.
No we don't and we never will. There is no absolute when it comes to time, place or mass. What we do have is requirements for standards and a point to measure from. Those points to measure from have differing requirements, depending on who you are and what you are doing.
I suggest we treat time as we do sea level, with a few special versions that people can use without having to worry about silliness.
Provided I can work out when to plant my wheat crop and read log files with sub-microsecond precision for correlation, I'll be happy. My launches to the moon will need a little more funkiness ...
Sorry to say, Stonehenge, or the plate on which it stands, is moving... to the east, and the wobble of the earth is changing.
The hack is literally trivial. Check once a month to see if UTC = ET; if not, create a file called Leap_Second. Check if this file exists, and if so, delete it, add 1 to the value in a file called Leap_Seconds, and make a backup called 'LSSE' (Leap Seconds Since Epoch).
You are not expected to understand this.
It keeps both systems in place.
If you want, I could make it either a hash or a lookup table.
Note also that the modern "UTC epoch" is January 1, 1972. Before this date, UTC used a different second than TAI: [1]
> As an intermediate step at the end of 1971, there was a final irregular jump of exactly 0.107758 TAI seconds, making the total of all the small time steps and frequency shifts in UTC or TAI during 1958–1971 exactly ten seconds, so that 1 January 1972 00:00:00 UTC was 1 January 1972 00:00:10 TAI exactly, and a whole number of seconds thereafter. At the same time, the tick rate of UTC was changed to exactly match TAI. UTC also started to track UT1 rather than UT2.
So Unix times in the years 1970 and 1971 do not actually match UTC times from that period. [2]
[1] https://en.wikipedia.org/wiki/Coordinated_Universal_Time#His...
[2] https://en.wikipedia.org/wiki/Unix_time#UTC_basis
A funny consequence of this is that there are people alive today that do not know (and never will know) their exact age in seconds[1].
This is true even if we assume the time on the birth certificate was a time precise down to the second. It is because what was considered the length of a second during part of their life varied significantly compared to what we (usually) consider a second now.
[1] Second as in 9192631770/s being the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom
[2] in a particular gravity well.
Without fail, if I read about timekeeping, I learn something new. I had always thought of Unix time as the simplest way to track time (as long as you consider rollovers). I knew of leap seconds, but somehow didn’t think they applied here. Clearly I hadn’t thought about it enough. Good post.
I also read the link for “UTC, GPS, LORAN and TAI”. It’s an interesting contrast that GPS time does not account for leap seconds.
Saying that something happened x-number of seconds (or minutes, hours, days or weeks) ago (or in the future) is simple: it’s giving that point in time a calendar date that’s tricky.
> Saying that something happened x-number of [...] days or weeks) ago (or in the future) is simple
It's not, actually. Does 2 days and 1 hour ago mean 48, 49 or 50 hours, if there was a daylight saving jump in the meantime? If it's 3PM and something is due to happen in 3 days and 2 hours, the user is going to assume and prepare for 5PM, but what if there's a daylight saving jump in the meantime? What happens to "in 3 days and 2 hours" if there's a leap second happening tomorrow that some systems know about and some don't?
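To make that concrete, here's a rough stdlib-only Python sketch (zoneinfo, so it assumes tzdata is installed); the dates and zone are just an example around the 2025 US spring-forward:

    # "2 days and 1 hour ago" around a DST jump, two reasonable readings.
    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo

    NY = ZoneInfo("America/New_York")              # springs forward 2025-03-09 02:00
    now = datetime(2025, 3, 10, 15, 0, tzinfo=NY)  # Monday 3 PM EDT

    # Reading 1: wall-clock arithmetic (what Python does for aware datetimes
    # sharing a tzinfo: it subtracts on the naive fields).
    wall = now - timedelta(days=2, hours=1)
    print(wall)                                    # 2025-03-08 14:00:00-05:00

    # Reading 2: 49 hours of elapsed time, done by going through UTC.
    elapsed = (now.astimezone(timezone.utc)
               - timedelta(days=2, hours=1)).astimezone(NY)
    print(elapsed)                                 # 2025-03-08 13:00:00-05:00

Same phrase, two answers an hour apart: exactly the 48/49/50 problem above.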
You rarely want to be thinking in terms of deltas when considering future events. If there is an event that you want to happen on Jan 1, 2030 at 6 PM CET, there is no way to express that as a number of seconds between now and then, because you don't know what time zone or DST changes governments will enact between now and 2030.
To reiterate this point, there is no way to make an accurate, constantly decreasing countdown of seconds to 6 PM CET on Jan 1, 2030, because nobody actually knows when that moment is going to happen yet.
No. The problems begin because GP included the idea of saying "N <calendar units> in the future".
If the definition of a future time was limited to hours, minutes and/or seconds, then it would be true that the only hard part is answering "what calendrical time and date is that?"
But if you can say "1 day in the future", you're already slamming into problems before even getting to ask that question.
The real problem here is that people keep trying to screw up the simple thing.
If you want to know the timestamp of "two days from now" then you need to know all kinds of things like what time zone you're talking about and if there are any leap seconds etc. That would tell you if "two days from now" is in 172800 seconds or 172801 seconds or 169201 or 176400 etc.
But the seconds-counting thing should be doing absolutely nothing other than counting seconds and doing otherwise is crazy. The conversion from that into calendar dates and so on is for a separate library which is aware of all these contextual things that allow it to do the conversion. What we do not need and should not have is for the seconds counting thing to contain two identical timestamps that refer to two independent points in time. It should just count seconds.
Agree, but people often miss that there are two different use cases here, with different requirements.
"2 days from now" could either mean "after 2*86400 seconds have ticked" or it could mean "when the wall clock looks like it does now, after 2 sunset events". These are not the same thing.
The intent of the thing demanding a future event matters. So you can have the right software abstractions all you like and people will still use the wrong thing.
The problem is that programmers are human, and humans don't reason in monotonic counters :)
One might also recall the late Gregory Bateson's reiteration that "number and quantity are not the same thing - you can have 5 oranges but you can never have 5 gallons of water" [0]
Seconds are numbers; calendrical units are quantities.
[0] Bateson was, in some ways, anticipating the divide between the digital and analog worlds.
> "2 days from now" could either mean "after 2*86400 seconds have ticked" or it could mean "when the wall clock looks like it does now, after 2 sunset events". These are not the same thing.
Which is why you need some means to specify which one you want from the library that converts from the monotonic counter to calendar dates.
Anyone who tries to address the distinction by molesting the monotonic counter is doing it wrong.
I recently built a small Python library to try getting time management right [1]. Exactly because of the first part of your comment, I concluded that the only way to apply a time delta in "calendar" units is to provide the starting point. It was fun developing variable-length time spans :) I however did not address leap seconds.
You are very right that future calendar arithmetic is undefined. I guess that the only viable approach is to assume that it works based on what we know today, and to treat future changes as unpredictable events (as if earth would slow its rotation). Otherwise, we should just stop using calendar arithmetic, but in many fields this is just unfeasible...
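A rough illustration of why the starting point has to be part of the input (this assumes the third-party python-dateutil package; any calendar-aware library behaves similarly):

    # The same "calendar" delta covers a different number of real days
    # depending on where it starts.
    from datetime import date
    from dateutil.relativedelta import relativedelta

    one_month = relativedelta(months=1)
    print(date(2024, 1, 31) + one_month)   # 2024-02-29, i.e. 29 days later
    print(date(2024, 3, 31) + one_month)   # 2024-04-30, i.e. 30 days later
    print(date(2024, 4, 30) + one_month)   # 2024-05-30, i.e. 30 days later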
> I guess that the only viable approach is to assume that it works based on what we know today, and to treat future changes as unpredictable events
No, the only way is to store the user's intent, and recalculate based on that intent when needed.
When the user schedules a meeting for 2PM while being in Glasgow, the meeting should stay at 2PM Glasgow time, even in a hypothetical world where Scotland achieves independence from the UK and they get different ideas whether to do daylight saving or not.
The problem is determining what the user's intent actually is; if they set a reminder for 5PM while in NY, do they want it to be 5PM NY time in whatever timezone they're currently in (because their favorite football team plays at 5PM every week), or do they want it to be at 5PM in their current timezone (because they need to take their medicine at 5PM, whatever that currently means)?
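One way to sketch "store the intent" rather than an instant (field and class names here are made up for illustration, stdlib only):

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional
    from zoneinfo import ZoneInfo

    @dataclass
    class Reminder:
        wall_time: str            # "2025-06-07 17:00" as the user typed it
        zone: Optional[str]       # "America/New_York" for "NY time", None for "wherever I am"

        def fire_at(self, current_zone: str) -> datetime:
            # Resolve the intent with today's tz rules, at the moment we need it.
            tz = ZoneInfo(self.zone or current_zone)
            return datetime.strptime(self.wall_time, "%Y-%m-%d %H:%M").replace(tzinfo=tz)

    game = Reminder("2025-06-07 17:00", "America/New_York")  # football at 5 PM NY time
    meds = Reminder("2025-06-07 17:00", None)                # medicine at 5 PM local time

    print(game.fire_at("Europe/Berlin"))   # 2025-06-07 17:00:00-04:00 (still NY)
    print(meds.fire_at("Europe/Berlin"))   # 2025-06-07 17:00:00+02:00 (Berlin)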
But because of the UNIX timestamp "re-synchronization" to the current calendar dates, you can't use UNIX timestamps to do those "delta seconds" calculations if you care about the _actual_ number of seconds since something happened.
Simple as long as your precision is at milliseconds and you don’t account for space travel.
We can measure the difference in the speed of time between a valley and a mountain ("just" take an atomic clock up a mountain and wait for a bit, then bring it back to your lab where the other atomic clock is now out of sync).
I have come to the conclusion that TAI is the simplest and that anything else should only be used by conversion from TAI when needed (e.g. representation or interoperability).
There's a certain exchange out there that I wrote some code for recently, that runs on top of VAX, or rather OpenVMS, and that has an epoch of November 17, 1858, the first time I've seen a mention of a non-unix epoch in my career. Fortunately, it is abstracted to be the unix epoch in the code I was using.
It’s called the modified Julian day (MJD). And the offset is 2,400,000.5 days.
In the Julian day way of counting, each day ended at noon, so that all astronomical observations done in one night would be the same Julian day, at least in Europe. MJD moved the epoch back to midnight.
It'd be not-so-funny if there was a miscalculation and the Earth was slowed down or sped up too much. There's a story about the end of times and the Antichrist (Dajjal) in the Muslim traditions where this sort of thing actually happens. It is said that the "first day of the Antichrist will be like a year, the second day like a month, and third like a week", which many take literally, i.e. a cosmic event which actually slows down the Earth's rotation, eventually reversing course such that the sun rises from the West (the final sign of the end of humanity).
Somewhat related: I really like Erlang's docs about handling time. They have common scenarios laid out and document which APIs to use for them. Like: retrieve system time, measure elapsed time, determine order of events, etc.
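Python's stdlib has a similar split, roughly (a sketch of the idea, not a complete mapping):

    import time

    wall = time.time()          # "retrieve system time": can step when NTP corrects the clock
    t0 = time.monotonic()       # "measure elapsed time": monotonic, immune to clock steps
    time.sleep(0.1)             # stand-in for the work being timed
    elapsed = time.monotonic() - t0

    print(wall, elapsed)
    # "Determine order of events" across machines is a different problem again:
    # wall clocks are not reliable for it (see the ordering discussion further down).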
The Open Group Base Specifications Issue 7, 2018 edition says that "time_t shall be an integer type". Issue 8, 2024 edition says "time_t shall be an integer type with a width of at least 64 bits".
C merely says that time_t is a "real type capable of representing times". A "real type", as C defines the term, can be either integer or floating-point. It doesn't specify how time_t represents times; for example, a conforming implementation could represent 2024-12-27 02:17:31 UTC as 0x20241227021731.
It's been suggested that time_t should be unsigned so a 32-bit integer can represent times after 2038 (at the cost of not being able to represent times before 1970). Fortunately this did not catch on, and with the current POSIX requiring 64 bits, it wouldn't make much sense.
But the relevant standards don't forbid an unsigned time_t.
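For anyone curious where those 32-bit limits land, a quick check with Python's datetime:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
    print(datetime.fromtimestamp(2**32 - 1, tz=timezone.utc))  # 2106-02-07 06:28:15+00:00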
Well, at least there isn't any POSIX timestamp that corresponds to more than one real time point. So, it's better than the one representation people use for everything.
That'd be like saying some points in time don't have an ISO 8601 year. Every point in time has a year, but some years are longer than others.
If you sat down and watched https://time.is/UTC, it would monotonically tick up, except that occasionally some seconds would be very slightly longer. Like 0.001% longer over the course of 24 hours.
When storing dates in a database I always store them in Unix Epoch time and I don't record the timezone information on the date field (it is stored separately if there was a requirement to know the timezone).
Should we instead be storing time stamps in TAI format, and then use functions to convert time to UTC as required, ensuring that any adjustments for planetary tweaks can be performed as required?
I know that timezones are a field of landmines, but again, that is a human construct where timezone boundaries are adjusted over time.
It seems we need to anchor on absolute time, and then render that out to whatever local time format we need, when required.
> Should we instead be storing time stamps in TAI format, and then use functions to convert time to UTC as required, ensuring that any adjustments for planetary tweaks can be performed as required?
Yes. TAI or similar is the only sensible way to track "system" time, and a higher-level system should be responsible for converting it to human-facing times; leap second adjustment should happen there, in the same place as time zone conversion.
Unfortunately Unix standardised the wrong thing and migration is hard.
TAI is not a time zone. Timezones are a concept of civil timekeeping, which is tied to the UTC time scale.
TAI is a separate time scale and it is used to define UTC.
There is now CLOCK_TAI in Linux [1], tai_clock [2] in C++, and of course several high-level libraries in many languages (e.g. astropy.time in Python [3])
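For example, on Linux with Python 3.9+ you can read both kernel clocks, and astropy will do scale conversions; note that CLOCK_TAI only gives a correct offset if something like chrony/ntpd has set the kernel's TAI offset, otherwise it often just tracks CLOCK_REALTIME:

    import time
    from astropy.time import Time   # assumes astropy is installed

    # Kernel clocks (Linux only). CLOCK_TAI = CLOCK_REALTIME + tai_offset,
    # where tai_offset must have been set by ntpd/chrony; otherwise it is 0.
    print(time.clock_gettime(time.CLOCK_REALTIME))
    print(time.clock_gettime(time.CLOCK_TAI))

    # Scale conversion in astropy: UTC -> TAI is currently +37 s.
    t = Time("2024-12-25T18:54:53", scale="utc")
    print(t.tai.iso)    # 2024-12-25 18:55:30.000
    print(t.unix)       # ~1735152893 (POSIX-style, leap seconds ignored)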
There are three things you want in a time scale:
* Monotonically Increasing
* Ticking with a fixed frequency, i.e. an integer multiple of the SI second
* Aligned with the solar day
Unfortunately, as always, you can only choose 2 out of the 3.
TAI is 1 + 2: atomic clocks using the caesium standard, ticking at the frequency that defines the SI second, forever increasing.
Then there is UT1, which is 1 + 3 (at least as long as no major disaster happens...). It is purely the orientation of the Earth, measured with radio telescopes.
UTC is 2 + 3, defined with the help of both. It ticks the SI seconds of TAI, but leap seconds are inserted at two possible time slots per year to keep it within 0.9 seconds of UT1. That tolerance is under discussion to be relaxed to a much larger value, practically eliminating future leap seconds.
The issue then is that POSIX chose the wrong standard for numerical system clocks. And now it is pretty hard to change and it can also be argued that for performance reasons, it shouldn't be changed, as you more often need the civil time than the monotonic time.
The remaining issues are:
* On many systems, it's simple to get TAI
* Many software systems do not accept the complexity of this topic and instead just return the wrong answer using simplified assumptions, e.g. of no leap seconds in UTC
* There is no standardized way to handle leap seconds in the Unix timestamp, so on days around the introduction of a leap second, the relationship between the Unix timestamp and the actual UTC or TAI time is not clear; several versions exist, and that results in an uncertainty of up to two seconds (see the sketch after this list)
* There might be a negative leap second one day, and nothing is ready for it
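To illustrate the ambiguity point above: a hand-rolled helper (posix_stamp here is my own name for it) that applies the POSIX "Seconds Since the Epoch" expression shows the leap second and the second after it collapse to the same number, so every implementation has to fudge somewhere:

    # The POSIX.1-2001 "Seconds Since the Epoch" expression, applied by hand.
    def posix_stamp(year, yday, hour, minute, sec):
        tm_year = year - 1900
        return (sec + minute * 60 + hour * 3600 + yday * 86400
                + (tm_year - 70) * 31536000 + ((tm_year - 69) // 4) * 86400
                - ((tm_year - 1) // 100) * 86400 + ((tm_year + 299) // 400) * 86400)

    # 2016-12-31 was day 365 (zero-based) of a leap year.
    leap = posix_stamp(2016, 365, 23, 59, 60)   # the inserted leap second, 23:59:60 UTC
    next_ = posix_stamp(2017, 0, 0, 0, 0)       # the very next second, 00:00:00 UTC
    print(leap, next_)                          # 1483228800 1483228800, the same stamp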
> you more often need the civil time than the monotonic time
I don't think that's true? You need to time something at the system level (e.g. measure the duration of an operation, or run something at a regular interval) a lot more often than you need a user-facing time.
Thank you ; it’s kind of you to write such a thoughtful, thorough reply.
In my original comment, when I wrote timezone, I actually didn’t really mean one of many known civil timezones (because it’s not), but I meant “a timezone string configuration in Linux that will then give TAI time, i.e. stop adjusting it with timezones, daylight savings, or leap seconds”.
I hadn’t heard of the concept of timescale.
Personally I think item (3) is worthless for computer (as opposed to human-facing) timekeeping.
Your explanation is very educational, thank you.
That said, you say it’s simple to get TAI, but that’s within a programming language. What we need is a way to explicitly specify the meaning of a time (timezone but also timescale, I’m learning), and that that interpretation is stored together with the timestamp.
I still don’t understand why a TZ=TAI would be so unreasonable or hard to implement as a shorthand for this desire..
I’m thinking particularly of it being attractive for logfiles and other long term data with time info in it.
I did this for my systems a while ago. You can grab <https://imu.li/TAI.zone>, compile it with the tzdata tools, and stick it in /etc/zoneinfo. It is unfortunately unable to keep time during a leap second.
In theory, if you keep your clock set to TAI instead of UTC, you can use the /etc/zoneinfo/right timezones for civic time and make a (simpler) TAI zone file. I learned of that after I'd created the above though, and I can imagine all sorts of problems with getting the NTP daemon to do the right thing, and my use case was more TZ=TAI date, as you mentioned.
There's a contentious discussion on the time zone mailing list about adding a TAI entry. It really didn't help that DJB was the one wanting to add it and approached the issue with his customary attitude. There's a lot of interesting stuff in there though - like allegedly there's a legal requirement in Germany for their time zone to be fixed to the rotation of the earth (and so they might abandon UTC if it gives up leap seconds).
That's already false except along one line within every timezone (and that's assuming the timezone is properly set and not a convenient political or historical fiction). Let's say your timezone is perfectly positioned, and "true" in the middle. Along its east and west boundaries, local noon is 30 minutes off. Near daylight savings transitions, it's off by about an hour everywhere.
Local noon just doesn't matter that much. It especially doesn't matter to the second.
No, most often no. Most software is written to paper over leap seconds: it really only happens at the clock synchronization level (chrony for example implements leap second smearing).
All your clocks are therefore synchronized to UTC anyway: it would mean you’d have to translate from UTC to TAI when you store things, then undo it when you retrieve. It would be a mess.
Smearing is alluring as a concept right up until you try and implement it in the real world.
If you control all the computers that all your other computers talk to (and also their time sync sources), then smearing works great. You're effectively inventing your own standard to make Unix time monotonic.
If, however, your computers need to talk to someone else's computers and have some sort of consensus about what time it is, then the chances are your smearing policy won't match theirs, and you'll disagree on _what time it is_.
Sometimes these effects are harmless. Sometimes they're unforeseen. If mysterious, infrequent buggy behaviour is your kink, then go for it!
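For the curious, the usual 24-hour linear smear is simple enough to sketch; this is the general idea behind published smears such as Google's, with illustrative parameters, not any vendor's exact policy:

    # Linear smear of one inserted leap second over the noon-to-noon window
    # around 2017-01-01 00:00:00 UTC.
    WINDOW_START = 1483185600        # 2016-12-31 12:00:00 UTC as a Unix stamp
    REAL_WINDOW = 86401              # real SI seconds in the noon-to-noon window
    REPORTED_WINDOW = 86400          # Unix seconds the window must appear to contain

    def smeared_unix(real_seconds_since_window_start: float) -> float:
        """Map real elapsed seconds (TAI-like) to the smeared Unix time."""
        t = real_seconds_since_window_start
        if t <= 0:
            return WINDOW_START + t
        if t < REAL_WINDOW:
            return WINDOW_START + t * REPORTED_WINDOW / REAL_WINDOW  # runs slightly slow
        return WINDOW_START + t - 1  # after the window: back in step, one second absorbed

    print(smeared_unix(0))           # 1483185600.0
    print(smeared_unix(43200.5))     # mid-window, about half a second "slow"
    print(smeared_unix(86401))       # 1483272000.0 == 2017-01-01 12:00:00 UTC

Two systems using different WINDOW_START or window lengths will, of course, disagree for the whole window, which is the point being made above.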
Using time to sync between computers is one of the classic distributed systems problems. It is explicitly recommended against. The amount of errors in the regular time stack mean that you can’t really rely on time being accurate, regardless of leap seconds.
Computer clock speeds are not really that consistent, so “dead reckoning” style approaches don’t work.
NTP can only really sync to ~millisecond precision at best. I’m not aware of the state-of-the-art, but NTP errors and smearing errors in the worst case are probably quite similar. If you need more precise synchronisation, you need to implement it differently.
If you want 2 different computers to have the same time, you either have to solve it at a higher layer up by introducing an ordering to events (or equivalent) or use something like atomic clocks.
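The classic "higher layer" answer is a logical clock. A bare-bones Lamport-style sketch (not from this thread, just the textbook idea):

    # Lamport logical clock: ordering without trusting wall clocks.
    class LamportClock:
        def __init__(self):
            self.counter = 0

        def local_event(self) -> int:
            self.counter += 1
            return self.counter

        def send(self) -> int:
            # Timestamp to attach to an outgoing message.
            return self.local_event()

        def receive(self, msg_timestamp: int) -> int:
            # Merge the sender's view before counting the receive event.
            self.counter = max(self.counter, msg_timestamp)
            return self.local_event()

    a, b = LamportClock(), LamportClock()
    t_send = a.send()              # 1
    t_recv = b.receive(t_send)     # 2, guaranteed > t_send whatever b's wall clock says
    print(t_send, t_recv)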
Fair, it's often one of those hidden, implicit design assumptions.
Google explicitly built Spanner around the idea that you can get distributed consistency and availability iff you control All The Clocks.
Smearing is fine, as long as its interaction with other systems is thought about (and tested!). Nobody wants a surprise (yet actually inevitable) outage at midnight on New Year's Day.
Close to the poles, I'd say the assumption that the cocks be synchronised with UTC is flawed. Had we had cocks, I am afraid they'd be oversleeping at this time of year.
> and I don't record the timezone information on the date field
Very few databases actually make it possible to preserve timezone in a timestamp column. Typically the db either has no concept of time zone for stored timestamps (e.g. SQL server) or has “time zone aware” timestamp column types where the input is converted to UTC and the original zone discarded (MySQL, Postgres)
Oracle is the only DB I’m aware of that can actually round-trip nonlocal zones in its “with time zone” type.
What's "original timezone"? Most libraries implement timezone aware dates as an offset from UTC internally. What tzinfo uses oracle? Is it updated? Is it similar to tzinfo used in your service?
It's a highly complicated topic, and it's amazing PostgreSQL decided to use instant time for the 'datetime with timezone' type instead of the Oracle mess.
> Most libraries implement timezone aware dates as an offset from UTC internally.
For what it's worth, the libraries that are generally considered "good" (e.g. java.time, Nodatime, Temporal) all offer a "zoned datetime" type which stores an IANA identifier (and maybe an offset, but it's only meant for disambiguation w.r.t. transitions). Postgres already ships tzinfo and works with those identifiers, it just expects you to manage them more manually (e.g. in a separate column or composite type). Also let's not pretend that "timestamp with time zone" isn't a huge misnomer that causes confusion when it refers to a simple instant.
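A sketch of that "separate column" pattern in application code (the helper names and column layout are made up for illustration; the zone string is an IANA identifier that both Postgres and zoneinfo understand):

    # Persist the instant as UTC plus the IANA zone name, rebuild on the way out.
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    def to_row(local_dt: datetime) -> tuple:
        """local_dt must be zone-aware, e.g. tzinfo=ZoneInfo('Europe/Oslo')."""
        return (local_dt.astimezone(timezone.utc).isoformat(), str(local_dt.tzinfo))

    def from_row(utc_iso: str, zone_name: str) -> datetime:
        return datetime.fromisoformat(utc_iso).astimezone(ZoneInfo(zone_name))

    row = to_row(datetime(2024, 12, 25, 18, 54, 53, tzinfo=ZoneInfo("Europe/Oslo")))
    print(row)             # ('2024-12-25T17:54:53+00:00', 'Europe/Oslo')
    print(from_row(*row))  # 2024-12-25 18:54:53+01:00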
I agree the naming is kinda awful. But you need a geographic timezone only in rare cases, and handling it in a separate column is not that hard. Instant time is the right thing for almost all cases beginners want to use `datetime with timezone` for.
The discussion was about storing a timestamp as UTC, plus the timezone the time was in originally as a second field.
Postgres has timezone-aware datetime fields that translate incoming times to UTC, and outgoing times to a configured timezone. So it doesn't store what timezone the time was in originally.
The claim was that the docs explain why not, but they don't.
Maybe, it really depends on what your systems are storing. Most systems really won't care if you are one second off every few years. For some calculations being a second off is a big deal. I think you should tread carefully when adopting any format that isn't the most popular and have valid reasons for deviating from the norm. The simple act of being different can be expensive.
Seconded. Don't mess around with raw timestamps. If you're using a database, use its date-time data type and functions. They will be much more likely to handle numerous edge cases you've never even thought about.
I think this article ruined my Christmas. Is nothing sacred? Seconds should be seconds since epoch. Why should I care if it drifts off the solar day? Let seconds-since-epoch to date representation converters be responsible for making the correction. What am I missing?
The way it is is really how we all want it. 86400 seconds = 1 day. And we operate under the assumption that midnight UTC is always a multiple of 86400.
We don’t want every piece of software to start hardcoding leap second introductions and handling smears and requiring a way to update it within a month when a new leap second is introduced.
You never worried or thought about it before, and you don’t need to! It’s done in the right way.
> We don’t want every piece of software to start hardcoding leap second introductions and handling smears and requiring a way to update it within a month when a new leap second is introduced.
That kind of thing is already needed for timezone handling. Any piece of software that handles human-facing time needs regular updates.
I think it would make most of our lives easier if machine time was ~29 seconds off from human time. It would be a red flag for carelessly programmed applications, and make it harder to confuse system time with human-facing UK time.
I don't want it this way: it mixes a data model concern (timestamps) with a UI concern (calendars). As others have said, it would be much better if we used TAI and handled leap seconds at the same level as timezones.
But most software that would need to care about that already needs to care about timezones, and those already need to be regularly updated, sometimes with not much more than a month's notice.
Was this Morsy's government or Sisi's? If it's Morsy's government you're holding a grudge against, I have some good news for you. (Presumably you're not holding that grudge against random taxi drivers and housewives in Alexandria.)
With hindsight, we'd do lots of things differently :)
I guess they just didn't foresee the problem, or misjudged the impact. I can imagine it being very "let's kick that problem down the road and just do a simple thing for now" approach.
UNIX systems at the time probably didn’t care about accuracy to the second being maintained over rare leap second adjustments.
Random example, the wonderful RealTime1987A project (https://bmonreal.github.io/RealTime1987A/) talks about detecting neutrinos from the supernova, and what information can be inferred from the timing of the detections. A major source of that data is the Kamiokande-II detector. The data was recorded to tape by a PDP-11, timestamped by its local clock. That clock was periodically synced with UTC with a sophisticated high-tech procedure that consisted of an operator making a phone call to some time service, then typing the time into the computer. As such, the timestamps recorded by this instrument have error bars of something like +/- one minute.
If that’s the sort of world you’re in, trying to account for leap seconds probably seems like a complete waste of effort and precious computer memory.
Arguably it's worse if 00:33 on 2024.12.26 has to get run through another function to get the true value of 2024.12.25 T 23:59.
The problem is leap seconds. Software just wasn't designed to handle 86401 seconds in a day, and this caused incidents at Google, Cloudflare, Qantas, and others. Worried that resolving all possible bugs related to days with 86401 seconds in them was going to be impossible to get right, Google decided to smear that leap second so that the last "second" isn't.
And if you've not seen it, there's the falsehoods programmers believe about time article.
What I don’t understand is why we would ever assume two clocks in two different places could be compared in a non approximate way. Your clock, your observations of the world state, are always situated in a local context. In the best of all possible cases, the reasons why your clock and time reports from other clocks differ are well understood.
I believe it has some advantages that while you are waiting at the train station your clock shows exactly the same time as the train conductor’s several miles away from you.
In the US or parts of Europe you could wait there for 10 minutes past the scheduled time and barely notice. In Japan, if the train clock disagreed with the station clock by 30s, causing the train to arrive 30s late, they'd have to write all of the passengers' excuse notes for why they were late to work.
I think something like the small angle approximation applies. There are plenty of applications where you can assume clocks are basically in the same frame of reference because relativistic effects are orders of magnitude smaller than your uncertainty.
How? Unless you have an atomic clock nearby, they will very quickly drift apart by many nanoseconds again. It's also impossible to synchronize to that level of precision across a network.
It's not only possible, you can demonstrate it on your phone. Check the GPS error on your device in a clear area. 1 ft of spatial error is roughly 1ns timing error on the signal (assuming other error sources are zero). Alternatively, you can just look at the published clock errors: http://navigationservices.agi.com/GNSSWeb/PAFPSFViewer.aspx
All the satellites in all of the GNSS constellations are synchronized to each other and every device tracking them to within a few tens of nanoseconds. Yes, atomic clocks are involved, but none of them are corrected locally and they're running at a significantly different rate than "true" time here on earth.
That's true, but it's not really the situation I'm thinking of. Your phone is comparing the differences between the timestamps of multiple incoming GNSS signals at a given instant, not using them to set its local clock for future reference.
A better analogy to practical networked computing scenarios would be this: receive a timestamp from a GNSS signal, set your local clock to that, wait a few minutes, then receive a GNSS timestamp again and compare it to your local clock. Use the difference to measure how far you've travelled in those few minutes. If you did that without a local atomic clock then I don't think it would be very accurate.
Basic hardware gets you a precise GNSS time once per second. Your local clock won’t drift that much in that time, and you can track and compensate for the drift. If you’re in a position to get the signal and have the hardware, then you can have very accurate clocks in your system.
I hate to break it to you, but all modern electronic warfare equipment has been targeting all GNSS for many years now. There's a reason why "GPS-denied", which is really referring to any form of satellite navigation, is a multi-billion dollar industry.
That's a common way of doing high precision time sync, yes. It's slightly out of phone budget/form factor, but that's what a GPSDO does.
The receiver in your phone also needs pretty good short term stability to track the signal for all of the higher processing. It'd be absolutely fine to depend on PPS output with seconds or minutes between measurements.
The advantage of equal-length days is that you know now what some future date represents; whereas if you counted leap seconds too, code running now might compute a different date than future code that knows about any leap seconds added between now and then.
Working with time is full of pitfalls, especially around clock monotonicity and clock synchronisation. I wrote an article about some of those pitfalls some time ago [1]. Then, you add time zones to it, and you get a real minefield.
Not any worse than most other commonly used calendars, and it's got the benefit of network effects: many people use it, and virtually everyone will be at least somewhat familiar with it.
You are correct. The first example time in the article, "2024-12-25 at 18:54:53 UTC", corresponds to POSIX timestamp 1735152893, not 1735152686. And there have been 27 leap seconds since the 1970 epoch, not 29.
I've been trying to find discussion of this topic on Hacker News between October 1582 and September 1752, but to no avail.
'cal 9 1752' is ... funny. I guess instead of doing this annoying aperiodic leap second business, they accumulated a bunch of leap seconds owed, and skipped 11 days at one go. Sysadmins at the time were of divided opinion on the matter.
The more I learn about the computation of time, the more unbelievably complex getting it right seems. I thought I was pretty sophisticated in my view of time handling, but just in the last couple of months there have been a series of posts on HN that have opened my eyes even more to how leaky this abstraction of computer time is.
Pretty soon we'll have to defer to deep experts and fundamental libraries to do anything at all with time in our applications, a la security and cryptography.
I remember hearing at a conference about 10 years ago that Google does not make use of leap seconds. Instead, they spread them across regular seconds (they modified their NTP servers). I quickly searched online and found the original article [1].
Seems like there's another corner cut here, where the behavior of leap years at the end of a century (where they're skipped if a year is divisible by 100 unless it's divisible by 400) is not accounted for.
I suppose using Unix time for dates in the far future isn't a good idea. I guess I'll file that away.
(For the curious, the way this seems to work is that it's calibrated to start ticking up in 1973 and every 4 years thereafter. This is integer math, so fractional values are truncated. 1972 was a leap year. From March 1st to December 31st 1972, the leap day was accounted for in `tm_yday`. Thereafter it was accounted for in this expression.)
> the behavior of leap years at the end of a century (where they're skipped if a year is divisible by 100 unless it's divisible by 400) is not accounted for.
The article cites the original edition of POSIX from 1988.
The bug in question was fixed in the 2001 edition:
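Filling in the quoted expression from memory (tm_year is years since 1900 and the divisions are C integer divisions; check the current spec for the exact wording), the corrected formula reads roughly:

    tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400 +
        (tm_year-70)*31536000 + ((tm_year-69)/4)*86400 -
        ((tm_year-1)/100)*86400 + ((tm_year+299)/400)*86400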
> I suppose using Unix time for dates in the far future isn't a good idea. I guess I'll file that away.
Not just Unix time, converting future local time to UTC and storing that is also fraught with risk, as there's no guarantee that the conversion you apply today will be the same as the one that needs to be applied in the future.
Often (for future dates), the right thing to do is to store the thing you were provided (e.g. a local timestamp + the asserted local timezone) and then convert when you need to.
(Past dates have fewer problems converting to UTC, because we don't tend to retroactively change the meaning of timezones).
There is literally no easy and safe way to actually handle leap seconds. What happens when they need to remove one second? Even for the easier case of an inserted leap second, you can smear it, but what happens if there are multiple systems each smearing it at different rates? I'd strongly argue that you pretty much have to reboot all your time-critical and mission-critical systems during the leap second to be safe.
The issue is so widespread and complicated that they decided to stop introducing extra leap seconds so people can come up with something better in the coming decades - probably way later than the arrival of AGI.
A lot of people seem to miss the point of the article.
Suppose you had a clock that counted seconds (in the way we understand seconds, moving forward one unit per second). If you looked at it in a few days at midnight UTC on NYE (according to any clock), it would not be a multiple of 86400 (number of seconds per day). It would be off by some 29 seconds due to leap seconds. In that way, Unix time is not seconds since the epoch.
You have it backwards. If you look at it at midnight UTC (on any day, not just NYE) it WOULD be an exact multiple of 86400. (Try it and see.)
Because of leap seconds, this is wrong. Midnight UTC tonight is in fact NOT a multiple of 86,400 real, physical seconds since midnight UTC on 1970-01-01.
He didn't have it backwards, he was saying the same thing as you. He said, "suppose you had a clock that counted seconds," then described how it would work (it would be a non-multiple) if that was the case, which it isn't. You ignored that his description of the behavior was part of a hypothetical and not meant to describe how it actually behaves.
I wonder if the increasing number of computers in orbit will mean even more strange relativistic timekeeping stuff will become a concern for normal developers - will we have to add leap seconds to individual machines?
Back of the envelope says ~100 years in low earth orbit will cause a difference of 1 second
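Rough numbers for anyone who wants to check that envelope (first-order weak-field approximation, ignoring Earth's rotation and oblateness; the constants are standard textbook values):

    # Fractional clock-rate difference for a ~400 km circular orbit vs the ground.
    GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
    c2 = (2.998e8) ** 2  # speed of light squared
    R  = 6.371e6         # Earth radius, m
    r  = R + 400e3       # LEO orbital radius, m

    velocity_term = (GM / r) / (2 * c2)        # v^2 / 2c^2, with v^2 = GM/r
    gravity_term  = GM * (1 / R - 1 / r) / c2  # potential difference / c^2
    rate = gravity_term - velocity_term        # orbit clock relative to ground

    print(rate)                          # ~ -2.9e-10 (orbit clock runs slow)
    print(rate * 86400 * 1e6)            # ~ -25 microseconds per day
    print(rate * 86400 * 365.25 * 100)   # ~ -0.9 s per century, i.e. ~1 s per ~100 yr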
Most of those probably don't/won't have clocks that are accurate enough to measure 1 second every hundred years; typical quartz oscillators drift about one second every few weeks.
For GPS at least it is accounted for: about 38 microseconds per day. They have atomic clocks accurate to something like 0.4 milliseconds over 100 years. The frequencies they tick at are deliberately offset from clocks on Earth and are constantly synchronised.
More often than I care to admit, I yearn for another of aphyr's programming interview short stories. Some of my favorite prose and incredibly in-depth programming.
> People, myself included, like to say that POSIX time, also known as Unix time, is the number of seconds since the Unix epoch, which was 1970-01-01 at 00:00:00.
> This is not true. Or rather, it isn’t true in the sense most people think.
I find that assertion odd, because it works exactly as I did assume. Though, to be fair, I'm not thinking in the scientific notion that the author may.
If we think of a second as a tick of some amount of time, it makes sense to just count up once each tick. That scientists inject a second here or there wouldn't interfere with such logic.
All of that said, the leap second is going away anyways, so hopefully whatever replaces it is less troublesome.
The article is needlessly unclear, but the specification given in the second blockquote is the one that is actually applied, and a simpler way of explaining it is: POSIX time() returns 86400 * [the number of UTC midnights since 1970-01-01T00:00:00] + [the number of seconds since the last UTC midnight].
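You can check that decomposition against the stdlib; the example timestamp here is the one quoted elsewhere in the thread:

    # Unix time == 86400 * (whole UTC days since 1970-01-01) + seconds since UTC midnight.
    from datetime import date, datetime, timezone

    dt = datetime(2024, 12, 25, 18, 54, 53, tzinfo=timezone.utc)
    midnights = (dt.date() - date(1970, 1, 1)).days
    since_midnight = dt.hour * 3600 + dt.minute * 60 + dt.second

    print(86400 * midnights + since_midnight)  # 1735152893
    print(int(dt.timestamp()))                 # 1735152893 as well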
POSIX doesn’t ignore leap seconds. Occasionally systems repeat a second, so time doesn’t drift beyond a second from when leap seconds were invented: https://en.wikipedia.org/wiki/Leap_second
The leap second in Unix time is supposed to wait a second and pretend it never happened. I can see why a longer second could be trouble, but also… if you knew it was coming you could make every nanosecond last 2 and lessen the impact as time would always be monotonic?
Typically you don't need to worry about leap seconds on servers because AWS or GCP will help you handle it.
You just need to read the docs to understand their behavior. Some will smooth it out for you, some will jump for you. It would be a problem if you have 3rd party integrations and you rely on their timestamp.
What we’re seeing is again the scientists trying to constrain a humanist system into a scientifically precise framework. It doesn’t really tend to work out. I’m reminded of the time that a bunch of astronomers decided to redefine what a planet is, and yet the cultural notion of Pluto remains strong.
Science and culture will rarely move hand-in-glove, so the rule of separation of concerns, to decouple human experience from scientific measurement, applies.
So what if leap seconds make the epoch 29 seconds longer-ago than date +%s would suggest? It matters a lot less than the fact that we all agree on some number N to represent the current time. That we have -29 fictional seconds doesn't affect the real world in any way. What are you going to do, run missile targeting routines on targets 30 years ago? I mean, I'm as much for abolishing leap seconds as anyone, but I don't think it's useful --- even if it's pedantically correct --- to highlight the time discrepancy.
One could imagine a scenario where you’re looking at the duration of some brief event by looking at the start and end times. If that interval happens to span a leap second, then the duration could be significantly different depending on how your timestamps handled it.
Much more important, though, is how it affects the future. The fact that timestamps in the past might be a few seconds different from the straightforward “now minus N seconds” calculation is mostly a curiosity. The fact that clocks might all have to shift by one more second at some point in the future is more significant. There are plenty of real-world scenarios where that needs some substantial effort to account for.
For practically everyone the local civil time is off from local solar time more than 30 seconds, because very few people live at the exact longitude that corresponds to their time zone. And then you got DST which throws the local time even more off.
This is ignoring the fact that, due to the equation of time, solar noon naturally shifts by tens of minutes over the course of the year.
To drive the point home, for example, local mean solar time at Buckingham Palace is already more than 30 seconds off from Greenwich time.
The point is, since astronomical "time" isn't exactly a constant multiple of cesium-standard seconds, and it even fluctuates due to astrophysical phenomena, applications that concern astro-kineti-geometrical reality have to use the tarnished timescale to match the motion of the planet we're on rather than following a monotonic counter pointed at a glass vial.
It is up to you to keep TAI for everything and let your representations of physical coordinates drift away into the galaxy or something, but that's not the majority choice. The overwhelming majority choose UTC.
TAI is still nice for many high precision applications, weirdly including a lot of precisely those geo-spatial use cases, so we have both.
Sure, but that doesn't mean that we invented and practise leap seconds for the sheer fun of it.
There's very good reasons that are important behind why we try and keep UTC near UT1, so saying "it doesn't matter to anyone" without even entertaining that some people might care isn't very constructive.
UTC, and leap seconds, originate from (military) navies of the world, with the intent of supporting celestial navigation. It is already dubious how useful leap seconds were for that use, and much more dubious is its use as civil timescale.
We have leap seconds to save us from having leap minutes, or leap hours.
Generally, it's useful for midnight to be at night, and midday during the day. UT1 is not regular, so you need some form of correction. Then the debate is about how big and how often.
You don't need leap minutes. Nobody cares if the sun is off by minutes; it already is anyway thanks to timezones. You don't even need leap hours. If in seven thousand years no one has done a one-time correction, you can just move the timezones over one step, like computers do all the time for political reasons.
It’s going to be multiple centuries until the cumulative leap seconds add up to 30 minutes, and by that point, a majority of the human population is likely to be living off the earth anyway.
Isn't this the point of the tz files shipped on every Linux system? If the crappy online converters only do the basic math formula, of course it's going to be off a little...
> POSIX time, also known as Unix time, is the number of seconds since the Unix epoch, which was 1970-01-01 at 00:00:00. … I think there should be a concise explanation of the problem.
I don’t think that the definition that software engineers believe is wrong or misleading at all. It really is the number of seconds that have passed since Unix’s “beginning of time”.
But to address the problem the article brings up, here’s my attempt at a concise definition:
POSIX time, also known as Unix time, is the number of seconds since the Unix epoch, which was 1970-01-01 at 00:00:00, and does not include leap seconds that have been added periodically since the 1970s.
Seconds were originally a fraction of a day, which is the Earth rotating: count 86400 of them and roll over to the next day. But the Earth's rotation speed changes, so how much "time passing" fits into those 86400 seconds varies a little. Clocks based on the Earth's rotation get out of sync with atomic clocks.
Leap seconds go into the day-rotation clocks so their date matches the atomic-clock measure of how much time has passed. They are time which has actually passed and which ordinary time has not accounted for; so it's inconsistent for you to say "Unix time really is the number of seconds that have passed" and "does not include leap seconds", because those leap seconds are time that has passed.
Strictly speaking Unix time is monotonic, because it counts an integer number of seconds and does not go backwards; it only repeats during leap seconds.
POSIX does define "the amount of time (in seconds and nanoseconds) since the Epoch", for the output of clock_gettime() with CLOCK_REALTIME [0]. That "amount of time" must be stopped or smeared or go backward in some way when it reaches a leap second. This isn't the 80s, we have functions that interact with Unix time at sub-second precision.
“Monotonic” means non-decreasing (or non-increasing if you’re going the other way). Values are allowed to repeat. The term you’re looking for is “strictly increasing.”
I guess this hinges on whether you think Unix time is an integer or a float. If you think it's just an integer, then yes, you can't get a negative delta.
If, however, you think it's a float, then you can.
Because a day, that is the time between midnight UTC and midnight UTC, is not always exactly 86400 seconds, due to leap seconds. But Unix time always increases by exactly 86400.
I think you're describing the exact confusion that developers have. Unix time doesn't include leap seconds, but they are real seconds that happened. Consider a system that counts days since 1970, but ignores leap years so doesn't count Feb 29. Those 29ths were actual days, just recorded strangely in the calendar. A system that ignores them is going to give you an inaccurate number of days since 1970.
Are you sure they actually happened? as you say, at least one of us is confused. My understanding is that the added leap seconds never happened, they are just inserted to make the dates line up nicely. Perhaps this depends on the definition of second?
Leap seconds are exactly analogous to leap days. One additional unit is added to the calendar, shifting everything down. For leap days we add a day 29 when normally we wrap after 28. For leap seconds we add second 60 when normally we wrap after 59.
Imagine a timestamp defined as days since January 1, 1970, except that it ignores leap years and says all years have 365 days. Leap days are handled by giving February 29 the same day number as February 28.
If you do basic arithmetic with these timestamps to answer the question, “how many days has it been since Nixon resigned? then you will get the wrong number. You’ll calculate N, but the sun has in fact risen N+13 times since that day.
Same thing with leap seconds. If you calculate the number of seconds since Nixon resigned by subtracting POSIX timestamps, you’ll come up short. The actual time since that event is 20-some seconds more than the value you calculate.
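Here's that leap-day analogy run as actual numbers; the dates are picked to match the example (Nixon resigned 1974-08-09, "today" taken as 2024-12-27), and the "fake" stamp folds Feb 29 into Feb 28 and pretends every year has 365 days:

    # A day counter that ignores leap years undercounts real sunrises,
    # just as POSIX seconds undercount real seconds.
    from datetime import date

    def fake_stamp(d: date) -> int:
        """Days since 1970-01-01 if every year had 365 days (Feb 29 folded into Feb 28)."""
        day = min(d.day, 28) if d.month == 2 else d.day
        day_of_year = date(1971, d.month, day).timetuple().tm_yday  # 1971: not a leap year
        return (d.year - 1970) * 365 + day_of_year - 1

    resigned = date(1974, 8, 9)
    today = date(2024, 12, 27)

    print((today - resigned).days)                    # 18403 real days
    print(fake_stamp(today) - fake_stamp(resigned))   # 18390 "calendar" days: 13 short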
I'm honestly just diving into this now after reading the article, and not a total expert. Wikipedia has a table of a leap second happening across TAI (atomic clock that purely counts seconds) UTC, and unix timestamps according to POSIX: https://en.wikipedia.org/wiki/Unix_time#Leap_seconds
It works out to be that unix time spits out the same integer for 2 seconds.
I thought you were wrong because if a timestamp is being repeated, that means two real seconds (that actually happened) got the same timestamp.
However, after looking hard at the tables in that Wikipedia article comparing TAI, UTC, and Unix time, I think you might actually be correct-- TAI is the atomic time (that counts "real seconds that actually happened"), and it gets out of sync with "observed solar time." The leap seconds are added into UTC, but ultimately ignored in Unix time.* ~~So Unix time is actually more accurate to "real time" as measured atomically than solar UTC is.~~
The only point of debate is that most people consider UTC to be "real time," but that's physically not the case in terms of "seconds that actually happened." It's only the case in terms of "the second that high noon hits." (For anyone wondering, we can't simply fix this by redefining a second to be an actual 24/60/60 division of a day because the Earth's rotation is apparently irregular and generally slowing down over time, which is why UTC has to use leap seconds in order to maintain our social construct of "noon == sun at the highest point" while our atomic clocks are able to measure time that's actually passed.)
*Edit: Or maybe my initial intuition was right. The table does show that one Unix timestamp ends up representing two TAI (real) timestamps. UTC inserts an extra second, while Unix time repeats a second, to handle the same phenomenon. The table is bolded weirdly (and I'm assuming it's correct while it may not be); and beyond that, I'm not sure if this confusion is actually the topic of conversation in the article, or if it's just too late in the night to be pondering this.
It really is the number of seconds that have passed since Unix's "beginning of time", minus twenty-nine. Some UTC days have 86401 seconds, Unix assumes they had 86400.
It's wrong and misleading in precisely the way you (and other commenters here) were wrong and misled, so it seems like that's a fair characterization.
I just finished reading "A Deepness in the Sky" a 2000 SF book by Vernor Vinge. It's a great book with an unexpected reference to seconds since the epoch.
>Take the Traders' method of timekeeping. The frame corrections were incredibly complex - and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth's moon. But if you looked at it still more closely ... the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind's first computer operating systems.
That is one of my favorite books of all time. The use of subtle software references is really great.
I recommend Bobiverse series for anyone who wants more "computer science in space" or permutation city for anyone who wants more "exploration of humans + simulations and computers"
I’ll second the Bobiverse series, one of my favorites. Its descriptions of new technologies is at just the right level and depth, I think, and it’s subtly hilarious.
Just starting the third book, really fun series. Highly recommend for anyone interested in computing and science fiction.
The audio books are narrated brilliantly too. Stange fact: bobiverse has no dedicated Wikipedia page.
Ray Porter, the narrator, is quite the talent. He does a brilliant job with ‘Project: Hail Mary’ as well which is the second book from the author of ‘The Martian.’ It has quite a bit more science and humor than The Martian and is one of my favorites.
Thanks for the recommendation. Looks like they're on Kindle Unlimited so I'll definitely give them a try
> There’s an ongoing effort to end leap seconds, hopefully by 2035.
I don't really like this plan.
The entire point of UTC is to be some integer number of seconds away from TAI to approximate mean solar time (MST).
If we no longer want to track MST, then we should just switch to TAI. Having UTC drift away from MST leaves it in a bastardized state where it still has historical leap seconds that need to be accounted for, but those leap seconds no longer serve any purpose.
In the ideal world, you are right, computer systems should've been using TAI for time tracking and converted it to UTC/local time using TZ databases.
But in the real world a lot of systems made the wrong choice (UNIX being the biggest offender) and it got deeply encoded in many systems and regulations, so it's practically impossible to "just switch to TAI".
So it's easier to just re-interpret UTC as "the new TAI". I will not be surprised if some time in the future we will get the old UTC, but under a different name.
I agree that deviating from MST costs more than it benefits.
---
However, this proposal is not entirely pointless. The point is:
1. Existing UTC timekeeping is unmodified. (profoundly non-negotiable)
2. Any two timestamps after 2035 different by an accurate number of physical seconds.
---
Given that MST is already a feature of UTC, I agree removing it seems silly.
The is no such thing as TAI. TAI is what you get if you start with UTC and then subtract the number of leap seconds you care about. TAI is not maintained as some sort of separate standard quantity.
In most (all?) countries, civil time is based on UTC. Nobody is going to set all clocks in the world backwards by about half a minute because it is somewhat more pure.
GPS time also has an offset compared to TAI. Nobody care a bout that. Just like nobody really cares about the Unix epoch. As long as results are consistent.
> The is no such thing as TAI. TAI is what you get if you start with UTC and then subtract the number of leap seconds you care about. TAI is not maintained as some sort of separate standard quantity.
There is, though? You can easily look at the BIPM's reports [0] to get the gist of how they do it. Some of the contributing atomic clocks are aligned to UTC, and others are aligned to TAI (according to the preferences of their different operators), but the BIPM averages all the contributing measurements into a TAI clock, then derives UTC from that by adding in the leap seconds.
[0] https://webtai.bipm.org/ftp/pub/tai/annual-reports/bipm-annu...
The only think we can be certain of is that the Summer Solstice occurs when the mid summer sun shines through a trillithon at Stonehenge and strikes a certain point. From there we can work outwards.
The logical thing to do is to precisely model Stonehenge to the last micron in space. That will take a bit of work involving the various sea levels and so on. So on will include the thermal expansion of granite and the traffic density on the A303 and whether the Solstice is a bank holiday.
Oh bollocks ... mass. That standard kilo thing - is it sorted out yet? Those cars and lorries are going to need constant observation - we'll need a sort of dynamic weigh bridge that works at 60mph. If we slap it in the road just after (going west) the speed cameras should keep the measurements within parameters. If we apply now, we should be able to get Highways to change the middle of the road markings from double dashed to a double solid line and then we can simplify a few variables.
... more daft stuff ...
Right, we've got this. We now have a standard place and point in time to define place and time from.
No we don't and we never will. There is no absolute when it comes to time, place or mass. What we do have is requirements for standards and a point to measure from. Those points to measure from have differing requirements, depending on who you are and what you are doing.
I suggest we treat time as we do sea level, with a few special versions that people can use without having to worry about silliness.
Provided I can work out when to plant my wheat crop and read log files with sub micro second precision for correlation, I'll be happy. My launches to the moon will need a little more funkiness ...
Sorry to say, Stonehenge, or the plate on which it stands, is moving... to the east, and the wobble of the earth is changing too.
The hack is literally trivial. Check once a month to see if UTC # ET. If not then create a file called Leap_Second once a month, check if this file exists, and if so, then delete it, and add 1 to the value in a file called Leap_Seconds, and make a backup called 'LSSE' Leap seconds since Epoch.
You are not expected to understand this.
It keeps both systems in place.
If you want, I could make it either a hash or a lookup table.
Note also that the modern "UTC epoch" is January 1, 1972. Before this date, UTC used a different second than TAI: [1]
> As an intermediate step at the end of 1971, there was a final irregular jump of exactly 0.107758 TAI seconds, making the total of all the small time steps and frequency shifts in UTC or TAI during 1958–1971 exactly ten seconds, so that 1 January 1972 00:00:00 UTC was 1 January 1972 00:00:10 TAI exactly, and a whole number of seconds thereafter. At the same time, the tick rate of UTC was changed to exactly match TAI. UTC also started to track UT1 rather than UT2.
So Unix times in the years 1970 and 1971 do not actually match UTC times from that period. [2]
[1] https://en.wikipedia.org/wiki/Coordinated_Universal_Time#His...
[2] https://en.wikipedia.org/wiki/Unix_time#UTC_basis
A funny consequence of this is that there are people alive today that do not know (and never will know) their exact age in seconds[1].
This is true even if we assume the time on the birth certificate was a time precise down to the second. It is because what was considered the length of a second during part of their life varied significantly compared to what we (usually) consider a second now.
[1] Second as in 9192631770/s being the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom
[2] in a particular gravity well.
Without fail, if I read about time keeping, I learn something new. I had always thought of unix time as the most simple way to track time (as long as you consider rollovers). I knew of leap seconds, but somehow didn’t think they applied here. Clearly I hadn’t thought about it enough. Good post.
I also read the link for “UTC, GPS, LORAN and TAI”. It’s an interesting contrast that GPS time does not account for leap seconds.
Saying that something happened x-number of seconds (or minutes, hours, days or weeks) ago (or in the future) is simple: it’s giving that point in time a calendar date that’s tricky.
> Saying that something happened x-number of [...]days or weeks) ago in the future) is simple
It's not, actually. Does 2 days and 1 hour ago mean 48, 49 or 50 hours, if there was a daylight saving jump in the meantime? If it's 3PM and something is due to happen in 3 days and 2 hours, the user is going to assume and prepare for 5PM, but what if there's a daylight saving jump in the meantime? What happens to "in 3 days and 2 hours" if there's a leap second happening tomorrow that some systems know about and some don't?
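To make that concrete, here is a minimal Python sketch (my own illustration, assuming the standard zoneinfo module and the 2024 US DST change on November 3): adding "2 days" on the wall clock across that jump gives 49 elapsed hours, not 48.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

ny = ZoneInfo("America/New_York")
start = datetime(2024, 11, 2, 15, 0, tzinfo=ny)   # 3 PM, the day before DST ends
wall = start + timedelta(days=2)                  # "2 days later" on the wall clock

# Wall-clock arithmetic says exactly 2 days; the real elapsed time is 49 hours.
print(wall - start)                                    # 2 days, 0:00:00 (same tzinfo: naive diff)
print((wall.timestamp() - start.timestamp()) / 3600)   # 49.0
```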
You rarely want to be thinking in terms of deltas when considering future events. If there is an event that you want to happen on jan 1, 2030 at 6 PM CET, there is no way to express that as a number of seconds between now and then, because you don't know whether the US government abolishes DST between now and 2030 or not.
To reiterate this point, there is no way to make an accurate, constantly decreasing countdown of seconds to 6PM CET on jan 1, 2030, because nobody actually knows when that moment is going to happen yet.
You ignored the last part of their comment. All your examples are things they did say are hard.
Also natural events are the other way around, we can know they're X in the future but not the exact calendar date/time.
No. The problems begin because GP included the idea of saying "N <calendar units> in the future".
If the definition of a future time was limited to hours, minutes and/or seconds, then it would be true that the only hard part is answering "what calendrical time and date is that?"
But if you can say "1 day in the future", you're already slamming into problems before even getting to ask that question.
The real problem here is that people keep trying to screw up the simple thing.
If you want to know the timestamp of "two days from now" then you need to know all kinds of things like what time zone you're talking about and if there are any leap seconds etc. That would tell you if "two days from now" is in 172800 seconds or 172801 seconds or 169201 or 176400 etc.
But the seconds-counting thing should be doing absolutely nothing other than counting seconds and doing otherwise is crazy. The conversion from that into calendar dates and so on is for a separate library which is aware of all these contextual things that allow it to do the conversion. What we do not need and should not have is for the seconds counting thing to contain two identical timestamps that refer to two independent points in time. It should just count seconds.
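As a sketch of that separation in Python (the counter stays a plain number; the calendar library does its work only at the edge; note that time.time() is of course Unix time, so it already bakes in the leap-second compromise discussed here):

```python
import time
from datetime import datetime
from zoneinfo import ZoneInfo

# The counting layer: nothing but a number of seconds.
instant = time.time()

# The calendar layer: a separate, timezone-aware conversion done only for display.
print(datetime.fromtimestamp(instant, tz=ZoneInfo("Europe/London")))
print(datetime.fromtimestamp(instant, tz=ZoneInfo("Asia/Tokyo")))
```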
Agree, but people often miss that there are two different use cases here, with different requirements.
"2 days from now" could either mean "after 2*86400 seconds have ticked" or it could mean "when the wall clock looks like it does now, after 2 sunset events". These are not the same thing.
The intent of the thing demanding a future event matters. So you can have the right software abstractions all you like and people will still use the wrong thing.
The problem is that programmers are human, and humans don't reason in monotonic counters :)
One might also recall the late Gregory Bateson's reiteration that "number and quantity are not the same thing - you can have 5 oranges but you can never have 5 gallons of water" [0]
Seconds are numbers; calendrical units are quantities.
[0] Bateson was, in some ways, anticipating the divide between the digital and analog worlds.
> "2 days from now" could either mean "after 2*86400 seconds have ticked" or it could mean "when the wall clock looks like it does now, after 2 sunset events". These are not the same thing.
Which is why you need some means to specify which one you want from the library that converts from the monotonic counter to calendar dates.
Anyone who tries to address the distinction by molesting the monotonic counter is doing it wrong.
I recently built a small Python library to try getting time management right [1]. Exactly because of the first part of your comment, I concluded that the only way to apply a time delta in "calendar" units is to provide the starting point. It was fun developing variable-length time spans :) I however did not address leap seconds.
You are very right that future calendar arithmetic is undefined. I guess that the only viable approach is to assume that it works based on what we know today, and to treat future changes as unpredictable events (just as if the earth were to slow its rotation). Otherwise, we should just stop using calendar arithmetic, but in many fields this is just infeasible...
[1] https://github.com/sarusso/Propertime
> I guess that the only viable approach is to assume that it works based on what we know today, and to treat future changes as unpredictable events
No, the only way is to store the user's intent, and recalculate based on that intent when needed.
When the user schedules a meeting for 2PM while being in Glasgow, the meeting should stay at 2PM Glasgow time, even in a hypothetical world where Scotland achieves independence from the UK and they get different ideas whether to do daylight saving or not.
The problem is determining what the user's intent actually is; if they set a reminder for 5PM while in NY, do they want it to be 5PM NY time in whatever timezone they're currently in (because their favorite football team plays at 5PM every week), or do they want it to be at 5PM in their current timezone (because they need to take their medicine at 5PM, whatever that currently means)?
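Whichever intent it is, a minimal sketch of storing it explicitly (the field names and `resolve` helper are invented for illustration): keep the wall-clock time plus an IANA zone name, and turn it into an instant only when needed, using whatever the tz rules say at that moment.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The stored intent: "2 PM in Glasgow" (Glasgow currently uses the Europe/London zone).
meeting = {"local": "2030-01-01T14:00", "zone": "Europe/London"}

def resolve(event: dict) -> datetime:
    # Re-derive the instant from the intent; if the zone's rules change (or a new
    # zone appears), update the record and this still does the right thing.
    return datetime.fromisoformat(event["local"]).replace(tzinfo=ZoneInfo(event["zone"]))

print(resolve(meeting).timestamp())   # computed lazily, with today's tz rules
```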
I would argue that 2 days and 1 hour is not a "number of seconds (or minutes, hours, days or weeks)"
If you say something will happen in three days, that's a big time window.
But because of the UNIX time stamp "re-synchronization" to the current calendar dates, you can't use UNIX time stamps to do those "delta seconds" calculations if you care about the _actual_ number of seconds since something happened.
Simple as long as your precision is at milliseconds and you don’t account for space travel.
We can measure the difference in the speed of time between a valley and a mountain ("just" take an atomic clock up a mountain and wait for a bit, then bring it back to your lab, where the other atomic clock is now out of sync)
I have come to the conclusion that TAI is the simplest and that anything else should only be used by conversion from TAI when needed (e.g. representation or interoperability).
There's a certain exchange out there that I wrote some code for recently, that runs on top of VAX, or rather OpenVMS, and that has an epoch of November 17, 1858, the first time I've seen a mention of a non-unix epoch in my career. Fortunately, it is abstracted to be the unix epoch in the code I was using.
Apparently the 1858 epoch comes from an astronomy standard calendar called the Julian Day, where day zero was in 4713 BC:
https://www.slac.stanford.edu/~rkj/crazytime.txt
To make these dates fit in computer memory in the 1950s, they offset the calendar by 2.4 million days, placing day zero on November 17, 1858.
It’s called the modified Julian day (MJD). And the offset is 2,400,000.5 days. In the Julian day way of counting, each day ended at noon, so that all astronomical observations done in one night would be the same Julian day, at least in Europe. MJD moved the epoch back to midnight.
https://en.wikipedia.org/wiki/Julian_day
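So the bookkeeping is just a fixed offset; a small sketch of the conversion (it inherits Unix time's pretence that every day has exactly 86400 seconds), using the fact that the Unix epoch falls on MJD 40587:

```python
def unix_to_mjd(unix_seconds: float) -> float:
    # 1970-01-01T00:00:00 UTC is MJD 40587 (Julian Day 2440587.5).
    return unix_seconds / 86400 + 40587

print(unix_to_mjd(0))            # 40587.0
print(unix_to_mjd(1735152893))   # ~60669.79, i.e. the evening of 2024-12-25
```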
There's an old Microsoft tale related to the conflict between Excel's epoch of Jan 1 1900 vs Basic's Dec 31 1899:
https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...
The classic MacOS, Apple's HFS file system (also used in OS X), and PalmOS all had an epoch of January 1, 1904.
The macOS/Swift Foundation API NSDate.timeIntervalSinceReferenceDate uses an epoch of January 1, 2001.
edit: Looks like Wikipedia has a handy list https://en.wikipedia.org/wiki/Epoch_(computing)#Notable_epoc...
Another common computing system to be aware of: the Windows epoch is 1-Jan-1601.
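Converting between these epochs is mostly fixed-offset arithmetic; as a sketch (the function names are mine), here is Windows FILETIME, which counts 100-nanosecond ticks since 1601-01-01 UTC, using the well-known 11,644,473,600-second gap to the Unix epoch:

```python
FILETIME_TICKS_PER_SECOND = 10_000_000   # FILETIME counts 100 ns units
EPOCH_GAP_SECONDS = 11_644_473_600       # seconds from 1601-01-01 to 1970-01-01

def filetime_to_unix(filetime: int) -> float:
    return filetime / FILETIME_TICKS_PER_SECOND - EPOCH_GAP_SECONDS

def unix_to_filetime(unix_seconds: float) -> int:
    return int((unix_seconds + EPOCH_GAP_SECONDS) * FILETIME_TICKS_PER_SECOND)
```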
PostgreSQL internally uses a 2000-01-01 epoch for storing timestamps.
Leap seconds should be replaced by large rockets mounted on the equator. Adjust the planet, not the clock.
It'd be not-so-funny if there was a miscalculation and the Earth was slowed down or sped up too much. There's a story about the end of times and the Antichrist (Dajjal) in the Muslim traditions where this sort of thing actually happens. It is said that the "first day of the Antichrist will be like a year, the second day like a month, and third like a week", which many take literally, i.e. a cosmic event which actually slows down the Earth's rotation, eventually reversing course such that the sun rises from the West (the final sign of the end of humanity).
Somewhat related: I really like Erlang's docs about handling time. They have common scenarios laid out and document which APIs to use for them. Like: retrieve system time, measure elapsed time, determine order of events, etc.
https://www.erlang.org/doc/apps/erts/time_correction.html#ho...
This means that some time points cannot be represented by POSIX timestamps, and some POSIX timestamps do not correspond to any real time
What are POSIX timestamps that don't correspond to any real time? Or do you mean in the future if there is a negative leap second?
Yes, negative leap seconds are possible in the future if leap second adjustments are not abandoned
This has always been true. Pre 1970 is not defined in Unix time.
Related question that leads too deep: "What was before the Big Bang?"
Why? time_t is signed
Neither C nor POSIX requires time_t to be signed.
The Open Group Base Specifications Issue 7, 2018 edition says that "time_t shall be an integer type". Issue 8, 2024 edition says "time_t shall be an integer type with a width of at least 64 bits".
C merely says that time_t is a "real type capable of representing times". A "real type", as C defines the term, can be either integer or floating-point. It doesn't specify how time_t represents times; for example, a conforming implementation could represent 2024-12-27 02:17:31 UTC as 0x20241227021731.
It's been suggested that time_t should be unsigned so a 32-bit integer can represent times after 2038 (at the cost of not being able to represent times before 1970). Fortunately this did not catch on, and with the current POSIX requiring 64 bits, it wouldn't make much sense.
But the relevant standards don't forbid an unsigned time_t.
Apparently both Pelles C for Windows and VAX/VMS use a 32-bit unsigned time_t.
From IEEE 1003.1 (and TFA):
> If year < 1970 or the value is negative, the relationship is undefined.
In addition to being formally undefined (see sibling comment), APIs sometimes use negative time_t values to indicate error conditions and the like.
Probably because the Gregorian calendar didn't always exist. How do you map an int to a calendar that doesn't exist?
https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar
Well, at least there isn't any POSIX timestamp that corresponds to more than one real time point. So, it's better than the one representation people use for everything.
Not yet.
No.
That'd be like saying there are some points in time that don't have an ISO 8601 year. Every point in time has a year, but some years are longer than others.
If you sat down and watched https://time.is/UTC, it would monotonically tick up, except that occasionally some seconds would be very slightly longer. Like 0.001% longer over the course of 24 hours.
The positive leap second of UTC is inserted at midnight, resulting in 86,401 seconds on that day. Reference: https://en.wikipedia.org/wiki/Leap_second
When storing dates in a database I always store them in Unix Epoch time and I don't record the timezone information on the date field (it is stored separately if there was a requirement to know the timezone).
Should we instead be storing time stamps in TAI format, and then use functions to convert time to UTC as required, ensuring that any adjustments for planetary tweaks can be performed as required?
I know that timezones are a field of landmines, but again, that is a human construct where timezone boundaries are adjusted over time.
It seems we need to anchor on absolute time, and then render that out to whatever local time format we need, when required.
> Should we instead be storing time stamps in TAI format, and then use functions to convert time to UTC as required, ensuring that any adjustments for planetary tweaks can be performed as required?
Yes. TAI or similar is the only sensible way to track "system" time, and a higher-level system should be responsible for converting it to human-facing times; leap second adjustment should happen there, in the same place as time zone conversion.
Unfortunately Unix standardised the wrong thing and migration is hard.
I wish there were a TAI timezone: just unmodified, unleaped, untimezoned seconds, forever, in both directions. I was surprised it doesn’t exist.
TAI is not a time zone. Timezones are a concept of civil timekeeping that is tied to the UTC time scale.
TAI is a separate time scale and it is used to define UTC.
There is now CLOCK_TAI in Linux [1], tai_clock [2] in C++, and of course several high-level libraries in many languages (e.g. astropy.time in Python [3])
There are three things you want in a time scale:
1. Monotonically increasing
2. Ticking with a fixed frequency, i.e. an integer multiple of the SI second
3. Aligned with the solar day
Unfortunately, as always, you can only choose 2 out of the 3.
TAI is 1 + 2: atomic clocks using the caesium standard, ticking at the frequency that is the definition of the SI second, forever increasing.
Then there is UT1, which is 1 + 3 (at least as long as no major disaster happens...). It is purely the orientation of the Earth, measured with radio telescopes.
UTC is 2 + 3, defined with the help of both. It ticks the SI seconds of TAI, but leap seconds are inserted at two possible time slots per year to keep it within 1 second of UT1. The last part is under discussion to be changed to a much longer time, practically eliminating future leap seconds.
The issue then is that POSIX chose the wrong standard for numerical system clocks. And now it is pretty hard to change and it can also be argued that for performance reasons, it shouldn't be changed, as you more often need the civil time than the monotonic time.
The remaining issues are:
* On many systems, it's simple to get TAI
* Many software systems do not accept the complexity of this topic and instead just return the wrong answer using simplified assumptions, e.g. of no leap seconds in UTC
* There is no standardized way to handle the leap seconds in the Unix time stamp, so on days around the introduction of a leap second, the relationship between the Unix timestamp and the actual UTC or TAI time is not clear; several versions exist, and that results in uncertainty of up to two seconds
* There might be a negative leap second one day, and nothing is ready for it
[1] https://www.man7.org/linux/man-pages/man7/vdso.7.html
[2] https://en.cppreference.com/w/cpp/chrono/tai_clock
[3] https://docs.astropy.org/en/stable/time/index.html
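For illustration, a minimal sketch with astropy (linked above), which keeps the scales explicit; it assumes a date after 2017, where TAI - UTC = 37 s, and the exact attribute set may vary with the astropy version:

```python
from astropy.time import Time

t = Time("2024-12-25T18:54:53", scale="utc")
print(t.tai.isot)   # 2024-12-25T18:55:30.000 -- TAI runs 37 s ahead of UTC here
print(t.unix)       # ~1735152893, POSIX-style seconds (leap seconds not counted)
print(t.unix_tai)   # a unix-style count that does include leap seconds (newer astropy)
```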
> you more often need the civil time than the monotonic time
I don't think that's true? You need to time something at the system level (e.g. measure the duration of an operation, or run something at a regular interval) a lot more often than you need a user-facing time.
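For the duration-measuring case, a monotonic clock is the right tool; a quick Python sketch:

```python
import time

start = time.monotonic()         # unaffected by NTP steps, smears, or manual clock changes
time.sleep(0.1)                  # stand-in for the operation being timed
print(time.monotonic() - start)  # ~0.1 s, regardless of what the wall clock did meanwhile
```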
Thank you ; it’s kind of you to write such a thoughtful, thorough reply.
In my original comment, when I wrote timezone, I actually didn’t really mean one of many known civil timezones (because it’s not), but I meant “timezone string configuration in Linux that will then give TAI time, ie stop adjusting it with timezones, daylight savings, or leap seconds”.
I hadn’t heard of the concept of timescale.
Personally I think item (3) is worthless for computer (as opposed to human-facing) timekeeping.
Your explanation is very educational, thank you.
That said, you say it’s simple to get TAI, but that’s within a programming language. What we need is a way to explicitly specify the meaning of a time (timezone but also timescale, I’m learning), and that that interpretation is stored together with the timestamp.
I still don’t understand why a TZ=TAI would be so unreasonable or hard to implement as a shorthand for this desire..
I’m thinking particularly of it being attractive for logfiles and other long term data with time info in it.
I did this for my systems a while ago. You can grab <https://imu.li/TAI.zone>, compile it with the tzdata tools, and stick it in /etc/zoneinfo. It is unfortunately unable to keep time during a leap second.
In theory, if you keep your clock set to TAI instead of UTC, you can use the /etc/zoneinfo/right timezones for civic time and make a (simpler) TAI zone file. I learned of that after I'd created the above though, and I can imagine all sorts of problems with getting the NTP daemon to do the right thing, and my use case was more TZ=TAI date, as you mentioned.
There's a contentious discussion on the time zone mailing list about adding a TAI entry. It really didn't help that DJB was the one wanting to add it and approached the issue with his customary attitude. There's a lot of interesting stuff in there though - like allegedly there's a legal requirement in Germany for their time zone to be fixed to the rotation of the earth (and so they might abandon UTC if it gives up leap seconds).
Sorry, there is a "not" missing there.
A remaining issue is that it is not easy to get proper TAI on most systems.
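For example, Linux does expose CLOCK_TAI, but it only helps if something has actually told the kernel the current TAI offset; a sketch, assuming Linux and Python 3.9+:

```python
import time

# CLOCK_TAI is CLOCK_REALTIME plus a kernel-held TAI offset. Unless ntpd/chrony
# has set that offset (currently 37 s), it stays at 0 and you silently get
# UTC-based time again.
tai = time.clock_gettime(time.CLOCK_TAI)
utc = time.clock_gettime(time.CLOCK_REALTIME)
print(round(tai - utc))   # 37 if the offset is configured, 0 if it is not
```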
Why do you think a time scale has to be aligned with solar day? Are you an astronomer or come from an astronomy adjacent background?
Of all the definitions and hidden assumptions about time we’re talking about, possibly the oldest one is that the sun is highest at noon.
That's already false except along one line within every timezone (and that's assuming the timezone is properly set and not a convenient political or historical fiction). Let's say your timezone is perfectly positioned, and "true" in the middle. Along its east and west boundaries, local noon is 30 minutes off. Near daylight savings transitions, it's off by about an hour everywhere.
Local noon just doesn't matter that much. It especially doesn't matter to the second.
The first clock precise enough to even measure the irregularity of the Earth's rotation was only built in 1934.
Before that, the Earth's rotation was simply the best clock available.
No, almost always no. Most software is written to paper over leap seconds: handling them really only happens at the clock synchronization level (chrony, for example, implements leap second smearing).
All your cocks are therefore synchronized to UTC anyway: it would mean you’d have to translate from UTC to TAI when you store things, then undo when you retrieve. It would be a mess.
Smearing is alluring as a concept right up until you try and implement it in the real world.
If you control all the computers that all your other computers talk to (and also their time sync sources), then smearing works great. You're effectively inventing your own standard to make Unix time monotonic.
If, however, your computers need to talk to someone else's computers and have some sort of consensus about what time it is, then the chances are your smearing policy won't match theirs, and you'll disagree on _what time it is_.
Sometimes these effects are harmless. Sometimes they're unforeseen. If mysterious, infrequent buggy behaviour is your kink, then go for it!
Using time to sync between computers is one of the classic distributed systems problems. It is explicitly recommended against. The amount of errors in the regular time stack mean that you can’t really rely on time being accurate, regardless of leap seconds.
Computer clock speeds are not really that consistent, so “dead reckoning” style approaches don’t work.
NTP can only really sync to ~millisecond precision at best. I’m not aware of the state-of-the-art, but NTP errors and smearing errors in the worst case are probably quite similar. If you need more precise synchronisation, you need to implement it differently.
If you want 2 different computers to have the same time, you either have to solve it at a higher layer up by introducing an ordering to events (or equivalent) or use something like atomic clocks.
Fair, it's often one of those hidden, implicit design assumptions.
Google explicitly built spanner (?) around the idea that you can get distributed consistency and availability iff you control All The Clocks.
Smearing is fine, as long as its interaction with other systems is thought about (and tested!). Nobody wants a surprise (yet actually inevitable) outage at midnight on New Year's Day.
In practice with GPS clocks and OTP you can get very good precision in the microseconds
Throw in chrony and you can get nanoseconds.
That's quite the typo
Close to the poles, I'd say the assumption that the cocks be synchronised with UTC is flawed. Had we had cocks, I am afraid they'd be oversleeping at this time of year.
Oracle is the only DB I’m aware of that can actually round-trip nonlocal zones in its “with time zone” type.
As always the Postgres docs give an excellent explanation of why this is the case: https://www.postgresql.org/docs/current/datatype-datetime.ht...
I read it but I only see an explanation about what it does, not the why. It could have stored the original timezone.
What's "original timezone"? Most libraries implement timezone aware dates as an offset from UTC internally. What tzinfo uses oracle? Is it updated? Is it similar to tzinfo used in your service?
It's a highly complicated topic, and it's amazing PostgreSQL decided to use instant time for the 'datetime with timezone' type instead of the Oracle mess.
> Most libraries implement timezone aware dates as an offset from UTC internally.
For what it's worth, the libraries that are generally considered "good" (e.g. java.time, Nodatime, Temporal) all offer a "zoned datetime" type which stores an IANA identifier (and maybe an offset, but it's only meant for disambiguation w.r.t. transitions). Postgres already ships tzinfo and works with those identifiers, it just expects you to manage them more manually (e.g. in a separate column or composite type). Also let's not pretend that "timestamp with time zone" isn't a huge misnomer that causes confusion when it refers to a simple instant.
I suspect you might be part of the contingent that considers such a combined type a fundamentally bad idea, however: https://errorprone.info/docs/time#zoned_datetime
I agree naming is kinda awful. But you need geo timezone only for rare cases and handling it in a separate column is not that hard. Instant time is the right thing for almost all cases beginners want to use `datetime with timezone` for.
The discussion was about storing a timestamp as UTC, plus the timezone the time was in originally as a second field.
Postgres has timezone-aware datetime fields, which translate incoming times to UTC and outgoing times to a configured timezone. So it doesn't store what timezone the time was in originally.
The claim was that the docs explain why not, but they don't.
Maybe, it really depends on what your systems are storing. Most systems really won't care if you are one second off every few years. For some calculations being a second off is a big deal. I think you should tread carefully when adopting any format that isn't the most popular and have valid reasons for deviating from the norm. The simple act of being different can be expensive.
Use your database native date-time field.
Seconded. Don't mess around with raw timestamps. If you're using a database, use its date-time data type and functions. They will be much more likely to handle numerous edge cases you've never even thought about.
I think this article ruined my Christmas. Is nothing sacred? seconds should be seconds since epoch. Why should I care if it drifts off solar day? Let seconds-since-epoch to date representation converters be responsible for making the correction. What am I missing?
The way it is is really how we all want it. 86400 seconds = 1 day. And we operate under the assumption that midnight UTC is always a multiple of 86400.
We don’t want every piece of software to start hardcoding leap second introductions and handling smears and requiring a way to update it within a month when a new leap second is introduced.
You never worried or thought about it before, and you don’t need to! It’s done in the right way.
> We don’t want every piece of software to start hardcoding leap second introductions and handling smears and requiring a way to update it within a month when a new leap second is introduced.
That kind of thing is already needed for timezone handling. Any piece of software that handles human-facing time needs regular updates.
I think it would make most of our lives easier if machine time was ~29 seconds off from human time. It would be a red flag for carelessly programmed applications, and make it harder to confuse system time with human-facing UK time.
You can set your OS to any timezone you want to. If you want it to be 29 seconds off, go for it. The tz database is open source.
Nobody is an island… the hard part is interfacing with other systems, not hacking your own server.
Seems to work fine for most of the planet?
I don't want it this way: it mixes a data model concern (timestamps) with a UI concern (calendars). As others have said, it would be much better if we used TAI and handled leap seconds at the same level as timezones.
But most software that would need to care about that already needs to care about timezones, and those already need to be regularly updated, sometimes with not much more than a month's notice.
I will never forgive Egypt for breaking my shit with a 3 day notice (what was it like 10 years ago?).
Thankfully for me it was just a bunch of non-production-facing stuff.
Was this Morsy's government or Sisi's? If it's Morsy's government you're holding a grudge against, I have some good news for you. (Presumably you're not holding that grudge against random taxi drivers and housewives in Alexandria.)
Maybe a naive question but why wasn't the timestamp designed as seconds since the epoch with zero adjustments?
Everything would be derived from that.
I suppose it would make some math more complex but overall it feels simpler.
With hindsight, we'd do lots of things differently :)
I guess they just didn't foresee the problem, or misjudged the impact. I can imagine it being very "let's kick that problem down the road and just do a simple thing for now" approach.
UNIX systems at the time probably didn’t care about accuracy to the second being maintained over rare leap second adjustments.
Random example, the wonderful RealTime1987A project (https://bmonreal.github.io/RealTime1987A/) talks about detecting neutrinos from the supernova, and what information can be inferred from the timing of the detections. A major source of that data is the Kamiokande-II experiment. The data was recorded to tape by a PDP-11, timestamped by its local clock. That clock was periodically synced with UTC with a sophisticated high-tech procedure that consisted of an operator making a phone call to some time service, then typing the time into the computer. As such, the timestamps recorded by this instrument have error bars of something like +/- one minute.
If that’s the sort of world you’re in, trying to account for leap seconds probably seems like a complete waste of effort and precious computer memory.
Arguably it's worse if 00:33 on 2024.12.26 has to get run through another function to get the true value of 2024.12.25 T 23:59.
The problem is leap seconds. Software just wasn't designed to handle 86401 seconds in a day, and it caused incidents at Google, Cloudflare, Qantas, and others. Worried that resolving all possible bugs related to days with 86401 seconds in them was going to be impossible to get right, Google decided to smear that leap second so that the last "second" isn't.
And if you've not seen it, there's the falsehoods programmers believe about time article.
What I don’t understand is why we would ever assume two clocks in two different places could be compared in a non approximate way. Your clock, your observations of the world state, are always situated in a local context. In the best of all possible cases, the reasons why your clock and time reports from other clocks differ are well understood.
I believe it has some advantages that while you are waiting at the train station your clock shows exactly the same time as the train conductor’s several miles away from you.
Surely not! We could be a whole minute off and I’d still be standing on the platform when the train arrived.
in the US or parts of Europe you could wait there for 10m past the scheduled time and barely notice. In Japan if the train clock disagreed with the station clock by 30s, causing the train to arrive 30s late, they'd have to write all of the passengers excuse notes for why they were late to work.
GPS depends on widely separated (several times the diameter of Earth) clocks agreeing with each other down to the nanosecond.
and moving at such high speeds that relativity factors into the equations.
Speeds and altitude both! I believe time dilation from gravity is more significant but both are big enough to need compensation.
I think something like the small angle approximation applies. There are plenty of applications where you can assume clocks are basically in the same frame of reference because relativistic effects are orders of magnitude smaller than your uncertainty.
The approximation error is so small that you can often ignore it. Hence the concept of exact time.
Eg in most computing contexts, you can synchronize clocks close enough to ignore a few nanos difference.
How? Unless you have an atomic clock nearby, they will very quickly drift apart by many nanoseconds again. It's also impossible to synchronize to that level of precision across a network.
It's not only possible, you can demonstrate it on your phone. Check the GPS error on your device in a clear area. 1 ft of spatial error is roughly 1ns timing error on the signal (assuming other error sources are zero). Alternatively, you can just look at the published clock errors: http://navigationservices.agi.com/GNSSWeb/PAFPSFViewer.aspx
All the satellites in all of the GNSS constellations are synchronized to each other and every device tracking them to within a few tens of nanoseconds. Yes, atomic clocks are involved, but none of them are corrected locally and they're running at a significantly different rate than "true" time here on earth.
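The 1 ft per 1 ns rule of thumb is just the speed of light; a one-line check:

```python
c = 299_792_458   # speed of light, m/s
print(c * 1e-9)   # ~0.30 m travelled per nanosecond, i.e. about one foot
```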
That's true, but it's not really the situation I'm thinking of. Your phone is comparing the differences between the timestamps of multiple incoming GNSS signals at a given instant, not using them to set its local clock for future reference.
A better analogy to practical networked computing scenarios would be this: receive a timestamp from a GNSS signal, set your local clock to that, wait a few minutes, then receive a GNSS timestamp again and compare it to your local clock. Use the difference to measure how far you've travelled in those few minutes. If you did that without a local atomic clock then I don't think it would be very accurate.
Basic hardware gets you a precise GNSS time once per second. Your local clock won’t drift that much in that time, and you can track and compensate for the drift. If you’re in a position to get the signal and have the hardware, then you can have very accurate clocks in your system.
Until somebody start spoofing GPS like they do in Ukraine, and you look embarrassing.
So use Galileo's OSNMA instead. That'll work until they spend $100 on a jammer.
I hate to break it to you, but all modern electronic warfare equipment has been targeting all GNSS for many years now. There's a reason why "GPS-denied", which is really referring to any form of satellite navigation, is a multi-billion dollar industry.
That's a common way of doing high precision time sync, yes. It's slightly out of phone budget/form factor, but that's what a GPSDO does.
The receiver in your phone also needs pretty good short term stability to track the signal for all of the higher processing. It'd be absolutely fine to depend on PPS output with seconds or minutes between measurements.
The Precision Time Protocol is intended to solve this problem:
https://en.m.wikipedia.org/wiki/Precision_Time_Protocol
It does require hardware support, though.
WhiteRabbit achieves sub-nanosecond time synchronization over a network.
Oh wow, that's impressive. Is that over a standard internet connection? Do they need special hardware?
It does require a special switch yes.
camera cuts across to Newton, seething on his side of the desk, his knuckles white as the table visibly starts to crack under his grip
The advantage of equal-length days is that you know now what some future date represents; whereas if you count leap seconds too, you might get a different date computing it now compared to future code that knows about any leap seconds between now and then.
Is there a synchronized and monotonically increasing measure of time to be found?
Not really. GPS time comes close (at least, it avoids leap seconds and DST) but you still have technical issues like clock drift.
Working with time is full of pitfalls, especially around clock monotonicity and clock synchronisation. I wrote an article about some of those pitfalls some time ago [1]. Then, you add time zones to it, and you get a real minefield.
[1]: https://serce.me/posts/16-05-2019-the-matter-of-time
You are a developer who works with time and you named your file, "16-05-2019-the-matter-of-time"? :)
What's wrong with that?
That’s not a standard format. ISO format is yyyy-mm-dd. Also, it sorts nicely by time if you sort alphabetically.
Yes, I know. But for your personal file names, you can pick whatever you feel like.
They wrote it on the 16th of May, or the 5th of Bdrfln, we will never know.
Perhaps it's just named for that date, and not written then?
In any case, dates only have to make sense in the context they are used.
Eg we don't know from just the string of numbers whether it's Gregorian, Julian, or Buddhist or Japanese etc calendar.
Assuming Gregorian is a sane choice.
Not any worse than most other commonly used calendars, and it's got the benefit of network effects: many people use it, and virtually everyone will be at least somewhat familiar with it.
Who knows, it may not even be a date?
But seriously, https://xkcd.com/1179/
Yeah, sorry mate, it can be confusing, will use unix epoch next time.
Why the snarkiness? Don't you acknowledge that YYYY-MM-DD is strictly superior to DD-MM-YYYY?
Snarkiness was deserved.
lol. Great article, btw; thanks. I submitted it:
https://news.ycombinator.com/item?id=42516811
The timestamps given in the article seem completely wrong? Also, where would 29 even come from?
The offset between UTC and TAI is 37 seconds.
You are correct. The first example time in the article, "2024-12-25 at 18:54:53 UTC", corresponds to POSIX timestamp 1735152893, not 1735152686. And there have been 27 leap seconds since the 1970 epoch, not 29.
I'm also not sure where 29 came from, but the expected offset here is 27 - there have been 27 UTC leap seconds since the unix epoch.
I've been trying to find discussion of this topic on Hacker News between October 1582 and September 1752, but to no avail.
'cal 9 1752' is .. funny. I guess instead of doing this annoying a-periodic leap second business, they accumulated a bunch of leap seconds owed, and skipped 11 days at one go. Sysadmins at the time were of divided opinion on the matter.
The more I learn about the computation of time, the more unbelievably complex getting it right seems. I thought I was pretty sophisticated in my view of time handling, but just in the last couple of months there have been a series of posts on HN that have opened my eyes even more to how leaky this abstraction of computer time is.
Pretty soon we'll have to defer to deep experts and fundamental libraries to do anything at all with time in our applications, a la security and cryptography.
I remember hearing at a conference about 10 years ago that Google does not make use of leap seconds. Instead, they spread them across regular seconds (they modified their NTP servers). I quickly searched online and found the original article [1].
[1] https://googleblog.blogspot.com/2011/09/time-technology-and-...
Their public NTP doc for the "leap smear" also includes some other leap smear proposals: https://developers.google.com/time/smear
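As a rough sketch of the 24-hour linear smear described there (spread over the noon-to-noon UTC window centred on the leap second); the function below is my own illustration, not Google's code:

```python
def smeared_extra_seconds(seconds_into_window: float, window: float = 86400.0) -> float:
    """Fraction of the inserted leap second already applied, from 0.0 to 1.0 s.

    Spreading 1 s over 86400 s means clocks run about 11.6 ppm slow for the
    day and never have to show 23:59:60.
    """
    return max(0.0, min(1.0, seconds_into_window / window))
```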
> ((tm_year - 69) / 4) * 86400
Seems like there's another corner cut here, where the behavior of leap years at the end of a century (where they're skipped if a year is divisible by 100 unless it's divisible by 400) is not accounted for.
I suppose using Unix time for dates in the far future isn't a good idea. I guess I'll file that away.
(For the curious, the way this seems to work is that it's calibrated to start ticking up in 1973 and every 4 years thereafter. This is integer math, so fractional values are rounded off. 1972 was a leap year. From March 1st to December 31st 1972, the leap day was accounted for in `tm_yday`. Thereafter it was accounted for in this expression.)
> the behavior of leap years at the end of a century (where they're skipped if a year is divisible by 100 unless it's divisible by 400) is not accounted for.
The article cites the original edition of POSIX from 1988.
The bug in question was fixed in the 2001 edition:
https://pubs.opengroup.org/onlinepubs/007904975/basedefs/xbd...
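For reference, the 2001 edition's corrected expression (as I recall it), transcribed into Python-style integer arithmetic with the tm_* fields as in C's struct tm; note the extra century terms:

```python
def posix_seconds_since_epoch(tm_sec, tm_min, tm_hour, tm_yday, tm_year):
    # tm_year is years since 1900; tm_yday is days since Jan 1 (0-based).
    return (tm_sec + tm_min * 60 + tm_hour * 3600 + tm_yday * 86400
            + (tm_year - 70) * 31536000 + ((tm_year - 69) // 4) * 86400
            - ((tm_year - 1) // 100) * 86400 + ((tm_year + 299) // 400) * 86400)

# 2024-12-25 18:54:53 UTC -> 1735152893, matching the corrected value elsewhere in the thread.
print(posix_seconds_since_epoch(53, 54, 18, 359, 124))
```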
> I suppose using Unix time for dates in the far future isn't a good idea. I guess I'll file that away.
Not just Unix time, converting future local time to UTC and storing that is also fraught with risk, as there's no guarantee that the conversion you apply today will be the same as the one that needs to be applied in the future.
Often (for future dates), the right thing to do is to store the thing you were provided (e.g. a local timestamp + the asserted local timezone) and then convert when you need to.
(Past dates have fewer problems converting to UTC, because we don't tend to retroactively change the meaning of timezones).
There is literally no easy and safe way to actually handle leap seconds. What happens when they need to remove one second? Even for the easier case of an inserted leap second, you can smear it, but what happens if there are multiple systems each smearing it at different rates? I'd strongly argue that you pretty much have to reboot all your time-critical and mission-critical systems during the leap second to be safe.
The issue is so widespread and complicated that they decided to stop introducing extra leap seconds so people can come up with something better in the coming decades - probably way later than the arrival of AGI.
A lot of people seem to miss the point of the article.
Suppose you had a clock that counted seconds (in the way we understand seconds, moving forward one unit per second). If you looked at it in a few days at midnight UTC on NYE (according to any clock), it would not be a multiple of 86400 (number of seconds per day). It would be off by some 29 seconds due to leap seconds. In that way, Unix time is not seconds since the epoch.
You have it backwards. If you look at it at midnight UTC (on any day, not just NYE) it WOULD be an exact multiple of 86400. (Try it and see.)
Because of leap seconds, this is wrong. Midnight UTC tonight is in fact NOT a multiple of 86,400 real, physical seconds since midnight UTC on 1970-01-01.
He didn't have it backwards, he was saying the same thing as you. He said, "suppose you had a clock that counted seconds," then described how it would work (it would be a non-multiple) if that was the case, which it isn't. You ignored that his description of the behavior was part of a hypothetical and not meant to describe how it actually behaves.
You’re absolutely right — not sure how I misinterpreted that so badly.
Thanks but I’m a “she” :)
I wonder if the increasing number of computers in orbit will mean even more strange relativistic timekeeping stuff will become a concern for normal developers - will we have to add leap seconds to individual machines?
Back of the envelope says ~100 years in low earth orbit will cause a difference of 1 second
Most of those probably don't/won't have clocks that are accurate enough to measure 1 second every hundred years; typical quartz oscillators drift about one second every few weeks.
For GPS at least it is accounted for: about 38 microseconds per day. They have atomic clocks accurate to something like 0.4 milliseconds over 100 years. The frequencies they run at differ from those on Earth and are constantly synchronised.
More often than I care to admit, I yearn for another of aphyr's programming interview short stories. Some of my favorite prose and incredibly in-depth programming.
> People, myself included, like to say that POSIX time, also known as Unix time, is the number of seconds since the Unix epoch, which was 1970-01-01 at 00:00:00.
> This is not true. Or rather, it isn’t true in the sense most people think.
I find that assertion odd, because it works exactly as I did assume. Though, to be fair, I'm not thinking in the scientific notion that the author may.
If we think of a second as a tick of some amount of time, it makes sense to just count up once each tick. That scientists inject a second here or there wouldn't interfere with such logic.
All of that said, the leap second is going away anyways, so hopefully whatever replaces it is less troublesome.
> If we think of a second as a tick of some amount of time, it makes sense to just count up once each tick.
It would, but Unix timestamps don't. It works exactly not how you assume.
Explain?
The article is claiming POSIX ignores injected leap seconds.
The article is needlessly unclear, but the specification given in the second blockquote is the one that is actually applied, and a simpler way of explaining it is: POSIX time() returns 86400 * [the number of UTC midnights since 1970-01-01T00:00:00] + [the number of seconds since the last UTC midnight].
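A small check of that description in Python, for 2024-12-25 18:54:53 UTC:

```python
from datetime import date

midnights = (date(2024, 12, 25) - date(1970, 1, 1)).days   # UTC midnights since the epoch
print(midnights * 86400 + 18 * 3600 + 54 * 60 + 53)        # 1735152893
```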
POSIX doesn’t ignore leap seconds. Occasionally systems repeat a second, so time doesn’t drift beyond a second from when leap seconds were invented: https://en.wikipedia.org/wiki/Leap_second
After reading this article no less than 3 times, and the comments in this thread, I'm beyond lost.
So maybe the author was right. Because different people are claiming different things.
The Unix time article has concrete example with tables which should clarify the matter. https://en.wikipedia.org/wiki/Unix_time#Leap_seconds
In that example, Unix time goes from 915148799 -> 915148800 -> 915148800 -> 915148801. Note how the timestamp gets repeated during leap second.
The leap second in Unix time is supposed to wait a second and pretend it never happened. I can see why a longer second could be trouble, but also… if you knew it was coming you could make every nanosecond last 2 and lessen the impact as time would always be monotonic?
That's how Google documents their handling of it: https://developers.google.com/time/smear
Typically you don't need to worry about leap seconds on servers because AWS or GCP will help you handle it.
You just need to read the docs to understand their behavior. Some will smooth it out for you, some will jump for you. It would be a problem if you have 3rd party integrations and you rely on their timestamp.
What we’re seeing is again the scientists trying to constrain a humanist system into a scientifically precise framework. It doesn’t really tend to work out. I’m reminded of the time that a bunch of astronomers decided to redefine what a planet is, and yet the cultural notion of Pluto remains strong.
Science and culture will rarely move hand-in-glove, so the rule of separation of concerns, to decouple human experience from scientific measurement, applies.
So what if leap seconds make the epoch 29 seconds longer-ago than date +%s would suggest? It matters a lot less than the fact that we all agree on some number N to represent the current time. That we have -29 fictional seconds doesn't affect the real world in any way. What are you going to do, run missile targeting routines on targets 30 years ago? I mean, I'm as much for abolishing leap seconds as anyone, but I don't think it's useful --- even if it's pedantically correct --- to highlight the time discrepancy.
One could imagine a scenario where you’re looking at the duration of some brief event by looking at the start and end times. If that interval happens to span a leap second, then the duration could be significantly different depending on how your timestamps handled it.
Much more important, though, is how it affects the future. The fact that timestamps in the past might be a few seconds different from the straightforward “now minus N seconds” calculation is mostly a curiosity. The fact that clocks might all have to shift by one more second at some point in the future is more significant. There are plenty of real-world scenarios where that needs some substantial effort to account for.
It matters for some things. Without those fictional leap seconds, the sun would be 29 seconds out of position at local noon, for instance.
That does not matter at all to anyone.
Did you ask everyone?
It most certainly matters to a lot of people. It sounds like you've never met those people.
For practically everyone the local civil time is off from local solar time more than 30 seconds, because very few people live at the exact longitude that corresponds to their time zone. And then you got DST which throws the local time even more off.
This is ignoring the fact that, due to the equation of time, solar noon naturally shifts around by tens of minutes over the course of the year.
To drive the point, for example local mean solar time at Buckingham palace is already more than 30 seconds off from Greenwich time.
The point is, since astronomical "time" isn't exactly a constant multiple of caesium-standard seconds, and it even fluctuates due to astrophysical phenomena, applications that concern astro-kineti-geometrical reality have to use the tarnished timescale to match the motion of the planet we're on rather than following a monotonic counter pointed at a glass vial.
It is up to you to keep TAI for everything and let your representations of physical coordinates drift away into the galaxy or something, but that's not the majority choice. The overwhelming majority choose UTC time.
TAI is still nice for many high precision applications, weirdly including a lot of precisely those geo-spatial use cases, so we have both.
Sure, but that doesn't mean that we invented and practise leap seconds for the sheer fun of it.
There's very good reasons that are important behind why we try and keep UTC near UT1, so saying "it doesn't matter to anyone" without even entertaining that some people might care isn't very constructive.
UTC, and leap seconds, originate from (military) navies of the world, with the intent of supporting celestial navigation. It is already dubious how useful leap seconds were for that use, and much more dubious is its use as civil timescale.
We have leap seconds to save us from having leap minutes, or leap hours.
Generally, it's useful for midnight to be at night, and midday during the day. UT1 is not regular, so you need some form of correction. Then the debate is about how big and how often.
You don't need leap minutes. Nobody cares if the sun is off by minutes, it already is anyways thanks to timezones. You don't even need leap hours. If in seven thousand years no-one has done a 1 time correction, you can just move the timezones over 1 space, like computers do all the time for political reasons.
It’s going to be multiple centuries until the cumulative leap seconds add up to 30 minutes, and by that point, a majority of the human population is likely to be living off the earth anyway.
Okay, I’ll bite. Who does this matter to, and why?
Also, some of the most populous time zones in the world, such as the European and Chinese time zones, are multiple hours across.
Yeah. "Exact time" people are a bit like "entropy" people in cryptography. Constantly arguing about the perfect random number when nobody cares.
Isn't this the point of the tz files shipped on every Linux system? If the crappy online converters only do the basic math formula, of course it's going to be off a little...
I would not be on a plane or maybe even an elevator mid-January 2038
if it can do this to cloudflare, imagine everything left on legacy signed 32bit integers
https://blog.cloudflare.com/how-and-why-the-leap-second-affe...
> POSIX time, also known as Unix time, is the number of seconds since the Unix epoch, which was 1970-01-01 at 00:00:00. …
I think there should be a concise explanation of the problem.
I don’t think that the definition that software engineers believe is wrong or misleading at all. It really is the number of seconds that have passed since Unix’s “beginning of time”.
But to address the problem the article brings up, here’s my attempt at a concise definition:
POSIX time, also known as Unix time, is the number of seconds since the Unix epoch, which was 1970-01-01 at 00:00:00, and does not include leap seconds that have been added periodically since the 1970s.
Atomic clocks measure time passing.
Seconds were originally a fraction of a day, which is the Earth rotating: count 86400 of them and then roll over to the next day. But the Earth's rotation speed changes, so how much "time passing" there is in 86400 such seconds varies a little. Clocks based on the Earth's rotation get out of sync with atomic clocks.
Leap seconds go into day-rotation clocks so their date matches the atomic-clock measure of how much time has passed. They are time which has actually passed and which ordinary civil time has not accounted for; so it's inconsistent for you to say "Unix time really is the number of seconds that have passed" and "does not include leap seconds", because those leap seconds are time that has passed.
You’re wrong and have the situation exactly backwards.
If a day has 86,401 or 86,399 seconds due to leap seconds, POSIX time still advances by exactly 86,400.
If you had a perfectly accurate stopwatch running since 1970-01-01 the number it shows now would be different from POSIX time.
Wait, why would it be different?
Unix time is not monotonic. It sometimes goes backwards.
Strictly speaking Unix time is monotonic, because it counts an integer number of seconds and it does not go backwards; it only repeats during leap seconds.
POSIX does define "the amount of time (in seconds and nanoseconds) since the Epoch", for the output of clock_gettime() with CLOCK_REALTIME [0]. That "amount of time" must be stopped or smeared or go backward in some way when it reaches a leap second. This isn't the 80s, we have functions that interact with Unix time at sub-second precision.
[0] https://pubs.opengroup.org/onlinepubs/9799919799/functions/c...
This feels like semantics. If a counter repeats a value, it's effectively gone backwards and by definition is not monotonic.
A delta between two successive monotonic values should always be non-negative. This is not true for Unix time.
“Monotonic” means non-decreasing (or non-increasing if you’re going the other way). Values are allowed to repeat. The term you’re looking for is “strictly increasing.”
I guess this hinges on whether you think Unix time is an integer or a float. If you think it's just an integer, then yes, you can't get a negative delta.
If, however, you think it's a float, then you can.
Because a day, that is the time between midnight UTC and midnight UTC, is not always exactly 86400 seconds, due to leap seconds. But Unix time always increases by exactly 86400.
I think you're describing the exact confusion that developers have. Unix time doesn't include leap seconds, but they are real seconds that happened. Consider a system that counts days since 1970, but ignores leap years so doesn't count Feb 29. Those 29ths were actual days, just recorded strangely in the calendar. A system that ignores them is going to give you an inaccurate number of days since 1970.
Are you sure they actually happened? as you say, at least one of us is confused. My understanding is that the added leap seconds never happened, they are just inserted to make the dates line up nicely. Perhaps this depends on the definition of second?
Leap seconds are exactly analogous to leap days. One additional unit is added to the calendar, shifting everything down. For leap days we add a day 29 when normally we wrap after 28. For leap seconds we add second 60 when normally we wrap after 59.
Imagine a timestamp defined as days since January 1, 1970, except that it ignores leap years and says all years have 365 days. Leap days are handled by giving February 29 the same day number as February 28.
If you do basic arithmetic with these timestamps to answer the question, “how many days has it been since Nixon resigned? then you will get the wrong number. You’ll calculate N, but the sun has in fact risen N+13 times since that day.
Same thing with leap seconds. If you calculate the number of seconds since Nixon resigned by subtracting POSIX timestamps, you’ll come up short. The actual time since that event is 20-some seconds more than the value you calculate.
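A quick sketch of that arithmetic (using late December 2024 as "today", which is an assumption on my part):

```python
from datetime import date

resigned = date(1974, 8, 9)     # Nixon's resignation
today = date(2024, 12, 27)

real_days = (today - resigned).days
# The Feb 29ths a "365-days-per-year" count would silently drop:
dropped = sum(1 for y in range(resigned.year, today.year + 1)
              if y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
              and resigned < date(y, 2, 29) < today)
print(real_days, dropped)       # 18403 days, of which 13 are leap days
```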
I'm honestly just diving into this now after reading the article, and not a total expert. Wikipedia has a table of a leap second happening across TAI (atomic clock that purely counts seconds) UTC, and unix timestamps according to POSIX: https://en.wikipedia.org/wiki/Unix_time#Leap_seconds
It works out to be that unix time spits out the same integer for 2 seconds.
"spits out" as in, when you try to convert to it - isn't that precisely because that second second never happened, so it MUST output a repeat?
I thought you were wrong because if a timestamp is being repeated, that means two real seconds (that actually happened) got the same timestamp.
However, after looking hard at the tables in that Wikipedia article comparing TAI, UTC, and Unix time, I think you might actually be correct-- TAI is the atomic time (that counts "real seconds that actually happened"), and it gets out of sync with "observed solar time." The leap seconds are added into UTC, but ultimately ignored in Unix time.* ~~So Unix time is actually more accurate to "real time" as measured atomically than solar UTC is.~~
The only point of debate is that most people consider UTC to be "real time," but that's physically not the case in terms of "seconds that actually happened." It's only the case in terms of "the second that high noon hits." (For anyone wondering, we can't simply fix this by redefining a second to be an actual 24/60/60 division of a day because our orbit is apparently irregular and generally slowing down over time, which is why UTC has to use leap seconds in order to maintain our social construct of "noon == sun at the highest point" while our atomic clocks are able to measure time that's actually passed.)
*Edit: Or maybe my initial intuition was right. The table does show that one Unix timestamp ends up representing two TAI (real) timestamps. UTC inserts an extra second, while Unix time repeats a second, to handle the same phenomenon. The table is bolded weirdly (and I'm assuming it's correct while it may not be); and beyond that, I'm not sure if this confusion is actually the topic of conversation in the article, or if it's just too late in the night to be pondering this.
It really is the number of seconds that have passed since Unix's "beginning of time", minus twenty-nine. Some UTC days have 86401 seconds, Unix assumes they had 86400.
It's wrong and misleading in precisely the way you (and other commenters here) were wrong and misled, so it seems like that's a fair characterization.