I interpreted it to mean the timer is monotonic and ignores leap seconds completely. It does make it easy to implement wrong if your most convenient time API does account for leap seconds. (I don’t see why this would have anything to do with the millisecond timer? Leap seconds happen on the second.)
Unix timestamps are not monotonic when a positive leap second is applied: the next day must always start at a multiple of 86400 seconds, even if the UTC day is 86401 seconds long. Unless some part of the day is smeared, the timestamp must be set back at some point. So either the UUIDv7 timer is not monotonic, or it does not align with Unix timestamps.
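To make that concrete, here's a minimal sketch. The timestamp values are hypothetical, reflecting one common kernel behavior where the clock is set back at the start of the leap second:

```python
# Sketch: POSIX timestamps around the 2016-12-31 positive leap second.
# POSIX pins 2017-01-01T00:00:00Z to exactly 1483228800 (a multiple of
# 86400), so the 86401-second UTC day cannot map one-to-one onto POSIX
# seconds; one common kernel behavior replays the last second of the day.
events = [
    ("2016-12-31T23:59:59.500Z", 1483228799.500),
    ("2016-12-31T23:59:60.000Z", 1483228799.000),  # set back at the leap second
    ("2016-12-31T23:59:60.500Z", 1483228799.500),  # same value as the first row
    ("2017-01-01T00:00:00.000Z", 1483228800.000),
]

prev = None
for utc, ts in events:
    status = "ok" if prev is None or ts > prev else "NOT monotonic"
    print(f"{utc} -> {ts:.3f} ({status})")
    prev = ts
```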
As for the millisecond timer, recall that a positive leap second lasts for 1000 milliseconds. So to 'exclude' the leap second, by one interpretation, would be to exclude each of those milliseconds individually as they arise; in other words, to halt the timer during the leap second.
As I read it, the value is specifically aligned with "the number of milliseconds since midnight 1 Jan 1970 UTC, leap seconds excluded". (And they really must not have been considering the rubber seconds in UTC up to 1972!)
Consider a day ending in a positive leap second. Suppose that at 23:59:59.500...Z, the millisecond counter is at T − 500. By the start of the leap second (23:59:59.999...Z), the millisecond counter is at T. Then, at the end of the leap second (00:00:00.000...Z), the counter must again be at T, since the leap second must be excluded from the counter by definition. By 00:00:00.500...Z, it's at T + 500, and so on.
The question is, what is the value of the counter between 23:59:59.999...Z (when it is at T) and 00:00:00.000...Z (when it is at T), during the course of the leap second? The definition doesn't make this clear.
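Two obvious readings present themselves. Here's a sketch of both (the function names are mine, not from the spec): a counter that halts at T for the whole leap second, versus a naive POSIX-style clock that is set back and replays the preceding 1000 milliseconds.

```python
# T is the counter value at 23:59:59.999...Z; ms_into_leap runs from
# 0 to 999 over the course of the positive leap second.

def counter_halted(T: int, ms_into_leap: int) -> int:
    # Reading 1: the counter halts, so every instant of the leap
    # second maps to T.
    return T

def counter_replayed(T: int, ms_into_leap: int) -> int:
    # Reading 2: the counter is set back by 1000 ms and replays the
    # last second of the day (what a naive POSIX clock tends to do).
    return T - 1000 + ms_into_leap

# Under either reading, at least two distinct UTC instants share a
# counter value, so the millisecond field alone cannot disambiguate.
T = 1_483_228_800_000  # hypothetical: ms counter at the day boundary
for ms in (0, 500, 999):
    print(ms, counter_halted(T, ms), counter_replayed(T, ms))
```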
Like, I have a timestamp, in the format YYYY-MM-DD HH:MM:SS.ffff Z. What rules do I use to translate that into/from a set of bits? Whatever answer you give here, it seems like it must run afoul of the problems the parent poster is pointing out!
Count the number of seconds that have elapsed from the Unix epoch time until that moment, excluding leap seconds. This increases monotonically and is consistent with the Unix epoch source time.
At 2016-12-31T23:59:59.999Z, 1483228799.999 seconds had elapsed from the epoch, excluding leap seconds, according to "Unix epoch source time".
At 2017-01-01T00:00:00.000Z, 1483228800.000 seconds had elapsed from the epoch, excluding leap seconds, according to "Unix epoch source time".
Now, at 2016-12-31T23:59:60.500Z, how many seconds had elapsed from the epoch, excluding leap seconds? What about 2016-12-31T23:59:60.000Z, or 2016-12-31T23:59:60.750Z? The only monotonic solution is for all of these to have the exact same timestamp of 1483228800.000 seconds. But then that runs into a thousandfold increase in collision probability.
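Here's a sketch of that arithmetic, with a hypothetical to_unix_ms helper that clamps the leap second in the only monotonic way:

```python
from datetime import datetime, timedelta, timezone

def to_unix_ms(y, mo, d, h, mi, s, ms):
    # Hypothetical converter: UTC fields -> milliseconds since the epoch,
    # leap seconds excluded. datetime rejects second=60, and the only
    # monotonic choice is to clamp the whole leap second to the start of
    # the next day.
    if s == 60:
        nxt = datetime(y, mo, d, tzinfo=timezone.utc) + timedelta(days=1)
        return int(nxt.timestamp()) * 1000
    t = datetime(y, mo, d, h, mi, s, tzinfo=timezone.utc)
    return int(t.timestamp()) * 1000 + ms

print(to_unix_ms(2016, 12, 31, 23, 59, 59, 999))  # 1483228799999
print(to_unix_ms(2016, 12, 31, 23, 59, 60, 0))    # 1483228800000
print(to_unix_ms(2016, 12, 31, 23, 59, 60, 500))  # 1483228800000 (collision)
print(to_unix_ms(2016, 12, 31, 23, 59, 60, 750))  # 1483228800000 (collision)
print(to_unix_ms(2017, 1, 1, 0, 0, 0, 0))         # 1483228800000 (collision)
```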
> You can use whatever solution you want - hold the timestamp, smear time, it's up to you. It's still monotonic, and uses the same epoch.
"Whatever solution" I want? So… let's assume the programmer thinks "it's just POSIX time", they call their language of choice's "gimme POSIX time" function. This function, under default circumstances, is probably going to say "the leap second in POSIX time is also the last second of the day", i.e., it repeats the timestamp. Thus, it isn't monotonic.