Proposal
Problem statement
Right now, std::time::SystemTime is inconsistent with its sister types: it lacks limit constants such as SystemTime::MIN and SystemTime::MAX, which most other types in the Rust standard library provide.
Even more anomalously, SystemTime does offer methods that imply the (internal) existence of minimum and maximum values, namely SystemTime::checked_add and SystemTime::checked_sub.
Those methods make it immediately evident that the type has limits, which are, however, not exposed.
For better consistency with the rest of the standard library, this ACP proposes providing, or rather publicly exposing, these limits.
Motivating examples or use cases
As seen below (in Links and related work), this feature has been requested or pitched at least seven times, possibly more.
Below are some examples outlining a need for this in the real world.
Minimal example
use std::time::{Duration, SystemTime};

fn main() {
    // Expire all things more than 3 minutes old.
    let configured_expiry = Duration::from_secs(60 * 3);
    let expiry_threshold = SystemTime::now()
        .checked_sub(configured_expiry)
        .unwrap_or(SystemTime::MIN);
    expire_all_things_earlier_than(expiry_threshold);
}
Full practical example
In arti, the official Tor re-implementation in Rust, we store various network documents in an SQLite database, with timestamps stored in the tables as seconds since the epoch.
Besides this, we also have to perform arithmetic on the SystemTime::now() return value, as we are willing to tolerate small clock skews there.
Using Add<Duration> for SystemTime is unacceptable because it may panic.
Similarly, SystemTime::checked_add() is not a good fit either: if it fails, there is no sane/useful value to unwrap_or to, meaning the application would have to fail on an error that is not critical in nature. Such trivial failures should not be able to undermine the Tor network by crashing one of its fundamental pillars.
The following module contains an example of this, but other parts of the code base are affected by this too: https://gitlab.torproject.org/tpo/core/arti/-/blob/70bef2ab911239ed2a13ec1a6cfc0009f8031bef/crates/tor-dirserver/src/mirror/operation.rs
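The failure mode can be reproduced in isolation; the skew value below is illustrative, not arti's actual configuration:

```rust
use std::time::{Duration, SystemTime};

fn main() {
    // A small, tolerable clock skew (illustrative value).
    let skew = Duration::from_secs(60);
    // In practice this always succeeds...
    assert!(SystemTime::now().checked_add(skew).is_some());

    // ...but the API is fallible, and when it does fail there is currently
    // no sane constant to fall back to:
    let huge = Duration::new(u64::MAX, 0);
    assert!(SystemTime::now().checked_add(huge).is_none());
}
```

With SystemTime::MAX, the second case could simply saturate via .unwrap_or(SystemTime::MAX) instead of propagating a non-critical error.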
Solution sketch
impl SystemTime {
    pub const MAX: SystemTime = ...;
    pub const MIN: SystemTime = ...;
}
See rust-lang/rust#148825 which already contains a working solution.
Alternatives
As far as Rust std is concerned, there are no sensible alternatives.
Providing MIN and MAX constants is how this is done for every other type.
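For reference, the existing precedent: primitive integers and SystemTime's sibling type Duration already expose their limits as associated constants:

```rust
use std::time::Duration;

fn main() {
    // Integer primitives expose MIN/MAX...
    assert_eq!(i64::MAX, 9_223_372_036_854_775_807);
    assert_eq!(u8::MIN, 0);
    // ...and so does Duration:
    assert_eq!(Duration::MAX, Duration::new(u64::MAX, 999_999_999));
    assert_eq!(Duration::ZERO, Duration::new(0, 0));
}
```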
Workarounds
For Rust users, there are a number of unsatisfactory workarounds available.
Define, downstream, min/max values for every platform supported by the application
A Rust user can define their own constant for the hypothetical SystemTime::MAX and SystemTime::MIN and use #[cfg] directives, thereby doing something that should be the responsibility of the standard library.
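A minimal sketch of such a downstream definition, assuming a Unix-like target where the epoch plus i64::MAX seconds is representable (an assumption, not a guarantee; the name and offset are hypothetical):

```rust
use std::sync::LazyLock;
use std::time::{Duration, SystemTime};

// Hypothetical downstream stand-in for SystemTime::MAX. The offset is a
// per-platform guess that the standard library itself would not need to make.
#[cfg(unix)]
pub static MAX_SYSTEM_TIME: LazyLock<SystemTime> = LazyLock::new(|| {
    SystemTime::UNIX_EPOCH
        .checked_add(Duration::from_secs(i64::MAX as u64))
        .expect("platform assumption violated")
});

#[cfg(unix)]
fn main() {
    assert!(*MAX_SYSTEM_TIME > SystemTime::now());
}

#[cfg(not(unix))]
fn main() {}
```

Every additional target multiplies the #[cfg] branches, and each branch encodes a guess about standard library internals.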
Define an ad-hoc value to use as the limit for time calculations
The popular chrono crate uses an arbitrary value in the hope that it will be representable on every operating system, even though some platforms may have room for even higher values.
So this is not the real, physical maximum.
Rather, it's a desperate attempt to find a practical solution without changes to Rust and without having to worry too much about operating system specifics.
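In the same spirit, a hedged sketch of such an ad-hoc cap (the offset below is illustrative and is not chrono's actual constant):

```rust
use std::time::{Duration, SystemTime};

fn main() {
    // An ad-hoc cap: a large offset from the epoch that is *hoped* to be
    // representable everywhere (roughly the year 262000; illustrative).
    let cap = SystemTime::UNIX_EPOCH.checked_add(Duration::from_secs(8_200_000_000_000));
    // Whether this even succeeds differs by platform -- which is exactly
    // why such guesses are fragile:
    match cap {
        Some(t) => assert!(t > SystemTime::now()),
        None => println!("ad-hoc cap not representable on this platform"),
    }
}
```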
Wrapping SystemTime in something that separately represents underflow/overflow
SystemTime::checked_add() already returns an Option, so one could use that Option directly.
But Option's Ord impl is wrong for this purpose: None sorts before every Some, so an overflowed result would compare as earlier than any valid time. For correctness, one would need:
enum SystemTimeThatMaybeOverflowed {
    Underflow,
    Normal(SystemTime),
    Overflow,
}
This is not an attractive workaround.
Such a type would be clumsy to work with (and is a word larger than SystemTime).
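The problem with Option's ordering can be demonstrated directly:

```rust
use std::time::{Duration, SystemTime};

fn main() {
    let now = SystemTime::now();
    // An overflowed addition yields None...
    let overflowed: Option<SystemTime> = now.checked_add(Duration::new(u64::MAX, 0));
    assert!(overflowed.is_none());
    // ...and Option's derived Ord sorts None *before* every Some, so a time
    // past the representable maximum compares as earlier than any valid time:
    assert!(overflowed < Some(now));
}
```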
Using Duration
std::time::Duration already has Duration::ZERO and Duration::MAX serving as proper lower and upper limits, as well as Duration::saturating_add() and Duration::saturating_sub().
If an end-user were to use Duration as a replacement for SystemTime, these problems could be solved.
However, this is semantically incorrect.
Duration was made to represent, well, a duration: a relative delta between two points in time.
SystemTime, on the other hand, was made to represent the time of the operating system, which comes with its own features and attributes.
Of course, one could abuse Duration as a SystemTime by using the epoch as the lower end of the delta, but this just feels wrong.
It is also non-trivial to represent times before the epoch this way, and additional burdens arise, for example when the value has to be converted back into a SystemTime because a library demands it, which cannot be done nicely with the current API.
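A sketch of this abuse, including the conversion pain at the boundary:

```rust
use std::time::{Duration, SystemTime};

fn main() {
    // Abuse Duration as an absolute time: seconds since the epoch.
    let abs = Duration::from_secs(1_700_000_000);
    // Saturating arithmetic is available...
    assert_eq!(abs.saturating_add(Duration::MAX), Duration::MAX);
    // ...but times before the epoch cannot be expressed at all, and
    // converting back for an API that demands SystemTime reintroduces
    // exactly the fallible operation we tried to avoid:
    assert!(SystemTime::UNIX_EPOCH.checked_add(abs).is_some());
    assert!(SystemTime::UNIX_EPOCH.checked_add(Duration::MAX).is_none());
}
```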
Not using SystemTime and defining one's own type
Rust programs targeting a known platform could, instead of using SystemTime, define their own time type, with a more complete API.
For example, some programmers choose to define a type based simply on Unix time_t.
(Even then, conversions to SystemTime are sometimes necessary, so this is not a complete solution.)
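A hypothetical sketch of such a type (names and semantics are illustrative, not from any real code base):

```rust
use std::time::{Duration, SystemTime};

/// An application-defined time type based on Unix seconds.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct UnixTime(i64);

impl UnixTime {
    const MIN: UnixTime = UnixTime(i64::MIN);
    const MAX: UnixTime = UnixTime(i64::MAX);

    /// Saturating addition -- the API SystemTime itself cannot offer today.
    fn saturating_add(self, d: Duration) -> UnixTime {
        match i64::try_from(d.as_secs()).ok().and_then(|s| self.0.checked_add(s)) {
            Some(v) => UnixTime(v),
            None => UnixTime::MAX,
        }
    }

    /// The conversion that remains necessary -- and fallible -- whenever a
    /// library demands a SystemTime.
    fn to_system_time(self) -> Option<SystemTime> {
        if self.0 >= 0 {
            SystemTime::UNIX_EPOCH.checked_add(Duration::from_secs(self.0 as u64))
        } else {
            SystemTime::UNIX_EPOCH.checked_sub(Duration::from_secs(self.0.unsigned_abs()))
        }
    }
}

fn main() {
    let t = UnixTime(1_700_000_000);
    // Saturates instead of failing:
    assert_eq!(t.saturating_add(Duration::new(u64::MAX, 0)), UnixTime::MAX);
    // The fallible conversion back to SystemTime:
    assert!(t.to_system_time().is_some());
}
```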
Inferring the minimum and maximum values from .checked_* at runtime(!)
use std::cmp;
use std::sync::LazyLock;
use std::time::{Duration, SystemTime};

pub static MAX_SYSTEM_TIME: LazyLock<SystemTime> =
    LazyLock::new(|| find_system_time_limit(SystemTime::checked_add));
pub static MIN_SYSTEM_TIME: LazyLock<SystemTime> =
    LazyLock::new(|| find_system_time_limit(SystemTime::checked_sub));

/// An algorithm that calculates the maximum/minimum [`SystemTime`].
///
/// It works by applying ± a large duration onto [`SystemTime::UNIX_EPOCH`]
/// until this operation fails, in which case the large duration is halved,
/// until it reaches `1ns`, in which case the algorithm terminates once
/// another ± fails.
///
/// `f` should usually be one of the following:
/// * [`SystemTime::checked_add()`]
/// * [`SystemTime::checked_sub()`]
fn find_system_time_limit<F>(f: F) -> SystemTime
where
    F: Fn(&SystemTime, Duration) -> Option<SystemTime>,
{
    const INITIAL_STEP: Duration = Duration::new(1_000_000_000_000_000_000, 0);
    const ONE_NS: Duration = Duration::new(0, 1);

    let mut step = INITIAL_STEP;
    let mut limit = SystemTime::UNIX_EPOCH;
    loop {
        match f(&limit, step) {
            Some(st) => limit = st,
            None => {
                if step == ONE_NS {
                    break;
                } else {
                    step = cmp::max(step / 2, ONE_NS);
                }
            }
        }
    }
    limit
}
This algorithm takes about 10ms and about 1100 iterations on an Apple M2 Max CPU running macOS.
However, implementing this outside the standard library comes with various downsides:
- Pointless waste of runtime and CPU cycles for a constant that is known internally anyway.
- Differing performance depending on the value of INITIAL_STEP.
- Additional burden for developers, only to obtain something that is internally known anyway.
Links and related work
- std::time::Instant::saturating_duration_since()? rust#133525
What happens now?
This issue contains an API change proposal (or ACP) and is part of the libs-api team feature lifecycle. Once this issue is filed, the libs-api team will review open proposals as capability becomes available. Current response times do not have a clear estimate, but may be up to several months.
Possible responses
The libs team may respond in various different ways. First, the team will consider the problem (this doesn't require any concrete solution or alternatives to have been proposed):
- We think this problem seems worth solving, and the standard library might be the right place to solve it.
- We think that this probably doesn't belong in the standard library.
Second, if there's a concrete solution:
- We think this specific solution looks roughly right, approved, you or someone else should implement this. (Further review will still happen on the subsequent implementation PR.)
- We're not sure this is the right solution, and the alternatives or other materials don't give us enough information to be sure about that. Here are some questions we have that aren't answered, or rough ideas about alternatives we'd want to see discussed.