We all know that the exact-width integer typedefs defined in C99's stdint.h are optional, being defined only if the architecture has primitive types of exactly those widths, with no padding bits and (for the signed ones) a two's complement representation.
However, I just now realised that the [u]int_(fast|least)N_t types are not optional, but required. See ISO/IEC 9899:1999, section 7.18.1:
[7.18.1.2] 3 The following types are required:
int_least8_t int_least16_t int_least32_t int_least64_t uint_least8_t uint_least16_t uint_least32_t uint_least64_t
[7.18.1.3] 3 The following types are required:
int_fast8_t int_fast16_t int_fast32_t int_fast64_t uint_fast8_t uint_fast16_t uint_fast32_t uint_fast64_t
So, as I read it, every architecture that can host a Standard-compliant C or C++ implementation - including a freestanding one, since stdint.h is required even there - must be capable of providing primitive types of at least 64 bits!
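For instance, a compile-time check along these lines ought to pass on every conforming implementation, hosted or freestanding (just a sketch, using the old typedef-array trick since C99 has no _Static_assert):

    #include <stdint.h>

    /* UINT_LEAST64_MAX must be at least 2^64 - 1, so shifting it right by 63
       bits must leave a value of at least 1; a narrower type would make the
       array size negative and force a diagnostic. */
    typedef char check_least64_width[(UINT_LEAST64_MAX >> 63) >= 1 ? 1 : -1];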
Given the Standard's leeway on so many other implementation details, this seems bizarre to me - especially since the requirement has apparently been in force since 1999, some years before 64-bit computing became mainstream even on the desktop, to say nothing of embedded architectures, many of them still current, that lag further behind.
What was the rationale for requiring all implementations to have primitive types of at least 64 bits? And, since this surely has severe implications for implementors in practice, how have they reacted to/dealt with this?
(...or "what have I missed in my reading?" is always an acceptable answer too.)
You are correct that all C and C++ implementations must supply a type of at least 64 bits - but it isn't just uint_least64_t and int_least64_t that force that. The minimum ranges for unsigned long long int and long long int require 64 bits too.

Why did the standards committee think it was worth requiring a 64-bit type? Hard to tell, but probably because most architectures have one available, and those that don't can implement it via library functions (which won't be linked if they aren't used).
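As a rough illustration of that last point: on a 32-bit or smaller target, a sketch like the one below still compiles, with the compiler typically lowering the 64-bit division to a call into its runtime support library (for example libgcc's __udivdi3 on GCC targets; the exact helper names are compiler-specific), and such helpers are only pulled in at link time if something actually uses them.

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        /* uint_least64_t and UINT64_C are mandatory, so this is portable to
           any hosted C99 implementation, even one whose widest register is
           32 bits. */
        uint_least64_t ticks   = UINT64_C(1) << 40;  /* deliberately > 32 bits */
        uint_least64_t per_day = ticks / 86400u;     /* may become a runtime
                                                        helper call on a
                                                        32-bit CPU */

        printf("%" PRIuLEAST64 "\n", per_day);
        return 0;
    }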