x86 CPUs have had invariant TSCs for a long time, i.e. the timestamp counter increments at a constant frequency, usually the base clock of the CPU.
If Windows detects an invariant TSC, it bases its QueryPerformanceCounter() on this invariant TSC - unfortunately QueryPerformanceFrequency() always returns a constant value that doesn't represent the TSC's actual frequency. Visual C++'s runtime in turn bases its high_resolution_clock on QueryPerformanceCounter() / QueryPerformanceFrequency().
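To illustrate the mismatch, here's a small Windows snippet (my own illustration, not part of the measurement program below); it assumes MSVC or a compatible compiler with `__rdtsc()` from `<intrin.h>`, and it uses QPC itself as the time base for a crude one-second TSC rate estimate:

```cpp
#include <windows.h>
#include <intrin.h>   // __rdtsc()
#include <cstdio>

int main()
{
    LARGE_INTEGER f;
    QueryPerformanceFrequency(&f);   // fixed value, typically 10000000 (10 MHz)
    printf("QPF: %lld Hz\n", f.QuadPart);

    // crude TSC rate estimate over ~1 s, using QPC as the time base
    LARGE_INTEGER t0, t1;
    QueryPerformanceCounter(&t0);
    unsigned long long tsc0 = __rdtsc();
    Sleep(1000);
    unsigned long long tsc1 = __rdtsc();
    QueryPerformanceCounter(&t1);
    double secs = double(t1.QuadPart - t0.QuadPart) / double(f.QuadPart);
    printf("TSC: ~%.0f Hz\n", double(tsc1 - tsc0) / secs);
    return 0;
}
```

On a typical recent machine the first number is a fixed 10 MHz while the second is in the GHz range, i.e. QPF tells you nothing about the TSC's real rate.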
So is the frequency of the timestamp counter really such a reliable source that it absolutely doesn't vary? I'm aware that the crystal oscillator doesn't exactly match the CPU's nominal base clock, but I'm curious whether the clock might vary slightly over time or even show a temperature drift.
So I wrote a little C++ program that measures whether there's a drift of RDTSC:
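This isn't the original listing; it's a minimal sketch of the same idea, assuming Linux/x86 with GCC or Clang (`__rdtsc()` from `<x86intrin.h>`) and `CLOCK_MONOTONIC_RAW` as the reference time base. The exact bookkeeping behind the "d:"/"ad:" values is my approximation of what the output described below shows:

```cpp
#include <cmath>
#include <csignal>
#include <cstdint>
#include <cstdio>
#include <vector>
#include <time.h>       // clock_gettime(), CLOCK_MONOTONIC_RAW
#include <unistd.h>     // sleep()
#include <x86intrin.h>  // __rdtsc()

static volatile std::sig_atomic_t g_stop = 0;
static void onSigInt(int) { g_stop = 1; }

static int64_t nowNs()
{
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);  // raw clock, not NTP-disciplined
    return (int64_t)ts.tv_sec * 1'000'000'000 + ts.tv_nsec;
}

int main()
{
    std::signal(SIGINT, onSigInt);
    std::vector<double> clocks;               // per-second ticks/s estimates
    int64_t  tRef   = nowNs();
    uint64_t tscRef = __rdtsc();
    int64_t  tPrev  = tRef;
    uint64_t tscPrev = tscRef;
    double firstClock = 0.0, dPrev = 0.0, sumAbsDrift = 0.0;
    unsigned n = 0;
    while (!g_stop)
    {
        sleep(1);                              // ~1 s sampling interval
        uint64_t tsc = __rdtsc();
        int64_t  t   = nowNs();
        double ticksPerSec = double(tsc - tscPrev) * 1.0e9 / double(t - tPrev);
        clocks.push_back(ticksPerSec);
        if (++n == 1)
            firstClock = ticksPerSec;          // first estimate as reference
        // d: ticks the TSC is ahead of / behind what the reference
        // frequency predicts over the total elapsed time
        double expected = firstClock * double(t - tRef) / 1.0e9;
        double d = double(tsc - tscRef) - expected;
        sumAbsDrift += std::fabs(d - dPrev);   // summed absolute drift-differences
        dPrev = d;
        printf("%.0f ticks/s  d: %+.0f  ad: %.0f\n",
               ticksPerSec, d, sumAbsDrift / n);
        tscPrev = tsc;
        tPrev = t;
    }
    if (!clocks.empty())                       // mean and stddev of the clock
    {
        double mean = 0.0;
        for (double c : clocks) mean += c;
        mean /= clocks.size();
        double var = 0.0;
        for (double c : clocks) var += (c - mean) * (c - mean);
        var /= clocks.size();
        printf("avg clock: %.0f Hz  stddev: %.0f Hz\n", mean, std::sqrt(var));
    }
    return 0;
}
```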
The program runs until you press Ctrl+C and shows the number of timestamp ticks per second, the summed-up drift ("d: "), and the average drift so far (summed-up absolute drift-differences, "ad: "), and it calculates the average clock and the standard deviation of the clock at the end. The measurements aren't precise under Windows but very precise under Linux. On my Linux PC, a Ryzen 7 1800X on an ASRock AB350 Pro4, the accumulated drift the program reports is almost zero clock cycles even after 40 min. The drift slowly oscillates between a slight minus range and a plus range (max. 4,000 clock cycles), symmetrically around zero. There's for sure no clock drift of 1-2 s per day as @Alois Kraus mentioned.