I'm really confused about something called "accuracy" in SQL Server.
I found some information about the date and time data types in SQL Server, and I noticed a property called accuracy.
Can someone please explain, in a simple way, what accuracy really means here?
Massive thanks in advance.
It comes down to how many bits you have and perhaps how many you decide to use. 1 bit has 2 possible values, 2 bits have 4, 3 bits have 8. In general, given n bits of storage, there are 2^n possible distinct values. A byte, with its 8 bits, can hold 256 values.
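If it helps, here's a quick, purely illustrative way to check those counts in T-SQL, just using POWER as a calculator:

```
-- number of distinct values n bits can hold: 2^n
SELECT POWER(CAST(2 AS bigint), 8)  AS one_byte,    -- 256
       POWER(CAST(2 AS bigint), 16) AS two_bytes,   -- 65536
       POWER(CAST(2 AS bigint), 32) AS four_bytes;  -- 4294967296
```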
For a positive integer stored in a byte, the range is 0 to 255, because zero counts as one of the possible values. Including negative numbers does not change the number of values; for example, a signed byte has a range of -128 to 127. (2's complement is convenient for hardware.)
Things are "exact" for integers, with the restriction on range. For something like time or distance, we use real numbers. A property of the real numbers is that there are infinitely many of them between any two real numbers. So if we pick a range, there are not enough bytes to represent every real number in between. We approximate the number by assigning some bits to an exponent and some to a mantissa.
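A quick way to see the approximation at work in SQL Server (just an illustration, nothing datetime-specific): 0.1 and 0.2 have no exact binary float representation, so their stored sum is not exactly 0.3.

```
SELECT CASE
         WHEN CAST(0.1 AS float) + CAST(0.2 AS float) = CAST(0.3 AS float)
              THEN 'equal'
         ELSE 'not equal'   -- this branch wins: the stored values are approximations
       END AS float_comparison;
```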
So long story short, we are assigning a fixed number of bits to represent an infinite number of time values over a given range.
For a smalldatetime, the first 2 bytes are for the day and the last 2 bytes are for the time. For a datetime, each part is 4 bytes. The smalldatetime day part of 2 bytes allows 2^16 values, or 65,536 days. 65,536 days divided by 365 days per year is about 179 years, which matches the year range of smalldatetime (1900 to 2079). For the datetime day part, 2^32 days divided by 365 is roughly 11.8 million years, far more than the 1753 to 9999 range actually used. The storage is there, but engineers don't have to use it.
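If you want to confirm the storage sizes yourself, DATALENGTH reports them (a small sketch; GETDATE is only there to supply a value):

```
SELECT DATALENGTH(CAST(GETDATE() AS smalldatetime)) AS smalldatetime_bytes,  -- 4
       DATALENGTH(CAST(GETDATE() AS datetime))      AS datetime_bytes;       -- 8
```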
Now for the time part. A datetime can be converted to a float: the integer part is the day and the fractional part is the fraction of the day. (This does not work for datetime2.)
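Something along these lines shows the split (the variable and the column aliases are just for illustration):

```
DECLARE @d datetime = GETDATE();

SELECT CAST(@d AS float)                            AS whole_value,
       FLOOR(CAST(@d AS float))                     AS day_part,   -- whole days since 1900-01-01
       CAST(@d AS float) - FLOOR(CAST(@d AS float)) AS time_part;  -- fraction of the day elapsed

-- CAST(SYSDATETIME() AS float) fails: datetime2 has no conversion to float
```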
If we used every bit of the time part, then 2 bytes allows for 65,536 values and 4 bytes allows for 4,294,967,296 values for a single day. These are the best possible precisions for each.
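Spread over the 86,400 seconds in a day, that works out to a best-case step of roughly 1.3 seconds for 2 bytes and about 20 microseconds for 4 bytes:

```
SELECT CAST(86400 AS float) / 65536      AS best_2_byte_step_seconds,  -- about 1.318 seconds
       CAST(86400 AS float) / 4294967296 AS best_4_byte_step_seconds;  -- about 0.00002 seconds (20 microseconds)
```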
These are not the precisions that Microsoft's engineers decided to use; however, they are the best that could be stored in that number of bytes. The choice of roughly 3 ms precision for datetime was likely related to hardware and API restrictions at the time. (A slow hardware clock might have been the only standard.) The 1 minute precision for smalldatetime was likely a round-up: can't have seconds, so round it to minutes.
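You can see the 3 ms granularity by casting literals: datetime only stores times at increments of .000, .003, or .007 seconds, so nearby millisecond values collapse onto those steps.

```
SELECT CAST('2024-01-01T12:00:00.001' AS datetime) AS a,  -- 2024-01-01 12:00:00.000
       CAST('2024-01-01T12:00:00.002' AS datetime) AS b,  -- 2024-01-01 12:00:00.003
       CAST('2024-01-01T12:00:00.005' AS datetime) AS c;  -- 2024-01-01 12:00:00.007
```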
The end result is that if you measure the time 1 million times between now and a millisecond from now and store the measurements in a datetime, you will see at most 1 or 2 distinct values. If you store them in a smalldatetime, you will also see at most 1 or 2 distinct values. If you see 2 values, it's because you crossed into the interval of the next representable value.
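Likewise, smalldatetime rounds the seconds away entirely: anything up to 29.998 seconds rounds down to the minute, and 29.999 or more rounds up.

```
SELECT CAST('2024-01-01T12:00:29.998' AS smalldatetime) AS rounded_down,  -- 2024-01-01 12:00:00
       CAST('2024-01-01T12:00:29.999' AS smalldatetime) AS rounded_up;    -- 2024-01-01 12:01:00
```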
Clear as mud?