Is CGFloat more accurate than Double?


Double & CGFloat both print the same value to the console (#2), but only the Double is issued the warning (#1):

(#1) - The warning

[screenshot: the compiler warning, which appears only on the Double declaration]

(#2) - Printing each to the console

DOUBLE  = 9.223372036854776e+18
CGFLOAT = 9.223372036854776e+18

Many articles mention that a CGFloat is a Float on 32-bit platforms and a Double on 64-bit platforms, but doesn't that only refer to the backing storage? Is there something more going on behind the scenes that makes the CGFloat more accurate than the Double?
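
The code that produces this isn't shown in the question, but a minimal reproduction (assuming the literal is Int64.max, 9223372036854775807, which matches the printed value) would be:

import CoreGraphics

// The compiler warns that this literal is not exactly representable
// as 'Double' (it rounds up to 9223372036854775808, i.e. 2^63).
let double: Double = 9223372036854775807

// No warning here, even though the stored value is identical.
let cgFloat: CGFloat = 9223372036854775807

print("DOUBLE  =", double)     // 9.223372036854776e+18
print("CGFLOAT =", cgFloat)    // 9.223372036854776e+18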

BEST ANSWER

No, CGFloat does not represent a BigDecimal-style number with arbitrary precision. CGFloat is defined as a struct that wraps either a Float or a Double, depending on whether you are on a 32-bit or a 64-bit platform; i.e., on a 64-bit platform it is equivalent to a Double, with no more and no less precision or range.
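
For reference, here is a condensed sketch of that definition (simplified from the open-source Swift overlay for CoreGraphics; the real file has more architecture cases and protocol conformances):

// Simplified from CGFloat.swift in the Swift open-source overlay.
public struct CGFloat {
#if arch(i386) || arch(arm)
    // 32-bit platforms: backed by a Float.
    public typealias NativeType = Float
#else
    // 64-bit platforms: backed by a Double.
    public typealias NativeType = Double
#endif
    // The single stored property; this is all the precision CGFloat has.
    public var native: NativeType
}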

The difference in the warning you see is that Double is a type native to Swift: the compiler knows about it and has enough information to tell that the integer literal is not exactly representable, so it can warn you. CGFloat, however, is defined by Apple's frameworks, and the compiler does not know enough about it to produce the same warning.
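
You can confirm at runtime that nothing extra is going on (a small sketch for a 64-bit platform; Int64.max is assumed to be the value from the question):

import CoreGraphics

let viaDouble  = Double(Int64.max)    // 2^63 - 1 rounds up to 2^63
let viaCGFloat = CGFloat(Int64.max)

print(viaDouble)    // 9.223372036854776e+18
print(viaCGFloat)   // 9.223372036854776e+18

// Same 52-bit significand, bit-for-bit identical result.
print(Double.significandBitCount == CGFloat.significandBitCount)  // true
print(viaDouble.bitPattern == Double(viaCGFloat).bitPattern)      // true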