In the XC16 compiler's DSP routines header (dsp.h) there are these lines:
/* Some constants. */
#ifndef PI /* [ */
#define PI 3.1415926535897931159979634685441851615905761718750 /* double */
#endif /* ] */
#ifndef SIN_PI_Q /* [ */
#define SIN_PI_Q 0.7071067811865474617150084668537601828575134277343750
/* sin(PI/4), (double) */
#endif /* ] */
But the value of PI, to the same number of decimal places, is actually:
3.1415926535897932384626433832795028841971693993751
The dsp.h-defined value starts to diverge at the 16th decimal place. For double-precision floating-point operations this is borderline significant; for Q15 representations it is not significant. The value of SIN_PI_Q also diverges from the correct sin(pi/4) at the 16th decimal place.
Why is Microchip using the incorrect value? Is there some esoteric reason related to computing trig function values, or is this simply a mistake? Or maybe it does not matter?
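For concreteness, here is a quick sketch of how the divergence can be measured on a desktop compiler (not XC16-specific; it assumes IEEE 754 doubles and a long double wider than double, as with GCC on x86):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    /* Reference pi, parsed into long double (assumed wider than double here). */
    long double pi_ref = strtold("3.14159265358979323846264338327950288", NULL);
    double pi_dsp = 3.1415926535897931159979634685441851615905761718750;
    /* Prints roughly -1.22e-16: the divergence described above. */
    printf("error vs. true pi: %Lg\n", (long double)pi_dsp - pi_ref);

    /* Q15 view: scaled by 2^15 and rounded, both sin(pi/4) values agree. */
    double s_dsp  = 0.7071067811865474617150084668537601828575134277343750;
    double s_true = 0.70710678118654752440084436210;
    printf("Q15: %ld vs. %ld\n", lround(s_dsp * 32768.0), lround(s_true * 32768.0));
    return 0;
}

On such a platform this prints an error near -1.22e-16, and the same Q15 value, 23170 (0x5A82), for both sin(pi/4) constants.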
Sometimes such values are tweaked to force rounding to the exact machine number. Seventeen significant digits (counting those before the decimal point) is where a double runs out of precision, and the limited-precision arithmetic a compiler uses to convert the literal might lose even more.
So the library programmers likely wrote out the value so that the decimal representation in the source converts exactly, with no rounding at all, to the intended binary number.
The test would be to write the number out in binary: after the first 53 significant bits (a double's 52-bit stored mantissa plus the implicit leading bit) every remaining digit should be zero.
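That test can be sketched in C on a desktop machine (assuming IEEE 754 binary64 doubles and a C library, such as glibc, that prints exact decimal expansions when asked for enough digits):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    const char *lit = "3.1415926535897931159979634685441851615905761718750";
    double d = 3.1415926535897931159979634685441851615905761718750;

    /* Raw IEEE 754 bit pattern; for pi this is 0x400921FB54442D18. */
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    printf("bits: 0x%016llX\n", (unsigned long long)bits);

    /* If the header's literal is the exact decimal expansion of a binary64
       value, printing the double back with the same 49 decimals must
       reproduce the literal digit for digit. */
    char buf[64];
    snprintf(buf, sizeof buf, "%.49f", d);
    printf("round-trip:  %s\n", buf);
    printf("exact match: %s\n", strcmp(buf, lit) == 0 ? "yes" : "no");
    return 0;
}

If the header's constant were not an exact double, the round-trip string would differ somewhere in the trailing digits.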
In other words, this is the best binary (double) representation of the 16-to-19-digit decimal value of pi, converted back to decimal, and that conversion can yield additional digits.
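That is easy to reproduce (same assumptions as above: IEEE 754 binary64 and a libc that prints exact expansions):

#include <stdio.h>

int main(void)
{
    /* The compiler first rounds this "textbook" literal to the nearest
       binary64 double; printing that double back in decimal then yields
       the dsp.h digit string, extra digits and all. */
    printf("%.49f\n", 3.14159265358979323846264338327950288419716939937510);
    return 0;
}

On such a system this prints 3.1415926535897931159979634685441851615905761718750, exactly the string in dsp.h.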