I have to code in C for my college exams, and I am in the habit of declaring `double` variables instead of `float` variables. Is it a bad habit? Can they deduct marks for it? (We never exceed the range of a `float`.)
I think using `double` is better than `float` because an unsuffixed literal like `0.71` is a double literal, so if we declare `float one = 1.1;` we are converting a `double` to a `float`.
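For example, a small program that shows the conversion as I understand it (the `f` suffix is what makes a literal a `float`):

```c
#include <stdio.h>

int main(void)
{
    float  f = 1.1;   /* 1.1 is a double literal, narrowed to float on assignment */
    float  g = 1.1f;  /* the f suffix makes the literal a float to begin with     */
    double d = 1.1;   /* no conversion: the literal is already a double           */

    /* f and g print the same value; d keeps more precision */
    printf("%.17f\n%.17f\n%.17f\n", f, g, d);
    return 0;
}
```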
In much discussion on this website, participants prefer `double` over `float` and recommend replacing `float` with `double` when they see it in a question. People expect the performance of floating-point calculations on Windows desktops and servers to be identical between the two floating-point types.
Some smartphone and embedded architectures may get a performance benefit from using `float` rather than `double`.

With IEEE 754, `float` is 32 bits (24-bit mantissa, 8-bit exponent) and `double` is 64 bits (53-bit mantissa, 11-bit exponent). The exponent size determines the maximum (and minimum) values for each type; the mantissa size determines the maximum level of precision for each type.
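Both widths are visible through the limits in `<float.h>`; a minimal sketch that prints them:

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* Mantissa width in bits: 24 for float, 53 for double under IEEE 754 */
    printf("float:  %2d mantissa bits, max %g, min normal %g\n",
           FLT_MANT_DIG, FLT_MAX, FLT_MIN);
    printf("double: %2d mantissa bits, max %g, min normal %g\n",
           DBL_MANT_DIG, DBL_MAX, DBL_MIN);
    return 0;
}
```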
Unless you use the type-generic macros in `<tgmath.h>` (or the `f`-suffixed variants such as `cosf`) for "narrow" floating-point calculations, most mathematical functions (of the kind for which you include `math.h`) take `double` parameters and return `double` rather than `float`. A C++ declaration like `const auto cosine = cos(theta)` will declare a `double` called `cosine`. Trigonometric, rounding, and exponentiation functions take `double` as parameters and return `double`.
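For example, a small C sketch (assuming a C99 library, which provides `cosf`):

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    float theta = 0.1f;

    double d = cos(theta);   /* theta is promoted to double; cos returns double */
    float  f = cosf(theta);  /* cosf takes and returns float (C99)              */

    printf("cos : %.17f\ncosf: %.17f\n", d, f);
    return 0;
}
```

On Unix-like systems you may need to link with `-lm`.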
OpenGL is an exception where `float` is often preferred because the API handles floats. With graphics calculations that require precision, you might do your own calculation with `double` and cast down to `float`.
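A minimal sketch of that pattern (the helper name and the normalization math are purely illustrative, not any particular OpenGL call):

```c
#include <stdio.h>
#include <math.h>

/* Hypothetical helper: do the precision-sensitive work in double,
 * then narrow once at the boundary to the float the graphics API expects. */
static float normalized_x(double x, double y, double z)
{
    double len = sqrt(x * x + y * y + z * z);  /* accumulate in double */
    return (float)(x / len);                   /* single cast down to float */
}

int main(void)
{
    float vx = normalized_x(1.0, 2.0, 2.0);    /* exact answer is 1/3 */
    printf("%.9f\n", vx);
    return 0;
}
```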
See also: *Should I use double or float?* and *What is the difference between float and double?*

C99 draft standard (N1256): http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf