Can someone tell me why this is happening?
import Foundation

let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.usesGroupingSeparator = false
formatter.roundingMode = .floor
formatter.minimumFractionDigits = 2
formatter.maximumFractionDigits = 2
let v = 36
let scale = 10
let float = formatter.string(from: NSNumber(value: Float(v) / Float(scale)))!
let double = formatter.string(from: NSNumber(value: Double(v) / Double(scale)))!
print(float) // 3.59
print(double) // 3.60
When I use Float the result is 3.59 (the wrong result, in my opinion), and when I use Double the result is 3.60.
I know it is something related to the .floor roundingMode, but I don't fully understand the reason.
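Printing the quotients with extra precision shows that the two values already differ before the formatter ever sees them:

import Foundation

let v = 36
let scale = 10
// The Float quotient lands just below 3.6; the Double quotient just above it.
print(String(format: "%.10f", Float(v) / Float(scale)))   // 3.5999999046
print(String(format: "%.17f", Double(v) / Double(scale))) // 3.60000000000000009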
Neither Float nor Double can represent 3.6 exactly. The closest Float to 36/10 is slightly below 3.6 (about 3.5999999), so flooring at two fraction digits yields 3.59, while the closest Double is slightly above 3.6 (about 3.6000000000000001), so it yields 3.60.
If you would like to preserve your fraction-digit precision, it is better to use Swift's native Decimal type, which represents decimal fractions exactly. You can use the Decimal
init(sign: FloatingPointSign, exponent: Int, significand: Decimal) initializer, passing your scale's power of ten as the exponent and your value as the significand. Just make sure to negate the exponent.
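A minimal sketch of that approach, reusing the formatter configuration from the question (the exponent here is -1 because scale = 10 = 10^1):

import Foundation

let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.usesGroupingSeparator = false
formatter.roundingMode = .floor
formatter.minimumFractionDigits = 2
formatter.maximumFractionDigits = 2

let v = 36
// 36 × 10^-1 = 3.6, stored exactly as a Decimal (no binary rounding error)
let decimal = Decimal(sign: .plus, exponent: -1, significand: Decimal(v))
print(decimal)                                      // 3.6
print(formatter.string(from: decimal as NSNumber)!) // 3.60

Since the Decimal holds exactly 3.6, the .floor rounding mode has no hidden 3.5999… value to truncate, and the Float/Double discrepancy disappears.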