I have a C program that performs a series of operations on data and displays it, and I need this data to be as accurate as possible. Some example data could be the following:
float a = 12300.395;
float b = 123000.395;
float c = 432100.395;
float d = 1234000.395;
float e = 4321000.395;
As you can see, they all have the same decimal part. But if I try to display them on the screen with two decimal places, I get the following:
printf("%.2f\n", a); // 12300.39
printf("%.2f\n", b); // 123000.40
printf("%.2f\n", c); // 432100.41
printf("%.2f\n", d); // 1234000.38
printf("%.2f\n", e); // 4321000.50
Each one rounds differently. And if I print them without any rounding at all, I get the following:
printf("%f\n", a); // 12300.394531
printf("%f\n", b); // 123000.398438
printf("%f\n", c); // 432100.406250
printf("%f\n", d); // 1234000.375000
printf("%f\n", e); // 4321000.500000
My goal would be for it to round to 2 decimal places, so that .395 ends up as .40 but .394 ends up as .39. I understand that one solution would be to change all the program's data from float to double, to get double the precision, although even then it is not perfect for me, since it rounds everything to .40 except the last one (variable 'e'), which comes out as .39. My questions are:
- Is there any chance to do this with floats?
- Why does it invent those digits in the decimal part that make the rounding fail?
Thank you very much for your help.
Recommended initial reading: Why can't my programs do arithmetic correctly?
The float type has a typical precision of 6 significant decimal digits. That is, only the first 6 digits of the number are meaningful; the rest is garbage. If you want more precision you have to use double, which typically gives about 15 significant decimal digits. To find out the exact capabilities of each floating-point type on your platform, you can use the macros in the float.h header (limits.h covers the integer types).