Your code isn’t C, but this is:
#include <stdio.h>

int main(void)
{
    float add = 0;
    int count[] = { 3, 2, 1, 3, 1, 2, 3, 3, 1, 2, 1 };
    for (int i = 0; i < 11; i++) {
        add += 1 / ((float) count[i] + 1);
    }
    printf("%f\n", add);
    return 0;
}
I’ve executed this code twice: once with

    add += 1 / ((float) count[i] + 1);

and once with

    add += 1.0 / ((float) count[i] + 1);

In both cases, printf("%f\n", add); prints 4.000000.
However, when I print the individual bits of the variable add, I get 01000000011111111111111111111111 (3.9999998) in the first case and 01000000100000000000000000000000 (4.0) in the second.
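(A minimal sketch of how such a bit dump can be produced; this helper is mine, not code from the question, and it assumes 32-bit IEEE-754 floats:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Print the 32 bits of a float, most significant bit first. */
    static void print_float_bits(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);   /* well-defined way to reinterpret the bytes */
        for (int i = 31; i >= 0; i--)
            putchar(((bits >> i) & 1) ? '1' : '0');
        putchar('\n');
    }

    int main(void)
    {
        print_float_bits(3.9999998f);   /* 01000000011111111111111111111111 */
        print_float_bits(4.0f);         /* 01000000100000000000000000000000 */
        return 0;
    }
)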
As pointed out by phuclv, this is because 1.0 is a double, so the division is carried out in double precision (each += result is then rounded back to float), whereas with 1 the whole calculation is done in single precision because of the cast to float. The exact sum happens to be 4 (four 1/4 terms, three 1/3 terms and four 1/2 terms), and the two precisions land one float ULP apart: 3.9999998 versus 4.0. Both print as 4.000000 because %f rounds to six decimal places.
If you change the cast to (double) in the first expression, or change 1.0 into 1.0f in the second, both versions will produce the same result.
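A quick way to see both behaviours side by side (a sketch using the array from the question; the variable names are mine):

    #include <stdio.h>

    int main(void)
    {
        int count[] = { 3, 2, 1, 3, 1, 2, 3, 3, 1, 2, 1 };
        float all_single = 0;   /* 1.0f keeps every operation in float      */
        float all_double = 0;   /* the (double) cast forces double division */
        for (int i = 0; i < 11; i++) {
            all_single += 1.0f / ((float) count[i] + 1);
            all_double += 1 / ((double) count[i] + 1);
        }
        printf("%.7f\n%.7f\n", all_single, all_double);   /* 3.9999998 and 4.0000000 */
        return 0;
    }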