`4.2352941176470588235294117647` contains 29 digits. `decimal` is defined to have 28-29 significant digits. You can't store a more accurate number in a `decimal`.
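For illustration, a small sketch of where that 29-digit limit comes from; it assumes the calculation was done as a pure `decimal` division (the `m` suffixes and variable name are mine):

```csharp
using System;

class DecimalPrecision
{
    static void Main()
    {
        // decimal's 96-bit mantissa spans roughly 28-29 decimal digits:
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335 (29 digits)

        // The exact quotient 4.23529411764705882352941176470588... never terminates,
        // so decimal rounds it to fit that mantissa.
        decimal quotient = 18m / 4.25m;
        Console.WriteLine(quotient); // 4.2352941176470588235294117647
    }
}
```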
What field of engineering or science are you working in where the 30th and later digits are significant to the accuracy of the overall calculation?
(It would also, possibly, help if you'd shown some more actual code. The only code you've shown is `18 / 4.25`, which can't be an actual expression in your code, since the second number is a `double` literal, and you can't assign the result of this expression to a `decimal` without a cast.)
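To make that concrete, a hedged sketch of the difference (the variable names are invented, and this assumes the real code looks roughly like this):

```csharp
using System;

class LiteralTypes
{
    static void Main()
    {
        // 18 / 4.25 is double arithmetic, because 4.25 is a double literal.
        double asDouble = 18 / 4.25;

        // decimal fromDouble = 18 / 4.25;        // does not compile: no implicit double -> decimal conversion
        decimal fromDouble = (decimal)(18 / 4.25); // double's precision is already baked in before the cast

        decimal asDecimal = 18m / 4.25m;           // the m suffix keeps the whole computation in decimal

        Console.WriteLine(asDouble);
        Console.WriteLine(fromDouble);
        Console.WriteLine(asDecimal); // 4.2352941176470588235294117647
    }
}
```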
If you need arbitrary precision, then there isn't a standard "BigRational" type, but there is a `BigInteger`. You could use that to construct a `BigRational` type if you need one (storing numerator and denominator as two separate integers). One guess as to why there isn't a standard type yet is that decisions on when to, e.g., normalize such rationals may affect performance or equality comparisons.