I promised myself I’d stop posting trivialities, but I feel compelled to discuss double versus decimal again. I see a lot of people use double or float for money amounts. Don’t! Double and float are binary floating-point types, so common money fractions like 0.01 have no exact representation, and the rounding error compounds with every operation. The decimal type is base-10 and carries 28–29 significant digits (versus double’s 15–17), so decimal fractions come out exact. If you are wondering where those pennies went, well, it’s time to switch to decimal.
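Here’s a minimal sketch of what I mean. It just sums a thousand cents both ways; the class and variable names are mine, but the behavior is standard .NET:

```csharp
using System;

class DoubleVsDecimal
{
    static void Main()
    {
        // double is binary floating point: 0.01 has no exact
        // representation, so the error compounds with each addition.
        double doubleTotal = 0.0;
        for (int i = 0; i < 1000; i++) doubleTotal += 0.01;
        Console.WriteLine(doubleTotal.ToString("R")); // prints something like 9.9999999999998, not 10

        // decimal is base-10: 0.01m is stored exactly, so the sum is exact.
        decimal decimalTotal = 0m;
        for (int i = 0; i < 1000; i++) decimalTotal += 0.01m;
        Console.WriteLine(decimalTotal); // 10.00
    }
}
```

Note that decimal even preserves the scale, so you get “10.00” back, not “10”.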
You should know that there are some performance penalties associated with this, as it is a 128-bit value whose arithmetic runs in software rather than on the FPU. I also ran into an interesting detail in the docs: the decimal struct is not guaranteed to be thread safe on all platforms! What gives? Apparently the value is too large to be assigned in one atomic operation: the runtime only guarantees atomic reads and writes for values no wider than the native word size, so writing a 128-bit decimal takes multiple memory operations and a concurrent reader can catch it half-updated. Does anyone know more about the mechanics of this? Please share!
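In the meantime, here’s a sketch of the usual workaround as I understand it: guard any decimal shared across threads with a lock. The Account class and its members are just illustrative names of mine:

```csharp
using System;
using System.Threading;

class Account
{
    private readonly object _sync = new object();
    private decimal _balance; // 128-bit: reads and writes are not atomic

    // Without the lock, a reader running concurrently with a writer
    // could observe a torn, half-written value of _balance.
    public void Deposit(decimal amount)
    {
        lock (_sync) { _balance += amount; }
    }

    public decimal Balance
    {
        get { lock (_sync) { return _balance; } }
    }
}

class Program
{
    static void Main()
    {
        var account = new Account();
        var threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 10_000; j++) account.Deposit(0.01m);
            });
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();
        Console.WriteLine(account.Balance); // 400.00
    }
}
```

Worth noting: Interlocked has no decimal overloads, so a lock (or storing whole cents in a long) seems to be the standard way out.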