Is floating-point math consistent in C#? Can it be?


No, this is not another "Why is (1/3.0)*3 != 1" question.

I've been reading about floating point a lot lately; specifically, how the same calculation might give different results on different architectures or with different optimization settings.

This is a problem for video games that store replays, or that are peer-to-peer networked (as opposed to server-client), both of which rely on all clients generating exactly the same results every time they run the program. A small discrepancy in one floating-point calculation can lead to a drastically different game state on different machines (or even on the same machine!).

This happens even amongst processors that "follow" IEEE-754, primarily because some processors (namely x86) use double extended precision. That is, they use 80-bit registers to do all the calculations, then truncate to 64 or 32 bits, leading to different rounding results than machines which use 64 or 32 bits for the calculations.

I've seen several solutions to this problem online, but all for C++, not C#:

  • Disable double extended-precision mode (so that all double calculations use IEEE-754 64 bits) using _controlfp_s (Windows), _FPU_SETCW (Linux?), or fpsetprec (BSD); a P/Invoke sketch of the Windows call follows this list.
  • Always run the same compiler with the same optimization settings, and require all users to have the same CPU architecture (no cross-platform play). Because my "compiler" is actually the JIT, which may optimize differently every time the program is run, I don't think this is possible.
  • Use fixed-point arithmetic, and avoid float and double altogether. decimal would work for this purpose, but would be much slower, and none of the System.Math library functions support it.
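
For reference, disabling extended precision from C# would look something like the P/Invoke sketch below. This assumes the legacy msvcrt CRT is loadable on Windows, and it is only an illustration, not a complete fix, since the JIT and other native code are free to change the control word again:

    using System.Runtime.InteropServices;

    static class FpuControl
    {
        // Legacy MSVC CRT export (Windows only); assumes msvcrt.dll is loadable.
        [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
        private static extern uint _controlfp(uint newControl, uint mask);

        // Constants from the MSVC <float.h>.
        private const uint _MCW_PC = 0x00030000; // precision-control mask
        private const uint _PC_53  = 0x00010000; // 53-bit significand (double precision)

        // Ask the x87 FPU to round intermediates to double precision instead of
        // 80-bit extended precision. Has no effect on code compiled for SSE.
        public static void ForceDoublePrecision() => _controlfp(_PC_53, _MCW_PC);
    }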

So, is this even a problem in C#? What if I only intend to support Windows (not Mono)?

If it is, is there any way to force my program to run at normal double-precision?

If not, are there any libraries that would help keep floating-point calculations consistent?

10 Answers
骚的不知所云

Well, here is how I would first attempt to do this:

  1. Create an ATL DLL project that has a simple object in it to be used for your critical floating point operations. Make sure to compile it with flags that disable the use of any non-x87 hardware for floating point.
  2. Create functions that call floating point operations and return the results; start simple, and then, if it's working for you, you can always increase the complexity later to meet your performance needs.
  3. Put the _controlfp calls around the actual math to ensure that it's done the same way on all machines.
  4. Reference your new library and test to make sure it works as expected.

(I believe you can just compile to a 32-bit .dll and then use it with either x86 or AnyCpu, or likely only by targeting x86 on a 64-bit system.)

Then, assuming it works, should you want to use Mono I imagine you should be able to replicate the library on other x86 platforms in a similar manner (not COM of course; although, perhaps, with wine? a little out of my area once we go there though...).

Assuming you can make it work, you should be able to set up custom functions that do multiple operations at once to fix any performance issues, and you'll have floating point math that gives you consistent results across platforms with a minimal amount of code written in C++, leaving the rest of your code in C#.
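
If you go down this road, the C# side of such a wrapper might look roughly like the sketch below. The DLL name and the exported function are hypothetical placeholders for whatever the native project actually exposes, shown here as a flat P/Invoke export rather than a COM object for brevity:

    using System.Runtime.InteropServices;

    static class NativeFp
    {
        // Hypothetical export from the native project described above; internally it
        // sets the FPU control word, does the math, and restores the previous state.
        [DllImport("FpDeterministic.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern double MultiplyAdd(double a, double b, double c);
    }

    // Usage (hypothetical): double r = NativeFp.MultiplyAdd(x, y, z);  // x * y + z, computed natively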

倾城一夜雪

Checking the links in the other answers makes it clear you'll never have a guarantee of whether floating point is "correctly" implemented or whether you'll always receive a certain precision for a given calculation, but perhaps you could make a best effort by (1) truncating all calculations to a common minimum (e.g., if different implementations will give you 32 to 80 bits of precision, always truncating every operation to 30 or 31 bits), and (2) keeping a table of a few test cases at startup (borderline cases of add, subtract, multiply, divide, sqrt, cosine, etc.) and, if the implementation calculates values matching the table, not bothering to make any adjustments.
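
A minimal sketch of idea (1), assuming you are content to mask away the low significand bits of every intermediate result (the 30-bit cutoff below is arbitrary, and this truncates rather than rounds):

    using System;

    static class CommonPrecision
    {
        // Keep only the top `keepBits` bits of the 52-bit double significand and zero
        // the rest, so extra precision from any particular implementation is discarded.
        public static double Truncate(double value, int keepBits = 30)
        {
            long bits = BitConverter.DoubleToInt64Bits(value);
            long mask = -1L << (52 - keepBits);   // clears the low (52 - keepBits) significand bits
            return BitConverter.Int64BitsToDouble(bits & mask);
        }
    }

    // Usage: wrap every operation, e.g.
    // double z = CommonPrecision.Truncate(CommonPrecision.Truncate(x) * CommonPrecision.Truncate(y));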

千与千寻千般痛.

I know of no way to make normal floating points deterministic in .NET. The JITter is allowed to create code that behaves differently on different platforms (or between different versions of .NET). So using normal floats in deterministic .NET code is not possible.

The workarounds I considered:

  1. Implement FixedPoint32 in C#. While this is not too hard (I have a half-finished implementation), the very small range of values makes it annoying to use. You have to be careful at all times so you neither overflow nor lose too much precision. In the end I found this no easier than using integers directly. (A minimal illustrative sketch follows this list.)
  2. Implement FixedPoint64 in C#. I found this rather hard to do: for some operations, 128-bit intermediate integers would be useful, but .NET doesn't offer such a type.
  3. Implement a custom 32-bit floating point. The lack of a BitScanReverse intrinsic causes a few annoyances when implementing this, but currently I think this is the most promising path.
  4. Use native code for the math operations. This incurs the overhead of a delegate call on every math operation.
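
For illustration, a bare-bones version of option 1 might look like the following. The Q16.16 layout and the names are my own choices, not the answerer's half-finished code, and the overflow checks the answer warns about are deliberately omitted:

    // Q16.16 fixed point: 16 integer bits, 16 fractional bits, stored in a plain int.
    // All operations are pure integer math, so results are identical on every platform.
    public readonly struct FixedPoint32
    {
        public readonly int Raw;                                  // raw Q16.16 bits
        private FixedPoint32(int raw) { Raw = raw; }

        public static FixedPoint32 FromInt(int value) => new FixedPoint32(value << 16);
        public static FixedPoint32 FromRaw(int raw)   => new FixedPoint32(raw);
        public double ToDouble()                      => Raw / 65536.0;

        public static FixedPoint32 operator +(FixedPoint32 a, FixedPoint32 b)
            => new FixedPoint32(a.Raw + b.Raw);
        public static FixedPoint32 operator -(FixedPoint32 a, FixedPoint32 b)
            => new FixedPoint32(a.Raw - b.Raw);
        public static FixedPoint32 operator *(FixedPoint32 a, FixedPoint32 b)
            => new FixedPoint32((int)(((long)a.Raw * b.Raw) >> 16));  // widen to 64 bits, then shift back
        public static FixedPoint32 operator /(FixedPoint32 a, FixedPoint32 b)
            => new FixedPoint32((int)(((long)a.Raw << 16) / b.Raw));  // pre-shift the dividend
    }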

I've just started a software implementation of 32-bit floating point math. It can do about 70 million additions/multiplications per second on my 2.66 GHz i3. See https://github.com/CodesInChaos/SoftFloat (obviously it's still very incomplete and buggy).

一个人的天荒地老

The C# specification (§4.1.6, Floating point types) specifically allows floating point computations to be done using precision higher than that of the result. So, no, I don't think you can make those calculations deterministic directly in .NET. Others suggested various workarounds, so you could try them.
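
For what it's worth, the same section notes that an explicit cast can be used to force a value back to the exact precision of its type. That narrows each stored result, but it still doesn't pin down how the wider intermediate was evaluated, so it isn't a determinism guarantee on its own; a small illustration:

    double a = 1.0 / 3.0;
    double b = 3.0;

    // The runtime may evaluate a * b at higher (e.g. 80-bit x87) precision;
    // the explicit cast forces the result back to a 64-bit double before it is used,
    // but the rounding of the wider intermediate is still implementation-dependent.
    double product = (double)(a * b);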

路过你的时光

According to this slightly old MSDN blog entry, the JIT will not use SSE/SSE2 for floating point; it's all x87. Because of that, as you mentioned, you have to worry about modes and flags, and in C# that's not possible to control. So using normal floating point operations will not guarantee the exact same result on every machine for your program.

To get precise reproducibility of double precision you are going to have to do software floating point (or fixed point) emulation. I don't know of C# libraries to do this.

Depending on the operations you need, you might be able to get away with single precision. Here's the idea (a short sketch follows after the list):

  • store all values you care about in single precision
  • to perform an operation:
    • expand inputs to double precision
    • do operation in double precision
    • convert result back to single precision

The big issue with x87 is that calculations might be done in 53-bit or 64-bit accuracy depending on the precision flag and whether the register spilled to memory. But for many operations, performing the operation in high precision and rounding back to lower precision will guarantee the correct answer, which implies that the answer will be guaranteed to be the same on all systems. Whether you get the extra precision won't matter, since you have enough precision to guarantee the right answer in either case.

Operations that should work in this scheme: addition, subtraction, multiplication, division, sqrt. Things like sin, exp, etc. won't work (results will usually match but there is no guarantee). "When is double rounding innocuous?" ACM Reference (paid reg. req.)
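
A minimal sketch of that scheme, keeping all state in float and routing each operation through double (the class name is just for illustration):

    using System;

    static class SingleViaDouble
    {
        // Store floats, compute in double, round back to float. For +, -, *, /
        // and sqrt, double rounding is innocuous: the double result rounds to the
        // correctly rounded float, so the answer matches whether the hardware
        // used 53-bit or 64-bit intermediates.
        public static float Add(float a, float b)  => (float)((double)a + (double)b);
        public static float Sub(float a, float b)  => (float)((double)a - (double)b);
        public static float Mul(float a, float b)  => (float)((double)a * (double)b);
        public static float Div(float a, float b)  => (float)((double)a / (double)b);
        public static float Sqrt(float a)          => (float)Math.Sqrt(a);
    }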

Hope this helps!

孤独寂梦人

Is this a problem for C#?

Yes. Different architectures are the least of your worries; different framerates and the like can lead to deviations due to inaccuracies in float representations, even if they are the same inaccuracies (e.g. same architecture, except a slower GPU on one machine).

Can I use System.Decimal?

There is no reason you can't; however, it's dog slow.

Is there a way to force my program to run in double precision?

Yes. Host the CLR runtime yourself, and compile all the necessary calls/flags (that change the behaviour of floating point arithmetic) into the C++ application before calling CorBindToRuntimeEx.

Are there any libraries that would help keep floating point calculations consistent?

Not that I know of.

Is there another way to solve this?

I have tackled this problem before; the idea is to use QNumbers. They are a form of reals that are fixed-point, but not fixed point in base 10 (decimal); rather, base 2 (binary). Because of this, the mathematical primitives on them (add, sub, mul, div) are much faster than the naive base-10 fixed points, especially if n (the number of fractional bits) is the same for both values (which in your case it would be). Furthermore, because they are integral, they have well-defined results on every platform.

Keep in mind that framerate can still affect these, but it is not as bad and is easily rectified using synchronisation points.

Can I use more mathematical functions with QNumbers?

Yes, round-trip a decimal to do this. Furthermore, you should really be using lookup tables for the trig (sin, cos) functions, as those can really give different results on different platforms; and if you code them correctly, they can use QNumbers directly.
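
A hedged sketch of such a lookup table, assuming a Q16.16 representation like the fixed-point sketch earlier (the table size and the turn-based angle convention are arbitrary choices; for strict determinism you would ship a precomputed table rather than build it with System.Math at startup):

    using System;

    static class FixedSin
    {
        private const int FracBits = 16;                // Q16.16
        private const int One = 1 << FracBits;
        private const int TableSize = 4096;             // samples over one full turn
        private static readonly int[] Table = BuildTable();

        private static int[] BuildTable()
        {
            // Built with System.Math here for brevity; rounding to Q16.16 hides the
            // tiny cross-platform differences in Math.Sin, but a baked-in table is safer.
            var t = new int[TableSize];
            for (int i = 0; i < TableSize; i++)
                t[i] = (int)Math.Round(Math.Sin(2.0 * Math.PI * i / TableSize) * One);
            return t;
        }

        // angleTurns is a Q16.16 fraction of a full turn (1.0 == 2*pi); returns sin in Q16.16.
        public static int Sin(int angleTurns)
        {
            int index = (int)(((long)angleTurns * TableSize) >> FracBits) & (TableSize - 1);
            return Table[index];
        }
    }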
