floating point precision

Posted 2019-01-15 21:02

I have a program written in C#, with some parts written in native C/C++. I use doubles to calculate some values, and sometimes the result is wrong because the precision is too small. After some investigation I figured out that someone is setting the floating-point precision to 24 bits. My code works fine when I reset the precision to at least 53 bits (using _fpreset or _controlfp), but I still need to figure out who is responsible for setting the precision to 24 bits in the first place.

Any ideas how I could achieve this?
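
For reference, this is roughly how I check and reset the precision from the native side (a minimal sketch assuming MSVC on x86 and the _controlfp_s API from <float.h>; _fpreset would also restore the default):

    #include <cstdio>
    #include <float.h>

    int main()
    {
        unsigned int cw = 0;
        _controlfp_s(&cw, 0, 0);                 // query the control word, change nothing

        if ((cw & _MCW_PC) == _PC_24)
            std::printf("FPU precision is currently 24-bit\n");

        _controlfp_s(&cw, _PC_53, _MCW_PC);      // restore 53-bit (double) precision
        return 0;
    }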

3 Answers
Emotional °昔 · 2019-01-15 21:20

Is your code using DirectX or XNA at all? I've certainly heard that there are problems due to that - some DirectX initialization code (possibly only in the managed wrapper?) reduces the precision.

Evening l夕情丶 · 2019-01-15 21:30

This is caused by the default Direct3D device initialisation. You can tell Direct3D not to mess with the FPU precision by passing the D3DCREATE_FPU_PRESERVE flag to CreateDevice. There is also a managed code equivalent to this flag (CreateFlags.FpuPreserve) if you need it.
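For example, native device creation with the flag might look roughly like this (a sketch only; window setup and error handling are omitted, and the helper name is made up):

    #include <d3d9.h>

    // Creates a windowed D3D9 device while preserving the FPU precision.
    IDirect3DDevice9* CreateDeviceFpuPreserve(HWND hWnd)
    {
        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);

        D3DPRESENT_PARAMETERS pp = {};
        pp.Windowed = TRUE;
        pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
        pp.hDeviceWindow = hWnd;

        IDirect3DDevice9* device = nullptr;
        d3d->CreateDevice(
            D3DADAPTER_DEFAULT,
            D3DDEVTYPE_HAL,
            hWnd,
            D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_FPU_PRESERVE,  // keep FPU precision
            &pp,
            &device);
        return device;
    }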

More information can be found at Direct3D and the FPU.

forever°为你锁心 · 2019-01-15 21:37

What about doing a binary search through your program, partitioning it and checking which calls reduce the precision?
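
For instance, a small helper like this (a sketch, again assuming the MSVC _controlfp_s CRT API; the function name is made up) could be dropped before and after suspect calls to narrow down where the control word changes:

    #include <cstdio>
    #include <float.h>

    // Logs the current x87 precision control setting with a caller-supplied label.
    void LogFpuPrecision(const char* where)
    {
        unsigned int cw = 0;
        _controlfp_s(&cw, 0, 0);   // read-only query of the control word

        const char* pc = "unknown";
        switch (cw & _MCW_PC)
        {
        case _PC_24: pc = "24-bit"; break;
        case _PC_53: pc = "53-bit"; break;
        case _PC_64: pc = "64-bit"; break;
        }
        std::printf("[%s] FPU precision: %s\n", where, pc);
    }

    // Usage: LogFpuPrecision("before InitGraphics"); ... LogFpuPrecision("after InitGraphics");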
