Calling DLL From VB6 and C# Gives Slightly Different Results

Posted 2019-07-07 03:53

I have a proprietary library in a DLL (I don't have the code) that has been used for years from within VB6. I'm trying to upgrade the VB6 code to C#, and hope to make the C# code exactly replicate the VB6 behavior. I'm having trouble making the double precision results of some calculations done in the DLL match exactly when called from each environment.

In VB6 I have something like this (note: the file reading and writing is there to ensure the exact same values are used and captured):

Dim a As Double, b As Double, c As Double, d As Double
Open "C:\input.txt" For Binary As #1
Get #1, , a
Get #1, , b
Get #1, , c
Get #1, , d
Close #1
Dim t As New ProprietaryLib.Transform
t.FindLine a, b, c, d
Open "C:\output.txt" For Binary As #1
Put #1, , t.Slope
Put #1, , t.Intercept
Close #1

In C# I have something like this:

System.IO.BinaryReader br = new System.IO.BinaryReader(System.IO.File.Open(@"C:\input.txt", System.IO.FileMode.Open));
double a, b, c, d;
a = br.ReadDouble();
b = br.ReadDouble();
c = br.ReadDouble();
d = br.ReadDouble();
br.Close();
ProprietaryLib.Transform t = new ProprietaryLib.Transform();
t.FindLine(a, b, c, d);
System.IO.BinaryWriter bw = new System.IO.BinaryWriter(System.IO.File.Open(@"C:\output2.txt", System.IO.FileMode.Create));
bw.Write(t.Slope);
bw.Write(t.Intercept);
bw.Close();

I have verified that the input is read identically in both environments (confirmed by re-writing the binary values to files), so identical double precision numbers are being fed to the DLL. The output values are very similar but not identical: they are sometimes off in the least significant digits, out in the noise of the 15th–17th decimal place, and writing them out in binary confirms that the bit patterns differ. Does anyone have any advice on why these values might not be calculated identically, or how I might fix or debug this?

1 Answer
Answered 2019-07-07 04:57

This probably happens because of the different precision used for intermediate floating-point results:

  • VB6 by default keeps intermediate results at the FPU's internal (80-bit extended) precision, for (back then) performance reasons.
  • .NET complies with the IEEE 754 standard for binary floating-point arithmetic.

You can try compiling the VB6 application with the /Op ("improve float consistency") option.

By default, the compiler uses the coprocessor’s 80-bit registers to hold the intermediate results of floating-point calculations. This increases program speed and decreases program size. However, because the calculation involves floating-point data types that are represented in memory by less than 80 bits, carrying the extra bits of precision (80 bits minus the number of bits in a smaller floating-point type) through a lengthy calculation can produce inconsistent results. (Source: MSDN)
