I have a proprietary library in a DLL (I don't have the code) that has been used for years from within VB6. I'm trying to upgrade the VB6 code to C#, and hope to make the C# code exactly replicate the VB6 behavior. I'm having trouble making the double precision results of some calculations done in the DLL match exactly when called from each environment.
In VB6 I have something like this (note that the file reading and writing is there to make sure the exact same values are used and generated):
Dim a As Double, b As Double, c As Double, d As Double
Open "C:\input.txt" For Binary As #1
Get #1, , a
Get #1, , b
Get #1, , c
Get #1, , d
Close #1
Dim t As New ProprietaryLib.Transform
t.FindLine a, b, c, d
Open "C:\output.txt" For Binary As #1
Put #1, , t.Slope
Put #1, , t.Intercept
Close #1
In C# I have something like this:
System.IO.BinaryReader br = new System.IO.BinaryReader(System.IO.File.Open(@"C:\input.txt", System.IO.FileMode.Open));
double a, b, c, d;
a = br.ReadDouble();
b = br.ReadDouble();
c = br.ReadDouble();
d = br.ReadDouble();
br.Close();
ProprietaryLib.Transform t = new ProprietaryLib.Transform();
t.FindLine(a, b, c, d);
System.IO.BinaryWriter bw = new System.IO.BinaryWriter(System.IO.File.Open(@"C:\output2.txt", System.IO.FileMode.Create));
bw.Write(t.Slope);
bw.Write(t.Intercept);
bw.Close();
I have verified that the input is being read identically (by re-writing the binary values to files), so identical double-precision numbers are being fed to the DLL. The output values are very similar but not identical: they are sometimes off in the least significant parts of the numbers, out in the noise of the 15th-17th decimal place, and writing them back out in binary confirms that the bit patterns differ. Does anyone have any advice on why these values might not be calculated quite identically, or on how I might fix or debug this?
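For reference, here is a minimal sketch of how the two outputs could be compared bit for bit. It assumes the two files produced by the code above (C:\output.txt from VB6, C:\output2.txt from C#) and simply dumps the raw 64-bit patterns and how far apart they are:

using System;
using System.IO;

class CompareOutputs
{
    static void Main()
    {
        double[] vb6 = ReadPair(@"C:\output.txt");   // written by the VB6 run
        double[] cs  = ReadPair(@"C:\output2.txt");  // written by the C# run

        Compare("Slope", vb6[0], cs[0]);
        Compare("Intercept", vb6[1], cs[1]);
    }

    static double[] ReadPair(string path)
    {
        using (var br = new BinaryReader(File.OpenRead(path)))
        {
            return new[] { br.ReadDouble(), br.ReadDouble() };
        }
    }

    static void Compare(string name, double x, double y)
    {
        // Reinterpret each double as its raw 64-bit pattern.
        long bx = BitConverter.DoubleToInt64Bits(x);
        long by = BitConverter.DoubleToInt64Bits(y);

        // For finite values of the same sign, the difference between the bit
        // patterns is the distance in units in the last place (ULPs).
        Console.WriteLine("{0}: VB6={1:R}  C#={2:R}  ULP difference={3}",
            name, x, y, Math.Abs(bx - by));
    }
}

In my case the ULP difference is small but nonzero, which is exactly the behavior I am trying to eliminate.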
This probably happens because the two environments do not handle double precision identically: depending on how the calling code was compiled, intermediate results can be kept and rounded at different precisions, so the last few bits of the result can differ even with identical inputs. You can try compiling the VB6 application with the /Op ("improve float consistency") option so that intermediate results are handled more consistently.
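Another thing worth experimenting with is the x87 precision control word. Since the actual math runs inside the unmanaged DLL, the control word in effect when you call it can change how intermediates are rounded, and the two runtimes may leave it in different states. The sketch below is only a debugging experiment, not a guaranteed fix; it assumes the DLL does its arithmetic on the x87 unit and uses _controlfp from msvcrt.dll to switch the precision setting:

using System;
using System.Runtime.InteropServices;

static class FpuControl
{
    // _controlfp from the Microsoft C runtime adjusts the x87 control word
    // of the current thread (precision control, rounding mode, etc.).
    [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern uint _controlfp(uint newControl, uint mask);

    // Precision-control constants from float.h.
    const uint _MCW_PC = 0x00030000;  // precision-control mask
    const uint _PC_64  = 0x00000000;  // 64-bit (extended) mantissa
    const uint _PC_53  = 0x00010000;  // 53-bit (double) mantissa

    // Experiment: keep intermediates at extended precision, which is what
    // natively compiled callers often run under.
    public static void UseExtendedPrecision()
    {
        _controlfp(_PC_64, _MCW_PC);
    }

    // The reverse experiment: force 53-bit (plain double) intermediates.
    public static void UseDoublePrecision()
    {
        _controlfp(_PC_53, _MCW_PC);
    }
}

Calling FpuControl.UseExtendedPrecision() (or UseDoublePrecision()) just before t.FindLine(a, b, c, d) and re-running the comparison should show whether precision control is what is changing the low-order bits. Note that this only matters if the DLL uses x87 instructions, which is typical for 32-bit code; SSE2 double arithmetic is not affected by the precision-control bits.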