My copy of VS2013 Ultimate takes 60+ seconds to compile this code:
using System;

class Program
{
    static void Main(string[] args)
    {
        double dichotomy = Dichotomy(
            d =>
            {
                // Returns -size when a byte[size] can be allocated, 0 when it cannot.
                try
                {
                    int size = (int) d;
                    byte[] b = new byte[size];
                    return -b.Length;
                }
                catch (Exception)
                {
                    return 0;
                }
            },
            0,
            int.MaxValue,
            1);
        Console.WriteLine(dichotomy);
        Console.ReadKey();
    }

    // Narrows [a, b] until it is shorter than epsilon, keeping the half where func is smaller.
    private static double Dichotomy(
        Func<double, double> func,
        double a,
        double b,
        double epsilon)
    {
        double delta = epsilon / 10;
        while (b - a >= epsilon)
        {
            double middle = (a + b) / 2;
            double lambda = middle - delta, mu = middle + delta;
            if (func(lambda) < func(mu))
                b = mu;
            else
                a = lambda;
        }
        return (a + b) / 2;
    }
}
But if I replace double with int, it compiles immediately. How can this be explained?
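For reference, here is the substitution I mean, a sketch assuming every double in Dichotomy (and in the delegate it takes) becomes int; note that delta then truncates to 0 with epsilon = 1, but the point here is compile time, not numerics:

    private static int Dichotomy(
        Func<int, int> func,
        int a,
        int b,
        int epsilon)
    {
        // Integer division: with epsilon = 1 this is 0.
        int delta = epsilon / 10;
        while (b - a >= epsilon)
        {
            int middle = (a + b) / 2;
            int lambda = middle - delta, mu = middle + delta;
            if (func(lambda) < func(mu))
                b = mu;
            else
                a = lambda;
        }
        return (a + b) / 2;
    }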
I can repro this; 27 seconds on my machine. The evil-doer is MsMpEng.exe: it burns 100% of a core for that long, which is easy to see in Task Manager's Processes tab.
This is the Windows Defender service, the one that actually performs the malware scans. Disabling it by unticking the "Turn on real-time protection" option instantly fixes the delay. So does adding the path where I store projects to the "Excluded file locations" box, probably your preferred approach.
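If you'd rather confirm the culprit from code than eyeball Task Manager, a minimal sketch of my own along these lines (run it while the slow build is in progress; the names and the Take(5) cutoff are arbitrary) lists the processes that have accumulated the most CPU time:

    using System;
    using System.Diagnostics;
    using System.Linq;

    class CpuHogs
    {
        static void Main()
        {
            // Snapshot all processes and sort by total CPU time consumed so far.
            // TotalProcessorTime throws for processes we lack access to, hence the try/catch.
            var hogs = Process.GetProcesses()
                .Select(p =>
                {
                    try { return new { p.ProcessName, Cpu = p.TotalProcessorTime }; }
                    catch { return null; }
                })
                .Where(x => x != null)
                .OrderByDescending(x => x.Cpu)
                .Take(5);

            foreach (var h in hogs)
                Console.WriteLine("{0,-20} {1}", h.ProcessName, h.Cpu);
        }
    }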
I'd hate to have to guess at the underlying reason, but I have to assume that your source code is triggering a malware rule. Not a great explanation; I don't see the delay when I target a .NET version < 4.0. Okay, I give up :)
I can't say authoritatively because it's been 20+ years since I fiddled at the assembly code level, but I can easily believe this.
The difference between IEEE-standard floating-point operations and the ones a processor actually implements often forces library routines to be linked in to do the translation, while integer math can simply use the CPU instructions. When the IEEE defined the standard, it made some choices that were very uncommon in implementations, and that were, especially that long ago, much more expensive to implement in microcode; and of course current PC systems are built around chips descended from the 80387 and 80486, which predate the standard.
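To make that contrast concrete, here is a small illustration of my own (not from either post) of the IEEE-mandated behaviours, special values, subnormals, exact rounding rules, that plain integer arithmetic has no counterpart for and that were expensive to put in hardware at the time:

    using System;

    class IeeeQuirks
    {
        static void Main()
        {
            double zero = 0.0;
            Console.WriteLine(1.0 / zero);        // positive infinity, not an error
            Console.WriteLine(zero / zero);       // NaN
            Console.WriteLine(0.1 + 0.2 == 0.3);  // False: binary rounding
            Console.WriteLine(double.Epsilon);    // smallest positive subnormal double

            int izero = 0;
            // The integer counterpart simply throws instead:
            try { Console.WriteLine(1 / izero); }
            catch (DivideByZeroException) { Console.WriteLine("integer division by zero throws"); }
        }
    }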
So if I'm right, the increased time comes from pulling an extra chunk of library code into the link, and linking is a big part of the build time, one that tends to grow multiplicatively as relocatable chunks are added.
Clang on Linux might or might not show the same slowdown; if it avoids it, then, extending my guesswork even further, I'd put that down to the omnipresent shared-memory libc you get there and the linker optimizations around it.