Make sure you run outside of the IDE. That is key.
-edit- I LOVE SLaks' comment. "The amount of misinformation in these answers is staggering." :D
Calm down, guys. Pretty much all of you were wrong. I DID make optimizations; it turns out whatever optimizations I made weren't good enough.
I ran the code under GCC using gettimeofday (I'll paste the code below), built with g++ -O2 file.cpp, and got slightly faster results than C#.
Maybe MS didn't create the optimizations needed in this specific case, but after downloading and installing MinGW I tested and found the speeds to be near identical.
Justicle seems to be right. I could have sworn I used clock() on my PC to count and found it was slower, but problem solved. C++ isn't almost twice as slow with the MS compiler.
When my friend informed me of this I couldn't believe it, so I took his code and put some timers on it.
Instead of Boo I used C#. I constantly got faster results in C#. Why? The .NET version took nearly half the time no matter what number I used.
C++ version (bad version):
#include <iostream>
#include <stdio.h>
#include <intrin.h>
#include <windows.h>

using namespace std;

int fib(int n)
{
    if (n < 2) return n;
    return fib(n - 1) + fib(n - 2);
}

int main()
{
    while (1)
    {
        int n;
        //cin >> n;
        n = 41;
        if (n < 0) break;

        // __rdtsc() returns a raw CPU cycle count, not a time value;
        // dividing by 1,000,000 does NOT yield milliseconds.
        __int64 start = __rdtsc();
        int res = fib(n);
        __int64 end = __rdtsc();

        cout << res << endl;
        cout << (float)(end - start) / 1000000 << endl;
        break;
    }
    return 0;
}
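As a side note on the rdtsc reading above: the delta is a raw cycle count, and dividing it by 1,000,000 only rescales it rather than producing milliseconds. A minimal sketch of converting cycles to time by calibrating the TSC against QueryPerformanceCounter - EstimateTscHz is a made-up helper name and the 100 ms Sleep window is an arbitrary choice, so treat this as an illustration, not a robust benchmark harness:

#include <intrin.h>
#include <windows.h>
#include <iostream>

// Estimate how many TSC cycles elapse per second by counting cycles
// across a known wall-clock interval measured with QueryPerformanceCounter.
double EstimateTscHz()
{
    LARGE_INTEGER freq, qpcStart, qpcEnd;
    ::QueryPerformanceFrequency(&freq);
    ::QueryPerformanceCounter(&qpcStart);
    unsigned __int64 tscStart = __rdtsc();
    ::Sleep(100);  // ~100 ms calibration window (arbitrary choice)
    unsigned __int64 tscEnd = __rdtsc();
    ::QueryPerformanceCounter(&qpcEnd);
    double seconds = double(qpcEnd.QuadPart - qpcStart.QuadPart) / double(freq.QuadPart);
    return double(tscEnd - tscStart) / seconds;  // cycles per second
}

int main()
{
    double hz = EstimateTscHz();
    unsigned __int64 start = __rdtsc();
    // ... work to be timed goes here ...
    unsigned __int64 end = __rdtsc();
    std::cout << double(end - start) / hz * 1000.0 << " ms" << std::endl;
    return 0;
}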
C++ version (better version):
#include <iostream>
#include <stdio.h>
#include <intrin.h>
#include <windows.h>

using namespace std;

int fib(int n)
{
    if (n < 2) return n;
    return fib(n - 1) + fib(n - 2);
}

int main()
{
    while (1)
    {
        int n;
        //cin >> n;
        n = 41;
        if (n < 0) break;

        // QueryPerformanceCounter ticks divided by the counter frequency
        // give real elapsed time; multiplying by 1000 converts to ms.
        LARGE_INTEGER start, end, delta, freq;
        ::QueryPerformanceFrequency( &freq );
        ::QueryPerformanceCounter( &start );
        int res = fib(n);
        ::QueryPerformanceCounter( &end );
        delta.QuadPart = end.QuadPart - start.QuadPart;

        cout << res << endl;
        cout << ( delta.QuadPart * 1000 ) / freq.QuadPart << endl;
        break;
    }
    return 0;
}
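The QueryPerformance* pattern above is Windows-specific. On a C++11 compiler (so not VC2008 itself, but anything reasonably modern) the same measurement can be written portably with std::chrono; a minimal sketch:

#include <chrono>
#include <iostream>

int fib(int n)
{
    if (n < 2) return n;
    return fib(n - 1) + fib(n - 2);
}

int main()
{
    // steady_clock is monotonic, which is what you want for benchmarking.
    auto start = std::chrono::steady_clock::now();
    int res = fib(41);
    auto end = std::chrono::steady_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
    std::cout << res << std::endl << ms.count() << " ms" << std::endl;
    return 0;
}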
C# version:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.InteropServices;
using System.ComponentModel;
using System.Threading;
using System.IO;
using System.Diagnostics;

namespace fibCSTest
{
    class Program
    {
        static int fib(int n)
        {
            if (n < 2) return n;
            return fib(n - 1) + fib(n - 2);
        }

        static void Main(string[] args)
        {
            //var sw = new Stopwatch();
            //var timer = new PAB.HiPerfTimer();

            // Stopwatch is backed by the QueryPerformance* calls, so it
            // measures real elapsed time at high resolution.
            var timer = new Stopwatch();
            while (true)
            {
                int n;
                //cin >> n;
                n = 41;
                if (n < 0) break;

                timer.Start();
                int res = fib(n);
                timer.Stop();

                Console.WriteLine(res);
                Console.WriteLine(timer.ElapsedMilliseconds);
                break;
            }
        }
    }
}
GCC version:
#include <iostream>
#include <stdio.h>
#include <sys/time.h>

using namespace std;

int fib(int n)
{
    if (n < 2) return n;
    return fib(n - 1) + fib(n - 2);
}

int main()
{
    timeval start, end;
    while (1)
    {
        int n;
        //cin >> n;
        n = 41;
        if (n < 0) break;

        gettimeofday(&start, 0);
        int res = fib(n);
        gettimeofday(&end, 0);

        // Prints the second and microsecond components separately;
        // note the usec difference can come out negative.
        int sec = end.tv_sec - start.tv_sec;
        int usec = end.tv_usec - start.tv_usec;
        cout << res << endl;
        cout << sec << " " << usec << endl;
        break;
    }
    return 0;
}
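One caveat with printing the sec and usec components separately, as above: when the end tv_usec is smaller than the start tv_usec, the second number comes out negative and the pair is easy to misread. A small sketch that folds both into a single millisecond value (elapsed_ms is a hypothetical helper, not part of any library):

#include <iostream>
#include <sys/time.h>

using namespace std;

// Fold the sec/usec pair into one elapsed-milliseconds value; a negative
// usec difference is absorbed by the combined arithmetic.
double elapsed_ms(const timeval& start, const timeval& end)
{
    return (end.tv_sec - start.tv_sec) * 1000.0
         + (end.tv_usec - start.tv_usec) / 1000.0;
}

int main()
{
    timeval start, end;
    gettimeofday(&start, 0);
    // ... work to be timed goes here ...
    gettimeofday(&end, 0);
    cout << elapsed_ms(start, end) << " ms" << endl;
    return 0;
}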
EDIT: While the original C++ timing is wrong (comparing cycles to milliseconds), better timing does show C# is faster with vanilla compiler settings.
OK, enough random speculation, time for some science. After getting weird results with the existing C++ code, I tried running the timings myself.
EDIT: MSN pointed out that you should time C# outside the debugger, so I re-ran everything. The best results came from VC2008, running the release build from the command line, with no special options enabled.
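For reference, a release-style command-line build probably looked something like the following; the flags and file names here are my assumption, not taken from the original post:

cl /O2 /EHsc fib.cpp
fib.exe

csc /optimize+ fibCSTest.cs
fibCSTest.exe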
The original C++ code (with rdtsc) wasn't returning milliseconds, just a factor of the reported clock cycles, so comparing it directly to Stopwatch results is invalid - the original timing code is just wrong. Note that Stopwatch uses the QueryPerformance* calls: http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.aspx

So in this case C++ is faster than C#. It depends on your compiler settings - see MSN's answer. Not saying that's the issue, but you may want to read How to: Use the High-Resolution Timer.
Also see this: http://en.wikipedia.org/wiki/Comparison_of_Java_and_C%2B%2B#Performance

It's about Java, but it begins to tackle the issue of performance between C runtimes and JITed runtimes. It could be that the methods are pre-jitted at runtime prior to running the test, or that Console is a wrapper around the API for writing to the console while C++'s cout is buffered. I guess. Hope this helps. Best regards, Tom.
You are calling a static function in the C# code, which will be inlined, while in C++ you use a non-static function. I get ~1.4 sec for C++; with g++ -O3 you can get 1.21 sec.
You just can't compare C# with C++ using badly translated code.
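To test that claim, you can give the C++ fib internal linkage so the optimizer is free to inline the recursion within the translation unit, analogous to the static C# method; whether a compiler actually inlines recursive calls is compiler-dependent, so this is a sketch of the change rather than a guaranteed speedup:

#include <iostream>

// static gives fib internal linkage, letting the optimizer inline or
// partially inline the recursion within this translation unit.
static int fib(int n)
{
    if (n < 2) return n;
    return fib(n - 1) + fib(n - 2);
}

int main()
{
    std::cout << fib(41) << std::endl;
    return 0;
}

Build it with g++ -O3 file.cpp, the setting the answer quotes for the 1.21 sec result.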
If that code truly takes half the execution time, then some possible reasons are:

- garbage collection behaving differently in the two runtimes
- console output being buffered differently
I don't understand the answer about garbage collection or console buffering.

It could be that your timer mechanism in C++ is inherently flawed. According to http://en.wikipedia.org/wiki/Rdtsc, it is possible to get wrong benchmark results. Quoted: