I've always taken it as general wisdom that std::vector is "implemented as an array," blah blah blah. Today I went down and tested it, and it seems that is not so.

Here are some test results:
UseArray completed in 2.619 seconds
UseVector completed in 9.284 seconds
UseVectorPushBack completed in 14.669 seconds
The whole thing completed in 26.591 seconds
That's about 3-4 times slower! That doesn't really square with the "vector may be slower by a few nanosecs" comments.
And the code I used:
#include <cstdlib>
#include <vector>
#include <iostream>
#include <string>
#include <boost/date_time/posix_time/ptime.hpp>
#include <boost/date_time/microsec_time_clock.hpp>
class TestTimer
{
public:
TestTimer(const std::string & name) : name(name),
start(boost::date_time::microsec_clock<boost::posix_time::ptime>::local_time())
{
}
~TestTimer()
{
using namespace std;
using namespace boost;
posix_time::ptime now(date_time::microsec_clock<posix_time::ptime>::local_time());
posix_time::time_duration d = now - start;
cout << name << " completed in " << d.total_milliseconds() / 1000.0 <<
" seconds" << endl;
}
private:
std::string name;
boost::posix_time::ptime start;
};
struct Pixel
{
Pixel()
{
}
Pixel(unsigned char r, unsigned char g, unsigned char b) : r(r), g(g), b(b)
{
}
unsigned char r, g, b;
};
void UseVector()
{
TestTimer t("UseVector");
for(int i = 0; i < 1000; ++i)
{
int dimension = 999;
std::vector<Pixel> pixels;
pixels.resize(dimension * dimension);
for(int i = 0; i < dimension * dimension; ++i)
{
pixels[i].r = 255;
pixels[i].g = 0;
pixels[i].b = 0;
}
}
}
void UseVectorPushBack()
{
TestTimer t("UseVectorPushBack");
for(int i = 0; i < 1000; ++i)
{
int dimension = 999;
std::vector<Pixel> pixels;
pixels.reserve(dimension * dimension);
for(int i = 0; i < dimension * dimension; ++i)
pixels.push_back(Pixel(255, 0, 0));
}
}
void UseArray()
{
TestTimer t("UseArray");
for(int i = 0; i < 1000; ++i)
{
int dimension = 999;
Pixel * pixels = (Pixel *)malloc(sizeof(Pixel) * dimension * dimension);
for(int i = 0 ; i < dimension * dimension; ++i)
{
pixels[i].r = 255;
pixels[i].g = 0;
pixels[i].b = 0;
}
free(pixels);
}
}
int main()
{
TestTimer t1("The whole thing");
UseArray();
UseVector();
UseVectorPushBack();
return 0;
}
Am I doing it wrong or something? Or have I just busted this performance myth?
I'm using Release mode in Visual Studio 2005.
In Visual C++, #define _SECURE_SCL 0 reduces UseVector by half (bringing it down to 4 seconds). This is really huge, IMO.
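For anyone reproducing this: the macro has to be defined before any standard-library header is included (or passed on the compiler command line). A minimal sketch (VC++-specific; harmless on other compilers):

```cpp
// VC++-specific: disables checked iterators. Must appear before any
// standard header is included (or pass /D_SECURE_SCL=0 instead).
#define _SECURE_SCL 0
#include <vector>

// Trivial function so the effect of the header configuration can be exercised.
int count_elems()
{
    std::vector<int> v(3, 7);
    return static_cast<int>(v.size());
}
```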
I did some extensive tests that I had wanted to do for a while now. Might as well share this.

This is my dual-boot machine: i7-3770, 16 GB RAM, x86_64, running Windows 8.1 and Ubuntu 16.04. More information, conclusions, and remarks below. I tested both MSVS 2017 and g++ (both on Windows and on Linux).
Test Program
Results
Notes
I tested std::sort() too (you can see it commented out) but removed it later because there were no significant relative differences.

My Conclusions and Remarks

Note std::array's small time variations between consecutive runs, while other std:: data structures varied wildly in comparison. std::array and C-style arrays were faster on Windows without optimization.

Verdict
Of course this is code for an optimized build. And since the question was about std::vector, then yes, it is !much! slower than plain arrays (optimized/unoptimized). But when you're doing a benchmark, you naturally want to produce optimized code.

The star of the show for me, though, has been std::array.

Great question. I came in here expecting to find some simple fix that would speed the vector tests right up. That didn't work out quite like I expected!
Optimization helps, but it's not enough. With optimization on I'm still seeing a 2X performance difference between UseArray and UseVector. Interestingly, UseVector was significantly slower than UseVectorPushBack without optimization.
Idea #1 - Use new[] instead of malloc
I tried changing malloc() to new[] in UseArray so the objects would get constructed. And changing from individual field assignment to assigning a Pixel instance. Oh, and renaming the inner loop variable to j.

Surprisingly (to me), none of those changes made any difference whatsoever. Not even the change to new[], which will default construct all of the Pixels. It seems that gcc can optimize out the default constructor calls when using new[], but not when using vector.
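In sketch form, the new[] variant looked something like this (a reconstruction, not the exact code from the test run; the TestTimer wrapper is dropped, and the return value is only there so the compiler can't discard the work):

```cpp
// Pixel as in the question.
struct Pixel
{
    Pixel() {}
    Pixel(unsigned char r, unsigned char g, unsigned char b) : r(r), g(g), b(b) {}
    unsigned char r, g, b;
};

// UseArray with new[] instead of malloc: every Pixel is default-constructed,
// the loop assigns whole Pixel instances, and the inner variable is j.
unsigned char UseArrayNew()
{
    const int dimension = 999;
    Pixel* pixels = new Pixel[dimension * dimension]; // default-constructs all
    for (int j = 0; j < dimension * dimension; ++j)
        pixels[j] = Pixel(255, 0, 0);                 // assign a whole instance
    unsigned char last = pixels[dimension * dimension - 1].r;
    delete[] pixels;
    return last;
}
```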
Idea #2 - Remove repeated operator[] calls
I also attempted to get rid of the triple operator[] lookup and cache the reference to pixels[j].
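The cached-reference inner loop would be along these lines (a sketch; the function name and wrapper are mine, the timing harness is omitted):

```cpp
#include <cstddef>
#include <vector>

struct Pixel
{
    unsigned char r, g, b;
};

// One operator[] lookup per element instead of three.
void FillRed(std::vector<Pixel>& pixels)
{
    for (std::size_t j = 0; j < pixels.size(); ++j)
    {
        Pixel& p = pixels[j]; // cache the reference once
        p.r = 255;
        p.g = 0;
        p.b = 0;
    }
}
```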
That actually slowed UseVector down! Oops.

Idea #3 - Remove constructors
What about removing the constructors entirely? Then perhaps gcc can optimize out the construction of all of the objects when the vectors are created. What happens if we change Pixel to:
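With both constructors removed, Pixel presumably becomes a plain aggregate:

```cpp
// Pixel with the constructors removed: a plain aggregate type.
struct Pixel
{
    unsigned char r, g, b;
};
```

Members can then be set with aggregate initialization instead of a constructor call.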
Result: about 10% faster. Still slower than an array. Hm.
Idea #4 - Use iterator instead of loop index
How about using a vector<Pixel>::iterator instead of a loop index?
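Something along these lines (a sketch of the iterator version; names and wrapper are mine, timing omitted):

```cpp
#include <vector>

struct Pixel
{
    unsigned char r, g, b;
};

// Iterator-based inner loop: same writes, no index arithmetic.
void FillRedIter(std::vector<Pixel>& pixels)
{
    for (std::vector<Pixel>::iterator it = pixels.begin(); it != pixels.end(); ++it)
    {
        it->r = 255;
        it->g = 0;
        it->b = 0;
    }
}
```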
Nope, no different. At least it's not slower. I thought this would have performance similar to #2 where I used a Pixel& reference.

Conclusion
Even if some smart cookie figures out how to make the vector loop as fast as the array one, this does not speak well of the default behavior of std::vector. So much for the compiler being smart enough to optimize out all the C++ness and make STL containers as fast as raw arrays.

The bottom line is that the compiler is unable to optimize away the no-op default constructor calls when using std::vector. If you use plain new[] it optimizes them away just fine. But not with std::vector. Even if you can rewrite your code to eliminate the constructor calls, that flies in the face of the mantra around here: "The compiler is smarter than you. The STL is just as fast as plain C. Don't worry about it."

This is an old but popular question.
At this point, many programmers will be working in C++11. And in C++11 the OP's code as written runs equally fast for UseArray or UseVector.

The fundamental problem was that while your Pixel structure was uninitialized, std::vector<T>::resize(size_t, T const& = T()) takes a default-constructed Pixel and copies it. The compiler did not notice it was being asked to copy uninitialized data, so it actually performed the copy.

In C++11, std::vector<T>::resize has two overloads. The first is std::vector<T>::resize(size_t), the other is std::vector<T>::resize(size_t, T const&). This means that when you invoke resize without a second argument, it simply default-constructs the elements, and the compiler is smart enough to realize that default construction does nothing, so it skips the pass over the buffer.

(The two overloads were added to handle movable, constructible, and non-copyable types; the performance improvement when working on uninitialized data is a bonus.)
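To make the two overloads concrete, here is a minimal sketch (function names are mine):

```cpp
#include <cstddef>
#include <vector>

struct Pixel
{
    Pixel() {}  // deliberately leaves r, g, b uninitialized, as in the question
    unsigned char r, g, b;
};

// C++11 resize(n): default-inserts each element in place. Since Pixel's
// default constructor does nothing, no pass over the buffer is needed.
std::vector<Pixel> make_default_resized(std::size_t n)
{
    std::vector<Pixel> v;
    v.resize(n);
    return v;
}

// resize(n, T()): copies a default-constructed Pixel into every slot,
// forcing a real pass that copies the (uninitialized) bytes -- the
// pre-C++11 behavior described above.
std::vector<Pixel> make_fill_resized(std::size_t n)
{
    std::vector<Pixel> v;
    v.resize(n, Pixel());
    return v;
}
```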
The push_back solution also does fencepost checking, which slows it down, so it remains slower than the malloc version.

Live example (I also replaced the timer with chrono::high_resolution_clock).

Note that if you have a structure that usually requires initialization, but you want to handle it after growing your buffer, you can do this with a custom std::vector allocator. If you want to then move it into a more normal std::vector, I believe careful use of allocator_traits and overriding of == might pull that off, but I am unsure.
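The answer leaves the allocator unspecified; a commonly used sketch of the idea (an assumption on my part, not code from the answer) is an allocator whose no-argument construct() default-initializes instead of value-initializing, so resize() grows the buffer without zeroing trivial elements:

```cpp
#include <cstddef>
#include <memory>
#include <new>
#include <utility>
#include <vector>

// Allocator adaptor: construct() with no arguments performs
// default-initialization (no zeroing for trivial types); all other
// constructions are forwarded to the underlying allocator.
template <class T, class A = std::allocator<T>>
struct default_init_allocator : A
{
    template <class U>
    struct rebind
    {
        using other = default_init_allocator<
            U, typename std::allocator_traits<A>::template rebind_alloc<U>>;
    };

    using A::A;

    template <class U>
    void construct(U* p)
    {
        ::new (static_cast<void*>(p)) U; // default-init, not value-init
    }

    template <class U, class... Args>
    void construct(U* p, Args&&... args)
    {
        std::allocator_traits<A>::construct(static_cast<A&>(*this), p,
                                            std::forward<Args>(args)...);
    }
};

// resize() here leaves the bytes uninitialized instead of zeroing them.
std::vector<unsigned char, default_init_allocator<unsigned char>>
make_uninit(std::size_t n)
{
    std::vector<unsigned char, default_init_allocator<unsigned char>> v;
    v.resize(n);
    return v;
}
```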