I'm currently making an application using vectors in C++.
I know that premature optimization is the root of all evil,
but I really can't help being curious.
I'm adding parts of other vectors onto the end of another vector.
Let's say the destination vector has a fixed size of 300 that never changes.
Since I always append to the end of the vector, is it faster to do:
a.reserve(300);
a.insert(a.end(), b.begin(), b.end());
or would it be faster to loop through the vector I want to append and add each item individually (while still reserving beforehand) with push_back or emplace_back? (I'm unsure which of those two is faster; the loop version is sketched below.)
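For clarity, the loop version I have in mind is roughly this (a sketch, with the same a and b as in the snippet above):

a.reserve(300);
for (std::size_t i = 0; i < b.size(); ++i)
    a.push_back(b[i]); // or: a.emplace_back(b[i]);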
Can anyone help me with this?
Here's a general principle: when a library provides both do_x_once and do_x_in_batch, the latter should be at least as fast as calling do_x_once in a simple loop. If it isn't, then the library is very badly implemented, since a simple loop is enough to get a faster version. Often, such batch functions/methods can perform additional optimizations because they have knowledge of data structure internals.

So, insert should be at least as fast as push_back in a loop. In this particular case, a smart implementation of insert can do a single reserve for all the elements you want to insert; push_back would have to check the vector's capacity every time. Don't try to outsmart the library :)
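As a rough illustration (a sketch of observable behavior, not any particular library's internals), you can watch the capacity growth from the outside:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> b(300, 42);

    // push_back in a loop: capacity is checked on every call, and the
    // buffer typically grows geometrically (several reallocations).
    std::vector<int> a1;
    for (std::size_t i = 0; i < b.size(); ++i)
        a1.push_back(b[i]);

    // Range insert: with random-access iterators the library can compute
    // the element count up front and allocate once.
    std::vector<int> a2;
    a2.insert(a2.end(), b.begin(), b.end());

    // On common implementations a1 has over-allocated (e.g. 512),
    // while a2's capacity is exactly 300.
    std::cout << a1.capacity() << ' ' << a2.capacity() << '\n';
}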
I guess it really depends on the compiler (library implementation), compiler options, and architecture. Doing a quick benchmark in VS2005 without optimization (/Od) on an Intel Xeon:
I get these results for 10 000 000 items using these different methods of "copy values...":

b.push_back(a[i]); : 0.808 sec
b[i] = a[i]; : 0.264 sec
b.insert(b.end(), a.begin(), a.end()); : 0.021 sec (no significant difference with reserve first)
std::copy(a.begin(), a.end(), std::back_inserter(b)); : 0.944 sec (0.871 with reserve first)
memcpy(&(b[0]), &(a[0]), 10000000*sizeof(int)); : 0.061 sec

With optimizations turned on (/Ox), however, it's a different story; I had to increase the size to 100 000 000 to get more differentiation.
What's interesting to note is that, with or without optimizations, the insert method scaled linearly. The other methods were clearly inefficient without optimizations, and even with optimizations they couldn't quite catch up. As James Kanze noted, results differ on g++. Run a test on your own platform to validate.
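If you want to reproduce this kind of measurement, a minimal harness along these lines works (a sketch using portable C++11 <chrono> timing rather than the original VS2005 setup; swap in each copy method on the timed line):

#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = 10000000;
    const std::vector<int> a(n, 1); // source data

    std::vector<int> b;
    auto t0 = std::chrono::steady_clock::now();
    b.insert(b.end(), a.begin(), a.end()); // the method under test
    auto t1 = std::chrono::steady_clock::now();

    std::chrono::duration<double> dt = t1 - t0;
    std::printf("insert: %.3f sec for %zu ints\n", dt.count(), b.size());
}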
As larsmans says, the more you do in a single library call, the more likely it is to be efficient. In the case of insert into a vector, the library will normally do at most a single re-allocation, and copy each shifted element at most once. If you use a loop and push_back, it could reallocate several times, which could be significantly slower (like an order of magnitude).

Depending on the type, however, it may also be faster to do something like:
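// Reconstructed sketch (the original snippet wasn't preserved): grow the
// destination once with resize, then copy the raw bytes with std::memcpy
// from <cstring>. Only valid for trivially copyable element types like int.
std::size_t oldSize = a.size();
a.resize(oldSize + b.size());
std::memcpy(&a[oldSize], &b[0], b.size() * sizeof(int));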
I've found this to be faster for simple scalar types (like int) using g++ on an Intel machine.