Short version: It's common to return large objects—such as vectors/arrays—in many programming languages. Is this style now acceptable in C++0x if the class has a move constructor, or do C++ programmers consider it weird/ugly/abomination?
Long version: In C++0x is this still considered bad form?
std::vector<std::string> BuildLargeVector();
...
std::vector<std::string> v = BuildLargeVector();
The traditional version would look like this:
void BuildLargeVector(std::vector<std::string>& result);
...
std::vector<std::string> v;
BuildLargeVector(v);
In the newer version, the value returned from BuildLargeVector is an rvalue, so v would be constructed using the move constructor of std::vector, assuming (N)RVO doesn't take place.
Even prior to C++0x the first form would often be "efficient" because of (N)RVO. However, (N)RVO is at the discretion of the compiler. Now that we have rvalue references it is guaranteed that no deep copy will take place.
Edit: Question is really not about optimization. Both forms shown have near-identical performance in real-world programs. Whereas, in the past, the first form could have had order-of-magnitude worse performance. As a result the first form was a major code smell in C++ programming for a long time. Not anymore, I hope?
If performance is a real issue, you should realise that move semantics aren't always faster than copying. For example, if you have a string that uses the small string optimization, then for small strings a move constructor must do exactly the same amount of work as a regular copy constructor.
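To illustrate (a sketch; SSO buffer sizes vary between standard library implementations):

std::string a = "short";      // small enough to live in the SSO buffer, no heap allocation
std::string b = std::move(a); // nothing to steal: the characters are copied, exactly as in a copy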
Indeed, since C++11, the cost of copying the std::vector is gone in most cases. However, one should keep in mind that the cost of constructing the new vector (then destructing it) still exists, and using output parameters instead of returning by value is still useful when you want to reuse the vector's capacity. This is documented as an exception in F.20 of the C++ Core Guidelines.
Let's compare:
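A minimal sketch, assuming a vector of int filled with ones (consistent with the sizeof(int) = 4 used below):

std::vector<int> BuildLargeVector1(size_t vecSize)
{
    // Return by value: a fresh vector is allocated on every call.
    return std::vector<int>(vecSize, 1);
}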
with:
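void BuildLargeVector2(std::vector<int>& v, size_t vecSize)
{
    // Output parameter: reuses v's existing capacity when it is large enough.
    v.assign(vecSize, 1);
}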
Now, suppose we need to call these methods numIter times in a tight loop, and perform some action. For example, let's compute the sum of all elements.

Using BuildLargeVector1, you would do:
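Something along these lines (a sketch; the variable names are illustrative):

size_t sum1 = 0;
for (size_t i = 0; i < numIter; ++i)
{
    std::vector<int> v = BuildLargeVector1(vecSize); // allocates (and later frees) a buffer every iteration
    sum1 = std::accumulate(v.begin(), v.end(), sum1);
}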
Using BuildLargeVector2, you would do:
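Again as a sketch:

size_t sum2 = 0;
std::vector<int> v;
for (size_t i = 0; i < numIter; ++i)
{
    BuildLargeVector2(v, vecSize); // after the first iteration, no new allocation is needed
    sum2 = std::accumulate(v.begin(), v.end(), sum2);
}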
In the first example, many unnecessary dynamic allocations/deallocations happen, which the second example avoids by using an output parameter the old way, reusing already allocated memory. Whether or not this optimization is worth doing depends on the relative cost of the allocation/deallocation compared to the cost of computing/mutating the values.
Benchmark
Let's play with the values of vecSize and numIter. We will keep vecSize*numIter constant so that "in theory" it should take the same time (= there is the same number of assignments and additions, with the exact same values), and the time difference can only come from the cost of allocations, deallocations, and better use of cache.

More specifically, let's use vecSize*numIter = 2^31 = 2147483648, because I have 16GB of RAM and this number ensures that no more than 8GB is allocated (sizeof(int) = 4), ensuring that I am not swapping to disk (all other programs were closed; I had ~15GB available when running the test).
Here is the code:
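A self-contained sketch of such a benchmark, timing with std::chrono (a single vecSize is shown here; in the real runs vecSize is varied and numIter = 2^31 / vecSize):

#include <chrono>
#include <iostream>
#include <numeric>
#include <vector>

std::vector<int> BuildLargeVector1(size_t vecSize)
{
    return std::vector<int>(vecSize, 1);
}

void BuildLargeVector2(std::vector<int>& v, size_t vecSize)
{
    v.assign(vecSize, 1);
}

int main()
{
    const size_t total   = size_t(1) << 31;  // vecSize * numIter = 2^31
    const size_t vecSize = size_t(1) << 20;  // example value; varied per run
    const size_t numIter = total / vecSize;

    using Clock = std::chrono::high_resolution_clock;

    // time1: return by value, a fresh vector per iteration
    auto start = Clock::now();
    size_t sum1 = 0;
    for (size_t i = 0; i < numIter; ++i)
    {
        std::vector<int> v = BuildLargeVector1(vecSize);
        sum1 = std::accumulate(v.begin(), v.end(), sum1);
    }
    double time1 = std::chrono::duration<double>(Clock::now() - start).count();

    // time2: output parameter, capacity reused across iterations
    start = Clock::now();
    size_t sum2 = 0;
    std::vector<int> v;
    for (size_t i = 0; i < numIter; ++i)
    {
        BuildLargeVector2(v, vecSize);
        sum2 = std::accumulate(v.begin(), v.end(), sum2);
    }
    double time2 = std::chrono::duration<double>(Clock::now() - start).count();

    std::cout << "time1 = " << time1 << "s, time2 = " << time2
              << "s (sums: " << sum1 << ", " << sum2 << ")\n";
}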
And here is the result (Intel i7-7700K @ 4.20GHz; 16GB DDR4 2400MHz; Kubuntu 18.04):

[Plot: time1 (return by value) and time2 (reuse capacity) for the various values of numIter]
Notation: mem(v) = v.size() * sizeof(int) = v.size() * 4 on my platform.
Not surprisingly, when numIter = 1 (i.e., mem(v) = 8GB), the times are perfectly identical. Indeed, in both cases we are allocating a huge vector of 8GB in memory only once. This also proves that no copy happened when using BuildLargeVector1(): I wouldn't have enough RAM to do the copy!

When numIter = 2, reusing the vector capacity instead of re-allocating a second vector is 1.37x faster.

When numIter = 256, reusing the vector capacity (instead of allocating/deallocating a vector over and over again 256 times...) is 2.45x faster :)

We can notice that time1 is pretty much constant from numIter = 1 to numIter = 256, which means that allocating one huge vector of 8GB is pretty much as costly as allocating 256 vectors of 32MB. However, allocating one huge vector of 8GB is definitely more expensive than allocating one vector of 32MB, so reusing the vector's capacity provides performance gains.

From numIter = 512 (mem(v) = 16MB) to numIter = 8M (mem(v) = 1kB) is the sweet spot: both methods are exactly as fast, and faster than all other combinations of numIter and vecSize. This probably has to do with the fact that the L3 cache size of my processor is 8MB, so that the vector pretty much fits completely in cache. I can't really explain why the sudden jump of time1 happens at mem(v) = 16MB; it would seem more logical for it to happen just after, when mem(v) = 8MB. Note that, surprisingly, in this sweet spot not reusing capacity is in fact slightly faster! I can't really explain this either.

When numIter > 8M, things start to get ugly. Both methods get slower, but returning the vector by value gets even slower. In the worst case, with a vector containing only one single int, reusing capacity instead of returning by value is 3.3x faster. Presumably, this is due to the fixed costs of malloc() starting to dominate.

Note how the curve for time2 is smoother than the curve for time1: not only is reusing vector capacity generally faster, but perhaps more importantly, it is more predictable.

Also note that in the sweet spot, we were able to perform 2 billion additions of 64-bit integers in ~0.5s, which is quite optimal on a 4.2GHz 64-bit processor. We could do better by parallelizing the computation in order to use all 8 hardware threads (the test above only uses one core at a time, which I have verified by re-running the test while monitoring CPU usage). The best performance is achieved when mem(v) = 16kB, which is the order of magnitude of the L1 cache (the L1 data cache of the i7-7700K is 4x32kB).
Of course, the differences become less and less relevant the more computation you actually have to do on the data. Below are the results if we replace

sum = std::accumulate(v.begin(), v.end(), sum);

by

for (int k : v) sum += std::sqrt(2.0*k);

[Second set of results: the same comparison with the heavier per-element computation]

Conclusions
Results may differ on other platforms. As usual, if performance matters, write benchmarks for your specific use case.
Dave Abrahams has a pretty comprehensive analysis of the speed of passing/returning values.
Short answer: if you need to return a value, then return a value. Don't use output references, because the compiler does it anyway. Of course there are caveats, so you should read that article.
The gist is:
Copy Elision and RVO can avoid the "scary copies" (the compiler is not required to implement these optimizations, and in some situations they can't be applied)
C++0x rvalue references allow string/vector implementations that guarantee this.
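For example (a sketch):

std::vector<std::string> MakeNames()
{
    std::vector<std::string> names;
    names.push_back("alice");
    names.push_back("bob");
    return names; // NRVO if the compiler applies it; otherwise a cheap move in C++0x.
                  // Either way, no deep copy of the elements takes place.
}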
If you can abandon older compilers / STL implementations, return vectors freely (and make sure your own objects support it, too). If your code base needs to support "lesser" compilers, stick to the old style.
Unfortunately, that has a major influence on your interfaces. If C++0x is not an option and you need guarantees, you might instead use reference-counted or copy-on-write objects in some scenarios. They have downsides with multithreading, though.
(I wish just one answer in C++ would be simple and straightforward and without conditions).
Just to nitpick a little: it is not common in many programming languages to return arrays from functions. In most of them, a reference to the array is returned. In C++, the closest analogy would be returning boost::shared_array.
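For instance (a sketch):

#include <boost/shared_array.hpp>

boost::shared_array<int> BuildLargeArray(std::size_t n)
{
    // Only shared ownership of the heap array is returned; no element is copied.
    return boost::shared_array<int>(new int[n]);
}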
At least IMO, it's usually a poor idea, but not for efficiency reasons. It's a poor idea because the function in question should usually be written as a generic algorithm that produces its output via an iterator. Almost any code that accepts or returns a container instead of operating on iterators should be considered suspect.
Don't get me wrong: there are times it makes sense to pass around collection-like objects (e.g., strings), but for the example cited, I'd consider passing or returning the vector a poor idea.
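A sketch of that generic, iterator-based style (the names are illustrative):

#include <cstddef>
#include <iterator>
#include <string>
#include <vector>

// Writes its output through an iterator instead of committing to a container.
template <typename OutputIterator>
void BuildLargeSequence(OutputIterator out, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        *out++ = std::to_string(i);
}

int main()
{
    std::vector<std::string> v;
    BuildLargeSequence(std::back_inserter(v), 1000); // the caller chooses the destination
}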