Question:
What is the capacity() of an std::vector created with the default constructor? I know that its size() is zero. Can we state that a default-constructed vector does not allocate heap memory? That would make it possible to create an array with an arbitrary reserve using a single allocation, like std::vector<int> iv; iv.reserve(2345);. Let's say that, for some reason, I do not want to start with size() equal to 2345.
For example, on Linux (g++ 4.4.5, kernel 2.6.32, amd64), this program
#include <iostream>
#include <vector>
int main()
{
    using namespace std;
    cout << vector<int>().capacity() << "," << vector<int>(10).capacity() << endl;
    return 0;
}
printed 0,10. Is this a rule, or is it STL-vendor dependent?
Answer 1:
The standard doesn't specify what the initial capacity of a container should be, so you're relying on the implementation. A common implementation will start the capacity at zero, but there's no guarantee. On the other hand, there's no way to improve on your strategy of std::vector<int> iv; iv.reserve(2345);, so stick with it.
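A minimal sketch of that strategy (the figure 2345 comes from the question; the first capacity printed is implementation-defined):
#include <iostream>
#include <vector>
int main()
{
    std::vector<int> iv;                                     // default-constructed: size() == 0
    std::cout << iv.size() << "," << iv.capacity() << "\n";  // capacity() is implementation-defined here
    iv.reserve(2345);                                        // a single allocation for at least 2345 ints
    std::cout << iv.size() << "," << iv.capacity() << "\n";  // size() still 0, capacity() >= 2345
    return 0;
}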
Answer 2:
Storage implementations of std::vector vary significantly, but all the ones I've come across start from 0.
The following code:
#include <iostream>
#include <vector>
int main()
{
    using namespace std;
    vector<int> normal;
    cout << normal.capacity() << endl;       // capacity before any insertion
    for (unsigned int loop = 0; loop != 10; ++loop)
    {
        normal.push_back(1);
        cout << normal.capacity() << endl;   // capacity after each push_back
    }
    std::cin.get();
    return 0;
}
gives the following output:
0
1
2
4
4
8
8
8
8
16
16
under GCC 5.1 and:
0
1
2
3
4
6
6
9
9
9
13
under MSVC 2013.
Answer 3:
As far as I understand the standard (though I could not actually name a reference), container instantiation and memory allocation have intentionally been decoupled, for good reason. Therefore you have distinct, separate calls:
- the constructor, to create the container itself
- reserve(), to preallocate a suitably large memory block to accommodate at least(!) a given number of objects
And this makes a lot of sense. The only reason for reserve() to exist is to give you the opportunity to code around possibly expensive reallocations when growing the vector. To be useful, you have to know the number of objects to store, or at least be able to make an educated guess. If that is not given, you had better stay away from reserve(), as you will just trade reallocation for wasted memory.
So putting it all together:
- The standard intentionally does not specify a constructor that lets you preallocate a memory block for a specific number of objects (which would at least be more desirable than allocating some implementation-specific, fixed "something" under the hood).
- Allocation shouldn't be implicit. So, to preallocate a block you need to make a separate call to reserve(), and this need not happen at the point of construction (it could, and often should, come later, once you know the required size).
- Thus, if a vector always preallocated a memory block of implementation-defined size, this would foil the intended job of reserve(), wouldn't it?
- What would be the advantage of preallocating a block if the STL naturally cannot know the intended purpose and expected size of a vector? It would be rather nonsensical, if not counterproductive.
- The proper solution instead is to allocate an implementation-specific block with the first push_back(), if one has not already been allocated explicitly by reserve().
- In case of a necessary reallocation, the increase in block size is implementation-specific as well. The vector implementations I know of start with an exponential increase in size but cap the increment rate at a certain maximum to avoid wasting huge amounts of memory, or even blowing it.
All this comes to full operation and advantage only if it is not disturbed by an allocating constructor. You have reasonable defaults for common scenarios that can be overridden on demand by reserve() (and shrink_to_fit()). So, even though the standard does not explicitly say so, I'm quite sure that assuming a newly constructed vector does not preallocate is a pretty safe bet for all current implementations.
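A small sketch of the reserve()/shrink_to_fit() pair mentioned above (the figures 1000 and 600 are arbitrary, chosen only to show an over-generous guess):
#include <iostream>
#include <vector>
int main()
{
    std::vector<int> v;
    v.reserve(1000);                  // one up-front allocation for the expected workload
    for (int i = 0; i != 600; ++i)    // the guess turned out to be generous
        v.push_back(i);               // no reallocation happens inside this loop
    std::cout << v.size() << "/" << v.capacity() << "\n";  // 600/1000
    v.shrink_to_fit();                // non-binding request to release the excess
    std::cout << v.size() << "/" << v.capacity() << "\n";  // typically 600/600
    return 0;
}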
Answer 4:
As a slight addition to the other answers, I found that when running under debug conditions with Visual Studio, a default-constructed vector will still allocate on the heap even though the capacity starts at zero.
Specifically, if _ITERATOR_DEBUG_LEVEL != 0, then vector will allocate some space to help with iterator checking.
https://docs.microsoft.com/en-gb/cpp/standard-library/iterator-debug-level
I just found this slightly annoying, since I was using a custom allocator at the time and was not expecting the extra allocation.
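A minimal logging allocator (a sketch, not the custom allocator from this answer) makes any such hidden allocation visible:
#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

// Reports every allocation the container performs through it.
template <typename T>
struct LoggingAllocator
{
    using value_type = T;
    LoggingAllocator() = default;
    template <typename U> LoggingAllocator(const LoggingAllocator<U>&) {}
    T* allocate(std::size_t n)
    {
        std::cout << "allocated " << n * sizeof(T) << " bytes\n";
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};
template <typename T, typename U>
bool operator==(const LoggingAllocator<T>&, const LoggingAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const LoggingAllocator<T>&, const LoggingAllocator<U>&) { return false; }

int main()
{
    // Prints nothing on the release builds discussed above; an MSVC debug
    // build (_ITERATOR_DEBUG_LEVEL != 0) may log an allocation right here.
    std::vector<int, LoggingAllocator<int>> v;
    std::cout << "capacity: " << v.capacity() << "\n";
    return 0;
}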
Answer 5:
The standard doesn't specify an initial value for capacity, but an STL container automatically grows to accommodate as much data as you put in, provided you don't exceed the maximum size (use the max_size member function to find it).
For vector and string, growth is handled by reallocating (allocate a bigger block, copy/move the elements over, free the old block) whenever more space is needed. Suppose you'd like to create a vector holding the values 1-1000. Without using reserve, the code will typically result in between 2 and 18 reallocations during the following loop:
vector<int> v;
for ( int i = 1; i <= 1000; i++) v.push_back(i);
Modifying the code to use reserve might result in 0 allocations during the loop:
vector<int> v;
v.reserve(1000);
for ( int i = 1; i <= 1000; i++) v.push_back(i);
Roughly speaking, vector and string capacities grow by a factor of between 1.5 and 2 each time.
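A sketch that counts the reallocations in both versions of the loop (a capacity() change during push_back implies the buffer was reallocated; the exact counts are implementation-dependent):
#include <iostream>
#include <vector>

// Counts how often push_back reallocates, with and without reserve().
int count_reallocations(bool use_reserve)
{
    std::vector<int> v;
    if (use_reserve)
        v.reserve(1000);              // a single allocation up front
    auto last = v.capacity();
    int count = 0;
    for (int i = 1; i <= 1000; i++)
    {
        v.push_back(i);
        if (v.capacity() != last)     // capacity changed => reallocation
        {
            ++count;
            last = v.capacity();
        }
    }
    return count;
}

int main()
{
    std::cout << "without reserve: " << count_reallocations(false) << "\n";  // e.g. 11 under GCC
    std::cout << "with reserve:    " << count_reallocations(true)  << "\n";  // 0
    return 0;
}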
Answer 6:
This is an old question, and all the answers here have rightly explained the standard's point of view and how you can get an initial capacity in a portable manner by using std::vector::reserve.
However, I'll explain why it doesn't make sense for any STL implementation to allocate memory upon construction of an std::vector<T> object.
std::vector<T> of incomplete types
Prior to C++17, it was undefined behavior to construct a std::vector<T> if the definition of T was still unknown at the point of instantiation. However, that constraint was relaxed in C++17. In order to efficiently allocate memory for an object, you need to know its size. From C++17 onward, your clients may have cases where your std::vector<T> class does not know the size of T. Does it make sense to have memory allocation characteristics dependent on type completeness?
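A sketch of what C++17 permits (Widget and Panel are made-up names; Widget is still incomplete where the member is declared; compile with -std=c++17 or later):
#include <vector>

struct Widget;                        // declared, not yet defined

struct Panel
{
    std::vector<Widget> widgets;      // OK since C++17: the element type may be
                                      // incomplete here, as long as it is complete
                                      // before any member of the vector is used
};

struct Widget { int id; };            // the definition arrives later

int main()
{
    Panel p;
    p.widgets.push_back(Widget{1});   // Widget is complete by now
    return 0;
}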
Unwanted memory allocations
There are many, many, many times you'll need to model a graph in software (a tree is a graph). You are most likely going to model it like:
class Node {
    // ...
    std::vector<Node> children;   // or std::vector< *some pointer type* > children;
    // ...
};
Now think for a moment, and imagine you had lots of terminal nodes. You would be very pissed if your STL implementation allocated extra memory simply in anticipation of having objects in children.
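To put a rough number on that (a back-of-the-envelope estimate, not a measurement): if a default-constructed vector preallocated room for even 4 pointers, a tree with a million leaf nodes would waste about 1,000,000 × 4 × 8 bytes ≈ 32 MB without storing a single child.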
This is just one example; feel free to think of more...