I've read that accessing elements by position index can be done in constant time in an STL deque. As far as I know, the elements of a deque may be stored in several non-contiguous locations, which rules out safe access through pointer arithmetic. For example:
abc->defghi->jkl->mnop
Each element of the deque above is a single character. Characters shown in one group are allocated in contiguous memory (e.g. abc is in a single block of memory, defghi is in another block, etc.). Can anyone explain how access by position index can be done in constant time, especially if the element to be accessed is in the second block? Or does a deque hold a pointer to the group of blocks?
Update: Or is there any other common implementation for a deque?
The data in a deque are stored in chunks of fixed-size vector, which are pointed to by a map (which is also a chunk of vector, but its size may change). Below I will give you the core code of deque, mainly about two parts:

1. the iterator (__deque_iterator)
2. how random access on a deque is realized

1. The iterator (__deque_iterator)

The main problem for the iterator is that ++ and -- may have to skip to another chunk (when the iterator points at the edge of a chunk). For example, suppose there are three data chunks:
chunk 1, chunk 2, chunk 3. If pointer1 points to the beginning of chunk 2, then operator-- must make it point to the end of chunk 1; the same applies to pointer2 at the other boundary. The main functions of __deque_iterator are therefore: set_node, which skips to any chunk; operator*, which gets the data in the current chunk; and the prefix forms of operator++ and operator--, which step across chunk boundaries. Note that the chunk_size() function computes the chunk size; to simplify, you can think of it as returning 8 here.
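The functions just listed can be sketched as follows. This is a simplified reconstruction in the spirit of the SGI STL source (int elements, a fixed CHUNK_SIZE standing in for chunk_size()); the member names cur, first, last, and node follow the well-known layout, but the code is illustrative, not the actual library source.

```cpp
#include <cstddef>

// Sketch of an SGI-style deque iterator: cur points inside the current
// chunk, first/last bound that chunk, node points into the map.
const std::ptrdiff_t CHUNK_SIZE = 8;   // stand-in for chunk_size()

struct DequeIterator {
    int*  cur;    // current element within the chunk
    int*  first;  // begin of the current chunk
    int*  last;   // one past the end of the current chunk
    int** node;   // pointer into the map (the array of chunk pointers)

    // skip to any chunk: rebind first/last to the chunk *new_node
    void set_node(int** new_node) {
        node  = new_node;
        first = *new_node;
        last  = first + CHUNK_SIZE;
    }

    // get the data in the chunk
    int& operator*() const { return *cur; }

    // prefix ++: stepping past the chunk's end skips to the next chunk
    DequeIterator& operator++() {
        ++cur;
        if (cur == last) {
            set_node(node + 1);
            cur = first;
        }
        return *this;
    }

    // prefix --: stepping before the chunk's begin skips to the previous chunk
    DequeIterator& operator--() {
        if (cur == first) {
            set_node(node - 1);
            cur = last;
        }
        --cur;
        return *this;
    }

    // skip n steps in one go: compute which chunk the target lands in
    DequeIterator& operator+=(std::ptrdiff_t n) {
        std::ptrdiff_t offset = n + (cur - first);
        if (offset >= 0 && offset < CHUNK_SIZE) {
            cur += n;                          // stays in the same chunk
        } else {                               // lands in another chunk
            std::ptrdiff_t node_offset =
                offset > 0 ? offset / CHUNK_SIZE
                           : -((-offset - 1) / CHUNK_SIZE) - 1;
            set_node(node + node_offset);
            cur = first + (offset - node_offset * CHUNK_SIZE);
        }
        return *this;
    }

    // random access: O(1) whichever chunk holds element n
    int& operator[](std::ptrdiff_t n) const {
        DequeIterator tmp = *this;
        tmp += n;
        return *tmp;
    }
};
```

Because operator+= does only a division by CHUNK_SIZE and one set_node call, it[n] costs the same however many chunks lie between the iterator and element n; that is exactly why positional access is O(1).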
On top of these, operator+= and operator[] let the iterator skip n steps at once, i.e. random access.

2. Random access to deque elements

The deque object itself keeps two iterators, start and finish, plus the map, and the common functions of deque (front, back, operator[]) simply delegate to the iterator.
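As a compressed, self-contained sketch of that idea (MiniDeque, CHUNK, and start_off are hypothetical names; a real deque stores iterators and allocator machinery instead of raw indices):

```cpp
#include <cstddef>
#include <vector>

// Sketch: a deque's operator[] in O(1). `map` holds one pointer per
// fixed-size chunk; `start_off` is where the first element sits inside
// chunk 0 (the front of chunk 0 may be partly empty).
class MiniDeque {
    static const std::size_t CHUNK = 8;
    std::vector<int*> map;      // the map: one pointer per chunk
    std::size_t start_off;      // slot of the first element inside chunk 0
    std::size_t count;          // number of elements stored
public:
    MiniDeque(std::size_t n, std::size_t front_slack = 3)
        : start_off(front_slack), count(n) {
        std::size_t chunks = (start_off + n + CHUNK - 1) / CHUNK;
        for (std::size_t c = 0; c < chunks; ++c)
            map.push_back(new int[CHUNK]);
        for (std::size_t i = 0; i < n; ++i)
            (*this)[i] = (int)i;            // fill 0..n-1 for the demo
    }
    ~MiniDeque() { for (int* p : map) delete[] p; }

    // O(1) random access: one division, one modulo, two dereferences
    int& operator[](std::size_t i) {
        std::size_t pos = start_off + i;        // slot in the chunk grid
        return map[pos / CHUNK][pos % CHUNK];   // chunk index, offset
    }
    int& front() { return (*this)[0]; }
    int& back()  { return (*this)[count - 1]; }
    std::size_t size() const { return count; }
};
```

The division/modulo pair is the whole trick: no matter which chunk holds element i, locating it never requires walking the chunks.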
You can also see this answer, which gives the main code of deque: https://stackoverflow.com/a/50959796/6329006
If deque is implemented as a ring buffer on top of std::vector, which reallocates itself when it grows, then accessing by index is indeed O(1). The standard provides evidence that such an implementation was intended, or at least that it conforms to the standard's complexity requirements. Clauses 23.2.1.3/4 and 23.2.1.3/5 require that:

- inserting at the beginning or the end of a deque takes constant time, while insertion in the middle takes time linear in the deque's size;
- erasing elements from the deque may call at most as many assignment operators as the distance from the erased elements to the end of the deque.
And, of course, you should count on the standard's requirements rather than your own vision of how containers and algorithms could be implemented.

NOTE that the standard requires more than the two points listed above. It also requires that references to elements stay valid across insertions at the back or front of the deque. A ring buffer can satisfy this if it holds pointers to the actual elements rather than the elements themselves. In any case, my answer just demonstrates that multiple implementations may satisfy the stated requirements.
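A minimal sketch of the ring-buffer variant, assuming a growable backing array that un-wraps on reallocation (RingDeque and its members are hypothetical names; it stores elements directly, so it deliberately ignores the reference-stability point above):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical deque as a ring buffer over a growable array:
// push_front/push_back are amortized O(1); operator[] is O(1).
class RingDeque {
    std::vector<int> buf;   // backing storage; capacity is buf.size()
    std::size_t head = 0;   // physical index of the logical first element
    std::size_t count = 0;
public:
    RingDeque() : buf(8) {}
    void push_back(int v) {
        if (count == buf.size()) grow();
        buf[(head + count) % buf.size()] = v;
        ++count;
    }
    void push_front(int v) {
        if (count == buf.size()) grow();
        head = (head + buf.size() - 1) % buf.size();  // step back, wrapping
        buf[head] = v;
        ++count;
    }
    // O(1): one addition and one modulo, wherever the element sits
    int& operator[](std::size_t i) { return buf[(head + i) % buf.size()]; }
    std::size_t size() const { return count; }
private:
    void grow() {           // reallocate and un-wrap the ring
        std::vector<int> bigger(buf.size() * 2);
        for (std::size_t i = 0; i < count; ++i) bigger[i] = (*this)[i];
        buf.swap(bigger);
        head = 0;
    }
};
```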
I found this deque implementation from Wikipedia:
Storing contents in multiple smaller arrays, allocating additional arrays at the beginning or end as needed. Indexing is implemented by keeping a dynamic array containing pointers to each of the smaller arrays.
I guess it answers my question.
One possible implementation is a dynamic array of const-sized arrays; such a const-sized array can be added at either end when more space is needed. All the arrays are full except, possibly, the first and the last, which can be partly empty. That means that, knowing the size of each array and the number of elements used in the first array, one can easily find the position of an element by its index.
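The index computation described here can be written out directly (locate, B, and used_in_first are illustrative names; the sketch assumes the first block is filled from its back end):

```cpp
#include <cstddef>

// With fixed-size blocks of B elements, where the first block has
// `used_in_first` elements occupying its tail, element i lives at:
struct BlockPos { std::size_t block, offset; };

BlockPos locate(std::size_t i, std::size_t B, std::size_t used_in_first) {
    std::size_t front_gap = B - used_in_first; // empty slots before element 0
    std::size_t pos = front_gap + i;           // slot in the flat block grid
    return { pos / B, pos % B };               // which block, which slot: O(1)
}
```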
http://cpp-tip-of-the-day.blogspot.ru/2013/11/how-is-stddeque-implemented.html