Would it be reasonable to define destruction order?

Published 2019-06-16 05:18

Question:

I know that the destruction order of vector elements is not defined by the C++ standard (see Order of destruction of elements of an std::vector), and I saw that all the compilers I checked destroy the elements from begin to end. This is quite surprising to me, since dynamic and static arrays destroy their elements in reverse order, and that reverse order is very common in the C++ world.

To be strict: I know that "Container members ... can be constructed and destroyed in any order using for example insert and erase member functions", and I am not voting for "containers to keep some kind of log over these changes". I would just vote for changing the current vector destructor implementation from forward destruction of elements to backward destruction, nothing more, and maybe adding this rule to the C++ standard.

And the reason why? Switching from arrays to vector would be safer this way.

REAL WORLD EXAMPLE: We all know that the order in which mutexes are locked and unlocked is very important, and the ScopeGuard pattern is used to ensure that unlocking happens, so destruction order matters. Consider the example below, where switching from an array to a vector causes a deadlock simply because the destruction order differs:

#include <iostream>
#include <vector>
using std::cout;
using std::vector;

class mutex {
public:
    void lock() { cout << (void*)this << "->lock()\n"; }
    void unlock() { cout << (void*)this << "->unlock()\n"; }
};

class lock {
    lock(const mutex&);  // not defined: forbid constructing a lock from a const mutex
public:
    lock(mutex& m) : m_(&m) { m_->lock(); }
    lock(lock&& o) { m_ = o.m_; o.m_ = 0; }
    lock& operator = (lock&& o) { 
        if (&o != this) {
            m_ = o.m_; o.m_ = 0;
        }
        return *this;
    }
    ~lock() { if (m_) m_->unlock(); }  
private:
    mutex* m_;
};

mutex m1, m2, m3, m4, m5, m6;

void f1() {
    cout << "f1() begin!\n";
    lock ll[] = { m1, m2, m3, m4, m5 };
    cout << "f1() end!\n";
}

void f2() {
    cout << "f2() begin!\n";
    vector<lock> ll;
    ll.reserve(6); // note: memory is reserved up front, so no reallocation is expected!!
    ll.push_back(m1);
    ll.push_back(m2);
    ll.push_back(m3);
    ll.push_back(m4);
    ll.push_back(m5);
    cout << "f2() end!\n";
}

int main() {
    f1();
    f2();
}

OUTPUT - note how the destruction order changes from f1() to f2():

f1() begin!
0x804a854->lock()
0x804a855->lock()
0x804a856->lock()
0x804a857->lock()
0x804a858->lock()
f1() end!
0x804a858->unlock()
0x804a857->unlock()
0x804a856->unlock()
0x804a855->unlock()
0x804a854->unlock()
f2() begin!
0x804a854->lock()
0x804a855->lock()
0x804a856->lock()
0x804a857->lock()
0x804a858->lock()
f2() end!
0x804a854->unlock()
0x804a855->unlock()
0x804a856->unlock()
0x804a857->unlock()
0x804a858->unlock()

Answer 1:

I think this is another case of C++ giving compiler writers the flexibility to write the most performant containers for their architecture. Requiring destruction in a particular order could hurt performance for a convenience that matters in something like 0.001% of cases (I have actually never seen another example where the default order was not suitable). In this case, since vector holds contiguous data, I am referring to the hardware's ability to use look-ahead caching intelligently when iterating forwards, instead of iterating backwards and probably missing the cache repeatedly.

If a particular order of destruction is required for your container instance, the language asks that you implement it yourself to avoid potentially penalizing other clients of the standard features.
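For example, a minimal sketch of such a do-it-yourself approach (my own illustration, not from the question or any standard library): wrap the vector and pop elements off the back in the wrapper's destructor, so the elements are always destroyed last-to-first regardless of what ~vector() itself does:

#include <vector>

// Hypothetical helper: destroys the wrapped vector's elements in reverse
// order before ~vector() runs, by popping them off the back one by one.
template <class T>
struct reverse_destroying_vector {
    std::vector<T> v;
    ~reverse_destroying_vector() {
        while (!v.empty())
            v.pop_back();   // pop_back always destroys the last element
    }
};

In f2() above, one would declare reverse_destroying_vector<lock> ll; and push into ll.v; the locks would then be released in the reverse of their acquisition order, matching the array version in f1().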



Answer 2:

Fwiw, libc++ outputs:

f1() begin!
0x1063e1168->lock()
0x1063e1169->lock()
0x1063e116a->lock()
0x1063e116b->lock()
0x1063e116c->lock()
f1() end!
0x1063e116c->unlock()
0x1063e116b->unlock()
0x1063e116a->unlock()
0x1063e1169->unlock()
0x1063e1168->unlock()
f2() begin!
0x1063e1168->lock()
0x1063e1169->lock()
0x1063e116a->lock()
0x1063e116b->lock()
0x1063e116c->lock()
f2() end!
0x1063e116c->unlock()
0x1063e116b->unlock()
0x1063e116a->unlock()
0x1063e1169->unlock()
0x1063e1168->unlock()

It was purposefully implemented this way. The key function defined here is:

template <class _Tp, class _Allocator>
_LIBCPP_INLINE_VISIBILITY inline
void
__vector_base<_Tp, _Allocator>::__destruct_at_end(const_pointer __new_last, false_type) _NOEXCEPT
{
    while (__new_last != __end_)
        __alloc_traits::destroy(__alloc(), const_cast<pointer>(--__end_));
}

This private implementation detail is called whenever size() needs to shrink.
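To illustrate which operations that covers, here is a rough sketch of my own (using a hypothetical noisy type D, and assuming the back-to-front libc++ behaviour shown above):

#include <cstdio>
#include <vector>

struct D {
    D(int id = 0) : id(id) {}
    ~D() { std::printf("~D(%d)\n", id); }
    int id;
};

int main() {
    std::vector<D> v;
    v.reserve(5);
    for (int i = 1; i <= 5; ++i)
        v.emplace_back(i);   // constructs D(1)..D(5) in place, no reallocation

    v.pop_back();   // size 5 -> 4: destroys D(5) (guaranteed by the standard)
    v.resize(2);    // size 4 -> 2: destroys D(4) then D(3) under libc++
    v.clear();      // size 2 -> 0: destroys D(2) then D(1) under libc++
}                   // ~vector() then only deallocates the now-empty storage

The same back-to-front loop also runs from ~vector() itself when elements remain, which is what produces the reversed unlock order in the f2() output above.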

I have not yet received any feedback on this visible implementation detail, either positive or negative.