What is the advantage in de-allocating memory in reverse order to variables?
Question:

Answer 1:
Consider this example:

Type1 Object1;
Type2 Object2(Object1);

Suppose that Object2 uses some internal resource of Object1 and is valid only as long as Object1 is valid. For example, Object2's destructor accesses Object1's internal resource. If it weren't for the guarantee of reverse order of destruction, Object1 could be destroyed first, and Object2's destructor would then be working with a dead object.
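To make this concrete, here is a minimal sketch of such a dependency (the members and output are invented for illustration):

#include <iostream>

struct Type1 {
    int resource = 42; // some internal resource of Object1
};

struct Type2 {
    Type1& owner;
    explicit Type2(Type1& o) : owner(o) {}
    ~Type2() {
        // Safe only because locals are destroyed in reverse order of
        // construction: Object1 is guaranteed to still be alive here.
        std::cout << owner.resource << '\n';
    }
};

int main() {
    Type1 Object1;
    Type2 Object2(Object1);
} // Object2 is destroyed first, then Object1

If the order were reversed, Object2's destructor would read from an already-destroyed Object1.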
Answer 2:
It's not just about deallocating memory; it's about symmetry in a broader sense.
Each time you create an object you are creating a new context to work in. You "push" into these contexts as you need them and "pop" back again later -- symmetry is necessary.
It's a very powerful way of thinking when it comes to RAII and exception-safety, or proving correctness w.r.t. preconditions and postconditions (constructors establish invariants, destructors ought to assert() them, and in well-designed classes each method clearly preserves them).
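As a sketch of that push/pop discipline (the Counter and Scope names here are made up):

#include <cassert>

struct Counter {
    int live = 0; // invariant: live >= 0, back to 0 once every scope has ended
};

struct Scope {
    Counter& c;
    explicit Scope(Counter& counter) : c(counter) { ++c.live; } // "push" a context
    ~Scope() {
        --c.live;            // "pop" it again, strictly LIFO
        assert(c.live >= 0); // the destructor can check the invariant
    }
};

int main() {
    Counter c;
    Scope outer(c);     // pushed first
    {
        Scope inner(c); // pushed second...
    }                   // ...popped first
}                       // outer is popped last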
IMHO, lack of this feature is Java's single biggest flaw. Consider objects whose constructors open file handles or mutexes or whatever -- Armen's answer brilliantly illustrates how this symmetry enforces some common-sense constraints (languages such as Java may let Object1 go out of scope before Object2, in which case Object2 merely keeps Object1 reachable so the garbage collector won't reclaim it), but there's a whole wealth of design issues that fall neatly into place when considered in terms of object lifetimes.
Lots of C++ gotchas explain themselves when you bear this in mind:

- why gotos can't cross initialisations
- why you may be advised to have only one return in any function (this advice really only applies to non-RAII languages such as C and Java)
- why an exception is the only reasonable way for a constructor to fail, and likewise why destructors can never reasonably throw
- why you shouldn't call virtual functions in a constructor

etc etc...
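For instance, the virtual-call gotcha falls straight out of lifetime ordering: while the base constructor runs, the derived part of the object doesn't exist yet, so virtual dispatch stops at the base. A minimal sketch (class names invented):

#include <iostream>

struct Base {
    Base() { hello(); } // dispatches to Base::hello, not Derived::hello
    virtual void hello() { std::cout << "Base\n"; }
    virtual ~Base() = default;
};

struct Derived : Base {
    void hello() override { std::cout << "Derived\n"; }
};

int main() {
    Derived d; // prints "Base": the Derived part isn't constructed yet
               // when Base's constructor calls hello()
}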
Answer 3:
The guarantee of destruction order of local variables is to allow you to write (for example) code like this:
{
    LockSession s(lock);
    std::ofstream output("filename");
    // write stuff to output
}
LockSession is a class that acquires the lock in its constructor and releases it in its destructor. At the closing }, we know that the file handle will be closed (and flushed) before the lock is released, which is a very useful guarantee to have if there are other threads in the program that use the same lock to protect access to the same file.
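One possible LockSession along those lines (a sketch assuming a std::mutex; the standard library's std::lock_guard plays exactly this role):

#include <mutex>

class LockSession {
    std::mutex& m;
public:
    explicit LockSession(std::mutex& mutex) : m(mutex) { m.lock(); } // acquire in constructor
    ~LockSession() { m.unlock(); }                                   // release in destructor
    LockSession(const LockSession&) = delete;            // a lock session
    LockSession& operator=(const LockSession&) = delete; // shouldn't be copied
};

With this in place, at the closing brace above, output's destructor (flush and close) is guaranteed to run before s's destructor (unlock).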
Suppose that destruction order were not specified by the standard: then we'd have to worry about the possibility that this code would release the lock (allowing other threads to access the file), and only then set about flushing and closing it. Or, to keep the guarantee we need, we'd have to write the code like this instead:
{
    LockSession s(lock);
    {
        std::ofstream output("filename");
        // write stuff to output
    } // closes output
} // releases lock
This example isn't perfect - flushing a file isn't guaranteed to actually succeed, so relying on an ofstream destructor to do it doesn't result in bullet-proof code in that respect. But even with that problem, we are at least guaranteed that we don't have the file open any more by the time we release the lock, and in general that's the sort of useful guarantee that destruction order can provide.
There are also other guarantees of destruction order in C++, for example that base class subobjects are destroyed after the derived class destructor has run, and that data members of an object are destroyed in reverse order of construction, also after the derived class destructor is run and before the base class subobjects. Each guarantee is there so that you can write code that relies in some way on the second thing still being there while the first thing is destroyed.
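A small sketch that makes those orderings visible (names invented for illustration):

#include <iostream>

struct Member {
    const char* name;
    explicit Member(const char* n) : name(n) { std::cout << "construct " << name << '\n'; }
    ~Member() { std::cout << "destroy " << name << '\n'; }
};

struct Base {
    Member b{"base member"};
    ~Base() { std::cout << "~Base\n"; }
};

struct Derived : Base {
    Member m1{"m1"};
    Member m2{"m2"};
    ~Derived() { std::cout << "~Derived\n"; }
};

int main() {
    Derived d;
}
// Construction: base member, m1, m2.
// Destruction: ~Derived, then m2, then m1 (members in reverse order of
// construction), then ~Base, then the base's own member.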
None of this has very much to do with the actual de-allocation of memory; it's much more about what the destructor does. Since you ask about de-allocation, though, there might be some cases where certain memory allocator implementations benefit from blocks being freed in reverse order of their allocation. It could make it a little easier for the allocator to reduce memory fragmentation by merging adjacent free blocks. You don't very often have to think about that, though, and in any case allocators that need to merge free blocks ought to be smart enough to do it in whatever order blocks are allocated and freed.