Suppose that I define some class:
class Pixel {
public:
    Pixel() { x = 0; y = 0; }
    int x;
    int y;
};
Then I write some code that uses it. Why would I do the following?
Pixel p;
p.x = 2;
p.y = 5;
Coming from a Java world I always write:
Pixel* p = new Pixel();
p->x = 2;
p->y = 5;
They basically do the same thing, right? One is on the stack while the other is on the heap, so I'll have to delete it later on. Is there any fundamental difference between the two? Why should I prefer one over the other?
Yes, one is on the stack, the other on the heap. There are two important differences: the stack-allocated object is destroyed automatically at the end of its scope, while the heap-allocated one lives until you delete it; and in idiomatic C++ you very rarely call delete yourself, instead wrapping heap allocations in stack-allocated objects which call delete internally, typically in their destructor. If you attempt to manually keep track of all allocations, and call delete at the right times, I guarantee you that you'll have at least one memory leak per 100 lines of code.

As a small example, consider this code:
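// A sketch of the example being described; assume bar() is some
// unrelated function that might throw an exception.
void bar();

void foo()
{
    Pixel* p = new Pixel();
    p->x = 2;
    p->y = 5;
    bar();       // do something unrelated
    delete p;
}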
Pretty innocent code, right? We create a pixel, then we call some unrelated function, and then we delete the pixel. Is there a memory leak?
And the answer is "possibly". What happens if bar throws an exception? delete never gets called, the pixel is never deleted, and we leak memory.

Now consider this:
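// The same function, but with the Pixel on the stack (a sketch).
void foo()
{
    Pixel p;
    p.x = 2;
    p.y = 5;
    bar();       // even if bar() throws, p is cleaned up automatically
}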
This won't leak memory. Of course in this simple case, everything is on the stack, so it gets cleaned up automatically, but even if the Pixel class had made a dynamic allocation internally, that wouldn't leak either. The Pixel class would simply be given a destructor that deletes it, and this destructor would be called no matter how we leave the foo function, even if we leave it because bar threw an exception. The following, slightly contrived example shows this:
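// A sketch of the "slightly contrived" version: Pixel now owns two
// heap-allocated ints, and its destructor frees them.
class Pixel {
public:
    Pixel() { x = new int(0); y = new int(0); }
    ~Pixel() { delete x; delete y; }
    int* x;
    int* y;
};

void foo()
{
    Pixel p;
    *p.x = 2;
    *p.y = 5;
    bar();       // if bar() throws, ~Pixel() still runs and frees the ints
}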
The Pixel class now internally allocates some heap memory, but its destructor takes care of cleaning it up, so when using the class we don't have to worry about it. (I should probably mention that this last example is simplified a lot in order to show the general principle. If we were to actually use this class, it contains several possible errors: if the allocation of y fails, x never gets freed, and if the Pixel gets copied, we end up with both instances trying to delete the same data. So take this final example with a grain of salt. Real-world code is a bit trickier, but it shows the general idea.)

Of course the same technique can be extended to resources other than memory allocations. For example, it can be used to guarantee that files or database connections are closed after use, or that the synchronization locks for your threading code are released.
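The standard file streams already follow this pattern; here is a minimal sketch (the file name is made up):

#include <fstream>

void write_log()
{
    std::ofstream out("log.txt");   // the file is opened here
    out << "hello\n";
    // no explicit close needed: the ofstream destructor closes the file,
    // even if something above throws
}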
The first case is not always stack allocated. If it's part of an object, it'll be allocated wherever the object is. For example:
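// A sketch: a Pixel that is a member of a heap-allocated object lives on
// the heap too, even though it is declared without new.
class Rectangle {
public:
    Pixel top_left;
    Pixel bottom_right;
};

void example()
{
    Pixel p;                          // on the stack
    Rectangle* r = new Rectangle();   // r->top_left and r->bottom_right are on the heap
    delete r;
}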
The main advantages of stack variables are that allocation and deallocation are essentially free (just moving the stack pointer), and that the object is destroyed automatically when it goes out of scope, so there is nothing you have to remember to delete.
Once the object's been created, there's no performance difference between an object allocated on the heap, and one allocated on the stack (or wherever).
However, you can't use any kind of polymorphism unless you're using a pointer - the object has a completely static type, which is determined at compile time.
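A hypothetical sketch of what that means (Shape and Circle are made-up classes):

#include <iostream>

struct Shape {
    virtual ~Shape() {}
    virtual const char* name() const { return "Shape"; }
};

struct Circle : Shape {
    virtual const char* name() const { return "Circle"; }
};

int main()
{
    Circle c;
    Shape s = c;               // slicing: s has the static type Shape
    Shape* p = &c;             // the pointer preserves the dynamic type

    std::cout << s.name() << "\n";    // prints "Shape"
    std::cout << p->name() << "\n";   // prints "Circle" (virtual dispatch)
    return 0;
}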
Why not use pointers for everything?
They're slower.
Compiler optimizations will not be as effective with pointer access semantics; you can read up about it on any number of websites, but here's a decent PDF from Intel.
Check pages 13, 14, 17, 28, 32, and 36.
... a number of variations on this theme....
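A rough illustration of the aliasing issue (a minimal sketch, not taken from the Intel PDF):

int add_twice(int* a, int* b)
{
    *b = 5;
    // Because a and b might point to the same int, the compiler must
    // re-read *a after the store through b instead of keeping it in a register.
    return *a + *a;
}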
My gut reaction is just to tell you that this could lead to serious memory leaks. In simple cases such as your example, it's easy enough to see when and where you should call delete, but once you start passing pointers between classes it becomes much less clear who is responsible for deleting them.

I'd recommend looking into the Boost smart pointer library for your pointers.
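For example, the snippet from the question could be written with a shared_ptr roughly like this (a sketch; it assumes Boost is available):

#include <boost/shared_ptr.hpp>

void foo()
{
    boost::shared_ptr<Pixel> p(new Pixel());
    p->x = 2;
    p->y = 5;
    // no delete needed: the Pixel is freed when the last shared_ptr to it goes away
}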
"Why not use pointers for everything in C++"
One simple answer: because managing the memory - all the allocating and deleting/freeing - becomes a huge problem.
Automatic/stack objects remove some of that busy work.
That is just the first thing I would say about the question.