The function for freeing an instance of struct Foo
is given below:
void DestroyFoo(Foo* foo)
{
    if (foo) free(foo);
}
A colleague of mine suggested the following instead:
void DestroyFoo(Foo** foo)
{
    if (!(*foo)) return;
    Foo *tmpFoo = *foo;
    *foo = NULL; // prevents future concurrency problems
    memset(tmpFoo, 0, sizeof(Foo)); // problems show up immediately if freed memory is referenced
    free(tmpFoo);
}
I see that setting the pointer to NULL
after freeing is better, but I'm not sure about the following:
Do we really need to assign the pointer to a temporary one? Does it help in terms of concurrency and shared memory?
Is it really a good idea to set the whole block to 0 to force the program to crash or at least to output results with significant discrepancy?
Unfortunately, this idea just doesn't work. If the intent was to catch double frees, it does not cover cases like the following.
Assume this code:
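The original snippet appears to be missing here; a plausible reconstruction, given the discussion of ptr_2 below, is two pointers aliasing the same object (the layout of Foo is an assumption, and the double-freeing call is left commented out because it is undefined behavior):

```c
#include <stdlib.h>

/* Hypothetical layout; the real struct Foo is not shown in the question. */
typedef struct Foo { int x; } Foo;

void DestroyFoo(Foo* foo)
{
    if (foo) free(foo);
}

/* Returns 1 when ptr_2 still holds the stale address after the first destroy. */
int demo_dangling(void)
{
    Foo *ptr   = malloc(sizeof(Foo));
    Foo *ptr_2 = ptr;           /* a second pointer to the same object */

    DestroyFoo(ptr);
    /* DestroyFoo(ptr_2); */    /* double free: undefined behavior, likely a crash */

    return ptr_2 != NULL;       /* ptr_2 is left dangling */
}
```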
The proposal is to write instead:
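This snippet is also missing; presumably it used the colleague's pointer-to-pointer version on the same two aliases. A sketch (again with an assumed Foo layout, and the crashing call commented out):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical layout; the real struct Foo is not shown in the question. */
typedef struct Foo { int x; } Foo;

/* The colleague's version from the question. */
void DestroyFoo(Foo** foo)
{
    if (!(*foo)) return;
    Foo *tmpFoo = *foo;
    *foo = NULL;
    memset(tmpFoo, 0, sizeof(Foo));
    free(tmpFoo);
}

/* ptr is reset by the first call, but the alias ptr_2 is not. */
int demo_alias(void)
{
    Foo *ptr   = malloc(sizeof(Foo));
    Foo *ptr_2 = ptr;

    DestroyFoo(&ptr);           /* ptr becomes NULL */
    /* DestroyFoo(&ptr_2); */   /* would memset and free the block again: crash */

    return ptr == NULL && ptr_2 != NULL;
}
```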
The problem is that the second call to DestroyFoo() will still crash, because ptr_2 is not reset to NULL and still points to memory that has already been freed. It has nothing to do with concurrency or shared memory. It's pointless.
No. Not at all.
The solution suggested by your colleague is terrible. Here's why:
Setting the whole block to 0 achieves nothing either. If someone accidentally uses a free()'d block, they wouldn't know it from the values in the block: an all-zero block is exactly what calloc() returns. So it's impossible to tell whether it's freshly allocated memory (calloc() or malloc()+memset()) or memory that your code free()'d earlier. If anything, zeroing out every block that gets free()'d is extra work for your program.
free(NULL); is well-defined and a no-op, so the if condition in if (ptr) { free(ptr); } achieves nothing.
Since free(NULL); is a no-op, setting the pointer to NULL would actually hide that bug: if some function calls free() on an already free()'d pointer, nobody would ever notice. Besides, most user functions have a NULL check at the start and may not treat being passed NULL as an error condition.
So all those extra checks and the zeroing out give a false sense of "robustness" while not really improving anything. They just replace one problem with another, at the additional cost of performance and code bloat.
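The kind of NULL-checking user function meant here might look like this (Bar and process are hypothetical names, just to illustrate the pattern):

```c
#include <stddef.h>

/* Hypothetical type, for illustration only. */
typedef struct Bar { int value; } Bar;

/* A typical user function: it guards against NULL and quietly backs out,
   so being handed a pointer that was free()'d and set to NULL goes unnoticed. */
int process(const Bar *bar)
{
    if (bar == NULL)
        return -1;   /* treated as "nothing to do", not as a hard error */
    return bar->value;
}
```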
So just calling free(ptr); without any wrapper function is both simple and robust (most malloc() implementations will crash immediately on a double free, which is a good thing). There's no easy way around "accidentally" calling free() twice or more. It's the programmer's responsibility to keep track of all the memory allocated and to free() it appropriately. If someone finds this hard to handle, then C is probably not the right language for them.
Your colleague's code is also broken in another way: if foo itself is NULL, the if (!(*foo)) check dereferences a null pointer and crashes.
I think what your colleague might have had in mind is a use-case where the caller relies on the pointer being reset after destruction. In that case, DestroyFoo should also check foo itself before dereferencing it.
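The original snippet was lost; a minimal sketch of such a corrected version (the extra guard on foo itself is the point; the layout of Foo is an assumption):

```c
#include <stdlib.h>

/* Hypothetical layout. */
typedef struct Foo { int x; } Foo;

void DestroyFoo(Foo **foo)
{
    if (foo == NULL || *foo == NULL)   /* also guard foo itself */
        return;
    free(*foo);
    *foo = NULL;                       /* the caller's pointer is reset */
}
```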
First we need to take a look at Foo; let's assume it contains dynamically allocated members. To define how it should be destroyed, let's first define how it is created. The destroy function then mirrors the create function: it releases the members before the struct itself, and finally resets the caller's pointer.
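The code for this answer did not survive extraction; a sketch of what the create/destroy pair may have looked like (the Foo layout and the CreateFoo signature are assumptions):

```c
#include <stdlib.h>

/* Assumed definition: a struct with a dynamically allocated member,
   so destruction has to mirror creation. */
typedef struct Foo {
    int    *data;
    size_t  size;
} Foo;

Foo *CreateFoo(size_t size)
{
    Foo *foo = malloc(sizeof *foo);
    if (foo == NULL)
        return NULL;
    foo->data = malloc(size * sizeof *foo->data);
    if (foo->data == NULL) {           /* undo the partial creation */
        free(foo);
        return NULL;
    }
    foo->size = size;
    return foo;
}

void DestroyFoo(Foo **foo)
{
    if (foo == NULL || *foo == NULL)
        return;
    free((*foo)->data);                /* release members first, mirroring CreateFoo */
    free(*foo);
    *foo = NULL;                       /* the caller's pointer no longer dangles */
}
```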
The second solution seems over-engineered. Of course, in some situations it might be safer, but the overhead and the complexity are just too big.
What you should do if you want to be on the safe side is set the pointer to NULL after freeing the memory. This is always good practice.
What is more, I don't know why people check whether the pointer is NULL before calling free(). This is not needed, as free() will do that job for you.
Setting memory to 0 (or something else) is good practice only in some cases, since free() will not clear the memory; it just marks the region as free so that it can be reused. If you want to clear the memory so that no one can read it, you need to clear it manually. But that is quite a heavy operation, which is why it shouldn't be done for every free. In most cases freeing without clearing is enough, and you don't have to sacrifice performance on an unnecessary operation.
What your colleague suggests will make the code "safer" in case the function is called twice (see sleske's comment... as "safer" may not mean the same thing to everybody ;-)).
With your code, this will most likely crash:
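The snippet is missing; presumably it called DestroyFoo twice on the same pointer. A sketch (Foo's layout is an assumption, and the second call is commented out because it is undefined behavior):

```c
#include <stdlib.h>

/* Hypothetical layout. */
typedef struct Foo { int x; } Foo;

void DestroyFoo(Foo *foo)
{
    if (foo) free(foo);
}

/* After the first call, foo is left dangling; a second call double-frees. */
int demo_question_version(void)
{
    Foo *foo = malloc(sizeof(Foo));
    DestroyFoo(foo);
    /* DestroyFoo(foo); */   /* double free: most likely a crash */
    return foo != NULL;      /* the caller's pointer was never reset */
}
```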
With your colleague's version of the code, this will not crash:
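Again the snippet is missing; with the colleague's pointer-to-pointer version, both calls can actually be executed safely (Foo's layout is an assumption):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical layout. */
typedef struct Foo { int x; } Foo;

/* The colleague's version from the question. */
void DestroyFoo(Foo **foo)
{
    if (!(*foo)) return;
    Foo *tmpFoo = *foo;
    *foo = NULL;
    memset(tmpFoo, 0, sizeof(Foo));
    free(tmpFoo);
}

/* Both calls are safe: the second one sees *foo == NULL and returns early. */
int demo_colleague_version(void)
{
    Foo *foo = malloc(sizeof(Foo));
    DestroyFoo(&foo);
    DestroyFoo(&foo);   /* no crash: early return on NULL */
    return foo == NULL;
}
```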
Now, for this specific scenario, doing *foo = NULL; (within DestroyFoo) is enough. The memset(tmpFoo, 0, sizeof(Foo)); will prevent a crash if Foo has extra attributes that could be wrongly accessed after the memory is released. So I would say yes, it may be good practice to do so... but it's only a kind of safeguard against developers with bad practices (because there's definitely no reason to call DestroyFoo twice without reallocating it). In the end, you make DestroyFoo "safer" but slower (it does more work to prevent bad usage of it).