This is related to the question 'Why do stacks typically grow downwards?', but more from a security point of view. I'm generally referring to x86.
It strikes me as odd that the stack would grow downwards when buffers are usually written upwards in memory. For example, a typical C++ string has its end at a higher memory address than its beginning.
This means that if there's a buffer overflow, you're overwriting upwards into the rest of the call stack (the saved return address and the callers' frames), which I understand is a security risk, since it opens the possibility of changing return addresses and local variable contents.
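To make it concrete, here's a minimal sketch of the kind of overflow I mean (the function name and buffer size are just for illustration; it's undefined behaviour, and a modern compiler's stack protector will likely catch it):

```c
#include <string.h>

/* buf sits below the saved return address in the frame, and strcpy writes
 * upwards in memory, so a long enough input walks over that address. */
void vulnerable(const char *input) {
    char buf[16];
    strcpy(buf, input);       /* no bounds check */
}

int main(int argc, char **argv) {
    if (argc > 1)
        vulnerable(argv[1]);  /* try an argument well over 16 characters */
    return 0;
}
```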
If the stack grew upwards in memory, wouldn't buffer overflows simply run into dead memory? Would this improve security? If so, why hasn't it been done? What about x64: does its stack grow upwards, and if not, why not?
Technically this is OS/CPU dependent, but typically it's because the stack and heap grow in opposite directions, from opposite ends of the address space.
This arrangement gives you the most flexibility to split/allocate memory between the heap and the stack without causing them to collide. If they both grew in the same direction, you would need to pick a starting address for the stack that would put a hard limit on the maximum size of the heap (and a hard limit on the size of the stack).
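You can see the two ends on a typical Linux/x86 process with something like this (addresses are ASLR-randomised, so exact values vary, but the stack variable will normally print a much higher address than the heap block):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int on_the_stack = 0;
    int *on_the_heap = malloc(sizeof *on_the_heap);

    printf("stack variable: %p\n", (void *)&on_the_stack);  /* high address  */
    printf("heap block:     %p\n", (void *)on_the_heap);    /* lower address */

    free(on_the_heap);
    return 0;
}
```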
ETA:
Found an interesting piece on Wikipedia about why making a stack grow upwards does not necessarily prevent stack buffer overflows; it just makes them work a bit differently.
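Roughly, the overflowing write then heads towards the frame of whatever function is doing the copying, so a return address can still get clobbered. Here's a toy simulation of that idea (a plain array standing in for an upward-growing stack; the names, sizes and layout are made up for illustration, not a real ABI):

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned char stack[64] = {0};

    /* Hypothetical upward-growing layout after main() calls an unchecked
     * copy routine:
     *   stack[ 0..15]  main's 16-byte buffer
     *   stack[16..23]  return address pushed for the copy routine
     *   stack[24.. ]   the copy routine's own locals                    */
    unsigned char *buf      = &stack[0];
    unsigned char *ret_slot = &stack[16];

    /* The copy routine writes attacker-controlled data upwards from buf
     * with no bounds check... */
    const char *attacker_input = "AAAAAAAAAAAAAAAABBBBBBBB"; /* 24 bytes */
    memcpy(buf, attacker_input, strlen(attacker_input));

    /* ...so it runs past the buffer and lands on the return address that
     * sits above it. */
    printf("bytes now in the 'return address' slot: %.8s\n",
           (const char *)ret_slot);
    return 0;
}
```

The direction change just moves which return address is in the firing line; it doesn't make the overflow harmless.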
Well, I don't know if the stack growth direction would have much effect on security, but if you look at machine architecture, growing the stack in the negative address direction really simplifies calling conventions, stack frame pointers, local variable allocation, and so on.
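For what it's worth, here's a toy model of the PUSH/POP mechanics that fall out of that choice (just an array and an index, not a real machine): with a downward-growing stack, PUSH decrements the stack pointer and then stores, while POP loads and then increments, which is how x86 behaves.

```c
#include <stdio.h>

#define WORDS 8

static unsigned long mem[WORDS];   /* pretend memory                    */
static int sp = WORDS;             /* stack pointer starts past the top */

static void push(unsigned long v) { mem[--sp] = v; }    /* decrement, then store */
static unsigned long pop(void)    { return mem[sp++]; } /* load, then increment  */

int main(void) {
    push(0x1111);  /* e.g. an argument      */
    push(0x2222);  /* e.g. a return address */
    printf("top of stack is at index %d, value %#lx\n", sp, mem[sp]);

    unsigned long a = pop();
    unsigned long b = pop();
    printf("popped %#lx then %#lx\n", a, b);
    return 0;
}
```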
The architecture of the 8086 (the start of the x86 family) used a stack that grew downward, and for compatibility it has been that way ever since. Back then (the late '70s and early '80s), buffer overflow vulnerabilities on home computers were well off the radar.
I couldn't tell you why they chose to have it grow down, when it seems more intuitive to have it grow up. As has been mentioned, though, memory was often split between stack and heap; perhaps the CPU designers thought it was important for the heap to grow up, so the stack grew down as a consequence.
Probably because the architecture for most CPUs was designed in a time when men were men and you could trust your programmers not to want to steal people's credit card numbers... it's mostly too late to change now (though, as you say, it probably could have been done for newer architectures like Itanium, which actually has two stacks!).