Why is the von Neumann architecture preferred over the Harvard architecture when designing personal computers, while the Harvard architecture is used for microcontroller-based systems and DSP-based systems?
Answer 1:
Well, current CPU designs for PCs have both Harvard and von Neumann elements (more von Neumann, though).
If you look at the L1 caches, you will see that AMD, ARM, and Intel systems all have a separate L1 instruction cache and L1 data cache that can be accessed independently and in parallel. That is the Harvard part. However, in L2, L3, and DRAM, data and code are mixed. That is the von Neumann part.
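One way to see the von Neumann side from software: on a typical PC, a plain data pointer can read the bytes of a function's machine code, because beyond the split L1 caches code and data share one address space. A minimal C sketch (the function-pointer-to-data-pointer conversion is not strictly portable C, but it works with common PC compilers such as gcc and clang):

```c
#include <stdio.h>
#include <stdint.h>

/* A trivial function whose machine code we will inspect as data. */
static int add(int a, int b) { return a + b; }

int main(void) {
    /* In a unified (von Neumann) address space, a data pointer can
       reach the function's instruction bytes. A pure Harvard machine
       would have no way to express this load. */
    const uint8_t *code = (const uint8_t *)(uintptr_t)&add;
    for (int i = 0; i < 8; i++)
        printf("%02x ", code[i]);
    printf("\n");
    return 0;
}
```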
So why isn't a pure Harvard architecture adopted for PCs? In my opinion, it would not make sense. If you profile the vast majority of applications, you will see that the L1 instruction cache miss ratio is very small. This means that code size is generally not a problem, so it would not pay to build a fully separate path for code all the way down the memory hierarchy. Data can grow very large, but code really can't.
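If you want to check the miss ratio yourself on Linux, the perf_event_open(2) interface can count L1 instruction-cache accesses and misses directly. A minimal sketch (the busy loop stands in for a real workload, and not every CPU exposes the L1I access event; on such machines only the miss counter will read sensibly):

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Open one hardware-cache counter for the calling thread. */
static long perf_open(uint64_t config) {
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HW_CACHE;
    attr.size = sizeof(attr);
    attr.config = config;
    attr.disabled = 1;
    attr.exclude_kernel = 1;
    return syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
}

int main(void) {
    /* Encoding from perf_event.h: cache-id | (op << 8) | (result << 16). */
    uint64_t loads  = PERF_COUNT_HW_CACHE_L1I |
                      (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                      (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16);
    uint64_t misses = PERF_COUNT_HW_CACHE_L1I |
                      (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                      (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
    long fa = perf_open(loads), fm = perf_open(misses);
    if (fa < 0 || fm < 0) { perror("perf_event_open"); return 1; }

    ioctl(fa, PERF_EVENT_IOC_ENABLE, 0);
    ioctl(fm, PERF_EVENT_IOC_ENABLE, 0);

    /* Stand-in workload; measure your real application here. */
    volatile double x = 0.0;
    for (long i = 0; i < 10000000; i++) x += i * 0.5;

    ioctl(fa, PERF_EVENT_IOC_DISABLE, 0);
    ioctl(fm, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t a = 0, m = 0;
    if (read(fa, &a, sizeof a) < 0 || read(fm, &m, sizeof m) < 0)
        return 1;
    printf("L1I accesses=%llu misses=%llu ratio=%.4f%%\n",
           (unsigned long long)a, (unsigned long long)m,
           a ? 100.0 * m / a : 0.0);
    return 0;
}
```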
In DSPs it does make sense to use separate code and data paths. DSPs work mainly on streaming data, so the need for caching is rather small. Also, DSP code can contain pre-computed coefficient tables that inflate the code size. There is thus a rough balance between data size and code size, which makes a Harvard architecture worthwhile.
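To make the coefficient point concrete, here is a hypothetical streaming FIR filter in C (the 8-tap coefficients are made up for illustration). On a Harvard DSP, the const coefficient table typically lives in program memory alongside the code, while the samples stream through data memory, so the two fetches can proceed on separate buses in the same cycle:

```c
#include <stddef.h>

/* Pre-computed coefficients: on a Harvard DSP these typically sit in
   program/flash memory alongside the code, which is one reason DSP
   "code" grows with its coefficient tables. */
static const float coeff[8] = {
    0.02f, 0.08f, 0.16f, 0.24f, 0.24f, 0.16f, 0.08f, 0.02f
};

/* Streaming FIR step: consumes one input sample, produces one output
   sample; `state` is the 8-entry delay line owned by the caller. */
float fir_step(float state[8], float in) {
    /* Shift the delay line and insert the new sample. */
    for (size_t i = 7; i > 0; i--)
        state[i] = state[i - 1];
    state[0] = in;

    /* Coefficient fetch (program bus) and sample fetch (data bus)
       can overlap on a genuine dual-bus Harvard DSP. */
    float acc = 0.0f;
    for (size_t i = 0; i < 8; i++)
        acc += coeff[i] * state[i];
    return acc;
}
```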
Answer 2:
The fundamental difference between the von Neumann and Harvard architectures is that in a Harvard machine instruction memory is distinct from data memory, whereas in a von Neumann machine they are one and the same. Each mirrors a practical reality: in a PC, programs are stored on and read from the same media as data (usually disk and RAM), while in a microcontroller the program sits in non-volatile memory (flash) and data sits in volatile RAM.
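That split is visible in everyday embedded code. On an 8-bit AVR, a classic Harvard microcontroller, a constant table placed in flash cannot be read through an ordinary data pointer; it needs avr-libc's program-memory accessors. A sketch (builds with avr-gcc, not a PC compiler):

```c
#include <avr/pgmspace.h>

/* PROGMEM places the table in flash (instruction memory). */
static const unsigned char table[4] PROGMEM = { 1, 2, 3, 5 };

unsigned char lookup(unsigned char i) {
    /* pgm_read_byte emits an LPM instruction: on this Harvard core,
       program memory is a separate address space that plain data
       pointers cannot reach. */
    return pgm_read_byte(&table[i]);
}
```

On a PC, the same lookup would be a plain `table[i]`; the unified memory of the von Neumann model makes the distinction invisible to the programmer.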