Why does Moore's Law necessitate parallel computing?

Posted 2020-06-09 06:22

This was a question in one of my CS textbooks. I am at a loss. I don't see why it necessarily would lead to parallel computing. Anyone wanna point me in the right direction?

16 Answers
我欲成王,谁敢阻挡
#2 · 2020-06-09 06:59

I think it is a reference to Herb Sutter's article "The Free Lunch Is Over".

Basically, the original version of Moore's law, about transistor density, still holds. But one important derived law, that processing speed doubles roughly every 18–24 months, has hit a wall.

So we are facing a future where processor clock speeds go up only slightly, but we will have more cores and cache to play with.

Anthone
#3 · 2020-06-09 06:59

The real answer is completely un-technical, not that the hardware explanations aren't fantastic. It's that Moore's Law has become less and less of an observation, and more of an expectation. This expectation of computers growing exponentially has become the driving force of the industry, which necessitates all the parallelism.

Emotional °昔
#4 · 2020-06-09 07:03

Because orthogonal computing has failed. We should go quantum.

做个烂人
#5 · 2020-06-09 07:04

Moore's law necessitates parallel computing because Moore's law is dying, or already dead. If it is becoming harder and harder to cram more transistors onto an IC (for some of the reasons noted elsewhere), then the remaining options are to add more processors, à la parallel processing, or to go quantum.

Viruses.
#6 · 2020-06-09 07:06

Moore's law just says that the number of transistors on a reasonably priced integrated circuit tends to double every 2 years.

Observations about speed or transistor density or die size are all somewhat orthogonal to the original observation.
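To put a number on that original observation, here's a back-of-the-envelope projection (the two-year doubling period is the figure quoted above; the Intel 4004 transistor count is a well-known historical data point, used here purely for illustration):

```python
def moore_projection(initial_transistors, start_year, target_year,
                     doubling_period_years=2):
    """Project transistor count assuming a fixed doubling period."""
    periods = (target_year - start_year) / doubling_period_years
    return initial_transistors * 2 ** periods

# Intel's 4004 (1971) had roughly 2,300 transistors.
# Thirty years later that's 15 doublings:
print(round(moore_projection(2300, 1971, 2001)))  # ~75 million
```

That lands in the right ballpark for CPUs of that era, which is why the observation held up as a planning tool for so long.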

Here's why I think Moore's law leads inevitably to parallel computing:

If you keep doubling the number of transistors, what are you going to do with them all?

  • More instructions!
  • Wider data types!
  • Floating Point Math!
  • More caches (L1, L2, L3)!
  • Micro Ops!
  • More pipeline stages!
  • Branch prediction!
  • Speculative execution!
  • Data Pre-Fetch!
  • Single Instruction Multiple Data!

Eventually, when you've implemented all the tricks you can think of to use those extra transistors, you think to yourself: why don't we just do all those cool tricks TWICE on the same chip?

Bada bing. Bada boom. Multicore is inevitable.
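A minimal sketch of where that leads, using Python's standard multiprocessing module to split an embarrassingly parallel sum across however many cores the machine reports (the helper names here are illustrative, not from any particular library):

```python
import os
from multiprocessing import Pool

def split_range(n, parts):
    """Split range(0, n) into `parts` contiguous (lo, hi) chunks."""
    step, rem = divmod(n, parts)
    chunks, lo = [], 0
    for i in range(parts):
        hi = lo + step + (1 if i < rem else 0)
        chunks.append((lo, hi))
        lo = hi
    return chunks

def partial_sum(bounds):
    """Sum of squares over one chunk; each worker core handles one chunk."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    workers = os.cpu_count() or 1
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, split_range(1_000_000, workers)))
    print(total == sum(i * i for i in range(1_000_000)))  # prints True
```

The point isn't the speedup of this toy (process startup may swamp it); it's that once the cores exist, this chunk-and-merge shape is how software has to be written to use them.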


Incidentally, I think the current trend of CPUs with multiple identical CPU cores will eventually subside as well, and the real processors of the future will have a single master core, a collection of general purpose cores, and a collection of special purpose coprocessors (like a graphics card, but on-die with the CPU and caches).

The IBM Cell processor (in the PS3) is already somewhat like this. It has one general-purpose master core (the PPE) and eight "Synergistic Processing Elements", seven of which are enabled in the PS3.

欢心
#7 · 2020-06-09 07:09

Interestingly, the idea proposed in the question that parallel computing is "necessitated" is thrown into question by Amdahl's Law, which basically says that having parallel processors will only get you so far unless 100% of your program is parallelizable (which is never the case in the real world).

For example, if you have a program which takes 20 minutes on one processor and is 50% parallelizable, and you buy a large number of processors to speed things up, your minimum time to run would still be over 10 minutes. This is ignoring the cost and other issues involved.
