Question:
Closed 7 years ago.
This was a question in one of my CS textbooks. I am at a loss: I don't see why Moore's law would necessarily lead to parallel computing. Anyone want to point me in the right direction?
Answer 1:
Moore's law just says that the number of transistors on a reasonably priced integrated circuit tends to double every 2 years.
Observations about speed or transistor density or die size are all somewhat orthogonal to the original observation.
Here's why I think Moore's law leads inevitably to parallel computing:
If you keep doubling the number of transistors, what are you going to do with them all?
- More instructions!
- Wider data types!
- Floating Point Math!
- More caches (L1, L2, L3)!
- Micro Ops!
- More pipeline stages!
- Branch prediction!
- Speculative execution!
- Data Pre-Fetch!
- Single Instruction Multiple Data!
Eventually, when you've implemented all the tricks you can think of to use all those extra transistors, you think to yourself: why don't we just do all those cool tricks TWICE on the same chip?
Bada bing. Bada boom. Multicore is inevitable.
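The punchline, doing the same trick two (or four) times on the same chip, can be sketched with nothing but the Python standard library. This is a minimal illustration, not anything from the answer: `count_primes` and the worker count of 4 are made-up assumptions.

```python
# A minimal sketch of "doing the trick twice on the same chip": the same
# CPU-bound work farmed out to multiple cores via a process pool.
from multiprocessing import Pool

def count_primes(limit):
    """Naive CPU-bound work: count the primes below limit by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [5_000] * 4                 # four independent pieces of work
    with Pool(processes=4) as pool:      # one worker per (assumed) core
        totals = pool.map(count_primes, chunks)
    print(sum(totals))
```

Each worker runs identical code on its own core; the speedup comes from replication, not from any of the single-core tricks in the list above.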
Incidentally, I think the current trend of CPUs with multiple identical CPU cores will eventually subside as well, and the real processors of the future will have a single master core, a collection of general purpose cores, and a collection of special purpose coprocessors (like a graphics card, but on-die with the CPU and caches).
The IBM Cell processor (in the PS3) is already somewhat like this. It has one master core and seven "synergistic processing units".
Answer 2:
One word - Heat.
Due to an inability to dissipate heat at current transistor densities, engineers are using their ever-growing transistor budgets to create more cores instead of creating more complex (and hotter) pipelines and faster processors.
Moore's law is not at all dead: Moore's law is about transistor density at a given cost. It just so happens that, for various reasons (like marketing), engineers decided to spend their transistor budget on increasing clock speed. Now they have decided (because of the heat issue) to start using the transistors for parallelism, plus 64-bit computing and reduced power consumption.
Answer 3:
Moore's law describes the trend that the performance of chips effectively doubles due to the addition of more transistors to an integrated circuit.
Since devices are not increasing in size (if anything the reverse is true), the space for these additional transistors only becomes available because chip technology keeps shrinking and manufacturing keeps improving.
At some point, however, you reach the stage where transistors cannot be shrunk any further. It also becomes impossible to increase the size of chips beyond a certain point because of the heat generated and the manufacturing costs involved.
These limits necessitate a means of increasing performance beyond simply producing more complex chips.
One such method is to employ cheaper and less complex chips in parallel architectures; another is to move away from the traditional integrated chip to something like quantum computing, which by its very nature is parallel processing.
It's worth noting that the title of this question relates more to the observed results of the law (performance increase) rather than the actual law itself which was largely an observation about transistor count.
Answer 4:
I think it is a reference to Herb Sutter's article "The Free Lunch Is Over".
Basically, the original version of Moore's law, about transistor density, still holds. But one important derived law, about processing speed doubling every xx months, has hit a wall.
So we are facing a future where processor speeds go up only slightly, but we will have more cores and more cache to play with.
Answer 5:
That is an odd question. Moore's law doesn't necessitate anything; it is just an observation of the progression of computing power, and it doesn't dictate that it must increase at a certain rate.
Answer 6:
Increasing the speed of processors would make the operating temperature so extremely high it would burn a hole in your desk. The makers of the chips are running up against certain limitations they can't get around... like the speed of light, for instance. Parallel computing will allow them to speed up the computers without starting a fire.
Answer 7:
Transistors and CPUs and whatnot are getting smaller and smaller and faster and faster. Alas, the heat and power costs of computing are going up, and those issues are as much of a concern as the physical size minimums. A 100 GHz chip would draw too much power and get too hot, but 100 chips at 1 GHz would have much less of an issue with this.
Answer 8:
Interestingly, the idea proposed in the question that parallel computing is "necessitated" is thrown into question by Amdahl's Law, which basically says that having parallel processors will only get you so far unless 100% of your program is parallelizable (which is never the case in the real world).
For example, if you have a program which takes 20 minutes on one processor and is 50% parallelizable, and you buy a large number of processors to speed things up, your minimum time to run would still be over 10 minutes. This is ignoring the cost and other issues involved.
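The 20-minute example can be checked directly from Amdahl's formula, speedup(n) = 1 / ((1 - p) + p/n), where p is the parallelizable fraction and n the processor count. A small sketch (function names are just for illustration):

```python
# Amdahl's law: with parallelizable fraction p and n processors,
#   speedup(n) = 1 / ((1 - p) + p / n)
# so the serial fraction (1 - p) caps the speedup at 1 / (1 - p).

def amdahl_speedup(p, n):
    """Speedup on n processors for a program whose fraction p parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

def run_time(total_minutes, p, n):
    """Wall-clock time of that program on n processors."""
    return total_minutes * ((1.0 - p) + p / n)

# The answer's example: 20 minutes on one processor, 50% parallelizable.
print(run_time(20, 0.5, 2))               # → 15.0 minutes
print(round(run_time(20, 0.5, 1000), 2))  # → 10.01 minutes: never below 10
```

Even with 1000 processors, the serial half still takes its full 10 minutes, which is exactly the answer's point.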
Answer 9:
The real answer is completely un-technical, not that the hardware explanations aren't fantastic. It's that Moore's Law has become less and less of an observation, and more of an expectation. This expectation of computers growing exponentially has become the driving force of the industry, which necessitates all the parallelism.
Answer 10:
Moore's law says that the number of transistors in an IC relative to cost increases exponentially year on year.
Historically, this was partly due to a decrease in transistor size, and smaller transistors also switched faster. Because you got faster transistors in step with Moore's law, clock speed increased. So there's a common confusion that Moore's law means faster processors rather than just wider ones.
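The compound growth in that first sentence is easy to put numbers on. A small sketch, taking the 2-year doubling period and the Intel 4004's roughly 2,300 transistors in 1971 as an illustrative starting point:

```python
# Moore's law as a compound-growth model: transistor count doubles
# roughly every 2 years at a given cost.

def projected_transistors(start_count, start_year, year, doubling_years=2.0):
    """Project a transistor count forward under a fixed doubling period."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# From ~2,300 transistors in 1971, forty years is twenty doublings:
print(int(projected_transistors(2300, 1971, 2011)))  # → 2411724800 (~2.4 billion)
```

Real 2011-era chips were within that ballpark, which is why the "law" held up for decades; the clock-speed side of the story, as this answer notes, stopped scaling much earlier.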
Heat dissipation caused the speed increase to top out at around 3 GHz for economically produced silicon.
So if you want more cheap computation, it's easier to add more, slower circuits. So the current state-of-the-art commodity processors are multi-core - they are getting wider, but no faster.
Graphene film transistors require less power, and are performing at around 30 GHz, with theoretical limits at around 0.6 THz.
When graphene technology matures to commodity level in a few years, expect another sea change: no-one will care about using parallel cores for performance, and we will go back to narrow, fast cores. On the other hand, concurrent computing will still matter for the problems it is a natural fit for, so you'll still have to know how to handle more than one execution unit.
Answer 11:
Because orthogonal computing has failed. We should go quantum.
Answer 12:
Moore's law necessitates parallel computing because Moore's law is on the verge of dying, if not already dead. Taking that into consideration, if it is becoming harder and harder to cram transistors onto an IC (for some of the reasons noted elsewhere), then the remaining options are to add more processors, i.e. parallel processing, or to go quantum.
Answer 13:
Moore's law still holds. Transistor counts are still increasing. The problem is figuring out something useful to do with all those transistors. We can't just keep increasing the instruction level parallelism by making pipelines deeper and wider because the circuitry necessary to prove independence between instructions scales terribly in the number of instructions you need to prove independence of. We can't just keep cranking up clock speeds because of heat. We could just keep increasing cache size, but we've hit a point of diminishing returns here. The only use left for the transistors seems to be putting more cores on a chip, which means that the engineer's job of figuring out what to do with the transistors is just pushed up the abstraction ladder, and now programmers have to figure out what to do with all those cores.
Answer 14:
I don't think Moore's law necessitates parallel computing, but it does necessitate an eventual shift away from pure miniaturization. Multiple solutions exist. One of them is parallel computing; another is co-processing (which is related to, but not the same thing as, parallel computing: co-processing is when you offload work to a special-purpose processor such as a GPU, DSP, etc.).
Answer 15:
I honestly don't really know, but my guess would be that at some point transistors could get no smaller, requiring processing power to be spread out in parallel.
Answer 16:
It's because we're all addicted to increasing speed in our processors. Years of conditioning have led us to expect more processing power, year after year. But the physical constraints caused by densely packed transistors have finally put a limit on clock speeds, so increases have to come from a different perspective.
It doesn't have to be this way. The success of the Intel Atom processor shows that processors could just get smaller and cheaper instead. The processor companies will try to keep us on the "bigger, faster" treadmill though, to keep their profits up. And we'll be willing participants, because we'll always find a way to use more power.