This was a question in one of my CS textbooks, and I am at a loss. I don't see why Moore's law would necessarily lead to parallel computing. Anyone want to point me in the right direction?
I think it is a reference to Herb Sutter's article "The Free Lunch Is Over".
Basically, the original version of Moore's law, about transistor density, still holds. But one important derived law, that processing speed doubles every xx months, has hit a wall.
So we are facing a future where processor speeds will go up only slightly, but we will have more cores and cache to play with.
The real answer is completely non-technical (not that the hardware explanations aren't fantastic). It's that Moore's Law has become less and less of an observation and more of an expectation. That expectation of computers growing exponentially has become the driving force of the industry, and it is what necessitates all the parallelism.
Because orthogonal computing has failed. We should go quantum.
Moore's law necessitates parallel computing because Moore's law is on the verge of being dead, if not dead already. Taking that into consideration, if it is becoming harder and harder to cram transistors onto an IC (for some of the reasons noted elsewhere), then the remaining options are to add more processors, i.e. parallel processing, or to go quantum.
Moore's law just says that the number of transistors on a reasonably priced integrated circuit tends to double every 2 years.
Observations about speed or transistor density or die size are all somewhat orthogonal to the original observation.
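To see what that doubling implies in raw numbers, here is a minimal sketch in Python; the starting point (roughly the 2,300-transistor Intel 4004 of 1971) and the sample years are illustrative assumptions, not part of the law itself:

```python
# Rough Moore's-law projection: transistor count doubles every 2 years.
def transistor_count(start_count, start_year, year, doubling_period=2):
    """Project the transistor count assuming a clean doubling every period."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Starting from ~2,300 transistors in 1971 (Intel 4004) -- an illustrative
# baseline, not a figure claimed in this answer.
for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{transistor_count(2300, 1971, year):,.0f}")
```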
Here's why I think Moore's law leads inevitably to parallel computing:
If you keep doubling the number of transistors, what are you going to do with them all?
Eventually, when you've implemented all the tricks you can think of to use all those extra transistors, you think to yourself: why don't we just do all those cool tricks TWICE on the same chip?
Bada bing. Bada boom. Multicore is inevitable.
Incidentally, I think the current trend of CPUs with multiple identical CPU cores will eventually subside as well, and the real processors of the future will have a single master core, a collection of general purpose cores, and a collection of special purpose coprocessors (like a graphics card, but on-die with the CPU and caches).
The IBM Cell processor (in the PS3) is already somewhat like this. It has one master core and seven "synergistic processing units".
Interestingly, the idea proposed in the question that parallel computing is "necessitated" is thrown into question by Amdahl's Law, which basically says that having parallel processors will only get you so far unless 100% of your program is parallelizable (which is never the case in the real world).
For example, if you have a program which takes 20 minutes on one processor and is 50% parallelizable, and you buy a large number of processors to speed things up, your run time will still be over 10 minutes, because the serial half alone takes 10 minutes no matter how many processors you add. And that is ignoring the cost and other issues involved.
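To make that concrete, here is a minimal sketch of the Amdahl's Law arithmetic; the 20-minute, 50%-parallelizable program is the example above, while the processor counts are arbitrary:

```python
# Amdahl's Law: with a parallelizable fraction p and n processors,
# speedup = 1 / ((1 - p) + p / n), so run time = serial_time / speedup.
def amdahl_runtime(single_cpu_minutes, parallel_fraction, num_processors):
    """Run time after parallelizing, ignoring communication and other overheads."""
    speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / num_processors)
    return single_cpu_minutes / speedup

# The 20-minute, 50%-parallelizable program from the example above:
for n in (1, 2, 4, 16, 1024):
    print(f"{n:>5} processors: {amdahl_runtime(20.0, 0.5, n):6.2f} minutes")
# The run time approaches, but never reaches, the 10-minute serial portion.
```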