The first Thinking Machines computer had tens of thousands of very simple, slow processors. Their second system had far fewer processors, but each one was significantly more capable.
Almost everyone at the time thought that the second machine was far more usable. Were they wrong? If so, why?
No, they weren't wrong. People like Hillis and now Patterson are thinking hardware-centric; the most efficient thing to build is many small cores. But users don't want that. If people bought hardware efficiency, we'd all be using Transputers, Alphas, and Cells. The real challenge is to figure out the most efficient design that customers will actually buy.
I'm guessing the reasoning behind that statement is power consumption. The higher a core's clock rate, the more power it draws and the more heat it dissipates. If you don't slow each core down in a crowded multicore chip, the die may overheat. Until we get semiconductors that can withstand higher temperatures, I'd say that constraint is here to stay.
It's a very simplified explanation. In the future you will have a choice between fewer, fatter cores (Nehalem etc.) or many, slower cores (Niagara, Larrabee, Tilera, etc.). The massive multicore chips will have more MIPS and MIPS/W if you can use them. But if you can't use them, don't buy them.
This post seems misinformed. In a massively multicore world, parallelism is more important than concurrency, and the two are not the same. Basically, concurrency means the order of tasks is not known a priori; parallelism means segmenting a problem so subproblems can be solved in many places at once.
In a massively multicore world, I'd offer that parallelism is the goal these people want to address. How they did not mention Haskell is beyond me.
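To make that distinction concrete, here's a minimal sketch (in Go rather than any language from the thread; the function names are mine): one problem is segmented into independent subproblems solved on several cores at once, and the answer is deterministic no matter which subproblem finishes first — that's parallelism, not merely concurrency.

```go
package main

import (
	"fmt"
	"sync"
)

// parallelSum splits the slice into one chunk per worker; each
// worker sums its own chunk into its own slot, so there is no
// shared mutable state and the result is deterministic.
func parallelSum(nums []int, workers int) int {
	chunk := (len(nums) + workers - 1) / workers
	partial := make([]int, workers)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			lo := w * chunk
			hi := lo + chunk
			if hi > len(nums) {
				hi = len(nums)
			}
			if lo > len(nums) {
				lo = len(nums)
			}
			for _, n := range nums[lo:hi] {
				partial[w] += n
			}
		}(w)
	}
	wg.Wait()
	total := 0
	for _, p := range partial {
		total += p
	}
	return total
}

func main() {
	nums := make([]int, 1000)
	for i := range nums {
		nums[i] = i + 1
	}
	fmt.Println(parallelSum(nums, 4)) // 500500
}
```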
in a massively multicore world, parallelism is more important than concurrency, and the two are not the same
I think you're right in a technical sense - that article used the term slightly sloppily. However, the degree of parallelism in a program is limited by the ability of the human author to cope with all the concurrent interactions. So fundamentally, the two words boil down to the same problem.
One approach - that taken by Erlang and (from what I understand) Haskell - is to force people to write programs in a special way (pure message-passing or pure-functional), such that they become (almost) embarrassingly parallelisable.
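A rough illustration of that shared-nothing, message-passing style — Erlang's model, sketched here with Go channels standing in for Erlang processes; `squarer` and `sumOfSquares` are invented names, not anything from the article:

```go
package main

import "fmt"

// squarer plays the role of an Erlang-style "process": it shares
// no memory with its peers and talks to the world only through
// messages (here, channel sends and receives).
func squarer(in <-chan int, out chan<- int) {
	for n := range in {
		out <- n * n
	}
	close(out)
}

// sumOfSquares feeds 1..n to the squarer process and folds the
// replies; because no state is shared, the process could just as
// well run on another core or another machine.
func sumOfSquares(n int) int {
	in := make(chan int)
	out := make(chan int)
	go squarer(in, out)
	go func() {
		for i := 1; i <= n; i++ {
			in <- i
		}
		close(in)
	}()
	sum := 0
	for sq := range out {
		sum += sq
	}
	return sum
}

func main() {
	fmt.Println(sumOfSquares(5)) // 55 = 1+4+9+16+25
}
```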
Another approach - that taken by Scala, and my personal favourite, Clojure - is to keep the existing paradigm (JVM+threads in both those cases), and encourage people to write large parts of their program in styles which make concurrency easier (Actors, STM, immutable values, and so on).
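A hedged sketch of the Actor idea from that list, using Go channels as a stand-in for Scala or Clojure on the JVM (all names here are hypothetical): one goroutine owns the mutable counter, and everyone else touches it only by message, so heavy concurrency needs no locks.

```go
package main

import (
	"fmt"
	"sync"
)

// incMsg asks the counter actor to increment and report its value.
type incMsg struct{ reply chan int }

// counterActor is the only code that ever touches `count`;
// confining the mutable state to one goroutine is the essence
// of the actor style.
func counterActor(inbox <-chan incMsg) {
	count := 0
	for msg := range inbox {
		count++
		msg.reply <- count
	}
}

// hammer fires n concurrent increment messages, then one final
// query, and returns the counter's last value (n+1, since the
// query itself also increments).
func hammer(n int) int {
	inbox := make(chan incMsg)
	go counterActor(inbox)

	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			reply := make(chan int)
			inbox <- incMsg{reply}
			<-reply
		}()
	}
	wg.Wait()

	reply := make(chan int)
	inbox <- incMsg{reply}
	return <-reply
}

func main() {
	fmt.Println(hammer(100)) // 101: 100 increments plus the final query
}
```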
There's something to be said for both approaches: Erlang- or Haskell-style "purity" keeps you well-behaved, and gives you a lot more parallelism "for free". On the other hand, there are processes, tasks and other systems which are fundamentally sequential or mutable, and forcing you into conceptual backflips to cope with this can create pointless friction.
I don't mean to put one approach above the other - and I'm aware that my summary is woefully inadequate - but I do believe this article is engaging with a valid debate, and to pick it apart on sloppy use of a word is to miss the point.