The Petaflops Boondoggle Computer (was PET_ard)

jim bell jimbell at pacifier.com
Sun Sep 29 19:40:01 PDT 1996


At 12:16 PM 9/29/96 -0800, Timothy C. May wrote:
>At 11:50 AM -0800 9/29/96, jim bell wrote:
>>At 10:00 AM 9/29/96 -0800, Timothy C. May wrote:
>>>(Hoist by their own petards indeed! Don't tell our Russian what petard
>>>means.)
>>
>>Uh, wasn't that the name of the bald captain on Star Trek Next Generation?
>>You know, "Jean-Luc Petard"?
>
>Picard. 

I forgot the smiley  B^)

>>  The succeeding
>>factor-of-1000 improvement appears (if the item above is accurate) to have
>>taken  24 years to accomplish, so it's hard to imagine that the next factor
>>of 1000 will arrive appreciably sooner than year 2020.
>
>I agree. 

(Actually, it was a factor of 4000 since it was only 64 processors, but 
who's counting?)

Anyway, this reminds me:  What was Moore's law?  Performance doubling every 
18 months, as I recall?  How does this stack up?  Well, 24 years is 16 times 
18 months, so the increase in performance should have been 2**16, or 64K.  
That's off by a factor of 16, which is fairly close, as exponential 
expansions go.  If I were inclined to make the numbers fit the theory, I 
would argue that the design of the Illiac IV was probably based on SSI IC 
technology defined in 1966 or so, which would provide the extra six years 
(four doubling periods) that account for the "error."  One of the advantages 
of modern CAD technology is that chips can go from foundry to a working 
computer far faster.
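For anyone who wants to check that arithmetic, here it is spelled out 
(Python used purely as a calculator; every number is taken from the text 
above):

# Back-of-the-envelope check of the Moore's law estimate above:
# a factor-of-4000 speedup over 24 years, doubling every 18 months.
years = 24
doubling_period_years = 1.5
observed_speedup = 4000

periods = years / doubling_period_years        # 16 doubling periods
predicted_speedup = 2 ** periods               # 2**16 = 65536, i.e. 64K
print(predicted_speedup / observed_speedup)    # ~16, the "error" factor

# Crediting the Illiac IV design to ~1966 SSI technology adds six
# years, i.e. four more doubling periods, absorbing the discrepancy.
print(2 ** (6 / doubling_period_years))        # 2**4 = 16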


A few months ago, there was an item about how IBM had demonstrated its 
ability to produce 0.08 um silicon chips, with a gate delay (I don't recall 
how loaded this was...) of about 24 picoseconds.  Such a process could 
probably be used to produce a single chip that does about a billion 
operations per second, assuming it was pipelined adequately.  But even 
that's "only" a teraflop with 1000 such chips...  It makes me wonder what 
kind of rabbit they're gonna pull out of the hat to produce a petaflop.
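Roughly where that billion-operations figure comes from, sketched the same 
way (the 24 ps gate delay and the chip count are from the item above; the 
40-gate logic depth per pipeline stage is just my assumption):

# Throughput arithmetic for the 0.08 um process described above.
gate_delay_s = 24e-12        # 24 ps gate delay, per the IBM item
gates_per_stage = 40         # assumed logic depth per pipeline stage

cycle_time_s = gate_delay_s * gates_per_stage  # ~0.96 ns, ~1 GHz clock
ops_per_chip = 1 / cycle_time_s                # ~1e9 operations/second

print(ops_per_chip * 1000)   # ~1e12: a teraflop from 1000 such chips
print(1e15 / ops_per_chip)   # a petaflop needs on the order of 1e6 chips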


>By the way, I knew some of the folks who worked on parts of the
>Illiac-IV, which was still limping along as late as the late 70s (maybe
>later).

I think it was turned off in about 1982 or 1983.  I did a web search on its 
history a few months ago.


>>Yet a look at Intel's pricing for Pentiums shows that they sell a 120-MHz
>>chip for about $135, while they sell a 200-megahertz version for around $550
>>or so.  Arithmetic suggests that a person would be far better off with a
>>four-processor 120-MHz Pentium system (cumulative clock rate 480 MHz) than
>>a single 200-megahertz version.  (Admittedly, peripheral logic costs will
>>adjust this a little.)  Of course, this would also leave Intel flat on its
>>ass attempting to compete with AMD, Cyrix, etc., because a somewhat higher
>>speed per CPU is just about the only advantage they have.
>
>Intel is having no problem at all competing with AMD and Cyrix! Both of
>them are struggling---AMD just announced a layoff, and Cyrix is facing
>financial troubles. Neither are able to make competitive parts, for reasons
>I won't go into here, and neither are making the money they'll need to
>compete in the future with Intel. (Intel has half a dozen billion-dollar
>wafer fabs, running with extraordinarily high yields--or so my sources tell
>me :-}--and the more money they make, the more factories they build, the
>more they learn about how to make 0.35 and 0.25 micron chips, etc.)

True, but the world would be FAR better off if the architecture of the 
commonly used PC could be extended to allow multiple processors.  Yes, I'm 
aware of the inefficiency issues associated with multiple processors, but I 
think the dramatic cost reductions from using larger numbers of cheaper 
CPUs would much more than compensate for them.
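To put numbers on the pricing argument quoted above (again just using 
Python as a calculator; the prices are the ones I cited):

# Price/performance check using the quoted Intel prices:
# $135 for a 120-MHz Pentium, $550 for a 200-MHz part.
price_120, mhz_120 = 135, 120
price_200, mhz_200 = 550, 200

print(4 * price_120)    # $540 for four slow chips vs. $550 for one fast one
print(4 * mhz_120)      # 480 MHz cumulative clock vs. 200 MHz

# Aggregate clock per dollar, four 120s vs. one 200: roughly 2.4x.
print((4 * mhz_120 / (4 * price_120)) / (mhz_200 / price_200))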

As I see it, there are inefficiencies associated both with making a single 
CPU do multiple tasks and with making multiple CPUs do a single task.  The 
former is one of the reasons that PCs have pretty much mopped the floor 
with mainframes: the PCs were not "unfairly" hobbled with having to 
implement complicated time-sharing software.  The latter is the classic 
problem that kept people from moving from scalar machines to massively 
parallel ones.  However, I don't think that most of the tasks a typical PC 
performs are those "hard to divide" problems that resist parallel 
implementation.  Rather, a multiprocessor PC would assign larger tasks 
(whole programs) to individual CPUs and not try to break up a single 
program, as sketched below.
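Here is a minimal sketch of that one-program-per-CPU idea, written in 
modern Python with its multiprocessing module (my choice of illustration, 
obviously not something a 1996 PC would run).  Each worker process stands 
in for a whole independent program handed to its own CPU:

# Coarse-grained parallelism: each independent task (standing in for a
# whole program) runs in its own process, on its own CPU; no attempt is
# made to parallelize any single task internally.
from multiprocessing import Pool

def run_program(name):
    # Stand-in for an independent program: just burn some CPU time.
    total = sum(i * i for i in range(1_000_000))
    return name, total

if __name__ == "__main__":
    tasks = ["editor", "compiler", "mailer", "browser"]
    with Pool(processes=4) as pool:    # one worker per (hypothetical) CPU
        for name, result in pool.map(run_program, tasks):
            print(name, result)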





Jim Bell
jimbell at pacifier.com





