Linux On Steroids: DIY supercomputer +Distributed Terascale Facility

Tim May tcmay at got.net
Sun Aug 12 18:38:50 PDT 2001


On Sunday, August 12, 2001, at 02:41 PM, Faustine wrote:

> J.A. Terranson wrote:
> On Thu, 9 Aug 2001, Faustine wrote:
>
>> 232.6 billion operations a second still looks fairly impressive to me.
>>
>> ~Faustine.
>
> Cryptographically speaking, *yawn*.
>
> "Fairly impressive" in that it's better than what I've got in my
> basement right now. And for me, part of the appeal lies in the
> satisfaction of putting something like that together entirely yourself
> out of components other people considered worthless and discarded. Not
> to mention being able to use it for whatever you want, whenever you
> want, without depending on anyone else's machine: a wonderful blend of
> self-sufficiency, ingenuity and megalomania, ha.

So, are you now claiming you plan to build one? Why else the "part of 
the appeal lies in the satisfaction of" bit?

As I showed in some calculations a few days ago, power costs dominate 
hardware costs with older CPUs. It is simply not cost-effective to use 
older processors.
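The arithmetic behind that claim can be sketched in a few lines. All the wattages, prices, and the $0.10/kWh electricity rate below are my own circa-2001 ballpark assumptions for illustration, not figures from the earlier post:

```python
# Rough sketch of the power-vs-hardware argument above.
# Every number here is an illustrative assumption.

HOURS_PER_YEAR = 24 * 365          # 8760 hours, running around the clock
PRICE_PER_KWH = 0.10               # USD, assumed electricity rate

OLD_CPU_WATTS = 35                 # assumed draw of one ~200 MHz Pentium box
NEW_CPU_WATTS = 70                 # assumed draw of one ~1.4 GHz Athlon
OLD_CPUS_NEEDED = 30               # low end of the 30-100x estimate below
NEW_CPU_PRICE = 200                # assumed street price of the fast chip

def annual_power_cost(watts):
    """Yearly electricity cost for a load running 24/7."""
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

old_array_cost = annual_power_cost(OLD_CPU_WATTS * OLD_CPUS_NEEDED)
new_cpu_cost = annual_power_cost(NEW_CPU_WATTS)

print(f"old array, 1 year of power: ${old_array_cost:.0f}")   # ~$920
print(f"new CPU,   1 year of power: ${new_cpu_cost:.0f}")     # ~$61
print(f"new CPU purchase price:     ${NEW_CPU_PRICE}")
```

Under these assumptions the salvaged array's first-year power bill alone exceeds the purchase price of the faster chip, which is the whole point.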


>  Personally, I'd like to run problems through some optimization and
> simulation software, do a little code-based qualitative analysis, etc.
> without hogging resources somewhere else with all the old wizards
> looking over my shoulder, tapping their feet. Tim made a lot of great
> points about the drawbacks. Still, it's "a nice toy", as someone here
> characterized it.

As with AI and other speculative applications, 99.9% of solving a 
computational problem is finding the right approach, the right 
algorithms.

I strongly doubt that there is anything along the lines of "run 
problems...simulation...code-based qualitative analysis" for which you 
"need" a mere factor of 10 or 20 speed improvement. (Said 10-20x speedup 
would need roughly 30-100 times as many CPUs, as I outlined in my recent 
post on using 66 MHz-300 MHz Pentiums and Pentium IIs in place of 1.1 
GHz Pentium IIIs, 1.4 GHz Athlons, 1.7 GHz Pentium 4s, etc.)
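The "30-100 times as many CPUs" figure falls out of a simple back-of-envelope calculation. The per-chip performance gap and the cluster efficiency below are assumptions I've picked for illustration:

```python
# Back-of-envelope sketch of the "30-100x as many CPUs" estimate.
# The gap and efficiency figures are assumptions, not measurements.

def old_chips_needed(target_speedup, per_chip_gap, parallel_eff):
    """Old CPUs needed to outrun one modern CPU by target_speedup,
    given how many times slower each old chip is (per_chip_gap)
    and the fraction of the cluster doing useful work (parallel_eff)."""
    return target_speedup * per_chip_gap / parallel_eff

# Assumed gap: a 300 MHz Pentium II vs a 1.1 GHz Pentium III is
# roughly 3.7x on clock alone; call it ~4x once per-clock
# improvements are folded in.
gap = 4.0
for speedup in (10, 20):
    n = old_chips_needed(speedup, gap, parallel_eff=0.8)
    print(f"{speedup}x speedup -> about {n:.0f} old CPUs")
```

With these numbers a 10x speedup takes about 50 old chips and a 20x speedup about 100, i.e. squarely in the 30-100x range; a wider clock gap (66 MHz parts) or worse cluster efficiency pushes the count higher still.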

It's sort of like astronomy: having a Keck or Hubble telescope sounds 
like a nice thing, but keeping it "loaded" is difficult for a user 
unless he has a very clear observation program in mind. Hence the 
intensive reviews for proposed uses.

If you are already running a CPU-intensive task on a "mere" 10 GOPS 
machine, and you think that you can load a 500 GOPS machine efficiently, 
and you are willing to spend more in electrical power in the first year 
than buying faster CPUs would have provided (!!!), go for it.

But if you just think it would be "neat" to have a 500 GOPS machine 
tripping your power mains when you turn the array on (you _did_ think 
about the additional power you'd have to provide, didn't you?), think 
harder.

The Terascale machine is one of several such _large_ arrays. My old 
company, Intel, makes the IA-64 (Itanium and follow-ups) architecture.

> Linux supercomputing grid unveiled for science use
>
> By TODD R. WEISS
> The National Science Foundation (NSF) yesterday announced a $53 million
> project to connect a series of remotely located powerful computers into
> a high-speed Linux supercomputer grid that could open vast new
> opportunities for scientific and medical breakthroughs.
> The project, to be funded by a three-year grant from the NSF, will be
> built by the middle of next year, giving scientists and researchers
> access to massive combined supercomputer power they have until now only
> dreamed about.
> ....

> Armonk, N.Y.-based IBM will provide more than 1,000 IBM eServer Linux
> clusters that will be running more than 3,300 of Intel Corp.'s upcoming
> McKinley Itanium processors for the system, as well as IBM data storage
> products and support services.

And, of course, tasks for this machine will be very carefully picked.

By the way, the involvement of IBM is an important point. There is no 
mention of their own "Power" architecture in this project. This is yet 
another hint that they are moving in the same direction Compaq just 
signalled when they dropped the Alpha and adopted the IA-64.

Since H-P is already a partner with Intel in the IA-64 effort, this 
means that every significant computer and server maker has adopted the 
IA-64 except one: Sun Microsystems. They're still using the SPARC 
architecture, but it clearly has not become the building block (for 
others) that they once hoped it would. The fact that they don't control 
their own manufacturing of the chip is also an issue.

MIPS has become a microcontroller and game machine CPU--consumer 
electronics. Alpha is being discontinued. H-P's PA-RISC is being merged 
into the IA-64 path, and IBM is hedging its bets on "Power" (PowerPC a 
la G3/G4 and its higher-end Power chips for large computers) by 
aggressively marketing Pentium, Xeon, and Itanium machines. That's all 
the major architectures gone or going, except for UltraSPARC.

And knowing what I know about upcoming chips and processes, I'm hanging 
on to my Intel stock for the rocket ride that is coming.


--Tim May





More information about the cypherpunks-legacy mailing list