Fwd: [Cryptography] Buffer Overflows & Spectre
---------- Forwarded message ----------
From: Henry Baker <hbaker1@pipeline.com>
Date: Tue, 20 Nov 2018 18:42:46 -0800
Subject: Re: [Cryptography] Buffer Overflows & Spectre
To: Jon Callas <jon@callas.org>
Cc: cryptography@metzdowd.com

In-line comments below.

At 05:17 PM 11/20/2018, Jon Callas wrote:
I understand what you're saying, but here, irony perhaps, but not betrayal, no.
I've been dealing with this for a year now, and it's not quite what you're saying, if I understand you. It has nothing to do with buffer overflows, but with memory disclosures. It's a side-channel or covert-channel issue.
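To make that concrete, here is a minimal sketch of the Spectre variant-1 "bounds check bypass" pattern. The names and array sizes are illustrative, in the style of the published examples, not code from any real product:

/* Minimal sketch of a Spectre variant-1 gadget.  All names and
 * sizes here are illustrative, not from any real codebase. */
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];          /* in-bounds data */
uint8_t array2[256 * 4096];  /* probe array: one page per byte value */
size_t  array1_size = 16;

void victim(size_t x) {
    if (x < array1_size) {               /* bounds check... */
        /* ...but the CPU may speculate past it when x is out of
         * bounds, reading secret memory at array1[x] and leaving a
         * secret-dependent line of array2 in the cache.  The
         * architectural state is rolled back; the cache state is not. */
        uint8_t secret = array1[x];
        volatile uint8_t sink = array2[secret * 4096];
        (void)sink;
    }
}

The bounds check is architecturally honored; it is the cache footprint of the speculative load that survives, and that footprint is what a timing probe later reads out.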
Your metaphor about automobiles doesn't quite hit the mark. In the case of the emissions testing, that was an intentional mechanism put in to fool regulators. CPU manufacture is not regulated at all, and Spectre is so unlike emissions or even braking that it's hard to say what would tune the metaphor. It's not fraud in any sense, since there's no intent there. Moreover, there are no direct analogues of such a problem in automotive technology. (Note that I didn't call it a bug; it's not a bug, it is an emergent consequence of design.)
[I'm a bit of a car buff, so I'm quite well versed in the facts of auto safety and emissions history.] Of course Spectre was intentional. The whole point of speculation is to "fool" the standard benchmarks and run them fast, even if the results on non-benchmark code are insecure or buggy. Your arguments work equally well for the auto companies -- they have "merely" tuned their engines to fool the standard emissions benchmarks, even if the results the rest of the time are complete crap. Your comment that "CPU manufacture is not regulated at all" is precisely what Ralph Nader found when he published his book "Unsafe at Any Speed" in 1965, which led to significant government regulation of auto safety. Since the CPU manufacturers have clearly put profits ahead of people, it is now time to regulate them -- either through legislation or through common-law lawsuits. [Come to think of it, Nader needs to write "Unsafe at Any Speed 2.0" to describe the current CPU Spectre crisis!]
Out-of-order instruction execution first appeared in the IBM 360/91 back in 1968. In our world, it appeared in PCs with the PPC 603, and in x86 CPUs with the AMD K5 and the Pentium Pro. Back in the day, there were some Multics papers about covert channels (Paul Karger did some nice ones) related to speculative execution, but by and large no one really considered it an issue. It's been a mainstay of CPU design for well over twenty years.
Automobiles in the 1960's had been deathtraps and smog generators for 50 years before Nader was finally able to stop the "car"-nage. A slow-motion train wreck is still a train wreck, and those companies responsible are just as guilty as those causing fast-motion train wrecks.
If one wants to rail about something, it would be the misplaced use of metrics and the truism that "if you can't measure it, it doesn't exist." We're in this mess because we care about performance (which we can measure) more than anything like security (which we can't). I think it's far more like the present mess with C compilers, where the compiler people have created so much undefined behavior that C is no longer a low-level language. There was a time when the problem with C was that, since there was little to no optimization in the compiler, it all fell to the author to write efficient code (which was often silly, because the other major facet of C was its alleged portability). As C delivered on its portability promises and its compilers grew more aggressive about optimization, we arrived at the security problems of today, where it is very hard to write a program with no undefined behavior. We're up to nearly 200 different kinds of undefined behavior.
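One small, hedged example of what that looks like in practice (exact behavior varies by compiler and optimization level):

/* How undefined behavior lets an optimizer remove code the author
 * thought was a safety check.  A sketch; results vary by compiler. */
#include <limits.h>
#include <stdio.h>

int will_overflow(int x) {
    /* Signed overflow is undefined, so the compiler may assume
     * x + 1 > x always holds and fold this whole test to 0. */
    return x + 1 < x;
}

int main(void) {
    printf("%d\n", will_overflow(INT_MAX));  /* often prints 0 under -O2 */
    return 0;
}

The author wrote a guard; the optimizer, licensed by undefined behavior, is entitled to delete it.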
You'll have to speak for yourself. Many of us in the computer science community have long been advising correctness and security over speed. The C-oriented computer architectures of the 1980's and 1990's merely "externalized" the costs of software safety and security and "socialized" those costs for society as a whole to bear, while "privatizing" the profits from these unsafe systems. We're still paying billions of dollars a year tracking down the bugs and buffer overflows in software written by C-weenies in the 1980's. [Check out the tests used by Microsoft and other SW companies in the 1980's to weed out careful programmers; "Move Fast and Break Things" is an attitude that started long before Facebook.]
Anyway, getting back to this, the problem we face is how to make systems that are faster without these side effects. The promise that CPU designers have been giving us, and that we also demand, is what gets hand-waved as "Moore's Law": shorthand for continuous, regular performance gains over time.
If you go back to the Congressional hearings about auto safety and auto emissions, you will find exactly the same set of excuses. The auto industry was building cars that could go well over 100 mph, but whose tires blew out above 90 mph, and whose brakes and suspensions were completely inadequate for the power and speeds of these vehicles. Regarding auto emissions, check out the story of how Honda embarrassed the Detroit auto industry by demonstrating Honda's better technology to Congress on one of GM's *own engines*.
On top of this, there's no easy way to fix pipelining to make this go away. Even getting rid of speculative execution doesn't make it go away, since it is timing differences that leak memory contents. Even getting rid of caches doesn't fix the problem, because RAM often has its own internal caches (which cause timing differences), and even bare RAM has timing differences caused by locality of reference. Getting rid of high-resolution clocks doesn't fix it either, because a spin loop is a close enough approximation to a clock to make it work.
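A sketch of that last point, using POSIX threads (names are illustrative, and a strictly conforming version would use C11 atomics for the counter):

/* Removing high-resolution clocks doesn't remove the channel:
 * a spinning thread makes a usable timer.  Compile with -pthread. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static volatile uint64_t ticks;   /* the improvised "clock" */
static volatile int running = 1;

static void *spin(void *arg) {
    (void)arg;
    while (running)
        ticks++;                  /* increments at roughly core speed */
    return NULL;
}

/* Time one memory access in "ticks"; a cache hit and a cache miss
 * give different counts even with no real clock available. */
static uint64_t time_access(volatile uint8_t *p) {
    uint64_t start = ticks;
    (void)*p;
    return ticks - start;
}

int main(void) {
    static uint8_t buf[4096];
    pthread_t t;
    pthread_create(&t, NULL, spin, NULL);
    printf("access took ~%llu ticks\n",
           (unsigned long long)time_access(&buf[0]));
    running = 0;
    pthread_join(t, NULL);
    return 0;
}

Deny the attacker a timestamp instruction and they will build their own clock out of a busy core.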
The only real way to get rid of this is to go back to an architecture where the CPU clock is the same as the memory clock, there are no speedups in memory fetching or in pipelines, and likely more. I'll bet even that doesn't do it.
I really don't care any more about excuses. I see 4/8/12-core processors whose entire computational load is handling 50-100 (or more) JavaScript components so that tens of advertisers can better track me as I read a single paragraph of text. Every additional computational cycle is more than eaten up by larger JavaScript programs, so web pages are slower than they were twenty years ago. I really don't care to give up my security to enable more of this garbage.
There are people thinking about things that could fix this, some of which are outré, and some are complete rethinks of computer architecture, like flat address spaces with memory tagging.
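As a toy illustration of the memory-tagging idea (a pure software sketch with invented names; real proposals put the tag check in the memory system, not in library code):

/* Toy software sketch of pointer/memory tagging.  Layout and
 * names are invented for illustration only. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TAG_SHIFT 56   /* keep an 8-bit tag in the unused top byte */

static void *tag_ptr(void *p, uint8_t tag) {
    return (void *)(((uintptr_t)p & ((1ULL << TAG_SHIFT) - 1))
                    | ((uintptr_t)tag << TAG_SHIFT));
}

static void *check_and_strip(void *p, uint8_t expected) {
    uint8_t tag = (uintptr_t)p >> TAG_SHIFT;
    assert(tag == expected && "tag mismatch: possible stray access");
    return (void *)((uintptr_t)p & ((1ULL << TAG_SHIFT) - 1));
}

int main(void) {
    int *raw = malloc(sizeof *raw);
    int *tp  = tag_ptr(raw, 0x5A);          /* allocator assigns a tag */
    *(int *)check_and_strip(tp, 0x5A) = 42; /* checked access */
    printf("%d\n", *raw);
    free(raw);
    return 0;
}

In hardware schemes the tag travels with the memory itself and a mismatched access faults, rather than relying on a library call to check it.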
This is far, far worse than most people think because it goes to the very core of the fact that any sort of a distinguisher is a side channel.
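The canonical distinguisher is the flush+reload cache probe, which turns a timing difference into a bit. A sketch using GCC/Clang x86 intrinsics; the cycle threshold is machine-dependent guesswork here:

/* Flush+reload: one bit of information from a timing difference.
 * x86-only sketch; the threshold of 100 cycles is a placeholder. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static uint8_t probe[4096];

static int was_accessed(void) {
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);       /* serialized timestamp */
    (void)*(volatile uint8_t *)probe;
    uint64_t t1 = __rdtscp(&aux);
    return (t1 - t0) < 100;             /* fast => cache hit => accessed */
}

int main(void) {
    _mm_clflush(probe);                 /* evict: the "0" state */
    printf("after flush: %d\n", was_accessed());
    (void)*(volatile uint8_t *)probe;   /* touch: the "1" state */
    printf("after touch: %d\n", was_accessed());
    return 0;
}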
Jon