How much/what hardware does the rowhammer DRAM bug affect?

jim bell jdb10987 at yahoo.com
Wed Sep 16 22:07:51 PDT 2015


  From: grarpamp <grarpamp at gmail.com>
On Wed, Sep 16, 2015 at 4:03 PM, jim bell <jdb10987 at yahoo.com> wrote:



>  Product engineers were, and presumably still are, responsible for writing
> test programs to run chips through their paces, in Intel's case using a
> Teradyne computer.
> http://www.teradyne.com/products/semiconductor-test/magnum-v
>
> I don't think the concept of this kind of weakness is new:  Even in 1980,
> DRAMs were tested for such repeated accesses, to ensure that such errors
> would not occur.  This was particularly true for a process called "device
> characterization", in which chips were attacked in all manner of
> electronically-abusive ways, to uncover these weaknesses, and fix the
> circuit design should such flaws be uncovered.
> One way these techniques could be thwarted is to return to the use of
> parity-bits (8+1 parity) in memory access, in DRAM module and computer
> design, to whatever extent they are no longer used.  Any (successful)
> attempt to modify bits in a DRAM would quickly end up causing a parity
> error, which would at least show which manufacturer's DRAM chips are
> susceptible to this kind of attack.  A person who was forced to use a
> no-parity computer could, at least,  limit his purchases of such modules to
> those populated with DRAMs not susceptible to the problem.
>            Jim Bell
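The 8+1 parity scheme described above can be sketched in a few lines. This is an illustration, not from the original post: one even-parity bit is stored per byte, so any single flipped bit (such as one induced by a rowhammer-style disturbance) changes the parity and is detected, although it is neither located nor corrected.

```python
def parity_bit(byte: int) -> int:
    """Return the even-parity bit for an 8-bit value."""
    return bin(byte & 0xFF).count("1") & 1

def check(byte: int, stored_parity: int) -> bool:
    """True if the byte still matches its stored parity bit."""
    return parity_bit(byte) == stored_parity

data = 0b10110010
p = parity_bit(data)                  # parity computed at write time

corrupted = data ^ (1 << 5)           # one bit flipped in storage
assert check(data, p)                 # intact byte passes
assert not check(corrupted, p)        # any single-bit flip is caught
```

Note that a double-bit flip in the same byte restores the parity and slips through, which is the usual argument for ECC over bare parity.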


Some papers have said systems using ECC RAM are resistant / immune
to rowhammer.

There is still a fair bump in cost for an ECC system;
however, once you've seen your first syslog entry
you forget about the cost.  Regardless of rowhammer.
You're right, ECC would be even better.  ECC should, indeed, be essentially immune to rowhammer: it will correct that and all other sorts of single-bit errors, and it will generally detect all double-bit errors.  However, as you pointed out, ECC is presumably much more costly than mere parity bits, not least because it has to use more bits of storage.  As I vaguely recall, 8 data bits had to be coupled with 4 ECC bits; 16 data / 5 ECC; 32 data / 6 ECC; 64 data / 7 ECC; 128 data / 8 ECC.  This shows that ECC becomes much more efficient as word width goes up, which in principle would make its cost penalty easier to take.

I haven't been keeping up with DRAM technology like I did in the 70's, 80's, and 90's, so I am not aware whether ECC is being easily implemented inside DRAM chips.  There was a very early Micron Technology 64Kbit DRAM that, as I recall, had this internal to individual DRAM chips, but it didn't last very long in competition with the jellybean parts.

Even more than the cost, I think that ECC added (and maybe still adds) an access-time penalty.  Generally, parity-only shouldn't add access-time delays.  One obscure issue is that if the external memory system detects an error (either parity or ECC), can the microprocessor be instructed to "back up" and reject the recently-acquired byte/word?  Most early microprocessors didn't have that ability, which I believe is why those systems had to "wait" for the parity or ECC to be generated and checked.

           Jim Bell
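The check-bit counts recalled above match the Hamming single-error-correction (SEC) bound: r check bits suffice for m data bits when 2**r >= m + r + 1.  A small sketch (mine, not from the post) confirms the 4/5/6/7/8 progression; note that also detecting all double-bit errors (SEC-DED, the usual DRAM scheme) takes one extra overall-parity bit beyond these counts.

```python
def sec_check_bits(m: int) -> int:
    """Minimum check bits r for Hamming SEC over m data bits:
    smallest r with 2**r >= m + r + 1."""
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r

for m in (8, 16, 32, 64, 128):
    print(m, "data bits ->", sec_check_bits(m), "check bits")
# 8 -> 4, 16 -> 5, 32 -> 6, 64 -> 7, 128 -> 8
```

The relative overhead falls from 50% at 8 data bits to about 6% at 128, which is the efficiency gain with word width mentioned above.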

