>From: Georgi Guninski <guninski@guninski.com>
>This is old, but haven't seen it here.
This brings me back to my stint at Intel, 1980-82, as a new Product Engineer for the 2186, an 8Kx8 pseudostatic (self-refreshing dynamic) RAM, and one of the first DRAMs to use redundancy to increase yield. I may have been the first engineer in the world to see, through a microscope focused on a DRAM chip, a very quick series of flashes: the blowing of the on-chip silicon fuses that program the row- and column-redundancy information.
Product engineers were, and presumably still are, responsible for writing the test programs that put chips through their paces, in Intel's case on a Teradyne computer. http://www.teradyne.com/products/semiconductor-test/magnum-v
I don't think the concept of this kind of weakness is new: even in 1980, DRAMs were tested against just this sort of repeated access, to ensure that such errors would not occur. This was particularly true of a process called "device characterization", in which chips were attacked in all manner of electronically abusive ways to uncover weaknesses, so that the circuit design could be fixed if any flaws turned up.
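For readers who haven't seen what that "repeated access" looks like in practice, here is a minimal sketch (my illustration, not something from the original post) of the hammering loop itself, assuming an x86 machine where the clflush instruction is available; addr_a and addr_b stand for two hypothetical attacker-chosen addresses that map to nearby DRAM rows:

#include <emmintrin.h>   /* _mm_clflush */
#include <stdint.h>

/* Sketch of the repeated-access pattern: read two addresses over and
 * over, flushing them from the cache each time so that every read
 * actually opens the DRAM rows rather than being served from cache.
 * addr_a and addr_b are hypothetical attacker-chosen addresses. */
static void hammer(volatile uint8_t *addr_a, volatile uint8_t *addr_b,
                   long iterations)
{
    for (long i = 0; i < iterations; i++) {
        (void)*addr_a;                       /* activate one DRAM row      */
        (void)*addr_b;                       /* activate a neighboring row */
        _mm_clflush((const void *)addr_a);   /* evict, so the next reads   */
        _mm_clflush((const void *)addr_b);   /* go back out to the DRAM    */
    }
}

A tester in 1980 obviously used its own pattern generators rather than code like this, but the stress being applied, repeated activations of the same rows, is the same idea.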
One way these techniques could be thwarted is to return to the use of parity bits (8+1 parity) in memory access, in DRAM module and computer design, to whatever extent they are no longer used. Any successful attempt to modify bits in a DRAM would quickly cause a parity error, which would at least show which manufacturers' DRAM chips are susceptible to this kind of attack. A person forced to use a no-parity computer could at least limit his purchases of such modules to those populated with DRAMs that are not susceptible to the problem.
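To make the parity point concrete, here is a small sketch (again my own example) of computing and checking an even-parity bit over one byte; any single flipped bit makes the recomputed parity disagree with the stored ninth bit:

#include <stdint.h>
#include <stdio.h>

/* Even parity over 8 data bits: returns 1 when the byte has an odd
 * number of 1-bits, so that data plus parity always has an even count. */
static uint8_t parity_bit(uint8_t data)
{
    data ^= data >> 4;
    data ^= data >> 2;
    data ^= data >> 1;
    return data & 1;
}

int main(void)
{
    uint8_t stored = 0x5A;                 /* byte as written to memory        */
    uint8_t parity = parity_bit(stored);   /* the ninth, parity, bit           */

    uint8_t read_back = stored ^ 0x08;     /* simulate one bit flipped in DRAM */

    if (parity_bit(read_back) != parity)
        puts("parity error: a bit was flipped");
    else
        puts("no error detected");
    return 0;
}

Parity only detects, it does not correct, and an even number of flips within the same byte would slip past it, but as noted above even detection is enough to expose which parts are susceptible.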
Jim Bell