On Wed, Nov 14, 2018 at 4:15 PM jim bell <jdb10987@yahoo.com> wrote:
On Wednesday, November 14, 2018, 11:52:43 AM PST, juan <juan.g71@gmail.com> wrote:
On Wed, 14 Nov 2018 19:00:52 +0000 (UTC)
jim bell <jdb10987@yahoo.com> wrote:
My company, SemiDisk Systems, built what was very close to the first disk emulator for a number of types of PC, including S-100 bus machines, the TRS-80 Model II, the IBM PC, and the Epson QX-10. https://www.pcworld.com/article/246617/storage/evolution-of-the-solid-state-...
IIRC you also worked for Intel designing memory chips? Excuse my rather naive question, but... did you see/hear at that time any hints that chips were being tampered with or somehow backdoored because of 'national security'?
I didn't design memory chips. I was a "product engineer" for a specific self-refreshing dynamic RAM (otherwise called a "pseudo-static") device called the 2186. Vintage Intel D2186a-30 8k X 8 Pseudo Static RAM D2186 2186 SRAM | eBay <https://www.ebay.com/p/Vintage-Intel-D2186a-30-8k-X-8-Pseudo-Static-RAM-D2186-2186-SRAM/1918155784> It, along with a 32K x 8 "21D1", was one of Intel's first by-8 dynamic RAMs.
Product engineers design the test programs that check out the performance of a chip, using (at that time) an ultra-fast dedicated computer made by Teradyne. https://www.teradyne.com/products/test-solutions/semiconductor-test This computer placed clock edges very accurately, to within a small fraction of a nanosecond. The 2186 was tricky by the standards of the day, partly due to the self-refreshing feature, but also because the 2186 (and 21D1) were the first Intel memory devices (possibly the first from anyone?) that employed "redundancy": previous memory devices were essentially unusable if even a single bit, row, or column failed. The 2186 incorporated many spare rows and spare columns, which could be programmed in to substitute for bits, rows, and columns that had failed.
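To make the test-program idea concrete, a minimal sketch in C of the kind of pass such a program makes might look like the following: march an alternating background across the array and record a map of the failing cells. Everything here is an assumption invented for illustration (the ram_write/ram_read hooks, fail_map, and the 256 x 32 geometry); the real program ran on the Teradyne hardware with far tighter timing control.

/* Hedged sketch of one test pass: write a background pattern over an
 * 8K x 8 array, read it back, then repeat with the complement, recording
 * failing cells. All names and the geometry are invented for illustration;
 * ram_write/ram_read stand in for the tester's pin electronics. */
#include <stdint.h>
#include <string.h>

#define ROWS 256                 /* illustrative: 8K x 8 as 256 rows */
#define COLS 32                  /* ... by 32 columns */

extern void    ram_write(int row, int col, uint8_t value);
extern uint8_t ram_read(int row, int col);

static uint8_t fail_map[ROWS][COLS];   /* 1 = failing cell at (row, col) */

void march_test(void)
{
    memset(fail_map, 0, sizeof fail_map);
    for (int pass = 0; pass < 2; pass++) {
        uint8_t pattern = pass ? 0xAA : 0x55;   /* alternating backgrounds */
        for (int r = 0; r < ROWS; r++)          /* write the background */
            for (int c = 0; c < COLS; c++)
                ram_write(r, c, pattern);
        for (int r = 0; r < ROWS; r++)          /* read it back */
            for (int c = 0; c < COLS; c++)
                if (ram_read(r, c) != pattern)
                    fail_map[r][c] = 1;         /* record the bad cell */
    }
}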
My program tested the chip, then took the map of bad rows, columns, and bits, and first checked whether the part could, at least in theory, be made good: whether the available spare rows and columns could cover the visible failures. If that appeared to be possible, my program determined which redundant rows and columns needed to be activated, and at which row and column addresses they needed to be placed. From this, a bit stream was generated that was clocked into the chip, one bit at a time, and was used to blow poly-silicon links (fuses) in a write-once memory area. That was the memory area which told the chip where to access the redundant rows and columns, instead of the array rows and columns.
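Continuing that sketch, the repair-allocation step described above could be illustrated with a simple greedy pass: spend a spare row or column on whichever line clears the most remaining failures, and if every failure gets covered, serialize the chosen addresses into the bit stream that blows the fuses. The heuristic, the spare counts, and the bit-stream format are assumptions for illustration only, not Intel's actual algorithm; it reuses fail_map, ROWS, and COLS from the sketch above.

/* Hedged sketch of repair allocation and fuse-stream generation.
 * SPARE_ROWS/SPARE_COLS and the serial format are invented for
 * illustration; reuses fail_map, ROWS, COLS from the sketch above. */
#define SPARE_ROWS 4
#define SPARE_COLS 4

static int spare_row_addr[SPARE_ROWS], spare_col_addr[SPARE_COLS];
static int rows_used, cols_used;

/* Returns 1 if the available spares can cover every failure. */
int allocate_spares(void)
{
    rows_used = cols_used = 0;
    for (;;) {
        /* Find the row and the column with the most remaining failures. */
        int best_r = -1, best_rn = 0, best_c = -1, best_cn = 0;
        for (int r = 0; r < ROWS; r++) {
            int n = 0;
            for (int c = 0; c < COLS; c++) n += fail_map[r][c];
            if (n > best_rn) { best_rn = n; best_r = r; }
        }
        for (int c = 0; c < COLS; c++) {
            int n = 0;
            for (int r = 0; r < ROWS; r++) n += fail_map[r][c];
            if (n > best_cn) { best_cn = n; best_c = c; }
        }
        if (best_rn == 0 && best_cn == 0)
            return 1;                        /* all failures covered */
        /* Spend a spare on whichever line clears more failures. */
        if (best_rn >= best_cn && rows_used < SPARE_ROWS) {
            spare_row_addr[rows_used++] = best_r;
            for (int c = 0; c < COLS; c++) fail_map[best_r][c] = 0;
        } else if (best_cn > 0 && cols_used < SPARE_COLS) {
            spare_col_addr[cols_used++] = best_c;
            for (int r = 0; r < ROWS; r++) fail_map[r][best_c] = 0;
        } else if (best_rn > 0 && rows_used < SPARE_ROWS) {
            spare_row_addr[rows_used++] = best_r;
            for (int c = 0; c < COLS; c++) fail_map[best_r][c] = 0;
        } else {
            return 0;                        /* spares exhausted: scrap the part */
        }
    }
}

/* Clock the chosen addresses out one bit at a time to blow the
 * poly-silicon fuses; the format here is entirely made up. */
void emit_fuse_stream(void (*clock_bit)(int bit))
{
    for (int i = 0; i < rows_used; i++)      /* 8-bit spare-row addresses */
        for (int b = 7; b >= 0; b--)
            clock_bit((spare_row_addr[i] >> b) & 1);
    for (int i = 0; i < cols_used; i++)      /* 5-bit spare-column addresses */
        for (int b = 4; b >= 0; b--)
            clock_bit((spare_col_addr[i] >> b) & 1);
}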
In fact, I was the first person at Intel, and perhaps in the world, to see through a microscope the flash(es) of these fuses as they were being blown. Intel was doing this redundancy before anyone else, I believe.
Pseudo-static DRAMs refreshed themselves, with the (possible) aid of an RFSH signal that might occasionally be applied to the chip. Myself, I didn't think that DRAMs were hard to use, having designed a digital circuit and a DRAM card around an old Motorola DRAM called a "6605" that I got cheaply.
https://computerarchive.org/files/mirror/www.bitsavers.org/pdf/motorola/_dat...
I don't think the 2186 was successful, mostly because Intel eventually got out of the DRAM business, and that mostly because other manufacturers became much better and more efficient at DRAMs than Intel was.
I was never in a position to hear if chips could be "backdoored".
Jim Bell
I believe Intel refers to 'backdoors' as 'features' for 'customer support scenarios'.

--
Twitter <https://twitter.com/tbiehn> | LinkedIn <http://www.linkedin.com/in/travisbiehn> | GitHub <http://github.com/tbiehn> | TravisBiehn.com <http://www.travisbiehn.com> | Google Plus <https://plus.google.com/+TravisBiehn>