While many past x86 implementation vulnerabilities involve either electromagnetic emissions or cache timing attacks, I have not read anything about instruction dispatch contention. According to Agner Fog's research, Intel's implementations of the x86 instruction set do not dispatch more than three of a single instruction per cycle, and that has been true for a long time. Regardless of their design decisions for instruction dispatch, this provides a side channel: two cooperating processes running on the same core can conduct half-duplex communication at a rate of 2 bits per cycle, with one process competing against the other for the dispatch capacity available to a single instruction, so that the observed contention level (0, 1, 2, 3) carries the symbol. While I do not have the resources to know how x86 processors handle dispatch contention, if it is handled in a regular and non-random manner, it would reach that theoretical level of severity. This violates certain access controls assumed to be imposed by the kernel. I suppose I can't collect my quarter-million-dollar prize if I publish this to the world?
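To make the idea concrete, here is a rough, untested sketch of how such a channel might be probed (my own illustration under assumptions, not a description of a working exploit): two cooperating threads pinned to sibling hyperthreads of one physical core, where the sender modulates contention on a port-limited instruction and the receiver times a fixed block of the same instruction. The choice of imul, the iteration counts, and the timing thresholds are assumptions that would have to be calibrated per CPU.

/* Hypothetical dispatch-contention covert-channel sketch (untested).
 * Assumes sender and receiver are pinned to sibling hyperthreads of the
 * same physical core; imul stands in for any port-limited instruction. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>            /* __rdtsc() */

#define PROBE_ITERS 10000

/* Receiver: time a fixed block of the contended instruction.  The more of
 * the same instruction the sender is issuing, the longer this takes. */
static uint64_t probe(void)
{
    uint64_t start = __rdtsc();
    for (int i = 0; i < PROBE_ITERS; i++)
        asm volatile("imul %%rax, %%rax" ::: "rax");
    return __rdtsc() - start;
}

/* Sender: encode a 2-bit symbol (0..3) by running 0..3 competing
 * instruction streams for one signalling interval. */
static void send_symbol(int symbol, uint64_t interval_cycles)
{
    uint64_t start = __rdtsc();
    while (__rdtsc() - start < interval_cycles) {
        switch (symbol) {                                      /* deliberate fall-through */
        case 3: asm volatile("imul %%r10, %%r10" ::: "r10");  /* falls through */
        case 2: asm volatile("imul %%r9, %%r9"   ::: "r9");   /* falls through */
        case 1: asm volatile("imul %%r8, %%r8"   ::: "r8");   break;
        default: break;
        }
    }
}

int main(void)
{
    /* Calibration only: print raw probe timings so thresholds separating
     * the four contention levels could be chosen empirically.  A real
     * receiver would run probe() while the sender runs send_symbol(). */
    for (int i = 0; i < 8; i++)
        printf("probe %d: %llu cycles\n", i, (unsigned long long)probe());
    (void)send_symbol;   /* unused in this calibration-only main */
    return 0;
}

Whether anything near 2 bits per cycle is reachable would depend entirely on how the dispatcher actually arbitrates between the two hardware threads; the point of the sketch is only that the four contention levels are, in principle, distinguishable by timing.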
Sounds like a valid issue!
Jim Bell
Pretty embarrassing for “Intel Inside” if you ask me. Wonder how many “whitehats” let their findings get suppressed for money. On Wednesday, November 14, 2018, jim bell <jdb10987@yahoo.com> wrote:
Sounds like a valid issue!
Jim Bell
Let my life be a lesson in futility. Go up against the government, and they’ll send everything they got against you, including things that defy known laws of physics. Go with the government, get paid out of the NATO vulnerability slush fund of tens of millions of dollars a year. And sometimes a higher power will even the odds. All I ever did was reveal a small fraction of vulnerabilities the government didn’t know about or had already purchased. What’s that compared to what they do have?
In "the good old days", in the 1970's, microprocessors were so much simpler. My favorite one for awhile, the Z-80 was trivial by today's standards. No multi-threading, no pipelining, no speculative instruction execution, etc. I built my own homebrew personal computer, which I called the "Bellyache I", using a Z-80. I also built a 'brick' shaped disk-emulator for it, consisting of a board with 32 sockets of stacked-8-high 2118 16-kilobit DRAM chips. (5-volt only 16 kilobit.) 512kbytes of disk emulator, which actually seemed a lot of memory at the time!!! In about October 1981, I actually discovered an error in the documentation sheet for the Z-80: I was implementing my first "SemiDisk", and I was trying to use the INIR and OTIR instructions to do fast block-moves of data to/from the i/o mapped memory. It turned out that doing those transfers from a 128-byte block of memory had one of them "off" by 1 byte-count, and I traced the error to the fact that the Z-80 didn't operate precisely as the data-sheet indicated it should. My company, SemiDisk Systems, was very close to the first disk emulator for a number of types of PC, including the S-100, TRS-80 Model II, IBM PC, Epson Q-10.https://www.pcworld.com/article/246617/storage/evolution-of-the-solid-state-... http://www.ryli.net/the-brief-history-of-solid-state-drive-ssd/ Jim Bell On Wednesday, November 14, 2018, 9:44:33 AM PST, Ryan Carboni <ryacko@gmail.com> wrote: Pretty embarrassing for “Intel Inside” if you ask me. Wonder how many “whitehats” let their findings get suppressed for money. On Wednesday, November 14, 2018, jim bell <jdb10987@yahoo.com> wrote: Sounds like a valid issue! Jim Bell On Wednesday, November 14, 2018, 9:36:06 AM PST, Ryan Carboni <ryacko@gmail.com> wrote: While many x86 implementation vulnerabilities in the past involve either electromagnetic emissions or cache timing attacks, I have not read anything about instruction dispatch contention. According to anger fog’s research, Intel’s implementation of the x86 instruction set does not dispatch more than three of a single instruction, and it has been so for a long time. Irregardless of their design decisions for instruction dispatch, this provides a side channel in which two cooperating processes operating on the same core can conduct half-duplex communication at the rate of 2 bits per cycle by one process attempting to compete with another process for the same capacity for dispatches over a single instruction (0, 1, 2, 3). While I do not have the resources to know how x86 processors handles dispatch contention issues, if it is handled in a regular and non-random manner, it would reach that theoretical level of severity. This violates certain access controls assumed to be imposed by the kernel. I suppose I can’t collect my quarter million dollar prize if I publish this to the world?
On Wed, 14 Nov 2018 19:00:52 +0000 (UTC) jim bell <jdb10987@yahoo.com> wrote:
My company, SemiDisk Systems, was very close to the first disk emulator for a number of types of PC, including the S-100, TRS-80 Model II, IBM PC, Epson Q-10.https://www.pcworld.com/article/246617/storage/evolution-of-the-solid-state-...
IIRC you also worked for intel designing memory chips? Excuse my rather naive question but... Did you see/hear at that time any hints that chips were being tampered with or somehow backdoored because of 'national security'?
On Wednesday, November 14, 2018, 11:52:43 AM PST, juan <juan.g71@gmail.com> wrote: On Wed, 14 Nov 2018 19:00:52 +0000 (UTC) jim bell <jdb10987@yahoo.com> wrote:
My company, SemiDisk Systems, was very close to the first disk emulator for a number of types of PC, including the S-100, TRS-80 Model II, IBM PC, Epson Q-10.https://www.pcworld.com/article/246617/storage/evolution-of-the-solid-state-...
IIRC you also worked for intel designing memory chips? Excuse my rather naive question but... Did you see/hear at that time any hints that chips were being tampered with or somehow backdoored because of 'national security'?

I didn't design memory chips. I was a "product engineer" for a specific self-refreshing dynamic RAM (otherwise called a "pseudo-static") device called a 2186. https://www.ebay.com/p/Vintage-Intel-D2186a-30-8k-X-8-Pseudo-Static-RAM-D218... It and a 32K x 8 "21D1" were Intel's first by-8 dynamic RAMs.

Product engineers design the test programs which check out the performance of a chip, using (at that time) an ultra-fast dedicated computer made by Teradyne. https://www.teradyne.com/products/test-solutions/semiconductor-test This computer placed clock edges very accurately, to a position and accuracy of a small fraction of a nanosecond. The 2186 was tricky by the standards of the day, partly due to the self-refreshing feature, but also because the 2186 (and 21D1) were the first Intel memory devices (possibly the first from anyone?) that employed "redundancy": previous memory devices were essentially unusable if even a single bit, row, or column failed. The 2186 incorporated many spare rows and spare columns, which could be programmed in to substitute for bits, rows, and columns that had failed.
My program tested the chip, took the map of bad rows, columns, and bits, and first checked whether the part could be made good, at least theoretically: whether the available spare rows and columns would solve the visible problems. If that appeared to be possible, my program determined which redundant rows and columns needed to be activated, and at which row and column addresses they needed to be placed. From this, a bit stream was generated that was clocked into the chip, one bit at a time, and was used to blow poly-silicon links (fuses) in a write-once memory area. That was the memory area which told the chip where to access the redundant rows and columns instead of the original array rows and columns.

In fact, I was the first person at Intel, and perhaps in the world, who saw the flash(es) through the microscope of the fuses on these chips as they were being blown. Intel was doing this redundancy before anyone else, I believe.

Pseudo-static DRAMs refreshed themselves, with the (possible) aid of an RFSH signal that might occasionally be applied to the chip. Myself, I didn't think that DRAMs were hard to use, having designed a digital circuit and a DRAM card using an old Motorola DRAM called a "6605", which I got cheaply. https://computerarchive.org/files/mirror/www.bitsavers.org/pdf/motorola/_dat...

I don't think the 2186 was successful, mostly because Intel eventually got out of the DRAM business, and that mostly because other manufacturers got much better and more efficient at it than Intel was. I was never in a position to hear whether chips could be "backdoored".

Jim Bell
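For a rough idea of what such a repair-planning pass could look like, here is a simplified sketch; the array geometry, the spare counts, and the greedy replace-bad-rows-then-patch-columns strategy are my own assumptions for illustration, since the actual 2186 test program and fuse-map format were Intel-internal.

/* Simplified sketch of the repair-planning step described above.
 * Array geometry, spare counts, and the greedy strategy are assumptions
 * for illustration only. */
#include <stdbool.h>
#include <stdio.h>

#define ROWS       128
#define COLS       512
#define SPARE_ROWS 4
#define SPARE_COLS 4

/* Given a map of failing bits, decide whether the part can (in this
 * simplified model) be repaired, and which spare rows/columns to program:
 * rows with more failures than the spare-column budget are replaced by
 * spare rows, then every remaining failing column gets a spare column. */
static bool plan_repair(const bool fail[ROWS][COLS],
                        int row_fix[SPARE_ROWS], int *n_rows,
                        int col_fix[SPARE_COLS], int *n_cols)
{
    bool row_replaced[ROWS] = { false };
    *n_rows = 0;
    *n_cols = 0;

    for (int r = 0; r < ROWS; r++) {
        int bad = 0;
        for (int c = 0; c < COLS; c++)
            bad += fail[r][c];
        if (bad > SPARE_COLS) {               /* too many bits to patch column-wise */
            if (*n_rows == SPARE_ROWS)
                return false;                 /* out of spare rows: scrap the die */
            row_fix[(*n_rows)++] = r;
            row_replaced[r] = true;
        }
    }

    for (int c = 0; c < COLS; c++) {
        for (int r = 0; r < ROWS; r++) {
            if (fail[r][c] && !row_replaced[r]) {
                if (*n_cols == SPARE_COLS)
                    return false;             /* out of spare columns */
                col_fix[(*n_cols)++] = c;
                break;
            }
        }
    }
    /* In the real flow, the chosen row/column addresses would now be
     * serialized into the bit stream that blows the poly-silicon fuses. */
    return true;
}

int main(void)
{
    static bool fail[ROWS][COLS];     /* zero-initialized: start defect-free */
    int row_fix[SPARE_ROWS], col_fix[SPARE_COLS], n_rows, n_cols;

    fail[7][100] = true;              /* a single failing bit */
    for (int c = 0; c < COLS; c++)    /* an entire failing row */
        fail[42][c] = true;

    if (plan_repair(fail, row_fix, &n_rows, col_fix, &n_cols))
        printf("repairable: %d spare row(s), %d spare column(s)\n", n_rows, n_cols);
    else
        printf("not repairable\n");
    return 0;
}

A real planner has more freedom than this greedy pass (a leftover spare row can rescue a part that runs out of spare columns, for example), but the overall flow is the same: test, map the failures, decide repairability, choose the substitutions, then serialize the choices into the fuse-blowing bit stream.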
On Wed, Nov 14, 2018 at 4:15 PM jim bell <jdb10987@yahoo.com> wrote:
I was never in a position to hear if chips could be "backdoored".
I believe Intel refers to 'backdoors' as 'features' for 'customer support scenarios'.
-- Travis Biehn | TravisBiehn.com
On Wed, 14 Nov 2018 21:15:29 +0000 (UTC) jim bell <jdb10987@yahoo.com> wrote:
me >> IIRC you also worked for intel designing memory chips? Excuse my rather naive question but... Did you see/hear at that time any hints that chips were being tampered with or somehow backdoored because of 'national security'?
I didn't design memory chips. I was a "product engineer" for a specific self-refreshing dynamic RAM
I guess I didn't recall correctly then =P. I must have assumed that 'product engineer' more or less meant 'designer'.
In fact, I was the first person at Intel, and perhaps in the world, who saw the flash(es) through the microscope of the as-being-blown fuses on these chips. Intel was doing this redundancy before anyone else, I believe.
Interesting. I thought that sort of patching was something relatively new only done to the chips in sdcards and the like. I guess it's not new.
I was never in a position to hear if chips could be "backdoored".
Oh well. Thanks for the engineering details and info anyway =)
Jim Bell
On Thursday, November 15, 2018, 12:49:56 PM PST, juan <juan.g71@gmail.com> wrote:
I didn't design memory chips. I was a "product engineer" for a specific self-refreshing dynamic RAM
> I guess I didn't recall correctly then =P . I must have assumed that product engineer more or less meant designer.

A "design engineer" is a person who designs the circuitry. A "process engineer" is a person who specializes in the chemistry, photolithography, ion implantation, etching, and the other steps involved in the creation of a processed silicon wafer. The whole process is amazingly complicated.

When I worked for Intel (1980-1982), a typical silicon linewidth was 3 microns (3000 nanometers). Recently I saw that Intel was using a 10 nanometer process: 300x smaller in linear size, and (300x)**2 (90,000x) smaller in area. What's truly amazing is how they have come to be able to etch such small feature-sizes on silicon. For a long time, they were using 193-nanometer UV light to do that, and yet they got feature-sizes below 50 nanometers (using a lot of photolithographic 'tricks' to do so!). Now, I think they probably use "EUV", short for "Extreme Ultraviolet", which amounts to soft X-rays, at a wavelength of about 13.5 nm.
https://en.wikipedia.org/wiki/Extreme_ultraviolet
https://en.wikipedia.org/wiki/Extreme_ultraviolet_lithography

>> In fact, I was the first person at Intel, and perhaps in the world, who saw the flash(es) through the microscope of the as-being-blown fuses on these chips. Intel was doing this redundancy before anyone else, I believe.

> Interesting. I thought that sort of patching was something relatively new only done to the chips in sdcards and the like. I guess it's not new.

No, it's definitely not new! Although it was somewhat hush-hush at the time. Apparently the big chip-buyers might not have liked to hear of these 'repair' techniques. They probably had pictures in their minds of perfect chips emerging efficiently from the production line. But they also wanted to buy cheaper chips, and it was hard to make a truly defect-free chip when you're trying to put 65,536 transistors into the array of a DRAM. The more-sophisticated users were no doubt told of this technique, and they probably gave extra care to testing incoming parts. Hard-disk manufacturers probably characterize their platters in a similar way, looking for weak areas that have trouble recording data.

Jim Bell
On Thu, 15 Nov 2018 23:25:18 +0000 (UTC) jim bell <jdb10987@yahoo.com> wrote:
When I worked for Intel (1980-1982), a typical silicon linewidth was 3 microns. (3000 nanometers.) Recently I saw that Intel was using a 10 nanometer process, 300x smaller in linear size, and (300x)**2 (90,000) smaller in area. What's truly amazing is how they have come to be able to etch such small feature-sizes on silicon. For a long time, they were using 193 (?) nanometer UV light to do that, and yet they got feature-sizes below 50 nanometers.
Yes, that's interesting. At first I naively assumed that you couldn't print stuff smaller than the wavelength used but that's not the case at all.
(using a lot of photolithographic 'tricks' to do so!.) Now, I think they probably use "EUV", short for "Extreme Ultraviolet", which amounts to soft-xrays, maybe at about 10nm wavelength or even shorter. https://en.wikipedia.org/wiki/Extreme_ultraviolet
Yeah, the accuracy is impressive. Now, from a libertarian point of view, there's a huge accumulation of knowledge and 'capital' in the hands of very few people. Also, many of these developments are govt subsidized in many ways and end up in the hands of a few monopolistic businesses. What this boils down to of course is the fact that the infrastructure is fully controlled by the enemy.
Hard-disk manufacturers probably characterize their platters in a similar way, looking for weak areas that have trouble recording data.
Yes, hard disks can mark and stop using bad sectors. Actually, cheap floppy disk controllers did the same thing...
Jim Bell
On Friday, November 16, 2018, 12:15:13 PM PST, juan <juan.g71@gmail.com> wrote: On Thu, 15 Nov 2018 23:25:18 +0000 (UTC) jim bell <jdb10987@yahoo.com> wrote:
When I worked for Intel (1980-1982), a typical silicon linewidth was 3 microns. (3000 nanometers.) Recently I saw that Intel was using a 10 nanometer process, 300x smaller in linear size, and (300x)**2 (90,000) smaller in area. What's truly amazing is how they have come to be able to etch such small feature-sizes on silicon. For a long time, they were using 193 (?) nanometer UV light to do that, and yet they got feature-sizes below 50 nanometers.
> Yes, that's interesting. At first I naively assumed that you couldn't print stuff smaller than the wavelength used but that's not the case at all.

That's actually a good first approximation, at least prior to the insertion of a few billion dollars of research into better optical methods. I recall an analogy: "How do you draw a 1-millimeter line on paper if you only have a 5-millimeter paintbrush?"

In the 1960's and much of the 1970's, they used something called "contact printing", basically pressing a chromium-on-quartz optical mask onto the wafer with the photoresist previously applied. That worked, except that the masks didn't last very long. Then they went to "projection printers", which separated the mask from the wafers. Then they went to "step and repeat" systems ( https://en.wikipedia.org/wiki/Stepper ), which projected only the image of a single chip onto the wafer at a time, precisely repeated across the area of the wafer.

"Steppers", as they were called, became important because it was hard to match the temperature coefficient of expansion of a silicon wafer (about 2.6 ppm/degree C, http://www.ioffe.ru/SVA/NSM/Semicond/Si/thermal.html ) to that of a fused-silica photomask (about 0.55 ppm/degree C, https://www.accuratus.com/fused.html ). Consider that if the feature size they wish to draw is 100 nanometers and the distance across the (silicon) wafer is 300 millimeters, that's 1 part in 3 million!!! They would have had to thermostat the temperatures of the silicon and the silica to well better than 0.1 degrees C!! Using a wafer stepper meant that they only had to hold this alignment over a relatively small distance, maybe about 1 centimeter. Far easier than 30 centimeters.

I have seen articles about the various weird optical tricks used to allow the writing of much-less-than-wavelength features on chips; I think one was in Scientific American about 10 years ago. In these, the mask looks little like the eventual features you want to produce: they "pre-calculate" the various optical distortions that they know the light will be subjected to, along with issues such as the sensitivity of the photoresist. However, these 'tricks' eventually run out of gas. For a long time they used 193-nanometer UV "light" ( https://en.wikipedia.org/wiki/Photolithography ). That was somewhat of a limit, mostly because they could not find an easy way to generate UV at a shorter wavelength than 193 nm.
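A quick back-of-the-envelope check of that thermal argument (the quoted ppm figures are used as-is; the one-third-of-a-feature alignment budget and the 1-centimeter stepper field are my own assumed numbers):

/* Back-of-the-envelope check of the mask/wafer expansion mismatch.
 * ppm figures are the ones quoted above; the alignment budget and the
 * stepper field size are assumptions made for illustration. */
#include <stdio.h>

int main(void)
{
    const double si_ppm     = 2.6;    /* silicon expansion, ppm per deg C */
    const double silica_ppm = 0.55;   /* fused-silica mask, ppm per deg C */
    const double wafer_mm   = 300.0;  /* full-wafer exposure field        */
    const double step_mm    = 10.0;   /* roughly 1 cm stepper field       */
    const double budget_nm  = 33.0;   /* ~1/3 of a 100 nm feature         */

    const double mismatch_ppm = si_ppm - silica_ppm;          /* ~2.05 ppm/degC */

    /* Misregistration across a field of width w_mm per degree C:
     * (mismatch_ppm * 1e-6) * (w_mm * 1e6 nm/mm) = mismatch_ppm * w_mm nanometers */
    const double wafer_nm_per_degC = mismatch_ppm * wafer_mm;  /* ~615 nm */
    const double step_nm_per_degC  = mismatch_ppm * step_mm;   /* ~20 nm  */

    printf("full wafer:    %.0f nm/degC -> hold temperature to %.3f degC\n",
           wafer_nm_per_degC, budget_nm / wafer_nm_per_degC);
    printf("stepper field: %.0f nm/degC -> hold temperature to %.2f degC\n",
           step_nm_per_degC, budget_nm / step_nm_per_degC);
    return 0;
}

That works out to roughly 0.05 degrees C of temperature control over a full 300 mm field, but only around 1.6 degrees C over a 1 cm stepper field, which is the "far easier" difference described above.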
(using a lot of photolithographic 'tricks' to do so!.) Now, I think they probably use "EUV", short for "Extreme Ultraviolet", which amounts to soft-xrays, maybe at about 10nm wavelength or even shorter. https://en.wikipedia.org/wiki/Extreme_ultraviolet
> Yeah, the accuracy is impressive. Now, from a libertarian point of view, there's a huge accumulation of knowledge and 'capital' in the hands of very few people. Also, many of these developments are govt subsidized in many ways and end up in the hands of a few monopolistic businesses. What this boils down to of course is the fact that the infrastructure is fully controlled by the enemy.

There is still a lot of chip-making which does not need to be done at these "bleeding-edge" levels. Microprocessors and memory chips, primarily, need to have the smallest features.

Jim Bell
participants (4)
- jim bell
- juan
- Ryan Carboni
- Travis Biehn