From sand to silicon chips, openly
-------- Original message --------
From: Steven Schear <schear.steve@gmail.com>
Date: 3/21/18 9:13 AM (GMT-08:00)
To: cypherpunks <cypherpunks@lists.cpunks.org>
Subject: From sand to silicon chips, openly

http://parallel.princeton.edu/openpiton/open_source_processors.php

Say buhbye... Peak Sand
https://www.google.com/search?q=peak+sand

Your grandchildren will be using an abacus.

Rr
On Wed, Mar 21, 2018 at 09:13:02AM -0700, Steven Schear wrote:
http://parallel.princeton.edu/openpiton/open_source_processors.php
Looks like some nice architecture work being explored here: reducing core-count coherence overhead (coherence domains of up to 64 cores are optimized), and reducing power usage with "drafting mode", which is essentially SIMD applied at a higher level. By 'cohering' many threads - e.g. uploading and analysing/scaling images for many web page end users - thread instructions are aligned to execute simultaneously (where possible), improving throughput per unit of energy by up to 20 percent, 8.57 percent on average.

So they're targeting the right metric for longer-term traction - "throughput divided by energy" - which is what datacenters must optimize if they want to increase profitability. That's a little different from minimizing latency, or minimizing power per unit of work done, which is what battery-powered end user devices require - although I think it's not yet clear that these two outcomes are not ultimately one and the same result (with a laptop, hike the CPU frequency when some heavy calculations need to be done, to minimize the time required to do them - users want low latency - and thereby also minimize overall power usage). All roads lead to Rome.
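For intuition, here is a toy Python model of the "drafting" idea - not OpenPiton's actual implementation; the cost model, instruction names, and numbers are invented for illustration. The point it sketches: when several aligned threads sit at the same instruction, the front end can fetch/decode once and share that work, instead of paying the cost once per thread.

```python
# Toy model of "drafting": identical instructions in the same slot
# across threads share a single fetch/decode. Cost model is invented.

def frontend_energy(threads, fetch_cost=1.0):
    """Return (baseline, drafted) front-end energy for a list of
    per-thread instruction streams."""
    n_slots = max(len(t) for t in threads)
    # baseline: every thread pays one fetch/decode per instruction
    baseline = sum(len(t) for t in threads) * fetch_cost
    drafted = 0.0
    for i in range(n_slots):
        # distinct ops in this slot each need one fetch; duplicates draft
        ops = {t[i] for t in threads if i < len(t)}
        drafted += len(ops) * fetch_cost
    return baseline, drafted

# e.g. four threads all running the same image-scaling kernel
kernel = ["load", "mul", "add", "store"]
base, draft = frontend_energy([kernel] * 4)
print(base, draft)  # 16.0 4.0 -> 4x front-end saving when perfectly aligned
```

When the streams diverge, the saving shrinks toward zero, which is why the averaged gain (8.57 percent) is well below the best case (20 percent).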
A good example of why totally open chips are problematic in the commercial world.

Spectre/Meltdown Pits Transparency Against Liability: Which is More Important to You?
https://www.bunniestudios.com/blog/?p=5127

As always, the devil is in the details.

" You can’t have it both ways: the whole point of transparency is to enable peer review, so you can find and fix bugs more quickly. But if every time a bug is found, a manufacturer had to hand $50 to every user of their product as a concession for the bug, they would quickly go out of business. This partially answers the question why we don’t see open hardware much beyond simple breakout boards and embedded controllers: it’s far too risky from a liability standpoint to openly share the documentation for complex systems under these circumstances. "

" However, even one of their most ardent open-source advocates pushed back quite hard when I suggested they should share their pre-boot code. By pre-boot code, I’m not talking about the little ROM blob that gets run after reset to set up your peripherals so you can pull your bootloader from SD card or SSD. That part was a no-brainer to share. I’m talking about the code that gets run before the architecturally guaranteed “reset vector”. A number of software developers (and alarmingly, some security experts) believe that the life of a CPU begins at the reset vector. In fact, there’s often a significant body of code that gets executed on a CPU to set things up to meet the architectural guarantees of a hard reset – bringing all the registers to their reset state, tuning clock generators, gating peripherals, and so forth. Critically, chip makers heavily rely upon this pre-boot code to also patch all kinds of embarrassing silicon bugs, and to enforce binning rules."

If, OTOH, there were ways to manufacture arbitrarily complex chips on the desktop for reasonable costs and in reasonable time, and so eliminate the commercial issues, this conundrum could vanish.
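To make the quoted point concrete, here is a hypothetical Python sketch of the kind of work pre-boot code does before control reaches the reset vector: establishing reset state, tuning clocks, gating peripherals, applying silicon-bug workarounds, and enforcing binning. Every register, fuse name, and value below is invented for illustration - real pre-boot code is chip-specific and exactly the part vendors won't share.

```python
# Illustrative-only model of pre-boot work. "fuses" stands in for the
# one-time-programmable bits burned at the factory to record a part's
# speed bin, enabled features, and known errata.

def pre_boot(fuses):
    """Return the (modeled) machine state handed to the reset vector."""
    # bring architectural state to its guaranteed reset values
    state = {"gpr": [0] * 16, "pll_mhz": 0, "periph_enabled": set()}
    # tune the clock generator, but enforce the speed bin from the fuses
    state["pll_mhz"] = min(1600, fuses.get("max_mhz", 800))
    # gate off peripherals this bin doesn't include (invented set)
    state["periph_enabled"] = set(fuses.get("periphs", [])) & {"uart", "sdio"}
    # patch an (invented) silicon erratum before handing off
    if fuses.get("errata_42"):
        state["workarounds"] = ["disable_prefetch"]
    return state

# a part fused as a 2 GHz-capable die sold into a 1.6 GHz bin
state = pre_boot({"max_mhz": 2000, "periphs": ["uart", "gpu"], "errata_42": True})
```

The binning line is why vendors resist opening this code: it would expose which "different" SKUs are the same silicon with fuses blown differently.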
On Wed, Mar 21, 2018 at 9:13 AM, Steven Schear <schear.steve@gmail.com> wrote:
http://parallel.princeton.edu/openpiton/open_source_processors.php
On Tue, May 15, 2018 at 12:39 AM, Steven Schear <schear.steve@gmail.com> wrote:
A good example of why totally open chips are problematic in the commercial world.
Spectre/Meltdown Pits Transparency Against Liability: Which is More Important to You? https://www.bunniestudios.com/blog/?p=5127
As always, the devil is in the details.
" You can’t have it both ways: the whole point of transparency is to enable peer review, so you can find and fix bugs more quickly. But if every time a bug is found, a manufacturer had to hand $50 to every user of their product as a concession for the bug, they would quickly go out of business. This partially answers the question why we don’t see open hardware much beyond simple breakout boards and embedded controllers: it’s far too risky from a liability standpoint to openly share the documentation for complex systems under these circumstances. "
As an incomplete snip from the article, it would be bullshit on its own. At least for systems that start their life as open.

Closed hardware / software generally asserts its fitness, at least in the corporate sales pitch, so when it fails you can sue the fuck out of them. For example, Intel is being sued in court over Meltdown right now.

Open fabs / hardware / software / dev pushes that entire analysis out to the user... they can inspect it, pay for analyst verification, read reviews, etc. In an open model, it becomes understood that all of that is upon you, and the recourse is no longer a lawsuit, but filing bugs and commits and process change, and then the next release happens. That paradigm shift is exactly the same in open fabs and hardware as it is in open software, and now even in currencies and markets.

You don't see it today because they want power, profit, control, and to them closed over the ignorant is the way to achieve that. After all, to date all the sheep have accepted that model of abuse. Openness and sharing is now proving in demand, profitable, and hopefully slowly taking over.

Similar conclusions were in the article... "The Choice: Truthful Mistakes or Fake Perfection?" The offer is on the table. Even corporate users of HW know their fitness lawsuits etc will not always win and recover losses, so they'd also be insane not to take up that offer.
" However, even one of their most ardent open-source advocates pushed back quite hard when I suggested they should share their pre-boot code. By pre-boot code, I’m not talking about the little ROM blob that gets run after reset to set up your peripherals so you can pull your bootloader from SD card or SSD. That part was a no-brainer to share. I’m talking about the code that gets run before the architecturally guaranteed “reset vector”. A number of software developers (and alarmingly, some security experts) believe that the life of a CPU begins at the reset vector. In fact, there’s often a significant body of code that gets executed on a CPU to set things up to meet the architectural guarantees of a hard reset – bringing all the registers to their reset state, tuning clock generators, gating peripherals, and so forth. Critically, chip makers heavily rely upon this pre-boot code to also patch all kinds of embarrassing silicon bugs, and to enforce binning rules."
If, OTOH, there were ways to manufacture arbitrarily complex chips on the desktop for reasonable costs and in reasonable time, and so eliminate the commercial issues, this conundrum could vanish.
On Wed, Mar 21, 2018 at 9:13 AM, Steven Schear <schear.steve@gmail.com> wrote:
http://parallel.princeton.edu/openpiton/open_source_processors.php
Nice list. Where's Intel and AMD and Qualcomm and ...
participants (4)
-
g2s
-
grarpamp
-
Steven Schear
-
Zenaan Harkness