A good example of why totally open chips are problematic in the commercial world.
Spectre/Meltdown Pits Transparency Against Liability: Which is More Important to You?
https://www.bunniestudios.com/blog/?p=5127

As always, the devil is in the details.
"
You can’t have it both ways: the whole point of transparency is to
enable peer review, so you can find and fix bugs more quickly. But if
every time a bug is found, a manufacturer had to hand $50 to every user
of their product as a concession for the bug, they would quickly go out
of business. This partially answers the question why we don’t see open
hardware much beyond simple breakout boards and embedded controllers:
it’s far too risky from a liability standpoint to openly share the
documentation for complex systems under these circumstances.
"
"
However, even one of their most ardent open-source advocates pushed back
quite hard when I suggested they should share their pre-boot code. By
pre-boot code, I’m not talking about the little ROM blob that gets run
after reset to set up your peripherals so you can pull your bootloader
from SD card or SSD. That part was a no-brainer to share. I’m talking
about the code that gets run before the architecturally guaranteed
“reset vector”. A number of software developers (and alarmingly, some
security experts) believe that the life of a CPU begins at the reset
vector. In fact, there’s often a significant body of code that gets
executed on a CPU to set things up to meet the architectural guarantees
of a hard reset – bringing all the registers to their reset state,
tuning clock generators, gating peripherals, and so forth. Critically,
chip makers heavily rely upon this pre-boot code to also patch all kinds
of embarrassing silicon bugs, and to enforce binning rules."
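To make the quoted description a bit more concrete, here is a minimal sketch, in C, of the kind of work pre-boot code does before control ever reaches the reset vector. Everything in it is invented for illustration: the register names, addresses, "patch table", and the reset_vector() symbol are placeholders, since real pre-boot ROMs are chip-specific and generally undocumented.

/*
 * Illustrative sketch only. All register addresses, field values, and the
 * errata "patch table" are placeholders, not any real chip's interface.
 */
#include <stdint.h>

#define REG32(addr) (*(volatile uint32_t *)(uintptr_t)(addr))

/* Hypothetical memory-mapped registers (addresses are made up). */
#define CLKGEN_TRIM     0x40000000u  /* clock generator trim/tuning      */
#define PERIPH_GATE     0x40000004u  /* peripheral clock/power gating    */
#define FUSE_BIN_CLASS  0x40000008u  /* factory fuses: binning class     */
#define UCODE_PATCH_CTL 0x4000000Cu  /* microcode/errata patch interface */

/* The architecturally guaranteed entry point, provided elsewhere. */
extern void reset_vector(void) __attribute__((noreturn));

/* Hypothetical table of silicon-errata workarounds held in the boot ROM. */
struct errata_patch { uint32_t reg; uint32_t value; };
static const struct errata_patch patches[] = {
    { UCODE_PATCH_CTL, 0x0001u },    /* placeholder: enable workaround #1 */
};

void preboot(void)
{
    /* 1. Tune the clock generators before anything runs at speed. */
    REG32(CLKGEN_TRIM) = 0x5Au;                 /* placeholder trim value */

    /* 2. Gate peripherals so they appear in their documented reset state. */
    REG32(PERIPH_GATE) = 0xFFFFFFFFu;

    /* 3. Apply errata patches for known silicon bugs. */
    for (unsigned i = 0; i < sizeof(patches) / sizeof(patches[0]); i++)
        REG32(patches[i].reg) = patches[i].value;

    /* 4. Enforce binning rules, e.g. disable features per the fused class. */
    uint32_t bin = REG32(FUSE_BIN_CLASS);
    if (bin != 0u && bin < 32u)
        REG32(PERIPH_GATE) |= (1u << bin);      /* placeholder rule */

    /* 5. Only now hand control to the architecturally visible reset vector. */
    reset_vector();
}

The point of the sketch is simply that by the time the "hard reset" guarantees hold, a fair amount of vendor-specific, bug-hiding code has already run, which is exactly why vendors resist publishing it.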
If, OTOH, there were ways to manufacture arbitrarily complex chips on the desktop at reasonable cost and in reasonable time, thereby eliminating the commercial issues, this conundrum could vanish.