On Tue, Jul 28, 2015 at 8:15 PM, jim bell <jdb10987@yahoo.com> wrote:
From: Seth <list@sysfu.com> On Tue, 28 Jul 2015 15:40:55 -0700, oshwm <oshwm@openmailbox.org> wrote:
So is anyone working on building an 'openfab' or is it such a big task that everyone just backs away in horror? :D
Of course you're going to fail if you keep saying no and FUDding yourself. Is anyone building "cars" or is it such a big task that... http://www.teslamotors.com/
My understanding is that the capital costs involved with building and operating a chip fabrication plant are astronomical, although the situation may be getting better.
http://spectrum.ieee.org/semiconductors/design/the-new-economics-of-semicond...
Competition, so what? Leech what's been done before you.
Even 30 years ago, there were custom fabs that were designed to allow small organizations to get chips fabbed. This may be one of the modern versions of them: http://www.globalfoundries.com/manufacturing/manufacturing-overview
You don't need a $50B 1Msqft setup to start making chips, you need a floor in a warehouse and some people who believe.
In the mid 80's, they typically purchased older fabs (not state of the art) and allowed small companies to prototype their semiconductor designs. Today, some of them apparently do near-state-of-the-art production.
Universities have always had fabs too. The cost to get a basic line going at some tech level is not prohibitive, it's an adventure. Open is your differentiator and your ROI; even private runs will come pay you because they know your ground-up construction and monitored production process can't inject warez in their silicon. Can you as a reasonably learned hobbyist go observe an Intel run through GloFab from start to finish? What about for the chip inside your phone? Your makerbot? Your RPi? Your USRP? Your own mask? No? Well... that's a problem.
Steve Kinney wrote: If a market is willing to pay enough to support and grow the project, it can be done. Are there potential partners and large scale consumers for "top security through total transparency" to make an open hardware project viable today?
One potential route would be to broker a deal to pool the resources of specialty hardware integrators who already have a market base for high security "solutions." The OpenOffice project pulled off something similar years ago, obtaining major funding and support from IBM and others who wanted Microsoft out of their hair. So, who wants a shot at defending some of their digital assets from outfits like NSA and GCHQ, badly enough to pay for it?
The first place I would start shopping this "crypto anarchist" project around would be State security services - pretty much any small to mid-sized outfit not in BRICS or FVEY could be a potential market for auditable scrambler phones for military commanders, senior elected officials, diplomatic corps and double-nought spies. From there to high performance servers and workstations would be a natural progression.
I haven't looked at how the Black Phone folks are doing lately, but that looks like the kind of product line where open hardware might find its first viable home.
Another consideration: One need not necessarily own the facility where the chips are made: ISO quality assurance programs already in place support client access for audit and validation. A contract that specifies the client's intrusive presence during every phase of production and handling would cost extra, but a QA process that assumes the presence of hostile actors on the shop floor is definitely possible. Such a process would also be needed at a dedicated facility: One must assume the presence of hostile actors there, too. :o)
That's basically all part of the idea. And that some serious multi-philosophical combination of hardcore Stallman Gandhi Cpunk Riseup Coder Maker Opensource Auditor-like motherfuckers all build, run and observe the joint from the ground up as essentially a crosschecked incorruptible thing that anyone can look at. Today's shops are a mutable system of hierarchical employee paychecks, payoffs, closed door privacy and backroom games.
If you're willing to sacrifice some performance and power efficiency, you can always use an FPGA. The tools aren't open, but it seems like it would be a lot harder to make an FPGA or FPGA tools that backdoor arbitrary circuits. You could potentially do the "reflections on trusting trust" thing and detect and backdoor each of the major open source processor cores, but it seems pretty unlikely that such a thing wouldn't leak. On the other hand, I also seriously doubt Intel CPUs are backdoored, so maybe my paranoia isn't properly calibrated.

Even if you generally trust Intel, though, FPGAs could still potentially protect you from all the investment the NSA has undoubtedly put into finding bugs and side channels in the widely used CPUs. And being much simpler, something like OpenRISC or the J1 or SPARC v8 probably has far fewer places for such flaws/side channels to hide. On the gripping hand, none of those processors gives you an equivalent of Intel's TXT mode, and I'm not sure but it's probably much easier to dump internal state from an FPGA, so you could be more vulnerable to cold boot and evil maid attacks.
On Tue, Jul 28, 2015 at 11:17 PM, Sean Lynch <seanl@literati.org> wrote:
On the other hand, I also seriously doubt Intel CPUs are backdoored
Even if they're not physically backdoored with handy extra gates, they seem logically backdoorable by demanding the signing keys needed for loading microcode (Intel/AMD) and AMT firmware (Intel, don't plug your "LM" series NIC into the WAN.) http://inertiawar.com/microcode/ https://wiki.archlinux.org/index.php/Microcode https://en.wikipedia.org/wiki/Intel_Active_Management_Technology http://invisiblethingslab.com/resources/bh09dc/Attacking%20Intel%20TXT%20-%2... http://invisiblethingslab.com/resources/bh09dc/Attacking%20Intel%20TXT%20-%2...
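For concreteness, here's a minimal sketch (assuming Linux on x86 and nothing beyond the standard /proc interface) that reports which microcode revision each logical CPU is currently running. It only tells you which signed blob got loaded, not what the blob does, which is exactly the problem:

def microcode_revisions(path="/proc/cpuinfo"):
    # Collect the "microcode" field reported for each logical CPU.
    revs = {}
    cpu = None
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key == "processor":
                cpu = value
            elif key == "microcode" and cpu is not None:
                revs[cpu] = value
    return revs

if __name__ == "__main__":
    for cpu, rev in sorted(microcode_revisions().items(), key=lambda kv: int(kv[0])):
        print("cpu%s: microcode revision %s" % (cpu, rev))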
On Wed, Jul 29, 2015 at 03:17:03AM +0000, Sean Lynch wrote:
If you're willing to sacrifice some performance and power efficiency, you can always use an FPGA. The tools aren't open, but it seems like it would
check out icestorm for a counterexample. and a simple cpu implemented on it: http://www.excamera.com/sphinx/article-j1a-swapforth.html
On the other hand, I also seriously doubt Intel CPUs are backdoored, so
it's called bugdoor. if they backdoor your system with ME crap, why would they stop at the core? -- otr fp: https://www.ctrlc.hu/~stef/otr.txt
On 07/28/2015 08:17 PM, Sean Lynch wrote:
of Intel's TXT mode, and I'm not sure but it's probably much easier to dump internal state from an FPGA, so you could be more vulnerable to cold boot and evil maid attacks.
More likely, dump the contents of the EPROM some FPGAs read their gate matrices out of when they power up.

-- The Doctor
The Doctor <drwho@virtadpt.net> wrote:
More likely, dump the contents of the EPROM some FPGAs read their gate matrices out of when they power up.
But that just gives away the bitstream describing the FPGA configuration (say, a trusted CPU). Is the CPU's *design* a secret? If not, I don't see why it matters that an evil cleaner might read out the FPGA's configuration. (Obviously, don't store secret keys in there!)

If we really are worried about keeping the CPU's design a secret, it's possible with many FPGAs to encrypt the configuration bitstream such that the configuration is decrypted onboard the FPGA at power-on. This is intended to handle the case where I want to sell a product that uses an FPGA without revealing the contents of that FPGA's configuration to my customers or competitors.

Of course, this doesn't really provide useful security guarantees against a sophisticated adversary. First, since the FPGA contains or automatically derives the secret key to decrypt this encrypted bitstream, the manufacturer of the FPGA likely also knows the key or how to derive it (as does anyone with sufficient dedication and a well-equipped lab). Second, we have no idea how well this system is designed or implemented, since the bitstream security system is itself a proprietary secret. And third, even if it's competently built, as I pointed out above, the threat model is "customer reads out my proprietary design," which means that, practically speaking, the technological barrier is only there to support a more comprehensive framework of legal recourse in the case that my customer or competitor tries to steal the secret sauce.

Probably a reasonable evil cleaner attack against an FPGA-based "trusted" CPU is to overwrite the contents of the configuration ROM with a similar but subtly bugged design. This is more or less isomorphic to "the NSA signs bugged microcode for my Intel CPU." Cue the OTP / epoxy / physical security arms race, I guess.

-=rsw
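For what it's worth, the boring mitigation for the config-ROM-swap attack is an integrity check: record a digest of the bitstream when you build it, and compare it against a fresh dump of the ROM before trusting the board. A minimal sketch, with placeholder file names, since how you dump the ROM depends entirely on the board, and the machine doing the checking of course has its own chicken-and-egg trust problem:

import hashlib
import sys

def sha256_file(path, chunk=1 << 20):
    # Stream the file through SHA-256 so large bitstream dumps are fine.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

if __name__ == "__main__":
    # usage: verify.py rom_dump.bin known_good.sha256   (both names are placeholders)
    dump, golden = sys.argv[1], sys.argv[2]
    expected = open(golden).read().split()[0]
    actual = sha256_file(dump)
    print("MATCH" if actual == expected else "MISMATCH: " + actual)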
On 07/29/2015 11:07 AM, Riad S. Wahby wrote:
But that just gives away the bitstream describing the FPGA configuration (say, a trusted CPU). Is the CPU's *design* a secret? If
Gate matrices can be reversed (as mentioned earlier).
not, I don't see why it matters that an evil cleaner might read out the FPGA's configuration. (Obviously, don't store secret keys in there!)
I thought the point being made in the conversation was (and correct me if I'm wrong) that one could dump an arbitrary FPGA's contents to do a security audit on them. Not to say that you're wrong, you make a good point, but it's taking the discussion in a different direction.
If we really are worried about keeping the CPU's design a secret, it's
We were talking about open CPU designs, so why keep them a secret?
possible with many FPGAs to encrypt the configuration bitstream such that the configuration is decrypted onboard the FPGA at power-on. This
Yeah. It's pretty cool, isn't it?
is intended to handle the case where I want to sell a product that uses an FPGA without revealing the contents of that FPGA's configuration to my customers or competitors.
That's a few degrees off-center from where the discussion was going, but go ahead. We'll fork() as necessary.
Cue the OTP / epoxy / physical security arms race, I guess.
Or the electromechanical processing rigs that a few people have been bringing up over beer lately. Cool idea, but I strongly doubt that they'll scale, or even keep up with the watch on my wrist.

-- The Doctor
The Doctor <drwho@virtadpt.net> wrote:
I thought the point being made in the conversation was (and correct me if I'm wrong) that one could dump an arbitrary FPGA's contents to do a security audit on them.
Ah, I see. I thought the focus was on cold boot or evil maid attacks against FPGA-based (thus, nominally trustworthy) CPUs, and how these attacks might compare to similar attacks against a commercial CPU. As you pointed out before, one may as well just grab the configuration out of the ROM itself, and I agree---but my point was that either way, what are we getting except some information that's not really secret? So I think we're in violent agreement, at least to the extent that we're talking about the same thing :)

Also: one assumes that cold boot attacks against the contents of RAM are more useful than against the SRAMs that hold the FPGA's configuration, and in that case probably it's little different from the equivalent attack against a commercial CPU (the DRAM is more or less the same whether we're talking about the commercial or the FPGA-based CPU---you're using the same DIMMs either way).

On further reflection, I suppose the contents of the block RAMs inside the FPGA (little SRAMs sprinkled through the fabric) might be a prize worth chasing, since those are presumably acting as registers and cache for our CPU. It *might* be possible to do so by cold booting the FPGA with a configuration that dumps the contents of the block RAMs, assuming that those contents aren't cleared by power-on reset or the configuration process itself.

To your point above about auditing the configuration actually running on an FPGA: that would be very interesting to prevent against an FPGA manufacturer going the reflections-on-trusting-trust route. Here's one way an evil FPGA manufacturer might proceed: the CAD software that the manufacturer provides with the FPGA detects that you're synthesizing a CPU. Rather than emit a flawed bitstream (which might be detectable just by examining the bitstream itself), perhaps the software would hide in the bitstream some instructions that direct the FPGA's configuration state machine to introduce flaws at config time. (FPGA config bitstreams are big, complicated, and proprietary; so it's not impossible that they contain enough redundancy that one could use stego to hide such commands in the bitstream.)

(This approach also helps to get around the fact that the synthesis and fitting process does a randomized search for a configuration that meets your criteria (e.g., speed, size, etc.). In other words: the best time to detect "this guy is trying to build a CPU" is when the software is reading your Verilog, not when it's loading the bitstream into an FPGA, because it's really really hard to decide "this is a CPU" just by examining the bitstream itself.)

But I suppose if I were so devious as a manufacturer of FPGAs as to detect a CPU design and introduce subtle bugs as a result, I would probably also do my best to keep you from detecting it, even if you *are* able to read out the config from a running FPGA. It's quite a large haystack for hiding such a little needle...

(And regarding cold booting to read out the config SRAMs: I worry even more here than in the case of block RAMs that these have a carefully designed power-on reset scheme in place so that the FPGA fabric comes up in a known state.)

-=rsw
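As a purely illustrative aside on the haystack point: because fitting is a randomized search, even two honest rebuilds of the same HDL can differ in a great many bit positions, which is part of why "just diff the bitstream" is such a weak audit. A toy sketch that treats two bitstream files as opaque bytes and counts differing bits (file names are placeholders):

import sys

def bit_diff(path_a, path_b):
    # Treat both bitstreams as opaque byte strings and count differing bits.
    a = open(path_a, "rb").read()
    b = open(path_b, "rb").read()
    if len(a) != len(b):
        return None  # different sizes; a real comparison would go frame by frame
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

if __name__ == "__main__":
    # usage: bitdiff.py build1.bit build2.bit
    print("differing bits:", bit_diff(sys.argv[1], sys.argv[2]))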
Sorry for the delayed response. I have been falling behind on my personal email. On Wed, Jul 29, 2015 at 12:32 PM Riad S. Wahby <rsw@jfet.org> wrote:
The Doctor <drwho@virtadpt.net> wrote:
I thought the point being made in the conversation was (and correct me if I'm wrong) that one could dump an arbitrary FPGA's contents to do a security audit on them.
Ah, I see. I thought the focus was on cold boot or evil maid attacks against FPGA-based (thus, nominally trustworthy) CPUs, and how these attacks might compare to similar attacks against a commercial CPU.
This is what *I* meant. I was assuming an open source CPU design, which would make dumping the configuration ROM pointless.
As you pointed out before, one may as well just grab the configuration out of the ROM itself, and I agree---but my point was that either way, what are we getting except some information that's not really secret? So I think we're in violent agreement, at least to the extent that we're talking about the same thing :)
Also: one assumes that cold boot attacks against the contents of RAM are more useful than against the SRAMs that hold the FPGA's configuration, and in that case probably it's little different from the equivalent attack against a commercial CPU (the DRAM is more or less the same whether we're talking about the commercial or the FPGA-based CPU---you're using the same DIMMs either way).
On further reflection, I suppose the contents of the block RAMs inside the FPGA (little SRAMs sprinkled through the fabric) might be a prize worth chasing, since those are presumably acting as registers and cache for our CPU. It *might* be possible to do so by cold booting the FPGA with a configuration that dumps the contents of the block RAMs, assuming that those contents aren't cleared by power-on reset or the configuration process itself.
This is what I was thinking of, though the terminology had escaped me for the moment.
To your point above about auditing the configuration actually running on an FPGA: that would be very interesting to prevent against an FPGA manufacturer going the reflections-on-trusting-trust route.
Here's one way an evil FPGA manufacturer might proceed: the CAD software that the manufacturer provides with the FPGA detects that you're synthesizing a CPU. Rather than emit a flawed bitstream (which might be detectable just by examining the bitstream itself), perhaps the software would hide in the bitstream some instructions that direct the FPGA's configuration state machine to introduce flaws at config time.
(FPGA config bitstreams are big, complicated, and proprietary; so it's not impossible that they contain enough redundancy that one could use stego to hide such commands in the bitstream.)
(This approach also helps to get around the fact that the synthesis and fitting process does a randomized search for a configuration that meets your criteria (e.g., speed, size, etc.). In other words: the best time to detect "this guy is trying to build a CPU" is when the software is reading your Verilog, not when it's loading the bitstream into an FPGA, because it's really really hard to decide "this is a CPU" just by examining the bitstream itself.)
But I suppose if I were so devious as a manufacturer of FPGAs as to detect a CPU design and introduce subtle bugs as a result, I would probably also do my best to keep you from detecting it, even if you *are* able to read out the config from a running FPGA. It's quite a large haystack for hiding such a little needle...
Sounds possible but much harder to do and with lower impact than backdooring a widely used CPU, at least until a bunch of people the government considers threats start using soft cores.
(And regarding cold booting to read out the config SRAMs: I worry even more here than in the case of block RAMs that these have a carefully designed power-on reset scheme in place so that the FPGA fabric comes up in a known state.)
Might it be possible to use a glitch attack to prevent this from happening? I guess the question is, would you trust an FPGA's block RAM more than you would TXT and L1 cache on an Intel CPU to protect sensitive data?
Pasting in more from the truecrypt thread that should have gone in here...

On Tuesday, 28 July 2015 at 21:34:07, Steve Kinney wrote:
If a market is willing to pay enough to support and grow the project, it can be done. Are there potential partners and large scale consumers for "top security through total transparency" to make an open hardware project viable today?

rysiek: Yes. And there are ways to create a market like that, albeit it takes time.

I haven't looked at how the Black Phone folks are doing lately, but that looks like the kind of product line where open hardware might find its first viable home.

rysiek: Funny you should ask: http://www.theinquirer.net/inquirer/news/2402536/us-department-of-defence-ad...
On 07/28/2015 03:40 PM, oshwm wrote:
So is anyone working on building an 'openfab' or is it such a big task that everyone just backs away in horror? :D

doctor: The closest I know to that is Jeri Ellsworth, who's at the point of fabbing her own discrete transistors in a homebrew semiconductor foundry. If she's still working on this project, she's probably a bit closer but I haven't spoken to her about it.
On 07/29/2015 02:37 PM, grarpamp wrote:
Pasting in more from the truecrypt thread that should have gone in here...
On Tuesday, 28 July 2015 at 21:34:07, Steve Kinney wrote:
If a market is willing to pay enough to support and grow the project, it can be done. Are there potential partners and large scale consumers for "top security through total transparency" to make an open hardware project viable today?
rysiek: Yes. And there are ways to create a market like that, albeit it takes time.
I haven't looked at how the Black Phone folks are doing lately, but that looks like the kind of product line where open hardware might find its first viable home.

rysiek: Funny you should ask: http://www.theinquirer.net/inquirer/news/2402536/us-department-of-defence-adopts-nsa-proof-blackphone-devices
Good catch! See also: http://www.defenseone.com/technology/2015/03/pentagon-rolls-out-nsa-proof-smartphones/108892/

Must have been something in the air. Now everybody will want something similar - and some won't have the luxury of trusting the NSA, NRO, ETC the same way the DoD can. "Let's talk about the chips in your TAO/ANT Brand scrambler phones..." :o)
On 07/28/2015 03:40 PM, oshwm wrote:

So is anyone working on building an 'openfab' or is it such a big task that everyone just backs away in horror? :D

doctor: The closest I know to that is Jeri Ellsworth, who's at the point of fabbing her own discrete transistors in a homebrew semiconductor foundry. If she's still working on this project, she's probably a bit closer but I haven't spoken to her about it.
I do see problems with scaling DIY chip projects up to commercial production numbers, and down in scale to achieve fast, high capacity performance. That's why I am much more interested in the prospects of a manufacturing process built for radical transparency, using "commercial best practice" technology at conventional production facilities.

IMO the same kind of radical transparency should apply to all industrial processes that pose large potential hazards to public health & safety, i.e. nuclear power stations, transgenic agriculture, etc. :o)
On Wed, Jul 29, 2015 at 6:06 PM, Steve Kinney <admin@pilobilus.net> wrote:
I do see problems with scaling DIY chip projects up to commercial production numbers, and down in scale to achieve fast, high capacity performance.
It's not DIY. It's many similarly thinking Y's coming together to DI. Eventually you'll reach beyond any given initial fledgling "hobby class" goalposts. Nothing unusual or unachievable there. Since it's all been done before, how long to rebuild trustable compute and manufacturing from trustable discretes like relays, punchtape, and hand tools to 100nm? 5y? 10y?
That's why I am much more interested in the prospects of a manufacturing process built for radical transparency, using "commercial best practice" technology
All part of it.
at conventional production facilities.
Except this, unless you're demonstrating a way to convince these untrustable closed entities to open up their entire process and production line for your inspection pursuant to each and every audited run you want to put through it. If you're not, then you can't be certain that what you put in is what you get out.
IMO the same kind of radical transparency should apply to all industrial processes that pose large potential hazards to public health & safety, i.e. nuclear power stations, transgenic agriculture, etc.
You should be able to read the as-built blueprints of all of these things online, access all areas of plants for independent inspection, raise enforceable design and safety flags, etc.
On 07/29/2015 08:59 PM, grarpamp wrote: [ ... ]
That's why I am much more interested in the prospects of a manufacturing process built for radical transparency, using "commercial best practice" technology
All part of it.
at conventional production facilities.
Except this, unless you're demonstrating a way to convince these untrustable closed entities to open up their entire process and production line for your inspection pursuant to each and every audited run you want to put through it. If you're not, then you can't be certain that what you put in is what you get out.
That's exactly what I'm talking about: Essentially taking over the production process and working alongside facility staff, with particular attention to choke points where validation is both possible and productive. ISO quality programs include provision for onsite participation by clients; it's more a question of money, and picking a facility that can readily accommodate the requirements, than of getting anyone to open up any closed process. This might deprive one of the advantages of "commercial trade secret" techniques belonging to the facility's owners, but that's kind of the whole point of the exercise. Smaller facilities with older equipment would be better prospects than the mega-shops.

One should never be certain that one is receiving exactly what was specified, regardless of validation. Somewhere, the rising curve of security costs will cross a falling curve of security risks, and that's as good a place as any to draw a line. Mark the other side of the line "here be dragons - maybe." End users can pick up any perceived slack if and as they want to spend the money to do so.
IMO the same kind of radical transparency should apply to all industrial processes that pose large potential hazards to public health & safety, i.e. nuclear power stations, transgenic agriculture, etc.
You should be able to read the as-built blueprints of all of these things online, access all areas of plants for independent inspection, raise enforceable design and safety flags, etc.
Damn straight. I'm especially picky about the inspection records of nuclear power facilities; IMO they should include video of the inspectors at work, especially in hazardous / PITA locations they might be inclined to skip. I recall at least one case where reactor inspection logs were casually falsified for years, very nearly causing a catastrophic core breach due to undetected containment vessel deterioration located "around a corner and out of sight" from casual view.
On Wed, Jul 29, 2015 at 9:47 PM, Steve Kinney <admin@pilobilus.net> wrote:
That's exactly what I'm talking about: Essentially taking over the production process and working alongside facility staff, with particular attention to choke points where validation is both possible and productive. ISO quality programs include provision for onsite participation by clients
I submit the above is moot... you're taking your chip design in on your USB, happy as a clam to be the one to insert it into their computer, pull it up on their screen, and watch the whole thing play out before your eyes... on down the line till out pops a chip in your hand, yay! But you failed to realize their computer and software probably wasn't made by them, nor has any open-to-you audit crosscheck been wrapped around it or its operators and maintainers... on down the line. You can carve a stick with a knife but you can't really build a trusted CPU with an untrusted CPU.

If the goal is to build an open trusted fab, you must build an open trusted fab, by and with the hard and different philosophical mofos who refuse to concur unless each step of design, build and operation is plainly validated. Otherwise you're just selling tourist tickets to the theme park.

This is old school TCSEC / CC applied to manufacturing. You have cost efficiency in that the knowledge of tool and chip making already exists. You use that savings to offset the cost of rebuilding with TCSEC, as opposed to trying to impart trust upon existing systems, which is prohibitive.
Somewhere, the rising curve of security costs will cross a falling curve of security risks, and that's as good a place as any to draw a line.
Trust is not defined by a point on a cost curve.
On 07/29/2015 10:48 PM, grarpamp wrote:
On Wed, Jul 29, 2015 at 9:47 PM, Steve Kinney <admin@pilobilus.net> wrote:
That's exactly what I'm talking about: Essentially taking over the production process and working alongside facility staff, with particular attention to choke points where validation is both possible and productive. ISO quality programs include provision for onsite participation by clients
I submit the above is moot... you're taking your chip design in on your USB, happy as a clam to be the one to insert it into their computer, pull it up on their screen, and watch the whole thing play out before your eyes... on down the line till out pops a chip in your hand, yay!
That's not what I have in mind at all. Everything that touches the production process would have to be isolated and audited. In practical terms, that would mean bringing the computers in question in from offsite, with relevant software already installed and validated. In the context at hand, watching the whole thing play out would consist of directing the whole process one step at a time, per a procedure created in collaboration with the contractor's engineering and QA departments. Optical masks and/or equivalent data files would be handled by client personnel and retained for validation. The chips that pop out would be under very stringent property control, and quite a lot of them would be torn down and thoroughly analyzed "at home" to validate the run.
But you failed to realize their computer and software probably wasn't made by them, nor has any open to you audit crosscheck been wrapped around it or it's operators and maintainers... on down the line. You can carve a stick with a knife but you can't really build a trusted cpu with an untrusted cpu.
If the goal is to build an open trusted fab, you must build an open trusted fab, by and with the hard and different philosophical mofos who refuse to concur unless each step of design, build and operation is plainly validated. Otherwise you're just selling tourist tickets to the theme park.
Just like doing it at an existing commercial facility, with the added advantage of much better control of physical access, hardware, etc. at the dedicated facility. Whether that advantage would be worth the extra costs, vs. real security improvements, depends on how reliable the post-production tear down and analysis of end product components is considered. "A difference that makes no difference is no difference." If it really is impossible to build a trusted CPU with an untrusted CPU, then it is not possible to build a trusted CPU. Fortunately, trust is not an absolute and there are ways to build relatively trustworthy systems from relatively untrustworthy components. A quote to the effect of "I do not care who votes, I only care who counts the votes" comes to mind but I'm too lazy to look it up right now.
This is old school TCSEC / CC applied to manufacturing.
You have cost efficiency in that the knowledge of tool and chip making already exists. You use that savings to offset cost of rebuilding with TCSEC. As opposed to trying to impart trust upon existing systems which is prohibitive.
Somewhere, the rising curve of security costs will cross a falling curve of security risks, and that's as good a place as any to draw a line.
Trust is not defined by a point on a cost curve.
I think that in the engineering and business worlds, trust is always a point on a cost curve. When Trust and Security are considered as absolutes, the costs of maintaining them rise exponentially until the protected assets die of resource starvation. Civilization as we know it is presently following the path of demanding absolute security, provided by rulers vested with absolute trust, to early termination.

Getting more of the practical business of making IC chips into the public domain and widely distributed, enabling faster and decentralized recovery of today's industrial capabilities, is one of the benefits of open hardware development projects. The objectives of an open IC project include providing protections against institutional sabotage, but also the creation of protocols, documents and data that can be re-used and improved over time. Policies and protocols necessary to assure adequate transparency, including repeatability, would amount to enlarging the GPL ecosystem to encompass computer hardware as well as software.

If such a project can't produce products that are cost effective for end users, it will remain at most a theme park ride for misguided investors. The "high security" angle looks like a place where potential customers and nearly off-the-shelf capabilities meet. :o)
On Thu, Jul 30, 2015 at 12:11 AM, Steve Kinney <admin@pilobilus.net> wrote:
staff, with particular attention to choke points where
That's not what I have in mind at all. Everything that touches the production process would have to be isolated and audited. In practical terms, that would mean bringing the computers in question in from offsite, with relevant software already installed and validated.
People talk a lot about refitting and auditing existing setups. There's a lot of inbred friction there so the cost to successfully do that vs. a complete ground up trusted rebuild may be roughly equivalent. Therefore if so why not just choose the latter?
In the context at hand, watching the whole thing play out would consist of directing the whole process one step at a time, per a procedure created in collaboration with the contractor's engineering and QA departments. Optical masks and/or equivalent data files would be handled by client personnel and retained for validation. The chips that pop out would be under very stringent property control, and quite a lot of them would be torn down and thoroughly analyzed "at home" to validate the run.
Still sounds like untrusted base, chicken and egg. http://s12.postimg.org/n93g4udql/DSCF0431_who_came_first.jpg
depends on how reliable the post-production tear down and analysis of end product components is considered.
A quote to the effect of "I do not care who votes, I only care who counts the votes" comes to mind
And how do you propose to count the votes when your ballots are measured in square nanometers and your counting machines are all made by one secretive company and composed of anywhere between 1B and 6B untrusted logic gates? Did you ever hear Intel say "our own designs and fabs have no backdoors and we're not subject to backdooring"? Did you ever hear GlobalF say "we don't inject backdoors in customer silicon and we're not subject to backdooring"? Would it mean anything to you if they did? Would it make any difference if they offered you a field trip? Do independents actually think their one-off decap validation project proves or gives odds on the entire line and distribution chain? And when was any Intel / AMD CPU last publicly decapped and fully audited? 8088? Never?
This is old school TCSEC / CC applied to manufacturing.
then it is not possible to build a trusted CPU.
You watch while... I collect wood and ore and smelt into axe, you trust axe. I split tree and assemble hut, you trust hut. I put wheel in water and make mill, you trust flour. I give you magical computer before I make abacus, you throw in river and order me make abacus first. Eventually trusted CPU is made.
I think that in the engineering and business worlds, trust is always a point on a cost curve.
I'd have more trust in some kid to not destroy my lawn with the mower for $10 than some company for $50. Govt contracts seem to deliver more debt than trust and are a prime example that trust and cost are separate. If not, then the HUNDREDS OF BILLIONS governments spend a year would have resulted in 5 9's of trust decades ago. But no, they can't even keep OPM secure from crackers, let alone the backdoored CPUs they import from Malay fabs. Put well under 1/100 of that pie a year for a few years into a trusted open fab project and I'd bet you can get "Beyond A1" consumer gear out the other end at tolerable prices. Don't forget to charge 10+ times more for government jobs :)
On 07/30/2015 01:52 AM, grarpamp wrote:
On Thu, Jul 30, 2015 at 12:11 AM, Steve Kinney <admin@pilobilus.net> wrote:
<SNIP>
then it is not possible to build a trusted CPU.
You watch while... I collect wood and ore and smelt into axe, you trust axe. I split tree and assemble hut, you trust hut. I put wheel in water and make mill, you trust flour. I give you magical computer before I make abacus, you throw in river and order me make abacus first. Eventually trusted CPU is made.
Woah! How many years to build the tool chain to a trusted CPU? Also, how many people? And how to trust them? One bad apple, you know.
would a pre-internet era set of IC's and components kick-start the process a little without losing too much trust? Use slow old IC's in parallel to gain something usable :)
On 07/30/2015 02:32 AM, oshwm wrote:
would a pre-internet era set of IC's and components kick-start the process a little without losing too much trust? Use slow old IC's in parallel to gain something usable :)
There was NSA before Internet ;)
my point being that without a publicly available global network then backdooring IC's may not have been something they were interested in :) But i guess they could still have seen some value in doing this and having modems phone home.
On Thu, 30 Jul 2015 10:03:23 +0100 oshwm <oshwm@openmailbox.org> wrote:
my point being that without a publicly available global network then backdooring IC's may not have been something they were interested in :)
It should be possible to use 'older' components that can't be backdoored or that are highly unlikely to be backdoored to build tools to validate newer untrusted systems?
But i guess they could still have seen some value in doing this and having modems phone home.
On 07/30/2015 01:32 AM, oshwm wrote:
would a pre-internet era set of IC's and components kick-start the process a little without losing too much trust? Use slow old IC's in parallel to gain something usable :)
Something like this? http://cpuville.com/

Or this? http://www.homebrewcpu.com/

Or maybe something like this, seeing as how we really can't trust anything integrated at the microscopic level or smaller? http://6502.org/users/dieter/mt15/mt15.htm

Hell, why don't we just start building DCPU-16's and bootstrap from there? https://en.wikipedia.org/wiki/0x10c

-- The Doctor
On Thu, Jul 30, 2015 at 1:04 PM, The Doctor <drwho@virtadpt.net> wrote:
On 07/30/2015 01:32 AM, oshwm wrote:
would a pre-internet era set of IC's and components kick-start the process a little without losing too much trust? Use slow old IC's in parallel to gain something usable :)
Something like this? http://cpuville.com/ http://www.homebrewcpu.com/ Or maybe something like this, seeing as how we really can't trust anything integrated as the microscopic level or smaller? http://6502.org/users/dieter/mt15/mt15.htm
Those are close and would certainly be a goalpost in the rebuild. You might be able to trust logic gates because you can exhaustively test their logic. On the other hand, how do you know that once you connect enough of them to each other that their secret gates inside don't sense each other and activate? Since you're that close to stone age anyway, why not start one more step back at relays and core memory. It's like trying to validate a 256 bit blackbox hash function someone gives you... sure, their supplied test vectors may all pass, but you have no idea or way to test what happens when you start pushing real data through the secret instructions. You simply can't test all the possible data combinations so you have to throw their box back in the snakeoil.
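A small sketch of that testability gap: a 2-input gate has only 2**2 = 4 input cases, so the logic you can see is exhaustively checkable, while a 256-bit black box has 2**256 cases, which isn't. gate_under_test here is just a stand-in for however you'd actually drive the pins and sample the output of the physical part, and note that even a passing exhaustive per-gate check says nothing about gates that only misbehave in combination, per the above:

from itertools import product

def nand_spec(a, b):
    # The published truth table we're checking against.
    return 1 - (a & b)

def gate_under_test(a, b):
    # Stand-in for driving a real part's pins and sampling its output.
    return 1 - (a & b)

def exhaustive_check(device, spec, n_inputs):
    # Feasible only because 2**n_inputs is tiny for a discrete gate.
    return all(device(*bits) == spec(*bits)
               for bits in product((0, 1), repeat=n_inputs))

if __name__ == "__main__":
    print("2-input gate, cases to test:", 2 ** 2,
          "-> passes:", exhaustive_check(gate_under_test, nand_spec, 2))
    print("256-bit black box, cases to test:", 2 ** 256)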
Hell, why don't we just start building DCPU-16's and bootstrap from there? https://en.wikipedia.org/wiki/0x10c
That's too complex, and the first one pre-exists so it's chicken and egg.
On 07/30/2015 11:42 AM, grarpamp wrote:
because you can exhaustively test their logic. On the other hand, how do you know that once you connect enough of them to each other that their secret gates inside don't sense each other and activate? Since you're that close to stone age anyway, why not start one more step back at relays and core memory.
So what you're basically saying is that the entire tech stack, all the way back to the far edge of electromechanical information processing, is basically completely untrustworthy. There is no way at all to trust anything that we can't actually see the logic gates of with the naked eye, which would put us... where? Maybe tens of computations per second, at most? A little more (but not much)?

Fuck it. Time to go home, everyone. They Won.

-- The Doctor
On 30/07/15 23:11, The Doctor wrote:
On 07/30/2015 11:42 AM, grarpamp wrote:
because you can exhaustively test their logic. On the other hand, how do you know that once you connect enough of them to each other that their secret gates inside don't sense each other and activate? Since you're that close to stone age anyway, why not start one more step back at relays and core memory.
So what you're basically saying is that the entire tech stack, all the way back to far edge pf electromechanical information processing is basically completely untrustworthy. There is no way at all to trust anything that we can't actually see the logic gates of with the naked eye, which would put us... where? Maybe tens of computations per second, at most? A little more (but not much)?
Fuck it. Time to go home, everyone. They Won.
That is spot on, we can't trust any of it and most people would concede that we have lost the battle. So (in my fairly inexperienced opinion in this field) there are possibly two options:

1) Re-invent the last 65 years of computing - not impossible, and we have the knowledge amongst average tinkerers to do this, but maybe it'll take us 10-20 years to catch up, utilising (potentially massively) parallel processing from early on in the process to gain speeds that were not common in the past at certain tech levels.

2) Look at whether it is possible for us to develop trustworthy systems using untrustworthy components. Is there a way we can maybe use multiple components to compare their outputs to see if any of them are not trustworthy? Or maybe identify untrustworthy results from operations and ignore them, favouring trustworthy results?
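A minimal sketch of option 2), with obvious caveats: the three "vendor" functions below are stand-ins for what would really be calls out to physically separate parts from unrelated supply chains, and majority voting only catches a component that lies in its output -- it does nothing about one that computes correctly but leaks on the side:

from collections import Counter

def vendor_a(x, y): return x + y
def vendor_b(x, y): return x + y
def vendor_c(x, y): return x + y + (1 if x == 0xdead else 0)  # hypothetical bugdoor

def voted(x, y, backends=(vendor_a, vendor_b, vendor_c)):
    # Run the same computation on every backend and only accept a majority answer;
    # the full result list is returned too, so any disagreement is visible.
    results = [f(x, y) for f in backends]
    value, votes = Counter(results).most_common(1)[0]
    if votes <= len(backends) // 2:
        raise RuntimeError("no majority: %r" % results)
    return value, results

if __name__ == "__main__":
    print(voted(2, 3))        # all three agree
    print(voted(0xdead, 1))   # vendor_c lies; the other two outvote it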
On 7/30/15, oshwm <oshwm@openmailbox.org> wrote:
On 30/07/15 23:11, The Doctor wrote:
On 07/30/2015 11:42 AM, grarpamp wrote:
because you can exhaustively test their logic. On the other hand, how do you know that once you connect enough of them to each other that their secret gates inside don't sense each other and activate? Since you're that close to stone age anyway, why not start one more step back at relays and core memory.
So what you're basically saying is that the entire tech stack, all the way back to far edge pf electromechanical information processing is basically completely untrustworthy. There is no way at all to trust anything that we can't actually see the logic gates of with the naked eye, which would put us... where? Maybe tens of computations per second, at most? A little more (but not much)?
Fuck it. Time to go home, everyone. They Won.
That is spot on, we can't trust any of it and most people would concede that we have lost the battle.
I don't buy that.

0) Whilst using modern Intel vaseline chips:

1) Program full FLOSS stack for circuit/chip dev:
   # some starts:
   apt-cache search circuit
   apt-cache search electron

2) Start with one of the FLOSS CPUs, eg SPARC2, and divide and conquer its analysis audit.

3) With open audited/auditable fab, we burn some chips.

4) Now divide and conquer to analyse those physical chips, using physical analysis one step below the process node - eg 120nm chip, 60nm chip analysis.

As these steps occur, software is developed to facilitate each step of course. Proprietary software for the audit bits though to make sure it is not backdoored by Intel.
On 07/30/2015 04:43 PM, Zenaan Harkness wrote:
1) Program full FLOSS stack for circuit/chip dev: # some starts: apt-cache search circuit apt-cache search electron
2) Start with one of the FLOSS CPUs, eg SPARC2, and divide and conquer it's analysis audit.
That's been mentioned here several times. Then someone else chimes in with the injection of boobytrapped packages to ensure that designs are automagickally tampered with, or boobytrapped compilers (nevermind that we have a workable way of detecting and mitigating the attack (try it, it works!)). Then someone else chimes in with "Well, we can't even trust the FPGAs or the gate synthesis software for the same reason." This is yet another iteration of the same loop on this mailing list. It would be killfile-able if some of the basic terms didn't change between iterations.

I suppose what I'm bitching about (and I've probably just faceplanted by stepping into that particular pothole - it's my turn, I guess) is that there seems to be no part of the threat model where risk is acceptable. I mean, going all the way back to hand-wired electromechanical processors just to be able to bootstrap back to silicon and losing 20-30 years of technical advancement? Somewhere, we went way off course.

There is a saying: "Perfect is the enemy of working." I think that's where we as a group have lost our way. The threats are known. The risks are known. Let's act.

-- The Doctor
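The "workable way" alluded to there is presumably diverse double-compiling: rebuild the suspect compiler's published source with an independent compiler, use that result to rebuild it again, and compare the bits against the suspect compiler rebuilding itself. A hedged sketch of just the comparison step, where every compiler name, path and flag is a hypothetical placeholder and bit-for-bit reproducible builds are assumed:

import hashlib
import subprocess

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build(compiler, source, output):
    # "compiler" is whatever command rebuilds the compiler from its own source.
    subprocess.run([compiler, source, "-o", output], check=True)

if __name__ == "__main__":
    suspect = "./cc-suspect"   # the compiler binary we were shipped (placeholder)
    trusted = "./cc-trusted"   # an independent compiler we have more faith in (placeholder)
    source = "cc.c"            # the suspect compiler's published source (placeholder)

    build(trusted, source, "stage1")     # stage1: suspect's source, built by the trusted compiler
    build("./stage1", source, "stage2")  # stage2: suspect's source, built by stage1
    build(suspect, source, "regen")      # regen:  the suspect compiler rebuilding itself

    print("clean" if sha256("stage2") == sha256("regen") else "MISMATCH - investigate")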
From: The Doctor <drwho@virtadpt.net>
I suppose what I'm bitching about (and I've probably just faceplanted by stepping into that particular pothole - it's my turn, I guess) is that there seems to be no part of the threat model where risk is acceptable. I mean, going all the way back to hand-wired electromechanical processors just to be able to bootstrap back to silicon and losing 20-30 years of technical advancement?
Somewhere, we went way off course. There is a saying: "Perfect is the enemy of working." I think that's where we as a group have lost our way.
The threats are known. The risks are known. Let's act.

I agree with that. I think it's better that we get 50% of the population to use encrypted phones, where the encryption isn't truly known to be perfect, than to get 1% of the population to use perfect encryption. Verifying the last little bit of doubt is going to cost a rather large amount of money. Raising the demand for crypto phones to 50% represents a huge market, which will be satisfied, and the profits for that market will pay for the next generation of closer-to-perfect phones.

Jim Bell
On Fri, Jul 31, 2015 at 3:53 PM, jim bell <jdb10987@yahoo.com> wrote:
From: The Doctor <drwho@virtadpt.net>
I suppose what I'm bitching about (and I've probably just faceplanted by stepping into that particular pothole - it's my turn, I guess) is that there seems to be no part of the threat model where risk is acceptable. I mean, going all the way back to hand-wired electromechanical processors just to be able to bootstrap back to silicon and losing 20-30 years of technical advancement?
It's a fast rebuild using trusted principles: there are no tech discoveries needed, and no loss of any tech. Yes, you have to learn to apply trusted principles, and that will take time. And you have to keep up with whatever new tech comes along after the time you start, which is normal. So for a while you just have to work harder, faster, better. That's standard practice and nothing new for a startup and the people in it.
Somewhere, we went way off course. There is a saying: "Perfect is the enemy of working." I think that's where we as a group have lost our way.
The threats are known. The risks are known. Let's act.
I agree with that. I think it's better that we get 50% of the population to use encrypted phones, where the encryption isn't truly known to be perfect, than to get 1% of the population to use perfect encryption. Verifying the last little bit of doubt is going to cost a rather large amount of money. Raising the demand for crypto phones to 50% represents a huge market, which will be satisfied, and the profits for that market will pay for the next generation of closer-to-perfect phones.
Ok, fine. Let's say you don't care to trust your chip-designing and printing hardware, and you opt to totally skip doing anything to rebuild or validate those parts of the trust chain [1]. But you still want an open hardware crypto phone for yourself and the masses. Which would you prefer to do:

a) wait for some bigcorp like MS/Nokia, Apple or HTC to convincingly say and show an open hardware crypto design?
b) send your own open hardware design to GlobalFoundries?
c) send your own open hardware design to a community owned and operated open fab (still subject to your choice in [1] above)?

I suggest investing in (c) now will bring more and more community and other runs through it, such that you can invest in [1] above later. You might even have to bank profit from (b) to get to (c). But anything involving (a) is not "Let's act", unless you think your pleading and pressure (which is all you can do there) will be fruitful [2]. So at minimum you'd best start acting on (b) or (c). Right?

[2] Still waiting on open video cards and drivers, eh? How many decades of "raising the demand" has that been? Lol. Oh, but Apple did add some closed crypto'ey fingerprint'ey passphrase'ey thing to their phone, so maybe that was pressure, and trust'ey enough, and we can all "act" by throwing dollars at that instead of ever having our own (b/c/[1]), and just have faith instead.

I'm tired of (a), and it's boring, and if not evil then at least not really aligned to your interests. If you want something done, carry a big stick, or do it yourself until you have one.
On Thu, Jul 30, 2015 at 6:11 PM, The Doctor <drwho@virtadpt.net> wrote:
So what you're basically saying is that the entire tech stack, all the way back to far edge pf electromechanical information processing is basically completely untrustworthy.
A purist might say that, and you'd have a hard time refuting them, because for the most part you raced to build a system that "works", not necessarily one you "trust" or that is proofed. The point is that if you're going to consider, analyze, create and certify trust, you have to rip apart your current way of thinking in some pretty mind bending ways, because everyone has been cultured since birth to accept things that are blindly handed to them as trusted. Where along the historical line of tools would you feel confident or shaky in using such a tool, effectively blindly dropped into your hand, to create or do something you trust with it, and why?
From sandpaper to CNC machine... From knife to MRI... From relay to the latest Xeons and ARM's...
https://www.schneier.com/blog/archives/2006/01/countering_trus.html Even with things like this, when it comes to hardware it's still turtles. You can't use an Intel CPU to crosscheck an Intel CPU. With actors like the NSA and datagrabber ideology inserting and rooting stuff everywhere, you probably can't use any other closed CPU either. Destructively testing your rig just to replace it with an untested copy is pointless.
There is no way at all to trust anything that we can't actually see the logic gates of with the naked eye
Theoretically, if the image data is passed through a computer to your eye on the screen, yes. Unless you know that the entire history and process that produced the suspect gates that were just placed in your hand (or equivalently, your imaging rig)... is trusted.
which would put us... where? Maybe tens of computations per second, at most? A little more (but not much)?
No, use that level to build the next faster and so on.
Fuck it. Time to go home, everyone. They Won.
Purists? Turtles? Who knows. But one thing's for certain, today's hardware and production is closed. And just as with closed source software, it would be a far stretch to point at the billion+ transistors on your desk and genuinely say "Yeah, I trust that". That should be enough reason to put serious thought and action into creating an opensource process that could print trusted opensource hardware... an open fab. Otherwise you're effectively saying "Fuck it".
On Thu, Jul 30, 2015 at 4:22 AM, Mirimir <mirimir@riseup.net> wrote:
then it is not possible to build a trusted CPU.
You watch while... I collect wood and ore and smelt into axe, you trust axe. I split tree and assemble hut, you trust hut. I put wheel in water and make mill, you trust flour. I give you magical computer before I make abacus, you throw in river and order me make abacus first. Eventually trusted CPU is made.
Woah! How many years to build the tool chain to a trusted CPU?
As before, the knowledge already exists, so physical replication from the ground up should be very fast. TCSEC is not unknown, but designing and embedding it into every process is rather new (both as mindset and applied) so it will take some time and must be done beforehand.
Also, how many people? And how to trust them? One bad apple, you know.
Again... draw interested people from multiple philosophical sectors, use the multiple-man rule, consensus rule. You don't have to trust them outside the fab, only observe them inside. The more principled zealots like Stallman and Juan are involved, the more likely someone will flag a trust violation. The human problem is hard. But at the end of the day, if the outcome of the project (trusted chips) is important, the right people will come together to do it, and the level of trust achieved will be orders of magnitude higher than what exists today.
On 7/30/15, grarpamp <grarpamp@gmail.com> wrote:
On Thu, Jul 30, 2015 at 4:22 AM, Mirimir <mirimir@riseup.net> wrote:
Also, how many people? And how to trust them? One bad apple, you know.
Again... draw interested people from multiple philosophical sectors, use multiple man rule, consensus rule.
<lightbulb> Ahh yes! Federal politician, State politician, Local politician, police occifer, detective, CIA agent, FBI agent, NSA agent, KGB agent, Richard Stallman, Juan. That should do it?
On Thu, Jul 30, 2015 at 7:25 PM, Zenaan Harkness <zen@freedbms.net> wrote:
<lightbulb> Ahh yes! Federal politician, State politician, Local politician, police occifer, detective, CIA agent, FBI agent, NSA agent, KGB agent, Richard Stallman, Juan.
That should do it?
Agency is a maddening asymptote of turtles... single, double, triple, quad... Eventually it reaches near Juan's domain who will always tell you straight. So yeah, that should do just fine ;-)
On Wed, 29 Jul 2015 18:06:10 -0400 Steve Kinney <admin@pilobilus.net> wrote:
IMO the same kind of radical transparency should apply to all industrial processes that pose large potential hazards to public health & safety, i.e. nuclear power stations, transgenic agriculture, etc.
Are you crazy? The terrists would use the information against the good guys.
:o)
On 07/29/2015 10:08 PM, Juan wrote:
On Wed, 29 Jul 2015 18:06:10 -0400 Steve Kinney <admin@pilobilus.net> wrote:
IMO the same kind of radical transparency should apply to all industrial processes that pose large potential hazards to public health & safety, i.e. nuclear power stations, transgenic agriculture, etc.
Are you crazy? The terrists would use the information against the good guys.
Security by obscurity ain't security at all. But of course you must have been joking. :D
On Thu, Jul 30, 2015 at 1:06 AM, Steve Kinney <admin@pilobilus.net> wrote:
I do see problems with scaling DIY chip projects up to commercial production numbers, and down in scale to achieve fast, high capacity performance. That's why I am much more interested in the prospects of a manufacturing process built for radical transparency, using "commercial best practice" technology at conventional production facilities.
IMO the same kind of radical transparency should apply to all industrial processes that pose large potential hazards to public health & safety, i.e. nuclear power stations, transgenic agriculture, etc.
:o)
i have been thinking about this, and i was thinking a lot about actual nuclear bomb sites being added to the list of 'hazards to the public', and i was like - not possible to give the public any kind of access to such horrors. but then it's already in play ... NATOish people sold the design of the bomb to pakistan and israel long ago, so maybe if more people were involved in the process it would actually be in safer hands and there would be less proliferation, as the modus operandi seems to be that there be more bombs to counter.

best film ever, La Jetée, points in the direction of nano/bio tech possibilities with the murder of species... can we have the components make themselves and self destruct when in danger?

-- Cari Machet
and what were we just talking about: http://www.dailydot.com/politics/industrial-ethernet-switches-ies-vulnerabil...
Not sure if someone already mentioned this, but what of opencores.org, which offers professional support for products from that site? I imagine that one could do, for example, very transparent and open source hardware development, with production processes that could scale up depending on demand; one could start with FPGAs. There may be some already working on just such a project, but I'm not aware of the details.

-O
Sounds good. Is open source hardware the future?
We will not get 50% of the population to use semi-good-crypto. Far more than that just do not give any damns at all. Legal protection for those that make insecure shit is so huge that society is literally stacked against privacy as a whole. The "protected-consumer" culture has led to widespread market failure - rather than think, people buy with their hearts. Simply put, "Think Different" turned into "Don't think at all".

That said, I don't see why there's no company attempting to address the niche of "I want it truly secure". Wouldn't governments like it if the US didn't spy on them? Wouldn't large companies' officers be very happy with a secure e-mail/voice call system?

If it runs Android apps (protip: Android's JVM is open source) in a more secure manner (like, uhm, in-hardware sandboxing? A libre hypervisor CPU with the OS on it, and a jailed EvilCorp coprocessor that does the Android stuff?), it doesn't seem to take that much to build a smartphone nowadays (looking at Chinaphones, that is).

And many of the people that want it "truly secure" will understand that the products will cost more than a mass-produced, NSA-sponsored unit.
My replies below. On 08/02/2015 08:54 AM, Lodewijk andré de la porte wrote:
We will not get 50% of the population to use semi-good-crypto. Far more than that just do not give any damns at all.
This is probably true... on the other hand, looking at the flip side of this (referring to the software side: TextSecure, and the TextSecure-based code implemented in WhatsApp at massive scale), it has worked for people on a massive scale who don't give much of a damn but who do care enough to use software with a privacy label that is easy to use. And I think that works to the benefit of the public. This makes me think that if the hardware is similarly easy to use (or at least easier for the user), then it will be easier to present the open source hardware as a benefit. The fact is that the OSH/SC (open source hardware / software communities) really have kind of sucked at marketing. That should change.
Legal protection for those that make insecure shit is so huge that society is literally stacked against privacy as a whole.
Agreed.
The "protected-consumer" culture has led to widespread market failure - rather than think people buy with their hearts. Simply put, "Think Different" turned into "Don't think at all".
That said, I don't see why there's no company attempting to address the niche of "I want it truly secure". Wouldn't governments like if the US doesn't spy on them? Wouldn't large companies' officers be very happy with a secure e-mail/voice call system?
If it runs Android apps (protip: Android's JVM is open source) in a more secure manner (like, uhm, in-hardware-sandboxing? Libre Hypervisor CPU with the OS on it, and a jailed EvilCorp coprocessor that does the Android stuff?) it doesn't seem to take that much to build a smartphone nowadays (looking at Chinaphones, that is).
And, many of the people that want it "truly secure" will understand that the products will cost more than a mass-produced NSA sponsored unit.
This makes a lot of sense, and I'm guessing that the smaller scale projects we've seen so far indicate that this can happen at small to medium scale without problem, for specific clients or an anticipated customer base. Here are some successful examples:

http://wiki.openmoko.org/wiki/Main_Page (refer also to http://bb.osmocom.org/trac/wiki/Hardware/Phones and http://bb.osmocom.org/trac/)

https://www.crowdsupply.com/sutajio-kosagi/novena
http://vr-zone.com/articles/novena-open-laptop-project-exceeds-280-percent-crowd-funding-target/77905.html

TBD, but looks pretty good (same concept as Novena): https://puri.sm/

And of course, see: https://www.fsf.org/resources/hw/endorsement/respects-your-freedom

These are by no means the only examples of open source hardware being marketed to a larger public, albeit at a limited production scale for the time being. It definitely is a market in need of attention.
On 8/2/15, Lodewijk andré de la porte <l@odewijk.nl> wrote:
We will not get 50% of the population to use semi-good-crypto.
Assuming you define TBB as semi-good crypto, the block (AIUI) to Mozilla incorporating Tor plugins/mods into Firefox by default is simply that the Tor network cannot scale Internet-wide. Yet. Because of course, you know, then we'd all be secure.

But what do you define as semi-good crypto?
Far more than that just do not give any damns at all.
Agreed. But they might give some dollars for shiny new handset.
Legal protection for those that make insecure shit is so huge that society is literally stacked against privacy as a whole. The "protected-consumer" culture has led to widespread market failure - rather than think people buy with their hearts. Simply put, "Think Different" turned into "Don't think at all".
You think you think? Bah! Thinking's overrated.
That said, I don't see why there's no company attempting to address the niche of "I want it truly secure".
Because there's no such thing as "truly secure", and marketing to the "I want it truly secure" crowd would likely result in deception, which, due to the overly paranoid nature of such a crowd, would quickly be deconstructed as bullshit anyway.
Wouldn't governments like if the US doesn't spy on them?
Making a lot of assumptions there - such as "there are few enough politicians who are compromised (with sex or money), that the rest can stand strong in the face of the few". Although the very recent French and Italian delegations to Crimea suggest at least some are capable of on the ground analysis in the face of the rest of their respective parliaments. Who knows, next we might see Greece running a referendum so the people can decide what their government should be doi... oh, hang on...
Wouldn't large companies' officers be very happy with a secure e-mail/voice call system?
And a way to get rid of those compromising images held by their respective national governments...
If it runs Android apps (protip: Android's JVM is open source) in a more secure manner (like, uhm, in-hardware-sandboxing? Libre Hypervisor CPU with the OS on it, and a jailed EvilCorp coprocessor that does the Android stuff?) it doesn't seem to take that much to build a smartphone nowadays (looking at Chinaphones, that is).
<slaps forehead> Of course! We've all missed that obvious for so long - put JVM in a secure open source hypervisor and the world will be safe. That would even encourage me to be less sarcastic.
And, many of the people that want it "truly secure" will understand that the products will cost more than a mass-produced NSA sponsored unit.
Perhaps Google could kick this project off - they already own Android and they must have the dollars. In fact Intel could produce the CPU, Foxconn can build the handsets, Google can install the software and the NSA can deliver them to us^B^B^B sorry, the Postal Service can deliver them to us. We have a plan for a better world. It starts with money. Money and trust. Trust. But not without money. More money. And technology. Technology and money. I'll leave you to contact Goog to start this ball rolling.
ok zenaan ur way 2 intelligent for everyone ...we get it but....
why can't you yak your head (while slapping it) off about a solution?
On Sun, 2 Aug 2015 23:22:43 +0000 Zenaan Harkness <zen@freedbms.net> wrote:
Perhaps Google could kick this project off - they already own Android and they must have the dollars. In fact Intel could produce the CPU, Foxconn can build the handsets, Google can install the software and the NSA can deliver them to us^B^B^B sorry, the Postal Service can deliver them to us.
We have a plan for a better world. It starts with money. Money and trust. Trust. But not without money. More money. And technology. Technology and money. I'll leave you to contact Goog to start this ball rolling.
Great! Finally the cypherpunks can start writing code and forget all the off-topic political ramblings.
aaaaahhhh i remember now seeing this shit before, about one major reason why tor sux and i dont fucking want to use it, and i desperately want smart brains to think up something new ... many ... different new things please, not just one: https://youtu.be/qXajND7BQzk?t=27m30s

fuck the state in all its incarnations
2015-08-03 8:22 GMT+09:00 Zenaan Harkness <zen@freedbms.net>:
If it runs Android apps (protip: Android's JVM is open source) in a more secure manner (like, uhm, in-hardware-sandboxing? Libre Hypervisor CPU with the OS on it, and a jailed EvilCorp coprocessor that does the Android stuff?) it doesn't seem to take that much to build a smartphone nowadays (looking at Chinaphones, that is).
<slaps forehead> Of course! We've all missed that obvious for so long - put JVM in a secure open source hypervisor and the world will be safe. That would even encourage me to be less sarcastic.
JVM in a proper jail would indeed solve many of our problems. If you're saying we can't develop a proper jail, then, well, maybe your sarcasm can shield us from the future instead. If some variant of "but then there's this" is on your tongue: yes, there are many other issues too. I'm pretty sure they can all be dealt with.
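The software half of that is already something anyone can play with today; a rough sketch using plain Linux namespaces (app.jar is a stand-in for whatever you want to contain, and this is obviously not the in-hardware jail I'm talking about, just the same idea one layer up):

# give the JVM its own user/pid/mount namespaces and an empty network namespace
unshare --user --map-root-user --fork --pid --mount-proc --net \
    java -Xmx64m -jar app.jar
# inside, the process sees no network and only its own pids; a real jail
# would add seccomp filters, a read-only filesystem, resource limits, etc.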
And, many of the people that want it "truly secure" will understand that the products will cost more than a mass-produced NSA sponsored unit.
Perhaps Google could kick this project off - they already own Android and they must have the dollars. In fact Intel could produce the CPU, Foxconn can build the handsets, Google can install the software and the NSA can deliver them to us^B^B^B sorry, the Postal Service can deliver them to us.
I don't really know what kind of satire you're playing here :( If I squint I can see an argument along the lines of "but I won't trust you either" - which is entirely destructive.

In the Bitcoin world there are some attempts to make trust less needed, but still deliver services. A 2-out-of-3 multisig wallet, where the service holds one key, you hold another, and a trusted third party (a notary of your choosing, your safe, whatever) holds a key *you generated and gave that party*, is a rather good way, for example, to securely store your money.

Now you can run off screaming "LOL U R USING NSA CYRPTO WHAHA" and continue eating rocks instead, but try being clearer and less "funny" while doing so.
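Concretely (just a sketch; the angle-bracketed keys are placeholders for real compressed public keys, one from you, one from the service, one you generated for your chosen third party):

# a 2-of-3 P2SH address: any two of the three keys can spend, no single
# party can move the coins alone
bitcoin-cli createmultisig 2 '["<your_pubkey_hex>","<service_pubkey_hex>","<escrow_pubkey_hex>"]'
# returns the address to fund plus the redeemScript; keep the redeemScript,
# you'll need it (and two signatures) to spend later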
We have a plan for a better world. It starts with money. Money and trust. Trust. But not without money. More money. And technology. Technology and money. I'll leave you to contact Goog to start this ball rolling.
Your comments are neither in-depth nor self-evident.

I'm not going to expand on the virtues and evils of capitalism. It suffices to say that money is here to stay, and we had better make good goddamn use of it. We'll need it to live well, and to allow others to live well while supporting the cause. I think it's better to deserve (earn) the money than to beg for it. If capitalism is a remotely sane system, that must be possible. Feel free to propose an alternative.

I talked about trust a bit before. There's always going to be some trust. If you're reasonable about the amount and type of trust, it's entirely workable.

Technology is the mantra here. If you want it otherwise, go elsewhere.
On 8/3/15, Lodewijk andré de la porte <l@odewijk.nl> wrote:
2015-08-03 8:22 GMT+09:00 Zenaan Harkness <zen@freedbms.net>: ...<things agreed>
We have a plan for a better world. It starts with money. Money and trust. Trust. But not without money. More money. And technology. Technology and money. I'll leave you to contact Goog to start this ball rolling.
Your comments are neither in depth nor self evident.
I'm not going to expand on the virtues and evils of capitalism. It suffices to say that money is here to stay, and we better make good goddamn use of it.
I agree. Thank you for bringing things down to earth - that is in fact a much more useful approach to (pick an adjective of) comedy in most circumstances.
We'll need it to live well, and allow others to live well while supporting the cause.
In general, and for most people, yes. For many years we have occasionally seen large endowments (in the order of, say, $1 million USD) to fund developers for this or that. A lurking thought for me is that this money could fund, say, one house, and the interest on the remainder could pay for rates, internet and electricity. Then somebody could live in that house, grow their food, and have most of their time for programming without having to find rent each week. Yes, some money is still needed, but a lot less than in a "normal rental" situation. There might be better futures still, of course - this has just been a self-posed question of "how can we harness resources better in general, so our results/efforts are more long lasting".
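Back of the envelope, with every number below being an assumption just to show the shape of it:

# say a $1M endowment, $600k into the house, remaining $400k earning ~4%/yr
echo $(( (1000000 - 600000) * 4 / 100 ))    # -> 16000 per year
# call it roughly $300/week for rates, internet and electricity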
I think it's better to deserve (earn) the money than to beg for it. If capitalism is a remotely sane system that must be possible. Feel free to propose an alternative.
Exchange of human energy using money as facilitator is the extremely dominant reality right now, so we must work with it. One of the greatest insights into "a new reality" that the Free Software movement shows us (as compared with traditional capitalism) is that many of us humans like to contribute our talents and skills back to the community and don't require proprietary land and profit maximisation. Many of course are happy to develop libre software for a wage; some "hard core" programmers have sacrificed significant monetary income in order to build something truly free and community based - witness Paul Davis of ardour.org fame, and Tom Lord's arch version control back in the day - these and others have made personal sacrifices which very few ordinarily would. So how might we expand the comfort zone for those willing to make some sacrifices? What other transitions (slow movement away from base capitalism) might make sense?
I talked about trust a bit before. There's always going to be some trust. If you're reasonable about the amount and type of trust it's entirely workable.
Agreed. Thanks again, Zenaan
On 08/02/2015 04:13 AM, odinn wrote:
Not sure if someone already mentioned this, but what of opencores.org ?
It's been mentioned a few times. It's what usually starts the "Can't trust the FPGAs" / "Can't trust the gate synthesis software" cycle. I got tired of talking about it; it's more efficient to actually play around with the cores and figure out how to get the best use of them.

-- The Doctor [412/724/301/703/415] [ZS]
On Tuesday, 28 July 2015 at 21:27:44, grarpamp wrote:
You don't need a $50B 1Msqft setup to start making chips, you need a floor in a warehouse and some people who believe.
Testify, brother!

-- Regards, Michał "rysiek" Woźniak
GPG Key Transition :: http://rys.io/en/147
On 07/28/2015 06:27 PM, grarpamp wrote:
Universities have always had fabs too.
The Rochester Institute of Technology should still have theirs online. Fun times.
Can you as a reasonably learned hobbyist go observe an Intel run through GloFab from start to finish? What about for the chip inside your phone? Your makerbot? Your RPi? Your USRP? Your own mask? No? Well... that's a problem.
Open designs that operate much the same way are going to take a while to get up to par, too. There are good FPGA designs, but then the "open gate synthesis" holy wars kick into high gear. Anybody got any popcorn? I'm out.

-- The Doctor [412/724/301/703/415] [ZS]
participants (16):
- Cari Machet
- grarpamp
- jim bell
- Juan
- Lodewijk andré de la porte
- Mirimir
- odinn
- oshwm
- Riad S. Wahby
- rysiek
- Sean Lynch
- stef
- Steve Kinney
- The Doctor
- xcelq
- Zenaan Harkness