On 30/07/15 23:11, The Doctor wrote:
On 07/30/2015 11:42 AM, grarpamp wrote:
because you can exhaustively test their logic. On the other hand, how do you know that once you connect enough of them to each other, their secret gates inside don't sense each other and activate? Since you're that close to the stone age anyway, why not go one more step back, to relays and core memory?
So what you're basically saying is that the entire tech stack, all the way back to the far edge of electromechanical information processing, is completely untrustworthy. There is no way to trust anything whose logic gates we can't actually see with the naked eye, which would put us... where? Maybe tens of computations per second, at most? A little more (but not much)?
Fuck it. Time to go home, everyone. They Won.
That is spot on: we can't trust any of it, and most people would concede that we have lost the battle. So (in my fairly inexperienced opinion in this field) there are possibly two options:

1) Re-invent the last 65 years of computing. Not impossible, and the knowledge exists amongst average tinkerers, but it might take us 10-20 years to catch up, using (potentially massively) parallel processing early on to reach speeds that were not common at the equivalent tech levels in the past.

2) Work out whether we can build trustworthy systems out of untrustworthy components. Is there a way to run the same operation on multiple components and compare their outputs to see if any of them are not trustworthy? Or to identify untrustworthy results from operations and discard them, favouring trustworthy ones? A rough sketch of that voting idea follows below.
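To make option 2 concrete, here's a rough Python sketch of the voting idea: run the same computation on several mutually distrusted components, take the majority answer, and flag whoever dissents. The component functions and the trigger value are invented for illustration; real "components" would be separate chips from different vendors and fabs, not Python functions.

# Option 2 sketch: majority-vote the same computation across
# independent, mutually distrusted components. The three
# "components" here are hypothetical stand-ins for separate chips.
from collections import Counter

def component_a(x: int) -> int:
    return x * x          # honest implementation

def component_b(x: int) -> int:
    return x ** 2         # independently written, also honest

def component_c(x: int) -> int:
    # a compromised component that lies on one trigger value
    return 42 if x == 7 else x * x

COMPONENTS = {"a": component_a, "b": component_b, "c": component_c}

def voted_result(x: int):
    """Return (majority_result, dissenting_component_names)."""
    results = {name: fn(x) for name, fn in COMPONENTS.items()}
    majority, count = Counter(results.values()).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError(f"no majority for input {x}: {results}")
    dissenters = [n for n, r in results.items() if r != majority]
    return majority, dissenters

if __name__ == "__main__":
    print(voted_result(3))   # (9, [])      -- all agree
    print(voted_result(7))   # (49, ['c'])  -- 'c' is outvoted and flagged

This is basically triple modular redundancy applied to trust rather than reliability. The obvious caveat, per grarpamp's point above: if every component ships the same secret gates, they will all agree on the same wrong answer, so the whole scheme leans on genuine diversity of supply.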