On 15/07/15 08:44, grarpamp wrote:
dave@horsfall.org wrote:
So, is there anything that could benefit from a few parallel teraflops here and there?
On Tue, Jul 14, 2015 at 12:27 PM, Ray Dillinger <bear@sonic.net> wrote:
Or you could apply static code analysis software to the huge masses of existing operating system, device driver, plugin, email-client, or god-help-us browser code in wide use and see if you can't spot instances of dangerous vulnerabilities like buffer overflows. A list of known errors would be very helpful in getting code up to 'bulletproof' reliability, and no one runs ALL the possible static analysis we know about on large bodies of code because it takes too long on regular computers.
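To make that concrete: the sort of thing those analyzers are built to flag looks like the invented fragment below. The function and buffer names are made up, but an unbounded strcpy() into a fixed-size stack buffer is the textbook finding that lexical scanners like flawfinder, or checkers like cppcheck, will report:

  /* Invented example of a classic stack buffer overflow: the
   * destination is 32 bytes, the source length is never checked. */
  #include <string.h>

  void parse_name(const char *input)   /* hypothetical function */
  {
      char name[32];
      strcpy(name, input);   /* flagged: input may exceed sizeof(name) */
  }

Running such a tool over this file produces a warning in seconds; the hard part Ray describes is doing that, with every analysis we know of, across whole operating systems' worth of code.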
This, and fuzzing... of all the open-source OSes and all the ported packages they supply. And dump all of GitHub in it for fun.
FYI, the AFL fuzzer already has an impressive trophy case: see "The bug-o-rama trophy case" at http://lcamtuf.coredump.cx/afl/
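For anyone who wants to try it, an AFL target is just an ordinary program that reads the fuzzed input from stdin or from a file argument. A minimal sketch follows; the parse_input() stub and its magic "FUZZ" prefix are placeholders standing in for whatever real parser you want to test:

  /* build:  afl-gcc -o target target.c
   * run:    afl-fuzz -i testcases/ -o findings/ ./target @@
   * (afl-fuzz substitutes each generated input file for @@;
   *  without @@ it feeds the input on stdin instead)
   */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Stand-in for the parser under test; it crashes on a magic
   * prefix so the fuzzer has something to find. */
  static void parse_input(const char *buf, size_t n)
  {
      if (n >= 4 && memcmp(buf, "FUZZ", 4) == 0)
          abort();
  }

  int main(int argc, char **argv)
  {
      FILE *f = (argc > 1) ? fopen(argv[1], "rb") : stdin;
      if (!f)
          return 1;

      char buf[4096];
      size_t n = fread(buf, 1, sizeof(buf), f);
      parse_input(buf, n);

      if (f != stdin)
          fclose(f);
      return 0;
  }

The trophy-case bugs were found in essentially this way, just with real parsers in place of the stub.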
It takes too long, eats too much developer time, needs a different skillset, open-source test suites may not yet cover some areas that commercial ones do, etc.
Ripe for development of an open perpetual audit project.
That, and printing your own open and trusted chips in your own open and trusted fab, are possible now. It's big-picture, grand-slam, full-circle headiness, but it is doable. People just have to get together and kick it off.