Continuing to think about this, an analogy presents itself. If I tell you a secret after getting your agreement that you will not yourself tell anyone else, then I am trusting in non-recursive disclosure, i.e., I trust you to break the chain and not to fail to do so. If I place my execution or my storage in the hands of others, then I am trusting in non-recursive propagation of my code and/or my data.

If the pinnacle goal of security engineering is "no silent failure," then a dependence on non-recursive exposure of execution or storage is resolved either by blind trust or by a degree of surveillability sufficient to prevent silent breaking of the non-recursion constraint. But what would that be? Is this a kind of supply-chain argument that devolves to whether a target is or is not big enough to sue? If I have proven, workable recourse, then perhaps I can trust -- which is to say I can then choose to take no additional, proactive countermeasures. If I do not have proven, workable recourse, then how can I prevent not just silent failure but silent failure plus a clean getaway even post-discovery?

Daniel Solove suggested that the greatest danger to privacy is a blithe "I live a good life and have nothing to hide"; so, in parallel, is not the greatest danger to data integrity a parallel construction, something like "no one would want to screw with my cloud, I'm just a nobody"?

Thinking out loud; no need to answer,

--dan
On 7/21/15 11:01 AM, dan@geer.org wrote:
> Continuing to think about this, an analogy presents itself. If I tell you a secret after getting your agreement that you will not yourself tell anyone else, then I am trusting in non-recursive disclosure, i.e., you break the chain and I trust that you will not fail to do so.
>
> If I place my execution or my storage in the hands of others, then I am trusting in non-recursive propagation of my code and/or my data. If the pinnacle goal of security engineering is "No silent failure," then creating a dependence on non-recursive exposure of execution or storage is resolved either by blind trust or by a sufficient degree of surveillability that prevents silent breaking of the non-recursion constraint. But what would that be? Is this a kind of supply chain argument that devolves to whether a target is or is not big enough to sue? If I have proven, workable recourse, then perhaps I can trust -- which is to say I am able to then choose to take no additional, proactive countermeasures. If I do not have proven, workable recourse, then how can I prevent not just silent failure but silent failure plus a clean getaway even post-discovery?
>
> Daniel Solove suggested that the greatest danger to privacy is a blythe "I live a good life and have nothing to hide;" so, in parallel, is not the greatest danger to data integrity something of a parallel construction, something like "No one would want to screw with my cloud, I'm just a nobody"?
>
> Thinking out loud; no need to answer,
>
> --dan

+1
There are multiple possible avenues of assurance: architecture, audit, obfuscation, canaries, etc. Perhaps encrypted computing will be useful; encrypted storage is already relatively easy to use in at least some circumstances (object stores, backup).

If billions of lightweight, container-based compute transactions are flowing through a system that pools payment and has secure distributed storage and communication, is it possible to be too obscure to identify and tap? Spammers and scammers practice this kind of thing daily, and countermeasures are being created too; but for most of that there is a final traceable step (email, etc.), which is not quite the same as some other private security goals.

sdw
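As a concrete illustration of the "no silent failure" goal applied to outsourced storage: if the client keeps a secret MAC key locally and stores only the data (or its ciphertext) with the custodian, any modification by the custodian becomes detectable on retrieval. A minimal sketch in Python using only the standard library; the names and the stand-in "download" step are illustrative, not any particular provider's API:

```python
import hashlib
import hmac
import os

def integrity_tag(key: bytes, blob: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag the storage provider cannot forge
    without the client's key."""
    return hmac.new(key, blob, hashlib.sha256).digest()

# Key is generated and kept locally; it is never uploaded.
key = os.urandom(32)

blob = b"data handed to the cloud"
local_tag = integrity_tag(key, blob)   # retained by the client

# Later, on retrieval (stand-in for an actual download):
retrieved = blob
assert hmac.compare_digest(integrity_tag(key, retrieved), local_tag)

# Any tampering by the custodian fails verification -- the failure
# is no longer silent.
tampered = retrieved + b"!"
print(hmac.compare_digest(integrity_tag(key, tampered), local_tag))  # prints False
```

This only detects tampering after the fact; it does not prevent it, and it says nothing about confidentiality (for that, encrypt before upload as well, e.g. encrypt-then-MAC). But it converts a silent integrity failure into a visible one, which is the property the thread is circling.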
participants (2)
- dan@geer.org
- Stephen D. Williams