an ominous comment

dan at geer.org
Tue Jul 21 11:01:05 PDT 2015


Continuing to think about this, an analogy presents itself.
If I tell you a secret after getting your agreement that you
will not yourself tell anyone else, then I am trusting in
non-recursive disclosure, i.e., the chain of disclosure ends
with you, and I trust that you will not fail to end it.

If I place my execution or my storage in the hands of
others, then I am trusting in non-recursive propagation of
my code and/or my data.  If the pinnacle goal of security
engineering is "No silent failure," then a dependence on
non-recursive exposure of execution or storage is resolved
either by blind trust or by a degree of surveillability
sufficient to prevent silent breaking of the non-recursion
constraint.  But what would that be?  Is this a kind of
supply chain argument that devolves to whether a target is
or is not big enough to sue?  If I have proven, workable
recourse, then perhaps I can trust -- which is to say I can
then choose to take no additional, proactive
countermeasures.  If I do not have proven, workable
recourse, then how can I prevent not just silent failure
but silent failure plus a clean getaway, even
post-discovery?
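
To pin down what I mean, a toy sketch (a sketch only, with
invented names, not a description of any real system): I
hand data to custodians, each agrees not to pass it on, and
the difference between trust and blind trust is whether an
onward disclosure is auditable at all rather than silent.

    # Toy model of the non-recursion constraint: data handed
    # to a custodian must not propagate further, and any
    # further propagation should be visible to an audit
    # rather than silent.  All names are invented.
    from dataclasses import dataclass, field

    @dataclass
    class Custodian:
        name: str
        shared_with: list = field(default_factory=list)

        def disclose_to(self, other: "Custodian") -> None:
            # An onward disclosure; forbidden under the
            # non-recursion constraint.
            self.shared_with.append(other)

    def audit(custodians: list) -> list:
        # Return every custodian that broke the constraint,
        # i.e., passed the data on to anyone else.
        return [c.name for c in custodians if c.shared_with]

    # I hand my data to two custodians and trust each one to
    # break the chain.
    alice = Custodian("alice-hosting")
    bob = Custodian("bob-backup")

    # A silent failure: bob quietly re-shares with a
    # subcontractor.
    bob.disclose_to(Custodian("carol-subcontractor"))

    # Surveillability is what turns the silent failure into
    # a visible one.
    print(audit([alice, bob]))   # -> ['bob-backup']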

Daniel Solove suggested that the greatest danger to privacy
is a blithe "I live a good life and have nothing to hide;"
so is not the greatest danger to data integrity an analogous
construction, something like "No one would want to screw
with my cloud, I'm just a nobody"?

Thinking out loud; no need to answer,

--dan



