On Fri, Dec 29, 2000 at 02:18:10AM -0500, dmolnar wrote:
> Yes, BUT I think one of the reasons why a maximally powerful adversary model is so appealing is that it sidesteps the question of evaluating "value of what is being sent through remailers."
The other reason it's appealing is that it lets academics consider their work finished once they've constructed a logical argument or proof, without considering implementation details - the "maximally powerful adversary model" in practice seems to include a maximally competent implementer of the design under consideration. That's great for academics, but in the real world, software and hardware systems have defects - frequently, defects with security implications.
> If you can prove security against a maximally powerful adversary, then you don't have to answer that question - no matter how much it's worth to the adversary, it won't win.
Yes, for attacks against the strong part of the system - but that's not what sensible attackers go after. The "maximal adversary" imagined is apparently a very gentle and polite one, who can only operate on network wires, but won't consider physical penetration or torture, and in some models won't even subvert the security of the machines hosting the system.
> This is *not* to discourage an economic analysis, but to point out a potential benefit to the "modern" approach. It wouldn't be much of a benefit, EXCEPT that in encryption and digital signatures, we have actually been able to achieve security against maximal adversaries (or at least probabilistic polytime ones assuming some problems are hard).
But - several, if not many, times - the security we've achieved has been broken because of implementation errors on the part of creators, installers, or users. Consider the computing power assembled for the DES or RC5 cracks applied instead to dictionary attacks against a PGP keyring or SSH keyfile. How long until the average user's passphrase is recovered?

--
Greg Broiles
gbroiles@netbox.com
PO Box 897
Oakland CA 94604
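P.S. A rough sketch of the sort of dictionary attack described above, assuming a passphrase-encrypted PEM keyfile, a plain wordlist, and Python's "cryptography" package - the filenames, the library choice, and the helper name are illustrative only, not anything from the discussion:

  # Sketch of a dictionary attack on a passphrase-encrypted PEM private key.
  # Assumes a wordlist with one candidate passphrase per line.
  from cryptography.hazmat.primitives.serialization import load_pem_private_key

  def try_passphrases(key_path, wordlist_path):
      # Read the encrypted key once, then try each candidate against it.
      with open(key_path, "rb") as f:
          key_data = f.read()
      with open(wordlist_path, "r", errors="ignore") as wordlist:
          for line in wordlist:
              candidate = line.strip()
              if not candidate:
                  continue
              try:
                  # Decryption only succeeds with the correct passphrase.
                  load_pem_private_key(key_data, password=candidate.encode())
                  return candidate
              except (ValueError, TypeError):
                  continue  # wrong passphrase; keep going
      return None

  if __name__ == "__main__":
      hit = try_passphrases("id_rsa.pem", "wordlist.txt")
      print("recovered:", hit if hit else "no match in wordlist")

Each guess costs one key-decryption attempt, so the real work is enumerating plausible passphrases - the kind of embarrassingly parallel workload the DES and RC5 efforts showed can be thrown at cheap hardware.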