On Monday 30 June 2003 20:59, Morlock Elloi wrote:
> There is no such thing as "automatic security." That's an oxymoron.
>
> Any system that is "secure" without the ongoing burn of end-user brain
> cycles is subject to more-or-less easy subversion [a corollary of this
> is that "masses" will never be in situation to be both (1) end users
> and (2) secure. One can be a product and secure at the same time
> without effort, though.]
Another corollary of your statements is that we can't have an AI monitoring Joe User's system to maintain security. No matter how smart a consumer-grade AI is, you have to assume the attackers will have AIs at least as smart, and dedicated to tricking the defensive AIs. The same applies to human users, of course, but humans are more unpredictable than a security AI is likely to be, and can be held responsible if they're tricked; if the security AI is tricked, the vendor might be held liable.

Too bad; I've about come to the conclusion that Joe User is too dumb (ignorant, inattentive, careless; in a word, dumb) to secure his systems, and doesn't think it worth paying someone to do it for him. That's a bummer, because no one is going to trust an electronic wallet on a machine which has a 50% chance of being 0wn3d in any given month. I'd been thinking that programs might soon get smart enough to handle Joe's security work, but as a result of your message I'm less confident than I was.

SRF

-- 
Steve Furlong    Computer Condottiere    Have GNU, Will Travel

"If someone is so fearful that, that they're going to start using their
weapons to protect their rights, makes me very nervous that these people
have these weapons at all!" -- Rep. Henry Waxman