
(originally sent to a bunch of places, but I misspelled "cypherpunks" in the headers...)
(operating on generator power after Hurricane Georges struck Anguilla...)

The way I see it, there are eight separate issues here, which should be addressed separately. The following are my summaries of the issues, using the information I have available -- I have never seen any confidential Arcot data, so I'm operating entirely from public sources. It's entirely possible my data or assumptions are incorrect -- I'd be very happy to have someone correct me.

1) The Arcot system uses the term "software smartcard" repeatedly throughout its marketing literature, and says that it provides security similar to smartcards. However, even the technical paper recently posted to the web by Arcot admits it does not provide resistance to a wide class of attacks, including the two Stefan Brands mentioned:

* monitoring attacks on the CPU
* end users reverse-engineering the system

I have issues with this use of the term "smartcard". In fact, this system has *none* of the desirable properties of a smartcard, as users can freely copy the authenticator. It is, at best, equivalent to an encrypted certificate on disk which can only be checked for validity online (no local checksums, no structure to the cert, just a raw random string). The phrase "two-factor authentication", also used throughout the marketing literature, is being used improperly -- a remembered password plus a string of data stored on disk (which is all the ArcotSign authenticator is) is no more "two-factor" than a remembered password plus an encrypted certificate on disk. It's an open point whether both systems or neither are two-factor, but it's pretty clear that the two are equivalent.
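To make that equivalence concrete, here is a minimal sketch (Python, purely illustrative -- Arcot has not published its format, so every name and construction below is my assumption) of an authenticator that is nothing but a raw random string encrypted under a passphrase-derived key, with no local structure to check against:

    import hashlib
    import os

    def derive_key(passphrase):
        # hypothetical key derivation; a real system would use a salted, iterated KDF
        return hashlib.sha256(passphrase.encode()).digest()

    def make_authenticator(passphrase):
        secret = os.urandom(32)                              # the raw random authenticator
        key = derive_key(passphrase)
        stored = bytes(a ^ b for a, b in zip(secret, key))   # what ends up on disk
        return stored, secret                                # the server keeps `secret`

    def open_authenticator(stored, passphrase):
        # *any* passphrase yields a plausible-looking 32-byte string; nothing local
        # distinguishes right from wrong, so validity can only be checked online
        key = derive_key(passphrase)
        return bytes(a ^ b for a, b in zip(stored, key))

The point of the sketch is that the stored blob is an ordinary file: it can be copied freely, and possession of it is a second password, not a second factor.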
2) Scott Loftesness, the new CEO of DigiCash, proposed this system as potentially a viable alternative to hardware tamper resistance for electronic finance systems (DBS systems). The threat model for real electronic finance systems (including and especially DBS) is in fact substantially more serious, as the recent attack on the Rabobank Windows client demonstrates. Once your customers *can* have money stolen by a third party, it's up to you to prove they *haven't*, at least under US and other law, for small customers. This means only institutions capable of absorbing massive fraud (in absolute terms) can field real electronic cash systems. This is primarily a debate about threat models for a specific application, rather than a technical one.

3) The system appears to use public-key cryptography for no reason -- it is a closed system, like Kerberos, and only limits itself by using PKC. Kerberos, developed by some list participants *years* ago, appears to solve every problem Arcot claims to solve. Additionally, Kerberos (in some form) is now being integrated into MS Windows NT, so it is widely available.

4) Bruce Schneier, one of the firm's technical advisors, says "it's not [sufficient for] online real estate [presumably meaning high-value transactions with no recourse]", but that it is for "ninny net users in chat rooms". The applications Bruce Schneier seems to propose the Arcot system for ("systems like AOL" (read: porn sites on the WWW)) are more concerned with their users not being able to share passwords than with anything else. For that application, simple IP-based restrictions in addition to passwords are sufficient, as are other means of ensuring an authenticator is linked to an individual. The Arcot system does not address this issue (even though it is intended for closed networks, where such functionality would be entirely possible). However, the firm's marketing information seems to be selling this to corporate web sites, for protecting high-value corporate data. I find it highly unlikely any board of directors would consider their proprietary corporate data closer to "ninny net users in chat rooms" than to something requiring real security. (Even people without real security requirements often want high security, or at least systems with no known limitations -- I use high-grade encryption to protect my credit card information, even though my liability is limited to $50, and many other users are unwilling to use 40-bit crypto to protect their credit card numbers.)

5) The point was raised that in a smartcard system, you don't know what you're signing. I agree completely. However, in a smartcard system you can protect the *keys* to a far higher level than you can protect the *use* of the keys while the card is physically present -- and misuse of a key you cannot extract is a substantially lower level of risk, particularly if a security policy is built into the smartcard, such as only disbursing a limited amount of money per unit time, monitoring suspicious transactions, or handling user authentication through a strong external channel, such as a PIN pad attached directly to the smartcard reader. The earlier debate on smartcards vs. tamper-resistant computers with I/O leads me to believe that for electronic cash in reasonable value ranges (>$1000), you most likely want to go with *more* than a smartcard (as in, a tamper-resistant computer), but you certainly don't want to go with *less*. For many applications, a combination of trusted time (on the smartcard) and a trusted smartcard (and perhaps location) would be sufficient -- a user could then be required to prove they were not at a certain location at a certain time in order to absolve themselves of responsibility for a transaction made there.

6) "Hardware is expensive."

So is software. The cost of implementing any authentication mechanism, particularly in a closed system, is relatively constant -- from passwords through cheap hardware -- as most of the cost is in administrative overhead, user training, etc. I'd far rather go with an $8 Dallas Semiconductor iButton plus $0-$100 of software plus $400/user in training and other costs than $0 of hardware plus $x of software plus the same training cost. In many systems, hardware is *cheaper* in the long run, as users understand it better, are better at doing their own key management, etc. It is exactly in closed systems that hardware solutions *shine* compared to software, even if they provide the *same* security in absolute terms -- the hardware solution has lower life-cycle costs. For an open system, or one where you have minimal control over the users (such as AOL, or a porn WWW site, or whatever), the requirement that software be added to the local machine is usually prohibitively expensive. In that kind of application, all of the intelligence needs to go on the server -- an online password-checking system, coupled with standard browser features like client certificates, is about the most you can expect, and even that is pushing it (until MIT did cert support for Lynx, many of my machines were unable to use certificates at all).

7) "The system is patent-pending."

I do not see how the system is provably better, in security or any other worthwhile feature, than the following widely deployable, standard, free system:

* online checking of user passwords, optionally changing under some kind of one-time-pad scheme, passed over an SSL-encrypted link to the server, such that if password-guessing attempts are made, the account can be locked out
* encrypted local certificates, optionally encrypted with a different passphrase, which do *not* need to be kept secret
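A minimal sketch of the server side of such a system (again purely illustrative; the names, lockout threshold, and hashing details are my assumptions, and the SSL link and local certificate handling are left out):

    import hashlib
    import hmac
    import os

    MAX_FAILURES = 5   # assumed lockout threshold

    class OnlinePasswordChecker:
        def __init__(self):
            self.users = {}      # username -> (salt, password hash)
            self.failures = {}   # username -> consecutive failed guesses
            self.locked = set()

        def enroll(self, username, password):
            salt = os.urandom(16)
            self.users[username] = (salt, hashlib.sha256(salt + password.encode()).digest())
            self.failures[username] = 0

        def check(self, username, password):
            # every guess has to come through the server, so guessing is visible
            # and the account can be locked out -- this is what prevents the
            # offline attacks Arcot is worried about
            if username in self.locked or username not in self.users:
                return False
            salt, expected = self.users[username]
            candidate = hashlib.sha256(salt + password.encode()).digest()
            if hmac.compare_digest(candidate, expected):
                self.failures[username] = 0
                return True
            self.failures[username] += 1
            if self.failures[username] >= MAX_FAILURES:
                self.locked.add(username)
            return False

The encrypted local certificate would simply be presented alongside the password; since all verification happens online, it needs no special local protection.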
The system above provides "two-factor" authentication to the same extent as the Arcot system, is easy to implement using standard components, and is conceptually simple. If the Arcot system is no better than the above (which I believe), and the above is *not* patentable, I fail to see why the question of whether Arcot's system can be patented is important. The only parts which could be patented (as far as I can tell) are incidental, not the essential functionality: quasi-two-factor authentication with online password checking to prevent offline attacks.

8) The various IP/patent/etc. issues

A company could implement open-standards-based authentication, in an open way, publish papers on every detail of its system, etc., and still make quite a bit of money. It would do this through the quality of its *software engineering*, not the quality of its idea. Ideas are expensive to create, and impossible to keep secret, even with patents. Once you come up with an idea, most of what gives it value is the review process itself, not the initial idea. That's why the valuable ideas I can think of, at least in crypto, have mainly come from universities and open research labs -- they can be reviewed more cheaply. Only the NSA has the resources to make an idea valuable and also keep it closed (and if you believe Payne, even they don't...). I understand that Arcot is a commercial company, and that it's trying to make money, but I don't think that actually counters any of the points presented above -- if anything, it makes them stronger, as Arcot has a much stronger desire to increase the value of its assets (the deliverable authentication system) than a random hacker working on this as a curiosity.
Ryan Lackey