
I think dongles (and non-copyable floppies) have been around since the early 80s at least... maybe the 70s. Tamper-resistant CPU modules have been around since the ATM network, I believe (in the form of PIN processors stored inside safes).

The fundamental difference between a "dongle" and a full "trusted module" containing the critical application code is that with a dongle, you can just patch the application to skip over the checks (although the checks can be repeated and made relatively arcane). If the whole application, or at least the non-cloneable parts of it, live in a sealed module, the rest of the application can't be patched to simply skip over that code. Another option is a client-server or oracle model, where the really sensitive pieces (say, a magic algorithm for finding oil from GIS data, or a good natural language processor) live on centrally located, vendor-controlled hardware, with only the UI executing on the end user's machine. (There is a small sketch contrasting the dongle and oracle approaches at the end of this message.)

What I'd really like is a design which accomplishes the "good" parts of TCPA, ensuring that when code claims to be executing in a certain form, it really is, and providing a way to guarantee this remotely, without making it easy to implement restrictions on content copying. (The kind of remote guarantee I mean is also sketched below.) It would be nice to have those good parts, but given the resistance to DRM, if security and TCPA have their fates bound, they'll probably both die an extended and painful death.

I suppose the real difference between a crypto-specific module and a general purpose module is how much of the UI is within the trusted platform envelope. If the module is only used for handling cryptographic keys, as an addition to an insecure general purpose CPU, with no user I/O, it seems unlikely to be useful for DRM. If the entire machine is inside the envelope, it seems obviously useful for DRM, and DRM would likely be the dominant application. If only limited user I/O is included in the envelope (enough for user authentication and keying, and to let the user load initially-trusted code onto the general purpose CPU), but the user can still run whatever general purpose code he likes on that CPU, even uncertified code, alongside the certified module, it's not really useful for DRM, yet it remains useful for the non-DRM security applications which are the alleged purpose behind TCPA. (A sketch of such a key-only module is at the end as well.)

(Given that text piracy doesn't seem to be a serious commercial concern, simply keeping video and audio playback and network communications outside the TCPA envelope entirely is good enough in practice. This way, both authentication and keying can be done in text mode, and document distribution control, privacy of records, etc. can be accomplished, provided there is ALSO the ability to do arbitrary text processing and computing outside the trusted envelope.)

If it's the user's own data being protected, you don't need to worry about the user intentionally circumventing the protections. Any design which removes control from the 'superuser' of the machine is fundamentally about protecting someone other than the user. This, I think, is the difference between TCPA and smartcards. Notice which one has, in its short lifetime, attracted far more enmity :)
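To make the dongle-vs-oracle distinction concrete, here is a minimal Python sketch; all of the names (check_dongle, find_oil_oracle, vendor_rpc) are invented for illustration and not taken from any real product:

    # Hypothetical sketch: a dongle used as a yes/no presence check vs. the
    # oracle model, where the vendor's hardware computes the result itself.

    def check_dongle() -> bool:
        """Stand-in for querying a hardware dongle."""
        return False

    def proprietary_analysis(gis_data):
        """Stand-in for the 'magic algorithm' that ships with the
        application in the dongle-protected case."""
        return sorted(gis_data)

    def find_oil_dongle_protected(gis_data):
        # The full algorithm is present in the binary; the dongle only
        # gates it.  An attacker patches this one branch (or NOPs the
        # call) and the algorithm runs without the dongle.
        if not check_dongle():
            raise RuntimeError("dongle not present")
        return proprietary_analysis(gis_data)

    def find_oil_oracle(gis_data, vendor_rpc):
        # The algorithm never leaves the vendor's (or sealed module's)
        # hardware.  There is no local branch to patch: the client has
        # nothing that can produce the answer on its own.
        return vendor_rpc("analyze", gis_data)

    # Example: a fake vendor endpoint standing in for a real network RPC.
    print(find_oil_oracle([3, 1, 2], lambda op, data: sorted(data)))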
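And a rough sketch of the kind of remote guarantee I mean. This is not TCPA's actual protocol; the HMAC with a module-held key is only a dependency-free stand-in for the module's certified signing key (a real design would use an asymmetric signature and a certificate chain):

    import hashlib
    import hmac
    import os

    # Assumption: a key provisioned inside the sealed module at manufacture.
    MODULE_KEY = b"key provisioned inside the sealed module"

    def module_quote(loaded_code: bytes, nonce: bytes):
        """Inside the module: measure (hash) the loaded code and bind the
        measurement to the verifier's fresh nonce so it can't be replayed."""
        measurement = hashlib.sha256(loaded_code).digest()
        proof = hmac.new(MODULE_KEY, measurement + nonce, hashlib.sha256).digest()
        return measurement, proof

    def remote_verify(expected_code: bytes, measurement: bytes,
                      nonce: bytes, proof: bytes) -> bool:
        """At the remote party: check the proof came from the module and
        that the measured code is exactly the code we expected."""
        expected_proof = hmac.new(MODULE_KEY, measurement + nonce,
                                  hashlib.sha256).digest()
        return (hmac.compare_digest(proof, expected_proof)
                and measurement == hashlib.sha256(expected_code).digest())

    nonce = os.urandom(16)  # fresh challenge from the verifier
    code = b"the software stack the user claims to be running"
    m, p = module_quote(code, nonce)
    assert remote_verify(code, m, p, nonce)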
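Finally, a sketch of the limited, key-only module described above. The interface and the placeholder "unwrap" are my own assumptions, not real cryptography; the point is only that once a key is released to an untrusted CPU which does its own playback, the module cannot impose copy restrictions:

    # Hypothetical interface for a crypto-only module with text-mode I/O:
    # it authenticates the user and releases small keys, but playback,
    # network traffic, and all bulk data stay on the user-controlled CPU.

    class KeyOnlyModule:
        def __init__(self, pin: str, wrapping_key: int):
            self._pin = pin
            self._wrapping_key = wrapping_key
            self._unlocked = False

        def authenticate(self, pin: str) -> bool:
            # Text-mode user authentication inside the envelope.
            self._unlocked = (pin == self._pin)
            return self._unlocked

        def unwrap_content_key(self, wrapped: bytes) -> bytes:
            # Release a small content key to the untrusted CPU.  The
            # user's own software then decrypts, displays, and (if it
            # wishes) copies the plaintext; the module never touches the
            # playback path, so it cannot enforce copy restrictions.
            if not self._unlocked:
                raise PermissionError("authenticate first")
            return bytes(b ^ self._wrapping_key for b in wrapped)

    module = KeyOnlyModule(pin="1234", wrapping_key=0x5A)
    module.authenticate("1234")
    key = module.unwrap_content_key(bytes([0x10, 0x22, 0x33]))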
-- 
Ryan Lackey [RL7618 RL5931-RIPE]        ryan@havenco.com
CTO and Co-founder, HavenCo Ltd.        +44 7970 633 277
the free world just milliseconds away   http://www.havenco.com/
OpenPGP 4096: B8B8 3D95 F940 9760 C64B DE90 07AD BE07 D2E0 301F