There are a lot of misconceptions about TCPA and Palladium. I am not going to address TCPA per se, but I do want to try to clear up differences and misconceptions around what Pd does. Comments are in-line:

----- Original Message -----
From: "Adam Back" <adam@cypherspace.org>
To: "Cypherpunks" <cypherpunks@minder.net>
Cc: "Cryptography" <cryptography@wasabisystems.com>; "Adam Back" <adam@cypherspace.org>
Sent: Sunday, August 04, 2002 10:00 PM
Subject: dangers of TCPA/palladium
Like anonymous, I've been reading some of the palladium and TCPA docs.
Like anonymous and Adam, I have also been reading lots on Palladium lately. I have also been working on Pd since 1997.
I think some of the current disagreements and not very strongly technology grounded responses to anonymous are due to the lack of any concise and informative papers describing TCPA and palladium.
I agree, and from my perspective this is a problem. We have a great deal of information we need to get out there.
Not everyone has the energy to reverse engineer a detailed 300-odd pages of TCPA spec [1] back into high-level design considerations; the more manageably short business level TCPA FAQs [2], [3] are too heavily PR spun and biased to extract much useful information from.
So far I've read Ross Anderson's initial expose of the problem [4], plus Ross's FAQ [5]. (And more; reading list continues below...)
We have done technical reviews of Palladium, as shown by Seth Schoen's notes (a), which I think talk directly about many of the things discussed in this thread. I suggest anyone who wants to start to understand Pd read these notes. You don't cite the MS whitepaper. This is not a technical paper but it does set precedent and declare intent. See (b). The suggestions for TCPA responses that William Arbaugh raises seem quite good (c). 1 and 2 are already true for Pd, I believe that 3 is true but I would need to talk with him about what he means here to confirm it, 4 is covered in Eric Norlin's blog (d), and 5 is something we should do.
The relationship between TCPA and Palladium is:
- TCPA is the hardware and firmware (Compaq, Intel, IBM, HP, and Microsoft, plus 135+ other companies)
The current TPM (version 1.1) doesn't have the primitives which we need to support Palladium, and the privacy model is different. We are working within TCPA to get the instruction set aligned so that Palladium and TCPA could use future silicon for attestation, sealing, and authentication, but as things stand today the approaches to the two of them are different enough so that TCPA 1.1 can't support Pd.
- Palladium is a proposed OS feature-set based on the TCPA hardware (Microsoft)
Pd is an OS feature set based on new hardware. Pd requires changes to the CPU, chipset and/or memory controller, graphics, and USB, as well as new silicon (which we call an SCP or SSP). Microsoft currently has no announced plans to support TCPA directly, and as things stand today there is no SW or HW compatibility between the two.
The main 4 features proposed in the TCPA/palladium scheme are:
1. secure bootstrap -- checksums of BIOS, firmware, privileged OS code are used to ensure the machine knows whether it is running certified software or not. This is rooted in hardware, so you can't bypass it by using virtualization, only by hardware hacking (*).
This is not how Palladium works. Palladium loads a small piece of code called the TOR after the OS has booted and is running (this could be days later). Pd treats the BIOS, firmware, and privileged Windows OS code as untrusted. Pd doesn't care if the SW is certified or not - that is a question left to users.
2. software attestation -- the hardware supports attesting to a third party whether a call comes from a certified software component as assured by the hardware described in feature 1.
In Palladium, SW can actually know that it is running on a given platform and not being lied to by software. In 1, you say that SW virtualization doesn't work, and that is part of the design. (Pd can always be lied to by HW - we move the problem to HW, but we can't make it go away completely.) As SW is capable of knowing its own state, it can attest this state to others - users, services, other apps, etc. It can't lie when it uses Pd to say what it is. It's up to third parties (again, the user of the machine, or an app, or a service) to decide if it likes the answer and trusts the application. Disclosure of the app's identity is up to the user and no one else.

Note that in Pd no one but the user can find out the totality of what SW is running, except for the nub (aka TOR, or trusted operating root) and any required trusted services. So a service could say "I will only communicate with this app" and it will know that the app is what it says it is and hasn't been perverted. The service cannot say "I won't communicate with this app if this other app is running" because it has no way of knowing for sure whether the other app is running.
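The attestation flow described above can be sketched in miniature. This is a hypothetical model, not Palladium's actual protocol: an HMAC over a hardware-held secret stands in for what would really be an asymmetric signature by a key that never leaves the security chip (so a real verifier would hold only the public half), and all names here are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical hardware-held secret. In real hardware this never leaves
# the security chip; HMAC stands in for a hardware signature here.
HW_KEY = b"device-unique-secret"

def measure(code: bytes) -> bytes:
    """Measurement: a cryptographic digest of the running code."""
    return hashlib.sha256(code).digest()

def attest(code: bytes, nonce: bytes) -> tuple:
    """The hardware reports the app's measurement, bound to a fresh
    nonce so the quote can't be replayed."""
    digest = measure(code)
    quote = hmac.new(HW_KEY, digest + nonce, hashlib.sha256).digest()
    return digest, quote

def verify(digest: bytes, quote: bytes, nonce: bytes,
           expected: bytes) -> bool:
    """A relying party checks the quote, then compares the measurement
    against the one app it is willing to talk to. Note it learns only
    whether THIS app is running, not what else is on the machine."""
    ok = hmac.compare_digest(
        hmac.new(HW_KEY, digest + nonce, hashlib.sha256).digest(), quote)
    return ok and digest == expected

app = b"example application image"
nonce = b"fresh-challenge"
digest, quote = attest(app, nonce)
assert verify(digest, quote, nonce, measure(app))           # the app it expects
assert not verify(digest, quote, nonce, measure(b"other"))  # some other app
```

The last line mirrors the point above: the service can recognize the app it names, but the quote tells it nothing about whether any *other* app is running.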
3. hardware assisted compartmentalization -- CPU can run privileged software, and RAM can contain information that you can not examine, and can not modify. (Optionally the software source can be published, but that is not necessary, and if it's not you won't be able to reverse-engineer it as it can be encrypted for the CPU).
Confusion. The memory isn't encrypted, nor are the apps nor the TOR when they are on the hard drive. Encrypting the apps wouldn't make them more secure, so they aren't encrypted. The CPU uses HW protections to wall running programs off from the rest of the system and from each other. No one but the app itself, named third parties, and the TOR can see into this app's space. In fact, no one (should the app desire) can even know that the app is running at all except the TOR, and the TOR won't report this information to anyone without the app's permission. You can know this to be true because the TOR will be made available for review, and thus you can read the source and decide for yourself if it behaves this way.
4. sealing -- applications can store data that can only be read by that application. This works based on more hardware -- the software state checksums developed in feature 1 are used by hardware to generate encryption keys. The hardware will refuse to generate the key unless the same software state is running.
Correct enough for this thread; it is actually the TOR that will manage the keys for the apps, as this makes the concept of migration and data roaming far more manageable. (Yes, we have thought about this.)
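The sealing idea in 4 boils down to deriving the storage key from both a machine secret and the software measurement. A minimal sketch, with invented names and a toy XOR cipher standing in for the authenticated encryption a real security chip would use:

```python
import hashlib

def seal_key(hw_secret: bytes, measurement: bytes) -> bytes:
    # The key depends on the hardware secret AND the measurement of the
    # running software, so different software derives a different key
    # and cannot unseal the data.
    return hashlib.sha256(hw_secret + measurement).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy cipher for illustration only; real sealing would use
    # authenticated encryption inside the security chip.
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

hw_secret = b"per-machine-secret"
app_a = hashlib.sha256(b"app A image").digest()
app_b = hashlib.sha256(b"app B image").digest()

sealed = xor_cipher(seal_key(hw_secret, app_a), b"app A's private data")

# Same software state: the same key is derived and the data unseals.
assert xor_cipher(seal_key(hw_secret, app_a), sealed) == b"app A's private data"

# Different software state: a different key, so the unseal fails.
assert xor_cipher(seal_key(hw_secret, app_b), sealed) != b"app A's private data"
```

Having the TOR manage these keys on the apps' behalf, as described above, is what makes migration and roaming tractable: the TOR can re-seal data under a new measurement when the user authorizes it.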
One good paper to understand the secure bootstrap is an academic paper "A Secure and Reliable Bootstrap architecture" [6].
It's interesting to see that one of the authors of [6] has said that TCPA as currently formed is a bad thing and is trying to influence TCPA to make it more open and to exhibit stronger privacy properties; read his comments at [7].
There are a lot of potential negative implications of this technology; it represents a major shift in the balance of power comparable in magnitude to the clipper chip:
1. Potentially cedes control of the platform -- while the palladium docs talk about being able to boot the hardware with TCPA turned off, there exists the possibility that with a minor configuration change the hardware / firmware ensemble that forms palladium/TCPA could be configured to allow only certified OSes to boot, period. It's interesting to note, if I read correctly, that the X-box (based on Celeron processor and TCPA / TCPA-like features) does employ this feature. See for example: [8].
Comparing xBox and Pd isn't particularly fruitful - they are different problems and thus very different solutions. (Also note that xBox doesn't use the PID or any other unique HW key.) Palladium mostly doesn't care about the BIOS and considers it to be an untrusted system component. In Pd the BIOS can load any OS it wants, just like today, and in Pd the OS can load any TOR specified by the user. The MS TOR will run any app, as specified by the user. The security model doesn't depend on some apps being prevented from running. I believe that there isn't a single thing you can do with your PC today which is prevented on a Palladium PC. I am open to being challenged on this, so please let me know what you think you won't be able to do on a Pd PC that you can do today.
The documents talk about there being no barrier to certifying TCPA-aware extensions to open-source OSes. However I'm having trouble figuring out how this would work. Perhaps IBM with its linux support would build a TCPA extension for linux. Think about it -- the extension runs in privileged mode, and presumably won't be certified unless it passes some audit enforcing TCPA policies. (Such as keeping the owner of the machine from reading sealed documents, or reading the contents of DRM policy controlled documents without meeting the requirements for the DRM policy.)
2. DSS over-again -- a big aspect of the DSS reverse-engineering was to allow DVDs to be played in software on linux. The TCPA platform seems to have the primary goal of making a framework within which it is possible to build extensions to implement hardware tamper resistant DRM. (The DRM implementation would run in a hardware assisted code compartment as described in feature 3 above). So now where does that put open source platforms? Will they be able to read such DRM protected content? It seems likely that in the longer term the DRM platform will include video cards without access to video memory, perhaps encryption of the video signal out to the monitor, and of audio out to the speakers. (There are other existing schemes to do these things which dovetail into the likely TCPA DRM framework.)

I think you mean CSS, not DSS.
I don't want people snooping my passwords from the keyboard buffer, nor my account info from the frame buffer, and HW protections in those HW areas prevent that.
With the secure boot strap described in feature 1, the video card and so on are also part of the boot strap process, so the DRM system would have ready support from the platform for robustly refusing to play except on certain types of hardware. Similarly the application software which plays these DRM policy protected files and talks to the DRM policy module in the hardware assisted code compartment will itself be an application which uses the security boot-strapping features. So it won't be possible to write an application on, for example, linux to play these files without an audit and license etc from various content, DRM and OS cartels. This will lead to exactly the kind of thing Richard Stallman talked about in his prescient paper on the coming platform and right to develop competing software control wars [9].
Palladium doesn't boot strap the OS. Pd loads a secure piece of SW, called the TOR, which runs in a secure space and loads other apps that want security. Anyone can load an app into this environment and get the full protections Pd offers. MS doesn't require that you show them the SW first - you wanna run, you get to run - provided the user wants you to run. If a user doesn't like the looks of your app, then you (the developer) have a problem with that user.
3. Privacy support is broken -- the "privacy" features, while clearly attempts to defuse a re-run of the pentium serial number debacle, have not really fixed its problems. You have to trust the "Trusted Third Party" privacy CA not to track you and not to collude with other CAs and software vendors. There are known solutions to this particular sub-problem, for example Stefan Brands' digital credentials [10], which can be used to build a cryptographically assured privacy preserving PKI avoiding the linking problems arising from identity based and attribute certificates.
The privacy model in Pd is different from TCPA. I could go on for a long time about it, but the key difference is that the public key is only revealed to named third parties which a user trusts. You are right in thinking that you need to trust them, but you don't have to show anyone your key if you don't trust them, so you (the user) are always in control of this. Pd is not about user authentication - it is about machine and SW authentication. User auth can be better solved on a Pd platform than on a PC today, but it isn't required. Pd doesn't need to know who you are to work.
4. Strong enforcement for DMCA DRM excesses -- the types of DRM system which the platform enables stand a fair chance of providing high levels of enforcement for things which, though strictly legally mandated (copyright licensing restrictions, limited number of plays of CDs / DVDs, other disadvantageous schemes; inflexible and usurious software licensing), if enforced strictly would have deleterious effects on society and freedom. Copyright violation is widely practiced to a greater or lesser extent by just about all individuals. It is widely viewed as acceptable behavior. These social realities and personal freedoms are not taken into account or represented in the lobbying schemes which lead to the media cartels obtaining legal support for the erosion of users' rights and expansionist power grabs in DMCA, WIPO etc.
I don't know where to begin on this one. It deserves a long, well thought out response, and I don't have the time to do it at the moment. I will follow up on this. Let me state that I think that much of the energy around DRM and HW is misplaced, and that Pd is designed to enable seamless distribution of encrypted information, not to disable distribution of clear text information.
Some of these issues might be not so bad except for the track records, and obvious monopolistic tendencies and economic pressures on the entities who will have the root keys to the world's computers. There will be no effective choice or competition due to existing near-monopolies, or cartelisation in the hardware, operating system, and content distribution conglomerates.
MS will not have the root keys to the world's computers. The TOR won't have access to the private keys either. No one but the HW does. The TOR isn't "MS" per se - it is a piece of SW written by users but vetted and examined by hopefully thousands of parties and found to do nothing other than manage the local security model upon which Pd depends. You can read it and know it doesn't do anything but effectively manage keys and applications. And if you don't trust it, you won't run it. If you don't trust the TOR, you don't trust Palladium. Trust is the *only* feature we are attempting to achieve, so every decision we make will be made with trust and security in mind.
5. Strong enforcement for the software renting model -- the types of software licensing policy enforcement that can be built with the platform will also start to strongly enable the software and object rental ideas. Again potentially these models have some merit except that they will be sabotaged by API lock out, where the root key owners will be able to charge monopoly rents for access to APIs.
I am confused as to how this would work in Pd. Anyone can write apps to the Pd API. Zero restrictions. (APIs are full of restrictions - by their nature they limit things to a protocol, and potentially HW, both of which have understood limitations; I am dodging this concept in saying there are no restrictions.)
6. Audits and certification become vastly more prevalent. Having had some involvement with software certification (FIPS 140-1 / CC) I can attest that these can be expensive exercises. It is unlikely that the open source community will be able to get software certified due to cost (the software is free, there is no business entity to claim ownership of the certification rights, and so no way to recoup the costs). While certification where competition is able to function is a good thing, providing users with transparency and needed assurance, the danger with tying audits to TCPA is that it will be another barrier to entry for small businesses, and for open source particularly.
This is a problem anyone who wants to compete in the security and trust space will need to overcome. I don't think that it is particularly new or different in a world with Pd. Writing a TOR is going to be really hard and will require processes and methods that are alien to many SW developers. One example (of many) is that we are generating our header files from specs. You don't change the header file, you change the spec and then gen the header. This process is required for the highest degrees of predictability, and those are cornerstones for the highest degree of trust. Unpredictable things are hard to trust.
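The spec-to-header discipline described above can be illustrated with a small sketch. The spec format, names, and prototypes here are entirely invented for illustration; the point is only the process: the spec is the single source of truth, and the header is regenerated from it, so the two can never silently drift apart.

```python
# Hypothetical spec-driven header generation. Nothing here reflects the
# real Pd spec format or API; the names are made up for illustration.
spec = {
    "name": "tor_api",
    "constants": {"TOR_MAX_APPS": 64, "TOR_KEY_BYTES": 32},
    "functions": [
        "int tor_seal(const void *data, size_t len)",
        "int tor_unseal(void *data, size_t len)",
    ],
}

def gen_header(spec: dict) -> str:
    """Emit a C header from the spec; developers edit the spec, never
    the generated file."""
    guard = spec["name"].upper() + "_H"
    lines = [f"#ifndef {guard}", f"#define {guard}", ""]
    for name, value in spec["constants"].items():
        lines.append(f"#define {name} {value}")
    lines.append("")
    lines += [proto + ";" for proto in spec["functions"]]
    lines += ["", f"#endif /* {guard} */"]
    return "\n".join(lines)

print(gen_header(spec))
```

Because every generated artifact is a pure function of the spec, a reviewer auditing the TOR can check the spec once instead of chasing hand-edits scattered across headers - the predictability the paragraph above is after.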
7. Untrusted, unauditable software will be able to run without scrutiny inside the hardware assisted code compartments. Some of the documentation talks about open sourcing some aspects. While this may come to pass, it sounded like only the TOR (Trusted Operating Root) would be published; other extension modules also running in unauditable compartments will not be so published.
Everything in the TCB (Trusted Computing Base) for Pd will be made available for review to anyone who wants to review it; this includes software which the MS TOR mandates must be loaded.
8. Gives away root control of your machine -- providing potentially universal remote control of users' machines to any government agencies with access to the TCPA certification master keys, or policies allowing them to demand certifications of hostile code on demand. Central authorities are likely to be the only, or the default, controllers of the firmware/software upgrade mechanism which comes as part of the secure bootstrap feature.
This doesn't happen in Pd. There is no secure boot strap feature in Pd. The BIOS boots up the PC the same way it does today. Root control is held by the owner of the machine. There is no certification master key in Pd.
9. Provides a dangerously tempting target for government power-grabs -- governments will be very interested in abusing the power provided by the platform: gaining access to its keys to be able to insert remote backdoors, and/or trying to mandate government policy enforcement modules once such a platform is built. Think this is unrealistic? Recall clipper? The TCPA is a generic extensible policy enforcement architecture which can be configured to robustly enforce policies against the interests of the machine owner. Clipper and key-escrow took a multi-year fight; at some point in the near future, if some of the more egregious TCPA/Palladium framework features and configuration possibilities become widely deployed, they could be implemented after the fact as a TCPA/Palladium policy extension which runs in the hardware assisted code compartment and is authenticated up to the hardware boot by the secure bootstrapping process.
One of the beauties of Pd is that if there is any SW backdoor, you will know about it. HW robustness will be something for manufacturers to work out. For most systems, I think that extensive HW tamper resistance will be a waste of time, but for some (e.g. highly secure govt systems) it will be a necessity and one that works well in Pd.
From what I've read so far, I think people's gut reactions are right -- that it's an aggressive and ambitious power grab by the evil empire -- the 3 cartels / monopolies surrounding PC hardware, operating systems and content distribution. The operating system near-monopoly will doubtless find creative ways to use and expand the increased control: to control application interoperability (with the sealing function), to control with hardware assistance the access to undocumented APIs (no more reverse engineering, or using the APIs even if you do / could reverse engineer).
I know that we aren't using undocumented API's and that we will strive for the highest degree of interoperability and user control possible. Pd represents massive de-centralization of trust, not the centralization of it. I think that time is going to have to tell on this one. I know that this isn't true. You think that it is. I doubt that my saying it isn't true is going to change your mind; I know that the technology won't do much of what you are saying it does do, but I also know that some of these things boil down to suspicion around intent, and only time will show if my intent is aligned with my stated goals.
So some of the applications already proposed are immediately objectionable. The scope for them to become more so, with limited recourse or technical counter-measures possible on the part of the user community, is huge. Probably the worst aspect is the central control -- it really effectively does give remote root control of your machine to people you don't want to trust. Also the control _will_ be abused for monopolistic rent seeking and exclusionary policies to lock out competition. Don't forget the fact that microsoft views linux as a major enemy, as revealed by documents uncovered in the anti-trust discovery process.
Pd does not give root control of your machine to someone else. It puts it into your hands, to do with as you so desire, including hacking away at it to your heart's content.
In fact I'd say this is the biggest coming risk to personal freedom since the days when the clipper chip / key escrow looked like it stood some chance of becoming reality.
I think that Pd represents an enhancement to personal freedoms and user control over their machines. I hope that over time I will be able to explain Pd sufficiently well so that you have all the facts you need to understand how and why I say this.

Peter

++++

(a) Seth Schoen's blog
http://vitanuova.loyalty.org/2002-07-05.html
(b) MS paper
http://www.microsoft.com/presspass/features/2002/jul02/0724palladiumwp.asp
(c) William Arbaugh on TCPA
http://www.cs.umd.edu/~waa/TCPA/TCPA-goodnbad.html
(d) Eric Norlin's blog
http://www.unchartedshores.com/blogger/archive/2002_07_28_archive3.html#85300559
Adam -- http://www.cypherspace.org/adam/
(*) It may be possible to hack the firmware, given access to source temporarily.
[1] "Trusted Computing Platform Alliance (TCPA) Main Specification Version 1.1b", TCPA
http://www.trustedcomputing.org/docs/main%20v1_1b.pdf
[2] "TCPA Specification/TPM Q&A", TCPA
http://www.trustedcomputing.org/docs/TPM_QA_071802.pdf
[3] "TCPA Frequently Asked Questions Rev 5.0", TCPA
http://www.trustedcomputing.org/docs/Website_TCPA%20FAQ_0703021.pdf
[4] "Security in Open versus Closed Systems (The Dance of Boltzmann, Coase and Moore)", Ross Anderson,
(Sections 4 and 5 only, rest is unrelated)
ftp://ftp.cl.cam.ac.uk/users/rja14/toulouse.pdf
[5] "TCPA / Palladium Frequently Asked Questions Version 1.0"
http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html
[6] "A Secure and Reliable Bootstrap Architecture"
@inproceedings{Arbaugh:97:secure-bootstrap,
  author    = "Bill Arbaugh and Dave Farber and Jonathan Smith",
  title     = "A Secure and Reliable Bootstrap Architecture",
  booktitle = "Proceedings of the IEEE Symposium on Security and Privacy",
  pages     = "65-71",
  note      = "Also available as \url{http://www.cis.upenn.edu/~waa/aegis.ps}"
}
[7] "The TCPA; What's wrong; What's right and what to do about", William Arbaugh, 20 Jul 2002
http://www.cs.umd.edu/~waa/TCPA/TCPA-goodnbad.html
[8] "Keeping Secrets in Hardware: the Microsoft Xbox Case Study", Andrew "bunnie" Huang, 26 May 2002
http://web.mit.edu/bunnie/www/proj/anatak/AIM-2002-008.pdf
[9] "The Right to Read", Richard Stallman, Feb 1997, Communications of the ACM (Volume 40, Number 2).
http://www.gnu.org/philosophy/right-to-read.html
[10] Stefan Brands
Book "Rethinking Public Key Infrastructures and Digital Certificates - Building in Privacy", MIT Press, Aug 2000.
Number of other technical and semi-technical papers on that page.
--------------------------------------------------------------------- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to
majordomo@wasabisystems.com
----- End forwarded message -----
Peter N. Biddle