Re: Palladium: technical limits and implications

Adam Back writes:
+---------------+------------+
| trusted-agent | user mode  |
| space         | app space  |
| (code         +------------+
|  compartment) | supervisor |
|               | mode / OS  |
+---------------+------------+
| ring -1 / TOR              |
+----------------------------+
| hardware / SCP key manager |
+----------------------------+
I don't think this works. According to Peter Biddle, the TOR can be launched even days after the OS boots. It does not underlie the ordinary user mode apps and the supervisor mode system call handlers and device drivers.

        +---------------+------------+
        | trusted-agent | user mode  |
        | space         | app space  |
        | (code         +------------+
        |  compartment) | supervisor |
        |               | mode / OS  |
+---+   +---------------+------------+
|SCP|---| ring -1 / TOR |
+---+   +---------------+

This is more how I would see it. The SCP is more like a peripheral device, a crypto co-processor, that is managed by the TOR.

Earlier you quoted Seth's blog:

| The nub is a kind of trusted memory manager, which runs with more
| privilege than an operating system kernel. The nub also manages access
| to the SCP.

as justification for putting the nub (TOR) under the OS. But I think in this context "more privilege" could just refer to the fact that it is in the secure memory, which is only accessed by this ring -1 or ring 0 or whatever you want to call it. It doesn't follow that the nub has anything to do with the OS proper. If the OS can run fine without it, as I think you agreed, then why would the entire architecture have to reorient itself once the TOR is launched?

In other words, isn't my version simpler, as it adjoins the column at the left to the pre-existing column at the right, when the TOR launches, days after boot? Doesn't it require less instantaneous, on-the-fly reconfiguration of the entire structure of the Windows OS at the moment of TOR launch? And what, if anything, does my version fail to accomplish that we know that Palladium can do?
Integrity Metrics in a given level are computed by the level below.
The TOR starts Trusted Agents; the Trusted Agents are outside OS control. Therefore a remote application, based on remote attestation, can know about the integrity of the trusted agent and the TOR.
ring -1/TOR is computed by SCP/hardware; Trusted Agent is computed by TOR;
I had thought the hardware might also produce the metrics for trusted agents, but you could be right that it is the TOR which does so. That would be consistent with the "incremental extension of trust" philosophy which many of these systems seem to follow.
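For what it's worth, here is a minimal sketch of what that incremental extension of trust could look like, modelled on a TCPA-style hash-extend operation. The register layout, function name and placeholder images are my own guesses for illustration, not Palladium's published interface; the point is just that each layer only measures the layer directly above it.

    import hashlib

    def extend(register: bytes, component_image: bytes) -> bytes:
        # TPM/TCPA-style extend: fold the hash of the newly loaded
        # component into the running metric, so the final value depends
        # on every component measured so far, and on the order.
        measurement = hashlib.sha1(component_image).digest()
        return hashlib.sha1(register + measurement).digest()

    # Placeholder images standing in for real binaries.
    tor_image   = b"...bytes of the TOR..."
    agent_image = b"...bytes of a trusted agent..."

    metric = b"\x00" * 20                 # register starts at a known value
    metric = extend(metric, tor_image)    # computed by the SCP/hardware
    metric = extend(metric, agent_image)  # computed by the (measured) TOR
    # 'metric' is what an SCP-signed remote attestation would report.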
The parallel stack to the right: OS is computed by TOR; Application is computed by OS.
No, that doesn't make sense. Why would the TOR need to compute a metric of the OS? Peter has said that Palladium does not give information about other apps running on your machine:

: Note that in Pd no one but the user can find out the totality of what
: SW is running except for the nub (aka TOR, or trusted operating root)
: and any required trusted services. So a service could say "I will only
: communicate with this app" and it will know that the app is what it
: says it is and hasn't been perverted. The service cannot say "I won't
: communicate with this app if this other app is running" because it has
: no way of knowing for sure if the other app isn't running.
So for general applications you still have to trust the OS, but the OS could itself have its integrity measured by the TOR. Of course given the rate of OS exploits, especially in Microsoft products, it seems likely that the aspect of the OS that checks integrity of loaded applications could itself be tampered with using a remote exploit.
Nothing Peter or anyone else has said indicates that this is a property of Palladium, as far as I can remember.
Probably the latter problem is the reason Microsoft introduced ring -1 in Palladium (it seems to be missing in TCPA).
No, I think it is there to prevent debuggers and supervisor-mode drivers from manipulating secure code. TCPA is more of a whole-machine spec dealing with booting an OS, so it doesn't have to deal with the question of running secure code next to insecure code.

Peter Biddle, Brian LaMacchia or other Microsoft employees could short-cut this guessing game at any point by coughing up some details. Feel free guys... enciphering minds want to know how it works.

(Tim Dierks: read the earlier posts about ring -1 to find the answer to your question about feasibility in the case of Palladium; in the case of TCPA your conclusions are right I think).

On Mon, Aug 12, 2002 at 10:55:19AM -0700, AARG!Anonymous wrote:
Adam Back writes:
+---------------+------------+
| trusted-agent | user mode  |
| space         | app space  |
| (code         +------------+
|  compartment) | supervisor |
|               | mode / OS  |
+---------------+------------+
| ring -1 / TOR              |
+----------------------------+
| hardware / SCP key manager |
+----------------------------+
I don't think this works. According to Peter Biddle, the TOR can be launched even days after the OS boots.
I thought we went over this before? My hypothesis is: I presumed there would be a stub TOR loaded by the hardware. The hardware would allow you to load a new TOR (presumably somewhat like loading a new BIOS -- the TOR and hardware have a local trusted path to some IO devices).
It does not underlie the ordinary user mode apps and the supervisor mode system call handlers and device drivers.
I don't know what leads you to this conclusion.
        +---------------+------------+
        | trusted-agent | user mode  |
        | space         | app space  |
        | (code         +------------+
        |  compartment) | supervisor |
        |               | mode / OS  |
+---+   +---------------+------------+
|SCP|---| ring -1 / TOR |
+---+   +---------------+
How would the OS or user mode apps communicate with trusted agents with this model? The TOR I think would be the mediator of these communications (and of potential communications between trusted agents). Before loading a real TOR, the stub TOR would not implement talking to trusted agents.

I think this is also more symmetric and therefore more likely. The trusted agent space is the same as supervisor mode that the OS runs in. It's like virtualization in OS360: there are now multiple "OSes" operating under a micro-kernel (the TOR in ring -1): the real OS and the multiple trusted agents.

The TOR is supposed to be special purpose, simple and small enough to be audited as secure and stand a chance of being so. The trusted agents are the secure parts of applications (dealing with sealing, remote attestation, DRM, authenticated path to DRM-implementing graphics cards, monitors, sound cards etc; that kind of thing). Trusted agents should also be small, simple and special purpose to avoid them also suffering from remote compromise. There's limited point putting a trusted agent in a code compartment if it becomes a full blown complex application like MS Word, because then the trusted agent would be nearly as likely to be remotely exploited as normal OSes.
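To make the mediation idea concrete, here is a toy sketch (with an entirely hypothetical interface, since Microsoft has published no TOR API) of a ring -1 dispatcher acting as the only channel between the OS side and trusted agent compartments:

    class TOR:
        # Toy model: compartments never touch each other's memory;
        # all they can do is ask the TOR to deliver a request to a
        # named compartment, and the TOR can apply policy to that.
        def __init__(self):
            self.compartments = {}          # name -> handler callable

        def register(self, name, handler):
            self.compartments[name] = handler

        def call(self, caller, target, request):
            handler = self.compartments.get(target)
            if handler is None:
                raise PermissionError("no such compartment: " + target)
            return handler(caller, request)

    tor = TOR()
    tor.register("drm-agent", lambda caller, req: "sealed reply for " + caller)
    print(tor.call("os/app", "drm-agent", "unseal content key"))

In this picture a stub TOR would simply have no compartments registered, so agent calls fail until a real TOR is loaded, which is consistent with the point above that the stub TOR would not implement talking to trusted agents.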
[...] It doesn't follow that the nub has anything to do with the OS proper. If the OS can run fine without it, as I think you agreed, then why would the entire architecture have to reorient itself once the TOR is launched?
Trusted agents will also need to use OS services; the way you have it, they can't.
In other words, isn't my version simpler, as it adjoins the column at the left to the pre-existing column at the right, when the TOR launches, days after boot? Doesn't it require less instantaneous, on-the-fly reconfiguration of the entire structure of the Windows OS at the moment of TOR launch?
I don't think it's a big problem to replace a stub TOR with a given TOR sometime after OS boot. It's analogous to modifying kernel code with a kernel module, only a special purpose micro-kernel in ring -1 instead of ring 0. No big deal.
The parallel stack to the right: OS is computed by TOR; Application is computed by OS.
No, that doesn't make sense. Why would the TOR need to compute a metric of the OS?
In TCPA, which does not have a ring -1, this is all the TPM does: compute metrics on the OS, and then have the OS compute metrics on applications.

While Trusted Agent space is separate and better protected, as there are fewer lines of code in which a remote exploit has to be found to compromise one of them, I hardly think Palladium would discard the existing Windows driver-signing and code-signing scheme. It also seems likely therefore that, even though it offers lower assurance, the code signing would be extended to include metrics and attestation for the OS, drivers and even applications.
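As an illustration of that chain, a remote verifier's check might look something like the sketch below. The component names and images are placeholders, and signature verification of the report under the TPM/SCP identity key is omitted; this is only a sketch of the "each level measures the level above" idea, not any published TCPA data format.

    import hashlib

    def measure(image: bytes) -> str:
        return hashlib.sha1(image).hexdigest()

    # The TCPA-style chain: the TPM measures the OS, and the measured OS
    # then measures each application it loads.
    os_image  = b"...placeholder OS image..."
    app_image = b"...placeholder application image..."

    report = {
        "os":  measure(os_image),    # produced by the TPM/hardware
        "app": measure(app_image),   # produced by the already-measured OS
    }

    # The remote service only trusts the app if every link in the chain
    # matches a value it recognises.
    expected = {"os": measure(os_image), "app": measure(app_image)}
    print(report == expected)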
Peter has said that Palladium does not give information about other apps running on your machine:
I take this to mean that, as stated somewhere in the available docs, the OS cannot observe or even know how many trusted agents are running. So he's stating that they've made OS design decisions such that the OS could not refuse to run some code on the basis that a given Trusted Agent is running. This functionality however could be implemented, if so desired, in the TOR.

Adam
--
http://www.cypherspace.org/adam/

At 07:30 PM 8/12/2002 +0100, Adam Back wrote:
(Tim Dierks: read the earlier posts about ring -1 to find the answer to your question about feasibility in the case of Palladium; in the case of TCPA your conclusions are right I think).
The addition of an additional security ring with a secured, protected memory space does not, in my opinion, change the fact that such a ring cannot accurately determine that a particular request is consistent with any definable security policy. I do not think it is technologically feasible for ring -1 to determine, upon receiving a request, that the request was generated by trusted software operating in accordance with the intent of whomever signed it.

Specifically, let's presume that a Palladium-enabled application is being used for DRM; a secure & trusted application is asking its secure key manager to decrypt a content encryption key so it can access properly licensed code. The OS is valid & signed and the application is valid & signed. How can ring -1 distinguish a valid request from one which has been forged by rogue code which used a bug in the OS or any other trusted entity (the application, drivers, etc.)?

I think it's reasonable to presume that desktop operating systems which are under the control of end-users cannot be protected against privilege escalation attacks. All it takes is one sound card with a bug in a particular version of the driver to allow any attacker to go out and buy that card & install that driver and use the combination to execute code or access data beyond his privileges.

In the presence of successful privilege escalation attacks, an attacker can get access to any information which can be exposed to any privilege level he can escalate to. The attacker may not be able to access raw keys & other information directly managed by the TOR or the key manager, but those keys aren't really interesting anyway: all the interesting content & transactions will live in regular applications at lower security levels.

The only way I can see to prevent this is for the OS to never transfer control to any software which isn't signed, trusted and intact. The problem with this is that it's economically infeasible: it implies the death of small developers and open source, and that's a higher price than the market is willing to bear.

- Tim

I think you are making incorrect presumptions about how you would use Palladium hardware to implement a secure DRM system. If used as you suggest it would indeed suffer the vulnerabilities you describe.

The difference between an insecure DRM application such as you describe and a secure DRM application correctly using the hardware security features is somewhat analogous to the current difference between an application that relies on not being reverse engineered for its security vs one that encrypts data with a key derived from a user password.

In a Palladium DRM application done right, everything which sees keys and plaintext content would reside inside Trusted Agent space, inside DRM-enabled graphics cards which restrict access to video RAM, and later DRM-enabled monitors with encrypted digital signal to the monitor, and DRM-enabled soundcards with encrypted content to the speakers. (The encrypted content to media-related output peripherals is like HDCP, only done right with non-broken crypto.)

Now all that will be in application space that you can reverse engineer and hack on will be UI elements and application logic that drives the trusted agent, remote attestation, content delivery and hardware. At no time will keys or content reside in space that you can virtualize or debug. In the short term it may be that some of these will be not fully implemented, so that content does pass through OS or application space, or into non-DRM video cards and non-DRM monitors, but the above is the end-goal as I understand it.

As you can see there is still the limit of the non-remote-exploitability of the trusted agent code, but this is within the control of the DRM vendor. If he does a good job of making a simple software architecture and avoiding potential for buffer overflows, he stands a much better chance of having a secure DRM platform than if, as you describe, exploited OS code or rogue driver code can subvert his application.

There is also, I suppose, the possibility to push content decryption onto the DRM video card, so the TOR does little apart from channel key exchange messages from the SCP to the video card, and channel remote attestation and key exchanges between the DRM license server and the SCP. The rest would be streaming encrypted video formats such as CSS VOB blocks (only with good crypto) from the network or disk to the video card.

Similar kinds of arguments about the correct breakdown between application logic and placement of security-policy-enforcing code in Trusted Agent space apply to general applications. For example you could imagine a file sharing application which hid the data the user's machine was serving from the user. If you did it correctly, this would be secure to the extent of the hardware tamper resistance (and the implementer's ability to keep the security-policy-enforcing code line-count down and audit it well).

At some level there has to be a trade-off between what you put in trusted agent space and what becomes application code. If you put the whole application in trusted agent space, while then all its application logic is fully protected, the danger will be that you have added too much code to reasonably audit, so people will be able to gain access to that trusted agent via buffer overflow.

So therein lies the crux of secure software design in the Palladium-style secure application space: choosing a good breakdown between security policy enforcement and application code.
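A toy sketch of the sealing step in that "done right" design: the SCP interface below is invented for illustration (real sealing would also involve a chip-internal storage key and the SCP's own metric registers, details Microsoft has not published), but it shows why a key released only against a matching integrity metric never has to pass through OS or application space.

    import hashlib, os

    class ToySCP:
        # Release a sealed secret only to a caller whose current
        # integrity metric matches the metric recorded at seal time.
        def __init__(self):
            self._store = {}

        def seal(self, metric: bytes, secret: bytes) -> int:
            handle = len(self._store)
            self._store[handle] = (metric, secret)
            return handle

        def unseal(self, handle: int, current_metric: bytes) -> bytes:
            sealed_metric, secret = self._store[handle]
            if current_metric != sealed_metric:
                raise PermissionError("metrics changed; refusing to unseal")
            return secret

    scp = ToySCP()
    agent_metric = hashlib.sha1(b"...DRM trusted agent image...").digest()
    content_key  = os.urandom(16)

    handle = scp.seal(agent_metric, content_key)   # done at license time
    key = scp.unseal(handle, agent_metric)         # succeeds only for the same agent
    # 'key' stays in trusted-agent space; it would be forwarded to the
    # DRM-enabled video card over an encrypted channel, not to the OS.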
There must be a balance, and what makes sense and is appropriate depends on the application and on the limits of the ingenuity of the protocol designer in coming up with clever designs that cover, to hardware-tamper-resistance levels, the application's desired policy enforcement, while providing a workably small and practically auditable associated trusted agent module.

So there are practical limits stemming from realities to do with code complexity being inversely proportional to auditability and security, but the extra ring -1, remote attestation, sealing and integrity metrics really do offer some security advantages over the current situation.

Adam

On Mon, Aug 12, 2002 at 03:28:15PM -0400, Tim Dierks wrote:
At 07:30 PM 8/12/2002 +0100, Adam Back wrote:
(Tim Dierks: read the earlier posts about ring -1 to find the answer to your question about feasibility in the case of Palladium; in the case of TCPA your conclusions are right I think).
The addition of an additional security ring with a secured, protected memory space does not, in my opinion, change the fact that such a ring cannot accurately determine that a particular request is consistent with any definable security policy. I do not think it is technologically feasible for ring -1 to determine, upon receiving a request, that the request was generated by trusted software operating in accordance with the intent of whomever signed it.
Specifically, let's presume that a Palladium-enabled application is being used for DRM; a secure & trusted application is asking its secure key manager to decrypt a content encryption key so it can access properly licensed code. The OS is valid & signed and the application is valid & signed. How can ring -1 distinguish a valid request from one which has been forged by rogue code which used a bug in the OS or any other trusted entity (the application, drivers, etc.)?
I think it's reasonable to presume that desktop operating systems which are under the control of end-users cannot be protected against privilege escalation attacks. All it takes is one sound card with a bug in a particular version of the driver to allow any attacker to go out and buy that card & install that driver and use the combination to execute code or access data beyond his privileges.
In the presence of successful privilege escalation attacks, an attacker can get access to any information which can be exposed to any privilege level he can escalate to. The attacker may not be able to access raw keys & other information directly managed by the TOR or the key manager, but those keys aren't really interesting anyway: all the interesting content & transactions will live in regular applications at lower security levels.
The only way I can see to prevent this is for the OS to never transfer control to any software which isn't signed, trusted and intact. The problem with this is that it's economically infeasible: it implies the death of small developers and open source, and that's a higher price than the market is willing to bear.
- Tim

At 09:07 PM 8/12/2002 +0100, Adam Back wrote:
At some level there has to be a trade-off between what you put in trusted agent space and what becomes application code. If you put the whole application in trusted agent space, while then all its application logic is fully protected, the danger will be that you have added too much code to reasonably audit, so people will be able to gain access to that trusted agent via buffer overflow.
I agree; I think the system as you describe it could work and would be secure, if correctly executed. However, I think it is infeasible to generally implement commercially viable software, especially in the consumer market, that will be secure under this model. Either the functionality will be too restricted to be accepted by the market, or there will be a set of software flaws that allow the system to be penetrated.

The challenge is to put all of the functionality which has access to content inside of a secure perimeter, while keeping the perimeter secure from any data leakage or privilege escalation. The perimeter must be very secure and well-understood from a security standpoint; for example, it seems implausible to me that any substantial portion of the Win32 API could be used from within the perimeter; thus, all user interface aspects of the application must be run through a complete security analysis with the presumption that everything outside of the perimeter is compromised and cannot be trusted. This includes all APIs & data.

I think we all know how difficult it is, even for security professionals, to produce correct systems that enforce any non-trivial set of security permissions. This is true even when the items to be protected and the software functionality are very simple and straightforward (such as key management systems). I think it entirely implausible that software developed by multimedia software engineers, managing large quantities of data in a multi-operation, multi-vendor environment, will be able to deliver a secure environment.

This is even more true when the attacker (the consumer) has control over the hardware & software environment. If a security bug is found & patched, the end user has no direct incentive to upgrade their installation; in fact, the most concerning end users (e.g., pirates) have every incentive to seek out and maintain installations with security faults. While a content or transaction server could refuse to conduct transactions with a user who has not upgraded their software, such a requirement can only increase the friction of commerce, a price that vendors & consumers might be quite unwilling to pay.

I'm sure that the whole system is secure in theory, but I believe that it cannot be securely implemented in practice and that the implied constraints on use & usability will be unpalatable to consumers and vendors.

- Tim

At this point we largely agree: security is improved, but the limit remains assuring security of over-complex software.

To sum up: the limit of what is securely buildable now becomes what is securely auditable. Before, without Palladium, the limit was the security of the OS, so this makes a big difference.

Yes, some people may design over-complex trusted agents, with sloppy APIs and so forth, but the nice thing about trusted agents is that they are compartmentalized: if the MPAA and Microsoft shoot themselves in the foot with a badly designed, over-complex DRM trusted agent component for MS Media Player, it has no bearing on my ability to implement a secure file-sharing or secure e-cash system in a compartment with rigorously analysed APIs and well audited code. The leaky, compromised DRM app can't compromise the security policies of my app.

Also, it's unclear from the limited information available, but it may be that trusted agents, like other ring-0 code (eg like the OS itself), can delegate tasks to user mode code running in trusted agent space, which can't examine other user level space, nor the space of the trusted agent which started them, and also can't be examined by the OS. In this way, for example, remote exploits could be better contained by the sub-division of trusted agent code. Eg. the crypto could be done by the trusted agent proper, the mpeg decoding by a user-mode component; compromise the mpeg decoder, and you just get plaintext, not keys. Various divisions could be envisaged.

Given that most current applications don't even get the simplest of applications of encryption right (store key and password in the encrypted file, check if the password is right by string comparison is surprisingly common), the prospects are not good for general applications. However it becomes more feasible to build secure applications in the environment where it matters, or where the consumer cares sufficiently to pay for the difference in development cost.

Of course all this assumes Microsoft manages to securely implement a TOR and SCP interface. And whether they manage to successfully use trusted IO paths to prevent the OS and applications from tricking the user into bypassing intended trusted agent functionality (another interesting sub-problem). CC EAL3 on the SCP is a good start, but they have pressures to make the TOR and Trusted Agent APIs flexible, so we'll see how that works out.

Adam
--
http://www.cypherspace.org/adam/

On Mon, Aug 12, 2002 at 04:32:05PM -0400, Tim Dierks wrote:
At 09:07 PM 8/12/2002 +0100, Adam Back wrote:
At some level there has to be a trade-off between what you put in trusted agent space and what becomes application code. If you put the whole application in trusted agent space, while then all its application logic is fully protected, the danger will be that you have added too much code to reasonably audit, so people will be able to gain access to that trusted agent via buffer overflow.
I agree; I think the system as you describe it could work and would be secure, if correctly executed. However, I think it is infeasible to generally implement commercially viable software, especially in the consumer market, that will be secure under this model. Either the functionality will be too restricted to be accepted by the market, or there will be a set of software flaws that allow the system to be penetrated.
The challenge is to put all of the functionality which has access to content inside of a secure perimeter, while keeping the perimeter secure from any data leakage or privilege escalation. [...]

-- On 12 Aug 2002 at 16:32, Tim Dierks wrote:
I'm sure that the whole system is secure in theory, but I believe that it cannot be securely implemented in practice and that the implied constraints on use & usability will be unpalatable to consumers and vendors.
Or to say the same thing more pithily: if it really is going to be voluntary, it really is not going to give Hollywood what they want. If it really gives Hollywood what they want, it is really going to have to be forced down people's throats.

--digsig James A. Donald

Adam Back writes:
So there are practical limits stemming from realities to do with code complexity being inversely proportional to auditability and security, but the extra ring -1, remote attestation, sealing and integrity metrics really do offer some security advantages over the current situation.
You're wearing your programmer's hat when you say that. But the problem isn't programming, but is instead economic. Switch hats.

The changes that you list above may or may not offer some security advantages. Who cares? What really matters is whether they increase the cost of copying. I say that the answer is no, for a very simple reason: breaking into your own computer is a "victimless" crime.

In a crime there are at least two parties: the victim and the perpetrator. What makes the so-called victimless crime unique is that the victim is not present for the perpetration of the crime. In such a crime, all of the perpetrators have reason to keep silent about the commission of the crime. So it will be with people breaking into their own TCPA-protected computer and application. Nobody with evidence of the crime is interested in reporting the crime, nor in stopping further crimes.

Yes, the TCPA hardware introduces difficulties. If there is a way around them in software, then someone need only write it once. The whole TCPA house of cards relies on no card ever falling down. Once it falls down, people have unrestricted access to content. And that means that we go back to today's game, where the contents of CDs are open and available for modification. Someone could distribute a pile of "random" bits, which, when xored with the encrypted copy, becomes an unencrypted copy.

--
-russ nelson  http://russnelson.com
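That last point is easy to make concrete: the "pile of random bits" is just the XOR difference between the protected copy and the plaintext, so distributing it is equivalent to distributing the plaintext itself. A small sketch with made-up data:

    import os

    plaintext = b"the unprotected contents of the CD"
    protected = os.urandom(len(plaintext))      # stand-in for the protected copy

    # The "random-looking" pad someone could distribute:
    pad = bytes(p ^ c for p, c in zip(plaintext, protected))

    # Anyone with the protected copy plus the pad recovers the plaintext:
    assert bytes(c ^ k for c, k in zip(protected, pad)) == plaintext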
participants (5)
- AARG! Anonymous
- Adam Back
- James A. Donald
- Russell Nelson
- Tim Dierks