I want to follow up on Adam's message because, to be honest, I missed his point before. I thought he was bringing up the old claim that these systems would "give the TCPA root" on your computer. Instead, Adam is making a new point, and a good one, but to understand it you need a true picture of TCPA rather than the false one that so many cypherpunks have been promoting. Earlier, Adam proposed this definition of TCPA/Palladium's function and purpose:
"Palladium provides an extensible, general purpose programmable dongle-like functionality implemented by an ensemble of hardware and software which provides functionality which can, and likely will be used to expand centralised control points by OS vendors, Content Distrbuters and Governments."
IMO this is total bullshit, political rhetoric that is content-free compared to the definition I offered:

: Allow computers separated on the internet to cooperate and share data
: and computations such that no one can get access to the data outside
: the limitations and rules imposed by the applications.

It seems to me that my definition is far more useful and appropriate in really understanding what TCPA/Palladium are all about. Adam, what do you think?

If we stick to my definition, you will come to understand that the purpose of TCPA is to allow application writers to create closed spheres of trust, where the application sets the rules for how the data is handled. It's not just DRM; it's Napster and banking and a myriad other applications, each of which can control its own sensitive data such that no one can break the rules.

At least, that's the theory. But Adam points out a weak spot. Ultimately, applications trust each other because they know that the remote systems can't be virtualized. The apps are running on real hardware which has real protections. But applications know this only because the hardware has a built-in key which carries a certificate from the manufacturer, who is called the TPME in TCPA. As the applications all join hands across the net, each one shows its cert (in effect) and all know that they are running on legitimate hardware.

So the weak spot is this: anyone who has the TPME key can run a virtualized TCPA, and no one will be the wiser. With the TPME key they can create their own certificate that shows that they have legitimate hardware, when they actually don't. Ultimately this lets them run a rogue client that totally cheats, disobeys all the restrictions, shows the user all of the data which is supposed to be secret, and no one can tell. Furthermore, if people did somehow become suspicious about one particular machine, with access to the TPME key the eavesdroppers could just create a new virtual TPM and start the fraud all over again.

It's analogous to how someone with Verisign's key could masquerade as any secure web site they wanted. But it's worse, because TCPA is almost infinitely more powerful than PKI, so there is going to be much more temptation to use it and to rely on it.

Of course, this will be inherently somewhat self-limiting as people learn more about it, and realize that the security provided by TCPA/Palladium, no matter how good the hardware becomes, will always be limited by the political factors that guard control of the TPME keys. (I say keys because more than one company will likely manufacture TPMs. Also, in TCPA there are two other certifiers: one who certifies the motherboard and computer design, and another who certifies that the board was constructed according to the certified design. The NSA would probably have to get all three keys, but this wouldn't be that much harder than getting just one. And if there are multiple manufacturers, then only one key from each of the three categories is needed.)
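To make this weak spot concrete, here is a toy sketch in Python using the pyca/cryptography package. Everything in it is my own invention for illustration, not the real TCPA structures: the whole cert chain is collapsed to a single signature over the TPM's public key. The point is simply that to a remote verifier, an endorsement minted for a purely software "TPM" is bit-for-bit as convincing as one minted for real silicon:

    # Toy model of TCPA endorsement (pyca/cryptography). All names here
    # (endorse, check_endorsement, etc.) are mine, not from the TCPA spec;
    # the real cert chain is collapsed to one signature for illustration.
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    def pub_der(pub):
        # Serialize a public key; stands in for the body of a real cert.
        return pub.public_bytes(serialization.Encoding.DER,
                                serialization.PublicFormat.SubjectPublicKeyInfo)

    def endorse(tpme_priv, tpm_pub):
        # The TPME "endorses" a TPM by signing its public key.
        return tpme_priv.sign(pub_der(tpm_pub), padding.PKCS1v15(),
                              hashes.SHA256())

    def check_endorsement(tpme_pub, cert, tpm_pub):
        # What a remote application does: check the endorsement against
        # the well-known TPME public key. Raises InvalidSignature on failure.
        tpme_pub.verify(cert, pub_der(tpm_pub), padding.PKCS1v15(),
                        hashes.SHA256())

    # The manufacturer's TPME key -- the root of the whole scheme.
    tpme_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    tpme_pub = tpme_priv.public_key()

    # A legitimate TPM, whose private key lives in tamper-resistant silicon.
    hw_tpm = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    hw_cert = endorse(tpme_priv, hw_tpm.public_key())

    # An eavesdropper holding the TPME key endorses a "TPM" that is really
    # just software running under a debugger.
    virt_tpm = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    virt_cert = endorse(tpme_priv, virt_tpm.public_key())

    # A remote application cannot tell the difference: both verify cleanly.
    check_endorsement(tpme_pub, hw_cert, hw_tpm.public_key())
    check_endorsement(tpme_pub, virt_cert, virt_tpm.public_key())
    print("both endorsements verify; the virtual TPM passes as real")

The only thing separating the two cases is who holds tpme_priv. That is the whole security argument, and it is a political one, not a cryptographic one.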
To protect against this, Adam offers various solutions. One is to do crypto inside the TCPA boundary. But that's pointless: if the crypto worked, you probably wouldn't need TCPA. Realistically, most TCPA applications can't be cryptographically protected. "Computing with encrypted instances" is a fantasy. That's why we don't have all those secure applications already.

Another is to use a web of trust to replace or add to the TPME certs. Here's a hint. Webs of trust don't work. Either they require strong connections, in which case they are too sparse, or they allow weak connections, in which case they are meaningless and anyone can get in.

I have a couple of suggestions. One early application for TCPA is in closed corporate networks. In that case the company usually buys all the computers and prepares them before giving them to the employees. At that time, the company could read out the TPM public key and sign it with the corporate key. Then they could use that cert rather than the TPME cert. This would protect the company's sensitive data against eavesdroppers who manage to virtualize their hardware. (A sketch of this enrollment step appears at the end of this message.)

For the larger public network, the first thing I would suggest is that the TPME key ought to be in hardware, so it can't be given out freely. Of course the NSA could still come in and get their virtual-TPM keys signed one at a time. So the next step is that the device holding the TPME key must be managed in a high-security environment. This may be difficult, given the need to sign potentially thousands of TPM keys a day, but I think it has to be done. I want to see watchdogs from the EFF and a lot of other groups sitting there 24 hours a day watching over the device. Remember how Clipper was going to use a vault, split keys and all those elaborate precautions? We need at least that much security.

Think about it: this one innocuous little box holding the TPME key could ultimately be the root of trust for the entire world. IMO we should spare no expense in guarding it and making sure it is used properly. With enough different interest groups keeping watch, we should be able to keep it from being used for anything other than its defined purpose.
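Coming back to the corporate-network suggestion, here is the same kind of toy sketch of the enrollment step. Again, provision, corp_check and the rest are made-up names, and a real deployment would use proper X.509 certs and an inventory of enrolled machines; the point is that company applications check the corporate key rather than the TPME key, so even someone holding the TPME key cannot get a virtual TPM accepted without also compromising the corporate key:

    # Toy model of corporate enrollment (pyca/cryptography); provision and
    # corp_check are names I made up for illustration.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    def pub_der(pub):
        return pub.public_bytes(serialization.Encoding.DER,
                                serialization.PublicFormat.SubjectPublicKeyInfo)

    # The corporate signing key, plus the TPME key we assume the NSA holds.
    corp_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    corp_pub = corp_priv.public_key()
    tpme_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    def provision(tpm_pub):
        # Done once per machine, in person, before handing it to an employee:
        # read out the TPM public key and sign it with the corporate key.
        return corp_priv.sign(pub_der(tpm_pub), padding.PKCS1v15(),
                              hashes.SHA256())

    def corp_check(cert, tpm_pub):
        # Company applications verify against corp_pub, NOT the TPME key.
        try:
            corp_pub.verify(cert, pub_der(tpm_pub), padding.PKCS1v15(),
                            hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    # A machine the company bought and provisioned itself:
    employee_tpm = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    employee_cert = provision(employee_tpm.public_key())

    # The TPME-key holder can mint a virtual TPM with a perfectly valid
    # *manufacturer* endorsement, but cannot produce a *corporate* one.
    virt_tpm = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    forged_cert = tpme_priv.sign(pub_der(virt_tpm.public_key()),
                                 padding.PKCS1v15(), hashes.SHA256())

    print(corp_check(employee_cert, employee_tpm.public_key()))  # True
    print(corp_check(forged_cert, virt_tpm.public_key()))        # False

This doesn't help the larger public network, but within one company it moves the root of trust from the TPME's vault to the company's own key.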