Here are some more thoughts on how cryptography could be used to enhance user privacy in a system like TCPA. Even if the TCPA group is not receptive to these proposals, it would be useful to have an understanding of the security issues. And the same issues arise in many other kinds of systems which use certificates with some degree of anonymity, so the discussion is relevant even beyond TCPA.

The basic requirement is that users have a certificate on a long-term key which proves they are part of the system, but they don't want to show that cert or that key for most of their interactions, due to privacy concerns. They want to have their identity protected, while still being able to prove that they do have the appropriate cert. In the case of TCPA the key is locked into the TPM chip, the "endorsement key"; and the cert is called the "endorsement certificate", expected to be issued by the chip manufacturer. Let us call the originating cert issuer the CA in this document, and the long-term cert the "permanent certificate".

A secondary requirement is for some kind of revocation in the case of misuse. For TCPA this would mean cracking the TPM and extracting its key. I can see two situations where this might lead to revocation. The first is a "global" crack, where the extracted TPM key is published on the net, so that everyone can falsely claim to be part of the TCPA system. That's a pretty obvious case where the key must be revoked for the system to have any integrity at all. The second case is a "local" crack, where a user has extracted his TPM key but keeps it secret, using it to cheat the TCPA protocols. This would be much harder to detect, and perhaps equally significantly, much harder to prove. Nevertheless, some way of responding to this situation is a desirable security feature.

The TCPA solution is to use one or more Privacy CAs. You supply your permanent cert and a new short-term "identity" key; the Privacy CA validates the cert and then signs your key, giving you a new cert on the identity key. For routine use on the net, you show your identity cert and use your identity key; your permanent key and cert are never shown except to the Privacy CA.

This means that the Privacy CA has the power to revoke your anonymity; and worse, he (or more precisely, his key) has the power to create bogus identities. On the plus side, the Privacy CA can check a revocation list and not issue a new identity cert if the permanent key has been revoked. And if someone has done a local crack and the evidence is strong enough, the Privacy CA can revoke his anonymity and allow his permanent key to be revoked.

Let us now consider some cryptographic alternatives. The first is to use Chaum blinding for the Privacy CA interaction. As before, the user supplies his permanent cert to prove that he is a legitimate part of the system, but instead of providing an identity key to be certified, he supplies it in blinded form. The Privacy CA signs this blinded key, the user strips the blinding, and he is left with a cert from the Privacy CA on his identity key. He uses this as in the previous example, showing his identity cert and using his identity key. In this system, the Privacy CA no longer has the power to revoke your anonymity, because he only saw a blinded version of your identity key. However, the Privacy CA retains the power to create bogus identities, so that security risk is still there.
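To make the blinding step concrete, here is a minimal sketch of a Chaum-style blind signature over textbook RSA. The parameters are toy-sized and the names are illustrative; a real deployment would use full-size keys and a proper blind-signature construction with message encoding.

```python
# Toy Chaum blind signature over textbook RSA (illustrative only).
import hashlib

# Privacy CA's RSA key (demo-sized primes; far too small for real use).
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))   # requires Python 3.8+

def H(data: bytes) -> int:
    """Hash the user's identity public key down to an integer mod n."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# --- User side: blind the identity key before sending it to the CA ---
identity_key_hash = H(b"user identity public key")
r = 7                                  # blinding factor, gcd(r, n) == 1
blinded = (identity_key_hash * pow(r, e, n)) % n

# --- Privacy CA side: verify the permanent cert (not shown here), then
# sign the blinded value without learning the underlying identity key ---
blinded_sig = pow(blinded, d, n)

# --- User side: strip the blinding, leaving a CA signature (cert) on
# the identity key that the CA cannot later link to this session ---
sig = (blinded_sig * pow(r, -1, n)) % n

# Anyone can verify the resulting identity cert against the CA key.
assert pow(sig, e, n) == identity_key_hash
print("unblinded signature verifies")
```

The CA signs without ever seeing the identity key, which is exactly why it also loses the ability to link the resulting cert back to the permanent cert it checked.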
If there has been a global crack, and a permanent key has been revoked, the Privacy CA can check the revocation list and prevent that user from acquiring new identities, so revocation works for global cracks. However, for local cracks, where there is suspicious behavior, there is no way to track down the permanent key associated with the cheater. All his interactions are done with an identity key which is unlinkable. So there is no way to respond to local cracks and revoke the keys.

Actually, in this system the Privacy CA is not really protecting anyone's privacy, because it doesn't see any identities. There is no need for multiple Privacy CAs and it would make more sense to merge the Privacy CA and the original CA that issues the permanent certs. That way there would be only one agency with the power to forge keys, which would improve accountability and auditability.

One problem with revocation in both of these systems, especially the one with Chaum blinding, is that existing identity certs (from before the fraud was detected) may still be usable. It is probably necessary to have identity certs be valid for only a limited time so that users with revoked keys are not able to continue to use their old identity certs.

Brands credentials provide a more flexible and powerful approach than Chaum blinding and can potentially provide improvements. The basic setup is the same: users would go to a Privacy CA and show their permanent cert, getting a new cert on an identity key which they would use on the net. The difference is that Brands provides for "restrictive blinding". This allows the Privacy CA to issue a cert on a key which would be unlinkable to the permanent key under normal circumstances, but perhaps linkability could be established in some cases.

It's not entirely clear how this technology could best be exploited to solve the problems. One possibility, for example, would be to encode information about the permanent key in the restrictive blinding. This would allow users to use their identity keys freely; but upon request they could prove things about their associated permanent keys. They could, for example, reveal the permanent key value associated with their identity key, and do so unforgeably. Or they could prove that their permanent key is not on a given list of revoked keys. Similar logical operations are possible including partial revelation of the permanent key information.

However it does not appear possible to solve the case of a local crack using this technology. In that case it is unlikely that the cheater would respond favorably to a request to reveal the permanent key associated with his identity, so that it could be revoked. Brands' technology would allow him to do so in a convincing manner, but he would not cooperate.

In the end it's not clear how much Brands certificates really add over the basic Chaum blinding in this application. With the specific usage described above, they have the same basic security properties as in the case of Chaum blinding, except potentially for being able to prove that an identity cert is not associated with a revoked permanent key. Perhaps some other approach using his technology would be more successful.

One other cryptographic method that might be relevant is the group signature. This allows someone to sign with a key where he does not reveal his signing key, but he proves that it is part of some group. In the relevant variants, the group is defined as the set of keys which have been certified by a "group membership key".
This approach can therefore dispense with the Privacy CA entirely, and with blinding. Instead, the permanent key itself is used for signing on the net, but via a group signature which does not reveal the key value. Rather, the group signature protocol proves that the key exists and that it has been certified by the CA.

The main problem with the group signature approach is handling revocation. In the case of a global crack, where someone has published his permanent key, at a minimum it is necessary to create a revocation list for those keys. This means that the group signature protocol must be extended to not only prove that a key exists and has been certified, but also that the key is not on the list of revoked keys - and to do this without revealing the key itself. That's a pretty complicated requirement which is pushing the state of the art. There is a paper being presented at Crypto 02 which claims to offer the first group signature scheme with efficient revocation.

Group signatures also offer an optional mechanism which can deal with local cracks. The original group signature concept included the notion of a "revocation manager" who could link signatures to keys - that is, there is one trusted party who can tell which key issued a given signature. In most of the modern variants, this is accomplished by creating, as part of the group signature, an encrypted blob which holds the user's permanent key, where that blob can be encrypted to any specified key. The only one who can tell who made the signature is the key holder that the blob is encrypted to.

If this mechanism is used, we can bring back the Privacy CA, who now functions as the party who can link signatures to permanent keys. When someone uses a group signature to participate in a TCPA network, he would optionally specify a Privacy CA who could reveal his permanent key. This would allow for a multiplicity of Privacy CAs with different policies about when and how they would reveal identities, similar to the original (non-cryptographic) TCPA concept. Then it would be up to the recipients of the signature to judge whether they trusted that Privacy CA to unmask rogues upon sufficient evidence.

The main advantage of this scheme over the non-cryptographic TCPA method is, first, that the Privacy CA is optional - users don't have to reveal their identity to anyone if they don't want to; and second, that the Privacy CA no longer has the power to forge identities and disrupt the system. This strengthens the overall security of the system.

Summing up, none of the alternatives presented here is ideal. The current scheme is among the worst, as it provides the weakest privacy protection and allows the Privacy CAs to break the security of the entire system. The Chaum and Brands blinding methods strengthen privacy at the cost of reducing the ability to respond to local cracks, where the user extracts his TPM key but keeps it to himself. Group signatures provide good privacy protection and can optionally respond to local cracks, but they are cutting edge cryptography and are generally less efficient than the other methods.
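To illustrate the "encrypted blob" idea, here is a minimal sketch of identity escrow using toy ElGamal encryption. The parameters and names are illustrative only, and a real group signature would additionally include a zero-knowledge proof that the blob really encrypts the certified permanent key; none of that is shown here.

```python
# Toy identity escrow: each signature carries the signer's permanent-key
# identifier encrypted to a chosen revocation manager (Privacy CA).
# Illustrative only; a real scheme needs verifiable encryption.
import hashlib
import secrets

P = 2**127 - 1   # toy prime modulus (far too small for real use)
G = 3            # toy base

def keygen():
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)            # (private, public)

def escrow_encrypt(pub, permanent_key_id: bytes):
    """ElGamal-encrypt an identifier of the permanent key to the manager."""
    m = int.from_bytes(hashlib.sha256(permanent_key_id).digest(), "big") % P
    k = secrets.randbelow(P - 2) + 1
    return pow(G, k, P), (m * pow(pub, k, P)) % P

def escrow_open(priv, blob):
    """Only the chosen revocation manager can link the signature."""
    c1, c2 = blob
    return (c2 * pow(pow(c1, priv, P), -1, P)) % P

# The signer picks which revocation manager (if any) can unmask him.
mgr_priv, mgr_pub = keygen()
blob = escrow_encrypt(mgr_pub, b"permanent endorsement key #42")

# Later, given strong evidence of abuse, the manager opens the blob and
# recovers the identifier, which can then go on a revocation list.
recovered = escrow_open(mgr_priv, blob)
expected = int.from_bytes(
    hashlib.sha256(b"permanent endorsement key #42").digest(), "big") % P
assert recovered == expected
print("revocation manager recovered the permanent key identifier")
```

The recipient of a signature can see which manager key the blob is addressed to and decide whether that manager's disclosure policy is acceptable.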
On Fri, 16 Aug 2002, AARG! Anonymous wrote:
Here are some more thoughts on how cryptography could be used to enhance user privacy in a system like TCPA. Even if the TCPA group is not receptive to these proposals, it would be useful to have an understanding of the security issues. And the same issues arise in many other kinds of systems which use certificates with some degree of anonymity, so the discussion is relevant even beyond TCPA.
OK, I'm going to discuss it from a philosophical perspective, i.e. I'm just having fun with this.
The basic requirement is that users have a certificate on a long-term key which proves they are part of the system, but they don't want to show that cert or that key for most of their interactions, due to privacy concerns. They want to have their identity protected, while still being able to prove that they do have the appropriate cert. In the case of TCPA the key is locked into the TPM chip, the "endorsement key"; and the cert is called the "endorsement certificate", expected to be issued by the chip manufacturer. Let us call the originating cert issuer the CA in this document, and the long-term cert the "permanent certificate".
I don't like the idea that users *must* have a "certificate". Why can't each person develop their own personal levels of trust and associate them with their own public key? Using multiple channels, people can prove their key is their word. If any company wants to associate a certificate with a customer, that can have lots of meanings to lots of other people. I don't see the usefulness of a "permanent certificate". Human interaction over electronic media has to deal with monkeys, because that's what humans are :-)
A secondary requirement is for some kind of revocation in the case of misuse. For TCPA this would mean cracking the TPM and extracting its key. I can see two situations where this might lead to revocation. The first is a "global" crack, where the extracted TPM key is published on the net, so that everyone can falsely claim to be part of the TCPA system. That's a pretty obvious case where the key must be revoked for the system to have any integrity at all. The second case is a "local" crack, where a user has extracted his TPM key but keeps it secret, using it to cheat the TCPA protocols. This would be much harder to detect, and perhaps equally significantly, much harder to prove. Nevertheless, some way of responding to this situation is a desirable security feature.
Ouch, that doesn't sound too robust.
The TCPA solution is to use one or more Privacy CAs. You supply your permanent cert and a new short-term "identity" key; the Privacy CA validates the cert and then signs your key, giving you a new cert on the identity key. For routine use on the net, you show your identity cert and use your identity key; your permanent key and cert are never shown except to the Privacy CA.
This means that the Privacy CA has the power to revoke your anonymity; and worse, he (or more precisely, his key) has the power to create bogus identities. On the plus side, the Privacy CA can check a revocation list and not issue a new identity cert if the permanent key has been revoked. And if someone has done a local crack and the evidence is strong enough, the Privacy CA can revoke his anonymity and allow his permanent key to be revoked.
The CA has a bit too much power if you ask me. Those are some really good reasons not to like the idea of a "permanent certificate" ruled by one (nasty?) person. [...]
Actually, in this system the Privacy CA is not really protecting anyone's privacy, because it doesn't see any identities. There is no need for multiple Privacy CAs and it would make more sense to merge the Privacy CA and the original CA that issues the permanent certs. That way there would be only one agency with the power to forge keys, which would improve accountability and auditability.
I really, REALLY, *REALLY*, don't like the idea of one entity having the ability to create or destroy any person's ability to use their computer at whim. You are suggesting that one person (or small group) has the power to create (or not) and revoke (or not!) any and all TPMs! I don't know how to describe my astonishment at the lack of comprehension of history. [...]
It's not entirely clear how this technology could best be exploited to solve the problems. One possibility, for example, would be to encode information about the permanent key in the restrictive blinding. This would allow users to use their identity keys freely; but upon request they could prove things about their associated permanent keys. They could, for example, reveal the permanent key value associated with their identity key, and do so unforgeably. Or they could prove that their permanent key is not on a given list of revoked keys. Similar logical operations are possible including partial revelation of the permanent key information.
There's no problem if we just extend our normal concepts of trust between humans on a face to face level onto the net. Person to person, peer to peer, face to face. It's all the same thing, and using the technology to make it happen the same way it happens in the street is going to be the ultimate success story. Nobody controls everything, but some people have some control of some resources. People who attempt to rule the whole world usually burn out sooner or later :-) [...]
The main problem with the group signature approach is handling revocation. In the case of a global crack, where someone has published his permanent key, at a minimum it is necessary to create a revocation list for those keys. This means that the group signature protocol must be extended to not only prove that a key exists and has been certified, but also that the key is not on the list of revoked keys - and to do this without revealing the key itself. That's a pretty complicated requirement which is pushing the state of the art. There is a paper being presented at Crypto 02 which claims to offer the first group signature scheme with efficient revocation.
How about everybody on their block signs each other's keys, and when one monkey misbehaves, the other ones toss 'em out of the troop. A Web of trust is kind of like a group signature. [...]
Summing up, none of the alternatives presented here is ideal. The current scheme is among the worst, as it provides the weakest privacy protection and allows the Privacy CAs to break the security of the entire system. The Chaum and Brands blinding methods strengthen privacy at the cost of reducing the ability to respond to local cracks, where the user extracts his TPM key but keeps it to himself. Group signatures provide good privacy protection and can optionally respond to local cracks, but they are cutting edge cryptography and are generally less efficient than the other methods.
If a company wants control of their hardware, they should put their own private keys in each machine. They can use that with each person's public key and create a reasonably secure system for their business. They don't need a privacy CA, and they can prove to any other firm that the person with the box on their premises *must* be from the right place.

If TCPA is going to fly as a real business it can't just be for DRM. If copyright owners want people to buy their stuff, they'll have to sell a "copyright owner's certificate" which can be based on the customer's key. Again, nobody needs a privacy CA.

There is no valid argument for centralized control *of anything*. TCPA is a valid concept in and of itself. It's way too dangerous a weapon if forced down people's throats the wrong way. So the TCPA guys should sell their boxes as useful tools to lots of big corporations, and forget about the "content industry". There's a lot more money to be made solving real problems for real people.

Patience, persistence, truth,
Dr. mike
With Brands digital credentials (or Chaum's credentials) another approach is to make the endorsement key pair and certificate the anonymous credential. That way you can use the endorsement key and certificate directly rather than having to obtain (blinded) identity certificates from a privacy CA and trust the privacy CA not to issue identity certificates without seeing a corresponding endorsement credential.

However the idea with the identity certificates is that you can use them once only and keep fetching new ones to get unlinkable anonymity, or you can re-use them a bit to get pseudonymity, where you might use a different pseudonym for each service where you are otherwise linkable to that service anyway.

With Brands credentials the smart card setting allows you to have more compact and computationally cheap control of the credential from within a smart card, which you could apply to the TPM/SCP. So you can fit more (unnamed) pseudonym credentials on the TPM to start with. You could perhaps more simply rely on Brands' credential-lending-discouragement feature (the ability to encode arbitrary values in the credential private key) to prevent break-once-virtualize-anywhere.

For discarding pseudonyms, and when you want to use lots of pseudonyms (one-use unlinkable) and so need to refresh the certificates, you could use the refresh protocol, which allows you to exchange a credential for a new one without trusting the privacy CA for your privacy. Unfortunately I think you again are forced to trust the privacy CA not to create fresh virtualized credentials. Perhaps there would be some way to have the privacy CA be a different CA to the endorsement CA and for the privacy CA to only be able to refresh existing credentials issued by the endorsement CA, but not to create fresh ones. Or perhaps some restriction could be placed on what the privacy CA could do, of the form that if the privacy CA issued new certificates it would reveal its private key.

Also relevant is "An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation", Jan Camenisch and Anna Lysyanskaya, Eurocrypt 01:

http://eprint.iacr.org/2001/019/

These credentials allow the user to do unlinkable multi-show without involving a CA. They are somewhat less efficient than Chaum or Brands credentials though. But for this application this removes the need to trust a CA, or even to have a CA: the endorsement key and credential can be inserted by the manufacturer, can be used indefinitely many times, and are not linkable.
A secondary requirement is for some kind of revocation in the case of misuse.
As you point out, unlinkable anonymity tends to complicate revocation. I think Camenisch's optional anonymity revocation has similar properties in allowing a designated entity to link credentials. Another, less "TTP-based" approach to unlinkable but revocable credentials is Stubblebine, Syverson and Goldschlag, "Unlinkable Serial Transactions", ACM Transactions on Information and System Security, 1999:

http://www.stubblebine.com/99tissec-ust.pdf

(It's quite simple: you just have to present and relinquish a previous pseudonym credential to get a new credential; if the credential is due to be revoked you will not get a fresh credential.)

I think I would define away the problem of local breaks. I mean, the end-user does own their own hardware, and if they do break it you can't detect it anyway. If it's anything like playstation mod-chips some proportion of the population would in fact do this. Maybe 1-5% or whatever. I think it makes sense to just live with this, and of course not make it illegal.

Credentials which are shared are easier to revoke -- knowledge of the private keys typically will render most schemes linkable and revocable. This leaves only online lending, which is anyway harder to prevent.

Adam
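A toy sketch of that serial-transaction exchange, with hypothetical names: you surrender the old credential to get a new one, and revocation simply means refusing the exchange. In the real protocol the new credential is issued via a blind signature so the issuer cannot link old to new; that blinding step is elided here.

```python
# Toy model of Unlinkable Serial Transactions-style refresh.
# Names and structure are illustrative, not the actual protocol.
import secrets
from typing import Optional

class RefreshCA:
    def __init__(self):
        self.spent = set()      # serials already relinquished
        self.revoked = set()    # serials flagged for revocation
        self.valid = set()      # serials currently outstanding

    def issue_initial(self) -> str:
        s = secrets.token_hex(16)
        self.valid.add(s)
        return s

    def refresh(self, old_serial: str) -> Optional[str]:
        """Exchange an old credential for a fresh one, unless revoked."""
        if old_serial not in self.valid or old_serial in self.spent:
            return None
        self.spent.add(old_serial)
        self.valid.discard(old_serial)
        if old_serial in self.revoked:
            return None            # revocation == no fresh credential
        new_serial = secrets.token_hex(16)
        self.valid.add(new_serial)
        return new_serial

ca = RefreshCA()
cred = ca.issue_initial()
cred = ca.refresh(cred)            # normal refresh succeeds
ca.revoked.add(cred)               # credential later flagged
assert ca.refresh(cred) is None    # refresh (and further use) denied
print("revoked credential could not be refreshed")
```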
On Fri, Aug 16, 2002 at 03:56:09PM -0700, AARG!Anonymous wrote:
Here are some more thoughts on how cryptography could be used to enhance user privacy in a system like TCPA. Even if the TCPA group is not receptive to these proposals, it would be useful to have an understanding of the security issues. And the same issues arise in many other kinds of systems which use certificates with some degree of anonymity, so the discussion is relevant even beyond TCPA.
On Sun, Aug 18, 2002 at 04:58:56PM +0100, Adam Back wrote:
[...] "Also relevant is An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation", Jan Camenisch and Anna Lysyanskaya, Eurocrypt 01
http://eprint.iacr.org/2001/019/
These credentials allow the user to do unlinkable multi-show without involving a CA. They are somewhat less efficient than Chaum or Brands credentials though. But for this application this removes the need to trust a CA, or even to have a CA: the endorsement key and credential can be inserted by the manufacturer, can be used indefinitely many times, and are not linkable.
There was some off-list discussion about the possibility of sharing these credentials once a given credential is extracted from its TPM by a user who broke the tamper resistance of his TPM. I also said:
[...] Credentials which are shared are easier to revoke -- knowledge of the private keys typically will render most schemes linkable and revocable. This leaves only online lending which is anyway harder to prevent.
Because Camenisch credentials are unlinkable multi-show, it is harder to recognize sharing, so the user could undetectably share credentials with a small group that he trusts. (By comparison, with linkable pseudonymous credentials and a privacy CA, the issuer and/or verifier would see unusually high activity from a given pseudonym or TPM endorsement key if the corresponding credential were shared too widely.)

However if the Camenisch (unlinkable multi-show) credential were shared too widely the issuer may also learn the secret key and hence be able to link and so revoke the overly-shared credentials. This combats sharing, though only to a limited extent.

Another idea to improve upon this inherent risk of sharing too widely may be to use a protocol with which it is not safe to do parallel shows. (Some ZKPs are not secure when you engage in multiple show protocols in parallel. Usually this is considered a bad thing, and steps are taken to allow safe parallel show.) For this application, a show protocol in which it is not safe to engage in parallel shows may frustrate sharing: someone who shared the credential too widely would have difficulty coordinating amongst the sharees not to show the same credential in parallel. I notice Camenisch et al. mention steps to avoid the parallel-showing problem, so perhaps that feature could be reintroduced. In contrast, the TPM can easily ensure that the credential is not used in parallel shows.

Adam
--
http://www.cypherspace.org/adam/
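A minimal sketch of that last point, with a hypothetical interface (a real TPM would enforce this in firmware): the TPM simply refuses to begin a new show while another is in progress, whereas a credential copied to many sharees has no such single point of serialization.

```python
# Toy illustration: a TPM-held credential can serialize its show
# protocol, refusing to start a second show while one is outstanding.
# A credential shared widely in software has no such choke point.
import threading

class TpmCredential:
    def __init__(self):
        self._show_in_progress = threading.Lock()

    def begin_show(self) -> bool:
        """Start a show; fail if another show is already in progress."""
        return self._show_in_progress.acquire(blocking=False)

    def finish_show(self):
        self._show_in_progress.release()

cred = TpmCredential()
assert cred.begin_show()          # first show starts
assert not cred.begin_show()      # parallel show refused by the TPM
cred.finish_show()
assert cred.begin_show()          # sequential shows are fine
cred.finish_show()
```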
On Wed, Aug 21, 2002 at 03:24:21AM +0100, Adam Back wrote:
Because Camenisch credentials are unlinkable multi-show, it is harder to recognize sharing, so the user could undetectably share credentials with a small group that he trusts.
[...]
However if the Camenisch (unlinkable multi-show) credential were shared too widely the issuer may also learn the secret key and hence be able to link and so revoke the overly-shared credentials. This combats sharing, though only to a limited extent.
Since writing this I realised that there is a problem revoking unlinkable multi-show credentials:

- I was presuming that revealing the credential and its secret key is sufficient to allow someone to link shows of that credential.

- but to link you'd have to try each revoked credential in turn. Therefore the verifier would have to perform work linear in the number of revoked credentials at each show, for the duration of the epoch.

Anonymous suggests one way out is to just define the issuing CA and the refreshing CA to be the same entity. Then you already have to trust the hardware manufacturer not to issue certs whose secrets are outside of a TPM. In this case Brands or Chaum credentials work. The remaining desiderata are:

- it is not ideal from a risk-management perspective to have the hardware manufacturer's endorsement private key online to refresh certificates (or in general for there to be any private key online that allows issuing of credentials whose private keys lie outside a TPM);

- it is not ideal to have to run an online protocol with an otherwise non-existent third party (a credential-refresh CA) in order to avoid linkability.

Other ideas I gave in an earlier post towards fixing these remaining issues, now that it seems unlinkable multi-show credentials won't work:

| Perhaps there would be some way to have the privacy CA be a different
| CA to the endorsement CA and for the privacy CA to only be able to
| refresh existing credentials issued by the endorsement CA, but not to
| create fresh ones.
|
| Or perhaps some restriction could be placed on what the privacy CA
| could do, of the form that if the privacy CA issued new certificates
| it would reveal its private key.

Adam
--
http://www.cypherspace.org/adam/
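To make the linear cost concrete, here is a toy sketch of that kind of check, using an illustrative discrete-log "revocation tag" and toy parameters (real verifier-local revocation schemes differ in detail, but share the per-show scan over the revocation list):

```python
# Toy illustration of why revoking unlinkable credentials costs the
# verifier work linear in the revocation list: each show carries a tag
# computed from the credential's secret under a fresh base, and the
# verifier must recompute the tag for every revoked secret.
import secrets

P = 2**127 - 1          # toy prime modulus (far too small for real use)

def show(secret: int):
    """A show includes a per-show base and a tag bound to the secret."""
    base = pow(3, secrets.randbelow(P - 2) + 1, P)
    return base, pow(base, secret, P)

def is_revoked(transcript, revoked_secrets) -> bool:
    """Verifier-local revocation check: linear in the revocation list."""
    base, tag = transcript
    return any(pow(base, x, P) == tag for x in revoked_secrets)

honest_secret = secrets.randbelow(P - 2) + 1
leaked_secret = secrets.randbelow(P - 2) + 1
revocation_list = [leaked_secret]      # secrets published or extracted

assert not is_revoked(show(honest_secret), revocation_list)
assert is_revoked(show(leaked_secret), revocation_list)
print("revoked credential detected; cost grows with the revocation list")
```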