Executing Encrypted Code
At the last meeting references were made to processors which only execute encrypted code. Decryption occurs on chip. If each chip has a unique public/secret key pair, and executes authenticated code only, there are some interesting implications.

Software piracy becomes difficult, if not impossible. Code is sold on a processor-by-processor basis; code for a different physical processor cannot be decrypted or executed. Even if it is feasible to determine the secret key stored on a chip, piracy is still hard, because the code cannot be executed on another chip without authenticating it for that chip. One could execute the code on another architecture entirely using an emulator, but the performance price would make it not worth the trouble for most software. The manufacturer of the encrypted-code processor would protect its instruction set using intellectual property law, and given the high price of a fab, it is entirely feasible to stop anybody from building a new architecture which can execute the code about as fast as the encrypted-code processor does.

Viruses are not feasible if the authentication is strong.

Retrieval of the secret key is quite difficult. Since the results of the decryption never leave the chip, the recent attacks against smart cards do not work. (In the case of an error, the authentication fails and the code does not execute; no information has to leave the chip.)

I would be interested to hear comments and corrections.

Peter Hendrickson
ph@netcom.com
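[Editor's note: a minimal sketch of the scheme Peter describes, under assumptions the post does not make: a hybrid construction with RSA-OAEP to wrap a per-title session key, a symmetric cipher (Fernet) for the code image, and an RSA-PSS vendor signature that the chip checks before it will run anything. All names (chip_secret, vendor_signing, etc.) are illustrative, not part of any real processor.]

```python
# Sketch only: per-chip key pair, code encrypted to one chip, signed by the vendor.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Each chip ships with a unique key pair; the secret half never leaves the die.
chip_secret = rsa.generate_private_key(public_exponent=65537, key_size=2048)
chip_public = chip_secret.public_key()

# The vendor holds a signing key whose public half is baked into the chip,
# so the chip executes authenticated code only.
vendor_signing = rsa.generate_private_key(public_exponent=65537, key_size=2048)
vendor_verify = vendor_signing.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def vendor_package(code: bytes) -> tuple[bytes, bytes, bytes]:
    """Encrypt `code` to one specific chip and sign it (per-processor sale)."""
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(code)
    wrapped_key = chip_public.encrypt(session_key, OAEP)
    signature = vendor_signing.sign(ciphertext, PSS, hashes.SHA256())
    return ciphertext, wrapped_key, signature

def chip_execute(ciphertext: bytes, wrapped_key: bytes, signature: bytes) -> None:
    """On-chip: verify first, decrypt second; the plaintext never leaves the die."""
    vendor_verify.verify(signature, ciphertext, PSS, hashes.SHA256())  # raises on failure
    session_key = chip_secret.decrypt(wrapped_key, OAEP)
    code = Fernet(session_key).decrypt(ciphertext)
    print("would execute", len(code), "bytes of decrypted code")

chip_execute(*vendor_package(b"\x90\x90\xc3"))  # toy "code" image
```

Wrapping the session key to one chip's public key is what makes the sale per-processor: the identical package is useless to any other chip, since no other chip can unwrap the key.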
At the last meeting references were made to processors which only execute encrypted code. Decryption occurs on chip.
If each chip has a unique public/secret key pair, and executes authenticated code only, there are some interesting implications.
Let's see... What about this scenario: Alice gets a contraband copy of PGP 4.0 off the Internet. Since the public-key algorithm is publicized so that people can encrypt software to a chip, PGP 4.0 has the ability to encode/decode/generate keys for the chip. Alice generates a public/private key pair, 0x12345678, in software. Alice goes to www.microsoft.com, orders Office '99 online, and tells Microsoft, "Hi, my name is Alice, my credit card number is 31426436136778, and my PGPentium's public key is 0x12345678." Microsoft unwittingly sends Alice a copy encrypted to 0x12345678, for which she has the private key. Alice decrypts Office '99 and re-encrypts it with the public key of her PGPentium, as well as the keys of all her friends.

Does the authentication defeat this? Our computers would only run software from Microsoft? Scary.

--
Ben Byer
root@bushing.plastic.crosslink.net
I am not a bushing
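[Editor's note: Ben's key-substitution attack can be stated in the same toy model used above. Nothing in the ordering step proves that the submitted public key belongs to silicon rather than to software, so a key pair generated on an ordinary PC unwraps the session key just as well. PGP 4.0, Office '99, and 0x12345678 are Ben's hypotheticals; the code below merely mirrors them.]

```python
# Sketch of the attack: the publisher encrypts to whatever key the buyer claims.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Alice's "chip key" is really just a key pair she generated in software.
alice_fake_chip = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The publisher, believing the claimed key, wraps the session key to it.
session_key = Fernet.generate_key()
office99 = Fernet(session_key).encrypt(b"... binary image ...")
wrapped_for_alice = alice_fake_chip.public_key().encrypt(session_key, OAEP)

# Alice unwraps in software -- no chip involved -- and now holds the plaintext.
recovered_key = alice_fake_chip.decrypt(wrapped_for_alice, OAEP)
plaintext_code = Fernet(recovered_key).decrypt(office99)

# She can re-wrap the same session key to any real chip key she likes
# (her friends' PGPentiums), defeating the per-processor binding.
friend_chip = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped_for_friend = friend_chip.public_key().encrypt(session_key, OAEP)
print(len(plaintext_code), "bytes recovered; re-wrapped for another chip")
```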
At 12:08 PM -0800 12/19/96, Peter Hendrickson wrote:
If each chip has a unique public/secret key pair, and executes authenticated code only, there are some interesting implications.
Software piracy becomes difficult, if not impossible. Code is sold on a processor by processor basis. Code for a different physical processor cannot be decrypted or executed.
This makes backup hard. That is the rock that routine copy protection ran up against. There were many, me included, who simply said, "If your product is copy protected, then I will buy from your competitor."
Viruses are not feasible if the authentication is strong.
Neither is user-written code, public-domain code, etc. If there is an un-encrypted mode for that kind of code, then viruses again become possible.

-------------------------------------------------------------------------
Bill Frantz       | I still read when I should  | Periwinkle -- Consulting
(408)356-8506     | be doing something else.    | 16345 Englewood Ave.
frantz@netcom.com | It's a vice. - R. Heinlein  | Los Gatos, CA 95032, USA
ph@netcom.com (Peter Hendrickson) writes:
At the last meeting references were made to processors which only execute encrypted code. Decryption occurs on chip. If each chip has a unique public/secret key pair, and executes authenticated code only, there are some interesting implications.
Yes, interesting indeed. It would also partially solve a problem I've been thinking about: how can I safely run code on a machine that I don't trust?

I'm working on some mobile agent / distributed computation research. The basic model is that I send an agent to a server (say, a Java interpreter) running somewhere. A lot has been written about security, i.e. how to protect the server from malicious agents. But what about protecting agents from malicious servers? Possible threat models include servers that steal an agent's proprietary code and data, or servers that deliberately misexecute the agent's code. The latter threat model is under serious consideration with the distributed DES cracking project that's being designed now.

The ultimate solution is trusted hardware on the server end. I think, for a variety of reasons, this is really unlikely to be widely deployed. But bringing the trusted hardware needed down to just a black-box CPU that decrypts on the fly is a neat idea.

Other ideas include obfuscating code (protects against theft), splitting up your computation across multiple machines (spreads the risk of theft), independently verifying the results of remote computations (protects against spoofing), or building some reputation mechanism for servers (so bad guys are identified). None of these solutions is very satisfying. I suspect that really guaranteeing safety to mobile agents is impossible, or at least very difficult, without trusted hardware. But I'm not 100% sure.

There are some interesting notes in Applied Cryptography (2nd ed.) about performing computations on encrypted data (p. 540). These algorithms seem to be of very limited application. Or are they?

If anyone has any thoughts on this issue, I'd love to hear them. If you send to cypherpunks, please also mail me privately, as I'm going offline for a few days.
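[Editor's note: one of the stopgaps Nelson lists, independently verifying the results of remote computations, can be sketched in a few lines: send the same work unit to several servers that are unlikely to collude and accept an answer only when a quorum agrees. The server stand-ins below are assumptions for illustration, not a real agent platform.]

```python
# Sketch: redundant execution on untrusted servers, accept the majority answer.
from collections import Counter
from typing import Callable

def run_with_quorum(work_unit: int,
                    servers: list[Callable[[int], int]],
                    quorum: int) -> int:
    """Execute `work_unit` on every server; return the answer a quorum agrees on."""
    answers = [server(work_unit) for server in servers]
    value, votes = Counter(answers).most_common(1)[0]
    if votes < quorum:
        raise RuntimeError("no quorum -- some server is misexecuting or lying")
    return value

honest = lambda n: sum(range(n))   # does the work correctly
lazy_liar = lambda n: 0            # a malicious server that spoofs its result

print(run_with_quorum(10, [honest, honest, lazy_liar], quorum=2))  # -> 45
```

The cost is paying for the same computation several times, which is exactly why none of these stopgaps is very satisfying compared with trusted hardware.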
There are several algorithms I've seen that allow for blind execution of arbitrary code and verification of its correctness, given the usual cryptographic assumptions. Their problem is that they are absurdly inefficient. But their existence suggests the possibility of efficient algorithms (or at least of a good paper deriving lower bounds on the complexity of such algorithms).

JWS
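[Editor's note: a toy illustration of the "computations on encrypted data" idea Nelson cites and the blind-execution results JWS mentions: textbook RSA is multiplicatively homomorphic, so a server can multiply two ciphertexts and hand back the encryption of the product without ever seeing either operand. The numbers below are deliberately tiny and unpadded; this demonstrates the principle only and is nothing like a usable scheme.]

```python
# Toy textbook-RSA homomorphism: E(a) * E(b) mod n == E(a * b).
p, q = 61, 53                        # toy primes
n, e = p * q, 17                     # "public key"
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+ modular inverse)

encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

a, b = 7, 6
ca, cb = encrypt(a), encrypt(b)      # the client hands the server only ciphertexts
c_product = (ca * cb) % n            # the server computes "blindly" on them
assert decrypt(c_product) == (a * b) % n
print("server computed a*b under encryption:", decrypt(c_product))  # 42
```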
participants (5)
- Ben Byer
- Bill Frantz
- Nelson Minar
- ph@netcom.com
- solman@MIT.EDU