Re: Thanks, Lucky, for helping to kill gnutella
Several people have objected to my point about the anti-TCPA efforts of Lucky and others causing harm to P2P applications like Gnutella. Eric Murray wrote:
Depending on the clients to "do the right thing" is fundamentally stupid.
Bram Cohen agrees:
Before claiming that the TCPA, which is from a deployment standpoint vaporware, could help with gnutella's scaling problems, you should probably learn something about what gnutella's problems are first. The truth is that gnutella's problems are mostly that it's a screamer protocol, and limiting which clients could connect would do nothing to fix that.
I will just point out that it was not my idea, but rather that Salon said that the Gnutella developers were considering moving to authorized clients. According to Eric, those developers are "fundamentally stupid." According to Bram, the Gnutella developers don't understand their own protocol, and they are supporting an idea which will not help. Apparently their belief that clients like Qtrax are hurting the system is totally wrong, and keeping such clients off the system won't help. I can't help believing the Gnutella developers know more about their own system than Bram and Eric do. If they disagree, their argument is not with me, but with the Gnutella people. Please take it there. Ant chimes in:
My copy of "Peer to Peer" (Oram, O'Reilly) is out on loan but I think Freenet and Mojo use protocols that require new users to be contributors before they become consumers.
Pete Chown echoes:
If you build a protocol which allows selfish behaviour, you have done your job badly. Preventing selfish behaviour in distributed systems is not easy, but that is the problem we need to solve. It would be a good discussion for this list.
As far as Freenet and MojoNation, we all know that the latter shut down, probably in part because the attempted traffic-control mechanisms made the whole network so unwieldy that it never worked. At least in part this was also due to malicious clients, according to the analysis at http://www.cs.rice.edu/Conferences/IPTPS02/188.pdf. And Freenet has been rendered inoperative in recent months by floods. No one knows whether these are fundamental protocol failings, the result of selfish client strategies, or calculated attacks by the RIAA and company. Both of these are object lessons in the difficulties of successful P2P networking in the face of arbitrary client attacks. Some people took issue with the personal nature of my criticism:
Your personal vendetta against Lucky is very childish.
This sort of attack doesn't do your position any good.
Right, as if my normal style has been so effective. Not one person has given me the least support in my efforts to explain the truth about TCPA and Palladium. Anyway, maybe I was too personal in singling out Lucky. He is far from the only person who has opposed TCPA.

But Lucky, in his slides at http://www.cypherpunks.to, claims that TCPA's designers had as one of their objectives "to meet the operational needs of law enforcement and intelligence services" (slide 2), and to give privileged access to users' computers to "TCPA members only" (slide 3); that TCPA has an OS downloading a "serial number revocation list" (SNRL), for which he has provided no evidence whatsoever (slide 14); that it loads an "initial list of undesirable applications", which is apparently another of his fabrications (slide 15); that TCPA applications on startup load not only a serial number revocation list but also a document revocation list, again a completely unsubstantiated claim (slide 19); and that apps then further verify that spyware is running, another fabrication (slide 20). He then implies that the DMCA applies to reverse engineering, when it has an explicit exemption for that (slide 23); that the maximum possible sentence of 5 years is always applied (slide 24); that TCPA is intended to defeat the GPL, enable information invalidation, facilitate intelligence collection, meet law enforcement needs, and more (slide 27); and that only signed code will boot in TCPA, contrary to the facts (slide 28). He provides more made-up details about the mythical DRL (slide 31), and more imaginary details about document IDs and about information monitoring and invalidation to support law enforcement and intelligence needs, none of which has anything to do with TCPA (slides 32-33). As apparent support for these he provides an out-of-context quote[1] from a Palladium manager who, if you read the whole article, was describing their determination to keep the system open (slide 34). He repeats the unfounded charge that the Hollings bill would mandate TCPA, when there's nothing in the bill that says such a thing (slide 35); and he exaggerates the penalties in that bill by quoting the maximum limits as if they were the default (slide 36).

Lucky can provide all this misinformation, all under the pretence, mind you, that this *is* TCPA. He was educating the audience, mostly people who were completely unfamiliar with the system other than some vague rumors. And this is what he presents, a tissue of lies and fabrications and unfounded sensationalism.

Don't forget, TCPA and Palladium were designed by real people. In making these charges, Lucky is not just talking about a standard, he is talking about its authors. He is saying that those people were attempting to serve intelligence needs, to make sure that people had to run spyware, to close down the system so it could keep "undesirable" applications off. He is accusing the designers of far worse than anything I have said about him. He is basically saying that they are striving to bring about a technological police state.

And yet, no one (other than me, of course) dared to criticize Lucky for these claims. He can say whatever he wants, be as outrageous as he wants, and no one says a thing. I don't know whether everyone agrees with him, or is simply unwilling to risk criticism by departing from the groupthink which is so universal around here. I asked Eric Murray, who knows something about TCPA, what he thought of some of the more ridiculous claims in Ross Anderson's FAQ (like the SNRL), and he didn't respond.
I believe it is because he is unwilling to publicly take a position in opposition to such a famous and respected figure. But anyway, maybe I was too personal in criticizing Lucky. Tell you what: I'll apologize to Lucky as soon as he apologizes to the designers of TCPA for the fabrications in his slide show. Deal?

------------------------------------------------------------------------

[1] "We are talking to the government now, and maybe this is where we get some advantage from having a broad industry initiative. Our fundamental goal is 'let's do the right thing.' We have pretty strong feelings about what the right thing is in terms of making sure that things are truly anonymous and that key escrow kinds of things don't happen. But there ARE governments in the world, and not just the U.S. Government."

http://www.techweb.com/index/news/Hardwa...WB19980901S0016/INW20020626S0007
At 08:25 PM 8/9/2002 -0700, AARG!Anonymous wrote:
As far as Freenet and MojoNation, we all know that the latter shut down, probably in part because the attempted traffic-control mechanisms made the whole network so unwieldy that it never worked.
I worked there and respectfully disagree. MN never gained a foothold first and foremost because of the frequent join/leave problem. This, in turn, was a direct result of insufficient resources to address automated publication of .mp3 header data. The inability of the client SW to automatically create the header data and publish directories full of .mp3 files at each client meant users had to expend much more effort to make their content available than with file-oriented P2P alternatives. This hurdle, combined with data retention problems related to other MN deficiencies, assured that little content was available for DL. New users simply abandoned the effort when they came up empty-handed. The introducer problem could probably have been solved using Usenet postings. The nature of Usenet meant it could scale and was fairly resistant to legal and technical attacks. Usenet might also have served as a fallback block store, but neither approach was ever carefully considered, again due to resource limitations.
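The missing automation Steve describes is worth making concrete. Below is a minimal sketch, in Python, of what "automatically create the header data and publish directories full of .mp3 files" could have looked like; the publish callback and the metadata dictionary are hypothetical stand-ins, not Mojo Nation's actual interfaces. The point is only that the per-file effort MN pushed onto users amounts to a few dozen lines of client code.

# Sketch: walk a directory tree, pull ID3v1 metadata out of each .mp3,
# and hand (metadata, path) pairs to a hypothetical publish call.
# ID3v1 is the trailing 128-byte tag: "TAG" + title/artist/album fields.
import os

def read_id3v1(path):
    """Return {title, artist, album} from an ID3v1 tag, or None."""
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        if f.tell() < 128:
            return None
        f.seek(-128, os.SEEK_END)
        tag = f.read(128)
    if tag[:3] != b"TAG":
        return None
    field = lambda a, b: tag[a:b].split(b"\x00")[0].decode("latin-1").strip()
    return {"title": field(3, 33), "artist": field(33, 63),
            "album": field(63, 93)}

def publish_directory(root, publish):
    """Publish every .mp3 under root; `publish` is the (hypothetical)
    network call that pushes metadata plus content into the system."""
    for dirpath, _, names in os.walk(root):
        for name in names:
            if not name.lower().endswith(".mp3"):
                continue
            path = os.path.join(dirpath, name)
            meta = read_id3v1(path) or {"title": name}
            publish(meta, path)

# usage: publish_directory("/music", lambda meta, path: print(meta, path))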
At least in part this was also due to malicious clients, according to the analysis at http://www.cs.rice.edu/Conferences/IPTPS02/188.pdf.
My experience is that the malicious client problem was not a major issue. [much deleted]
Lucky can provide all this misinformation, all under the pretence, mind you, that this *is* TCPA. He was educating the audience, mostly people who were completely unfamiliar with the system other than some vague rumors. And this is what he presents, a tissue of lies and fabrications and unfounded sensationalism.
At Lucky's Defcon talk he stated that he was a participant in the development of TCPA. I can't clearly recall in what capacity he served, but my recollection is that it was as a reviewer. steve
I asked Eric Murray, who knows something about TCPA, what he thought of some of the more ridiculous claims in Ross Anderson's FAQ (like the SNRL), and he didn't respond. I believe it is because he is unwilling to publicly take a position in opposition to such a famous and respected figure.
Many of the people who "know something about TCPA" are constrained by NDA's with Intel. Perhaps that is Eric's problem -- I don't know.

(I have advised Intel about its security and privacy initiatives, under a modified NDA, for a few years now. Ross Anderson has also. Dave Farber has also. It was a win-win: I could hear about things early enough to have a shot at convincing Intel to do the right things according to my principles; they could get criticized privately rather than publicly, if they actually corrected the criticized problems before publicly announcing. They consult me less than they used to, probably because I told them too many things they didn't want to hear.)

One of the things I told them years ago was that they should draw clean lines between things that are designed to protect YOU, the computer owner, from third parties; versus things that are designed to protect THIRD PARTIES from you, the computer owner. This is so consumers can accept the first category and reject the second, which, if well-informed, they will do. If it's all a mishmash, then consumers will have to reject all of it, and Intel can't even improve the security of their machines FOR THE OWNER, because of their history of "security" projects that work against the buyer's interest, such as the Pentium serial number and HDCP.

TCPA began in that "protect third parties from the owner" category, and is apparently still there today. You won't find that out by reading Intel's modern public literature on TCPA, though; it doesn't admit to being designed for, or even useful for, DRM. My guess is that they took my suggestion as marketing advice rather than as a design separation issue. "Pitch all your protect-third-party products as if they are protect-the-owner products" was the opposite of what I suggested, but it's the course they (and the rest of the DRM industry) are on. E.g. see the July 2002 TCPA FAQ at:

  http://www.trustedcomputing.org/docs/TPM_QA_071802.pdf

  3. Is the real "goal" of TCPA to design a TPM to act as a DRM or
     Content Protection device?
  No. The TCPA wants to increase the trust ... [blah blah blah]

I believe that "No" is a direct lie. Intel has removed the first public version 0.90 of the TCPA spec from their web site, but I have copies, and many of the examples in it mention DRM, e.g.:

  http://www.trustedcomputing.org/docs/TCPA_first_WP.pdf  (still there)

This TCPA white paper says that the goal is "ubiquity". Another way to say that is monopoly. The idea is to force any other choices out of the market, except the ones that the movie & record companies want. The first "scenario" (PDF page 7) states: "For example, before making content available to a subscriber, it is likely that a service provider will need to know that the remote platform is trustworthy."

  http://www.trustedpc.org/home/pdf/spec0818.pdf  (gone now)

Even this 200-page TCPA-0.90 specification, which is carefully written to be obfuscatory and misleading, leaks such gems as: "These features encourage third parties to grant access by the platform to information that would otherwise be denied to the platform" (page 14). "The 'protected store' feature...can hold and manipulate confidential data, and will allow the release or use of that data only in the presence of a particular combination of access rights and software environment. ... Applications that might benefit include ... delivery of digital content (such as movies and songs)." (page 15).

Of course, they can't help writing in the DRM mindset regardless of their intent to confuse us.
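What that "protected store" passage describes is easy to show concretely: data is bound to a measured software environment, and changing any measured component makes the data unrecoverable. A toy model in Python follows; the hash-derived key and XOR keystream are illustrative stand-ins for the TPM's internal key handling and real ciphers, and the measurement names are invented.

# Toy model of TCPA-style "sealing": a secret is bound to a set of
# platform measurements (PCR-like values). Unseal succeeds only if
# the current measurements match the ones present at seal time.
# Real sealing happens inside the TPM with real ciphers; the XOR
# keystream below is purely illustrative.
import hashlib, hmac, os

DEVICE_ROOT_KEY = os.urandom(32)   # stand-in for the TPM's internal key

def _keystream(key, n):
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(data, measurements):
    state = hashlib.sha256(b"".join(measurements)).digest()
    key = hmac.new(DEVICE_ROOT_KEY, state, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def unseal(blob, measurements):
    return seal(blob, measurements)   # the XOR stream is its own inverse

os_hash, app_hash = b"os-v1", b"player-v1"
blob = seal(b"content key", [os_hash, app_hash])
assert unseal(blob, [os_hash, app_hash]) == b"content key"
# Change one measurement (say, a patched player) and the key is gone:
assert unseal(blob, [os_hash, b"player-hacked"]) != b"content key"

In the spec's own scenario, the sealed "data" is a content key for a movie or song and the measured environment is the approved player -- which is DRM, whatever the FAQ says.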
In that July 2002 FAQ again:

  9. Does TCPA certify applications and OS's that utilize TPMs?
  No. The TCPA has no plans to create a "certifying authority" to
  certify OS's or applications as "trusted". The trust model the TCPA
  promotes for the PC is: 1) the owner runs whatever OS or applications
  they want; 2) the TPM assures reliable reporting of the state of the
  platform; and 3) the two parties engaged in the transaction determine
  if the other platform is trusted for the intended transaction.

"The transaction"? What transaction? They were talking about the owner getting reliable reporting on the security of their applications and OS's and -- uh -- oh yeah, buying music or video over the Internet.

Part of their misleading technique has apparently been to present no clear layman's explanations of the actual workings of the technology. There's a huge gap between the appealing marketing sound bites -- or FAQ lies -- and the deliberately dry and uneducational 400-page technical specs. My own judgement is that this is probably deliberate, since if the public had an accurate 20-page document that explained how this stuff works and what it is good for, they would reject the tech instantly.

Perhaps we in the community should write such a document. Lucky and Adam Back seem to be working towards it. The similar document about key-escrow (that CDT published after assembling a panel of experts including me, Whit, and Matt Blaze) was quite useful in explaining to lay people and Congressmen what was wrong with it. NSA/DoJ had trouble countering it, since it was based on the published facts, and they couldn't impugn the credentials of the authors, nor the document's internal reasoning.

Intel and Microsoft and anonymous chauvinists can and should criticize such a document if we write one. That will strengthen it by eliminating any faulty reasoning or errors of public facts. But they had better bring forth new exculpating facts if they expect the authors to change their conclusions. They're free to allege that "No current Microsoft products have Document Revocation Lists", but that doesn't undermine the conclusion that their architectures make it easy to secretly implement that feature, anytime they want to.

	John
On Fri, Aug 09, 2002 at 08:25:40PM -0700, AARG!Anonymous wrote:
Several people have objected to my point about the anti-TCPA efforts of Lucky and others causing harm to P2P applications like Gnutella.
The point that a number of people made is that what is said in the article is not workable: clearly you can't ultimately exclude chosen clients on open computers, due to reverse engineering. (With TCPA/Palladium remote attestation you probably could so exclude competing clients, but this wasn't what was being talked about.)

The client exclusion plan is also particularly unworkable for gnutella because some of the clients are open source, and the protocol is (since the original reverse engineering of the nullsoft client) also open.

With closed-source implementations there is some obfuscation barrier that can be erected: Kazaa/Morpheus did succeed in frustrating competing clients thanks to its closed protocols and unpublished encryption algorithm. At one point an open-source group reverse-engineered the encryption algorithm, and from there the contained Kazaa protocols, and built an interoperable open-source client, giFT (http://gift.sourceforge.net), but FastTrack promptly changed the unpublished encryption algorithm to another one and then used its remote code upgrade ability to "upgrade" all of the clients.

The open-source group could have counter-attacked, had they felt motivated to. For example, they could have (1) reverse-engineered the new unpublished encryption algorithm, (2) reverse-engineered the remote code upgrade, and then (3) done their own forced upgrade to an open encryption algorithm and (4) disabled further forced upgrades. (After the "upgrade" attack from FastTrack, giFT instead decided to implement its own open protocol, "openFT", and compete. It also includes a general bridge between different file-sharing networks, in a somewhat gaim-like way, if you are familiar with gaim.)
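To spell out why the obfuscation barrier is a barrier and not a wall: any "authorized client" scheme on an open computer reduces to a secret carried in the client binary, and reverse engineering recovers it. A minimal sketch (the handshake below is hypothetical, not Gnutella's or FastTrack's actual scheme):

# Hypothetical "authorized client" handshake: the server challenges
# with a nonce; the client proves it knows a secret baked into the
# official binary. Reverse engineering recovers the secret, so any
# clone can answer correctly -- the server cannot tell them apart.
import hashlib, hmac, os

EMBEDDED_SECRET = b"baked-into-the-official-binary"

def server_challenge():
    return os.urandom(16)

def client_response(secret, nonce):
    return hmac.new(secret, nonce, hashlib.sha256).digest()

def server_verify(nonce, response):
    expected = hmac.new(EMBEDDED_SECRET, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = server_challenge()
# Official client:
assert server_verify(nonce, client_response(EMBEDDED_SECRET, nonce))
# Cloned client that extracted the same secret from the binary:
extracted = EMBEDDED_SECRET   # what a disassembler hands the cloner
assert server_verify(nonce, client_response(extracted, nonce))

This is exactly the distinction remote attestation changes: under TCPA/Palladium the responding key lives in hardware and the response is bound to the measured software, so extracting a secret from the binary no longer helps.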
[Freenet and Mojo melt-downs/failures...] Both of these are object lessons in the difficulties of successful P2P networking in the face of arbitrary client attacks.
I grant you that making simultaneously DoS-resistant, scalable and anonymous peer-to-peer networks is a Hard Problem. Even removing the anonymity part, it's still a Hard Problem.

Note that both Freenet and Mojo try to tackle the harder of those two problems and have aspects of publisher and reader anonymity, so the fact that they are doing less well than Kazaa, gnutella and others is partly because they are more ambitious and are tackling a harder problem. The anonymity aspect also possibly makes abuse more likely -- i.e. the attacker is provided, as part of the system, with tools to obscure his own identity while attacking the system. DoSers of Kazaa or gnutella would likely be more easily identified, which is some deterrence.

I also agree that the TCPA/Palladium attested closed-world computing model could likely address some of these problems more simply. (Lucky slide critique in another post.)

Adam
--
http://www.cypherspace.org/adam/
At 04:02 AM 8/10/2002 -0700, John Gilmore wrote:
"The transaction"? What transaction? They were talking about the owner getting reliable reporting on the security of their applications and OS's and -- uh -- oh yeah, buying music or video over the Internet.
Part of their misleading technique has apparently been to present no clear layman's explanations of the actual workings of the technology. There's a huge gap between the appealing marketing sound bites -- or FAQ lies -- and the deliberately dry and uneducational 400-page technical specs. My own judgement is that this is probably deliberate, since if the public had an accurate 20-page document that explained how this stuff works and what it is good for, they would reject the tech instantly.
Perhaps we in the community should write such a document. Lucky and Adam Back seem to be working towards it. The similar document about key-escrow (that CDT published after assembling a panel of experts including me, Whit, and Matt Blaze) was quite useful in explaining to lay people and Congressmen what was wrong with it. NSA/DoJ had trouble countering it, since it was based on the published facts, and they couldn't impugn the credentials of the authors, nor the document's internal reasoning.
Indeed. Another item I recall from Lucky's Defcon talk is that (I assume) Intel is back at it when it comes to obfuscated crypto. Like the Pentium RNG before it, the TCPA HW will only expose a whitened version of its random-number output, making independent analysis difficult to impossible. steve
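Whitening hides defects as a matter of arithmetic, not just inconvenience: hashing the raw stream destroys exactly the statistics an outside analyst would test. A small illustration (not the actual TCPA/TPM RNG construction) of a badly biased raw source whose whitened output passes a naive bias check anyway:

# Why whitening frustrates outside analysis: a raw source that is
# 90% ones looks obviously broken, but after hashing (the usual
# whitening step) the output bits are balanced, hiding the defect.
import hashlib, random

random.seed(1)
raw = bytes(255 if random.random() < 0.9 else 0 for _ in range(4096))

def bit_fraction(data):
    return sum(bin(b).count("1") for b in data) / (8 * len(data))

whitened = b"".join(hashlib.sha256(raw[i:i + 64]).digest()
                    for i in range(0, len(raw), 64))

print("raw ones fraction:      %.3f" % bit_fraction(raw))       # ~0.9
print("whitened ones fraction: %.3f" % bit_fraction(whitened))  # ~0.5

A backdoored or entropy-starved source would pass the same test, which is why access to the raw, pre-whitened output matters for independent analysis.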
Date: Fri, 9 Aug 2002 20:25:40 -0700 From: AARG!Anonymous <remailer@aarg.net>
Right, as if my normal style has been so effective. Not one person has given me the least support in my efforts to explain the truth about TCPA and Palladium.
Hal, I think you were right on when you wrote: "But feel free to make whatever assumptions you like about my motives. All I ask is that you respond to my facts."

I, for one, support your efforts, even though I don't agree with some of your conclusions. It is clear that you hold a firm opinion that differs from what many others here believe, so in making your points you can expect objections to be raised. You will be more convincing (at least to me) if you continue to respond to these dispassionately, on the basis of facts and reasoned opinions (your "normal style"?). Calling Lucky a liar is no more illuminating than others calling you an idiot.
On Sat, 10 Aug 2002, R. Hirschfeld wrote:
Calling Lucky a liar is no more illuminating than others calling you an idiot.
You're confusing a classification with an argument. The argument is over; you can read up on it in the archives. If you think there's still anything left to discuss, I've got these plans of the Death Star I could sell you...
Anonymous wrote:
As far as Freenet and MojoNation, we all know that the latter shut down, probably in part because the attempted traffic-control mechanisms made the whole network so unwieldy that it never worked.
Right, so let's solve this problem. Palladium/TCPA solves the problem in one sense, but in a very inconvenient way. First of all, they stop you running a client which has been modified in any way -- not just a client which has been modified to be selfish. Secondly, they facilitate the other bad things which have been raised on this list.
Right, as if my normal style has been so effective. Not one person has given me the least support in my efforts to explain the truth about TCPA and Palladium.
The reason for that is that we all disagree with you. I'm interested to read your opinions, but I will argue against you. I'm not interested in reading flames at all. -- Pete
AARG!Anonymous wrote:
I will just point out that it was not my idea, but rather that Salon said that the Gnutella developers were considering moving to authorized clients. According to Eric, those developers are "fundamentally stupid." According to Bram, the Gnutella developers don't understand their own protocol, and they are supporting an idea which will not help. Apparently their belief that clients like Qtrax are hurting the system is totally wrong, and keeping such clients off the system won't help.
You can try running a sniffer on it yourself. Gnutella traffic is almost all search queries.
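Bram's claim is directly checkable. Gnutella v0.4 framing is a 23-byte header -- a 16-byte message ID, one type byte, TTL, hops, and a 4-byte little-endian payload length -- so a few lines suffice to tally message types in a captured stream. How you obtain the reassembled byte stream (pcap, proxy, etc) is left out here:

# Tally Gnutella v0.4 descriptor types in a captured byte stream,
# assumed to start on a descriptor boundary after the connect
# handshake. Types: 0x00 ping, 0x01 pong, 0x40 push, 0x80 query,
# 0x81 query-hit.
import struct
from collections import Counter

TYPES = {0x00: "ping", 0x01: "pong", 0x40: "push",
         0x80: "query", 0x81: "query-hit"}

def tally(stream):
    counts, off = Counter(), 0
    while off + 23 <= len(stream):
        kind = stream[off + 16]
        (length,) = struct.unpack_from("<I", stream, off + 19)
        counts[TYPES.get(kind, "unknown")] += 1
        off += 23 + length
    return counts

# e.g. tally(open("gnutella.raw", "rb").read())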
As far as Freenet and MojoNation, we all know that the latter shut down, probably in part because the attempted traffic-control mechanisms made the whole network so unwieldy that it never worked.
Mojo Nation actually had a completely excessive amount of bandwidth donated to it. There was a problem where people complained of losing mojo when running a server, because the total amount of upload was greater than the total amount of download. The main user-experience disaster in Mojo Nation was that the retrieval rate for files was very bad, mostly due to the high peer churn rate.
At least in part this was also due to malicious clients, according to the analysis at http://www.cs.rice.edu/Conferences/IPTPS02/188.pdf.
Oh gee, that paper mostly talks about high churn rate too.

In fact, I was one of the main developers of Mojo Nation, and based on lessons learned from that I figured out how to build a system which can cope with very high churn rates and has good leech resistance. It is now mature and has had several quite successful deployments:

http://bitconjurer.org/BitTorrent/

Not only are the algorithms used good for leech resistance, they are also very good at being robust under normal variances in net conditions -- in fact, the decentralized greedy approach to resource allocation outperforms any known centralized method.

The TCPA, even if it some day works perfectly (which I seriously doubt it will), would just plain not help with any of the immediate problems in Gnutella, BitTorrent, or Mojo Nation. I would guess the same is true for most, if not all, other p2p systems.

-Bram Cohen

"Markets can remain irrational longer than you can remain solvent"
    -- John Maynard Keynes
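The "decentralized greedy approach" Bram refers to is, roughly, each peer periodically unchoking the few peers that recently uploaded the most to it, plus one rotating optimistic slot so new peers can prove themselves. A rough sketch of that idea in Python -- a simplification, not BitTorrent's actual choker:

# Simplified tit-for-tat unchoke round: keep serving the peers who
# gave you the most lately, plus one random "optimistic" slot so a
# newcomer can bootstrap into the reciprocation loop.
import random

UPLOAD_SLOTS = 4

def choose_unchoked(recv_rate):
    """recv_rate: {peer_id: bytes received from that peer recently}."""
    ranked = sorted(recv_rate, key=recv_rate.get, reverse=True)
    unchoked = set(ranked[:UPLOAD_SLOTS - 1])     # reciprocate
    rest = [p for p in ranked if p not in unchoked]
    if rest:
        unchoked.add(random.choice(rest))         # optimistic unchoke
    return unchoked

rates = {"A": 90_000, "B": 40_000, "C": 10_000, "D": 0, "E": 0}
print(choose_unchoked(rates))   # {'A', 'B', 'C'} plus one of 'D'/'E'

Leech resistance falls out of the rule itself: a peer that uploads nothing is only ever served through the optimistic slot.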
The remote attestation is the feature which is in the interests of third parties.

I think if this feature were removed, the worst of the issues the complaints revolve around would go away, because the remaining features would be under the control of the user, and there would be no way for third parties to discriminate against users who did not use them, or who configured them in given ways. The remaining features of note are sealing, and integrity-metric-based security boot-strapping.

However, remote attestation is clearly the feature that TCPA and Microsoft place the most value on: it is the main feature allowing DRM, and allowing remote influence and control to be exerted on users' configuration and software choices.

The remote attestation feature is useful for _servers_ that want to convince clients of their trustworthiness (that they won't look at content, tamper with the algorithm, or with the anonymity or privacy properties, etc). So you could imagine that feature being a part of server machines, but not part of client machines -- there already exist some distinctions between client and server platforms -- for example, high-end Intel chips with larger cache etc, intended for the server market by their pricing. You could imagine the TCPA/Palladium support being available at extra cost for this market.

But the remaining problem is that remote attestation is kind of dual-use (of utility to both user desktop machines and servers). This is because with peer-to-peer applications, user desktop machines are also servers. So the issue has become entangled.

It would be useful for individual liberties for remote-attestation features to be widely deployed on desktop-class machines, to build peer-to-peer systems and anonymity- and privacy-enhancing systems. However, the remote-attestation feature is also against the user's interests, because its widespread deployment is the main DRM-enabling feature and a general tool for remote-controlled discrimination against user software and configuration choices.

I don't see any way to have the benefits without the negatives, unless anyone has any bright ideas. The remaining questions are:

- do the negatives outweigh the positives (lose the ability to reverse-engineer and virtualize applications, and trade software-hacking-based BORA for hardware-hacking-based ROCA);

- are there ways to make remote attestation not useful for DRM, e.g. limited deployment, other;

- would the user-positive aspects of remote attestation still be largely available with only limited deployment -- e.g. could interesting peer-to-peer and privacy systems be built with a mixture of remote-attestation-able and non-remote-attestation-able nodes?

(A schematic of the attestation exchange itself follows the Gilmore quote below.)

Adam
--
http://www.cypherspace.org/adam/

On Sat, Aug 10, 2002 at 04:02:36AM -0700, John Gilmore wrote:
One of the things I told them years ago was that they should draw clean lines between things that are designed to protect YOU, the computer owner, from third parties; versus things that are designed to protect THIRD PARTIES from you, the computer owner. This is so consumers can accept the first category and reject the second, which, if well-informed, they will do. If it's all a mishmash, then consumers will have to reject all of it, and Intel can't even improve the security of their machines FOR THE OWNER, because of their history of "security" projects that work against the buyer's interest, such as the Pentium serial number and HDCP. [...]
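To pin down what remote attestation means mechanically in the exchange above: the platform reports a signed digest of its software measurements plus a verifier-chosen nonce, and the verifier accepts only measurement sets it already trusts. A schematic in Python, with HMAC standing in for the TPM's attestation-key signature and with invented measurement names -- this is an illustration, not the TCPA wire protocol:

# Schematic remote attestation: the platform "quotes" its measured
# software stack plus a fresh nonce; the verifier accepts only
# configurations it already trusts. HMAC stands in for the TPM's
# attestation-key signature (in real TCPA the verifier would check
# a public-key signature instead of recomputing).
import hashlib, hmac, os

AIK = os.urandom(32)   # attestation key; in TCPA it never leaves the TPM

def quote(pcrs, nonce):
    digest = hashlib.sha256(b"".join(pcrs) + nonce).digest()
    return hmac.new(AIK, digest, hashlib.sha256).digest()

def verify(expected_pcrs, nonce, sig):
    return hmac.compare_digest(quote(expected_pcrs, nonce), sig)

trusted = [b"bios-v2", b"loader-v5", b"os-v1", b"client-v3"]
nonce = os.urandom(16)

# A platform running exactly the trusted stack passes:
assert verify(trusted, nonce, quote(trusted, nonce))
# One modified component (say, a patched client) and it fails:
patched = [b"bios-v2", b"loader-v5", b"os-v1", b"client-hacked"]
assert not verify(trusted, nonce, quote(patched, nonce))

The last assert is the whole entanglement in one line: the same check that lets a peer-to-peer network exclude malicious clients lets a content service exclude unapproved ones.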
Adam Back wrote:
The remote attestation is the feature which is in the interests of third parties.
I think if this feature were removed, the worst of the issues the complaints revolve around would go away, because the remaining features would be under the control of the user, and there would be no way for third parties to discriminate against users who did not use them, or who configured them in given ways.
The remaining features of note are sealing, and integrity-metric-based security boot-strapping.
However, remote attestation is clearly the feature that TCPA and Microsoft place the most value on: it is the main feature allowing DRM, and allowing remote influence and control to be exerted on users' configuration and software choices.
The remote attestation feature is useful for _servers_ that want to convince clients of their trustworthiness (that they won't look at content, tamper with the algorithm, or with the anonymity or privacy properties, etc). So you could imagine that feature being a part of server machines, but not part of client machines -- there already exist some distinctions between client and server platforms -- for example, high-end Intel chips with larger cache etc, intended for the server market by their pricing. You could imagine the TCPA/Palladium support being available at extra cost for this market.
But the remaining problem is that remote attestation is kind of dual-use (of utility to both user desktop machines and servers). This is because with peer-to-peer applications, user desktop machines are also servers.
So the issue has become entangled.
It would be useful for individual liberties for remote-attestation features to be widely deployed on desktop-class machines, to build peer-to-peer systems and anonymity- and privacy-enhancing systems.
However, the remote-attestation feature is also against the user's interests, because its widespread deployment is the main DRM-enabling feature and a general tool for remote-controlled discrimination against user software and configuration choices.
I don't see any way to have the benefits without the negatives, unless anyone has any bright ideas. The remaining questions are:
- do the negatives outweigh the positives (lose the ability to reverse-engineer and virtualize applications, and trade software-hacking-based BORA for hardware-hacking-based ROCA);
- are there ways to make remote attestation not useful for DRM, e.g. limited deployment, other;
- would the user-positive aspects of remote attestation still be largely available with only limited deployment -- e.g. could interesting peer-to-peer and privacy systems be built with a mixture of remote-attestation-able and non-remote-attestation-able nodes?
A wild thought that occurs to me is that some mileage could be had by using remotely attested servers to verify _signatures_ of untrusted peer-to-peer stuff. So, you get most of the benefits of peer-to-peer and the servers only have to do cheap, low-bandwidth stuff. I admit I haven't worked out any details of this at all!

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html       http://www.thebunker.net/

Available for contract work.

"There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit." - Robert Woodruff
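A sketch of Ben's split, with made-up names: untrusted peers move the bulk data, and the attested server holds the verification key and answers one cheap question per block. Using a symmetric scheme makes the role of attestation visible -- the publisher will only share the key with servers that can attest to running exactly this check:

# Sketch of Ben's division of labour: untrusted peers serve blocks;
# an attested verification server only answers "is this digest
# signed by the publisher?" -- cheap and low-bandwidth, as he says.
# The publisher key and HMAC stand-in are illustrative; attestation
# is what justifies sharing the key with the server at all.
import hashlib, hmac

PUBLISHER_KEY = b"publisher-signing-key"   # shared only with attested servers

def publisher_sign(digest):
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).digest()

def attested_server_verify(digest, sig):
    # The only code the attested server runs: one signature check.
    return hmac.compare_digest(publisher_sign(digest), sig)

def client_fetch(block_from_peer, sig):
    digest = hashlib.sha256(block_from_peer).digest()
    return attested_server_verify(digest, sig)   # accept or discard

block = b"some content block from an untrusted peer"
sig = publisher_sign(hashlib.sha256(block).digest())
assert client_fetch(block, sig)
assert not client_fetch(b"tampered block", sig)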
participants (9):
- AARG! Anonymous
- Adam Back
- Ben Laurie
- Bram Cohen
- Eugen Leitl
- John Gilmore
- Pete Chown
- R. Hirschfeld
- Steve Schear