Re: [Freedombox-discuss] Dumb idea: Alternative to Tor that promotes good behavior
On 10/27/2013 01:26 PM, Bill Cox wrote:
Here's the problem: Tor has little public support, because most Tor traffic is wasted on supporting bad behavior.
I don't think that's the main reason Tor has little public support. I think Tor has little public support because using Tor is slower and less convenient than not using Tor, and people (at least in the USA) seem to value convenience above most other things.
Here's my solution: Build a Tor-like network for routing anonymous data, but track behavior of all users' secret identities, and make their Internet history public. Allow node operators to choose categories of public identities they wish to support.
This would not be anonymous data. You're asking people to publish their full internet histories for the privilege of being able to use the network. Are you aware of the work being done toward de-anonymization of rich data sets? I think many people's web browsing habits alone would be sufficient to discover their physical identity with relatively high certainty. Even if we were to assume that the data could not be tied back to someone's physical location to put them in harm's way, if their network use patterns are how they do their activism, then publishing those patterns provides an adversary with a lot of information that is very helpful for disrupting that same activity (e.g. "Which web sites do they usually use to distribute their [tools|analysis|incitements]? Which chat channels do they frequent? Where/how do they get their e-mail? Can we destroy or subvert those services?")
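To make the de-anonymization risk concrete, here is a toy sketch in Python (every site, pseudonym, and the scoring rule are invented for illustration and describe no particular published attack): an adversary who already knows a handful of sites a physical person visits, say from logs of a server they control, can rank the published pseudonymous histories by overlap and single out the likely match.

    # Toy illustration: published "anonymous" browsing histories can be linked to a
    # physical person by anyone who knows even a few of that person's real visits.
    # All histories and the known-visits set below are made up.
    published_histories = {
        "ChinaCat":  {"wikipedia.org", "riseup.net", "bbc.com", "fpga-forum.example"},
        "user_7731": {"redtube.com", "imgur.com", "bbc.com"},
        "user_0042": {"nytimes.com", "github.com", "wikipedia.org"},
    }

    # What the adversary already knows about one physical person, e.g. from
    # logs of a web server they run or from a tap on a local network.
    known_visits_of_target = {"riseup.net", "fpga-forum.example", "bbc.com"}

    def best_match(histories, known_visits):
        """Rank pseudonyms by how much of the known behavior their history explains."""
        scores = {
            pseudonym: len(history & known_visits) / len(known_visits)
            for pseudonym, history in histories.items()
        }
        return max(scores.items(), key=lambda item: item[1])

    print(best_match(published_histories, known_visits_of_target))
    # -> ('ChinaCat', 1.0): every known visit appears in ChinaCat's public history.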
For example, I would choose to promote all forms of non-violent free speech. I should be able to contribute my bandwidth to this purpose. If a dissident in China goes by the public ID of ChinaCat, and has a high reputation for promoting freedom, they are welcome to use my bandwidth. If someone just wants access to redtube.com, they can get that access from someone else.
If you prefer this, then you should personally make arrangements with ChinaCat directly. I'm not convinced that you could ever make such an arrangement scale cleanly without gross oversimplifications that wouldn't meet many people's assumptions about what the terms mean. Is a sit-in at a restaurant "non-violent free speech"? What about a work stoppage at a factory? How about when the workers barricade the factory against its owners? What about people who sabotage or destroy machinery in their factory? What about destruction of machinery that is prepared to destroy desperately needed housing stock? What about people who smash the windows of low-wage corporate franchises? What about smashing the windows and doors of fire-prone sweatshops? Are all of these things non-violent free speech? Can you imagine that someone else might have a different answer for any of them than you do?
There are various technical aspects to this idea. For example, I would prefer that the social graph between secret identities be public so I can use a simple network flow algorithm over trust edges between identities to determine how much I trust someone.
I think it would be worthwhile to spec out such an algorithm, and then think through the spec under a handful of real-world use cases. What does it mean to do "network flow over trust edges"? What specifically does "trust" mean in this context? Can you give an example of how that would let you automatically determine how much you trust someone? What does that kind of automated trust discovery mean from a human perspective? What are the ways it could be exploited by an adversary intent on causing trouble?

Sorry to be a pessimist, but I'm not convinced this is an effective or even desirable framework.

--dkg
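Neither message pins down what a "simple network flow algorithm over trust edges" would actually compute. One plausible reading, sketched below in Python purely as an illustration (the identities, dollar values, and the max_trust helper are all invented), is a max-flow computation: each stated trust value is an edge capacity, and the flow I can push to a target identity bounds how much I transitively trust them.

    # Sketch: trust-as-max-flow over a directed graph of stated trust values.
    from collections import defaultdict, deque

    def max_trust(trust_edges, source, target):
        """trust_edges: {(truster, trustee): capacity} of directed trust statements."""
        residual = defaultdict(int)    # residual capacities, including reverse edges
        neighbors = defaultdict(set)
        for (a, b), cap in trust_edges.items():
            residual[(a, b)] += cap
            neighbors[a].add(b)
            neighbors[b].add(a)

        flow = 0
        while True:
            # Edmonds-Karp: breadth-first search for an augmenting path.
            parent = {source: None}
            queue = deque([source])
            while queue and target not in parent:
                u = queue.popleft()
                for v in neighbors[u]:
                    if v not in parent and residual[(u, v)] > 0:
                        parent[v] = u
                        queue.append(v)
            if target not in parent:
                return flow            # no augmenting path left: this is the max flow
            # Walk back from target to source, find the bottleneck, push flow.
            path, v = [], target
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            bottleneck = min(residual[edge] for edge in path)
            for u, v in path:
                residual[(u, v)] -= bottleneck
                residual[(v, u)] += bottleneck
            flow += bottleneck

    # Invented example: I trust alice up to $10, alice trusts ChinaCat up to $4,
    # so the flow (and hence my computed trust in ChinaCat) is capped at $4.
    edges = {("me", "alice"): 10, ("alice", "chinacat"): 4}
    print(max_trust(edges, "me", "chinacat"))   # -> 4

Spelling it out this way also answers part of the exploitability question dkg raises above: anyone who can insert a few well-placed high-capacity edges can inflate the trust the algorithm reports.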
On 10/27/2013 6:42 PM, Daniel Kahn Gillmor wrote:
On 10/27/2013 01:26 PM, Bill Cox wrote:
Here's the problem: Tor has little public support, because most Tor traffic is wasted on supporting bad behavior.
I don't think that's the main reason Tor has little public support. I think Tor has little public support because using Tor is slower and less convenient than not using Tor, and people (at least in the USA) seem to value convenience above most other things.
Granted, that's why we don't use Tor. However, I thought I read there are only around 3,000 Tor nodes. That's pretty wimpy. I may be too lazy to encrypt my traffic, but I'm perfectly happy to spend some of my money supporting free speech. I just don't want to also support griefers and leechers.
Here's my solution: Build a Tor-like network for routing anonymous data, but track behavior of all users' secret identities, and make their Internet history public. Allow node operators to choose categories of public identities they wish to support.
This would not be anonymous data. You're asking people to publish their full internet histories for the privilege of being able to use the network. Are you aware of the work being done toward de-anonymization of rich data sets? I think many people's web browsing habits alone would be sufficient to discover their physical identity with relatively high certainty. Even if we were to assume that the data could not be tied back to someone's physical location to put them in harm's way, if their network use patterns are how they do their activism, then publishing those patterns provides an adversary with a lot of information that is very helpful for disrupting that same activity (e.g. "Which web sites do they usually use to distribute their [tools|analysis|incitements]? Which chat channels do they frequent? Where/how do they get their e-mail? Can we destroy or subvert those services?")
OK, so scratch the idea of keeping internet histories. It does sound too dangerous. Instead, rely on a web-of-trust model similar to what we see in the original Ripple e-money system. Maybe enhance it with a reputation system based on recommendations.
For example, I would choose to promote all forms of non-violent free speech. I should be able to contribute my bandwidth to this purpose. If a dissident in China goes by the public ID of ChinaCat, and has a high reputation for promoting freedom, they are welcome to use my bandwidth. If someone just wants access to redtube.com, they can get that access from someone else.
If you prefer this, then you should personally make arrangements with ChinaCat directly. I'm not convinced that you could ever make such an arrangement scale cleanly without gross oversimplifications that wouldn't meet many people's assumptions about what the terms mean. Is a sit-in at a restaurant "non-violent free speech"? What about a work stoppage at a factory? How about when the workers barricade the factory against its owners? What about people who sabotage or destroy machinery in their factory? What about destruction of machinery that is prepared to destroy desperately needed housing stock? What about people who smash the windows of low-wage corporate franchises? What about smashing the windows and doors of fire-prone sweatshops? Are all of these things non-violent free speech? Can you imagine that someone else might have a different answer for any of them than you do?
You're probably right. We could instead provide controls similar to OpenDNS, where access to sites may be blocked by some exit and routing nodes. I don't think this would be a huge complication. Also, users could be rated by other users in different categories. For example, on LinkedIn, people verify that I know about FPGAs. I'm not sure whether a routing policy should depend on such ratings, as it might give me some clue about who is using my router.
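One rough sketch of those OpenDNS-style controls on a single exit node, assuming operators maintain or subscribe to some host-to-category map (the categories, hosts, and ExitPolicy class here are hypothetical, not part of any existing system):

    # Hypothetical per-exit-node policy: the operator lists categories they are
    # willing to carry, and the exit checks each destination against a category map.
    SITE_CATEGORIES = {                  # assumed to be maintained or subscribed to
        "redtube.com": "adult",
        "wikipedia.org": "reference",
        "twitter.com": "social",
    }

    class ExitPolicy:
        def __init__(self, allowed_categories, default_allow=False):
            self.allowed = set(allowed_categories)
            self.default_allow = default_allow   # fate of uncategorized hosts

        def permits(self, host):
            category = SITE_CATEGORIES.get(host)
            if category is None:
                return self.default_allow
            return category in self.allowed

    # An operator who only wants to carry certain kinds of traffic might configure:
    policy = ExitPolicy(allowed_categories={"reference", "news", "social"})
    print(policy.permits("wikipedia.org"))   # True
    print(policy.permits("redtube.com"))     # False

The hard part, as the sit-in/sweatshop questions above suggest, is not the lookup but who assigns the categories and whether any two operators would agree on them.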
There are various technical aspects to this idea. For example, I would prefer that the social graph between secret identities be public so I can use a simple network flow algorithm over trust edges between identities to determine how much I trust someone.
I think it would be worthwhile to spec out such an algorithm, and then think through the spec under a handful of real-world use cases. What does it mean to do "network flow over trust edges"? What specifically does "trust" mean in this context? Can you give an example of how that would let you automatically determine how much you trust someone? What does that kind of automated trust discovery mean from a human perspective? What are the ways it could be exploited by an adversary intent on causing trouble?
Sorry to be a pessimist, but I'm not convinced this is an effective or even desirable framework.
No worries about pessimism. That's the natural environment new ideas live in, or in this case, probably a regurgitated old idea. I'm a fan of the original Ripple e-money algorithm (not this new stuff designed to make the authors rich). Users have nodes in the trust graph, and can specify that they "trust" other users up to some maximum monetary value. Simply based on these trust relationships, an economy with e-money based on this trust is possible.

I would love to enable micro-transactions in a Ripple system, where I might pay 10 microcents for 100KB of data bandwidth (or whatever the going rate is). I could also have my FreedomBox pay for at least its electricity and bandwidth while hosting encrypted backups or helping people download from BitTorrent faster.

It's one thing to say "I trust that guy a lot". It's another to say "Let him put $10 on my tab." It forces us to put trust in terms of dollars, and that's when we find out that to one guy "absolute trust" is worth about 10 cents, while "medium trust" is worth $100 to someone else. If a griefer attacked one of my web sites, I might place a negative trust edge against him, claiming he owes me damages for the griefing.

In terms of automated trust discovery, I'm frankly against it. We're talking about anonymous people out there, and they have to earn trust. To do that, your FreedomBox could host a web site for me, or help transfer files, or back up my data, or you could even simply pay me a few cents. With a tit-for-tat algorithm, I would trust you back, plus a little extra, over time. So if we've done $100 in business in the past, I might be willing to lend you $1, no questions asked. If there are other reasons I want to trust you, for example if we've exchanged e-mails and I've become convinced we share certain goals, then I could manually increase the trust value.

An adversary would have great difficulty scamming people and taking their money in a Ripple network. The protocol has been running live for years, and I won't go into the details, but it's a hardy algorithm. However, an adversary who just wants to know our true identities could gain useful insights by analyzing the trust graph. People who trust each other more are more likely to be electrically close to each other, and when people manually set a trust relationship, you can assume they have somehow interacted. If an adversary takes some step like giving a lot of money to a terrorist group, then they might identify members of that group based on who increases their trust relationship afterwards. However, I've seen these same arguments made against Bitcoin, and an adversary with a lot of nodes in the Tor network (like the NSA) might be able to determine identities based on packet timing.

Overall, I suspect we may be able to do a better job promoting freedom if we were to build such a network.

Bill
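A minimal sketch of the trust-line bookkeeping Bill describes, including the tit-for-tat habit of extending a little more credit after successful business. The class, the names, and the 1% growth factor are assumptions for illustration, not the actual Ripple protocol:

    # Sketch of Ripple-style trust lines: each user caps how much another user may
    # owe them, payments ride those credit limits, and successful business nudges
    # the limit upward (tit-for-tat). The 1% factor below is an invented rule.
    class TrustLedger:
        def __init__(self):
            self.credit_limit = {}   # (creditor, debtor) -> max the debtor may owe
            self.balance = {}        # (creditor, debtor) -> what the debtor owes now
            self.volume = {}         # (creditor, debtor) -> total completed business

        def set_trust(self, creditor, debtor, limit):
            """Manually state 'let this identity put up to limit dollars on my tab'."""
            self.credit_limit[(creditor, debtor)] = limit

        def pay(self, payer, payee, amount):
            """Direct payment along one trust line: the payee must trust the payer."""
            key = (payee, payer)
            owed = self.balance.get(key, 0)
            if owed + amount > self.credit_limit.get(key, 0):
                raise ValueError("payment exceeds the trust the payee extends to the payer")
            self.balance[key] = owed + amount
            self.volume[key] = self.volume.get(key, 0) + amount
            # Tit-for-tat: after successful business, trust the counterparty a bit more.
            self.credit_limit[key] += 0.01 * amount

    # Invented example: I let a peer FreedomBox run up a 10-cent tab for relayed traffic.
    ledger = TrustLedger()
    ledger.set_trust("me", "peer_box", 0.10)
    ledger.pay("peer_box", "me", 0.02)              # the peer settles 2 cents of usage
    print(ledger.credit_limit[("me", "peer_box")])  # slightly above 0.10 now

At Bill's example rate of 10 microcents per 100KB, a full gigabyte of relayed traffic settles for roughly a tenth of a cent, so individual transfers stay far below any sane trust limit; payments between strangers would ripple through chains of such trust lines rather than a single direct one.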
participants (2)
- Bill Cox
- Daniel Kahn Gillmor