cypherpunks-legacy
Europe's Plan to Track Phone and Net Use
http://www.nytimes.com/2007/02/20/business/worldbusiness/20privacy.html
By VICTORIA SHANNON
PARIS, Feb. 19 -- European governments are preparing legislation to
require companies to keep detailed data about people's Internet and
phone use that goes beyond what the countries will be required to
do under a European Union directive.
In Germany, a proposal from the Ministry of Justice would
essentially prohibit using false information to create an e-mail
account, making the standard Internet practice of creating accounts
with pseudonyms illegal.
A draft law in the Netherlands would likewise go further than the
European Union requires, in this case by requiring phone companies
to save records of a caller's precise location during an entire
mobile phone conversation.
Even now, Internet service providers in Europe divulge customer
information -- which they normally keep on hand for about three
months, for billing purposes -- to police officials with legally
valid orders on a routine basis, said Peter Fleischer, the
Paris-based European privacy counsel for Google. The data concerns
how the communication was sent and by whom but not its content.
But law enforcement officials argued after the terrorist bombings
in Spain and Britain that they needed better and longer data
storage from companies handling Europe's communications networks.
European Union countries have until 2009 to put the Data Retention
Directive into law, so the proposals seen now are early
interpretations. But some people involved in the issue are
concerned about a shift in policy in Europe, which has long been a
defender of individuals' privacy rights.
Under the proposals in Germany, consumers theoretically could not
create fictitious e-mail accounts, to disguise themselves in online
auctions, for example. Nor could they use a made-up account for
receiving commercial junk mail. While e-mail aliases would not
be banned, they would have to be traceable to the actual account
holder.
"This is an incredibly bad thing in terms of privacy, since people
have grown up with the idea that you ought to be able to have an
anonymous e-mail account," Mr. Fleischer said. "Moreover, it's
totally unenforceable and would never work."
Mr. Fleischer said the law would have to require some kind of
identity verification, "like you may have to register for an e-mail
address with your national ID card."
Jörg Hladjk, a privacy lawyer at Hunton & Williams, a Brussels law
firm, said that might also mean that it could become illegal to pay
cash for prepaid cellphone accounts. The billing information for
regular cellphone subscriptions is already verified.
Mr. Fleischer said: "It's ironic, because Germany is one of the
countries in Europe where people talk the most about privacy. In
terms of consciousness of privacy in general, I would put Germany
at the extreme end."
He said it was not clear that any European law would apply to
e-mail providers based in the United States, like Google, so anyone
who needed an unverified e-mail address -- for political,
commercial or philosophical reasons -- could still use Gmail, Yahoo
or Hotmail addresses.
Mr. Hladjk said, "It's going to be difficult to know which law
applies." Google requires only two pieces of information to open a
Gmail account -- a name and a password -- and the company does not
try to determine whether the name is authentic.
In the Netherlands, the proposed extension of the law on phone
company records to all mobile location data "implies surveillance
of the movement of large amounts of innocent citizens," the Dutch
Data Protection Agency has said. The agency concluded in January
that the draft disregarded privacy protections in the European
Convention on Human Rights. Similarly, the German technology trade
association Bitkom said the draft there violated the German
Constitution.
Internet and telecommunications industry associations raised
objections when the directive was being debated, but at that time
their concerns were for the length of time the data would have to
be stored and how the companies would be compensated for the cost
of gathering and keeping the information. The directive ended up
leaving both decisions in the hands of national governments,
setting a range of six months to two years. The German draft
settled on six months, while in Spain the proposal is for a year,
and in the Netherlands it is 18 months.
"There are not a lot of people in Germany who support this draft
entirely," said Christian Spahr, a spokesman for Bitkom. "But there
are others who are more critical of it than we are."
Post message: transhumantech(a)yahoogroups.com
Subscribe: transhumantech-subscribe(a)yahoogroups.com
Unsubscribe: transhumantech-unsubscribe(a)yahoogroups.com
List owner: transhumantech-owner(a)yahoogroups.com
List home: http://www.yahoogroups.com/group/transhumantech/
----- End forwarded message -----
--
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
Dear Zooko,
thanks for your detailed comments. I'd like to elaborate some more on
them in the following:
Quoting Zooko O'Whielacronx <zooko(a)zooko.com>:
>> Also today
>> indeed bandwidth should be the more precious resource in a P2P system
>> compared to storage, which is available in abundance to the home user.
>> So a simple replication strategy might not be so bad after all...
>
> Replication costs more bandwidth on upload than erasure coding does
> (for a similar degree of fault-tolerance) as well as costing more
> storage.
True. But the cost of repairing a lost replica is minimal compared to
replacing a lost erasure-coded fragment. So assuming long-lived data stored
in a network of dynamic,
unreliable peers (P2P), repair may be frequently needed and cost much more
bandwidth over the whole life-time of a data object than the initial upload.
In such an environment replication can hence be more bandwidth efficient
than erasure coding - while of course requiring more storage in any case.
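To make the repair argument concrete, here is a back-of-the-envelope model
(my own toy numbers, not measurements): repairing a lost replica moves one
object's worth of data, while naively repairing one lost fragment first
requires downloading k fragments to reconstruct the object.

```python
def replication_repair_bw(obj_size):
    # a lost replica is restored by copying the object once
    # from any surviving replica
    return obj_size

def erasure_repair_bw(obj_size, k):
    # naive fragment repair: download k fragments (= one full object)
    # to reconstruct, then upload one regenerated fragment of size obj_size/k
    return k * (obj_size / k) + obj_size / k

# bandwidth spent per byte of redundancy actually restored
obj = 1.0
repl = replication_repair_bw(obj) / obj        # one byte moved per byte restored
ec = erasure_repair_bw(obj, 4) / (obj / 4)     # k + 1 bytes moved per byte restored
```

So with k = 4, each byte of redundancy restored costs erasure coding five
times the bandwidth of replication under this model, which is why repair
traffic can dominate the initial upload in a high-churn network.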
>> If I understood it right, Tahoe clients simply keep a connection with each
>> storage node in a storage cluster.
>
> That's right. We have kicked around some ideas about how to do a more
> scalable-DHT-like routing instead of keeping a connection open from
> each client to each server, but even if we had such a thing Tahoe-LAFS
> grids would still be comprised exclusively of servers whose owners
> gave you some reason to believe that they were reliable. Scalable
> routing a la DHT wouldn't be sufficient to allow you to safely rely on
> strangers for storage, for various reasons that you touched on next:
"Tahoe-LAFS grids would still be comprised exclusively of servers whose
owners gave you some reason to believe that they were reliable."
^^
That is exactly what I initially referred to as "trusted" and "trusted
environment". Because the servers that take part in the grid are selected
(friends, an operator you trust, etc.), you can assume within Tahoe's
use-case that all (or almost all) servers are honest and have no
malicious intentions.
This assumption is not important for data integrity and confidentiality
because this is assured by cryptographic primitives. However, this trust
relationship ("friends network") is necessary to gain reliability with
regard to data availability / accessibility and censorship-resistance.
When you give up the assumption of a trusted environment and allow random
strangers to join the network and serve as storage nodes, the Tahoe design
can become vulnerable to DoS attacks. That's not a criticism of Tahoe at
all because it's not the use-case of Tahoe. I just wanted to point out why
Tahoe cannot be employed (at least not without modification) for the use
case I have in mind: A global P2P file store made of arbitrary, unreliable
nodes.
>> So if the DHT is deployed on untrusted nodes we need to care about things
>> like admission control, sybil attacks, routing and index poisoning, eclipse
>> attack and so on.
>
> Hm, but then you say something that I don't quite follow:
>
>> - It may need further modification to be safely usable in a network
>> comprised of untrusted nodes (sybils, DHT robustness against denial of
>> service attacks, ...)
>
> I think the word "trust" often causes confusion, because it bundles
> together a lot of concepts into one word. I find that rephrasing
> things in terms of "reliance" often makes things clearer.
Ok, we can use the term reliable or maybe it's clearer referring to game
theory's "selfish but honest". I realize now that my claim that Tahoe
would not be deployable to "untrusted nodes" might have sounded as if I
wanted to challenge Tahoe's least authority principle. That's of course
not the case.
I just wanted to point out that a) Tahoe doesn't scale to the size I'd
like and b) is not designed to withstand the presence of malicious nodes.
Both would however be needed for the use-case I have in mind.
> So: Tahoe-LAFS users absolutely do *not* rely on the storage servers
> for confidentiality and integrity. Confidentiality and integrity are
> guaranteed by the user's client software, using math (cryptography).
> Even if *all* of the storage servers that you are using turn out to be
> controlled by a single malicious entity who will stop at nothing to
> harm you, this doesn't threaten the confidentiality of the data in
> your files nor its integrity.
I know. That's why I also said that I don't think Tahoe has deficiencies
with regard to confidentiality and integrity even if used on a global
scale in an uncontrolled / partly malicious environment.
> But, Tahoe-LAFS users *do* rely on the storage servers for the
> longevity and availability of their data. If the malicious entity that
> controls all the servers decides to delete all of the ciphertext that
> they are holding for you, then no mathematical magic will help you get
> the data back. :-)
Correct. But you can design the system in such a way that the data will
still be available as long as only a certain percentage of the nodes are
"honest". Then building a reliable storage system reduces to making it
practically impossible for any single malicious entity to
gain control over the necessary set of network nodes. And here, size
becomes your friend: The larger the network and the wider the stored data
is dispersed, the harder for a single attacker to reach the necessary
percentage to break availability. That's mathematical magic too ;)
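That size argument can be illustrated with a hypergeometric tail (all
parameters below are made up for illustration): an attacker controlling m
of N nodes destroys a file only if more than n - k of its n randomly
placed fragments land on attacker nodes.

```python
from math import comb

def p_file_destroyed(N, m, k, n):
    """Probability that an attacker controlling m of N nodes holds more
    than n - k of a file's n fragments (each on a distinct random node),
    so that fewer than k honest fragments survive."""
    total = comb(N, n)
    return sum(comb(m, i) * comb(N - m, n - i)
               for i in range(n - k + 1, n + 1)) / total

# same attacker fleet (100 nodes), growing network: k = 16 of n = 32 needed
small_net = p_file_destroyed(200, 100, 16, 32)    # attacker owns 50% of nodes
big_net = p_file_destroyed(1000, 100, 16, 32)     # attacker owns 10% of nodes
```

With the attacker's fleet held fixed, growing the network from 200 to
1000 nodes drops the per-file destruction probability from a substantial
chance to essentially zero, which is the "size becomes your friend" effect.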
> That is why Tahoe-LAFS users typically limit the set of storage
> servers that they will entrust their ciphertext. They choose only
> servers which are operated by friends of theirs, or by a company that
> they pay for service, or servers operated by members of a group that
> has collectively agreed to trade storage for storage with each other.
Yep, I understand the concept. That's the trust relationship I initially
referred to. However, I don't quite like the concept ;)
It certainly has a use-case but it is limited, imho. Your average Joe will
almost certainly not be able to set up a dedicated storage network with
his friends. It's too complicated (or he doesn't have friends ;)). The
only easy option is to connect to the storage grid of a commercial provider
(like allmydata was). But then you face risks similar to those of using
a central cloud storage system in the first place: provider
insolvency, management failure, etc.
So: One large, public network where everyone could freely join or leave
would be much easier from a usability point of view: Just download and
run the software and you're ready to start. No setup, no configuration,
no further management required.
> I wrote more on this topic in a letter to the tahoe-dev mailing list
> last night:
>
> "BitTorrent for storage" is a bad idea
> http://tahoe-lafs.org/pipermail/tahoe-dev/2011-February/006150.html
Thanks for the cross-post. I'm not on the Tahoe dev list, so I missed the
earlier discussion. To clarify: I don't think your documentation on Tahoe
is incomprehensible. Seems it is rather me who is widely misunderstood ;)
I think I've made it clear above what I was referring to with "untrusted
nodes" and that I don't think Tahoe has any design problem within the
scope of its targeted use-case.
The question is rather whether "BitTorrent for storage" is really a bad
idea and whether there's really no value for Tahoe in dropping the "friends
network" assumption.
I named "ease of use" as an argument already. And I think this is not just
a "nice to have" feature. If ease of use is not provided built-in, people
will look for alternatives to easily join a storage grid. If you take
experiments like "volunteer-grid" a bit further, you are not far from my
envisioned ad-hoc network of strangers. The participants in such a volunteer
grid are not your friends. Can you still disregard the possibility of
malicious nodes for this use-case?
>> - To guarantee persistence in a P2P network of untrusted and unreliable
>> nodes Tahoe's information dispersal strategy needs to be adapted. The degree
>> of redundancy must be increased (n/k) but just as well the number of
>> erasure coded fragments (k) too for storage efficiency.
>
> Why do you think these parameters would need to be changed?
For several reasons:
In a P2P network made of home computers, the availability of the single
nodes will be significantly lower than for dedicated storage nodes in a
grid (maybe just around 30-40% instead of 90%). To still achieve high
overall availability, n needs to be increased (or more precisely: n/k).
Because we however don't want to introduce too much redundancy and waste
storage, we'd like to increase system availability primarily by increasing
k. This also has the advantage that we could download from many sources
in parallel and better level out node heterogeneity with regard to available
upstream bandwidth.
Finally, large k (and n) makes it much harder for a malicious entity to
attack the availability of a file because the attacker then needs to gain
control over not just a few but many specific storage nodes, which - with
proper admission control and network size - can be made largely infeasible.
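The claim that increasing k (at a fixed n/k ratio) helps can be checked
with the standard binomial availability model (the node availabilities
below are assumed values, not measurements): a file is readable if at
least k of its n fragments sit on currently-online nodes.

```python
from math import comb

def availability(k, n, p):
    # probability that at least k of n fragments are on online nodes,
    # each node being up independently with probability p
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# dedicated grid: 90%-available nodes, mild redundancy
grid = availability(3, 10, 0.9)
# home nodes at 35% availability, same n/k ratio of 4: a larger k
# concentrates the binomial and raises availability with no extra storage
small_k = availability(4, 16, 0.35)
large_k = availability(64, 256, 0.35)
```

With 35%-available home nodes, k = 4, n = 16 leaves a noticeable failure
probability, while k = 64, n = 256 pushes availability past 99% at the
same 4x storage overhead.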
>> I don't know if
>> this is practically doable within Tahoe's current structure (galois-field
>> based Reed-Solomon coding is slow with large k and n) or what other side
>> effects this may have (size of the Merkle trees?).
>
> It is plenty efficient for k and n up to about 256. It is also
> probably efficient enough for k and n up to about 2^16, although I'm
> skeptical that anyone actually needs k and n that size.
Hm, what is plenty efficient for you? Again, I have to admit that I didn't
run any specific tests with zfec, so my opinion is mostly guess-work: I
read James Plank's paper on open-source erasure coding libraries that
included some numbers on zfec and did some actual tests with Alexandre
Soro's Fermat-prime based reed-solomon implementation (which should be more
efficient than zfec for k, n >= 256 - at least in theory): For k = 256 and
n = 1024, I get a decoding speed of about 7.5 Mbit/s on my two-year-old
notebook.
But this is not efficient enough for me. Erasure decoding speed should be
much higher than network bandwidth. And considering that the next-gen
broadband internet links offer 100 MBit/s for the home user, 7.5 Mbit/s
erasure decoding seems far from sufficient.
> There is a Merkle Tree in Tahoe-LAFS which is computed over the
> identifiers of the n shares, so that Merkle Tree would grow in size as
> n grew. However that is a small cost that probably wouldn't need much
> if any optimization.
Yes, the size of the hash tree should indeed be less of an issue.
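Indeed, a binary Merkle tree over n share identifiers has only about 2n
nodes, so it grows linearly with n and hashing it is cheap. A minimal
sketch of such a tree (not Tahoe's actual share format):

```python
import hashlib

def merkle_root(leaves):
    # hash the share identifiers, then combine pair-wise level by level;
    # an odd trailing node is carried up by duplicating it
    level = [hashlib.sha256(x).digest() for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

Even for n = 1024 shares this is roughly 2000 SHA-256 invocations over
short inputs, negligible next to erasure coding the data itself.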
>> - Censorship-resistance obviously also depends on availability and data
>> persistence guarantees. If directed (or undirected) denial of service
>> attacks are possible on the DHT, the system cannot said to be censorship-
>> resistant.
>
> Hm, so if I understand correctly, Tahoe-LAFS currently doesn't have
> *scalability* in terms of the number of servers, but it does have
> nearly optimal *censorship resistance* at a given scale. For example,
> suppose there are 200 servers which are all joined in the conspiracy
> to host a repository of Free and Open Source Software, and some evil
> attacker is expending resources attempting to disrupt that hosting or
> deny users access to it. If those 200 servers are organized into a
> traditional scalable DHT like Chord, then a client would have
> approximately a logarithmic number of connections to servers, say to
> perhaps eight of them. An attacker who wants to deny that client
> access to the Free and Open Source software repository would have to
> take down only eight servers or prevent the client from establishing
> working connections to them, right?
Could be even worse. Actually, if an attacker sits somewhere along the DHT
lookup path he could pretend to be the root node for a searched key. In
that case (and assuming a traditional DHT) just one compromised node could
already be sufficient to make certain data unavailable.
> Whereas with a full bipartite
> graph topology like Tahoe-LAFS the attacker would have to take down or
> deny access to a substantial constant fraction of all 200 of them
> (depending on the ratio of k to n).
Yes. But again: I didn't challenge that ;) Tahoe is good for its current
use case. I was rather responding to questions like "Why don't you simply
use Tahoe for what you have in mind?".
BTW: While Tahoe is pretty censorship-resistant (depending on the size and
ratio of k to n as well as the number of independent parties in the grid),
censorship is a problem you shouldn't need to worry about too much in a
network of "friends" anyway. And for the use-case where the whole storage
grid is operated by just one commercial provider you have no censorship-
resistance anyway - no matter what the network topology is ;)
> (Note: this assumes that the erasure coding parameter n is turned up to
> 200, which is already supported in Tahoe-LAFS -- you can configure it
> in the tahoe.cfg configuration file.)
Yes, that's the same consideration why I proposed to increase k and n too:
Larger k and n reduce the feasibility of DoS attacks.
> (Note: this is about attacking the storage layer, not the introduction
> layer. Those are separate in Tahoe-LAFS and while the latter does need
> some work, it is probably easier to defend the introduction layer than
> the storage layer since introducers are stateless and have minimal
> ability to do damage if they act maliciously. Multiple redundant
> introducers were implemented by MO Faruque Sarker as part of the
> Google Summer of Code 2010 but it hasn't been merged into trunk yet.
> You can help! We need code-review, testing, documentation, etc.
> http://tahoe-lafs.org/trac/tahoe-lafs/ticket/68 :-) )
I agree that bootstrapping can be rather easily distributed and thereby
made more robust against potential attacks.
>> And there are other, less-obvious censorship risks too: If a third-party
>> can force specific node owners (e.g. by court order) to shut down their
>> storage nodes then certain data can become unavailable in the system.
>
> You may be interested in Tahoe-LAFS-over-Tor and Tahoe-LAFS-over-i2p.
> :-) I'm sure both of those projects would be grateful for bug reports,
> patches, etc.
Not really. I'm not convinced of the onion routing or mixnet concepts for
such a storage use-case. First, it's inefficient as lots of bandwidth is
"wasted". Second, is sender and receiver anonymity really what we need?
I think in a storage network there is no need to disguise who communicates
with whom or who participates in the network at all. For privacy, it's
important to obfuscate which nodes store which data and consequently who
accesses or forwards which data.
Anonymous routing is not sufficient to solve the former and does not fix
the "legal attack" problem I mentioned (rather the opposite seems to be
true). Let's consider the case of downloading an MP3 file in Freenet
(the infamous "anonymous filesharing" use-case):
The file is stored (or cached) encrypted on some node. It is transferred
to the requester by onion routing. Indeed, the downloader cannot determine
the identity of the storage node and the storage node doesn't know the
identity of the requester. So with regard to criminal liability this scheme
may provide some protection (plausible deniability, benefit of the doubt).
However, in civil law it doesn't matter who the original sender or the
actual receiver of the copyrighted work is. Everybody who participates in
the copying and distribution of the file can become liable to recourse. This
means that the copyright holder can send a C&D letter to every node along
the onion route - and not just sue sender or receiver. And it should not be
difficult to log the identity of some of the hops along the route.
Now I don't want to promote any "anonymous filesharing" nonsense. I just
think that in order for people to accept the use of online storage their
data needs to be as "safe" from third-party delete requests as on their
local hard disk. Further, people will only be ready to participate in such
a network (and dedicate storage to it) when they don't risk being held
liable for the actions of others.
Regards,
Michael
_______________________________________________
p2p-hackers mailing list
p2p-hackers(a)lists.zooko.com
http://lists.zooko.com/mailman/listinfo/p2p-hackers
----- End forwarded message -----
1
0
Dear Zooko,
thanks for your detailed comments. I'd like to elaborate some more on
them in the following:
Quoting Zooko O'Whielacronx <zooko(a)zooko.com>:
>> Also today
>> indeed bandwidth should be the more precious resource in a P2P system
>> compared to storage, which is available in abundance to the home user.
>> So a simple replication strategy might not be so bad after all...
>
> Replication costs more bandwidth on upload than erasure coding does
> (for a similar degree of fault-tolerance) as well as costing more
> storage.
True. But the repair cost is minimal compared to replacing a lost erasure
coded fragment. So assuming long-lived data stored in a network of dynamic,
unreliable peers (P2P), repair may be frequently needed and cost much more
bandwidth over the whole life-time of a data object than the initial upload.
In such an environment replication can hence be more bandwidth efficient
than erasure coding - while of course requiring more storage in any case.
>> If I understood it right, Tahoe clients simply keep a connection with each
>> storage node in a storage cluster.
>
> That's right, We have kicked around some ideas about how to do a more
> scalable-DHT-like routing instead of keeping a connection open from
> each client to each server, but even if we had such a thing Tahoe-LAFS
> grids would still be comprised exclusively of servers whose owners
> gave you some reason to believe that they were reliable. Scalable
> routing a la DHT wouldn't be sufficient to allow you to safely rely on
> strangers for storage, for various reasons that you touched on next:
"Tahoe-LAFS grids would still be comprised exclusively of servers whose
owners gave you some reason to believe that they were reliable."
^^
That is exactly what I initially referred to as "trusted" and "trusted
environment". Because the servers that take part in the grid are selected
(friends, an operator you trust, etc.), you can assume within Tahoe's
use-case that all (or almost all) servers are honest and have no
malicious intentions.
This assumption is not important for data integrity and confidentiality
because this is assured by cryptographic primitives. However, this trust
relationship ("friends network") is necessary to gain reliability with
regard to data availability / accessibility and censorship-resistance.
When you give up the assumption of a trusted environment and allow random
strangers to join the network and serve as storage nodes, the Tahoe design
can become vulnerable to DoS attacks. That's not a criticism of Tahoe at
all because it's not the use-case of Tahoe. I just wanted to point out why
Tahoe cannot be employed (at least not without modification) for the use
case I have in mind: A global P2P file store made of arbitrary, unreliable
nodes.
>> So if the DHT is deployed on untrusted nodes we need to care about things
>> like admission control, sybil attack, routing and index poisening, eclipse
>> attack and so on.
>
> Hm, but then you say something that I don't quite follow:
>
>> - It may need further modification to be safely usable in a network
>> comprised of untrusted nodes (sybils, DHT robustness against denial of
>> service attacks, ...)
>
> I think the word "trust" often causes confusion, because it bundles
> together a lot of concepts into one word. I find that rephrasing
> things in terms of "reliance" often makes things clearer.
Ok, we can use the term reliable or maybe it's clearer referring to game
theory's "selfish but honest". I realize now that my claim that Tahoe
would not be deployable to "untrusted nodes" might have sound as if I
wanted to challenge Tahoe's least authority principle. That's of course
not the case.
I just wanted to point out that a) Tahoe doesn't scale to the size I'd
like and b) is not designed to withstand the presence of malicious nodes.
Both would however be needed for the use-case I have in mind.
> So: Tahoe-LAFS users absolutely do *not* rely on the storage servers
> for confidentiality and integrity. Confidentiality and integrity are
> guaranteed by the user's client software, using math (cryptography).
> Even if *all* of the storage servers that you are using turn out to be
> controlled by a single malicious entity who will stop at nothing to
> harm you, this doesn't threaten the confidentiality of the data in
> your files nor its integrity.
I know. That's why I also said that I don't think Tahoe has deficiencies
with regard to confidentiality and integrity even if used on a global
scale in an uncontrolled / partly malicious environment.
> But, Tahoe-LAFS users *do* rely on the storage servers for the
> longevity and availability of their data. If the malicious entity that
> controls all the servers decides to delete all of the ciphertext that
> they are holding for you, then no mathematical magic will help you get
> the data back. :-)
Correct. But you can design the system in such a way that the data will
still be available as long as a certain percentage of the nodes are
"honest". Then building a reliable storage system reduces to making it
practically impossible for any single malicious entity to gain control
over the necessary set of network nodes. And here, size becomes your
friend: the larger the network and the more widely the stored data is
dispersed, the harder it is for a single attacker to reach the percentage
needed to break availability. That's mathematical magic too ;)
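To make the "mathematical magic" concrete: if shares are placed on independently failing nodes, the availability of a k-of-n erasure-coded file is a binomial tail. A minimal sketch in Python (the 70% node availability is an illustrative assumption; 3-of-10 is Tahoe's default encoding):

```python
from math import comb

def file_availability(k: int, n: int, p: float) -> float:
    """Probability that at least k of n shares are retrievable,
    assuming each storage node is independently up with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative assumption: nodes that are individually up only 70% of the
# time still give a 3-of-10 encoded file well over 99% availability.
print(file_availability(3, 10, 0.7))
```

The same formula shows why a constant fraction of "honest" nodes suffices: as long as the expected number of surviving shares comfortably exceeds k, availability approaches certainty.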
> That is why Tahoe-LAFS users typically limit the set of storage
> servers that they will entrust with their ciphertext. They choose only
> servers which are operated by friends of theirs, or by a company that
> they pay for service, or servers operated by members of a group that
> has collectively agreed to trade storage for storage with each other.
Yep, I understand the concept. That's the trust relationship I initially
referred to. However, I don't quite like the concept ;)
It certainly has its use-cases, but they are limited imho. Your average Joe
will pretty certainly not be able to set up a dedicated storage network with
his friends. It's too complicated (or he doesn't have friends ;)). The
only easy option is to connect to the storage grid of a commercial provider
(like allmydata was). But then you suffer from risks similar to those of
using a central cloud storage system in the first place: provider
insolvency, management failure, etc.
So: One large, public network where everyone could freely join or leave
would be much easier from a usability point of view: Just download and
run the software and you're ready to start. No setup, no configuration,
no further management required.
> I wrote more on this topic in a letter to the tahoe-dev mailing list
> last night:
>
> "BitTorrent for storage" is a bad idea
> http://tahoe-lafs.org/pipermail/tahoe-dev/2011-February/006150.html
Thanks for the cross-post. I'm not on the Tahoe dev list, so I missed the
earlier discussion. To clarify: I don't think your documentation on Tahoe
is incomprehensible. It seems it is rather me who has been widely misunderstood ;)
I think I've made it clear above what I was referring to with "untrusted
nodes" and that I don't think Tahoe has any design problem within the
scope of its targeted use-case.
The question is rather whether "BitTorrent for storage" is really a bad
idea and whether there's really no value for Tahoe in dropping the "friends
network" assumption.
I named "ease of use" as an argument already. And I think this is not just
a "nice to have" feature. If ease of use is not provided built-in, people
will look for alternatives to easily join a storage grid. If you take
experiments like the "volunteer grid" a bit further, you are not far from my
envisioned ad-hoc network of strangers. The participants in such a volunteer
grid are not your friends. Can you still disregard the possibility of
malicious nodes for this use-case?
>> - To guarantee persistence in a P2P network of untrusted and unreliable
>> nodes, Tahoe's information dispersal strategy needs to be adapted. The
>> degree of redundancy (n/k) must be increased, but so must the number of
>> erasure-coded fragments (k), for storage efficiency.
>
> Why do you think these parameters would need to be changed?
For several reasons:
In a P2P network made of home computers, the availability of individual
nodes will be significantly lower than that of dedicated storage nodes in a
grid (maybe just around 30-40% instead of 90%). To still achieve high
overall availability, n needs to be increased (or more precisely: n/k).
However, because we don't want to introduce too much redundancy and waste
storage, we'd like to increase system availability primarily by increasing
k. This also has the advantage that we could download from many sources
in parallel and better compensate for node heterogeneity with regard to
available upstream bandwidth.
Finally, a large k (and n) makes it much harder for a malicious entity to
attack the availability of a file, because the attacker then needs to gain
control over not just a few but many specific storage nodes, which - with
proper admission control and network size - can be made largely infeasible.
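A quick numerical sketch of this argument (the 35% node availability and the two parameter pairs are assumptions for illustration): holding the redundancy ratio n/k fixed while scaling k and n up concentrates the number of surviving shares around its mean, so availability improves with no extra storage overhead:

```python
from math import comb

def avail(k: int, n: int, p: float) -> float:
    # Probability that at least k of n shares survive, with each share's
    # node independently available with probability p.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.35                   # assumed availability of a typical home node
small = avail(3, 10, p)    # small code, ~3.3x redundancy
large = avail(30, 100, p)  # same ~3.3x redundancy, but larger k and n
print(f"3-of-10:   {small:.3f}")
print(f"30-of-100: {large:.3f}")
```

With identical storage overhead the larger code is noticeably more available, and an attacker must now suppress 71 of a file's 100 share-holding nodes instead of 8 of 10 to destroy it.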
>> I don't know if
>> this is practically doable within Tahoe's current structure (galois-field
>> based Reed-Solomon coding is slow with large k and n) or what other side
>> effects this may have (size of the Merkle trees?).
>
> It is plenty efficient for k and n up to about 256. It is also
> probably efficient enough for k and n up to about 2^16, although I'm
> skeptical that anyone actually needs k and n that size.
Hm, what is "plenty efficient" for you? Again, I have to admit that I didn't
run any specific tests with zfec, so my opinion is mostly guesswork: I
read James Plank's paper on open-source erasure coding libraries, which
included some numbers on zfec, and did some actual tests with Alexandre
Soro's Fermat-prime based Reed-Solomon implementation (which should be more
efficient than zfec for k, n >= 256 - at least in theory): for k = 256 and
n = 1024, I get a decoding speed of about 7.5 Mbit/s on my two-year-old
notebook.
But this is not efficient enough for me. Erasure decoding should be
much faster than network bandwidth. And considering that next-generation
broadband links offer 100 Mbit/s to the home user, 7.5 Mbit/s of erasure
decoding is by far not sufficient.
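A back-of-the-envelope check, using only the figures quoted above (the 700 MB file size is an assumed example), makes the shortfall explicit:

```python
link_mbit = 100.0    # assumed next-gen home downstream, Mbit/s
decode_mbit = 7.5    # measured decode speed for k = 256, n = 1024 (see above)

file_mbit = 700 * 8  # an example 700 MB file, expressed in Mbit
fetch_s = file_mbit / link_mbit
decode_s = file_mbit / decode_mbit
print(f"download: {fetch_s:.0f} s, decode: {decode_s:.0f} s, "
      f"decoder {decode_s / fetch_s:.1f}x slower than the link")
```

The decoder would dominate the transfer time by more than an order of magnitude, which is why decoding throughput well above line speed matters.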
> There is a Merkle Tree in Tahoe-LAFS which is computed over the
> identifiers of the n shares, so that Merkle Tree would grow in size as
> n grew. However that is a small cost that probably wouldn't need much
> if any optimization.
Yes, the size of the hash tree should indeed be less of an issue.
>> - Censorship-resistance obviously also depends on availability and data
>> persistence guarantees. If directed (or undirected) denial of service
>> attacks are possible on the DHT, the system cannot be said to be censorship-
>> resistant.
>
> Hm, so if I understand correctly, Tahoe-LAFS currently doesn't have
> *scalability* in terms of the number of servers, but it does have
> nearly optimal *censorship resistance* at a given scale. For example,
> suppose there are 200 servers which are all joined in the conspiracy
> to host a repository of Free and Open Source Software, and some evil
> attacker is expending resources attempting to disrupt that hosting or
> deny users access to it. If those 200 servers are organized into a
> traditional scalable DHT like Chord, then a client would have
> approximately a logarithmic number of connections to servers, say to
> perhaps eight of them. An attacker who wants to deny that client
> access to the Free and Open Source software repository would have to
> take down only eight servers or prevent the client from establishing
> working connections to them, right?
Could be even worse. Actually, if an attacker sits somewhere along the DHT
lookup path he could pretend to be the root node for a searched key. In
that case (and assuming a traditional DHT) just one compromised node could
already be sufficient to make certain data unavailable.
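The "perhaps eight" in the quoted Chord example is just the logarithmic routing-table size; a two-line check with the assumed 200-server grid:

```python
from math import ceil, log2

n_servers = 200
chord_connections = ceil(log2(n_servers))  # O(log N) fingers in a Chord ring
full_graph_connections = n_servers         # Tahoe's full bipartite graph
print(chord_connections, full_graph_connections)  # 8 vs. 200 attack targets
```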
> Whereas with a full bipartite
> graph topology like Tahoe-LAFS the attacker would have to take down or
> deny access to a substantial constant fraction of all 200 of them
> (depending on the ratio of k to n).
Yes. But again: I didn't challenge that ;) Tahoe is good for its current
use case. I was rather responding to questions like "Why don't you simply
use Tahoe for what you have in mind?".
BTW: While Tahoe is pretty censorship-resistant (depending on the size and
ratio of k to n as well as the number of independent parties in the grid),
censorship is a problem you shouldn't need to worry about too much in a
network of "friends" anyway. And for the use-case where the whole storage
grid is operated by just one commercial provider, you have no censorship-
resistance anyway - no matter what the network topology is ;)
> (Note: this is assuming that the erasure coding parameter n is turned up to
> 200, which is already supported in Tahoe-LAFS -- you can configure it
> in the tahoe.cfg configuration file.)
Yes, that's the same reason I proposed to increase k and n:
larger k and n reduce the feasibility of DoS attacks.
> (Note: this is about attacking the storage layer, not the introduction
> layer. Those are separate in Tahoe-LAFS and while the latter does need
> some work, it is probably easier to defend the introduction layer than
> the storage layer since introducers are stateless and have minimal
> ability to do damage if they act maliciously. Multiple redundant
> introducers were implemented by MO Faruque Sarker as part of the
> Google Summer of Code 2010 but it hasn't been merged into trunk yet.
> You can help! We need code-review, testing, documentation, etc.
> http://tahoe-lafs.org/trac/tahoe-lafs/ticket/68 :-) )
I agree that bootstrapping can be rather easily distributed and thereby
made more robust against potential attacks.
>> And there are other, less-obvious censorship risks too: If a third-party
>> can force specific node owners (e.g. by court order) to shut down their
>> storage nodes then certain data can become unavailable in the system.
>
> You may be interested in Tahoe-LAFS-over-Tor and Tahoe-LAFS-over-i2p.
> :-) I'm sure both of those projects would be grateful for bug reports,
> patches, etc.
Not really. I'm not convinced of the onion routing or mixnet concepts for
such a storage use-case. First, it's inefficient as lots of bandwidth is
"wasted". Second, is sender and receiver anonymity really what we need?
I think in a storage network there is no need to disguise who communicates
with whom or who participates in the network at all. For privacy, it's
important to obfuscate which nodes store which data and consequently who
accesses or forwards which data.
Anonymous routing is not sufficient to solve that problem, and it does not
fix the "legal attack" problem I mentioned (rather the opposite seems to be
true). Let's consider the case of downloading an MP3 file in Freenet
(the infamous "anonymous filesharing" use-case):
The file is stored (or cached) encrypted on some node. It is transferred
to the requester by onion routing. Indeed, the downloader cannot determine
the identity of the storage node and the storage node doesn't know the
identity of the requester. So with regard to criminal liability this scheme
may provide some protection (plausible deniability, benefit of the doubt).
However, in civil law it doesn't matter who is the original sender and who
the actual receiver of the copyrighted work. Everybody who participates in
the copying and distribution of the file can become liable to recourse. This
means that the copyright holder can send a C&D letter to every node along
the onion route - and not just sue sender or receiver. And it should not be
difficult to log the identity of some of the hops along the route.
Now I don't want to promote any "anonymous filesharing" nonsense. I just
think that in order for people to accept the use of online storage their
data needs to be as "safe" from third-party delete requests as on their
local hard disk. Further, people will only be ready to participate in such
a network (and dedicate storage to it) if they don't risk being held
liable for the actions of others.
Regards,
Michael
_______________________________________________
p2p-hackers mailing list
p2p-hackers(a)lists.zooko.com
http://lists.zooko.com/mailman/listinfo/p2p-hackers
----- End forwarded message -----
--
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
*************************************************
DIMACS Workshop on Electronic Voting -- Theory and Practice
May 26 - 27, 2004
DIMACS Center, Rutgers University, Piscataway, NJ
Organizers:
Markus Jakobsson, RSA Laboratories, mjakobsson(a)rsasecurity.com
Ari Juels, RSA Laboratories, ajuels(a)rsasecurity.com
Presented under the auspices of the Special Focus on Communication
Security and Information Privacy and the Special Focus on Computation
and the Socio-Economic Sciences.
************************************************
To many technologists, electronic voting represents a seemingly simple
exercise in system design. In reality, the many requirements it
imposes with regard to correctness, anonymity, and availability pose
an unusually thorny collection of problems, and the security risks
associated with electronic voting, especially remotely over the
Internet, are numerous and complex, posing major technological
challenges for computer scientists. (For a few examples, see
references below.) The problems range from the threat of
denial-of-service-attacks to the need for careful selection of
techniques to enforce private and correct tallying of ballots. Other
possible requirements for electronic voting schemes are resistance to
vote buying, defenses against malfunctioning software, viruses, and
related problems, auditability, and the development of user-friendly
and universally accessible interfaces.
The goal of the workshop is to bring together and foster an interplay
of ideas among researchers and practitioners in different areas of
relevance to voting. For example, the workshop will investigate
prevention of penetration attacks that involve the use of a delivery
mechanism to transport a malicious payload to the target host. This
could be in the form of a ``Trojan horse'' or remote control
program. It will also investigate vulnerabilities of the communication
path between the voting client (the devices where a voter votes) and
the server (where votes are tallied). Especially in the case of remote
voting, the path must be ``trusted'' and a challenge is to maintain an
authenticated communications linkage. Although not specifically a
security issue, reliability issues are closely related and will also
be considered. The workshop will consider issues dealing with random
hardware and software failures (as opposed to deliberate, intelligent
attack). A key difference between voting and electronic commerce is
that in the former, one wants to irreversibly sever the link between
the ballot and the voter. The workshop will discuss audit trails as a
way of ensuring this. The workshop will also investigate methods for
minimizing coercion and fraud, e.g., schemes that allow a voter to vote
more than once with only the last vote counting.
This workshop is part of the Special Focus on Communication Security
and Information Privacy and will be coordinated with the Special Focus
on Computation and the Socio-Economic Sciences.
This workshop follows a successful first WOTE event, organized by
David Chaum and Ron Rivest in 2001 at Marconi Conference Center in
Tomales Bay, California (http://www.vote.caltech.edu/wote01/). Since
that time, a flurry of voting bills has been enacted at the federal
and state levels, including most notably the Help America Vote Act
(HAVA). Standards development has represented another avenue of reform
(e.g., the IEEE Voting Equipment Standards Project 1583), while a
grassroots movement (http://www.verifiedvoting.org) has arisen to
promote the importance of audit trails as enhancements to
trustworthiness.
**************************************************************
Program:
This is a preliminary program.
Wednesday, May 26, 2004
7:45 - 8:20 Breakfast and Registration
8:20 - 8:30 Welcome and Opening Remarks
Fred Roberts, DIMACS Director
8:30 - 9:15 Ron Rivest (tentative)
9:15 - 10:15 Rebecca Mercuri
10:15 - 10:45 Break
10:45 - 11:30 David Chaum
11:30 - 12:15 Michael Shamos
12:15 - 1:30 Lunch
1:30 - 1:50 European online voting experiences
Andreu Riera i Jorba
1:50 - 2:10 Providing Trusted Paths Using Untrusted Components
Andre Dos Santos
2:10 - 2:30 Internet voting based on PKI: the TruE-vote system
Emilia Rosti
2:30 - 2:50 Andy Neff
2:50 - 3:10 Aggelos Kiayias
3:10 - 3:30 How hard is it to manipulate voting?
Edith Elkind and Helger Lipmaa
3:30 - 3:50 Towards a dependability case for the Chaum e-voting scheme
Peter Ryan
3:50 - 4:20 Break
4:20 - 4:40 Secure practical voting systems: A Cautionary Note
Quisquater
4:40 - 5:25 Rob Ritchie
5:25 - 6:10 Panel (moderator: David Chaum)
6:10 - 7:30 Buffet Dinner - Reception - DIMACS Lounge
Thursday, May 27, 2004
7:45 - 8:30 Breakfast and Registration
8:30 - 9:15 Rice University "hack-a-vote" project
Dan Wallach
9:15 - 9:50 David Jefferson
9:50 - 10:10 Jeroen Van de Graaf
10:10 - 10:30 Voting, Driving, Death, and Social Security: The risk of
centralized voter registration Data
Guy Duncan
10:30 - 11:00 Break
11:00 - 11:20 Pedro Rezende
11:20 - 12:05 On optical scanning
Doug Jones
12:05 - 1:30 Lunch
1:30 - 2:15 SERVE project
Barbara Simons
2:15 - 3:00 Moti Yung
3:00 - 3:20 Ed Gerck
3:20 - 3:50 Break
3:50 - 4:10 Tatsuaki Okamoto
4:10 - 4:30 Lessons from Internet voting during the 2002 FIFA World Cup
Korea/Japan(TM)
Kwangjo Kim
4:30 - 4:50 Kazue Sako
4:50 - 5:50 Panel (moderator: Sanford Morganstein)
********************************************************************
Registration:
(Pre-registration deadline: May 20, 2004)
Please see website for complete registration details.
*********************************************************************
Information on participation, registration, accommodations, and travel
can be found at:
http://dimacs.rutgers.edu/Workshops/Voting/
**PLEASE BE SURE TO PRE-REGISTER EARLY**
********************************************************************
---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo(a)metzdowd.com
--- end forwarded text
--
-----------------
R. A. Hettinga <mailto: rah(a)ibuc.com>
The Internet Bearer Underwriting Corporation <http://www.ibuc.com/>
44 Farquhar Street, Boston, MA 02131 USA
"... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience." -- Edward Gibbon, 'Decline and Fall of the Roman Empire'
On Sat, Jan 15, 2005 at 10:50:49AM -0500, Perry E. Metzger wrote:
> Panix is highly screwed by this -- their users are all off the air,
> and they can't really wait for an appeals process to complete in order
> to get everything back together again.
from panix shell hosts motd:
. panix.net usable as panix.com (marcotte) Sat Jan 15 10:44:57 2005
.
. Until we resolve the issue of the domain "panix.com", we have set up
. the domain "panix.net" to include the same names and addresses as
. "panix.com".
.
. You may use this as a temporary solution for access to mail, webpages,
. etc. Wherever you would use "panix.com", you can replace it with
. "panix.net".
--
Henry Yen Aegis Information Systems, Inc.
Senior Systems Programmer Hicksville, New York
--- end forwarded text
Europe's Plan to Track Phone and Net Use
http://www.nytimes.com/2007/02/20/business/worldbusiness/20privacy.html
By VICTORIA SHANNON
PARIS, Feb. 19 -- European governments are preparing legislation to
require companies to keep detailed data about people's Internet and
phone use that goes beyond what the countries will be required to
do under a European Union directive.
In Germany, a proposal from the Ministry of Justice would
essentially prohibit using false information to create an e-mail
account, making the standard Internet practice of creating accounts
with pseudonyms illegal.
A draft law in the Netherlands would likewise go further than the
European Union requires, in this case by requiring phone companies
to save records of a caller's precise location during an entire
mobile phone conversation.
Even now, Internet service providers in Europe divulge customer
information -- which they normally keep on hand for about three
months, for billing purposes -- to police officials with legally
valid orders on a routine basis, said Peter Fleischer, the
Paris-based European privacy counsel for Google. The data concerns
how the communication was sent and by whom but not its content.
But law enforcement officials argued after the terrorist bombings
in Spain and Britain that they needed better and longer data
storage from companies handling Europe's communications networks.
European Union countries have until 2009 to put the Data Retention
Directive into law, so the proposals seen now are early
interpretations. But some people involved in the issue are
concerned about a shift in policy in Europe, which has long been a
defender of individuals' privacy rights.
Under the proposals in Germany, consumers theoretically could not
create fictitious e-mail accounts, to disguise themselves in online
auctions, for example. Nor could they use a made-up account for
receiving commercial junk mail. While e-mail aliases would not
be banned, they would have to be traceable to the actual account
holder.
"This is an incredibly bad thing in terms of privacy, since people
have grown up with the idea that you ought to be able to have an
anonymous e-mail account," Mr. Fleischer said. "Moreover, it's
totally unenforceable and would never work."
Mr. Fleischer said the law would have to require some kind of
identity verification, "like you may have to register for an e-mail
address with your national ID card."
Jörg Hladjk, a privacy lawyer at Hunton & Williams, a Brussels law
firm, said that might also mean that it could become illegal to pay
cash for prepaid cellphone accounts. The billing information for
regular cellphone subscriptions is already verified.
Mr. Fleischer said: "It's ironic, because Germany is one of the
countries in Europe where people talk the most about privacy. In
terms of consciousness of privacy in general, I would put Germany
at the extreme end."
He said it was not clear that any European law would apply to
e-mail providers based in the United States, like Google, so anyone
who needed an unverified e-mail address -- for political,
commercial or philosophical reasons -- could still use Gmail, Yahoo
or Hotmail addresses.
Mr. Hladjk said, "It's going to be difficult to know which law
applies." Google requires only two pieces of information to open a
Gmail account -- a name and a password -- and the company does not
try to determine whether the name is authentic.
In the Netherlands, the proposed extension of the law on phone
company records to all mobile location data "implies surveillance
of the movement of large amounts of innocent citizens," the Dutch
Data Protection Agency has said. The agency concluded in January
that the draft disregarded privacy protections in the European
Convention on Human Rights. Similarly, the German technology trade
association Bitkom said the draft there violated the German
Constitution.
Internet and telecommunications industry associations raised
objections when the directive was being debated, but at that time
their concerns were for the length of time the data would have to
be stored and how the companies would be compensated for the cost
of gathering and keeping the information. The directive ended up
leaving both decisions in the hands of national governments,
setting a range of six months to two years. The German draft
settled on six months, while in Spain the proposal is for a year,
and in the Netherlands it is 18 months.
"There are not a lot of people in Germany who support this draft
entirely," said Christian Spahr, a spokesman for Bitkom. "But there
are others who are more critical of it than we are."
Post message: transhumantech(a)yahoogroups.com
Subscribe: transhumantech-subscribe(a)yahoogroups.com
Unsubscribe: transhumantech-unsubscribe(a)yahoogroups.com
List owner: transhumantech-owner(a)yahoogroups.com
List home: http://www.yahoogroups.com/group/transhumantech/
----- End forwarded message -----
An Open Letter to Google:
Concepts for a Google Privacy Initiative
Lauren Weinstein
May 9, 2006
http://www.vortex.com/google-privacy-initiative
Preface: The overall situation relating to U.S. and global
privacy issues is deteriorating rapidly. Recent Congressional
moves toward legislating broad, government-mandated data
retention laws ( http://lauren.vortex.com/archive/000175.html )
are particularly alarming. The manner in which we
collectively choose to address these sorts of issues is
likely to have drastic impacts not only on our own lives, but
also broadly on the shape of society, both today and in the
future.
Greetings. When I was recently invited to speak at Google's Santa
Monica center ( Video at http://lauren.vortex.com/archive/000168.html ),
I was impressed by the quality of the facilities, but even more so
by the caliber of the Google employees I met during my visit.
Google's capabilities are extraordinary. While I have been publicly
critical of some Google policies, my concerns have been focused not
on Google today, but rather mainly on how Google's immense data
processing, storage, and related infrastructures might be abused
in the future, particularly by outside entities in a position to
force Google's hand despite Google's own best intentions.
As discussed in my talk, I consider Google to be an incredibly
important and admirable resource with vast potential to do good.
But by the same token, it is largely this very power that increases
the risks of serious abuses of Google capabilities being forced upon
the organization, and Google will likely be unable to mitigate many
of these unless it takes major proactive steps on an immediate and
ongoing basis, particularly including privacy-related efforts.
Increasingly, Internet users are becoming highly sensitized to both
perceived and real risks to their privacy associated with their use
of the Net. While the real risks we face in this arena are serious
enough, people's confidence (or lack thereof) in products and
services will in many cases be shaped primarily by perceptions, and
often significantly less by the underlying realities. This
highlights the critical fact that to be truly successful, efforts to
reduce privacy risks must not only have genuine and ongoing positive
privacy effects, but also need to be clearly perceived by users and
the broader public to be in place and fully supported as primary
goals of the organizations involved.
Web-based search engines are an obvious current focus of many privacy
concerns, but as more traditional "desktop" applications migrate to
tightly coupled topologies with user data stored on remote servers
not under users' direct local control (e.g. for PC searches,
document preparation, e-mail, etc.), these issues and related
potential risks are rapidly spreading across the entire computer and
Internet spectrum.
Fears that users' private information may be increasingly subject to
intrusive perusal by law enforcement or other authorities (often with
minimal and/or questionable cause) are further damaging user
confidence in such services, with a range of issues related to data
retention being an important element at the heart of these
concerns. To the extent that potentially sensitive data is stored
for extended periods, particularly in non-anonymous forms, it is
inevitable that outside demands for access to it -- on ever broader
scales -- will be accelerating. While individual court cases will
of course vary in their results, the court system cannot be relied
upon to always render appropriate decisions regarding such matters,
particularly in today's political and legislative environments.
I believe that Google, by virtue of its Internet industry leadership,
technical and human resources, and corporate culture, is in a unique
position. Google can demonstrate how world-class privacy protection
policies and technologies can be developed and deployed in ways that
enhance user confidence in current and future Google services -- by
proactively protecting users' private data without interfering with
service operations, innovation, R&D, or the legitimate concerns of
law enforcement. Google could be the acknowledged global leader in
this area, becoming synonymous with the concept of integrating new
and advanced privacy capabilities into world-class Internet services
and products.
Obviously the confidence such efforts would engender in Google's
users would be healthy for Google's bottom line, but more
importantly it would provide genuine and continuing real benefits to
the Google user community itself (i.e. the entire world). Where
non-proprietary information is involved, further benefits to society
could be achieved through making publicly available (via published
papers, conferences, etc.) those aspects of resulting
privacy-related R&D technologies that could be deployed by other
entities to the benefit of the global community.
I recommend that Google establish a team explicitly dedicated to the
development and deployment of privacy-related efforts as outlined
above. Such a team would be tasked with establishing the framework
of these projects in a consistent manner, and ensuring to the
greatest extent practicable that all current and future Google
products and services would be integrated (from the outset when
possible) with these privacy technologies and policies. The team
would need access to other individuals within both the development
and operational aspects of Google, and ideally would report directly
to high-level management.
To be effective, such a team would need to be significantly
interdisciplinary in its makeup and scope, including a variety of
skills. Some of these would include a broad range of CS capabilities
(including specialized mathematical disciplines related to
encryption, among many others). Experience in dealing with the
particular and complex interplay between technology and societal
issues will also be an important component of such a team.
Google's growing scale and influence suggest that the sorts of
privacy efforts suggested herein could be among the most important
non-governmental privacy-related endeavors for many years to come,
and could have vast positive impacts far into the future not only
for Google and its users, but throughout the commercial, nonprofit,
and government sectors.
This document represents a very brief conceptual outline, offered
with only the best interests of both Google and the world at large
in mind. Google and the broader Internet are at a critical
crossroads in many respects, and I believe that Google has the
opportunity to do enormous good by initiating the types of efforts
that I've described.
I would welcome the opportunity to discuss these concepts with you in
more detail and to work with Google toward their realization, as you
may deem appropriate.
Thank you very much for your consideration.
--Lauren--
Lauren Weinstein
lauren(a)vortex.com or lauren(a)pfir.org
Tel: +1 (818) 225-2800
http://www.pfir.org/lauren
Co-Founder, PFIR
- People For Internet Responsibility - http://www.pfir.org
Co-Founder, IOIC
- International Open Internet Coalition - http://www.ioic.net
Moderator, PRIVACY Forum - http://www.vortex.com
Member, ACM Committee on Computers and Public Policy
Lauren's Blog: http://lauren.vortex.com
DayThink: http://daythink.vortex.com
-------------------------------------
You are subscribed as eugen(a)leitl.org
To manage your subscription, go to
http://v2.listbox.com/member/?listname=ip
Archives at: http://www.interesting-people.org/archives/interesting-people/
----- End forwarded message -----
--
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
[demime 1.01d removed an attachment of type application/pgp-signature which had a name of signature.asc]
X-Mailer: Apple Mail (2.749.3)
Reply-To: dave(a)farber.net
Begin forwarded message:
06 Jul '18
http://www.libertesnumeriques.net/evenements/stallman-19octobre2011/a-free-…
Richard Stallman:
Projects with the goal of digital inclusion are making a big assumption.
They are assuming that participating in a digital society is good; but
that's not necessarily true. Being in a digital society can be good or bad,
depending on whether that digital society is just or unjust. There are many
ways in which our freedom is being attacked by digital technology. Digital
technology can make things worse, and it will, unless we fight to prevent
it.
Therefore, if we have an unjust digital society, we should cancel these
projects for digital inclusion and launch projects for digital extraction.
We have to extract people from digital society if it doesn't respect their
freedom; or we have to make it respect their freedom.
[Surveillance]
What are the threats? First, surveillance. Computers are Stalin's dream:
they are ideal tools for surveillance, because anything we do with
computers, the computers can record. They can record the information in a
perfectly indexed searchable form in a central database, ideal for any
tyrant who wants to crush opposition.
Surveillance is sometimes done with our own computers. For instance, if you
have a computer that's running Microsoft Windows, that system is doing
surveillance. There are features in Windows that send data to some server.
Data about the use of the computer. A surveillance feature was discovered
in the iPhone a few months ago, and people started calling it the
"spy-phone." Flash player has a surveillance feature too, and so does the
Amazon "Swindle." They call it the Kindle, but I call it the Swindle
(l'escroc) because it's meant to swindle users out of their freedom. It
makes people identify themselves whenever they buy a book, and that means
Amazon has a giant list of all the books each user has read. Such a list
must not exist anywhere.
Most portable phones will transmit their location, computed using GPS, on
remote command. The phone company is accumulating a giant list of places
that the user has been. A German MP in the Green Party asked the phone
company to give him the data it had about where *he* was. He had to sue, he
had to go to court to get this information. And when he got it, he received
forty-four thousand location points for a period of six months! That's more
than two hundred per day! What that means is someone could form a very good
picture of his activities just by looking at that data.
We can stop our own computers from doing surveillance on us if we have
control of the software that they run. But the software these people are
running, they don't have control over. It's non-free software, and that's
why it has malicious features, such as surveillance. However, the
surveillance is not always done with our own computers, it's also done at
one remove. For instance ISPs in Europe are required to keep data about the
user's internet communications for a long time, in case the State decides
to investigate that person later for whatever imaginable reason.
With a portable phone — even if you can stop the phone from transmitting
your GPS location, the system can determine the phone's location
approximately, by comparing the time when the signals arrive at different
towers. So the phone system can do surveillance even without special
cooperation from the phone itself.
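The tower technique Stallman describes is essentially time-difference-of-arrival multilateration. As a rough sketch of the idea — not how any real phone network implements it; the tower coordinates, the brute-force grid search, and the name `locate_by_tdoa` are all invented for illustration — relative arrival times alone suffice to recover a transmitter's position:

```python
import math

C = 3.0e8  # assumed radio propagation speed, metres per second

def locate_by_tdoa(towers, arrival_times, step=10.0, extent=2000.0):
    """Estimate a transmitter's 2-D position from the arrival times of
    its signal at several towers, by a grid search that minimizes the
    mismatch between observed and predicted arrival-time differences
    (all differences taken relative to the first tower)."""
    (rx, ry), ref_t = towers[0], arrival_times[0]
    best, best_err = None, float("inf")
    n = int(extent / step)
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            x, y = i * step, j * step
            d0 = math.hypot(x - rx, y - ry)  # distance to reference tower
            err = 0.0
            for (tx, ty), t in zip(towers[1:], arrival_times[1:]):
                predicted = (math.hypot(x - tx, y - ty) - d0) / C
                observed = t - ref_t
                err += (predicted - observed) ** 2
            if err < best_err:
                best_err, best = err, (x, y)
    return best
```

In this toy model, four towers around a two-kilometre area and arrival times measured to tens of nanoseconds pin the transmitter down to roughly the grid resolution, which is why no cooperation from the handset's GPS is needed.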
Likewise, the bicycles that people rent in Paris. Of course the system
knows where you get the bicycle and it knows where you return the bicycle,
and I've heard reports that it tracks the bicycles as they are moving
around as well. So they are not something we can really trust.
But there are also systems that have nothing to do with us that exist only
for tracking. For instance, in the UK all car travel is monitored. Every
car's movements are being recorded in real time and can be tracked by the
State in real time. This is done with cameras on the side of the road.
Now, the only way we can prevent surveillance that's done at one remove or
by unrelated systems is through political action against increased
government power to track and monitor everyone, which means of course we
have to reject whatever excuse they come up with. For such systems — to
monitor everyone — no excuse is valid.
In a free society, when you go out in public, you are not guaranteed
anonymity. It's possible for someone to recognize you and remember. And
later that person could say that he saw you at a certain place. But that
information is diffuse. It's not conveniently assembled to track everybody
and investigate what they did. To collect that information is a lot of
work, so it's only done in special cases when it's necessary.
But computerized surveillance makes it possible to centralize and index all
this information so that an unjust regime can find it all, and find out all
about everyone. If a dictator takes power, which could happen anywhere,
people realize this and they recognize that they should not communicate
with other dissidents in a way that the State could find out about. But if
the dictator has several years of stored records, of who talks with whom,
it's too late to take any precautions then. Because he already has
everything he needs to realize: "OK, this guy is a dissident, and he spoke
with him. Maybe he is a dissident too. Maybe we should grab him and torture
him."
So we need to campaign to put an end to digital surveillance now. You can't
wait until there is a dictator and it would really matter. And besides, it
doesn't take an outright dictatorship to start attacking human rights.
I wouldn't quite call the government of the UK a dictatorship. It's not
very democratic, and one way it crushes democracy is using surveillance. A
few years ago, people believed to be on their way to a protest were
arrested before they could get there, because their car was tracked through
this universal car tracking system.
[Censorship]
The second threat is censorship. Censorship is not new, it existed long
before computers. But 15 years ago, we thought that the Internet would
protect us from censorship, that it would defeat censorship. Then, China
and some other obvious tyrannies went to great lengths to impose censorship
on the Internet, and we said: "well, that's not surprising, what else would
governments like that do?"
But today we see censorship imposed in countries that are not normally
thought of as dictatorships, such as for instance the UK, France, Spain,
Italy, Denmark…
They all have systems of blocking access to some websites. Denmark
established a system that blocks access to a long list of webpages, which
was secret. The citizens were not supposed to know how the government was
censoring them, but the list was leaked, and posted on WikiLeaks. At that
point, Denmark added the WikiLeaks page to its censorship list.
So, the whole rest of the world can find out how Danes are being censored,
but Danes are not supposed to know.
A few months ago, Turkey, which claims to respect some human rights,
announced that every Internet user would have to choose between censorship
and more censorship. Four different levels of censorship they get to
choose! But freedom is not one of the options.
Australia wanted to impose filtering on the Internet, but that was blocked.
However Australia has a different kind of censorship: it has censorship of
links. That is, if a website in Australia has a link to some censored site
outside Australia, the one in Australia can be punished.
Electronic Frontier Australia, that is an organization that defends human
rights in the digital domain in Australia, posted a link to a foreign
political website. It was ordered to delete the link or face a penalty of
$11,000 a day. So they deleted it, what else could they do? This is a very
harsh system of censorship.
In Spain, the censorship that was adopted earlier this year allows
officials to arbitrarily shut down an Internet site in Spain, or impose
filtering to block access to a site outside of Spain. And they can do this
without any kind of trial. This was one of the motivations for the
*Indignados*, who have been protesting in the street.
There were protests in the street in Turkey as well, after that
announcement, but the government refused to change its policy.
We must recognize that a country that imposes censorship on the Internet is
not a free country. And is not a legitimate government either.
[Restricted data formats]
The next threat to our freedom comes from data formats that restrict the
users.
Sometimes it's because the format is secret. There are many application
programs that save the user's data in a secret format, which is meant to
prevent the user from taking that data and using it with some other
program. The goal is to prevent interoperability.
Now, evidently, if the program implements a secret format, that's because
the program is not free software. So this is another kind of malicious
feature. Surveillance is one kind of malicious feature that you find in
some non-free programs; using secret formats to restrict the users is
another kind of malicious feature that you also find in some non-free
programs.
But if you have a free program that handles a certain format, *ipso facto* that
format is not secret. This kind of malicious feature can only exist in a
non-free program. Surveillance features could theoretically exist in a free
program, but you don't find them happening, because the users would fix it.
The users wouldn't like this, so they would fix it.
In any case, we also find secret data formats in use for publication of
works. You find secret data formats in use for audio, such as music, for
video, for books… And these secret formats are known as Digital
Restrictions Management, or DRM, or digital handcuffs (les menottes
numériques).
So, the works are published in secret formats so that only proprietary
programs can play them, so that these proprietary programs can have the
malicious feature of restricting the users, stopping them from doing
something that would be natural to do.
And this is used even by public entities to communicate with the people.
For instance Italian public television makes its programs available on the
net in a format called VC-1, which is supposedly a standard, but it's a
secret standard.
Now I can't imagine how any publicly supported entity could justify using a
secret format to communicate with the public. This should be illegal. In
fact I think all use of Digital Restrictions Management should be illegal.
No company should be allowed to do this.
There are also formats that are not secret but almost might as well be
secret, for instance Flash. Flash is not actually secret but Adobe keeps
making new versions, which are different, faster than anyone can keep up
and make free software to play those files; so it has almost the same
effect as being secret.
Then there are the patented formats, such as MP3 for audio. It's bad to
distribute audio in MP3 format! There is free software to handle MP3
format, to play it and to generate it, but because it's patented in many
countries, many distributors of free software don't dare include those
programs; so if they distribute the GNU+Linux system, their system doesn't
include a player for MP3.
As a result, if anyone distributes some music in MP3, that's putting
pressure on people not to use GNU/Linux. Sure, if you're an expert you can
find free software and install it, but there are lots of non-experts, and
they might see that they installed a version of GNU/Linux which doesn't
have that software, and it won't play MP3 files, and they think it's the
system's fault. They don't realize it's MP3's fault. But this is the fact.
Therefore, if you want to support freedom, don't distribute MP3 files.
That's why I say, if you're recording my speech and you want to distribute
copies, don't do it in a patented format such as MPEG-2, or MPEG-4, or MP3.
Use a format friendly to free software, such as the Ogg format or WebM. And
by the way, if you are going to distribute copies of the recording, please
put on it the Creative Commons No Derivatives license
<http://creativecommons.org/licenses/by-nd/3.0/>.
This is a statement of my personal views. If it were a lecture for a
course, if it were didactic, then it ought to be free, but statements of
opinion are different.
[Software that isn't free]
Now this leads me to the next threat, which comes from software that the
users don't have control over. In other words: software that isn't free,
that is not "libre". On this particular point French is clearer than
English. The English word "free" means "libre" and "gratuit", but what I
mean when I say free software is "logiciel libre". I don't mean "gratuit";
I'm not talking about price. Price is a side issue, just a detail, because
it doesn't matter ethically. You know, if I have a copy of a program and I
sell it to you for one euro or a hundred euros, who cares? Why should
anyone think that that's good or bad? Or suppose I gave it to you
"gratuitement"… still, who cares? But whether this program respects your
freedom, that's important!
So free software is software that respects users' freedom. What does this
mean? Ultimately there are just two possibilities with software: either the
users control the program or the program controls the users. If the users
have certain essential freedoms, then they control the program, and those
freedoms are the criterion for free software. But if the users don't fully
have the essential freedoms, then the program controls the users. But
somebody controls that program and, through it, has *power* over the users.
So, a non-free program is an instrument to give somebody *power* over a lot
of other people, and this is unjust power that nobody should ever have.
This is why non-free software (les logiciels privateurs, qui privent de la
liberté), why proprietary software is an injustice and should not exist;
because it leaves the users without freedom.
Now, the developer who has control of the program often feels tempted to
introduce malicious features to *further* exploit or abuse those users. He
feels a temptation because he knows he can get away with it: because his
program controls the users and the users do not have control of the
program, if he puts in a malicious feature, the users can't fix it; they
can't remove the malicious feature.
I've already told you about two kinds of malicious features: surveillance
features, such as are found in Windows, and the iPhone and Flash player,
and the "Swindle". And there are also features to restrict users, which
work with secret data formats, and those are found in Windows, Macintosh,
the iPhone, Flash player, the Amazon "Swindle", the PlayStation 3 and lots
and lots of other programs.
The other kind of malicious feature is the backdoor. That means something
in that program is listening for remote commands and obeying them, and
those commands can mistreat the user. We know of backdoors in Windows, in
the iPhone, in the Amazon "Swindle". The Amazon "Swindle" has a backdoor
that can remotely delete books. We know this by observation, because Amazon
did it: in 2009 Amazon remotely deleted thousands of copies of a particular
book. Those were authorized copies; people had obtained them directly from
Amazon, and thus Amazon knew exactly where they were, which is how Amazon
knew where to send the commands to delete those books. You know which book
Amazon deleted? *1984* by George Orwell. It's a book everyone should read,
because it discusses a totalitarian state that did things like delete books
it didn't like. Everybody should read it, but not on the Amazon "Swindle".
Anyway, malicious features are present in the most widely used non-free
programs, but they are rare in free software, because with free software
the users have control: they can read the source code and they can change
it. So, if there were a malicious feature, somebody would sooner or later
spot it and fix it. This means that somebody who is considering introducing
a malicious feature does not find it so tempting, because he knows he might
get away with it for a while but somebody will spot it, will fix it, and
everybody will lose trust in the perpetrator. It's not so tempting when
you know you're going to fail. And that's why we find that malicious
features are rare in free software, and common in proprietary software.
[The 4 freedoms of free software]
Now the essential freedoms are four:
- Freedom 0 is the freedom to run the program as you wish.
- Freedom 1 is the freedom to study the source code and change it, so
the program does your computing the way you wish.
- Freedom 2 is the freedom to help others. That's the freedom to make
exact copies and redistribute them when you wish.
- Freedom 3 is the freedom to contribute to your community. That's the
freedom to make copies of your modified versions, if you have made any, and
then distribute them to others when you wish.
These freedoms, in order to be adequate, must apply to all activities of
life. For instance if it says: "This is free for academic use," it's not
free, because that's too limited. It doesn't apply to all areas of life. In
particular, if a program is free, that means it can be modified and
distributed commercially, because commerce is an area of life, an activity
in life. And this freedom has to apply to all activities.
Now however, it's not obligatory to do any of these things. The point is
you're free to do them if you wish, when you wish. But you never have to do
them. You don't have to do any of them. You don't have to run the program.
You don't have to study or change the source code. You don't have to make
any copies. You don't have to distribute your modified versions. The point
is you should be free to do those things if you wish.
Now, freedom number 1, the freedom to study and change the source code to
make the program do your computing as you wish, includes something that
might not be obvious at first. If the program comes in a product, and a
developer can provide an upgrade that will run, then you have to be able to
make your version run in that product. If the product would only run the
developer's versions, and refuses to run yours, the executable in that
product is not free software. Even if it was compiled from free source
code, it's not free, because you don't have the freedom to make the program
do your computing the way *you* wish. So, freedom 1 has to be real, not
just theoretical. It has to include the freedom to use your version, not
just the freedom to make some source code that won't run.
[The GNU project and the free software movement]
I launched the free software movement in 1983, when I announced the plan to
develop a free software operating system whose name is GNU. Now GNU, the
name GNU, is a joke; because part of the hacker's spirit is to have fun
even when you're doing something very serious. Now I can't think of
anything more seriously important than defending freedom.
But that didn't mean I couldn't give my system a name that's a joke. So GNU
is a joke because it's a recursive acronym, it stands for "GNU is Not
Unix", so G.N.U.: GNU's Not Unix. So the G in GNU stands for GNU.
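A recursive acronym can be unwound mechanically. This tiny function is my own illustration, not anything from the GNU project; it shows why the expansion never terminates unless you cap the depth, since the G always stands for GNU again:

```python
def expand_gnu(depth):
    """Expand the recursive acronym GNU ("GNU's Not Unix") `depth`
    times. The inner G expands to GNU again, forever, so we stop at
    the requested depth and leave the bare "GNU"."""
    if depth == 0:
        return "GNU"
    return expand_gnu(depth - 1) + "'s Not Unix"

# expand_gnu(2) yields "GNU's Not Unix's Not Unix"
```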
Now in fact that was a tradition at the time. The tradition was: if there
was an existing program and you wrote something similar to it, inspired by
it, you could give credit by giving your program a name that's a recursive
acronym saying it's not the other one.
So I gave credit to Unix for the technical ideas of Unix, but with the name
GNU, because I decided to make GNU a Unix-like system, with the same
commands, the same system calls, so that it would be compatible, so that
people who used Unix can switch over easily.
But the reason for developing GNU, that was unique. GNU is the only
operating system, as far as I know, ever developed for the purpose of
freedom. Not for technical motivations, not for commercial motivations. GNU
was written for *your* freedom. Because without a free operating system,
it's impossible to have freedom and use a computer. And there were none,
and I wanted people to have freedom, so it was up to me to write one.
Nowadays there are millions of users of the GNU operating system and most
of them don't *know* they are using the GNU operating system, because there
is a widespread practice which is not nice. People call the system "Linux".
Many do, but some people don't, and I hope you'll be one of them. Please,
since we started this, since we wrote the biggest piece of the code, please
give us equal mention, please call the system GNU+Linux, or GNU/Linux. It's
not much to ask!
But there is another reason to do this. It turns out that the person who
wrote Linux, which is one component of the system as we use it
today, doesn't agree with the free software movement. And so if you call
the whole system Linux, in effect you're steering people towards his ideas,
and away from our ideas. Because he's not gonna say to them that they
deserve freedom. He's going to say to them that he likes convenient,
reliable, powerful software. He's going to tell people that those are the
important values.
But if you tell them the system is GNU+Linux — the GNU operating system
plus Linux the kernel — then they'll know about us, and then they might
listen to what *we* say. You deserve freedom, and since freedom will be
lost if we don't defend it — there's always going to be a Sarkozy to take
it away — we need above all to teach people to demand freedom, to be ready
to stand up for their freedom the next time someone threatens to take it
away.
Nowadays, you can tell who doesn't want to discuss these ideas of freedom
because they don't say "logiciel libre". They don't say "libre", they say
"open source". That term was coined by people like Mr Torvalds who would
prefer that these ethical issues don't get raised. And so the way you can
help us raise them is by saying libre. You know, it's up to you where you
stand, you're free to say what you think. If you agree with them, you can
say open source. If you agree with us, show it: say libre!
[Free software and education]
Now the most important point about free software is that schools *must*
teach exclusively free software. All levels of schools, from kindergarten
to university, it's their *moral* responsibility to teach only free
software in their education, and all other educational activities as well,
including those that say that they're spreading digital literacy. A lot of
those activities teach Windows, which means they're teaching *dependence*.
To teach people the use of proprietary software is to teach dependence, and
educational activities must never do that, because it's the opposite of
their mission. Educational activities have a social mission to educate good
citizens of a strong, capable, cooperating, independent and free society.
And in the area of computing, that means: teach free software. Never teach
a proprietary program, because that's inculcating dependence.
Why do you think some proprietary developers offer gratis copies to
schools? They want the schools to make the children dependent. And then,
when they graduate, they're still dependent, and you know the company is
not going to offer them gratis copies. And some of them get jobs and go to
work for companies. Not many of them anymore, but some of them. And those
companies are not going to be offered gratis copies. Oh no! The idea is: if
the school directs the students down the path of permanent dependence, they
can drag the rest of society with them into dependence. That's the plan!
It's just like giving the school gratis needles full of addictive drugs,
saying "inject this into your students, the first dose is gratis." Once
you're dependent, then you have to pay. Well, the school would reject the
drugs because it isn't right to teach the students to use addictive drugs,
and it's got to reject the proprietary software also.
Some people say "let's have the school teach both proprietary software and
free software, so the students become familiar with both." That's like
saying "for lunch, let's give the kids spinach and tobacco, so that they
become accustomed to both." No! The schools are only supposed to teach good
habits, not bad ones! So there should be no Windows in a school, no
Macintosh, nothing proprietary in the education.
But also, for the sake of educating the programmers. You see, some people
have a talent for programming. At ten to thirteen years old, typically,
they're fascinated, and if they use a program, they want to know "how does
it do this?" But when they ask the teacher, if it's proprietary, the
teacher has to say "I'm sorry, it's a secret, we can't find out." Which
means education is forbidden. A proprietary program is the enemy of the
spirit of education. It's knowledge withheld, so it should not be tolerated
in a school, even though there may be plenty of people in the school who
don't care about programming, don't want to learn this. Still, because it's
the enemy of the spirit of education, it shouldn't be there in the school.
But if the program is free, the teacher can explain what he knows, and then
give out copies of the source code, saying: "read it and you'll understand
everything." And those who are really fascinated, they will read it! And
this gives them an opportunity to start to learn how to be good programmers.
To learn to be a good programmer, you'll need to recognize that certain
ways of writing code, even if they make sense to you and they are correct,
are not good, because other people will have trouble understanding them.
Good code is clear code, that others will have an easy time working on when
they need to make further changes.
How do you learn to write good clear code? You do it by reading lots of
code, and writing lots of code. And only free software offers the chance to
*read* the code of large programs that we really use. And then you have to
write lots of code, which means you have to write changes in large programs.
How do you learn to write good code for the large programs? You have to
start small, which does not mean small program, oh no! The challenges of
the code for large programs don't even begin to appear in small programs.
So the way you start small at writing code for large programs is by writing
small changes in large programs. And only free software gives you the
chance to do that!
So, if a school wants to offer the possibility of learning to be a good
programmer, it needs to be a free software school.
But there is an even deeper reason, and that is for the sake of moral
education, education in citizenship. It's not enough for a school to teach
facts and skills, it has to teach the spirit of goodwill, the habit of
helping others. Therefore, every class should have this rule: "Students, if
you bring software to class, you may not keep it for yourself, you must
share copies with the rest of the class, including the source code in case
anyone here wants to learn! Because this class is a place where we share
our knowledge. Therefore, bringing a proprietary program to class is not
permitted." The school must follow its own rule to set a good example.
Therefore, the school must bring only free software to class, and share
copies, including the source code, with anyone in the class that wants
copies.
Those of you who have a connection with a school: it's your duty to
campaign and pressure that school to move to free software. And you have to
be firm. It may take years, but you can succeed as long as you never give
up. Keep seeking more allies among the students, the faculty, the staff,
the parents, anyone!
And always bring it up as an ethical issue. If someone else wants to
sidetrack the discussion into this practical advantage and this practical
disadvantage, which means they're ignoring the most important question,
then you have to say: "this is not about how to do the best job of
educating, this is about how to do a good education instead of an evil one.
It's how to do education right instead of wrong, not just how to make it a
little more effective, or less." So don't get distracted with those
secondary issues, and ignore what really matters!
[Internet services]
So, moving on to the next menace. There are two issues that arise from the
use of internet services. One of them is that the server could abuse your
data, and another is that it could take control of your computing.
The first issue, people already know about. They are aware that, if you
upload data to an internet service, there is a question of what it will do
with that data. It might do things that mistreat you. What could it do? It
could lose the data, it could change the data, it could refuse to let you
get the data back. And it could also show the data to someone else you
don't want to show it to. Four different possible things.
Now, here, I'm talking about the data that you knowingly gave to that site.
Of course, many of those services do *surveillance* as well.
For instance, consider Facebook. Users send lots of data to Facebook, and
one of the bad things about Facebook is that it shows a lot of that data to
lots of other people, and even if it offers them a setting to say "no!",
that may not really work. After all, if you say "some other people can see
this piece of information," one of them might publish it. Now, that's not
Facebook's fault, there is nothing they could do to prevent that, but it
ought to warn people. Instead of saying "mark this as only to your
so-called friends," it should say "keep in mind that your so-called friends
are not really your friends, and if they want to make trouble for you, they
could publish this." Every time, it should say that, if they want to deal
with people ethically.
As well as all the data users of Facebook voluntarily give to Facebook,
Facebook is collecting data about people's activities on the net through
various methods of surveillance. But for now I am talking about the data
that people *know* they are giving to these sites.
Losing data is something that could always happen by accident. That
possibility is always there, no matter how careful someone is. Therefore,
you need to keep multiple copies of data that matters. If you do that,
then, even if someone decided to delete your data intentionally, it
wouldn't hurt you that much, because you'd have other copies of it.
So, as long as you are maintaining multiple copies, you don't have to worry
too much about someone's losing your data. What about whether you can get
it back? Well, some services make it possible to get back all the data that
you sent, and some don't. Google services will let the user get back the
data the user has put into them. Facebook, famously, does not.
Of course in the case of Google, this only applies to the data the user
*knows* Google has. Google does lots of surveillance, too, and that data is
not included.
But in any case, if you can get the data back, then you could track whether
they have altered it. And they are not very likely to start altering
people's data if the people can tell. So maybe we can keep track of that
particular kind of abuse.
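The kind of tracking described here can be done with ordinary cryptographic hashes: keep a local digest of each file before you upload it, and compare it against the copy the service later returns. A minimal sketch in Python (the file name and contents are made up for illustration):

```python
import hashlib

def digest(data: bytes) -> str:
    """Return a SHA-256 hex digest of the data."""
    return hashlib.sha256(data).hexdigest()

# Before uploading, record a digest of each file locally.
local_records = {"essay.txt": digest(b"my original text")}

# Later, after downloading the copy the service returns,
# compare digests to detect any silent alteration.
def unchanged(name: str, downloaded: bytes) -> bool:
    return local_records[name] == digest(downloaded)

print(unchanged("essay.txt", b"my original text"))  # the copy came back intact
print(unchanged("essay.txt", b"tampered text"))     # the copy was altered
```

Because the digests stay on your own machine, this check works even when you don't trust the service at all.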
But the abuse of showing the data to someone you don't want it to be shown
to is very common and almost impossible for you to prevent, especially if
it's a US company. You see, the most hypocritically named law in US
history, the so-called USA Patriot Act, says that Big Brother's police can
collect just about all the data that companies maintain about individuals.
Not just companies, but other organizations too, like public libraries. The
police can get this massively, without even going to court. Now, in a
country that was founded on an idea of freedom, there is nothing more
unpatriotic than this. But this is what they did. So you mustn't ever trust
any of your data to a US company. And they say that foreign subsidiaries of
US companies are subject to this as well, so the company you are directly
dealing with may be in Europe, but if it's owned by a US company, you've got
the same problem to deal with.
However, this is mainly a concern when the data you are sending to the
service is not for publication. There are some services where you publish
things. Of course, if you publish something, you know everybody is gonna be
able to see it. So there is no way they can hurt you by showing it to
somebody who wasn't supposed to see it. There is nobody who wasn't supposed
to see it if you publish it. So in that case the problem doesn't exist.
So these are four sub-issues of this one threat of abusing our data. The
idea of the FreedomBox project <https://www.freedomboxfoundation.org/> is
you have your own server in your own home, and when you want to do
something remotely, you do it with your own server, and the police have to
get a court order in order to search your server. So you have the same
rights this way that you would have traditionally in the physical world.
The point here and in so many other issues is: as we start doing things
digitally instead of physically, we shouldn't lose any of our rights,
because the general tendency is that we do lose rights.
Basically, Stallman's law says that, in an epoch when governments work for
the mega-corporations instead of reporting to their citizens, every
technological change can be taken advantage of to reduce our freedom.
Because reducing our freedom is what these governments want to do. So the
question is: when do they get an opportunity? Well, any change that happens
for some other reason is a possible opportunity, and they will take
advantage of it if that's their general desire.
But the other issue with internet services is that they can take control of
your computing, and that's not so commonly known. But it's becoming more
common. There are services that offer to do computing for you on data
supplied by you: things that you should do on your own computer, but they
invite you to let somebody else's computer do that computing work for you.
And the result is you lose control over it. It's just as if you used a
non-free program.
Two different scenarios, but they lead to the same problem. If you do your
computing with a non-free program, well, the users don't control the
non-free program, it controls the users, which would include you. So you've
lost control of the computing that's being done. But if you do your
computing in his server, well, the programs that are doing it are the ones
he chose. You can't touch them or see them, so you have no control over
them. He has control over them, maybe.
If they are free software and he installs them, then he has control over
them. But even he might not have control. He might be running a proprietary
program in his server, in which case it's somebody else who has control of
the computing being done in his server. He doesn't control it and you don't.
But suppose he installs a free program, then he has control over the
computing being done in his computer, but you don't. So, either way, you
don't! So the only way to have control over your computing is to do it with
*your copy* of a free program.
This practice is called "Software as a Service". It means doing your
computing with your data in somebody else's server. And I don't know of
anything that can make this acceptable. It's always something that takes
away your freedom, and the only solution I know of is to refuse. For
instance, there are servers that will do translation or voice recognition,
and you are letting them have control over this computing activity, which
we shouldn't ever do.
Of course, we are also giving them data about ourselves which they
shouldn't have. Imagine if you had a conversation with somebody through a
voice-recognition translation system that was Software as a Service and
it's really running on a server belonging to some company. That company
also gets to know what was said in the conversation, and if it's a US
company that means Big Brother also gets to know. This is no good.
The next threat to our freedom in a digital society is using computers for
voting. You can't trust computers for voting. Whoever controls the software
in those computers has the power to commit undetectable fraud. Elections
are special, because there's nobody involved that we dare trust fully.
Everybody has to be checked, crosschecked by others, so that nobody is in
a position to falsify the results by himself. Because if anybody is in a
position to do that, he might do it! So our traditional systems for voting
were designed so that nobody was fully trusted, everybody was being checked
by others, so that nobody could easily commit fraud. But once you introduce
a program, this is impossible! How can you tell if a voting machine would
honestly count the votes? You'd have to study the program that's running in
it during the election, which of course nobody can do, and most people
wouldn't even know how to do. But even the experts who might theoretically
be capable of studying the program can't do it while people are
voting. They'd have to do it in advance, and then how do they know that the
program they studied is the one that's running while people vote? Maybe it's
been changed. Now, if this program is proprietary, that means some company
controls it. The election authority can't even tell what that program is
doing. Well, this company then could rig the election. There are
accusations that this was done in the US in the past ten years, that
election results were falsified this way.
But what if the program is free software? That means the election authority
who owns this voting machine has control over the software in it, so the
election authority could rig the election. You can't trust them either. You
don't dare trust *anybody* in voting, and the reason is, there's no way
that the voters can verify for themselves that their votes were correctly
counted, nor that false votes were not added.
In other activities of life, you can usually tell if somebody is trying to
cheat you. Consider for instance buying something from a store. You order
something, maybe you give a credit card number. If the product doesn't
come, you can complain, and (of course, if you've got a good enough
memory, you will) notice if that product doesn't come. You're not just
giving total blind trust to the store, because you can *check*. But in
elections you can't check.
I saw once a paper where someone described a theoretical system for voting
which used some sophisticated mathematics so that people could check that
their votes had been counted, even though everybody's vote was secret, and
they could also verify that false votes hadn't been added. It was very
exciting, powerful mathematics; but even if that mathematics is correct,
that doesn't mean the system would be acceptable to use in practice,
because the vulnerabilities of a real system might be outside of that
mathematics. For instance, suppose you're voting over the Internet and
suppose you're using a machine that's a zombie. It might tell you that the
vote was sent for A while actually sending a vote for B. Who knows whether
you'd ever find out? In practice, the only way to see if these systems work
and are honest is through years, in fact decades, of trying them and
checking in other ways what happened.
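The flavor of the mathematics the speaker alludes to can be hinted at with a toy hash-commitment scheme: each ballot is published only as a commitment, so a voter can later check that their ballot appears on the public board without the board revealing what anyone voted. This is only a sketch of the "verify without revealing" idea, not any real voting protocol (real systems also need proofs that the hidden votes tally correctly), and it deliberately ignores the zombie-machine problem raised above:

```python
import hashlib
import secrets

def commit(vote: str) -> tuple[str, str]:
    """Commit to a ballot: publish the digest, keep the nonce as a receipt."""
    nonce = secrets.token_hex(16)            # random salt keeps the ballot secret
    digest = hashlib.sha256((nonce + vote).encode()).hexdigest()
    return digest, nonce

# A voter casts a ballot; the digest goes on the public bulletin board,
# the nonce stays with the voter as a private receipt.
digest, nonce = commit("candidate A")
bulletin_board = [digest]

def verify(vote: str, nonce: str, board: list[str]) -> bool:
    """Check that the voter's commitment is on the board and opens to their vote."""
    return hashlib.sha256((nonce + vote).encode()).hexdigest() in board

print(verify("candidate A", nonce, bulletin_board))  # the vote was recorded as cast
```

Without the nonce, the published digest tells an observer nothing about which candidate was chosen, which is the property the speaker found exciting.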
I wouldn't want my country to be the pioneer in this. So, use paper for
voting. Make sure there are ballots that can be recounted.
The war on sharing
The next threat to our freedom in a digital society comes from the war on
sharing.
One of the tremendous benefits of digital technology is that it is easy to
copy published works and share these copies with others. Sharing is good,
and with digital technology, sharing is easy. So, millions of people share.
Those who profit by having power over the distribution of these works don't
want us to share. And since they are businesses, governments which have
betrayed their people and work for the empire of mega-corporations try to
serve those businesses; they are against their own people, they are for the
businesses, for the publishers.
Well, that's not good. And with the help of these governments, the
companies have been waging *war* on sharing, and they've proposed a series
of cruel draconian measures. Why do they propose cruel draconian measures?
Because nothing less has a chance of success: when something is good and
easy, people do it. The only way to stop them is by being very nasty. So of
course, what they propose is nasty, nasty, and the next one is nastier. So
they tried suing teenagers for hundreds of thousands of dollars; that was
pretty nasty. And they tried turning our technology against us: Digital
Restrictions Management, that means digital handcuffs.
But among the people there were clever programmers too, and they found ways
to break the handcuffs. For instance, DVDs were designed to have encrypted
movies in a secret encryption format, and the idea was that all the
programs to decrypt the video would be proprietary with digital handcuffs.
They would all be designed to restrict the users. And their scheme worked
okay for a while. But some people in Europe figured out the encryption and
they released a free program that could actually play the video on a DVD.
Well, the movie companies didn't leave it there. They went to the US
Congress and bought a law making that software illegal. The United States
invented censorship of software in 1998, with the Digital Millennium
Copyright Act [DMCA]. So the distribution of that free program was
forbidden in the United States. Unfortunately it didn't stop with the
United States. The European Union adopted a directive in 2003 requiring
such laws. The directive only says that commercial distribution has to be
banned, but just about every country in the European Union has adopted a
nastier law. In France, the mere possession of a copy of that program is an
offense punished by imprisonment, thanks to Sarkozy. I believe that was
done by the law DADVSI. I guess he hoped that with an unpronounceable name,
people wouldn't be able to criticize it.
So, elections are coming. Ask the candidates in the parties: will you
repeal the DADVSI? And if not, don't support them. You mustn't give up lost
moral territory forever. You've got to fight to win it back.
So, we are still fighting against digital handcuffs. The Amazon "Swindle"
has digital handcuffs to take away the traditional freedoms of readers to
do things such as give a book to someone else, or lend a book to someone
else. That's a vitally important social act. That is what builds society
among people who read: lending books. Amazon doesn't want to let people
lend books freely. And then there is also selling a book, perhaps to a used
bookstore. You can't do that either.
It looked for a while as if DRM had disappeared on music, but now they're
bringing it back with streaming services such as Spotify. These services
all require proprietary client software, and the reason is so they can put
digital handcuffs on the users. So, reject them! They already showed quite
openly that you can't trust them, because first they said: "you can listen
as much as you like," and then they said: "Oh, no! You can only listen a
certain number of hours a month." The issue is not whether that particular
change was good or bad, just or unjust; the point is, they have the power
to impose any change in policies. So don't let them have that power. You
should have your own copy of any music you want to listen to.
And then came the next assault on our freedom: HADOPI, basically punishment
on accusation. It was started in France but it's been exported to many
other countries. The United States now demands such unjust policies in its
free exploitation treaties. A few months ago, Colombia adopted such a law
under orders from its masters in Washington. Of course, the ones in
Washington are not the real masters, they're just the ones who control the
United States on behalf of the Empire. But they're the ones who also
dictate to Colombia on behalf of the Empire.
In France, since the Constitutional Council objected to explicitly giving
people punishment without trial, they invented a kind of trial which is not
a real trial, which is just a form of a trial, so they can *pretend* that
people have a trial before they're punished. But in other countries they
don't bother with that; it's explicit punishment on accusation only. Which
means that for the sake of their war on sharing, they're prepared to
abolish the basic principles of justice. It shows how thoroughly
anti-freedom, anti-justice they are. These are not legitimate governments.
And I'm sure they'll come up with more nasty ideas because they're paid to
defeat the people no matter what it takes. Now, when they do this, they
always say that it's for the sake of the artists, that they have to
"protect" the "creators." Now those are both propaganda terms. I'm
convinced that the reason they love the word "creators" is because it is a
comparison with a deity. They want us to think of artists as super-human,
and thus deserving special privileges and power over us, which is something
I disagree with.
In fact, the only artists that benefit very much from this system are the
big stars. The other artists are getting crushed into the ground by the
heels of these same companies. But they treat the stars very well, because
the stars have a lot of clout. If a star threatens to move to another
company, the company says: "oh, we'll give you what you want." But for any
other artist they say: "you don't matter, we can treat you any way we like."
So the superstars have been corrupted by the millions of dollars or euros
that they get, to the point where they'll do almost anything for more
money. J. K. Rowling is a good example. J. K. Rowling, a few years ago,
went to court in Canada and obtained an order that people who had bought
her books must not read them. She got an order telling people not to read
her books.
Here's what happened. A bookstore put the books on display for sale too
early, before the day they were supposed to go on sale. And people came
into the store and said: "oh, I want that!" and they bought it and took
away their copies. Then the store discovered the mistake and took the
copies off display. But Rowling wanted to crush any circulation of any
information from those books, so she went to court, and the court ordered
those people not to read the books that they now owned.
In response, I call for a total boycott of Harry Potter. But I don't say
you shouldn't read those books or watch the movies, I only say you
shouldn't buy the books or pay for the movies. I leave it to Rowling to
tell people not to read the books. As far as I'm concerned, if you borrow
the book and read it, that's okay. Just don't give her any money! But this
happened with paper books. The court could make this order but it couldn't
get the books back from the people who had bought them. Imagine if they
were ebooks. Imagine if they were ebooks on the "Swindle". Amazon could
send commands to erase them.
So, I don't have much respect for stars who will go to such lengths for
more money. But most artists aren't like that; they never got enough money
to be corrupted. Because the current system of copyright supports most
artists very badly. And so, when these companies demand to expand the war
on sharing, supposedly for the sake of the artists, I'm against what they
want, but I would like to support the artists better. I appreciate their
work and I realize if we want them to do more work we should support them.
[Supporting the arts]
I have two proposals for how to support artists, methods that are
compatible with sharing. That would allow us to end the war on sharing and
still support artists.
One method uses tax money. We get a certain amount of public funds to
distribute among artists. But how much should each artist get? We have to
measure popularity.
The current system supposedly supports artists based on their popularity.
So I'm saying: let's keep that, let's continue on this system based on
popularity. We can measure the popularity of all the artists with some kind
of polling or sampling, so that we don't have to do surveillance. We can
respect people's anonymity.
We get a raw popularity figure for each artist; how do we convert that into
an amount of money? The obvious way is: distribute the money in proportion
to popularity. So if A is a thousand times as popular as B, A will get a
thousand times as much money as B. That's not efficient distribution of the
money. It's not putting the money to good use. It's easy for a star A to be
a thousand times as popular as a fairly successful artist B. If we use
linear proportion, we'll give A a thousand times as much money as we give
B. And that means that either we have to make A tremendously rich, or we
are not supporting B enough.
The money we use to make A tremendously rich is failing to do an effective
job of supporting the arts; so it's inefficient. Therefore I say: let's
use the cube root. Cube root looks sort of like this. The point is: if A is
a thousand times as popular as B, with the cube root A will get ten times
as much as B, not a thousand times as much, just ten times as much. The use
of the cube root shifts a lot of the money from the stars to the artists of
moderate popularity. And that means, with less money we can adequately
support a much larger number of artists.
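The arithmetic of the cube-root proposal can be sketched in a few lines of Python. The popularity figures and fund size here are made up for illustration; the point is only that the cube root of 1000 is 10, so the gap between a star and a modest artist shrinks from 1000:1 to 10:1:

```python
# Sketch of the cube-root allocation idea, with made-up popularity figures.
# An artist's share of the fund is proportional to the cube root of their
# measured popularity, rather than to the popularity itself.

popularity = {"star_A": 1000.0, "artist_B": 1.0, "artist_C": 8.0}
fund = 90_000.0  # total public money to distribute (hypothetical)

weights = {name: p ** (1 / 3) for name, p in popularity.items()}
total = sum(weights.values())
payout = {name: fund * w / total for name, w in weights.items()}

# star_A's weight is about 10 and artist_B's is 1: a 10:1 payout ratio
# instead of the 1000:1 ratio that linear proportion would give.
print(payout)
```

With linear proportion, star_A would take over 99% of the fund; under the cube root, the artists of moderate popularity keep a much larger share, which is exactly the shift the speaker describes.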
There are two reasons why this system would use less money than we pay now:
first, because it would be supporting artists but not companies; second,
because it would shift the money from the stars to the artists of moderate
popularity. Now, it would remain the case that the more popular you are,
the more money you get. So the star A would still get more than B, but not
astronomically more.
That's one method, and because it won't be so much money, it doesn't matter
so much how we get the money. It could be from a special tax on Internet
connectivity, it could just be some of the general budget that gets
allocated to this purpose. We won't care, because it won't be so much
money; much less than we're paying now.
The other method I've proposed is voluntary payments. Suppose each player
had a button you could use to send one euro. A lot of people would send it;
after all, it's not that much money. I think a lot of you might push that
button every day, to give one euro to some artist who had made a work that
you liked. But nothing would demand this; you wouldn't be required or
ordered or pressured to send the money; you would do it because you felt
like it. But there are some people who wouldn't do it because they're poor
and they can't afford to give one euro. And it's good that they won't give
it; we don't have to squeeze money out of poor people to support the
artists. There are enough non-poor people who'll be happy to do it. Why
wouldn't you give one euro to some artists today, if you appreciated their
work? It's too inconvenient to give it to them. So my proposal is to remove
the inconvenience. If the only reason not to give that euro is that you
would have one euro less, you would do it fairly often.
So these are my two proposals for how to support artists, while encouraging
sharing, because sharing is good. Let's put an end to the war on sharing.
Laws like DADVSI and HADOPI: it's not just the methods that they propose
that are evil, their purpose is evil. That's why they propose cruel and
draconian measures. They're trying to do something that's nasty by nature.
So let's support artists in other ways.
[Rights in cyberspace]
The last threat to our freedom in digital society is the fact that we don't
have a firm right to do the things we do in cyberspace. In the physical
world, if you have certain views and you want to give people copies of a
text that defends those views, you're free to do so. You could even buy a
printer to print them, and you're free to hand them out on the street, or
you're free to rent a store and hand them out there. If you want to collect
money to support your cause, you can just have a can and people can put
money into the can. You don't need to get somebody else's approval or
cooperation to do these things.
But on the Internet, you do need that. For instance, if you want to
distribute a text on the Internet, you need companies to help you do it.
You can't do it by yourself. So if you want to have a website, you need the
support of an ISP or a hosting company, and you need a domain name
registrar. You need them to continue to let you do what you're doing. So
you're doing it effectively on sufferance, not by right.
And if you want to receive money, you can't just hold out a can. You need
the cooperation of a payment company. And we saw that this makes all of our
digital activities vulnerable to suppression. We learned this when the
United States government launched a "distributed denial of service attack"
[DDoS <http://fr.wikipedia.org/wiki/Attaque_par_d%C3%A9ni_de_service>]
against WikiLeaks. Now I'm making a bit of a joke because the words
"distributed denial of service attack" usually refer to a different kind of
attack. But they fit perfectly with what the United States did. The United
States went to the various kinds of network services that WikiLeaks
depended on, and told them to cut off service to WikiLeaks. And they did.
For instance, WikiLeaks had rented a virtual Amazon server, and the US
government told Amazon: "cut off service for WikiLeaks." And it did,
arbitrarily. And then, WikiLeaks had certain domain names, such as
wikileaks.org; the US government tried to get all those domains shut off.
But it didn't succeed: some of them were outside its control and were not
shut off.
Then there were the payment companies. The US went to PayPal and said:
"Stop transferring money to WikiLeaks or we'll make life difficult for
you." And PayPal shut off payments to WikiLeaks. And then it went to Visa
and Mastercard and got them to shut off payments to WikiLeaks. Others
started collecting money on WikiLeaks' behalf, and their accounts were shut
off too. But in this case, maybe something can be done. There's a company
in Iceland which began collecting money on behalf of WikiLeaks, and so Visa
and Mastercard shut off its account; it couldn't receive money from its
customers either. Now, that business is suing Visa and Mastercard,
apparently under European Union law, because Visa and Mastercard together
have a near-monopoly. They're not allowed to arbitrarily deny service to
anyone.
Well, this is an example of how things need to be for all kinds of services
that we use on the Internet. If you rented a store to hand out statements
of what you think, or any other kind of information that you can lawfully
distribute, the landlord couldn't kick you out just because he didn't like
what you were saying. As long as you keep paying the rent, you have the
right to continue in that store for the agreed-on period of time that you
signed for. So you have some rights that you can enforce. And they couldn't
shut off your telephone line because the phone company doesn't like what
you said, or because some powerful entity didn't like what you said and
threatened the phone company. No! As long as you pay the bills and obey
certain basic rules, they can't shut off your phone line. This is what it's
like to have some rights!
Well, if we move our activities from the physical world to the virtual
world, then either we have the same rights in the virtual world, or we have
been harmed. So the precarity of all our Internet activities is the last
of the menaces I wanted to mention.
Now I'd like to say that for more information about free software, look at
gnu.org <http://www.gnu.org/>. Also look at fsf.org <http://www.fsf.org/>,
which is the website of the Free Software Foundation. You can go there and
find many ways you can help us, for instance. You can also become a member
of the Free Software Foundation through that site. [...] There is also the
Free Software Foundation Europe, fsfe.org <http://www.fsfe.org/>. You can
join FSF Europe also. [...]
[Auction of an "adorable GNU." Highest bid was €420!]
------------------------------
This transcription was done <http://etherpad.fsfe.org/Mh8Fateg0a> by: Loki,
Thérèse, duthils, regisrob, mrMuggles, Moonwalker, Hugo, and one unnamed
author. A *big thanks* to them! The transcription of Richard Stallman's
lecture, as well as the video, are licensed under the Creative Commons
Attribution-NoDerivs 3.0 license
<http://creativecommons.org/licenses/by-nd/3.0/>.
_______________________________________________
liberationtech mailing list
liberationtech(a)lists.stanford.edu
----- End forwarded message -----
--
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
An Open Letter to Google:
Concepts for a Google Privacy Initiative
Lauren Weinstein
May 9, 2006
http://www.vortex.com/google-privacy-initiative
Preface: The overall situation relating to U.S. and global
privacy issues is deteriorating rapidly. Recent Congressional
moves toward legislating broad, government-mandated data
retention laws ( http://lauren.vortex.com/archive/000175.html )
are particularly alarming. The manners in which we
collectively choose to address these sorts of issues are
likely to have drastic impacts not only on our own lives, but
also broadly on the shape of society, both today and in the
future.
Greetings. When I was recently invited to speak at Google's Santa
Monica center ( Video at http://lauren.vortex.com/archive/000168.html ),
I was impressed by the quality of the facilities, but even more so
by the caliber of the Google employees I met during my visit.
Google's capabilities are extraordinary. While I have been publicly
critical of some Google policies, my concerns have been focused not
on Google today, but rather mainly on how Google's immense data
processing, storage, and related infrastructures might be abused
in the future, particularly by outside entities in a position to
force Google's hand despite Google's own best intentions.
As discussed in my talk, I consider Google to be an incredibly
important and admirable resource with vast potential to do good.
But by the same token, it is largely this very power that increases
the risks of serious abuses of Google capabilities being forced upon
the organization, and Google will likely be unable to mitigate many
of these unless it takes major proactive steps on an immediate and
ongoing basis, particularly including privacy-related efforts.
Increasingly, Internet users are becoming highly sensitized to both
perceived and real risks to their privacy associated with their use
of the Net. While the real risks we face in this arena are serious
enough, people's confidence (or lack thereof) in products and
services will in many cases be shaped primarily by perceptions, and
often significantly less by the underlying realities. This
highlights the critical fact that to be truly successful, efforts to
reduce privacy risks must not only have genuine and ongoing positive
privacy effects, but also need to be clearly perceived by users and
the broader public to be in place and fully supported as primary
goals of the organizations involved.
Web-based search engines are an obvious current focus of many privacy
concerns, but as more traditional "desktop" applications migrate to
tightly coupled topologies with user data stored on remote servers
not under users' direct local control (e.g. for PC searches,
document preparation, e-mail, etc.), these issues and related
potential risks are rapidly spreading across the entire computer and
Internet spectrums.
Fears that users' private information may be increasingly subject to
intrusive perusal by law enforcement or other authorities (often with
minimal and/or questionable cause) are further damaging user
confidence in such services, with a range of issues related to data
retention being an important element at the heart of these
concerns. To the extent that potentially sensitive data is stored
for extended periods, particularly in non-anonymous forms, it is
inevitable that outside demands for access to it -- on ever broader
scales -- will be accelerating. While individual court cases will
of course vary in their results, the court system cannot be relied
upon to always render appropriate decisions regarding such matters,
particularly in today's political and legislative environments.
I believe that Google, by virtue of its Internet industry leadership,
technical and human resources, and corporate culture, is in a unique
position. Google can demonstrate how world-class privacy protection
policies and technologies can be developed and deployed in ways that
enhance user confidence in current and future Google services -- by
proactively protecting users' private data without interfering with
service operations, innovation, R&D, or the legitimate concerns of
law enforcement. Google could be the acknowledged global leader in
this area, becoming synonymous with the concept of integrating new
and advanced privacy capabilities into world-class Internet services
and products.
Obviously the confidence such efforts would engender in Google's
users would be healthy for Google's bottom line, but more
importantly it will provide genuine and continuing real benefits to
the Google user community itself (i.e. the entire world). Where
non-proprietary information is involved, further benefits to society
could be achieved through making publicly available (via published
papers, conferences, etc.) those aspects of resulting
privacy-related R&D technologies that could be deployed by other
entities to the benefit of the global community.
I recommend that Google establish a team explicitly dedicated to the
development and deployment of privacy-related efforts as outlined
above. Such a team would be tasked with establishing the framework
of these projects in a consistent manner, and ensuring to the
greatest extent practicable that all current and future Google
products and services would be integrated (from the outset when
possible) with these privacy technologies and policies. The team
would need access to other individuals within both the development
and operational aspects of Google, and ideally would report directly
to high-level management.
To be effective, such a team would need to be significantly
interdisciplinary in its makeup and scope. Needed skills would
include a broad range of CS capabilities
(including specialized mathematical disciplines related to
encryption, among many others). Experience in dealing with the
particular and complex interplay between technology and societal
issues will also be an important component of such a team.
Google's growing scale and influence suggest that the sorts of
privacy efforts suggested herein could be among the most important
non-governmental privacy-related endeavors for many years to come,
and could have vast positive impacts far into the future not only
for Google and its users, but throughout the commercial, nonprofit,
and government sectors.
This document represents a very brief conceptual outline, offered
with only the best interests of both Google and the world at large
in mind. Google and the broader Internet are at a critical
crossroads in many respects, and I believe that Google has the
opportunity to do enormous good by initiating the types of efforts
that I've described.
I would welcome the opportunity to discuss these concepts with you in
more detail and to work with Google toward their realization, as you
may deem appropriate.
Thank you very much for your consideration.
--Lauren--
Lauren Weinstein
lauren(a)vortex.com or lauren(a)pfir.org
Tel: +1 (818) 225-2800
http://www.pfir.org/lauren
Co-Founder, PFIR
- People For Internet Responsibility - http://www.pfir.org
Co-Founder, IOIC
- International Open Internet Coalition - http://www.ioic.net
Moderator, PRIVACY Forum - http://www.vortex.com
Member, ACM Committee on Computers and Public Policy
Lauren's Blog: http://lauren.vortex.com
DayThink: http://daythink.vortex.com
-------------------------------------
You are subscribed as eugen(a)leitl.org
To manage your subscription, go to
http://v2.listbox.com/member/?listname=ip
Archives at: http://www.interesting-people.org/archives/interesting-people/
----- End forwarded message -----
--
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE