                 CRYPTO-GRAM

               August 15, 2009

              by Bruce Schneier
      Chief Security Technology Officer, BT
             schneier at schneier.com
            http://www.schneier.com


A free monthly newsletter providing summaries, analyses, insights, and 
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit 
<http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at 
<http://www.schneier.com/crypto-gram-0908.html>.  These same essays 
appear in the "Schneier on Security" blog: 
<http://www.schneier.com/blog>.  An RSS feed is available.


** *** ***** ******* *********** *************

In this issue:
     Risk Intuition
     Privacy Salience and Social Networking Sites
     Building in Surveillance
     News
     Laptop Security while Crossing Borders
     Self-Enforcing Protocols
     Schneier News
     Another New AES Attack
     Lockpicking and the Internet
     Comments from Readers


** *** ***** ******* *********** *************

     Risk Intuition



People have a natural intuition about risk, and in many ways it's very 
good. It fails at times due to a variety of cognitive biases, but for 
normal risks that people regularly encounter, it works surprisingly 
well: often better than we give it credit for. This struck me as I 
listened to yet another conference presenter complaining about security 
awareness training. He was talking about the difficulty of getting 
employees at his company to actually follow his security policies: 
encrypting data on memory sticks, not sharing passwords, not logging in 
from untrusted wireless networks. "We have to make people understand the 
risks," he said.

It seems to me that his co-workers understand the risks better than he 
does. They know what the real risks are at work, and that they all 
revolve around not getting the job done. Those risks are real and 
tangible, and employees feel them all the time. The risks of not 
following security procedures are much less real. Maybe the employee 
will get caught, but probably not. And even if he does get caught, the 
penalties aren't serious.

Given this accurate risk analysis, any rational employee will regularly 
circumvent security to get his or her job done. That's what the company 
rewards, and that's what the company actually wants.

"Fire someone who breaks security procedure, quickly and publicly," I 
suggested to the presenter. "That'll increase security awareness faster 
than any of your posters or lectures or newsletters." If the risks are 
real, people will get it.

You see the same sort of risk intuition on motorways. People are less 
careful about posted speed limits than they are about the actual speeds 
police issue tickets for. It's also true on the streets: people respond 
to real crime rates, not public officials proclaiming that a 
neighborhood is safe.

The warning stickers on ladders might make you think the things are 
considerably riskier than they are, but people have a good intuition 
about ladders and ignore most of the warnings. (This isn't to say that 
some people don't do stupid things around ladders, but for the most part 
they're safe. The warnings are more about the risk of lawsuits to ladder 
manufacturers than risks to people who climb ladders.)

As a species, we are naturally tuned in to the risks inherent in our 
environment. Throughout our evolution, our survival depended on making 
reasonably accurate risk management decisions intuitively, and we're so 
good at it, we don't even realize we're doing it.

Parents know this. Children have surprisingly perceptive risk intuition. 
They know when parents are serious about a threat and when their threats 
are empty. And they respond to the real risks of parental punishment, 
not the inflated risks based on parental rhetoric. Again, awareness 
training lectures don't work; there have to be real consequences.

It gets even weirder. The University College London professor John Adams 
popularized the metaphor of a mental risk thermostat. We tend to seek 
some natural level of risk, and if something becomes less risky, we tend 
to make it more risky. Motorcycle riders who wear helmets drive faster 
than riders who don't.

Our risk thermostats aren't perfect (that newly helmeted motorcycle 
rider will still decrease his overall risk) and will tend to remain 
within the same domain (he might drive faster, but he won't increase his 
risk by taking up smoking), but in general, people demonstrate an innate 
and finely tuned ability to understand and respond to risks.

Of course, our risk intuition fails spectacularly and often with regard 
to rare risks, unknown risks, voluntary risks, and so on.  But 
when it comes to the common risks we face every day -- the kinds of 
risks our evolutionary survival depended on -- we're pretty good.

So whenever you see someone in a situation who you think doesn't 
understand the risks, stop first and make sure you understand the risks. 
You might be surprised.

This essay previously appeared in The Guardian.
http://www.guardian.co.uk/technology/2009/aug/05/bruce-schneier-risk-security 
or http://tinyurl.com/ngu224

Risk thermostat:
http://www.amazon.com/Risk-John-Adams/dp/1857280687/ref=sr_1_1?ie=UTF8&s=books&qid=1246306830&sr=8-1 
or http://tinyurl.com/kwmuz9
http://davi.poetry.org/blog/?p=4492

Failures in risk intuition
http://www.schneier.com/essay-155.html
http://www.schneier.com/essay-171.html


** *** ***** ******* *********** *************

     Privacy Salience and Social Networking Sites



Reassuring people about privacy makes them more, not less, concerned. 
It's called "privacy salience," and Leslie John, Alessandro Acquisti, 
and George Loewenstein -- all at Carnegie Mellon University -- 
demonstrated this in a series of clever experiments. In one, subjects 
completed an online survey consisting of a series of questions about 
their academic behavior -- "Have you ever cheated on an exam?" for 
example. Half of the subjects were first required to sign a consent 
warning -- designed to make privacy concerns more salient -- while the 
other half did not. Also, subjects were randomly assigned to receive 
either a privacy confidentiality assurance, or no such assurance. When 
the privacy concern was made salient (through the consent warning), 
people reacted negatively to the subsequent confidentiality assurance 
and were less likely to reveal personal information.

In another experiment, subjects completed an online survey where they 
were asked a series of personal questions, such as "Have you ever tried 
cocaine?" Half of the subjects completed a frivolous-looking survey -- 
"How BAD are U??" -- with a picture of a cute devil. The other half 
completed the same survey with the title "Carnegie Mellon University 
Survey of Ethical Standards," complete with a university seal and 
official privacy assurances. The results showed that people who were 
reminded about privacy were less likely to reveal personal information 
than those who were not.

Privacy salience does a lot to explain social networking sites and their 
attitudes towards privacy. From a business perspective, social 
networking sites don't want their members to exercise their privacy 
rights very much. They want members to be comfortable disclosing a lot 
of data about themselves.

Joseph Bonneau and Soeren Preibusch of Cambridge University have been 
studying privacy on 45 popular social networking sites around the world. 
(You may not have realized that there *are* 45 popular social networking 
sites around the world.) They found that privacy settings were often 
confusing and hard to access; Facebook, with its 61 privacy settings, is 
the worst. To understand some of the settings, they had to create 
accounts with different settings so they could compare the results. 
Privacy tends to increase with the age and popularity of a site. 
General-use sites tend to have more privacy features than niche sites.

But their most interesting finding was that sites consistently hide any 
mentions of privacy. Their splash pages talk about connecting with 
friends, meeting new people, sharing pictures: the benefits of 
disclosing personal data.

These sites do talk about privacy, but only on hard-to-find privacy 
policy pages. There, the sites give strong reassurances about their 
privacy controls and the safety of data members choose to disclose on 
the site. There, the sites display third-party privacy seals and other 
icons designed to assuage any fears members have.

It's the Carnegie Mellon experimental result in the real world. Users 
care about privacy, but don't really think about it day to day. The 
social networking sites don't want to remind users about privacy, even 
if they talk about it positively, because any reminder will result in 
users remembering their privacy fears and becoming more cautious about 
sharing personal data. But the sites also need to reassure those 
"privacy fundamentalists" for whom privacy is always salient, so they 
have very strong pro-privacy rhetoric for those who take the time to 
search them out. The two different marketing messages are for two 
different audiences.

Social networking sites are improving their privacy controls as a result 
of public pressure. At the same time, there is a counterbalancing 
business pressure to decrease privacy; watch what's going on right now 
on Facebook, for example. Naively, we should expect companies to make 
their privacy policies clear to allow customers to make an informed 
choice. But the marketing need to reduce privacy salience will frustrate 
market solutions to improve privacy; sites would much rather obfuscate 
the issue than compete on it as a feature.

This essay originally appeared in the Guardian.
http://www.guardian.co.uk/technology/2009/jul/15/privacy-internet-facebook 
or http://tinyurl.com/ml7kv4

Privacy experiments:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1430482

Privacy and social networking sites:
http://www.cl.cam.ac.uk/~jcb82/doc/privacy_jungle_bonneau_preibusch.pdf

Facebook:
http://www.insidefacebook.com/2009/05/13/facebook-privacy-guide/
http://www.nytimes.com/external/readwriteweb/2009/06/24/24readwriteweb-the-day-facebook-changed-messages-to-become-18772.html 
or http://tinyurl.com/lgpfh8
http://www.allfacebook.com/2009/02/facebook-privacy


** *** ***** ******* *********** *************

     Building in Surveillance



China is the world's most successful Internet censor. While the Great 
Firewall of China isn't perfect, it effectively limits information 
flowing in and out of the country. But now the Chinese government is 
taking things one step further.

Under a requirement taking effect soon, every computer sold in China 
will have to contain the Green Dam Youth Escort software package. 
Ostensibly a pornography filter, it is government spyware that will 
watch every citizen on the Internet.

Green Dam has many uses. It can police a list of forbidden Web sites. It 
can monitor a user's reading habits. It can even enlist the computer in 
some massive botnet attack, as part of a hypothetical future cyberwar.

China's actions may be extreme, but they're not unique. Democratic 
governments around the world -- Sweden, Canada and the United Kingdom, 
for example -- are rushing to pass laws giving their police new powers 
of Internet surveillance, in many cases requiring communications system 
providers to redesign products and services they sell.

Many are passing data retention laws, forcing companies to keep 
information on their customers. Just recently, the German government 
proposed giving itself the power to censor the Internet.

The United States is no exception. The 1994 CALEA law required phone 
companies to facilitate FBI eavesdropping, and since 2001, the NSA has 
built substantial eavesdropping systems in the United States. The 
government has repeatedly proposed Internet data retention laws, 
allowing surveillance into past activities as well as present.

Systems like this invite criminal appropriation and government abuse. 
New police powers, enacted to fight terrorism, are already used in 
situations of normal crime. Internet surveillance and control will be no 
different.

Official misuses are bad enough, but the unofficial uses worry me more. 
Any surveillance and control system must itself be secured. An 
infrastructure conducive to surveillance and control invites 
surveillance and control, both by the people you expect and by the 
people you don't.

China's government designed Green Dam for its own use, but it's been 
subverted. Why does anyone think that criminals won't be able to use it 
to steal bank account and credit card information, use it to launch 
other attacks, or turn it into a massive spam-sending botnet?

Why does anyone think that only authorized law enforcement will mine 
collected Internet data or eavesdrop on phone and IM conversations?

These risks are not theoretical. After 9/11, the National Security 
Agency built a surveillance infrastructure to eavesdrop on telephone 
calls and e-mails within the United States.

Although procedural rules stated that only non-Americans and 
international phone calls were to be listened to, actual practice didn't 
always match those rules. NSA analysts collected more data than they 
were authorized to, and used the system to spy on wives, girlfriends, 
and famous people such as President Clinton.

But that's not the most serious misuse of a telecommunications 
surveillance infrastructure.  In Greece, between June 2004 and March 
2005, someone wiretapped more than 100 cell phones belonging to members 
of the Greek government -- the prime minister and the ministers of 
defense, foreign affairs and justice.

Ericsson built this wiretapping capability into Vodafone's products, and 
enabled it only for governments that requested it. Greece wasn't one of 
those governments, but someone still unknown -- a rival political party? 
organized crime? -- figured out how to surreptitiously turn the feature on.

Researchers have already found security flaws in Green Dam that would 
allow hackers to take over the computers. Of course there are additional 
flaws, and criminals are looking for them.

Surveillance infrastructure can be exported, which also aids 
totalitarianism around the world. Western companies like Siemens, Nokia, 
and Secure Computing built Iran's surveillance infrastructure. U.S. 
companies helped build China's electronic police state. Twitter's 
anonymity saved the lives of Iranian dissidents -- anonymity that many 
governments want to eliminate.

Every year brings more Internet censorship and control -- not just in 
countries like China and Iran, but in the United States, the United 
Kingdom, Canada and other free countries.

The control movement is egged on both by law enforcement, trying to 
catch terrorists, child pornographers, and other criminals, and by media 
companies, trying to stop file sharers.

It's bad civic hygiene to build technologies that could someday be used 
to facilitate a police state. No matter what the eavesdroppers and 
censors say, these systems put us all at greater risk.  Communications 
systems that have no inherent eavesdropping capabilities are more secure 
than systems with those capabilities built in.

This essay previously appeared -- albeit with fewer links -- on the 
Minnesota Public Radio website.
http://minnesota.publicradio.org/display/web/2009/07/30/schneier/

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/08/building_in_sur.html


** *** ***** ******* *********** *************

     News



Data can leak through power lines; the NSA has known about this for decades:
http://news.bbc.co.uk/2/hi/technology/8147534.stm
These days, there's a lot of open research on side channels.
http://www.schneier.com/blog/archives/2008/10/remotely_eavesd.html
http://www.schneier.com/blog/archives/2009/06/eavesdropping_o_3.html
http://www.schneier.com/paper-side-channel.html

South Africa takes its security seriously.  Here's an ATM that 
automatically squirts pepper spray into the faces of "people tampering 
with the card slots." Sounds cool, but these kinds of things are all 
about false positives:
http://www.guardian.co.uk/world/2009/jul/12/south-africa-cash-machine-pepper-spray 
or http://tinyurl.com/nj5zks

Cybercrime paper: "Distributed Security: A New Model of Law 
Enforcement," Susan W. Brenner and Leo L. Clarke.  It's from 2005, but 
I'd never seen it before.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=845085

Cryptography has zero-knowledge proofs, where Alice can prove to Bob 
that she knows something without revealing it to Bob.  Here's something 
similar from the real world.  It's a research project to allow weapons 
inspectors from one nation to verify the disarming of another nation's 
nuclear weapons without learning any weapons secrets in the process, 
such as the amount of nuclear material in the weapon.
http://news.bbc.co.uk/2/hi/europe/8154029.stm

I wrote about mapping drug use by testing sewer water in 2007, but 
there's new research:
http://www.schneier.com/blog/archives/2009/07/mapping_drug_us.html

Excellent article detailing the Twitter attack.
http://www.techcrunch.com/2009/07/19/the-anatomy-of-the-twitter-attack/ 
or http://tinyurl.com/lderkq

Social Security numbers are not random.  In some cases, you can predict 
them with date and place of birth.
http://www.nhregister.com/articles/2009/07/07/news/a1_--_id_theft.txt
http://redtape.msnbc.com/2009/07/theres-a-new-reason-to-worry-about-the-security-of-your-social-security-number-turns-out-theyre-easy-to-guess--a-gro.html 
or http://tinyurl.com/n8o7kf
http://www.wired.com/wiredscience/2009/07/predictingssn/
http://www.cnn.com/2009/US/07/10/social.security.numbers/index.html
http://www.pnas.org/content/106/27/10975
http://www.pnas.org/content/early/2009/07/02/0904891106.full.pdf
http://www.heinz.cmu.edu/~acquisti/ssnstudy/
I don't see any new insecurities here.  We already know that Social 
Security numbers are not secrets.  And anyone who wants to steal a 
million SSNs is much more likely to break into one of the gazillion 
databases out there that store them.

NIST has announced the 14 SHA-3 candidates that have advanced to the 
second round: BLAKE, Blue Midnight Wish, CubeHash, ECHO, Fugue, Grostl, 
Hamsi, JH, Keccak, Luffa, Shabal, SHAvite-3, SIMD, and Skein.  In 
February, I chose my favorites: Arirang, BLAKE, Blue Midnight Wish, 
ECHO, Grostl, Keccak, LANE, Shabal, and Skein.  Of the ones NIST 
eventually chose, I am most surprised to see CubeHash and most surprised 
not to see LANE.
http://csrc.nist.gov/groups/ST/hash/sha-3/Round2/submissions_rnd2.html
http://www.schneier.com/essay-249.html
http://csrc.nist.gov/groups/ST/hash/sha-3/index.html
http://www.skein-hash.info/

Nice description of the base rate fallacy.
http://news.bbc.co.uk/2/hi/uk_news/magazine/8153539.stm

This is funny: "Tips for Staying Safe Online":
http://www.schneier.com/blog/archives/2009/07/tips_for_stayin.html

Seems like the Swiss may be running out of secure gold storage.  If this 
is true, it's a real security issue.  You can't just store the stuff 
behind normal locks.  Building secure gold storage takes time and money.
http://www.commodityonline.com/news/Swiss-banks-have-no-space-left-for-gold!-19698-3-1.html 
or http://tinyurl.com/kqpm8w
I am reminded of a related problem the EU had during the transition to 
the euro: where to store all the bills and coins before the switchover 
date.  There wasn't enough vault space in banks, because the vast 
majority of currency is in circulation.  It's a similar problem, 
although the EU banks could solve theirs with lots of guards, because it 
was only a temporary problem.

A large sign saying "United States" at a border crossing was deemed a 
security risk:
http://www.schneier.com/blog/archives/2009/07/large_signs_a_s.html

Clever new real estate scam:
http://www.schneier.com/blog/archives/2009/07/new_real_estate.html

Bypassing the iPhone's encryption.  I want more technical details.
http://www.wired.com/gadgetlab/2009/07/iphone-encryption/

Excellent essay by Jonathan Zittrain on the risks of cloud computing:
http://www.nytimes.com/2009/07/20/opinion/20zittrain.html
Here's me on cloud computing:
http://www.schneier.com/blog/archives/2009/06/cloud_computing.html

More fearmongering.  The headline is "Terrorists could use internet to 
launch nuclear attack: report."  The subhead: "The risk of 
cyber-terrorism escalating to a nuclear strike is growing daily, 
according to a study."
http://www.guardian.co.uk/technology/2009/jul/24/internet-cyber-attack-terrorists 
or http://tinyurl.com/mhfdyy
Note the weasel words in the article.  The study "suggests that under 
the right circumstances."  We're "leaving open the possibility."  The 
report "outlines a number of potential threats and situations" where the 
bad guys could "make a nuclear attack more likely."  Gadzooks.  I'm 
tired of this idiocy.  Stop overreacting to rare risks.  Refuse to be 
terrorized, people.
http://www.schneier.com/essay-171.html
http://www.schneier.com/essay-124.html

Interesting TED talk by Eve Ensler on security.  She doesn't use any of 
the terms, but in the beginning she's echoing a lot of the current 
thinking about evolutionary psychology and how it relates to security.
http://www.ted.com/talks/eve_ensler_on_security.html

In cryptography, we've long used the term "snake oil" to refer to crypto 
systems with good marketing hype and little actual security.  It's the 
phrase I generalized into "security theater."  Well, it turns out that 
there really is a snake oil salesman.
http://blogs.reuters.com/oddly-enough/2009/07/24/we-found-him-he-really-exists/ 
or http://tinyurl.com/mo75tu

Research that proves what we already knew:  too many security warnings 
result in complacency.
http://lorrie.cranor.org/pubs/sslwarnings.pdf

The New York Times has an editorial on regulating chemical plants.
http://www.nytimes.com/2009/08/04/opinion/04tue2.html
The problem is a classic security externality, which I wrote about in 2007.
http://www.schneier.com/essay-194.html

Good essay on security vs. usability: "When Security Gets in the Way."
http://jnd.org/dn.mss/when_security_gets_in_the_way.html

A 1934 story from the International Herald Tribune shows how we reacted 
to the unexpected 75 years ago:
http://www.schneier.com/blog/archives/2009/08/how_we_reacted.html

New airport security hole: funny.
http://scienceblogs.com/gregladen/2009/07/overheard_at_airport.php

Here's some complicated advice on securing passwords that -- I'll bet -- 
no one follows. Of the ten rules, I regularly break seven.  How about you?
http://windowssecrets.com/2009/08/06/01-Gmail-flaw-shows-value-of-strong-passwords/ 
or http://tinyurl.com/px784h
Here's my advice on choosing secure passwords.
http://www.wired.com/politics/security/commentary/securitymatters/2007/01/72458 
or http://tinyurl.com/2beaq2

"An Ethical Code for Intelligence Officers"
http://www.schneier.com/blog/archives/2009/08/an_ethical_code.html

Man-in-the-middle trucking attack:
http://www.schneier.com/blog/archives/2009/08/man-in-the-midd.html

"On Locational Privacy, and How to Avoid Losing it Forever"
http://www.eff.org/wp/locational-privacy


** *** ***** ******* *********** *************

     Laptop Security while Crossing Borders



Last year, I wrote about the increasing propensity for governments, 
including the U.S. and Great Britain, to search the contents of people's 
laptops at customs. What we know is still based on anecdote, as no 
country has clarified the rules about what their customs officers are 
and are not allowed to do, and what rights people have.

Companies and individuals have dealt with this problem in several ways, 
from keeping sensitive data off laptops traveling internationally, to 
storing the data -- encrypted, of course -- on websites and then 
downloading it at the destination. I have never liked either solution. I 
do a lot of work on the road, and need to carry all sorts of data with 
me all the time. It's a lot of data, and downloading it can take a long 
time. Also, I like to work on long international flights.

There's another solution, one that works with whole-disk encryption 
products like PGP Disk (I'm on PGP's advisory board), TrueCrypt, and 
BitLocker: Encrypt the data to a key you don't know.

It sounds crazy, but stay with me. Caveat: Don't try this at home if 
you're not very familiar with whatever encryption product you're using. 
Failure results in a bricked computer. Don't blame me.

Step One: Before you board your plane, add another key to your 
whole-disk encryption (it'll probably mean adding another "user") -- and 
make it random. By "random," I mean really random: Pound the keyboard 
for a while, like a monkey trying to write Shakespeare. Don't make it 
memorable. Don't even try to memorize it.
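
If you'd rather not rely on keyboard-pounding for randomness, a 
cryptographically secure random generator does the job better. Here's 
a minimal sketch in Python using the standard-library "secrets" 
module; the 40-character length is an arbitrary choice, far more 
entropy than any whole-disk encryption product needs:

    import secrets
    import string

    # Build an unmemorizable passphrase from a cryptographic RNG.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    random_key = "".join(secrets.choice(alphabet) for _ in range(40))

    # Display it once, hand it to your confidant, destroy every copy.
    print(random_key)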

Technically, this key doesn't directly encrypt your hard drive. Instead, 
it encrypts the key that is used to encrypt your hard drive -- that's 
how the software allows multiple users.
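
To illustrate the key-wrapping idea -- this is only a sketch of the 
general technique, not how PGP Disk, TrueCrypt, or BitLocker actually 
implement it -- here's a toy model in Python using the pyca/cryptography 
package. One master key encrypts the disk; each "user" slot holds a 
copy of that master key encrypted under a key derived from that user's 
passphrase, so deleting a slot revokes that passphrase without 
re-encrypting the disk:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

    def wrap(master_key, passphrase):
        # Encrypt the master key under a key derived from a passphrase.
        salt, nonce = os.urandom(16), os.urandom(12)
        kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
        user_key = kdf.derive(passphrase.encode())
        blob = AESGCM(user_key).encrypt(nonce, master_key, None)
        return {"salt": salt, "nonce": nonce, "blob": blob}

    def unwrap(slot, passphrase):
        # Recover the master key; raises if the passphrase is wrong.
        kdf = Scrypt(salt=slot["salt"], length=32, n=2**14, r=8, p=1)
        user_key = kdf.derive(passphrase.encode())
        return AESGCM(user_key).decrypt(slot["nonce"], slot["blob"], None)

    master = AESGCM.generate_key(bit_length=256)  # encrypts the disk itself
    slots = {
        "normal": wrap(master, "my usual passphrase"),
        "random": wrap(master, "keyboard mash you never memorized"),
    }
    del slots["normal"]  # Step Five: delete the key you normally use
    # Now only someone holding the random passphrase can recover the disk.
    assert unwrap(slots["random"], "keyboard mash you never memorized") == master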

So now there are two different users with two different keys: the one 
you normally use, and some random one you just invented.

Step Two: Send that new random key to someone you trust. Make sure the 
trusted recipient has it, and make sure it works. You won't be able to 
recover your hard drive without it.

Step Three: Burn, shred, delete or otherwise destroy all copies of that 
new random key. Forget it. If it was sufficiently random and 
non-memorable, this should be easy.

Step Four: Board your plane normally and use your computer for the whole 
flight.

Step Five: Before you land, delete the key you normally use.

At this point, you will not be able to boot your computer. The only key 
remaining is the one you forgot in Step Three. There's no need to lie to 
the customs official, which in itself is often a crime; you can even 
show him a copy of this article if he doesn't believe you.

Step Six: When you're safely through customs, get that random key back 
from your confidant, boot your computer and re-add the key you normally 
use to access your hard drive.

And that's it.

This is by no means a magic get-through-customs-easily card. Your 
computer might be impounded, and you might be taken to court and 
compelled to reveal who has the random key.

But the purpose of this protocol isn't to prevent all that; it's just 
to deny customs any possible access to your computer. You might be 
delayed. You might have your computer seized. (This will cost you any 
work you did on the flight, but -- honestly -- at that point that's the 
least of your troubles.) You might be turned back or sent home. But when 
you're back home, you have access to your corporate management, your 
personal attorneys, your wits after a good night's sleep, and all the 
rights you normally have in whatever country you're now in.

This procedure not only protects you against the warrantless search of 
your data at the border, it also allows you to deny a customs official 
your data without having to lie or pretend -- which itself is often a crime.

Now the big question: Who should you send that random key to?

Certainly it should be someone you trust, but -- more importantly -- it 
should be someone with whom you have a privileged relationship. 
Depending on the laws in your country, this could be your spouse, your 
attorney, your business partner or your priest. In a larger company, the 
IT department could institutionalize this as a policy, with the help 
desk acting as the key holder.

You could also send it to yourself, but be careful. You don't want to 
e-mail it to your webmail account, because then you'd be lying when you 
tell the customs official that there is no possible way you can decrypt 
the drive.

You could put the key on a USB drive and send it to your destination, 
but there are potential failure modes. It could fail to get there in 
time to be waiting for your arrival, or it might not get there at all. 
You could airmail the drive with the key on it to yourself a couple of 
times, in a couple of different ways, and also fax the key to yourself 
... but that's more work than I want to do when I'm traveling.

If you only care about the return trip, you can set it up before you 
return. Or you can set up an elaborate one-time pad system, with 
identical lists of keys with you and at home: Destroy each key on the 
list you have with you as you use it.

Remember that you'll need to have full-disk encryption, using a product 
such as PGP Disk, TrueCrypt or BitLocker, already installed and enabled 
to make this work.

I don't think we'll ever get to the point where our computer data is 
safe when crossing an international border. Even if countries like the 
U.S. and Britain clarify their rules and institute privacy protections, 
there will always be other countries that will exercise greater latitude 
with their authority. And sometimes protecting your data means 
protecting your data from yourself.

This essay originally appeared on Wired.com.
http://www.wired.com/politics/security/commentary/securitymatters/2009/07/securitymatters_0715 
or http://tinyurl.com/nw6bkd


** *** ***** ******* *********** *************

     Self-Enforcing Protocols



There are several ways two people can divide a piece of cake in half. 
One way is to find someone impartial to do it for them.  This works, but 
it requires another person.  Another way is for one person to divide the 
piece, and the other person to complain (to the police, a judge, or his 
parents) if he doesn't think it's fair.  This also works, but still 
requires another person -- at least to resolve disputes.  A third way is 
for one person to do the dividing, and for the other person to choose 
the half he wants.

That third way, known by kids, pot smokers, and everyone else who needs 
to divide something up quickly and fairly, is called cut-and-choose. 
People use it because it's a self-enforcing protocol: a protocol 
designed so that neither party can cheat.
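
A toy model makes the incentive concrete: the chooser takes the bigger 
piece, so the cutter keeps the smaller one, and any uneven cut only 
hurts the cutter. A minimal sketch in Python:

    def cutter_share(cut_fraction):
        # The cutter splits the cake into two pieces.
        pieces = (cut_fraction, 1.0 - cut_fraction)
        # The chooser rationally takes the larger piece,
        # so the cutter is left with the smaller one.
        return min(pieces)

    print(cutter_share(0.5))  # 0.5 -- an even cut is the cutter's best move
    print(cutter_share(0.7))  # 0.3 -- trying to cheat backfires on the cutter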

Self-enforcing protocols are useful because they don't require trusted 
third parties.  Modern systems for transferring money -- checks, credit 
cards, PayPal -- require trusted intermediaries like banks and credit 
card companies to facilitate the transfer.  Even cash transfers require 
a trusted government to issue currency, and they take a cut in the form 
of seigniorage.  Modern contract protocols require a legal system to 
resolve disputes. Modern commerce wasn't possible until those systems 
were in place and generally trusted, and complex business contracts 
still aren't possible in areas where there is no fair judicial system. 
Barter is a self-enforcing protocol: nobody needs to facilitate the 
transaction or resolve disputes.  It just works.

Self-enforcing protocols are safer than other types because participants 
don't gain an advantage from cheating.  Modern voting systems are rife 
with the potential for cheating, but an open show of hands in a room -- 
one that everyone in the room can count for himself -- is 
self-enforcing.  On the other hand, there's no secret ballot, late 
voters are potentially subjected to coercion, and it doesn't scale well 
to large elections.  But there are mathematical election protocols that 
have self-enforcing properties, and some cryptographers have suggested 
their use in elections.

Here's a self-enforcing protocol for determining property tax: the 
homeowner decides the value of the property and calculates the resultant 
tax, and the government can either accept the tax or buy the home for 
that price.  Sounds unrealistic, but the Greek government implemented 
exactly that system for the taxation of antiquities.  It was the easiest 
way to motivate people to accurately report the value of antiquities. 
And shotgun clauses in contracts are essentially the same thing.
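
Here's a toy model of that incentive, with invented numbers and two 
simplifying assumptions: the government buys anything declared below 
market value, and a forced sale costs the owner a fixed amount in 
disruption on top of the shortfall:

    def owner_loss(declared, market_value, tax_rate=0.02, upheaval=50_000):
        # A bargain declaration gets bought out: the owner loses the
        # shortfall plus the cost of being forced out of the home.
        if declared < market_value:
            return (market_value - declared) + upheaval
        # Otherwise the government just accepts the tax as calculated.
        return declared * tax_rate

    print(owner_loss(500_000, 500_000))  # 10000.0 -- honest declaration
    print(owner_loss(450_000, 500_000))  # 100000  -- lowballing triggers a buyout
    print(owner_loss(800_000, 500_000))  # 16000.0 -- inflating only raises the tax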

A VAT, or value-added tax, is a self-enforcing alternative to sales tax. 
Sales tax is collected on the entire value of the thing at the point of 
retail sale; both the customer and the storeowner want to cheat the 
government.  But VAT is collected at every step between raw materials 
and that final customer; the tax at each step is levied on the 
difference between the price of the materials sold and the price of the 
materials bought.  Buyers want official receipts showing as high a 
purchase price as possible, so each buyer along the chain keeps each 
seller honest.  Yes, there's still an incentive to cheat on the final 
sale to the customer, but the amount of tax collected at that point is 
much lower.
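
A worked example with made-up numbers shows where the tax lands under 
each scheme:

    def vat_collected(sale_prices, rate=0.10):
        # The tax at each step falls on the value added: what the
        # seller charges minus what the seller paid for the inputs.
        purchases = [0.0] + sale_prices[:-1]
        return [round((sale - bought) * rate, 2)
                for sale, bought in zip(sale_prices, purchases)]

    chain = [20.0, 50.0, 100.0]      # raw materials -> wholesaler -> retail
    print(vat_collected(chain))      # [2.0, 3.0, 5.0], spread along the chain
    print(sum(vat_collected(chain))) # 10.0 -- same as a 10% sales tax on $100
    # Cheating on the final sale hides only the last $5, not the whole $10.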

Of course, self-enforcing protocols aren't perfect.  For example, 
someone in a cut-and-choose can punch the other guy and run away with 
the entire piece of cake.  But perfection isn't the goal here; the goal 
is to reduce cheating by taking away potential avenues of cheating. 
Self-enforcing protocols improve security not by implementing 
countermeasures that prevent cheating, but by leveraging economic 
incentives so that the parties don't want to cheat.

One more self-enforcing protocol.  Imagine a pirate ship that encounters 
a storm.  The pirates are all worried about their gold, so they put 
their personal bags of gold in the safe.  During the storm, the safe 
cracks open, and all the gold mixes up and spills out on the floor.  How 
do the pirates determine who owns what?  They each announce to the group 
how much gold they had.  If the total of all the announcements matches 
what's in the pile, it's divided as people announced.  If it's 
different, then the captain keeps it all.  I can think of all kinds of 
ways this can go wrong -- the captain and one pirate can collude to 
throw off the total, for example -- but it is self-enforcing against 
individual misreporting.
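
A sketch of the announcement rule (how the captain's threat is enforced 
is left out, as are the collusion attacks just mentioned):

    def divide_gold(pile, claims):
        # Each pirate announces how much of the gold was his.  The gold
        # is paid out as claimed only if the claims account for the
        # whole pile; any discrepancy forfeits everything to the captain.
        if sum(claims.values()) == pile:
            return claims
        return {"captain": pile}

    print(divide_gold(100, {"anne": 60, "bart": 40}))
    # {'anne': 60, 'bart': 40} -- consistent claims are honored
    print(divide_gold(100, {"anne": 75, "bart": 40}))
    # {'captain': 100} -- one pirate over-claiming ruins it for everyone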

This essay originally appeared on ThreatPost.
http://threatpost.com/blogs/value-self-enforcing-protocols


** *** ***** ******* *********** *************

     Schneier News



I am speaking at the OWASP meeting in Minneapolis on August 24:
http://www.owasp.org/index.php/Minneapolis_St_Paul

Audio from my Black Hat talk is here:
http://www.blackhat.com/html/bh-usa-09/bh-usa-09-archives.html#Schneier 
or http://tinyurl.com/mvewwx


** *** ***** ******* *********** *************

     Another New AES Attack



A new and very impressive attack against AES has just been announced.

Over the past couple of months, there have been two new cryptanalysis 
papers on AES.  The attacks presented in the papers are not practical -- 
they're far too complex, they're related-key attacks, and they're 
against larger-key versions and not the 128-bit version that most 
implementations use -- but they are impressive pieces of work all the same.

This new attack, by Alex Biryukov, Orr Dunkelman, Nathan Keller, Dmitry 
Khovratovich, and Adi Shamir, is much more devastating.  It is a 
completely practical attack against ten-round AES-256:

    Abstract.  AES is the best known and most widely used
    block cipher. Its three versions (AES-128, AES-192, and AES-256)
    differ in their key sizes (128 bits, 192 bits and 256 bits) and in
    their number of rounds (10, 12, and 14, respectively). In the case
    of AES-128, there is no known attack which is faster than the
    2^128 complexity of exhaustive search. However, AES-192
    and AES-256 were recently shown to be breakable by attacks which
    require 2^176 and 2^119 time, respectively. While these
    complexities are much faster than exhaustive search, they are
    completely non-practical, and do not seem to pose any real threat
    to the security of AES-based systems.

    In this paper we describe several attacks which can break with
    practical complexity variants of AES-256 whose number of rounds
    are comparable to that of AES-128. One of our attacks uses only
    two related keys and 2^39 time to recover the complete
    256-bit key of a 9-round version of AES-256 (the best previous
    attack on this variant required 4 related keys and 2^120
    time). Another attack can break a 10 round version of AES-256 in
    2^45 time, but it uses a stronger type of related subkey
    attack (the best previous attack on this variant required 64
    related keys and 2^172 time).

They also describe an attack against 11-round AES-256 that requires 2^70 
time -- almost practical.
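
To see why 2^39 is "completely practical" while the earlier 2^119 and 
2^176 results are not, it helps to translate the complexities into 
time at an assumed rate -- say a single machine testing a billion keys 
per second, a deliberately round number:

    RATE = 10**9                    # assumed: 10^9 operations per second
    SECONDS_PER_YEAR = 3600 * 24 * 365

    for bits in (39, 45, 70, 119, 176):
        seconds = 2**bits / RATE
        years = seconds / SECONDS_PER_YEAR
        print(f"2^{bits}: {seconds:.3g} seconds (~{years:.3g} years)")

    # 2^39: about nine minutes.  2^45: about ten hours.  2^70: tens of
    # thousands of years -- "almost practical" only for a massively
    # parallel attacker.  2^119 and 2^176: out of reach entirely.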

These new results greatly improve on the Biryukov, Khovratovich, and 
Nikolic papers mentioned above, and a paper I wrote with six others in 
2000, where we describe a related-key attack against 9-round AES-256 
(then called Rijndael) in 2^224.  (This again proves the cryptographer's 
adage: attacks always get better, they never get worse.)

By any definition of the term, this is a huge result.

There are three reasons not to panic:

*  The attack exploits the fact that the key schedule for the 256-bit 
version is pretty lousy -- something we pointed out in our 2000 paper -- 
but doesn't extend to AES with a 128-bit key.

*  It's a related-key attack, which requires the cryptanalyst to have 
access to plaintexts encrypted with multiple keys that are related in a 
specific way.

*  The attack only breaks 11 rounds of AES-256.  Full AES-256 has 14 rounds.

Not much comfort there, I agree.  But it's what we have.

Cryptography is all about safety margins.  If you can break n rounds of 
a cipher, you design it with 2n or 3n rounds.  What we're learning is 
that the safety margin of AES is much less than previously believed. 
And while there is no reason to scrap AES in favor of another algorithm, 
NIST should increase the number of rounds of all three AES variants.  At 
this point, I suggest AES-128 at 16 rounds, AES-192 at 20 rounds, and 
AES-256 at 28 rounds.  Or maybe even more; we don't want to be revising 
the standard again and again.

And for new applications I suggest that people don't use AES-256. 
AES-128 provides more than enough security margin for the foreseeable 
future.  But if you're already using AES-256, there's no reason to change.

The paper:
http://eprint.iacr.org/2009/374

Older AES cryptanalysis papers:
http://eprint.iacr.org/2009/241
http://eprint.iacr.org/2009/317

AES:
http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf
http://www.schneier.com/blog/archives/2009/07/new_attack_on_a.html
http://www.schneier.com/paper-rijndael.pdf


** *** ***** ******* *********** *************

     Lockpicking and the Internet



Physical locks aren't very good. They keep the honest out, but any 
burglar worth his salt can pick the common door lock pretty quickly.

It used to be that most people didn't know this. Sure, we all watched 
television criminals and private detectives pick locks with an ease only 
found on television and thought it realistic, but somehow we still held 
onto the belief that our own locks kept us safe from intruders.

The Internet changed that.

First was the MIT Guide to Lockpicking, written by the late Bob ("Ted 
the Tool") Baldwin. Then came Matt Blaze's 2003 paper on breaking master 
key systems. After that, came a flood of lockpicking information on the 
Net: opening a bicycle lock with a Bic pen, key bumping, and more. Many 
of these techniques were already known in both the criminal and 
locksmith communities. The locksmiths tried to suppress the knowledge, 
believing their guildlike secrecy was better than openness. But they've 
lost: never has there been more public information about lockpicking -- 
or safecracking, for that matter.

Lock companies have responded with more complicated locks, and more 
complicated disinformation campaigns.

There seems to be a limit to how secure you can make a wholly mechanical 
lock, as well as a limit to how large and unwieldy a key the public will 
accept. As a result, there is increasing interest in other lock 
technologies.

As a security technologist, I worry that if we don't fully understand 
these technologies and the new sorts of vulnerabilities they bring, we 
may be trading a flawed technology for an even worse one. Electronic 
locks are vulnerable to attack, often in new and surprising ways.

Start with keypads, more and more common on house doors. These have the 
benefit that you don't have to carry a physical key around, but there's 
the problem that you can't give someone the key for a day and then take 
it away when that day is over. As such, the security decays over time -- 
the longer the keypad is in use, the more people know how to get in. 
More complicated electronic keypads have a variety of options for 
dealing with this, but electronic keypads work only when the power is 
on, and battery-powered locks have their own failure modes.  Plus, far 
too many people never bother to change the default entry code.

Keypads have other security failures, as well. I regularly see keypads 
where four of the 10 buttons are more worn than the other six. They're 
worn from use, of course, and instead of 10,000 possible entry codes, I 
now have to try only 24.
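
The arithmetic behind that, assuming a four-digit code that uses each 
of the four worn digits exactly once (the worn digits here are made up):

    from itertools import permutations

    all_codes = 10 ** 4                      # any four-digit code: 10,000
    worn = "2580"                            # hypothetical worn buttons
    candidates = list(permutations(worn, 4)) # each worn digit used once
    print(all_codes, len(candidates))        # 10000 24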

Fingerprint readers are another technology, but there are many known 
security problems with those. And there are operational problems, too: 
They're hard to use in the cold or with sweaty hands; and leaving a key 
with a neighbor to let the plumber in starts having a spy-versus-spy feel.

Some companies are going even further. Earlier this year, Schlage 
launched a series of locks that can be opened either by a key, a 
four-digit code, or the Internet. That's right: The lock is online. You 
can send the lock SMS messages or talk to it via a website, and the lock 
can send you messages when someone opens it -- or even when someone 
tries to open it and fails.

Sounds nifty, but putting a lock on the Internet opens up a whole new 
set of problems, none of which we fully understand. Even worse: Security 
is only as strong as the weakest link. Schlage's system combines the 
inherent "pickability" of a physical lock, the new vulnerabilities of 
electronic keypads, and the hacking risk of being online. For most 
applications, that's simply too much risk.

This essay previously appeared on DarkReading.com.
http://www.darkreading.com/blog/archives/2009/08/locks.html

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/08/lockpicking_and.html


** *** ***** ******* *********** *************

     Comments from Readers



There are thousands of comments -- many of them interesting -- on these 
topics on my blog. Search for the story you want to comment on, and join in.

http://www.schneier.com/blog


** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing 
summaries, analyses, insights, and commentaries on security: computer 
and otherwise.  You can subscribe, unsubscribe, or change your address 
on the Web at <http://www.schneier.com/crypto-gram.html>.  Back issues 
are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to 
colleagues and friends who will find it valuable.  Permission is also 
granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is the author of the 
best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," 
and "Applied Cryptography," and an inventor of the Blowfish, Twofish, 
Phelix, and Skein algorithms.  He is the Chief Security Technology 
Officer of BT BCSG, and is on the Board of Directors of the Electronic 
Privacy Information Center (EPIC).  He is a frequent writer and lecturer 
on security topics.  See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter.  Opinions expressed are not 
necessarily those of BT.

Copyright (c) 2009 by Bruce Schneier.
