
                 CRYPTO-GRAM

                July 15, 2009

              by Bruce Schneier
      Chief Security Technology Officer, BT
              schneier@schneier.com
            http://www.schneier.com


A free monthly newsletter providing summaries, analyses, insights, and 
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit 
<http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at 
<http://www.schneier.com/crypto-gram-0907.html>.  These same essays 
appear in the "Schneier on Security" blog: 
<http://www.schneier.com/blog>.  An RSS feed is available.


** *** ***** ******* *********** *************

In this issue:
     Imagining Threats
     Security, Group Size, and the Human Brain
     North Korean Cyberattacks
     Why People Don't Understand Risks
     Fraud on eBay
     News
     Authenticating Paperwork
     The Pros and Cons of Password Masking
     The "Hidden Cost" of Privacy
     Fixing Airport Security
     Schneier News
     Homomorphic Encryption Breakthrough
     New Attack on AES
     MD6 Withdrawn from SHA-3 Competition
     Ever Better Cryptanalytic Results Against SHA-1
     Comments from Readers


** *** ***** ******* *********** *************

     Imagining Threats



A couple of years ago, the Department of Homeland Security hired a bunch 
of science fiction writers to come in for a day and think of ways 
terrorists could attack America. If our inability to prevent 9/11 marked 
a failure of imagination, as some said at the time, then who better than 
science fiction writers to inject a little imagination into 
counterterrorism planning?

I discounted the exercise at the time, calling it "embarrassing." I 
never thought that 9/11 was a failure of imagination. I thought, and 
still think, that 9/11 was primarily a confluence of three things: the 
dual failure of centralized coordination and local control within the 
FBI, and some lucky breaks on the part of the attackers. More 
imagination leads to more movie-plot threats -- which contributes to 
overall fear and overestimation of the risks. And that doesn't help keep 
us safe at all.

Recently, I read a paper by Magne Jorgensen that provides some insight 
into why this is so. Titled "More Risk Analysis Can Lead to Increased 
Over-Optimism and Over-Confidence," the paper isn't about terrorism at 
all. It's about software projects.

Most software development project plans are overly optimistic, and most 
planners are overconfident about their overoptimistic plans. Jorgensen 
studied how risk analysis affected this. He conducted four separate 
experiments on software engineers, and concluded (though there are lots 
of caveats in the paper, and more research needs to be done) that 
performing more risk analysis can make engineers more overoptimistic 
instead of more realistic.

Potential explanations all come from behavioral economics: cognitive 
biases that affect how we think and make decisions. (I've written about 
some of these biases and how they affect security decisions, and there's 
a great book on the topic as well.)

First, there's a control bias. We tend to underestimate risks in 
situations where we are in control, and overestimate risks in situations 
when we are not in control. Driving versus flying is a common example. 
This bias becomes stronger with familiarity, involvement and a desire to 
experience control, all of which increase with increased risk analysis. 
So the more risk analysis, the greater the control bias, and the greater 
the underestimation of risk.

The second explanation is the availability heuristic. Basically, we 
judge the importance or likelihood of something happening by the ease of 
bringing instances of that thing to mind. So we tend to overestimate the 
probability of a rare risk that is seen in a news headline, because it 
is so easy to imagine. Likewise, we underestimate the probability of 
things occurring that don't happen to be in the news.

A corollary of this phenomenon is that, if we're asked to think about a 
series of things, we overestimate the probability of the last thing 
thought about because it's more easily remembered.

According to Jorgensen's reasoning, people tend to do software risk 
analysis by thinking of the severe risks first, and then the more 
manageable risks. So the more risk analysis that's done, the less severe 
the last risk imagined, and thus the greater the underestimation of the 
total risk.

The third explanation is similar: the peak-end rule. When thinking about 
a total experience, people tend to place too much weight on the last 
part of the experience. In one experiment, people had to hold their 
hands under cold water for one minute. Then, they had to hold their 
hands under cold water for one minute again, then keep their hands in 
the water for an additional 30 seconds while the temperature was 
gradually raised. When asked about it afterwards, most people preferred 
the second option to the first, even though the second had more total 
discomfort. (An intrusive medical device was redesigned along these 
lines, resulting in a longer period of discomfort but a relatively 
comfortable final few seconds. People liked it a lot better.) This 
means, like the second explanation, that the least severe last risk 
imagined gets greater weight than it deserves.

Fascinating stuff. But the biases produce the reverse effect when it 
comes to movie-plot threats. The more you think about far-fetched 
terrorism possibilities, the more outlandish and scary they become, and 
the less control you think you have. This causes us to overestimate the 
risks.

Think about this in the context of terrorism. If you're asked to come up 
with threats, you'll think of the significant ones first. If you're 
pushed to find more, if you hire science-fiction writers to dream them 
up, you'll quickly get into the low-probability movie plot threats. But 
since they're the last ones generated, they're more available. (They're 
also more vivid -- science fiction writers are good at that -- which 
also leads us to overestimate their probability.) They also suggest 
we're even less in control of the situation than we believed. Spending 
too much time imagining disaster scenarios leads people to overestimate 
the risks of disaster.

I'm sure there's also an anchoring effect in operation. This is another 
cognitive bias, where people's numerical estimates of things are 
affected by numbers they've most recently thought about, even random 
ones. People who are given a list of three risks will think the total 
number of risks is lower than people who are given a list of 12 risks. 
So if the science fiction writers come up with 137 risks, people will 
believe that the number of risks is higher than they otherwise would -- 
even if they recognize the 137 number is absurd.

Jorgensen does not believe risk analysis is useless in software 
projects, and I don't believe scenario brainstorming is useless in 
counterterrorism. Both can lead to new insights and, as a result, a more 
intelligent analysis of both specific risks and general risk. But an 
over-reliance on either can be detrimental.

Last month, at the 2009 Homeland Security Science & Technology 
Stakeholders Conference in Washington D.C., science fiction writers 
helped the attendees think differently about security. This seems like a 
far better use of their talents than imagining some of the zillions of 
ways terrorists can attack America.

This essay originally appeared on Wired.com.
http://www.wired.com/politics/security/commentary/securitymatters/2009/06/securitymatters_0619 
or http://tinyurl.com/nm6tj7

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/06/imagining_threa.html


** *** ***** ******* *********** *************

     Security, Group Size, and the Human Brain



If the size of your company grows past 150 people, it's time to get name 
badges. It's not that larger groups are somehow less secure, it's just 
that 150 is the cognitive limit to the number of people a human brain 
can maintain a coherent social relationship with.

Primatologist Robin Dunbar derived this number by comparing neocortex -- 
the "thinking" part of the mammalian brain -- volume with the size of 
primate social groups. By analyzing data from 38 primate genera and 
extrapolating to the human neocortex size, he predicted a human "mean 
group size" of roughly 150.

This number appears regularly in human society; it's the estimated size 
of a Neolithic farming village, the size at which Hittite settlements 
split, and the basic unit in professional armies from Roman times to the 
present day. Larger group sizes aren't as stable because their members 
don't know each other well enough. Instead of thinking of the members as 
people, we think of them as groups of people. For such groups to 
function well, they need externally imposed structure, such as name badges.

Of course, badges aren't the only way to determine in-group/out-group 
status. Other markers include insignia, uniforms, and secret handshakes. 
They have different security properties and some make more sense than 
others at different levels of technology, but once a group reaches 150 
people, it has to do something.

More generally, there are several layers of natural human group size 
that increase with a ratio of approximately three: 5, 15, 50, 150, 500, 
and 1500 -- although, really, the numbers aren't as precise as all that, 
and groups that are less focused on survival tend to be smaller. The 
layers relate to both the intensity and intimacy of relationship and the 
frequency of contact.

The smallest, three to five, is a "clique": the number of people from 
whom you would seek help in times of severe emotional distress. The 
twelve to 20 group is the "sympathy group": people with whom you have 
special ties. After that, 30 to 50 is the typical size of 
hunter-gatherer overnight camps, generally drawn from the same pool of 
150 people. No matter what size company you work for, there are only 
about 150 people you consider to be "co-workers." (In small companies, 
Alice and Bob handle accounting. In larger companies, it's the 
accounting department -- and maybe you know someone there personally.) 
The 500-person group is the "megaband," and the 1,500-person group is 
the "tribe." Fifteen hundred is roughly the number of faces we can put 
names to, and the typical size of a hunter-gatherer society.

These numbers are reflected in military organization throughout history: 
squads of 10 to 15 organized into platoons of three to four squads, 
organized into companies of three to four platoons, organized into 
battalions of three to four companies, organized into regiments of three 
to four battalions, organized into divisions of two to three regiments, 
and organized into corps of two to three divisions.

Coherence can become a real problem once organizations get above about 
150 in size.  So as group sizes grow across these boundaries, they have 
more externally imposed infrastructure -- and more formalized security 
systems. In intimate groups, pretty much all security is ad hoc. 
Companies smaller than 150 don't bother with name badges; companies 
greater than 500 hire a guard to sit in the lobby and check badges.  The 
military have had centuries of experience with this under rather trying 
circumstances, but even there the real commitment and bonding invariably 
occurs at the company level. Above that you need to have rank imposed by 
discipline.

The whole brain-size comparison might be bunk, and a lot of evolutionary 
psychologists disagree with it. But certainly security systems become 
more formalized as groups grow larger and their members less known to 
each other. When do more formal dispute resolution systems arise: town 
elders, magistrates, judges? At what size boundary are formal 
authentication schemes required? Small companies can get by without the 
internal forms, memos, and procedures that large companies require; when 
do those tend to appear? How does punishment formalize as group size 
increases? And how do all these things affect group coherence? People act 
differently on social networking sites like Facebook when their list of 
"friends" grows larger and less intimate. Local merchants sometimes let 
known regulars run up tabs. I lend books to friends with much less 
formality than a public library. What examples have you seen?

An edited version of this essay, without links, appeared in the 
July/August 2009 issue of IEEE Security & Privacy.

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/07/security_group.html


** *** ***** ******* *********** *************

     North Korean Cyberattacks



To hear the media tell it, the United States suffered a major 
cyberattack last week.  Stories were everywhere. "Cyber Blitz hits U.S., 
Korea" was the headline in Thursday's Wall Street Journal. North Korea 
was blamed.

Where were you when North Korea attacked America?  Did you feel the fury 
of North Korea's armies?  Were you fearful for your country?  Or did 
your resolve strengthen, knowing that we would defend our homeland 
bravely and valiantly?

My guess is that you didn't even notice, that -- if you didn't open a 
newspaper or read a news website -- you had no idea anything was 
happening.  Sure, a few government websites were knocked out, but that's 
not alarming or even uncommon. Other government websites were attacked 
but defended themselves, the sort of thing that happens all the time. If 
this is what an international cyberattack looks like, it hardly seems 
worth worrying about at all.

Politically motivated cyber attacks are nothing new. We've seen UK vs. 
Ireland. Israel vs. the Arab states. Russia vs. several former Soviet 
Republics. India vs. Pakistan, especially after the nuclear bomb tests 
in 1998. China vs. the United States, especially in 2001 when a U.S. spy 
plane collided with a Chinese fighter jet. And so on and so on.

The big one happened in 2007, when the government of Estonia was 
attacked in cyberspace following a diplomatic incident with Russia about 
the relocation of a Soviet World War II memorial. The networks of many 
Estonian organizations, including the Estonian parliament, banks, 
ministries, newspapers and broadcasters, were attacked and -- in many 
cases -- shut down.  Estonia was quick to blame Russia, which was 
equally quick to deny any involvement.

It was hyped as the first cyberwar, but after two years there is still 
no evidence that the Russian government was involved. Though Russian 
hackers were indisputably the major instigators of the attack, the only 
individuals positively identified have been young ethnic Russians living 
inside Estonia, who were angry over the statue incident.

Poke at any of these international incidents, and what you find are kids 
playing politics. Last Wednesday, South Korea's National Intelligence 
Service admitted that it didn't actually know that North Korea was 
behind the attacks: "North Korea or North Korean sympathizers in the 
South" was what it said. Once again, it'll be kids playing politics.

This isn't to say that cyberattacks by governments aren't an issue, or 
that cyberwar is something to be ignored. The constant attacks by 
Chinese nationals against U.S. networks may not be government-sponsored, 
but it's pretty clear that they're tacitly government-approved. 
Criminals, from lone hackers to organized crime syndicates, attack 
networks all the time. And war expands to fill every possible theater: 
land, sea, air, space, and now cyberspace. But cyberterrorism is nothing 
more than a media invention designed to scare people. And for there to 
be a cyberwar, there first needs to be a war.

Israel is currently considering attacking Iran in cyberspace, for 
example.  If it tries, it'll discover that attacking computer networks 
is an inconvenience to the nuclear facilities it's targeting, but 
doesn't begin to substitute for bombing them.

In May, President Obama gave a major speech on cybersecurity.  He was 
right when he said that cybersecurity is a national security issue, and 
that the government needs to step up and do more to prevent 
cyberattacks. But he couldn't resist hyping the threat with scare 
stories: "In one of the most serious cyber incidents to date against our 
military networks, several thousand computers were infected last year by 
malicious software -- malware," he said. What he didn't add was that 
those infections occurred because the Air Force couldn't be bothered to 
keep its patches up to date.

This is the face of cyberwar: easily preventable attacks that, even when 
they succeed, only a few people notice.  Even this current incident is 
turning out to be a sloppily modified five-year-old worm that no modern 
network should still be vulnerable to.

Securing our networks doesn't require some secret advanced NSA 
technology.  It's the boring network security administration stuff we 
already know how to do: keep your patches up to date, install good 
anti-malware software, correctly configure your firewalls and 
intrusion-detection systems, monitor your networks. And while some 
government and corporate networks do a pretty good job at this, others 
fail again and again.

Enough of the hype and the bluster. The news isn't the attacks, but that 
some networks had security lousy enough to be vulnerable to them.

This essay originally appeared on the Minnesota Public Radio website.
http://minnesota.publicradio.org/display/web/2009/07/10/schneier/

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/07/north_korean_cy.html


** *** ***** ******* *********** *************

     Why People Don't Understand Risks



Last week's Minneapolis Star Tribune had the front-page headline: 
"Co-sleeping kills about 20 infants each year."  The only problem is 
that there's no additional information with which to make sense of the 
statistic.

How many infants co-sleep and don't die each year?  How many infants die each year in 
separate beds?  Is the death rate for co-sleepers greater or less than 
the death rate for separate-bed sleepers?  Without this information, 
it's impossible to know whether this statistic is good or bad.
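
As a toy illustration of the missing comparison -- using entirely 
made-up numbers, not Minnesota data -- here is the calculation the 
headline would need to support (a quick Python sketch):

  cosleep_deaths, cosleep_infants   = 20, 50000     # hypothetical counts
  separate_deaths, separate_infants = 60, 400000    # hypothetical counts

  # What matters is the rate for each sleeping arrangement, not the raw count.
  cosleep_rate  = cosleep_deaths / cosleep_infants
  separate_rate = separate_deaths / separate_infants

  print("co-sleeping:  %.1f deaths per 10,000 infants" % (cosleep_rate * 10000))
  print("separate bed: %.1f deaths per 10,000 infants" % (separate_rate * 10000))

With these invented denominators co-sleeping looks riskier; with 
different ones it could look safer.  The raw count of 20 deaths, by 
itself, tells you neither.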

But the media rarely provides context for the data.  The story ran in 
the aftermath of an incident in which a baby was accidentally smothered 
in his sleep.

Oh, and that 20-infants-per-year number is for Minnesota only.  No word 
as to whether the situation is better or worse in other states.

The headline in the web article is different.
http://www.startribune.com/local/49985722.html?elr=KArksUUUoDEy3LGDiO7aiU 
or http://tinyurl.com/nfzgcl


** *** ***** ******* *********** *************

     Fraud on eBay



I expected selling my computer on eBay to be easy.

Attempt 1:  I listed it.  Within hours, someone bought it -- from a 
hacked account, as eBay notified me, canceling the sale.

Attempt 2:  I listed it again.  Within hours, someone bought it, and 
asked me to send it to her via FedEx overnight.  The buyer sent payment 
via PayPal immediately, and then -- near as I could tell -- immediately 
opened a dispute with PayPal so that the funds were put on hold.  And 
then she sent me an e-mail saying "I paid you, now send me the 
computer."  But PayPal was faster than she expected, I think.  At the 
same time, I received an e-mail from PayPal saying that I might have 
received a payment that the account holder did not authorize, and that I 
shouldn't ship the item until the investigation is complete.

I was willing to make Attempt 3, but someone on my blog bought it first. 
 It looks like eBay is completely broken for items like this.

It's not just me.
http://consumerist.com/5007790/its-now-completely-impossible-to-sell-a-laptop-on-ebay 
or http://tinyurl.com/55hprp

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/06/fraud_on_ebay.html


** *** ***** ******* *********** *************

     News



Did a public Twitter post lead to a burglary?
http://www.usatoday.com/travel/news/2009-06-08-twitter-vacation_N.htm

Prairie dogs hack Baltimore Zoo; an amusing story that echoes a lot of 
our own security problems.
http://www.baltimoresun.com/news/maryland/baltimore-city/bal-md.ci.zoo12jun12,0,685569.story 
or http://tinyurl.com/mcuzam

The U.S. Department of Homeland Security has a blog.  I don't know if it 
will be as interesting or entertaining as the TSA's blog.
http://www.dhs.gov/journal/theblog

Carrot-bomb art project bombs in Sweden:
http://news.bbc.co.uk/2/hi/europe/8099561.stm

Fascinating research on the psychology of con games.  "The psychology of 
scams: Provoking and committing errors of judgement" was prepared for 
the UK Office of Fair Trading by the University of Exeter School of 
Psychology.
http://www.schneier.com/blog/archives/2009/06/the_psychology_3.html

New computer snooping tool: 
http://investors.guidancesoftware.com/releasedetail.cfm?ReleaseID=384544 
or http://tinyurl.com/lwhuod

This week's movie-plot threat -- fungus:
http://www.schneier.com/blog/archives/2009/06/this_weeks_movi.html

Engineers are more likely to become Muslim terrorists.  At least, that's 
what the facts indicate.  Is it time to start profiling?
http://www.newscientist.com/article/mg20227127.200-can-university-subjects-reveal-terrorists-in-the-making.html 
or http://tinyurl.com/m5r56h
http://www.nuff.ox.ac.uk/users/gambetta/Engineers%20of%20Jihad.pdf

John Mueller on nuclear disarmament: "The notion that the world should 
rid itself of nuclear weapons has been around for over six decades -- 
during which time they have been just about the only instrument of 
destruction that hasn't killed anybody."
http://www.schneier.com/blog/archives/2009/06/john_mueller_on.html

Eavesdropping on dot-matrix printers by listening to them.
http://www.schneier.com/blog/archives/2009/06/eavesdropping_o_3.html

Research on the security of online games:
http://www.schneier.com/blog/archives/2009/06/research_on_the.html

Ross Anderson liveblogged the 8th Workshop on Economics of Information 
Security (WEIS) at University College London.
http://www.lightbluetouchpaper.org/2009/06/24/weis-2009-liveblog/
I wrote about WEIS 2006 back in 2006.
http://www.schneier.com/blog/archives/2006/06/economics_and_i_1.html

Clear, the company that sped people through airport security, has ceased 
operations.  It is unclear what will happen to all that personal data 
they have collected.
http://www.schneier.com/blog/archives/2009/06/clear_shuts_dow.html

This no-stabbing knife seems not to be a joke.
http://www.timesonline.co.uk/tol/news/uk/crime/article6501720.ece
I've already written about the risks of pointy knives.
http://www.schneier.com/blog/archives/2005/06/risks_of_pointy.html

The Communications Security Establishment (CSE, basically Canada's NSA) 
is growing so fast they're running out of room and building new office 
buildings.
http://www.defenseindustrydaily.com/Canadas-CSE-ELINT-Agency-Building-New-Facilities-05498/ 
or http://tinyurl.com/leu79h

Cryptography spam:
http://www.schneier.com/blog/archives/2009/06/cryptography_sp.html

More security countermeasures from the natural world:
1.  The plant Caladium steudneriifolium pretends to be ill so mining 
moths won't eat it.
http://news.bbc.co.uk/earth/hi/earth_news/newsid_8108000/8108940.stm
2.  Cabbage aphids arm themselves with chemical bombs.
http://scienceblogs.com/notrocketscience/2009/06/aphids_defend_themselves_with_chemical_bombs.php 
or http://tinyurl.com/ksegwk
3.  The dark-footed ant spider mimics an ant so that it's not eaten by 
other spiders, and so it can eat spiders itself.
http://scienceblogs.com/notrocketscience/2009/06/spiders_gather_in_groups_to_impersonate_ants.php 
or http://tinyurl.com/p9u8r9
http://scienceblogs.com/notrocketscience/2009/07/spider_mimics_ant_to_eat_spiders_and_avoid_being_eaten_by_sp.php 
or http://tinyurl.com/mhjxh3

Information leakage from keypads.  (You need to click on the link to see 
the pictures.)
http://www.schneier.com/blog/archives/2009/07/information_lea_1.html

Good essay -- "The Staggering Cost of Playing it 'Safe'" -- about the 
political motivations for terrorist security policy.
http://www.dailykos.com/storyonly/2009/6/16/743102/-The-Staggering-Cost-of-Playing-it-Safe 
or http://tinyurl.com/m8dlvr

My commentary on an article hyping the terrorist risk of cloud computing:
http://www.schneier.com/blog/archives/2009/07/terrorist_risk.html

Pocketless trousers to protect against bribery in Nepal:
http://www.google.com/hostednews/afp/article/ALeqM5gmKIu2qKjavgL6B0s7161VCyMSAQ 
or http://tinyurl.com/mexcdy

Anti-theft lunch bags:
http://design-milk.com/anti-theft-lunch-bags/

U.S. court institutes limits on TSA searches.  This is good news.
http://www.schneier.com/blog/archives/2009/07/court_limits_on.html

Spanish police foil remote-controlled zeppelin jailbreak.  Sometimes 
movie plots actually happen.
http://gizmodo.com/5307943/spanish-police-foil-remote+controlled-zeppelin-jailbreak 
or http://tinyurl.com/qcns4y
http://www.thestar.com/news/world/article/660875

Almost two years ago, I wrote about my strategy for encrypting my 
laptop.  One of the things I said was:  "There are still two scenarios 
you aren't secure against, though. You're not secure against someone 
snatching your laptop out of your hands as you're typing away at the 
local coffee shop. And you're not secure against the authorities telling 
you to decrypt your data for them."  Here's a free program that defends 
against that first threat: it locks the computer unless a key is pressed 
every n seconds.  Honestly, this would be too annoying for me to use, 
but you're welcome to try it.
http://www.donationcoder.com/Forums/bb/index.php?topic=18656.0
http://www.schneier.com/blog/archives/2009/06/protecting_agai.html
http://www.schneier.com/essay-199.html
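
For the curious, here's a minimal Python sketch of the idea -- not the 
actual program, and it assumes a Linux/X11 machine with xdg-screensaver 
installed -- that locks the screen unless Enter is pressed within the 
time limit:

  import select, subprocess, sys

  LOCK_INTERVAL = 30                       # seconds allowed between keypresses
  LOCK_CMD = ["xdg-screensaver", "lock"]   # assumed Linux/X11 screen locker

  while True:
      # Wait up to LOCK_INTERVAL seconds for a keypress (plus Enter) on stdin.
      ready, _, _ = select.select([sys.stdin], [], [], LOCK_INTERVAL)
      if ready:
          sys.stdin.readline()             # key arrived in time; keep running
      else:
          subprocess.run(LOCK_CMD)         # no keypress: lock the machine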

You won't hear about this ATM vulnerability, because the presentation 
has been pulled from the BlackHat conference:
http://www.schneier.com/blog/archives/2009/07/the_atm_vulnera.html

The NSA is building a massive data center in Utah.
http://www.sltrib.com/ci_12735293
http://www.deseretnews.com/article/705314456/Psst-Big-spy-center-is-coming-to-Utah.html 
or http://tinyurl.com/nrn64r

I was quoted as calling Google's Chrome operating system "idiotic." 
Here's additional explanation and context.
http://www.schneier.com/blog/archives/2009/07/making_an_opera.html

How to cause chaos in an airport: leave a suitcase in a restroom.
http://www.schneier.com/blog/archives/2009/07/lost_suitcases.html

Interesting paper from HotSec '07: "Do Strong Web Passwords Accomplish 
Anything?" by Dinei Florencio, Cormac Herley, and Baris Coskun.
http://www.usenix.org/event/hotsec07/tech/full_papers/florencio/florencio.pdf 
or http://tinyurl.com/ca9mp9

Interesting use of gaze tracking software to protect privacy:
http://www.schneier.com/blog/archives/2009/07/gaze_tracking_s.html

Poor man's steganography -- hiding documents in corrupt PDF documents:
http://blog.didierstevens.com/2009/07/01/embedding-and-hiding-files-in-pdf-documents/ 
or http://tinyurl.com/m6onbo


** *** ***** ******* *********** *************

     Authenticating Paperwork



It's a sad, horrific story. Homeowner returns to find his house 
demolished. The demolition company was hired legitimately but there was 
a mistake and it demolished the wrong house. The demolition company 
relied on GPS co-ordinates, but requiring street addresses isn't a 
solution. A typo in the address is just as likely, and the company would 
have demolished the wrong house just as quickly.

The problem is less how the demolishers knew which house to knock down, 
and more how they confirmed that knowledge. They trusted the paperwork, 
and the paperwork was wrong. Informality works when everybody knows 
everybody else. When merchants and customers know each other, government 
officials and citizens know each other, and people know their neighbors, 
people know what's going on. In that sort of milieu, if something goes 
wrong, people notice.

In our modern anonymous world, paperwork is how things get done. 
Traditionally, signatures, forms, and watermarks all made paperwork 
official. Forgeries were possible but difficult. Today, there's still 
paperwork, but for the most part it only exists until the information 
makes its way into a computer database. Meanwhile, modern technology -- 
computers, fax machines and desktop publishing software -- has made it 
easy to forge paperwork. Every case of identity theft has, at its core, 
a paperwork failure. Fake work orders, purchase orders, and other 
documents are used to steal computers, equipment, and stock. 
Occasionally, fake faxes result in people being sprung from prison. Fake 
boarding passes can get you through airport security. This month hackers 
officially changed the name of a Swedish man.

A reporter even changed the ownership of the Empire State Building. 
Sure, it was a stunt, but this is a growing form of crime. Someone 
pretends to be you -- preferably when you're away on holiday -- and 
sells your home to someone else, forging your name on the paperwork. You 
return to find someone else living in your house, someone who thinks he 
legitimately bought it. In some senses, this isn't new. Paperwork 
mistakes and fraud have happened ever since there was paperwork. And the 
problem hasn't been fixed yet for several reasons.

One, our sloppy systems generally work fine, and it's how we get things 
done with minimum hassle. Most people's houses don't get demolished and 
most people's names don't get maliciously changed. As common as identity 
theft is, it doesn't happen to most of us. These stories are news 
because they are so rare. And in many cases, it's cheaper to pay for the 
occasional blunder than ensure it never happens.

Two, sometimes the incentives aren't in place for paperwork to be 
properly authenticated. The people who demolished that family home were 
just trying to get a job done. The same is true for government officials 
processing title and name changes. Banks get paid when money is 
transferred from one account to another, not when they find a paperwork 
problem. We're all irritated by forms stamped 17 times, and other 
mysterious bureaucratic processes, but these are actually designed to 
detect problems.

And three, there's a psychological mismatch: it is easy to fake 
paperwork, yet for the most part we act as if it has magical properties 
of authenticity.

What's changed is scale. Fraud can be perpetrated against hundreds of 
thousands, automatically. Mistakes can affect that many people, too. 
What we need are laws that penalize people or companies -- criminally or 
civilly -- who make paperwork errors. This raises the cost of mistakes, 
making authenticating paperwork more attractive, which changes the 
incentives of those on the receiving end of the paperwork. And that will 
cause the market to devise technologies to verify the provenance, 
accuracy, and integrity of information: telephone verification, 
addresses and GPS co-ordinates, cryptographic authentication, systems 
that double- and triple-check, and so on.

We can't reduce society's reliance on paperwork, and we can't eliminate 
errors based on it. But we can put economic incentives in place for 
people and companies to authenticate paperwork more.

This essay originally appeared in The Guardian.
http://www.guardian.co.uk/technology/2009/jun/24/read-me-first-identity-fraud 
or http://tinyurl.com/ls3cdp

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/06/authenticating_1.html


** *** ***** ******* *********** *************

     The Pros and Cons of Password Masking



Usability guru Jakob Nielsen opened up a can of worms when he made the 
case against password masking -- the practice of hiding computer 
password characters behind asterisks -- in his blog. I chimed in that I 
agreed. Almost 165 comments on my blog (and several articles, essays, 
and many other blog posts) later, the consensus is that we were wrong.

I was certainly too glib. Like any security countermeasure, password 
masking has value. But like any countermeasure, password masking is not 
a panacea. And the costs of password masking need to be balanced with 
the benefits.

The cost is accuracy. When users don't get visual feedback from what 
they're typing, they're more prone to make mistakes. This is especially 
true with character strings that have non-standard characters and 
capitalization. This has several ancillary costs:

* Users get pissed off.

* Users are more likely to choose easy-to-type passwords, reducing both 
mistakes and security. Removing password masking will make people more 
comfortable with complicated passwords: they'll become easier to 
memorize and easier to use.

The benefits of password masking are more obvious:

* Security from shoulder surfing. If people can't look over your shoulder 
and see what you're typing, they're much less likely to be able to steal 
your password. Yes, they can look at your fingers instead, but that's 
much harder than looking at the screen. Surveillance cameras are also an 
issue: it's easier to watch someone's fingers on recorded video than in 
person, and reading a cleartext password off a screen is trivial.

* In some situations, there is a trust dynamic involved. Do you type 
your password while your boss is standing over your shoulder watching? 
How about your spouse or partner? Your parent or child? Your teacher or 
students? At ATMs, there's a social convention of standing away from 
someone using the machine, but that convention doesn't apply to 
computers. You might not trust the person standing next to you enough to 
let him see your password, but you may not feel comfortable telling him to 
look away. Password masking solves that social awkwardness.

* Security from screen scraping malware. This is less of an issue; 
keyboard loggers are more common and unaffected by password masking. And 
if you have that kind of malware on your computer, you've got all sorts 
of problems.

* A security "signal." Password masking alerts users, and I'm thinking 
users who aren't particularly security savvy, that passwords are a secret.

I believe that shoulder surfing isn't nearly the problem it's made out 
to be. One, lots of people use their computers in private, with no one 
looking over their shoulders. Two, personal handheld devices are used 
very close to the body, making shoulder surfing all that much harder. 
Three, it's hard to quickly and accurately memorize a random 
non-alphanumeric string that flashes on the screen for a second or so.

This is not to say that shoulder surfing isn't a threat. It is. And, as 
many readers pointed out, password masking is one of the reasons it 
isn't more of a threat. And the threat is greater for those who are not 
fluent computer users: slow typists and people who are likely to choose 
bad passwords. But I believe that the risks are overstated.

Password masking is definitely important on public terminals with short 
PINs. (I'm thinking of ATMs.) The value of the PIN is large, shoulder 
surfing is more common, and a four-digit PIN is easy to remember in any 
case.

And lastly, this problem largely disappears on the Internet on your 
personal computer. Most browsers include the ability to save and then 
automatically populate password fields, making the usability problem go 
away at the expense of another security problem (the security of the 
password becomes the security of the computer). There's a Firefox 
plug-in that gets rid of password masking. And programs like my own 
Password Safe allow passwords to be cut and pasted into applications, 
also eliminating the usability problem.

One approach is to make it a configurable option. High-risk banking 
applications could turn password masking on by default; other 
applications could turn it off by default. Browsers in public locations 
could turn it on by default. I like this, but it complicates the user 
interface.

A reader mentioned BlackBerry's solution, which is to display each 
character briefly before masking it; that seems like an excellent 
compromise.
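
A minimal sketch of that compromise -- a hypothetical text-field 
renderer, not BlackBerry's actual code -- might look like this in Python:

  def render_masked(password, ms_since_last_keypress, reveal_ms=1500):
      """Mask every character except, briefly, the one just typed."""
      if not password:
          return ""
      if ms_since_last_keypress < reveal_ms:
          return "*" * (len(password) - 1) + password[-1]
      return "*" * len(password)

  print(render_masked("s3cret", 200))    # *****t  (last character still shown)
  print(render_masked("s3cret", 3000))   # ******  (fully masked again)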

I, for one, would like the option. I cannot type complicated WEP keys 
into Windows -- twice! What's the deal with that? -- without making 
mistakes. I cannot type my rarely used and very complicated PGP keys 
without making a mistake unless I turn off password masking. That's what 
I was reacting to when I said "I agree."

So was I wrong? Maybe. Okay, probably. Password masking definitely 
improves security; many readers pointed out that they regularly use 
their computer in crowded environments, and rely on password masking to 
protect their passwords. On the other hand, password masking reduces 
accuracy and makes it less likely that users will choose secure and 
hard-to-remember passwords. I will concede that the password masking 
trade-off is more beneficial than I thought in my snap reaction, but 
also that the answer is not nearly as obvious as we have historically 
assumed.

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/07/the_pros_and_co.html


** *** ***** ******* *********** *************

     The "Hidden Cost" of Privacy



Forbes ran an article talking about the "hidden" cost of privacy. 
Basically, the point was that privacy regulations are expensive to 
comply with, and a lot of that expense gets eaten up by the mechanisms 
of compliance and doesn't go toward improving anyone's actual privacy. 
This is a valid point, and one that I make in talks about privacy all 
the time.  It's particularly bad in the United States, because we have a 
patchwork of different privacy laws covering different types of 
information and different situations, rather than a single comprehensive 
privacy law.

The meta-problem is simple to describe: those entrusted with our privacy 
often don't have much incentive to respect it.  Examples include: credit 
bureaus such as TransUnion and Experian, who don't have any business 
relationship at all with the people whose data they collect and sell; 
companies such as Google who give away services -- and collect personal 
data as a part of that -- as an incentive to view ads, and make money by 
selling those ads to other companies; medical insurance companies, who 
are chosen by a person's employer; and computer software vendors, who 
can have monopoly powers over the market.  Even worse, it can be 
impossible to connect an effect of a privacy violation with the 
violation itself -- if someone opens a bank account in your name, how do 
you know who was to blame for the privacy violation? -- so even when 
there is a business relationship, there's no clear cause-and-effect 
relationship.

What this all means is that protecting individual privacy remains an 
externality for many companies, and that basic market dynamics won't 
work to solve the problem.  Because the efficient market solution won't 
work, we're left with inefficient regulatory solutions.  So now the 
question becomes: how do we make regulation as efficient as possible?  I 
have some suggestions:

*  Broad privacy regulations are better than narrow ones.

*  Simple and clear regulations are better than complex and confusing ones.

*  It's far better to regulate results than methodology.

*  Penalties for bad behavior need to be expensive enough to make good 
behavior the rational choice.

We'll never get rid of the inefficiencies of regulation -- that's the 
nature of the beast, and why regulation only makes sense when the market 
fails -- but we can reduce them.

Forbes article:
http://www.forbes.com/forbes/2009/0608/034-privacy-research-hidden-cost-of-privacy.html 
or http://tinyurl.com/obpf6j


** *** ***** ******* *********** *************

     Fixing Airport Security



It's been months since the Transportation Security Administration has 
had a permanent director. If, during the job interview (no, I didn't get 
one), President Obama asked me how I'd fix airport security in one 
sentence, I would reply: "Get rid of the photo ID check, and return 
passenger screening to pre-9/11 levels."

Okay, that's a joke. While showing ID, taking your shoes off and 
throwing away your water bottles aren't making us much safer, I don't 
expect the Obama administration to roll back those security measures 
anytime soon. Airport security is more about CYA than anything else: 
defending against what the terrorists did last time.

But the administration can't risk appearing as if it facilitated a 
terrorist attack, no matter how remote the possibility, so those 
annoyances are probably here to stay.

This would be my real answer: "Establish accountability and transparency 
for airport screening." And if I had another sentence: "Airports are one 
of the places where Americans, and visitors to America, are most likely 
to interact with a law enforcement officer -- and yet no one knows what 
rights travelers have or how to exercise those rights."

Obama has repeatedly talked about increasing openness and transparency 
in government, and it's time to bring transparency to the Transportation 
Security Administration (TSA).

Let's start with the no-fly and watch lists. Right now, everything about 
them is secret: You can't find out if you're on one, or who put you 
there and why, and you can't clear your name if you're innocent. This 
Kafkaesque scenario is so un-American it's embarrassing. Obama should 
make the no-fly list subject to judicial review.

Then, move on to the checkpoints themselves. What are our rights? What 
powers do the TSA officers have? If we're asked "friendly" questions by 
behavioral detection officers, are we allowed not to answer? If we 
object to the rough handling of ourselves or our belongings, can the TSA 
official retaliate against us by putting us on a watch list? Obama 
should make the rules clear and explicit, and allow people to bring 
legal action against the TSA for violating those rules; otherwise, 
airport checkpoints will remain a Constitution-free zone in our country.

Next, Obama should refuse to use unfunded mandates to sneak expensive 
security measures past Congress. The Secure Flight program is the worst 
offender. Airlines are being forced to spend billions of dollars 
redesigning their reservations systems to accommodate the TSA's demands 
to preapprove every passenger before he or she is allowed to board an 
airplane. These costs are borne by us, in the form of higher ticket 
prices, even though we never see them explicitly listed.

Maybe Secure Flight is a good use of our money; maybe it isn't. But 
let's have debates like that in the open, as part of the budget process, 
where it belongs.

And finally, Obama should mandate that airport security be solely about 
terrorism, and not a general-purpose security checkpoint to catch 
everyone from pot smokers to deadbeat dads.

The Constitution provides us, both Americans and visitors to America, 
with strong protections against invasive police searches. Two exceptions 
come into play at airport security checkpoints. The first is "implied 
consent," which means that you cannot refuse to be searched; your 
consent is implied when you purchase your ticket. And the second is 
"plain view," which means that if the TSA officer happens to see 
something unrelated to airport security while screening you, he is 
allowed to act on that.

Both of these principles are well established and make sense, but it's 
their combination that turns airport security checkpoints into 
police-state-like checkpoints.

The TSA should limit its searches to bombs and weapons and leave general 
policing to the police -- where we know courts and the Constitution still 
apply.

None of these changes will make airports any less safe, but they will go 
a long way to de-ratcheting the culture of fear, restoring the 
presumption of innocence and reassuring Americans, and the rest of the 
world, that -- as Obama said in his inauguration speech -- "we reject as 
false the choice between our safety and our ideals."

This essay originally appeared, without hyperlinks, in the New York 
Daily News.
http://www.nydailynews.com/opinions/2009/06/24/2009-06-24_clear_common_sense_for_takeoff_how_the_tsa_can_make_airport_security_work_for_pa.html 
or http://tinyurl.com/kwa2pd

http://www.schneier.com/blog/archives/2009/06/fixing_airport.html


** *** ***** ******* *********** *************

     Schneier News



I am speaking at Black Hat and DefCon, in Las Vegas, on 30 and 31 July 2009.
https://www.blackhat.com/html/bh-usa-09/bh-us-09-main.html
http://defcon.org/html/defcon-17/dc-17-index.html


** *** ***** ******* *********** *************

     Homomorphic Encryption Breakthrough



Last month, IBM made some pretty brash claims about homomorphic 
encryption and the future of security. I hate to be the one to throw 
cold water on the whole thing -- as cool as the new discovery is -- but 
it's important to separate the theoretical from the practical.

Homomorphic cryptosystems are ones where mathematical operations on the 
ciphertext have regular effects on the plaintext. A normal symmetric 
cipher -- DES, AES, or whatever -- is not homomorphic. Assume you have a 
plaintext P, and you encrypt it with AES to get a corresponding 
ciphertext C. If you multiply that ciphertext by 2, and then decrypt 2C, 
you get random gibberish instead of P. If you got something else, like 
2P, that would imply some pretty strong nonrandomness properties of AES 
and no one would trust its security.

The RSA algorithm is different. Encrypt P to get C, multiply C by 2, and 
then decrypt 2C -- and you get 2P. That's a homomorphism: perform some 
mathematical operation to the ciphertext, and that operation is 
reflected in the plaintext. The RSA algorithm is homomorphic with 
respect to multiplication, something that has to be taken into account 
when evaluating the security of a security system that uses RSA.
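
Here's a toy demonstration of that multiplicative homomorphism in 
Python, using textbook RSA with deliberately tiny, insecure parameters. 
(Strictly speaking, you multiply the ciphertext by the encryption of 2, 
rather than by the raw constant, to get the encryption of 2P.)

  n, e, d = 3233, 17, 2753       # toy RSA key (n = 61 * 53); never use for real

  def encrypt(m):
      return pow(m, e, n)

  def decrypt(c):
      return pow(c, d, n)

  P  = 42
  C  = encrypt(P)
  C2 = (C * encrypt(2)) % n      # operate on ciphertexts only
  print(decrypt(C2))             # prints 84: the multiplication carried through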

This isn't anything new. RSA's homomorphism was known in the 1970s, and 
other algorithms that are homomorphic with respect to addition have been 
known since the 1980s. But what has eluded cryptographers is a fully 
homomorphic cryptosystem: one that is homomorphic under both addition 
and multiplication and yet still secure. And that's what IBM researcher 
Craig Gentry has discovered.

This is a bigger deal than might appear at first glance. Any computation 
can be expressed as a Boolean circuit: a series of additions and 
multiplications. Your computer consists of a zillion Boolean circuits, 
and you can run programs to do anything on your computer. This algorithm 
means you can perform arbitrary computations on homomorphically 
encrypted data. More concretely: if you encrypt data in a fully 
homomorphic cryptosystem, you can ship that encrypted data to an 
untrusted person and that person can perform arbitrary computations on 
that data without being able to decrypt the data itself. Imagine what 
that would mean for cloud computing, or any outsourcing infrastructure: 
you no longer have to trust the outsourcer with the data.

Unfortunately -- you knew that was coming, right? -- Gentry's scheme is 
completely impractical. It uses something called an ideal lattice as the 
basis for the encryption scheme, and both the size of the ciphertext and 
the complexity of the encryption and decryption operations grow 
enormously with the number of operations you need to perform on the 
ciphertext -- and that number needs to be fixed in advance. And 
converting a computer program, even a simple one, into a Boolean circuit 
requires an enormous number of operations. These aren't impracticalities 
that can be solved with some clever optimization techniques and a few 
turns of Moore's Law; this is an inherent limitation in the algorithm. 
In one article, Gentry estimates that performing a Google search with 
encrypted keywords -- a perfectly reasonable simple application of this 
algorithm -- would increase the amount of computing time by a factor of 
about a trillion. Extrapolating from Moore's Law, it would be 40 years before that 
homomorphic search would be as efficient as a search today, and I think 
he's being optimistic with even this most simple of examples.

Despite this, IBM's PR machine has been in overdrive about the 
discovery. Its press release makes it sound like this new homomorphic 
scheme is going to rewrite the business of computing: not just cloud 
computing, but "enabling filters to identify spam, even in encrypted 
email, or protecting information contained in electronic medical 
records." Maybe someday, but not in my lifetime.

This is not to take anything away from Gentry or his discovery. 
Visions of a fully homomorphic cryptosystem have been dancing in 
cryptographers' heads for thirty years. I never expected to see one. It 
will be years before a sufficient number of cryptographers examine the 
algorithm that we can have any confidence that the scheme is secure, but 
-- practicality be damned -- this is an amazing piece of work.

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/07/homomorphic_enc.html


** *** ***** ******* *********** *************

     New Attack on AES



There's a new cryptanalytic attack on AES that is better than brute force:

"Abstract.  In this paper we present two related-key attacks on the full 
AES. For AES-256 we show the first key recovery attack that works for 
all the keys and has complexity 2^119, while the recent attack by 
Biryukov-Khovratovich-Nikolic works for a weak key class and has higher 
complexity. The second attack is the first cryptanalysis of the full 
AES-192. Both our attacks are boomerang attacks, which are based on the 
recent idea of finding local collisions in block ciphers and enhanced 
with the boomerang switching techniques to gain free rounds in the middle."

In an e-mail, the authors wrote:  "We also expect that a careful 
analysis may reduce the complexities. As a preliminary result, we think 
that the complexity of the attack on AES-256 can be lowered from 2^119 
to about 2^110.5 data and time.  We believe that these results may shed 
a new light on the design of the key-schedules of block ciphers, but 
they pose no immediate threat for the real world applications that use AES."

Agreed. While this attack is better than brute force -- and some 
cryptographers will describe the algorithm as "broken" because of it -- 
it is still far, far beyond our capabilities of computation.  The attack 
is, and probably forever will be, theoretical.  But remember: attacks 
always get better, they never get worse.  Others will continue to 
improve on these numbers.  While there's no reason to panic, no reason 
to stop using AES, no reason to insist that NIST choose another 
encryption standard, this will certainly be a problem for some of the 
AES-based SHA-3 candidate hash functions.
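
To put 2^119 in perspective, here's a back-of-envelope Python 
calculation, assuming an attacker who can somehow test 2^56 keys per 
second -- an assumption far beyond any real hardware:

  SECONDS_PER_YEAR = 60 * 60 * 24 * 365

  attack_work = 2 ** 119         # operations for the related-key attack
  rate        = 2 ** 56          # assumed (absurdly generous) ops per second

  years = attack_work / rate / SECONDS_PER_YEAR
  print("%.1e years" % years)    # roughly 3e11 years -- not a practical threat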

https://cryptolux.uni.lu/mediawiki/uploads/1/1a/Aes-192-256.pdf
https://cryptolux.org/FAQ_on_the_attacks


** *** ***** ******* *********** *************

     MD6 Withdrawn from SHA-3 Competition



In other SHA-3 news, Ron Rivest has suggested that his MD6 algorithm be 
withdrawn from the SHA-3 competition.  From an e-mail to a NIST mailing 
list:  "We suggest that MD6 is not yet ready for the next SHA-3 round, 
and we also provide some suggestions for NIST as the contest moves forward."

Basically, the issue is that in order for MD6 to be fast enough to be 
competitive, the designers have to reduce the number of rounds down to 
30-40, and at those rounds, the algorithm loses its proofs of resistance 
to differential attacks"  "Thus, while MD6 appears to be a robust and 
secure cryptographic hash algorithm, and has much merit for multi-core 
processors, our inability to provide a proof of security for a 
reduced-round (and possibly tweaked) version of MD6 against differential 
attacks suggests that MD6 is not ready for consideration for the next 
SHA-3 round."

This is a very classy withdrawal, as we expect from Ron Rivest -- 
especially given the fact that there are no attacks on it, while other 
algorithms have been seriously broken and their submitters keep trying 
to pretend that no one has noticed.

A copy of this blog post, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/07/md6.html


** *** ***** ******* *********** *************

     Ever Better Cryptanalytic Results Against SHA-1



The SHA family (which, I suppose, should really be called the MD4 
family) of cryptographic hash functions has been under attack for a long 
time. In 2005, we saw the first cryptanalysis of SHA-1 that was faster 
than brute force: collisions in 2^69 hash operations, later improved to 
2^63 operations. A great result, but not devastating. But remember the 
great truism of cryptanalysis: attacks always get better, they never get 
worse. Last week, devastating got a whole lot closer. A new attack can, 
at least in theory, find collisions in 2^52 hash operations -- well 
within the realm of computational possibility. Assuming the 
cryptanalysis is correct, we should expect to see an actual SHA-1 
collision within the year.
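
For a sense of scale, assume -- and this is an assumption, not a 
measured figure -- a large cluster or botnet computing 2^34 SHA-1 
hashes per second (about 17 billion per second):

  SECONDS_PER_DAY = 60 * 60 * 24

  work = 2 ** 52                 # hash operations for the new collision attack
  rate = 2 ** 34                 # assumed aggregate hashes per second

  print("%.1f days" % (work / rate / SECONDS_PER_DAY))   # about 3 days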

Note that this is a collision attack, not a pre-image attack. Most uses 
of hash functions don't care about collision attacks. But if yours does, 
switch to SHA-2 immediately.

This is why NIST is administering a SHA-3 competition for a new hash 
standard. And whatever algorithm is chosen, it will look nothing like 
anything in the SHA family (which is why I think it should be called the 
Advanced Hash Standard, or AHS).

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/06/ever_better_cry.html


** *** ***** ******* *********** *************

     Comments from Readers



There are thousands of comments -- many of them interesting -- on these 
topics on my blog. Search for the story you want to comment on, and join in.

http://www.schneier.com/blog


** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing 
summaries, analyses, insights, and commentaries on security: computer 
and otherwise.  You can subscribe, unsubscribe, or change your address 
on the Web at <http://www.schneier.com/crypto-gram.html>.  Back issues 
are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to 
colleagues and friends who will find it valuable.  Permission is also 
granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is the author of the 
best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," 
and "Applied Cryptography," and an inventor of the Blowfish, Twofish, 
Phelix, and Skein algorithms.  He is the Chief Security Technology 
Officer of BT BCSG, and is on the Board of Directors of the Electronic 
Privacy Information Center (EPIC).  He is a frequent writer and lecturer 
on security topics.  See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter.  Opinions expressed are not 
necessarily those of BT.

Copyright (c) 2009 by Bruce Schneier.
