

                 CRYPTO-GRAM

                 May 15, 2009

              by Bruce Schneier
      Chief Security Technology Officer, BT
             schneier at schneier.com
            http://www.schneier.com


A free monthly newsletter providing summaries, analyses, insights, and 
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit 
<http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at 
<http://www.schneier.com/crypto-gram-0905.html>.  These same essays 
appear in the "Schneier on Security" blog: 
<http://www.schneier.com/blog>.  An RSS feed is available.


** *** ***** ******* *********** *************

In this issue:
     Fourth Annual Movie-Plot Threat Contest Winner
     Book Review: The Science of Fear
     An Expectation of Online Privacy
     News
     Malicious Contamination of the Food Supply
     Unfair and Deceptive Data Trade Practices
     Schneier News
     Mathematical Illiteracy
     Conficker
     Comments from Readers


** *** ***** ******* *********** *************

     Fourth Annual Movie-Plot Threat Contest Winner


For this contest, the goal was "to find an existing event somewhere in 
the industrialized world -- Third World events are just too easy -- and 
provide a conspiracy theory to explain how the terrorists were really 
responsible."

I thought it was straightforward enough, but, honestly, I wasn't very 
impressed with the submissions.  Nothing surprised me with its 
cleverness.  There were scary entries and there were plausible entries, 
but hardly any were both at the same time.  And I was amazed by how many 
people didn't bother to read the rules at all, and just submitted 
movie-plot threats.

But after reading through the entries, I have chosen a winner.  It's 
HJohn, for his kidnap-blackmail-terrorist connection:  "Though recent 
shooting sprees in churches, nursing homes, and at family outings appear 
unrelated, a terrifying link has been discovered. All perpetrators had 
small children who were abducted by terrorists, and perpetrators 
received a video of their children with hooded terrorists warning that 
their children would be beheaded if they do not engage in the suicidal 
rampage. The terror threat level has been raised to red as profiling, 
known associations, and criminal history are now useless in detecting 
who will be the next terrorist sniper or airline hijacker. Anyone who 
loves their children may be a potential terrorist."

Fairly plausible, and definitely scary.  Congratulations, HJohn.

A copy of this article, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/05/fourth_movie-pl.html


** *** ***** ******* *********** *************

     Book Review: The Science of Fear



Daniel Gardner's The Science of Fear was published last July, but I've 
only just gotten around to reading it. Waiting was a mistake. It's a
fantastic look at how humans deal with fear: exactly the kind of thing I 
have been reading and writing about for the past couple of years. It's 
the book I wanted to write, and it's a great read.

Gardner writes about how the brain processes fear and risk, how it 
assesses probability and likelihood, and how it makes decisions under 
uncertainty. The book talks about all the interesting psychological 
studies -- cognitive psychology, evolutionary psychology, behavioral 
economics, experimental philosophy -- that illuminate how we think and 
act regarding fear. The book also talks about how fear is used to 
influence people, by marketers, by politicians, by the media. And 
lastly, the book talks about different areas where fear plays a part: 
health, crime, terrorism.

There have been a lot of books published recently that apply these new 
paradigms of human psychology to different domains -- to randomness, to 
traffic, to rationality, to art, to religion, and so on -- but after you
read a few you start seeing the same dozen psychology experiments over 
and over again. Even I did it, when I wrote about the psychology of 
security. But Gardner's book is different: he goes further, explains 
more, demonstrates his point with the more obscure experiments that most 
authors don't bother seeking out. His writing style is both easy to read 
and informative, a nice mix of data and anecdote. The flow of the book
makes sense. And his analysis is spot-on.

My only problem with the book is that Gardner doesn't use standard names 
for the various brain heuristics he talks about. Yes, his names are more 
intuitive and evocative, but they're wrong. If you have already read 
other books in the field, this is annoying because you have to 
constantly translate into standard terminology. And if you haven't read 
anything else in the field, this is a real problem because you'll be 
needlessly confused when you read about these things in other books and 
articles.

So here's a handy conversion chart. Print it out and tape it to the 
inside front cover. Print another copy out and use it as a bookmark.

	Rule of Typical Things = representativeness heuristic
	Example Rule = availability heuristic
	Good-Bad Rule = affect heuristic
	confirmation bias = confirmation bias

That's it. That's the only thing I didn't like about the book. 
Otherwise, it's perfect. It's the book I wish I had written. Only I 
don't think I would have done as good a job as Gardner did. The Science 
of Fear should be required reading for...well, for everyone.

The paperback will be published in June.

http://www.amazon.com/exec/obidos/ASIN/0525950621/counterpane/

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/04/book_review_the.html


** *** ***** ******* *********** *************

     An Expectation of Online Privacy



If your data is online, it is not private. Oh, maybe it seems private. 
Certainly, only you have access to your e-mail. Well, you and your ISP. 
And the sender's ISP. And any backbone provider who happens to route 
that mail from the sender to you.  And, if you read your personal mail 
from work, your company. And, if they have taps at the correct points, 
the NSA and any other sufficiently well-funded government intelligence 
organization -- domestic and international.

You could encrypt your mail, of course, but few of us do that. Most of 
us now use webmail. The general problem is that, for the most part, your 
online data is not under your control. Cloud computing and software as a 
service exacerbate the problem.

Your webmail is less under your control than it would be if you 
downloaded your mail to your computer. If you use Salesforce.com, you're 
relying on that company to keep your data private. If you use Google 
Docs, you're relying on Google. This is why the Electronic Privacy 
Information Center recently filed a complaint with the Federal Trade 
Commission: many of us are relying on Google's security, but we don't 
know what it is.

This is new. Twenty years ago, if someone wanted to look through your 
correspondence, he had to break into your house. Now, he can just break 
into your ISP. Ten years ago, your voicemail was on an answering machine 
in your office; now it's on a computer owned by a telephone company. 
Your financial accounts are on remote websites protected only by 
passwords; your credit history is collected, stored, and sold by 
companies you don't even know exist.

And more data is being generated. Lists of books you buy, as well as the 
books you look at, are stored in the computers of online booksellers. 
Your affinity card tells your supermarket what foods you like. What were 
cash transactions are now credit card transactions. What used to be an 
anonymous coin tossed into a toll booth is now an EZ Pass record of 
which highway you were on, and when. What used to be a face-to-face chat 
is now an e-mail, IM, or SMS conversation -- or maybe a conversation 
inside Facebook.

Remember when Facebook recently changed its terms of service to take 
further control over your data? They can do that whenever they want, you 
know.

We have no choice but to trust these companies with our security and 
privacy, even though they have little incentive to protect them. Neither 
ChoicePoint, LexisNexis, Bank of America, nor T-Mobile bears the costs 
of privacy violations or any resultant identity theft.

This loss of control over our data has other effects, too. Our 
protections against police abuse have been severely watered down. The 
courts have ruled that the police can search your data without a 
warrant, as long as others hold that data. If the police want to read 
the e-mail on your computer, they need a warrant; but they don't need 
one to read it from the backup tapes at your ISP.

This isn't a technological problem; it's a legal problem. The courts 
need to recognize that in the information age, virtual privacy and 
physical privacy don't have the same boundaries. We should be able to 
control our own data, regardless of where it is stored. We should be 
able to make decisions about the security and privacy of that data, and 
have legal recourse should companies fail to honor those decisions. And 
just as the Supreme Court eventually ruled that tapping a telephone was 
a Fourth Amendment search, requiring a warrant -- even though it 
occurred at the phone company switching office and not in the target's 
home or office -- the Supreme Court must recognize that reading personal 
e-mail at an ISP is no different.

This essay was originally published on the SearchSecurity.com website, 
as the second half of a point/counterpoint with Marcus Ranum.
http://searchsecurity.techtarget.com/magazinePrintFriendly/0,296905,sid14_gci1354832,00.html 
or http://tinyurl.com/pnv8vq


** *** ***** ******* *********** *************

     News



New frontiers in biometrics.  Ears:
http://www.newscientist.com/article/mg20227035.200-our-ears-may-have-builtin-passwords.html 
or http://tinyurl.com/dlgmaj
Arm swinging:
http://techon.nikkeibp.co.jp/english/NEWS_EN/20090414/168716/
I guess biometrics is now the "it" thing to study.

Hacking a Time Magazine poll.  Not particularly subtle, but clever 
nonetheless:
http://musicmachinery.com/2009/04/15/inside-the-precision-hack/
http://www.theregister.co.uk/2009/04/17/time_top_100_hack/
http://musicmachinery.com/2009/04/27/moot-wins-time-inc-loses/

Department of Homeland Security recruitment drive:
http://news.yahoo.com/s/ap/20090418/ap_on_go_pr_wh/us_cyber_security

Funny "war on photography" anecdote:
http://sierracharlie.wordpress.com/2009/04/10/terror/

I was going to write a commentary on NSA Director General Alexander's 
keynote speech at the RSA Conference, but he didn't actually *say* anything.
http://www.schneier.com/blog/archives/2009/04/nsa_at_rsa.html

Low-tech impersonation trick at restaurants:
http://www.schneier.com/blog/archives/2009/04/low-tech_impers.html

Encrypting your USB drive is smart.  Writing the encryption key down is 
smart.  Writing it on a piece of paper and attaching it to the USB drive 
is not.
http://news.bbc.co.uk/1/hi/england/lancashire/8003757.stm

Hacking U.S. military satellites is more widespread than you might think:
http://www.wired.com/politics/security/news/2009/04/fleetcom

Fake facts on Twitter: the medium makes authentication hard.
http://www.schneier.com/blog/archives/2009/04/fake_facts_on_t.html

Remember those terrorism arrests that the UK government conducted, after 
a secret document was accidentally photographed?  No one was charged:
http://news.bbc.co.uk/2/hi/uk_news/8011955.stm
http://www.schneier.com/blog/archives/2009/04/how_not_to_carr.html

Cell phones and hostage situations:
http://www.schneier.com/blog/archives/2009/04/cell_phones_and.html

This apparently non-ironic video warns that people might impersonate 
census workers in an effort to rob you.  But while you shouldn't trust 
the ID of a stranger, you should trust that same stranger to give you a 
phone number where you can verify that ID.  This, of course, makes no sense.
http://www.keyt.com/news/local/43392637.html
Preventing impersonation is hard.
http://www.schneier.com/blog/archives/2009/01/impersonation.html

"No-fly" also means "no-flyover": plane from Paris to Mexico isn't 
allowed to fly over the United States.
http://www.schneier.com/blog/archives/2009/04/no-fly_also_mea.html

Lessons from the Columbine school shooting: it's not the high-tech gear, 
but trained and alert staff that actually make a difference:
http://www.schneier.com/blog/archives/2009/04/lessons_from_th_2.html

Ireland does away with electronic voting, returning to paper ballots 
again.  Smart country.
http://www.schneier.com/blog/archives/2009/04/ireland_does_aw.html

A sad tale of fingerprint biometrics gone wrong.  Amusing and interesting:
http://thedailywtf.com/Articles/Cracking-your-Fingers.aspx

Interesting article from The New York Times on preparing for cyberwar:
http://www.nytimes.com/2009/04/28/us/28cyber.html

And yet another New York Times cyberwar article, from two days later:
http://www.schneier.com/blog/archives/2009/05/yet_another_new.html
I was particularly disturbed by the last paragraph of the newspaper 
article:  "Introducing the possibility of a nuclear response to a 
catastrophic cyberattack would be expected to serve the same purpose." 
Nuclear war is not a suitable response to a cyberattack.

Law professor Googles Justice Scalia just to see what he can collect. 
Scalia isn't amused:
http://www.abajournal.com/weekly/fordham_law_class_collects_scalia_info_justice_is_steamed 
or http://tinyurl.com/crbzjg

Security considerations in the evolution of the human penis -- a 
fascinating bit of evolutionary biology:
http://www.scientificamerican.com/article.cfm?id=secrets-of-the-phallus 
or http://tinyurl.com/dy8vxz

The U.S. Air Force is using a secure version of MS Windows:
http://www.schneier.com/blog/archives/2009/05/secure_version.html

Lie detector charlatans:
http://www.schneier.com/blog/archives/2009/05/lie_detector_ch.html

Virginia health data held for ransom:
http://www.schneier.com/blog/archives/2009/05/virginia_data_r.html

MI6 and a lost memory stick:
http://www.schneier.com/blog/archives/2009/05/mi6_and_a_lost.html

Marc Rotenberg on security vs. privacy:
http://www.huffingtonpost.com/marc-rotenberg/privacy-vs-security-pr_b_71806.html 
or http://tinyurl.com/2hozm8

Researchers hijack a botnet:
http://www.schneier.com/blog/archives/2009/05/researchers_hij.html

The Zeus Trojan has a self-destruct option:
http://voices.washingtonpost.com/securityfix/2009/05/zeustracker_and_the_nuclear_op.html 
or http://tinyurl.com/odjwx8
This is bad.  I see it as a sign that the botnet wars are heating up, 
and botnet designers would rather destroy their networks than have them 
fall into "enemy" hands.

Using surveillance cameras to detect cashier cheating.
http://www.schneier.com/blog/archives/2009/05/using_surveilla.html

Software problems with a breath alcohol detector.
http://www.schneier.com/blog/archives/2009/05/software_proble.html

A U.S. District Court has ruled that the police do not need a warrant to 
place a GPS tracking device on someone's car:
http://www.schneier.com/blog/archives/2009/05/no_warrant_requ.html


** *** ***** ******* *********** *************

     Malicious Contamination of the Food Supply



Terrorists attacking our food supply is a nightmare scenario that has 
been given new life during the recent swine flu outbreak. The attack 
seems easy to carry out, which makes it important to understand why it 
hasn't happened.
G.R. Dalziel, at the Nanyang Technological University in Singapore, has 
written a report chronicling every confirmed case of malicious food 
contamination in the world since 1950: 365 cases in all, plus 126 
additional unconfirmed cases. What he found demonstrates the reality of 
terrorist food attacks.

It turns out 72% of the food poisonings occurred at the end of the food 
supply chain -- at home -- typically by a friend, relative, neighbor, or 
co-worker trying to kill or injure a specific person. A characteristic 
example is Heather Mook of York, who in 2007 tried to kill her husband 
by putting rat poison in his spaghetti.

Most of these cases resulted in fewer than five casualties -- Mook only 
injured her husband in this incident -- although 16% resulted in five or 
more. Of the 19 cases that claimed 10 or more lives, four involved 
serial killers operating over several years.

Another 23% of cases occurred at the retail or food service level. A 
1998 incident in Japan, where someone put arsenic in a curry sold at a 
summer festival, killing four and hospitalizing 63, is a typical 
example. Only 11% of these incidents resulted in 100 or more casualties, 
while 44% resulted in none.

There are very few incidents of people contaminating the actual food 
supply. People deliberately contaminated a water supply seven times, 
resulting in three deaths. There is only one example of someone 
deliberately contaminating a crop before harvest -- in Australia in 2006 
-- and the crops were recalled before they could be sold. And in the 
three cases of someone deliberately contaminating food during packaging 
and distribution, including a 2005 case in the UK where glass and 
needles were baked into loaves of bread, no one died or was injured.

This isn't the stuff of bioterrorism. The closest example occurred in 
1984 in the US, where members of a religious group known as the 
Rajneeshees contaminated several restaurant salad bars with Salmonella 
enterica Typhimurium, sickening 751, hospitalizing 45, but killing no 
one. In fact, no one knew this was malicious until a year later, when 
one of the perpetrators admitted it.

Almost all of the food contaminations used conventional poisons such as 
cyanide, drain cleaner, mercury, or weed killer. There were nine 
incidents of biological agents, including salmonella, ricin, and fecal 
matter, and eight cases of radiological matter. The 2006 London 
poisoning of the former KGB agent Alexander Litvinenko with polonium-210 
in his tea is an example of the latter.

And that assassination illustrates the real risk of malicious food 
poisonings. What is discussed in terrorist training manuals, and what 
the CIA is worried about, is the use of contaminated food in targeted 
assassinations. The quantities involved for mass poisonings are too 
great, the nature of the food supply too vast and the details of any 
plot too complicated and unpredictable to be a real threat. That becomes 
crystal clear as you read the details of the different incidents: it's 
hard to kill one person, and very hard to kill dozens. Hundreds, 
thousands: it's just not going to happen any time soon. The fear of 
bioterror is much greater, and the panic from any bioterror scare will 
injure more people, than bioterrorism itself.

Far more dangerous are accidental contaminations due to negligent 
industry practices, such as the 2006 spinach E. coli and, more recently, 
peanut salmonella contaminations in the US, the 2008 milk contaminations 
in China, and the BSE-infected beef from earlier this decade. And the 
systems we have in place to deal with these accidental contaminations 
also work to mitigate any intentional ones.

In 2004, the then US secretary of health and human services, Tommy 
Thompson, said on Fox News: "I cannot understand why terrorists have not 
attacked our food supply. Because it is so easy to do."

Guess what? It's not at all easy to do.

Dalziel's report:
http://www.rsis.edu.sg/CENS/publications/reports/RSIS_Food%20Defence_170209.pdf 
or http://tinyurl.com/r96mtj

Thompson quote:
http://www.foxnews.com/story/0,2933,141044,00.html

This essay previously appeared in The Guardian.
http://www.guardian.co.uk/technology/2009/may/14/bruce-schneier-bioterrorism 
or http://tinyurl.com/pkuevo


** *** ***** ******* *********** *************

     Unfair and Deceptive Data Trade Practices



Do you know what your data did last night? Almost none of the more than 
27 million people who took the RealAge quiz realized that their personal 
health data was being used by drug companies to develop targeted e-mail 
marketing campaigns.

There's a basic consumer protection principle at work here, and it's the 
concept of "unfair and deceptive" trade practices. Basically, a company 
shouldn't be able to say one thing and do another: sell used goods as 
new, lie on ingredients lists, advertise prices that aren't generally 
available, claim features that don't exist, and so on.

Buried in RealAge's 2,400-word privacy policy is this disclosure: "If 
you elect to say yes to becoming a free RealAge Member we will 
periodically send you free newsletters and e-mails that directly promote 
the use of our site(s) or the purchase of our products or services and 
may contain, in whole or in part, advertisements for third parties which 
relate to marketed products of selected RealAge partners."

They maintain that when you join the website, you consent to receiving 
pharmaceutical company spam. But since that isn't spelled out, it's not 
really informed consent. That's deceptive.

Cloud computing is another technology where users entrust their data to 
service providers. Salesforce.com, Gmail, and Google Docs are examples; 
your data isn't on your computer -- it's out in the "cloud" somewhere -- 
and you access it from your web browser. Cloud computing has significant 
benefits for customers and huge profit potential for providers. It's one 
of the fastest-growing IT market segments -- 69% of Americans now use 
some sort of cloud computing service -- but the business is rife with 
shady, if not outright deceptive, advertising.

Take Google, for example. Last month, the Electronic Privacy Information 
Center (I'm on its board of directors) filed a complaint with the 
Federal Trade Commission concerning Google's cloud computing services. 
On its website, Google repeatedly assures customers that their data is 
secure and private, while published vulnerabilities demonstrate that it 
is not. Google's not foolish, though; its Terms of Service explicitly 
disavow any warranty or any liability for harm that might result from 
Google's negligence, recklessness, malevolent intent, or even purposeful 
disregard of existing legal obligations to protect the privacy and 
security of user data. EPIC claims that's deceptive.

Facebook isn't much better. Its plainly written (and not legally 
binding) Statement of Principles contains an admirable set of goals, but 
its denser and more legalistic Statement of Rights and Responsibilities 
undermines a lot of it. One research group that studies these documents 
called it "democracy theater": Facebook wants the appearance of 
involving users in governance, without the messiness of actually having 
to do so. Deceptive.

These issues are not identical. RealAge is hiding what it does with your 
data. Google is trying to both assure you that your data is safe and 
duck any responsibility when it's not. Facebook wants to market a 
democracy but run a dictatorship. But they all involve trying to deceive 
the customer.

Cloud computing services like Google Docs, and social networking sites 
like RealAge and Facebook, bring with them significant privacy and 
security risks over and above traditional computing models. Unlike data 
on my own computer, which I can protect to whatever level I believe 
prudent, I have no control over any of these sites, nor any real 
knowledge of how these companies protect my privacy and security. I have 
to trust them.

This may be fine -- the advantages might very well outweigh the risks -- 
but users often can't weigh the trade-offs because these companies are 
going out of their way to hide the risks.

Of course, companies don't want people to make informed decisions about 
where to leave their personal data. RealAge wouldn't get 27 million 
members if its webpage clearly stated "you are signing up to receive 
e-mails containing advertising from pharmaceutical companies," and 
Google Docs wouldn't get five million users if its webpage said "We'll 
take some steps to protect your privacy, but you can't blame us if 
something goes wrong."

And of course, trust isn't black and white. If, for example, Amazon 
tried to use customer credit card info to buy itself office supplies, 
we'd all agree that that was wrong. If it used customer names to solicit 
new business from their friends, most of us would consider this wrong. 
When it uses buying history to try to sell customers new books, many of 
us appreciate the targeted marketing. Similarly, no one expects Google's 
security to be perfect. But if it didn't fix known vulnerabilities, most 
of us would consider that a problem.

This is why understanding is so important. For markets to work, 
consumers need to be able to make informed buying decisions. They need 
to understand both the costs and benefits of the products and services 
they buy. Allowing sellers to manipulate the market by lying outright 
about their products, or even by hiding vital information about them, 
breaks capitalism -- and that's why the government has to step in to 
ensure markets work smoothly.

Last month, Mary K. Engle, Acting Deputy Director of the FTC's Bureau of 
Consumer Protection, said: "a company's marketing materials must be 
consistent with the nature of the product being offered. It's not enough 
to disclose the information only in a fine print of a lengthy online 
user agreement." She was speaking about Digital Rights Management and, 
specifically, an incident where Sony used a music copy protection scheme 
without disclosing that it secretly installed software on customers' 
computers. DRM is different from cloud computing or even online surveys 
and quizzes, but the principle is the same.

Engle again: "if your advertising giveth and your EULA [license 
agreement] taketh away, don't be surprised if the FTC comes calling." 
That's the right response from government.

A version of this article originally appeared in The Wall Street Journal.
http://online.wsj.com/article/SB123997522418329223.html

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/04/unfair_and_dece.html


** *** ***** ******* *********** *************

     Schneier News



I'm speaking at the Computers, Freedom, and Privacy conference on June 2 
in Washington DC.
http://www.cfp2009.org/wiki/index.php/Main_Page

Marcus Ranum and I did a video version of our Face Off column.
http://searchsecurity.techtarget.com/video/0,297151,sid14_gci1355883,00.html 
or http://tinyurl.com/p9eznn

Interview with me from ThreatPost:
http://threatpost.com/blogs/bruce-schneier-cryptography-security-theater-and-psychology-fear 
or http://tinyurl.com/oyyeea
Slashdot thread on the interview:
http://it.slashdot.org/article.pl?sid=09/05/13/1822242

San Francisco restaurant reviews for the RSA Conference:
http://www.schneier.com/blog/archives/2009/04/san_francisco_r.html


** *** ***** ******* *********** *************

     Mathematical Illiteracy



This may be the stupidest example of risk assessment I've ever seen. 
It's a video clip from a recent Daily Show, about the dangers of the 
Large Hadron Collider.  The segment starts off slowly, but then there's 
an exchange with high school science teacher Walter L. Wagner, who 
insists the device has a 50-50 chance of destroying the world:

	"If you have something that can happen, and something that won't 
necessarily happen, it's going to either happen or it's going to not 
happen, and so the best guess is 1 in 2."

	"I'm not sure that's how probability works, Walter."

This is followed by clips of news shows taking the guy seriously.
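Walter's "either it happens or it doesn't, so 1 in 2" rule ignores that 
the two outcomes need not be equally likely. A quick simulation (an 
illustrative sketch of my own, not from the show) makes the gap obvious 
for an everyday two-outcome event, rolling double sixes:

```python
import random

def p_double_six(trials=200_000, seed=1):
    """Estimate P(two dice both show 6) by simulation."""
    random.seed(seed)
    hits = sum(
        1 for _ in range(trials)
        if random.randint(1, 6) == 6 and random.randint(1, 6) == 6
    )
    return hits / trials

# "Double sixes either happen or they don't" -- two outcomes,
# yet the true probability is 1/36 (about 0.028), nowhere near 1/2.
print(p_double_six())
```

The estimate lands near 1/36, because probability follows from how many 
equally likely ways each outcome can occur, not from counting the 
outcomes themselves.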

In related news, almost four-fifths of Americans don't know that a 
trillion is a million million, and most think it's less than that.  Is 
it any wonder we're having so much trouble with national budget debates?

http://www.thedailyshow.com/video/index.jhtml?videoId=225921&title=Large-Hadron-Collider 
or http://tinyurl.com/cevkwa
http://econ4u.org/blog/?p=587


** *** ***** ******* *********** *************

     Conficker



Conficker's April Fool's joke -- the huge, menacing build-up and then 
nothing -- is a good case study on how we think about risks, one whose 
lessons are applicable far outside computer security. Generally, our 
brains aren't very good at probability and risk analysis. We tend to use 
cognitive shortcuts instead of thoughtful analysis. This worked fine for 
the simple risks we encountered for most of our species' existence, but 
it's less effective against the complex risks society forces us to face 
today.

We tend to judge the probability of something happening on how easily we 
can bring examples to mind. It's why people tend to buy earthquake 
insurance after an earthquake, when the risk is lowest. It's why those 
of us who have been the victims of a crime tend to fear crime more than 
those who haven't. And it's why we fear a repeat of 9/11 more than other 
types of terrorism.

We fear being murdered, kidnapped, raped and assaulted by strangers, 
when friends and relatives are far more likely to do those things to us. 
We worry about plane crashes instead of car crashes, which are far more 
common. We tend to exaggerate spectacular, strange, and rare events, and 
downplay more ordinary, familiar, and common ones.

We also respond more to stories than to data. If I show you statistics 
on crime in New York, you'll probably shrug and continue your vacation 
planning. But if a close friend gets mugged there, you're more likely to 
cancel your trip.

And specific stories are more convincing than general ones. That is why 
we buy more insurance against plane accidents than against travel 
accidents, or accidents in general. Or why, when surveyed, we are 
willing to pay more for air travel insurance covering "terrorist acts" 
than "all possible causes". That is why, in experiments, people judge 
specific scenarios more likely than more general ones, even if the 
general ones include the specific.
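That last point is just the monotonicity of probability: a specific 
event can never be more likely than a general event that contains it. A 
small simulation (using hypothetical, purely illustrative rates) makes 
that concrete for the travel-insurance example:

```python
import random

def simulate(trials=100_000, seed=7):
    """Count trips disrupted by any cause vs. by terrorism specifically."""
    random.seed(seed)
    any_cause = terrorist = 0
    for _ in range(trials):
        disrupted = random.random() < 0.001         # hypothetical rate
        by_terrorism = disrupted and random.random() < 0.05
        any_cause += disrupted
        terrorist += by_terrorism
    return terrorist / trials, any_cause / trials

p_specific, p_general = simulate()
# "Terrorist acts" is a strict subset of "all possible causes",
# so P(specific) <= P(general), always.
assert p_specific <= p_general
```

Paying more to insure against the subset than against the whole set is 
therefore never rational -- yet that's exactly what the surveys find 
people willing to do.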

Conficker's 1 April deadline was precisely the sort of event humans tend 
to overreact to. It's a specific threat, which convinces us that it's 
credible. It's a specific date, which focuses our fear. Our natural 
tendency to exaggerate makes it more spectacular, which further 
increases our fear. Its repetition by the media makes it even easier to 
bring to mind. As the story becomes more vivid, it becomes more convincing.

The New York Times called it an "unthinkable disaster", the television 
news show 60 Minutes said it could "disrupt the entire internet" and we 
at the Guardian warned that it might be a "deadly threat". Naysayers 
were few, and drowned out.

The first of April passed without incident, but Conficker is no less 
dangerous today. About 2.2m computers worldwide are still infected with 
Conficker.A and B, and about 1.3m more are infected with the nastier 
Conficker.C. It's true that on 1 April Conficker.C tried a new trick to 
update itself, but its authors could have updated the worm using another 
mechanism any day. In fact, they updated it on 8 April, and can do so again.

And Conficker is just one of many, many dangerous worms being run by 
criminal organizations. It came with a date and got a lot of press -- 
that 1 April date was more hype than reality -- but it's not 
particularly special. In short, there are many criminal organizations on 
the internet using worms and other forms of malware to infect computers. 
They then use those computers to send spam, commit fraud, and infect 
more computers. The risks are real and serious. Luckily, keeping your 
anti-virus software up-to-date and not clicking on strange attachments 
can keep you pretty secure. Conficker spreads through a Windows 
vulnerability that was patched in October. You do have automatic update 
turned on, right?

But people being people, it takes a specific story for us to protect 
ourselves.

This essay previously appeared in The Guardian.
http://www.guardian.co.uk/technology/2009/apr/23/conficker-panic

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/04/conficker.html


** *** ***** ******* *********** *************


     Comments from Readers



There are hundreds of comments -- many of them interesting -- on these 
topics on my blog. Search for the story you want to comment on, and join in.

http://www.schneier.com/blog


** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing 
summaries, analyses, insights, and commentaries on security: computer 
and otherwise.  You can subscribe, unsubscribe, or change your address 
on the Web at <http://www.schneier.com/crypto-gram.html>.  Back issues 
are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to 
colleagues and friends who will find it valuable.  Permission is also 
granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is the author of the 
best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," 
and "Applied Cryptography," and an inventor of the Blowfish, Twofish, 
Phelix, and Skein algorithms.  He is the Chief Security Technology 
Officer of BT BCSG, and is on the Board of Directors of the Electronic 
Privacy Information Center (EPIC).  He is a frequent writer and lecturer 
on security topics.  See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter.  Opinions expressed are not 
necessarily those of BT.

Copyright (c) 2009 by Bruce Schneier.
