

                 CRYPTO-GRAM

                June 15, 2009

              by Bruce Schneier
      Chief Security Technology Officer, BT
             schneier at schneier.com
            http://www.schneier.com


A free monthly newsletter providing summaries, analyses, insights, and 
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit 
<http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at 
<http://www.schneier.com/crypto-gram-0906.html>.  These same essays 
appear in the "Schneier on Security" blog: 
<http://www.schneier.com/blog>.  An RSS feed is available.


** *** ***** ******* *********** *************

In this issue:
     Obama's Cybersecurity Speech
     "Lost" Puzzle in Wired Magazine
     Last Month's Terrorism Arrests
     News
     Me on Full-Body Scanners in Airports
     Schneier News
     The Doghouse: Net1
     Cloud Computing
     The Second Interdisciplinary Workshop on Security and
        Human Behaviour
     Comments from Readers


** *** ***** ******* *********** *************

     Obama's Cybersecurity Speech



I am optimistic about President Obama's new cybersecurity policy and the 
appointment of a new "cybersecurity coordinator," though much depends on 
the details. What we do know is that the threats are real, from identity 
theft to Chinese hacking to cyberwar.

His principles were all welcome -- securing government networks, 
coordinating responses, working to secure the infrastructure in private 
hands (the power grid, the communications networks, and so on), although 
I think he's overly optimistic that legislation won't be required. I was 
especially heartened to hear his commitment to funding research. Much of 
the technology we currently use to secure cyberspace was developed from 
university research, and the more of it we finance today the more secure 
we'll be in a decade.

Education is also vital, although sometimes I think my parents need more 
cybersecurity education than my grandchildren do. I also appreciate the 
president's commitment to transparency and privacy, both of which are 
vital for security.

But the details matter. Centralizing security responsibilities has the 
downside of making security more brittle by instituting a single 
approach and a uniformity of thinking. Unless the new coordinator 
distributes responsibility, cybersecurity won't improve.

As the administration moves forward on the plan, two principles should 
apply. One, security decisions need to be made as close to the problem 
as possible. Protecting networks should be done by people who understand 
those networks, and threats need to be assessed by people close to the 
threats. But distributed responsibility has more risk, so oversight is 
vital.

Two, security coordination needs to happen at the highest level 
possible, whether that's evaluating information about different threats, 
responding to an Internet worm or establishing guidelines for protecting 
personal information. The whole picture is larger than any single agency.

This essay originally appeared on The New York Times website, along with 
several others commenting on Obama's speech.
http://roomfordebate.blogs.nytimes.com/2009/05/29/a-plan-of-attack-in-cyberspace

All the essays are worth reading, although I want to specifically quote 
James Bamford making an important point I've repeatedly made:  "The 
history of White House czars is not a glorious one as anyone who has 
followed the rise and fall of the drug czars can tell. There is a lot of 
hype, a White House speech, and then things go back to normal. Power, 
the ability to cause change, depends primarily on who controls the money 
and who is closest to the president's ear.  Because the new cyber czar 
will have neither a checkbook nor direct access to President Obama, the 
role will be more analogous to a traffic cop than a czar."

Gus Hosein wrote a good essay on the need for privacy:  "Of course 
raising barriers around computer systems is certainly a good start. But 
when these systems are breached, our personal information is left 
vulnerable. Yet governments and companies are collecting more and more 
of our information.  The presumption should be that all data collected 
is vulnerable to abuse or theft. We should therefore collect only what 
is absolutely required."

I made a similar argument in 2002, about the creation 
of the Department of Homeland Security:  "The human body defends itself 
through overlapping security systems. It has a complex immune system 
specifically to fight disease, but disease fighting is also distributed 
throughout every organ and every cell. The body has all sorts of 
security systems, ranging from your skin to keep harmful things out of 
your body, to your liver filtering harmful things from your bloodstream, 
to the defenses in your digestive system. These systems all do their own 
thing in their own way. They overlap each other, and to a certain extent 
one can compensate when another fails. It might seem redundant and 
inefficient, but it's more robust, reliable, and secure. You're alive 
and reading this because of it."

More news links on Obama's speech:
http://www.nytimes.com/2009/05/30/us/politics/30cyber.html
http://voices.washingtonpost.com/securityfix/2009/05/obama_cybersecurity_is_a_natio.html?wprss=securityfix 
or http://tinyurl.com/lrp9cm
http://www.google.com/hostednews/ap/article/ALeqM5i9mgJb3EsMIaA6aVcbSkp84g0sMwD98G2U0G0 
or http://tinyurl.com/maz4lh
http://www.networkworld.com/news/2009/052909-obama-security-coordinator.html 
or http://tinyurl.com/lbge8m
http://swampland.blogs.time.com/2009/05/29/obamas-cybersecurity-speech-why-bother/ 
or http://tinyurl.com/l8vhwf
http://www.theregister.co.uk/2009/05/29/obama_creates_cyber_post/

Good commentary from Gene Spafford:
http://www.cerias.purdue.edu/site/blog/post/on_cyber_czars_and_60-day_reports/ 
or http://tinyurl.com/nalj74

Good commentary from Bob Blakley:
http://notabob.blogspot.com/2009/06/cyber-security.html

Me in 2002:
http://www.schneier.com/crypto-gram-0212.html#3

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/05/obamas_cybersec.html


** *** ***** ******* *********** *************

     "Lost" Puzzle in Wired Magazine



For the April 09 issue of Wired Magazine, I was asked to create a 
cryptographic puzzle based on the television show Lost.  Specifically, I 
was given a "clue" to encrypt.

Details are in the links.  Creating something like this is very hard. 
The puzzle needs to be hard enough that people don't figure it out 
immediately, and easy enough that people eventually do figure it out. 
To make matters even more complicated, people will share their ideas on 
the Internet.  So if the solution requires -- and I'm making this up -- 
expertise in Mayan history, carburetor design, algebraic topology, and 
Russian folk dancing, those people are likely to come together on the 
Internet.  The puzzle has to be challenging for the group mind, not just 
for individual minds.

http://mestizorocks.blogspot.com/2009/05/spoiler-alert-lost-puzzle-solution-from.html 
or http://tinyurl.com/oy8bok
http://www.yesbutnobutyes.com/archives/2009/04/lost_wired_puzz.html
http://bradicali.blogspot.com/2009/04/major-major-progression-on-lost-numbers.html 
or http://tinyurl.com/n9dfdj


** *** ***** ******* *********** *************

     Last Month's Terrorism Arrests



I have four points to make on the arrest of the four men for plotting 
to blow up synagogues in New York.  One: There was little danger of an 
actual terrorist attack:  "Authorities said the four men have long been 
under investigation and there was little danger they could actually have 
carried out their plan, NBC News' Pete Williams reported."

And: "'They never got anywhere close to being able to do anything,' one 
official told NBC News.  'Still, it's good to have guys like this off 
the street.'"

Of course, politicians are using this incident to peddle more fear: 
"'This was a very serious threat that could have cost many, many lives 
if it had gone through,' Representative Peter T. King, Republican from 
Long Island, said in an interview with WPIX-TV. 'It would have been a 
horrible, damaging tragedy. There's a real threat from homegrown 
terrorists and also from jailhouse converts.'"

Two, they were caught by traditional investigation and intelligence. 
Not airport security.  Not warrantless eavesdropping.  But old-fashioned 
investigation and intelligence.  This is what works.  This is what keeps 
us safe.  I wrote an essay in 2004 that says exactly that.  "The only 
effective way to deal with terrorists is through old-fashioned police 
and intelligence work -- discovering plans before they're implemented 
and then going after the plotters themselves."

Three, they were idiots:  "The ringleader of the four-man homegrown 
terror cell accused of plotting to blow up synagogues in the Bronx and 
military planes in Newburgh admitted to a judge today that he had smoked 
pot before his bust last night.

"When U.S. Magistrate Judge Lisa M. Smith asked James Cromitie if his 
judgment was impaired during his appearance in federal court in White 
Plains, the 55-year-old confessed: 'No. I smoke it regularly.  I 
understand everything you are saying.'"

Four, an "informant" helped this group a lot:  "In April, Mr. Cromitie 
and the three other men selected the synagogues as their targets, the 
statement said. The informant soon helped them get the weapons, which 
were incapable of being fired or detonated, according to the authorities."

The warning I wrote in "Portrait of the Modern Terrorist as an Idiot" is 
timely again: "Despite the initial press frenzies, the actual details of 
the cases frequently turn out to be far less damning. Too often it's 
unclear whether the defendants are actually guilty, or if the police 
created a crime where none existed before."

Actually, that whole 2007 essay is timely again.  Some things never change.

http://www.msnbc.msn.com/id/30856404/
http://www.nytimes.com/2009/05/21/nyregion/21arrests.html
http://www.schneier.com/essay-038.html
http://www.nbcnewyork.com/news/local/Accused-.html

My "Portrait of the Modern Terrorist as an Idiot"
http://www.schneier.com/essay-174.html


** *** ***** ******* *********** *************

     News



Kylin is a secure operating system from China.  Seems to be a Linux variant.
http://washingtontimes.com/news/2009/may/12/china-bolsters-for-cyber-arms-race-with-us/ 
or http://tinyurl.com/qfwjos

A great movie-plot threat: pirates in Chesapeake Bay.
http://blogs.mddailyrecord.com/ontherecord/2009/05/12/pirates-on-the-bay/ 
or http://tinyurl.com/rch6xz
Remember: if you don't like something, claim that it will enable, 
embolden, or entice terrorists.  Works every time.

Invisible ink pen:
http://www.schneier.com/blog/archives/2009/05/invisible_ink_p.html

Microsoft bans memcpy() from its code base.  Interesting discussion in 
comments about whether this helps, or is mostly cosmetic.
http://www.schneier.com/blog/archives/2009/05/microsoft_bans.html
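
The ban reportedly favors bounds-checked replacements such as memcpy_s(). 
As a rough illustration of how such a policy gets enforced -- a minimal 
sketch of my own, not Microsoft's actual SDL tooling -- here's a script 
that flags calls to banned functions anywhere in a C/C++ source tree:

import pathlib
import re
import sys

# Hypothetical ban list; the real list is longer, and the real tooling is
# integrated into the compiler and code review, not a grep-style scan.
BANNED = {"memcpy", "strcpy", "strcat", "sprintf"}
PATTERN = re.compile(r"\b(" + "|".join(sorted(BANNED)) + r")\s*\(")

def scan(root="."):
    """Return file:line hits for any call to a banned function."""
    hits = []
    for path in pathlib.Path(root).rglob("*.[ch]*"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if PATTERN.search(line):
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits

if __name__ == "__main__":
    for hit in scan(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(hit)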

Your home/work location pair can uniquely identify you.  This is very 
troubling, given the number of location-based services springing up and 
the number of databases that are collecting location data.
http://www.schneier.com/blog/archives/2009/05/on_the_anonymit.html

IEDs are now weapons of mass destruction.
http://www.schneier.com/blog/archives/2009/05/ieds_are_now_we.html

Research into the insecurity of "secret questions."
http://www.schneier.com/blog/archives/2009/05/secret_question.html

Defending against movie-plot threats with movie characters:
http://www.schneier.com/blog/archives/2009/05/defending_again.html

Fantastic automatic dice thrower, a random number generator for computer 
games.
http://www.schneier.com/blog/archives/2009/05/automatic_dice.html

Steganography using TCP retransmission.  I don't think these sorts of 
things have any large-scale applications, but they are clever.
http://arxiv.org/abs/0905.0363

What do you do if you have too many background checks to do for people's 
security clearances, and not enough time to do them?
http://www.federaltimes.com/index.php?S=4104591
It's all a matter of incentives.  The investigators were rewarded for 
completing investigations, not for doing them well.

Man held for hours by immigration officials because he had no fingerprints:
http://www.reuters.com/article/oddlyEnoughNews/idUSTRE54Q42P20090527?feedType=RSS&feedName=oddlyEnoughNews&rpc=69 
or http://tinyurl.com/l6rbyq

And in other biometric news, four states have banned smiling in driver's 
license photographs.
http://www.usatoday.com/news/nation/2009-05-25-licenses_N.htm

Research on movie-plot threats: "Emerging Threats and Security Planning: 
How Should We Decide What Hypothetical Threats to Worry About?"
http://www.rand.org/pubs/occasional_papers/OP256/
http://www.rand.org/pubs/occasional_papers/2009/RAND_OP256.pdf

Secret government communications cables buried around Washington, DC:
http://www.schneier.com/blog/archives/2009/06/secret_govermen.html

This month's movie-plot idea: arming the Boston police with 
semi-automatic rifles:
http://www.schneier.com/blog/archives/2009/06/boston_police_g.html

I don't know how I missed this great series from Slate in February. 
It's eight essays exploring why there have been no follow-on terrorist 
attacks in the U.S. since 9/11 (not counting the anthrax mailings, I 
guess).  Read the whole thing.
http://www.schneier.com/blog/archives/2009/06/why_is_terroris.html
http://slate.com/id/2213025

In May's Crypto-Gram, I blogged about the Boston police seizing a 
student's computer for, among other things, running Linux.  Earlier this 
month, the Massachusetts Supreme Court threw out the search warrant.
http://www.schneier.com/blog/archives/2009/06/update_on_compu.html

This combination door lock is very pretty.  Of course, four digits is 
too short an entry code, but I like the overall design and the automatic 
rescrambling feature.  It's just a prototype, and not even a physical 
one at that.
http://www.yankodesign.com/2009/05/29/twist-shout-about-forgotting-the-code/ 
or http://tinyurl.com/lcv4tj

Earlier this year, I blogged about a self-defense pen that is likely to 
easily pass through airport security.  On the other hand, this normal 
pen in the shape of a bullet will probably get you in trouble.
http://www.pencity.com/cgi-bin/SoftCart.exe/Fisher/375BulletBP.htm?L+scstore+zize0529+1244045830 
or http://tinyurl.com/qtv8nn
http://www.schneier.com/blog/archives/2009/03/self-defense_pe.html

Time for some more fear about terrorists using maps and images.  (I 
thought I wrote a good blog post, but Crypto-Gram is already too long 
this month.  So read it online, please.)
http://www.schneier.com/blog/archives/2009/06/fear_of_aerial.html

If you think that under-20-year-olds don't care about privacy, read this 
eloquent op-ed by two students about why CCTV cameras have no place in 
their UK school.
http://www.guardian.co.uk/commentisfree/libertycentral/2009/jun/03/cctv-classroom 
or http://tinyurl.com/pjtz2f

Here's a site that sells corrupted MS Word files.  The idea is that you 
e-mail one of the files to your professor when your homework is due, 
buying you a few hours -- or maybe days -- of extra time before your 
professor notices that it's corrupted.  On the one hand, this is clever. 
 But on the other hand, it's services like these that will force 
professors to treat corrupted attachments as work not yet turned in, and 
harm innocent homework submitters.
http://www.corrupted-files.com/Word.html

Here's how to make a corrupted pdf file for free:
http://blog.didierstevens.com/2009/06/09/quickpost-make-your-own-corrupted-pdfs-for-free/
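
For illustration, this is roughly all such a "service" has to do -- a 
minimal sketch of my own, not the method from the post above -- to mangle 
a copy of a file badly enough that Word or a PDF reader refuses to open it:

import random

def corrupt_copy(src, dst, flips=200):
    """Write a deliberately unopenable copy of src: truncate it partway
    through and flip a handful of bytes.  Illustration only."""
    data = bytearray(open(src, "rb").read())
    data = data[: len(data) // 2]                   # drop the second half
    for _ in range(min(flips, len(data))):
        data[random.randrange(len(data))] ^= 0xFF   # flip random bytes
    with open(dst, "wb") as f:
        f.write(data)

# corrupt_copy("homework.docx", "homework-final.docx")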

Teaching children to spot terrorists: you can't make this stuff up.
http://www.schneier.com/blog/archives/2009/06/teaching_first-.html

Industry differences in types of security breaches.
http://www.schneier.com/blog/archives/2009/06/industry_differ.html

Malware steals ATM data:
http://www.schneier.com/blog/archives/2009/06/malware_steals.html


** *** ***** ******* *********** *************

     Me on Full-Body Scanners in Airports



I'm very happy with this quote in a CNN.com story on "whole-body 
imaging" at airports:

"Bruce Schneier, an internationally recognized security technologist, 
said whole-body imaging technology 'works pretty well,' privacy rights 
aside. But he thinks the financial investment was a mistake. In a 
post-9/11 world, he said, he knows his position isn't 'politically 
tenable,' but he believes money would be better spent on 
intelligence-gathering and investigations.

"'It's stupid to spend money so terrorists can change plans,' he said by 
phone from Poland, where he was speaking at a conference. If terrorists 
are swayed from going through airports, they'll just target other 
locations, such as a hotel in Mumbai, India, he said.

"'We'd be much better off going after bad guys ... and back to pre-9/11 
levels of airport security,' he said. 'There's a huge "cover your ass" 
factor in politics, but unfortunately, it doesn't make us safer.'"

I've written about "cover your ass" security in the past, but it's nice 
to see it in the press.
http://edition.cnn.com/2009/TRAVEL/05/18/airport.security.body.scans/?iref=mpstoryview 
or http://tinyurl.com/le9skw

Me on CYA security:
http://www.schneier.com/blog/archives/2007/02/cya_security_1.html


** *** ***** ******* *********** *************

     Schneier News

Marcus Ranum and I did two video versions of our Face-Off column: one on 
cloud computing:
http://searchsecurity.techtarget.com/video/0,297151,sid14_gci1355568,00.html 
or http://tinyurl.com/plvkkr
And the other on who should be in charge of cyber-security:
http://searchsecurity.techtarget.com/video/0,297151,sid14_gci1355883,00.html 
or http://tinyurl.com/p9eznn

Another interview with me on cloud computing:
http://www.vnunet.com/vnunet/video/2240924/bruce-schneier-cloud-security 
or http://tinyurl.com/dlrv56


** *** ***** ******* *********** *************

     The Doghouse: Net1



From its website:  "The FTS Patent has been acclaimed by leading 
cryptographic authorities around the world as the most innovative and 
secure protocol ever invented to manage offline and online smart card 
related transactions. Please see the independent report by Bruce 
Schneider [sic] in his book entitled Applied Cryptography, 2nd Edition 
published in the late 1990s."

After I posted this on my blog, someone -- probably from the company -- 
said that it was referring to the UEPS protocol, discussed on page 589. 
 I still don't like the hyperbole and the implied endorsement in the quote.

http://www.aplitec.co.za/Products/Security.html


** *** ***** ******* *********** *************

     Cloud Computing



This year's overhyped IT concept is cloud computing. Also called 
software as a service (SaaS), cloud computing is when you run software 
over the Internet and access it via a browser. The Salesforce.com 
customer management software is an example of this. So is Google Docs. 
If you believe the hype, cloud computing is the future.

But hype aside, cloud computing is nothing new. It's the modern version 
of the timesharing model from the 1960s, which was eventually killed by 
the rise of the personal computer. It's what Hotmail and Gmail have been 
doing all these years, and it's social networking sites, remote backup 
companies, and remote email filtering companies such as MessageLabs. Any 
IT outsourcing -- network infrastructure, security monitoring, remote 
hosting -- is a form of cloud computing.

The old timesharing model arose because computers were expensive and 
hard to maintain. Modern computers and networks are drastically cheaper, 
but they're still hard to maintain. As networks have become faster, it 
is again easier to have someone else do the hard work. Computing has 
become more of a utility; users are more concerned with results than 
technical details, so the tech fades into the background.

But what about security? Isn't it more dangerous to have your email on 
Hotmail's servers, your spreadsheets on Google's, your personal 
conversations on Facebook's, and your company's sales prospects on 
salesforce.com's? Well, yes and no.

IT security is about trust. You have to trust your CPU manufacturer, 
your hardware, operating system and software vendors -- and your ISP. 
Any one of these can undermine your security: crash your systems, 
corrupt data, allow an attacker to get access to systems. We've spent 
decades dealing with worms and rootkits that target software 
vulnerabilities. We've worried about infected chips. But in the end, we 
have no choice but to blindly trust the security of the IT providers we use.

SaaS moves the trust boundary out one step further -- you now have to 
also trust your software service vendors -- but it doesn't fundamentally 
change anything. It's just another vendor we need to trust.

There is one critical difference. When a computer is within your 
network, you can protect it with other security systems such as 
firewalls and IDSs. You can build a resilient system that works even if 
the vendors you have to trust aren't as trustworthy as you'd like. 
With any outsourcing model, whether it be cloud computing or something 
else, you can't. You have to trust your outsourcer completely. You not 
only have to trust the outsourcer's security, but its reliability, its 
availability, and its business continuity.

You don't want your critical data to be on some cloud computer that 
abruptly disappears because its owner goes bankrupt. You don't want the 
company you're using to be sold to your direct competitor. You don't 
want the company to cut corners, without warning, because times are 
tight. Or raise its prices and then refuse to let you have your data 
back. These things can happen with software vendors, but the results 
aren't as drastic.

There are two different types of cloud computing customers. The first 
pays at most a nominal fee for these services -- or uses them for free in 
exchange for ads: e.g., Gmail and Facebook. These customers have no 
leverage with their outsourcers. They can lose everything. Companies like 
Google and Amazon won't spend a lot of time caring. The second type of 
customer pays considerably for these services: to Salesforce.com, 
MessageLabs, managed network companies, and so on. These customers have 
more leverage, providing they write their service contracts correctly. 
Still, nothing is guaranteed.

Trust is a concept as old as humanity, and the solutions are the same as 
they have always been. Be careful who you trust, be careful what you 
trust them with, and be careful how much you trust them. Outsourcing is 
the future of computing. Eventually we'll get this right, but you don't 
want to be a casualty along the way.

This essay originally appeared in The Guardian.
http://www.guardian.co.uk/technology/2009/jun/04/bruce-schneier-cloud-computing 
or http://tinyurl.com/op7p7k

Another opinion:
http://1raindrop.typepad.com/1_raindrop/2009/06/begin-the-begin-cloud-security.html 
or http://tinyurl.com/mnc3lb

A rebuttal:
http://www.rationalsurvivability.com/blog/?p=952
http://www.rationalsurvivability.com/blog/?p=1013
The reason I am talking so much about cloud computing is that reporters 
and interviewers keep asking me about it.  I feel kind of dragged into 
this whole thing.

At the Computers, Freedom, and Privacy conference earlier this month, 
Bob Gellman said that the nine most important words in cloud computing 
are: "terms of service," "location, location, location," and "provider, 
provider, provider" -- basically making the same point I did.  You need 
to make sure the terms of service you sign up for are ones you can live 
with.  You need to make sure the location of the provider doesn't 
subject you to any laws you can't live with.  And you need to make sure 
your provider is someone you're willing to work with.  Basically, if 
you're going to give someone else your data, you need to trust them.
http://www.worldprivacyforum.org/cloudprivacy.html

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/06/cloud_computing.html


** *** ***** ******* *********** *************

     The Second Interdisciplinary Workshop on Security and
        Human Behaviour



Last week, I was at SHB09, the Second Interdisciplinary Workshop on 
Security and Human Behaviour, at MIT.  This was a two-day gathering of 
computer security researchers, psychologists, behavioral economists, 
sociologists, philosophers, and others -- all of whom are studying the 
human side of security -- organized by Ross Anderson, Alessandro 
Acquisti, and myself.  I liveblogged the workshop; here are the talk 
summaries.  (People were invited to submit a link for themselves, and 
links to applicable things they wrote.  Those links will be included 
after each talk summary.)

The first session was about deception, moderated by David Clark.

Frank Stajano, Cambridge University, presented research with Paul 
Wilson, who films actual scams for "The Real Hustle." His point is that 
we build security systems based on our "logic," but users don't always 
follow our logic. It's fraudsters who really understand what people do, 
so we need to understand what the fraudsters understand. Things like 
distraction, greed, unknown accomplices, and social compliance are important.
http://www.cl.cam.ac.uk/~fms27/
Usability of Security Management: Defining the Permissions of Guests
http://www.cl.cam.ac.uk/~fms27/papers/2006-JohnsonSta-guests.pdf

David Livingstone Smith, University of New England, is a philosopher by 
training, and goes back to basics: "What are we talking about?" A 
theoretical definition -- "that which something has to have to fall 
under a term" -- of deception is difficult to formulate. "Cause to have a 
false belief," from the Oxford English Dictionary, is inadequate. "To 
deceive is to intentionally cause someone to have a false belief" also 
doesn't work. "Intentionally causing someone to have a false belief that 
the speaker knows to be false" still isn't good enough. The fundamental 
problem is that these are anthropocentric definitions. Deception is not 
unique to humans; it gives organisms an evolutionary edge. For example, 
the mirror orchid fools a wasp into landing on it by looking like, and 
giving off chemicals that mimic, the female wasp. This example shows that 
we need a broader definition of "purpose." His formal definition: "For 
systems A and B, A deceives B iff A possesses some character C with 
proper function F, and B possesses a mechanism C* with the proper 
function F* of producing representations, such that the proper function 
of C is to cause C* to fail to perform F* by causing C* to form false 
representations, and C does so in virtue of performing F, and B's 
falsely representing enables some feature of A to perform its proper 
function."
http://www.realhumannature.com
Less than human: self-deception in the imagining of others
http://realhumannature.com/?page_id=61
Talk on Lying at La Ciudad de Las Ideas
http://www.laciudaddeideas.com/ciudad2/play.php?vid=106
a subsequent discussion
http://www.youtube.com/watch?v=OnjpoOhwEzk
Why War?
http://realhumannature.com/?page_id=26

I spoke next, about the psychology of Conficker, how the human brain 
buys security, and why science fiction writers shouldn't be hired to 
think about terrorism risks (to be published on Wired.com this week).
http://www.schneier.com/blog/archives/2009/04/conficker.html
http://www.schneier.com/essay-232.html

Dominic Johnson, University of Edinburgh, talked about his chapter in 
the book Natural Security: A Darwinian Approach to a Dangerous World. 
Life has 3.5 billion years of experience in security innovation; let's 
look at how biology approaches security. Biomimicry, ecology, 
paleontology, animal behavior, evolutionary psychology, immunology, 
epidemiology, selection, and adaptation are all relevant. Redundancy is a 
very important survival tool for species. Here's an adaptation example: 
The 9/11 threat was real and we knew about it, but we didn't do 
anything. His thesis: Adaptation to novel security threats tends to 
occur after major disasters. There are many historical examples of this; 
Pearl Harbor, for example. Causes include sensory biases, psychological 
biases, leadership biases, organizational biases, and political biases 
-- all pushing us towards maintaining the status quo. So it's natural 
for us to poorly adapt to security threats in the modern world. A 
questioner from the audience asked whether control theory had any 
relevance to this model.
http://dominicdpjohnson.com/
Paradigm Shifts in Security Strategy
http://www.cl.cam.ac.uk/~rja14/shb09/johnsond1.pdf
Perceptions of victory and defeat
http://dominicdpjohnson.com/publications/books.html

Jeff Hancock, Cornell University, studies interpersonal deception: how 
the way we lie to each other intersects with communications 
technologies; how technologies change the way we lie; and whether 
technology can be used to detect lying. Despite new technology, people lie 
for traditional reasons. For example: on dating sites, men tend to lie 
about their height and women tend to lie about their weight. The 
recordability of the Internet also changes how we lie. The use of the 
first person singular tends to go down the more people lie. He verified 
this in many spheres, such as how people describe themselves in chat 
rooms, and true versus false statements that the Bush administration 
made about 9/11 and Iraq. The effect was more pronounced when 
administration officials were answering questions than when they were 
reading prepared remarks.
http://www.comm.cornell.edu/staff/employee/jeffrey_t_hancock.html
On Lying and Being Lied To: A Linguistic Analysis of Deception in 
Computer-Mediated Communication
http://www.cl.cam.ac.uk/~rja14/shb09/hancock1.pdf
Separating Fact From Fiction: An Examination of Deceptive 
Self-Presentation in Online Dating Profiles
http://www.cl.cam.ac.uk/~rja14/shb09/hancock2.pdf

The second session was about fraud. (These session subjects are only 
general. We tried to stick related people together, but there was the 
occasional oddball -- and scheduling constraint -- to deal with.)

Julie Downs, Carnegie Mellon University, is a psychologist who studies 
how people make decisions, and talked about phishing. To determine how 
people respond to phishing attempts -- what e-mails they open and when 
they click on links -- she watched as people interacted with their 
e-mail. She found that most people's strategies to deal with phishing 
attacks might have been effective 5-10 years ago, but are no longer 
sufficient now that phishers have adapted. She also found that educating 
people about phishing didn't make them more effective at spotting 
phishing attempts, but made them more likely to be afraid of doing 
anything online. She found this same overreaction among people who were 
recently the victims of phishing attacks, but again people were no 
better at separating real e-mail from phishing attempts. What does make a 
difference is contextual understanding: how to parse a URL, how and why 
the scams happen, what SSL does and doesn't do.
http://sds.hss.cmu.edu/src/faculty/downs.php
Behavioral Response to Phishing Risk
http://www.cl.cam.ac.uk/~rja14/shb09/downs1.pdf
Parents' vaccination comprehension and decisions
http://www.cl.cam.ac.uk/~rja14/shb09/downs2.pdf
The Psychology of Food Consumption
http://www.cl.cam.ac.uk/~rja14/shb09/downs3.pdf

Jean Camp, Indiana University, studies people taking risks online. Four 
points: 1) "people create mental models from internal narratives about 
risk," 2) "risk mitigating action is taken only if the risk is perceived 
as relevant," 3) "contextualizing risk can show risks as relevant," and 
4) "narrative can increase desire and capacity to use security tools." 
Stories matter: "people are willing to wash out their cat food cans and 
sweep up their sweet gum balls to be a good neighbor, but allow their 
computers to join zombie networks" because there's a good story in the 
former and none in the latter. She presented two experiments to 
demonstrate this. One was a video experiment watching business majors 
try to install PGP. No one was successful: there was no narrative, and 
the mixed metaphor of physical and cryptographic "key" confused people.
http://www.ljean.com/
Experimental Evaluation of Expert and Non-expert Computer Users' Mental 
Models of Security Risks
http://www.cl.cam.ac.uk/~rja14/shb08/camp.pdf

Matt Blaze, University of Pennsylvania, talked about electronic voting 
machines and fraud. He related the anecdote about actual electronic 
voting machine vote fraud in Kentucky from the second link. In the 
question session, he speculated about the difficulty of having a 
security model that would have captured the problem, and how to know 
whether that model was complete enough.
http://www.crypto.com/
Electronic vote rigging in Kentucky
http://www.crypto.com/blog/vote_fraud_in_kentucky/

Jeffrey Friedberg, Microsoft, discussed research at Microsoft around the 
Trust User Experience (TUX). He talked about the difficulty of verifying 
SSL certificates. Then he talked about how Microsoft added a "green bar" 
to signify trusted sites, and how people who learned to trust the green 
bar were fooled by "picture in picture attacks": where a hostile site 
embedded a green-bar browser window in its page. Most people don't 
understand that the information inside the browser window is arbitrary, 
but that the stuff around it is not. The user interface, user 
experience, and mental models all matter. Designing and evaluating TUX is 
hard. From the questions: training doesn't help much, because given a 
plausible story, people will do things counter to their training.
http://www.mccullagh.org/image/10d-14/jeffrey-friedberg-microsoft.html
Internet Fraud Battlefield
http://www.cl.cam.ac.uk/~rja14/shb09/friedberg.pdf
End to End Trust and the Trust User Experience
http://www.microsoft.com/mscorp/twc/endtoendtrust/blogPosting.aspx?blogSource=20090423_friedberg.xml
Testimony on "spyware"
http://www.microsoft.com/presspass/exec/friedberg/04-29spyware.mspx

Stuart Schechter, Microsoft, presented this research on secret 
questions. Basically, secret questions don't work. They're easily 
guessable based on the most common answers; friends and relatives of 
people can easily predict unique answers; and people forget their 
answers. Even worse, the more memorable the question/answers are, the 
easier they are to guess. Having people write their own questions is no 
better: "What's my blood type?" "How tall am I?"
http://www.eecs.harvard.edu/~stuart/
It's no secret
http://research.microsoft.com/pubs/79594/oakland09.pdf
The Emperor's New Security Indicators
http://usablesecurity.org/emperor/emperor.pdf
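
The statistical-guessing attack is easy to see with a back-of-the-envelope 
calculation.  Using a made-up answer distribution (not Schechter's data), 
here's how far an attacker gets by trying only the most popular answers:

from collections import Counter

# Hypothetical answers to "What's your favorite food?" across 1,000 accounts.
answers = (["pizza"] * 200 + ["chocolate"] * 120 + ["sushi"] * 80 +
           ["tacos"] * 50 + ["pasta"] * 50 +
           [f"rare-answer-{i}" for i in range(500)])

def fraction_cracked(answers, guesses=5):
    """Fraction of accounts compromised by trying the `guesses` most
    common answers against every account."""
    counts = Counter(answers)
    top = sum(n for _, n in counts.most_common(guesses))
    return top / len(answers)

print(f"{fraction_cracked(answers, 5):.0%} of accounts fall to 5 guesses")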

Tyler Moore, Harvard University, discussed his empirical studies on 
online crime and defense. Fraudsters are good at duping users, but 
they're also effective at exploiting failures among IT professionals to 
perpetuate the infrastructure necessary to carry out these exploits on a 
large scale (hosting fake web pages, sending spam, laundering the 
profits via money mules, and so on). There is widespread refusal among 
the defenders to cooperate with each other, and attackers exploit these 
limitations. We are better at removing phishing websites than we are at 
defending against the money mules. Defenders tend to fix immediate 
problems, but not underlying problems.
http://people.seas.harvard.edu/~tmoore/
The Consequences of Non-Cooperation in the Fight Against Phishing
http://people.seas.harvard.edu/~tmoore/ecrime08.pdf
Information Security Economics -- and Beyond
http://www.cl.cam.ac.uk/~rja14/Papers/econ_czech.pdf

In the discussion phase, there was a lot of talk about the relationships 
between websites, like banks, and users -- and how that affects security 
for both good and bad. Jean Camp doesn't want a relationship with her 
bank, because that unduly invests her in the bank. (Someone from the 
audience pointed out that, as a U.S. taxpayer, she is already invested 
in her bank.) Angela Sasse said that the correct metaphor is "rules of 
engagement," rather than relationships.

Session three was titled "Usability."

Andrew Patrick, NRC Canada until he was laid off four days ago, talked 
about biometric systems and human behavior. Biometrics are used 
everywhere: for gym membership, at Disneyworld, at international 
borders. The government of Canada is evaluating using iris recognition 
at a distance for events like the 2010 Olympics. There are two different 
usability issues: with respect to the end user, and with respect to the 
authenticator. People's acceptance of biometrics is very much dependent 
on the context. And of course, biometrics are not secret. Patrick 
suggested that to defend ourselves against this proliferation of 
biometric authentication, individuals should publish their biometrics. The 
rationale is that we're publishing them anyway, so we might as well do 
it knowingly.
http://andrewpatrick.ca
Fingerprint Concerns: Performance, Usability, and Acceptance of 
Fingerprint Biometric Systems
http://www.andrewpatrick.ca/essays/fingerprint-concerns-performance-usability-and-acceptance-of-fingerprint-biometric-systems

Luke Church, Cambridge University, talked about what he called 
"user-centered design." There's a economy of usability: "in order to 
make some things easier, we have to make some things harder" -- so it 
makes sense to make the commonly done things easier at the expense of 
the rarely done things. This has a lot of parallels with security. The 
result is "appliancisation" (with a prize for anyone who come up with a 
better name): the culmination of security behaviors and what the system 
can do embedded in a series of user choices. Basically, giving users 
meaningful control over their security. Luke discussed several benefits 
and problems with the approach.
http://www.lukechurch.net
SHB Position Paper
http://www.lukechurch.net/Professional/Publications/SHB-2009-06-TheUserExperienceOfComputerSecurity.pdf
Usability and the Common Criteria
http://www.lukechurch.net/Professional/Publications/WISA-2008-09-IntroducingUsabilityToTheCommonCriteria.pdf

Diana Smetters, Palo Alto Research Center, started with these premises: 
you can teach users, but you can't teach them very much, so you'd better 
carefully design systems so that you 1) minimize what they have to 
learn, 2) make it easier for them to learn it, and 3) maximize the 
benefit from what they learn. Too often, security is at odds with 
getting the job done. "As long as configuration errors (false alarms) 
are common, any technology that requires users to observe security 
indicators and react to them will fail as attacks can simply masquerade 
as errors, and users will rationally ignore them." She recommends 
meeting the user halfway by building new security models that actually 
fit the users' needs. (For example: Phishing is a mismatch problem, 
between what's in the user's head and where the URL is actually going. 
SSL doesn't work, but how should websites authenticate themselves to 
users? Her solution is protected links: a set of secure bookmarks in 
protected browsers.) She went on to describe a prototype and tests run 
with user subjects.
http://www.parc.com/about/people/176/diana-smetters.html
Breaking out of the browser to defend against phishing attacks
http://www.parc.com/publication/2068/breaking-out-of-the-browser-to-defend-against-phishing-attacks.html
Building secure mashups
http://www.parc.com/publication/2054/building-secure-mashups.html
Ad-hoc guesting: when exceptions are the rule
http://www.usenix.org/event/upsec08/tech/full_papers/dalal/dalal.pdf

Jon Callas, PGP Corporation, used the metaphor of the "security cliff": 
you have to keep climbing until you get to the top and that's hard, so 
it's easier to just stay at the bottom. He wants more of a "security 
ramp," so people can reasonably stop somewhere in the middle. His idea 
is to have a few policies -- e-mail encryption, rules about USB drives 
-- and enforce them. This works well in organizations, where IT has 
dictatorial control over user configuration. If we can't teach users 
much, we need to enforce policies on users.
http://www.pgp.com/company/management.html
Improving Message Security With a Self-Assembling PKI
http://middleware.internet2.edu/pki03/presentations/03.pdf

Rob Reeder, Microsoft, presented a possible solution to the secret 
questions problem: social authentication. The idea is to use people you 
know (trustees) to authenticate who you are, and have them attest to the 
fact that you lost your password. He went on to describe how the 
protocol works, as well as several potential attacks against the 
protocol and defenses, and experiments that tested the protocol. In the 
question session he talked about people designating themselves as 
trustees, and how that isn't really a problem.
http://www.robreeder.com/
Expanding Grids for Visualizing and Authoring Computer Security Policies
http://www.robreeder.com/pubs/xGridsCHI2008.pdf
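
A toy version of the idea -- my own sketch, not Reeder's actual protocol, 
which also has to worry about impatient users, colluding trustees, and 
attackers who simply phone your friends:

import secrets

def enroll(trustees, threshold):
    """Issue each designated trustee a one-time recovery code; `threshold`
    of them must vouch for you before the account is recovered."""
    return {name: secrets.token_hex(4) for name in trustees}, threshold

def recover(codes, threshold, presented):
    """Grant recovery if enough trustees handed the user valid codes."""
    vouches = sum(1 for name, code in presented.items()
                  if codes.get(name) == code)
    return vouches >= threshold

codes, k = enroll(["alice", "bob", "carol", "dave"], threshold=3)
print(recover(codes, k, {n: codes[n] for n in ["alice", "bob", "carol"]}))
print(recover(codes, k, {"alice": codes["alice"], "bob": "wrong"}))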

Lorrie Cranor, Carnegie Mellon University, talked about security 
warnings. The best option is to fix the hazard; the second best is to 
guard against it -- but far too often we just warn people about it. But 
since hazards are generally not very hazardous, most people just ignore 
them. "Often, software asks the user and provides little or no 
information to help user make this decision." Better is to use some sort 
of automated analysis to assist the user in responding to warnings. For 
websites, for example, the system should block sites with a high 
probability of danger, not bother users if there is a low probability of 
danger, and help the user make the decision in the grey area. She went 
on to describe a prototype and user studies done with the prototype; her 
paper will be presented at USENIX Security in August.
http://lorrie.cranor.org/
A Framework for Reasoning About the Human in the Loop
http://www.cylab.cmu.edu/default.aspx?id=2396
Timing Is Everything? The Effects of Timing and Placement of Online 
Privacy Indicators
http://www.guanotronic.com/~serge/papers/chi09a.pdf
School of Phish: A Real-World Evaluation of Anti-Phishing Training
http://www.cylab.cmu.edu/research/techreports/tr_cylab09002.html
You've Been Warned: An Empirical Study of the Effectiveness of Web 
Browser Phishing Warnings
http://www.guanotronic.com/~serge/papers/warned.pdf

Much of the discussion centered on how bad the problem really is, and 
how much security is good enough. The group also talked about economic 
incentives companies have to either fix or ignore security problems, and 
whether market approaches (or, as Jean Camp called it, "the happy 
Libertarian market pony") are sufficient. Some companies have incentives 
to convince users to do the wrong thing, or at the very least to do 
nothing. For example, social networking sites are more valuable if 
people share their information widely.

Further discussion was about whitelisting, and whether it worked or not. 
There's the problem of the bad guys getting on the whitelist, and the 
risk that organizations like the RIAA will use the whitelist to enforce 
copyright, or that large banks will use the whitelist as a tool to block 
smaller start-up banks. Another problem is that the user might not 
understand what a whitelist signifies.

Dave Clark from the audience: "It's not hard to put a seat belt on, and 
if you need a lesson, take a plane."  Kind of a one-note session. We 
definitely need to invite more psych people next time.

David Livingstone Smith moderated the fourth session, about (more or 
less) methodology.

Angela Sasse, University College London, has been working on usable 
security for over a dozen years. As part of a project called "Trust 
Economics," she looked at whether people comply with security policies 
and why they either do or do not. She found that there is a limit to the 
amount of effort people will make to comply -- this is less actual cost 
and more perceived cost. Strict and simple policies will be complied 
with more than permissive but complex policies. Compliance detection, 
and reward or punishment, also affect compliance. People justify 
noncompliance by "frequently made excuses."
http://www.cs.ucl.ac.uk/staff/a.sasse/
The Compliance Budget: Managing Security Behaviour in Organisations
http://hornbeam.cs.ucl.ac.uk/hcs/publications/Beautement+Sasse+Wonham_The%20Compliance%20Budget_Managing%20Security%20Behaviour%20in%20Organisations_NSPW2008.pdf
Human Vulnerabilities in Security Systems
http://www.ktn.qinetiq-tim.net/files/Public/whitepapers/HFWG%20White%20Paperfinal.pdf

Bashar Nuseibeh, Open University, talked about mobile phone security; 
specifically, Facebook privacy on mobile phones. He did something clever 
in his experiments. Because he wasn't able to interview people at the 
moment they did something -- he worked with mobile users -- he asked 
them to provide a "memory phrase" that allowed him to effectively 
conduct detailed interviews at a later time. This worked very well, and 
resulted in all sorts of information about why people made privacy 
decisions at that earlier time.
http://mcs.open.ac.uk/ban25/
A Multi-Pronged Empirical Approach to Mobile Privacy Investigation
http://mcs.open.ac.uk/ban25/papers/chi2008-Mancini-et-al.pdf
Security Requirements Engineering: A Framework for Representation and 
Analysis
http://mcs.open.ac.uk/ban25/papers/Haley-TSE-04359475-for_web.pdf

James Pita, University of Southern California, studies security 
personnel who have to guard a physical location. In his analysis, there 
are limited resources -- guards, cameras, etc. -- and a set of locations 
that need to be guarded. An example would be the Los Angeles airport, 
where a finite number of K-9 units need to guard eight terminals. His 
model uses a Stackelberg game to minimize predictability (otherwise, the 
adversary will learn it and exploit it) while maximizing security. There 
are complications -- observational uncertainty and bounded rationality on 
the part of the attackers -- which he tried to capture in his model.
http://teamcore.usc.edu/pita/
Deployed ARMOR Protection: The Application of a Game Theoretic Model for 
Security at the Los Angeles International Airport
http://teamcore.usc.edu/pita/publications/2008/AAMASind2008Final.pdf
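
The underlying game is easy to sketch.  With hypothetical payoffs and a 
brute-force search standing in for the real ARMOR solver, the defender 
commits to coverage probabilities, the attacker observes them and hits 
the most exposed target, and the defender picks the randomization that 
minimizes that worst case:

import itertools

# Hypothetical damage if a target is attacked while uncovered.
damage = {"T1": 10, "T2": 6, "T3": 3, "T4": 1}
UNITS = 2          # e.g., two K-9 units covering four terminals

def best_schedule(damage, units, granularity=10):
    """Toy Stackelberg security game: search coarse coverage allocations
    and keep the one with the smallest attacker best-response payoff."""
    targets = list(damage)
    best = None
    for alloc in itertools.product(range(granularity + 1), repeat=len(targets)):
        if sum(alloc) != units * granularity:
            continue
        probs = [a / granularity for a in alloc]
        # Attacker's best option: highest damage weighted by chance of no coverage.
        exposure = max(d * (1 - p) for d, p in zip(damage.values(), probs))
        if best is None or exposure < best[0]:
            best = (exposure, dict(zip(targets, probs)))
    return best

print(best_schedule(damage, UNITS))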

Markus Jakobsson, Palo Alto Research Center, pointed out that auto 
insurers ask people if they smoke in order to get a feeling for whether 
they engage in high-risk behaviors. In his experiment, he selected 100 
people who were victims of online fraud and 100 people who were not. 
He then asked them to complete a survey about different physical risks 
such as mountain climbing and parachute jumping, financial risks such as 
buying stocks and real estate, and Internet risks such as visiting porn 
sites and using public wi-fi networks. He found significant correlation 
between different risks, but I didn't see an overall pattern emerge. And 
in the discussion phase, several people had questions about the data. 
More analysis, and probably more data, is required. To be fair, he was 
still in the middle of his analysis.
http://www.informatics.indiana.edu/markus/
Male, late with your credit card payment, and like to speed? You will be 
phished!
http://www.cl.cam.ac.uk/~rja14/shb09/jakobsson-shb09.pdf
Social Phishing
http://www.indiana.edu/~phishing/social-network-experiment/phishing-preprint.pdf
Love and Authentication
http://www.ravenwhite.com/files/chi08JSWY.pdf
Quantifying the Security of Preference-Based Authentication
http://www.ravenwhite.com/files/quantifying.pdf

Rachel Greenstadt, Drexel University, discussed ways in which humans and 
machines can collaborate in making security decisions. These decisions 
are hard for several reasons: they are context dependent, 
require specialized knowledge, are dynamic, and require complex risk 
analysis. And humans and machines are good at different sorts of tasks. 
Machine-style authentication: This guy I'm standing next to knows Jake's 
private key, so he must be Jake. Human-style authentication: This guy 
I'm standing next to looks like Jake and sounds like Jake, so he must be 
Jake. The trick is to design systems that get the best of these two 
authentication styles and not the worst. She described two experiments 
examining two decisions: should I log into this website (the phishing 
problem), and should I publish this anonymous essay or will my 
linguistic style betray me?
http://www.cs.drexel.edu/~greenie
Practical Attacks Against Authorship Recognition Techniques (pre-print)
http://www.cs.drexel.edu/~greenie/brennan_paper.pdf
Reinterpreting the Disclosure Debate for Web Infections
http://weis2008.econinfosec.org/papers/Greenstadt.pdf
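
A crude sketch of the "will my style betray me?" question, using 
function-word frequencies and cosine similarity -- standard beginner 
stylometry, not Greenstadt's actual feature set or classifier, and with 
made-up corpus text:

import math
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "is", "was", "i"]

def style_vector(text):
    """Relative frequencies of common function words, a classic crude
    stylometric fingerprint."""
    words = Counter(text.lower().split())
    total = max(sum(words.values()), 1)
    return [words[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical corpus: match an anonymous essay to the closest known author.
known = {"author_a": "it is the case that the law was of interest to the town",
         "author_b": "i was in town and i was sure that i saw it happen"}
anonymous = "i was certain that i saw the dog and it was in the yard"
scores = {a: cosine(style_vector(t), style_vector(anonymous))
          for a, t in known.items()}
print(max(scores, key=scores.get), scores)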

Mike Roe, Microsoft, talked about crime in online games, particularly in 
Second Life and Metaplace. There are four classes of people in online 
games: explorers, socializers, achievers, and griefers. Griefers try to 
annoy socializers in social worlds like Second Life, or annoy achievers 
in competitive worlds like World of Warcraft. Crime is not necessarily 
economic; criminals trying to steal money is much less of a problem in 
these games than people just trying to be annoying. In the question 
session, Dave Clark said that griefers are a constant, but economic 
fraud grows over time. I responded that the two types of attackers are 
different people, with different personality profiles. I also pointed 
out that there is another kind of attacker: achievers who use illegal 
mechanisms to assist themselves.
http://research.microsoft.com/users/mroe/

In the discussion, Peter Neumann pointed out that safety is an emergent 
property, and requires security, reliability, and survivability. Others 
weren't so sure.

The first session of the second day was "Foundations," which is kind of 
a catch-all for a variety of things that didn't really fit anywhere 
else. Rachel Greenstadt moderated.

Terence Taylor, International Council for the Life Sciences, talked 
about the lessons evolution teaches about living with risk. Successful 
species didn't survive by eliminating the risks of their environment, 
they survived by adaptation. Adaptation isn't always what you think. For 
example, you could view the collapse of the Soviet Union as a failure to 
adapt, but you could also view it as successful adaptation. Risk is 
good. Risk is essential for the survival of a society, because 
risk-takers are the drivers of change. In the discussion phase, John 
Mueller pointed out a key difference between human and biological 
systems: humans tend to respond dramatically to anomalous events (the 
anthrax attacks), while biological systems respond to sustained change. 
And David Livingstone Smith asked about the difference between 
biological adaptation that affects the reproductive success of an 
organism's genes, even at the expense of the organism, and security 
adaptation. (I recommend the book Taylor co-edited: Natural Security: A 
Darwinian Approach to a Dangerous World.)
http://www.iclscharter.org/people.html
Darwinian Security
http://www.darwiniansecurity.org
Natural Security
http://www.youtube.com/watch?v=job2avPAbgU

Andrew Odlyzko, University of Minnesota, discussed human-space vs. 
cyberspace. People cannot build secure systems -- we know that -- but 
people also cannot live with secure systems. We require a certain amount 
of flexibility in our systems. And finally, people don't need secure 
systems. We survive with an astounding amount of insecurity in our 
world. The problem with cyberspace is that it was originally conceived 
as separate from the physical world, and that it could correct for the 
inadequacies of the physical world. Really, the two are intertwined, and 
human space more often corrects for the inadequacies of cyberspace. 
Lessons: build messy systems, not clean ones; create a web of ties to 
other systems; create permanent records.
http://www.dtc.umn.edu/~odlyzko/
Network Neutrality, Search Neutrality, and the Never-Ending Conflict 
Between Efficiency and Fairness in Markets
http://www.cl.cam.ac.uk/~rja14/shb09/odlyzko.pdf
Economics, psychology, and sociology of security
http://www.dtc.umn.edu/~odlyzko/doc/econ.psych.security.pdf

danah boyd, Microsoft Research, does ethnographic studies of teens in 
cyberspace. Teens tend not to lie to their friends in cyberspace, but 
they lie to the system. From an early age, they've been taught that 
they need to lie online to be safe. Teens regularly share their 
passwords: with their parents when forced, or with their best friend or 
significant other. This is a way of demonstrating trust. It's part of 
the social protocol for this generation. In general, teens don't use 
social media in the same way as adults do. And when they grow up, they 
won't use social media in the same way as today's adults do. Teens view 
privacy in terms of control, and take their cues about privacy from 
celebrities and how they use social media. And their sense of privacy is 
much more nuanced and complicated. In the discussion phase, danah wasn't 
sure whether the younger generation would be more or less susceptible to 
Internet scams than the rest of us -- they're not nearly as technically 
savvy as we might think they are. "The only thing that saves teenagers 
is fear of their parents"; they try to lock them out, and lock others 
out in the process. Socio-economic status matters a lot, in ways that 
she is still trying to figure out. There are three different types of 
social networks: personal networks, articulated networks, and behavioral 
networks, and they're different.
http://www.danah.org
Taken Out of Context -- American Teen Sociality in Networked Publics
http://www.danah.org/papers/TakenOutOfContext.pdf

Mark Levine, Lancaster University. He collects data from UK CCTV 
cameras, searches it for aggressive behavior, and studies when and how 
bystanders either help escalate or de-escalate the situations. Results: 
as groups get bigger, there is no increase in anti-social acts and a 
significant increase in pro-social acts. He has much more analysis and 
results, too complicated to summarize here. One key finding: when a 
third party intervenes in an aggressive interaction, it is much more 
likely to de-escalate. Basically, groups can act against violence. "When 
it comes to violence (and security), group processes are part of the 
solution -- not part of the problem?"
http://www.psych.lancs.ac.uk/people/MarkLevine.html
The Kindness of Crowds
http://www.economist.com/science/displaystory.cfm?story_id=13176759
Intra-group Regulation of Violence: Bystanders and the (De)-escalation 
of Violence
http://www.cl.cam.ac.uk/~rja14/shb09/levine1.pdf

Jeff MacKie-Mason, University of Michigan, is an economist: "Security 
problems are incentive problems." He discussed motivation, and how to 
design systems to take motivation into account. Humans are smart 
devices; they can't be programmed, but they can be influenced through 
the sciences of motivational behavior: microeconomics, game theory, 
social psychology, psychodynamics, and personality psychology. He gave a 
couple of general examples of how these theories can inform security 
system design.
http://jeff-mason.com
Humans are smart devices, but not programmable
http://www.cl.cam.ac.uk/~rja14/shb09/mackie-mason.pdf
Security when people matter
http://hdl.handle.net/2027.42/55773
A Social Mechanism for Supporting Home Computer Security
http://hdl.handle.net/2027.42/63006

Joe Bonneau, Cambridge University, talked about social networks like 
Facebook, and privacy. People misunderstand why privacy and security are 
important in social networking sites like Facebook. People underestimate 
what Facebook really is; it is effectively a reimplementation of the 
entire Internet. "Everything on the Internet is becoming social," and 
that makes security different. Phishing is different, 419-style scams 
are different. Social context makes some scams easier; social networks 
are fun, noisy, and unpredictable. "People use social networking systems 
with their brain turned off." But social context can be used to spot 
frauds and anomalies, and can be used to establish trust.
http://www.cl.cam.ac.uk/~jcb82/

Session Six -- "Terror" -- chaired by Stuart Schechter.

Bill Burns, Decision Research, studies social reaction to risk. He 
discussed his theoretical model of how people react to fear events, and 
data from the 9/11 attacks, the 7/7 bombings in the UK, and the 2008 
financial collapse. Basically, we can't remain fearful. No matter what 
happens, fear spikes immediately after the event and recovers within 45 
or so days. He believes that the greatest mistake we made after 9/11 was 
labeling the event as terrorism instead of an international crime.
http://www.decisionresearch.org/people/burns/
The Diffusion of Fear: Modeling Community Response to a Terrorist Strike
http://www.cl.cam.ac.uk/~rja14/shb08/burns.pdf

Chris Cocking, London Metropolitan University, looks at the group 
behavior of people responding to emergencies. Traditionally, most 
emergency planning is based on the panic model: people in crowds are 
prone to irrational behavior and panic. There's also a social attachment 
model that predicts that social norms don't break down in groups. He 
prefers a self-categorization approach: disasters create a common 
identity, which results in orderly and altruistic behavior among 
strangers. The greater the threat, the greater the common identity, and 
spontaneous resilience can occur. He displayed a photograph of "panic" 
in New York on 9/11 and showed how it wasn't panic at all. Panic seems 
to be more a myth than a reality. This has policy implications during an 
event: provide people with information, and people are more likely to 
underreact than overreact. If there is overreaction, it's because people 
are acting as individuals rather than as groups, so those in authority 
should encourage a sense of collective identity. "Crowds can be part of 
the solution rather than part of the problem."
http://news.bbc.co.uk/1/hi/uk/4702659.stm
Effects of social identity on responses to emergency mass evacuation
http://www.sussex.ac.uk/affiliates/panic/

Richard John, University of Southern California, talked about the 
process of social amplification of risk (with respect to terrorism). 
Events result in relatively small losses; it's the changes in behavior 
following an event that result in much greater losses. There's a dynamic 
of risk perception, and it's very contextual. He uses vignettes to study 
how risk perception changes over time, and discussed some of the studies 
he's conducting and ideas for future studies.
http://www.usc.edu/schools/college/psyc/people/faculty1003386.html
Decision Analysis by Proxy for the Rational Terrorist
http://www.cl.cam.ac.uk/~rja14/shb09/john1.pdf

Mark Stewart, University of Newcastle, Australia, examines 
infrastructure security and whether the costs exceed the benefits. He 
talked about cost/benefit trade-off, and how to apply probabilistic 
terrorism risk assessment; then, he tried to apply this model to the 
U.S. Federal Air Marshal Service. His result: they're not worth it. You 
can quibble with his data, but the real value is a transparent process. 
During the discussion, I said that it is important to realize that risks 
can't be taken in isolation, that anyone making a security trade-off is 
balancing several risks: terrorism risks, political risks, the personal 
risks to his career, etc.
http://www.newcastle.edu.au/research-centre/cipar/
A risk and cost-benefit assessment of United States aviation security 
measures
http://polisci.osu.edu/faculty/jmueller/STEWJTS.PDF
Risk and Cost-Benefit Assessment of Counter-Terrorism Protective 
Measures to Infrastructure
http://nova.newcastle.edu.au/vital/access/manager/Repository/uon:3125
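
To make the shape of such a calculation concrete, here is a minimal 
sketch of a probabilistic cost-benefit comparison. It is not Stewart's 
actual model, and every number in it is an invented placeholder; the 
point is only that the expected losses averted by a measure can be 
compared directly with what the measure costs.

# A minimal sketch of a probabilistic cost-benefit comparison for a
# counterterrorism measure.  This is NOT Stewart's model; the structure
# is generic and every number below is an invented placeholder.

def expected_benefit(attack_probability, losses_per_attack, risk_reduction):
    """Expected annual losses averted by the measure."""
    return attack_probability * losses_per_attack * risk_reduction

# Hypothetical inputs (placeholders, not real data):
p_attack = 0.01              # annual probability of the attack in question
losses = 5_000_000_000       # dollar losses from a successful attack
reduction = 0.05             # fraction of that risk the measure removes
annual_cost = 1_000_000_000  # annual dollar cost of the measure

benefit = expected_benefit(p_attack, losses, reduction)
print(f"Expected annual benefit: ${benefit:,.0f}")
print(f"Benefit/cost ratio: {benefit / annual_cost:.3f}")  # < 1: fails the test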

John Adams, University College London, applies his risk thermostat model 
to terrorism. He presented a series of amusing photographs of 
overreactions to risk, most of them not really about risk aversion but 
more about liability aversion. He talked about bureaucratic paranoia, as 
well as bureaucratic incitements to paranoia, and how this is beginning 
to backfire. People treat risks differently, depending on whether they 
are voluntary, impersonal, or imposed, and whether people have total 
control, diminished control, or no control.
http://john-adams.co.uk/about/
Deus e Brasileiro?
http://john-adams.co.uk/2008/12/31/deus-e-brasileiro/
Can Science Beat Terrorism?
http://john-adams.co.uk/2009/03/06/the-world-under-assault-can-science-beat-terrorism/
Bicycle bombs: a further inquiry
http://john-adams.co.uk/2009/01/16/bicycle-bombs-a-further-enquiry-and-a-new-theory/

Dan Gardner, Ottawa Citizen, talked about how the media covers risks, 
threats, attacks, etc. He talked about the various ways the media screws 
up, all of which were familiar to everyone. His thesis is not that the 
media gets things wrong in order to increase readership/viewership and 
therefore profits, but that the media gets things wrong because 
reporters are human. Bad news bias is not a result of the media hyping 
bad news, but the natural human tendency to remember the bad more than 
the good. The evening news is centered around stories because people -- 
including reporters -- respond to stories, and stories with novelty, 
emotion, and drama are better stories.
http://www.amazon.com/Science-Fear-Shouldnt-Ourselves-Greater/dp/0525950621

Some of the discussion was about the nature of panic: whether and where 
it exists, and what it looks like. Someone from the audience questioned 
whether panic was related to proximity to the event; someone else 
pointed out that people very close to the 7/7 bombings took pictures and 
made phone calls -- and that there was no evidence of panic. Also, on 
9/11 pretty much everyone below where the airplanes struck the World 
Trade Center got out safely, while everyone above couldn't get out and 
died. Angela Sasse pointed out that the previous terrorist attack 
against the World Trade Center, and the changes made in evacuation 
procedures afterwards, contributed to the lack of panic on 9/11. Bill 
Burns said that the purest form of panic is a drowning person. Jean Camp 
asked whether the recent attacks against women's health providers should 
be classified as terrorism, or whether we are better off framing it as 
crime. There was also talk about sky marshals and their effectiveness. I 
said that it isn't sky marshals that are a deterrent, but the idea of 
sky marshals. Terence Taylor said that increasing uncertainty on the 
part of the terrorists is, in itself, a security measure. There was also 
a discussion about how risk-averse terrorists are; they seem to want to 
believe they have an 80% or 90% chance of success before they will 
launch an attack.

The penultimate session of the conference was "Privacy," moderated by 
Tyler Moore.

Alessandro Acquisti, Carnegie Mellon University, presented research on 
how people value their privacy. He started by listing a variety of 
cognitive biases that affect privacy decisions: illusion of control, 
overconfidence, optimism bias, endowment effect, and so on. He discussed 
two experiments. The first demonstrated a "herding effect": if a subject 
believes that others reveal sensitive behavior, the subject is more 
likely to also reveal sensitive behavior. The second examined the "frog 
effect": do privacy intrusions alert or desensitize people to revealing 
personal information? What he found is that people tend to set their 
privacy level at the beginning of a survey, and don't respond well to 
being asked easy questions at first and then sensitive questions at the 
end. In the discussion, Joe Bonneau asked him about the notion that 
people's privacy protections tend to ratchet up over time; he didn't 
have conclusive evidence, but gave several possible explanations for the 
phenomenon.
http://www.heinz.cmu.edu/~acquisti/
What Can Behavioral Economics Teach Us About Privacy?
http://www.heinz.cmu.edu/~acquisti/papers/Acquisti-Grossklags-Chapter-Etrics.pdf
Privacy in Electronic Commerce and the Economics of Immediate Gratification
http://www.heinz.cmu.edu/~acquisti/papers/privacy-gratification.pdf

Adam Joinson, University of Bath, also studies how people value their 
privacy. He talked about expressive privacy -- privacy that allows 
people to express themselves and form interpersonal relationships. His 
research showed that differences between how people use Facebook in 
different countries depend on how much people trust Facebook as a 
company, rather than how much people trust other Facebook users. Another 
study looked at posts from Secret Tweet and Twitter. He found 16 markers 
that allowed him to automatically determine which tweets contain 
sensitive personal information and which do not, with high probability. 
Then he tried to determine if people with large Twitter followings post 
fewer secrets than people who are only twittering to a few people. He 
found absolutely no difference.
http://www.joinson.com/
Privacy, Trust and Self-Disclosure Online
http://people.bath.ac.uk/aj266/pubs_pdf/joinson_et_al_HCI_final.pdf
Privacy concerns and privacy actions
http://people.bath.ac.uk/aj266/pubs_pdf/ijhcs.pdf
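
As a rough illustration of what marker-based detection of sensitive 
posts might look like -- Joinson's actual 16 markers aren't in my notes, 
so the markers below are entirely hypothetical -- consider this sketch:

# A hypothetical sketch of marker-based detection of sensitive posts.
# These regular expressions are invented examples, NOT Joinson's markers.

import re

HYPOTHETICAL_MARKERS = [
    r"\bdon'?t tell\b",
    r"\bsecret\b",
    r"\bconfess(ion)?\b",
    r"\bmy (boss|doctor|therapist)\b",
    r"\b(cheated|fired|diagnosed)\b",
]

def sensitivity_score(post):
    """Count how many markers the post matches."""
    text = post.lower()
    return sum(bool(re.search(marker, text)) for marker in HYPOTHETICAL_MARKERS)

def looks_sensitive(post, threshold=2):
    return sensitivity_score(post) >= threshold

print(looks_sensitive("don't tell anyone, but I got fired today"))  # True
print(looks_sensitive("lovely weather in Bath this morning"))       # False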

Peter Neumann, SRI, talked about lack of medical privacy (too many 
people have access to your data), about voting (the privacy problem 
makes the voting problem a lot harder, and the end-to-end voting 
security/privacy problem is much harder than just securing voting 
machines), and privacy in China (the government is requiring all 
computers sold there to come with software that allows the government 
to eavesdrop on the users). Any would-be solution needs to reflect the 
ubiquity of the threat. When we design systems, we need to anticipate 
what the privacy problems will be. Privacy problems are everywhere you 
look, and ordinary people have no idea of the depth of the problem.
http://www.csl.sri.com/users/neumann/
Holistic systems
http://www.csl.sri.com/neumann/holistic.pdf
Risks
http://www.csl.sri.com/users/neumann/#3
Identity and Trust in Context
http://www.csl.sri.com/neumann/idtrust09+x4.pdf

Eric Johnson, Dartmouth College, studies the information access problem 
from a business perspective. He's been doing field studies in companies 
like retail banks and investment banks, and found that role-based access 
control fails because companies can't determine who has what role. Even 
worse, roles change quickly, especially in large complex organizations. 
For example, one business group of 3000 people experiences 1000 role 
changes within three months. The result is that organizations do access 
control badly, either over-entitling or under-entitling people. But 
since getting the job done is the most important thing, organizations 
tend to over-entitle: give people more access than they need. His 
current work is to find the right set of incentives and controls to 
set access more appropriately. The challenge is to do this without 
making people risk-averse. In the discussion, he agreed that a perfect 
access control 
system is not possible, and that organizations should probably allow a 
certain amount of access control violations -- similar to the idea of 
posting a 55 mph speed limit but not ticketing people unless they go 
over 70 mph.
http://mba.tuck.dartmouth.edu/pages/faculty/eric.johnson/
Access Flexibility with Escalation and Audit
http://mba.tuck.dartmouth.edu/digital/Research/ResearchProjects/wise_v1.pdf
Security through Information Risk Management
http://mba.tuck.dartmouth.edu/digital/Research/ResearchProjects/JohnsonRiskManagement_Finald.pdf
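
To illustrate the speed-limit idea in code, here is a minimal sketch of 
"escalation and audit" access control -- not Johnson's system; the 
entitlements and threshold are hypothetical. Requests outside a user's 
entitlements are granted but audited, and flagged for review only past 
a tolerance threshold.

# A minimal sketch of "escalation and audit" access control -- the
# 55-mph-limit / 70-mph-ticket idea.  Not Johnson's system; the roles,
# entitlements, and threshold below are hypothetical.

import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

ENTITLEMENTS = {"alice": {"accounts:read"}}  # hypothetical entitlement data
TOLERATED_ESCALATIONS = 3                    # placeholder review threshold

escalation_counts = {}

def request_access(user, permission):
    """Grant access; audit (and eventually flag) out-of-role requests."""
    if permission in ENTITLEMENTS.get(user, set()):
        return True  # within entitlement: allow silently
    # Outside entitlement: allow anyway, but record the escalation.
    escalation_counts[user] = escalation_counts.get(user, 0) + 1
    logging.info("AUDIT: %s escalated to %s (count=%d)",
                 user, permission, escalation_counts[user])
    if escalation_counts[user] > TOLERATED_ESCALATIONS:
        logging.warning("REVIEW: %s exceeded the escalation tolerance", user)
    return True

request_access("alice", "accounts:read")   # in-role: no audit record
request_access("alice", "payments:write")  # out-of-role: granted and audited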

Christine Jolls, Yale Law School, made the point that people regularly 
share their most private information with their intimates -- so privacy 
is not about secrecy, it's more about control. There are moments when 
people make pretty big privacy decisions. For example, they grant 
employers the rights to monitor their e-mail, or test their urine 
without notice. In general, courts hold that a blanket signing away of 
privacy rights -- "you can test my urine on any day in the future" -- is 
not valid, but an immediate signing away of privacy rights -- "you can 
test my urine today" -- is. Jolls believes that this is 
reasonable for several reasons, such as optimism bias and an overfocus 
on the present at the expense of the future. Without realizing it, the 
courts have implemented the system that behavioral economics would find 
optimal. During the discussion, she talked about how coercion figures 
into this; the U.S. legal system tends not to be concerned with it.
http://www.law.yale.edu/faculty/CJolls.htm
Rationality and Consent in Privacy Law
http://www.cl.cam.ac.uk/~rja14/shb09/jolls1.pdf
Employee Privacy
http://www.cl.cam.ac.uk/~rja14/shb09/jolls2.pdf

Andrew Adams, University of Reading, also looks at attitudes toward 
privacy on social networking services. His results are preliminary, and 
based on 
interviews with university students in Canada, Japan, and the UK, and 
are very concordant with what danah boyd and Joe Bonneau said earlier. 
From the UK: People join social networking sites to increase their 
level of interaction with people they already know in real life. 
Revealing personal information is okay, but revealing too much is bad. 
Even more interestingly, it's not okay to reveal more about others than 
they reveal themselves. From Japan: People are more open to making 
friends online. There's more anonymity. It's not okay to reveal 
information about others, but "the fault of this lies as much with the 
person whose data was revealed in not choosing friends wisely." This 
victim responsibility is a common theme with other privacy and security 
elements in Japan. Data from Canada is still being compiled.
http://www.personal.rdg.ac.uk/~sis00aaa/
Regulating CCTV
http://deposit.depot.edina.ac.uk/119/

Great phrase: the "laundry belt" -- close enough for students to go home 
on weekends with their laundry, but far enough away so they don't feel 
as if their parents are looking over their shoulder -- typically two 
hours by public transportation (in the UK).

The eighth, and final, session of SHB09 was optimistically titled 
"How Do We Fix the World?" I moderated, which meant that my liveblogging 
was more spotty, especially in the discussion section.

David Mandel, Defense Research and Development Canada, is part of the 
Thinking, Risk, and Intelligence Group at DRDC Toronto. His first 
observation: "Be wary of purported world-fixers." His second 
observation: when you claim that something is broken, it is important to 
specify the respects in which it's broken and what fixed looks like. His 
third observation: it is also important to analyze the consequences of 
any potential fix. An analysis of the way things are is perceptually 
based, but an analysis of the way things should be is value-based. He 
also presented data showing that predictions made by intelligence 
analysts (at least in one Canadian organization) were pretty good.
http://mandel.socialpsychology.org/
Applied Behavioral Science in Support of Intelligence Analysis
http://www.cl.cam.ac.uk/~rja14/shb09/mandel.pdf
Radicalization: What does it mean?
http://individual.utoronto.ca/mandel/Mandel-radicalization.pdf
The Role of Instigators in Radicalization to Violent Extremism
http://individual.utoronto.ca/mandel/NATO_HFM140_Instigators_Mandel.pdf

Ross Anderson, Cambridge University, asked "Where's the equilibrium?" 
Both privacy and security are moving targets, but he expects that 
someday soon there will be a societal equilibrium. Incentives to price 
discriminate go up, and the cost to do so goes down. He gave several 
examples of database systems that reached very different equilibrium 
points, depending on corporate lobbying, political realities, public 
outrage, etc. He believes that privacy will be regulated, the only 
question being when and how. "Where will the privacy boundary end up, 
and why? How can we nudge it one way or another?"
http://www.cl.cam.ac.uk/~rja14/
Database State
http://www.cl.cam.ac.uk/~rja14/Papers/database-state.pdf
book chapters on psychology and terror
http://www.cl.cam.ac.uk/~rja14/Papers/SEv2-c02.pdf
http://www.cl.cam.ac.uk/~rja14/Papers/SEv2-c24.pdf

Alma Whitten, Google, presented a set of ideals about privacy (very 
European in flavor) and some of the engineering challenges they present. 
"Engineering challenge #1: How to support access and control to personal 
data that isn't authenticated? Engineering challenge #2: How to inform 
users about both authenticated and unauthenticated data? Engineering 
challenge #3: How to balance giving users control over data collection 
versus detecting and stopping abuse? Engineering challenge #4: How to 
give users fine-grained control over their data without overwhelming 
them with options? Engineering challenge #5: How to link sequential 
actions while preventing them from being linkable to a person? 
Engineering challenge #6: How to make the benefits of aggregate data 
analysis apparent to users? Engineering challenge #7: How to avoid or 
detect inadvertent recording of data that can be linked to an 
individual?" (Note that Alma requested not to be recorded.)
http://gaudior.net/alma/
Why Johnny can't encrypt: A usability evaluation of PGP 5.0
http://gaudior.net/alma/johnny.pdf
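
Challenge #5 is the most amenable to a concrete illustration. One 
generic technique -- a sketch of a standard approach, not a description 
of anything Google does -- is to link a user's sequential actions under 
a pseudonym derived from a keyed hash, and to rotate the key so the 
linkage cannot be extended across periods or tied back to the person.

# A sketch of one way to approach challenge #5: link a user's sequential
# actions under a keyed-hash pseudonym, and rotate the key so the linkage
# cannot be extended across periods.  An illustration of a generic
# technique, not a description of Google's practice.

import hashlib
import hmac
import os

class Pseudonymizer:
    def __init__(self):
        self.key = os.urandom(32)  # secret key for the current period

    def rotate(self):
        """Discard the old key; earlier pseudonyms become unlinkable."""
        self.key = os.urandom(32)

    def pseudonym(self, user_id):
        mac = hmac.new(self.key, user_id.encode(), hashlib.sha256)
        return mac.hexdigest()[:16]

p = Pseudonymizer()
a = p.pseudonym("alice@example.com")  # actions in the same period...
b = p.pseudonym("alice@example.com")  # ...share a pseudonym and can be linked
p.rotate()
c = p.pseudonym("alice@example.com")  # after rotation, the link is broken
print(a == b, a == c)                 # True False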

John Mueller, Ohio State University, talked about terrorism and the 
Department of Homeland Security. Terrorism isn't a threat; it's a 
problem and a concern, certainly, but the word "threat" is too strong. 
Al Qaeda isn't a threat, even though it is the most serious potential 
attacker against the U.S. and Western Europe. And terrorists are 
overwhelmingly stupid. Meanwhile, the terrorism issue "has become a 
self-licking ice cream cone." In other words, it's now an 
ever-perpetuating government bureaucracy. There is a virtually infinite 
number of targets; the odds of any one target being attacked are 
effectively zero; terrorists pick targets largely at random; if you 
protect one target, it makes other targets less safe; most targets are 
vulnerable in the physical sense, but invulnerable in the sense that 
they can be rebuilt relatively cheaply (even something like the 
Pentagon); some targets simply can't be protected; if you're going to 
protect some targets, you need to determine if they should really be 
protected. (I recommend his book, Overblown.)
http://psweb.sbs.ohio-state.edu/faculty/jmueller/
Reacting to Terrorism: Probabilities, Consequences, and the Persistence 
of Fear
http://psweb.sbs.ohio-state.edu/faculty/jmueller/ISA2007T.PDF
Evaluating Measures to Protect the Homeland from Terrorism
http://psweb.sbs.ohio-state.edu/faculty/jmueller/ISA9.PDF
Terrorphobia: Our False Sense of Insecurity
http://www.the-american-interest.com/ai2/article.cfm?Id=418&MId=19

Adam Shostack, Microsoft, pointed out that even the problem of figuring 
out what part of the problem to work on first is difficult. One of the 
issues is shame. We don't want to talk about what's wrong, so we can't 
use that information to determine where we want to go. We make excuses 
-- customers will flee, people will sue, stock prices will go down -- 
even though those excuses have been demonstrated to be false.
http://www.homeport.org/~adam/
http://newschoolsecurity.com/

During the discussion, there was a lot of talk about the choice between 
informing users and bombarding them with information they can't 
understand. And lots more that I couldn't transcribe.

And that's it. SHB09 was a fantastic workshop, filled with interesting 
people and interesting discussion. Next year in the other Cambridge.

Ross Anderson and Adam Shostack wrote talk summaries, too.  And Matt 
Blaze recorded audio:
http://www.lightbluetouchpaper.org/2009/06/11/security-and-human-behaviour-2009/
http://newschoolsecurity.com/2009/06/shb-session-1-deception/
http://www.crypto.com/blog/shb09/


** *** ***** ******* *********** *************

     Comments from Readers



There are thousands of comments -- many of them interesting -- on these 
topics on my blog. Search for the story you want to comment on, and join in.

http://www.schneier.com/blog


** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing 
summaries, analyses, insights, and commentaries on security: computer 
and otherwise.  You can subscribe, unsubscribe, or change your address 
on the Web at <http://www.schneier.com/crypto-gram.html>.  Back issues 
are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to 
colleagues and friends who will find it valuable.  Permission is also 
granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is the author of the 
best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," 
and "Applied Cryptography," and an inventor of the Blowfish, Twofish, 
Phelix, and Skein algorithms.  He is the Chief Security Technology 
Officer of BT BCSG, and is on the Board of Directors of the Electronic 
Privacy Information Center (EPIC).  He is a frequent writer and lecturer 
on security topics.  See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter.  Opinions expressed are not 
necessarily those of BT.

Copyright (c) 2009 by Bruce Schneier.
