

                 CRYPTO-GRAM

              November 15, 2009

              by Bruce Schneier
      Chief Security Technology Officer, BT
             schneier at schneier.com
            http://www.schneier.com


A free monthly newsletter providing summaries, analyses, insights, and 
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit 
<http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at 
<http://www.schneier.com/crypto-gram-0911.html>.  These same essays 
appear in the "Schneier on Security" blog: 
<http://www.schneier.com/blog>.  An RSS feed is available.


** *** ***** ******* *********** *************

In this issue:
     Beyond Security Theater
     Fear and Overreaction
     News
     Zero-Tolerance Policies
     Security in a Reputation Economy
     Schneier News
     The Commercial Speech Arms Race
     The Doghouse: ADE 651
     "Evil Maid" Attacks on Encrypted Hard Drives
     Is Antivirus Dead?


** *** ***** ******* *********** *************

     Beyond Security Theater



[I was asked to write this essay for the "New Internationalist" (n. 427, 
November 2009, pp. 10--13).  It's nothing I haven't said before, but I'm 
pleased with how this essay came together.]

Terrorism is rare, far rarer than many people think. It's rare because 
very few people want to commit acts of terrorism, and executing a 
terrorist plot is much harder than television makes it appear. The best 
defenses against terrorism are largely invisible: investigation, 
intelligence, and emergency response. But even these are less effective 
at keeping us safe than our social and political policies, both at home 
and abroad. However, our elected leaders don't think this way: they are 
far more likely to implement security theater against movie-plot threats.

A movie-plot threat is an overly specific attack scenario. Whether it's 
terrorists with crop dusters, terrorists contaminating the milk supply, 
or terrorists attacking the Olympics, specific stories affect our 
emotions more intensely than mere data does. Stories are what we fear. 
It's not just hypothetical stories: terrorists flying planes into 
buildings, terrorists with bombs in their shoes or in their water 
bottles, and terrorists with guns and bombs waging a co-ordinated attack 
against a city are even scarier movie-plot threats because they actually 
happened.

Security theater refers to security measures that make people feel more 
secure without doing anything to actually improve their security. An 
example: the photo ID checks that have sprung up in office buildings. 
No-one has ever explained why verifying that someone has a photo ID 
provides any actual security, but it looks like security to have a 
uniformed guard-for-hire looking at ID cards. Airport-security examples 
include the National Guard troops stationed at US airports in the months 
after 9/11 -- their guns had no bullets. The US colour-coded system of 
threat levels, the pervasive harassment of photographers, and the metal 
detectors that are increasingly common in hotels and office buildings 
since the Mumbai terrorist attacks, are additional examples.

To be sure, reasonable arguments can be made that some terrorist targets 
are more attractive than others: airplanes because a small bomb can 
result in the death of everyone aboard, monuments because of their 
national significance, national events because of television coverage, 
and transportation because of the numbers of people who commute daily. 
But there are literally millions of potential targets in any large 
country (there are five million commercial buildings alone in the US), 
and hundreds of potential terrorist tactics; it's impossible to defend 
every place against everything, and it's impossible to predict which 
tactic and target terrorists will try next.

Feeling and Reality

Security is both a feeling and a reality. The propensity for security 
theater comes from the interplay between the public and its leaders. 
When people are scared, they need something done that will make them 
feel safe, even if it doesn't truly make them safer. Politicians 
naturally want to do something in response to crisis, even if that 
something doesn't make any sense.

Often, this "something" is directly related to the details of a recent 
event: we confiscate liquids, screen shoes, and ban box cutters on 
airplanes. But it's not the target and tactics of the last attack that 
are important, but the next attack. These measures are only effective if 
we happen to guess what the next terrorists are planning. If we spend 
billions defending our rail systems, and the terrorists bomb a shopping 
mall instead, we've wasted our money. If we concentrate airport security 
on screening shoes and confiscating liquids, and the terrorists hide 
explosives in their brassieres and use solids, we've wasted our money. 
Terrorists don't care what they blow up and it shouldn't be our goal 
merely to force the terrorists to make a minor change in their tactics 
or targets.

Our penchant for movie plots blinds us to the broader threats. And 
security theater consumes resources that could better be spent elsewhere.

Any terrorist attack is a series of events: something like planning, 
recruiting, funding, practicing, executing, aftermath. Our most 
effective defenses are at the beginning and end of that process -- 
intelligence, investigation, and emergency response -- and least 
effective when they require us to guess the plot correctly. By 
intelligence and investigation, I don't mean the broad data-mining or 
eavesdropping systems that have been proposed and in some cases 
implemented -- those are also movie-plot stories without much basis in 
actual effectiveness -- but instead the traditional "follow the 
evidence" type of investigation that has worked for decades.

Unfortunately for politicians, the security measures that work are 
largely invisible. Such measures include enhancing the 
intelligence-gathering abilities of the secret services, hiring cultural 
experts and Arabic translators, building bridges with Islamic 
communities both nationally and internationally, funding police 
capabilities -- both investigative arms to prevent terrorist attacks, 
and emergency communications systems for after attacks occur -- and 
arresting terrorist plotters without media fanfare. They do not include 
expansive new police or spying laws. Our police don't need any new laws 
to deal with terrorism; rather, they need apolitical funding. These 
security measures don't make good television, and they don't help, come 
re-election time. But they work, addressing the reality of security 
instead of the feeling.

The arrest of the "liquid bombers" in London is an example: they were 
caught through old-fashioned intelligence and police work. Their choice 
of target (airplanes) and tactic (liquid explosives) didn't matter; they 
would have been arrested regardless.

But even as we do all of this we cannot neglect the feeling of security, 
because it's how we collectively overcome the psychological damage that 
terrorism causes. It's not security theater we need, it's direct appeals 
to our feelings. The best way to help people feel secure is by acting 
secure around them. Instead of reacting to terrorism with fear, we -- 
and our leaders -- need to react with indomitability.

Refuse to Be Terrorized

By not overreacting, by not responding to movie-plot threats, and by not 
becoming defensive, we demonstrate the resilience of our society, in our 
laws, our culture, our freedoms. There is a difference between 
indomitability and arrogant "bring 'em on" rhetoric. There's a 
difference between accepting the inherent risk that comes with a free 
and open society, and hyping the threats.

We should treat terrorists like common criminals and give them all the 
benefits of true and open justice -- not merely because it demonstrates 
our indomitability, but because it makes us all safer. Once a society 
starts circumventing its own laws, the risks to its future stability are 
much greater than terrorism.

Supporting real security even though it's invisible, and demonstrating 
indomitability even though fear is more politically expedient, requires 
real courage. Demagoguery is easy. What we need is leaders willing both 
to do what's right and to speak the truth.

Despite fearful rhetoric to the contrary, terrorism is not a 
transcendent threat. A terrorist attack cannot possibly destroy a 
country's way of life; it's only our reaction to that attack that can do 
that kind of damage. The more we undermine our own laws, the more we 
convert our buildings into fortresses, the more we reduce the freedoms 
and liberties at the foundation of our societies, the more we're doing 
the terrorists' job for them.

We saw some of this in the Londoners' reaction to the 2005 transport 
bombings. Among the political and media hype and fearmongering, there 
was a thread of firm resolve. People didn't fall victim to fear. They 
rode the trains and buses the next day and continued their lives. 
Terrorism's goal isn't murder; terrorism attacks the mind, using victims 
as a prop. By refusing to be terrorized, we deny the terrorists their 
primary weapon: our own fear.

Today, we can project indomitability by rolling back all the fear-based 
post-9/11 security measures. Our leaders have lost credibility; getting 
it back requires a decrease in hyperbole. Ditch the invasive mass 
surveillance systems and new police state-like powers. Return airport 
security to pre-9/11 levels. Remove swagger from our foreign policies. 
Show the world that our legal system is up to the challenge of 
terrorism. Stop telling people to report all suspicious activity; it 
does little but make us suspicious of each other, increasing both fear 
and helplessness.

Terrorism has always been rare, and for all we've heard about 9/11 
changing the world, it's still rare. Even 9/11 failed to kill as many 
people as automobiles do in the US every single month. But there's a 
pervasive myth that terrorism is easy. It's easy to imagine terrorist 
plots, both large-scale "poison the food supply" and small-scale "10 
guys with guns and cars." Movies and television bolster this myth, so 
many people are surprised that there have been so few attacks in Western 
cities since 9/11. Certainly intelligence and investigation successes 
have made it harder, but mostly it's because terrorist attacks are 
actually hard. It's hard to find willing recruits, to co-ordinate plans, 
and to execute those plans -- and it's easy to make mistakes.

Counterterrorism is also hard, especially when we're psychologically 
prone to muck it up. Since 9/11, we've embarked on strategies of 
defending specific targets against specific tactics, overreacting to 
every terrorist video, stoking fear, demonizing ethnic groups, and 
treating the terrorists as if they were legitimate military opponents 
who could actually destroy a country or a way of life -- all of this 
plays into the hands of terrorists. We'd do much better by leveraging 
the inherent strengths of our modern democracies and the natural 
advantages we have over the terrorists: our adaptability and 
survivability, our international network of laws and law enforcement, 
and the freedoms and liberties that make our society so enviable. The 
way we live is open enough to make terrorists rare; we are observant 
enough to prevent most of the terrorist plots that exist, and 
indomitable enough to survive the even fewer terrorist plots that 
actually succeed. We don't need to pretend otherwise.

Commentary:
http://www.motherjones.com/kevin-drum/2009/11/security-theater
http://jamesfallows.theatlantic.com/archives/2009/11/the_right_kind_of_security.php
http://www.economist.com/blogs/gulliver/2009/11/the_future_of_security.cfm


** *** ***** ******* *********** *************

     Fear and Overreaction



It's hard work being prey. Watch the birds at a feeder. They're 
constantly on alert, and will fly away from food -- from easy nutrition 
-- at the slightest movement or sound. Given that I've never, ever seen 
a bird plucked from a feeder by a predator, it seems like a whole lot of 
wasted effort against a not-very-big threat.

Assessing and reacting to risk is one of the most important things a 
living creature has to deal with. The amygdala, an ancient part of the 
brain that first evolved in primitive fishes, has that job. It's what's 
responsible for the fight-or-flight reflex. Adrenaline in the 
bloodstream, increased heart rate, increased muscle tension, sweaty 
palms; that's the amygdala in action. And it works fast, faster than 
consciousness: show someone a snake and their amygdala will react 
before their conscious brain registers that they're looking at a snake.

Fear motivates all sorts of animal behaviors. Schooling, flocking, and 
herding are all security measures. Not only is it less likely that any 
member of the group will be eaten, but each member of the group has to 
spend less time watching out for predators. Animals as diverse as 
bumblebees and monkeys both avoid food in areas where predators are 
common. Different prey species have developed various alarm calls, some 
surprisingly specific. And some prey species have even evolved to react 
to the alarms given off by other species.

Evolutionary biologist Randolph Nesse has studied animal defenses, 
particularly those that seem to be overreactions. These defenses are 
mostly all-or-nothing; a creature can't do them halfway. Birds flying 
off, sea cucumbers expelling their stomachs, and vomiting are all 
examples. Using signal detection theory, Nesse showed that 
all-or-nothing defenses are expected to have many false alarms. "The 
smoke detector principle shows that the overresponsiveness of many 
defenses is an illusion. The defenses appear overresponsive because they 
are 'inexpensive' compared to the harms they protect against and because 
errors of too little defense are often more costly than errors of too 
much defense."

So according to the theory, if flight costs 100 calories, both in flying 
and lost eating time, and there's a 1 in 100 chance of being eaten if 
you don't fly away, it's smarter for survival to use up 10,000 calories 
repeatedly flying at the slightest movement even though there's a 99 
percent false alarm rate. Whatever the numbers happen to be for a 
particular species, it has evolved to get the trade-off right.
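
To make the numbers concrete, here is a minimal back-of-the-envelope 
sketch of that trade-off in Python (mine, not Nesse's); the cost 
assigned to being eaten is an arbitrary placeholder standing in for 
"catastrophic," and the other figures are the illustrative ones from 
the paragraph above.

    # Illustrative only: the essay's numbers plus a placeholder cost
    # for being eaten.  Compares the expected cost of always fleeing
    # 100 disturbances with the expected cost of never fleeing them.
    flight_cost = 100            # calories burned (and eating time lost) per flight
    p_real_predator = 0.01       # 1-in-100 chance a disturbance is a real predator
    cost_of_being_eaten = 10**6  # placeholder for "catastrophic"

    disturbances = 100
    always_flee = disturbances * flight_cost
    never_flee = disturbances * p_real_predator * cost_of_being_eaten

    print("Always flee:", always_flee, "calories, 99% of them false alarms")
    print("Never flee: ", never_flee, "expected cost -- roughly one death")

    # Fleeing every time "wastes" 10,000 calories but is still far
    # cheaper, which is the point of the smoke detector principle.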

This makes sense, until the conditions that the species evolved under 
change quicker than evolution can react to. Even though there are far 
fewer predators in the city, birds at my feeder react as if they were in 
the primal forest. Even birds safe in a zoo's aviary don't realize that 
the situation has changed.

Humans are both no different and very different. We, too, feel fear and 
react with our amygdala, but we also have a conscious brain that can 
override those reactions. And we too live in a world very different from 
the one we evolved in. Our reflexive defenses might be optimized for the 
risks endemic to living in small family groups in the East African 
highlands in 100,000 BC, not 2009 New York City. But we can go beyond 
fear, and actually think sensibly about security.

Far too often, we don't. We tend to be poor judges of risk. We overreact 
to rare risks, we ignore long-term risks, we magnify risks that are also 
morally offensive. We get risks wrong -- threats, probabilities, and 
costs -- all the time. When we're afraid, really afraid, we'll do almost 
anything to make that fear go away. Both politicians and marketers have 
learned to push that fear button to get us to do what they want.

One night last month, I was awakened from my hotel-room sleep by a loud, 
piercing alarm. There was no way I could ignore it, but I weighed the 
risks and did what any reasonable person would do under the 
circumstances: I stayed in bed and waited for the alarm to be turned 
off. No point getting dressed, walking down ten flights of stairs, and 
going outside into the cold for what invariably would be a false alarm 
-- serious hotel fires are very rare. Unlike the bird in an aviary, I 
knew better.

You can disagree with my risk calculus, and I'm sure many hotel guests 
walked downstairs and outside to the designated assembly point. But it's 
important to recognize that the ability to have this sort of discussion 
is uniquely human. And we need to have the discussion repeatedly, 
whether the topic is the installation of a home burglar alarm, the 
latest TSA security measures, or the potential military invasion of 
another country. These things aren't part of our evolutionary history; 
we have no natural sense of how to respond to them. Our fears are often 
calibrated wrong, and reason is the only way we can override them.

This essay first appeared on DarkReading.com.
http://www.darkreading.com/blog/archives/2009/11/its_hard_work_b.html

Animal behaviors:
http://judson.blogs.nytimes.com/2009/09/29/where-tasty-morsels-fear-to-tread/ 
or http://tinyurl.com/yhosh54
http://judson.blogs.nytimes.com/2009/10/06/leopard-behind-you/

Nesse paper:
http://www-personal.umich.edu/~nesse/Articles/Nesse-DefenseReg-EHB-2005.pdf 
or http://tinyurl.com/yz8zmxh

Evaluating risk:
http://www.schneier.com/essay-162.html
http://www.schneier.com/essay-171.html
http://www.schneier.com/essay-170.html
http://www.schneier.com/essay-155.html

Hotel fires are rare:
http://www.emergency-management.net/hotel_fire.htm


** *** ***** ******* *********** *************

     News



Fugitive caught after updating his status on Facebook:
http://www.schneier.com/blog/archives/2009/10/helpful_hint_fo.html

Six years of Microsoft Patch Tuesdays:
http://www.schneier.com/blog/archives/2009/10/six_years_of_pa.html

A computer card counter detects human card counters; all it takes is a 
computer that can track every card:
http://www.schneier.com/blog/archives/2009/10/computer_card_c.html

A woman posts a horrible story of how she was mistreated by the TSA, and 
the TSA responds by releasing the video showing she was lying.
http://www.schneier.com/blog/archives/2009/10/tsa_successfull.html

Australian man receives a reduced sentence due to encryption:
http://www.news.com.au/couriermail/story/0,23739,26232570-952,00.html

Steve Ballmer blames the failure of Windows Vista on security:
http://www.schneier.com/blog/archives/2009/10/ballmer_blames.html

James Bamford on the NSA:
http://www.schneier.com/blog/archives/2009/10/james_bamford_o.html

CIA invests in social-network data mining:
http://www.wired.com/dangerroom/2009/10/exclusive-us-spies-buy-stake-in-twitter-blog-monitoring-firm/ 
or http://tinyurl.com/yl3zud2
http://www.visibletechnologies.com/press/pr_20091019.html

Interesting story of a 2006 Wal-Mart hack from, probably, Minsk.
http://www.wired.com/threatlevel/2009/10/walmart-hack/

Ross Anderson has put together a great resource page on security and 
psychology:
http://www.cl.cam.ac.uk/~rja14/psysec.html

Best Buy sells surveillance tracker: only $99.99.
http://www.bestbuy.com/site/olspage.jsp?skuId=9540703&productCategoryId=pcmcat193100050013&type=product&id=1218123143064 
or http://tinyurl.com/yf2nsb8
You can also use an iPhone as a tracking device:
http://ephermata.livejournal.com/204026.html

A critical essay on the TSA from a former assistant police chief:
http://www.hlswatch.com/2009/10/15/bdo-i-have-the-right-to-refuse-this-searchb/ 
or http://tinyurl.com/ydbox3o
Follow-up essay by the same person:
http://www.hlswatch.com/2009/11/10/where-are-all-the-white-guys-update-on-do-i-have-the-right-to-refuse-this-search/

The U.S. Deputy Director of National Intelligence for Collection gives a 
press conference on the new Utah data collection facility.
http://link.brightcove.com/services/player/bcpid25071315001?bclid=28735328001&bctid=46104965001 
or http://tinyurl.com/yfzb7qm
Transcript:
http://www.dni.gov/speeches/20091023_speech.pdf

"Capability of the People's Republic of China to Conduct Cyber Warfare 
and Computer Network Exploitation," prepared for the US-China Economic 
and Security Review Commission, Northrop Grumman Corporation, October 9, 
2009.
http://www.uscc.gov/researchpapers/2009/NorthropGrumman_PRC_Cyber_Paper_FINAL_Approved%20Report_16Oct2009.pdf 
or http://tinyurl.com/ygcmh9b

Squirrel terrorists attacking our critical infrastructure.
http://notionscapital.wordpress.com/2009/10/24/terrorists-strike-u-s-infrastructure/ 
or http://tinyurl.com/ykgtadb
We have a cognitive bias to exaggerate risks caused by other humans, and 
downplay risks caused by animals (and, even more, by natural phenomena).

To aid their Wall Street investigations, the FBI used DCSNet, its 
massive surveillance system.
http://www.wallstreetandtech.com/blog/archives/2009/10/how_prosecutors.html;jsessionid=0RDUIPWAEEFWFQE1GHPCKHWATMY32JVN 
or http://tinyurl.com/yhnt22q

Detecting terrorists by smelling fear:
http://www.schneier.com/blog/archives/2009/11/detecting_terro.html

In the "Open Access Journal of Forensic Psychology", there's a paper 
about the problems with unscientific security:  "A Call for 
Evidence-Based Security Tools":
http://www.schneier.com/blog/archives/2009/11/the_problems_wi_1.html

Mossad hacked a Syrian official's computer; it was unattended in a hotel 
room at the time.
http://www.haaretz.com/hasen/spages/1125312.html
Remember the evil maid attack: if an attacker gets hold of your computer 
temporarily, he can bypass your encryption software.
http://www.schneier.com/blog/archives/2009/10/evil_maid_attac.html

Recently I wrote about the difficulty of making role-based access 
control work, and how research at Dartmouth showed that it was better to 
let people take the access control they need to do their jobs, and audit 
the results.  This interesting paper, "Laissez-Faire File Sharing," 
tries to formalize that sort of access control.
http://www.cs.columbia.edu/~smb/papers/nspw-use.pdf
http://www.schneier.com/essay-288.html

I have refrained from commenting on the case against Najibullah Zazi, 
simply because it's so often the case that the details reported in the 
press have very little to do with reality.  My suspicion was that he was, 
as in so many other cases, an idiot who couldn't do any real harm and 
was turned into a bogeyman for political purposes.  However, John 
Mueller -- who I've written about before -- has done the research.
http://www.schneier.com/blog/archives/2009/11/john_mueller_on_1.html

Interesting research: "Countering Kernel Rootkits with Lightweight Hook 
Protection," by Zhi Wang, Xuxian Jiang, Weidong Cui, and Peng Ning.
http://www.schneier.com/blog/archives/2009/11/protecting_oss.html

Airport thieves prefer stealing black luggage; it's obvious why if you 
think about it.
http://www.schneier.com/blog/archives/2009/11/thieves_prefer.html

We've seen lots of rumors, both in the U.S. and elsewhere, about people 
hacking the power grid.  President Obama mentioned it in his May 
cybersecurity speech:  "In other countries cyberattacks have plunged 
entire cities into darkness."  It seems the source of these rumors has 
been Brazil.
http://www.schneier.com/blog/archives/2009/11/hacking_the_bra.html

FBI/CIA/NSA information sharing before 9/11:
http://www.schneier.com/blog/archives/2009/11/fbiciansa_infor.html

Blowfish in fiction:
http://www.schneier.com/blog/archives/2009/11/blowfish_in_fic.html


** *** ***** ******* *********** *************

     Zero-Tolerance Policies



Recent stories have documented the ridiculous effects of zero-tolerance 
weapons policies in a Delaware school district: a first-grader expelled 
for taking a camping utensil to school, a 13-year-old expelled after 
another student dropped a pocketknife in his lap, and a seventh-grader 
expelled for cutting paper with a utility knife for a class project. 
Where's the common sense? the editorials cry.

These so-called zero-tolerance policies are actually zero-discretion 
policies. They're policies that must be followed, no situational 
discretion allowed. We encounter them whenever we go through airport 
security: no liquids, gels or aerosols. Some workplaces have them for 
sexual harassment incidents; in some sports a banned substance found in 
a urine sample means suspension, even if it's for a real medical 
condition. Judges have zero discretion when faced with mandatory 
sentencing laws: three strikes for drug offenses and you go to jail, 
mandatory sentencing for statutory rape (underage sex), etc. A national 
restaurant chain won't serve hamburgers rare, even if you offer to sign 
a waiver. Whenever you hear "that's the rule, and I can't do anything 
about it" -- and they're not lying to get rid of you -- you're butting 
against a zero discretion policy.

These policies enrage us because they are blind to circumstance. 
Editorial after editorial denounced the suspensions of elementary school 
children for offenses that anyone with any common sense would agree were 
accidental and harmless. The Internet is filled with essays 
demonstrating how the TSA's rules are nonsensical and sometimes don't 
even improve security.  I've written some of them.  What we want is for 
those involved in the situations to have discretion.

However, problems with discretion were the reason behind these mandatory 
policies in the first place. Discretion is often applied inconsistently. 
One school principal might deal with knives in the classroom one way, 
and another principal another way. Your drug sentence could depend 
considerably on how sympathetic your judge is, or on whether she's 
having a bad day.

Even worse, discretion can lead to discrimination. Schools had weapons 
bans before zero-tolerance policies, but teachers and administrators 
enforced the rules disproportionately against African-American students. 
Criminal sentences varied by race, too.  The benefit of zero-discretion 
rules and laws is that they ensure that everyone is treated equally.

Zero-discretion rules also protect against lawsuits.  If the rules are 
applied consistently, no parent, air traveler or defendant can claim he 
was unfairly discriminated against.

So that's the choice. Either we want the rules enforced fairly across 
the board, which means limiting the discretion of the enforcers at the 
scene at the time, or we want a more nuanced response to whatever the 
situation is, which means we give those involved in the situation more 
discretion.

Of course, there's more to it than that. The problem with the 
zero-tolerance weapons rules isn't that they're rigid, it's that they're 
poorly written.

What constitutes a weapon?  Is it any knife, no matter how small? 
Should the penalties be the same for a first grader and a high school 
student? Does intent matter? When an aspirin carried for menstrual 
cramps becomes "drug possession," you know there's a badly written rule 
in effect.

It's the same with airport security and criminal sentencing. Broad and 
simple rules may be simpler to follow -- and require less thinking on 
the part of those enforcing them -- but they're almost always far less 
nuanced than our complex society requires. Unfortunately, the more 
complex the rules are, the more they're open to interpretation and the 
more discretion the interpreters have.

The solution is to combine the two, rules and discretion, with 
procedures to make sure they're not abused. Provide rules, but don't 
make them so rigid that there's no room for interpretation. Give the 
people in the situation -- the teachers, the airport security agents, 
the policemen, the judges -- discretion to apply the rules to the 
situation. But -- and this is the important part -- allow people to 
appeal the results if they feel they were treated unfairly. And 
regularly audit the results to ensure there is no discrimination or 
favoritism.  It's the combination of the four that works: rules plus 
discretion plus appeal plus audit.

All systems need some form of redress, whether it be open and public 
like a courtroom or closed and secret like the TSA. Giving discretion to 
those at the scene just makes for a more efficient appeals process, 
since the first level of appeal can be handled on the spot.

Zachary, the Delaware first grader suspended for bringing a combination 
fork, spoon and knife camping utensil to eat his lunch with, had his 
punishment unanimously overturned by the school board. This was the 
right decision; but what about all the other students whose parents 
weren't forceful or media-savvy enough to turn their child's plight 
into a national story?  Common sense in applying rules is important, but 
so is equal access to that common sense.

This essay originally appeared on the Minnesota Public Radio website.
http://minnesota.publicradio.org/display/web/2009/11/03/schneier/

http://www.nytimes.com/2009/10/12/education/12discipline.html
http://www.philly.com/inquirer/opinion/20091016_Editorial__Zero_common_sense.html 
or http://tinyurl.com/yls568f
http://www.lancastereaglegazette.com/article/20091020/OPINION04/910200313/Let-reason-be-driver-of-effective-zero-tolerance-policy 
or http://tinyurl.com/yhcvxpu
http://www.dallasnews.com/sharedcontent/dws/dn/localnews/columnists/jragland/stories/DN-ragland_17met.ART.State.Edition1.4c22b1f.html 
or http://tinyurl.com/yh7ehpn
http://www.htrnews.com/article/20091017/MAN06/910170416
http://www.baylor.edu/lariat/news.php?action=story&story=63347
http://www.sdnn.com/sandiego/2009-10-20/columns/marsha-sutton-zero-tolerance-equals-zero-brain-function 
or http://tinyurl.com/ygfhysa
http://www.delmarvanow.com/article/20091020/DW02/910200338

Another example:
A former soldier who handed a discarded shotgun in to police faces at 
least five years' imprisonment for "doing his duty".
http://www.thisissurreytoday.co.uk/news/Ex-soldier-faces-jail-handing-gun/article-1509082-detail/article.html 
or http://tinyurl.com/y9spuad


** *** ***** ******* *********** *************

     Security in a Reputation Economy



In the past, our relationship with our computers was technical. We cared 
what CPU they had and what software they ran. We understood our networks 
and how they worked. We were experts, or we depended on someone else for 
expertise. And security was part of that expertise.

This is changing. We access our email via the web, from any computer or 
from our phones. We use Facebook, Google Docs, even our corporate 
networks, regardless of hardware or network. We, especially the younger 
of us, no longer care about the technical details. Computing is 
infrastructure; it's a commodity. It's less about products and more 
about services; we simply expect it to work, like telephone service or 
electricity or a transportation network.

Infrastructures can be spread along a broad continuum, ranging from 
generic to highly specialized. Power and water are generic; who supplies 
them doesn't really matter. Mobile phone services, credit cards, ISPs, 
and airlines are mostly generic. More specialized infrastructure 
services are restaurant meals, haircuts, and social networking sites. 
Highly specialized services include tax preparation for complex 
businesses, management consulting, legal services, and medical services.

Sales for these services are driven by two things: price and trust. The 
more generic the service is, the more price dominates. The more 
specialized it is, the more trust dominates. IT is something of a 
special case because so much of it is free. So, for both specialized IT 
services where price is less important and for generic IT services -- 
think Facebook -- where there is no price, trust will grow in 
importance. IT is becoming a reputation-based economy, and this has 
interesting ramifications for security.

Some years ago, the major credit card companies became concerned about 
the plethora of credit-card-number thefts from sellers' databases. They 
worried that these might undermine the public's trust in credit cards as 
a secure payment system for the internet. They knew the sellers would 
only protect these databases up to the level of the threat to the 
seller, and not to the greater level of threat to the industry as a 
whole. So they banded together and produced a security standard called 
PCI. It's wholly industry-enforced by an industry that realized its 
reputation was more valuable than the sellers' databases.

A reputation-based economy means that infrastructure providers care more 
about security than their customers do. I realized this 10 years ago 
with my own company. We provided network-monitoring services to large 
corporations, and our internal network security was much more extensive 
than our customers'. Our customers secured their networks -- that's why 
they hired us, after all -- but only up to the value of their networks. 
If we mishandled any of our customers' data, we would have lost the 
trust of all of our customers.

I heard the same story at an ENISA conference in London last June, when 
an IT consultant explained that he had begun encrypting his laptop years 
before his customers did. While his customers might decide that the risk 
of losing their data wasn't worth the hassle of dealing with encryption, 
he knew that if he lost data from one customer, he risked losing all of 
his customers.

As IT becomes more like infrastructure, more like a commodity, expect 
service providers to improve security to levels greater than their 
customers would have done themselves.

In IT, customers learn about company reputation from many sources: 
magazine articles, analyst reviews, recommendations from colleagues, 
awards, certifications, and so on. Of course, this only works if 
customers have accurate information. In a reputation economy, companies 
have a motivation to hide their security problems.

You've all experienced a reputation economy: restaurants. Some 
restaurants have a good reputation, and are filled with regulars. When 
restaurants get a bad reputation, people stop coming and they close. 
Tourist restaurants -- whose main attraction is their location, and 
whose customers frequently don't know anything about their reputation -- 
can thrive even if they aren't any good. And sometimes a restaurant can 
keep its reputation -- an award in a magazine, a special occasion 
restaurant that "everyone knows" is the place to go -- long after its 
food and service have declined.

The reputation economy is far from perfect.

This essay originally appeared in "The Guardian."
http://www.guardian.co.uk/technology/2009/nov/11/schneier-reputation-it-security 
or http://tinyurl.com/yha3nbj


** *** ***** ******* *********** *************

     Schneier News



I'm speaking at the Internet Governance Forum in Sharm el-Sheikh, Egypt, 
on November 16 and 17.
http://igf09.eg/home.html

I'm speaking at the 2009 SecAU Security Congress in Perth on December 2 
and 3.
http://scissec.scis.ecu.edu.au/conferences2008/

I'm speaking at an Open Rights Group event in London on December 4.
http://www.openrightsgroup.org/blog/2009/bruce-schneier-event

I'm speaking at the First IEEE Workshop on Information Forensics and 
Security in London on December 8.
http://www.wifs09.org/

I'm speaking at the UCL Centre for Security and Crime Science in London 
on December 7.
http://www.cscs.ucl.ac.uk/

I'm speaking at the Young Professionals in Foreign Policy in London on 
December 7.
http://www.ypfp.org/content/event/London

I'm speaking at the Iberic Web Application Security Conference in Madrid 
on December 10.  The conference runs December 10-11, 2009.
http://www.ibwas.com/

Article on me from a Luxembourg magazine.
http://www.paperjam.lu/archives/2009/11/2310_Technologie_Security/index.html 
or http://tinyurl.com/y95mcpq

Interview with me on CNet.com:
http://news.cnet.com/8301-27080_3-10381460-245.html?tag=newsLeadStoriesArea.1 
or http://tinyurl.com/yf5otcu

Video interview with me, conducted at the Information Security Decisions 
conference in Chicago in October.
http://searchsecurity.techtarget.com/video/0,297151,sid14_gci1372839,00.html 
or http://tinyurl.com/yk4othd

A month ago, ThatsMyFace.com approached me about making a Bruce Schneier 
action figure.  It's $100.  I'd like to be able to say something like 
"half the proceeds are going to EPIC and EFF," but they're not.  That's 
the price for custom orders.  I don't even get a royalty.  The company 
is working on lowering the price, and they've said that they'll put a 
photograph of an actual example on the webpage.  I've told them that at 
$100 no one will buy it, but at $40 it's a funny gift for your corporate 
IT person.  So e-mail the company if you're interested, and if they get 
enough interest they'll do a bulk order.
http://www.thatsmyface.com/f/bruce_schneier


** *** ***** ******* *********** *************

     The Commercial Speech Arms Race



A few years ago, a company began to sell a liquid with identification 
codes suspended in it. The idea was that you would paint it on your 
stuff as proof of ownership. I commented that I would paint it on 
someone else's stuff, then call the police.

I was reminded of this recently when a group of Israeli scientists 
demonstrated that it's possible to fabricate DNA evidence. So now, 
instead of leaving your own DNA at a crime scene, you can leave 
fabricated DNA. And it isn't even necessary to fabricate. In Charlie 
Stross's novel "Halting State," the bad guys foul a crime scene by 
blowing around the contents of a vacuum cleaner bag, containing the DNA 
of dozens, if not hundreds, of people.

This kind of thing has been going on for ever. It's an arms race, and 
when technology changes, the balance between attacker and defender 
changes. But when automated systems do the detecting, the results are 
different. Face recognition software can be fooled by cosmetic surgery, 
or sometimes even just a photograph. And when fooling them becomes 
harder, the bad guys fool them on a different level. Computer-based 
detection gives the defender economies of scale, but the attacker can 
use those same economies of scale to defeat the detection system.

Google, for example, has anti-fraud systems that detect and shut down 
advertisers who try to inflate their revenue by repeatedly clicking on 
their own AdSense ads. So people built bots to repeatedly click on the 
AdSense ads of their competitors, trying to convince Google to kick them 
out of the system.

Similarly, when Google started penalizing a site's search engine 
rankings for having "bad neighbors" -- backlinks from link farms, adult 
or gambling sites, or blog spam -- people engaged in sabotage: they 
built link farms and left blog comment spam linking to their 
competitors' sites.

The same sort of thing is happening on Yahoo Answers. Initially, 
companies would leave answers pushing their products, but Yahoo started 
policing this. So people have written bots to report abuse on all their 
competitors. There are Facebook bots doing the same sort of thing.

Last month, Google introduced Sidewiki, a browser feature that lets you 
read and post comments on virtually any webpage. People and industries 
are already worried about the effects unrestrained commentary might have 
on their businesses, and how they might control the comments. I'm sure 
Google has sophisticated systems ready to detect commercial interests 
that try to take advantage of the system, but are they ready to deal 
with commercial interests that try to frame their competitors? And do we 
want to give one company the power to decide which comments should rise 
to the top and which get deleted?

Whenever you build a security system that relies on detection and 
identification, you invite the bad guys to subvert the system so it 
detects and identifies someone else. Sometimes this is hard -- leaving 
someone else's fingerprints on a crime scene is hard, as is using a mask 
of someone else's face to fool a guard watching a security camera -- and 
sometimes it's easy. But when automated systems are involved, it's often 
very easy. It's not just hardened criminals that try to frame each 
other, it's mainstream commercial interests.

With systems that police internet comments and links, there's money 
involved in commercial messages -- so you can be sure some will take 
advantage of it. This is the arms race. Build a detection system, and 
the bad guys try to frame someone else. Build a detection system to 
detect framing, and the bad guys try to frame someone else framing 
someone else. Build a detection system to detect framing of framing, and 
well, there's no end, really. Commercial speech is on the internet to 
stay; we can only hope that commercial interests don't pollute the 
social systems we use so badly that they're no longer useful.

This essay originally appeared in "The Guardian."
http://www.guardian.co.uk/technology/2009/oct/15/bruce-schneier-internet-security 
or http://tinyurl.com/yfbsb42

"Smart Water" liquid identification:
http://www.schneier.com/blog/archives/2005/02/smart_water.html

Fabricating DNA evidence:
http://www.nytimes.com/2009/08/18/science/18dna.html

Fooling face recognition software:
http://staging.spectrum.ieee.org/computing/embedded-systems/computerized-facerecognition-technology-foiled 
or http://tinyurl.com/yz9x4pf
http://www.theregister.co.uk/2009/02/19/facial_recognition_fail/

Google's AdSense:
http://www.wmtips.com/adsense/what-you-need-know-about-adsense.htm

Sidewiki:
http://www.google.com/sidewiki/intl/en/index.html:
http://www.pcworld.com/article/172490/google_sidewiki_a_first_look.html 
or http://tinyurl.com/lgpxp8

Sidewiki fears:
http://impactiviti.wordpress.com/2009/09/29/googles-sidewiki-game-changer-for-pharma-social-media/ 
or http://tinyurl.com/yl4ul3g
http://www.4hoteliers.com/4hots_fshw.php?mwi=4448
http://talkbiz.com/blog/google-steals-the-web/


** *** ***** ******* *********** *************

     The Doghouse: ADE 651



A divining rod to find explosives in Iraq:
http://www.schneier.com/blog/archives/2009/11/the_doghouse_ad.html


** *** ***** ******* *********** *************

     "Evil Maid" Attacks on Encrypted Hard Drives



Earlier this month, Joanna Rutkowska implemented the "evil maid" attack 
against TrueCrypt.  The same kind of attack should work against any 
whole-disk encryption, including PGP Disk and BitLocker.  Basically, the 
attack works like this:

Step 1:  Attacker gains access to your shut-down computer and boots it 
from a separate volume.  The attacker writes a hacked bootloader onto 
your system, then shuts it down.

Step 2:  You boot your computer using the attacker's hacked bootloader, 
entering your encryption key.  Once the disk is unlocked, the hacked 
bootloader does its mischief.  It might install malware to capture the 
key and send it over the Internet somewhere, or store it in some 
location on the disk to be retrieved later, or whatever.

You can see why it's called the "evil maid" attack; a likely scenario is 
that you leave your encrypted computer in your hotel room when you go 
out to dinner, and the maid sneaks in and installs the hacked 
bootloader.  The same maid could even sneak back the next night and 
erase any traces of her actions.

This attack exploits the same basic vulnerability as the "Cold Boot" 
attack from last year, and the "Stoned Boot" attack from earlier this 
year, and there's no real defense to this sort of thing.  As soon as you 
give up physical control of your computer, all bets are off.  From CRN: 
"Similar hardware-based attacks were among the main reasons why 
Symantec's CTO Mark Bregman was recently advised by 'three-letter 
agencies in the US Government' to use separate laptop and mobile device 
when traveling to China, citing potential hardware-based compromise."

PGP sums it up in their blog.  "No security product on the market today 
can protect you if the underlying computer has been compromised by 
malware with root level administrative privileges. That said, there 
exists well-understood common sense defenses against 'Cold Boot,' 
'Stoned Boot,' 'Evil Maid,' and many other attacks yet to be named and 
publicized."

The defenses are basically two-factor authentication: a token you don't 
leave in your hotel room for the maid to find and use.  The maid could 
still corrupt the machine, but it's more work than just storing the 
password for later use.  Putting your data on a thumb drive and taking 
it with you doesn't work; when you return you're plugging your thumb 
drive into a corrupted machine.

The real defense here is trusted boot, something Trusted Computing is 
supposed to enable.  And the only way to get that is from Microsoft's 
BitLocker hard disk encryption, if your computer has a TPM module 
version 1.2 or later.
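
Lacking a TPM, a cautious traveler can approximate a small part of this 
by hand.  Below is a minimal sketch in Python -- an illustration of the 
idea, not a real defense -- that hashes the unencrypted boot components 
before you type a passphrase and compares them against a baseline kept 
on a token you carry with you.  The device and file paths (/dev/sda, 
/boot/grub/core.img, /boot/vmlinuz) are assumptions for a typical Linux 
laptop, it must run as root, and a sufficiently thorough evil maid could 
of course subvert the checking environment itself.

    # Sketch only: record SHA-256 hashes of the pieces an evil maid
    # would modify, and refuse to type a passphrase if they change.
    import hashlib, json, sys

    def sha256_of(path, length=None):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            h.update(f.read(length) if length else f.read())
        return h.hexdigest()

    targets = {
        "mbr": ("/dev/sda", 446),               # boot code portion of the MBR
        "grub": ("/boot/grub/core.img", None),  # assumed bootloader image
        "kernel": ("/boot/vmlinuz", None),      # assumed kernel path
    }

    current = {name: sha256_of(p, n) for name, (p, n) in targets.items()}

    try:
        with open("known_good.json") as f:      # keep this file on a USB token
            baseline = json.load(f)
    except FileNotFoundError:
        with open("known_good.json", "w") as f:
            json.dump(current, f, indent=2)
        sys.exit("Baseline recorded; store known_good.json off the machine.")

    for name, digest in current.items():
        if baseline.get(name) == digest:
            print(name, "ok")
        else:
            print(name, "MODIFIED -- do not enter your passphrase")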

In the meantime, people who encrypt their hard drives, or partitions on 
their hard drives, have to realize that the encryption gives them less 
protection than they probably believe.  It protects against someone 
confiscating or stealing their computer and then trying to get at the 
data.  It does not protect against an attacker who has access to your 
computer over a period of time during which you use it, too.

Evil Maid attacks:
http://theinvisiblethings.blogspot.com/2009/10/evil-maid-goes-after-truecrypt.html 
or http://tinyurl.com/yzbbgc3

Cold Boot and Stoned Boot attacks:
http://citp.princeton.edu/memory/
http://www.stoned-vienna.com/
http://blogs.zdnet.com/security/?p=4662&tag=nl.e019
http://www.crn.com.au/News/155836,safety-first-for-it-executives-in-china.aspx 
or http://tinyurl.com/p2wqxq

PGP's commentary:
http://blog.pgp.com/index.php/2009/10/evil-maid-attack/

Trusted Computing:
http://www.schneier.com/blog/archives/2005/08/trusted_computi.html


** *** ***** ******* *********** *************

     Is Antivirus Dead?



This essay previously appeared in "Information Security Magazine," as 
the second half of a point-counterpoint with Marcus Ranum.  You can read 
his half here as well:
http://searchsecurity.techtarget.com/magazinePrintFriendly/0,296905,sid14_gci1373562,00.html 
or http://tinyurl.com/yz2rtbs


Security is never black and white. If someone asks, "For best security, 
should I do A or B?" the answer almost invariably is both. But security 
is always a trade-off. Often it's impossible to do both A and B -- 
there's no time to do both, it's too expensive to do both, or whatever 
-- and you have to choose. In that case, you look at A and B and you 
make your best choice. But it's almost always more secure to do both.

Yes, antivirus programs have been getting less effective as new viruses 
appear more frequently and existing viruses mutate faster. Yes, antivirus 
companies are forever playing catch-up, trying to create signatures for 
new viruses. Yes, signature-based antivirus software won't protect you 
when a virus is new, before the signature is added to the detection 
program. Antivirus is by no means a panacea.
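
At its simplest, signature matching is a lookup against a list of 
known-bad patterns.  The toy Python sketch below (the hash value and 
directory name are made up, and real engines use byte patterns, 
heuristics, and emulation rather than whole-file hashes) shows why a 
brand-new virus walks right past it: there is simply no entry to match.

    # Toy signature scanner -- hypothetical signatures, not any
    # vendor's actual database or engine.
    import hashlib
    from pathlib import Path

    KNOWN_BAD = {
        # sha256 digest -> name; this entry is a made-up placeholder
        "9f2feb0f1ef425b292f2f94bcb29e173be53cd95c456203fc04e2dd606e63d00":
            "Example.Trojan.A",
    }

    def scan(directory):
        for path in Path(directory).rglob("*"):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                if digest in KNOWN_BAD:
                    print("INFECTED:", path, "->", KNOWN_BAD[digest])

    # A virus written yesterday has no entry in KNOWN_BAD, so it
    # scans clean -- exactly the window described above.
    scan("downloads")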

On the other hand, an antivirus program with up-to-date signatures will 
protect you from a lot of threats. It'll protect you against viruses, 
against spyware, against Trojans -- against all sorts of malware. It'll 
run in the background, automatically, and you won't notice any 
performance degradation at all. And -- here's the best part -- it can be 
free. AVG won't cost you a penny. To me, this is an easy trade-off, 
certainly for the average computer user who clicks on attachments he 
probably shouldn't click on, downloads things he probably shouldn't 
download, and doesn't understand the finer workings of Windows Personal 
Firewall.

Certainly security would be improved if people used whitelisting 
programs such as Bit9 Parity and Savant Protection -- and I personally 
recommend Malwarebytes' Anti-Malware -- but a lot of users are going to 
have trouble with this. The average user will probably just swat away 
the "you're trying to run a program not on your whitelist" warning 
message or -- even worse -- wonder why his computer is broken when he 
tries to run a new piece of software. The average corporate IT 
department doesn't have a good idea of what software is running on all 
the computers within the corporation, and doesn't want the 
administrative overhead of managing all the change requests. And 
whitelists aren't a panacea, either: they don't defend against malware 
that attaches itself to data files (think Word macro viruses), for example.
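
The whitelist approach is the mirror image of the signature scanner 
sketched above: instead of flagging hashes on a known-bad list, it 
refuses anything not on a known-good one.  Another toy Python sketch 
(the hash and the file path are placeholders), which also shows where 
the "why is my new software broken?" complaint comes from:

    # Toy whitelist check -- the opposite default from a blacklist.
    import hashlib
    from pathlib import Path

    APPROVED = {
        # sha256 digests of programs IT has blessed (placeholder value)
        "3a1f0c9e55aa0d2b7c61c8e4f0b9a6d2c5e8f1a4b7d0c3e6f9a2b5c8d1e4f7a0",
    }

    def may_run(program_path):
        digest = hashlib.sha256(Path(program_path).read_bytes()).hexdigest()
        return digest in APPROVED

    # Anything new -- malware or the legitimate tool installed this
    # morning -- fails the check until someone updates the whitelist.
    candidate = Path("downloads/new_tool.exe")   # hypothetical file
    if candidate.exists():
        print(candidate, "allowed" if may_run(candidate) else "blocked: not whitelisted")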

One of the newest trends in IT is consumerization, and if you don't 
already know about it, you soon will. It's the idea that new 
technologies, the cool stuff people want, will become available for the 
consumer market before they become available for the business market. 
What it means to business is that people -- employees, customers, 
partners -- will access business networks from wherever they happen to 
be, with whatever hardware and software they have. Maybe it'll be the 
computer you gave them when you hired them. Maybe it'll be their home 
computer, the one their kids use. Maybe it'll be their cell phone or 
PDA, or a computer in a hotel's business center. Your business will have 
no way to know what they're using, and -- more importantly -- you'll 
have no control.

In this kind of environment, computers are going to connect to each 
other without a whole lot of trust between them. Untrusted computers are 
going to connect to untrusted networks. Trusted computers are going to 
connect to untrusted networks. The whole idea of "safe computing" is 
going to take on a whole new meaning -- every man for himself. A 
corporate network is going to need a simple, dumb, signature-based 
antivirus product at the gateway of its network. And a user is going to 
need a similar program to protect his computer.

Bottom line: antivirus software is neither necessary nor sufficient for 
security, but it's still a good idea. It's not a panacea that magically 
makes you safe, nor is it obsolete in the face of current threats. As 
countermeasures go, it's cheap, it's easy, and it's effective. I haven't 
dumped my antivirus program, and I have no intention of doing so anytime 
soon.

Problems with anti-virus software:
http://www.csoonline.com/article/495827/Experts_Only_Time_to_Ditch_the_Antivirus_ 
or http://tinyurl.com/nqo68f
http://www.computerworld.com/s/article/print/9077338/The_future_of_antivirus 
or http://tinyurl.com/yfrv86s
http://www.pcworld.com/article/130455/is_desktop_antivirus_dead.html
http://www.businessweek.com/technology/content/jan2007/tc20070122_300717.htm 
or http://tinyurl.com/ycdkmd3

AVG:
http://free.avg.com/us-en/homepage

Consumerization:
http://arstechnica.com/business/news/2008/07/analysis-it-consumerization-and-the-future-of-work.ars 
or http://tinyurl.com/yd6kxgs


** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing 
summaries, analyses, insights, and commentaries on security: computer 
and otherwise.  You can subscribe, unsubscribe, or change your address 
on the Web at <http://www.schneier.com/crypto-gram.html>.  Back issues 
are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to 
colleagues and friends who will find it valuable.  Permission is also 
granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is the author of the 
best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," 
and "Applied Cryptography," and an inventor of the Blowfish, Twofish, 
Threefish, Helix, Phelix, and Skein algorithms.  He is the Chief 
Security Technology Officer of BT BCSG, and is on the Board of Directors 
of the Electronic Privacy Information Center (EPIC).  He is a frequent 
writer and lecturer on security topics.  See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter.  Opinions expressed are not 
necessarily those of BT.

Copyright (c) 2009 by Bruce Schneier.
