CRYPTO-GRAM, May 15, 2006

Bruce Schneier schneier at COUNTERPANE.COM
Mon May 15 00:57:31 PDT 2006


                 CRYPTO-GRAM

                May 15, 2006

              by Bruce Schneier
               Founder and CTO
      Counterpane Internet Security, Inc.
           schneier at counterpane.com
            http://www.schneier.com
           http://www.counterpane.com


A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit
<http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at
<http://www.schneier.com/crypto-gram-0605.html>.  These same essays
appear in the "Schneier on Security" blog:
<http://www.schneier.com/blog>.  An RSS feed is available.


** *** ***** ******* *********** *************

In this issue:
     Movie Plot Threat Contest: Status Report
     Who Owns Your Computer?
     Crypto-Gram Reprints
     Identity-Theft Disclosure Laws
     When "Off" Doesn't Mean Off
     News
     RFID Cards and Man-in-the-Middle Attacks
     Software Failure Causes Airport Evacuation
     Counterpane News
     Microsoft's BitLocker
     The Security Risk of Special Cases
     Comments from Readers


** *** ***** ******* *********** *************

     Movie Plot Threat Contest: Status Report



On the first of last month, I announced my (possibly First) Movie-Plot
Threat Contest.

"Entrants are invited to submit the most unlikely, yet still plausible,
terrorist attack scenarios they can come up with.

"Your goal: cause terror. Make the American people notice. Inflict
lasting damage on the U.S. economy. Change the political landscape, or
the culture. The more grandiose the goal, the better.

"Assume an attacker profile on the order of 9/11: 20 to 30 unskilled
people, and about $500,000 with which to buy skills, equipment, etc."

As of the end of the month, the blog post has 782 comments.  I expected
a lot of submissions, but the response has blown me away.

Looking over the different terrorist plots, they seem to fall into
several broad categories.  The first category consists of attacks
against our infrastructure: the food supply, the water supply, the
power infrastructure, the telephone system, etc.  The idea is to
cripple the country by targeting one of the basic systems that make it
work.

The second category consists of big-ticket plots.  Either they have
very public targets -- blowing up the Super Bowl, the Oscars, etc. --
or they have high-tech components: nuclear waste, anthrax, chlorine
gas, a full oil tanker, etc.  And they are often complex and hard to
pull off.  This is the 9/11 idea: a single huge event that affects the
entire nation.

The third category consists of low-tech attacks that go on and
on.  Several people imagined a version of the DC sniper scenario, but
with multiple teams.  The teams would slowly move around the country,
perhaps each team starting up after the previous one was captured or
killed.  Other people suggested a variant of this with small bombs in
random public locations around the country.

(There's a fourth category: actual movie plots.  Some entries are
comical, unrealistic, have science fiction premises, etc.  I'm not even
considering those.)

The better ideas tap directly into public fears.  In my book, Beyond
Fear, I discussed five different tendencies people have to exaggerate
risks: to believe that something is more risky than it actually is.

1. People exaggerate spectacular but rare risks and downplay common risks.

2. People have trouble estimating risks for anything not exactly like
their normal situation.

3. Personified risks are perceived to be greater than anonymous risks.

4. People underestimate risks they willingly take and overestimate
risks in situations they can't control.

5. People overestimate risks that are being talked about and remain an
object of public scrutiny.

The best plot ideas leverage one or more of those
tendencies.  Big-ticket attacks leverage the first.  Infrastructure and
low-tech attacks leverage the fourth.  And every attack tries to
leverage the fifth, especially those attacks that go on and on.  I'm
willing to bet that when I find a winner, it will be the plot that
leverages the greatest number of those tendencies to the best possible
advantage.

I also got a bunch of e-mails from people with ideas they thought too
terrifying to post publicly.  Some wouldn't even share their ideas with
me.  And I received e-mails from people accusing me of helping the
terrorists by giving them ideas.

But if there's one thing this contest demonstrates, it's that good
terrorist ideas are a dime a dozen.  Anyone can figure out how to cause
terror.  The hard part is execution.

Some of the submitted plots require minimal skill and
equipment.  Twenty guys with cars and guns -- that sort of
thing.  Reading through them, you have to wonder why there have been no
terrorist attacks in the U.S. since 9/11.  I don't believe the
"flypaper theory" that the terrorists are all in Iraq instead of in the
U.S.  And despite all the ineffectual security we've put in place since
9/11, I'm sure we have had some successes in intelligence and
investigation -- and have made it harder for terrorists to operate both
in the U.S. and abroad.

But mostly, I think terrorist attacks are much harder than most of us
think.  It's harder to find willing recruits than we think.  It's
harder to coordinate plans.  It's harder to execute those
plans.  Terrorism is rare, and for all we've heard about 9/11 changing
the world, it's still rare.

The submission deadline was the end of April, but please keep
posting plots if you think of them.  And please read through some of
the others and comment on them; I'm curious as to what other people
think are the most interesting, compelling, realistic, or effective
scenarios.

I'm reading through them, and will have a winner by the next Crypto-Gram.

Contest:
http://www.schneier.com/blog/archives/2006/04/announcing_movi.html

Flypaper theory:
http://en.wikipedia.org/wiki/Flypaper_theory_%28strategy%29

The contest made The New York Times:
http://www.nytimes.com/2006/04/23/movies/23peterson.html?ex=1303444800&e
n=c7ccc8d756fc98e7&ei=5090&partner=rssuserland&emc=rss or
http://tinyurl.com/qyh3b


** *** ***** ******* *********** *************

     Who Owns Your Computer?



When technology serves its owners, it is liberating. When it is
designed to serve others, over the owner's objection, it is oppressive.
There's a battle raging on your computer right now -- one that pits you
against worms and viruses, Trojans, spyware, automatic update features
and digital rights management technologies. It's the battle to
determine who owns your computer.

You own your computer, of course. You bought it. You paid for it. But
how much control do you really have over what happens on your machine?
Technically you might have bought the hardware and software, but you
have less control over what it's doing behind the scenes.

Using the hacker sense of the term, your computer is "owned" by other
people.

It used to be that only malicious hackers were trying to own your
computers. Whether through worms, viruses, Trojans or other means, they
would try to install some kind of remote-control program onto your
system. Then they'd use your computers to sniff passwords, make
fraudulent bank transactions, send spam, initiate phishing attacks and
so on. Estimates are that somewhere between hundreds of thousands and
millions of computers are members of remotely controlled "bot"
networks. Owned.

Now, things are not so simple. There are all sorts of interests vying
for control of your computer. There are media companies that want to
control what you can do with the music and videos they sell you. There
are companies that use software as a conduit to collect marketing
information, deliver advertising or do whatever it is their real owners
require. And there are software companies that are trying to make money
by pleasing not only their customers, but other companies they ally
themselves with. All these companies want to own your computer.

Some examples:

1. Entertainment software: In October 2005, it emerged that Sony had
distributed a rootkit with several music CDs -- the same kind of
software that crackers use to own people's computers. This rootkit
secretly installed itself when the music CD was played on a computer.
Its purpose was to prevent people from doing things with the music that
Sony didn't approve of: It was a DRM system. If the exact same piece of
software had been installed secretly by a hacker, this would have been
an illegal act. But Sony believed that it had legitimate reasons for
wanting to own its customers' machines.

2. Antivirus: You might have expected your antivirus software to detect
Sony's rootkit. After all, that's why you bought it. But initially, the
security programs sold by Symantec and others did not detect it,
because Sony had asked them not to. You might have thought that the
software you bought was working for you, but you would have been wrong.

3. Internet services: Hotmail allows you to blacklist certain e-mail
addresses, so that mail from them automatically goes into your spam
trap. Have you ever tried blocking all that incessant marketing e-mail
from Microsoft? You can't.

4. Application software: Internet Explorer users might have expected
the program to incorporate easy-to-use cookie handling and pop-up
blockers. After all, other browsers do, and users have found them
useful in defending against Internet annoyances. But Microsoft isn't
just selling software to you; it sells Internet advertising as well. It
isn't in the company's best interest to offer users features that would
adversely affect its business partners.

5. Spyware: Spyware is nothing but someone else trying to own your
computer. These programs eavesdrop on your behavior and report back to
their real owners -- sometimes without your knowledge or consent --
about your behavior.

6. Update: Automatic update features are another way software companies
try to own your computer. While they can be useful for improving
security, they also require you to trust your software vendor not to
disable your computer for nonpayment, breach of contract or other
presumed infractions.

Adware, software-as-a-service and Google Desktop search are all
examples of some other company trying to own your computer. And Trusted
Computing will only make the problem worse.

There is an inherent insecurity to technologies that try to own
people's computers: They allow individuals other than the computers'
legitimate owners to enforce policy on those machines. These systems
invite attackers to assume the role of the third party and turn a
user's device against him.

Remember the Sony story: The most insecure feature in that DRM system
was a cloaking mechanism that gave the rootkit control over whether you
could see it executing or spot its files on your hard disk. By taking
ownership away from you, it reduced your security.

If left to grow, these external control systems will fundamentally
change your relationship with your computer. They will make your
computer much less useful by letting corporations limit what you can do
with it. They will make your computer much less reliable because you
will no longer have control of what is running on your machine, what it
does, and how the various software components interact. At the extreme,
they will transform your computer into a glorified boob tube.

You can fight back against this trend by only using software that
respects your boundaries. Boycott companies that don't honestly serve
their customers, that don't disclose their alliances, that treat users
like marketing assets. Use open-source software -- software created and
owned by users, with no hidden agendas, no secret alliances and no
back-room marketing deals.

Just because computers were a liberating force in the past doesn't mean
they will be in the future. There is enormous political and economic
power behind the idea that you shouldn't truly own your computer or
your software, despite having paid for it.

This essay originally appeared on Wired.com.
http://www.wired.com/news/columns/1,70802-0.html

Trusted computing:
http://www.schneier.com/crypto-gram-0208.html#1


** *** ***** ******* *********** *************

     Crypto-Gram Reprints



Crypto-Gram is currently in its ninth year of publication.  Back issues
cover a variety of security-related topics, and can all be found on
<http://www.schneier.com/crypto-gram-back.html>.  These are a selection
of articles that appeared in this calendar month in other years.

REAL-ID
http://www.schneier.com/crypto-gram-0505.html#2

Should Terrorism be Reported in the News?
http://www.schneier.com/crypto-gram-0505.html#3

Combating Spam
http://www.schneier.com/crypto-gram-0505.html#15

Warrants as a Security Countermeasure
http://www.schneier.com/crypto-gram-0405.html#1

National Security Consumers
http://www.schneier.com/crypto-gram-0405.html#9

Encryption and Wiretapping
http://www.schneier.com/crypto-gram-0305.html#1

Unique E-Mail Addresses and Spam
http://www.schneier.com/crypto-gram-0305.html#6

Secrecy, Security, and Obscurity
http://www.schneier.com/crypto-gram-0205.html#1

Fun with Fingerprint Readers
http://www.schneier.com/crypto-gram-0205.html#5

What Military History Can Teach Network Security, Part 2
http://www.schneier.com/crypto-gram-0105.html#1

The Futility of Digital Copy Protection
http://www.schneier.com/crypto-gram-0105.html#3

Security Standards
http://www.schneier.com/crypto-gram-0105.html#7

Safe Personal Computing
http://www.schneier.com/crypto-gram-0105.html#8

Computer Security: Will we Ever Learn?
http://www.schneier.com/crypto-gram-0005.html#1

Trusted Client Software
http://www.schneier.com/crypto-gram-0005.html#6

The IL*VEYOU Virus (Title bowdlerized to foil automatic e-mail filters.)
http://www.schneier.com/crypto-gram-0005.html#ilyvirus

The Internationalization of Cryptography
http://www.schneier.com/crypto-gram-9905.html#international

The British discovery of public-key cryptography
http://www.schneier.com/crypto-gram-9805.html#nonsecret


** *** ***** ******* *********** *************

     Identity-Theft Disclosure Laws



California was the first state to pass a law requiring companies that
keep personal data to disclose when that data is lost or stolen. Since
then, many states have followed suit. Now Congress is debating federal
legislation that would do the same thing nationwide.

Except that it won't do the same thing: The federal bill has become so
watered down that it won't be very effective. I would still be in favor
of it -- a poor federal law is better than none -- if it didn't also
pre-empt more-effective state laws, which makes it a net loss.

Identity theft is the fastest-growing area of crime. It's badly named
-- your identity is the one thing that cannot be stolen -- and is
better thought of as fraud by impersonation. A criminal collects enough
personal information about you to be able to impersonate you to banks,
credit card companies, brokerage houses, etc. Posing as you, he steals
your money, or takes a destructive joyride on your good credit.

Many companies keep large databases of personal data that is useful to
these fraudsters. But because the companies don't shoulder the cost of
the fraud, they're not economically motivated to secure those databases
very well. In fact, if your personal data is stolen from their
databases, they would much rather not even tell you: Why deal with the
bad publicity?

Disclosure laws force companies to make these security breaches public.
This is a good idea for three reasons. One, it is good security
practice to notify potential identity theft victims that their personal
information has been lost or stolen. Two, statistics on actual data
thefts are valuable for research purposes. And three, the potential
cost of the notification and the associated bad publicity naturally
leads companies to spend more money on protecting personal information
-- or to refrain from collecting it in the first place.

Think of it as public shaming. Companies will spend money to avoid the
PR costs of this shaming, and security will improve. In economic terms,
the law reduces the externalities and forces companies to deal with the
true costs of these data breaches.

This public shaming needs the cooperation of the press and,
unfortunately, there's an attenuation effect going on. The first major
breach after California passed its disclosure law -- SB1386 -- was in
February 2005, when ChoicePoint sold personal data on 145,000 people to
criminals. The event was all over the news, and ChoicePoint was shamed
into improving its security.

Then LexisNexis exposed personal data on 300,000 individuals. And
Citigroup lost data on 3.9 million individuals. SB1386 worked: we knew
about these security breaches only because of the law.
But the breaches came in increasing numbers, and in larger quantities.
After a while, it was no longer news. And when the press stopped
reporting, the "cost" of these breaches to the companies declined.

Today, the only real cost that remains is the cost of notifying
customers and issuing replacement cards. It costs banks about $10 to
issue a new card, and that's money they would much rather not have to
spend. This is the agenda they brought to the federal bill, cleverly
titled the Data Accountability and Trust Act, or DATA.

Lobbyists attacked the legislation in two ways. First, they went after
the definition of personal information. Only the exposure of very
specific information requires disclosure. For example, the theft of a
database that contained people's first *initial*, middle name, last
name, Social Security number, bank account number, address, phone
number, date of birth, mother's maiden name and password would not have
to be disclosed, because "personal information" is defined as "an
individual's first and last name in combination with ..." certain other
personal data.
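The narrowing is easier to see if you model the definition as code.  The
following is a toy predicate based on my paraphrase of the bill's
language, not its legal text; all field names are my own invention:

```python
def must_disclose(record):
    """Toy model of the bill's trigger: 'personal information' means an
    individual's first AND last name in combination with other data."""
    has_full_name = bool(record.get("first_name")) and bool(record.get("last_name"))
    has_sensitive = any(record.get(k) for k in ("ssn", "bank_account", "password"))
    return has_full_name and has_sensitive

# Everything an identity thief needs -- but a first *initial* instead of
# a first name, so the definition is never triggered.
stolen = {
    "first_initial": "B",
    "middle_name": "Quincy",
    "last_name": "Smith",
    "ssn": "078-05-1120",
    "bank_account": "12345678",
    "phone": "555-0100",
    "dob": "1970-01-01",
    "mothers_maiden": "Jones",
    "password": "hunter2",
}
assert not must_disclose(stolen)
```

The conjunction does all the lobbying work: drop any single required
element from the stolen data and the disclosure duty evaporates.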

Second, lobbyists went after the definition of "breach of security."
The latest version of the bill reads: "The term 'breach of security'
means the unauthorized acquisition of data in electronic form
containing personal information that establishes a reasonable basis to
conclude that there is a significant risk of identity theft to the
individuals to whom the personal information relates."

Get that? If a company loses a backup tape containing millions of
individuals' personal information, it doesn't have to disclose if it
believes there is no "significant risk of identity theft." If it leaves
a database exposed, and has absolutely no audit logs of who accessed
that database, it could claim it has no "reasonable basis" to conclude
there is a significant risk. Actually, the company could point to an ID
Analytics study that showed the probability of fraud to someone who has
been the victim of this kind of data loss to be less than 1 in 1,000 --
which is not a "significant risk" -- and then not disclose the data
breach at all.

Even worse, this federal law pre-empts the 23 existing state laws --
and others being considered -- many of which contain stronger
individual protections. So while DATA might look like a law protecting
consumers nationwide, it is actually a law protecting companies with
large databases *from* state laws protecting consumers.

So in its current form, this legislation would make things worse, not
better.

Of course, things are in flux. They're *always* in flux. The language
of the bill has changed regularly over the past year, as various
committees got their hands on it. There's also another bill, HR3997,
which is even worse. And even if something passes, it has to be
reconciled with whatever the Senate passes, and then voted on again. So
no one really knows what the final language will look like.

But the devil is in the details, and the only way to protect us from
lobbyists tinkering with the details is to ensure that the federal bill
does not pre-empt any state bills: that the federal law is a minimum,
but that states can require more.

That said, disclosure is important, but it's not going to solve
identity theft. As I've written previously, the reason theft of
personal information is so common is that the data is so valuable. The
way to mitigate the risk of fraud due to impersonation is not to make
personal information harder to steal, it's to make it harder to use.

Disclosure laws only deal with the economic externality of data brokers
protecting your personal information. What we really need are laws
prohibiting credit card companies and other financial institutions from
granting credit to someone using your name with only a minimum of
authentication.

But until that happens, we can at least hope that Congress will refrain
from passing bad bills that override good state laws -- and from
helping criminals in the process.

California's SB 1386:
http://info.sen.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_2002
0926_chaptered.html or http://tinyurl.com/dgh0

Existing state disclosure laws:
http://www.pirg.org/consumer/credit/statelaws.htm
http://www.cwalsh.org/cgi-bin/blosxom.cgi/2006/04/20#breachlaws

HR 4127 - Data Accountability and Trust Act:
http://thomas.loc.gov/cgi-bin/query/C?c109:./temp/~c109XvxF76

HR 3997:
http://thomas.loc.gov/cgi-bin/query/C?c109:./temp/~c109gnLQGA

ID Analytics study:
http://www.idanalytics.com/news_and_events/20051208.htm

My essay on identity theft:
http://www.schneier.com/blog/archives/2005/04/mitigating_iden.html

A version of this essay originally appeared on Wired.com:
http://www.wired.com/news/columns/0,70690-0.html


** *** ***** ******* *********** *************

     When "Off" Doesn't Mean Off



According to the specs of the Nintendo Wii (the company's new game machine),
"Wii can communicate with the Internet even when the power is turned
off."  Nintendo accentuates the positive: "This WiiConnect24 service
delivers a new surprise or game update, even if users do not play with
Wii," while ignoring the possibility that Nintendo can deactivate a
game if it chooses to do so, or that someone else can deliver a
different -- not so wanted -- surprise.

We all know that, but what's interesting here is that Nintendo is
changing the meaning of the word "off."  We are all conditioned to
believe that "off" means off, and therefore safe.  But in Nintendo's
case, "off" really means something like "on standby."  If users expect
the Nintendo Wii to be truly off, they need to pull the power plug --
assuming there isn't a battery foiling that tactic.  There seems to be
no way to disconnect the Internet, as the Nintendo Wii is wireless only.

Maybe there is no way to turn the Nintendo Wii off.

There's a serious security problem here, made worse by a bad user
interface.  "Off" should mean off.

http://wii.nintendo.com/hardware.html


** *** ***** ******* *********** *************

     News



It's a provocative headline: "Triple DES Upgrades May Introduce New ATM
Vulnerabilities."  Basically, at the same time that ATM owners are
upgrading their encryption to triple-DES, they're also moving the
communications links from dedicated lines to the Internet.  And while
the protocol encrypts PINs, it doesn't encrypt any of the other
information, such as card numbers and expiration dates.  So it's the
move from dedicated lines to the Internet that's adding the
insecurities, not the triple-DES upgrades.
http://www.paymentsnews.com/2006/04/redspin_triple_.html

Someone filed change-of-address forms with the post office to divert
other people's mail to himself.  170 times.  "Postal Service
spokeswoman Patricia Licata said a credit card is required for security
reasons. 'We have systems in place to prevent this type of occurrence,'
she said, but declined further comment on the specific case until
officials have time to analyze what happened."  Sounds like those
systems don't work very well.
http://www.wvec.com/news/local/stories/wvec_local_041306_mail_scam.31210
0f4.html

A deniable file system:
http://www.schneier.com/blog/archives/2006/04/deniable_file_s.html

Great hoax video: graffiti on Air Force One:
http://www.stillfree.com/
http://abcnews.go.com/Technology/wireStory?id=1875386

The Department of Homeland Security has released a Request for Proposal
-- that's the document asking industry if anyone can do what it wants
-- for the Secure Border Initiative.
http://www.washingtontechnology.com/news/1_1/daily_news/28381-1.html

Stuntz and Solove Debate Privacy and Transparency
http://www.tnr.com/user/nregi.mhtml?i=20060417&s=stuntz041706
http://www.concurringopinions.com/archives/2006/04/william_stuntzs.html#
more or http://tinyurl.com/o4jte
http://www.tnr.com/user/nregi.mhtml?i=20060417&s=stuntz041706
http://www.concurringopinions.com/archives/2006/04/stuntz_responds.html
or http://tinyurl.com/mqrzt

Terrorist travel advisory:  "My son and I woke up Sunday morning and
drove a rented truck to New York City to move his worldly goods into an
apartment there. As we made it to the Holland Tunnel, after traveling
the Tony Soprano portion of the Jersey Turnpike with a blue moon in our
eyes, the woman in the tollbooth informed us that, since 9/11, trucks
were not allowed in the tunnel; we'd have to use the Lincoln Tunnel,
she said. So if you are a terrorist trying to get into New York from
Jersey, be advised that you're going to have to use the Lincoln Tunnel."
http://www.post-gazette.com/pg/06110/683563-294.stm

The Kryptos Sculpture is located in the center of the CIA Headquarters
in Langley, VA.  It was designed in 1990, and contains a four-part
encrypted puzzle.  The first three parts have been solved, but now
we've learned that the second-part solution was wrong and has been
re-solved:
http://www.elonka.com/kryptos/CorrectedK2Announcement.html
http://www.wired.com/news/technology/0,70701-0.html
More on the sculpture:
http://en.wikipedia.org/wiki/Kryptos
http://www.elonka.com/kryptos/
Blog entry URL:
http://www.schneier.com/blog/archives/2006/04/the_kryptos_scu.html

Mafia boss secures his data with Caesar cipher.
http://dsc.discovery.com/news/briefs/20060417/mafiaboss_tec.html
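For reference, this is about as weak as encryption gets.  A minimal
sketch of the classic letter-shift version (the shift value and message
here are my own choices for illustration, not details from the case):

```python
def caesar(text, shift):
    """Shift each letter by `shift` positions, wrapping within the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

ciphertext = caesar("meet at the usual place", 3)

# Breaking it needs no key: try all 26 shifts and eyeball the results.
candidates = [caesar(ciphertext, -s) for s in range(26)]
```

Twenty-six candidate decryptions, one of which is obviously readable:
a cipher that was state of the art two thousand years ago.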

Microsoft Vista's endless security warnings:
http://www.winsupersite.com/reviews/winvista_5308_05.asp
The problem with lots of warning dialog boxes is that they don't
provide security.  Users stop reading them.  They think of them as
annoyances, as an extra click required to get a feature to
work.  Clicking through gets embedded into muscle memory, and when it
actually matters the user won't even realize it.
http://www.codinghorror.com/blog/archives/000571.html
http://west-wind.com/weblog/posts/4678.aspx
These dialog boxes are not security for the user, they're CYA security
*from* the user. When some piece of malware trashes your system,
Microsoft can say: "You gave the program permission to do that; it's
not our fault."  Warning dialog boxes are only effective if the user
has the ability to make intelligent decisions about the warnings.  If
the user cannot do that, they're just annoyances.  And they're
annoyances that don't improve security.
http://blogs.zdnet.com/Ou/?p=209

Digital cameras have unique fingerprints:
http://www.eurekalert.org/pub_releases/2006-04/bu-bur041806.php
Interesting research, but there's one important aspect of this
fingerprint that the article did not talk about: how easy is it to
forge?  Can someone analyze 100 images from a given camera, and then
doctor a pre-existing picture so that it appears to come from that
camera?  My guess is that it can be done relatively easily.
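To see why forgery seems plausible, here is a toy simulation of the
underlying idea: each sensor adds a small fixed noise pattern to every
image, and averaging the residuals of many images recovers it.  (All
numbers are invented; real systems estimate the scene with a denoising
filter, whereas I cheat and assume the scene is known, which only
simplifies the illustration.)

```python
import random

random.seed(42)
PIXELS = 64

# Each camera sensor has a fixed per-pixel noise pattern: the "fingerprint".
fingerprint = [random.gauss(0, 2) for _ in range(PIXELS)]

def shoot(scene):
    """Image = scene + the camera's fixed pattern + random shot noise."""
    return [s + f + random.gauss(0, 8) for s, f in zip(scene, fingerprint)]

def estimate_fingerprint(n_images=100):
    """Average the residuals (image minus scene) over many photos:
    random shot noise averages out, the fixed pattern remains."""
    acc = [0.0] * PIXELS
    for _ in range(n_images):
        scene = [random.uniform(0, 255) for _ in range(PIXELS)]
        image = shoot(scene)
        for i in range(PIXELS):
            acc[i] += image[i] - scene[i]
    return [a / n_images for a in acc]

def corr(a, b):
    """Normalized correlation between two pixel vectors."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

estimate = estimate_fingerprint()
# A forger who can compute `estimate` can add it to any image, making
# that image correlate with -- appear to come from -- this camera.
```

If 100 images suffice to estimate the pattern well enough to identify a
camera, they presumably suffice to plant that pattern somewhere else.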

Kaspersky Labs reports on extortion scams using malware:
http://www.viruslist.com/en/analysis?pubid=184012401#crypto
Among other worms, the article discusses the GpCode.ac worm, which
encrypts data using 56-bit RSA (no, that's not a typo).  The whole
article is interesting reading.
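To put "56-bit RSA" in perspective: a modulus that small can be factored
on any desktop machine in well under a second.  A sketch using Pollard's
rho (the primes here are my own illustration, not the worm's actual key):

```python
import math
import random

def is_prime(n):
    """Deterministic Miller-Rabin; valid for n < 3.3 * 10**24 with these bases."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def next_prime(n):
    while not is_prime(n):
        n += 1
    return n

def pollard_rho(n):
    """Find a nontrivial factor of composite n (expected ~n**0.25 steps)."""
    while True:
        x = y = random.randrange(2, n)
        c = random.randrange(1, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n
            y = (y * y + c) % n          # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:                       # cycle hit without a factor: retry
            return d

p = next_prime(2**28)        # two ~28-bit primes...
q = next_prime(2**28 + 99)
n = p * q                    # ...make a ~56-bit "RSA" modulus
factor = pollard_rho(n)
```

Recovering the private key from a factored modulus is then routine, which
is presumably why later variants of this kind of extortion malware moved
to much larger keys.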

Larry Beinhart makes an interesting case for the elimination of most
government secrecy.
http://www.buzzflash.com/contributors/06/04/con06131.html
He has a good argument, although I think the issue is a bit more
complicated.
http://www.schneier.com/crypto-gram-0205.html#1

"Security Myths and Passwords," by Gene Spafford:
http://www.cerias.purdue.edu/weblogs/spaf/general/post-30

There was a code in the judge's ruling on the Da Vinci Code plagiarism
case.  It was solved way too quickly after it was discovered, because
the judge gave out some really obvious hints.  But you can read about
it here:
http://www.schneier.com/blog/archives/2006/04/da_vinci_code_r.html

As an aside, I am mentioned in The Da Vinci Code.  No, really.  Page 199 of
the American hardcover edition.  "Da Vinci had been a cryptography
pioneer, Sophie knew, although he was seldom given credit.  Sophie's
university instructors, while presenting computer encryption methods
for securing data, praised modern cryptologists like Zimmermann and
Schneier but failed to mention that it was Leonardo who had invented
one of the first rudimentary forms of public key encryption centuries
ago."  That's right.  I am a realistic background detail.
http://fishbowl.pastiche.org/2004/07/06/house_of_cards

Technology Review has an interesting article discussing some of the
technologies used by the NSA in its warrantless wiretapping program,
some of them from the killed Total Information Awareness (TIA) program.
http://www.technologyreview.com/read_article.aspx?ch=infotech&sc=&id=167
41&pg=1 or http://tinyurl.com/ruafx

John Dvorak argues that Internet Explorer was Microsoft's greatest
mistake ever.  Certainly its decision to tightly integrate IE with the
operating system -- done as an anti-competitive maneuver against
Netscape during the Browser Wars -- has resulted in some enormous
security problems that Microsoft has still not recovered from.  Not
even with the introduction of IE7.
http://www.pcmag.com/print_article2/0,1217,a=176507,00.asp

Security in comics: attackers are adaptable:
http://www.comics.com/comics/hedge/archive/hedge-20060423.html

We've talked about counterfeit money, counterfeit concert tickets,
counterfeit police credentials, and counterfeit police
departments.  Here's a story about a counterfeit company.
http://www.iht.com/articles/2006/04/27/business/nec.php

Verizon has announced that it has activated the Access Overload Control
(ACCOLC) system, allowing some cell phones to have priority access to
the network, even when the network is overloaded.  Sounds like you're
going to have to enter some sort of code into your handset.  I wonder
how long before someone hacks that system.
http://www.pcsintel.com/content/view/1293/0/

An arson squad blows up a news rack, mistaking a promotion for Tom
Cruise's new movie for a bomb.  Really; you can't make this kind of
stuff up.
http://www.editorandpublisher.com/eandp/news/article_display.jsp?vnu_con
tent_id=1002425411 or http://tinyurl.com/n3286

Assault weapon that passes through X-ray machines.
http://www.promoinnovations.com/xray.htm

A man sues Compaq for false advertising.  He bought the computer
because it was advertised as totally secure.  But after he committed
some crimes and the FBI got his computer, they were able to recover his
data.  This is what I said in the article: "Unfortunately, this
probably isn't a great case.  Here's a man who's not going to get much
sympathy. You want a defendant who bought the Compaq computer, and
then, you know, his competitor, or a rogue employee, or someone who
broke into his office, got the data. That's a much more sympathetic
defendant."
http://hartfordadvocate.com/gbase/News/content?oid=oid:153106

Infant identity theft victim:
http://www.abcnews.go.com/US/story?id=155878&page=1

An improv group in New York dressed up like Best Buy employees and went
into a store, secretly videotaping the results.  My favorite
part:  "Security guards and managers started talking to each other
frantically on their walkie-talkies and headsets. 'Thomas Crown Affair!
Thomas Crown Affair!,' one employee shouted. They were worried that we
were using our fake uniforms to stage some type of elaborate heist. 'I
want every available employee out on the floor RIGHT NOW!'"
http://www.improveverywhere.com/mission_view.php?mission_id=57

Stealing cars with laptops:
http://www.leftlanenews.com/2006/05/03/gone-in-20-minutes-using-laptops-
to-steal-cars/ or http://tinyurl.com/mkr9s
http://slashdot.org/articles/06/05/03/1928256.shtml

The rapper MC Plus+ has written a song about cryptography, "Alice and
Bob."  It mentions DES, AES, Blowfish, RSA, SHA-1, and more.  And me!
http://www.cs.purdue.edu/homes/anavabi/mp3/MC%20Plus+%20-%20Algorhythms%
20-%20Alice%20and%20Bob.mp3 or http://tinyurl.com/8jov2
Here's an article about "geeksta rap."
http://www.wired.com/news/culture/0,1284,67970,00.html

The DHS secretly shares European air passenger data in violation of
agreement:
http://www.aclu.org/privacy/spying/25335prs20060425.html

Shell has suspended its chip-and-pin payment system in the UK, after
fraudsters stole over one million pounds.  Lots of details on my blog:
http://www.schneier.com/blog/archives/2006/05/shell_suspends.html

According to this article, the ultimate terrorist threat is flying
robot drones.  The article really pegs the movie-plot threat hype-meter.
http://www.physorg.com/news66197469.html

A reporter finds an old British Airways boarding pass, and proceeds to
use it to find everything else about the person.
http://www.guardian.co.uk/g2/story/0,,1766138,00.html
Notice the economic pressures:  "'The problem here is that a commercial
organisation is being given the task of collecting data on behalf of a
foreign government, for which it gets no financial reward, and which
offers no business benefit in return,' says Laurie. 'Naturally, in such
a case, they will seek to minimise their costs, which they do by
handing the problem off to the passengers themselves. This has the neat
side-effect of also handing off liability for data errors.'"

Five stories of RFID hacking:
http://www.wired.com/wired/archive/14.05/rfid.html

And IBM thinks it has a solution: a removable tag that reduces the
range of the RFID chip:
http://wired.com/news/technology/0,70793-0.html
Why not disable it entirely?

Serious computer problems inside the NSA:
http://www.baltimoresun.com/news/custom/attack/bal-te.nsa26feb26,0,63111
75.story or http://tinyurl.com/rgrso

Meanwhile, the NSA is building a massive traffic-analysis database on
Americans' calling patterns:
http://www.usatoday.com/news/washington/2006-05-10-nsa_x.htm
http://www.prospect.org/weblog/2006/05/post_336.html#002317
http://glenngreenwald.blogspot.com/2006/05/no-need-for-congress-no-need-
for.html
http://www.orinkerr.com/2006/05/11/thoughts-on-the-legality-of-the-lates
t-nsa-surveillance-program/
http://www.orinkerr.com/2006/05/12/more-thoughts-on-the-legality-of-the-
nsa-call-records-program/

Major vulnerability found in Diebold election machines.  This one is a
big deal.
http://www.insidebayarea.com/ci_3805089
http://www.blackboxvoting.org/BBVtsxstudy.pdf

Comparing the security of election machines with the security of slot
machines:
http://www.washingtonpost.com/wp-dyn/content/graphic/2006/03/16/GR200603
1600213.html or http://tinyurl.com/gda98

Thief disguises himself as a museum guard and tricks employees into
giving him 200,000 euros:
http://today.reuters.com/news/articlenews.aspx?type=oddlyEnoughNews&stor
yid=2006-05-03T204308Z_01_L02306327_RTRUKOC_0_US-ITALY-THIEF.xml or
http://tinyurl.com/j3q6k

Fascinating first-person account of being on the TSA's watch list:
http://arstechnica.com/news.ars/post/20060506-6767.html

Reconceptualizing national intelligence:
http://www.fas.org/blog/secrecy/2006/05/curing_analytic_pathologies.html
 or http://tinyurl.com/lc2of

Public-key cryptography for digital notarization in Pennsylvania.
http://www.nationalnotary.org/news/index.cfm?Text=newsNotary&newsID=851
or http://tinyurl.com/r9z4w
http://www.eweek.com/article2/0,1895,1955701,00.asp


** *** ***** ******* *********** *************

     RFID Cards and Man-in-the-Middle Attacks



Recent articles about a proposed US-Canada and US-Mexico travel
document (kind of like a passport, but less useful), with an embedded
RFID chip that can be read up to 25 feet away, have once again made
RFID security newsworthy.

My views have not changed.  The most secure solution is a smart card
that only works in contact with a reader; RFID is much more risky.  But
if we're stuck with RFID, the combination of shielding for the chip,
basic access control security measures, and some positive action by the
user to get the chip to operate is a good one.  The devil is in the
details, of course, but those are good starting points.

And when you start proposing chips with a 25-foot read range, you need
to worry about man-in-the-middle attacks.  An attacker could
potentially impersonate the card of a nearby person to an official
reader, just by relaying messages to and from that nearby person's card.

Here's how the attack would work.  In this scenario, customs Agent
Alice has the official card reader.  Bob is the innocent traveler, in
line at some border crossing.  Mallory is the malicious attacker, ahead
of Bob in line at the same border crossing, who is going to impersonate
Bob to Alice.  Mallory's equipment includes an RFID reader and transmitter.

Assume that the card has to be activated in some way.  Maybe the cover
has to be opened, or the card taken out of a sleeve.  Maybe the card
has a button to push in order to activate it.  Also assume the card has
some challenge-response security protocol and an encrypted key exchange
protocol of some sort.

1. Alice's reader sends a message to Mallory's RFID chip.

2. Mallory's reader/transmitter receives the message, and rebroadcasts
it to Bob's chip.  (Bob is somewhere else, out of Alice's range.)

3. Bob's chip responds normally to a valid message from Alice's
reader.  He has no way of knowing that Mallory relayed the message.

4. Mallory's reader/transmitter receives Bob's message and rebroadcasts
it to Alice.  Alice has no way of knowing that the message was relayed.

5. Mallory continues to relay messages back and forth between Alice and
Bob.
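The five steps above can be sketched as a toy simulation.  This is not
the actual card protocol (which isn't public); it just assumes a
generic challenge-response scheme keyed with a secret Mallory doesn't
know, and shows that the relay succeeds anyway:

```python
import hashlib
import hmac
import secrets

# Key shared between Bob's card and the border-control back end.
# Mallory never learns it.
KEY = secrets.token_bytes(16)

def card_respond(challenge: bytes) -> bytes:
    """Bob's card: answers any well-formed challenge it hears.
    It has no way of knowing who relayed the challenge to it."""
    return hmac.new(KEY, challenge, hashlib.sha256).digest()

def reader_authenticate(transmit) -> bool:
    """Alice's reader: fresh random challenge, verify the MACed reply.
    `transmit` is whatever radio path carries the messages."""
    challenge = secrets.token_bytes(16)
    response = transmit(challenge)
    expected = hmac.new(KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

def mallory_relay(challenge: bytes) -> bytes:
    """Mallory: just forwards bytes both ways.  She can't read the
    messages and doesn't need to -- she's acting as an amplifier."""
    return card_respond(challenge)

# Honest session: Bob's card answers Alice directly.
assert reader_authenticate(card_respond)

# Relayed session: Alice accepts Mallory, believing she is Bob.
assert reader_authenticate(mallory_relay)
```

The relay works against any protocol whose only check is "the party
answering my challenges holds the right key," which is exactly why
encryption and time stamps don't help.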

Defending against this attack is hard.  (I talk more about the attack
in Applied Cryptography, Second Edition, page 109.)  Time stamps don't
help.  Encryption doesn't help.  It works because Mallory is simply
acting as an amplifier.  Mallory might not be able to read the
messages.  He might not even know who Bob is.  But he doesn't
care.  All he knows is that Alice thinks he's Bob.

Precise timing can catch this attack, because of the extra delay that
Mallory's relay introduces.  But I don't think this is part of the spec.
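The arithmetic behind the timing defense (often called distance
bounding) is simple: radio covers a 25-foot round trip in roughly 50
nanoseconds, so a reader that enforces a deadline of propagation time
plus the card's processing time will reject a relay, which adds its
own forwarding latency on top.  A sketch, with the processing and
relay delays as illustrative made-up numbers:

```python
C = 299_792_458.0           # speed of light, m/s
MAX_RANGE_M = 25 * 0.3048   # the spec's 25-foot read range, in metres
PROCESSING_NS = 50.0        # assumed card processing time (illustrative)

def max_round_trip_ns() -> float:
    """Deadline for a legitimate reply at the maximum read range."""
    return 2 * MAX_RANGE_M / C * 1e9 + PROCESSING_NS

def accept(reply_time_ns: float) -> bool:
    """Reject any reply that arrives later than physics allows."""
    return reply_time_ns <= max_round_trip_ns()

# An in-range card: ~51 ns of propagation plus processing.
honest_ns = 2 * MAX_RANGE_M / C * 1e9 + PROCESSING_NS

# Mallory's relay adds, say, 500 ns of forwarding latency.
relayed_ns = honest_ns + 500.0

assert accept(honest_ns)
assert not accept(relayed_ns)
```

The catch is that the reader needs nanosecond-resolution timing and a
card with a tightly bounded processing time, neither of which appears
to be in the spec.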

The attack can be easily countered if Alice looks at Mallory's card and
compares the information printed on it with what she's receiving over
the RFID link.  But near as I can tell, the point of the 25-foot read
distance is so cards can be authenticated in bulk, from a distance.

According to the news.com article: "Homeland Security has said, in a
government procurement notice posted in September, that "read ranges
shall extend to a minimum of 25 feet" in RFID-equipped identification
cards used for border crossings. For people crossing on a bus, the
proposal says, 'the solution must sense up to 55 tokens.'"

If Mallory is on that bus, he can impersonate any nearby Bob who
activates his RFID card early.  And at a crowded border crossing, the
odds of some Bob doing that are pretty good.

From the Federal Computer Week article: "If that were done, the PASS
system would automatically screen the cardbearers against criminal
watch lists and put the information on the border guard's screen by the
time the vehicle got to the station, Williams said."

And would predispose the guard to think that everything's okay, even if
it isn't.

I don't think people are thinking this one through.

http://news.com.com/New+RFID+travel+cards+could+pose+privacy+threat/2100
-1028_3-6062574.html or http://tinyurl.com/le82d
http://www.fcw.com/article94113-04-18-06-Web

My views on RFID identity cards:
http://www.schneier.com/blog/archives/2005/08/rfid_passport_s_1.html


** *** ***** ******* *********** *************

     Software Failure Causes Airport Evacuation



Last month I wrote about airport passenger screening, and mentioned
that the X-ray equipment inserts "test" bags into the stream in order
to keep screeners more alert.  That system failed pretty badly earlier
this week at Atlanta's Hartsfield-Jackson Airport, when a false alarm
resulted in a two-hour evacuation of the entire airport.

The screening system injects test images onto the screen.  Normally the
software flashes the words "This is a test" on the screen after a brief
delay, but this time the software failed to indicate that.  The
screener noticed the image (of a "suspicious device," according to CNN)
and, per procedure, screeners manually checked the bags on the conveyor
belt for it.  They couldn't find it, of course, but they evacuated the
airport and spent two hours vainly searching for it.
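One way the test-injection subsystem could have been designed to fail
well: have the injector keep its own log of outstanding tests, so that
an unresolvable "suspicious device" can be reconciled against the log
before anyone escalates.  This is a hypothetical redesign, not how the
TSA's system works:

```python
import itertools

class TestImageInjector:
    """Hypothetical fail-safe design: even if the 'This is a test'
    banner never appears, the checkpoint can query the injector and
    learn the image was synthetic before evacuating the airport."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.pending = set()

    def inject(self) -> int:
        """Insert a test image; tag it with an ID carried in its
        metadata so it can always be traced back."""
        test_id = next(self._ids)
        self.pending.add(test_id)
        return test_id

    def was_test(self, test_id: int) -> bool:
        """Called when a flagged image can't be matched to a real bag."""
        if test_id in self.pending:
            self.pending.discard(test_id)
            return True
        return False

injector = TestImageInjector()
tid = injector.inject()
# The banner fails to display; before escalating, the checkpoint
# reconciles the alarm locally -- no cascade, no evacuation.
assert injector.was_test(tid)
# An image with no matching test record still escalates normally.
assert not injector.was_test(999)
```

The point is the containment: a single glitch resolves at one
checkpoint instead of magnifying into a system-wide shutdown.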

Hartsfield-Jackson is the country's busiest passenger airport.  It's
Delta's hub city.  The delays were felt across the country for the rest
of the day.

Okay, so what went wrong here?  Clearly the software failed.  Just as
clearly the screener procedures didn't fail -- everyone did what they
were supposed to do.

What is less obvious is that the system failed.  It failed, because it
was not designed to fail well.  A small failure -- in this case, a
software glitch in a single X-ray machine -- cascaded in such a way as
to shut down the entire airport.  This kind of failure magnification is
common in poorly designed security systems.  Better would be for there
to be individual X-ray machines at the gates -- I've seen this design
at several European airports -- so that when there's a problem the
effects are restricted to that gate.

Of course, this distributed security solution would be more
expensive.  But I'm willing to bet it would be cheaper overall, taking
into account the cost of occasionally clearing out an airport.

http://www.cnn.com/2006/US/04/20/atlanta.airport/index.html

What I wrote last month:
http://www.schneier.com/blog/archives/2006/03/airport_passeng.html


** *** ***** ******* *********** *************

     Counterpane News



On May 23, Schneier will be opening a new speaking series by the ACLU
with a talk on "The Future of Privacy."
http://www.aclu.org/privacy/25551res20060512.html

Schneier will be speaking at the Gartner IT Security Summit in
Washington DC, June 5-7:
http://www.gartner.com/2_events/conferences/sec12.jsp

Schneier will be speaking at the ACLU New Jersey Membership Conference:
https://www.aclu-nj.org/events/aclunjmembershipconference

Schneier will be speaking at the ACLU Vermont Privacy Conference:
http://www.acluvt.org/news/display.php?sid=1145047166&PHPSESSID=31bdcefa
418904b0caab1ffbde1f8a64 or http://tinyurl.com/pdzyy

Tipping Point is offering Managed Security Services through an alliance
with Counterpane:
http://www.counterpane.com/pr-20060501.html


** *** ***** ******* *********** *************

     Microsoft's BitLocker



BitLocker Drive Encryption is a new security feature in Windows Vista,
designed to work with the Trusted Platform Module (TPM).  Basically, it
encrypts the C drive with a computer-generated key.  In its basic mode,
an attacker can still access the data on the drive by guessing the
user's password, but would not be able to get at the drive by booting
the disk up using another operating system, or removing the drive and
attaching it to another computer.

There are several modes for BitLocker.  In the simplest mode, the TPM
stores the key and the whole thing happens completely invisibly.  The
user does nothing differently, and notices nothing different.

The BitLocker key can also be stored on a USB drive.  Here, the user
has to insert the USB drive into the computer during boot.  Then
there's a mode that uses a key stored in the TPM and a key stored on a
USB drive.  And finally, there's a mode that uses a key stored in the
TPM and a four-digit PIN that the user types into the computer.  This
happens early in the boot process, when there's still ASCII text on the
screen.

Note that if you configure BitLocker with a USB key or a PIN, password
guessing doesn't work.  BitLocker doesn't even let you get to a
password screen to try.
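The reason guessing doesn't work in these modes is structural: the
volume key is derived from both the TPM-held secret and the external
factor, so without both, the key never exists anywhere to guess
against.  BitLocker's real key derivation isn't public in this detail;
the following is a generic two-factor sketch, not Microsoft's scheme:

```python
import hashlib
import secrets

def derive_volume_key(tpm_secret: bytes, pin: str) -> bytes:
    """Toy two-factor derivation (NOT BitLocker's actual algorithm):
    stretch the PIN with the TPM-held secret as salt.  An attacker
    who removes the drive never sees tpm_secret, so there is nothing
    to mount an offline PIN-guessing attack against."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), tpm_secret, 100_000)

tpm_secret = secrets.token_bytes(32)  # sealed inside the TPM, never on disk

key_right = derive_volume_key(tpm_secret, "1234")
key_wrong = derive_volume_key(tpm_secret, "9999")
assert key_right != key_wrong
assert len(key_right) == 32  # 256-bit derived key
```

In the real system the TPM also rate-limits attempts, so even a short
PIN can't be brute-forced online by someone with the machine in hand.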

For most people, basic mode is the best.  People will keep their USB
key in their computer bag with their laptop, so it won't add much
security.  But if you can force users to attach it to their key chains
-- remember that you only need the key to boot the computer, not to
operate the computer -- and convince them to go through the trouble of
sticking it in their computer every time they boot, then you'll get a
higher level of security.

There is a recovery key: optional but strongly encouraged.  It is
automatically generated by BitLocker, and it can be sent to some
administrator or printed out and stored in some secure location.  There
are ways for an administrator to set group policy settings mandating
this key.

There aren't any back doors for the police, though.

You can get BitLocker to work in systems without a TPM, but it's
kludgy.  You can only configure it for a USB key.  And it only will
work on some hardware: because BitLocker starts running before any
device drivers are loaded, the BIOS must recognize USB drives in order
for BitLocker to work.

Encryption particulars:  The default data encryption algorithm is
AES-128-CBC with an additional diffuser. The diffuser is designed to
protect against ciphertext-manipulation attacks, and is independently
keyed from AES-CBC so that it cannot damage the security you get from
AES-CBC.   Administrators can select the disk encryption algorithm
through group policy.  Choices are 128-bit AES-CBC plus the diffuser,
256-bit AES-CBC plus the diffuser, 128-bit AES-CBC, and 256-bit
AES-CBC.  (My advice: stick with the default.)  The key management
system uses 256-bit keys wherever possible. The only place where a
128-bit key limit is hard-coded is the recovery key, which is 48 digits
(including checksums).  It's shorter because it has to be typed in
manually; typing in 96 digits will piss off a lot of people -- even if
it is only for data recovery.
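The arithmetic on the 48-digit recovery key works out neatly under one
commonly described encoding, which I'll assume here without vouching
for it being Microsoft's exact format: eight groups of six digits,
each group a multiple of 11 encoding 16 key bits, giving 128 bits of
key plus a per-group checksum.  A toy generator under that assumption:

```python
import secrets

def toy_recovery_password() -> str:
    """Generate a 48-digit recovery password in an ASSUMED format
    (eight 6-digit groups, each 11 * a 16-bit value).  The factor of
    11 acts as a checksum: a mistyped group almost never stays a
    multiple of 11, so typos are caught group by group."""
    groups = []
    for _ in range(8):
        chunk = secrets.randbits(16)        # 16 key bits per group
        groups.append(f"{chunk * 11:06d}")  # max 720885, always 6 digits
    return "-".join(groups)

pw = toy_recovery_password()
assert len(pw.replace("-", "")) == 48          # 48 digits total
assert all(int(g) % 11 == 0 for g in pw.split("-"))  # checksums hold
```

Eight groups of 16 bits recovers the 128-bit limit the text mentions,
which is why this is the one place the key-management system can't use
256-bit keys.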

So, does this destroy dual-boot systems?  Not really.  If you have
Vista running and then set up a dual-boot system, BitLocker will
consider this sort of change to be an attack and refuse to run.  But then you
can use the recovery key to boot into Windows, then tell BitLocker to
take the current configuration -- with the dual boot code -- as
correct.  After that, your dual boot system will work just fine, or so
I've been told.  You still won't be able to share any files on your C
drive between operating systems, but you will be able to share files on
any other drive.

The problem is that it's impossible to distinguish between a legitimate
dual boot system and an attacker trying to use another OS -- whether
Linux or another instance of Vista -- to get at the volume.

BitLocker is not a DRM system.  However, it is straightforward to turn
it into a DRM system.  Simply give programs the ability to require that
files be stored only on BitLocker-enabled drives, and then only be
transferable to other BitLocker-enabled drives.  How easy this would be
to implement, and how hard it would be to subvert, depends on the
details of the system.

BitLocker is also not a panacea.  But it does mitigate a specific but
significant risk: the risk of attackers getting at data on drives
directly.  It allows people to throw away or sell old drives without
worry.  It allows people to stop worrying about their drives getting
lost or stolen.  It stops a particular attack against data.

Right now BitLocker is only in the Ultimate and Enterprise editions of
Vista.  It's a feature that is turned off by default.  It is also
Microsoft's first TPM application.  Presumably it will be enhanced in
the future: allowing the encryption of other drives would be a good
next step, for example.

http://www.microsoft.com/technet/windowsvista/library/help/b7931dd8-3152
-4d3a-a9b5-84621660c5f5.mspx?mfr=true or http://tinyurl.com/fywd7
http://www.microsoft.com/technet/windowsvista/library/c61f2a12-8ae6-4957
-b031-97b4d762cf31.mspx or http://tinyurl.com/h4nc8

Niels Ferguson on back doors:
http://blogs.msdn.com/si_team/archive/2006/03/02/542590.aspx

BitLocker and dual boot systems:
http://www.theregister.co.uk/2006/04/27/schneier_infosec/
http://arstechnica.com/journals/microsoft.ars/2006/4/28/3782


** *** ***** ******* *********** *************

     The Security Risk of Special Cases



In Beyond Fear, I wrote about the inherent security risks of exceptions
to a security policy.  Here's an example, from airport security in Ireland.

Police officers are permitted to bypass airport security at the Dublin
Airport.  They flash their ID, and walk around the checkpoints.

"A female member of the airport search unit is undergoing re-training
after the incident in which a Department of Transport inspector passed
unchecked through security screening.

"It is understood that the department official was waved through
security checks having flashed an official badge. The inspector
immediately notified airport authorities of a failure in vetting
procedures. Only gardai are permitted to pass unchecked through security."

There are two ways this failure could have happened.  One, the
security person could have thought that Department of Transport
officials have the same privileges as police officers.  And two, the security
person could have thought she was being shown a police ID.

This could have just as easily been a bad guy showing a fake police
ID.  My guess is that the security people don't check them all that
carefully.

The meta-point is that exceptions to security are themselves security
vulnerabilities.  As soon as you create a system by which some people
can bypass airport security checkpoints, you invite the bad guys to try
and use that system.  There are reasons why you might want to create
those alternate paths through security, of course, but the trade-offs
should be well thought out.

http://archives.tcm.ie/businesspost/2006/04/16/story13502.asp


** *** ***** ******* *********** *************

     Comments from Readers



There are hundreds of comments -- many of them interesting -- on these
topics on my blog. Search for the story you want to comment on, and
join in.

http://www.schneier.com/blog


** *** ***** ******* *********** *************

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses,
insights, and commentaries on security: computer and otherwise.  You
can subscribe, unsubscribe, or change your address on the Web at
<http://www.schneier.com/crypto-gram.html>.  Back issues are also
available at that URL.

Comments on CRYPTO-GRAM should be sent to
schneier at counterpane.com.  Permission to print comments is assumed
unless otherwise stated.  Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who
will find it valuable.  Permission is granted to reprint CRYPTO-GRAM,
as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is the author of
the best sellers "Beyond Fear," "Secrets and Lies," and "Applied
Cryptography," and an inventor of the Blowfish and Twofish
algorithms.  He is founder and CTO of Counterpane Internet Security
Inc., and is a member of the Advisory Board of the Electronic Privacy
Information Center (EPIC).  He is a frequent writer and lecturer on
security topics.  See <http://www.schneier.com>.

Counterpane is the world's leading protector of networked information -
the inventor of outsourced security monitoring and the foremost
authority on effective mitigation of emerging IT threats. Counterpane
protects networks for Fortune 1000 companies and governments
world-wide.  See <http://www.counterpane.com>.

Crypto-Gram is a personal newsletter.  Opinions expressed are not
necessarily those of Counterpane Internet Security, Inc.

Copyright (c) 2006 by Bruce Schneier.
