cypherpunks-legacy
Link: http://slashdot.org/article.pl?sid=04/10/11/0239205
Posted by: timothy, on 2004-10-11 05:29:00
from the don't-look-just-tell dept.
[1]MinimeMongo writes that the "Associated Press reports that China's
police ministry on Sunday [2]handed out rewards of up to $240 to
people who reported pornographic Web sites in a campaign to stamp out
online smut...The online crackdown is part of a sweeping official
morality campaign launched this year on orders from communist
leaders."
References
1. mailto:6cgi-9w09.xemaps@com
2.
http://www.newsday.com/technology/business/wire/sns-ap-china-porn-rewards,0…
812553.story?coll=sns-ap-technology-headlines
----- End forwarded message -----
--
Eugen* Leitl <a href="http://leitl.org">leitl</a>
______________________________________________________________
ICBM: 48.07078, 11.61144 http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
http://moleculardevices.org http://nanomachines.net
Been looking for a distributed filesystem with no luck; any ideas?
Distributed meaning that there is no single central access point.
So the concept of mounting might mean connecting to whichever roving set
of nodes is able to service it, or to some underlying graph of them,
something like:
mount -t dht dht://<seed_node> /dist1
Users add their free block devices to the global backing store, which
was initialized with certain ZFS-like integrity and redundancy
guarantees.
Files would never vanish unless there were no longer enough blocks
in the backing store to meet the init-time guarantees.
Users could either copy their hierarchies into the space, or attach
them into the space for continued local maintenance.
The one-time init settings of a space could include whether PKI-recognized
root users may maintain the overall hierarchy. UIDs might be the
inserting node's ID.
There may need to be a voting authority on file/tree expiry under
space pressure, perhaps Bitcoin-like, with the metrics established
at init time.
Users could add their block devices to whatever pool has the metrics
they like.
Anonymity and crypto would provide an incentive to donate resources,
since unlike, say, BitTorrent, the absence of legal fear means no
hit-and-run behavior is required.
I don't really know what it might look like. Just that it needs
SHA-2/3 integrity, redundancy, and file-lifetime guarantees. It needs
to be global, anonymous, and usable as a file system. And it must somehow
deal with abusive fill such as dd if=/dev/zero of=zero, which implies
some kind of moderated hierarchies appointed by the initializing
entity.
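As a rough sketch of the integrity and redundancy piece (illustrative
only: the dict-like DHT interface with put(key, value) and get(key),
the block size, and the replica count are assumptions, not part of any
existing design):

import hashlib

BLOCK_SIZE = 64 * 1024   # 64 KiB blocks; an arbitrary illustrative choice
REPLICAS = 3             # redundancy level fixed at "init time"

def store_file(dht, data):
    """Split data into blocks and store each under its SHA-256 digest."""
    manifest = []
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        key = hashlib.sha256(block).hexdigest()
        for replica in range(REPLICAS):
            # A real DHT would place the replicas on distinct nodes.
            dht.put("%s/%d" % (key, replica), block)
        manifest.append(key)
    return manifest

def read_file(dht, manifest):
    """Reassemble a file, verifying every block against its digest."""
    out = bytearray()
    for key in manifest:
        for replica in range(REPLICAS):
            block = dht.get("%s/%d" % (key, replica))
            if block and hashlib.sha256(block).hexdigest() == key:
                out.extend(block)
                break
        else:
            raise IOError("block %s lost: redundancy guarantee broken" % key)
    return bytes(out)

Filesystem semantics, expiry voting, and anonymity would all sit above
a layer like this.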
AFS is nice in that users can bolt their filespace into the tree, and
it has filesystem semantics.
ZFS/Btrfs are nice due to their SHA-256 integrity, RAID-Z redundancy,
and simple backing-block-device model.
FreeNet/GnuNet/BitTorrent and all other 'filesharing' protocols are
no good because there is no guarantee that files will not vanish.
And they have no filesystem semantics, only push/fetch.
RedHat GFS / DragonFly HAMMER are interesting as distributed
filesystems in which real work can be done on live files.
Tahoe-LAFS is nice due to the ability to add in block devices, but no
good because of the central access point. Perhaps that could be distributed?
Phantom/I2P/Tor could be used as the backend IP transport.
http://en.wikipedia.org/wiki/List_of_file_systems#Distributed_parallel_faul…
The only thing that makes it worthwhile is that a lot of people
have free space, want to give and get data safely, and don't want
to see their work in populating it wasted... so it can't go away.
_______________________________________________
p2p-hackers mailing list
p2p-hackers(a)lists.zooko.com
http://lists.zooko.com/mailman/listinfo/p2p-hackers
----- End forwarded message -----
--
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
Last I heard, Sealand was defunct. I remember the hosting outfit HavenCo
went dark; I thought Sealand shut down too.
On Oct 11, 2012 10:59 AM, "jamie rishaw" <j(a)arpa.com> wrote:
> +++
> ATH0
>
> http://goo.gl/EdN3C [SealandGov.org]
> also,
> http://www.guardian.co.uk/uk/2012/oct/10/prince-sealand-dies
>
> -j
> --
> "sharp, dry wit and brash in his dealings with contestants." - Forbes
> /* - teh jamie. ; uri -> http://about.me/jgr */
>
> California Voter? Vote YES on Prop 34. http://YesOn34.org/
>
----- End forwarded message -----
--
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
On 10/3/05, Jason Holt <jason(a)lunkwill.org> wrote:
>
> More thoughts regarding the tokens vs. certs decision, and also multi-use:
This is a good summary of the issues. With regard to turning client
certs on and off: from many years of experience with anonymous and
pseudonymous communication, the big usability problem is remembering
which mode you are in - whether you are identified or anonymous. This
relates to the technical problem of preventing data from one mode
leaking over into the other.
The best solution is to use separate logins for the two modes. This
prevents any technical leakage such as cookies or certificates.
Separate desktop pictures and browser skins can be selected to provide
constant cues about the mode. Using this method, the user would not
need to be asked on every certificate use, so that problem with
certs would not arise.
(As far as the Chinese dissident using net cafes, if they are using
Tor at all it might be via a USB token like the one (formerly?)
available from virtualprivacymachine.com. The browser on the token can
be configured to hold the cert, making it portable.)
Network eavesdropping should not be a major issue for a pseudonym
server. Attackers would have little to gain for all their work. The
user is accessing the server via Tor so their anonymity is still
protected.
Any solution which waits for Wikimedia to make changes to their
software will probably be long in coming. When Jimmy Wales was asked
whether their software could allow logins for "trusted" users from
otherwise blocked IPs, he didn't have any idea. The technical people
are apparently in a separate part of the organization. Even if Jimmy
endorsed an idea for changing Wikipedia, he would have to sell it to
the technical guys, who would then have to implement and test it in
their Wiki code base, then it would have to be deployed in Wikipedia
(which is after all their flagship product and one which they would
want to be sure not to break).
Even once this happened, the problem is only solved for that one case
(possibly also for other users of the Wiki code base). What about
blogs or other web services that may decide to block Tor? It would be
better to have a solution which does not require customization of the
web service software. That approach tries to make the Tor tail wag the
Internet dog.
The alternative of running a pseudonym based web proxy that only lets
"good" users pass through will avoid the need to customize web
services on an individual basis, at the expense of requiring a
pseudonym quality administrator who cancels nyms that misbehave. For
forward secrecy, this service would expunge its records of which nyms
had been active, after a day or two (long enough to make sure no
complaints are going to come back).
As far as the Unlinkable Serial Transactions proposal, the gist of it
is to issue a new blinded token whenever one is used. That's a clever
idea, but it is not adequate for this situation, because abuse
information is not available until after the fact. By the time a
complaint arises the miscreant will have long ago received his new
blinded token and the service will have no way to stop him from
continuing to use it.
I could envision a complicated system whereby someone could use a
token on Monday to access the net, then on Wednesday they would become
eligible to exchange that token for a new one, provided that it had
not been black-listed due to complaints in the interim. This adds
considerable complexity, including the need to supply people with
multiple initial tokens so that they could do multiple net accesses
while waiting for their tokens to be eligible for exchange; the risk
that exchange would often be followed immediately by use of the new
token, harming unlinkability; the difficulty in fully black-listing a
user who has multiple independent tokens, when each act of abuse
essentially just takes one of his tokens away from him. Overall this
would be too cumbersome and problematic to use for this purpose.
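The bookkeeping behind that delayed-exchange idea is easy to sketch; the
hard part, the blinding that preserves unlinkability, is deliberately
omitted here, and the class name and the two-day window are illustrative
assumptions only:

import secrets, time

EXCHANGE_DELAY = 2 * 24 * 3600         # the two-day complaint window

class NymServer:
    def __init__(self):
        self.spent = {}                # token -> time it was used
        self.blacklist = set()         # tokens flagged by later complaints

    def use(self, token):
        """Accept a token for one net access (the Monday step)."""
        if token in self.blacklist or token in self.spent:
            return False
        self.spent[token] = time.time()
        return True

    def exchange(self, old_token):
        """Trade a spent, unblacklisted token for a fresh one (Wednesday)."""
        used_at = self.spent.get(old_token)
        if used_at is None or old_token in self.blacklist:
            return None
        if time.time() - used_at < EXCHANGE_DELAY:
            return None                # complaint window still open
        return secrets.token_hex(16)   # a real scheme would blind-sign this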
Providing forward secrecy by having the nym-based web proxy erase its
records every two days is certainly less secure than doing it by
cryptographic means, but at the same time it is more secure than
trusting every web service out there to take similar actions to
protect its clients. Until a clean and unencumbered technological
approach is available, this looks like a reasonable compromise.
CP
---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo(a)metzdowd.com
----- End forwarded message -----
--
Eugen* Leitl <a href="http://leitl.org">leitl</a>
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
CRYPTO-GRAM
November 15, 2009
by Bruce Schneier
Chief Security Technology Officer, BT
schneier(a)schneier.com
http://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit
<http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at
<http://www.schneier.com/crypto-gram-0911.html>. These same essays
appear in the "Schneier on Security" blog:
<http://www.schneier.com/blog>. An RSS feed is available.
** *** ***** ******* *********** *************
In this issue:
Beyond Security Theater
Fear and Overreaction
News
Zero-Tolerance Policies
Security in a Reputation Economy
Schneier News
The Commercial Speech Arms Race
The Doghouse: ADE 651
"Evil Maid" Attacks on Encrypted Hard Drives
Is Antivirus Dead?
** *** ***** ******* *********** *************
Beyond Security Theater
[I was asked to write this essay for the "New Internationalist" (n. 427,
November 2009, pp. 10--13). It's nothing I haven't said before, but I'm
pleased with how this essay came together.]
Terrorism is rare, far rarer than many people think. It's rare because
very few people want to commit acts of terrorism, and executing a
terrorist plot is much harder than television makes it appear. The best
defenses against terrorism are largely invisible: investigation,
intelligence, and emergency response. But even these are less effective
at keeping us safe than our social and political policies, both at home
and abroad. However, our elected leaders don't think this way: they are
far more likely to implement security theater against movie-plot threats.
A movie-plot threat is an overly specific attack scenario. Whether it's
terrorists with crop dusters, terrorists contaminating the milk supply,
or terrorists attacking the Olympics, specific stories affect our
emotions more intensely than mere data does. Stories are what we fear.
It's not just hypothetical stories: terrorists flying planes into
buildings, terrorists with bombs in their shoes or in their water
bottles, and terrorists with guns and bombs waging a co-ordinated attack
against a city are even scarier movie-plot threats because they actually
happened.
Security theater refers to security measures that make people feel more
secure without doing anything to actually improve their security. An
example: the photo ID checks that have sprung up in office buildings.
No-one has ever explained why verifying that someone has a photo ID
provides any actual security, but it looks like security to have a
uniformed guard-for-hire looking at ID cards. Airport-security examples
include the National Guard troops stationed at US airports in the months
after 9/11 -- their guns had no bullets. The US colour-coded system of
threat levels, the pervasive harassment of photographers, and the metal
detectors that are increasingly common in hotels and office buildings
since the Mumbai terrorist attacks, are additional examples.
To be sure, reasonable arguments can be made that some terrorist targets
are more attractive than others: airplanes because a small bomb can
result in the death of everyone aboard, monuments because of their
national significance, national events because of television coverage,
and transportation because of the numbers of people who commute daily.
But there are literally millions of potential targets in any large
country (there are five million commercial buildings alone in the US),
and hundreds of potential terrorist tactics; it's impossible to defend
every place against everything, and it's impossible to predict which
tactic and target terrorists will try next.
Feeling and Reality
Security is both a feeling and a reality. The propensity for security
theater comes from the interplay between the public and its leaders.
When people are scared, they need something done that will make them
feel safe, even if it doesn't truly make them safer. Politicians
naturally want to do something in response to crisis, even if that
something doesn't make any sense.
Often, this "something" is directly related to the details of a recent
event: we confiscate liquids, screen shoes, and ban box cutters on
airplanes. But it's not the target and tactics of the last attack that
are important, but the next attack. These measures are only effective if
we happen to guess what the next terrorists are planning. If we spend
billions defending our rail systems, and the terrorists bomb a shopping
mall instead, we've wasted our money. If we concentrate airport security
on screening shoes and confiscating liquids, and the terrorists hide
explosives in their brassieres and use solids, we've wasted our money.
Terrorists don't care what they blow up and it shouldn't be our goal
merely to force the terrorists to make a minor change in their tactics
or targets.
Our penchant for movie plots blinds us to the broader threats. And
security theater consumes resources that could better be spent elsewhere.
Any terrorist attack is a series of events: something like planning,
recruiting, funding, practicing, executing, aftermath. Our most
effective defenses are at the beginning and end of that process --
intelligence, investigation, and emergency response -- and least
effective when they require us to guess the plot correctly. By
intelligence and investigation, I don't mean the broad data-mining or
eavesdropping systems that have been proposed and in some cases
implemented -- those are also movie-plot stories without much basis in
actual effectiveness -- but instead the traditional "follow the
evidence" type of investigation that has worked for decades.
Unfortunately for politicians, the security measures that work are
largely invisible. Such measures include enhancing the
intelligence-gathering abilities of the secret services, hiring cultural
experts and Arabic translators, building bridges with Islamic
communities both nationally and internationally, funding police
capabilities -- both investigative arms to prevent terrorist attacks,
and emergency communications systems for after attacks occur -- and
arresting terrorist plotters without media fanfare. They do not include
expansive new police or spying laws. Our police don't need any new laws
to deal with terrorism; rather, they need apolitical funding. These
security measures don't make good television, and they don't help, come
re-election time. But they work, addressing the reality of security
instead of the feeling.
The arrest of the "liquid bombers" in London is an example: they were
caught through old-fashioned intelligence and police work. Their choice
of target (airplanes) and tactic (liquid explosives) didn't matter; they
would have been arrested regardless.
But even as we do all of this we cannot neglect the feeling of security,
because it's how we collectively overcome the psychological damage that
terrorism causes. It's not security theater we need, it's direct appeals
to our feelings. The best way to help people feel secure is by acting
secure around them. Instead of reacting to terrorism with fear, we --
and our leaders -- need to react with indomitability.
Refuse to Be Terrorized
By not overreacting, by not responding to movie-plot threats, and by not
becoming defensive, we demonstrate the resilience of our society, in our
laws, our culture, our freedoms. There is a difference between
indomitability and arrogant "bring 'em on" rhetoric. There's a
difference between accepting the inherent risk that comes with a free
and open society, and hyping the threats.
We should treat terrorists like common criminals and give them all the
benefits of true and open justice -- not merely because it demonstrates
our indomitability, but because it makes us all safer. Once a society
starts circumventing its own laws, the risks to its future stability are
much greater than terrorism.
Supporting real security even though it's invisible, and demonstrating
indomitability even though fear is more politically expedient, requires
real courage. Demagoguery is easy. What we need is leaders willing both
to do what's right and to speak the truth.
Despite fearful rhetoric to the contrary, terrorism is not a
transcendent threat. A terrorist attack cannot possibly destroy a
country's way of life; it's only our reaction to that attack that can do
that kind of damage. The more we undermine our own laws, the more we
convert our buildings into fortresses, the more we reduce the freedoms
and liberties at the foundation of our societies, the more we're doing
the terrorists' job for them.
We saw some of this in the Londoners' reaction to the 2005 transport
bombings. Among the political and media hype and fearmongering, there
was a thread of firm resolve. People didn't fall victim to fear. They
rode the trains and buses the next day and continued their lives.
Terrorism's goal isn't murder; terrorism attacks the mind, using victims
as a prop. By refusing to be terrorized, we deny the terrorists their
primary weapon: our own fear.
Today, we can project indomitability by rolling back all the fear-based
post-9/11 security measures. Our leaders have lost credibility; getting
it back requires a decrease in hyperbole. Ditch the invasive mass
surveillance systems and new police state-like powers. Return airport
security to pre-9/11 levels. Remove swagger from our foreign policies.
Show the world that our legal system is up to the challenge of
terrorism. Stop telling people to report all suspicious activity; it
does little but make us suspicious of each other, increasing both fear
and helplessness.
Terrorism has always been rare, and for all we've heard about 9/11
changing the world, it's still rare. Even 9/11 failed to kill as many
people as automobiles do in the US every single month. But there's a
pervasive myth that terrorism is easy. It's easy to imagine terrorist
plots, both large-scale "poison the food supply" and small-scale "10
guys with guns and cars." Movies and television bolster this myth, so
many people are surprised that there have been so few attacks in Western
cities since 9/11. Certainly intelligence and investigation successes
have made it harder, but mostly it's because terrorist attacks are
actually hard. It's hard to find willing recruits, to co-ordinate plans,
and to execute those plans -- and it's easy to make mistakes.
Counterterrorism is also hard, especially when we're psychologically
prone to muck it up. Since 9/11, we've embarked on strategies of
defending specific targets against specific tactics, overreacting to
every terrorist video, stoking fear, demonizing ethnic groups, and
treating the terrorists as if they were legitimate military opponents
who could actually destroy a country or a way of life -- all of this
plays into the hands of terrorists. We'd do much better by leveraging
the inherent strengths of our modern democracies and the natural
advantages we have over the terrorists: our adaptability and
survivability, our international network of laws and law enforcement,
and the freedoms and liberties that make our society so enviable. The
way we live is open enough to make terrorists rare; we are observant
enough to prevent most of the terrorist plots that exist, and
indomitable enough to survive the even fewer terrorist plots that
actually succeed. We don't need to pretend otherwise.
Commentary:
http://www.motherjones.com/kevin-drum/2009/11/security-theater
http://jamesfallows.theatlantic.com/archives/2009/11/the_right_kind_of_secu…
http://www.economist.com/blogs/gulliver/2009/11/the_future_of_security.cfm
** *** ***** ******* *********** *************
Fear and Overreaction
It's hard work being prey. Watch the birds at a feeder. They're
constantly on alert, and will fly away from food -- from easy nutrition
-- at the slightest movement or sound. Given that I've never, ever seen
a bird plucked from a feeder by a predator, it seems like a whole lot of
wasted effort against not very big a threat.
Assessing and reacting to risk is one of the most important things a
living creature has to deal with. The amygdala, an ancient part of the
brain that first evolved in primitive fishes, has that job. It's what's
responsible for the fight-or-flight reflex. Adrenaline in the
bloodstream, increased heart rate, increased muscle tension, sweaty
palms; that's the amygdala in action. And it works fast, faster than
consciousness: show someone a snake and their amygdala will react
before their conscious brain registers that they're looking at a snake.
Fear motivates all sorts of animal behaviors. Schooling, flocking, and
herding are all security measures. Not only is it less likely that any
member of the group will be eaten, but each member of the group has to
spend less time watching out for predators. Animals as diverse as
bumblebees and monkeys both avoid food in areas where predators are
common. Different prey species have developed various alarm calls, some
surprisingly specific. And some prey species have even evolved to react
to the alarms given off by other species.
Evolutionary biologist Randolph Nesse has studied animal defenses,
particularly those that seem to be overreactions. These defenses are
mostly all-or-nothing; a creature can't do them halfway. Birds flying
off, sea cucumbers expelling their stomachs, and vomiting are all
examples. Using signal detection theory, Nesse showed that
all-or-nothing defenses are expected to have many false alarms. "The
smoke detector principle shows that the overresponsiveness of many
defenses is an illusion. The defenses appear overresponsive because they
are 'inexpensive' compared to the harms they protect against and because
errors of too little defense are often more costly than errors of too
much defense."
So according to the theory, if flight costs 100 calories, both in flying
and lost eating time, and there's a 1 in 100 chance of being eaten if
you don't fly away, it's smarter for survival to use up 10,000 calories
repeatedly flying at the slightest movement even though there's a 99
percent false alarm rate. Whatever the numbers happen to be for a
particular species, it has evolved to get the trade-off right.
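A back-of-the-envelope check of those illustrative numbers (the figures
are the ones quoted above; the calculation itself is just a sketch):

flight_cost = 100       # calories burned per flight from the feeder
p_eaten = 0.01          # chance a given alarm is a real predator
alarms = 100

cost_always_flee = alarms * flight_cost             # 10,000 calories
p_dead_if_never_flee = 1 - (1 - p_eaten) ** alarms  # roughly 0.63

print(cost_always_flee, round(p_dead_if_never_flee, 2))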
This makes sense, until the conditions that the species evolved under
change quicker than evolution can react to. Even though there are far
fewer predators in the city, birds at my feeder react as if they were in
the primal forest. Even birds safe in a zoo's aviary don't realize that
the situation has changed.
Humans are both no different and very different. We, too, feel fear and
react with our amygdala, but we also have a conscious brain that can
override those reactions. And we too live in a world very different from
the one we evolved in. Our reflexive defenses might be optimized for the
risks endemic to living in small family groups in the East African
highlands in 100,000 BC, not 2009 New York City. But we can go beyond
fear, and actually think sensibly about security.
Far too often, we don't. We tend to be poor judges of risk. We overreact
to rare risks, we ignore long-term risks, we magnify risks that are also
morally offensive. We get risks wrong -- threats, probabilities, and
costs -- all the time. When we're afraid, really afraid, we'll do almost
anything to make that fear go away. Both politicians and marketers have
learned to push that fear button to get us to do what they want.
One night last month, I was awakened from my hotel-room sleep by a loud,
piercing alarm. There was no way I could ignore it, but I weighed the
risks and did what any reasonable person would do under the
circumstances: I stayed in bed and waited for the alarm to be turned
off. No point getting dressed, walking down ten flights of stairs, and
going outside into the cold for what invariably would be a false alarm
-- serious hotel fires are very rare. Unlike the bird in an aviary, I
knew better.
You can disagree with my risk calculus, and I'm sure many hotel guests
walked downstairs and outside to the designated assembly point. But it's
important to recognize that the ability to have this sort of discussion
is uniquely human. And we need to have the discussion repeatedly,
whether the topic is the installation of a home burglar alarm, the
latest TSA security measures, or the potential military invasion of
another country. These things aren't part of our evolutionary history;
we have no natural sense of how to respond to them. Our fears are often
calibrated wrong, and reason is the only way we can override them.
This essay first appeared on DarkReading.com.
http://www.darkreading.com/blog/archives/2009/11/its_hard_work_b.html
Animal behaviors:
http://judson.blogs.nytimes.com/2009/09/29/where-tasty-morsels-fear-to-trea…
or http://tinyurl.com/yhosh54
http://judson.blogs.nytimes.com/2009/10/06/leopard-behind-you/
Nesse paper:
http://www-personal.umich.edu/~nesse/Articles/Nesse-DefenseReg-EHB-2005.pdf
or http://tinyurl.com/yz8zmxh
Evaluating risk:
http://www.schneier.com/essay-162.html
http://www.schneier.com/essay-171.html
http://www.schneier.com/essay-170.html
http://www.schneier.com/essay-155.html
Hotel fires are rare:
http://www.emergency-management.net/hotel_fire.htm
** *** ***** ******* *********** *************
News
Fugitive caught after uploading his status on Facebook:
http://www.schneier.com/blog/archives/2009/10/helpful_hint_fo.html
Six years of Microsoft Patch Tuesdays:
http://www.schneier.com/blog/archives/2009/10/six_years_of_pa.html
A computer card counter detects human card counters; all it takes is a
computer that can track every card:
http://www.schneier.com/blog/archives/2009/10/computer_card_c.html
A woman posts a horrible story of how she was mistreated by the TSA, and
the TSA responds by releasing the video showing she was lying.
http://www.schneier.com/blog/archives/2009/10/tsa_successfull.html
Australian man receives reduced sentence due to encryption:
http://www.news.com.au/couriermail/story/0,23739,26232570-952,00.html
Steve Ballmer blames the failure of Windows Vista on security:
http://www.schneier.com/blog/archives/2009/10/ballmer_blames.html
James Bamford on the NSA
http://www.schneier.com/blog/archives/2009/10/james_bamford_o.html
CIA invests in social-network data mining:
http://www.wired.com/dangerroom/2009/10/exclusive-us-spies-buy-stake-in-twi…
or http://tinyurl.com/yl3zud2
http://www.visibletechnologies.com/press/pr_20091019.html
Interesting story of a 2006 Wal-Mart hack from, probably, Minsk.
http://www.wired.com/threatlevel/2009/10/walmart-hack/
Ross Anderson has put together a great resource page on security and
psychology:
http://www.cl.cam.ac.uk/~rja14/psysec.html
Best Buy sells surveillance tracker: only $99.99.
http://www.bestbuy.com/site/olspage.jsp?skuId=9540703&productCategoryId=pcm…
or http://tinyurl.com/yf2nsb8
You can also use an iPhone as a tracking device:
http://ephermata.livejournal.com/204026.html
A critical essay on the TSA from a former assistant police chief:
http://www.hlswatch.com/2009/10/15/do-i-have-the-right-to-refuse-this-search/
or http://tinyurl.com/ydbox3o
Follow-up essay by the same person:
http://www.hlswatch.com/2009/11/10/where-are-all-the-white-guys-update-on-d…
The U.S. Deputy Director of National Intelligence for Collection gives a
press conference on the new Utah data collection facility.
http://link.brightcove.com/services/player/bcpid25071315001?bclid=287353280…
or http://tinyurl.com/yfzb7qm
Transcript:
http://www.dni.gov/speeches/20091023_speech.pdf
"Capability of the People's Republic of China to Conduct Cyber Warfare
and Computer Network Exploitation," prepared for the US-China Economic
and Security Review Commission, Northrop Grumman Corporation, October 9,
2009.
http://www.uscc.gov/researchpapers/2009/NorthropGrumman_PRC_Cyber_Paper_FIN…
or http://tinyurl.com/ygcmh9b
Squirrel terrorists attacking our critical infrastructure.
http://notionscapital.wordpress.com/2009/10/24/terrorists-strike-u-s-infras…
or http://tinyurl.com/ykgtadb
We have a cognitive bias to exaggerate risks caused by other humans, and
downplay risks caused by animals (and, even more, by natural phenomena).
To aid their Wall Street investigations, the FBI used DCSNet, its
massive surveillance system.
http://www.wallstreetandtech.com/blog/archives/2009/10/how_prosecutors.html…
or http://tinyurl.com/yhnt22q
Detecting terrorists by smelling fear:
http://www.schneier.com/blog/archives/2009/11/detecting_terro.html
In the "Open Access Journal of Forensic Psychology", there's a paper
about the problems with unscientific security: "A Call for
Evidence-Based Security Tools":
http://www.schneier.com/blog/archives/2009/11/the_problems_wi_1.html
Mossad hacked a Syrian official's computer; it was unattended in a hotel
room at the time.
http://www.haaretz.com/hasen/spages/1125312.html
Remember the evil maid attack: if an attacker gets hold of your computer
temporarily, he can bypass your encryption software.
http://www.schneier.com/blog/archives/2009/10/evil_maid_attac.html
Recently I wrote about the difficulty of making role-based access
control work, and how research at Dartmouth showed that it was better to
let people take the access control they need to do their jobs, and audit
the results. This interesting paper, "Laissez-Faire File Sharing,"
tries to formalize that sort of access control.
http://www.cs.columbia.edu/~smb/papers/nspw-use.pdf
http://www.schneier.com/essay-288.html
I have refrained from commenting on the case against Najibullah Zazi,
simply because it's so often the case that the details reported in the
press have very little to do with reality. My suspicion was that he was,
as in so many other cases, an idiot who couldn't do any real harm and
was turned into a bogeyman for political purposes. However, John
Mueller -- who I've written about before -- has done the research.
http://www.schneier.com/blog/archives/2009/11/john_mueller_on_1.html
Interesting research: "Countering Kernel Rootkits with Lightweight Hook
Protection," by Zhi Wang, Xuxian Jiang, Weidong Cui, and Peng Ning.
http://www.schneier.com/blog/archives/2009/11/protecting_oss.html
Airport thieves prefer stealing black luggage; it's obvious why if you
think about it.
http://www.schneier.com/blog/archives/2009/11/thieves_prefer.html
We've seen lots of rumors, both in the U.S. and elsewhere, of people
hacking the power grid. President
Obama mentioned it in his May cybersecurity speech: "In other countries
cyberattacks have plunged entire cities into darkness." Seems the
source of these rumors has been Brazil.
http://www.schneier.com/blog/archives/2009/11/hacking_the_bra.html
FBI/CIA/NSA information sharing before 9/11:
http://www.schneier.com/blog/archives/2009/11/fbiciansa_infor.html
Blowfish in fiction:
http://www.schneier.com/blog/archives/2009/11/blowfish_in_fic.html
** *** ***** ******* *********** *************
Zero-Tolerance Policies
Recent stories have documented the ridiculous effects of zero-tolerance
weapons policies in a Delaware school district: a first-grader expelled
for taking a camping utensil to school, a 13-year-old expelled after
another student dropped a pocketknife in his lap, and a seventh-grader
expelled for cutting paper with a utility knife for a class project.
Where's the common sense? the editorials cry.
These so-called zero-tolerance policies are actually zero-discretion
policies. They're policies that must be followed, no situational
discretion allowed. We encounter them whenever we go through airport
security: no liquids, gels or aerosols. Some workplaces have them for
sexual harassment incidents; in some sports a banned substance found in
a urine sample means suspension, even if it's for a real medical
condition. Judges have zero discretion when faced with mandatory
sentencing laws: three strikes for drug offences and you go to jail,
mandatory sentencing for statutory rape (underage sex), etc. A national
restaurant chain won't serve hamburgers rare, even if you offer to sign
a waiver. Whenever you hear "that's the rule, and I can't do anything
about it" -- and they're not lying to get rid of you -- you're butting
against a zero discretion policy.
These policies enrage us because they are blind to circumstance.
Editorial after editorial denounced the suspensions of elementary school
children for offenses that anyone with any common sense would agree were
accidental and harmless. The Internet is filled with essays
demonstrating how the TSA's rules are nonsensical and sometimes don't
even improve security. I've written some of them. What we want is for
those involved in the situations to have discretion.
However, problems with discretion were the reason behind these mandatory
policies in the first place. Discretion is often applied inconsistently.
One school principal might deal with knives in the classroom one way,
and another principal another way. Your drug sentence could depend
considerably on how sympathetic your judge is, or on whether she's
having a bad day.
Even worse, discretion can lead to discrimination. Schools had weapons
bans before zero-tolerance policies, but teachers and administrators
enforced the rules disproportionately against African-American students.
Criminal sentences varied by race, too. The benefit of zero-discretion
rules and laws is that they ensure that everyone is treated equally.
Zero-discretion rules also protect against lawsuits. If the rules are
applied consistently, no parent, air traveler or defendant can claim he
was unfairly discriminated against.
So that's the choice. Either we want the rules enforced fairly across
the board, which means limiting the discretion of the enforcers at the
scene at the time, or we want a more nuanced response to whatever the
situation is, which means we give those involved in the situation more
discretion.
Of course, there's more to it than that. The problem with the
zero-tolerance weapons rules isn't that they're rigid, it's that they're
poorly written.
What constitutes a weapon? Is it any knife, no matter how small?
Should the penalties be the same for a first grader and a high school
student? Does intent matter? When an aspirin carried for menstrual
cramps becomes "drug possession," you know there's a badly written rule
in effect.
It's the same with airport security and criminal sentencing. Broad and
simple rules may be simpler to follow -- and require less thinking on
the part of those enforcing them -- but they're almost always far less
nuanced than our complex society requires. Unfortunately, the more
complex the rules are, the more they're open to interpretation and the
more discretion the interpreters have.
The solution is to combine the two, rules and discretion, with
procedures to make sure they're not abused. Provide rules, but don't
make them so rigid that there's no room for interpretation. Give the
people in the situation -- the teachers, the airport security agents,
the policemen, the judges -- discretion to apply the rules to the
situation. But -- and this is the important part -- allow people to
appeal the results if they feel they were treated unfairly. And
regularly audit the results to ensure there is no discrimination or
favoritism. It's the combination of the four that works: rules plus
discretion plus appeal plus audit.
All systems need some form of redress, whether it be open and public
like a courtroom or closed and secret like the TSA. Giving discretion to
those at the scene just makes for a more efficient appeals process,
since the first level of appeal can be handled on the spot.
Zachary, the Delaware first grader suspended for bringing a combination
fork, spoon and knife camping utensil to eat his lunch with, had his
punishment unanimously overturned by the school board. This was the
right decision; but what about all the other students whose parents
weren't forceful or media-savvy enough to turn their child's plight
into a national story? Common sense in applying rules is important, but
so is equal access to that common sense.
This essay originally appeared on the Minnesota Public Radio website.
http://minnesota.publicradio.org/display/web/2009/11/03/schneier/
http://www.nytimes.com/2009/10/12/education/12discipline.html
http://www.philly.com/inquirer/opinion/20091016_Editorial__Zero_common_sens…
or http://tinyurl.com/yls568f
http://www.lancastereaglegazette.com/article/20091020/OPINION04/910200313/L…
or http://tinyurl.com/yhcvxpu
http://www.dallasnews.com/sharedcontent/dws/dn/localnews/columnists/jraglan…
or http://tinyurl.com/yh7ehpn
http://www.htrnews.com/article/20091017/MAN06/910170416
http://www.baylor.edu/lariat/news.php?action=story&story=63347
http://www.sdnn.com/sandiego/2009-10-20/columns/marsha-sutton-zero-toleranc…
or http://tinyurl.com/ygfhysa
http://www.delmarvanow.com/article/20091020/DW02/910200338
Another example:
A former soldier who handed a discarded shotgun in to police faces at
least five years imprisonment for "doing his duty".
http://www.thisissurreytoday.co.uk/news/Ex-soldier-faces-jail-handing-gun/a…
or http://tinyurl.com/y9spuad
** *** ***** ******* *********** *************
Security in a Reputation Economy
In the past, our relationship with our computers was technical. We cared
what CPU they had and what software they ran. We understood our networks
and how they worked. We were experts, or we depended on someone else for
expertise. And security was part of that expertise.
This is changing. We access our email via the web, from any computer or
from our phones. We use Facebook, Google Docs, even our corporate
networks, regardless of hardware or network. We, especially the younger
of us, no longer care about the technical details. Computing is
infrastructure; it's a commodity. It's less about products and more
about services; we simply expect it to work, like telephone service or
electricity or a transportation network.
Infrastructures can be spread on a broad continuum, ranging from generic
to highly specialized. Power and water are generic; who supplies them
doesn't really matter. Mobile phone services, credit cards, ISPs, and
airlines are mostly generic. More specialized infrastructure services
are restaurant meals, haircuts, and social networking sites. Highly
specialized services include tax preparation for complex businesses,
management consulting, legal services, and medical services.
Sales for these services are driven by two things: price and trust. The
more generic the service is, the more price dominates. The more
specialized it is, the more trust dominates. IT is something of a
special case because so much of it is free. So, for both specialized IT
services where price is less important and for generic IT services --
think Facebook -- where there is no price, trust will grow in
importance. IT is becoming a reputation-based economy, and this has
interesting ramifications for security.
Some years ago, the major credit card companies became concerned about
the plethora of credit-card-number thefts from sellers' databases. They
worried that these might undermine the public's trust in credit cards as
a secure payment system for the internet. They knew the sellers would
only protect these databases up to the level of the threat to the
seller, and not to the greater level of threat to the industry as a
whole. So they banded together and produced a security standard called
PCI. It's wholly industry-enforced by an industry that realized its
reputation was more valuable than the sellers' databases.
A reputation-based economy means that infrastructure providers care more
about security than their customers do. I realized this 10 years ago
with my own company. We provided network-monitoring services to large
corporations, and our internal network security was much more extensive
than our customers'. Our customers secured their networks -- that's why
they hired us, after all -- but only up to the value of their networks.
If we mishandled any of our customers' data, we would have lost the
trust of all of our customers.
I heard the same story at an ENISA conference in London last June, when
an IT consultant explained that he had begun encrypting his laptop years
before his customers did. While his customers might decide that the risk
of losing their data wasn't worth the hassle of dealing with encryption,
he knew that if he lost data from one customer, he risked losing all of
his customers.
As IT becomes more like infrastructure, more like a commodity, expect
service providers to improve security to levels greater than their
customers would have done themselves.
In IT, customers learn about company reputation from many sources:
magazine articles, analyst reviews, recommendations from colleagues,
awards, certifications, and so on. Of course, this only works if
customers have accurate information. In a reputation economy, companies
have a motivation to hide their security problems.
You've all experienced a reputation economy: restaurants. Some
restaurants have a good reputation, and are filled with regulars. When
restaurants get a bad reputation, people stop coming and they close.
Tourist restaurants -- whose main attraction is their location, and
whose customers frequently don't know anything about their reputation --
can thrive even if they aren't any good. And sometimes a restaurant can
keep its reputation -- an award in a magazine, a special occasion
restaurant that "everyone knows" is the place to go -- long after its
food and service have declined.
The reputation economy is far from perfect.
This essay originally appeared in "The Guardian."
http://www.guardian.co.uk/technology/2009/nov/11/schneier-reputation-it-sec…
or http://tinyurl.com/yha3nbj
** *** ***** ******* *********** *************
Schneier News
I'm speaking at the Internet Governance Forum in Sharm el-Sheikh, Egypt,
on November 16 and 17.
http://igf09.eg/home.html
I'm speaking at the 2009 SecAU Security Congress in Perth on December 2
and 3.
http://scissec.scis.ecu.edu.au/conferences2008/
I'm speaking at an Open Rights Group event in London on December 4.
http://www.openrightsgroup.org/blog/2009/bruce-schneier-event
I'm speaking at the First IEEE Workshop on Information Forensics and
Security in London on December 8.
http://www.wifs09.org/
I'm speaking at the UCL Centre for Security and Crime Science in London
on December 7.
http://www.cscs.ucl.ac.uk/
I'm speaking at the Young Professionals in Foreign Policy in London on
December 7.
http://www.ypfp.org/content/event/London
I'm speaking at the Iberic Web Application Security Conference
(December 10-11, 2009) in Madrid on December 10.
http://www.ibwas.com/
Article on me from a Luxembourg magazine.
http://www.paperjam.lu/archives/2009/11/2310_Technologie_Security/index.html
or http://tinyurl.com/y95mcpq
Interview with me on CNet.com:
http://news.cnet.com/8301-27080_3-10381460-245.html?tag=newsLeadStoriesArea…
or http://tinyurl.com/yf5otcu
Video interview with me, conducted at the Information Security Decisions
conference in Chicago in October.
http://searchsecurity.techtarget.com/video/0,297151,sid14_gci1372839,00.html
or http://tinyurl.com/yk4othd
A month ago, ThatsMyFace.com approached me about making a Bruce Schneier
action figure. It's $100. I'd like to be able to say something like
"half the proceeds are going to EPIC and EFF," but they're not. That's
the price for custom orders. I don't even get a royalty. The company
is working on lowering the price, and they've said that they'll put a
photograph of an actual example on the webpage. I've told them that at
$100 no one will buy it, but at $40 it's a funny gift for your corporate
IT person. So e-mail the company if you're interested, and if they get
enough interest they'll do a bulk order.
http://www.thatsmyface.com/f/bruce_schneier
** *** ***** ******* *********** *************
The Commercial Speech Arms Race
A few years ago, a company began to sell a liquid with identification
codes suspended in it. The idea was that you would paint it on your
stuff as proof of ownership. I commented that I would paint it on
someone else's stuff, then call the police.
I was reminded of this recently when a group of Israeli scientists
demonstrated that it's possible to fabricate DNA evidence. So now,
instead of leaving your own DNA at a crime scene, you can leave
fabricated DNA. And it isn't even necessary to fabricate. In Charlie
Stross's novel "Halting State," the bad guys foul a crime scene by
blowing around the contents of a vacuum cleaner bag, containing the DNA
of dozens, if not hundreds, of people.
This kind of thing has been going on for ever. It's an arms race, and
when technology changes, the balance between attacker and defender
changes. But when automated systems do the detecting, the results are
different. Face recognition software can be fooled by cosmetic surgery,
or sometimes even just a photograph. And when fooling them becomes
harder, the bad guys fool them on a different level. Computer-based
detection gives the defender economies of scale, but the attacker can
use those same economies of scale to defeat the detection system.
Google, for example, has anti-fraud systems that detect and shut down
advertisers who try to inflate their revenue by repeatedly clicking on
their own AdSense ads. So people built bots to repeatedly click on the
AdSense ads of their competitors, trying to convince Google to kick them
out of the system.
Similarly, when Google started penalizing a site's search engine
rankings for having "bad neighbors" -- backlinks from link farms, adult
or gambling sites, or blog spam -- people engaged in sabotage: they
built link farms and left blog comment spam linking to their
competitors' sites.
The same sort of thing is happening on Yahoo Answers. Initially,
companies would leave answers pushing their products, but Yahoo started
policing this. So people have written bots to report abuse on all their
competitors. There are Facebook bots doing the same sort of thing.
Last month, Google introduced Sidewiki, a browser feature that lets you
read and post comments on virtually any webpage. People and industries
are already worried about the effects unrestrained commentary might have
on their businesses, and how they might control the comments. I'm sure
Google has sophisticated systems ready to detect commercial interests
that try to take advantage of the system, but are they ready to deal
with commercial interests that try to frame their competitors? And do we
want to give one company the power to decide which comments should rise
to the top and which get deleted?
Whenever you build a security system that relies on detection and
identification, you invite the bad guys to subvert the system so it
detects and identifies someone else. Sometimes this is hard -- leaving
someone else's fingerprints on a crime scene is hard, as is using a mask
of someone else's face to fool a guard watching a security camera -- and
sometimes it's easy. But when automated systems are involved, it's often
very easy. It's not just hardened criminals that try to frame each
other, it's mainstream commercial interests.
With systems that police internet comments and links, there's money
involved in commercial messages -- so you can be sure some will take
advantage of it. This is the arms race. Build a detection system, and
the bad guys try to frame someone else. Build a detection system to
detect framing, and the bad guys try to frame someone else framing
someone else. Build a detection system to detect framing of framing, and
well, there's no end, really. Commercial speech is on the internet to
stay; we can only hope that it doesn't pollute the social systems we use
so badly that they're no longer useful.
This essay originally appeared in "The Guardian."
http://www.guardian.co.uk/technology/2009/oct/15/bruce-schneier-internet-se…
or http://tinyurl.com/yfbsb42
"Smart Water" liquid identification:
http://www.schneier.com/blog/archives/2005/02/smart_water.html
Fabricating DNA evidence:
http://www.nytimes.com/2009/08/18/science/18dna.html
Fooling face recognition software:
http://staging.spectrum.ieee.org/computing/embedded-systems/computerized-fa…
or http://tinyurl.com/yz9x4pf
http://www.theregister.co.uk/2009/02/19/facial_recognition_fail/
Google's AdSense:
http://www.wmtips.com/adsense/what-you-need-know-about-adsense.htm
Sidewiki:
http://www.google.com/sidewiki/intl/en/index.html:
http://www.pcworld.com/article/172490/google_sidewiki_a_first_look.html
or http://tinyurl.com/lgpxp8
Sidewiki fears:
http://impactiviti.wordpress.com/2009/09/29/googles-sidewiki-game-changer-f…
or http://tinyurl.com/yl4ul3g
http://www.4hoteliers.com/4hots_fshw.php?mwi=4448
http://talkbiz.com/blog/google-steals-the-web/
** *** ***** ******* *********** *************
The Doghouse: ADE 651
A divining rod to find explosives in Iraq:
http://www.schneier.com/blog/archives/2009/11/the_doghouse_ad.html
** *** ***** ******* *********** *************
"Evil Maid" Attacks on Encrypted Hard Drives
Earlier this month, Joanna Rutkowska implemented the "evil maid" attack
against TrueCrypt. The same kind of attack should work against any
whole-disk encryption, including PGP Disk and BitLocker. Basically, the
attack works like this:
Step 1: Attacker gains access to your shut-down computer and boots it
from a separate volume. The attacker writes a hacked bootloader onto
your system, then shuts it down.
Step 2: You boot your computer using the attacker's hacked bootloader,
entering your encryption key. Once the disk is unlocked, the hacked
bootloader does its mischief. It might install malware to capture the
key and send it over the Internet somewhere, or store it in some
location on the disk to be retrieved later, or whatever.
You can see why it's called the "evil maid" attack; a likely scenario is
that you leave your encrypted computer in your hotel room when you go
out to dinner, and the maid sneaks in and installs the hacked
bootloader. The same maid could even sneak back the next night and
erase any traces of her actions.
This attack exploits the same basic vulnerability as the "Cold Boot"
attack from last year, and the "Stoned Boot" attack from earlier this
year, and there's no real defense to this sort of thing. As soon as you
give up physical control of your computer, all bets are off. From CRN:
"Similar hardware-based attacks were among the main reasons why
Symantec's CTO Mark Bregman was recently advised by 'three-letter
agencies in the US Government' to use separate laptop and mobile device
when traveling to China, citing potential hardware-based compromise."
PGP sums it up in their blog. "No security product on the market today
can protect you if the underlying computer has been compromised by
malware with root level administrative privileges. That said, there
exists well-understood common sense defenses against 'Cold Boot,'
'Stoned Boot,' 'Evil Maid,' and many other attacks yet to be named and
publicized."
The defenses are basically two-factor authentication: a token you don't
leave in your hotel room for the maid to find and use. The maid could
still corrupt the machine, but it's more work than just storing the
password for later use. Putting your data on a thumb drive and taking
it with you doesn't work; when you return you're plugging your thumb drive
into a corrupted machine.
The real defense here is trusted boot, something Trusted Computing is
supposed to enable. And the only way to get that is from Microsoft's
BitLocker hard disk encryption, if your computer has a TPM module
version 1.2 or later.
In the meantime, people who encrypt their hard drives, or partitions on
their hard drives, have to realize that the encryption gives them less
protection than they probably believe. It protects against someone
confiscating or stealing their computer and then trying to get at the
data. It does not protect against an attacker who has access to your
computer over a period of time during which you use it, too.
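Short of a TPM-backed trusted boot, a traveler can at least detect some
of this tampering by hashing the machine's unencrypted boot area from a
trusted rescue USB before and after leaving it unattended. Here's a
minimal sketch -- my own, assuming a Linux rescue environment and an
illustrative boot-region size; it won't catch firmware-level implants,
and a careful enough maid can still win:

  # Minimal sketch: hash the unencrypted boot area and compare it against
  # a reference recorded when the machine was known-good. Run this from a
  # trusted rescue medium, not from the possibly compromised system.
  # Device path and region size are illustrative assumptions.
  import hashlib, os, sys

  BOOT_DEVICE = "/dev/sda"              # assumed disk; adjust per system
  BOOT_REGION_BYTES = 32 * 1024 * 1024  # assumed size of the boot area
  REFERENCE_FILE = "boot.sha256"        # lives on the rescue USB

  def hash_boot_region(device, length):
      h = hashlib.sha256()
      with open(device, "rb") as dev:
          remaining = length
          while remaining > 0:
              chunk = dev.read(min(1 << 20, remaining))
              if not chunk:
                  break
              h.update(chunk)
              remaining -= len(chunk)
      return h.hexdigest()

  current = hash_boot_region(BOOT_DEVICE, BOOT_REGION_BYTES)
  if not os.path.exists(REFERENCE_FILE):
      open(REFERENCE_FILE, "w").write(current)
      print("Recorded reference hash:", current)
  elif current == open(REFERENCE_FILE).read().strip():
      print("Boot region matches the recorded hash.")
  else:
      print("WARNING: boot region changed -- possible evil maid tampering.")
      sys.exit(1)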
Evil Maid attacks:
http://theinvisiblethings.blogspot.com/2009/10/evil-maid-goes-after-truecry…
or http://tinyurl.com/yzbbgc3
Cold Boot and Stoned Boot attacks:
http://citp.princeton.edu/memory/
http://www.stoned-vienna.com/
http://blogs.zdnet.com/security/?p=4662&tag=nl.e019
http://www.crn.com.au/News/155836,safety-first-for-it-executives-in-china.a…
or http://tinyurl.com/p2wqxq
PGP's commentary:
http://blog.pgp.com/index.php/2009/10/evil-maid-attack/
Trusted Computing:
http://www.schneier.com/blog/archives/2005/08/trusted_computi.html
** *** ***** ******* *********** *************
Is Antivirus Dead?
This essay previously appeared in "Information Security Magazine," as
the second half of a point-counterpoint with Marcus Ranum. You can read
his half here as well:
http://searchsecurity.techtarget.com/magazinePrintFriendly/0,296905,sid14_g…
or http://tinyurl.com/yz2rtbs
Security is never black and white. If someone asks, "For best security,
should I do A or B?" the answer almost invariably is both. But security
is always a trade-off. Often it's impossible to do both A and B --
there's no time to do both, it's too expensive to do both, or whatever
-- and you have to choose. In that case, you look at A and B and you
make your best choice. But it's almost always more secure to do both.
Yes, antivirus programs have been getting less effective as new viruses
are more frequent and existing viruses mutate faster. Yes, antivirus
companies are forever playing catch-up, trying to create signatures for
new viruses. Yes, signature-based antivirus software won't protect you
when a virus is new, before the signature is added to the detection
program. Antivirus is by no means a panacea.
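To see why signatures always lag, consider a deliberately dumbed-down
scanner whose "signatures" are just hashes of known-bad files -- real
engines use far richer signatures plus heuristics, but the catch-up
problem is the same. Anything not already in the database, including a
known virus with one byte changed, passes cleanly:

  # Dumbed-down "signature" scanner: signatures are SHA-256 hashes of
  # known-bad files. A new or slightly mutated sample hashes differently
  # and is never flagged -- hence the catch-up game described above.
  import hashlib
  from pathlib import Path

  KNOWN_BAD = {
      "0" * 64,   # placeholder standing in for a vendor-supplied signature
  }

  def is_flagged(path):
      return hashlib.sha256(path.read_bytes()).hexdigest() in KNOWN_BAD

  for f in Path(".").iterdir():
      if f.is_file():
          print(f, "FLAGGED" if is_flagged(f) else "clean")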
On the other hand, an antivirus program with up-to-date signatures will
protect you from a lot of threats. It'll protect you against viruses,
against spyware, against Trojans -- against all sorts of malware. It'll
run in the background, automatically, and you won't notice any
performance degradation at all. And -- here's the best part -- it can be
free. AVG won't cost you a penny. To me, this is an easy trade-off,
certainly for the average computer user who clicks on attachments he
probably shouldn't click on, downloads things he probably shouldn't
download, and doesn't understand the finer workings of Windows Personal
Firewall.
Certainly security would be improved if people used whitelisting
programs such as Bit9 Parity and Savant Protection -- and I personally
recommend Malwarebytes' Anti-Malware -- but a lot of users are going to
have trouble with this. The average user will probably just swat away
the "you're trying to run a program not on your whitelist" warning
message or -- even worse -- wonder why his computer is broken when he
tries to run a new piece of software. The average corporate IT
department doesn't have a good idea of what software is running on all
the computers within the corporation, and doesn't want the
administrative overhead of managing all the change requests. And
whitelists aren't a panacea, either: they don't defend against malware
that attaches itself to data files (think Word macro viruses), for example.
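A whitelist is the mirror image of that kind of scanner -- again, my own
illustration, not how Bit9 or Savant actually work. It only ever looks
at the programs being launched, so a macro virus riding inside a Word
document is just data to it and sails through, exactly as noted above:

  # Illustrative allow-list check: only executables whose hashes appear
  # in the approved set may run. Data files (say, a .doc carrying a
  # macro virus) are never examined -- that's the gap noted above.
  import hashlib
  from pathlib import Path

  APPROVED = {
      "0" * 64,   # placeholder hash of an IT-approved program
  }

  def may_run(program):
      return hashlib.sha256(program.read_bytes()).hexdigest() in APPROVED

  candidate = Path("new_tool.exe")   # hypothetical download
  if candidate.exists() and not may_run(candidate):
      print("Blocked: not on the whitelist (cue the help-desk call).")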
One of the newest trends in IT is consumerization, and if you don't
already know about it, you soon will. It's the idea that new
technologies, the cool stuff people want, will become available for the
consumer market before they become available for the business market.
What it means to business is that people -- employees, customers,
partners -- will access business networks from wherever they happen to
be, with whatever hardware and software they have. Maybe it'll be the
computer you gave them when you hired them. Maybe it'll be their home
computer, the one their kids use. Maybe it'll be their cell phone or
PDA, or a computer in a hotel's business center. Your business will have
no way to know what they're using, and -- more importantly -- you'll
have no control.
In this kind of environment, computers are going to connect to each
other without a whole lot of trust between them. Untrusted computers are
going to connect to untrusted networks. Trusted computers are going to
connect to untrusted networks. The whole idea of "safe computing" is
going to take on a whole new meaning -- every man for himself. A
corporate network is going to need a simple, dumb, signature-based
antivirus product at the gateway of its network. And a user is going to
need a similar program to protect his computer.
Bottom line: antivirus software is neither necessary nor sufficient for
security, but it's still a good idea. It's not a panacea that magically
makes you safe, nor is it obsolete in the face of current threats. As
countermeasures go, it's cheap, it's easy, and it's effective. I haven't
dumped my antivirus program, and I have no intention of doing so anytime
soon.
Problems with anti-virus software:
http://www.csoonline.com/article/495827/Experts_Only_Time_to_Ditch_the_Anti…
or http://tinyurl.com/nqo68f
http://www.computerworld.com/s/article/print/9077338/The_future_of_antivirus
or http://tinyurl.com/yfrv86s
http://www.pcworld.com/article/130455/is_desktop_antivirus_dead.html
http://www.businessweek.com/technology/content/jan2007/tc20070122_300717.htm
or http://tinyurl.com/ycdkmd3
AVG:
http://free.avg.com/us-en/homepage
Consumerization:
http://arstechnica.com/business/news/2008/07/analysis-it-consumerization-an…
or http://tinyurl.com/yd6kxgs
** *** ***** ******* *********** *************
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing
summaries, analyses, insights, and commentaries on security: computer
and otherwise. You can subscribe, unsubscribe, or change your address
on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues
are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to
colleagues and friends who will find it valuable. Permission is also
granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the
best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies,"
and "Applied Cryptography," and an inventor of the Blowfish, Twofish,
Threefish, Helix, Phelix, and Skein algorithms. He is the Chief
Security Technology Officer of BT BCSG, and is on the Board of Directors
of the Electronic Privacy Information Center (EPIC). He is a frequent
writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not
necessarily those of BT.
Copyright (c) 2009 by Bruce Schneier.
----- End forwarded message -----
--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
CRYPTO-GRAM
November 15, 2009
by Bruce Schneier
Chief Security Technology Officer, BT
schneier(a)schneier.com
http://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit
<http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at
<http://www.schneier.com/crypto-gram-0911.html>. These same essays
appear in the "Schneier on Security" blog:
<http://www.schneier.com/blog>. An RSS feed is available.
** *** ***** ******* *********** *************
In this issue:
Beyond Security Theater
Fear and Overreaction
News
Zero-Tolerance Policies
Security in a Reputation Economy
Schneier News
The Commercial Speech Arms Race
The Doghouse: ADE 651
"Evil Maid" Attacks on Encrypted Hard Drives
Is Antivirus Dead?
** *** ***** ******* *********** *************
Beyond Security Theater
[I was asked to write this essay for the "New Internationalist" (n. 427,
November 2009, pp. 10--13). It's nothing I haven't said before, but I'm
pleased with how this essay came together.]
Terrorism is rare, far rarer than many people think. It's rare because
very few people want to commit acts of terrorism, and executing a
terrorist plot is much harder than television makes it appear. The best
defenses against terrorism are largely invisible: investigation,
intelligence, and emergency response. But even these are less effective
at keeping us safe than our social and political policies, both at home
and abroad. However, our elected leaders don't think this way: they are
far more likely to implement security theater against movie-plot threats.
A movie-plot threat is an overly specific attack scenario. Whether it's
terrorists with crop dusters, terrorists contaminating the milk supply,
or terrorists attacking the Olympics, specific stories affect our
emotions more intensely than mere data does. Stories are what we fear.
It's not just hypothetical stories: terrorists flying planes into
buildings, terrorists with bombs in their shoes or in their water
bottles, and terrorists with guns and bombs waging a co-ordinated attack
against a city are even scarier movie-plot threats because they actually
happened.
Security theater refers to security measures that make people feel more
secure without doing anything to actually improve their security. An
example: the photo ID checks that have sprung up in office buildings.
No-one has ever explained why verifying that someone has a photo ID
provides any actual security, but it looks like security to have a
uniformed guard-for-hire looking at ID cards. Airport-security examples
include the National Guard troops stationed at US airports in the months
after 9/11 -- their guns had no bullets. The US colour-coded system of
threat levels, the pervasive harassment of photographers, and the metal
detectors that are increasingly common in hotels and office buildings
since the Mumbai terrorist attacks, are additional examples.
To be sure, reasonable arguments can be made that some terrorist targets
are more attractive than others: airplanes because a small bomb can
result in the death of everyone aboard, monuments because of their
national significance, national events because of television coverage,
and transportation because of the numbers of people who commute daily.
But there are literally millions of potential targets in any large
country (there are five million commercial buildings alone in the US),
and hundreds of potential terrorist tactics; it's impossible to defend
every place against everything, and it's impossible to predict which
tactic and target terrorists will try next.
Feeling and Reality
Security is both a feeling and a reality. The propensity for security
theater comes from the interplay between the public and its leaders.
When people are scared, they need something done that will make them
feel safe, even if it doesn't truly make them safer. Politicians
naturally want to do something in response to crisis, even if that
something doesn't make any sense.
Often, this "something" is directly related to the details of a recent
event: we confiscate liquids, screen shoes, and ban box cutters on
airplanes. But it's not the target and tactics of the last attack that
are important, but the next attack. These measures are only effective if
we happen to guess what the next terrorists are planning. If we spend
billions defending our rail systems, and the terrorists bomb a shopping
mall instead, we've wasted our money. If we concentrate airport security
on screening shoes and confiscating liquids, and the terrorists hide
explosives in their brassieres and use solids, we've wasted our money.
Terrorists don't care what they blow up and it shouldn't be our goal
merely to force the terrorists to make a minor change in their tactics
or targets.
Our penchant for movie plots blinds us to the broader threats. And
security theater consumes resources that could better be spent elsewhere.
Any terrorist attack is a series of events: something like planning,
recruiting, funding, practicing, executing, aftermath. Our most
effective defenses are at the beginning and end of that process --
intelligence, investigation, and emergency response -- and least
effective when they require us to guess the plot correctly. By
intelligence and investigation, I don't mean the broad data-mining or
eavesdropping systems that have been proposed and in some cases
implemented -- those are also movie-plot stories without much basis in
actual effectiveness -- but instead the traditional "follow the
evidence" type of investigation that has worked for decades.
Unfortunately for politicians, the security measures that work are
largely invisible. Such measures include enhancing the
intelligence-gathering abilities of the secret services, hiring cultural
experts and Arabic translators, building bridges with Islamic
communities both nationally and internationally, funding police
capabilities -- both investigative arms to prevent terrorist attacks,
and emergency communications systems for after attacks occur -- and
arresting terrorist plotters without media fanfare. They do not include
expansive new police or spying laws. Our police don't need any new laws
to deal with terrorism; rather, they need apolitical funding. These
security measures don't make good television, and they don't help, come
re-election time. But they work, addressing the reality of security
instead of the feeling.
The arrest of the "liquid bombers" in London is an example: they were
caught through old-fashioned intelligence and police work. Their choice
of target (airplanes) and tactic (liquid explosives) didn't matter; they
would have been arrested regardless.
But even as we do all of this we cannot neglect the feeling of security,
because it's how we collectively overcome the psychological damage that
terrorism causes. It's not security theater we need, it's direct appeals
to our feelings. The best way to help people feel secure is by acting
secure around them. Instead of reacting to terrorism with fear, we --
and our leaders -- need to react with indomitability.
Refuse to Be Terrorized
By not overreacting, by not responding to movie-plot threats, and by not
becoming defensive, we demonstrate the resilience of our society, in our
laws, our culture, our freedoms. There is a difference between
indomitability and arrogant "bring 'em on" rhetoric. There's a
difference between accepting the inherent risk that comes with a free
and open society, and hyping the threats.
We should treat terrorists like common criminals and give them all the
benefits of true and open justice -- not merely because it demonstrates
our indomitability, but because it makes us all safer. Once a society
starts circumventing its own laws, the risks to its future stability are
much greater than terrorism.
Supporting real security even though it's invisible, and demonstrating
indomitability even though fear is more politically expedient, requires
real courage. Demagoguery is easy. What we need is leaders willing both
to do what's right and to speak the truth.
Despite fearful rhetoric to the contrary, terrorism is not a
transcendent threat. A terrorist attack cannot possibly destroy a
country's way of life; it's only our reaction to that attack that can do
that kind of damage. The more we undermine our own laws, the more we
convert our buildings into fortresses, the more we reduce the freedoms
and liberties at the foundation of our societies, the more we're doing
the terrorists' job for them.
We saw some of this in the Londoners' reaction to the 2005 transport
bombings. Among the political and media hype and fearmongering, there
was a thread of firm resolve. People didn't fall victim to fear. They
rode the trains and buses the next day and continued their lives.
Terrorism's goal isn't murder; terrorism attacks the mind, using victims
as a prop. By refusing to be terrorized, we deny the terrorists their
primary weapon: our own fear.
Today, we can project indomitability by rolling back all the fear-based
post-9/11 security measures. Our leaders have lost credibility; getting
it back requires a decrease in hyperbole. Ditch the invasive mass
surveillance systems and new police state-like powers. Return airport
security to pre-9/11 levels. Remove swagger from our foreign policies.
Show the world that our legal system is up to the challenge of
terrorism. Stop telling people to report all suspicious activity; it
does little but make us suspicious of each other, increasing both fear
and helplessness.
Terrorism has always been rare, and for all we've heard about 9/11
changing the world, it's still rare. Even 9/11 failed to kill as many
people as automobiles do in the US every single month. But there's a
pervasive myth that terrorism is easy. It's easy to imagine terrorist
plots, both large-scale "poison the food supply" and small-scale "10
guys with guns and cars." Movies and television bolster this myth, so
many people are surprised that there have been so few attacks in Western
cities since 9/11. Certainly intelligence and investigation successes
have made it harder, but mostly it's because terrorist attacks are
actually hard. It's hard to find willing recruits, to co-ordinate plans,
and to execute those plans -- and it's easy to make mistakes.
Counterterrorism is also hard, especially when we're psychologically
prone to muck it up. Since 9/11, we've embarked on strategies of
defending specific targets against specific tactics, overreacting to
every terrorist video, stoking fear, demonizing ethnic groups, and
treating the terrorists as if they were legitimate military opponents
who could actually destroy a country or a way of life -- all of this
plays into the hands of terrorists. We'd do much better by leveraging
the inherent strengths of our modern democracies and the natural
advantages we have over the terrorists: our adaptability and
survivability, our international network of laws and law enforcement,
and the freedoms and liberties that make our society so enviable. The
way we live is open enough to make terrorists rare; we are observant
enough to prevent most of the terrorist plots that exist, and
indomitable enough to survive the even fewer terrorist plots that
actually succeed. We don't need to pretend otherwise.
Commentary:
http://www.motherjones.com/kevin-drum/2009/11/security-theater
http://jamesfallows.theatlantic.com/archives/2009/11/the_right_kind_of_secu…
http://www.economist.com/blogs/gulliver/2009/11/the_future_of_security.cfm
** *** ***** ******* *********** *************
Fear and Overreaction
It's hard work being prey. Watch the birds at a feeder. They're
constantly on alert, and will fly away from food -- from easy nutrition
-- at the slightest movement or sound. Given that I've never, ever seen
a bird plucked from a feeder by a predator, it seems like a whole lot of
wasted effort against not very big a threat.
Assessing and reacting to risk is one of the most important things a
living creature has to deal with. The amygdala, an ancient part of the
brain that first evolved in primitive fishes, has that job. It's what's
responsible for the fight-or-flight reflex. Adrenaline in the
bloodstream, increased heart rate, increased muscle tension, sweaty
palms; that's the amygdala in action. And it works fast, faster than
consciousnesses: show someone a snake and their amygdala will react
before their conscious brain registers that they're looking at a snake.
Fear motivates all sorts of animal behaviors. Schooling, flocking, and
herding are all security measures. Not only is it less likely that any
member of the group will be eaten, but each member of the group has to
spend less time watching out for predators. Animals as diverse as
bumblebees and monkeys both avoid food in areas where predators are
common. Different prey species have developed various alarm calls, some
surprisingly specific. And some prey species have even evolved to react
to the alarms given off by other species.
Evolutionary biologist Randolph Nesse has studied animal defenses,
particularly those that seem to be overreactions. These defenses are
mostly all-or-nothing; a creature can't do them halfway. Birds flying
off, sea cucumbers expelling their stomachs, and vomiting are all
examples. Using signal detection theory, Nesse showed that
all-or-nothing defenses are expected to have many false alarms. "The
smoke detector principle shows that the overresponsiveness of many
defenses is an illusion. The defenses appear overresponsive because they
are 'inexpensive' compared to the harms they protect against and because
errors of too little defense are often more costly than errors of too
much defense."
So according to the theory, if flight costs 100 calories, both in flying
and lost eating time, and there's a 1 in 100 chance of being eaten if
you don't fly away, it's smarter for survival to use up 10,000 calories
repeatedly flying at the slightest movement even though there's a 99
percent false alarm rate. Whatever the numbers happen to be for a
particular species, it has evolved to get the trade-off right.
This makes sense, until the conditions that the species evolved under
change quicker than evolution can react to. Even though there are far
fewer predators in the city, birds at my feeder react as if they were in
the primal forest. Even birds safe in a zoo's aviary don't realize that
the situation has changed.
Humans are both no different and very different. We, too, feel fear and
react with our amygdala, but we also have a conscious brain that can
override those reactions. And we too live in a world very different from
the one we evolved in. Our reflexive defenses might be optimized for the
risks endemic to living in small family groups in the East African
highlands in 100,000 BC, not 2009 New York City. But we can go beyond
fear, and actually think sensibly about security.
Far too often, we don't. We tend to be poor judges of risk. We overact
to rare risks, we ignore long-term risks, we magnify risks that are also
morally offensive. We get risks wrong -- threats, probabilities, and
costs -- all the time. When we're afraid, really afraid, we'll do almost
anything to make that fear go away. Both politicians and marketers have
learned to push that fear button to get us to do what they want.
One night last month, I was awakened from my hotel-room sleep by a loud,
piercing alarm. There was no way I could ignore it, but I weighed the
risks and did what any reasonable person would do under the
circumstances: I stayed in bed and waited for the alarm to be turned
off. No point getting dressed, walking down ten flights of stairs, and
going outside into the cold for what invariably would be a false alarm
-- serious hotel fires are very rare. Unlike the bird in an aviary, I
knew better.
You can disagree with my risk calculus, and I'm sure many hotel guests
walked downstairs and outside to the designated assembly point. But it's
important to recognize that the ability to have this sort of discussion
is uniquely human. And we need to have the discussion repeatedly,
whether the topic is the installation of a home burglar alarm, the
latest TSA security measures, or the potential military invasion of
another country. These things aren't part of our evolutionary history;
we have no natural sense of how to respond to them. Our fears are often
calibrated wrong, and reason is the only way we can override them.
This essay first appeared on DarkReading.com.
http://www.darkreading.com/blog/archives/2009/11/its_hard_work_b.html
Animal behaviors:
http://judson.blogs.nytimes.com/2009/09/29/where-tasty-morsels-fear-to-trea…
or http://tinyurl.com/yhosh54
http://judson.blogs.nytimes.com/2009/10/06/leopard-behind-you/
Nesse paper:
http://www-personal.umich.edu/~nesse/Articles/Nesse-DefenseReg-EHB-2005.pdf
or http://tinyurl.com/yz8zmxh
Evaluating risk:
http://www.schneier.com/essay-162.html
http://www.schneier.com/essay-171.html
http://www.schneier.com/essay-170.html
http://www.schneier.com/essay-155.html
Hotel fires are rare:
http://www.emergency-management.net/hotel_fire.htm
** *** ***** ******* *********** *************
News
Fugitive caught after uploading his status on Facebook:
http://www.schneier.com/blog/archives/2009/10/helpful_hint_fo.html
Six years of Microsoft Patch Tuesdays:
http://www.schneier.com/blog/archives/2009/10/six_years_of_pa.html
A computer card counter detects human card counters; all it takes is a
computer that can track every card:
http://www.schneier.com/blog/archives/2009/10/computer_card_c.html
A woman posts a horrible story of how she was mistreated by the TSA, and
the TSA responds by releasing the video showing she was lying.
http://www.schneier.com/blog/archives/2009/10/tsa_successfull.html
Australia man receives reduced sentence due to encryption:
http://www.news.com.au/couriermail/story/0,23739,26232570-952,00.html
Steve Ballmer blames the failure of Windows Vista on security:
http://www.schneier.com/blog/archives/2009/10/ballmer_blames.html
James Bamford on the NSA
http://www.schneier.com/blog/archives/2009/10/james_bamford_o.html
CIA invests in social-network data mining:
http://www.wired.com/dangerroom/2009/10/exclusive-us-spies-buy-stake-in-twi…
or http://tinyurl.com/yl3zud2
http://www.visibletechnologies.com/press/pr_20091019.html
Interesting story of a 2006 Wal-Mart hack from, probably, Minsk.
http://www.wired.com/threatlevel/2009/10/walmart-hack/
Ross Anderson has put together a great resource page on security and
psychology:
http://www.cl.cam.ac.uk/~rja14/psysec.html
Best Buy sells surveillance tracker: only $99.99.
http://www.bestbuy.com/site/olspage.jsp?skuId=9540703&productCategoryId=pcm…
or http://tinyurl.com/yf2nsb8
You can also use an iPhone as a tracking device:
http://ephermata.livejournal.com/204026.html
A critical essay on the TSA from a former assistant police chief:
http://www.hlswatch.com/2009/10/15/b do-i-have-the-right-to-refuse-this-searchb/
or http://tinyurl.com/ydbox3o
Follow-up essay by the same person:
http://www.hlswatch.com/2009/11/10/where-are-all-the-white-guys-update-on-d…
The U.S. Deputy Director of National Intelligence for Collection gives a
press conference on the new Utah data collection facility.
http://link.brightcove.com/services/player/bcpid25071315001?bclid=287353280…
or http://tinyurl.com/yfzb7qm
Transcript:
http://www.dni.gov/speeches/20091023_speech.pdf
"Capability of the People's Republic of China to Conduct Cyber Warfare
and Computer Network Exploitation," prepared for the US-China Economic
and Security Review Commission, Northrop Grumman Corporation, October 9,
2009.
http://www.uscc.gov/researchpapers/2009/NorthropGrumman_PRC_Cyber_Paper_FIN…
or http://tinyurl.com/ygcmh9b
Squirrel terrorists attacking our critical infrastructure.
http://notionscapital.wordpress.com/2009/10/24/terrorists-strike-u-s-infras…
or http://tinyurl.com/ykgtadb
We have a cognitive bias to exaggerate risks caused by other humans, and
downplay risks caused by animals (and, even more, by natural phenomena).
To aid their Wall Street investigations, the FBI used DCSNet, its
massive surveillance system.
http://www.wallstreetandtech.com/blog/archives/2009/10/how_prosecutors.html…
or http://tinyurl.com/yhnt22q
Detecting terrorists by smelling fear:
http://www.schneier.com/blog/archives/2009/11/detecting_terro.html
In the "Open Access Journal of Forensic Psychology", there's a paper
about the problems with unscientific security: "A Call for
Evidence-Based Security Tools":
http://www.schneier.com/blog/archives/2009/11/the_problems_wi_1.html
Mossad hacked a Syrian official's computer; it was unattended in a hotel
room at the time.
http://www.haaretz.com/hasen/spages/1125312.html
Remember the evil maid attack: if an attacker gets hold of your computer
temporarily, he can bypass your encryption software.
http://www.schneier.com/blog/archives/2009/10/evil_maid_attac.html
Recently I wrote about the difficulty of making role-based access
control work, and how research at Dartmouth showed that it was better to
let people take the access control they need to do their jobs, and audit
the results. This interesting paper, "Laissez-Faire File Sharing,"
tries to formalize that sort of access control.
http://www.cs.columbia.edu/~smb/papers/nspw-use.pdf
http://www.schneier.com/essay-288.html
I have refrained from commenting on the case against Najibullah Zazi,
simply because it's so often the case that the details reported in the
press have very little do with reality. My suspicion was that he was,
as in so many other cases, an idiot who couldn't do any real harm and
was turned into a bogeyman for political purposes. However, John
Mueller -- who I've written about before -- has done the research.
http://www.schneier.com/blog/archives/2009/11/john_mueller_on_1.html
Interesting research: "Countering Kernel Rootkits with Lightweight Hook
Protection," by Zhi Wang, Xuxian Jiang, Weidong Cui, and Peng Ning.
http://www.schneier.com/blog/archives/2009/11/protecting_oss.html
Airport thieves prefer stealing black luggage; it's obvious why if you
think about it.
http://www.schneier.com/blog/archives/2009/11/thieves_prefer.html
We've seen lots of rumors about attacks against the power grid, both in
the U.S. and elsewhere, of people hacking the power grid. President
Obama mentioned it in his May cybersecurity speech: "In other countries
cyberattacks have plunged entire cities into darkness." Seems the
source of these rumors has been Brazil.
http://www.schneier.com/blog/archives/2009/11/hacking_the_bra.html
FBI/CIA/NSA information sharing before 9/11:
http://www.schneier.com/blog/archives/2009/11/fbiciansa_infor.html
Blowfish in fiction:
http://www.schneier.com/blog/archives/2009/11/blowfish_in_fic.html
** *** ***** ******* *********** *************
Zero-Tolerance Policies
Recent stories have documented the ridiculous effects of zero-tolerance
weapons policies in a Delaware school district: a first-grader expelled
for taking a camping utensil to school, a 13-year-old expelled after
another student dropped a pocketknife in his lap, and a seventh-grader
expelled for cutting paper with a utility knife for a class project.
Where's the common sense? the editorials cry.
These so-called zero-tolerance policies are actually zero-discretion
policies. They're policies that must be followed, no situational
discretion allowed. We encounter them whenever we go through airport
security: no liquids, gels or aerosols. Some workplaces have them for
sexual harassment incidents; in some sports a banned substance found in
a urine sample means suspension, even if it's for a real medical
condition. Judges have zero discretion when faced with mandatory
sentencing laws: three strikes for drug offences and you go to jail,
mandatory sentencing for statutory rape (underage sex), etc. A national
restaurant chain won't serve hamburgers rare, even if you offer to sign
a waiver. Whenever you hear "that's the rule, and I can't do anything
about it" -- and they're not lying to get rid of you -- you're butting
against a zero discretion policy.
These policies enrage us because they are blind to circumstance.
Editorial after editorial denounced the suspensions of elementary school
children for offenses that anyone with any common sense would agree were
accidental and harmless. The Internet is filled with essays
demonstrating how the TSA's rules are nonsensical and sometimes don't
even improve security. I've written some of them. What we want is for
those involved in the situations to have discretion.
However, problems with discretion were the reason behind these mandatory
policies in the first place. Discretion is often applied inconsistently.
One school principal might deal with knives in the classroom one way,
and another principal another way. Your drug sentence could depend
considerably on how sympathetic your judge is, or on whether she's
having a bad day.
Even worse, discretion can lead to discrimination. Schools had weapons
bans before zero-tolerance policies, but teachers and administrators
enforced the rules disproportionately against African-American students.
Criminal sentences varied by race, too. The benefit of zero-discretion
rules and laws is that they ensure that everyone is treated equally.
Zero-discretion rules also protect against lawsuits. If the rules are
applied consistently, no parent, air traveler or defendant can claim he
was unfairly discriminated against.
So that's the choice. Either we want the rules enforced fairly across
the board, which means limiting the discretion of the enforcers at the
scene at the time, or we want a more nuanced response to whatever the
situation is, which means we give those involved in the situation more
discretion.
Of course, there's more to it than that. The problem with the
zero-tolerance weapons rules isn't that they're rigid, it's that they're
poorly written.
What constitutes a weapon? Is it any knife, no matter how small?
Should the penalties be the same for a first grader and a high school
student? Does intent matter? When an aspirin carried for menstrual
cramps becomes "drug possession," you know there's a badly written rule
in effect.
It's the same with airport security and criminal sentencing. Broad and
simple rules may be simpler to follow -- and require less thinking on
the part of those enforcing them -- but they're almost always far less
nuanced than our complex society requires. Unfortunately, the more
complex the rules are, the more they're open to interpretation and the
more discretion the interpreters have.
The solution is to combine the two, rules and discretion, with
procedures to make sure they're not abused. Provide rules, but don't
make them so rigid that there's no room for interpretation. Give the
people in the situation -- the teachers, the airport security agents,
the policemen, the judges -- discretion to apply the rules to the
situation. But -- and this is the important part -- allow people to
appeal the results if they feel they were treated unfairly. And
regularly audit the results to ensure there is no discrimination or
favoritism. It's the combination of the four that work: rules plus
discretion plus appeal plus audit.
All systems need some form of redress, whether it be open and public
like a courtroom or closed and secret like the TSA. Giving discretion to
those at the scene just makes for a more efficient appeals process,
since the first level of appeal can be handled on the spot.
Zachary, the Delaware first grader suspended for bringing a combination
fork, spoon and knife camping utensil to eat his lunch with, had his
punishment unanimously overturned by the school board. This was the
right decision; but what about all the other students whose parents
weren't as forceful or media-savvy enough to turn their child's plight
into a national story? Common sense in applying rules is important, but
so is equal access to that common sense.
This essay originally appeared on the Minnesota Public Radio website.
http://minnesota.publicradio.org/display/web/2009/11/03/schneier/
http://www.nytimes.com/2009/10/12/education/12discipline.html
http://www.philly.com/inquirer/opinion/20091016_Editorial__Zero_common_sens…
or http://tinyurl.com/yls568f
http://www.lancastereaglegazette.com/article/20091020/OPINION04/910200313/L…
or http://tinyurl.com/yhcvxpu
http://www.dallasnews.com/sharedcontent/dws/dn/localnews/columnists/jraglan…
or http://tinyurl.com/yh7ehpn
http://www.htrnews.com/article/20091017/MAN06/910170416
http://www.baylor.edu/lariat/news.php?action=story&story=63347
http://www.sdnn.com/sandiego/2009-10-20/columns/marsha-sutton-zero-toleranc…
or http://tinyurl.com/ygfhysa
http://www.delmarvanow.com/article/20091020/DW02/910200338
Another example:
A former soldier who handed a discarded shotgun in to police faces at
least five years imprisonment for "doing his duty".
http://www.thisissurreytoday.co.uk/news/Ex-soldier-faces-jail-handing-gun/a…
or http://tinyurl.com/y9spuad
** *** ***** ******* *********** *************
Security in a Reputation Economy
In the past, our relationship with our computers was technical. We cared
what CPU they had and what software they ran. We understood our networks
and how they worked. We were experts, or we depended on someone else for
expertise. And security was part of that expertise.
This is changing. We access our email via the web, from any computer or
from our phones. We use Facebook, Google Docs, even our corporate
networks, regardless of hardware or network. We, especially the younger
of us, no longer care about the technical details. Computing is
infrastructure; it's a commodity. It's less about products and more
about services; we simply expect it to work, like telephone service or
electricity or a transportation network.
Infrastructures can be spread on a broad continuum, ranging from generic
to highly specialized. Power and water are generic; who supplies them
doesn't really matter. Mobile phone services, credit cards, ISPs, and
airlines are mostly generic. More specialized infrastructure services
are restaurant meals, haircuts, and social networking sites. Highly
specialized services include tax preparation for complex businesses;
management consulting, legal services, and medical services.
Sales for these services are driven by two things: price and trust. The
more generic the service is, the more price dominates. The more
specialized it is, the more trust dominates. IT is something of a
special case because so much of it is free. So, for both specialized IT
services where price is less important and for generic IT services --
think Facebook -- where there is no price, trust will grow in
importance. IT is becoming a reputation-based economy, and this has
interesting ramifications for security.
Some years ago, the major credit card companies became concerned about
the plethora of credit-card-number thefts from sellers' databases. They
worried that these might undermine the public's trust in credit cards as
a secure payment system for the internet. They knew the sellers would
only protect these databases up to the level of the threat to the
seller, and not to the greater level of threat to the industry as a
whole. So they banded together and produced a security standard called
PCI. It's wholly industry-enforced by an industry that realized its
reputation was more valuable than the sellers' databases.
A reputation-based economy means that infrastructure providers care more
about security than their customers do. I realized this 10 years ago
with my own company. We provided network-monitoring services to large
corporations, and our internal network security was much more extensive
than our customers'. Our customers secured their networks -- that's why
they hired us, after all -- but only up to the value of their networks.
If we mishandled any of our customers' data, we would have lost the
trust of all of our customers.
I heard the same story at an ENISA conference in London last June, when
an IT consultant explained that he had begun encrypting his laptop years
before his customers did. While his customers might decide that the risk
of losing their data wasn't worth the hassle of dealing with encryption,
he knew that if he lost data from one customer, he risked losing all of
his customers.
As IT becomes more like infrastructure, more like a commodity, expect
service providers to improve security to levels greater than their
customers would have done themselves.
In IT, customers learn about company reputation from many sources:
magazine articles, analyst reviews, recommendations from colleagues,
awards, certifications, and so on. Of course, this only works if
customers have accurate information. In a reputation economy, companies
have a motivation to hide their security problems.
You've all experienced a reputation economy: restaurants. Some
restaurants have a good reputation, and are filled with regulars. When
restaurants get a bad reputation, people stop coming and they close.
Tourist restaurants -- whose main attraction is their location, and
whose customers frequently don't know anything about their reputation --
can thrive even if they aren't any good. And sometimes a restaurant can
keep its reputation -- an award in a magazine, a special occasion
restaurant that "everyone knows" is the place to go -- long after its
food and service have declined.
The reputation economy is far from perfect.
This essay originally appeared in "The Guardian."
http://www.guardian.co.uk/technology/2009/nov/11/schneier-reputation-it-sec…
or http://tinyurl.com/yha3nbj
** *** ***** ******* *********** *************
Schneier News
I'm speaking at the Internet Governance Forum in Sharm el-Sheikh, Egypt,
on November 16 and 17.
http://igf09.eg/home.html
I'm speaking at the 2009 SecAU Security Congress in Perth on December 2
and 3.
http://scissec.scis.ecu.edu.au/conferences2008/
I'm speaking at an Open Rights Group event in London on December 4.
http://www.openrightsgroup.org/blog/2009/bruce-schneier-event
I'm speaking at the First IEEE Workshop on Information Forensics and
Security in London on December 8.
http://www.wifs09.org/
I'm speaking at the UCL Centre for Security and Crime Science in London
on December 7.
http://www.cscs.ucl.ac.uk/
I'm speaking at the Young Professionals in Foreign Policy in London on
December 7.
http://www.ypfp.org/content/event/London
I'm speaking at the Iberic Web Application Security Conference in Madrid
on December 10.
December 10-11, 2009
http://www.ibwas.com/
Article on me from a Luxembourg magazine.
http://www.paperjam.lu/archives/2009/11/2310_Technologie_Security/index.html
or http://tinyurl.com/y95mcpq
Interview with me on CNet.com:
http://news.cnet.com/8301-27080_3-10381460-245.html?tag=newsLeadStoriesArea…
or http://tinyurl.com/yf5otcu
Video interview with me, conducted at the Information Security Decisions
conference in Chicago in October.
http://searchsecurity.techtarget.com/video/0,297151,sid14_gci1372839,00.html
or http://tinyurl.com/yk4othd
A month ago, ThatsMyFace.com approached me about making a Bruce Schneier
action figure. It's $100. I'd like to be able to say something like
"half the proceeds are going to EPIC and EFF," but they're not. That's
the price for custom orders. I don't even get a royalty. The company
is working on lowering the price, and they've said that they'll put a
photograph of an actual example on the webpage. I've told them that at
$100 no one will buy it, but at $40 it's a funny gift for your corporate
IT person. So e-mail the company if you're interested, and if they get
enough interest they'll do a bulk order.
http://www.thatsmyface.com/f/bruce_schneier
** *** ***** ******* *********** *************
The Commercial Speech Arms Race
A few years ago, a company began to sell a liquid with identification
codes suspended in it. The idea was that you would paint it on your
stuff as proof of ownership. I commented that I would paint it on
someone else's stuff, then call the police.
I was reminded of this recently when a group of Israeli scientists
demonstrated that it's possible to fabricate DNA evidence. So now,
instead of leaving your own DNA at a crime scene, you can leave
fabricated DNA. And it isn't even necessary to fabricate. In Charlie
Stross's novel "Halting State," the bad guys foul a crime scene by
blowing around the contents of a vacuum cleaner bag, containing the DNA
of dozens, if not hundreds, of people.
This kind of thing has been going on for ever. It's an arms race, and
when technology changes, the balance between attacker and defender
changes. But when automated systems do the detecting, the results are
different. Face recognition software can be fooled by cosmetic surgery,
or sometimes even just a photograph. And when fooling them becomes
harder, the bad guys fool them on a different level. Computer-based
detection gives the defender economies of scale, but the attacker can
use those same economies of scale to defeat the detection system.
Google, for example, has anti-fraud systems that detect and shut down
advertisers who try to inflate their revenue by repeatedly clicking on
their own AdSense ads. So people built bots to repeatedly click on the
AdSense ads of their competitors, trying to convince Google to kick them
out of the system.
Similarly, when Google started penalizing a site's search engine
rankings for having "bad neighbors" -- backlinks from link farms, adult
or gambling sites, or blog spam -- people engaged in sabotage: they
built link farms and left blog comment spam linking to their
competitors' sites.
The same sort of thing is happening on Yahoo Answers. Initially,
companies would leave answers pushing their products, but Yahoo started
policing this. So people have written bots to report abuse on all their
competitors. There are Facebook bots doing the same sort of thing.
Last month, Google introduced Sidewiki, a browser feature that lets you
read and post comments on virtually any webpage. People and industries
are already worried about the effects unrestrained commentary might have
on their businesses, and how they might control the comments. I'm sure
Google has sophisticated systems ready to detect commercial interests
that try to take advantage of the system, but are they ready to deal
with commercial interests that try to frame their competitors? And do we
want to give one company the power to decide which comments should rise
to the top and which get deleted?
Whenever you build a security system that relies on detection and
identification, you invite the bad guys to subvert the system so it
detects and identifies someone else. Sometimes this is hard -- leaving
someone else's fingerprints on a crime scene is hard, as is using a mask
of someone else's face to fool a guard watching a security camera -- and
sometimes it's easy. But when automated systems are involved, it's often
very easy. It's not just hardened criminals that try to frame each
other, it's mainstream commercial interests.
With systems that police internet comments and links, there's money
involved in commercial messages -- so you can be sure some will take
advantage of it. This is the arms race. Build a detection system, and
the bad guys try to frame someone else. Build a detection system to
detect framing, and the bad guys try to frame someone else framing
someone else. Build a detection system to detect framing of framing, and
well, there's no end, really. Commercial speech is on the internet to
stay; we can only hope that they don't pollute the social systems we use
so badly that they're no longer useful.
This essay originally appeared in "The Guardian."
http://www.guardian.co.uk/technology/2009/oct/15/bruce-schneier-internet-se…
or http://tinyurl.com/yfbsb42
"Smart Water" liquid identification:
http://www.schneier.com/blog/archives/2005/02/smart_water.html
Fabricating DNA evidence:
http://www.nytimes.com/2009/08/18/science/18dna.html
Fooling face recognition software:
http://staging.spectrum.ieee.org/computing/embedded-systems/computerized-fa…
or http://tinyurl.com/yz9x4pf
http://www.theregister.co.uk/2009/02/19/facial_recognition_fail/
Google's AdSense:
http://www.wmtips.com/adsense/what-you-need-know-about-adsense.htm
Sidewiki:
http://www.google.com/sidewiki/intl/en/index.html:
http://www.pcworld.com/article/172490/google_sidewiki_a_first_look.html
or http://tinyurl.com/lgpxp8
Sidewiki fears:
http://impactiviti.wordpress.com/2009/09/29/googles-sidewiki-game-changer-f…
or http://tinyurl.com/yl4ul3g
http://www.4hoteliers.com/4hots_fshw.php?mwi=4448
http://talkbiz.com/blog/google-steals-the-web/
** *** ***** ******* *********** *************
The Doghouse: ADE 651
A divining rod to find explosives in Iraq:
http://www.schneier.com/blog/archives/2009/11/the_doghouse_ad.html
** *** ***** ******* *********** *************
"Evil Maid" Attacks on Encrypted Hard Drives
Earlier this month, Joanna Rutkowska implemented the "evil maid" attack
against TrueCrypt. The same kind of attack should work against any
whole-disk encryption, including PGP Disk and BitLocker. Basically, the
attack works like this:
Step 1: Attacker gains access to your shut-down computer and boots it
from a separate volume. The attacker writes a hacked bootloader onto
your system, then shuts it down.
Step 2: You boot your computer using the attacker's hacked bootloader,
entering your encryption key. Once the disk is unlocked, the hacked
bootloader does its mischief. It might install malware to capture the
key and send it over the Internet somewhere, or store it in some
location on the disk to be retrieved later, or whatever.
You can see why it's called the "evil maid" attack; a likely scenario is
that you leave your encrypted computer in your hotel room when you go
out to dinner, and the maid sneaks in and installs the hacked
bootloader. The same maid could even sneak back the next night and
erase any traces of her actions.
This attack exploits the same basic vulnerability as the "Cold Boot"
attack from last year, and the "Stoned Boot" attack from earlier this
year, and there's no real defense to this sort of thing. As soon as you
give up physical control of your computer, all bets are off. From CRN:
"Similar hardware-based attacks were among the main reasons why
Symantec's CTO Mark Bregman was recently advised by 'three-letter
agencies in the US Government' to use separate laptop and mobile device
when traveling to China, citing potential hardware-based compromise."
PGP sums it up in their blog. "No security product on the market today
can protect you if the underlying computer has been compromised by
malware with root level administrative privileges. That said, there
exists well-understood common sense defenses against 'Cold Boot,'
'Stoned Boot.' 'Evil Maid,' and many other attacks yet to be named and
publicized."
The defenses are basically two-factor authentication: a token you don't
leave in your hotel room for the maid to find and use. The maid could
still corrupt the machine, but it's more work than just storing the
password for later use. Putting your data on a thumb drive and taking
it with you doesn't work; when you return you're plugging your thumb
into a corrupted machine.
The real defense here is trusted boot, something Trusted Computing is
supposed to enable. And the only way to get that is from Microsoft's
BitLocker hard disk encryption, if your computer has a TPM module
version 1.2 or later.
In the meantime, people who encrypt their hard drives, or partitions on
their hard drives, have to realize that the encryption gives them less
protection than they probably believe. It protects against someone
confiscating or stealing their computer and then trying to get at the
data. It does not protect against an attacker who has access to your
computer over a period of time during which you use it, too.
Evil Maid attacks:
http://theinvisiblethings.blogspot.com/2009/10/evil-maid-goes-after-truecry…
or http://tinyurl.com/yzbbgc3
Cold Boot and Stoned Boot attacks:
http://citp.princeton.edu/memory/
http://www.stoned-vienna.com/
http://blogs.zdnet.com/security/?p=4662&tag=nl.e019
http://www.crn.com.au/News/155836,safety-first-for-it-executives-in-china.a…
or http://tinyurl.com/p2wqxq
PGP's commentary:
http://blog.pgp.com/index.php/2009/10/evil-maid-attack/
Trusted Computing:
http://www.schneier.com/blog/archives/2005/08/trusted_computi.html
** *** ***** ******* *********** *************
Is Antivirus Dead?
This essay previously appeared in "Information Security Magazine," as
the second half of a point-counterpoint with Marcus Ranum. You can read
his half here as well:
http://searchsecurity.techtarget.com/magazinePrintFriendly/0,296905,sid14_g…
or http://tinyurl.com/yz2rtbs
Security is never black and white. If someone asks, "For best security,
should I do A or B?" the answer almost invariably is both. But security
is always a trade-off. Often it's impossible to do both A and B --
there's no time to do both, it's too expensive to do both, or whatever
-- and you have to choose. In that case, you look at A and B and you
make you best choice. But it's almost always more secure to do both.
Yes, antivirus programs have been getting less effective as new viruses
are more frequent and existing viruses mutate faster. Yes, antivirus
companies are forever playing catch-up, trying to create signatures for
new viruses. Yes, signature-based antivirus software won't protect you
when a virus is new, before the signature is added to the detection
program. Antivirus is by no means a panacea.
On the other hand, an antivirus program with up-to-date signatures will
protect you from a lot of threats. It'll protect you against viruses,
against spyware, against Trojans -- against all sorts of malware. It'll
run in the background, automatically, and you won't notice any
performance degradation at all. And -- here's the best part -- it can be
free. AVG won't cost you a penny. To me, this is an easy trade-off,
certainly for the average computer user who clicks on attachments he
probably shouldn't click on, downloads things he probably shouldn't
download, and doesn't understand the finer workings of Windows Personal
Firewall.
Certainly security would be improved if people used whitelisting
programs such as Bit9 Parity and Savant Protection -- and I personally
recommend Malwarebytes' Anti-Malware -- but a lot of users are going to
have trouble with this. The average user will probably just swat away
the "you're trying to run a program not on your whitelist" warning
message or -- even worse -- wonder why his computer is broken when he
tries to run a new piece of software. The average corporate IT
department doesn't have a good idea of what software is running on all
the computers within the corporation, and doesn't want the
administrative overhead of managing all the change requests. And
whitelists aren't a panacea, either: they don't defend against malware
that attaches itself to data files (think Word macro viruses), for example.
One of the newest trends in IT is consumerization, and if you don't
already know about it, you soon will. It's the idea that new
technologies, the cool stuff people want, will become available for the
consumer market before they become available for the business market.
What it means to business is that people -- employees, customers,
partners -- will access business networks from wherever they happen to
be, with whatever hardware and software they have. Maybe it'll be the
computer you gave them when you hired them. Maybe it'll be their home
computer, the one their kids use. Maybe it'll be their cell phone or
PDA, or a computer in a hotel's business center. Your business will have
no way to know what they're using, and -- more importantly -- you'll
have no control.
In this kind of environment, computers are going to connect to each
other without a whole lot of trust between them. Untrusted computers are
going to connect to untrusted networks. Trusted computers are going to
connect to untrusted networks. The whole idea of "safe computing" is
going to take on a whole new meaning -- every man for himself. A
corporate network is going to need a simple, dumb, signature-based
antivirus product at the gateway of its network. And a user is going to
need a similar program to protect his computer.
Bottom line: antivirus software is neither necessary nor sufficient for
security, but it's still a good idea. It's not a panacea that magically
makes you safe, nor is it is obsolete in the face of current threats. As
countermeasures go, it's cheap, it's easy, and it's effective. I haven't
dumped my antivirus program, and I have no intention of doing so anytime
soon.
Problems with anti-virus software:
http://www.csoonline.com/article/495827/Experts_Only_Time_to_Ditch_the_Anti…
or http://tinyurl.com/nqo68f
http://www.computerworld.com/s/article/print/9077338/The_future_of_antivirus
or http://tinyurl.com/yfrv86s
http://www.pcworld.com/article/130455/is_desktop_antivirus_dead.html
http://www.businessweek.com/technology/content/jan2007/tc20070122_300717.htm
or http://tinyurl.com/ycdkmd3
AVG:
http://free.avg.com/us-en/homepage
Consumerization:
http://arstechnica.com/business/news/2008/07/analysis-it-consumerization-an…
or http://tinyurl.com/yd6kxgs
** *** ***** ******* *********** *************
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing
summaries, analyses, insights, and commentaries on security: computer
and otherwise. You can subscribe, unsubscribe, or change your address
on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues
are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to
colleagues and friends who will find it valuable. Permission is also
granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the
best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies,"
and "Applied Cryptography," and an inventor of the Blowfish, Twofish,
Threefish, Helix, Phelix, and Skein algorithms. He is the Chief
Security Technology Officer of BT BCSG, and is on the Board of Directors
of the Electronic Privacy Information Center (EPIC). He is a frequent
writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not
necessarily those of BT.
Copyright (c) 2009 by Bruce Schneier.
----- End forwarded message -----
--
Eugen* Leitl <leitl> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
On 10/3/05, Jason Holt <jason(a)lunkwill.org> wrote:
>
> More thoughts regarding the tokens vs. certs decision, and also multi-use:
This is a good summary of the issues. With regard to turning client
certs on and off: from many years of experience with anonymous and
pseudonymous communication, the big usability problem is remembering
which mode you are in - whether you are identified or anonymous. This
relates to the technical problem of preventing data from one mode from
leaking over into the other.
The best solution is to use separate logins for the two modes. This
prevents any technical leakage such as cookies or certificates.
Separate desktop pictures and browser skins can be selected to provide
constant cues about the mode. With this method the browser would not
need to prompt on every certificate use, so that problem with certs
would not arise.
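A minimal sketch of the weaker, browser-only version of this separation,
assuming Firefox's -no-remote and -profile options and hypothetical profile
paths. Separate OS logins, as suggested above, go further by also separating
wallpaper, skins, and every other cue about which mode you are in.

    # Sketch: launch the browser in one of two fully separate profiles so
    # that cookies, history, and client certificates cannot leak between
    # modes.  The profile paths are hypothetical.
    import subprocess
    import sys

    PROFILES = {
        "identified": "/home/alice/.profiles/identified",
        "anonymous":  "/home/alice/.profiles/anonymous",
    }

    def launch(mode):
        path = PROFILES[mode]
        # -no-remote keeps this instance from joining an already-running
        # one, so the two modes never share a process or its state.
        subprocess.Popen(["firefox", "-no-remote", "-profile", path])

    if __name__ == "__main__":
        launch(sys.argv[1] if len(sys.argv) > 1 else "anonymous")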
(As for the Chinese dissident using net cafes, if they are using
Tor at all it might be via a USB token like the one (formerly?)
available from virtualprivacymachine.com. The browser on the token can
be configured to hold the cert, making it portable.)
Network eavesdropping should not be a major issue for a pseudonym
server. Attackers would have little to gain for all their work. The
user is accessing the server via Tor so their anonymity is still
protected.
Any solution which waits for Wikimedia to make changes to their
software will probably be long in coming. When Jimmy Wales was asked
whether their software could allow logins for "trusted" users from
otherwise blocked IPs, he didn't have any idea. The technical people
are apparently in a separate part of the organization. Even if Jimmy
endorsed an idea for changing Wikipedia, he would have to sell it to
the technical guys, who would then have to implement and test it in
their Wiki code base, then it would have to be deployed in Wikipedia
(which is after all their flagship product and one which they would
want to be sure not to break).
Even once this happened, the problem would be solved only for that one case
(possibly also for other users of the Wiki code base). What about
blogs or other web services that may decide to block Tor? It would be
better to have a solution which does not require customization of the
web service software. That approach tries to make the Tor tail wag the
Internet dog.
The alternative of running a pseudonym based web proxy that only lets
"good" users pass through will avoid the need to customize web
services on an individual basis, at the expense of requiring a
pseudonym quality administrator who cancels nyms that misbehave. For
forward secrecy, this service would expunge its records of which nyms
had been active, after a day or two (long enough to make sure no
complaints are going to come back).
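A rough sketch of the bookkeeping such a proxy would need, with the two-day
expungement window taken from the paragraph above; the class and method names
are invented for illustration.

    # Sketch of the record-keeping side of the nym-based proxy (illustrative).
    import time

    PURGE_AFTER = 2 * 24 * 3600  # expunge activity records after ~two days

    class NymProxyLog:
        def __init__(self):
            self.activity = {}       # nym -> list of access timestamps
            self.cancelled = set()   # nyms the quality admin has revoked

        def allow(self, nym):
            """Gate a request: only nyms that have not been cancelled pass."""
            if nym in self.cancelled:
                return False
            self.activity.setdefault(nym, []).append(time.time())
            return True

        def handle_complaint(self, nym):
            """A complaint arrives while the record still exists: cancel."""
            if nym in self.activity:
                self.cancelled.add(nym)

        def purge_old_records(self):
            """Forward secrecy by deletion: once the complaint window has
            passed, forget which nyms were active (cancellations are kept)."""
            cutoff = time.time() - PURGE_AFTER
            for nym, stamps in list(self.activity.items()):
                recent = [t for t in stamps if t >= cutoff]
                if recent:
                    self.activity[nym] = recent
                else:
                    del self.activity[nym]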
As for the Unlinkable Serial Transactions proposal, the gist of it
is to issue a new blinded token whenever one is used. That's a clever
idea but it is not adequate for this situation, because abuse
information is not available until after the fact. By the time a
complaint arises the miscreant will have long ago received his new
blinded token and the service will have no way to stop him from
continuing to use it.
I could envision a complicated system whereby someone could use a
token on Monday to access the net, then on Wednesday they would become
eligible to exchange that token for a new one, provided that it had
not been black-listed due to complaints in the interim. This adds
considerable complexity, including the need to supply people with
multiple initial tokens so that they could do multiple net accesses
while waiting for their tokens to be eligible for exchange; the risk
that exchange would often be followed immediately by use of the new
token, harming unlinkability; the difficulty in fully black-listing a
user who has multiple independent tokens, when each act of abuse
essentially just takes one of his tokens away from him. Overall this
would be too cumbersome and problematic to use for this purpose.
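Purely to make the mechanics of that rejected scheme concrete, here is a
sketch with blinding omitted and tokens modelled as opaque identifiers; the
hold period and the names are illustrative only.

    # Sketch of "use now, exchange later if no complaints" (illustrative).
    # Blinding and signature verification of issued tokens are omitted.
    import secrets
    import time

    HOLD_PERIOD = 2 * 24 * 3600  # e.g. used Monday, exchangeable Wednesday

    class TokenIssuer:
        def __init__(self):
            self.used_at = {}         # token -> time it was spent
            self.blacklisted = set()  # tokens tied to abuse complaints

        def spend(self, token):
            """Accept a token for one net access; start its hold period."""
            if token in self.used_at or token in self.blacklisted:
                return False
            self.used_at[token] = time.time()
            return True

        def complain(self, token):
            """Abuse report arrives after the fact; block the exchange."""
            self.blacklisted.add(token)

        def exchange(self, token):
            """After the hold period, trade a clean spent token for a new one."""
            spent = self.used_at.get(token)
            if spent is None or token in self.blacklisted:
                return None
            if time.time() - spent < HOLD_PERIOD:
                return None  # complaint window still open
            del self.used_at[token]
            return secrets.token_hex(16)  # the replacement token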
Providing forward secrecy by having the nym-based web proxy erase its
records every two days is certainly less secure than doing it by
cryptographic means, but at the same time it is more secure than
trusting every web service out there to take similar actions to
protect its clients. Until a clean and unencumbered technological
approach is available, this looks like a reasonable compromise.
CP
---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo(a)metzdowd.com
----- End forwarded message -----
--
Eugen* Leitl <leitl> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
[demime 1.01d removed an attachment of type application/pgp-signature which had a name of signature.asc]
Link: http://slashdot.org/article.pl?sid=04/04/14/1856224
Posted by: timothy, on 2004-04-14 19:21:00
Topic: wireless, 55 comments
from the network-is-the-network dept.
infractor writes "ZDNet [1]is reporting that the Linux based
[2]LocustWorld Mesh system now has [3]SIP routing at every node. The
[4]LocustWorld boxes have been widely used in [5]community broadband
projects where DSL is not available, so successfully that they have
been [6]seen as a threat to next generation mobile networks. With the
addition of VoIP support, these mesh networks can now compete with the
telcos on voice as well as data services. [7]More details here."
References
1. http://news.zdnet.co.uk/communications/wireless/0,39020348,39151531,00.htm
2. http://locustworld.com/
3. http://www.faqs.org/rfcs/rfc2543.html
4. http://slashdot.org/article.pl?sid=02/10/31/2049201&tid=126
5. http://www.muniwireless.com/archives/000201.html
6. http://locustworld.com/media/timesbusiness290104.gif
7. http://locustworld.com/siprouting.php
----- End forwarded message -----
--
Eugen* Leitl <leitl> http://leitl.org
______________________________________________________________
ICBM: 48.07078, 11.61144 http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
http://moleculardevices.org http://nanomachines.net
[demime 1.01d removed an attachment of type application/pgp-signature]
Dave,
This entry from the blog at wired.com might be good for the IP
list. The best part is at the end. Good old traceroute!
--------------------------------------------------------
The Newbie's Guide to Detecting the NSA
http://blog.wired.com/27BStroke6/#1510938 ... "With that in mind,
here's the 27B Stroke 6 guide to detecting if your traffic is being
funneled into the secret room on San Francisco's Folsom street. If
you're a Windows user, fire up an MS-DOS command prompt. Now type
tracert followed by the domain name of the website, e-mail host, VoIP
switch, or whatever destination you're interested in. Watch as the
program spits out your route, line by line.

    C:\> tracert nsa.gov

      1     2 ms     2 ms     2 ms  12.110.110.204
    [...]
      7    11 ms    14 ms    10 ms  as-0-0.bbr2.SanJose1.Level3.net [64.159.0.218]
      8    13 ms    12 ms    19 ms  ae-23-56.car3.SanJose1.Level3.net [4.68.123.173]
      9    18 ms    16 ms    16 ms  192.205.33.17
     10    88 ms    92 ms    91 ms  tbr2-p012201.sffca.ip.att.net [12.123.13.186]
     11    88 ms    90 ms    88 ms  tbr1-cl2.sl9mo.ip.att.net [12.122.10.41]
     12    89 ms    97 ms    89 ms  tbr1-cl4.wswdc.ip.att.net [12.122.10.29]
     13    89 ms    88 ms    88 ms  ar2-a3120s6.wswdc.ip.att.net [12.123.8.65]
     14   102 ms    93 ms   112 ms  12.127.209.214
     15    94 ms    94 ms    93 ms  12.110.110.13
     16     *        *        *
     17     *        *        *
     18     *        *

In the above example, my
traffic is jumping from Level 3 Communications to AT&T's network in
San Francisco, presumably over the OC-48 circuit that AT&T tapped on
February 20th, 2003, according to the Klein docs. The magic string
you're looking for is sffca.ip.att.net. If it's present immediately
above or below a non-att.net entry, then -- by Klein's allegations --
your packets are being copied into room 641A, and from there,
illegally, to the NSA. Of course, if Marcus is correct and AT&T has
installed these secret rooms all around the country, then any att.net
entry in your route is a bad sign.
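For the curious, a small sketch that automates the check the post describes:
run the system traceroute and flag the hostnames in question. It assumes the
standard tracert/traceroute tools are on the path and is illustrative only.

    # Sketch of the route check described above (illustrative only).
    import platform
    import re
    import subprocess
    import sys

    def hops_to(host):
        cmd = (["tracert", host] if platform.system() == "Windows"
               else ["traceroute", host])
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        # Pull out anything that looks like a hostname on each hop line.
        return re.findall(r"[\w.-]+\.[a-z]{2,}", out)

    def check(host):
        names = hops_to(host)
        for i, name in enumerate(names):
            if "sffca.ip.att.net" in name:
                neighbours = names[max(0, i - 1):i + 2]
                if any(not n.endswith("att.net")
                       for n in neighbours if n != name):
                    print(f"{name}: adjacent to a non-att.net hop -- "
                          "the pattern the post describes")
            elif name.endswith("att.net"):
                print(f"{name}: att.net hop on the route")

    if __name__ == "__main__":
        check(sys.argv[1] if len(sys.argv) > 1 else "nsa.gov")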
-------------------------------------
You are subscribed as eugen(a)leitl.org
To manage your subscription, go to
http://v2.listbox.com/member/?listname=ip
Archives at: http://www.interesting-people.org/archives/interesting-people/
----- End forwarded message -----
--
Eugen* Leitl <leitl> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
[demime 1.01d removed an attachment of type application/pgp-signature which had a name of signature.asc]
Begin forwarded message: