cypherpunks-legacy
July 2018
- 1371 participants
- 9656 discussions
[linux-elitists] Two on RFID from Politech: Hack the tech, & Gilmore's dystopia
by Karsten M. Self 06 Jul '18
RFID has been in the news and play recently. I even heard a somewhat
informed discussion on KQED's "California XXX" Saturday.
The first article covers John Gilmore's dystopian view of RFID. Imagine
being able to create weapons which independently target specific IDs.
This sort of activity is hard to hack. It's also a partial _current_
reality:
- OBL was tracked, according to reports, via his satellite phone,
until he became aware of this, and stopped using same (possibly even
sending it on a distracting separate track from himself for a time).
- More locally, militia movements which had used anonymous phone cards
to make "untraceable" phone calls instead were tracked on the basis
of traffic analysis. While a given card wasn't allocated to an
individual, it was identifiable by account, and could be flagged for
monitoring if it called other numbers of known interest.
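The traffic-analysis point above can be sketched in a few lines; the card IDs and phone numbers below are invented for illustration:

```python
# Toy sketch of traffic analysis: a prepaid card is "anonymous", but the
# account behind it still links all of its calls together, so a single
# call to a number of known interest flags the whole account.
known_interest = {"+1-555-0100", "+1-555-0199"}

call_records = [
    ("card-A", "+1-555-7777"),
    ("card-A", "+1-555-0100"),   # card-A touches a watched number
    ("card-B", "+1-555-8888"),
]

flagged = {card for card, callee in call_records if callee in known_interest}
print(flagged)  # {'card-A'}
```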
I'm sure that states such as, say, Israel, would have a significant
interest in munitions having characteristics described by Gilmore.
The second covers a "hacking the system" concept. I'd considered
something similar myself, though different in approach. Rather than
finding RFID chips and "redistributing" them, why not create
programmable RFID broadcasters which could spoof other chips, and
distribute these? The idea is to pollute any RFID detectors with a
vast spew of superfluous data.
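As a toy model of that pollution idea (hypothetical reader and tag names, not a real RFID stack), consider a tracker that logs (tag, reader) sightings and tries to infer where a given tag is:

```python
import random

def infer_locations(sightings, tag_id):
    """Set of readers at which tag_id was sighted."""
    return {reader for tid, reader in sightings if tid == tag_id}

random.seed(0)
readers = [f"reader-{i}" for i in range(10)]

# Honest world: each tag answers at exactly one reader.
honest = [(f"tag-{i}", readers[i]) for i in range(10)]
assert infer_locations(honest, "tag-3") == {"reader-3"}

# Polluted world: programmable broadcasters replay tag-3's ID everywhere,
# so the tracker sees many candidate locations instead of one.
spoofed = honest + [("tag-3", random.choice(readers)) for _ in range(50)]
print(len(infer_locations(spoofed, "tag-3")))
```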
There are a couple of implications here which are pretty clear. Many of
us carry a set of identifiable broadcast appliances already, and this
will increase. These signatures are difficult to mask. The more likely
response will be to find these signatures, and to the extent they're
broadcastable, clone them and distribute them more widely (specific
seeding). This will make the specific signatures less reliable for
either legitimate or illegitimate use.
At the same time, legitimate business uses of RFID monitoring will
probably be highly specific in their focus on data interest. There's
simply going to be too much data floating around, most of it not
interesting, to be able to work with reasonably. This would be further
encouraged by seeding of noise data closely resembling legitimate keys.
Predictability of RFID sequences, and known legit or covert use of data
will be key in determining both utility and countermeasure activities
concerning RFID.
----- Forwarded message from Declan McCullagh <declan(a)well.com> -----
Date: Fri, 30 Apr 2004 00:24:45 -0400
From: Declan McCullagh <declan(a)well.com>
To: politech(a)politechbot.com
Subject: John Gilmore's horrific, dystopian view of an RFID world
[priv]
[I always learn something from John Gilmore, and this is no
exception. Although parts of his dystopia are already true: I
travel with a cell phone, 802.1x devices, and Bluetooth devices that
broadcast my identity (to a sufficiently savvy adversary) even more
efficiently than an RFID tag would... --Declan]
-------- Original Message --------
Subject: Re: [Politech] Computerworld falls for RFID "sniper rifle" hoax?
Date: Wed, 28 Apr 2004 13:21:35 -0700
From: John Gilmore <gnu(a)toad.com>
To: Declan McCullagh <declan(a)well.com>
CC: politech(a)politechbot.com
References: <408F2D74.8040301(a)well.com>
Nice hoax. But the opposite is more likely to come true. Rather
than shooting RFID chips into people, people with RFID chips already
in or on them will be shot. People with RFID chips in their
clothing, books, bags, or bodies could be targeted by "smart
projectiles" that will zero in on that particular tag.
Today's "smart bombs" already self-guide toward laser-identified or
RF-identified or heat-identified targets.
The technical challenges involved in guiding a missile toward an
RFID chip would probably relate to the speed of the missile compared
to the range at which the RFID chip can be made to respond and the
agility with which the missile can change course.
Such a missile could probably more easily be designed to *arm* or
*trigger* its explosion when a particular RFID chip is in range.
That way, if fired at innocents, it would be a dud that would only
cause minimal damage, but if fired at the right person, it would
blow up.
But we need not get so science-fiction about it. Rather than bring
the mountain to Mohammed, let's let Mohammed come to the mountain.
Let's see what this technology would do for an everyday practice of
today's freedom fighters who are defending their country by opposing
one of the US Government's current wars of occupation. In order to
comply with government labeling mandates resulting from the huge
Firestone tire recall, Michelin has announced that it plans to put
RFID chips in every tire it sells to car makers (and eventually in
every tire they sell). Similar plans are afoot for many other
automotive and personal products.
Imagine being able to bury an explosive in a roadway -- that would
only go off when a particular car drove over it. You could bury
these bombs months in advance, in any or every major or minor
roadway. You could change the targeting whenever you liked (e.g.
via driving a radio-equipped car over it and transmitting new
instructions to it). You could give it a whole list of cars that it
would explode for, or a set of cars and dates.
If you put such bombs throughout a metropolitan area, a car could
drive through the area for months without triggering anything --
taking evasive routes, etc. But on the appointed day, each of the
bombs surrounding the area would know to go off when that same car
passed, without the responsible parties having to visit the sites any
later than days or weeks beforehand (making them hard to catch or
deter).
Such explosives would be detectable by their radio emissions -- RFID
pings. But in a world where RFID pings are being transmitted by
everything around you, including every cellphone and doorframe and
cash register and ATM machine and camera and car and computer and
palmtop and parking meter and cop car ... you won't even notice.
Places with "congestion pricing" like central London, or any toll
road anywhere, would even have plenty of active RFID readers buried
in the roadway already. And I'm sure the cops anywhere would love
to have them for tracking where everybody is driving --
individually.
Welcome to automated personal death. Courtesy of RFID and leading
shortsighted global corporations, with government encouragement.
John
----- End forwarded message -----
And item #2: hacking the system.
----- Forwarded message from Declan McCullagh <declan(a)well.com> -----
Date: Wed, 05 May 2004 00:41:47 -0400
From: Declan McCullagh <declan(a)well.com>
To: politech(a)politechbot.com
Subject: Hack the tech: a possible counter-RFID strategy [priv]
-------- Original Message --------
Subject: A possible counter-RFID strategy
Date: Mon, 3 May 2004 07:57:30 -0400
From: Rich Kulawiec <rsk(a)firemountain.net>
To: Declan McCullagh <declan(a)well.com>
(An edit of something I sent to the folks at nocards.org last summer)
Having followed the recent RFID-related messages on Politech, I
thought I'd send this along.
First, a small historical diversion: back in the 1980's, there were
rumors that the NSA had a complete Usenet feed going into its data
centers. In reaction, Usenet article authors began to include what
were called "NSA fodder" in the headers and bodies of their
articles; text strings like:
Moscow nuke Iran Kremlin secret spy CIA transmission
were put there to (at least in theory) cause the text-analysis
programs and perhaps the human beings analyzing the incoming data at
the NSA to work a bit harder.
Nobody (I hope) took this very seriously, but it does illustrate an
interesting point about approaches to frustrating unwanted data
collection, and that is that there are two ways to do that:
1. Deny the data to the collectors.
2. Give them all the data they could possibly hope for... but
fill it with so much noise that it's useless.
In the case of RFID tags, so many people are all over their
deployment that approach #1 may now be effectively impossible.
Fine. Let them knock themselves out putting RFID tags on and in
everything and tracking them and accumulating all the data, and
spending lots and lots of money and time setting all that up.
Meanwhile, let's try approach #2.
After all, there's no reason why you and I can't have our own RFID
scanners, and locate the tags that we happen to find in our
possession, now is there? And if I felt like, oh, removing the tag
from my new shirt and sticking it in a city bus seat, or extracting
the tag from a new lawn sprinkler and putting it in on a shopping
cart back at the store where I bought it, well, why not?
Now imagine the consequences if 20 million people did the same.
We could even have little exchanges where we throw all our tags in a
pile and randomly take some away to play with -- the point being
that then not even *we* know what happened to them.
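The pile-and-draw exchange is just a random permutation of the owner-to-tag mapping; a sketch with invented names:

```python
import random
random.seed(2)

# Everyone throws their tags in a pile and draws back at random, so the
# resulting owner -> tag mapping is unknown even to the participants.
owners = ["alice", "bob", "carol", "dave"]
pile = [f"tag-of-{owner}" for owner in owners]
random.shuffle(pile)
new_holding = dict(zip(owners, pile))
print(new_holding)
```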
I find it very satisfying to think that someone trying to figure out
where my bicycle helmet is at the moment will actually be tracking a
Walmart (rushing headlong toward adoption of RFID) manager's car
that happened to be parked somewhere nearby when I felt like
transplanting the RFID tag.
RFID tags from all kinds of things could be randomly planted
everywhere: in an airplane seat, in a newspaper at the library, in a
copy of a rented video, EVERYWHERE. Some could be transplanted to
similar items; others to completely different ones. And so on.
I'm not suggesting that anyone abandon the fight against the
intrusive and abusive uses of RFID by any means; I'm just suggesting
that one possible countermeasure to make whatever deployment goes
forward far less effective than its backers hope is to cause their
RFID trackers to record huge amounts of completely useless data. [1]
This is relatively easy to do, and could actually be turned into a
rather amusing exercise in competitive ingenuity. [2]
But more seriously, if a sufficient number of people participate,
and thus a sufficient number of RFID tags are pressed into service
generating bogus data, it will discredit them and devalue their
usefulness, thus discouraging their further adoption and
undercutting attempts to rely on them for some of their more
Orwellian possible uses.
It's a shame that something like this is necessary: but given the
total lack of respect for privacy and any semblance of
self-restraint on the part of governments and corporations, it is.
--Rsk
[1] Most importantly, "useless data" that will be very difficult to
distinguish from useful data. Every communications engineer learns
that separating signal from noise is relatively easy when they have
very different properties, but much harder when they're the same.
Hence the need to transplant at least some RFID tags to similar
items, thus generating bogus but hard-to-spot-as-bogus data.
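Footnote [1]'s point can be shown with a toy filter over invented Gaussian "readings": noise with different statistics is trivial to reject, while noise mimicking the signal sails through.

```python
import random
random.seed(1)

signal      = [random.gauss(10.0, 1.0) for _ in range(1000)]
crude_noise = [random.gauss(50.0, 1.0) for _ in range(1000)]  # wrong stats
mimic_noise = [random.gauss(10.0, 1.0) for _ in range(1000)]  # same stats

def keep_plausible(readings, lo=5.0, hi=15.0):
    """Crude filter: keep readings in the range real signal occupies."""
    return [r for r in readings if lo <= r <= hi]

print(len(keep_plausible(crude_noise)))  # ~0: easy to separate
print(len(keep_plausible(mimic_noise)))  # ~1000: indistinguishable
```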
[2] "I'd like to thank you for coming to testify before our
committee today, Mr. Ashton, and as my first question, I'd like you
to explain why the Senate's RFID scanner indicates that you walked
in here with a cheese grater, a copy of the latest Harry Potter
video, a forklift, and the latest issue of 'Motorcycle Babes' on
your person."
----- End forwarded message -----
--
Karsten M. Self <kmself(a)ix.netcom.com> http://kmself.home.netcom.com/
What Part of "Gestalt" don't you understand?
Kerry '04 http://www.johnkerry.com/
_______________________________________________
linux-elitists
http://zgp.org/mailman/listinfo/linux-elitists
----- End forwarded message -----
--
Eugen* Leitl <a href="http://leitl.org">leitl</a>
______________________________________________________________
ICBM: 48.07078, 11.61144 http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
http://moleculardevices.org http://nanomachines.net
Rich Jones wrote:
> Jacob - Are you aware of TorProxy / Shadow Browser for Android -
> http://www.cl.cam.ac.uk/research/dtg/android/tor/ - is this going to
> building on that? Either way, I'm excited. I've gotten quite good at
> Android stuff and would be interested in helping out, if you need a
> hand.
Hi Rich,
Yes - we've been somewhat in contact with the authors of TorProxy and
Shadow Browser. They did great work and it's quite a slick pair of
applications. However, the TorProxy in the Android market is absolutely
unsafe to use. It is based on research code that was never intended for
high security needs or real serious public use:
http://archives.seul.org/or/java/Sep-2009/msg00003.html
Rather, we're working on building an Android package we've codenamed Orbot:
https://svn.torproject.org/svn/projects/android/trunk/Orbot/
Orbot will replace the TorProxy component and it includes the C
reference implementation of Tor. It will also ship with Privoxy
(although we're also looking into Polipo) to provide an HTTP proxy as
well as the normal SOCKS4A/5 proxy interface into the Tor network.
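A hedged sketch of how a client might use the HTTP interface described above (8118 is Privoxy's conventional HTTP proxy port and 9050 Tor's conventional SOCKS port on desktop; the ports Orbot actually exposes on Android may differ):

```python
import urllib.request

# Not Orbot's actual API: just a client pointing at a local Tor stack's
# conventional HTTP proxy interface (Privoxy on 8118, forwarding into
# Tor's SOCKS port 9050).
TOR_HTTP_PROXY = "http://127.0.0.1:8118"

proxy_handler = urllib.request.ProxyHandler({
    "http": TOR_HTTP_PROXY,
    "https": TOR_HTTP_PROXY,
})
opener = urllib.request.build_opener(proxy_handler)
# opener.open("http://check.torproject.org/") would route via Tor if a
# local Tor + Privoxy stack were actually running.
print(sorted(proxy_handler.proxies))  # ['http', 'https']
```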
We don't have a great solution for Shadow at this point and it's
non-trivial to sew it into Orbot. Nathan has a better grasp on the
Android internals that make the web browser component complicated across
Android versions. Perhaps he'll weigh in on it...
In any case, we may move to a hybrid model for some mobile phones.
It's easy to provide a compiled Tor binary (the C reference
implementation) and a Java Tor implementation [0] in a single container.
This should allow for greater compatibility and hopefully everyone will
have better anonymity as a result.
Best,
Jake
[0] http://github.com/brl/JTor
----- End forwarded message -----
--
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
https://secure.wikileaks.org/wiki/On_the_take_and_loving_it
Grant code 'MDA904' - National Security Agency
The NSA has pushed tens or hundreds of millions into the academy
through research grants using one particular grant code. ...
John
---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo(a)metzdowd.com
----- End forwarded message -----
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
At 3:36 PM +1000 8/11/02, David Hillary wrote:
> I think that tax havens such as the Cayman Islands should be ranked
> among the freest in the world. No taxes on business or individuals
> for a start. Great environment for banking and commerce. Good
> protection of property rights. Small non-interventionist
> government.
Clearly you've never met "Triumph", the Fabulous Crotch-Sniffing
Caymanian Customs Wonder Dog at extreme close range, or heard the
story about the expat's college age kid, actually born on Cayman, who
was literally exiled from the island when the island constabulary
"discovered" a marijuana seed or three in his summer-break rental car
a few years back.
I mean, his old man was some senior cheese at Global Crossing at the
time, but this was back when they could do no wrong. If that's what
they did to *his* kid, imagine what some poor former
junk-bond-hustler might have to deal with someday for, say, the odd
unauthorized Cuban nightlife excursion. A discreetly folded twenty
keeps the stamp off your passport on the ground in Havana, and a
bottle of Maker's Mark goes a long way towards some interesting
nocturnal diversion when you get there and all, but still, you can't
help thinking that Uncle's going to come a-knockin', and that Cayman
van's going to stop rockin' some day, and when it does, it ain't
gonna be pretty.
Closer to home, conceptually at least, a couple of cryptogeeken were
hustled off and strip-searched, on the spot, when they landed on
Grand Cayman for the Financial Cryptography conference there a couple
of years ago. Like lots of cypherpunks, these guys were active
shooters in the Bay Area, and they had stopped in Jamaica, Mon, for a
few days on the way to Grand Cayman. Because they, and their stuff,
reeked on both counts, they were given complimentary colorectal
examinations and an entertaining game of 20 questions, or two,
courtesy of the Caymanian Federales, after the obligatory fun and
games with a then-snarling Crotch-Sniffing Caymanian Wonder Dog.
Heck, I had to completely unpack *all* my stuff for a nice, well-fed
Caymanian customs lady just to get *out* of the country when I left.
Besides, tax havens are being increasingly constrained as to their
activities these days, because they cost the larger nation-states too
much in the way of "escaped" "revenue", or at least the perception of
same in the local "free" press. Obviously, if your money "there"
isn't exchangeable into your money "here", it kind of defeats the
purpose of keeping your money "there" in the first place, giving
folks like FinCEN lots of leverage when financial treaties come up
for renegotiation due to changes in technology, like on-line
credit-card and securities clearing, or the odd governmental or
quango re-org, like they are wont to do increasingly in the EU, and
the US.
As a result, the veil of secrecy went in Switzerland quite a while
ago. The recent holocaust deposit thing was just the bride and groom
on that particular wedding-cake, and, as goes Switzerland, so goes
Luxembourg, and of course Lichtenstein, which itself is usually
accessible only through Switzerland. Finally, of course, the Caymans
themselves will cough up depositor lists whenever Uncle comes calling
about one thing or another on an increasingly longer list of fishing
pretexts.
At this point, the "legal", state-backed pecuniary privacy pickings
are kind of thin on the ground. I mean, I'm not sure I'd like to keep
my money in, say, Vanuatu. Would you? Remember, this is a place where
a bandana hanging on a string across an otherwise public road will
close it down until the local erst-cannibal hunter-gatherer turned
statutorily-permanent landowner figures out just what his new or
imagined property rights are this afternoon.
The point is, any cypherpunk worth his salt will tell you that the
only solution to financial, or any other, privacy is to make private
transactions on the net cheaper and more secure than "transparent"
transactions currently are in meatspace. Then things get *real*
interesting, and financial privacy -- and considerably more personal
freedom -- will just be the icing on the wedding cake. Bride and
groom action figures sold separately, of course.
Cheers,
RAH
(Who went to FC2K at the Grand Cayman Marriott in February that year.
Nice place, I liked Anguilla better though, at least at the time, and
I haven't been back to either since. The beaches are certainly better
in Anguilla, and the "private" banking system there is probably just
as porous as Cayman's is, by this point. If I were to pick up and
move Somewhere Free outside Your Friendly Neighborhood Unipolar
Superpower, New Zealand is somewhere near the top of my list, and
Chile would be next, though things change quickly out there in
ballistic-missile flyover country. In that vein, who knows, maybe
we're in for some kind of latter-day Peloponnesian irony, and
*Russia* will end up the freest place on earth someday. Stranger
things have happened in the last couple of decades, yes?)
-----BEGIN PGP SIGNATURE-----
Version: PGP 7.5
iQA/AwUBPVYS48PxH8jf3ohaEQKwtgCgw/XSwzauabEP/8jDvUVk/rgFdroAn0xf
Owk90GoK+X5Pv+bGoKXCwzBK
=1w9d
-----END PGP SIGNATURE-----
--
-----------------
R. A. Hettinga <mailto: rah(a)ibuc.com>
The Internet Bearer Underwriting Corporation <http://www.ibuc.com/>
44 Farquhar Street, Boston, MA 02131 USA
"... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience." -- Edward Gibbon, 'Decline and Fall of the Roman Empire'
subscribe: send blank email to dgcchat-join(a)lists.goldmoney.com
unsubscribe: send blank email to dgcchat-leave(a)lists.goldmoney.com
digest: send an email to dgcchat-request(a)lists.goldmoney.com
with "set yourname(a)yourdomain.com digest=on" in the message body
--- end forwarded text
[linux-elitists] Two on RFID from Politech: Hack the tech, & Gilmore's dystopia
by Karsten M. Self 06 Jul '18
by Karsten M. Self 06 Jul '18
06 Jul '18
RFID has been in the news and play recently. I even heard a somewhat
informed discussion on KQED's "California XXX" Saturday.
The first article covers John Gilmore's dystopian view of RFID. Imagine
being able to create weapons which indipendently target specific IDs.
This sort of activity is hard to hack. It's also a partial _current_
reality:
- OBL was tracked, according to reports, via his satellite phone,
until he became aware of this, and stopped using same (possibly even
sending it on a distracting separate track from himself for a time).
- More locally, militia movements which had used anonymous phone cards
to make "untraceable" phone calls instead were tracked on the basis
of traffic analysis. While a given card wasn't allocated to an
individual, it was identifiable by account, and could be flagged for
monitoring if it called other numbers of known interest.
I'm sure that states such as, say, Israel, would have a significant
interest in munitions having characteristics described by Gilmore.
The second covers a "hacking the system" concept. I'd considered
something similar myself, though different in approach. Rather than
finding RFID chips and "redistributing" them, why not create
programmable RFID broadcasters which could spoof other chips, and
distribute these. The idea being to pollute any RFID detectors with a
vast spew of superfluous data.
There are a couple of implications here which are pretty clear. Many of
us carry a set of identifyable broadcast appliances already, and this
will increase. These signatures are difficult to mask. The more likely
response will be to find these signatures, and to the extent they're
broadcastable, clone them and distribute them more widely (specific
seeding). This will make the specific signatures less reliable for
either legitimate or illegitimate use.
At the same time, legitimate business uses of RFID monitoring will
probably be highly specific in their focus on data interest. There's
simply going to be too much data floating around, most of it not
interesting, to be able to work with reasonably. This would be further
encouraged by seeding of noise data closely resembling legitimate keys.
Predictability of RFID sequences, and known legit or covert use of data
will be key in determining both utility and countermeasure activities
concerning RFID.
----- Forwarded message from Declan McCullagh <declan(a)well.com> -----
Date: Fri, 30 Apr 2004 00:24:45 -0400
From: Declan McCullagh <declan(a)well.com>
To: politech(a)politechbot.com
Subject: John Gilmore's horrific, dystopian view of an RFID world
[priv]
[I always learn something from John Gilmore, and this is no
exception. Although parts of his dystopia are already true: I
travel with a cell phone, 802.1x devices, and Bluetooth devices that
broadcast my identity (to a sufficiently savvy adversary) even more
efficiently than an RFID tag would... --Declan]
-------- Original Message --------
Subject: Re: [Politech] Computerworld falls for RFID "sniper rifle" hoax?
Date: Wed, 28 Apr 2004 13:21:35 -0700
From: John Gilmore <gnu(a)toad.com>
To: Declan McCullagh <declan(a)well.com>
CC: politech(a)politechbot.com
References: <408F2D74.8040301(a)well.com>
Nice hoax. But the opposite is more likely to come true. Rather
than shooting RFID chips into people, people with RFID chips already
in or on them will be shot. People with RFID chips in their
clothing, books, bags, or bodies could be targeted by "smart
projectiles" that will zero in on that particular Smart.
Today's "smart bombs" already self-guide toward laser-identified or
RF-identified or heat-identified targets.
The technical challenges involved in guiding a missile toward an
RFID chip would probably relate to the speed of the missile compared
to the range at which the RFID chip can be made to respond and the
agility with which the missile can change course.
Such a missile could probably more easily be designed to *arm* or
*trigger* its explosion when a particular RFID chip is in range.
That way, if fired at innocents, it would be a dud that would only
cause minimal damage, but if fired at the right person, it would
blow up.
But we need not get so science-fiction about it. Rather than bring
the mountain to Mohammed, let's let Mohammed come to the mountain.
Let's see what this technology would do for an everyday practice of
today's freedom fighters who are defending their country by opposing
one of the US Government's current wars of occupation. In order to
comply with government labeling mandates resulting from the huge
Firestone tire recall, Michelin has announced that it plans to put
RFID chips in every tire it sells to car makers (and eventually in
every tire they sell). Similar plans are afoot for many other
automotive and personal products.
Imagine being able to bury an explosive in a roadway -- that would
only go off when a particular car drove over it. You could bury
these bombs months in advance, in any or every major or minor
roadway. You could change the targeting whenever you liked (e.g.
via driving a radio-equipped car over it and transmitting new
instructions to it). You could give it a whole list of cars that it
would explode for, or a set of cars and dates.
If you put such bombs throughout a metropolitan area, a car could
drive through the area for months without triggering anything --
taking evasive routes, etc. But on the appointed day, each of the
bombs surrounding the area would know to go off when that same car
passed, without the responsible parties having visited the sites any
later than days or weeks beforehand (making them hard to catch or
deter).
Such explosives would be detectable by their radio emissions -- RFID
pings. But in a world where RFID pings are being transmitted by
everything around you, including every cellphone and doorframe and
cash register and ATM machine and camera and car and computer and
palmtop and parking meter and cop car ... you won't even notice.
Places with "congestion pricing" like central London, or any toll
road anywhere, would even have plenty of active RFID readers buried
in the roadway already. And I'm sure the cops anywhere would love
to have them for tracking where everybody is driving --
individually.
Welcome to automated personal death. Courtesy of RFID and leading
shortsighted global corporations, with government encouragement.
John
----- End forwarded message -----
And item #2: hacking the system.
----- Forwarded message from Declan McCullagh <declan(a)well.com> -----
Date: Wed, 05 May 2004 00:41:47 -0400
From: Declan McCullagh <declan(a)well.com>
To: politech(a)politechbot.com
Subject: Hack the tech: a possible counter-RFID strategy [priv]
-------- Original Message --------
Subject: A possible counter-RFID strategy
Date: Mon, 3 May 2004 07:57:30 -0400
From: Rich Kulawiec <rsk(a)firemountain.net>
To: Declan McCullagh <declan(a)well.com>
(An edit of something I sent to the folks at nocards.org last summer)
Having followed the recent RFID-related messages on Politech, I
thought I'd send this along.
First, a small historical diversion: back in the 1980's, there were
rumors that the NSA had a complete Usenet feed going into its data
centers. In reaction, Usenet article authors began to include what
were called "NSA fodder" in the headers and bodies of their
articles; text strings like:
Moscow nuke Iran Kremlin secret spy CIA transmission
were put there to (at least in theory) cause the text-analysis
programs and perhaps the human beings analyzing the incoming data at
the NSA to work a bit harder.
Nobody (I hope) took this very seriously, but it does illustrate an
interesting point about approaches to frustrating unwanted data
collection, and that is that there are two ways to do that:
1. Deny the data to the collectors. 2. Give them all the
data they could possibly hope for... but fill it with so
much noise that it's useless.
In the case of RFID tags, so many people are all over their
deployment that approach #1 may now be effectively impossible.
Fine. Let them knock themselves out putting RFID tags on and in
everything and tracking them and accumulating all the data, and
spending lots and lots of money and time setting all that up.
Meanwhile, let's try approach #2.
After all, there's no reason why you and I can't have our own RFID
scanners, and locate the tags that we happen to find in our
possession, now is there? And if I felt like, oh, removing the tag
from my new shirt and sticking it in a city bus seat, or extracting
the tag from a new lawn sprinkler and putting it in on a shopping
cart back at the store where I bought it, well, why not?
Now imagine the consequences if 20 million people did the same.
We could even have little exchanges where we throw all our tags in a
pile and randomly take some away to play with -- the point being
that then not even *we* know what happened to them.
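The pile-and-redistribute idea above can be modeled as a toy simulation (illustrative code, not from the original post). The interesting property is that for a random redistribution, on average only about one tag in the whole pile ends up back with its original owner, regardless of group size:

```python
import random

def tag_exchange(owners, rng):
    """Pool everyone's RFID tags, shuffle, and hand them back at random.

    owners: list of participant names; participant i starts out holding
    tag i. Returns a dict mapping participant -> tag index received.
    """
    tags = list(range(len(owners)))
    rng.shuffle(tags)
    return dict(zip(owners, tags))

if __name__ == "__main__":
    rng = random.Random(42)
    people = [f"person{i}" for i in range(20)]
    # Average number of tags that stay with their original owner,
    # over many exchanges: ~1 for a random permutation of any size,
    # so almost everyone walks away carrying a stranger's tag.
    trials = 1000
    total_fixed = sum(
        sum(1 for i, p in enumerate(people)
            if tag_exchange(people, rng)[p] == i)
        for _ in range(trials)
    )
    print(f"average tags still with original owner: {total_fixed / trials:.2f}")
```

Not even the participants can reconstruct who got which tag afterward, which is exactly the point of the exchange.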
I find it very satisfying to think that someone trying to figure out
where my bicycle helmet is at the moment will actually be tracking the
car of a Walmart manager (Walmart being in a headlong rush toward RFID
adoption) that happened to be parked somewhere nearby when I felt like
transplanting the RFID tag.
RFID tags from all kinds of things could be randomly planted
everywhere: in an airplane seat, in a newspaper at the library, in a
copy of a rented video, EVERYWHERE. Some could be transplanted to
similar items; others to completely different ones. And so on.
I'm not suggesting that anyone abandon the fight against the
intrusive and abusive uses of RFID by any means; I'm just suggesting
that one possible countermeasure to make whatever deployment goes
forward far less effective than its backers hope is to cause their
RFID trackers to record huge amounts of completely useless data. [1]
This is relatively easy to do, and could actually be turned into a
rather amusing exercise in competitive ingenuity. [2]
But more seriously, if a sufficient number of people participate,
and thus a sufficient number of RFID tags are pressed into service
generating bogus data, it will discredit them and devalue their
usefulness, thus discouraging their further adoption and
undercutting attempts to rely on them for some of their more
Orwellian possible uses.
It's a shame that something like this is necessary: but given the
total lack of respect for privacy and any semblance of
self-restraint on the part of governments and corporations, it is.
--Rsk
[1] Most importantly, "useless data" that will be very difficult to
distinguish from useful data. Every communications engineer learns
that separating signal from noise is relatively easy when they have
very different properties, but much harder when they're the same.
Hence the need to transplant at least some RFID tags to similar
items, thus generating bogus but hard-to-spot-as-bogus data.
[2] "I'd like to thank you for coming to testify before our
committee today, Mr. Ashton, and as my first question, I'd like you
to explain why the Senate's RFID scanner indicates that you walked
in here with a cheese grater, a copy of the latest Harry Potter
video, a forklift, and the latest issue of 'Motorcycle Babes' on
your person."
----- End forwarded message -----
--
Karsten M. Self <kmself(a)ix.netcom.com> http://kmself.home.netcom.com/
What Part of "Gestalt" don't you understand?
Kerry '04 http://www.johnkerry.com/
_______________________________________________
linux-elitists
http://zgp.org/mailman/listinfo/linux-elitists
----- End forwarded message -----
--
Eugen* Leitl <a href="http://leitl.org">leitl</a>
______________________________________________________________
ICBM: 48.07078, 11.61144 http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
http://moleculardevices.org http://nanomachines.net
CRYPTO-GRAM
August 15, 2009
by Bruce Schneier
Chief Security Technology Officer, BT
schneier(a)schneier.com
http://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit
<http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at
<http://www.schneier.com/crypto-gram-0908.html>. These same essays
appear in the "Schneier on Security" blog:
<http://www.schneier.com/blog>. An RSS feed is available.
** *** ***** ******* *********** *************
In this issue:
Risk Intuition
Privacy Salience and Social Networking Sites
Building in Surveillance
News
Laptop Security while Crossing Borders
Self-Enforcing Protocols
Schneier News
Another New AES Attack
Lockpicking and the Internet
Comments from Readers
** *** ***** ******* *********** *************
Risk Intuition
People have a natural intuition about risk, and in many ways it's very
good. It fails at times due to a variety of cognitive biases, but for
normal risks that people regularly encounter, it works surprisingly
well: often better than we give it credit for. This struck me as I
listened to yet another conference presenter complaining about security
awareness training. He was talking about the difficulty of getting
employees at his company to actually follow his security policies:
encrypting data on memory sticks, not sharing passwords, not logging in
from untrusted wireless networks. "We have to make people understand the
risks," he said.
It seems to me that his co-workers understand the risks better than he
does. They know what the real risks are at work, and that they all
revolve around not getting the job done. Those risks are real and
tangible, and employees feel them all the time. The risks of not
following security procedures are much less real. Maybe the employee
will get caught, but probably not. And even if he does get caught, the
penalties aren't serious.
Given this accurate risk analysis, any rational employee will regularly
circumvent security to get his or her job done. That's what the company
rewards, and that's what the company actually wants.
"Fire someone who breaks security procedure, quickly and publicly," I
suggested to the presenter. "That'll increase security awareness faster
than any of your posters or lectures or newsletters." If the risks are
real, people will get it.
You see the same sort of risk intuition on motorways. People are less
careful about posted speed limits than they are about the actual speeds
police issue tickets for. It's also true on the streets: people respond
to real crime rates, not public officials proclaiming that a
neighborhood is safe.
The warning stickers on ladders might make you think the things are
considerably riskier than they are, but people have a good intuition
about ladders and ignore most of the warnings. (This isn't to say that
some people don't do stupid things around ladders, but for the most part
they're safe. The warnings are more about the risk of lawsuits to ladder
manufacturers than risks to people who climb ladders.)
As a species, we are naturally tuned in to the risks inherent in our
environment. Throughout our evolution, our survival depended on making
reasonably accurate risk management decisions intuitively, and we're so
good at it, we don't even realize we're doing it.
Parents know this. Children have surprisingly perceptive risk intuition.
They know when parents are serious about a threat and when their threats
are empty. And they respond to the real risks of parental punishment,
not the inflated risks based on parental rhetoric. Again, awareness
training lectures don't work; there have to be real consequences.
It gets even weirder. The University College London professor John Adams
popularized the metaphor of a mental risk thermostat. We tend to seek
some natural level of risk, and if something becomes less risky, we tend
to make it more risky. Motorcycle riders who wear helmets drive faster
than riders who don't.
Our risk thermostats aren't perfect (that newly helmeted motorcycle
rider will still decrease his overall risk) and will tend to remain
within the same domain (he might drive faster, but he won't increase his
risk by taking up smoking), but in general, people demonstrate an innate
and finely tuned ability to understand and respond to risks.
Of course, our risk intuition fails spectacularly and often, with
regards to rare risks, unknown risks, voluntary risks, and so on. But
when it comes to the common risks we face every day -- the kinds of
risks our evolutionary survival depended on -- we're pretty good.
So whenever you see someone in a situation who you think doesn't
understand the risks, stop first and make sure you understand the risks.
You might be surprised.
This essay previously appeared in The Guardian.
http://www.guardian.co.uk/technology/2009/aug/05/bruce-schneier-risk-securi…
or http://tinyurl.com/ngu224
Risk thermostat:
http://www.amazon.com/Risk-John-Adams/dp/1857280687/ref=sr_1_1?ie=UTF8&…
or http://tinyurl.com/kwmuz9
http://davi.poetry.org/blog/?p=4492
Failures in risk intuition:
http://www.schneier.com/essay-155.html
http://www.schneier.com/essay-171.html
** *** ***** ******* *********** *************
Privacy Salience and Social Networking Sites
Reassuring people about privacy makes them more, not less, concerned.
It's called "privacy salience," and Leslie John, Alessandro Acquisti,
and George Loewenstein -- all at Carnegie Mellon University --
demonstrated this in a series of clever experiments. In one, subjects
completed an online survey consisting of a series of questions about
their academic behavior -- "Have you ever cheated on an exam?" for
example. Half of the subjects were first required to sign a consent
warning -- designed to make privacy concerns more salient -- while the
other half did not. Also, subjects were randomly assigned to receive
either a privacy confidentiality assurance, or no such assurance. When
the privacy concern was made salient (through the consent warning),
people reacted negatively to the subsequent confidentiality assurance
and were less likely to reveal personal information.
In another experiment, subjects completed an online survey where they
were asked a series of personal questions, such as "Have you ever tried
cocaine?" Half of the subjects completed a frivolous-looking survey --
"How BAD are U??" -- with a picture of a cute devil. The other half
completed the same survey with the title "Carnegie Mellon University
Survey of Ethical Standards," complete with a university seal and
official privacy assurances. The results showed that people who were
reminded about privacy were less likely to reveal personal information
than those who were not.
Privacy salience does a lot to explain social networking sites and their
attitudes towards privacy. From a business perspective, social
networking sites don't want their members to exercise their privacy
rights very much. They want members to be comfortable disclosing a lot
of data about themselves.
Joseph Bonneau and Soeren Preibusch of Cambridge University have been
studying privacy on 45 popular social networking sites around the world.
(You may not have realized that there *are* 45 popular social networking
sites around the world.) They found that privacy settings were often
confusing and hard to access; Facebook, with its 61 privacy settings, is
the worst. To understand some of the settings, they had to create
accounts with different settings so they could compare the results.
Privacy tends to increase with the age and popularity of a site.
General-use sites tend to have more privacy features than niche sites.
But their most interesting finding was that sites consistently hide any
mentions of privacy. Their splash pages talk about connecting with
friends, meeting new people, sharing pictures: the benefits of
disclosing personal data.
These sites do talk about privacy, but only on hard-to-find privacy
policy pages. There, the sites give strong reassurances about their
privacy controls and the safety of data members choose to disclose on
the site. There, the sites display third-party privacy seals and other
icons designed to assuage any fears members have.
It's the Carnegie Mellon experimental result in the real world. Users
care about privacy, but don't really think about it day to day. The
social networking sites don't want to remind users about privacy, even
if they talk about it positively, because any reminder will result in
users remembering their privacy fears and becoming more cautious about
sharing personal data. But the sites also need to reassure those
"privacy fundamentalists" for whom privacy is always salient, so they
have very strong pro-privacy rhetoric for those who take the time to
search them out. The two different marketing messages are for two
different audiences.
Social networking sites are improving their privacy controls as a result
of public pressure. At the same time, there is a counterbalancing
business pressure to decrease privacy; watch what's going on right now
on Facebook, for example. Naively, we should expect companies to make
their privacy policies clear to allow customers to make an informed
choice. But the marketing need to reduce privacy salience will frustrate
market solutions to improve privacy; sites would much rather obfuscate
the issue than compete on it as a feature.
This essay originally appeared in the Guardian.
http://www.guardian.co.uk/technology/2009/jul/15/privacy-internet-facebook
or http://tinyurl.com/ml7kv4
Privacy experiments:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1430482
Privacy and social networking sites:
http://www.cl.cam.ac.uk/~jcb82/doc/privacy_jungle_bonneau_preibusch.pdf
Facebook:
http://www.insidefacebook.com/2009/05/13/facebook-privacy-guide/
http://www.nytimes.com/external/readwriteweb/2009/06/24/24readwriteweb-the-…
or http://tinyurl.com/lgpfh8
http://www.allfacebook.com/2009/02/facebook-privacy
** *** ***** ******* *********** *************
Building in Surveillance
China is the world's most successful Internet censor. While the Great
Firewall of China isn't perfect, it effectively limits information
flowing in and out of the country. But now the Chinese government is
taking things one step further.
Under a requirement taking effect soon, every computer sold in China
will have to contain the Green Dam Youth Escort software package.
Ostensibly a pornography filter, it is government spyware that will
watch every citizen on the Internet.
Green Dam has many uses. It can police a list of forbidden Web sites. It
can monitor a user's reading habits. It can even enlist the computer in
some massive botnet attack, as part of a hypothetical future cyberwar.
China's actions may be extreme, but they're not unique. Democratic
governments around the world -- Sweden, Canada and the United Kingdom,
for example -- are rushing to pass laws giving their police new powers
of Internet surveillance, in many cases requiring communications system
providers to redesign products and services they sell.
Many are passing data retention laws, forcing companies to keep
information on their customers. Just recently, the German government
proposed giving itself the power to censor the Internet.
The United States is no exception. The 1994 CALEA law required phone
companies to facilitate FBI eavesdropping, and since 2001, the NSA has
built substantial eavesdropping systems in the United States. The
government has repeatedly proposed Internet data retention laws,
allowing surveillance into past activities as well as present.
Systems like this invite criminal appropriation and government abuse.
New police powers, enacted to fight terrorism, are already used in
situations of normal crime. Internet surveillance and control will be no
different.
Official misuses are bad enough, but the unofficial uses worry me more.
Any surveillance and control system must itself be secured. An
infrastructure conducive to surveillance and control invites
surveillance and control, both by the people you expect and by the
people you don't.
China's government designed Green Dam for its own use, but it's been
subverted. Why does anyone think that criminals won't be able to use it
to steal bank account and credit card information, use it to launch
other attacks, or turn it into a massive spam-sending botnet?
Why does anyone think that only authorized law enforcement will mine
collected Internet data or eavesdrop on phone and IM conversations?
These risks are not theoretical. After 9/11, the National Security
Agency built a surveillance infrastructure to eavesdrop on telephone
calls and e-mails within the United States.
Although procedural rules stated that only non-Americans and
international phone calls were to be listened to, actual practice didn't
always match those rules. NSA analysts collected more data than they
were authorized to, and used the system to spy on wives, girlfriends,
and famous people such as President Clinton.
But that's not the most serious misuse of a telecommunications
surveillance infrastructure. In Greece, between June 2004 and March
2005, someone wiretapped more than 100 cell phones belonging to members
of the Greek government -- the prime minister and the ministers of
defense, foreign affairs and justice.
Ericsson built this wiretapping capability into Vodafone's products, and
enabled it only for governments that requested it. Greece wasn't one of
those governments, but someone still unknown -- a rival political party?
organized crime? -- figured out how to surreptitiously turn the feature on.
Researchers have already found security flaws in Green Dam that would
allow hackers to take over the computers. Of course there are additional
flaws, and criminals are looking for them.
Surveillance infrastructure can be exported, which also aids
totalitarianism around the world. Western companies like Siemens, Nokia,
and Secure Computing built Iran's surveillance infrastructure. U.S.
companies helped build China's electronic police state. Twitter's
anonymity saved the lives of Iranian dissidents -- anonymity that many
governments want to eliminate.
Every year brings more Internet censorship and control -- not just in
countries like China and Iran, but in the United States, the United
Kingdom, Canada and other free countries.
The control movement is egged on by both law enforcement, trying to
catch terrorists, child pornographers and other criminals, and by media
companies, trying to stop file sharers.
It's bad civic hygiene to build technologies that could someday be used
to facilitate a police state. No matter what the eavesdroppers and
censors say, these systems put us all at greater risk. Communications
systems that have no inherent eavesdropping capabilities are more secure
than systems with those capabilities built in.
This essay previously appeared -- albeit with fewer links -- on the
Minnesota Public Radio website.
http://minnesota.publicradio.org/display/web/2009/07/30/schneier/
A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/08/building_in_sur.html
** *** ***** ******* *********** *************
News
Data can leak through power lines; the NSA has known about this for decades:
http://news.bbc.co.uk/2/hi/technology/8147534.stm
These days, there's a lot of open research on side channels.
http://www.schneier.com/blog/archives/2008/10/remotely_eavesd.html
http://www.schneier.com/blog/archives/2009/06/eavesdropping_o_3.html
http://www.schneier.com/paper-side-channel.html
South Africa takes its security seriously. Here's an ATM that
automatically squirts pepper spray into the faces of "people tampering
with the card slots." Sounds cool, but these kinds of things are all
about false positives:
http://www.guardian.co.uk/world/2009/jul/12/south-africa-cash-machine-peppe…
or http://tinyurl.com/nj5zks
Cybercrime paper: "Distributed Security: A New Model of Law
Enforcement," Susan W. Brenner and Leo L. Clarke. It's from 2005, but
I'd never seen it before.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=845085
Cryptography has zero-knowledge proofs, where Alice can prove to Bob
that she knows something without revealing it to Bob. Here's something
similar from the real world. It's a research project to allow weapons
inspectors from one nation to verify the disarming of another nation's
nuclear weapons without learning any weapons secrets in the process,
such as the amount of nuclear material in the weapon.
http://news.bbc.co.uk/2/hi/europe/8154029.stm
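To give a sense of how a zero-knowledge proof works in code, here is a minimal sketch of a Schnorr-style identification protocol over a deliberately tiny group (toy parameters, nowhere near real security, and not the machinery the inspection research actually uses): Alice convinces Bob she knows x with y = g^x, without ever sending x.

```python
import secrets

# Toy group: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
P, Q, G = 2039, 1019, 4

def keygen():
    x = secrets.randbelow(Q - 1) + 1      # Alice's secret
    return x, pow(G, x, P)                # (secret x, public y = g^x mod p)

def prove_commit():
    k = secrets.randbelow(Q - 1) + 1      # fresh randomness per proof
    return k, pow(G, k, P)                # keep k secret, send t = g^k

def prove_respond(k, x, c):
    return (k + c * x) % Q                # s = k + c*x mod q

def verify(y, t, c, s):
    # Bob accepts iff g^s == t * y^c (mod p); he learns nothing about x.
    return pow(G, s, P) == (t * pow(y, c, P)) % P

if __name__ == "__main__":
    x, y = keygen()
    k, t = prove_commit()
    c = secrets.randbelow(Q)              # Bob's random challenge
    s = prove_respond(k, x, c)
    print("proof accepted:", verify(y, t, c, s))
```

The inspection problem is analogous: prove "this object is a real warhead" without revealing the classified measurements that make it one.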
I wrote about mapping drug use by testing sewer water in 2007, but
there's new research:
http://www.schneier.com/blog/archives/2009/07/mapping_drug_us.html
Excellent article detailing the Twitter attack.
http://www.techcrunch.com/2009/07/19/the-anatomy-of-the-twitter-attack/
or http://tinyurl.com/lderkq
Social Security numbers are not random. In some cases, you can predict
them with date and place of birth.
http://www.nhregister.com/articles/2009/07/07/news/a1_--_id_theft.txt
http://redtape.msnbc.com/2009/07/theres-a-new-reason-to-worry-about-the-sec…
or http://tinyurl.com/n8o7kf
http://www.wired.com/wiredscience/2009/07/predictingssn/
http://www.cnn.com/2009/US/07/10/social.security.numbers/index.html
http://www.pnas.org/content/106/27/10975
http://www.pnas.org/content/early/2009/07/02/0904891106.full.pdf
http://www.heinz.cmu.edu/~acquisti/ssnstudy/
I don't see any new insecurities here. We already know that Social
Security numbers are not secrets. And anyone who wants to steal a
million SSNs is much more likely to break into one of the gazillion
databases out there that store them.
NIST has announced the 14 SHA-3 candidates that have advanced to the
second round: BLAKE, Blue Midnight Wish, CubeHash, ECHO, Fugue, Grostl,
Hamsi, JH, Keccak, Luffa, Shabal, SHAvite-3, SIMD, and Skein. In
February, I chose my favorites: Arirang, BLAKE, Blue Midnight Wish,
ECHO, Grostl, Keccak, LANE, Shabal, and Skein. Of the ones NIST
eventually chose, I am most surprised to see CubeHash and most surprised
not to see LANE.
http://csrc.nist.gov/groups/ST/hash/sha-3/Round2/submissions_rnd2.html
http://www.schneier.com/essay-249.html
http://csrc.nist.gov/groups/ST/hash/sha-3/index.html
http://www.skein-hash.info/
Nice description of the base rate fallacy.
http://news.bbc.co.uk/2/hi/uk_news/magazine/8153539.stm
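The fallacy is easy to reproduce with a quick Bayes calculation (illustrative numbers, not taken from the linked article): even a 99%-accurate test for a rare condition produces mostly false positives.

```python
# A test with 99% sensitivity and a 1% false-positive rate, applied to a
# condition affecting 1 in 1,000 people. What fraction of positives are real?
prevalence = 0.001
sensitivity = 0.99            # P(positive | condition)
false_positive_rate = 0.01    # P(positive | no condition)

true_pos = sensitivity * prevalence
false_pos = false_positive_rate * (1 - prevalence)
p_condition_given_positive = true_pos / (true_pos + false_pos)

print(f"P(condition | positive test) = {p_condition_given_positive:.1%}")
# Intuition says ~99%; the base rate drags the real answer down to about 9%.
```

This is the same arithmetic that dooms dragnet surveillance: when the target is rare, the hits are overwhelmingly noise.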
This is funny: "Tips for Staying Safe Online":
http://www.schneier.com/blog/archives/2009/07/tips_for_stayin.html
Seems like the Swiss may be running out of secure gold storage. If this
is true, it's a real security issue. You can't just store the stuff
behind normal locks. Building secure gold storage takes time and money.
http://www.commodityonline.com/news/Swiss-banks-have-no-space-left-for-gold…
or http://tinyurl.com/kqpm8w
I am reminded of a related problem the EU had during the transition to
the euro: where to store all the bills and coins before the switchover
date. There wasn't enough vault space in banks, because the vast
majority of currency is in circulation. It's a similar problem,
although the EU banks could solve theirs with lots of guards, because it
was only a temporary problem.
A large sign saying "United States" at a border crossing was deemed a
security risk:
http://www.schneier.com/blog/archives/2009/07/large_signs_a_s.html
Clever new real estate scam:
http://www.schneier.com/blog/archives/2009/07/new_real_estate.html
Bypassing the iPhone's encryption. I want more technical details.
http://www.wired.com/gadgetlab/2009/07/iphone-encryption/
Excellent essay by Jonathan Zittrain on the risks of cloud computing:
http://www.nytimes.com/2009/07/20/opinion/20zittrain.html
Here's me on cloud computing:
http://www.schneier.com/blog/archives/2009/06/cloud_computing.html
More fearmongering. The headline is "Terrorists could use internet to
launch nuclear attack: report." The subhead: "The risk of
cyber-terrorism escalating to a nuclear strike is growing daily,
according to a study."
http://www.guardian.co.uk/technology/2009/jul/24/internet-cyber-attack-terr…
or http://tinyurl.com/mhfdyy
Note the weasel words in the article. The study "suggests that under
the right circumstances." We're "leaving open the possibility." The
report "outlines a number of potential threats and situations" where the
bad guys could "make a nuclear attack more likely." Gadzooks. I'm
tired of this idiocy. Stop overreacting to rare risks. Refuse to be
terrorized, people.
http://www.schneier.com/essay-171.html
http://www.schneier.com/essay-124.html
Interesting TED talk by Eve Ensler on security. She doesn't use any of
the terms, but in the beginning she's echoing a lot of the current
thinking about evolutionary psychology and how it relates to security.
http://www.ted.com/talks/eve_ensler_on_security.html
In cryptography, we've long used the term "snake oil" to refer to crypto
systems with good marketing hype and little actual security. It's the
phrase I generalized into "security theater." Well, it turns out that
there really is a snake oil salesman.
http://blogs.reuters.com/oddly-enough/2009/07/24/we-found-him-he-really-exi…
or http://tinyurl.com/mo75tu
Research that proves what we already knew: too many security warnings
results in complacency.
http://lorrie.cranor.org/pubs/sslwarnings.pdf
The New York Times has an editorial on regulating chemical plants.
http://www.nytimes.com/2009/08/04/opinion/04tue2.html
The problem is a classic security externality, which I wrote about in 2007.
http://www.schneier.com/essay-194.html
Good essay on security vs. usability: "When Security Gets in the Way."
http://jnd.org/dn.mss/when_security_gets_in_the_way.html
A 1934 story from the International Herald Tribune shows how we reacted
to the unexpected 75 years ago:
http://www.schneier.com/blog/archives/2009/08/how_we_reacted.html
New airport security hole: funny.
http://scienceblogs.com/gregladen/2009/07/overheard_at_airport.php
Here's some complicated advice on securing passwords that -- I'll bet --
no one follows. Of the ten rules, I regularly break seven. How about you?
http://windowssecrets.com/2009/08/06/01-Gmail-flaw-shows-value-of-strong-pa…
or http://tinyurl.com/px784h
Here's my advice on choosing secure passwords.
http://www.wired.com/politics/security/commentary/securitymatters/2007/01/7…
or http://tinyurl.com/2beaq2
"An Ethical Code for Intelligence Officers"
http://www.schneier.com/blog/archives/2009/08/an_ethical_code.html
Man-in-the-middle trucking attack:
http://www.schneier.com/blog/archives/2009/08/man-in-the-midd.html
"On Locational Privacy, and How to Avoid Losing it Forever"
http://www.eff.org/wp/locational-privacy
** *** ***** ******* *********** *************
Laptop Security while Crossing Borders
Last year, I wrote about the increasing propensity for governments,
including the U.S. and Great Britain, to search the contents of people's
laptops at customs. What we know is still based on anecdote, as no
country has clarified the rules about what their customs officers are
and are not allowed to do, and what rights people have.
Companies and individuals have dealt with this problem in several ways,
from keeping sensitive data off laptops traveling internationally, to
storing the data -- encrypted, of course -- on websites and then
downloading it at the destination. I have never liked either solution. I
do a lot of work on the road, and need to carry all sorts of data with
me all the time. It's a lot of data, and downloading it can take a long
time. Also, I like to work on long international flights.
There's another solution, one that works with whole-disk encryption
products like PGP Disk (I'm on PGP's advisory board), TrueCrypt, and
BitLocker: Encrypt the data to a key you don't know.
It sounds crazy, but stay with me. Caveat: Don't try this at home if
you're not very familiar with whatever encryption product you're using.
Failure results in a bricked computer. Don't blame me.
Step One: Before you board your plane, add another key to your
whole-disk encryption (it'll probably mean adding another "user") -- and
make it random. By "random," I mean really random: Pound the keyboard
for a while, like a monkey trying to write Shakespeare. Don't make it
memorable. Don't even try to memorize it.
Technically, this key doesn't directly encrypt your hard drive. Instead,
it encrypts the key that is used to encrypt your hard drive -- that's
how the software allows multiple users.
So now there are two different users named with two different keys: the
one you normally use, and some random one you just invented.
Step Two: Send that new random key to someone you trust. Make sure the
trusted recipient has it, and make sure it works. You won't be able to
recover your hard drive without it.
Step Three: Burn, shred, delete or otherwise destroy all copies of that
new random key. Forget it. If it was sufficiently random and
non-memorable, this should be easy.
Step Four: Board your plane normally and use your computer for the whole
flight.
Step Five: Before you land, delete the key you normally use.
At this point, you will not be able to boot your computer. The only key
remaining is the one you forgot in Step Three. There's no need to lie
to the customs official, which would itself often be a crime; you can
even show him a copy of this article if he doesn't believe you.
Step Six: When you're safely through customs, get that random key back
from your confidant, boot your computer and re-add the key you normally
use to access your hard drive.
And that's it.
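Mechanically, the protocol above is just key-slot bookkeeping. Here's a
toy sketch of the idea in Python. The XOR-based wrap() is an
illustration only -- real products use proper authenticated key-wrap
modes -- and every name and parameter here is mine, not from any actual
product:

```python
import hashlib
import secrets

def wrap(disk_key, passphrase):
    """Encrypt the disk key under a key derived from a user passphrase.
    Toy XOR wrapping for illustration only."""
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                              b"demo-salt", 100_000)
    return bytes(a ^ b for a, b in zip(disk_key, kek))

unwrap = wrap  # XOR wrapping is its own inverse

disk_key = secrets.token_bytes(32)     # the key that actually encrypts the drive
random_pw = secrets.token_urlsafe(32)  # Step One: random and unmemorable
slots = {
    "normal": wrap(disk_key, "usual passphrase"),
    "random": wrap(disk_key, random_pw),  # Steps Two/Three: send to a confidant, destroy your copy
}

del slots["normal"]  # Step Five: delete the key you normally use
# Only the confidant's copy of random_pw can now recover disk_key.
```

Each slot holds the same disk key wrapped under a different passphrase,
which is why deleting one slot leaves the others usable.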
This is by no means a magic get-through-customs-easily card. Your
computer might be impounded, and you might be taken to court and
compelled to reveal who has the random key.
But the purpose of this protocol isn't to prevent all that; it's just
to deny customs any possible access to your data. You might be
delayed. You might have your computer seized. (This will cost you any
work you did on the flight, but -- honestly -- at that point that's the
least of your troubles.) You might be turned back or sent home. But when
you're back home, you have access to your corporate management, your
personal attorneys, your wits after a good night's sleep, and all the
rights you normally have in whatever country you're now in.
This procedure not only protects you against the warrantless search of
your data at the border, it also allows you to deny a customs official
your data without having to lie or pretend -- which itself is often a crime.
Now the big question: Who should you send that random key to?
Certainly it should be someone you trust, but -- more importantly -- it
should be someone with whom you have a privileged relationship.
Depending on the laws in your country, this could be your spouse, your
attorney, your business partner or your priest. In a larger company, the
IT department could institutionalize this as a policy, with the help
desk acting as the key holder.
You could also send it to yourself, but be careful. You don't want to
e-mail it to your webmail account, because then you'd be lying when you
tell the customs official that there is no possible way you can decrypt
the drive.
You could put the key on a USB drive and send it to your destination,
but there are potential failure modes. It could fail to get there in
time to be waiting for your arrival, or it might not get there at all.
You could airmail the drive with the key on it to yourself a couple of
times, in a couple of different ways, and also fax the key to yourself
... but that's more work than I want to do when I'm traveling.
If you only care about the return trip, you can set it up before you
return. Or you can set up an elaborate one-time pad system, with
identical lists of keys with you and at home: Destroy each key on the
list you have with you as you use it.
Remember that you'll need to have full-disk encryption, using a product
such as PGP Disk, TrueCrypt or BitLocker, already installed and enabled
to make this work.
I don't think we'll ever get to the point where our computer data is
safe when crossing an international border. Even if countries like the
U.S. and Britain clarify their rules and institute privacy protections,
there will always be other countries that will exercise greater latitude
with their authority. And sometimes protecting your data means
protecting your data from yourself.
This essay originally appeared on Wired.com.
http://www.wired.com/politics/security/commentary/securitymatters/2009/07/s…
or http://tinyurl.com/nw6bkd
** *** ***** ******* *********** *************
Self-Enforcing Protocols
There are several ways two people can divide a piece of cake in half.
One way is to find someone impartial to do it for them. This works, but
it requires another person. Another way is for one person to divide the
piece, and the other person to complain (to the police, a judge, or his
parents) if he doesn't think it's fair. This also works, but still
requires another person -- at least to resolve disputes. A third way is
for one person to do the dividing, and for the other person to choose
the half he wants.
That third way, known by kids, pot smokers, and everyone else who needs
to divide something up quickly and fairly, is called cut-and-choose.
People use it because it's a self-enforcing protocol: a protocol
designed so that neither party can cheat.
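The incentive can be checked directly: because the chooser always takes
the larger piece, the divider's guaranteed share is maximized by cutting
evenly. A minimal sketch (the brute-force search and all names are
mine):

```python
def divider_share(cut):
    """If the divider cuts the cake at fraction `cut`, the chooser takes
    the larger piece and the divider keeps the smaller one."""
    return min(cut, 1 - cut)

# Brute-force cut points in 1% steps: an even cut is the divider's best move.
best_share, best_cut = max((divider_share(c / 100), c / 100) for c in range(101))
assert (best_share, best_cut) == (0.5, 0.5)
```

Any uneven cut only shrinks the piece the divider is left with, which is
exactly why neither party can gain by cheating.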
Self-enforcing protocols are useful because they don't require trusted
third parties. Modern systems for transferring money -- checks, credit
cards, PayPal -- require trusted intermediaries like banks and credit
card companies to facilitate the transfer. Even cash transfers require
a trusted government to issue currency, and they take a cut in the form
of seigniorage. Modern contract protocols require a legal system to
resolve disputes. Modern commerce wasn't possible until those systems
were in place and generally trusted, and complex business contracts
still aren't possible in areas where there is no fair judicial system.
Barter is a self-enforcing protocol: nobody needs to facilitate the
transaction or resolve disputes. It just works.
Self-enforcing protocols are safer than other types because participants
don't gain an advantage from cheating. Modern voting systems are rife
with the potential for cheating, but an open show of hands in a room --
one that everyone in the room can count for himself -- is
self-enforcing. On the other hand, there's no secret ballot, late
voters are potentially subjected to coercion, and it doesn't scale well
to large elections. But there are mathematical election protocols that
have self-enforcing properties, and some cryptographers have suggested
their use in elections.
Here's a self-enforcing protocol for determining property tax: the
homeowner decides the value of the property and calculates the resultant
tax, and the government can either accept the tax or buy the home for
that price. Sounds unrealistic, but the Greek government implemented
exactly that system for the taxation of antiquities. It was the easiest
way to motivate people to accurately report the value of antiquities.
And shotgun clauses in contracts are essentially the same thing.
A VAT, or value-added tax, is a self-enforcing alternative to sales tax.
Sales tax is collected on the entire value of the thing at the point
of retail sale; both the customer and the storeowner want to cheat the
government. But VAT is collected at every step between raw materials
and that final customer; the tax is on the difference between the price
of the goods sold and the price of the materials bought. Buyers want
official receipts with as high a purchase price as possible, so each
buyer along the chain keeps each seller honest. Yes, there's still an
incentive to cheat on
the final sale to the customer, but the amount of tax collected at that
point is much lower.
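A small worked example makes this concrete. The figures below are made
up; the point is that each link's tax is proportional to its own value
added, so only a small slice of the total rides on the final retail
sale:

```python
RATE_PCT = 20  # hypothetical 20% rate
# Ex-tax selling price at each link in the chain: raw materials,
# manufacturer, wholesaler, retailer.
prices = [10, 40, 70, 100]

# VAT at each step is the rate times that step's value added
# (its selling price minus its purchase price).
vat_per_stage = [(sell - buy) * RATE_PCT // 100
                 for buy, sell in zip([0] + prices, prices)]

total_vat = sum(vat_per_stage)    # 20: same total as a 20% retail sales tax
retail_stake = vat_per_stage[-1]  # only 6 of it rides on the final sale
```

Cheating on the last sale can dodge at most the retail stage's slice,
not the whole tax, which is the self-enforcing part.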
Of course, self-enforcing protocols aren't perfect. For example,
someone in a cut-and-choose can punch the other guy and run away with
the entire piece of cake. But perfection isn't the goal here; the goal
is to reduce cheating by taking away potential avenues of cheating.
Self-enforcing protocols improve security not by implementing
countermeasures that prevent cheating, but by leveraging economic
incentives so that the parties don't want to cheat.
One more self-enforcing protocol. Imagine a pirate ship that encounters
a storm. The pirates are all worried about their gold, so they put
their personal bags of gold in the safe. During the storm, the safe
cracks open, and all the gold mixes up and spills out on the floor. How
do the pirates determine who owns what? They each announce to the group
how much gold they had. If the total of all the announcements matches
what's in the pile, it's divided as people announced. If it's
different, then the captain keeps it all. I can think of all kinds of
ways this can go wrong -- the captain and one pirate can collude to
throw off the total, for example -- but it is self-enforcing against
individual misreporting.
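The pirates' rule is easy to simulate. This sketch (names and amounts
invented) shows why a lone over-claimer gains nothing:

```python
def divide_gold(claims, pile):
    """Each pirate announces his holdings. If the claims sum to the pile,
    everyone gets what he announced; otherwise the captain keeps it all."""
    if sum(claims.values()) == pile:
        return dict(claims)
    return {"captain": pile}

honest = {"anne": 30, "bart": 50, "cora": 20}
assert divide_gold(honest, 100) == honest

# One pirate inflating his claim throws off the total -- and forfeits the lot.
greedy = {"anne": 30, "bart": 60, "cora": 20}
assert divide_gold(greedy, 100) == {"captain": 100}
```

Under-claiming fails the same way, so honest announcement is each
pirate's only individually safe strategy.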
This essay originally appeared on ThreatPost.
http://threatpost.com/blogs/value-self-enforcing-protocols
** *** ***** ******* *********** *************
Schneier News
I am speaking at the OWASP meeting in Minneapolis on August 24:
http://www.owasp.org/index.php/Minneapolis_St_Paul
Audio from my Black Hat talk is here:
http://www.blackhat.com/html/bh-usa-09/bh-usa-09-archives.html#Schneier
or http://tinyurl.com/mvewwx
** *** ***** ******* *********** *************
Another New AES Attack
A new and very impressive attack against AES has just been announced.
Over the past couple of months, there have been two new cryptanalysis
papers on AES. The attacks presented in the papers are not practical --
they're far too complex, they're related-key attacks, and they're
against larger-key versions and not the 128-bit version that most
implementations use -- but they are impressive pieces of work all the same.
This new attack, by Alex Biryukov, Orr Dunkelman, Nathan Keller, Dmitry
Khovratovich, and Adi Shamir, is much more devastating. It is a
completely practical attack against ten-round AES-256:
Abstract. AES is the best known and most widely used
block cipher. Its three versions (AES-128, AES-192, and AES-256)
differ in their key sizes (128 bits, 192 bits and 256 bits) and in
their number of rounds (10, 12, and 14, respectively). In the case
of AES-128, there is no known attack which is faster than the
2^128 complexity of exhaustive search. However, AES-192
and AES-256 were recently shown to be breakable by attacks which
require 2^176 and 2^119 time, respectively. While these
complexities are much faster than exhaustive search, they are
completely non-practical, and do not seem to pose any real threat
to the security of AES-based systems.
In this paper we describe several attacks which can break with
practical complexity variants of AES-256 whose number of rounds
are comparable to that of AES-128. One of our attacks uses only
two related keys and 2^39 time to recover the complete
256-bit key of a 9-round version of AES-256 (the best previous
attack on this variant required 4 related keys and 2^120
time). Another attack can break a 10 round version of AES-256 in
2^45 time, but it uses a stronger type of related subkey
attack (the best previous attack on this variant required 64
related keys and 2^172 time).
They also describe an attack against 11-round AES-256 that requires 2^70
time -- almost practical.
These new results greatly improve on the Biryukov, Khovratovich, and
Nikolic papers mentioned above, and a paper I wrote with six others in
2000, where we describe a related-key attack against 9-round AES-256
(then called Rijndael) in 2^224 time. (This again proves the cryptographer's
adage: attacks always get better, they never get worse.)
By any definition of the term, this is a huge result.
There are three reasons not to panic:
* The attack exploits the fact that the key schedule for the 256-bit
version is pretty lousy -- something we pointed out in our 2000 paper --
but doesn't extend to AES with a 128-bit key.
* It's a related-key attack, which requires the cryptanalyst to have
access to plaintexts encrypted with multiple keys that are related in a
specific way.
* The attack only breaks 11 rounds of AES-256. Full AES-256 has 14 rounds.
Not much comfort there, I agree. But it's what we have.
Cryptography is all about safety margins. If you can break n rounds of
a cipher, you design it with 2n or 3n rounds. What we're learning is
that the safety margin of AES is much less than previously believed.
And while there is no reason to scrap AES in favor of another algorithm,
NIST should increase the number of rounds of all three AES variants. At
this point, I suggest AES-128 at 16 rounds, AES-192 at 20 rounds, and
AES-256 at 28 rounds. Or maybe even more; we don't want to be revising
the standard again and again.
And for new applications I suggest that people don't use AES-256.
AES-128 provides more than enough security margin for the foreseeable
future. But if you're already using AES-256, there's no reason to change.
The paper:
http://eprint.iacr.org/2009/374
Older AES cryptanalysis papers:
http://eprint.iacr.org/2009/241
http://eprint.iacr.org/2009/317
AES:
http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf
http://www.schneier.com/blog/archives/2009/07/new_attack_on_a.html
http://www.schneier.com/paper-rijndael.pdf
** *** ***** ******* *********** *************
Lockpicking and the Internet
Physical locks aren't very good. They keep the honest out, but any
burglar worth his salt can pick the common door lock pretty quickly.
It used to be that most people didn't know this. Sure, we all watched
television criminals and private detectives pick locks with an ease only
found on television and thought it realistic, but somehow we still held
onto the belief that our own locks kept us safe from intruders.
The Internet changed that.
First was the MIT Guide to Lockpicking, written by the late Bob ("Ted
the Tool") Baldwin. Then came Matt Blaze's 2003 paper on breaking master
key systems. After that, came a flood of lockpicking information on the
Net: opening a bicycle lock with a Bic pen, key bumping, and more. Many
of these techniques were already known in both the criminal and
locksmith communities. The locksmiths tried to suppress the knowledge,
believing their guildlike secrecy was better than openness. But they've
lost: never has there been more public information about lockpicking --
or safecracking, for that matter.
Lock companies have responded with more complicated locks, and more
complicated disinformation campaigns.
There seems to be a limit to how secure you can make a wholly mechanical
lock, as well as a limit to how large and unwieldy a key the public will
accept. As a result, there is increasing interest in other lock
technologies.
As a security technologist, I worry that if we don't fully understand
these technologies and the new sorts of vulnerabilities they bring, we
may be trading a flawed technology for an even worse one. Electronic
locks are vulnerable to attack, often in new and surprising ways.
Start with keypads, more and more common on house doors. These have the
benefit that you don't have to carry a physical key around, but there's
the problem that you can't give someone the key for a day and then take
it away when that day is over. As such, the security decays over time --
the longer the keypad is in use, the more people know how to get in.
More complicated electronic keypads have a variety of options for
dealing with this, but electronic keypads work only when the power is
on, and battery-powered locks have their own failure modes. Plus, far
too many people never bother to change the default entry code.
Keypads have other security failures, as well. I regularly see keypads
where four of the 10 buttons are more worn than the other six. They're
worn from use, of course, and instead of 10,000 possible entry codes, I
now have to try only 24.
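The arithmetic is simple: if the four worn digits each appear once in
the code, only their orderings remain to be tried. A quick check (the
worn digits here are hypothetical):

```python
from itertools import permutations

worn = "2580"  # hypothetical: the four buttons shiny from use

# If the code uses each worn digit exactly once, only the orderings remain.
candidates = {"".join(p) for p in permutations(worn)}

print(len(candidates))  # 24, down from the 10**4 = 10,000 possible codes
```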
Fingerprint readers are another technology, but there are many known
security problems with those. And there are operational problems, too:
They're hard to use in the cold or with sweaty hands; and leaving a key
with a neighbor to let the plumber in starts having a spy-versus-spy feel.
Some companies are going even further. Earlier this year, Schlage
launched a series of locks that can be opened either by a key, a
four-digit code, or the Internet. That's right: The lock is online. You
can send the lock SMS messages or talk to it via a website, and the lock
can send you messages when someone opens it -- or even when someone
tries to open it and fails.
Sounds nifty, but putting a lock on the Internet opens up a whole new
set of problems, none of which we fully understand. Even worse: Security
is only as strong as the weakest link. Schlage's system combines the
inherent "pickability" of a physical lock, the new vulnerabilities of
electronic keypads, and the hacking risk of the Internet. For most
applications, that's simply too much risk.
This essay previously appeared on DarkReading.com.
http://www.darkreading.com/blog/archives/2009/08/locks.html
A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/08/lockpicking_and.html
** *** ***** ******* *********** *************
Comments from Readers
There are thousands of comments -- many of them interesting -- on these
topics on my blog. Search for the story you want to comment on, and join in.
http://www.schneier.com/blog
** *** ***** ******* *********** *************
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing
summaries, analyses, insights, and commentaries on security: computer
and otherwise. You can subscribe, unsubscribe, or change your address
on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues
are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to
colleagues and friends who will find it valuable. Permission is also
granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the
best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies,"
and "Applied Cryptography," and an inventor of the Blowfish, Twofish,
Phelix, and Skein algorithms. He is the Chief Security Technology
Officer of BT BCSG, and is on the Board of Directors of the Electronic
Privacy Information Center (EPIC). He is a frequent writer and lecturer
on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not
necessarily those of BT.
Copyright (c) 2009 by Bruce Schneier.
----- End forwarded message -----
--
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
AT&T
Spaf and Dave, I was watching CNN where they were interviewing a
former CIA head, who was talking about the CIA whistleblower who was
fired a few months back.
He pointed out that while leaking any classified information to the
press is a definite no-no, there are plenty of avenues for
whistleblowers, such as approaching the Senate and House committees
that have oversight over intelligence. He also mentioned another
internal grievance-handling channel that could have been used.
These committees are bipartisan, and do take action more often than
not, according to what I heard on CNN (and based on what I have read
about these committees and how they work).
suresh
David Farber wrote:
> From: Gene Spafford <spaf(a)cerias.purdue.edu>
> Anyone with a security clearance, a military commission, or
> Federal office swears an oath to uphold the Constitution and the
> laws of the United States. If that person observes activity that
> he/she judges to be violations of the Constitution committed under
> color of authority, then how can the oath be upheld without
> possibly disclosing information? The choice between upholding the
> Constitution and complying with orders intended to cover up
> violations of law seems clear, although potentially fraught with
> personal danger.
-------------------------------------
You are subscribed as web(a)reportica.net
To manage your subscription, go to
http://v2.listbox.com/member/?listname=ip
Archives at: http://www.interesting-people.org/archives/interesting-
people/
--
Sheryl Coe
web(a)reportica.net
Reportica
www.Reportica.net
______________________
----- End forwarded message -----
--
AT&T [Whistleblower Protection]
Many do not know that intelligence employees are excluded by law from
whistleblower protection under the Patriot Act and previous law as well.
Until we have slogged through the fine print, we just don't know what
is legal anymore. Whistleblowers have had a very hard time being
'seen' by congressional committees. Whistleblowers like Sibel Edmonds
are treated like hot potatoes until they leak to the press and a
groundswell of concern forces congress to invite them into a closed
session. And then... not much happens. That's just where we find
ourselves in 2006 in America...
This is one key to the popularity of the relatively new blogger Glenn
Greenwald (author of new book, see his site below). He's just one
person, but he tries to do the legwork to actually read the laws,
such as the Patriot Act, that our 'lawmakers' pass without reading.
He's just one person, but that's the work that needs to be done.
- Sheryl Coe
Glenn Greenwald, of Unclaimed Territory
Original: http://glenngreenwald.blogspot.com/2006/05/no-need-for-congress-no-need-for.html
(2) The legal and constitutional issues, especially at first glance
and without doing research, reading cases, etc., are complicated and,
in the first instance, difficult to assess, at least for me. That was
also obviously true for Qwest's lawyers, which is why they requested
a court ruling and, when the administration refused, requested an
advisory opinion from DoJ.
But not everyone is burdened by these difficulties. Magically, hordes
of brilliant pro-Bush legal scholars have been able to determine
instantaneously -- as in, within hours of the program's disclosure --
that the program is completely legal and constitutional (just like so
many of them were able confidently to opine within hours of the
disclosure of the warrantless eavesdropping program that it, too, was
perfectly legal and constitutional).
Government Accountability Project
Original: http://www.whistleblower.org/content/press_detail.cfm?press_id=446
CIA Leaks Investigation Highlights Need for Whistleblower Law Reform
Washington, D.C. -- Today, the Government Accountability Project
proclaims that the CIA's public efforts to crack down on leaks of
classified information demonstrate the need for Congress to approve
meaningful whistleblower protections for employees who decide to
disclose classified evidence of government wrongdoing, misconduct and
illegality.
http://www.whistleblower.org/content/press_detail.cfm?press_id=446
From Russell Tice via DemocracyNow:
Original: http://www.democracynow.org/article.pl?sid=06/04/04/1420212&mode=thread&tid=25
And in the intelligence community, all of the whistleblower protection
laws pretty much exempt the intelligence community. So the
intelligence community can put forth their lip service about, 'Oh,
yeah, we want you to report waste, fraud, abuse,' or 'You shall
report suspicions of espionage,' but when they retaliate against you
for doing so, you pretty much have no recourse. I think a lot of
people don't realize that.
From Mike Whitney at ZNet:
Original: http://www.zmag.org/content/showarticle.cfm?ItemID=6848
Intelligence reform has been a stealth-project from the get-go. [...]
Instead of addressing the underlying issues, the new bill eviscerates
what's left of the Bill of Rights and hands over more power to Bush.
Now, Bush is free to hand-pick the men he wants for top-level
Intelligence positions without Senate confirmation - an invitation to
create his personal security apparatus without congressional
interference. The bill also decreases Congress' powers of oversight.
The new Intelligence Director can exempt his office from "audits and
investigations, and Congress will not receive reports from an
objective internal auditor." In other words, Congress has limited its
own access to critical information of how taxpayer dollars are being
spent. They've simply given up their role of checking for
presidential abuse.
The bill "eliminates provisions to ensure that it (Congress) receives
timely access to intelligence, and it also allows the White House's
Office of Management and Budget to screen testimony before the
Intelligence Director presents it to the Congress." So, now Bush can
either stonewall Congress entirely or just cherry-pick the tidbits he
doesn't mind handing over. The Congress is just paving the way for
even greater secrecy.
Needless to say, all the whistle-blower protections have been removed
from the new bill. In this new paradigm of Mafia-style governance the
only unpardonable offense is reporting the crimes of one's bosses.
Now, the Bush Fedayeen can purge the entire intelligence apparatus
and no one will be the wiser.
On 5/15/06, David Farber <dave(a)farber.net> wrote:
Begin forwarded message: