poison pill for leakers

Eugen Leitl eugen at leitl.org
Fri Jul 6 03:58:24 PDT 2012


http://www.wired.com/dangerroom/2012/07/fog-computing/all/

Feds Look to Fight Leaks With "Fog of Disinformation"

By Noah Shachtman July 3, 2012 | 6:30 am | Categories: Info War

Air Force One waits for U.S. President Barack Obama in the fog at London's
Stansted Airport, Friday, April 3, 2009. Photo: AP / Kirsty Wigglesworth

Pentagon-funded researchers have come up with a new plan for busting leakers:
Spot them by how they search, and then entice the secret-spillers with decoy
documents that will give them away.

Computer scientists call it "Fog Computing" -- a play on today's cloud
computing craze. And in a recent paper for Darpa, the Pentagon's premiere
research arm, researchers say they've built "a prototype for automatically
generating and distributing believable misinformation ... and then tracking
access and attempted misuse of it. We call this 'disinformation technology.'"

Two small problems: Some of the researchers' techniques are barely
distinguishable from spammers' tricks. And they could wind up undermining
trust among the nation's secret-keepers, rather than restoring it.

The Fog Computing project is part of a broader assault on so-called "insider
threats," launched by Darpa in 2010 after the WikiLeaks imbroglio. Today,
Washington is gripped by another frenzy over leaks -- this time over
disclosures about U.S. cyber sabotage and drone warfare programs. But the
reactions to these leaks have been schizophrenic, to put it generously. The
nation's top spy says America's intelligence agencies will be strapping
suspected leakers to lie detectors -- even though the polygraph machines are
famously flawed. An investigation into who spilled secrets about the Stuxnet
cyber weapon and the drone "kill list" has already ensnared hundreds of
officials -- even though the reporters who disclosed the info patrolled the
halls of power with the White House's blessing.

That leaves electronic tracking as the best means of shutting leakers down.
And while you can be sure that counterintelligence and Justice Department
officials are going through the e-mails and phone calls of suspected leakers,
such methods have their limitations. Hence the interest in Fog Computing.

An Air Force poster, warning troops to maintain operational security, or
"OPSEC." Courtesy USAF

The first goal of Fog Computing is to bury potentially valuable information
in a pile of worthless data, making it harder for a leaker to figure out what
to disclose.

"Imagine if some chemist invented some new formula for whatever that was of
great value, growing hair, and they then placed the true [formula] in the
midst of a hundred bogus ones," explains Salvatore Stolfo, the Columbia
University computer science professor who coined the Fog Computing term.
"Then anybody who steals the set of documents would have to test each formula
to see which one actually works. It raises the bar against the adversary.
They may not really get what they're trying to steal."
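
Here's a minimal Python sketch of that scheme (the function names and the
decoy generator are illustrative, not taken from the Darpa paper): one
genuine document is shuffled into a pile of machine-generated fakes, and
only the defender keeps the index.

    import random
    import secrets

    def bury_in_decoys(real_doc, make_decoy, n_decoys=100):
        """Hide one genuine document among n_decoys plausible fakes.

        make_decoy() must return a believable bogus variant; the caller
        keeps the returned index secret, so only someone who already
        knows it can pick out the real document."""
        docs = [make_decoy() for _ in range(n_decoys)]
        real_index = secrets.randbelow(len(docs) + 1)
        docs.insert(real_index, real_doc)
        return docs, real_index

    # Illustrative use: one "hair-growth formula" among a hundred bogus ones.
    fake = lambda: "Formula: %.2fg compound-%d" % (
        random.uniform(0.1, 9.9), random.randint(1, 999))
    documents, secret_index = bury_in_decoys("Formula: 4.20g minoxidil", fake)
    # An adversary who steals `documents` must test every formula;
    # the defender records `secret_index` separately.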

The next step: Track those decoy docs as they cross the firewall. For that,
Stolfo and his colleagues embed documents with covert beacons called "web
bugs," which can monitor users' activities without their knowledge. They're
popular with online ad networks. "When rendered as HTML, a web bug triggers a
server update which allows the sender to note when and where the web bug was
viewed," the researchers write. "Typically they will be embedded in the HTML
portion of an email message as a non-visible white on white image, but they
have also been demonstrated in other forms such as Microsoft Word, Excel, and
PowerPoint documents."
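
The mechanism is easy to illustrate. In the sketch below (the logging host
beacon.example.org and the token scheme are my own assumptions, not the
researchers' implementation), each decoy copy gets a unique, invisible
one-pixel image; any client that renders the HTML fetches that image,
telling the tracker when and from where the copy was opened.

    import http.server
    import uuid

    TRACKER = "https://beacon.example.org"  # hypothetical logging host

    def bug_document(html_body):
        """Embed a per-copy web bug in an HTML document. Returns the
        bugged HTML and the unique token that identifies this copy in
        the tracker's access logs."""
        token = uuid.uuid4().hex
        # White-on-white, 1x1 pixel: invisible when rendered, but the
        # mail client or browser still requests it from the tracker.
        bug = ('<img src="%s/b/%s.png" width="1" height="1" '
               'style="border:0">' % (TRACKER, token))
        return html_body.replace("</body>", bug + "</body>"), token

    class BeaconHandler(http.server.BaseHTTPRequestHandler):
        """Tracker side: log which token was fetched, from where, and when."""
        def do_GET(self):
            print("decoy opened: path=%s ip=%s"
                  % (self.path, self.client_address[0]))
            self.send_response(200)
            self.send_header("Content-Type", "image/png")
            self.end_headers()
            # A real tracker would also return an actual 1x1 PNG body here.

    # To run the tracker:
    #   http.server.HTTPServer(("", 8080), BeaconHandler).serve_forever()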

"Unfortunately, they have been most closely associated with unscrupulous
operators, such as spammers, virus writers, and spyware authors who have used
them to violate users' privacy," the researchers admit. "Our work leverages
the same ideas, but extends them to other document classes and is more
sophisticated in the methods used to draw attention. In addition, our targets
are insiders who should have no expectation of privacy on a system they
violate."

Steven Aftergood, who studies classification policies for the Federation of
American Scientists, wonders whether the whole approach isn't a little off
base, given Washington's funhouse system for determining what should be
secret. In June, for example, the National Security Agency refused to
disclose how many Americans it had wiretapped without a warrant. The reason?
It would violate Americans' privacy to say so.

"If only researchers devoted as much ingenuity to combating spurious secrecy
and needless classification. Shrinking the universe of secret information
would be a better way to simplify the task of securing the remainder,"
Aftergood tells Danger Room in an e-mail. "The Darpa approach seems to be
based on an assumption that whatever is classified is properly classified and
that leaks may occur randomly throughout the system. But neither of those
assumptions is likely to be true."

Stolfo, for his part, insists that he's merely doing "basic research," and
nothing Pentagon-specific. What Darpa, the Office of Naval Research, and
other military technology organizations do with the decoy work is "not my
area of expertise," he adds. However, Stolfo has set up a firm, Allure
Security Technology Inc., "to create industrial strength software a company
can actually use," as he puts it. That software should be ready to implement
by the end of the year.

It will include more than bugged documents. Stolfo and his colleagues have
also been working on what they call a "misbehavior detection" system. It
includes some standard network security tools, like an intrusion detection
system that watches out for unauthorized exfiltration of data. And it has
some rather non-standard components -- like an alert if a person searches his
computer for something surprising.

Pfc. Bradley Manning is escorted to a courthouse in December 2011. His
alleged disclosures to WikiLeaks kickstarted Pentagon interest in catching
so-called "insider threats." Photo: Patrick Semansky/AP

"Each user searches their own file system in a unique manner. They may use
only a few specific system functions to find what they are looking for.
Furthermore, it is unlikely a masquerader will have full knowledge of the
victim user's file system and hence may search wider and deeper and in a less
targeted manner than would the victim user. Hence, we believe search behavior
is a viable indicator for detecting malicious intentions," Stolfo and his
colleagues write.
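
A toy version of that intuition, with the caveat that the features and
thresholds below are my own illustration rather than the paper's actual
model: summarize a user's normal search activity per time window, then flag
windows where several features at once sit far outside the baseline.

    from statistics import mean, stdev

    def fit_baseline(windows):
        """Summarize a user's normal search behavior. Each window is a
        dict of features for one time slice, e.g.
        {"searches": 3, "dirs_touched": 2, "depth": 1}."""
        baseline = {}
        for feature in windows[0]:
            values = [w[feature] for w in windows]
            baseline[feature] = (mean(values), stdev(values))
        return baseline

    def is_masquerader(window, baseline, threshold=3.0):
        """Flag a window far from the user's norm. A masquerader who
        doesn't know the victim's file system tends to search wider and
        deeper, pushing several features past the threshold (measured
        in standard deviations) at once."""
        outliers = sum(
            1 for feature, (mu, sigma) in baseline.items()
            if sigma > 0 and abs(window[feature] - mu) / sigma > threshold)
        return outliers >= 2  # require agreement across features

    # Illustrative use with fabricated numbers:
    normal = [{"searches": 3, "dirs_touched": 2, "depth": 1},
              {"searches": 4, "dirs_touched": 3, "depth": 2},
              {"searches": 2, "dirs_touched": 2, "depth": 1}]
    profile = fit_baseline(normal)
    print(is_masquerader({"searches": 40, "dirs_touched": 25, "depth": 9},
                         profile))  # True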

In their initial experiments, the researchers claim, they were able to
"model all search actions of a user" in a mere 10 seconds. They then gave 14
students unlimited access to the same file system for 15 minutes each. The
students were told to comb the machine for anything that might be used for
financial gain. The researchers say they caught all 14 searchers. "We can
detect all masquerader activity with 100 percent accuracy, with a false
positive rate of 0.1 percent."

Grad students may be a little easier to model than national security
professionals, who have to radically alter their search patterns in the wake
of major events. Consider the elevated interest in al-Qaida after 9/11, or
the desire to know more about WikiLeaks after Bradley Manning allegedly
disclosed hundreds of thousands of documents to the group.

Other Darpa-backed attempts to find a signature for squirrelly behavior are
either just getting underway, or haven't fared particularly well. In
December, the agency handed out $9 million to a Georgia Tech-led
consortium with the goal of mining 250 million e-mails, IMs and file
transfers a day for potential leakers. The following month, a
Pentagon-funded research paper (.pdf) noted the promise of "keystroke
dynamics -- technology to distinguish people based on their typing rhythms --
[which] could revolutionize insider-threat detection." Well, in theory. In
practice, such systems' "error rates vary from 0 percent to 63 percent,
depending on the user. Impostors triple their chance of evading detection if
they touch type."
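
For what it's worth, the raw signal those systems use is simple to state:
per-user timing statistics over keystrokes. A rough sketch, with the
features and tolerance invented for illustration rather than drawn from the
cited paper:

    from statistics import mean

    def dwell_and_flight(events):
        """Extract classic keystroke-dynamics features from a typing
        sample. events: list of (key, press_time, release_time) tuples
        in seconds. Dwell = how long each key is held down; flight =
        the gap between releasing one key and pressing the next."""
        dwells = [release - press for _, press, release in events]
        flights = [events[i + 1][1] - events[i][2]
                   for i in range(len(events) - 1)]
        return mean(dwells), mean(flights)

    def same_typist(sample_a, sample_b, tolerance=0.05):
        """Crude match: accept if mean dwell and flight times agree to
        within `tolerance` seconds. Real systems model per-key timing
        distributions, and even then, as the article notes, error
        rates vary wildly from user to user."""
        da, fa = dwell_and_flight(sample_a)
        db, fb = dwell_and_flight(sample_b)
        return abs(da - db) < tolerance and abs(fa - fb) < tolerance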

For more reliable results, Stolfo aims to marry his misbehavior-modeling with
the decoy documents and with other so-called "enticing information." Stolfo
and his colleagues also use "honeytokens" -- small strings of tempting
information, like online bank accounts or server passwords -- as bait. They'll
get a one-time credit card number, link it to a PayPal account, and see if
any charges are mysteriously rung up. They'll generate a Gmail account, and
see who starts spamming.

Most intriguing, perhaps, is Stolfo's suggestion in a separate paper (.pdf)
to fill up social networks with decoy accounts -- and inject poisonous
information into people's otherwise benign social network profiles.

"Think of advanced privacy settings [in sites like Facebook] where I choose
to include my real data to my closest friends [but] everybody else gets
access to a different profile with information that is bogus. And I would be
alerted when bad guys try to get that info about me," Stolfo tells Danger
Room. "This is a way to create fog so that now you no longer know the truth
about a person through these artificial avatars or artificial profiles."
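
A bare-bones rendering of that scheme, with purely illustrative field names
and trust model: trusted friends get the real profile, everyone else gets
the decoy, and the owner is alerted whenever the bogus version is pulled.

    REAL_PROFILE = {"employer": "Acme Corp", "city": "Brooklyn"}
    DECOY_PROFILE = {"employer": "Globex Inc", "city": "Scranton"}  # bogus
    TRUSTED = {"alice", "bob"}  # closest friends see the truth

    def view_profile(viewer):
        """Serve the real profile only to trusted viewers; everyone
        else gets the decoy, and the owner learns that bogus data was
        pulled and by whom."""
        if viewer in TRUSTED:
            return REAL_PROFILE
        print("ALERT: %s retrieved the decoy profile" % viewer)
        return DECOY_PROFILE

    print(view_profile("alice"))    # real data, no alert
    print(view_profile("scraper"))  # decoy data, plus an alert to the owner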

So sure, Fog Computing could eventually become a way to keep those Facebooked
pictures of your cat free from prying eyes. If you're in the U.S. government,
on the other hand, the system could be a method for hiding the truth about
something far more substantive.

Noah Shachtman

Noah Shachtman is a contributing editor at Wired magazine, a nonresident
fellow at the Brookings Institution and the editor of this little blog right
here.





