cypherpunks
January 2019
- 30 participants
- 195 discussions
26 Jan '19
https://archive.org/details/DarkSideOfTheKremlin
infohash:305a8569f0728290a171fb843a02c0cd21db9a97
infohash:fe604c9b2698dc48df9a35d31dab143c1f4eb277
:)
https://twitter.com/NatSecGeek
https://twitter.com/DDoSecrets
http://ddosecretspzwfy7.onion/
Distributed Denial of Secrets (“DDOS”) is a transparency collective,
aimed at enabling the free transmission of data in the public
interest. We aim to avoid any political, corporate or personal
leanings, and to act as a simple beacon of available information. As a
collective, we do not support any cause, idea or message beyond
ensuring that information is available to those who need it most - the
people.
While we are happy to serve as an index to data of all varieties, all of it must meet the following two criteria:
Is the data of public interest?
Can a prima facie case be made for the veracity of the contents?
Unless already public, or as authorized by our source, we do not disclose the providing party of any received information, and we are fully committed to ensuring their anonymity from all threats. We can never advise on the perfect procedure for transferring data to us or anyone else, but we can act as a shield for that process and share advice from our experience. Often our role is not just to make data available, but to act as an anonymity guard to pass data to journalists and other figures best positioned to interrogate it.
Tor: Cross Jurisdiction Traffic Monitor and Circuit Reconstruction, DeepCorr Flow AI, App DeAnon
by grarpamp 26 Jan '19
https://arxiv.org/abs/1808.09237
We model and analyze passive adversaries that monitor Tor traffic crossing the border of a jurisdiction the adversary controls. We show that a single adversary is able to connect incoming and outgoing traffic at its border, tracking the traffic, and that cooperating adversaries are able to reconstruct parts of the Tor network, revealing user-server relationships. In our analysis we created two algorithms to estimate the capabilities of the adversaries. The first generates Tor-like traffic and the second analyzes and reconstructs the simulated data.
https://arxiv.org/pdf/1808.07285
Flow correlation is the core technique used in a multitude of deanonymization attacks on Tor. Despite the importance of flow correlation attacks on Tor, existing flow correlation techniques are considered to be ineffective and unreliable in linking Tor flows when applied at a large scale, i.e., they impose high false positive error rates or require impractically long flow observations to be able to make reliable correlations. In this paper, we show that, unfortunately, flow correlation attacks can be conducted on Tor traffic with drastically higher accuracies than before by leveraging emerging learning mechanisms. We particularly design a system, called DeepCorr, that outperforms the state-of-the-art by significant margins in correlating Tor connections. DeepCorr leverages an advanced deep learning architecture to learn a flow correlation function tailored to Tor's complex network; this is in contrast to previous works' use of generic statistical correlation metrics to correlate Tor flows. We show that with moderate learning, DeepCorr can correlate Tor connections (and therefore break its anonymity) with accuracies significantly higher than existing algorithms, and using substantially shorter lengths of flow observations. For instance, by collecting only about 900 packets of each target Tor flow (roughly 900KB of Tor data), DeepCorr provides a flow correlation accuracy of 96% compared to 4% by the state-of-the-art system of RAPTOR using the same exact setting.
We hope that our work demonstrates the escalating threat of flow correlation attacks on Tor given recent advances in learning algorithms, calling for the timely deployment of effective countermeasures by the Tor community.
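For intuition, here is a minimal sketch (assuming numpy; this is not DeepCorr itself) of the kind of generic statistical correlation metric the paper says it outperforms: compare the inter-packet delay sequences of a candidate entry flow and exit flow.

```python
# Toy statistical flow-correlation baseline (illustrative only; the
# real attacks operate on noisier features and at far larger scale).
import numpy as np

def flow_correlation(ts_entry, ts_exit):
    """Pearson correlation of inter-packet delays of two flows.
    Values near 1.0 suggest both observations carry the same traffic."""
    a, b = np.diff(ts_entry), np.diff(ts_exit)
    n = min(len(a), len(b))  # trim to a common observation length
    if n < 2:
        return 0.0
    return float(np.corrcoef(a[:n], b[:n])[0, 1])

# A flow and a latency-shifted, jittered copy of it correlate strongly;
# an unrelated flow does not.
rng = np.random.default_rng(0)
entry = np.cumsum(rng.exponential(0.05, size=900))    # ~900 packets
exit_ = entry + 0.2 + rng.normal(0, 0.002, size=900)  # latency + jitter
print(flow_correlation(entry, exit_))                 # close to 1.0
print(flow_correlation(entry, np.cumsum(rng.exponential(0.05, size=900))))
```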
https://arxiv.org/pdf/1901.04434
In this work we show that Tor is vulnerable to app deanonymization attacks on Android devices through network traffic analysis. For this purpose, we describe a general methodology for performing an attack that deanonymizes the apps running on a target smartphone using Tor, which is the victim of the attack. Then, we discuss a proof-of-concept, implementing the methodology, that shows how the attack can be performed in practice and allows the achievable deanonymization accuracy to be assessed. While attacks against Tor anonymity have already gained considerable attention in the context of website fingerprinting in desktop environments, to the best of our knowledge this is the first work that highlights Tor's vulnerability to app deanonymization attacks on Android devices. In our experiments we achieved an accuracy of 97%
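The abstract does not spell out the pipeline; as a hedged illustration of traffic fingerprinting in general, one can represent each flow by a packet-size histogram and train an off-the-shelf classifier. The feature choice and classifier below are assumptions (scikit-learn), not the paper's method.

```python
# Generic app-fingerprinting sketch: label flows by packet-size profile.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

BINS = np.linspace(0, 1500, 31)   # 30 size bins up to a typical MTU

def features(packet_sizes):
    """Normalized packet-size histogram for one captured flow."""
    hist, _ = np.histogram(packet_sizes, bins=BINS)
    return hist / max(hist.sum(), 1)

def train(flows):
    """flows: iterable of (list_of_packet_sizes, app_label) pairs."""
    X = np.array([features(sizes) for sizes, _ in flows])
    y = [label for _, label in flows]
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```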
https://arxiv.org/abs/1809.09086
Tor and I2P are well-known anonymity networks used by many individuals to protect their online privacy and anonymity. Tor's centralized directory services facilitate the understanding of the Tor network, as well as the measurement and visualization of its structure through the Tor Metrics project. In contrast, I2P does not rely on centralized directory servers, and thus obtaining a complete view of the network is challenging. In this work, we conduct an empirical study of the I2P network, in which we measure properties including population, churn rate, router type, and the geographic distribution of I2P peers. We find that there are currently around 32K active I2P peers in the network on a daily basis. Of these peers, 14K are located behind NAT or firewalls.
Using the collected network data, we examine the blocking resistance of I2P against a censor that wants to prevent access to I2P using address-based blocking techniques. Despite the decentralized characteristics of I2P, we discover that a censor can block more than 95% of peer IP addresses known by a stable I2P client by operating only 10 routers in the network. This amounts to severe network impairment: a blocking rate of more than 70% is enough to cause significant latency in web browsing activities, while blocking more than 90% of peer IP addresses can make the network unusable. Finally, we discuss the security consequences of the network being blocked, and directions for potential approaches to make I2P more resistant to blocking.
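As a rough back-of-the-envelope model of that address-harvesting result (not the paper's measurement methodology; the per-router sample size below is an assumption), coverage grows quickly with the number of censor-run routers:

```python
# Toy model: fraction of peer IPs a censor learns with k routers,
# assuming each router independently sees a random sample of the netDB.
import random

N_PEERS = 32_000            # ~daily active peers measured in the paper
SEEN_PER_ROUTER = 12_000    # assumed netDB entries one router observes

def coverage(k_routers, trials=20):
    total = 0.0
    for _ in range(trials):
        seen = set()
        for _ in range(k_routers):
            seen.update(random.sample(range(N_PEERS), SEEN_PER_ROUTER))
        total += len(seen) / N_PEERS
    return total / trials

for k in (1, 3, 10):
    print(k, "routers ->", f"{coverage(k):.0%} of peer IPs")
```

Under these assumptions, ten routers approach the >95% coverage the paper reports; the real study measures this empirically rather than assuming independent uniform samples.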
Ars Technica: Google asks Supreme Court to overrule disastrous ruling on API copyrights
by jim bell 25 Jan '19
Ars Technica: Google asks Supreme Court to overrule disastrous ruling on API copyrights.
https://arstechnica.com/tech-policy/2019/01/google-asks-supreme-court-to-ov…
25 Jan '19
https://www.youtube.com/watch?v=W28cg4aYf6E
https://twitter.com/officialmcafee/status/1087772979730239490
https://twitter.com/officialmcafee/status/1087837064006172678
https://twitter.com/officialmcafee/status/1087847768318717953
https://twitter.com/officialmcafee/status/1087879069805486081
https://twitter.com/officialmcafee/status/1087891254875168768
https://twitter.com/officialmcafee/status/1087968125054844928
https://twitter.com/officialmcafee/status/1088167776651563010
https://www.youtube.com/watch?v=GFkc122LCxY
https://www.youtube.com/watch?v=Yf-mCv0RBH4
https://www.youtube.com/watch?v=bb9s5S2NzoA
https://www.youtube.com/watch?v=8SdeiT0_FWM
https://www.youtube.com/watch?v=JlR7CuE0Us4
https://www.youtube.com/watch?v=JNK8Bp9sZOA
https://www.youtube.com/watch?v=KMjCie-fTRo
https://www.youtube.com/watch?v=PjQ-AfRNG18
https://www.youtube.com/watch?v=MG0bAaK7p9s
https://www.youtube.com/watch?v=hx3yTWkN3fI
https://www.youtube.com/watch?v=n9M69LpV2I4
https://www.youtube.com/watch?v=D-Mppo9XoPU
https://www.youtube.com/watch?v=_EdXuazXKBA
https://www.youtube.com/watch?v=5GmwSgCfn38
https://www.youtube.com/watch?v=HqI0jbKGaT8
https://www.youtube.com/watch?v=E9CSx7Hjbg0
https://www.youtube.com/watch?v=W28cg4aYf6E
https://loggiaonfire.com/magazine/the_entire_mcafee_2020_campaign_platform_…
https://loggiaonfire.com/magazine/10_things_anyone_can_do_to_help_mcafee_20…
http://www.whoismcafee.com/
https://www.facebook.com/mcafeelp
https://www.lp.org/membership/
https://mcafee2020hq.com/
https://www.youtube.com/watch?v=HSV5bZGwDYY
Send that eviction notice!
Re: [tor-talk] [Cryptography] Implementing full Internet IPv6 end-to-end encryption based on Cryptographically Generated Address
by grarpamp 24 Jan '19
On 1/24/19, Alec Muffett <alec.muffett(a)gmail.com> wrote:
> On Thu, 24 Jan 2019 at 19:33, grarpamp <grarpamp(a)gmail.com> wrote:
>
>> As readers may be aware,
>> Tor has an interesting capability via OnionCat and OnionVPN
>> ...
>> There's an open project for anyone who wants it...
>> To bring IPv6 over v3 onions to Tor.
> I'm wondering: could you please expand upon how this compares in importance
> to simply promoting the native adoption of Tor v3 Onion Networking, amongst
> the community of tool-developers and tool-users whom you envision the above
> solution (OnionCat/OnionVPN/IP-routing) benefitting?
As before, yes, v3 is a great update, useful for many, and indeed properly set for users by default. But not for all users, as some may explicitly choose to trade the features of one version, say v3, for long-term capabilities still needed from, or only present for the future in, another version, say v2.
v3 should be "promoted", yet that shouldn't come at the expense of, or to the exclusion of, anything else, nor should anything else diminish it. Modularity works there.
And since v3 is now the default, the low cost work
of "promoting" it is now more or less done.
So perhaps v3 can be set aside on its own for now
to consider what "native" might mean in larger context...
Killing v2, while at the same time not porting the curious IPv6 feature to vN, would be a foot-shooting regression upon the future. While an easy way out for some to propose, it's probably not the right approach.
Maybe yes it's about tool devs and users... about apps... how to see folks using crypto, privacy, currency, overlays, messaging, etc, more or less seamlessly, being protected from surveillance and censorship, and enjoying free speech and scalable production-class networks... do you...
a) Wait forever until some critical number or combination
of overlay networks and new apps, necessarily specifically
exclusively and natively written for those nets alone...
is reached, having mass effect at that point...
or
b) Try to provide extension API's for your overlay of the
year that works with whatever apps people are already using
today, and provide interop API's between overlays, until some
overlays prove resistant enough for general usage tomorrow.
(Recall that tor still emits an adversary warning on startup
and has entire classes of unsolved weaknesses, as
with any other overlay today.)
Or elements of both and add win from both ends.
Or something else or more.
Historically, most overlays have failed at (a) because native adoption never widely happened, and because of failing at (b), along with not yet having sufficient attack resistance, and plain old tunnel-vision competition in narrow and easier fields (say, delivering a message)... and so perhaps are missing out on some adoption wins in areas where they are actually suitable enough.
IPv6 is a potential already "native" area here...
Pick any list of "end user" or "server" applications, generally
any IP capable thing out there that people plug into the internet
(these days most everything is becoming IPv6 capable so let's
just forget about IPv4). Well, the millions of apps out there all
speak IP, and do not speak end to end bidirectional onion or i2p
or anything else, let alone ride on or utilize any auth or uniqueness
expectations therein.
Yes, various LD_PRELOAD and packet-filter torifying methods cover some things as a hack.
However the problem is most apparent with apps that include addressing info in their data, not just use it for binding the network stack. Or that use anything other than TCP.
Or that need to route, P2P, DHT, etc. Features break or the app
simply won't work. Bittorrent is one such very popular application,
many more exist, or could exist. Many of which might
need to make use of addressing in data to scale, or UDP
for efficiency or mixing.
Further, trying to plug apps into these overlays is complex, and thereby off-putting and risky for all but expert users and admins.
Now if you had some simple range of IPv6 for them to bind
to and filter, or even an AF_WIDE, that becomes a lot easier
to adopt and manage as well.
And of course AF_WIDE, or AF_OVERLAY, though it would
require code in all apps, would be a third party supportable
modular library once plugged in as a compile option to the
thousands of popular apps out there.
Engineer something like that and magic starts to happen.
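For concreteness, the OnionCat mapping referenced at the top of the thread is tiny: a v2 onion address is 16 base32 characters, i.e. exactly 80 bits, which drop into the host part of OnionCat's fd87:d87e:eb43::/48 prefix. A sketch follows (the example hostname is illustrative); v3 onions carry full ed25519 keys and no longer fit, which is exactly the open porting problem mentioned above.

```python
# Sketch of the OnionCat v2-onion -> IPv6 mapping (fd87:d87e:eb43::/48).
import base64, ipaddress

ONIONCAT_PREFIX = 0xfd87d87eeb43 << 80   # 48-bit prefix, 80-bit host part

def onion_to_ipv6(onion_hostname):
    label = onion_hostname.split('.')[0].upper()
    if len(label) != 16:
        raise ValueError("only 80-bit v2 onion IDs fit into IPv6")
    ident = int.from_bytes(base64.b32decode(label), 'big')  # 80 bits
    return ipaddress.IPv6Address(ONIONCAT_PREFIX | ident)

print(onion_to_ipv6('duskgytldkxiuqc6.onion'))  # illustrative hostname
```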
Or, since IPv6, crypto, networks, apps, etc are decades old now, gather today's knowledge for a try at starting from the top again.
Perhaps a larger aspect is... everyone in the space should probably be thinking about these things. Are these tools and overlays just some ad-hoc, complex, limited-usage, highly optimized things for geeks, activists, particular communities, etc? 100k's of users or less, with very limited interop capabilities.
Or is something larger and more important being built?
500M+ users, new RFCs and hardware-level support everywhere, universal link-level full-time crypto and padding: are such things even being designed? How to fit a picture with, or evolve, today's mainstream app and use models? How to get there? Where is there? When?
Someone should start a conference, not on what attendees and projects are doing, but maybe on working to discover the meta of where much of the space should be heading, perhaps even together.
24 Jan '19
https://theintercept.com/2019/01/24/computer-supply-chain-attacks/
Everybody Does It: The Messy Truth About Infiltrating Computer Supply Chains
[Micah Lee](https://theintercept.com/staff/micah-lee/), [Henrik Moltke](https://theintercept.com/staff/moltke/)
January 24 2019, 6:55 p.m.
In October, Bloomberg Businessweek published an alarming [story](https://www.bloomberg.com/news/features/2018-10-04/the-big-hack-how-…): Operatives working for China’s People’s Liberation Army had secretly implanted microchips into motherboards made in China and sold by U.S.-based Supermicro. This allegedly gave Chinese spies clandestine access to servers belonging to over 30 American companies, including Apple, Amazon, and various government suppliers, in an operation known as a “supply chain attack,” in which malicious hardware or software is inserted into products before they are shipped to surveillance targets.
Bloomberg’s report, based on 17 anonymous sources, including “six current and former senior national security officials,” began to crumble soon after publication as key parties issued swift and unequivocal denials. Apple [said](https://www.apple.com/newsroom/2018/10/what-businessweek-got-wrong-ab… that “there is no truth” to the claim that it discovered malicious chips in its servers. Amazon [said](https://aws.amazon.com/blogs/security/setting-the-record-straight-on-… the Bloomberg report had “so many inaccuracies … as it relates to Amazon that they’re hard to count.” Supermicro [stated](https://www.supermicro.com/newsroom/pressreleases/2018/press181004_… it never heard from customers about any malicious chips or found any, including in an audit it [hired](https://techcrunch.com/2018/12/11/supermicro-says-investigation-firm… another company to conduct. Spokespeople for the Department of Homeland Security and the U.K.’s National Cyber Security Centre [said](https://www.reuters.com/article/us-china-cyber-dhs/dhs-says-no-reason… they saw no reason to doubt the companies’ denials. Two named sources in the story have publicly stated that they’re skeptical of its conclusions.
But while Bloomberg’s story may well be completely (or partly) wrong, the danger of China compromising hardware supply chains is very real, judging from classified intelligence documents. U.S. spy agencies were warned about the threat in stark terms nearly a decade ago and even assessed that China was adept at corrupting the software bundled closest to a computer’s hardware at the factory, threatening some of the U.S. government’s most sensitive machines, according to documents provided by National Security Agency whistleblower Edward Snowden. The documents also detail how the U.S. and its allies have themselves systematically targeted and subverted tech supply chains, with the NSA conducting its own such operations, including in China, in partnership with the CIA and other intelligence agencies. The documents also disclose supply chain operations by German and French intelligence.
What’s clear is that supply chain attacks are a well-established, if underappreciated, method of surveillance — and much work remains to be done to secure computing devices from this type of compromise.
“An increasing number of actors are seeking the capability to target … supply chains and other components of the U.S. information infrastructure,” the intelligence community stated in a secret 2009 report. “Intelligence reporting provides only limited information on efforts to compromise supply chains, in large part because we do not have the access or technology in place necessary for reliable detection of such operations.”
Nicholas Weaver, a security researcher of the International Computer Science Institute, affiliated with the University of California, Berkeley, told The Intercept, “The Bloomberg/SuperMicro story was so disturbing because an attack as described would have worked, even if at this point we can safely conclude that the Bloomberg story itself is bovine excrement. And now if I’m China, I’d be thinking, ‘I’m doing the time, might as well do the crime!’”
While the Bloomberg story painted a dramatic picture, the one that emerges from the Snowden documents is fragmented and incomplete — but grounded in the deep intelligence resources available to the U.S. government. This story is an attempt to summarize what that material has to say about supply chain attacks, from undisclosed documents we’re publishing for the first time today, documents that have been published already, and documents that have been published only in part or with little to no editorial commentary. The documents we draw on were written between 2007 and 2013; supply chain vulnerabilities have apparently been a problem for a long time.
None of the material reflects directly on Bloomberg Businessweek’s specific claims. The publication has not commented on the controversy around its reporting beyond this statement: “Bloomberg Businessweek’s investigation is the result of more than a year of reporting, during which we conducted more than 100 interviews. Seventeen individual sources, including government officials and insiders at the companies, confirmed the manipulation of hardware and other elements of the attacks. We also published three companies’ full statements, as well as a statement from China’s Ministry of Foreign Affairs. We stand by our story and are confident in our reporting and sources.”
Workers build smartphone chip component circuits at the Oppo factory in Dongguan, China, on May 8, 2017.
Photo: Nicolas Asfouri/AFP/Getty Images
U.S. “Critical Infrastructure” Is Vulnerable to Supply Chain Attacks
The U.S. government as a general matter takes seriously the possibility of supply chain tampering, and of China in particular conducting such meddling, including during manufacturing, according to government documents.
A classified 2011 Department of Defense “[Strategy for Operating in Cyberspace](https://theintercept.com/document/2019/01/23/dod-2011-strategy-… refers to supply chain vulnerabilities as one of the “central aspects of the cyber threat,” adding that the U.S.’s reliance on foreign factories and suppliers “provides broad opportunities for foreign actors to subvert and interdict U.S. supply chains at points of design, manufacture, service, distribution, and disposal.”
Chinese hardware providers could position themselves in U.S. industry to compromise “critical infrastructure upon which DoD depends,” according to the document.
Another classified document, a [2009 National Intelligence Estimate](https://theintercept.com/document/2019/01/23/national-intelligenc… about “The Global Cyber Threat to the US Information Infrastructure,” assessed with “high confidence” that there was an increased “potential for persistent, stealthy subversions” in technology supply chains due to globalization and with “moderate confidence” that this would occur in part by tampering with manufacturing and by “taking advantage of insiders.” Such “resource-intensive tactics” would be adopted, the document claimed, to counter additional security on classified U.S. networks.
Each National Intelligence Estimate focuses on a particular issue and represents the collective judgment of all U.S. intelligence agencies, as distilled by the director of national intelligence. The 2009 NIE singled out China and Russia as “the greatest cyber threats” to the U.S. and its allies, saying that Russia had the ability to conduct supply chain operations and that China was conducting “insider access, close access, remote access, and probably supply chain operations.” In a section devoted to “Outside Reviewers’ Comments,” one such reviewer, a former executive at a maker of communications hardware, suggested that the intelligence community look more closely at the Chinese supply chain. The reviewer added:
> The deep influence of the Chinese government on their electronics manufacturers, the increasing complexity and sophistication of these products, and their pervasive presence in global communications networks increases the likelihood of the subtle compromise — perhaps a systemic but deniable compromise — of these products.
The NIE even flagged supply chain attacks as a threat to the integrity of electronic voting machines, since the machines are “subject to many of the same vulnerabilities as other computers,” although it noted that, at the time in 2009, U.S. intelligence was not aware of any attempts “to use cyber attacks to affect U.S. elections.”
Beyond mostly vague concerns involving Russia and China, the U.S. intelligence community did not know what to make of the vulnerability of computer supply chains. Conducting such attacks was “difficult and resource-intensive,” according to the NIE, but beyond that, it had little information to understand the scope of the problem: “The unwillingness of victims and investigating agencies to report incidents” and the lack of technology to detect tampering meant that “considerable uncertainty overshadows our assessment of the threat posed by supply chain operations,” the NIE said.
A section within the 2011 Department of Defense Strategy for Operating in Cyberspace is devoted to the risk of supply chain attacks. This section describes a strategy to “manage and mitigate the risk of untrustworthy technology used by the telecommunications sector,” in part by bolstering U.S. manufacturing, to be fully operational by 2016, two years after Bloomberg said the Supermicro supply chain attack occurred. It’s not clear if the strategy ever became operational; the Defense Department, which [published](https://csrc.nist.gov/presentations/2011/department-of-defense-s… an unclassified version of the same document, did not respond to a request for comment. But the 2009 NIE said that “exclusion of foreign software and hardware from sensitive networks and applications is already extremely difficult” and that even if an exclusion policy were successful “opportunities for subversion will still exist through front companies in the United States and adversary use of insider access in US companies.”
A third document, a [page](https://theintercept.com/document/2019/01/23/intellipedia-supply-chai… on “Supply Chain Cyber Threats” from Intellipedia, an internal wiki for the U.S. intelligence community, included classified passages echoing similar worries about supply chains. A snapshot of the page from 2012 included a section, attributed to the CIA, saying that “the specter of computer hardware subversion causing weapons to fail in times of crisis, or secretly corrupting crucial data, is a growing concern. Computer chips are increasingly complex and subtle modifications made in design or manufacturing processes could be made impossible to detect with the practical means currently available.” Another passage, attributed to the Defense Intelligence Agency, flagged application servers, routers, and switches as among the hardware likely “vulnerable to the global supply chain threat” and added that “supply chain concerns will be exacerbated as U.S. providers of cybersecurity products and services are acquired by foreign firms.”
A 2012 snapshot of a different [Intellipedia page](https://theintercept.com/document/2019/01/23/intellipedia-air-gapped-… listed supply chain attacks first among threats to so-called air-gapped computers, which are kept isolated from the internet and are used by spy agencies to handle particularly sensitive information. The document also said that Russia “has experience with supply chain operations” and stated that “Russian software companies have set up offices in the United States, possibly to deflect attention from their Russian origins and to be more acceptable to U.S. government purchasing agents.” (Similar [concerns](https://www.nytimes.com/2018/01/01/technology/kaspersky-lab-antiv… over Russian antivirus software firm Kaspersky Lab led to a recent [ban on](https://www.reuters.com/article/us-usa-cyber-kaspersky/trump-signs-into… the use of Kaspersky software within the U.S. government.) Kaspersky Lab has repeatedly denied that it has ties to any government and said it would not help a government with cyber espionage. Kaspersky is even [reported](https://www.politico.com/story/2019/01/09/russia-kaspersky-lab-ns… to have helped expose former NSA contractor Harold T. Martin III, who was charged with large-scale theft of classified data from the NSA.
Components are seen on a circuit board inside Huawei Technologies Co.’s S12700 Series Agile Switches on display in an exhibition hall at the company’s headquarters in Shenzhen, China, on Tuesday, June 5, 2018.
Photo: Giulia Marchi/Bloomberg via Getty Images
Chinese Telecom Firm Seen as Threat
Beyond broad worries, the U.S. intelligence community had some specific concerns about China’s ability to use the supply chain for espionage.
The 2011 Defense Department strategy document said, without elaborating, that Chinese telecommunications equipment providers suspected of ties to the People’s Liberation Army “pursue inroads into the U.S. telecommunications infrastructure.”
This may be a reference, at least in part, to Huawei, the Chinese telecommunications giant that the department feared would create backdoors in equipment sold to U.S. communications providers. The NSA went as far as to hack into Huawei’s corporate communications, looking for links between the company and the People’s Liberation Army, as reported jointly [by the New York Times](https://www.nytimes.com/2014/03/23/world/asia/nsa-breached-chinese-s…) and the German news magazine Der Spiegel. The report cited no evidence linking Huawei to the People’s Liberation Army, and a spokesperson from the company told the publications it was ironic that what “they are doing to us is what they have always charged that the Chinese are doing through us.”
The U.S. intelligence community appeared concerned that Huawei might help the Chinese government tap into a sensitive transatlantic telecommunications cable known as “TAT-14,” according to a [top-secret NSA briefing on Huawei](https://theintercept.com/document/2019/01/23/prc-information-warfar…. The cable carried defense industry communication on a segment between New Jersey and Denmark; a 2008 upgrade was contracted to Mitsubishi, which “subcontracted the work Out to Huawei. Who in turn upgraded the system with a High End router of their own,” as the document put it. As a broader concern, the document added that there were indications the Chinese government might use Huawei’s “market penetration for its own SIGINT purposes” — that is, for signals intelligence. A Huawei spokesperson did not comment in time for publication.
Firmware Attacks Worry U.S. Intelligence
In other documents, spy agencies flagged another specific concern, China’s growing prowess at exploiting the BIOS, or the Basic Input/Output System. The BIOS, which is also known by the acronyms EFI and UEFI, is the first code that gets executed when a computer is powered on before launching an operating system like Windows, macOS, or Linux. The software that makes up the BIOS is stored on a chip on the computer’s motherboard, not on the hard drive; it is often referred to as “firmware” because it is tied so closely to the hardware. Like any software, the BIOS can be modified to be malicious and is a particularly good target for computer attacks because it resides outside the operating system and thus, cannot be easily detected. It is not even affected when a user erases the hard drive or installs a fresh operating system.
The Defense Intelligence Agency believed that China’s capability at exploiting the BIOS “reflects a qualitative leap forward in exploitation that is difficult to detect,” according to the “BIOS Implants” section in the [Intellipedia article](https://theintercept.com/document/2019/01/23/intellipedia-air-gapp… on threats to air-gapped computers. The section further stated that “recent reporting,” presumably involving BIOS implants, “corroborates the tentative view in a 2008 national intelligence estimate that China is capable of intrusions more sophisticated than those currently observed by U.S. network defenders.”
A 2012 snapshot of another Intellipedia page, on “[BIOS Threats](https://theintercept.com/document/2019/01/23/intellipedia-bios-thr… flags the BIOS’s vulnerability to supply chain meddling and insider threats. Significantly, the document also appears to refer to the U.S. intelligence community’s discovery of BIOS malware from China’s People’s Liberation Army, stating that “PLA and [[Russian](https://theintercept.com/2017/08/02/white-house-says-russias-hacke…)] MAKERSMARK versions do not appear to have a common link beyond the interest in developing more persistent and stealthy” forms of hacking. The “versions” mentioned appear to be instances of malicious BIOS firmware from both countries, judging from footnotes and other context in the document.
The Intellipedia page also contained indications that China may have figured out a way to compromise the BIOS software that’s manufactured by two companies, American Megatrends, commonly known as AMI, and Phoenix Technologies, which makes Award BIOS chips.
In a paragraph marked top secret, the page stated, “Among currently compromised are AMI and Award based BIOS versions. The threat that BIOS implants pose increases significantly for systems running on compromised versions.” After these two sentences, concluding the paragraph, is a footnote to a top-secret document, which The Intercept has not seen, titled “Probable Contractor to PRC People’s Liberation Army Conducts Computer Network Exploitation Against Taiwan Critical Infrastructure Networks; Develops Network Attack Capabilities.”
The word “compromised” could have different meanings in this context and does not necessarily indicate that a successful Chinese attack occurred; it could simply mean that specific versions of AMI and Phoenix’s Award BIOS software contained vulnerabilities that U.S. spies knew about. “It’s very puzzling that we haven’t seen evidence of more firmware attacks,” said Trammell Hudson, a security researcher at the hedge fund Two Sigma Investments and co-discoverer of a series of BIOS vulnerabilities in MacBooks [known](https://trmm.net/Thunderstrike2_details) as [Thunderstrike](https://www.wired.com/2015/08/researchers-create-first-firmw…. “Most every security conference debuts several new vulnerability proof-of-concepts, but … the only public disclosure of compromised firmware in the wild” came in 2015, when Kaspersky Lab announced the discovery of malicious hard drive firmware from an [advanced hacking operation](https://www.ibtimes.co.uk/equation-group-meet-nsa-gods-cyber-espionage-1488327) dubbed Equation Group. “Either as an industry we’re not very good at detecting them, or these firmware attacks and hardware implants are only used in very tailored access operations.”
Hudson added, “It is quite worrisome that many systems never receive firmware updates after they ship, and the numerous embedded devices in a system are even less likely to receive updates. Any compromises against the older versions have a ‘forever day’ aspect that means that they will remain useful for adversaries against systems that might be in use for many years.”
American Megatrends issued the following statement: “The BIOS firmware industry, and computing as a whole, has taken incredible steps towards security since 2012. The information in the Snowden document concerns platforms that pre-date current BIOS-level security. We have processes in place to identify security vulnerabilities in boot firmware and promptly provide the mitigation to our OEM and ODM customers for their platforms.”
Phoenix Technologies issued the following statement: “The attacks described in the document are well-understood in the industry. Award BIOS was superseded by today’s more secure UEFI framework which contained mitigations for these types of firmware attacks many years ago.”
Successful Supply Chain Attacks by France, Germany, and the U.S.
The Snowden documents reviewed so far discuss, in often vague and uncertain terms, what U.S. intelligence believes its adversaries like China and Russia are capable of. But these documents and others also discuss in much more specific terms what the U.S. and its allies are capable of, including descriptions of specific, successful supply chain operations. They also describe in broad strokes the capabilities of various NSA programs and units against supply chains.
The Intellipedia page on [threats to air-gapped networks](https://theintercept.com/document/2019/01/24/intellipedia-air-gap… disclosed that as of 2005, Germany’s foreign intelligence agency, the BND, “has established a few commercial front companies that it would use to gain supply chain access to unidentified computer components.” The page attributes this knowledge to “information obtained during an official liaison exchange.” The page did not mention who BND’s target was or what sorts of activities the front companies were engaged in.
BND has been “setting up front companies for both HUMINT and SIGINT operations since the 1950s,” said Erich Schmidt-Eenboom, German author and BND expert, using the jargon terms for intelligence gathered both by human spies and through electronic eavesdropping, respectively. “As a rule, a full-time BND employee will found a small GmbH [company], which is responsible for a single operation. In the SIGINT area, this GmbH also maintains contacts with industrial partners.”
BND did not respond to a request for comment.
The Intellipedia page also stated that, beginning in 2002, France’s intelligence agency, DGSE, “delivered computers and fax equipment to Senegal’s security services and by 2004 could access all the information processed by these systems, according to a cooperative source with indirect access.” Senegal is a former French colony. Representatives of the Senegalese government did not respond to a request for comment. DGSE declined to comment.
Left/Top: Intercepted packages are opened carefully. Right/Bottom: A “load station” implants a beacon. Photos: NSA
Much of what’s been reported about the U.S.’s supply chain attack capabilities came from a June 2010 NSA document that The Intercept’s co-founder Glenn Greenwald published with his 2014 book “No Place to Hide.” The document, an article from an internal NSA news site called SIDtoday, was [published again](http://www.spiegel.de/media/media-35669.pdf) in 2015 in Der Spiegel with fewer redactions (but without any new analysis).
SIDtoday concisely explained one NSA approach to supply chain attacks (formatting is from the original article):
> Shipments of computer network devices (servers, routers, etc.) being delivered to our targets throughout the world are intercepted. Next, they are redirected to a secret location where Tailored Access Operations/Access Operations (AO – S326) employees, with the support of the Remote Operations Center (S321), enable the installation of beacon implants directly into our targets’ electronic devices. These devices are then re-packaged and placed back into transit to the original destination.
Supply chain “interdiction” attacks like this involve compromising computer hardware while it’s being transported to the customer. They target a different part of the supply chain than the attack described by Bloomberg; Bloomberg’s story said Chinese spies installed malicious microchips into server motherboards while they were being manufactured at the factory, rather than while they were in transit. The NSA document said its interdiction attacks “are some of the most productive operations in TAO,” or Tailored Access Operations, NSA’s offensive hacking unit, “because they pre-position access points into hard target networks around the world.” (TAO is known today as Computer Network Operations.)
Interdicting specific shipments may carry less risk for a spy agency than implanting malicious microchips en masse at factories. “A design/manufacturing attack of the sort alleged by Bloomberg is plausible,” Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, told The Intercept. “That’s exactly why the story was such a big deal. But just because it’s plausible doesn’t mean it’s happened, and Bloomberg just didn’t bring in enough evidence, in my opinion, to support their claim.” She added, “What we do know is that a design/manufacturing attack is highly risky for the attacker and that there are many less risky alternatives that are better suited to the task at hand.”
A Syrian man looks at his mobile phone in the Bustan al-Qasr neighborhood of Aleppo, Syria, on Sept. 21, 2012.
Photo: Manu Brabo/AP
The 2010 document also described a successful NSA attack against the state-run Syrian Telecommunications Establishment. The NSA knew the company had ordered computer network devices for its internet backbone, so the agency interdicted these devices and redirected them to a “load station,” where the agency implanted “beacons” and then placed the devices back into transit.
Several months after Syrian Telecom received the devices, one of the beacons “called back to the NSA covert infrastructure.” At that point, the NSA used its implant to survey the network where the device was installed and discovered that the device gave much greater access than expected; in addition to the internet backbone, it provided access into the national cellular network operated by Syrian Telecom, since the cellular traffic traversed the backbone.
“Since the STE GSM [cellular] network has never been exploited, this new access represented a real coup,” the author of the NSA document wrote. This allowed the NSA to “automatically exfiltrate” information about Syria Telecom cellular subscribers, including who they called, when, and their geographical locations as they carried their phones throughout the day. It also enabled the NSA to gain further access to cellular networks in the region.
Document: NSA
Another NSA document describes a different successful attack conducted by the agency. A slide from a 2013 NSA “program management review” described a [top-secret supply chain operation](https://theintercept.com/document/2019/01/24/nsa-supply-chain-at… targeting a Voice-Over-IP network for classified online phone calls. At an “overseas location,” the NSA intercepted an order of equipment for this network from a manufacturer in China and compromised it with implant beacons.
“The analysis and reporting on this target identified, with high granularity, [the target’s] method of hardware procurement,” stated a presentation slide. “As a result of these efforts, NSA and its [Intelligence Community] partners are now positioned for success with future opportunities.”
NSA Operations in “Adversary Space”
In addition to information about specific supply chain operations by the U.S. and its allies, Snowden documents also include more general information about U.S. capabilities.
Computer hardware can be altered at various points along the supply chain, from design to manufacturing to storage to shipment. The U.S. is among the small number of countries that could, in theory, compromise devices at many different points in this pipeline, thanks to its resources and geographic reach.
Document: NSA
This was underlined in a [top-secret 2011 presentation](https://theintercept.com/document/2019/01/24/special-collecti… about the Special Collection Service, a joint NSA/CIA spying program operating out of U.S. diplomatic facilities overseas. It referred to 80 global SCS sites as “points of presence” providing a “home field advantage in [the] adversary’s space,” from which “human enabled SIGINT,” can be conducted, and where supply chain “opportunities” present themselves, a suggestion that the NSA and CIA conduct supply chain attacks from U.S. embassies and consulates around the world. (The presentation was [published](http://www.spiegel.de/international/the-germany-file-of-edward-snowden-documents-available-for-download-a-975917.html) by Der Spiegel in 2014, alongside 52 other documents, and apparently never written about. The Intercept is republishing it to include the speaker notes.)
One program that goes after computer supply chains in this manner is the NSA’s SENTRY OSPREY, in which the agency uses human spies to bug digital intelligence sources, or, as the top-secret [briefing](https://theintercept.com/document/2014/10/10/national-initiative-…) published by [The Intercept in 2014](https://theintercept.com/2014/10/10/core-secrets/) puts it, “employs its own HUMINT assets […] to support SIGINT operations,” including “close access” operations that essentially put humans right up against physical infrastructure. These operations, conducted in conjunction with partners like the CIA, FBI, and Defense Intelligence Agency, appear to have included attempts to implant bugs and compromise supply chains; a [2012 classification guide](https://theintercept.com/document/2014/10/10/target-exploitation-cla…) said they included “supply chain-enabling” and “hardware implant-enabling” — as well as “forward-based [program] presence” at sites in Beijing, South Korea, and Germany, all home to telecommunications manufacturers. Another program, SENTRY OWL, works “with specific foreign partners… and foreign commercial industry entities” to make devices and products “exploitable for SIGINT,” according to the briefing.
The Persistence Division
The NSA’s Tailored Access Operations played a critical role in the U.S. government’s supply chain interdiction operations. In addition to helping intercept shipments of computer hardware to secretly install hardware implants, one division of TAO, known as the “Persistence Division,” was tasked with actually creating the implants.
Document: NSA
A 2007 [top-secret presentation](https://theintercept.com/document/2019/01/24/tailored-access-… about TAO described “sophisticated” covert hacking of software, including firmware, over a computer network “or by physical interdiction,” and credits these attacks with providing U.S. spy agencies “some of their most significant successes.”
Another [document](http://www.spiegel.de/media/media-35661.pdf), a 2007 NSA wiki page titled “Intern Projects,” first[published](http://www.spiegel.de/international/world/new-snowden-docs-indicate-scope-of-nsa-preparations-for-cyber-battle-a-1013409.html) by Der Spiegel, described “ideas about possible future projects for the Persistence Division.” The projects described involved adding new capabilities to the NSA’s existing malicious firmware-based implants. These implants could be inserted into target computers via supply chain attacks.
One potential project proposed to expand a type of BIOS malware to work with computers running the Linux operating system and to offer more ways to exploit Windows computers.
Another suggested targeting so-called virtualization technology on computer processors, which allows the processors to more efficiently and reliably segregate so-called virtual machines, software to simulate multiple computers on a single computer. The proposed project would develop a “hypervisor implant,” indicating that it intended to target the software that coordinates the operation of virtual machines, known as the hypervisor. Hypervisors and virtual machines are used widely by cloud hosting providers. The implant would leverage support for virtual machines in both Intel and AMD processors. (Intel and AMD did not respond to requests for comment.)
Another possible project envisioned attaching a short hop radio to a hard drive’s serial port and communicating with it using a firmware implant. Yet another aimed to develop firmware implants targeting hard drives built by U.S. data storage company Seagate. (Seagate did not respond to a request for comment.)
Illustration: Oliver Munday for The Intercept
Where to Hide Your Hardware Implant?
One of the reasons spy agencies like the NSA fear supply chain compromise is that there are so many places on a typical computer to hide a spy implant.
“Servers today have dozens of components with firmware and hundreds of active components,” said Joe FitzPatrick, a hardware security trainer and researcher. “The only way to give it a truly clean bill of health is in-depth destructive testing that depends on a ‘gold standard’ good reference to compare to — except defining that ‘gold standard’ is difficult to impossible. The much greater risk is that even perfect hardware can have vulnerable firmware and software.”
The [Intellipedia page about supply chain threats](https://theintercept.com/document/2019/01/24/intellipedia-supply-c… lists and analyzes the various pieces of hardware where a computer could be compromised, including power supplies (“could be set to … self-destruct, damage the computer’s motherboard … or even start a fire or explosion”); network cards (“well-positioned to plant malware and exfiltrate information”); disk controllers (“Better than a root kit”); and the graphics processing unit, or GPU (“well positioned to scan the computer’s screen for sensitive information”).
According to the Bloomberg report, Chinese spies connected their malicious microchip to baseboard management controllers, or BMCs, miniature computers that are hooked into servers to give administrators remote access to troubleshoot or reboot the servers.
FitzPatrick, quoted by Bloomberg, is skeptical of the Supermicro story, [including](https://risky.biz/RB517_feature/) its description of how spies exploited the BMCs. But experts agreed that placing a backdoor into the BMC would be a good way to compromise a server. In a [follow-up story](https://www.bloomberg.com/news/articles/2018-10-09/new-evidence-of-h…, Bloomberg alleged that a “major U.S. telecommunications company” discovered a Supermicro server with an implant built into the Ethernet network card, which is one of the pieces of hardware listed in the Intellipedia page that’s vulnerable to supply chain attacks. FitzPatrick was, again, [skeptical](https://twitter.com/securelyfitz/status/1049699417840791552) of the claims.
After the Bloomberg story was published, in a [blog post](https://www.lawfareblog.com/us-government-needs-better-immunize-itsel…) on Lawfare, Weaver, the Berkeley security researcher, argued that the U.S. government should reduce the number of “components that need to execute with integrity” to only the central processing unit, or CPU, and require that these “trusted base” components used in government systems be manufactured in the U.S., and by U.S. companies. In this way, the rest of the computer could be safely manufactured in China — systems would work securely even if components outside the trusted base, such as the motherboard, carried malicious implants. Apple’s iPhone and Intel’s Boot Guard, he argued, already work in this way. Due to the government’s purchasing power, “it should be plausible to write supply rules that, after a couple years, effectively require that U.S. government systems are built in a way that resists most supply chain attacks,” he told The Intercept.
While supply chain operations are used in real cyberattacks, they seem to be rare compared to more traditional forms of hacking, like spear-phishing and malware attacks over the internet. The NSA uses them to access “isolated or complex networks,” according to a 2007 top-secret presentation about TAO.
“Supply chain attacks are something individuals, companies, and governments need to be aware of. The potential risk needs to be weighed against other factors,” FitzPatrick said. “The reality is that most organizations have plenty of vulnerabilities that don’t require supply chain attacks to exploit.”
Documents
Documents published with this article:
- [DoD 2011 Strategy for Operating in Cyberspace – Supply Chain Excerpts](https://theintercept.com/document/2019/01/24/dod-2011-strategy-fo…
- [Intellipedia – Air Gapped Network Threats](https://theintercept.com/document/2019/01/24/intellipedia-air-gapp…
- [Intellipedia – BIOS Threats](https://theintercept.com/document/2019/01/24/intellipedia-bios-thr…
- [Intellipedia – Supply Chain Cyber Threats](https://theintercept.com/document/2019/01/24/intellipedia-supply-c…
- [NSA Supply Chain Attack From PMR 4-24-13](https://theintercept.com/document/2019/01/24/nsa-supply-chain-atta…
- [National Intelligence Estimate 2009 Global Cyber Threat – Supply Chain Excerpts](https://theintercept.com/document/2019/01/24/national-intelligenc…
- [PRC Information Warfare & Huawei](https://theintercept.com/document/2019/01/24/prc-information-warfar…
- [Special Collection Service – Pacific SIGDEV Conference March 2011 – Supply Chain Excerpts](https://theintercept.com/document/2019/01/24/special-collection-se…
- [Tailored Access Operations 2007](https://theintercept.com/document/2019/01/24/tailored-access-operatio…
Re: [Cryptography] Implementing full Internet IPv6 end-to-end encryption based on Cryptographically Generated Address
by grarpamp 24 Jan '19
On 1/24/19, Christian Huitema <huitema(a)huitema.net> wrote:
> If you want real integration with IPv6 addressing, the crypto systems
> can really only use 64 bits. The top 64 bits are claimed by the routing
> system, and the network providers just won't let you put something
> arbitrary there. That's a big issue, because if your cryptographic proof
> of ownership relies on matching 64 bits, then it is not much of a proof.
Depends on if and to what extent one needs or wishes to speak
to the clearnet hardware that currently exists...
"
Yes, one cannot rationally overload all 128 bits for that without colliding
upon allocated IPv6 space that may appear in one's host stack.
However the 1:1 key network can be larger than 64 or 80 bit. One could
easily play with up to say 125 bits by squatting on entirely unallocated space. (Unlike the clear mistake CJDNS made by squatting on space already allocated for a specific and conflicting real-world, in-stack purpose.) Obviously the common library widths of 96 and 112 could be keyed. And a request could be made for a formal allocation if compatibility and compliance were felt to be needed, by some mental gymnastics.
https://www.iana.org/assignments/ipv6-address-space/ipv6-address-space.xhtml
https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-speci…
https://www.iana.org/assignments/ipv6-unicast-address-assignments/ipv6-unic…
"
Agreed, going from say 64 to 125 bits is not the huge or universally strong and ideal leap that could be sought.
And there is the question of secure levels and offloading
(or onboarding as it may be)... at what level would many
users perhaps choose to share their cat pictures...
64, 80, 120, 512? And their finances, affairs, work, email?
Were a slider to exist, where would offloading bulk traffic,
of any given content or purpose, end up being set, and
start to occur? How would you run each as needed?
Do you need a future 8192-bit IPvN in backbone routers
and host stacks to achieve certain strengths and utility
goals? Or can you do it some other way?
> The CGA specification (RFC 3972) tried to mitigate that by introducing
> the "SEC" field. It tries to make the proof harder by specifying a
> number of matching zeroes. There is a tradeoff there. We wanted to
> encode the number of zeroes in the address itself: we feared that doing
> otherwise would make address spoofing too easy. But we were also worried
> about birthday paradox issues. Each bit allocated to the SEC field is
> one fewer bit available to differentiate the addresses of hosts on the
> same network. The compromise was to pick a 16 bit granularity.
> ...
For reference...
https://tools.ietf.org/html/rfc3972
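A simplified sketch of the RFC 3972 generation loop (hash extension only; the real spec uses a DER-encoded public key, duplicate-address detection, and the full CGA Parameters structure): brute-force a modifier until Hash2 has 16*Sec leading zero bits, then derive the 64-bit interface identifier from Hash1.

```python
# Simplified CGA (RFC 3972) generation sketch; key encoding and
# collision handling are omitted for brevity.
import hashlib, os, ipaddress

def generate_cga(subnet_prefix, pubkey, sec):
    """subnet_prefix: 8 bytes; pubkey: encoded public key bytes; sec: 0..7."""
    while True:
        modifier = os.urandom(16)
        # Hash2 = leftmost 112 bits of SHA-1(modifier || 9 zero octets || key)
        h2 = hashlib.sha1(modifier + b'\x00' * 9 + pubkey).digest()
        if int.from_bytes(h2[:14], 'big') >> (112 - 16 * sec) == 0:
            break   # found a modifier giving 16*sec leading zero bits
    # Hash1 = SHA-1(modifier || prefix || collision count || key), first 64 bits
    h1 = hashlib.sha1(modifier + subnet_prefix + b'\x00' + pubkey).digest()
    iid = bytearray(h1[:8])
    iid[0] = (iid[0] & 0x1c) | (sec << 5)   # Sec in top 3 bits; u/g bits zeroed
    return ipaddress.IPv6Address(subnet_prefix + bytes(iid)), modifier

addr, _ = generate_cga(bytes.fromhex('20010db800000000'),
                       b'example-public-key-bytes', sec=1)
print(addr)   # interface ID derived from Hash1, with Sec encoded up top
```

With Sec=1 the loop needs on the order of 2^16 SHA-1 trials, and each Sec increment multiplies the spoofing work by 2^16; that hash-extension tradeoff against the scarce 64 bits is exactly the compromise described above.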
> ...
> Bottom line, back in 2005 we had high hopes that CGA would enable all
> kinds of security improvements, be it end-to-end IPSEC or secure IP
> Mobility. That did not happen, and hardly anybody uses CGA today. Lots
> of the work was done at Microsoft Research, but Microsoft never found a
> real reason to deploy CGA in Windows. The real reason is that 64 bits is
> too small for crypto.
ORCHIDv2 is also commonly noted...
https://tools.ietf.org/html/rfc7343
An IPv6 Prefix for Overlay Routable Cryptographic Hash Identifiers
Version 2 (ORCHIDv2)
This document specifies an updated Overlay Routable Cryptographic
Hash Identifiers (ORCHID) format that obsoletes that in RFC 4843.
These identifiers are intended to be used as endpoint identifiers at
applications and Application Programming Interfaces (APIs) and not as
identifiers for network location at the IP layer, i.e., locators.
They are designed to appear as application-layer entities and at the
existing IPv6 APIs, but they should not appear in actual IPv6
headers. To make them more like regular IPv6 addresses, they are
expected to be routable at an overlay level. Consequently, while
they are considered non-routable addresses from the IPv6-layer
perspective, all existing IPv6 applications are expected to be able
to use them in a manner compatible with current IPv6 addresses.
The Overlay Routable Cryptographic Hash Identifiers originally
defined in RFC 4843 lacked a mechanism for cryptographic algorithm
agility. The updated ORCHID format specified in this document
removes this limitation by encoding, in the identifier itself, an
index to the suite of cryptographic algorithms in use.
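A rough sketch of the ORCHIDv2 layout follows. The 2001:20::/28 prefix and
the 28/4/96-bit split are from RFC 7343; the zero context ID, the choice of
SHA-256, and taking the leftmost 96 bits are illustrative assumptions, since
each usage (e.g., HIPv2) pins down its own suite and hash extraction.

```python
# Sketch of the RFC 7343 ORCHIDv2 layout: 28-bit prefix (2001:20::/28),
# 4-bit OGA ID naming the hash suite, and 96 bits extracted from a hash of
# (context ID || input bitstring). Which 96 bits are taken depends on the
# suite; the leftmost bits are used here purely for illustration.
import hashlib
import ipaddress

ORCHID_PREFIX = 0x2001002 << 100    # 2001:20::/28 as the top 28 bits
CONTEXT_ID = bytes(16)              # each usage defines its own 128-bit value

def orchid_v2(input_bitstring: bytes, oga_id: int) -> ipaddress.IPv6Address:
    digest = hashlib.sha256(CONTEXT_ID + input_bitstring).digest()
    hash96 = int.from_bytes(digest[:12], "big")     # 96 extracted bits
    return ipaddress.IPv6Address(ORCHID_PREFIX | (oga_id << 96) | hash96)

print(orchid_v2(b"host identity public key", oga_id=3))
```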
deception is becoming all the rage these days. done right, it provides a unique window into attacker intention and capability.
rather than the isolated, fabricated structures commonly used (honey-*) consider instrumenting your actual systems with traps for the nefarious:
- set browser strings showing out of date, vulnerable versions
- leave packages old, but replace with source built updates
- instrument applications with sanitizers and hardened allocators
- jail and container, to observe unexpected calls
exploits, like any other software, are fragile! slight changes in build and configuration can render even the most expensive and carefully constructed chain impotent.
some things will get through (think: logic bugs, rather than technical exploitation) so keep your most sensitive work on truly hardened systems with strong compartmentalization and attack surface reduction. (yes, Qubes is still better than a vanilla distro! it's amazing how much malware simply aborts when it finds itself in a virtualized environment. feature, not bug! :)
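as one concrete flavor of such a trap, here is a bait-file watcher sketched
with nothing but the python standard library. the path, polling interval, and
alert hook are placeholders, and atime only moves on filesystems not mounted
noatime.

```python
# Sketch: a bait ("honey") file watcher. Polls the file's access time and
# fires an alert when anything reads it. Path and alerting are placeholders.
import os
import time

BAIT = "/srv/share/passwords-old.xlsx"   # hypothetical enticing bait path

def watch(path: str, interval: float = 5.0) -> None:
    last = os.stat(path).st_atime
    while True:
        atime = os.stat(path).st_atime
        if atime != last:
            print(f"ALERT: bait file {path} was read")  # hook real alerting
            last = atime
        time.sleep(interval)

watch(BAIT)
```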
---
https://www.springer.com/cda/content/document/cda_downloaddocument/97833193…
Cyber Security Deception

Mohammed H. Almeshekah and Eugene H. Spafford
Abstract Our physical and digital worlds are converging at a rapid pace, putting
a lot of our valuable information in digital formats. Currently, most computer
systems’ predictable responses provide attackers with valuable information on how
to infiltrate them. In this chapter, we discuss how the use of deception can play a
prominent role in enhancing the security of current computer systems. We show
how deceptive techniques have been used in many successful computer breaches.
Phishing, social engineering, and drive-by-downloads are some prime examples.
We discuss why deception has only been used haphazardly in computer security.
Additionally, we discuss some of the unique advantages deception-based security
mechanisms bring to computer security. Finally, we present a framework where
deception can be planned and integrated into computer defenses.

1 Introduction

Most data is digitized and stored in organizations’ servers, making them a valuable
target. Advanced persistent threats (APT), corporate espionage, and other forms of
attacks are continuously increasing. Companies reported 142 million unsuccessful
attacks in the first half of 2013, as reported by Fortinet [1]. In addition, a recent
Verizon Data Breach Investigations Report (DBIR) points out that currently deployed
protection mechanisms are not adequate to address current threats [1]. The report
states that 66% of the breaches took months or years to discover, rising from 56%
in 2012. Furthermore, 84% of these attacks took only hours or less to infiltrate
computer systems [1]. Moreover, the report states that only 5% of these breaches
were detected using traditional intrusion detection systems (IDSs) while 69% were
detected by external parties [1].
These numbers are only discussing attacks that were discovered. Because only
5% of the attacks are discovered using traditional security tools, it is likely that the
reality is significantly worse as there are unreported and undiscovered attacks. These
findings show that the status quo of organizations’ security posture is not enough to
address current threats.
Within computer systems, software and protocols have been written for decades
with an intent of providing useful feedback to every interaction. The original design
of these systems is structured to ease the process of error detection and correction by
informing the user about the exact reason why an interaction failed. This behavior
enhances the efforts of malfeasors by giving them information that helps them to
understand why their attack was not successful, refine their attacks and tools, and
then re-attack. As a result, these systems are helpful to attackers and guide them
throughout their attack. Meanwhile, targeted systems learn nothing about these
attempts, other than a panic in the security team. In fact, in many cases multiple
attempts that originate from the same entity are not successfully correlated.
Deception-based techniques provide significant advantages over traditional se-
curity controls. Currently, most security tools are responsive measures to attackers’
probes to previously known vulnerabilities. Whenever an attack surfaces, it is
hit hard with all preventative mechanisms at the defender’s disposal. Eventually,
persistent attackers find a vulnerability that leads to a successful infiltration by
evading the way tools detect probes or by finding new unknown vulnerabilities.
This security posture is partially driven by the assumption that “hacking-back” is
unethical, while there is a difference between the act of “attacking back” and the act
of deceiving attackers.
There is a fundamental difference in how deception-based mechanisms work in
contrast to traditional security controls. The latter usually focuses on attackers’
actions—detecting or preventing them—while the former focuses on attackers’
perceptions—manipulating them and therefore inducing adversaries to take action-
s/inactions in ways that are advantageous to targeted systems; traditional security
controls position themselves in response to attackers’ actions while deception-based
tools are positioned in prospect of such actions.

1.1 Definition

One of the most widely accepted definitions of computer-security deception is the
one by Yuill [2]: Computer Deception is “Planned actions taken to mislead attackers
and to thereby cause them to take (or not take) specific actions that aid computer-
security defenses.” We adapt this definition and add “confusion” as one of the goals
of using deceit (the expression of things that are not true) in computer system
protection. Therefore, the definition of defensive computer deception we will use
throughout this chapter is:
Definition 1. Deception is “Planned actions taken to mislead and/or confuse
attackers and to thereby cause them to take (or not take) specific actions that aid
computer-security defenses.”
2 A Brief History

Throughout history, deception has evolved to find its natural place in our societies
and eventually our technical systems. Deception and decoy-based mechanisms have
been used in security for more than two decades in mechanisms such as honeypots
and honeytokens. An early example of how deception was used to attribute and
study attackers can be seen in the work of Cheswick in his well-known paper “An
Evening with Berferd” [3]. He discusses how he interacted with an attacker in real
time, providing him with fabricated responses. Two of the earliest documented uses
of deceptive techniques for computer security are in the work of Cliff Stoll in his
book “The Cuckoo’s Egg” [4] and the work of Spafford in his own lab [5]. The
Deception Toolkit (DTK; http://www.all.net/dtk/), released by Fred Cohen in 1997,
was one of the first publicly available tools to use deception for the purpose of
computer defenses.
In the late 1990s, “honeypots”—“a component that provides its value by being
attacked by an adversary,” i.e., deceiving the attacker into interacting with them—
came into use in computer security. In 2003, Spitzner published his book on
“Honeypots,” discussing how they can be used to enhance computer defenses [6].
Following on the idea of honeypots, a proliferation of “honey-*” prefixed tools
have been proposed. Additionally, with the release of Tripwire, Kim and Spafford
suggested the use of planted files that should not be accessed by normal users, with
interesting names and/or locations and serving as bait that will trigger an alarm if
they are accessed by intruders [7].

2.1 Honey-Based Tools

2.1.1 Honeypots
Honeypots have been used in multiple security applications such as detecting and
stopping spam (see http://www.projecthoneypot.org) and analyzing malware [8]. In addition, honeypots have been used
to secure databases [9]. They are starting to find their way into mobile environments
[10], where some interesting results have been reported [11].
Honeypots in the literature come in two different types: server honeypot and
client honeypot. The server honeypot is a computer system that contains no valuable
information and is designed to appear vulnerable for the goal of enticing attackers to
access them. Client honeypots are more active. These are vulnerable user agents that
troll many servers, actively trying to get compromised [12]. When such incidents
happen, the client honeypots report the servers that are infecting users’ clients.
Honeypots have been used in computing in four main areas as we discuss in the
following paragraphs.
Detection
Honeypots provide an additional advantage over traditional detection mechanisms
such as Intrusion Detection Systems (IDS) and anomaly detection. First, they
generate less logging data as they are not intended to be used as part of normal
operations and thus any interaction with them is illicit. Second, the rate of false
positive is low as no one should interact with them for normal operations. Anagnos-
takis et al. proposed an advanced honeypot-based detection architecture in the use
of shadow honeypots [13]. In their scheme they position Anomaly Detection Sensors
(ADSs) in front of the real system where a decision is made as to whether to send the
request to a shadow machine or to the normal machine. The scheme attempts to
integrate honeypots with real systems by seamlessly diverting suspicious traffic to
the shadow system for further investigation. Finally, honeypots are also helpful in
detecting industry-wide attacks and outbreaks, e.g. the case of the Slammer worm
as discussed in [14].
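The dispatch logic of such a scheme might look like the following sketch. This
is not the implementation from [13]; the scoring function, threshold, and
backends are invented stand-ins meant only to show the idea of diverting
suspicious traffic to an instrumented replica.

```python
# Sketch of the shadow-honeypot dispatch idea: an anomaly score decides
# whether a request is served by production or by an instrumented shadow
# replica, where an exploit attempt can be observed safely.
import random

def anomaly_score(request: bytes) -> float:
    # Stand-in for a real Anomaly Detection Sensor (ADS).
    return 1.0 if b"%n" in request or len(request) > 4096 else random.random() * 0.2

def handle(request: bytes) -> str:
    if anomaly_score(request) > 0.5:
        # The shadow shares state with production but is fully instrumented.
        return f"shadow honeypot served {len(request)} suspicious bytes"
    return "production served request"

print(handle(b"GET / HTTP/1.1"))
print(handle(b"A" * 8192))
```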
Prevention
Honeypots are used in prevention where they assist in slowing down the attackers
and/or deterring them. Sticky honeypots are one example of machines that utilize
unused IP address space and interact with attackers probing the network to slow
them down [15]. In addition, Cohen argues that by using his Deception Toolkit
(DTK) we can deter attackers by confusing them and introducing risk on their side
[16]. However, we are not aware of any studies that have investigated those claims.
Beyond the notion of enticement and traps used in honeypots, deception has been
studied from other perspectives. For example, Rowe et al. present a novel way of
using honeypots for deterrence [17]. They protect systems by making them look
like a honeypot and therefore deter attackers from accessing them. Their observation
stemmed from the development of anti-honeypot techniques that employ advanced
methods to detect whether the current system is a honeypot [18].
Response
One of the advantages of using honeypots is that they are totally independent
systems that can be disconnected and analyzed after a successful attack on them
without hindering the functionality of the production systems. This simplifies the
task of forensic analysts as they can preserve the attacked state of the system and
extensively analyze what went wrong.
Research
Honeypots are heavily used in analyzing and researching new families of malware.
The Honeynet Project (www.honeynet.org) is an “international non-profit security research organization,
dedicated to investigating the latest attacks and developing open source security
tools to improve Internet security.” For example, the HoneyComb system uses
honeypots to create unique attack signatures [19]. Other more specific tools, such
as dionaea (http://dionaea.carnivore.it/), are designed to capture a copy of computer malware for further
study. Furthermore, honeypots help in inferring and understanding some widespread
attacks such as Distributed Denial of Service (DDoS) [20].
2.1.2 Other Honey Prefixed Tools
The prefix “honey-*” has been used to refer to a wide range of techniques that
incorporate the act of deceit in them. The basic idea behind the use of the prefix
word “honey” in these techniques is that they need to entice attackers to interact
with them, i.e. fall for the bait—the “honey.” When such an interaction occurs the
value of these methods is realized.
The term honeytokens has been proposed by Spitzner [[21](https://webcache.googleusercontent.com/search?q=cache:S_BHqeXHETIJ:http…)] to refer to honeypots
but at a smaller granularity. Stoll used a number of files with enticing names and
distributed them in the targeted computer systems, acting as a beaconing mechanism
when they are accessed, to track down Markus Hess [4]. Yuill et al. coined the term
honeyfiles to refer to these files [22]. HoneyGen was also used to refer to tools that
are used to generate honeytokens [23].
Most recently, a scheme named Honeywords was proposed by Juels and Rivest
to confuse attackers when they crack a stolen hashed password file [24] by hiding
the real password among a list of “fake” ones. Their scheme augments password
databases with an additional (N − 1) fake credentials [24]. If the DB is stolen and
cracked, attackers are faced with N different passwords to choose from, where only
one of them is the correct one. However, if they use any of the fake ones, the system
triggers an alarm alerting system administrators that the DB has been cracked.

2.2 Limitations of Isolated Use of Deception

Honeypot-based tools are a valuable technique used for the detection, prevention,
and response to cyber attacks as we discuss in this chapter. Nevertheless, those
techniques suffer from the following major limitations:
• As the prefix honey-* indicates, for such techniques to become useful, the
adversary needs to interact with them. Attackers and malware are increasingly
becoming sophisticated and their ability to avoid honeypots is increasing [25].
• Assuming we manage to lure the attacker into our honeypot, we need to be able
to continuously deceive them into believing that they are in the real system. Chen
et al. study such a challenge and show that some malware, such as polymorphic
malware, not only detects honeypots, but also changes its behavior to deceive the
honeypot itself [25]. In this situation, attackers are in a position where they have
the ability to conduct counter-deception activities by behaving in a manner that is
different from how they would behave in a real environment.
• To learn about attackers’ objectives and attribute them, we need them to interact
with the honeypot systems. However, with a high-interaction honeypot there is
a risk that attackers might exploit the honeypot itself and use it as a pivot point
to compromise other, more sensitive, parts of the organization’s internal systems.
Of course, with correct separation and DMZs we can alleviate the damage, but
many organizations consider the risk intolerable and simply avoid using such
tools.
• As honeypots are totally “fake systems” many tools currently exist to identify
whether the current system is a honeypot or not [18, 25]. This fundamental
limitation is intrinsic in their design.

3 Deception as a Security Technique

Achieving security cannot be done with single, silver-bullet solutions; instead, good
security involves a collection of mechanisms that work together to balance the cost
of securing our systems with the possible damage caused by security compromises,
and drive the success rate of attackers to the lowest possible level. In Fig. 1, we
present a taxonomy of protection mechanisms commonly used in systems. The
diagram shows four major categories of protection mechanisms and illustrates how
they intersect, achieving multiple goals.
The rationale behind having these intersecting categories is that a single layer of
security is not adequate to protect organizations so multi-level security controls are
needed [26]. In this model, the first goal is to deny unauthorized access and isolate
our information systems from untrusted agents. However, if adversaries succeed
in penetrating these security controls, we should have degradation and obfuscation
mechanisms in place that slow the lateral movement of attackers in penetrating our
internal systems. At the same time, this makes the extraction of information from
penetrated systems more challenging.
Even if we slow the attackers down and obfuscate our information, advanced
adversaries may explore our systems undetected. This motivates the need for a
third level of security controls that involves using means of deceit and negative
information. These techniques are designed to lead attackers astray and augment
our systems with decoys to detect stealthy adversaries. Furthermore, this deceitful
information will waste the time of the attackers and/or add risk during their
infiltration.

[Fig. 1: Taxonomy of information protection mechanisms]

The final group of mechanisms in our taxonomy is designed to attribute
attackers and give us the ability to have counter-operations. Booby-trapped software
is one example of counter-operations that can be employed.
Securing a system is an economic activity and organizations have to strike the
right balance between cost and benefits. Our taxonomy provides a holistic overview
of security controls, with an understanding of the goals of each group and how
they can interact with each other. This empowers decision makers to choose which
security controls they should deploy.
Despite all the efforts organizations have in place, attackers might infiltrate
information systems, and operate without being detected or slowed. In addition,
persistent adversaries might infiltrate the system and passively observe for a while
to avoid being detected and/or slowed when moving on to their targets. As a result,
a deceptive layer of defense is needed to augment our systems with negative and
deceiving information to lead attackers astray. We may also significantly enhance
organizational intrusion detection capabilities by deploying detection methods using
multiple, additional facets.
Deception techniques are an integral part of human nature, used around us
all the time. As an example of a deception widely used in sports: teams attempt to
deceive the other team into believing they are following a particular plan so as to
influence their course of action. Use of cosmetics may also be viewed as a form of
mild deception. We use white lies in conversation to hide mild lapses in etiquette. In
cybersecurity, deception and decoy-based mechanisms have been used in security
for more than two decades in technologies such as honeypots and honeytokens.
When attackers infiltrate the system and successfully overcome traditional
detection and degradation mechanisms we would like to have the ability to not
only obfuscate our data, but also lead the attackers astray by deceiving them
and drawing their attention to other data that are false or intentionally misleading.
Furthermore, exhausting the attacker and causing frustration is also a successful
defensive outcome. This can be achieved by planting fake keys and/or using schemes
such as endless files [5]. These files look small on the organization’s servers but when
downloaded to be exfiltrated will exhaust the adversaries’ bandwidth and raise some
alarms. Moreover, with carefully designed deceiving information we can even cause
damage at the adversaries’ servers. A traditional, successful, deception technique
can be learned from the well-known story of Farewell Dossier during the cold war
where the CIA provided modified items to a Soviet spy ring. When the Soviets used
these designs, thinking they were legitimate, it resulted in a major disaster affecting a
trans-Siberian pipeline.
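One way such an endless file might be realized is sketched below: an HTTP
endpoint whose bait object streams random bytes until the downloader gives up.
The bait name, port, and alert hook are invented for illustration.

```python
# Sketch: an "endless file" decoy. The bait advertises no Content-Length and
# streams random data forever, exhausting the exfiltrator's bandwidth while
# raising an alarm.
from http.server import BaseHTTPRequestHandler, HTTPServer
import os

class EndlessFile(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/backup-keys.tar":      # hypothetical bait name
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.end_headers()                       # no length: stream forever
        print("ALERT: bait download from", self.client_address)
        try:
            while True:
                self.wfile.write(os.urandom(64 * 1024))
        except OSError:
            pass                                 # downloader gave up

HTTPServer(("0.0.0.0", 8080), EndlessFile).serve_forever()
```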
When we inject false information we cause some confusion for the adversaries
even if they have already obtained some sensitive information; the injection of
negative information can degrade and/or devalue the correct information obtained
by adversaries. Heckman and her team, from Lockheed Martin, conducted an
experiment between a red and a blue team using some deception techniques, where
they found some interesting results [27]. Even after the red team successfully
attacked and infiltrated the blue system and obtained sensitive information, the blue
team injected some false information in their system that led the red team to devalue
the information they had obtained, believing that the new values were correct.
Another relationship can be observed between the last group of protection tech-
niques, namely attribution, and deception techniques. Deception-based mechanisms
are an effective way to lure attackers to expose themselves and their objectives when
we detect them accessing things and conducting unusual activities. Other tools, such
as anomaly-based IDS, have similar goals, but the advantage deception-based tools
have is that there is a clear line between normal user activities and abnormal ones.
This is because legitimate users are clearly not supposed to access this information.
This difference significantly enhances the effectiveness of deception-based security
controls and reduces the number of false positives, as well as the size of the system’s
log file.

3.1 Advantages of Using Deception in Computer Defenses

Reginald Jones, the British scientific military intelligence scholar, concisely articu-
lated the relationship between security and deception. He referred to security as a
“negative activity, in that you are trying to stop the flow of clues to an opponent” and
it needs its other counterpart, namely deception, to have a competitive advantage in
a conflict [28]. He refers to deception as the “positive counterpart to security” that
provides false clues to be fed to opponents.
By intelligently using deceptive techniques, system defenders can mislead
and/or confuse attackers, thus enhancing their defensive capabilities over time.
By exploiting attackers’ unquestioned trust of computer system responses, system
defenders can gain an edge and position themselves a step ahead of compromise
attempts. In general, deception-based security defenses bring the following unique
advantages to computer systems [29]:
1. Increases the entropy of leaked information about targeted systems during
compromise attempts.
When a computer system is targeted, the focus is usually only on protecting
and defending it. With deception, extra defensive measures can be taken by
feeding attackers false information that will, in addition to defending the targeted
system, cause intruders to make wrong actions/inactions and draw incorrect
conclusions. With the increased spread of APT attacks and government/corporate
espionage threats, such techniques can be effective.
When we inject false information we cause some confusion for the adversaries
even if they have already obtained some sensitive information; the injection
of negative information can degrade and devalue the correct information ob-
tained by adversaries. Heckman and her team developed a tool, referred to as
“Blackjack,” that dynamically copies an internal state of a production server—
after removing sensitive information and injecting deceit—and then directs
adversaries to that instance [27]. Even after the red team successfully attacked
and infiltrated the blue systems and obtained sensitive information, the blue team
injected some false information in their system that led the red team to devalue
the information they had obtained, believing that the new values were correct.
2. Increases the information obtained from compromise attempts.
Many security controls are designed to create a boundary around computer
systems automatically stopping any illicit access attempts. This is becoming
increasingly challenging as such boundaries are increasingly blurring, partly as
a result of recent trends such as “consumerization” (employees bringing their own
digital devices and using them to access company resources) [30]. Moreover, because
of the low cost on the adversaries’ side, and the existence of many automated
exploitation tools, attackers can continuously probe computer systems until
they find a vulnerability to infiltrate undetected. During this process, systems’
defenders learn nothing about the intruders’ targets. Ironically, this makes the
task of defending a computer system harder after every unsuccessful attack.
We conjecture that incorporating deception-based techniques can turn illicit
probing activity into an opportunity to enhance our understanding of the threats
and, therefore, better protect our systems over time.
3. Gives defenders an edge in the OODA loop.
The OODA loop (for Observe, Orient, Decide, and Act) is a cyclic process
model, proposed by John Boyd, by which an entity reacts to an event [31]. The
victory in any tactical conflict requires executing this loop in a manner that
is faster than the opponent. The act of defending a computer system against
persistent attacks can be viewed as an OODA loop race between the attacker and
the defender. The winner of this conflict is the entity that executes this loop faster.
One critical advantage of deception-based defenses is that they give defenders an
edge in such a race as they actively feed adversaries deceptive information that
affects their OODA loop, more specifically the “observe” and “orient” stages
of the loop. Furthermore, slowing the adversary’s process gives defenders more
time to decide and act. This is especially crucial in the situation of surprise, which
is a common theme in digital attacks.
4. Increases the risk of attacking computer systems from the adversaries’ side.
Many current security controls focus on preventing the actions associated
with illicit attempts to access computer systems. As a result, intruders are using
this accurate negative feedback as an indication that their attempts have been
detected. Subsequently, they withdraw and use other, more stealthy, methods of
infiltration. Incorporating deceit in the design of computer systems introduces a
new possibility that adversaries need to account for; namely, that they have been
detected and are currently being deceived. This new possibility can deter attackers
who are not willing to take the risk of being deceived and further analyzed. In
addition, such a technique gives systems’ defenders the ability to use intruders’
infiltration attempts to their advantage by actively feeding them false information.
3.2 Deception in the Cyber Kill-Chain

The cyber kill-chain introduced by Lockheed Martin researchers advocates an
intelligence-driven security model [32]. The main premise behind this model is that
for attackers to be successful they need to go through all these steps in the chain in
sequence. Breaking the chain at any step will break the attack, and the earlier
we break it, the better we prevent the attackers from attacking our systems.
The cyber kill-chain model is a good framework to demonstrate the effectiveness
of incorporating deception at multiple levels in the chain. With the same underlying
principle of the kill-chain—early detection of adversaries—we argue that the earlier
we detect adversaries, the better we are at deceiving them and learning more about
their methods and techniques. We postulate that full intelligence cannot be gathered
without using some means of deception techniques.
Also, the better we know our enemies the better we can defend against them.
By using means of deception we can continuously learn about attackers at different
levels of the kill-chain and enhance our capabilities of detecting them and reducing
their abilities to attack us. This negative correlation is an interesting relationship
between our ability to detect attackers and their ability to probe our resources.
There is a consensus that we would like to be at least one step ahead of adver-
saries when they attack our systems. We argue that by intelligently incorporating
deception methods in our security models we can start achieving that. This is
because the further we enhance our abilities to detect adversaries the further ahead
of them we position ourselves. If we take an example of external network probing,
if we simply detect an attack and identify a set of IP addresses and domain names as
“bad,” we do not achieve much: these can be easily changed and adversaries will
become more careful not to raise an alarm the next time they probe our systems.
However, if we go one more step to attribute them by factors that are more difficult
to change, it can cause greater difficulty for future attacks. For example, if we are able
to deceive attackers in ways that allow us to gather information that distinguishes
them based on fixed artifacts (such as distinctive protocol headers, known tools,
and/or behavior and traits), we have a better position for defense. The
attackers will now have a less clear idea of how we are able to detect them, and
when they know, it should be more difficult for them to change these attributes.
The deployment of the cyber kill-chain was seen as fruitful for Lockheed when
they were able to detect an intruder who successfully logged into their system using
the SecurID attack [33]. We adopt this model with slight modification to better
reflect our additions.
Many deception techniques, such as honeypots, work in isolation and inde-
pendently of other parts of current information systems. This design decision has
been partly driven by the security risks associated with honeypots. We argue that
intelligently augmenting our systems with interacting deception-based techniques
can significantly enhance our security and give us the ability to achieve deception
in depth. If we examine Table 1, we can see that we can apply deception at every
stage of the cyber kill-chain, allowing us to break the chain and possibly attribute
attackers.

Table 1 Mapping deception to the kill-chain model

| Cyber kill-chain phase | Deception |
| --- | --- |
| Reconnaissance | Artificial ports, fake sites |
| Weaponization and delivery | Create artificial bouncing back, sticky honeypots |
| Exploitation and installation | Create artificial exploitation response |
| Command and control (operation) | Honeypot |
| Lateral movement and persistence | HoneyAccounts, honeyfiles |
| Staging and exfiltration | Honeytokens, endless files, fake keys |

At the reconnaissance stage we can lure adversaries by creating a site
and having honey-activities that mimic a real-world organization. As an example,
an organization can subscribe with a number of cloud service providers and have
honey activities in place while monitoring any activities that signal external interest.
Another example is to address the problem of spear-phishing by creating a number
of fake personas and disseminating their information on the Internet while at
the same time monitoring their contact details to detect any probing activities; some
commercial security firms currently do this.

3.3 Deception and Obscurity

Deception always involves two basic steps: hiding the real and showing the false.
This, at first glance, contradicts the widely believed misinterpretation of Kerckhoffs’s
principle, “no security through obscurity.” A more correct English translation of
Kerckhoffs’s principle is the one provided by Petitcolas in [34]: “The system must not
require secrecy and can be stolen by the enemy without causing trouble.”
The misinterpretation leads some security practitioners to believe that any
“obscurity” is ineffective, while this is not the case. Hiding a system from an
attacker or having a secret password does increase the work factor for the attacker—
until the deception is detected and defeated. So long as the security does not
materially depend on the obscurity, the addition of misdirection and deceit provides
an advantage. It is therefore valuable for a designer to include such mechanisms in
a comprehensive defense, with the knowledge that such mechanisms should not be
viewed as primary defenses.
In any system design there are three levels of viewing a system’s behavior and
responses to service requests [29]:
• Truthful. In such systems, the processes will always respond to any input with
full “honesty.” In other words, the system’s responses are always “trusted” and
accurately represent the internal state of the machine. For example, when the user
asks for a particular network port, a truthful system responds with either a real
port number or denies the request giving the specific reason of such denial.
• Naively Deceptive. In such systems, the processes attempt to deceive the
interacting user by crafting an artificial response. However, if the user knows
the deceptive behavior, e.g. by analyzing the previous deceptive response used
by the system, the deception act becomes useless and will only alert the user
that the system is trying to deceive her. For example, the system can designate
a specific port that is used for deceptive purposes. When the attacker asks for
a port, without carrying the appropriate permissions, this deceptive port is sent
back.
• Intelligently Deceptive. In this case, the system’s “deceptive behavior” is indistin-
guishable from the normal behavior even if the user has previously interacted
with the system. For example, an intelligently-deceptive system responds to
unauthorized port listening requests the same as a normal allowed request. How-
ever, extra actions are taken to monitor the port, alert the system administrators,
and/or sandbox the listening process to limit the damage if the process downloads
malicious content.

3.4 Offensive Deception

Offensively, many current, common attacks use deceptive techniques as a corner-
stone of their success. For example, phishing attacks often use two-level deceptive
techniques; they deceive users into clicking on links that appear to be coming from
legitimate sources, which take them to the second level of deception where they will
be presented with legitimate-looking websites luring them to give their credentials.
The “Nigerian 419” scams are another example of how users are deceived into
providing sensitive information with the hope of receiving a fortune later.
In many of these cases, attackers focus on deceiving users as they are usually
the most vulnerable component. Kevin Mitnick showed a number of examples in
his book “The Art of Deception” [35] of how he used social engineering, i.e.,
deceptive skills, to gain access to many computer systems. Trojan horses, which are
more than 30 years old, are a prime example of how deception has been used to
infiltrate systems.
Phishing, Cross-site Scripting (XSS) [[36](https://webcache.googleusercontent.com/search?q=cache:S_BHqeXHETIJ:http…)] and Cross-site Request Forgery
(XSRF) [[37](https://webcache.googleusercontent.com/search?q=cache:S_BHqeXHETIJ:http…)] are some examples of using deception. Despite more than a decade of
research by both the academic and private sectors, these problems are causing more
damage every year. XSS and XSRF have remained on OWASP’s top-ten list since
they were first added in 2007 [38]. The effectiveness of offensive deception
techniques should motivate security researchers to think of positive applications for
deception in security defenses.
4 A Framework to Integrate Deception in Computer Defenses

We presented a framework that can be used to plan and integrate deception in
computer security defenses [39]. Many computer defenses that use deception were
ad-hoc attempts to incorporate deceptive elements in their design. We show how our
framework can be used to incorporate deception in many parts of a computer system
and discuss how we can use such techniques effectively. A successful deception
should present plausible alternative(s) to the truth and these should be designed to
exploit specific adversaries’ biases, as we will discuss later.
The framework discussed in this chapter is based on the general deception
model discussed by Bell and Whaley in [40]. There are three general phases of any
deceptive component; namely planning, implementing and integrating, and finally
monitoring and evaluating. In the following sections we discuss each one of those
phases in more detail. The framework is depicted in Fig. 3.

4.1 The Role of Biases

In cognitive psychology, a bias refers to “an inclination to judge others or interpret
situations based on a personal and oftentimes unreasonable point of view” [41].
Biases are a cornerstone component to the success of any deception-based
mechanism. The target of the deception needs to be presented with a plausible
“deceit” to successfully deceive and/or confuse him. If the target perceives this
deceit to be non-plausible she is more inclined to reject it instead of believing it,
or at least raise her suspicions about the possibility of currently being deceived.
A successful deception should exploit a bias in the attackers’ perception and provide
them with one or more plausible alternatives to the truth.
Thompson et al. discuss four major groups of biases any analysts need to be
aware of: personal biases, cultural biases, organizational biases, and cognitive biases
[42]. As can be seen in Fig. 2, the more specific the bias being exploited in a
deceptive security tool is, the less such a tool can be generalized. For example,
exploiting a number of personal biases, specific to an attacker, might not be
easily generalized to other adversaries who attack your system. However, the more
specific the choice of bias, the more effective the deceptive component is.
This is true partly because cognitive biases are well-known and adversaries might
intentionally guard themselves with an additional layer of explicit reasoning to
minimize their effects in manipulating their perceptions. In the following paragraphs
we discuss each one of these classes of biases.
[Fig. 2: Deception target biases]

4.1.1 Personal Biases
Personal biases are those biases that originate from either first-hand experiences
or personal traits, as discussed by Jervis in [43]. These biases can be helpful
in designing deceptive components/operation; however, they are (1) harder to
obtain as they require specific knowledge of potential attackers and (2) they make
deceptive components less applicable to a wider range of attackers while becoming
more powerful against specific attackers. Personal biases have been exploited in
traditional deception operations in war, such as exploiting the arrogance of Hitler’s
administration in World War II as part of Operation Fortitude [41].
4.1.2 Cultural Biases
Hofstede refers to cultural biases as the “software of the mind” [44]. They represent
the mental and cognitive ways of thinking, perception, and action by humans
belonging to these cultures. In a study conducted by Guss and Dorner, they found
that cultures influenced the subjects’ perception, strategy development and decision
choices, even though all those subjects were presented with the same data [45].
Hofstede discusses six main dimensions of cultures and assigns quantitative values
to those dimensions for each culture on his website (geert-hofstede.com). Also,
he associates different behaviors that correlate with his measurements. These
dimensions are:
1. Power Distance Index (PDI)—PDI is a measure of the expectation and accep-
tance that “power is distributed unequally.” Hofstede found that cultures with
high PDI tend to have a sense of loyalty, show of strength, and preference for the
in-group person. This feature can be exploited by a deception planner focusing
on the attacker’s sense of pride to get him to reveal himself, knowing that the attack
is originating from a high-PDI culture with a show-of-strength property.
2. Individualism versus Collectivism (IVC)—A collectivist society values the
“betterment of a group” at the expense of the individual. Hofstede found that
most cultures are collectivist, i.e. with low IVC index.
3. Masculine versus Feminine (MVF)—A masculine culture is a culture where
“emotional gender roles are clearly distinct.” For example, an attacker coming
from a masculine culture is more likely to discredit information and warnings
written by or addressed to a female. In this case, this bias can be exploited to
influence attackers’ behaviors.
4. Uncertainty Avoidance Cultures (UAI)—This measures the cultural response
to the unknown or the unexpected. High UAI means that this culture has a fairly
structured response to uncertainty, making the attackers’ anticipation of deception
and confusion a much easier task.
5. Long-Term Orientation Versus Short-Term Orientation (LTO vs. STO)—
STO cultures usually seek immediate gratification. For example, the defender
may sacrifice information of lesser importance to deceive an attacker into
thinking that such information is of importance, in support of an over-arching
goal of protecting the most important information.
6. Indulgence versus Restraint (IVR)—This dimension characterizes cultures on
their norms of how they choose activities for leisure time and happiness.
Wirtz and Godson summarize the importance of accounting for culture while
designing deception in the following quote: “To be successful the deceiver must
recognize the target’s perceptual context to know what (false) pictures of the world
will appear plausible” [46].
4.1.3 Organizational Biases
Organizational biases are of importance when designing deception for a target
within a heavily structured environment [41]. In such organizations there are many
gatekeepers who have the job of analyzing information and deciding what is to be
passed to higher levels of analysts. This is one example of how organizational biases
can be used. These biases can be exploited, causing important information to be
marked as less important while causing deceit to be passed to higher levels. One
example of organizational bias is the uneven distribution of information that led to
uneven perception and the United States’ failure to anticipate the Pearl Harbor attack
in 1941 [41].
4.1.4 Cognitive Biases
Cognitive biases are common among all humans across all cultures, personali-
ties, and organizations. They represent the “innate ways human beings perceive,
recall, and process information” [41]. These biases have long been studied by
many researchers around the world in many disciplines (particularly in cognitive
psychology); they are of importance to deception design as well as computing.
Tversky and Kahneman proposed three general heuristics our minds seem to
use to reduce a complex task to a simpler judgment decision—especially under
conditions of uncertainty—thus leading to some predictable biases [47]. These
are: representativeness, availability, and anchoring and adjustment. They defined
the representativeness heuristic as a “heuristic to evaluate the probability of an
event by the degree to which it is (i) similar in essential properties to its parent
population; and (ii) reflects the salient features of the process by which it is
generated” [47]. The availability heuristic is another bias that assesses the likelihood
of an uncertain event by the ease with which someone can bring it to mind. Finally,
the anchoring/adjustment heuristic is a bias that causes us to make estimations closer
to the initial values we have been provided with than is otherwise warranted.
Sloman presented a discussion of two reasoning systems postulated to be
common in humans: associative (System 1) and rule-based (System 2) [48]. System 1
is usually automatic and heuristic-based, and is usually governed by habits. System
2 is usually more logical with rules and principles. Both systems are theorized to
work simultaneously in the human brain; deception targets System 1 to achieve
more desirable reactions.
In 1994, Tversky and Koehler argued that people do not subjectively attach
probability judgments to events; instead they attach probabilities to the description
of these events [49]. That is, two different descriptions of the same event often lead
people to assign different probabilities to their likelihood. Moreover, the authors
postulate that the more explicit and detailed the description of the event is, the
higher the probability people assign to it. In addition, they found that unpacking the
description of the event into several disjoint components increases the probability
people attach to it. Their work provides an explanation for the errors often found
in probability assessments associated with the “conjunction fallacy” [50]. Tversky
and Kahneman found that people usually would give a higher probability to the
conjunction of two events, e.g. P(X and Y), than a single event, e.g. P(X) or P(Y).
They showed that humans are usually more inclined to believe a detailed story with
explicit details over a short compact one.

4.2 Planning Deception

There are six essential steps to planning a successful deception-based defensive
component. The first, and often neglected, step is specifying exactly the strategic
goals the defender wants to achieve. Simply augmenting a computer system with
honey-like components, such as honeypots and honeyfiles, gives us a false sense
that we are using deception to lie to adversaries. It is essential to detail exactly
what the goals of using any deception-based mechanisms are. As an example, it
is significantly different to set up a honeypot for the purpose of simply capturing
malware than having a honeypot to closely monitor APT-like attacks.
After specifying the strategic goals of the deception process, we need to
specify—in the second step of the framework—how the target (attacker) should
react to the deception. This determination is critical to the long-term success of
any deceptive process. For example, the work of Zhao and Mannan in [51] deceives
attackers launching online guessing attacks into believing that they have found a
correct username and password. The strategic goal of this deception process is to
direct an attacker to a “fake” account thus wasting their resources and monitoring
their activities to learn about their objectives. It is crucial to analyze how the
target should react after the successful “fake” login. The obvious reaction is that
the attacker would continue to laterally move in the target system, attempting
further compromise. However, an alternative response is that the attacker ceases
the guessing attack and reports to its command and control that a successful
username/password pair has been found. In consideration of the second alternative
we might need to maintain the username/password pair of the fake account and keep
that account information consistent for future targeting.
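A toy sketch of that consistency point follows. The names, threshold, and decoy
rate are invented (this is not Zhao and Mannan's implementation); the idea shown
is that a keyed, deterministic per-guess decision lets the same fake credential
keep "working" across later visits, which keeps the deceit stable.

```python
# Sketch: after enough failed guesses, some wrong passwords "succeed" into a
# monitored decoy session. The HMAC makes the decision deterministic, so a
# fake credential stays consistent for future targeting.
import hashlib
import hmac

FAILS: dict = {}
THRESHOLD = 20                      # invented tuning knob
SECRET = b"server-side-secret"      # hypothetical keyed-decision secret

def check_login(user: str, password: str, real_db: dict):
    if real_db.get(user) == hashlib.sha256(password.encode()).hexdigest():
        return "real-session"
    FAILS[user] = FAILS.get(user, 0) + 1
    if FAILS[user] >= THRESHOLD:
        tag = hmac.new(SECRET, f"{user}:{password}".encode(), hashlib.sha256)
        if tag.digest()[0] < 8:     # ~3% of guesses land in the decoy
            return "decoy-session"  # instrumented honey account
    return None

db = {"alice": hashlib.sha256(b"correct horse").hexdigest()}
print(check_login("alice", "guess42", db))
```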
Moreover, part of this second step is to specify how we desire an attacker to react
such that we may try to influence his perception and thus lead him to the desired
reaction. Continuing with the example in the previous paragraph, if we want the
attacker to log in again so we have more time to monitor and set up a fake account,
we might cause an artificial network disconnection that will cause the target to log in
again.
4.2.1 Adversaries’ Biases
Deception-based defenses are useful tools that have been shown to be effective in
many human conflicts. Their effectiveness relies on the fact that they are designed to
exploit specific biases in how people think, making them appear to be plausible but
false alternatives to the hidden truth, as discussed above. These mechanisms give
defenders the ability to learn more about their attackers, reduce indirect information
leakages in their systems, and provide an advantage with regard to their defenses.
Step 3 of planning deception is to understand the attackers’ biases. As discussed
earlier, biases are a cornerstone component to the success of any deception-based
mechanisms. The deceiver needs to present a plausible deceit to successfully
deceive and/or confuse an adversary. If attackers decide that such information is
not plausible they are more inclined to reject it, or at least raise their suspicions
about the possibility of currently being deceived. When the defender determines
the strategic goal of the deception and the desired reactions by the target, he needs
to investigate the attacker’s biases to decide how best to influence the attacker’s
perception to achieve the desired reactions.
One example of using biases in developing some deceptive computer defenses
is using the “confirmation bias” to lead adversaries astray and waste their time and
resources. Confirmation bias is defined as “the seeking or interpreting of evidence
in ways that are partial to existing beliefs, expectations, or a hypothesis in hand”
[52]. A computer defender can use this bias when responding to known adversarial
probing of the system’s perimeter. Traditional security defenses are intended to
detect and prevent such activity, by simply dropping such requests or actively
responding with an explicit denial. Taking this a step further by exploiting a
pre-existing expectation, i.e. the confirmation bias, we might provide a response
that the system is being taken down for regular maintenance or has suffered an
unexpected failure. With such a response, the defender prevents the illicit activity,
gains a pause in which to consider next steps, and perhaps wastes the adversary’s
time as they wait or investigate other avenues to continue their attacks.
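A minimal sketch of such a response policy, assuming a hypothetical watchlist of probing sources:

```python
KNOWN_BAD = {"203.0.113.7"}   # hypothetical watchlist of probing sources

def perimeter_response(src_ip: str) -> str:
    """Feed known probers a plausible story that matches their
    expectations (confirmation bias) instead of an explicit denial."""
    if src_ip in KNOWN_BAD:
        # The specific, detailed story is more believable than a bare
        # refusal and buys the defender time to plan next steps.
        return ("503 Service Unavailable: system down for scheduled "
                "maintenance, expected back in three hours")
    return "200 OK"
```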
Cultural biases play an important role in designing deceptive responses, as
discussed in Sect. 4.1.2. For example, some studies have found relationships between
the type of computer attack and the culture/country from which the attack
originated [53].
In computing, the conjunction fallacy, discussed in Sect. 4.1.4, can be
exploited by presenting the deception story as a conjunction of multiple detailed
components. For example, if deceivers want to misinform an attacker probing their
system by creating an artificial network failure, instead of simply blocking these
attempts, it is better to give a longer story. A message that says “Sorry the network
is down for some scheduled network maintenance. Please come back in three hours”
is more plausible than simply saying “The network is down” and thus more likely
to be believed.
4.2.2 Creating the Deception Story
After analyzing attackers’ biases the deceiver needs to decide exactly what
components to simulate/dissimulate; this is step 4 of the framework in Fig. 3.
In Fig. 4 we provide an overview of the different system components where
deception can be applied, exploiting the attacker’s biases to achieve the desired
reaction. Overall, deceit can be injected into the functionality and/or state of our
systems. We give a discussion of each one of these categories below and present
some examples.
System’s Decisions
We can apply deception to the different decisions any computer system makes.
As an example, Zhao and Mannan [51] apply deception at the system’s
authentication decision, deceiving adversaries by giving them access to
“fake” accounts in the case of online guessing attacks. Another system decision
we can exploit concerns firewalls. Traditionally, we add firewall rules that prevent
specific IP addresses from interacting with our systems after detecting that they
are sources of attacks. We consider this another form of data leakage, in line
with the discussion of Zhao and Mannan in [51]. Therefore, we can
augment firewalls by applying deception to their decisions by presenting adversaries
with plausible responses other than simply denying access.
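A minimal sketch of a deceptive firewall verdict, under the assumption of a hypothetical blocklist; the verdict names are illustrative, not real firewall actions in any particular product.

```python
import random

BLOCKLIST = {"198.51.100.23"}   # sources previously seen attacking us

def firewall_verdict(src_ip: str, dst_port: int) -> str:
    """Classic firewalls answer DROP/REJECT, which leaks that the source
    has been detected. A deceptive verdict keeps the attacker guessing."""
    if src_ip in BLOCKLIST:
        # Plausible alternatives to an explicit denial.
        return random.choice([
            "TARPIT",           # accept, then throttle to waste the attacker's time
            "FAKE_UNREACHABLE"  # emulate a transient network failure
        ])
    return "ACCEPT"
```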
System’s Software and Services
Reconnaissance is the first stage of any attack on any computing system, as
identified in the kill-chain model [32]. Providing fake systems and services has been
the main focus of honeypot-based mechanisms. Honeypots discussed earlier in this
chapter are intended to provide attackers with a number of fake systems running
[Fig. 3 Framework to incorporate deception in computer security defenses]
fake services. Moreover, we can use deception to mask the identities of our
existing software and services. The work of Murphy et al. in [54] recommended the use
of operating system obfuscation tools for Air Force computer defenses.
System’s Internal and Public Data
A honeyfile, discussed above, is an example of injecting deceit into the system’s
internal data. It can be applied to the raw data in computer systems, e.g., files and
directories, or to the administrative data that are used to make decisions and/or
[Fig. 4 Computer systems components where deception can be integrated with]
monitor the system’s activities. An example of applying deception to administrative
data can be seen in the honeywords proposal [24]. Deceit can also be injected into
the public data about our systems. Wang et al. made the case for disseminating public
data about “fake” personnel for the purpose of catching attacks such as spear
phishing [55]. Cliff Stoll did this during the events recounted in his book [4]. In addition, we
note that this category also includes offline stored data such as back-ups that can be
used as a focus of deception.
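A minimal sketch of the honeyfile idea: a decoy file whose access raises a silent alarm. The path and the polling approach are hypothetical simplifications; real honeyfile systems such as [22] hook into the file server rather than polling, and this sketch assumes the filesystem records access times.

```python
import os
import time

HONEYFILE = "/srv/share/passwords-backup.txt"   # hypothetical decoy path

def watch_honeyfile(interval: float = 5.0) -> None:
    """Poll the decoy's access time; any read is treated as a silent
    alarm to the administrator. Requires atime updates to be enabled
    (i.e., the filesystem is not mounted with noatime)."""
    last_atime = os.stat(HONEYFILE).st_atime
    while True:
        time.sleep(interval)
        atime = os.stat(HONEYFILE).st_atime
        if atime != last_atime:
            print(f"ALERT: honeyfile {HONEYFILE} was accessed")
            last_atime = atime
```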
System’s Activity
Different activities within a system are one source of information
leakage. For example, traffic flow analysis has long been studied as a means for
attackers to deduce information [56]. Additionally, a system’s activity has been
used as a means of distinguishing between a “fake” and a real system [25]. We can
intelligently inject some data about activities into our system to influence attackers’
perception and, therefore, their reactions.
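One way to inject such activity is to generate plausible background traffic, so that flow analysis reflects the story we want observers to see. The sketch below is ours; the endpoints are hypothetical and need not even exist for timing-only cover traffic.

```python
import random
import time
import urllib.request

# Hypothetical internal endpoints used purely as decoy destinations.
DECOY_URLS = ["http://intranet.example/reports", "http://intranet.example/mail"]

def inject_decoy_activity(rounds: int = 10) -> None:
    """Generate plausible-looking background traffic at random intervals
    to shape what an eavesdropper's flow analysis will observe."""
    for _ in range(rounds):
        url = random.choice(DECOY_URLS)
        try:
            urllib.request.urlopen(url, timeout=2)  # fire-and-forget fetch
        except OSError:
            pass  # a failed fetch still produces observable traffic timing
        time.sleep(random.uniform(1.0, 30.0))
```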
System’s Weaknesses
Adversaries probe computer systems trying to discover and then exploit any
weakness (vulnerability). Often, these adversaries come prepared with a list of
possible vulnerabilities and then try to use them until they discover something that
works. Traditional security mechanisms aid adversaries by promptly
responding to any attempt to exploit fixed, i.e. patched, vulnerabilities with
a denial. This response leaks the information that these vulnerabilities are
known and fixed. When we inject deceit into this aspect of our systems we can
misinform adversaries by confusing them—by not giving them a definitive answer
whether the exploit has succeeded—or by deceiving them by making it appear as if
the vulnerability has been exploited.
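A minimal sketch of this response policy, assuming a hypothetical list of patched vulnerabilities; the response labels are illustrative.

```python
import random

PATCHED_CVES = {"CVE-2014-0160"}   # vulnerabilities we know are fixed

def exploit_response(cve_id: str) -> str:
    """A prompt denial confirms the hole is patched. Responding
    ambiguously, or as if the exploit worked, leaks nothing and can
    route the attacker into instrumented territory."""
    if cve_id in PATCHED_CVES:
        return random.choice([
            "TIMEOUT",        # confuse: no definitive success or failure
            "FAKE_SUCCESS",   # deceive: pretend the exploitation worked
        ])
    return "DENIED"           # other attempts get the traditional answer
```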
System’s Damage Assessment
This relates to the previous component; however, the focus here is to make the
attacker perceive that the damage caused is more or less than the real damage.
We may want the adversary to believe that he has caused more damage than he
actually has, so as to either stop the attack or cause the attacker to become less
aggressive. This is especially important in the context of the OODA loop discussed
earlier in this chapter. We might want the adversary to believe that he has caused less
damage if we want to learn more about the attacker by prompting a more aggressive
attack.
System’s Performance
Influencing the attacker’s perception of the system’s performance may put the deceiver
in an advantageous position. This has been seen in the use of sticky honeypots
and tarpits, discussed at the beginning of this chapter, which are intended to slow the
adversary’s probing activity. Tarpits have also been used to throttle the spread of
network malware. In a related fashion, Somayaji et al. proposed a method to deal
with intrusions by slowing the operating system response to a series of anomalous
system calls [57].
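A minimal sketch in the spirit of that slowdown approach; the delay schedule and cap are hypothetical parameters, not values from [57].

```python
import time

def throttled_respond(anomaly_score: int, payload: bytes) -> bytes:
    """Delay grows with the count of recent anomalous requests,
    degrading the attacker's probing throughput while leaving normal
    (low-score) traffic effectively untouched."""
    delay = min(0.01 * (2 ** anomaly_score), 60.0)   # capped exponential backoff
    time.sleep(delay)
    return payload
```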
System’s Configurations
Knowledge of the configuration of the defender’s systems and networks is often of
great importance to the success of the adversary’s attack. In the lateral movement
phase of the kill-chain adversarial model, attackers need to know how and where
to move to act on their targets. In their red-teaming experiment, Cohen and Koike
deceived adversaries into attacking the targeted system in a particular sequence from
a networking perspective [58].
After deciding which components to simulate/dissimulate, we can apply one of
Bell and Whaley’s techniques discussed in [29]. We give an example of how each
of these techniques can be used in the following paragraphs.
• Using Masking—This has been used offensively where attackers hide potentially
damaging scripts in the background of the page by matching the text color with
the background color. When we apply hiding to software and services, we can
hide the fact that we are running some specific services when we detect a probing
activity. For example, when we receive an SSH connection request from a known
bad IP address we can mask our SSH daemon and respond as if the service is not
running or as if it is encountering an error.
• Using Repackaging—In several cases it might be easier to “repackage” data as
something else. In computing, repackaging has long been used to attack computer
users. The infamous cross-site scripting (XSS) attack uses this technique where
an attacker masks a dangerous post as harmless to steal the user’s cookies when
a victim views the post. Another example can be seen in the cross-site request
forgery (XSRF) attacks where an adversary deceives a user into visiting some
innocuous looking web pages that silently instruct the user’s browser to engage
in some unwanted activities. In addition, repackaging techniques are used by
botnet Trojans that repackage themselves as anti-virus software to deceive users
into installing them so an attacker can take control of their machines. From the
defensive standpoint, a repackaging act can be seen in HoneyFiles, discussed
above, that repackage themselves as normal files while acting internally as silent
alarms to system administrators when accessed.
• Using Dazzling—This is considered to be the weakest form of dissimulation,
where we confuse the targeted objects with others. An example of using dazzling
can be seen in the “honeywords” proposal [24]. The scheme hides each user’s
hashed password among an extra (N − 1) hashes of other, similar, passwords,
dazzling an attacker who obtains the credentials database (a minimal sketch of
the idea follows this list).
• Using Mimicking—In computing, phishing attacks are a traditional example of
an unwanted deceiving login page mimicking a real website login. An attacker
takes advantage of users by deceiving them into giving up their credentials by
appearing as the real site. From a defensive perspective, we can apply mimicking
to software and services by making our system mimic the responses of a different
system, e.g., respond as if we are running a version of Windows XP while we
are running Windows 7. This will waste attackers’ resources in trying to exploit
our Windows 7 machine thinking it is Windows XP, as well as increase the
opportunity for discovery. This is seen in the work of Murphy et al. in operating
system obfuscation [[54](https://webcache.googleusercontent.com/search?q=cache:S_BHqeXHETIJ:http…)]
• Using Inventing—Mimicking requires the results to look like something else;
when this is not easy to achieve invention can be used instead. This technique
has seen the most research in the application of deception to computer security
defenses. Honeypots are one prominent example: inventing a number of nodes
in an organization’s network with the goal of convincing an attacker that they are
real systems.
• Using Decoying—This technique is used to attract adversaries’ attention away
from the most valuable parts of a computer system. Honeypots are used, in some
cases, to deceive attackers by appearing more vulnerable than other parts of the
organization, thereby capturing attackers’ attention. This can be seen in the work
of Carroll and Grosu [59].
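As promised in the Dazzling item above, here is a minimal sketch of the honeywords idea [24]: the credentials store holds N hashes per user, only one of which is real, and a separate “honeychecker” knows which index is genuine, so a match on any other hash signals a stolen credentials database. The hash function and decoy generation are deliberately simplified assumptions.

```python
import hashlib
import secrets

def h(pw: str) -> str:
    # Toy hash for illustration; a real system would use a slow, salted KDF.
    return hashlib.sha256(pw.encode()).hexdigest()

def enroll(password: str, decoys: list[str]) -> tuple[list[str], int]:
    """Store the real hash among (N - 1) decoy hashes in random order.
    The index of the real one goes to a separate honeychecker."""
    hashes = [h(p) for p in decoys]
    real_index = secrets.randbelow(len(hashes) + 1)
    hashes.insert(real_index, h(password))
    return hashes, real_index   # hashes -> credentials DB; index -> honeychecker

def check(attempt: str, hashes: list[str], real_index: int) -> str:
    if h(attempt) not in hashes:
        return "reject"
    if hashes.index(h(attempt)) == real_index:
        return "accept"
    return "alarm: honeyword used, credentials DB likely stolen"

# Example: a decoy password matching triggers the silent alarm.
db_row, idx = enroll("correct-horse", ["correct-h0rse", "c0rrect-horse"])
print(check("c0rrect-horse", db_row, idx))
```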
After deciding which deceptive technique to use we need to analyze the patterns
attackers perceive and then apply one or more of those techniques to achieve the
desired reactions.
Deceit is an active manipulation of reality. We argue that reality can be
manipulated in one of three general ways, as depicted in Fig. 5. We can manufacture
reality, alter reality, and/or hide reality. This can be applied to any one of the
components we discussed above.
[Fig. 5 Creating deceit. (a) Manipulation of reality. (b) Deception can be applied to the nature, existence and/or value of data]
In addition, reality manipulation is not only to be applied to the existence of
the data in our systems—it can be applied to two other features of the data. As
represented in Fig. 5, we can manipulate reality with respect to the existence of
the data, the nature of the data, and/or the value of the data. The existence of the
data can be manipulated not only for the present but also with regard to when the
data was created; this can be achieved, for example, by manipulating time stamps.
With regard to the nature of the data, we can manipulate its size, as in the example
of endless files, and when and why it was created. The value of the data can
also be manipulated. For example, log files are usually considered important data
that adversaries try to delete to cover their tracks. Making a file appear as a log file
will increase its value from the adversary’s perspective.
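A minimal sketch of the time-stamp manipulation mentioned above, using standard-library calls; the path and date are hypothetical.

```python
import os
from datetime import datetime, timezone

def backdate(path: str, when: datetime) -> None:
    """Set a file's access and modification times to an arbitrary past
    instant, making planted deceit look long-established. (Creation
    time is filesystem-specific and not covered by os.utime.)"""
    ts = when.timestamp()
    os.utime(path, (ts, ts))

# Example: make a decoy "log" appear to date from 2012.
# backdate("/var/log/app-2012.log", datetime(2012, 3, 1, tzinfo=timezone.utc))
```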
At this step, it is crucial to specify exactly when the deception process should
be activated. It is usually important that legitimate users’ activity should not be
hindered by the deceptive components. Optimally, the deception should only be
activated in the case of malicious interactions. However, we recognize that this may
not always be possible as the lines between legitimate and malicious activities might
be blurry. We argue that many defensive measures can apply deceptive techniques
in place of traditional denial-based defenses while managing these tradeoffs.
4.2.3 Feedback Channels and Risks
Deception-based defenses, like many advanced computer defenses, are not a single
one-time defensive measure. It is essential to monitor these defenses,
and more importantly measure the impact they have on attackers’ perceptions
and actions. This is step 5 in the deception framework. We recognize that if an
attacker detects that he is being deceived, he can use this to his advantage to make
a counter-deception reaction. To successfully monitor such activities we need to
clearly identify the feedback channels that can and should be used to monitor and
measure any adversary’s perceptions and actions.
In the sixth and final step before implementation and integration, we need to
consider that deception may introduce some new risks for which organizations need
to account. For example, the fact that adversaries can launch a counter-deception
operation is a new risk that needs to be analyzed. In addition, an analysis needs to
be done of the effects of deception on legitimate users’ activities. The defender needs
to accurately identify potential risks associated with the use of such deceptive
components and ensure that residual risks are identified and accepted.

4.3 Implementing and Integrating Deception

Many deception-based mechanisms are implemented as a component disjoint
from real production systems, as in the honeypot example. With the
advancement of many detection techniques used by adversaries and malware,
attackers can detect whether they are in a real system or a “fake” system [25] and then
change behavior accordingly, as we discussed earlier in this chapter. A successful
deception operation needs to be integrated with the real operation. The honeywords
proposal [24] is an example of this tight integration, as there is no obvious way to
distinguish between a real and a “fake” password.

4.4 Monitoring and Evaluating the Use of Deception

Identifying and monitoring the feedback channels is critical to the success of
any deception operation/component. Hesketh discussed three general categories of
signals that can be used to determine whether a deception was successful [60]:
1. The target acts at the wrong time and/or place.
2. The target acts in a way that is wasteful of his resources.
3. The target delays acting or stops acting altogether.
Defenders need to monitor all the feedback channels identified in step 5 of the
framework. We note that there are usually three general outcomes from the use of
any deceptive component. The adversary might (1) believe it, where the defender
usually sees one of the three signs of a successful deception highlighted above,
(2) suspect it, or (3) disbelieve it. When an attacker suspects that a deceptive
component is being used, we must decide whether to increase the level
of deception or stop the deceptive component to avoid exposure. Often deception
can be enhanced by presenting more (and perhaps, true) information that makes
the deception story more plausible. This can be included as a feedback loop in
the framework. Such an observation should prompt the defender to review his
analysis of the attacker’s biases (i.e., step 3) and the methodology used to create
the deceit (i.e., step 4). Furthermore, the deceiver might employ multiple levels of
deception based on the interaction with the attacker during the attack.
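As a minimal sketch, the three outcomes and Hesketh’s signals can be mapped to coarse responses; the signal names below are hypothetical labels for observations, not items from [60].

```python
def assess_deception(signals: set) -> str:
    """Map observed attacker behavior to the three general outcomes
    discussed above, with the coarse response each one suggests."""
    believed = {"acted_wrong_time_or_place", "wasted_resources", "stopped_acting"}
    if signals & believed:
        return "believed: continue, consider reinforcing the story"
    if "probing_the_deceit" in signals:
        return "suspects: add more (perhaps true) detail, or withdraw the deceit"
    if "aggressive_retaliation" in signals:
        return "disbelieved: execute the step-6 contingency plan"
    return "inconclusive: keep monitoring the feedback channels"
```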
When an attacker disbelieves the presented deceit we need to have active
monitoring and a detailed plan of action. This should be part of the sixth step
of planning in our framework, where risks are assessed. In addition, during our
discussions with security practitioners many have indicated that some attackers
often act aggressively when they realize that they have been deceived. This can
be one of the signals used during the monitoring stage to measure attackers’
reactions to the deceptive component. In addition, this behavior can be used as one
of the biases to be exploited by other deceptive mechanisms that may focus on
deceiving the attacker about the system’s damage assessment.

Acknowledgements The material in this chapter is derived from [29]. Portions of this work
were supported by National Science Foundation Grant EAGER-1548114, by Northrop Grumman
Corporation (NGCRC), and by sponsors of the Center for Education and Research in Information
Assurance and Security (CERIAS).

References

1. Verizon, “Threats on the Horizon – The Rise of the Advanced Persistent Threat.” http://www.verizonenterprise.com/DBIR/
2. J. J. Yuill, Defensive Computer-Security Deception Operations: Processes, Principles and
Techniques. PhD Dissertation, North Carolina State University, 2006.
3. B. Cheswick, “An Evening with Berferd in Which a Cracker is Lured, Endured, and Studied,”
in Proceedings of Winter USENIX Conference, (San Francisco), 1992.
4. C. P. Stoll, The Cuckoo’s Egg: Tracing a Spy Through the Maze of Computer Espionage.
Doubleday, 1989.
5. E. H. Spafford, “More than Passive Defense.” http://goo.gl/5lwZup, 2011.
6. L. Spitzner, Honeypots: Tracking Hackers. Addison-Wesley Reading, 2003.
7. G. H. Kim and E. H. Spafford, “Experiences with Tripwire: Using Integrity Checkers for
Intrusion Detection,” tech. rep., Department of Computer Sciences, Purdue University, West Lafayette,
IN, 1994.
8. D. Dagon, X. Qin, G. Gu, W. Lee, J. Grizzard, J. Levine, and H. Owen, “Honeystat: Local
Worm Detection Using Honeypots,” in Recent Advances in Intrusion Detection, pp. 39–58,
Springer, 2004.
9. C. Fiedler, “Secure Your Database by Building HoneyPot Architecture Using a SQL Database
Firewall.” http://goo.gl/yr55Cp.
10. C. Mulliner, S. Liebergeld, and M. Lange, “Poster: Honeydroid-Creating a Smartphone
Honeypot,” in IEEE Symposium on Security and Privacy, 2011.
11. M. Wählisch, A. Vorbach, C. Keil, J. Schönfelder, T. C. Schmidt, and J. H. Schiller, “Design,
Implementation, and Operation of a Mobile Honeypot,” tech. rep., Cornell University Library,
2013.
12. C. Seifert, I. Welch, and P. Komisarczuk, “Honeyc: The Low Interaction Client Honeypot,”
Proceedings of the 2007 NZCSRCS, 2007.
13. K. G. Anagnostakis, S. Sidiroglou, P. Akritidis, K. Xinidis, E. Markatos, and A. D. Keromytis,
“Detecting Targeted Attacks Using Shadow Honeypots,” in Proceedings of the 14th USENIX
Security Symposium, 2005.
14. D. Moore, V. Paxson, S. Savage, C. Shannon, S. Staniford, and N. Weaver, “Inside the Slammer
Worm,” IEEE Security & Privacy, vol. 1, no. 4, pp. 33–39, 2003.
15. T. Liston, “LaBrea: “Sticky” Honeypot and IDS.” http://labrea.sourceforge.net/labrea-info…, 2009.
16. F. Cohen, “The Deception Toolkit.” http://www.all.net/dtk/, 1998.
17. N. Rowe, E. J. Custy, and B. T. Duong, “Defending Cyberspace with Fake Honeypots,” Journal
of Computers, vol. 2, no. 2, pp. 25–36, 2007.
18. T. Holz and F. Raynal, “Detecting Honeypots and Other Suspicious Environments,” in
Information Assurance Workshop, pp. 29–36, IEEE, 2005.
19. C. Kreibich and J. Crowcroft, “Honeycomb: Creating Intrusion Detection Signatures Using
Honeypots,” ACM SIGCOMM Computer Communication Review, vol. 34, no. 1, pp. 51–56,
2004.
20. D. Moore, C. Shannon, D. J. Brown, G. M. Voelker, and S. Savage, “Inferring Internet
Denial-of-Service Activity,” ACM Transactions on Computer Systems (TOCS), vol. 24, no. 2,
pp.115–139, 2006.
21. L. Spitzner, “Honeytokens: The Other Honeypot.” http://www.symantec.com/connect/articles/…, 2003.
22. J. J. Yuill, M. Zappe, D. Denning, and F. Feer, “Honeyfiles: Deceptive Files for Intrusion
Detection,” in Information Assurance Workshop, pp. 116–122, IEEE, 2004.
23. M. Bercovitch, M. Renford, L. Hasson, A. Shabtai, L. Rokach, and Y. Elovici, “HoneyGen:
An Automated Honeytokens Generator,” in IEEE International Conference on Intelligence
and Security Informatics (ISI’11), pp. 131–136, IEEE, 2011.
24. A. Juels and R. L. Rivest, “Honeywords: Making Password-Cracking Detectable,” in Pro-
ceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security,
pp. 145–160, ACM, 2013.
25. X. Chen, J. Andersen, Z. M. Mao, M. Bailey, and J. Nazario, “Towards an Understanding of
Anti-Virtualization and Anti-Debugging Behavior in Modern Malware,” in IEEE International
Conference on Dependable Systems and Networks, pp. 177–186, IEEE, 2008.
26. M. Sourour, B. Adel, and A. Tarek, “Ensuring Security-In-Depth Based on Heterogeneous
Network Security Technologies,” International Journal of Information Security, vol. 8, no. 4,
pp. 233–246, 2009.
27. K. Heckman, “Active Cyber Network Defense with Denial and Deception.” http://goo.gl/Typwi4, Mar. 2013.
28. R. V. Jones, Reflections on Intelligence. London: William Heinemann Ltd, 1989.
29. M. H. Almeshekah, Using Deception to Enhance Security: A Taxonomy, Model and Novel
Uses. PhD thesis, Purdue University, 2015.
30. M. Harkins, “A New Security Architecture to Improve Business Agility,” in Managing Risk
and Information Security, pp. 87–102, Springer, 2013.
31. J. Boyd, “The Essence of Winning and Losing.” http://www.danford.net/boyd/essence.htm,
1995.
32. E. M. Hutchins, M. J. Cloppert, and R. M. Amin, “Intelligence-Driven Computer Network
Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains,” Leading
Issues in Information Warfare & Security Research, vol. 1, p. 80, 2011.
33. K. J. Higgins, “How Lockheed Martin’s ’Kill Chain’ Stopped SecurID Attack.” http://goo.gl/r9ctmG, 2013.
34. F. Petitcolas, “La Cryptographie Militaire.” http://goo.gl/e5IOj1.
35. K. D. Mitnick and W. L. Simon, The Art of Deception: Controlling the Human Element of
Security. Wiley, 2003.
36. P. Vogt, F. Nentwich, N. Jovanovic, E. Kirda, C. Kruegel, and G. Vigna, “Cross-Site Scripting
Prevention with Dynamic Data Tainting and Static Analysis,” in The 2007 Network and
Distributed System Security Symposium (NDSS’07), 2007.
37. A. Barth, C. Jackson, and J. C. Mitchell, “Robust Defenses for Cross-Site Request Forgery,”
Proceedings of the 15th ACM Conference on Computer and Communications Security
(CCS’08), 2008.
38. OWASP, “OWASP Top 10.” http://owasptop10.googlecode.com/files/OWASP Top 10 - 2013.pdf, 2013.
39. M. H. Almeshekah and E. H. Spafford, “Planning and Integrating Deception into Com-
puter Security Defenses,” in New Security Paradigms Workshop (NSPW’14), (Victoria, BC,
Canada), 2014.
40. J. B. Bell and B. Whaley, Cheating and Deception. Transaction Publishers New Brunswick,
1991.
41. M. Bennett and E. Waltz, Counterdeception Principles and Applications for National Security.
Artech House, 2007.
42. J. R. Thompson, R. Hopf-Wichel, and R. E. Geiselman, “The Cognitive Bases of Intelligence
Analysis,” tech. rep., US Army Research Institute for the Behavioral and Social Sciences, 1984.
43. R. Jervis, Deception and Misperception in International Politics. Princeton University Press,
1976.
44. G. Hofstede, G. Hofstede, and M. Minkov, Cultures and Organizations. McGraw-Hill, 3rd ed.,
2010.
45. D. Gus and D. Dorner, “Cultural Differences in Dynamic Decision-Making Strategies in a
Non-linear, Time-delayed Task,” Cognitive Systems Research, vol. 12, no. 3–4, pp. 365–376,
2011.
46. R. Godson and J. Wirtz, Strategic Denial and Deception. Transaction Publishers, 2002.
47. A. Tversky and D. Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science,
vol. 185, pp. 1124–31, Sept. 1974.
48. S. A. Sloman, “The Empirical Case for Two Systems of Reasoning,” Psychological Bulletin,
vol. 119, no. 1, pp. 3–22, 1996.
49. A. Tversky and D. Koehler, “Support Theory: A Nonextensional Representation of Subjective
Probability,” Psychological Review, vol. 101, no. 4, p. 547, 1994.
50. A. Tversky and D. Kahneman, “Extensional Versus Intuitive Reasoning: The Conjunction
Fallacy in Probability Judgment,” Psychological Review, vol. 90, no. 4, pp. 293–315, 1983.
51. L. Zhao and M. Mannan, “Explicit Authentication Response Considered Harmful,” in New
Security Paradigms Workshop (NSPW ’13), (New York, New York, USA), pp. 77–86, ACM
Press, 2013.
52. R. S. Nickerson, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of
General Psychology, vol. 2, pp. 175–220, June 1998.
53. C. Sample, “Applicability of Cultural Markers in Computer Network Attacks,” in 12th
European Conference on Information Warfare and Security, (University of Jyvaskyla, Finland),
pp. 361–369, 2013.
54. S. B. Murphy, J. T. McDonald, and R. F. Mills, “An Application of Deception in Cyberspace:
Operating System Obfuscation,” in Proceedings of the 5th International Conference on
Information Warfare and Security (ICIW 2010), pp. 241–249, 2010.
55. W. Wang, J. Bickford, I. Murynets, R. Subbaraman, A. G. Forte, and G. Singaraju, “Detecting
Targeted Attacks by Multilayer Deception,” Journal of Cyber Security and Mobility, vol. 2,
no. 2, pp. 175–199, 2013.
56. X. Fu, On Traffic Analysis Attacks and Countermeasures. PhD Dissertation, Texas A & M
University, 2005.
57. S. A. Hofmeyr, S. Forrest, and A. Somayaji, “Intrusion Detection Using Sequences of System
Calls,” Journal of Computer Security, vol. 6, no. 3, pp. 151–180, 1998.
58. F. Cohen and D. Koike, “Misleading Attackers with Deception,” in Proceedings from the 5th
annual IEEE SMC Information Assurance Workshop, pp. 30–37, IEEE, 2004.
59. T. E. Carroll and D. Grosu, “A Game Theoretic Investigation of Deception in Network
Security,” Security and Communication Networks, vol. 4, no. 10, pp. 1162–1172, 2011.
60. R. Hesketh, Fortitude: The D-Day Deception Campaign. Woodstock, NY: Overlook Hardcover,
2000.
http://www.springer.com/978-3-319-32697-9
1
0
https://www.reddit.com/r/voynich
https://sciencesurvey.link/
https://sites.google.com/site/48questions48answers
The Solution to the Voynich Manuscript: MS 408.
You have come to the webpage for the academic papers:
1. Linguistic Missing Links.
https://ling.auf.net/lingbuzz/003737
https://sites.google.com/site/48questions48answers/home/Linguistic-missing-…
2. Linguistically Dating and Locating Manuscript MS408.
https://ling.auf.net/lingbuzz/003808
https://sites.google.com/site/48questions48answers/home/Linguistic-missing-…
3. Consonants & Vowels, Castles & Volcanoes.
https://ling.auf.net/lingbuzz/004381
These papers will be of interest and use to scholars from various
academic disciplines: linguistics, semiotics, graphology, cryptology,
Medieval history and so on. The manuscript is demonstrably shown
to be written in a proto-Romance language, using a proto-Italic alphabet.
This makes it a unique document in both respects.
The manuscript dates to 1444 from the island of Ischia, when
proto-Romance was the common language of the population of the
Mediterranean region. It is closest to modern Portuguese, Catalan
and Galician, because Ischia was part of the Crown of Aragon in the
15th century.
https://voynich.ninja/
https://ling.auf.net/lingbuzz
https://www.youtube.com/watch?v=hH_TKz4IaA4
https://www.youtube.com/watch?v=p6keMgLmFEk
https://www.youtube.com/watch?v=4cRlqE3D3RQ
https://www.youtube.com/watch?v=8nHbImkFKE4
https://www.youtube.com/watch?v=lhtZc-nFNt0
https://ciphermysteries.com/2017/11/10/gerard-cheshire-vulgar-latin-siren-c…
https://www.reddit.com/r/AskReddit/comments/8ut6ed/like_the_voynich_manuscr…
https://www.reddit.com/r/UnresolvedMysteries/comments/a8g8jy/mysterious_boo…
https://www.reddit.com/search?q=voynich+manuscript&sort=top
https://www.reddit.com/r/symbology
https://www.reddit.com/r/occult
https://www.reddit.com/r/astrology
https://www.reddit.com/r/medievalart
https://www.reddit.com/r/historyofmedicine
https://www.reddit.com/r/tarot
Cicada3301
1
0