the complex social construction of trust in cryptographic systems

coderman coderman@gmail.com
Wed Feb 3 05:24:54 PST 2016


Conspicuous Chatter
Traffic analysis, anonymous and covert communications, and other magic.

https://conspicuouschatter.wordpress.com/2016/02/03/the-social-construction-of-trust-in-cryptographic-systems/


The Social Construction of Trust in Cryptographic Systems
3 February 2016

(This is an extract from my contribution to Harper, Richard.
“Introduction and Overview”, Trust, Computing, and Society. Ed.
Richard H. R. Harper. 1st ed. New York: Cambridge University Press,
2014. pp. 3-14. Cambridge Books Online. Web. 03 February 2016.
http://dx.doi.org/10.1017/CBO9781139828567.003)

Cryptography has been used for centuries to secure military,
diplomatic, and commercial communications that may fall into the hands
of enemies and competitors (Kahn 1996). Traditional cryptography
concerns itself with a simple problem: Alice wants to send a message
to Bob over some communication channel that may be observed by Eve,
but without Eve being able to read the content of the message. To do
this, Alice and Bob share a short key, say a passphrase or a poem.
Alice then uses this key to scramble (or encrypt) the message, using a
cipher, and sends the message to Bob. Bob is able to use the shared
key to invert the scrambling (or “decrypt”) and recover the message.
The hope is that Eve, without the knowledge of the key, will not be
able to unscramble the message, thus preserving its confidentiality.
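
As a rough illustration of this setting, here is a toy sketch in Python
(not a real or vetted cipher; the names and values are invented for the
example): Alice and Bob stretch a short shared passphrase into a
keystream and XOR it with the message, so only someone who knows the
passphrase can undo the scrambling.

    # Toy shared-key encryption: Alice and Bob share a short passphrase,
    # derive a keystream from it, and XOR it with the message.
    # Illustrative only; this is not a vetted cipher.
    import hashlib

    def keystream(passphrase: bytes, length: int) -> bytes:
        """Stretch a short shared key into `length` pseudo-random bytes."""
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(passphrase + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def toy_encrypt(passphrase: bytes, message: bytes) -> bytes:
        ks = keystream(passphrase, len(message))
        return bytes(m ^ k for m, k in zip(message, ks))

    toy_decrypt = toy_encrypt  # XOR with the same keystream inverts the scrambling

    shared_key = b"a line of a poem only Alice and Bob know"
    ciphertext = toy_encrypt(shared_key, b"Meet me at dawn")  # what Eve observes
    assert toy_decrypt(shared_key, ciphertext) == b"Meet me at dawn"

Eve sees only the ciphertext; without the passphrase she has to defeat
the construction itself, which is precisely the kind of assumption
discussed below.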

It is important to note that in this traditional setting we have not
removed the need for a secure channel. The shared key needs to be
exchanged securely, because its compromise would allow Eve to read
messages. Yet, the hope is that the key is much shorter than the
messages subsequently exchanged, and thus easier to transport securely
once (by memorizing it or by better physical security). What about the
cipher? Should the method by which the key and the message are
combined not be kept secret? In “La Cryptographie Militaire” in 1883,
Auguste Kerckhoffs stated a number of principles, including that only
the key should be considered secret, not the cipher method itself
(Kerckhoffs 1883). Both the reliance on a small key and the fact that
other aspects of the system are public are applications of the
minimization principle we have already seen in secure system
engineering. It is by minimizing what has to be trusted for the
security policy to hold that one can build and verify secure systems –
in the context of traditional cryptography, in principle, this is just
a short key.

Kerckhoffs argues that only the key, not the secrecy of the cipher, is
in the trusted computing base (TCB). But a key property of the cipher
is relied on: Eve must not be able to use an encrypted message and
knowledge of the cipher to recover the message without access to the
secret key. This is very different from previous security assumptions
or components of the TCB. It is not about the physical restrictions on
Eve, and it is not about the logical operations of the computer
software and hardware that could be verified by careful inspection. It
comes down to an assumption that Eve cannot solve a mathematical
problem that is presumed to be difficult. Thus, how can you trust a
cipher? How can you
trust that the adversary cannot solve a mathematical problem?

Truth be told, this was not a major concern until relatively recently,
at least compared with the long history of cryptography. Before
computers, encoding and decoding had to be performed by hand or using
electromechanical machines. Usability, speed, cost of the equipment,
and freedom from decoding errors were the main considerations in
choosing a cipher. When it came to security, it was assumed that if a
“clever person” proposed a cipher, it would take someone much cleverer
to break it. It was even sometimes assumed that ciphers were of such
complexity that there was “no way” to decode messages without the key.
The assumption that other nations might not have a supply of “clever”
people may have had to do with the colonial ideology of the nineteenth
and early twentieth centuries. Events leading up to the 1950s clearly
contradicted this: ciphers used by major military powers were often
broken by their opponents.

In 1949, Claude Shannon set out to define what a perfect cipher would
be. He wanted it to be “impossible” to solve the mathematical problem
underlying the cipher (Shannon 1949). The results of this seminal work
are mixed. On the positive side, there is a perfect cipher that, no
matter how clever an adversary is, cannot be broken – the one-time
pad. On the downside, the key of the cipher must be as long as the
message, must be absolutely random, and can only be used once.
Therefore the advantage of short keys, in terms of minimizing their
exposure, is lost, and the cost of generating keys is high (avoiding
bias in generating random keys is harder than expected). Furthermore,
Shannon proved that no cipher with shorter keys can be perfectly
secure. Because the one-time pad is impractical in many cases, how
can one trust a cipher with short keys, knowing that its security
depends on the difficulty of the underlying mathematical problem? For about thirty
years, the United States and the UK followed a very pragmatic approach
to this: they kept the cryptological advances of World War II under
wraps; they limited the export of cryptographic equipment and know-how
through export regulations; and their signals intelligence agencies –
the NSA and GCHQ, respectively – became the largest worldwide
employers of mathematicians and the largest buyers of
supercomputers. Additionally, in their role of eavesdropping on their
enemies’ communications, they evaluated the security of the systems
used to protect government communications. Assurance in cryptography
came at the cost of these agencies being the organizations that knew
the most about cryptography in the world.
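
The one-time pad itself is simple enough to sketch in a few lines of
Python (a minimal illustration, not a production implementation): the
key is drawn uniformly at random, is exactly as long as the message,
and must never be reused.

    # One-time pad sketch: perfect secrecy requires a truly random key,
    # as long as the message, kept secret, and never reused.
    import secrets

    def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
        key = secrets.token_bytes(len(message))   # key as long as the message
        ciphertext = bytes(m ^ k for m, k in zip(message, key))
        return key, ciphertext

    def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
        return bytes(c ^ k for c, k in zip(ciphertext, key))

    key, ct = otp_encrypt(b"ATTACK AT DAWN")
    assert otp_decrypt(key, ct) == b"ATTACK AT DAWN"

Even in this sketch the practical burden is visible: the key material
is as long as the traffic it protects and must itself be delivered
securely, which is why the one-time pad is impractical in most
settings.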

The problem with this arrangement is that it relies on a monopoly of
knowledge around cryptology. Yet, as we have seen with the advent of
commercial telecommunications, cryptography became important for
nongovernment uses. Even the simplest secure remote authentication
mechanism requires some cryptography if it is to be used over insecure
channels. Therefore, keeping cryptography under wraps was not an
option: in 1977, the NSA approved the IBM design for a public cipher,
the Data Encryption Standard (DES), for public use. It was
standardized the same year by the US National Bureau of Standards
(NBS), the predecessor of the National Institute of Standards and
Technology (NIST).

The publication of DES sparked wide interest in cryptography in the
public academic community. Many people wanted to understand how it
worked and why it was secure. Yet the fact that the NSA had tweaked its
design, for undisclosed reasons, created widespread suspicion of the
cipher. The fear was that a subtle flaw had been introduced to make
decryption easy for intelligence agencies. It is fair to say that many
academic cryptographers did not trust DES!

Another important innovation in 1976 was presented by Whitfield Diffie
and Martin Hellman in their work “New Directions in Cryptography”
(Diffie & Hellman 1976). They show that it is possible to preserve the
confidentiality of a conversation over a public channel, without
sharing a secret key! This is today known as “Public Key
Cryptography,” because it relies on Alice knowing a public key for
Bob, shared with anyone in the world, and using it to encrypt a
message. Bob has the corresponding private part of the key, and is the
only one that can decode messages used with the public key. In 1977,
Ron Rivest, Adi Shamir, and Leonard Adleman proposed a further system,
the RSA, that also allowed for the equivalent of “digital signatures”
(Rivest et al. 1978).
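
The core of the Diffie-Hellman idea can be sketched in Python with
deliberately tiny, insecure parameters (real deployments use carefully
chosen groups and far larger numbers): Alice and Bob each keep a
private exponent, exchange only public values, and still end up with
the same shared secret.

    # Toy Diffie-Hellman key exchange (tiny, insecure parameters; illustration only).
    import secrets

    p = 4294967291          # a small public prime; real systems use much larger ones
    g = 5                   # a public base value (a generator of the group in real systems)

    a = secrets.randbelow(p - 2) + 1   # Alice's private value
    b = secrets.randbelow(p - 2) + 1   # Bob's private value

    A = pow(g, a, p)        # Alice's public value, sent over the open channel
    B = pow(g, b, p)        # Bob's public value, sent over the open channel

    assert pow(B, a, p) == pow(A, b, p)   # both derive the same shared secret
    # Eve observes p, g, A and B; recovering the secret requires solving the
    # discrete logarithm problem in this group.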

What is different in terms of trusting public key cryptography versus
traditional ciphers? Both the Diffie-Hellman system and the RSA system
base their security on number-theoretic problems. For example, RSA
relies on the difficulty of factoring integers with two very large
factors (hundreds of digits). Unlike traditional ciphers – such as DES
– that rely on many layers of complex operations, public key algorithms
base their security on a handful of elegant number-theoretic problems.
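
To make the reliance on factoring concrete, here is “textbook” RSA in
Python with absurdly small primes (real keys use primes hundreds of
digits long, plus padding and other safeguards omitted here): anyone
who can factor the public modulus n back into p and q can recompute
the private key.

    # Textbook RSA with tiny primes (illustration only).
    p, q = 61, 53                  # the secret prime factors
    n = p * q                      # public modulus: 3233
    e = 17                         # public exponent
    phi = (p - 1) * (q - 1)        # 3120
    d = pow(e, -1, phi)            # private exponent: modular inverse of e (Python 3.8+)

    message = 65
    ciphertext = pow(message, e, n)            # encrypt with the public key (e, n)
    assert pow(ciphertext, d, n) == message    # decrypt with the private key (d, n)
    # Eve, given only (e, n), must factor n into p and q to recover d.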

Number theory, a discipline that G.H. Hardy argued at the beginning of
the twentieth century was very pure in terms of its lack of any
practical application (Hardy & Snow 1967), quickly became the deciding
factor in whether one could trust the most significant innovation in the
history of cryptology! As a result, a great deal of interest and funding
directed academic mathematicians toward studying whether the mathematical
problems underpinning public key cryptography were in fact difficult,
and how difficult they were.

Interestingly, public key cryptography does not eliminate the need to
totally trust the keys. Unlike in traditional cryptography, there is no
need for Bob to share a secret key with Alice to receive confidential
communications. Instead, Bob needs to keep the private key secret and
not share it with anyone else. Maintaining the confidentiality of
private keys is simpler than sharing secret keys safely, but it is far
from trivial given their long-term nature. What needs to be shared is
Bob’s public key. Furthermore, Alice needs to be sure she is using the
public key associated with Bob’s private key; if Eve convinces
Alice to use a public key of Eve’s choosing to encrypt messages to Bob,
then Eve can decrypt them all.

The need to securely associate public keys with entities was
recognized early on. Diffie and Hellman proposed to publish a book, a
bit like a telephone directory, associating public keys with people. In
practice, a public key infrastructure is used to do this: trusted
authorities, like Verisign, issue digital certificates to attest that
a particular key corresponds to a particular Internet address. These
authorities are in charge of ensuring that the identity, the keys, and
their association are correct. The digital certificates are “signed”
using the authorities’ signature keys, and anyone can verify these
signatures.
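
The essence of a certificate can be sketched as a signed binding
between a name and a public key. The sketch below uses Ed25519
signatures from the third-party Python “cryptography” package (an
assumption; any signature scheme would do), and its ad hoc certificate
format is invented for illustration; real PKIs use the far richer
X.509 format.

    # Sketch: a "certificate" as an authority-signed (name, public key) binding.
    # Assumes the third-party "cryptography" package; real certificates are X.509.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ed25519

    authority_key = ed25519.Ed25519PrivateKey.generate()    # the CA's signing key
    bob_key = ed25519.Ed25519PrivateKey.generate()           # Bob's key pair

    bob_public = bob_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    statement = b"bob.example.org|" + bob_public             # the binding being attested
    signature = authority_key.sign(statement)                # the CA vouches for it

    # Anyone who already trusts the authority's public key can check the binding;
    # verify() raises InvalidSignature if the statement was tampered with.
    authority_key.public_key().verify(signature, statement)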

The use of certificate authorities is not a natural architecture in
many cases. If Alice and Bob know each other, they can presumably use
another way to ensure Alice knows the correct public key for Bob.
Similarly, if a software vendor wants to sign updates for their own
software, they can presumably embed the correct public key into it,
instead of relying on public key authorities to link their own key
with their own identity.

The use of a public key infrastructure (PKI) is necessary when Alice
wants to communicate with Bob without any previous relationship. In
that case Alice, given only a valid name for Bob, can establish a
private channel to Bob (as long as she trusts the PKI). This point is
often confused: the PKI ensures that Alice talks to Bob, but not that
Bob is “trustworthy” in any other way. For example, a Web browser can
establish a secure channel to a Web service that is compromised or
simply belongs to the mafia. The secrecy provided by the channel does
not, in that case, provide any guarantees as to the operation of the
Web service. Recently, PKI services and browsers have tried to augment
their services by issuing certificates only to entities that are
verified as somehow legitimate.

Deferring the link between identities and public keys to trusted third
parties places these third parties in a system’s TCB. Can certification
authorities be trusted to support your security policy? In some ways,
no. As implemented in current browsers, any certification authority
(CA) can sign a digital certificate for any site on the Internet
(Ellison & Schneier 2000). This means that a rogue national CA (say,
from Turkey) can sign certificates for the U.S. State Department that
browsers will believe. In 2011, the Dutch certificate authority
DigiNotar was hacked, and its secret signature key was stolen
(Fox-IT 2012). As a result, fake certificates were issued for a number
of sensitive sites. Do CAs have incentives to protect their keys? Do
they have enough incentives to check the identity of the people or
entities behind the certificates they sign?

Cryptographic primitives like ciphers and digital signatures have been
combined in a variety of protocols. One of the most famous is the
Secure Sockets Layer (SSL), now succeeded by Transport Layer Security
(TLS), which provides encrypted access to Web sites on the Internet
(all sites using the https:// scheme). Interestingly, once secure
primitives are combined into larger protocols, their composition is
not guaranteed to be secure. For example, a number of attacks have
been identified against SSL and TLS that are not related to
weaknesses in the underlying ciphers (Vaudenay 2002).
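
What a browser does when it opens an https:// address can be
approximated with Python’s standard ssl module (a minimal sketch; the
hostname is only an example): the client completes the handshake only
if the server presents a certificate that chains to a CA in the local
trust store and matches the requested name.

    # Opening a TLS ("https") connection with Python's standard library.
    # The handshake combines public key operations (certificate verification,
    # key agreement) with symmetric ciphers for the bulk of the traffic.
    import socket
    import ssl

    hostname = "www.example.org"               # example destination
    context = ssl.create_default_context()     # loads the system's trusted CA certificates

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print(tls.version())                 # negotiated protocol, e.g. "TLSv1.3"
            print(tls.getpeercert()["subject"])  # identity attested by a CA-signed certificate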

The observation that cryptographic schemes are brittle and could be
insecure even if they rely on secure primitives (as was the case for many deployed
protocols) led to a crisis within cryptologic research circles. The
school of “provable security” proposes that rigorous proofs of
security should accompany any cryptographic protocol to ensure it is
secure. In fact, “provable security” is a bit of a misnomer: the basic
building blocks of cryptography, namely public key schemes and ciphers,
cannot be proved secure, as Shannon argued. So a security proof is
merely a reduction proof: it shows that any weakness in the complex
cryptographic scheme can be reduced to a weakness in one of the
primitives, or a well-recognized cryptographic hardness assumption. It
effectively shows that the security of a complex cryptographic scheme
reduces to the security of a small set of cryptographic components, not
unlike arguments about a small trusted computing base. Yet even those
proofs of security work at a certain level of abstraction and often do
not include all details of the protocol. Furthermore, not all
properties can be described in the logic used to perform the proofs.
As a result, even provably secure protocols have been found to have
weaknesses (Pfitzmann & Waidner 1992).

So, the question of “How much can you trust cryptography?” has in part
itself been reduced to “How much can you trust the correctness of a
mathematical proof on a model of the world?” and “How much can one
trust that a correct proof in a model applies to the real world?”
These are deep epistemological questions, and it is somewhat ironic
that national, corporate, and personal security depends on them. In
addition, one may have to trust certificate authorities and
assumptions about the hardness of deep mathematical problems. Therefore,
it is fair to say that trust in cryptographic mechanisms is an
extremely complex social process.


