RISKS-LIST: Risks-Forum Digest Tuesday 15 October 2013 Volume 27 : Issue 53
ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, moderator, chmn ACM Committee on Computers and Public Policy
***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at http://www.risks.org as
http://catless.ncl.ac.uk/Risks/27.53.html
The current issue can be found at
http://www.csl.sri.com/users/risko/risks.txt
Contents:
Azerbaijan releases election results -- before the election started (PGN)
Computer Failure Cuts off Access to Food Benefits (PGN)
Another botched Black Tuesday for MS (Woody Leonhard via Gene Wirchenko)
D-Link SOHO Routers reported to contain backdoor (Bob Gezelter)
Russian government's political comment trolling operation exposed
(Lauren Weinstein)
EFF Resigns from Global Network Initiative (EFF)
Re: "Let's build a more secure Internet" (Peter Houppermans, Bob Frankston,
Fred Cohen)
Re: Why the NSA's attacks on the Internet must be made public (Fred Cohen)
Re: NSA data center 'meltdowns' force year-long delay (Paul Saffo)
Correction re: Cyber Schools Fleece Taxpayers (Gene Wirchenko)
Re: Our Founding Fathers ... (Thor Lancelot Simon)
Abridged info on RISKS (comp.risks)
----------------------------------------------------------------------
Date: Wed, 9 Oct 2013 22:39:07 PDT
From: "Peter G. Neumann"
Eli Dourado, "Can we ever trust the Internet again?",
*The New York Times*, 8 Oct 2013
As usual, the press gets it wrong, soup to nuts, starting with the premise that the Internet was ever worthy of trust in the first place, which leads to the question: trust for what? If you trusted the Internet for integrity, confidentiality, availability, use control, or accountability, you were making a mistake, and this is nothing new. I refer you to the series of articles I wrote in the mid-1990s called Internet Holes, and to the continuation of that series through the present day (http://all.net/Analyst/index.html). Not that the problems began then...
In the wake of the disclosures about the National Security Agency's surveillance programs, considerable attention has been focused on the agency's collaboration with companies like Microsoft, Apple and Google, which according to leaked documents appear to have programmed "back door" encryption weaknesses into popular consumer products and services like Hotmail, iPhones and Android phones.
The difference being that they used legal process or money to get willing cooperation? Does anybody really believe that this wasn't being done earlier by planted insiders? And why worry about the NSA, when the United States is only one of more than 100 countries likely undertaking the same sort of thing (many known to be doing so) since the beginning of the Internet?
But while such vulnerabilities are worrisome, equally important - and because of their technical nature, far less widely understood - are the weaknesses that the N.S.A. seems to have built into the very infrastructure of the Internet.
We didn't need them to build weaknesses in. The commercial companies are perfectly capable of doing it intentionally and by accident. Weaknesses were always there. In terms of understanding, while I believe the press widely ignored these issues for much of the last 30+ years, the information protection field has been pointing them out since the technology was put into use.
The concern is that even if consumer software companies like Microsoft and telecommunications companies like AT&T and Verizon stop cooperating with the N.S.A., your online security will remain compromised as long as the agency can still take advantage of weaknesses in the Internet itself.
As they always have and always likely will.
Fortunately, there is something we can do: encourage the development of an "open hardware" movement - an extension of the open-source movement that has led to software products like the Mozilla browser and the Linux operating system.
Open software has nothing on closed software in terms of protection. In fact, arguably, closed source has produced fewer vulnerabilities per line of code over time than open source. I say "arguably" because, as a field, we have few and poorly collected metrics of such things. But those metrics seem to indicate that open source is not more secure as a rule.
The open-source movement champions an approach to product development in which there is universal access to a blueprint, as well as universal ability to modify and redistribute the blueprint. Wikipedia is perhaps the best-known example of a product inspired by the movement. Open-source advocates typically emphasize two kinds of freedom that their products afford: they are available free of charge, and they can be used and manipulated free of restrictions.
Open source is not the same as free, not the same as anybody can (legally) modify it, or any such thing. It just means you can see the "blueprint".
But there is a third kind of freedom inherent in open-source systems: the freedom to audit. With open-source software, independent security experts can scrutinize the code for vulnerabilities - whether accidentally or intentionally introduced. The more auditing by the programming masses, the better the security. As the open-source software advocate Eric S. Raymond has put it, "given enough eyeballs, all bugs are shallow."
This is a fallacy. It is simply not true that more eyes make for better security, or that "all bugs are shallow" as a side effect. Experiments have historically shown that even if we point out the location of an intentional Trojan horse to within a few hundred lines of code, experts don't find it. And automated software doesn't even look for the sorts of intentional subversion used in many Trojan horses.
Perhaps the greatest open-source success story is the Internet itself - at least its "soft" parts. The Internet's communications protocols and the software that implements them are collaboratively engineered by loose networks of programmers working outside the control of any single person, company or government. The Internet Engineering Task Force, which develops core Internet protocols, does not even have formal membership and seeks contributions from developers all over the world.
And the Internet is full of holes. It is the best example of how open source does not provide protection. And its success is largely because it (the process) doesn't seek to provide protection. The Internet is designed for functionality - widespread, general, rapidly deployed, easily developed, flexible, changeable, etc. functionality. As such, it is designed to support rapid change, not stability. It is designed to be redundant, recoverable, etc. NOT private, unalterable, etc. "Security" is afforded by this approach, but not secrecy, integrity, use control, or accountability. Availability is somewhat questionable. The security provided is the ability to change, learn, adapt, create, do your own thing, etc.
But the problem is that the physical layer of the Internet's infrastructure - the hardware that transmits, directs and relays traffic online, as well as its closely knit software (or "firmware") - is not open-source. It is made by commercial computing companies like Cisco, Hewlett-Packard and Juniper Networks according to proprietary designs, and then sold to governments, universities, private companies and anyone else who wants to set up a network.
Making it "open source" will not help the situation. It will likely reveal far more vulnerabilities, but not fix them, and not reveal the tricky ones. But it will certainly cause these companies financial problems as their technical advantages over competitors collapse and their investment in new technology is reduced, thus reducing innovation and the rate of progress.
There is reason to be skeptical about the security of these networking products. The hardware firms that make them often compete for contracts with the United States military and presumably face considerable pressure to maintain good relations with the government. It stands to reason that such pressure might lead companies to collaborate with the government on surveillance-related requests.
And those made in China have Chinese Trojan horses.
Because these hardware designs are closed to public scrutiny, it is relatively easy for surveillance at the Internet's infrastructural level to go undetected. To make the Internet less susceptible to mass surveillance, we need to recreate the physical layer of its infrastructure on the basis of open-source principles.
This won't work. It will just make it more expensive to run the government surveillance programs, costing the taxpayers more money and forcing the NSA back into the darker corners.
At the moment, the open hardware movement is limited mostly to hobbyists - engineers who use the Internet to collaboratively build "open" devices like the RepRap 3D printer.
Which uses what open source processor chips? None! They all depend on proprietary chips.
But the Internet community, through a concerted effort like the one that currently sustains the Internet's software architecture, could also develop open-source, Internet-grade hardware. Governments like Brazil's that have forsworn further involvement with American Internet companies could adopt such nonproprietary equipment designs and have them manufactured locally, free from any N.S.A. interference.
As if this would free them. It won't.
The result would be Internet infrastructure, both hardware and software, that was 100 percent open and auditable.
Again, a fantasy. Even if realized, it would not accomplish the stated goal.
The "open source" version of the Internet would not be an improvement. It is
already largely open source, and has all of the problems that the
information age portends. It is an inherent property of the information age
that in order to have effective protection, we need to restrain ourselves
from doing the wrong thing in high volume and an effective government has to
restrain itself or be restrained by its people. But this is nothing
new. Perhaps we need well armed Internet militias.
Draft of the Xth amendment: A well regulated Militia, being necessary to the
security of a free State, the right of the people to keep and bear Internet
Arms, shall not be infringed.
Fred Cohen - 925-454-0171 - All.Net & Affiliated Companies
http://all.net/ PO Box 811 Pebble Beach, CA 93953
------------------------------
Date: Mon, 14 Oct 2013 07:01:57 -0700
From: Fred Cohen
Among IT security professionals, it has been long understood that the public disclosure of vulnerabilities is the only consistent way to improve security. That's why researchers publish information about vulnerabilities in computer software and operating systems, cryptographic algorithms, and consumer products like implantable medical devices, cars, and CCTV cameras.
This is a fallacy. There is no substantial science behind the asserted claim (that disclosure improves protection) and no statistics behind the actual claim (that IT security professionals have long understood that or even agree to the asserted claim). The rest of the article repeats this mistake. It asserts cause and effect without a substantial basis.
It's folly to believe that any NSA hacking technique will remain secret for very long.
Really! You may rest assured that they have plenty of methods that, while published long ago in some form, remain largely a secret to anyone who is affected by them. That's because, as a community, we don't bother to review the literature before proclaiming ourselves experts. Nothing I have seen published about what the NSA is asserted to have done is a big secret in terms of the ability to do it. The secret (if there is one) is that they did do it, with whom, etc. The techniques I have heard about are hardly a secret. Bribe a company, extort a company, plant an insider, plant a Trojan horse: not new, not secret methods. In terms of longevity, I would bet that there are lots of things still secret from the 1950s, some of which died with those who held them.
The NSA has two conflicting missions. Its eavesdropping mission has been getting all the headlines, but it also has a mission to protect US military and critical infrastructure communications from foreign attack. Historically, these two missions have not come into conflict. During the cold war, for example, we would defend our systems and attack Soviet systems.
The equities issue has always been present, and the equities have
historically always favored attack over defense. The question that needs to
be addressed is how this balance should be as opposed to how it has been. My
personal view is that the defense should be favored far more than it is at
present or has been in the past, but then I am a defender.
The reason for my view? Because the US and our allies are asymmetrically
dependent on information and technology. So successful attack can hurt us a
lot more than it hurts them. Meanwhile, successful defense depends on
knowledge, skills, effort, etc. which we presumably have more of than our
enemies. So if we build strong defenses that require ongoing effort, we will
win as long as we are willing to spend the effort and they are not. Of
course if it takes too much effort, it will sap our strength... and
somewhere in there is an equation to be produced and solved.
Fred Cohen - 925-454-0171 - All.Net & Affiliated Companies
http://all.net/ PO Box 811 Pebble Beach, CA 93953
------------------------------
Date: Wed, 09 Oct 2013 20:05:33 -0700
From: Paul Saffo
A couple thousand years ago, the way you moved from Slave or peon to Citizen in Imperial Rome was you raised enough money to afford a sword and shield ...
This is empirically false, and it's a shame to see made-up "facts" given
credibility by appearing in RISKS. Without this and the several other
similar assertions of "fact" in the piece I quote above, I'm not sure there
is any support for its argument at all.
If you'd like to know how changes in status really took place in Imperial
(or pre-Imperial) Rome, I can recommend Crook, J.A., _Law and Life Of Rome_,
90 B.C. - A.D. 212 (Ithaca: Cornell, 1967).
Thor Lancelot Simon, : Public Access Networks Corp., tls@panix.com
------------------------------
Date: Sun, 7 Oct 2012 20:20:16 -0900
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)
The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
comp.risks, the feed for which is donated by panix.com as of June 2011.
=> SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent)
if possible and convenient for you. The mailman Web interface can
be used directly to subscribe and unsubscribe:
http://lists.csl.sri.com/mailman/listinfo/risks
Alternatively, to subscribe or unsubscribe via e-mail to mailman
your FROM: address, send a message to
risks-request@csl.sri.com
containing only the one-word text subscribe or unsubscribe. You may
also specify a different receiving address: subscribe address= ... .
You may short-circuit that process by sending directly to either
risks-subscribe@csl.sri.com or risks-unsubscribe@csl.sri.com
depending on which action is to be taken.
Subscription and unsubscription requests require that you reply to a
confirmation message sent to the subscribing mail address. Instructions
are included in the confirmation message. Each issue of RISKS that you
receive contains information on how to post, unsubscribe, etc.
=> The complete INFO file (submissions, default disclaimers, archive sites,
copyright policy, etc.) is online.
http://www.CSL.sri.com/risksinfo.html
*** Contributors are assumed to have read the full info file for guidelines.
=> .UK users may contact
RISKS List Owner