Re: Please send me SSL problems...
I'd just like to let all cypherpunks know that I'm really interested in getting any feedback you might have about security problems with Netscape products. I'm particularly interested in bugs in our implementation of SSL, and in problems in the protocol that are not addressed in SSL 3.0.
We have been collecting comments on SSL 3.0, and have started incorporating that feedback into our spec. Please don't assume that our lack of response means that we are ignoring your comments. Between Navigator 2.0 and things like the SSL challenge and the RNG fire drill, we just have not had the time to get a new rev of the spec out. Hopefully soon...
Jeff, the SSL specification has a severe *architectural* problem - it assumes that Internet Protocols are APIs - interface standards, and that you can just slide a "layer" underneath without anyone noticing. Such is not the case - all the Internet Protocols are real protocol standards, in that they specify the syntax, order, and semantics of the actual bits on the wire. The IETF quite explicitly doesn't care about APIs - that's a host software issue, and it doesn't matter what the host software looks like (or even what the machine looks like), so long as it gets the bits on the wire right, according to the protocol spec. This is how the Internet can make very strong guarantees about interoperability.

You can't fiddle with a communication protocol without getting agreement from everyone about the change, or extend it in a way that is compatible with the protocol you're modifying, on a per-protocol basis (e.g. adding a TELNET negotiation option to TELNET for encryption, an FTP command to FTP, etc). Otherwise, all you've done is made a private, non-interoperable change to an existing protocol that guarantees interoperability *failures* between systems that implement the existing specification, versus your own version of HTTP, or TELNET, or whatever.

In short, the SSL specification, as written, proposes to change all Internet application protocols, globally - "slide in a layer." That's not how it's done, and it's not the right place to do it, even if it appears to work in an enclave of systems.

About the SSL protocol, encryption algorithms, or the SQA that went into 'em, I think other people have expounded on those issues eloquently, and so I have nothing to add to that.

Erik Fair
Jeff, the SSL specification has a severe *architectural* problem - it assumes that Internet Protocols are APIs - interface standards, and that you can just slide a "layer" underneath without anyone noticing. Such is not the case - all the Internet Protocols are real protocol standards, in that they specify the syntax, order, and semantics of the actual bits on the wire. The IETF quite explicitly doesn't care about APIs - that's a host software issue, and it doesn't matter what the host software looks like (or even what the machine looks like), so long as it gets the bits on the wire right, according to the protocol spec. This is how the Internet can make very strong guarantees about interoperability.
I agree with parts of this and disagree with other parts. The IETF does not as a whole care about APIs, the one exception being the GSS API, which appears to be intended as a means of circumventing ITAR. Nobody asked me about the GSS API, but a lot of people have assumed that because it comes from the IETF it should be the basis for the Web security protocols. I'm afraid that I can't see any real connection between the GSS view of the world and my own. Hence I find that API more of a hindrance (having to explain why not to use it) than a help.

The specific criticism of SSL, that it is a layer replacement, highlights a fundamental error made by many IETF people. The purpose of a layered protocol model is precisely to permit the underlying layers to be altered without affecting the upper layers. NNTP runs very happily on either TCP/IP or DECnet, for example.

Where I think SSL went wrong was in the approach taken to URLs. Rather than define HTTPS://foo.com/ it should have specified a new transport: HTTP://foo.com:80:SSL/ I think the blame for that mess should be laid at another door, however. Basically the URI working group should have understood this issue and defined a syntax for handling both SSL-like objects and also DECnet, ATM, and the like. This would fit much better with the idea of SSL as a wrapper for an arbitrary protocol.

It's worth pointing out that the people working at Netscape now are a rather different bunch from the original team.

Phill
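To make Phill's proposal concrete, here is a small sketch of what parsing his transport-in-URL syntax might look like. The syntax (`HTTP://foo.com:80:SSL/`) is exactly as he writes it; the function name, field names, and defaults (port 80, transport "TCP") are my own illustrative assumptions, not from any spec.

```python
import re

def parse_transport_url(url):
    """Parse the hypothetical scheme://host[:port][:TRANSPORT]/path form."""
    m = re.match(r"(?i)^(http)://([^/:]+)(?::(\d+))?(?::([A-Za-z]+))?(/.*)?$", url)
    if not m:
        raise ValueError("not a recognised URL")
    scheme, host, port, transport, path = m.groups()
    return {
        "scheme": scheme.lower(),
        "host": host,
        "port": int(port) if port else 80,        # assumed default
        "transport": (transport or "TCP").upper(),  # assumed default
        "path": path or "/",
    }

print(parse_transport_url("HTTP://foo.com:80:SSL/"))
# {'scheme': 'http', 'host': 'foo.com', 'port': 80, 'transport': 'SSL', 'path': '/'}
```

The point of the shape is that the security wrapper is named in the transport slot, so the scheme (and hence the application protocol) stays plain `http`.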
A few comments from Tim Hudson, who has put SSL into telnet and ftp. He is not on this list, but since he is my personal 'put SSL into applications' person (I just write the library :-), I felt his comments would be better than mine :-)

On Wed, 20 Sep 1995, Erik E. Fair wrote:
Jeff, the SSL specification has a severe *architectural* problem - it assumes that Internet Protocols are APIs - interface standards, and that ... You can't fiddle with a communication protocol without getting agreement from everyone about the change, or extend it in a way that is compatible with the protocol you're modifying, on a per-protocol basis (e.g. adding a TELNET negotiation option to TELNET for encryption, an FTP command to FTP, etc). Otherwise, all you've done is made a private, non-interoperable
[tjh] I agree with this statement - applying SSL at the TCP level for all communication is possible but *not* desirable in the general case - i.e. for internet communication. A much better approach (and the one that I have taken for adding SSL into TELNET and FTP) is to use *existing* negotiation mechanisms for dynamically switching on SSL for a given link, based on determining dynamically whether the server you are connecting to will support it. Naturally you want options at both the server and the client that enable you to:

- fall back to "normal/insecure" mode if SSL is not available
- drop the connection in the client if SSL is not negotiated
- drop the connection in the server if SSL is not negotiated

My aim when adding SSL (in the form of SSLeay) into an existing server was *always* to be able to run the *one* server for both the "old" and the "new" protocol. I really was getting annoyed at seeing announcements of yet-another-security package that could be installed, providing another potentially insecure access path into the system, that only supported connecting to it with its own fixed protocol.

SSL can be seen in its simplest form as just a nice mechanism for dynamically negotiating a *cipher* - this is how I initially set things up, so that the "normal" authentication mechanisms still had to be used for connection - i.e. SSLtelnet still required the normal account password to get access. This has since been "enhanced" so that you can switch on an option that uses a certificate exchange as the security access mechanism (this is not switched on by default).

For TELNET the "best" place to start seemed to be the work done with SRA TELNET ... it already had all the hooks in the right places for using the RFC-defined TELNET extensions that enable negotiation of authentication and encryption. (The documentation that came with SRAtelnet was also nice and clear.) For FTP there was a similar way of doing things, so I used it too ...
and FTP is a *great* example of a protocol where doing things at the TCP level (transparently) would be "bad" - it uses two ports ... one of which is usually dynamically allocated ... and you certainly don't want to redo the initial SSL negotiation for each file that you transfer! (SSLftp reuses the session ID.)

Another thing that is worth noting (and worth looking at too) is the different APIs offered in SSLREF and SSLeay (... naturally I prefer SSLeay, as I have influence over the author ;-) ... From what I know of the SSLREF API, it takes the approach of providing wrapper functions that you use *instead* of the "normal" functions ... there are (not using the right names) SSLaccept and SSLconnect calls that perform the accept() and connect() along with all the other things required by the SSL protocol, hidden away - which sounds nice until you want to do something like FTP ... where the connection for the DATA socket is formed in the opposite direction to the CONTROL socket. With SSLeay you do the accept() and connect() yourself ... as per normal ... and then run SSL_accept() or SSL_connect(), which does the "logical" SSL stuff ... so in FTP I can do a connect() and then an SSL_accept(), which looks funny but is the "right" thing to do.

SSLeay has only 2 function calls that operate on socket file descriptors: a single read() and a single write(). The most recent version will handle non-blocking IO if the application passes a file descriptor with it turned on. SSLeay does not do a single setsockopt(), ioctl(), fcntl(), accept(), bind(), select(), etc. If you haven't looked at SSLeay, or looked at the SSL protocol itself, then you really should grab it and have a read (while ignoring the politics and the WWW hype over SSL).

Tim

[eay] While there are problems with certificate distribution, these will be overcome. Ever tried general inter-realm authentication with Kerberos? Both SSLref and SSLeay interoperate. From what I know of the SSLref API, our APIs are quite different.
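Tim's connect()-then-SSL_accept() pattern rests on one idea: the direction of the TCP connection is independent of which end plays the server role in the handshake. The following toy sketch (plain sockets, not SSL itself; all names are mine) stands a two-message exchange in for the handshake to show that the end that "dialed out" can still run the server role, exactly as on an FTP data channel.

```python
import socket
import threading

def handshake_server(sock):
    # Server role: wait for the client's opening message, then reply.
    assert sock.recv(1024) == b"ClientHello"
    sock.sendall(b"ServerHello")

def handshake_client(sock):
    # Client role: speak first, then expect the server's reply.
    sock.sendall(b"ClientHello")
    assert sock.recv(1024) == b"ServerHello"

def data_channel_demo():
    # socketpair() stands in for the FTP DATA connection. The end that
    # conceptually called connect() runs the *server* role of the
    # handshake - the SSL_accept()-after-connect() pattern Tim describes.
    a, b = socket.socketpair()
    t = threading.Thread(target=handshake_client, args=(b,))
    t.start()
    handshake_server(a)   # "looks funny but is the right thing to do"
    t.join()
    a.close(); b.close()
    return "handshake complete"

print(data_channel_demo())
```

A wrapper-style API that fuses accept()+handshake into one call cannot express this asymmetry, which is the design point in SSLeay's favour here.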
Just because SSLref may 'force' you towards a particular style of SSL use does not mean the protocol forces you to use it this way.

eric

Standard billboard:
http://www.psy.uq.oz.au/~ftp/Crypto/
ftp.psy.uq.oz.au:/pub/Crypto/SSL/
ftp.psy.uq.oz.au:/pub/Crypto/SSLapps/

--
Eric Young | Signature removed since it was generating
AARNet: eay@mincom.oz.au | more followups than the message contents :-)
I don't think the API that SSLref exports is particularly interesting. We have no attachment to that API. I would expect someone who gets SSLref to rework the API to suit their application.

--Jeff

--
Jeff Weinstein - Electronic Munitions Specialist
Netscape Communication Corporation jsw@netscape.com - http://home.netscape.com/people/jsw
Any opinions expressed above are mine.
On Sep 20, 4:35am, "Erik E. Fair" (Time Keeper) wrote:
Subject: Re: Please send me SSL problems...
Jeff, the SSL specification has a severe *architectural* problem - it assumes that Internet Protocols are APIs - interface standards, and that you can just slide a "layer" underneath without anyone noticing. Such is not the case - all the Internet Protocols are real protocol standards, in that they specify the syntax, order, and semantics of the actual bits on the wire. The IETF quite explicitly doesn't care about APIs - that's a host software issue, and it doesn't matter what the host software looks like (or even what the machine looks like), so long as it gets the bits on the wire right, according to the protocol spec. This is how the Internet can make very strong guarantees about interoperability.
You can't fiddle with a communication protocol without getting agreement from everyone about the change, or extend it in a way that is compatible with the protocol you're modifying, on a per-protocol basis (e.g. adding a TELNET negotiation option to TELNET for encryption, an FTP command to FTP, etc). Otherwise, all you've done is made a private, non-interoperable change to an existing protocol that guarantees interoperability *failures* between systems that implement the existing specification, versus your own version of HTTP, or TELNET, or whatever. In short, the SSL specification, as written, proposes to change all Internet application protocols, globally - "slide in a layer." That's not how it's done, and it's not the right place to do it, even if it appears to work in an enclave of systems.
My view of SSL is that it should not generally be considered a transparent layer that can be plugged in below any application. I don't consider HTTP on top of SSL to be the same as HTTP, or something that can totally replace HTTP. That's why we use a different port and call it https: and not http:.

I think using TELNET and FTP as examples of protocols that can be transparently layered on top of SSL was unfortunate. I've looked at what it takes to make some existing protocols work with SSL, and I'm not convinced that it's always appropriate. For example, FTP and RCMD use multiple connections, which is a royal pain.

It seems that the thing you are objecting to is the wording in the spec, in the "motivation" section, that appears to suggest that the entire internet could run on top of SSL. I think that section of the spec could just be chopped out, and SSL would still be useful today without pretensions of world domination. If a secure IP standard emerges that is widely deployed and provides similar services, I don't see why SSL couldn't just go away (this is my opinion, not an official position of Netscape).

This was sort of off the top of my head. I've not spent long hours contemplating these questions...

--Jeff

--
Jeff Weinstein - Electronic Munitions Specialist
Netscape Communication Corporation jsw@netscape.com - http://home.netscape.com/people/jsw
Any opinions expressed above are mine.
participants (4):
- Eric Young
- Erik E. Fair
- hallam@w3.org
- Jeff Weinstein