Maybe It's Snake Oil All the Way Down

Ian Grigg iang at systemics.com
Tue Jun 3 05:40:05 PDT 2003


Eric Rescorla wrote:
> 
> Ian Grigg <iang at systemics.com> writes:
> > Eric Murray wrote:
> > It may be that the SSL underlying code is
> > perfect.  But that the application is weak
> > because the implementor didn't understand
> > how to drive it;  in which case, if he can
> > roll his own, he may end up with a more
> > secure overall package.
> I don't think this is likely to be true. In my experience,
> people who learn enough to design their own thing also learn
> enough to be able to do SSL properly.

True, although that raises the question
of how they learn.  Only by doing, I'd
say.  I think one learns a lot more from
making mistakes and building one's own
attempt than from following the words of
the wise.

> > > SSLv2, which was also designed by an
> > > individual, also had major flaws.  And that was the
> > > second cut!  I haven't seen v1, maybe Eric can
> > > shed some light on how bad it was.
> >
> > [ Someone commented before that v1 was not deemed
> > serious (Marc A?) and v2 was the more acceptable
> > starting point (Weinsteins?). ]
> That's not true as far as I know. V1 and V2 were designed
> by the same guy (Kipp Hickman). V1 is actually very similar
> to V2, except that the integrity stuff is all screwed up.
> As far as I can tell, the fact of the matter is that Kipp
> didn't understand the security issues until Abadi and
> to some extent Schiffman sold them some clues.


OK.  Then I am confused about the post that
came out recently.  It would be very interesting
to hear the story, written up.


> > Sure.  If someone does roll their own, then they
> > should get it reviewed.
> That's not my experience. WEP and PPTP come to mind.

Ah, good point:  There should be some
point on that list about building one's
cryptosystem outside the domain of an
institution, which tends to have too
many conflicting requirements, and
cannot limit itself to a simple system.

(And, yes, some protocols don't get
peer reviewed.  I wasn't debating that.)


> > But, that assumes an awful lot.  For a start,
> > that it exists.  SSL is touted as the answer
> > to everything, but it seems to be a connection
> > oriented protocol, which would make it less
> > use for speech, media, mail, chat (?), by way
> > of example.

> SSL is quite fine for chat, actually. It's one of the
> major things that people use for IM. The issue with
> speech and media isn't connection-orientation but
> rather datagram versus stream data.

I knew I was in trouble on chat; that's
why I stuck the question mark in
there :-)  We recently added an email-like
capability to our (homegrown) crypto
system, and intend to expand that to
chat.  But, in order to do that, we
have to expand the crypto subsystem
(SOX) to include connection-oriented
modes.

[ Hence, an open question floating
around here is "why don't we use SSL"
which hasn't been definitively answered
as yet. ]

> > Then there is understanding, both of the
> > protocol, and the project's needs.  I know
> > that when I'm in a big project and I come
> > across a complex new requirement, often, it
> > is an open question as to whether make or
> > buy is the appropriate choice.  I do know
> > that 'make' will always teach me about the
> > subject, and eventually, it will teach me
> > which one to buy, or it will give me a
> > system tuned to my needs.

> The history of people who go this course suggests otherwise.
> They generally get lousy solutions.

I think it would be very interesting to
do a study of all the cryptosystems out
there and measure what succeeds, what
doesn't, what's secure, and what's not;
what cost too much money, and what saved
money.

One of the issues that we see is that
too many security people assume that
"insecure" is "bad".  What they fail
to perceive is that an insecure system
is often sufficient for the times and
places.

WEP for example is perfectly fine, unless
you are attacked by a guy with a WEP
cracking kit!  Then it's a perfectly
lousy cryptosubsystem.

It's like the GSM story, in which, 8 years
down the track, Lucky Green cracked the
crypto by probing the SIMs over a period
of many months to extract the secret
algorithm (which algorithm then fell to
Ian Goldberg and Dave Wagner in a few hours).

In that case, some GSM guy said it
was good because it worked for 8 years;
that shows the design was good, doesn't
it?

And Lucky said, now you've got to replace
hundreds of millions of SIMs, that's got
to be a bad design, no?

(Lucky might be able to confirm the real
story there.)

Different ways of looking at the same
thing.  They are both valid points of
view.  To work out the difference, we
need to go to costs and benefits.  Who
won and who lost?  I never heard how
it panned out.

-- 
iang





More information about the cypherpunks-legacy mailing list