Re: the underground software vulnerability marketplace and its hazards (fwd)
Right. And I fail to see how any of this is dangerous. Clearly people are free to sell information they create to anyone they choose under any terms they choose. (For example the iDEFENSE promise of the author to not otherwise reveal for 2 weeks to give iDEFENSE some value.) This commercialisation seems like a _good thing_ as it may lead to more breaks being discovered, and hence more secure software. (It won't remain secret for very long -- given the existance of anonymous remailers etc., but the time-delay in release allows the information intermediary -- such as iDEFENSE -- to sell the information to parties who would like it early, businesses for example people with affected systems. Criminal crackers who can exploit the information just assist in setting a fair price and forcing vendors and businesses to recognise the true value of the information. Bear in mind the seller can not know or distinguish between a subscriber who wants the information for their own defense (eg a bank or e-commerce site, managed security service provider), and a cracker who intends to exploit the information (criminal organisation, crackers for amusement or discovery of further inforamtion, private investigators, government agencies doing offensive information warfare domesticaly or internationally). I don't see any particular moral obligation for people who put their own effort into finding a flaw to release it to everyone at the same time. Surely they can release it earlier to people who pay them to conduct their research, and by extension to people who act as intermediaries for the purpose of negotiating better terms or being able to package the stream of ongoing breaks into more comprehensive subscription service. I think HP were wrong, and find their actions in trying to use legal scare tactics reprehensible: they should either negotiate a price, or wait for the information to become generally available. Adam On Thu, Aug 22, 2002 at 08:02:16AM -0700, Steve Schear wrote:
On August 7th, an entity known as "iDEFENSE" sent out an announcement, which is appended to this email. Briefly, "iDEFENSE", which bills itself as "a global security intelligence company", is offering cash for information about security vulnerabilities in computer software that are not publicly known, especially if you promise not to tell anyone else.
If this kind of secret traffic is allowed to continue, it will pose a very serious threat to our computer communications infrastructure.
A more serious and credible threat would be an escrow/verification service which could support blacknet-style auctions. It could also make the hacker's time valuable enough to support a decent lifestyle, fostering a cottage industry.
---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo@wasabisystems.com
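The two-week embargo that gives iDEFENSE its value raises a side question: how does a researcher later prove they knew the flaw first? One standard cryptographic answer -- a minimal sketch, not anything iDEFENSE is described as doing -- is a hash commitment: publish a digest of the advisory on day zero, reveal the advisory itself once the embargo lifts.

```python
import hashlib
import os

def commit(advisory: bytes) -> tuple[bytes, bytes]:
    """Commit to an advisory without revealing it: publish the digest now,
    keep (nonce, advisory) private until the embargo lifts."""
    nonce = os.urandom(16)  # blinds the digest against dictionary guessing
    digest = hashlib.sha256(nonce + advisory).digest()
    return digest, nonce

def verify(digest: bytes, nonce: bytes, advisory: bytes) -> bool:
    """Anyone holding the published digest can check the later reveal."""
    return hashlib.sha256(nonce + advisory).digest() == digest

# Day 0: researcher publishes `digest`. Two weeks later: they release
# (nonce, advisory), and anyone can confirm priority of discovery.
digest, nonce = commit(b"buffer overflow in example daemon 1.2")
assert verify(digest, nonce, b"buffer overflow in example daemon 1.2")
assert not verify(digest, nonce, b"some other claim")
```

The nonce matters: without it, anyone who can guess the advisory text can confirm the guess against the published digest.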
Adam Back wrote:
I think HP were wrong, and find their actions in trying to use legal scare tactics reprehensible: they should either negotiate a price, or wait for the information to become generally available.
Amen.

Incidentally, I was put under a lot of pressure when releasing the OpenSSL advisory a few weeks ago to allow CERT to notify "vendors" before going on general release. I have a big problem with this - who decides who are "vendors", and how? And why should I abide by their decision? Why should I pick CERT and not some other route to release the information?

Also, if the "vendors" were playing the free software game properly, they wouldn't _need_ advance notification - their customers would have source, and could apply the patches, just like real humans.

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html http://www.thebunker.net/
Available for contract work.
"There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit." - Robert Woodruff
Ben Laurie wrote:
Incidentally I was put under a lot of pressure when releasing the OpenSSL advisory a few weeks ago to allow CERT to notify "vendors" before going on general release. I have a big problem with this - who decides who are "vendors", and how? And why should I abide by their decision? Why should I pick CERT and not some other route to release the information?
I agree that such pressure is pretty reprehensible. As others in this thread have said, it's your decision how you want to publish the information. People should respect that decision. However...
Also, if the "vendors" were playing the free software game properly, they wouldn't _need_ advance notification - their customers would have source, and could apply the patches, just like real humans.
I agree with that to a certain extent. However, we (RSA) recently had to release patches to several versions of Xcert's old Sentry CA because of the OpenSSL fixes. I do not know how our customers would have been helped by having the source.

First, I want to point out that Xcert's use of OpenSSL was entirely in agreement with OpenSSL's license. The fact that we built a closed-source product atop OpenSSL was playing the game properly, as far as the rules were laid out. (If you think OpenSSL's users should behave differently, change the license!)

Even if we gave our customers our source code, we had made a few changes to the OpenSSL code for use in Sentry CA, mostly to deal with things like PKCS#11 and ECC (we used OpenSSL for crypto, some ASN.1, and SSL). So patches don't necessarily apply perfectly cleanly (though these ones did). It seems unreasonable for us to expect our customers to make the appropriate changes themselves. (We even had to make our own patch for a particularly early version of Sentry CA that used a version of OpenSSL that did not get a patch from openssl.org. There's nothing like money to bring out the whore in all of us...)

Also, one of the selling points of Sentry CA was that it's thoroughly tested. We had to make sure that the patches didn't break the product. Again, we can't really expect our customers to do that themselves.

Now, I'm a big fan of open-source software, and am very sympathetic to its ideas in many ways. All I'm trying to point out is that the issues aren't necessarily so black-and-white. We certainly could have benefited from advance notice of the flaws, but I personally think that "vendors" shouldn't get first dibs at any patches. That said, I don't really know what we could've done with the news while waiting for OpenSSL's patches to come out.

So the way things happened is probably the fairest outcome possible. It was a rough couple of weeks for us, though, getting our own fixes together while OpenSSL was sitting pretty.
Customers don't seem to like _knowing_ they're vulnerable, for some reason...

(I speak for myself, and these opinions are my own, and I might even be lying about everything.)

M.
Marc Branchaud wrote:
Ben Laurie wrote:
Incidentally I was put under a lot of pressure when releasing the OpenSSL advisory a few weeks ago to allow CERT to notify "vendors" before going on general release. I have a big problem with this - who decides who are "vendors", and how? And why should I abide by their decision? Why should I pick CERT and not some other route to release the information?
I agree that such pressure is pretty reprehensible. As others in this thread have said, it's your decision how you want to publish the information. People should respect that decision.
However...
Also, if the "vendors" were playing the free software game properly, they wouldn't _need_ advance notification - their customers would have source, and could apply the patches, just like real humans.
I agree with that to a certain extent. However, we (RSA) recently had to release patches to several versions of Xcert's old Sentry CA because of the OpenSSL fixes. I do not know how our customers would have been helped by having the source.
First, I want to point out that Xcert's use of OpenSSL was entirely in agreement with OpenSSL's license. The fact that we built closed-source product atop OpenSSL was playing the game properly, as far as the rules were laid out. (If you think OpenSSL's users should behave differently, change the license!)
I have two points to make about this: a) We can't change the licence (until we rewrite the whole thing). b) I like BSD-style licences because it means people get to use the software even if they are doing the wrong thing - but I do hope they'll see the light in the end and do the right thing.
Even if we gave our customers our source code, we had made a few changes to the OpenSSL code for use in Sentry CA. Mostly to deal with things like PKCS#11 and ECC (we used OpenSSL for crypto, some ASN.1 and SSL).
Correct answer: contribute the patches back to OpenSSL, then you don't have this problem.
So patches don't necessarily apply perfectly cleanly (though these ones did). It seems unreasonable for us to expect our customers to make the appropriate changes themselves. (We even had to make our own patch for a particularly early version of Sentry CA that used a version of OpenSSL that did not get a patch from openssl.org. There's nothing like money to bring out the whore in all of us...)
That's probably fuller of holes than I care to think about.
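Marc's point about patches not applying cleanly can be made concrete. The toy sketch below (invented file names and code, purely illustrative) shows why a unified diff produced against pristine upstream sources rejects on a locally modified tree: a hunk applies only where its context lines still match verbatim.

```python
import difflib

upstream = ["int n = read_len(buf);\n",
            "memcpy(dst, buf, n);\n"]
fixed    = ["int n = read_len(buf);\n",
            "if (n > sizeof(dst)) abort();\n",
            "memcpy(dst, buf, n);\n"]
# The vendor's tree carries local edits around the same lines:
vendor   = ["int n = read_len(buf);   /* PKCS#11 path */\n",
            "memcpy(dst, buf, n);\n"]

patch = list(difflib.unified_diff(upstream, fixed, "a/x.c", "b/x.c"))

def context_matches(patch_lines, tree):
    """A hunk applies cleanly only if its context and removal lines
    (' ' and '-' prefixed) appear verbatim in the target tree."""
    needed = [l[1:] for l in patch_lines
              if l[:1] in (" ", "-") and not l.startswith("---")]
    return all(l in tree for l in needed)

assert context_matches(patch, upstream)    # pristine tree: applies
assert not context_matches(patch, vendor)  # modified tree: rejected hunk
```

Real `patch(1)` is more forgiving (fuzz, offsets), but the failure mode is the same: local modifications near the fix turn an upstream security patch into a hand-merge job, which is Marc's argument for why source alone doesn't save the customer.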
Also, one of the selling points of Sentry CA was that it's thoroughly tested. We had to make sure that the patches didn't break the product. Again, we can't really expect our customers to do that themselves.
Vulnerable or potentially flakey would be their choice until you've done your testing. I know what I would choose.
Now, I'm a big fan of open-source software, and am very sympathetic to its ideas in many ways. All I'm trying to point out is that the issues aren't necessarily so black-and-white. We certainly could have benefitted from advanced notice of the flaws, but I personally think that "vendors" shouldn't get first dibs at any patches. That said, I don't really know what we could've done with the news while waiting for OpenSSL's patches to come out.
There would have been patches, too, of course.
So the way things happened is probably the fairest outcome possible. It was a rough couple of weeks for us, though, getting our own fixes together while OpenSSL was sitting pretty. Customers don't seem to like _knowing_ they're vulnerable, for some reason...
Because they know that the attackers know, too, of course. Cheers, Ben. -- http://www.apache-ssl.org/ben.html http://www.thebunker.net/ Available for contract work. "There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit." - Robert Woodruff
Research into defining and addressing classes of vulnerabilities can't happen without libraries of available vulnerability code. I can think of three researchers into automated methods for addressing vulnerabilities who griped, uninvited, about the quality of the existing vulnerability sites. Doing research into a set requires that you have enough examples, in the open, that you can define a set, and that the set is added to from time to time so you can make and test predictions. I feel fairly confident in saying that without full disclosure, we wouldn't have StackGuard, ITS4, Nessus, or Snort. And the security admin's job would be a lot harder.

Clearly, people should not be restricted from doing what they want with information. However, if you are concerned about the state of computer security, then I think encouraging more and better communication amongst "white hats" is a good idea.

(An interesting question is 'Is there a difference between selling information you know you have and information you expect to have?' which is what many security companies have been doing for a while: hiring the people who find exploits to find them for their commercial profit. The difference is that those security companies paid salary, not contracting rates.)

Adam

On Thu, Aug 22, 2002 at 04:54:51PM +0100, Adam Back wrote:
| Right. And I fail to see how any of this is dangerous.
|
| Clearly people are free to sell information they create to anyone they
| choose under any terms they choose. (For example the iDEFENSE promise
| of the author to not otherwise reveal for 2 weeks to give iDEFENSE
| some value.)
|
| This commercialisation seems like a _good thing_ as it may lead to
| more breaks being discovered, and hence more secure software.
| [...]
--
"It is seldom that liberty of any kind is lost all at once." -Hume
On Thu, 22 Aug 2002, Adam Shostack wrote:
Clearly, people should not be restricted from doing what they want with information. However, if you are concerned about the state of computer security, then I think encouraging more and better communication amongst "white hats" is a good idea.
Yes, I think all exploits need to be published. I'm not sure how soon is soon enough - a month from discovery to publication seems OK to me. But that's easy to argue with too.
(An interesting question is 'Is there a difference between selling information you know you have and information you expect to have?'
Hmmm... anyone want to create a futures market for code exploits?
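A toy sketch of what such a market might look like, purely illustrative (the names, prices, and rules are invented, and a real design would need the escrow/verification machinery Steve mentioned): an escrow agent collects sealed bids for an embargoed exploit and clears at the second-highest price, a Vickrey rule that rewards bidding your honest valuation.

```python
def run_sealed_auction(bids: dict[str, int]) -> tuple[str, int]:
    """Sealed-bid, second-price (Vickrey) auction run by a trusted escrow:
    the highest bidder wins but pays the second-highest bid, so each
    party's best strategy is to bid what the information is truly worth."""
    if len(bids) < 2:
        raise ValueError("need at least two bids to price the information")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # second-highest bid sets the clearing price
    return winner, price

# Hypothetical bidders for one embargoed exploit:
winner, price = run_sealed_auction(
    {"bank": 50_000, "mssp": 30_000, "anonymous": 45_000})
assert winner == "bank" and price == 45_000
```

Note the pricing point from earlier in the thread: the escrow cannot tell whether "anonymous" is a defender or an attacker, and doesn't need to - the competing bids are what discover the information's value.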
which is what many security companies have been doing for a while: Hiring the people who find exploits to find them for their commercial profit. The difference is that those security companies paid salary, not contracting rates.)
My experience with contracting rates is much better than paid salary. The difference is that salary jobs are longer-term: it's something a company wants to do for a long time. Contract jobs are short-term.

I think it's true that exploits will always be there to find, and it's definitely in a security company's best interest to have people continuously looking for problems. Who they tell, and when, becomes an interesting topic in and of itself, but I think it's important that all security problems be published within a reasonable time.

Patience, persistence, truth,

Dr. mike
On Thu, 22 Aug 2002, Adam Back wrote:
Right. And I fail to see how any of this is dangerous.
Depends on how it's used. Hammers can be dangerous.
Clearly people are free to sell information they create to anyone they choose under any terms they choose. (For example the iDEFENSE promise of the author to not otherwise reveal for 2 weeks to give iDEFENSE some value.)
Yup. I suspect they won't get paid until after the two weeks are up, to ensure that too.
This commercialisation seems like a _good thing_ as it may lead to more breaks being discovered, and hence more secure software.
Maybe.
(It won't remain secret for very long -- given the existance of anonymous remailers etc., but the time-delay in release allows the information intermediary -- such as iDEFENSE -- to sell the information to parties who would like it early, businesses for example people with affected systems.
Or al-Qaeda-like operations. By accident, of course!
Criminal crackers who can exploit the information just assist in setting a fair price and forcing vendors and businesses to recognise the true value of the information. Bear in mind the seller can not know or distinguish between a subscriber who wants the information for their own defense (eg a bank or e-commerce site, managed security service provider), and a cracker who intends to exploit the information (criminal organisation, crackers for amusement or discovery of further inforamtion, private investigators, government agencies doing offensive information warfare domesticaly or internationally).
Seems like you're assuming the cracker is pointed at a specific target to begin with. I think it's more of a crap shoot, and iDEFENSE is hoping a few exploits will be really worthwhile for the hundreds that aren't. iDEFENSE has to find the subscriber after the fact, not before (I think).
I don't see any particular moral obligation for people who put their own effort into finding a flaw to release it to everyone at the same time. Surely they can release it earlier to people who pay them to conduct their research, and by extension to people who act as intermediaries for the purpose of negotiating better terms or being able to package the stream of ongoing breaks into more comprehensive subscription service.
I think HP were wrong, and find their actions in trying to use legal scare tactics reprehensible: they should either negotiate a price, or wait for the information to become generally available.
If I were HP I'd have done the same thing they did - why be pushed around when you can fight back? I think the crackers screwed up: they should have given a presentation to HP with a proof that there's a crack, and then requested (politely) some compensation for where it was. By making it a reasonable request, HP saves engineering time and their software, and the crackers get into business. If they'd gone in with a "win-win" attitude, the crackers would have made money, HP would have saved a lot of money, and everyone would be a lot happier. "Moral obligation" and "mental attitude" are not the same thing, but I think the right attitude would make the morals a lot simpler.

So rather than paying paltry sums to crackers, iDEFENSE might do better as an agency for crackers. If they do the business-to-business end for the crackers, and negotiate contracts, then they get a cut, and the crackers get a lot more motivation to go find problems. I think everybody can win then, so long as the exploits are in fact published.

Patience, persistence, truth,

Dr. mike
participants (5)
- Adam Back
- Adam Shostack
- Ben Laurie
- Marc Branchaud
- Mike Rosing