Lauren Weinstein, founder of People for Internet Responsibility, has come out with a new spam solution at http://www.pfir.org/tripoli-overview.

According to this proposal, the Internet email architecture would be revamped. Each piece of mail would include a PIT, a Payload Identity Token, emphasis on Identity. This would be a token certifying that you were an Authorized Email User as judged by the authorities. Based on your PIT, the receiving email software could decide to reject your email.

It is anticipated that all Pits considered acceptable by the vast majority of all Tripoli-compliant software users would be digitally signed by one or more designated, trustworthy, third-party authorities who would be delegated the power to certify the validity of identity and other relevant information within Pits.

In other words, here comes Verisign again.

It is anticipated that in most cases, in order for the sender of an e-mail message to become initially certified by a Pit Certification Authority (PCA), the sender would need to first formally accept Terms of Service (ToS) that may well prohibit the sending of spam, and equally importantly, would authorize the certification authority to "downgrade" the sender's authentication certification in the case of spam or other ToS violations.

Thus you have to be politically acceptable to the Powers That Be in order to receive your license to email, aka your PIT. And be careful what you say or your PIT will be downgraded.

Unfortunately he doesn't discuss various crypto protocol issues:

If the PIT is just a datum, what keeps someone from stealing your PIT and spamming with it?

If the PIT is a cert on a key, what do you sign? The message? What if it gets munged in transit, as messages do? You've just lost most of your email reliability.

Or maybe you sign the current date/time? Then delayed mail is dead mail.

Or maybe you respond to a challenge and sign that? That won't work if relays are involved, because they can't sign for you.

Spam is a problem, but it's no excuse to add more centralized administrative control to the Internet. Far better to go with a decentralized solution like camram.sourceforge.net, basically a matter of looking for hashcash in the mail headers. This raises the cost to spammers without significantly impacting normal users.
Yes, there is some discussion of it on slashdot, including several other people who have commented similarly to anonymous that it is a pretty big privacy invasion and centralised control point problem.

The claim that you can optionally be anonymous and not use a cert, or get an anonymous cert, is plainly practically bogus. You'd stand about as much chance of having your mail read as if you shared a mail hub with spamford wallace -- ie 90+% of internet mail infrastructure would drop your mail on the floor on the presumption it was spam.

Plus a point I made in that thread is that it is often not in the internet user's interests to non-repudiably sign every message they send just to be able to send mail, because that lends ammunition to hostile recipients who from time to time target internet users for bullshit libel and unauthorised investment advice etc.

Companies also, I would expect, are somewhat sensitive to not signing everything, for similar reasons to those behind their retention policies, where they have policies of deleting emails and files and shredding paper files after some period.

In addition, PKIs, because of the infrastructure requirements, have proven complex to set up and administer. So now we've taken one hard problem (stopping spam) and added another hard problem (hierarchical PKI deployment), and somehow this is supposed to be effective at stopping spam.

In addition, unless there is significant financial cost for certificates and/or significant and enforceable financial penalty and good identification and registration procedures enforced by the CAs, it wouldn't even slow spammers, who would just get a cert, spam, get revoked, get another cert and repeat. Certificate revocation is already a weak point of PKI technology, and to reasonably stop spam before the spammer manages to send too many millions of spams with a cert, you have to revoke the cert PDQ!

And finally it all ends up being no more than an expensive implementation of blacklists (or I suppose more properly whitelists), because the CAs are maintaining lists of people who have not yet been revoked as spammers. Some click-through agreement isn't going to stop spammers. Legislation and legal or financial threats aren't going to stop spammers either, because any level of registration-time identity verification that is plausibly going to be accepted by users -- and this is also limited by cost: higher assurance is more cost, which users also won't be willing to accept -- will be too easy for the spammers to fake. And email is international and laws are not.

It is pretty much an "internet driver's license" for email.

I also think that fully distributed systems such as hashcash are more suitable for a global internet service. My preferred method for deploying hashcash is as a token exempting its sender from bayesian filtering, and any other content-based or sender-based filtering.

That way, as an email user you have an incentive to install a hashcash plugin http://www.cypherspace.org/hashcash/ because it will ensure your mail does not get deleted by ever-more aggressive filtering and scattergun blackhole systems. The camram system http://www.camram.org/ is a variant of this.

It also more directly addresses the problem: it makes it more expensive for spammers to send the volumes of mail they need to break even.

Adam
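A minimal sketch of the sender side of that deployment in Python -- the stamp format, the X-Hashcash header name, and the 20-bit difficulty are illustrative choices for the sketch, not the real hashcash format; the search simply grinds counters until the SHA-1 of the stamp has a run of leading zero bits:

    import hashlib
    import time
    from email.message import EmailMessage

    BITS = 20  # assumed difficulty: roughly 2**20 SHA-1 attempts per stamp

    def mint_stamp(recipient: str, bits: int = BITS) -> str:
        """Search for a counter so that SHA-1(stamp) begins with `bits` zero bits."""
        date = time.strftime("%Y%m%d", time.gmtime())
        counter = 0
        while True:
            stamp = f"{recipient}:{date}:{counter}"
            digest = int.from_bytes(hashlib.sha1(stamp.encode()).digest(), "big")
            if digest >> (160 - bits) == 0:
                return stamp
            counter += 1

    # Attach the stamp as a header the recipient's filter can verify and exempt.
    msg = EmailMessage()
    msg["From"] = "sender@example.org"          # placeholder addresses
    msg["To"] = "recipient@example.net"
    msg["Subject"] = "hello"
    msg["X-Hashcash"] = mint_stamp("recipient@example.net")
    msg.set_content("Mail carrying a proof-of-work stamp.")

At 20 bits the search costs a legitimate sender a noticeable but tolerable pause per message, and costs someone sending a million messages a million times that.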
If one wants a globally visible address, like publishing an e-mail address on webbed space, then one will be globally reachable. It's like walking on the street - everyone sees you, including display ads, which is why they cost so much in cities.

If you *don't* want to be globally visible, you don't need conmen selecting who will see you. You simply selectively give your e-mail address to those who you want to see you. This is an extremely simple concept with zero cost of implementation.
the proposal in the past has been that ISPs filter spam at ingress from their customers. the counter-argument has been that there are lots of ISPs that can't be trusted to do it. So it is much easier for ISPs to have lists of other trusted &/or untrusted ISPs that they will accept email from.

It is orders of magnitude easier (and more efficient) for ISPs to do ingress filtering for SPAM, and effectively ISP blacklists, than it is to populate the whole consumer infrastructure. There are still some ways to slip thru the cracks with small amounts .... but it isn't the 40-80% volume of all email that is seen today. It does have an analogous downside to the individual privacy issues ... which is that the big ISPs could use blacklisting for other purposes than addressing SPAM issues.

Some of the ingress filtering pushback may be similar to the early counter-arguments to packet ingress filtering related to ip-address spoofing. however, that seemed to be more a case of disparity among the router vendors in which could & could not implement ingress filtering. as the majority of the router vendors achieved such capability ... the push-back significantly reduced.

http://www.garlic.com/~lynn/rfcidx7.htm#2267
2267 - Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing, Ferguson P., Senie D., 1998/01/23 (10pp) (.txt=21032) (Obsoleted by 2827)

there already are logs relating ingress email to originating ISP customer id. that could be made available via some sort of legal action. the only issue then is the strength of authentication that is performed on customer connection to the ISP ... rather than some sort of origin authentication for every email.
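A toy sketch of what per-customer ingress email filtering at an ISP mail hub could look like -- the one-hour window and the per-hour recipient cap are made-up numbers, and a real deployment would live in the MTA's policy hooks rather than in a standalone script:

    import time
    from collections import defaultdict, deque

    WINDOW = 3600          # seconds
    MAX_RECIPIENTS = 200   # assumed per-customer hourly cap

    sent = defaultdict(deque)   # customer id -> timestamps of accepted recipients

    def accept_recipient(customer_id: str) -> bool:
        """Toy ingress check an ISP mail hub might run per RCPT TO."""
        now = time.time()
        q = sent[customer_id]
        while q and now - q[0] > WINDOW:
            q.popleft()
        if len(q) >= MAX_RECIPIENTS:
            return False   # defer or reject, and flag the account for review
        q.append(now)
        return True

The point is only that the ISP already authenticates the customer connection, so counting and capping what each authenticated customer injects is cheap compared with filtering at every receiver.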
-- Anne & Lynn Wheeler http://www.garlic.com/~lynn/ Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
On Fri, May 09, 2003 at 10:11:52AM -0600, Anne & Lynn Wheeler wrote:
So it is much easier for ISPs to have lists of other trusted &/or untrusted ISPs that they will accept email from.
Any internet user needs to be able to send mail to any other internet user. Which means the default has to be open (blacklists rather than whitelists). Then you have the blackhole lists like ORBS etc, which block domains used predominantly by spammers.

But the problem is spammers don't stay in one place: they buy service from ISPs and spam flat-out until the ISP notices and cancels the account. Some ISPs are more grey -- they want to make money from spammers by providing them service -- and some ISPs just don't notice or respond that quickly. The ISP can't distinguish spammers from non-spammers when they receive customer orders.

The blackhole people are arbitrary vigilantes by and large, so while you might argue the overall effect does reduce spam, it also results in lost mail. My experience was I couldn't get mail from my brother, who was using btinternet, one of the largest ISPs in the UK, because some idiot blackholer blackholed their dynamic IPs. No doubt there were at some time some spammers using BTinternet, as with just about any other ISP. Recently I couldn't receive mail from John Gilmore, and so it goes.

So I don't see how this is a "solution"; rather it is just a broken countermeasure with scatter-gun fall-out of false positives for all the other people who find themselves sharing the same ISP as spammers long enough for the blackhole people to add them.

Adam
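For reference, the mechanics of a DNS blackhole list lookup are trivial: reverse the connecting IP's octets, prepend them to the list's zone, and treat any A record as "listed". The zone name below is a placeholder, not an endorsement of any particular list:

    import socket

    def is_blackholed(ip: str, zone: str = "dnsbl.example.org") -> bool:
        """Return True if `ip` has an entry in the given DNS blackhole list."""
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)   # any answer means the IP is listed
            return True
        except socket.gaierror:           # NXDOMAIN: not listed
            return False

Which is exactly why the collateral damage is so blunt: the lookup sees only the IP, not whether today's user of that dynamic address is the spammer or somebody's brother.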
Currently ISPs typically "notice" when they get complaints. ISPs could do a much better job of actively noticing and limiting mail at ingress ... as opposed to waiting for somebody to complain and canceling the account.

Many of the recent statements that ISPs can't limit email at ingress dynamically are similar to the comments about not being able to filter ingress packets whose origin address didn't match the ip address of the sender (as stated in the original posting) ... per the ingress packet filtering RFC referenced in the original post. My original post mentioned that the ISPs could then do their own effort of blacklisting (of other ISPs).

Currently spam blacklists can be somewhat like vigilantes .... with the argument analogy that since vigilante justice can make mistakes, then there shouldn't be any highway patrol, FBI, and/or secret service. ISPs would be expected to filter on ingress of email from their own customers .... and even if only the top ten ISPs blacklisted other ISPs that didn't do a reasonable job of ingress filtering ... it could start to put a big dent in the spamming business, possibly cutting it from 40-80% of existing email down to under 5-10%.

It is sort of like stop signs and stop lights .... there are typically hundreds more intersections than there are traffic enforcers .... however with sufficient leverage ... it can significantly improve the situation ... even if it can be proved that it can't, absolutely, guarantee one hundred percent compliance.

I didn't make any statement about ISPs attempting to identify spammers when they register the account .... the original post was only with regard to ISPs doing active email ingress filtering.

My ISP recognizes and bills me extra if I'm simultaneously connected multiple times ... there is a little latitude for modem hanging, my dropping the line ... but the modem not reporting it ... and my connecting on a different modem. It also does traffic load-leveling if I really try and hit it hard. If it can bill extra for simultaneous connects and do traffic load-leveling, it can do both packet ingress filtering and email ingress filtering.

past thread drawing the analogy that the information superhighway is something like the wild west .... w/o traffic rules, traffic signs, traffic lights, speed limits, and enforcement. start with a couple hundred people in town .... and go to millions ... and there still isn't even any rule about which side of the road people should be driving on.
http://www.garlic.com/~lynn/2001m.html#30 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
-- Anne & Lynn Wheeler http://www.garlic.com/~lynn/ Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
On Fri, May 09, 2003 at 11:35:36PM -0600, Anne & Lynn Wheeler wrote:
Currently ISPs typically "notice" when they get complaints. ISPs could do a much better job of actively noticing and limiting mail at ingress ... as opposed to waiting for somebody to complain and canceling the account.
So this would be the block-port-25-except-to-the-ISP-run-mail-hub approach? Firstly, that only works for end-users; larger customers want their own mail delivery and no arbitrary restrictions on what they can do with their pipe.

Then what is the ISP going to notice? He shouldn't be actively monitoring his customers' traffic. There are lots of tunneling protocols, authentication is weak, spam can identify other people as the sender (to some extent), host security is weak, hosts are vulnerable to viruses. Recently there was a virus with a payload of an open proxy, which it was suspected was distributed by spammers, or at least the spammers had discovered it and were using it.

So I understand what you're describing, but it sounds like a big messy nightmare, which is pretty much where we are now and rapidly getting worse.
My original post mentioned that the ISPs could then do their own effort of blacklisting (of other ISPs).
Let's try something concrete: say some spammer starts using AOL to send a batch to Earthlink. So Earthlink notices and blocks AOL. If you seriously think this is the outcome, then email reliability planet-wide has probably just dropped by 1% (or whatever fraction of internet email travels from AOL->Earthlink). Repeat for all major ISPs who are being abused by spammers with disposable free AOL CDs, accounts bought with stolen credit cards, or just regular paid service. Messy, right? So I think it is not realistic to assume ISPs can do this without massive reliability loss.

Typically I'm presuming blackhole lists don't block large ISPs (modulo the BTinternet example I gave) because of the fall-out. Basically any ISP of any size has an ongoing turnover of some proportion of their users who are repeat hit-and-run spammers. So a blackhole approach can stop a static IP leased to a spammer, used by the spammer only, but the same approach applied to the hit-and-run cheaper-ISP-account type customers (dynamic IP) causes no end of reliability issues.

Analogies about the wild west don't really help in thinking about solutions, I think. I like the decentralised nature of the internet. I don't want to have to show government ID to obtain an internet driver's license to send email. When I buy a pipe onto the internet I don't want "no server" AUPs, nor a mish-mash of blocked ports. I understand the problem is hard to address, but let's not damage the useful decentralised open architecture of the internet trying!

Adam
do you think that earthlink would automatically blacklist aol if it found incoming spam from aol? I think that earthlink would contact aol and say ... your ingress filtering doesn't seem to be working. It would only be after all attempts to understand aol's ingress filtering had failed that earthlink might take action.

again ... it is analogous to somebody hearing about traffic lights for the first time and coming up with all the reasons why people would ignore traffic lights. I would claim that the current issue isn't that spam exists (aka traffic violations), it is that there are billions of spams each day. and this easily cuts the majority of it if the top ten start doing ingress mail filtering ... and ingress mail filtering is orders of magnitude more efficient than other kinds of solutions. the blacklisting isn't for the mistakes ... it is for the ISPs that obviously aren't going to follow the traffic rules.

so there are lots of kinds of tunneling. the major ISPs are already doing ingress filtering for email not coming from a recognizable customer. so tunneling actually reduces to a common vulnerability with ISPs not doing ingress email filtering (aka tunneling to an ISP that isn't doing ingress email filtering is a common vulnerability with a customer directly getting an email account with an ISP that isn't doing ingress email filtering). So the issue comes back to ISPs that are recognized as not doing ingress email filtering.

So lets say this gets something like 80 percent of the traffic violations. So the majority of the random traffic violations are now starting to be taken care of. There are 1) the corporations effectively operating as private ISPs, 2) compromised machines, 3) random anarchy. Both #2 and #3 are vulnerabilities treated just the same as a real spammer getting a real account and directly doing spam. These two vulnerabilities should be caught by ingress email filtering. Real spammers caught by ISP ingress filtering, compromised machines caught by ISP ingress filtering, and hit&run anarchists caught by ingress filtering .... all appear to be a common vulnerability caught by ingress email filtering.

The issues actually reduce to a very few simple, non-complex vulnerabilities from a business process standpoint (ignoring all the technology twists and turns): 1) ISPs that do ingress email filtering and 2) ISPs that do not do ingress email filtering. If ISPs are doing ingress email filtering .... then all the situations of known spammers, spammers masquerading to get accounts, spammers compromising other machines and masquerading, tunneling, etc ... all get taken care of. There are still the periodic traffic accidents where somebody might be able to do a couple hundred before getting cut .... but it probably reduces over 90 percent of the traffic.

So the remaining issue is whether an ISP is following the traffic laws and doing ingress email filtering, or flagrantly flouting the law and letting millions of spams thru. This is regardless of whether it is a real public ISP ... or effectively a corporate/private ISP. The other ISPs then use blacklisting. The first line of defense is that all ISPs are to do ingress email filtering, and the 2nd line of defense is that the major ISPs do blacklisting of the ISPs that are obviously flouting the law.

The primary business issue is that the majority of spam is being done for some profit .... that the cost of sending the spam is less than the expected financial return. This should address the 99th percentile.
Again, it is very simple: the first line of defense is ingress email filtering. This is only a moderate extension of what the major ISPs are currently doing with regard to not accepting email from entities that are obviously not their customers, current traffic-limiting business rules, etc. The second line of defense is blacklisting ISPs that aren't following the traffic rules. I claim it actually is rather much simpler and much more effective.

So back to the obvious traffic violations. One is the compromised machines. A large proportion of the compromised machines are there because they all got hit by a spamming virus. I claim that over time, if over 90 percent of spamming gets cut ... then 90 percent of the machines that get compromised by virus-in-spam can also get cut. The situation is then down to a large number of compromised machines each sending a couple hundred emails ... staying under the ingress filtering radar. That is orders of magnitude better than the current situation, and it is starting to reduce the case to manageable traffic violations.

So this scenario gets down to providing significantly more focus on compromised machines ... and back to a recent comment about lots of vendors saying that consumers won't pay for better security ... because they have no motivation. This is somewhat the insurance industry theory of improving on the severity of traffic accidents (what motivated automobile manufacturers to build safer cars). My ISP currently charges me extra over the flat rate for certain behavioral activities. Violating ingress email filtering rules would be such a valid inducement. I get ingress email filtering accident insurance; the premiums are based on the integrity of the machine i'm operating.

So, two simple rules .... 1) ISPs do ingress email filtering, and 2) ISPs blacklist other ISPs that flagrantly violate the ingress email filtering rules. With a sizeable reduction in spam, there is a corresponding sizeable reduction in compromised machines. However, compromised machines that do spam and hit the ISP's ingress email filtering rules get fined. It is treated as an accident and operating an unsafe vehicle. You can get accident and fine insurance .... but the premium is related to the kind of machine you operate. Some inducement for the consuming public to purchase safer machines.

The two simple rules ... with the fines for violations ... then provide some inducement for consumer buying habits regarding purchasing safer machines. And it is all quite similar to policies and practices currently in place.

-- Anne & Lynn Wheeler http://www.garlic.com/~lynn/ Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
On Sat, May 10, 2003 at 09:36:43AM -0600, Anne & Lynn Wheeler wrote:
do you think that earthlink would automatically blacklist aol if it found incoming spam from aol? I think that earthlink would contact aol and say ... your ingress filtering doesn't seem to be working. It would only be after all attempts to understand aol's ingress filtering that earthlink might take action.
well, I don't know about those two, but I've found Road Runner sometimes blocking mail from ameritech, or at least from part of ameritech. When I asked why my mail to their user was being bounced, their reply was that someone on my subnet was spamming. So then I just had to disconnect my dsl line and reconnect to get on a different subnet and then my mail would go thru, but what sheer idiocy. -- Harmon Seaver CyberShamanix http://www.cybershamanix.com
Who cares? How fast did IPv6 proliferate again? Right. So why expect that normal SMTP will be banished? And even if it is, you can always run your own alternate server without the PIT bullshit.

If we turn the problem on its head, all servers should use TLS and identify themselves to each other as well as encrypt the traffic. This way, you can weed out spammers by eliminating known spam servers, and it won't kill remailers. Speaking of which, what's to stop a remailer from using a verisign-signed PIT anyway after removing the original? Exit nodes of remailers are traceable anyway. Even so, there's always the opportunity for self-signed or test PITs, etc...

If by "receiving email software" we're talking about your mail program, it doesn't matter much at all. If we mean an MTA (sendmail, postfix, qmail) then it becomes an issue only when you don't control the MTA. Which they claim will not happen during the transition phase. Also:

"The Tripoli Pit concept does not require that all senders' messages be authenticated to the same level. It would be completely possible for a sender to generate a message (and associated Pit) that was not fully authenticated or that even was anonymous (within the bounds of associated MTAs/relays and the underlying Internet or local operating system environments to offer anonymous messages or transport parameters).

"It is recognized that there are important situations where it may be highly desirable to receive e-mail from poorly-authenticated or completely unauthenticated sources, for example, in the case of a whistleblower submission address, government agencies, or a range of other situations."

There certainly is the danger that everyone would opt to not accept anonymous emails, but then alternate means of communication would still proliferate... say like usenet, but over p2p nets, or whatever.

--------_sunder_@_sunder_._net_------- http://www.sunder.net

On Fri, 9 May 2003, Nomen Nescio wrote:
Lauren Weinstein, founder of People for Internet Responsibility, has come out with a new spam solution at http://www.pfir.org/tripoli-overview.
According to this proposal, the Internet email architecture would be revamped. Each piece of mail would include a PIT, a Payload Identity Token, emphasis on Identity. This would be a token certifying that you were an Authorized Email User as judged by the authorities. Based on your PIT, the receiving email software could decide to reject your email.
On Fri, May 09, 2003 at 03:50:02AM +0200, Nomen Nescio wrote:
Lauren Weinstein, founder of People for Internet Responsibility, has come out with a new spam solution at http://www.pfir.org/tripoli-overview.
[deletia]
Thus you have to be politically acceptable to the Powers That Be in order to receive your license to email, aka your PIT. And be careful what you say or your PIT will be downgraded.
Weinstein's proposal is DOA because of the centralized control and the lack of anonymity (oh, but Pit issuers may issue special anonymous certs to "applicants with special needs for identity protection (e.g., human rights groups operating in 'hostile' areas, etc.)". Right.) The people like us who would implement it won't. But it's technically possible. The technological issues are the easy part.

Creating a new email system is one thing; getting people to use it is another. This idea is pretty unrealistic... sort of the Underpants Gnomes plan for ridding the world of spam:

1. create completely new parallel email system
2. ???
3. no more spam!
Unfortunately he doesn't discuss various crypto protocol issues:
If the PIT is just a datum, what keeps someone from stealing your PIT and spams with it?
If the PIT is a cert on a key, what do you sign? The message? What if it gets munged in transit, as messages do? You've just lost most of your email reliability.
Or maybe you sign the current date/time? Then delayed mail is dead mail.
Or maybe you respond to a challenge and sign that? That won't work if relays are involved, because they can't sign for you.
I read it as the Pit is a signature over the Pit contents and the email. It'd include the certs needed to authenticate to the appropriate CA. A PKCS#7 detached signature or similar structure would work fine. The crypto part is the one part that's easy.
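For what it's worth, a detached signature of that shape is a one-liner with standard tooling. A sketch using the Python `cryptography` package -- this is not anything from the Tripoli draft, and cert.pem / key.pem are placeholder credentials that would come from whatever CA the scheme designates:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.serialization import pkcs7

    body = b"the message payload the Pit signature would cover"
    cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
    key = serialization.load_pem_private_key(open("key.pem", "rb").read(), password=None)

    detached_sig = (
        pkcs7.PKCS7SignatureBuilder()
        .set_data(body)
        .add_signer(cert, key, hashes.SHA256())
        .sign(serialization.Encoding.SMIME, [pkcs7.PKCS7Options.DetachedSignature])
    )
    # detached_sig is an S/MIME multipart/signed blob carrying the certificate,
    # so a receiver can chain it back to the issuing CA and check the body.

The munging-in-transit objection upthread still applies, though: the signature only survives if relays leave the signed body byte-for-byte intact.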
Spam is a problem, but it's no excuse to add more centralized administrative control to the Internet. Far better to go with a decentralized solution like camram.sourceforge.net, basically a matter of looking for hashcash in the mail headers. This raises the cost to spammers without significantly impacting normal users.
See the 'getting people to use it' argument above. Solve that and any of 20 different technical solutions would work. Eric
I rather liked the suggestion someone made a while ago that involves paying the recipient when sending email to them. If they reply, you get your money back. But if you spam, it would rapidly become expensive. However, that involves financial payments again, and nobody is willing to do financial anything in a way that allows anonymous players. So if we care about the ability to have anonymous email, we can simply eliminate from consideration anything that requires a paid email license or financial payments to be made in exchange for the right to send mail.

There is a better way, of course. But it may not be as profitable for the people who want to sell certs, so nobody's pushing it right now.

Remember the "hashcash" proposal from a few years ago? It basically involved the recipient setting some computational task that would take a couple of CPU seconds to complete and demanding the results (from the sending machine) before it would accept an email. IIRC, it was proposed with a probabilistic task, but there's no reason why it couldn't be done with a more precisely controlled linear task such as repeated squaring under a modulus. Or maybe you could ask distributed.net to find a way to use CPU cycles beneficially and provably, and require some number of work-packets to be completed before the mail is delivered.

The computational task can get arbitrarily larger if the recipient system doesn't like the look of the mail. I can picture the MDA going, "wow, I decrypted this one, but it scores 9.2 on my procmail filter scale, so I better ask for and get fifteen MIPS-minutes of CPU time before I actually deliver it."

Stuff like this can be done anonymously, can be done on the recipient and sender machines, can depend on filters (the MDA sees it after it arrives and gets decrypted), and limits the per-machine rate at which a spammer can send spam. It requires no central keying authority, no registrations or controls, allows random email from people you don't know or haven't heard from in a while to reach you, is a barrier that's fully customizable at the recipient site, and can be implemented purely in software (meaning nobody has to get a licence or a subscription or be vouched for by someone else to send mail). And if someone really *does* care enough to dedicate fifteen MIPS-minutes of CPU to getting an advertisement through to you, it probably means he's got a specific reason to believe that it's actually something you'll be interested in, rather than just being a "bottom feeder" who sends out a million emails in the hopes of one response.

SMTP is a hole, and needs replacing. We have the technology. It'll work.

Bear
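A toy sketch of the repeated-squaring variant (essentially a Rivest/Shamir/Wagner-style time-lock puzzle; the primes and work factor below are tiny demo values, not realistic parameters). The recipient, who knows the factorization of n, verifies in microseconds work that forced the sender through t sequential squarings:

    # Demo parameters only; a real challenge would use large random primes and a
    # work factor t tuned to the number of CPU-seconds being demanded.
    p, q = 104729, 1299709          # the 10,000th and 100,000th primes
    n, phi = p * q, (p - 1) * (q - 1)
    a, t = 2, 200000                # base and number of sequential squarings demanded

    def solve(a: int, t: int, n: int) -> int:
        """Sender's side: t modular squarings, inherently serial, no shortcut
        without knowing phi(n)."""
        x = a % n
        for _ in range(t):
            x = (x * x) % n
        return x

    def check(b: int, a: int, t: int, n: int, phi: int) -> bool:
        """Recipient's side: cheap, because knowing phi(n) lets it reduce the
        exponent 2^t before doing a single modular exponentiation."""
        e = pow(2, t, phi)
        return pow(a, e, n) == b

    assert check(solve(a, t, n), a, t, n, phi)

Note this variant is interactive: the recipient has to hand out n and t and keep the factorization secret, which sits awkwardly with store-and-forward delivery.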
I think that any loading of e-mail with computational tasks has to follow the paradigm of the current system, where A sends mail to B and there is absolutely no other communication between B and A (or C, for that matter).

For instance, take an asymmetric algo, where t=Hard(x) and x=Easy(t). x is something that B can verify is (almost) unique to the message, like x = (B's e-mail address) + timestamp (must be within last n hours). A burns cycles to do t=Hard(x) and sends t with the message to B. B verifies x with x=Easy(t) and accepts or rejects the message based on that.

The drawback is that sending any mail costs time. I queue mail, and in a few minutes t is computed and the mail sent. It could be as little as a minute per message to discourage spam. No intermediaries, no communication protocols.
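One concrete way to instantiate Hard()/Easy() with nothing but a hash function -- the 20-bit difficulty and the string format are arbitrary choices for the sketch: Hard() grinds through counters until the hash of x plus a counter has a run of leading zero bits, and Easy() is one hash plus a freshness check on the timestamp.

    import hashlib
    import time

    BITS = 20  # assumed cost knob: ~2**20 hash attempts for Hard(), one for Easy()

    def hard(x: str) -> str:
        """t = Hard(x): find a suffix whose SHA-256 starts with BITS zero bits."""
        i = 0
        while True:
            t = f"{x}:{i}"
            h = int.from_bytes(hashlib.sha256(t.encode()).digest(), "big")
            if h >> (256 - BITS) == 0:
                return t
            i += 1

    def easy(t: str, my_address: str, max_age: int = 4 * 3600) -> bool:
        """x = Easy(t): recover x, confirm it names me and a recent timestamp,
        and redo one hash to confirm the work was actually done."""
        x, _, _ = t.rpartition(":")
        addr, _, ts = x.rpartition("|")
        if addr != my_address or not ts.isdigit():
            return False
        if abs(time.time() - int(ts)) > max_age:
            return False
        h = int.from_bytes(hashlib.sha256(t.encode()).digest(), "big")
        return h >> (256 - BITS) == 0

    # Sender A:    token = hard(f"{b_address}|{int(time.time())}")
    # Recipient B: easy(token, b_address)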
The computational task can get arbitrarily larger, if the recipient system doesn't like the look of the mail. I can picture the MDA going, "wow, I decrypted this one, but it scores 9.2 on my procmail filter scale, so I better ask for and get fifteen MIPS-minutes of CPU time before I actually deliver it."
Stuff like this can be done anonymously, can be done on the recipient and sender machines, can depend on filters (the MDA sees it after it arrives and gets decrypted) and limits the per-machine rate at which a spammer can send spam.
This doesn't fit Joe Lunchbox's current model in which he dumps his outgoing mail onto his provider's server and turns off his machine. His provider either has to deliver synchronously and bounce the computational payment burden back to Joe, pay it for him, or bounce the message. In the latter case, the receiver who demanded cycles needs to recognize the problem it set and accept the answer on a later date. Matt Crawford
On Mon, 12 May 2003, Matt Crawford wrote:
This doesn't fit Joe Lunchbox's current model in which he dumps his outgoing mail onto his provider's server and turns off his machine. His provider either has to deliver synchronously and bounce the computational payment burden back to Joe, pay it for him, or bounce the message. In the latter case, the receiver who demanded cycles needs to recognize the problem it set and accept the answer on a later date.
I submit that if Joe Lunchbox is not spamming, he is unlikely to need to change his habits regarding having his machine available for a computational burden. The mail he sends to people known to him will not ordinarily trip spam filters at the receiving end that would make such requests.

Likewise, all the people who use remailers to send anonymously. As long as what they're sending isn't identifiable as spam, the remailer won't get a CPU-time request.

Bear
And what about people who use something underpowered like a Palm IV to send email? Does it really make sense to force their little DragonBall-powered machines to do a whole lot of math?

--------_sunder_@_sunder_._net_------- http://www.sunder.net

On Mon, 12 May 2003, bear wrote:
I submit that if Joe Lunchbox is not spamming, he is unlikely to need to change his habits regarding having his machine available for a computational burden. The mail he sends to people known to him will not ordinarily trip spamfilters at the recieving end that would make such requests.
Likewise, all the people who use remailers to send anonymously. As long as what they're sending isn't identifiable as spam, the remailer won't get a CPU-time request.
So, what's my reason to accept a "payment in cpu time"? As best as I can tell, a "payment in cpu time" means that someone *else* doesn't get a payment in cpu time with their spam. I still get the spam.

It seems analogous to a protocol that proves that someone burned a dollar bill. A scheme where I actually get something of value might have a bit more traction...

- Bill
On Mon, May 12, 2003 at 03:46:05PM -0400, Bill Sommerfeld wrote:
So, what's my reason to accept a "payment in cpu time"? As best as I can tell, a "payment in cpu time" means that someone *else* doesn't get a payment in cpu time with their spam. I still get the spam.
In the short term (when hardly anyone is sending hashcash tokens) accepting a payment means that you exempt it from your other filtering rules, which means that your filters are less likely to accidentally delete mail you wanted to read. Your reason to send hashcash tokens is to make it less likely that the recipient's filters will accidentally delete your mail.
It seems analagous to a protocol that proves that someone burned a dollar bill.
Very analogous.
A scheme where I actually get something of value might have a bit more traction..
I think I agree that a real cashable payment of 0.1c would be preferable; however the infrastructure to support it is many orders of magnitude harder to set up. It will also likely have to be run as a business because of the setup and ongoing hierarchically-organized infrastructure costs. And it's not clear it will profitably scale down to payments that low. And if there is real money on your machine you'll start to see viruses attempting to steal that money. Also the payment system had better support privacy, or email privacy will have just disappeared.

Adam
There's been a lot of talk lately about various sender-pays, proof of work, and related schemes for dealing with the spam problem. I am interested in building a sender-pays anonymous value-associated stamp system using Wagnerian cash. This would involve a commercial mint, email client plugins, and all the rest. I have talent, technology, and time. But I don't have any money. Anyone who wants to try it, and has funds available, should contact me. Thanks, Patrick
At 03:46 PM 5/12/03 -0400, Bill Sommerfeld wrote:
So, what's my reason to accept a "payment in cpu time"? As best as I can tell, a "payment in cpu time" means that someone *else* doesn't get a payment in cpu time with their spam. I still get the spam.
The realistic benefit is that you can use something like hashcash as one of your spam filtering rules. Anyone who is spending 1/2 sec on a reasonable machine per e-mail sent isn't likely to be spamming you, because that won't scale up very well for sending out thousands of e-mails at a time. The problem is that until it is widely adopted, it's not a very useful additional filter. There are actually dozens of similar ways to stop nearly all spam, if you can deploy them all over the net at once. But deploying anything all over the net at once isn't practical, so instead, each user or ISP tries to find some workable solution for the problem, typically involving changing his filtering rules every few months and spending a minute or two a day going through his spam folder, making sure he's not throwing away something valuable.
- Bill
--John Kelsey, kelsey.j@ix.netcom.com PGP: FA48 3237 9AD5 30AC EEDD BBC8 2A80 6948 4CAA F259
----- Original Message ----- From: "John Kelsey" <kelsey.j@ix.netcom.com> Subject: Re: A Trial Balloon to Ban Email?
At 03:46 PM 5/12/03 -0400, Bill Sommerfeld wrote:
So, what's my reason to accept a "payment in cpu time"? As best as I can tell, a "payment in cpu time" means that someone *else* doesn't get a payment in cpu time with their spam. I still get the spam.
The realistic benefit is that you can use something like hashcash as one of your spam filtering rules. Anyone who is spending 1/2 sec on a reasonable machine per e-mail sent isn't likely to be spamming you, because that won't scale up very well for sending out thousands of e-mails at a time. The problem is that until it is widely adopted, it's not a very useful additional filter.
There are actually dozens of similar ways to stop nearly all spam, if you can deploy them all over the net at once. But deploying anything all over the net at once isn't practical, so instead, each user or ISP tries to find some workable solution for the problem, typically involving changing his filtering rules every few months and spending a minute or two a day going through his spam folder, making sure he's not throwing away something valuable.
I disagree. If you assume that the entire internet will eventually take up the process, start with a rule that says "if it has a hashcash token, don't process the other rules." Obviously at first this rule would be hit rarely, but a big PR campaign surrounding it would get to people, as would implementing it in Outlook. Eventually your other rules would be rarely hit, and you could change them to simply discard. Once it's everywhere you can begin culling the bad ones. I just don't see where the necessary overhead built into the servers would come in, or be justified.

Joe
Trust Laboratories
Changing Software Development
http://www.trustlaboratories.com
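A sketch of that rule ordering as it might sit in front of an existing filter stack. The verify_stamp(), blackholed(), and bayesian_score() hooks below are placeholders for whatever checks a given site already runs, not real library calls:

    from email.message import Message

    # Placeholder hooks standing in for a site's existing machinery; each would
    # be replaced by the real stamp verifier, DNSBL lookup, and content filter.
    def verify_stamp(stamp: str, recipient: str) -> bool:
        return False

    def blackholed(msg: Message) -> bool:
        return False

    def bayesian_score(msg: Message) -> float:
        return 0.0

    def classify(msg: Message) -> str:
        """Rule 1: a valid proof-of-work token bypasses every other rule, so only
        stamp-less mail pays the false-positive tax of the remaining filters."""
        if verify_stamp(msg.get("X-Hashcash", ""), msg.get("To", "")):
            return "deliver"
        if blackholed(msg) or bayesian_score(msg) > 0.9:
            return "quarantine"  # today: spam folder; once stamps are common: discard
        return "deliver"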
On Tue, May 13, 2003 at 08:21:16PM -0700, Joseph Ashwood wrote:
From: "John Kelsey" <kelsey.j@ix.netcom.com>
[...]. Anyone who is spending 1/2 sec on a reasonable machine per e-mail sent isn't likely to be spamming you, because that won't scale up very well for sending out thousands of e-mails at a time. The problem is that until it is widely adopted, it's not a very useful additional filter.
The short-term usefulness of a hashcash / PoW filter when used with bayesian filters (which I think is what Joseph is saying below) is that you are less likely to accidentally lose mail to the Bayesian filters. Ideally blackhole lists should also be exempted if there is hashcash (they are another big source of loss of email; I've been hit by that a number of times).

I suspect increasingly more email will be lost to filters and blackhole lists, because the anti-spam people are becoming increasingly gung-ho and sweeping in their blackholing and filtering as the problem accelerates out of control, so the short-term function of hashcash to improve email reliability could be a useful extra function. (Estimates vary, but at the ASRG kick-off at IETF there were some very high per-month growth figures (10% and higher per month) for spam which were far in excess of (non-spam) email growth.) Similarly your incentive to send hashcash in the short term is to avoid your own mail being swallowed by blackholes and Bayesian filtering false positives.

The limitation with blackholes is that it depends on the blackhole implementation: some simply refuse the TCP connection at firewall level; others accept but give you a 500 (or whatever it is) response code explaining why -- but that is already too early for them to have read the X-Hashcash header. One way around that is to include hashcash as an ESMTP address parameter, which I understand allows you to say things after the RCPT TO, but even that may be too late (if they already said go away after the HELO).

Another approach, though only longer term, debatably too aggressive/draconian, and in the short term having the same problem as TCP rejection of blackholed IPs, would be integration of hashcash into TCP like syncookies (see section 4.2, hashcash cookies, of [1]) so that the mailer can reject port 25 connections which don't have hashcash tokens. Or perhaps (less aggressively) to use a getsockopt or ioctl to read from the socket whether the sender is using hashcash or not. One problem with this approach is that the PoW received by the MTA may not be convincing to the recipient, so there remains a risk that the recipient could be spammed by a colluding or host-compromised MTA at their ISP. (You could add envelope recipient emails to the puzzle, but that's sufficiently SMTP-related you'd just as well send it in SMTP.)

Another integration point could be IPSEC. On the interactive connection DoS-hardening side, there was a paper about using Juels and Brainard's Client Puzzles [2] (a known-solution puzzle where the server has to issue the challenge interactively) for SSL DoS hardening [3]. More recently, though I haven't obtained a copy yet, Xiaofeng Wang and Michael Reiter have a paper about an implementation hardening the linux kernel TCP stack against DoS using puzzles [4]; I'm presuming from the abstract this is similar to the hashcash-cookie approach, though I'm not sure which puzzle they used. (Not sure what the puzzle auction mechanism is.)

Adam

[1] Aug 02 - "Hashcash - A Denial of Service Counter Measure" (5 years on), Tech Report, Adam Back. http://www.cypherspace.org/adam/hashcash/hashcash.pdf

[2] Ari Juels and John Brainard. Client puzzles: A cryptographic countermeasure against connection depletion attacks. In Network and Distributed System Security Symposium, 1999. Also available as http://www.rsasecurity.com/rsalabs/staff/bios/ajuels/publications/client-puz...

[3] Drew Dean and Adam Stubblefield. Using client puzzles to protect TLS. In Proceedings of the 10th USENIX Security Symposium, Aug 2001. Also available as http://www.cs.rice.edu/~astubble/papers.html

[4] XiaoFeng Wang and Michael Reiter, "Defending Against Denial-of-Service Attacks with Puzzle Auctions", IEEE Symposium on Security and Privacy 2003. http://www.computer.org/proceedings/sp/1940/19400078abs.htm

Joseph Ashwood wrote:
I disagree. If you assume that the entire internet will eventually take up on the process, start with a rule that says "if it has a hashcash token don't process the other rules." Obviously at first this rule would be hit rarely, but a big PR campaign surrounding it would get to people, as would implementing it in Outlook. Eventually your other rules would be rarely hit, and you could change them to simply discard. Once it's everywhere you can begin culling the bad ones. I just don't see where the necessary overhead bult into the servers will take place, or be justified.
On Thu, May 15, 2003 at 09:56:17AM +0100, Adam Back wrote:
The limitation with blackholes is it depends on the blackhole implementation, some are simply refusing the TCP connection at firewall level; others are accepting but giving you a 500 (or whatever it is) response code explaining why -- but that is already too early for them to have read the X-Hashcash headder. One way around that is to include hashcash as an ESMTP address parameter which I understand allows you to say things after the RCPT TO, but even that may be too late (if they already said go away after the HELO).
There is already a reasonably good proof-of-work mechanism built into SMTP -- START_TLS. Any server that is willing to do TLS with mine is very unlikely to be a spammer. In fact a quick check of about 8000 spams I have shows that two of them used TLS. (Both in the last week. Hmm.)

While it's true that the TLS protocol allows a client to subject a server to a DoS attack by getting the server to do the expensive crypto operation first (as the Dean & Stubblefield paper points out), in order for an MTA to deliver mail it's got to complete the TLS handshake.

So, to fix the spam problem, all we have to do is require START_TLS. :-)

Now, to generate an 8192-bit key....

Eric
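For illustration, the sending side of that requirement with Python's smtplib -- host and addresses are placeholders, and a receiving server enforcing the policy would simply refuse MAIL before STARTTLS:

    import smtplib
    import ssl

    msg = (
        "From: alice@example.org\r\n"
        "To: bob@example.net\r\n"
        "Subject: test\r\n"
        "\r\n"
        "hello\r\n"
    )

    ctx = ssl.create_default_context()
    with smtplib.SMTP("mail.example.net", 25) as server:
        server.ehlo()
        server.starttls(context=ctx)   # the handshake is the (small) proof of work
        server.ehlo()                  # re-EHLO over the now-encrypted channel
        server.sendmail("alice@example.org", ["bob@example.net"], msg)

Note the cost lands on whichever machine makes the connection, which is relevant to the relay/proxy point raised in the next message.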
At 10:22 AM 05/16/2003 -0700, Eric Murray wrote:
There is already a reasonably good proof-of-work mechanism built into SMTP-- START_TLS.
Any server that is willing to do TLS with mine is very unlikely to be a spammer. In fact a quick check of about 8000 spams I have shows that two of them used TLS. (both in the last week. hmm.)
Steve Bellovin pointed out that spammers who use open relays and open proxies will happily burn those CPUs doing proof-of-work as well as burning their bandwidth multiplying spam. That's not necessarily a _bad_ thing, if it gets the attention of the people running the relay/proxy machines (:-)

But it's a basic problem with link-based proof-of-work like START_TLS as opposed to end-to-end proof-of-work mechanisms in the message itself. If you do link-based, only the last relay site needs to do the work, so the spammer can steal CPU from lots of machines without burning his own. If you do message-based proof-of-work, it's much harder to get a proxy or relay to do the work, as opposed to using the spammer's own machine.

START_TLS and other link-based mechanisms _do_ have the benefit of harassing dialup and DSL spammers, who are using their own CPUs without relays, so it at least gets rid of some of the ankle-biters and forces spammers to abuse relays and proxies, which may be easier to identify (especially because they're using START_TLS...). This has the side benefit that it cuts down on the use of dial/dsl blacklists, which are one of the extremely annoying sources of collateral damage in the anti-spam world.
I submit that if Joe Lunchbox is not spamming, he is unlikely to need to change his habits regarding having his machine available
Mostly unrelated to this, but something's just occurred to me. Probably I'm being really stupid, but ... for the receiving MTA to know that the problem has been processed properly, it would have to know the answer. How does it know what the answer should be?
----- Original Message ----- From: "Paul Walker" <paul@black-sun.demon.co.uk> Subject: Re: A Trial Balloon to Ban Email?
I submit that if Joe Lunchbox is not spamming, he is unlikely to need to change his habits regarding having his machine available
Mostly unrelated to this, but something's just occurred to me. Probably I'm being really stupid, but ... for the receiving MTA to know that the problem has been processed properly, it would have to know the answer. How does it know what the answer should be?
That one's easy. Use a problem that is not in P but is in NP. To make it clearer to most people, use a problem that can be verified cheaply, but that can't be solved cheaply. To take something that's on everyone's computer, Minesweeper is an example of such a problem. Once a solution has been found it is easy enough to verify that it is correct (all bombs marked, all non-bomb places revealed), but it can be prohibitively expensive to solve a large grid. Other common examples include jigsaw puzzles, digits of pi, etc. More functional puzzles for this purpose are NP-complete problems; e.g. traveling salesman, Hamiltonian cycle, SAT, etc. Right now another couple of good examples would be discrete logarithm and integer factoring. In all these cases verifying the solution is cheap (generally travelling the path in the NP-complete problems, or computing the values in the DL and IF). Verifying that the puzzle is valid is only slightly more difficult, but retaining an active list of problems would solve the issue (but open up the possibility of DOS attacks). Basically it's a fairly easily solved problem. Joe
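To make the solve-hard/verify-cheap asymmetry concrete with the integer-factoring example from the post (a toy sketch; real puzzles would use numbers far too large for trial division):

def solve(n: int) -> tuple[int, int]:
    # The hard direction: find a non-trivial factor by trial division.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime")

def verify(n: int, p: int, q: int) -> bool:
    # The easy direction: one multiplication and a range check.
    return 1 < p < n and p * q == n

challenge = 104729 * 1299709        # product of two known primes; a toy "puzzle"
p, q = solve(challenge)             # the sender burns CPU here
assert verify(challenge, p, q)      # the receiver checks the answer almost for free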
On Monday 12 May 2003 07:09 pm, Joseph Ashwood wrote:
That one's easy. Use a problem that is not in P but is in NP. To make it clearer to most people, use a problem that can be verified cheaply, but that can't be solved cheaply.
Please permit me to join the dense crowd. Now that I've proved my labor, how do I attach the proof to the email? Obviously, some parts of the message are added to a hash, but which parts? If it's the body, is whitespace damage still an issue?
At 11:52 PM 05/12/2003 -0500, Roy M.Silvernail wrote:
On Monday 12 May 2003 07:09 pm, Joseph Ashwood wrote:
That one's easy. Use a problem that is not in P but is in NP. To make it clearer to most people, use a problem that can be verified cheaply, but that can't be solved cheaply.
Please permit me to join the dense crowd. Now that I've proved my labor, how do I attach the proof to the email? Obviously, some parts of the message are added to a hash, but which parts? If it's the body, is whitespace damage still an issue?
The obvious mechanisms for including it are a header line, X-Hashcash-Version-1212: 0x20A13490B8219048243 which is pretty easy for almost anybody to add. You could also do an ESMTP extension of some sort, which is much more annoying to add, but lets you reject non-hashcashed messages before receiving them. (The ESMTP approach also has the problem that it's only useful for direct connections, as opposed to mail relayed through your ISP, so that probably isn't as interesting.) Some of the hashcash proposals have required near-real-time interaction between the sender's client and the recipient's server, to collect the string of the day or string of the moment, which has privacy/anonymity problems, while others use a fixed or slowly changing parameter set, e.g. find a string that matches the first N bits of the SHA1 of recipient@example.com-YYYYMMDDHH or recipient@example.com-YYYYMMDDHH-KEYPHRASE. So the recipient's mail server or client looks for the X-Hashcash string, makes sure it isn't just recipient@example.com-YYYYMMDDHH-KEYPHRASE itself, hashes it and makes sure that the number matches, and you're good to go.
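A sketch of the slowly-changing-parameter variant described above: find a string whose SHA1 agrees with the SHA1 of recipient@example.com-YYYYMMDDHH in its first N bits. The choice of N=20 and the counter-based search are illustrative, not part of any standard.

import hashlib
import itertools
import time

N = 20  # difficulty in bits; expect roughly 2**N attempts

def leading_bits(data: bytes, n: int) -> int:
    digest = hashlib.sha1(data).digest()
    return int.from_bytes(digest, "big") >> (160 - n)

def mint(recipient: str) -> str:
    target = f"{recipient}-{time.strftime('%Y%m%d%H')}"
    want = leading_bits(target.encode(), N)
    for counter in itertools.count():
        candidate = f"{target}:{counter}"       # never equal to the target itself
        if leading_bits(candidate.encode(), N) == want:
            return candidate

def check(recipient: str, token: str) -> bool:
    target = f"{recipient}-{time.strftime('%Y%m%d%H')}"
    # Reject the trivial "solution" (the target string itself), then compare bits.
    return token != target and leading_bits(token.encode(), N) == leading_bits(target.encode(), N)

token = mint("recipient@example.com")
print(check("recipient@example.com", token))    # True, within the same hour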
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Roy M.Silvernail (2003-05-13 04:52Z) wrote:
On Monday 12 May 2003 07:09 pm, Joseph Ashwood wrote:
That one's easy. Use a problem that is not in P but is in NP. To make it clearer to most people, use a problem that can be verified cheaply, but that can't be solved cheaply.
Please permit me to join the dense crowd. Now that I've proved my labor, how do I attach the proof to the email? Obviously, some parts of the message are added to a hash, but which parts? If it's the body, is whitespace damage still an issue?
The message-id would need to be included. Lots of people filter duplicate messages, and those who don't probably should. If spammers try to replay, their duplicates get dropped. If they don't replay using the same message id, they're forced to regenerate hashcash tokens. Using duplicate message ids is an RFC violation, and just using those in the hash avoids the complication of mangled message bodies. It also gets rid of idiot MUAs which don't include message ids. The mess seems to occur when considering how to verify that that particular message, with a particular message id, wasn't bcc'd to 10 billion other people. How do you determine whether a Delivered-To header (if a mail server was even nice enough to indicate which envelope to: address it used in the history of a message instance) indicates a mailing list or an individual? How do you know whether any hashcash token that may have been generated based on a particular envelope to: address is valid or corresponds to a delivery list with so many people that the hashcash should be invalidated and whitelisting required? If envelope to: addresses are not each required to have separate hashcash tokens, doesn't the whole scheme fall apart? I don't know that including a Date: header in the hash improves the situation. - -- Freedom's untidy, and free people are free to make mistakes and commit crimes and do bad things. They're also free to live their lives and do wonderful things. --Rumsfeld, 2003-04-11 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.2rc2 (GNU/Linux) iEYEARECAAYFAj7BMzQACgkQnH0ZJUVoUkPPcwCgyznLWmSJjLLjqc+N8QTRkahx NIQAn2EtKQE32V5XfS6sXWtu0JeegZll =nBxD -----END PGP SIGNATURE-----
On Tuesday 13 May 2003 01:02 pm, Justin wrote:
The message-id would need to be included. Lots of people filter duplicate messages, and those who don't probably should. If spammers try to replay, their duplicates get dropped. If they don't replay using the same message id, they're forced to regenerate hashcash tokens. Using duplicate message ids is an RFC violation, and just using those in the hash avoids the complication of mangled message bodies. It also gets rid of idiot MUAs which don't include message ids.
The mess seems to occur when considering how to verify that that particular message, with a particular message id, wasn't bcc'd to 10 billion other people.
Right you are, unless the tokens are centrally cleared. Dupe message-ids are only a violation if you get caught by the same server, so power spammers will sort their lists into bombing runs of one address per victim SMTP server and only need one token per run. Doesn't eliminate their work factor, but it does reduce it.
I don't know that including a Date: header in the hash improves the situation.
Don't think so. Dates can be duped along with message-ids and they still get one trip around the servers on the same token. I don't see this working without some kind of online clearing. Hey, you DBC guys... how do you stiffen up an offline clearing protocol like this?
Well there are different things you could hash. The simplest is just to hash the recipient address and the current time (to a day resolution). The recipient looks at the token and knows it is addressed to him because it's his address. He stores it in his double spend database and won't accept the same token twice. After the validity period of a token has expired he can remove it from his double-spend database to avoid the database growing indefinitely. (He can reject out-of-date mail based purely on its date). Hashing the message body is generally a bad idea because of minor transformations that happen as mail traverses MTAs and gateways. In fact I don't see a need to hash anything else if you're happy keeping a double-spend database. Adam On Mon, May 12, 2003 at 11:52:57PM -0500, Roy M.Silvernail wrote:
On Monday 12 May 2003 07:09 pm, Joseph Ashwood wrote:
That one's easy. Use a problem that is not in P but is in NP. To make it clearer to most people, use a problem that can be verified cheaply, but that can't be solved cheaply.
Please permit me to join the dense crowd. Now that I've proved my labor, how do I attach the proof to the email? Obviously, some parts of the message are added to a hash, but which parts? If it's the body, is whitespace damage still an issue?
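A minimal sketch of the double-spend bookkeeping Adam describes above, assuming the token carries its date and recipient in the clear; the two-day validity window is an illustrative policy, not something fixed by hashcash.

import datetime

class DoubleSpendDB:
    def __init__(self, validity_days: int = 2):
        self.validity = datetime.timedelta(days=validity_days)
        self.seen: dict[str, datetime.date] = {}   # token -> date first seen

    def accept(self, token: str, token_date: datetime.date) -> bool:
        today = datetime.date.today()
        if today - token_date > self.validity:
            return False                 # out-of-date mail rejected on its date alone
        if token in self.seen:
            return False                 # replay: same token presented twice
        self.seen[token] = token_date
        return True

    def expire(self):
        # Tokens older than the validity window can never be accepted again,
        # so they can be dropped to keep the database from growing forever.
        cutoff = datetime.date.today() - self.validity
        self.seen = {t: d for t, d in self.seen.items() if d >= cutoff}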
And what happens when there's a network outage and a message gets stuck in the queue for a day on another server? You know a backup MX server when yours is hosed? Do you not accept the mail because the current day doesn't match what's in the message? Or do you accept mails from a day ago? a week ago? a year ago? 1922? 2nd, why wouldn't the spammer just adjust and send an email to each recipient with a random, but properly hashed token to match the target address + today's date? More work for sure, but if enough targets start adopting it, the spammer will adapt. The token doesn't have to contain an actual valid coin, and you'll only find out when you try to cash it. ----------------------Kaos-Keraunos-Kybernetos--------------------------- + ^ + :25Kliters anthrax, 38K liters botulinum toxin, 500 tons of /|\ \|/ :sarin, mustard and VX gas, mobile bio-weapons labs, nukular /\|/\ <--*-->:weapons.. Reasons for war on Iraq - GWB 2003-01-28 speech. \/|\/ /|\ :Found to date: 0. Cost of war: $800,000,000,000 USD. \|/ + v + : The look on Sadam's face - priceless! --------_sunder_@_sunder_._net_------- http://www.sunder.net ------------ On Wed, 14 May 2003, Adam Back wrote:
Well there are different things you could hash. The simplest is just to hash the recipient address and the current time (to a day resolution).
The recipient looks at the token and knows it is addressed to him because it's his address. He stores it in his double spend database and won't accept the same token twice.
After the validity period of a token has expired he can remove it from his double-spend database to avoid the database growing indefinitely. (He can reject out-of-date mail based purely on its date).
<SNIP>
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Adam Back (2003-05-14 05:27Z) wrote:
Well there are different things you could hash. The simplest is just to hash the recipient address and the current time (to a day resolution).
The recipient looks at the token and knows it is addressed to him because it's his address. He stores it in his double spend database and won't accept the same token twice.
This is just broken. How do you know what address the sender was sending to? You have no reliable access to envelope to: addresses. Joe bcc's james@nowhere.net, politely generating a hashcash token over james@nowhere.net and the mesID. Nowhere.net expands that alias to james.t.doe@treas.gov. Whoops. Hashcash is now invalid, as there's no reliable mechanism MTAs use to note the original address before the change. Some might note it with Delivered-To:, others note it in received headers, others (qmail, ahem) don't note it at all. Worse, even if there were a reliable mechanism, all it takes is one loose cannon with an open mass-mail list and as long as it doesn't delete whatever header (maybe delivered-to:, maybe something else) that indicates the list was an envelope to: address, one hashcash token works for one email to the entire list.
After the validity period of a token has expired he can remove it from his double-spend database to avoid the database growing indefinitely. (He can reject out-of-date mail based purely on its date).
Isn't it simpler to use message IDs for replay detection? No need to look for replays using another mechanism when there's already one that works fine, and that many people use for dup detection today.
Hashing the message body is generally a bad idea because of minor transformations that happen as mail traverses MTAs and gateways.
No argument there. - -- Freedom's untidy, and free people are free to make mistakes and commit crimes and do bad things. They're also free to live their lives and do wonderful things. --Rumsfeld, 2003-04-11 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.2rc2 (GNU/Linux) iEYEARECAAYFAj7CV34ACgkQnH0ZJUVoUkPNkACeJBwnnFNrk7aipazqOVDxaNa2 KRwAoMGCd4CtMkJhZD7zC3sy0mBWiSTK =EEDd -----END PGP SIGNATURE-----
On Wed, May 14, 2003 at 02:49:34PM +0000, Justin wrote:
Adam Back (2003-05-14 05:27Z) wrote:
Well there are different things you could hash. The simplest is just to hash the recipient address and the current time (to a day resolution).
The recipient looks at the token and knows it is addressed to him because it's his address. He stores it in his double spend database and won't accept the same token twice.
This is just broken.
How do you know what address the sender was sending to? You have no reliable access to envelope to: addresses.
Well the address the token was minted for is contained in the hashcash header, and the recipient knows what email addresses he accepts mail for. To take your example:
Joe bcc's james@nowhere.net, politely generating a hashcash token over james@nowhere.net and the mesID. Nowhere.net expands that alias to james.t.doe@treas.gov.
The sender, as he Bcc'd james@nowhere.net, thinks this is the recipient's address, so it delivers to envelope address james@nowhere.net and for that delivery adds header: X-Hashcash: 0:030514:james@nowhere.net:b384c3cc66319383 Then the .forward file forwards to james.t.doe@treas.gov, who reads his mail; his MUA sees that the message is to james@nowhere.net, an address he reads mail for, checks the collision: % echo -n 0:030514:james@nowhere.net:b384c3cc66319383 | sha1 000002e07c7aac5697396f41dbb277aee02f6517 sees there are enough bits of collision, and if he hasn't seen this token before he accepts the mail. The message will also contain one hashcash header per to or cc recipient. (Bcc recipients must be delivered separately because otherwise bcc semantics are lost -- other recipients should not learn from the hashcash headers that the bcc recipient received the mail).
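The check the echo | sha1 command above performs, spelled out. The 20-bit threshold is the usual hashcash default and is an assumption here, as is the my_addresses set; date checking and the double-spend database are omitted for brevity.

import hashlib

REQUIRED_BITS = 20
my_addresses = {"james@nowhere.net", "james.t.doe@treas.gov"}

def zero_bits(token: str) -> int:
    digest = hashlib.sha1(token.encode()).digest()
    return 160 - int.from_bytes(digest, "big").bit_length()

def accept(token: str) -> bool:
    version, _date, resource, _rand = token.split(":", 3)
    return (
        version == "0"
        and resource in my_addresses            # minted for an address I read mail for
        and zero_bits(token) >= REQUIRED_BITS   # enough work was done
    )

print(accept("0:030514:james@nowhere.net:b384c3cc66319383"))   # Adam's example token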
Worse, even if there were a reliable mechanism, all it takes is one loose cannon with an open mass-mail list and as long as it doesn't delete whatever header (maybe delivered-to:, maybe something else) that indicates the list was an envelope to: address, one hashcash token works for one email to the entire list.
I take it this comment is about mailing lists? Mailing lists have to be treated separately. The sender probably can't afford to create a token for each recipient. (Also he doesn't know the recipients' addresses). Mailing lists deal with spam with filtered versions of lists.
After the validity period of a token has expired he can remove it from his double-spend database to avoid the database growing indefinitely. (He can reject out-of-date mail based purely on its date).
Isn't it simpler to use message IDs for replay detection? No need to look for replays using another mechanism when there's already one that works fine, and that many people use for dup detection today.
You have to cope with multiple hashcash headers when a mail has multiple recipients; Message-ID only supports one header. For USENET postings, putting the hashcash token in the Message-ID can work because USENET uses the Message-ID to suppress duplicates in its flooding algorithm, and you could argue that there is just one recipient: USENET (or the cross-posted group list). Adam
Justin wrote:
Well there are different things you could hash. The simplest is just to hash the recipient address and the current time (to a day resolution).
The recipient looks at the token and knows it is addressed to him because it's his address. He stores it in his double spend database and won't accept the same token twice.
This is just broken.
How do you know what address the sender was sending to? You have no reliable access to envelope to: addresses.
Why do you care about that? All you care about is that the intended recipient on the mail you actually see is an address you are willing to read mail for. If there is no to: field, or if the "to:" is an address you don't think is yours, just drop the mail. Plenty of places already filter out incoming mail with no "to:" anyway. As others have pointed out, if a mechanism like this is meant to give a clue to your filters (or SpamAssassin, or whatever) that something is likely not spam, then it does not need to be of any value to the recipient. All the hash need do is indicate that the originator has thought about the recipient for long enough to make the hash. You don't really need to store the hash for any longer than a day or two (so only one spammer can use one hash), and you can't respend it because it is only good for sending messages to you. So if this were implemented we would get an incentive to design a new kind of hashing algorithm, one designed to be difficult to run, because all it is needed for is to prove that someone bothered enough to spend the time. Also it needs to map one plaintext to many valid hashes; of course, as others said, that's easier when you include the "from:" in the hash or allow some arbitrary field. I still don't think it's going to happen, though.
At 04:53 PM 05/15/2003 +0100, ken wrote:
So if this were implemented we would get an incentive to design a new kind of hashing algorithm, one designed to be difficult to run, because all it is needed for is to prove that someone bothered enough to spend the time. Also it needs to map one plaintext to many valid hashes; of course, as others said, that's easier when you include the "from:" in the hash or allow some arbitrary field.
The hash is easy to do - given a target "T", provide a string "X" such that Bit(i,SHA1(X)) == Bit(i,SHA1(T)) for i=1...N, and Substring(SHA1(X),N+1,160) != Substring(SHA1(T),N+1,160). You'll need to try roughly 2**N inputs to find one.
That particular approach is vulnerable to precomputation and amortization of computation against different target strings, i.e. an attacker can precompute and store 2**N inputs and have a fair chance of being able to solve by lookup. Similarly he can for the same cost find collisions on SHA1(T) and SHA1(T') simultaneously. What the original hashcash function did was look for Bit(i,SHA1(T||X)) == Bit(i,SHA1(T)) for i = 1..N; that way the candidate solutions are useless against other targets. A more recent simplification is to just use the all-zero bit string as the target. So you're looking for Bit(i,SHA1(T||X)) = 0 for i = 1..N. Adam On Fri, May 16, 2003 at 05:20:44PM -0700, Bill Stewart wrote:
The hash is easy to do - given a target "T", provide a string "X" such that Bit(i,SHA1(X)) == Bit(i,SHA1(T)) for i=1...N, and Substring(SHA1(X),N+1,160) != Substring(SHA1(T),N+1,160).
You'll need to try roughly 2**N inputs to find one.
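A sketch of the simplified search Adam describes above: keep the target T (date and recipient) fixed, vary a suffix X, and stop when SHA1(T||X) starts with n zero bits, so candidate solutions are bound to this target and useless against any other. n=20 and the counter suffix are illustrative choices.

import hashlib
import itertools

def mint(target: str, n: int = 20) -> str:
    threshold = 1 << (160 - n)          # SHA1 values below this start with n zero bits
    for x in itertools.count():
        token = f"{target}:{x:x}"
        digest = int.from_bytes(hashlib.sha1(token.encode()).digest(), "big")
        if digest < threshold:
            return token                 # expected after roughly 2**n attempts

print(mint("0:030514:james@nowhere.net"))   # deterministic here; real senders use a random suffix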
On Sun, 2003-05-11 at 16:05, Paul Walker wrote:
I submit that if Joe Lunchbox is not spamming, he is unlikely to need to change his habits regarding having his machine available
Mostly unrelated to this, but something's just occurred to me. Probably I'm being really stupid, but ... for the receiving MTA to know that the problem has been processed properly, it would have to know the answer. How does it know what the answer should be?
I believe the usual approach to this is to have it be an asymmetrically hard problem - i.e. factor a number into primes to do the work (hard), multiply them to validate the answer (easy). -- Nathan ------------------------------------------------------------ Nathan Neulinger EMail: nneul@umr.edu University of Missouri - Rolla Phone: (573) 341-4841 Computing Services Fax: (573) 341-4216
On Tue, May 13, 2003 at 07:23:23AM -0500, Nathan Neulinger wrote:
I believe the usual approach to this is to have it be an asymmetrically hard problem - i.e. factor a number into primes to do the work (hard), multiply them to validate the answer (easy).
Okay, so it was me being stupid. :-) Thanks both. -- Paul
"Paul Walker" <paul@black-sun.demon.co.uk> writes:
I submit that if Joe Lunchbox is not spamming, he is unlikely to need to change his habits regarding having his machine available
Mostly unrelated to this, but something's just occurred to me. Probably I'm being really stupid, but ... for the receiving MTA to know that the problem has been processed properly, it would have to know the answer. How does it know what the answer should be?
The same way you know you have the right answer with certain other hard problems -- you choose a problem that's one-way hard. For example: factoring. Factoring a large number is hard. Verifying you have the right answer is easy (you just multiply the factors and see if you've got the right answer). So, just choose from the class of self-verifying problems. OTOH, I still think a micro-payment postage system is a better idea. The sender puts a micro-payment into the mail header to pay the recipient to accept/read the message. For non-spam, the recipient doesn't need to cash the payment (or can just return it to the sender). For spam, the recipient collects the money (thereby costing the spammer real $$$ to send spam, if most recipients actually collect). The only remaining architectural problem is how to handle mailing lists. -derek -- Derek Atkins Computer and Internet Security Consultant derek@ihtfp.com www.ihtfp.com
----- Original Message ----- From: "Derek Atkins" <derek@ihtfp.com> Subject: Re: A Trial Balloon to Ban Email?
OTOH, I still think a micro-payment postage system is a better idea. The sender puts a micro-payment into the mail header to pay the recipient to accept/read the message. For non-spam, the receipient doesn't need to cash the payment (or can just return it to the sander). For spam, the receipient collects the money (thereby costing the spammer real $$$ to send spam, if most receipients actually collect). The only remaining architectural problem is how to handle mailing lits.
So you're expecting that everyone will be honest about cashing micropayments? That seems rather silly. If such a mechanism were to become required on the internet I'd simply retire today, sign my email accounts (all except 1) up on every spam list, every mailing list, everything that would get me thousands of tokens a day, have an automated script cash all the tokens for me, and I'm generally considered fairly scrupulous. Additionally there is one major flaw in your design: what's to stop the spammers from using fake micropayments? The fact that people who believe it is spam will be unable to cash them? Like they really care about the people who delete their email. Or were you planning on every intermediate mail forwarder (all 14 of them between your sending and my receiving on this list) taking the time out of their busy schedule to verify the micropayments? It won't work: the micropayment will be widely reused anyway, the spammers depending on the bulk of the sends reaching their targets before the micropayment is cashed. This will in turn increase the burden on the intermediate servers, because the spammers obviously have to send out far more now (because so many of their messages never reach the servers), and the servers need to verify the payments (otherwise the payments mean nothing). The entire solution only raises the backlog of spam, raises the requirements for intermediate servers, raises the requirements for end servers, and introduces new methods of mass abuse. Doesn't exactly sound like something I want sitting on my network. Joe
At 12:09 PM 05/13/2003 -0700, Joseph Ashwood wrote:
----- Original Message ----- From: "Derek Atkins" <derek@ihtfp.com> Subject: Re: A Trial Balloon to Ban Email?
[Micropayment, sender pays recipient, refund for non-spam]
So you're expecting that everyone will be honest about cashing micropayments? That seems rather silly, if such a mechanism were to become required on the internet I'd simply retire today, sign my email accounts (all except 1) up on every spam list, every mailing list, everything that would get me thousands of tokens a day, have an automated script cash all the tokens for me, and I'm generally considered fairly scrupulous.
I can see the advertising campaign for this now:
You, yes YOU!! Can M4K3 M0N3Y FA$T! Just By READING EMAIL!!! FIND OUT HOW BY SENDING US $9.95 or 10.2Euros or 1 Gram of e-Gold!
It's frustrating, because just about any set of who-pays-whom-for-email rules fails badly, other than sender-pays-recipient-somehow.
On Tue, May 13, 2003 at 09:06:18AM -0400, Derek Atkins wrote:
OTOH, I still think a micro-payment postage system is a better idea. The sender puts a micro-payment into the mail header to pay the recipient to accept/read the message. For non-spam, the receipient doesn't need to cash the payment (or can just return it to the sander). For spam, the receipient collects the money (thereby costing the spammer real $$$ to send spam, if most receipients actually collect). The only remaining architectural problem is how to handle mailing lits.
If we assume an environment where a payor/spender can later check to see if their payment was cashed, this also creates a relatively cheap way for spammers to create or validate a list of working email addresses. Hash-based lists of spam messages have this property, too - a recipient of a unique message implicitly validates their email address by reporting the message or its hash to a public database of known spams, if the sender of the message cares to go back and check to see which of their sent messages have been reported. Exploits of those features may be a few steps down the road in the spam arms race, but it's not unthinkable ... -- Greg Broiles gbroiles@parrhesia.com
At 1:11 PM -0700 5/13/03, Greg Broiles wrote:
If we assume an environment where a payor/spender can later check to see if their payment was cashed, this also creates a relatively cheap way for spammers to create or validate a list of working email addresses.
This problem could be eliminated if the ISP(s) collected the money, regardless of whether the mail could be delivered or not. It's sounding more and more like the postal system. Perhaps we can get spam-stamps to subsidize regular email. Cheers - Bill ------------------------------------------------------------------------- Bill Frantz | Due process for all | Periwinkle -- Consulting (408)356-8506 | used to be the | 16345 Englewood Ave. frantz@pwpconsult.com | American way. | Los Gatos, CA 95032, USA
On Tue, May 13, 2003 at 01:11:29PM -0700, Greg Broiles wrote:
If we assume an environment where a payor/spender can later check to see if their payment was cashed, this also creates a relatively cheap way for spammers to create or validate a list of working email addresses.
Greg is right, and raises a point I hadn't considered before. But then again if I charge $.25 to send me mail in a hypothetical micropayment system (and I'd hope a social custom would arise making it tacky to retain the money if the mail were not spam), I'd be happy to let everyone know I have a working email address. -Declan
Yes, but how will you stop the spammer from double spending the same $0.25 micropayment on all of his 170,000 email addresses? Depending on whether you check that there is a payment attached or not, and also check it with the bank before delivering it, you'd have already wasted your bandwidth and possibly have accepted a spam into your mail spool. At that point you have: 1. already had a slice of your bandwidth eaten by the spammer, plus some cpu cycles verifying that there exists a coin. Spammer +1, you -1. 2. You now have to verify that the coin is a coin and not just some random junk - you waste some cpu cycles here. If you don't validate that the coin hasn't been double-spent, you haven't made that $0.25 and have accepted a spam - not that you will personally read it, but your system did (cpu, bandwidth and some disk storage until it throws it to the hungry maw of /dev/null.) Spammer +1, you -2. 3. Presumably you'll want to validate/cash the coin. If you do, you'll need to talk to the bank in order to prevent the spammer from double spending and to actually collect your quarter v-cash. By doing that on a spam, you're taking part in a DDoS against the bank as only the 1st guy on the spammer's list to talk to the bank will get the coin - because everyone presumably will choose to cash the coin. If you neither cash nor validate the coin with the bank, you haven't made your v-quarter: spammer +10 points, you -1, bank -10,000. The spammer doesn't give a shit, he just wants to get as many emails out there as possible. In fact, he mostly doesn't care whether you filter or not - he makes his money when he sends the spam, not when you read it. Of course, he can charge more for "real, verified" email addresses, but that's less important. What's the score again? Oh yeah, game over, insert quarter to play again. A beautiful example of creating a cryptographic solution that doesn't quite work in real life. ----------------------Kaos-Keraunos-Kybernetos--------------------------- + ^ + :25Kliters anthrax, 38K liters botulinum toxin, 500 tons of /|\ \|/ :sarin, mustard and VX gas, mobile bio-weapons labs, nukular /\|/\ <--*-->:weapons.. Reasons for war on Iraq - GWB 2003-01-28 speech. \/|\/ /|\ :Found to date: 0. Cost of war: $800,000,000,000 USD. \|/ + v + : The look on Sadam's face - priceless! --------_sunder_@_sunder_._net_------- http://www.sunder.net ------------ On Tue, 13 May 2003, Declan McCullagh wrote:
Greg is right, and raises a point I hadn't considered before. But then again if I charge $.25 to send me mail in a hypothetical micropayment system (and I'd hope a social custom would arise making it tacky to retain the money if the mail were not spam), I'd be happy to let everyone know I have a working email address.
If you go to an ISP-collects model, see how this changes the picture. At 6:57 AM -0700 5/14/03, Sunder wrote:
Yes, but how will you stop the spammer from double spending the same $0.25 micropayment on all of his 170,000 email addresses? Depending on whether you check that there is a payment attached or not, and also check it with the bank before delivering it, you'd have already wasted your bandwidth and possibly have accepted a spam into your mail spool.
ISP receives mail header. As soon as the coin appears: (1) Check it against the in-memory Bloom filter of already-seen coins; if the coin is not in the filter, go to collect. (2) Check it against the local database of already-seen coins (because Bloom filters can give false positives); if it is in the database, drop the mail and the connection, otherwise the filter hit was a false positive, so go to collect. Result: no mail in the spool, and minimum bandwidth lost. (collect) Add the coin to the Bloom filter and to the database. Collect the money from the bank. If the bank says "double spent", drop the connection and the mail as above. Note that this system will work well against spammers who blast out identical coins to a lot of addresses at an ISP. Now spammers can engage in a DOS attack against this system by using junk coins. It won't help them get the spam thru, and it will be detected when there is a TCP connection between their machine/open relay/etc. and the ISP machine. That will go a long way toward locating them in meat space, so fraud charges can be brought. Cheers - Bill ------------------------------------------------------------------------- Bill Frantz | Due process for all | Periwinkle -- Consulting (408)356-8506 | used to be the | 16345 Englewood Ave. frantz@pwpconsult.com | American way. | Los Gatos, CA 95032, USA
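A sketch of that order of checks. The Bloom filter here is a toy built from a few SHA1-derived positions over a bit array, and bank_deposit is a stand-in for whatever online clearing interface an ISP would really use.

import hashlib

class BloomFilter:
    def __init__(self, bits: int = 1 << 20, hashes: int = 4):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits // 8)

    def _positions(self, item: str):
        for i in range(self.hashes):
            h = hashlib.sha1(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.bits

    def add(self, item: str):
        for p in self._positions(item):
            self.array[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.array[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def bank_deposit(coin: str) -> bool:
    return True          # stand-in: the real call would report "double spent"

seen_filter, seen_db = BloomFilter(), set()

def handle_coin(coin: str) -> str:
    if coin in seen_filter and coin in seen_db:   # a filter hit may be a false positive,
        return "drop"                             # so the database has the final word
    seen_filter.add(coin)
    seen_db.add(coin)
    return "accept" if bank_deposit(coin) else "drop"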
In the spirit of I. F. Stone... Buried in the last paragraphs of an article in yesterday's San Jose Mercury News (Thursday, May 15) that starts out talking about members of congress saying that the US is ill-prepared to defend against an attack on critical computer systems is the following gem. "Last fall's legislation authorized the National Science Foundation to spend $110.25 million on cyber-security research, but the agency is requesting only about $51 million. DARPA's unclassified budget for cyber-security research has actually declined, from about $90 million in 2000 to $30 million in 2003. But Tether [Tony Tether, director of DARPA] said those figures were misleading, because more projects are now classified. He estimated the agency will spend about $100 million on cyber-security research in 2004." Note also that DARPA's support of the OpenBSD project has been dropped (see http://www.openbsd.org/). Do these changes mean that the US is trying to protect "critical infrastructure" using classified techniques so other nations' systems can be hacked while US ones are safe? Inquiring minds want to know. Cheers - Bill ------------------------------------------------------------------------- Bill Frantz | Due process for all | Periwinkle -- Consulting (408)356-8506 | used to be the | 16345 Englewood Ave. frantz@pwpconsult.com | American way. | Los Gatos, CA 95032, USA
The only remaining architectural problem is how to handle mailing lists.
To keep my subscription active, I have to deposit some postal credits with the list server.
-- On 13 May 2003 at 9:06, Derek Atkins wrote:
OTOH, I still think a micro-payment postage system is a better idea. The sender puts a micro-payment into the mail header to pay the recipient to accept/read the message. For non-spam, the receipient doesn't need to cash the payment (or can just return it to the sander). For spam, the receipient collects the money (thereby costing the spammer real $$$ to send spam, if most receipients actually collect). The only remaining architectural problem is how to handle mailing lits.
Recipients whitelist the mailing list, or better still its digital signature. Mailing list operator collects the micropayments on submissions. --digsig James A. Donald 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG Xk9R3hEjL27Vh4JwzxHMmoB1TfEiftAXvdhzpKyb 4fEwddb+ZTQFP9ep7mGzY5moueUOD0FeCIlksgaM6
Bear discussed using hashcash-alike tokens as a challenge-response from the filtering MTA back to the sender, giving the sender a chance to compute a hashcash token. This approach has the problem you identify -- namely that email is store and forward; email can and often does go through multiple MTAs on its path to delivery, and the MTA doing the filtering may be multiple hops from the sender. Indeed sometimes the filterer is the end-user who is also intermittently connected. It's more convenient and fits better in the store-and-forward setting if all email already includes the token at time of sending. If it turns out to be needed, then no interactive challenge-response is required. Then the question is whether computing the token at sending time would be inconvenient for the normal sender. This depends on what parameters you choose. A few seconds probably wouldn't be noticed, especially as with deep MUA integration the token can be computed on each recipient address as soon as it is selected for receipt. Depending on MUA usage therefore the token could be computed while the sender is composing the message. In addition it is expected that there would be a mechanism whereby regular correspondents would white list each other. (Probably automatically via their mail clients). Whether you think a few seconds is sufficient depends on your views of the economics of spamming, i.e. how close to break-even the spammers are, and whether a few seconds of CPU per message is enough to significantly increase the cost. This article for example discusses the economics of spam: http://www.eprivacygroup.com/article/articlestatic/58/1/6 They give an example of a spam campaign with a 0.0023% response rate, and a yield of $19 per response. They estimate the cost of sending the spam was less than 0.01c per message. I've seen significantly lower estimates for the sending costs. To deter a given spam campaign we just have to increase the cost to the point of making it unprofitable given the response rate and profit per responder. The other side of this equation is what a second of CPU costs in monetary terms to a spammer. (To an end user it is essentially free because his CPU is mostly idle anyway; the limiting factor for the user is his preference for fast mail delivery and, in the dialup case, an unwillingness to sit waiting for tokens to be calculated before his mail can be sent.) Adam On Mon, May 12, 2003 at 08:53:25AM -0500, Matt Crawford wrote:
This doesn't fit Joe Lunchbox's current model in which he dumps his outgoing mail onto his provider's server and turns off his machine. His provider either has to deliver synchronously and bounce the computational payment burden back to Joe, pay it for him, or bounce the message. In the latter case, the receiver who demanded cycles needs to recognize the problem it set and accept the answer on a later date.
At 04:45 PM 5/12/2003, Adam Back wrote:
Whether you think a few seconds is sufficient depends on your views of the economics of spamming. Ie how close to losing break-even the spammers are, and whether a few seconds of CPU per message is enough to significantly increase the cost. This article for example discusses the economics of spam:
http://www.eprivacygroup.com/article/articlestatic/58/1/6
they give an example of a spam campaign with a 0.0023% response rate, and a yeild of $19 per response. They estimate the cost of sending the spam was less than 0.01c per message. I've seen significantly lower estimates for the sending costs. To deter a given spam campaign we just have to increase the cost to the point of making it unprofitable given the response rate and profit per responder. The other side of this equation is what a second of CPU costs in monetary terms to a spammer.
Assuming that a CPU costs $500 and that its value can be amortized over 2 years, CPU costs .0016 cents/second. Based on the numbers above, the revenue/spam sent is .044 cents. Thus, the breakeven point is 27.6 seconds/message: assuming other costs are minimal, you have to require > 27.6 seconds of CPU calculation from an email submittant to ruin the spamming business model. A few thoughts on this: - You have to adjust the size of the calculation frequently to keep up with Moore's law (although the time/$500 CPU is constant, assuming constant profitability for spam) - If spammers have new technology or economies of scale available to them, it's going to adversely affect everyone else. (That is, if you're using an 18-month-old CPU and CPU-seconds cost you twice what they cost spammers buying in volume, your $500 computer will have to spend 2 minutes of time to calculate a token it takes a spammer 30 seconds to calculate). - This is going to dramatically increase the costs of sending bulk e-mail for non-spammers: for example, I get airline specials a few times a week; they must send millions of these. - The CPU time required here is several orders of magnitude larger than the cryptographic costs associated with SSL, and SSL is not broadly accepted at least in part due to the CPU cost associated with it; this implies to me that there will be substantial resistance. - The CPU costs associated with SSL engendered a substantial market in cryptographic accelerators intended to reduce the cost to do an RSA private key operation. Presumably, a system like this will create such a market for e-mail token accelerators: unfortunately, this is exactly the kind of new tech / economy of scale envisioned above: we may end up with a situation where a calculation which costs a spammer .044 cents will take the average user's CPU 10 minutes or more to calculate. - Tim
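The arithmetic above, spelled out from the quoted figures (response rate and yield from the eprivacygroup article, CPU cost per second from Tim's amortization); the small difference between 27.3 and 27.6 seconds is just rounding of the 0.044-cent figure.

response_rate   = 0.0023 / 100          # 0.0023 %
yield_per_reply = 19.00                 # dollars per response
cpu_cost_cents  = 0.0016                # cents per CPU-second (Tim's amortized figure)

revenue_per_spam_cents = response_rate * yield_per_reply * 100
breakeven_seconds = revenue_per_spam_cents / cpu_cost_cents

print(f"revenue per spam: {revenue_per_spam_cents:.3f} cents")    # ~0.044
print(f"break-even CPU time: {breakeven_seconds:.1f} s/message")  # ~27.3 (27.6 in the post)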
Let me say it again... Economics is technology, government is technology. Neither are a 'fact of nature' or a 'natural law'. They are a direct result of the way we look at the cosmos -and how we *interpret* natural law-. To put it another way, they are each -primarily- *ego*. -WE- -make- the world by the -choices- -WE- -make-. Stop acting like a -victim of circumstances-. If other technology changes then economics change. It is not immutable. Don't confuse 'economics' (which is a function of psychology) with 'supply and demand' (which is a function of the 3 laws of thermodynamics). Not the same thing. Use the right technology and the problem is -removed- from 'economic' consideration. You're using the wrong technology. On Mon, 12 May 2003, Tim Dierks wrote:
At 04:45 PM 5/12/2003, Adam Back wrote:
Whether you think a few seconds is sufficient depends on your views of the economics of spamming. Ie how close to losing break-even the spammers are, and whether a few seconds of CPU per message is enough to significantly increase the cost. This article for example discusses the economics of spam:
http://www.eprivacygroup.com/article/articlestatic/58/1/6
they give an example of a spam campaign with a 0.0023% response rate, and a yeild of $19 per response. They estimate the cost of sending the spam was less than 0.01c per message. I've seen significantly lower estimates for the sending costs. To deter a given spam campaign we just have to increase the cost to the point of making it unprofitable given the response rate and profit per responder. The other side of this equation is what a second of CPU costs in monetary terms to a spammer.
Assuming that a CPU costs $500 and that its value can be amortized over 2 years, CPU costs .0016 cents/second.
Based on the numbers above, the revenue/spam sent is .044 cents. Thus, the breakeven point is 27.6 seconds/message: assuming other costs are minimal, you have to require > 27.6 seconds of CPU calculation from an email submittant to ruin the spamming business model.
A few thoughts on this: - You have to adjust the size of the calculation frequently to keep up with Moore's law (although the time/$500 CPU is constant, assuming constant profitability for spam) - If spammers have new technology or economies of scale available to them, it's going to adversely affect everyone else. (That is, if you're using an 18-month-old CPU and CPU-seconds cost you twice what they cost in the volume it costs spammers, your $500 computer will have to spend 2 minutes of time to calculate a token it takes a spammer 30 seconds to calculate). - This is going to dramatically increase the costs of sending bulk e-mail for non-spammers: for example, I get airline specials a few times a week; they must send millions of these. - The CPU time required here is several orders of magnitude larger than the cryptographic costs associated with SSL, and SSL is not broadly accepted at least in part due to the CPU cost associated with with it; this implies to me that there will be substantial resistance. - The CPU costs associated with SSL engendered a substantial market in cryptographic accelerators intended to reduce the cost to do an RSA private key operation. Presumably, a system like this will create such a market for e-mail token accelerators: unfortunately, this is exactly the kind of new tech / economy of scale envisioned above: we may end up with a situation where a calculation which costs a spammer .044 cents will take the average user's CPU 10 minutes or more to calculate.
- Tim
-- ____________________________________________________________________ We are all interested in the future for that is where you and I are going to spend the rest of our lives. Criswell, "Plan 9 from Outer Space" ravage@ssz.com jchoate@open-forge.org www.ssz.com www.open-forge.org --------------------------------------------------------------------
... but i would contend that the infrastructure costs associated with a billion or two spams per day are significantly higher than the costs that are currently being incurred by the spammers .... in effect the industry as a whole is underwriting a significant percentage of the actual costs, which makes spamming such an attractive economic activity. one of the issues is to reflect the fully loaded costs of a billion or two spams per day back to the spammers. -- Anne & Lynn Wheeler http://www.garlic.com/~lynn/ Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
To respond to the comments on costs of spamming and costs of CPU: the figures one can draw from various papers and articles are highly variable; one suspects they are variously including operator time, electricity, spam software purchase, and email address list purchase. To bring it back to just the raw computational costs (equipment amortized plus electricity) let's do some rough estimates for this. To take Tim's estimate, a $500 machine amortized over 2 years seems entirely reasonable; say this machine has a 1GHz CPU. I'll add an ADSL line at $500/year for a 1Mbit uplink, and say $200/year in electricity, for a total of $950/year. For spamming without hashcash let's say that it can send customized mail messages of size 1KB each, and by pipelining it manages to max the link and send 64 messages/second. (Divide by 2 to account for unreachable addresses, etc.) I make that 0.00005c / message. Presuming the same machine is mostly unloaded, if the spammer wants to send the same number of mails he needs a bank of 63 additional CPUs, each at a cost of $450/year (amortized cost + electricity), for a total of $29,300/year; so now his spamming costs 0.0015c / message, and the purely computational costs have increased by a factor of 30. One could imagine this would reduce the amount of untargeted spam a lot. Clearly you will still receive spam, just less of it, or more targeted to be likely to interest you etc. Other issues include that perhaps the spammer can get bandwidth cheaper per Mbit if he needs more than 1Mbit, which would tend to reduce the purely computational cost of spamming (without tokens). A 1 second CPU cost on a 1GHz machine should be negligible and acceptable to an email user even if the computation happens while he waits after he clicks the send button. If he is on DSL or similar it could be backgrounded. On dialup delivery is slow anyway and a second probably wouldn't be noticed. Dialup users also often batch their mail sending (deliver later from a local MUA-maintained queue). An additional cost for spammers is acquiring the email lists. However this cost can be amortized across multiple spamming campaigns on behalf of different spam clients, and mostly seems to consist of emails gathered by a web spider if one takes the claims of the CDT spam report, so is itself just a bandwidth cost. We could probably, as was previously noted, get away with a marginally larger delay if tokens are only required for recipients who have never replied to us in the past. If one accepts these figures, at 1 second of CPU per sent mail for new recipients, perhaps it may even be economical for ISPs to do the computation as part of mail service. If we could think of a distributed way to precompute the token and yet still have distributed verification without infrastructure, we could increase the cost to 5 mins without normal users noticing. It is not obvious how one would do this, however, as unless the entire computation is tailored to the recipient, parts of the computation could be re-used across multiple recipients. As Tim notes, Moore's law requires that we increase the collision cost over time. (But this is not so hard to do -- I can think of a simple, fully scalable mechanism to achieve this slowly increasing distribution of a minimum bit collision.) The possibility for accelerator hardware is definitely a limiting factor.
Counter-measures to this which have been suggested include (a) changing the algorithm over time with an authenticated code update mechanism; (b) defining a cost function which makes use of features of general purpose computers -- e.g. IEEE floating point hardware, memory, cache, a larger code footprint algorithm, etc. This could in theory mean that, absent a sufficient market, general purpose CPUs remain the most cost-effective approach; (c) memory-bound functions such as [1] which are limited by memory latency rather than CPU speed. Memory-bound functions have their own economic arguments (see the conclusions section of the paper): perhaps accelerator hardware is also a problem because all you need is a memory chip plus a really cheap CPU; they mean the most cost-effective hardware to buy is the cheapest CPU, and so perhaps 2 or 3 times cheaper than the best MHz/$; plus they intentionally consume memory data footprint which can interfere with applications. Another possibility with accelerator hardware: if ISPs were the primary deployers, then they would be better positioned to buy accelerator hardware to compete head on with spammers. Adam [1] http://research.microsoft.com/research/sv/PennyBlack/demo/lbdgn.pdf C. Dwork, A. Goldberg, and M. Naor, "On Memory-Bound Functions for Fighting Spam", Proceedings of CRYPTO 2003, to appear. On Mon, May 12, 2003 at 09:18:25PM -0400, Tim Dierks wrote:
At 04:45 PM 5/12/2003, Adam Back wrote:
Whether you think a few seconds is sufficient depends on your views of the economics of spamming. Ie how close to losing break-even the spammers are, and whether a few seconds of CPU per message is enough to significantly increase the cost. This article for example discusses the economics of spam:
http://www.eprivacygroup.com/article/articlestatic/58/1/6
they give an example of a spam campaign with a 0.0023% response rate, and a yeild of $19 per response. They estimate the cost of sending the spam was less than 0.01c per message. I've seen significantly lower estimates for the sending costs. To deter a given spam campaign we just have to increase the cost to the point of making it unprofitable given the response rate and profit per responder. The other side of this equation is what a second of CPU costs in monetary terms to a spammer.
Assuming that a CPU costs $500 and that its value can be amortized over 2 years, CPU costs .0016 cents/second.
Based on the numbers above, the revenue/spam sent is .044 cents. Thus, the breakeven point is 27.6 seconds/message: assuming other costs are minimal, you have to require > 27.6 seconds of CPU calculation from an email submittant to ruin the spamming business model.
A few thoughts on this: - You have to adjust the size of the calculation frequently to keep up with Moore's law (although the time/$500 CPU is constant, assuming constant profitability for spam) - If spammers have new technology or economies of scale available to them, it's going to adversely affect everyone else. (That is, if you're using an 18-month-old CPU and CPU-seconds cost you twice what they cost in the volume it costs spammers, your $500 computer will have to spend 2 minutes of time to calculate a token it takes a spammer 30 seconds to calculate). - This is going to dramatically increase the costs of sending bulk e-mail for non-spammers: for example, I get airline specials a few times a week; they must send millions of these. - The CPU time required here is several orders of magnitude larger than the cryptographic costs associated with SSL, and SSL is not broadly accepted at least in part due to the CPU cost associated with with it; this implies to me that there will be substantial resistance. - The CPU costs associated with SSL engendered a substantial market in cryptographic accelerators intended to reduce the cost to do an RSA private key operation. Presumably, a system like this will create such a market for e-mail token accelerators: unfortunately, this is exactly the kind of new tech / economy of scale envisioned above: we may end up with a situation where a calculation which costs a spammer .044 cents will take the average user's CPU 10 minutes or more to calculate.
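Adam's cost estimate above, reproduced from his stated inputs ($500 machine over two years, $500/year 1Mbit uplink, $200/year electricity, 64 one-kilobyte messages per second, one CPU-second of hashcash per message); the 365-day year is the only added assumption.

SECONDS_PER_YEAR = 365 * 24 * 3600

uplink, electricity, machine = 500, 200, 500 / 2     # dollars/year; machine amortized over 2 years
base_cost = uplink + electricity + machine           # about $950/year
messages = 64 * SECONDS_PER_YEAR                     # about 2.0e9 messages/year at 64/second

print(f"without tokens: {100 * base_cost / messages:.6f} cents/message")   # ~0.00005

extra_cpus = 63                                      # to sustain 64 msgs/second at 1 CPU-second per token
token_cost = base_cost + extra_cpus * (500 / 2 + 200)  # about $29,300/year
print(f"with 1s tokens: {100 * token_cost / messages:.5f} cents/message")  # ~0.0015, roughly 30x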
-- On 12 May 2003 at 21:18, Tim Dierks wrote: Assuming that a CPU costs $500 and that its value can be amortized over 2 years, CPU costs .0016 cents/second. To say the same thing in different words, the spammer's unattended computer costs 0.0016 cents per second, while the non-spammer's computer is worth about 0.5 cents per second, because there is an impatient user sitting there waiting for the mail to complete. Thus the non-spammer's computer time costs approximately four hundred times as much as the spammer's computer time.
From this, I conclude that hashcash is not economically viable. We have to use a form of cash that is similarly valuable for spammers and non spammers.
--digsig James A. Donald 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG QLQau7uADLb/zG+C/w+cIuiW5I9NSD4m6LNPbwYK 4zNtefDWUbC4Pp6JJTh53TS6UPtqXu/hY1EPp5PPv
The other side of this equation is what a second of CPU costs in monetary terms to a spammer. (To an end user it is essentially free because his CPU is mostly idle anyway; the limiting factor for the user is his preference for fast mail delivery (and in the dialup case an unwillingness to sit waiting for tokens to be calcluated before his mail can be sent).
If you believe http://news.bbc.co.uk/1/hi/technology/2988209.stm, spammers are beginning to use viruses to deploy spam relays. If a spammer has a zombie army of a few thousand compromised systems, the spammer's cpu time costs for hashcash will also essentially be free. - Bill
At 09:45 PM 5/12/03 +0100, Adam Back wrote:
In addition it is expected that there would be a mechanism whereby regular correspondents would white list each other. (Probably automatically via their mail clients).
Whether you think a few seconds is sufficient depends on your views of the economics of spamming. Ie how close to losing break-even the spammers are, and whether a few seconds of CPU per message is enough to significantly increase the cost.
Two points. First, Joe Sixpack won't use it if it requires an extra click; but he might if the mail queueing is in the background. Second, spammers use trojans that establish local mail relays (!). You think they won't steal some cycles to pollute? Ok, three points. If you're sending from your PDA, either deal with the battery-life loss as a cost of emailing from your PDA, or have your net-connected host do the work. Again, transparently, or no one will use it. Personally, I favor an Assassination Politics flavor solution, but that's unlikely to gain widespread favor :-)
Lauren Weinstein, founder of People for Internet Responsibility, has come out with a new spam solution at http://www.pfir.org/tripoli-overview.
Phil Hallam-Baker of Verisign made a similar proposal. (I missed responding to an earlier post on this thread that said "here comes Verisign" or such-like.) Unfortunately, I can't find his post any more; I think it was on one of the XML security WG lists, but the message has already been purged from my mailbox. One thing interesting about the VRSN proposal was that they had a hardware implementation of the "velocity checker" as an option. And a patent applied for on that. /r$
On Fri, May 09, 2003 at 03:50:02AM +0200, Nomen Nescio wrote:
Lauren Weinstein, founder of People for Internet Responsibility, has come out with a new spam solution at http://www.pfir.org/tripoli-overview.
According to this proposal, the Internet email architecture would be revamped. Each piece of mail would include a PIT, a Payload Identity Token, emphasis on Identity. This would be a token certifying that you were an Authorized Email User as judged by the authorities. Based on your PIT, the receiving email software could decide to reject your email.
I doubt that any kind of anti-spam mechanism which requires such a certification will be widely accepted. And I do not believe that any cryptographic method can be deployed widely enough to provide security against spam. Cryptography is simply too complicated and too error/theft-of-secret prone to be in common use. (If anyone is interested, I've made an alternative proposal based on non-cryptographic DNS-based lightweight authentication/authorization, available at http://www.ietf.org/internet-drafts/draft-danisch-dns-rr-smtp-01.txt ) regards Hadmut
participants (29)
- Adam Back
- Anne & Lynn Wheeler
- bear
- Bill Frantz
- Bill Sommerfeld
- Bill Stewart
- David Honig
- Declan McCullagh
- Derek Atkins
- Eric Murray
- Greg Broiles
- Hadmut Danisch
- Harmon Seaver
- James A. Donald
- Jim Choate
- John Kelsey
- Joseph Ashwood
- Justin
- ken
- Matt Crawford
- Morlock Elloi
- Nathan Neulinger
- Nomen Nescio
- Patrick
- Paul Walker
- Rich Salz
- Roy M.Silvernail
- Sunder
- Tim Dierks