SSL search attacks
-----BEGIN PGP SIGNED MESSAGE-----

From: Scott Brickner <sjb@austin.ibm.com>
We've identified several forms of "real-world retaliation:"
1) "Result hoarding" - failure to report a found key
2) "Segment hoarding" - requesting more segments than one can hope to search
3) Denial of service - preventing access to the server
The "random search" method eliminates all three of these at about 37% higher cost in search time, on the average. I submit that if we *really* were trying to break something important, we could design a system which eliminated the first two and adequately limited the third, but at *much* less cost.
The problems in the current system were to be expected of a first attempt. In the future:
1) Only the server assigns segments.
2) Only the assignee may report the status of a segment. After all segments are NAKed we know condition 1 has occurred, at which time we start over, but never assign the same segment to the same searcher.
3) Limit the number of segments which may be outstanding with one searcher at one time as a function of work rate.
4) Deploy redundant servers.
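As a thought experiment, the bookkeeping behind these rules might look something like the sketch below. Everything here is hypothetical (class, method, and field names are mine, not code from any actual keyserver); it only illustrates the rules: server-only assignment, assignee-only reporting, no re-assignment of the same segment to the same searcher, and an outstanding-work limit tied to work rate.

```python
import random

class SegmentServer:
    """Hypothetical sketch of a coordinating keyserver."""

    def __init__(self, n_segments):
        self.unassigned = set(range(n_segments))
        self.assigned = {}       # segment -> searcher currently working on it
        self.history = set()     # (searcher, segment) pairs ever assigned
        self.outstanding = {}    # searcher -> number of open assignments

    def max_outstanding(self, work_rate):
        # Limit outstanding segments per searcher as a function of work rate.
        return max(1, int(work_rate))

    def request(self, searcher, work_rate):
        # Only the server assigns segments.
        if self.outstanding.get(searcher, 0) >= self.max_outstanding(work_rate):
            return None  # searcher already holds as much work as allowed
        candidates = [s for s in self.unassigned
                      if (searcher, s) not in self.history]  # never re-assign
        if not candidates:
            return None
        seg = random.choice(candidates)
        self.unassigned.discard(seg)
        self.assigned[seg] = searcher
        self.history.add((searcher, seg))
        self.outstanding[searcher] = self.outstanding.get(searcher, 0) + 1
        return seg

    def report(self, searcher, seg, found_key):
        # Only the assignee may report the status of a segment.
        if self.assigned.get(seg) != searcher:
            return False  # unsolicited report: ignored
        del self.assigned[seg]
        self.outstanding[searcher] -= 1
        return True
```

Redundant servers would sit on top of this, sharing the `history` and `assigned` state; that part is omitted here.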
BEAAAT STATE! Push 'em back.. WAAAAAAY BAAAACK.

(relevant comments follow)

From: tcmay@got.net (Timothy C. May)
An interesting question: Is it a valid approach for J. Random User to "claim" some chunk of keyspace to search?
If the "reward" of finding the gold buried in the keyspace (a key that meets the challenge) is high and the cost of claiming the keyspace is low (or nil), then game theory tells us that some folks will be tempted to claim a bigger chunk of keyspace than they can possibly process.
What can be done to reduce this effect?
In regard to both messages, I think that with sequentially allocated keyspace an attacker who knows the real key would have trouble getting the right segment unless s/he grabbed a big enough piece. If the search is restarted, we know something's up. Ensuring that nobody gets to search keyspace they searched before would be one improvement. A random (instead of sequential) allocation _by the keyserver_ (out of unallocated piecemeal segments) would also take some work to implement.
On the negative side, ostracize or punish those who bite off more than they can chew. This approach is fraught with dangers.
If the search wraps around to catch the unACKed pieces, this type of oversight will only slow down the actual discovery of the key. Failure to report a found key, though, is a bit different. I would not be opposed to having my program report possible hits, with the server being the one that determines whether I've found it or not.
On the positive side, let everyone simply attack the keyspace as they see fit, picking random parts to attack. This should not be "worse" than a factor of several from a "perfectly coordinated" attack. (I haven't spent time calculating this, but my intuition is that a random attack, with overlapping keyspace, is not a lot less efficiently attacked than attempting to arrange for no overlaps...just based on my mental picture of dropping line segments randomly on some interval and figuring coverage of the line segment.)
Why not have a random backup mode, in case someone does mount a denial of service attack? Or employ a combination of the two modes: the machines running brloop can search sequentially (out of the middle 50%?) and the machines not connected search randomly (out of the outside 50%?). Or, venturing further into the I-wonder-who's-gonna-code-this world, log the random searches for possible conversion to an exhaustive search later. It would be nice to be able to hit the emergency button and switch to random mode, but currently I don't think there's a need to actually use it.

Don

-----BEGIN PGP SIGNATURE-----
Version: 2.6.2

iQB1AwUBMEN/U8La+QKZS485AQHNcgL+ItlNLYcsIjjlQPQJBxgts66GXPMs3ijb
QIcqiAbrg4cq7F9xWNRvZa9LTvw75UUM1+PmItGkSUuqOqvJ9VkzaUp8/Sf5zuDs
5XTlJLVhYa7qQzY4Ov4a3k0ora0SPvKh
=wyzo
-----END PGP SIGNATURE-----

<don@cs.byu.edu>  fRee cRyPTo! jOin the hUnt or BE tHe PrEY
PGP key - http://bert.cs.byu.edu/~don or PubKey servers (0x994b8f39)
June 7&14, 1995: 1st amendment repealed. Death threats ALWAYS pgp signed
* This user insured by the Smith, Wesson, & Zimmermann insurance company *
don@cs.byu.edu writes
From: Scott Brickner <sjb@austin.ibm.com>
The problems in the current system were to be expected of a first attempt. In the future:
1) Only the server assigns segments.
2) Only the assignee may report the status of a segment. After all segments are NAKed we know condition 1 has occurred, at which time we start over, but never assign the same segment to the same searcher.
3) Limit the number of segments which may be outstanding with one searcher at one time as a function of work rate.
4) Deploy redundant servers.
BEAAAT STATE! Push 'em back.. WAAAAAAY BAAAACK. (relevant comments follow)
I suppose this does seem like a "statist" protocol, but let's look at the purpose. The whole idea of the central server was to permit a *coordinated* attack on the key. We've established that there is a 1/e cost factor in removing the central server. I just threw out these items as specific changes which could defend against the identified attack modes *without* losing the benefit of the central coordination.

In order for the coordinator to be successful, there must be a mechanism to ensure that someone who knows the key can't break the system by just reporting "I searched this segment and didn't find it." This means that the server should consider such statements as irrelevant, unless it was the *server* who suggested that the user search the space. This makes the likelihood of the key's segment being assigned to a "bad guy" pretty low.

The server *could* take unsolicited NAKs "under advisement", and hand them out at a slower rate than unACKed segments, but this still allows the "result hoarder" to slow down the attack.
In regard to both messages, I think that with sequentially allocated keyspace an attacker who knows the real key would have trouble getting the right segment unless s/he grabbed a big enough piece. If the search is restarted, we know something's up. Ensuring that nobody gets to search keyspace they searched before would be one improvement.
Hence the prohibition against (as Tim put it) "J. Random User claiming keyspace".
A random (instead of sequential) allocation _by the keyserver_ (out of unallocated piecemeal segments) would also take some work to implement.
I don't think it would really be that hard, if one were willing to go with less than "cryptographic" strength in the PRNG, which I don't think is really necessary here. But the real issue is that it's irrelevant to the problem: random allocation at the server is equivalent to simply "shuffling" the segments before assignment, which doesn't affect the rate at which the space is searched.
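To see the equivalence: pre-shuffling the segment list and handing segments out in order gives the same distribution of assignments as making a fresh random pick from the unallocated pool on each request. A minimal sketch (the function name is mine):

```python
import random

def make_allocator(n_segments):
    # Shuffle once up front; handing out segments in this order is
    # statistically identical to a fresh random pick per request,
    # and either way every segment is handed out exactly once.
    order = list(range(n_segments))
    random.shuffle(order)
    return order.pop  # each call returns the next unallocated segment
```

Since every segment is still handed out exactly once, the total search rate is unchanged; only the order differs.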
From: tcmay@got.net (Timothy C. May)
On the negative side, ostracize or punish those who bite off more than they can chew. This approach is fraught with dangers.
If the search wraps around to catch the unACKed pieces, this type of oversight will only slow down the actual discovery of the key. Failure to report a found key, though, is a bit different. I would not be opposed to having my program report possible hits, with the server being the one that determines whether I've found it or not.
I'm not sure I follow you, here. The search wraps around on the unACKed segments because the work was assigned, but not (as far as the server knows) completed. This doesn't slow down the discovery of the key; it just reflects the *real* composite key testing rate as opposed to the *apparent* rate (which is based on the rate at which the segments are assigned). The server doesn't consider a segment "done" until it gets an ACK or NAK.
On the positive side, let everyone simply attack the keyspace as they see fit, picking random parts to attack. This should not be "worse" than a factor of several from a "perfectly coordinated" attack. (I haven't spent time calculating this, but my intuition is that a random attack, with overlapping keyspace, is not a lot less efficiently attacked than attempting to arrange for no overlaps...just based on my mental picture of dropping line segments randomly on some interval and figuring coverage of the line segment.)
NB: Elsewhere, Tim provides an argument showing the efficiency of the random attack to be 1/e worse than the coordinated attack (about 37%).
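That figure is easy to check numerically: after N uncoordinated random draws over N equal segments, the expected fraction of duplicated (wasted) work is (1 - 1/N)^N, which approaches 1/e ≈ 0.368. A quick simulation (the segment count and seed are arbitrary choices of mine):

```python
import math
import random

random.seed(1)  # fixed seed so the estimate is reproducible
N = 100_000  # number of keyspace segments (illustrative)
draws = [random.randrange(N) for _ in range(N)]  # uncoordinated random picks
wasted = 1 - len(set(draws)) / N  # fraction of draws that duplicated work
print(f"duplicated work: {wasted:.3f}  (1/e = {1 / math.e:.3f})")
```

So a purely random effort spends about 37% of its cycles re-searching segments someone else already covered, which is the cost of dropping the coordinator.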
Why not have a random backup mode, in case someone does mount a denial of service attack? Or employ a combination of the two modes: the machines running brloop can search sequentially (out of the middle 50%?) and the machines not connected search randomly (out of the outside 50%?). Or, venturing further into the I-wonder-who's-gonna-code-this world, log the random searches for possible conversion to an exhaustive search later.
It would be nice to be able to hit the emergency button and switch to random mode, but currently I don't think there's a need to actually use it.
I still don't see how the server can use unsolicited NAKs as anything other than a nominal reduction in the probability that the key is in the NAKed segment. Perhaps this does suggest a server strategy to do *just* that, though.

The server maintains a list of the unique users who have reported an unsolicited NAK for each segment. Requests for work are filled by randomly selecting segments, with the highest weight going to the segments with the fewest unsolicited NAKs; segments with *solicited* NAKs, and those assigned but with no response yet, are not considered at all. If the weight were inversely proportional to the square of the number of unsolicited NAKs (plus one), then segments which have a lot of NAKs aren't likely to be assigned until the end of the job. When a segment with unsolicited NAKs is assigned and the result confirms them, further weight might be given to unsolicited NAKs from those users in the future, reflecting an improvement in their reputation. The biggest problem with this scenario is that it requires a potentially *huge* amount of storage on the server.

Another alternative that comes to mind is to hand out segments with unsolicited NAKs to some of the slower machines. Since their contribution to the overall search rate is small, there's less of a hit taken by assigning them potentially redundant work. As they provide verification of the data reported as unsolicited NAKs, the server's reputation data is improved, and the search can concentrate even more on the unACKed segments.
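The inverse-square weighting described above could be sketched like this (the segment records and field names are invented purely for illustration):

```python
import random

def pick_segment(segments):
    """segments: dict mapping segment id -> record with keys
    'state' ('open', 'assigned', or 'done') and 'unsolicited_naks' (int).

    Segments already finished with a *solicited* NAK ('done') or assigned
    but unanswered ('assigned') are excluded; among the rest, weight is
    inversely proportional to the square of (unsolicited NAKs + 1), so
    heavily NAKed segments tend to be assigned last."""
    open_segs = [sid for sid, rec in segments.items() if rec['state'] == 'open']
    weights = [1.0 / (segments[sid]['unsolicited_naks'] + 1) ** 2
               for sid in open_segs]
    return random.choices(open_segs, weights=weights, k=1)[0]
```

With one unsolicited NAK a segment is drawn 1/4 as often as a clean one; with three NAKs, 1/16 as often. It stays reachable, so a false NAK only delays it.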
-----BEGIN PGP SIGNED MESSAGE-----

Hello don@cs.byu.edu and cypherpunks@toad.com and Scott Brickner <sjb@austin.ibm.com>

Scott wrote:
don@cs.byu.edu writes
From: Scott Brickner <sjb@austin.ibm.com>
...[only server assigns segments, client may ack only assigned segments]...
BEAAAT STATE! Push 'em back.. WAAAAAAY BAAAACK. (relevant comments follow)
...
*coordinated* attack on the key. We've established that there is a 1/e cost factor in removing the central server. I just threw out these ...
Wouldn't it be possible to reduce the cost?

Each client could pick a segment at random, check it and then broadcast a NAK. Other clients would then know that the segment in question has been done, and avoid picking it in the future. If you are worried about collisions, one could also have IGRAB, which would advise others that someone is working on a segment (you can still collide, but not so often).

One advantage is that it is not necessary to have a central infinitely trusted server. (Nothing personal, but bogus server is an attack.)

NAKs and IGRABs would be weighted by the trust accorded to the entity that originated them.

Notes:
* "broadcast" is probably best done with a fairly sparse graph, otherwise one will get too much communications.
* since there is no "server", I should replace "client" with another word.
* there is no incentive to send NAKs (they diminish your own chance of hitting the jackpot). How could this be avoided?
* the NAKs could be sent by e-mail, thus allowing badly connected and/or anonymous entities to participate.

Am I making any sense at all?

Jiri
- --
If you want an answer, please mail to <jirib@cs.monash.edu.au>.
On sweeney, I may delete without reading!
PGP 463A14D5 (but it's at home so it'll take a day or two)
PGP EF0607F9 (but it's at uni so don't rely on it TOo much)

-----BEGIN PGP SIGNATURE-----
Version: 2.6.2i

iQCVAwUBMEWAKixV6mvvBgf5AQEnkQQA0/+19hwKS204HjinHiLH5atzrv4CQu4G
Gtpxoq4R+VQgVmsUdYjPsUXce3Cu8KlFuRuJwjhnRuqQxUs53uVkKxo/peoV8xZr
FNguipHzgVu7T9t/hNQwiUDIudkv9mCpP4V27CU31GIt3BpzmfiCJLryFjI0kqKe
PXAB0khlKvY=
=pbWn
-----END PGP SIGNATURE-----
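A sketch of how the trust weighting in this proposal might work (the function names and the threshold value are hypothetical): each peer scores the NAKs it hears by the trust it accords their senders, so a NAK from an unknown attacker on the key's segment carries essentially no weight:

```python
def nak_weight(nakkers, trust):
    """Total trust behind the NAKs seen for one segment.
    nakkers: set of peer ids that broadcast a NAK for the segment.
    trust: dict mapping peer id -> trust score in [0, 1];
    unknown peers get 0."""
    return sum(trust.get(peer, 0.0) for peer in nakkers)

def probably_done(nakkers, trust, threshold=1.0):
    # Treat a segment as searched only once enough trusted peers have
    # NAKed it. A lone untrusted attacker NAKing the key's segment
    # contributes ~0, so the segment stays eligible for search.
    return nak_weight(nakkers, trust) >= threshold
```

IGRABs could be handled the same way, with a lower threshold (and an expiry time) since they only advise against collisions rather than marking work as finished.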
Jiri Baum writes
*coordinated* attack on the key. We've established that there is a 1/e cost factor in removing the central server. I just threw out these ...
Wouldn't it be possible to reduce the cost?
Each client could pick a segment at random, check it and then broadcast a NAK. Other clients would then know that the segment in question has been done, and avoid picking it in the future. If you are worried about collisions, one could also have IGRAB, which would advise others that someone is working on a segment (you can still collide, but not so often).
This only reduces the cost if everyone is playing fair. In practice, it will usually *increase* the cost. A denial of service attack can be mounted by the owner of the key just by anonymously NAKing the segment with the key. Then you have to search the *whole* keyspace, fail to find it, and start over with a new strategy.
One advantage is that it is not necessary to have a central infinitely trusted server. (Nothing personal, but bogus server is an attack.)
An attack on what? The overall model here is that someone presents the world at large with a problem to solve. Someone else volunteers to coordinate the effort by providing a server. Providing a bogus server is an attack in the sense that it wastes the CPU cycles of the clients, but they're junk cycles anyway. It's kind of like the issue about being "unable to participate" because the group effort ignores the efforts of random searchers. Those searchers *aren't* participating, and not ignoring them opens the server to attack. An "effort" coordinated by a bogus server is no effort at all. My point is that the "random" efforts are no different than everyone working on the problem independently, each picking a random place to start and going sequentially from there.
NAKs and IGRABs would be weighted by the trust accorded to the entity that originated them.
This is similar to what I outlined yesterday afternoon. Let unsolicited NAKs and IGRABs represent adjustments to the probability that a segment is assigned to a client *inside* the group. Invalid unsolicited NAKs don't destroy the current search, they only slow it down slightly --- but less than a fully random effort.
Notes: * the NAKs could be sent by e-mail, thus allowing badly connected and/or anonymous entities to participate.
This could be done in any case. It just slows down the effective search rate of the e-mail participants. This might be an argument in favor of requesting more space as you get near the end of your current space, though. When the communications latency starts to approach the segment search time, you cut down your waiting time by prefetching work.
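The prefetch rule reduces to a simple comparison (the function and its parameter names are mine, for illustration):

```python
def should_prefetch(remaining_keys, keys_per_second, round_trip_seconds):
    """Request the next segment once the time left in the current one
    approaches the round-trip latency to the server, so an e-mail (or
    otherwise slowly connected) participant never sits idle waiting
    for the next assignment to arrive."""
    time_left = remaining_keys / keys_per_second
    return time_left <= round_trip_seconds
```

For a well-connected client the round trip is negligible and this almost never fires early; for an e-mail participant with hours of latency, it effectively keeps one segment of work in flight at all times.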
-----BEGIN PGP SIGNED MESSAGE-----

Hello cypherpunks@toad.com and Scott Brickner <sjb@austin.ibm.com>

Scott Brickner <sjb@austin.ibm.com> writes:
Jiri Baum writes ...
Each client could pick a segment at random, check it and then broadcast a NAK. Other clients would then know that the segment in question has ...

This only reduces the cost if everyone is playing fair. In practice, ...
No worse than fake NAKs to the central server (see comment below).
One advantage is that it is not necessary to have a central infinitely trusted server. (Nothing personal, but bogus server is an attack.)
An attack on what? The overall model here is that someone presents ...
An attack on the attempt. If the key owner also volunteers a server, then half the CPU cycles will report to that server (and be given useless chunks of keyspace) thus halving the CPU power available to the usual server ("half" in an infinitely naive world, of course). The approach I suggested basically corresponds to everyone maintaining hir own server; servers that trust each other will coordinate. An attacker can of course NAK the key segment, but only those that trust the attacker will take any notice.
My point is that the "random" efforts are no different than everyone working on the problem independently, each picking a random place to start and going sequentially from there.
The difference is that in this scheme everyone does coordinate, only it's peer-peer rather than client-server.
NAKs and IGRABs would be weighted by the trust accorded to the entity that originated them.
This is similar to what I outlined yesterday afternoon. Let unsolicited ...
I think that's where it came from. I really should provide citations, shouldn't I... ...
Invalid unsolicited NAKs don't destroy the current search, they only slow it down slightly --- but less than a fully random effort.
Similarly in the peer-peer approach, the effort is coordinated but untrusted NAKs slow it down only slightly. The only "solicited" NAKs will be your own.

Hope that makes sense...

Jiri
- --
If you want an answer, please mail to <jirib@cs.monash.edu.au>.
On sweeney, I may delete without reading!
PGP 463A14D5 (but it's at home so it'll take a day or two)
PGP EF0607F9 (but it's at uni so don't rely on it too much)

-----BEGIN PGP SIGNATURE-----
Version: 2.6.2i

iQCVAwUBMEpXLSxV6mvvBgf5AQFn2QP/eJ0BlATPHS2xoLoJuHdJYR7Y5gN5scmK
DHOby7rGJ3Rj6CZ6PrdkQVf9ckUdmUwhCzAiCi3wnPHPf0gi4rPjLyBpmyTgl8yA
q+VqYPkBAflwHqXIsqbxx94PiZayt8b578Qtqoa2jJzjSCKMa8IonWGeztP/xNxa
FCmJDocudq4=
=r/Hv
-----END PGP SIGNATURE-----
participants (3)
- don@cs.byu.edu
- Jiri Baum
- Scott Brickner