Re: SSL search attack
At 7:25 AM 9/1/95, Daniel R. Oelke wrote:
I see nothing wrong with the concept of being allocated an initial chunk and having the scan software attempt to ACK it when 50% of it has been searched. A successful ACK would allow the release of a new chunk (in response) equal in size to the returned chunk. A failure of the server to accept the ACK would trigger retries at set intervals (such as 75% and 100%, or 60/70/80/90/100%) until the server responds. Thus the scanner is always in possession of a full-sized chunk to scan (so long as the server accepts an ACK before the 100%-done mark), and temporary failures will not stop a scanner's progress as currently happens.
The only way this can work is if the server is told that it is a 50%/75%/etc. partial ACK, and the server is later ACKed for the full 100%.
Why? Because consider what happens if the client dies immediately after sending the ACK: perhaps only 51% of that space has actually been searched, yet the server has already seen an ACK for all of it.
You NEVER claim to have searched space until you have actually done so.
That is exactly what I was arguing against - but the first sentence of what I quoted said it was OK.
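The checkpointed-ACK schedule being debated above can be sketched in code. This is only an illustration of the proposal, not anything that existed in the actual brute-force clients; all names (`scan_chunk`, `try_ack`, the checkpoint fractions) are hypothetical.

```python
# Sketch of the client-side ACK schedule described above (hypothetical
# names; no real server protocol is implied).  The client holds one
# full-sized chunk, first tries to ACK it when 50% has been searched,
# and retries at later checkpoints until the server answers with a
# replacement chunk.

ACK_CHECKPOINTS = [0.50, 0.75, 1.00]  # fractions of the chunk searched

def scan_chunk(chunk, search_fraction_step, try_ack):
    """Search `chunk`, attempting an ACK at each checkpoint.

    `try_ack(chunk)` returns a replacement chunk on success, or None on
    a (temporary) server failure.  Returns the replacement chunk, or
    None if the server never responded before the 100%-done mark.
    """
    searched = 0.0
    next_chunk = None
    pending = list(ACK_CHECKPOINTS)
    while searched < 1.0:
        searched = min(1.0, searched + search_fraction_step)
        # ... the actual keyspace search for this slice would go here ...
        while pending and searched >= pending[0]:
            pending.pop(0)
            if next_chunk is None:
                next_chunk = try_ack(chunk)  # may fail; retry next checkpoint
    return next_chunk
```

Note that, per the objection above, this sketch only tells the server *how much* has been searched at each checkpoint; it never claims the whole chunk is done before it actually is.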
Assuming that you are multi-threaded: simply run two "workers" on the same machine. If there are delays in getting keys assigned, the two will soon get out of phase and keep the CPU busy.
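The two-worker idea above can be sketched as follows. This is a toy illustration using threads and a queue standing in for the key server; none of these names come from the actual software.

```python
# Sketch of the two-worker idea (hypothetical names): two threads on
# one machine pull chunks independently, so while one is blocked on
# the key server the other keeps the CPU busy.

import threading
import queue

def worker(chunks, results):
    """Drain `chunks`, recording each searched chunk in `results`."""
    while True:
        chunk = chunks.get()     # blocks, standing in for server latency
        if chunk is None:        # sentinel: no more keyspace to assign
            break
        # ... the actual keyspace search would go here ...
        results.append(chunk)

def run_two_workers(all_chunks):
    chunks, results = queue.Queue(), []
    threads = [threading.Thread(target=worker, args=(chunks, results))
               for _ in range(2)]
    for t in threads:
        t.start()
    for c in all_chunks:
        chunks.put(c)
    for _ in threads:
        chunks.put(None)         # one shutdown sentinel per worker
    for t in threads:
        t.join()
    return results
```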
I kind of like that idea...

Dan
------------------------------------------------------------------
Dan Oelke                                Alcatel Network Systems
droelke@aud.alcatel.com                  Richardson, TX
ACK ACK ACK ACK ACK
I've just kinda been watching this debate for a while, so I may well have missed some of the more interesting details; if so, I apologize for my noise in advance. I work on a lot of commercial software under constraints of scalability much like the SSL "attack server" being discussed here.

My instincts tell me that in this situation the whole process would be *much* simpler if the basic idea of keeping the central server (or the family of distributed servers, in those models) completely "informed" by all the attacking clients were abandoned. Tim May's "random attack" idea was extremely attractive, I thought. However, I think that it'd be possible to take advantage of the fact that the keyspace itself is basically constant (until the keysize is increased in the protocol under attack, of course). I mean, 40 bits is 40 bits. Similarly, the capacity of most clients will be fairly consistent. (I have access (in theory, of course; don't mention this to my management) (hi todd) to a hundred or so CPUs here, and that doesn't really change too often.)

Rather than apportion the search space out dynamically on each attack, why not simply allow attack clients to "subscribe" on a semi-permanent basis? All the server would have to do is make batches of ciphertext available for cracking. When a request comes in from a subscriber for a copy of some ciphertext, the server knows (or at least can legitimately suspect) that that subscriber's already-known keyspace will be searched. As far as getting acknowledgements of search completion, again the server can by inference assume (based on the prior establishment of client capabilities) that after a pre-determined period of time the key sub-space will have been searched. It might be appropriate for clients to send back NACK messages, in case, for example, somebody shuts down the client's network unexpectedly.
Assuming this goes pretty smoothly, one would hope that the number of failures would be considerably smaller than the number of successes. Again, ignore me if I'm blind to something obvious.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| Nobody's going to listen to you if you just |  Mike McNally (m5@tivoli.com) |
| stand there and flap your arms like a fish. |  Tivoli Systems, Austin TX    |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
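The subscription scheme proposed above can be sketched as follows. Everything here (class and method names, the benchmark-rate bookkeeping) is a hypothetical illustration of the idea, not an existing protocol: the server records each subscriber's capacity once, then infers completion from elapsed time instead of waiting for explicit ACKs, with an optional NACK to cancel the inference.

```python
# A minimal sketch of the "subscription" idea above (all names are
# hypothetical).  Completion is inferred from a deadline computed out
# of the subscriber's previously established search rate.

import time

class SubscriptionServer:
    def __init__(self):
        self.subscribers = {}   # name -> keys/sec from a prior benchmark
        self.outstanding = {}   # name -> (chunk_size, deadline)

    def subscribe(self, name, keys_per_sec):
        """Semi-permanent registration with an established capacity."""
        self.subscribers[name] = keys_per_sec

    def hand_out(self, name, chunk_size, now=None):
        """Assign a chunk; presume it searched once the deadline passes."""
        now = time.time() if now is None else now
        deadline = now + chunk_size / self.subscribers[name]
        self.outstanding[name] = (chunk_size, deadline)

    def nack(self, name):
        # Client shut down unexpectedly: its chunk must be reissued,
        # so cancel the presumption of completion.
        self.outstanding.pop(name, None)

    def presumed_done(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.outstanding.get(name)
        return entry is not None and now >= entry[1]
```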
Scott Brickner writes:
I think your assumption that available CPU is approximately constant is incorrect. Different participants have different constraints...
Hmm. I suppose that's probably true for some more than others. Again, hmm.
Also, the "subscription" process is somewhat discouraging to those who participate for the prize.
Ah. That looks like one of those little details that got by me.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| Nobody's going to listen to you if you just |  Mike McNally (m5@tivoli.com) |
| stand there and flap your arms like a fish. |  Tivoli Systems, Austin TX    |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I see nothing wrong with the concept of being allocated an initial chunk and having the scan software attempt to ACK it when 50% of it has been searched. A successful ACK would allow the releasing of a new chunk [...]

You NEVER claim to have searched space until you have actually done so. That is exactly what I was arguing against - but the first sentence of what I quoted said it was OK.
No -- If you ask for 2 segments, then when you are 50% done, it is OK to ACK the *FIRST* segment.
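The two-segment variant above can be sketched in code. This is a hypothetical illustration (the function and callback names are invented): the client asks for two segments up front, and only ACKs a segment once it has actually been searched in full, so the server never hears a claim about unsearched keyspace.

```python
# Sketch of the two-segment scheme above (hypothetical interface).
# A segment is ACKed only after it has been exhaustively searched,
# while the second queued segment keeps the worker busy through any
# server delay.

from collections import deque

def run_worker(segments, search, ack, request_more):
    """Process a queue of segments; ACK each only when fully searched."""
    queue = deque(segments)
    while queue:
        seg = queue.popleft()
        search(seg)              # exhaustively search this segment
        ack(seg)                 # now, and only now, claim it
        refill = request_more()  # server may hand out a replacement,
        if refill is not None:   # or None if it is temporarily down
            queue.append(refill)
```

With an initial allocation of two segments, "50% done" in the quoted message corresponds exactly to the first `ack(seg)` call: the first segment is finished, and only that segment is claimed.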
Assuming that you are multi-threaded--- Simply run two "workers" on the same machine. If there are delays in getting keys assigned, the two will soon get out of phase and keep the cpu busy. I kind of like that idea...
I thought of that, but:

1) for the same server load, it doubles the number of unACKed segments;

2) if process A is lagging process B, then when process B finishes and is idle waiting for the server, process A will run faster and thus reduce the lag. This will make the processes drift into phase.

I'm not convinced one way or the other.
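Point 2 above can be illustrated with a toy calculation (all numbers and names made up): while one worker is blocked on the server, the other has the whole CPU and closes that much of the gap each round-trip, so the lag shrinks toward zero.

```python
# Toy model of the drift-into-phase argument above (hypothetical
# numbers).  Each time the leading worker blocks on the server for
# `server_wait` seconds, the lagging worker closes that much of the
# gap, until both workers block at the same time.

def phase_lag(initial_lag, server_wait, rounds):
    """Lag of worker A behind worker B after `rounds` server round-trips."""
    lag = initial_lag
    for _ in range(rounds):
        lag = max(0.0, lag - server_wait)
        if lag == 0.0:
            break   # fully in phase: both workers now block simultaneously
    return lag
```

This supports the drift-into-phase objection: once in phase, both workers hit the server together, and the latency-hiding benefit of the second worker disappears.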
participants (3):
- droelke@rdxsunhost.aud.alcatel.com
- m5@dev.tivoli.com
- Piete Brooks