There is, however, a huge problem with replacing SHA-1 by something else overnight. First, other algorithms have not been as well analyzed and compared against SHA-1 as, for example, AES was against DES; so there is no immediate successor to SHA-1 that we can be sure will withstand the possible new techniques. Second, SHA-1 is tightly integrated into many protocols without a fallback algorithm (OpenPGP: fingerprints, MDC, default signature algorithm, and more).
They reduced the break time of SHA1 from 2^80 to 2^69. Presumably they will succeed in reducing the break time of SHA256 from 2^128 to a mere 2^109 or so. So SHA256 should be OK. 2^69 is damn near unbreakable. 2^80 is really unbreakable.

--digsig James A. Donald
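For a sense of the scale being disputed here, a quick back-of-the-envelope in Python; the 10^12 hash-operations-per-second throughput is purely an illustrative assumption, not a figure from the thread:

```python
# Rough sense of scale for the 2^69 vs 2^80 debate.
# The 1e12 ops/sec throughput is an assumed, illustrative figure
# for a well-funded attacker circa 2005, not a measured number.
SECONDS_PER_YEAR = 365 * 24 * 3600

for bits in (69, 80):
    ops = 2 ** bits
    years = ops / (1e12 * SECONDS_PER_YEAR)
    print(f"2^{bits}: {years:,.0f} years on one such machine")

# 2^69: ~19 years; 2^80: ~38,000 years -- the gap the thread argues over.
```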
----- Original Message ----- From: "James A. Donald" <jamesd@echeque.com> Subject: Re: SHA1 broken?
2^69 is damn near unbreakable.
I believe you are incorrect in this statement. It is a matter of public record that RSA Security's DES Challenge II was broken in 72 hours by $250,000 worth of semi-custom machine; for the sake of solidity, let's assume they used 2^55 work to break it. Now moving to a completely custom design, bumping the cost up to $500,000, and moving forward 7 years delivers ~2^70 work in 72 hours (give or take a couple orders of magnitude). This puts 2^69 work well within the realm of realizable breaks, assuming your attackers are smallish businesses; and if your attackers are large businesses with substantial resources, the break can be assumed in minutes if not seconds.

2^69 is completely breakable.
Joe
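A back-of-the-envelope check of this scaling argument, as a sketch in Python; the 18-month doubling period and the linear budget-to-work scaling are assumptions, and the 4x custom-over-semi-custom factor is taken from Joe's later message in this thread:

```python
# Sketch of the scaling argument: 2^55 work in 72 hours (1998),
# scaled forward 7 years of Moore's law plus a doubled budget.
# Doubling period and linear budget scaling are assumptions.
import math

base_work_bits = 55   # DES Challenge II: ~72 hours, $250k semi-custom (1998)
years_forward = 7     # 1998 -> 2005
doubling_months = 18  # assumed Moore's-law doubling period
budget_factor = 2     # $250k -> $500k
custom_factor = 4     # gain of full-custom over semi-custom (per Joe, below)

gain = 2 ** (years_forward * 12 / doubling_months) * budget_factor * custom_factor
work_bits = base_work_bits + math.log2(gain)
print(f"~2^{work_bits:.1f} work in 72 hours")   # ~2^62.7

# Joe's ~2^70 figure sits a couple orders of magnitude above this,
# which is exactly the slack his "give or take" clause allows for.
```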
Joseph Ashwood wrote:
> I believe you are incorrect in this statement. It is a matter of public record that RSA Security's DES Challenge II was broken in 72 hours by $250,000 worth of semi-custom machine; for the sake of solidity, let's assume they used 2^55 work to break it. Now moving to a completely custom design, bumping the cost up to $500,000, and moving forward 7 years delivers ~2^70 work in 72 hours (give or take a couple orders of magnitude). This puts 2^69 work well within the realm of realizable breaks, assuming your attackers are smallish businesses; and if your attackers are large businesses with substantial resources, the break can be assumed in minutes if not seconds.
> 2^69 is completely breakable. Joe

It's fine assuming that Moore's law will hold forever, but without that you can't really extrapolate a future tech curve. With *today's* technology, you would have to spend an appreciable fraction of the national budget to get a one-per-year "break" - note that anything that has been hashed with SHA-1 could then be considered breakable (that would allow you to, for example, forge a digital signature given an example). This of course assumes that the "break" doesn't match the criteria from the previous breaks by the same team - i.e., that you *can* create a collision, but you have little or no control over the plaintext for the colliding elements - there is no way to know, as the paper hasn't been published yet.
On 1108637369 seconds since the Beginning of the UNIX epoch Dave Howe wrote:
> It's fine assuming that Moore's law will hold forever, but without that you can't really extrapolate a future tech curve. With *today's* technology, you would have to spend an appreciable fraction of the national budget to get a one-per-year "break" - note that anything that has been hashed with SHA-1 could then be considered breakable (that would allow you to, for example, forge a digital signature given an example).
I think that it is generally prudent to make the most "conservative" assumption with regard to Moore's Law in any given context: i.e., bet that it will continue when determining how easy your security is to brute-force, and assume that it will not when writing code.

--
Roland Dowdeswell    http://www.Imrryr.ORG/~elric/
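As arithmetic, that first bet looks roughly like the sketch below; the 18-month doubling period and the 20-year protection horizon are illustrative assumptions, not figures from the post:

```python
# Sizing a brute-force security margin on the assumption that
# Moore's law *does* continue for the data's whole lifetime.
import math

doubling_months = 18   # assumed Moore's-law doubling period
protect_years = 20     # assumed lifetime of the protected data

extra_bits = math.ceil(protect_years * 12 / doubling_months)
print(f"plan for ~{extra_bits} extra bits of attacker work "
      f"over {protect_years} years")   # ~14 bits
```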
----- Original Message ----- From: "Dave Howe" <DaveHowe@gmx.co.uk> Sent: Thursday, February 17, 2005 2:49 AM Subject: Re: SHA1 broken?
> Joseph Ashwood wrote:
>> I believe you are incorrect in this statement. It is a matter of public record that RSA Security's DES Challenge II was broken in 72 hours by $250,000 worth of semi-custom machine; for the sake of solidity, let's assume they used 2^55 work to break it. Now moving to a completely custom design, bumping the cost up to $500,000, and moving forward 7 years delivers ~2^70 work in 72 hours (give or take a couple orders of magnitude). This puts 2^69 work well within the realm of realizable breaks, assuming your attackers are smallish businesses; and if your attackers are large businesses with substantial resources, the break can be assumed in minutes if not seconds.
>> 2^69 is completely breakable. Joe
> It's fine assuming that Moore's law will hold forever, but without that you can't really extrapolate a future tech curve. With *today's* technology, you would have to spend an appreciable fraction of the national budget to get a one-per-year "break" - note that anything that has been hashed with SHA-1 could then be considered breakable (that would allow you to, for example, forge a digital signature given an example). This of course assumes that the "break" doesn't match the criteria from the previous breaks by the same team - i.e., that you *can* create a collision, but you have little or no control over the plaintext for the colliding elements - there is no way to know, as the paper hasn't been published yet.

I believe you substantially misunderstood my statements: 2^69 work is doable _now_. 2^55 work was performed in 72 hours in 1998; scaling forward the 7 years to the present (and hence through known data) leads to a situation where the 2^69 work is achievable today in a reasonable timeframe (3 days), assuming reasonable quantities of available money ($500,000 US). There is no guessing about what the future holds for this; the 2^69 work is NOW.

----- Original Message ----- From: "Trei, Peter" <ptrei@rsasecurity.com> To: "Dave Howe" <DaveHowe@gmx.co.uk>; "Cypherpunks" <cypherpunks@al-qaeda.net>; "Cryptography" <cryptography@metzdowd.com>
> Actually, the final challenge was solved in 23 hours, about 1/3 Deep Crack and 2/3 Distributed.net. They were lucky, finding the key after only 24% of the keyspace had been searched.
> More recently, RC5-64 was solved about a year ago. It took d.net 4 *years*. 2^69 remains non-trivial.
What you're missing in this is that Deep Crack was already a year old at the time it was used for this; I was assuming that the most recent technologies would be used, so the 1998 point for Deep Crack was the critical point.

Also, if you check the real statistics for RC5-64, you will find that Distributed.net suffered from a major lack of optimization on the workhorse of the DES cracking effort (the DEC Alpha processor), even to the point where running the x86 code in emulation was faster than the native code. Since an Alpha processor had been the breaking force for DES Challenge I, and a factor of > 1/3 for III, this crippled the performance, leaving the Alphas running at only ~2% of their optimal speed and the x86 systems at only about 50%. Based on just this, 2^64 should have taken only 1.5 years. Additionally, virtually the entire Alpha community pulled out because we had better things to do with our processors (e.g., IIRC the same systems rendered Titanic), and Distributed.net was effectively sucked dry of workhorse systems; so a timeframe of 4-6 months is more likely, without any custom hardware and with rather sad software optimization.

Assuming that the new attacks can be pipelined (the biggest problem with the RC5-64 optimizations was pipeline breaking), it is entirely possible to use modern technology along with a GaAs substrate to generate chips in the 10-20 GHz range, or about 10x the speed available to Distributed.net. Add targeted hardware to the mix, deep pipelining, and massive multiprocessing, and my numbers still hold, give or take a few orders of magnitude (the 8% of III done by Deep Crack in 23 hours is only a little over 2 orders of magnitude off, so within acceptable bounds). 2^69 is achievable; it may not be pretty, and it certainly isn't kind to the security of the vast majority of "secure" infrastructure, but it is achievable, and while the cost bounds may have to be shifted, that is achievable as well.

It is still my view that everyone needs to keep a close eye on their hashes and make sure the numbers add up correctly; it is simply my view now that SHA-1 needs to be put out to pasture, and the rest of the SHA line needs to be heavily reconsidered because of its close relation to SHA-1.

The biggest unknown surrounding this is the actual amount of work necessary to perform the 2^69: if the workload is all XOR, then the costs and timeframe I gave are reasonably pessimistic, but if the required operations are dynamically sized multiplies, then the time*cost is off by some very large amounts. Even simple bulk computation, assuming full pipelining, says that 4,700 4 GHz processors would complete 2^69 operations in 1 year; even using full 3.8 GHz Pentium 4s instead of a more optimal package only leads to a processor cost of $3.1 million for a 1-year 2^69, and dropping down to 2.4 GHz Celerons requires 7,800 of them but only $538,000. Moving to DSPs and FPGAs, the costs will drop substantially, but I don't feel like looking it up; as the costs drop, the number of processors that can be used increases linearly, and as the individual speeds drop, the purchase cost drops better than linearly. I am quite confident that with careful engineering a custom box could be produced for the $500,000 mark that would do 2^69 operations in the proper timeframe. With deep pipelining, any complexity of 2^69 operations could be done in the timeframe, but that will scale the price.
I suppose I should also point out an unspoken qualifier: I am assuming that a large number of these machines would be built, reducing the engineering overhead to minuscule; for a one-off project, that will likely be the dominant cost. 2^69 work is achievable; the cost multiplier associated with it will be the determining factor.
Joe
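A quick check of the bulk-computation figures above, as a Python sketch; the one-operation-per-cycle pipelining is Joe's stated assumption, and the per-unit Celeron price is derived from his own totals:

```python
# Verifying the processor counts: how many CPUs of a given clock
# rate complete 2^69 single-cycle operations in one year, assuming
# full pipelining (one operation per clock, per Joe's premise).
SECONDS_PER_YEAR = 365 * 24 * 3600  # ~3.15e7

def cpus_needed(clock_hz, work_bits=69, years=1):
    return 2 ** work_bits / (clock_hz * SECONDS_PER_YEAR * years)

print(f"{cpus_needed(4.0e9):,.0f} x 4 GHz CPUs")        # ~4,700
print(f"{cpus_needed(2.4e9):,.0f} x 2.4 GHz Celerons")  # ~7,800

# At roughly $69 per Celeron (a unit price implied by the message's
# own totals), 7,800 of them lands near the quoted $538,000.
```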
----- Original Message ----- From: "Joseph Ashwood" <ashwood@msn.com> Sent: Friday, February 18, 2005 3:11 AM

[the attack is reasonable]

Reading through the summary, I found a bit of information that means my estimates of workload have to be re-evaluated. Page 1: "Based on our estimation, we expect that real collisions of SHA1 reduced to 70-steps can be found using today's supercomputers." This is a very important statement for estimating the real workload. Assuming there is an implicit "in one year" in there, and assuming BlueGene (Top 500 list, slot 1), this represents 22,937.6 GHz*years, or slightly over 2^69 clock cycles (I am obviously still using gigahertz because the information gives us nothing better to work from). This clearly indicates that the operations used for the workload span multiple processor clocks; performing a gross estimation based on pure guesswork, I'm guessing that my numbers are actually off by a factor of between 50 and 500, and this factor will likely work cleanly into either adjusting the timeframe or the production cost.

My suggestion, though, to make a switch away from SHA-1 as soon as reasonable, and to prepare to switch hashes very quickly in the future, remains the same; the march of processor progress is not going to halt, and the advance of cryptographic attacks will not halt, which will inevitably squeeze SHA-1 to broken. I would actually argue that the 2^80 strength it should have is enough to begin its retirement: 2^80 has been "strong enough" for a decade in spite of the march of technology. Under the processor speed enhancements that have happened over the last decade, we should have increased the keylength already to accommodate dual-core chips running at 20 times the speed, for a total of 40 times the prior speed (I was going to use SPEC data for a better calculation, but I couldn't immediately find specs for a Pentium Pro 200), by adding at least 5 bits, preferably 8, to our necessary protection profile.
Joe
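Both figures in this message check out numerically; a small Python verification (the 32,768-core x 700 MHz decomposition of the BlueGene number is an assumption about where the GHz figure comes from, since the message only gives the product):

```python
# Check 1: 22,937.6 GHz*years expressed as a power-of-two cycle count.
import math

SECONDS_PER_YEAR = 365 * 24 * 3600
ghz_years = 22937.6                # from the message; = 32768 * 0.7 GHz
cycles = ghz_years * 1e9 * SECONDS_PER_YEAR
print(f"2^{math.log2(cycles):.2f} cycles")   # ~2^69.3, "slightly over 2^69"

# Check 2: a 40x speedup since the Pentium Pro era consumes
# log2(40) bits of brute-force security margin.
print(f"{math.log2(40):.1f} bits")           # ~5.3: "at least 5, preferably 8"
```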
Joseph Ashwood wrote:
> I believe you substantially misunderstood my statements: 2^69 work is doable _now_. 2^55 work was performed in 72 hours in 1998; scaling forward the 7 years to the present (and hence through known data) leads to a situation where the 2^69 work is achievable today in a reasonable timeframe (3 days), assuming reasonable quantities of available money ($500,000 US). There is no guessing about what the future holds for this; the 2^69 work is NOW.

I wasn't aware that FPGA technology had improved that much if any - feel free to correct my misapprehension in that area though :)
On Sat, Feb 19, 2005 at 03:53:53PM +0000, Dave Howe wrote:
> I wasn't aware that FPGA technology had improved that much if any - feel free to correct my misapprehension in that area though :)
FPGAs are too slow (and too expensive); if you want lots of SHA-1 performance, use a crypto processor (or lots of forthcoming C5J mini-ITX boards), or an ASIC. That assumes fast SHA-1 computation is the basis for the attack -- we do not know that.

While looking, I came across http://www.ietf.org/proceedings/02jul/slides/saag-1.pdf ("We really DO NOT need SHA-256 for Message Authentication", mid-2002).

--
Eugen* Leitl http://leitl.org
ICBM: 48.07078, 11.61144
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
http://moleculardevices.org http://nanomachines.net
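For a rough sense of what "lots of SHA-1 performance" would have to mean, a sketch in Python; every figure in it (one compression per core per cycle, a 500 MHz ASIC clock, and the premise that the attack needs 2^69 hash computations at all) is an illustrative assumption, not a fact from the thread:

```python
# Rough sizing: how many dedicated, fully pipelined hash cores
# would it take to run 2^69 SHA-1 compressions in one year?
# Clock rate and one-compression-per-cycle are assumptions.
SECONDS_PER_YEAR = 365 * 24 * 3600

clock_hz = 500e6                 # assumed ASIC core clock
cores = 2 ** 69 / (clock_hz * SECONDS_PER_YEAR)
print(f"~{cores:,.0f} cores")    # ~37,000 cores
```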
Eugen Leitl wrote:
>> I wasn't aware that FPGA technology had improved that much if any - feel free to correct my misapprehension in that area though :)
> FPGAs are too slow (and too expensive); if you want lots of SHA-1 performance, use a crypto processor (or lots of forthcoming C5J mini-ITX boards), or an ASIC. That assumes fast SHA-1 computation is the basis for the attack -- we do not know that.

Indeed so. However, the argument "in 1998, an FPGA machine broke a DES key in 72 hours, therefore TODAY..." assumes (a) that the problems are comparable, and (b) that Moore's law has applied to FPGAs as well as CPUs. I am unaware of any massive improvement in FPGAs (certainly nothing on the scale of the comparable improvement in CPUs), and the ones I looked at a few days ago while researching this question seemed to have pretty much the same spec sheet as the ones I looked at back then. However, I am not a gate-array techie, and most of my experience with them has been small (two- or three-chip) devices at very long intervals, purely for my own interest. It is possible there has been a quantum leap forward in FPGA tech, or in some substitute tech that can perform massively parallel calculations, on larger block sizes and hence more operations, at a noticeably faster rate than the DES cracker could back then. Schneier apparently believes there has been - but he is simply applying Moore's law to the machine from back then, and that may not be true unless he knows something I don't (I assume he knows lots of things I don't, but of course he may not have thought this one through :)
----- Original Message ----- From: "Dave Howe" <DaveHowe@gmx.co.uk> Subject: Re: SHA1 broken?
> Indeed so. However, the argument "in 1998, an FPGA machine broke a DES key in 72 hours, therefore TODAY..." assumes (a) that the problems are comparable, and (b) that Moore's law has applied to FPGAs as well as CPUs.
That simply misreads my statements and misses a very large portion where I specifically stated that the new machine would need to be custom instead of semi-custom. The proposed system was not based on FPGAs; instead it would need to be based on ASICs engineered using modern technology, much more along the lines of a DSP. The primary gains available are actually from the larger wafers in use now, along with the transistor shrinkage; combined, these have approximately kept the cost in line with Moore's law, and the benefits of custom engineering account for the rest. So, for exact details about how I did the calculations: I assumed Moore's law for speed, and an additional 4x improvement from custom chips instead of off-the-shelf. In order to verify the calculations, I also redid them assuming DSPs that should be capable of processing the data (specifically from TI); I came to a cost within a couple orders of magnitude, although the power consumption would be substantially higher.
Joe
participants (6):
- Dave Howe
- Eugen Leitl
- James A. Donald
- Joseph Ashwood
- R.A. Hettinga
- Roland Dowdeswell