Possible crypto backdoor in RFC-2631 Diffie-Hellman Key Agreement Method
I am a n00b at crypto, so this might not make any sense. In DH, if one can select the group parameters (g, q, p), he can recover both parties' private keys very quickly, IMHO.

The RFC: https://tools.ietf.org/html/rfc2631
The main problem appears in: https://tools.ietf.org/html/rfc2631#section-2.2.2

   2.2.2. Group Parameter Validation

   The ASN.1 for DH keys in [PKIX] includes elements j and validation-
   Parms which MAY be used by recipients of a key to verify that the
   group parameters were correctly generated. Two checks are possible:

     1. Verify that p=qj + 1. This demonstrates that the parameters
        meet the X9.42 parameter criteria.
     2. Verify that when the p,q generation procedure of [FIPS-186]
        Appendix 2 is followed with seed 'seed', that p is found when
        'counter' = pgenCounter.

The main problem is the MAY. As I read it, an implementation MAY NOT verify it.

Sketch of the attack: choose $q$ to be a product of small primes $p_i$. Solve the discrete logarithm modulo each $p_i$ for the public keys. Apply the Chinese remainder theorem to recover the private keys. (This is a well-known method for DL, and for this reason the group order must be prime [160 bits ;)].)

I would be interested in how implementations handle this MAY. Let me know if there is a better list for this.

-- georgi
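The attack sketched above can be illustrated end to end with toy numbers. This is a minimal Pohlig-Hellman sketch in pure Python, under illustrative assumptions: the attacker picks the small prime factors, the private key, and the group; nothing here is taken from any real implementation.

```python
from math import prod

def is_prime(n):
    """Trial division; fine for the toy sizes used here."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

factors = [101, 103, 107]       # small primes chosen by the attacker
q = prod(factors)               # composite "subgroup order" q
# find a prime p = k*q + 1 so a subgroup of order q exists mod p
k = 2
while not is_prime(k * q + 1):
    k += 1
p = k * q + 1

# find an element g of order exactly q
h = 2
while True:
    g = pow(h, k, p)            # order of g divides q
    if all(pow(g, q // f, p) != 1 for f in factors):
        break                   # order is exactly q
    h += 1

x = 123456 % q                  # "victim" private key
y = pow(g, x, p)                # victim's public key

# Pohlig-Hellman: solve the DL in each small prime-order subgroup
residues = []
for f in factors:
    gf = pow(g, q // f, p)      # element of order f
    yf = pow(y, q // f, p)      # yf = gf^(x mod f)
    xf = next(e for e in range(f) if pow(gf, e, p) == yf)
    residues.append(xf)         # x mod f, found by brute force

# recombine x mod q with the Chinese remainder theorem
recovered = 0
for r, f in zip(residues, factors):
    m = q // f
    recovered += r * m * pow(m, -1, f)
recovered %= q

print(recovered == x)           # True: private key recovered
```

With a real 160-bit composite q made of ~80-bit factors the per-factor DLs would use Pollard rho rather than brute force, but the structure of the attack is the same.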
One saving grace about RFC 2631 was that it was pretty much universally ignored for the reason that it was, well, a pretty stupid way to do things, so the number of affected implementations would be approximately zero. (I only know of one, rather minor, vendor who implemented it. Microsoft implemented it in receive-only mode solely so that they couldn't be accused of being non-standards-compliant, but I'd be very surprised if there was anything still around that supported it. For starters you'd need to be able to find a CA that could issue you a DH certificate...). Peter.
On Thu, Sep 03, 2015 at 11:59:11AM +0000, Peter Gutmann wrote:
One saving grace about RFC 2631 was that it was pretty much universally ignored for the reason that it was, well, a pretty stupid way to do things, so the number of affected implementations would be approximately zero.
Anyway, I would appreciate it if someone checks whether current implementations accept a composite $q$.
(I only know of one, rather minor, vendor who implemented it. Microsoft implemented it in receive-only mode solely so that they couldn't be accused of being non-standards-compliant, but I'd be very surprised if there was anything still around that supported it. For starters you'd need to be able to find a CA that could issue you a DH certificate...).
What do you mean by DH certificate? Can DH sign?
Peter.
Georgi Guninski <guninski@guninski.com> writes:
Anyway, I would appreciate if someone checks if current implementations accept composite $q$.
Well, I think the problem will be finding any implementation of this at all, or at least any that's still around now.
What do you mean by DH certificate?
The static DH parameters need to be turned into a certificate by a CA. I don't know of any public CA that can issue these. Peter.
On Thu, Sep 03, 2015 at 01:33:48PM +0000, Peter Gutmann wrote:
Georgi Guninski <guninski@guninski.com> writes:
Anyway, I would appreciate if someone checks if current implementations accept composite $q$.
Well, I think the problem will be finding any implementation of this at all, or at least any that's still around now.
What do you mean by DH certificate?
The static DH parameters need to be turned into a certificate by a CA. I don't know of any public CA that can issue these.
Peter.
Well, openssl appears to support dhparam: https://www.openssl.org/docs/manmaster/apps/dhparam.html (maybe one needs to patch the source). Maybe the same approach will work for DSA.
Georgi Guninski <guninski@guninski.com> writes:
Well, openssl appears to support dhparam: https://www.openssl.org/docs/manmaster/apps/dhparam.html
That just indicates support for PKCS #3 DH parameters, not anything else. In any case the page also says:

   OpenSSL currently only supports the older PKCS#3 DH, not the newer
   X9.42 DH.

so that explicitly precludes using it in certs, even if code elsewhere would support such usage.

I've gone through my (sizeable) cert collection and found a single example of X9.42 certs, created by a USG contracting company paid to develop the code for this and dating from 1996. The certs are signed with a test DSA key, and contain a number of errors (zero-length fields, the DH key is marked as a CA signing key, etc).

Peter.
On Thu, Sep 03, 2015 at 11:59:11AM +0000, Peter Gutmann wrote:
One saving grace about RFC 2631 was that it was pretty much universally ignored for the reason that it was, well, a pretty stupid way to do things, so the number of affected implementations would be approximately zero.
Even if "affected implementations would be approximately zero", can we count this as "crypto backdoored RFC" as per OP?
Georgi Guninski <guninski@guninski.com> writes:
Even if "affected implementations would be approximately zero", can we count this as "crypto backdoored RFC" as per OP?
Oh sure, it's definitely broken. OTOH I'm not sure if it's a deliberate backdoor; the whole thing is such a bad design to begin with that something like this is really just the icing on the cake. It may be worth submitting an erratum to the RFC that mentions the problem, just in case anyone is actually crazy enough to want to implement this in the future. Peter.
On Thu, Sep 03, 2015 at 11:59:11AM +0000, Peter Gutmann wrote:
the number of affected implementations would be approximately zero.
openssl's DSA appears to check the primality of q. Attached are public and private keys with a composite q (beware: the private key might not be generated correctly).
On Fri, Sep 04, 2015 at 11:26:05AM +0300, Georgi Guninski wrote:
openssl's DSA appears to check primality of q.
This is almost surely wrong: openssl's DSA verify/sign doesn't check the primality of $q$. Tested on openssl 1.0.1g (I know it is old). The only obstacle I hit was this check:

    i = BN_num_bits(dsa->q);
    /* fips 186-3 allows only different sizes for q */
    if (i != 160 && i != 224 && i != 256) {
        DSAerr(DSA_F_DSA_DO_VERIFY, DSA_R_BAD_Q_VALUE);
        return -1;
    }

Attached are private and public keys, with $q$ composite and equal to:

    604462909807314587353111 * 1208925819614629174706189

Session with 1.0.1g:

    fuuu:cp /tmp/key-comp2.* .
    fuuu:echo "fuck" > foo.txt
    fuuu:./apps/openssl dgst -dss1 -sign key-comp2.key foo.txt > sigfile.bin
    fuuu:./apps/openssl dgst -verify key-comp2.pub -signature sigfile.bin foo.txt
    Verified OK

Cheers,
-- georgi
On Fri, Sep 04, 2015 at 03:08:16PM +0300, Georgi Guninski wrote:
On Fri, Sep 04, 2015 at 02:34:37PM +0300, Georgi Guninski wrote:
tested on openssl 1.0.1g (I know it is old).
Same on latest openssl-1.0.1p.
This works with openssl 1.0.1p over SSL. Attached are a self-signed cert and the private key.

Session:

    ./apps/openssl s_server -accept 8080 -cert ./cacert2.pem -key ./key-comp2.key -HTTP
    openssl s_client -connect localhost:8080
    Server public key is 1204 bit
    Verify return code: 18 (self signed certificate)

    sage: q=0x008000000000000000001d8000000000000000012b
    sage: factor(q)
    604462909807314587353111 * 1208925819614629174706189
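The numbers above can be double-checked in plain Python. This sketch only verifies why this particular composite q slips past the size-only check quoted earlier in the thread: its bit length is exactly 160, which is one of the FIPS 186-3 sizes, yet it factors into the two ~80-bit primes shown by sage.

```python
# q from the PoC cert, and the two factors sage reports
q  = 0x008000000000000000001d8000000000000000012b
p1 = 604462909807314587353111
p2 = 1208925819614629174706189

print(q.bit_length())   # 160 -> passes the "i != 160 && ..." size check
print(q == p1 * p2)     # True -> q is composite, not a valid DSA q
```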
On Sat, Sep 5, 2015 at 5:28 AM, Georgi Guninski <guninski@guninski.com> wrote: ...
This works with openssl 1.0.1p over SSL.
Attached is self signed cert and the priv. key.
Session: ./apps/openssl s_server -accept 8080 -cert ./cacert2.pem -key ./key-comp2.key -HTTP
openssl s_client -connect localhost:8080
Server public key is 1204 bit
Verify return code: 18 (self signed certificate)

sage: q=0x008000000000000000001d8000000000000000012b
sage: factor(q)
604462909807314587353111 * 1208925819614629174706189
Georgi, just a quick note to thank you for sharing your research and taking time to verify your findings against OpenSSL. I've been researching cryptographic backdoors -- you may want to review this http://illusoryTLS.com/ -- and the lack of checks on group parameters, malicious or otherwise (*), is to me yet another cause for concern. Great catch! (*) It would be interesting to look at the story of RFC-2631, as Bernstein, Lange, and Niederhagen did for the Dual EC standard https://projectbullrun.org/dual-ec/ Cheers, -- Alfonso
On Sat, Sep 05, 2015 at 06:37:09AM +0000, Alfonso De Gregorio wrote:
(*) It would be interesting to look at the story of RFC-2631, as Bernstein, Lange, and Niederhagen did for the Dual EC standard https://projectbullrun.org/dual-ec/
2631 is on Wikipedia's page for DH.

Another concern for a backdoor is the FIPS in this thread, which requires a small subgroup (as low as 160 bits). Bearing in mind that for generic primes DL is subexponential (IIRC via something like GNFS), the complexity of DL in the small forced subgroup is questionable.

Just to note, so far this thread questions:

1. DH's RFC
2. DSA as implemented by openssl
3. FIPS requiring a small subgroup.

-- georgi
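The cost comparison alluded to above can be sketched as a back-of-envelope calculation (this is a rough heuristic, not a claim about any concrete attack): generic DL in a 160-bit prime-order subgroup costs about sqrt(q) ~ 2^80 group operations (Pollard rho), while NFS-style subexponential DL in the full group depends on the size of p, not q.

```python
import math

# generic (Pollard rho) cost in a 160-bit prime-order subgroup
q_bits = 160
rho_cost = 2 ** (q_bits // 2)        # ~2^80 group operations
print(math.log2(rho_cost))           # 80.0

# L_p[1/3, (64/9)^(1/3)] heuristic for NFS-style DL mod a 1024-bit prime
ln_p = 1024 * math.log(2)
c = (64 / 9) ** (1 / 3)
L = math.exp(c * ln_p ** (1 / 3) * math.log(ln_p) ** (2 / 3))
print(round(math.log2(L)))           # roughly 2^87-ish: same ballpark
```

So for a 1024-bit p the two attack avenues land in a comparable range, which is presumably why the standard pairs 1024-bit p with 160-bit q; the question raised above is whether the small subgroup is ever the weaker of the two.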
On Sat, Sep 5, 2015 at 7:07 AM, Georgi Guninski <guninski@guninski.com> wrote:
On Sat, Sep 05, 2015 at 06:37:09AM +0000, Alfonso De Gregorio wrote:
(*) It would be interesting to look at the story of RFC-2631, as Bernstein, Lange, and Niederhagen did for the Dual EC standard https://projectbullrun.org/dual-ec/
2631 is on wikipedia's page for DH.
Sure, the questions are: What is the origin of the current wording of the standard, which opens an avenue for lax checks of group parameters? Or, if, as you correctly pointed out, an implementation MAY NOT check group parameters, which entity deserves credit for it?

Interestingly, a review of revisions (using rfcdiff) shows that the current wording was introduced in draft #1 of draft-ietf-smime-x942 https://tools.ietf.org/rfcdiff?difftype=--hwdiff&url2=draft-ietf-smime-x942-01.txt. This is dated October 1998. Yet, it is still not clear if the diff is to be attributed to Rescorla, or to any other contributor to this standardization effort.

Cheers,
-- Alfonso
On Sat, Sep 05, 2015 at 07:41:11AM +0000, Alfonso De Gregorio wrote:
Sure, the questions are: What is the origin of the current wording of the standard, which opens an avenue for lax checks of group parameters? Or, if, as you correctly pointed out, an implementation MAY NOT check group parameters, which entity deserves credit for it?
IMHO I haven't demonstrated an attack against DH yet (though I believe it is possible). The current examples are against DSA, not DH.
On Sat, Sep 5, 2015 at 8:07 AM, Georgi Guninski <guninski@guninski.com> wrote: ...
IMHO I haven't demonstrated attack against DH yet (believe it is possible).
The current examples are against DSA, not DH.
Correct. I have the same feeling. I hope further research will prove us both wrong about this. Cheers, -- Alfonso
Alfonso De Gregorio <alfonso.degregorio@gmail.com> writes:
Sure, the questions are: What is the origin of the current wording of the standard, which opens an avenue for lax checks of group parameters? Or, if, as you correctly pointed out, an implementation MAY NOT check group parameters, which entity deserves credit for it?
You need to go back to the original source of all the DLP stuff, which is DSA / FIPS 186. Now that didn't require any validation of anything until FIPS 186-3 came along in June 2009, and that in turn points to SP 800-89, which has a section 4, "Assurance of Domain Parameter Validity".

This one gets really complicated because you can get the domain parameters from all over the place (generated yourself, provided for you by a third party, found at the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard', ...).

So if you generate them yourself, you're OK. If you get them from a CA then you don't need to care, because if the CA wants to attack you then they can just issue a forged cert in your name and don't need to worry about backdooring the params. (In any case using shared params is a bad idea, because they allow forgery of signatures on certificates. Suppose that the certificate contains a copy of the certificate signer's DSA parameters, and the verifier of the certificate has a copy of the signer's public key but not the signer's DSA parameters (which are shared with other keys). If the verifier uses the DSA parameters from the certificate along with the signer's public key to verify the signature on the certificate, then an attacker can create bogus certificates by choosing a random u and finding its inverse v modulo q (uv is congruent to 1 modulo q). Then take the certificate signer's public key g^x and compute g' = (g^x)^u. Then g'^v = g^x. Using the DSA parameters p, q, g', the signer's public key corresponds to the private key v, which the attacker knows. The attacker can then create a bogus certificate, put parameters (p, q, g') in it, and sign it with the DSA private key v to create an apparently valid certificate.)
Finally, if you get them from the disused lavatory then you deserve everything you get^H^H^H^H^H^H^H^HFIPS 186-3 has validation requirements that use the optional j and seed parameters, but I've never seen them used anywhere, so even though the validation requirements exist, you can't apply them.

The real question though is, why would anyone use parameters they didn't generate themselves? All DSA implementations I've seen (apart from some experimental code from the 1990s, which also encoded the j/seed values) generate all the parameters themselves; it's not like ECDSA where everyone ends up using some shared values that a "trusted" external party provides for them.

Peter.
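The shared-parameter forgery described above can be checked with toy numbers. This is a sketch in pure Python with made-up values: the point is only the algebra (g'^v = g^x when g' = (g^x)^u and uv = 1 mod q), not any real certificate format.

```python
# toy DSA-style group: q prime, p = k*q + 1 prime, g of order q mod p
q, p = 101, 607              # 607 = 6*101 + 1, both prime
g = pow(2, (p - 1) // q, p)  # element of order q
assert g != 1 and pow(g, q, p) == 1

x = 57                       # signer's private key (unknown to attacker)
y = pow(g, x, p)             # signer's public key, seen in the cert

# attacker's parameter substitution
u = 33                       # any u coprime to q
v = pow(u, -1, q)            # v = u^-1 mod q, so u*v = 1 mod q
g2 = pow(y, u, p)            # substituted generator g' = (g^x)^u

# under the substituted parameters (p, q, g'), the value v the attacker
# chose acts as the private key matching the signer's public key y:
# g'^v = g^(x*u*v) = g^x = y, since the order of g is q
print(pow(g2, v, p) == y)    # True
```

The attacker can therefore sign with v under (p, q, g') and the signature verifies against the legitimate public key y, which is exactly why the verifier must not take the parameters from the certificate being verified.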
On Sat, Sep 5, 2015 at 1:31 PM, Georgi Guninski <guninski@guninski.com> wrote:
On Sat, Sep 05, 2015 at 11:45:07AM +0000, Peter Gutmann wrote:
The real question though is, why would anyone use parameters they didn't generate themselves? All DSA implementations I've seen (apart from some
What about MITM in DH -- where do you get the keys from in this case?
A key-recovery attack may allow the retroactive decryption of past communication sessions, if the network endpoints rely on fixed Diffie-Hellman. Of course, whenever an attacker can successfully mount a MITM attack the current sessions are compromised. Cheers, -- Alfonso
On Sat, Sep 05, 2015 at 02:06:22PM +0000, Alfonso De Gregorio wrote:
On Sat, Sep 5, 2015 at 1:31 PM, Georgi Guninski <guninski@guninski.com> wrote:
On Sat, Sep 05, 2015 at 11:45:07AM +0000, Peter Gutmann wrote:
The real question though is, why would anyone use parameters they didn't generate themselves? All DSA implementations I've seen (apart from some
What about MITM in DH -- where do you get the keys from in this case?
A key-recovery attack may allow the retroactive decryption of past communication sessions, if the network endpoints rely on fixed Diffie-Hellman. Of course, whenever an attacker can successfully mount a MITM attack the current sessions are compromised.
Thanks. Are you referring to "DH as per the fucked RFC" or as "DH implemented properly"?
On Sat, Sep 5, 2015 at 2:31 PM, Georgi Guninski <guninski@guninski.com> wrote:
On Sat, Sep 05, 2015 at 02:06:22PM +0000, Alfonso De Gregorio wrote:
On Sat, Sep 5, 2015 at 1:31 PM, Georgi Guninski <guninski@guninski.com> wrote:
On Sat, Sep 05, 2015 at 11:45:07AM +0000, Peter Gutmann wrote:
The real question though is, why would anyone use parameters they didn't generate themselves? All DSA implementations I've seen (apart from some
What about MITM in DH -- where do you get the keys from in this case?
A key-recovery attack may allow the retroactive decryption of past communication sessions, if the network endpoints rely on fixed Diffie-Hellman. Of course, whenever an attacker can successfully mount a MITM attack the current sessions are compromised.
Thanks. Are you referring to "DH as per the fucked RFC" or as "DH implemented properly"?
I'm concerned with Fixed Diffie-Hellman implemented properly. Cheers, -- Alfonso
On Sat, Sep 05, 2015 at 02:41:51PM +0000, Alfonso De Gregorio wrote:
A key-recovery attack may allow the retroactive decryption of past communication sessions, if the network endpoints rely on fixed Diffie-Hellman. Of course, whenever an attacker can successfully mount a MITM attack the current sessions are compromised.
Thanks. Are you referring to "DH as per the fucked RFC" or as "DH implemented properly"?
I'm concerned with Fixed Diffie-Hellman implemented properly.
Do you have an example of an application which distinguishes proper DH from non-proper DH?
On Sat, Sep 5, 2015 at 3:02 PM, Georgi Guninski <guninski@guninski.com> wrote: ...
I'm concerned with Fixed Diffie-Hellman implemented properly.
Do you have example of application which distinguishes proper DH from non-proper DH?
I'm confused. What do you mean by proper DH vs non-proper DH? Are you referring to the performance of group parameters validation or lack of the same, or something else? Cheers, -- Alfonso
On Sat, Sep 05, 2015 at 03:21:30PM +0000, Alfonso De Gregorio wrote:
On Sat, Sep 5, 2015 at 3:02 PM, Georgi Guninski <guninski@guninski.com> wrote: ...
I'm concerned with Fixed Diffie-Hellman implemented properly.
Do you have example of application which distinguishes proper DH from non-proper DH?
I'm confused. What do you mean by proper DH vs non-proper DH? Are you referring to the performance of group parameters validation or lack of the same, or something else?
I mean: non-proper DH is an implementation which doesn't return an error/abort if $q$ is composite. $q$ is defined in the RFC.
On Sat, Sep 5, 2015 at 3:25 PM, Georgi Guninski <guninski@guninski.com> wrote: ...
I mean: non-proper DH is implementation which doesn't return error/aborts if $q$ is composite. $q$ is defined in the RFC.
I'm not aware of any implementation that fails to abort if q is composite. As a case in point, OpenSSL versions implementing X9.42 DH (1.0.2-Beta2 and above) test both p and q for primality:

    int DH_check(const DH *dh, int *ret)
    {
        /* ... */
        if (dh->q) {
            /* ... */
            if (!BN_is_prime_ex(dh->q, BN_prime_checks, ctx, NULL))
                *ret |= DH_CHECK_Q_NOT_PRIME;

and

        if (!BN_is_prime_ex(dh->p, BN_prime_checks, ctx, NULL))
            *ret |= DH_CHECK_P_NOT_PRIME;
        else if (!dh->q) {
            /* ... */
        }

I have no evidence, though, that applications built on OpenSSL call the DH_check() function every time they need to.

Cheers,
-- Alfonso
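For reference, the kind of probabilistic primality test BN_is_prime_ex performs can be sketched in a few lines of standalone Python (this is a textbook Miller-Rabin, not the OpenSSL implementation), and it does flag the composite q used in the PoC earlier in the thread:

```python
import random

def miller_rabin(n, rounds=40):
    """Probabilistic primality test; False means definitely composite."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:               # write n-1 = d * 2^s with d odd
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # witness found: n is composite
    return True                     # probably prime

q = 0x008000000000000000001d8000000000000000012b
print(miller_rabin(q))              # False: a checking verifier rejects it
```

So the check is cheap; the issue raised in this thread is that nothing forces implementations to run it on attacker-supplied parameters.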
On Sat, Sep 05, 2015 at 03:40:24PM +0000, Alfonso De Gregorio wrote:
On Sat, Sep 5, 2015 at 3:25 PM, Georgi Guninski <guninski@guninski.com> wrote: ...
I mean: non-proper DH is implementation which doesn't return error/aborts if $q$ is composite. $q$ is defined in the RFC.
I'm not aware of any implementation that fails to abort is q is composite.
As a case in point, OpenSSL versions implementing X9.42 DH (1.0.2-Beta2 and above) test both p and q for primality:
int DH_check(const DH *dh, int *ret) { /* ... */
if (dh->q) { /* ... */ if (!BN_is_prime_ex(dh->q, BN_prime_checks, ctx, NULL)) *ret |= DH_CHECK_Q_NOT_PRIME;
In 1.0.1p, is_prime() is such a mess that, from a quick audit, it appears to often return $-1$. Did you check the explicit PoC in this thread against this version of openssl?
On Sat, Sep 5, 2015 at 4:06 PM, Georgi Guninski <guninski@guninski.com> wrote:
On Sat, Sep 05, 2015 at 03:40:24PM +0000, Alfonso De Gregorio wrote:
On Sat, Sep 5, 2015 at 3:25 PM, Georgi Guninski <guninski@guninski.com> wrote: ...
I mean: non-proper DH is implementation which doesn't return error/aborts if $q$ is composite. $q$ is defined in the RFC.
I'm not aware of any implementation that fails to abort is q is composite.
As a case in point, OpenSSL versions implementing X9.42 DH (1.0.2-Beta2 and above) test both p and q for primality:
int DH_check(const DH *dh, int *ret) { /* ... */
if (dh->q) { /* ... */ if (!BN_is_prime_ex(dh->q, BN_prime_checks, ctx, NULL)) *ret |= DH_CHECK_Q_NOT_PRIME;
In 1.0.1p is_prime() is such a mess, it appears to often return $-1$ by quick audit.
Did you check the explicit POC in this thread against this version of openssl?
Yes, I did. The DSA PoC works against OpenSSL version 1.0.2d (snapshot). Cheers, -- Alfonso
Georgi Guninski <guninski@guninski.com> writes:
On Sat, Sep 05, 2015 at 11:45:07AM +0000, Peter Gutmann wrote:
The real question though is, why would anyone use parameters they didn't generate themselves? All DSA implementations I've seen (apart from some
What about MITM in DH -- where do you get the keys from in this case?
Whose DH? There are three major users of this on the public Internet, IPsec, TLS, and SSH, all of which have the server provide the DH values. MITM'ing yourself isn't much of an achievement.

I haven't seen anything about this (so far) that doesn't class it as a purely certificational weakness. Consider the following equivalent of the flaw, but for RSA: I stand up a TLS server and provision it with a cert where the server-auth key has exponent 1. There is nothing in any spec that I can immediately think of that says that you have to reject keys with e=1 (e.g. RFC 3447 just says it's "a positive integer"). Most implementations were quite happy to accept e=1 keys until maybe two years ago, when there was some bad publicity about them which forced vendors to fix the problem, but before that no-one bothered rejecting such obviously invalid keys. Use of e=1 keys was even a documented Windows "feature" to allow plaintext key export while still being FIPS 140 compliant [0].

This isn't any deliberately-inserted backdoor in the RFC, it's just sloppy wording. In any case though, if I configure my server with a key I know to be broken then any problems I encounter are my own fault. The reductio ad absurdum form of this is that I stand up a TLS server which serves the private key to anyone that connects to it (or puts it in the SSH banner, or whatever). OK, so I've proven that I can backdoor myself. I can't see how a third-party attacker can do anything, though (for DH, RSA, or just straight publish-the-key), unless I help them do it.

Peter.

[0] Where "FIPS" = "Farcical Information Processing Security".
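The e=1 analogy above is easy to see concretely: textbook RSA encryption with public exponent 1 is the identity map, so the "ciphertext" is just the plaintext. A toy sketch with illustrative numbers:

```python
# toy RSA modulus; the sizes are irrelevant to the point being made
p, q = 61, 53
n = p * q        # n = 3233

e = 1            # the "obviously invalid" public exponent
m = 42           # plaintext
c = pow(m, e, n) # textbook RSA "encryption": c = m^1 mod n

print(c == m)    # True: with e = 1 there is no secrecy at all
```

Nothing about the modulus protects you here; any observer reads the plaintext directly, which is why accepting such a key is purely the key owner's (or the implementation's) fault rather than a protocol-level break.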
On Sun, Sep 06, 2015 at 07:56:07AM +0000, Peter Gutmann wrote:
I haven't seen anything about this (so far) that doesn't class it as a purely certificational weakness. Consider the following equivalent of the flaw, but
OK, you might be right. A summary of my verbiage on this list is here:
https://j.ludost.net/blog/archives/2015/09/05/rfc-2631_fips_186-3_and_openss...

Besides DH:

2) openssl 1.0.1p accepts composite $q$ in DSA
3) FIPS 186-3 forces a small subgroup as low as 160 bits, and openssl
   1.0.1p insists on this.

To repeat: DL is subexponential in the whole group of order $p-1$, and I don't exclude the possibility that it is easier in the small forced subgroup.

Have fun,
-- georgi
Observe that reuse of group parameters in DH appears to be common:

   Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice
   https://weakdh.org/imperfect-forward-secrecy-ccs15.pdf

   p.3, Table 1: Top 512-bit DH primes for TLS. 8.4% of Alexa Top 1M
   HTTPS domains allow DHE_EXPORT, of which 92.3% use one of the two
   most popular primes, shown here.
On Sat, Sep 05, 2015 at 11:45:07AM +0000, Peter Gutmann wrote:
So if you generate them yourself, you're OK. If you get them from a CA then you don't need to care because if the CA wants to attack you then they can just issue a forged cert in your name and don't need to worry about backdooring the params (in any case using shared params is a bad idea because they allow forgery of signatures on certificates. Suppose that the certificate contains a copy of the certificate signer's DSA parameters, and the verifier of the certificate has a copy of the signer's public key but not the signer's DSA parameters (which are shared with other keys). If the verifier uses the DSA parameters from the certificate along with the signer's public key to verify the signature on the certificate, then an attacker can create bogus certificates by choosing a random u and finding its inverse v modulo q (uv is congruent to 1 modulo q). Then take the certificate signer's public key g^x and compute g' = (g^x)^u. Then g'^v = g^x. Using the DSA parameters p, q, g', the signer's public key corresponds to the private key v, which the attacker knows. The attacker can then create a bogus certificate, put parameters (p, q, g') in it, and sign it with the DSA private key v to create an apparently valid certificate).
Sorry, but I don't understand the final stage of the attack. If I follow correctly, you start from a public DSA key with strong parameters and produce another keypair which is related to the original key but distinct from it. What is the final stage of the attack?
On Sat, Sep 05, 2015 at 07:41:11AM +0000, Alfonso De Gregorio wrote:
parameters? Or, if, as you correctly pointed out, an implementation MAY NOT check group parameters, which entity deserves credit for it?
If you feel like debugging RFCs, start from RFC 2119:
https://tools.ietf.org/html/rfc2119#section-5

   5. MAY   This word, or the adjective "OPTIONAL", mean that an item
   is truly optional.

This invites many backdoors through lack of formalism. IMHO RFCs must use only MUST or "MUST NOT" to make the ``formal model'' soundly defined (recursively RFC compliant). Suppose implementation X1 follows a MAY and X2 does not. Observe that in the real world neither X1 nor X2 need be RFC compliant (like malware). Even if they are compliant, this might cause trouble.

In my DSA SSL example (which might be a technical bug in openssl, but not necessarily a technical bug in a hypothetical DH implementation), the key/cert wasn't RFC compliant, but passed verification.

Cheers,
-- georgi
On Sat, Sep 5, 2015 at 11:50 AM, Georgi Guninski <guninski@guninski.com> wrote: ...
If you feel like debugging RFC, start from:
RFC: 2119
https://tools.ietf.org/html/rfc2119#section-5 5. MAY This word, or the adjective "OPTIONAL", mean that an item is truly optional.
This includes many backdoors per lack of formalism.
IMHO RFC must use only MUST or "MUST NOT" to make the ``formal model'' soundly defined (recursively RFC compliant).
While I sympathize with your point of view, and while I would welcome full equivalence of implementations, exclusivity of mandatory requirements is neither a principle governing today's standardization work, nor, sure enough, a principle that guided the standardization of protocols back in the 1990s.

The key words defined in RFC 2119 reflect one or any combination of the following:

* A robustness principle, codified in Postel's Law;
* Economic interests at stake;
* Understanding of the subject matter.

Today our community has finally reconsidered the principle that, by asking designers to "[b]e conservative in what [they] send, [but] be liberal in what [they] accept", promised robustness on the internet. But the incentives are still the same; interoperability and security are always in tension.

It is worth noting that, yesterday as today, we need a better understanding of the subject matter. It should have been obvious that validation of group parameters has security implications. And, just like any and all security-relevant requirements, it should have been made a mandatory check.

I second Peter's recommendation; consider filing an erratum.

Cheers,
-- Alfonso
On Sat, Sep 05, 2015 at 01:41:23PM +0000, Alfonso De Gregorio wrote:
I second Peter's recommendation; consider filing an erratum.
I strongly doubt I will do this. We don't negotiate with turrorists ;-) btw, asked about parts of this thread here: http://lists.randombit.net/pipermail/cryptography/ don't see it in the archives yet, though I received it. Cheers, -- georgi
This is also on popular(?) forums:

[0] https://news.ycombinator.com/item?id=10175284
[1] https://www.reddit.com/r/crypto/comments/3jumon/rfc2631_fips_1863_and_openss...

Comments in [0] suggest "formal verification".

Likely the lovely micro$oft will classify this email as "self promotion". Scumbags, linking is legal at least in the EU (so far).
On Mon, Sep 7, 2015 at 11:25 AM, Georgi Guninski <guninski@guninski.com> wrote:
This is also on popular? forums:
[0] https://news.ycombinator.com/item?id=10175284 [1] https://www.reddit.com/r/crypto/comments/3jumon/rfc2631_fips_1863_and_openss...
Comments in [0] suggest "formal verification".
The only hope of having formal verification that extends also to algebraic properties is to start from formal specifications. A top-down approach, in stark contrast with the dynamic, agile, and pragmatic "ship, then test" paradigm [1] and the "don't worry, be crappy" mantra [2], repeated by the entrepreneurs innovating the most.

We need better security trade-offs.

-- Alfonso

[1] http://guykawasaki.com/the_art_of_boot/
[2] http://guykawasaki.com/the_art_of_inno/
On Mon, Sep 07, 2015 at 12:07:14PM +0000, Alfonso De Gregorio wrote:
Comments in [0] suggest "formal verification".
The only hope to have a formal verification that extends also to algebraic properties, is to start from formal specifications. A top-down approach in stark contrast with the dynamic, agile, and pragmatic "ship, then test" paradigm [1] and the "don't worry, be crappy" mantra [2], repeated by entrepreneurs innovating the most.
We need better security trade-offs.
Re "formal verification": I am a skidiot at formal verification (FV). So far my best achievement in FV is an "Axiom free proof of False" in Coq (is it Cock?). I did this by native code execution via plugins in Coq (the plugins were part of the "pr00f"), which in theory can falsify other proofs depending on the file permissions.

Much later I learned that the lovely micro$oft heavily depend on plugins in their Cock "pr00fs", and accidentally something broke the check of the "pr00f" (something like "coqchk") for a result of significant importance, due to some fault in the plugin and/or Coq. The pr00f failure was discussed on an academic site.

In short, I consider Coq a charlatan tool, and likely a security vulnerability, since a proof can easily execute native code. Let me know if you need further references.

btw, doesn't your post contradict another post of yours here:
https://cpunks.org/pipermail/cypherpunks/2015-September/009032.html
While I sympathize with your point of view, and while I would welcome a full equivalence of implementations, ...
On Mon, Sep 7, 2015 at 12:30 PM, Georgi Guninski <guninski@guninski.com> wrote: ...
btw, doesn't your post contradict another post of yours here: https://cpunks.org/pipermail/cypherpunks/2015-September/009032.html
It doesn't, as long as we don't confuse what is desirable -- and indeed it is so -- with what is practically and systematically attainable. Or, to paraphrase Danny Strong, idealism loses to pragmatism when it comes to engineering security.

I'm not even persuaded that writing a formal specification always gives us the ability to check the equivalence of implementations. As a negative case in point, take languages/protocols and their parsers. A grammar can be understood as a specification. Still, "arithmetically checking the computational equivalence of parsers [...] is decidable up to a level of computational power required to parse the language, and becomes undecidable thereafter". [1]

All of which is to say that checking the computational equivalence of parsers is still possible. But, as designers, in order to reconcile the desirable with the practically attainable, we need to stick to the simplest possible input languages (i.e., regular and context-free). This is the kind of security trade-off I was alluding to. And this also links us to the other thread on browser security, exploits, and Firefox.

-- Alfonso

[1] http://langsec.org/papers/Bratus.pdf
FYI: This is on libressl-dev: http://article.gmane.org/gmane.comp.encryption.libressl/74 http://news.gmane.org/gmane.comp.encryption.libressl (so far they didn't piss me off). Also on [openssl-users]: https://mta.openssl.org/pipermail/openssl-users/2015-September/002033.html They consider all of these "features", so I am not trolling them anymore.
On Thu, Sep 10, 2015 at 02:39:08PM +0300, Georgi Guninski wrote:
FYI:
This is on libressl-dev:
http://article.gmane.org/gmane.comp.encryption.libressl/74 http://news.gmane.org/gmane.comp.encryption.libressl
From libressl's commits (modulo me being MITMed):
https://github.com/libressl-portable/portable/commit/105c86f3ed1508e9bb55ea3...
first round of 2.3.0 release notes [line 52]

    + Thanks for <censored> for
    + mentioning the possibility of a weak (non prime) q value and
    + providing a test case.
    +
    + See
    + https://cpunks.org/pipermail/cypherpunks/2015-September/009007.html
    + for a longer discussion.
On Sat, Sep 05, 2015 at 08:28:03AM +0300, Georgi Guninski wrote:
This works with openssl 1.0.1p over SSL.
Attached is self signed cert and the priv. key.
Session: ./apps/openssl s_server -accept 8080 -cert ./cacert2.pem -key ./key-comp2.key -HTTP
openssl s_client -connect localhost:8080
Server public key is 1204 bit
Verify return code: 18 (self signed certificate)

sage: q=0x008000000000000000001d8000000000000000012b
sage: factor(q)
604462909807314587353111 * 1208925819614629174706189
Troll friendly :)))) This appears to work on libressl-2.2.3 too. Independent verification would be appreciated. Hi Theo :P -- georgi
Blogged about this:
https://j.ludost.net/blog/archives/2015/09/05/rfc-2631_fips_186-3_and_openss...

Is there a better forum for this, some crypto list for noobs?

I have reliable key generation now, but even the current key is weak enough IMHO (it is about O(2^40)).
Metzdowd & randombit's respective crypto mailing lists, crypto practicum (smaller), and reddit's /r/crypto forum (I'm a mod there). They're all open to noobs who are willing to learn (but keep in mind that staying on topic and succinct is a bit more important on the mailing lists; in particular, metzdowd applies premoderation with formatting requirements).

- Sent from my tablet

On 5 Sep 2015 10:05, "Georgi Guninski" <guninski@guninski.com> wrote:
Blogged about this:
https://j.ludost.net/blog/archives/2015/09/05/rfc-2631_fips_186-3_and_openss...
Is there better forum for this, some crypto list for noobs?
Have reliable key generation, but even the current key is weak enough IMHO (it is about O(2^40) ).
On Sat, Sep 05, 2015 at 10:17:45AM +0200, Natanael wrote:
Metzdowd & randombit's respective crypto mailing lists, crypto practicum (smaller), reddit's /r/crypto forum (I'm a mod there). They're all open to noobs that are willing to learn (but keep in mind that staying on topic and succinct is a bit more important on the mailing lists, in particular metzdowd apply premoderation with formatting requirements).
- Sent from my tablet
Thanks. Maybe will spam some of these later. If someone spams before me, please let me know.
participants (4)
- Alfonso De Gregorio
- Georgi Guninski
- Natanael
- Peter Gutmann