the list fool here chiming in... i just wonder if there is a boundary condition, built into the protocols themselves, that makes passwords insecure, and whether modifying those protocols could alter the probabilities of easy dictionary attacks. for instance, if A-Z plus a few special characters from a US keyboard is the majority of what is being automatically cracked, perhaps that is not surprising.

yet what if the password's 'length' were not the issue, such that even a 20-character string (several numbers and words with intermixed special characters) could still be successfully attacked, given those limited parameters. the usual view, more characters, longer string, etc., may be mistaken. i also wonder what 'dictionaries' are referenced, because if they are mapped to the normal words of a given language, plus special symbols, yet held within the walls or boundaries of a particular alphabet or sign-system, then perhaps language-to-language the password-cracking situation remains basically the same.

yet what if the Unicode barrier (if that is what it is) were dissolved, such that many languages could co-exist, say a dozen different N's, ligatures, and other special characters. for example, a normal bounded password in a given character set, with special characters:

  th3r0uT33Nw4!z+3sezU3

compare this with a multi-alphabet approach:

  п世לកóવિz

what are the computational probabilities of searching across all ascii/unicode characters and getting that pattern matched, versus, say, [35] options per password character in a constrained alphabet? for the 21-character example the calculation would be something like:

  [35][35][35][35][35]...[35]  (21 slots)

versus, wildly approximating, [1,000] options per slot for the 'multiple language Unicode characters and special symbols' approach. 35 multiplied by itself 21 times comes to roughly 2.7e+32, whereas only 11 units at [1,000] options each already gives 1.0e+33, more than the full 21-character constrained case. and it could be fewer units still if Chinese, Arabic, and other scripts were accessible. in other words, a hypothetical [10,000] symbols could be made available per password unit, rather than the [35] of the highly restricted, easy-to-crack approach, and that would change how such passwords are created, stored, and exist, especially in a highly constrained OS and peripheral environment that blocks cross-pollination of such key typographic information and thereby fixes the basic dynamics and data behavior.

this is the slot-machine approach to [p|a|s|s|w|o|r|d|s], because at a certain point anything could tally...

  [*|*|x|ம்|7|*|*|#|ன்]

what is more, each _space_ could hold any sign or symbol, icons and special characters included, so that what today may appear as "junk" information, merely infrastructural for multilanguage computing, could be utilized beyond the language boundary for its signage, for passwords & security. in this way the three-unit password (an icon functioning as one more symbol)

  ['icon'][પ્રે][ю]

at 10,000+ options per unit would already match the keyspace of a considerably longer restricted-alphabet password ([10,000] cubed is 1.0e+12, roughly an 8-character password over [35] symbols).
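a minimal sketch in python, just to make the arithmetic above concrete - the pool sizes (35, 1,000, 10,000) are the rough guesses from this post, not measured alphabet counts, and the function names are my own:

import math

def keyspace(pool, length):
    # total equally-likely passwords if each slot is drawn uniformly from 'pool' symbols
    return pool ** length

def bits(pool, length):
    # the same quantity expressed as entropy in bits
    return length * math.log2(pool)

print(f"{keyspace(35, 21):.3e}  {bits(35, 21):.1f} bits")      # ~2.663e+32, ~107.7 bits
print(f"{keyspace(1000, 11):.3e}  {bits(1000, 11):.1f} bits")  # 1.000e+33, ~109.6 bits
print(f"{keyspace(10000, 3):.3e}  {bits(10000, 3):.1f} bits")  # 1.000e+12, ~39.9 bits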
yet the model itself could extend, such that a password is constructed as a string of units where, like a slot machine, the units could instead be 'words' (as with the existing word1&word2 passphrase approach)...

  ['word1'][પ્રે][ю]['gps-coords']

if viewed only in bounded terms and serial computation, a long *predictable* string may be easier to attack than a shorter unpredictable string that takes massive resources to churn through the possibilities - and who knows, maybe the password has a time-cycle that automatically changes its nature during that period of calculation. surface tension, iridescence, spinner
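and a quick sketch of that 'units as slots' idea, with invented pool sizes purely for illustration (a real wordlist or coordinate grid could be larger or smaller):

import math

# each slot draws from its own pool; the total keyspace is the product of the pool sizes
slots = {
    "dictionary word": 50_000,      # assumed wordlist size
    "multi-script symbol": 10_000,  # assumed usable unicode signs
    "gps coordinate": 1_000_000,    # assumed grid resolution
}

total = math.prod(slots.values())
print(f"{total:.1e} possible passwords, {math.log2(total):.1f} bits")
# ~5.0e+14, ~48.8 bits, assuming the attacker knows the structure and each slot is chosen uniformly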