People Prompting Consumer Language Models to Break Cryptographic Primitives

Undescribed Horrific Abuse, One Victim & Survivor of Many gmkarl at gmail.com
Sat Dec 3 01:36:25 PST 2022


In https://twitter.com/EarlenceF/status/1598497634763038721 , the
poster prompts a language model to factor a 39-digit number, and it
responds confidently with a believable answer. Others reply with
similar use cases.
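
One does not have to trust such an answer: multiplying the claimed
factors back together is a one-line check. A minimal sketch in
Python (the numbers below are hypothetical placeholders, not the
ones from the tweet):

    def check_factorization(n, claimed_factors):
        # A claimed factorization is valid only if every factor is
        # nontrivial and the product equals the original number.
        product = 1
        for f in claimed_factors:
            if f <= 1:
                return False
            product *= f
        return product == n

    # A correct claim and a "believable but wrong" claim:
    print(check_factorization(2021, [43, 47]))  # True:  43 * 47 == 2021
    print(check_factorization(2021, [41, 49]))  # False: 41 * 49 == 2009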

It’s frightening because I rely on software that uses RSA.

Coderman posted a paper on machine learning in cryptography to this
list some time ago. Sadly, I did not read it.

These models use a structure capable of inferring nearly any
algorithm, but they have not been trained for cryptographic uses.

However, one can infer that, if there is some unknown way to reverse
a primitive, it should be possible to train a model to discover it,
somehow. Public research is also far ahead of the capabilities of
consumer models.
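
To make that concrete, here is a toy sketch of my own (not from the
thread, and not a neural network): for a deliberately weak
"primitive", a single-byte XOR, even a lookup table built from
observed plaintext/ciphertext pairs learns the inverse mapping.
Strong primitives are designed so that no learner, statistical or
neural, finds such a shortcut without the key.

    import os

    KEY = 0x5A  # the secret; never shown to the learner

    def weak_encrypt(data):
        # A deliberately weak "primitive": single-byte XOR.
        return bytes(b ^ KEY for b in data)

    def learn_inverse(pairs):
        # Build a ciphertext-byte -> plaintext-byte table from
        # observed (plaintext, ciphertext) pairs.
        table = {}
        for plaintext, ciphertext in pairs:
            for p, c in zip(plaintext, ciphertext):
                table[c] = p
        return table

    # Training data: random plaintexts and their encryptions.
    training = []
    for _ in range(200):
        pt = os.urandom(32)
        training.append((pt, weak_encrypt(pt)))

    table = learn_inverse(training)

    # Invert a fresh ciphertext the learner has never seen.
    ct = weak_encrypt(b"attack at dawn")
    print(bytes(table.get(c, 0) for c in ct))  # b'attack at dawn'

The point is only that reversing a primitive becomes a learning
problem when the primitive leaks structure; consumer language models
are simply not trained on data of this shape.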

Hence, primitives designed with such automated-discovery attacks in
mind have value.

For me, I still struggle to move my body and use my mind, or I would
look at the problem domain more: to find a personal cryptography I
might trust more, or to find how current primitives might be weak.

But maybe what is exciting here is that, as tools comparable to AGI
become more commonplace, many, many more people will be working on
problems like these, if they use the tools in untethered ways.

