CDR: Re: Think cash

Greg Broiles gbroiles at netbox.com
Wed Oct 11 17:29:22 PDT 2000


At 12:59 PM 10/11/00 -0400, Marcel Popescu wrote:
>
>An interesting idea has surfaced on the freenet-chat list: is it possible to
>build a program that creates some sort of a puzzle, whose answer the
>generating computer knows (and can verify), but which can only be answered
>by a human being, not by a computer? [Additional requirement: it should be
>easy for the human to answer the puzzle.]
>
>My proposal was to randomly create an image, which should be 1) easily
>recognizable by a human (say the image of a pet), but 2) complex enough so
>that no known algorithm could "reverse-engineer" this. [You need a
>randomly-generated image because otherwise one could build a large database
>of all the possible images and the correct answers.] Background information
>would also be very useful - see
>http://www.digitalblasphemy.com/userg/images/969403123.shtml - it's easy for
>a human being to identify the animal in the picture, but (AFAIK) impossible
>to write a program to do the same thing.

I don't follow the other list you mentioned, so I don't know what the 
actual problem is - my guess is that this is an anti-bot protection 
measure, intended to make sure that only human participants can engage 
in a conversation.

If that's the problem - or something similar - you'll also need to make 
the puzzle difficult enough that it's hard to brute-force or solve 
statistically. Let's say you provide the other party with 20 images, 
19 cats and 1 dog, and ask them to identify the dog.
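The arithmetic for that hypothetical 20-image puzzle: a blind guess succeeds 1 time in 20, so over k fresh puzzles the chance of at least one hit is 1 - (19/20)^k, which passes even odds after about 14 tries:

```python
# Odds for the hypothetical 20-image puzzle: 1-in-20 per blind guess,
# so the chance of at least one hit in `attempts` independent tries
# is 1 - (19/20)**attempts.

def hit_probability(attempts, choices=20):
    return 1 - ((choices - 1) / choices) ** attempts

print(round(hit_probability(1), 3))    # 0.05
print(round(hit_probability(14), 3))   # 0.512 -- past even odds
```

This is why the puzzle alone isn't enough: a bot that can retry freely wins in a handful of attempts.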

What keeps a bot from answering the question 20 times? Let's assume the 
first arms-race countermeasure prevents answering the same puzzle more 
than once by generating puzzles on-the-fly from known cat and dog images 
- so the bot just picks an answer at random, and keeps doing that until 
it hits.
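The obvious counter-countermeasure is to throttle guesses per client, so random answering stops being cheap. A sketch under stated assumptions - the client identifiers, the 3-per-hour limit, and the window length are all illustrative, not a real protocol:

```python
import time
from collections import defaultdict

# Illustrative rate-limiting sketch: cap failed puzzle attempts per
# client per hour. MAX_ATTEMPTS and WINDOW_SECONDS are assumptions.

MAX_ATTEMPTS = 3
WINDOW_SECONDS = 3600

attempts = defaultdict(list)  # client id -> timestamps of failed guesses

def may_attempt(client_id, now=None):
    """Allow a new attempt only if recent failures are under the cap."""
    now = now if now is not None else time.time()
    recent = [t for t in attempts[client_id] if now - t < WINDOW_SECONDS]
    attempts[client_id] = recent
    return len(recent) < MAX_ATTEMPTS

def record_failure(client_id, now=None):
    attempts[client_id].append(now if now is not None else time.time())
```

With a 1-in-20 puzzle and 3 guesses per hour, a bot needs on the order of 20 tries - several hours per break-in instead of milliseconds - and repeated lockouts are themselves a bot signal.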

Can God create a rock so big he can't lift it?

I think you're barking up the wrong tree thinking about "known 
algorithms" and such - just as with crypto, the real way in isn't to 
attack the strong front door, but to go around it.

This sounds like it's essentially a credentialing/ID problem, where 
you're generating credentials on the fly based on a short-form Turing 
test. Can you restate the problem so that instead of a Turing test it's 
a more familiar multi-channel authentication process? (E.g., require 
new participants to have "introductions" from existing participants, 
track introductions, and remove access for accounts found to be bots, 
or found to have introduced bots ... or similar.)
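The introduction-tracking idea can be sketched as a sponsorship graph: every account records who vouched for it, and banning a bot also revokes everyone it introduced, transitively. The names and structure below are assumptions for illustration only:

```python
# Illustrative sketch of introduction tracking: each account records
# its sponsor, and banning an account cascades to everyone it
# (transitively) introduced. All names here are hypothetical.

introduced_by = {}   # account -> sponsor account (None for founders)

def introduce(sponsor, newcomer):
    introduced_by[newcomer] = sponsor

def ban(account):
    """Remove an account and, transitively, everyone it introduced."""
    banned = {account}
    changed = True
    while changed:                       # propagate until no new bans
        changed = False
        for member, sponsor in introduced_by.items():
            if sponsor in banned and member not in banned:
                banned.add(member)
                changed = True
    for member in banned:
        introduced_by.pop(member, None)  # revoke access
    return banned
```

The cascade is what gives sponsors an incentive to be careful: introducing a bot costs you your own introducees' standing, not just the bot's account.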

--
Greg Broiles
gbroiles at netbox.com

More information about the cypherpunks-legacy mailing list