   From: bill@twwells.com (T. William Wells)
   Date: Wed, 25 Aug 1993 21:04:05 GMT

   In article <9308252001.AA14017@custard.think.com>,
   Andy Wilson <ajw@Think.COM> wrote:
   : [mostly bogus stuff]

   That is irrelevant to cypherpunks, as I understand the list.

The prospect of the impossibility of anonymity, and of the uses to
which personal information is put in a cashless economy, is not
relevant?  I beg to differ.  This is exactly what digital cash is
meant to prevent.

   There is no technology, including that of privacy, that cannot be
   used for ill.

   We don't know how they're going to be using the neural network.
   They could, as was suggested, abandon their minds and rely on the
   neural net.  I don't think they will, because doing so would be a
   really bad business decision.

   Furthermore, on the evidence, the neural network output will only
   be used as one datum in a process involving many inputs and a
   human making the final decision.

   Finally, in the examples I'm familiar with (from reading AI
   Expert), when a neural net is used as a decision element,
   precisely because of its error rate, the decision isn't "go/no go"
   but "go/refer the problem to a human".

The problem with referring a neural network's decision to a human is
that the neural network gives no information other than a probability
of fraud.  It does not tell the human why it determined the
transaction was likely to be fraudulent, the way a system based on
rules or case-based reasoning could.  For that reason there is no
good way to combine the judgement of the neural net with that of a
human.

With respect, I have found AI Expert to consist more of marketing
hype than of correct and useful information on artificial
intelligence technology.

Andy
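
To make the "go/refer the problem to a human" arrangement concrete,
here is a minimal sketch in Python; the function name, the threshold,
and the extra flags are hypothetical, not anything a card issuer is
known to run.

# A minimal sketch of the "go / refer the problem to a human" policy
# described above.  Everything here is hypothetical: the score comes
# from some fraud model, the threshold and the extra flags are made
# up, and no actual issuer's process is being described.

def decide(fraud_score, other_flags, refer_threshold=0.8):
    """Return 'approve' or 'refer to human'; never an outright 'deny'.

    Because the model has a nontrivial error rate, a high score does
    not decline the transaction by itself; it only escalates the case
    to a person, along with whatever other evidence was collected.
    """
    if fraud_score >= refer_threshold or other_flags:
        return "refer to human"
    return "approve"

print(decide(0.93, []))                    # refer to human
print(decide(0.12, ["address mismatch"]))  # refer to human
print(decide(0.12, []))                    # approve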
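
And a minimal sketch of the objection that follows: the network hands
the reviewer only a score, while a rule-based check can also hand
over its reasons.  The transaction fields, the rules, and the score
below are invented for illustration.

# A stand-in "network" that returns a bare probability, contrasted
# with a rule-based check that returns human-readable reasons.

def net_score(txn):
    # Stand-in for a trained network: a number, with no rationale.
    return 0.91

def rule_check(txn):
    reasons = []
    if txn["amount"] > 10 * txn["avg_amount"]:
        reasons.append("amount far above this card's average")
    if txn["country"] != txn["home_country"]:
        reasons.append("purchase outside the cardholder's home country")
    if txn["merchant_category"] in ("jewelry", "electronics"):
        reasons.append("high-resale-value merchant category")
    return reasons

txn = {"amount": 2400.0, "avg_amount": 85.0, "country": "TH",
       "home_country": "US", "merchant_category": "jewelry"}

print(net_score(txn))      # 0.91 -- and nothing else for the reviewer
for r in rule_check(txn):  # the reviewer can see why it was flagged
    print("-", r)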