Visa, HNC Inc. develop neural network as a weapon to fight fraud
Extracted from "FC NEWSBYTES 1.3", David Geddes <dgeddes@NETCOM.COM> Editor, where FC = FutureCulture mailing list <FUTUREC-request@UAFSYSB.UARK.EDU>. strick ____________________________________________ _ _.......... B Y T E 4: Visa, HNC Inc. develop neural network as a weapon to fight fraud SAN FRANCISCO (AUG. 10) PR NEWSWIRE - Visa International and HNC Inc. have announced a strategic agreement to develop a comprehensive merchant risk detection system. The new system will be designed to better control fraud at the merchant level by determining the risk associated with individual card transactions. This agreement continues to support Visa International's active role in developing effective solutions to the problem of fraud occurring at the point of sale. The merchant risk detection system will be available in 1994. "Visa has combined its core systems capabilities and the premier technology available -- neural networks -- for fighting credit card fraud," explained Roger Peirce, Visa International's executive vice president, Delivery Systems. "HNC, an industry leader in neural network applications and credit card control services, is a logical partner for Visa," he added. Michael A. Thiemann, HNC's executive vice president, called the agreement "another example of our commitment to solving tough business problems through the application of cutting-edge technologies." Neural network technology enables a system to predict the probability of fraud by learning from a large number of past transactions, both legitimate and fraudulent. By using neural networks to its full extent, Visa will be able to provide superior risk analysis for its members. In combating credit and debit card fraud, Visa already has developed several programs utilizing information gained from neural network research. Worldwide implementation of the International Points-of- Compromise (IPOC) program has proved highly effective for identifying merchant locations that may be selling or giving account information to counterfeiters. Another successful program, called the Central Deposit Monitoring (CDM) program, matches merchant activity with sales draft laundering characteristics and identifies unusual merchant deposits. In addition, close cooperation with law enforcement agencies and legislatures enhances the value of the programs which, in turn, allow Visa members to pass on the protection to its cardholders and merchants. The planned Visa-HNC merchant risk detection system is designed to further reduce fraud losses by assigning a risk score to each authorization transaction processed through the VisaNet systems. "With this new system, members will be better able to assess risk at the point- of-transaction and, therefore, make more informed authorization decisions," confirmed Peirce. According to The Nilson Report, merchant fraud worldwide cost the financial industry an estimated US $689 million in 1992. HNC will integrate the risk score into Falcon(TM), their existing, real-time credit card fraud-detection system that runs at card issuer sites to identify and prevent a wide range of fraud at the cardholder level. It determines the probability of fraud on each credit card authorization by comparing it to the cardholder's purchase patterns and the latest trends in credit card fraud. Introduced in September 1992, Falcon has already achieved success in reducing fraud losses of major credit card issuers. 
HNC Inc., the world's leader in the application of neural networks, develops, sells, integrates and supports advanced decision solutions based on neural network and statistical technology. HNC provides practical products and services to the financial, credit card, debit card, merchant services, insurance, mortgage underwriting, retail and direct marketing industries.

Visa is the leading consumer payment system in the world, with more than 10.4 million acceptance locations, the largest global ATM network and 309 million cards issued worldwide.

-0- 8/10/93
/CONTACT: Gail Murayama of Visa International, 415-570-3645; or Ken Jones of HNC Inc., 619-546-8877

____________________________________________ _ _..........
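The release gives no technical detail about the Visa/HNC system or Falcon, so the following is only a minimal sketch of the general idea it describes: fit a model to past transactions labelled legitimate or fraudulent, then have it assign a fraud-risk score to each new authorization. The single logistic unit, the features and the numbers are all invented for illustration and do not reflect the actual products.

    # A minimal, invented illustration of the scheme the press release describes:
    # learn from past transactions (labelled legitimate or fraudulent), then
    # assign a risk score to each new authorization.  Nothing here reflects the
    # actual Visa/HNC or Falcon models, which are proprietary.

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def train(features, labels, epochs=2000, lr=0.1):
        """Fit a single logistic unit (the simplest 'neural network') to
        labelled past transactions by gradient descent on the log-loss."""
        w = [0.0] * len(features[0])
        b = 0.0
        for _ in range(epochs):
            for x, y in zip(features, labels):
                p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
                err = p - y                                   # log-loss gradient
                w = [wi - lr * err * xi for wi, xi in zip(w, x)]
                b -= lr * err
        return w, b

    def risk_score(w, b, x):
        """Estimated probability of fraud for one authorization request."""
        return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

    # Hypothetical features per transaction: [amount / $1000, foreign merchant?, night-time?]
    history = [([0.02, 0, 0], 0), ([0.05, 0, 1], 0), ([0.03, 1, 0], 0),
               ([0.90, 1, 1], 1), ([0.75, 1, 0], 1), ([0.95, 0, 1], 1)]
    w, b = train([x for x, _ in history], [y for _, y in history])

    print("small local purchase  :", round(risk_score(w, b, [0.04, 0, 0]), 3))
    print("large foreign purchase:", round(risk_score(w, b, [0.85, 1, 1]), 3))

Even this toy version shows the appeal: once trained, scoring a new authorization is a single cheap calculation that can run on every transaction in real time.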
Date: Wed, 25 Aug 93 08:37:07 -0700
From: strick -- henry strickland <strick@versant.com>

Extracted from "FC NEWSBYTES 1.3", David Geddes <dgeddes@NETCOM.COM> Editor, where FC = FutureCulture mailing list <FUTUREC-request@UAFSYSB.UARK.EDU>.
    strick

____________________________________________ _ _..........

B Y T E 4: Visa, HNC Inc. develop neural network as a weapon to fight fraud

SAN FRANCISCO (AUG. 10) PR NEWSWIRE - Visa International and HNC Inc. have announced a strategic agreement to develop a comprehensive merchant risk detection system. The new system will be designed to better control fraud at the merchant level by determining the risk associated with individual card transactions.

For those who are not familiar with the details of neural networks, I thought I would point out that this represents a departure from the current notion of a credit rating in two ways:

1) There is no clear way to fix your "neural credit rating" if there is a problem. The neural network program will give only its guess as to the probability that a transaction is fraudulent. If you are a cardholder and it predicts that a transaction is likely to be fraudulent, then your purchase won't be accepted. But, unlike conventional credit reporting firms, which use a credit report, the neural network cannot explain anything about how it came to its decision. With existing credit reporting schemes, you at least have the option of acquiring your credit report and taking the necessary steps to repair your credit rating if there is a problem. With neural networks, this is no longer possible. Given the current state of neural network research, a percentage of the rejections will be false, which means that some card users will be denied service for no other reason than the fact that neural networks make mistakes.

2) You are no longer judged on your own actions, but on the similarity of your purchasing patterns to those of people who have committed fraudulent acts. Instead of being judged on your trustworthiness based on your past actions, you will be judged on whether people whose purchasing profiles are similar to yours are trustworthy. An example of how this can be a problem: say you purchase a particular CD, and the neural network decides, partly based on this and partly on other information, that you won't pay your bill, because most of the people in the database who bought that CD didn't pay their bills.

Andy
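To make point 1 concrete, here is a small illustration with invented weights, features and threshold: all such a scorer hands back is a number, so a cardholder declined by a false positive has nothing specific to inspect or correct, unlike a line item on a credit report.

    # Invented weights and threshold, illustrating point 1 above: the scorer
    # returns a single probability and keeps no human-readable reason, so a
    # falsely declined cardholder has nothing concrete to dispute.
    import math

    WEIGHTS = [3.1, 1.4, 0.8]     # hypothetical parameters learned offline
    BIAS = -2.0
    DECLINE_ABOVE = 0.5           # hypothetical operating threshold

    def fraud_risk(features):
        z = sum(w * f for w, f in zip(WEIGHTS, features)) + BIAS
        return 1.0 / (1.0 + math.exp(-z))

    # A legitimate but unusual purchase (say, a large amount at a foreign
    # merchant while travelling) can score above the threshold anyway.
    unusual_but_legitimate = [0.9, 1, 0]   # [amount / $1000, foreign?, night?]
    p = fraud_risk(unusual_but_legitimate)
    print(round(p, 3), "->", "DECLINE" if p > DECLINE_ABOVE else "APPROVE")
    # The output is the decision and the score: there is no list of reasons
    # the cardholder (or anyone else) could use to "repair" their standing.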
In article <9308252001.AA14017@custard.think.com>, Andy Wilson <ajw@Think.COM> wrote:
: [mostly bogus stuff]

That is irrelevant to cypherpunks, as I understand the list.

There is no technology, including that of privacy, that cannot be used for ill. We don't know how they're going to be using the neural network. They could, as was suggested, abandon their minds and rely on the neural net. I don't think they will, because doing so would be a really bad business decision.

Furthermore, on the evidence, the neural network output will only be used as one datum in a process involving many inputs, with a human making the final decision.

Finally, in the examples I'm familiar with (from reading AI Expert), when a neural net is used as a decision element, precisely because of its error rate, the decision isn't "go/no go" but "go/refer the problem to a human".
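The "go/refer the problem to a human" arrangement described above amounts to a one-line routing rule. The threshold below is invented, but the shape of the logic is the point: the score by itself never declines anything; it only decides whether an analyst takes a look.

    # Invented threshold; a sketch of the "go / refer to a human" routing
    # described above, where the score alone never declines a transaction.
    def route(fraud_risk, refer_above=0.3):
        """Return 'APPROVE' or 'REFER TO ANALYST' for one authorization."""
        return "REFER TO ANALYST" if fraud_risk > refer_above else "APPROVE"

    for risk in (0.05, 0.25, 0.80):
        print(f"risk={risk:.2f} -> {route(risk)}")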
From: bill@twwells.com (T. William Wells)
Date: Wed, 25 Aug 1993 21:04:05 GMT

: In article <9308252001.AA14017@custard.think.com>, Andy Wilson <ajw@Think.COM> wrote:
: : [mostly bogus stuff]
:
: That is irrelevant to cypherpunks, as I understand the list.

The prospect of the impossibility of anonymity, and the uses to which personal information is put in a cashless economy, is not relevant? I beg to differ. This is exactly what digital cash is meant to prevent.

: There is no technology, including that of privacy, that cannot be used
: for ill. We don't know how they're going to be using the neural
: network. They could, as was suggested, abandon their minds and rely on
: the neural net. I don't think they will, because doing so would be a
: really bad business decision.
:
: Furthermore, on the evidence, the neural network output will only be
: used as one datum in a process involving many inputs, with a human
: making the final decision.
:
: Finally, in the examples I'm familiar with (from reading AI Expert),
: when a neural net is used as a decision element, precisely because of
: its error rate, the decision isn't "go/no go" but "go/refer the
: problem to a human".

The problem with referring a neural network's decision to a human is that the neural network gives no information other than the probability of fraud. It does not tell the human why it determined the transaction was likely to be fraudulent, as a system based on rules or case-based reasoning would be able to do. There is no good way to combine the judgement of the neural net with that of a human for that reason.

With respect, I have found AI Expert to consist more of marketing hype than of correct and useful information on artificial intelligence technology.

Andy
In article <9308252250.AA17771@custard.think.com>, Andy Wilson <ajw@Think.COM> wrote:
: Andy Wilson <ajw@Think.COM> wrote:
: : [mostly bogus stuff]
:
: That is irrelevant to cypherpunks, as I understand the list.
:
: The prospect of the impossibility of anonymity and the uses
: to which personal information is put in a cashless economy is
: not relevant?

But that wasn't what you were writing about. You were writing about bad business decisions, not violations of privacy. For that matter, your notions on neural networks seem contradictory: on the one hand, you complain about a violation of privacy, and on the other you complain that a neural network won't tell you how it reached its conclusions!

: I beg to differ. This is exactly what digital
: cash is meant to prevent.

Digital cash and the use of neural networks to authenticate transactions are essentially orthogonal issues.

: The problem with referring a neural network's decision to a human
: is that the neural network gives no information other than the
: probability of fraud.

1) This statement is false. It is true of some neural networks, but not all. We have no way of knowing whether their neural network is among those.

2) A problem with *any* decision system is that people may place an unsupportable weight on some particular piece of evidence. Your "problem" is not that (some) neural networks give answers that can't be interpreted but that some people will use their answers in an inappropriate way. Blaming neural networks for bad *human* decision making is just plain silly.

: There is not any
: good way to combine the judgement of the neural net with that of a
: human for that reason.

Nonsense, as the existence of rule-based systems that incorporate neural networks shows.

: With respect, I have found AI Expert to consist more of marketing
: hype than of correct and useful information on artificial intelligence
: technology.

Oh, goodie, an ad hominem argument. But, as it happens, it is because AI Expert is so commercially oriented that it is an appropriate reference. It speaks to how, and why, AI gets deployed in business, and that makes it just the right place to go.
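The kind of hybrid Wells alludes to can be sketched as a rule-based wrapper in which the network's score is one input among several. The rules and thresholds below are invented, but they illustrate how the surrounding rules, unlike the score itself, can report which condition drove the outcome.

    # Invented rules and thresholds; a sketch of a rule-based wrapper in which
    # the network's score is one datum among several, and the rule that fired
    # is reported so a human reviewer can see why.
    def authorize(txn):
        """txn: dict with keys amount, over_limit, reported_stolen, nn_risk."""
        if txn["reported_stolen"]:
            return "DECLINE", "card reported stolen"
        if txn["over_limit"]:
            return "REFER", "account over its credit limit"
        if txn["nn_risk"] > 0.8 and txn["amount"] > 500:
            return "REFER", "high network risk score on a large purchase"
        return "APPROVE", "no rule triggered"

    decision, reason = authorize({"amount": 900, "over_limit": False,
                                  "reported_stolen": False, "nn_risk": 0.86})
    print(decision, "-", reason)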
From: bill@twwells.com (T. William Wells)
Date: Thu, 26 Aug 1993 01:21:03 GMT

[...]

: But that wasn't what you were writing about. You were writing about
: bad business decisions, not violations of privacy.

No, you were writing about bad business decisions. I was providing a few details on how credit/charge-card information is used in this process and a few potential problems resulting from it.

: For that matter, your notions on neural networks seem contradictory:
: on the one hand, you complain about a violation of privacy, and on the
: other you complain that a neural network won't tell you how it reached
: its conclusions!

You are deliberately confusing two different points: 1) the fact that neural networks do not provide useful explanations of how they arrived at a particular decision, and 2) some potential problems, concerning privacy, that arise from this fact.

: : I beg to differ. This is exactly what digital
: : cash is meant to prevent.
:
: Digital cash and the use of neural networks to authenticate
: transactions are essentially orthogonal issues.

I will reiterate that the whole point of digital cash is to provide anonymity, which will prevent these kinds of uses of personal information that are not made with the explicit approval of the person involved.

: : The problem with referring a neural network's decision to a human
: : is that the neural network gives no information other than the
: : probability of fraud.
:
: 1) This statement is false. It is true of some neural networks, but
: not all. We have no way of knowing whether their neural network is
: among those.

It is true of all commercial applications of neural networks to my knowledge, and certainly true of the neural networks developed by Hecht-Nielsen.

: : There is not any
: : good way to combine the judgement of the neural net with that of a
: : human for that reason.
:
: Nonsense, as the existence of rule-based systems that incorporate
: neural networks shows.

That shows no such thing. The only way to combine the judgement of a neural network with that of a rule-based system, or anything else, is to see whether both arrive at the same conclusion. You cannot see the reasoning process of the neural network to help the human understand why it made the judgement that it did, the marketing hype of neural network vendors notwithstanding.

This is my last post on this thread.

Andy
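For readers who have not seen why digital cash is said to provide this anonymity: the usual mechanism is a Chaum-style blind signature, sketched below with textbook-sized RSA numbers chosen purely for illustration. The bank certifies a coin it never sees, so the coin a merchant later deposits cannot be linked back to the payer's withdrawal, and no purchase-pattern profile accumulates in the first place.

    # Toy Chaum-style blind signature, the mechanism usually behind anonymous
    # digital cash.  The RSA key is a textbook example (p=61, q=53) and is far
    # too small for real use; everything here is for illustration only.
    import math

    n, e, d = 3233, 17, 2753      # bank's toy RSA modulus, public and private exponents

    coin = 1234                    # serial number the payer chooses in private
    r = 71                         # payer's random blinding factor, coprime to n
    assert math.gcd(r, n) == 1

    blinded = (coin * pow(r, e, n)) % n          # payer blinds the coin
    bank_sig = pow(blinded, d, n)                # bank signs without seeing `coin`
    signature = (bank_sig * pow(r, -1, n)) % n   # payer unblinds (Python 3.8+)

    assert signature == pow(coin, d, n)          # identical to a direct signature
    assert pow(signature, e, n) == coin          # any merchant can verify the coin
    print("coin", coin, "is bank-certified, yet the bank never saw it being issued")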
participants (3)
- Andy Wilson
- bill@twwells.com
- strick -- henry strickland