towards a theory of reputation
Many of the topics discussed on this list are economic in nature. Unfortunately cypherpunks haven't attracted the attention of professional economists who might be willing to apply their analytic tools to these issues. Reputation is one of these issues that is especially important. I'm not an economist, so I hope these ramblings do not discourage real economists from tackling reputation as a serious research project.

The first step toward a theory of reputation is defining what reputation is. The definition should correspond closely enough to our common-sense notion of reputation so that our intuitions about it are not completely useless. I think a good definition is this: Alice's reputation of Bob is her expectation of the results of future interactions with Bob. If these interactions are mainly economic in nature, then we can represent Alice's reputation of Bob by a graph with the horizontal axis labeled price and the vertical axis labeled expected utility. A point (x, y) on the graph means that Alice expects to get y utils in a business transaction where she pays Bob x dollars. Given this definition, it is natural to say that Bob's reputation is the set of all other people's reputations of Bob.

A reputation system consists of a set of entities, each of whom has a reputation and a method by which he changes his reputation of others. I believe the most important question for a theory of reputation to answer is what is a good method (reputation algorithm) by which a person changes his reputation of others. A good reputation algorithm must serve his self-interest; it must not be (too) costly to evaluate; its results must be stable; and a reputation system where most people use the algorithm must be stable (i.e., the reputation system must be an evolutionarily stable system).

In a reputation-based market, each entity's reputation has three values.
First is the present value of expected future profits, given the reputation (let's call it the operating value). Note that the entity's reputation allows him to make positive economic profits, because it makes him a price-maker to some extent. Second is the profit he could make if he threw away his reputation by cheating all of his customers (the throw-away value). Third is the expected cost of recreating an equivalent reputation if he threw away his current one (the replacement cost).

Now it is clear that if a reputation's throw-away value ever exceeds its operating value or replacement cost, its owner will, in self-interest, throw away his reputation by cheating his customers. In a stable reputation system, this should happen very infrequently. This property may be difficult to achieve, however, because only the reputation's owner knows what its values are, and they may fluctuate widely. For example, the operating value may suddenly decrease when his competitor announces a major price cut, or the replacement cost may suddenly decrease when he succeeds in subverting a respected reputation agency.

One way to answer some of these questions may be to create a model of a reputation system with a simple reputation algorithm and a simplified market, and determine by analysis or simulation whether it has the desirable properties. I hope someone who has an economist friend can persuade him to do this.

Wei Dai
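The cheat-or-operate decision above can be sketched in a few lines of Python. The discount rate, the profit figures, and the function names here are illustrative assumptions, not anything from the post; the only claim taken from the text is the decision rule itself (cheat when the throw-away value exceeds either the operating value or the replacement cost):

```python
def present_value(per_period_profit, discount_rate, periods):
    """Discounted sum of a constant per-period profit stream
    (a simple stand-in for the 'operating value')."""
    return sum(per_period_profit / (1 + discount_rate) ** t
               for t in range(1, periods + 1))

def should_cheat(operating_value, throw_away_value, replacement_cost):
    """The owner cheats when the one-shot gain from defecting exceeds
    what the reputation is worth to keep, or what it would cost to
    rebuild -- whichever is smaller."""
    return throw_away_value > min(operating_value, replacement_cost)

# Hypothetical numbers: a vendor earning 10 utils/period for 20 periods
operating = present_value(10, 0.05, 20)   # roughly 124.6

print(should_cheat(operating, throw_away_value=50, replacement_cost=80))  # False
print(should_cheat(operating, throw_away_value=90, replacement_cost=80))  # True
```

Note that the second case triggers cheating even though the throw-away value is below the operating value: it pays to cheat and then rebuild, which is exactly the replacement-cost leak the post worries about.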
Wei Dai <weidai@eskimo.com> writes:
The first step toward a theory of reputation is defining what reputation is. The definition should correspond closely enough to our common sense notion of reputation so that our intuitions about it are not completely useless. I think a good definition is this: Alice's reputation of Bob is her expectation of the results of future interactions with Bob. If these interactions are mainly economic in nature, then we can represent Alice's reputation of Bob by a graph with the horizontal axis labeled price and the vertical axis labeled expected utility. A point (x, y) on the graph means that Alice expects to get y utils in a business transaction where she pays Bob x dollars. Given this definition, it is natural to say that Bob's reputation is the set of all other people's reputations of Bob.
This is an interesting approach. However this seems to fold in issues of reliability with issues of quality and value. If I have a choice of two vendors, one of whom produces a product which is twice as good, but there is a 50% chance that he will abscond with my money, I am not sure how to value him compared with the other. It seems like the thrust of the analysis later is to determine whether people will in fact try to disappear. But that is not well captured IMO by an analysis which just ranks people in terms of "utility" for the price.
A reputation system consists of a set of entities, each of whom has a reputation and a method by which he changes his reputation of others. I believe the most important question for a theory of reputation to answer is what is a good method (reputation algorithm) by which a person changes his reputation of others. A good reputation algorithm must serve his self-interest; it must not be (too) costly to evaluate; its results must be stable; a reputation system where most people use the algorithm must be stable (i.e., the reputation system must be an evolutionarily stable system).
I am not sure about this last point. It seems to me that a good reputation is one which is most cost-effective for its owner. Whether it is good for social stability is not relevant to the person who is deciding whether to use it. ("But what if everyone behaved that way? How would you feel then?") It may be nice for the analyst but not for the participant.
In a reputation based market, each entity's reputation has three values. First is the present value of expected future profits, given the reputation (let's call it the operating value). Note that the entity's reputation allows him to make positive economic profits, because it makes him a price-maker to some extent. Second is the profit he could make if he threw away his reputation by cheating all of his customers (throw-away value). Third is the expected cost of recreating an equivalent reputation if he threw away his current one (replacement cost).
I don't really know what the first one means. There are a lot of different ways I can behave, which will have impact on my reputation, but also on my productivity, income, etc. There are other ways I can damage my reputation than by cheating, too. I can be sloppy or careless or just not work very hard. So the first two are really part of a continuum of various strategies I may apply in life. The second is pretty clear but the first seems to cover too wide a range to give it a value.
Now it is clear that if a reputation's throw-away value ever exceeds its operating value or replacement cost, its owner will, in self-interest, throw away his reputation by cheating his customers. In a stable reputation system, this should happen very infrequently. This property may be difficult to achieve, however, because only the reputation's owner knows what its values are, and they may fluctuate widely. For example the operating value may suddenly decrease when his competitor announces a major price cut, or the replacement cost may suddenly decrease when he succeeds subverting a respected reputation agency.
It would be useful to make some of the assumptions a bit clearer here. Is this a system in which cheating is unpunishable other than by loss of reputation, our classic anonymous marketplace? Even if so, there may be other considerations. For example, cheating may have costs, such as timing the various frauds so that people don't find out and extricate themselves from vulnerable situations before they can get stung. Also, as has been suggested here in the past, people may structure their interactions so that vulnerabilities to cheating are minimized, reducing the possible profits from that strategy.
One way to answer some of these questions may be to create a model of a reputation system with a simple reputation algorithm and a simplified market, and determine by analysis or simulation whether it has the desirable properties. I hope someone who has an economist friend can persuade him to do this.
It might be interesting to do something similar to Axelrod's Evolution of Cooperation, where (human-written) programs played the Prisoner's Dilemma against each other. In that game, programs had reputations in a sense, in that each program when it interacted with another remembered all their previous interactions, and chose its behavior accordingly. The PD is such a cut-throat game that it apparently didn't prove useful to try to create an elaborate reputation-updating model (at least in the first tournaments; I understand that in later versions some programs with slightly non-trivial complexity did well).

What you might want to do, for simplicity, is to have your universe consist of just one good (or service, or whatever), with some producers who all have the same ability, and some consumers, all with the same needs. Where they differ would be in their strategies for when to cheat, when to be honest, when to trust, and when to be careful. At any given time a consumer must choose which producer to buy from.

The details of their interaction would appear to greatly influence the importance of reputation. Maybe there could be a tradeoff where if the consumer is willing to pay in advance he gets a better price than if he will only provide cash on delivery. (Unfortunately it seems like the details of this tradeoff will basically determine the outcome of the experiment. However maybe some values will lead to interesting behavior.) Producers who want to cheat could do so by offering greater discounts for payment in advance, offering low prices in order to attract as many customers as possible before disappearing. Consumers might rightly be suspicious of an offer that looks too good.

Maybe it could be set up so consumers could cheat, too. No, I think that is too complicated. Then producers would have to know consumers' reputations and I think it gets muddy. Probably it would be simplest to just have producers have reputations.

Hal
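A toy version of this universe is small enough to run. Everything below is an invented illustration (the prices, the exponential-moving-average reputation update, the cheater's defection round): one honest producer, one underpricing producer who absconds after a few rounds, and consumers who buy from whichever producer currently maximizes estimated-reliability-times-value minus price.

```python
VALUE = 10.0  # utils a consumer gets from an honest delivery

class Producer:
    def __init__(self, name, price, cheat_after):
        self.name = name
        self.price = price            # the cheater undercuts to attract buyers
        self.cheat_after = cheat_after
        self.revenue = 0.0

    def deliver(self, rnd):
        return rnd < self.cheat_after  # True = honest delivery

class Consumer:
    def __init__(self):
        self.rep = {}                  # name -> estimated delivery probability

    def expected_surplus(self, p):
        return self.rep.get(p.name, 0.5) * VALUE - p.price

    def choose(self, producers):
        return max(producers, key=self.expected_surplus)

    def update(self, p, delivered):
        old = self.rep.get(p.name, 0.5)
        # simple exponential-moving-average reputation algorithm
        self.rep[p.name] = 0.8 * old + 0.2 * (1.0 if delivered else 0.0)

honest = Producer("honest", price=4.0, cheat_after=10**9)
cheat = Producer("fly-by-night", price=2.0, cheat_after=5)
consumers = [Consumer() for _ in range(20)]

for rnd in range(30):
    for c in consumers:
        p = c.choose([honest, cheat])
        p.revenue += p.price           # consumer pays up front
        c.update(p, p.deliver(rnd))

print(honest.revenue, cheat.revenue)
```

With these particular parameters the cheater captures the whole market early (its low price beats an unknown reputation), keeps collecting for a few rounds after it stops delivering while its estimated reliability decays, and then loses every consumer to the honest producer for the rest of the run. As Hal predicts, the outcome is quite sensitive to the price/discount details chosen.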
On Tue, 21 Nov 1995, Hal wrote:
This is an interesting approach. However this seems to fold in issues of reliability with issues of quality and value. If I have a choice of two vendors, one of whom produces a product which is twice as good, but there is a 50% chance that he will abscond with my money, I am not sure how to value him compared with the other. It seems like the thrust of the analysis later is to determine whether people will in fact try to disappear. But that is not well captured IMO by an analysis which just ranks people in terms of "utility" for the price.
Our intuitive notion of reputation combines the issues of reliability and quality. In your example, whether you choose the reliable vendor or the unreliable one depends on whether you are risk-seeking or risk-averse. You must prefer one or the other or be indifferent. In general, how you make these choices depends on your values and your expectations of what the vendors will do, which include both expectations of reliability and expectations of quality. Can you elaborate more on why the analysis is inadequate? (I know it probably isn't adequate, but why do you think so?)
I am not sure about this last point. It seems to me that a good reputation is one which is most cost-effective for its owner. Whether it is good for social stability is not relevant to the person who is deciding whether to use it. ("But what if everyone behaved that way? How would you feel then?") It may be nice for the analyst but not for the participant.
Right, I'm speaking from the point of view of the analyst when I say "good", but it also applies to individual participants. Each person does what he thinks is in his best interest, but if this turns out to be unstable for the reputation system as a whole, then it won't last very long, so there is little point in getting involved in the first place. In other words, I would not choose to participate in an unstable reputation system.
I don't really know what the first one means. There are a lot of different ways I can behave, which will have impact on my reputation, but also on my productivity, income, etc. There are other ways I can damage my reputation than by cheating, too. I can be sloppy or careless or just not work very hard. So the first two are really part of a continuum of various strategies I may apply in life. The second is pretty clear but the first seems to cover too wide a range to give it a value.
You are right that there is a continuum of strategies, but I assume there is a discontinuity between completely throwing away your reputation and any other strategy. So the operating value is the maximum amount of profit you can make by optimizing among all other strategies except disappearing.
It would be useful to make some of the assumptions a bit clearer here. Is this a system in which cheating is unpunishable other than by loss of reputation, our classic anonymous marketplace? Even if so, there may be other considerations. For example, cheating may have costs, such as timing the various frauds so that people don't find out and extricate themselves from vulnerable situations before they can get stung. Also, as has been suggested here in the past, people may structure their interactions so that vulnerabilities to cheating are minimized, reducing the possible profits from that strategy.
When I wrote the original post I was thinking of the classic anonymous marketplace. But I think it can apply to other types of markets. Cheating costs can be easily factored into the throw-away value, and an important question for any theory of reputation to answer is how to structure transactions to minimize this value. Many more assumptions need to be made in modeling a particular reputation system, but I was trying to list some general properties that might apply to all reputation systems.
It might be interesting to do something similar to Axelrod's Evolution of Cooperation, where (human-written) programs played the Prisoner's Dilemma against each other. In that game, programs had reputations in a sense, in that each program when it interacted with another remembered all their previous interactions, and chose its behavior accordingly. The PD is such a cut-throat game that it apparently didn't prove useful to try to create an elaborate reputation-updating model (at least in the first tournaments; I understand that in later versions some programs with slightly non-trivial complexity did well).
The tit-for-tat program that won both contests uses an extremely simple reputation algorithm: it expects the next action of the other player to be the same as the last action. This is an example of what I called a "good" reputation algorithm. It serves the self-interest of the entities that use it; it is cheap to use; and when widely used the system is stable.

Wei Dai
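Tit-for-tat's reputation algorithm is simple enough to state in full as code. A minimal sketch of the iterated Prisoner's Dilemma with the standard payoffs (T=5, R=3, P=1, S=0); the strategy and payoff matrix are the well-known ones from Axelrod's tournament, while the function names and round count are mine:

```python
# Standard PD payoffs: (row player, column player)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """Reputation algorithm: expect the opponent's next move to equal
    their last observed move, so cooperate first, then mirror."""
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []   # each side's record of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then retaliates: (9, 14)
```

Against itself tit-for-tat locks into cooperation; against a defector it loses only the first round. That cheap, self-interested stability is what makes it fit Wei's three criteria.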
I am far behind in my C'punks reading and am likely to get farther behind before I catch up, so perhaps this is well-known or dated. However, the recent revival of reputation discussion reminded me of a very interesting claim made by Miller & Drexler in "Comparative Ecology: A Computational Perspective" (http://www.webcom.com/~agorics/agorpapers.html). I'll quote from section 4:
... Trademarking of services and products enables producers to establish valuable reputations. The lack of this mechanism in biology [17] contributes to the relative sparseness of symbiosis there.
4.4. Food webs and trade webs
Biological and market ecosystems both contain a mixture of symbiotic and negative-sum relationships. This paper argues that biological ecosystems involve more predation, while idealized market ecosystems involve more symbiosis. Indeed, one can make a case that this is so even for human market ecosystems: that biological ecosystems are, overall, dominated by predation, while market ecosystems are, overall, dominated by symbiosis.
In human markets (as in idealized markets) producers within an industry compete, but chains of symbiotic trade connect industry to industry. Competition in biology likewise occurs most often among those occupying the same niche, but here it is predation that connects from niche to niche. Because of the lack of reputations and trademarks, symbiosis in biology occurs most often in situations where the "players" find themselves in a highly iterated game. In the extreme, the symbiotic system itself becomes so tightly woven that it is considered a single organism, as with lichens composed of fungi and algae, or animals composed of eukaryotic cells containing mitochondria. Predation, of course, links one symbiotic island to the next.
Ecology textbooks show networks of predator-prey relationships, called food webs, because they are important to understanding ecosystems; "symbiosis webs" have found no comparable role. Economics textbooks show networks of trading relationships circling the globe; networks of predatory or negative-sum relationships have found no comparable role. (Even criminal networks typically form cooperative "black markets".) One cannot prove the absence of such spanning symbiotic webs in biology, or of negative-sum webs in the market; these systems are too complicated for any such proof. Instead, the argument here is evolutionary: that the concepts which come to dominate an evolved scientific field tend to reflect the phenomena which are actually relevant for understanding its subject matter.
[17] Wickler, Wolfgang, Mimicry in Plants and Animals (World University Library/McGraw-Hill, New York, 1968).
This collection of Miller & Drexler papers is very much worth reading if you haven't run across it yet.

Ted
Sorry to be so late picking up this thread, but I was very busy this past week. Wei Dai <weidai@eskimo.com> writes:
Can you elaborate more on why the analysis is inadequate? (I know it probably isn't adequate, but why do you think so?)
"Reputation" is a fairly broad concept. It generally refers to our expectations of how some person will behave in various circumstances. To some extent, every character trait can have a reputation associated with it. A person can have a reputation for honesty, for efficiency, for steadiness, for accuracy, and so on. Even looking at it solely from the point of view of a consumer choosing a service provider, any or all of these traits might be important depending on the situation. If I need the work done right away, I will choose a supplier with a reputation for speed. If I want to be sure it is right and doesn't have to be redone, I will choose one with a reputation for care and accuracy, and so on.

I don't think the notion of a graph showing utility (an overall summing up of value to me) versus cost really captures this notion. Such a graph is useful and adequate for some forms of economic analysis where certain simplifying assumptions are made, but I don't think it will work in this case. One of the big issues we would want to analyze is the impact of various sets of rules and conventions for how trades occur. The question is how trust could be established, or how trade could occur in its absence, given the possibility of avoiding retribution for dishonest behavior that anonymous communication allows. In this analysis we are going to need more information than just utility vs. price. We will need to separate out the various factors which go to make up the utility.

Changing the market conventions (say, by introducing escrow agencies) will change the weightings of the various factors that make up utility. If I no longer have to trust the honesty of the person I am trading with (because we have an escrow agency to help us make the exchange), then the importance of his reputation for honesty goes down. The result is that the "reputation" curves will change rather dynamically and unpredictably as we consider different possible structures in the market. This will make the analysis of them intractable, I would think.

As I wrote before, it makes more sense to me to focus explicitly on the issue of trust and honesty, since those seem to be the main issues which are going to take on more importance in an anonymous market. Yes, they are important in already existing markets, too, and there are plenty of fly-by-night, hole-in-the-wall companies which exist solely to do business dishonestly and then evade retribution. But the ease of doing these things could increase in an anonymous market.

The other fact that makes trustworthiness more important in such a market is the cost it imposes. One of the potential benefits of anonymity is privacy. To establish trust by keeping a steady pseudonym (as was suggested earlier, a trade name or brand name performs this function even as companies and personnel change out from under it) means giving up a certain level of privacy. Even if the trade name is controlled pseudonymously, the linkability of its transactions represents a form of exposure which can be seen as a cost. If the only way to be successful in business is to give up some of the privacy that anonymity would provide by working through a consistent pseudonym, that would be an interesting result. Again, the issue is primarily one of trustworthiness, as I see it.

I do think the idea of analyzing costs in terms of "throwing away your reputation" by cheating and starting anew is an interesting approach. The question is whether you can really quantify the value of a reputation. I know in business now corporations do carry on their books something called "good will", which I believe is roughly the value of their good name and trade marks. However it is not normally considered to be a major asset, I think.

Hal
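Hal's multi-trait view of reputation can be sketched as a vector of per-trait expectations combined with situation-dependent weights, rather than a single utility-vs-price curve. The supplier names, trait probabilities, and weightings below are all hypothetical:

```python
# Hypothetical trait vectors: each supplier's reputation is a set of
# per-trait expectations in [0, 1], one per character trait.
suppliers = {
    "fast-but-sloppy":  {"honesty": 0.9, "speed": 0.95, "accuracy": 0.6},
    "slow-but-careful": {"honesty": 0.9, "speed": 0.5,  "accuracy": 0.95},
}

def score(traits, weights):
    """Situation-dependent weighting of trait expectations."""
    return sum(weights[t] * traits[t] for t in weights)

# Different jobs weight the traits differently.
rush_job     = {"honesty": 0.3, "speed": 0.6, "accuracy": 0.1}
critical_job = {"honesty": 0.3, "speed": 0.1, "accuracy": 0.6}

def best(weights):
    return max(suppliers, key=lambda name: score(suppliers[name], weights))

print(best(rush_job))       # speed-weighted job favors the fast supplier
print(best(critical_job))   # accuracy-weighted job favors the careful one
```

This also captures Hal's escrow point: introducing an escrow agency amounts to shrinking the honesty weight toward zero, which can reorder the ranking of suppliers without any supplier's behavior changing.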
Hal writes:
Changing the market conventions (say, by introducing escrow agencies) will change the weightings of the various factors that make up utility. If I no longer have to trust the honesty of the person I am trading with (because we have an escrow agency to help us make the exchange) then the importance of his reputation for honesty goes down. The result is that the "reputation" curves will change rather dynamically and unpredictably as we consider different possible structures in the market. This will make the analysis of them intractable, I would think.
Analytically, using an escrow agent doesn't change the utility function. It replaces the trading partner's honesty reputation estimate with the escrow agent's (which is presumably higher, or why use them?). This is just a parameter substitution. Whence comes the intractability?
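Scott's parameter-substitution point can be made concrete with a one-line expected-utility model. The probabilities, prices, and the escrow fee below are hypothetical numbers chosen only to illustrate the substitution:

```python
def expected_utility(p_honest, value, price):
    """Expected surplus when the counterparty delivers with probability p_honest."""
    return p_honest * value - price

vendor_p, escrow_p = 0.7, 0.99   # hypothetical honesty estimates
fee = 1.0                        # hypothetical escrow fee

# Without escrow: the trade hinges on the vendor's own honesty estimate.
direct = expected_utility(vendor_p, value=10.0, price=5.0)

# With escrow: substitute the escrow agent's honesty estimate and pay its fee.
escrowed = expected_utility(escrow_p, value=10.0, price=5.0 + fee)

print(direct, escrowed)
```

With these numbers the escrowed trade has the higher expected surplus: the utility function itself is unchanged, and only the honesty parameter (and the price, by the fee) moves, which is exactly the substitution Scott describes. Whether that makes the overall analysis tractable is of course the point still in dispute.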
I don't have time to write much now, but I got a request for information on the Prisoner's Dilemma problem, so I did a web search and found an interesting-sounding paper at <URL: http://www.cs.wisc.edu/~smucker/ipd-cr/ipd-cr.html>. I have not read it yet, but according to the web page it adds to the traditional PD simulations the feature that participants can choose whom to interact with (rather than having to interact with everyone or with a random other program). Maybe "reputation" would be more important in such a simulation, since the element of choice seems to be one of the key areas where reputation matters. I'll try to read the paper over the holidays, but it sounds like it might be relevant.

Hal
participants (4)
- Hal
- Scott Brickner
- Ted_Anderson@transarc.com
- Wei Dai