
Deranged Mutant wrote:
First, I would like to make it clear that I have no personal grudges against
I never thought you did...
That was simply to ensure a flame-free discussion. Almost anything I talk about on my local BBS echoes (back in India) turns out to be a flame war, especially if it's related to _touchy_ software like PGP :)
PGP. I have some interest in reputation systems which I am trying to pursue. Since PGP tries to model a reputation system and is being used world-wide, I am using PGP as a model for the discussion.
That's the point, though. PGP *does not* try to model a reputation system. When you sign keys, you only attest that you are sure that that key belongs to the person whose name is on it. A signature says *nothing* about a person's reputation. Trust is meant to be assigned separately by each user of PGP. If Alice knows Bob personally, then she knows how trustworthy Bob is when he signs a key, in relation to her. This trust is not meant to be shared with anyone else, and has no meaning to anyone except Alice.
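To make the distinction concrete, here is a minimal sketch in Python (my own toy structures, not PGP's actual packet formats): a signature on a key only records "this name goes with this key", while the trust value lives only in the local keyring and is never shipped with the key.

    from dataclasses import dataclass, field

    @dataclass
    class PublicKey:
        owner_name: str                                   # the name the signer vouches for
        key_id: str
        signatures: list = field(default_factory=list)    # people who attested the name/key binding

    @dataclass
    class LocalKeyringEntry:
        key: PublicKey
        # The keyring owner's private judgement of this person *as an introducer*.
        # It never leaves the local keyring and means nothing to anyone else.
        owner_trust: str = "unknown"                      # e.g. "unknown" / "marginal" / "full"

    def sign_key(signer_name: str, key: PublicKey) -> None:
        # Signing asserts only: "I am sure this key belongs to key.owner_name".
        key.signatures.append(signer_name)

    bob_key = PublicKey(owner_name="Bob", key_id="0xB0B")
    sign_key("Alice", bob_key)                                              # Alice attests the binding
    alice_entry = LocalKeyringEntry(key=bob_key, owner_trust="marginal")    # her private opinion of Bob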
In that case, if Alice receives a key signed by Bob, she would know the key is good, and there would be no need for the two other fields. In fact, the reason why I felt PGP is trying to model a reputation system is because of these two fields that carry trust values.
You can't set up a global web of trust. It's computationally infeasible to resolve, especially once there are contradictions. It's also meaningless. Say Alice trusts Bob. Bob trusts Carol. Carol trusts Don. Should Alice trust Don? No... subjective factors like "trust" aren't transitive.
This is precisely what I wanted to know. Can there be a model wherein one can compute a trust-like parameter? If yes, in what ways should this parameter be modified and qualified? Or, conversely, what kind of qualification/modification is required for computing trust?
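As a purely illustrative answer to that question (a toy model made up for this discussion, not anything PGP does): one obvious "trust-like parameter" is to multiply pairwise trust values along a chain, so trust decays with every hop. The numbers are invented; the example also shows why the result is questionable, since Alice never expressed any opinion about Don at all.

    direct_trust = {
        ("Alice", "Bob"):   0.9,
        ("Bob",   "Carol"): 0.8,
        ("Carol", "Don"):   0.7,
    }

    def chain_trust(chain):
        # Multiply pairwise trust along a path; a missing link counts as zero.
        value = 1.0
        for a, b in zip(chain, chain[1:]):
            value *= direct_trust.get((a, b), 0.0)
        return value

    print(chain_trust(["Alice", "Bob", "Carol", "Don"]))   # roughly 0.504 -- but is that number meaningful?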
A couple of things you mentioned are interesting. In a reputation system the trust parameters are fuzzy and there are a lot of layers (since we are talking of a web of users), so we can't mix it with the liar paradox.
Yes we can. There's been work done on the Liar paradox using fuzzy logic, and also other work on reputation webs ("Say the president of a company asks all his vice presidents for their opinions about each other, so he can appoint a successor. How does he resolve contradictions, when all he has are their opinions about each other?").
Let's say a reputation system involves Alice, Bob, Carol and Don. Computing trust parameters with respect to each other can lead to situations where Alice trusts Bob x% and has y% proof that he is a liar, after considering all other relations. If x% > y% she'll believe him; if not, she won't. That's why I feel the liar paradox doesn't really pose much of a threat.
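A tiny sketch of that decision rule, with made-up numbers (x and y are just stand-ins for whatever the system computed after folding in all the other relations):

    def believes(trust_x: float, liar_evidence_y: float) -> bool:
        # Believe the statement iff the trust estimate beats the evidence of lying.
        return trust_x > liar_evidence_y

    print(believes(0.7, 0.4))   # True  -> Alice believes Bob
    print(believes(0.3, 0.6))   # False -> she does not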
As far as the halting problem goes, again, can we reduce a reputation system to the halting problem? In fact, Alonzo Church and some other scientists made symbolic equivalents of Turing's machine which could determine the broad limits of automatic computation. Is there any research being done on reputation systems that involves Church's thesis?
I misspoke slightly: I mentioned the halting problem in terms of there being non-computable functions. There are people on the list (and elsewhere) who don't have an inkling about the halting problem, and propose security/reputation systems that fly in the face of it.
I've been working on a proof or discussion of a "Very Generalized Halting Problem" that basically says you can only check that something is "not x" using a set of tests: if it fails at least one test, you know it's not "x", but if it doesn't fail any, you are still never sure that it is "x", because such a set of tests must be incomplete (Goedel's Theorem...).
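A sketch of that one-sided situation (the property and tests below are placeholders invented just to show the shape of the argument): a battery of tests can refute "this is x", but passing every test proves nothing, because the battery is necessarily incomplete.

    def test_even(n):     return n % 2 == 0
    def test_positive(n): return n > 0

    TESTS_FOR_X = [test_even, test_positive]       # stands in for an (incomplete) test set for "x"

    def check(candidate):
        for test in TESTS_FOR_X:
            if not test(candidate):
                return "definitely not x"          # one failed test settles it
        return "passed every test we have -- still not proven to be x"

    print(check(3))   # fails test_even -> definitely not x
    print(check(4))   # passes all tests, but the test set is incomplete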
How does this relate to a reputation system? If you set up a global reputation network where people tag each other in degrees of trust, you try for completeness and end up with inconsistency. People will contradict each other and there will be no way to resolve it. Alice says Bob is trustworthy to a degree, and Bob says Carol is trustworthy to a degree, but Carol says Alice is untrustworthy (rates Alice lower).
There will be inconsistency only if trust/distrust values are absolute, that is, you either trust someone or you don't. I am talking of a system wherein trust values lie between 0 and 1 and are derived from a set of tags. In that case, Alice trusts Bob x%, Bob trusts Carol y%, and Carol trusts Alice z%. If z < x and z < y, that simply implies Alice trusts Bob, and Bob trusts Carol, more than Carol trusts Alice. So the system will be relative rather than absolute. A person who is trusted by more people will carry greater weight, and people trusted by him will carry higher weight too. This way Snoopy, who is not trusted by many people, will automatically have a weak relationship with others.
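A rough sketch of how that relative weighting could be computed (one possible reading, with invented numbers, not a worked-out system): start everyone at the same weight, then repeatedly let each person's weight be the trust they receive, weighted by how much weight their raters themselves carry. People trusted by many well-trusted people end up heavy; Snoopy, whom nobody rates, decays toward zero.

    trust = {                                  # trust[a][b] in 0..1 = how much a trusts b
        "alice":  {"bob": 0.9, "carol": 0.2},
        "bob":    {"carol": 0.8, "alice": 0.7},
        "carol":  {"alice": 0.4, "bob": 0.6},
        "snoopy": {"alice": 0.9},              # snoopy rates others, but nobody rates snoopy
    }

    people = list(trust)
    weight = {p: 1.0 for p in people}          # start everyone equal

    for _ in range(20):                        # simple fixed-point iteration
        new = {p: sum(weight[q] * trust[q].get(p, 0.0) for q in people) for p in people}
        total = sum(new.values()) or 1.0
        weight = {p: v / total for p, v in new.items()}   # normalize so weights stay comparable

    for p, w in sorted(weight.items(), key=lambda kv: -kv[1]):
        print(p, round(w, 3))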
There's the other point that someone else on the list (Perry?) brought up: if people publicly rate each other, there may be social/political pressure to give some people lower or higher ratings.
Again, these ratings will be decided in a web rather than on a one-to-one basis. That way the social/political pressure can be reduced, though not completely eliminated.
And again, being a private rating system means 1) I don't offend anyone since no one else knows how I rate them, and 2) there are no contradictions because I am not relying on others' ratings to determine how I rate them.
BTW, some work using fuzzy logic in terms of how people rate each other is being done at the Group for Logic and Formal Semantics at the University at Stony Brook (in Stony Brook, NY). I went to school there and still help out in the Philosophy Department and the G.L.F.S. with some programming, etc. It isn't exactly the same as rating systems, but it's close enough.
If any research work is freely available, I'll be really interested. Best, vipul.