HF: a very brilliant and thoughtful essay that sparks many ideas for me. I am sure you will be flamed, or someone will want to flame you, for your analogy of hierarchical CAs to MLM, but imho you are right on there!! a beautiful analogy to help the public see why hierarchical CAs are not very pretty.

what amazed me is that you didn't introduce the concept of a graph. clearly, the web of trust and the hierarchical CA are really just different kinds of graphs. (for the uninitiated, a graph is a network consisting of nodes and edges.) the hierarchical CA is a tree. PRZ's "web of trust" is a graph that is not treelike. the point you make about his trust being "non-transitive" is, as I understand it, saying that trust only propagates to adjacent nodes in the trust graph, but no further. that is, say A trusts B, and B trusts C. a "trust link" exists between A and B, and between B and C, but no trust link exists between A and C. interestingly, beginning CS students learn to compute the "transitive closure" of a graph by drawing in all the missing links. this is effectively what is going on in a hierarchical CA: a path of links implies a link between all the nodes on that path.

your point that the "trust graph" is the most problematic area of cryptography at this point really hits the nail on the head. we all have to realize that public key cryptography solved one vexing problem, the requirement of a preexisting secret channel. but it does not solve another problem -- ensuring that keys are associated with the individuals one communicates with.

what I was thinking as I read your essay was that perhaps some new metrics are called for. it seems to me that people are hitting a brick wall by thinking "trust is something that is either there or it isn't". I think in the graph situation what we really have is information about the "strength" of the trust link between nodes. the problem can then be generalized: suppose I have a graph with numerical weights on its edges representing the trust between the entities represented by the nodes. the question is, how does A determine the strength of his trust in some other entity C? it should be clear that this is in fact a variation of the "shortest path" problem, and it suggests a straightforward depth- or breadth-first search. the code could tell you the "strongest trust path" between you and some entity using some heuristic, e.g. the trust between you and this person is the average of the trust of all the traversed links, or something like that. (a rough sketch of such a search appears below.) I am not saying this is the correct formula: it would be interesting to try to find other formulas that are "correct" in the sense that they truly model trust. (another obvious formula would decrease the trust strength dramatically if any link in the chain were weak.) I would be interested to hear what people think a correct "trust formula" should be.

in fact what you have delineated, HF, are two extreme trust formulas at opposite ends of the spectrum. (hope I get this right)

1. the HCA (hierarchical certification authority) scheme. all trust links are 0 or 1 (0 is the same as no link). the trust between entity A and entity B is 1 if a path exists between them, 0 otherwise.

2. the PRZ scheme. all links are 0 or 1. the trust between A and B is 1 if B is adjacent to A, 0 otherwise.

it seems to me that possibly neither is "correct", and that perhaps a "correct" formula may not even exist. there are clearly other variations.
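to make the "strongest trust path" search concrete, here is a rough python sketch. it is only an illustration of one possible formula (the average of the link weights over the best path), not a definitive implementation, and the graph representation, the names, and the example numbers are all made up by me:

# trust graph as a dict of dicts; edge weights are trust values in [0, 1].
# trust(A -> C) = the best average link weight over any simple path from A to C.

def strongest_trust_path(graph, src, dst, path=None, weights=None):
    """depth-first search over simple paths; returns (trust, path) or (0.0, None)."""
    path = (path or []) + [src]
    weights = weights or []
    if src == dst:
        trust = sum(weights) / len(weights) if weights else 1.0
        return trust, path
    best_trust, best_path = 0.0, None
    for neighbor, w in graph.get(src, {}).items():
        if neighbor in path:        # avoid cycles
            continue
        t, p = strongest_trust_path(graph, neighbor, dst, path, weights + [w])
        if p is not None and t > best_trust:
            best_trust, best_path = t, p
    return best_trust, best_path

# example: A trusts B strongly and D moderately; B trusts C weakly; D trusts C strongly.
graph = {"A": {"B": 0.9, "D": 0.6},
         "B": {"C": 0.2},
         "D": {"C": 0.9}}
print(strongest_trust_path(graph, "A", "C"))    # -> (0.75, ['A', 'D', 'C'])

note that because the "average" score is not additive along a path, this is not quite the standard shortest-path algorithm; the sketch just enumerates all simple paths, which is fine for a toy graph but would need real work to scale.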
I'm being a bit sloppy, and I'd be happy for anyone to hammer out these ideas with more rigor. what might be ideal is if every person could choose whatever trust algorithm they desired. (that is, a system that supported *both* HCA and PRZ is easily conceivable, with the consumer determining how he wishes to use the "trust data" -- although PRZ complicates this by insisting that some trust data must be secret.) and as I wrote, there are other possible algorithms, each with some obvious defects:

3. trust is measured as 0 to 1, or perhaps -1 to 1. the trust between A and B is the highest average possible over any path between A and B. ("optimistic")

4. or, trust is measured as the worst average possible. ("pessimistic")

5. trust is the product of the trust values along the path. etc.

(a sketch of how such formulas could be made pluggable is at the end of this message.)

what we have is a graph in which some links are explicitly given, and we have to "derive" some of the implied links based on our knowledge of "trust properties" and the given trust values. it is quite interesting that PKC in effect replaces the problem of "secrecy" with the problem of "trust", and that to adequately solve the "trust" problem we must try to figure out what its actual properties are. how does human trust work? how should it work? are there ways to formalize or optimize the informal "algorithms" that people use to deal with trust issues? we are getting into some deep psychological issues. can trust be quantified?

also, there are some other obvious computational problems that immediately ensue. how can we efficiently store all this graph information? is there a way to distribute it over a network? how can we efficiently respond to "trust queries"? etc. it seems to me that both PRZ's scheme and the HCA scheme are only the very first, most basic ideas of how to tackle these complex issues, and that we are likely to see new variations by others. it is quite possible that some cpunks may help immensely in refining the field.

here is another idea: it appears what we have is a trust graph in which some people may want to selectively reveal or conceal their trust. this complicates things because now the algorithms may or may not run on the "open trust" values, or the "secret/hidden" values, etc. ugh!!! the trust network itself is subject to the kind of secrecy and hiding associated with the original problem it tries to solve (i.e. conveying secret information). hence it seems to me a good "trust network system" would support some things:

1. allow efficient trust queries, without severe problems associated with "nearness" of the participants.

2. allow individual users to decide how they wish to use the system, possibly supporting *both* HCA and PRZ etc. (we all seem to be working from the assumption that they are mutually exclusive. but do they really have to be? is there really a "best" algorithm, or is the best situation actually to allow different algorithms for different situations?)

3. allow people to selectively reveal or conceal their trust values.

4. be distributed.

5. not rely on a central authority.

etc. -- additions, anyone?

it appears there is a rich vein of memes in all this, beyond the basic territory explored by PRZ and HCA, for someone to mine. in fact I'm surprised there aren't more academic papers out on this subject that tackle some of the things I am referring to above (alternate trust algorithms, trust as a network, etc.). maybe someone would like to work up some alternative prototypes a la the way remailers were developed in this "community".
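and here is the promised sketch of how the "trust formula" could be made pluggable, again in python and again purely my own illustration -- the formula names and combining rules are mine, not a claim about what the PRZ or HCA schemes actually compute. this only combines the links along a single path; picking the best or worst path (formulas 3 and 4) would be a layer on top of this, e.g. by swapping it into the path search above in place of the hard-wired average:

# a few possible rules for combining the trust values along one path into a
# single score, chosen by the user rather than fixed by the system.

def combine(weights, formula="average"):
    """combine the trust values along one path into a single score."""
    if not weights:
        return 1.0                       # trivially, you trust yourself
    if formula == "average":             # plain average of the links
        return sum(weights) / len(weights)
    if formula == "weakest_link":        # drop to the weakest link in the chain
        return min(weights)
    if formula == "product":             # every hop attenuates trust (formula 5 above)
        result = 1.0
        for w in weights:
            result *= w
        return result
    if formula == "hca":                 # hierarchical CA style: any path at all means full trust
        return 1.0
    raise ValueError("unknown trust formula: %s" % formula)

weights = [0.9, 0.2, 0.8]                # links along one hypothetical path
for f in ("average", "weakest_link", "product", "hca"):
    print(f, combine(weights, f))
# average 0.633..., weakest_link 0.2, product 0.144, hca 1.0

the point of the parameter is exactly the flexibility asked for in item 2 of the wish list: everyone runs whatever formula they like over the same trust data.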