Are there any good general cryptographic protocols for groups taking group actions by formal consensus or voting rules? I'm thinking of a "distributed agent" that is empowered to do various things but which is activated only by a vote of its owners. This would be like a "Robo-moderator" for a newsgroup, or a "Robo-Personnel-Administrator" for a company's board of directors, or a "Robo-Rater" for a restaurant rating website, or....

Crucial properties of a protocol that does the right thing would be:

1) DOES NOT create any single privileged user or machine.

2) Resistant to denial-of-service attacks and attempts to "stack the vote." (Requires user authentication.)

3) No altered version of the agent ought to be able to gather enough information to force an action, as long as at least a majority of agents are unaltered.

4) Once a consensus is reached, a majority of the agents acting together should be able to take whatever action was agreed on, even if the dissenters' agents don't cooperate with them. (A consensus reassembles a key? But then that key can't be used again; what's the next key?)

Bear
On Fri, 12 Jan 2001, Ray Dillinger wrote:
Are there any good general cryptographic protocols for groups taking group actions by formal consensus or voting rules?
There are distributed signing protocols which could go partway towards meeting this goal. For some details, you can look at the IBM project on "Proactive Security": http://www.hrl.il.ibm.com/Proactive/

They've implemented some of the protocols involved; there's a paper in ACM CCS 5, I think, which gives the API for their "Proactive Security Toolkit" -- a Java 1.1 implementation of distributed DSA signatures. Unfortunately, the web page, which advertises a download for 8/99, seems to be out of date: http://www.hrl.il.ibm.com/Proactive/project.html

Now, if you could get your hands on this, then you could do what you want by having each voter hold a share of the distributed secret key. The voters agree amongst themselves on what orders to sign, and then execute distributed signing to produce a signed order. The robot doesn't do anything unless it receives a valid signed order.

-David
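[Editor's sketch] The flow David describes -- voters each hold signing material, and the robot acts only on a quorum-approved order -- can be modeled in a few lines. This is a toy: HMACs stand in for real threshold-DSA signature shares, and the voter names, key derivation, and quorum size are illustrative, not from the thread.

```python
# Toy model: the robot executes an order only when a quorum of voters
# has produced valid signature shares over that exact order.
# HMAC is a stand-in for real threshold-DSA shares (as in the IBM
# Proactive Security toolkit); in the real scheme no single party,
# including the robot, ever holds the full signing key.
import hmac, hashlib

VOTER_KEYS = {f"voter{i}": bytes([i + 1]) * 32 for i in range(5)}  # toy keys
QUORUM = 3

def sign_share(voter, order):
    """One voter's 'signature share' over the proposed order."""
    return hmac.new(VOTER_KEYS[voter], order, hashlib.sha256).digest()

def robot_execute(order, shares):
    """Act only if at least QUORUM distinct voters signed this order."""
    valid = sum(
        hmac.compare_digest(sig, sign_share(voter, order))
        for voter, sig in shares.items() if voter in VOTER_KEYS
    )
    return valid >= QUORUM

order = b"add carol to the moderator list"
shares = {v: sign_share(v, order) for v in ["voter0", "voter1", "voter2"]}
assert robot_execute(order, shares)                      # quorum reached
assert not robot_execute(order, {"voter0": shares["voter0"]})  # too few
```

The design point is that dissenters simply withhold their shares (property 4 in Bear's list): nothing they do can block a quorum of cooperating agents.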
At 06:01 PM 1/12/01 -0500, Ray Dillinger wrote:
Crucial facts about a protocol that does the right thing would be:
1) DOES NOT create any single privileged user or machine.
2) Resistant to denial-of-service attacks and attempts to "stack the vote." (Requires user authentication)
3) No altered versions of the agent ought to be able to gather enough information to force an action as long as at least the majority of agents are unaltered.
4) Once a consensus is reached, a majority of the agents acting together should be able to take whatever action is found even if the dissenters' agents don't cooperate with them. (a consensus reassembles a key? But then that key can't be used again, what's the next key?)
Interesting idea. Start with one user who can admit others (by virtue of having 100% of the vote), and then let the users vote to add more.

I don't think reassembling the key is the final stage. I think the server could simply use a voting protocol to get (or time out waiting for) permission to do proposed actions. We are assuming that the server is trusted, right?

The server could send signed, PGP-encrypted email to all members saying: "The following script has been proposed to be run by GroupServer for your Group. To vote yes or no, sign a yes or no message and encrypt and send it to GroupServer. This vote closes in 3 days, and votes are acknowledged immediately."

Perhaps I'm not clear on what constitutes an action that could be distributed without relying on a trusted actor (server).

(Thinking out loud) Maybe the actions require access to a distributed N-of-M database? How do you prevent someone from reusing the reconstructed database? Or uncooperative members from refusing to update their slice of the DB?
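[Editor's sketch] The GroupServer ballot flow Honig outlines -- a proposal open for a fixed window, one signed ballot per member, immediate acknowledgement, majority decides -- looks like this in outline. Signature and encryption checks are elided; the member names and the majority rule are illustrative assumptions.

```python
# Sketch of GroupServer's tally: collect yes/no ballots from known
# members until the deadline, acknowledge each at once, then decide
# by simple majority.  PGP signing/encryption is assumed to have
# happened before vote() is called.
from datetime import datetime, timedelta

class Proposal:
    def __init__(self, script, members, opened, window=timedelta(days=3)):
        self.script = script
        self.members = set(members)
        self.closes = opened + window
        self.ballots = {}            # member -> bool (later ballot replaces earlier)

    def vote(self, member, choice, when):
        if member in self.members and when < self.closes:
            self.ballots[member] = choice
            return "acknowledged"    # "votes are acknowledged immediately"
        return "rejected"

    def result(self, now):
        if now < self.closes:
            return None              # still open; absent a result, it times out
        yes = sum(self.ballots.values())
        return yes > len(self.members) / 2

t0 = datetime(2001, 1, 12)
p = Proposal("run cleanup.sh", ["alice", "bob", "carol"], t0)
assert p.vote("alice", True, t0) == "acknowledged"
assert p.vote("bob", True, t0 + timedelta(days=1)) == "acknowledged"
assert p.vote("mallory", True, t0) == "rejected"      # not a member
assert p.result(t0 + timedelta(days=4)) is True
```

Note this sketch still has the single trusted server that Ray objects to below; it captures the vote-or-timeout logic, not the trust model.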
On Fri, 12 Jan 2001, David Honig wrote:
(Thinking out loud) Maybe the actions require access to a distributed N-of-M database? How do you prevent someone from reusing the reconstructed database? Or uncooperative members from refusing to update their slice of the DB?
One way to address this problem is to use secret sharing. Everyone gets a share, and only a certain threshold of shareholders need to cooperate to reconstruct. Everyone's share counts the same, so to deny service you need enough non-cooperatives that fewer than threshold cooperative shareholders remain.

You never reconstruct the database in one place. Instead, you figure out a clever way to do a distributed query on the database shares, such that at the end of the protocol, out pops the result. There are plausibility results, due to Ben-Or, Goldwasser, Goldreich, Wigderson, and others, about this under the name "secure multiparty computation." Briefly, if you can express a boolean function F with n inputs as a circuit, then n parties can get together and evaluate F(x1,x2,...,xn) such that

* everyone learns the output
* no one learns anything about an xi not their own

So in particular you can build an F() which reconstructs a database from shares x1,x2,...,xn and then runs a query on the database. Only the results of the query are output; the theorems tell you that the shares stay secret.

"So is it practical?" The answer is NO. These protocols tell you how to secure-multiparty-compute a function gate by gate, with nontrivial computation and communication per gate. But, hey, at least it's possible!

-David
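[Editor's sketch] The "everyone gets a share, a threshold reconstructs" primitive David invokes is Shamir's secret sharing: the secret is the constant term of a random degree-(t-1) polynomial over a prime field, shares are points on it, and any t points interpolate the secret while t-1 reveal nothing. The field prime and the (n, t) parameters below are illustrative choices, not from the thread.

```python
# Shamir secret sharing over GF(P): split a secret into n shares so
# that any t of them reconstruct it and any t-1 are useless.
import random

P = 2**127 - 1   # a Mersenne prime; any prime larger than the secret works

def split(secret, n, t):
    """Deal n shares of `secret` with reconstruction threshold t."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for xi, yi in shares:
        term = yi
        for xj, _ in shares:
            if xj != xi:
                term = term * (-xj) % P * pow(xi - xj, P - 2, P) % P
        secret = (secret + term) % P
    return secret

shares = split(42, n=5, t=3)
assert reconstruct(shares[:3]) == 42   # any 3 shares suffice...
assert reconstruct(shares[2:]) == 42   # ...so 2 non-cooperatives can't deny service
```

With t = 3 of n = 5, up to two shareholders can refuse to cooperate (or lose their shares) without denying service, which is the tolerance property claimed above.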
On Fri, 12 Jan 2001, David Honig wrote:
the server could simply use a voting protocol to get (or timeout) permission to do proposed actions. We are assuming that the server is trusted, right?
Actually, no. That creates a single privileged machine, which is also a point of failure, which is also a point of attack, which is also subject to subpoenas or outright theft. Ideally, this is something that runs on the distributed machines of the participants. I think that's the only way to be safe from the "lawsuit attack".
Perhaps I'm not clear on what constitutes an action that could be distributed without relying on a trusted actor (server).
For example, consider a robo-moderated mailing list formed by cat owners. They have a "posting protocol" that requires you to submit a digital coin worth a dollar or two along with your letter. If enough people click on the "this is spam" button, the group agents donate the coin to an animal shelter and you can't spend it. Otherwise, you get your coin back when your message expires.

The posting protocols etc. are wrapped in scripts, of course; on your end you get a message box that says "Are you willing to post a two-dollar bond that says most of the people on the list don't think this is spam?" with yes/no buttons. The subscribers just have another little button on their mail reader, so it goes: next message, delete, reply, reply all, spam.

I'd really like it if somebody has figured out a way for a group to form consensus and act on that consensus as though it were a single individual -- capable of participating in general protocols. But individual solutions to problems like the above would be a great start.

Bear
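[Editor's sketch] The posting-bond lifecycle Ray describes has three states: the coin sits in escrow while the message is live, is forfeited to the charity if enough distinct "spam" clicks arrive first, and is refunded when the message expires. The threshold value and names below are illustrative assumptions; the cryptographic coin itself is abstracted to a label.

```python
# State machine for a bonded post: posted -> forfeited (enough spam
# votes) or posted -> refunded (message expires first).
class BondedPost:
    SPAM_THRESHOLD = 10      # illustrative; the list would set this

    def __init__(self, sender, coin):
        self.sender, self.coin = sender, coin
        self.spam_votes = set()          # set: one vote per subscriber
        self.state = "posted"

    def mark_spam(self, subscriber):
        if self.state == "posted":
            self.spam_votes.add(subscriber)
            if len(self.spam_votes) >= self.SPAM_THRESHOLD:
                self.state = "forfeited"  # coin goes to the animal shelter

    def expire(self):
        if self.state == "posted":
            self.state = "refunded"       # sender gets the coin back

post = BondedPost("bear", coin="$2")
for i in range(10):
    post.mark_spam(f"subscriber{i}")
assert post.state == "forfeited"

quiet = BondedPost("bear", coin="$2")
quiet.expire()
assert quiet.state == "refunded"
```

Using a set for votes is what makes "stack the vote" require distinct authenticated subscribers rather than repeated clicks; who holds this state without a privileged server is exactly the open question the thread goes on to discuss.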
On Sat, 13 Jan 2001, Ray Dillinger wrote:
list don't think this is spam?" with yes/no buttons. The subscribers just have another little button on their mail reader, so it goes: next message, delete, reply, reply all, spam.
Well, the totally trivial and stupid thing is for a list reader to sign a message saying "I think message X is spam" and send it to the list server. Actually, he doesn't even have to send the message; he can just send the signature if the message is in some canonical format. The server can verify the signature, verify the user's ID, increment a counter, and throw away the signature. When the counter passes a threshold T, -chomp- the server eats the bond. The server can even keep the signatures around if it wants to prove to the luser later that yes, lots of people really did think his message was spam.

This has at least two problems:

1) It identifies the user who says "I think this is spam." Not a good idea in principle, possibly not a good idea in practice. A potential solution would be a way for a user to sign a message such that

* no one can determine which individual public key signed the message
* yet anyone can determine that the signer's public key belongs to a specific set of public keys (chosen by the signer and fixed at signature time, to avoid the problem of "well, remove one public key and try again!") -- in this case, the set of eligible list voters.

There's probably some crypto voting paper which solves a problem much like this; I'm not up on that.

2) Keeping an audit trail so the server can prove that the majority really did think message X was spam. With this proposal, audit trails consist of up to T signatures, where T is the threshold used to trigger the spam alert. At about 1K per signature and many e-mails, this could be sizable.

-David
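[Editor's sketch] The "trivial and stupid" server above is easy to make concrete: verify each spam report, count distinct voters per message, retain the signatures as the audit trail, and trip at threshold T. As in the earlier sketch, HMAC with per-member keys is a toy stand-in for real public-key signatures, and T and the member names are illustrative.

```python
# Central spam-report server: one verified vote per member per message,
# signatures retained as proof, bond eaten once T distinct members agree.
import hmac, hashlib

T = 3
MEMBER_KEYS = {"alice": b"a" * 16, "bob": b"b" * 16, "carol": b"c" * 16}

audit = {}   # message id -> {voter: signature}, the audit trail (problem 2)

def report_spam(msg_id, voter, sig):
    """Returns True once the threshold is reached (-chomp-)."""
    if voter not in MEMBER_KEYS:
        return False
    expected = hmac.new(MEMBER_KEYS[voter], msg_id, hashlib.sha256).digest()
    if hmac.compare_digest(sig, expected):
        audit.setdefault(msg_id, {})[voter] = sig   # kept, not thrown away
    return len(audit.get(msg_id, {})) >= T

msg = b"message-42"
for v in ("alice", "bob"):
    sig = hmac.new(MEMBER_KEYS[v], msg, hashlib.sha256).digest()
    assert report_spam(msg, v, sig) is False        # below threshold
sig = hmac.new(MEMBER_KEYS["carol"], msg, hashlib.sha256).digest()
assert report_spam(msg, "carol", sig) is True       # T distinct voters
```

Problem 1 (voter identification) is visible right in the code: `audit` records exactly who reported what, which is what the set-membership signature idea (today called a ring or group signature) would hide.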
Well, the totally trivial and stupid thing is for a list reader to sign a message saying "I think message X is spam" and send it to the list
Sorry, I re-read your message and noted the requirement to have no central server. How about this:

1) To post a message, sender S takes a 2-dollar coin and then uses some kind of verifiable secret sharing protocol to split it into shares.

2) S sends the shares to the group agents.

3) Each group agent verifies that it has a share consistent with the other group agents' shares (see Byzantine Agreement for this one). If any share fails, then something bad happens (what?). The other problem is what happens if S just submitted a bunch of garbage; I'm not sure how to deal with this DoS attack.

4) If a group agent thinks the message is spam, it sends its share to Engineers Sans Frontiers or whoever. Otherwise it keeps mum.

Now if enough group agents (1/2, 1/3, whatever) think the message is spam, enough shares collect at step 4 to reconstruct the 2-dollar coin. Otherwise not enough shares collect, and the coin is never reconstructed; presumably S kept a copy and can spend it later.

No central server now -- it just needs a verifiable secret sharing scheme. Pedersen has one, and another is part of the Proactive Security work I mentioned previously.

-David
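[Editor's sketch] Steps 1-4 can be sketched with plain (non-verifiable) Shamir sharing: the sender deals shares of the coin to the agents, agents who deem the message spam publish their shares, and the coin reconstructs only if enough shares surface. A real deployment needs the *verifiable* scheme (Pedersen's) so agents can check their shares in step 3 without trusting S; the coin value and (n, t) parameters are illustrative.

```python
# Shamir-shared posting bond: the charity recovers the coin exactly
# when at least t of the n agents choose to publish their shares.
# Plain Shamir here -- verifiability (step 3) is the missing piece.
import random

P = 2**61 - 1   # toy prime field

def split(secret, n, t):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    s = 0
    for xi, yi in shares:
        term = yi
        for xj, _ in shares:
            if xj != xi:
                term = term * (-xj) % P * pow(xi - xj, P - 2, P) % P
        s = (s + term) % P
    return s

coin = 123456789                        # stand-in for the 2-dollar coin
agent_shares = split(coin, n=7, t=4)    # steps 1-2: S deals shares to 7 agents
published = agent_shares[:4]            # step 4: 4 agents cry "spam"
assert reconstruct(published) == coin   # threshold met: coin is forfeited
```

With only 3 published shares the polynomial is underdetermined, so no coalition below threshold learns anything about the coin and the quiet agents never have to act -- which is how the scheme avoids the central server.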
At 12:23 AM 1/13/01 -0500, dmolnar wrote:
On Fri, 12 Jan 2001, David Honig wrote:
(Thinking out loud) Maybe the actions require access to a distributed N-of-M database? How do you prevent someone from reusing the reconstructed database? Or uncooperative members from refusing to update their slice of the DB?
One way to address this problem is to use secret sharing. Everyone gets a share. Only a certain threshold need to cooperate to reconstruct. Everyone's secret counts the same, so in order to deny service you need to have fewer than threshold non-cooperatives.
You never reconstruct the database in one place. Instead, you figure out a clever way to do a distributed query on the database shares, such that at the end of the protocol, out pops the result. There are plausibility results, due to Ben-Or, Goldwasser, Goldreich, Wigderson, and others, about this under the name "secure multiparty computation." Briefly, if you can express a boolean function F with n inputs as a circuit, then n parties can get together and evaluate F(x1,x2,...,xn) such that

* everyone learns the output
* no one learns anything about an xi not their own
Suppose the action to be voted on is an update of the distributed DB [1]. How do you enforce an *update* on the shares? Wouldn't that require the cooperation of all shareholders? I would think that enough non-cooperative shareholders could fork off their own group, diverging from the point where they didn't update their shares.

[1] For instance, the DB could be the list of members, and the action could be to add or drop a member.

Also, some group actions might compromise the DB itself, and so it seems to me that you'd often need a trusted server which accepts votes on its actions -- though I realize this implies a central point of attack/control (the Napster problem).
On Sat, 13 Jan 2001, dmolnar wrote:
1) To post a message, sender S takes a 2-dollar coin and then uses some kind of verifiable secret sharing protocol to split it into shares. <snip>
4) If a group agent thinks the message is spam, it sends its share to Engineers Sans Frontiers or whoever.
No central server now, just needs a verifiable secret sharing scheme. Pedersen has one,
Cite, or URL? A verifiable secret sharing protocol could solve a *LOT* of protocol problems and I want to read it closely. (Thanks in advance for any pointers...)
and another is part of the Proactive Security work I mentioned previously.
On Byzantine Agreements? I have run into references to the topic, but it was never really clear to me what Byzantine Agreement actually means. Bear
On Sat, 13 Jan 2001, Ray Dillinger wrote:
No central server now, just needs a verifiable secret sharing scheme. Pedersen has one,
Cite, or URL? A verifiable secret sharing protocol could solve a *LOT* of protocol problems and I want to read it closely. (Thanks in advance for any pointers...)
Pedersen's verifiable secret sharing: "Non-interactive and information-theoretic secure verifiable secret sharing." In J. Feigenbaum, editor, Advances in Cryptology -- CRYPTO '91, volume 576 of Lecture Notes in Computer Science, pages 129-140, 11-15 August 1991. Springer-Verlag, 1992.

Stadler's publicly verifiable secret sharing: http://citeseer.nj.nec.com/stadler96publicly.html

Schoenmakers' publicly verifiable secret sharing: http://www.win.tue.nl/math/dw/pp/berry/papers/crypto99.ps.gz

Wenbo Mao explains what "publicly verifiable" or "universally verifiable" means and why to use it: http://www.hp.co.uk/people/wm/papers/oak98.ps

Rosario Gennaro's thesis on VSS: http://citeseer.nj.nec.com/72839.html

Stinson's bibliography on secret sharing schemes: http://www.cacr.math.uwaterloo.ca/~dstinson/ssbib.html
and another is part of the Proactive Security work I mentioned previously.
On Byzantine Agreements? I have run into references to the topic, but it was never really clear what Byzantine Agreement really means.
Actually, I meant that a verifiable secret sharing scheme is used in the proactive security work. Thanks, -David
participants (3):
- David Honig
- dmolnar
- Ray Dillinger