Maybe you could say more about the details of your credential system. Such a system built on Wagner blinding might be very interesting.
I've been thinking it would be nice to post my entire paper here (and maybe on sci.crypt.research) before sending it off to the journals. What are the issues surrounding that, though? The academic folks here seem uncomfortable when I talk about it, like I'd be leaking something secret. AFAICT, nobody else would be able to apply for a patent on the idea without telling a lot of lies in the process. So that leaves the possibility that somebody whips out another paper on the topic before mine's all the way done. Are the journals going to be snippy about copyright issues? Why haven't I seen other papers published on usenet and such before going to press?
If I replace h1 with (g^b0) and get the issuer to sign:
((g^b0) * g^b1 * h2 * g^b2 * h3 * g^b3 * ...)^k
I should be able to divide the two results and get h1^k. But part of the cut-and-choose protocol will be to require that the n/2 checked documents are all valid and different from those seen in any previous instance of the protocol. So it should be extremely hard for the user to sneak previously used values, or fake h's (which are really blinding factors), into the unrevealed documents. But are there other ways to separate out signatures on individual h's?
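To make that division step concrete, here's a toy Python sketch with made-up parameters. It assumes the issuer signs by raising the submitted product to a secret exponent k mod a prime p, and that the corresponding public value g^k is available to the user for cancelling the leftover blinding term; those are my assumptions, not necessarily how the actual scheme works.

import secrets

p = 2**127 - 1          # Mersenne prime; toy multiplicative group mod p
g = 7                   # stand-in generator (this is just arithmetic)
k = 0x1F2E3D4C5B6A7988  # issuer's secret signing exponent
gk = pow(g, k, p)       # g^k, assumed to be a published value

h = [secrets.randbelow(p - 2) + 2 for _ in range(3)]   # h1, h2, h3
b = [secrets.randbelow(p - 2) + 2 for _ in range(4)]   # b0, b1, b2, b3

def product(factors):
    out = 1
    for f in factors:
        out = out * f % p
    return out

def issuer_sign(m):
    return pow(m, k, p)

# Run 1: the normal message (h1*g^b1 * h2*g^b2 * h3*g^b3)
m1 = product([h[0], pow(g, b[1], p), h[1], pow(g, b[2], p), h[2], pow(g, b[3], p)])
# Run 2: the same thing, but with h1 replaced by g^b0
m2 = product([pow(g, b[0], p), pow(g, b[1], p), h[1], pow(g, b[2], p), h[2], pow(g, b[3], p)])

sig1 = issuer_sign(m1)
sig2 = issuer_sign(m2)

# Divide: sig1/sig2 = (h1 / g^b0)^k, then cancel the leftover g^(-k*b0)
# using the public g^k and the user's own b0.
ratio = sig1 * pow(sig2, -1, p) % p
recovered = ratio * pow(gk, b[0], p) % p

assert recovered == pow(h[0], k, p)   # the user is left holding h1^k by itself

If g^k really is public (as in many discrete-log signature setups), the user ends up with a signature on an individual h without the issuer ever agreeing to sign it alone, which is exactly what the cut-and-choose duplicate check is meant to block.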
You're really going to remember all the discarded h values from all the previous instances of credential issuing? Seems like it might be a lot of data. How many h values do you typically expect?
I get around that by having the issuer provide a fresh random value for each issuing session, which gets hashed several times along with some other data before going into the blinded messages. You have to prove that your values properly descend from the issuer's random value, which makes it tough to reuse values from a previous session. -J
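Here's a minimal sketch of how I read that; the names, the use of SHA-256, and the exact chaining are my guesses rather than the actual protocol. The issuer hands out a per-session nonce, the user hashes it a few times together with its own data to produce the values that go into the blinded documents, and during the cut-and-choose opening the issuer can check that each opened value descends from this session's nonce.

import hashlib
import secrets

def H(*parts: bytes) -> bytes:
    """Hash helper; SHA-256 stands in for whatever hash the real protocol uses."""
    digest = hashlib.sha256()
    for part in parts:
        digest.update(part)
    return digest.digest()

# Issuer side: a fresh random value for this issuing session.
session_nonce = secrets.token_bytes(32)

# User side: derive a value for a blinded document by hashing the issuer's
# nonce several times along with the user's own data.
def derive_value(nonce: bytes, user_data: bytes, rounds: int = 3):
    """Return the derived value plus the intermediate hashes the user could open later."""
    chain = []
    current = H(nonce, user_data)
    chain.append(current)
    for _ in range(rounds - 1):
        current = H(current, user_data)
        chain.append(current)
    return current, chain

# Issuer side, during the cut-and-choose opening: check that an opened value
# really descends from this session's nonce, so values from an old session
# can't be slipped back in.
def verify_descent(nonce: bytes, user_data: bytes, value: bytes, rounds: int = 3) -> bool:
    expected, _ = derive_value(nonce, user_data, rounds)
    return secrets.compare_digest(expected, value)

user_data = b"user blinding material / document fields"
value, proof_chain = derive_value(session_nonce, user_data)
assert verify_descent(session_nonce, user_data, value)

Since every acceptable value is bound to the current session's nonce, the issuer only needs the nonce for the session at hand rather than a growing list of every h value ever discarded.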