Meeting Report: "Developing the Advanced Encryption Standard"

Phillip M. Hallam-Baker hallam at ai.mit.edu
Fri Apr 18 16:51:38 PDT 1997


>  Regarding computational efficiency, NIST will favor
>efficiency on 32-bit processors and short key-setup time, will test
>efficiency on a little endian processor, and will publish the specs of
>the test system.

This seems misguided IMHO. The current trend is towards 64-bit
processors, and the inefficiency of using only half of a 64-bit
processor seems to me rather more serious than the hassle of having to
kludge up 64-bit operations on a 32-bit processor. The most likely
platforms are 64-bit and 8-bit (embedded systems such as cellular).
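
(The kludge in question is not hard, just wasteful. A rough C sketch,
assuming a compiler with <stdint.h> and hypothetical add64/rotl64
helpers, of faking 64 bit arithmetic with pairs of 32 bit words:)

#include <stdint.h>

/* Hypothetical 64 bit value held as two 32 bit halves. */
typedef struct { uint32_t hi, lo; } u64emu;

u64emu add64(u64emu a, u64emu b)
{
    u64emu r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);   /* carry out of the low word */
    return r;
}

u64emu rotl64(u64emu a, unsigned n)       /* assumes 0 < n < 32 */
{
    u64emu r;
    r.hi = (a.hi << n) | (a.lo >> (32 - n));
    r.lo = (a.lo << n) | (a.hi >> (32 - n));
    return r;
}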


> They also encourage two submissions: reference (possibly in
>Java) and optimized (in C).  Regarding memory requirements, NIST will
>measure memory requirements for C implementation on a single reference
>platform (presumably a Pentium Pro), although submitters are welcome to
>provide results for other platforms.

That's also a somewhat limited approach. The x86 family is getting long
in the tooth; Intel themselves have it scheduled for replacement in
1999. The architecture is very much compromised by backwards
compatibility with the CISC instruction set. A mixed bag of AXP,
Pentium Pro and commodity embedded processors popular in VLSI cell
form, such as the Z-80, 6502 and 680x, would be more reasonable.

>It was pretty much
>universally thought that this schedule is wildly optimistic.

Like designing a new cypher in 6 months?

>NIST has a hard time figuring out how to measure hardware efficiency.
>They'd like to have definitive metrics (like there will be for software)
>but are unwilling to force submitters to provide VHDL code, or gate
>counts, or whatever.

Hardware is likely to cover a wide range of uses. The number of gates
for DES in an ultra-fast, `unwound' implementation is many times that
of an iterator using the same gates for each round.
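
(In software terms the same distinction is between driving one copy of
the round in a loop and writing all sixteen out. A rough sketch, with a
hypothetical round_fn standing in for one round and made-up names
throughout:)

#include <stdint.h>

/* Hypothetical single round; in hardware this is the block of gates
   that is either reused each cycle or replicated sixteen times.     */
extern uint64_t round_fn(uint64_t block, uint64_t subkey);

/* Iterated: one copy of the round logic, used sixteen times over. */
uint64_t cipher_iterated(uint64_t block, const uint64_t subkey[16])
{
    int i;
    for (i = 0; i < 16; i++)
        block = round_fn(block, subkey[i]);
    return block;
}

/* Unwound: sixteen copies laid out in sequence; the hardware analogue
   can pipeline one block per cycle at a far higher gate count.       */
uint64_t cipher_unwound(uint64_t block, const uint64_t subkey[16])
{
    block = round_fn(block, subkey[0]);   block = round_fn(block, subkey[1]);
    block = round_fn(block, subkey[2]);   block = round_fn(block, subkey[3]);
    block = round_fn(block, subkey[4]);   block = round_fn(block, subkey[5]);
    block = round_fn(block, subkey[6]);   block = round_fn(block, subkey[7]);
    block = round_fn(block, subkey[8]);   block = round_fn(block, subkey[9]);
    block = round_fn(block, subkey[10]);  block = round_fn(block, subkey[11]);
    block = round_fn(block, subkey[12]);  block = round_fn(block, subkey[13]);
    block = round_fn(block, subkey[14]);  block = round_fn(block, subkey[15]);
    return block;
}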

>NIST talked about what to do about "tweaking" algorithms after submission.
>What if a break is found, but a simple fix prevents the attack?  What if
>someone submits an algorithm and someone else proposes a tweak?  These
>questions were not answered.

Surely this just means that the tweakers get credit?

>But is there enough time for people to invent strong 128-bit block ciphers?
>Probably not.  One alternative is to take existing 64-bit block ciphers,
>and then use a 4-round Luby-Rackoff construction to create a 128-bit block
>variant.  Another is to give people more time.  Both were talked about.  I
>would like them to approve triple-DES as an interim standard, and then take
>all the time they need for a secure 128-bit block cipher.
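
(For the record, the construction mentioned is simple enough. A rough C
sketch, assuming a hypothetical 64 bit block cipher E64 with a 16 byte
key, each of the four rounds keyed independently:)

#include <stdint.h>
#include <string.h>

/* Hypothetical 64 bit block cipher used as the round function. */
extern void E64(const uint8_t key[16], const uint8_t in[8], uint8_t out[8]);

/* 4-round Feistel (Luby-Rackoff style) construction: a 128 bit block
   built from the 64 bit primitive.  Encryption direction only;
   decryption runs the rounds in the reverse order.                  */
void encrypt128(const uint8_t key[4][16], uint8_t block[16])
{
    uint8_t *L = block, *R = block + 8;
    uint8_t f[8], tmp[8];
    int r, i;

    for (r = 0; r < 4; r++) {
        E64(key[r], R, f);               /* f = E_kr(R)               */
        for (i = 0; i < 8; i++)
            L[i] ^= f[i];                /* L = L xor f               */
        if (r < 3) {                     /* swap halves, skipping the */
            memcpy(tmp, L, 8);           /* swap after the last round */
            memcpy(L, R, 8);
            memcpy(R, tmp, 8);
        }
    }
}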


I suspect the assumption made is that the timetable will slip. My
concern, however, is that the starting gate closes too soon. Six months
is too little time to start something entirely new. I would not be
surprised if there were an extension, but unless people know in advance
that there will be one, they will only find out in six months' time
that they have another six months to submit an algorithm.

I've seen this type of delay a lot in the IETF standards arena. The
working group starts by assuming it has too little time to do something
new, then spends far longer plugging up holes in a broken plan.

Phill


