Hello: Not to distract from the entertaining MIME thread, but I've got a question that's a little closer to a crypto topic (i.e., software pseudo-random number generators).

In the aftermath of the Pentium-can't-divide-accurately flap, I modified a random-number generation routine I'd written to check for the presence of the Pentium divide errors. In the process, I put in a routine that did an elementary benchmark of the chip's performance in both integer operations (the time to repeatedly execute an empty for-loop 1 million times) and floating-point operations (inserting a divide operation in the loop, then adjusting the resulting execution time by subtracting the time required for the empty loop before computing divide-calculations-per-second performance). This is an admittedly very crude benchmark, but I wanted to get some rough idea of how many divides could be performed per minute of program execution (i.e., to estimate how long the program could run before a Pentium problem might occur).

Anyway, I got what appeared to be very strange results when comparing performance on my 486/66 versus a 486/25 and a 386/20: namely, although the 386 was dead last on both the primarily integer-based empty-for-loop and for-loop-with-divide timings, the 486/25 and 486/66 turned in effectively identical times on the empty-loop benchmark (the 486/66 was about 33% faster than the 486/25 on the divide-based benchmark). All machines were running essentially equivalent versions of Windows for Workgroups.

My question is: why would the 486/66 and the 486/25 produce comparable integer-based empty-loop performance? I haven't tried a comparable program running under plain DOS to see if this is somehow Windows-related. I suspect there's an easy explanation, but it escapes me. Any suggestions would be greatly appreciated.

rj
------------------------------------------------------------------
R. J. Harvey (mail: harveyrj@vt.edu)
(PGPkey 0BADDDB5: 82 42 53 EA 97 B0 A2 B2 FC 92 90 BB C2 26 FD 21)