I see 160ms round trip times on my SLIP link from home to work, and I can't account for all of this time just by adding up transmission times and store-and-forward delays for the data rates and packet sizes I'm using. And I don't think it can be explained by the trellis decoding in V.32bis, as that should account for only a few bit times of delay. I've since heard very similar figures for other modems, so it's not just my modem.

I'm beginning to suspect the V.42bis packetizing algorithms. Although they're not described in the spec, I suspect that real V.42bis implementations use timers to decide when to send the currently queued data as a frame. Or maybe there's a Nagle-like algorithm, as in TCP: immediately send the first byte of data on an idle link, but hold additional traffic until that first byte is acknowledged, in order to aggregate stream traffic into larger frames. This is all speculation so far, but it does explain the long RTTs I see with packet traffic even though raw character-at-a-time traffic seems fast. Stream traffic would see the worst delays of all, which is ordinarily fine for a file transfer but death to a real-time stream like voice. That's why we may be forced to turn off V.42 entirely and speak synchronously to the modem. Time to haul out a protocol analyzer and do some timing measurements.

Real timing measurements would help, but it's not just the V.42bis algorithms. A year or two ago, I did some measurements on a number of different modems and saw large -- and sometimes unacceptable -- delays even without V.42 or MNP in use. My methodology was to enable loopback at various points -- hardware loopback plugs, local loopback on my modem, remote loopback or a plug at the far end, and so on -- then send single characters and time how long they took to show up. The modems seemed either to buffer up several characters or to wait a while before sending the first one, but the delays appeared to apply to the first character in a "bunch" ("packet" is too strong a word).
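
For what it's worth, here is the kind of back-of-the-envelope accounting I mean. The numbers below (a 14,400 bit/s V.32bis line rate, 10 bits per byte of async framing, a roughly 45-byte SLIP ping packet) are illustrative assumptions, not measurements of my actual setup:

# Back-of-the-envelope serialization delay for a small packet over an
# async modem link. All constants here are assumptions for illustration.

LINE_RATE_BPS = 14_400        # assumed V.32bis line rate
BITS_PER_BYTE = 10            # 8 data bits + start + stop (8-N-1 async)
PACKET_BYTES = 45             # assumed small ICMP echo over SLIP

one_way = PACKET_BYTES * BITS_PER_BYTE / LINE_RATE_BPS
round_trip = 2 * one_way

print(f"one-way serialization:    {one_way * 1000:.1f} ms")
print(f"round-trip serialization: {round_trip * 1000:.1f} ms")
# About 31 ms one way, 62 ms round trip -- nowhere near 160 ms, which is
# why the modem's own buffering is the prime suspect.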
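
And here, as a sketch only, is roughly the Nagle-like behavior I'm speculating about. The spec doesn't describe any of this; the framer and its ack interface below are entirely hypothetical, just to show the queuing logic:

# Hypothetical Nagle-like aggregation inside the modem's error-correcting
# layer: first byte on an idle link goes out at once, later bytes wait
# until the outstanding frame is acknowledged, then go out together.

class NagleLikeFramer:
    def __init__(self, send_frame):
        self.send_frame = send_frame   # callable that transmits one frame
        self.pending = bytearray()     # bytes held back for the next frame
        self.unacked = False           # a frame is in flight, not yet acked

    def on_byte_from_dte(self, byte):
        if not self.unacked:
            # Link is idle: send the first byte immediately.
            self.send_frame(bytes([byte]))
            self.unacked = True
        else:
            # A frame is outstanding: hold further bytes to aggregate them.
            self.pending.append(byte)

    def on_ack(self):
        if self.pending:
            # Flush everything that accumulated while waiting for the ack.
            self.send_frame(bytes(self.pending))
            self.pending.clear()
            self.unacked = True
        else:
            self.unacked = False

If something like this is going on, the first keystroke on an idle link goes out promptly, but anything sent during one modem round trip gets lumped into the next frame -- which would look a lot like the RTT pattern I'm seeing.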
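
The single-character loopback timing is easy to reproduce if you have a loopback plug handy. This sketch assumes the pyserial package; the port name and speed are placeholders to adjust for your own setup:

# Time how long a single character takes to come back through a loopback
# (hardware plug, the modem's local loopback, or a remote loopback).
# Assumes pyserial; PORT and SPEED are placeholders.

import time
import serial  # pyserial

PORT = "/dev/ttyS0"   # adjust for your machine
SPEED = 14_400

with serial.Serial(PORT, SPEED, timeout=2) as port:
    samples = []
    for _ in range(20):
        port.reset_input_buffer()
        start = time.monotonic()
        port.write(b"U")          # 0x55: alternating bits
        port.flush()
        echoed = port.read(1)     # blocks until the byte returns or times out
        if echoed:
            samples.append((time.monotonic() - start) * 1000.0)
        time.sleep(0.25)          # let the link go idle between probes

    if samples:
        samples.sort()
        print(f"min {samples[0]:.1f} ms, "
              f"median {samples[len(samples) // 2]:.1f} ms, "
              f"max {samples[-1]:.1f} ms")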