
The internet can't get that much capacity.  We don't have switching
technology beyond the test phase to handle gigabits of data per second,
and we don't have the routing technology to move packets from point A to
points B-ZZZ when searching through a routing table hundreds of thousands
of lines long that never has a chance to stabilize between the change
messages that keep coming in.  If you're interested, search for the
writings of Noel Chiappa, who talks about this regularly on ietf,
big-internet, etc.

Adam

jim bell wrote:
|
| Potentially.  However, there has been some mention of a new standard for
| voice compression that puts voice into 2400 bits per second, a factor of
| about 25 lower than the phone company normally uses.  (They use 8,000
| samples per second at 8 bits per sample, companded.)  At that rate, a
| pair of modern, 2.4 Gb/s fibers could handle 1 million simultaneous
| phone calls.  Since some of the newer fiber systems put 8 or more
| separate channels down a single fiber, that would work out to 8 million
| conversations.
|
| I have to conclude that we shouldn't even be close to running out of
| Internet capacity, _IF_ it were driven by state-of-the-art fiber and
| similar-speed switches.  But it probably isn't.  At best, Internet
| probably only gets a fraction of the capacity of a given fiber wherever
| it flows.  This will have to change.

--
"It is seldom that liberty of any kind is lost all at once." -Hume
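
For what it's worth, the arithmetic in jim's message checks out; here is a
quick back-of-the-envelope sketch (the 64 kb/s uncompressed rate, the 2400
b/s compressed rate, the 2.4 Gb/s fiber, and the 8 channels per fiber are
all taken from the quoted text above; the Python is just an illustration):

    # Back-of-the-envelope check of the capacity arithmetic quoted above.
    uncompressed_bps   = 8000 * 8   # 8,000 samples/s at 8 bits/sample = 64 kb/s
    compressed_bps     = 2400       # proposed voice-compression standard
    fiber_bps          = 2.4e9      # one modern 2.4 Gb/s fiber
    channels_per_fiber = 8          # newer systems carry 8 or more channels

    print(uncompressed_bps / compressed_bps)   # ~26.7, the "factor of about 25"
    # A pair of fibers (one per direction) gives one full-duplex call
    # per 2400 b/s channel on each fiber:
    print(fiber_bps / compressed_bps)          # 1,000,000 simultaneous calls
    print(channels_per_fiber * fiber_bps / compressed_bps)  # 8,000,000 conversations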