Wow, the problem is solved, right?
Wrong. With the number of systems on the net growing rapidly, any realistic extrapolation leaves the number of Windows systems even larger than it is today. Hence we face at least as much exposure as at present, which the evidence has shown is more than enough to cause tremendous economic damage.
You miss the fact that, if Windows had, say, 90% of the machines (disregarding desktop/server/whatever distinctions), the damage would, by your metric, be roughly three times as large as the cost you point at, which would affect only a third of the machines (a third that is larger in absolute numbers than today, but still smaller than what it would be with 90% of machines running MS).
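To make that arithmetic explicit, here is a rough sketch in Python; it assumes damage scales linearly with the share of vulnerable machines, and the 90% / one-third figures are just the hypothetical shares from above, not real market data:

    # Toy model: damage from a Windows exploit is proportional to the
    # fraction of machines running Windows. Shares are hypothetical.
    monoculture_share = 0.90   # Windows on ~90% of machines
    diverse_share = 1.0 / 3.0  # roughly even three-way split

    # Ratio of damage in the monoculture world to the diversified one.
    print(monoculture_share / diverse_share)  # ~2.7, i.e. roughly three times as large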
And in fact, it is worse, because any flaw in the Mac or Linux OSes will now be just as dangerous as one in Windows! What we will face is a situation where the *weakest* of the widely used OSes will determine the risk factor for the system as a whole.
Yes, you are right: when you don't put all your eggs in the same basket, you run a *higher* risk of getting some crushed eggs. But, in return, you run a lower risk of losing *all* your eggs. The point is to contain the worst-case cost, at the expense of making some smaller cost more likely.
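A minimal sketch of that eggs-and-baskets trade-off, assuming each platform is hit independently with the same probability over some period (the probability and the three-way split are purely illustrative):

    # One basket (monoculture) vs. three baskets (three OSes).
    # p = chance that a given platform suffers a crippling exploit
    # in some period; the figure is illustrative, not measured.
    p = 0.2
    n_baskets = 3

    # Monoculture: either everything is hit (probability p) or nothing is.
    prob_some_loss_mono = p
    prob_total_loss_mono = p

    # Diversity: losing *something* becomes more likely, but losing
    # *everything* requires all platforms to be hit at once.
    prob_some_loss_div = 1 - (1 - p) ** n_baskets  # ~0.49: more crushed eggs
    prob_total_loss_div = p ** n_baskets           # ~0.008: far smaller worst case

    print(prob_some_loss_mono, prob_some_loss_div)
    print(prob_total_loss_mono, prob_total_loss_div)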
chosen Windows because it is popular, has good development tools, and in the early days was easier to write for (remember that up until a few years ago, the Mac lacked preemptive multitasking, and Linux wasn't even a blip on the radar).
Windows 2000 was only a few years ago too. Windows NT 3 and 4 were not desktop OSes; they were used only on servers. And I worked in a company that had the misfortune of running an NT 3 server. Preemptive multitasking does not imply stability, as that experience showed, though I won't claim our experience was typical. There was BeOS too, which could have been widely available were it not for MS having the computer makers' ear (firmly grasped in an iron fist). But you still have a fair point here, and I agree to varying degrees with the rest of your points, except where you come back to:
The result is that we will have a system where, as pointed out above, not one but several architectures are each widespread enough to bring the net to its knees when an exploit is discovered. This network will only be as strong as its weakest link. Diversity, in this context, is a risk factor, not a risk mitigator.
That is true for serial systems, not parallel ones. Encryption is a serial system. Redundancy built from different systems is not: you need to destroy all branches to bring the whole system down, as the sketch below illustrates (though I do not deny that you can degrade quality of service by bringing a single node down, depending on the degree of redundancy). Of course, the above holds for a more or less homogeneous distribution of the different systems (here, OSes). Otherwise, you have a connected graph of monocultures, and the first argument applies.
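To put the serial/parallel distinction in numbers, here is a sketch assuming components fail independently with known probabilities, which real exploits of course do not guarantee:

    from math import prod

    def serial_failure(failure_probs):
        # A weakest-link (serial) system fails as soon as any one
        # component fails.
        return 1 - prod(1 - p for p in failure_probs)

    def redundant_failure(failure_probs):
        # A redundant (parallel) system built from different branches
        # fails only if every branch fails at the same time.
        return prod(failure_probs)

    probs = [0.1, 0.1, 0.1]  # illustrative per-component failure probabilities
    print(serial_failure(probs))     # ~0.271: diversity hurts a serial system
    print(redundant_failure(probs))  # ~0.001: diversity helps a redundant one

-- Vincent Penquerc'h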