That's right, the Microsoft MBSM (Marketing Bull Shit Machine) is really cranking into gear, this time via Mike Neil posting on the otherwise respectable Windows Server Division blog. (Ward, none of this is directed at you; you do a fine job. It's when those marketing types somehow take over your blog that things go down the toilet.)
Seriously Mike, do you think the people reading Ward's blog are managerial types who don't see straight through marketing BS like "We're designing Windows Server virtualization to scale up to 64 processors, which I'm proud to say is something no other vendor's product supports"?
You're proud to say you're DESIGNING something that no other vendor's product (currently) supports!? Come back when you have a shipping product that comes anywhere near what your competitors are (currently) doing in terms of performance, scalability, management and stability; then you can boast about the features you have that they don't.
I'd also be interested to know where the requirements came from to develop such a feature... applications requiring 64 processors are generally not the kind of thing you would virtualise. How would you allow for hardware failure with such a VM? Have another 64-way piece of hardware sitting there doing nothing, ready to fire up in the event of a hardware failure on the first one? Gee, we're back to the physical-world problem again!
Seriously, stuff like this just makes me think Microsoft isn't on the right wavelength. Another blog somewhere once stated that while everyone currently in the virtualisation market has solved the physical problem, only VMware has actually solved the VIRTUAL problem, by way of their management software and Vmotion.

It's pointless to tout things like hot-add NICs and CPUs. How many physical servers will let you hot swap a failed CPU? When was the last time you were in a datacenter and tried to do that? How about trying to do that on a blade? What would happen to the instructions the VM was executing at the time the CPU failed - are they going to seamlessly fail over to a different CPU? And how many applications that need to go from 2 to 4 CPUs and then back again are really suitable for virtualisation? That kind of issue is generally solved by scaling out the number of machines running the application rather than increasing the CPU available to a single machine. What happens to the HAL and the kernel when you try to go from 1 CPU to 2 CPUs and then back again? I can only assume the enlightenments in Longhorn will solve that issue...
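To illustrate the HAL/kernel point above: a guest OS reports whatever processor topology its kernel initialised at boot, so a hot-added (or hot-removed) CPU simply isn't visible to applications unless the kernel itself supports the change. A trivial sketch (nothing vendor-specific assumed, just the standard library):

```python
import os

# The OS, not the hypervisor, decides how many CPUs a process can see.
# os.cpu_count() just asks the kernel; if the kernel/HAL was brought up
# single-processor, a CPU hot-added underneath it stays invisible until
# the kernel itself knows how to handle hot-add.
visible = os.cpu_count()
print(f"CPUs visible to this OS: {visible}")
```

Which is exactly why "hot add CPUs" is a guest-OS problem as much as a hypervisor feature.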
I hope when the product launches it has something like Vmotion, otherwise there's no way it can hope to compete. Microsoft is already on the back foot in terms of the platforms that will be supported on top of Windows Virtualization, and one of Microsoft's most successful arguments against Linux is "cost of acquisition is a negligible part of TCO". Do they hope to fly in the face of that by expecting that giving Windows Virtualization away will be a convincing enough argument to switch?
Time to buy some shares in VMware. I can see the virtualisation market playing out much the same way as search currently is.