VMware Fault Tolerance brings a second copy of a virtual machine online and keeps it in a passive mode until the first VM fails; service is then seamlessly transferred to the second copy. The primary virtual machine is brought online and all execution occurs there, while the instructions are logged and shipped over to the second virtual machine, which replays the logged instructions. This brings a new alternative to HA failover, which requires a certain amount of downtime in the event of a failure. This technology should bring 100% uptime to mission-critical servers in the same way that HA brought easy clustering to the datacenter.
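The record-and-replay idea behind this can be illustrated with a toy model (this is my own simplified sketch, not VMware's implementation): the primary logs every input it acts on, and the passive secondary consumes that log instead of taking live input, so its state stays in lockstep.

```python
class PrimaryVM:
    """Toy model of FT record/replay: the primary executes requests
    and logs every input so a secondary replica can replay them."""
    def __init__(self, log):
        self.state = 0
        self.log = log  # replay log shipped to the secondary

    def handle_request(self, value):
        # The input is recorded before it is applied, so the
        # secondary can reproduce the exact same execution.
        self.log.append(("add", value))
        self.state += value

class SecondaryVM:
    """Passive replica: consumes the log instead of live input."""
    def __init__(self):
        self.state = 0

    def replay(self, log):
        for op, value in log:
            if op == "add":
                self.state += value

log = []
primary = PrimaryVM(log)
for v in (3, 7, 5):
    primary.handle_request(v)

secondary = SecondaryVM()
secondary.replay(log)
# After replay the secondary's state matches the primary's,
# which is what makes a failover transparent to clients.
assert secondary.state == primary.state == 15
```

The key property is that the secondary never acts on anything outside the log, so at any failure point it holds the same state the primary had.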
Data Recovery is the big brother to VCB. VCB becomes the set of APIs that can be leveraged into a complete disk-based backup solution, and Data Recovery promises to build on VCB to make backup copies of virtual machines. The technology also packages de-duplication into the mix to conserve disk space. The solution also strives to prevent data corruption through the use of VCB snapshots, which would allow us to roll back to a point-in-time backup of the virtual server.
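The de-duplication piece works on a simple principle worth sketching (my own illustration, not the product's actual format): identical blocks are stored once, keyed by a content hash, and each backup is just a list of hash references.

```python
import hashlib

def dedup_store(blocks):
    """Store each unique block once, keyed by its content hash;
    a backup becomes a manifest of hash references."""
    store = {}     # hash -> block data, one copy per unique block
    manifest = []  # ordered hashes describing this backup
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # keep only the first copy
        manifest.append(digest)
    return store, manifest

# A backup whose blocks partly repeat (e.g. identical guest OS blocks).
backup = [b"os-block-1", b"os-block-2", b"app-data", b"os-block-1"]
store, manifest = dedup_store(backup)
# Four logical blocks, but only three unique ones consume disk space.
assert len(manifest) == 4 and len(store) == 3
```

With many near-identical VMs (same guest OS, same patches), most blocks hash to values already in the store, which is where the disk savings come from.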
vmSafe is a set of APIs which can be used by security vendors (more announcements promised this week). The one point made in the session, and one I hope to follow up on further this week, is that many security vendors are packing multiple technologies (anti-virus, IPS, firewall, etc.) into a single virtual appliance which can run on each ESX host. All of the security monitoring can then happen outside of the VM, without an agent.
One of the new application vServices announced is the ability to hot-add CPU and memory to a virtual machine. I didn't glean any additional details about these particular capabilities, so I'll be following up on this.
VMDirectPath will allow direct hardware access from a virtual machine. This technology is designed to increase the available IOPS from 100K to 200K and is aimed specifically at scenarios where an application polls a hardware device heavily. It reduces the number of hops from the virtual server to the physical hardware and decreases latency along the way.
Paravirtualized SCSI will allow a virtual machine to directly access a SCSI device. That's really about all we were shown...
vStorage Thin Provisioning is particularly interesting for HTC's datacenter. We are evaluating a number of virtual desktop solutions which we may be implementing in our organization, and thin provisioning will allow us an enormous amount of flexibility. As things stand, we would be deploying many copies of Windows XP, each requiring approximately 4 GB of SAN storage plus probably another 10 GB of white space per virtual desktop. Thin provisioning will allow us to create a single base OS VMDK and have all other copies reference that same base image through pointers, rather than requiring a separate full copy for each virtual desktop. The same can be said for our extreme farm of Windows 2003 servers... We could create a single base image for those servers, and the only space required after that is for the additional software, data, or changes to the base OS.
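The base-image-plus-pointers model is essentially copy-on-write, which a small sketch makes concrete (a toy model of my own, not the VMDK format): reads fall through to the shared base image, and each desktop pays storage only for the blocks it has changed.

```python
class LinkedCloneDisk:
    """Toy copy-on-write disk: reads fall through to a shared
    base image; writes land in a small per-clone delta."""
    def __init__(self, base):
        self.base = base   # shared, read-only base image blocks
        self.delta = {}    # blocks this clone has overwritten

    def read(self, block_no):
        return self.delta.get(block_no, self.base[block_no])

    def write(self, block_no, data):
        self.delta[block_no] = data  # only the change consumes space

base = {0: b"winxp-boot", 1: b"winxp-system", 2: b"empty"}
desktop_a = LinkedCloneDisk(base)
desktop_b = LinkedCloneDisk(base)
desktop_a.write(2, b"user-a-profile")

assert desktop_a.read(2) == b"user-a-profile"
assert desktop_b.read(2) == b"empty"   # b still sees the base image
assert len(desktop_a.delta) == 1       # a pays only for its one change
```

Scale that to hundreds of XP desktops sharing one base VMDK and the savings over a full 4 GB copy per desktop become obvious.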
vNetwork Distributed Switch is a great addition which will allow administrators to define the switch at a higher level than each individual ESX server. In my datacenter today, I have 8 hosts in one cluster with the same virtual switch configuration, 3 hosts in each of 3 additional clusters, and one 4-node cluster. Each ESX host was configured by hand, and the maintenance of adding VLANs across our 8-node cluster isn't the most exciting part of my job. Moving this task to a higher level is logical and also brings some additional benefits. For instance, bringing the switch to the cluster level will allow for things like monitoring ports which are agnostic to the ESX host and will follow the VM wherever it may travel. VMware also announced third-party switch support, which should allow switch vendors to bring their proprietary functions, such as QoS protocols and configuration, to the virtual switch level. This will basically allow VMware to function as just another device which adheres to the rules and configuration defined elsewhere on your Ethernet network.
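The operational win is easy to see in a sketch (hypothetical host and VLAN names, and a deliberately simplified model): define the switch once, apply it to every host, and a VLAN addition becomes one cluster-level change instead of eight hand edits.

```python
def apply_switch_config(hosts, vlans):
    """Push one switch definition to every host in the cluster,
    instead of configuring each ESX server by hand."""
    for host in hosts:
        host["vlans"] = set(vlans)

# An 8-node cluster like the one described above (names are made up).
cluster = [{"name": f"esx{i:02d}", "vlans": set()} for i in range(1, 9)]
apply_switch_config(cluster, {10, 20, 30})

# Adding VLAN 40 later is a single change at the cluster level.
apply_switch_config(cluster, {10, 20, 30, 40})
assert all(h["vlans"] == {10, 20, 30, 40} for h in cluster)
```

The same centralization is what makes host-agnostic monitoring ports possible: the port definition lives with the distributed switch, not with any one host.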
Virtual Center will be rebranded as vCenter Server, and vCenter will become the brand name for VMware's entire suite of management applications. 2009 will bring several new vCenter products which should fill the gaps within the lifecycle of the virtual datacenter.
AppSpeed will be a product based upon the technology acquired from B-hive. This product will monitor and remediate performance issues on a per-application basis. Each application will have a vApp profile built which defines the components needed to make the application work, along with service-level information for the product. When the service levels are not being met, the product will be able to take the actions necessary to remediate the performance issues. In the future, that could mean resizing the virtual machine.
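The monitor-against-SLA loop at the core of this can be sketched in a few lines (my own toy version, with made-up thresholds, not AppSpeed's actual logic): compare observed latency to the service level in the profile and flag when remediation is needed.

```python
def check_service_level(latency_samples_ms, sla_ms):
    """Compare observed application latency against its SLA;
    a real product would trigger remediation on a violation
    (e.g. resize or relocate the VM)."""
    average = sum(latency_samples_ms) / len(latency_samples_ms)
    return "ok" if average <= sla_ms else "remediate"

# Healthy application: average latency well under the 100 ms SLA.
assert check_service_level([40, 55, 60], sla_ms=100) == "ok"
# Degraded application: the monitor would kick off remediation.
assert check_service_level([140, 180, 220], sla_ms=100) == "remediate"
```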
Orchestrator caught my attention as the most useful of the additional apps in the vCenter suite. It is an end-to-end scripting engine and is actually the basis for VMware's Lifecycle Manager product; Lifecycle Manager is essentially just a set of Orchestrator scripts packaged into an application. Orchestrator will be useful for automating routine tasks in the datacenter.
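The "scripts packaged into an application" idea is just workflow composition, which a minimal sketch shows (hypothetical step names, a deliberately tiny model rather than Orchestrator's real engine): individual tasks share a context and run in sequence, and a packaged product is just a pre-built list of such steps.

```python
def run_workflow(steps, context):
    """Run a list of task functions in order, passing shared context
    along: the basic shape of an orchestration workflow."""
    for step in steps:
        step(context)
    return context

# Two reusable steps (names are illustrative, not real Orchestrator tasks).
def clone_vm(ctx):
    ctx["vm"] = f"clone-of-{ctx['template']}"

def power_on(ctx):
    ctx["powered"] = True

result = run_workflow([clone_vm, power_on], {"template": "win2003-base"})
assert result["vm"] == "clone-of-win2003-base" and result["powered"]
```

A "Lifecycle Manager" in this model is nothing more than a named, pre-assembled `steps` list shipped with a UI on top.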
VMware also announced CapacityIQ, Chargeback, Config Control and Host Profiles in the vCenter product line. CapacityIQ will basically allow you to run what-if scenarios against your virtual datacenter, giving a more accurate idea of when you need to expand capacity or how a particular change will affect it. Chargeback is just what it says - a chargeback tool - which allows you to assign a cost to the virtual datacenter based on utilization. Config Control is one that I didn't get any detail down about, so I'm interested to see what it is. Host Profiles is one I find useful, in that it can cut down the initial deployment time of an ESX host by allowing you to apply a host profile to the server and let it self-configure. I know in my datacenter we have about 5 different host types, each with its own definition, but with multiple hosts defined very similarly. This would be especially helpful when you add capacity, since the exact configuration steps won't be fresh in your brain once you progress past the initial VI3 deployment.
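The Host Profiles idea reduces to comparing a host's settings against a reference profile and applying whatever differs, which a short sketch captures (setting names like `ntp` and `syslog` are illustrative, not the real profile schema):

```python
def profile_drift(profile, host_config):
    """Compare a host's settings against a reference profile and
    return the settings that must change to reach compliance."""
    return {key: want for key, want in profile.items()
            if host_config.get(key) != want}

# A reference profile for one host type (made-up example settings).
profile = {"ntp": "ntp.corp.local", "vswitch_nics": 2, "syslog": "log01"}
# A freshly installed host, partially configured.
new_host = {"ntp": "pool.ntp.org", "vswitch_nics": 2}

drift = profile_drift(profile, new_host)
# Applying the drift is all that's left of the manual deploy checklist.
assert drift == {"ntp": "ntp.corp.local", "syslog": "log01"}
```

With one profile per host type (five, in the datacenter described above), adding capacity becomes "install ESX, attach profile" rather than re-deriving the configuration steps from memory.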