Getting the Most out of vMotion - Architecture, Features, Debugging
"Getting the Most out of vMotion - Architecture, Features, Debugging vMotion is a key, widely adopted technology which enables the live migration of virtual machines on the vSphere platform. It enables critical datacenter workflows, including automated load-balancing with DRS and DPS, hardware maintenance, and the permanent migration of workloads. Each vSphere release introduces new vMotion functionality, and significant performance improvements to address key customer requests and enable new use cases. In this session, join engineers from the development and performance teams to get an insiders' view of vMotion architecture, cutting edge features, best practices, and tools for performance troubleshooting. Performance studies will be presented for some of the hot topics including Monster VM migrations, migrations over IPv6, and Metro migrations over Distributed/Federated storage deployments. Finally, take a sneak-peek into the future and performance directions for vMotion including long distance migrations and migration to public clouds".
- Gabriel Tarasuk-Levin - Staff Engineer 2, VMware
- Sreekanth Setty - Staff Engineer, VMware
vMotion
Transparent move of a VM to another host.
vMotion requires shared storage
vMotion enables features like DRS and FT
vMotion Workflow
- create skeleton VM on destination
- copy VM memory state - the most complex portion of the workflow
- quiesce VM on source
- transfer device state and remaining memory changes
- resume VM on destination
- power off source VM
How does the memory copy work? It uses iterative memory pre-copy: cycle through the VM's memory pages, copy them to the destination, and monitor for changes (dirty pages); then re-send the dirtied pages. Repeat until the copy converges, i.e., the remaining dirty set is small enough to finish during the switch-over.
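To make the loop concrete, here is a minimal Python sketch of an iterative pre-copy. The send_pages/get_dirty_pages helpers and the convergence threshold are assumptions for illustration; this is not ESXi's implementation.

```python
# Illustrative sketch of iterative memory pre-copy (not the actual ESXi code).
# send_pages() and get_dirty_pages() are hypothetical helpers standing in for
# the hypervisor's network transfer and dirty-page (write) tracing.

SWITCHOVER_THRESHOLD_PAGES = 1024  # assumption: small enough to send while quiesced
MAX_PASSES = 32                    # assumption: cap the number of pre-copy passes

def precopy_memory(vm, send_pages, get_dirty_pages):
    """Copy all guest memory, then keep re-copying pages dirtied during each pass."""
    pending = set(range(vm.num_pages))    # first pass: every page is "dirty"
    for _ in range(MAX_PASSES):
        send_pages(vm, pending)           # copy this pass to the destination
        pending = get_dirty_pages(vm)     # pages the guest wrote during the copy
        if len(pending) <= SWITCHOVER_THRESHOLD_PAGES:
            break                         # converged: remainder fits in the switch-over
    # Next: quiesce the VM, send the final `pending` set plus device state,
    # and resume on the destination.
    return pending
```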
Storage vMotion
The flip side of vMotion: only the disk state moves.
The VM remains on the same host.
Storage vMotion has a similar workflow to vMotion:
- create skeleton VM on destination
- copy VM cold data, such as snapshots
- copy VM hot data (live disk content)
- quiesce VM on source
- transfer device state and hand off memory state
- resume VM on destination
- free VM resources on source
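A similar sketch for the disk side, under the same caveats: cold data is copied once, hot data is copied and its changed blocks are re-copied until the delta is small. copy_blocks, get_changed_blocks, and the threshold are hypothetical, not VMware's datamover API.

```python
# Illustrative sketch of the Storage vMotion copy phases (not VMware's actual code).
# copy_blocks() and get_changed_blocks() are hypothetical helpers standing in for
# the datamover transfer and changed-block tracking.

HOT_COPY_THRESHOLD_BLOCKS = 2048   # assumption: delta small enough for switch-over

def storage_migrate(vm, copy_blocks, get_changed_blocks):
    # Cold data (snapshots, non-running disk content) does not change: copy it once.
    for disk in vm.cold_disks:
        copy_blocks(disk, set(range(disk.num_blocks)))

    # Hot data (the live, writable disk) changes during the copy: copy it,
    # then re-copy the blocks the guest wrote, until the delta is small.
    for disk in vm.hot_disks:
        changed = set(range(disk.num_blocks))
        while len(changed) > HOT_COPY_THRESHOLD_BLOCKS:
            copy_blocks(disk, changed)
            changed = get_changed_blocks(disk)
    # Next: quiesce the VM, transfer the final delta and device state,
    # and resume against the destination datastore.
```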
vMotion Without Shared Storage
Moves a VM atomically to another host without shared storage!
Available since ESXi 5.1.
Only available from the Web Client.
The workflow looks like Storage vMotion.
Currently transfers cold data across the management network - an architectural decision VMware is trying to fix.
Works with any storage type (NFS, SAS, etc.).
Technology written by the presenter.
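A hedged sketch of the data-path split noted above: cold data over the management network, memory and hot data over the vMotion network. Every object and method name here is made up for illustration.

```python
# Hypothetical illustration of the network split described above (not VMware's code).
# mgmt_net / vmotion_net and their methods are invented names for the sketch.

def shared_nothing_migrate(vm, mgmt_net, vmotion_net):
    # Cold disk data (snapshots, base disks) currently travels over the
    # management network.
    for disk in vm.cold_disks:
        mgmt_net.send(disk.read_all())

    # Hot disk data and guest memory use the vMotion network, with the
    # iterative re-copy loops sketched earlier for data that keeps changing.
    for disk in vm.hot_disks:
        vmotion_net.send(disk.read_all())
    vmotion_net.send(vm.memory_snapshot())

    # Switch-over: quiesce, send the final deltas plus device state, resume remotely.
```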
Features
History:
- vMotion (2003)
- v5.0: Multi-NIC vMotion, Stun During Page Send (SDPS)
- v5.1: vMotion without shared storage
- v5.5: Metro VPLEX support, IPv6 improvements
Performance
Performance metrics:
- migration time (memory/disk)
- switch-over time
- application impact (throughput and latency)
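A small, hypothetical sketch of how the first two metrics could be derived from migration timestamps; the event fields are made up, not a vSphere API. Application impact would be measured from inside the workload itself.

```python
# Hypothetical metric calculations for a migration (field names are invented;
# this is not a vSphere API). Times are in seconds, transfer size in bytes.

from dataclasses import dataclass

@dataclass
class MigrationEvents:
    start: float        # migration started
    quiesce: float      # source VM quiesced (switch-over begins)
    resume: float       # VM resumed on destination (switch-over ends)
    end: float          # migration completed
    bytes_sent: int     # total memory/disk bytes transferred

def report(ev: MigrationEvents) -> dict:
    return {
        "total_migration_time_s": ev.end - ev.start,
        "switch_over_time_s": ev.resume - ev.quiesce,   # guest-visible blackout
        "avg_throughput_gbit_s": ev.bytes_sent * 8 / (ev.end - ev.start) / 1e9,
    }

# Example: a 64 GB transfer over 70 s with a 0.4 s switch-over.
print(report(MigrationEvents(0.0, 69.2, 69.6, 70.0, 64 * 2**30)))
```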
Monster VM Migration Performance
- Two NICs show a significant benefit; a third NIC adds little (due to a vMotion helper-thread limitation).
vMotion Across Metro Distances
Metro distances have up to 10 ms round-trip time.
EMC VPLEX shortens vMotion duration.
VPLEX uses caching features to keep data in sync across metro distances.
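Why the round-trip time matters: a single TCP stream's throughput is bounded by window size divided by RTT. A quick back-of-the-envelope check with assumed window sizes (illustrative numbers, not measured vMotion values):

```python
# Back-of-the-envelope: single-stream TCP throughput ceiling = window / RTT.
# Window sizes below are assumptions for illustration only.

def max_throughput_gbit_s(window_bytes: float, rtt_s: float) -> float:
    return window_bytes * 8 / rtt_s / 1e9

for window_mib in (1, 4, 16):
    ceiling = max_throughput_gbit_s(window_mib * 2**20, rtt_s=0.010)  # 10 ms metro RTT
    print(f"{window_mib} MiB window @ 10 ms RTT -> ~{ceiling:.2f} Gbit/s ceiling")
```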
What's Next for vMotion
- multi-NIC improvements - such as 3rd NIC performance
- support Array Replication with VVOLs
- Long Distance vMotion
- vMotion within/to the Hybrid Cloud (vCHS)