ESXi Host Settings
Set the maximum number of consecutive “sequential” I/Os allowed from one VM before switching to another VM:
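For example, a minimal sketch (assuming the advanced setting meant here is Disk.SchedQuantum and the value of 64 commonly recommended in EMC's XtremIO host configuration guide; check the guide for your version):

# Raise Disk.SchedQuantum to 64 (assumed recommended value)
esxcli system settings advanced set --option /Disk/SchedQuantum --int-value 64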
Set the maximum I/O request size passed to storage devices. With XtremIO, this must be changed from the default of 32767 (32MB) to 4096 (4MB):
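For example (assuming the setting in question is Disk.DiskMaxIOSize, which is specified in KB):

# Lower Disk.DiskMaxIOSize from 32767 KB (32MB) to 4096 KB (4MB)
esxcli system settings advanced set --option /Disk/DiskMaxIOSize --int-value 4096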
Set the maximum number of active storage commands (I/Os) allowed at any given time at the VMkernel:
Note: In vSphere 5.5 this setting is per volume! So after adding a volume to an ESXi 5.5 host, the command needs to be rerun.
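A sketch of both variants (assuming the setting is Disk.SchedNumReqOutstanding and the value of 256 from the XtremIO host configuration guide; the naa device ID is a placeholder):

# Up to vSphere 5.1: a single host-wide setting
esxcli system settings advanced set --option /Disk/SchedNumReqOutstanding --int-value 256
# vSphere 5.5: set per device, so repeat for every XtremIO volume
esxcli storage core device set -d naa.<device-id> -O 256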
Verify which HBA module is currently loaded.
For example, when using Qlogic:
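For example, listing the storage adapters and the loaded modules (the actual adapter and driver names will differ per host):

# Show HBAs and the driver module each one uses
esxcli storage core adapter list
# Check which Qlogic module is loaded (qla2xxx legacy or qlnativefc native)
esxcli system module list | grep ql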
Set the queue depth (in this case Qlogic):
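A sketch, assuming the legacy qla2xxx driver and the queue depth of 256 that the XtremIO host configuration guide recommends; use the module name found in the previous step (e.g. qlnativefc for the native driver):

# Set the Qlogic HBA queue depth to 256 (assumed recommended value)
esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=256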
Reboot the host and then verify that the queue depth adjustment has been applied:
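For example (again assuming the qla2xxx module):

# Confirm the ql2xmaxqdepth parameter now shows the new value
esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth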
When using vSphere Native Multipathing (NMP):
Set the vSphere NMP Round Robin path switching frequency for XtremIO volumes from the default value (1000 I/O packets) to 1:
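A sketch of both forms, assuming the claim rule values commonly documented for XtremIO (vendor XtremIO, model XtremApp) and a placeholder naa device ID; verify against the host configuration guide for your XtremIO version:

# Existing volume: switch the device to Round Robin and set IOPS=1
esxcli storage nmp device set -d naa.<device-id> --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set -d naa.<device-id> --type iops --iops 1
# New volumes: add a SATP rule so future XtremIO devices get Round Robin with IOPS=1 automatically
esxcli storage nmp satp rule add -c tpgs_off -e "XtremIO Active/Active" -M XtremApp -P VMW_PSP_RR -O iops=1 -s VMW_SATP_DEFAULT_AA -t vendor -V XtremIO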
When using the EMC PowerPath software:
Upload the EMC PowerPath software installer to a local datastore and run the following command (be sure to change the path and filename to your own specifications!).
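For example (the datastore path and bundle name below are placeholders; substitute your own):

# Install the PowerPath offline bundle from a local datastore
esxcli software vib install -d /vmfs/volumes/<datastore>/EMCPower.VMWARE.<version>.zip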
vCenter Settings
The maximum number of concurrent full cloning operations should be adjusted based on the XtremIO cluster size. The vCenter Server parameter config.vpxd.ResourceManager.maxCostPerHost determines the maximum number of concurrent full clone operations allowed (the default value is 8). Adjust the parameter according to the XtremIO cluster size as follows:
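As an illustration (the scaling of 8 per X-Brick is my reading of the common XtremIO guidance, so verify it against the host configuration guide for your cluster size), on a two X-Brick cluster the vCenter Server advanced setting would become:

config.vpxd.ResourceManager.maxCostPerHost = 16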
VAAI
Be sure to check whether VAAI is enabled:
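For example (the naa device ID is a placeholder for one of your XtremIO volumes):

# Check VAAI primitive support on a specific device
esxcli storage core device vaai status get -d naa.<device-id>
# Check that hardware-accelerated move (XCOPY) is enabled on the host
esxcli system settings advanced list --option /DataMover/HardwareAcceleratedMove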
Why? Let me show you the difference between deploying a Windows 2012 R2 template (15GB) with VAAI enabled and disabled:
VAAI enabled: 34 seconds
Performance Testing
Before and after changing the settings, I ran some simple IOmeter tests with the following configuration:
4x Windows 2012 R2 VMs, each with 4 vCPUs, 8GB RAM, a 40GB vDisk for the OS, and a 40GB vDisk connected to a Paravirtual SCSI adapter used for the IOmeter test file.
One of the VMs was used as the IOmeter manager/dynamo and the remaining IOmeter dynamo processes connected to that manager, all configured with 4 workers per dynamo process. The VMs were on the same ESXi host to ensure the results are comparable and no other influences could affect the tests.
Default ESXi Settings
Test Name | IOPS | MBps |
Max Throughput-100%Read | 67604 | 2107 |
RealLife-60%Rand-65%Read | 68803 | 528 |
Max Throughput-50%Read | 44780 | 1392 |
Random-8k-70%Read | 74179 | 574 |
Optimal ESXi Settings
Test Name | IOPS | MBps |
Max Throughput-100%Read | 93876 | 2924 |
RealLife-60%Rand-65%Read | 108679 | 841 |
Max Throughput-50%Read | 39949 | 1240 |
Random-8k-70%Read | 100129 | 773 |
Not a bad increase in IOPS and throughput, I must say! My advice? Apply the recommended settings!