Monday, August 15, 2016

Optimize ESXi for SSD

ESXi Host Settings

Set the maximum number of consecutive “sequential” I/Os allowed from one VM before switching to another VM:
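The command itself did not survive in this copy. On ESXi this is the Disk.SchedQuantum advanced setting; a sketch using the standard esxcli syntax, assuming the value of 64 that EMC commonly recommends for XtremIO (default is 8 — verify against the host configuration guide for your version):

```shell
# Raise Disk.SchedQuantum from the default of 8 to 64
# (64 is an assumed value, taken from EMC's XtremIO recommendations)
esxcli system settings advanced set --option /Disk/SchedQuantum --int-value 64

# Verify the new value
esxcli system settings advanced list --option /Disk/SchedQuantum
```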
Set the maximum I/O request size passed to storage devices. With XtremIO, it is required to change it from 32767 (default setting of 32MB) to 4096 (4MB):
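The command is missing here as well; this corresponds to the Disk.DiskMaxIOSize advanced setting, which can be changed like this (standard esxcli syntax assumed):

```shell
# Lower Disk.DiskMaxIOSize from 32767 (32MB) to 4096 (4MB); the value is in KB
esxcli system settings advanced set --option /Disk/DiskMaxIOSize --int-value 4096
```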
Set the maximum number of active storage commands (I/Os) allowed at any given time at the VMkernel:
Note: on vSphere 5.5 this setting is per volume! So after adding a volume to an ESXi 5.5 host, this command needs to be rerun.
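The commands themselves are missing from this copy. This is the Disk.SchedNumReqOutstanding setting; a sketch assuming the value of 256 that EMC recommends for XtremIO, showing both the pre-5.5 global form and the per-device 5.5 form:

```shell
# ESXi 5.0/5.1: a single global advanced setting
esxcli system settings advanced set --option /Disk/SchedNumReqOutstanding --int-value 256

# ESXi 5.5: set per device, so repeat for every XtremIO volume
# (naa.XXXX is a placeholder -- substitute your device identifier)
esxcli storage core device set -d naa.XXXX -O 256
```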
Verify which HBA module is currently loaded.
For example, when using QLogic:
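The original snippet is gone; a likely form, filtering the loaded module list for the QLogic FC driver (qla2xxx on older releases, qlnativefc on vSphere 5.5 and later):

```shell
# List loaded kernel modules and filter for the QLogic driver
esxcli system module list | grep ql
```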
Set the queue depth (in this case QLogic):
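The command itself is missing; a sketch assuming the vSphere 5.5 qlnativefc driver and the queue depth of 256 that EMC recommends for XtremIO (on older releases the module is qla2xxx with the same ql2xmaxqdepth parameter):

```shell
# Set the QLogic HBA queue depth to 256
# (driver name and value are assumptions -- match them to your environment)
esxcli system module parameters set -m qlnativefc -p "ql2xmaxqdepth=256"
```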
Reboot the host and then verify that the queue depth adjustment is applied:
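A likely form of the verification command, again assuming the qlnativefc driver:

```shell
# After the reboot, confirm the parameter is active
esxcli system module parameters list -m qlnativefc | grep ql2xmaxqdepth
```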
When using the vSphere Native Multipathing:
Set the vSphere NMP Round Robin path switching frequency to XtremIO volumes from the default value (1000 I/O packets) to 1:
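The commands did not survive extraction; a sketch of the usual two approaches, per device and via a SATP claim rule (the rule's strings follow what EMC documents for XtremIO, but check the host configuration guide for your exact release):

```shell
# Per existing XtremIO volume (naa.XXXX is a placeholder for your device ID)
esxcli storage nmp psp roundrobin deviceconfig set -d naa.XXXX --type=iops --iops=1

# Or add a SATP rule so newly presented XtremIO volumes pick this up automatically
esxcli storage nmp satp rule add -c tpgs_off -e "XtremIO Active/Active" \
  -M XtremApp -P VMW_PSP_RR -O iops=1 -s VMW_SATP_DEFAULT_AA -t vendor -V XtremIO
```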
When using the EMC PowerPath software:
Upload the EMC PowerPath software installer to a local datastore and run the following line (be sure to change the path and filename to your own specifications!):
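The line itself is missing from this copy; a sketch using the standard esxcli offline-bundle install syntax, with the datastore and bundle name as placeholders:

```shell
# Install PowerPath/VE from the uploaded offline bundle
# (path and filename are placeholders -- substitute your own)
esxcli software vib install -d /vmfs/volumes/datastore1/PowerPath_bundle.zip
```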

vCenter Settings

The maximum number of concurrent full cloning operations should be adjusted based on the XtremIO cluster size. The vCenter Server parameter config.vpxd.ResourceManager.maxCostPerHost determines the maximum number of concurrent full clone operations allowed (the default value is 8). Adjust the parameter according to the XtremIO cluster size as follows:

  • 10TB Starter X-Brick (5TB) and a single X-Brick – 8 concurrent full clone operations
  • Two X-Bricks – 16 concurrent full clone operations
  • Four X-Bricks – 32 concurrent full clone operations
  • Six X-Bricks – 48 concurrent full clone operations



  • VAAI

    Be sure to check if VAAI is enabled:
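    The check itself is missing here; a sketch querying the three VAAI-related advanced settings via esxcli:

    ```shell
    # All three values should return 1 (enabled)
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
    esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking
    ```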
    Why? Let me show you the difference between deploying a Windows 2012 R2 template (15GB) with VAAI enabled and disabled:
    VAAI enabled: 34 seconds


  • Performance Testing

    Before and after changing the settings, I ran some simple IOmeter tests with the following configuration:
    4x Windows 2012 R2 VMs, each with 4 vCPUs, 8GB RAM, a 40GB vDisk for the OS, and a 40GB vDisk connected to a Paravirtual SCSI adapter used for the IOmeter test file.

    One of the VMs was used as the IOmeter manager/dynamo, and the remaining IOmeter dynamo processes connected to that manager, all configured with 4 workers per dynamo process. The VMs were placed on the same ESXi host to make sure the results are comparable and no other influences could affect the tests.
    Default ESXi Settings
    Test Name                   IOPS    MBps
    Max Throughput-100%Read     67604   2107
    RealLife-60%Rand-65%Read    68803   528
    Max Throughput-50%Read      44780   1392
    Random-8k-70%Read           74179   574
    Optimal ESXi Settings
    Test Name                   IOPS    MBps
    Max Throughput-100%Read     93876   2924
    RealLife-60%Rand-65%Read    108679  841
    Max Throughput-50%Read      39949   1240
    Random-8k-70%Read           100129  773
    Not a bad increase in IOPS and throughput, I must say! My advice? Apply the recommended settings!