Wednesday, June 29, 2016

Fix: Repeated characters when typing in remote console

Repeated characters when typing in remote console (196)

Details

When typing into a remote console, you see unintended repeated keystrokes.

Solution

If you are using a wide-area or low-bandwidth connection, the time delay over the network may be long enough to cause the virtual machine to start auto-repeat.
To reduce these effects, increase the time threshold necessary for auto-repeat in the remote console.
  1. Power off the virtual machine.
  2. Add a line, similar to this, at the end of your virtual machine's configuration (.vmx) file:

    keyboard.typematicMinDelay = "2000000"
    The delay is specified in microseconds, so the line in the example above increases the repeat time to 2 seconds. This should ensure that you never get auto-repeat unless you intend it.
  3. Power on the virtual machine.
To make the changes using the vSphere Client:
  • Power off the virtual machine.
  • Right-click the virtual machine and select Edit Settings.
  • Click Options > General > Configuration Parameters.
  • Click Add Row.
  • Under Name, enter keyboard.typematicMinDelay; in the Value field, enter 2000000.
  • Click OK.
  • Power on the virtual machine.
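
If you need this on many virtual machines, PowerCLI can add the setting as well. A minimal sketch, assuming PowerCLI is installed, you are connected with Connect-VIServer, and the VM (hypothetically named "MyVM" here) is powered off:

    # Add the advanced setting to the powered-off VM's configuration
    Get-VM "MyVM" | New-AdvancedSetting -Name "keyboard.typematicMinDelay" -Value "2000000" -Confirm:$false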

http://www.purepowershellguy.com/

Step-by-Step: Configure DHCP Failover with 2012 R2




In Part 1, I provided a walkthrough of how to install DHCP and configure a scope on your newly installed DHCP server. In this post, we are going to set up failover between two servers. I am assuming that two DHCP servers are installed and a scope is already configured.
So let's go ahead and configure failover for a given scope:

In the resulting dialog, you can either select all existing scopes or only the scopes you want to provide failover for:

On the next page, you need to specify the partner server. This is the server to which the scope information will be replicated:

After selecting the partner server, you need to configure the failover relationship parameters (a more detailed description is available in the TechNet article: http://technet.microsoft.com/en-us/library/dn338985.aspx):
  • Relationship name is descriptive and can be anything that helps you identify the relationship.
  • Maximum client lead time defines how much of a lease-time extension a partner server can provide to a client, based on the time known by the partner server.
  • Mode allows you to choose between hot standby mode and load balance mode. In hot standby mode, only one server services DHCP requests from clients. Load balance mode allows both servers in the partnership to serve DHCP clients.
  • If you select load balance mode, you can define how each server will serve DHCP clients by defining a percentage per server.
  • State switchover interval controls how long the standby server waits after losing communication with the primary server before assuming the active state and servicing DHCP clients. In load balance mode it may not be an important factor, as both servers can serve clients based on the defined percentage of scope IP addresses. In hot standby mode it is important to set; otherwise, the standby server will never switch over automatically.
  • To secure communication between DHCP partner servers, you have the option to enable message authentication between them. If enabled, you must specify a shared secret (a PowerShell equivalent of this configuration is sketched below).
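
If you prefer PowerShell to the wizard, the DhcpServer module's Add-DhcpServerv4Failover cmdlet can create the same relationship. A minimal sketch; the server names, scope ID, relationship name, and shared secret are placeholders:

# Create a 50/50 load-balanced failover relationship for a single scope
Add-DhcpServerv4Failover -ComputerName "dhcp1" -PartnerServer "dhcp2" -Name "dhcp1-dhcp2-failover" -ScopeId 192.168.1.0 -LoadBalancePercent 50 -MaxClientLeadTime 01:00:00 -SharedSecret "S0meSecret" -Force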

After configuring the failover mode and parameters, you are presented with a summary page:

When you click Finish, you will see a progress dialog informing you of the status of the failover configuration steps:

If all steps are successful, you now have a working DHCP failover pair, with no clustering required! Cheers!

New Features of Windows Server 2012 Failover Clustering


Windows IT Pro
Aug 22, 2012
Prepare for easier management, increased scalability, and more flexibility
In “Troubleshooting Windows Server 2008 R2 Failover Clusters,” I discussed troubleshooting failover clusters—specifically, the locations and tips for where you can go to get the data you need in order to troubleshoot a problem. The Microsoft Program Management Team looked at quite a few of the top problems and worked to improve them in Windows Server 2012 Failover Clustering. So this month, I’ll talk about the new features and functionality of Server 2012 Failover Clustering. The new changes for failover clustering offer easier management, increased scalability, and more flexibility.
 

Scalability Limits

One of the first things to talk about is scalability limits. With Server 2012 clustering, you now have a maximum limit of 64 nodes per cluster. If you’re running as a Hyper-V cluster with highly available virtual machines (VMs), the limit has increased to 4,000 VMs per cluster and 1,024 VMs per node. With these increased limits, Server Manager has been bolstered with the ability to discover and provide remote management capabilities. When you’ve configured a cluster, it will show all nodes, including the name of the cluster and any VMs on the cluster. In Server Manager, you would see where the remote management can be accomplished, as Figure 1 shows. With this capability, you can enable additional roles/features remotely.

Figure 1: Configuring remote management

A New Level of AD Integration

When you’re creating a cluster, you’ll experience a new level of detection about where the cluster creates an object in Active Directory (AD). When allowing the cluster to create the object, it will detect the organizational unit (OU) where the nodes reside and create the object in the same OU. It will still use the logged-on user account to create the Cluster Name Object (CNO), so this account needs to have Read and Create permissions on this OU. If you want to bypass this detection, or place it in a separate OU, you can specify that during the creation. For example, if I want to place the cluster name in an OU called Cluster, during creation I would input the data that Figure 2 shows.
Figure 2: Placing the cluster name in an OU called Cluster 
If you’re doing it through PowerShell, the command would be
New-Cluster "CN=WIN2012-CLUSTER,OU=Cluster,DC=Contoso,DC=com" -Node WIN2012-Node1,WIN2012-Node2,WIN2012-Node3,WIN2012-Node4

Quorum Configuration

The quorum configuration has been simplified, and a new dynamic quorum model is available—now the default when you’re creating a cluster. You can also manually remove nodes from participating in the voting. When you go through the Configure Cluster Quorum Wizard, you’re provided with three options:
• Use typical settings (recommended)—The cluster determines quorum management options and, if necessary, selects the quorum witness.
• Add or change the quorum witness—You can select the quorum witness; the cluster determines quorum management options.
• Advanced quorum configuration and witness selection—You determine the quorum management options and the quorum witness.
When you choose the typical settings, the wizard will select the quorum type as dynamic. With a dynamic quorum, the number of votes changes depending on the number of participating nodes. The way it works is that, to keep a cluster up, you must have a quorum or consensus of votes. Each node that participates in a cluster is a vote. If you have also chosen to have a witness disk or share, that’s an additional vote. To keep the cluster going, more than half of the votes must continue to run. You can use the math equation of (total votes +1)/2. I have nine total nodes in a cluster without a witness disk. So, using the math equation above, it would be (9+1)/2 or 5 total votes to keep the cluster up.
 
So, for example, consider what occurs with a Server 2008 R2 cluster and a Server 2012 cluster. In a Server 2008 R2 cluster, using the same nine nodes in the cluster, this means that I have nine total votes and will need five votes (nodes) to remain going to keep the cluster up. If there are only four nodes up, the cluster service will terminate because there aren’t enough remaining votes. The administrator would need to take manual actions to force the cluster to start and get back to production. In a new Server 2012 failover cluster, when a node goes down, the number of votes needed to remain up also dynamically goes down. With my nine nodes (votes), if one node (vote) goes down, the total vote count becomes eight. If another two nodes go down, the vote count becomes six. The Server 2012 cluster will continue running and stay in production without intervention needed. Dynamic Quorum is the default and recommended quorum configuration for Server 2012 clusters.
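
If you prefer to inspect or adjust these settings through PowerShell, here is a minimal sketch using cmdlets from the FailoverClusters module (the node name reuses the earlier WIN2012 example):

# Show the current quorum configuration
Get-ClusterQuorum
# Dynamic quorum is controlled by a cluster common property (1 = enabled, the 2012 default)
(Get-Cluster).DynamicQuorum = 1
# Manually remove a node's vote (0 = no vote, 1 = vote)
(Get-ClusterNode "WIN2012-Node4").NodeWeight = 0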
Figure 3: Changing the quorum configuration
 

To change the quorum witness configuration, you can right-click the name of the cluster in the far left pane, choose More Actions, and select Configure Cluster Quorum Settings as Figure 3 shows. The wizard will let you set a disk witness, set a file share witness, or leave it as dynamic. If you choose the advanced settings, one of the first settings you’ll determine is what nodes actually have a vote in the cluster, as Figure 4 shows. All nodes participate with a vote to achieve quorum. If you de-select a node, it won’t have a vote. Using the earlier example of nine nodes, for Server 2008 R2 clusters, you have only eight voting members, so a witness disk or share would need to be added. In both Server 2008 R2 and Server 2012 clusters, this non-voting node doesn’t have a vote; if it’s the only node left, the cluster service will stop and manual intervention will be necessary.
Figure 4: Determining which cluster nodes have a vote 
When going through the Configure Cluster Quorum Wizard, the next screen shows the option where you can select or de-select the dynamic quorum option, as Figure 5 shows. As you can see from it, the default action is selected and is also recommended. If you want to change the quorum configuration to add a witness disk or share, the next screen in the wizard, Select Quorum Witness, is where you can choose the witness disk or share to use.
Figure 5: The Configure Quorum Management page 

Cluster Validation

There are some new cluster-validation enhancements. One of the big benefits is that storage tests run significantly faster. The storage tests measure things such as which nodes can see the drives, perform failovers individually and in groups across all nodes, check whether a drive can be taken away from a node by the other nodes, and so on. In Server 2008 R2 failover clusters, if you had a large number of disks, the storage tests took a long time to complete. With Server 2012 cluster validation, the tests have been streamlined in their execution and in the speed at which they complete. A new option for the storage tests is that you can target specific LUNs to run the tests against, as you see in Figure 6. If you want to test a single LUN or a specific set of LUNs, just select the ones you want. There are also new tests for Cluster Shared Volumes (CSV) as well as for Hyper-V and the VMs. These tests check whether your networking is configured with the recommended settings to ensure that network connectivity can be made between machines, quick/live migrations are set up to work, the same network switches are created on all nodes, and so on.
Figure 6: Reviewing your storage status 
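
Validation can also be run from PowerShell. A sketch, assuming the FailoverClusters module; the disk resource name is hypothetical:

# Run only the storage tests, targeted at one specific disk
Test-Cluster -Cluster WIN2012-CLUSTER -Include "Storage" -Disk "Cluster Disk 2"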

Cluster Virtual Machine Monitoring

When you’re running highly available VMs with the Hyper-V role in a cluster, you can take advantage of a new feature called Virtual Machine Monitoring. With this new monitoring, you can actually have Failover Clustering monitor specific services within the VM and react if there is a problem with a service. For example, if you’re running a VM that provides print services, you can monitor the Print Spooler service. To set this up, you can:
1.    Right-click the VM in Failover Clustering.
2.    Choose More Actions.
3.    Choose Configure Monitoring.
4.    Choose the service or services that you would like to monitor.
If you want to set this up with PowerShell, the command would be
Add-ClusterVMMonitoredItem -VirtualMachine "VM Name" -Service Spooler
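
To verify what is being monitored afterward, the companion cmdlet can be used (same hypothetical VM name as above):

Get-ClusterVMMonitoredItem -VirtualMachine "VM Name"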
Failover Clustering will then “monitor” the VM and service through periodic health checks. If it determines that the monitored service is unhealthy, it will consider it to be in a critical state. It will first log the following event on the host. For example:
 
Event ID: 1250
Source: FailoverClustering
Description: Cluster Resource “Virtual Machine Name” in clustered role “Virtual Machine Name” has received a critical state notification. For a virtual machine this indicates that an application or service inside the virtual machine is in an unhealthy state. Verify the functionality of the service or application being monitored within the virtual machine.

It will then restart the VM (forced shutdown, but graceful) on the host that it’s currently running on. If it fails again, it will move it to another node to start. Virtual Machine Monitoring gives you a finer granularity of the kind of monitoring you want to have for your VMs. It also brings the added benefit of additional health-checking, as well as availability. Without Virtual Machine Monitoring, if a particular service has a problem, it would continue in that state and user intervention would be required to get it back up.
 

Cluster Aware Updating

Cluster Aware Updating (CAU) is new to Server 2012 Failover Clustering. This feature automates software updating (security patches) while maintaining availability. With CAU, you have the following actions available:
• Apply updates to this cluster
• Preview updates to this cluster
• Create or modify the Updating Run Profile
• Generate a report on past Updating Runs
• Configure cluster self-updating options
• Analyze cluster updating readiness
CAU will work in conjunction with your existing Windows Update Agent (WUA) and Windows Server Update Services (WSUS) infrastructures to apply important Microsoft updates. When CAU begins to update, it will go through the following steps:
1.    Put each node of the cluster into node maintenance mode.
2.    Move the clustered roles off the node. In the case of highly available VMs, it will perform a live migration of the VMs.
3.    Install updates and any dependent updates.
4.    Perform a reboot of the node, if necessary.
5.    Bring the node out of maintenance mode.
6.    Restore the clustered roles on the node.
7.    Move to update the next node.
You can start CAU from Server Manager, Failover Cluster Manager, or a remote machine. The recommendations and considerations for setting this up are as follows:
• Don't configure the nodes for automatic updating either from Windows Update or a WSUS server.
• All cluster nodes should be uniformly configured to use the same update source (WSUS server, Windows Update, or Microsoft Update).
• If you update using Microsoft System Center Configuration Manager 2007 and Microsoft System Center Virtual Machine Manager 2008, exclude cluster nodes from all required or automatic updates.
• If you use internal software distribution servers (e.g., WSUS servers) to contain and deploy the updates, ensure that those servers correctly identify the approved updates for the cluster nodes.
• Review any preferred owner settings for clustered roles. Configure these settings so that when the software-update process is complete, the clustered roles will be distributed across the cluster nodes.
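
For a one-time Updating Run from PowerShell, here is a hedged sketch using the ClusterAwareUpdating module (the cluster name reuses the earlier example; parameter values are illustrative):

# Scan for and apply updates across the cluster, stopping if any node fails
Invoke-CauRun -ClusterName WIN2012-CLUSTER -MaxFailedNodes 0 -RequireAllNodesOnline -Force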

Alternative Connections

In previous versions, the only way you could connect to a share was with the Client Access Point (the network name in the group) because the shares were scoped to only this name. More information about this behavior is explained in the blog post "File Share 'Scoping' in Windows Server 2008 Failover Clusters." This limited how administrators had clients connect to shares because there was only one option to connect. This was a big problem for administrators because, in some cases, it made server consolidations more difficult and time consuming; additional steps needed to be taken into consideration, which, in turn, led to longer downtimes to perform the consolidations. Because of this, Server 2012 Failover Clustering now gives you the ability to connect to shares via the virtual network name, the virtual IP address, or a CNAME that is created in DNS. One caveat is that when using CNAMEs, additional configuration is needed for the name. For example, suppose you had a file share with the clustered network name TXFILESERVER and you wanted to set up a CNAME of TEXAS in DNS to connect. Through PowerShell, you would execute
 
Get-ClusterResource "TXFILESERVER" | Set-ClusterParameter Aliases TEXAS
When this is done, you must take the network name resource offline and bring it back online before the change takes effect and the name answers connections.
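
A minimal sketch of the full sequence, assuming the clustered network name resource is literally named TXFILESERVER:

# Add the DNS alias, then cycle the network name resource so the alias takes effect
Get-ClusterResource "TXFILESERVER" | Set-ClusterParameter Aliases TEXAS
Stop-ClusterResource "TXFILESERVER"
Start-ClusterResource "TXFILESERVER"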
You need to consider the repercussions of connecting via the IP address or alias. When connecting via these methods, Kerberos won’t be the authentication method used because it will drop to using NTLM security. So, although connecting via these alternative methods does bring flexibility, the security trade-off for NTLM authentication must be taken into consideration.
 

CSV Updates

CSV has been updated with the following list of capabilities. These features provide for easier setups, broader workloads, and enhanced security and performance in a wider variety of deployments, as well as greater availability.
• Storage capabilities for Scale-Out File Servers (more on this later), not just highly available VMs
• A new CSV Proxy File System (CSVFS) to provide a single, consistent filename space
• Support for BitLocker drive encryption
• Direct I/O for file data access, enhancing VM creation and copy performance
• Removal of external authentication dependencies when a domain controller (DC) might not be available
• Integration with the new Server Message Block (SMB) 3.0 to provide for file servers, Hyper-V VMs, and applications such as SQL Server
• Use of SMB Multichannel and SMB Direct to allow CSV traffic to stream across multiple networks, and use of network adapters that support Remote Direct Memory Access (RDMA)
• Ability to scan and correct volumes with zero offline time as NTFS identifies, logs, and repairs issues without affecting the availability of CSV drives

Scale-Out File Servers

Scale-Out File Servers can host continuously available and scalable storage by using the SMB 3.0 protocol, and they utilize CSV for the storage. Benefits of Scale-Out File Servers include the following:
• Provides active-active file shares in which all nodes accept and serve SMB client requests. This functionality provides transparent failover to other cluster nodes during planned maintenance and unplanned failures.
• Increases the total bandwidth of all file server nodes. There is no longer the bandwidth concern of all network client connections going to a single node; instead, a Scale-Out File Server gives you the ability to transparently move a client connection to another node to continue servicing that client without any network disruption. The limit for a Scale-Out File Server at this time is eight nodes.
• CSV takes the improved Chkdsk times a step further by eliminating the offline phase. With CSVFS, you can run Chkdsk without affecting applications.
• Another new CSV feature is CSV Cache, which can improve performance in some scenarios such as Virtual Desktop Infrastructure (VDI). CSV Cache allows you to allocate system memory (RAM) as a write-through cache. This provides caching of read-only unbuffered I/O that isn't cached by the Windows Cache Manager. CSV Cache boosts the performance of read requests, with write-through so that write requests are not cached.
• Eases management tasks. It is no longer necessary to create multiple clustered file servers where multiple disks and placement strategies are needed.
Scale-Out File Servers are ideal for SQL Server and Hyper-V configurations. The design behind Scale-Out File Servers is for applications that keep files open for long periods of time, as do most data operations. A SQL Server database or a Hyper-V VM .vhd file performs a lot of data operations (changes to the file itself), but doesn't perform a lot of actual metadata updates. It shouldn’t be used as a user data share where the workload has a high number of NTFS metadata updates. With NTFS, metadata updates are operations such as opening/closing files, creating new files, renaming existing files, deleting files, and so on, that make changes to the file system of the drive.
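
To stand up the role itself, a rough sketch (run on a cluster node; the role name, CSV path, and account are placeholders):

# Create the Scale-Out File Server role
Add-ClusterScaleOutFileServerRole -Name "SOFS01"
# Create a folder on a CSV volume and share it over SMB for the application hosts
New-Item -ItemType Directory -Path "C:\ClusterStorage\Volume1\VMStore"
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\VMStore" -FullAccess "CONTOSO\HyperVHosts"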
 

Always Improving

There are further enhancements, but I just don’t have the space to list them all. Our Microsoft Program Management Team has worked hard and listened to users about features they've been wanting from failover clusters, and the team has delivered. We've also taken some of the top issues seen in previous versions and made them into positives with the new version.

Monday, June 27, 2016

How to Configure Multiple TCP/IP Stacks in vSphere 6


It's easy to miss neat features that get added to really broad products, because they don't get publicized as the most important thing about the release. But the under-publicized features can also be really useful and important to some environments. I wrote recently about VM Component Protection, which falls into this category. Another addition to vSphere 6 that's important but under-discussed is the addition of independent vMotion and Provisioning TCP/IP stacks to the ESXi host right out of the box.

In vSphere 5.1 and earlier, there was a single TCP/IP stack that all the different types of network traffic used. This meant that management, virtual machine (VM) traffic, vMotion, Network File Copy and so on were all bound to use the same stack. While this was generally tolerable, it did cause irritation on occasion when it would have made more sense to configure a certain type of traffic differently than the rest. Because of the shared stack, all VMkernel interfaces had some things in common: they shared the same default gateway, the same memory heap, and the same ARP and routing tables.
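
If you want to see the stacks a host has out of the box, PowerCLI can list them. A small sketch, assuming PowerCLI 6.x and an active Connect-VIServer session (the host name is hypothetical):

    # List the host's TCP/IP stacks (defaultTcpipStack, vmotion, vSphereProvisioning)
    Get-VMHost "esxi01.lab.local" | Get-VMHostNetworkStack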

Repointing the VMware vCenter Server 6.0 between External Platform Services Controllers within a Site in a vSphere Domain (2113917)

Repointing a vCenter Server Appliance to another external Platform Service Controller in the same Site

  1. Log in to the vCenter Server Appliance Linux console as the root user.
  2. Run this command:

    shell.set --enabled true

  3. Run the shell command.
  4. Run this command to repoint the vCenter Server Appliance to Platform Services Controller (PSC) appliance:

    /usr/lib/vmware-vmafd/bin/vmafd-cli set-dc-name --server-name localhost --dc-name systemname_of_second_PSC

    where systemname_of_second_PSC is the system name used to identify the second Platform Services Controller.

    Notes:
    • The system name must be an FQDN or a static IP address; the FQDN is case sensitive.
    • If the second Platform Services Controller runs on an HTTPS port that is different from the HTTPS port of the first Platform Services Controller, update the port number using this command:

      /usr/lib/vmware-vmafd/bin/vmafd-cli set-dc-port --server-name localhost --dc-port https_port_of_second_PSC
  5. Run this command to stop the services in the vCenter Server Appliance:

    service-control --stop --all
  6. Run this command to start the services in the vCenter Server Appliance:

    service-control --start --all 
  7. Using the vSphere Web Client, log in to the vCenter Server instance in the vCenter Server Appliance to verify that the vCenter Server is up and running and manageable.

Repointing a Windows vCenter Server to another External Platform Service Controller

  1. Log in as an Administrator to the virtual machine or physical server on which you installed vCenter Server.
  2. Click Start > Run, type cmd and press Enter.
  3. Run this command to repoint the vCenter Server instance to the second Platform Services Controller:

    C:\Program Files\VMware\vCenter Server\vmafdd\vmafd-cli set-dc-name --server-name localhost --dc-name system_name_of_second_PSC

    where system_name_of_second_PSC is the system name used to identify the second Platform Services Controller.

    Notes:
    • The system name must be an FQDN or a static IP address; the FQDN is case sensitive.
    • If the second Platform Services Controller runs on an HTTPS port that is different from the HTTPS port of the first Platform Services Controller, update the port number using this command:

      C:\Program Files\VMware\vCenter Server\vmafdd\vmafd-cli set-dc-port --server-name localhost --dc-port https_port_of_second_PSC
  4. Run this command to change to the vCenter Server and/or Platform Services Controller installation directory:

    cd C:\Program Files\VMware\vCenter Server\bin

    Note: This command uses the default installation path. If you have installed vCenter Server and/or Platform Services controller to another location, modify this command to reflect the correct install location.
  5. Run this command to stop the vCenter Server services:

    service-control --stop --all
  6. Run this command to start the vCenter Server services:

    service-control --start --all
  7. Using the vSphere Web Client, log in to the vCenter Server instance to verify that the vCenter Server is active.

MANAGE/EDIT THE CORE DUMP CONFIGURATION OF AN ESXI HOST

http://www.renjithmenon.com/manageedit-the-core-dump-configuration-of-an-esxi-host/



What is a Core Dump?
A core dump is diagnostic information captured during a purple screen error or a host failure. The core dump resides in a diagnostic partition; to create a partition, you need at least 100 MB of free space on a locally or remotely available disk.
Coredump partition limitations
  • It cannot be created on an iSCSI LUN (either hardware or software) or on an FCoE LUN.
  • Diagnostic partitions cannot be created using vCLI; however, existing partitions can be activated.
    To view the core dump partition on your ESXi host, execute the command below from the ESXi CLI:
    esxcli system coredump partition get
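
    The same query can be made remotely through PowerCLI's Get-EsxCli, using the older V1 object syntax. A small sketch, assuming an active PowerCLI connection; the host name is hypothetical:

    # Get the esxcli interface for a host and read its coredump partition configuration
    $esxcli = Get-EsxCli -VMHost "esxi01.lab.local"
    $esxcli.system.coredump.partition.get()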

ESXi Disable Core Dump file

http://www.virten.net/2014/02/permanently-disable-esxi-5-5-coredump-file/


Monday, June 20, 2016

Fix VSS Error using VMware Converter for Windows without reboot

REF

We have been having issues with certain Windows 2003 systems (both 32-bit and 64-bit) failing backups when the "Microsoft Software Shadow Copy Provider" service goes into a hung state, i.e. "Stopping".
We have a workaround short of a reboot. To identify the process hosting the hung service, run:
C:\> tasklist /svc | findstr swprv

To resolve this problem, follow these steps:
  1. Click Start, click Run, type Regedit, and then click OK.
  2. Locate and then click the following registry subkey:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\EventSystem\{26c409cc-ae86-11d1-b616-00805fc79216}\Subscriptions
  3. On the Edit menu, click Delete, and then click Yes to confirm that you want to delete the subkey.
  4. Exit Registry Editor.
  5. Click Start, click Run, type services.msc, and then click OK.
  6. Right-click the following services one at a time. For each service, click Restart:
    • COM+ Event System
    • COM+ System Application
    • Microsoft Software Shadow Copy Provider
    • Volume Shadow Copy
  7. Click Start, click Run, type cmd, and then click OK.
  8. At the command prompt, type vssadmin list writers, and then press ENTER.
  9. If the VSS writers are now listed, close the Command Prompt window. You do not have to complete the remaining steps. 

    If the VSS writers are not listed, type the following commands at the command prompt. Press ENTER after each command.
    • cd /d %windir%\system32
    • net stop vss
    • net stop swprv
    • regsvr32 ole32.dll
    • regsvr32 oleaut32.dll
    • regsvr32 /i eventcls.dll
    • regsvr32 vss_ps.dll
    • vssvc /register
    • regsvr32 /i swprv.dll
    • regsvr32 es.dll
    • regsvr32 stdprov.dll
    • regsvr32 vssui.dll
    • regsvr32 msxml.dll
    • regsvr32 msxml3.dll
    • regsvr32 msxml4.dll
    Note: The last command may not run successfully.
  10. At the command prompt, type vssadmin list writers, and then press ENTER.
  11. Confirm that the VSS writers are now listed.
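
If PowerShell is available on the system (it is not installed by default on Windows 2003), steps 1 through 6 can be scripted. A rough sketch, using what I believe are the standard short names of the services listed in step 6:

# Delete the EventSystem subscriptions subkey (steps 2-3)
Remove-Item -Path "HKLM:\SOFTWARE\Microsoft\EventSystem\{26c409cc-ae86-11d1-b616-00805fc79216}\Subscriptions" -Recurse
# Restart the COM+/VSS service chain (step 6); -Force also restarts dependent services
'EventSystem','COMSysApp','swprv','VSS' | ForEach-Object {
    Restart-Service -Name $_ -Force -ErrorAction SilentlyContinue
}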

Vmware NSX SSL creation 

Using OpenSSL for NSX Manager SSL import: creates a CSR and 4096-bit key. Creating NSX 6.4.2 SSL: openssl req -out nsxcert.csr -newkey rsa:40...