Tuesday, January 16, 2018

Set up SNMP properties for all hosts in vCenter

# PowerCLI script for configuring SNMP on all hosts in vCenter
#
# EverythingShouldBeVirtual.com
# Change the following to match your environment
# vi_server is your vCenter
$vi_server = "vcenterservername"
$vcuser = "vcenterserverusername"
$vcpass = "vcenterserverpassword"
$communities = "public"
$syslocation = "Atlanta"

Connect-VIServer -Server $vi_server -User $vcuser -Password $vcpass

# Setup variable to use in script for all hosts in vCenter
$vmhosts = @(Get-VMHost)

# Configure SNMP on each host in vCenter
foreach ($vmhost in $vmhosts) {
    Write-Host "vmhost = $vmhost"
    $esxcli = Get-EsxCli -VMHost $vmhost
    # Positional esxcli call: enables SNMP, sets the community string and system location
    $esxcli.system.snmp.set($null,$communities,"true",$null,$null,$null,$null,$null,$null,$null,$null,$null,$syslocation)
    $esxcli.system.snmp.get()
}

Disconnect-VIServer * -Confirm:$false

Git Branching Model

The main branches 

At the core, the development model is greatly inspired by existing models out there. The central repo holds two main branches with an infinite lifetime:
  • master
  • develop
The master branch at origin should be familiar to every Git user. Parallel to the master branch, another branch exists called develop.
We consider origin/master to be the main branch where the source code of HEAD always reflects a production-ready state.
We consider origin/develop to be the main branch where the source code of HEAD always reflects a state with the latest delivered development changes for the next release. Some would call this the “integration branch”. This is where any automatic nightly builds are built from.
When the source code in the develop branch reaches a stable point and is ready to be released, all of the changes should be merged back into master somehow and then tagged with a release number. How this is done in detail will be discussed further on.
Therefore, each time changes are merged back into master, this is a new production release by definition. We tend to be very strict about this, so that theoretically we could use a Git hook script to automatically build and roll out our software to our production servers every time there is a commit on master.
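As a sketch of that hook idea: a server-side post-receive hook receives one "oldrev newrev refname" line per updated ref on stdin, so a hypothetical deploy trigger could look like the following (the deploy command itself is a placeholder, and the body is wrapped in a function only so it can be exercised directly; in a real repo it would live in hooks/post-receive):

```shell
# Hypothetical post-receive hook sketch: Git feeds the hook one
# "oldrev newrev refname" line per updated ref on stdin.
post_receive() {
    while read -r oldrev newrev refname; do
        if [ "$refname" = "refs/heads/master" ]; then
            # Placeholder: a real hook would build and roll out $newrev here.
            echo "master updated: deploying $newrev"
        fi
    done
}
```

Pushes to any other branch simply fall through the loop without triggering a deploy.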

Supporting branches 

Next to the main branches master and develop, our development model uses a variety of supporting branches to aid parallel development between team members, ease tracking of features, prepare for production releases and to assist in quickly fixing live production problems. Unlike the main branches, these branches always have a limited lifetime, since they will be removed eventually.
The different types of branches we may use are:
  • Feature branches
  • Release branches
  • Hotfix branches
Each of these branches has a specific purpose and is bound to strict rules as to which branches may be its originating branch and which branches must be its merge targets. We will walk through them in a minute.
By no means are these branches “special” from a technical perspective. The branch types are categorized by how we use them. They are of course plain old Git branches.

Feature branches 

May branch off from: develop
Must merge back into: develop
Branch naming convention: anything except master, develop, release-*, or hotfix-*
Feature branches (or sometimes called topic branches) are used to develop new features for the upcoming or a distant future release. When starting development of a feature, the target release in which this feature will be incorporated may well be unknown at that point. The essence of a feature branch is that it exists as long as the feature is in development, but will eventually be merged back into develop (to definitely add the new feature to the upcoming release) or discarded (in case of a disappointing experiment).
Feature branches typically exist in developer repos only, not in origin.

Creating a feature branch 

When starting work on a new feature, branch off from the develop branch.
$ git checkout -b myfeature develop
Switched to a new branch "myfeature"

Incorporating a finished feature on develop 

Finished features may be merged into the develop branch to definitely add them to the upcoming release:
$ git checkout develop
Switched to branch 'develop'
$ git merge --no-ff myfeature
Updating ea1b82a..05e9557
(Summary of changes)
$ git branch -d myfeature
Deleted branch myfeature (was 05e9557).
$ git push origin develop
The --no-ff flag causes the merge to always create a new commit object, even if the merge could be performed with a fast-forward. This avoids losing information about the historical existence of a feature branch and groups together all commits that together added the feature.
With a plain fast-forward merge, it is impossible to see from the Git history which of the commit objects together have implemented a feature; you would have to manually read all the log messages. Reverting a whole feature (i.e. a group of commits) is a true headache in that situation, whereas it is easily done if the --no-ff flag was used.
Yes, it will create a few more (empty) commit objects, but the gain is much bigger than the cost.
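The effect is easy to verify in a throwaway repo: with --no-ff the branch tip becomes a two-parent merge commit, so the feature's commits stay grouped (the repo location and branch name here are made up for the demo):

```shell
set -e
repo=$(mktemp -d)                      # throwaway repo for the demo
cd "$repo" && git init -q .
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "initial"
base=$(git symbolic-ref --short HEAD)  # master or main, depending on git version
git checkout -q -b myfeature
git commit -q --allow-empty -m "feature work"
git checkout -q "$base"
git merge -q --no-ff -m "Merge branch 'myfeature'" myfeature
# A fast-forward would have left a linear history; --no-ff leaves a
# two-parent merge commit marking where the feature came in.
parents=$(git rev-list --parents -1 HEAD | awk '{print NF - 1}')
echo "parent count: $parents"
```

Run the same sequence without --no-ff and the tip is just the feature commit itself, with a single parent and no trace of the branch.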

Release branches 

May branch off from: develop
Must merge back into: develop and master
Branch naming convention: release-*
Release branches support preparation of a new production release. They allow for last-minute dotting of i’s and crossing t’s. Furthermore, they allow for minor bug fixes and preparing meta-data for a release (version number, build dates, etc.). By doing all of this work on a release branch, the develop branch is cleared to receive features for the next big release.
The key moment to branch off a new release branch from develop is when develop (almost) reflects the desired state of the new release. At least all features that are targeted for the release-to-be-built must be merged into develop at this point in time. All features targeted at future releases may not; they must wait until after the release branch is branched off.
It is exactly at the start of a release branch that the upcoming release gets assigned a version number—not any earlier. Up until that moment, the develop branch reflected changes for the “next release”, but it is unclear whether that “next release” will eventually become 0.3 or 1.0, until the release branch is started. That decision is made on the start of the release branch and is carried out by the project’s rules on version number bumping.

Creating a release branch 

Release branches are created from the develop branch. For example, say version 1.1.5 is the current production release and we have a big release coming up. The state of develop is ready for the “next release” and we have decided that this will become version 1.2 (rather than 1.1.6 or 2.0). So we branch off and give the release branch a name reflecting the new version number:
$ git checkout -b release-1.2 develop
Switched to a new branch "release-1.2"
$ ./bump-version.sh 1.2
Files modified successfully, version bumped to 1.2.
$ git commit -a -m "Bumped version number to 1.2"
[release-1.2 74d9424] Bumped version number to 1.2
1 files changed, 1 insertions(+), 1 deletions(-)
After creating a new branch and switching to it, we bump the version number. Here, bump-version.sh is a fictional shell script that changes some files in the working copy to reflect the new version. (This can of course be a manual change—the point being that some files change.) Then, the bumped version number is committed.
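Since the article leaves bump-version.sh fictional, a minimal sketch is easy to imagine; here it just writes the version into a VERSION file (the filename and single-file approach are assumptions, wrapped as a function so it is easy to try out):

```shell
# Hypothetical bump-version.sh sketch.
bump_version() {
    if [ -z "$1" ]; then
        echo "usage: bump-version.sh <version>" >&2
        return 1
    fi
    # Assumption: the project keeps its version in a single VERSION file.
    printf '%s\n' "$1" > VERSION
    echo "Files modified successfully, version bumped to $1."
}
```

A real script would typically rewrite version strings in several files (build metadata, installer manifests, and so on); the single-file version keeps the sketch short.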
This new branch may exist there for a while, until the release may be rolled out definitely. During that time, bug fixes may be applied in this branch (rather than on the develop branch). Adding large new features here is strictly prohibited. They must be merged into develop, and therefore, wait for the next big release.

Finishing a release branch 

When the state of the release branch is ready to become a real release, some actions need to be carried out. First, the release branch is merged into master (since every commit on master is a new release by definition, remember). Next, that commit on master must be tagged for easy future reference to this historical version. Finally, the changes made on the release branch need to be merged back into develop, so that future releases also contain these bug fixes.
The first two steps in Git:
$ git checkout master
Switched to branch 'master'
$ git merge --no-ff release-1.2
Merge made by recursive.
(Summary of changes)
$ git tag -a 1.2
The release is now done, and tagged for future reference.
Edit: you may also want to use the -s or -u <key> flags to sign your tag cryptographically.
To keep the changes made in the release branch, we need to merge those back into develop, though. In Git:
$ git checkout develop
Switched to branch 'develop'
$ git merge --no-ff release-1.2
Merge made by recursive.
(Summary of changes)
This step may well lead to a merge conflict (probably even, since we have changed the version number). If so, fix it and commit.
Now we are really done and the release branch may be removed, since we don’t need it anymore:
$ git branch -d release-1.2
Deleted branch release-1.2 (was ff452fe).

Hotfix branches 

May branch off from: master
Must merge back into: develop and master
Branch naming convention: hotfix-*
Hotfix branches are very much like release branches in that they are also meant to prepare for a new production release, albeit unplanned. They arise from the necessity to act immediately upon an undesired state of a live production version. When a critical bug in a production version must be resolved immediately, a hotfix branch may be branched off from the corresponding tag on the master branch that marks the production version.
The essence is that work of team members (on the develop branch) can continue, while another person is preparing a quick production fix.

Creating the hotfix branch 

Hotfix branches are created from the master branch. For example, say version 1.2 is the current production release running live and causing troubles due to a severe bug. But changes on develop are yet unstable. We may then branch off a hotfix branch and start fixing the problem:
$ git checkout -b hotfix-1.2.1 master
Switched to a new branch "hotfix-1.2.1"
$ ./bump-version.sh 1.2.1
Files modified successfully, version bumped to 1.2.1.
$ git commit -a -m "Bumped version number to 1.2.1"
[hotfix-1.2.1 41e61bb] Bumped version number to 1.2.1
1 files changed, 1 insertions(+), 1 deletions(-)
Don’t forget to bump the version number after branching off!
Then, fix the bug and commit the fix in one or more separate commits.
$ git commit -m "Fixed severe production problem"
[hotfix-1.2.1 abbe5d6] Fixed severe production problem
5 files changed, 32 insertions(+), 17 deletions(-)

Finishing a hotfix branch 

When finished, the bugfix needs to be merged back into master, but also needs to be merged back into develop, in order to safeguard that the bugfix is included in the next release as well. This is completely similar to how release branches are finished.
First, update master and tag the release.
$ git checkout master
Switched to branch 'master'
$ git merge --no-ff hotfix-1.2.1
Merge made by recursive.
(Summary of changes)
$ git tag -a 1.2.1
Edit: you may also want to use the -s or -u <key> flags to sign your tag cryptographically.
Next, include the bugfix in develop, too:
$ git checkout develop
Switched to branch 'develop'
$ git merge --no-ff hotfix-1.2.1
Merge made by recursive.
(Summary of changes)
The one exception to the rule here is that, when a release branch currently exists, the hotfix changes need to be merged into that release branch, instead of develop. Back-merging the bugfix into the release branch will eventually result in the bugfix being merged into develop too, when the release branch is finished. (If work in develop immediately requires this bugfix and cannot wait for the release branch to be finished, you may safely merge the bugfix into develop now as well.)
Finally, remove the temporary branch:
$ git branch -d hotfix-1.2.1
Deleted branch hotfix-1.2.1 (was abbe5d6).
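That exception can be sketched in a throwaway repo: with a release branch open (release-1.3, the version numbers, and the file name are all made up for the demo), the hotfix is back-merged into the release branch rather than develop:

```shell
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q .
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "initial"
base=$(git symbolic-ref --short HEAD)   # master or main, depending on git version
git branch develop
git branch release-1.3                  # a release branch is currently open
git checkout -q -b hotfix-1.3.1 "$base"
echo fix > hotfix.txt
git add hotfix.txt && git commit -q -m "Fixed severe production problem"
git checkout -q "$base"
git merge -q --no-ff -m "Merge branch 'hotfix-1.3.1'" hotfix-1.3.1
git tag -a -m "Version 1.3.1" 1.3.1
# The exception: merge into the open release branch instead of develop.
git checkout -q release-1.3
git merge -q --no-ff -m "Merge branch 'hotfix-1.3.1'" hotfix-1.3.1
git branch -d hotfix-1.3.1 >/dev/null
on_release=$(test -f hotfix.txt && echo yes)
echo "fix present on release-1.3: $on_release"
```

When release-1.3 is later finished and merged back into develop, the fix arrives there automatically.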

SQL Server to AWS RDS Migration Steps

This document covers the steps to migrate SQL Server from an on-premise DB to an AWS cloud RDS Aurora DB.
Following are the steps:
1) Create the Aurora DB in the cloud on the CLIP VPC

2) Install the AWS Schema Conversion Tool on your machine
The setup is available here:
 \\st.com\dept$\Information Technology Dept\IT-Sterling GA\SOA\FrameworksTeam\RDS Changes

3) Convert the schema to Amazon Aurora
  • Connect to the source database (SQL Server DB instance)
  • Connect to the target database (Aurora cloud; details provided above)
  • Select the schema from the source DB and generate the conversion report
  • The report shows the details about conversion compatibility
  • In our case, all objects were compatible except the following: 

  1. Procedures:
     CLIP3_ARCHIVE_AND_PURGE
     MIGRATE_CLIP_ARCHIVE_CONFIG
  2. Check constraints:
     CK_TRACKING_EVENT (we dropped this and used a reference table)
     CK_PREFERENCE (we will take care of this using a trigger)
  3. Tables with auto-increment keys must define that key as a primary key index; otherwise MySQL does not support auto-increment.
  4. The attached migration report has details around the findings for the DB objects.

4) Fix the conversion findings
  • Removed the CK_PREFERENCE check constraint
  • Replaced the CK_TRACKING_EVENT check with a reference key constraint against a new table
  • Rewrote the stored procedures to make them compatible with MySQL
  • Added a primary key index to the tables that were missing one (EMAIL_ALERT_TEMPLATE, UI_PREF_CONFIG)
5) Push data from SQL Server to Amazon Aurora (SQL Workbench)
  • Download SQL Workbench (Build 118) and unzip it
  • Download both the SQL Server driver (available at com/microsoft/sqlserver/sqljdbc4/4.0/sqljdbc4-4.0.jar in the repository) and the MySQL driver (mysql-connector-java-5.1.39-bin.jar, attached here)
  • Create a connection profile for each of these databases and set the driver path in the connection profile (via Manage Drivers)
  • Go to Tools in SQL Workbench and click the Data Pumper tool
  • Connect to both the source and target databases
  • Move data for each table one by one (here we need a DBA for QA and PROD to move data to Aurora)

6) Changes in the application to use the new Aurora DB
  • Change server.xml for each JVM to set the new connection string for the new database
  • The connection string is mentioned below:

    url="jdbc:mysql://clip.cluster-ctwaaurutcz2.us-east-1.rds.amazonaws.com:3306/CLIP3"
    password=<new password if different>
    driverClassName="com.mysql.jdbc.Driver"
  • Add mysql-connector-java-5.1.39-bin.jar to the <node>/lib folder
  • Change the persistence XML to use the MySQL dialect
  • Change in CLIP-UI
    1) main/resources/META-INF/persistence.xml

    Changed this property to point to the MySQL dialect:
     <property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect"/>

    2)shared-model/src/main/java/com/sterling/clip/shared/model/TradingPartner.java

    Removed the [] from the @Table(name = "TRADING_PARTNER") line in the model class. A table name in [] is not supported in MySQL.

    3)shared-model/src/main/resources/META-INF/spring/spring-shared-model-context.xml

    Changed the JPA properties to point to the MySQL dialect. (Should we externalize this as well?)

    <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
  • Change in BIF

    bif\receiver\receiver-gateway\src\main\resources\META-INF\spring\bif-receiver-gateway-context.xml

    Externalized this property to support BIF in both the SQL Server and MySQL versions:

    <prop key="org.quartz.jobStore.driverDelegateClass">${database.driver.jdbcdelegate}</prop>

    For the on-prem branch we have to use the SQL Server JDBC delegate, and for the cloud we have to use StdJDBCDelegate, as shown below.

    Added this property to the receiver-gateway's app.properties file for MySQL so it can be read from the property file:

       database.driver.jdbcdelegate=org.quartz.impl.jdbcjobstore.StdJDBCDelegate
  • Change in Receiver-gateway
    (Copied from Above)
    Added this property to the receiver-gateway's app.properties file for MySQL so it can be read from the property file:

       database.driver.jdbcdelegate=org.quartz.impl.jdbcjobstore.StdJDBCDelegate
  • Change in OMS-Receiver
    No Changes.
  • Change in OMS-Sender
    No Changes.
  • Change in Sender-gateway
    No Changes.

Cisco UCS VIC NIC teaming for physical SQL blades


CISCO URL with full details 

What to Do Next
Read the Release Notes for Cisco UCS Virtual Interface Card Drivers before installing the driver.

Installing the NIC Teaming Driver from the Control Panel

Procedure
    Step 1  In Windows, click Start > Control Panel.
    Step 2  Navigate to and click the Network and Sharing Center.
    For the specific location see the Windows server documentation.
    Step 3  In the Network and Sharing Center, click Manage Network Connections.
    Step 4  In the Network Connections folder, right-click on an Ethernet interface and choose Properties.
    Step 5  Click Install and choose Protocol > Add.
    Step 6  Browse to the drivers directory and click OK.
    The Cisco NIC Teaming Driver is installed and listed in the Ethernet interface properties.

What to Do Next

In the command prompt, run the enictool.exe utility to create and delete teams.

Installing the NIC Teaming Driver from the Command Prompt

Procedure
    Step 1  In Windows, open a command prompt with administrator privileges.
    Step 2  At the command prompt, enter enictool -p "drivers_directory"
    The Cisco NIC Teaming Driver is installed using the .inf files located in the specified directory.

    Example:
    This example installs the teaming driver using the .inf files located in the temp directory:
    C:\> enictool -p "c:\temp"

What to Do Next

Use the enictool.exe utility to create and delete teams.

Configuring the NIC Teaming Driver Using enictool.exe

Procedure
    Step 1  In Windows, open a command prompt with administrator privileges.
    Step 2  To create a team, enter enictool -c "list of connections" -m mode
    The mode options are as follows:
    • 1 - Active Backup
    • 2 - Active Backup with failback to active mode
    • 3 - Active Active (transmit load balancing)
    • 4 - 802.3ad LACP

    Example:
    This example creates a team of two NICs in Active Backup mode:
    C:\> enictool -c "Local Area Connection" "Local Area Connection 2" -m 1

    Step 3  To delete a team, enter enictool -d "name of the NIC team"

    Example:
    This example deletes a team called "Local Area Connection 3":
    C:\> enictool -d "Local Area Connection 3"
    Note: Local Area Connection 3 is the name of the NIC team, not the name of the individual adapters.

    Step 4  To view additional options and usage information, enter enictool /?
    Use the displayed command option information to configure the load balancing method, load balancing hash method, and other options.

PowerCLI Tips 

Set-PowerCLIConfiguration -InvalidCertificateAction Ignore

Visual Studio Code
http://www.lucd.info/2016/04/23/visual-studio-code-powercli/

Load PowerCLI in PowerShell ISE
# Load Windows PowerShell cmdlets for managing vSphere

# Add the VMware module for newer OS versions
Import-Module VMware.PowerCLI -Scope CurrentUser
Import-Module VMware.VimAutomation.Core

Suppress deprecation warnings 
Set-PowerCLIConfiguration -DisplayDeprecationWarnings $false -Scope User

Filters with PowerCLI
Get-VM | Where-Object {$_.NumCpu -gt 1}

# Output via grid
Get-VM | Out-GridView

# Foreach example
# Set the time on each ESX host
$hosts = Get-VMHost
foreach ($i in $hosts) {
    $t = Get-Date
    $dst = Get-VMHost $i | %{ Get-View $_.ExtensionData.ConfigManager.DateTimeSystem }
    $dst.UpdateDateTime((Get-Date ($t.ToUniversalTime()) -Format u))
}

# Property names
Get-Process | Sort-Object -Property Name


Getting the commands available in PowerCLI
Nouns and verbs

Example listing all of the commands:
Get-VICommand

Notice the headings: CommandType - Name - Definition. Name is the cmdlet (there are lots).

Get-VICommand | Measure-Object shows you the total number of cmdlets in PowerCLI.

You can group cmdlets by verb to view them; the verb is the part before the dash (in Wait-Task, the verb is "Wait"):
Get-VICommand | Group-Object Verb | Sort-Object Count

Or even group by noun:
Get-VICommand | Group-Object Noun | Sort-Object Count | Format-Wide -Column 3

Get-VICommand is for PowerCLI; Get-Command is for PowerShell in general.

More examples:
Get-Command *VMScript

Built-in help:
help New-VM

TrainSignal
The pipeline:
  • examine objects
  • filter objects
  • export data

Objects flow down the pipeline:
Get-Object - Process Object - Output Object

# Filter on more than 2 criteria at the same time
Get-Process | Where-Object {($_.PriorityClass -eq "Normal") -AND ($_.HandleCount -lt 300)}

# Export results to CSV

New-Item c:\temp -Type Directory
Get-Process | Select-Object Name,HandleCount | Export-Csv -Delimiter "`t" -Path "C:\temp\AllProcess.csv" -NoTypeInformation
notepad "C:\temp\AllProcess.csv"
Import-Csv -Delimiter "`t" -Path "C:\temp\AllProcess.csv"



$files = Get-ChildItem -Path C:\Folder -Filter *.txt -Recurse

$files.Where({ ([datetime]$_.LastWriteTime).Date -eq '7/16/2016' })





VMware NSX SSL Creation 

Using OpenSSL for NSX Manager SSL import: creates a CSR and a 4096-bit key.
Creating the NSX 6.4.2 SSL:
openssl req -out nsxcert.csr -newkey rsa:40...