Not what you expected?

You may have arrived here expecting different content.  Perhaps you were looking for something related to triathlon, only to be surprised by IT-related content – sorry about that!

For many years I have “maintained” three blogs, which meant three instances of the blog software and three sets of content to go with them.  I decided that I wanted to simplify things a bit, and one way to do that was to merge the content of the three blogs.

I believe all the categories from the previous blogs are still around, so once I add a bit more in the way of navigation it should be easy enough to find the content you are looking for.  And maybe, just maybe, it will give me an opportunity to post more varied content (or just content at all).

Oops, I let my ESX eval expire

Last week I had an unfortunate experience: I ignored the expiration warnings on a bunch of nested ESXi hosts and allowed their evaluation licenses to expire.

Rather than logging into a bunch of individual hosts to apply a key, I wrote a quick script that connects to vCenter, ensures all the hosts have the correct license, and then reconnects them.

It’s not pretty, but it’s effective should this happen to you.

Here it is, if you'd like to take a look:

param (
    [Parameter(Mandatory=$true)][string]$vCenterHost,
    [Parameter(Mandatory=$true)][string]$vCUsername,
    [Parameter(Mandatory=$true)][string]$vcPassword,
    [Parameter(Mandatory=$true)][string]$hostUser,
    [Parameter(Mandatory=$true)][string]$hostPassword,
    [Parameter(Mandatory=$true)][string]$hostLicenseKey
)
try
{
    Write-Host "Correcting licensing on hosts in vCenter: $vCenterHost" -ForegroundColor Green

    # Connect to vCenter; -ErrorAction Stop ensures a failed connection drops us into the catch block
    $vCenterConnection = Connect-VIServer -Server $vCenterHost -User $vCUsername -Password $vcPassword -ErrorAction Stop

    # We assume every host managed by this vCenter should be updated
    $vmhosts = Get-VMHost -Server $vCenterConnection

    foreach ($vmhost in $vmhosts)
    {
        try
        {
            Write-Host "Correcting licensing on host: $vmhost" -ForegroundColor Green

            # Connect directly to the ESXi host and apply the license key there
            $vmHostConnection = Connect-VIServer -Server $vmhost.Name -User $hostUser -Password $hostPassword -ErrorAction Stop
            Get-VMHost -Server $vmHostConnection | Set-VMHost -LicenseKey $hostLicenseKey -Confirm:$false

            # If the expired license left the host disconnected in vCenter, reconnect it
            if ($vmhost.ConnectionState -ne 'Connected')
            {
                Set-VMHost -VMHost $vmhost -State Connected -Confirm:$false
            }

            Disconnect-VIServer -Server $vmHostConnection -Confirm:$false
        }
        catch
        {
            Write-Host "Unable to correct the license on ESXi host $vmhost"
        }
    }

    Disconnect-VIServer -Server $vCenterConnection -Confirm:$false
}
catch
{
    Write-Host "Unable to connect to vCenter server $vCenterHost"
}
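
For reference, here is roughly how you would invoke it.  The script name and every value below are placeholders; substitute your own vCenter, credentials, and license key.

.\Repair-HostLicenses.ps1 -vCenterHost vcsa.lab.local `
    -vCUsername administrator@vsphere.local -vcPassword 'SuperSecret1!' `
    -hostUser root -hostPassword 'AlsoSecret1!' `
    -hostLicenseKey 'XXXXX-XXXXX-XXXXX-XXXXX-XXXXX'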

Let’s Build an image pipeline! (part 3)

Today we will be diving into each section of our packer JSON file and what it does.

This is the third post in a planned four-part series.

If you haven’t gotten the gist by now, the permutations of what you can do with packer in terms of building images are nearly endless.  Thus far we have focused on a very narrow case: building a RHEL/CentOS 7 VMware-compatible image to then clone as needed.  However, with an understanding of the different tunables, you can easily adapt a working image build to do many interesting things.  We’ll again be using my sample as a reference to facilitate discussion.  You will also want to check out the excellent documentation for packer.

There are four sections in a packer JSON template:

  • Builders
  • Provisioners
  • Post-Processors
  • Variables

The only truly required section is builders, but each of the others adds useful capabilities.
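
For orientation, this is the overall shape of a packer JSON template with all four sections present.  It is a skeleton only; the placeholder entries would need to be fleshed out before packer would accept it as a working template.

{
  "variables": {
    "os_version": "7"
  },
  "builders": [
    { "type": "vmware-iso" }
  ],
  "provisioners": [
    { "type": "shell", "inline": ["echo provisioning"] }
  ],
  "post-processors": []
}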

Builders

Builders is the section of the packer JSON template that defines the instructions packer uses to create the actual virtual machine and install the operating system.  There are a number of builders, allowing packer to support many different platforms, including VMware, OpenStack, Azure, AWS, etc.  Some builders, such as the AWS and VMware ones, contain multiple sub-types to allow for additional flexibility.

For example, the VMware builder allows you to build a VM straight from an ISO or to use an existing VM as a starting point.  The key takeaway about builders is that they are the definition of the virtual machine you are creating.

Regardless of whether the system being built is destined to be a vSphere template, an Amazon EC2 instance, or a vSphere virtual machine that you are provisioning directly, the builder section gives you full control over what your output will look like from a hardware perspective: network connectivity, CPU/memory configuration, which answer file to use, etc.

Speaking of the answer file, you can present it to the virtual machine either through packer’s built-in web server or via a virtual floppy generated from a list of files or directories.
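
To make that a bit more concrete, here is a trimmed-down sketch of a vmware-iso builder performing a CentOS 7 kickstart install over packer’s built-in HTTP server.  The ISO URL, checksum, credentials, and kickstart file are placeholders, and depending on your packer version the checksum may need to be split into separate iso_checksum/iso_checksum_type fields, so treat it as illustrative rather than copy/paste ready.

{
  "type": "vmware-iso",
  "iso_url": "http://mirror.example.com/CentOS-7-x86_64-Minimal.iso",
  "iso_checksum": "sha256:0000000000000000000000000000000000000000000000000000000000000000",
  "guest_os_type": "centos-64",
  "disk_size": 40960,
  "vmx_data": {
    "numvcpus": "2",
    "memsize": "4096"
  },
  "http_directory": "http",
  "boot_command": [
    "<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter>"
  ],
  "ssh_username": "root",
  "ssh_password": "ChangeMe123",
  "shutdown_command": "shutdown -P now",
  "output_directory": "output-centos7"
}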

Provisioners

Provisioners is the section of the packer JSON template that lets you define the actions packer should take after the builder has finished installing the operating system.  There are a number of provisioners, which ultimately allow you to dial in the configuration of your system or template exactly as you need it.  You can choose from Chef, Ansible, PowerShell, Puppet, the shell, or even create your own.  In our example we utilize the following provisioners:

file – This provisioner simply copies files from the machine executing packer to a specified location on the provisioned system, either for use later in the build by a different provisioner or for longer-term use down the road.
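
A minimal file provisioner entry looks something like this (the source and destination paths are just examples):

{
  "type": "file",
  "source": "files/motd",
  "destination": "/tmp/motd"
}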

shell – This provisioner executes shell commands on the provisioned system.  These can range from simple shell commands to scripts placed there by something like the file provisioner.  There is a separate variant of the shell provisioner for Windows.

A cool feature of all provisioners is the ability to limit their execution to only specific builds – for instance, if your packer JSON contains definitions for multiple templates/machines.  You accomplish this by including the “only” directive in the provisioner.

{
  "type": "shell",
  "script": "script.sh",
  "only": ["vmware-iso"]
}

You also have the ability to provide overrides for different builders.  Say, for example, you are building an instance that needs a different set of parameters for a command.  You can see an example of us using the provisioner override on lines 108 to 124 of our template.

{
  "type": "shell",
  "scripts": [
    "scripts/sethostname.sh",
    "scripts/satellite_reg.sh",
    "scripts/install_ansible.sh",
    "scripts/build_version_file.sh"
  ],
  "execute_command": "echo 'labansible'|sudo {{ .Vars }} -S bash '{{ .Path }}'",
  "override": {
    "CentOS": {
      "environment_vars": [
        "OS_VERSION={{user `os_version`}}",
        "IMAGE_BUILD_VERSION={{user `image_build_version`}}",
        "KATELLO_HOSTNAME={{user `satellite_server_fqdn`}}",
        "SATELLITE_ORG={{user `satellite_org`}}",
        "SATELLITE_ACTIVATIONKEY={{user `satellite_activation_key_centos`}}"
      ]
    },
    "RHEL": {
      "environment_vars": [
        "OS_VERSION={{user `os_version`}}",
        "IMAGE_BUILD_VERSION={{user `image_build_version`}}",
        "KATELLO_HOSTNAME={{user `satellite_server_fqdn`}}",
        "SATELLITE_ORG={{user `satellite_org`}}",
        "SATELLITE_ACTIVATIONKEY={{user `satellite_activation_key_rhel`}}"
      ]
    }
  }
}

Post-Processors

Post-Processors aren’t directly covered in our example.  We actually have what could be called a custom post-processor in the deploy-image.ps1 script.  In the year-plus since I originally developed this code, HashiCorp and the community have done a great job of fleshing out the post-processing options, including taking the machine you built in, say, a Fusion or Workstation environment, depositing it directly into a vSphere environment and running it, or converting it to a template (and registering it in vCenter).

Other options allow you to import the image into Amazon, or to execute things like our deploy-image.ps1 script directly from packer, as opposed to doing it via the Jenkins pipeline like we did.
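
As a rough illustration of that last point (this is not something our example template uses), the vsphere-template post-processor can mark a machine that has landed in vCenter as a template.  The connection details below are placeholders, and you should verify the option names against the packer documentation for your version.

{
  "type": "vsphere-template",
  "host": "vcenter.example.com",
  "username": "administrator@vsphere.local",
  "password": "ChangeMe123",
  "datacenter": "Lab",
  "folder": "/templates",
  "insecure": true
}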

Variables

The last “section” I want to cover doesn’t itself do anything to build your virtual machine, but, as the name implies, it allows you to define a set of variables whose values can then be supplied at run time to provide a level of dynamic behavior.

Say, for example, you wanted to build a CentOS or Red Hat image, but give yourself the flexibility to change certain aspects of the build – such as the base ISO file or the Katello subscription key.  The variables section allows you to define each of these with a reasonable default value, which you can then override when you call the packer executable.

You can even reference environment variables on the executing system to manipulate the build – in our example we read the environment variable IMAGE_OUTPUT_DIR, which we then reference later when we tell packer where to build the image.

  "variables": {
   ...
    "output_directory": "{{env `IMAGE_OUTPUT_DIR`}}"
  },
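
At run time you then feed in the values you care about, either via environment variables (as with IMAGE_OUTPUT_DIR above) or with -var flags on the packer command line.  The template file name and values here are only examples, shown from a PowerShell prompt.

$env:IMAGE_OUTPUT_DIR = "D:\packer-output"
packer build -var "os_version=7.6" -var "image_build_version=42" rhel-centos.json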

Wrapping it up

It’s hard to convey everything you can do with packer in a single series of posts; the possibilities are endless, and it can be a bit intimidating when you are digging in and trying to figure out the best way to leverage it.  Next time, I’ll wrap up this series by brain-dumping some of the cool things I think you can accomplish with Packer to make your life as a builder of things easier, and to give you more time for coffee and other fun activities.

Configure VCSA Backup

Note: I enjoy writing, but often find myself not doing so simply because it can take a lot of effort to write a “good” post.  In an effort to post more regularly, I’m planning to share more frequent, short write-ups of things I am doing in my lab.

One of the most overlooked aspects of your management infrastructure is backup and recoverability.  Without your management infrastructure, the job of recovering everything else gets that much harder.  Fortunately, it is simple to protect your vCenter: all that is required is a backup location (FTPS, HTTPS, SCP, FTP, NFS, SMB, or HTTP), a username, and a password.  These directions were generated and validated on a 6.7 U2 VCSA, but should be basically the same on 6.5.
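
If you would rather script it than click through the VAMI, the appliance REST API exposes the same backup job.  Below is a rough PowerShell sketch of how I would approach it; the endpoint paths match my reading of the 6.7 appliance API, but treat the request body in particular as an assumption and verify it against the API explorer on your own VCSA.  Host names and credentials are placeholders.

# Rough sketch - 6.7 appliance (VAMI) REST API assumed; verify paths and body in the API explorer
# Assumes the VCSA certificate is trusted; on PowerShell 7 you can add -SkipCertificateCheck
$vcsa  = "vcsa.lab.local"
$cred  = Get-Credential    # SSO administrator account
$pair  = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$basic = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($pair))

# Authenticate and grab an API session token
$session = Invoke-RestMethod -Method Post -Uri "https://$vcsa/rest/com/vmware/cis/session" -Headers @{ Authorization = "Basic $basic" }
$headers = @{ "vmware-api-session-id" = $session.value }

# Kick off a one-off backup to an SCP target (body shape is my assumption)
$body = @{
    piece = @{
        location_type     = "SCP"
        location          = "backuphost.lab.local/backups/vcsa"
        location_user     = "backup"
        location_password = "ChangeMe123"
        parts             = @("common")
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post -Uri "https://$vcsa/rest/appliance/recovery/backup/job" -Headers $headers -Body $body -ContentType "application/json"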

Continue reading

Perspective


This week I was able to attend the Wisconsin VMUG UserCon.  In the past I was always hesitant to go to these, as I assumed I wouldn’t learn anything new.  I always figured I knew what I needed to know technically: what was I possibly going to encounter that would increase my technical knowledge?  In other words, rightly or wrongly, I was focused only on my own technical growth.  This year I was excited about the UserCon, partly because I got a bit of recognition in front of the crowd from none other than Pat Gelsinger, but mostly for the opportunity to sit through a couple of sessions and have conversations with people, gain insight into the challenges and struggles they are facing, and figure out how to turn that perspective to my benefit.

Continue reading

VCDX #275

It took me, not surprisingly, longer than expected to sit down and write this post.  Primarily because I am a little in shock that I actually achieved this, but also because imposter syndrome kicked in a bit, and really I simply needed time to sort through all the emotions I experienced: euphoria, disbelief, letdown, loss of direction, etc.  Now that I have had almost two weeks to sort through my thoughts and feelings, I figured it was time to share my thoughts on my journey to VCDX.

Continue reading

VCDX-DCV: Strike One

We regret to inform you...

Back in February, I decided that I was going to seriously pursue my VCDX certification.  This is a big deal not only because of the amount of work it requires, but because for the past 15 or so years I have shunned the pursuit of any and all certifications.

After completing all the prerequisite exams, I submitted my design in September, targeting a December defense date.  To my joy/surprise/terror, in early November I was notified that my design was cleared to move on to the defense phase.

Continue reading

NSX-v ECMP Active/Passive configuration

OK, I know that the title of this post is a bit of an oxymoron.  ECMP active/passive?  Isn’t ECMP about active/active/active/active/…?  Yes it is, but imagine a scenario where you are building your NSX-v deployment across two campuses, with datacenter firewalls upstream – making it important that data flows stay predictable and asymmetric routing doesn’t become an issue: ingress through datacenter A, egress through datacenter A.  Roie Ben Haim has a fantastic write-up on why this is important, so I’ll leave you with this link as a primer.

Welcome back.  Now that we are on the same page about why we need to control datacenter ingress/egress, let’s further imagine that we have some incredibly demanding north-south requirements that force us to lay down the maximum ECMP configuration of 8 ESGs.  Does that mean you can’t have passive, lower-weighted ESGs on the secondary site of the DLR?  Does it mean you have to fail the ESGs over to the secondary site?

These questions hit me today, so I set out to answer them.

Continue reading

VCAP 6 DCV – Deploy Exam

Earlier this month I took the VCAP 6 DCV – Deploy exam to round out my VCIX 6.5 certification.  Previously, I mentioned how important I felt time management was during the VCAP 6.5 DCV – Design exam; well, that exam had nothing on the Deploy exam.  I used the entire duration and probably could have used another 30 to 40 minutes to increase my level of confidence walking out the door.

I walked away leaving two questions unanswered because while I knew what was needed to answer them correctly, I don’t work in those aspects of vSphere enough to know how to accomplish them without digging around in the documentation.

Beyond that, I had one question I knew I got wrong and another two I felt iffy about.

Regarding the experience of the exam itself:

It was incredibly stressful – imagine the stress of a huge meltdown in your environment at work.  Stressful, right?  When the day is over you want to go home and sit on the couch with a beer.  Now take that stress level and subject yourself to it 27 times in a little over 3 hours, in an environment you have never touched before, on someone else’s computer, without Google (or coworkers/VMware support).  I took the exam first thing in the morning, and when I was done I was completely toast.

That said, the exam interface itself used the same HoL format as the online training courses and, well, the Hands-on Labs.  I had no issues there, other than missing my good old Zenbook or Surface.

My biggest complaint about the whole experience was that it took over a week to get my results, which was more stressful than the exam itself.  Fortunately, when the email came, the news was good!

Content-wise, I felt it was a really good all-around test of ability for a vSphere administrator/engineer/architect type of person.  If I had to do it over, I’d focus on the areas of the vSphere suite that I don’t touch every day.  The stuff I do every day or have focused on a lot in the past (PowerCLI, HA, DRS, esxcli, iSCSI, networking) I would flat-out ignore; for anything related to those, I either don’t need documentation at all or I simply need a reminder of the exact spelling of an advanced parameter or something similar.

Review the blueprint and drill into the things you aren’t comfortable with.  You don’t have to build to a level of mastery, but get yourself to the point where you know what needs to be done so you can accomplish it 70% to 80% on your own and find the remaining 20% to 30% in the documentation within 30 to 60 seconds – i.e., you understand the breakout of the vSphere documentation and know the keywords to search for to find the right section quickly in a PDF.

Lastly – the only real preparation I did for this exam was the solutions4crowds exam simulator.  The simulator was only 17 questions, but it provided a great mockup of what to expect, and it was worth every bit of the $10 the owner asks for it.

Triathlon and me

The chief cause of failure and unhappiness is trading what you want most for what you want now.

I will leave out my thoughts on Ironman Texas 2018, the race itself, both the positive and the negative.  Instead I want to talk about my relationship with triathlon over the last couple of years.  Aside from my relationship with my wife and my career, triathlon is the longest-running activity I have engaged in in my life – this August it will be 15 years that I have been doing triathlons.  It isn’t surprising that it has had its ups and downs.

What I have struggled with over roughly the last two years is that the stretch “started” with a great race and then fed into a series of poor performances, poor training, and increasing frustration.  There have been a number of contributing factors:

  • Layoffs at work in 2017 = stressful work environment
  • Assignment to a new team/project at work getting to do awesome stuff = Very engaging job
  • 3 young children = Crazy Train
  • Paradoxical downward self-feeding cycle of triathlon-related events = Very little excitement about training or racing.

The truth is, in terms of life overall I can’t say that I have ever been happier, but in terms of my pursuit of triathlon I have been incredibly unhappy, to the point where after Ironman Wisconsin last fall I contemplated whether I really wanted to do another triathlon.

Early this year I signed up for Ironman Texas, ostensibly to get a Kona slot for this fall (which, incidentally, didn’t happen), but even after signing up I didn’t change my level of engagement or my behaviors.  My engagement was so low that at one point my wife actually questioned whether going to race Texas was the right thing to do.

  • Could I really have a race that would contribute to changing the direction that triathlon was going in my life?
  • Was I going to enjoy myself?
  • Was it fair to my family to make this solo trip without a chance of accomplishing the original purpose of it?

If I’m honest about it, the answer to all those questions at the time of the conversation was likely no, but for whatever reason I could not just not do a race that I had signed up for.  It was as if I needed to do the race to tell myself what I needed to do.

In the lead-up to the race I was excited and nervous about it: was I going to enjoy myself, would it be a positive day?

In the end the race was not great in an absolute sense, but I had fun, and for the first time in over two years I enjoyed myself and was 100% engaged for the entire race.  It’s probably a bit premature to say that my love of triathlon is back and the furnace is roaring, but yesterday was a huge step in the right direction, and I’m excited to see if I can rekindle it and return to the level of fitness I had just a handful of years ago.