Day Two Automation Self Service Framework – Part 1

In my years on the administration and engineering side of the VMware world, a task I was frequently responsible for was creating policies around the standards of my environment and then defining a process to ensure those policies were adhered to.  Some examples of the policies I authored were things like snapshot lifetime, security hardening, and datastore free space.

Oftentimes these standards conflicted with the pressure my team and I faced to complete requests in a timely fashion.  When faced with conflicting pressures like this, you can choose not to enforce your established policies, which can have significant negative impacts when your team is audited or when your environment suffers as a result of that lack of enforcement; or you can choose not to deliver the requested service in a timely fashion, which will likely have equally bad outcomes.

In this multi-part blog series we will explore one path to mitigating these conflicting pressures: utilizing vROPS to monitor for violations of a given policy and, when a violation is detected, performing automatic remediation while still allowing for exceptions.  We will close by exploring how we can build upon this monitoring framework to deliver self-service day two actions.

Continue reading

Enabling SSL on vROPS vPostgreSQL

I recently spent a lot of time digging into the question of “How do I enable a PKI-signed cert on the vROPS vPostgreSQL database to make my security team happy?”  As I often do, I started my journey with a simple Google search: “vrops vpostgres ssl”.  This search led me to the helpful page: Enabling TLS on Localhost Connections.  Hooray, I have my answer.  However, as I read the document, asked others about it, and thought more about the subject, things didn’t add up.

Continue reading

vROPS Optimization without shared storage

Over the last month or so I have had two customers inquire about vROPS’s ability to perform cross-cluster Storage vMotions in order to enforce business intent or perform multi-cluster workload optimization.

Being the eager, wet-behind-the-ears SE that I am, I set it up in my lab with some vSAN datastores and it worked! Success! Then I reviewed my notes on the use case the customer described:

  • NFS based storage
  • Datastore per cluster
  • No shared datastores between clusters

[Diagram: no shared storage between clusters]

I then set out to create that scenario in my lab so that I could let my customers know that vROPS would solve their problems and slice their bread.  To my dismay, I was met with the following helpful message from vROPS.

[Screenshot: vROPS error message]

I turned to a bit of RTFM at this point, which called out that I needed to use Datastore Clusters for non-vSAN workloads.  I then rushed into a configuration where I created a common datastore cluster spanning all my clusters, backed by non-shared datastores.

[Screenshot: oops]

I’m not sure why I thought this would work

You should not be surprised that this did not work as I expected.  After hammering at it for a bit, I realized that the needed configuration was non-shared storage with non-shared datastore clusters – which in hindsight makes complete sense.

The winning recipe is what makes logical sense.  The key here is that the datastores either need to be fully shared between all the clusters you want to balance across, or they need to not be shared at all and instead be part of a “dedicated” datastore cluster for each vSphere cluster.  See the diagram below for two different valid configurations.

[Diagram: the recipe – two valid configurations]

Not what you expected?

You may have arrived here expecting different content.  Perhaps you came looking for something related to triathlon, only to be surprised by IT-related content – sorry about that!

For many years I have “maintained” three blogs, which meant three instances of the blog software and the content that goes with them.  I decided I wanted to simplify things a bit, and one way to do that was to merge the content of the three blogs.

I believe all the categories from the previous blogs should still be around, so once I add a bit more in the way of navigation it should be easy enough to find your way to the content you are looking for.  And maybe, just maybe, it will give me an opportunity to post a bit more varied content (or just content at all).

Oops, I let my ESXi eval expire

Last week I had an unfortunate experience: I ignored the expiration warnings on a bunch of nested ESXi hosts and allowed their licenses to expire.

Rather than logging into a bunch of individual hosts to apply a key, I wrote a quick script that connects to vCenter, ensures all the hosts have the correct license, and then reconnects them.

It’s not pretty, but it’s effective should this happen to you.

Here is a link to take a look
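For a rough idea of what it does, here is a minimal PowerCLI sketch of the same approach – the vCenter name and license key below are placeholders, and the real script at the link handles things a bit more carefully:

```powershell
# Minimal sketch (not the actual script): re-license and reconnect expired hosts.
# 'vcsa.lab.local' and the license key are placeholders.
Connect-VIServer -Server vcsa.lab.local

$licenseKey = 'XXXXX-XXXXX-XXXXX-XXXXX-XXXXX'

foreach ($vmhost in Get-VMHost) {
    # Apply the desired key to any host that does not already have it
    if ($vmhost.LicenseKey -ne $licenseKey) {
        Set-VMHost -VMHost $vmhost -LicenseKey $licenseKey -Confirm:$false
    }

    # Reconnect hosts that dropped out when the eval license expired
    if ($vmhost.ConnectionState -ne 'Connected') {
        Set-VMHost -VMHost $vmhost -State Connected -Confirm:$false
    }
}
```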

Let’s Build an image pipeline! (part 3)

Today we will be diving into each section of our packer json file and its purpose.

This is the third post of a planned four-part series.

If you haven’t gotten the gist by now, the permutations of what you can do with packer in terms of building images are nearly endless.  Thus far we have focused on a very narrow case: building a RHEL/CentOS 7 VMware-compatible image to then clone as needed.  However, with an understanding of the different tunables, you can easily adapt a working image build to do many interesting things.  We’ll again be using my sample as a reference to facilitate discussion.  You will also want to check out the excellent documentation for packer.  There are four sections of a packer template:

  • Builders
  • Provisioners
  • Post-Processors
  • Variables

The only truly required section is builders, but each of the others adds useful capabilities to the build.

Builders

Builders is the section of the packer json template that defines the instructions packer uses to create the actual virtual machine and install the operating system.  There are a number of builders, which allow packer to support many different platforms, including VMware, OpenStack, Azure, AWS, etc.  Some builders, such as the AWS and VMware ones, contain multiple sub-types to allow for additional flexibility.

For example, the VMware builder allows you to build a VM straight from an ISO or to use an existing VM as a starting point.  The key takeaway to remember about builders is that they define the virtual machine you are creating.

Regardless of whether the system being built is destined to be a vSphere template, an Amazon EC2 instance, or a vSphere virtual machine you are provisioning directly, the builder section gives you full control over what your output will look like from a hardware perspective: network connectivity, CPU/memory configuration, which answer file to use, etc.

Speaking of the answer file, you can present it to the virtual machine using packer’s built-in web server or via a virtual floppy generated from a list of files or directories.
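As a rough illustration (the ISO URL, checksum, credentials, and kickstart path below are placeholders, not the values from my sample, and option names can vary a bit between packer versions), a vmware-iso builder showing both answer-file delivery methods might look like this:

```json
{
  "builders": [
    {
      "type": "vmware-iso",
      "iso_url": "http://mirror.example.com/centos7.iso",
      "iso_checksum": "sha256:<checksum>",
      "guest_os_type": "centos-64",
      "ssh_username": "root",
      "ssh_password": "changeme",
      "http_directory": "http",
      "floppy_files": ["answerfiles/ks.cfg"],
      "boot_command": [
        "<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter>"
      ],
      "shutdown_command": "shutdown -P now"
    }
  ]
}
```

Here the boot_command pulls the kickstart from packer’s built-in web server (serving the http directory), while floppy_files demonstrates the virtual-floppy alternative.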

Provisioners

Provisioners is the section of the packer json template that lets you define actions packer should take after the builder finishes installing the operating system.  There are a number of provisioners, which ultimately allow you to dial in the configuration of your system or template exactly as you need it.  You can choose from Chef, Ansible, PowerShell, Puppet, the shell, or even create your own.  In our example we utilize the following provisioners:

file – This provisioner simply copies files from the machine executing packer to a specified location on the provisioned system, for use later on during the build by a different provisioner, or for longer term use down the road.

shell – This provisioner executes shell commands on the provisioned system.  It could be anything ranging from simple shell commands to scripts placed there by something like the file provisioner.  There is a different variant of the shell provisioner for Windows.

A cool feature of all provisioners is the ability to limit their execution to only specific builds – for instance, if your packer json contains definitions for multiple templates/machines.  You can accomplish this by including the “only” directive in the provisioner.

You also have the ability to provide overrides for different builders – say, for example, if you are building an instance that needs a different parameter set for a command.  You can see an example of us using the provisioner override on lines 108 to 124 of our template.
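A hypothetical provisioners block tying those ideas together might look something like the following – the file paths, commands, and override values are invented for illustration and are not from my template:

```json
{
  "provisioners": [
    {
      "type": "file",
      "source": "scripts/harden.sh",
      "destination": "/tmp/harden.sh"
    },
    {
      "type": "shell",
      "inline": ["sh /tmp/harden.sh", "yum clean all"],
      "only": ["vmware-iso"],
      "override": {
        "vmware-iso": {
          "execute_command": "echo 'packer' | sudo -S sh '{{ .Path }}'"
        }
      }
    }
  ]
}
```

Because the builder in this sketch has no explicit “name”, both “only” and “override” key off the builder type, vmware-iso.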

Post-Processors

Post-Processors aren’t directly covered in our example; what we do have could be called a custom post-processor in the form of the deploy-image.ps1 script.  In the year-plus since I originally developed this code, HashiCorp and the community have done a great job of fleshing out the post-processing options, including taking the machine you built in, say, a Fusion or Workstation build environment and depositing it directly into a vSphere environment to run, or converting it to a template (and registering it in vCenter).

Other options allow for importing the image into Amazon, or for executing things like our deploy-image.ps1 script directly from packer instead of via the Jenkins pipeline like we did.
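As a small, hypothetical example of the latter, a shell-local post-processor could kick off a script on the machine running packer once the build completes (the command and arguments here are made up):

```json
{
  "post-processors": [
    {
      "type": "shell-local",
      "inline": [
        "pwsh ./deploy-image.ps1 -ImagePath output-vmware-iso/"
      ]
    }
  ]
}
```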

Variables

The last “section” I want to cover doesn’t functionally do anything to build your virtual machine, but as the name implies, it allows you to define a set of variables which can then be set at run-time to provide a level of dynamic behavior.

Say, for example, you wanted to build a CentOS or Red Hat image, but give yourself the flexibility to change certain aspects of the build – such as the base ISO file or the Katello subscription key.  The variables section allows you to define each of these with a reasonable default value, which you can then override when you call the packer executable.

You can even reference environment variables of the executing system to manipulate the build – in our example we are reading an environment variable IMAGE_OUTPUT_DIR, which we then reference later when we instruct packer on where to build the image.
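A rough sketch of how that fits together is below – aside from IMAGE_OUTPUT_DIR, the variable names and values are invented for illustration, and the builder is trimmed down to just the lines that consume the variables:

```json
{
  "variables": {
    "iso_url": "http://mirror.example.com/centos7.iso",
    "katello_key": "default-activation-key",
    "output_dir": "{{env `IMAGE_OUTPUT_DIR`}}"
  },
  "builders": [
    {
      "type": "vmware-iso",
      "iso_url": "{{user `iso_url`}}",
      "output_directory": "{{user `output_dir`}}/rhel7"
    }
  ]
}
```

At run time you can then override any default, for example: packer build -var 'iso_url=http://mirror.example.com/rhel7.iso' template.json.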

Wrapping it up

It’s hard to convey everything you can do with packer in a single series of posts; the possibilities are endless, and it can be a bit intimidating when you are digging in and trying to figure out the best way to leverage it.  Next time, I’ll wrap up this series by brain-dumping some of the cool things I think you can accomplish with Packer to make your life as a builder of things easier, and give you more time for coffee and other fun activities.

Configure VCSA Backup

Note: I enjoy writing, but often find myself not doing so simply because it can take a lot of effort to write a “good” post.  In an effort to post more regularly, I’m planning to post more frequent short write-ups of things I am doing in my lab.

One of the most overlooked aspects of your management infrastructure is backups and recoverability.  Without your management infrastructure, your job of recovering everything else gets that much harder.  It’s simple to protect your vCenter: all that is required is a backup location (FTPS, HTTPS, SCP, FTP, NFS, SMB, or HTTP), a username, and a password.  These directions were generated and validated on a 6.7 U2 VCSA, but should be basically the same on 6.5.

Continue reading

Perspective

[Photo: me and Pat]

This week I was able to attend the Wisconsin VMUG UserCon.  In the past I was always hesitant to go to these, as I assumed that I wouldn’t learn anything new.  I always figured I knew what I needed to know technically – what was I possibly going to encounter that would increase my technical knowledge?  In other words, rightly or wrongly, I was focused only on my technical growth.  This year I was excited about the UserCon, partly because I got a bit of recognition in front of the crowd from none other than Pat Gelsinger, but mostly for the opportunity to sit through a couple of sessions and have conversations with people, gain insight into the challenges and struggles they are facing, and figure out how to put that perspective to use.

Continue reading

VCDX #275

It took me, not surprisingly, longer than expected to sit down and write this post – primarily because I am a little in shock that I actually achieved this, but also because imposter syndrome kicked in a bit, and really I simply needed time to sort through all the emotions I experienced: euphoria, disbelief, letdown, loss of direction, etc.  Now that I have had almost two weeks to sort through my thoughts and feelings, I figured it was time to share my journey to VCDX.

Continue reading

VCDX-DCV: Strike One

We regret to inform you...

Back in February, I decided that I was going to seriously pursue my VCDX certification.  This is a big deal not only because of the amount of work it requires, but also because for the past 15 or so years I have shunned the pursuit of any and all certifications.

After completing all the prerequisite exams, I submitted my design in September, targeting a December defense date.  To my joy/surprise/terror, in early November I was notified that my design was cleared to move to the defense phase.

Continue reading