Let’s Build an image pipeline! (part 3)

Today we will be diving into each section of our packer json file and its purpose.

This is the third post of a planned four-part series.

If you haven’t gotten the gist by now, the permutations of what you can do with packer in terms of building images are nearly endless.  Thus far we have focused on a very narrow case: building a RHEL/CentOS 7 VMware-compatible image, to then clone as needed.  However, with an understanding of the different tunables, you can easily adapt a working image build to do many interesting things.  We’ll again be using my sample as a reference to facilitate discussion.  You will also want to check out the excellent documentation for packer.

There are four sections of a packer template:

  • Builders
  • Provisioners
  • Post-Processors
  • Variables

The only truly required section is the builders section, but each section provides certain functionality.

Builders

Builders is the section of the packer json template that defines the instructions packer uses to create the actual virtual machine and install the operating system.  There are a number of builders, which allow packer to support many different platforms including VMware, OpenStack, Azure, AWS, etc.  Some builders, such as the AWS and VMware ones, contain multiple sub-types to allow for additional flexibility.

For example, the VMware builder allows you to build a VM straight from an ISO or utilize an existing VM as a starting point.  The key takeaway to remember about builders is that a builder is the definition of the virtual machine that you are creating.
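To make that concrete, here is a minimal sketch of what a vmware-iso builder definition could look like.  This is not the builder from my template; the ISO URL, checksum, credentials, and sizing values are placeholders you would replace for your environment.

"builders": [
  {
    "type": "vmware-iso",
    "vm_name": "centos7-template",
    "guest_os_type": "centos-64",
    "iso_url": "http://mirror.example.com/isos/CentOS-7-x86_64-Minimal.iso",
    "iso_checksum_type": "sha256",
    "iso_checksum": "REPLACE_WITH_ISO_CHECKSUM",
    "disk_size": 40960,
    "vmx_data": {
      "numvcpus": "2",
      "memsize": "4096"
    },
    "ssh_username": "root",
    "ssh_password": "REPLACE_ME",
    "shutdown_command": "shutdown -h now"
  }
]

Everything here describes the machine itself; nothing in the builder dictates what gets configured inside the OS once it is installed – that is the provisioners’ job.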

Regardless of whether the system being built is destined to be a vSphere template, an Amazon EC2 instance, or a vSphere virtual machine that you are provisioning directly, the builder section gives you full control over what your output will look like from a hardware perspective: network connectivity, CPU/memory configuration, which answer file to use, etc.

Speaking of the answer file, you are able to present it to the virtual machine either by using packer’s built-in web server or via a virtual floppy that is generated from a list of files or directories.
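As a rough sketch of the web-server approach (the directory name and kickstart file are illustrative, not taken from my template), the relevant builder settings look something like this:

"http_directory": "http",
"boot_command": [
  "<tab> text inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter><wait>"
]

If you prefer the floppy route, you would instead list the kickstart under "floppy_files": ["http/ks.cfg"] and point the boot command at inst.ks=hd:fd0:/ks.cfg.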

Provisioners

Provisioners is the section of the packer json template that allows you to define the actions packer should take after the builder has finished installing the operating system.  There are a number of provisioners which will ultimately allow you to dial in the configuration of your system or template exactly as you need it.  You can choose from Chef, Ansible, PowerShell, Puppet, the shell, or even create your own.  In our example we utilize the following provisioners:

file – This provisioner simply copies files from the machine executing packer to a specified location on the provisioned system, for use later in the build by a different provisioner or for longer-term use down the road.
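A minimal sketch of a file provisioner entry (the paths here are illustrative, not from my template) looks like this:

{
  "type": "file",
  "source": "files/motd",
  "destination": "/tmp/motd"
}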

shell – This provisioner executes shell commands on the provisioned system.  These could be anything from simple shell commands to scripts placed there by something like the file provisioner.  There is a different variant of the shell provisioner for Windows.

A cool feature of all provisioners is the ability to limit their execution to only specific builds – for instance, if your packer json contains definitions for multiple templates/machines.  You can accomplish this by including the “only” directive in the provisioner:

{
  "type": "shell",
  "script": "script.sh",
  "only": ["vmware-iso"]
}

You also have the ability to provide overrides for different builders; say, for example, you are building an instance that needs a different parameter set for a command.  You can see an example of us using the provisioner override on lines 108 to 124 of our template:

{
  "type": "shell",
  "scripts": [
    "scripts/sethostname.sh",
    "scripts/satellite_reg.sh",
    "scripts/install_ansible.sh",
    "scripts/build_version_file.sh"
  ],
  "execute_command": "echo 'labansible'|sudo {{ .Vars }} -S bash '{{ .Path }}'",
  "override": {
    "CentOS": {
      "environment_vars": [
        "OS_VERSION={{user `os_version`}}",
        "IMAGE_BUILD_VERSION={{user `image_build_version`}}",
        "KATELLO_HOSTNAME={{user `satellite_server_fqdn`}}",
        "SATELLITE_ORG={{user `satellite_org`}}",
        "SATELLITE_ACTIVATIONKEY={{user `satellite_activation_key_centos`}}"
      ]
    },
    "RHEL": {
      "environment_vars": [
        "OS_VERSION={{user `os_version`}}",
        "IMAGE_BUILD_VERSION={{user `image_build_version`}}",
        "KATELLO_HOSTNAME={{user `satellite_server_fqdn`}}",
        "SATELLITE_ORG={{user `satellite_org`}}",
        "SATELLITE_ACTIVATIONKEY={{user `satellite_activation_key_rhel`}}"
      ]
    }
  }
}

Post-Processors

Post-Processors aren’t directly covered in our example; we actually have what could be called a custom post-processor in the deploy-image.ps1 script.  In the year-plus since I originally developed this code, HashiCorp and the community have done a great job of fleshing out the post-processing options available, including taking a machine built in, say, a Fusion or Workstation environment and depositing it directly into a vSphere environment and running it, or converting it to a template (and registering it in vCenter).

Other options allow for importing the image into Amazon, or executing things like our deploy-image.ps1 script directly from packer rather than via the Jenkins pipeline as we did.
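As a rough illustration of that template conversion, a vsphere-template post-processor can take the artifact from a VMware build and register it as a template in vCenter.  This is a sketch rather than the method used in this series, and the connection details below are placeholders:

"post-processors": [
  {
    "type": "vsphere-template",
    "host": "vcenter.example.com",
    "username": "administrator@vsphere.local",
    "password": "{{user `vcenter_password`}}",
    "datacenter": "Lab-DC",
    "folder": "/templates",
    "insecure": true
  }
]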

Variables

The last “section” I want to cover is not something that functionally builds your virtual machine, but as the name implies, it allows you to define a set of variables which can then be overridden at run time to provide a level of dynamic behavior.

Say, for example, you wanted to build a CentOS or Red Hat image, but give yourself the flexibility to change certain aspects of the build, such as the base ISO file or the Katello subscription key.  The variables section allows you to define each of these with a reasonable default value, which you can then override when you call the packer executable.

You can even reference environment variables on the system executing packer to manipulate the build.  In our example we read the environment variable IMAGE_OUTPUT_DIR, which we then reference later when we instruct packer where to build the image:

  "variables": {
   ...
    "output_directory": "{{env `IMAGE_OUTPUT_DIR`}}"
  },
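Expanding that snippet a bit, the same variables block also declares the values consumed by the provisioners above; the defaults shown here are placeholders rather than the ones from my template:

"variables": {
  "os_version": "7",
  "satellite_org": "LabOrg",
  "satellite_activation_key_centos": "centos7-key",
  "output_directory": "{{env `IMAGE_OUTPUT_DIR`}}"
},

Any of these can then be overridden at build time without touching the template, for example: packer build -var 'os_version=7.6' your-template.json.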

Wrapping it up

It’s hard to convey everything you can do with packer in a single series of posts.  The possibilities are endless, and it can be a bit intimidating when you are digging in and trying to figure out the best way to leverage it.  Next time, I’ll wrap up this series by brain-dumping some of the cool things I think you can accomplish with Packer to make your life as a builder of things easier, and give you more time for coffee and other fun activities.

Configure VCSA Backup

Note: I enjoy writing, but often find myself not doing so simply because it can take a lot of effort to write a “good” post.  In an effort to post more regularly, I’m planning to post more frequent short write-ups of things I am doing in my lab.

One of the most overlooked aspects of your management infrastructure is backups and recoverability.  Without your management infrastructure, your job of recovering everything else gets that much harder.  Fortunately, it’s simple to protect your vCenter.  All that is required is a backup location (which can be FTPS, HTTPS, SCP, FTP, NFS, SMB, or HTTP), a username, and a password.  These directions were generated and validated on a 6.7 U2 VCSA, but should be basically the same on 6.5.

Continue reading

Perspective


This week I was able to attend the Wisconsin VMUG UserCon.  In the past I was always hesitant to go to these as I assumed that I wouldn’t be able to learn anything new.  I always figured I knew what I needed to know technically; what was I possibly going to encounter that would increase my technical knowledge?  In other words, rightly or wrongly, I was focused only on my technical growth.  This year I was excited about the UserCon, partly because I got a bit of recognition in front of the crowd from none other than Pat Gelsinger, but mostly for the opportunity to sit through a couple of sessions and have conversations with people to gain insight into the challenges and struggles they are facing, and to figure out how to turn that perspective to my benefit.

Continue reading

VCDX #275

It took me, not surprisingly, longer than expected to sit down and write this post.  Primarily because I am a little in shock that I actually achieved this, but also because imposter syndrome kicked in a bit, and really I simply needed time to sort through all the emotions I experienced: euphoria, disbelief, letdown, loss of direction, etc.  Now that I have had almost two weeks to sort through my thoughts and feelings, I figured it was time to share my thoughts on my journey to VCDX.

Continue reading

VCDX-DCV: Strike One

We regret to inform you...

Back in February, I decided that I was going to seriously pursue getting my VCDX certification.  This is a big deal not only because of the amount of work it requires, but also because for the past 15 or so years I have shunned the pursuit of any and all certifications.

After completing all the prerequisite exams, I submitted my design in September, targeting a December defense date.  To my joy/surprise/terror, in early November I was notified that my design was cleared to move to the defense phase.

Continue reading

NSX-v ECMP Active/Passive configuration

OK, I know that the title of this post is a bit of an oxymoron.  ECMP Active/Passive?  Isn’t ECMP about active/active/active/active/…?  Yes it is, but imagine a scenario where you are building your NSX-v deployment across two campuses, with datacenter firewalls upstream, making it important that you ensure data flow stays predictable and asymmetric routing doesn’t become an issue: ingress through datacenter A, egress through datacenter A.  Roie Ben Haim has a fantastic write-up on why this is important, so I’ll leave you with this link as a primer.

Welcome back.  Now that we are on the same page on why we need to control datacenter ingress/egress, let’s further imagine that we have some incredibly demanding north-south requirements that force us to lay down the maximum ECMP configuration of 8 ESGs.  Does that mean you can’t have passive, lower-weighted ESGs on the secondary site of the DLR?  Does it mean you have to fail the ESGs over to the secondary site?

These questions hit me today, so I set out to answer them.

Continue reading

VCAP 6 DCV – Deploy Exam

Earlier this month I took the VCAP 6 DCV – Deploy exam to round out my VCIX 6.5 certification.  Previously, I mentioned how important I felt time management was during the VCAP 6.5 DCV – Design exam; well, that exam had nothing on the Deploy exam.  I used the entire duration and probably could have used another 30 to 40 minutes to increase my level of confidence walking out the door.

I walked away leaving two questions unanswered because while I knew what was needed to answer them correctly, I don’t work in those aspects of vSphere enough to know how to accomplish them without digging around in the documentation.

Beyond that, I had one question I knew I got wrong, and another two I felt iffy about.

Regarding the experience of the exam itself:

It was incredibly stressful – imagine the stress of a huge meltdown in your environment at work.  Stressful, right?  When the day is over you want to go home and sit on the couch with a beer.  Now take that stress level and subject yourself to it 27 times in a little over 3 hours, in an environment that you have never touched before, on someone else’s computer, and without google (or coworkers/VMware support).  I took the exam first thing in the morning, and when I was done I was completely toast.

That said, the exam interface itself used the HOL format found in the online training courses and, well, the Hands-on Labs.  I had no issues there other than that I just missed driving my good old Zenbook or Surface.

My biggest complaint about the whole experience was that it took over a week to get my results, which was more stressful than the exam itself.  Fortunately, when the email came, the news was good!

Content-wise, I felt it was a really good all-around test of ability for a vSphere administrator/engineer/architect type person.  If I had to do it again, I’d focus on the areas of the vSphere suite that I don’t work with every day.  The stuff I do every day or have focused on a lot in the past (PowerCLI, HA, DRS, esxcli, iSCSI, networking) I would flat-out ignore.  For anything related to those, I either don’t need documentation at all or I simply need a reminder of the exact spelling of an advanced parameter or something similar.

Review the blueprint and drill into those things that you aren’t comfortable with. You don’t have to build to a level of mastery, but get yourself to the point where you know what needs to be done so you can accomplish it 70% to 80% on your own and you can find the last 20% to 30% in the documentation within 30 to 60 seconds – i.e. you understand the breakout of the vSphere documentation and you know the keywords you are looking for to find the section quickly in a PDF.

Lastly – the only real preparation I did for this exam was the solutions4crowds exam simulator.  The simulator was only 17 questions, but it provided a great mockup of what to expect, and it was worth every bit of the $10 the owner asks to use it.

VCAP 6.5 DCV – Design

A couple of weeks ago I sat the VMware VCAP 6.5 DCV – Design exam as part of the process of completing the prerequisites for submitting for VCDX.  I don’t have a lot of insight to add beyond the excellent post by vHersey, but I did want to jot down a few notes to help relieve the dearth of information out there on this exam.

The things that really stood out to me:

  • Read the questions carefully
  • Practice, practice, practice identifying RCARs
  • Understand the difference between functional and non-functional requirements
  • On several questions it was easy to eliminate answers simply by being able to identify technical non-starters

Beyond that – manage your time well.  Overall I found the exam to be much less technical than the VCP 6.5 DCV exam, but the VCAP 6.5 DCV – Design was significantly more intense as I felt the devil was in the details of many of the questions and answers.

Hopefully the 6.5 Deploy exam will be released shortly so I can stay on track for my submission goal date.

 

Let’s Build an image pipeline! (part 2)

Sorry for the delay in getting part two published; life has been busy the past week or so.  Today we are going to build upon what we covered in part one, namely the pieces of the code and the things you need to adapt in the packer template file for it to be functional for your environment.

This is the second post of a planned four-part series.

Continue reading

Let’s Build an image pipeline! (part 1.5)

As I was working on part two of this planned four-part series about packer, I realized I forgot a crucial step in setting up Jenkins!  Because part of our pipeline includes deployment to an artifact repository and a vSphere environment, we need to create some credentials within Jenkins.  I have also created a friendly link to all parts of this walkthrough.  I will retroactively edit the links as the posts are written.

Continue reading