ChefDK on Windows – Environmental Variables

The recent release of the Chef Development Kit for Windows has been great for my workflow. If you do not have your own Ruby installed on the system, you probably want to use the Ruby bundled with ChefDK. To start, you can operate inside this environment by prepending ‘chef exec’ to any Ruby commands you want to run. Eventually, though, you might want to install your own Ruby gems, and that can be a problem if you do not have certain environmental variables set.
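
For example, running a command through ChefDK’s bundled Ruby just means prefixing it with chef exec (the gem name below is only a placeholder):

chef exec ruby -v
chef exec gem install <gem-name>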

Here, I include a PowerShell script which sets or removes the environmental variables required to use ChefDK’s Ruby as your local environment’s Ruby. It makes use of the local user’s %PATH% variable, which Windows always appends to the system’s %PATH%. It also creates a user environmental variable called CHEFDK_RUBY, which is appended (as %CHEFDK_RUBY%) to the local user’s %PATH%. A few other Ruby-related variables are set based on recommendations set forth in the ChefDK on Windows Survival Guide.

# Filename: Set-ChefDK_Enviro_Vars.ps1
# Brian Dwyer - Intelligent Digital Services - 11/5/14

# ***USAGE***
# To Setup the ChefDK Ruby Variables
# ./Set-ChefDK_Enviro_Vars.ps1 set

# To Remove the ChefDK Ruby Variables
# ./Set-ChefDK_Enviro_Vars.ps1 unset

# System-Wide Environmental Variables
$System_Vars='HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment'

# User-Specific Environmental Variables
$User_Vars='HKCU:\Environment'

# Make Sure we don't expand/evaluate environmental variables on the return
$DoNotExpand=[Microsoft.WIN32.RegistryValueOptions]::DoNotExpandEnvironmentNames

# Check if ChefDK is installed
If (!$env:Path.Contains('chefdk\bin'))
  {
    echo ""
    echo "/======| Error: ChefDK does not seem to be installed. |=====\"
    echo ""
    pause
    exit
  }

# Get ChefDK Installation Directory
$ChefDK_DIR = $env:Path.Split(';') -like "*chefdk\bin" | Out-String -Stream | Split-Path -Parent

# Determine ChefDK Ruby Version
$ChefDK_RubyVer = Get-ChildItem -Name $ChefDK_DIR\embedded\lib\ruby\gems

# Setup
If ( $args[0] -eq 'set' )
  {
  echo "/======| Setting up Registry Keys... |=====\"
  Set-ItemProperty $User_Vars -Name 'CHEFDK_RUBY' -Value "$env:USERPROFILE\.chefdk\gem\ruby\$ChefDK_RubyVer\bin;$ChefDK_DIR\embedded\bin"
  Set-ItemProperty $User_Vars -Name 'GEM_ROOT' -Value "$ChefDK_DIR\embedded\lib\ruby\gems\$ChefDK_RubyVer"
  Set-ItemProperty $User_Vars -Name 'GEM_HOME' -Value "$env:USERPROFILE\.chefdk\gem\ruby\$ChefDK_RubyVer"
  Set-ItemProperty $User_Vars -Name 'GEM_PATH' -Value "$env:USERPROFILE\.chefdk\gem\ruby\$ChefDK_RubyVer;$ChefDK_DIR\embedded\lib\ruby\gems\$ChefDK_RubyVer"
  If (!(Get-Item $User_Vars).GetValue('PATH','Default',$DoNotExpand).Contains('CHEFDK_RUBY'))
   {
   If (((Get-ItemProperty $User_Vars).PATH) -eq $null)
    {
    Set-ItemProperty $User_Vars -Name 'PATH' -Value '%CHEFDK_RUBY%'
    }
   Else
    {
    $NewVal=(Get-Item $User_Vars).GetValue('PATH','Default',$DoNotExpand) + ';%CHEFDK_RUBY%'
    Set-ItemProperty $User_Vars -Name 'PATH' -Value $NewVal
    }
   }
  pause
  exit
  }
Elseif ( $args[0] -eq 'unset' )
  {
  echo "/======| Removing Registry Keys... |=====\"
  Set-ItemProperty $User_Vars -Name 'PATH' -Value (Get-Item $User_Vars).GetValue('PATH','Default',$DoNotExpand).replace(';%CHEFDK_RUBY%', '')
  Set-ItemProperty $User_Vars -Name 'PATH' -Value (Get-Item $User_Vars).GetValue('PATH','Default',$DoNotExpand).replace('%CHEFDK_RUBY%', '')
  Remove-ItemProperty $User_Vars -Name 'CHEFDK_RUBY'
  Remove-ItemProperty $User_Vars -Name 'GEM_ROOT'
  Remove-ItemProperty $User_Vars -Name 'GEM_HOME'
  Remove-ItemProperty $User_Vars -Name 'GEM_PATH'
  pause
  exit
  }
 Else
  {
  echo '------------------------------------------------'
  echo "|*|  - Set ChefDK Environmental Variables -  |*|"
  echo '------------------------------------------------'
  echo "|   Use 'set' or 'unset' to control script     |"
  echo '------------------------------------------------'
  pause
  exit
  }

Wildfly Java Application Server – Chef Cookbook

A while back, I wrote a cookbook to deploy the Wildfly Java Application Server. Wildfly is the successor to JBoss AS. This Chef cookbook handles the entire deployment process, and also contains LWRPs to set system attributes, configure datasources, and deploy code. If you are looking to automate the installation and configuration of Wildfly, I am confident that this Chef cookbook will be a big help. Check it out on my GitHub. Feel free to contribute as well!

Chef & Berkshelf – SSL Certificate Validation Error on Windows

When using Chef, Vagrant and Berkshelf on Windows, you may encounter an issue with SSL certificate validation. The error is as follows:

Faraday::SSLError: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed

The problem is that Ruby expects the SSL_CERT_FILE environmental variable to be set. It should point to a CA bundle to use for SSL certificate validation; if it is not set, certificate validation will fail. You can adjust the Berkshelf configuration to skip SSL verification, but I have found this setting to be problematic and to work only sporadically. The better option is to actually set the environmental variable.
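
If you just want to confirm the fix before making it permanent, you can set the variable for the current PowerShell session only (the path below assumes a default Vagrant install; adjust it to match your machine):

$env:SSL_CERT_FILE = 'C:\HashiCorp\Vagrant\embedded\cacert.pem'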

Luckily, Vagrant comes with a CA bundle. We can determine Vagrant’s installation directory from the %PATH% and use it to set the SSL_CERT_FILE variable appropriately. The following PowerShell script does exactly that; it takes a set or unset argument. Running ./Set-SSLCert_Chef_Vagrant.ps1 set will set SSL_CERT_FILE as a user-specific environmental variable.

After running this script, close and re-open any open command prompts for the new variable to take effect. Vagrant is required for this script to work correctly.

# Filename: Set-SSLCert_Chef_Vagrant.ps1
# Brian Dwyer - 5/22/14

# ***USAGE***
# To Setup the SSL_CERT_FILE Variable
# ./Set-SSLCert_Chef_Vagrant.ps1 set

# To Remove the SSL_CERT_FILE Variable
# ./Set-SSLCert_Chef_Vagrant.ps1 unset

# System-Wide Environmental Variables
$System_Vars='HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment'

# User-Specific Environmental Variables
$User_Vars='HKCU:\Environment'

# Check if Vagrant is installed
If (!$env:Path.Contains('Vagrant'))
  {
    echo ""
    echo "/======| Error: Vagrant does not seem to be installed. |=====\"
    echo ""
    pause
    exit
  }

# Get Vagrant Installation Directory
$Vagrant_DIR = $env:Path.Split(';') -like "*vagrant*" | Out-String -Stream | Split-Path -Parent

# Registry Property Key/Value
$Reg_Key='SSL_CERT_FILE'
$Reg_Value="$Vagrant_DIR\embedded\cacert.pem"

# Setup
If ( $args[0] -eq 'set' )
  {
  echo "/======| Setting up Registry Key... |=====\"
  Set-ItemProperty $User_Vars -Name $Reg_Key -Value $Reg_Value
  pause
  exit
  }
Elseif ( $args[0] -eq 'unset' )
  {
  echo "/======| Removing Registry Key... |=====\"
  Remove-ItemProperty $User_Vars -Name $Reg_Key
  pause
  exit
  }
 Else
  {
  echo '------------------------------------------------'
  echo "|*|-Set SSL_CERT_FILE Environmental Variable-|*|"
  echo '------------------------------------------------'
  echo "|   Use 'set' or 'unset' to control script     |"
  echo '------------------------------------------------'
  pause
  exit
  }

AWS – Highly-Available NAT in VPC

[Architecture diagram: highly-available NAT in a VPC (active NAT instance)]

Like most sysadmins, I am responsible for ensuring high availability in our environments. Recently, I’ve been working a lot more with Amazon AWS. Amazon recently began forcing new accounts to make use of VPC. When you create a VPC, an Internet Gateway must be provisioned to route traffic to the Internet. VPCs utilize subnet constructs for virtual networking. Subnets are assigned a routing table, and in the case of a public subnet, the default route of this table is pointed at the Internet Gateway. Instances in this public subnet are assigned public, non-RFC1918 Elastic IP addresses. At the moment, only 5 Elastic IP addresses may be requested per account. You can request more via support, but obviously Amazon is trying to wean people away from using them for everything. Consequently, NAT and supporting instances must be in place to facilitate external communication from non-public subnets.

In the case of these subnets, the default route should be pointed at a NAT instance residing in the public subnet. This introduces a single point of failure: should the NAT instance go down, nothing in that subnet can speak to the outside world; the default route becomes a black hole. To combat this, multiple NAT instances can be provisioned in different availability zones and, with a little magic, configured to take over each other’s traffic-routing responsibilities on demand.

Amazon has furnished a document with a workaround for this situation. Essentially, a script running on each NAT instance performs a health check on the other NAT instance, and should the other instance go down, the healthy instance will take over. It does so by adjusting the routing tables via AWS API calls. The script will also attempt to bring the failed instance back online.
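
The takeover itself boils down to a route replacement. With the classic EC2 API tools, the call the healthy instance makes looks roughly like the following (the route table ID and instance ID are placeholders; Amazon’s script derives them from its own variables):

/opt/aws/bin/ec2-replace-route rtb-xxxxxxxx -r 0.0.0.0/0 -i <healthy-nat-instance-id> -U $EC2_URL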

UPDATE: The NAT Monitor script outlined by Amazon has a flaw: the ec2-describe-instances call used to determine the state of the other NAT instance does not function properly. The documentation references using $5 instead of $4 to set the NAT_STATE variable; I have found $6 to work best, but test this yourself, because your EC2 API tools version might yield different results. I also highly suggest the --show-empty-fields argument, because if the number of fields changes, the awk statement could grab the incorrect field.

NAT_STATE=`/opt/aws/bin/ec2-describe-instances $NAT_ID -U $EC2_URL --show-empty-fields | grep INSTANCE | awk '{print $6;}'`

There is one other issue with the configuration outlined in the Amazon document: the IAM role's permissions are too loose. Using the policy defined in the document, the NAT instance is granted permission to restart every instance belonging to the account. Additionally, the NAT instance could modify any and all routing tables, such as those in other regions, VPCs, etc. You probably don't want your NAT instances in US-West-2 making any modifications whatsoever to US-East-1. The below policy is an attempt to restrict permissions as tightly as supported IAM policy conditions allow. Just substitute the region and VPC information with your own. Also, tag the NAT instances with 'Type' and 'VPC' fields, setting 'Type' to 'NAT' and 'VPC' to the VPC's ID.

Restricted IAM Policy

{
   "Statement":[
      {
         "Sid":"DescribeStuff",
         "Action":[
            "ec2:DescribeInstances"
         ],
         "Effect":"Allow",
         "Resource":"*",
         "Condition":{
            "StringLike":{
               "ec2:Region":"us-west-2",
               "ec2:ResourceTag/VPC":"vpc-abcd1234"
            }
         }
      },
      {
         "Sid":"RoutingTableAccess",
         "Action":[
            "ec2:CreateRoute",
            "ec2:ReplaceRoute"
         ],
         "Effect":"Allow",
         "Resource":"*",
         "Condition":{
            "StringEquals":{
               "ec2:Region":"us-west-2"
            }
         }
      },
      {
         "Sid":"NATInstanceControl",
         "Action":[
            "ec2:StartInstances",
            "ec2:StopInstances"
         ],
         "Effect":"Allow",
         "Resource":"arn:aws:ec2:us-west-2:*",
         "Condition":{
            "StringLike":{
               "ec2:ResourceTag/Type":"NAT",
               "ec2:ResourceTag/VPC":"vpc-abcd1234"
            }
         }
      }
   ]
}
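
For reference, the tags the policy above relies on can be applied with the EC2 API tools along these lines (the instance ID and VPC ID are placeholders):

ec2-create-tags <nat-instance-id> --tag Type=NAT --tag VPC=vpc-abcd1234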

Apple Configurator – Backup & Restoration

I work a bit with Apple Configurator to provision, supervise, and enroll various Apple devices in mobile device management (MDM) servers. One thing not integrated into Apple Configurator is the ability to easily back up and restore its data. The need for this might arise if you have a remote operator who needs to provision devices, or if you’d like to hand off a set configuration to a client. Hopefully, the below bash script makes life a little easier in this regard.

If the directories change, adjust accordingly. This is current as of Apple Configurator 1.5. On a related note, if you’d like to supervise devices which have already been supervised, you will also need to copy the Apple Configurator certificate and key from your keychain. Meraki has created a utility which extracts these here. I have not tested this, but in theory, you should be able to delete the Configurator certificate and key from the remote user’s keychain and have them import yours.

#!/bin/bash
# Apple Configurator Backup & Restoration Script
# Brian Dwyer - Intelligent Digital Services 6/18/14

# IPSW File Location - ~/Library/Containers/com.apple.configurator/Data/Library/Caches/com.apple.configurator/Firmware/

# Configuration Files Location
config_dir="/Users/$USER/Library/Containers/com.apple.configurator/Data/Library/Application Support/com.apple.configurator"
db_dir="/var/db/lockdown"

ConfigName=ConfiguratorFiles_Backup
ConfigDBName=ConfiguratorDB_Backup

function backup() {
  # Backup the Restore Files & Stuff
  if [ -d "$config_dir" ]; then
    echo 'Backing up the Configurator Files...'
    nohup tar zpcvf $ConfigName.tar.gz -C "$config_dir" . >/dev/null 2>&1
    echo 'Done'
  fi

  # Backup the Database
  if [ -d "$db_dir" ]; then
    echo 'Backing up the Configurator Database...'
    nohup tar zpcvf $ConfigDBName.tar.gz -C "$db_dir" . >/dev/null 2>&1
    echo 'Done'
  fi
}


function restore() {
  if [ -e "$ConfigName.tar.gz" ]; then
    # Use -o to make the current user the owner of extracted files
    tar xvpof $ConfigName.tar.gz -C "$config_dir"
  else
    echo "Configurator File Backup does not exist. Make sure $ConfigName.tar.gz exists..."
  fi

  if [ -e "$ConfigDBName.tar.gz" ]; then      
    echo "We needs sudo to preserve ownership on the database files at $db_dir"
      sudo tar xvpf $ConfigDBName.tar.gz -C "$db_dir"
  else
    echo "Configurator Database Backup does not exist. Make sure $ConfigDBName.tar.gz exists..."
  fi
}


case "$1" in

   backup)
        backup
        ;;
   restore)
        restore
        ;;
   *)
        echo "Usage: $0 {backup|restore}"
        echo "Backup:  Backs up Configurator data to the current directory"
        echo "Restore: Restores backups from the current directory"
        exit 2
esac

Chef – ‘Wrapper’ Cookbooks and Execution Order

Recently, I’ve been experimenting with Chef for Configuration Management.  One of the greatest things about Chef is the concept of ‘Wrapper’ or ‘Application’ cookbooks, which make use of existing community cookbook code.  These community cookbooks, wherever they are sourced, are referred to as ‘Library’ cookbooks.  Rather than forking a library cookbook and adjusting it yourself, you write a ‘Wrapper’ for it in order to facilitate the desired changes.  These changes are typically just attribute overrides, but sometimes you might have to take it a step further.

For example, after creating a few ‘Hello World’ cookbooks, I decided I was going to cook up a wrapper for gitlab.org’s gitlab cookbook.  I prefer to use the most up-to-date NginX RPM available, as well as the Percona MySQL suite rather than the standard MySQL packages.  For the GitLab cookbook at least, this involved making sure both repositories were present prior to the resource execution phase.  Additionally, the gitlab cookbook lists mysql-libs in the packages array of its default attributes, and that package conflicts with the Percona packages, so the ‘packages’ array has to be overridden as well.  Here is an example of the override, scoped to the gitlab cookbook; the actual packages aren’t listed for brevity.  This would be included in your wrapper cookbook’s attributes.

override.gitlab["packages"] = %w{packages, without, mysql-libs}

NOTE: I’m putting on my flame-suit; I’m sure there’s something I’m not doing ‘correctly’ here or to proper standards, but this is my first go, and it does work.

There’s a certain amount of Voodoo to get your prerequisites to execute first, for example installing the NginX and Percona repositories.  For the life of me, I couldn’t get the ‘yum-percona’ cookbook to run first when called via ‘include_recipe’ inside my cookbook.  I know the proper method would be to place the ‘yum-percona’ cookbook further ahead in the node’s run list, but my objective here was to exploit the power of Chef.  What I ended up doing was creating two resources to pull down and install the repo installer RPM.  Unfortunately, you can’t just feed the yum_package resource a URL like you could with yum in Bash.  Here’s a little code snippet showing the Voodoo:

mycookbook-gitlab/recipes/default.rb

include_recipe "mycookbook-gitlab::nginx_repo"
include_recipe "gitlab"

mycookbook-gitlab/recipes/nginx_repo.rb

# => Download NginX Yum Repo RPM
remote_file "#{Chef::Config[:file_cache_path]}/nginx_repo_latest.rpm" do
source "http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm"
action :nothing
end.run_action(:create)

# => Install NginX Yum Repo from RPM
yum_package "nginx_repo" do
source "#{Chef::Config[:file_cache_path]}/nginx_repo_latest.rpm"
action :nothing
end.run_action(:install)

The key here is the action :nothing combined with the end.run_action(:install).  Chef operates in two phases: compile and execution.  The compile phase involves gathering all the resource objects from the cookbook/recipe dependency hierarchy in the node’s run list.  The execution phase is when all of those resources are executed.  Obviously, you want the proper yum repositories in place prior to executing a yum install.  This snippet causes the NginX repo RPM to be downloaded and installed during the compile phase.  I had seen this in cookbooks before, but wasn’t sure what its purpose was.  It can be used with any resource type, although it’s probably more helpful with some than others.  If you find that the top-to-bottom execution order of your run_list, cookbook, or specific recipe is not being honored, you might need to use this construct to fix it… or look for this construct to find the offending blip.  If it’s in an upstream community cookbook, override it the way we did here.

Finally, as an FYI, you can make use of the NginX community cookbook to handle the NginX repo installation. By default, it looks to EPEL for the package; set the attribute node['nginx']['repo_source'] = 'nginx' to use the official NginX repository instead.

OS X – Excessive CPU Usage while running as a VM

After installing OS X as a VM running the Server role, I noticed excessive CPU usage while idling. I narrowed this down to the default screensaver which runs on the login screen. Even if you disable this screensaver as a user, it will still run on the login screen. I have found that the only way to resolve this is to actually delete the screensaver file itself! Here is how to do it:

sudo su
cd "/System/Library/Screen Savers/"
rm -r Flurry.saver/