DigitalOcean, Chef and Ohai – Retrieving a Droplet’s Private IP Address

Recently, I attempted to use the Ohai values node['cloud_v2']['local_ipv4'] and node['cloud']['local_ipv4']['ip_address'] to determine the private IP address of my cloud-based nodes in a Chef cookbook.  Unfortunately, they no longer resolve correctly on DigitalOcean instances.

According to DigitalOcean documentation, if Private Networking is enabled, the private IP will be assigned to eth1.  Recently, however, I noticed that a second private IP address is also being assigned to the eth0 interface.  This causes Ohai to assign the eth0 secondary (private) IP address to node['cloud_v2']['local_ipv4'] and node['cloud']['local_ipv4']['ip_address'].

As you can see below, there is a second, private IP address assigned to eth0. I believe this has something to do with the recent release of Floating IP Addresses.

bdwyertech@dummy-droplet:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your
# system and how to activate them. For more information, see
# interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth1 eth0
iface eth0 inet static
      address 123.234.123.234
      netmask 255.255.255.0
      gateway 123.234.123.1
      up ip addr add 10.13.0.123/16 dev eth0
      dns-nameservers 8.8.8.8 8.8.4.4
iface eth1 inet static
      address 10.128.123.123
      netmask 255.255.0.0

Initially, I just wrote a simple function to detect the IP address on eth1 via the node hash whenever Ohai identifies DigitalOcean as the cloud provider. However, querying the DigitalOcean metadata service is the more robust solution.
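
For reference, here is a minimal sketch of that first approach. It assumes the standard Ohai network attributes are populated; the helper name and the provider string check are my own illustration, not part of any cookbook.

# Sketch: pull the eth1 address out of the Ohai node hash (helper name is hypothetical)
def digitalocean_private_ip(node)
  # Ohai's cloud plugin reports the provider; the 'digital_ocean' string is assumed here
  return unless node['cloud'] && node['cloud']['provider'] == 'digital_ocean'
  eth1 = node['network']['interfaces']['eth1']
  return unless eth1
  # Return the first IPv4 ('inet') address bound to eth1
  inet = eth1['addresses'].find { |_ip, data| data['family'] == 'inet' }
  inet && inet.first
end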

DigitalOcean has released a metadata service, similar to AWS, where you can query http://169.254.169.254/metadata/{API_VERSION} for droplet information.  DigitalOcean conveniently allows you to query the droplet’s metadata in its entirety and have it returned in JSON format.  I’ve gone ahead and written a simple library to query this data and bring it into Ruby as a hash.

# DigitalOcean Metadata Chef Library
# rubocop:disable LineLength

require 'json'
require 'net/http'

# Public: This defines a module to retrieve Metadata from DigitalOcean
module DoMetadata
  DO_METADATA_ADDR = '169.254.169.254' unless defined?(DO_METADATA_ADDR)
  DO_SUPPORTED_VERSIONS = %w( v1 )
  DO_DEFAULT_API_VERSION = 'v1'

  def self.http_client
    Net::HTTP.start(DO_METADATA_ADDR).tap { |h| h.read_timeout = 600 }
  end

  # Get metadata for a given path and API version
  def metadata_get(id, api_version = DO_DEFAULT_API_VERSION, json = false)
    path = "/metadata/#{api_version}/#{id}"
    path = "/metadata/#{api_version}.json" if json
    response = http_client.get(path)
    case response.code
    when '200'
      response.body
    when '404'
      Chef::Log.info("Encountered 404 response retreiving DO metadata path: #{path} ; continuing.")
      nil
    else
      fail "Encountered error retrieving DO metadata (#{path} returned #{response.code} response)"
    end
  end
  module_function :metadata_get

  # Retrieve the JSON metadata, and return it as a Ruby hash
  def parse_json_metadata(api_version = DO_DEFAULT_API_VERSION)
    retrieved_metadata = metadata_get(nil, api_version, true)
    return unless retrieved_metadata
    JSON.parse(retrieved_metadata)
  end
  module_function :parse_json_metadata
end

The code is also available on my GitHub.

This conveniently allows you to query the resulting Ruby hash, and use it in your code.

# => Get the Droplet's Metadata
metadata = DoMetadata.parse_json_metadata

metadata['interfaces']['private'][0]['ipv4']['ip_address'] # => Droplet's Private IP Address
metadata['interfaces']['private'][0]['ipv4']['netmask'] # => Droplet's Private Subnet Mask
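
Inside a recipe, you could then feed these values into your own attributes. A minimal sketch, assuming the library above sits in your cookbook's libraries/ directory (the 'my_app' attribute namespace is purely illustrative):

# Sketch: consume the droplet metadata from a recipe (attribute names are illustrative)
metadata = DoMetadata.parse_json_metadata
if metadata
  node.default['my_app']['bind_address'] = metadata['interfaces']['private'][0]['ipv4']['ip_address']
  node.default['my_app']['bind_netmask'] = metadata['interfaces']['private'][0]['ipv4']['netmask']
end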

Chef – Ohai in AWS EC2 VPC

This is a quick tip for those of you who are using Chef inside an AWS VPC. Inside a VPC, the EC2 Ohai plugin cannot detect that it is running on EC2, so it does not run by default, which prevents some meaningful node attributes from being collected.

The EC2-specific node attributes I find most useful are:

node['ec2']['instance_id'] # => Instance's ID
node['ec2']['local_ipv4'] # => Instance's IPv4 Address
node['ec2']['placement_availability_zone'] # => Instance's Region & Availability Zone
node['ec2']['ami_id'] # => Instance's Baseline AMI

To get your instances inside a VPC to pick up meaningful node attributes related to EC2, you have to create an Ohai hint file for the EC2 plugin. To do so, simply throw this into your initial bootstrap.

mkdir -p /etc/chef/ohai/hints && touch ${_}/ec2.json

Make sure you don’t do that blindly on non-EC2 instances: Ohai will wait for the nonexistent metadata endpoint to time out, which significantly increases its execution time.  You might want to wrap this in an if statement, using something like the example below.

if [[ $(dmidecode | grep -i amazon) ]]; then
  mkdir -p /etc/chef/ohai/hints && touch ${_}/ec2.json
fi
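
If you'd rather handle this from within Chef instead of the bootstrap script, a minimal recipe sketch along the same lines (the only_if guard mirrors the dmidecode check above; the hint takes effect on the next Ohai run) could look like this:

# Sketch: drop the Ohai EC2 hint file from a recipe, equivalent to the bootstrap snippet above
directory '/etc/chef/ohai/hints' do
  recursive true
end

file '/etc/chef/ohai/hints/ec2.json' do
  content '{}'
  only_if 'dmidecode | grep -qi amazon'
end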

Apple Configurator – Backup & Restoration

I work a bit with Apple Configurator to provision, supervise and join various Apple devices to mobile device management (MDM) servers. One thing not integrated into Apple Configurator is the ability to easily back up and restore data. The need for this might arise if you have a remote operator who needs to remotely provision devices, or if you’d like to hand off a set configuration to a client. Hopefully, the bash script below makes life a little easier in this regard.

If the directories change, adjust accordingly. This is current as of Apple Configurator 1.5. On a second note, if you’d like to supervise devices which have already been supervised, you will also need to copy the Apple Configurator certificate and key from your keychain. Meraki has created a utility which extracts this here. I have not tested this, but in theory, you should be able to delete the Configurator certificate & key from the remote user’s keychain and have them import yours.

#!/bin/bash
# Apple Configurator Backup & Restoration Script
# Brian Dwyer - Intelligent Digital Services 6/18/14

# IPSW File Location - ~/Library/Containers/com.apple.configurator/Data/Library/Caches/com.apple.configurator/Firmware/

# Configuration Files Location
config_dir="/Users/$USER/Library/Containers/com.apple.configurator/Data/Library/Application Support/com.apple.configurator"
db_dir="/var/db/lockdown"

ConfigName=ConfiguratorFiles_Backup
ConfigDBName=ConfiguratorDB_Backup

function backup() {
  # Backup the Restore Files & Stuff
  if [ -d "$config_dir" ]; then
    echo 'Backing up the Configurator Files...'
    nohup tar zpcvf $ConfigName.tar.gz -C "$config_dir" . >/dev/null 2>&1
    echo 'Done'
  fi

  # Backup the Database
  if [ -d "$db_dir" ]; then
    echo 'Backing up the Configurator Database...'
    nohup tar zpcvf $ConfigDBName.tar.gz -C "$db_dir" . >/dev/null 2>&1
    echo 'Done'
  fi
}


function restore() {
  if [ -e "$ConfigName.tar.gz" ]; then
      # Use -o to make the current user the owner of extracted files
      tar xvpof $ConfigName.tar.gz -C "$config_dir"
  else
    echo "Configurator File Backup does not exist. Make sure $ConfigName.tar.gz exists..."
  fi

  if [ -e "$ConfigDBName.tar.gz" ]; then      
    echo "We needs sudo to preserve ownership on the database files at $db_dir"
      sudo tar xvpf $ConfigDBName.tar.gz -C "$db_dir"
  else
    echo "Configurator Database Backup does not exist. Make sure $ConfigDBName.tar.gz exists..."
  fi
}


case "$1" in

   backup)
        backup
        ;;
   restore)
        restore
        ;;
        *)
       echo "Usage:  {backup|restore}"
       echo "Backup: Backs up Configurator data to the current directory"
       echo "Restore: Restores backups in the current directory"
       RETVAL=2
esac

Chef – ‘Wrapper’ Cookbooks and Execution Order

Recently, I’ve been experimenting with Chef for Configuration Management.  One of the greatest things about Chef is the concept of ‘Wrapper’ or ‘Application’ cookbooks, which make use of existing community cookbook code.  These community cookbooks, wherever they are sourced from, are referred to as ‘Library’ cookbooks.  Rather than forking a cookbook and adjusting it yourself, you write a ‘Wrapper’ for the Library cookbook in order to facilitate the desired changes.  These changes are typically just attribute overrides, but sometimes you have to take it a step further.

For example, after creating a few ‘Hello World’ cookbooks, I decided I was going to cook up a Wrapper for gitlab.org’s gitlab cookbook.  I prefer to use the most up-to-date NginX RPM available, as well as the Percona MySQL suite rather than the standard MySQL packages.  For the GitLab cookbook at least, this involved making sure both repositories were present prior to the resource execution phase.  Additionally, since GitLab requires mysql-libs and this package is a member of the packages array in the cookbook’s default attributes, the packages array has to be overridden as well.  Here is an example of the override, scoped to the gitlab cookbook (the full package list is omitted for brevity). This would be included in your wrapper cookbook’s attributes.

override.gitlab["packages"] = %w{packages, without, mysql-libs}

NOTE: I’m putting on my flame-suit; I’m sure there’s something I’m not doing ‘correctly’ here or to proper standards, but this is my first go, and it does work.

There’s a certain amount of Voodoo to get your prerequisites to execute first, for example installing the NginX and Percona repositories.  For the life of me, I couldn’t get the ‘yum-percona’ cookbook to run first when called via include_recipe inside my cookbook.  I know the proper method would be to place the ‘yum-percona’ cookbook earlier in the node’s run list, but my objective here was to exploit the power of Chef.  What I ended up doing was creating two resources to pull down and install the repo installer RPM. Unfortunately, you can’t just feed the yum_package resource a URL like you could in Bash.  Here’s a little code snippet showing the Voodoo:

mycookbook-gitlab/recipes/default.rb

include_recipe "mycookbook-gitlab::nginx_repo"
include_recipe "gitlab"

mycookbook-gitlab/recipes/nginx_repo.rb

# => Download NginX Yum Repo RPM
remote_file "#{Chef::Config[:file_cache_path]}/nginx_repo_latest.rpm" do
  source "http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm"
  action :nothing
end.run_action(:create)

# => Install NginX Yum Repo from RPM
yum_package "nginx_repo" do
  source "#{Chef::Config[:file_cache_path]}/nginx_repo_latest.rpm"
  action :nothing
end.run_action(:install)

The key here is the action :nothing combined with end.run_action(:install).  Chef operates in two phases: compile and execution.  The compile phase gathers all of the resource objects from the cookbook/recipe dependency hierarchy in the node’s run list; the execution phase is when those resources are actually converged.  Obviously, you want the proper yum repositories in place prior to executing a yum install, and this snippet causes the NginX repo RPM to be downloaded and installed during the compile phase. I had seen this in cookbooks before, but wasn’t sure what its purpose was.  It can be used with any resource type, although it’s probably more helpful with some than others. If you find that the execution order of your run_list, cookbook, or specific recipe is not being honored, you might need to use this construct to fix it… Or, look for this construct to find the offending blip. If it’s in an upstream community cookbook, override it the way we did here.
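
To illustrate that this works with any resource type, here is a minimal sketch of forcing an arbitrary resource to converge at compile time (the package name is purely illustrative):

# Sketch: force any resource to run during the compile phase
package 'curl' do
  action :nothing
end.run_action(:install)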

Finally, as an FYI, you can make use of the NginX community cookbook to handle the NginX repo installation. By default it looks to EPEL for the package; setting the attribute node['nginx']['repo_source'] = 'nginx' points it at the official NginX repository instead.
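
In a wrapper cookbook’s attributes file, that override is a one-liner (using the attribute name described above):

# Point the nginx community cookbook at the official NginX repository instead of EPEL
override['nginx']['repo_source'] = 'nginx'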

OS X – Excessive CPU Usage while running as a VM

After installing OS X as a VM running the Server role, I noticed excessive CPU usage while idling. I narrowed this down to the default screensaver that runs on the login screen. Even if you disable this screensaver as a user, it will not be disabled for the login screen. I have found that the only way to resolve this is to actually delete the screensaver file itself! Here is how to do it.

sudo su
cd "/System/Library/Screen Savers/"
rm -r Flurry.saver/

Apple Profile Manager – Mountain Lion Migration

Recently I had the pleasure of migrating an OS X Lion server to Mountain Lion.  Its primary function was as an MDM server for Apple devices.  Basically, the upgrade process involves upgrading the operating system to Mountain Lion, followed by installing the updated Server application.

Primarily, my interactions with the Apple server have been for Profile Manager functionality.  In Lion, Profile Manager utilized a PostgreSQL backend with a datastore located in /usr/share/devicemgr/backend/.  iOS applications and other push-to-device material were located in the ‘/backend/file_store/’ directory, named by their MD5 checksums.  Logs for the devicemgr service were located in the ‘/backend/logs/’ directory.

In Mountain Lion Server, what used to be located in /usr/share/ is now packed into the Server Application itself.  For example, the same ‘/devicemgr/backend/’ is now located at ‘/Applications/Server.app/Contents/ServerRoot/backend/file_store/’.  The iOS applications and other push material are now located at ‘/var/devicemgr/ServiceData/Data/FileStore/’.

This knowledge is critical if you encounter an issue with Profile Manager; there is not much information to go on when you have a problem.  In Lion, I had seen cases where Profile Manager would, for an unknown reason, delete applications I was trying to push, leaving managed devices unable to receive them.

In the case of Mountain Lion Server, I encountered the following issue with devices post-upgrade when trying to upload an updated version of an application.

ProfileManager[217] <Error>: Caught unhandled exception undefined method `get_all_devices' for nil:NilClass at ...'

To me, this sounded like a nonexistent remnant of Lion was being referenced.  To some people, this might sound like a good time to reset Profile Manager with the wipeDB.sh script.  However, that would require you to rejoin all devices to the MDM.  In this case, there was only a single application the MDM was being used to deploy, so I figured I would try clearing the Postgres tables containing the application information and see what happened.  After running the following commands, I was able to upload my application and push it without the ‘undefined method’ error shown above.

sudo psql -U _postgres -d device_management
DELETE FROM public.ios_applications;
SELECT setval('ios_applications_id_seq', 1);
DELETE FROM ios_application_library_item_relations;
SELECT setval('ios_application_library_item_relations_id_seq', 1);
\q
sudo serveradmin stop devicemgr
sudo serveradmin start devicemgr