OS X – Excessive CPU Usage while running as a VM

After installing OS X with the Server role as a VM, I noticed excessive CPU usage while the machine was idling. I narrowed this down to the default screensaver, which runs on the login screen. Even if you disable the screensaver as a user, it remains enabled for the login screen. The only way I have found to resolve this is to actually delete the screensaver file itself! Here is how to do it:

sudo rm -r "/System/Library/Screen Savers/Flurry.saver"
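If you’d rather not delete a system file outright, moving the bundle out of the directory disables it just as well and leaves you a backup to restore later. Here is a sketch, simulated in a temp directory so it is safe to run anywhere; on a real Mac you would set SAVERS to "/System/Library/Screen Savers" and run the mv under sudo:

```shell
# Simulated layout -- on a real Mac, SAVERS would be "/System/Library/Screen Savers"
SAVERS="$(mktemp -d)"
BACKUP="$(mktemp -d)"
mkdir -p "$SAVERS/Flurry.saver"       # stand-in for the real screensaver bundle

# Move the bundle out instead of rm -r, so it can be restored later
mv "$SAVERS/Flurry.saver" "$BACKUP/"
```

Restoring the screensaver is just the reverse mv.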

Online Upgrade from ESXi 5.0 to 5.1 Using Host Profiles and CLI

On a standalone ESXi box, or in a small environment not running vSphere Update Manager, the best way to update your hosts is via the SSH CLI.  I recently discovered a way to use this patching method to perform a version upgrade from 5.0 to 5.1 using image profiles rather than the interactive installer, aka the install CD.  VMware hosts its standard image profiles in an online depot, so these can be used to perform the 5.0 to 5.1 upgrade with minimal downtime; all you have to do is apply the profile and reboot, just like a normal patch. Remember to migrate any important VMs off the host in question prior to running these operations.

1. Enable HTTP Client Firewall Exception

esxcli network firewall ruleset set -e true -r httpClient 
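It is easy to confirm the exception actually took before moving on (grep is available in the ESXi shell):

```
esxcli network firewall ruleset list | grep httpClient
```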

2. List Profiles

esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml 
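The rest of the procedure amounts to applying one of the listed 5.1 profiles and rebooting. A sketch of the remaining steps follows; the profile name below is a placeholder, so substitute one actually returned by the list command in step 2:

```
# 3. Apply the 5.1 image profile (placeholder name -- use one from step 2)
esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-5.1.0-799733-standard

# 4. Reboot into the upgraded build
reboot

# 5. Once the host is back up, close the firewall exception again
esxcli network firewall ruleset set -e false -r httpClient
```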


Update – Been Busy

It’s been a while since I last posted to this blog, so I figured I would post an update.  I have been very busy lately working to complete my internship requirements to graduate from college while also studying for the MCITP Enterprise Administrator exam.  I should have the latter done in a week or two and will then begin studying for the CCNP ROUTE exam.

I have added some new equipment to the lab, including a Dell 4220 42U server rack, a Cisco 2821, and another 16 GB of RAM for my server, bringing it to 32 GB.

The rack was an absolute necessity.  All of my gear was more or less scattered around, unplugged; I needed a way to efficiently store and power it.  I found the rack locally along with the Cisco 2821 and a 10 kVA Tripp-Lite 240V SmartOnline UPS for $400.  I could not pass that up.  The rack itself was in great shape; it just had a lot of sheetrock dust on it and needed to be washed.  After washing it with a garden hose, sponge and car soap in my driveway, it cleaned up real nice and was ready to hold my gear.  I grabbed two power strips from Monoprice with integrated cable management and mounted them neatly in the rear of the rack where the expensive PDUs would normally go.

The Cisco 2821 still had a config on it when I got it, so I confreg 0x2142’ed it and wiped her clean to confirm she worked.  After doing that, I opened it up, blew a lot of dust out of it, and dropped an extra 512 MB of RAM into it.  While doing so, I saw that it had an AIM-VPN module installed, leading me to believe it shipped as a 2821 w/sec package.  Sweet deal!  This thing will likely play a nice role when it comes time to mess with VoIP, since it is now my only Cisco piece with that capability.

The server RAM I found on eBay: four 4 GB DDR2 FB-DIMMs for $150.  That was another deal I found hard to pass up, especially since my main use of the server is virtualization and labs.  RAM is probably the most critical component of a virtual server in a home lab.  Anyone who has ever run Exchange in a lab or production environment knows just how demanding it can be, especially of RAM.  After stress testing the server, I concluded the RAM is good, so it was definitely a good deal.


VMware vSphere Client – Clear Recent Connections (RDP Also)

Today I figured out how to clear the recent connections list for the VMware vSphere Client.  After a while, these entries add up, and I do not like having stale IP addresses and DNS names from experimental ESXi boxes appearing in the client when I attempt to log in.  I searched the registry for some no-longer-functional servers and found keys under HKCU containing the recently connected hosts.  Below are the two values, wrapped in PowerShell commands that can be used to delete them.

Remove-ItemProperty -Path 'HKCU:\Software\VMware\Virtual Infrastructure Client\Preferences\UI\ClientsXml' -Name *
Remove-ItemProperty -Path 'HKCU:\Software\VMware\VMware Infrastructure Client\Preferences' -Name RecentConnections

Update – Remote Desktop Connections

I figure many people interested in clearing the vSphere Client list may also be interested in clearing their Remote Desktop connections list.  To accomplish this, remove the file “Default.rdp” from the user’s Documents directory using one of the following PowerShell or cmd commands.

Remove-Item $Env:UserProfile\Documents\Default.rdp -Force
del /A:H "%UserProfile%\Documents\Default.rdp"
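One caveat: Default.rdp only caches the settings of the most recent connection; the drop-down history in the Remote Desktop client is actually stored in the registry under the Terminal Server Client key.  To wipe that list as well, something along these lines should work in PowerShell (this is the key layout as I have seen it on Windows 7-era clients; the Servers subkey holds one child key per remembered host):

```
Remove-ItemProperty -Path 'HKCU:\Software\Microsoft\Terminal Server Client\Default' -Name 'MRU*'
Remove-Item -Path 'HKCU:\Software\Microsoft\Terminal Server Client\Servers' -Recurse
```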

Dell PowerEdge 2950

So recently I decided I wanted a server of my own that I could run VMware ESXi on at home.  Previously, my personal “server” was just an old tower system composed of whatever extra components I had lying around the house.  It had a few old hard drives, 4 GB of DDR, an Athlon 4000+, and one fan…  Obviously, it had become antiquated, so it was time for an upgrade!

While working with server gear in the field, I have found Intel’s RMM and HP’s iLO out-of-band management solutions to be extremely helpful, and a technology I just had to have in my next personal sandbox.  I also wanted something that would run ESXi smoothly, permitting me to experiment and conduct labs at home.  Initially, I was thinking about just picking up an Intel Executive Series motherboard and dropping a quad-core into it, since those boards have Intel Active Management Technology, which is similar to RMM.  Then I had an epiphany and figured I should try to grab some enterprise-grade hardware, so I looked at HP’s DL line of servers on eBay, knowing they had iLO (Integrated Lights-Out) functionality.  What I wanted was just outside of my price range; I was looking to spend less than $600 for a name-brand, rack-mount, dual-processor machine with 16 GB of RAM minimum.  The HP DL380 G4s were in my price range, but the technology was just too old.  I wanted Intel Xeons based on the Core 2 architecture, not the hot, slow, old NetBurst architecture the Pentium 4 was built on.  In addition, I wanted a server with a SAS/SATA backplane that accepts 3.5″ hard drives so I could add relatively cheap, fast storage.  For the most part, DL380 G5s equipped to my desired specification were out of my price range.

Eventually, I ran across Dell’s PowerEdge line of servers and saw that they were readily available at a relatively affordable price.  I was eyeing the 2950s with 3.5″ bays, preferably a unit with a bunch of drives already installed so that I would not have to buy sleds to put new, bigger drives into.  Eventually I found a unit with two dual-core Xeon 5160s at 3.0 GHz, 16 GB of RAM and five 3.5″ SAS drives within my price range, so I made it mine.  It did not have a DRAC card and was missing one sled, so I picked up a DRAC and two sleds on eBay for around $50.  In addition, I grabbed two 1.5 TB WD Black 64 MB cache SATA units to serve as my large ESXi datastore.  For those of you confused by the hardware manual’s statement that the SAS sleds are different from the SATA sleds: I placed my WD SATA drives into the SAS sleds and they work just fine.  I am not sure why the manual has such confusing information.

The first thing I did when I received the server was update the firmware on all of the components.  When it arrived, the firmware was circa early 2007 and needed some updating.  I took the BIOS from 1.1.0 to 2.7.0, the PERC5 RAID firmware from 5.0.1 to 5.2.2, the BMC from 1.14 to 2.37, the SAS backplane from 1.0 to 1.05, and the Broadcom NetXtreme II NICs from 1.8.0 to 6.2.14.  I also updated the DRAC5 to 1.60 and the DVD/CD-RW drive from DE05 to DE08.  In addition, I grabbed Dell’s latest SAS/SATA hard drive firmware update CD, a conglomerate of updates encompassing all of the drives they have sold.  I used the DRAC5 to mount the ISO remotely and let it run and update the drives; not surprisingly, all five SAS units received updated firmware.  After completing all of these updates, I cleared the CMOS and then proceeded to get my new 2950 configured and running.

I must say, the Dell hardware is all very nice.  The PERC5 seems like a great RAID card, supporting RAID 0, 1, 5, 10 and 50.  Its interface is intuitive, and when I simulated a RAID 5 disk failure, it rebuilt quickly and seamlessly.  It also supports hot spares, which is a great enterprise feature.  My only issue with the PERC5 is that the cache battery is dead.  However, I have already found a replacement battery on eBay and it is on the way.  Apparently, you need to disconnect the cache battery if the server is going to sit unpowered for more than a week or two.  From what I have read, the early PERC5 firmware may also have had a charging issue.  I am sure the replacement battery, the updated firmware, and disconnecting the battery as outlined in the hardware manual will prevent this issue from recurring any time soon.

The DRAC feature is also exactly what I wanted for OOB management access.  The only thing that irked me was that the self-signed SSL certificate had expired in 2010, and there was no option in the web GUI to generate a new one.  You can upload a certificate or create a certificate request, but those features are geared more toward organizations running their own enterprise CA.  Eventually, after configuring the DRAC for SSH access, I SSH’ed into it and found a command that creates a new self-signed certificate.

Another benefit of the PE2950 is that it is on VMware’s hardware compatibility list.  I feel it is very important to have supported hardware if you ever intend to rely on the services it runs, not only for phone support, but for overall functionality, since VMware’s kernel and included drivers are tuned for stability and performance on specific configurations.  If you are running random gear, it may work, but it may not run reliably, and the issues you hit might be difficult if not impossible to troubleshoot and fix.

Overall, I am glad I did not just buy some updated desktop gear and use it as an ESXi box.  The added experience of having enterprise-grade hardware at home is great, not to mention how reliable it is (knock on wood)…


VMware ESXi & Intel SR1550AL Servers

I recently encountered an issue running an ESXi 4.1 environment: I was getting a Purple Screen of Death right around the 24-hour uptime mark.  Like most technologists, I first hit Google with the error code and had no luck.  I found a similar error code suggesting it had something to do with a storage I/O issue.  The problem did not seem driver related, so I began looking into firmware and BIOS updates for the server.

After some investigation, I realized that there were four BIOS/firmware images in my SR1550 that needed updating.  After trying to do them all manually, I found a nifty utility called the Intel Deployment Assistant.  The Intel DA is a bootable CD that lets you configure many different features, including the ability to connect to the Internet to check for, download and install the latest BIOS/firmware updates automatically.  This worked great, and it confirmed that the system’s BIOS and firmware were two years out of date.  So, I let the utility do its thing, but one update kept failing.  I tried it on all three of the SR1550s and had the same issue on every one: the HSC/backplane firmware would not update.

At this point, I started looking for older revisions of the Hot Swap Controller firmware, figuring that the jump from revision 1.41 to 2.15 was too great.  There were many revisions between 1.41 and 2.15, but they were not available on Intel’s website.  Finally, after a lot of searching, I found an Intel document confirming my suspicion that I needed to apply an intermediate update before jumping to 2.15.  Alas, I decided I had to get in contact with Intel about the issue.

At this point I’m sitting in the server room, it’s already after six, and my stomach is growling.  I had no phone reception in the server room, so I tried the button to chat with an Intel support representative.  Much to my surprise, after about 10 minutes, the guy I was in contact with had a solution.  Although he could not locate the firmware revision I needed, he found an internal document stating that if I disconnected the RMM2 and then proceeded with the upgrade to 2.15, it should work.  So I popped open the server, disconnected the RMM2 module, booted up, manually launched the update from a DR-DOS boot disk and, much to my surprise, it worked!

For those of you who do not know what the Intel RMM is, it is a Remote Management Module that gives you out-of-band control of the server.  As a matter of fact, it offers an IP-based KVM over SSL through a Java applet, and it even allows you to remotely mount ISO images and boot the server from them… How awesome is that!  After that little bugger gave me all that headache, I decided to give it a try, and it is great.  You use a little utility called psetup to configure the IP and login settings, and then you’re good to go!

Anyway, after updating all of that firmware, I now have over 180 days of uptime on the ESXi boxes.  I hope this helps any other SR15xx/SR25xx users out there who may be encountering stability issues with ESXi.


Virtualization

At the moment, a major trend in the IT industry is the push toward virtualization.  In addition to being a large, quickly growing social movement, the push to “go green” is a major focus of many industries and organizations looking to maintain or improve their public image by reducing their carbon footprint.  Virtualization is one way the IT industry is “going green”.

At first, you might ask yourself: why virtualize?  From a system administrator’s standpoint, the modularity of virtual machines is invaluable.  You can easily migrate services from one hypervisor to another without worrying about drivers or major compatibility issues.  Everything remains intact after a proper transition from one host to another; no reconfiguration is necessary.  This modularity takes a huge burden off a system administrator, who would normally have to worry about exactly these things when migrating a service to a new server.  For example, if a company grows and needs higher-performance hardware to run a certain service, that would normally mean migrating the service to a fresh OS install on a new server.  You could not simply place the old hard drive into the new server and expect it to work; the differences in hardware would require a fresh OS installation, with the added risk of incompatibility with the older software.

Now, let’s say that machine was virtualized and its host’s specifications were inadequate.  A new, more powerful server would only need a hypervisor installed to be ready to run the VM.  The virtual machine could then be migrated from the old machine with a few keystrokes.  After adjusting the new host’s configuration to match that of the original, the service in question could be returned to its original state.  In practice, migrating virtual machines takes considerably less time than migrating physical hosts.  This ease of migration also allows virtual machines to be shuffled among hosts to achieve the most efficient use of resources possible.  If you have a few services that don’t see much usage, you can host them all on a single machine; if one of those services suddenly becomes more in demand, it can be migrated to another host with more free resources available.

This modularity is enticing, and it also makes high availability easy to incorporate.  When type-one hypervisors are clustered, they add redundancy to a virtual machine.  For example, with two Hyper-V hosts clustered and attached to shared storage such as a SAN, if one cluster member crashes, the virtual machines running on it are automatically restarted on the second member.  The addressing works much like Cisco’s HSRP: each host has its own IP address, but the two share a virtual IP address, and that virtual IP is the only one a client sees when accessing a service hosted on the cluster.  When managed, they appear as a single VM host.  Virtual machines are migrated to the cluster the same way you migrate a VM from one host to another.  You can choose which machine will primarily host a VM, but when failover occurs it is automatically moved to another available cluster member.  Even though you can pick which machine hosts your VM, it is accessed through the cluster’s virtual IP regardless.  The best thing about Hyper-V is that this functionality is included in its purchase price – free!

VMware HA is similar but requires licensing.  VMware vMotion and Storage vMotion let you quickly migrate VMs with not only their live state, but also their associated virtual disks.  vCenter can also dynamically move VMs from host to host, and even power hosts up and down as VM demand for resources increases and decreases.

In addition to all of these benefits, virtualization is also more economical.  Because you can host multiple virtual machines on a single server, what used to require a dozen or more servers may now require only two or three powerful ones.  You end up with less hardware to maintain, more free space and, best of all, lower power usage.  In an economy where every dollar counts, that matters.  You can go so far as to virtualize your desktop environment as well, but more about that another day.

In today’s world, it’s pretty easy to see why virtualization has gotten so much attention so quickly.  It offers real benefits on many fronts: it is administrator friendly, accountant friendly and even environmentalist friendly.  It’s not too often you see something new that satisfies all three of these types simultaneously!