Let me preface this by saying that this is not my first vSphere upgrade. I’m comfortable with the procedures and I’m confident in my ability. I still broke the network. But, like a good admin should, I own up to it. It was my fault. But, also like a good admin, I tracked down the outage and fixed it. Pointing fingers and redirecting blame doesn’t get the packets flowing again. Ok, I’ve got that out of the way so you can’t look upon me with scorn.
It all started during a monthly maintenance cycle. This month, I was deploying a new vCenter 5.5 Server Appliance (VCSA) to replace our physical Windows Server 2008 box running vCenter 5.1. I didn’t really care about historical data, and recreating permissions and so forth is relatively simple in this particular environment, so my plan was to stand up a new VCSA, disconnect the hosts from the old vCenter, connect them to the new VCSA, and then use VMware Update Manager (VUM) to upgrade the hosts from 5.1 to 5.5.
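If you’d rather script the reconnection step than click through the client, something like the following pyVmomi (VMware’s Python SDK) sketch would do it. To be clear, this is just the shape of the procedure, not a transcript of what I ran; the VCSA address, host names, and credentials are all placeholders:

```python
# Minimal sketch: move ESXi hosts to a new vCenter with pyVmomi ("pip install pyvmomi").
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; validate certs in production

# Log in to the new VCSA (placeholder address/credentials)
si = SmartConnect(host='vcsa.example.com',
                  user='administrator@vsphere.local',
                  pwd='********', sslContext=ctx)
content = si.RetrieveContent()
host_folder = content.rootFolder.childEntity[0].hostFolder  # assumes a single datacenter

for esx in ('esx01.example.com', 'esx02.example.com'):
    spec = vim.host.ConnectSpec(hostName=esx, userName='root',
                                password='********',
                                force=True)  # take the host even if the old vCenter still claims it
    try:
        WaitForTask(host_folder.AddStandaloneHost_Task(spec=spec, addConnected=True))
    except vim.fault.SSLVerifyFault as err:
        # First contact fails SSL verification; retry with the reported thumbprint
        spec.sslThumbprint = err.thumbprint
        WaitForTask(host_folder.AddStandaloneHost_Task(spec=spec, addConnected=True))
    print(f'{esx} connected')

Disconnect(si)
```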
Hello, sandwich fans! It’s been a while since I’ve written, but I have some fresh deli meat for you today. If you recall, last year I wrote a blog post about setting up VMware, Synology, and iSCSI MPIO. It has turned out to be my most-read post so far, for which I thank you. Since the feedback was so positive, today I’m going to walk through a similar setup, but this time using NFS instead of iSCSI.
There are some pretty significant differences between iSCSI and NFS, in terms of both architecture and performance. One big difference is that NFS (at least v3, which is what ESXi speaks) has no real equivalent to iSCSI’s multipathing (MPIO). There are work-arounds, like spreading mounts across alternate subnets, but for today we’re going to rely on simple failover on the host side and LACP link aggregation on the storage side. Later on, we’ll compare the performance to the iSCSI system we built last year.
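Since all of the path redundancy lives in the LACP bond on the Synology side, every host simply mounts the same export from the bond’s single IP address. If you want to script that mount across all your hosts, a pyVmomi sketch like this would do it; the bond IP, export path, and datastore name below are made-up placeholders:

```python
# Sketch: mount the same NFS export on every host managed by vCenter.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; validate certs in production
si = SmartConnect(host='vcsa.example.com', user='administrator@vsphere.local',
                  pwd='********', sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    spec = vim.host.NasVolume.Specification(
        remoteHost='10.0.50.10',           # the Synology LACP bond's IP (placeholder)
        remotePath='/volume1/vmware_nfs',  # NFS export on the Synology (placeholder)
        localPath='syn-nfs01',             # datastore name as ESXi will see it
        accessMode='readWrite')
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
    print(f'{host.name}: mounted syn-nfs01')
view.DestroyView()
Disconnect(si)
```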
In one of our data silos, we’ve been using a lot of spreadsheets and manual tracking to manage the physical-to-virtual cross-connects between our physical switches and our ESXi hosts. There are a couple of reasons for this. The first is that, until recently, the silo was using non-Cisco switches; the second is that the licensing on our hosts doesn’t allow for LLDP, only CDP. We only have Standard licensing, and Enterprise Plus is required for distributed switches. Since only distributed switches can do LLDP, we were stuck with a discovery protocol our physical switches couldn’t speak. Now that we’re in the process of migrating the silo to some newer hardware, I’m preparing to do a small redesign of our environment.
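One nice side effect: once the switches on the other end actually speak CDP, the cross-connect spreadsheet can populate itself, because even standard switches expose CDP neighbor data through the vSphere API. Here’s a quick pyVmomi sketch of reading each host’s CDP neighbors; the vCenter address and credentials are placeholders:

```python
# Sketch: dump the CDP neighbor for every physical NIC on every host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; validate certs in production
si = SmartConnect(host='vcsa.example.com', user='administrator@vsphere.local',
                  pwd='********', sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    # No device argument = return hints for all physical NICs on the host
    for hint in host.configManager.networkSystem.QueryNetworkHint():
        cdp = hint.connectedSwitchPort  # None if no CDP neighbor was heard
        if cdp:
            print(f'{host.name} {hint.device} -> {cdp.devId} port {cdp.portId}')
view.DestroyView()
Disconnect(si)
```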