In our last episode, I wrote about the final construction and initial thoughts on the PC I built to drive my HTC Vive headset. Now that I’ve been using it for a few weeks, it’s time to share my thoughts on the build and on VR. Stick around — it’ll probably be thrilling, or at least interesting enough to occupy you for a few minutes in between Pokémon Go captures.
Let me preface this by saying that this is not my first vSphere upgrade. I’m comfortable with the procedures and I’m confident in my ability. I still broke the network. But, like a good admin should, I own up to it. It was my fault. But, also like a good admin, I tracked down the outage and fixed it. Pointing fingers and redirecting blame doesn’t get the packets flowing again. Ok, I’ve got that out of the way so you can’t look upon me with scorn.
It all started during a monthly maintenance cycle. This month, I was deploying a new vCenter 5.5 Server Appliance (VCSA) to replace our non-virtualized Windows 2008 vCenter 5.1 Server. I didn’t really care about historical data, and recreating permissions and so forth is relatively simple in this particular environment, so my plan was to stand up a new VCSA, disconnect the hosts from the old vCenter, and connect them to the new VCSA. From there, I would use VUM to upgrade the hosts from 5.1 to 5.5.
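Before pointing hosts at a brand-new VCSA, it’s worth a quick sanity check that the appliance is actually answering on its web-services port. This is a minimal sketch of such a pre-flight check — the hostname is a placeholder, not my real environment:

```python
import socket

def vcsa_reachable(host, port=443, timeout=3.0):
    """Return True if the VCSA answers on its web-services port (TCP 443)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # DNS failure, refused connection, or timeout all land here
        return False

if __name__ == "__main__":
    # "vcsa.example.local" is a hypothetical name for the new appliance
    print(vcsa_reachable("vcsa.example.local"))
```

It won’t catch every failure mode (SSO can still be broken while 443 answers), but it rules out the embarrassing ones before you disconnect anything.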
Hello, sandwich fans! It’s been a while since I’ve written, but I have some fresh deli meat for you today. If you recall, last year I wrote a blog post about setting up VMware, Synology, and iSCSI MPIO. It turns out to have been my most-read post so far, for which I thank you. Since I’ve gotten such positive feedback, today I’m going to show you a similar setup, but this time using NFS instead of iSCSI.
There are some pretty significant differences between iSCSI and NFS, both in terms of architecture and performance. One big difference is that NFS doesn’t really support multipathing (MPIO) the way iSCSI does. There are a few workarounds, like using alternate subnets and so forth, but for today we’re going to rely on simple failover on the host side with LACP link bonding on the storage side. Later on, we’ll compare the performance to the iSCSI system we built last year.
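The practical consequence of that difference is easy to state as arithmetic: LACP hashes each TCP flow onto a single physical link, so one NFS session tops out at one link’s speed no matter how many links are in the bond, while iSCSI MPIO round-robin can spread I/O across every path. A toy illustration (the link speeds and counts are just examples, not benchmark results):

```python
def single_session_gbps(link_gbps, links, multipath):
    """Best-case bandwidth for a single storage session.

    LACP pins each TCP flow to one physical link, so a lone NFS session
    is capped at one link's speed; iSCSI MPIO round-robin can use all
    paths at once.
    """
    return link_gbps * links if multipath else link_gbps

# Two bonded 1 Gb links on the Synology side:
print(single_session_gbps(1, 2, multipath=False))  # NFS over LACP -> 1
print(single_session_gbps(1, 2, multipath=True))   # iSCSI MPIO    -> 2
```

That’s why the bond buys us failover and aggregate throughput across many clients, but not more speed for any single datastore connection — keep that in mind when we get to the performance comparison.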
“Hello, this is tech support. You opened a trouble ticket stating that you had an error message and needed some assistance. Can you elaborate?”
“Yes, it’s broken!”
“What’s broken? Can you tell me what the error message said?”
“I don’t remember.”
“What program were you working in?”
“I don’t remember.”
“Is everything working now?”
“Maybe. I don’t know.”
Does this exchange sound familiar to you? If you’ve spent any time in I.T., it should. Every day, thousands of I.T. support professionals field service calls from users who are extremely intelligent in all other areas of their lives, but who, when sitting in front of a computer, can’t seem to grok the magic screen in front of them. These are people who spend hours a day at a keyboard, who have, as one of the primary tools of their job, one of those mystical computers that only the “gurus” can seem to figure out.
So I had an interesting little problem this morning. I got a call from a fellow engineer asking if something was wrong with our vCenter 5.1 server: he couldn’t log in. Obviously, that’s more than a little concerning, so I told him I’d take a look at it. I brought up my client, attempted to sign in, and received the following error:
A general system error occurred: Authorize Exception