Update: Looking for NFS instead of iSCSI? Check out this post: How to set up VMware ESXi, a Synology NFS NAS, and Failover Storage Networking
This week, I’ve been working on a lightweight virtualization infrastructure for a customer and I thought you’d like to see a little of how I put it together. The customer wasn’t really interested in paying for a full SAN solution that would include chassis redundancy and high performance. They opted instead for a 12-bay Synology RS2414RP+, a couple of HP servers for ESXi hosts, and a Cisco 2960 Layer 2 gigabit storage switch, all tied together with VMware vSphere Essentials Plus.
While not exactly a powerhouse in terms of speed and reliability, this entry-level virtualization platform should introduce them to the world of virtual servers, drastically reduce rack space and power consumption, give them the flexibility to recover quickly from server hardware outages, and let them migrate off of their aging server hardware and operating systems more easily, all without breaking the bank. Today, I’m going to show you how I set up Active/Active MPIO using redundant links on both the ESXi hosts and the Synology NAS, allowing for multipath failover and full utilization of all network links.
First, a little bit about the hardware used. The Synology RS2414RP+ is running Synology’s DSM version 5 operating system and carries 4 x 1GbE ports and 12 drive bays, three of which are populated with 3TB drives (9TB raw). This model also supports all four VAAI primitives, which will be important when we make decisions about how to create our LUNs. That support allows us to offload certain storage-specific tasks to the NAS, which dramatically reduces network and compute load on the hosts.
The hosts are regular HP DL380 2U servers with 40GB of RAM and 4 x 1GbE ports each. Because they only have four onboard gigabit ports, we lose a lot of speed and redundancy options on the networking side, but since this customer will only run a handful of lightly-to-moderately used VM guests, they shouldn’t hit any deal-killing performance problems. In any case, they’ll see much better response times than they’re currently getting on their old gear. In another post, I’ll run some Iometer tests to show actual IOPS and set expectations for performance.
Here is my very quick-and-dirty whiteboard layout:
On each ESXi host (4 x 1GbE):
1 port for Management/vMotion
2 ports for iSCSI storage
1 port for Public network access to VM guests
On the Synology NAS (4 x 1GbE):
1 port for Public network management access
3 ports for iSCSI storage
The iSCSI storage and vMotion ports all live on a dedicated back-end storage switch. The Public ports will go to the existing premise switches. Normally, I don’t like to double up Management and vMotion on one port, but with only four ethernet ports per host, we’re a bit limited in our options. Since this is such a small environment, there won’t be a lot of vMotion traffic anyway.
The first thing that we have to do is to set up the networking on our Synology. In this example, I used LAN ports 2, 3, and 4 for the iSCSI storage, giving them each a static IP address on the storage network. Port 1 has a static IP on the Management network. You’ll notice that I’ve already enabled jumbo frames and set them to MTU 9000.
Now we’re going to create our LUNs. Synology doesn’t support VAAI 100% across the board; to get full support, you have to create your LUNs as “iSCSI LUN (Regular Files)” rather than as block-level LUNs. You’ll see this in a minute. To get ready for that, we’ll first create a Disk Group. Open the Storage Manager in DSM. In this example, you’ll see that I’ve already created one and opted to give it the full size of the three-drive array:
Next, create a Volume on the Disk Group. Again, you’ll see I’ve already created a Volume and I again opted to fill the Disk Group created previously.
Now, we’ll create our first LUN. Click on iSCSI LUN, and then click Create. You’ll notice that only the top option is available. That’s due to the way we created the Disk Group and Volume. The two previous steps are only necessary if you wish to use VAAI. Your iSCSI will still work with the Block-Level storage, you just won’t have hardware acceleration support in ESXi.
Click Next and choose the options for your LUN. The critical step here is to set “Advanced LUN features” to YES. I’ve also heard that Thin Provisioning must be enabled as well, but I wasn’t able to confirm that in my testing. If you haven’t already created an iSCSI target, go ahead and do it in this wizard.
In this example, I chose not to enable CHAP.
Next, click on iSCSI Target and edit the target you created in the last step. It’s critical that you enable “Allow multiple sessions…” here, since several network ports will be logging in to this target at once. You may also wish to set up masking. In the second picture, I’ve blocked “Default privileges” and then added the iSCSI initiator names of my two hosts. This prevents stray connections from unwanted hosts (or attackers).
Now, let’s jump over to one of our hosts and set up the networking and the iSCSI software adapter. You’ll need to do this on each host in your cluster. For this exercise, we’re using the legacy vSphere Client connected directly to the host, because I haven’t completed the build yet and don’t have a vCenter Server to point a web browser at.
After you connect to your host, set up the storage networking. In this example, we’re using two of our four ethernet ports for iSCSI storage. Create a vSwitch and two VMkernel ports, each with its own IP address on the storage network as shown. You’ll also want to set the MTU here if you’re using jumbo frames.
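If you’d rather script this (or just prefer the shell), the same networking can be built with esxcli. This is only a minimal sketch: vSwitch1, the iSCSI-1/iSCSI-2 port groups, vmnic1/vmnic2, and the 10.10.10.x addresses are example values, so substitute your own uplinks, names, and storage-network IPs.
# esxcli network vswitch standard add --vswitch-name=vSwitch1
# esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
# esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
# esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
# esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-1
# esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-2
# esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1 --mtu=9000
# esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.10.11 --netmask=255.255.255.0 --type=static
# esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-2 --mtu=9000
# esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.10.12 --netmask=255.255.255.0 --type=static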
On each vmkernel, edit the NIC Teaming settings and override the switch failover order. Your adapters should have opposite configurations. As shown in this example, iSCSI-1 has vmnic1 “Active” and vmnic2 “Unused”, while iSCSI-2 has the reverse (vmnic2 “Active” and vmnic1 “Unused”). Essentially, this binds each vmkernel to a specific LAN port. Later on, we’ll teach the host how and when to utilize the ports.
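The same failover override can be set from the shell, assuming the port group and vmnic names used above. Uplinks you don’t list as active or standby should end up unused for that port group, but it’s worth double-checking the result in the client afterward.
# esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic1
# esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-2 --active-uplinks=vmnic2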
Next, click on Configuration and then Storage Adapters. Add your software HBA by clicking Add. You’ll notice I’ve already done this step.
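If you’re working from the shell instead, the software adapter can be enabled and then located (it shows up as a vmhba) with:
# esxcli iscsi software set --enabled=true
# esxcli iscsi adapter list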
Highlight the iSCSI Software Adapter you just created and click Properties. If you click Configure (on the General tab), you can get the iSCSI initiator name to put in the masking section we talked about a little while ago. If you use CHAP, you’ll also want to configure it here (globally) or on the individual targets. Click the Dynamic Discovery tab and click Add. Here, you’ll put in the IP address of one of the iSCSI ports we set up on the NAS. You only have to put in one, but you may want to add a second for redundancy. The NAS’s remaining iSCSI portals will be discovered automatically and will show up on the Static Discovery tab.
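The CLI equivalent looks something like this. Here vmhba33 is a placeholder for whatever adapter name the list command above reported, and the address is one of the NAS’s iSCSI port IPs; the first command also prints the host’s initiator IQN, which is handy for the masking step.
# esxcli iscsi adapter get --adapter=vmhba33
# esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.10.21:3260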
Now, click on the Network Configuration tab. Click on Add and individually add the two iSCSI vmkernel ports. In the screenshot, you’ll see that the path status for both physical links is “Active”. This is our Active/Active multipathing.
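Port binding can also be done with esxcli, again assuming the vmk and vmhba names used earlier; the list command at the end confirms both paths are bound.
# esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
# esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
# esxcli iscsi networkportal list --adapter=vmhba33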
Now, go to Storage and rescan your HBAs. Once the rescan is complete, you can click Add Storage to add your new LUN as normal. You’ll see that the Hardware Acceleration state of your LUN is “Supported”, which means the VAAI primitives are functional.
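For reference, the shell versions of the rescan and the VAAI check (same vmhba33 assumption as above):
# esxcli storage core adapter rescan --adapter=vmhba33
# esxcli storage core device vaai status get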
The very last thing we need to do is to enable Round Robin on our LUN. You’ll need to do this for each LUN on each host, but it will make your iSCSI NICs balance the traffic instead of loading it all onto a single path. Alternatively, you can set the default pathing policy for the storage plugin and then reboot the host, which will apply it to every LUN claimed by that plugin. To set it globally, SSH into your host and execute:
# esxcli storage nmp satp set -s VMW_SATP_ALUA -P VMW_PSP_RR
Otherwise, to set it per-LUN, go back to Storage, highlight your LUN, and click on Manage Paths. Change your Path Selection to Round Robin and click Change.
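If you’d rather make the per-LUN change from the shell, something like this works; the naa identifier below is just a placeholder you’d copy out of the device list output.
# esxcli storage nmp device list
# esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR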
Here, you can see a utilization chart with traffic being balanced across both vmnics to the storage array. Remember, it’s using Round Robin, so it’s not true load balancing, but as you can see it does a pretty good job of using both links. I highlighted the adapters so that both lines would show up more clearly.
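If you want to watch this live from the host itself rather than the client’s performance charts, esxtop’s network view (press “n” after it starts) shows per-vmnic throughput in real time:
# esxtop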
That’s it! You now have Active/Active Multi-Path I/O configured to your Synology NAS with full hardware acceleration enabled! In another post, I will show the performance and throughput characteristics of the environment we built so that you can see how the traffic looks. Be sure to Like and reblog this if you found it helpful. C’mon, people, I need the traffic!