Building My Lab Environment – Part V: Shared Storage Configuration

This is the fifth part of my series on building my lab environment. In Part I, I laid out the hardware that I would use for my lab. Part II covered the installation of VMware vSphere 5 on the hardware. Part III covered the VMware vSphere 5 network configuration. Part IV covered the resource pool configuration. In this post I will cover the shared storage solution I purchased and setting it up in vSphere.

The Hardware

QNAP TS-459U-SP+

The QNAP TS-459U-SP+ is a small-to-medium-sized NAS solution. This particular model comes in either a standard desktop chassis or a rack-mountable chassis, denoted by the U in the model name. Since I am going for a rack-mount setup, I chose the U model. The rack-mount version also comes in an SP (single power supply) or RP (redundant power supply) variant. Considering this is just for my home lab, I went with the cheaper SP model.

It can hold four 2.5”/3.5” SATA I/II drives for a total of 16 TB. It also supports online RAID expansion and RAID level migration, is a VMware Ready certified iSCSI array, and has a host of other features. You can see the full specs here on QNAP’s site.

Cost on Newegg: $1,199.99 US dollars

Additional Hard Drive

In my first post in the series I talked about buying two Western Digital RE4 WD2003FYYS 2TB 7200 RPM hard drives. When I bought the QNAP, I bought an additional drive to bring the total up to three, enough for the RAID 5 I wanted. However, due to the flooding in Thailand, it cost me $349.99 US dollars compared to the $199.99 US dollars the first two cost me.

Cost on Newegg: $349.99 US dollars

The Setup

So, it took me a while to get everything set up because I was still learning the QNAP’s capabilities. I also had an interesting time moving VMs around. Before I bought the QNAP, I had the first two Western Digital drives in the vSphere server with some virtual machines already created on them. Once the QNAP arrived, I had to set it up with the new hard drive, move the unused drive over from the vSphere server, and set up a mirror on the QNAP. I then had to create an NFS share on the QNAP to hold the VMs and use some vSphere client commands to properly move them. Quite tedious. After that was all finished, I moved the third hard drive to the QNAP and did an online RAID migration from a mirror to RAID 5. The results were less than spectacular: I overlooked the recommendation to first remove a finicky QPKG add-on, which made me think I had lost all my data and VMs. Some work at the Linux command line got it all straightened out, but it was a nerve-wracking experience that took several days to fix. Since everything is set up now, I can’t exactly do a “walkthrough” of setting it up, but I will detail what I did. QNAP provides instructions on how to configure their NAS for vSphere here on their site.
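Since the VM shuffling was the tedious part, here is a rough sketch of how the re-registration step could be scripted with VMware’s Python SDK (pyvmomi) instead of clicking through the client. This is not what I ran at the time; the host name, credentials, datastore name, and VM name are placeholders, and it assumes the VM’s folder has already been copied over to the NFS datastore.

```python
# Hypothetical sketch: register a VM whose files were already copied to the
# new NFS datastore. Connection details and names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()      # lab host uses a self-signed cert
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# On a standalone ESXi host there is a single datacenter and compute resource.
datacenter = content.rootFolder.childEntity[0]
compute = datacenter.hostFolder.childEntity[0]
host = compute.host[0]

# Point at the .vmx file in its new location and add it back to inventory.
vmx_path = "[vSphereDataStore01] lab-vm01/lab-vm01.vmx"
datacenter.vmFolder.RegisterVM_Task(path=vmx_path, name="lab-vm01",
                                    asTemplate=False,
                                    pool=compute.resourcePool, host=host)

Disconnect(si)
```

RegisterVM_Task returns a task object, so in a real script you would want to poll it for completion before moving on to the next VM.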

  • To start off, I created a user on the QNAP called vSphereSvc with the same password as the vSphere root account. I used this account to provide access to the NFS share.
  • I then created a new shared folder on the QNAP called vSphereDataStore01.
    [Screenshot: QNAP share properties]
  • I then gave the vSphereSvc account Read/Write permissions to the share.
  • I also set the NFS Access Control to only allow connections to this share from the IP address of the vSphere server.
  • With all that done, I opened the vSphere client, clicked on the host, went to the “Configuration” tab, and selected “Storage” under Hardware. From there I clicked “Add Storage…” in the upper right of the client.
  • On the Add Storage wizard you have the option to add a Disk/LUN or Network File System. I went with Network File System for two reasons. First, I read in several different posts on the QNAP forums about issues with the QNAP implementation of iSCSI. Second, the QNAP uses embedded Linux for its operating system. Serving NFS shares is essentially a native operation for the QNAP, whereas the iSCSI implementation creates a file on top of the ext4 file system, adding another level of abstraction. I felt I would get the best performance by using the NFS share instead.
    [Screenshot: Add Storage wizard, storage type selection]
  • As you can see in the screenshot below, selecting NFS greatly reduces the amount of configuration needed to get the datastore up and running: enter the IP address or DNS name of the NAS, the path to the shared folder, give the datastore a name, and you’re done. A scripted equivalent is sketched just after this list.
    [Screenshot: Add Storage wizard, NFS datastore settings]
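For reference, the same NFS mount can be done programmatically. Below is a minimal sketch using pyvmomi, the scripted equivalent of the wizard above; the QNAP IP address, export path, and datastore name are assumptions standing in for my actual values.

```python
# Hypothetical sketch: mount the QNAP NFS export as a datastore, the scripted
# equivalent of the Add Storage wizard. Addresses and names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

# Describe the NFS export: the QNAP's address, the exported folder, and the
# label the datastore should get in vSphere.
spec = vim.host.NasVolume.Specification(
    remoteHost="192.168.1.50",            # QNAP IP (placeholder)
    remotePath="/vSphereDataStore01",     # shared folder exported by the QNAP
    localPath="vSphereDataStore01",       # datastore name shown in the client
    accessMode="readWrite",
)
ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print(ds.summary.name, ds.summary.capacity)

Disconnect(si)
```

I believe the QNAP exports each shared folder at /&lt;share name&gt;, but double-check the NFS service settings on the NAS if the mount is refused, along with the NFS Access Control entry for the host’s IP.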
