Building My Lab Environment – Part IV: Configuring vSphere 5 Resource Pools

This is the fourth part of my series on building my lab environment. In Part I, I laid out the hardware that I would use for my lab. Part II covered the installation of VMware vSphere 5 on the hardware. Part III covered the VMware vSphere 5 network configuration. In this post I will cover the vSphere 5 resource pools that I set up.

Resources

Before going into resource pools, I’m going to talk a little bit about resource sharing. Most of this was gleaned from Mastering VMware vSphere 5 by Scott Lowe, which I mentioned in a previous post and highly recommend picking up.

There are two main resources that a host has to manage for all virtual machines: memory and CPU. When you create a virtual machine, you specify the amount of RAM and the number of CPUs that the virtual machine will have. If you have a host with 4GB of RAM available to the guests, you can create four virtual machines with 1GB of RAM each and there will be no contention for memory. (While not “technically” true, as there is a little bit of memory overhead, it is still useful for our purposes.) The same can be said of having a four-processor/core host and creating four single-CPU virtual machines.

But what happens if you add a fifth virtual machine? You get resource contention: the virtual machines will start contending with each other for the available resources.
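Just to make the memory math concrete, here is a quick Python sketch using the example numbers from above (a host with 4GB available to guests and 1GB virtual machines). The figures are purely illustrative and ignore the small per-VM overhead.

```python
# Illustrative numbers only: a host with 4 GB of RAM available to guests
# and virtual machines configured with 1 GB each (per-VM overhead ignored).
HOST_RAM_MB = 4096
VM_RAM_MB = 1024

for vm_count in (4, 5):
    configured = vm_count * VM_RAM_MB
    status = "no contention" if configured <= HOST_RAM_MB else "contention for memory"
    print(f"{vm_count} VMs x {VM_RAM_MB} MB = {configured} MB of {HOST_RAM_MB} MB -> {status}")
```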

vSphere, however, comes with some nifty tricks to help reduce this contention. The first is idle page reclamation, which allows the vSphere host to reclaim memory pages that a virtual machine isn’t actively using. The second is transparent page sharing: identical pages of memory are shared between virtual machines to reduce the total number of memory pages needed. The last is the balloon driver, a driver installed with the VMware Tools that requests memory from inside the virtual machine (inflating) and passes that memory back to the vSphere host to use for other virtual machines. When the contention is gone, the host gives that memory back to the balloon driver, which then deflates and releases the memory back to the guest.

Along with the automatic methods that vSphere can use to minimize contention, it also allows administrators to set resource limits and shares to prioritize some virtual machines over others. These can be set at the individual virtual machine level or by creating resource pools that virtual machines are placed in. I’m not going to go into setting resource limits and shares at the virtual machine level, as that becomes tedious as more machines are added. The best way to control the resources is with resource pools.

Resource Pools

A resource pool is a logical construct of CPU and Memory settings that control access to those resources. Any virtual machine put into the resource pool takes on the settings of that pool. Resource pools are useful in my situation to separate the development and production virtual machines so that the production virtual machines have priority access to the resources.

A resource pool has two parts: the CPU resources and the memory resources. Each part allows you to set a minimum Reservation and a maximum Limit in MHz/MB, as well as a number of Shares.
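To keep those parts straight, here is a small Python sketch of the settings a resource pool carries. This is just an illustration of the structure, not an actual vSphere API object, and the example values are placeholders of my own.

```python
# Illustrative only: a tiny data structure mirroring the settings a resource
# pool exposes in the vSphere client. Not an actual vSphere API object.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ResourceAllocation:
    shares: int                  # relative priority when there is contention
    reservation: int             # guaranteed minimum (MHz for CPU, MB for memory)
    limit: Optional[int] = None  # hard maximum; None means "unlimited"
    expandable: bool = True      # may borrow reservation from the parent pool


@dataclass
class ResourcePool:
    name: str
    cpu: ResourceAllocation      # CPU resources in MHz
    memory: ResourceAllocation   # memory resources in MB


# Placeholder values, not my real settings (those come later in the post).
development = ResourcePool(
    name="Development",
    cpu=ResourceAllocation(shares=2000, reservation=1000, limit=4000),
    memory=ResourceAllocation(shares=2000, reservation=1024, limit=2048),
)
print(development)
```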

To create a new Resource Pool, you right-click on the host in the vSphere client and select New Resource Pool.

Create Resource Pool 01

This will present you with the Create Resource Pool window where you give the resource pool a name and set its limits.

Create Resource Pool 02

Shares

So, to run through the options: the Shares setting is how you prioritize one group of virtual machines over another group of sibling virtual machines when there is contention for resources. For example, say you have two resource pools named Development and Production. The Shares for Development are set to Low (2000) and the Shares for Production are set to Normal (4000). The total Shares allocated is 6000, with Development getting roughly 33% (2000 divided by 6000) of the resources and Production getting roughly 67% (4000 divided by 6000). So for every page of memory that Development receives, Production will receive two. Again, this only comes into effect when there is contention for resources.
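If you want to check that arithmetic, here it is worked out in a few lines of Python, using just the example pool names and share values from above:

```python
# Share values from the example: Development at Low (2000), Production at Normal (4000).
pools = {"Development": 2000, "Production": 4000}
total = sum(pools.values())  # 6000

for name, shares in pools.items():
    print(f"{name}: {shares}/{total} shares = {shares / total:.0%} of contended resources")
# Development: 2000/6000 shares = 33% of contended resources
# Production: 4000/6000 shares = 67% of contended resources
```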

Something to understand, however, is that the Shares get split between all the virtual machines in the resource pool. So if there are four virtual machines in the Development resource pool and sixteen in the Production resource pool, the Development virtual machines will actually get more resources per VM than the Production virtual machines: the Development VMs each receive 500 shares (2000 divided by 4), while the Production VMs each receive 250 shares (4000 divided by 16). There is a great blog post explaining this by Duncan Epping called The Resource Pool Priority-Pie Paradox. He has a bunch of other great posts regarding vSphere; a couple of others I found useful in regards to Shares are Resource Pools and Shares and Custom shares on a Resource Pool, scripted.
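Worked out in Python, the per-VM split from that example looks like this (again, just the example numbers, nothing pulled from a real vSphere host):

```python
# The priority-pie paradox: pool shares are divided among the VMs inside the
# pool, so a "Low" pool with few VMs can end up with more shares per VM than
# a "Normal" pool with many VMs.
pools = {
    "Development": {"shares": 2000, "vm_count": 4},
    "Production": {"shares": 4000, "vm_count": 16},
}

for name, pool in pools.items():
    per_vm = pool["shares"] / pool["vm_count"]
    print(f"{name}: {pool['shares']} shares / {pool['vm_count']} VMs = {per_vm:.0f} shares per VM")
# Development: 2000 shares / 4 VMs = 500 shares per VM
# Production: 4000 shares / 16 VMs = 250 shares per VM
```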

Reservation

The Reservation is the minimum amount of resources guaranteed to the Resource Pool. So if we set the memory Reservation to 4096 MB, this Resource Pool will be guaranteed to have 4096 MB of physical memory available at all times. It is a good idea to set this so that the virtual machines in the Resource Pool have at least some guaranteed resources, which limits their use of the VMkernel swap file (think of a Windows page file for virtual machines) and their time waiting in the CPU queue.
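As a quick back-of-the-envelope check (my own arithmetic, not something vSphere shows you in this form), the Reservations across your pools have to fit within the physical memory the host actually has. The numbers below are made up for illustration:

```python
# Made-up numbers: reservations are carved out of physical memory, so the
# reservations across all pools should fit within what the host really has.
HOST_MEMORY_MB = 8192
pool_reservations_mb = {"Development": 1024, "Production": 4096}

total_reserved = sum(pool_reservations_mb.values())
print(f"Total reserved: {total_reserved} MB of {HOST_MEMORY_MB} MB host memory")
print(f"Left unreserved for overhead and everything else: {HOST_MEMORY_MB - total_reserved} MB")
```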

The Expandable Reservation checkbox allows the Resource Pool to increase its Reservation setting to fulfill reservations set on individual virtual machines in the Resource Pool. You only have to be concerned about this if you are manually setting resource allocations on individual virtual machines in the Resource Pool.

Limit

The Limit is the maximum amount of resources that the virtual machines in the pool can use. It does not limit the number of virtual machines you can create, nor the amount of RAM given to those virtual machines, but it does limit the amount of physical RAM that they can consume. Anything above that must be provided by the VMkernel swap file, which resides on the physical storage and is noticeably slower. So, if we create a pool with a limit of 2048 MB of RAM, we can create four virtual machines with 512 MB of RAM each, and they will all fit in the physical memory space. But if we create one more virtual machine with 512 MB of RAM, that 512 MB will have to be provided by the VMkernel swap, and all the virtual machines in the pool will suffer performance degradation as some of their memory has to be paged out to the VMkernel swap file.
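Here is that limit example worked out in Python (illustrative numbers only, matching the 2048 MB pool limit and 512 MB virtual machines above):

```python
# A 2048 MB memory limit on the pool with 512 MB virtual machines. Memory
# demanded beyond the limit has to come from the VMkernel swap file on disk,
# which is far slower than physical RAM.
POOL_LIMIT_MB = 2048
VM_RAM_MB = 512

for vm_count in (4, 5):
    demanded = vm_count * VM_RAM_MB
    in_ram = min(demanded, POOL_LIMIT_MB)
    swapped = demanded - in_ram
    print(f"{vm_count} VMs: {demanded} MB demanded -> "
          f"{in_ram} MB in physical RAM, {swapped} MB from the VMkernel swap file")
# 4 VMs: 2048 MB demanded -> 2048 MB in physical RAM, 0 MB from the VMkernel swap file
# 5 VMs: 2560 MB demanded -> 2048 MB in physical RAM, 512 MB from the VMkernel swap file
```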

My Settings

So, finally, on to the settings I chose.

Dev resource pool
Prod resource pool

Nothing too fancy here: I limited the Development pool to a fixed amount of resources while guaranteeing it a small reservation, and gave it a low priority if there is contention. Similarly with Production, I left its resources unlimited so it can utilize the full resources of the server, guaranteed it a bigger chunk of what is available, and gave it the highest priority.

In Closing

While it is a lot to wrap your head around at first, and there are some complexities and gotchas, Resource Pools offer a great way to prioritize some virtual machines over others. If you ever want to compare pools or virtual machines to see how the resources are split between them, you can click on the host or pool and view the Resource Allocation tab.

Resource Tab


Series Posts

Posted on May 3, 2012, in Lab, vSphere.
