Building My Capture and Deployment Server – Part I: Virtual Machine Settings

This is the first part of my series on building my capture and deployment server. In this post I will cover the settings I used for my virtual machines.


DEV-DC-01 is going to be the first virtual machine in my Development environment. This virtual machine will act as the deployment and image capture server to build the first STIGed image.

So, to start out with, I right-click on the Development Resource Pool and select “New Virtual Machine”. I select Custom for the configuration to open up more options when creating this virtual machine.


I give it a name.


I select the NFS datastore on my QNAP server.


I leave it at the default, Virtual Machine Version 8.


Make sure that Windows Server 2008 R2 is selected.


Now the CPU setting is important for virtual machines. If you don’t need multiple CPUs for performance, don’t assign them. If a virtual machine has two or more virtual CPUs (vCPUs), and an application is running threads on both that require synchronization, vSphere can run into an issue where one vCPU outpaces the other. This halts processing on one thread while the other catches up. VMware published a document on this here. Joshua Townsend also has a good blog entry here detailing a real-world instance of this causing issues. He also provides some great links at the bottom of his post for more information on co-scheduling.
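To make the cost concrete, here is a toy Python sketch of the problem (my own illustration with made-up numbers, not VMware's scheduling model): two threads must meet at a barrier each iteration, so whenever one vCPU is descheduled by the host, the other sits idle waiting for it.

```python
# Toy model of two synchronized vCPUs: each iteration, both threads must
# reach a barrier before either can continue. Times are in milliseconds
# and are invented purely for illustration.
vcpu_a = [10, 10, 10, 10]   # thread A: steady progress
vcpu_b = [10, 40, 10, 40]   # thread B: periodically descheduled by the host

# Each iteration finishes only when the slower thread reaches the barrier.
per_iteration = [max(a, b) for a, b in zip(vcpu_a, vcpu_b)]
total = sum(per_iteration)

# Time thread A spends stalled at the barrier instead of doing useful work.
idle_a = total - sum(vcpu_a)
print(total)    # 100
print(idle_a)   # 60
```

In this contrived run, the VM burns 60 ms of thread A's time doing nothing, which is exactly why a single vCPU is the safer default unless the workload proves it needs more.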


Here I changed the default RAM from 4 GB to 1 GB as this is a dev server and it really doesn’t need that much RAM for what it will be doing.


For the network adapter, I changed the default Intel E1000 NIC to the VMXNET3 NIC. This is recommended in the Performance Best Practices for VMware vSphere 5.0 for guest operating systems that support it. There are some limitations: you have to use virtual hardware version 7 or later, and you are unable to vMotion to a host running ESX/ESXi 3.5.x or earlier. But the VMXNET paravirtualized adapters pass network traffic between the virtual machine and the network with less overhead than the default E1000 adapter. One downside to the VMXNET3 NIC is outlined in a blog post by Scott Lowe: there is an issue when cloning a Windows 7 or Windows Server 2008 R2 VM that causes “orphaned NICs”. Luckily, Microsoft has a hotfix available at this KB article.

That entire last paragraph? Forget everything about it. After spending a couple of hours trying to get network address translation working in RRAS, I found an entry on the VMware community board about a bug with VMXNET3 NICs and NAT going back to ESXi 4. I ended up going back to the Intel E1000 NIC and will use it for all future VMs.


I left the SCSI controller set to the default, but I do want to point out the last option. The VMware Paravirtual controller is a better choice for I/O-intensive applications: it uses less CPU and provides increased throughput compared to the other adapters. However, the performance best practices I linked to above recommend using it only for virtual machines that need it. This blog post by Scott Lowe, whose book I keep mentioning, talks in more detail about when to use and when not to use this controller.


Nothing special here, just creating a new virtual disk.


This I modified from the default 40 GB to 80 GB. A full Windows installation can approach 25 GB by itself before you even start installing applications or Windows features, and this machine will hold several captured Windows images, so it will need the space.
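The sizing above is back-of-the-envelope math, and a quick sketch shows where 80 GB comes from. The image count, average image size, and headroom factor below are my own assumptions, not figures from any VMware or Microsoft sizing guide:

```python
# Rough capacity estimate for the deployment server's disk.
# All figures are assumptions for illustration.
base_os_gb = 25      # a full Windows Server 2008 R2 install, roughly
image_count = 3      # captured .wim images I expect to keep around
avg_image_gb = 12    # a sysprepped, compressed WIM is smaller than the install
headroom = 1.25      # ~25% free space for temp files during capture

needed_gb = (base_os_gb + image_count * avg_image_gb) * headroom
print(round(needed_gb, 1))   # 76.2
```

With those assumptions the estimate lands just under 80 GB, so the bump from the 40 GB default gives a comfortable margin.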


Nothing special here, just the default SCSI device node.


And… the summary of all the settings. I also checked the box to edit the virtual machine settings before completion.


I had two reasons for that. First, I wanted to set the CD/DVD drive to connect at power on and point it to the Windows Server 2008 R2 ISO. Second, I wanted to set the total video memory on the video card to 32 MB to allow for larger display resolutions and a smoother mouse.
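The 32 MB figure is easy to sanity-check with framebuffer math: a display at 32-bit color needs width × height × 4 bytes. A small sketch (resolutions here are just examples I picked):

```python
# Framebuffer memory needed for a single display at 32-bit color.
def framebuffer_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / (1024 * 1024)

print(round(framebuffer_mb(1024, 768), 1))    # 3.0  - the kind of size a small default covers
print(round(framebuffer_mb(2560, 1600), 1))   # 15.6 - a large display still fits in 32 MB
```

So 32 MB comfortably covers even a large single display, with room left over for the smoother console experience.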


And that’s it for this post.




Posted on June 25, 2012, in Lab, STIG, vSphere.
