Thursday, February 24, 2011

Troubleshooting Performance Related Problems in vSphere 4.1 Environments

Source: communities.vmware.com
The hugely popular Performance Troubleshooting for VMware vSphere 4 guide has now been updated for vSphere 4.1. The document provides a step-by-step approach for troubleshooting the most common performance problems in vSphere-based virtual environments. The steps use performance data and charts readily available in the vSphere Client and esxtop to aid the troubleshooting flows. Each performance troubleshooting flow has two parts:
  1. How to identify the problem using specific performance counters.
  2. Possible causes of the problem and solutions to solve it.

New sections added to the document cover troubleshooting performance problems in resource pools on standalone hosts and in DRS clusters, additional troubleshooting steps for environments experiencing memory pressure (hosts with compressed and swapped memory), high CPU ready time on hosts that are not CPU saturated, environments sharing resources such as storage and network, and environments using snapshots.

This document by no means covers the entire breadth of performance-related problems. We ask readers, including VMware performance community members and vSphere administrators, to help enhance the guide by reporting the performance problems they encounter in their vSphere-based virtual environments, including those that require elaborate troubleshooting steps. We hope the community will actively contribute by engaging in live discussions, providing feedback, and asking questions; all of this input will serve as topics for future updates.

Monday, January 24, 2011

Due to a possible deadlock on rpmdb, upgrading ESX 4.0 to 4.0 Update 1 can fail or time out and leave the host in an unusable state

Symptoms

When attempting to upgrade ESX 4.0 to ESX 4.0 Update 1 (U1), you may experience these symptoms:
  • Upgrade operation may fail or hang and can result in an incomplete installation
  • Upon reboot, the host that was being upgraded may be left in an inconsistent state and may display a purple diagnostic screen with the following error:

    COS Panic: Int3 @ mp_register_ioapic

Purpose

ESX 4.0 U1 includes an upgrade to glibc version 5.3, which implements a change in the locking mechanism compared to glibc version 5.2 already installed with ESX 4.0. If an rpm command is run during the installation of ESX 4.0 U1, a deadlock may be placed on rpmdb. For more information, see Red Hat PR 463921.
 
As a result, upgrading ESX 4.0 to 4.0 U1 can fail or time out and leave the host in an unusable state. 
 
While this issue is not specific to any hardware vendor, it has been reported to occur on HP ProLiant systems when Insight Management Agents are installed and running on the host being upgraded. Investigation of this issue revealed that the Insight Management Agents run rpm commands on a regular basis, which triggers the deadlock during the U1 installation. The issue can also occur on systems from other vendors that have a process or application running rpm, or if you manually run an rpm command, such as rpm -qa, while the Update 1 installation is in progress.

Note: The VMware esxupdate tool can be used standalone and is also used by VMware Update Manager and the VMware Host Update Utility.

Resolution

Who is affected

  1. Customers using VMware vSphere 4 who are upgrading to ESX 4.0 U1 on HP ProLiant systems with a supported version of the HP Insight Management Agents running.
  2. Customers running rpm commands on systems from any vendor while upgrading to ESX 4.0 U1.
This affects any of the following upgrade scenarios:
  • Upgrade using Update Manager
  • Upgrade using esxupdate
  • Upgrade using vSphere Host Update Utility
Note: ESXi is not affected.

Solution

ESX 4.0 Update 1 has been re-released with changes to avoid this issue. The installation process checks for running agents and stops them before proceeding.
 
The re-released ESX 4.0 Update 1 is referred to as ESX 4.0 Update 1a and is available via vSphere Update Manager (VUM) and the VMware Downloads site.
 
Note: The changes in ESX 4.0 Update 1a do not address the underlying issue with the glibc locking mechanism. It is critical that you do not run rpm commands on any host while the ESX 4.0 Update 1a installation is in progress.
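Before starting the Update 1a installation it can also help to verify that nothing on the host is invoking rpm. Below is a minimal sketch run from the service console; the HP service name is an assumption (it varies by Insight Management Agents version), so treat it as an example rather than an exact recipe.

# Check whether any process is currently running rpm (the [r] trick keeps grep from matching itself)
ps -ef | grep -i [r]pm

# If HP Insight Management Agents are installed, stop them for the duration of the upgrade.
# Service names differ between agent versions; "hpasm" is only an example.
service hpasm stop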
 
If you meet one or both of the conditions under Who is affected and you have already run the original ESX 4.0 Update 1 installation but have not rebooted the host, do not reboot the ESX host. Contact VMware Technical Support for assistance. For more information, see How to Submit a Support Request.
 
WARNING: Rebooting the host means the host may need to be reinstalled because it is not recoverable after a reboot.
 
WARNING: If you have virtual machines running on local storage, they may not be retained if you reinstall ESX 4.0 as a result of this issue. Contact VMware Support for assistance before reinstalling.

Restarting hostd (mgmt-vmware) on ESX hosts restarts hosted virtual machines where virtual machine Startup/Shutdown is enabled

Details
This is an issue with virtual machines that are set to automatically start or stop and that are hosted on ESX 3.x. Manually shutting down, starting up, or restarting hostd through the service console causes hosted virtual machines that are set to automatically change power states to stop, start, or restart, respectively. 

To work around this issue, disable Virtual Machine Startup/Shutdown for the ESX host through VirtualCenter or a VMware Infrastructure (VI) Client that is directly connected to the host.
 
GUI Method 
To disable Virtual Machine Startup/Shutdown:
  1. Log in to VirtualCenter.
  2. Select the ESX Server host where you want to restart hostd.
  3. Select the Configuration tab.
  4. Select Virtual Machine Startup/Shutdown.
  5. Select Properties.
  6. Deselect Allow Virtual machines to start and stop automatically with the system.
CLI Method
If the host is not reachable through VirtualCenter or the VI Client:
  1. Log in to the ESX Server service console as root.
  2. At the command line run vimsh.
  3. At the [/] prompt, type:
    hostsvc/autostartmanager/enable_autostart 0
     
  4. Type exit. You can now safely restart mgmt-vmware (hostd).
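Putting the CLI method together, the sequence below is a minimal sketch of disabling autostart non-interactively and then restarting hostd. It assumes vimsh accepts -n -e for one-shot command execution (the same mechanism the vmware-vim-cmd wrapper uses); if that does not apply to your build, use the interactive steps above instead.

# Disable automatic virtual machine startup/shutdown without entering the interactive vimsh shell
vimsh -n -e "hostsvc/autostartmanager/enable_autostart 0"

# With autostart disabled, hostd can be restarted without power-cycling the hosted VMs
service mgmt-vmware restart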

How to Divide & Combine vSphere 4.x license keys

Dividing vSphere 4.x license keys

To divide vSphere 4.x license keys:
  1. Go to http://www.vmware.com/account/login.do and log in to the license portal.
  2. Expand the product edition (e.g., vSphere 4 Standard) under Your VMware Product License Keys to view the available license keys.
  3. Click Divide.
  4. Select the license you wish to divide by clicking the associated radio button.
  5. Click Continue.

    You can review the order information for the license you wish to split and decide how many new licenses you want to generate.
  6. Enter the count for each of the new license keys.
  7. Click Continue.

    On the confirmation page, you can review the split operation. A warning message appears.
  8. Click Confirm.

    A dialog is displayed while the operation is in progress. When the split operation is complete, you are returned to the Licensing page. The original license key is no longer visible in the portal, and the newly generated license keys are indicated by New.

Combining vSphere 4.x license keys

To combine the vSphere 4.x license keys:
  1. Go to http://www.vmware.com/account/login.do and log in to the license portal.
  2. Expand the product edition (e.g., vSphere 4 Standard) under Your VMware Product License Keys to view the available license keys.

    Note: You cannot combine license keys that belong to different editions. For example, you cannot combine a vSphere Standard license key with a vSphere Enterprise license key.
  3. Click Combine.
  4. Select the licenses you wish to combine by clicking the associated check boxes.
  5. Click Continue.

    On the confirmation page, you have a chance to review the combine operation. A warning message appears.
  6. Click Confirm to proceed with the combine operation.

    A dialog is displayed while the operation is in progress. When the combine operation completes, you are returned to the Licensing page. The original license keys are no longer visible in the portal, and the newly generated license keys are indicated by New.

Friday, January 14, 2011

ESXTOP - Deep Dive

Source - www.yellow-bricks.com
This page is solely dedicated to one of the best tools in the world for ESX: esxtop.

Intro

I am a huge fan of esxtop! I read a couple of pages of the esxtop bible every day before I go to bed. Something I always struggle with, however, is the “threshold” for specific metrics. I fully understand that it is not black and white; in the end, performance is the perception of the user.
Still, there must be certain thresholds. For instance, it is safe to say that when %RDY constantly exceeds a value of 20, it is very likely that the VM responds sluggishly. I want to use this article to “define” these thresholds, but I need your help. Many people read these articles; together we must know at least a dozen metrics, so let’s collect and document them with possible causes where known.
Please keep in mind that these should only be used as a guideline when doing performance troubleshooting! Also be aware that some metrics are not part of the default view. You can add fields to an esxtop view by pressing “f” followed by the corresponding character.
I used VMworld presentations, VMware whitepapers, VMware documentation, VMTN topics and of course my own experience as sources, and these are the metrics and thresholds I have come up with so far. Please comment and help build the main source for esxtop thresholds.

Metrics and Thresholds

Display | Metric | Threshold | Explanation
CPU | %RDY | 10 | Overprovisioning of vCPUs, excessive usage of vSMP, or a limit has been set (check %MLMTD). See Jason’s explanation for vSMP VMs.
CPU | %CSTP | 3 | Excessive usage of vSMP. Decrease the number of vCPUs for this particular VM; this should lead to increased scheduling opportunities.
CPU | %SYS | 20 | The percentage of time spent by system services on behalf of the world. Most likely caused by a high-IO VM. Check other metrics and the VM for the possible root cause.
CPU | %MLMTD | 0 | The percentage of time the vCPU was ready to run but was deliberately not scheduled because that would violate the “CPU limit” settings. If larger than 0, the world is being throttled due to the limit on CPU.
CPU | %SWPWT | 5 | VM waiting on swapped pages to be read from disk. Possible cause: memory overcommitment.
MEM | MCTLSZ | 1 | If larger than 0, the host is forcing VMs to inflate the balloon driver to reclaim memory because the host is overcommitted.
MEM | SWCUR | 1 | If larger than 0, the host has swapped memory pages in the past. Possible cause: overcommitment.
MEM | SWR/s | 1 | If larger than 0, the host is actively reading from swap (vswp). Possible cause: excessive memory overcommitment.
MEM | SWW/s | 1 | If larger than 0, the host is actively writing to swap (vswp). Possible cause: excessive memory overcommitment.
MEM | CACHEUSD | 0 | If larger than 0, the host has compressed memory. Possible cause: memory overcommitment.
MEM | ZIP/s | 0 | If larger than 0, the host is actively compressing memory. Possible cause: memory overcommitment.
MEM | UNZIP/s | 0 | If larger than 0, the host is accessing compressed memory. Possible cause: the host was previously overcommitted on memory.
MEM | N%L | 80 | If less than 80, the VM experiences poor NUMA locality. If a VM has a memory size greater than the amount of memory local to each processor, the ESX scheduler does not attempt to use NUMA optimizations for that VM and uses “remote” memory via the interconnect.
NETWORK | %DRPTX | 1 | Dropped transmit packets; hardware overworked. Possible cause: very high network utilization.
NETWORK | %DRPRX | 1 | Dropped receive packets; hardware overworked. Possible cause: very high network utilization.
DISK | GAVG | 25 | Guest latency; look at “DAVG” and “KAVG”, as the sum of both is GAVG.
DISK | DAVG | 25 | Disk latency most likely caused by the array.
DISK | KAVG | 2 | Disk latency caused by the VMkernel; a high KAVG usually means queuing. Check “QUED”.
DISK | QUED | 1 | Queue maxed out. Possibly the queue depth is set too low. Check with the array vendor for the optimal queue depth value.
DISK | ABRTS/s | 1 | Aborts issued by the guest (VM) because storage is not responding. For Windows VMs this happens after 60 seconds by default. Can be caused, for instance, by failed paths or an array not accepting any IO for whatever reason.
DISK | RESETS/s | 1 | The number of commands reset per second.
DISK | CONS/s | 20 | SCSI reservation conflicts per second. If many SCSI reservation conflicts occur, performance could be degraded due to the lock on the VMFS.
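A quick worked example for the %RDY threshold (a guideline only, assuming the common interpretation that the group-level value is summed across the worlds of a VM): for a 4-vCPU VM, a group %RDY of 20 works out to roughly 20 / 4 = 5% ready time per vCPU, which is usually acceptable, whereas 20 on a single-vCPU VM is a clear warning sign. Expand the group with “e” to inspect the per-world values before drawing conclusions.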

Running esxtop

Although understanding all the metrics esxtop provides may seem impossible, using esxtop itself is fairly simple. Once you get the hang of it you will notice yourself staring at the metrics/thresholds more often than ever. The following keys are the ones I use the most.
Open a console session or SSH to the ESX(i) host and type:
esxtop
By default the screen is refreshed every 5 seconds; change this by typing:
s 2
Changing views is easy; type the following keys for the associated views:
c = cpu
m = memory
n = network
i = interrupts
d = disk adapter
u = disk device (includes NFS as of 4.0 Update 2)
v = disk VM
p = power states

V = only show virtual machine worlds
e = Expand/Rollup CPU statistics, show details of all worlds associated with group (GID)
k = kill world, for tech support purposes only!
l  = limit display to a single group (GID), enables you to focus on one VM
# = limit the number of entities displayed, for instance the top 5

2 = highlight a row, moving down
8 = highlight a row, moving up
4 = remove selected row from view
e = statistics broken down per world
6 = statistics broken down per world
Add/Remove fields:
f
Changing the order:
o
Saving all the settings you’ve changed:
W
Keep in mind that if you don’t change the file name, the configuration is saved and used as the default settings.
Help:
?
In very large environments esxtop can cause high CPU utilization due to the amount of data that needs to be gathered and the calculations that need to be done. If the CPU appears to be highly utilized due to the number of entities (VMs, LUNs, etc.), a command-line option can be used that locks the set of entities and keeps esxtop from gathering additional info, limiting the amount of CPU power needed:
esxtop -l
More info about this command line option can be found here.
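As a usage sketch (assuming -l can be combined with the batch-mode flags covered in the next section, which esxtop normally allows), locking the entity list during a capture keeps the overhead down:

# Lock the entity list to the first snapshot while capturing 100 samples at 2-second intervals
esxtop -l -b -d 2 -n 100 > lockedcapture.csv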

Capturing esxtop results

First things first: make sure you only capture relevant info and ditch the metrics you don’t need. In other words, run esxtop and remove/add (f) the fields as required. When you are finished, make sure to write (W) the configuration to disk. You can either write it to the default config file (esxtop4rc) or write the configuration to a new file.
Now that you have configured esxtop as needed run it in batch mode and save the results to a .csv file:
esxtop -b -d 2 -n 100 > esxtopcapture.csv
Where “-b” stands for batch mode, “-d 2” is a delay of 2 seconds between samples and “-n 100” means 100 iterations. In this specific case esxtop will log all metrics for 200 seconds. If you want to record all metrics, make sure to add “-a” to your command.
Or what about zipping the output directly as well? These .csv files can grow fast, and zipping them saves a lot of precious disk space:
esxtop -b -a -d 2 -n 100 | gzip -9c > esxtopoutput.csv.gz
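The capture duration is simply the delay multiplied by the number of iterations, so for a longer run you only scale those two flags. For example, a one-hour capture at 10-second samples (using only the flags shown above) would look like this:

# 10-second delay x 360 iterations = 3600 seconds (1 hour), all metrics, compressed on the fly
esxtop -b -a -d 10 -n 360 | gzip -9c > esxtop-1hour.csv.gz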

Analyzing results

You can use multiple tools to analyze the captured data.
  1. perfmon
  2. excel
  3. esxplot
Let’s start with perfmon, as I’ve used perfmon (part of Windows, also known as “Performance Monitor”) multiple times and it’s probably the easiest since many people are already familiar with it. You can import a CSV as follows:
  1. Run: perfmon
  2. Right click on the graph and select “Properties”.
  3. Select the “Source” tab.
  4. Select the “Log files:” radio button from the “Data source” section.
  5. Click the “Add” button.
  6. Select the CSV file created by esxtop and click “OK”.
  7. Click the “Apply” button.
  8. Optionally: reduce the range of time over which the data will be displayed by using the sliders under the “Time Range” button.
  9. Select the “Data” tab.
  10. Remove all Counters.
  11. Click “Add” and select appropriate counters.
  12. Click “OK”.
  13. Click “OK”.

With MS Excel it is also possible to import the data as a CSV. Keep in mind, though, that the amount of captured data is huge, so you might want to limit it by first importing it into perfmon, selecting the correct timeframe and counters there, and exporting that subset to a CSV (a command-line alternative for trimming the file is sketched after these steps). When you have done so, you can import the CSV as follows:
  1. Run: excel
  2. Click on “Data”
  3. Click “Import External Data” and click “Import Data”
  4. Select “Text files” as “Files of Type”
  5. Select file and click “Open”
  6. Make sure “Delimited” is selected and click “Next”
  7. Deselect “Tab” and select “Comma”
  8. Click “Next” and “Finish”
All data should be imported and can be shaped / modelled / diagrammed as needed.
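If you prefer to trim the CSV before it ever reaches Excel, a minimal command-line sketch is shown below. The column numbers are hypothetical; list the header first and pick the fields that match the counters you care about.

# Show the header (first line) with numbered columns so you can find the counters of interest
head -1 esxtopcapture.csv | tr ',' '\n' | grep -n "Ready"

# Keep only the timestamp (column 1) plus a hypothetical range of columns, then import the smaller file
cut -d ',' -f 1,245-250 esxtopcapture.csv > esxtoptrimmed.csv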
Another option is to use a tool called esxplot. You can download the latest version here.
  1. Run: esxplot
  2. Click File -> Import -> Dataset
  3. Select file and click “Open”
  4. Double click host name and click on metric

In the resulting graph the legend (to the right of the graph) can get far too long. You can modify that as follows:
  1. Click on “File” -> preferences
  2. Select “Abbreviated legends”
  3. Enter appropriate value
For those using a Mac: esxplot uses specific libraries which are only available in the 32-bit version of Python. For esxplot to function correctly, set the following environment variable:
export VERSIONER_PYTHON_PREFER_32_BIT=yes

Limiting your view

In environments with a very high consolidation ratio (a high number of VMs per host) it can happen that the VM you need performance counters for isn’t shown on your screen, simply because the height of the screen limits how many worlds can be displayed. Unfortunately there is currently no command-line option for esxtop to specify which VMs should be displayed. However, you can export the current list of worlds and import it again to limit the number of VMs shown.
esxtop -export-entity filename
Now you can edit the file and comment out the specific worlds that do not need to be displayed.
esxtop -import-entity filename
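Putting the export/edit/import cycle together, a minimal sketch (using the flag spelling shown above and /tmp/entities as an arbitrary file name) could look like this:

# Export the current list of worlds/entities to a file
esxtop -export-entity /tmp/entities

# Comment out (#) or remove the worlds you do not want displayed
vi /tmp/entities

# Start esxtop again with the trimmed entity list
esxtop -import-entity /tmp/entities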
I figured that there should be a way to get this info through the command line, and this is what I came up with. Please note that <VM name> needs to be replaced with the name of the virtual machine that you need the GID for.
VMWID=`vm-support -x | grep <VM name> | awk '{gsub("wid=", "");print $1}'`
VMXCARTEL=`vsish -e cat /vm/$VMWID/vmxCartelID`
vsish -e cat /sched/memClients/$VMXCARTEL/SchedGroupID
Now you can use the result within esxtop to limit (l) your view to that single GID. William Lam wrote an article a couple of days after I added the GID section; the following is a lot simpler than what I came up with, thanks William!
VM_NAME=STA202G ;grep "${VM_NAME}" /proc/vmware/sched/drm-stats  | awk '{print $

IOBlazer - storage micro-benchmark tool - run in VM - brand new baby from VMware Labs

IOBlazer is a multi-platform storage stack micro-benchmark. IOBlazer runs on Linux, Windows and OSX and it is capable of generating a highly customizable workload. Parameters like IO size and pattern, burstiness (number of outstanding IOs), burst interarrival time, read vs. write mix, buffered vs. direct IO, etc., can be configured independently. IOBlazer is also capable of playing back VSCSI traces captured using vscsiStats. The performance metrics reported are throughput (in terms of both IOPS and bytes/s) and IO latency.
IOBlazer evolved from a minimalist MS SQL Server emulator which focused solely on the IO component of that workload. The original tool had limited capabilities, as it could only generate a very specific workload based on the MS SQL Server IO model (Asynchronous, Un-buffered, Gather/Scatter). IOBlazer now has a far more generic IO model, but two limitations still remain:
  1. The alignment of memory accesses on 4 KB boundaries (i.e., a memory page)
  2. The alignment of disk accesses on 512 B boundaries (i.e., a disk sector).
Both limitations are required by the gather/scatter and un-buffered IO models.
A very useful new feature is the capability to play back VSCSI traces captured on VMware ESX through the vscsiStats utility. This allows IOBlazer to generate a synthetic workload identical to the disk activity of a virtual machine, ensuring 100% experiment repeatability.


http://labs.vmware.com/flings/ioblazer