uberAgent Support

Impact of Regular CPU Spikes on XenApp Application Response Times

uberAgent collects most of its data at regular intervals, by default every 30 seconds. That activity, although very small in absolute terms, may show up as regular spikes in Task Manager.

Seeing this in Task Manager, you may wonder whether these spikes affect the user experience and reduce application responsiveness.

Short answer: no, as long as you do not overcommit CPU resources.

Long answer:

On Citrix XenApp we typically have hundreds of processes that all want their share of CPU, RAM, and so on. The Windows scheduler is very effective at distributing the available resources fairly. It even knows which applications are in the foreground and gives them a higher priority if the system is configured accordingly, which should be the case on both clients and terminal servers.
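The foreground priority behavior mentioned above is controlled on Windows by the Win32PrioritySeparation registry value (under HKLM\SYSTEM\CurrentControlSet\Control\PriorityControl). As a rough sketch, the bit layout below follows the publicly documented interpretation from Windows Internals; treat the example values as illustrative, not as a configuration recommendation:

```python
# Illustrative decoder for the Win32PrioritySeparation registry value
# (HKLM\SYSTEM\CurrentControlSet\Control\PriorityControl).
# Bit layout as documented in Windows Internals:
#   bits 4-5: quantum interval length (0 = default, 1 = long, 2 = short)
#   bits 2-3: quantum type            (0 = default, 1 = variable, 2 = fixed)
#   bits 0-1: foreground boost ratio  (0 = 1:1, 1 = 2:1, 2 = 3:1)
# The value 3 in each field is reserved; it is mapped to "reserved" here.

def decode_priority_separation(value: int) -> dict:
    interval = ("default", "long", "short", "reserved")[(value >> 4) & 0b11]
    quantum = ("default", "variable", "fixed", "reserved")[(value >> 2) & 0b11]
    boost = ("1:1", "2:1", "3:1", "reserved")[value & 0b11]
    return {"interval": interval, "quantum": quantum, "foreground_boost": boost}

# 0x26: short, variable quanta with a 3:1 foreground boost
# (commonly reported when "Programs" is selected in Performance Options).
print(decode_priority_separation(0x26))

# 0x18: long, fixed quanta without foreground boost
# (commonly reported for the "Background services" setting).
print(decode_priority_separation(0x18))
```

The point of the sketch: a terminal server meant to favor foreground applications, as described above, needs a value with a non-zero boost field, not the background-services default.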

The high number of processes is the reason why terminal servers can utilize even very powerful server hardware to full capacity. With SBC, virtualization is not a means of packing more workloads onto underutilized hardware. Instead, if it is used at all, it is a means of facilitating management.

The scheduling system described above works well as long as there is only one scheduling mechanism. That is the case if Citrix XenApp runs on a physical machine. It is also the case if the terminal server runs on a virtual machine without CPU overcommitment.

But as soon as CPU overcommitment is introduced, user experience and application responsiveness may deteriorate.

The reason is that CPU overcommitment introduces a second scheduling mechanism at the hypervisor level. The hypervisor scheduler knows nothing about the internals of the virtual machines, while each VM's operating system scheduler knows nothing about the workings of the hypervisor scheduler. Each on its own does a good job, but combined they do more harm than good.

The effects are typically not noticeable with light workloads, but Citrix XenApp with hundreds of processes cannot be described as light. Consequently, optimization guides discourage assigning more virtual CPUs to VMs than there are physical cores available.
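Why overcommitment hurts can be illustrated with simple arithmetic. This is a deliberately naive model that ignores hypervisor scheduling details; it only shows that overcommitting shrinks the guaranteed slice of CPU each guest scheduler has to work with when all VMs are busy:

```python
# Naive illustration: under full load, the guaranteed CPU share per vCPU
# is capped by the ratio of physical cores to assigned vCPUs.
# Real hypervisor schedulers are more sophisticated; this is only a model.

def guaranteed_share_per_vcpu(physical_cores: int, assigned_vcpus: int) -> float:
    """Fraction of one physical core each vCPU is guaranteed under full load."""
    return min(1.0, physical_cores / assigned_vcpus)

# 20 physical cores, 20 vCPUs assigned: no overcommitment, a full core each.
print(guaranteed_share_per_vcpu(20, 20))  # 1.0

# 20 physical cores, 60 vCPUs assigned: 3:1 overcommitment,
# only a third of a core guaranteed per vCPU.
print(guaranteed_share_per_vcpu(20, 60))
```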

But what about Hyper-Threading? With Hyper-Threading enabled, 20 physical cores appear as 40 logical cores to the hypervisor. When do we start to overcommit: when we assign more than 20 vCPUs to VMs, or more than 40?

Newer sizing guides recommend something in between: in this example, no more than 30 vCPUs should be assigned.
