VMware vSphere ESXi Memory Issues Can Slow Down Virtual Infrastructure Performance

ESXi is the bare-metal hypervisor at the core of VMware vSphere, responsible for running Virtual Machines (VMs). ESXi abstracts the hardware resources of the physical server and allocates them to VMs to run workloads and applications. On an ESXi host, this is handled by the VMkernel, which decouples the underlying resources from the physical host and allocates them to the VMs running on it.

Once the ESXi hypervisor is installed on a compatible physical server, it allocates resources to the VMs deployed on it. As workloads grow, resources can become overcommitted, meaning the total allocation to workloads exceeds the host's physical capacity. This can be managed through Shares (which prioritize which VMs claim contended resources), Reservations (a guaranteed minimum amount of a resource the host sets aside for a VM), and Limits (a maximum amount of a resource the host will allocate to a VM).
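
As a rough illustration, the sketch below uses pyVmomi (VMware's Python SDK for the vSphere API) to set a memory reservation, limit, and share level on a single VM. The vCenter address, credentials, VM name, and the chosen values are placeholders, not recommendations for your environment.

```python
# Sketch: adjusting memory Shares, Reservation, and Limit on a VM with pyVmomi.
# The vCenter address, credentials, and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name (simple linear search over a container view)
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-vm-01")

# Guarantee 2 GB, cap allocation at 6 GB, and give the VM high priority under contention
alloc = vim.ResourceAllocationInfo()
alloc.reservation = 2048                        # MB guaranteed to the VM
alloc.limit = 6144                              # MB the host will allocate at most
alloc.shares = vim.SharesInfo(level=vim.SharesInfo.Level.high)

spec = vim.vm.ConfigSpec(memoryAllocation=alloc)
vm.ReconfigVM_Task(spec)                        # returns a task; wait on it if needed

Disconnect(si)
```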

When the workload on an ESXi host increases, performance can suffer because certain VMs, applications, or workloads consume a disproportionate share of resources. When this happens, other VMs cannot access the physical resources they need when they need them, which degrades their workload performance. To keep the ESXi hosts in your virtual infrastructure performing well, track the resource consumption of each VM over time and make sure the VMs are right-sized for the available resources. In this article, we'll discuss some memory metrics that help you track the performance of the ESXi hosts in your virtual environment.

Memory Metrics

VMs on an ESXi host share the host's underlying physical resources, and as with CPU, memory is a leading source of resource contention in virtual environments. A VMware vSphere environment has three layers of memory: host physical memory (the memory available to the ESXi host), guest physical memory (the memory presented to the OS installed in a VM), and guest virtual memory (the memory available at the application level inside a VM).

Each VM on an ESXi host is configured with an amount of memory that the guest OS can access; the host physical memory actually backing it can differ in size, depending on the VM's demand along with any configured shares, limits, or reservations. When a VM starts, the ESXi host creates the memory address space presented to that VM. When an application or workload in the VM reads from or writes to a memory page, the guest OS translates between guest virtual memory and guest physical memory just as it would in a non-virtualized environment, while the hypervisor maps guest physical memory to host physical memory.

To monitor all of this, it is important to track memory metrics, and CNIL Metrics and Logs does this for you with an easy-to-use UI. In this post, we cover some ESXi memory metrics that let you monitor the memory usage, performance, and capacity of your virtual infrastructure.

Memory Usage

One of the most important ESXi metrics is memory usage, which at the VM level measures the percentage of configured memory being actively used. A VM should ideally have some configured memory to spare; if it constantly uses all of its configured memory, the ESXi host cannot allocate additional memory to it, and the VM becomes less resilient to spikes in memory demand. If you see this while monitoring, reconfigure the VM's memory size and review its memory allocation settings such as shares, limits, and reservations.

At the host level, memory usage is the percentage of the ESXi host's physical memory that is being consumed. If it is constantly high, memory may not be available to provision to the VMs that need it, and memory ballooning will run more often.
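
Assuming a connected ServiceInstance `si` (as in the earlier sketch), a minimal pyVmomi sketch for deriving these host-level and VM-level percentages could look like this:

```python
# Sketch: memory usage percentages per ESXi host and per VM via pyVmomi quickStats.
# Assumes an existing pyVmomi connection 'si' obtained with SmartConnect.
from pyVmomi import vim

content = si.RetrieveContent()

hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in hosts.view:
    used_mb = host.summary.quickStats.overallMemoryUsage      # consumed host memory, MB
    total_mb = host.hardware.memorySize / (1024 * 1024)       # installed memory, bytes -> MB
    print(f"{host.name}: host memory usage {100 * used_mb / total_mb:.1f}%")

vms = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in vms.view:
    if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
        continue
    active_mb = vm.summary.quickStats.guestMemoryUsage        # actively used guest memory, MB
    configured_mb = vm.summary.config.memorySizeMB            # configured memory, MB
    print(f"{vm.name}: active {active_mb} MB of {configured_mb} MB "
          f"({100 * active_mb / configured_mb:.1f}%)")
```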

CNIL Metrics and Logs monitors the memory usage, memory ballooning, and memory swapping of ESXi hosts in the VMware vSphere environment.

Memory Swap Usage

When an ESXi host provisions a VM, it also allocates a swap file on physical disk storage. The size of the swap file equals the VM's configured memory minus its memory reservation: a VM configured with 4 GB of memory and a 2 GB reservation gets a 2 GB swap file. By default, a VM's swap file is collocated with its virtual disks on shared storage.

If the ESXi host's physical memory runs low and memory ballooning cannot reclaim enough memory quickly enough, the host starts reading and writing VM memory to the swap file, a process known as memory swapping. Because disk access is much slower than memory access, a VM slows down noticeably whenever its memory is swapped to disk.
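
As a quick check, the sketch below (again pyVmomi, assuming the same connected `si`) computes the expected swap file size as configured memory minus reservation and reports any VM whose swapped-memory counter is non-zero:

```python
# Sketch: expected swap file size (configured memory minus reservation) and actual
# swapped memory per VM. Assumes an existing pyVmomi connection 'si' as before.
from pyVmomi import vim

content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)

for vm in vms.view:
    if vm.config is None:                                       # skip templates/orphaned VMs
        continue
    configured_mb = vm.config.hardware.memoryMB                  # configured memory, MB
    reservation_mb = vm.config.memoryAllocation.reservation or 0 # guaranteed memory, MB
    expected_swap_mb = configured_mb - reservation_mb            # e.g. 4096 - 2048 = 2048
    swapped_mb = vm.summary.quickStats.swappedMemory             # memory currently swapped, MB
    if swapped_mb:
        print(f"{vm.name}: {swapped_mb} MB swapped "
              f"(swap file sized for {expected_swap_mb} MB)")
```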

CNIL Metrics and Logs is one of the best VMware vSphere monitoring solutions on the market: it is up and running in your virtual infrastructure within minutes and starts monitoring ESXi memory and other important metrics right away.

Memory Ballooning Usage

Memory ballooning is a memory reclamation technique in VMware vSphere environments that allows an ESXi host to reclaim unused memory from the VMs running on it, so that memory can be given to other VMs that are short of memory and need it to run their applications.

Each VM in a vSphere environment has a balloon driver installed (delivered with VMware Tools), and when an ESXi host runs low on memory, it can reclaim guest physical memory from VMs. Because the host has no knowledge of which memory pages a guest is not using, it sends a request to the VM's balloon driver, which inflates by claiming unused pages inside the guest. The host then frees the corresponding host physical memory and reallocates it to other VMs that need it.
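
To spot ballooning from the API side, a small pyVmomi sketch (same assumed connection `si`) can flag VMs whose ballooned-memory counter is non-zero:

```python
# Sketch: flagging VMs whose balloon driver is currently holding guest memory,
# which indicates the host is under memory pressure. Assumes the same pyVmomi
# connection 'si' as in the earlier sketches.
from pyVmomi import vim

content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)

for vm in vms.view:
    ballooned_mb = vm.summary.quickStats.balloonedMemory  # guest memory reclaimed via ballooning, MB
    if ballooned_mb:
        print(f"{vm.name}: {ballooned_mb} MB currently ballooned")
```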

Conclusion

To keep the ESXi hosts in your virtual infrastructure performing well, you need to monitor their memory metrics. Memory metrics for ESXi hosts and their configured VMs are important to track alongside other metrics.

CNIL Metrics and Logs is software that helps VMware admins track and monitor important ESXi metrics, including memory metrics such as memory usage, ballooning, and swapping, and is up and running within minutes to identify issues that can degrade the performance of your VMware-based virtual environment.
