How to Make VMware Faster

Virtualization is an indispensable part of the modern data center. Frequently, the degree of virtualization is 90 percent or more: what formerly operated on many servers today runs on a few hosts. With this high rate of virtualization and the resulting increase in complexity, problems become more difficult to locate. It is therefore necessary to consider how the infrastructure can be monitored accurately and how potential problem situations can be found before they turn into costly errors. Under certain circumstances, even minor problems can significantly degrade the entire infrastructure.

Performance-related issues are especially hard to detect and pinpoint because several layers of a highly complex physical and virtual hardware stack are involved.

Photo courtesy of Hernan Piñera (CC ShareAlike)

Virtual vs. physical performance

When people started virtualizing their systems, it was all about consolidating many physical boxes onto a few physical servers, with each former box running as a virtual machine on top of them. There was no need to provide high performance to the virtual machines, as most of the migrated systems were underutilized anyway – sometimes using less than 10% of the hardware's capabilities.

Over the years, as trust in virtualization technology grew and physical systems became far more powerful, people started running systems with much higher performance needs.

While the performance gap between physical and virtual servers was 10% or more in the early days, it is now usually less than 2%.

Some of the biggest drivers were modern CPU and memory architectures and better integrated network cards, but most important were high-performing storage systems ready for the data and communication storm that densely packed servers running hundreds of virtual machines demand.

Mission-critical applications

When it comes to performance, expectations for mission-critical applications – mainly databases or email solutions – are very high: people want the best VM performance possible because they feel the impact directly. Waiting one extra second for every click in an application adds up very fast and becomes a big issue at scale. This was the number one reason people hesitated to run mission-critical applications in virtual environments: there is some performance loss due to the virtualization layer, but just as important, performance can easily be ruined by deploying the systems the wrong way – either putting too many on the same physical platform or mixing the wrong systems together.

Learning the behavior

There are great tools out there that learn from the behavior of your infrastructure over time and help the system administrator chase strange spikes or abnormal patterns. The issue is that these systems must be trained continuously to avoid false positives, and, more importantly, they need a symptom before they can react or recommend anything. If you use the default SCSI controller in your virtual machine, there is technically nothing wrong with it.

So the learning solutions don't trigger an alert, as nothing looks wrong. But is there really nothing wrong? Typically, a paravirtualized disk controller would gain you 5-30% more performance, but it is not the default controller. Optimizing the virtual hardware gets you very close to the best VM performance your environment can deliver.
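To see which VMs are still on the default controller, you can query the inventory programmatically. Below is a minimal sketch using VMware's pyvmomi Python SDK; the vCenter host name, credentials, and the unverified SSL context are placeholder assumptions for illustration, not production settings.

```python
# Minimal sketch (pyvmomi): list each VM's SCSI controller type to spot
# VMs still running a default controller instead of PVSCSI.
# Host name and credentials are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=context)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    if vm.config is None:  # skip inaccessible VMs
        continue
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualSCSIController):
            # ParaVirtualSCSIController is the PVSCSI adapter; anything else
            # (e.g. VirtualLsiLogicController) is a candidate for review.
            pvscsi = isinstance(dev, vim.vm.device.ParaVirtualSCSIController)
            print(f"{vm.name}: {type(dev).__name__}"
                  + ("" if pvscsi else "  <- consider PVSCSI"))

view.Destroy()
Disconnect(si)
```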

This is a common problem with most solutions out there: they don't know which application you're running in the guest, what performance you actually need, or whether you could get it from different virtual hardware.

Therefore, you only solve half the problem: you get an alert about performance behavior that looks strange compared to normal, but all the recommendations are about performance-related configuration and placement. You miss the opportunity to gain full performance through the whole stack because you never change the disk controller.
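Changing the disk controller is an ordinary reconfiguration task. As a hedged sketch of what that looks like in pyvmomi (the `vm` object comes from an inventory lookup as in the earlier snippet; the bus number and temporary device key are illustrative assumptions), you could add a second, paravirtual SCSI controller so new data disks use PVSCSI without touching the boot controller:

```python
# Hypothetical sketch: add a PVSCSI controller on a new bus so additional
# data disks can use it; the guest needs the PVSCSI driver (shipped with
# VMware Tools) before any disk is moved to this controller.
from pyVmomi import vim

ctrl = vim.vm.device.ParaVirtualSCSIController()
ctrl.busNumber = 1       # bus 0 usually carries the boot disk
ctrl.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing
ctrl.key = -101          # negative key = placeholder for a new device

change = vim.vm.device.VirtualDeviceSpec()
change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
change.device = ctrl

spec = vim.vm.ConfigSpec(deviceChange=[change])
task = vm.ReconfigVM_Task(spec=spec)  # vm obtained as in the earlier sketch
```

Swapping the controller of the boot disk itself is riskier and is usually done with the VM powered off.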

If only 80% of the potential physical performance reaches the guest, you end up tuning that 80% instead of the full 100%!

VMware vSphere provides all the possibilities, but you need to know about them and configure everything the right way – there is no built-in auto-tune.

Some tricks to make VMware faster

  1. Keep your VMware Tools updated (see the audit sketch after this list)
  2. Disconnect or remove all media, such as floppy or CD drives, that are not needed
  3. Make sure to avoid CPU and memory limits
  4. Avoid old VM hardware versions
  5. Watch performance metrics (see the metrics sketch after this list)
  6. Avoid VM snapshot issues
  7. Consider the paravirtualized SCSI driver
  8. Avoid monster VMs that exceed the capabilities of one CPU socket
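Here is a hedged pyvmomi sketch of how the first four items can be audited automatically; the `audit_vm` helper is hypothetical, the vmx-13 hardware baseline is an arbitrary assumption, and the VM objects come from a container view as in the first snippet:

```python
# Hypothetical helper: audit one VM against checklist items 1-4.
from pyVmomi import vim

def audit_vm(vm: vim.VirtualMachine) -> list[str]:
    findings = []

    # 1. VMware Tools should be current
    if vm.guest and vm.guest.toolsVersionStatus2 != "guestToolsCurrent":
        findings.append(f"Tools not current ({vm.guest.toolsVersionStatus2})")

    for dev in vm.config.hardware.device:
        # 2. Connected CD/floppy devices hold locks and can block vMotion
        if isinstance(dev, (vim.vm.device.VirtualCdrom,
                            vim.vm.device.VirtualFloppy)):
            if dev.connectable and dev.connectable.connected:
                findings.append(f"Removable media connected: {dev.deviceInfo.label}")

    # 3. CPU and memory limits cap performance even on an idle host (-1 = unlimited)
    if vm.config.cpuAllocation.limit not in (None, -1):
        findings.append(f"CPU limit set: {vm.config.cpuAllocation.limit} MHz")
    if vm.config.memoryAllocation.limit not in (None, -1):
        findings.append(f"Memory limit set: {vm.config.memoryAllocation.limit} MB")

    # 4. Old virtual hardware misses newer vNIC/vSCSI features;
    #    vmx-13 (vSphere 6.5) is an arbitrary baseline for this sketch
    if int(vm.config.version.split("-")[1]) < 13:
        findings.append(f"Old VM hardware: {vm.config.version}")

    return findings
```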
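For item 5, one metric worth watching is CPU ready time, which climbs when a VM has more vCPUs than the host can schedule smoothly (the monster-VM problem from item 8). Below is a minimal sketch querying it through the PerformanceManager, reusing `si` and `vm` from the earlier snippets:

```python
# Hedged sketch: pull realtime "cpu.ready.summation" samples for one VM.
from pyVmomi import vim

perf = si.RetrieveContent().perfManager
# map "group.name.rollup" -> counter id
counters = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
            for c in perf.perfCounter}

metric = vim.PerformanceManager.MetricId(
    counterId=counters["cpu.ready.summation"], instance="")
query = vim.PerformanceManager.QuerySpec(
    entity=vm, metricId=[metric], intervalId=20, maxSample=15)  # 20 s realtime samples

for result in perf.QueryPerf(querySpec=[query]):
    for series in result.value:
        # ready time is reported in ms per 20 s interval; /200 gives percent
        ready_pct = [v / 200.0 for v in series.value]
        print(f"{vm.name} CPU ready %: {ready_pct}")
```

A sustained CPU ready value above roughly 5% per vCPU is a common rule of thumb for contention.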

Conclusion

There is no simple way to just run an application and watch your VM performance go through the roof. Without changing the VM's virtual hardware and optimizing its settings, load placement alone doesn't have the effect needed to achieve your best VM performance.

Start testing your vSphere environment for free and learn how to make VMware faster:

Sign Up for opvizor!
