
Extending All Flash vSAN Cache Tier Sizing Requirement for Different Endurance Level Flash Device

We would like to share a new blog post by Biswapati Bhattacharjee from VMware Virtual Blocks about extending the All Flash vSAN cache tier sizing requirements for flash devices with different endurance levels.

Please find the original article, with all comments, on VMware Virtual Blocks here, or scroll down for more information.

Using our VMware vSAN integration, which has already been featured at yellow-bricks.com, you can get great insight into the vSAN performance of one or multiple vSAN-enabled clusters. Furthermore, our new histogram dashboards give you detailed planning data for vSAN cache requirements.

Introduction

The flash storage industry has gone through many key changes over the last couple of years. For example, there are now devices that combine high performance and endurance with smaller capacity points. At the same time, large-capacity devices with lower endurance are becoming available in the enterprise market. These changes call for additional details beyond our already published all flash cache-tier guideline blog, available here.

In this blog post, I would like to expand on the already published vSAN all flash cache-tier guidelines based on these changes.

To extend the guidance beyond our current blog, I would like to focus on a couple of key areas, given that the vSAN ecosystem now supports many cache-tier devices with varying endurance, performance, and capacity.

  1. vSAN All Flash cache and capacity ratio: There is some confusion today about whether the 10% cache-to-capacity ratio also applies to an All Flash vSAN. This 10% guideline was meant for hybrid vSAN only. It is a general recommendation that could be too much, or might not be enough, and sizing should be based on the use case and the capacity and performance requirements. vSAN All Flash caching does not have a percentage-of-capacity ratio requirement.
  2. vSAN All Flash cache sizing guidance: Though many possibilities exist, I would like to consider two additional data points for cache-tier sizing requirements when deploying an All Flash vSAN: 3 DWPD (low endurance devices) and 30 DWPD (very high endurance devices) as the caching device. Our current blog guideline covers the 10 DWPD endurance point only, and the following table shows the details; a small worked example after this list illustrates what the DWPD rating means for daily writes.
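
To make the endurance dimension concrete, here is a minimal, purely illustrative sketch (not an official VMware sizing tool) that converts a device's DWPD rating and usable capacity into the daily write volume it is rated to absorb; all device names and capacities in it are assumed example values.

```go
// Illustrative only: relate a cache device's DWPD (drive writes per day) rating
// and usable capacity to the daily write volume it is rated to absorb.
// The device list below contains assumed example values, not vendor data.
package main

import "fmt"

func main() {
	devices := []struct {
		name       string
		dwpd       float64
		capacityGB float64
	}{
		{"low endurance (3 DWPD)", 3, 800},
		{"mainstream (10 DWPD)", 10, 400},
		{"very high endurance (30 DWPD)", 30, 375},
	}

	for _, d := range devices {
		// Rated daily write budget = DWPD * usable capacity.
		dailyWriteBudgetGB := d.dwpd * d.capacityGB
		fmt.Printf("%-30s rated for ~%.0f GB of writes per day\n", d.name, dailyWriteBudgetGB)
	}
}
```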

Fig 1: Cache Tier Sizing Guideline with 10 DWPD (table image courtesy of VMware / Biswapati Bhattacharjee)

As you can see, there is no capacity-based link to cache size when designing an All Flash vSAN. The design guidelines are based on different workload write profiles.

Based on this varying endurance, the following two tables extrapolate the cache capacity requirements for 30 DWPD and 3 DWPD device endurance; a short sketch of the underlying extrapolation follows the tables.

Fig 2: Cache Tier Sizing Guideline with 30 DWPD (table image courtesy of VMware / Biswapati Bhattacharjee)

Fig 3: Cache Tier Sizing Guideline with 3 DWPD (table image courtesy of VMware / Biswapati Bhattacharjee)
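
As a rough illustration of the extrapolation behind Fig 2 and Fig 3: for a fixed daily write workload, the cache capacity needed to stay within a device's endurance rating scales inversely with its DWPD value. The sketch below applies that relationship to an assumed 4 TB/day write workload; the numbers are demonstration values only, not figures from the official tables.

```go
// Illustrative only: extrapolate the cache capacity needed for a fixed daily
// write workload across devices with different DWPD endurance ratings.
package main

import "fmt"

// requiredCapacityGB returns the cache capacity needed so that the given daily
// write volume does not exceed the device's rated drive writes per day.
func requiredCapacityGB(dailyWritesGB, dwpd float64) float64 {
	return dailyWritesGB / dwpd
}

func main() {
	dailyWritesGB := 4000.0 // assumed workload: ~4 TB of writes per day
	for _, dwpd := range []float64{3, 10, 30} {
		fmt.Printf("%2.0f DWPD device: at least ~%.0f GB of cache capacity\n",
			dwpd, requiredCapacityGB(dailyWritesGB, dwpd))
	}
}
```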

Conclusion

The above sizing guidelines are based on varying capacity and endurance only. In a customer environment, however, there could be other variables that require additional consideration to accommodate specific needs. In general, this guideline should help you use devices of different capacity, endurance, and performance as cache-tier devices for the identified workload in an All Flash vSAN.

vSAN hardware guidance provides details around device performance and endurance classes. You can read the details here. If a device meets the guideline for a cache-tier device (both performance and endurance) in a certain profile (e.g. AF-4, AF-6, AF-8), you can use it regardless of its endurance rating.

Hopefully, this short blog clarifies the difference between cache ratios for hybrid vs. All Flash vSAN, how All Flash cache sizing is based on workload, and how devices with different endurance points, like the Intel® Optane™ SSD and others, can be used in vSAN deployments.

VMware vSAN histogram

Don’t forget – our histograms for VMware vSAN provide major benefits and great insights when planning caching.



Use Case - Tamper-resistant Clinical Trials

Goal:

Blockchain PoCs were unsuccessful due to complexity and a lack of developers.

Still, the goals of data immutability and client-side verification remain crucial. Furthermore, the system needs to be easy to use and operate (allowing backups, maintenance windows, and so on).

Implementation:

immudb is running in different datacenters across the globe. All clinical trial information is stored in immudb, either as transactions or as whole PDF documents.

Having that single source of truth with versioned, timestamped, and cryptographically verifiable records enables a whole new level of transparency and trust.
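
As a rough illustration of how such records could be written, here is a minimal sketch using the immudb Go SDK that stores the SHA-256 digest of a trial document with a verified write and reads it back with cryptographic verification; the connection settings, file path, and key naming are assumptions for illustration.

```go
// Minimal sketch: store a clinical trial document digest in immudb and read it
// back with server-provided cryptographic verification. Paths, credentials,
// and key names are illustrative assumptions.
package main

import (
	"context"
	"crypto/sha256"
	"fmt"
	"log"
	"os"

	immudb "github.com/codenotary/immudb/pkg/client"
)

func main() {
	ctx := context.Background()
	client := immudb.NewClient().WithOptions(
		immudb.DefaultOptions().WithAddress("127.0.0.1").WithPort(3322))
	if err := client.OpenSession(ctx, []byte("immudb"), []byte("immudb"), "defaultdb"); err != nil {
		log.Fatal(err)
	}
	defer client.CloseSession(ctx)

	pdf, err := os.ReadFile("trial-042/consent-form.pdf") // hypothetical document
	if err != nil {
		log.Fatal(err)
	}
	digest := sha256.Sum256(pdf)

	// VerifiedSet writes the entry and returns a proof the client verifies.
	if _, err := client.VerifiedSet(ctx, []byte("trial-042/consent-form"), digest[:]); err != nil {
		log.Fatal(err)
	}

	// VerifiedGet retrieves the entry together with its inclusion proof.
	entry, err := client.VerifiedGet(ctx, []byte("trial-042/consent-form"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("verified digest: %x\n", entry.Value)
}
```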

Use Case - Finance

Goal:

Store the source data, the decision, and the rulebase for government financial support in a timestamped, verifiable way.

A very important piece of functionality is the ability to compare a historic decision (based on the past rulebase) with the rulebase at a different date. Fully cryptographically verifiable time-travel queries are required to achieve that comparison.

Implementation:

While the source data, the rulebase, and the documented decision are stored as verifiable blobs in immudb, the transaction itself is stored using immudb’s relational layer.

That allows the use of immudb’s time-travel capabilities to retrieve verified historic data and recalculate it against the most recent rulebase.
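
The following is a hedged sketch of what such a comparison could look like through the immudb Go SDK's SQL interface, querying a hypothetical rulebase table once as of an earlier transaction and once at the current state; the table and column names are invented, and the exact temporal SQL syntax should be checked against the immudb documentation.

```go
// Hedged sketch of immudb time travel via SQL: read the rulebase as it existed
// before an earlier transaction and compare it with the current state.
// Table/column names are hypothetical; verify the temporal syntax in the docs.
package main

import (
	"context"
	"fmt"
	"log"

	immudb "github.com/codenotary/immudb/pkg/client"
)

func main() {
	ctx := context.Background()
	client := immudb.NewClient().WithOptions(immudb.DefaultOptions())
	if err := client.OpenSession(ctx, []byte("immudb"), []byte("immudb"), "defaultdb"); err != nil {
		log.Fatal(err)
	}
	defer client.CloseSession(ctx)

	// Historic view: rulebase rows as they existed before transaction 1000.
	historic, err := client.SQLQuery(ctx,
		"SELECT rule_id, threshold FROM rulebase BEFORE TX @tx",
		map[string]interface{}{"tx": 1000}, true)
	if err != nil {
		log.Fatal(err)
	}

	// Current view: the same rows at the latest state.
	current, err := client.SQLQuery(ctx,
		"SELECT rule_id, threshold FROM rulebase", nil, true)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("historic rows:", len(historic.Rows), "current rows:", len(current.Rows))
}
```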

Use Case - eCommerce and NFT marketplace

Goal:

No matter whether it is an eCommerce platform or an NFT marketplace, the goals are similar:

  • High transaction volume (potentially millions of transactions per second)
  • Ability to read and write multiple records within one transaction
  • Prevent overwrites or updates of committed transactions
  • Comply with regulations (PCI, GDPR, …)


Implementation:

immudb is typically scaled out using hyperscalers (i.e. AWS, Google Cloud, Microsoft Azure) and distributed across the globe. Auditors are also distributed to track the verification proof over time. Additionally, the shop or marketplace applications store immudb’s cryptographic state information. That high level of integrity and tamper evidence, combined with very high transaction speed, is key for companies choosing immudb.
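
As an illustration of writing several related records atomically and capturing the database state an application could hand to auditors, here is a minimal sketch with the immudb Go SDK; the order keys and values are hypothetical.

```go
// Illustrative sketch: commit several key/value pairs in one immudb transaction
// and fetch the current cryptographic state for external safekeeping.
// Key layout and values are hypothetical.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/codenotary/immudb/pkg/api/schema"
	immudb "github.com/codenotary/immudb/pkg/client"
)

func main() {
	ctx := context.Background()
	client := immudb.NewClient().WithOptions(immudb.DefaultOptions())
	if err := client.OpenSession(ctx, []byte("immudb"), []byte("immudb"), "defaultdb"); err != nil {
		log.Fatal(err)
	}
	defer client.CloseSession(ctx)

	// All entries below are committed atomically in a single transaction.
	txHdr, err := client.SetAll(ctx, &schema.SetRequest{KVs: []*schema.KeyValue{
		{Key: []byte("order/1001/item"), Value: []byte("nft-artwork-77")},
		{Key: []byte("order/1001/buyer"), Value: []byte("customer-42")},
		{Key: []byte("order/1001/price"), Value: []byte("0.5 ETH")},
	}})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("committed in transaction", txHdr.Id)

	// The application can store this state externally so auditors can later
	// prove the database history was not tampered with.
	state, err := client.CurrentState(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("tx %d, root hash %x\n", state.TxId, state.TxHash)
}
```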

Use Case - IoT Sensor Data

Goal:

IoT sensor data received by devices collecting environmental data needs to be stored locally in a cryptographically verifiable manner until it is transferred to a central datacenter. Data integrity needs to be verifiable at any given point in time, including while the data is in transit.

Implementation:

immudb runs embedded on the IoT device itself and is continuously audited by external probes. The data transfer required for auditing is minimal and works even with minimal bandwidth and unreliable connections.

Whenever the IoT devices have a high-bandwidth connection, the data is transferred to a data center (a large immudb deployment) and the integrity of both the source and destination data is fully verified.
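
A simplified sketch of such an external audit probe is shown below: it only tracks the reported database state over time and keeps the transferred data tiny, whereas a production auditor would additionally verify consistency proofs between states; the device address and polling interval are assumptions.

```go
// Simplified audit probe sketch: periodically fetch the edge immudb instance's
// current state and flag any regression of the transaction id. A real auditor
// would also verify consistency proofs; address and interval are assumptions.
package main

import (
	"context"
	"log"
	"time"

	immudb "github.com/codenotary/immudb/pkg/client"
)

func main() {
	ctx := context.Background()
	client := immudb.NewClient().WithOptions(
		immudb.DefaultOptions().WithAddress("edge-device.local").WithPort(3322))
	if err := client.OpenSession(ctx, []byte("immudb"), []byte("immudb"), "defaultdb"); err != nil {
		log.Fatal(err)
	}
	defer client.CloseSession(ctx)

	var lastTxID uint64
	for {
		state, err := client.CurrentState(ctx)
		if err != nil {
			log.Println("probe error:", err)
		} else {
			if state.TxId < lastTxID {
				log.Printf("ALERT: state moved backwards (%d -> %d)", lastTxID, state.TxId)
			}
			lastTxID = state.TxId
			log.Printf("tx %d, root %x", state.TxId, state.TxHash)
		}
		time.Sleep(time.Minute) // only the small state object crosses the link
	}
}
```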

Use Case - DevOps Evidence

Goal:

CI/CD and application build logs need to be stored in an auditable and tamper-evident way.
Very high performance is required, as the system should not slow down any build process.
Scalability is key, as billions of artifacts are expected within the next few years.
In addition to integrity validation, data needs to be retrievable by pipeline job id or digital asset checksum.

Implementation:

As part of the CI/CD audit functionality, data is stored in immudb using its key/value functionality. The key is either the CI/CD job id (i.e. Jenkins or GitLab) or the checksum of the resulting build artifact or container image.
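
A minimal sketch of that key/value layout with the immudb Go SDK might look as follows; the job id, checksum derivation, and key prefixes are illustrative assumptions.

```go
// Illustrative sketch: store a build log under both the pipeline job id and the
// artifact checksum so it can be retrieved and verified by either identifier.
package main

import (
	"context"
	"crypto/sha256"
	"fmt"
	"log"

	immudb "github.com/codenotary/immudb/pkg/client"
)

func main() {
	ctx := context.Background()
	client := immudb.NewClient().WithOptions(immudb.DefaultOptions())
	if err := client.OpenSession(ctx, []byte("immudb"), []byte("immudb"), "defaultdb"); err != nil {
		log.Fatal(err)
	}
	defer client.CloseSession(ctx)

	buildLog := []byte("...full CI/CD build log...")
	imageDigest := sha256.Sum256([]byte("container-image-bytes")) // stand-in for the real image digest

	// Store the log twice: once keyed by job id, once keyed by artifact checksum.
	jobKey := []byte("jenkins/job/4711")
	sumKey := []byte(fmt.Sprintf("sha256/%x", imageDigest))
	for _, key := range [][]byte{jobKey, sumKey} {
		if _, err := client.VerifiedSet(ctx, key, buildLog); err != nil {
			log.Fatal(err)
		}
	}

	// Later: retrieve and verify the log by checksum.
	entry, err := client.VerifiedGet(ctx, sumKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("retrieved %d bytes of verified build log\n", len(entry.Value))
}
```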
