Opvizor Performance Analyzer Part 9: Performance in Real Time for NetApp

A best-practices white paper for NetApp and VMware vSphere storage is available. You can find the complete paper here.

In the NetApp Community there are many interesting discussions, e.g.:

"Beginning this month, Tech OnTap highlights popular discussion threads appearing in the NetApp Technical Community. In this month’s discussion, Christopher Madden, NetApp IT enterprise architect, shares his Graphite/Grafana Quick Start Installation Guide and fields dozens of questions from community members."

See the complete discussion here.

What are the best practices for adding disks to an existing aggregate or traditional volume?

When determining how many disks to add to an existing aggregate or traditional volume (TradVol), the following must be considered:

  • The number of disks currently in the aggregate/TradVol
  • The size of the RAID group(s)
  • The current amount of space used in the aggregate/TradVol

In an aggregate, the used space reflects the space guarantees of the associated FlexVols, not just the actual usage by stored data.

When disks are added to an existing aggregate/TradVol, the storage system will attempt to keep the amount of data stored on each disk about equal.

For example, if you have four 20 GB data disks in an aggregate/TradVol containing 60 GB of data, each disk holds approximately 15 GB: the total space in the volume is 80 GB, the used space is 60 GB, and each data disk contains 15 GB of data.

When you add a new disk to that aggregate/TradVol, the storage system writes new data to this disk until its used space matches that of the pre-existing disks, which contain 15 GB each.

In the previous example, after adding one 20 GB disk to the aggregate/TradVol, the total size is 100 GB. The used space is still 60 GB: the original four disks contain 15 GB of used space each, and the newly added disk has 0 GB used. Writes to the aggregate/TradVol go to the newly added disk until its used space reaches 15 GB. Once all five data disks have 15 GB used, new data is striped across all five disks.

For best performance, it is advisable to add a new RAID group of equal size to existing RAID groups. If a new RAID group cannot be added, then at minimum, three or more disks should be added at the same time to an existing RAID group. This allows the storage system to write new data across multiple disks.

For example, if you have four 20 GB data disks in an aggregate/TradVol containing 60 GB of data, each disk holds approximately 15 GB: the total space in the volume is 80 GB, the used space is 60 GB, and each data disk contains 15 GB of data.

When three new 20 GB disks are added, the total space in the aggregate/TradVol is 140 GB. The used space is 60 GB: the original four disks contain 15 GB of data each, and the three new disks contain 0 GB. When new data is written to the aggregate/TradVol, it is striped evenly across the three new disks until each contains 15 GB of data. Once that occurs, new data is striped evenly across all seven data disks.

By adding a minimum of three disks at a time to an aggregate/TradVol, throughput to disk is increased because more disks are available to write to at any given time.
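The fill-then-stripe behavior described above can be sketched as a small simulation. This is an illustrative model only; real ONTAP write allocation is more sophisticated than "always write to the least-full disk":

```python
def write_to_aggregate(disks, gb):
    """Distribute `gb` of 1 GB writes across data disks, always targeting the
    least-full disk -- a simplified model of how new disks are leveled up."""
    for _ in range(int(gb)):
        candidates = [d for d in disks if d["used"] < d["size"]]
        target = min(candidates, key=lambda d: d["used"])
        target["used"] += 1
    return disks

# Four 20 GB data disks holding 60 GB of data: 15 GB each.
disks = [{"size": 20, "used": 15} for _ in range(4)]
# Add three empty 20 GB disks, then write 45 GB of new data.
disks += [{"size": 20, "used": 0} for _ in range(3)]
write_to_aggregate(disks, 45)
print([d["used"] for d in disks])  # all seven disks now hold 15 GB
```

New writes land only on the three empty disks until they reach 15 GB; from that point on, writes are spread across all seven disks, which is exactly why adding several disks at once gives better write throughput than adding one.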

Please read the complete article, with some hopefully helpful examples, here.

The new way

If you’re looking for a very modern way to check and monitor performance, you should give Performance Analyzer a try.

Monitor and analyze the configuration and performance metrics of your NetApp storage systems (7-Mode and Cluster-Mode). Correlate events and metrics from your storage system, the underlying operating system, and the related infrastructure components (VMware vSphere datastores and so on). Troubleshoot issues using our efficient data crawler and preconfigured dashboards.

NetApp Dashboard

Some of our NetApp integration features are:

  • Monitor metrics across 7-mode and Cluster mode systems
  • Check caching and deduplication efficiency in real time
  • Full insight into latency and IOPS
  • React to front-end and back-end network anomalies
  • Drill down from highlights to details, from a filer overview to a single disk

NetApp Highlights

Sign Up for Performance Analyzer today!


Use Case - Tamper-resistant Clinical Trials

Goal:

Blockchain PoCs were unsuccessful due to complexity and a lack of developers.

Still, the goals of data immutability and client verification are crucial. Furthermore, the system needs to be easy to use and operate (allowing backups, maintenance windows, and so on).

Implementation:

immudb runs in different datacenters across the globe. All clinical trial information is stored in immudb, either as transactions or as whole PDF documents.

Having that single source of truth with versioned, timestamped, and cryptographically verifiable records enables a whole new level of transparency and trust.
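The idea behind such verifiable records can be illustrated with a minimal hash-chained, append-only log. This is a conceptual sketch of tamper evidence, not immudb's actual protocol or API:

```python
import hashlib
import json

class VerifiableLog:
    """Minimal append-only log with hash chaining: each entry's hash covers
    the previous entry's hash, so any later modification is detectable.
    (Illustration of the concept only -- not immudb's implementation.)"""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        h = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev, "hash": h})
        return h

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = VerifiableLog()
log.append({"trial": "T-001", "patient": "P-17", "dose_mg": 50})
log.append({"trial": "T-001", "patient": "P-17", "dose_mg": 75})
print(log.verify())  # True
log.entries[0]["record"]["dose_mg"] = 100  # tampering breaks the chain
print(log.verify())  # False
```

Because each hash depends on everything before it, an external client can detect tampering by re-verifying the chain, which is what makes this model attractive for clinical trial data.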

Use Case - Finance

Goal:

Store the source data, the decision, and the rulebase for government financial support in a timestamped, verifiable way.

A very important capability is comparing a historic decision (based on the past rulebase) with the rulebase at a different date. Fully cryptographically verifiable time-travel queries are required to achieve that comparison.

Implementation:

While the source data, the rulebase, and the documented decision are stored as verifiable blobs in immudb, the transaction itself is stored using immudb's relational layer.

That allows immudb's time-travel capabilities to be used to retrieve verified historic data and recalculate it against the most recent rulebase.
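A time-travel query can be modeled conceptually as reads against a versioned store where every write carries a monotonically increasing transaction id. The sketch below illustrates that model only; it is not immudb's SQL syntax or wire protocol:

```python
class TimeTravelStore:
    """Conceptual model of time-travel reads: all versions of a key are
    kept with their transaction ids, so a query can target any point in
    history. (Illustrative only -- not immudb's implementation.)"""

    def __init__(self):
        self.tx = 0
        self.versions = {}  # key -> list of (tx_id, value)

    def put(self, key, value):
        self.tx += 1
        self.versions.setdefault(key, []).append((self.tx, value))
        return self.tx

    def get(self, key, as_of_tx=None):
        hist = self.versions.get(key, [])
        if as_of_tx is None:
            return hist[-1][1] if hist else None
        # Latest version at or before the requested transaction.
        for tx_id, value in reversed(hist):
            if tx_id <= as_of_tx:
                return value
        return None

store = TimeTravelStore()
t1 = store.put("rulebase", {"income_limit": 30000})
store.put("decision/42", {"approved": True, "rulebase_tx": t1})
store.put("rulebase", {"income_limit": 25000})  # rules change later

# Recheck the historic decision against the rulebase valid at decision time:
print(store.get("rulebase", as_of_tx=t1))  # {'income_limit': 30000}
print(store.get("rulebase"))               # {'income_limit': 25000}
```

Keeping every version addressable by transaction id is what makes it possible to compare a past decision with both the rulebase of that time and the current one.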

Use Case - eCommerce and NFT marketplace

Goal:

No matter if it’s an eCommerce platform or NFT marketplace, the goals are similar:

  • A high volume of transactions (potentially millions per second)
  • The ability to read and write multiple records within one transaction
  • Prevent overwrites of or updates to committed transactions
  • Comply with regulations (PCI, GDPR, …)


Implementation:

immudb is typically scaled out using hyperscalers (e.g., AWS, Google Cloud, Microsoft Azure) and distributed across the globe. Auditors are also distributed to track the verification proof over time. Additionally, the shop or marketplace applications store immudb's cryptographic state information. That high level of integrity and tamper evidence, while maintaining very high transaction speed, is key to companies choosing immudb.
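Two of the goals above, multi-record transactions and no overwrites of committed transactions, can be sketched as an append-only store whose commits are atomic and immutable. This is an illustrative model, not immudb's implementation:

```python
class AppendOnlyStore:
    """Sketch of append-only transactional semantics: a commit writes
    several records atomically, and committed transactions are never
    rewritten -- "updates" only append new versions.
    (Conceptual model only, not immudb's implementation.)"""

    def __init__(self):
        self.log = []    # committed transactions, append-only
        self.index = {}  # key -> (tx_id, value), latest version

    def commit(self, records: dict) -> int:
        tx_id = len(self.log) + 1
        self.log.append({"tx": tx_id, "records": dict(records)})
        for key, value in records.items():
            self.index[key] = (tx_id, value)
        return tx_id

store = AppendOnlyStore()
# One transaction touching several records (order plus ownership):
tx = store.commit({
    "order/1001": {"item": "nft-abc", "buyer": "alice"},
    "asset/nft-abc": {"owner": "alice"},
})
# A later "update" appends a new version; the old transaction stays intact:
store.commit({"asset/nft-abc": {"owner": "bob"}})
print(len(store.log))  # 2 -- history is preserved, nothing overwritten
```

The index always answers with the latest version, while the log retains every committed transaction, which is the property regulators and auditors care about.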

Use Case - IoT Sensor Data

Goal:

Sensor data received by IoT devices collecting environmental data needs to be stored locally in a cryptographically verifiable manner until it is transferred to a central datacenter. Data integrity needs to be verifiable at any given point in time, including while in transit.

Implementation:

immudb runs embedded on the IoT device itself and is consistently audited by external probes. The data transfer required for an audit is minimal and works even with minimal bandwidth and unreliable connections.

Whenever the IoT devices are connected to a high-bandwidth link, the data is transferred to a data center (a large immudb deployment), and the integrity of the data at both source and destination is fully verified.
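Why the audit traffic can stay small is easy to see with a digest comparison: source and destination each hash their copy of a batch of readings and compare only the digests. This is a simplified sketch of the idea, not immudb's replication or audit protocol:

```python
import hashlib
import json

def batch_digest(readings):
    """Digest over an ordered batch of sensor readings. Only this small
    digest needs to travel for an integrity check, not the data itself.
    (Illustrative sketch, not immudb's audit protocol.)"""
    h = hashlib.sha256()
    for r in readings:
        h.update(json.dumps(r, sort_keys=True).encode())
    return h.hexdigest()

# Hypothetical readings collected on the device:
on_device = [{"ts": 1700000000 + i, "temp_c": 21.5 + i * 0.1} for i in range(1000)]
in_datacenter = [dict(r) for r in on_device]  # copy after transfer

assert batch_digest(on_device) == batch_digest(in_datacenter)
in_datacenter[500]["temp_c"] = 99.9  # any tampering changes the digest
assert batch_digest(on_device) != batch_digest(in_datacenter)
print("integrity check works")
```

A 64-character digest verifies a batch of any size, which is what makes auditing practical over unreliable, low-bandwidth links.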

Use Case - DevOps Evidence

Goal:

CI/CD and application build logs need to be stored in an auditable and tamper-evident way.
Very high performance is required, as the system should not slow down any build process.
Scalability is key, as billions of artifacts are expected within the next few years.
In addition to integrity validation, data needs to be retrievable by pipeline job ID or digital-asset checksum.

Implementation:

As part of the CI/CD audit functionality, data is stored within immudb using its key/value functionality. The key is either the CI/CD job ID (e.g., from Jenkins or GitLab) or the checksum of the resulting build artifact or container image.
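The dual key scheme can be sketched as follows. The key format and helper are hypothetical, for illustration; a plain dict stands in for the key/value store:

```python
import hashlib

def evidence_key(job_id=None, artifact_bytes=None):
    """Build the lookup key for build evidence: either the CI/CD job id or
    the artifact's checksum, matching the two retrieval paths described
    above. (Hypothetical key scheme -- adapt to your CI system.)"""
    if job_id is not None:
        return f"job/{job_id}".encode()
    return b"sha256/" + hashlib.sha256(artifact_bytes).hexdigest().encode()

evidence = {}  # stand-in for the key/value store

image = b"fake-container-image-bytes"
log_entry = b"build 4711 OK, tests passed"
# Store the same evidence under both keys:
evidence[evidence_key(job_id="jenkins-4711")] = log_entry
evidence[evidence_key(artifact_bytes=image)] = log_entry

# Retrieval works by pipeline job id or by digital-asset checksum:
assert evidence[b"job/jenkins-4711"] == log_entry
assert evidence[evidence_key(artifact_bytes=image)] == log_entry
```

Writing the evidence under both keys keeps lookups O(1) for either access path, at the cost of one extra key per artifact.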
