Migration of VMFS-5 to VMFS-6 Datastores

vSphere 6.5 and vSphere 6.7 support VMFS-6, VMware's new filesystem format. Migrating from VMFS-5 to VMFS-6 is pretty straightforward and can be easily automated using the PowerCLI cmdlet Update-VmfsDatastore.

David Stamen, Technical Marketing Engineer at VMware, published a very helpful guide.


Once all of your ESXi hosts that are connected to the VMFS-5 datastore have been upgraded to vSphere 6.5 or vSphere 6.7, you can proceed with migrating your datastores to VMFS-6.

Please note that vSphere 6.7 no longer supports VMFS-3. Prior to upgrading your ESXi hosts, you should upgrade any VMFS-3 datastores to VMFS-5; otherwise they will be upgraded automatically during the host upgrade.

Migrating from VMFS-5 to VMFS-6

When we start the migration of our datastores to VMFS-6 you may be wondering: Do I need to? What are the benefits? As the graphic below shows, there were quite a few enhancements, including Automatic Space Reclamation, In-Guest Space Reclamation, and native support for 4K storage.

[Image: VMFS Migration, VMFS-5 vs. VMFS-6 feature comparison. Photo courtesy of David Stamen]

You will also notice that we talk about migrating from VMFS-5 to VMFS-6 rather than upgrading. Due to the underlying storage changes to support 4K Native storage, as well as other features, the metadata format has changed and the upgrade cannot be done in place. The migration requires you to delete the current datastore and re-create it. In a later section we will cover how to automate this process.

You can find out more information on vSphere 6.7 Storage enhancements here.

Checking Current VMFS Version

In case you are not sure what VMFS version your datastore is currently running, we can find out with a simple PowerCLI one-liner.

Get-Datastore | Select Name, FileSystemVersion

Here we can see that DS01 is still at VMFS-5 and DS02 has already been upgraded to VMFS-6. In the next section we will target upgrading our datastore to VMFS-6.
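If you only want to list the datastores that still need to be migrated, a minimal sketch (assuming your VMFS-5 datastores report a FileSystemVersion starting with "5") could look like this:

# List only the datastores that are still on VMFS-5
Get-Datastore | Where-Object { $_.FileSystemVersion -like "5*" } | Select-Object Name, FileSystemVersion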

Updating VMFS Version

When it comes to migrating to VMFS-6 we have a few methods. A good reference is KB2147924, which covers the supported methods and ways to update your VMFS version. My colleague Nigel Hickey has covered how to do GUI-based upgrades of VMFS. However, when you have many datastores it may help to automate this process.

The above KB mentions a PowerCLI cmdlet called Update-VmfsDatastore. This is a powerful tool that behaves a bit differently from your standard PowerCLI cmdlet; it is very verbose and detailed in the checks it performs. However, there are also some considerations to take into account when using it.

A few considerations when using it are as follows:

  • Requires the source datastore to be part of a datastore cluster and a temporary VMFS-5 datastore of equal or greater capacity.
  • Issues Storage vMotion requests to the temporary datastore.
  • Validates that no unsupported VMs exist on the datastore, such as those with SRM, VADP, VRM or Clustering.
  • Temporarily disables Storage DRS and re-enables it once complete.
    • If the cmdlet fails, re-enabling Storage DRS is a manual effort (see the sketch after this list).
  • Carefully review the usage of the Resume and Rollback parameters in case of any errors.
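For that last-resort manual step, a minimal sketch is shown below. The datastore cluster name "DSC01" and the FullyAutomated level are assumptions; use the name and automation level your cluster actually had before the migration.

# Re-enable Storage DRS on the datastore cluster if the cmdlet failed mid-way
# ("DSC01" and the automation level are placeholders for your environment)
Get-DatastoreCluster -Name "DSC01" | Set-DatastoreCluster -SdrsAutomationLevel FullyAutomated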

So now that we know some things to look out for, what does the cmdlet actually do?

  1. Checks for VADP, SMP-FT, and MSCS/RAC virtual machines on the source datastore.
  2. Makes sure the datastore is accessible to all hosts.
  3. Validates that the temporary datastore has sufficient capacity.
  4. Modifies Storage DRS automation to manual.
  5. Moves VMs and orphaned data from the source to the temporary datastore.
  6. Unmounts the source datastore.
  7. Recreates the source datastore as VMFS-6.
  8. Moves VMs and orphaned data from the temporary datastore back to the source.
  9. Restores the original Storage DRS setting.
Now that we understand how it works, let's jump into the usage.

# Connect to vCenter and keep the connection object for the -Server parameter
$Server = Connect-VIServer -Server "vCenter"

# Source datastore to migrate and a temporary VMFS-5 datastore of equal or greater capacity
$Source = Get-Datastore "DSS"
$Temp = Get-Datastore "DST"

# Migrate the source datastore to VMFS-6, using the temporary datastore as swing space
Update-VmfsDatastore -Datastore $Source -TemporaryDatastore $Temp -TargetVmfsVersion "6" -Server $Server

It requires very minimal input: we provide the source datastore, the temporary datastore, the vCenter Server, and the target VMFS version.
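If you have a larger number of datastores to migrate, the same call can be wrapped in a loop. A minimal sketch, assuming every remaining VMFS-5 datastore meets the cmdlet's prerequisites (datastore cluster membership, sufficient temporary capacity) and reusing the placeholder $Temp and $Server variables from above:

# Migrate every remaining VMFS-5 datastore in turn, reusing the same temporary datastore
Get-Datastore | Where-Object { $_.FileSystemVersion -like "5*" } | ForEach-Object {
    Update-VmfsDatastore -Datastore $_ -TemporaryDatastore $Temp -TargetVmfsVersion "6" -Server $Server
}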

It is quite simple, and if you encounter errors they will be logged. You can roll back using the Rollback parameter, or, if the error is recoverable, continue the migration using the Resume parameter.
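As a sketch of what that looks like, assuming the interrupted migration was started against the same $Source datastore as above:

# Continue a previously interrupted migration of the source datastore
Update-VmfsDatastore -Datastore $Source -Resume

# Or undo the partially completed migration and return to the original state
Update-VmfsDatastore -Datastore $Source -Rollback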

You can find more information on the usage of Update-VmfsDatastore in the PowerCLI Reference.

Find the full article here: 

https://blogs.vmware.com/vsphere/2018/10/automating-migration-of-vmfs-5-to-vmfs-6-datastores.html

