VMware Tanzu Projects: Octant - Kubernetes Runtime Overview

Heptio started a number of open source projects designed to help Kubernetes developers and operators run and maintain their Kubernetes clusters in the best possible way. VMware acquired Heptio at the end of last year and has since released many of the Heptio projects under the new VMware Tanzu umbrella.

There are already a couple of projects that help with container backups and more, but this post covers a very cool project called Octant, which gives you a Kubernetes runtime overview.

You can filter by namespace, search for specific labels, or just browse through your K8s runtime.

The Octant description reads: "a web-based, highly extensible platform for developers to better understand the complexity of Kubernetes clusters." I would describe it as a Kubernetes runtime overview that also supports third-party plugins. It's only a question of time before the first plugins are created by the community.

What is Octant

The project page: https://github.com/vmware-tanzu/octant

Octant is a tool for developers to understand how applications run on a Kubernetes cluster. It aims to be part of the developer’s toolkit for gaining insight and approaching complexity found in Kubernetes. 

Octant offers a combination of introspective tooling, cluster navigation, and object management along with a plugin system to further extend its capabilities.

Features

  • Resource Viewer

    Graphically visualize relationships between objects in a Kubernetes cluster. The status of individual objects is represented by color to show workload performance.

  • Summary View

    Consolidated status and configuration information in a single page aggregated from output typically found using multiple kubectl commands.

  • Port Forward

    Forward a local port to a running pod with a single button for debugging applications and even port forward multiple pods across namespaces.

  • Log Stream

    View log streams of pod and container activity for troubleshooting or monitoring without holding multiple terminals open.

  • Label Filter

    Organize workloads with label filtering for inspecting clusters with a high volume of objects in a namespace.

  • Cluster Navigation

    Easily change between namespaces or contexts across different clusters. Multiple kubeconfig files are also supported (see the example right after this list).

  • Plugin System

    Highly extensible plugin system for users to provide additional functionality through gRPC. Plugin authors can add components on top of existing views.
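For the cluster navigation feature, here is a quick sketch of how you might point Octant at a specific kubeconfig or at several merged ones. The --kubeconfig flag and the standard KUBECONFIG merge behavior are taken from the Octant and kubectl documentation, so double-check octant --help on your release; the file paths are just examples.

octant --kubeconfig $HOME/.kube/config                        # explicit kubeconfig (flag per the Octant docs)
KUBECONFIG=$HOME/.kube/cluster-a:$HOME/.kube/cluster-b octant # merge several kubeconfig files, then switch contexts in the UI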

Deployment

The deployment of Octant is quite cool and somewhat surprising, as it's not a Kubernetes service or Helm chart to be deployed. It's an executable that leverages the Kubernetes API using your kubectl configuration.

Simply download and install the release package:

wget https://github.com/vmware-tanzu/octant/releases/download/v0.7.0/octant_0.7.0_Linux-64bit.deb
sudo dpkg -i octant_0.7.0_Linux-64bit.deb
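To check that the binary landed on your PATH, printing the version should be enough. The version subcommand is how recent Octant releases report their build; run octant --help if yours differs.

octant version   # should print the Octant version and build details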

Before starting Octant, make sure you have access to your Kubernetes cluster. The best way to test that is the kubectl cluster-info command.

Start the Octant web server:

octant

Octant tries to launch the default web browser on 127.0.0.1:7777. As I'm connected to a remote console, I start Octant on all interfaces and an available port.

For configuring Octant, setting up a development environment, or running tests, refer to the documentation in the GitHub repository.

kubectl cluster-info                        # continue if this succeeds
OCTANT_LISTENER_ADDR=0.0.0.0:8090 octant    # listen on all interfaces, port 8090
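Two more environment variables from the Octant documentation can help on a headless or remote box. Treat the exact names as something to verify against your Octant version, and octant.example.com is just a placeholder hostname.

OCTANT_DISABLE_OPEN_BROWSER=true OCTANT_LISTENER_ADDR=0.0.0.0:8090 octant           # don't try to open a local browser
OCTANT_ACCEPTED_HOSTS=octant.example.com OCTANT_LISTENER_ADDR=0.0.0.0:8090 octant    # allow access via that hostname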

[Screenshot: Octant - Kubernetes runtime overview]

There are also deployment options for macOS and Windows, just in case you run a different operating system. The main setup and configuration work the same.
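For reference, here is a sketch of the package-manager route on those platforms, assuming the Homebrew and Chocolatey packages are still published under the name octant:

brew install octant             # macOS (Homebrew)
choco install octant --confirm  # Windows (Chocolatey, elevated shell)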

Dashboard

Simply access the Octant dashboard using your browser and the configured network address and port (in this case http://ipaddress:8090). You could also put a reverse proxy in front of Octant in case you want to run it permanently and add some protection (e.g. basic auth).
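A rough sketch of such a reverse proxy, assuming nginx runs on the same host as Octant (default port 7777) and using the hypothetical hostname octant.example.com; adjust paths and names to your setup. The WebSocket headers matter because the Octant UI updates live.

sudo apt-get install -y nginx apache2-utils
sudo htpasswd -c /etc/nginx/.htpasswd octant-user        # create the basic auth user

sudo tee /etc/nginx/conf.d/octant.conf <<'EOF'
server {
    listen 80;
    server_name octant.example.com;

    location / {
        auth_basic           "Octant";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:7777;
        proxy_http_version   1.1;
        proxy_set_header     Upgrade $http_upgrade;
        proxy_set_header     Connection "upgrade";
    }
}
EOF
sudo nginx -s reload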

[Screenshot: Octant dashboard - Kubernetes runtime overview]

The dashboard shows you an overview of your current workloads, configuration files, RBAC security settings, and events going on in your Kubernetes namespace. You can change the namespace at any time using the drop-down menu at the top.

Last but not least, you can write and integrate your own plugins. I'm really curious and excited to see what solutions will come up and get integrated here.
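If you want to experiment with plugins, the Octant docs describe a plugin directory that Octant scans for executables at startup ($HOME/.config/octant/plugins on Linux; verify the path for your OS and release). my-octant-plugin below is just a placeholder binary name.

mkdir -p "$HOME/.config/octant/plugins"
cp ./my-octant-plugin "$HOME/.config/octant/plugins/"   # any plugin built against the Octant gRPC plugin API
octant                                                  # restart Octant so it picks up the plugin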


