K8s, Grafana, & CodeNotary


CodeNotary is a decentralized, blockchain-based solution for guaranteeing the integrity of the containers running in a Kubernetes cluster. In this blog, we will show you how to find and continuously monitor for unwanted Docker images in your K8s environment using CodeNotary and its CLI tool, vcn. We will also show you how to visualize everything with Grafana on top of a Prometheus time series database. The blockchain and smart contract based CodeNotary vcn tool lets you protect your environment so that only the containers signed by you (or by another signer you trust), and therefore recognized by you as ones you want to use, are clearly identifiable as “Trusted”.


The applications we will be using in this blog, and that you should already be familiar with, are:

  • Docker
  • Kubernetes
  • Prometheus
  • Grafana


Now, let’s protect our environment.


To begin with, you will need to start a free trial with CodeNotary. If you don’t have one yet, you can do so here.


Once you have your free trial up and running, let’s get started.



Getting Started and Cloning the Repo


First off, there’s a GitHub project called vcn-k8s that we are going to clone. The vcn-k8s project is a watchdog that continuously verifies the integrity and authenticity of the containers running in a k8s cluster. To clone it, we use:


git clone


Once we have our clone, we change the directory to the correct corresponding folder.


cd vcn-k8s


Listing the folder contents, we see a Docker Compose file, a Dockerfile, the Kubernetes DaemonSet, and the verify.Prometheus file.


vcn-k8s GitHub folder contents


The vcn command actually runs inside the pod as we will see later on. Of course, we can check all the files out whenever we want but for now we are going straight into building our container.



Building Our Docker Container


So we enter the command:


docker-compose build


And now everything gets downloaded and our container is built.



Next, we want to tag the container image and upload it to Docker Hub. In your case, just use your own account, so your command would look like this (below):


docker tag 57e9d748995e (your account)/vcn


Now, we want to push our container onto Docker Hub.


docker push (your account)/vcn



When the command runs through successfully, our Docker image has been tagged and uploaded correctly.



Working with Our DaemonSet


The next thing we want to do is change the DaemonSet so that it uses the image we just uploaded to Docker Hub.


To do so we’ll enter the command:


vi vcn-k8s-daemonset.yaml

Now, let’s get into the DaemonSet.


Here we find a default image already entered and we will replace it with our own Docker Hub image.


# securityContext:

#   privileged: true

image: (your account)/vcn:latest


Our DaemonSet tells Kubernetes to deploy a pod to every Kubernetes node automatically.


We also see the annotations that tell the Prometheus server inside our Kubernetes deployment that the pods can be scraped and that port 9581 is used by CodeNotary vcn.
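In the DaemonSet’s pod template, such annotations typically look like the following. This is a sketch based on the widely used prometheus.io annotation convention, not a copy of the project’s manifest, so check the actual vcn-k8s-daemonset.yaml for the exact names and values:

```yaml
# Sketch of a DaemonSet pod template with Prometheus scrape annotations.
# Names and labels here are illustrative; only the annotation keys follow
# the common prometheus.io convention.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: codenotary-vcnverify
spec:
  selector:
    matchLabels:
      app: codenotary-vcnverify
  template:
    metadata:
      labels:
        app: codenotary-vcnverify
      annotations:
        prometheus.io/scrape: "true"   # tell Prometheus this pod can be scraped
        prometheus.io/port: "9581"     # port used by CodeNotary vcn
    spec:
      containers:
        - name: vcnverify
          image: (your account)/vcn:latest
```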




We go ahead and save the DaemonSet configuration file to reflect our changes.


Next, we deploy the DaemonSet into Kubernetes using:


kubectl apply -f vcn-k8s-daemonset.yaml


Once it has been deployed into Kubernetes, we see:




Now, we can check whether the pods are running. Specifically, we check for codenotary-vcnverify, which we see below in the first two returned lines.


pods list


And they are already running. Perfect!


That’s basically it. Right? Well, not quite, because now we want to visualize the data.


Making the Data Useful


Currently, what happens is that the two pods running are each on one of the Kubernetes nodes.


So we need to check whether our Prometheus is already scraping the data. With Kubernetes you don’t need to configure anything extra: the nice thing about the DaemonSet approach is that when you have a Prometheus running inside your Kubernetes cluster, you can use annotations to make sure every pod is crawled automatically as soon as it comes online. This way we can continuously monitor for unwanted and foreign Docker images in our environment.
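For reference, this annotation-based discovery usually relies on a scrape job like the one below in the Prometheus configuration. This is the common community pattern for annotation-driven pod discovery, not necessarily the exact configuration running in this cluster:

```yaml
# Sketch of annotation-driven pod discovery (common pattern).
# Pods annotated prometheus.io/scrape: "true" are kept; the
# prometheus.io/port annotation overrides the scrape port.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```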


Next, we will check our Prometheus server.


Testing Our Prometheus Set Up


First, we need to make sure our targets are correct…


targets url


…which they are.


Next, let’s scroll down and check whether Prometheus finds the CodeNotary containers and whether they are scraped correctly.


Prometheus pod


At first, the container will appear as down and be listed as invalid. This is because the application needs to detect and verify each running container. After a few seconds, we check it again and the target is up and the data is being scraped.


Prometheus pod


The 2nd container is still down, but it takes only a couple of seconds before it is also up.


Now, we can start checking to see if the data is coming in on the Prometheus graph page.


Prometheus Graph Page


We just go for the vcn verification level metrics.


vcn_verification_level query


Here we type vcn_verification_level and run the query.


Yes. The data is coming in now. As we can see, we have various containers with the status value indicated for each. (In this demonstration, one of the containers is already signed, which is why we see a different value for it, i.e. 2 instead of 0.)
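Once the metric is flowing, a couple of simple PromQL expressions make the data easier to digest. These are sketches assuming vcn_verification_level is a per-container gauge as shown above; check the actual label set exposed in your Prometheus:

```promql
# All containers that are still unverified/untrusted (level 0)
vcn_verification_level == 0

# How many containers sit at each verification level
count_values("level", vcn_verification_level)
```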


Prometheus pod status

Visualizing CodeNotary Results and Dashboard Integration


Now to see how everything is working together, we want to import or create a Grafana dashboard.


After pulling up our dashboard list, we select ‘Import Dashboard’ on the right-hand side.


import dashboard


Next, we copy and paste the dashboard JSON file content and load it. Alternatively, we can create the dashboard ourselves based on the Prometheus data.
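If you build the dashboard by hand instead, the core of a Grafana panel is just a Prometheus query target. The fragment below is a minimal, hypothetical sketch of the relevant part of a dashboard JSON (the real vcn-k8s dashboard contains much more, and the datasource name must match your own setup):

```json
{
  "panels": [
    {
      "title": "Container Status",
      "type": "table",
      "datasource": "Prometheus",
      "targets": [
        { "expr": "vcn_verification_level", "legendFormat": "{{instance}}" }
      ]
    }
  ]
}
```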


load dashboard JSON file content


After importing the dashboard, we select it in the Grafana dashboard navigation.


selecting Grafana dashboard navigation


And here we go! Our ‘Container Status’ viewer is up.


Opvizor container status viewer


We see there are verifiable and non-verifiable containers. After expanding the ‘Container Status’ panel window, we can also see that they are labeled as ‘Trusted’ and ‘Unknown’ along the right side of the window. The container we will be signing later in this demo (the third one in the list, netdata/netdata) is listed as ‘Unknown’.


Opvizor container status viewer panel window


Trusting Our Docker Images


Next, we want to trust our container images and mark them as production ready. In our example, we go for the netdata container. So let’s check which container image is used by the NetData pods with the command:


kubectl describe pod netdata-master-0


Now, we scroll back up through the description we just retrieved and locate the image that is getting pulled.


container image description


Now, we pull the image locally in order to sign it using CodeNotary vcn:


docker pull netdata/netdata:v1.13.0


CodeNotary Trust Process


The next step is to sign the image with our CodeNotary vcn tool, marking in an immutable way that we trust this container. Optionally, we can use the --public switch if we want to disclose our identity so others can rely on our signature. When signing a Docker image, we make sure to add docker:// in front of the image name (including the tag), as demonstrated below:


vcn s --public -y docker://netdata/netdata:v1.13.0


Now, we need to put in our vcn credentials for the Keystore (private key) in order to sign the asset onto the CodeNotary blockchain.


Key & Key Store Passphrase


Here we go. And now our image is signed!


CodeNotary vcn verification


Before checking the dashboard, we will verify our local image for good measure.


vcn v docker://netdata/netdata:v1.13.0


Here we see our image is, in fact, verified, and we can see all of its attributes.


CodeNotary vcn verified attributes


In the dashboard, we can see that our netdata container is now verified as ‘Trusted’.


CodeNotary vcn verified TRUSTED


Using CodeNotary vcn sign, we can start signing all of the container images we approve and trust to run within our Kubernetes environment.


It’s as simple as that.


There’s a free trial, of course. And if you are a non-commercial or open source contributor, just sign up with CodeNotary and you can do exactly what we just showed you, for free, forever.






If you liked this tutorial, check out our other integrations here.
