Securing your Azure DevOps ecosystem: Jenkins and Kubernetes (AKS) using CodeNotary (Part 1)

Jenkins is one of the most popular CI/CD components in the DevOps world. It is easy to deploy and configure, and there is a wide range of plugins for all kinds of integrations. Within the Azure DevOps ecosystem, Jenkins in combination with Kubernetes (AKS) makes a powerful team. We at CodeNotary want to make your life easy, and the integration of CodeNotary into your Jenkins pipeline even easier. This blog post gives you a complete guideline.

If you want to learn more about Notarization and Authentication, please check this blog post first: https://hackernoon.com/the-day-we-started-to-protect-devops-with-blockchain-a9g6y33gt

But let’s start step by step.

Jenkins

Jenkins is an open source automation server written in Java. It is used to continuously build and test software projects, enabling developers to set up a CI/CD environment.

Everyone in the DevOps world knows the Jenkins logo

There is a very nice guide that describes a complete Azure deployment for Jenkins, Grafana and AKS. I can only recommend reading it: https://medium.com/@adilsonbna/building-my-own-azure-devops-ecosystem-ef92b8db9da5

This blog post takes the deployed Hello World Docker image as an example of how you can use CodeNotary to notarize every Docker container image build before it gets deployed.

Login as Admin

As a first step you need to log in to Jenkins with an admin account or an account that can change an existing project.

Jenkins URL and SSH access in your Azure Portal

As you cannot log in as admin using the JenkinsURL, you need to run az login in your terminal and then the JenkinsSSH command from the Outputs page of your Azure deployment. Afterwards you can access Jenkins at http://localhost:8080 and use the admin account.
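
In practice this looks roughly like the following; the SSH user and DNS name are placeholders, so copy the actual JenkinsSSH value from your deployment's Outputs page instead of typing it by hand:

az login
# the JenkinsSSH output typically tunnels port 8080 from the Jenkins VM to your machine, for example:
ssh -L 127.0.0.1:8080:localhost:8080 <admin-user>@<your-jenkins-dns-name>.<region>.cloudapp.azure.com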

Configure your Pipeline

Click Configure on your pipeline to change the script.

Pipeline Script

node {
  def built_img = ''
  def taggedImageName = ''

  stage('Checkout git repo') {
    git branch: 'master', url: params.GIT_REPO
  }
  stage('Build Docker image') {
    built_img = docker.build(params.DOCKER_REPOSITORY + ":${env.BUILD_NUMBER}", './jenkins-cicd-container')
  }
  stage('Push Docker image to Azure Container Registry') {
    docker.withRegistry(params.REGISTRY_URL, params.REGISTRY_CREDENTIALS_ID) {
      taggedImageName = built_img.tag("${env.BUILD_NUMBER}")

      docker.image('codenotary/vcn:0.7.1-docker').inside('-v "/var/run/docker.sock:/var/run/docker.sock:ro" --entrypoint ""') { c ->
        sh 'VCN_USER="user" VCN_PASSWORD="password" vcn login'
        sh 'VCN_NOTARIZATION_PASSWORD="password" vcn n --attr jenkins=' + env.BUILD_NUMBER + ' docker://' + taggedImageName
      }

      built_img.push("${env.BUILD_NUMBER}")
    }
  }
  stage('Deploy configurations to Azure Container Service (AKS)') {
    withEnv(['TAGGED_IMAGE_NAME=' + taggedImageName]) {
      acsDeploy azureCredentialsId: params.AZURE_SERVICE_PRINCIPAL_ID, configFilePaths: 'jenkins-cicd-container/kubernetes/*.yaml', containerService: params.AKS_CLUSTER_NAME + ' | AKS', dcosDockerCredentialsPath: '', enableConfigSubstitution: true, resourceGroupName: params.AKS_RESOURCE_GROUP_NAME, secretName: '', sshCredentialsId: ''
    }
  }
}

The important part to change:

docker.image('codenotary/vcn:0.7.1-docker').inside('-v "/var/run/docker.sock:/var/run/docker.sock:ro" --entrypoint ""') { c ->
    sh 'VCN_USER="user" VCN_PASSWORD="password" vcn login'
    sh 'VCN_NOTARIZATION_PASSWORD="password" vcn n --attr jenkins=' + env.BUILD_NUMBER + ' docker://' + taggedImageName
  }
  1. We use the CodeNotary Docker image that contains the vcn command line tool.
  2. Make sure the container starts with the Docker socket (docker.sock) mapped into it.
  3. Run vcn login using your username and password (we recommend creating a service account for your Jenkins at CodeNotary.io); that way you can always differentiate between automatic and manual notarizations. If you prefer not to hard-code the credentials, you can bind them from the Jenkins credentials store, as shown in the sketch after this list.
  4. Run the notarization process for the newly built Docker image (during the Jenkins build).
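
Binding the credentials is a minimal change to the snippet above. The following sketch assumes a username/password credential with the hypothetical id codenotary-service-account (it requires the Jenkins Credentials Binding plugin) and that your notarization password equals the account password; adjust both to your setup:

docker.image('codenotary/vcn:0.7.1-docker').inside('-v "/var/run/docker.sock:/var/run/docker.sock:ro" --entrypoint ""') { c ->
  withCredentials([usernamePassword(credentialsId: 'codenotary-service-account',
                                    usernameVariable: 'VCN_USER',
                                    passwordVariable: 'VCN_PASSWORD')]) {
    // vcn login picks up VCN_USER and VCN_PASSWORD from the environment
    sh 'vcn login'
    // assumption: the notarization password is the same as the account password
    sh 'VCN_NOTARIZATION_PASSWORD="$VCN_PASSWORD" vcn n --attr jenkins=' + env.BUILD_NUMBER + ' docker://' + taggedImageName
  }
}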

Check the notarization

You can check the pipeline output for the successful notarization of your Jenkins build docker image.

That’s it – all of your future Jenkins-built Docker images will be automatically notarized, and you can authenticate them from anywhere in the world.
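
If you want to authenticate an image yourself, you can run vcn on any machine that has Docker installed and the image pulled locally. Registry, repository and tag below are placeholders for whatever your pipeline pushed:

docker pull <your-registry>.azurecr.io/<repository>:<build-number>
vcn authenticate docker://<your-registry>.azurecr.io/<repository>:<build-number>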

Use Case - Tamper-resistant Clinical Trials

Goal:

Blockchain PoCs were unsuccessful due to complexity and lack of developers.

Still, the goals of data immutability and client-side verification remain crucial. Furthermore, the system needs to be easy to use and operate (allowing backups, maintenance windows, and so on).

Implementation:

immudb runs in different datacenters across the globe. All clinical trial information is stored in immudb, either as transactions or as whole PDF documents.

Having that single source of truth with versioned, timestamped, and cryptographically verifiable records enables a whole new level of transparency and trust.

Use Case - Finance

Goal:

Store the source data, the decisions, and the rule base for government financial support in a timestamped, verifiable way.

A very important capability is comparing a historic decision (based on the rulebase at the time) with the rulebase at a different date. Fully cryptographically verifiable time travel queries are required to achieve that comparison.

Implementation:

While the source data, the rulebase, and the documented decision are stored as verifiable blobs in immudb, the transaction itself is stored using immudb’s relational layer.

That allows the use of immudb’s time travel capabilities to retrieve verified historic data and recalculate with the most recent rulebase.

Use Case - eCommerce and NFT marketplace

Goal:

No matter if it’s an eCommerce platform or NFT marketplace, the goals are similar:

  • High transaction volume (potentially millions per second)
  • Ability to read and write multiple records within one transaction
  • Prevent overwrites or updates of transactions
  • Comply with regulations (PCI, GDPR, …)


Implementation:

immudb is typically scaled out on hyperscalers (e.g. AWS, Google Cloud, Microsoft Azure) distributed across the globe. Auditors are also distributed to track the verification proof over time. Additionally, the shop or marketplace applications store immudb’s cryptographic state information. That high level of integrity and tamper evidence, combined with very high transaction speed, is key for companies choosing immudb.

Use Case - IoT Sensor Data

Goal:

Environment data collected by IoT sensor devices needs to be stored locally in a cryptographically verifiable manner until it is transferred to a central datacenter. The data integrity needs to be verifiable at any given point in time and while in transit.

Implementation:

immudb runs embedded on the IoT device itself and is consistently audited by external probes. The data transfer required for auditing is minimal and works even with minimal bandwidth and unreliable connections.

Whenever the IoT devices have a high-bandwidth connection, the data is transferred to a datacenter (a large immudb deployment) and the data integrity of both source and destination is fully verified.

Use Case - DevOps Evidence

Goal:

CI/CD and application build logs need to be stored in an auditable and tamper-evident way.
Very high performance is required, as the system should not slow down any build process.
Scalability is key, as billions of artifacts are expected within the next few years.
In addition to integrity validation, data needs to be retrievable by pipeline job id or by digital asset checksum.

Implementation:

As part of the CI/CD audit functionality, data is stored in immudb using its key/value functionality. The key is either the CI/CD job id (e.g. from Jenkins or GitLab) or the checksum of the resulting build artifact or container image.
