Installing an OpenShift Test Environment Using Minishift

Many companies decide to use Red Hat OpenShift to manage their Kubernetes platform, both to simplify operations and to get a straightforward UI instead of the command line.

Furthermore, OpenShift has many security features built-in.

As we support Kubernetes and OpenShift customers the same way, we needed a development and testing platform for OpenShift as well, and discovered Minishift as an easy and sufficient solution.

In this article we share our installation steps using Ubuntu 18.10 and the caveats we ran into.

What is OpenShift?

Red Hat® OpenShift® is a hybrid cloud, enterprise Kubernetes application platform.

What is Minishift?

Minishift is a tool that helps you run OpenShift locally by running a single-node OpenShift cluster inside a VM. You can try out OpenShift or develop with it, day-to-day, on your local host.

Preparation

There are preparation steps you should be aware of before installing Minishift. These steps mainly focus on setting up the hypervisor that should be used to run the virtual machine for Minishift.

As we’re using Ubuntu 18.10 the KVM hypervisor is our choice and the following installation steps are based on it.
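Before installing KVM it is worth verifying that your CPU supports hardware virtualization; a quick sanity check (the cpu-checker package used below is optional and assumed to be available via apt):

```shell
# Count CPU flags for Intel VT-x (vmx) or AMD-V (svm); a result greater than 0
# means hardware virtualization is available (it may still need to be enabled in the BIOS).
egrep -c '(vmx|svm)' /proc/cpuinfo

# Alternatively, the cpu-checker package provides a dedicated tool:
sudo apt install cpu-checker
sudo kvm-ok   # prints "KVM acceleration can be used" on a capable system
```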

If you're running macOS or Windows, please check the installation steps for these operating systems here:

macOS

Windows

All Platforms

KVM

If you run a different Linux distribution than Ubuntu 18.10 or higher, you can check the following URL for alternative installations:

Setting Up Virtualization Environment Minishift

Install libvirt and qemu-kvm on your system:

sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system
sudo usermod -a -G libvirt $(whoami) # add yourself to the libvirt group
newgrp libvirt # apply group change to current session

Install the KVM driver binary and make it executable as follows:

sudo curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-ubuntu16.04 -o /usr/local/bin/docker-machine-driver-kvm # install the docker-machine KVM driver
sudo chmod +x /usr/local/bin/docker-machine-driver-kvm # make it executable

Check libvirtd service

systemctl is-active libvirtd
sudo systemctl start libvirtd # if the service is not active

libvirt networking

The next step is to check the libvirt networking and make sure everything is up and running.

Check the network status

sudo virsh net-list --all
sudo virsh net-start default # start the service if not running
sudo virsh net-autostart default # mark the service as autostart

Minishift installation

Download Minishift software for your operating system from the Minishift Releases page, extract and copy the executable into your path.

cd /tmp
wget https://github.com/minishift/minishift/releases/download/v1.34.0/minishift-1.34.0-linux-amd64.tgz
tar xzf minishift-1.34.0-linux-amd64.tgz
sudo cp minishift-1.34.0-linux-amd64/minishift /usr/local/bin
sudo chmod +x /usr/local/bin/minishift

Minishift initial start

On its first start, Minishift downloads the OpenShift binary from GitHub and the Minishift CentOS ISO to your local system, and sets up networking, storage and everything else you need to run OpenShift.

To get started, simply run minishift start
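If the defaults are too small for your workload, you can pin the KVM driver explicitly and give the VM more resources before the first start (the values below are examples, adjust them to your hardware):

```shell
# Persist the driver and resource settings in the Minishift profile
minishift config set vm-driver kvm
minishift config set memory 4GB
minishift config set cpus 2

# Then start the single-node OpenShift cluster
minishift start
```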

OpenShift Minishift start

Minishift installation done

Developer login into Minishift

After the installation is done, you'll get all the login details on your command line: the URL to access OpenShift and the information to log in as a developer.

You can simply click the link or open it in a browser, then use developer as the username and anything except an empty string as the password.
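The same developer login also works from the command line; Minishift can print the console URL and cluster IP for you (any non-empty password is accepted):

```shell
# Print the web console address of the running cluster
minishift console --url

# Log in as the developer user via the oc CLI
oc login -u developer -p anypassword https://$(minishift ip):8443
```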

OpenShift developer login

Admin login into Minishift

Now the first issues appeared, as you need to run oc login -u system:admin

First issue: Can’t find the oc command

Solution: run eval $(minishift oc-env)

Great, the oc command works now, but the admin login still fails (be aware that sometimes the link redirects to localhost and you need to manually change it back to the Minishift IP).

Second issue: no admin login possible

Solution

minishift addons apply admin-user
oc login -u admin # type in a password to use

OpenShift Admin login

Installation of a Minishift addon

There are some community addons you can use to simplify the Minishift installation. One example is Prometheus.

The process to get the addons is very easy:

git clone https://github.com/minishift/minishift-addons.git # clone the addon repository
minishift addons install minishift-addons/add-ons/prometheus # point to the addon path you want to install
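To confirm the addon was registered, you can list all known addons; installed community addons appear alongside the defaults:

```shell
minishift addons list   # shows each addon with its enabled/disabled state
```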

Deploy Prometheus addon

minishift addons apply prometheus --addon-env namespace=kube-system

Minishift Prometheus addon

That's it: you have a running Minishift environment now and can start playing around, developing extensions, or deploying your applications on OpenShift.
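As a quick smoke test, you can deploy one of the public OpenShift example applications. The repository below is the upstream Node.js sample (an assumption for illustration; any buildable Git repository works):

```shell
# Create a project and build/deploy the sample directly from source
oc new-project demo
oc new-app https://github.com/sclorg/nodejs-ex -l app=nodejs-sample

# Expose the service and check the deployment status
oc expose svc/nodejs-ex
oc status
```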


