
VMware vSphere – install CentOS 8 and run minishift

Many companies are running OpenShift on top of VMware vSphere to deploy, run and manage their container lifecycle. OpenShift uses the container orchestration platform Kubernetes to do so.

Especially when developing applications for OpenShift or if you just want to run a local test environment to play around, you should definitely check out minishift.

This blog post covers the creation of a virtual machine and the installation of the CentOS 8 operating system, as well as the first steps of installing minishift.

Creation of the virtual machine

You can pretty much create a completely standard VM and select CentOS 8 (x64) as the guest operating system. As we want to run KVM inside that VM, you need to expose the hardware virtualization feature to the guest (nested virtualization).

Screenshot: selecting CentOS 8 (x64) as the guest operating system

Screenshot: enabling hardware-assisted virtualization for the guest
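If you prefer to script this step instead of clicking through the vSphere Client, nested virtualization can also be enabled from the command line. A minimal sketch using govc, assuming govc is installed and configured against your vCenter and that the VM is called centos8-minishift (both are assumptions, adjust to your environment):

# enable nested hardware virtualization for the powered-off VM
govc vm.change -vm centos8-minishift -nested-hv-enabled=true

# the equivalent setting in the .vmx file is:
# vhv.enable = "TRUE"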

Installation of CentOS 8

We won’t dig into the installation of CentOS 8 itself, as it’s pretty straightforward. Simply download the DVD image below, connect it to the VM and run the installer:

http://isoredirect.centos.org/centos/8/isos/x86_64/CentOS-8-x86_64-1905-dvd1.iso

First start of CentOS 8

The really interesting part starts with the first configuration steps on your freshly installed operating system. Especially if you’re not used to Red Hat-based systems, or only know older versions, there are some changes on the command line.

Set up your network

# set your hostname
nmtui-hostname

# configure your network
nmtui-edit

# connect your network
nmtui-connect
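A quick, optional check that the hostname and network settings took effect:

# show hostname, interface and address information
hostnamectl
nmcli device status
ip addr show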

Update packages

dnf check-update
dnf update
dnf clean all

# install some basic tools
dnf install nano vim wget curl net-tools lsof bash-completion

Create a new user account with sudo permissions

useradd user
passwd user
usermod -aG wheel user
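To confirm that the wheel group membership works, switch to the new account and run a command through sudo:

su - user
sudo whoami   # should print "root"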

Install KVM

Start by configuring a bridge network

# create and edit the following file
vi /etc/sysconfig/network-scripts/ifcfg-br0

# file content
DEVICE=br0
TYPE=Bridge
IPADDR=192.168.10.100
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
DNS1=192.168.10.1
ONBOOT=yes
BOOTPROTO=static
DELAY=0

and the default network adapter

# create or change the interface config (check the interface name, e.g. ens192)
vi /etc/sysconfig/network-scripts/ifcfg-ens192

# file content
TYPE=Ethernet
BOOTPROTO=none
BRIDGE=br0
NAME=ens192
DEVICE=ens192
ONBOOT=yes

Reboot the system and it should come up with the bridge network and the configured IP address.
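Alternatively, as CentOS 8 is deprecating the legacy network-scripts in favor of NetworkManager, roughly the same bridge can be created with nmcli. A sketch assuming the interface name ens192 and the addressing used above:

nmcli connection add type bridge ifname br0 con-name br0
nmcli connection modify br0 ipv4.method manual ipv4.addresses 192.168.10.100/24 ipv4.gateway 192.168.10.1 ipv4.dns 192.168.10.1
nmcli connection add type ethernet ifname ens192 con-name br0-slave-ens192 master br0
nmcli connection up br0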

# install required packages
dnf install qemu-kvm qemu-img libvirt virt-install libvirt-client

# check the kvm module
lsmod | grep kvm

# start and enable the libvirtd
systemctl start libvirtd
systemctl enable libvirtd

That’s it, KVM is installed and should be up and running.
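To double-check, libvirt ships a small validation tool that reports whether the host is set up for full virtualization, and virsh should answer without errors:

virt-host-validate
virsh list --all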

Preparation

Minishift needs a hypervisor driver to create its VM; to run it on KVM, we first have to install the docker-machine KVM driver and give our user access to libvirt:

sudo usermod -a -G libvirt $(whoami)
newgrp libvirt
curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-centos7 -o docker-machine-driver-kvm
sudo mv docker-machine-driver-kvm /usr/local/bin/docker-machine-driver-kvm
sudo chmod +x /usr/local/bin/docker-machine-driver-kvm
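If the first minishift start later complains about permissions or a missing driver, it is usually worth verifying that the binary is on the PATH and that the group change is active (log out and back in if it is not):

which docker-machine-driver-kvm
id -nG | grep libvirt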

Then start the default libvirt network and mark it for autostart:

sudo virsh net-start default
sudo virsh net-autostart default
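You can verify that the network is active and marked for autostart:

sudo virsh net-list --all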

Installation

Check the latest release and change the release number accordingly: 

https://github.com/minishift/minishift/releases/latest

export VER="1.34.1"
curl -L https://github.com/minishift/minishift/releases/download/v$VER/minishift-$VER-linux-amd64.tgz -o minishift-$VER-linux-amd64.tgz
tar xvf minishift-$VER-linux-amd64.tgz

# copy the executable to /usr/local/bin
sudo mv minishift-$VER-linux-amd64/minishift /usr/local/bin

# check the version
minishift version
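Before the first start you can optionally tell minishift which driver and resources to use; the values below are just examples, adjust them to the size of your VM:

minishift config set vm-driver kvm
minishift config set cpus 2
minishift config set memory 4GB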

First start

# start minishift
minishift start

# get minishift console url
minishift console --url

# stop minishift
minishift stop

Screenshot: output of minishift start

The last message should show the console URL that can be opened in a browser:

Server Information …
OpenShift server started.

The server is accessible via web console at:
https://192.168.42.144:8443/console

Install the CLI (oc)

To control and manage OpenShift from the command line, you should install the oc command as well. The oc binary is already included in the minishift cache, so you can simply copy it into a directory in your path, like /usr/local/bin:

sudo cp ~/.minishift/cache/oc/v3.11.0/linux/oc /usr/local/bin
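Alternatively, minishift can put the cached oc binary on your PATH for the current shell session:

eval $(minishift oc-env)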

oc commands

check if oc is working: oc version

login as administrator: oc login -u system:admin

check your running configuration: oc config view
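As a quick smoke test, you can also log in with the default developer account (developer/developer in minishift, unless you changed it) and list the available projects:

oc login -u developer -p developer
oc get projects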

Next steps

You should now have a running installation, and the oc command should give you meaningful responses. In an upcoming blog post we’re going to cover steps like installing kubectl, installing add-ons and running your first application.
