You might have seen or even used the vCenter Event Broker Appliance, which was released as a fling some weeks ago and went fully open source last week thanks to the excellent team of Michael Gasch and William Lam.

While the tag example that ships with the project and is covered in the manual is a nice starter, it’s not the most useful thing to have. But that can easily be changed. Think about acting on reconfigure events and pushing VM config changes to Slack: audit VM configuration! That way all changes are documented and searchable using the Slack service.

Audit VM configuration

Setup the vCenter Event Broker Appliance (VEBA)

I won’t go into details here, as we covered it already in a former blog post.

You can read it here:

Audit VM configuration requirements

To enable a service to audit VM configuration, we need to add or change the following:

  • create a Slack webhook
  • change vcconfig.json to include the Slack details
  • change the template to include Slack
  • change stack.yml to use a different container image and to subscribe to VM reconfigure events
  • change the handler function to send the VM change information to Slack

You can also speed things up and use our repository, which already contains the example:

You can find everything under examples/powercli/hwchange-slack.

For a quick start simply clone the original or the forked repository:

Original repository:

In that case I recommend copying examples/powercli/tagging into another directory to customize the files.
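Starting from the original repository, the copy step looks like this (run from the root of the cloned repository; the target directory name matches the example path mentioned above):

```shell
# Copy the tagging example as a starting point for the new function
cp -r examples/powercli/tagging examples/powercli/hwchange-slack
cd examples/powercli/hwchange-slack
```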

Quick start is using the forked repository including the Audit VM configuration example:

git clone
# or
git clone

Setup Slack

To generate a Slack webhook just visit the following link while logged into Slack:

Generate Slack webhook

Simply create and select a Slack channel you want to use for the Audit VM configuration messages.

Copy the Webhook URI and the Channel name and configure that in your vcconfig.json.
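To verify the webhook works before wiring it into the function, you can post a test message to it. A minimal sketch (the webhook URL below is a placeholder; substitute the one Slack generated for you):

```shell
# Placeholder webhook URL -- replace with your own
SLACK_URL="https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"
SLACK_CHANNEL="vcevent"

# Build the minimal JSON payload Slack incoming webhooks accept
PAYLOAD=$(printf '{"channel": "#%s", "text": "VEBA webhook test"}' "$SLACK_CHANNEL")
echo "$PAYLOAD"

# Uncomment to actually send the test message:
# curl -s -X POST -H 'Content-Type: application/json' -d "$PAYLOAD" "$SLACK_URL"
```

If the webhook is valid, Slack answers the curl request with `ok`.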

The new Audit VM configuration function


The first file we’re changing is our credential file vcconfig.json, which contains the vCenter connection details and the Slack settings.

{
    "VC" : "my-vCenter",
    "VC_USERNAME" : "user@vsphere.local",
    "VC_PASSWORD" : "userpassword",
    "SLACK_URL"   : "",
    "SLACK_CHANNEL" : "vcevent"
}
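A malformed secret only shows up later as a function error, so it’s worth validating the JSON before uploading it. A quick sketch using Python’s built-in json.tool:

```shell
# Fails with a parse error if vcconfig.json is not valid JSON
python3 -m json.tool vcconfig.json > /dev/null && echo "vcconfig.json is valid JSON"
```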


The stack.yml file contains the stack definition that will be pushed to the OpenFaaS service later on. The important lines are gateway, image, and topic.

provider:
  name: faas
  gateway: https://veba.mynetwork.local

functions:
  powercli-slack:
    lang: powercli
    handler: ./handler
    image: opvizorpa/powercli-slack:latest
    environment:
      write_debug: true
      read_debug: true
      function_debug: false
    secrets:
      - vcconfig
    annotations:
      topic: vm.reconfigured
  • Gateway: the FQDN of the VEBA appliance
  • Image: the container image to be used (best to push it to your own Docker Hub account)
  • Topic: vm.reconfigured covers all configuration changes of a VM
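Since you’ll push the image under your own Docker Hub account, the image line in stack.yml needs to change. A small sed sketch ("my-dockerhub-user" is a placeholder account name):

```shell
# Point the image at your own Docker Hub account ("my-dockerhub-user" is a placeholder)
sed -i.bak 's|image: opvizorpa/powercli-slack:latest|image: my-dockerhub-user/powercli-slack:latest|' stack.yml

# Confirm the change
grep 'image:' stack.yml
```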


This is your function written in PowerShell:

# Process function Secrets passed in
$VC_CONFIG_FILE = "/var/openfaas/secrets/vcconfig"
$VC_CONFIG = (Get-Content -Raw -Path $VC_CONFIG_FILE | ConvertFrom-Json)
if($env:function_debug -eq "true") {
    Write-Host "DEBUG: `"$VC_CONFIG`""
}

# Process payload sent from vCenter Server Event
$json = $args | ConvertFrom-Json
if($env:function_debug -eq "true") {
    Write-Host "DEBUG: `"$json`""
}

$eventObjectName = $json.objectName

# import and configure Slack
Import-Module PSSlack | Out-Null

Set-PowerCLIConfiguration -InvalidCertificateAction Ignore  -DisplayDeprecationWarnings $false -ParticipateInCeip $false -Confirm:$false | Out-Null

# Connect to vCenter Server
Write-Host "Connecting to vCenter Server ..."
Connect-VIServer -Server $($VC_CONFIG.VC) -User $($VC_CONFIG.VC_USERNAME) -Password $($VC_CONFIG.VC_PASSWORD)

# Retrieve VM changes
$Message = (Get-VM $eventObjectName | Get-VIEvent -MaxSamples 1).FullFormattedMessage

# Bold format for titles
[string]$Message = $Message -replace "Modified","*Modified*" -replace "Added","*Added*" -replace "Deleted","*Deleted*"

# Send VM changes
Write-Host "Detected change to $eventObjectName ..."

New-SlackMessageAttachment -Color $([System.Drawing.Color]::red) `
                           -Title 'VM Change detected' `
                           -Text "$Message" `
                           -Fallback 'ouch' |
    New-SlackMessage -Channel $($VC_CONFIG.SLACK_CHANNEL) `
                     -IconEmoji :fire: |
    Send-SlackMessage -Uri $($VC_CONFIG.SLACK_URL)

Write-Host "Disconnecting from vCenter Server ..."
Disconnect-VIServer * -Confirm:$false

Whenever the function is triggered, we receive the object as a payload. Based on that object, we grab the latest event. There is a chance of catching the wrong event if there are too many changes for the same VM in a very short amount of time.

Good news – this is known to the VEBA guys and a change is planned to enhance the payload with the event id.
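To illustrate what the handler works with: a trimmed-down stand-in for the event payload, showing just the objectName field the script reads (the real event VEBA sends carries many more fields):

```shell
# Simplified stand-in for the event payload; only objectName is used by the handler
EVENT='{"objectName": "testvm-01", "createdTime": "2020-01-01T10:00:00Z"}'

# Extract the VM name, analogous to what the handler does with ConvertFrom-Json
echo "$EVENT" | python3 -c "import sys, json; print(json.load(sys.stdin)['objectName'])"
# prints: testvm-01
```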

FaaS PowerCLI Template

This one is very important, as we use it to build and push the container image to our container registry. Best is to create your own account on Docker Hub and log in locally: docker login

If you don’t use the forked repository, copy the PowerCLI template to your working directory from here and change it to add Slack:


We simply need to add the following part:

# Add PSSlack
RUN pwsh -c '$ProgressPreference = "SilentlyContinue"; Install-Module PSSlack -Force'

Complete file content:

FROM vmware/powerclicore:latest

RUN mkdir -p /home/app
USER root
RUN echo "Pulling watchdog binary from Github." \
    && curl -sSL > /usr/bin/fwatchdog \
    && chmod +x /usr/bin/fwatchdog \
    && cp /usr/bin/fwatchdog /root

# Add PSSlack (single quotes keep /bin/sh from expanding $ProgressPreference; -Force skips the gallery prompt)
RUN pwsh -c '$ProgressPreference = "SilentlyContinue"; Install-Module PSSlack -Force'


USER root

# Populate example here - i.e. "cat", "sha512sum" or "node index.js"
SHELL [ "pwsh", "-command" ]
ENV fprocess="xargs pwsh ./function/script.ps1"
COPY function function
# Set to true to see request in function logs
ENV write_debug="true"


HEALTHCHECK --interval=3s CMD [ -e /tmp/.lock ] || exit 1
CMD [ "fwatchdog" ]

Build, Push and Publish the function

Now comes the easy part as we only need to run a couple of commands:

FaaS-CLI – configure OpenFaaS

# set up faas-cli for first use
faas-cli login -p VEBA_OPENFAAS_PASSWORD --gateway https://veba.mynetwork.local --tls-no-verify

# create the secret based on the local file
faas-cli secret create vcconfig --from-file=vcconfig.json --gateway https://veba.mynetwork.local --tls-no-verify

Build and Deploy

# Build the new container based on the Template
faas-cli build -f stack.yml

# Push the container to the registry
faas-cli push -f stack.yml

# Deploy the function to the OpenFaaS service
faas-cli deploy -f stack.yml --tls-no-verify

Pushing the container image to your Docker Hub repo

You can check the container image in your Docker Hub repo and verify that the Slack integration worked.

Slack integration

Make sure you receive a 200 return code when deploying the function!

Slack output

OpenFaaS is now configured to watch for VM reconfigure events to trigger the PowerCLI Script.

That’s it! You should see a similar message in your chosen Slack channel, auditing VM configuration changes.

Audit VM configuration
