VMware Event Broker (aka VEBA) on Kubernetes – First steps

In the following post, we will discover how to deploy the VMware Event Broker services (VEBA) within an existing Kubernetes (K8S) cluster and use them to add or edit custom attribute information on virtual machines.

The goal of the VEBA deployment is to listen for events in the VMware vCenter infrastructure and to run specific tasks when filtered events occur: this is the event-driven automation concept.

To be accurate, VEBA stands for “VMware Event Broker Appliance”: a Photon OS based virtual machine, available in OVA format, with a small embedded K8S cluster to support the “VMware Event Broker” services. In the following post, I re-use an existing K8S cluster to support the “VMware Event Broker” services, but I will keep using the VEBA acronym for simplicity, even though I do not use the appliance deployment method.

If you need more details about VEB(A), the official website is well documented: vmweventbroker.io, and many other use cases are listed there: notification, automation, integration, remediation, audit, analytics…

VMware Event Broker components

VEBA Architecture

VMware Event Router

The VMware Event Router is the VEBA component watching for new events generated by an Event Stream Source and routing them to the Event Stream Processors. Along the way, the VER translates the events to the CloudEvents format: a specification for describing event data in a common way.
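
To give a rough idea of what the router hands over to a processor, here is a minimal, illustrative sketch of a CloudEvent wrapping a vCenter event; the field values are assumptions for illustration, not an exact capture from a running router:

# sample_cloudevent.py -- illustrative only: field values are assumed, not captured
sample_cloudevent = {
    "specversion": "1.0",                          # CloudEvents specification version
    "id": "08179137-b8e0-4973-b05f-8f212bf5003b",  # unique event id
    "source": "https://vcsa-fqdn/sdk",             # the event stream source (vCenter)
    "type": "com.vmware.event.router/event",
    "subject": "VmPoweredOnEvent",                 # the vCenter event name
    "datacontenttype": "application/json",
    "data": {                                      # the original vCenter event details
        "UserName": "user@vsphere.local",
        "CreatedTime": "2020-11-01T14:41:26Z",
        "Vm": {"Name": "my-vm", "Vm": {"Type": "VirtualMachine", "Value": "vm-1234"}},
    },
}

# The "subject" tells which vCenter event occurred, "data" carries its details:
print(sample_cloudevent["subject"], sample_cloudevent["data"]["Vm"]["Name"])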

Event Stream Source

Currently, VEBA only supports one event stream source: the vCenter Server.

As announced at VMworld 2020 (VEBA and the Power of Event-Driven Automation – Reloaded [HCP1358]), a Cloud Director event stream source is in preparation.

Event Stream Processors

The Event Stream Processor is in charge of handing the events propagated by the VMware Event Router over to the appropriate automation tasks configured to run for each specific type of event.

At the time I write this post, two processors are available:

  • Amazon EventBridge: to run your automation tasks on the AWS serverless event services.
  • OpenFaaS®: an open-source project to run Function-as-a-Service (FaaS) automation tasks on top of a K8S deployment.

In my setup, I use the OpenFaaS processor.

Pre-requisites

Kubernetes

To proceed, we assume that an existing K8S cluster is already deployed.

If you need a really light and simple lab setup, I highly recommend using k3s to deploy your own K8S cluster: K3S: Quick-Start Guide.

In my own lab, I use a K8S cluster deployed by Rancher with the vSphere node driver (but this does not change anything for the current use case).

kubectl cli

kubectl is the standard CLI tool to operate K8S resources.

Once installed, you need to point it at your K8S cluster configuration file. There are multiple methods to do so, so I will simply refer to the official documentation: Organizing Cluster Access Using kubeconfig Files.

You can check the setup by running:

# Display the current configuration
kubectl config view

# Get client and server version
kubectl version --short

The last command should output something close to this:

Client Version: v1.19.3
Server Version: v1.19.2

faas-cli

The faas-cli requirement comes from the usage of the OpenFaaS processor in the following setup.

Here is one installation method:

curl -sSL https://cli.openfaas.com | sudo sh

You can also use an alternative installation method described in the faas-cli project GitHub repository.

VEBA deployment on Kubernetes

The VEBA deployment on K8S is quite simple and does not require a lot of configuration.

Get the VEBA code and dependencies

git clone https://github.com/vmware-samples/vcenter-event-broker-appliance
cd vcenter-event-broker-appliance/vmware-event-router/hack
git clone https://github.com/openfaas/faas-netes -b "0.9.2"  --single-branch

Deploy VEBA to your Kubernetes cluster

The VEBA team provides a setup script to handle the deployment:

bash create_k8s_config.sh

You will be prompted to provide a few settings:

  • vCenter Server FQDN
  • vCenter Server Username and password
  • Deploy OpenFaaS: [y n]: In my setup, yes, I want to deploy an OpenFaaS instance
  • OpenFaaS Admin Password: The password to configure for the admin account of the OpenFaaS instance

Then you are prompted to review the settings and to confirm in order to proceed with the VEBA deployment.

Check the deployment

A few minutes later, you should have a new VEBA deployment:

kubectl get pods -n vmware
# You should get a running pod
NAME                                   READY   STATUS    RESTARTS   AGE
vmware-event-router-859b97c894-bxx94   1/1     Running   0          25m

This pod, in the vmware namespace, is the VMware Event Router, as explained previously in this post.

If you requested the OpenFaaS deployment, you now have a set of pods in the openfaas namespace:

kubectl get pods -n openfaas
# You should get a new set of running pods
NAME                                 READY   STATUS    RESTARTS   AGE
alertmanager-66556574f7-g225s        1/1     Running   0          27m
basic-auth-plugin-86995c9c5f-2zs4r   1/1     Running   0          27m
faas-idler-7dbbcb48bb-tjhrg          1/1     Running   3          27m
gateway-5c4c48545d-hdshr             2/2     Running   2          27m
nats-6ff956f47c-hlqwx                1/1     Running   0          27m
prometheus-857c769b7-mcsmt           1/1     Running   0          27m
queue-worker-7b5756c9c4-wv9ml        1/1     Running   3          27m

It may take a couple of minutes for all pods to reach a Running state. Be patient.

Login to your OpenFaaS cli and UI

Login to faas-cli

We now need to get the OpenFaaS URI to use the faas-cli client. The following one-liner should provide you the appropriate information:

echo "export OPENFAAS_URL=http://"$(kubectl -n openfaas describe pods $(kubectl -n openfaas get pods | grep "gateway-" | awk '{print $1}') | grep "^Node:" | awk -F "/" '{print $2}')":31112"

The output of the above command gives you the command to run to set up the OPENFAAS_URL environment variable. This variable is then used as the endpoint by the faas-cli tool.

export OPENFAAS_URL=http://<node ip>:31112

And to login:

echo '**YourPassword**' | faas-cli login --password-stdin

A warning will recommend using an HTTPS endpoint instead of the HTTP one: let’s ignore it for the moment.

Finally, you should get a message like “credentials saved for admin http://<node ip>:31112”, meaning that you successfully configured your faas-cli client.

Login to the UI

Use the same URL to log in with the admin account to the web UI and you should get something like this:

Empty OpenFaaS UI

First function

Time to describe our first function use case:

We have a lab vCenter with multiple users, multiple projects, PoCs, etc., and it is a bit hard to know which VM belongs to which user and whether the project is still active.

A way I found to handle this is to set Custom Attributes on the VM objects in vCenter and to populate their values when specific events occur:

  • event-creation_date: To store the creation date
  • event-last_poweredon: To store the last powered on date
  • event-owner: To store the user that created the VM

Custom attributes created for this function

Function files/folders structure

A VEBA OpenFaaS function is made of the following items:

  • handler/: this folder will store the content of our function code (the folder name can be customized)
  • stack.yml: This file will describe our function
  • A config file, passed as a K8S secret to our function, used to store credentials and other environment-specific variables. In my example, it is a YAML file: vcconfig.yaml.

To simplify this post, I invite you to clone this sample repository:

git clone https://github.com/lrivallain/veba-sample-custom-attribute.git
cd veba-sample-custom-attribute/

stack.yml file

This description file is used to create the function to run on our function processor.

provider:
  name: openfaas
  gateway: http://<node ip>:31112
functions:
  vm-creation-attr:
    lang: python3
    handler: ./handler
    image: lrivallain/veba-vc-vm-creation-attr
    environment:
      write_debug: true
      read_debug: true
    secrets:
      - vcconfig
    annotations:
      topic: VmCreatedEvent, VmClonedEvent, VmRegisteredEvent, DrsVmPoweredOnEvent, VmPoweredOnEvent, VmPoweringOnWithCustomizedDVPortEvent

As you can see, we specify here:

  • OpenFaaS URI (the one in OPENFAAS_URL)
  • A language type: python3
  • The function folder: ./handler
  • A base image to run the function: lrivallain/veba-vc-vm-creation-attr
    • This image contains the appropriate dependencies to run our function
  • The configuration as a K8S secret name.
  • And in the annotations: the topic(s) this function subscribes to.
    • Depending on your vCenter version, you can find an event list in William Lam’s vcenter-event-mapping repository.

handler/ folder

The handler folder is made of:

  • An index.py file, used to handle the function instantiation: keep it as provided to start; of course, you can inspect its content to analyse the (simple) behaviour.
  • A function/ subfolder:
    • The handler.py file contains the code run each time the function is triggered (a minimal sketch is shown after this list)
    • The requirements.txt file contains some function-specific dependencies.
  • The Dockerfile used to build the base image: lrivallain/veba-vc-vm-creation-attr
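
To make the behaviour more concrete, here is a minimal sketch of what such a handler.py could look like. It is not the exact content of the repository: it only parses the incoming CloudEvent, reads the vcconfig secret mounted by OpenFaaS, and stamps the owner attribute on the VM with pyVmomi (the logic selecting which attribute to set per event type is omitted):

# handler/function/handler.py -- minimal illustrative sketch, not the repository code
import json
import ssl
import yaml
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def handle(req):
    """Called by OpenFaaS with the CloudEvent body forwarded by the Event Router."""
    event = json.loads(req)
    data = event.get("data", {})
    vm_moref = data["Vm"]["Vm"]["Value"]          # VM managed object reference (e.g. vm-1234)
    user = data.get("UserName", "unknown")

    # OpenFaaS mounts the secret created from vcconfig.yaml under /var/openfaas/secrets/
    with open("/var/openfaas/secrets/vcconfig") as f:
        config = yaml.safe_load(f)

    # Connect to vCenter (SSL verification disabled, matching ssl_verify: false)
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host=config["vcenter"]["server"],
                      user=config["vcenter"]["user"],
                      pwd=config["vcenter"]["password"],
                      sslContext=ctx)

    # Build the VM object from its moref and stamp the custom attribute
    vm = vim.VirtualMachine(vm_moref, si._stub)
    vm.setCustomValue(key=config["attributes"]["owner"], value=user)
    Disconnect(si)
    return f"Set {config['attributes']['owner']}={user} on {vm_moref}"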

vcconfig.yaml

This is a quite simple configuration file that you need to rename to the expected name:

cp vcconfig.example.yaml vcconfig.yaml
vcenter:
  server: vcsa-fqdn
  user: service-account@vsphere.local
  password: "**********"
  ssl_verify: false

attributes:
  owner: event-owner
  creation_date: event-creation_date
  last_poweredon: event-last_poweredon

You need to set your VCSA instance, the credentials and the name of the custom attribute to use for each purpose.

Custom attributes creation

The script currently does not handle the custom attribute creation, so you need to create the attributes before using the function:

Custom attributes
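
If you prefer to script this step instead of using the vSphere Client, here is a small pyVmomi sketch creating the three VM-scoped attributes; this is an assumed approach (adapt the credentials), not something provided by the sample repository:

# create_custom_attributes.py -- illustrative sketch using pyVmomi
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa-fqdn", user="service-account@vsphere.local",
                  pwd="**********", sslContext=ctx)

cfm = si.content.customFieldsManager
existing = {field.name for field in cfm.field}

# Create the three VM-scoped custom attributes expected by the function
for name in ("event-creation_date", "event-last_poweredon", "event-owner"):
    if name not in existing:
        cfm.AddCustomFieldDef(name=name, moType=vim.VirtualMachine)

Disconnect(si)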

Deploy our function

We now have the function code, the configuration, and VEBA deployed over K8S. Let’s deploy our function.

First step is to create the “secret” to store our local configuration:

faas-cli secret create vcconfig --from-file=vcconfig.yaml

To confirm that it worked, we can look up the vcconfig secret in a new namespace named openfaas-fn (for OpenFaaS Functions):

kubectl get secrets -n openfaas-fn vcconfig
# Output:
NAME       TYPE     DATA   AGE
vcconfig   Opaque   1      2m53s

Now we need to pull the OpenFaaS language template for the lang specified in our stack.yml file:

faas template store pull python3

In fact, this command will pull all (12) the language templates from the OpenFaaS store, not only the one you are looking for.

We are ready to deploy our Function-as-a-Service:

faas-cli deploy -f stack.yml
# Output
Deploying: vm-creation-attr.
Deployed. 202 Accepted.
URL: http://10.6.30.114:31112/function/vm-creation-attr.openfaas-fn

We can check that a new pod is now part of the openfaas-fn namespace:

$ kubectl get pods -n openfaas-fn
# Output:
NAME                                READY   STATUS    RESTARTS   AGE
vm-creation-attr-65d9f75464-lf2sk   1/1     Running   0          94s

And our function is listed by faas-cli:

faas-cli list
# Output:
Function                        Invocations     Replicas
vm-creation-attr                0               1

The same in the UI (after a refresh):

First Function deployed in the OpenFaaS UI

Invoke function

Invocation is now easy: just create or power on a VM in your vCenter and the event will be caught by VEBA and forwarded to your OpenFaaS function; the code will run, inspecting the incoming CloudEvents data and doing the expected tasks.
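
If you want to exercise the invocation path without waiting for a real vCenter event, you could also post a sample payload directly to the function endpoint. Here is a quick sketch; the payload fields are illustrative assumptions, and the function will still try to reach your vCenter:

# invoke_test.py -- illustrative sketch: post a fake CloudEvent body to the function
import json
import requests

payload = {
    "subject": "VmPoweredOnEvent",
    "data": {
        "UserName": "user@vsphere.local",
        "CreatedTime": "2020-11-01T14:41:26Z",
        "Vm": {"Name": "my-vm", "Vm": {"Type": "VirtualMachine", "Value": "vm-1234"}},
    },
}

# Use the function URL returned by 'faas-cli deploy' (replace <node ip>)
url = "http://<node ip>:31112/function/vm-creation-attr"
r = requests.post(url, data=json.dumps(payload))
print(r.status_code, r.text)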

Follow function invocation

There are two ways to follow the function invocation(s).

By using kubectl logs and specifying the openfaas-fn namespace, the pod name (from the above commands), and the --tail and/or --follow args:

kubectl logs -n openfaas-fn vm-creation-attr-65d9f75464-lf2sk --tail 100 --follow
# Output:
2020/11/01 14:41:26 Version: 0.18.1     SHA: b46be5a4d9d9d55da9c4b1e50d86346e0afccf2d
2020/11/01 14:41:26 Timeouts: read: 5s, write: 5s hard: 0s.
2020/11/01 14:41:26 Listening on port: 8080
2020/11/01 14:41:26 Writing lock-file to: /tmp/.lock
2020/11/01 14:41:26 Metrics listening on port: 8081

Or with faas-cli command:

faas-cli logs vm-creation-attr --tail 100
# Output:
2020-11-01T14:41:26Z 2020/11/01 14:41:26 Version: 0.18.1        SHA: b46be5a4d9d9d55da9c4b1e50d86346e0afccf2d
2020-11-01T14:41:26Z 2020/11/01 14:41:26 Timeouts: read: 5s, write: 5s hard: 0s.
2020-11-01T14:41:26Z 2020/11/01 14:41:26 Listening on port: 8080
2020-11-01T14:41:26Z 2020/11/01 14:41:26 Writing lock-file to: /tmp/.lock
2020-11-01T14:41:26Z 2020/11/01 14:41:26 Metrics listening on port: 8081

Both outputs are very similar, so you can use the one that is more convenient for you.

VM creation

In the case of a VM creation, we have the following output:

Logs for the VM creation event

And the attributes are populated according to the expected behavior:

Attributes for the VM creation event

VM powered-On

If we power on a VM:

Logs for the VM poweredOn event

And the attributes are populated according to the expected behavior:

Attributes for the VM poweredOn event

Conclusion

We successfully covered the deployment of our first Event-Driven Function-as-a-Service use-case, greatly helped by the VMware Event Broker services.

There is a multitude of events you can subscribe to in your VMware virtual datacenter, allowing an endless list of use cases: it is time to unleash your creativity!