VMware Event Broker on Kubernetes with Knative functions - part 2
Overview
This post is the second part of a small series about VMware Event Broker on Kubernetes with Knative functions.
If you plan to apply the following procedure, we assume that the content described in Part 1 is already deployed in your target setup.
Deploy VMware Event Broker with Knative support
Disclaimer: This section of the post was written with the help of @embano1, who provided a Knative-ready Helm chart for the vcenter-event-broker deployment (PR:392). He also provided an example of the override.yaml file we will use below.
Create a namespace
The following commands will create a namespace vmware-fn to host and run automation functions.
cat << EOF > vmware-fn-ns.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: vmware-fn
EOF

kubectl apply -f vmware-fn-ns.yaml

kubectl get ns vmware-fn
# Output
NAME        STATUS   AGE
vmware-fn   Active   10s
Of course, you can customize this target namespace and even re-use an existing one.
Create a Broker
cat << EOF > mt-broker.yaml
---
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: vmware-event-broker
  namespace: vmware-fn
EOF

kubectl apply -f mt-broker.yaml

kubectl get broker -n vmware-fn
# Output (long URL field removed)
NAME                  AGE   READY
vmware-event-broker   23s   True
Prepare event-router configuration
Create an override.yaml file with your settings:
cat << EOF > override.yaml
eventrouter:
  config:
    logLevel: debug
  vcenter:
    address: https://vcsa.local
    username: test@vsphere.local
    password: VMware1!
    insecure: true # ignore TLS certs if required
  eventProcessor: knative
  knative:
    destination:
      ref:
        apiVersion: eventing.knative.dev/v1
        kind: Broker # we use a Knative broker to send events to
        name: vmware-event-broker # name of the broker
        namespace: vmware-fn # namespace where the broker is deployed
EOF
Make sure the broker name and namespace match the ones configured in the previous section.
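Before deploying, you can also run a quick sanity check to confirm that the broker referenced in override.yaml exists and reports a Ready condition:

# Sanity check (optional): the broker referenced in override.yaml should report Ready=True
kubectl get broker vmware-event-broker -n vmware-fn \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
# Expected output: True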
Helm deployment
If not already done, register the VEBA Helm chart repository and fetch the chart metadata locally:
# register chart repo and update chart information
helm repo add vmware-veba https://projects.registry.vmware.com/chartrepo/veba
helm repo update
At this time, the Knative event processor is only supported by the Helm-based VMware Event Router deployment in chart versions >= v0.6.2. Ensure that this version is available:
helm search repo event-router --versions | grep v0.6.2
# Output
vmware-veba/event-router   v0.6.2   v0.6.0   The VMware Event Router is used to connect to v...
Let's deploy it. Here we create a dedicated namespace vmware for this purpose, but you can reuse vmware-fn or any other one.
helm install -n vmware --create-namespace veba-knative vmware-veba/event-router -f override.yaml --wait --version v0.6.2
# Output
NAME: veba-knative
LAST DEPLOYED: Wed May  5 12:55:39 2021
NAMESPACE: vmware
STATUS: deployed
REVISION: 1
TEST SUITE: None
We can now check the deployment status:
helm list --namespace vmware
# Output
NAME           NAMESPACE   REVISION   STATUS     CHART                 APP VERSION
veba-knative   vmware      1          deployed   event-router-v0.6.2   v0.6.0

kubectl get pod -n vmware
# Output
NAME                     READY   STATUS    RESTARTS   AGE
router-cdc874b59-vpckd   1/1     Running   0          36s
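If you want to confirm that the event router connected to vCenter and forwards events to the Knative broker, you can tail the logs of the router deployment (the deployment name router is inferred from the pod name shown above):

# Follow the event router logs (logLevel: debug was set in override.yaml)
kubectl logs -n vmware deploy/router -f --tail=50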
Usage
Now it's time to perform some tasks based on this event routing setup.
Deploy a sample echo function
The first (and very useful!) thing we can do is to echo the cloud events coming from the target vCenter server.
The VEBA team provides multiple echo samples (Python or PowerShell based). Here we will use the Python-based one provided by @embano1/kn-echo:
cat << EOF > kn-py-echo.yaml
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: kn-py-echo-svc
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: "1"
        autoscaling.knative.dev/minScale: "0"
    spec:
      containers:
        - image: embano1/kn-echo
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: kn-py-echo-trigger
spec:
  broker: vmware-event-broker
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: kn-py-echo-svc
EOF

kubectl apply -n vmware-fn -f kn-py-echo.yaml
# Output
service.serving.knative.dev/kn-py-echo-svc created
trigger.eventing.knative.dev/kn-py-echo-trigger created
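As a side note, the same Service and Trigger could be created directly with the kn CLI instead of YAML manifests. This is an equivalent sketch (flag names may vary slightly between kn versions):

# Equivalent kn CLI commands (sketch)
kn service create kn-py-echo-svc -n vmware-fn \
  --image embano1/kn-echo \
  --scale-min 0 --scale-max 1
kn trigger create kn-py-echo-trigger -n vmware-fn \
  --broker vmware-event-broker \
  --sink ksvc:kn-py-echo-svc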
We can check what was created:
kn service list -n vmware-fn
# Output
NAME             URL                                           LATEST                 AGE     CONDITIONS   READY   REASON
kn-py-echo-svc   http://kn-py-echo-svc.vmware-fn.example.com   kn-py-echo-svc-00001   3m34s   3 OK / 3     True

kn trigger list -n vmware-fn
# Output
NAME                 BROKER                SINK                  AGE    CONDITIONS   READY   REASON
kn-py-echo-trigger   vmware-event-broker   ksvc:kn-py-echo-svc   2m8s   5 OK / 5     True

kubectl get pod -n vmware-fn
# Output
NAME                                              READY   STATUS    RESTARTS   AGE
kn-py-echo-svc-00001-deployment-7d8fcf598-5g8f7   2/2     Running   0          63s
As we specified autoscaling.knative.dev/minScale: "0" in the service definition, the pod may or may not be running at a given time: if vCenter fires no event for a period of time, Knative Serving terminates the pod associated with the service and recreates it when a new event arrives:
kubectl get pod -n vmware-fn
# Output
NAME   READY   STATUS   RESTARTS   AGE
No resources found in vmware-fn namespace.
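Even when it is scaled down to zero, the Knative Service itself stays Ready and will be scaled up again on the next event. You can confirm this with:

# The Service stays Ready while scaled to zero; only the pods are gone
kn service describe kn-py-echo-svc -n vmware-fn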
If you want to look at incoming events, get the current running pod name and look at its logs:
kubectl logs -n vmware-fn kn-py-echo-svc-00001-deployment-7d8fcf598-ngtdd user-container -f
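Since the pod name changes every time the service scales back up, it may be more convenient to select the pod by the label that Knative Serving adds to it (a small convenience, not part of the original example):

# Follow the logs of whichever echo pod is currently running
kubectl logs -n vmware-fn -l serving.knative.dev/service=kn-py-echo-svc \
  -c user-container -f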
Deploy vm-creation-attr function
I also rewrote the vm-creation-attr function, originally written for OpenFaaS, to make it Knative compliant.
As a reminder, I wrote a(nother long) post a few months back about this function. Its main goal is to populate custom attribute values on VM objects with the user who created the VM, the creation date and the last powered-on date.
The Knative function is hosted on GitHub: lrivallain/kn-vm-creation-attr-fn. You can get the function.yaml file to start the deployment:
curl -LO https://raw.githubusercontent.com/lrivallain/kn-vm-creation-attr-fn/main/function.yaml
Configuration
Edit the content of function.yaml to configure the following settings:
# In `ConfigMap` section
VC_SERVER: vcsa.local
VC_USER: test@vsphere.local
VC_SSLVERIFY: True
VC_ATTR_OWNER: event-owner
VC_ATTR_CREATION_DATE: event-creation_date
VC_ATTR_LAST_POWEREDON: event-last_poweredon

# In `Secret` section
VC_PASSWORD: Vk13YXJlMSEK
The VC_PASSWORD value is base64 encoded: you can generate it by using a command like:
echo -n "YourP@ssw0rd" | base64
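You can also decode the value to double-check that it matches the vCenter password configured earlier:

# Decode to verify: the sample value above decodes to "VMware1!" (plus a trailing newline)
echo "Vk13YXJlMSEK" | base64 -d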
We assume that you use the previously mentioned vmware-event-broker broker name, but you can change it by using:
sed -i s/vmware-event-broker/NAMEOFYOURBROKER/ function.yaml
Deploy
kubectl apply -n vmware-fn -f function.yaml
Then you can check the result with the following commands:
kn service list -n vmware-fn

kn trigger list -n vmware-fn

kubectl get pod -n vmware-fn
You will notice that there are multiple kn-vm-creation-attr-fn-trigger-xxxx triggers deployed. This is due to the filtering applied to incoming events, so that only those matching specific actions reach the function (see the illustrative example below).
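For illustration, here is what one of these filtered triggers could look like. The trigger name and filter values below are an illustrative sketch; the actual definitions are in function.yaml, and you can inspect them with kn trigger describe:

# Illustrative example of a filtered trigger (the real ones are defined in function.yaml)
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: kn-vm-creation-attr-fn-trigger-poweredon
spec:
  broker: vmware-event-broker
  filter:
    attributes:
      type: com.vmware.event.router/event
      subject: DrsVmPoweredOnEvent
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: kn-vm-creation-attr-fn-service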
Test
By looking at pod logs, you can see the actions resulting from the incoming events:
172.17.0.1 - - [04/May/2021 14:08:00] "POST / HTTP/1.1" 204 -
2021-05-04 14:08:00,230 INFO werkzeug Thread-3 : 172.17.0.1 - - [04/May/2021 14:08:00] "POST / HTTP/1.1" 204 -
2021-05-04 14:09:18,462 DEBUG handler Thread-4 : "***cloud event*** {"attributes": {"specversion": "1.0", "id": "42516969-218a-406f-9ccc-db387befc4bf",
"source": "https://vcsa.local/sdk", "type": "com.vmware.event.router/event", "datacontenttype": "application/json", "subject": "DrsVmPoweredOnEvent", "time": "2021-05-04T07:33:33.773581268Z", "knativearrivaltime": "2021-05-04T07:33:33.772937393Z"}, "data": {"Key": 992270, "ChainId": 992267, "CreatedTime": "2021-05-04T07:33:32.759Z", "UserName": "VSPHERE.LOCAL\\test-user", "Datacenter": {"Name": "Datacenter", "Datacenter": {"Type": "Datacenter", "Value": "datacenter-21"}}, "ComputeResource": {"Name": "Cluster01", "ComputeResource": {"Type": "ClusterComputeResource", "Value": "domain-c84"}}, "Host": {"Name": "esxi1.local", "Host": {"Type": "HostSystem", "Value": "host-34"}}, "Vm": {"Name": "TestVM", "Vm": {"Type": "VirtualMachine", "Value": "vm-596"}}, "Ds": null, "Net": null, "Dvs": null, "FullFormattedMessage": "DRS powered On TestVM on esxi1.local in Datacenter", "ChangeTag": "", "Template": false}
}
2021-05-04 14:09:18,464 DEBUG vcenter Thread-4 : Initializing vCenter connection...
2021-05-04 14:09:18,992 INFO vcenter Thread-4 : Connected to vCenter 10.6.29.7
2021-05-04 14:09:19,483 INFO handler Thread-4 : Apply attribute > event-last_poweredon
2021-05-04 14:09:19,774 DEBUG handler Thread-4 : End of event
172.17.0.1 - - [04/May/2021 14:09:19] "POST / HTTP/1.1" 204 -
2021-05-04 14:09:19,777 INFO werkzeug Thread-4 : 172.17.0.1 - - [04/May/2021 14:09:19] "POST / HTTP/1.1" 204 -
Is it serverless?
With the autoscaling.knative.dev/minScale: "0" annotation (as set by default in the above functions), have a look at the pod list to see the effect of an incoming event:
kubectl get pods --watch -n vmware-fn
# Output
kn-vm-creation-attr-fn-service-00002-deployment-848865fdd-xgvb9   0/2   Pending             0   0s
kn-vm-creation-attr-fn-service-00002-deployment-848865fdd-xgvb9   0/2   ContainerCreating   0   1s
kn-vm-creation-attr-fn-service-00002-deployment-848865fdd-xgvb9   1/2   Running             0   5s
kn-vm-creation-attr-fn-service-00002-deployment-848865fdd-xgvb9   1/2   Running             0   6s
kn-vm-creation-attr-fn-service-00002-deployment-848865fdd-xgvb9   2/2   Running             0   7s
# And after about 60s without events:
kn-vm-creation-attr-fn-service-00002-deployment-848865fdd-xgvb9   2/2   Terminating         0   68s
kn-vm-creation-attr-fn-service-00002-deployment-848865fdd-xgvb9   1/2   Terminating         0   71s
kn-vm-creation-attr-fn-service-00002-deployment-848865fdd-xgvb9   0/2   Terminating         0   2m8s
As you can see, the function acts as a serverless one: when needed, the appropriate number of pods is spawned, and when there is no incoming (and matching) event, no pods are kept on the cluster.
You can easily change the values of autoscaling.knative.dev/maxScale: "1" and autoscaling.knative.dev/minScale: "0" according to your needs. For example, with minScale: "1", at least one pod will always remain listening for events: this can improve the time needed to execute an action, since there is no pod to spawn after an inactivity period.
So, with Knative as the serving platform, our functions act like serverless ones: the management component is in charge of scaling the components running our application code (up, and down to zero) according to incoming requests. This brings all the benefits of serverless applications and, of course, its drawbacks.
Credits
Title photo by James Harrison on Unsplash