At the end of each Kubernetes training session, we notice that trainees see the power of k8s in terms of automation and scalability, but they also fear the complexity of the tool. This feeling is even stronger when the trainees are developers rather than system administrators. Could Knative be an answer to this frustration?
Targeted audience: k8s users, developers
By Jacques Roussel, Cloud Consultant @Objectif Libre
Definition
Knative is a serverless framework. It is a tool that executes code, no matter which language is used. You do not even have to worry about the server running the code, or about how network access should be set up. You just send a piece of code to the platform, and that's it. Furthermore, the platform manages auto-scaling for you.
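To give a first idea of what this looks like in practice, here is a minimal sketch of a Knative Service manifest, using the v1alpha1 API deployed later in this article. The image name is one of the public Knative sample images; everything else (name, namespace) is an arbitrary placeholder:

```yaml
# hello.yml - minimal sketch of a Knative Service (v1alpha1 API)
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: hello
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            # Sample hello-world image from the Knative documentation
            image: gcr.io/knative-samples/helloworld-go
```

Everything else (routing, revisions, scaling to zero and back) is handled by the platform.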
Setup your environment
Prerequisites
For our test environment, you need a VM running Ubuntu 18.04 with at least 8 vCPUs and 16 GB of memory. Be careful with the CPU and RAM: the stack is quite big. On this VM we will deploy k8s, Calico, Helm, Istio and Knative.
K8s installation
In order to install k8s, you can create this script:
#!/bin/bash
#
# install.sh
#
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo swapoff -a
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
USER_UID=$(id -u)
USER_GID=$(id -g)
sudo chown ${USER_UID}:${USER_GID} $HOME/.kube/config
MASTER_NAME=$(kubectl get no -o=jsonpath='{.items[0].metadata.name}')
kubectl taint node ${MASTER_NAME} node-role.kubernetes.io/master:NoSchedule-
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
And run it on the VM:
$ bash install.sh
At this point, you should have a functional single-node k8s cluster. Check that the cluster is ready before continuing:
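If you prefer not to poll by hand, kubectl can block until the node reports Ready. This is only a sketch; adjust the timeout to your environment:

```shell
# Block until every node in the cluster is Ready, or give up after 5 minutes.
kubectl wait --for=condition=Ready node --all --timeout=300s
```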
$ kubectl get no
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   15m   v1.14.1
Install helm
$ curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
$ tar -xvzf helm-v2.13.1-linux-amd64.tar.gz
$ sudo mv linux-amd64/helm /usr/local/bin/
Next we create a file for the tiller service account:
# tiller.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
We deploy it:
$ kubectl apply -f tiller.yml
$ helm init --service-account tiller
Then we wait a few minutes and check that everything is OK:
$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Install istio
Istio is the service mesh used by Knative. It needs to be installed before Knative.
$ curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.1.5 sh -
$ cd istio-1.1.5/
$ helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
Then we wait until all the CRDs are up. The following command should return 53:
$ kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l
53
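Rather than re-running the count by hand, a small loop can poll until the expected number is reached. This is a sketch; the count of 53 is specific to Istio 1.1.5:

```shell
# Poll every 5 seconds until all 53 Istio/cert-manager CRDs are registered.
expected=53
until [ "$(kubectl get crds | grep -c 'istio.io\|certmanager.k8s.io')" -eq "$expected" ]; do
  echo "waiting for CRDs..."
  sleep 5
done
```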
Then we can finish the istio installation:
$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system
Before continuing, we need to wait and make sure that all Istio components are running:
$ kubectl get pods -n istio-system
NAME                                      READY   STATUS      RESTARTS   AGE
istio-citadel-86b5b9fb58-425gf            1/1     Running     0          2m46s
istio-galley-5b98bd6859-qnx5z             1/1     Running     0          2m46s
istio-ingressgateway-65576f8745-hjghv     1/1     Running     0          2m46s
istio-init-crd-10-6t6zn                   0/1     Completed   0          6m31s
istio-init-crd-11-vdh4n                   0/1     Completed   0          6m31s
istio-pilot-78fff96ddf-wxsvm              2/2     Running     0          2m46s
istio-policy-5fb895b86d-b2m9p             2/2     Running     1          2m46s
istio-sidecar-injector-855966f687-pjzq6   1/1     Running     0          2m46s
istio-telemetry-75bc675d7d-4cd29          2/2     Running     2          2m46s
prometheus-d8d46c5b5-mfglm                1/1     Running     0          2m46s
Install knative
We are almost done. Let’s deploy Knative:
$ kubectl label namespace default istio-injection=enabled
$ kubectl apply \
    --filename https://github.com/knative/serving/releases/download/v0.5.0/serving.yaml \
    --filename https://github.com/knative/build/releases/download/v0.5.0/build.yaml \
    --filename https://github.com/knative/eventing/releases/download/v0.5.0/release.yaml \
    --filename https://github.com/knative/eventing-sources/releases/download/v0.5.0/eventing-sources.yaml \
    --filename https://github.com/knative/serving/releases/download/v0.5.0/monitoring.yaml \
    --filename https://raw.githubusercontent.com/knative/serving/v0.5.0/third_party/config/build/clusterrole.yaml
$ kubectl apply \
    --filename https://github.com/knative/serving/releases/download/v0.5.0/serving.yaml \
    --filename https://github.com/knative/build/releases/download/v0.5.0/build.yaml \
    --filename https://github.com/knative/eventing/releases/download/v0.5.0/release.yaml \
    --filename https://github.com/knative/eventing-sources/releases/download/v0.5.0/eventing-sources.yaml \
    --filename https://github.com/knative/serving/releases/download/v0.5.0/monitoring.yaml \
    --filename https://raw.githubusercontent.com/knative/serving/v0.5.0/third_party/config/build/clusterrole.yaml
We run the apply command twice because of these known issues: 968 and 1036.
Then we wait and make sure that everything is running:
$ kubectl get pods --namespace knative-serving
$ kubectl get pods --namespace knative-build
$ kubectl get pods --namespace knative-eventing
$ kubectl get pods --namespace knative-sources
$ kubectl get pods --namespace knative-monitoring
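The five checks above can also be expressed as a single loop that blocks until the pods in each namespace are Ready. A sketch, assuming all five namespaces were created by the manifests above:

```shell
# Wait (up to 5 minutes per namespace) for all Knative pods to become Ready.
for ns in knative-serving knative-build knative-eventing knative-sources knative-monitoring; do
  kubectl wait --for=condition=Ready pods --all --namespace "$ns" --timeout=300s
done
```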
Now we are ready to use Knative.
Usage
The idea here is to show how to automatically deploy an app from a Git repository. For this purpose, we need a registry; here we use the official Docker Hub.
Setup the build
If you need credentials to access the registry, create a push-account.yml file:
# push-account.yml
apiVersion: v1
kind: Secret
metadata:
  name: basic-user-pass
  annotations:
    build.knative.dev/docker-0: https://index.docker.io/v1/
type: kubernetes.io/basic-auth
data:
  username: BASE_64_REDACTED
  password: BASE_64_REDACTED
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot
secrets:
  - name: basic-user-pass
DO NOT FORGET that the username and password must be base64 encoded.
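You can produce the encoded values like this. The -n flag matters: without it, the encoded value embeds an invisible trailing newline and the registry login fails. 'myuser' and 'mypassword' are placeholders:

```shell
# Encode the credentials without a trailing newline (-n is important).
echo -n 'myuser' | base64      # bXl1c2Vy
echo -n 'mypassword' | base64  # bXlwYXNzd29yZA==
```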
Then we push our credentials to k8s and add the build template that we will use to deploy our app:
$ kubectl apply -f push-account.yml
$ kubectl apply --filename https://raw.githubusercontent.com/knative/build-templates/master/kaniko/kaniko.yaml
The kaniko template expects a Dockerfile at the root of the repository.
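For reference, a Dockerfile for a small Python app could look like the following sketch. The file name app.py is a hypothetical placeholder, not the actual content of the demo repository:

```dockerfile
# Hypothetical minimal Dockerfile for a Python hello-world app.
FROM python:3.7-slim
WORKDIR /app
COPY app.py .
# The app is expected to listen on the port Knative injects
# via the PORT environment variable.
CMD ["python", "app.py"]
```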
Build and deploy an app
# deploy.yml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: app-from-source
  namespace: default
spec:
  runLatest:
    configuration:
      build:
        apiVersion: build.knative.dev/v1alpha1
        kind: Build
        spec:
          serviceAccountName: build-bot
          source:
            git:
              url: https://github.com/ObjectifLibre/knative-article.git
              revision: master
          template:
            name: kaniko
            arguments:
              - name: IMAGE
                value: docker.io/jarou/knative-python-helloworld:latest
          timeout: 10m
      revisionTemplate:
        metadata:
          annotations:
            autoscaling.knative.dev/minScale: "1"
            autoscaling.knative.dev/target: "10"
        spec:
          container:
            image: docker.io/jarou/knative-python-helloworld:latest
            imagePullPolicy: Always
DO NOT FORGET to replace the image URL (docker.io/jarou/knative-python-helloworld) with your own, otherwise the push will fail.
We specified two annotations:
- autoscaling.knative.dev/minScale: "1" tells Knative to always keep at least one instance running
- autoscaling.knative.dev/target: "10" lowers the number of concurrent requests each instance should handle before scaling out (100 by default). With the 50 concurrent connections we generate later, we can therefore expect roughly 50 / 10 = 5 instances.
We can deploy our app:
$ kubectl apply -f deploy.yml
Our application is deploying:
$ kubectl get po
NAME                                                READY   STATUS      RESTARTS   AGE
app-from-source-bms72-pod-ea603c                    0/1     Completed   0          111s
app-from-source-qhpp5-deployment-6f5b85b68f-mppxt   3/3     Running     0          15s
To check that it works, we build the ingress address from the host IP of the gateway pod and the NodePort mapped to port 80, then curl it with the right Host header:
$ INGRESSGATEWAY=istio-ingressgateway
$ INGRESSGATEWAY_LABEL=istio
$ IP_ADDRESS=$(kubectl get po --selector $INGRESSGATEWAY_LABEL=ingressgateway --namespace istio-system --output 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc $INGRESSGATEWAY --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
$ curl -H "Host: app-from-source.default.example.com" http://${IP_ADDRESS}
To check that auto-scaling works, we can run the following commands:
$ curl -LO https://storage.googleapis.com/jblabs/dist/hey_linux_v0.1.2
$ chmod +x hey_linux_v0.1.2
$ kubectl get po
NAME                                                READY   STATUS      RESTARTS   AGE
app-from-source-bms72-pod-ea603c                    0/1     Completed   0          20m
app-from-source-qhpp5-deployment-6f5b85b68f-mppxt   3/3     Running     0          19m
$ ./hey_linux_v0.1.2 -z 30s -c 50 -host app-from-source.default.example.com http://${IP_ADDRESS}
$ kubectl get po
NAME                                                READY   STATUS      RESTARTS   AGE
app-from-source-bms72-pod-ea603c                    0/1     Completed   0          20m
app-from-source-qhpp5-deployment-6f5b85b68f-7mk6f   3/3     Running     0          28s
app-from-source-qhpp5-deployment-6f5b85b68f-mppxt   3/3     Running     0          19m
app-from-source-qhpp5-deployment-6f5b85b68f-qhvgx   3/3     Running     0          30s
app-from-source-qhpp5-deployment-6f5b85b68f-rw9d6   3/3     Running     0          30s
app-from-source-qhpp5-deployment-6f5b85b68f-vpxdv   3/3     Running     0          28s
Conclusion
We deployed Knative and used it with a demo application. Admittedly, we just deployed a Python app that does nothing special. But if we look closer at our Git repository, it contains only two files: our code and a Dockerfile. However, in the end we have a web application deployed and scaling automatically, without doing anything special ourselves. The goal is met.
So if you are a developer and you want to try a serverless approach, Knative is a product worth following closely.