Serverless

Companies would rather focus on developing new features than spend time managing infrastructure; it is in this context that Serverless appeared. In this blog post, we will discuss Serverless and focus on OpenFaaS on Kubernetes.

Target audience: k8s users and developers

By Florian Davasse, DevOps Consultant Trainee @ Objectif-Libre

Introduction to OpenFaaS: Serverless made accessible

Definitions

Serverless

Serverless addresses the need to focus on what matters most: the application. All operational management (scaling, monitoring, updates…) is hidden; you only have to focus on your “delivery”.
That said, while Serverless seems to solve every problem, this abstraction comes at a cost that can be difficult to anticipate. It is therefore interesting to deploy your own FaaS platform, in order to get an idea of the resources consumed or simply to keep control over your data.

FaaS

FaaS (Function-as-a-Service) is a cloud service model tied to Serverless computing. Developers use it to deploy individual functions: a function starts in milliseconds, processes the incoming request(s), and stops as soon as it is no longer needed.

OpenFaaS

Prerequisites

For this tutorial, we’re going to need:

  • A Kubernetes cluster already installed
  • Helm deployed and configured on the cluster
  • The Helm client on your machine
  • kubectl on your computer to manage the cluster

Installation

OpenFaaS can be installed in several ways: you can apply all the YAML manifests manually, or simply use Helm, which lets you install and configure OpenFaaS easily. Another advantage of using Helm is the automatic installation and configuration of Prometheus for OpenFaaS.

Installation with Helm

We’re going to need two namespaces: one for the core services of OpenFaaS and one to host the functions we’re going to deploy. For the sake of simplicity, we will keep the values recommended by OpenFaaS: “openfaas” for core services and “openfaas-fn” for functions.

$ kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
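
You can quickly check that both namespaces were created:
$ kubectl get namespaces | grep openfaas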

To connect to the gateway, we’re going to need a secret.
# generate a random password
$ PASSWORD=$(head -c 12 /dev/urandom | shasum | cut -d' ' -f1)
# create the secret
$ kubectl -n openfaas create secret generic basic-auth \
--from-literal=basic-auth-user=admin \
--from-literal=basic-auth-password="$PASSWORD"
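
To confirm that the secret has been created:
$ kubectl -n openfaas get secret basic-auth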

We will now add the OpenFaaS Helm repo:
$ helm repo add openfaas https://openfaas.github.io/faas-netes/

$ helm repo update && helm upgrade openfaas --install openfaas/openfaas \
--namespace openfaas \
--set basic_auth=true \
--set functionNamespace=openfaas-fn
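
Once the chart is deployed, you can check that the OpenFaaS core services are starting up:
$ kubectl get pods -n openfaas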

Last step: install faas-cli on your machine so you can manage OpenFaaS remotely and build your functions locally.
$ curl -sSL https://cli.openfaas.com | sudo sh
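
You can check that the client is installed correctly:
$ faas-cli version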

Now that OpenFaaS is installed, we can develop and deploy our functions!

Development of a function

Before we develop our function, we need an agent that redirects incoming web traffic to our function, either over HTTP or via its standard input. In OpenFaaS, this agent is called the “watchdog”. It is baked into the function’s Docker image and pulled in transparently by the templates we will see below (see the watchdog documentation).
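
As a rough illustration of the classic watchdog contract (this is not OpenFaaS code, just a local pipe showing the idea): the request body arrives on the process’s standard input, and whatever the process writes to standard output becomes the HTTP response.
$ echo "Alice" | python3 -c 'import sys; print("Hello " + sys.stdin.read().strip())'
Hello Alice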

To initialize a new function in Python, we use faas-cli to scaffold our function from a template in the language of our choice:

$ export USER=YourDockerUsername
$ faas-cli new hello-world --image=$USER/hello-world --lang python3

2019/07/25 11:16:19 No templates found in current directory.
2019/07/25 11:16:19 Attempting to expand templates from https://github.com/openfaas/templates.git
2019/07/25 11:16:21 Fetched 16 template(s) : [csharp csharp-armhf dockerfile dockerfile-armhf go go-armhf java8 node node-arm64 node-armhf php7 python python-armhf python3 python3-armhf ruby] from https://github.com/openfaas/templates.git
Folder: hello-world created.
  ___                   _____           ____
 / _ \ _ __   ___ _ __ |  ___|_ _  __ _/ ___|
| | | | '_ \ / _ \ '_ \| |_ / _` |/ _` \___ \
| |_| | |_) |  __/ | | |  _| (_| | (_| |___) |
 \___/| .__/ \___|_| |_|_|  \__,_|\__,_|____/
      |_|
Function created in folder: hello-world
Stack file written: hello-world.yml

$ tree -I template
.
├── hello-world
│   ├── handler.py
│   ├── __init__.py
│   └── requirements.txt
└── hello-world.yml

1 directory, 4 files

  • hello-world.yml contains all the information OpenFaaS needs to build the Docker image.
  • requirements.txt lists the Python dependencies.
  • handler.py contains your function (a sketch of the generated handler is shown just below).
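
For reference, the handler generated by the python3 template looks roughly like this (the exact content may vary between template versions); by default it simply returns the request body:
$ cat hello-world/handler.py
def handle(req):
    """handle a request to the function
    Args:
        req (str): request body
    """

    return req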

Now, we just have to build our function:
$ faas-cli build -f hello-world.yml
This command builds the Docker image with the “watchdog” agent included and tags it “latest”.
You can also automate pushing the image to a public or private registry.

Deployment of a function

Functions can be deployed in several ways: through the gateway UI, through the API, or through the CLI.

Gateway

We will now redirect local traffic from port 31112 to remote port 8080 of the gateway service.
$ kubectl port-forward svc/gateway -n openfaas 31112:8080 &
We can then connect to the gateway at http://127.0.0.1:31112/ using the admin user and the password contained in the $PASSWORD environment variable.
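
If you need to display the password generated earlier (assuming the $PASSWORD variable is still set in your shell):
$ echo $PASSWORD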

Once logged in, just click on “Deploy new function”.

On this page, you can deploy ready-to-use functions in one click, or deploy your own function from the “custom” tab.

In this tab, you can specify the Docker image to use, the environment variables to set, and so on.

CLI

The OpenFaaS client lets you perform all the necessary operations: build a function, deploy it, read its logs, describe it…
First, we’ll authenticate via the gateway.
$ echo $PASSWORD | faas-cli login --gateway http://127.0.0.1:31112 --password-stdin

We now push the image we just built to the Docker registry (Docker Hub by default):
$ faas-cli push -f hello-world.yml

Once the image is available on our registry or on the Docker registry, we can deploy the function on OpenFaaS:
$ faas-cli deploy --image $USER/hello-world --name hello-world --gateway http://127.0.0.1:31112

In this article, we have broken down all the phases (build, push, deploy). However, all these steps can be combined into a single command:
$ faas-cli up -f hello-world.yml --gateway http://127.0.0.1:31112
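
Once the function is deployed, here are a few ways to check it and invoke it through the gateway (shown as an indicative example; the output will vary):
$ faas-cli list --gateway http://127.0.0.1:31112
$ faas-cli describe hello-world --gateway http://127.0.0.1:31112
$ echo "Hi" | faas-cli invoke hello-world --gateway http://127.0.0.1:31112
# or directly over HTTP:
$ curl -d "Hi" http://127.0.0.1:31112/function/hello-world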

Monitoring / Scaling

Another interesting aspect of OpenFaaS is monitoring and scaling. To scale functions, OpenFaaS defines alert rules handled by Prometheus AlertManager and uses the metrics that Prometheus scrapes (collects) to detect whether a particular function is struggling to serve its requests. When an alert fires, OpenFaaS scales the function by launching additional pods.
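
To see the autoscaling in action, a simple (and deliberately crude) test is to generate some load against the function and watch the replica count in the functions namespace:
# generate some load
$ for i in $(seq 1 1000); do curl -s -d "load" http://127.0.0.1:31112/function/hello-world > /dev/null; done
# watch the deployments scale up in openfaas-fn
$ kubectl get deploy -n openfaas-fn -w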

On the monitoring side, several elements can be observed: the latency of the gateway, the execution time of functions, the number of replicas… All available metrics are listed in the OpenFaaS documentation.

Conclusion

While the simplicity of OpenFaaS and its level of abstraction are its main assets, its limited feature set is its weakness. To fill the gaps, you will have to turn to more complete (and more complex) solutions such as Knative (see our Objectif Libre article) or OpenFaaS Cloud.