Service Mesh: Discovery and Implementation
By Yann Provost, Cloud Consultant @ObjectifLibre / Paris agency.
This article is an introduction to the Service Mesh, with a focus on Istio, in a Kubernetes context.
Target audience: Kubernetes Administrator
What’s the Mesh?
Finding a simple explanation of the Service Mesh feels like passing through Brittany (France) without tasting a Kouign-amann: impossible.
So we will play the game and propose a simple definition, in one sentence:
“A Service Mesh is a platform that controls and manages, securely and efficiently, the communication between all the (micro)services of an application.”
Most of the time, it is organized around two architectural components:
- A Control Plane, grouping all the core bricks: configuration, rules, authentication, display of metrics, etc.
- A Data Plane, a mesh of proxies, which may be present:
- at the node level: this is called a shared host proxy, deployed as a DaemonSet.
- at the pod level: we then speak of a sidecar proxy, deployed by injection alongside the existing containers (see the sketch below).
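To picture the sidecar pattern, here is a minimal, hand-written pod sketch with an application container and a proxy container side by side (all names and images here are hypothetical; in practice the proxy container is added for you by injection):
apiVersion: v1
kind: Pod
metadata:
  name: my-app                              # hypothetical pod
spec:
  containers:
  - name: app                               # the business container
    image: registry.example.com/my-app:1.0  # hypothetical image
    ports:
    - containerPort: 8080
  - name: proxy                             # the sidecar proxy, sharing the pod's network namespace
    image: envoyproxy/envoy:v1.11.0         # hypothetical tag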
Together, these two components take the administration of network services to another level, bringing new functionality and notably improving:
- Load balancing of high-level (L7) traffic: HTTP, but also TCP and WebSocket
- ACLs: whitelist / blacklist, API access, quotas
- Collecting network traces, metrics and logs
- Service discovery
- Cross-service security (TLS support)
So much for the generic view of the Service Mesh.
Who you gonna call?
Currently, several solutions exist. The choice will depend on your needs, your preferences and/or your ecosystem.
We will not make an exhaustive list, but here are the main players.
The leaders
Istio: A collaboration between Google, IBM and Lyft, relying on the Envoy proxy. Its ecosystem mostly revolves around Kubernetes. It is this solution that we will detail.
Linkerd (pronounced Linker-dee): The first Service Mesh, created in 2016 by former Twitter engineers. It has evolved to ease the resolution of scale problems on very large infrastructures.
SuperGloo: Very high-level, it is THE rising network-service orchestration solution from Solo.io, and it has become very popular in recent months. SuperGloo offers a much simpler and more automated Service Mesh than its counterparts. It supports both ingress traffic (north-south) and mesh traffic (east-west): users can pick any ingress/mesh combination, and SuperGloo wires them together and manages the operation of each pair automatically.
The outsiders
Consul: HashiCorp considers the classic load-balancing approach suboptimal: added cost, a SPOF, extra latency. The idea is instead to rely on a registry gathering all the information about the nodes, services and components of the platform: this is Service Discovery. Since version 1.2, Consul offers Connect, its own Data Plane with a sidecar proxy. Some functions (L7 routing, gateway) are still in beta, however; to address this, Connect offers integration with Envoy (and other proxies).
NGINX: The company and the web server of the same name need no introduction. The development of its own service mesh is ongoing, based on … Istio.
Envoy: The sidecar proxy used notably by Istio is developing its own service mesh. Watch it very closely.
Does Istio stand out?
This article also presents in detail how the solution works and its principles.
The choice fell on Istio for several reasons:
- Very present in the community, especially the Kubernetes one.
- Receives the support of major players in the field, such as Red Hat (for its OpenShift solution).
- Relies on a reference among sidecar proxies, namely Envoy.
- A product already designed for production and operations, surely the most relevant today.
As seen in the architecture diagram, Istio presents different bricks with very specific functions within the Control Plane:
- Mixer: Manages access control and collects metrics from Envoy.
- Pilot: Manages intelligent routing (A/B tests, canary deployments, etc.) and resilient routing (timeouts, retries, etc.).
- Citadel: Manages identities as well as cross-service and end-user authentication.
- Galley: A sort of meta component; it takes care of the configuration of all Istio components, including the relationship with the underlying infrastructure.
… and within the Data Plane:
- Envoy: The proxy, deployed as a sidecar, handles all the traffic of the service concerned. It provides many features (load balancing, TLS, health checks, etc.).
This represents the BASE of Istio. We will see that depending on the type of installation and the bricks added, other components may appear (as a reminder, we will focus on the demo stack).
Implementation
In order to dive into the heart of the matter, you can follow this procedure to obtain a functional Istio service on a Kubernetes platform (built around minikube).
Follow the guide and get to the other side!
Prerequisites (for this demo):
- An Ubuntu 16.04 / 18.x system
- 16 GB of RAM / 4 vCPUs
Optional:
- A virtualization solution (kvm, virtualbox etc.)
Installing our stack:
1 \ Minikube
Purpose: to have our environment available, namely Kubernetes
# Docker setup
$ sudo apt-get install docker.io
# Pull minikube binary
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
$ sudo install minikube /usr/local/bin
# Cluster startup
$ sudo minikube start --memory=16384 --cpus=4 --kubernetes-version=v1.14.2 --vm-driver=none
$ sudo chown -R $USER $HOME/.kube $HOME/.minikube
# Pull kubectl binary
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
# Proper operation verification
$ kubectl -n kube-system get po
NAME                               READY   STATUS    RESTARTS
coredns-fb8b8dccf-mbz77            1/1     Running   0
coredns-fb8b8dccf-rt8mj            1/1     Running   0
etcd-minikube                      1/1     Running   0
kube-addon-manager-minikube        1/1     Running   0
kube-apiserver-minikube            1/1     Running   0
kube-controller-manager-minikube   1/1     Running   0
kube-proxy-5v4gv                   1/1     Running   0
kube-scheduler-minikube            1/1     Running   0
storage-provisioner                1/1     Running   0
tiller-deploy-7f656b499f-4ktj2     1/1     Running   0
2 \ Helm
Purpose: have the Helm package manager, to then deploy Istio
# Pull installation script
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
# Deployment
$ ./get_helm.sh
# Installation of socat (if missing), required by Helm
$ sudo apt-get install socat
3 \ Istio
# Retrieving the latest version
$ curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.2.2 sh -
# Add the istioctl binary to the PATH
$ cd istio-1.2.2/
$ export PATH=$PWD/bin:$PATH
# ServiceAccount and ClusterRoleBinding creation to use Helm
$ kubectl apply -f install/kubernetes/helm/helm-service-account.yaml
# Upgrade of tiller
$ helm init --upgrade --service-account tiller
# Deployment of Istio CRDs
$ helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
# Verification of the number of CRDs: there must be 23 (and not 53 as in version 1.1.x)
$ kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l
23
# Deploying Istio in demo mode
/!\ For use outside a cloud provider, it is necessary to change the type of the istio-ingressgateway service from LoadBalancer to NodePort (thanks to the --set gateways.istio-ingressgateway.type=NodePort directive)
$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system --values install/kubernetes/helm/istio/values-istio-demo.yaml --set gateways.istio-ingressgateway.type=NodePort
# Deployment check
$ kubectl get svc -n istio-system
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
grafana                  ClusterIP   10.111.12.28     <none>        3000/TCP
istio-citadel            ClusterIP   10.105.178.155   <none>        8060/TCP,15014/TCP
istio-egressgateway      ClusterIP   10.111.34.207    <none>        80/TCP,443/TCP,15443/TCP
istio-galley             ClusterIP   10.109.117.14    <none>        443/TCP,15014/TCP,9901/TCP
istio-ingressgateway     NodePort    10.100.108.48    <none>        15020:32378/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30694/TCP,15030:30395/TCP,15031:31105/TCP,15032:32748/TCP,15443:31005/TCP
istio-pilot              ClusterIP   10.106.251.175   <none>        15010/TCP,15011/TCP,8080/TCP,15014/TCP
istio-policy             ClusterIP   10.110.158.161   <none>        9091/TCP,15004/TCP,15014/TCP
istio-sidecar-injector   ClusterIP   10.104.47.230    <none>        443/TCP
istio-telemetry          ClusterIP   10.111.71.245    <none>        9091/TCP,15004/TCP,15014/TCP,42422/TCP
jaeger-agent             ClusterIP   None             <none>        5775/UDP,6831/UDP,6832/UDP
jaeger-collector         ClusterIP   10.103.76.38     <none>        14267/TCP,14268/TCP
jaeger-query             ClusterIP   10.109.245.234   <none>        16686/TCP
kiali                    ClusterIP   10.99.45.168     <none>        20001/TCP
prometheus               ClusterIP   10.108.186.42    <none>        9090/TCP
tracing                  ClusterIP   10.104.184.13    <none>        80/TCP
zipkin                   ClusterIP   10.102.89.77     <none>        9411/TCP
$ kubectl get po -n istio-system
NAME                                      READY   STATUS      RESTARTS
grafana-6fb9f8c5c7-wp59t                  1/1     Running     0
istio-citadel-68c85b6684-m8bhb            1/1     Running     0
istio-cleanup-secrets-1.2.2-dr7sj         0/1     Completed   0
istio-egressgateway-5f7889bf58-lnwq8      1/1     Running     0
istio-galley-77d697957f-plrdb             1/1     Running     0
istio-ingressgateway-8b858ff84-7458b      1/1     Running     0
istio-init-crd-10-r8c86                   0/1     Completed   0
istio-init-crd-11-h7qmf                   0/1     Completed   0
istio-init-crd-12-cfjzz                   0/1     Completed   0
istio-pilot-5544b58bb6-6g2zz              2/2     Running     0
istio-policy-68946fb9b9-vcfb5             2/2     Running     2
istio-security-post-install-1.2.2-lmf6n   0/1     Completed   0
istio-sidecar-injector-66549495d8-ddtkr   1/1     Running     0
istio-telemetry-7749c6d54f-vn8h4          2/2     Running     0
istio-tracing-5d8f57c8ff-x7d87            1/1     Running     0
kiali-7d749f9dcb-rghlh                    1/1     Running     0
prometheus-776fdf7479-xkmgw               1/1     Running     0
At this point, we have:
- A functional Kubernetes
- Helm, which allowed us to easily deploy what follows
- Istio, deployed in demo mode, i.e. with the tools needed to operate and showcase it (Prometheus, Grafana, Kiali, etc.)
By default in our Istio deployment, the injection of the Envoy sidecar is almost automatic (via an admission webhook): just add a label on the namespace(s) concerned.
# Example with the default namespace:
$ kubectl label namespace default istio-injection=enabled
From there, all pods created in this namespace will have an additional container: istio-proxy. Note that an istio-init initialization container also appears at pod creation.
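To check that injection is active and effective, one can for example list the namespaces with their istio-injection label, then inspect the containers of a freshly created pod (<pod-name> is a placeholder):
$ kubectl get namespace -L istio-injection
# the application container plus the istio-proxy sidecar:
$ kubectl get po <pod-name> -o jsonpath='{.spec.containers[*].name}'
# the istio-init initialization container:
$ kubectl get po <pod-name> -o jsonpath='{.spec.initContainers[*].name}'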
And now?
Istio provides a test application, which will help us understand the possibilities available to us.
This application is called BookInfo and will live in the default namespace (on which we have already activated sidecar proxy injection).
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
Then check that the services and pods are up and running with the commands kubectl get svc and kubectl get po. Note that several versions of some components are present.
Make sure the application itself is running correctly:
$ kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
Finally, to make the application accessible from outside the cluster, we will use a gateway object:
$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
A Gateway is a gate from outside the cluster pointing directly to Istio. Similar to an Ingress, it however provides all the features offered by the Service Mesh.
This manifest also creates VirtualService objects, comparable to traditional vhosts in the Apache / NGINX world. They can be associated (or not) with the DestinationRule objects mentioned right after.
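For reference, here is roughly what this manifest contains (abridged excerpt; refer to samples/bookinfo/networking/bookinfo-gateway.yaml in your Istio release for the exact content):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # binds to the default istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage   # other routes omitted here
    route:
    - destination:
        host: productpage
        port:
          number: 9080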
To access the application, we need to get:
- the http or https port of the istio-ingressgateway service in the istio-system namespace
- the IP of your host (here, in a minikube installation)
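One possible way to retrieve these two values (assuming the HTTP port of the gateway service is named http2, as in the Istio 1.2 charts):
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
$ export INGRESS_HOST=$(minikube ip)
$ echo $INGRESS_HOST:$INGRESS_PORT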
We can then test and query the application, via a curl command or in a web browser:
$ curl -s http://IP:PORT_HTTP/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
BookInfo is now accessible from outside via Istio.
Traffic Management and Application Version Routing
As we have seen, there are several target versions for BookInfo:
$ kubectl get po
NAME                              READY   STATUS    RESTARTS
details-v1-5544dc4896-wtmk5       2/2     Running   0
productpage-v1-7868c48878-k2zkk   2/2     Running   0
ratings-v1-858fb7569b-jbmc7       2/2     Running   0
ratings-v2-dbd656795-xspd5        2/2     Running   0
reviews-v1-796d4c54d7-sz4v5       2/2     Running   0
reviews-v2-5d5d57db85-4r6rn       2/2     Running   0
reviews-v3-77c6b4bdff-tdwcf       2/2     Running   0
To benefit from version-based routing, it is necessary to create a DestinationRule for each component of the BookInfo application, containing the subsets for each version with their label:
$ kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
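As an illustration, here is roughly the rule obtained for the reviews component, with one subset per version label (abridged excerpt from destination-rule-all.yaml):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3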
Test by example
We can then set up A/B testing on the reviews component:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 75
    - destination:
        host: reviews
        subset: v2
      weight: 25
Here we direct 75% of the traffic to version v1 of reviews and 25% to version v2.
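To try it, save the manifest above to a file (the name below is arbitrary) and apply it; reloading /productpage several times should then show the v1 rendering (no stars) about three requests out of four, and the v2 rendering (black stars) the rest of the time:
$ kubectl apply -f reviews-ab.yaml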
Visualization and metrics
The Istio deployment we have implemented also contains metrics-collection components (prometheus) and the corresponding dashboards (grafana).
To access them, it is possible to open temporary access via a port-forward on the appropriate component:
$ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000
N.B.: adding --address 0.0.0.0 can be useful when working on a remote machine.
There is a lot of information available about network traffic, CPU consumption, vhosts etc.
You can also access other telemetry services like Kiali, Jaeger or Zipkin.
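The same port-forward technique works for them; for example for Kiali, which listens on port 20001 (assuming its pod carries the app=kiali label, as in the demo chart):
$ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=kiali -o jsonpath='{.items[0].metadata.name}') 20001:20001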
Administration of Istio / Envoy
The istioctl binary allows us to consult the functional status of the proxies and their configuration, or to inject a sidecar manually.
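For manual injection, for instance in a namespace without the istio-injection label, istioctl can rewrite a manifest on the fly before applying it:
$ istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml | kubectl apply -f -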
$ istioctl proxy-status
NAME                                                CDS      LDS      EDS             RDS        PILOT                          VERSION
details-v1-5544dc4896-wtmk5.default                 SYNCED   SYNCED   SYNCED (50%)    SYNCED     istio-pilot-5544b58bb6-6g2zz   1.2.2
istio-egressgateway-5f7889bf58-lnwq8.istio-system   SYNCED   SYNCED   SYNCED (100%)   NOT SENT   istio-pilot-5544b58bb6-6g2zz   1.2.2
istio-ingressgateway-8b858ff84-7458b.istio-system   SYNCED   SYNCED   SYNCED (100%)   SYNCED     istio-pilot-5544b58bb6-6g2zz   1.2.2
mongodb-v1-679d664df4-4pcl9.default                 SYNCED   SYNCED   SYNCED (50%)    SYNCED     istio-pilot-5544b58bb6-6g2zz   1.2.2
productpage-v1-7868c48878-k2zkk.default             SYNCED   SYNCED   SYNCED (50%)    SYNCED     istio-pilot-5544b58bb6-6g2zz   1.2.2
ratings-v1-858fb7569b-jbmc7.default                 SYNCED   SYNCED   SYNCED (50%)    SYNCED     istio-pilot-5544b58bb6-6g2zz   1.2.2
ratings-v2-dbd656795-xspd5.default                  SYNCED   SYNCED   SYNCED (50%)    SYNCED     istio-pilot-5544b58bb6-6g2zz   1.2.2
reviews-v1-796d4c54d7-sz4v5.default                 SYNCED   SYNCED   SYNCED (50%)    SYNCED     istio-pilot-5544b58bb6-6g2zz   1.2.2
reviews-v2-5d5d57db85-4r6rn.default                 SYNCED   SYNCED   SYNCED (50%)    SYNCED     istio-pilot-5544b58bb6-6g2zz   1.2.2
reviews-v3-77c6b4bdff-tdwcf.default                 SYNCED   SYNCED   SYNCED (50%)    SYNCED     istio-pilot-5544b58bb6-6g2zz   1.2.2
$ istioctl proxy-config cluster reviews-v1-796d4c54d7-sz4v5.default
SERVICE FQDN                                          PORT    SUBSET   DIRECTION   TYPE
BlackHoleCluster                                      -       -        -           STATIC
PassthroughCluster                                    -       -        -           ORIGINAL_DST
details.default.svc.cluster.local                     9080    -        outbound    EDS
details.default.svc.cluster.local                     9080    v1       outbound    EDS
details.default.svc.cluster.local                     9080    v2       outbound    EDS
grafana.istio-system.svc.cluster.local                3000    -        outbound    EDS
istio-citadel.istio-system.svc.cluster.local          8060    -        outbound    EDS
istio-citadel.istio-system.svc.cluster.local          15014   -        outbound    EDS
istio-egressgateway.istio-system.svc.cluster.local    80      -        outbound    EDS
istio-egressgateway.istio-system.svc.cluster.local    443     -        outbound    EDS
istio-egressgateway.istio-system.svc.cluster.local    15443   -        outbound    EDS
istio-galley.istio-system.svc.cluster.local           443     -        outbound    EDS
istio-galley.istio-system.svc.cluster.local           9901    -        outbound    EDS
istio-galley.istio-system.svc.cluster.local           15014   -        outbound    EDS
istio-ingressgateway.istio-system.svc.cluster.local   80      -        outbound    EDS
istio-ingressgateway.istio-system.svc.cluster.local   443     -        outbound    EDS
istio-ingressgateway.istio-system.svc.cluster.local   15020   -        outbound    EDS
istio-ingressgateway.istio-system.svc.cluster.local   15029   -        outbound    EDS
istio-ingressgateway.istio-system.svc.cluster.local   15030   -        outbound    EDS
istio-ingressgateway.istio-system.svc.cluster.local   15031   -        outbound    EDS
istio-ingressgateway.istio-system.svc.cluster.local   15032   -        outbound    EDS
istio-ingressgateway.istio-system.svc.cluster.local   15443   -        outbound    EDS
istio-ingressgateway.istio-system.svc.cluster.local   31400   -        outbound    EDS
[...]
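Other facets of the Envoy configuration can be inspected the same way, for example the HTTP routes known to a given proxy:
$ istioctl proxy-config routes reviews-v1-796d4c54d7-sz4v5.default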
In conclusion
The possibilities offered by the Service Mesh and Istio are numerous and could not be described in a single article.
The goal was therefore, after setting it up, to try out the concepts and options on offer.
Other articles will follow; they will further detail the range of benefits of adding a Service Mesh to your infrastructure, adding an API Gateway service, or a multi-cluster Control Plane.
Stay tuned!