What is Service Mesh & why do we need it? + Linkerd Tutorial

KarthiKeyan Shanmugam
7 min read · Dec 17, 2018


In a microservice ecosystem, cross-cutting concerns such as service discovery, service-to-service and origin-to-service security, observability, and resiliency are usually implemented in a shared asset such as an API gateway or ESB. As a microservice architecture grows in size and complexity, it becomes harder to understand and manage.

The service mesh technique addresses these challenges by expressing the implementation of these cross-cutting capabilities as code. A service mesh provides an array of network proxies that run alongside containers. Each proxy serves as a gateway for every interaction, both between containers and between servers: it accepts the connection and spreads the load across the mesh. A service mesh thus serves as a dedicated infrastructure layer for handling service-to-service communication.

A service mesh offers consistent discovery, security, tracing, monitoring, and failure handling without the need for a shared asset such as an API gateway or ESB. So if you have a service mesh on your cluster, you can achieve all of the items below without making changes to your application code.

  • Automatic load balancing
  • Fine-grained control of traffic behaviour with routing rules, retries, failovers, etc.
  • Pluggable policy layer
  • Configuration API supporting access controls, rate limits, and quotas
  • Service discovery
  • Service monitoring with automatic metrics, logs, and traces for all traffic
  • Secure service-to-service communication

In the service mesh model, each microservice has a companion proxy sidecar. The sidecar is attached to a parent application and provides supporting features for it. The sidecar also shares the same life cycle as the parent application, being created and retired alongside the parent.
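The sidecar layout can be sketched as a plain Kubernetes pod spec. This is illustrative only: the pod name, images, and ports below are assumptions for the sketch, and a real service mesh injects its own proxy image automatically rather than having you declare it by hand.

```shell
# Illustrative sidecar pattern: an application container and a proxy
# container packaged in one pod, sharing the pod's network namespace
# and life cycle (created and retired together).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app              # the parent application
    image: nginx:latest
    ports:
    - containerPort: 80
  - name: proxy-sidecar    # companion proxy riding alongside the app
    image: envoyproxy/envoy:v1.28.0
EOF
```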

Image — Sidecar Pattern

Key Use Cases for Service Mesh

  • Service discovery: A service mesh provides service-level visibility and telemetry, which helps enterprises with service inventory information and dependency analysis.
  • Operational reliability: Metrics from the service mesh let you see how your services are performing, for example how long a service takes to respond to requests and how much resource it is using. This data is useful for detecting and correcting issues.
  • Traffic governance: With a service mesh, you can configure the mesh network to apply fine-grained traffic-management policies without going back and changing the application. This covers all ingress and egress traffic to and from the mesh.
  • Access control: With a service mesh, you can assign policies so that a service request is granted only if it originates from an approved location and the requester passes a health check.
  • Secure service-to-service communication: You can enforce mutual TLS for service-to-service communication for all your services in the mesh. You can also enforce service-level authentication using either TLS or JSON Web Tokens.

Service meshes are currently offered by Linkerd, Istio, and Conduit. A service mesh is ideal for multi-cloud scenarios, since it offers a single abstraction layer that hides the specifics of the underlying cloud. Enterprises can set policies with the service mesh and have them enforced across different cloud providers.


In the next section, we will look at how to implement the Linkerd service mesh for the sample application we have used before, i.e., an nginx deployment.

Linkerd Service Mesh

Linkerd is a service sidecar and service mesh for Kubernetes and other frameworks. The Linkerd sidecar is attached to a parent application and provides supporting features for it, sharing the same life cycle as the parent application and being created and retired alongside it. Applications and services often require related functionality such as monitoring, logging, configuration, and networking services. Linkerd makes running your service easier and safer by giving you runtime debugging, observability, reliability, and security, all without requiring any changes to your code.


Linkerd has three basic components: (1) a user interface (both command-line and web-based options are available), (2) a data plane, and (3) a control plane.

Image — Linkerd Architecture

Key Components

  • User interface: comprises a CLI (linkerd) and a web UI. The CLI runs on your local machine; the web UI is hosted by the control plane.
  • Control plane: composed of a number of services that run on your cluster and drive the behavior of the data plane. It is responsible for aggregating telemetry data from the data-plane proxies.
  • Data plane: comprises ultralight, transparent proxies that are deployed in front of each service. These proxies automatically handle all traffic to and from the service.

Next, we will download and install Linkerd and deploy a sample app.

If you’re looking for a quick start on basic Kubernetes concepts, please refer to the earlier posts on Kubernetes and on how to create, deploy, and roll out updates to a cluster.

Step #1: Validate Kubernetes Version

Check that you’re running Kubernetes 1.9 or later by using the kubectl version command.
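For example (the exact versions reported will depend on your setup):

```shell
# Print the client and server versions; the server (cluster) version
# must be 1.9 or later for Linkerd 2.x.
# Note: on newer kubectl releases, --short was removed and a plain
# "kubectl version" prints the same information.
kubectl version --short
```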

Image — Validate Kubernetes version

Step #2: Install Linkerd CLI

We will be using the CLI to interact with the Linkerd control plane. Download the CLI onto your local machine using the curl command.

You can also download the CLI directly via the Linkerd releases page.
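A typical installation looks like the following; the install script URL is Linkerd's official one, while the PATH details may differ on your machine:

```shell
# Download and run Linkerd's CLI install script
curl -sL https://run.linkerd.io/install | sh

# The script installs the binary under ~/.linkerd2/bin
export PATH=$PATH:$HOME/.linkerd2/bin

# Confirm the CLI works
linkerd version
```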

Image — Linkerd CLI Installation

Verify that the CLI is installed and running correctly using the linkerd command.

Image — Verify Linkerd Installation

Step #3: Validate Kubernetes cluster

To ensure that the Linkerd control plane will install correctly, we are going to run a pre-check to validate that everything is configured correctly.
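The pre-check is a single CLI call:

```shell
# Validate cluster prerequisites (API access, Kubernetes version,
# RBAC permissions) before installing the control plane
linkerd check --pre
```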

Image — Pre-check Kubernetes cluster

Step #4: Install Linkerd on the Kubernetes cluster

We are going to install the Linkerd control plane into its own namespace using the linkerd install command. After installation, the Linkerd control-plane resources will be added to your cluster and start running immediately.
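A minimal sketch of the installation step:

```shell
# Render the control-plane manifests and apply them to the cluster;
# by default all resources are created in the "linkerd" namespace
linkerd install | kubectl apply -f -
```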

Image — Linkerd Installation

After installation, run linkerd check to verify that everything is OK.

Image — Linkerd Validation (1)

After validation, you should see an [ok] status for all the items.

Image — Linkerd Validation (2)

Step #5: View Control plane components

The control plane is now installed and running. To view its components, use the kubectl command.
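For example:

```shell
# List the control-plane deployments in the "linkerd" namespace
# (component names such as linkerd-controller, linkerd-web, and
# linkerd-prometheus may vary between Linkerd versions)
kubectl -n linkerd get deploy
```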

Image — Control Plane components

You can also view the Linkerd dashboard by running linkerd dashboard.

Image — Launch Linkerd Dashboard

To view traffic, use the linkerd -n linkerd top deploy/web command.
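Both dashboard-related commands together; top here watches the control plane's own web deployment, so there is always some traffic to look at:

```shell
# Proxy the dashboard to localhost and open it in a browser
linkerd dashboard &

# Live, top-like view of requests hitting the dashboard's web deployment
linkerd -n linkerd top deploy/web
```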

Image — Linkerd Traffic

Congrats! We have successfully installed and configured the Linkerd components.

The next step is to set up a sample application and check the metrics.

Step #6: Deploy sample app

We are going to use an nginx web app as the sample; to install it, run the kubectl apply command.
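A minimal nginx deployment manifest, applied inline; the deployment name, labels, and replica count below are illustrative assumptions:

```shell
# Create a small nginx deployment to experiment with
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80
EOF
```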

Image — Deploy Sample application

Now that the application is installed, the next step is to inject Linkerd into the app by piping the linkerd inject and kubectl apply commands together. Kubernetes will execute a rolling deploy and update each pod with the data plane’s proxies, all without any downtime.
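The injection pipeline looks like this, assuming the sample deployment is named nginx-deployment:

```shell
# Read the live spec, have Linkerd add its proxy sidecar to it,
# and re-apply -- the original YAML on disk is never touched
kubectl get deploy nginx-deployment -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
```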

Image — Inject Linkerd to application

If you noticed, we have added Linkerd to existing services without touching the original YAML.

To view high-level stats about the app, you can run the linkerd -n nginx-deployment stat deploy command.
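For example, if you applied the sample app without specifying a namespace, it lands in default; substitute the namespace your app actually runs in:

```shell
# Golden metrics (success rate, request rate, latency percentiles)
# for every deployment in the given namespace
linkerd -n default stat deploy
```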

The Linkerd dashboard provides a high-level view of what is happening with your services in real time. It can be used to view the “golden” metrics (success rate, requests per second, and latency), visualize service dependencies, and understand the health of specific service routes. To view detailed metrics, you can use Grafana, which is part of the Linkerd control plane and provides actionable dashboards for your services out of the box. It is possible to see high-level metrics and dig down into the details, even for individual pods.

Image — Sample Grafana Dashboard — Top Line Metrics

Today, we have learned how to install Linkerd and its components. We have also deployed a sample service and were able to view its traffic and metrics.

Like this post? Don’t forget to share it!

Additional Resources