Service mesh


Image by Ricardo Gomez Angel via Unsplash


The service mesh is emerging as a potential solution to the challenges posed by micro-services architectures.

In a short time, service APIs have gone from being primarily an edge interface, connecting developers outside the organisation with internal systems, to being the connective tissue of those internal systems themselves (micro-services). As a consequence, one of the inevitable results of micro-service-oriented architectures is an increase in internal communication within the data centre.

The service mesh emerged as a potential solution to the challenges posed by this increased East-West traffic, by providing a different framework for deploying existing technology.

The nature of APIs is still changing

Our services have always used APIs for communication, regardless of the technology or architecture involved. APIs are, of course, always evolving to cover more and more requirements. But why is the nature of APIs changing? Why do we keep setting new challenges for what is the most important part of our services?

APIs change as the architecture changes. The API used to be the edge of our system: a boundary technology that allowed external users or customers to access our services. As our applications become more complex, the role of our APIs has changed. With micro-services, an almost completely new set of requirements must be covered by the APIs.

Of course, behind this change a more complex transformation occurred. We have split our monolithic services into micro-services, and we have completely changed the infrastructure and the way we think about architecture and programming.

Why has this happened? Because we needed a new way to scale our business. We found that rapid feature development required changes. As part of these changes, APIs essentially become a transformation layer, transforming data as it moves across services.

In the monolith world

In a monolith application, we have one front-end layer in front of the entire service and a database. The front-end layer includes security and validation.

Each element can be built by a separate team, all in one large code base. But even a small change in the smallest part of the system forces a redeployment of everything, and we must coordinate with other teams on every deployment. This creates a scalability problem as the code base and the teams grow.

Into the micro-service world

To better understand why the service mesh is so important, we need to understand the changes that moving to micro-services requires.

In a monolith application, as developers, we are used to thinking of the API as a safe point of communication. We provide a proper security and validation layer at the edge, and inside the application every communication (a function call) is treated as protected.

After the move to micro-services, a new layer of internal, service-to-service communication appears. APIs become not only the external entry point of our application, but also the dedicated channel for internal communication. We still have an API gateway in front of the application, but almost every call to another micro-service now goes over a network request.

Micro-services replace local function invocations with APIs over a network. This is an important reason why the service mesh exists.

What is more, the network is an unpredictable variable, because it is not reliable. We need to provide proper security checks, input validation, and so on. The network can slow down, fail, time out, and introduce delays, problems that simply do not exist with in-process function calls in a monolith.
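To make this concrete, here is a minimal sketch (in Python, using a made-up `slow_service` stand-in rather than a real network call) of the kind of timeout guard every networked call suddenly needs, which a local function call never did:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def call_with_timeout(fn, timeout=0.5):
    """Run a (possibly hanging) downstream call, but fail fast on timeout."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(fn).result(timeout=timeout)
    except FutureTimeout:
        raise TimeoutError("downstream service did not answer in time")
    finally:
        # Don't block on the stuck call; just stop waiting for it.
        pool.shutdown(wait=False)

def slow_service():
    # Stand-in for a downstream call stuck on a bad network.
    time.sleep(1)
    return "response"

call_with_timeout(lambda: "pong", timeout=0.5)  # returns "pong"
```

In a monolith, `slow_service()` would just be a function call; over a network, every such call needs this defensive wrapping, plus retries, authentication, and monitoring.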

How can we provide a mechanism that lets services communicate without problems? As developers, we do not want to care about the underlying settings and mechanisms; we just want to make a request and have it work. And the more services we run, the more of these problems we accumulate.

A quick stop to build our dictionary

North-south communication: traffic that comes from outside the data centre (also known as ingress).
East-west communication: traffic between services within the data centre (far more prevalent with micro-services).

East-west communication solution 1

Very often in a micro-service-oriented architecture, developers make a decision that would be adequate for a monolith: a security layer is implemented separately in every micro-service in the company.

To limit this duplication, a shared library is created and reused across services. The library can take care of retries, the proper security mechanisms, and so on.
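As a sketch, such a shared library might expose a retry decorator like the hypothetical `resilient` below (the names and the `get_balance` call are illustrative, not a real library):

```python
import functools
import time

# Sketch of a hypothetical shared "internal comms" library.
# Every service imports it instead of re-implementing retries itself.

def resilient(retries=3, delay=0.01):
    """Decorator adding retries with a small delay to any outbound call."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return fn(*args, **kwargs)
                except ConnectionError:
                    if attempt == retries - 1:
                        raise  # give up after the last attempt
                    time.sleep(delay)
        return inner
    return wrap

@resilient(retries=3)
def get_balance(account_id):
    # Stand-in for an HTTP call to another micro-service.
    return {"account": account_id, "balance": 100}
```

Every team now calls other services through this one decorator, which is exactly why updating it means redeploying every service that uses it.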

The problems with this solution are similar to those of the monolith. Every time we fix or update the library, we need to update every service. Moreover, we need to provide the library for each service, which means that if services are written in different programming languages, the library must be created and maintained in all of those languages.

East-west communication solution 2

To make the above approach more flexible, we can deploy a proxy alongside each service. This moves the problem from the engineering teams to DevOps: we remove the communication code from the services and express the requirements as configuration during deployment and operation. The proxy provides the correct mechanisms for communication between services and the appropriate way of delegating requests.

We are very close to service mesh.

What is the problem with the proxy? Latency. The speed of each of these requests becomes critical in east-west traffic; latency becomes a first-class citizen.

How do we calculate latency in this approach? With micro-services the problem compounds: we must sum up the latency of every hop. Latency can kill our system, so monitoring this factor is fundamental. And to track it usefully, we need to somehow distinguish network latency from proxy latency and service latency.
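A rough back-of-the-envelope illustration, with made-up per-hop numbers, shows both why sequential hops add up and why separating the latency sources matters:

```python
# Illustrative (made-up) per-hop latencies in milliseconds for one user
# request that fans out sequentially across four micro-services,
# each hop passing through a proxy.
hops = [
    {"network": 2.0, "proxy": 0.5, "service": 10.0},
    {"network": 1.5, "proxy": 0.5, "service": 25.0},
    {"network": 2.5, "proxy": 0.5, "service": 8.0},
    {"network": 1.0, "proxy": 0.5, "service": 12.0},
]

# For sequential calls, the latencies simply add up.
total = sum(sum(h.values()) for h in hops)

# Splitting by source shows where the time actually goes.
by_source = {k: sum(h[k] for h in hops)
             for k in ("network", "proxy", "service")}

print(total)      # 64.0 (ms end-to-end)
print(by_source)  # {'network': 7.0, 'proxy': 2.0, 'service': 55.0}
```

Even in this toy example, the proxies account for a small fraction of the total; but without per-source numbers like `by_source`, there is no way to know that.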

Final solution

We add a reverse proxy in front of each service. The assumption is that the request between a micro-service and its proxy is effectively instantaneous. Why? Because they share the same localhost. In the Kubernetes world, the proxy is a sidecar proxy: both components are always deployed on the same machine, in the same pod. Communication between service and proxy always happens over localhost, so it is fast and adds almost zero latency.

The reverse proxy can also add proper monitoring, latency information, metrics, etc. Effectively, the proxy and reverse proxy become the contact point for every communication.
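As a toy sketch of that idea, here is a handler wrapped the way a reverse proxy would wrap it, so every forwarded request is timed (the handler and the metrics store are illustrative, not a real proxy's API):

```python
import time

def with_metrics(handler, metrics):
    """Wrap a service handler as a reverse proxy would,
    recording the latency of every request it forwards."""
    def proxy(request):
        start = time.perf_counter()
        response = handler(request)  # forward to the real service
        metrics.append({
            "path": request["path"],
            "latency_s": time.perf_counter() - start,
        })
        return response
    return proxy

metrics = []
handler = with_metrics(lambda req: {"status": 200}, metrics)
handler({"path": "/balance"})  # served normally, and timed as a side effect
```

The service itself stays unchanged; observability is bolted on at the proxy layer, which is exactly the point of the pattern.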

As developers, we no longer need to care about how somebody contacts us, or how and through which channels we must communicate with other services.

Service mesh pattern

East-west communication between services always goes through a proxy and a reverse proxy.

Proxies give us:

  • routing
  • telemetry
  • monitoring and observability
  • circuit breaking and health checks
  • error-handling
  • extensibility

A service mesh also contains a control plane, with an API and sometimes a GUI, used to configure the proxies. Changes are applied immediately. Each service has its own data plane (its proxy), which enforces the configuration. Data planes only process requests; they cannot configure the system.
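The control-plane/data-plane split can be sketched as a toy model (this mirrors the pattern only; the class names and config keys are invented, not any real mesh's API):

```python
class ControlPlane:
    """Holds the desired configuration and pushes it to every proxy."""
    def __init__(self):
        self.proxies = []

    def register(self, proxy):
        self.proxies.append(proxy)

    def apply(self, config):
        # Configuration changes reach all data planes immediately.
        for proxy in self.proxies:
            proxy.config = dict(config)

class SidecarProxy:
    """Data plane: processes requests and enforces the configuration,
    but never changes it."""
    def __init__(self):
        self.config = {}

    def handle(self, request):
        if self.config.get("require_auth") and "token" not in request:
            return {"status": 401}
        return {"status": 200, "body": f"forwarded {request['path']}"}

cp = ControlPlane()
p1, p2 = SidecarProxy(), SidecarProxy()
cp.register(p1)
cp.register(p2)
cp.apply({"require_auth": True})

print(p1.handle({"path": "/orders"}))                  # {'status': 401}
print(p2.handle({"path": "/orders", "token": "abc"}))  # status 200
```

One `apply` on the control plane reconfigures every sidecar at once, while each sidecar merely enforces whatever it was last given.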

Because the service mesh is a pattern, not a technology, it can be implemented from scratch or with existing solutions (for example, Istio).


For East-West communication in the micro-service world, the service mesh is becoming a must-have in every organisation. A proper implementation of this pattern brings tangible benefits: it simplifies the work of both developers and DevOps.