Migrating to a Serverless Architecture


Image by C Dustin via Unsplash (copyright-free)

At KubeCon/CloudNativeCon I attended the Serverless Practitioners Summit (virtually), which covered many topics revolving around serverless architecture. Many of the talks during the day focused on moving from legacy applications towards a serverless architecture. With the popularity and usage of serverless solutions on the rise, I’ve taken the opportunity to write about this topic in particular.

One of the attributes of serverless is that it is inherently stateless. By implementing a serverless architecture we want to turn all the stateful parts of our application into services, ensuring that the inputs and outputs of these services are also stateless.

In 'Patterns to Build Stateful Apps with Serverless Apps' the presenter, Tanmai Gopal, identifies 3 kinds of state handling:

  • Accessing data
  • Persisting data
  • Shared memory, in two forms: state persisted across consecutive runs, and state shared across concurrent runs
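
To make the shared-memory problem concrete, here is a toy Python sketch (the handler and counter are hypothetical, not from the talk) of why module-level state is unreliable in a serverless function: it survives only while a warm container is reused, and it is neither durable nor shared across concurrent containers.

```python
# Module-level state lives in the container, not in durable storage.
invocation_count = 0

def handler(event, context=None):
    """A Lambda-style handler with an apparent 'memory'.

    The counter increments across warm invocations of the SAME
    container, but a cold start resets it to zero, and concurrent
    containers each hold their own independent copy.
    """
    global invocation_count
    invocation_count += 1
    return {"count": invocation_count}
```

Anything that must genuinely persist or be shared has to be pushed out to an external service instead.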

There are a few things to consider when migrating from a traditional application to a serverless architecture. First of all, it is important to know what inputs are required for the computation, as that will determine how your functions can be triggered. Serverless functions have no shared memory and execute with massive concurrency; if we want to use serverless functions to alter an eventual state, how do we manage all of these executions? In addition, the security context of your FaaS application will be separate from the security context of the data sources holding your existing data, which can make that data difficult to obtain. Furthermore, the number of simultaneous connections that can be opened to any single database is often limited, and we need security credentials to access the data required.

These issues around accessing data can be addressed in a few ways. One step would be to use a data API rather than connecting directly to a database. This provides a layer of abstraction in which the service behind the data API is responsible for managing the connections. The serverless function can then make a stateless HTTP request, meaning no connection needs to be maintained, while authentication and authorization are implemented via a security credential exchange such as OpenID Connect (OIDC). It is also possible to trigger serverless functions with an event payload: events react to changes in stateful systems, capture the relevant data and deliver it to the serverless function as an input.
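
As a rough sketch of the first approach, the Python snippet below builds a stateless HTTP request to a data API, carrying an OIDC-style bearer token instead of holding a database connection. The endpoint URL and token here are made-up placeholders, not a real service.

```python
import urllib.parse
import urllib.request

# Hypothetical data API endpoint -- a stand-in for the real service.
DATA_API_URL = "https://data-api.example.com/books"

def build_data_api_request(access_token: str, query: str) -> urllib.request.Request:
    """Build a stateless HTTP request to the data API.

    The function holds no database connection; the service behind the
    data API manages connection pooling, and the bearer token (obtained
    via an OIDC credential exchange) carries the caller's security
    context.
    """
    url = f"{DATA_API_URL}?q={urllib.parse.quote(query)}"
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {access_token}",
            "Accept": "application/json",
        },
        method="GET",
    )
```

Because each request is self-contained, any number of concurrent function executions can call the data API without exhausting a database connection limit.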

When transitioning to serverless it is also possible that you will face issues persisting data in the form of transactions: longer sequences of information exchange between the application and the database. By nature, a transaction is a single unbreakable unit; it must either complete fully or not happen at all. In addition, you will have to contend with multiple transactions occurring simultaneously, which can become painful when, for example, two or more attempt to update the same data at the same time, which will eventually lead to the failure of at least one of them.

A solution to this problem would be to break the transaction up into a set of events. In essence, you would have a series of serverless functions, each making a small change to the data and then triggering the following event.
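
A minimal sketch of this idea, using hypothetical step functions and in-memory dictionaries to stand in for real serverless functions and data stores chained through an event bus:

```python
# In-memory stand-ins for the real data stores.
inventory = {"dune": 3}
orders = {}

def reserve_stock(event):
    """Step 1: make one small change -- decrement stock."""
    inventory[event["book"]] -= 1
    return {**event, "type": "stock_reserved"}

def record_order(event):
    """Step 2: persist the order record."""
    orders[event["order_id"]] = event["book"]
    return {**event, "type": "order_recorded"}

def notify_user(event):
    """Step 3: final step of the chain."""
    return {**event, "type": "purchase_complete"}

# In production each handler would be a separate serverless function
# triggered by a queue or event bus; a simple loop stands in for that.
CHAIN = [reserve_stock, record_order, notify_user]

def run_purchase(order_id, book):
    event = {"type": "purchase_requested", "order_id": order_id, "book": book}
    for step in CHAIN:
        event = step(event)
    return event
```

Each function's output event is the next function's input, so the long-running transaction becomes a sequence of small, independently retryable state changes.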

The diagram below outlines a very simple implementation of a traditional application. In this example, I will use an online book store to illustrate the possibilities and advantages of migrating to a serverless architecture. Here, most of the logic is located within the Book Store Server, along with things like client authentication, database transactions and page navigation. As a result, the flow of information, control and security is handled by what is essentially a monolithic application.

On the other hand, when moving towards a serverless model, we split up the different functions of what was the Book Store server into multiple components:

  • The Authentication/Authorization logic has been migrated to OIDC. This addresses the issue of differing security contexts and standardizes the method of authentication and authorization across the application. Using OIDC, the client is able to sign in through a user pool, authenticate and receive tokens, which are exchanged for credentials to access resources; in this case, the API Gateway and DynamoDB.
  • We can utilise an API Gateway as our Data API, in which each route is associated with a serverless resource that handles that specific piece of application functionality. Our handler here is an AWS Lambda function that responds to HTTP requests via the API Gateway to either search through the Book inventory or make a Purchase. Each serverless function receives input from the HTTP request parameters, executes its logic and returns a result to the API Gateway, which in turn converts this result into an HTTP response that it passes back to the client. In the case where multiple other steps are required prior to making a purchase, for example adding a user to a subscription list or a loyalty programme, you can break this down into two or three Lambda functions that trigger one after the other, eventually modifying the state of the Purchase Data Store.
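
To illustrate the handler side, here is a minimal AWS-Lambda-style Python sketch of a function serving the search route behind the API Gateway. The in-memory BOOKS store and the exact event shape are simplifications for illustration, not the store's real implementation.

```python
import json

# Hypothetical in-memory inventory standing in for the Book Data Store.
BOOKS = {"dune": {"title": "Dune", "stock": 3}}

def search_handler(event, context=None):
    """Lambda-style handler for the book-search route.

    API Gateway delivers the request parameters in the event; the
    function is stateless and returns a dict that API Gateway converts
    into an HTTP response for the client.
    """
    params = event.get("queryStringParameters") or {}
    q = params.get("q", "").lower()
    matches = [book for key, book in BOOKS.items() if q in key]
    return {"statusCode": 200, "body": json.dumps(matches)}
```

The function keeps no state between invocations; everything it needs arrives in the event, and everything it produces leaves in the response.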

This migration to a serverless architecture therefore results in a standardised security context across the multiple parts of the application, a means of accessing data without requiring direct database access, and a more flexible structure in which the modularisation of the application better accommodates independent updates to its separate components. A final benefit of a serverless implementation is that pricing is based on the amount of resources an application or function actually uses, rather than paying for individual VMs on an hourly basis. By using an API backed by serverless functions, you are able to scale from zero rapidly and at lower cost, which is advantageous for an application that doesn’t need a permanently provisioned server or 24/7 operation.