Tips and Tricks: Working with AWS Lambda at Large Scale

AWS Lambda is a powerful tool that lets you quickly build a scalable application without having to manage hardware. As Lambda scales, however, challenges emerge that are not present in more traditional systems, or, in some cases, even in lower-traffic instances of the same function.

Getting a handle on how AWS Lambda operates at scale is a key component of building maintainable serverless applications.

This article explores some of the challenges of Lambda at scale and offers several tips and tricks to help you find your way through the confusion.

What Can Go Wrong?

Serverless applications have a number of different failure modes that arise as applications scale. In some cases these are simple expansions of problems present in all types of distributed applications. In others, they are unique issues arising from the ephemeral nature of serverless hardware, or from the need for each serverless function to carry a copy of all the code it runs.

Below is a small list of the types of things that can go wrong as a Lambda function scales:

    • The complexity of the code base becomes unmanageable. Sharing of code libraries becomes complex, as each function needs a local copy of all the code it will run.
    • The code stops responding as usage grows. Concurrency limits built into Lambda itself can conspire to ruin your user’s day.
    • Deployments become arcane, script-ridden nightmares. Since much of Lambda deployment is based on file operations, the processes that control these operations need special attention.
    • Custom-rolled templates evolve along separate paths. Given the disparate nature of Lambda functions, code drift can easily creep in among them, impairing your ability to respond quickly to customer needs.

In addition to the potential pain points listed above, there are the simple frustrations of Lambda development itself. (Ian Miell provides an excellent overview of the kinds of challenges you are likely to face in Lambda development and maintenance.) We’ll explore some ways around these issues below.

Improving Code Sharing with Lambda Layers

The default method of deploying Lambda functions requires that you deploy a zip file containing all of the code relevant to your function. This includes not only the function code itself, but also any dependent libraries your code requires to operate. Shared libraries, such as REST APIs for communicating with data stores in your system, need to be duplicated among your Lambda functions, with each function having its own local version.

This naive approach can quickly lead to confusion; without a dedicated method of standardization, you are likely to end up with multiple versions of the same access libraries running in your serverless ecosystem.

Lambda Layers solve this complexity problem by providing a shared base from which your functions can build. Lambda Layers are versioned sets of functionality that can be used as the base of your AWS Lambda function container.

Lambda Layers allow you to consolidate all of your shared supporting libraries into a reusable, versioned component. Your Lambda functions can then be built on top of these layers, removing the need to copy code around every time you make a change by building the function on top of the changing code itself. Lambda Layers let you set common base elements for your Lambda functions, reducing code duplication and overall code deployment complexity.
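To make this concrete, here is a minimal sketch of packaging a shared library into the directory layout Lambda expects for a Python layer: runtime code must sit under a `python/` prefix inside the zip so it lands on `sys.path`. The module names here (`shared/datastore.py`) are hypothetical placeholders for your own shared code.

```python
import io
import zipfile

def build_layer_zip(libraries: dict) -> bytes:
    """Package shared modules under the python/ prefix that the
    Python Lambda runtime adds to sys.path for layers."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for module_path, source in libraries.items():
            zf.writestr(f"python/{module_path}", source)
    return buf.getvalue()

# Hypothetical shared data-store client used by several functions.
layer_bytes = build_layer_zip({
    "shared/__init__.py": "",
    "shared/datastore.py": "def get_item(key):\n    ...\n",
})
```

The resulting archive can then be published once (for example with `aws lambda publish-layer-version`) and referenced by version ARN from each function, instead of bundling the same library into every deployment package.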

Beware of Concurrency Limits

During initial development and deployment of your serverless application, concurrency limits are probably far from your mind. In most small applications—say, batch functions that run on periodic data uploads—there may not even be a path to invoking significant numbers of concurrent function executions in your serverless ecosystem. 

However, for those functions whose usage scales along with your application traffic, it's important to note that AWS Lambda functions are subject to concurrency limits. By default, all functions in a region share an account-level limit of 1,000 concurrent executions; once that limit is reached, further invocations are throttled. Throttled synchronous calls fail with an error, while asynchronous invocations are retried after a delay. This means that as your application scales, your high-traffic functions are likely to see drastic reductions in throughput at exactly the time you need them most. To work around this limit, request a concurrency quota increase from AWS, and consider reserving concurrency for the functions you expect to scale.
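In practice, a throttled synchronous invocation surfaces as an error to the caller, so well-behaved clients retry with exponential backoff rather than hammering the function. A minimal sketch of that pattern follows; `ThrottledError` and the `invoke` callable are stand-ins for your actual client and its throttling exception, not real AWS SDK names.

```python
import time

class ThrottledError(Exception):
    """Stand-in for a 429/TooManyRequests response from Lambda."""

def invoke_with_backoff(invoke, payload, max_attempts=5, base_delay=0.1):
    """Retry a throttled invocation with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return invoke(payload)
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Simulated client that is throttled twice before succeeding.
calls = {"n": 0}
def flaky_invoke(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ThrottledError()
    return {"status": 200, "payload": payload}

result = invoke_with_backoff(flaky_invoke, {"id": 1})
```

Most AWS SDKs apply a similar retry policy for you; the sketch just shows what that policy is doing on your behalf when throttling kicks in.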

Cleaning Up Your Deployments with CloudFormation

The default means of deploying AWS Lambda functions is to upload zip files to S3 buckets, an error-prone process requiring extensive duplication. This is done independently of the steps required to configure the execution environment for your Lambda functions, leaving you to find your way through brittle UIs to define API Gateway resources, IAM roles, S3 buckets, and more. Without dedicated effort, this manual process can become hard to document and maintain, with critical knowledge about your deployment processes split among multiple team members.

CloudFormation lets you define your infrastructure at the code level, creating and tying together numerous different service configurations and creations into a single set of template files. CloudFormation allows you to create standards and reusable elements around your infrastructure definitions, solidifying your resource needs into a single location. By properly applying CloudFormation, you can remove significant complexity from Lambda deployment.
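As a sketch of what "infrastructure at the code level" means here, the fragment below assembles a minimal CloudFormation template (in JSON, which CloudFormation accepts alongside YAML) declaring a Lambda function and its execution role. The resource names, bucket, and key are hypothetical placeholders.

```python
import json

# Minimal template: an execution role plus a function that uses it.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ProcessorRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"Service": "lambda.amazonaws.com"},
                        "Action": "sts:AssumeRole",
                    }],
                },
            },
        },
        "ProcessorFunction": {
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "Runtime": "python3.12",
                "Handler": "app.handler",
                "Role": {"Fn::GetAtt": ["ProcessorRole", "Arn"]},
                # Placeholder bucket/key for the deployment package.
                "Code": {"S3Bucket": "my-deploy-bucket",
                         "S3Key": "processor.zip"},
            },
        },
    },
}

template_body = json.dumps(template, indent=2)
```

Because the whole stack lives in one reviewable file, it can be version-controlled and deployed repeatably (for example with `aws cloudformation deploy`) instead of being reassembled by hand in the console.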


Creating Templates and Standards with SAM

Creating new Lambda functions is a manual process when done without appropriate tooling. Setting aside the frustration of needing to use UIs and web wizards to configure access to your critical functionality, all of your related functions deploy individually rather than as part of a cohesive whole. This makes building a comprehensive view of your application challenging, as each element of your application’s functionality lives on its own island.

The AWS Serverless Application Model, or AWS SAM, provides you with the tools you need to turn your collection of functions into an application in every sense of the word. By providing a template file that you can modify according to your needs, AWS SAM automatically generates all of the serverless resources your application needs to operate. AWS SAM operates on template files, and can configure not only code paths, but also any of the potential AWS Lambda trigger sources. This allows you to create a cohesive view of your application, with all the configuration you need in one place.
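To illustrate, the fragment below sketches the shape of a SAM template as a Python structure: the `AWS::Serverless-2016-10-31` transform marks it as a SAM template, and the `Events` block wires an API Gateway trigger to the function without you defining the gateway resources yourself. The function name, code path, and route are hypothetical.

```python
# Sketch of a SAM template's structure; in practice this lives in
# template.yaml, but the shape of the data is the same.
sam_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Transform": "AWS::Serverless-2016-10-31",
    "Resources": {
        "OrdersFunction": {
            "Type": "AWS::Serverless::Function",
            "Properties": {
                "CodeUri": "src/orders/",
                "Handler": "app.handler",
                "Runtime": "python3.12",
                # A trigger source: SAM expands this Event into the
                # API Gateway resources wired to the function.
                "Events": {
                    "CreateOrder": {
                        "Type": "Api",
                        "Properties": {"Path": "/orders",
                                       "Method": "post"},
                    },
                },
            },
        },
    },
}
```

At deploy time, SAM expands this compact declaration into the full set of CloudFormation resources, which is what lets one file describe the whole application rather than one function at a time.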

Tying Your Lambda Functions into CI/CD

As serverless applications scale in complexity, it becomes critical to do thorough testing to ensure that each deployment of your serverless function suite is successful. Because the functions themselves effectively exist on islands within the AWS ecosystem, tying their deployment into your organization's continuous-integration/continuous-deployment pipeline can be a frustrating process, with many potential error points. Each function needs to be treated individually, so there is a lot of potential for copy-paste errors to emerge, particularly as your application grows.

With proper application of CloudFormation and AWS SAM, you can create deployment pipelines that integrate with nearly all CI/CD tools.

First, on the CI side, AWS SAM lets you create test suites and command-line-based testing mechanisms that can be used to automatically verify each change to your serverless functionality. With a few configuration lines in the AWS SAM template.yaml file, you can create a unit test suite and a deployment harness that integrate with any CI/CD pipeline that supports command-line interaction.
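Because a Lambda handler is ordinary code, the unit-test layer of such a suite can run anywhere your CI runs, with no deployment at all. Here is a sketch, using a hypothetical handler, of the kind of plain test that slots into any command-line CI stage:

```python
def handler(event, context):
    """Hypothetical Lambda handler: validates and accepts an order."""
    order_id = event.get("order_id")
    if order_id is None:
        return {"statusCode": 400, "body": "missing order_id"}
    return {"statusCode": 200, "body": f"order {order_id} accepted"}

# Handler code is plain Python, so it can be exercised directly in a
# CI job (e.g. via pytest) before any deployment step runs.
def test_handler_accepts_order():
    assert handler({"order_id": 42}, None)["statusCode"] == 200

def test_handler_rejects_missing_id():
    assert handler({}, None)["statusCode"] == 400

test_handler_accepts_order()
test_handler_rejects_missing_id()
```

Gate the SAM deployment step on these tests passing, and broken handlers never reach the packaging stage.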

Couple this with infrastructure definitions written in AWS CloudFormation template scripts, and you can automate every element of your serverless application’s infrastructure.

Scaling Your Lambda Knowledge

AWS Lambda is great for getting functionality to the user quickly. However, it is important to be aware of AWS Lambda limitations that can inhibit feature delivery as your application scales. With proper application of best practices and organizational standards—improving your function’s concurrency support, incorporating CloudFormation and AWS SAM, and tying your functions into your CI/CD pipelines—you can remove nearly all of the pain from Lambda deployment.
