The term “serverless architecture” is a recent addition to the technology lexicon, coming into common use following the launch of Amazon Web Services (AWS) Lambda in 2014. The term is both paradoxical and provocative: there really are servers backing “serverless” technology, but the name speaks to one of IT’s biggest headaches, server administration. Serverless architecture offers the promise of writing code without the worries of ongoing server maintenance.

But is the reality as sweet as the promise? The agency I work for recently put this question to the test when we deployed an app in production using a serverless architecture.

For our project, we opted for a serverless architecture using AWS Lambda, API Gateway, S3, and DynamoDB to power a website and chatbot application for a major entertainment company. A serverless approach was a natural fit because the app was expected to receive a significant amount of traffic during the initial few months, likely tapering off thereafter. Relying on AWS-managed components meant we could achieve scalability and high availability without the cost or complexity of setting up redundant, load-balanced, auto-scaled EC2 instances. More importantly, it meant the client only paid for actual compute time.

The end result was fantastic: The deployed application ran flawlessly. The journey to get there, on the other hand, was a bit more challenging.

Here are five key lessons for building a serverless app in the real world.

1. Allow sufficient time for Lambda and API Gateway configuration

The first thing to be aware of when building a serverless web app is that there is a lot of configuration involved with each Lambda function, including:

  • Uploading the code
  • Configuring the Lambda function
  • Creating the API endpoint (e.g. specifying which HTTP methods it listens to)
  • Setting up the IAM security role for the function
  • Configuring the behavior of the HTTP request (e.g. how the request variables are received and transformed into Lambda function arguments)
  • Configuring the behavior of the HTTP response (e.g. how return variables are sent back to the caller and transformed into an HTTP/JSON format)
  • Creating the staging endpoint
  • Deploying the API

It’s critical to factor in configuration time for each function when mapping out your project schedule. But as annoying as the amount of configuration may seem, remember that Lambda functions are nanoservices, and their configuration is part of the deployment process itself. It wouldn’t seem strange to spend this amount of time configuring and deploying an API microservice. And on the flip side, it makes life really easy for IT operations.
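To make those steps concrete, here is a rough sketch of the same setup using the AWS CLI. This is an illustration only: the function name, IAM role ARN, API and resource IDs, region, and runtime are all placeholder values, not ones from our project.

```shell
# Hypothetical AWS CLI walkthrough of the per-function setup steps.
# All names, account IDs, and ARNs below are placeholders.

# Upload the code and create the Lambda function with its IAM role
aws lambda create-function \
  --function-name my-function \
  --runtime nodejs18.x \
  --role arn:aws:iam::123456789012:role/my-lambda-role \
  --handler index.handler \
  --zip-file fileb://my-function.zip

# Create the API endpoint and specify which HTTP method it listens to
aws apigateway put-method \
  --rest-api-id abc123 \
  --resource-id def456 \
  --http-method POST \
  --authorization-type NONE

# Configure how the HTTP request is handed off to the Lambda function
aws apigateway put-integration \
  --rest-api-id abc123 \
  --resource-id def456 \
  --http-method POST \
  --type AWS \
  --integration-http-method POST \
  --uri arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:my-function/invocations

# Deploy the API to a stage
aws apigateway create-deployment \
  --rest-api-id abc123 \
  --stage-name staging
```

Multiply a sequence like this by every function in the application, and it becomes clear why the configuration time adds up.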

2. Documentation is light, so be prepared for some detective work

The real challenge is less about the number of configuration steps per se and more about the lack of documentation in general. Error messages are often cryptic and the number of configuration options is large, making diagnosis time-consuming. Making matters worse, documentation and community support are still immature, so there isn’t a huge corpus of community knowledge to draw from. This situation will undoubtedly improve over time as serverless technology gains popularity.

3. Find the right balance between tight cohesion and loose coupling

One of the more difficult questions we faced was how to structure the application itself. Prior to the advent of serverless architecture, functions were assumed to exist within a larger cohesive application with shared memory and a call stack. In contrast, there is a notable dearth of published design patterns for serverless applications. Even Lambda’s close cousin, the microservice, operates much like a traditional application internally with shared memory and a call stack, so the design patterns for microservices didn’t provide much help either. With AWS Lambda, you are literally deploying a single function, outside the context of a larger application. So how do you live without the things we take for granted every day like shared functions and common configuration settings?

One option is to take nanoservices to the extreme and expose every function as a Lambda. While this approach might sound good at first blush, in practice it’s a nightmare. It does solve the shared code problem, but it creates a ton of configuration busy work, and the performance of your application will slow to a crawl, since every function invocation becomes an out-of-process call.

At the other end of the spectrum, you can deploy a single Lambda function that acts as a controller for the entire application, using parameter values to handle the request appropriately. While this solves the shared code problem, it creates another one: complexity. With this approach, the controller function can quickly become bloated and unmanageable.

For our project, we landed on the Goldilocks approach (not too hot, not too cold) and created a limited set of Lambda functions that mapped logically to how client callers would consume them. This felt like the right solution, although it still left us with the problem of dealing with shared code and configuration.

4. Establish an efficient development workflow

The development workflow for a serverless app is different from your typical project. First of all, technically speaking, there is no “local” development environment, since everything runs on cloud-hosted components. Another idiosyncrasy of serverless development is that the process scope of each Lambda function is isolated, yet the functions exist within the logical context of a larger parent application. This means there is a need to access shared code across Lambdas but no good way to share it.

Our solution was simple but effective. First, we created a separate directory for the shared code outside the directories containing the code for each Lambda function. The shared code was updated in this parent folder. Then, we wrote a simple bash script that copied the shared code from the parent directory into each of the Lambda function directories. The script also used the AWS CLI to update the Lambda functions in AWS, deploying to the appropriate staging environment based on a command line argument. Using this approach, we were able to kill two birds with one stone: It allowed us to rapidly deploy and test our Lambda functions in AWS, and it gave us a way to update shared code in one location while ensuring it was used consistently across all Lambdas.
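In outline, the script looked something like the following sketch. The directory names, function names, and stage argument here are hypothetical stand-ins for illustration, not our actual project values.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of our copy-and-deploy helper.
# Directory names, function names, and stages are placeholders.
set -euo pipefail

# Copy the shared code from the parent folder into one Lambda directory,
# so every function ships with an identical copy of the common code.
sync_shared() {
  local shared_dir="$1" lambda_dir="$2"
  mkdir -p "$lambda_dir/lib"
  cp -r "$shared_dir/." "$lambda_dir/lib/"
}

# Zip a function directory and push it to AWS for the given stage.
deploy_lambda() {
  local lambda_dir="$1" stage="$2"
  (cd "$lambda_dir" && zip -qr "../$lambda_dir.zip" .)
  aws lambda update-function-code \
    --function-name "$lambda_dir-$stage" \
    --zip-file "fileb://$lambda_dir.zip"
}

# Usage: ./deploy.sh staging|prod
# for dir in get-show-info post-chat-message; do
#   sync_shared shared "$dir"
#   deploy_lambda "$dir" "${1:-staging}"
# done
```

Keeping the shared code in one parent folder and copying it at deploy time traded a little duplication in the deployment artifacts for a single source of truth in the repository.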

While this wasn’t the most elegant solution, it worked extremely well in practice. For the next project, we’ll be keeping an eye on the progress of the Serverless Framework, an open source project designed to ease some of the development workflow problems associated with serverless architecture.

5. Automate serverless infrastructure with CloudFormation

Given the complexity of the configuration of an AWS serverless application, automating the creation of the app’s infrastructure is a must. It’s challenging enough to configure a serverless application in a development environment, and you certainly don’t want to manually redo these steps in QA, staging, and production – it’s just too easy to miss something. To address this, we wrote a parameterized CloudFormation template to create the full application stack, including the DynamoDB database and all of the API Gateway and Lambda configurations.
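With a parameterized template, the same stack can be pushed to each environment with a single CLI call. The template file, stack name, and parameter below are placeholders, not the names from our project:

```shell
# Hypothetical sketch: deploy one parameterized CloudFormation template
# to a given environment. Names are illustrative placeholders.
aws cloudformation deploy \
  --template-file app-stack.json \
  --stack-name chatbot-staging \
  --parameter-overrides Stage=staging \
  --capabilities CAPABILITY_IAM
```

Running the same command with a different stack name and parameter value reproduces the stack in QA or production without any manual console steps.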

When you use CloudFormation to script the infrastructure, be prepared for tight coupling between the application code and the infrastructure scripts. Because you’re deploying individual functions, you’ll need a CloudFormation resource for each Lambda and its corresponding API Gateway endpoint. This means that low-level changes to the application code can require changes to infrastructure scripts. As such, your developers have to be involved with updating the CloudFormation scripts as they add and edit Lambda functions.

Also, make sure you clearly understand the security model for Lambda functions and API Gateway endpoints before you create your CloudFormation script. There are some security settings that AWS applies invisibly in the background when you create an API Gateway endpoint in the AWS Console, and you’ll need to recreate them explicitly in your CloudFormation script. Specifically, you need to grant API Gateway permission to invoke each Lambda function.

Including this CloudFormation snippet once for each API Gateway endpoint that calls a Lambda function will save you some potential head-scratching:

    "APIInvokePermissionForMyLambdaFunction": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
        "FunctionName": "MyLambdaFunction",
        "Action": "lambda:InvokeFunction",
        "Principal": "apigateway.amazonaws.com"
      }
    }

Is serverless worth the effort?

So is serverless architecture all roses and rainbows? Sadly, it is not. Like any new technology, it has some warts and a few kinks that need to be worked out. Although you don’t have to do much (if any) server administration, you do have to perform quite a bit of service configuration. So the real question is: Is serverless architecture worth it?

To answer this, it’s important to remember why you would choose a serverless architecture in the first place. Its primary benefit is not that it streamlines work during development but that it reduces the burden of administration after development. The value that serverless architecture provides is its support for rapidly building scalable and highly available applications with minimal maintenance or operational support. This is a huge benefit that cannot be overstated. In fact, this benefit is so great that it justifies the additional time spent during development, especially if you can reduce this inefficiency over time as your team learns and adapts. The bottom line is that serverless architecture provides enormous business value, and it will only continue to grow in adoption.

Jake Bennett is CTO at Seattle-based agency POP.