Building an API using Serverless framework

Intspirit Ltd
Published in AWS Tip · Aug 24, 2022

Serverless framework logo.

In this article we will go over the Serverless basics and architecture, and highlight how to build a Serverless API in AWS for your application. We will also shed light on the situations when going with the Serverless framework is worth it.

If the first thoughts you have after seeing the words “serverless” and “lambda” are a “stolen server” and “wavelength” correspondingly, then this article is for you!

What is Serverless?

Serverless is a way to build and run applications & services without having to manage servers. Serverless applications still run on servers, but all server management like maintaining, provisioning, and scaling is done by a cloud provider (e.g. AWS, Google Cloud Platform etc). Not that hard right?

Going next through the basics, there are two types of Serverless applications:

  • The first type deals with backend functionality as a whole — BaaS (backend-as-a-service). General idea: the backend is provided, and you only have to write and maintain the frontend;
  • The second addresses only microservices in applications, responding to the events that occur — FaaS (function-as-a-service). The concept is simple: execute your code without thinking about servers and pay only for the compute time consumed.

Further on we’re going to build an API (we’ll be focusing on the FaaS type of application in this article) based on the Serverless framework and using AWS services.

Okay, it’s time to see the big picture. Schematically, such an application will look like this:

Serverless high-level architecture.
Figure 1. Serverless high-level architecture

Let’s take a closer look at each component:

  1. AWS API Gateway is an AWS service for building APIs. Any request sent here triggers a lambda (in fact, it can trigger anything from AWS Kinesis to even a regular EC2, but that’s a topic for another article) that’s tied to a specific endpoint. Amazon API Gateway allows you to create REST, HTTP, and WebSocket APIs at any scale.
  2. AWS Lambda is a service that lets you run your code without managing servers. Lambda performs all of the administration of the compute resources, capacity provisioning and automatic scaling. AWS Lambdas will act as controllers for our API.
  3. Everything else — handlers, services, guards — is our logic built according to a pattern we choose. This logic will be n̶e̶a̶t̶l̶y̶ ̶s̶t̶r̶u̶c̶t̶u̶r̶e̶d̶ minced together and put into Lambda handlers.
  4. DB — our database. AWS documentation advises us to use DynamoDB (AWS’s own NoSQL database). But in reality, we can use any DB we are familiar with, such as RDS or DocumentDB (translation for the uninitiated — PostgreSQL or MongoDB).
  5. AWS CloudFormation is a service that gives developers and businesses an easy way to create a collection of related AWS and third-party resources, and provision and manage them in an orderly and predictable fashion.

Together, all AWS Serverless services (API Gateway, Lambda, etc.) are referred to as the AWS Serverless Stack. When these technologies first appeared, working with them was extremely inconvenient due to the complexity of describing CloudFormation for both API Gateway and Lambda. This issue was later resolved with the Serverless Framework.

Serverless Framework is an open-source DevOps framework for configuring API Gateway and Lambda.

Later in the article we will see how simple it is to work with.

Why use Serverless?

Despite a number of difficulties and limitations that a Serverless application developer will have to face (more on that below, hehe!), the Serverless infrastructure also has a number of undeniable advantages:

1. Simple deployment.

The life of a DevOps engineer becomes easier and more enjoyable when deploying Serverless apps than when setting up and configuring an application on Amazon EC2. All you need is:

  • Credentials for an AWS account with the necessary access.
  • And… that’s it — CloudFormation deployment will be launched with one command.

Some services (for example, SQS — message queues for microservices, distributed systems, and Serverless applications) cannot be created by the Serverless framework out of the box. They will have to be added manually through the AWS UI, and then the credentials of these freshly created services will have to be manually added to the Serverless config.

2. Out of the box scalability.

A simple example: when the horizontal load on your application increases, instead of moving to a more powerful server you need… nothing. Well, you could play with your lambdas’ concurrency, but that’s not always necessary. (Concurrency is the number of requests that your function is serving at any given time. When your function is invoked, Lambda allocates an instance of it to process the event. When the function code finishes running, it can handle another request. See the AWS concurrency docs to learn more.) At peak load, the number of simultaneously running lambdas can grow up to 10,000, and none of them are bound to the computing power of your EC2 VMs.
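If you do want to play with concurrency, the Serverless framework exposes it per function. A minimal sketch (the function name, handler path, and numbers here are made up for illustration):

```yaml
functions:
  api:
    handler: lambdas/api.handler
    # Never run more than 50 instances of this lambda at once.
    reservedConcurrency: 50
    # Keep 5 instances warm to avoid cold starts (billed separately).
    provisionedConcurrency: 5
```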

3. Easy interaction between your services and AWS services.

  • Most of the work, if not all of it, has already been done for you. There are 40+ Lambda templates for the most common use cases (e.g. microservice-http-endpoint, dynamodb-process-stream, sns-message, sqs-poller, cloudwatch-alarm-to-slack-python, s3-get-object, etc.)
  • AWS documentation… it’s really good!
  • Ease of interaction. Again, in most cases, you will not have to get inside the AWS UI to redo service settings. Serverless app can do it out of the box (with very rare exceptions).

4. Cost.

You only pay for the time that Lambdas work. No requests means no work time for lambdas, which means no payments have to be made.

And now f̶i̶n̶a̶l̶l̶y̶,̶ ̶t̶h̶e̶ ̶d̶e̶s̶e̶r̶t̶!̶ the disadvantages of course:

1. Cost.

What do you mean? Wasn’t this an advantage?

There are about 2,600,000 seconds in a month. If your EC2 has been working hard all this time, you will still pay a fixed amount ($44 and up). But if your AWS API Gateway was constantly bombarded by thousands upon thousands of requests, and lambdas were constantly being invoked, this will cost a pretty penny. You will pay both for the number of requests and for “megabytes per time” (the cost of lambda work is calculated by how much RAM was used and for how long).

Let’s count together. Here is a CloudWatch log for an invoked lambda (real project, which resolves the relationship between three third-party services):

CloudWatch log of invoked lambda function.
Figure 2. CloudWatch log of invoked lambda function

The cost of this particular call (outside the free tier) is about $0.0000085. But what if it was invoked a million times? (~$8.5) And if you have 10 such lambdas? (~$85) Or 100? (~$850)
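The arithmetic is easy to reproduce. A hedged back-of-the-envelope sketch using rough 2022 Lambda pricing (~$0.0000166667 per GB-second plus $0.20 per million requests — check the current price list for your region; the memory size and billed duration below are assumed, not taken from the figure):

```javascript
// Rough Lambda pricing constants (region-dependent; verify against AWS).
const PRICE_PER_GB_SECOND = 0.0000166667;
const PRICE_PER_REQUEST = 0.2 / 1e6; // $0.20 per million requests

// Cost of a single invocation given configured memory and billed duration.
function costPerInvocation(memoryMb, billedMs) {
  const gbSeconds = (memoryMb / 1024) * (billedMs / 1000);
  return gbSeconds * PRICE_PER_GB_SECOND + PRICE_PER_REQUEST;
}

// e.g. a 1024 MB lambda billed for 500 ms:
console.log(costPerInvocation(1024, 500));       // ≈ $0.0000085 per call
console.log(costPerInvocation(1024, 500) * 1e6); // ≈ $8.5 for a million calls
```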

2. Debugging.

Even if you have a well-thought-out system of logs and they do not spaghetti into CloudWatch (which they tend to do unless properly configured: each lambda execution environment creates a new log stream and pours logs into CloudWatch seemingly at random), debugging remains a non-trivial task.

3. Relocating your application.

You are tied to AWS. Once and for all. Take it as a given. Moving to, let’s say, Google Cloud Platform will, of course, be cheaper than rewriting your application from scratch, but not dramatically so.

4. Security.

A server that runs Serverless functions runs them for myriad customers, which opens up a lot of security concerns. TechRepublic sister site ZDNet lists 10 potential security risks associated with Serverless computing, which include:

  • Function event data injection, which is an SQL injection-style attack on a server running Serverless functions;
  • Insecure Serverless deployment configuration, which accounts for any number of mistakes on the administrative end that leave Serverless computing servers open to man-in-the-middle attacks;
  • Inadequate monitoring and logging of functions, which can tip administrators off to attackers performing reconnaissance to test the potential for attack;
  • Insecure third-party dependencies — Serverless functions that call on third-party dependencies can put data at risk if those dependencies contain malicious code;
  • DDoS attacks on Serverless platforms can overload them and take down functionality for multiple customers at the same time.

5. Limitations of Serverless API.

  • AWS API Gateway payload limit — 10 MB — the size of an incoming request’s body that AWS API Gateway can “digest”;
  • AWS Lambda payload limit — 6 MB — the size of an incoming event’s body that AWS Lambda can “digest”.

What does this mean? This means that unless we d̶a̶n̶c̶e̶ ̶w̶i̶t̶h̶ ̶a̶ ̶t̶a̶m̶b̶o̶u̶r̶i̶n̶e̶ put our head down and work around it, we cannot put a body over 6 MB into a single POST request to our lambda. Sad but true story: we ran into both limitations in a real project.

How to use Serverless?

We have covered the structure of Serverless applications, but have not even come close to the most important thing — how does the Serverless framework actually work?

So, once the logic of our application is ready and we’ve written thousands of lines of code, serverless finally comes into play. We just give the command to deploy (with the serverless deploy command)… and nothing happens. First, we must put together a serverless configuration file, in which we explain to the framework what comes from where, what connects with what, and which rights to give to what.

AWS API Gateway acts as our endpoints, and AWS Lambda acts as the controller. The Serverless framework itself can, of course, create boilerplates. But we will do it another way. We will do everything manually. A textbook helloworld will look like this, done step by step:

  1. Install Serverless globally: `npm install -g serverless`
  2. Create a folder for our project with any name.
  3. Create a lambdas folder inside the project folder.
  4. Inside the lambdas folder, create a helloworld.js file with the following content:

This handler is our lambda that will be called when accessing a specific URL, which we will set in the config. At its core, it’s just a function which accepts up to 3 arguments:

function(event, context, callback) {}

One of them is not even necessary for our helloworld, but still:

  • event — a basic request with URL, METHOD, HEADERS, BODY, etc.
  • context — contains information about the invocation, function, and execution environment
  • callback — a callback used in non-async (synchronous-style) handlers; you call it when your code is done. Roughly speaking, this is return (and in a lambda, return is an analogue of the send method in an Express server), but for callback-style code.

In our case, the lambda function simply always sends a standard response — status 200, body { message: ‘Hello world’ }.

5. Also inside the project folder, let’s create a serverless.yml and fill it like this:

Note: serverless.yml is plain YAML, hence the Python-style significant whitespace ^^
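The exact config isn’t reproduced here, but a minimal serverless.yml matching this setup could look like the following sketch (service name, runtime, and region are placeholders — adjust to your project):

```yaml
service: helloworld

provider:
  name: aws
  runtime: nodejs16.x
  region: eu-west-1

functions:
  helloworld:
    handler: lambdas/helloworld.handler
    events:
      - httpApi:
          path: /hello
          method: get
```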

6. We are almost ready for the deployment, but… we still need AWS account credentials. Set them up from any terminal (example for macOS & Ubuntu):

  • export AWS_ACCESS_KEY_ID=<your-key-here>
  • export AWS_SECRET_ACCESS_KEY=<your-secret-key-here>

7. Now, boldly execute the `sls deploy` command

8. Once deploy has finished, we will see something like this:

The console says the service is deployed to a stack.

Our first endpoint is ready! Now sending a GET request to https://asudrow3hk.execute-api.eu-west-1.amazonaws.com/hello will return a response of {"message":"Hello world"}.

A screenshot of Stacks in the AWS CloudFormation.
Figure 3. AWS CloudFormation->Stacks
A screenshot of AWS Buckets.
Figure 4. AWS Buckets
A screenshot of AWS Lambdas.
Figure 5. AWS Lambdas

Advanced serverless.yml configuration.

There are many addons & plugins for the Serverless framework, whether official or not so much. Here are some of the real must-have ones:

1. serverless-dotenv-plugin — allows getting variables from .env instead of hardcoding them inside serverless.yml.

It would look something like this:
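A sketch of what that might look like (DB_HOST here is a made-up variable that would live in your .env file, not in the repo):

```yaml
useDotenv: true

plugins:
  - serverless-dotenv-plugin

provider:
  name: aws
  environment:
    # Pulled from .env at deploy time instead of being hardcoded.
    DB_HOST: ${env:DB_HOST}
```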

Note: If we want to make more than one stage/env, we need to perform the following (in the simplest way):

  • Create a desired number of serverless.yml files and accordingly name them serverless.stageName.yml
  • For each of them, inside the provider section in the stage property put a corresponding stageName
  • Create the same amount of .env.stageName files
  • Deploy using the following command: “export NODE_ENV=stageName && sls deploy -c ./serverless.stageName.yml”

Also keep in mind that lambda names for different environments should be different.

2. serverless-ignore — basically .gitignore for Serverless. Even the syntax is the same. It is used via an .slsignore file in the root folder of the project. All the source files that you specify in this file will not be included in the deployment. Be sure to include your .env.* files.

Note: if AWS packages appear in your lambdas (say, for working with an S3 bucket, etc.), you can add the line node_modules/aws-sdk/*. The Lambda runtime already includes aws-sdk, so you will save on the deployment size.
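A hypothetical .slsignore illustrating both points (the tests/ entry is just an example of project files you might exclude):

```
# .slsignore — same syntax as .gitignore
.env.*
tests/
node_modules/aws-sdk/*
```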

In order to connect plugins, it is not enough to simply install them (they are ordinary npm packages and are installed accordingly). You also need to add a plugins section to serverless.yml:
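For the two plugins above, that section would look like:

```yaml
plugins:
  - serverless-dotenv-plugin
  - serverless-ignore
```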

Access issues. AWS IAM policies.

If your lambdas work with AWS services, you need to clearly define roles & policies. Example for reading and deleting from a bucket:
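A sketch of such a policy at the provider level (the bucket name is a placeholder; the statement grants read and delete on its objects):

```yaml
provider:
  name: aws
  iam:
    role:
      statements:
        # Allow reading and deleting objects in one (hypothetical) bucket.
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:DeleteObject
          Resource: arn:aws:s3:::my-bucket/*
```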

You can add an ‘iam’ section to:

  • “provider” section — in this case all listed policies will be given to ALL lambdas within your Serverless app
  • any individual lambda section. In this case, all listed policies will be given only to this particular lambda

The easiest resource identifier to use is an ARN. In the case of a bucket, you can append a path to the ARN to specify exactly where in the bucket your lambda can roam, or simply append `/*`, which means access to the entire bucket.

Figure 6. An example of where to get ARN

“timeout” property.

Another important lambda property is the timeout. It’s not what you think. AWS API Gateway has a default integration timeout limit of 29 seconds. But a lambda’s timeout is its lifetime, which can be set to up to 900 seconds.

A real situation: the lambda is overloaded with logic or a lot of sequential requests to third-party services, and takes longer than the API Gateway timeout to finish executing. It might still finish the task, if it fits within the maximum timeout of 900 seconds, but you will not get an answer.

Example:
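A sketch of raising the limit for one function (the function name and handler path are made up):

```yaml
functions:
  heavyTask:
    handler: lambdas/heavyTask.handler
    # Lifetime of the lambda itself, in seconds (maximum 900).
    timeout: 900
```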

Lambda structure.

Of course in our helloworld example, everything is plain and simple. Once you start working on more complex stuff, you should structure your code according to whichever pattern you prefer.

It is also recommended to move from JS to TS. This will allow you to avoid a lot of pitfalls while writing code. In our helloworld case, the tsc command (or any other transpiler) would simply be added to the deployment launch script, and the paths to the lambda handlers in serverless.yml would point into build/.

Lambda triggers.

Although the article is called “building an API using Serverless”, keep in mind that a request from the AWS API Gateway is far from the only lambda trigger out there. Events of many AWS services can act as a lambda trigger. Some examples include upload to AWS S3, message in SQS, or even another lambda (usually via AWS Step Functions). It’s even possible to turn on a schedule for lambda execution.
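A sketch of two non-HTTP triggers in serverless.yml (function names and the bucket are hypothetical):

```yaml
functions:
  nightlyJob:
    handler: lambdas/nightlyJob.handler
    events:
      # Run on a schedule instead of an HTTP request.
      - schedule: rate(1 day)
  thumbnailer:
    handler: lambdas/thumbnailer.handler
    events:
      # Fire whenever an object lands in this bucket.
      - s3:
          bucket: my-uploads-bucket
          event: s3:ObjectCreated:*
```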

In conclusion.

As a parting word, we will say: you must be 101% sure that you need a Serverless application. You shouldn’t go the Serverless way just because you’ve been told that “it’s what the cool kids do nowadays”. Weigh all the pros and cons and decide for yourself.

Article written by an engineer from Intspirit.

Follow us on Medium to not miss new articles.
Follow us on LinkedIn and Facebook to be aware of other awesome projects we do.
Test yourself in our Telegram channel with JS quizzes frequently asked in job interviews.

Thank you for reading,
and let’s make IT better!
