Facts and fallacies of Serverless

Photo by Zdeněk Macháček on Unsplash, in honor of “Facts and Fallacies of Software Engineering”, a very good book by Robert L. Glass

I recently read an article giving some reasons not to go with Serverless. I’d like to go through the arguments in this article (and many others like it), because they reflect common misunderstandings and misconceptions about Serverless.

Fallacy #1: Serverless is the wrong tool for you, if you know the load your application has to be able to handle.

It is certainly true that Serverless is particularly interesting for its elasticity and its ability to scale up and down to handle unexpected or inconstant load. But that doesn’t mean you cannot use it for a predictable or constant load…

The argument given in the article mentioned above is cost: you would pay $23.36/month with Fargate (2 vCPU, 2 GB of RAM) versus $14.02/month for a t4g.small EC2 instance (same capacity). But, as often, the author falls into a trap: this is not the TCO (Total Cost of Ownership). You will need to manage the EC2 instance yourself: orchestration and isolation (containers), updates, patching, … and certainly pay someone to do it, whereas all of this is handled by AWS in the case of Fargate.

Also, Lambda is not even mentioned in this comparison. Using Lambda, you would probably need API Gateway in front of it. Let’s do some math.

API Gateway costs = $3.60 (3M requests / month)
Lambda costs = $10.60 (3M requests / month, 800 ms and 256 MB of RAM)
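These figures are easy to reproduce with a back-of-the-envelope script. The rates below are assumptions based on AWS’s public pay-per-use pricing at the time of writing (they vary per region and per API type), chosen to match the numbers above:

```python
# Back-of-the-envelope reproduction of the cost figures above.
# Rates are assumed from AWS pay-per-use pricing (region-dependent):
API_GW_PER_MILLION = 1.20            # API Gateway requests, $/million
LAMBDA_PER_MILLION = 0.20            # Lambda invocations, $/million
LAMBDA_PER_GB_SECOND = 0.0000166667  # Lambda compute, $/GB-second

def monthly_cost(requests, duration_s, memory_gb):
    """Return (API Gateway cost, Lambda cost) for one month of traffic."""
    api_gw = requests / 1e6 * API_GW_PER_MILLION
    lambda_requests = requests / 1e6 * LAMBDA_PER_MILLION
    lambda_compute = requests * duration_s * memory_gb * LAMBDA_PER_GB_SECOND
    return api_gw, lambda_requests + lambda_compute

# 3M requests/month, 800 ms average duration, 256 MB of RAM
api_gw, lam = monthly_cost(3_000_000, 0.8, 0.25)
print(f"API Gateway: ${api_gw:.2f}, Lambda: ${lam:.2f}, total: ${api_gw + lam:.2f}")
```

Running it gives roughly $3.60 + $10.60 ≈ $14, the same ballpark as the EC2 instance.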

To stay at the same price as the EC2 instance (~$14), I can serve 3 million requests per month with my API Gateway / Lambda combo. But once again, this comparison does not make sense, because you really don’t get the same value from each service:

Comparison Lambda / EC2

Lambda is fully managed by AWS, it scales automatically, it has built-in high availability, … And that’s probably what the author meant to say. So, to get back to the initial statement, I would rephrase it like this: Serverless may not be necessary if you just need one instance, no HA, and a pretty constant load. But again, it’s worth evaluating the cost of both options to be sure, and not just the cost of infrastructure: think TCO (look at the little buddy on the right…)! This article may be useful…

Fallacy #2: Serverless is the wrong tool for you, if low response time is a must-have requirement

Photo by Oscar Sutton on Unsplash

This one is recurring… “Functions take time to initialize and actually serve the first request”: this is the famous cold start. I won’t say this is wrong, but it really depends on the load on your function: if your application is under fairly constant load, you are unlikely to hit cold functions (a few percent of invocations).

Furthermore, it’s been almost a year and a half since AWS announced provisioned concurrency, a feature that keeps functions warm and ready to handle requests. Combined with Application Auto Scaling, you can automatically increase the amount of provisioned concurrency during times of high demand and decrease it when demand drops, or you can schedule provisioned concurrency for recurring peaks.
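As a sketch of the scheduled option, here is roughly what it looks like with the Application Auto Scaling API via boto3. The function name, alias and capacities are hypothetical, and running this requires AWS credentials and an existing function alias (it is infrastructure configuration, not application code):

```python
import boto3

# Sketch: schedule provisioned concurrency for a recurring weekday peak.
# "my-function" and its "live" alias are illustrative names.
autoscaling = boto3.client("application-autoscaling")

# 1/ Register the function alias as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId="function:my-function:live",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=1,
    MaxCapacity=100,
)

# 2/ Raise provisioned concurrency every weekday at 8:00 UTC…
autoscaling.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="morning-peak-up",
    ResourceId="function:my-function:live",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(0 8 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 50, "MaxCapacity": 50},
)

# 3/ …and lower it again at 18:00 UTC.
autoscaling.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="evening-scale-down",
    ResourceId="function:my-function:live",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(0 18 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 1, "MaxCapacity": 1},
)
```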

Finally, another common misconception is to consider Serverless as only functions (FaaS). You can use Fargate (ECS/EKS) if you see too many cold starts and that is not acceptable for your use case, or if you need predictable and constantly very low latency (double-digit ms).

Fallacy #3: Serverless is the wrong tool for you, if you are not ready to manage the complexity

We can distinguish two things here.

1/ The first one concerns Serverless services themselves. They are made to make developers’ lives easier: fully managed by AWS, highly available and scalable, you can use most of them with a single API call (ex: sqs:sendMessage, sns:publish, …). Looking at Lambda or Fargate, they are truly made for developers: write some code, package it (zip / Docker image) and deploy it, that’s all! So in that regard, we cannot say Serverless is complex.

2/ The second one is about architecture. Serverless applications generally consist of multiple units of compute (functions or containers), communicating with each other asynchronously through messages (SQS) or events (SNS, EventBridge)… wait, am I describing a microservices architecture?! Definitely, Serverless suits microservices and event-driven architectures very well.

About one year ago, Xavier Lefèvre published a great article: What a typical 100% Serverless Architecture looks like in AWS:

Typical Serverless Architecture by Xavier Lefèvre

It made a lot of noise on social networks; many people were outraged by such complexity. But who said designing, building, deploying and maintaining microservices was easy?! It is certainly not: how to split them, at which granularity, how they should communicate, how to get proper observability over everything? And even more important and more complex: how to organize teams? What about release management? etc. No, microservices and distributed, event-driven architectures are not for everyone!

Microservices, when under control, can provide many benefits (agility, elasticity, independent deployments, resilience, …), and Serverless services really help in building such architectures. But it is not just a matter of technology here: developer and ops skills and the maturity of the organisation are the real challenges.

So to come back to the initial statement, I would rephrase it like this: Microservices is the wrong architecture for you (be it with Serverless or another technology), if you are not ready to manage the complexity.

To conclude on this one, please note that Serverless is not just synonymous with microservices; there’s much more you can do without taking on this complexity: react to a file uploaded to S3, process some data inserted in a DynamoDB table, perform a remediation action on an AWS Config finding, execute a scheduled task, and much more… with just a bit of code!
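To give an idea of how little code “react to a file uploaded to S3” actually requires, here is a minimal sketch of a Lambda handler. The event shape follows the documented S3 notification format; the bucket and object names are of course illustrative, and the interesting processing is left as a comment:

```python
# Minimal sketch of a Lambda handler reacting to S3 upload notifications.
def handler(event, context):
    processed = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # … do something useful with the uploaded file here …
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}

# The handler can be exercised locally with a sample S3 event:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-bucket"},
                "object": {"key": "uploads/report.csv"}}}
    ]
}
print(handler(sample_event, None))
```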

Fallacy #4: Serverless is the wrong tool for you, if your application’s current tech stack doesn’t fit well with Serverless

Technology stack stickers

Well, I won’t spend too much time on this one, as it is not specific to Serverless but true of any new paradigm. And I think we fall back to the previous point: microservices. You certainly won’t switch from a monolith to microservices with a snap of the fingers.

Regarding technologies, languages and frameworks, you can certainly use whatever suits you with Fargate, with little to no code change. With Lambda, most languages are supported today, either natively by AWS or through the Lambda Runtime API (even COBOL!). As for frameworks, you may not be able to use them all (I actually don’t know where the limit is), but many of them work, and Spring Boot for sure (take a look at Spring Cloud Function too). You will probably need to adapt your code and add a handler function, but it’s not a technology issue: you will likely have more trouble finding the right boundaries for your function than actually implementing it (or refactoring your current code).
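In practice, that adaptation is often just a thin handler translating the event into a call to your existing business code. A minimal sketch, with purely illustrative names and payload:

```python
# Pre-existing business logic, unchanged by the move to Lambda.
def compute_quote(customer_id: str, amount: float) -> dict:
    return {"customer": customer_id, "quote": round(amount * 1.2, 2)}

# The only Lambda-specific code: map the event (e.g. an API Gateway
# payload, simplified here) to a plain function call.
def handler(event, context):
    return compute_quote(event["customerId"], float(event["amount"]))

print(handler({"customerId": "c-42", "amount": 100}, None))
```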

Now let’s speak about the real limitations of S̶e̶r̶v̶e̶r̶l̶e̶s̶s̶ Lambda. I’ll speak about Lambda specifically because, in my opinion, you can do almost everything with Serverless in general.

Fact #1: Lambda is the wrong tool for you, if you need more than 15 minutes to process a request.

It is one of the service limits: a Lambda function cannot run more than 900 seconds. So if you have a nightly batch that takes hours to process some data, either you split it and spread it across hundreds or thousands of functions, or you choose another technology, like AWS Batch (which can also use Serverless resources since December 2020).
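The splitting idea can be sketched in a few lines: size each chunk so that one invocation fits comfortably within the 15-minute limit, then fan the chunks out (via SQS, Step Functions, …). The per-item processing time and safety margin below are assumed, illustrative numbers:

```python
# Sketch of the fan-out idea: split a long batch into chunks that each
# fit well within Lambda's 15-minute limit (numbers are illustrative).
LAMBDA_TIMEOUT_S = 900
SAFETY_MARGIN = 0.5      # aim for ~50% of the timeout per invocation
SECONDS_PER_ITEM = 2     # measured average processing time per item

def chunk(items, max_items):
    """Split items into consecutive chunks of at most max_items."""
    return [items[i:i + max_items] for i in range(0, len(items), max_items)]

budget = int(LAMBDA_TIMEOUT_S * SAFETY_MARGIN / SECONDS_PER_ITEM)  # items per chunk
batches = chunk(list(range(10_000)), budget)
print(len(batches), "invocations of at most", budget, "items each")
```

Each batch would then be sent to its own function invocation instead of one multi-hour job.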

Fact #2: Lambda is the wrong tool for you, if you need more than 10 GB of memory or 6 vCPUs.

If you need more than that for a single execution of your function, for example to process big media files, then Lambda is not the right choice. And if you look carefully at this limit and the previous one, you will notice Lambda functions are not really made for big, long-running tasks. If you need more resources, have a look at Fargate.
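Note that the two numbers in this limit are linked: Lambda allocates CPU power proportionally to the configured memory, and per the Lambda documentation a function gets the equivalent of one full vCPU at 1,769 MB. That is how the 10 GB ceiling maps to roughly 6 vCPUs:

```python
# Lambda allocates CPU proportionally to memory: one full vCPU
# at 1,769 MB (per the Lambda documentation).
MB_PER_VCPU = 1769

def vcpu_equivalent(memory_mb: int) -> float:
    """Approximate vCPU share for a given Lambda memory setting."""
    return memory_mb / MB_PER_VCPU

for mb in (128, 1769, 10240):
    print(mb, "MB ->", round(vcpu_equivalent(mb), 2), "vCPU")
```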

And if you want to know the real limits of Lambda, simply read this page; there are not that many, actually.

There is actually one limit today (mid-April 2021) for Serverless as a whole: GPUs.

Fact #3: Serverless is the wrong tool for you, if you need GPUs.

Neither Lambda nor Fargate support GPUs at the time of writing. So if you need GPUs to perform machine learning training or video encoding for example, have a look at other services: SageMaker for ML or the Elemental suite for video processing.

All the aforementioned limits are subject to evolve, and they already have in the past: we went from a 5-minute to a 15-minute timeout in 2018, and memory and CPU were increased from 3 GB (and up to 2 vCPUs) to 10 GB (and 6 vCPUs) at re:Invent 2020. And maybe GPUs will be available at some point (follow this issue if interested).

There are many misconceptions and misunderstandings around Serverless. One of them, the source of many others, is that Serverless == Lambda.

Lambda fits perfectly for small tasks, mainly event-driven, potentially (but not only) for microservices. Lambda functions have been designed in that sense, and they do a really good job at it, for a very reasonable price (again, think TCO!). Due to this design and some constraints (memory, timeout), they may not fit all use cases. Many barriers have already been pushed back: provisioned concurrency for cold starts, memory and timeout increases, the Runtime API for more supported languages, … And I bet many others will fall.

But Serverless is not only Lambda! There are plenty of Serverless services, for many different usages: SNS, SQS, EventBridge, API Gateway, AppSync, Step Functions, Cognito, DynamoDB, Aurora (Serverless), Amplify, without forgetting S3! Most of the AI services (Textract, Comprehend, Forecast, Rekognition, …) are also Serverless if you think about it, and there are many more. Talking about Serverless compute, because this is where the misunderstanding lies, Lambda is not the only service: Fargate (ECS or EKS) is another great one.

Now, looking at the overall Serverless ecosystem and the many options you have, I think there are very few things that cannot be built on top of it.

My 2 cents to conclude: when you choose an AWS service, especially a compute one, prefer the one with the highest level of abstraction (and thus the least infrastructure maintenance), so you can focus on the business and deliver value to your customers/users. This should be the primary driver when moving to the cloud. Leave the infrastructure and the undifferentiated heavy lifting to the provider and focus on the business value: deliver better and faster.

And since I’m a nice guy, here are my thoughts (check this for the official AWS recommendations):

Compute decision tree



Jérôme Van Der Linden

Senior Solution Architect @AWS - software craftsman, agile and DevOps enthusiast, cloud advocate. Opinions are my own.