During this re:Invent, I’m sharing the biggest serverless-related announcements and what they mean for you in a series of hot takes.
Let’s start with the biggest announcements from today’s keynote, plus a bunch of other important announcements from just before re:Invent.
Lambda now bills you by the millisecond instead of rounding up to the nearest 100 ms. So if your function runs for 42 ms, you will be billed for 42 ms, not 100 ms.
This instantly makes everyone’s Lambda bill cheaper without anyone having to lift a finger. It’s the best kind of optimization!
However, this might not mean much in practice for a lot of you: if your Lambda bill is $5/month, even a 50% saving only buys you a cup of Starbucks coffee a month. …
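To put a number on the change, here’s a back-of-the-envelope calculation. This is an illustrative sketch: the function names are made up, and the per-GB-second price is the us-east-1 on-demand rate at the time of writing.

```python
# Rough cost comparison for per-millisecond vs. per-100ms Lambda billing.
# $0.0000166667 per GB-second is the us-east-1 rate at the time of writing.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(duration_ms, memory_mb, billing_increment_ms):
    # Billed duration rounds *up* to the nearest billing increment.
    billed_ms = -(-duration_ms // billing_increment_ms) * billing_increment_ms
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# A 42 ms invocation of a 1024 MB function:
old_cost = invocation_cost(42, 1024, 100)  # previously billed as 100 ms
new_cost = invocation_cost(42, 1024, 1)    # now billed as 42 ms
print(f"saving per invocation: {1 - new_cost / old_cost:.0%}")  # → 58%
```

A 58% saving sounds great, but scale matters: at a few million invocations a month it’s still cents, which is why the coffee-money caveat above applies.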
Today, AWS announced another major feature for the Lambda platform: the option to package your code and dependencies as container images. This makes it easier for enterprise users to apply a consistent set of tools for security scanning, code signing, and more. It also raises the maximum code package size for a function to a whopping 10 GB.
As an official launch partner, we are proud to add support for container images in the Lumigo platform so you can quickly identify functions that are deployed as a container image.
I have been involved in a client project to launch a new social network that helps university students meet up and do sports together.
Amongst other things, users can:
And so on.
The client is a bootstrapped startup, and we had to launch the app before the semester started in September 2020. …
A common complaint I have heard about serverless applications is that they tend to look really complicated on architecture diagrams, with many moving parts. But does that mean serverless applications are more complex than their serverful counterparts?
Before I get to that, let’s do a simple exercise.
Which of these two serverful applications is more complex?
Pretty hard to tell, right? The architecture diagram alone doesn’t tell the full story, and those EC2 icons are really good at hiding all the complexity buried in your code.
What if we had a more honest representation of what these two applications actually look like? You know, one that doesn’t omit 90% of what is actually going on in these applications. …
There is a growing ecosystem of vendors that are helping AWS customers gain better observability into their serverless applications. All of them have been facing the same struggle: how to collect telemetry data about AWS Lambda functions in a way that’s both performant and cost-efficient.
To address this need, AWS is today announcing the release of Lambda Extensions. In this post, we discuss what Lambda Extensions are, what problems they solve, different use cases for them, and how to work with them.
We at Lumigo are proud to join the announcement as an official AWS launch partner. And with it, we’re adding a new capability to the Lumigo platform, which lets you see the CPU and network usage of your functions. This helps you quickly identify functions that are CPU- or network-bound so that you can improve their performance by increasing their memory size. …
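To make the moving parts concrete, here is a minimal sketch of how an external extension talks to the Extensions API: register once, then long-poll for lifecycle events. The endpoint paths follow the 2020-01-01 API version; the extension name is illustrative, and this is an outline of the lifecycle rather than production code.

```python
import json
import os
import urllib.request

EXTENSION_NAME = "demo-extension"  # illustrative name, not a real product

def base_url():
    # Inside the execution environment, Lambda sets AWS_LAMBDA_RUNTIME_API
    # to the host:port of the local Extensions/Runtime API endpoint.
    return f"http://{os.environ['AWS_LAMBDA_RUNTIME_API']}/2020-01-01/extension"

def register():
    # Register the extension and declare which lifecycle events we want.
    req = urllib.request.Request(
        base_url() + "/register",
        data=json.dumps({"events": ["INVOKE", "SHUTDOWN"]}).encode(),
        headers={"Lambda-Extension-Name": EXTENSION_NAME,
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # The returned identifier must accompany all later API calls.
        return resp.headers["Lambda-Extension-Identifier"]

def event_loop(extension_id):
    while True:
        # Long-poll for the next event; the worker is frozen between events.
        req = urllib.request.Request(
            base_url() + "/event/next",
            headers={"Lambda-Extension-Identifier": extension_id},
        )
        with urllib.request.urlopen(req) as resp:
            event = json.load(resp)
        if event["eventType"] == "SHUTDOWN":
            break  # flush any buffered telemetry here, then exit
```

This register-then-poll loop is what lets observability agents collect telemetry out of band, instead of doing the work inside the handler’s invocation path.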
So much has been written about Lambda cold starts. It’s easily one of the most talked-about, and yet most misunderstood, topics when it comes to Lambda. Depending on who you talk to, you will likely get different advice on how best to reduce cold starts.
So in this post, I will share with you everything I have learned about cold starts in the last few years and back it up with some data.
But first -
Lambda automatically scales the number of workers (think containers) that run your code based on traffic. A “cold start” is the first request that a new Lambda worker handles. …
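You can observe this definition directly with a function that tracks whether the current worker has handled a request before. This is a sketch for illustration, not how AWS or any vendor detects cold starts:

```python
import time

# Module-level code runs once per worker, during the init phase, so state
# declared here survives across invocations on the same worker.
_worker_started_at = time.time()
_invocations_on_this_worker = 0

def handler(event, context=None):
    global _invocations_on_this_worker
    _invocations_on_this_worker += 1
    # By definition, the first request a worker handles is a cold start.
    return {
        "cold_start": _invocations_on_this_worker == 1,
        "worker_age_seconds": round(time.time() - _worker_started_at, 3),
    }
```

A second request routed to the same worker comes back with `cold_start: False`, while a burst of concurrent requests spins up several new workers, each reporting exactly one cold start.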
I previously wrote about five reasons you should consider AppSync over API Gateway. One thing API Gateway supports that you can’t yet do with AppSync out of the box is custom domain names.
Your shiny new AppSync API is available at XYZ.appsync-api.us-east-1.amazonaws.com/graphql, but you really want people to use your own domain instead, because dev.example.com/graphql is much more memorable and informative.
In this post, let’s look at two ways you can do this.
This is my preferred way. It’s easy to set up and cheap to run.
Assuming your AppSync API resource is called GraphQlApi (if you use the serverless-appsync-plugin with the Serverless framework, then this is the logical ID it’ll use). …
When you build your application on top of Lambda, AWS automatically scales the number of “workers” (think containers) running your code based on traffic. And by default, your functions are deployed to three Availability Zones (AZs). This gives you a lot of scalability and redundancy out of the box.
When it comes to API functions, every concurrent user request is processed by a separate worker, so API-level concurrency is handled by the platform. This also simplifies your code, since you don’t have to worry about managing in-process concurrency. …
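As a mental model, the routing behavior described above can be sketched as a toy simulation. This is not the actual Lambda scheduler; it only captures the two rules that matter here: a worker handles at most one request at a time, and the platform adds workers when all existing ones are busy.

```python
class Worker:
    """One Lambda worker: handles at most one request at a time."""
    def __init__(self, worker_id):
        self.worker_id = worker_id
        self.busy = False

class Platform:
    """Toy model of how the platform routes requests to workers."""
    def __init__(self):
        self.workers = []

    def route(self):
        # Reuse a warm, idle worker if one exists...
        for worker in self.workers:
            if not worker.busy:
                worker.busy = True
                return worker
        # ...otherwise "cold start" a new one. This is why handler code
        # never manages in-process concurrency: concurrent requests always
        # land on different workers.
        worker = Worker(len(self.workers))
        worker.busy = True
        self.workers.append(worker)
        return worker

    def finish(self, worker):
        worker.busy = False

platform = Platform()
a = platform.route()   # first request: new worker 0
b = platform.route()   # concurrent request: new worker 1
platform.finish(a)
c = platform.route()   # later request reuses the now-idle worker 0
print(a.worker_id, b.worker_id, c.worker_id)  # → 0 1 0
```

Two concurrent requests force two workers into existence, but once a worker is idle it gets reused, which is exactly the warm/cold split discussed earlier.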