Configuring a Lambda function correctly is the difference between a performant, cost-efficient function and one that times out, runs out of memory, or costs more than necessary. This lesson covers the key configuration options — runtime selection, memory and CPU allocation, timeout settings, concurrency controls, and deployment packaging.
The runtime determines which programming language and version your function uses. AWS provides managed runtimes that include the language interpreter, SDK, and operating system dependencies.
| Runtime Identifier | Language | OS | End of Support |
|---|---|---|---|
| nodejs20.x | Node.js 20 | Amazon Linux 2023 | TBD |
| nodejs18.x | Node.js 18 | Amazon Linux 2 | 2025-09 |
| python3.12 | Python 3.12 | Amazon Linux 2023 | TBD |
| python3.11 | Python 3.11 | Amazon Linux 2 | TBD |
| java21 | Java 21 (Corretto) | Amazon Linux 2023 | TBD |
| java17 | Java 17 (Corretto) | Amazon Linux 2 | TBD |
| dotnet8 | .NET 8 | Amazon Linux 2023 | TBD |
| ruby3.3 | Ruby 3.3 | Amazon Linux 2023 | TBD |
| provided.al2023 | Custom runtime | Amazon Linux 2023 | TBD |
Consider these factors when choosing a runtime:

- Team expertise: use the language your team knows best.
- Cold start tolerance: Node.js or Python for low latency; Java or .NET for sustained throughput.
- Ecosystem needs: Java for enterprise libraries; Python for data/ML.
- Maximum performance: a custom runtime with Go or Rust (compiled binaries).
If your language is not natively supported, use the provided.al2023 runtime with a bootstrap file:

```sh
#!/bin/sh
# bootstrap: the entry point for custom runtimes
set -euo pipefail

# Initialisation
# ...

# Processing loop
while true; do
  # Get the next invocation event from the Lambda Runtime API
  HEADERS=$(mktemp)
  EVENT_DATA=$(curl -sS -LD "$HEADERS" \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)

  # Process the event (call your binary/script)
  RESPONSE=$(./my-handler "$EVENT_DATA")

  # Send the response back to the Runtime API
  curl -sS -X POST \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" \
    -d "$RESPONSE"
done
```
Memory is the single most important performance lever for Lambda functions. It controls not just RAM but also proportional CPU, network bandwidth, and disk I/O.
| Memory | vCPU Allocation | Network Bandwidth |
|---|---|---|
| 128 MB | Fraction of 1 vCPU | Low |
| 512 MB | ~0.3 vCPU | Moderate |
| 1024 MB | ~0.6 vCPU | Moderate |
| 1769 MB | 1 full vCPU | High |
| 3538 MB | 2 vCPUs | High |
| 5307 MB | 3 vCPUs | Very high |
| 7076 MB | 4 vCPUs | Very high |
| 8845 MB | 5 vCPUs | Very high |
| 10240 MB | 6 vCPUs | Maximum |
Key insight: At 1,769 MB, you get one full vCPU. Below this, CPU-bound tasks will be throttled. Above this, Lambda allocates multiple vCPUs (but your code must be multi-threaded to benefit).
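The linear CPU scaling can be sketched in a few lines. This assumes the published ratio of one full vCPU per 1,769 MB and treats allocation as perfectly linear, which is an approximation of AWS's actual (undocumented) behavior:

```python
# Approximate the CPU share Lambda grants at a given memory setting.
# Assumes the published linear ratio of 1 full vCPU per 1,769 MB;
# the exact allocation curve is an AWS implementation detail.
MB_PER_VCPU = 1769

def approx_vcpus(memory_mb: int) -> float:
    """Return the approximate number of vCPUs for a memory setting."""
    if not 128 <= memory_mb <= 10240:
        raise ValueError("Lambda memory must be between 128 MB and 10,240 MB")
    return memory_mb / MB_PER_VCPU

for mem in (128, 512, 1769, 3538, 10240):
    print(f"{mem:>6} MB -> ~{approx_vcpus(mem):.2f} vCPU")
```

Running this makes the thresholds concrete: a 128 MB function gets roughly 7% of a vCPU, which is why CPU-bound work crawls at low memory settings.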
The open-source AWS Lambda Power Tuning tool tests your function at different memory settings and plots cost vs. duration:
```
Duration (ms)
     |
 800 |  *
 600 |     *
 400 |        *  *
 200 |              *  *  *  *  *
     |________________________________
       128  256  512 1024 1769 3008  (MB)

Cost ($)
      |
 0.10 |  *                           *
 0.08 |     *                    *
 0.06 |        *
 0.04 |           *
 0.03 |              *    *
      |________________________________
        128  256  512 1024 1769 3008  (MB)
```
The sweet spot is where duration plateaus but cost is still reasonable — typically the "knee" of the curve.
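You can reproduce this cost analysis without the tuning tool. The sketch below uses the us-east-1 x86 compute rate current at the time of writing ($0.0000166667 per GB-second); treat that price and the per-setting durations as illustrative assumptions, and check the current pricing page before relying on the numbers:

```python
# Back-of-the-envelope Lambda compute cost per invocation, used to find
# the "knee" where extra memory stops paying for itself.
# PRICE_PER_GB_SECOND is an assumed us-east-1 x86 rate; verify current pricing.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(memory_mb: int, duration_ms: float) -> float:
    """Compute-only cost of one invocation (excludes the per-request charge)."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# Hypothetical measured durations for one function, mirroring the plot above.
measurements = {128: 800, 256: 600, 512: 400, 1024: 400, 1769: 200, 3008: 200}

for mem, ms in measurements.items():
    print(f"{mem:>5} MB @ {ms:>4} ms -> ${invocation_cost(mem, ms):.8f}")
```

Note how 1024 MB is strictly worse than 512 MB here (same duration, double the GB-seconds), while 1769 MB halves the duration: that trade-off is exactly what the knee of the curve captures.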
The timeout setting determines the maximum time a single invocation can run before Lambda terminates it.
| Setting | Value |
|---|---|
| Minimum | 1 second |
| Maximum | 900 seconds (15 minutes) |
| Default | 3 seconds |
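Rather than letting Lambda kill an invocation mid-task, long-running handlers can watch the clock and exit cleanly. The sketch below uses the real `context.get_remaining_time_in_millis()` API; the event shape, `items` key, and 5,000 ms safety margin are hypothetical:

```python
# Defensive pattern: stop work before Lambda's hard timeout so the
# function can return a checkpoint instead of being terminated.
SAFETY_MARGIN_MS = 5000  # illustrative; size it to your slowest single item

def handler(event, context):
    processed = []
    for item in event.get("items", []):
        # get_remaining_time_in_millis() counts down toward the
        # configured timeout for this specific invocation.
        if context.get_remaining_time_in_millis() < SAFETY_MARGIN_MS:
            # Bail out early; the caller can re-invoke with the remainder.
            return {"status": "partial", "processed": processed}
        processed.append(item)  # stand-in for real per-item work
    return {"status": "complete", "processed": processed}
```

A caller (such as a Step Functions state machine) can inspect the `partial` status and re-invoke with the unprocessed remainder, turning a hard timeout into a resumable batch.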