Lambda functions do not run on their own — they execute in response to events. An event source is the AWS service or custom application that generates the event and invokes your function. Understanding event sources is fundamental to building effective serverless architectures.
Lambda event sources fall into three categories based on how invocation is managed:
| Category | Who Invokes Lambda? | Examples |
|---|---|---|
| Push (synchronous) | The source service calls Lambda directly and waits | API Gateway, ALB, Cognito |
| Push (asynchronous) | The source service calls Lambda and returns immediately | S3, SNS, EventBridge, SES |
| Poll-based | Lambda polls the source for new records | SQS, Kinesis, DynamoDB Streams, Kafka |
```
Push (sync):   Source ---invoke---> Lambda ---response---> Source
Push (async):  Source ---invoke---> Lambda (event queued)
               Source <---202------ Lambda
Poll-based:    Lambda ---poll-----> Source (SQS / Kinesis / DynamoDB Streams)
               Lambda <---records-- Source
```
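Inside the handler you can often tell which kind of source invoked the function just from the event's shape. A minimal sketch, assuming nothing beyond the documented payloads (the helper name and shape checks are illustrative, not an official API):

```python
def identify_event_source(event: dict) -> str:
    """Infer which service invoked the function from the event shape."""
    records = event.get("Records")
    if records:
        # Record-style events (S3, SQS, SNS, DynamoDB Streams, Kinesis)
        # carry an eventSource field such as "aws:s3" or "aws:sqs".
        # SNS uses "EventSource" with a capital E.
        return records[0].get("eventSource") or records[0].get("EventSource", "unknown")
    if "httpMethod" in event and "requestContext" in event:
        return "api-gateway"  # push/synchronous HTTP invocation
    return "unknown"

print(identify_event_source({"Records": [{"eventSource": "aws:sqs"}]}))  # aws:sqs
```

This kind of dispatch is handy when one function is wired to several triggers during prototyping, though dedicated functions per source are usually cleaner in production.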
S3 can trigger Lambda when objects are created, deleted, or restored from Glacier; note that overwriting an existing object is delivered as another `ObjectCreated` event, not a modification event.
| Event | Trigger |
|---|---|
| `s3:ObjectCreated:*` | Any object creation (PUT, POST, COPY, multipart upload) |
| `s3:ObjectCreated:Put` | Object created via PUT |
| `s3:ObjectRemoved:*` | Any object deletion |
| `s3:ObjectRestore:Completed` | Glacier restore completed |
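The notification itself is configured on the bucket, not on the function. A sketch of building the payload for boto3's `put_bucket_notification_configuration` (the bucket name, ARN, and filter values below are placeholders for this lesson's example):

```python
def build_notification_config(function_arn: str, prefix: str, suffix: str) -> dict:
    """Payload for s3.put_bucket_notification_configuration."""
    return {
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": function_arn,
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [
                {"Name": "prefix", "Value": prefix},
                {"Name": "suffix", "Value": suffix},
            ]}},
        }]
    }

# Applying it (requires AWS credentials, so shown commented out):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_notification_configuration(
#     Bucket="my-uploads",
#     NotificationConfiguration=build_notification_config(
#         "arn:aws:lambda:eu-west-1:123456789012:function:make-thumbnails",
#         "images/", ".jpg"))
```

The prefix/suffix filter is also your first line of defense against the recursive-trigger problem discussed below.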
An abridged `ObjectCreated:Put` event payload:

```json
{
  "Records": [
    {
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": { "name": "my-uploads" },
        "object": {
          "key": "images/photo.jpg",
          "size": 1048576
        }
      }
    }
  ]
}
```
```python
import io
import urllib.parse

import boto3
from PIL import Image

s3 = boto3.client('s3')

def handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # Keys arrive URL-encoded in the event (spaces become '+'), so decode first
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])

        # Download the original image
        response = s3.get_object(Bucket=bucket, Key=key)
        image = Image.open(io.BytesIO(response['Body'].read()))

        # Create the thumbnail in memory
        image.thumbnail((200, 200))
        buffer = io.BytesIO()
        image.save(buffer, 'JPEG')
        buffer.seek(0)

        # Upload the thumbnail under a separate prefix
        thumb_key = f"thumbnails/{key.split('/')[-1]}"
        s3.put_object(Bucket=bucket, Key=thumb_key, Body=buffer,
                      ContentType='image/jpeg')
        print(f"Created thumbnail: {thumb_key}")
```
Warning: Never write the output back to the same bucket and prefix that triggers the function. Each upload would fire the function again, creating an infinite (and expensive) loop. Use a separate bucket, or a prefix filter that excludes the output path.
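Even with filters configured, a guard inside the handler is cheap insurance. A minimal sketch (the prefix follows the thumbnail example above):

```python
OUTPUT_PREFIX = "thumbnails/"

def should_process(key: str) -> bool:
    """Skip objects this function wrote itself, breaking the S3 -> Lambda loop."""
    return not key.startswith(OUTPUT_PREFIX)

print(should_process("images/photo.jpg"))      # True
print(should_process("thumbnails/photo.jpg"))  # False
```

Calling `should_process(key)` at the top of the record loop and continuing on `False` means a misconfigured filter degrades into a few wasted invocations instead of a runaway loop.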
SQS is a poll-based event source. Lambda automatically polls the queue, retrieves messages in batches, and invokes your function.
```shell
aws lambda create-event-source-mapping \
  --function-name process-orders \
  --event-source-arn arn:aws:sqs:eu-west-1:123456789012:order-queue \
  --batch-size 10 \
  --maximum-batching-window-in-seconds 5
```
| Setting | Description | Range |
|---|---|---|
| `batch-size` | Max messages per invocation | 1–10,000 (standard; above 10 requires a batching window), 1–10 (FIFO) |
| `maximum-batching-window-in-seconds` | Wait time to accumulate a batch | 0–300 seconds |
A sample SQS event payload:

```json
{
  "Records": [
    {
      "messageId": "abc-123",
      "body": "{\"orderId\": \"ORD-001\", \"amount\": 49.99}",
      "attributes": {
        "ApproximateReceiveCount": "1",
        "SentTimestamp": "1704067200000"
      }
    }
  ]
}
```
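Note that `body` is a plain string, so JSON payloads must be decoded by hand. A sketch of a small helper that also surfaces the retry count, which is useful for spotting poison messages (the helper name is ours, not part of the event API):

```python
import json

def parse_sqs_record(record: dict):
    """Decode a JSON body and return it with the approximate receive count."""
    body = json.loads(record["body"])
    receives = int(record["attributes"]["ApproximateReceiveCount"])
    return body, receives

order, receives = parse_sqs_record({
    "messageId": "abc-123",
    "body": "{\"orderId\": \"ORD-001\", \"amount\": 49.99}",
    "attributes": {"ApproximateReceiveCount": "1", "SentTimestamp": "1704067200000"},
})
print(order["orderId"], receives)  # ORD-001 1
```

A handler might, for instance, route a record to a dead-letter path once `receives` exceeds a threshold instead of retrying it forever.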
By default, if your function throws an error, the entire batch returns to the queue, including messages that were processed successfully. Enable partial batch responses (add `ReportBatchItemFailures` to the event source mapping's function response types) so the function can report individual failures:
```javascript
export const handler = async (event) => {
  const failedItems = [];
  for (const record of event.Records) {
    try {
      const order = JSON.parse(record.body);
      await processOrder(order);
    } catch (error) {
      // Only this message returns to the queue; the rest are deleted
      failedItems.push({ itemIdentifier: record.messageId });
    }
  }
  return { batchItemFailures: failedItems };
};
```
DynamoDB Streams capture item-level changes (INSERT, MODIFY, REMOVE) and deliver them to Lambda in the order the modifications occurred for each item.
A sample stream record for an item update:

```json
{
  "Records": [
    {
      "eventName": "MODIFY",
      "dynamodb": {
        "Keys": { "userId": { "S": "user-123" } },
        "OldImage": { "status": { "S": "pending" } },
        "NewImage": { "status": { "S": "confirmed" } }
      }
    }
  ]
}
```
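Attribute values arrive in DynamoDB's typed wire format (`{"S": ...}` for strings), and `OldImage`/`NewImage` are only present when the stream's view type includes them. A sketch of detecting a field transition in a MODIFY record (the helper name and the `status` field are illustrative):

```python
def field_transition(record: dict, field: str = "status"):
    """Return (old, new) if a MODIFY record changed `field`, else None."""
    if record.get("eventName") != "MODIFY":
        return None
    ddb = record.get("dynamodb", {})
    # Unwrap DynamoDB's typed format: {"status": {"S": "pending"}} -> "pending"
    old = ddb.get("OldImage", {}).get(field, {}).get("S")
    new = ddb.get("NewImage", {}).get(field, {}).get("S")
    if old is not None and new is not None and old != new:
        return old, new
    return None

record = {
    "eventName": "MODIFY",
    "dynamodb": {
        "Keys": {"userId": {"S": "user-123"}},
        "OldImage": {"status": {"S": "pending"}},
        "NewImage": {"status": {"S": "confirmed"}},
    },
}
print(field_transition(record))  # ('pending', 'confirmed')
```

This pattern, reacting only to specific transitions rather than every change, keeps downstream side effects (notifications, audit entries) from firing on unrelated updates to the same item.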