The Log Router is the central routing engine in Cloud Logging. Every log entry that arrives in Cloud Logging passes through the Log Router, which evaluates inclusion and exclusion filters to determine where each entry should be stored. Sinks are the routing rules that direct log entries to specific destinations. Together, the Log Router and sinks give you fine-grained control over log storage, cost, and compliance.
```
              Log entry arrives
                      |
       Log Router evaluates all sinks
                      |
   +------------+------------+------------+
   |            |            |            |
_Required    _Default      Custom       Custom
 bucket       bucket       sink 1       sink 2
 (audit)     (general)    (BigQuery)   (Cloud Storage)
```
A few principles govern how the Log Router behaves:

| Principle | Description |
|---|---|
| Every log passes through | The Log Router processes every log entry — there is no way to bypass it |
| Multiple destinations | A single log entry can be routed to multiple sinks simultaneously |
| Inclusion filters | Each sink has a filter that determines which log entries it receives |
| Exclusion filters | Per-sink filters (most commonly set on the _Default sink) that drop matching entries before storage |
| Order does not matter | All sinks are evaluated independently — they do not form a pipeline |
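Because all sinks are evaluated independently, it is often useful to see exactly which sinks exist in a project. A quick check with the gcloud CLI (the project ID here is illustrative):

```bash
# List every sink the Log Router evaluates in this project,
# including the built-in _Required and _Default sinks
gcloud logging sinks list --project=my-project
```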
Sinks can route log entries to several destination types:
| Destination | Use Case |
|---|---|
| Cloud Logging bucket | Centralised log storage with Logs Explorer access (default) |
| Cloud Storage | Long-term archival at low cost — useful for compliance requirements |
| BigQuery | SQL-based analysis of log data at scale — ideal for trend analysis and reporting |
| Pub/Sub | Real-time streaming to downstream consumers (e.g., SIEM, custom alerting) |
| Splunk | Export to Splunk via Pub/Sub and a Dataflow pipeline (not a native sink destination) |
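To give a feel for the BigQuery use case, here is a sketch of querying exported audit logs with the bq CLI. The dataset matches the audit sink example later in this lesson; the table name is illustrative, since Cloud Logging names exported tables after the source log:

```bash
# Count audit-log method calls over the last day; Cloud Logging creates
# one table per log name, so the table name below is illustrative
bq query --use_legacy_sql=false '
  SELECT protopayload_auditlog.methodName, COUNT(*) AS calls
  FROM `my-project.audit_logs.cloudaudit_googleapis_com_activity`
  WHERE timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
  GROUP BY 1
  ORDER BY calls DESC'
```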
Choosing the right destination depends on what you need from the logs:

| Requirement | Recommended Destination |
|---|---|
| Real-time search and debugging | Cloud Logging bucket |
| Long-term retention at low cost | Cloud Storage |
| SQL-based analytics and dashboards | BigQuery |
| Real-time processing and integration | Pub/Sub |
| Compliance archival (immutable) | Cloud Storage with retention policies |
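For the compliance row above, a sketch of preparing an immutable archive bucket before pointing a sink at it (the bucket name and retention period are illustrative; locking a retention policy is permanent):

```bash
# Create a bucket, set a 7-year retention policy, then lock it;
# once locked, the policy cannot be shortened or removed
gsutil mb -l us-central1 gs://my-compliance-logs
gsutil retention set 7y gs://my-compliance-logs
gsutil retention lock gs://my-compliance-logs
```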
You create sinks with the gcloud CLI. Each sink needs a name, a destination, and a log filter that selects the entries it receives:

```bash
# Create a sink that routes audit logs to BigQuery
gcloud logging sinks create audit-to-bigquery \
  bigquery.googleapis.com/projects/my-project/datasets/audit_logs \
  --log-filter='logName:"cloudaudit.googleapis.com"'

# Create a sink that routes all GKE container logs to Cloud Storage
gcloud logging sinks create gke-to-storage \
  storage.googleapis.com/my-gke-logs-bucket \
  --log-filter='resource.type="k8s_container"'

# Create a sink that streams error logs to Pub/Sub
gcloud logging sinks create errors-to-pubsub \
  pubsub.googleapis.com/projects/my-project/topics/error-logs \
  --log-filter='severity >= ERROR'
```
When you create a sink, Cloud Logging automatically creates a unique writer identity (a service account) for that sink. You must grant this service account write permission on the destination resource:
```bash
# Get the sink's writer identity
gcloud logging sinks describe audit-to-bigquery --format='value(writerIdentity)'

# Grant the BigQuery Data Editor role to the sink's service account
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:p123456789-123456@gcp-sa-logging.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataEditor"
```
Exclusion filters prevent matching log entries from being stored in the _Default bucket, reducing ingestion costs without losing the ability to route them elsewhere:
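For example, to drop high-volume DEBUG entries from the _Default sink (the exclusion name and filter here are illustrative):

```bash
# Add an exclusion filter to the _Default sink; matching entries are
# discarded before storage but can still be captured by other sinks
gcloud logging sinks update _Default \
  --add-exclusion=name=drop-debug-logs,filter='severity=DEBUG'
```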