Cloud Profiler is a statistical, low-overhead profiling service that continuously collects CPU and memory usage data from your production applications. Unlike traditional profilers that add significant overhead and are typically only used in development, Cloud Profiler is designed to run in production with negligible impact on performance — typically less than 0.5% CPU overhead.
Development and staging environments rarely replicate production traffic patterns. Workloads that perform well under test may exhibit completely different behaviour under real user traffic — different data distributions, different concurrency levels, and different code paths. Production profiling reveals where your application actually spends its time and memory.
| Benefit | Description |
|---|---|
| Find real bottlenecks | Identify CPU and memory hotspots under actual production load |
| Reduce costs | Optimise code to reduce CPU usage and lower compute costs |
| Improve latency | Identify functions that contribute most to request latency |
| Detect regressions | Compare profiles across versions to spot performance regressions |
| No reproduction needed | Analyse performance issues without reproducing them locally |
| Profile Type | What It Measures | Language Support |
|---|---|---|
| CPU time | Time spent executing CPU instructions (excludes I/O waits) | Go, Java, Node.js, Python |
| Wall time | Elapsed real time (includes I/O waits, sleeps, locks) | Go, Java, Node.js, Python |
| Heap | Memory currently allocated on the heap | Go, Java, Node.js, Python |
| Allocated heap | Total memory allocated over time (including freed memory) | Go, Java |
| Contention | Time threads spend waiting for locks | Go, Java |
| Threads | Number of active threads | Java |
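The contention profile is not collected by default in the Go agent; it has to be enabled explicitly. A minimal sketch (reusing the `order-service` names from the examples below) that opts in via the Go agent's `MutexProfiling` flag:

```go
cfg := profiler.Config{
	Service:        "order-service",
	ServiceVersion: "1.2.3",
	MutexProfiling: true, // opt in to the contention profile (off by default)
}
```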
```go
package main

import (
	"log"

	"cloud.google.com/go/profiler"
)

func main() {
	cfg := profiler.Config{
		Service:        "order-service",
		ServiceVersion: "1.2.3",
		ProjectID:      "my-project", // optional on GCP; inferred from the environment
	}
	if err := profiler.Start(cfg); err != nil {
		log.Fatalf("Failed to start profiler: %v", err)
	}
	// Your application code here
}
```
```python
import googlecloudprofiler

def main():
    try:
        googlecloudprofiler.start(
            service='order-service',
            service_version='1.2.3',
            verbose=0,  # 0 = errors only; increase for debugging
        )
    except (ValueError, NotImplementedError) as exc:
        print(exc)  # the agent failed to start; continue without profiling
    # Your application code here
```
```shell
# Download and extract the profiler agent
wget https://storage.googleapis.com/cloud-profiler/java/latest/profiler_java_agent.tar.gz
tar xzf profiler_java_agent.tar.gz

# Start your application with the agent attached
java -agentpath:/opt/cprof/profiler_java_agent.so=-cprof_service=order-service,-cprof_service_version=1.2.3 \
     -jar my-application.jar
```
```javascript
require('@google-cloud/profiler').start({
  serviceContext: {
    service: 'order-service',
    version: '1.2.3',
  },
});
```
The Cloud Profiler interface in the Cloud Console provides interactive flame graphs and analysis tools.
A flame graph visualises the call stack of your application. Each bar represents a function, and the width of the bar represents the proportion of time (or memory) consumed by that function and its callees:
```
|====================== main() ======================|
|========= handleRequest() =========|= logMetrics() =|
|= parseJSON() =|==== queryDB() ====|
                |== executeSQL() ===|
```