AI APIs are the primary interface for integrating large language models into your applications. This lesson covers how to make API calls using both REST and SDKs, understand the message format, stream responses, and handle errors gracefully.
Most modern LLM APIs use a chat completions interface. You send a list of messages (conversation history) and receive a model-generated response.
| Role | Purpose |
|---|---|
| `system` | Sets the model's behaviour, persona, or instructions |
| `user` | The human user's input |
| `assistant` | The model's previous responses (kept for conversation context) |
```python
messages = [
    {"role": "system", "content": "You are a concise coding assistant."},
    {"role": "user", "content": "Write a Python function to reverse a string."},
]
```
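Before reaching for an SDK, it helps to see that a chat completions call is just an authenticated HTTP POST. A minimal REST sketch using only Python's standard library, assuming OpenAI's endpoint and header format and an API key in the `OPENAI_API_KEY` environment variable:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(messages, model="gpt-4o-mini"):
    """Assemble the HTTP request: JSON body plus auth and content-type headers."""
    body = json.dumps({"model": model, "messages": messages, "temperature": 0.7})
    return urllib.request.Request(
        API_URL,
        data=body.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

def call_chat_api(messages):
    """Send the request and pull the reply text out of the response JSON."""
    with urllib.request.urlopen(build_chat_request(messages), timeout=30) as resp:
        reply = json.load(resp)
    # The generated text sits at choices[0].message.content
    return reply["choices"][0]["message"]["content"]
```

Any HTTP client in any language works the same way; the SDKs shown below simply wrap this request/response cycle with typed objects and retries.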
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    temperature=0.7,  # sampling randomness: 0 is near-deterministic, higher is more varied
    max_tokens=256,   # cap on the length of the generated reply
)

# The reply text lives on the first choice's message
answer = response.choices[0].message.content
print(answer)
```
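Streaming returns the response incrementally so you can display tokens as they arrive instead of waiting for the full completion. With the OpenAI SDK you pass `stream=True` and iterate over chunks, each of which carries a partial `delta` rather than a full message. The consumption loop can be sketched independently of the network; the dataclasses below are simplified stand-ins for the SDK's chunk types:

```python
from dataclasses import dataclass
from typing import Iterable, List, Optional

# With the real SDK the stream would come from:
#   stream = client.chat.completions.create(model=..., messages=..., stream=True)

@dataclass
class Delta:
    content: Optional[str] = None

@dataclass
class Choice:
    delta: Delta

@dataclass
class Chunk:  # stand-in for the SDK's ChatCompletionChunk
    choices: List[Choice]

def consume_stream(stream: Iterable[Chunk]) -> str:
    """Print tokens as they arrive and return the assembled reply."""
    parts = []
    for chunk in stream:
        token = chunk.choices[0].delta.content
        if token:  # the final chunk's delta is typically empty
            print(token, end="", flush=True)
            parts.append(token)
    print()
    return "".join(parts)
```

The `flush=True` matters: without it, short tokens sit in the output buffer and the "live typing" effect is lost.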
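API calls fail for mundane reasons: rate limits (HTTP 429), timeouts, and transient server errors. The standard remedy is to retry with exponential backoff. A minimal, library-agnostic sketch; in real code you would narrow `retryable` to the SDK's transient exception types (e.g. `openai.RateLimitError`) rather than catching everything:

```python
import time

def with_retries(fn, max_attempts=5, base_delay=1.0, retryable=(Exception,)):
    """Call fn(), retrying on retryable errors with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Delays double each attempt: base, 2*base, 4*base, ...
            time.sleep(base_delay * 2 ** attempt)
```

Production implementations usually add random jitter to the delay so that many clients hitting a rate limit don't all retry in lockstep.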