As AI applications grow in complexity, hard-coded prompt strings become unmanageable. This lesson covers how to use template engines for prompts, request structured outputs from LLMs, validate those outputs, and handle parsing failures gracefully.
Hard-coded prompts with string concatenation are fragile:
# Fragile — hard to maintain, easy to break
prompt = "Summarise the following " + doc_type + " about " + topic + ": " + text
Templates separate the prompt structure from the data, making prompts reusable, testable, and version-controllable.
def build_prompt(topic: str, context: str, question: str) -> str:
    return f"""You are an expert on {topic}.
Answer the following question using only the provided context.
Context:
{context}
Question: {question}"""
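For example, with illustrative values:
prompt = build_prompt(
    topic="retrieval-augmented generation",
    context="RAG pairs a retriever with a generator so answers are grounded in fetched documents.",
    question="What is RAG?",
)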
f-strings work for simple cases, but conditionals, loops, and reusable template files call for a dedicated engine such as Jinja2:
pip install jinja2
from jinja2 import Template
template = Template("""You are a {{ role }} assistant.
{% if context %}
Use the following context to answer:
{% for doc in context %}
- {{ doc }}
{% endfor %}
{% endif %}
User question: {{ question }}
Respond in {{ format }} format.""")
prompt = template.render(
    role="helpful",
    context=["Doc 1 content", "Doc 2 content"],
    question="What is RAG?",
    format="JSON",
)
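Rendered with those values, the prompt comes out roughly as below (Jinja's block tags leave some extra blank lines, which you can suppress by passing trim_blocks=True and lstrip_blocks=True to the template):
You are a helpful assistant.
Use the following context to answer:
- Doc 1 content
- Doc 2 content
User question: What is RAG?
Respond in JSON format.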
Store templates as separate files for version control:
prompts/
├── summarise.j2
├── classify.j2
└── extract.j2
from jinja2 import Environment, FileSystemLoader
env = Environment(loader=FileSystemLoader("prompts"))
template = env.get_template("summarise.j2")
prompt = template.render(text=my_document)
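For reference, summarise.j2 might contain something like this (hypothetical file content; the max_words variable and its default are assumptions, using Jinja's built-in default filter):
Summarise the following text in at most {{ max_words | default(150) }} words:

{{ text }}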
Many applications need the LLM to return structured data rather than free text. With the OpenAI API, JSON mode (response_format={"type": "json_object"}) constrains the model to return syntactically valid JSON:
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "system",
            "content": "Extract entities from the text. Return JSON with an 'entities' array.",
        },
        {
            "role": "user",
            "content": "Apple CEO Tim Cook announced the new iPhone in Cupertino.",
        },
    ],
)
import json
data = json.loads(response.choices[0].message.content)
print(data)
# {"entities": [{"name": "Apple", "type": "ORG"}, ...]}
When the API you are using has no JSON mode, you can instead describe the expected shape directly in the prompt:
schema_prompt = """Extract product information and return valid JSON matching this schema:
{
  "name": "string",
  "price": "number",
  "currency": "string",
  "in_stock": "boolean",
  "features": ["string"]
}
Text: {{ text }}"""
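The {{ text }} placeholder makes this string a Jinja template in its own right; render it before sending (the sample product text is invented):
from jinja2 import Template

prompt = Template(schema_prompt).render(
    text="The UltraBrew kettle costs 49.99 USD, ships today, and has a keep-warm mode."
)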
Use Pydantic to validate LLM outputs against a defined schema:
from pydantic import BaseModel

class Product(BaseModel):
    name: str
    price: float
    currency: str
    in_stock: bool
    features: list[str]
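With the model defined, validate the raw response and handle mismatches explicitly. A sketch assuming Pydantic v2, whose model_validate_json raises ValidationError on both invalid JSON and schema violations:
from pydantic import ValidationError

try:
    product = Product.model_validate_json(response.choices[0].message.content)
except ValidationError as exc:
    # Inspect field-level errors; feeding them back to the model in a retry is a common tactic
    print(exc.errors())
else:
    print(product.name, product.price)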