As AI applications grow in complexity, hard-coded prompt strings become unmanageable. This lesson covers how to use template engines for prompts, request structured outputs from LLMs, validate those outputs, and handle parsing failures gracefully.
Hard-coded prompts with string concatenation are fragile:
# Fragile — hard to maintain, easy to break
prompt = "Summarise the following " + doc_type + " about " + topic + ": " + text
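To see the fragility concretely, here is a small sketch (the variable values are hypothetical) showing how concatenation fails as soon as one piece of data is not a string:

```python
doc_type = "report"
topic = None  # e.g. a missing field from an upstream API
text = "Quarterly revenue grew 12%..."

try:
    # String concatenation raises as soon as any operand is not a str
    prompt = "Summarise the following " + doc_type + " about " + topic + ": " + text
except TypeError as e:
    print(f"Prompt construction failed: {e}")
```

A template engine, by contrast, gives you a single place to validate or default such values before the prompt is built.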
Templates separate the prompt structure from the data, making prompts reusable, testable, and version-controllable.
For simple cases, a Python f-string inside a function already separates structure from data:

def build_prompt(topic: str, context: str, question: str) -> str:
    return f"""You are an expert on {topic}.
Answer the following question using only the provided context.

Context:
{context}

Question: {question}"""
For prompts that need conditionals or loops, a dedicated template engine such as Jinja2 is a better fit:

pip install jinja2
from jinja2 import Template
template = Template("""You are a {{ role }} assistant.
{% if context %}
Use the following context to answer:
{% for doc in context %}
- {{ doc }}
{% endfor %}
{% endif %}
User question: {{ question }}
Respond in {{ format }} format.""")
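Rendering the template with sample data produces the final prompt string. The sketch below repeats the template so it runs standalone; the sample values (`"research"`, the document list, and so on) are placeholders:

```python
from jinja2 import Template

template = Template("""You are a {{ role }} assistant.
{% if context %}
Use the following context to answer:
{% for doc in context %}
- {{ doc }}
{% endfor %}
{% endif %}
User question: {{ question }}
Respond in {{ format }} format.""")

# render() substitutes the variables; the {% if %} block is skipped
# entirely when context is empty or not provided.
prompt = template.render(
    role="research",
    context=["Doc A: LLM basics", "Doc B: prompt design"],
    question="What is a prompt template?",
    format="markdown",
)
print(prompt)
```

Because the template is data, it can be stored in a file, diffed in version control, and unit-tested independently of the code that renders it.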