Week 01: LLM Foundations & Prompt Engineering
What You'll Learn
Understand how LLMs actually work: tokenization, context windows, temperature, and why the same prompt can produce different results. You'll learn the prompt engineering patterns professional engineers rely on in production, going beyond what introductory tutorials cover.
Session Schedule
| Day | Time | Focus |
|---|---|---|
| Saturday | 8:00 - 11:00 PM WAT | LLM Architecture & Prompt Patterns |
| Sunday | 8:00 - 11:00 PM WAT | API Integration & Build Session |
Prerequisites
- Python 3.12+ installed
- OpenAI API key (free tier works)
- Anthropic API key (optional but recommended)
- VS Code or PyCharm
- Completed Orientation Modules 1-5
Topics Covered
Transformer Architecture Intuition
How attention mechanisms work. Why transformers replaced RNNs. What "context window" really means and why it matters for production systems.
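To build intuition for what "attention" computes, here is scaled dot-product attention sketched in plain Python, with hand-picked toy vectors rather than learned weights (this is an illustration of the mechanism, not a real model):

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability, then normalize exponentials.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    Each key gets a score (dot product with the query, scaled by
    sqrt(dimension)); softmax turns scores into weights; the output
    is the weighted average of the value vectors.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim_v = len(values[0])
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(dim_v)]
    return out, weights

# Toy example: the query is most similar to the second key,
# so the output leans toward the second value vector.
out, weights = attention(
    query=[1.0, 0.0],
    keys=[[0.0, 1.0], [1.0, 0.0]],
    values=[[10.0, 0.0], [0.0, 10.0]],
)
```

A context window is simply the maximum number of tokens this machinery attends over at once; anything beyond it the model literally cannot see.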
Prompt Engineering Patterns
Few-shot prompting, chain-of-thought, ReAct pattern, structured outputs. When to use each and why most tutorials teach them wrong.
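These patterns compose. A minimal sketch of combining few-shot examples, a chain-of-thought instruction, and a structured-output requirement into one prompt (the sentiment task and examples are made up for illustration):

```python
# Hypothetical few-shot examples for a sentiment-classification task.
FEW_SHOT_EXAMPLES = [
    {"input": "The battery died after two days.", "label": "negative"},
    {"input": "Setup took thirty seconds. Love it.", "label": "positive"},
]

def build_prompt(task: str, user_input: str) -> str:
    lines = [task, ""]
    # Few-shot: show the model input/label pairs before the real input.
    for ex in FEW_SHOT_EXAMPLES:
        lines.append(f"Input: {ex['input']}")
        lines.append(f"Label: {ex['label']}")
        lines.append("")
    # Chain-of-thought: ask for reasoning before the answer.
    lines.append("Think step by step, then answer.")
    # Structured output: pin down a format the caller can parse.
    lines.append('Respond as JSON: {"reasoning": "...", "label": "..."}')
    lines.append("")
    lines.append(f"Input: {user_input}")
    return "\n".join(lines)

prompt = build_prompt(
    "Classify the sentiment of each product review.",
    "It works, I guess.",
)
```

In practice each pattern has a cost: few-shot examples and reasoning steps all consume context-window tokens, which is why knowing when *not* to apply them matters.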
Model Selection
GPT-4o vs Claude 3.5 vs Gemini. When to use which model. Cost vs quality tradeoffs. How to benchmark for your use case.
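A benchmark harness for your own use case can be very small. In this sketch the "models" are stub callables; in practice each would wrap a real OpenAI, Anthropic, or Gemini API call, and the scoring function would be whatever metric fits your task:

```python
def score(response: str, expected: str) -> float:
    # Naive exact-containment scoring; swap in a task-appropriate metric.
    return 1.0 if expected.lower() in response.lower() else 0.0

def benchmark(models: dict, prompt: str, expected: str) -> dict:
    # Run the same prompt through every model and score each response.
    return {name: fn_score
            for name, fn in models.items()
            for fn_score in [score(fn(prompt), expected)]}

# Stub "models" standing in for real API clients.
models = {
    "gpt-4o": lambda p: "The capital of France is Paris.",
    "claude-3-5": lambda p: "Paris.",
    "gemini": lambda p: "I am not sure.",
}

results = benchmark(models, "What is the capital of France?", "Paris")
```

Run this over a representative sample of your real prompts, then weigh the quality scores against each model's per-token price before choosing.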
OpenAI & Anthropic APIs
Authentication, streaming, function calling, error handling, retry logic. Production patterns, not hello world.
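Retry logic is the piece hello-world examples always skip. Here is a generic exponential-backoff wrapper; in the bootcamp build it would wrap a call like `client.chat.completions.create(...)` and catch the SDK's specific rate-limit and connection error types rather than the broad `Exception` used in this sketch:

```python
import random
import time

def with_retries(fn, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Retry fn with exponential backoff and jitter on any exception."""
    def wrapped(*args, **kwargs):
        for attempt in range(max_attempts):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if attempt == max_attempts - 1:
                    raise  # Out of attempts: surface the error.
                # Backoff schedule: 0.5s, 1s, 2s, ... plus random jitter
                # so concurrent clients don't retry in lockstep.
                sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
    return wrapped

# Demo with a flaky function that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky, sleep=lambda s: None)()
```

The `sleep` parameter is injected so tests can skip the real delays; production code would leave the default.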
Token Cost Optimization
How tokenization affects costs. Prompt compression techniques. When to use smaller models. Cost monitoring in production.
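Cost estimation is simple arithmetic once you have token counts. In practice the counts would come from tiktoken (e.g. `tiktoken.encoding_for_model("gpt-4o")`); here they are passed in directly, and the per-million-token prices are placeholder assumptions, so always check the provider's current pricing page:

```python
# Placeholder prices in USD per million tokens: (input, output).
# These are assumed values for illustration, not real pricing.
PRICES_PER_MILLION = {
    "small-model": (0.15, 0.60),
    "large-model": (2.50, 10.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate one call's cost from token counts and a price table."""
    in_price, out_price = PRICES_PER_MILLION[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A 1,200-token prompt with a 300-token completion, run 100 times:
per_run = estimate_cost("large-model", 1_200, 300)
hundred_runs = 100 * per_run
```

Note that output tokens typically cost several times more than input tokens, which is one reason verbose chain-of-thought responses add up quickly.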
Weekly Build: Smart Prompt Optimizer
Build a prompt optimization tool that takes a naive prompt, analyzes it, and automatically applies chain-of-thought, few-shot examples, and structured output formatting.
Architecture
User Input (naive prompt)
|
v
Prompt Analyzer (classify intent, detect weaknesses)
|
v
Strategy Selector (choose: CoT, few-shot, structured, etc.)
|
v
Prompt Rewriter (apply selected strategies)
|
v
A/B Tester (run both prompts, compare outputs)
|
v
Report (show improvement metrics)
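The pipeline above can be sketched as plain Python functions. Everything here is a stub showing the data flow between stages; the real heuristics, strategies, and evaluator are the week's build work:

```python
def analyze(prompt: str) -> list[str]:
    # Detect weaknesses with two cheap illustrative heuristics.
    weaknesses = []
    if len(prompt.split()) < 8:
        weaknesses.append("too_vague")
    if "json" not in prompt.lower():
        weaknesses.append("no_output_format")
    return weaknesses

def select_strategies(weaknesses: list[str]) -> list[str]:
    # Map each detected weakness to a rewriting strategy.
    mapping = {"too_vague": "chain_of_thought",
               "no_output_format": "structured_output"}
    return [mapping[w] for w in weaknesses if w in mapping]

def rewrite(prompt: str, strategies: list[str]) -> str:
    # Apply each selected strategy to the naive prompt.
    if "chain_of_thought" in strategies:
        prompt += "\nThink step by step before answering."
    if "structured_output" in strategies:
        prompt += "\nRespond as JSON."
    return prompt

naive = "Summarize this article."
improved = rewrite(naive, select_strategies(analyze(naive)))
```

The A/B tester and report stages would then run `naive` and `improved` against a live model and compare the outputs; those stages are omitted here because they need API access.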
Key Files
| File | Purpose |
|---|---|
| main.py | CLI entry point |
| analyzer.py | Prompt weakness detection |
| strategies.py | CoT, few-shot, structured output strategies |
| rewriter.py | Apply strategies to prompts |
| evaluator.py | A/B test and score outputs |
Resources
Required Reading
- OpenAI Prompt Engineering Guide
- Anthropic Prompt Engineering Documentation
- "Attention Is All You Need" paper (skim sections 1-3)
Code Repository
Clone the bootcamp repo and switch to the week-01 branch:
git clone https://github.com/softbricks-academy/agentic-ai-bootcamp.git
cd agentic-ai-bootcamp
git checkout week-01
Session Recording
Recording will be available within 24 hours after the live session. Check the WhatsApp group for the link.
Homework
Due before Week 2 live session.
- Complete the prompt optimizer build — push your code to the bootcamp repo
- Experiment with 3 different models — compare GPT-4o, Claude, and Gemini on the same prompt optimization task
- Write a 1-page reflection — what surprised you about prompt engineering? Share in the WhatsApp group
- Cost analysis — calculate the token cost of running your optimizer 100 times on each model