Week 01: LLM Foundations & Prompt Engineering

Build: Smart prompt optimizer with automatic chain-of-thought

What You'll Learn

Understand how LLMs actually work: tokenization, context windows, temperature, and why the same prompt can produce different results. You'll learn the prompt engineering patterns professional engineers actually use, not the simplified versions most tutorials teach.

Session Schedule

Day        Time                  Focus
Saturday   8:00 - 11:00 PM WAT   LLM Architecture & Prompt Patterns
Sunday     8:00 - 11:00 PM WAT   API Integration & Build Session

Pre-Requisites

  • Python 3.12+ installed
  • OpenAI API key (free tier works)
  • Anthropic API key (optional but recommended)
  • VS Code or PyCharm
  • Completed Orientation Modules 1-5

Topics Covered

Transformer Architecture Intuition

How attention mechanisms work. Why transformers replaced RNNs. What "context window" really means and why it matters for production systems.
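
To build that intuition concretely, here is a minimal sketch of scaled dot-product attention for a single query, written in plain Python (the vectors and dimensions are illustrative, not from any real model):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector.

    The query is compared against every key; the resulting weights
    (which sum to 1) blend the value vectors into one output.
    """
    d_k = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d_k)
              for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    output = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return output, weights

# The query matches the first key best, so most weight lands on the first value.
out, weights = attention(
    query=[1.0, 0.0],
    keys=[[1.0, 0.0], [0.0, 1.0]],
    values=[[10.0, 0.0], [0.0, 10.0]],
)
```

This is the core operation a transformer runs for every token against every other token in the context, which is why the context window size matters so much for cost and latency.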


Prompt Engineering Patterns

Few-shot prompting, chain-of-thought, ReAct pattern, structured outputs. When to use each and why most tutorials teach them wrong.
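
These patterns are mostly text transformations, so they compose well. A hedged sketch of a prompt builder that layers few-shot examples, a chain-of-thought cue, and a structured-output instruction (the function name and wording are illustrative, not a standard API):

```python
def build_prompt(task, examples=None, chain_of_thought=False, json_schema=None):
    """Assemble a prompt from optional few-shot examples, a CoT cue,
    and a structured-output instruction."""
    parts = []
    if examples:
        parts.append("Here are some examples:")
        for inp, outp in examples:
            parts.append(f"Input: {inp}\nOutput: {outp}")
    parts.append(task)
    if chain_of_thought:
        parts.append("Think step by step before giving your final answer.")
    if json_schema:
        parts.append(f"Respond only with JSON matching this schema: {json_schema}")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Classify the sentiment of: 'The battery dies in an hour.'",
    examples=[("Great screen!", "positive"), ("Shipping took forever.", "negative")],
    chain_of_thought=True,
    json_schema='{"sentiment": "positive|negative|neutral"}',
)
```

Note the ordering: examples first, then the task, then output constraints. Keeping each pattern as an independent toggle is what makes the weekly build's strategy selector possible.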


Model Selection

GPT-4o vs Claude 3.5 vs Gemini. When to use which model. Cost vs quality tradeoffs. How to benchmark for your use case.
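
A benchmark for your own use case can be as simple as running one prompt through each candidate and ranking by quality per dollar. A minimal sketch, where `call_model` and `score` are functions you supply (the stub backend and its costs below are made up for demonstration):

```python
def benchmark(prompt, models, call_model, score):
    """Run one prompt against several models, rank by score per dollar.

    call_model(name, prompt) -> (output_text, cost_usd)
    score(output_text) -> numeric quality score (yours to define)
    """
    results = []
    for name in models:
        output, cost_usd = call_model(name, prompt)
        s = score(output)
        results.append({
            "model": name,
            "score": s,
            "cost_usd": cost_usd,
            "score_per_dollar": s / cost_usd if cost_usd else float("inf"),
        })
    return sorted(results, key=lambda r: r["score_per_dollar"], reverse=True)

# Stub backend for demonstration; swap in real API calls for a real benchmark.
def fake_call(name, prompt):
    canned = {"gpt-4o": ("detailed answer", 0.02), "gemini": ("short answer", 0.005)}
    return canned[name]

ranking = benchmark("Summarize X", ["gpt-4o", "gemini"], fake_call, score=len)
```

The hard part in practice is the `score` function: for the prompt optimizer build, a rubric-based LLM judge or exact-match check against expected output is a reasonable starting point.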


OpenAI & Anthropic APIs

Authentication, streaming, function calling, error handling, retry logic. Production patterns, not hello world.
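
Retry logic is the pattern that most clearly separates production code from hello-world demos. A minimal sketch of exponential backoff with jitter, shown here wrapping a stub instead of a real SDK call (in production you would pass only the SDK's retryable exceptions, such as rate-limit and timeout errors, via `retry_on`):

```python
import random
import time

def retry_with_backoff(fn, max_retries=5, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(), retrying transient failures with exponential backoff + jitter.

    Delay doubles each attempt (1s, 2s, 4s, ...) plus a small random jitter
    so many clients don't retry in lockstep. Re-raises after the last attempt.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Demo with a flaky stub that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "ok"

result = retry_with_backoff(flaky, base_delay=0.0)
```

To use it with a real client, wrap the API call in a zero-argument function or lambda and restrict `retry_on` to the transient error types; retrying on authentication or validation errors only wastes money.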


Token Cost Optimization

How tokenization affects costs. Prompt compression techniques. When to use smaller models. Cost monitoring in production.

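
The cost arithmetic itself is simple once you have token counts (which you would normally get from a tokenizer such as tiktoken). A sketch with placeholder prices; the numbers below are illustrative, so check the provider's current pricing page before relying on any of them:

```python
# Placeholder per-million-token prices -- NOT real pricing.
PRICES_PER_1M = {
    "small-model": {"input": 0.15, "output": 0.60},
    "large-model": {"input": 2.50, "output": 10.00},
}

def estimate_cost(model, input_tokens, output_tokens, runs=1):
    """Estimated USD cost for `runs` calls with the given token counts.

    Output tokens are typically priced several times higher than input
    tokens, which is why verbose responses dominate the bill.
    """
    p = PRICES_PER_1M[model]
    per_run = (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
    return per_run * runs

# e.g. 1,200 input + 400 output tokens per call, 100 calls:
cost = estimate_cost("large-model", input_tokens=1_200, output_tokens=400, runs=100)
```

This is also the calculation the Week 1 homework's cost-analysis task asks for: count tokens per run, multiply by real prices, multiply by 100.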

Weekly Build: Smart Prompt Optimizer

Build a prompt optimization tool that takes a naive prompt, analyzes it, and automatically applies chain-of-thought, few-shot examples, and structured output formatting.

Architecture

User Input (naive prompt)
    |
    v
Prompt Analyzer (classify intent, detect weaknesses)
    |
    v
Strategy Selector (choose: CoT, few-shot, structured, etc.)
    |
    v
Prompt Rewriter (apply selected strategies)
    |
    v
A/B Tester (run both prompts, compare outputs)
    |
    v
Report (show improvement metrics)
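
The first three stages of the pipeline above can be sketched as plain functions. The weakness labels, heuristics, and strategy mapping here are deliberately naive placeholders, not the expected final implementation:

```python
def analyze(prompt):
    """Classify weaknesses in a naive prompt (toy heuristics)."""
    weaknesses = []
    if len(prompt.split()) < 8:
        weaknesses.append("too_vague")
    if "example" not in prompt.lower():
        weaknesses.append("no_examples")
    return weaknesses

def select_strategies(weaknesses):
    """Map detected weaknesses to rewriting strategies."""
    mapping = {"too_vague": "chain_of_thought", "no_examples": "few_shot"}
    return [mapping[w] for w in weaknesses if w in mapping]

def rewrite(prompt, strategies):
    """Apply each selected strategy as a simple text transform."""
    if "few_shot" in strategies:
        prompt = "Here are two examples of good answers: ...\n\n" + prompt
    if "chain_of_thought" in strategies:
        prompt = prompt + "\n\nThink step by step before answering."
    return prompt

naive = "Summarize this article"
improved = rewrite(naive, select_strategies(analyze(naive)))
```

The A/B tester then runs both `naive` and `improved` through the same model and scores the outputs; keeping each stage a pure function makes that comparison (and unit testing) straightforward.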

Key Files

File           Purpose
main.py        CLI entry point
analyzer.py    Prompt weakness detection
strategies.py  CoT, few-shot, structured output strategies
rewriter.py    Apply strategies to prompts
evaluator.py   A/B test and score outputs

Resources

Required Reading

  • OpenAI Prompt Engineering Guide
  • Anthropic Prompt Engineering Documentation
  • "Attention Is All You Need" paper (skim sections 1-3)

Code Repository

Clone the bootcamp repo and switch to the week-01 branch:

git clone https://github.com/softbricks-academy/agentic-ai-bootcamp.git
cd agentic-ai-bootcamp
git checkout week-01

Session Recording

Recording will be available within 24 hours after the live session. Check the WhatsApp group for the link.

Homework

Due before Week 2 live session.

  1. Complete the prompt optimizer build — push your code to the bootcamp repo
  2. Experiment with 3 different models — compare GPT-4o, Claude, and Gemini on the same prompt optimization task
  3. Write a 1-page reflection — what surprised you about prompt engineering? Share in the WhatsApp group
  4. Cost analysis — calculate the token cost of running your optimizer 100 times on each model