AI AI EducationCurriculum Library
All courses

AI Curriculum

4. Early-Career Technical AI Practitioner Program

AudienceEarly-career software engineers, data analysts, ML engineers, product engineers, technical founders, and automation builders
Duration8-10 weeks, 2 live sessions per week plus a build sprint. Can be compressed into a 5-day intensive for corporate teams
Modules10

4. Early-Career Technical AI Practitioner Program

Course Positioning

A production-oriented technical program for engineers, analysts, product builders, and junior data professionals who need to build reliable AI systems rather than merely use AI tools.

Learning outcomes

  • Design AI applications with clear user workflows, model boundaries, data flow, evaluation, monitoring, and fallback behavior.
  • Build RAG, structured extraction, classification, summarization, and agentic workflows with realistic reliability checks.
  • Understand when to prompt, fine-tune, retrieve, use tools, add human review, or avoid AI altogether.
  • Create evaluation datasets, automated tests, guardrails, and regression checks for LLM-based systems.
  • Estimate latency, cost, accuracy, privacy, and maintenance burden for AI features.
  • Ship a small AI product prototype with documentation, tests, and deployment plan.

Course Design Snapshot

  • Positioning: A production-oriented technical program for engineers, analysts, product builders, and junior data professionals who need to build reliable AI systems rather than merely use AI tools.
  • Audience: Early-career software engineers, data analysts, ML engineers, product engineers, technical founders, and automation builders.
  • Duration: 8-10 weeks, 2 live sessions per week plus a build sprint. Can be compressed into a 5-day intensive for corporate teams.
  • Prerequisites: Working Python or JavaScript ability, Git basics, API usage, and comfort with databases or spreadsheets.
  • Format: Architecture patterns, code labs, evaluation workshops, debugging clinics, and a production-style final build.

Expanded Topic-by-Topic Coverage

Module 1. AI product architecture

Module focus: AI product architecture: user intent, task decomposition, model choice, input/output contracts, and failure modes. Primary live activity or lab: Map an existing workflow into an AI-assisted system diagram.

Topics and coverage

user intent

  • What it means: define user intent clearly and connect it to the module focus: AI product architecture: user intent, task decomposition, model choice, input/output contracts, and failure modes.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

task decomposition

  • What it means: define task decomposition clearly and connect it to the module focus: AI product architecture: user intent, task decomposition, model choice, input/output contracts, and failure modes.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

model choice

  • What it means: place model choice inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

input/output contracts

  • What it means: define input/output contracts clearly and connect it to the module focus: AI product architecture: user intent, task decomposition, model choice, input/output contracts, and failure modes.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

failure modes

  • What it means: define failure modes clearly and connect it to the module focus: AI product architecture: user intent, task decomposition, model choice, input/output contracts, and failure modes.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Map an existing workflow into an AI-assisted system diagram.
  • Learners produce: Map an existing workflow into an AI-assisted system diagram.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 2. LLM APIs and structured outputs

Module focus: LLM APIs and structured outputs: prompt contracts, JSON schemas, function calling, retries, and validation. Primary live activity or lab: Build a structured extraction pipeline with schema validation and error handling.

Topics and coverage

prompt contracts

  • What it means: explain how prompt contracts changes the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

JSON schemas

  • What it means: define JSON schemas clearly and connect it to the module focus: LLM APIs and structured outputs: prompt contracts, JSON schemas, function calling, retries, and validation.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

function calling

  • What it means: explain how function calling changes the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

retries

  • What it means: define retries clearly and connect it to the module focus: LLM APIs and structured outputs: prompt contracts, JSON schemas, function calling, retries, and validation.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

validation

  • What it means: define validation clearly and connect it to the module focus: LLM APIs and structured outputs: prompt contracts, JSON schemas, function calling, retries, and validation.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Build a structured extraction pipeline with schema validation and error handling.
  • Learners produce: Build a structured extraction pipeline with schema validation and error handling.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 3. Embeddings and RAG

Module focus: Embeddings and RAG: chunking, indexing, retrieval quality, reranking, citations, and answer faithfulness. Primary live activity or lab: Build a RAG prototype and evaluate retrieval and answer grounding separately.

Topics and coverage

chunking

  • What it means: define chunking clearly and connect it to the module focus: Embeddings and RAG: chunking, indexing, retrieval quality, reranking, citations, and answer faithfulness.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

indexing

  • What it means: define indexing clearly and connect it to the module focus: Embeddings and RAG: chunking, indexing, retrieval quality, reranking, citations, and answer faithfulness.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

retrieval quality

  • What it means: explain how retrieval quality changes the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

reranking

  • What it means: define reranking clearly and connect it to the module focus: Embeddings and RAG: chunking, indexing, retrieval quality, reranking, citations, and answer faithfulness.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

citations

  • What it means: define citations clearly and connect it to the module focus: Embeddings and RAG: chunking, indexing, retrieval quality, reranking, citations, and answer faithfulness.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

answer faithfulness

  • What it means: define answer faithfulness clearly and connect it to the module focus: Embeddings and RAG: chunking, indexing, retrieval quality, reranking, citations, and answer faithfulness.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Build a RAG prototype and evaluate retrieval and answer grounding separately.
  • Learners produce: Build a RAG prototype and evaluate retrieval and answer grounding separately.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 4. Evals

Module focus: Evals: golden datasets, synthetic tests, adversarial tests, model-as-judge limits, and regression suites. Primary live activity or lab: Create an eval harness for summarization or extraction with test cases.

Topics and coverage

golden datasets

  • What it means: connect golden datasets to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

synthetic tests

  • What it means: define synthetic tests clearly and connect it to the module focus: Evals: golden datasets, synthetic tests, adversarial tests, model-as-judge limits, and regression suites.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

adversarial tests

  • What it means: define adversarial tests clearly and connect it to the module focus: Evals: golden datasets, synthetic tests, adversarial tests, model-as-judge limits, and regression suites.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

model-as-judge limits

  • What it means: place model-as-judge limits inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

regression suites

  • What it means: place regression suites inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

Practice and evidence of learning

  • Learners complete or discuss: Create an eval harness for summarization or extraction with test cases.
  • Learners produce: Create an eval harness for summarization or extraction with test cases.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 5. Agents and tool use

Module focus: Agents and tool use: planning loops, tool routers, state, memory, permissions, human confirmation, and rollback. Primary live activity or lab: Build a constrained agent that uses two tools and logs every action.

Topics and coverage

planning loops

  • What it means: define planning loops clearly and connect it to the module focus: Agents and tool use: planning loops, tool routers, state, memory, permissions, human confirmation, and rollback.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

tool routers

  • What it means: define tool routers clearly and connect it to the module focus: Agents and tool use: planning loops, tool routers, state, memory, permissions, human confirmation, and rollback.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

state

  • What it means: define state clearly and connect it to the module focus: Agents and tool use: planning loops, tool routers, state, memory, permissions, human confirmation, and rollback.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

memory

  • What it means: define memory clearly and connect it to the module focus: Agents and tool use: planning loops, tool routers, state, memory, permissions, human confirmation, and rollback.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

permissions

  • What it means: define permissions clearly and connect it to the module focus: Agents and tool use: planning loops, tool routers, state, memory, permissions, human confirmation, and rollback.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

human confirmation

  • What it means: define human confirmation clearly and connect it to the module focus: Agents and tool use: planning loops, tool routers, state, memory, permissions, human confirmation, and rollback.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

rollback

  • What it means: define rollback clearly and connect it to the module focus: Agents and tool use: planning loops, tool routers, state, memory, permissions, human confirmation, and rollback.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Build a constrained agent that uses two tools and logs every action.
  • Learners produce: Build a constrained agent that uses two tools and logs every action.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 6. Data and privacy engineering

Module focus: Data and privacy engineering: PII, access control, redaction, retention, audit logs, and tenant separation. Primary live activity or lab: Add redaction and permission checks to an AI workflow.

Topics and coverage

PII

  • What it means: define PII clearly and connect it to the module focus: Data and privacy engineering: PII, access control, redaction, retention, audit logs, and tenant separation.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

access control

  • What it means: define access control clearly and connect it to the module focus: Data and privacy engineering: PII, access control, redaction, retention, audit logs, and tenant separation.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

redaction

  • What it means: define redaction clearly and connect it to the module focus: Data and privacy engineering: PII, access control, redaction, retention, audit logs, and tenant separation.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

retention

  • What it means: define retention clearly and connect it to the module focus: Data and privacy engineering: PII, access control, redaction, retention, audit logs, and tenant separation.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

audit logs

  • What it means: define audit logs clearly and connect it to the module focus: Data and privacy engineering: PII, access control, redaction, retention, audit logs, and tenant separation.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

tenant separation

  • What it means: define tenant separation clearly and connect it to the module focus: Data and privacy engineering: PII, access control, redaction, retention, audit logs, and tenant separation.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Add redaction and permission checks to an AI workflow.
  • Learners produce: Add redaction and permission checks to an AI workflow.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 7. Deployment and observability

Module focus: Deployment and observability: queues, caching, streaming, tracing, cost tracking, latency budgets, and incident response. Primary live activity or lab: Deploy a small endpoint and capture traces, errors, latency, and cost per request.

Topics and coverage

queues

  • What it means: define queues clearly and connect it to the module focus: Deployment and observability: queues, caching, streaming, tracing, cost tracking, latency budgets, and incident response.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

caching

  • What it means: define caching clearly and connect it to the module focus: Deployment and observability: queues, caching, streaming, tracing, cost tracking, latency budgets, and incident response.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

streaming

  • What it means: define streaming clearly and connect it to the module focus: Deployment and observability: queues, caching, streaming, tracing, cost tracking, latency budgets, and incident response.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

tracing

  • What it means: define tracing clearly and connect it to the module focus: Deployment and observability: queues, caching, streaming, tracing, cost tracking, latency budgets, and incident response.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

cost tracking

  • What it means: place cost tracking inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

latency budgets

  • What it means: place latency budgets inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

incident response

  • What it means: define incident response clearly and connect it to the module focus: Deployment and observability: queues, caching, streaming, tracing, cost tracking, latency budgets, and incident response.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Deploy a small endpoint and capture traces, errors, latency, and cost per request.
  • Learners produce: Deploy a small endpoint and capture traces, errors, latency, and cost per request.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 8. Fine-tuning and adaptation

Module focus: Fine-tuning and adaptation: prompt tuning, adapters, supervised fine-tuning, data quality, and when not to fine-tune. Primary live activity or lab: Compare prompt-only, RAG, and fine-tuned or few-shot approaches on the same task.

Topics and coverage

prompt tuning

  • What it means: explain how prompt tuning changes the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

adapters

  • What it means: define adapters clearly and connect it to the module focus: Fine-tuning and adaptation: prompt tuning, adapters, supervised fine-tuning, data quality, and when not to fine-tune.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

supervised fine-tuning

  • What it means: define supervised fine-tuning clearly and connect it to the module focus: Fine-tuning and adaptation: prompt tuning, adapters, supervised fine-tuning, data quality, and when not to fine-tune.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

data quality

  • What it means: connect data quality to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

when not to fine-tune

  • What it means: define when not to fine-tune clearly and connect it to the module focus: Fine-tuning and adaptation: prompt tuning, adapters, supervised fine-tuning, data quality, and when not to fine-tune.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Compare prompt-only, RAG, and fine-tuned or few-shot approaches on the same task.
  • Learners produce: Compare prompt-only, RAG, and fine-tuned or few-shot approaches on the same task.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 9. Security and misuse

Module focus: Security and misuse: prompt injection, data exfiltration, insecure tools, jailbreaks, and model supply-chain risk. Primary live activity or lab: Run a prompt-injection test suite against a RAG or agent workflow.

Topics and coverage

prompt injection

  • What it means: explain how prompt injection changes the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

data exfiltration

  • What it means: connect data exfiltration to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

insecure tools

  • What it means: define insecure tools clearly and connect it to the module focus: Security and misuse: prompt injection, data exfiltration, insecure tools, jailbreaks, and model supply-chain risk.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

jailbreaks

  • What it means: define jailbreaks clearly and connect it to the module focus: Security and misuse: prompt injection, data exfiltration, insecure tools, jailbreaks, and model supply-chain risk.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

model supply-chain risk

  • What it means in this course: define model supply-chain risk in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what Early-career software engineers, data analysts, ML engineers, product engineers, technical founders, and automation builders must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

Practice and evidence of learning

  • Learners complete or discuss: Run a prompt-injection test suite against a RAG or agent workflow.
  • Learners produce: Run a prompt-injection test suite against a RAG or agent workflow.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 10. Capstone sprint

Module focus: Capstone sprint: end-to-end AI feature from problem framing to demo, evals, and deployment plan. Primary live activity or lab: Ship a working prototype with documentation and a reliability report.

Topics and coverage

end-to-end AI feature from problem framing to demo

  • What it means: connect end-to-end AI feature from problem framing to demo to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

evals

  • What it means: define evals clearly and connect it to the module focus: Capstone sprint: end-to-end AI feature from problem framing to demo, evals, and deployment plan.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

deployment plan

  • What it means: place deployment plan inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

Practice and evidence of learning

  • Learners complete or discuss: Ship a working prototype with documentation and a reliability report.
  • Learners produce: Ship a working prototype with documentation and a reliability report.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Core labs and builds

  • Structured extraction lab from messy PDFs, emails, or support tickets.
  • RAG lab with retrieval metrics, answer faithfulness checks, and failure cases.
  • Agent lab with strict permissions, tool logs, and human approval checkpoints.
  • Production readiness lab: latency/cost budget, eval dashboard, and monitoring design.

Capstone

  • Build a production-style AI assistant or workflow automation for a real use case such as invoice extraction, legal document triage, clinical intake summarization, customer support QA, sales research, or internal knowledge search. The capstone must include tests, evals, logs, fallback behavior, and cost estimate.

Assessment design

  • Code reviews for reliability, readability, and defensive design.
  • Evaluation dataset quality and coverage.
  • Architecture review: data flow, security, privacy, and operational constraints.
  • Final demo with live failure testing and incident response discussion.
  • Python or TypeScript, OpenAI/Anthropic/Gemini-compatible APIs, LangChain or LlamaIndex only after manual pipeline basics, FastAPI or Next.js, vector database, pytest, GitHub Actions, tracing tools, Docker.

Instructor notes

  • This course should feel closer to a software engineering bootcamp than a prompt engineering workshop. The core habit is to treat LLMs as unreliable services that need contracts, evals, monitoring, and user-centered design.

Instructor Build Checklist

  • Prepare one short demo for each module and one learner activity that creates a saved artifact.
  • Prepare examples that match the audience, local context, and likely tools learners can access.
  • Add a verification step to every AI-generated output: factual check, source check, data sensitivity check, and quality review.
  • Keep a running portfolio folder so each module contributes to the final project or learner playbook.
  • Reserve time for reflection on what the learner did, what AI did, what was checked, and what remains uncertain.