2. Technical AI and Machine Learning for Grades 11-12

Audience: Students in grades 11-12, ideally with basic Python exposure and school-level mathematics
Duration: 12 weeks, 2 sessions per week, 90 minutes per session, with optional weekend project clinics
Modules: 12

Course Positioning

A rigorous pre-university AI course for students who may pursue computer science, engineering, statistics, data science, design, medicine, business analytics, or scientific research.

Learning outcomes

  • Implement linear regression, logistic regression, gradient descent, and a simple neural network in Python.
  • Use vectors, matrices, loss functions, gradients, and optimization to explain how models learn.
  • Compare classical ML, deep learning, transformers, RAG, and fine-tuning at a high level.
  • Evaluate models using classification and regression metrics, validation sets, and failure slices.
  • Build a small ML or LLM-assisted application with documented data, evaluation, and deployment choices.
  • Read a simplified AI research paper and extract the problem, method, experiment, and limitation.

Course Design Snapshot

  • Positioning: A rigorous pre-university AI course for students who may pursue computer science, engineering, statistics, data science, design, medicine, business analytics, or scientific research.
  • Audience: Students in grades 11-12, ideally with basic Python exposure and school-level mathematics.
  • Duration: 12 weeks, 2 sessions per week, 90 minutes per session, with optional weekend project clinics.
  • Prerequisites: Basic Python, algebra, graphs, functions, probability intuition, and willingness to work through notebooks.
  • Format: Concept lecture, math intuition, code walkthrough, lab, error analysis, and project checkpoint.

Expanded Topic-by-Topic Coverage

Module 1. AI problem framing

Module focus: AI problem framing, covering prediction, classification, generation, control, ranking, retrieval, and decision support. Primary live activity or lab: Turn five real-world problems into formal ML tasks with inputs, labels, metrics, and risks.
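
To make the lab concrete, below is a minimal sketch of one way to capture a framed task in Python, the course language. The TaskCard fields and the library-books example are illustrative assumptions, not a required schema.

    # A minimal, illustrative "task card" for turning a real-world problem
    # into a formal ML task. Field names are an assumption for teaching.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TaskCard:
        problem: str        # the real-world problem in plain language
        task_type: str      # prediction, classification, generation, control,
                            # ranking, retrieval, or decision support
        inputs: List[str]   # what the model sees at decision time
        label: str          # what the model outputs, and where labels come from
        metric: str         # how success will be measured
        risks: str          # what can go wrong and who is affected

    example = TaskCard(
        problem="Flag library books likely to be returned late",
        task_type="classification",
        inputs=["days borrowed", "book category", "past late returns"],
        label="late yes/no, from historical circulation records",
        metric="recall at a fixed false-positive rate",
        risks="unfair flags for some borrower groups; labels reflect old policy",
    )
    print(example)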

Topics and coverage

prediction

  • What it means: define prediction clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

classification

  • What it means: place classification inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

generation

  • What it means: define generation clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

control

  • What it means: define control clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

ranking

  • What it means: define ranking clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

retrieval

  • What it means: explain how retrieval changes the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

decision support

  • What it means: show where decision support appears in the learner's real workflow and which parts are judgment-heavy versus draftable.
  • What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final decision.
  • Demonstration: convert one messy real-world input into a structured brief, draft, analysis, checklist, or next action.
  • Evidence of learning: learners produce a reusable template or playbook entry that can be used after the course.

Practice and evidence of learning

  • Learners complete and document the primary lab: Turn five real-world problems into formal ML tasks with inputs, labels, metrics, and risks.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 2. Python for data

Module focus: Python for data, covering arrays, dataframes, visualization, preprocessing, missing data, leakage, and reproducibility. Primary live activity or lab: Clean a dataset, create train/validation/test splits, and write a short data report.
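
As a reference for the lab, here is a minimal sketch of a leakage-safe train/validation/test split using pandas and scikit-learn, both of which appear in the course tool list; the toy dataset and column names are hypothetical.

    # Minimal sketch: split once with a fixed seed, then fit any
    # preprocessing statistics on the training portion only.
    import pandas as pd
    from sklearn.model_selection import train_test_split

    df = pd.DataFrame({
        "hours_studied": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
        "passed":        [0, 0, 0, 1, 0, 1, 1, 1, 1, 1],
    })

    train, temp = train_test_split(df, test_size=0.4, random_state=42)
    valid, test = train_test_split(temp, test_size=0.5, random_state=42)

    # Leakage check: imputation/scaling values come from train alone.
    fill_value = train["hours_studied"].mean()
    print(len(train), len(valid), len(test), round(fill_value, 2))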

Topics and coverage

arrays

  • What it means: define arrays clearly and connect them to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

dataframes

  • What it means: connect dataframes to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

visualization

  • What it means: define visualization clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

preprocessing

  • What it means: show where preprocessing appears in the learner's real workflow and which parts are judgment-heavy versus draftable.
  • What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final decision.
  • Demonstration: convert one messy real-world input into a structured brief, draft, analysis, checklist, or next action.
  • Evidence of learning: learners produce a reusable template or playbook entry that can be used after the course.

missing data

  • What it means: connect missing data to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

leakage

  • What it means: define leakage clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

reproducibility

  • What it means: define reproducibility clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete and document the primary lab: Clean a dataset, create train/validation/test splits, and write a short data report.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 3. Regression

Module focus: Regression, covering linear functions, mean squared error, residuals, regularization intuition, and feature scaling. Primary live activity or lab: Fit linear regression and compare hand-computed predictions with library output.
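
A minimal sketch of the lab's core comparison, assuming a toy one-feature dataset: fit scikit-learn's LinearRegression, then confirm that a hand-computed w*x + b reproduces the library's prediction.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.array([[1.0], [2.0], [3.0], [4.0]])
    y = np.array([2.1, 3.9, 6.2, 7.8])          # roughly y = 2x

    model = LinearRegression().fit(X, y)
    w, b = model.coef_[0], model.intercept_

    x_new = 5.0
    by_hand = w * x_new + b                     # prediction from the fitted line
    by_library = model.predict([[x_new]])[0]    # same prediction via the library
    print(round(by_hand, 3), round(by_library, 3))   # the two should match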

Topics and coverage

linear functions

  • What it means: define linear functions clearly and connect them to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

mean squared error

  • What it means: define mean squared error clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

residuals

  • What it means: define residuals clearly and connect them to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

regularization intuition

  • What it means: define regularization intuition clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

feature scaling

  • What it means: connect feature scaling to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

Practice and evidence of learning

  • Learners complete and document the primary lab: Fit linear regression and compare hand-computed predictions with library output.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 4. Classification

Module focus: Classification, covering logistic regression, probabilities, cross-entropy intuition, thresholds, and class imbalance. Primary live activity or lab: Build a binary classifier and choose a threshold for two different business goals.
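
A minimal sketch of the threshold exercise on made-up scores: the same classifier can serve two different business goals simply by moving the decision threshold.

    import numpy as np

    probs  = np.array([0.05, 0.20, 0.45, 0.55, 0.70, 0.90])  # model scores
    labels = np.array([0,    0,    1,    0,    1,    1])     # true classes

    def recall_precision(threshold):
        preds = (probs >= threshold).astype(int)
        tp = ((preds == 1) & (labels == 1)).sum()
        fp = ((preds == 1) & (labels == 0)).sum()
        fn = ((preds == 0) & (labels == 1)).sum()
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        return recall, precision

    # Goal A (catch nearly every positive): lower threshold, higher recall.
    # Goal B (avoid false alarms): higher threshold, higher precision.
    for t in (0.3, 0.6):
        r, p = recall_precision(t)
        print(f"threshold={t}: recall={r:.2f}, precision={p:.2f}")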

Topics and coverage

logistic regression

  • What it means: place logistic regression inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

probabilities

  • What it means: define probabilities clearly and connect them to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

cross-entropy intuition

  • What it means: define cross-entropy intuition clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

thresholds

  • What it means: show where thresholds appear in the learner's real workflow and which parts are judgment-heavy versus draftable.
  • What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final decision.
  • Demonstration: convert one messy real-world input into a structured brief, draft, analysis, checklist, or next action.
  • Evidence of learning: learners produce a reusable template or playbook entry that can be used after the course.

class imbalance

  • What it means: define class imbalance clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete and document the primary lab: Build a binary classifier and choose a threshold for two different business goals.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 5. Gradient descent

Module focus: Gradient descent, covering loss landscape intuition, learning rate, epochs, batches, and why optimization can fail. Primary live activity or lab: Code gradient descent for a one-variable model and visualize the update path.
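
A minimal sketch of the lab, assuming a one-variable model y ≈ w*x trained with mean squared error; the update path is saved so learners can plot it, and an oversized learning rate shows how optimization fails.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = 2.0 * x + np.array([0.1, -0.1, 0.2, -0.2])   # roughly y = 2x

    w, lr = 0.0, 0.05          # initial weight and learning rate
    path = [w]
    for epoch in range(50):
        grad = np.mean(2 * (w * x - y) * x)   # d/dw of mean((w*x - y)^2)
        w -= lr * grad
        path.append(w)

    print(round(w, 3))                        # approaches ~2.0
    print([round(p, 2) for p in path[:5]])    # early steps of the update path
    # Re-run with lr = 1.0 to watch the updates diverge instead of converge.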

Topics and coverage

loss landscape intuition

  • What it means: define loss landscape intuition clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

learning rate

  • What it means: define learning rate clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

epochs

  • What it means: define epochs clearly and connect them to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

batches

  • What it means: define batches clearly and connect them to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

why optimization can fail

  • What it means: situate optimization failure inside the training process so learners know why training can go wrong and what tradeoffs the common fixes introduce.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

Practice and evidence of learning

  • Learners complete and document the primary lab: Code gradient descent for a one-variable model and visualize the update path.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 6. Neural networks

Module focus: Neural networks, covering matrix multiplication, activations, hidden layers, the backpropagation concept, and overfitting. Primary live activity or lab: Train an MLP on a small image or tabular dataset and tune architecture choices.
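
A minimal sketch of the lab using scikit-learn's MLPClassifier on the built-in digits dataset; the single hidden layer of 32 units is an arbitrary starting point that learners would tune.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    mlp.fit(X_train, y_train)

    # Overfitting check: compare train accuracy against held-out accuracy.
    print(round(mlp.score(X_train, y_train), 3))
    print(round(mlp.score(X_test, y_test), 3))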

Topics and coverage

matrix multiplication

  • What it means: define matrix multiplication clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

activations

  • What it means: define activations clearly and connect them to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

hidden layers

  • What it means: define hidden layers clearly and connect them to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

backpropagation concept

  • What it means: define the backpropagation concept clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

overfitting

  • What it means: define overfitting clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete and document the primary lab: Train an MLP on a small image or tabular dataset and tune architecture choices.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 7. Computer vision and sequences

Module focus: Computer vision and sequences, covering CNN intuition, embeddings, recurrence limits, attention, and transformers. Primary live activity or lab: Implement a toy attention mechanism and inspect attention scores.
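
A minimal sketch of a toy single-head attention mechanism, with random matrices standing in for learned projections; the shapes are tiny so the score matrix can be inspected by eye.

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)   # subtract max for stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    rng = np.random.default_rng(0)
    seq_len, d = 4, 8                  # 4 tokens, 8-dimensional embeddings
    Q = rng.normal(size=(seq_len, d))  # queries
    K = rng.normal(size=(seq_len, d))  # keys
    V = rng.normal(size=(seq_len, d))  # values

    scores = softmax(Q @ K.T / np.sqrt(d))   # (4, 4); each row sums to 1
    output = scores @ V                      # (4, 8); weighted mix of values
    print(np.round(scores, 2))               # which tokens attend to which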

Topics and coverage

CNN intuition

  • What it means: define CNN intuition clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

embeddings

  • What it means: explain how embeddings change the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

recurrence limits

  • What it means: define recurrence limits clearly and connect them to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

attention

  • What it means: define attention clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

transformers

  • What it means: place transformers inside the AI system stack so learners know what problem they solve and what tradeoffs they introduce.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

Practice and evidence of learning

  • Learners complete and document the primary lab: Implement a toy attention mechanism and inspect attention scores.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 8. Generative AI

Module focus: Generative AI, covering tokenization, pretraining, instruction tuning, the RLHF/RLAIF idea, hallucination, and sampling. Primary live activity or lab: Compare temperature, top-p, and prompt structure on a controlled task.
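
A minimal sketch of how temperature and top-p reshape a next-token distribution; the four-token logits are made up for illustration.

    import numpy as np

    logits = np.array([2.0, 1.0, 0.5, -1.0])   # hypothetical token scores

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # Temperature: divide logits before softmax; low = sharper, high = flatter.
    for temp in (0.5, 1.0, 2.0):
        print("temperature", temp, np.round(softmax(logits / temp), 2))

    # Top-p (nucleus): keep the smallest set of tokens whose cumulative
    # probability reaches p, then renormalize over that set.
    p = 0.9
    probs = softmax(logits)
    order = np.argsort(probs)[::-1]
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), p)) + 1
    keep = order[:cutoff]
    top_p = np.zeros_like(probs)
    top_p[keep] = probs[keep]
    print("top-p", np.round(top_p / top_p.sum(), 2))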

Topics and coverage

tokenization

  • What it means: define tokenization clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

pretraining

  • What it means: place pretraining inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

instruction tuning

  • What it means: define instruction tuning clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

RLHF/RLAIF idea

  • What it means: define the RLHF/RLAIF idea clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

hallucination

  • What it means: define hallucination clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

sampling

  • What it means: define sampling clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete and document the primary lab: Compare temperature, top-p, and prompt structure on a controlled task.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 9. Retrieval and grounding

Module focus: Retrieval and grounding, covering embeddings, vector search, chunking, RAG, citation, and evaluation. Primary live activity or lab: Build a small document QA system over a provided collection and test answer faithfulness.
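
A minimal sketch of the retrieval step only: embed chunks, embed the query, rank by cosine similarity. A real build would use a learned embedding model plus FAISS or Chroma from the tool list; here a tiny bag-of-words vector stands in so the example runs offline.

    import numpy as np

    chunks = [
        "Photosynthesis converts light energy into chemical energy.",
        "Gradient descent updates weights to reduce a loss function.",
        "The French Revolution began in 1789.",
    ]

    vocab = sorted({w.lower().strip(".,?") for c in chunks for w in c.split()})

    def embed(text):
        words = [w.lower().strip(".,?") for w in text.split()]
        return np.array([words.count(w) for w in vocab], dtype=float)

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    query = "How does gradient descent reduce loss?"
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)
    print(ranked[0])   # the gradient descent chunk should rank first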

Topics and coverage

embeddings

  • What it means: explain how embeddings change the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

vector search

  • What it means: define vector search clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

chunking

  • What it means: define chunking clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

RAG

  • What it means: explain how RAG changes the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

citation

  • What it means: define citation clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

evaluation

  • What it means: connect evaluation to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

Practice and evidence of learning

  • Learners complete and document the primary lab: Build a small document QA system over a provided collection and test answer faithfulness.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 10. Evaluation and robustness

Module focus: Evaluation and robustness, covering the confusion matrix, calibration, fairness slices, adversarial examples, and model cards. Primary live activity or lab: Write an error analysis report with at least three failure categories.
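
A minimal sketch of two ingredients of the error analysis report: a confusion matrix and per-slice accuracy. The labels, predictions, and group column are made up for illustration.

    import numpy as np
    from sklearn.metrics import confusion_matrix

    y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
    y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])
    group  = np.array(["A", "A", "A", "B", "B", "B", "A", "B"])  # fairness slice

    print(confusion_matrix(y_true, y_pred))   # rows = true class, cols = predicted

    # Slice the same predictions by group to look for uneven error rates.
    for g in ("A", "B"):
        mask = group == g
        print(g, round((y_true[mask] == y_pred[mask]).mean(), 2))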

Topics and coverage

confusion matrix

  • What it means: define the confusion matrix clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

calibration

  • What it means: define calibration clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

fairness slices

  • What it means: define fairness slices clearly and connect them to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

adversarial examples

  • What it means: define adversarial examples clearly and connect them to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

model cards

  • What it means: place model cards inside the AI system stack so learners know what problem they solve and what tradeoffs they introduce.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

Practice and evidence of learning

  • Learners complete and document the primary lab: Write an error analysis report with at least three failure categories.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 11. Deployment basics

Module focus: Deployment basics, covering APIs, latency, cost, privacy, monitoring, and human-in-the-loop workflows. Primary live activity or lab: Create a simple web or notebook interface around a model.
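
A minimal sketch of wrapping a model in a tiny web API. Flask is an assumption here (any small framework or a notebook widget works), and the stub predict function stands in for whatever model learners trained earlier.

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def predict(features):
        # Stub: replace with a real trained model's predict call.
        return {"label": int(sum(features) > 2.0)}

    @app.route("/predict", methods=["POST"])
    def serve():
        data = request.get_json()
        # Human-in-the-loop hook: log inputs and outputs here for review.
        return jsonify(predict(data["features"]))

    if __name__ == "__main__":
        app.run(port=5000)   # POST {"features": [1.0, 2.0]} to /predict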

Topics and coverage

APIs

  • What it means: define APIs clearly and connect them to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

latency

  • What it means: place latency inside the AI system stack so learners know where it arises and what tradeoffs it forces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

cost

  • What it means: place cost inside the AI system stack so learners know where it arises and what tradeoffs it forces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

privacy

  • What it means in this course: define privacy in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what students must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

monitoring

  • What it means: define monitoring clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

human-in-the-loop workflows

  • What it means: show where human-in-the-loop workflows appear in the learner's real workflow and which parts are judgment-heavy versus draftable.
  • What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final decision.
  • Demonstration: convert one messy real-world input into a structured brief, draft, analysis, checklist, or next action.
  • Evidence of learning: learners produce a reusable template or playbook entry that can be used after the course.

Practice and evidence of learning

  • Learners complete and document the primary lab: Create a simple web or notebook interface around a model.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 12. Research and capstone studio

Module focus: Research and capstone studio, covering literature reading, experimental design, ablations, and final presentation. Primary live activity or lab: Present a capstone with data, method, results, limitations, and next steps.

Topics and coverage

literature reading

  • What it means: define literature reading clearly and connect it to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

experimental design

  • What it means: show where experimental design appears in the learner's real workflow and which parts are judgment-heavy versus draftable.
  • What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final decision.
  • Demonstration: convert one messy real-world input into a structured brief, draft, analysis, checklist, or next action.
  • Evidence of learning: learners produce a reusable template or playbook entry that can be used after the course.

ablations

  • What it means: define ablations clearly and connect them to the module focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

final presentation

  • What it means: show where final presentation appears in the learner's real workflow and which parts are judgment-heavy versus draftable.
  • What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final decision.
  • Demonstration: convert one messy real-world input into a structured brief, draft, analysis, checklist, or next action.
  • Evidence of learning: learners produce a reusable template or playbook entry that can be used after the course.

Practice and evidence of learning

  • Learners complete and document the primary lab: Present a capstone with data, method, results, limitations, and next steps.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Core labs and builds

  • From-scratch gradient descent for a small regression problem.
  • Neural network classification lab using a small image or tabular dataset.
  • Mini-RAG system with embeddings, retrieval evaluation, and answer verification.
  • Paper reading lab using one accessible paper on transformers, diffusion, or reinforcement learning.

Capstone

  • Students choose one track: predictive ML project, generative AI assistant with retrieval, computer vision prototype, or AI for science mini-investigation. The final artifact includes code, notebook, model evaluation, model card, and presentation.

Assessment design

  • Technical notebooks with clear code and interpretation.
  • Short math and concept checks after each unit.
  • Mid-course practical exam: implement and evaluate a supervised ML model.
  • Final capstone scored on framing, implementation, evaluation, communication, and safety.

Tools and environment

  • Python, Google Colab, NumPy, pandas, scikit-learn, PyTorch or TensorFlow, Hugging Face demos, FAISS or Chroma for vector search, and GitHub Classroom if available.

Instructor notes

  • The key upgrade from a usage course is that students should implement parts of the learning process, not only call AI tools. The emphasis should be conceptual depth plus visible working code.

Instructor Build Checklist

  • Prepare one short demo for each module and one learner activity that creates a saved artifact.
  • Prepare examples that match the audience, local context, and likely tools learners can access.
  • Add a verification step to every AI-generated output: factual check, source check, data sensitivity check, and quality review.
  • Keep a running portfolio folder so each module contributes to the final project or learner playbook.
  • Reserve time for reflection on what the learner did, what AI did, what was checked, and what remains uncertain.