3. Undergraduate Technical AI Track

Audience: Undergraduates in CS, engineering, math, statistics, biology, economics, design technology, or other quantitative disciplines
Duration: 14 weeks, 3-4 hours of lecture/lab per week plus independent project time
Modules: 14

Course Positioning

A full undergraduate-level technical course that can serve as an AI minor foundation, elective, or intensive bridge into applied AI research.

Learning outcomes

  • Explain supervised, unsupervised, self-supervised, generative, and reinforcement learning paradigms.
  • Derive and implement core ML algorithms at a level sufficient for debugging and research adaptation.
  • Train, evaluate, and compare deep learning models, transformers, embeddings, and generative models.
  • Design reliable experiments using baselines, ablations, uncertainty estimates, and statistical reporting.
  • Build AI systems that include data pipelines, models, evaluation harnesses, deployment constraints, and monitoring.
  • Read modern AI papers and translate them into testable implementation plans.

Course Design Snapshot

  • Positioning: A full undergraduate-level technical course that can serve as an AI minor foundation, elective, or intensive bridge into applied AI research.
  • Audience: Undergraduates in CS, engineering, math, statistics, biology, economics, design technology, or other quantitative disciplines.
  • Duration: 14 weeks, 3-4 hours of lecture/lab per week plus independent project time.
  • Prerequisites: Python, data structures, linear algebra basics, probability basics, and comfort reading technical documentation.
  • Format: Lecture, derivation-lite theory, implementation labs, paper discussions, reproducibility assignments, and final project.

Expanded Topic-by-Topic Coverage

Module 1. Mathematical foundations

Module focus: vectors, matrices, probability, expectations, distributions, and optimization geometry.
Primary live activity or lab: a diagnostic notebook covering vector operations, probability simulation, and gradient visualization.
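
A minimal sketch, in plain Python, of the gradient-visualization piece of the diagnostic notebook; the quadratic objective, starting point, and step size are illustrative stand-ins, not fixed course parameters.

    # Gradient descent on f(w) = (w - 3)^2, whose gradient is 2(w - 3).
    def grad(w):
        return 2.0 * (w - 3.0)

    w, lr = -5.0, 0.1          # illustrative start and step size
    trajectory = [w]
    for _ in range(25):
        w -= lr * grad(w)      # one gradient step
        trajectory.append(w)

    # Print every fifth iterate so learners can see geometric convergence to w = 3.
    for i in range(0, len(trajectory), 5):
        wi = trajectory[i]
        print(f"step {i:2d}: w = {wi:.4f}, f(w) = {(wi - 3.0)**2:.6f}")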

Topics and coverage

vectors

  • What it means: define vectors clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

matrices

  • What it means: define matrices clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

probability

  • What it means: connect probability to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

expectations

  • What it means: define expectations clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

distributions

  • What it means: define distributions clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

optimization geometry

  • What it means: place optimization geometry inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

Practice and evidence of learning

  • Learners complete the primary lab and submit its artifact: a diagnostic notebook covering vector operations, probability simulation, and gradient visualization.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 2. Supervised learning

Module focus: regression, classification, regularization, the bias-variance tradeoff, and cross-validation.
Primary live activity or lab: implement and compare linear/logistic models, tree models, and boosted baselines.
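
A minimal sketch of the comparison lab, assuming scikit-learn; the synthetic dataset and hyperparameters are placeholders an instructor would swap for the real course data.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for a real course dataset.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    models = {
        "logistic": LogisticRegression(max_iter=1000),
        "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "boosted trees": GradientBoostingClassifier(random_state=0),
    }
    for name, model in models.items():
        # Cross-validation keeps the comparison honest across splits.
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")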

Topics and coverage

regression

  • What it means: place regression inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

classification

  • What it means: place classification inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

regularization

  • What it means: define regularization clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

bias-variance

  • What it means: define the bias-variance tradeoff clearly and connect it to the module focus above.
  • What to cover: underfitting versus overfitting, how model capacity and regularization shift the tradeoff, and how the gap between training and validation error reveals it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

cross-validation

  • What it means: define cross-validation clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete the primary lab and submit its artifact: implementations comparing linear/logistic models, tree models, and boosted baselines.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 3. Optimization

Module focus: gradient descent variants, stochasticity, normalization, initialization, and training dynamics.
Primary live activity or lab: train the same model under different optimizers and analyze convergence.
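
A minimal sketch of the optimizer-comparison lab, assuming PyTorch; the linear model, synthetic data, and learning rates are illustrative choices.

    import torch

    def final_loss(optimizer_name, steps=200):
        torch.manual_seed(0)                       # same init and data for both runs
        model = torch.nn.Linear(10, 1)
        x = torch.randn(256, 10)
        y = x @ torch.randn(10, 1)                 # noiseless linear targets
        opt = {"sgd": torch.optim.SGD(model.parameters(), lr=0.05),
               "adam": torch.optim.Adam(model.parameters(), lr=0.05)}[optimizer_name]
        for _ in range(steps):
            loss = torch.nn.functional.mse_loss(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return loss.item()

    # Only the optimizer differs; compare how far each gets in 200 steps.
    for name in ("sgd", "adam"):
        print(name, final_loss(name))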

Topics and coverage

gradient descent variants

  • What it means: define gradient descent variants clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

stochasticity

  • What it means: define stochasticity clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

normalization

  • What it means: define normalization clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

initialization

  • What it means: define initialization clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

training dynamics

  • What it means: place training dynamics inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

Practice and evidence of learning

  • Learners complete the primary lab and submit its artifact: the same model trained under several optimizers, with an analysis of convergence.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 4. Representation learning

Module focus: embeddings, metric learning, dimensionality reduction, clustering, and retrieval.
Primary live activity or lab: build an embedding search system and evaluate nearest-neighbor quality.
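
A minimal sketch of nearest-neighbor search over embeddings, assuming NumPy; the random matrix stands in for vectors produced by a trained encoder.

    import numpy as np

    rng = np.random.default_rng(0)
    emb = rng.normal(size=(1000, 64))                    # placeholder embeddings
    emb /= np.linalg.norm(emb, axis=1, keepdims=True)    # unit-normalize rows

    def nearest(query_idx, k=5):
        # On unit vectors, cosine similarity is just a dot product.
        sims = emb @ emb[query_idx]
        order = np.argsort(-sims)
        return order[1 : k + 1]                          # rank 0 is the query itself

    print(nearest(0))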

Topics and coverage

embeddings

  • What it means: explain how embeddings change the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

metric learning

  • What it means: define metric learning clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

dimensionality reduction

  • What it means: define dimensionality reduction clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

clustering

  • What it means: define clustering clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

retrieval

  • What it means: explain how retrieval changes the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

Practice and evidence of learning

  • Learners complete the primary lab and submit its artifact: an embedding search system with an evaluation of nearest-neighbor quality.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 5. Deep learning

Module focus: MLPs, CNNs, normalization, dropout, residual connections, and GPU training basics.
Primary live activity or lab: implement a PyTorch training loop with logging and checkpointing.
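
A minimal sketch of the training-loop lab, assuming PyTorch; the architecture, toy data, and checkpoint path are placeholders.

    import torch

    model = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(),
                                torch.nn.Linear(64, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x, y = torch.randn(512, 20), torch.randint(0, 2, (512,))   # toy data

    best = float("inf")
    for epoch in range(10):
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")        # logging
        if loss.item() < best:                                 # checkpoint on improvement
            best = loss.item()
            torch.save({"epoch": epoch, "state": model.state_dict()}, "ckpt.pt")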

Topics and coverage

MLPs

  • What it means: define MLPs clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

CNNs

  • What it means: define CNNs clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

normalization

  • What it means: define normalization clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

dropout

  • What it means: define dropout clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

residual connections

  • What it means: define residual connections clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

GPU training basics

  • What it means: place GPU training basics inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

Practice and evidence of learning

  • Learners complete the primary lab and submit its artifact: a PyTorch training loop with logging and checkpointing.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 6. Sequence models and transformers

Module focus: tokenization, attention, positional encodings, and pretraining objectives.
Primary live activity or lab: code a small attention block and train a toy character-level model.
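
A minimal sketch of the attention lab, assuming PyTorch: a single scaled dot-product attention head with illustrative shapes.

    import torch
    import torch.nn.functional as F

    def attention(q, k, v):
        # softmax(Q K^T / sqrt(d)) V
        d = q.shape[-1]
        scores = q @ k.transpose(-2, -1) / d ** 0.5
        return F.softmax(scores, dim=-1) @ v

    # Batch of 2 sequences, length 5, dimension 16; self-attention uses q = k = v.
    x = torch.randn(2, 5, 16)
    print(attention(x, x, x).shape)    # torch.Size([2, 5, 16])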

Topics and coverage

tokenization

  • What it means: define tokenization clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

attention

  • What it means: define attention clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

positional encodings

  • What it means: define positional encodings clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

pretraining objectives

  • What it means: place pretraining objectives inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

Practice and evidence of learning

  • Learners complete the primary lab and submit its artifact: a small attention block and a trained toy character-level model.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 7. Foundation models

Module focus: scaling laws intuition, instruction tuning, alignment, prompting, RAG, fine-tuning, and adapters.
Primary live activity or lab: compare prompting, RAG, and lightweight fine-tuning on the same task.
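
A structural sketch of the prompting-versus-RAG half of the comparison; everything here (the corpus, embed(), and generate()) is a hypothetical placeholder for the embedding model and LLM endpoint the course actually provides.

    import numpy as np

    docs = ["Labs are due Friday at noon.",
            "GPU time is booked through the cluster portal."]

    def embed(texts):
        # Toy bag-of-words vectors; a real lab would call a trained encoder.
        vocab = sorted({w for t in texts for w in t.lower().split()})
        return np.array([[t.lower().split().count(w) for w in vocab] for t in texts],
                        dtype=float)

    def generate(prompt):
        # Placeholder for a real LLM call.
        return f"<model answer conditioned on {len(prompt)} prompt characters>"

    question = "When are labs due?"
    vecs = embed(docs + [question])
    context = docs[int(np.argmax(vecs[:-1] @ vecs[-1]))]          # retrieve best match

    print(generate(question))                                     # plain prompting
    print(generate(f"Context: {context}\nQuestion: {question}"))  # RAG-style prompt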

Topics and coverage

scaling laws intuition

  • What it means: define scaling laws intuition clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

instruction tuning

  • What it means: define instruction tuning clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

alignment

  • What it means: define alignment clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

prompting

  • What it means: explain how prompting changes the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

RAG

  • What it means: explain how RAG changes the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

fine-tuning

  • What it means: define fine-tuning clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

adapters

  • What it means: define adapters clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete the primary lab and submit its artifact: a comparison of prompting, RAG, and lightweight fine-tuning on the same task.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 8. Generative models

Module focus: autoencoders, diffusion intuition, language generation, sampling, and the limits of evaluation.
Primary live activity or lab: experiment with sampling settings and evaluate diversity, quality, and factuality.
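
A minimal sketch of temperature and top-k sampling from a toy logit vector, assuming NumPy; the logits and settings are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    logits = np.array([2.0, 1.0, 0.5, -1.0, -3.0])    # toy next-token scores

    def sample(logits, temperature=1.0, top_k=None):
        z = logits / temperature                      # temperature rescales scores
        if top_k is not None:
            cutoff = np.sort(z)[-top_k]
            z = np.where(z >= cutoff, z, -np.inf)     # keep only the k best tokens
        p = np.exp(z - z.max())
        p /= p.sum()
        return int(rng.choice(len(logits), p=p))

    # Lower temperature concentrates mass on the top token; top-k truncates the tail.
    for t in (0.5, 1.0, 2.0):
        print(t, [sample(logits, temperature=t, top_k=3) for _ in range(10)])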

Topics and coverage

autoencoders

  • What it means: define autoencoders clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

diffusion intuition

  • What it means: define diffusion intuition clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

language generation

  • What it means: define language generation clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

sampling

  • What it means: define sampling clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

evaluation limits

  • What it means: connect evaluation limits to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

Practice and evidence of learning

  • Learners complete the primary lab and submit its artifact: sampling-setting experiments with an evaluation of diversity, quality, and factuality.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 9. Reinforcement learning

Module focus: MDPs, value functions, policy gradients, exploration, and reward design.
Primary live activity or lab: train a small RL agent in a gridworld and study reward hacking.
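
A minimal sketch of the gridworld lab, assuming NumPy: tabular Q-learning on a five-state corridor with a random behavior policy (Q-learning is off-policy, so this still recovers the greedy "move right" policy). The sizes, rates, and reward are illustrative.

    import numpy as np

    # States 0..4 on a line, goal at 4; action 0 moves left, action 1 moves right.
    n_states, n_actions, goal = 5, 2, 4
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)

    for episode in range(200):
        s = 0
        for _ in range(100):                        # cap episode length
            a = int(rng.integers(n_actions))        # random exploration policy
            s_next = max(0, s - 1) if a == 0 else min(goal, s + 1)
            r = 1.0 if s_next == goal else 0.0
            # Tabular Q-learning update: learning rate 0.1, discount 0.9.
            Q[s, a] += 0.1 * (r + 0.9 * Q[s_next].max() - Q[s, a])
            s = s_next
            if s == goal:
                break

    print(Q.argmax(axis=1))    # greedy policy: expect action 1 (right) in states 0-3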

Topics and coverage

MDPs

  • What it means: define MDPs clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

value functions

  • What it means: define value functions clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

policy gradients

  • What it means: define policy gradients clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it (for example, the high variance of gradient estimates).
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

exploration

  • What it means: define exploration clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

reward design

  • What it means: show where reward design appears in the learner's real workflow and which parts are judgment-heavy versus draftable.
  • What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final decision.
  • Demonstration: convert one messy real-world input into a structured brief, draft, analysis, checklist, or next action.
  • Evidence of learning: learners produce a reusable template or playbook entry that can be used after the course.

Practice and evidence of learning

  • Learners complete the primary lab and submit its artifact: a small RL agent trained in a gridworld, with a study of reward hacking.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 10. Evaluation science

Module focus: baselines, ablations, confidence intervals, calibration, robustness, and benchmark leakage.
Primary live activity or lab: reproduce a result from a small paper or benchmark and report deviations.
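
A minimal sketch of a paired bootstrap confidence interval for an accuracy difference, assuming NumPy; the two correctness vectors are synthetic stand-ins for real per-example evaluation results.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    model_a = rng.random(n) < 0.78       # per-example correctness, model A
    model_b = rng.random(n) < 0.75       # per-example correctness, model B

    diffs = []
    for _ in range(10_000):
        idx = rng.integers(0, n, size=n)            # resample the same test items
        diffs.append(model_a[idx].mean() - model_b[idx].mean())

    lo, hi = np.percentile(diffs, [2.5, 97.5])
    print(f"accuracy gap {model_a.mean() - model_b.mean():+.3f}, "
          f"95% bootstrap CI [{lo:+.3f}, {hi:+.3f}]")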

Topics and coverage

baselines

  • What it means: define baselines clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

ablations

  • What it means: define ablations clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

confidence intervals

  • What it means: define confidence intervals clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

calibration

  • What it means: define calibration clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

robustness

  • What it means: define robustness clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

benchmark leakage

  • What it means: define benchmark leakage clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete the primary lab and submit its artifact: a reproduction of a result from a small paper or benchmark, with deviations reported.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 11. Interpretability, safety, and governance

Module focus: saliency, probes, causal claims, privacy, security, and misuse analysis.
Primary live activity or lab: audit a model using error slices, interpretability probes, and a risk register.
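
A minimal sketch of gradient-based saliency, assuming PyTorch; the untrained toy model stands in for whatever model is being audited.

    import torch

    model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(),
                                torch.nn.Linear(16, 1))
    x = torch.randn(1, 8, requires_grad=True)

    model(x).sum().backward()               # gradient of the score w.r.t. the input
    saliency = x.grad.abs().squeeze()       # larger magnitude = more local influence

    print(saliency)
    print("most influential input feature:", int(saliency.argmax()))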

Topics and coverage

saliency

  • What it means: define saliency clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

probes

  • What it means: define probes clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

causal claims

  • What it means: define causal claims clearly and connect them to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

privacy

  • What it means in this course: define privacy in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what learners must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

security

  • What it means in this course: define security in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what learners must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

misuse analysis

  • What it means in this course: define misuse analysis in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what learners must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

Practice and evidence of learning

  • Learners complete the primary lab and submit its artifact: a model audit using error slices, interpretability probes, and a risk register.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 12. AI systems

Module focus: data versioning, model serving, latency, batching, caching, cost, observability, and incident response.
Primary live activity or lab: deploy a small model endpoint and simulate monitoring drift.
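
A minimal sketch of one drift check for the monitoring lab, assuming NumPy: a population stability index over a single feature, with synthetic "reference" and "live" samples. The 0.2 threshold is a common rule of thumb, not a standard.

    import numpy as np

    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, size=5000)   # feature values at deploy time
    live = rng.normal(0.4, 1.0, size=5000)        # simulated drifted traffic

    def psi(ref, cur, bins=10):
        # Population stability index over quantile bins of the reference data.
        edges = np.quantile(ref, np.linspace(0, 1, bins + 1))
        p = np.histogram(ref, edges)[0] / len(ref)
        q = np.histogram(np.clip(cur, edges[0], edges[-1]), edges)[0] / len(cur)
        p, q = np.clip(p, 1e-6, None), np.clip(q, 1e-6, None)
        return float(np.sum((p - q) * np.log(p / q)))

    print(f"PSI = {psi(reference, live):.3f}")    # > 0.2 usually means investigate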

Topics and coverage

data versioning

  • What it means: connect data versioning to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

model serving

  • What it means: place model serving inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

latency

  • What it means: place latency inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

batching

  • What it means: define batching clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

caching

  • What it means: define caching clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

cost

  • What it means: place cost inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

observability

  • What it means: place observability inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

incident response

  • What it means: define incident response clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete the primary lab and submit its artifact: a small deployed model endpoint with simulated monitoring drift.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 13. Research workshop

Module focus: paper reading, method extraction, implementation planning, and experimental design.
Primary live activity or lab: prepare a one-page paper-to-project translation memo.

Topics and coverage

paper reading

  • What it means: define paper reading clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

method extraction

  • What it means: define method extraction clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

implementation planning

  • What it means: define implementation planning clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

experimental design

  • What it means: show where experimental design appears in the learner's real workflow and which parts are judgment-heavy versus draftable.
  • What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final decision.
  • Demonstration: convert one messy real-world input into a structured brief, draft, analysis, checklist, or next action.
  • Evidence of learning: learners produce a reusable template or playbook entry that can be used after the course.

Practice and evidence of learning

  • Learners complete the primary lab and submit its artifact: a one-page paper-to-project translation memo.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 14. Capstone presentations and peer review

Module focus: technical demo, experiment report, and future work.
Primary live activity or lab: submit code, report, and reproducibility checklist.

Topics and coverage

technical demo

  • What it means: define the technical demo clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

experiment report

  • What it means: define the experiment report clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

future work

  • What it means: define future work clearly and connect it to the module focus above.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete the primary lab and submit its artifact: the capstone code, report, and reproducibility checklist.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Core labs and builds

  • From-scratch ML lab: linear/logistic regression and gradient descent.
  • Deep learning lab: PyTorch training loop, checkpoints, and metric logging.
  • Transformer lab: toy attention, tokenizer choices, and generation experiments.
  • AI systems lab: model serving, latency measurement, cost estimate, and monitoring plan.

Capstone

  • A research-style applied AI project. Students must define a task, reproduce at least one baseline, implement a meaningful improvement or analysis, run ablations, evaluate limitations, and submit a reproducible repository and concise technical paper.

Assessment design

  • Weekly notebooks graded for correctness, interpretation, and reproducibility.
  • Paper discussion memos and peer critiques.
  • Midterm practical exam covering supervised learning and deep learning implementation.
  • Final project with code review, experiment report, presentation, and model/system card.

Tooling

  • Python, NumPy, pandas, scikit-learn, PyTorch, Hugging Face, Weights & Biases or MLflow, Git, Docker basics, Jupyter, and cloud GPU credits if available.

Instructor notes

  • Undergraduates should learn to reason from problem to data to method to evaluation. Avoid turning the course into a tour of APIs. Make students implement enough internals to debug models and read papers confidently.

Instructor Build Checklist

  • Prepare one short demo for each module and one learner activity that creates a saved artifact.
  • Prepare examples that match the audience, local context, and likely tools learners can access.
  • Add a verification step to every AI-generated output: factual check, source check, data sensitivity check, and quality review.
  • Keep a running portfolio folder so each module contributes to the final project or learner playbook.
  • Reserve time for reflection on what the learner did, what AI did, what was checked, and what remains uncertain.