1. Technical AI Foundations for Grades 8-10
Course Positioning
A hands-on technical alternative to a general AI usage course. It introduces students to how models learn from data, using accessible math, visual coding, and beginner Python.
Learning outcomes
- Explain the difference between rule-based programs, machine learning models, and generative AI systems.
- Represent real-world information as tables, labels, images, tokens, and simple vectors.
- Train and test simple classifiers such as k-nearest neighbors, decision trees, and perceptrons.
- Understand overfitting, underfitting, bias, noise, train/test splits, and why accuracy alone can mislead.
- Build beginner models for image, sound, text, and tabular data using safe classroom datasets.
- Create a final AI mini-project and explain its limitations clearly.
Course Design Snapshot
- Positioning: A hands-on technical alternative to a general AI usage course, as described under Course Positioning above.
- Audience: Students in grades 8-10 with curiosity about science, math, robotics, games, or coding.
- Duration: 10 weeks, 2 sessions per week, 60-75 minutes per session. Can also be delivered as a 5-day bootcamp.
- Prerequisites: Basic arithmetic, graphs, percentages, and comfort using a browser. Prior coding is helpful but not required.
- Format: Short concept lesson, live demonstration, guided notebook or visual lab, reflection, and mini-build every week.
Expanded Topic-by-Topic Coverage
Module 1. What makes AI different from normal software
Module focus: What makes AI different from normal software: rules, data, labels, prediction, and feedback loops. Primary live activity or lab: Classify hand-drawn shapes using rules, then compare with a trained model.
Topics and coverage
rules
- What it means: rules are explicit if-then instructions written by a person; traditional software does exactly what its rules say, no more and no less.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
data
- What it means: data is the collection of recorded examples a model learns from, followed through its lifecycle from source and structure to analysis, interpretation, and decision-making.
- What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
- Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
- Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.
labels
- What it means: labels are the correct answers attached to training examples; they are what a supervised model learns to predict.
- What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
- Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
- Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.
prediction
- What it means: a prediction is the model's best guess for a new, unseen input, based on patterns found in its training examples.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
feedback loops
- What it means: a feedback loop is the cycle in which a model's outputs influence future behavior or data, which in turn changes what the model learns next.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Practice and evidence of learning
- Learners complete or discuss: Classify hand-drawn shapes using rules, then compare with a trained model.
- Learners produce: a short write-up of where the hand-written rules and the trained model disagreed, and why.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
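To make the module's core contrast concrete, here is a minimal Python sketch that classifies shapes two ways: by rules a person wrote, and by a rule derived from labeled examples. The corner-count feature and the example labels are hypothetical classroom data, not a prescribed dataset.

```python
from collections import Counter, defaultdict

# Rule-based: a person writes the decision logic by hand.
def classify_by_rules(corners):
    if corners == 3:
        return "triangle"
    if corners == 4:
        return "rectangle"
    return "circle"

# Data-driven: the rule is *learned* by counting labeled examples
# and keeping the most common label for each corner count.
def train(examples):
    votes = defaultdict(Counter)          # corner count -> label frequencies
    for corners, label in examples:
        votes[corners][label] += 1
    return {c: counts.most_common(1)[0][0] for c, counts in votes.items()}

examples = [(3, "triangle"), (3, "triangle"),
            (4, "square"), (4, "square"), (4, "square")]
model = train(examples)

print(classify_by_rules(4))   # rectangle (the word the rule-writer chose)
print(model[4])               # square    (the word the data actually used)
```

Note the deliberate mismatch: the learned model says "square" because that is what the labels said, which is a useful first discussion of how data, not the programmer, determines a trained model's behavior.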
Module 2. Data as examples
Module focus: Data as examples: rows, columns, labels, missing values, measurement error, and class balance. Primary live activity or lab: Create a small dataset about classroom objects and visualize patterns.
Topics and coverage
rows
- What it means: a row is one example or observation; each row of a dataset describes a single object, person, or event.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
columns
- What it means: a column is one attribute measured for every row, such as length, color, or weight.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
labels
- What it means: labels form the answer column of the table, the value the model should learn to predict from the other columns.
- What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
- Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
- Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.
missing values
- What it means: missing values are cells with no recorded data; they must be filled, dropped, or handled explicitly before a model can use the table.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
measurement error
- What it means: measurement error is the gap between a recorded value and the true value, introduced by imprecise tools or inconsistent collection.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
class balance
- What it means: class balance describes how evenly the examples are spread across labels; a model trained on 95 percent one class can look accurate while ignoring the rare class entirely.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Practice and evidence of learning
- Learners complete or discuss: Create a small dataset about classroom objects and visualize patterns.
- Learners produce: the dataset table itself plus a labeled chart of the patterns they found in it.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
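The module's lab can be previewed with a minimal Python sketch of a tiny classroom-objects table; the objects, columns, and labels below are illustrative placeholders, not a required dataset.

```python
from collections import Counter

# A tiny table: each dict is a row, each key a column.  None marks a missing value.
rows = [
    {"object": "pencil",   "length_cm": 18,   "has_metal": False, "label": "writing"},
    {"object": "scissors", "length_cm": 14,   "has_metal": True,  "label": "cutting"},
    {"object": "pen",      "length_cm": None, "has_metal": True,  "label": "writing"},
    {"object": "ruler",    "length_cm": 30,   "has_metal": False, "label": "measuring"},
]

missing = sum(1 for r in rows if r["length_cm"] is None)   # count missing measurements
balance = Counter(r["label"] for r in rows)                # rows per class

print(missing)    # 1
print(balance)    # Counter({'writing': 2, 'cutting': 1, 'measuring': 1})
```

Counting the missing cells and the class balance before any modeling is exactly the checking habit the module asks learners to build.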
Module 3. Features and distance
Module focus: Features and distance: why machines need numbers, vectors, similarity, and nearest-neighbor reasoning. Primary live activity or lab: Build a k-nearest-neighbor classifier for animals, sports, or simple images.
Topics and coverage
why machines need numbers
- What it means: models compute with numbers, so every input, whether a word, a sound, or a picture, must first be encoded numerically.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
vectors
- What it means: a vector is an ordered list of numbers that represents one example as a point in space.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
similarity
- What it means: similarity is a numeric measure of how close two examples are, often computed as the distance between their vectors.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
nearest-neighbor reasoning
- What it means: nearest-neighbor reasoning classifies a new example by looking at the labels of the most similar known examples and letting them vote.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Practice and evidence of learning
- Learners complete or discuss: Build a k-nearest-neighbor classifier for animals, sports, or simple images.
- Learners produce: a working k-nearest-neighbor classifier plus a short note on which features and which value of k worked best.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
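The lab's classifier can be built from scratch in a few lines of Python. This is a minimal sketch using Euclidean distance; the animal measurements (weight in kg, leg count) are made-up placeholder features.

```python
import math
from collections import Counter

# Training data: (feature vector, label), with hypothetical animal measurements.
train = [
    ((4.0, 4), "cat"), ((30.0, 4), "dog"), ((3.5, 4), "cat"),
    ((25.0, 4), "dog"), ((0.02, 6), "insect"),
]

def distance(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(query, k=3):
    # find the k closest training examples and take a majority vote
    nearest = sorted(train, key=lambda ex: distance(query, ex[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_predict((5.0, 4)))   # cat: two of its three nearest neighbors are cats
```

A good follow-up discussion is what happens when features are on very different scales (weight dominates leg count here), which motivates feature scaling.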
Module 4. Decision trees and if-then learning
Module focus: Decision trees and if-then learning: splits, depth, impurity intuition, and interpretability. Primary live activity or lab: Train a small decision tree and explain every branch in plain language.
Topics and coverage
splits
- What it means: a split is a yes/no question about one feature that divides the data into two groups.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
depth
- What it means: depth is the number of questions asked from the root to a leaf; deeper trees capture more detail but overfit more easily.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
impurity intuition
- What it means: impurity measures how mixed the labels are within a group; a good split produces purer groups.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
interpretability
- What it means: interpretability is the ability to read a model's logic directly; a small tree can be explained branch by branch in plain language.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Practice and evidence of learning
- Learners complete or discuss: Train a small decision tree and explain every branch in plain language.
- Learners produce: the trained tree plus a plain-language explanation of every branch.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
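A depth-2 tree can be written as plain if-then Python so learners can read every branch aloud. The features, thresholds, and labels below are hypothetical, and the small gini helper puts a number on the impurity intuition.

```python
# A depth-2 decision tree as plain if-then code: every branch is readable.
def predict(length_cm, has_metal):
    if has_metal:                  # first split: metal vs no metal
        return "cutting"
    else:
        if length_cm > 20:         # second split: long vs short
            return "measuring"
        else:
            return "writing"

# Gini impurity: 0 means a group is "pure" (only one label in it).
def gini(labels):
    n = len(labels)
    return 1 - sum((labels.count(l) / n) ** 2 for l in set(labels))

print(predict(18, False))               # writing
print(gini(["writing", "writing"]))     # 0.0  (pure group)
print(gini(["writing", "cutting"]))     # 0.5  (maximally mixed, two classes)
```

Comparing the impurity before and after a candidate split is the intuition behind how tree-training algorithms choose their questions.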
Module 5. Linear models and the perceptron
Module focus: Linear models and the perceptron: weights, bias term, score, threshold, and mistakes as learning signals. Primary live activity or lab: Use a spreadsheet or notebook to update a two-feature perceptron.
Topics and coverage
weights
- What it means: weights are the numbers a linear model multiplies each feature by; learning means adjusting them.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
bias term
- What it means: the bias term is a constant added to the weighted sum; it shifts the decision boundary so it does not have to pass through the origin.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it, especially confusing the bias term with data or social bias.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
score
- What it means: the score is the weighted sum of the features plus the bias term, computed before any decision is made.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
threshold
- What it means: the threshold is the cutoff the score is compared against; scores at or above it get one class and scores below it the other, and moving it trades one kind of error for the other.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
mistakes as learning signals
- What it means: each wrong prediction produces an error signal that tells the perceptron how to nudge its weights; the model learns from its mistakes, not from the answers it already gets right.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Practice and evidence of learning
- Learners complete or discuss: Use a spreadsheet or notebook to update a two-feature perceptron.
- Learners produce: the spreadsheet or notebook showing each weight update and the final weights, score, and threshold.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
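The spreadsheet exercise maps directly onto a short Python loop. This is a minimal sketch of the classic perceptron update rule (learning rate 1, integer arithmetic) on a made-up AND-style dataset; the data and epoch count are illustrative choices.

```python
# Two-feature perceptron: score = w1*x1 + w2*x2 + b, class 1 when score >= 0.
# The classic rule: change the weights only when the prediction is wrong.
def train_perceptron(data, epochs=10):
    w1, w2, b = 0, 0, 0
    for _ in range(epochs):
        for (x1, x2), target in data:
            predicted = 1 if w1 * x1 + w2 * x2 + b >= 0 else 0
            error = target - predicted     # -1, 0, or +1: the learning signal
            w1 += error * x1               # nudge each weight toward the target
            w2 += error * x2
            b += error
    return w1, w2, b

# Hypothetical data: class 1 only when both features are on (an AND-style rule).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b >= 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])   # [0, 0, 0, 1]
```

Printing the weights after every epoch reproduces exactly the table learners fill in by hand in the spreadsheet version of the lab.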
Module 6. Neural networks as layered function builders
Module focus: Neural networks as layered function builders: neurons, activations, hidden layers, and pattern composition. Primary live activity or lab: Use TensorFlow Playground or a notebook to see decision boundaries change.
Topics and coverage
neurons
- What it means: a neuron is a small unit that computes a weighted sum of its inputs and passes the result through an activation function.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
activations
- What it means: an activation is the function applied to a neuron's sum; without it, stacked layers would collapse into a single linear model.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
hidden layers
- What it means: hidden layers sit between input and output, turning raw features into intermediate patterns the next layer can build on.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
pattern composition
- What it means: pattern composition is how simple patterns detected by early layers combine into more complex patterns in later layers.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Practice and evidence of learning
- Learners complete or discuss: Use TensorFlow Playground or a notebook to see decision boundaries change.
- Learners produce: screenshots or notes showing how the decision boundary changed as layers, neurons, and features were added.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
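A hand-wired network shows how layers compose patterns. In this minimal sketch the weights are chosen by hand, not learned, to build XOR, a pattern no single neuron can represent; the specific weight values are just one workable choice.

```python
# One neuron: weighted sum plus bias, then a step activation.
def neuron(inputs, weights, bias):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias >= 0 else 0

# Two hidden neurons detect simple patterns; the output neuron combines them.
def xor_net(x1, x2):
    h1 = neuron([x1, x2], [1, 1], -0.5)     # fires if at least one input is on (OR)
    h2 = neuron([x1, x2], [1, 1], -1.5)     # fires only if both inputs are on (AND)
    return neuron([h1, h2], [1, -2], -0.5)  # OR and not AND = XOR

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

This mirrors what learners see in TensorFlow Playground: each hidden neuron draws one straight boundary, and the output layer composes those boundaries into a shape no single line can make.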
Module 7. AI for images, sound, and text
Module focus: AI for images, sound, and text: pixels, spectrograms, tokens, and why data type changes the model design. Primary live activity or lab: Train a webcam image classifier or audio classifier with a safe no-face dataset.
Topics and coverage
pixels
- What it means: a pixel is one numeric brightness or color value; an image is a grid of pixels, so it is already numbers.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
spectrograms
- What it means: a spectrogram turns sound into a picture of frequency over time, which lets image-style models work on audio.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
tokens
- What it means: tokens are the small chunks, words or word pieces, that text is cut into before each chunk is mapped to a number.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
why data type changes the model design
- What it means: pixels, audio samples, and tokens carry different structure, so the model design must match the data type: nearby pixels matter together in space, while tokens matter in sequence.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Practice and evidence of learning
- Learners complete or discuss: Train a webcam image classifier or audio classifier with a safe no-face dataset.
- Learners produce: the trained classifier plus a short note on what it misclassifies and why.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
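Two tiny Python sketches show how both images and text become numbers before any model sees them; the pixel grid, the sentence, and the brightness feature are arbitrary illustrative examples.

```python
# An "image" is just a grid of numbers: 0 = black, 255 = white.
image = [
    [0,   0,   255],
    [0,   255, 255],
    [255, 255, 255],
]
pixels = [p for row in image for p in row]      # flatten the grid into a vector
brightness = sum(pixels) / len(pixels)          # one simple numeric feature
print(brightness)                               # 170.0

# Text becomes numbers too: split into tokens, then map each token to an id.
text = "the cat saw the dog"
tokens = text.split()
vocab = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}
ids = [vocab[t] for t in tokens]
print(ids)                                      # [0, 1, 2, 0, 3]
```

The contrast is the point of the module: the image flattens into a spatial grid of values, while the text becomes an ordered sequence of ids, and that structural difference is why the two need different model designs.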
Module 8. Evaluation
Module focus: Evaluation: train/test split, confusion matrix, precision, recall, fairness checks, and bad test design. Primary live activity or lab: Evaluate a model that performs well overall but fails on one subgroup.
Topics and coverage
train/test split
- What it means: a train/test split holds some examples back from training so the model can be graded on data it has never seen.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
confusion matrix
- What it means: a confusion matrix is a table counting each combination of actual and predicted class, which exposes exactly which mistakes the model makes.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
precision
- What it means: precision asks, of everything the model flagged as positive, how much actually was positive; it punishes false alarms.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
recall
- What it means: recall asks, of all the true positives, how many the model actually found; it punishes misses.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
fairness checks
- What it means: a fairness check breaks the evaluation down by subgroup, because a strong overall score can hide poor performance on one group.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
bad test design
- What it means: bad test design is any evaluation setup that flatters the model, such as testing on training data, leaking label information into features, or using a test set that does not represent real use.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Practice and evidence of learning
- Learners complete or discuss: Evaluate a model that performs well overall but fails on one subgroup.
- Learners produce: an evaluation report with overall and per-subgroup metrics and a recommendation on whether the model should be used.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
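The subgroup-failure lab can be reproduced with a few lines of Python; the labels, predictions, and subgroup tags below are invented for illustration, but the metrics are computed exactly as the module defines them.

```python
# Labels: 1 = positive class, 0 = negative.  Each example carries a subgroup tag.
actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 1, 0, 0, 0, 1, 0, 0]
group     = ["a", "a", "b", "a", "a", "a", "b", "b"]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

precision = tp / (tp + fp)   # of the 1s we predicted, how many were right?
recall    = tp / (tp + fn)   # of the real 1s, how many did we find?

# Overall numbers can hide a subgroup the model gets mostly wrong.
def accuracy(tag):
    pairs = [(a, p) for a, p, g in zip(actual, predicted, group) if g == tag]
    return sum(a == p for a, p in pairs) / len(pairs)

print(round(precision, 2), round(recall, 2))        # 0.67 0.5
print(accuracy("a"), round(accuracy("b"), 2))       # 0.8 0.33
```

Group "a" looks fine at 80 percent accuracy while group "b" sits at one third, which is precisely the pattern the lab asks learners to detect and explain.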
Module 9. Generative AI basics
Module focus: Generative AI basics: next-token prediction, prompts, hallucinations, data leakage, and responsible use. Primary live activity or lab: Test controlled prompts and identify when an answer needs verification.
Topics and coverage
next-token prediction
- What it means: a language model repeatedly predicts the most likely next token given the tokens so far; fluent text is the result of that single operation applied over and over.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
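The next-token idea can be demonstrated without any neural network at all, by counting which word follows which in a tiny text; the sentence below is an arbitrary example, and real models use learned probabilities over word pieces rather than raw counts over whole words.

```python
from collections import Counter, defaultdict

# Count, for every word, which words follow it in a tiny "training" text.
text = "the cat sat on the mat the cat ran"
words = text.split()
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # pick the most frequently observed follower: no understanding involved
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # cat  (seen after "the" twice, vs "mat" once)
```

Even this toy version surfaces the module's key discussion points: the output is plausible rather than true, and it depends entirely on what happened to be in the training text.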
prompts
- What it means: a prompt is the instruction and context given to a model; its wording shapes the interaction between human intent, model behavior, external information, and final output.
- What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
- Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
- Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.
hallucinations
- What it means: a hallucination is fluent, confident output that is factually wrong or invented; it is a direct consequence of next-token prediction, not a rare glitch.
- What to cover: why hallucinations happen, which question types invite them (citations, statistics, niche facts), and which verification habits catch them.
- Demonstration: one simple example, one realistic example such as a plausible but fabricated reference, and one case where the model is correct, to show that checking is needed either way.
- Evidence of learning: learners spot a planted hallucination in a sample answer and describe how they verified it.
data leakage
- What it means: data leakage is when information from the test set, or from the answer itself, sneaks into training, making a model look far better than it really is.
- What to cover: train/test splits, duplicated examples, features that secretly contain the label, and why leaked scores collapse on genuinely new data.
- Demonstration: walk through a small dataset or example table and mark the checks required before trusting a reported score.
- Evidence of learning: learners produce a short analysis note that lists assumptions, limitations, and the leakage checks they ran.
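Leakage can be demonstrated live with a few lines of plain Python: let the test examples accidentally slip into the training set and watch a mediocre score jump to a perfect one. The 1-nearest-neighbour classifier and the dataset below are invented for illustration.

```python
# Invented examples: (feature, label). Say feature = stem height,
# label 0 = herb, label 1 = shrub.
train = [(2, 0), (3, 0), (4, 0), (8, 1), (9, 1), (10, 1)]
test = [(6, 1), (5, 0)]

def predict_1nn(train_set, x):
    """1-nearest-neighbour: copy the label of the closest training feature."""
    return min(train_set, key=lambda ex: abs(ex[0] - x))[1]

def accuracy(train_set, test_set):
    hits = sum(predict_1nn(train_set, x) == y for x, y in test_set)
    return hits / len(test_set)

print("clean split:", accuracy(train, test))        # 0.5
# Leakage: the test rows were accidentally copied into training,
# so each test point finds itself at distance zero and is always "right".
leaky_train = train + test
print("leaky split:", accuracy(leaky_train, test))  # 1.0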
responsible use
- What it means: responsible use means knowing when a generated answer is safe to act on, what must be verified first, and what should never be pasted into a tool, such as personal or private data.
- What to cover: verification habits, privacy and school policy, attribution, and situations where a human must remain accountable for the result.
- Demonstration: one acceptable use, one borderline use, and one clear misuse, discussed as a class.
- Evidence of learning: learners write a short personal checklist for when and how they will verify AI output.
Practice and evidence of learning
- Learners complete or discuss: test controlled prompts and identify when an answer needs verification.
- Learners produce: an annotated prompt log marking which answers were verified, how, and which turned out to be wrong.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Module 10. Project studio
Module focus: Project studio: choose a problem, collect examples, train, evaluate, explain, and present. Primary live activity or lab: Final demo with a model card and a two-minute explanation.
Topics and coverage
choose a problem
- What it means: pick a classroom-safe classification task with clear labels, obtainable examples, and a reason someone would care about the answer.
- What to cover: what makes a problem too big, too vague, or unsafe, and how to narrow an idea down to two to four classes.
- Demonstration: shrink one overly ambitious idea ("identify all plants") into a workable one ("three leaf types from our schoolyard").
- Evidence of learning: learners write a one-sentence problem statement naming the input, the classes, and the intended user.
collect examples
- What it means: gather a balanced, labeled set of examples for each class and record where every example came from.
- What to cover: how many examples are enough to start, class balance, labeling consistency, and avoiding biased or duplicated samples.
- Demonstration: build a small shared dataset live and flag one biased or mislabeled example.
- Evidence of learning: learners produce a dataset folder or table with counts per class and brief data notes.
train
- What it means: fit a simple model, such as k-nearest neighbors, a decision tree, or a Teachable Machine model, using the training examples only.
- What to cover: choosing a model that fits the data type, holding out a test set before training, and recording settings so a result can be reproduced.
- Demonstration: train the same model twice with different settings and compare the results.
- Evidence of learning: learners train their project model and note what they changed between attempts and why.
evaluate
- What it means: measure the model on held-out test examples, reporting accuracy plus per-class results, because a good overall score can hide a failing class.
- What to cover: train/test splits, per-class accuracy, typical mistakes, and signs of overfitting.
- Demonstration: show a model with high overall accuracy that fails on one class, and trace why.
- Evidence of learning: learners report test results in a small table and name their model's worst case.
explain
- What it means: describe in plain words what the model uses to decide, with one correct and one incorrect prediction as evidence.
- What to cover: honest claims, known limitations, and the difference between "it works on our data" and "it works everywhere."
- Demonstration: walk through one prediction step by step, from input to output.
- Evidence of learning: learners draft the limitations section of their model card.
present
- What it means: deliver the final demo with a model card and a two-minute explanation that a non-expert can follow.
- What to cover: structuring the demo, showing a live prediction, stating limitations up front, and answering audience questions.
- Demonstration: a sample two-minute presentation, timed.
- Evidence of learning: learners present their project and submit the finished model card.
Practice and evidence of learning
- Learners complete or discuss: the final demo with a model card and a two-minute explanation.
- Learners produce: a working classifier, a completed model card, and the two-minute presentation.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
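For the evaluation step, a short demo with invented numbers makes the "accuracy alone can mislead" outcome tangible: a model can score 92% overall while failing one class almost completely. The labels and predictions below are made up for illustration.

```python
# Invented results: true labels and a model that almost always says "leaf".
truth = ["leaf"] * 90 + ["flower"] * 10
preds = ["leaf"] * 90 + ["leaf"] * 8 + ["flower"] * 2

overall = sum(t == p for t, p in zip(truth, preds)) / len(truth)
print(f"overall accuracy: {overall:.0%}")  # 92%

# Per-class accuracy exposes the hidden failure on "flower".
for label in ["leaf", "flower"]:
    idx = [i for i, t in enumerate(truth) if t == label]
    per_class = sum(truth[i] == preds[i] for i in idx) / len(idx)
    print(f"accuracy on '{label}': {per_class:.0%}")  # leaf 100%, flower 20%
```

This pairs naturally with the model card: the per-class table, not the headline number, belongs in the limitations section.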
Core labs and builds
- Data detective lab: students find hidden bias in a toy dataset and propose a better collection strategy.
- Classifier from scratch lab: students manually compute nearest neighbors before using code.
- Neural network playground lab: students change features and layers and record what changes.
- Model card lab: students write a short summary of intended use, failure cases, and safety rules.
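The classifier-from-scratch lab can bridge from hand computation to code with a k-nearest-neighbours sketch like the one below. The plant features and labels are invented for illustration; students should substitute measurements they collected themselves.

```python
from collections import Counter
import math

# Invented training data: (height_cm, leaf_width_cm) -> plant type.
training = [
    ((30, 2.0), "herb"), ((35, 2.5), "herb"), ((28, 1.8), "herb"),
    ((120, 6.0), "shrub"), ((110, 5.5), "shrub"), ((130, 6.5), "shrub"),
]

def knn_predict(query, k=3):
    """Label a new plant by majority vote among its k closest neighbours."""
    by_distance = sorted(training, key=lambda ex: math.dist(ex[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

print(knn_predict((32, 2.1)))   # -> herb
print(knn_predict((115, 5.8)))  # -> shrub
```

Because students compute the same distances by hand first, the code confirms their manual answers rather than replacing the reasoning.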
Capstone
- Build a small AI classifier for a classroom-safe problem such as plant leaf category, object type, school supply recognition, simple sentiment, or environmental observations. The final submission includes data notes, model choice, evaluation results, demo, and limitations.
Assessment design
- Weekly mini-quizzes on concepts, not memorization.
- Lab notebook with screenshots, observations, and failure analysis.
- Capstone model quality, explanation quality, and honesty about limitations.
- Peer review: students test each other's models and report edge cases.
Recommended tools and datasets
- Google Colab or JupyterLite, Scratch or visual blocks for first two sessions, TensorFlow Playground, Teachable Machine, small CSV datasets, simple image folders, spreadsheet software.
Instructor notes
- Keep mathematics concrete. Students should touch actual data every session. Avoid presenting generative AI as magic; anchor every topic in examples, measurement, and failure cases.
Instructor Build Checklist
- Prepare one short demo for each module and one learner activity that creates a saved artifact.
- Prepare examples that match the audience, local context, and likely tools learners can access.
- Add a verification step to every AI-generated output: factual check, source check, data sensitivity check, and quality review.
- Keep a running portfolio folder so each module contributes to the final project or learner playbook.
- Reserve time for reflection on what the learner did, what AI did, what was checked, and what remains uncertain.