1. AI Literacy for Grades 8-10
Course Positioning
This course introduces AI as a thinking partner, creative tool, and object of critical investigation. It should feel playful but not shallow. Students should learn how AI generates answers, why it can be wrong, how to use it without copying, and how to build small creative projects that demonstrate understanding.
Learning outcomes
- Explain in simple language what AI, machine learning, datasets, prompts, and hallucinations mean.
- Use AI to brainstorm, ask questions, summarize, practice language, and create responsibly without treating it as an answer machine.
- Identify signs of unreliable AI output, misleading images, deepfakes, and manipulative content.
- Create a small AI-assisted project such as a story, comic, quiz, science explainer, study plan, or awareness poster.
- Practice classroom norms for attribution, privacy, cyber-safety, and healthy AI use.
Expanded Topic-by-Topic Coverage
Module 1. What is AI?
Module focus: AI in phones, games, recommendations, search, translation, chatbots, image tools. Difference between rule-based software and learning systems. Primary live activity or lab: Students list AI systems they already use and classify them by purpose. Expected take-home output: Personal AI map.
Topics and coverage
AI in phones
- What it means: phones ship with many small AI systems: autocorrect, voice assistants, face unlock, photo search, and battery optimization.
- What to cover: which features learn from the user's behavior, why they improve over time, and the common misconception that "AI" means only chatbots.
- Demonstration: a simple example (autocorrect fixing a typo), a realistic example (photo search finding "beach"), and a failure case (autocorrect mangling a name).
- Evidence of learning: learners list three AI features on a real phone and explain, in their own words, what each one learned from.
Games
- What it means: game AI controls opponents and characters, adjusts difficulty, and matches players of similar skill.
- What to cover: most classic game AI is rule-based; newer systems learn from play data; why "hard mode" is not the same as "smart".
- Demonstration: a simple example (a ghost chasing the player by fixed rules), a realistic example (dynamic difficulty adjustment), and a failure case (a character stuck in a loop).
- Evidence of learning: learners classify one game they play as mostly rule-based or mostly learning-based and defend the call.
Recommendations
- What it means: feeds and "you might like" lists are predictions built from what a user, and users like them, watched, liked, and skipped.
- What to cover: which signals feed the system, why it optimizes for engagement rather than quality, and how filter bubbles form.
- Demonstration: a simple example (similar-song suggestions), a realistic example (a video feed narrowing after a few clicks), and a failure case (a feed pushing clickbait).
- Evidence of learning: learners explain why two classmates see different feeds and name one signal that caused the difference.
Search
- What it means: search engines use AI to interpret a query, rank pages, and increasingly to generate direct answers.
- What to cover: the difference between retrieving a page and generating an answer, and why a generated answer still needs a source check.
- Demonstration: a simple example (spelling correction in a query), a realistic example (a ranked results page), and a failure case (a confident generated summary that is wrong).
- Evidence of learning: learners compare a generated answer with its top-ranked source and note any differences.
Translation
- What it means: translation tools learn from millions of paired sentences rather than from hand-written grammar rules.
- What to cover: why translations are usually fluent, why idioms, slang, and rare languages break them, and when a human translator is still needed.
- Demonstration: a simple example (a short sentence translated well), a realistic example (a full paragraph), and a failure case (an idiom translated word for word).
- Evidence of learning: learners find one phrase a translation tool handles badly and explain why.
Chatbots
- What it means: chatbots generate replies by predicting likely next words from patterns in training text; they sit between human intent, model behavior, any outside information supplied to them, and the final output.
- What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must step in.
- Demonstration: show a weak prompt, a stronger structured prompt, and a reviewed final answer with explicit checks.
- Evidence of learning: learners create a reusable prompt and test it on at least two questions.
Image tools
- What it means: image tools generate or edit pictures from text descriptions, learned from huge collections of captioned images.
- What to cover: generation versus editing versus filters, why outputs can look polished yet contain errors, and the consent and attribution questions taken further in Module 6.
- Demonstration: a simple example (a styled avatar), a realistic example (a poster background), and a failure case (garbled text or extra fingers in a generated image).
- Evidence of learning: learners spot the flaw in a generated image and explain what it reveals about how the tool works.
Difference between rule-based software and learning systems
- What it means: rule-based software follows instructions a programmer wrote; learning systems find patterns in examples and can behave in ways no one explicitly programmed.
- What to cover: why learning systems handle messy inputs better, why they are harder to predict and explain, and why "the computer decided" is never a full explanation.
- Demonstration: the same task (for example, spotting spam) solved once with hand-written rules and once by counting patterns in examples.
- Evidence of learning: learners sort five familiar apps into rule-based, learning-based, or both, and justify each placement.
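The rule-based versus learning-system contrast can be demoed live in a few lines of Python. Everything here is invented for illustration (the banned-word list, the messages, the function names), and the "learning" step is deliberately just word counting, a tiny stand-in for real training:

```python
def rule_based_spam(message):
    """Rule-based: a programmer wrote the rules by hand."""
    banned = {"free", "winner", "prize"}
    return any(word in message.lower().split() for word in banned)

def train_word_counts(examples):
    """'Learning': count how often each word appeared in spam vs. not-spam."""
    counts = {}
    for message, is_spam in examples:
        for word in message.lower().split():
            spam, ham = counts.get(word, (0, 0))
            counts[word] = (spam + 1, ham) if is_spam else (spam, ham + 1)
    return counts

def learned_spam(message, counts):
    """Score a message by the words it shares with past spam examples."""
    score = 0
    for word in message.lower().split():
        spam, ham = counts.get(word, (0, 0))
        score += spam - ham
    return score > 0

# Invented classroom examples: (message, is_spam)
examples = [
    ("free prize inside", True),
    ("claim your free winner reward", True),
    ("homework due tomorrow", False),
    ("lunch at noon", False),
]
counts = train_word_counts(examples)
print(rule_based_spam("you are a winner"))        # the hand-written rule fires on "winner"
print(learned_spam("claim your reward", counts))  # learned behavior no one explicitly wrote
```

The classroom point: nobody wrote a rule about "claim" or "reward"; the second system picked those up from examples alone, which is exactly why its behavior is harder to predict.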
Practice and evidence of learning
- Learners complete or discuss: Students list AI systems they already use and classify them by purpose.
- Learners produce: Personal AI map.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Module 2. How machines learn patterns
Module focus: Data, labels, examples, patterns, bias, overfitting explained through non-mathematical games. Primary live activity or lab: Human training game: students train a classmate to identify imaginary animals using examples. Expected take-home output: Reflection on why examples matter.
Topics and coverage
Data
- What it means: data is the raw material a learning system is built from; the system can only learn patterns its data actually contains.
- What to cover: where data comes from, what is missing or over-represented, and why "more data" does not fix bad data.
- Demonstration: walk through a small example table and mark the checks needed before trusting any conclusion drawn from it.
- Evidence of learning: learners write a short note on one dataset stating its source, its gaps, and what it should not be used to decide.
Labels
- What it means: labels are the answers attached to examples ("this photo is a cat"); a system trained on wrong or inconsistent labels learns wrong or inconsistent behavior.
- What to cover: who labels data, how labelers disagree, and how labeling choices quietly shape what the system treats as correct.
- Demonstration: have two groups label the same ambiguous images and compare the disagreements.
- Evidence of learning: learners explain how one bad label could change a system's behavior.
Examples
- What it means: learning systems generalize from examples, so the choice of examples is the real "programming".
- What to cover: why examples must be varied and representative, and what happens when an important case never appears in training.
- Demonstration: teach a pattern from three examples, then from thirty, and compare how reliable the guesses become.
- Evidence of learning: learners predict what a system trained only on sunny-day photos will do with a night photo, and say why.
Patterns
- What it means: a trained system is a compressed summary of patterns in its examples, not a store of facts or understanding.
- What to cover: useful patterns versus coincidences, and why a pattern that held in training can fail in the real world.
- Demonstration: show a spurious pattern (for example, every training cat photo happened to be indoors) and how it misleads the system.
- Evidence of learning: learners identify one real pattern and one coincidence in a small example set.
Bias
- What it means in this course: bias is defined operationally, as skew in data or design that makes a system work worse for some people or cases, not as an abstract principle.
- What to cover: where skew enters (collection, labeling, deployment), which stakeholders are affected, and what students aged roughly 13-16 must never delegate blindly to AI.
- Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
- Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.
Overfitting explained through non-mathematical games
- What it means: overfitting is memorizing the examples instead of learning the pattern; a memorizer looks perfect on familiar cases and fails on new ones.
- What to cover: the difference between training performance and real-world performance, and why testing on unseen cases is the honest test.
- Demonstration: run the human training game twice, once rewarding exact recall and once rewarding generalization, and compare results on new imaginary animals.
- Evidence of learning: learners explain, in game terms, why a classmate who memorized every example still failed the new-animal round.
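For teachers who want a concrete companion to the human training game, the following hypothetical Python sketch shows overfitting in miniature: a memorizer is perfect on animals it has already seen and useless on new ones, while a simple learned pattern transfers. All animal data and names are invented for the classroom:

```python
# Invented imaginary-animal data: (features, family)
training = [
    (("2 horns", "wings"), "goat-family"),
    (("2 horns", "no wings"), "goat-family"),
    (("no horns", "wings"), "bird-family"),
]
new_animals = [
    (("1 horn", "wings"), "goat-family"),
    (("no horns", "no wings"), "bird-family"),
]

def memorizer(animal):
    """Overfit: remember each training example exactly, learn no pattern."""
    for seen, family in training:
        if seen == animal:
            return family
    return "no idea"

def pattern_learner(animal):
    """Generalize one pattern from the examples: any horns mean goat-family."""
    horns, _wings = animal
    return "bird-family" if horns == "no horns" else "goat-family"

def accuracy(classify, dataset):
    """Fraction of animals classified into the right family."""
    return sum(classify(a) == family for a, family in dataset) / len(dataset)

print(accuracy(memorizer, training))           # 1.0 - perfect on seen examples
print(accuracy(memorizer, new_animals))        # 0.0 - falls apart on new ones
print(accuracy(pattern_learner, new_animals))  # 1.0 - the pattern transfers
```

The gap between the memorizer's training score and its new-animal score is the whole lesson: only performance on unseen cases shows whether anything was actually learned.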
Practice and evidence of learning
- Learners complete or discuss the human training game, in which students train a classmate to identify imaginary animals using examples.
- Learners produce: Reflection on why examples matter.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Module 3. Prompting as clear communication
Module focus: Goal, context, constraints, examples, tone, format. Why vague prompts produce weak answers. Primary live activity or lab: Prompt makeover challenge: turn bad prompts into clear prompts. Expected take-home output: First prompt library.
Topics and coverage
Goal
- What it means: the goal states what the answer is for; without it the tool guesses the task.
- What to cover: task verbs (explain, summarize, compare, quiz me) and the difference between a topic ("volcanoes") and a goal ("explain how volcanoes erupt for a grade 9 quiz").
- Demonstration: the same topic with three different goals, producing three visibly different answers.
- Evidence of learning: learners rewrite a topic-only prompt into a goal-first prompt.
Context
- What it means: context tells the tool who is asking and what they already know, so the answer lands at the right level.
- What to cover: audience, prior knowledge, and situation; why "I am a grade 9 student revising for a test" changes the output.
- Demonstration: the same question with and without context, compared side by side.
- Evidence of learning: learners add two lines of context to a prompt and describe how the answer changed.
Constraints
- What it means: constraints set boundaries on the answer: length, reading level, what to include or avoid.
- What to cover: useful constraints (word limits, "no jargon", "cite a source") and over-constraining until the answer becomes useless.
- Demonstration: an unconstrained answer versus one bounded to 100 words with no unexplained terms.
- Evidence of learning: learners add one constraint that measurably improves an answer for their purpose.
Examples
- What it means: an example in the prompt shows the tool the shape of a good answer instead of describing it.
- What to cover: when one example beats a paragraph of instructions, and how a bad example steers the tool wrong.
- Demonstration: a format request written as instructions versus the same request with a worked example.
- Evidence of learning: learners add an example to a prompt and check that the output matches its shape.
Tone
- What it means: tone sets how the answer should sound: formal, friendly, step-by-step, encouraging.
- What to cover: matching tone to audience, and why tone requests change style but not accuracy.
- Demonstration: one explanation rewritten in three tones.
- Evidence of learning: learners choose the right tone for a real audience and justify the choice.
Format
- What it means: format specifies the output's structure: a list, a table, three paragraphs, a quiz.
- What to cover: why stating the format up front saves editing, and how format requests combine with constraints.
- Demonstration: the same content requested as prose, as a table, and as flashcards.
- Evidence of learning: learners request a format that fits how they will actually use the answer.
Why vague prompts produce weak answers
- What it means: a vague prompt forces the tool to guess the goal, audience, and format, so it defaults to generic filler.
- What to cover: how each missing part (goal, context, constraints, examples, tone, format) degrades the answer, and why iterating beats chasing a single perfect prompt.
- Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
- Evidence of learning: learners diagnose a weak answer by naming which prompt part was missing, then fix it.
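The six prompt parts can be made tangible with a small sketch that assembles them into one clear prompt. The function and field names below are illustrative, not from any tool's API; the point is that each missing field is something the tool would otherwise have to guess:

```python
def build_prompt(goal, context, constraints, example, tone, fmt):
    """Assemble a clear prompt from the six parts the module teaches."""
    return "\n".join([
        f"Goal: {goal}",
        f"Context: {context}",
        f"Constraints: {constraints}",
        f"Example of what I want: {example}",
        f"Tone: {tone}",
        f"Format: {fmt}",
    ])

# A vague prompt leaves every field to guesswork.
vague = "tell me about volcanoes"

# The structured version makes each decision explicit.
clear = build_prompt(
    goal="Explain how volcanoes erupt",
    context="I am a grade 9 student preparing for a science quiz",
    constraints="Under 150 words; no unexplained jargon",
    example="Like your earthquake explanation: cause, process, effect",
    tone="Friendly and direct",
    fmt="Three short paragraphs",
)
print(clear)
```

A useful class exercise is to delete one field at a time from the structured version and predict which kind of weakness reappears in the answer.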
Practice and evidence of learning
- Learners complete or discuss the prompt makeover challenge: turning bad prompts into clear prompts.
- Learners produce: First prompt library.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Module 4. AI for studying without cheating
Module focus: Using AI for explanations, quizzes, flashcards, feedback, and practice. Difference between help and substitution. Primary live activity or lab: Use AI to generate a quiz from a textbook paragraph, then correct the quiz. Expected take-home output: Study companion worksheet.
Topics and coverage
Using AI for explanations
- What it means: asking the tool to re-explain a concept in different words, at a different level, or with a different analogy until it clicks.
- What to cover: "explain it simpler", "explain it with an example", "explain where I went wrong", and checking every explanation against the textbook.
- Demonstration: one textbook definition re-explained three ways, with one version containing a planted error to catch.
- Evidence of learning: learners obtain an explanation, verify it against their textbook, and restate it from memory.
Quizzes
- What it means: turning study material into practice questions, then checking the questions and answers before trusting them.
- What to cover: generating questions from a specific paragraph, spotting wrong or unfair questions, and why correcting a generated quiz is itself good revision.
- Demonstration: generate a five-question quiz from a textbook paragraph and find the flawed question.
- Evidence of learning: learners correct at least one error in a generated quiz and explain the fix.
Flashcards
- What it means: converting notes into question-answer pairs for spaced practice.
- What to cover: good card design (one fact per card, question on the front) and verifying generated cards against the source notes.
- Demonstration: a paragraph converted into five cards, including one card that needs fixing.
- Evidence of learning: learners build a small verified deck from their own notes.
Feedback
- What it means: asking the tool to critique a draft (clarity, structure, missing steps) rather than to rewrite it.
- What to cover: prompts that request critique instead of replacement text, and judging which feedback to accept.
- Demonstration: the same draft given "rewrite this" versus "list three weaknesses in this", and why the second keeps the work the student's own.
- Evidence of learning: learners apply one piece of AI feedback to their own draft and reject another, with reasons.
Practice
- What it means: using the tool as a patient practice partner: drills, worked problems, language conversation, "ask me another".
- What to cover: practice loops that end with the student answering unaided, and checking worked solutions step by step.
- Demonstration: a practice session where the student answers first and the tool only then confirms or corrects.
- Evidence of learning: learners complete a practice loop and then solve a similar problem with the tool closed.
Difference between help and substitution
- What it means: help builds the student's understanding; substitution hands in the tool's understanding instead.
- What to cover: a simple test (could you redo it alone, and could you explain it to a classmate?) plus the school's attribution rules.
- Demonstration: the same homework done both ways, then a follow-up question only the "help" version can answer.
- Evidence of learning: learners classify five scenarios as help or substitution and justify the borderline ones.
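The quiz lab can be supported with a tiny sketch that treats an AI-generated quiz as data, so "correct the quiz" becomes an explicit comparison against the textbook. The quiz content, field names, and the planted error below are all invented examples:

```python
# Each question pairs the AI's answer with the verified textbook answer.
quiz = [
    {"q": "What gas do plants absorb for photosynthesis?",
     "ai_answer": "Carbon dioxide", "textbook_answer": "Carbon dioxide"},
    {"q": "Where in the cell does photosynthesis happen?",
     "ai_answer": "The mitochondria",       # plausible-sounding but wrong
     "textbook_answer": "The chloroplasts"},
]

def correct_the_quiz(quiz):
    """Return the questions where the AI's answer disagrees with the book."""
    return [item["q"] for item in quiz
            if item["ai_answer"] != item["textbook_answer"]]

print(correct_the_quiz(quiz))  # flags the mitochondria question for fixing
```

The design choice matters pedagogically: the student, not the tool, fills in the textbook_answer column, which is exactly the "help, not substitution" boundary the module teaches.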
Practice and evidence of learning
- Learners complete or discuss: Use AI to generate a quiz from a textbook paragraph, then correct the quiz.
- Learners produce: Study companion worksheet.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Module 5. Truth, hallucinations, and verification
Module focus: Why AI can sound confident and still be wrong. Source checking, common-sense checking, asking for uncertainty. Primary live activity or lab: Spot the fake answer: compare generated answers against trusted sources. Expected take-home output: Verification checklist.
Topics and coverage
Why AI can sound confident and still be wrong
- What it means: the tool generates plausible text, not verified facts, so fluency and confidence are not evidence of truth.
- What to cover: what hallucination means in practice, why wrong answers often arrive in perfect grammar, and which question types (dates, numbers, citations, recent events) fail most often.
- Demonstration: one fluent, confident answer that is verifiably wrong, dissected line by line.
- Evidence of learning: learners explain why "it sounded sure" is never a reason to trust an answer.
Source checking
- What it means: tracing a claim back to a trusted source before repeating or using it.
- What to cover: what counts as a trusted source for this class, lateral reading, and asking the tool for its sources, then checking that those sources exist.
- Demonstration: a generated answer with two real citations and one invented one; find the fake.
- Evidence of learning: learners verify one generated claim end-to-end and record the source they used.
Common-sense checking
- What it means: pausing to ask whether an answer is even plausible before reaching for sources: do the numbers, dates, and logic hold together?
- What to cover: sanity checks (orders of magnitude, timelines, internal contradictions) and when implausibility alone is enough to reject an answer.
- Demonstration: an answer that fails a sanity check (for example, an impossible date) without needing any source lookup.
- Evidence of learning: learners catch one implausible claim using reasoning alone.
Asking for uncertainty
- What it means: prompting the tool to state what it is unsure about, what it might have wrong, and what should be verified.
- What to cover: prompts like "what in this answer should I double-check?", and why the tool's own uncertainty report also needs judgment.
- Demonstration: the same question asked plainly, then with an uncertainty request, comparing what the second version reveals.
- Evidence of learning: learners add an uncertainty request to their own prompts and act on at least one flagged item.
Practice and evidence of learning
- Learners complete or discuss the spot-the-fake-answer activity: comparing generated answers against trusted sources.
- Learners produce: Verification checklist.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Module 6. Images, deepfakes, and media literacy
Module focus: AI-generated images, voice cloning, misinformation, consent, watermarking, and online sharing. Primary live activity or lab: Analyze examples of real, edited, and generated media using a checklist. Expected take-home output: Media trust scorecard.
Topics and coverage
AI-generated images
- What it means: images produced from text prompts by models trained on huge captioned photo collections; they can depict events that never happened.
- What to cover: how convincing generated images have become, common tells (hands, text, lighting, backgrounds), and why tells alone are not a reliable test.
- Demonstration: one obvious fake, one convincing fake, and one real photo wrongly suspected of being fake.
- Evidence of learning: learners sort a mixed set of images and state the evidence behind each call.
Voice cloning
- What it means: a short recording can be enough to synthesize someone's voice saying anything.
- What to cover: scam calls impersonating family members, fake audio of public figures, and verification habits (call back on a known number, agree on a family code word).
- Demonstration: a reported real cloning scam, plus the verification step that would have stopped it.
- Evidence of learning: learners write a family plan for verifying an unexpected urgent voice call.
Misinformation
- What it means: false or misleading content, which generative tools make cheaper to produce and faster to spread.
- What to cover: why emotionally charged content spreads fastest, how AI lowers the cost of fakes, and the pause-before-sharing habit.
- Demonstration: trace one viral fake from creation to debunking and count who shared it before checking.
- Evidence of learning: learners apply a stop-check-source routine to one trending claim.
Consent
- What it means: using someone's face, voice, or work in AI tools without permission can harm them, even when it is technically easy.
- What to cover: deepfakes of classmates as a form of bullying, the school's rules on likeness and creative work, and asking before using anyone's image or voice.
- Demonstration: one acceptable use (your own photo), one borderline use (a friend's photo with informal permission), and one prohibited use (a classmate's face without consent).
- Evidence of learning: learners state the consent rule they will follow in their own projects.
Watermarking
- What it means: technical attempts to label AI content, through visible marks, hidden signals, or provenance metadata, so its origin can be checked.
- What to cover: what watermarks can and cannot guarantee, how easily some are removed, and why the absence of a watermark proves nothing.
- Demonstration: one labeled AI image, and the same image after the label has been cropped away.
- Evidence of learning: learners explain why watermarking helps but cannot replace judgment.
Online sharing
- What it means: sharing amplifies content; every share of a fake extends its reach, and shared personal images can be scraped and reused by AI tools.
- What to cover: think-before-posting, when shared generated content must be labeled, and how personal photos can end up in training data.
- Demonstration: a realistic scenario: a convincing fake arrives in a group chat; walk through the decision to share, flag, or ignore it.
- Evidence of learning: learners write their own two-rule sharing policy.
Practice and evidence of learning
- Learners complete or discuss: Analyze examples of real, edited, and generated media using a checklist.
- Learners produce: Media trust scorecard.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Module 7. Creative AI project studio
Module focus: Storytelling, comics, explainers, presentations, posters, or simple chatbots. Primary live activity or lab: Teams build an AI-assisted creative artifact with a human contribution log. Expected take-home output: Project artifact.
Topics and coverage
Storytelling
- What it means: using AI as a brainstorming and drafting partner for original stories, while the ideas, choices, and voice stay the student's.
- What to cover: prompting for premises and plot alternatives versus asking for finished text, and logging which parts are human.
- Demonstration: a story outlined by the student, with AI used only to suggest three possible endings.
- Evidence of learning: learners produce a story plus a contribution log separating human work from AI assistance.
Comics
- What it means: combining student-written scripts and layouts with AI-generated or AI-assisted art.
- What to cover: panel planning, keeping characters consistent across generated images (a known weak point), and crediting the tools used.
- Demonstration: a one-page comic where the script is human and the backgrounds are generated.
- Evidence of learning: learners produce a short comic with every AI-assisted panel marked.
Explainers
- What it means: short texts or videos that teach a concept, with AI used for drafting, simplifying, and generating practice questions.
- What to cover: accuracy first: every AI-drafted claim is verified before it ships, using the Module 5 checklist.
- Demonstration: an explainer draft with one planted error, found and fixed during review.
- Evidence of learning: learners publish an explainer with a verification note attached.
Presentations
- What it means: mapping where AI fits in the learner's real slide-building workflow, and which parts are judgment-heavy versus draftable.
- What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final result.
- Demonstration: convert one messy set of notes into a structured outline, draft slides, and a review checklist.
- Evidence of learning: learners produce a reusable slide-building template they can use after the course.
Posters
- What it means: awareness posters combining a student-written message with AI-assisted visuals.
- What to cover: message before decoration, checking any text inside generated images (a common failure), and image attribution.
- Demonstration: one poster designed around its message and one around a flashy generated image, compared for effect.
- Evidence of learning: learners produce a poster whose key claim they can defend with a source.
Simple chatbots
- What it means: a small question-answering bot, rule-based or prompt-based, that makes the link between inputs, rules or prompts, and outputs visible.
- What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
- Demonstration: show a weak version, a stronger structured version, and a reviewed final version with explicit checks.
- Evidence of learning: learners build a bot, test it on at least two unseen questions, and document where it fails.
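A minimal "simple chatbot" of the kind a team might build is sketched below: keyword rules plus a fallback. The rules, messages, and the science-fair scenario are all invented; the point is that students can see exactly why the bot answers, and exactly where it fails:

```python
# Hypothetical keyword rules for an invented science-fair helper bot.
RULES = [
    (("hello", "hi"), "Hi! Ask me about the science fair."),
    (("when", "date"), "The science fair is on the last Friday of term."),
    (("where",), "It is in the main hall."),
]
FALLBACK = "I don't know that yet. Ask a teacher, or teach me a new rule."

def reply(message):
    """Return the first rule whose keyword appears in the message."""
    words = message.lower().split()
    for keywords, answer in RULES:
        if any(k in words for k in keywords):
            return answer
    return FALLBACK

print(reply("When is it?"))        # matches the "when" rule
print(reply("Can I bring food?"))  # no rule applies, so the fallback fires
```

Testing the bot on unseen questions (the module's evidence-of-learning step) is where its limits surface: "What day is the fair?" misses the "when" rule entirely, which motivates the discussion of rule-based brittleness from Module 1.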
Practice and evidence of learning
- Learners complete or discuss: Teams build an AI-assisted creative artifact with a human contribution log.
- Learners produce: Project artifact.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Module 8. Showcase and responsible use pledge
Module focus: Attribution, privacy, respectful use, avoiding over-reliance, asking better questions. Primary live activity or lab: Student showcase with peer feedback. Expected take-home output: AI use pledge and portfolio page.
Topics and coverage
Attribution
- What it means: define attribution as clearly crediting which ideas, words, and images came from AI tools, outside sources, or the student, and connect it to the module's theme of responsible use.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
privacy
- What it means in this course: define privacy in operational terms, not as an abstract principle.
- What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what students aged roughly 13-16 must never delegate blindly to AI.
- Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
- Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.
respectful use
- What it means: define respectful use as refusing to create AI outputs that mock, impersonate, or harm others, and connect it to the module's classroom norms for safe interaction.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
avoiding over-reliance
- What it means: define over-reliance as letting AI replace thinking rather than support it, and connect it to the module's goal of keeping the student in charge of the work.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
asking better questions
- What it means: define asking better questions as moving from vague prompts to specific, checkable ones, and connect it to the module's aim of using AI as a thinking partner.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Practice and evidence of learning
- Learners complete or discuss: Student showcase with peer feedback.
- Learners produce: AI use pledge and portfolio page.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Labs, projects, and assessments
- Lab 1: Turn a confusing school topic into a simple explanation, quiz, and memory aid.
- Lab 2: Create a two-page comic or a one-page poster explaining one AI risk to younger students.
- Capstone: Build an AI-assisted learning artifact and submit a short log describing what the student did, what AI did, and what was verified.
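The capstone's human contribution log can take many forms; the sketch below shows one possible shape as a structured record. The field names are an assumption for illustration, not a required format.

```python
# One possible shape for the capstone's human contribution log.
# Field names are illustrative assumptions, not a required format.

def log_entry(step: str, student_did: str, ai_did: str, verified: str) -> dict:
    """Bundle one capstone step into a reviewable record."""
    return {
        "step": step,
        "student_did": student_did,
        "ai_did": ai_did,
        "verified": verified,
    }

log = [
    log_entry(
        step="Outline",
        student_did="Chose the topic and listed three questions to answer",
        ai_did="Suggested a structure for the explainer",
        verified="Checked the structure covered all three questions",
    ),
    log_entry(
        step="Draft",
        student_did="Wrote the examples and the conclusion",
        ai_did="Drafted two paragraphs of background",
        verified="Cross-checked two factual claims against the textbook",
    ),
]

for entry in log:
    print(f"{entry['step']}: verified -> {entry['verified']}")
```

A paper or spreadsheet version with the same three columns — what the student did, what AI did, what was verified — works equally well; the point is that every step has an explicit verification note.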
Evaluation approach
- 30% participation and prompt exercises.
- 30% verification and media-literacy worksheet.
- 40% capstone project with human contribution log.
- Evaluation should reward curiosity, checking, and explanation quality more than polish.
Recommended tools and materials
- Age-appropriate AI assistant with teacher supervision.
- Slides, worksheets, safe image examples, collaborative whiteboard, quiz generator, and classroom LMS.
- No student personal data should be entered into public AI tools.
Safety, ethics, and governance emphasis
- Do not give minors unsupervised accounts unless school policy explicitly allows it.
- Teach students not to upload private photos, school IDs, phone numbers, addresses, or family information.
- Include explicit discussion of plagiarism, fake media, cyberbullying, and respectful use.
Delivery notes
- Keep sessions short and active. Use games, debates, and small projects.
- Do not over-focus on technical jargon. Build intuitions first.
- Invite parents or teachers to the final showcase so the course builds trust.
Instructor build checklist
- Prepare one short demo for each module and one learner activity that creates a saved artifact.
- Prepare examples that match the audience, local context, and likely tools learners can access.
- Add a verification step to every AI-generated output: factual check, source check, data sensitivity check, and quality review.
- Keep a running portfolio folder so each module contributes to the final project or learner playbook.
- Reserve time for reflection on what the learner did, what AI did, what was checked, and what remains uncertain.