3. AI for Undergraduates: From Literacy to Applied Projects
Course Positioning
This course helps undergraduates become competent AI users and project builders. It is not only about prompting. Students learn how to use AI for research, analysis, coding, domain projects, and career portfolios while understanding limitations, evaluation, and responsible use.
Learning outcomes
- Explain the AI stack from data and models to applications, interfaces, and evaluation.
- Use AI tools for literature review, coding, data analysis, writing, design, and presentations.
- Design a workflow that combines human reasoning, AI generation, verification, and iteration.
- Build an applied prototype or research artifact relevant to the student's discipline.
- Create a portfolio entry that clearly documents problem, method, AI use, evaluation, and limitations.
Expanded Topic-by-Topic Coverage
Module 1. AI as a general-purpose technology
Module focus: Foundation models, multimodality, model ecosystems, open vs closed models, applications across disciplines. Primary live activity or lab: Map AI impact across five majors. Expected take-home output: Discipline-specific opportunity map.
Topics and coverage
Foundation models
- What it means: explain how foundation models change the interaction between human intent, model behavior, external information, and final output.
- What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
- Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
- Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.
Multimodality
- What it means: define multimodality clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Model ecosystems
- What it means: place model ecosystems inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
- What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
- Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
- Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.
Open vs closed models
- What it means: place open vs closed models inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
- What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
- Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
- Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.
Applications across disciplines
- What it means: define applications across disciplines clearly and connect them to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Practice and evidence of learning
- Learners complete or discuss: Map AI impact across five majors.
- Learners produce: Discipline-specific opportunity map.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Module 2. Prompting beyond tricks
Module focus: Task decomposition, role/context/examples, rubrics, critique loops, structured outputs, reusable prompt templates. Primary live activity or lab: Turn a messy assignment brief into a robust AI workflow. Expected take-home output: Prompt workflow sheet.
Topics and coverage
Task decomposition
- What it means: define task decomposition clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Role/context/examples
- What it means: define role, context, and examples clearly and connect them to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Rubrics
- What it means: define rubrics clearly and connect them to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Critique loops
- What it means: define critique loops clearly and connect them to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Structured outputs
- What it means: explain how structured outputs change the interaction between human intent, model behavior, external information, and final output.
- What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
- Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
- Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.
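The demonstration above can be made concrete with a small checker. This is a minimal sketch, not part of the course materials: the field names and types are illustrative assumptions about what a prompt might request, and the point is that a structured output can be verified mechanically while free text cannot.

```python
import json

# Fields the prompt asked the model to return; names and types are illustrative.
REQUIRED_FIELDS = {"summary": str, "confidence": float, "sources": list}

def check_structured_output(raw: str) -> list:
    """Return a list of problems found in a model's JSON reply."""
    problems = []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return ["output is not valid JSON"]
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            problems.append("missing field: " + field)
        elif not isinstance(data[field], expected_type):
            problems.append("wrong type for " + field)
    return problems

# A weak, unstructured reply fails the check; a structured reply passes.
weak = "Sure! Here's a summary: the paper argues..."
strong = '{"summary": "The paper argues X.", "confidence": 0.8, "sources": ["Smith 2021"]}'
print(check_structured_output(weak))    # ["output is not valid JSON"]
print(check_structured_output(strong))  # []
```

In class, the same check can serve as the "explicit checks" step of the weak/stronger/reviewed sequence.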
Reusable prompt templates
- What it means: explain how reusable prompt templates change the interaction between human intent, model behavior, external information, and final output.
- What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
- Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
- Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.
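One way to meet the "test it on at least two examples" requirement is to treat the template as code. The sketch below is illustrative, assuming nothing about any specific AI tool; the slot names are hypothetical choices a learner might make.

```python
# A reusable prompt template as a plain function; slot names are illustrative.
TEMPLATE = (
    "You are a {role}.\n"
    "Task: {task}\n"
    "Constraints: {constraints}\n"
    "Return your answer as {output_format}."
)

def fill_prompt(role, task, constraints, output_format):
    """Fill every slot; format() raises KeyError if a slot is forgotten."""
    return TEMPLATE.format(role=role, task=task,
                           constraints=constraints, output_format=output_format)

# Testing the same template on two different examples, as the module asks.
p1 = fill_prompt("biology tutor", "explain osmosis",
                 "under 150 words, no jargon", "a numbered list")
p2 = fill_prompt("data analyst", "summarize this CSV",
                 "flag missing values", "a short memo")
print(p1)
print(p2)
```

Because the slots are explicit, learners can see exactly which parts of the prompt vary across tasks and which stay fixed.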
Practice and evidence of learning
- Learners complete or discuss: Turn a messy assignment brief into a robust AI workflow.
- Learners produce: Prompt workflow sheet.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Module 3. Research workflows
Module focus: Search, literature triage, citation tracking, note synthesis, claim verification, avoiding fabricated references. Primary live activity or lab: Build a literature matrix on a chosen topic. Expected take-home output: Annotated research matrix.
Topics and coverage
Search
- What it means: define search clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Literature triage
- What it means: define literature triage clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Citation tracking
- What it means: define citation tracking clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Note synthesis
- What it means: define note synthesis clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Claim verification
- What it means: define claim verification clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Avoiding fabricated references
- What it means: explain why models fabricate references and connect the habit of checking every citation to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Practice and evidence of learning
- Learners complete or discuss: Build a literature matrix on a chosen topic.
- Learners produce: Annotated research matrix.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Module 4. Data and spreadsheet analysis
Module focus: Cleaning data, generating formulas, interpreting charts, asking statistical questions, spotting errors. Primary live activity or lab: Analyze a small dataset with AI support and manual checks. Expected take-home output: Data analysis memo.
Topics and coverage
Cleaning data
- What it means: connect cleaning data to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
- What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
- Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
- Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.
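A worked micro-example can anchor the demonstration. The rows and rules below are illustrative assumptions, not course data; the point is that each cleaning decision (trimming, normalizing case, keeping missing values explicit, dropping duplicates) is a check a learner should be able to name before trusting a result.

```python
# A minimal cleaning pass over rows from a small survey table.
# Column names and rules are illustrative, not from the course dataset.
raw_rows = [
    {"name": "  Ada ", "age": "21", "major": "CS"},
    {"name": "Ben",    "age": "",   "major": "cs"},
    {"name": "Ada",    "age": "21", "major": "CS"},   # duplicate of row 1
]

def clean(rows):
    seen, out = set(), []
    for row in rows:
        name = row["name"].strip()              # trim stray whitespace
        major = row["major"].upper()            # normalize case before comparing
        age = int(row["age"]) if row["age"] else None  # keep missing explicit
        key = (name, major)
        if key in seen:                         # drop exact duplicates
            continue
        seen.add(key)
        out.append({"name": name, "age": age, "major": major})
    return out

cleaned = clean(raw_rows)
print(cleaned)  # 2 rows: Ada (age 21, CS) and Ben (age None, CS)
```

The manual-check habit matters most when an AI tool performs these steps invisibly: learners should be able to state which rule produced each change.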
Generating formulas
- What it means: define generating formulas clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
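A useful failure example here is an AI-suggested formula that silently mishandles missing cells. The sketch below is illustrative (the numbers are made up); it stands in for checking something like an AVERAGE formula by recomputing it on a tiny sample.

```python
# Cross-checking an AI-suggested average against a manual calculation.
scores = [72, 85, None, 90]          # None marks a missing cell

# A naive version a model might produce: treats missing as zero (wrong).
naive = sum(s or 0 for s in scores) / len(scores)

# Corrected version: skip missing cells, as spreadsheet AVERAGE functions do.
present = [s for s in scores if s is not None]
correct = sum(present) / len(present)

print(naive)    # 61.75 -- silently deflated by the missing value
print(correct)  # 82.33...
```

Recomputing one formula by hand on three or four rows is usually enough to catch this class of error before it reaches a memo.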
Interpreting charts
- What it means: define interpreting charts clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Asking statistical questions
- What it means: define asking statistical questions clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Spotting errors
- What it means: define spotting errors clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Practice and evidence of learning
- Learners complete or discuss: Analyze a small dataset with AI support and manual checks.
- Learners produce: Data analysis memo.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Module 5. Coding with AI
Module focus: Specification writing, pseudocode, code generation, debugging, tests, notebooks, Git basics. Primary live activity or lab: Use AI to build and test a small script or notebook. Expected take-home output: GitHub or notebook artifact.
Topics and coverage
Specification writing
- What it means: show where specification writing appears in the learner's real workflow and which parts are judgment-heavy versus draftable.
- What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final decision.
- Demonstration: convert one messy real-world input into a structured brief, draft, analysis, checklist, or next action.
- Evidence of learning: learners produce a reusable template or playbook entry that can be used after the course.
Pseudocode
- What it means: define pseudocode clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Code generation
- What it means: define code generation clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Debugging
- What it means: define debugging clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Tests
- What it means: define tests clearly and connect them to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
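A minimal harness makes the point that AI-generated code is untrusted until tested. The function below is a stand-in for something a model wrote; the cases, including the empty-string edge case, are illustrative choices.

```python
# Treating AI-generated code as untrusted: a tiny assert-based test harness.
# word_count stands in for a function the model generated.
def word_count(text):
    return len(text.split())

def run_tests():
    cases = [
        ("hello world", 2),
        ("", 0),                  # edge case models often get wrong
        ("  spaced   out  ", 2),  # extra whitespace should not inflate the count
    ]
    for text, expected in cases:
        got = word_count(text)
        assert got == expected, f"word_count({text!r}) = {got}, expected {expected}"
    return "all tests passed"

print(run_tests())
```

Learners can write the cases before asking the model for the function, which turns the test list into a lightweight specification.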
Notebooks
- What it means: define notebooks clearly and connect them to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Git basics
- What it means: define Git basics clearly and connect them to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Practice and evidence of learning
- Learners complete or discuss: Use AI to build and test a small script or notebook.
- Learners produce: GitHub or notebook artifact.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Module 6. Domain project design
Module focus: Problem framing, stakeholders, constraints, feasibility, risk, evaluation metrics. Primary live activity or lab: Project canvas workshop. Expected take-home output: One-page project proposal.
Topics and coverage
Problem framing
- What it means: define problem framing clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Stakeholders
- What it means: define stakeholders clearly and connect them to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Constraints
- What it means: define constraints clearly and connect them to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Feasibility
- What it means: define feasibility clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Risk
- What it means in this course: define risk in operational terms, not as an abstract principle.
- What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what undergraduate students across disciplines must never delegate blindly to AI.
- Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
- Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.
Evaluation metrics
- What it means: connect evaluation metrics to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
- What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
- Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
- Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.
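For projects with a classification component, a small worked calculation shows what an evaluation metric actually measures. The labels below are made-up illustrative data, not a course dataset.

```python
# Precision and recall computed from a small labeled sample.
predictions = [1, 1, 0, 1, 0, 0, 1, 0]   # what the system flagged
truth       = [1, 0, 0, 1, 1, 0, 1, 0]   # what was actually true

tp = sum(p == 1 and t == 1 for p, t in zip(predictions, truth))  # true positives
fp = sum(p == 1 and t == 0 for p, t in zip(predictions, truth))  # false positives
fn = sum(p == 0 and t == 1 for p, t in zip(predictions, truth))  # false negatives

precision = tp / (tp + fp)   # of flagged items, how many were right
recall = tp / (tp + fn)      # of true items, how many were found
print(precision, recall)     # 0.75 0.75
```

Seeing that the two metrics penalize different mistakes helps learners pick evaluation criteria that match their project's stakeholders and risks.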
Practice and evidence of learning
- Learners complete or discuss: Project canvas workshop.
- Learners produce: One-page project proposal.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Module 7. Responsible AI and academic integrity
Module focus: Disclosure, privacy, bias, hallucination, accountability, originality, copyright. Primary live activity or lab: Analyze a case of AI misuse in academia or industry. Expected take-home output: Responsible-use statement.
Topics and coverage
Disclosure
- What it means: define disclosure clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Privacy
- What it means in this course: define privacy in operational terms, not as an abstract principle.
- What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what undergraduate students across disciplines must never delegate blindly to AI.
- Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
- Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.
Bias
- What it means in this course: define bias in operational terms, not as an abstract principle.
- What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what undergraduate students across disciplines must never delegate blindly to AI.
- Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
- Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.
Hallucination
- What it means: define hallucination clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Accountability
- What it means: define accountability clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Originality
- What it means: define originality clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Copyright
- What it means: define copyright clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Practice and evidence of learning
- Learners complete or discuss: Analyze a case of AI misuse in academia or industry.
- Learners produce: Responsible-use statement.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Module 8. Capstone sprint
Module focus: Prototype, evaluation, documentation, presentation. Primary live activity or lab: Build and present a project. Expected take-home output: Portfolio-ready project page.
Topics and coverage
Prototype
- What it means: define prototype clearly and connect it to the module focus above.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Evaluation
- What it means: connect evaluation to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
- What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
- Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
- Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.
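The checks described above can be sketched as a small script learners might run before trusting a result. `data_checks` and the sample survey are illustrative inventions for this sketch, not part of the course materials:

```python
# Checks to run on a small dataset before trusting an analysis result.
def data_checks(rows, required_fields):
    """Return a list of issues found in a list-of-dicts dataset."""
    issues = []
    if not rows:
        issues.append("dataset is empty")
        return issues
    # Missing or empty required fields undermine any downstream claim.
    for i, row in enumerate(rows):
        for field in required_fields:
            if field not in row or row[field] in (None, ""):
                issues.append(f"row {i}: missing '{field}'")
    # Duplicate records can silently inflate apparent effects.
    seen = set()
    for i, row in enumerate(rows):
        key = tuple(sorted(row.items()))
        if key in seen:
            issues.append(f"row {i}: duplicate record")
        seen.add(key)
    return issues

survey = [
    {"id": 1, "score": 4},
    {"id": 2, "score": None},
    {"id": 1, "score": 4},
]
print(data_checks(survey, ["id", "score"]))
```

A learner's analysis note can then cite the script's output directly as its list of assumptions and limitations.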
Documentation
- What it means: define documentation clearly and connect it to the module focus: Prototype, evaluation, documentation, presentation.
- What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
- Demonstration: give one simple example, one realistic example, and one failure or limitation example.
- Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
Presentation
- What it means: show where presentation appears in the learner's real workflow and which parts are judgment-heavy versus draftable.
- What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final decision.
- Demonstration: convert one messy real-world input into a structured brief, draft, analysis, checklist, or next action.
- Evidence of learning: learners produce a reusable template or playbook entry that can be used after the course.
Practice and evidence of learning
- Learners complete or discuss: Build and present a project.
- Learners produce: Portfolio-ready project page.
- Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
- Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
Minimum coverage before moving on
- Learners can explain the module vocabulary without relying on tool-generated text.
- Learners have seen one worked example, one hands-on application, and one limitation or failure case.
- Learners know what must be verified, what data must be protected, and who remains accountable for the output.
Labs, projects, and assessments
- Lab 1: Build a literature review matrix with at least 10 verified sources.
- Lab 2: Use AI to generate code, then write tests and document limitations.
- Lab 3: Convert a class assignment into a reusable AI-assisted workflow.
- Capstone: Applied prototype or research artifact in the student's field, with a transparent AI-use log.
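Lab 2's generate-then-test pattern can be sketched in a few lines. The AI-drafted helper here, `trimmed_mean`, is an invented example for illustration, not part of the course materials; the point is that the hand-written tests probe edge cases the draft may have missed:

```python
# Hypothetical AI-generated function from Lab 2: a trimmed mean.
def trimmed_mean(values, trim=1):
    """Drop the `trim` lowest and highest values, then average the rest."""
    if len(values) <= 2 * trim:
        raise ValueError("not enough values after trimming")
    kept = sorted(values)[trim : len(values) - trim]
    return sum(kept) / len(kept)

# Tests the student writes by hand, including an edge case.
assert trimmed_mean([1, 2, 3, 4, 100]) == 3.0  # outlier removed
assert trimmed_mean([5, 5, 5]) == 5.0
try:
    trimmed_mean([1, 2], trim=1)
except ValueError:
    pass  # documented limitation: too few values to trim
```

The documented limitation (inputs too short to trim) becomes one line in the lab's limitations write-up.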
Evaluation approach
- 20% weekly workflow assignments.
- 20% research and verification matrix.
- 20% code/data/design lab.
- 40% capstone project, documentation, and presentation.
Recommended tools and materials
- AI assistant, search tools, Zotero or another reference manager, Google Sheets/Excel, Colab/Jupyter or Replit, GitHub, and optionally Canva/Figma.
- Optional technical extension: API use, embeddings, simple RAG, and local models.
Safety, ethics, and governance emphasis
- Use a mandatory AI-use disclosure template.
- Teach students to verify citations and numerical claims manually.
- Never submit private survey data, grades, or identifiable participant data to public AI tools.
Delivery notes
- Use discipline-specific examples so non-CS students see relevance.
- End with a portfolio showcase; a public presentation raises the stakes and adds employability value.
- Offer optional coding clinics for students who want a technical track.
Instructor Build Checklist
- Prepare one short demo for each module and one learner activity that creates a saved artifact.
- Prepare examples that match the audience, local context, and likely tools learners can access.
- Add a verification step to every AI-generated output: factual check, source check, data sensitivity check, and quality review.
- Keep a running portfolio folder so each module contributes to the final project or learner playbook.
- Reserve time for reflection on what the learner did, what AI did, what was checked, and what remains uncertain.