
8. AI and the Future of Work: Economics, Governance, and Policy

Audience: Business leaders, HR leaders, educators, policymakers, students, economists, consultants, union leaders, and civic organizations
Duration: 8 weeks, 1-2 sessions per week
Modules: 8


Course Positioning

A multidisciplinary course on how AI changes tasks, occupations, firms, wages, education, institutions, law, and public policy.

Learning outcomes

  • Analyze AI impact at the task level rather than making simplistic job replacement claims.
  • Understand augmentation, automation, deskilling, reskilling, labor displacement, productivity, wage effects, and inequality risks.
  • Design workforce transition plans for teams, firms, schools, and local economies.
  • Compare governance approaches: company policy, national regulation, standards, audits, procurement rules, and international principles.
  • Evaluate AI adoption through both productivity and social legitimacy lenses.
  • Create a responsible AI workforce strategy for a real organization or sector.

Course Design Snapshot

  • Positioning: A multidisciplinary course on how AI changes tasks, occupations, firms, wages, education, institutions, law, and public policy.
  • Audience: Business leaders, HR leaders, educators, policymakers, students, economists, consultants, union leaders, and civic organizations.
  • Duration: 8 weeks, 1-2 sessions per week.
  • Prerequisites: No coding required. Comfort with reading reports and discussing evidence.
  • Format: Case studies, task analysis, labor-market frameworks, governance design, policy simulation, and scenario planning.

Expanded Topic-by-Topic Coverage

Module 1. From jobs to tasks

Module focus: From jobs to tasks: exposure, complementarity, substitution, tacit knowledge, and why occupations are bundles of activities. Primary live activity or lab: Decompose five jobs into tasks and classify AI exposure.

Topics and coverage

exposure

  • What it means: define exposure clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

complementarity

  • What it means: define complementarity clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

substitution

  • What it means: define substitution clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

tacit knowledge

  • What it means: define tacit knowledge clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

why occupations are bundles of activities

  • What it means: explain why occupations are bundles of activities and connect that framing to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Decompose five jobs into tasks and classify AI exposure.
  • Learners produce: a task-level AI exposure map covering all five jobs.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
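The decomposition lab can be prototyped as a small script. The job names, task lists, and exposure labels below are illustrative assumptions for classroom discussion, not validated exposure ratings:

```python
# Sketch of the Module 1 lab: decompose jobs into tasks and classify
# each task's AI exposure. Labels are illustrative guesses.
from collections import Counter

EXPOSURE_LEVELS = ("high", "medium", "low")

jobs = {
    "paralegal": [
        ("summarize case files", "high"),
        ("draft routine filings", "high"),
        ("client intake interviews", "low"),
    ],
    "radiology technician": [
        ("position patients for imaging", "low"),
        ("flag anomalous scans for review", "medium"),
    ],
}

def exposure_profile(tasks):
    """Share of a job's tasks at each exposure level."""
    counts = Counter(level for _, level in tasks)
    total = len(tasks)
    return {level: counts[level] / total for level in EXPOSURE_LEVELS}

for job, tasks in jobs.items():
    profile = exposure_profile(tasks)
    print(job, {k: round(v, 2) for k, v in profile.items()})
```

The point of the exercise is the profile, not the job title: two occupations with the same title can have very different exposure mixes once their tasks are listed.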

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 2. Productivity and firm redesign

Module focus: Productivity and firm redesign: copilots, agents, workflow automation, management layers, and organizational bottlenecks. Primary live activity or lab: Map a department before and after AI adoption.

Topics and coverage

copilots

  • What it means: explain how copilots change the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

agents

  • What it means: explain how agents change the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

workflow automation

  • What it means: show where workflow automation appears in the learner's real workflow and which parts are judgment-heavy versus draftable.
  • What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final decision.
  • Demonstration: convert one messy real-world input into a structured brief, draft, analysis, checklist, or next action.
  • Evidence of learning: learners produce a reusable template or playbook entry that can be used after the course.

management layers

  • What it means: define management layers clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

organizational bottlenecks

  • What it means: define organizational bottlenecks clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Map a department before and after AI adoption.
  • Learners produce: a before-and-after map of one department's workflows.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
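The before-and-after mapping can be captured in a simple data structure so changes of ownership stand out. The department, step names, and owners below are hypothetical:

```python
# Sketch of the Module 2 lab: map a department's workflow steps before
# and after AI adoption. Steps and owners are hypothetical examples.

before = [
    ("collect customer emails", "agent"),
    ("draft replies", "agent"),
    ("approve replies", "manager"),
]

after = [
    ("collect customer emails", "automation"),
    ("draft replies", "copilot + agent review"),
    ("approve replies", "manager"),
]

def diff_map(before, after):
    """List steps whose owner changed, preserving workflow order."""
    owners_before = dict(before)
    return [(step, owners_before[step], owner)
            for step, owner in after
            if owners_before.get(step) != owner]

for step, old, new in diff_map(before, after):
    print(f"{step}: {old} -> {new}")
```

Steps whose owner is unchanged (here, final approval) mark where human accountability stays fixed; that is usually where the discussion of bottlenecks and review begins.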

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 3. Skills and education

Module focus: Skills and education: reskilling, assessment, lifelong learning, credentialing, and AI literacy across age groups. Primary live activity or lab: Design a skill transition pathway for one occupation.

Topics and coverage

reskilling

  • What it means: define reskilling clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

assessment

  • What it means: define assessment clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

lifelong learning

  • What it means: define lifelong learning clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

credentialing

  • What it means: define credentialing clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

AI literacy across age groups

  • What it means: define AI literacy across age groups clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Design a skill transition pathway for one occupation.
  • Learners produce: a documented skill transition pathway for the chosen occupation.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
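A transition pathway can be sketched as ordered milestones with durations and credentials, which makes total time and credential coverage easy to check. The occupation, skills, and weeks below are illustrative assumptions:

```python
# Sketch of the Module 3 lab: a skill transition pathway for one
# occupation, expressed as ordered milestones. Contents are illustrative.

pathway = {
    "occupation": "bookkeeper",
    "target_role": "financial analyst",
    "milestones": [
        {"skill": "spreadsheet modeling", "weeks": 4, "credential": None},
        {"skill": "data analysis basics", "weeks": 6, "credential": "certificate"},
        {"skill": "AI-assisted reporting", "weeks": 3, "credential": None},
    ],
}

def total_weeks(pathway):
    """Total calendar time implied by the pathway."""
    return sum(m["weeks"] for m in pathway["milestones"])

def credentialed(pathway):
    """Skills that end in a verifiable credential."""
    return [m["skill"] for m in pathway["milestones"] if m["credential"]]

print(total_weeks(pathway))   # 13
print(credentialed(pathway))  # ['data analysis basics']
```

Structuring the pathway this way forces learners to state assumptions (durations, credential requirements) explicitly rather than leaving them implicit in prose.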

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 4. Labor economics

Module focus: Labor economics: wages, bargaining power, inequality, winner-take-most markets, and geographic concentration. Primary live activity or lab: Debate whether AI raises average productivity while widening inequality.

Topics and coverage

wages

  • What it means: define wages clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

bargaining power

  • What it means: define bargaining power clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

inequality

  • What it means: define inequality clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

winner-take-most markets

  • What it means: connect winner-take-most markets to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

geographic concentration

  • What it means: define geographic concentration clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Debate whether AI raises average productivity while widening inequality.
  • Learners produce: a position brief summarizing the evidence on each side.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
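The debate's central claim can be made concrete with toy numbers: an average can rise while dispersion widens at the same time. The wage figures below are hypothetical, in arbitrary units, chosen only to illustrate the mechanism:

```python
# Toy numbers for the Module 4 debate: average productivity can rise
# while inequality widens. Wages are hypothetical, arbitrary units.
from statistics import mean

before = [30, 40, 50, 60, 70]
# AI gains concentrated at the top: small raises below, large above.
after = [31, 41, 52, 75, 100]

def ratio_top_bottom(wages):
    """Crude dispersion measure: highest wage over lowest wage."""
    return max(wages) / min(wages)

# The average rises...
assert mean(after) > mean(before)
# ...and so does dispersion between top and bottom.
assert ratio_top_bottom(after) > ratio_top_bottom(before)

print(mean(before), mean(after))
print(ratio_top_bottom(before), ratio_top_bottom(after))
```

A fuller classroom version would use a standard inequality measure such as the Gini coefficient, but the top-to-bottom ratio is enough to show that "average up" and "gap wider" are compatible outcomes.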

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 5. Governance inside organizations

Module focus: Governance inside organizations: acceptable-use policy, procurement, audits, documentation, human review, and accountability. Primary live activity or lab: Draft a company AI use policy for one department.

Topics and coverage

acceptable-use policy

  • What it means in this course: define acceptable-use policy in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what the course audience must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

procurement

  • What it means: define procurement clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

audits

  • What it means: define audits clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

documentation

  • What it means: define documentation clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

human review

  • What it means: define human review clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

accountability

  • What it means: define accountability clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Draft a company AI use policy for one department.
  • Learners produce: a draft AI use policy for the chosen department.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.
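An acceptable-use policy becomes easier to discuss when its tiers are written down in machine-readable form. The rules and data categories below are illustrative, not a real policy; the key design choice is that unknown cases default to human review rather than to "allowed":

```python
# Sketch for the Module 5 lab: a machine-readable starting point for a
# departmental AI use policy. Tiers and categories are illustrative;
# a human reviewer owns the final decision in every case.

POLICY = {
    "prohibited": {"employee health records", "disciplinary decisions"},
    "requires_review": {"customer emails", "performance summaries"},
    "allowed": {"public product copy", "meeting notes"},
}

def classify_use(data_kind):
    """Return the policy tier for a proposed AI use."""
    for tier, kinds in POLICY.items():
        if data_kind in kinds:
            return tier
    return "requires_review"  # unknown cases escalate to a human

print(classify_use("meeting notes"))        # allowed
print(classify_use("salary negotiations"))  # requires_review (default)
```

Learners can test their drafted policy against the acceptable, borderline, and prohibited cases from the topic exercise and check that the defaults fail safe.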

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 6. Public policy

Module focus: Public policy: education policy, competition policy, data governance, labor protections, public sector AI, and safety standards. Primary live activity or lab: Run a policy simulation for schools, SMEs, or healthcare workers.

Topics and coverage

education policy

  • What it means in this course: define education policy in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what the course audience must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

competition policy

  • What it means in this course: define competition policy in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what the course audience must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

data governance

  • What it means in this course: define data governance in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what the course audience must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

labor protections

  • What it means: define labor protections clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

public sector AI

  • What it means: define public sector AI clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

safety standards

  • What it means in this course: define safety standards in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what the course audience must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

Practice and evidence of learning

  • Learners complete or discuss: Run a policy simulation for schools, SMEs, or healthcare workers.
  • Learners produce: a policy recommendation memo based on the simulation.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 7. Ethics and rights

Module focus: Ethics and rights: surveillance, bias, transparency, explainability, dignity, consent, accessibility, and inclusion. Primary live activity or lab: Audit a workplace AI tool for fairness and worker impact.

Topics and coverage

surveillance

  • What it means: define surveillance clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

bias

  • What it means in this course: define bias in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what the course audience must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

transparency

  • What it means: define transparency clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

explainability

  • What it means: define explainability clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

dignity

  • What it means: define dignity clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.
consent

  • What it means: define consent clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

accessibility

  • What it means: define accessibility clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

inclusion

  • What it means: define inclusion clearly and connect it to this module's focus.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: audit a workplace AI tool for fairness and worker impact.
  • Learners produce: a written audit of the tool covering fairness findings and worker-impact risks.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 8. Scenario planning

Module focus: Scenario planning: optimistic, turbulent, unequal, and regulated AI futures. Primary live activity or lab: Present a workforce transition strategy with milestones and safeguards.

Topics and coverage

optimistic futures

  • What it means: define the optimistic scenario clearly and connect it to the module focus: Scenario planning: optimistic, turbulent, unequal, and regulated AI futures.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

turbulent futures

  • What it means: define the turbulent scenario clearly and connect it to the module focus: Scenario planning: optimistic, turbulent, unequal, and regulated AI futures.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

unequal futures

  • What it means: define the unequal scenario clearly and connect it to the module focus: Scenario planning: optimistic, turbulent, unequal, and regulated AI futures.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

regulated AI futures

  • What it means: define regulated AI futures clearly and connect it to the module focus: Scenario planning: optimistic, turbulent, unequal, and regulated AI futures.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: present a workforce transition strategy with milestones and safeguards.
  • Learners produce: a workforce transition strategy document with milestones and safeguards.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Core labs and builds

  • Task exposure mapping lab for teaching, accounting, law, HR, sales, software, healthcare, and manufacturing.
  • Workforce transition lab: redesign roles without erasing human accountability.
  • Governance lab: acceptable-use policy, risk tiers, review boards, and training requirements.
  • Policy lab: local reskilling plan for SMEs, colleges, or government departments.
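Although the course requires no coding, instructors running the task exposure mapping lab may find a small worksheet-style sketch useful for demonstrating the exercise. The sketch below is an illustrative assumption, not a validated rubric: the task attributes (routine, tacit, stakes), the thresholds, and the example accountant tasks are all hypothetical.

```python
# Illustrative sketch of the task-exposure mapping lab: decompose a job into
# tasks and classify each by likely AI exposure. Attributes, thresholds, and
# example tasks are hypothetical teaching material, not an established rubric.

from dataclasses import dataclass


@dataclass
class Task:
    name: str
    routine: int  # 1-5: how repetitive and rule-based the task is
    tacit: int    # 1-5: how much unwritten, contextual knowledge it needs
    stakes: int   # 1-5: cost of an unreviewed error


def classify(task: Task) -> str:
    """Map a task to an exposure category using a simple illustrative rule."""
    if task.routine >= 4 and task.tacit <= 2 and task.stakes <= 2:
        return "automation candidate"
    if task.routine >= 3 and task.stakes <= 4:
        return "augmentation: AI drafts, human reviews"
    return "human-led: tacit knowledge or high stakes dominate"


# Example decomposition of one occupation (accounting) into three tasks.
accountant_tasks = [
    Task("data entry from receipts", routine=5, tacit=1, stakes=2),
    Task("drafting standard client emails", routine=4, tacit=2, stakes=3),
    Task("advising a client in financial distress", routine=1, tacit=5, stakes=5),
]

for t in accountant_tasks:
    print(f"{t.name}: {classify(t)}")
```

The point of the demo is the discussion it provokes, not the scores: learners should argue about where the thresholds belong and which attributes are missing, which mirrors the lab's goal of treating occupations as bundles of tasks rather than single units.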

Capstone

  • Create a sector-specific AI workforce transition plan. It should include affected tasks, augmentation opportunities, displacement risks, a reskilling plan, a governance model, a measurement framework, and policy recommendations.

Assessment design

  • Task exposure analysis.
  • Workforce redesign memo.
  • Governance policy draft.
  • Final transition plan presentation.

Materials

  • Task inventories, job descriptions, workforce survey templates, governance checklists, policy report readings, role redesign canvases, risk matrices.

Instructor notes

  • The course should emphasize that AI adoption is not only a technology transition. It is an institutional transition involving incentives, skills, law, trust, and distribution of gains.

Instructor Build Checklist

  • Prepare one short demo for each module and one learner activity that creates a saved artifact.
  • Prepare examples that match the audience, local context, and likely tools learners can access.
  • Add a verification step to every AI-generated output: factual check, source check, data sensitivity check, and quality review.
  • Keep a running portfolio folder so each module contributes to the final project or learner playbook.
  • Reserve time for reflection on what the learner did, what AI did, what was checked, and what remains uncertain.