AI AI EducationCurriculum Library
All courses

AI Curriculum

5. Mid-Level AI Architect and Technical Leadership Program

AudienceMid-level professionals with responsibility for platform decisions, vendor choices, AI governance, security, data strategy, or cross-functional delivery
Duration10 weeks, one 2-hour seminar and one 90-minute architecture studio per week
Modules10

5. Mid-Level AI Architect and Technical Leadership Program

Course Positioning

A technical leadership course for managers, architects, senior engineers, and product leaders responsible for scaling AI safely across teams and business units.

Learning outcomes

  • Design an enterprise AI platform architecture spanning data, models, tools, evaluation, security, governance, and monitoring.
  • Distinguish between proof-of-concept success and scalable operational value.
  • Choose appropriate patterns such as RAG, fine-tuning, agents, classical ML, workflow automation, or human-in-the-loop review.
  • Build an AI portfolio with risk tiers, ROI hypotheses, staffing needs, and technical dependency maps.
  • Create governance mechanisms aligned with product velocity rather than blocking innovation.
  • Lead AI delivery teams with clear handoffs among business, engineering, legal, security, domain experts, and operations.

Course Design Snapshot

  • Positioning: A technical leadership course for managers, architects, senior engineers, and product leaders responsible for scaling AI safely across teams and business units.
  • Audience: Mid-level professionals with responsibility for platform decisions, vendor choices, AI governance, security, data strategy, or cross-functional delivery.
  • Duration: 10 weeks, one 2-hour seminar and one 90-minute architecture studio per week.
  • Prerequisites: Experience managing software, data, analytics, product, security, operations, or digital transformation projects.
  • Format: Systems lectures, case discussions, architecture reviews, governance templates, ROI workshops, and portfolio design.

Expanded Topic-by-Topic Coverage

Module 1. AI as enterprise architecture

Module focus: AI as enterprise architecture: capabilities, workflows, platform layers, operating model, and maturity stages. Primary live activity or lab: Map current AI maturity and identify platform gaps.

Topics and coverage

capabilities

  • What it means: define capabilities clearly and connect it to the module focus: AI as enterprise architecture: capabilities, workflows, platform layers, operating model, and maturity stages.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

workflows

  • What it means: show where workflows appears in the learner's real workflow and which parts are judgment-heavy versus draftable.
  • What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final decision.
  • Demonstration: convert one messy real-world input into a structured brief, draft, analysis, checklist, or next action.
  • Evidence of learning: learners produce a reusable template or playbook entry that can be used after the course.

platform layers

  • What it means: define platform layers clearly and connect it to the module focus: AI as enterprise architecture: capabilities, workflows, platform layers, operating model, and maturity stages.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

operating model

  • What it means: place operating model inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

maturity stages

  • What it means: define maturity stages clearly and connect it to the module focus: AI as enterprise architecture: capabilities, workflows, platform layers, operating model, and maturity stages.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Map current AI maturity and identify platform gaps.
  • Learners produce: Map current AI maturity and identify platform gaps.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 2. Use-case portfolio design

Module focus: Use-case portfolio design: value, feasibility, data readiness, risk tier, user adoption, and executive sponsorship. Primary live activity or lab: Prioritize 20 candidate use cases using a scored portfolio matrix.

Topics and coverage

value

  • What it means: define value clearly and connect it to the module focus: Use-case portfolio design: value, feasibility, data readiness, risk tier, user adoption, and executive sponsorship.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

feasibility

  • What it means: define feasibility clearly and connect it to the module focus: Use-case portfolio design: value, feasibility, data readiness, risk tier, user adoption, and executive sponsorship.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

data readiness

  • What it means: connect data readiness to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

risk tier

  • What it means in this course: define risk tier in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what Mid-level professionals with responsibility for platform decisions, vendor choices, AI governance, security, data strategy, or cross-functional delivery must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

user adoption

  • What it means: define user adoption clearly and connect it to the module focus: Use-case portfolio design: value, feasibility, data readiness, risk tier, user adoption, and executive sponsorship.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

executive sponsorship

  • What it means: define executive sponsorship clearly and connect it to the module focus: Use-case portfolio design: value, feasibility, data readiness, risk tier, user adoption, and executive sponsorship.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Prioritize 20 candidate use cases using a scored portfolio matrix.
  • Learners produce: Prioritize 20 candidate use cases using a scored portfolio matrix.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 3. Data architecture

Module focus: Data architecture: data contracts, knowledge sources, vector stores, access control, lineage, freshness, and governance. Primary live activity or lab: Design a data and retrieval architecture for an internal knowledge assistant.

Topics and coverage

data contracts

  • What it means: connect data contracts to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

knowledge sources

  • What it means: define knowledge sources clearly and connect it to the module focus: Data architecture: data contracts, knowledge sources, vector stores, access control, lineage, freshness, and governance.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

vector stores

  • What it means: define vector stores clearly and connect it to the module focus: Data architecture: data contracts, knowledge sources, vector stores, access control, lineage, freshness, and governance.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

access control

  • What it means: define access control clearly and connect it to the module focus: Data architecture: data contracts, knowledge sources, vector stores, access control, lineage, freshness, and governance.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

lineage

  • What it means: define lineage clearly and connect it to the module focus: Data architecture: data contracts, knowledge sources, vector stores, access control, lineage, freshness, and governance.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

freshness

  • What it means: define freshness clearly and connect it to the module focus: Data architecture: data contracts, knowledge sources, vector stores, access control, lineage, freshness, and governance.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

governance

  • What it means in this course: define governance in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what Mid-level professionals with responsibility for platform decisions, vendor choices, AI governance, security, data strategy, or cross-functional delivery must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

Practice and evidence of learning

  • Learners complete or discuss: Design a data and retrieval architecture for an internal knowledge assistant.
  • Learners produce: Design a data and retrieval architecture for an internal knowledge assistant.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 4. Model strategy

Module focus: Model strategy: open vs closed models, multi-model routing, fine-tuning, evaluation, model lifecycle, and vendor risk. Primary live activity or lab: Create a model selection rubric and vendor comparison sheet.

Topics and coverage

open vs closed models

  • What it means: place open vs closed models inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

multi-model routing

  • What it means: place multi-model routing inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

fine-tuning

  • What it means: define fine-tuning clearly and connect it to the module focus: Model strategy: open vs closed models, multi-model routing, fine-tuning, evaluation, model lifecycle, and vendor risk.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

evaluation

  • What it means: connect evaluation to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

model lifecycle

  • What it means: place model lifecycle inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

vendor risk

  • What it means in this course: define vendor risk in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what Mid-level professionals with responsibility for platform decisions, vendor choices, AI governance, security, data strategy, or cross-functional delivery must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

Practice and evidence of learning

  • Learners complete or discuss: Create a model selection rubric and vendor comparison sheet.
  • Learners produce: Create a model selection rubric and vendor comparison sheet.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 5. AI application patterns

Module focus: AI application patterns: copilots, workflow automation, decision support, RAG, agents, batch intelligence, and embedded AI. Primary live activity or lab: Select the right architecture pattern for five business cases.

Topics and coverage

copilots

  • What it means: explain how copilots changes the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

workflow automation

  • What it means: show where workflow automation appears in the learner's real workflow and which parts are judgment-heavy versus draftable.
  • What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final decision.
  • Demonstration: convert one messy real-world input into a structured brief, draft, analysis, checklist, or next action.
  • Evidence of learning: learners produce a reusable template or playbook entry that can be used after the course.

decision support

  • What it means: show where decision support appears in the learner's real workflow and which parts are judgment-heavy versus draftable.
  • What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final decision.
  • Demonstration: convert one messy real-world input into a structured brief, draft, analysis, checklist, or next action.
  • Evidence of learning: learners produce a reusable template or playbook entry that can be used after the course.

RAG

  • What it means: explain how RAG changes the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

agents

  • What it means: explain how agents changes the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

batch intelligence

  • What it means: define batch intelligence clearly and connect it to the module focus: AI application patterns: copilots, workflow automation, decision support, RAG, agents, batch intelligence, and embedded AI.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

embedded AI

  • What it means: define embedded AI clearly and connect it to the module focus: AI application patterns: copilots, workflow automation, decision support, RAG, agents, batch intelligence, and embedded AI.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Select the right architecture pattern for five business cases.
  • Learners produce: Select the right architecture pattern for five business cases.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 6. Evaluation and quality management

Module focus: Evaluation and quality management: offline tests, online monitoring, human review, error taxonomies, and acceptance thresholds. Primary live activity or lab: Build an evaluation plan for a high-risk and a low-risk use case.

Topics and coverage

offline tests

  • What it means: define offline tests clearly and connect it to the module focus: Evaluation and quality management: offline tests, online monitoring, human review, error taxonomies, and acceptance thresholds.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

online monitoring

  • What it means: define online monitoring clearly and connect it to the module focus: Evaluation and quality management: offline tests, online monitoring, human review, error taxonomies, and acceptance thresholds.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

human review

  • What it means: define human review clearly and connect it to the module focus: Evaluation and quality management: offline tests, online monitoring, human review, error taxonomies, and acceptance thresholds.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

error taxonomies

  • What it means: define error taxonomies clearly and connect it to the module focus: Evaluation and quality management: offline tests, online monitoring, human review, error taxonomies, and acceptance thresholds.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

acceptance thresholds

  • What it means: show where acceptance thresholds appears in the learner's real workflow and which parts are judgment-heavy versus draftable.
  • What to cover: current workflow, pain points, AI-assisted steps, human review checkpoints, quality standard, and ownership of the final decision.
  • Demonstration: convert one messy real-world input into a structured brief, draft, analysis, checklist, or next action.
  • Evidence of learning: learners produce a reusable template or playbook entry that can be used after the course.

Practice and evidence of learning

  • Learners complete or discuss: Build an evaluation plan for a high-risk and a low-risk use case.
  • Learners produce: Build an evaluation plan for a high-risk and a low-risk use case.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 7. Security and privacy

Module focus: Security and privacy: prompt injection, data leakage, identity, tool permissions, audit trails, secrets, and regulatory exposure. Primary live activity or lab: Run a tabletop exercise for an AI incident.

Topics and coverage

prompt injection

  • What it means: explain how prompt injection changes the interaction between human intent, model behavior, external information, and final output.
  • What to cover: inputs, constraints, examples, output format, grounding, iteration, failure modes, and when a human must intervene.
  • Demonstration: show a weak attempt, a stronger structured attempt, and a reviewed final version with explicit checks.
  • Evidence of learning: learners create a reusable prompt, schema, retrieval note, or workflow pattern and test it on at least two examples.

data leakage

  • What it means: connect data leakage to the data lifecycle from source and structure through analysis, interpretation, and decision-making.
  • What to cover: source reliability, missing or biased data, leakage, assumptions, calculations, and the difference between correlation and decision-ready evidence.
  • Demonstration: walk through a small dataset or example table and mark the checks required before trusting the result.
  • Evidence of learning: learners produce a short analysis note that includes assumptions, limitations, and verification steps.

identity

  • What it means: define identity clearly and connect it to the module focus: Security and privacy: prompt injection, data leakage, identity, tool permissions, audit trails, secrets, and regulatory exposure.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

tool permissions

  • What it means: define tool permissions clearly and connect it to the module focus: Security and privacy: prompt injection, data leakage, identity, tool permissions, audit trails, secrets, and regulatory exposure.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

audit trails

  • What it means: define audit trails clearly and connect it to the module focus: Security and privacy: prompt injection, data leakage, identity, tool permissions, audit trails, secrets, and regulatory exposure.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

secrets

  • What it means: define secrets clearly and connect it to the module focus: Security and privacy: prompt injection, data leakage, identity, tool permissions, audit trails, secrets, and regulatory exposure.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

regulatory exposure

  • What it means: define regulatory exposure clearly and connect it to the module focus: Security and privacy: prompt injection, data leakage, identity, tool permissions, audit trails, secrets, and regulatory exposure.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Run a tabletop exercise for an AI incident.
  • Learners produce: Run a tabletop exercise for an AI incident.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 8. Infrastructure and cost

Module focus: Infrastructure and cost: latency, batching, caching, inference economics, GPU/CPU choices, edge/cloud tradeoffs. Primary live activity or lab: Estimate cost per workflow and identify optimization levers.

Topics and coverage

latency

  • What it means: place latency inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

batching

  • What it means: define batching clearly and connect it to the module focus: Infrastructure and cost: latency, batching, caching, inference economics, GPU/CPU choices, edge/cloud tradeoffs.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

caching

  • What it means: define caching clearly and connect it to the module focus: Infrastructure and cost: latency, batching, caching, inference economics, GPU/CPU choices, edge/cloud tradeoffs.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

inference economics

  • What it means: place inference economics inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

GPU/CPU choices

  • What it means: place GPU/CPU choices inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

edge/cloud tradeoffs

  • What it means: define edge/cloud tradeoffs clearly and connect it to the module focus: Infrastructure and cost: latency, batching, caching, inference economics, GPU/CPU choices, edge/cloud tradeoffs.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Estimate cost per workflow and identify optimization levers.
  • Learners produce: Estimate cost per workflow and identify optimization levers.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 9. Governance and change management

Module focus: Governance and change management: policy, review boards, documentation, procurement, training, and adoption metrics. Primary live activity or lab: Draft a lightweight AI governance playbook.

Topics and coverage

policy

  • What it means in this course: define policy in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what Mid-level professionals with responsibility for platform decisions, vendor choices, AI governance, security, data strategy, or cross-functional delivery must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

review boards

  • What it means: define review boards clearly and connect it to the module focus: Governance and change management: policy, review boards, documentation, procurement, training, and adoption metrics.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

documentation

  • What it means: define documentation clearly and connect it to the module focus: Governance and change management: policy, review boards, documentation, procurement, training, and adoption metrics.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

procurement

  • What it means: define procurement clearly and connect it to the module focus: Governance and change management: policy, review boards, documentation, procurement, training, and adoption metrics.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

training

  • What it means: place training inside the AI system stack so learners know what problem it solves and what tradeoffs it introduces.
  • What to cover: inputs, outputs, system boundaries, evaluation criteria, cost or latency implications, and common failure cases.
  • Demonstration: use a diagram, small code sample, worksheet, or tool trace to make the mechanism visible.
  • Evidence of learning: learners compare two approaches and explain which one they would choose for a realistic constraint.

adoption metrics

  • What it means: define adoption metrics clearly and connect it to the module focus: Governance and change management: policy, review boards, documentation, procurement, training, and adoption metrics.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

Practice and evidence of learning

  • Learners complete or discuss: Draft a lightweight AI governance playbook.
  • Learners produce: Draft a lightweight AI governance playbook.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Module 10. Capstone architecture review

Module focus: Capstone architecture review: enterprise AI roadmap, platform blueprint, operating cadence, and risk controls. Primary live activity or lab: Present a board-ready AI transformation roadmap.

Topics and coverage

enterprise AI roadmap

  • What it means: define enterprise AI roadmap clearly and connect it to the module focus: Capstone architecture review: enterprise AI roadmap, platform blueprint, operating cadence, and risk controls.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

platform blueprint

  • What it means: define platform blueprint clearly and connect it to the module focus: Capstone architecture review: enterprise AI roadmap, platform blueprint, operating cadence, and risk controls.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

operating cadence

  • What it means: define operating cadence clearly and connect it to the module focus: Capstone architecture review: enterprise AI roadmap, platform blueprint, operating cadence, and risk controls.
  • What to cover: the core concept, why it matters, what good usage looks like, and where learners are likely to misunderstand it.
  • Demonstration: give one simple example, one realistic example, and one failure or limitation example.
  • Evidence of learning: learners explain the topic in their own words and apply it to a small artifact or decision.

risk controls

  • What it means in this course: define risk controls in operational terms, not as an abstract principle.
  • What to cover: sensitive data boundaries, affected stakeholders, approval paths, documentation, and what Mid-level professionals with responsibility for platform decisions, vendor choices, AI governance, security, data strategy, or cross-functional delivery must never delegate blindly to AI.
  • Use case: present one acceptable use, one borderline use, and one prohibited use, then ask learners to justify the classification.
  • Evidence of learning: learners add a risk control, review step, or escalation rule to their course project.

Practice and evidence of learning

  • Learners complete or discuss: Present a board-ready AI transformation roadmap.
  • Learners produce: Present a board-ready AI transformation roadmap.
  • Instructor checks for accuracy, practical usefulness, clear assumptions, appropriate human review, and fit with the course audience.
  • Learners revise once after feedback so the module contributes to the final project, portfolio, or playbook.

Minimum coverage before moving on

  • Learners can explain the module vocabulary without relying on tool-generated text.
  • Learners have seen one worked example, one hands-on application, and one limitation or failure case.
  • Learners know what must be verified, what data must be protected, and who remains accountable for the output.

Core labs and builds

  • AI portfolio scoring lab for ROI, data readiness, risk, and implementation complexity.
  • Architecture review lab using sequence diagrams, threat models, and cost models.
  • Governance lab: model cards, risk registers, review gates, approval workflows, and audit logs.
  • Executive narrative lab: translate architecture into business value and operational risk language.

Capstone

  • Create a 12-month AI implementation roadmap for an organization. The roadmap includes capability map, use-case portfolio, architecture diagrams, build/buy decisions, governance model, staffing plan, budget assumptions, risk controls, and measurement plan.

Assessment design

  • Architecture memos reviewed for feasibility and operational completeness.
  • Portfolio prioritization quality and assumptions.
  • Governance artifacts and incident tabletop performance.
  • Final roadmap presentation scored on technical depth, risk realism, and executive clarity.
  • Architecture diagramming tools, cloud cost calculators, model API pricing sheets, evaluation templates, risk register templates, data catalog examples, MLOps/LLMOps references, security checklists.

Instructor notes

  • The goal is not to turn managers into model researchers. The goal is to make them technically literate enough to ask the right questions, prevent expensive pilot traps, and build scalable AI operating systems.

Instructor Build Checklist

  • Prepare one short demo for each module and one learner activity that creates a saved artifact.
  • Prepare examples that match the audience, local context, and likely tools learners can access.
  • Add a verification step to every AI-generated output: factual check, source check, data sensitivity check, and quality review.
  • Keep a running portfolio folder so each module contributes to the final project or learner playbook.
  • Reserve time for reflection on what the learner did, what AI did, what was checked, and what remains uncertain.