Early Access

AI is now a board matter. Learn it. Apply it. Deliver oversight.

You’re a Non‑Executive Director: your job is oversight, not operations. AI isn’t another IT upgrade; it changes risk, controls, assurance and ethics. NED AI is an education‑first programme with a built‑in AI assistant: learn what good AI oversight looks like, then use your own secure conversational assistant to accelerate your board‑level capability.

NEDs • Chairs • Company Secretaries
Learn

Education

Board‑level modules: Governance expectations, AI risk & internal control, assurance options, ethics, workforce impact.

Apply

Your Context

Securely add public board artefacts. Your AI assistant draws on your documents and a curated governance library to guide you.

Deliver

Board‑ready

Get help creating charter addenda, AI risk register entries, board calendars and a Responsible AI policy, and access a wealth of NED and boardroom knowledge.

What you’ll learn & what you’ll make

Learn
  • Board duties for AI oversight (challenge, risk, controls, assurance)
  • The AI landscape (models, vendors, deployment options)
  • Policy, privacy, IP, ethics, workforce impact
Make
  • AI oversight charter addendum (Audit/Tech/Board)
  • AI risk register entries aligned to your sector
  • Board calendar insert with reporting cadence
  • Responsible AI policy starter
  • Vendor diligence questionnaire & agent governance checklist

Register your interest

AI topics — concise & board‑relevant

  • What an LLM is; tokens & context windows; strengths & limits
  • Training vs fine‑tuning vs RAG vs inference
  • Safety & governance: hallucinations, data leakage, prompt injection; IP & privacy
  • Who’s who: OpenAI/ChatGPT, Anthropic/Claude, Google/Gemini, Microsoft/Copilot, Meta/Llama, AWS/Bedrock, Cohere/Command, Mistral/Mixtral
  • Deployment: APIs, SaaS copilots, private/self‑hosted models
  • Vendor assessment: assurance, privacy posture, pricing & lock‑in

Classical ML & vision
Regression, trees, clustering; CNNs/Transformers for images; OCR & document AI.
Generation models
Diffusion, GANs, VAEs for image/video/audio; strengths, risks, evidence to ask for.
Reinforcement & RLHF
Where it shows up (recommendations, control) and oversight implications.
Recommenders & forecasting
Personalisation, propensity, demand; bias, explainability, monitoring.
Data & deployment
RAG vs fine‑tuning, vector stores, MLOps, third‑party risk, avoiding lock‑in.

AI agents

What they are
LLM‑driven systems that plan steps, call tools/APIs, and act on data for research, reporting and workflow automation.
Risks
Autonomy creep, prompt injection, data exfiltration, looping/cost runaways, weak audit trails, third‑party tool risk.
Controls
Human‑in‑the‑loop approvals, scoped permissions, budgets/timeouts, sandboxing, immutable logs, evaluation/red‑team results.
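
For directors who like to see the shape of a control, here is a minimal, illustrative sketch (Python, with hypothetical call_model, run_tool and approve helpers; not any vendor’s API) of an agent loop with a step budget, a timeout, a human approval gate and an audit trail:

  # Illustrative only: a toy agent loop showing the control points boards
  # should ask about. call_model(), run_tool() and approve() are hypothetical
  # placeholders, not a real vendor API.
  import time

  MAX_STEPS = 10                                  # budget: cap autonomous steps
  MAX_SECONDS = 60                                # timeout: stop runaway loops
  SENSITIVE_TOOLS = {"send_email", "move_funds"}  # actions needing human sign-off

  def run_agent(task, call_model, run_tool, approve):
      audit_log = []                              # in production: an append-only store
      start = time.time()
      for step in range(MAX_STEPS):
          if time.time() - start > MAX_SECONDS:
              audit_log.append(("timeout", step))
              break
          action = call_model(task, audit_log)    # model proposes the next action
          if action["tool"] in SENSITIVE_TOOLS and not approve(action):
              audit_log.append(("blocked", step, action))  # human-in-the-loop veto
              continue
          result = run_tool(action)               # scoped, sandboxed execution
          audit_log.append(("step", step, action, result))
          if action.get("final"):                 # model signals completion
              return result, audit_log
      return None, audit_log

The code is not the point; the questions it implies are: who sets the budget, which tools require sign‑off, and where the log lives.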

AGI & superintelligence

Plain definitions
Clear, non‑hyped explanations of AGI and Superintelligence — what’s consensus, what’s speculative.
Board stance
Reflect frontier risk/opportunity in risk appetite, disclosures, and scenario planning.

Generative media

Hands‑on demos
Short, optional exercises: text→image, text→video, text→speech. Fun, memorable, practical.
Governance caveats
Copyright, consent, bias, safety, disclosure — what policies must cover.

Privacy & CPD

Privacy by design

Your uploaded context is private to you and used only to personalise outputs. A detailed data policy will be published before launch.

CPD alignment

Designed for UK NEDs. We’re pursuing CPD recognition with leading bodies; early‑access registrants get updates first.

FAQ

Is this operational training?

No. It’s board‑level oversight. Learn what to ask for, what evidence to expect, and which artefacts to table.

Is my data private?

Yes. Your uploads sit in a private store to personalise your outputs. Controls and policies will be documented before launch.

When does it launch?

Soon. Join the list for early‑bird places and pricing.

Who is this for?

Current and aspiring NEDs, Chairs, and Company Secretaries who want education plus a practical tool suite.