00: Technical AI Safety: Training: Bluedot.org
March 2026
On Tuesday I start my next BlueDot Impact course.
Before I start, I recommend BlueDot's training to everybody. The courses are facilitated and require only a donation at the end.
This article covers the Technical AI Safety course. Note that BlueDot recommends attending the AGI Strategy course first, which I completed in February 2026.
Each week I will summarise my progress and the reading, and perhaps you too will decide to apply for this course.
The AGI Strategy course focused on the question: “How do we make AI go well?”
On that course, I identified the future I’m working toward and came to understand the key dynamics:
Drivers of AI progress: compute, data, algorithms
Threat pathways: power concentration, gradual disempowerment, catastrophic pandemics, critical infrastructure collapse
Plans for making AI go well: government control over AGI, hand over control to aligned superintelligence, build defences and diffuse AI
Layers of defences to build: prevent dangerous AI actions → constrain dangerous AI capabilities → withstand dangerous AI actions
This course focuses on what AI systems we are building and how they are built.
You will gain the technical foundation to understand what it will actually take to make AI systems safer – and why it’s so challenging.
Throughout the rest of the course, I will:
Diagnose why making AI safe is technically challenging
Evaluate current safety techniques: what works, what doesn’t, where the gaps are
Build my own “kill chain” showing how defences might break
Identify the most promising intervention point for my contribution
Leave with a fundable action plan to start shipping
What this course isn’t
Though they are important for making AI go well, BlueDot covers the following topics in separate courses:
AI policy details: though you’ll gain the technical grounding for effective AI governance
Compute governance: hardware verification and tracking deserve their own deep dive
AI security: e.g. preventing model theft or escape
ML basics: complete BlueDot’s AI foundations modules first if you need them
Let’s start with the question: “How might we build safe AI?”