About Discarded.AI

Discarded.AI explores how AI safety, governance and ethics actually work inside institutions.

It looks beyond the headlines and frameworks to the real systems, compromises and trade-offs that shape responsible technology in practice.

This publication is written by Alan Robertson, a practitioner in Responsible AI and regulatory risk. His work spans the development of governance frameworks, audit controls and strategy across global financial services.

He writes from the inside, where regulation, assurance and innovation collide, and from the belief that truth still matters, even when systems become intelligent.

Discarded.AI is also for people who want to find a way into the field.

It shares the lessons, resources and perspectives that help new voices gain confidence in AI safety and ethics, whatever their background.

What You’ll Find Here

  • Essays that explore how Responsible AI frameworks succeed or fail inside organisations.

  • Briefing notes on governance, regulation and risk trends shaping AI oversight.

  • Book reviews and reflections on the ideas influencing the field.

  • Signals & Oversights: short commentaries on what’s happening across AI ethics and policy.

Each piece is written to help readers think more clearly about what Responsible AI looks like once principles meet reality.

Why Subscribe

Subscribe to receive new essays directly in your inbox and unlock the full archive of posts.

Subscribers get practical insight into the governance of AI, drawn not from theory but from lived experience inside regulated industries.

For a focused, ad-free reading experience, and to join discussions with others working in this field, download the Substack app and follow Discarded.AI.

Discarded.AI is where we examine what must remain constant.


Subscribe to Discarded.AI

Exploring how AI safety, governance and ethics actually work inside institutions, where the truth still matters and where new voices can find their place in AI safety.
