I have been thinking about Maslow’s hierarchy of needs lately. Not the tidy diagram we all remember from business courses, but what it really means when life starts to come apart.
Food. Shelter. Safety. Belonging. Esteem. Then, right at the top, self-actualisation. The place where we find purpose, creativity, and morality. The story goes that you cannot reach that summit until the lower layers are secure.
But I am not sure that is true.
Ethics does not wait for comfort. It arrives in the middle of hunger, fear, and loss. It asks questions we cannot postpone until we are ready.
Most of my professional life has been spent in places where the pyramid feels complete. Boardrooms. Policy papers. Frameworks for Responsible AI. We speak confidently about fairness, accountability, and transparency. We talk about moral principles as if they are design features that can be built, tested, and deployed.
It is all well-intentioned. It is also very clean.
Those conversations begin from safety. They assume people are well fed, warm, and able to think clearly about right and wrong. They are the reflections of people who can afford reflection. Fairness, accountability, transparency. The language of those who are not starving.
Then I imagine the opposite. The job gone. The power off. The security gone. And suddenly the ethical questions are no longer about frameworks. They are about survival.
Would I steal food if my family were hungry?
Would I lie to get medicine for my child?
Would I sell something sacred to keep us alive another week?
I would like to believe I would hold my values. But belief and experience are not the same.
I have seen what desperation does to people. It bends principles until they become unrecognisable. It does not make them immoral. It simply reminds us that morality built only on comfort is brittle.
This is what troubles me about AI ethics.
We are designing systems that make decisions about the lives of people, most of whom live closer to the base of the pyramid than the top. Welfare recipients. Gig workers. Refugees. People who cannot wait for self-actualisation before they act.
Yet those designing the systems often live far above that line. We debate fairness in conference rooms while the consequences of our code play out in food banks and visa queues. We apply the ethics of comfort to the realities of survival.
A mother who lies on a benefits form to feed her children is not less ethical than the engineer who builds the fraud detection model. She is operating within a different layer of need. But the system will never understand that. It will flag her as an anomaly. A deviation. A risk.
Artificial intelligence is literal. It does not understand context or contradiction. It cannot distinguish desperation from deceit. And so it enforces the rules of those who live comfortably upon those who do not.
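To see how literal that can be, here is a deliberately minimal sketch of the kind of rule such a system reduces to. Everything in it is hypothetical: the field names, the threshold, the function itself. It stands in for far more elaborate models, but the point survives the simplification. Circumstance is not an input, so circumstance cannot affect the output.

```python
# A hypothetical, deliberately minimal anomaly flag.
# It compares declared and observed income; nothing else exists for it.

def flag_anomaly(income_declared: float, income_observed: float,
                 tolerance: float = 0.10) -> bool:
    """Flag any claim where observed income deviates from the declared
    figure by more than the tolerance. There is no parameter for hunger,
    fear, or a sick child: context is not an input, so context cannot
    change the output."""
    if income_declared == 0:
        return income_observed > 0
    deviation = abs(income_observed - income_declared) / income_declared
    return deviation > tolerance

# A mother under-declaring to feed her children and a practised
# fraudster can produce identical inputs, and so identical outputs.
print(flag_anomaly(income_declared=1000.0, income_observed=1400.0))  # True
```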
That is the moral distance we rarely admit.
I have helped build Responsible AI frameworks. I believe in them. But sometimes they feel like moral paperwork. We check the boxes. We measure bias. We add human oversight. And yet somewhere, someone is still made invisible by the data.
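For readers who have never watched one of those boxes get ticked, a bias "measurement" is often no more than the sketch below. It is hypothetical, a stand-in I am inventing here rather than any particular toolkit: the simple difference in flag rates between two groups, the sort of number that lets a model pass an audit.

```python
# A hypothetical bias check of the box-ticking kind: the demographic
# parity difference between two groups' flag rates.

def parity_difference(flags_a: list[bool], flags_b: list[bool]) -> float:
    """Difference in the share of people flagged in each group.
    A result near zero passes the audit; it says nothing about why
    any individual was flagged, or what the flag cost them."""
    rate_a = sum(flags_a) / len(flags_a)
    rate_b = sum(flags_b) / len(flags_b)
    return rate_a - rate_b

# Hypothetical audit data: the metric can sit at exactly zero while
# every flagged person in both groups was simply desperate.
print(parity_difference([True, False, False, True],
                        [True, False, True, False]))  # 0.0
```

The number can be perfectly balanced and the checklist satisfied, and still no one has asked what the flag meant to the person behind it.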
The deeper truth is that ethics cannot be fully externalised. It is not a checklist or a compliance score. It is a question that lives inside us. When we turn it into procedure, we risk mistaking process for conscience.
Maybe ethics is not about what we stand for. Maybe it is about who we are when standing becomes impossible.
That, I think, is the real test of Responsible AI. Not whether the system behaves ethically, but whether the people building it remember what it feels like to be powerless. Ethics should not depend on comfort. It should depend on conscience.
Perhaps our job is not to design perfect morality, but to build systems that still recognise humanity when perfection breaks down. To accept that fairness looks different when someone is hungry. To understand that dignity matters most when people have least.
Trust is not engineered. It is earned by holding dignity in place even when outcomes are uncertain. That is what ethics looks like when it stops being theory.
I often think of Viktor Frankl, who found meaning in a concentration camp when every other freedom was stripped away. Meaning did not come from safety. It came from the refusal to surrender the idea of humanity.
That, I think, is where ethics begins. Not at the summit of Maslow’s pyramid, but in the cracks where survival meets conscience.
Maybe that is what we have forgotten in our work on AI. That the people our systems touch do not live inside models. They live inside uncertainty. And if our ethics cannot reach them there, it does not matter how elegant our principles are.
Ethics is not a luxury of self-actualisation. It is the foundation that holds us together when everything else falls apart.


