The AI Savior Complex
In my last piece, I wrote about who gets to build AI. This one is about who it’s built to save, and who it silences along the way. It’s the next chapter in Global Conversations on AI.
The Fantasy
I grew up wanting to leave.
Not because I hated home; I never detached from it. But I was taught, quietly and persistently, that leaving was how you succeeded.
That you gather tools elsewhere, then return with solutions.
That distance makes you sharper.
That the West made you sharper.
This was the dream:
leave, learn, help.
Go far so you can come back and SAVE.
It was never really framed as saviorism. It was framed as responsibility. But responsibility dressed as salvation.
I went to a French lycée in Marrakesh, then an elite business school in Paris. I was trained to think in systems: strategy, scale, delivery. I learned how to fix things, to optimize, to measure.
What I didn’t learn was how to ask: Who decided they were broken?
And in systems like the ones that shaped me (France, Germany, the corridors of Fortune 500 companies), we spent far more time building answers than understanding questions. We learned to intervene before we learned to listen.
When things go wrong, we don’t ask what made the conditions.
We ask how to fix them.
How to deploy.
How to solve.
It’s a logic I internalized without noticing. I admit it.
A logic rooted not in North Africa but in the colonial legacies I didn’t yet have the language to name.
And I carried that logic into my desire to do good.
And that’s why my dream growing up was to work in International Development Aid. It really was!
That is, until you reach seniority in the corporate world and move from execution to strategy. Until one day you’re no longer just doing the work; you want to change things. That’s when you hit the wall.
First, you’re too different from the dominant ideology of the upper corporate sphere, so you’re not really heard.
Then you start to see it clearly: the corporate world mirrors the colonial power structure. And the tools you’re helping build? They’re not neutral; they’re reinforcing the very systems you thought you were escaping.
And somewhere in the middle of that, I began to wonder:
What made me believe we needed saving in the first place?
And who taught me that I was the one to do it?
It didn’t feel like progress. It felt like performance. And if we had really been saving anyone all this time, shouldn’t things have gotten better?
I didn’t know how to make sense of the contradiction. Until I started reading people who had already asked these questions before me.
Freire wrote that to speak a true word is to transform the world.
I started to wonder if anything we were building in AI was capable of that.
Or if we were just learning new ways to speak false ones, faster.
Even now, I still ask myself:
Is the Ethical AI Alliance I am building part of this too?
Is it just another way of trying to save people?
Another project in a long line of well-meaning interventions? But when I sit with that question, I remember the foundation of it.
It wasn’t built to fix people. It wasn’t built to deliver answers.
It was built to listen. To ask better questions.
To make space for people across regions, languages, and disciplines to come together, not under a single framework, but in shared resistance.
We’re not here to save anyone. We’re here to interrupt the systems that believe they can.
The Myth
AI’s promise of help comes from the same colonial logic that shaped the world we live in today.
The white savior complex, which has existed for centuries, is just as alive in tech as it is in global development work.
The idea that the West holds all the answers, that technology and solutions flow from the North to the South, is embedded in how AI is designed, implemented, and sold.
But there was something unsettling about that hope.
In wanting to help, I realized I was still buying into something I hadn’t fully questioned: the logic of optimization. It’s everywhere in tech, in business, in medicine: the idea that things can always be better, faster, more efficient.
But who decides what “better” means?
And who gets left out when optimization is the goal?
This is where I started to see the connection between my desire to help, the systems that trained me, and something much older: the logic of eugenics.
Eugenics, at its core, is the idea that humanity can be improved through selective processes, deciding who gets to thrive, who is worthy of survival, and who is not.
It’s like optimizing humanity, deciding who is “fit” and who isn’t.
And when I looked at the way I was trained to think about helping, about fixing problems through data, scaling solutions, and optimizing systems, I began to see how deeply tied these ideas were.
The logic that says we should improve the world by optimizing the lives of some people is the same logic at the root of eugenics:
deciding who is valuable and who isn’t.
In AI, this logic of eugenics shows up in the way we optimize for certain outcomes, for certain people, for certain problems, deciding along the way who matters, who gets access, who is represented…and who’s not.
Timnit Gebru and Ruha Benjamin helped me see a layer beneath:
When AI promises to optimize, we need to ask, “Optimize for whom?” AI doesn’t just improve; it sorts, it categorizes, and it excludes. It decides who gets help and who doesn’t. And the systems behind these decisions are often rooted in power, privilege, and inequality.
In the Global South, AI is often presented as the great fixer, the answer to problems that have been historically created by colonial structures. But who owns AI? Who decides how it is used? Who profits from it? And who is excluded from the conversation?
Abeba Birhane and Kate Crawford have written about this dynamic: AI is often sold as the solution to problems, yet the conversation is rarely shaped by the people who need those solutions most. These AI systems don’t emerge from the needs of the communities they’re meant to serve; they emerge from the needs of the global tech giants that control them. I know; I worked for one.
» So what we think of as “solutions” are often just tools to reinforce power.
The Refusal
I’ve always believed in the potential of technology.
In its ability:
To solve.
To fix.
To streamline and scale.
In medicine, in society, in systems that could finally outsmart the chaos of the world we live in.
When my father (bless him) was diagnosed with colon cancer, my belief in AI’s promise grew stronger. The scans. The algorithms that predicted the best course of treatment. The hope that technology could extend his life.
I felt it; that raw, personal hope for my family, for the future, for time.
(My dad is doing okay now hamdulilah.)
Yes, I was full of hope for all patients. But at the same time, I could feel the discomfort settling in. Even as I hoped, I saw the limits.
I saw the silence that surrounds this technology: who built it, who controls it, and who benefits from it.
Again, Timnit Gebru, Ruha Benjamin, Abeba Birhane, and Kate Crawford have all been instrumental in giving us a language to understand that.
Timnit Gebru spoke about how much of AI’s infrastructure is built on extractive industries. It’s built on data scraped from marginalized communities, on mining rare minerals that hurt the very lands AI claims to save. She challenged the narrative that AI can be “neutral” or “objective,” showing how deeply embedded it is in inequality.
Ruha Benjamin brought forth the notion of the New Jim Code: that what we call “bias” is often a function of deeper structural violence being codified into systems. AI isn’t just optimizing for efficiency; it’s optimizing for control. And the deeper we go into these systems, the more they erase the context of who gets harmed.
Abeba Birhane warned about the illusion of objectivity; the idea that AI systems, once trained, are “neutral,” that they make decisions based purely on data, untouched by human biases. But as Birhane and others have pointed out, data itself is already a reflection of power, privilege, and exclusion. AI doesn’t just reflect the world, it often reinforces it, amplifying old inequalities.
Kate Crawford wrote about AI as an extractive industry. She traced the human and environmental toll that AI development demands—from the rare earth minerals that power the machines to the labor of people whose data fuels algorithms. These aren’t side effects of technology. These are built-in costs of a system that prioritizes “innovation” over care.
Together, these thinkers confirmed what I had seen from the other side, sitting in offices of the richest tech builders of the world:
» AI is not a savior.
It doesn’t come to us without a history.
And that history is one of power and exploitation.
And the problem with the AI savior fantasy isn’t just who it tries to save.
It’s who it silences in the process.
Sometimes, those choices are violent.
In India, millions of people were pushed out of welfare systems when their biometric ID—Aadhaar—failed to match. Old women, rural workers, children. If your fingerprint didn’t scan or your name was spelled wrong in the database, you didn’t eat. These weren’t bugs. They were design decisions that prioritized efficiency over dignity.
In Palestine, facial recognition is used to track and control people in the West Bank. The Israeli military built an app—Blue Wolf—that turns surveillance into a game. Soldiers take photos of Palestinians and upload them into a system that flags who can move and who can’t. No consent. No transparency. No appeal.
This is what happens when “safety” becomes a tool of power.
And prediction replaces permission.
Even ‘well-meaning’ AI projects aren’t safe from this logic.
I once heard Shikoh Gitau, a Kenyan tech leader I met recently, say of AI in Africa:
“We want to create the best use cases for actual human needs.”
That sentence stayed with me. Because it holds the difference between solutionism and solidarity. Between building with people and building over them.
Too many AI tools are built for imaginary users: efficient, logical, and ready to be saved. But real people have:
Context. Memory. Contradictions.
And sometimes, the most human thing we can do is say:
No. Not like this. Not for me.
» And so, we have a choice.
We can continue down the road of eugenics-driven AI, where some people are deemed worthy of help while others are left behind. We can accept the current AI trend: AI that promises to fix everything, from inequality to healthcare, with algorithms that optimize for efficiency and control. But we know for a fact that it is designed to optimize the lives of a few while leaving many behind.
OR
The other option is to reject the savior mentality.
We don’t need to keep optimizing people.
Instead, we can reclaim AI as a tool that serves, not saves.
To move away from the savior mentality is to reject the idea that technology can fix people or solve everything with one-size-fits-all solutions.
Maybe instead, we could create space for people to thrive on their own terms:
A tool that serves. That supports communities. That empowers them to solve their own problems.
And maybe the problem isn’t that we want to help.
Maybe it’s that we don’t know how to help…without control.
AI doesn’t have to be a savior.
It can be a companion. A translator. A witness. A tool.
Nothing more, nothing less.
What if it were designed by midwives, not engineers?
What would AI look like if it were co-designed by farmers in southern Tunisia?
What if it grew from oral histories, not just datasets?
What if it understood the language of the land, not just the logic of machines?
What if it were trained on refusal, on care, on uncertainty, and not just efficiency?
What if it were built in community halls, not corner offices?
What if it asked “Who is this for?” before asking “How do we scale it?”
What if AI could hold contradiction, instead of resolving it?
Not every problem needs solving.
Not every community needs saving.
Sometimes, what we need most is to protect the space for people to speak for themselves.
The shift is subtle, but it’s important:
We don’t need AI to fix us.
We need it to help us flourish on our own terms.
No top-down solutions.
The choice, again: continue the trend, or reclaim AI.
Which path will we take?
And I, for one, no longer want to save the world.
I want to unlearn the systems that made me believe I could.