Your AI Ethics Are a Luxury Belief
The woman in Dhaka knows something the ethics committee in ‘Oxbridge’ doesn't. She knows that when female doctors are vanishingly rare in Bangladesh, when the alternative is explaining intimate medical concerns to a male doctor who makes her deeply uncomfortable, when her culture and circumstances offer no good options for private healthcare, you don't have the luxury of refusing a tool that can interpret symptoms, suggest questions to ask and provide basic medical information, even if it means uploading personal photographs to servers owned by an American corporation. She knows that purity is a position you can only afford when you already have everything you need.
This is the conversation we're not having about artificial intelligence. In the actual world where people work and create and struggle, something more interesting is happening. People are quietly using these tools to do things they couldn't do yesterday, while simultaneously knowing that something was taken without permission to make it possible.
The contradiction feels unbearable only if you've never lived with contradiction before. But most of the world runs on such tensions. The smartphone in your pocket contains minerals that passed through conflict zones. The coffee you drank this morning connected you to a supply chain you'll never fully map. We don't resolve these tensions by declaring ourselves pure or impure. We muddle through, pushing for better systems while using what we have.
What makes the AI purity discourse particularly suspect is how neatly it maps onto existing privilege. The established illustrator with a full client roster can afford to declare AI art soulless and contaminated. The immigrant entrepreneur trying to create marketing materials for her first business cannot. The tenured professor with research funding can denounce automated literature reviews as academic dishonesty. The independent scholar in Lagos trying to synthesise papers cannot. When your refusal to engage becomes a signal of virtue, you might ask yourself who's actually served by your abstention.
The real harm happened upstream, in the assembly of the training data, where everything written and drawn and photographed was swept into vast mathematical machinery without consent or compensation. The Kenyan content moderators suffering psychological damage for poverty wages are real. The artists whose styles can now be mimicked at the push of a button experienced a real loss. But here's what purity politics cannot hold: acknowledging these truths doesn't lead inevitably to the conclusion that a nurse in Liverpool should feel guilty for using ChatGPT to translate discharge instructions for a worried Somali family.
The legal landscape tells us something important about how societies actually navigate these tensions. Japan permits AI training on copyrighted material. The EU allows it unless creators opt out. The US is fighting it out in courtrooms, case by case. Australia is still debating. None of these approaches are pure. All of them recognise that absolute positions serve no one. The word "theft" that gets thrown around is a moral metaphor, not a legal reality. The actual term is infringement, and its boundaries shift depending on where you stand.
The workforce that keeps these systems from descending into hallucinogenic chaos remains largely invisible. They're in Nairobi and Manila and Hyderabad, filtering the worst of human expression for wages that wouldn't buy lunch in San Francisco. A politics that actually cared about justice would start there, with contracts that specify psychological support, exposure limits, and real wages. Instead, we get performance: the announcement of boycotts that change nothing, the declaration of institutional policies that everyone knows will be ignored, the social media campaigns that let people feel clean.
History doesn't move in straight lines of moral clarity. The camera was going to destroy painting until it freed painting from the burden of mere representation. The printing press was going to corrupt human memory until it became memory's greatest amplifier. The web was going to democratise everything until it concentrated power in new hands. Technology doesn't determine outcomes. The rules we write and the institutions we build do.
The productivity gains from these tools flow disproportionately to novices and those with fewer resources. This isn't rhetoric; it's what the research consistently shows. The person who gains most from automated code completion isn't the senior developer but the bootcamp graduate. The person who gains most from grammar checking isn't the novelist but the migrant professional writing in their third language. If we care about equity, we should care about this. If we only care about performing our politics, we can keep pretending these gains don't matter.
The settlement we need isn't mysterious. Training should require licences or meaningful opt-out mechanisms. And yes, some of us are already advising exactly this as AI addenda land on our desks. Models should publish their sources and failure modes. File formats should carry provenance so attribution travels with the work. Creators should have collective bargaining power to negotiate terms. Public institutions should provide compute resources so capability isn't hostage to private platforms. Content moderators should have unions and trauma support and living wages. None of this requires moral sainthood. It requires the boring work of governance.
But governance doesn't trend on social media. Denunciation does. So we get the discourse we deserve: pure positions that change nothing while the actual architecture of power gets built by default. Every hour spent cataloguing the moral failures of someone using Midjourney for their newsletter is an hour not spent on procurement standards, on labour organising, on competition policy. The companies building these systems must be thrilled by how thoroughly we've been distracted by purity theatre.
The culture we keep won't be determined by our pronouncements but by our practices. Learn when speed is a trap. Learn to recognise when the model is fluent and wrong. Learn to put the tool down when presence matters more than productivity. These are skills, not moral positions. They're how we maintain human judgement in a world of automated suggestion.
The fire has already been stolen. The question now is whether we'll spend our time condemning everyone who uses the light, or whether we'll do the harder work of ensuring the fire gets distributed fairly and the people who tend it get paid. The woman in Dhaka has already decided. She'll take the privacy risk over the alternative of suffering in silence. The real question is whether the rest of us will catch up to her pragmatism, or whether we'll keep performing our virtue while the future gets written without us.



I've been watching the AI ethics debate from inside the tech world for two years now, and you've named something that's been bothering me that I couldn't articulate.
The purity performance is exhausting. While we debate whether using AI makes us complicit, people are quietly solving real problems, like the small business owner generating product descriptions she couldn't afford to commission. The ethics committee gets to feel clean while the woman in Dhaka gets medical information that might save her life.
Your point about where the real work needs to happen is the one that matters: procurement standards, labor organizing, competition policy. That's where change actually lives.
But it's harder than posting about boycotts, so we get the discourse we deserve instead of the governance we need.
Hi Carlo, as someone who grew up in Dhaka, your first paragraph threw me off. I appreciate the essence of your argument, but a teacher in Dhaka is instructed to keep all the worksheets in one language, either Bengali or English. Even though people speak many different mother tongues, the country and the schools don't recognize or accommodate them. If you want to use an example from Bangladesh of people living with contradiction, here are some better ones:
1. A woman in Dhaka sharing pictures of intimate parts of her body with ChatGPT to get answers about a physical problem, because female doctors are still very rare in Bangladesh and going to a male doctor is uncomfortable.
2. A user translating English medical reports into Bengali, because her English is not good and she has no access to good translators.
3. An older adult asking ChatGPT to remember his laptop password and credit card numbers.