For the first time, we're sharing our cognitive workspace with artificial minds that can write, "reason" and create alongside us. Yet most of us are still approaching this partnership with mental models borrowed from a different era, when knowledge was something humans produced alone, in quiet libraries or solitary walks.
What if that's precisely the wrong approach?
I've been exploring a different way of thinking about knowledge in our AI-mediated world, one that draws on wisdom from outside the Western mainstream: Indigenous knowledge systems, Ubuntu philosophy and even post-humanist thought. The result challenges some of our most basic assumptions about how we know what we know.
The Myth of the Solitary Knower
Western thought has long celebrated the image of the lone genius making breakthroughs in isolation. Think Newton under his apple tree or Einstein in his study. But this myth was always just that: a myth. Knowledge has always been relational, emerging from conversations, collaborations and connections. What's changing now is that some of those connections are with artificial minds.
The question isn't whether to use AI tools. The question is how to use them whilst preserving what makes human cognition valuable and unique. This requires what philosophers call epistemic humility: recognising that all knowledge, whether human or machine-generated, is partial and situated. No single actor holds the complete picture.
Learning from Other Ways of Knowing
Here's where things get interesting. Indigenous knowledge systems have always understood something that Western philosophy is only beginning to grasp: knowledge isn't just about facts stored in individual brains. It's about relationships, reciprocity and responsibility to community.
Ubuntu philosophy from Southern Africa offers another crucial insight: "I am because we are." Our knowledge and identity emerge through our connections with others. In an AI context, this suggests we should think less about humans versus machines and more about the quality of the relationships we're building.
Post-humanist thinkers push us even further, asking us to consider the "more than human" actors in our knowledge systems. When an AI draws on vast datasets to answer our questions, it's channelling not just algorithms but the accumulated insights of millions of human contributions, filtered through silicon and code. The resulting knowledge belongs to an assemblage that includes humans, machines, data and the infrastructures that connect them.
Practical Wisdom for the AI Age
So what does this mean for how we actually work and learn with AI? First, it means being intentional about which cognitive tasks we keep for ourselves. Before consulting an AI, try articulating your initial thoughts. Write them down. This isn't just about preserving skills; it's about maintaining the muscle memory of independent thought.
Second, it means insisting on transparency in AI outputs. Every output should come with some indication of its confidence level and sources. We need to see the seams, the uncertainty, the places where the machine is guessing rather than knowing. This isn't a weakness of AI systems; it's a feature that keeps us epistemically honest. (I sketch one possible shape for this below.)
Third, and perhaps most importantly, it means preserving our right to challenge and contest what AI tells us. The most dangerous moment in human-AI interaction isn't when the machine is wrong; it's when we stop questioning whether it might be.
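For readers who build or evaluate these systems, here is one possible shape for that second point: a minimal Python sketch of an answer object that carries its own confidence, sources and caveats. Everything in it (the `AnnotatedAnswer` name, the fields, the rendering) is a hypothetical illustration of "showing the seams", not any real system's API.

```python
from dataclasses import dataclass, field

@dataclass
class AnnotatedAnswer:
    """Hypothetical container for an AI answer that carries its own epistemic metadata."""
    text: str                                         # the answer itself
    confidence: float                                 # 0.0 (pure guess) to 1.0 (well-supported)
    sources: list[str] = field(default_factory=list)  # citations the answer rests on
    caveats: list[str] = field(default_factory=list)  # known gaps: where the machine is guessing

    def show_seams(self) -> str:
        """Render the answer with its uncertainty made visible rather than hidden."""
        source_note = "; ".join(self.sources) if self.sources else "no sources cited"
        caveat_note = "; ".join(self.caveats) if self.caveats else "none declared"
        return (f"[confidence: {self.confidence:.0%}] {self.text}\n"
                f"  sources: {source_note}\n"
                f"  caveats: {caveat_note}")

# Example: an answer that is honest about being partly a guess.
answer = AnnotatedAnswer(
    text="Newton likely began his work on gravitation around 1665-66.",
    confidence=0.7,
    sources=["Westfall, Never at Rest (1980)"],
    caveats=["dating relies on Newton's own much later recollections"],
)
print(answer.show_seams())
```

The design choice worth noting is that uncertainty is a first-class field rather than an afterthought: a reader can contest the answer because the grounds for it are on display.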
The Justice Dimension
There's another layer here that often gets overlooked in Silicon Valley's enthusiasm for AI tools. Whose knowledge traditions are being centred? Whose are being sidelined? When we train AI systems predominantly on Western texts and then deploy them globally, we risk a kind of epistemic colonialism, imposing one way of knowing on everyone else.
This is why any serious approach to AI-mediated knowledge must grapple with justice. It's not enough to make AI tools efficient or even accurate. We need to ask whether they're enhancing human flourishing and ecological care, whether they're amplifying marginalised voices or drowning them out.
Building New Habits of Mind
The path forward isn't about choosing between human and artificial intelligence. It's about developing new habits of mind that allow us to work with AI whilst maintaining our cognitive sovereignty. This might mean scheduling regular "AI-free" periods to test whether our skills are atrophying. It might mean keeping epistemic journals that track how AI influences our thinking over time.
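In the same illustrative spirit, here is a minimal sketch of what one entry in such an epistemic journal might record. The schema is my own assumption, intended only to show the habit in miniature: write your view down first, consult the AI, then note whether and how your view moved.

```python
import json
from datetime import datetime, timezone

def journal_entry(question: str, view_before: str,
                  ai_response: str, view_after: str) -> dict:
    """One hypothetical epistemic-journal record: my view before and after consulting an AI."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "view_before_ai": view_before,   # written down first, before consulting the AI
        "ai_response": ai_response,
        "view_after_ai": view_after,
        "view_changed": view_before.strip() != view_after.strip(),
    }

# Example entry, echoing the essay's lone-genius discussion.
entry = journal_entry(
    question="Is the lone-genius model of scientific discovery accurate?",
    view_before="Mostly yes; breakthroughs come from exceptional individuals.",
    ai_response="Historians stress networks: correspondence, collaborators, shared instruments.",
    view_after="Mostly no; individuals matter, but within dense collaborative networks.",
)
print(json.dumps(entry, indent=2))
```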
For educators, it means teaching students not just to use AI tools but to understand those tools' biases and the continuing value of their own thinking. For researchers, it means being transparent about when and how AI shaped their work. For all of us, it means approaching these tools with a combination of openness and healthy scepticism.
A Philosophy with Teeth
What we need isn't just abstract theorising about AI and knowledge. We need what one might call "epistemic humility with teeth": practical frameworks that help us navigate this new landscape whilst preserving human agency and promoting justice.
This means being willing to slow down when Silicon Valley wants us to speed up. It means insisting on our right to understand, challenge and sometimes refuse what AI systems tell us. It means remembering that efficiency isn't the only value worth pursuing.
Most of all, it means recognising that we're not just users of AI tools. We're participants in a vast experiment in hybrid cognition, one that will shape how future generations understand truth, knowledge and what it means to think. We owe it to them to get this right.
The question isn't whether AI will change how we know. It's whether we'll shape that change consciously, drawing on the best of human wisdom from all traditions, or whether we'll sleepwalk into a future where we've outsourced our sense-making to machines we no longer understand. The choice, for now at least, remains ours.
I'm thinking the same way you are, Carlos, and it's so lovely to read your post here today. The more I work with AI, the more I realize how important it is to be able to question my thinking, not just sometimes but all the time. I'm using AI to help me with that, and I think I'm actually growing in a way that I can't with the people I know, because mostly they simply argue for me to adopt their point of view, or they agree with me. That's a waste of time. I love your paragraph about epistemic humility. That's something AI is going to be really good for: helping humans confront our cognitive distortions.
Some good thoughts here, but also some room to draw more richly on the perspectives you mention. I'll speak to posthumanism, since that's what I know most about, but I believe this is also relevant to many Indigenous perspectives: we don't keep cognitive tasks for ourselves. Rather, we distribute, assemble and configure thinking differently, merging with the things around us in collective forms of thinking, even when it looks like we are doing something on our own. It's all collective and has always been so. We have always thought with the land, with animals, with other humans, with material elements, and with technologies that are material, cognitive and social. I think that recognising how AI is already part of this complex and evolving way of being and thinking will be important to making sense of it.