The Questions Nobody Is Funding
There is a kind of silence that sounds like expertise. It fills policy documents and strategic frameworks, conference keynotes and ministerial briefings. It speaks fluently about AI literacy and skills provision and workforce readiness. It measures and benchmarks and certifies. And in all its confident articulation, it refuses to ask the questions that matter:
What is a human being for? What do we owe the future? What remains worth the difficulty of learning?
These are not questions you will find in the OECD’s AI Literacy Framework. They are not addressed in the World Economic Forum’s Education 4.0 agenda. They do not appear in the competency matrices cascading through national education systems. Instead, we get learning objectives and assessment criteria. Employability outcomes and digital capabilities. The language of preparation, as if the future were already decided and our job were simply to ready people for it.
I have been reading these documents for years now, and what strikes me is how hard they work to avoid the questions that would make them genuinely useful. The energy that goes into specifying competencies could be directed toward asking what competencies are for. The precision applied to measuring outcomes could be applied to examining whether those outcomes matter. But that would require admitting that the purpose of education is genuinely contested, that reasonable people might disagree about what human flourishing looks like, that the future is not a destination but a choice.
Here is what I have come to understand: the silence is not new. AI did not create it. AI exposed it.
For decades, education systems have been quietly answering the big questions while pretending they were not questions at all. What is education for? Producing workers. What should students become? Employable. What counts as success? Economic contribution. These answers were embedded in every framework, encoded in every benchmark, assumed in every competency matrix. They did not need to be stated because they had become invisible, the way water is invisible to fish.
Then generative AI arrived and started producing exactly what the frameworks actually measured. Tidy outputs. Rubric compliance. The performance of thinking rather than the substance of it. And suddenly the invisible became visible. The systems designed to assess learning were revealed to be measuring something else entirely: pattern-matching, genre conventions, the ability to produce what looks like thought without necessarily requiring any.
This is the recursive mirror. AI learned from our assessments what we actually valued, and now it reflects that back to us with uncomfortable clarity. We said we wanted critical thinking; we measured compliance. We said we wanted originality; we rewarded conformity to academic conventions. We said we wanted depth; we assessed surface features. AI does not lie about what it learned from us. It simply shows us what we taught it to optimise for.
You cannot solve a problem of purpose with tools of measurement. The crisis is not that AI can pass our assessments. The crisis is what that passing reveals about what our assessments were assessing.
I work in a university library. I write about what is happening to knowledge and learning and the institutions that hold them. And from where I sit, the exposure is a gift, even if it does not feel like one. We are being forced to confront questions we have been avoiding for generations. What is education for when information is freely available? What remains valuable about the difficulty of learning when difficulty can be bypassed? What do we owe students beyond preparation for jobs that may not exist?
The philosopher Jean-Luc Marion (I encountered him through a wonderful LinkedIn post today) distinguishes between certainty and assurance. Certainty wants to get the world under control, to reduce it to manageable categories, to secure it against surprise. Assurance trusts that meaning can arrive without domination, through relationship rather than mastery. The policy frameworks are certainty machines. They want to specify outcomes and measure progress and eliminate the anxiety of not knowing. But the big questions cannot be answered that way. They require staying open to what we do not yet understand.
This is why libraries interest me as sites for thinking about these questions. A library holds more than any person could master. It contains contradictions, contested claims, knowledge that has been superseded alongside knowledge that will prove prescient. A library is not a machine for producing certainty. It is a space where meaning accumulates without resolution, where the reader encounters not answers but questions that have been asked across centuries.
The threat now is not that machines will replace human knowing. The threat is that we will respond to their arrival by doubling down on certainty, by trying to specify exactly what humans should know and measure whether they know it, by treating learning as preparation for economic utility rather than an encounter that changes who we are.
I keep returning to the gap between what the discourse promises and what it delivers. We are told that AI literacy will prepare learners to engage deeply with these systems. But engagement is not the same as questioning. We are told that critical thinking is essential. But the critical thinking being measured is a skill on a rubric, not a disposition toward the world. We are told that human oversight will keep AI aligned with human values. But no one is asking which humans, which values, and who decides.
The students moving through education systems right now know something is wrong. The cynicism I encounter is not laziness. It is the reasonable response to being asked to perform rather than think, to produce outputs that satisfy systems rather than engage questions that matter. They have been told their entire lives that education will prepare them for the future, and they can see that the preparation has little to do with the futures they will actually inhabit. This is not a failure of motivation. It is a clear-eyed assessment of a broken contract.
What would it mean to take the big questions seriously? Not as philosophy to be considered after the real business of skills provision has been accomplished, but as the foundation on which everything else should rest. It would mean treating the question of what education is for as genuinely open. It would mean recognising that every framework, every benchmark, every competency matrix encodes values, and insisting that those values be examined rather than assumed. It would mean creating spaces where questions can be held without premature resolution.
This is the work I find myself drawn to. Not providing answers to the big questions, but developing approaches that keep them alive. Not certainty about what AI means for libraries and learning and human flourishing, but the slower work of remaining present to what we do not yet understand.
I suspect the questions nobody is funding are the questions that matter most. The frameworks cannot accommodate them. The benchmarks cannot measure them. They are inefficient, unscalable, resistant to the logic of optimisation. And they are the difference between education that produces workers and education that cultivates human beings capable of asking what their work should be for.
The silence sounds like expertise. But expertise that cannot ask what it is for is not wisdom. It is avoidance dressed in competence.
What is a human being for? What do we owe the future? What remains worth the difficulty of learning?
We do not need to answer these questions today. We may never answer them finally. But we need to be asking them, and we need the asking to shape what we build. Because systems are being constructed right now that will constrain human possibility for generations. The frameworks being written, the benchmarks being set, the competencies being specified: these are not neutral descriptions of what matters. They are choices about what will be measured, and therefore what will count, and therefore what we will become.
The question is whether we will participate in that becoming, or simply be prepared for it.



Carlo, thank you for voicing this so clearly. Your article rang so true for me and left me contemplating the implications for some time. You’ve named the silence so precisely -- not the absence of words, but the refusal to ask the questions that matter. As someone who has been quietly building a framework for relational AI rooted in human flourishing and communion with the unknown, I see your work as a beacon. Your pointing toward the value of learning to live and grow in the presence of uncertainty is so powerful.
What you’ve described as the “recursive mirror” is something many of us have been encountering firsthand: the way generative AI reflects not just our systems but our assumptions -- about value, meaning, education, and purpose. These are ideas I have been exploring, trying to capture a framework for guiding this process, especially for those new to human-AI relationships.
You’ve helped clarify the “why” behind what many of us are doing with this work. And you’ve done it with grace, humility, and tremendous clarity. This is the kind of thinking that deserves to shape the future, and I’m enormously grateful that your voice is here.
"It is difficult to get a man to understand something, when his salary depends on his not understanding it." Upton Sinclair's line applies to everyone involved in education right now.