Two and a half years ago, I had my first proper conversation with ChatGPT. Like millions of others, I was captivated. Natural language in, natural language out. Here was something that could write poetry, explain quantum physics and help debug code, all while maintaining a conversational tone that felt almost human. It was intoxicating.
But as a ‘former’ legal research librarian, I felt something nagging at me. Where were the citations? The verifiable sources? The chain of evidence I'd spent years learning to construct and demand? The outputs were impressive, certainly, but they floated in a void: brilliant assertions without anchors.
Over the past nine months, something fundamental has shifted in how I use generative AI. The veil has lifted and what I've discovered underneath has transformed not just my workflow, but my entire conception of what these tools are for.
The Librarian's Dilemma
My years as a legal research librarian taught me a particular way of thinking about information. Every assertion needs a source. Every connection requires verification. Every synthesis must be traceable back to its component parts. This isn't pedantry; it's the difference between opinion and evidence, between speculation and knowledge.
Early generative AI felt like working with a precocious student who'd absorbed vast amounts of information but couldn't tell you where any of it came from. The responses were often correct, frequently insightful, but fundamentally unmoored. For someone trained to value the genealogy of information as much as the information itself, this was both fascinating and frustrating.
I found myself in an odd position: impressed by the outputs but unable to trust them in the way my training and background demanded. It was like having access to a brilliant colleague who refused to show their working.
From Personality to Capability
The transformation began subtly. First came knowledge bases that could store and reference specific documents. Then came web search integration; suddenly, AI could reach beyond its training data to find current information. Then reasoning models that could show their ‘thinking’ process. Then deep research capabilities that could synthesise across multiple sources, citing them with live links. And now agents that combine elements of all the preceding with active use of a web browser.
But the real shift wasn't in the features; it was in my approach. I stopped trying to chat with AI and started trying to examine it.
This wasn't about being unfriendly or mechanical. It was about stripping away the anthropomorphic layer to reveal the tool underneath. Once I stopped expecting AI to be conversational and started looking for it to be functional, iterating through bias and correction, everything changed.
Three Discoveries That Changed Everything
1. Finding What I Didn't Know I Didn't Know
Working recently with o3 pro, I've experienced something I can only describe as "grounded serendipity." I'll present an idea or hypothesis, and the system will surface related insights I had no idea existed, and, absolutely crucially, insights I can verify.
Last week, I was exploring trends in knowledge management systems. o3 pro didn't just summarise what I already suspected; it identified patterns in implementation failures across industries I hadn't considered, complete with sources I could check. This wasn't hallucination; it was genuine discovery for a non-expert, grounded in verifiable data.
2. Seeing My Own Work Anew
Perhaps the most startling revelation came from Claude's artefacts, which I've used to create numerous knowledge bases of my own writing, reports and insights. After uploading 50+ pieces I'd written over the years, I asked it to identify recurring themes and connections.
What emerged wasn't just a summary; it was a map of my own intellectual evolution I hadn't consciously recognised. Connections between pieces written years apart. Themes that had evolved in ways I hadn't tracked. Because I knew every source document intimately, I could verify these insights immediately. It was like having a researcher who had read everything I'd ever written and could instantly cross-reference it all.
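To make that cross-referencing idea concrete, here is a minimal sketch of one way connections across a personal corpus can be surfaced programmatically, using embedding similarity. The corpus folder, model name and threshold are my own illustrative assumptions; this is not how Claude's knowledge bases work internally, just the general technique.

```python
# A sketch of surfacing connections across a personal corpus via
# embedding similarity. Folder, model and threshold are illustrative
# assumptions, not any product's actual internals.
from pathlib import Path

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Load every plain-text piece in the (hypothetical) corpus folder.
paths = sorted(Path("corpus").glob("*.txt"))
names = [p.name for p in paths]
texts = [p.read_text(encoding="utf-8") for p in paths]

# Embed each document once, then compare every pair.
embeddings = model.encode(texts, convert_to_tensor=True)
scores = util.cos_sim(embeddings, embeddings)

# High similarity between pieces written years apart is exactly the
# kind of untracked theme worth a human look.
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        sim = float(scores[i][j])
        if sim > 0.6:  # arbitrary threshold, for illustration only
            print(f"{names[i]} <-> {names[j]}: {sim:.2f}")
```

The specific model doesn't matter; what matters is that every flagged connection points back to two documents I can open and read, which is what makes the insight verifiable.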
3. Mathematics That Actually Works
The integration of Python tools for data analysis marked another watershed moment. No longer was I receiving plausible-sounding statistical insights that might or might not be correct. Now I could see the actual calculations, verify the methodology and ‘trust’ the results.
When analysing trend data last month, the AI didn't just tell me there was a correlation; it showed me the regression analysis, the confidence intervals and the underlying calculations. For the first time, I was working with AI that could do maths I could check.
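To show what "maths I could check" looks like in practice, here is a minimal sketch of that kind of inspectable analysis: an ordinary least squares fit whose coefficients, confidence intervals and diagnostics are all laid out for review. The data below is synthetic, standing in for the trend data I actually analysed.

```python
# A sketch of a fully inspectable trend analysis: synthetic data,
# an OLS fit, and the confidence intervals printed for checking.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x = np.arange(2015, 2025, dtype=float)               # e.g. yearly observations
y = 3.0 * (x - 2015) + rng.normal(0.0, 2.0, x.size)  # synthetic upward trend

X = sm.add_constant(x)      # design matrix: intercept + slope
fit = sm.OLS(y, X).fit()

print(fit.summary())              # full regression table: R², p-values, etc.
print(fit.conf_int(alpha=0.05))   # 95% confidence intervals for both coefficients
```

Because every number in the summary is reproducible from the code above it, there is nothing to ‘trust’ on faith; the checking is the point.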
The Unsexy Revolution
Here's what I've realised: the real revolution in AI isn't about making it more human. It's about making it more useful.
Everyone's talking about consciousness, about AGI, about AI that can truly understand. But I'm excited about something far less sexy: verifiable knowledge synthesis. AI that can show its work. Systems that can trace connections through documented sources. Tools that enhance research rather than replace it.
I've stopped anthropomorphising AI entirely. The friendly personas, the eagerness to please, the conversational flourishes are not just unnecessary; they're actively counterproductive. What remains when you strip all that away is far more valuable: research engines built to interrogate information (text, images, sound, video and more) and synthesise linkages with genuine rigour.
This evolution has taught me that we need a new kind of literacy, not just prompt engineering, but what I call "verification-first thinking." It's about learning to use AI not as an oracle but as a research amplifier.
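As one small illustration of verification-first thinking in code, here is a sketch that takes claim/source pairs an AI has attached to its answer and checks that each source is reachable and actually contains the claimed wording. The pairs and URL are hypothetical placeholders, and the substring match is deliberately naive; real verification still needs a human reader.

```python
# A naive sketch of "the librarian test": is each cited source
# reachable, and does it contain the claimed wording? The claim/URL
# pairs below are hypothetical placeholders.
import requests

claims = [
    ("knowledge management", "https://example.com/report"),
]

for claim, url in claims:
    try:
        page = requests.get(url, timeout=10)
        page.raise_for_status()
        # Naive substring check; a human still reads the source.
        status = "found in source" if claim.lower() in page.text.lower() else "check manually"
    except requests.RequestException:
        status = "source unreachable"
    print(f"{claim!r} -> {url}: {status}")
```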
Where We Go From Here
Two years ago, I chatted with AI. Today, I interrogate knowledge systems. This isn't a small shift; for me, it's a fundamental reconception of what these tools are for.
The implications extend far beyond individual productivity. We're moving toward a world where AI serves not as a replacement for human intelligence but as an amplifier for human verification instincts. Where the value isn't in the AI's ability to mimic human conversation but in its capacity to surface connections we can check, insights we can verify and patterns we can validate.
For those of us trained in research, this is our moment. The skills that seemed to be becoming obsolete (source verification, citation tracking, evidence chains) are actually becoming more valuable, not less. We're the ones who can help others learn to use these tools not as magic boxes but as research instruments.
The Question That Matters
As I reflect on this journey, one question keeps recurring: What happens when we stop asking AI to be intelligent and start asking it to be verifiable?
The answer, I'm discovering, is that we get something far more valuable than artificial intelligence. We get augmented research. We get synthesised knowledge. We get insights we can trust because we can trace them.
The veil has lifted and what's underneath isn't a thinking machine trying to be human. It's a research engine waiting to be used by humans who think.
The real question isn't whether AI can pass the Turing test. It's whether it can pass the librarian test: Can you show me where that came from?
In my experience, increasingly, it can. And that changes everything.