The hospital bed wasn't where I expected my AI journey to take shape. But then again, nothing about the final quarter of 2023 followed the expected path. As I lay there in November, wrestling with both illness and the first thoughts of a draft institutional AI strategy, I couldn't have imagined how deeply personal the following year's exploration of artificial intelligence would become.
This was the year everything changed, though not in the ways the tech headlines would have you believe. While the world obsessed over each new model release and capability breakthrough, I found myself on a more nuanced journey - one that led me from strategic frameworks to ethical quandaries, from governance structures to the very foundations of how we think about education itself.
My personal journey from struggling to walk to regaining strength paralleled the institutional journey I was witnessing in higher education. Both required patience, determination, and a willingness to question fundamental assumptions. As my body healed, my mind raced with possibilities for transformation that went far beyond the tactical and technical.
I became a pinball in the machine of institutional transformation, bouncing between the pragmatic and philosophical. One day crafting high-level strategy documents, the next grappling with the raw human reactions to change. How do you get buy-in for something you're still trying to understand yourself? How do you build governance around capabilities that shift weekly? How do you encourage deeper thinking about tools that many still see as either salvation or threat?
The mid-year TEQSA snapshot revealed a sector in its own rehabilitation process - some institutions sprinting ahead with comprehensive frameworks and innovative approaches, others still finding their feet. Like my own recovery, the sector's progress wasn't linear. Some days brought breakthrough insights, others humbling reminders of how far we had to go.
But something unexpected happened along the way. The more I engaged with AI, the more it became clear that we weren't just dealing with a new technology - we were witnessing the dissolution of comfortable certainties about knowledge, learning, and human potential itself. Every question about AI implementation led to deeper questions about education's purpose, about the nature of intelligence, about what it means to be authentically human in an increasingly hybrid world.
The writing began as therapy - a way to process the cascade of insights and uncertainties that came with each new exploration. But it became something more. Each article, each post, each late-night reflection was another step into unknown territory. I found myself mapping not just AI's capabilities, but the shifting landscape of human cognition and creativity itself.
As the year progressed, a pattern emerged. The most profound changes weren't happening in the technology - they were happening in us. In how we think, how we learn, how we create and how we change. The real revolution wasn't in the tools but in the dissolution of boundaries we once thought immutable - between human and machine, between individual and collective knowledge creation, between teaching and learning itself.
I watched institutions wrestle with these changes, developing frameworks that felt outdated almost as soon as they were written. I saw educators grapple with existential questions about their role and value. I witnessed students intuitively grasp possibilities that their teachers were still struggling to articulate. The gap wasn't just in progress but in perspective - between those who saw AI as a technical challenge to be managed and those who recognised it as a catalyst for fundamental transformation.
And somewhere along the way, my own perspective shifted. The strategic questions remained important, but they were dwarfed by bigger ones: What does it mean to be educated in an age of artificial minds? How do we maintain human agency while embracing cognitive collaboration? What new forms of intelligence might emerge from this dance between human insight and artificial capability?
2024 wasn't just the year AI became real - it was the year it became personal. The year we began to understand that this isn't just about adopting new tools or updating old systems. It's about evolving alongside our artificial counterparts, about reimagining what it means to think, to learn, to create.
Now, as I write this from a very different place than where I started - both literally and metaphorically - I'm struck by how far we've come and how far we have yet to go. The questions have gotten bigger, not smaller. The implications deeper, not clearer. And that's exactly as it should be.
Because this journey isn't about finding answers anymore. It's about learning to navigate uncertainty with wisdom, about maintaining our humanity while expanding our capabilities, about discovering new ways of being in a world where the boundaries between human and artificial intelligence grow increasingly fluid.
2024 taught me that the most important skill isn't mastery of AI tools - it's the ability to remain deeply, authentically human while dancing with artificial minds. To maintain ethical clarity while embracing cognitive enhancement. To hold onto wisdom while reaching for new capabilities.
As we step into 2025, I'm less certain about the future than I was a year ago, and strangely more optimistic because of it. The questions that keep me up at night have evolved from "How do we implement this?" to "Who might we become?"
And perhaps that's the greatest gift this year has given us - not answers, but better questions. Not certainty, but the courage to explore. Not a map, but the wisdom to know we're charting new territory together.
Here's to the journey ahead, to the questions yet to be asked, and to maintaining our humanity while embracing the artificial minds that are becoming our cognitive dance partners.
The future doesn't need our permission to be revolutionary. It just needs us to be brave enough to engage with it.
AI is both comforting and concerning, predictable and unnatural. It's this juxtaposition, this cognitive dissonance, that I sometimes wrestle with. Is AI a reflection of my own thoughts? We prompt it, and it predicts what we are searching for. Magic. It doesn't know what we want, and neither do we. And though it gathers answers from a universe of sources, it is still only one voice. A possible echo chamber.