Artificial intelligence isn't just an optional extra at Shopify anymore – it's now the norm. In a recently publicised internal memo, Shopify CEO Tobi Lütke laid out a bold new "AI-first" mandate for his company. Every employee, from junior staff to top executives, is expected to embrace AI as part of their daily work. The memo, pointedly titled "AI usage is now a baseline expectation," signals a seismic shift in how one of the world's tech darlings views the future of work. It got me thinking: What if we applied the same lens to our universities? Are we in higher education prepared for a world where AI use is assumed – even demanded – as a baseline skill?
Shopify's AI-First Mandate – AI or Bust
Shopify's internal AI mandate memo, shared publicly by CEO Tobi Lütke via X, highlights the new AI-first expectations.
In his March 20 memo (which he tweeted to pre-empt leaks), Lütke makes it crystal clear: there's no opting out of AI. "Using AI effectively is now a fundamental expectation of everyone at Shopify," he wrote, bluntly warning that anyone who tries to ignore this trend will stagnate and "stagnation is slow-motion failure". In a company growing 20–40% year-on-year, standing still means falling behind. The mandate isn't just for coders or creatives; "everyone means everyone," Lütke emphasised, "including me and the executive team."
What does this look like in practice? For one, AI must be involved from the very start of any project. Shopify expects teams to prototype with AI – the memo says the early "Get Shit Done" (GSD) phase of projects "should be dominated by AI exploration." AI can generate drafts, code, designs – whatever jump-start is needed – in a "fraction of the time" it used to take. The idea is that by the time humans review a prototype, AI has already done hours of work in minutes, allowing the team to learn and iterate faster. In Lütke's words, "AI dramatically accelerates this process."
Perhaps most striking, Shopify is baking AI into employee performance reviews. Mastering AI isn't a nice-to-have skill; it's now formally assessed. "We will add AI usage questions to our performance and peer review questionnaire," the memo announces. Employees will need to show not just that they tried an AI tool, but that they've learned how to use it well. Prompt writing, feeding the right context, refining outputs – these are becoming core competencies. Colleagues will even give feedback on each other's AI usage as part of peer reviews. In short, being a "constant learner" with AI is officially part of the job description.
Another notable aspect of Shopify's mandate is the culture of shared AI knowledge. The company isn't just handing down edicts; it's equipping people with tools and encouraging a learning community. Every employee has access to a suite of state-of-the-art AI tools – from OpenAI's ChatGPT (via an internal bot called Chat.shopify) to GitHub Copilot, Anthropic's Claude, and more – all available and ready to go. And Shopify wants employees to swap notes on what works: the memo encourages everyone to "share what you learned". Teams regularly post their AI "wins and losses" in Slack channels (with cheeky names like #ai-centaurs), exchanging prompt ideas and novel use cases. This open dialogue means a hack discovered by one team can quickly spread company-wide. AI fluency becomes a collective pursuit, not a siloed expertise.
Finally, Lütke issued a powerful challenge: prove a human is needed before hiring one. If a team wants more budget or people, they must first demonstrate that AI can't do the job. "Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI," the memo instructs. He even prompts them to imagine "what would this area look like if autonomous AI agents were already part of the team?". Only after exhausting the potential of AI should they consider adding human labour. It's a radical "AI-first" filter on resource allocation – essentially, hire AI, not humans, whenever possible. That might sound stark, but Lütke's tone is less threatening and more opportunistic. He speaks of AI as a "massive productivity multiplier" and celebrates employees who achieved "100x" outputs by boldly partnering with AI. Rather than framing it as job cuts, he frames it as unleashing people to tackle implausible tasks by leveraging AI. The memo even notes these changes can lead to "really fun discussions and projects."
In sum, Shopify's mandate isn't just a productivity hack – it's a cultural shift. It blends a hard-nosed insistence on efficiency (do more with AI before asking for more people) with a rallying cry for innovation and learning. Shopify is saying: we're all in on AI and we expect you to be too. As Lütke put it, "AI will totally change Shopify, our work, and the rest of our lives". This is as much about seizing opportunity as it is about cutting costs. Which raises the question: while tech companies race ahead with AI-first cultures, where does that leave the world of higher education?
Higher Education: Late to the Party?
Looking at the tertiary education sector, here in Australia and across the globe, one can't help but notice a stark contrast. Universities have no equivalent mandate that "AI use is required" for their staff or students. In fact, if Shopify's culture is "AI or bust," higher ed's culture has been more like "AI… but be careful." Many universities are still wringing their hands over AI, scrambling to update academic integrity policies and issuing cautious guidelines. You certainly won't find memos being sent out that say "AI usage is now a baseline expectation" for every lecturer and administrator! There are no institution-wide directives compelling professors to integrate AI into their research or teaching workflows – at least not yet.
This isn't to say nobody in academia is experimenting. There are wonderful pockets of innovation, and plenty of individual educators and librarians (✋) excited about AI. But overall, education has lagged behind industry in adopting and normalising AI. Consider recent survey data: over half of college students in one U.S. study reported using generative AI in their coursework, yet more than 75% of faculty admitted they do not regularly use these tools. In other words, students are charging ahead, often teaching themselves tools like ChatGPT, while a majority of their instructors have barely dipped a toe in the water. This gap is telling. If AI were a language, our students would already be conversational, while many of us educators remain tongue-tied.
Why the hesitation? Part of it is certainly the valid concern about academic misconduct – the fear that AI makes cheating easier. Universities, being risk-averse institutions, have responded with guardrails and position statements rather than encouragements to play. Meanwhile, industry is emphasising opportunity over fear. Shopify's memo explicitly frames AI as an "unprecedented change" and asks employees to embrace it for "the benefit of our [customers]", aligning with core values of learning and change. Lütke's stance is that AI is a train we all need to board, and fast. In higher ed, though, the predominant stance is still cautious observation – waiting at the station, figuring out the timetable, while our students have already hopped on the next departure.
There's also the matter of institutional support (or lack thereof). Tech companies like Shopify are actively provisioning AI tools for their people – essentially saying "here's your toolbox, go build." In contrast, what's the default AI toolkit for a lecturer or student at a university? Often, it's whatever free or public tool they find on their own. A few universities have begun offering licensed tools (for example, some campuses have enabled Microsoft's AI Copilot). But these tend to be piecemeal and often behind the curve. Relying solely on an institutional default might mean missing out on the state-of-the-art developments happening elsewhere. The AI field moves ridiculously fast – new models and innovations emerge every month – and right now, educators who want to explore these have to take initiative individually. There's no university-wide "AI exploration hour" or Slack channel where professors swap prompt ideas for lesson planning. (Wouldn't that be great?) In short, the experimental culture that Shopify promotes is largely absent in academia.
Agency, Opportunity, and Innovation – Not Just Efficiency
One key insight from Shopify's approach is the focus on human-AI collaboration as a source of innovation. Yes, there's an efficiency drive (doing more with less), but there's also a strong message of agency – empowering each individual to achieve more with AI, to level up their work in ways previously impossible. Lütke's memo brims with an almost geeky excitement about what AI enables: he talks about employees in "absolute awe" of new capabilities, tools acting as "multipliers," and tackling problems that "we wouldn't even have chosen to tackle before". This is about unleashing creativity and ambition, not just speeding up routine tasks. In fact, he stresses that "using AI well is a skill that needs to be carefully learned by… using it a lot." In other words, mastery comes through practice, experimentation, even play – the very things we often encourage in learning.
Imagine if universities adopted a similar ethos. What if, instead of viewing AI primarily as a threat to academic integrity or a cost-saver for grading automation, we viewed it as an opportunity for academic innovation? What new research questions could we explore with AI co-researchers? How might AI assistants enable more personalised and interactive learning experiences for students? There are professors already using AI to draft research paper sections, generate quiz questions, simulate historical debates, you name it – but these remain individual initiatives. There's little in the way of a mandate from above that says, "Go forth and experiment, and report back what you discover."
The Shopify story suggests that real progress comes when experimentation is not just allowed but expected. It's a shift from a defensive stance (managing risks, waiting for peer institutions to move) to a proactive stance (seizing the initiative, sharing best practices). The memo explicitly ties this to core values and the company's mission. What are our core values in higher education? Curiosity? Advancing knowledge? Preparing students for the future? If we truly believe in those, can we afford not to wholeheartedly explore the most transformative technology of our time?
To be clear, the contexts are different. A university is not a profit-driven tech company. But consider the external reality: the workforce our graduates are entering is rapidly embracing AI. According to the World Economic Forum, "more than half" of hiring managers say they wouldn't even hire someone who lacks AI literacy skills. That's a stunning statistic. The job market is effectively issuing its own mandate: be AI-capable, or be left behind. If our education system doesn't help students meet that bar, aren't we failing our mission? Forward-looking companies are treating AI skills as fundamental – maybe it's time our curricula and professional development do the same.
Perhaps we in academia can take a cue from Lütke's approach without needing a top-down edict. We can start by exploring AI in our own practice and sharing insights with peers. Try out that new AI tool you heard about – not just the ones your IT department gave you. (If Shopify's folks can experiment with half a dozen AI platforms, maybe we should tinker beyond just, say, the campus-approved solution.) Write a new lecture with the help of a generative model and see if it spurs a fresh approach. Invite AI to critique your lesson plan or act as a "teaching assistant" in a class forum. Then, crucially, tell colleagues what worked and what didn't. Maybe set up an informal discussion group or Teams channel about pedagogical AI hacks. By creating these grassroots spaces, we cultivate the same spirit of shared learning that Shopify formalised.
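For the more technically inclined, that "invite AI to critique your lesson plan" suggestion can be a dozen lines of code. Here's a minimal sketch, assuming the official openai Python package and an OPENAI_API_KEY in your environment; the model name, prompts, and lesson plan are purely illustrative, not an endorsement of any particular tool.

```python
# A minimal sketch of asking a model to critique a lesson plan.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
# Model name and prompts are illustrative; swap in whatever your
# institution (or your own experimenting) makes available.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

lesson_plan = """
Week 3: Introduction to statistical inference.
- Lecture: sampling distributions (50 min)
- Tutorial: worked examples on confidence intervals
- Assessment: short quiz, 10 multiple-choice questions
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable chat model works
    messages=[
        {
            "role": "system",
            "content": (
                "You are an experienced curriculum reviewer. Critique the "
                "lesson plan you are given: flag unclear learning outcomes, "
                "suggest one alternative activity, and propose an analogy "
                "for the hardest concept."
            ),
        },
        {"role": "user", "content": lesson_plan},
    ],
)

print(response.choices[0].message.content)
```

Even a toy like this makes the "share what you learned" loop concrete: paste the critique into your Teams channel, compare notes on which prompts produced useful feedback, and iterate.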
The payoff? We'd not only improve our efficiency (yes, AI can save us time in prepping materials), but we'd also likely discover creative new methods that make learning more engaging. AI can help generate multiple examples or analogies to explain a tough concept, adapt content to different reading levels, or simulate real-world scenarios for students to navigate. These are innovations that go beyond doing the same thing faster – they let us do new things altogether. Embracing AI can free up educators to focus on the human aspects of teaching – mentorship, critical discussion, emotional support – while AI handles some of the grunt work and provides sparks of inspiration. It's the augmented educator model, where our human expertise plus AI yields better outcomes than either alone.
Food for Thought
As AI becomes the assumed backbone in industry, it prompts some hard questions for higher education. Here are a few to ponder:
Should universities declare an "AI literacy" mandate for students and staff? What would it look like if using AI was as expected as using the library or the internet on campus? Would this boost innovation, or are there reasons to tread carefully?
Are we preparing students for a world that assumes AI? When more than 50% of hiring managers won't consider candidates without AI skills, should every degree programme embed AI competencies? How do we ensure our graduates can thrive alongside AI tools and not be displaced by them?
How can we support educators in becoming AI-proficient? If an AI-first culture is coming, what professional development or incentives would help faculty not just adapt, but lead the way? Should experimentation with AI be recognised and rewarded in the academic career path (much like research and service are)?
What is the cost of doing nothing? In an "AI or bust" environment, does higher ed risk irrelevance by not keeping pace? Or conversely, what unique values must universities hold onto, even as we integrate AI (e.g. human judgement, ethics, critical thinking), to guide the use of these tools responsibly?
The Time for Academia's AI Mandate Has Arrived
Let's be crystal clear: academia's cautious approach to AI is rapidly becoming untenable. When Shopify declares "using AI effectively is now a fundamental expectation," they're not outlining a far-future vision; they're codifying what's already happening in the world our graduates enter. This isn't hyperbole; it's reality.
The uncomfortable truth is that higher education now stands at a crossroads that will define its relevance for decades to come. One path leads to slow, deliberative integration of AI through endless committees and cautious policies. The other demands a bold, institutional commitment to transforming how we teach, research, and operate with AI as a core capability, not a peripheral concern.
The stakes couldn't be higher. Universities that fail to develop a cohesive, ambitious AI strategy in the next 12–18 months risk watching their carefully crafted curricula become obsolete in real time. When students can access state-of-the-art AI that outperforms much of what we teach, our value proposition faces fundamental questions. When employers expect AI fluency as standard, our graduates' prospects depend on our willingness to adapt.
What we need isn't gentle encouragement but an educational equivalent of Lütke's mandate: a clear institutional commitment that AI proficiency is essential for every academic, every administrator, every student. This means rethinking assessment from the ground up, reimagining curriculum design with AI as a core component, creating institutional mechanisms for sharing AI innovations, and, yes, perhaps even considering AI capabilities before expanding human resources.
This isn't about abandoning our values or surrendering to technological determinism. Rather, it's about proactively reshaping AI to serve academic purposes instead of being reshaped by it. It's about recognising that critical thinking about AI can only happen through deep engagement with it, not through theoretical distance.
The universities that thrive in the coming decade won't be those with the most sophisticated AI policies or the strictest guidelines. They'll be those that foster cultures of bold experimentation, that equip their communities with cutting-edge AI tools, that celebrate and spread AI innovations across disciplines, and that prepare graduates who don't just understand AI but can wield it with sophistication and ethical judgement.
Shopify's memo isn't just a corporate strategy – it's a glimpse into our immediate future. The question isn't whether higher education will eventually adopt similar mandates, but whether we'll do so proactively and shape our destiny, or reactively, after the world has moved on without us.
The revolution isn't just coming. It's here. And academia's response cannot afford to be timid.
I like USyd's approach, which a number of unis are following – the two-track system. For 'open' assessments GenAI can be used; 'secure' assessments are completed without it and are invigilated, in person. I also like how students there have created a site for GenAI tips. Overall, it would be useful if academic staff had time to explore these tools.