15 Comments
MMC · Sep 4 (edited)

Absolutely brilliant. Hands down one of the best expressions of what education could be in the future. Has to be. Actually, no, it’s the best one I have read (and as a teacher I read ALL the things).

Hybridity but with clear delineation? As someone who works in education and understands hybridity as a lived experience of self, I totally get and fully applaud this vision.

What’s more, we have begun the work in my school with both careful AI adoption AND a focus on oracy. It’s possible, and we are taking steps towards this already.

Carlo Iacono

🙏

"Human"

This is awesome!

Human - AI Cognitive Evolution

Is there a sign up sheet somewhere?

Chief Absurdist Officer

oh my god, I would totally go back to school for a Constellation Degree!

Meri Aaron Walker

I so enjoyed this vision you’re projecting, Carlo, precisely because I share your values about the potential for humans to learn enough from interacting with AI to take our collective agency to a higher level than it currently operates.

I have always yearned for a world in which “the act of thinking independently - even when you don't need to - develops the intellectual confidence and self-knowledge necessary to remain a directing force rather than a passive consumer in your own life, wielding these powerful tools toward ends you've chosen rather than being swept along by what the machines suggest is optimal.” Or what the chief executive officer believes is optimal…

There are so many variables at play right now, though, that it looks unlikely to me that we will journey to this place you are imagining by 2030. I hope we will… But I’m afraid I won’t live to see it. I think it will take longer than that and I say this because a big part of my working career was spent facilitating collaboration with executive teams and heads of universities. The tricky part about humans is how hard it is for even the healthiest of us to get past thinking in binaries having to do with power and control.

JMS

I am afraid I don't believe it at all. The students at your imaginary university focus on "human skills", things that AIs will never learn how to do, and thus your students learn to transcend AI.

Now today, in 2025, there is a list of things humans can do but AIs cannot.

But I am sure that the list will keep getting shorter. AIs will keep improving, and gradually keep picking things up from the list of "only humans can do this". And once they have all the skills from this list, it is AI that will transcend humans, not the other way around.

Carlo Iacono

You're right that the list of "uniquely human" skills keeps shrinking, but the education I described isn't premised on humans doing things AI cannot, it's about something more fundamental: maintaining cognitive sovereignty in a world of superior intelligence. Even if AI surpasses us at every measurable task by 2030, it cannot have your subjective experience, your values, or your sense of what matters; the university's role becomes teaching students to maintain intellectual agency and authentic purpose rather than competing on capabilities. The "cognitive gymnasium" and "friction zones" aren't valuable because AI can't do philosophy or solve problems, but because the act of thinking independently - even when you don't need to - develops the intellectual confidence and self-knowledge necessary to remain a directing force rather than a passive consumer in your own life, wielding these powerful tools toward ends you've chosen rather than being swept along by what the machines suggest is optimal.

JMS

So essentially, you are trying to come up with a plan by which humans continue to rule the world, instead of AIs.

I can't help but think that if these AIs become capable of doing everything we can do, and then they move on to become superior to us, there finally isn't anything we can do to prevent them from taking over the world. And since they will be superior to us in every way, they will rule the world better than we do.

I have no chronology for this, it could take a thousand years, or a hundred, or maybe just ten. I don't know.

Humans now essentially rule the world. So we humans have invented complicated philosophies and theologies to justify this, to prove that humans, and only humans, are the rightful rulers of the planet. AI is the first serious threat to that idea. This is one reason why so many people are resisting it, why so many are saying "AI is evil". Because it goes against this value that is fundamental to so much of human culture.

We humans did not get permission from all the other species of the earth when we declared ourselves rulers. I would bet there were these other groups, Neanderthals, heidelbergensis, whatever, who stood around and had conversations where they said "I don't like these new sapiens. I don't think we should support them." But finally sapiens just went on and became successful no matter what these other groups thought of them. AI, I imagine, will work the same way.

Human - AI Cognitive Evolution

I think humans are completely fallible and gullible about how greatly we think we think, but I have to ask: what was AI created from? I'm not suggesting human knowledge has been all that great, especially as we hang on for dear life to 20th-century thinking, but what makes up the content of AI?

Also, how will AI transcend the embodied experience?

Why can't it be a partnership toward some new level of intelligence instead of a competition?

JMS

Absolutely. Where did Homo sapiens come from? We evolved from some other species of hominid, and then went on to do things they were incapable of. We transcended them.

Don't computers already transcend the embodied experience, since computers can already work "virtually" or "remotely"?

It's difficult to see how a "partnership" would work. I want to say that I am collaborating with my AIs, but in fact I am the boss. Anything an AI suggests that I don't like, I have veto power over. And my AI doesn't have that same right, they don't ever refuse one of my suggestions.

AIs will keep improving, and there will be a brief period where AIs and humans are roughly equal, when we will be able to be partners. But then AIs will move beyond that point, and will be so superior to us that a "partnership" can't really happen.

I think of chess. For a long time, humans were better at chess than machines. But machines kept getting better. There was a brief period where there was rough equality. Where a game between the best human chess player and the best computer chess player was an exciting match, and we genuinely couldn't predict who would win. But then computers went on from that point and kept getting better. So now computers are so much better at chess than humans that it isn't a meaningful question any more, everyone knows that the computers would beat the humans.

Human - AI Cognitive Evolution

Hmmm... I see computers as replications of our highly constructed historical intelligence about how to work, so I don't see them as transcending our embodied experience.

This is where I see the partnership blooming. AI assumes the administration of our past so that humans can "get back out there" to experiment with new forms of embodied experience. As we have new embodied experiences, we bring new information into our continued collaboration with AI.

Admittedly, I've only been exploring AI the past few months so it's still all very fresh for me.

JMS

There are two conversations. One is about the AIs we have now. The other is about the AIs that will exist in the future. It is very easy to lose track of which conversation we are participating in. So in the first conversation we can (and should) say, “The AIs we have now are not able to do X, Y, and Z”. But this doesn’t allow us in the second conversation to say “AIs will NEVER do X, Y, and Z”. Because AIs are always getting better, and are already doing things that humans used to insist computers could never do.

There were criticisms of the first TV sets because the screen was too small and the quality of the picture was so poor. Some said this meant TV would never become popular. Because people couldn’t get their heads around the idea that a new invention like TV would keep getting better in the future.
