Discussion about this post

Mark Loundy

This story is right on the money — as far as it goes. But what I am not seeing from writers like yourself and others is the admission, if not the realization, that there is no longer a path to some AI-oriented version of today. By that, I mean that students can't simply change their skill set and expect to have a job in an AI future. Life in the future will have to be something other than a balance of work and personal life. Employment, as we know it, will no longer be a choice for many millions of people.

Austen McDonald

While I agree we have to do something and that augmentation is better than reactively trying to beat the systems at their own game, the comforting refrain of "humans will do creativity, empathy, and strategic thinking" that I see often online is pollyannaish.

If creativity is "coming up with solutions you couldn't think of" then ChatGPT is already way better than most humans at brainstorming.

If empathy means "patiently seeking to understand all perspectives and integrate them into the discussion," then an LLM is infinitely patient and can tailor its engagement uniquely to each participant, making them feel heard in their own emotionally relevant way.

If strategic thinking means "seeing the big picture, anticipating future challenges, and aligning with long-term goals," then most MBA graduates would agree that an LLM can gather competitor data, process the last 5 years of customer sales, funnel, and feedback data, ingest recent recorded discussions between employees, compare all of that to a catalog of 50 years of case studies, and arrive at a polished plan far faster than a human can.

We can't give up yet, but let's not lull ourselves into thinking there will be plenty of jobs for humans doing "creativity, empathy, and strategic thinking."
