Why Gen Z's Fear of AI Runs Deeper Than Job Loss, Says Dartmouth Professor

When Scott D. Anthony walked away from two decades of elite consulting to return to Dartmouth College as a professor, he believed he was leaving behind the pressure cooker of corporate transformation. Instead, he found himself standing at ground zero of one of the most disruptive technological shifts in human history.

Anthony joined the faculty of the Tuck School of Business in mid-2022. Just months later, ChatGPT was released into the world, triggering an explosion of generative artificial intelligence that would quickly infiltrate classrooms, boardrooms, and nearly every form of knowledge work. Overnight, Anthony’s course on disruption stopped being theoretical. Disruption had arrived—loud, confusing, and deeply personal.

What surprised him most was not how quickly students adopted AI tools, but how afraid many of them were to even try.


The Unexpected Emotion Driving Gen Z’s Relationship With AI

Contrary to popular narratives that paint Gen Z as fearless digital natives, Anthony observed something very different in his students. While some eagerly explored AI’s potential, a significant portion approached it with hesitation bordering on dread.

This fear, he explains, is not rooted in academic dishonesty or concerns about cheating. It is far more existential. Many students worry that reliance on AI will hollow out something essential within them—that using these tools too deeply might erode their originality, intuition, or sense of self.

In Anthony’s words, they are afraid they will “lose their humanity.”

This concern sets Gen Z apart from older generations of academics and professionals, many of whom are far more enthusiastic about experimenting with AI. While veteran faculty members often see AI as another productivity accelerator, students experience it as an identity challenge.


AI Anxiety Is Not About Jobs—It’s About Meaning

Public discussions around AI often focus on job displacement. Will automation replace analysts, writers, or programmers? But Anthony’s conversations with students reveal a deeper concern.

Gen Z is less worried about whether AI will take their jobs and more worried about whether their work will still matter.

If AI can write, analyze, summarize, and design faster and more fluently than humans, students question what value they personally bring to the table. For a generation already shaped by economic instability, pandemic isolation, and constant digital comparison, AI introduces a new layer of uncertainty: the fear of becoming cognitively obsolete.

This is not technophobia. It is a search for purpose.


The Cognitive Offloading Dilemma

Anthony does not dismiss these fears outright. In fact, he shares some of them.

He worries that excessive reliance on AI could weaken critical thinking, judgment, and creativity—skills essential for leadership. When people offload too much mental effort, they risk atrophying the very muscles that enable insight and decision-making.

This concern gained traction following a widely discussed MIT study examining cognitive engagement when using ChatGPT. The study suggested that heavy reliance on AI tools could reduce mental effort, creating what researchers described as “cognitive debt.”

While critics later pointed out methodological limitations, the study struck a nerve because it articulated a fear many already felt: that convenience might come at the cost of depth.

Anthony sees AI much like a calculator. It is powerful, but it should not replace understanding. If students use AI to bypass thinking rather than enhance it, learning becomes performative rather than transformative.


Teaching Disruption During Disruption

Anthony’s professional background uniquely positions him to navigate this moment. As a former McKinsey consultant and leader at Innosight—a firm co-founded by Clayton Christensen—he spent years helping organizations manage disruptive change.

Now, he teaches those same principles while living through disruption in real time.

History, he reminds his students, only appears orderly in hindsight. In the middle of transformation, everything feels chaotic. Signals are mixed. Tools evolve faster than norms. Fear and opportunity coexist.

AI, in this sense, is not unprecedented. It follows a familiar pattern seen during the rise of the internet, personal computing, and industrial automation. What makes this moment different is the speed—and the intimacy. AI does not just change what we do; it changes how we think.


Why Anthony Rejects Blanket Bans and Blue-Book Exams

Some educators have responded to AI by doubling down on old-school assessment methods. Handwritten exams, strict bans, and zero-tolerance policies aim to preserve academic integrity by excluding technology entirely.

Anthony respects this approach but does not adopt it himself.

He believes that banning AI misses the point. The real challenge is not preventing use, but understanding how students use it—and whether learning still occurs.

In his classes, students are encouraged to use AI, but they must reveal their process. They must show how they engaged with the tool, what decisions they made, and what they learned along the way. Polished output alone is meaningless without demonstrated understanding.

This approach mirrors the reality of modern work, where AI is increasingly embedded into workflows. The goal is not to avoid AI, but to develop judgment about when and how to use it.


The Seduction of Effortless Intelligence

One of Anthony’s greatest concerns is how seductive AI can be. The ease with which it generates articulate responses makes it tempting to disengage mentally.

He frequently warns students against the mindset of “let me offload this.” Over time, habitual offloading risks creating leaders who can manage tools but lack wisdom.

To make this point, Anthony draws inspiration from unexpected places—including comedians and chefs.


What Jerry Seinfeld Teaches About AI and Mastery

Anthony often references a story about Jerry Seinfeld. When asked if he ever wanted a consulting firm to optimize his creative process, Seinfeld reportedly asked a simple question: “Are they funny?”

The message is clear. Mastery comes from doing the hard work yourself.

Seinfeld’s comedy is not the result of shortcuts or automation. It is the product of obsessive refinement, repetition, and deep engagement with the craft. AI may accelerate certain tasks, but it cannot replace the lived experience required to develop taste, judgment, and originality.

For Anthony, this lesson is essential for students navigating AI. The hard way, he insists, is still the right way.


Julia Child and the Myth of the Superhero Innovator

Anthony’s favorite example comes from Julia Child, whose success story defies the modern myth of overnight genius.

Child failed repeatedly before becoming a cultural icon. She struggled through culinary school, faced publisher rejections, and spent nearly a decade refining her work before achieving recognition. Her first attempts were often disastrous, yet persistence—not brilliance—defined her success.

For students intimidated by AI’s apparent perfection, Child’s story offers reassurance. Innovation is not about being superhuman. It is about curiosity, resilience, and a willingness to endure failure.

AI does not eliminate the need for these traits. If anything, it makes them more valuable.


Reframing AI as a Tool, Not a Replacement

Anthony urges students to see AI not as a competitor, but as an amplifier. Used thoughtfully, it can enhance learning, expose blind spots, and accelerate iteration.

The danger arises when AI replaces struggle rather than supporting it.

True learning, he argues, requires friction. It requires wrestling with ambiguity, making mistakes, and sitting with uncertainty. AI should enter the process after understanding begins—not before.


What This Means for the Future of Work

The anxiety Anthony observes is a signal—not a weakness.

Gen Z’s fear reflects an intuitive understanding that work is about more than output. It is about identity, growth, and contribution. As AI reshapes the workplace, these human dimensions will matter more, not less.

Organizations that succeed will not be those that replace humans with machines, but those that design systems where technology amplifies human judgment, creativity, and empathy.

Education, in turn, must evolve—not by rejecting AI, but by teaching students how to remain human while using it.

FAQs

1. Why are Gen Z students afraid of AI?
They fear losing creativity, identity, and critical thinking—not just jobs.

2. Is this fear about cheating or academic integrity?
No, it is more existential and personal than disciplinary.

3. What does the Dartmouth professor believe about AI?
He sees it as useful but dangerous if it replaces thinking.

4. Does he ban AI in his classes?
No, he allows it but requires transparency and demonstrated learning.

5. What is cognitive offloading?
Delegating mental effort to tools, potentially weakening thinking skills.

6. What study fueled public concern?
An MIT study on reduced cognitive engagement using ChatGPT.

7. How does he assess learning with AI?
By examining process, reasoning, and understanding—not just output.

8. Why reference Jerry Seinfeld?
To emphasize mastery through hard work, not shortcuts.

9. What lesson does Julia Child represent?
Success comes from persistence and curiosity, not perfection.

10. What’s the takeaway for the future of work?
Human judgment and depth will matter more in an AI-driven world.
