AI Can’t Replace Education—Unless We Let It


As commencement ceremonies celebrate the promise of a new generation of graduates, one question looms: will AI make their education pointless?

Many CEOs think so. They describe a future where AI will replace engineers, doctors, and teachers. Meta CEO Mark Zuckerberg recently predicted AI will replace mid-level engineers who write the company’s computer code. NVIDIA’s Jensen Huang has even declared coding itself obsolete.

While Bill Gates admits the breakneck pace of AI development is “profound and even a little bit scary,” he celebrates how it could make elite knowledge universally accessible. He, too, foresees a world where AI replaces coders, doctors, and teachers, offering free high-quality medical advice and tutoring.

Despite the hype, AI cannot “think” for itself or act without humans—for now. Indeed, whether AI enhances learning or undermines understanding hinges on a crucial decision: Will we allow AI to just predict patterns? Or will we require it to explain, justify, and stay grounded in the laws of our world?

AI needs human judgment, not just to supervise its output but also to embed scientific guardrails that give it direction, grounding, and interpretability. 

Physicist Alan Sokal recently compared AI chatbots to a moderately good student taking an oral exam. “When they know the answer, they’ll tell it to you, and when they don’t know the answer they’re really good at bullsh*tting,” he said at an event at the University of Pennsylvania. So, unless a user already knows a lot about a given subject, according to Sokal, one might not catch a “bullsh*tting” chatbot. That, to me, perfectly captures AI’s so-called “knowledge.” It mimics understanding by predicting word sequences, but it lacks conceptual grounding.

That’s why “creative” AI systems struggle to distinguish real from fake, and debates have emerged about whether large language models truly grasp cultural nuance. When teachers worry that AI tutors may hinder students' critical thinking, or doctors fear algorithmic misdiagnosis, they identify the same flaw: machine learning is brilliant at pattern recognition, but lacks the deep knowledge born of systematic, cumulative human experience and the scientific method.

That is where a growing movement in AI offers a path forward. It focuses on embedding human knowledge directly into how machines learn. PINNs (Physics-Informed Neural Networks) and MINNs (Mechanistically Informed Neural Networks) are examples. The names might sound technical, but the idea is simple: AI gets better when it follows the rules, whether those rules come from physics, biology, or social dynamics. That means we still need humans not just to use knowledge, but to create it. AI works best when it learns from us.
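To make the idea concrete, here is a minimal, hypothetical sketch in Python (using PyTorch; the toy equation and every number are illustrative, not drawn from any model mentioned in this piece). A small network is trained to fit a few noisy measurements while a second loss term penalizes it whenever its predictions violate a known physical law, in this case simple exponential decay:

```python
# Minimal sketch of a physics-informed loss (hypothetical example, not any
# specific published model): a small network fits a few noisy measurements
# while also being penalized for violating a known law, du/dt = -k*u.
import torch
import torch.nn as nn

torch.manual_seed(0)
k = 1.5  # known decay constant supplied by the governing physics

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

# A handful of noisy "measurements" a data-only model would rely on
t_data = torch.tensor([[0.0], [0.5], [1.0]])
u_data = torch.exp(-k * t_data) + 0.01 * torch.randn_like(t_data)

# Collocation points where the physical law is enforced, even without data
t_phys = torch.linspace(0.0, 2.0, 50).reshape(-1, 1).requires_grad_(True)

for step in range(2000):
    optimizer.zero_grad()

    # Data loss: match the few observations we have
    loss_data = ((net(t_data) - u_data) ** 2).mean()

    # Physics loss: penalize violations of du/dt + k*u = 0
    u = net(t_phys)
    du_dt = torch.autograd.grad(u.sum(), t_phys, create_graph=True)[0]
    loss_phys = ((du_dt + k * u) ** 2).mean()

    (loss_data + loss_phys).backward()  # the physics term is the guardrail
    optimizer.step()
```

Even with almost no data, the physics term keeps the model from drifting into predictions the underlying law forbids. That, in miniature, is the PINN and MINN idea.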

I see this in my own work with MINNs. Instead of letting an algorithm guess what works based on past data, we program it to follow established scientific principles. Take a local family lavender farm in Indiana. For this kind of business, blooming time is everything: harvesting too early or too late reduces essential oil potency, hurting quality and profits. A purely data-driven AI may waste time combing through irrelevant patterns. A MINN, by contrast, starts with plant biology. It uses equations linking heat, light, frost, and water to blooming to make timely and financially meaningful predictions. But it only works when it knows how the physical, chemical, and biological world works, and that knowledge comes from science, which humans develop.
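The kind of biological rule a mechanistic model can build on is easy to sketch. The snippet below shows growing degree days, a standard heat-accumulation measure of plant development; the base temperature and bloom threshold here are illustrative placeholders, not the farm’s actual values:

```python
# Hypothetical sketch of a mechanistic building block: growing degree days
# (GDD), a standard heat-accumulation model of plant development.
# All thresholds are illustrative, not a real farm's calibrated values.
def growing_degree_days(t_min, t_max, t_base=10.0):
    """Daily heat accumulation above a base temperature (deg C)."""
    return max((t_min + t_max) / 2.0 - t_base, 0.0)

def predicted_bloom_day(daily_temps, gdd_to_bloom=700.0):
    """Return the first day on which accumulated GDD crosses the bloom threshold."""
    total = 0.0
    for day, (t_min, t_max) in enumerate(daily_temps, start=1):
        total += growing_degree_days(t_min, t_max)
        if total >= gdd_to_bloom:
            return day
    return None  # season too cool: threshold never reached

# Toy example: a made-up spring and summer temperature series (deg C)
season = [(8, 18), (10, 22), (12, 25)] * 40
print(predicted_bloom_day(season))
```

A MINN does not stop at a hand-written rule like this; it combines such mechanistic terms with frost, light, and water effects and lets the data refine whatever the science leaves unknown.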

Imagine applying this approach to cancer detection: breast tumors emit heat from increased blood flow and metabolism, and a purely predictive AI could analyze thousands of thermal images to identify tumors based solely on data patterns. A MINN, however, like the one recently developed by researchers at the Rochester Institute of Technology, uses body-surface temperature data and embeds bioheat transfer laws directly into the model. Instead of guessing, it understands how heat moves through tissue, so it can identify not only that something is wrong, but what is causing it and precisely where it is. In one case, a MINN predicted a tumor’s location and size within a few millimeters, grounded entirely in how cancer disrupts the body’s heat signature.
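For readers curious what “embedding bioheat transfer laws” can look like in practice, here is a hedged sketch in Python (illustrative only, not the RIT group’s actual implementation, and with placeholder parameter values): the Pennes bioheat equation written as a residual that a physics-informed model is trained to drive toward zero, so any predicted temperature field must be consistent with how heat is conducted, carried by blood, and generated by tissue.

```python
# Hedged sketch: the steady, 1-D Pennes bioheat equation as a residual term.
# Parameter values are representative placeholders, not clinical constants.
import torch
import torch.nn as nn

k_t = 0.5                     # thermal conductivity of tissue, W/(m*K)
w_b = 0.0005                  # blood perfusion rate, 1/s
rho_b, c_b = 1050.0, 3600.0   # blood density (kg/m^3) and specific heat (J/(kg*K))
T_a = 37.0                    # arterial blood temperature, deg C
q_met = 400.0                 # baseline metabolic heat generation, W/m^3

def bioheat_residual(net, x, q_tumor):
    """Residual of k*T'' + w_b*rho_b*c_b*(T_a - T) + q_met + q_tumor(x) = 0.
    A physics-informed model is trained to drive this toward zero at
    collocation points inside the tissue."""
    x = x.requires_grad_(True)
    T = net(x)
    dT = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    d2T = torch.autograd.grad(dT.sum(), x, create_graph=True)[0]
    return k_t * d2T + w_b * rho_b * c_b * (T_a - T) + q_met + q_tumor(x)

# Toy check with a hypothetical extra heat source standing in for a tumor
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
x = torch.linspace(0.0, 0.05, 20).reshape(-1, 1)   # 5 cm of tissue depth
tumor = lambda x: 5000.0 * torch.exp(-((x - 0.02) ** 2) / 1e-4)
print(bioheat_residual(net, x, tumor).shape)        # one residual per point
```

Because the residual spells out conduction, perfusion, and metabolic heating explicitly, a model constrained by it can, in outline, infer where a hidden heat source sits and how strong it is, rather than merely flagging an anomaly in the data.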

The takeaway is simple: humans are still essential. As AI becomes more sophisticated, our role is not disappearing; it is shifting. Humans need to “call bullsh*t” when an algorithm produces something bizarre, biased, or wrong. That need reveals not just a weakness of AI but humans’ greatest strength. It also means our knowledge has to keep growing, so we can steer the technology, keep it in check, make sure it does what we think it does, and help people in the process.

The real threat isn’t that AI is getting smarter. It is that we might stop using our intelligence. If we treat AI as an oracle, we risk forgetting how to question, reason, and recognize when something doesn’t make sense. Fortunately, the future doesn’t have to play out like this. 

We can build systems that are transparent, interpretable, and grounded in the accumulated human knowledge of science, ethics, and culture. Policymakers can fund research into interpretable AI. Universities can train students who blend domain knowledge with technical skills. Developers can adopt frameworks like MINNs and PINNs that require models to stay true to reality. And all of us—users, voters, citizens—can demand that AI serve science and objective truth, not just correlations.

After more than a decade of teaching university-level statistics and scientific modeling, I now focus on helping students understand how algorithms work “under the hood” by having them build and study the models themselves, rather than use them by rote. The goal is to raise literacy across the interconnected languages of math, science, and coding.

This approach is necessary today. We don’t need more users clicking “generate” on black-box models. We need people who can understand the AI’s logic, its code and math, and catch its “bullsh*t.” 

AI will not make education irrelevant or replace humans. But we might replace ourselves if we forget how to think independently and why science and deep understanding matter.

The choice is not whether to reject or embrace AI. It’s whether we’ll stay educated and smart enough to guide it.
