Rethinking Thinking: Watching a Robot Lift Weights

Imagine this scenario: You sign up for a gym membership to get stronger. You hire a personal trainer. But every time you go to the gym, you sit on a bench and watch a robot lift the heavy weights for you.

The robot has perfect form. It lifts 200 kg with ease. It never gets tired. You leave the gym without sweating a drop.

At the end of the year, who is stronger? The robot. You, unfortunately, have atrophied: your muscles have weakened because you stopped using them.

This is the most “down-to-earth” argument against the unchecked use of AI in education, and it’s the one we need to talk about first.

The “Productive Struggle”

In psychology and neuroscience, there is a concept called cognitive offloading: using a tool to reduce the mental effort required to solve a problem. Sometimes, this is great (like using a calculator for long division so you can focus on a complex physics equation).

But writing, coding, and problem-solving are different. The learning doesn’t happen when you have the answer; the learning happens while you are struggling to find it.

  • When you stare at a blank page and feel frustrated, your brain is building neural pathways.
  • When you write a messy draft and have to fix it, you are learning how to structure thoughts.
  • When you debug code for an hour, you are learning logic.

If a student asks ChatGPT to “Write a 500-word essay on Hamlet,” they get the product (the essay), but they skip the process (the thinking). They have essentially paid a robot to go to the gym for them.

The Risk: The “Google Maps” Effect

We’ve already seen this happen with navigation. Before GPS, we built mental maps of our cities. Now, many of us feel helpless without Google Maps, even in our own neighborhoods. We offloaded that skill, and our brains stopped maintaining it.

If we offload critical thinking and writing to AI, we risk graduating a generation of students who can recognize a good answer but cannot create one from scratch. We risk creating graduates who are technically efficient but intellectually fragile.

The Solution: Bring Back the “Viva” (The Oral Defense)

So, how do we fix this as a university? Banning AI is like trying to ban calculators: it's both impossible and unhelpful.

Instead, we need to change what we value. We need to move from grading the paper to grading the person.

The most strategic shift we can make is to reintroduce a modern, scaled-down version of the Viva Voce (Oral Defense).

How it works:

  1. Submit the work: The student submits their essay or coding project (which may or may not have been assisted by AI).
  2. The Defense: The grading process includes a brief (5-10 minute) face-to-face conversation with the professor or a teaching assistant.
  3. The Question: “Walk me through how you reached this conclusion,” or “Explain why you chose this specific argument over that one.”

If the student wrote the paper (even with AI assistance) and deeply understands it, they will pass the defense easily. If they generated it blindly, they will freeze.

The Bottom Line

We are not here to police technology. We are here to build human intelligence. By shifting our focus back to human explanation, we ensure that even if the student uses a “robot” to help spot the weights, they are still the ones doing the heavy lifting.