What if AI doesn’t get much better than this?

Jayden Thomas ’27

The artificial intelligence revolution has reached every part of our daily lives, from chatbots that explain difficult topics to AI helpers integrated into nearly every app we use.

What if this is as far as it goes? What if the introduction of these large language models, image generators, and personal assistants represents the peak, not the beginning, of AI progress?

We seem to be hitting a dead zone in AI progress, as recent attempts to push the revolution forward illustrate: the failure of OpenAI's GPT-5, Apple's struggle to refine its AI features, and bearish outlooks on artificial intelligence companies.

AI exists chiefly to do three things that ordinary computers cannot: think, reason, and empathize. Nearly 70 years after Professor John McCarthy proposed ideas for "thinking machines" at Dartmouth College in 1956, pre-trained large language models have come a long way.

However, it still falls largely short of our expectations.

For example, while AI chatbots like ChatGPT can provide detailed explanations and draft essays, they often fail when asked to solve problems that require deep reasoning or even simple multi-step planning.

Image generators such as DALL·E or Midjourney can create breathtaking visuals from text prompts; however, they sometimes produce nonsensical images, known as hallucinations, revealing that they lack a true understanding of what the creator has in mind.

Haverford School logo, misspelling included, generated by DALL·E

In more commercial settings, AI helpers and customer service assistants can mimic politeness and answer basic questions, but they cannot understand a person's frustration or tailor advice to someone's emotional and physical condition.

These examples demonstrate that despite their impressive capabilities, AI systems fall short in the very areas that define intelligence: critical thinking, logical reasoning, and empathy.

Although progress in the AI sector may appear stagnant for now, there is one key challenge that AI's creators could still overcome: models and their algorithms have become so complex that even their creators do not understand how they work.

This "black box" problem means creators struggle to fully predict or explain a model's outputs, making errors hard to find and debug. And as models grow larger and their neural networks expand, simply adding more data or computational power will no longer be enough.

The future of AI, therefore, may depend on controlled development and interpretability: opening models up not only to their current creators but to the whole technological community to train and develop.

In other words, we need to work towards not a “smarter” AI, but one that works smarter with us.