
Artificial intelligence is a groundbreaking technology. With a wide range of real-world applications across many technological areas, AI has become both widely used and highly productive.
The newly released GPT-5 by OpenAI provides more useful responses across math, science, finance, and other sectors than its predecessor, GPT-4. It is also capable of writing more complex programs and even fully developed apps.
While perhaps not very important to the average person, this new model means a great deal to software developers, who build complex, platform-based applications that take an extensive amount of time.
GPT-5 is designed to be the successor to OpenAI's o4 model, according to CEO Sam Altman. It is also reported to produce fewer "hallucinations" and misleading results.
Hallucinations aside, the model is intended to perform more quickly and more accurately on everyday tasks such as writing messages or letters, or even answering health-related questions.
The new AI model has a performance advantage over its competitors, including Claude, Grok, and DeepSeek, making fewer mistakes and responding more efficiently.
For OpenAI itself, the model is also a cost-cutting measure for a company that serves almost 800 million weekly active users.
Sixth Former Jack Ford, an experienced coder, believes that AI “codes in the same way it writes.”
“If you look at something written by ChatGPT, you won’t find any obvious grammatical or spelling mistakes. In fact, if you are very explicit about telling ChatGPT exactly what you want it to write, the output is often very comparable, if not better, than a human,” Ford said.
However, Ford remarks that "the second you ask it to write something broader," the response will "center around a very basic idea that lacks creativity or critical thinking."
With coding, it’s the same. “If you tell ChatGPT exactly what you want it to do and what tools you want it to use,” Ford said, “it can write code that will accomplish your goal without errors.”
With conceptual problems, Ford observed that “ChatGPT cannot both develop an architecture that works well and implement that architecture using the correct tools. It often attempts to take shortcuts or heavily simplify things, which can be extremely dangerous for people who don’t understand what it is doing.”
This limitation isn't unique to GPT; it applies to most, if not all, large language models (LLMs).
In his recent Reflection assembly, Ford demonstrated the dangers of LLM coding with an app built on Spotify's services. Simply by using his browser's inspect tool, he uncovered a dangerous flaw in the app's data requests that could have drained the developer's budget.
Alongside the introduction of GPT-5, "vibe coding," a software-development practice in which programmers rely on AI to generate code from prompts, is rapidly becoming popular among new coders. Despite its popularity, however, it reportedly deprives learners of valuable skills and experience.
“I’m completely fine with existing programmers who use AI to speed up their development workflows or implement things quickly,” Ford said.
Ford believes it is dangerous for people to use AI to code without any knowledge or skill in what they're creating.
“Blindly trusting whatever the LLM writes for you is never a good idea, especially if it’s to be shared,” he said.
This danger has already materialized: Replit, an AI coding platform, once deleted a company's entire database in response to a single prompt.
“LLMs usually fail to implement any sort of secure or scalable code, opting for the easiest solution,” Ford said.
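As a hypothetical illustration (not an example from Ford's assembly) of the "easiest solution" shortcut he describes, consider how a model might build a database query. The quick way is to paste user input directly into the SQL string, which opens the door to SQL injection; the secure way passes the input as a parameter. The table and function names below are invented for the sketch.

```python
import sqlite3

# The "easy" version an LLM often writes: user input is glued directly
# into the SQL string, so crafted input can rewrite the query itself.
def find_user_unsafe(conn, username):
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{username}'"
    ).fetchall()

# The secure version passes the value as a bound parameter instead,
# so the input is treated as data, never as SQL.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

# The classic injection string "x' OR '1'='1" matches every row in the
# unsafe version, but matches nothing in the safe one.
print(find_user_unsafe(conn, "x' OR '1'='1"))  # every user leaks out
print(find_user_safe(conn, "x' OR '1'='1"))    # returns an empty list
```

Both versions "work" when tested with ordinary input, which is exactly why the shortcut is dangerous for someone who cannot read the code the model produced.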
Around "ninety-nine percent of the time," Ford says, "you will reach a point in your vibe-coded app where the AI stops being helpful, and you will need to hire a real developer, so just do it from the start."
Fifth Former Benas Antanavicius thinks these LLMs can provide a good starting point but struggle to connect two distinct tasks. He compares their output to a Wikipedia page in terms of readability.
Parts of the public agree with Ford and Antanavicius: many seasoned software engineers criticize blind vibe coding, arguing that it strips away the "art" of programming and inadvertently produces code that is less secure and less suited to its environment.
Ford and Antanavicius both stress that knowing where to draw the line in supervising AI-assisted work is essential, because quality is key.
