GPT-5 Won't Be Available in the Short Term as AI Safety Concerns Grow

Technology Author: Yunfeng Zhang Apr 18, 2023 07:40 PM (GMT+8)

The development of artificial intelligence is the order of the day. Even if governments around the world could somehow ban new AI development, society is clearly ill-equipped to deal with the AI systems already available. GPT-5 has not yet arrived, and at a time when GPT-4 is not yet fully understood, the safety and ethical issues of AI still deserve careful examination.


Last month, an open letter signed by more than 1,100 AI experts, including Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, and Tristan Harris of the Center for Humane Technology, called for "an immediate moratorium on all AI labs for at least six months."

The open letter calls for a worldwide moratorium on the development of AI systems "more powerful than GPT-4" by labs like OpenAI. In simple terms, this appears to be a boycott of GPT-5. The letter highlights concerns about the safety of future systems and warns of potentially far-reaching risks to society and humanity.

However, GPT-5, as feared by Musk and others, may not be coming anytime soon.

At an MIT event on the commercial future of AI on April 13, OpenAI CEO and co-founder Sam Altman responded directly to the letter for the first time: "GPT-5 is not being trained now, nor will it be in the near future." He added that the open letter omitted most of the technical details that would justify a pause in development, and that it would be foolish to oppose a large-model product on the basis of a version number alone.

However, just because OpenAI is not training a GPT-5 model does not mean it has stopped extending GPT-4. OpenAI is reportedly still enhancing GPT-4 with various features (including connecting it to the Internet), and, based on past GPT releases, the next version is more likely to be GPT-4.5 than GPT-5.

Altman also addressed what he called the version number fallacy: in consumer technology, there is a misconception that a higher version number directly reflects greater product capability. Users have come to expect operating systems to iterate constantly through new version numbers that promise improvements, but this is essentially just marketing.

Online commentators predict that superintelligent AI will appear within a few years, and predictions tied to version updates abound, with more sophisticated commentators drawing vague charts with axes labeled "progress" and "time" to speculate on AI development and then uncritically citing them as evidence. This logic is in fact flawed.

In Altman's view, if the rate of improvement of large models is not controlled by developers, and if the industry has no uniform measurement standard, then even an ever-growing GPT version number does not fundamentally affect the development of the AI industry; the version number is just a name, and there is no need to read deeper into it. As the leader in frontier AI technology, OpenAI is not unconcerned about risks such as ethics and norms, but it prefers to reduce those risks by figuring them out gradually rather than by halting the development and optimization of new versions of large models.

Instead, Altman argues, the discussion should focus on capability: "demonstrating what these systems can and cannot do, as well as predicting in advance how the future will change over time." That is why confirming that OpenAI is not currently developing GPT-5 does little to comfort those worried about AI.