Is GPT-5 in trouble? Report suggests that AI has plateaued | Tech Reader


OpenAI’s next-generation Orion model of ChatGPT, which has been both rumored and denied to be arriving by the end of the year, may not live up to the hype once it does arrive, according to a new report from The Information.

Citing anonymous OpenAI employees, the report claims the Orion model has shown a “far smaller” improvement over its GPT-4 predecessor than GPT-4 showed over GPT-3. Those sources also note that Orion “isn’t reliably better than its predecessor [GPT-4] in handling certain tasks,” specifically coding applications, though the new model is notably stronger at general language capabilities, such as summarizing documents or generating emails.

The Information’s report cites a “dwindling supply of high-quality text and other data” on which to train new models as a major factor in the new model’s insubstantial gains. In short, the AI industry is quickly running into a training data bottleneck, having already stripped the easily accessible social media data from sites like X, Facebook, and YouTube (the latter on two different occasions). As such, these companies are increasingly having difficulty finding the sorts of knotty coding challenges that will help advance their models beyond their current capabilities, slowing down their pre-release training.

That reduced training efficiency has massive ecological and commercial implications. As frontier-class LLMs grow and push their parameter counts into the high trillions, the amount of energy, water, and other resources they consume is expected to increase six-fold over the next decade. This is why we’re seeing Microsoft try to restart Three Mile Island, AWS buy a 960 MW plant, and Google purchase the output of seven nuclear reactors, all to provide the necessary power for their growing menageries of AI data centers — the nation’s current power infrastructure simply can’t keep up.

In response, as TechCrunch reports, OpenAI has created a “foundations team” to circumvent the lack of appropriate training data. Its techniques could involve using synthetic training data, such as what Nvidia’s Nemotron family of models can generate. The team is also looking into improving the model’s performance after the initial training phase, known as post-training.

Orion, which was originally thought to be the code name for OpenAI’s GPT-5, is now expected to arrive at some point in 2025. Whether we’ll have enough available power to see it in action, without browning out our municipal electrical grids, remains to be seen.

