Astra Is Google’s Answer to the New ChatGPT

Pulkit Agrawal, an assistant professor at MIT who works on AI and robotics, says Google's and OpenAI's latest demos are impressive and show how rapidly multimodal AI models have advanced. OpenAI launched GPT-4V, a system capable of parsing images, in September 2023. He was impressed that Gemini is able to make sense of live video—for example, correctly interpreting changes made to a diagram on a whiteboard in real time. OpenAI's new version of ChatGPT appears capable of the same.

Agrawal says the assistants demoed by Google and OpenAI could provide new training data for the companies as users interact with the models in the real world. “But they have to be useful,” he adds. “The big question is what will people use them for—it’s not very clear.”

Google says Astra will be made available through a new interface called Gemini Live later this year. Hassabis said that the company is still testing several prototype smart glasses and has yet to make a decision on whether to launch any of them.

Astra's capabilities might provide Google a chance to reboot a version of its ill-fated Glass smart glasses, although efforts to build hardware suited to generative AI have stumbled so far. Despite OpenAI's and Google's impressive demos, multimodal models cannot fully understand the physical world and the objects within it, placing limitations on what they will be able to do.

“Being able to build a mental model of the physical world around you is absolutely essential to building more humanlike intelligence,” says Brenden Lake, an associate professor at New York University who uses AI to explore human intelligence.

Lake notes that today’s best AI models are still very language-centric because the bulk of their learning comes from text slurped from books and the web. This is fundamentally different from how language is learned by humans, who pick it up while interacting with the physical world. “It’s backwards compared to child development,” he says of the process of creating multimodal models.

Hassabis believes that imbuing AI models with a deeper understanding of the physical world will be key to further progress in AI, and to making systems like Astra more robust. Other frontiers of AI, including Google DeepMind's work on game-playing AI programs, could help, he says. Hassabis and others hope such work could be revolutionary for robotics, an area Google is also investing in.

“A multimodal universal agent assistant is on the sort of track to artificial general intelligence,” Hassabis said in reference to a hoped-for but largely undefined future point where machines can do anything and everything that a human mind can. “This is not AGI or anything, but it’s the beginning of something.”
