Astra Is Google’s Answer to the New ChatGPT

Pulkit Agrawal, an assistant professor at MIT who works on AI and robotics, says Google’s and OpenAI’s latest demos are impressive and show how rapidly multimodal AI models have advanced. OpenAI launched GPT-4V, a system capable of parsing images, in September 2023. He was impressed that Gemini is able to make sense of live video—for example, correctly interpreting changes made to a diagram on a whiteboard in real time. OpenAI’s new version of ChatGPT appears capable of the same.

Agrawal says the assistants demoed by Google and OpenAI could provide new training data for the companies as users interact with the models in the real world. “But they have to be useful,” he adds. “The big question is what will people use them for—it’s not very clear.”

Google says Astra will be made available through a new interface called Gemini Live later this year. Hassabis said the company is still testing several prototype smart glasses but has yet to decide whether to launch any of them.

Astra’s capabilities might give Google a chance to reboot a version of its ill-fated Glass smart glasses, although efforts to build hardware suited to generative AI have stumbled so far. Despite OpenAI’s and Google’s impressive demos, multimodal models cannot fully understand the physical world and the objects within it, placing limits on what they will be able to do.

“Being able to build a mental model of the physical world around you is absolutely essential to building more humanlike intelligence,” says Brenden Lake, an associate professor at New York University who uses AI to explore human intelligence.

Lake notes that today’s best AI models are still very language-centric because the bulk of their learning comes from text slurped from books and the web. This is fundamentally different from how language is learned by humans, who pick it up while interacting with the physical world. “It’s backwards compared to child development,” he says of the process of creating multimodal models.

Hassabis believes that imbuing AI models with a deeper understanding of the physical world will be key to further progress in AI, and to making systems like Astra more robust. Other frontiers of AI, including Google DeepMind’s work on game-playing AI programs, could help, he says. Hassabis and others hope such work could be revolutionary for robotics, an area that Google is also investing in.

“A multimodal universal agent assistant is on the sort of track to artificial general intelligence,” Hassabis said in reference to a hoped-for but largely undefined future point where machines can do anything and everything that a human mind can. “This is not AGI or anything, but it’s the beginning of something.”


