Astra Is Google’s Answer to the New ChatGPT

Pulkit Agrawal, an assistant professor at MIT who works on AI and robotics, says Google’s and OpenAI’s latest demos are impressive and show how rapidly multimodal AI models have advanced. OpenAI launched GPT-4V, a system capable of parsing images, in September 2023. Agrawal was impressed that Gemini can make sense of live video—for example, correctly interpreting changes made to a diagram on a whiteboard in real time. OpenAI’s new version of ChatGPT appears capable of the same.

Agrawal says the assistants demoed by Google and OpenAI could provide new training data for the companies as users interact with the models in the real world. “But they have to be useful,” he adds. “The big question is what will people use them for—it’s not very clear.”

Google says Astra will become available through a new interface called Gemini Live later this year. Hassabis said the company is still testing several prototype smart glasses but has yet to decide whether to launch any of them.

Astra’s capabilities might give Google a chance to reboot a version of its ill-fated Glass smart glasses, although efforts to build hardware suited to generative AI have stumbled so far. Despite OpenAI’s and Google’s impressive demos, multimodal models cannot fully understand the physical world and the objects within it, which limits what they will be able to do.

“Being able to build a mental model of the physical world around you is absolutely essential to building more humanlike intelligence,” says Brenden Lake, an associate professor at New York University who uses AI to explore human intelligence.

Lake notes that today’s best AI models are still very language-centric because the bulk of their learning comes from text slurped from books and the web. This is fundamentally different from how language is learned by humans, who pick it up while interacting with the physical world. “It’s backwards compared to child development,” he says of the process of creating multimodal models.

Hassabis believes that imbuing AI models with a deeper understanding of the physical world will be key to further progress in AI, and to making systems like Astra more robust. Other frontiers of AI, including Google DeepMind’s work on game-playing AI programs, could help, he says. Hassabis and others hope such work could be revolutionary for robotics, an area Google is also investing in.

“A multimodal universal agent assistant is on the sort of track to artificial general intelligence,” Hassabis said in reference to a hoped-for but largely undefined future point where machines can do anything and everything that a human mind can. “This is not AGI or anything, but it’s the beginning of something.”


