
Robots that can think for themselves are a long way away

16 September 2024


Whatever Elon Musk thinks

Elon [look at me] Musk’s dream of a home robot is a long way away, according to the former CEO of Everyday Robots.

Hans Peter Brondmo, who headed Google's seven-year effort to build a bot until the search engine outfit shut it down, said that building AI-powered robots that can flexibly operate in the real world is going to take much longer than Silicon Valley believes and promises.

The AI revolution rests on the assumption that many long-awaited technologies, such as humanoid robots, self-driving cars and superintelligent digital brains, are right around the corner; Brondmo said this is unlikely.

However, he worries that as tech companies wake up to this reality they will give up, that the US will squander its lead in the field, and that Silicon Valley won't be "patient enough to win the global race to give AI a robot body."

Everyone agrees that the convergence of robotics and AI is inevitable: robots need more world-savvy AI to get smarter, and AI needs smarter robots to understand the world.

Robots rely on machine learning-based AI to develop the capacity to tackle goals and respond to unfamiliar situations and unexpected obstacles in the real world.

But some experts believe that today's hallucination-prone AI models must be embodied — to encounter the physical world with limbs and sensors — to evolve an understanding of the line between reality and fantasy.

Plenty of companies are pursuing this path. For instance, AI pioneer Fei-Fei Li's new startup, World Labs, aims to ground AI models in real-world understanding using human-like visual data processing.

Everyday Robots spent seven years and a small Google fortune developing a one-armed robot on a wheeled platform.

When Google halted the project in February 2023, the robots were helping clean up researchers' desks, sorting rubbish during the daytime and improvising dances in the evening.

The company said then that some of the team's work and employees would be rolled into Google Research.

Brondmo said that Google's robotics experts were divided over how to create machines that could execute simple everyday tasks, like picking up an apple from a table.

Such a mission turns out to be anything but simple: To accomplish it, you need to understand what an apple is, how gravity works, what's required to hold a round object, how not to crush a fruit and so on.

Some researchers believed the best short-term results would come from a hybrid of AI training and procedural instructions. Others, including Google co-founder Larry Page, favoured moving directly to "end-to-end" (e2e) learning, where you'd hand robots a general task and they'd figure out how to execute it.

"I have come to believe it will take many, many thousands, maybe even millions of robots doing stuff in the real world to collect enough data to train e2e models that make the robots do anything other than fairly narrow, well-defined tasks."

Of course, the more "stuff" robots do in the real world, the more important it will be to build them with safety catches and guardrails.

So far, robot hype is outpacing robot reality.
