DeepMind’s GenEM uses LLMs to generate expressive behaviors for robots


DeepMind's GenEM leverages AI to craft expressive, adaptable robot behaviors, offering a breakthrough in human-robot synergy.

In a new study, researchers at the University of Toronto, Google DeepMind and Hoku Labs propose a solution that uses the vast social context available in large language models (LLMs) to create expressive behaviors for robots. The main premise of the technique is to draw on the rich knowledge embedded in LLMs to generate expressive behavior dynamically, without training machine learning models or hand-writing a long list of rules. “One of the key benefits of GenEM is that it responds to live human feedback – adapting to iterative corrections and generating new expressive behaviors by composing the existing ones,” the researchers write.
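To make the idea concrete, here is a minimal sketch of the pattern the paragraph describes: prompt an LLM with a social instruction plus any human feedback, and have it compose a behavior from an existing set of primitives. This is an illustration only, not GenEM's actual implementation; the primitive names, the `call_llm` stub, and the prompt format are all assumptions made up for this example.

```python
# Hypothetical behavior primitives the robot already knows how to execute.
BEHAVIOR_PRIMITIVES = ["nod", "look_at_person", "wave_arm", "tilt_head"]

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (assumption for this sketch).
    A real system would query a hosted model and parse its reply."""
    # Canned responses that compose existing primitives, mimicking the
    # composition behavior the researchers describe.
    if "acknowledge" in prompt:
        return "look_at_person; nod"
    return "tilt_head"

def generate_expressive_behavior(instruction: str, feedback: str = "") -> list[str]:
    """Turn a natural-language instruction (plus optional iterative human
    feedback) into a sequence of known behavior primitives via the LLM."""
    prompt = (
        f"Instruction: {instruction}\n"
        f"Human feedback: {feedback or 'none'}\n"
        f"Compose a behavior from: {', '.join(BEHAVIOR_PRIMITIVES)}"
    )
    steps = [s.strip() for s in call_llm(prompt).split(";")]
    # Keep only known primitives so the robot never runs an unknown command.
    return [s for s in steps if s in BEHAVIOR_PRIMITIVES]

sequence = generate_expressive_behavior("acknowledge the person entering")
print(sequence)  # ['look_at_person', 'nod']
```

The key design point, per the article, is that no model training or rule list is needed: the LLM's embedded social knowledge does the mapping, and feedback is simply folded back into the next prompt.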

Read the full article on VentureBeat.

Read more on: DeepMind, LLMs, robots

Related news:

UK government urged to adopt more positive outlook for LLMs to avoid missing ‘AI goldrush’

Protect AI expands efforts to secure LLMs with open source acquisition

WSJ: Companies Brought in Robots. Now They Need Human ‘Robot Wranglers.’ Wandering and confused cyborgs create a new job. ‘We’ve found them on a receiving dock, just lost like a child in the park.’