DeepMind’s GenEM uses LLMs to generate expressive behaviors for robots
DeepMind's GenEM leverages AI to craft expressive, adaptable robot behaviors, offering a breakthrough in human-robot synergy.
In a new study, researchers at the University of Toronto, Google DeepMind and Hoku Labs propose a solution that uses the vast social context available in large language models (LLMs) to create expressive behaviors for robots. The main premise of the new technique is to use the rich knowledge embedded in LLMs to dynamically generate expressive behavior without the need to train machine learning models or hand-craft a long list of rules. According to the researchers, “One of the key benefits of GenEM is that it responds to live human feedback – adapting to iterative corrections and generating new expressive behaviors by composing the existing ones.”
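The workflow described above — prompting an LLM to compose a robot's existing primitive actions into an expressive behavior, then revising that composition from live human feedback — could be sketched roughly as follows. This is a hypothetical illustration only: the primitive names, prompt wording, and the `call_llm` stub are assumptions for the sketch, not GenEM's actual API.

```python
# Hypothetical sketch of an LLM-driven expressive-behavior loop in the spirit
# of GenEM. A real system would call an actual LLM; `call_llm` below is a
# canned stand-in so the control flow can be shown end to end.

# Assumed set of primitive robot actions available for composition.
PRIMITIVES = ["nod", "gaze_at_person", "wave_arm", "tilt_head"]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a comma-separated action plan."""
    if "feedback" in prompt:
        # Pretend the model revised the plan after a human correction.
        return "gaze_at_person, tilt_head, nod"
    return "gaze_at_person, nod"

def generate_behavior(instruction: str) -> list[str]:
    """Ask the LLM to compose primitives that express a social instruction."""
    plan = call_llm(f"Compose primitives {PRIMITIVES} to express: {instruction}")
    # Keep only actions the robot actually supports.
    return [p.strip() for p in plan.split(",") if p.strip() in PRIMITIVES]

def refine_behavior(behavior: list[str], correction: str) -> list[str]:
    """Revise an existing behavior in response to live human feedback."""
    plan = call_llm(
        f"Current behavior: {behavior}. Human feedback: {correction}. Revise."
    )
    return [p.strip() for p in plan.split(",") if p.strip() in PRIMITIVES]

behavior = generate_behavior("acknowledge the person entering the room")
behavior = refine_behavior(behavior, "make it warmer and slower")
```

The key design point the paper emphasizes is that no model training or rule authoring is needed: the LLM's embedded social knowledge picks and orders the primitives, and iterative corrections simply become new prompts.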
Or read this on Venture Beat