MIT debuts a large language model-inspired method for teaching robots new skills
MIT this week showcased a new model for training robots. Rather than the standard focused dataset used to teach robots new tasks, the method goes big, mimicking the massive troves of information used to train large language models (LLMs). The team introduced a new architecture called Heterogeneous Pretrained Transformers (HPT), which pulls together information from different sensors and different environments. "While we are just in the early stages, we are going to keep pushing hard and hope scaling leads to a breakthrough in robotic policies, like it did with large language models," the team said.
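To make the idea concrete, here is a minimal toy sketch of what "pulling together information from different sensors" can look like in code. This is not MIT's implementation; all names, dimensions, and the mean-pooling "trunk" are illustrative assumptions. The gist: each sensor modality gets its own small encoder that maps its data into a shared token space, so one shared model can be pretrained on heterogeneous robot data.

```python
# Hypothetical sketch of the heterogeneous-pretraining idea (not MIT's code):
# modality-specific "stems" map each sensor stream into a shared token space,
# and a single shared trunk processes the combined tokens regardless of the
# robot or sensor setup.
import numpy as np

rng = np.random.default_rng(0)
D = 16  # assumed shared token width

def stem(x, in_dim, d=D):
    """Toy linear encoder standing in for a learned, modality-specific stem."""
    w = rng.standard_normal((in_dim, d)) / np.sqrt(in_dim)
    return x @ w

# Two heterogeneous inputs with made-up shapes: camera patch features
# and proprioception (joint angles).
camera = rng.standard_normal((8, 64))   # 8 patches, 64-dim each
proprio = rng.standard_normal((1, 7))   # 7 joint angles

# Both streams land in the same 16-dim token space and are concatenated.
tokens = np.concatenate([stem(camera, 64), stem(proprio, 7)], axis=0)

# A shared trunk (mean-pooling here as a stand-in for a transformer) produces
# one representation from which a robot-specific head could predict actions.
policy_embedding = tokens.mean(axis=0)
print(tokens.shape, policy_embedding.shape)  # (9, 16) (16,)
```

The design point this sketch illustrates is why the approach scales: because every sensor configuration is funneled into one common token format, data from many different robots and environments can all feed the same pretrained model, much as text from many sources feeds one LLM.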
Or read this on TechCrunch