Sharing new research, models, and datasets from Meta FAIR
Meta FAIR is releasing new research artifacts that highlight our recent innovations in developing agents, robustness and safety, and architectures that...
Overall, the Large Concept Model (LCM) outperforms or matches recent LLMs on the pure generative task of summarization, offers strong zero-shot generalization to unseen languages, and becomes more computationally efficient as the input context grows.

The Dynamic Byte Latent Transformer outperforms tokenizer-based models across the board on robustness, with a seven-point advantage on average, and excels at processing long-tail and rare sequences of unseen symbols.

Meta Motivo is able to solve a wide range of whole-body control tasks, including motion tracking, goal pose reaching, and reward optimization, without any additional training or planning.