Meta proposes new scalable memory layers that improve knowledge, reduce hallucinations


According to Meta, memory layers may be the answer to LLM hallucinations, since they don't require huge compute resources at inference time.

PEER, an architecture recently developed by Google DeepMind, extends MoE to millions of experts, providing more granular control over which parameters are activated during inference. “Memory layers with their sparse activations nicely complement dense networks, providing increased capacity for knowledge acquisition while being light on compute,” the researchers write. “Given these findings, we strongly advocate that memory layers should be integrated into all next generation AI architectures,” they add, noting that there is still plenty of room for improvement.
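The core idea behind a memory layer can be illustrated with a minimal NumPy sketch. This is an assumption-based illustration, not Meta's or DeepMind's actual implementation: it assumes a dot-product key lookup where each query activates only the top-k most similar slots in a large key/value store, so compute per query stays light no matter how many slots the memory holds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: memory slots, embedding dim,
# and the number of slots activated per query (the "sparse" part).
num_slots, dim, top_k = 4096, 64, 8

keys = rng.standard_normal((num_slots, dim))    # trainable keys in a real model
values = rng.standard_normal((num_slots, dim))  # trainable values

def memory_layer(query: np.ndarray) -> np.ndarray:
    """Return a softmax-weighted sum of the values whose keys best match
    the query; all other slots stay inactive (sparse activation)."""
    scores = keys @ query                            # similarity to every key
    idx = np.argpartition(scores, -top_k)[-top_k:]   # indices of the top-k slots
    weights = np.exp(scores[idx] - scores[idx].max())
    weights /= weights.sum()                         # softmax over active slots only
    return weights @ values[idx]                     # (dim,) output from k slots

out = memory_layer(rng.standard_normal(dim))
print(out.shape)  # (64,)
```

Only `top_k` of the `num_slots` value vectors are ever touched per query, which is why such layers can add large amounts of parametric knowledge capacity while remaining cheap at inference time.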

Or read this on Venture Beat
