Facebook Pushes Its Llama 4 AI Model to the Right, Wants to Present “Both Sides”
Meta says Llama 4's training data has a left-leaning bias, and it wants the model to be more like Elon Musk's Grok.
Bias in artificial intelligence systems, or the fact that large language models, facial recognition, and AI image generators can only remix and regurgitate the information in the data those technologies are trained on, is a well-established fact that researchers and academics have been warning about since their inception.

In a blog post about the release of Llama 4, Meta's open weights AI model, the company clearly states that bias is a problem it's trying to address. But unlike the mountains of research establishing that AI systems are more likely to discriminate against minorities based on race, gender, and nationality, Meta is specifically concerned with Llama 4 having a left-leaning political bias. "It's well-known that all leading LLMs have had issues with bias—specifically, they historically have leaned left when it comes to debated political and social topics," Meta said in its blog.