Meta’s Oversight Board raises concerns over automated moderation of hate speech
Meta's Oversight Board has raised concerns over the company's ability to effectively moderate hate speech with its automated systems.
Users reported the post, which contained Holocaust denial content, six times after it first appeared in September 2020, but in four instances Meta's systems either determined that the content didn't violate the rules or automatically closed the case. The board notes that some users attempt to evade detection and continue to spread Holocaust denial content by using alternate spellings of words (such as replacing letters with symbols) and by using cartoons and memes. The board also wants to know more about the company's ability to "prioritize accurate enforcement of hate speech at a granular policy level" as it leans more heavily on AI for content moderation.
Or read this on Engadget