AI Disclaimers in Political Ads Backfire on Candidates, Study Finds

Many U.S. states now require candidates to disclose when political ads use generative AI, reports the Washington Post. Unfortunately, researchers at New York University's Center on Technology Policy "found that people rated candidates 'less trustworthy and less appealing' when their ads featured AI disclaimers..." In the study, researchers asked more than 1,000 participants to watch political ads by fictional candidates — some containing AI disclaimers, some not — and then rate how trustworthy they found the would-be officeholders, how likely they were to vote for them, and how truthful their ads were. "The candidate who was attacked was actually rated more trustworthy, more appealing than the candidate who created the ad," said Scott Babwah Brennen, who directs the center at NYU and co-wrote the report with Shelby Lake, Allison Lazard and Amanda Reid. The article notes that study participants in both parties "preferred when disclaimers were featured anytime AI was used in an ad, even when innocuous."


Related news:

- Apple study proves LLM-based AI models are flawed because they cannot reason
- Viewers don't trust candidates who use generative AI in political ads, study finds
- Most US TikTok users aren't following political accounts, study says