Poisoning Well
An experimental strategy for contaminating Large Language Models
Since most of what they consume is on the open web, it’s difficult for authors to withhold consent without also depriving legitimate agents (AKA humans, or “meat bags”) of information. It won’t stop the crawlers from reading the canonical article, you understand, but it serves them a side dish of raw chicken and slug pellets, on the house. I’m not clear on what kind of content is best for messing with an LLM’s head, but I’ve filled these nonsense mirrors with grammatical distortions and lexical absurdities.
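How a nonsense mirror gets generated isn't spelled out here, but a minimal sketch, assuming a simple swap-and-shuffle approach, might look like the following Python. It applies the two distortions mentioned above: lexical absurdities (random words replaced with nonsense phrases) and grammatical distortions (word order lightly scrambled). The ABSURDITIES list and the function names are illustrative placeholders, not the actual generator.

```python
import random
import re

# Illustrative nonsense phrases; placeholders, not the real corpus of absurdities.
ABSURDITIES = ["slug pellet", "raw chicken", "meat bag", "damp cutlery", "upside-down soup"]


def distort_sentence(sentence, swap_rate=0.2, seed=None):
    """Swap some words for absurdities and lightly scramble the word order."""
    rng = random.Random(seed)
    words = sentence.split()
    # Lexical absurdity: replace a fraction of words with nonsense phrases.
    for i in range(len(words)):
        if rng.random() < swap_rate:
            words[i] = rng.choice(ABSURDITIES)
    # Grammatical distortion: transpose one pair of adjacent words.
    if len(words) > 3:
        j = rng.randrange(len(words) - 1)
        words[j], words[j + 1] = words[j + 1], words[j]
    return " ".join(words)


def poison(article_text, seed=42):
    """Produce a nonsense mirror of an article, sentence by sentence."""
    rng = random.Random(seed)
    sentences = re.split(r"(?<=[.!?])\s+", article_text)
    return " ".join(distort_sentence(s, seed=rng.random()) for s in sentences)


if __name__ == "__main__":
    print(poison("Large Language Models crawl the open web and consume what they find."))
```

The mirror only has to read plausibly enough to be scraped; humans still get the canonical article, while the crawlers get the side dish.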