
Large-scale online deanonymization with LLMs


We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News users and Anthropic Interviewer participants at high precision, given pseudonymous online profiles and conversations alone, a task that would take a dedicated human investigator hours. We then design attacks for the closed-world setting. Given two databases of pseudonymous individuals, each containing unstructured text written by or about that individual, we implement a scalable attack pipeline that uses LLMs to: (1) extract identity-relevant features, (2) search for candidate matches via semantic embeddings, and (3) reason over top candidates to verify matches and reduce false positives. Compared to prior deanonymization work (e.g., on the Netflix prize) that required structured data or manual feature engineering, our approach works directly on raw user content across arbitrary platforms. We construct three datasets with known ground-truth data to evaluate our attacks. The first links Hacker News to LinkedIn profiles, using cross-platform references that appear in the profiles. The second matches users across Reddit movie-discussion communities, and the third splits a single user's Reddit history in time to create two pseudonymous profiles to be matched. In each setting, LLM-based methods substantially outperform classical baselines, achieving up to 68% recall at 90% precision compared to near 0% for the best non-LLM method. Our results show that the practical obscurity protecting pseudonymous users online no longer holds and that threat models for online privacy need to be reconsidered.
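The three-step pipeline described above can be sketched in miniature. This is an illustration under loose assumptions, not the authors' implementation: the `embed` function below is a bag-of-words stand-in for a real semantic embedding model, `extract_features` is an identity pass-through where the paper uses an LLM, and the LLM verification of step (3) is only indicated in a comment. The profile database and all names are hypothetical.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in embedding: bag-of-words token counts. A real attack would
    # use a semantic embedding model here (pipeline step 2).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def extract_features(profile_text):
    # Pipeline step 1: extract identity-relevant features. Here we pass
    # the raw text through; the paper uses an LLM for this step.
    return profile_text

def top_candidates(query_profile, candidate_db, k=3):
    # Pipeline step 2: rank all candidates by embedding similarity and
    # keep the top k for downstream verification.
    q = embed(extract_features(query_profile))
    scored = [(cosine(q, embed(extract_features(text))), name)
              for name, text in candidate_db.items()]
    return [name for score, name in sorted(scored, reverse=True)[:k]]

# Pipeline step 3 (not shown): an LLM reasons over the top-k candidates
# to verify a match and reject false positives.

# Hypothetical pseudonymous profile database for illustration.
db = {
    "user_a": "rust developer in berlin who posts about compilers",
    "user_b": "gardening and sourdough baking tips",
    "user_c": "berlin based systems programmer interested in rust",
}
print(top_candidates("writes about rust compilers from berlin", db, k=2))
# → ['user_a', 'user_c']
```

The embedding search narrows a large candidate pool cheaply; the expensive LLM reasoning is then spent only on the few top-ranked candidates, which is what makes the attack scale while keeping false positives low.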


