Simon Willison’s Weblog

Thursday, 20th November 2025

Previously, when malware developers wanted to monetize their exploits, they would do exactly one thing: encrypt every file on a person's computer and demand a ransom to decrypt the files. In the future I think this will change.

LLMs allow attackers to instead process every file on the victim's computer and tailor a blackmail letter specifically to that person. One person may be cheating on their spouse. Another may have lied on their resume. A third may have cheated on an exam at school. It is unlikely that any one person has done any of these specific things, but it is very likely that something blackmailable exists for every person. Malware + LLMs, given access to a person's computer, can find that and monetize it.

Nicholas Carlini, Are large language models worth it? Misuse: malware at scale

# 1:01 am / ai, generative-ai, llms, nicholas-carlini, ai-ethics
