Thoughts on AI safety in this era of increasingly powerful open source LLMs
10th April 2023
This morning, VentureBeat published a story by Sharon Goldman: With a wave of new LLMs, open source AI is having a moment — and a red-hot debate. It covers the explosion in activity around openly available Large Language Models such as LLaMA—a trend I’ve been tracking in my own series LLMs on personal devices—and talks about their implications with respect to AI safety.
I talked to Sharon for this story last week. Here’s the resulting excerpt:
The latest wave of open-source LLMs are much smaller and not as cutting-edge as ChatGPT, but “they get the job done,” said Simon Willison, an open-source developer and co-creator of Django, a free and open-source, Python-based web framework.
“Before LLaMA came along, I think lots of people thought that in order to run a language model that was of any use at all, you needed $16,000 worth of video cards and a stack of A100 GPUs,” he told VentureBeat. “So the only way to access these models was through OpenAI or other organizations.”
But now, he explained, open-source LLMs can run on a laptop. “It turns out maybe we don’t need the cutting-edge for a lot of things,” he said.
To expand on this point: when I said “It turns out maybe we don’t need the cutting-edge for a lot of things” I was thinking specifically about tricks like the ReAct pattern, where LLMs are given the ability to use additional tools to run calculations or search for information online or in private data.
This pattern is getting a LOT of attention right now: ChatGPT Plugins is one implementation, and new packages implementing variations on this theme, such as Auto-GPT, are coming out every few days.
An open question for me: how powerful does your LLM need to be in order to run this pattern? My hunch is that if you have an LLM that is powerful enough to produce reasonable summaries of text, it should also be powerful enough to use as part of that pattern.
Which means that an LLM running on a laptop should be enough to create truly impressive tool-enabled AI assistants—without any need to rely on cloud AI providers like OpenAI.
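To illustrate how little machinery the pattern actually needs, here’s a minimal sketch of a ReAct-style loop in Python. The `complete()` function and the two tools are hypothetical placeholders I’m using for illustration, not part of any specific library: `complete()` stands in for whatever model you have available, local or hosted.

```python
import re

# Hypothetical placeholder: wire this up to whatever model you have,
# whether that's a llama.cpp model on a laptop or a hosted API.
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM of choice here")

# Toy tools the model is allowed to call.
def calculate(expression: str) -> str:
    # eval() is for demonstration only - never use it on untrusted input
    return str(eval(expression, {"__builtins__": {}}))

def search(query: str) -> str:
    return f"(pretend search results for {query!r})"

TOOLS = {"calculate": calculate, "search": search}

PROMPT = """Answer the question below. You can reason in steps like this:
Thought: <your reasoning>
Action: <tool name>: <tool input>
After each Action I will add a line:
Observation: <tool output>
Available tools: calculate, search.
When you are done, reply with:
Answer: <final answer>

Question: {question}
"""

def react(question: str, max_steps: int = 5) -> str:
    transcript = PROMPT.format(question=question)
    for _ in range(max_steps):
        response = complete(transcript)
        transcript += response + "\n"
        # Did the model produce a final answer?
        if answer := re.search(r"^Answer: (.*)", response, re.MULTILINE):
            return answer.group(1)
        # Otherwise, run the requested tool and feed the result back in.
        if action := re.search(r"^Action: (\w+): (.*)", response, re.MULTILINE):
            tool_name, tool_input = action.groups()
            observation = TOOLS[tool_name](tool_input)
            transcript += f"Observation: {observation}\n"
    return "(no answer within max_steps)"
```

Everything model-specific lives behind `complete()`, which is the point: the loop doesn’t care whether the completions come from GPT-4 or a small model running on a laptop, as long as the model can follow the Thought/Action/Observation format.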
However, the ethical implications of using these open source LLMs are complicated and difficult to navigate, said Willison. OpenAI, for example, has extra filters and rules in place to prevent writing things like a Hitler manifesto, he explained. “But once you can run it on your own laptop and do your own additional training, you could potentially train a fascist language model — in fact, there are already projects on platforms like 4chan that aim to train ‘anti-woke’ language models,” he said.
This is concerning because it opens the door to harmful content creation at scale. Willison pointed to romance scams as an example: “Now, with language models, scammers could potentially use them to convince people to fall in love and steal their money on a massive scale,” he said.
Currently, Willison says he leans towards open source AI. “As an individual programmer, I use these tools on a daily basis and my productivity has increased, allowing me to tackle more ambitious problems,” he said. “I don’t want this technology to be controlled by just a few giant companies; it feels inherently wrong to me given its impact.”
I wrote about this more here: AI-enhanced development makes me more ambitious with my projects
This is yet another example of a theme I keep coming back to: in AI, multiple things are true at the same time. The potential for harm is enormous, and the current systems have many flaws—but they are also incredibly empowering on an individual level if you can learn how to effectively use them.
But he still expressed concern. “What if I’m wrong?” he said. “What if the risks of misuse outweigh the benefits of openness? It’s difficult to balance the pros and cons.”
This is a real challenge for me. Sci-fi paperclip scenarios aside, most of the arguments I hear from AI critics feel entirely correct to me. There are so many risks and harmful applications of this technology.
Maybe we can regulate its use in a way that helps mitigate the worst risks... but legislation is difficult to get right, and the pace at which AI is moving appears to be far beyond that of any governmental legislative process.
My current plan is to keep helping people learn how to use these tools in as positive and productive a way as possible. I hope I don’t come to regret it.