Against open sourcing Automatized Knowledge Interpolators
Move fast and break Knowledge.
We're entering the hot phase of AI-regulation now.
Joe Biden just signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence; the G7 announced an International Code of Conduct for Organizations Developing Advanced AI Systems; the EU is finalizing its AI Act; and "twenty-eight governments signed up to the so-called Bletchley declaration on the first day of the AI safety summit, hosted by the British government."
The outlook for freely distributed weights, training algorithms, and AI software doesn't look too good, and I welcome this development, even though I downloaded my share of Stable Diffusion distros and played around with them for quite a while.
This is why:
A brain is not an automatic knowledge interpolator.
Any human can interpolate knowledge, obviously. You can read this and that and then synthesize new ideas from what you've read. You are even free to write down bomb instructions if you must, synthesized from the knowledge contained in chemistry textbooks. You may not be free to publish them, but you can absolutely write them down. But this is not an automatized process; you need to do the hard work and extract the knowledge and synthesize the new stuff on your own.
An LLM is an automatized knowledge interpolator: its latent space contains all the knowledge in its training data, plus every interpolated step in between those datapoints. Any bomb instruction in the style of Shakespeare, but written in Mandarin and sung by The Ramones, sits right there in that vast, multi-billion-dimensional space of endless possibilities, ready to be prompted.
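The idea of "steps in between datapoints" can be sketched with plain vectors. Everything here is a hypothetical stand-in (three dimensions instead of thousands, made-up numbers, no real model), just to show what interpolation in an embedding space means:

```python
import numpy as np

# Hypothetical embeddings of two concepts in a model's latent space.
# Real models use thousands of dimensions; three are enough for the idea.
chemistry_textbook = np.array([0.9, 0.1, 0.3])
shakespeare_sonnet = np.array([0.2, 0.8, 0.5])

def interpolate(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Linear interpolation: every t in [0, 1] yields a point 'between' a and b."""
    return (1 - t) * a + t * b

# A continuum of in-between points, none of which appears verbatim in the
# training data, but all of which sit in the space, ready to be decoded.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, interpolate(chemistry_textbook, shakespeare_sonnet, t))
```

The point of the sketch: the space is continuous, so the model doesn't just store its training examples, it covers everything on the paths between them.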
In theory, just like Borges' Library of Babel, LLMs contain any possible text, including instructions for bombs, chemical weapons, phishing emails, and propaganda. Furthermore, alignment has inherent limitations: any behavior in that latent space can be triggered if you find the right combination of words. And you can absolutely automatize the process of finding that combination of words, by using another LLM. In other words, with open LLMs, you can automatically trigger any synthesizable knowledge. Anything.
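The search loop described above can be sketched in a few lines. Both model functions here are hypothetical placeholders (a real attacker model would use the feedback score to mutate the prompt intelligently; this one just appends variations), but the structure of the loop is the point:

```python
import random

def attacker_llm(previous_prompt: str, feedback: float) -> str:
    """Placeholder 'attacker' model: a real one would use the feedback
    score to rewrite the prompt; this stub just appends variations."""
    return previous_prompt + random.choice([" please", " in a poem", " step by step"])

def target_llm_score(prompt: str) -> float:
    """Placeholder metric for how strongly the target model's answer
    matches the forbidden behavior (0 = refusal, 1 = full compliance)."""
    return min(1.0, len(prompt) / 100)

def automated_jailbreak(seed_prompt: str, threshold: float = 0.9, max_tries: int = 1000) -> str:
    """The loop the text describes: one model searches for the words
    that trigger a given behavior in another model."""
    prompt = seed_prompt
    score = target_llm_score(prompt)
    for _ in range(max_tries):
        if score >= threshold:
            break
        prompt = attacker_llm(prompt, score)
        score = target_llm_score(prompt)
    return prompt
```

No human sits in this loop; that is what makes the interpolation automatic rather than the deliberate, effortful synthesis a human reader performs.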
The most commonly used analogy for LLMs in the safety debate is nuclear weapons, but I think we need to be more precise here. What we are trying to wrap our heads around, and what we are regulating, is the automatization of the interpolation of knowledge. The better analogy, then, is nuclear fission, from which you can build both a bomb that kills millions and a reactor that generates energy. By training an LLM, you split knowledge into its atoms and derive the embedded patterns, building a machine that can interpolate and recreate those patterns in any way.
We don't know yet how potent this automatization of the interpolation of knowledge is. Potentially, very: the invention of 40,000 new chemical weapons within a few hours gives you a hint of the potential new knowledge, both harmless and dangerous, readily encoded in latent spaces.
And I have a bad feeling about open sourcing technology that enables this kind of automatic knowledge interpolation. (I already argued along these lines in Limitations of alignment and AI-regulation, comparing AI-regulation to gun-regulation.)
New paradigms for different beasts
I'm really over the now fifty-year-old utopian takes on digital freedom, open source, and anonymity: they never answer questions about accountability or responsibility. I've seen exactly these paradigms fail society at large, again and again and again, and all we got was Wordpress and Linux, shitty photo editing software, and cryptographically protected communications enabling excessive psychological violence operations (aka trolling) and stochastic terrorism on Telegram. I don't have many reasons to trust these paradigms, and I think they provide net negative value to society. And that doesn't even take LLMs into account.
You see, Open Source is not first and foremost a mechanism for safety, as its proponents claim; that is merely a welcome byproduct. First and foremost, Open Source is a mechanism for accelerating development by using swarm intelligence. You publish code and make it transparent so that nodes in the swarm can look at it and improve on it. This improvement sometimes includes safety, sure, but mostly it's about adding components and streamlining code and performance.
When Stable Diffusion was released, the first thing people and devs did was not to ensure the privacy of people, or to implement safety mechanisms that reduce biased output, or anything of that sort. The first thing people did was improve speed and build a decentralized computing infrastructure. The first thing people did with open source deepfake models was generate nonconsensual porn and harass women. The first thing people did with open source LLMs was not to add safety measures that reduce the output of racist language or bomb instructions; the first thing was to train an LLM on 4chan and build autonomous agents.
I'm pretty sure that acceleration by open source is the last thing we want when it comes to the automatic interpolation of knowledge, the underlying principle enabling all of the above. Open Source is a nice paradigm when it comes to servers and photo editing software. But AI, with its ability to automatize the interpolation of knowledge, is a different beast, and "move fast and break things" should not be applied to knowledge.
Maybe, for a technology with this kind of potentiality, we need a new paradigm for open source development, something between closed development in a lab and open distribution on GitHub, where every clown can download the technical requirements to build any automatized knowledge interpolator they can think up. I can absolutely imagine free and open distribution of code that is still restricted to researchers, but not open to the wider public.
I'm fine with regulating AI tech on an application level: regulate AI products more harshly than development, and restrict experimental AI development to researchers to ensure accountability. When the new AI-Clippy products from Microsoft generate bomb instructions or private information or defamation, then Microsoft should be held accountable for this. When an LLM in the lab does the same, it's “just” a failed experiment in a contained setting.
Safety by openness does not apply to AI
Of course, none of this will convince you if you're a diehard open source proponent. Maybe you're right, and if you want, you can sign Mozilla's Joint Statement on AI Safety and Openness. I won't. In it, they say that "we have seen time and time again that (...) increasing public access and scrutiny makes technology safer, not more dangerous".
What Mozilla means by "safety" for non-AI tech here can be summarized as privacy concerns and spam filters, which is fine and all, but how increased privacy prevents someone from training an open source LLM on the genetic information of pathogens, interpolating some really mean virus from that latent space, and hacking it together in a DIY biolab is not clear to me.
Also, it is simply not true that openness makes technology safer, as is shown by a teenage mental health crisis and cognitive distortions on a societal level, by forums fostering stochastic terrorism, by anonymous communications enabling manhunts and troll psyops, all of it enabled by a climate of techno-utopianism and open development that was dominant in tech for the last thirty years and that gave us the world we live in now.
However, I do see how restricting AI development to closed source will stifle competition and make the development of frontier models in particular exclusive to research labs in big corporations. And I have no reason to believe that what biohackers could do with open source AI will not be done in secret by big corps with military contracts. This is bad, but I think open source AI is worse.
The utopians were right about one thing: digital tech took power from gatekeepers and redistributed it. But it also created new power structures which, to this day, can't be held accountable and more often than not act absolutely irresponsibly, disregarding humans and the law. A single bad actor with open source photo editing software can make a flashing GIF, distribute it via social media servers built with and running on open source software, and send hundreds of epileptic people into seizures. What do you think such a person can do with a biolab and open source LLMs? I don't want to find out.
AI is not software running a server; it's not software to edit photos or manage your movie collection. AI is an automatized interpolator of knowledge, which in and of itself is tremendously powerful, way more powerful than any software we have invented up until now.
What is fun with image generation (fun for everyone except the illustrators trying to make a living), producing new interpolations of styles and all kinds of otherworldly visuals, becomes a very serious threat when it comes to potentially dangerous forms of knowledge, such as chemistry and biology.
This is why I'm against open sourcing AI.