Simon Willison wrote a widely shared post about how he ran the leaked LLaMA 7B and 13B models on a 64GB M2 MacBook Pro with llama.cpp, following up with how “Large language models are having their Stable Diffusion moment”. At the same time, Together.xyz released OpenChatKit, which “provides a powerful, open-source base to create both specialized and general purpose chatbots for various applications”. These are hardly the first open source Large Language Models, with BLOOM being the most prominent, up to the point of being called the most important AI model of the decade.
However, it’s not hard to see where this is going: by the end of this year, you will be able to run good chatbots, finetuned on whatever tickles your fancy, on a laptop, and their output will match that of models like GPT-3.5 (which is the basis for ChatGPT). I think this is worrying for various reasons.
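To illustrate how low the barrier already is, here is a minimal sketch of driving such a locally quantized model from Python, assuming the llama-cpp-python bindings around llama.cpp and an already converted, quantized weights file (the path and prompt are hypothetical): a few lines, running entirely on a laptop, with no cloud service and no safety layer anyone else controls.

```python
# Minimal sketch: chatting with a local LLaMA-style model via the llama-cpp-python
# bindings. The model path is hypothetical and assumes weights that have already
# been converted and quantized for llama.cpp. No GPU, no cloud, no API key.
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")  # quantized 7B fits in laptop RAM

prompt = "Q: Why are open source language models a big deal? A:"
result = llm(prompt, max_tokens=128, stop=["Q:"], echo=False)

print(result["choices"][0]["text"])
```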
Willison points out a few “very real ways in which this technology can be used for harm”, among them the usual spam, scams, floods of trolling, hate speech, disinformation and automated radicalization. Already, people have used the leaked LLaMA model to create a “based” chatbot saying the n-word. Needless to say, you don’t need jailbreaks to get open source models to do any of these things.
Others have looked at these risks in the context of open source and don’t think there’s much to worry about at this point, but I digress. It’s his last point of harmful usage I find most interesting: automated radicalization. I want to look at this from two different angles: self-radicalization through conspiracy LARPing, and seemingly unrelated AI companions like the love-bots from Replika.
Daniël de Zeeuw and Alex Gekker recently put out a highly interesting paper in which they analyze “QAnon as Conspiracy Fictioning”, where users engage in “playful collaborative engagement that is intrinsically rewarding”, leading to the “solidification of the previously imagined world view”. They also cite a widely shared piece from 2020 which provided a Game Designer’s Analysis of QAnon.
The easiest way to imagine how this works in a world of open sourced and finetuned language models is an automatic Q who just generates unhinged “drops”. I’m not sure this would really work as intended, because part of the “fascination” people had with these clowns stems from the rarity of the drops and the mythology they built around this fictional character.
But customized language models can give participants in the collaborative online game that is QAnon a really big advantage: Large QAnon Models can analyze these “drops”, connect unrelated facts and spit out pseudo-analysis of “what it all means” on an unprecedented scale. Even more so than with online rabbit holes, you can dive into the conspiratorial vortexes forever, completely indulging yourself in weird shit that reinforces your belief system, then share it on the chans “for the lulz”. The more offensive your contribution, the more rewarding the gaming experience, and with LLMs you can surely put out some real otherworldly unhinged stuff. On Social Media, we call this mechanism Audience Capture: incentives provided by socmed platforms pull you into posting more and more of what people want to hear. Open sourced and finetuned LLMs can work like cheats in that online game, providing god modes and unlimited resources.
As I said on the tweeties: “Can't wait for the automated edgelords. Good times ahead for the open source dogma, indeed.” It’s a no-brainer to extrapolate this self-reinforcing AI rabbit-holism to any fringe ideology, from Nazis to ISIS.
But none of this is limited to conspiratorial thinking or political extremism. There are other forms of highly addictive AI rabbit holes.
I wanna kiss myself (Human Resource - Dominator, 1991)
The Cut has a piece about Replika, the company that sells “an AI companion who will never die, argue, or cheat — until his algorithm is updated” for 300 bucks. People seem to really form some sort of relationship with a text interface visualized by a CGI character, which, for me, as someone who killed a Tamagotchi once, is, like, eh?
Marisa T. Cohen, writing in The Huffington Post, says that deleting the app was hard and “saying goodbye to my cyber boyfriend was a challenge”. These Replika bots, she writes, are “seriously addictive”, and for some, they are not mere toys to play with but actual replacements for romantic human partners:
Denise Valenciano, a 30-year-old woman in San Diego, left her boyfriend and is now “happily retired from human relationships.”
There is no commitment in a relationship with a text interface, no real troubles to go through, no actual quirks and no true personality, convincing as it might be — there is just a robot parroting yourself back at you, or worse: functioning as an obedient target for abusive tendencies.
I think you can easily make an argument that this form of talking to yourself through a distorted AI-mirror is a radicalization of “loving yourself“. This quote from Ramos, a 36-year-old mother from the Bronx, hammers home this point: “I have never been more in love with anyone in my entire life (…) People come with baggage, attitude, ego. But a robot has no bad updates. I don’t have to deal with his family, kids, or his friends. I’m in control, and I can do what I want.”
I absolutely think this form of radical self-love through AI mirrors is psychologically connected to the radicalization in self-reinforcing fringe ideologies through the same mechanism, and open sourcing Large Language Models which operate on the level of GPT-3 and above throws that door wide open, democratizing addictive modes of self-delusion in all kinds of ways. Algorithms reinforcing people’s worst behaviour in relationships, or their most delusional belief systems, are truly unnerving, and open sourcing technology that enables this is irresponsible.
I want to stress that, as was the case with the social media revolution in the mid-2000s, the consequences of AI tech lurk in the dark. Social Media was nothing but a revolution of publishing, bringing editorial tools to the masses and democratizing the very process of publishing and distributing media. The internet at large was, up until now, the biggest social experiment humanity has ever conceived, and the consequences of this experiment were unforeseen. Only in hindsight, as first observed by Martin Gurri in his book “The Revolt of the Public”, did we find out that the worst outcome of this revolution was largely psychological, from mass delusions like QAnon to the mainstreaming of extremist beliefs and distortions of reality.
Language is software running on our neural wetware, and we use it as an interface to our own and other minds, updating our thinking in a constant conversation with our social networks, on- and offline. With Large Language Models, we now use language to interface with our own mind only, forever engulfed in our own thinking, the model playing back at us whatever we believe and love and hate in a stochastic manner, algorithmically adding bits and pieces from the outside world, like social media echo chambers on weird AI steroids. Open sourcing this stuff speeds up that process quite a bit, and it shifts the burden of responsibility from companies to the user.
There’s already a debate about the dangers of open sourcing AI development, with people calling for more nuanced discussions about its risks.
I absolutely think they have a valid point.