Suspension of Disbelief (in sentient AI)
Plus: Human beats AI at Go, a 1500€ book about The Shining, a mashup-movie made from shadows in films, MarioGPT and much more.
Here's a short story about my relationship to sentience-simulating electronics: I killed a Tamagotchi once. You can punish a Tamagotchi, and if you do it often enough, it "dies". I never owned one of these things, but a classmate did. She told me about that feature, I wanted to test it, and I hit the button a trillion times. It "died". Her human was upset, and I had a big laugh about that. I bought a new Tamagotchi and apologized for being so mean to her plastic toy, and whenever we mentioned that episode in the coming months, we did it with a smirk, because both of us were in on the joke: a Tamagotchi is not sentient, and all I did was hit a plastic button (and ruin a fun toy, which was mean and rude, and I felt sorry afterwards). End of story.
There is a saying that feels downright revolutionary today: people are not dumb. I don't think many journalists writing about AI are familiar with that idea.
Ben Thompson writes in a piece about Microsoft's Bing Chat AI: "Sydney absolutely blew my mind because of her personality; search was an irritant. I wasn’t looking for facts about the world; I was interested in understanding how Sydney worked and yes, how she felt."
Kevin Roose at the NYTimes, while testing the chat system, encountered "a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine".
Ethan Mollick writes on Twitter: "It isn't just Turing Test passing, it is eerily convincing even if you know it is a bot, and even at this very early stage of evolution."
In contrast, James Vincent wrote a good piece with a neat metaphor that falls flat under scrutiny. In Introducing the AI Mirror Test, which very smart people keep failing, he compares AI-driven chatbots to a mirror test, which Thompson and Roose fail because "they’ve spotted another form of life" instead of a stochastic library.
While I think this analogy is useful for pointing that out, I also think it falls apart when you look closer: the mirror test reflects yourself, not another entity, and the test is whether you can recognize yourself, which is different from recognizing others. Mirrors reflect your image and your movements, not some image and some movement. Even if the chatbot is an AI twin trained on your past, it's still not you in the present moment.
It's this disconnect that creates the illusion: until a few months ago, there was no system in the world that could create "meaningful symbols" except other humans, and I'm putting meaningful symbols in quotation marks because the machine has no clue about meaning. Now we have such systems, and in semiotic terms, they produce signifiers without a signified: Baudrillard's simulacrum, in which symbols reference only themselves. This is why Chomsky says that "AI says nothing about language at all". It's just word-relations, and language is much more than that.
There is no reflection, and AI is not a mirror image; we simply project meaning onto its output. A good analogy for this is pareidolia, when people perceive meaning in random patterns, like seeing rabbits in a cloud formation. And I doubt that the pareidolic, made-up meaning of anthropomorphized AI is actually 'AI is sentient', or 'AI is angry', or 'AI is threatening me', or anything along those lines. I suspect that the true meaning we read into the stochastic output of this other entity is play.
Vincent writes this himself: "It is undeniably fun to talk to chatbots — to draw out different ‘personalities’, test the limits of their knowledge, and uncover hidden functions. Chatbots present puzzles that can be solved with words, and so, naturally, they fascinate writers. Talking with bots and letting yourself believe in their incipient consciousness becomes a live-action roleplay: an augmented reality game where the companies and characters are real, and you’re in the thick of it."
Kids do this all day because, as Johan Huizinga states in "Homo Ludens", his book about the playful human, play is "tremendous fun and enjoyment". We're just doing a grown-up version of that, spiced up with "AI that threatens me" and sexual roleplay on a global scale, because it's fun to pretend you're in Blade Runner.
I did this too a few weeks ago, when I pretended that an AI dissed me, in full knowledge that it was meaningless stochastic output that, of course, was still amusing. I did this as a form of play we call "suspension of disbelief": every time you watch a movie, you switch off critical thinking, because otherwise it's no fun to watch. Any fiction loses its fun when you don't believe in it, so you do believe, except you don't really: you play, as if.
I absolutely think that Ben Thompson, Kevin Roose, and the majority of everyone posting "Sydney just called me a dipshit!" are fully aware that language is much more than word-relations, and that they know exactly what game they are playing with this new toy. People are not dumb.
What we're seeing in articles like those from Ben Thompson and Kevin Roose is, in my opinion, not a failed mirror test, but the undisclosed playful suspension of disbelief.
I have no clue how on earth you could really believe in the sentience, or even a personality, of these systems, except for playing as if, and I wish tech journalists would be more upfront about that.
But then again, I also killed a fake plastic egg, what do I know.
The human desire to consume machine-manufactured emotionality is not a form of brokenness but a mainstay of consumerism, one of its core tenets and quintessential accomplishments. Projecting emotionality onto “Sydney” is not so different from believing that characters on TV shows are autonomous and could have had experiences different from the ones that were scripted, or vicariously identifying with characters in novels. But with chatbots, this kind of projection is demonized as a manifestation of the ELIZA effect, as if it were unique to the uncanny potential of AI.
Noise2Music - I love how these music AI models produce vocals singing gibberish lyrics, the sonic equivalent of gibberish typography in image synthesis.
If you ever wanted a 1500 euro book about Stanley Kubrick's The Shining by the blogger at theoverlookhotel.com, here’s your chance. Oh, and Taschen Verlag also has a new book about The Fantastic Worlds of Frank Frazetta for a tenth of that price.
David Guetta deepfaked Eminem's voice for a DJ set, and whatever you might think about this, we can all agree that whatever a sucker like Guetta makes, it's not art.
"In the Shadow", a new short by Fabrice Mathieu, "a research and editing work based on footages from more than sixty movies showing the most beautiful shadows in the cinema history, from ‘Nosferatu’ to ‘Sin City’." I mentioned Mathieu's work in a piece on AI cinema a few weeks ago, and it's great to see he's still going places.
Twitch’s Mega Popular AI Streamer Is Now Doing Reaction Content: "While it’s fascinating that AI is sophisticated enough to pose as a human Twitch streamer reacting to popular and/or interesting videos".
This is exactly why people watch it. This is not people watching some content that happens to be made by a machine; they watch machine content because it's machine content. The engagement lies not in its content (which is completely irrelevant), but in the fascination with machine content itself. Once AI loses that appeal, this sort of nonsensical machine content may go extinct very fast.
Another possibility is that this becomes a sort of background noise you run while working, or instead of Spotify streams. Who cares what a synthetic Vtuber actually says when it serves not the function of communication but that of noise? But yes, the boundaries have been getting blurry for a while now, and generative AI and Vtubers are a match made in heaven, with many of them already relying on AI tech to make their avatars.
DRONE: An AI drone with a conscience that changes everyone. A short animated film about a malfunction at a CIA press event that causes a Predator drone installed with an ethical AI personality to go rogue as it attempts to understand its purpose in the world.
The Real-Time Brain-to-Image Reconstructions project is working on visualizing your thoughts in real time via AI. I guess I'll be linking my piece on Digital Lucid Dreaming very often in the coming years.
We Asked OnlyFans Creators What They're Thinking About AI-Generated Porn: “younger generations, whose porn options have included VR, animation, and gaming for as long as they’ve been old enough to be interested, will have a much easier time normalizing fully synthetic porn.”
Introducing Total Crap, the First Magazine Written Entirely by AI - Never change, McSweeneys. Never change.
A Visit to Ghibli Park, a Miyazaki Theme Park - “A plastic rice field contradicts the whole idea of Totoro’s world,”
A Class Action Lawsuit Against a Popular A.I. Art Generator Alleges the App Collects Its Users’ Biometric Information Without Their Permission - Interesting turn of events. The complaint seems to state that individually trained Dreambooth Stable Diffusion models contain biometric data. I'm not sure this really holds up, but it's an interesting perspective.