Modern Love is automatic
Self-Radicalization disguised as Chatbot-Therapy / Chinese women hire Cosplayers for real life meetups with virtual boyfriends / Feedly introduces AI-tool to monitor protests / and much more
Self-Radicalization disguised as chatbot-therapy
The BBC asks Would you open up to a chatbot therapist?
In the piece, some Dr Marsden “says he is excited about the power of AI to make therapeutic chatbots more effective. ‘Mental health support is based on talking therapy, and talking is what chatbots do’”.
I used ChatGPT as a shallow form of therapeutic counseling once and asked it about some malaise I had with some people. It confirmed my beliefs and my views on what happened, and I actually felt reassured for two or three minutes. I didn’t take this exercise seriously, but I was still astounded by the reassuring effect even this short exchange had on me. But this is not therapy, or therapeutic, or psychotherapeutic, at all: it’s a machine following commands in a stochastic manner that simulates talking.
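To make the point concrete, here is a deliberately crude sketch of what “simulating talking in a stochastic manner” means at its most stripped-down: a toy bigram sampler that chains statistically plausible words together. This is my own illustrative toy, vastly simpler than any real LLM, but the principle of sampling the next token with no ideas behind it is the same.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which word follows which: the crudest possible 'language model'."""
    followers = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        followers[a].append(b)
    return followers

def babble(followers, start, length=8, seed=0):
    """Chain statistically plausible words together. No ideas, no understanding."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = followers.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# A toy 'training corpus' of pure affirmation.
corpus = "you are right and you are valid and you are so right"
model = train_bigrams(corpus)
print(babble(model, "you"))
```

The output looks vaguely like reassurance because the training text was reassurance; scale that up by a few billion parameters and you get sentences shaped like empathy, still with nobody home.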
“Talking” is defined in the Merriam-Webster dictionary as “to express or exchange ideas by means of spoken words” and “to convey information or communicate in any way (as with signs or sounds)”. ChatGPT has no ideas to express or exchange, and it doesn’t convey information: it conveys synthetic text in “information-shaped sentences”, as Neil Gaiman put it. And it doesn’t communicate, because communication always requires an understanding partner.
Therapy challenges your beliefs and helps you get to some deeper points about them: how they came to be, and why they led you down a path that didn’t turn out so well. Psychologist Jonathan Shedler put it this way: “So this chatbot always cares, is always on your side, and always available — and asks nothing of you in return. In other words, it doesn't teach you to function in the world and relate to others. It teaches you how to indulge in a narcissistic fantasy.”
It’s this narcissistic fantasy I had in mind when I wrote about self-radicalization with open source AI, where you guide the behaviour of your communicative partner, which simply mirrors your beliefs back at you, reflected by a thousand fractured parameters, and simulates a mind where there is none. This means that AI is, per se, not suitable for therapy. As I put it in that piece: “There is no commitment in relationships with a text interface, no real troubles to go through“.
Exactly that, commitment and the troubles, is the whole point of psychotherapy, and both require true understanding, something that AI can’t provide (yet, and possibly never). And this is precisely the essence of the myth of Narcissus, “staring at his reflection in a pond, trying to work out the nature of his unrequited beloved”, which is also “the skeptics’ motif for the search for ‘true understanding’.” And without true understanding, there is no therapy, just a human looking at himself, indulging in a conversational fantasy, and just as in the myth of Narcissus, this can have tragic outcomes.
Gary Marcus has some details about the first known death associated with a chatbot. A mentally unstable man used an open source AI-chatbot based on GPT-J and “talked” himself into a radicalization spiral about his environmental anxiety. Then he killed himself. His widow is convinced that he would still be alive if he hadn’t self-radicalized with an open source chatbot: “Without these six weeks of intense exchanges with the chatbot Eliza, would Pierre have ended his life? No! Without Eliza, he would still be here. I am convinced of it.”
Now, I can imagine some shallow psychotherapeutic chatbot-systems with safety measures for patients who mention certain keywords, but nobody will be able to control openly available AI-systems you can just run on a laptop and use as ersatz-therapy for your anxiety issues. As Marcus put it: “vulnerable patients shouldn’t be talking to chatbots that aren’t competent for this situation”. With this tech openly available, anybody can, especially people who refuse therapy.
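Such a shallow keyword-based safety measure is easy to sketch, and its limits are just as easy to see. The following is a hypothetical illustration of mine; the phrase list and referral text are made up, not taken from any real product:

```python
from typing import Optional

# Hypothetical keyword gate for a therapy-style chatbot. The phrases and
# the referral wording below are illustrative placeholders.
CRISIS_PHRASES = ("kill myself", "end my life", "suicide", "self-harm")

def safety_check(message: str) -> Optional[str]:
    """Return a crisis referral if the message trips a keyword, else None."""
    lowered = message.lower()
    for phrase in CRISIS_PHRASES:
        if phrase in lowered:
            return ("This sounds serious. Please reach out to a human "
                    "counselor or a local crisis line.")
    return None  # no keyword hit; the bot would answer normally
```

A filter like this only catches people who use the expected words. A slow, six-week spiral of anxious back-and-forth, like the one Marcus describes, trips no keyword at all, and nothing stops anyone from stripping such a gate out of a model running on their own laptop.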
The dangers of stochastic parrots have many faces.
Chinese women hire Cosplayers for real life meetups with virtual boyfriends
In a move that totally reminds me of the scene in Blade Runner 2049 in which the AI-hologram Joi hires the replicant prostitute Mariette to gain a physical body and have sex with K, Chinese women these days hire cosplayers to act as their virtual boyfriends and meet them in real life:
Rynee Ren’s first date with Zuo Ran was everything she’d dreamed of. They rode the carousel at the park. They visited a perfume shop, where they created their own unique scent together. Before parting, they even shared a passionate kiss.
The only issue was: Zuo Ran wasn’t real. The person who Ren kissed was actually a female cosplayer, who the 20-year-old had hired to play Zuo Ran — a character from her favorite video game — for the afternoon.
Welcome to the brave new world of “cos commission” — a wildly popular new service that is helping women across China bring their virtual boyfriends to life.
Modern Love is automatic:
Feedly introduces AI-tool to monitor protests “posing a risk to your company’s assets”
Feedly is going full surveillance capitalism. In a now deleted blog-post preserved on the Internet Archive headlined How to track the protests posing a risk to your company’s assets with Feedly AI, the company behind the popular RSS-reader is introducing a new AI-tool with which you can “track riots, strikes, and rallies that pose a risk to a company’s assets and employees”.
The funny thing about this is that RSS-tech like Feedly became popular with the beginning social media revolution of the mid-2000s, namely with blogs, the primordial soup of social media. One main staple of blogs was, you may have guessed it, engulfing corporations in heavy criticism and protest when they fuck up. In Germany we have a nice and sweet word for this: the Shitstorm, maybe the most prominent form of social media culture back then, which also gave rise to the outrage culture we see today.
It’s a bit funny to me, having been very online for nearly twenty years now, to see the very tech that laid the basis for shitstorms and democratized heavy criticism, especially of corporations, introduce an AI-tool that works in the exact opposite direction. As the deletion of their blog post suggests, they came face to face with a good measure of that culture.
Molly White has a good write up on this: Feedly launches strikebreaking as a service.
They talk about how “the AI tool was never designed to help companies silence legitimate protests”. (How Feedly’s AI would determine what is a “legitimate protest” is not addressed.)
But the images from the blog post tell a different story, boasting how their model can identify relevant snippets that would signify “risky protests”, such as:
“Over 250,000 people took part in demonstrations against pension reform”
“They urged everyone to join the march”
“Britain’s railways face paralysis as unions resume strikes”.
Doesn’t sound that “risky” to me.
I switched from Feedly to Inoreader three years ago and I think it’s by far the superior RSS-reader.
Speaking of AI and psychotherapy: Baihan Lin et al introduce SafeguardGPT, an “approach to improving the alignment between AI chatbots and human values. By incorporating psychotherapy and reinforcement learning techniques, the framework enables AI chatbots to learn and adapt to human preferences and values in a safe and ethical way, contributing to the development of a more human-centric and responsible AI.”
This reminds me of the 2019-paper on Machine Unlearning, which sounds a lot like “therapy for AI-systems” to me. Here’s a German write-up on that paper.
Co-Writing with Opinionated Language Models Affects Users' Views. Well, this is great news for devs of opinionated rightwing-chatbots, innit?
Another great interview with James Bridle on his latest book “Ways of Being”, which is sitting on my unread pile beside my couch: There's Nothing Unnatural About a Computer.
Developers Are Connecting Multiple AI Agents to Make More ‘Autonomous’ AI. I don’t like autonomous AI-systems, and I don’t think these developments are a good idea. Andrej Karpathy seems to think they are, though, and here’s a voice-controlled autonomous GPT4-coding assistant that “initializes my project, builds my app, creates a GitHub repo, and deploys it to Vercel”.
Midjourney vs Adobe Firefly. The latter doesn’t know trademarked characters like Deadpool or Super Mario. This is obvious to me, because reliability on copyright issues is existential for professional creative tools. Game dev studios already seem to ban AI-art in contracts with creatives for copyright reasons, and tools like Adobe Firefly or Shutterstock’s image generator are one way to provide this reliability. This is why I recently said that Adobe may eat the lunch of Midjourney and Stable Diffusion in the long run, even though both yield better results in technical quality.
When I learned through Lex Fridman’s interview with Sam Altman that the CEO hasn't seen "Ex Machina" by Alex Garland, I tweeted that “NOW i'm worried, and i only mean this half-jokingly. Any AI-researcher in top notch positions should be familiar with all the best stories about AI-safety and ethics.”
The trailer for The Artifice Girl looks like another entry for this type of story:
Once AI has taken over and turned everything into plastic, i imagine the future world to look like this new teaser for the upcoming Barbie movie:
And while we’re at it, the new Spider-Verse trailer looks incredible as expected:
Global warming is killing Indians and Pakistanis: “Between 2000 and 2019, South Asia saw over 110,000 excess deaths a year due to rising temperatures, according to a study in Lancet Planetary Health, a journal. Last year’s hot season, which runs from March until the arrival of the monsoon in late May or early June, was one of the most extreme and economically disruptive on record. This year’s could rival it.”
Ice-T doesn’t give a flying fuck about fonts and calls caring about them “weirdo shit”. I’m a trained typographer and wear my weirdo shit with pride.