AI's Unheimlich-Effect
Plus: Peter Thiel develops military AI / AI-Oasis gets an official release / Stability AI released DeepFloyd / Pepperoni Hug Spot / The Dark Knit and much more
Psychoactive, pseudosocial AI may yield unintended consequences
The Economist has a series of pieces on LLMs: Large, creative AI models will transform lives and labour markets, How generative models could go wrong, The world needs an international agency for artificial intelligence, say two AI experts, How to worry wisely about artificial intelligence, Large language models’ ability to generate text also lets them plan and reason.
The most interesting of the bunch is this one: How AI could change computing, culture and the course of history, which thankfully does not stop at the widely discussed talking points like hallucinations, disinformation and so forth, but gets to questions about how this tech might change human perception and what it does to our psyche. That's a topic I've been interested in ever since I witnessed social media and the web doing the same, with some pretty harsh consequences all over the world.
Technology changing human perception of the world is nothing new, but it is always revolutionary. Language kickstarted our cognitive evolution, writing enabled the agricultural revolution, movable type and the printing press led to decades of religious war in Europe, and the internet and social media gave us a resurgence of tribalism and atomized our shared view of the world. There is no reason to believe that AI will have an insignificant influence on human cognition and how we relate to each other, and it is pretty much guaranteed to go sideways in some ways.
One of those cognitive dangers of AI is hidden in this paragraph:
An obvious application of the technology is to turn bodies of knowledge into subject matter for chatbots. Rather than reading a corpus of text, you will question an entity trained on it and get responses based on what the text says. Why turn pages when you can interrogate a work as a whole?
Because you don't know the questions to ask. Even when you are super familiar with a topic, the unique perspective of a human author, formulated in a body of text, gives you new ideas and viewpoints from which new insights emerge. You can't fully interrogate a text you have not read. By training an AI on a book and then asking it questions about it, you will always lose information, perspective and insight. In the long run, especially when AI enters education, this can lead to bad outcomes. As a companion tool this might be very useful, of course, just like reading a Wikipedia page on a book you are currently reading -- but never as a substitute.
Another point is that AI could very well lead to a world of synthetic ghosts, in which digital twins are trained on your personality as expressed in posts, podcasts, video and whatnot, and the resulting model then potentially lives on forever after you die. What happens to the human psyche when you can converse with a realistic model of your dead mother? We have no idea.
The article names Laurie Anderson, widow of the late Velvet Underground singer Lou Reed, as an early example of this. She has an AI assistant trained on her work and that of her husband: "she does not consider using the system as a way of collaborating with her dead partner", most likely because it is only trained on Reed's and Anderson's creative output, not their personalities. Coming systems, however, will surely be sophisticated enough to pass as “synthetic personality-complete digital twins“.
Freud's concept of the Unheimlich, in which humans can't quite decide whether a thing is in its right place (un-home-ly), or whether it is alive or dead, then becomes central to questions about these mimetic AI models of the dead, which may disrupt our psychological mechanisms of grief, about which I wrote two years ago here.
These are the problems on the horizon I'm most worried about, not paperclip maximizers, not disinformation, and the unknown unknowns in the field of AI reveal themselves quickly these days.
Though I signed the moratorium letter for these reasons, this one from scholars at the University of Leuven in Belgium seems more relevant: We are not ready for manipulative AI.
The recent chatbot-encouraged suicide in Belgium highlights another major concern: the risk of manipulation. While this tragedy illustrates one of the most extreme consequences of this risk, emotional manipulation can also manifest in subtler forms. Once people get the feeling they interact with a ‘subjective’ entity, they build a bond – even unconsciously – that exposes them. This is not an isolated incident. Other users of text-generating AI also described its manipulative effects. (…)
Most users realise rationally that the chatbot they interact with has no understanding and is just an algorithm that predicts the most plausible combination of words. It is, however, in our human nature to react emotionally to such interactions. This also means that merely obliging companies to indicate “this is an AI system and not a human being” is not a sufficient solution.
Some individuals are more susceptible than others to these effects. For instance, children can easily interact with chatbots that first gain their trust and later spew hateful or conspiracy-inspired language and encourage suicide, which is rather alarming. Yet also consider those without a strong social network or who are lonely or depressed – precisely the category which, according to the bots’ creators, can get the most ‘use’ from them. The fact that there is a loneliness pandemic and a lack of timely psychological help only increases the concern.
It is, however, important to underline that everyone can be susceptible to the manipulative effects of such systems, as the emotional response they elicit occurs automatically, even without realising it.
“Human beings, too, can generate problematic text, so what’s the problem” is a frequently heard response. But AI systems function on a much larger scale. And if a human had been communicating with the Belgian victim, we would have classified its actions as incitement to suicide and failure to help a person in need – punishable offences.
As long as AI assistants stay within Microsoft's Office products, I couldn't care less. Who gives a damn about a smartass Clippy. But stuff like the Replika chatbots, the bonds some people form with those AI avatars, and the whole open-source-chatbots-everywhere shebang point to a future where psychoactive AI systems layer over pretty much anything. Look at this:
Imagine a future where Social AGIs are your friends, your mentors, your spirit animals, your guardian angels - they are deeply integrated into your social and emotional lives, and you are better for it. In times of need, and celebration, you might turn to a Social AGI. You might follow one on Twitter, call another late at night, and have yet another one planning your next human social gathering. The future of social interactions is richer, more dynamic, with both humans and Social AGIs together.
I don’t think human interactions will be richer with AI involved as a true social actor, and people seem to have watched the movie “Her“ as a blueprint, not as a warning. Maybe Spike Jonze should’ve added some splosions to make that point more clear.
As I said in my piece on the AI dangers of a synthetic theory of mind: “Move fast and break brains“ — can we not?
Breezer, the band behind AISIS’ Lost Tapes, gets an official release
Instead of listening to shitty AI-Drake tracks, you'd do better listening to the great band Breezer, who made AI-Oasis a thing and gave us the best Oasis album since Morning Glory.
Links
Hardcore right-wing libertarian Peter Thiel is developing AI for warfare in a "legal and ethical way". Not sure if "lol" is the right way to put it, but... lol: "The AIP demo shows the software supporting different open-source LLMs, including FLAN-T5 XL, a fine-tuned version of GPT-NeoX-20B, and Dolly-v2-12b, as well as several custom plug-ins. Even fine-tuned AI systems off the shelf have plenty of known issues that could make asking them what to do in a warzone a nightmare."
AI Spam Is Already Flooding the Internet and It Has an Obvious Tell: "The phrase 'as an AI language model' is starting to flood Amazon user reviews and Twitter bot accounts." More at The Verge.
Inside the Discord Where Thousands of Rogue Producers Are Making AI Music — Good luck to the music industry enforcing copyright and regulating interpolative latent spaces. They gonna need it.
Dating AI: Rizz app will respond to your Hinge or Tinder messages — Authentication systems for real humans on dating sites will be a killer app.
Bloomberg profiles LAION: A High School Teacher’s Free Image Database Powers AI Unicorns. LAION was financed in part by Stability AI and doesn’t respond to rightsholders asking questions about the legality of the project. We’ll see how the lawsuits go, I guess.
Related: An AI Scraping Tool Is Overwhelming Websites With Traffic — tools that don’t respect robots.txt and flood sites with traffic should be considered a DDoS-attack and treated accordingly.
Stability AI released DeepFloyd IF, another open-source text-to-image model with improved results, comparable to Google's Imagen.
Researchers use AI to discover new planet outside solar system
China Says Chatbots Must Toe the Party Line — Can’t wait for Chinese correction camps for bad prompters.
A.I. Imagery May Destroy History As We Know It: “Analog and digital photography previously presented a window into, and proof of, the distant past. A.I. contrived people do not exist: they have never existed and they will never exist except in the digital ether. The people depicted in these images never lived and they will never die. The human condition of the past is no longer relevant.
Will future generations archive these fake images as historically significant? Will historians mistakenly believe these are depictions of real regalia, dances, and ceremonies performed by authentic tribes? And if so, how does that impact today’s tribes that actively document their real history and heritage?“
Stanford finally released their dance choreo AI EDGE - Editable Dance Generation from Music. Here’s the project page, here’s a playground.
'Indiana Jones 5' will feature a de-aged Harrison Ford for the first 25 minutes — Disney has made remarkable progress in its de-aging tech since young Leia and Grand Moff Tarkin in Rogue One. We’ll see if this works over longer timespans than a few seconds. People complained a lot about young Luke in The Mandalorian, but I thought it worked pretty well compared to Leia and Tarkin.
AI cinema: The Anomaly of Kepler-61b, Runway Gen-1 on your phone, Trailer for something by MatthieuGB — and this:
How Much Does 'Nothing' Weigh?: “The Archimedes experiment will weigh the void of empty space to help solve a big cosmic puzzle“.
Privatized space exploration took a hit: Ispace: Japanese lunar lander presumed lost after historic moon landing attempt
Privatized space exploration taking a blow: Is sex in space being taken seriously by the emerging space tourism sector?
Cool sticky tape art by Cuadrigula on Instagram. I especially dig his more typographic stuff.
Goodnite Harry Belafonte, and thanks for the soundtrack to one of the best ending scenes in cinema history, in which a levitating Winona Ryder floats in the air, dancing with ghosts.