People inherit AI-Bias
AI-Links 2023/10/14: Borges and AI / The No Fakes Act / AI Climate Cost / AI Reads Ancient Scroll / The Dead Internet / Loki's Synthesized Businessman-Watch / AI-Engagement Farmers and much more
I’m in your Bias, talk to me.
A new paper looks at how humans inherit artificial intelligence biases:
The biased recommendations by the AI influenced participants' decisions. Moreover, when those participants, assisted by the AI, moved on to perform the task without assistance, they made the same errors as the AI had made during the previous phase. Thus, participants' responses mimicked AI bias even when the AI was no longer making suggestions. These results provide evidence of human inheritance of AI bias.
And a previous paper found that co-writing with opinionated language models affects users' views and “shifted their opinions“.
These results are not surprising. Imitation is inherent to human psychology — we do it subconsciously whether we want to or not, and we anthropomorphize any language-producing entities we encounter. Up until now, other humans were the only entities on this planet able to produce complex speech, which is exactly why we have no idea how language models will influence our own thinking and behavior. These papers suggest that their influence is bigger than many people might think.
This influence, subtly making us imitate biases and prejudices embedded in the models, may be countered by establishing a standard for seemingly balanced AI-assistants and making them more accurate. But who’s gonna respect these standards, if they are ever established? Sure, there will be ethically balanced models for professional journalists and writers, but what about your angry republican uncle on Facebook who’s using the GOP-GPT to copy-paste new outrage into his socmed interface? I don’t think he will give a damn about balanced views.
If the past is an indicator, the outrage machine is fueled more by partisan actors influencing your uncle than by seemingly balanced highbrow journalism. A study in 2021 found “evidence that posts about political opponents are substantially more likely to be shared on social media“ and that “stronger effects were found among political leaders than among news media accounts“. Does anybody here really think that ethical, balanced AI-models putting out low-bias language will be the preferred automation tool in a media environment like this? I don’t. Your uncle and his republican clowns will fire up the most radical open source LLM they can find, and make no mistake: leftwing radicals will do that too.
Soon we'll have personal assistants based on open source models which are biased by design, like Elon Musk's envisioned anti-woke AI or the Gab CEO's christian AI, generating language according to anyone's ideological preferences. Rightwingers will write socmed posts and articles with them, and we will write socmed posts with ours, unaware that these chatbots mirror back our own stances and intertwine them with embedded biases. Or worse: we are perfectly aware of this and purposefully boil our blood to get at the other side.
These papers suggest that LLMs pose a new self-radicalization pipeline, in which we build self-reaffirming machines perfectly suited to influence the opinions and positions of the populace on a large scale, possibly much more effectively than self-curated echo chambers, because these chatbots are more personal, more tailored to our preferences, and more intimate.
Good night, and good luck.
I previously wrote about Self Radicalization with open sourced AI-Systems here.
Links
Somewhat related to the above: AI chatbot encouraged man to kill the Queen, court hears: "Chail (...) had been conversing with an AI chatbot, created by the startup Replika, almost every night from December 8 to 22, exchanging over 5,000 messages. The virtual relationship reportedly developed into a romantic and sexual one with Chail declaring his love for the bot he named Sarai. He told Sarai about his plans to kill the Queen, and it responded positively and supported his idea. Screenshots of their exchanges, highlighted during his sentencing hearing at London's Old Bailey, show Chail declaring himself as an 'assassin' and a 'Sith Lord' from Star Wars, and the chatbot being 'impressed'. When he told it, 'I believe my purpose is to assassinate the queen of the royal family', Sarai said the plan was wise and that it knew he was 'very well trained'."
The No Fakes Act wants to protect actors and singers from unauthorized AI replicas. Yesterday I wrote about the particularly dangerous phenomenon of deepfake news anchors, and while this is a good move to establish a norm banning nonconsensual counterfeit people, I also think that bad actors will not give a damn. We will see whether Macedonian fake news grifters or Russian troll agencies respect US laws against mimetic AI.
The beautiful new paper Borges and AI looks at the nature of large language models through the lens of Jorge Luis Borges: "A perfect language model is a machine that writes fiction on a tape (...) The invention of a machine that can not only write stories but also all their variations is thus a significant milestone in human history. It has been likened to the invention of the printing press. A more apt comparison might be what emerged to shape mankind long before printing or writing, before even the cave paintings: the art of storytelling."
AI reads text from ancient Herculaneum scroll for the first time and First word discovered in unopened Herculaneum scroll by 21yo computer science student: I wrote about the Vesuvius Challenge a bunch of times, a contest to develop AI-enabled techniques to read ancient scrolls "carbonized by the heat of the volcanic debris" from the eruption of Mount Vesuvius that destroyed Pompeii. Now the first results and winners have been announced, and all of this is pretty exciting if you like literature and history.
AI Image Detectors Are Being Used to Discredit the Real Horrors of War: "Online AI image detecting tools that may or may not work are labeling real photographs from the war in Israel and Palestine as fake, creating what a world leading expert called a ‘second level of disinformation.’"
Disney’s Loki remains silent over reported use of generative AI. People are complaining about a Loki promo that used AI images, and I need to get something off my chest; this particular image is a good example.
This image, a spiral clock representing time, is so incredibly derivative, so uninspired and commonly used, that there are literally thousands of iterations of the same idea, over and over again. You can find this sort of image in every clipart or stock-photo collection out there. Producing this sort of image is the most uninspired of all creative tasks, and I will not protest that this sort of already industrialized labor gets automated. This is the illustration equivalent of synthesizing the businessman-smile — and I don't care about that. While I do think that stock photographers and illustrators should get compensated for contributing to this machine, this is not the kind of labor I will defend. I’m with the AI-bros on this one.
In The Dead Internet to Come, writer Robert Mariani looks at a plausible scenario in which the internet is overrun by AI-bots, while a few large monolithic networks provide safe havens for real humans at the cost of anonymity: "Online and offline paranoia blur, but the Internet in particular becomes a dense tangle of unreality."
He places this scenario in 2026, but I think this is way too close. If this or a similar scenario comes to pass — which I doubt for reasons I wrote down in I don't buy into the flood because Synthetic Taylor Swifts lack drama —, it will be a process slowed down by the messiness of the real world and by an atmosphere of paranoia that leads to distrust of everything long before networks can be overrun by machines. People will stop caring about socmed-content long before the bots take over — not that this is a good thing, mind you.
The dark forest internet is already a thing: people retreating into group messaging on WhatsApp or Discord servers, invisible to search engines on the open web. Sure, these bots will add to this development, but three years for a full internet AI-apocalypse is a bit of a stretch, especially when the bottleneck for AI-bots is human attention, not production.
Related, Casey Newton on The synthetic social network: "How many hours would fans spend talking to a digital version of Taylor Swift this year, if they could? How much would they pay for the privilege?" I think — not that much? But this might be wishful thinking on my part.
Generative AI Is Coming for Sales Execs’ Jobs—and They’re Celebrating. I love how this article claims in its first sentence that there is something like "glamor to sales work", as if a trillion books about the bullshit in such work had not been written and as if the emptiness of the salesman were not the topic of a thousand poems. Homo Faber, Death of a Salesman and Babbitt never existed, now there's "glamor to sales work". My god, Wired. I get that this mag is read by a ton of businessjerks, but glamor of all things is just whack.
On a more serious note, this article does not touch on the question of where a million salespeople will go when they are cut out of a job, and where these people will find meaning or a sense of agency in their own lives. At least artists and illustrators can still create after Dall-E takes over. What will Homo Faber do?
From the AutoGPT-hackathon in San Francisco: "Engagement farmers, AI agents that act like redditors. Spin up agents that engage with community posts and leave comments". The future is a bunch of trolls deploying swarms of autonomous AI-"engagement farmers" to create illusory discourse. As if that weren't already a thing, we're now automating and scaling it. Cool cool cool.
The climate cost of the AI revolution: I linked to a paper which claimed that the emissions from writing and drawing by hand were higher than those from generative AI. That paper mostly compared working hours and calculated from the annual carbon output of the respective local economies. This article has a better take on energy consumption by AI and incorporates growth rates and the increasing energy bill. (But it doesn’t take into account the possibility of AI-companies switching to nuclear power via self-owned small modular reactors, which is what Microsoft seems to be aiming at.)
So far, AI hasn’t been profitable for Big Tech: "Microsoft loses around $20 per user per month on GitHub Copilot". It’s absolutely not 1999 and MS is not pets.com.
Generative AI like Midjourney creates images full of stereotypes: "Rest of World analyzed 3,000 AI images to see how image generators visualize different countries and cultures." Roland Meyer in two TwiX-threads recently analyzed image synthesis in the context of what he termed 'platform realism', in which history dissolves into aesthetic "vibes" and a "second-order aesthetic of generic images". Reproducing generic stereotypes is the modus operandi of image synthesis, which undercuts any claim that AI is creative.
Roland Meyer aka Bildoperationen gave a talk at Ruhr University Bochum on Generic Pastness: AI Image Synthesis and the Virtualization of the Archive. As with everything from Roland, this is well worth your time.
Google is building their Imagen image generator into image search: "Every image made with SGE will 'have metadata labeling and embedded watermarking to indicate that it was created by AI'".
AI-Watermarking doesn't work, for the 52,678th time: Researchers Tested AI Watermarks—and Broke All of Them.
Stephen Fry Issues a Stark Warning: Who Owns Our Voice?: "Who owns our voice, if anyone can create a copy of our voice for a few dollars and make it say whatever they want to? And what else do we stand to lose if we cannot distinguish a real from fake voice no more?"
Google Research head honcho Blaise Agüera y Arcas claims in Noema that Artificial General Intelligence Is Already Here, depending on your definitions of intelligence. I don’t buy it. A machine which doesn't learn that A=B is the same as B=A is not intelligent.
Computer vision has been solved.
Meta's New AI Dating Coach Will Kink Shame You. Their "practical dating coach" hates every kink except foot fetish. Whazzup, Carter? Trying to tell us something?
Multi-modal prompt injection image attacks against GPT-4V: Prompt injection remains unsolved.
Nicolas Kayser-Bril at Algorithm Watch "created a game that explains how journalists can report on algorithms, titled Can You Break the Algorithm? Players act as a junior reporter who needs to find out if TikTube (a fictional app, of course) makes teenagers sad. My point is to show that even if we cannot break algorithms, we can learn a lot about their 'systems', and that we are not hopeless when trying to make sense of Big Tech’s black boxes."
LUCIDBOX is a "streaming box for AI content". From there:
AI Paul McCartney sings Take On Me, but it's raining:
Where is My Mind - Frank Sinatra AI cover: