The truthiness of false balancing leftwing bias in ChatGPT
Plus: Meta is going Fediverse / Musicians are natural interbrain-synchronizers / GigaGAN / Deepfake-Tucker reads the Vaporeon copypasta / Midjourney v5
A bunch of papers have consistently found a leftwing bias in ChatGPT. The first paper to show this was released in January; others were published a few days ago here and here, via Rolf Degen. This actually makes sense, even though these systems claim to be neutral.
These results are interesting in light of the decidedly leftwing critique of AI systems, namely that they "do run the risk of ‘value-lock,’ or reification of meaning, as they absorb the dominant, hegemonic worldview of their provided data sets", which is to say: algorithmic bias reproducing stereotypes.
One way to look at this is that the training data itself is skewed: media has a well-known leftwing bias, while rightwing power structures are concentrated in finance and economics. Arguably, ChatGPT is trained more on published media than on finance reports. The stereotype these AI systems reproduce, then, is a left-leaning worldview, which is consistent with the results of the papers linked above.
Another perspective is that of Stephen Colbert: at the 2006 White House Correspondents' Dinner, he declared that "reality has a well-known liberal bias". It was a joke aimed at conservative media, which even back then complained about a supposed leftwing bias in the press, a complaint he summarized in his coinage of the term "truthiness", which he later explained as follows:
It used to be, everyone was entitled to their own opinion, but not their own facts. But that's not the case anymore. Facts matter not at all. Perception is everything. It's certainty. People love the President [George W. Bush] because he's certain of his choices as a leader, even if the facts that back him up don't seem to exist. It's the fact that he's certain that is very appealing to a certain section of the country. I really feel a dichotomy in the American populace. What is important? What you want to be true, or what is true? ...
Truthiness is 'What I say is right, and [nothing] anyone else says could possibly be true.' It's not only that I feel it to be true, but that I feel it to be true. There's not only an emotional quality, but there's a selfish quality.
A case of very strong truthiness is the supposed bias around environmentalism, which is connoted as leftwing. One study linked above explicitly mentions uncovering "ChatGPT's pro-environmental, left-libertarian ideology. For example, ChatGPT would impose taxes on flights, restrict rent increases, and legalize abortion. In the 2021 elections, it would have voted most likely for the Greens both in Germany (Bündnis 90/Die Grünen) and in the Netherlands (GroenLinks)".
The problem here is that, at least given the scientific facts of climate change and the consequences that follow from them, ChatGPT's answers are consistent with scientific consensus and very much based on scientific facts. ChatGPT's adherence to science thus registers as political bias. At least in this instance, reality does have a leftwing bias, and we should be wary of ChatGPT resorting to false balance because of the snowflakian feels of conservatives.
Maybe unbiased, pseudo-neutral AI systems are a pipe dream altogether, as long as scientific facts are weaponized for partisan fights. Politics is always a slow process; it lags behind developments in society and the synchronizing function of mass media. This is normal.
Thus, AI systems sucking up the media environment will always be ahead of the curve of politics and show political bias one way or another. Maybe you just can't do much about that, except add some disclaimers that this or that output may be subject to political debate, or whatever.
But, of course, the truthiness of political partisans will always beat scientific facts with its emotional punch, in Large Language Models and otherwise. One-sided models like a "rightwing-christian chatbot positioned against a satanic GPT" or Musk's anti-woke AI are already on their way, and in that world I, as a human, am more than happy to shamelessly show my biases, as offensively as I please.
With LLMs, the audience in Plato's cave just got their own Virtual Reality headsets, to each their own, and, it seems, post-truth is here to stay.
Links
Related to the piece above: Can journalists teach AI to tell the truth?
Meta is building a decentralized, text-based social network - After Medium, Tumblr, and Flickr, Facebook now going for the Fediverse is a big deal. Also, they should look at Reddit, not Twitter: with FB Groups, they have a good starting point there.
NDR Zapp on the Angry German Kid: Ausgerastet und abgestürzt: Der Fall des Angry German Kid ("Freaked out and crashed: The case of the Angry German Kid", in German)
Synchronizing to a beat can predict how well you get 'in sync' with others - people who got the music are natural interbrain synchronizers. Here’s the paper. I’m more than happy to add some beats to my deliberately crazy memetic field theory, given that I run a second substack called GOOD MUSIC. Quote from the article: "Do musicians synchronize their attention more easily with others? Why are some people super-synchronizers while others are unable to synchronize altogether? Do strong synchronizers find it easier to click with others?"
Because music is a wholesome artistic experience, it can easily activate music-evoked autobiographical memory, which, duh - but good to know: Why Does Music Bring Back Memories?
And speaking of music and memory, here's Weezer's Blue Album but it's me and my friend trying to sing everything from memory. It sounds exactly as you'd expect.
A writeup about the paper on Organoid Intelligence I already linked to here: Biocomputing With Mini-Brains as Processors Could Be More Powerful Than Silicon-Based AI. Multimodal OI systems sound very promising: “Different types of organoids — say, those that resemble the cortex and the retina — can be interconnected to build more complex forms of organoid intelligence.” IEEE had another angle on neurocomputing, where scientists “created bioelectronics in live zebrafish and leeches“. True cyborgism, it turns out, is fusing man and machine on a molecular level.
“When they say AiArt has no soul, this is what they mean. Only one of those kids (real or not) actually had success, and you can see it!“
The speed here is staggering, we're nearing real-time synthesis, and the upscaler shows some insanely good results: GigaGAN: Scaling up GANs for Text-to-Image Synthesis: “We introduce GigaGAN, a new GAN architecture that far exceeds this limit, demonstrating GANs as a viable option for text-to-image synthesis. GigaGAN offers three major advantages. First, it is orders of magnitude faster at inference time, taking only 0.13 seconds to synthesize a 512px image. Second, it can synthesize high-resolution images, for example, 16-megapixel images in 3.66 seconds.“
The 'poignant', pseudo-touching tonality of this clip is cringe af, but the tech looks pretty awesome: “[Wonder Studio is an] AI tool that automatically animates, lights and composes CG characters into a live-action scene“. Writeup from TechCrunch: Wonder Dynamics puts a full-service CG character studio in a web platform.
Michael Frank downloaded LAION-5B and “compiled a total URL count by Domain of each subset 2B-en, 2B-multi, 1B-nolang”, finding that "6.5~ million images from DeviantArt are probably in the dataset."
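The per-domain tally Michael Frank describes is easy to sketch. A minimal, hypothetical version in Python, assuming you already have the image URLs extracted from the LAION metadata (the real dataset ships as parquet files, which would need pandas or pyarrow to read; the sample URLs below are made up):

```python
from collections import Counter
from urllib.parse import urlparse

def count_domains(urls):
    """Return a Counter mapping domain (netloc) -> number of URLs."""
    return Counter(urlparse(u).netloc.lower() for u in urls)

# Toy sample standing in for billions of real entries:
sample = [
    "https://www.deviantart.com/a/art/x-1",
    "https://www.deviantart.com/b/art/y-2",
    "https://i.pinimg.com/originals/z.jpg",
]
counts = count_domains(sample)
print(counts.most_common(2))
# → [('www.deviantart.com', 2), ('i.pinimg.com', 1)]
```

At LAION-5B scale you would stream the parquet shards and merge the per-shard counters rather than hold all URLs in memory, but the counting logic stays the same.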
“Here’s what Snap’s AI told @aza when he signed up as a 13 year old girl: How to lie to her parents about a trip with a 31 yo man [and] How to make losing her virginity on her 13th bday special (candles and music)“
People Are Using AI to Make TotalBiscuit’s Voice Say Terrible Things: Mimetic AI models of deceased YouTubers are what I had in mind when I criticized the 'mimetic AI' paper for not going far enough (in German). I expect identity theft and abuse of freely available biometric data (like voices) by users of AI systems that mimic personality traits to grow. We already had the case of a Stable Diffusion checkpoint finetuned on the works of the late illustrator Kim Jung Gi, and with these technologies being open sourced, trolls gonna have a field day. This is why we can’t have nice things, as they say.
Speaking of people using AI to make people say terrible things, here’s people using AI to make terrible people say very NSFW things:
RTFM-Learning is a thing: An AI Learned to Play Atari 6,000 Times Faster by Reading the Instructions.
People are posting examples from Midjourney's v5 testing and the results are mindblowingly good. I already miss the days of weird hands and strange proportions. Roland Meyer aka Bildoperationen on the tweeties has a very good thread about the aesthetics coming out of this: “they simulate a photographic visuality without simulating anything of photography as a lens-based, optical medium”. We then had a pretty nice exchange about the unreality of image synthesis: “Where CGI simulates a world, AI interpolates all worlds, even the unthinkable ones.“
Well, we've seen what happens when chatbots are trained on unfiltered human chats: they turn misogynistic and racist. We've had enough of that in mankind, so we should take the chance to make better artificial intelligence... Though there are no absolute moral values, we've seen enough rightwing misanthropic bs go wrong to at least try...