Populist Media Manipulation in Times of Synthetic Journalism
How German right-wing politics games the media before an election, and why AI journalism is not up to that task.
Google's News Initiative and the journalism think tank Polis recently released a report on the use of AI technology in the newsroom. The report follows up on an initial survey conducted in 2019, and it is particularly interesting because the new survey took place in early summer this year, after the rapid rise of ChatGPT and generative AI.
The 90-page document (here's the PDF) includes analyses and summaries of a global survey of 105 journalistic organizations from 46 countries, including Reuters, AP, AFP, NPR, The Economist, and other renowned institutions.
According to the report, 75% of the surveyed news organizations are already using AI technologies in some form. This includes solutions for automatically generating transcripts from audio files and Optical Character Recognition (OCR) for converting scanned documents into text. Furthermore, 80% of respondents anticipate increased use of the technology in newsrooms, but 60% express ethical concerns, ranging from AI-generated content falling below quality standards to job displacement and resistance from employees.
Overall, the report suggests that journalistic organizations are aware of their role as the fourth estate and are cautiously embracing the new possibilities of AI technology and editorial automation while remaining skeptical. For instance, The Guardian and Wired have established their own standards for the use of AI technology and made them transparent, setting a gold standard for newsrooms and news organizations.
A greyer area is an AI-journalistic practice being rolled out to readers worldwide right now: automated summaries of reporting. The reputable German newspaper FAZ rolled out a feature one week ago (link in German) that lets you generate article summaries with a click on a magic wand. You can think of these summaries as an automatically extended version of the subheadings found under headlines. The reason is simple: short summaries increase engagement, reading time, and click-through rates. The Norwegian newspaper Verdens Gang, one of the largest in the country, also uses AI-generated summaries in its online reporting and told the International News Media Association some details: several candidate summaries are generated within the content management system after the piece is written; an editor then selects one and edits it if needed. This is a fairly reasonable use of the tool, as no generative reporting goes directly from machine to reader; there is human oversight.
But this doesn’t mean the practice can’t be gamed for media manipulation.
The needle in the haystack
Tomorrow, Germany holds state elections in Bavaria and Hesse, and they are closely watched because of the recent surge in popularity of the hard-right AfD. German domestic intelligence is monitoring the entire party on suspicion of right-wing extremism, and courts have ruled that you are allowed to call its most prominent leader, Björn Höcke, a Nazi and a fascist. It's safe to say they are right-wing extremists. (Note: all links in this section are in German.)
On Thursday, just before appearing at a rally, party co-chairman and lead spokesman Tino Chrupalla was brought to a clinic, telling police and the press about an alleged "violent incident" in which he felt a "puncture" in his upper arm. This information is documented in an official report in which doctors reproduce the supposed victim's self-report, and, crucially, not findings from a medical investigation of their own. Meaning: they simply wrote down what Chrupalla said. Prosecutors overseeing the case say the medical report is not backed by witness accounts or the police investigation.
You have to look hard to find that information.
If you scan the articles about the incident, you get the impression that Chrupalla was indeed attacked by someone and collapsed. The crucial bit of information, that the hospital discharge papers are based on the self-reporting of the chairman of a fascist party, is not found in the summaries or subheadings accompanying the flood of articles. You can only find it by reading closely, e.g. in this article, where it is mentioned at the end of the second paragraph, while this article from a local outlet in Cologne does a slightly better job.
And indeed, the English-language outlet for German news, Deutsche Welle, ran the headline "AfD leader Chrupalla hospitalized after 'violent incident'", while Elon Musk, the richest man in the world, engages with conspiratorial accounts screaming "assassination attempt" at the top of their lungs.
You also have to read this event in the context of the "evacuation" of AfD party leader Alice Weidel, who cancelled an appearance at a rally and allegedly had to "flee her home" in Switzerland due to "threats" and was brought to a "secret safe house". In fact, she stayed in her hometown for a whole week after the alleged threats were made, her kids walking to school unprotected, and is now residing in Mallorca, while the police contradict all of these reports.
Given the reputation of this hard-right party, which is under observation by domestic intelligence and whose leaders take pride in their record of media manipulation, it is very, very hard not to see a concerted effort to game media reporting on the party in the week before two crucial elections and to ramp up the victimhood narrative, which is one of the foundational elements in every definition of fascism.
How do you think an AI would summarize these events, when even human reporters fail to underline that this is a fascist leader apparently playing up an incident that may be completely made up, a leader from a party famous for media manipulation tactics, one that uses every technique in the playbook, from photoshopping headlines to sockpuppet accounts?
When even human reporters fail at covering a very possible media manipulation event in the week before crucial elections in Germany, and simply reproduce a sloppy medical report consisting of the self-reporting of a hard-right politician, how do you think an AI would fare in a situation where detail is absolutely crucial to the full picture?
Human-produced ragebait was just the beginning
But you don't even have to reach for the full populist media manipulation handbook to have concerns about robot journalism in the newsroom. As well-meaning as many of the answers in Polis' survey on AI in the newsroom may sound, none of it holds up when the tech is used by news organizations that don't prioritize quality standards or ethical considerations.
Rupert Murdoch's News Corp in Australia uses AI technology to publish 3,000 articles per week. Jonah Peretti of Buzzfeed has expressed the intention to replace almost all "static content" with generative AI. G/O Media has laid off all writers of the Spanish edition of tech mag Gizmodo and now relies on AI for translations, while the formerly prestigious film website AV Club publishes machine-generated articles copied directly from IMDb. Red Ventures is cutting jobs in the editorial departments of CNET while simultaneously experimenting with robot journalism. NewsGuard has identified 498 "Unreliable AI-Generated News Websites" so far. In Germany, Springer is cutting 200 jobs at BILD while expanding generative AI technology, and Burda has used generative AI to create an entire cooking magazine, from illustrations to recipes.
It is certainly commendable that reputable news organizations are considering the ethical and editorial consequences of using artificial intelligence in news journalism and taking labor concerns into account. But even then it is questionable journalistic practice, given that populists already extensively game the media, not just in the crucial days before elections, as in the events described above, but all the time. And when it comes to the more unprincipled news organizations, we are in for a rough ride.
Let's not deceive ourselves: high engagement and big reading numbers are not generated by respectable outlets but by tabloids and the yellow press. As these examples illustrate, those outlets tend to prioritize clicks and money over editorial standards, and outrage bait over ethical considerations. These widely read outlets already use LLMs extensively and will only increase their usage, without scruples or consideration of societal impact, sometimes with a clear populist political agenda. This means that artificial intelligence, along with typical AI issues like algorithmic bias, will shape society's perspectives and worldviews through the backdoor of tabloid journalism, even more so than before.
We will not be able to regulate this phenomenon. We may put regulation in place to mark generative journalism as such, but I doubt this will have the desired effect of wariness when it comes to the yellow press and tabloids.
I like to joke here about how OpenAI may be forced to wipe out its LLMs due to copyright abuse, but realistically these language generators are here to stay, and they will be abused: on the one hand by journalistic outlets that, to put it bluntly, don't give a fuck, and on the other by outlets that will use them to further optimize their populist manipulation machine.
The rise of populist parties all over the world is very concerning, but giving them a media environment in which journalistic practice is even easier to manipulate than ever before is a hardcore media nightmare. Human-produced ragebait was just the beginning.
Responsible Journalism as a luxury good
In a post from January this year, I wrote a small vignette about a future in which the masses are fed generative streams of synthetic, hypercustomized stuff, while the rich enjoy intellectually stimulating, handcrafted art. I think this scenario will hit journalism and media first, because it already has.
I haven't watched TV for more than 15 years now, aside from some occasions visiting friends or at a bar. Whenever I come across cable news or junk TV, in YouTube clips or by chance, I am baffled that anyone would expose their cognition to such brainrot. It's so extremely far from my everyday media diet that I can't imagine what a life spent in front of a TV screen would even feel like. My late mother watched telly pretty much 24/7, and that is still everyday life for many seniors watching Fox News or something similar all day long.
My own media diet consists of a lot of reading, digital feeds and books on paper, the occasional movie, intertwined with whole days I spend with headphones on, listening to music. I'm happy with it: it's intellectually satisfying and stimulating, interesting, entertaining; I can converse all day with people who do the same, and I feel very, very well informed. But this is not the experience of the majority.
The majority has non-media day jobs, and their media diets consist of social media feeds, news sites, and television. The average daily time spent reading in the US (and I don't think Europe looks any different) is roughly between 15 minutes and half an hour across all age groups. This not only tells you how much of an outlier I am, spending hours per day reading, but also how much cognitive effort the majority will spend on decisions about the potentially synthetic origin of their media diet: none. People won't care whether what is fed to them is written by a machine, a human, or a combination of both. In the case of news, they'll care whether it's interesting and fits their worldview.
What will a person who reads 15 minutes per day do when they encounter an AI summary of an article about an alleged attack on a populist right-wing party leader, a summary that fails to mention the crucial details exposing the whole thing as very likely hogwash, not to speak of the impression that it is part of a media manipulation campaign days before an election? They surely will not click through to read further, that much I can tell you. And just like that, media manipulation eased by gameable algorithms may swing elections.
It's cool that we can enjoy intellectually stimulating feeds of all kinds, oh so critical of text synthesis and populist media tactics, all day long, but it's not cool that this enjoyment is reserved for intellectual elites. Who cares about responsible AI in respectable journalistic institutions when the majority is kept down and gamed by media manipulators who will use text synthesis to ramp up their game even further for political and monetary gain?
When 75% of respectable news orgs already use AI in one way or another, you can bet that it's 100% among the non-respectable ones, and they'll surely optimize their models for their agendas. In the best case, that agenda is merely economic: jumpstarting the output of ever more nonsense to make the masses click.
But even if it's only money that motivates the use of generative AI in shady news businesses, we should not forget that one of the biggest, uhm, "innovations" in news reporting in recent years was monetarily motivated clickbait and fake news, the latter famously mastered by one guy in a town in Macedonia who hired a bunch of teens and caused the mother of all upheavals in journalism during the Trump era. What do you think one guy who's solely interested in money can do with generative AI in the newsroom and in political discourse as a whole? What do you think the biggest tabloid section of the media industry can do? What do you think a media-tactics counselor to a populist party can do?
In the worst case, the use of generative AI in the newsroom is politically motivated. A mogul like Rupert Murdoch and son will kickstart their tabloids into a news service that sells customizable news for you and only you, ramping up their already inhuman output, while their AI models are trained with explicitly embedded biases to help right-wing friends and business buddies subtly or not-so-subtly nudge society at large in a political direction and manipulate voting behavior as these people see fit. Willfully politically biased AI models are absolutely a thing, and I see no reason why journalistic junk outlets with agendas would not use them to amplify the screams of "assassination attempt" when a fascist fakes one.
So while the highbrow news orgs look down on the crap, the unscrupulous get a big chance to steer the political stage in any direction they want, while we enjoy books and jazz, get riled up about Chrupalla and his scum on Bluesky, and read shiny reports about the responsible use of AI in the newsroom.
When it comes to robot journalism, the future may not be a rogue AI turning all matter into paperclips, but algorithms turning all reporting into imaginary assassination needles, courtesy of a concerted effort by populist politicians.
And I have a bad feeling about all of this.