War News and AI Super Polluters

[Image: photo-realistic AI-generated picture of a child standing amid the rubble of bombed buildings in a vaguely Middle Eastern city, smoke billowing. Caption: “AI-Generated News Pollution”]

From the dawn of modern news production in America, news about war has drawn readers’ attention and driven circulation. Awareness of this fact has made war news fertile ground for sensationalism, exaggeration, and misinformation. Publishing tycoon William Randolph Hearst, whose papers inspired the term “Yellow Journalism,” famously capitalized on the sinking of the USS Maine, having reportedly cabled illustrator Frederic Remington in Cuba, “You furnish the pictures, I’ll furnish the war.” After his papers ginned up the Spanish-American War, Hearst shamelessly fed aroused readers fake news stories with bold headlines and dramatic illustrations. Circulation exploded. During World War One, Hearst’s news organizations again profited from dubious war news, eventually getting banned from Allied wire services for distributing fake news that served the interests of the Central Powers.

Over a hundred years later, the incentives driving our news system remain the same, and so does the song: gripping war news framed by eye-catching illustrations attracts eyeballs, driving circulation and revenue. And for news producers that prioritize profit over honesty, such content is easier than ever to create, circulate, and monetize.

The war raging in Gaza has left a beleaguered public hungry for news, testing news organizations’ commitment to accuracy and ethical reporting. Not only have news producers had to navigate the misleading and inaccurate information flowing through social media about the conflict and its casualties, but they now also have to grapple with how Artificial Intelligence can be used to furnish the pictures.

Photographs and the captions under them play a huge role in how readers engage with and process news stories. Since the 1920s, when technology made it possible for wire services like the AP and Reuters to transmit photographs electronically, news organizations have used photos to headline and frame stories. Because news consumers want to know that the news they are reading is real, photographs supply the reality effect news organizations are after. For years they have sent photojournalists into combat zones to take the pictures that feed news-hungry publics.

Yet over time, as news organizations have gutted their foreign desks, the number of photojournalists employed to supply pictures for war news has shrunk dramatically. Now, news producers often rely on content people post on social media, which is notoriously unreliable, or turn to companies like Adobe or iStock that aggregate photos to license and sell. And with generative AI tools now on virtually every digital device, the photographs we see might not be as real as they look.

As part of its embrace of AI, Adobe’s stock photo service allows people to generate, upload, and sell computer-generated images as part of its stock image subscription. On its platform, sensational photorealistic AI-generated images sat right next to actual photos for subscribing news organizations to choose from.

[Image: a collage of five photo-realistic AI-generated Adobe Stock images showing wartime destruction, including smoke, explosions, rubble, soldiers, civilians, and a terrified child.]

Several news reports about Gaza used these clickbait fakes, spreading the pollution far and wide through the news ecosystem. A reverse image search on Google done at the time (you can do this at home by dragging a photo onto the search engine) gives a sense of where one of these images turned up.
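If you’d rather script the check, the lookup can be automated. Here is a minimal Python sketch, assuming the suspect image is already hosted at a public URL; the lens.google.com “uploadbyurl” address is an observed URL pattern rather than a documented API, and the example address is a hypothetical placeholder.

```python
# Minimal sketch: open a Google reverse image search for a suspect photo.
# Assumes the image is reachable at a public URL; the uploadbyurl endpoint
# is an observed URL pattern, not a documented API, and may change.
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url: str) -> None:
    """Open the default browser on a Google Lens reverse image search."""
    webbrowser.open(f"https://lens.google.com/uploadbyurl?url={quote(image_url, safe='')}")

# Hypothetical example address; substitute the image you want to check.
reverse_image_search("https://example.com/suspect-war-photo.jpg")
```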

When news broke that these fake photos were circulating, Adobe was roundly criticized for providing them. The company defended itself, claiming that its licensing agreement required news organizations to label the images as AI-generated.

The Washington Post reported that there were over 3,000 AI-generated images available on Adobe Stock, many of them not labeled as AI-generated. Shutterstock and Getty’s iStock also offered AI-generated images for purchase, pushed right up against real photos. Amidst the barrage of misinformation and misleading content circulating through social media, these images only thicken the fog of war.

Both sides in the conflict have used photo editing technology and AI to flood the zone with propaganda, and the attention-prioritizing algorithms that drive social media help it spread at the speed of the electronic signal.

This news pollution matters. Social media companies, like the Hearst distribution systems 100 years ago, prioritize profitable engagement over curation that serves the public interest. Despite warnings from the EU about stiff fines, dubious war news has been great for the social media engagement business. One pro-Palestinian TikTok slideshow called “Super Mom,” featuring AI-generated images, has received over 1.7 million views. Fox News got caught red-handed trying to use TikTok to launder its pro-Israel, Gaza-related propaganda, and TikTok users responded accordingly. Instagram and Facebook are just as polluted.

But as we’ve noted often, the problem with Big Tech is not the technology per se, it’s the unmanageable scale, which makes it next to impossible to curate the content that flows freely across their platforms. Despite this, Big Tech companies have not only cut back on the expensive safety teams that previously tried to limit the harm of disinformation on their sites but have also welcomed AI content onto their platforms. So much the better for clicks.

To its credit, Adobe responded to critics last week with an updated policy, saying that it takes part in the Content Authenticity Initiative and would attach content credentials to images stating their provenance. “While we cannot control the images people make and how they are used, we believe these actions will both provide greater transparency to licensees of Adobe Stock content and help prevent people from purposefully misusing Stock content to deceive people.” Yet these credentials are hardly indelible watermarks; they are metadata and visible badges that can easily be stripped out or cropped away.
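To see why such credentials are fragile, consider what happens when an image is simply re-encoded. Below is a minimal sketch using the Python Pillow library, with “credentialed.jpg” as a hypothetical input file: re-saving copies the pixels, but embedded provenance metadata does not, by default, survive the trip.

```python
# Minimal sketch of how fragile metadata-based provenance can be.
# Assumes Pillow is installed (pip install Pillow) and that
# "credentialed.jpg" is a hypothetical image carrying provenance metadata.
from PIL import Image

with Image.open("credentialed.jpg") as im:
    # Re-encoding writes the pixels to a fresh file but, by default,
    # drops the extra metadata segments where provenance data is embedded.
    im.save("laundered.jpg", quality=90)
```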

Research on viral disinformation and its impact on how people perceive the news has shown that once lies spread, they are very difficult to clean up. Given this, Adobe’s new policy reads like public relations messaging from a company on the defensive because its technology is used to make deceptive content, yet one that wants people to keep using its tools despite the damage they can do to our shared sense of reality.

Getty Images’ response to the situation, and to the public interest conundrum that AI-generated images pose for news production, was to ban AI-generated images from its iStock website altogether. Said CEO Craig Peters, “We’ve seen what the erosion of facts and trust can do to a society. We as media, we collectively as tech companies, we need to solve these problems.” And just as surely as stock image services and news organizations need to solve these problems, we the people must force them to do so with policies that hold the fakers who profit from producing and distributing deceptive war news responsible. When there is money to be made, Big Tech won’t change unless its bottom line is threatened at a scale commensurate with the scale of its platforms.

And until then, the next time you see a photograph depicting news about the war in Gaza or the next conflict that takes its place, drop that image into a reverse image search to see if it’s real. When AI tools let bad actors flood the news ecosystem with pollution at the touch of a button, these days you can’t even trust your lying eyes. —MJ