Is AI text-to-image technology blurring the lines between fact and fiction?

False images of destruction spread across Twitter following a deadly typhoon in Japan this September.

Three images posted two days after the storm claimed to show flooded homes and streets submerged in mud, water and debris. The caption read: "Drone-shot photos of flood disaster in Shizuoka Prefecture. This is really too horrible."

A 45-year-old man was killed in Shizuoka, on Japan's southern coast, as strong winds and record-breaking rain caused cave-ins and landslides.

However, the images showing the destruction were created using text-to-image software, an AI-driven tool that generates believable pictures from text prompts. Only after the pictures had garnered over 5,600 retweets did people start to question their authenticity.

Twitter users noticed that the flood water seemed to flow unnaturally and the roofline appeared warped. Even local journalists were tricked into resharing the images.

Text-to-image software uses AI to create original images from scratch. The tool is first 'trained' on huge banks of images scraped from the internet and learns to recognise concepts: 'man', 'dog', 'fluffy' or 'Prime Minister', for instance. It then produces a fabricated image that closely matches the concepts in a prompt.
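
To give a sense of how low the barrier is, here is a minimal sketch of generating an image from a prompt with the open-source Stable Diffusion model (discussed below) via Hugging Face's diffusers library. The checkpoint name and prompt are illustrative, not those used in the Shizuoka fakes:

```python
# Minimal sketch: text-to-image generation with the publicly released
# Stable Diffusion model via Hugging Face's diffusers library.
# The checkpoint and prompt are illustrative; a GPU is assumed for speed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # publicly available checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# A handful of keywords is enough: the model maps the concepts it
# learned during training onto a plausible photographic scene.
prompt = "drone photo of a flooded street, houses submerged in mud and debris"
image = pipe(prompt).images[0]
image.save("generated.png")
```

On a consumer GPU this takes seconds per image, which is consistent with the under-a-minute turnaround described later in this piece.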

OpenAI, the creator of DALL-E, one of the most widely used text-to-image tools, made its powerful software fully accessible to the public in September, though the software is not open source.

Microsoft also said on Wednesday that it would be integrating DALL-E 2 into its Office suite, potentially putting the tool in the hands of millions of users.

Although Microsoft limits the creation of extreme, celebrity and religious content, journalists are braced for an uptick in potentially fake images circulating online.

Brendan Paul Murphy, lecturer in digital media at Australia’s Central Queensland University, said journalists will need to pay closer attention to small details, such as the dates and locations of images.

"The traditional methods journalists use to keep on an even keel will remain the benchmark: seeking multiple sources and verifying information through investigation."

Fact-checkers should also be worried about the recent release of Stable Diffusion, an open-source competitor to DALL-E whose algorithmic improvements make it far cheaper to train and run. The creators of these AI tools cannot control the images that are generated.

"The creator cannot usually control how the media they create is used, even if they have the legal right to," said Murphy. He adds that Google's text-to-image product, Imagen, has not been made available to the public because it was deemed too "dangerous".

The Limitations and Societal Impact section of the Imagen website cites 'potential risks of misuse' and a tendency to reinforce negative stereotypes in the images it creates as the reasons for keeping the tool under wraps.

Stability.AI, the team behind Stable Diffusion, said in a statement that it 'hopes everyone will use this in an ethical, moral and legal manner' but stressed that responsibility for using the software lies purely with the user.

The anonymous Twitter user created the images of the flooding in Japan in less than a minute by inputting the keywords 'flood damage' and 'Shizuoka', having previously used the software only to create pictures of food.

"I thought [other Twitter users] would figure out the images were fake if they magnified them. I never thought so many people would believe them to be real," the original poster told the Yomiuri Shimbun.

"If I’m called to account for the post, that's the way it has to be. Posting that kind of image can cause a big problem even if it's just done on a whim. I want lots of people to learn from my mistake that things done without careful consideration can lead to big problems."

In February 2021, a doctored image circulated of Japan’s Chief Cabinet Secretary Katsunobu Kato smiling in the wake of a devastating earthquake in Fukushima.

Former American Presidents Donald Trump and Barack Obama, and Ukrainian President Volodymyr Zelensky have all been the subject of AI-generated video and imagery fabricating speeches or official visits.

The current limitations of text-to-image tools do provide some indicators that an image has been fabricated, giving journalists the chance to avoid being duped.

Systems like DALL-E 2 and Stable Diffusion struggle to render intricate body parts, while Google has admitted Imagen is far weaker when generating images containing people.

Because the AI is trained to create images that are merely 'close enough' to the prompt, there are often clues that an image is fake.

Murphy says AI tends to struggle with anatomy, so a close inspection of the hands, ears, eyes and teeth of people within the image can expose a fake.

Generated images can also contain errors, such as duplicated shadows or lighting that is wrong for the scene.
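
These checks remain manual, but simple tooling can speed them up. Below is a minimal sketch, again using Python's Pillow library, that crops and magnifies a region of interest so hands, eyes or shadows can be examined closely; the file name and coordinates are illustrative:

```python
# Minimal sketch: crop and enlarge a region of interest (e.g. hands,
# eyes, a roofline) for close manual inspection. Coordinates are
# illustrative; a real workflow would loop over several regions.
from PIL import Image

img = Image.open("suspect_photo.jpg")    # illustrative file name
region = img.crop((400, 300, 600, 450))  # (left, upper, right, lower)
zoomed = region.resize(
    (region.width * 4, region.height * 4),  # 4x magnification
    resample=Image.LANCZOS,
)
# Inspect the output for warped anatomy, duplicated shadows, odd lighting.
zoomed.save("zoomed_region.png")
```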

Few UK newsrooms have offered their journalists training or guidance on AI-generated imagery. An easy fix is to stop sourcing pictures from social media and rely instead on reputable photographers.

Photojournalist Jess Hurd said: "There is always scrutiny of an image because [editors'] jobs and the respectability of their news outlets are on the line.

“If there is an option for a professional photographer, then go with that. [As it becomes harder to tell what is accurate] there is going to be more emphasis on the value of the journalist."
