The Threat of AI-Generated Fake Images in Scientific Research
DukeRem, 14 March 2023
The rise of generative AI presents new challenges for academic publishers fighting scientific fraud, as these technologies can produce fabrications capable of deceiving human peer reviewers.
In recent years, text-to-image systems like DALL-E, Stable Diffusion, and Midjourney have improved rapidly, maturing into commercial products capable of generating increasingly realistic images.
These models can create lifelike images of human faces, objects, and scenes, as well as convincing scientific imagery, making it easier for fraudulent researchers to forge results and publish sham studies.
Publishers are already contending with image manipulation, the most common form of scientific misconduct, in which authors reuse parts of the same image, flipped, rotated, or cropped, to fabricate findings.
Although publishers are turning to AI software to detect signs of image manipulation during the review process, researchers may be tempted to use generative AI models to create brand-new fake data.
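Screening tools of this kind typically compare every pair of figures in a submission for near-duplicate content. What follows is a minimal sketch of such a screen using perceptual hashing; the imagehash and Pillow libraries are real, but the file layout, the Hamming-distance threshold, and the suggestion that any publisher's pipeline works exactly this way are assumptions for illustration.

    # Minimal sketch of duplicate-figure screening via perceptual hashing.
    # Assumes the third-party "imagehash" and "Pillow" packages; the
    # threshold and directory layout are hypothetical.
    from itertools import combinations
    from pathlib import Path

    import imagehash
    from PIL import Image

    HAMMING_THRESHOLD = 6  # hypothetical cutoff; real tools tune this empirically


    def hash_variants(path: Path) -> list[imagehash.ImageHash]:
        """Perceptual hashes of an image and its flipped/rotated variants,
        so mirrored or rotated reuse of the same figure is still caught."""
        img = Image.open(path).convert("L")
        variants = [
            img,
            img.transpose(Image.Transpose.FLIP_LEFT_RIGHT),
            img.transpose(Image.Transpose.FLIP_TOP_BOTTOM),
            img.transpose(Image.Transpose.ROTATE_90),
            img.transpose(Image.Transpose.ROTATE_180),
        ]
        return [imagehash.phash(v) for v in variants]


    def screen_submission(figure_dir: str) -> list[tuple[str, str, int]]:
        """Flag figure pairs whose hashes fall within the Hamming threshold."""
        paths = sorted(Path(figure_dir).glob("*.png"))
        hashes = {p: hash_variants(p) for p in paths}
        flags = []
        for a, b in combinations(paths, 2):
            # Smallest distance between any variant of one figure and the
            # other figure's original catches flipped/rotated reuse.
            dist = min(h - hashes[b][0] for h in hashes[a])
            if dist <= HAMMING_THRESHOLD:
                flags.append((a.name, b.name, dist))
        return flags


    if __name__ == "__main__":
        for fig_a, fig_b, dist in screen_submission("figures"):
            print(f"possible duplicate: {fig_a} vs {fig_b} (distance {dist})")

Hashing each figure's mirrored and rotated variants is the key design choice: it lets a simple Hamming-distance comparison catch exactly the flipped or rotated reuse described above, though cropped fragments would require region-level matching instead.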
There is already evidence of AI-generated fake medical images appearing in published scientific papers, detected by the Semantic Forensics (SemaFor) program launched by DARPA in 2019.
Although some success has been achieved in detecting the output of generative models, fake images produced with the latest AI models remain difficult to identify.
Suspicious images, predominantly western blots (a laboratory technique used to detect specific proteins in a tissue or fluid sample), have already been found in scientific papers, and some experts suggest that many of them were generated using generative adversarial networks (GANs).
A repeated background across these images can be a sign of forgery, but the blot bands themselves are unique in each image, making it harder for computer vision software to identify the fraud.
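One way to operationalize the repeated-background clue is to compare a band-free patch of one blot against the same region of another: near-identical noise in supposedly independent experiments is suspicious. Below is a minimal sketch using NumPy and Pillow; the file names, patch coordinates, and the 0.95 correlation cutoff are illustrative assumptions, not a published forensic method.

    # Minimal sketch of a repeated-background check between two blot images.
    # File names, patch region, and cutoff are hypothetical.
    import numpy as np
    from PIL import Image


    def background_patch(path: str, box: tuple[int, int, int, int]) -> np.ndarray:
        """Load a grayscale patch from a region expected to contain only
        background (i.e., away from the blot bands)."""
        patch = Image.open(path).convert("L").crop(box)
        return np.asarray(patch, dtype=np.float64)


    def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
        """Pearson correlation between two same-sized patches; values near
        1.0 mean the textures are essentially identical."""
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float((a * b).mean())


    if __name__ == "__main__":
        # Hypothetical files: blots from two supposedly independent papers.
        box = (0, 0, 64, 64)
        r = normalized_correlation(
            background_patch("blot_paper1.png", box),
            background_patch("blot_paper2.png", box),
        )
        if r > 0.95:
            print(f"shared background texture (r={r:.3f}) -- possible forgery")

Because the bands themselves differ from image to image, a check like this only works on background regions, which is precisely why such forgeries are hard for automated tools to flag.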
As the latest generative AI models make fake images ever easier to produce, detecting scientific misconduct will only become more difficult.