
How AI Tools Are Fighting Image Fraud in Scientific Research


May 17, 2025

Scientific image manipulation—whether intentional or accidental—can lead to severe consequences, from flawed research to reputational damage. This article explores the critical importance of image integrity in scientific publishing and how tools like Proofig AI are helping researchers and journals detect duplications, manipulations, and AI-generated content.



Technological advancements are reshaping the way scientific images are verified, driving greater transparency and trust in research.

Scientific progress depends on the reliability of published findings. Each new discovery builds upon prior knowledge, making it critical that published research is as accurate as possible. Mistakes—whether accidental or deliberate—can damage a researcher's reputation and mislead future studies. While scientific journals aim to maintain high standards through editorial reviews and peer review processes, these checks are not foolproof.


In today’s competitive academic landscape, publication volume remains a key measure of success. Unfortunately, this pressure has fueled problematic practices—such as suspiciously high publishing rates, the production of fraudulent articles, and even the sale of co-authorship.


Even legitimate articles sometimes contain inaccuracies. In life sciences and medical research, for instance, researchers are often required to submit visual evidence, such as tissue images or western blots, to support their findings. While images may seem like objective records, they too can be altered—either through intentional manipulation or honest errors—potentially undermining scientific integrity.


A notable example is a 2006 Alzheimer's study, since retracted, that linked a specific form of amyloid-beta protein to the disease. The article, heavily cited for years, steered many labs toward a research path that failed to deliver results. A later investigation revealed apparently manipulated images in the original study and in dozens of related papers, significantly distorting the field.


Even when researchers are unaware of image errors, the consequences can be serious. In 2023, Stanford University’s president resigned after image flaws were found in studies he co-authored—despite no evidence that he was directly responsible for the mistakes.

Repeated use of images—especially when altered or undisclosed—raises immediate suspicion. The publication process is complex and prone to human error, making image verification a shared responsibility among researchers, editors, and publishers.


The Role of Image Integrity Experts


Some researchers dedicate their careers to detecting problematic images. For example, Elisabeth Bik and colleagues examined roughly 20,000 biomedical papers and found inappropriately duplicated images in about 4% of them. Some duplications involved rotated or mirrored images, which can falsely strengthen a paper's conclusions.


One review of a Harvard-affiliated institution found that nearly 50 articles published over two decades contained manipulated images. Many of these articles were later corrected or retracted.


Public Scrutiny and the Role of Technology


Beyond formal peer review, open platforms like PubPeer allow public post-publication critique, helping to expose image issues. While such forums promote accountability, anonymity can sometimes lead to hostility.


Historically, image verification has relied on time-consuming manual checks by editors and reviewers. But as publication volume grows, this is no longer sustainable. Increasingly, journals and institutions are adopting automated tools that analyze images for potential problems.


Common issues include:

  • Edited images that highlight or hide data

  • Duplicated or overlapping images reused within an article

  • Images copied from external sources

  • AI-generated images posing as authentic scientific visuals


AI-Powered Image Detection


AI-based tools now scan vast databases of published images to identify duplications and manipulations. Instead of pixel-by-pixel comparisons, modern systems extract compact image "signatures" (such as perceptual hashes or learned feature vectors) for faster, more efficient matching, even when images have been cropped, rotated, or otherwise altered.
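To make the "signature" idea concrete, here is a minimal sketch of one classic technique, a simple average hash compared by Hamming distance. This is an illustrative assumption, not Proofig AI's actual algorithm; real systems use far more robust signatures. The toy 4x4 "images" and the `is_duplicate` helper are invented for the example.

```python
# Sketch of perceptual "signature" matching via a simple average hash.
# Illustrative only; not the method used by any specific commercial tool.

def average_hash(pixels):
    """Reduce a grayscale image (list of rows of 0-255 values) to a bit
    string: each bit records whether a pixel exceeds the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length hash strings."""
    return sum(x != y for x, y in zip(a, b))

def mirror(pixels):
    """Horizontally flip an image, one common disguise for reused figures."""
    return [list(reversed(row)) for row in pixels]

def is_duplicate(img_a, img_b, threshold=2):
    """Flag img_b as a likely reuse of img_a if its hash, or the hash of
    its mirrored version, is within `threshold` bits of img_a's hash."""
    h_a = average_hash(img_a)
    candidates = [average_hash(img_b), average_hash(mirror(img_b))]
    return any(hamming(h_a, h) <= threshold for h in candidates)

# Toy 4x4 "images": an original, a mirrored copy, and an unrelated pattern.
original = [[200, 180, 20, 10],
            [190, 170, 30, 15],
            [185, 160, 25, 12],
            [180, 150, 22, 11]]
mirrored = [list(reversed(row)) for row in original]
unrelated = [[10, 200, 10, 200],
             [200, 10, 200, 10],
             [10, 200, 10, 200],
             [200, 10, 200, 10]]

print(is_duplicate(original, mirrored))   # True: mirrored reuse is caught
print(is_duplicate(original, unrelated))  # False: different image
```

Because the hash summarizes overall brightness structure rather than exact pixels, near-duplicates that have been lightly edited still land within a small Hamming distance, which is what makes signature matching so much faster than pixel-level comparison at database scale.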


Israel-based Proofig AI is at the forefront of this field. Its software helps researchers verify their work and assists journals in screening submissions. Proofig AI’s system can:

  • Detect edited or inserted image elements

  • Flag areas of possible data deletion

  • Identify duplicate images within a paper

  • Compare new images to external databases to check for plagiarism


Proofig AI also addresses a growing concern: AI-generated images. In a recent survey by the company, many researchers struggled to distinguish real biological images from AI-generated ones—underscoring the need for advanced detection tools.


Strengthening Safeguards for Scientific Integrity


Awareness of image manipulation is prompting publishers to tighten review procedures. Leading journals now require raw image files and may reject submissions if concerns cannot be addressed. They also share concerns with other publishers if a questionable article is resubmitted elsewhere.


Automated tools can flag suspicious images far faster and more consistently than manual review, though they sometimes over-detect issues, which is why expert human oversight remains essential. Researchers are also developing similar verification tools for other kinds of scientific data, such as genetic sequences.


Ultimately, automated verification of images and other critical scientific data should become standard practice, just as spell-checking has in writing. These technologies offer vital protection for scientific integrity and benefit the entire research community.


