
Why iThenticate Flagged a Human-Written Paper as 32% AI

01 April 2026

If you’ve ever received an unexpected AI detection flag on a manuscript you wrote yourself or that was edited solely by humans, you’re not alone.

AI detection tools like iThenticate are increasingly used by publishers and journals to screen submissions for potential AI-generated content. But these tools may be less reliable than they appear. A recent client case at AsiaEdit reveals just how easily AI detection scores can shift between manuscript versions, even when no AI has been used at any stage.

For authors, the implications are serious. Publishers are using these tools to flag potential misuse of AI despite the clear and admitted limitations of the technology, and then pushing the burden of proof back to the author. This very much amounts to guilty until proven innocent. And innocence can be hard to prove in such cases.

A Case Study in False AI Detection

  • Original manuscript submitted to iThenticate (a similarity and AI detection tool provided by Turnitin: ithenticate.com) → no AI score given
  • Manuscript professionally copyedited by AsiaEdit – no AI tools used at any stage; edits were minimal, with no substantive content introduced
  • Revised manuscript resubmitted → 32% AI detected

The only difference between the two versions was light professional copyediting. No AI was involved. Yet the iThenticate AI detection score jumped from zero to 32%.

AsiaEdit contacted Turnitin directly to query this result. Here is what we learned.

Small Edits Can Change Your AI Detection Score

Turnitin confirmed that its AI detection algorithm analyses text by evaluating patterns across overlapping segments of a submission. As a result, even minor revisions – rephrasing, slight additions, or structural adjustments – can alter how those segments are assessed, potentially producing an entirely different AI detection score.

In Turnitin’s own words, even a change as small as a single word being added or removed “can alter how the detection algorithm evaluates the text and may result in a variation in the AI detection outcome.”

In this case, the revised version was just 91 words longer than the original (a 1.6% increase). Although marginal, this was enough to change how the text was segmented and analysed, producing a dramatically different result, even though the content was substantively the same.
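Turnitin does not publish its windowing parameters, but the mechanism it describes can be sketched with a simple sliding-window segmenter. The window size and stride below are invented for illustration; the point is that a single word inserted early in a document shifts the contents of every downstream window, so a per-segment classifier may score all of them differently.

```python
# Hypothetical illustration of overlapping-segment analysis.
# The values below (350-word windows, 50% overlap) are assumptions
# for demonstration only -- Turnitin's actual parameters are not public.

def segments(words, size=350, stride=175):
    """Split a word list into overlapping fixed-size windows."""
    return [tuple(words[i:i + size])
            for i in range(0, max(1, len(words) - size + 1), stride)]

original = [f"w{i}" for i in range(1000)]          # stand-in manuscript
revised = original[:10] + ["new"] + original[10:]  # one word inserted early

orig_segs = segments(original)
rev_segs = segments(revised)

# Every window at or after the insertion point now contains different
# words, even though only one word was added.
changed = sum(1 for a, b in zip(orig_segs, rev_segs) if a != b)
print(f"{changed} of {len(orig_segs)} segments changed")  # 4 of 4
```

Because the windows overlap, the effect of one small edit propagates far beyond the edited sentence, which is consistent with Turnitin's statement that even a single added or removed word can change the outcome.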

File Format Conversion Can Affect AI Detection

Turnitin acknowledges that file format conversions, such as from Word to PDF, can introduce small differences in how text is interpreted by the detection algorithm. Authors who convert between formats before submission should be aware that this alone may influence their AI detection score.

Low AI Scores Are Hidden

AI detection scores below 20% are no longer displayed numerically by iThenticate. Instead, they appear only as an asterisk (*).

This creates a significant blind spot. A manuscript that appears to have no AI involvement may in fact carry a low score that is simply not visible. If a later revision, however minor, pushes that score above the 20% reporting threshold, it can create the misleading impression of a sudden spike in AI-generated content.
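The masking behaviour can be sketched in a few lines. The 18% "before" value here is an assumption, since iThenticate never reveals sub-threshold scores; the sketch simply shows how a modest drift across the threshold reads as a jump from "no AI" to 32%.

```python
# Sketch of the reporting threshold described above. The pre-edit
# score of 18% is hypothetical -- sub-threshold values are hidden.
THRESHOLD = 20

def displayed(score: int) -> str:
    """Scores below the reporting threshold appear only as '*'."""
    return f"{score}%" if score >= THRESHOLD else "*"

print(displayed(18), "->", displayed(32))  # * -> 32%
```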

This is probably what happened in our client’s case: the original manuscript may have had a hidden low score that crossed the visibility threshold after copyediting.

What an AI Detection Score Actually Means

Turnitin states explicitly that its AI detection output does not determine academic misconduct. The score is intended as supporting information, to be interpreted in context by course instructors, journal reviewers, and others responsible for upholding academic integrity.

An AI detection score is not a verdict on academic quality or integrity; it is merely one among numerous data points, and one with significant known limitations.

What Authors Should Do If Falsely Flagged

The outputs of AI detection tools can shift considerably depending on small differences between document versions and file types. This variability introduces considerable uncertainty, and potential unreliability, into the detection process, which is not always recognised by authors or by the institutions relying on these tools.

If your manuscript has been flagged, here are practical steps:

  • Don’t panic. A high AI detection score does not necessarily mean that your work is AI-generated. False positives from iThenticate and similar tools are a documented and acknowledged issue.
  • Document your writing process. Keep drafts, tracked-changes files, notes, and correspondence that demonstrate human authorship.
  • Request clarification from the publisher. Ask which tool was used, what score was returned, and what threshold triggered the flag.
  • Consider resubmitting in a different format. As file format can influence scores, a different submission format may produce a different result.
  • Contact AsiaEdit for support at [email protected]. We can help you interpret AI detection results, prepare responses to publisher queries, and ensure that your manuscript is submission-ready.
