Has anyone figured out how to reduce AI-text similarity scores?
What approach do you use?
1. Manual rewriting is the only solution.
2. Alter the presentation format, e.g., convert text to images.
3. Use an AI-humanizer tool to bypass Turnitin.
4. Avoid using any AI tool for support at all, not even for grammar or readability.
Academic writing will always sound somewhat robotic, and Turnitin's AI-similarity test is far too aggressive. Since its AI detector is learning faster than authors and researchers can adapt, it is hard to beat.
Here are a few candidate solutions and the reasons each one fails:
1. Manual rewriting: Rewriting the text may not be effective, as many writers are non-native English speakers. And with the growing volume of research studies and student papers being published, similarity scores will keep trending high.
2. AI humanizers: They destroy the academic tone of the writing.
3. Avoiding AI tools: Recently, an author claimed that AI was used to peer-review their work. So even if you, as an author, avoid using AI for support, an AI may still ingest your unpublished manuscript as training data and eventually flag your own writing as AI-text. Joke's on us.
False positives from AI detectors are another common story.
I have found an approach that brought my AI-text-similarity score down by 20%, but I am unsure how long it will keep working given the rising volume of academic publications.
Assuming AI-generated text is not always a threat to academic integrity, what should the cap on AI-similarity be for academic work?


