Ten fallacies of using word lists to detect AI-generated writing

Ron McIntyre
4 min read · Jul 26, 2024

Witch hunts to determine whether something was written with AI stem from fear, misunderstanding, and the rapid advancement of technology. That advancement, characterized by exponential growth in computing power and increasingly sophisticated machine learning algorithms, has made AI more capable than ever. As AI becomes more refined, it challenges our notions of authorship, creativity, and authenticity, creating anxiety about deception and the loss of human uniqueness. This fear is compounded by a limited understanding of how AI actually works, which leads people to default to simplistic and often flawed detection methods. The pace of AI development also outstrips our ability to regulate it and to create ethical guidelines, prompting reactive measures instead of proactive solutions. Consequently, the drive to expose AI-generated content often becomes a fervent pursuit that overshadows rational discourse and nuanced analysis, much like historical witch hunts that targeted perceived threats without substantial evidence.

Having written for many years, I have always fought the implication that everything must be written at a 6th-grade level. Yet current education trends seem determined to drive that down to a 3rd-grade level. I have always used words that others smirk about, but I keep using them because they enhance communication and let me paint pictures in something other than black and white.

Here are ten fallacies I have encountered regularly:

Overgeneralization: Assuming a small set of words or patterns applies universally to all AI-generated texts without sufficient evidence. This is generally how it starts; for example, some point to “multifaceted” as an AI word, yet I have used it for decades. (The short sketch after this list shows how such a naive word-list check misfires.)

Ignorance: Believing that the presence of certain words directly indicates AI authorship, ignoring other potential factors. I can point to many writers who have reused the same phrasing and turns of thought in their material for years, long before AI.

Anecdotal Evidence: Relying on personal stories or isolated examples, rather than statistically significant data, to draw conclusions about AI detection. When using AI, I can get it to generate the most absurd thoughts and arguments just by the way I approach the prompt; that does not mean it happens every time.

Confirmation Bias: Seeking out and remembering only the instances where the word list successfully detected AI, while ignoring its failures. Absolutely a factor. This has become the bane of modern-day society and will continue to grow until people realize how it leads to closed-mindedness and negative life experiences. Incidentally, “bane” is one of the words people like to point out; however, I have used it repeatedly for 75 years.

Circular Reasoning: Using AI-generated text to compile a word list and then using that list to identify AI-generated text without external validation. This is a factor because the time necessary to do a statistical analysis would be mind-boggling, so as usual, we take shortcuts.

Flawed Assumptions: Creating a word list based on limited or specific AI models, which leads to incorrect assumptions about all AI-generated texts. I adhere to the old saying that to ASSUME makes an ASS of U and ME, and that is exactly what happens if I don’t validate the output.

If, Then: Assuming that if a text contains words from the list, it must be AI-generated, disregarding other explanations. This is like a self-fulfilling prophecy: if I build a list and then match a text against it, any hit “must” be the AI’s doing. Very similar to the Circular Reasoning mentioned above.

Cherry Picking: Selectively choosing words that fit a preconceived notion of AI-generated text rather than performing an unbiased analysis. This often combines with the confirmation bias mentioned above. I have often said that, with a little manipulation, I can get statistics to support any argument I want to make; this is no different. How the prompt is written determines the output, particularly when the prompt is open-ended.

Fallacy of Composition: Assuming that what is true for some words in the list holds for the entire analyzed text. I have seen this many times: someone whose document upsets another person’s mindset is accused of using AI, rather than being answered with facts and intelligent discussion of the article.

Binary Choice: Presenting the detection of AI-generated text as a binary choice between word lists and no detection method at all, ignoring more sophisticated techniques. These could include machine learning algorithms that analyze sentence structure and context, or natural language processing models that identify patterns and inconsistencies in writing. Deciding that any assistance in writing is wrong puts me back in the Stone Age, when voicing an opinion took the time to carve a tablet. This is not a binary choice; AI has a place in writing, but the burden of logic and influence falls on the person writing, not the AI.
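To make the core flaw concrete, here is a minimal sketch of the kind of naive word-list detector these fallacies describe. It is written in Python; the word list and the sample sentence are hypothetical examples of my own choosing, not any real tool’s list.

```python
# Minimal sketch of a naive word-list "detector" (illustrative only).
# The word list below is a hypothetical example, not a real tool's list.
AI_WORD_LIST = {"multifaceted", "delve", "tapestry", "bane"}

def flag_as_ai(text: str) -> bool:
    """Flag text as 'AI-generated' if it contains any listed word.

    This encodes the if-then fallacy directly: the presence of a word
    is treated as proof of AI authorship, with no external validation.
    """
    words = {w.strip(".,;:!?\"'()").lower() for w in text.split()}
    return bool(AI_WORD_LIST & words)

# A sentence a human could have written decades before modern AI:
human_text = "Leadership is a multifaceted discipline, and hubris is its bane."
print(flag_as_ai(human_text))  # True -- a false positive on ordinary human prose
```

A detector like this flags ordinary human prose whenever a listed word appears, which is precisely the overgeneralization and circular reasoning described above; anything more credible would require statistical validation against large samples of known human writing.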

In my opinion, the ethical use of AI in writing offers many benefits that can enhance creativity, efficiency, and accessibility. Used responsibly, AI can assist writers in generating ideas, improving grammar, and translating languages, making high-quality content creation accessible to a broader audience. That potential inspires hope and excitement about the future of writing. For example, I readily admit to using AI to create relevant pictures for my articles.

It can also automate routine writing tasks, allowing human authors to focus on the more complex and creative aspects of their work. With clear ethical guidelines, transparency, and accountability, AI can be a powerful tool that complements human creativity rather than replaces it. That reassurance about the irreplaceable role of human creativity fosters a collaborative environment where technology and human ingenuity thrive together.


Ron McIntyre

Ron McIntyre is a Leadership Anthropologist, Author, and Consultant who, in semi-retirement, is looking to help people who really want to make a difference.