

Why AI detectors are useless

In a world where artificial intelligence is becoming more and more prevalent in our lives, AI detectors seem like modern guardians against fraud and deception. But how effective are they really?

The use of AI detectors also raises ethical questions. On the one hand, there is a risk that these tools could be used to monitor and censor content and its creators. On the other, it is unclear how the data used to train these detectors is collected and stored.

Privacy regulations could easily be breached if careful attention is not paid to how these systems are implemented. This can be particularly concerning if the data is collected without the knowledge or consent of students or their parents. The use of AI detectors in school can also undermine the importance of trust and honesty. If teachers and students feel that they are constantly being monitored, this can make them feel less trusting and less honest with each other. This can be particularly the case if the AI detectors are inaccurate or unfair.

An often overlooked aspect is the fact that AI detectors cannot fully capture the human context and nuances contained within creative or complex text and images. This means that the final assessment of whether a piece of content was created by a human or a machine is best made by a human.

Detectors can provide support, but they are no substitute for critical human judgment.


In today's fast-moving technological landscape, AI detectors seem like a logical response to the growing flood of content created by artificial intelligence. Although these detectors were developed as tools to differentiate between human and machine output, they often reach their limits. This article sheds light on why AI detectors are not the optimal solution in many cases.


The danger of bias in AI detectors is that it can lead to unfair results. If the data used to train the AI is biased, this can lead to certain students being falsely accused of cheating.

False positive and false negative results are common, which can lead to confusion and misinterpretation. This is particularly likely when the AI is unable to take cultural and socio-economic differences into account. Teachers can also come to rely too heavily on the technology and neglect their own ability to recognize what was written by a student and what was written by a machine.


These detectors are themselves based on artificial intelligence. They analyze texts, images or videos to identify patterns that are typical of machine-generated content. However, it becomes problematic when the artificial intelligence used to generate this content advances faster than the technology of the detectors.
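To make the pattern-matching idea concrete, here is a deliberately simplified sketch. It is not a real detector: actual tools typically score how statistically "predictable" a text is under a language model (e.g. via perplexity). The score function and threshold below are invented stand-ins that fake this with vocabulary diversity, purely to show the mechanism and why it misfires on perfectly human writing.

```python
# Toy sketch, NOT a real detector: many AI detectors reduce a text to a
# single "predictability" statistic and compare it to a threshold.
# Here the score is faked with vocabulary diversity (an assumption for
# illustration); real systems use language-model statistics instead.

def predictability_score(text: str) -> float:
    words = text.lower().split()
    if not words:
        return 0.0
    # Low vocabulary diversity -> "more predictable" -> higher score.
    return 1.0 - len(set(words)) / len(words)

def looks_machine_generated(text: str, threshold: float = 0.3) -> bool:
    # The threshold is arbitrary -- and that is exactly the problem:
    # any cutoff trades false positives against false negatives.
    return predictability_score(text) >= threshold

# A repetitive but entirely human sentence trips the detector:
human_text = "the rain in spain stays mainly in the plain the rain the rain"
print(looks_machine_generated(human_text))  # True -> a false positive
```

Even this toy version illustrates the core weakness: the verdict hinges on a statistical cutoff, so unusual but legitimate human writing styles can land on the wrong side of it.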

Models such as GPT-4 or DALL-E are evolving rapidly and are getting better at producing human-like content that is harder to recognize. As a result, many AI detectors need to be regularly revised and retrained, making them a costly and often outdated tool.


While these detectors can be useful in certain contexts, they are often not a flawless solution. Technological limitations, high error rates, ethical concerns and the ever-present human factor all play a role in why these tools don't always deliver the hoped-for results.

The use of AI detectors in schools may seem tempting, but it has its limitations and drawbacks. AI detectors cannot detect all forms of cheating, exhibit bias, compromise privacy, affect student motivation and undermine trust and honesty. Instead, schools should rely on a combination of human supervision, trust, honesty and education to combat cheating and create a positive learning environment.

It is important that we continue to critically scrutinize the development and use of AI detectors and see them for what they are: one tool among many, not the ultimate solution.


Discover our open source resources for schools and the use of AI in the classroom.

Stay up to date on new developments in AI-powered learning and follow us on social media.


