OpenAI has developed a new tool designed to detect whether text, such as a student's assignment, was produced with ChatGPT. However, the company is still deciding whether to release the tool to the general public or keep access limited.
A company spokesperson stated that OpenAI is working on a “text watermarking” technique, which embeds a hidden statistical marker into generated text. The marker is invisible to readers but can later be used to determine whether the text was generated by ChatGPT.
The spokesperson explained that while the technique is effective, it also carries risks: some users may attempt to circumvent it, and it could disproportionately affect non-native English speakers.
OpenAI noted that it had previously released tools for detecting AI-generated text, but discontinued them last year because they were not sufficiently accurate.
The new method is designed to identify text written specifically by ChatGPT. It works by subtly biasing the model's word choices, producing a statistical pattern that acts as an invisible watermark within the content.
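OpenAI has not published the details of its scheme, so the following is only a hedged illustration of the general idea: a toy "green list" watermark, in which the generator prefers words from a secret, key-derived half of the vocabulary, and the detector counts how many words fall in that half. The secret key, helper names, and candidate-word slots are all hypothetical.

```python
import hashlib
import random

SECRET_KEY = "demo-key"  # assumption: a secret shared by embedder and detector

def is_green(word: str) -> bool:
    """Deterministically assign each word to the 'green' half of the vocabulary."""
    digest = hashlib.sha256((SECRET_KEY + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def watermark_fraction(text: str) -> float:
    """Fraction of words on the green list; ~0.5 for ordinary text,
    noticeably higher for watermarked text."""
    words = text.split()
    if not words:
        return 0.0
    return sum(is_green(w) for w in words) / len(words)

def generate_watermarked(candidates_per_slot, rng=None):
    """Toy 'generator': at each slot, pick a green synonym when one exists."""
    rng = rng or random.Random(0)
    out = []
    for candidates in candidates_per_slot:
        green = [w for w in candidates if is_green(w)]
        out.append(rng.choice(green) if green else rng.choice(candidates))
    return " ".join(out)
```

A detector then flags text whose green-word fraction is statistically unlikely under the roughly 50% expected by chance. This also illustrates the fragility OpenAI describes: translating or paraphrasing the text replaces the chosen words and erases the pattern.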
The company further acknowledged that the system can still fail in certain situations, for example when the text is translated, paraphrased, or rewritten by another AI model.