When OpenAI launched ChatGPT at the end of 2022, educators worried that students would use it to cheat on assignments and tests. In response, numerous companies released tools to detect AI-generated content, but their results have proven unreliable.
Now, OpenAI has announced that it has developed a method to identify when ChatGPT was used to generate a piece of text. The technology is claimed to be 99.9% effective. It builds on the same mechanism the model uses to predict which word or word fragment (a "token") comes next in a sentence: the detection system subtly alters how tokens are chosen, embedding a watermark that is imperceptible to readers but can be identified with the proper tool.
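OpenAI has not published the details of its scheme, so the following is only a toy sketch of one well-known approach to statistical text watermarking: keying a pseudorandom "green" subset of the vocabulary on the previous token, biasing generation toward that subset, and detecting the watermark by measuring how often tokens fall in their green list. All names and the tiny vocabulary here are hypothetical.

```python
import hashlib
import random

# Hypothetical toy vocabulary standing in for a real tokenizer's vocab.
VOCAB = [f"tok{i}" for i in range(100)]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Deterministically derive a 'green' subset of the vocabulary from
    the previous token, so a detector can recompute the same subset."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate_watermarked(length: int, seed: int = 0) -> list:
    """Toy 'model' that always picks the next token from the green list
    of the previous token, embedding the watermark."""
    rng = random.Random(seed)
    tokens = ["tok0"]
    for _ in range(length - 1):
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens

def green_fraction(tokens: list) -> float:
    """Detector: fraction of tokens that fall in their green list.
    Watermarked text scores near 1.0; unmarked text near 0.5."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_list(prev))
    return hits / (len(tokens) - 1)
```

In this sketch the watermarked text scores a green fraction of 1.0, while ordinary text hovers around the baseline of 0.5; a real scheme would bias token probabilities only softly, trading detection strength against output quality.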
The technology was ready for deployment about a year ago, but the company has yet to release it because of mixed internal reactions. On one hand, launching the tool could alienate part of the ChatGPT user base. On the other hand, releasing it would signal the company's commitment to transparency.
Additionally, an OpenAI representative expressed concerns that the tool might "disproportionately impact groups for whom English is not a native language." Nevertheless, key OpenAI personnel advocating for the launch believe that the technology could be highly beneficial and should not be postponed.
Source: androidauthority