According to a report from The Wall Street Journal, OpenAI has developed a mechanism to identify whether a given text was written with its AI chatbot, but has not made it available to the public. Based on the report, the tool has been in the works for quite some time and is ready for launch. However, the AI firm fears that such a detection tool would lower the chatbot’s popularity among users.
A text detection tool from OpenAI will be more reliable for users
An AI detection tool would arguably make teachers’ jobs easier, letting them identify students who submit papers produced by AI and dissuade them from doing so. The report also revealed that a worldwide survey conducted by OpenAI found people backed the idea of an AI detection tool by a margin of four to one. At the same time, nearly 30% of ChatGPT users surveyed said they would stop using the tool if OpenAI watermarked its text.
Why is the company afraid to release it?
Well, OpenAI already possesses an effective solution that could considerably curb such misuse: a fingerprinting technique that embeds a hidden, identifiable pattern in the chatbot’s output. In other words, if a person “writes” something with the help of ChatGPT, a “fingerprint” is applied to the generated text.
According to the report, OpenAI has not shared the AI detection tool because of the risk that it would shrink its client base. Citing a survey the company carried out, about 30% of users said they would be less likely to use ChatGPT if an anti-cheating mechanism were unveiled. The fingerprint is a detectable pattern applied to the output that cannot be perceived by people, but software can be built to identify it: feed the generated text into such a tool and it determines whether the text was machine-generated.
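OpenAI has not published how its fingerprint works, so the following is only a hypothetical sketch of the idea described above, modeled on a well-known academic watermarking scheme (the “green-list” approach): each word is pseudorandomly marked “green” or “red” based on a hash of the preceding word and a secret key, a watermarking generator prefers green words, and a detector simply counts how many words in a passage land on the green list. All names below are illustrative, not OpenAI’s.

```python
import hashlib
import math

def is_green(prev_word: str, word: str, key: str = "demo-key") -> bool:
    """Pseudorandomly assign roughly half of all words to the 'green' list,
    conditioned on the previous word and a secret key (hypothetical scheme)."""
    digest = hashlib.sha256(f"{key}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str, key: str = "demo-key") -> float:
    """Fraction of words in the text that fall on the green list
    for their context. Unwatermarked text should hover near 0.5."""
    words = text.split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(p, w, key) for p, w in zip(words, words[1:]))
    return hits / (len(words) - 1)

def z_score(fraction: float, n: int) -> float:
    """How many standard deviations the observed green fraction sits above
    the 0.5 expected by chance; a large value flags likely watermarked text."""
    return (fraction - 0.5) / math.sqrt(0.25 / n)
```

Under this scheme, human-written text scores near 0.5 because its word choices are independent of the secret key, while text from a generator that was biased toward green words scores far higher, and the z-score quantifies how unlikely that is by chance. This illustrates why the pattern is invisible to readers yet trivially detectable by anyone holding the key.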
Some employees who support the tool’s release and have been involved in its development have argued internally that these risks are small compared with the good such technology could do. In the company’s survey of dedicated ChatGPT users, however, nearly a third said the anti-cheating technology would put them off.
Among the OpenAI executives who have discussed the anti-cheating tool are Chief Executive Sam Altman and Chief Technology Officer Mira Murati. Altman has supported the project but has not pushed for its launch, according to sources close to the matter.