OpenAI, the firm behind the popular language model ChatGPT, has released a new AI classifier that estimates whether a piece of text was written by a person or by artificial intelligence. The release was prompted by rising concerns about the misuse of AI text-generation tools, such as automated disinformation campaigns, academic dishonesty, and AI chatbots passing themselves off as human.
OpenAI's classifier is a machine-learning model trained on a large corpus of text to distinguish human-written from AI-generated content. The company acknowledges that no tool can recognize all AI-written material with 100% accuracy, but it believes the classifier can still play a meaningful role in countering false claims of human authorship.
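OpenAI has not published the internals of its classifier, but as a rough illustration of the idea, detectors in this space often lean on statistical signals. One commonly cited heuristic is "burstiness": human writing tends to mix long and short sentences, while model output is more uniform. The sketch below is a toy example of that heuristic, not OpenAI's method; the threshold value is arbitrary and chosen only for illustration.

```python
# Toy sketch of one heuristic sometimes used by AI-text detectors
# (NOT OpenAI's actual classifier): "burstiness", i.e. how much
# sentence lengths vary. Human prose is typically more varied.
import re
import statistics


def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths in words; higher = more varied."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)


def classify(text: str, threshold: float = 3.0) -> str:
    # threshold is an arbitrary illustrative cutoff, not a calibrated value
    if burstiness(text) < threshold:
        return "possibly AI-generated"
    return "likely human-written"
```

A production classifier like OpenAI's is far more sophisticated (a neural model trained on paired human and AI text), but the same basic shape applies: extract signals from the input and map them to a verdict with some confidence.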
According to OpenAI’s internal testing, the classifier correctly labels only 26% of AI-written text as "likely AI-written," while incorrectly flagging human-written text as AI-written 9% of the time. The classifier’s reliability improves with the length of the input text, and the company says the new classifier is substantially more dependable than its predecessor on text produced by current AI systems.
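The two figures measure different things: the 26% is a detection rate over AI-written samples, while the 9% is a false-positive rate over human-written samples. A quick worked example (with hypothetical sample counts chosen to match the reported percentages) makes the distinction concrete:

```python
# Worked example of the two reported metrics. The counts below are
# hypothetical, picked only to reproduce the 26% / 9% figures.
def rates(flagged_ai: int, total_ai: int,
          flagged_human: int, total_human: int) -> tuple[float, float]:
    detection_rate = flagged_ai / total_ai        # AI text correctly flagged
    false_positive_rate = flagged_human / total_human  # human text wrongly flagged
    return detection_rate, false_positive_rate


dr, fpr = rates(26, 100, 9, 100)
print(f"detection rate: {dr:.0%}, false-positive rate: {fpr:.0%}")
# prints "detection rate: 26%, false-positive rate: 9%"
```

Note the asymmetry: a low false-positive rate matters most in settings like academic-integrity checks, where wrongly accusing a human author is costlier than missing some AI text.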
To solicit feedback from the wider community and encourage ongoing development, OpenAI has made the AI classifier publicly available. The company is also working with educators to develop resources on the appropriate use of AI in education. Anyone interested in testing the classifier can do so on OpenAI’s website.
Ultimately, OpenAI’s AI classifier is a significant step toward addressing growing concerns about the misuse of AI text-generation tools. While far from flawless, it has the potential to play a meaningful role in countering fraudulent claims of human authorship and encouraging the responsible use of AI in education. OpenAI’s decision to make the classifier publicly available is commendable, as it allows the broader community to contribute to its development and improvement.