News

OpenAI has a working AI detector but doesn’t want to make it public

OpenAI has developed a tool that can detect whether a piece of text was generated by ChatGPT. However, the company is holding off on releasing it to the public.

This news follows OpenAI's earlier shutdown of its AI Text Classifier, which proved unreliable. Its latest solution uses a “highly accurate” text watermarking method.

However, OpenAI has reportedly held back the release for two years, fearing it could change how people perceive the use of AI writing tools. The company also said the detector could be circumvented through “global tampering”, such as rewording outputs via translation services or other AI models; the watermark could even be removed by instructing ChatGPT itself to rewrite the text.

Despite these concerns, OpenAI continues to explore other means of detecting AI-generated content. Its research into metadata-based detection shows promise: unlike watermarking, metadata can be cryptographically signed, which eliminates false positives.
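To see why signed metadata avoids false positives, consider a minimal sketch (not OpenAI's actual scheme; the key, field names, and HMAC choice here are illustrative assumptions). A provenance record only verifies if it was signed with the provider's key, so a match cannot happen by accident:

```python
import hmac
import hashlib
import json

# Hypothetical example: a provider signs metadata about a generation.
# Only records actually signed with the provider's key will verify,
# so a positive result cannot occur by chance (no false positives).

SECRET_KEY = b"provider-secret-key"  # illustrative placeholder only

def sign_metadata(metadata: dict) -> str:
    # Serialize deterministically, then compute an HMAC-SHA256 tag.
    payload = json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_metadata(metadata: dict, signature: str) -> bool:
    # Recompute the tag and compare in constant time.
    expected = sign_metadata(metadata)
    return hmac.compare_digest(expected, signature)

record = {"model": "example-model", "timestamp": "2024-08-05T12:00:00Z"}
sig = sign_metadata(record)

print(verify_metadata(record, sig))                    # True: genuine record
print(verify_metadata({**record, "model": "x"}, sig))  # False: tampered record
```

A real deployment would more likely use asymmetric signatures (e.g. Ed25519), so that anyone can verify a record without holding the secret signing key.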

OpenAI believes this approach will hold increasing importance as the volume of AI-generated text continues to rise. Detecting AI-generated content could prove vital in combating misinformation and ensuring transparency.

Source: 1, 2
Image: Unsplash

Bryan Rilloraza has been a fixture in the local tech scene for over a decade, sharing his perspective as a tech enthusiast and industry veteran. Backed by an MBA from De La Salle University, a Bachelor’s Degree from the University of the Philippines, and 20 years of corporate experience in the telecommunications and banking sectors, Bryan provides a practical, real-world analysis of how technology serves the consumer.
