OpenAI Is Hesitant to Release Its ChatGPT Detection Tool, Which Could Infuriate Cheaters

ChatGPT maker OpenAI has new search and voice features on the way, but it also has a tool sitting on the shelf that’s reportedly very good at catching all those fake AI-generated articles you see online these days. The company has been working on it for almost two years, and releasing it would reportedly take little more than flipping a switch. However, the Sam Altman-led company is still weighing whether to do so, since releasing it could anger some of OpenAI’s biggest fans.
This is not the ineffective AI-detection algorithm the company released in 2023; it’s something far more accurate. OpenAI is reluctant to release the new detection tool, according to a report this week from The Wall Street Journal based on anonymous sources inside the company. The system is effectively an AI watermarking scheme: it imprints ChatGPT’s output with subtle patterns that the companion tool can later detect. Like other AI detectors, OpenAI’s system scores a document with a percentage indicating how likely it is to have been created with ChatGPT.
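OpenAI hasn’t disclosed how its watermark actually works, but published text-watermarking research (for example, the “green list” scheme from Kirchenbauer et al.) gives a sense of the general idea: the model’s sampler quietly favors a keyed subset of tokens, and the detector checks whether a suspiciously large share of a document’s tokens falls in that subset. Here is a minimal Python sketch of that approach; the secret key, the green-list fraction, and the function names are illustrative assumptions, not OpenAI’s actual method.

```python
import hashlib
import math

SECRET_KEY = "demo-key"  # stand-in for a provider-held secret (assumption)
GREEN_FRACTION = 0.5     # share of the vocabulary favored at each step (assumption)

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the 'green' list, keyed by the previous
    token and a secret, so the pattern is invisible without the key."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{token}".encode()).digest()
    return digest[0] < GREEN_FRACTION * 256

def watermark_score(tokens: list[str]) -> float:
    """Detector side: z-score for how far the document's share of green tokens
    sits above the ~50% expected in ordinary, unwatermarked text."""
    n = len(tokens) - 1
    if n < 1:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    frac = hits / n
    return (frac - GREEN_FRACTION) * math.sqrt(n / (GREEN_FRACTION * (1 - GREEN_FRACTION)))

# Example: score a (whitespace-tokenized) document; human text should hover near z = 0.
print(watermark_score("the quick brown fox jumps over the lazy dog".split()))
```

During generation the sampler would slightly upweight green tokens, so watermarked text ends up with well over the 50% green share expected by chance; a long document with a large z-score would then be flagged as likely machine-written.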
The system is said to be 99.9% effective based on internal documents, according to the WSJ. That would be far better than the stated performance of other AI-detection software developed over the past couple of years, likely because OpenAI has access to its own secret sauce. Proponents of the program inside the company say it would go a long way toward helping teachers figure out when their students have turned in AI-generated homework.
The company has reportedly held off on releasing the tool for years out of concern that about a third of its users would be turned off by it. Gizmodo reached out to OpenAI for comment, but we did not immediately hear back. The company told the WSJ that the technology “has significant risks that we are weighing while researching alternatives.”
Another problem for OpenAI is the concern that if it releases the tool widely enough, someone could reverse-engineer its watermarking method. There is also the potential for the tool to be biased against non-native English speakers, a known weakness of AI detectors.
Google has developed similar watermarking techniques for AI-generated images and text, called SynthID. That tool isn’t available to most consumers, but at least the company is open about its existence.
As big tech develops ever more ways to spit AI-generated text and images onto the internet, detection tools have struggled to keep up. Teachers and professors are under enormous pressure to figure out whether their students are submitting ChatGPT-written assignments. Current AI detection tools like Turnitin have a failure rate of up to 15%, which that company says it accepts in order to avoid false positives.