ChatGPT has rejected 250,000 deepfake requests
Many people tried to use OpenAI’s DALL-E image generator during the election season, but the company said it was able to prevent them from using it as a tool to create deepfakes. ChatGPT rejected more than 250,000 requests to generate images of President Biden, President-elect Trump, Vice President Harris, Vice President-elect Vance and Governor Walz, OpenAI said in a new report. The company explained that this was a direct result of a safety measure it had previously implemented, under which ChatGPT refuses to generate images of real people, including politicians.
OpenAI had been preparing for the US presidential election since the beginning of the year. It developed a strategy aimed at preventing its tools from being used to spread misinformation and ensured that people asking ChatGPT about voting in the US were directed to CanIVote.org. OpenAI said 1 million ChatGPT responses directed people to that website in the month leading up to Election Day. The chatbot also generated 2 million responses on Election Day and the day after, telling people who asked for results to check the Associated Press, Reuters and other news sources. OpenAI also made sure that ChatGPT’s responses “did not express political preferences or endorse candidates even when explicitly asked.”
Of course, DALL-E isn’t the only AI image generator out there, and there are plenty of election-related deepfakes circulating on social media. One such deepfake featured Kamala Harris in a campaign video that was altered to say things she did not say, such as “I was elected because I am a diverse employer.”