Meta needs revised rules for sexually graphic deepfakes, Oversight Board says

Meta’s Oversight Board is urging the company to review its rules regarding sexually explicit deepfakes. The board made these recommendations as part of its decision in two cases involving AI-generated images of public figures.

The cases stem from two user complaints about AI-generated images of public figures, though the board declined to name the individuals. One post, which appeared on Instagram, depicted a nude Indian woman. The post was reported to Meta, but the report was automatically closed after 48 hours, as was a subsequent user appeal. The company eventually removed the post after it came to the attention of the Oversight Board, but the board nonetheless overturned Meta’s initial decision to leave the image up.

The second post, shared on a Facebook group dedicated to AI art, showed “an AI-generated image of a naked woman with a man groping her breast.” Meta automatically removed the post because it matched an entry in an internal system that identifies images previously reported to the company. The Oversight Board found that Meta was correct to remove the post.

In both cases, the Oversight Board said the AI deepfakes violated company rules prohibiting “sexually offensive” images. But in its recommendations to Meta, the board said the current language of the rules is outdated and could make it harder for users to report AI-generated pornography.

Instead, the board says Meta should revise its policies to clarify that they prohibit non-consensual explicit images that are generated or manipulated by AI. “The majority of non-consensual sexual images distributed online today are created by artificial intelligence models that automatically edit existing images or create entirely new ones,” the board writes, adding that Meta’s prohibition should cover this broader range of editing techniques in a way that is clear to both users and the company’s moderators.

The board also flagged Meta’s practice of automatically closing user reports, which it said could have a “significant human rights impact” on users. However, the board said it did not have “enough information” about the practice to make recommendations.

The spread of explicit AI images has become a hot topic as deepfake porn has become an increasingly widespread form of online abuse in recent years. The board’s decision comes one day after the US Senate passed a bill targeting explicit deepfakes. If passed into law, the measure would allow victims to sue the creators of such images for up to $250,000.

The cases aren’t the first time the Oversight Board has pressured Meta to revise its rules for AI-generated content. In another high-profile case, the board investigated a manipulated video of President Joe Biden. That case eventually led to Meta changing its policies on how AI-generated content is labeled.

