YouTube has recently updated its guidelines in response to the new opportunities and issues that artificial intelligence (AI) brings to content creation. The change, introduced in June, lets people request the removal of content that uses AI to imitate their voice or likeness, under YouTube's privacy policy.
By revising its privacy policies, YouTube sets out the process it expects individuals to follow when they find "AI-generated or other synthetic content that looks or sounds like you."
YouTube says these standards exist to shield users from the privacy issues such content can raise, and they apply to every person globally, regardless of the law in their country.
Highlights of the new YouTube policy
- Users can flag content in which AI generates a convincing imitation of their likeness.
- The policy covers both visual and audio impersonation.
- Requests are assessed whether or not the content is labeled as AI-generated.
- YouTube considers how realistically the person is depicted.
- YouTube considers whether the person can be uniquely identified.
- YouTube considers whether the content contains parody, satire, or other public interest value.
- Particular attention is paid to content featuring celebrities or other high-profile individuals, especially when it depicts sensitive behavior such as criminal activity or political campaigning.
YouTube says it will then notify the uploader of the privacy complaint and give them the chance to remove or edit the material in question. Uploaders get only 48 hours to respond to a complaint, which seems unreasonably short.
To be clear, removal means taking the video down entirely, although blurring the faces of the people affected is an acceptable alternative. Simply making the video private does not qualify, since the uploader could switch it back to public at any time.
Although YouTube itself offers AI-powered features such as comment summarization and conversational tools, the company stresses that labeling content as AI-generated does not exempt it from the Community Guidelines.