YouTube to Remove AI-Generated Content That Impersonates Individuals, Label Synthetic Videos



Generative artificial intelligence (AI) has taken off over the past year, with powerful AI chatbots, image and video generators and other AI tools flooding the market. The new technology has also posed fresh challenges around responsible AI use, misinformation, impersonation, copyright infringement and more. Now, YouTube has announced a new set of guidelines for AI-generated content on its platform to tackle these concerns. Over the coming months, YouTube will roll out updates that inform viewers about AI-generated content, require creators to disclose their use of AI tools, and remove harmful synthetic content where necessary.

YouTube announced a slew of new policies related to AI content on the platform via its blog, detailing its approach to “responsible AI innovation.” According to the popular video sharing and streaming platform, it will inform viewers in the coming months when the content they’re seeing is synthetic. As part of the changes, YouTube creators will also have to disclose when their content is synthetic or has been altered using AI tools. This will be achieved in two ways: a new label added to the description panel that clarifies the synthetic nature of the content, and a second, more prominent label on the video player itself for certain sensitive topics.

The streaming service also said it would act against creators who do not follow its new guidelines on AI-generated content. “Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties,” it said in the blog post.

Additionally, YouTube will remove some synthetic media from its platform regardless of whether it is labelled. This includes videos that flout YouTube’s Community Guidelines. Creators and artists will also be able to request the removal of AI-generated content that impersonates an identifiable individual by simulating their face or voice. Content removals will also apply to AI-generated music that mimics an artist’s singing or rapping voice, YouTube said. These AI guidelines and remedies will roll out on the platform in the coming months.

YouTube will also deploy generative AI techniques to detect content that violates its Community Guidelines, helping the platform identify and catch potentially harmful and violative content much more quickly. The Google-owned platform also stated that it would develop guardrails to prevent its own AI tools from generating harmful content.

Earlier this month, YouTube launched “a global effort” to crack down on ad-blocking extensions, leaving users no choice but to subscribe to YouTube Premium or allow ads on the site. “The use of ad blockers violates YouTube’s Terms of Service. We’ve launched a global effort to urge viewers with ad blockers enabled to allow ads on YouTube or try YouTube Premium for an ad-free experience. Ads support a diverse ecosystem of creators globally and allow billions to access their favourite content on YouTube,” the platform said in its statement.


