
YouTube has a couple of new updates for both creators and viewers: AI-powered protections for teen viewers are launching just as the site rolls back some of its more controversial profanity restrictions.
A few years ago, YouTube found itself in hot water over changes to its advertiser-friendly content guidelines, which were designed to align more closely with broadcast regulations. Under the late-2022 policy, profanity within the first 20 seconds of a video, or “excessive” swearing in a video of any length, could result in demonetization. After creator backlash, the policy was narrowed in May 2023 to cover “strong” profanity (i.e., the f-word) and to focus primarily on a video’s first 7 seconds. Two years after that change, YouTube is getting more lenient.
The new policy is broken down in a video featuring Conor Kavanagh, YouTube’s Head of Monetization Policy Experience. Moving forward, the company no longer imposes a time restriction on profanity within a video, meaning those videos can stay monetized. Kavanagh says the change comes thanks to new guidelines around broadcast standards, along with improvements to how advertisers can target their audiences and, of course, years of creator feedback.
There are, however, a couple of restrictions still in place. Kavanagh says placing swear words in a video’s title or thumbnail will result in demonetization, while high-frequency use of “strong” profanity similarly “remains a violation of the advertiser-friendly content guidelines.” His example of the latter is a compilation of the “best swearing” from a character in a show, where most of the video’s sentences include cuss words. I’m immediately reminded of best-of clips from shows like The Thick of It or its American adaptation Veep, both of which pack frequent profanity into some of their funniest moments. Kavanagh also says community guidelines still apply to any use of swearing; this isn’t a green light for harassment.
Simultaneously, YouTube is changing how it protects teens on the platform. Rolling out to a small group of US-based users over the next few weeks, the company’s new machine learning tools work to estimate a viewer’s age. The system disregards the age given during account creation, instead determining whether the viewer is 18 or older based on signals like video searches, the categories of videos watched, and how long the account has been active.
If YouTube’s new system identifies an account as belonging to a teenage viewer, that account will automatically have personalized ads disabled, digital wellbeing tools enabled, and safeguards applied to its recommendations. The company does allow adults caught in a false positive to remove these restrictions, though they’ll have to verify their age with either a government ID or a credit card.
