The Big Tech behemoth announced the move in a Thursday blog post, updating its policies on ads and monetization in order to “ensure a brand-safe environment” for advertisers, and to “protect users” from “unreliable claims” as well as “fake medical cures or anti-vaccine advocacy.”
“Our advertising and publisher partners … have expressed concerns about ads that run alongside or promote inaccurate claims about climate change,” it said, noting that this is not only bad for business, but impacts content creators as well.
“That’s why today, we’re announcing a new monetization policy for Google advertisers, publishers and YouTube creators that will prohibit ads for, and monetization of, content that contradicts well-established scientific consensus around the existence and causes of climate change.”
While the company did not offer a detailed definition of the proscribed content, it cited a few examples, including posts that deem climate change “a hoax or a scam,” as well as “claims denying that long-term trends show the global climate is warming, and claims denying that greenhouse gas emissions or human activity contribute to climate change.”
Enforcement of the new policy will combine “automated tools” with “human review,” Google added – though YouTube’s algorithmic moderation is not exactly known for its accuracy, having produced numerous “mistaken” bans over the years.
Advertisements and monetization will still be allowed for other climate change-related content, including public policy debates and discussion of “new research” (so long as researchers – Ivy League-educated or otherwise – don’t question the prevailing “consensus” on any particular issue, that is).
Nonetheless, the company insisted that it would “look carefully” for “context” to distinguish between the actual dissemination of false claims and the mere discussion of those claims, such as attempts to rebut or debunk them.
Google and YouTube’s updated ad policies come after the latter platform declared it would ban all “harmful vaccine content” late last month.
The move was part of a broader push against so-called ‘misinformation’ that got underway after the 2016 US presidential election, rapidly escalating in the years since, with the 2020 presidential race, COVID-19 pandemic and January 6 Capitol riot all supercharging an internet-wide censorship campaign.
Tens of thousands of users across dozens of platforms have been banned en masse in recent years, in periodic ‘purges’ over alleged disinfo, ‘conspiracy theories’ and ‘hate speech’ – while content creators who veer outside the scope of establishment opinion increasingly face demonetization and trouble with advertisers.