Facebook is building tools to help advertisers keep their ad placements away from certain topics in its News Feed.
The company said it will begin testing “topic exclusion” controls with a small group of advertisers. It said, for example, a children’s toy company would be able to avoid content related to “crime and tragedy,” if it wished. Other topics will include “news & politics” and “social issues.”
The company said development and testing of the tools would take “much of the year.”
Facebook, along with players like Google’s YouTube and Twitter, has been working with marketers and agencies through a group called the Global Alliance for Responsible Media (GARM) to develop standards in this area. They’ve been working on actions that support “consumer and advertiser safety,” including outlining definitions of harmful content, standards for reporting, independent oversight and agreeing to build tools that better manage ad adjacency.
The tools for Facebook’s News Feed build on tools running on other areas of the platform, like in-stream video or on its Audience Network, which allows mobile software developers to provide in-app advertisements targeted to users based on Facebook’s data.
The concept of “brand safety” is important to any advertiser that wants to make sure its ads aren’t in close proximity to certain topics. But there’s also been a growing push from the ad industry to make platforms like Facebook safer overall, not just in the areas near their ad placements.
The CEO of the World Federation of Advertisers, which created GARM, told CNBC last summer that the focus had shifted from “brand safety” toward “societal safety.” The crux is that even if ads aren’t appearing in or alongside specific videos, many platforms are financed substantially by ad dollars. In other words, ad-supported content helps subsidize all the ad-free content. And many advertisers say they feel responsible for what happens on the ad-supported web.
That was made abundantly clear last summer, when a slew of advertisers temporarily pulled their ad dollars from Facebook, asking the company to take more stringent steps to stop the spread of hate speech and misinformation on its platform. Some of those advertisers didn’t just want their ads to stay away from hateful or discriminatory content; they wanted a plan to make sure that content was off the platform altogether.
Twitter said in December that it is working on its own in-feed brand safety tools.