Facebook is trying to make News Feed safer for brands, testing a way to let advertisers control which posts their ads appear next to, a technical feat that some in the industry thought would be near impossible.

The social media giant announced on Friday that it is experimenting with the new tools for a select group of unnamed advertisers. “These controls will help to address concerns advertisers have of their ads appearing in News Feed next to certain topics based on their brand suitability preferences,” Facebook said in the announcement.

In July, Ad Age was the first to report that Facebook was working on the News Feed problem in response to an outcry from advertisers worried about a rising tide of hate speech and disinformation. At the time, Facebook had brand safety controls in place elsewhere on the platform, but News Feed was a harder environment to crack. News Feed is where 1.84 billion daily users receive streams of content tailored to their interests, and the prospect of filtering all that activity to make it predictable for advertisers is daunting.

Brands have been concerned about abusive posts and videos appearing alongside their messages. It’s a problem that has plagued many companies in social media, including Twitter and YouTube, where everyday users create most of the content.

Facebook has been working with industry groups like the Global Alliance for Responsible Media, an offshoot of the World Federation of Advertisers, to design new brand safety protocols. The effort has attracted a number of platforms to cooperate on issues like defining what constitutes hate speech online. Twitter, YouTube, Reddit, Pinterest, Snapchat and TikTok have also joined the industry-wide initiative. But each platform has unique challenges; for Facebook, News Feed is one of them.

On Friday, Facebook said it would offer “topic exclusion tools.” As an example, Facebook said a brand could select a topic such as “crime and tragedy,” which could be useful for, say, a toy brand. By applying that filter, Facebook would try to prevent the toy maker’s ads from showing up in a person’s News Feed when the surrounding content relates to “crime and tragedy.”
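Facebook has not disclosed how the filter is implemented. Purely as an illustration, the sketch below shows one way a serving system could apply an advertiser’s topic exclusions: classify the posts near an open ad slot, then withhold the ad when a match occurs. Every name here (AdCampaign, classify_topics, ad_is_eligible) is hypothetical, and a real system would rely on machine-learned content classifiers rather than keyword lists.

```python
# Hypothetical sketch of topic-exclusion filtering at ad-serving time.
# None of these names are Facebook APIs; they illustrate the concept only.

from dataclasses import dataclass, field


@dataclass
class AdCampaign:
    advertiser: str
    excluded_topics: set[str] = field(default_factory=set)


def classify_topics(post_text: str) -> set[str]:
    """Stand-in for a content classifier that labels a post with topics.

    In practice this would be a trained model, not a keyword match.
    """
    keywords = {
        "crime and tragedy": ["shooting", "crash", "arrest", "disaster"],
    }
    text = post_text.lower()
    return {topic for topic, words in keywords.items()
            if any(word in text for word in words)}


def ad_is_eligible(campaign: AdCampaign, adjacent_posts: list[str]) -> bool:
    """Withhold the ad if any nearby post matches an excluded topic."""
    for post in adjacent_posts:
        if classify_topics(post) & campaign.excluded_topics:
            return False
    return True


# Example: a toy brand excluding "crime and tragedy"
toy_ads = AdCampaign("ToyCo", excluded_topics={"crime and tragedy"})
feed = ["Local bake sale raises funds", "Highway crash closes lanes"]
print(ad_is_eligible(toy_ads, feed))  # False: the feed contains a tragedy post
```

The hard part, and the reason the feature was considered near impossible, is not this eligibility check but running reliable topic classification over billions of personalized feeds in real time.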

Facebook declined to comment beyond its public announcement, so it is unclear how many exclusion topics are available.

“Providing advertisers topic exclusion tools to control the content their ads appear next to is incredibly important work for us, and to our commitment to the industry,” Carolyn Everson, VP of global marketing solutions at Facebook, stated in the announcement. “With privacy at the center of the work, we’re starting to develop and test for a control that will apply to News Feed. It will take time but it’s the right work to do.”

Privacy concerns are among the issues that have prevented Facebook from sharing much information with brands about the content served to individual users. That makes it harder to give brands context about where their ads run.

Brands, though, have been pushing for more transparency, especially in the past year, as marketers grew concerned about how social media sites were being co-opted by groups spreading hate speech and disinformation. Advertisers took a close look at Facebook during the racial justice protests of 2020 after civil rights groups claimed the company did not do enough to purge hate speech. In July, the NAACP and the Anti-Defamation League led a boycott in which more than 1,000 brands pulled ads from Facebook. Verizon, Bayer, CVS Health, Dunkin’ Brands, Kimberly-Clark Corp., Mars Inc., PepsiCo and a litany of other major corporations joined the protest.

The Facebook brand boycott did not harm the company’s bottom line; its total number of advertisers grew to more than 10 million last year. But Facebook promised changes that would improve brand safety.

Facebook claims to catch at least 95% of posts that would qualify as hate speech before they are seen publicly. To provide more confidence in its enforcement, Facebook has committed to independent audits of its systems to verify their effectiveness.

Joshua Lowcock, chief digital and brand safety officer at the media agency UM, has been working with Facebook and other platforms to fix the problems that have plagued social media.
