A new report from IPG Mediabrands calls out Facebook, Instagram, YouTube and Twitter, in particular, for allowing misinformation to spread.
But the report from Mediabrands’ Magna research arm, “The Dis/Misinformation Challenge for Marketers,” also stresses the advertising community’s need to intervene forcefully to protect brands, as well as the public good.
Specifically, it argues that brands should assess how well each platform aligns with their values, and shift media investments toward platforms that are implementing real, effective steps to stop misinformation and away from those that are not.
“If brand values or corporate social responsibility principles do not align with the ability of any given platform to moderate their platform, then serious consideration should be given to whether that platform is appropriate for the brand,” the report states.
“Marketers are right to be concerned when they find their advertising near misleading content as, unchecked, it could harm their reputations and the communities they serve,” said Harrison Boys, director of standards and investment product, EMEA, at Magna and an author of the report. “The industry, which joined forces against online hate speech and supported online privacy, needs to take a stand against misinformation and disinformation today.”
The report notes, for example, that 85% of U.K. consumers polled by the Trustworthy Accountability Group and Brand Safety Institute said they would reduce or stop buying brands that advertise near misinformation about COVID.
The authors acknowledge that assessing platforms on the critical dis-/misinformation factor is difficult, since each has its own complex, and often unclear, policies.
To assist, the report includes a breakdown of key specifics.
For instance, only LinkedIn, Pinterest and Twitch explicitly prohibit user-generated misinformation in their policies, and only those platforms plus Snapchat and TikTok prohibit disinformation. The other majors — including Facebook, Instagram, YouTube and Twitter — have conditional policies that leave room for users to circumvent the goal of stopping mis- and disinformation.
The authors note that Pinterest has made a “U-turn” on handling misinformation since COVID, now suspending accounts that violate its policy and fact-checking ones with large followings.
They also describe Reddit as being at a “turning point” in which community moderation is helping to balance misinformation in some subreddits.
Interestingly, seven of the 10 platforms analyzed explicitly prohibit misinformation from advertisers; the remaining three (TikTok, Twitter and YouTube) have “conditional” policies for advertisers.
“While some platforms have policies on disinformation and misinformation, they are often vague or inconsistent, opening the door to bad actors exploiting platforms in a way that causes real-world harm to society and brands,” said Joshua Lowcock, global chief brand safety officer and U.S. chief digital and information officer at Mediabrands’ UM Worldwide agency, and an author of the report.
Read Full Article on MediaPost.