Silence of the brands: Why marketers have nothing to say about brand safety ~via Jack Neff
An important topic covered here in detail by journalist Jack Neff, whom I met years ago when he interviewed me while I was CMO at e.l.f. Cosmetics. This is sadly just the tip of the iceberg of corporate consumer brands stepping away from important responsibilities, including support of women in leadership roles (and equal pay across the board), support of consumers’ right to a voice via social media, and support for basic human rights in general.
When “brand safety” becomes the priority over values, what we’re really seeing is brands choosing comfort over courage. And while that may reduce short-term risk, it quietly chips away at trust, because people notice what you don’t say just as much as what you do.
Silence may feel safe in the moment… but relationships are built on consistency, clarity, and a willingness to stand for something, even when it’s not convenient.
Because in the end, protecting your brand shouldn’t come at the cost of weakening your reputation. After all, a brand is what a business does… a reputation is what people remember and share. -Ted
Unilever's "Cost of Beauty" dealt with the harm of social media addiction. Like other marketers, the company has been silent about the verdict.
Litigation, market clout, futility and big platform ‘generosity’ stifle brands
Recent civil verdicts against Meta and Google finding children were harmed by their platforms have spawned plenty of talk about a “tobacco moment” for Big Social. None of that chatter came from brands whose ad dollars support those companies.
Here are a few likely reasons why.
Marketers seem to believe they can’t live without the digital giants, which collectively represent over half of global media spending. Litigation by the Federal Trade Commission and social platforms has eviscerated organized brand safety efforts and made historically cautious corporate communicators even more so. The futility of boycotts, automated controls and industry self-regulation makes it questionable whether pushing for brand safety even works. And those big digital platforms are generous about giving back millions of the billions they take in to industry groups.
Even so, the silence of the brands is noticeable and awkward, since only a few years ago prominent advertisers at least gave verbal support for brand safety on the digital platforms where they advertised. They joined with agencies and big digital players to form the Global Alliance for Responsible Media.
Discounting the ‘Cost of Beauty’
Leading the charge on social media harm to children’s mental health, the precise focus of recent litigation and jury verdicts, was Unilever’s flagship Dove brand. In 2023 Dove released a film about a girl driven to an eating disorder by toxic social media content (which sure looked like Instagram posts). This poignant film, hard to watch without tearing up, could easily have worked in opening or closing arguments by New Mexico’s attorney general or lawyers representing the plaintiff known as Kaley.
Dove’s “Cost of Beauty” won two Gold, three Silver and one Bronze Lion at Cannes in 2023. Unilever’s Cannes press briefing that year was held, ironically, at Meta’s space in the Majestic. So I asked if Meta had any hard feelings about the video, noting “probably not, since we are in a Meta space.” A now former Unilever executive noted that both good and bad comes from social media, and that the company was lobbying against the bad.
And true, in conjunction with that video, Dove championed the Kids Online Safety Act, which would have enacted new protections. The bill had broad bipartisan support but was killed in the House by Speaker Mike Johnson. A follow-up effort, likewise with broad bipartisan support, now faces a similar fate, albeit with no obvious Dove backing.
I reached out to marketers that have been vocal about brand safety in the past, including Unilever and Procter & Gamble Co., plus the Association of National Advertisers, and none responded with comments.
That is, silence.
Brand social media dependence issues
So I’ll give my own read on why brands are saying nothing.
For one, marketers may individually have higher standards than their companies feel they can afford.
For example, Unilever executives at the Consumer Analyst Group of New York in February pointed out that the company works with 300,000 creators globally and tripled the number it works with in the U.S. in the past year or so. These creators and their content are largely hosted on and distributed by Meta and Google platforms and amplified there with paid support. Such creator content is increasingly central to Unilever’s media and digital commerce strategies.
Like brands, a lot of people have a social media dependency problem. The “brand safety” issues identified in recent lawsuits and Dove’s video are different from, and more fundamental than, ad adjacency problems, in which ads show up next to nasty content that slips through moderation cracks.
Addictive engagement algorithms are a feature, not a bug, of the Big Social business model. They're at the core, not the fringe. But they feed hate speech to racists, suggestive (or worse) child content to pedophiles, divisive political content to unbalanced partisans, and toxic beauty content to girls vulnerable to developing eating disorders. Along with the more benign alignment of interests and content they foster, these algorithms maximize user time on site, expand ad inventory and generate billions in revenue.
I’ll leave it to others, such as Jeffrey Horwitz and the team at The Wall Street Journal, to document whether the impact these algorithms have was ever a surprise to platform executives. Short answer: No. Internal watchdogs sounded alarms for years, then went public.
Brands that fund these platforms can’t plead ignorance either. “Purpose” used to be a thing in marketing. Maybe it still is, though the word has been shunned by some companies (e.g. P&G). Regardless, there’s no better way to demonstrate purpose than spending media dollars in ways that advance what a brand stands for and the wellbeing of its consumers. When brands don’t do that, “purpose” doesn’t mean much.
It’s easy to argue that digital platforms are so big that brands have no choice. But while it may take more work, there’s always a choice.
Chilling effect on ‘brand safety’
This is true despite litigation by some publishers and the Federal Trade Commission aiming to take away that choice, by compelling spending on right-wing content and preventing alleged collusion against it.
Marketers have been scared away from talking about “brand safety” since congressional investigations and lawsuits against the now-defunct Global Alliance for Responsible Media, Unilever, WPP and others in 2024. The FTC stifled the industry further by extracting a consent decree from Omnicom as the price of approving its Interpublic acquisition. Then the FTC, joined by eight state attorneys general, last week applied essentially the same consent order against Publicis, WPP and Dentsu.
So the four biggest agency holding companies have agreed not to suggest to clients that they spend, or not spend, based on media quality monitoring, which is construed by the Republican FTC or states as political. The latest deals with Publicis, WPP and Dentsu also prohibit those agencies from applying DEI principles, such as ethnicity or gender of publisher ownership, to media decisions.
These orders don’t prohibit agencies from buying on these criteria, but only at the direction of individual clients. It’s not yet clear whether the FTC will take the next step of going after the advertisers. But the commission has been accumulating evidence to support that.
The FTC last year issued civil subpoenas to 14 companies and organizations that monitor brand safety and journalism standards among publishers and platforms, according to a document I obtained. It was a fishing expedition for any communications these entities had with clients about brand safety. The consent orders give the FTC additional access to agency communications with clients about brand safety.
Whether brand safety verification vendors are performing as advertised in preventing ads from appearing against objectionable content also has come up in at least some interviews with FTC investigators, according to people familiar with the matter. But it’s not yet clear whether the FTC will take action here. The commission and verification vendors didn’t respond to requests for comment.
What FTC orders mean
Here's what marketers should know about the FTC’s brand safety crackdown, based on consent orders released to date.
Marketers who use one of the agencies covered by FTC orders should know that their communications with those agencies on brand safety subjects could be retained for up to five years and turned over to the FTC.
The agencies must submit compliance reports annually.
Each of the four holding companies must employ compliance monitors approved by the FTC. Those monitors can’t be fired without FTC consent. That’s four guaranteed jobs, at least, in the agency world for the next five years.
Brand safety issues covered in these orders are about alleged collusion to divert funds from right-wing media or buy ads based on DEI. They don’t cover the mental health impact of social media on kids, which remains a bipartisan concern.
MRC’s brand safety ‘clarification’
Having the FTC knock on your door may be one reason for caution about hiring a brand safety vendor. Another is that it's been unclear what you're paying for.
Adalytics research and senators’ inquiries last year about ads appearing near child sexual abuse material and adult porn raised questions about what Media Rating Council accreditation of brand safety verification vendors actually means. After all, numerous brands that were paying for brand safety safeguards still had their ads show up in embarrassing places.
The MRC quietly clarified and tightened what its brand safety accreditation is supposed to mean in October, with one of its least conspicuous announcements ever, issued on a Saturday without a press release. The industry self-regulatory body acknowledged marketplace confusion.
Under the MRC’s new policy, safeguards should not be represented as providing “brand safety” if they’re only applied sitewide, or at a “property level,” rather than on a specific page or URL, i.e. the “content level.” This policy still doesn’t make vendors disclose precisely how they determine whether a site is safe or unsafe, at least not to marketers who on background have expressed frustration about trying to get those answers.
The MRC’s new policy took effect April 18, after a six-month grace period. But a quick review April 20-21 of two verification vendors’ marketing posts or search ads still showed the phrase “brand safety” applied to supposedly MRC-accredited services that only monitor at the property level, not the page level, or to services suggesting brand safety accreditation that appears not to exist, based on the MRC website. So the MRC may have work to do in enforcing this new policy.
The MRC’s October announcement, by the way, came two days after news that Meta had lost brand safety accreditation because it no longer wanted to undergo brand safety audits. That was a rare content-level MRC brand safety accreditation, which lasted less than a year.
As opaque as all this transparency seems, here’s a short rule of thumb: Automated vetting of every page or URL where ads appear is rare to non-existent, regardless of how often the acronym “AI” is inserted into marketing communications. But it's not supposed to be labeled “brand safety” by an MRC-accredited service unless it applies to EVERY SINGLE PAGE where ads appear.
Media consultancy ID Comms suggests a simpler approach – make agencies accountable. That includes having CMOs write a “brand safety” manifesto incorporating policies about where ads should run into agency contracts. Embedding brand safety accountability directly into contracts with ad tech vendors, as Bayer has been doing, may make standards easier to enforce by eliminating the middle players.
Business case for brand safety
Aside from that, while embarrassing ad adjacencies are no fun and can even result in a regulated brand violating laws, there may be bigger reasons to be concerned about backlash against spending on platforms found in recent judgments to have harmed children.
“Parents, especially women, are the ones who buy their products,” said Eric Feinberg, vice president of content moderation at the Washington, D.C. based non-profit Coalition for a Safer Web. “Eighty percent of product purchases are decided by females. Procter & Gamble, Unilever, Coca-Cola, pick a product. And they do nothing. No statement.”
Feinberg has been a thorn in the sides of big digital platforms for nearly a decade, finding and disseminating examples of brand ads showing up against exhortations to terror attacks, hate speech, drug sales and more. He was a driving force behind Google’s 2017 brand safety crisis, among other things.
He’s also well aware of how platform engagement algorithms work, because they’re how he finds all those unfortunate ad adjacency screenshots he’s sent to news outlets for years. If you click on enough ISIS videos, for example, you’ll keep seeing more of them. Before long, your feed becomes heavily laden with such videos, alongside ads from Febreze, Dove, etc.
Why no ANA action?
Feinberg also noted the reticence of the ANA regarding the recent Meta and Google verdicts. The group's recent history shows at least 30 million good reasons to keep quiet.
The ANA, in response to concerns about the mental health impact of social media on kids back in 2022, did consider but never pursued an industry self-regulatory body to regulate social media. At that point, the ANA also was planning, but never executed, a second phase of its programmatic transparency initiative focused on big digital walled gardens, not just the smaller open web and connected TV marketplaces.
That was before the ANA launched Aquila, a reach and frequency analytics business funded by big social and digital platforms. Aquila brought in $14.4 million in 2024 (the most recent year for which IRS data is available), representing nearly all of ANA’s 23% revenue increase that year. The ANA-run LLC is currently doing pilot tests for advertisers, subsidized by Meta, Google, Amazon and TikTok, which as of last year collectively had committed $30 million to back it.
The ANA is focusing somewhat on brand safety. Just not on those platforms. The ANA Programmatic Transparency Benchmark has recently added brand safety to what it tracks in open web trading. That doesn’t extend to the big digital platforms underwriting Aquila.
Given the need for antitrust exemptions in the wake of FTC actions and GARM's collapse, and the financial muscle big media companies wield, the government probably will have to be involved in any meaningful social media regulation. Feinberg does have a plan here. CSW has proposed a Social Media Standards Board, which would be blessed by Congress with an antitrust waiver and work like a new industry self-regulatory body, with involvement from digital platforms and advertisers. But, like KOSA, the SMSB hasn’t passed Congress, where Big Social is also well represented via lobbying.
Absent legislation, litigation may fill the breach. Whether verdicts against big digital players will make them change as advertiser dollars keep rolling in is debatable. But direct-response trial lawyer ads for social media addiction lawsuits keep popping up everywhere but on Meta, suggesting an emerging industry that might have an impact.

