Meta to Require Labeling of Digitally Altered Political Ads (Including Those Generated by AI) – Looking at the Rules That Apply to Various Media Platforms and That Limit Such Policies on Broadcast and Cable

Facebook parent Meta announced this week that it will require labeling of ads about elections and political and social issues that have been created or altered using artificial intelligence or other digital tools.  Earlier this week, we wrote about the issues that AI in political ads poses for media companies and about some of the governmental regulations that are being considered (and the limited rules that have thus far been adopted).  These concerns are prompting all media companies to consider how AI will affect them in the coming election, and Meta’s announcement shows how these considerations are being translated into policy.

The Meta announcement sets out the situations in which labeling of digitally altered content will be required.  Disclosure of the digital alteration will be required when digital tools have been used to:

  • Depict a real person as saying or doing something they did not say or do; or
  • Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
  • Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.

The Meta announcement makes clear that using AI or other digital tools to make inconsequential changes that don’t affect the message of the ad (it gives the examples of adjusting size, cropping an image, color correction, and image sharpening) will be permitted without disclosure.  But even these changes can trigger disclosure obligations if they are in fact consequential to the message.  In the past, we’ve seen allegations of attack ads using shading or other seemingly minor changes to depict candidates in ways that make them appear more sinister or that otherwise convey a negative message – presumably the kinds of uses that Meta’s disclosure requirement is meant to capture.

This change will be applicable not just to US elections, but worldwide.  Already, I have seen TV pundits, when asked about the effect that the new policy will have, suggest that what is really important is what other platforms, including television and cable, do to match this commitment.  So we thought that we would look at the regulatory schemes that, in some ways, limit what traditional electronic media providers can do to censor political ads.  As detailed below, broadcasters, local cable companies, and direct broadcast satellite television providers are subject to statutory limits under Section 315 of the Communications Act that forbid them from “censoring” the content of candidate advertising.  Section 315 essentially requires that candidate ads (whether from a federal, state, or local candidate) be run as they are delivered to the station – they cannot be rejected based on their content.  The only exception thus far recognized by the FCC has been for ads whose content violates federal criminal law.  There is thus a real question as to whether a broadcaster or cable company could impose a labeling requirement on candidate ads given their inability to reject a candidate ad based on its content.  Note, however, that the no-censorship requirement applies only to candidate ads, not to those purchased by PACs, political parties, and other non-candidate individuals or groups.  So policies like the one adopted by Meta could be considered for these non-candidate ads even by traditional platforms.

It seems like almost every year, as elections begin to heat up, the question arises as to whether the tools that some online platform adopts to combat political misinformation can be applied to broadcasters and other media – and each year we write about the differences in the regulatory schemes applicable to online platforms like Facebook and those that apply to traditional electronic media providers.  Facebook has been particularly active, and particularly public, about trying to curb abuses in political advertising, so we’ve written on the differing US regulatory schemes many times.

We wrote about the distinction last year, when Facebook decided to ban political ads in the week before the 2022 elections.  In June 2021, we wrote about Facebook’s plans to end its policy of not subjecting posts by elected officials to the same level of Oversight Board scrutiny that it applies to other platform users.  Facebook’s announced policy had been that the newsworthiness of posts by politicians and elected officials outweighed the uniform application of its Community Standards – although it did make exceptions for calls to violence, questions of election integrity, and posts that linked to other potentially offensive content.  Just a year before that, there were calls for Facebook to take more aggressive steps to police misinformation on its platforms.  These calls grew out of the debate over the need to revise Section 230 of the Communications Decency Act, which insulates online platforms from liability for posts by unrelated parties on those platforms (see our article here on Section 230).

In past posts, we’ve looked in detail at the different regulations that apply to online platforms versus those that apply to broadcasters, cable companies, and other traditional media platforms.  In a 2020 post, we compared Facebook’s policy with the laws that apply to other communications platforms, including broadcasters and cable companies.  A lightly edited version of what we wrote then appears below and makes the regulatory distinctions clear:

In January 2020, the New York Times ran an article seemingly critical of Facebook for not rejecting ads from political candidates that contained false statements of fact.  We have already written that this Facebook policy matches the policy that Congress has imposed on broadcast stations and local cable franchisees who sell time to political candidates – they cannot refuse an ad from a candidate’s authorized campaign committee based on its content – even if it is false or even defamatory (see our posts here and here for more on the FCC’s “no censorship” rule that applies to broadcasting and local cable systems).  As the Times article raised this issue again, we thought that we should provide another brief recap of the rules that apply to broadcast and local cable political ad sales and contrast them with those that currently apply to online advertising.

As stated above, broadcast stations, local cable, and DBS systems cannot censor candidate ads – meaning that they cannot reject these ads based on their content.  Commercial broadcast stations and DBS companies cannot even adopt a policy saying that they will not accept ads from federal candidates, as there is a right of “reasonable access” (see our article here, and as applied here to fringe candidates) that compels them to sell reasonable amounts of time to federal candidates who request it.  Contrast this to, for instance, the platform then known as Twitter, which for a time (but no longer) banned all candidate advertising on its platform (see our article here), and the Facebook policy that prohibited new political ads in the week before the November 2022 election.  Such blanket bans on advertising from federal candidates would be prohibited on a commercial broadcast station.

There is no right of reasonable access to broadcast stations for state and local candidates, though once a station decides to sell advertising time in a particular race, all other rules, including the “no censorship” rule, apply to those ads (see our article here).  Local cable systems are not required to sell ads to any political candidates but, like broadcasters with respect to state and local candidates, once a local cable system sells advertising time to candidates in a particular race, all other FCC political rules apply.  National cable networks (in contrast to the local systems themselves) have never been brought under the FCC’s political advertising rules for access, censorship, or any other Communications Act requirements – although from time to time there have been questions as to whether those rules should apply to cable networks, as the FCC has applied them to broadcast networks.  But thus far cable networks have been treated more like online platforms, to which the FCC rules generally do not apply.

Disclosure is another area where the government-imposed rules differ depending on the platform.  Broadcasters, local cable systems, and DBS operators have extensive disclosure obligations, in online public files, requiring that they detail advertising purchases by candidates and other issue advertisers.  We wrote (here and here) about the enhanced disclosure rules adopted in the last few years for federal issue advertising (including ads supporting or attacking federal political candidates purchased by groups other than the candidate’s own campaign committee).  Cable networks and online platforms do not have federal disclosure obligations.  Some have voluntarily adopted their own disclosure policies (see, for instance, Facebook’s qualification policy for political advertisers, here).  In addition, a number of states have imposed obligations on these platforms (see, for instance, our article here).  These rules are not at all uniform, and some are stricter than others.  See, for instance, our article here on the enforcement of Washington State’s very detailed political advertising disclosure rules, which has resulted in legal actions against Facebook seeking significant penalties.  While a number of states have imposed some form of political disclosure or recordkeeping rules on digital media companies, as we wrote four years ago, at least one appellate court has determined, in connection with Maryland’s online political advertising disclosure obligations, that such rules are unconstitutional when imposed on online platforms rather than on advertisers.

Certainly, it can be argued that there are technical differences among the platforms that justify different regulation and different actions by the platforms themselves.  Online platforms clearly have the potential to target advertising messages to a much more granular audience.  The purpose of this article is not to argue which regulatory scheme is best – just to point out that these differences exist.  We are already well into the political season, with advertising running for the 2024 election, so watch as various jurisdictions tackle questions about how to regulate political advertising, both online and on more traditional media platforms.  We’ve written about the FEC’s consideration of rules governing AI-generated content in political ads.  Other rules from other federal and state authorities may follow.

Obviously, discussions about the proper regulatory standards to apply to online platforms, not just in the area of political advertising but more broadly, continue today (see, for instance, our articles here and here).  We will be following those developments in future posts, and all media companies should be watching closely as rules are developed.

Courtesy Broadcast Law Blog