FEC Asks for Public Comment on Petition for Rulemaking on the Use of Artificial Intelligence in Political Ads

The Federal Election Commission last week voted to open for public comment the question of whether to start a rulemaking proceeding to declare that the use of “deepfakes” or other AI technology to generate false images of a candidate doing or saying something, without a disclosure that the image, audio, or video was generated by artificial intelligence and portrays fictitious statements and actions, violates the FEC’s rules.  The FEC rule that is allegedly being violated is one that prohibits a candidate or committee from fraudulently misrepresenting that they are “speaking or writing or otherwise acting for or on behalf of any other candidate or political party or employee or agent thereof on a matter which is damaging to such other candidate or political party or employee or agent thereof.”  In other words, the rule prohibits one candidate or committee from falsely issuing statements in the name of an opposing candidate or committee.  The FEC approved the Draft Notice of Availability to initiate the request for public comment on a second rulemaking petition filed by the group Public Citizen asking for this policy to be adopted.  That Notice of Availability was published in the Federal Register today, initiating the comment period.  The deadline for comments is October 16, 2023.  This is just a preliminary request for comments on the merits of the Public Citizen petition and on whether the FEC should move forward with a more formal proceeding.

As we wrote in an article a few weeks ago, the FEC had a very similar Notice of Availability before it last month but took no action, after apparently expressing concerns that it lacks statutory authority to regulate deliberately deceptive AI-produced content in campaign ads.  Public Citizen’s second petition apparently addressed that concern adequately.  The Notice published in the Federal Register today at least starts the process, although it may be some time before any formal rules are adopted.  As we noted in our article, a few states have already taken action to require disclosures about AI content used in political ads, particularly ads run in state and local elections.  Thus far, there is no similar federal requirement.

Stations still need to be careful accepting any attack ad from a non-candidate organization that puts words into the mouth of a candidate – whether through AI or simply through selective editing of the words of the candidate being attacked.  As our last article on this topic warned, once a station is on notice that claims made in an ad are false, it has an obligation to review those claims and determine whether the continued airing of the ad could be defamatory or otherwise impose liability on the station.  The last two publicized cases where broadcasters were sued by candidates over the content of third-party ads both arose from apparent selective editing of a candidate’s words by non-candidate organizations.  In both cases, the candidates alleged that the editing conveyed false information.  Whether or not these cases ultimately resulted in liability, they certainly cost the named stations time and money.  So be prepared.  Follow developments in government regulation of the use of AI in political ads carefully to see what obligations are imposed on candidates, on other political organizations, and on the media outlets that run the ads they produce.

Courtesy Broadcast Law Blog