FCC Chair Proposes AI Disclosure Requirements for Political Ads
On May 22, 2024, Federal Communications Commission (FCC or Commission) Chairwoman Jessica Rosenworcel announced that she had circulated to her colleagues a draft notice of proposed rulemaking (NPRM) on the use of artificial intelligence (AI) in political ads on TV and radio. While the NPRM's text is not yet public, if adopted, the NPRM would reportedly seek public comment on whether broadcasters and programming entities must inform consumers when AI tools are used to generate political ads.
The Chairwoman’s proposal would add to an increasingly complex web of state and federal actions to regulate AI — both in the election context and more generally.
Below we summarize what we know about the Chairwoman’s proposal and how it would fit in with existing state regulations regarding AI use in political campaigns and elections.
FCC Proposal to Require AI Disclosures
According to the press release, the NPRM would explore whether the Commission should require broadcasters and programming entities to disclose use of AI-generated content in political ads on TV and radio. The proposal would seek to: (1) establish a consumer’s “right to know” when AI tools are being used in political ads; and (2) protect consumers from potentially “false, misleading, or deceptive” AI-generated political programming.
In particular, the NPRM is expected to address the following topics:
- Whether the Commission should “require an on-air disclosure and written disclosure in broadcasters’ political files” whenever AI-generated content is used in political ads;
- Whether to extend such disclosure requirements to include both candidate and issue advertising;
- Whether to adopt "a specific definition for AI-generated content;" and
- Whether to impose new disclosure obligations on "broadcasters and entities that engage in origination programming, including cable operators, satellite TV and radio providers and section 325(c) permittees."
According to the press release, the proposal would not seek to prohibit the use of AI-generated content in political advertising.
The fate of the Chairwoman's proposal is not yet clear. The day after the announcement, Republican Commissioner Brendan Carr issued a statement opposing the proposal, calling it "as misguided as it is unlawful."
The FCC’s Latest Proposal Adds to Growing Activity and Interest in Regulating AI in Political Communications
The proposed NPRM builds on growing activity focused on the intersection of AI and elections. At the federal level, the FCC's latest action follows the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which encourages the FCC to "consider actions related to how AI will affect communications networks and consumers." The FCC has already begun to consider AI in other contexts. For example, the agency has issued a notice of inquiry seeking input on the use of AI and machine learning to collect and analyze data on non-federal spectrum usage; a notice of inquiry regarding the FCC's authority under the Telephone Consumer Protection Act (TCPA) to enact rules ensuring that AI technologies do not erode consumer protections; and a declaratory ruling confirming that TCPA rules apply to calls that use AI technology to generate voices. The draft NPRM, however, would be the FCC's first potential action on AI with respect to political advertising.
Beyond the federal government, state legislatures have been at the forefront of AI legislation — both generally and with respect to AI used in connection with elections. For example, states like Utah and Colorado have recently enacted AI laws. And several states have adopted laws to regulate the use of AI and synthetic media in political advertisements and communications. At the time of publication, approximately 15 states have enacted such laws, and several other states have passed similar bills that are awaiting executive approval to become law. While each law is distinct, some impose criminal liability for violations of their deep fake prohibitions.
The FCC's Draft Proposal Would Potentially Shift Focus to Broadcast Stations and Programming Entities
Notably, the state laws regulating AI-generated content in political communications appear primarily directed at the distributors of false information or at the campaigns responsible for making required disclosures. While the language in some of these laws may be broad enough to capture broadcasters, broadcasters do not appear to be the primary target. Chairwoman Rosenworcel's proposal, on the other hand, would appear to hold broadcasters and other distribution platforms responsible for complying with the disclosure requirements. The Chairwoman's proposed rules would potentially add to the existing obligations broadcast stations and programming entities face when airing political ads, to the extent they do not preempt the growing labyrinth of state law.
As the 2024 election cycle progresses, all stakeholders — candidates, campaigns, and media platforms — should pay attention to these developments before creating, publishing, or disseminating synthetically altered political content, particularly given the rapidly changing legal landscape.
***
Wiley’s Artificial Intelligence Practice counsels clients on AI compliance, risk management, and regulatory and policy approaches, and we engage with key government stakeholders in this quickly moving area. The Election Law Practice provides incisive and sophisticated legal counsel on all aspects of political law including campaign finance, lobbying, government ethics, and elections. The Media Law Practice provides regulatory and transactional counsel to radio and television broadcasters, as well as content creators and distributors, news organizations, financial institutions and investors, and equipment manufacturers. Please reach out to a member of our team with any questions.
Kevin Nguyen, a Wiley 2024 Summer Associate, contributed to this alert.