Van Hollen, Luján, Colleagues Urge the FCC to Require Disclosure of the Use of AI-Generated Content in Political Ads on Radio and TV

U.S. Senators Chris Van Hollen (D-Md.) joined Senators Ben Ray Luján (D-N.M.), Michael Bennet (D-Colo.), Angus King (I-Maine), Amy Klobuchar (D-Minn.), Reverend Raphael Warnock (D-Ga.), Peter Welch (D-Vt.), and Cory Booker (D-N.J.) in urging the Federal Communications Commission (FCC) to adopt the proposed rule requiring disclosure of the use of AI-generated content in political ads on radio and TV.

“We recognize that the use of AI-generated content has many benefits. But like any new technology, AI poses risks to society, risks that are even more pronounced in the context of elections. The use of AI-generated content has the potential to amplify mis- and disinformation, incite political violence, and suppress voter participation,” wrote the Senators. “In addition, foreign actors may use deceptive AI to sow discord and undermine our democracy and faith in elections. Lastly, as AI-generated content becomes more and more advanced, voters may find it difficult to recognize video, images, audio, and text as fake. For this reason, we believe it is imperative that robust transparency and disclosure requirements are in place as soon as possible.”

The full text of the letter is available here and below:

We write to express our support for the Federal Communications Commission’s (FCC) proposal to require disclosure of the use of AI-generated content in political ads on radio and TV. While more must be done to address the risks that AI poses to our elections, we urge the FCC to adopt these rules now, as the 2024 presidential election is less than two months away and, in some states, voters can begin casting ballots as early as this month.

We recognize that the use of AI-generated content has many benefits. But like any new technology, AI poses risks to society, risks that are even more pronounced in the context of elections. The use of AI-generated content has the potential to amplify mis- and disinformation, incite political violence, and suppress voter participation. In addition, foreign actors may use deceptive AI to sow discord and undermine our democracy and faith in elections. Lastly, as AI-generated content becomes more and more advanced, voters may find it difficult to recognize video, images, audio, and text as fake. For this reason, we believe it is imperative that robust transparency and disclosure requirements are in place as soon as possible.

In addition, we support the following specific provisions of the proposed rules. First, we support on-air and written disclosure requirements. Such requirements are the most straightforward way to ensure that the public is notified of the use of AI-generated content in the advertisement they are viewing or hearing. Second, we support applying transparency and disclosure requirements to both candidate and issue advertisements. This will ensure that both types of political ads are subject to the same standards. Next, we support applying the transparency and disclosure requirements to broadcasters as well as to other entities under the FCC’s jurisdiction. Again, this will ensure a more level playing field across mediums. Additionally, we urge the FCC to include an updated definition of “AI-generated content” clarifying that long-standing, basic editing tools are not considered covered content. This will ensure that basic audio and video accessibility and editing tools are not negatively impacted by this necessary rulemaking on artificial intelligence. Lastly, we support a requirement that these rules take effect 90 days prior to an election as well as during the election certification process.

We urge the Commission to finalize and implement these rules as soon as possible. Thank you in advance for your attention to this important issue.