Artificial intelligence has been the buzzword of the last few months. Since the public release of ChatGPT, seemingly every tech company has announced a new AI program or some use for AI that will compete with activities currently performed by real people. While AI poses all sorts of questions for society and issues for almost every industry, applications for the media industry are particularly interesting. They range from AI creating music and writing scripts to reporting the news and even playing DJ on Spotify channels. All these activities raise competitive issues, but a number of policy issues have also begun bubbling to the surface.
The most obvious policy issue is whether artistic works created by AI are entitled to copyright protection – an issue addressed by recent guidance from the Copyright Office. That guidance suggests that a work created solely by a machine is not entitled to protection, but that there may be circumstances where a person provides sufficient guidance to the artificial intelligence that the AI is seen as more of a tool for the person’s creativity. In that case, the person can claim to be the creator of the work and receive copyright protection.
But this is likely not the last word on the subject, as the Copyright Office has announced an initiative to review AI policy and four separate listening sessions to discuss AI-related issues. The sessions will address (1) literary works (e.g., books and poems); (2) visual works (e.g., paintings and prints); (3) audiovisual works (e.g., movies and TV); and (4) music. Beyond just the question of whether an AI-created work can be protected, these sessions are likely to get into other issues, including the compensation, if any, that is due from the distributors of AI technology to those in creative industries whose works have been used to train the AI programs so that they can provide the services that they do.
The issue of compensation is being raised in several fields. In the music industry, as AI is creating musical compositions of its own, music industry representatives have been suggesting that existing artists and copyright holders should receive payments for the use of the works that AI considered in “learning” how to create a musical work. The argument is that the works of the human creators paved the way for AI to be able to create the works that it does, so those creators should be compensated for providing the basics on which the AI-created works were built. Of course, to some extent, all music is built on building blocks of prior music. How many blues artists would there be without Leadbelly, Bessie Smith, or Robert Johnson? How many rock and roll songs did Chuck Berry, Little Richard, Buddy Holly, and others influence? There certainly will be cases where the AI-generated music is so close to that of an existing song that a copyright claim may lie, but that is the same issue that arises with any human composer (look at the copyright battles over “Blurred Lines,” “Stairway to Heaven,” “Stay With Me,” “My Sweet Lord,” and so many other songs). But if an AI-generated work does not directly lift from an existing copyrighted song, how and to whom would any royalty be paid? These are difficult questions that are sure to be addressed at the May 31 listening session on music.
News organizations face similar issues. When an AI program is asked for the latest on some news topic, it is likely going to look at existing news sites to come up with a summary of the news. But facts themselves generally are not copyrightable. In 2021, the Copyright Office asked for comments as to whether there should be a “hot news” exception that would give news organizations some protection for the facts of a story when they break that story (see our post here). Ultimately, however, the Copyright Office decided not to suggest to Congress that any such right be created. So, under strict copyright rules, news organizations, much like music creators, are likely not protected if an AI application picks up simple facts from a news report without copying the expression of those facts. As with music, if the actual expression of those facts is too similar to the expression that came from a news organization, that organization may have a claim.
Last year, groups representing journalists and broadcasters sought to have Congress create rights to compensation when Big Tech companies used the content generated by these groups. The proposed legislation, the Journalism Competition and Preservation Act (JCPA), was considered by Congress and passed by a Senate committee, but it was never approved by the full Senate or the House. See our articles here and here for more about the JCPA. Perhaps a modified bill could address compensation for the news media when AI uses the content that they generate but, as in music, the issue of who gets compensated for AI-reported facts becomes very difficult.
AI policy issues arise in all sorts of other areas. “Deep fakes” or “synthetic media” have already been addressed in the political sphere – with the state of Texas prohibiting their use in political ads without a clear identification that the audio or video created by AI technology is not real. Other states are considering similar legislation. More policy issues will no doubt arise as AI is called upon to perform more and more tasks by more and more companies – including media companies.
But there will also be practical issues that arise – particularly when AI is used to create programming or advertising segments used on broadcast stations or in other media. We’ve already received questions on some of these practical issues. For instance, we have been asked what concerns arise if a media company uses machine learning to create “deep fakes” or “synthetic media” – generating voices or images of celebrities for use in programming or advertisements. Watch for another article in the next week addressing some of those practical legal issues. These articles are sure to be just the first of many that will be written as these technologies advance and the law struggles to keep up.
Courtesy Broadcast Law Blog