I just saw an excellent interview of attorney Robert Freund by copywriter David Garfinkel in the Copywriter’s Podcast series. Robert Freund specializes in the legal issues of advertising and marketing communications, including social media advertising.
Robert Freund’s message was that using a new technology like ChatGPT, Microsoft Bing, or Google Bard is still subject to existing laws and regulations, so a copywriter or a business can be sued for A.I.-generated advertisements or promotional material that is false or misleading.
The Federal Trade Commission’s position is that the advertiser and anyone connected with creating an advertisement or promotional material are all “on the hook” for potential liability. Even an indemnification clause might not be sufficient protection. Any claim made must be substantiated and documented as true or supported by scientific research. False claims are a target for class-action litigation.
A.I. large language models are tools that require human fact-checking and oversight.
A.I. applications often fabricate responses that are not true or supported by facts.
Mr. Freund mentioned a situation in which a New York lawyer submitted a ChatGPT-generated brief citing six non-existent judicial decisions to a U.S. District Court judge. The lawyer’s fact-checking consisted of asking ChatGPT, “Are you sure these are right?” He claims he was unaware ChatGPT could generate a false response. Ultimately, the lawyer is responsible for submitting a factual brief. Another judge now requires attorneys to attest either that no part of a brief was written using generative A.I., or that any part generated with A.I. was reviewed by an attorney for accuracy. Other judges will probably do the same.
When A.I. is used to access an author’s work and incorporate it into a commercial communication, the author might claim his or her work has been plagiarized. A class-action lawsuit has been filed against Microsoft and OpenAI (ChatGPT) for scraping personal content off the internet without compensation.
What about “deepfake” A.I.-generated images and voices? In 1988, Vanna White sued Samsung Electronics over an advertisement portraying a robot, dressed in a gown, jewelry, and a blonde wig, turning letters on a game show set. In a decision upheld by the Ninth Circuit Court of Appeals, White successfully claimed that Samsung had used her likeness in an advertisement without compensation, and she was awarded more than $400,000 in damages. In 2021, TikTok settled a lawsuit by actress Bev Standing alleging that TikTok had replicated her voice using A.I. and used it without her consent and without compensation. It seems likely the same theory could apply to A.I.-generated images and voices.
We are in a “Brave New World,” with technology developments happening at hyper-speed. Existing laws and regulations still apply, and new laws and regulations will eventually be adopted, mostly by legislators and regulators who don’t really understand the new technologies.