With the increasingly widespread use of artificial intelligence (AI) in all walks of life, disputes relating to AI are also rapidly emerging. In a press conference last year, the Beijing Internet Court published eight ‘model’ (precedent-illustrating) cases involving AI. These cases relate to various aspects of AI usage, including AI-created graphics, voice generation, face swapping, virtual human characters and AI detection.
The first two of these cases have already been reported in our previous NRF blog posts here and here. In this series of two articles, we provide a high-level summary of the remaining six cases.
Li v. a culture media company – AI-synthesized voice
The plaintiff in this case was a well-known individual in education and parenting. The defendant used the plaintiff’s speeches and teaching videos to synthesize an AI voice highly similar to the plaintiff’s, and used that AI voice in combination with an image of the plaintiff to promote and sell family education books. The court held that the use of the plaintiff’s image together with the AI-synthesized voice could lead consumers to believe that the voice was the plaintiff’s, and therefore constituted infringement of the plaintiff’s image and voice rights.
Liao v. a technology culture company – AI face-swapping
The plaintiff was a traditional Chinese-style short-video blogger. The defendant used the plaintiff’s videos to create face-swapping templates which were made available for the public to use. The court held that the face-swapping templates did not infringe the plaintiff’s image rights, since the user’s facial features were replaced in the swapping process and the other retained elements, such as makeup, hairstyle and clothing, were not specific enough for unique identification of the plaintiff. However, the court found that, in using the plaintiff’s videos and enabling the AI face-swapping, the defendant had collected, used and processed data regarding the plaintiff’s facial features and should be liable for such unauthorized use of personal information.
Tang v. a technology company – AI detector
The plaintiff had posted an article on the defendant’s platform, which was determined by the defendant to be AI-generated. The defendant imposed certain penalties on the plaintiff’s user account for not labelling the article as AI-generated. The plaintiff claimed that AI had not been used.
The court noted that in the present case, where the plaintiff claimed the article was impromptu and created by an individual, it was not feasible for the plaintiff to provide documentary evidence of the creation process. The court held that the defendant, as controller of the AI-detection algorithm, should bear the responsibility of providing an appropriate explanation of the algorithm’s decision-making basis and the judgment criteria for classifying the article as AI-generated. As such, the court found that the defendant had breached the platform’s user agreement by penalizing the plaintiff’s user account without providing sufficient evidence.
In Part II of this series, we shall consider the other three AI cases published by the Beijing Internet Court.