Bickerton Law Blog | An in-depth look at law news with legal analysis by Bickerton Law. New articles every Monday and Thursday.

The New York Times reported that a San Francisco father was subjected to a criminal investigation after Google’s artificial intelligence incorrectly classified photos of his son as child exploitation.

The Facts

In 2021, the father, identified as “Mark,” noticed that his toddler son’s penis was red and swollen. He took pictures of the problem and sent them to the pediatrician for treatment. Soon after, Google disabled Mark’s account. Although the San Francisco Police Department cleared Mark, Google permanently deleted his account, and he lost access to the photos and videos stored in it.

How did this happen?

Tech companies use artificial intelligence to scan account data for signs of child exploitation. The software is trained to identify images that show the exploitation of children. Google’s software scans images and compares them against images that are known to be illegal. Google also uses machine learning to scan unknown images for characteristics consistent with images of child exploitation. Although the results are supposed to receive human review, that review was resolved against Mark.

How reliable is artificial intelligence?

In 2017, the New York Post reported that a machine learning program used in London would erroneously tag images of the desert as child pornography. In 2018, Gizmodo reported that Facebook’s AI flagged for removal a historically significant photo of a nude girl fleeing a napalm attack in Vietnam. Months after Apple announced its plan to scan user image data for evidence of child exploitation, it postponed the plan to improve the system’s accuracy before launch.

What are the potential problems?

Aside from the privacy issues identified by the Center for Democracy and Technology and the Electronic Frontier Foundation, the use of AI to identify incidents of child exploitation increases the risk of being targeted for a criminal or child protective services investigation. Merely being accused of involvement in child pornography and exploitation can ruin a person’s reputation and force them to pay an attorney to clear their name. If the artificial intelligence is inaccurate and the human verification step is either skipped or flawed, a person may have to fight a false allegation of abuse.

The post AI and the Law: A father took a picture of his child for medical treatment and ended up being investigated for child exploitation first appeared on Bickerton Law.