(NewsNation) — A ninth grader at a Pennsylvania private school allegedly used artificial intelligence to create nude images of nearly 50 female classmates, but the explicit photos weren’t reported to police for months, according to a lawsuit parents filed against the school.
Administrators at the Lancaster Country Day School were legally mandated reporters but failed to alert law enforcement when the images were first brought to their attention through an anonymous tip in November 2023, according to a legal letter filed on behalf of the parents this month. It wasn’t until a second anonymous message in May 2024 that school leaders involved police, the letter said.
But by then the damage had been done and “many more students were exploited,” the letter stated.
Attorneys representing the families did not respond to multiple requests for comment from NewsNation.
Two school leaders were terminated over the incident, which has jarred the small private school community in Lancaster, about 80 miles west of Philadelphia, NewsNation affiliate WHTM reported.
Parents said administrators “knew the school failed to engage the police to prevent this and they knew they allowed the perpetrators to remain in class with their victims during the ‘internal investigations,’” local news outlets reported.
In a statement to NewsNation, Lancaster Country Day School said, “Caring for our community, supporting those who have been impacted by this upsetting situation, and reviewing the school’s policies which safeguard the wellbeing of our students remain our school’s highest priorities right now.”
The case, emblematic of several others across the country, spurred Pennsylvania lawmakers to swiftly add content generated with artificial intelligence (AI) to existing child pornography laws.
While the new law is slated to take effect Dec. 28, it can’t be applied retroactively, meaning it won’t help the students whose case helped spark the legislative change.
The Pennsylvania incident brings renewed attention to the gaps most state laws still have when it comes to kids and AI deepfakes, and to how ill-prepared schools remain, experts say.
Deepfakes are video, photo or audio recordings that appear to be real but have been manipulated with AI. A deepfake can depict someone appearing to say or do something they never actually said or did.
Most schools remain ‘ill-prepared’ for deepfakes
A boom in artificial intelligence technology and its marketing to young people have made such tools easier for school kids to get their hands on, Adam Dodge, an attorney whose organization EndTAB is focused on ending technology-enabled abuse, told NewsNation.
Students have used AI technology to create fake explicit images of classmates in Washington, California and New Jersey, among other places.
Forty percent of students and 29% of teachers said they knew of a deepfake depicting someone associated with their school being shared during the 2023-2024 school year, according to a September report from the Center for Democracy & Technology, a nonprofit that advocates for online civil liberties.
In most cases, both the perpetrator and the victim were students.
That same report found that only 19% of students said their school has explained deepfakes, and even fewer, 13%, said their school has explained that sharing non-consensual intimate imagery is harmful to the person depicted.
“Schools are ill-prepared, but it wouldn’t take much to be prepared and meet this challenge proactively,” Dodge said.
Schools have a responsibility to educate staff and students on the dangers of AI technology and deepfake apps, he said, adding that doing so doesn’t need to be a tremendous undertaking.
“Absent a school or a parent or a trusted resource educating them on the perils of this technology, students are going to rely on the internet and the app creators to inform their decision-making,” he said.
Schools should also update their student conduct policies or guidelines to include AI-generated or AI-manipulated intimate images so “that it sends a very clear message that this is not harmless behavior,” Dodge said.
State laws still have gaps for explicit AI content of kids
Fourteen states have laws in effect that specifically address children in measures aimed at deepfakes and other AI-generated content, according to an analysis by MultiState Associates, a state and local government relations firm, shared with NewsNation.
These include Utah, Idaho, Georgia, Oklahoma and Tennessee. Another five states have laws that will take effect by the beginning of 2025.
Some states got ahead of the problem, addressing it preemptively.
In September, California closed a legal loophole around AI-generated imagery of child sexual abuse and made it clear child pornography is illegal even if it’s AI-generated.
The previous law did not allow district attorneys to go after people who possessed or distributed AI-generated child sexual abuse images if they could not prove the materials depicted a real person; under the new law, such an offense qualifies as a felony.
South Dakota also updated its laws against child sexual abuse images proactively in July to include those created by artificial intelligence.
Still, that leaves 31 states with nothing officially on the books.
“Suffice it to say the laws are behind, and they need to be adapting to these new technologies and new behaviors so that we can hold bad actors accountable,” said Justin Patchin, a criminal justice professor at the University of Wisconsin-Eau Claire and co-director of the Cyberbullying Research Center.
While every state has child pornography laws, loopholes remain when the content isn’t considered to depict an actual person, he added.
Federal guidance that specifically addresses AI-generated content could be helpful for states still figuring out how to address this, he said.
Even though some states have been slow to take up the issue, more are now reviewing their laws, Dodge said, many in response to pressure.
“There are constituents that are asking their state legislators to update the laws to accommodate this because parents are really angry and rightfully so,” he said.