“All school districts are grappling with the challenges and impact of artificial intelligence and other technology available to students at any time and anywhere,” Raymond González, the superintendent of Westfield Public Schools, said in the statement.
Blindsided last year by the sudden popularity of A.I.-powered chatbots like ChatGPT, schools across the United States scrambled to contain the text-generating bots in an effort to forestall student cheating. Now a more alarming A.I. image-generating phenomenon is shaking schools.
Boys in several states have used widely available “nudification” apps to pervert real, identifiable photos of their clothed female classmates, shown attending events like school proms, into graphic, convincing-looking images of the girls with exposed A.I.-generated breasts and genitalia. In some cases, boys shared the faked images in the school lunchroom, on the school bus or through group chats on platforms like Snapchat and Instagram, according to school and police reports.
Such digitally altered images — known as “deepfakes” or “deepnudes” — can have devastating consequences. Child sexual exploitation experts say the use of nonconsensual, A.I.-generated images to harass, humiliate and bully young women can harm their mental health, reputations and physical safety as well as pose risks to their college and career prospects. Last month, the Federal Bureau of Investigation warned that it is illegal to distribute computer-generated child sexual abuse material, including realistic-looking A.I.-generated images of identifiable minors engaging in sexually explicit conduct.
Yet the student use of exploitative A.I. apps in schools is so new that some districts seem less prepared to address it than others. That can make safeguards precarious for students.
“This phenomenon has come on very suddenly and may be catching a lot of school districts unprepared and unsure what to do,” said Riana Pfefferkorn, a research scholar at the Stanford Internet Observatory, who writes about legal issues related to computer-generated child sexual abuse imagery.
At Beverly Vista Middle School in Beverly Hills, Calif., administrators contacted the police in February after learning that five boys had created and shared A.I.-generated explicit images of female classmates. Two weeks later, the school board approved the expulsion of five students, according to district documents. (The district said California’s education code prohibited it from confirming whether the expelled students were the students who had manufactured the images.)
Michael Bregy, superintendent of the Beverly Hills Unified School District, said he and other school leaders wanted to set a national precedent that schools must not allow pupils to create and circulate sexually explicit images of their peers.
“That’s extreme bullying when it comes to schools,” Dr. Bregy said, noting that the explicit images were “disturbing and violative” to girls and their families. “It’s something we will absolutely not tolerate here.”