An Ohio man has become the first person convicted under the federal Take It Down Act, a law criminalizing nonconsensual explicit imagery, including AI-generated deepfakes. James Strahler II, 37, pleaded guilty to cyberstalking, producing obscene visual depictions of child sexual abuse (CSAM), and publishing digital forgeries, the law's term for deepfakes, according to a Department of Justice (DOJ) press release. The case marks the first enforcement of the law, which President Donald Trump signed in May 2025.
Strahler used AI to create nonconsensual images and videos of both adult and minor victims. He morphed the faces of minor boys onto the bodies of adults or other children to depict them engaging in sex acts, including with family members. He posted more than 700 images to a website dedicated to child sexual abuse and had 2,400 images on his phone, including AI-generated CSAM. Strahler also targeted at least six adult female victims, sending them real and AI-generated nude images, including a video depicting one victim engaging in sex acts with her father, which he also shared with her coworkers.
The Take It Down Act, championed by First Lady Melania Trump, criminalizes nonconsensual intimate deepfakes and requires platforms to remove such content. The law also addresses cyberstalking and threats of violence. In a statement, the First Lady praised the conviction as a milestone in protecting victims from AI-generated exploitation.
Deeper Dive & Context
Legal and Technological Implications
The conviction underscores the growing legal scrutiny of AI-generated content, particularly in cases involving nonconsensual exploitation. The DOJ highlighted Strahler's use of more than 24 AI platforms and 100 web-based AI models, illustrating how easily deepfakes can be created and distributed. The case may set a precedent for future prosecutions under the Take It Down Act, which has faced debate over its scope and enforcement.
Political and Social Reactions
First Lady Melania Trump emphasized the law’s role in protecting victims, calling the conviction a significant step forward. However, some privacy advocates have raised concerns about the law’s potential overreach, particularly regarding free speech and the definition of nonconsensual content. The case has also reignited discussions about AI regulation and the need for stronger safeguards against digital exploitation.
Long-Term Implications
The conviction could influence how law enforcement and tech companies address AI-generated content. It may prompt further legislative action to clarify the boundaries of digital forgery laws and the responsibilities of platforms hosting such material. The case also highlights the intersection of technology, law, and ethics in the digital age.