Possession of AI-generated child sexual abuse imagery may be protected by First Amendment in some cases, judge rules

Federal prosecutors are appealing a federal judge’s ruling in Wisconsin that possessing child sexual abuse material created by artificial intelligence is in some situations protected by the Constitution.
The order and the subsequent appeal could have major implications for the future legal treatment of AI-generated child sexual abuse material, or CSAM, which has been a top concern among child safety advocates and has been the subject of at least two prosecutions in the past year. If higher courts uphold the decision, it could cut prosecutors off from successfully charging some people with the private possession of AI-generated CSAM.
The case centers on Steven Anderegg, 42, of Holmen, Wisconsin, whom the Justice Department charged in May with “producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16.”
Prosecutors alleged that he used an AI image generator called Stable Diffusion to create over 13,000 images depicting child sexual abuse, entering text prompts that the technology turned into fake images of non-real children. (Some AI systems are also used to create explicit images of known people, but prosecutors do not claim that is what Anderegg was doing.)
In February, in response to Anderegg’s motion to dismiss the charges, U.S. District Judge James D. Peterson allowed three of the charges to move forward but threw one out, saying the First Amendment protects the possession of “virtual child pornography” in one’s home. On March 3, prosecutors appealed.