A teenager in New Jersey has filed a major lawsuit against the company behind an artificial intelligence (AI) “clothes removal” tool that allegedly created a fake nude image of her. The case has drawn national attention because it shows how easily AI tools can exploit the photos that students and teens share online, and it seeks to protect others from the same invasion of privacy.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
When she was fourteen, the plaintiff posted a few photos of herself on social media. A male classmate used an AI tool called ClothOff to digitally remove her clothing from one of those pictures. The altered photo kept her face, making it look real.
The fake image quickly spread through group chats and social media. Now seventeen, she is suing AI/Robotics Venture Strategy 3 Ltd., the company that operates ClothOff. A Yale Law School professor, several students and a trial attorney filed the case on her behalf.
A New Jersey teen is suing the creators of an AI tool that made a fake nude image of her. (iStock)
The suit asks the court to order the company to delete all fake images and stop using them to train AI models. It also seeks to have the tool removed from the internet and to secure financial compensation for emotional harm and loss of privacy.
States across the U.S. are responding to the rise of AI-generated sexual content. More than 45 states have passed or proposed laws making nonconsensual deepfakes a crime. In New Jersey, creating or sharing deceptive AI media can lead to prison time and fines.
At the federal level, the Take It Down Act requires companies to remove nonconsensual images within 48 hours of a valid request. Despite these new laws, prosecutors still face challenges when developers live overseas or operate through hidden platforms.
The lawsuit aims to stop the spread of deepfake “clothes-removal” apps and protect victims’ privacy. (iStock)
Experts believe this case could reshape how courts view AI liability. Judges must decide whether an AI developer is responsible when people misuse its tool, and whether the software itself can be an instrument of harm.
The lawsuit highlights another question: how can victims prove damages when no physical act occurred, yet the harm is real? The outcome may define how future deepfake victims seek justice.
Reports indicate that ClothOff may no longer be accessible in some countries, such as the United Kingdom, where it was blocked after public backlash. However, users in other regions, including the U.S., still appear able to reach the company’s web platform, which continues to advertise tools that “remove clothes from photos.”
On its official website, the company includes a short disclaimer addressing the ethics of its technology. It states, “Is it ethical to use AI generators to create images? Using AI to create ‘deepnude’ style images raises ethical considerations. We encourage users to approach this with an understanding of responsibility and respect for others’ privacy, ensuring that the use of undress app is done with full awareness of ethical implications.”
Whether fully operational or partly restricted, ClothOff’s continued presence online raises serious legal and moral questions about whether AI developers should be allowed to offer such image-manipulation tools at all.
This case could set a national precedent for holding AI companies accountable for misuse of their tools. (Kurt "CyberGuy" Knutsson)
The ability to make fake nude images from a simple photo threatens anyone with an online presence. Teens face special risks because AI tools are easy to use and share. The lawsuit draws attention to the emotional harm and humiliation caused by such images.
Parents and educators worry about how quickly this technology spreads through schools. Lawmakers are under pressure to modernize privacy laws. Companies that host or enable these tools must now consider stronger safeguards and faster takedown systems.
If you become the target of an AI-generated fake image, act quickly. Save screenshots, links and dates before the content disappears. Request immediate removal from the websites hosting the image, and seek legal help to understand your rights under state and federal law.
Parents should discuss digital safety openly. Even innocent photos can be misused. Knowing how AI works helps teens stay alert and make safer online choices. You can also demand stricter AI rules that prioritize consent and accountability.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.
This lawsuit is not only about one teenager. It represents a turning point in how courts handle digital abuse. The case challenges the idea that AI tools are neutral and asks whether their creators share responsibility for harm. We must decide how to balance innovation with human rights. The court’s ruling could influence how future AI laws evolve and how victims seek justice.
If an AI tool creates an image that destroys someone’s reputation, should the company that made it face the same punishment as the person who shared it? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.