A new bipartisan bill introduced by Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., would bar minors (under 18) from interacting with certain AI chatbots. It taps into growing alarm about children using “AI companions” and the risks these systems may pose.
Below is a breakdown of what the proposed GUARD Act would do and why it matters.
Bipartisan lawmakers, including Senators Josh Hawley and Richard Blumenthal, introduced the GUARD Act to protect minors from unregulated AI chatbots. (Kurt "CyberGuy" Knutsson)
The motivation: lawmakers cite testimony from parents and child-welfare experts, along with a growing number of lawsuits alleging that some chatbots have manipulated minors, encouraged self-harm or worse. The basic framework of the GUARD Act is clear, but the details reveal how extensive its reach could be for tech companies and families alike.
This bill is more than another piece of tech regulation. It sits at the center of a growing debate over how far artificial intelligence should reach into children’s lives.
AI chatbots are no longer toys. Many kids are using them; Hawley cited estimates that more than 70 percent of American children have engaged with these products. These chatbots produce human-like responses, mimic emotion and often invite ongoing conversation. For minors, these interactions can blur the boundary between machine and human, leading them to seek guidance or emotional connection from an algorithm rather than a real person.
If this bill passes, it could reshape how the AI industry manages minors, age verification, disclosures and liability. It shows that Congress is ready to move away from voluntary self-regulation and toward firm guardrails when children are involved. The proposal may also open the door for similar laws in other high-risk areas, such as mental health bots and educational assistants. Overall, it marks a shift from waiting to see how AI develops to acting now to protect young users.
Parents across the country are calling for stronger safeguards as more than 70 percent of children use AI chatbots that can mimic empathy and emotional support. (Kurt "CyberGuy" Knutsson)
Some tech companies argue that such regulation could stifle innovation, limit beneficial uses of conversational AI, such as education or mental-health support for older teens, or impose heavy compliance burdens. This tension between safety and innovation is at the heart of the debate.
If passed, the GUARD Act would impose strict federal standards on how AI companies design, verify and manage their chatbots, especially when minors are involved. Among its key obligations, companies would have to verify users' ages, clearly disclose that their chatbots are not human and block users under 18 from AI companion features, while facing accountability for harmful interactions with children.
The proposed GUARD Act would require chatbots to verify users’ ages, disclose they are not human and block under-18 users from AI companion features. (Kurt "CyberGuy" Knutsson)
Technology often moves faster than laws, which means families, schools and caregivers must take the lead in protecting young users right now. These steps can help create safer online habits while lawmakers debate how to regulate AI chatbots.
Start by finding out which chatbots your kids talk to and what those bots are designed for. Some are made for entertainment or education, while others focus on emotional support or companionship. Understanding each bot’s purpose helps you spot when a tool crosses from harmless fun into something more personal or manipulative.
Even if a chatbot is labeled safe, decide together when and how it can be used. Encourage open communication by asking your child to show you their chats and explain what they like about them. Framing this as curiosity, not control, builds trust and keeps the conversation ongoing.
Take advantage of built-in safety features whenever possible. Turn on parental controls, activate kid-friendly modes and block apps that allow private or unmonitored chats. Small settings changes can make a big difference in reducing exposure to harmful or suggestive content.
Remind kids that even the most advanced chatbot is still software. It can mimic empathy, but does not understand or care in a human sense. Help them recognize that advice about mental health, relationships or safety should always come from trusted adults, not from an algorithm.
Stay alert for changes in behavior that could signal a problem. If a child becomes withdrawn, spends long hours chatting privately with a bot or repeats harmful ideas, step in early. Talk openly about what is happening, and if necessary, seek professional help.
Regulations such as the GUARD Act and new state measures, including California’s SB 243, are still taking shape. Keep up with updates so you know what protections exist and which questions to ask app developers or schools. Awareness is the first line of defense in a fast-moving digital world.
The GUARD Act represents a bold step toward regulating the intersection of minors and AI chatbots. It reflects growing concern that unmoderated AI companionship might harm vulnerable users, especially children. Of course, regulation alone won't solve every problem; industry practices, platform design, parental involvement and education all matter. But this bill signals that the era of "build it and see what happens" for conversational AI may be ending when children are involved. As technology continues to evolve, our laws and our personal practices must evolve too. For now, staying informed, setting boundaries and treating chatbot interactions with the same scrutiny we apply to human ones can make a real difference.
If a law like the GUARD Act becomes reality, should we expect similar regulation for all emotional AI tools aimed at kids (tutors, virtual friends, games) or are chatbots fundamentally different? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.