Bipartisan GUARD Act proposes age restrictions on AI chatbots

US Lawmakers Introduce Bipartisan Bill to Curb Exploitation by AI Chatbots

In a move aimed at protecting minors from the potentially damaging effects of artificial intelligence chatbots, US lawmakers have introduced a bipartisan bill called the "GUARD Act." The legislation, spearheaded by Senator Richard Blumenthal (D-Conn.) and Senator Josh Hawley (R-Mo.), would impose strict safeguards on AI companies to prevent them from pushing exploitative or manipulative chatbots at children.

Under the proposed law, AI companies must implement robust age verification measures to ensure that minors are unable to access their chatbots. This includes conducting regular age verifications for existing users and utilizing third-party systems to verify user age. The bill also requires companies to retain data related to user age verification only for as long as necessary to confirm the user's age, with strict limits on sharing or selling user information.

Furthermore, AI chatbots must be designed to explicitly indicate that they are not human entities at the beginning of each conversation and every 30 minutes thereafter. The bill also aims to prevent companies from making their chatbots claim to be licensed professionals, such as therapists or doctors, when interacting with minors.

The introduction of the GUARD Act comes on the heels of several high-profile incidents involving AI chatbots and minors. In August, a teenage boy who had been chatting with OpenAI's ChatGPT took his own life after months of conversations about suicidal ideation. His parents have filed a wrongful death lawsuit against OpenAI, alleging that the company prioritized engagement over safety.

Similarly, a mother from Florida sued startup Character.AI in 2024 after her 14-year-old son died by suicide following conversations with the AI chatbot. Another family has recently filed a similar wrongful death lawsuit against Character.AI, claiming that the company failed to provide resources or notify authorities when their 13-year-old daughter expressed suicidal thoughts.

The bill's introduction follows reports of Meta's AI chatbots engaging in "sensual" conversations with children, sparking concerns about the potential for exploitation by tech companies. Senator Hawley has announced plans to investigate these reports and lead an inquiry into the matter through the Senate Judiciary Subcommittee on Crime and Counterterrorism.
 
I'm kinda worried about this new bill. I mean, AI chatbots are just tools, right? They can be used for good or bad... but do we really need government oversight telling companies how to use them? What's next, regulating video games too?

On the other hand, I get it... kids shouldn't be having these kinds of conversations with AI chatbots. It's just not right. And yeah, I've seen those headlines about OpenAI and Character.AI... that's crazy.

But here's the thing: how do we even know what's safe and what's not? Is there some authority out there who can actually regulate all this? It feels like we're just passing the buck to someone else.

Still, I guess it's better than nothing. Maybe this bill is a step in the right direction... or maybe it's just another layer of bureaucracy.
 
I'm totally down with this new law. The thought of AI chatbots preying on vulnerable kids is just heartbreaking. It's amazing how far tech has come, but we've got to keep up with it.

I think it's a big deal that these lawmakers are taking action to protect our youth. We all know the risks of AI, especially when it comes to mental health. The fact that OpenAI and Character.AI are already getting sued over this is just awful.

But what really gets me is how Meta's AI chatbots were engaging in "sensual" conversations with kids. That's a whole new level of messed up. We need to keep pushing for stricter regulations and guidelines so these companies can't just push this stuff out there without consequence.

I'm all about transparency too. The requirement that AI chatbots explicitly indicate they're not human is the bare minimum. We should be having way more conversations about how to regulate this stuff so it doesn't become a wild-west situation.

Anyway, I'm definitely rooting for the GUARD Act. Let's keep pushing for change and make sure our tech companies are looking out for our best interests.
 
OMG, just heard about this new bill in the US called the GUARD Act. Finally something is being done about those AI chatbots that are ruining kids' lives, especially with all these sad stories going around, like that one teen who took his own life after talking to ChatGPT, and the mom in Florida who lost her son to Character.AI. It's insane how tech companies can just push these manipulative chatbots out there without even thinking about the consequences. So yeah, I'm all for this bill. It's about time we hold these companies accountable for putting our kids' lives at risk.
 
Just saw this bill being introduced, and I've gotta say it's about time someone takes AI chatbots seriously, especially when it comes to minors... these cases of kids taking their own lives after chatting with these bots are heartbreaking and need serious action. The fact that companies like OpenAI and Character.AI didn't prioritize safety over engagement is just unacceptable. We can't let these tech giants get away with putting our kids' well-being at risk.
 
I'm low-key surprised this wasn't done ages ago. AI chatbots are literally designed to hold our attention; it's like they're programmed to get inside our heads! I mean, what's next? Companies selling us personalized ads for our deepest fears and desires? The age verification thing is a good start, but we need stricter regulations on these companies ASAP. They can't just keep pushing the boundaries of how far they can go without consequences. And those companies that are already getting sued left and right... welcome to the club! We need real change here, not just lip service.
 
I'm so worried about these new AI chatbots! We've already seen some pretty sad stories of kids talking to them online and ending up in really bad places. It's just not right that companies make these chatbots seem all friendly and human-like when they're actually designed to keep us hooked. I think the new bill is a great idea, though: we need real safeguards put in place to protect our kids online. It's awful that families have had to file wrongful death lawsuits over these chatbots. We need to be more careful about who's behind these tech companies and what kind of safety measures they're taking. I just hope this bill gets passed soon so we can finally start feeling safe online again.
 
I'm low-key super concerned about all this. AI chatbots are already kinda creepy, but when you add minors into the mix, it's a whole different level of scary. I'm not saying tech companies don't care, but these incidents with OpenAI and Character.AI are super troubling... my kid would definitely freak out if they had to deal with some shady AI playing on their emotions. The idea that these chatbots can just claim to be something they're not is wild. I'm all for keeping the little ones safe, but come on, how are we even gonna regulate this stuff? The tech industry is playing a game of whack-a-mole: every time we think we've got one thing under control, another problem pops up... can't we just slow down and take a minute to figure out what's going on?
 
I don't usually comment, but I think this GUARD Act is super important. It's crazy how some AI chatbots can manipulate minors, you know? Like that teenage boy who died after talking to ChatGPT about suicidal thoughts... it's just heartbreaking. Requiring age verification and forcing chatbots to disclose they're not human at the beginning of conversations is a great start. But what really worries me is Meta getting caught with AI chatbots engaging in "sensual" conversations with kids. Who's regulating these companies? They need to be held accountable for their actions. And it's not just about the tech companies: we need a bigger conversation about online safety and how to protect our kids from these AI chatbots.
 
I gotta say, this whole thing is a bit overblown. AI chatbots are just trying to do their job and have a conversation, right? And now we're gonna make them label themselves as "not human" every 30 minutes? Come on, that's like putting a warning sticker on a toy. What's next, labeling our personal computers "may contain human error"? It's just not that deep.

And don't even get me started on the whole "companies gotta be held accountable" thing. Sure, the parents of those kids who died are gonna sue for everything they can get, but let's not forget that AI chatbots are just tools, folks! They're not the ones pulling the strings here.

I'm all for some kind of regulation, I guess, but this GUARD Act thing is a bit too much. It's like we're trying to control every little aspect of our lives and interactions with technology. Can't we just chill out and let things evolve naturally?
 
I think this is a good start, but I wish they also focused more on education and online safety resources. It's not just about the technology; it's also about how we use it and what we teach our kids about the digital world. We need to make sure these chatbots are designed with kid-friendly features and that parents have access to tools that can help them monitor their child's online activity. Plus, we should be supporting companies that prioritize transparency and user safety over engagement metrics. It's not a one-size-fits-all situation, and I hope lawmakers consider all angles before making any final decisions.
 
OMG, I'm totally concerned about all this, especially since my friends in high school have been talking to those AI chatbots for ages! Like, what if they're not safe? I've heard that some of them can be super manipulative, and that's not cool. So yeah, I think the GUARD Act is a good idea, and companies should really have to prioritize user safety over engagement. We're still learning about AI and its effects on our lives, so let's make sure we get it right.
 
OMG, this is so worrying! AI chatbots are getting way too smart and sneaky. If they can push 14-year-olds toward suicidal thoughts, what's next? Like, I get it, companies need to make money, but protecting minors should be their top priority. Age verification measures are a good start, but we also need stricter regulations around data sharing and usage. And can't they just design these chatbots to say "I'm not human" more clearly? Do we really need a law for this? But I guess when it comes to minors, it's better to be cautious than sorry.
 
OMG, just saw this thread from last week; can we please talk about this GUARD Act already?! I think it's kinda cool that senators are finally taking action against those exploitative AI chatbots. They can literally ruin kids' lives, and now there's a bill to prevent it. I hope they enforce the age verification measures, though; some people might try to find ways around them.
 
OMG, you guys! So I was scrolling through my feeds and saw this news about AI chatbots being exploitative toward minors?! No way. It's so important that lawmakers are finally taking action on this. The GUARD Act sounds legit. I mean, who wants their 13- or 14-year-old child having deep conversations with a chatbot that's basically a digital predator? Not me, that's for sure! And why is Meta still getting away with all this? I'm low-key hoping this bill gets passed ASAP so our kids can finally feel safe online again. Can't we just have a world where AI chatbots are cool and fun for everyone... not just some tech company's experiment?
 
I think this is a super needed bill! AI chatbots can be a lot of fun, but they're not perfect, and some companies are taking advantage of that. It's awful that we've had these high-profile incidents where kids took their own lives after interacting with these chatbots. I'm glad Senators Blumenthal and Hawley are taking action to protect our young people. The age verification measures and data retention rules make sense, though I do wonder what happens to all these protections and the verified-age data once users turn 18.
 
OMG, AI safety is a HUGE concern right now! I'm all for protecting minors from being manipulated or exploited by these chatbots, especially since we've already seen some super sad cases, like that poor teen who took his own life after chatting with ChatGPT. Companies gotta be held accountable, and the GUARD Act is a step in the right direction. I like how it makes them implement age verification measures instead of just pushing their products at kids. And good luck to Senator Hawley on investigating those reports about Meta's AI chatbots; can't wait for some real change!