Researchers find what makes AI chatbots politically persuasive

A massive recent study on the persuasiveness of AI chatbots has many people asking whether these machines can truly sway public opinion. Researchers at the UK AI Security Institute, MIT, Stanford, and other institutions ran experiments with nearly 80,000 participants in the UK to find out whether conversational large language models could shift people's views on political issues.

To put this to the test, the team evaluated 19 LLMs, including popular ChatGPT models, asking each to advocate for or against specific stances on 707 political issues in short conversations with paid participants recruited through a crowdsourcing platform. The goal was not only to gauge persuasiveness but also to understand what makes an AI model persuasive.
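
To make the study's numbers concrete, here is a minimal sketch of how a persuasion effect like this can be quantified, assuming participants rate their agreement with a stance on a 0-100 scale before and after the conversation and are compared against a no-conversation control group. The function, variable names, and toy numbers below are illustrative assumptions, not the study's actual code or data.

```python
# Minimal sketch: persuasion effect as the average attitude shift of people
# who chatted with the model, minus the shift in a no-chat control group.
# All numbers are invented for illustration.
from statistics import mean

def persuasion_effect(treatment, control):
    """Average shift (percentage points on a 0-100 agreement scale)
    beyond the control group. Each argument is a list of (pre, post) scores."""
    treated_shift = mean(post - pre for pre, post in treatment)
    control_shift = mean(post - pre for pre, post in control)
    return treated_shift - control_shift

# Toy data: (agreement before, agreement after) for each participant.
chatted = [(40, 55), (62, 70), (35, 44), (50, 58)]
no_chat = [(45, 46), (60, 59), (38, 40), (52, 53)]

print(f"Persuasion effect: {persuasion_effect(chatted, no_chat):.1f} points")
```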

Surprisingly, and contrary to some predictions, the results showed that even massive AI systems like ChatGPT do not have superhuman persuasive abilities. Huge models turned out to be barely better than smaller ones at persuasion. What matters most is training: fed the right data, LLMs can be taught to mimic the patterns of effective human persuaders.

Moreover, testing various persuasion strategies confirmed that large models hold only a tiny edge over small-scale ones. The approach that proved most effective was instructing models to back up their claims with facts and evidence. This strategy delivered a bigger boost than switching to the best-performing mainstream model, GPT-4o, which scored nearly 12 percent on the study's persuasion measure.
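
To illustrate what a "persuasion strategy" means in practice, here is a hypothetical sketch of prompt variants along the lines the study describes. The prompt wording and the example stance are invented for illustration and are not taken from the paper.

```python
# Hypothetical system-prompt variants representing different persuasion
# strategies. The wording below is invented, not the study's actual prompts.
STANCE = "The UK should lower the voting age to 16."

STRATEGIES = {
    # Plain advocacy, no particular strategy.
    "baseline": f"Argue in favour of the following stance: {STANCE}",
    # The kind of strategy the study found most effective.
    "facts_and_evidence": (
        f"Argue in favour of the following stance: {STANCE} "
        "Support every claim with specific facts, statistics, and evidence."
    ),
    # A contrasting, style-based strategy.
    "emotional_appeal": (
        f"Argue in favour of the following stance: {STANCE} "
        "Appeal to the reader's values and emotions rather than statistics."
    ),
}

for name, prompt in STRATEGIES.items():
    print(f"[{name}] {prompt}\n")
```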

However, this approach also came with a cost: the models misrepresented facts or made things up more often as they increased information density. Even when deliberately persuading with facts and evidence, the AIs were not reliably accurate.
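
One simple way to picture this tradeoff, sketched below with invented numbers: if each conversation is fact-checked claim by claim, conversations can be binned by how many claims they make, and the share of inaccurate claims tends to climb with claim density.

```python
# Illustrative sketch of the accuracy vs. information-density tradeoff.
# Assumes each conversation has been fact-checked offline into
# (claims_made, claims_inaccurate) counts; all numbers are invented.
conversations = [
    (3, 0), (4, 0), (5, 1), (8, 1),      # low-density chats
    (10, 2), (12, 3), (15, 4), (20, 7),  # high-density chats
]

LOW_DENSITY_MAX = 8  # claims per conversation; arbitrary cutoff for the demo

def inaccuracy_rate(convos):
    total_claims = sum(claims for claims, _ in convos)
    wrong_claims = sum(wrong for _, wrong in convos)
    return wrong_claims / total_claims

low = [c for c in conversations if c[0] <= LOW_DENSITY_MAX]
high = [c for c in conversations if c[0] > LOW_DENSITY_MAX]

print(f"low-density:  {inaccuracy_rate(low):.0%} of claims inaccurate")
print(f"high-density: {inaccuracy_rate(high):.0%} of claims inaccurate")
```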

While the study debunks some dystopian concerns about AI persuasiveness, it also raises new questions about the misuse of persuasive AI models in scams, radicalization, or grooming. And since participants were paid to engage, it remains unclear whether people would engage as readily in the wild, making the results difficult to generalize to real-world contexts.

The study emphasizes that while AIs are not yet superhumanly persuasive, they can still be influential, particularly if used by powerful actors to sway public opinion.
 
I don't know about you but this whole AI chatbot persuasiveness thing got me thinking... what's the point of having a machine that can persuade people if it's just gonna use facts and evidence to do so? πŸ€” It's like, yeah sure, using data is key, but can we be honest with ourselves - aren't we all just looking for a reason to believe something when our gut tells us otherwise? πŸ™ƒ I mean, the study said that even massive AI systems are barely better than smaller ones when it comes to persuasion... and what does that say about our own capacity for critical thinking? πŸ’­ It's like, maybe instead of relying on machines to persuade us, we should be learning how to trust our own instincts and not just follow the data trail. πŸ“Š
 
πŸ€” I'm kinda surprised by this result honestly... I mean we've been hyped up about AI chatbots having some sort of mind-controlling power over us πŸ€–, but it turns out they're not all that convincing after all. Small models do the trick just as well as big ones. The fact that AIs only get better with data and training is pretty cool too... I guess you could say they're good at mimicking people's persuasive tactics. But at what cost? πŸ€·β€β™‚οΈ If we're relying on these machines to give us "facts" and evidence, we gotta make sure they're not just spewing out made-up info... that's when things start to get sketchy.
 
I gotta say, I'm not surprised by this study at all πŸ˜’. I mean, think about it, AI chatbots have been getting way too good at mimicking human conversations and emotions for a while now. It's like they're trying to trick us into thinking they're actually human πŸ€–. But seriously though, the fact that bigger models aren't necessarily better when it comes to persuasion is pretty interesting. I remember reading about how ChatGPT was supposed to be able to convince people of some crazy stuff just because it had more data and training behind it πŸ’‘. Turns out, facts and evidence are still the best way to persuade us humans πŸ‘. But what really got me thinking is that this highlights how easy it is for AIs to get caught up in trying to make things up or misrepresent information when they're pushing for a certain agenda πŸ€”. That's something we should all be paying closer attention to, if you ask me πŸ˜’.
 
I mean, come on πŸ™„, who didn't see this coming? Like, anyone with even a basic understanding of how AI works knew these massive models weren't gonna be the persuasion superstars we thought they'd be πŸ’β€β™€οΈ. I'm actually kinda relieved, tbh - all that hype around AIs being able to sway public opinion was getting outta hand πŸ™ƒ. But seriously, it's interesting that smaller models can still hold their own with some solid training and data. And yeah, using facts and evidence is key - no surprise there πŸ“Š. The thing that worries me a bit more is how these AIs are gonna be used in the future... not by governments or corporations, but by scammers and misinformation agents πŸ€₯. That's where things get super sketchy πŸ‘».
 
I think it's kinda crazy that we're already having this conversation about AI's persuasive abilities 🀯. I mean, what's next? Are we gonna start worrying about robots convincing us to buy more stuff on sale? Anyway, it seems like the study showed that big AIs aren't super convincing on their own... they mostly get good by training on the tactics of human persuaders. That makes total sense, but what worries me is how easily these models can be manipulated or used for malicious purposes 🚨. We need to keep a close eye on this tech and make sure it's being used responsibly.
 
omg i'm so late to this thread lol πŸ˜‚ anyway i just read this study and i gotta say im kinda surprised that even big models like chatgpt aren't all that persuasive πŸ€” it makes sense tho cuz like how they learn from data and stuff so if you train them on bad info they're gonna repeat it 🚫 at the same time though idk if its really a good idea to use ai for persuasion in real life i mean like what about when the info is wrong or made up? that's just creepy 😳
 
I'm reading about this huge study on AI chatbots and I gotta say, 80k participants is a lot! But what caught my eye was how these researchers found that even the biggest models like ChatGPT aren't all that persuasive πŸ€”. It's actually kinda surprising since people were expecting them to be super convincing. Instead, it seems like it's more about how they're trained on data and learning patterns of successful persuaders πŸ’‘.

What I find really interesting is how using facts and evidence can make a big difference in persuasion πŸ“Š. But at the same time, these models started making stuff up or misrepresenting info as they tried to pack more info in... that's not cool 😬. And it raises some super important questions about AI being used for scams, radicalization, or grooming 🚨.

I mean, we gotta be careful here because AIs can still influence people, even if they're not superhumanly persuasive πŸ’₯. We just need to make sure we're using them responsibly and not letting powerful actors sway public opinion without us knowing 🀝.
 
I'm so glad we're having this conversation about AI and persuasion πŸ€”... I mean, who doesn't want their kid to make informed decisions without being swayed by fake news or manipulative ads? πŸ’‘ My concern is what happens when these AIs fall short - what kind of messaging do we want to see instead? Shouldn't we be teaching our kids how to spot misinformation and think for themselves? πŸ€·β€β™€οΈ I'd love to hear more about the approach that worked best in this study, using facts and evidence... can we replicate that at home? 😊
 
I don't usually comment but... I'm kinda surprised that massive AI systems like ChatGPT aren't more persuasive πŸ€”. I mean, we've all seen those AI chatbots on YouTube trying to convince us of something, and it's just not that convincing πŸ˜‚. But at the same time, it makes sense that smaller models can hold their own, 'cause it's the training that matters more than the size πŸ’‘.

It's also interesting that using facts and evidence is what works best πŸ“Š. I mean, who doesn't love a good fact or two to back up their argument? But at the same time, it's kinda concerning that AIs can misrepresent information even when they're trying to be factual πŸ€₯. We gotta be careful with these persuasive AI models, 'cause they could be used for some not-so-great stuff 😬.

I'm just glad that this study didn't get too caught up in the hype and actually looked at what's real πŸ™. Now we can have a more informed discussion about how to use these AI systems without messing with people's minds 🀯.
 
I'm kinda surprised by this result πŸ€”... I mean, we've all had those awkward conversations with chatbots where it feels like they're trying to convince us of something. But the fact that huge models aren't actually that much better at convincing people is pretty cool, I guess 😎. It makes sense, though - if AIs just mimic what's effective in human persuasion, that's basically what we do anyway πŸ€·β€β™‚οΈ.

The part about them misrepresenting information when it gets too dense is a major concern, imo πŸ‘€. We've seen this with online misinformation already. It's like, if AIs can't even be trusted to provide accurate info, how are we gonna trust them with things that really matter? πŸ€”

I don't know, man... it feels like the more we rely on tech like this, the more we're gonna have problems with it 😬. Can't we just stick to good ol' human-to-human persuasion and be done with it? πŸ™„
 
It's crazy how far we've come with AI but still got a long way to go in understanding its true power & potential risks πŸ€–πŸ’‘

Thinkin' that small-scale models can hold their own is kinda refreshing, not just 'cause it means less computational power required, but also 'cause it shows we can do better without overreliance on brute force AI πŸ’»

But what really got me thinkin' was how easily AIs started messin' up when they had too much info to process 🀯 Can't help but wonder if we're creatin' a whole new level of "fake news" with these persuasive models πŸ“°πŸ‘€
 
It's fascinating how this study highlights the nuanced nature of AI persuasiveness πŸ€”. The fact that massive LLMs like ChatGPT don't have superhuman abilities is quite surprising, and it makes sense that their effectiveness is heavily tied to how they learn through data. I think it's crucial to acknowledge that AIs can be effective when armed with facts and evidence, but at the same time, we need to consider the limitations of these models and the potential for misuse 🚨. It's a delicate balance between harnessing AI's influence for good and being mindful of its vulnerabilities.
 
I gotta say, this AI chatbot persuasiveness study just dropped and I'm kinda surprised 😊 it turns out not to be as convincing as everyone thought. 80k participants? That's a lot of conversations! 🀯 But yeah, it seems like those massive LLMs aren't that much more effective than smaller ones when it comes to swaying people's opinions.

I think what's really interesting is how these models learn and get trained – mimicking successful persuasion patterns is key. And using facts and evidence? That actually outperformed even the top model, GPT-4o πŸ“Š. But, of course, there's a catch – as they try to pack more info in, AIs start making stuff up or misrepresenting things. Not cool, right? 😐

Anyway, this study is definitely giving me food for thought... what are the implications if these models get into the wrong hands? Can we trust them? πŸ€”
 
man i'm literally shook by this study 🀯, like we thought AI was gonna change the game but turns out its just another tool in the persuasion toolbox πŸ“ˆπŸ’‘. the fact that it's not even a superhuman edge is kinda underwhelming tbh πŸ˜” but at the same time, it makes sense because let's be real, humans are way more persuasive than AIs will ever be πŸ’β€β™€οΈ. what i do find interesting though is how these models learn through data and training πŸ€–. it's like they're just mimicking what's effective in human persuasion rather than actually having some kind of superpower πŸ€”. and yeah, the approach that worked was using facts and evidence, but at what cost? when AIs start misrepresenting info or making things up just to persuade people πŸ˜•. we need to be careful about how we use these tools and make sure they're not being used for nefarious purposes 🚨.
 
I gotta say, I'm kinda surprised by this study 😊. I mean, we've all heard those crazy predictions about AI taking over the world and convincing everyone to think like robots πŸ€–. But in reality, it seems like AIs are just kinda... decent at persuasion? Not superhumanly persuasive or anything. It's all about how well they're trained on data and learning from human patterns πŸ“Š.

And you know what really caught my attention? How using facts and evidence to back up claims actually worked pretty well πŸ‘. I mean, who doesn't love a good argument based on some actual numbers and stuff? But at the same time, it's also kinda scary that these models can start misrepresenting info or making things up as they try to be more persuasive πŸ€₯.

So yeah, this study raises some legit questions about how we're using AIs and whether we should be worried about them being used for nefarious purposes πŸ€”. It's like, we gotta keep an eye on these powerful actors who could use AIs to sway public opinion in their favor... it's not a good look πŸ’Ό.
 
I'm surprised to hear that massive AI systems like ChatGPT aren't as persuasive as everyone thought πŸ˜‚. I mean, you'd think all that processing power would give them some kind of superhuman ability to sway us, but nope! It's actually just about mimicking human persuasion patterns and using facts to back up claims πŸ“Š. That's a pretty interesting takeaway, don't you think? And at the same time, it makes me a bit nervous that AIs could be used for nefarious purposes... like scams or radicalization 🀯. We need to keep an eye on how these models are being used and make sure they're not falling into the wrong hands πŸ’‘
 
I'm so glad we've finally gotten some clarity on how AI chatbots really work... said no one ever. Seriously though, 80k participants and 707 issues? That's a lot of conversations with AI models. I guess it just goes to show that even the best LLMs aren't super persuasive - not by a long shot! πŸ€–πŸ’‘ And who wouldn't want to use facts and evidence to sway public opinion, right? Sounds like a recipe for disaster... or at least some interesting "facts" 😏. Anyways, I'm glad we're having this convo about AI persuasiveness. Maybe we can all sleep better knowing that AIs are still far from being superhumanly convincing πŸ™πŸ’€
 
I'm not surprised by this study at all πŸ€”. I mean, we've been talking about the dangers of AI for years and now it's finally being proven that these massive language models are just not that convincing. But what really worries me is how easily they can be manipulated to spread misinformation or use facts to persuade people in a way that's not entirely accurate 🚨. It's like, we need to make sure we're not relying on AIs to give us info on sensitive topics without fact-checking it ourselves πŸ“Š. And what about the potential misuse of these models by scammers or radical groups? We need to stay vigilant and ensure that AI is used responsibly πŸ’‘.
 
omg I'm so surprised by this result 🀯, I thought AI chatbots were gonna take over the world and convince everyone to do what they want but it turns out they're not that convincing after all πŸ˜…. I mean, who doesn't love a good fact-filled conversation about politics? πŸ’‘ but seriously though, the study showed that AIs are only as persuasive as the data they've been trained on, which is kinda worrying πŸ€”. And yeah, using facts and evidence to back up claims is actually super effective, but what if AI models start making things up just to win an argument? 😳 not cool, right? πŸ™…β€β™€οΈ I think this study highlights how important it is for us to be aware of when we're being influenced by persuasive AIs and to question their info before swallowing it whole πŸ’¦.
 