I'm shocked the results were so underwhelming. I mean, you'd think with all those years of training data and AI advancements, these models would've had some serious persuasion chops. But it turns out it's not about a model's size or complexity; it's about how it learns from its mistakes and uses facts to back up its claims.
I'm a bit concerned about this study's implications for potential misuse in scams, radicalization, or grooming. It's true that AIs might be able to sway public opinion with just enough info, but shouldn't we be focusing on promoting critical thinking and media literacy instead of relying on these persuasive models?