YouTube has denied that AI was behind the unexplained removal of popular tech tutorials from the platform. The company says the videos flagged as "dangerous" or "harmful" were taken down following human review, not by automated systems.
Many creators, however, believe AI played a role in flagging the videos for removal. They argue that YouTube's content moderation pipeline relies on AI to identify and take down material it deems problematic, without clear guidelines on what "problematic" actually means.
One creator, Rich White, who runs the channel CyberCPU Tech, had two popular video tutorials removed despite never receiving a community guidelines strike. He believes AI was involved in the removals and that his appeals were handled by YouTube's automated chatbot rather than a human reviewer.
Another creator, Britec09, who has nearly 900,000 subscribers, also had a video flagged as "dangerous" or "harmful" despite not violating community guidelines. He points out that YouTube's own creator tools were recommending video ideas on the very same topics, including workarounds for installing Windows 11 on unsupported hardware, which he sees as contradicting the moderation warnings.
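For context, the kind of workaround at issue is typically a small registry tweak rather than anything exotic. Below is a minimal sketch of one widely circulated approach, setting Microsoft's documented `AllowUpgradesWithUnsupportedTPMOrCPU` flag so the in-place upgrade installer skips the TPM/CPU check. This is an illustrative assumption about the sort of content in the removed tutorials, not a reproduction of them, and it requires Windows, administrator rights, and a willingness to run an officially unsupported configuration.

```python
# Sketch of a commonly cited Windows 11 upgrade workaround: set the
# AllowUpgradesWithUnsupportedTPMOrCPU DWORD under HKLM\SYSTEM\Setup\MoSetup
# so the upgrade installer bypasses the TPM/CPU compatibility check.
# Run as administrator on Windows only; unsupported installs may not
# receive updates and are used at your own risk.

import winreg

KEY_PATH = r"SYSTEM\Setup\MoSetup"


def allow_unsupported_upgrade() -> None:
    # Create (or open) the MoSetup key and write the bypass flag.
    with winreg.CreateKeyEx(
        winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE
    ) as key:
        winreg.SetValueEx(
            key,
            "AllowUpgradesWithUnsupportedTPMOrCPU",
            0,
            winreg.REG_DWORD,
            1,
        )


if __name__ == "__main__":
    allow_unsupported_upgrade()
    print("Flag set; rerun the Windows 11 upgrade to retry the compatibility check.")
```

Tutorials like the ones removed generally walk viewers through exactly this sort of step, which is why creators are puzzled that the content is now being labeled "dangerous" or "harmful."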
YouTube has insisted that decisions made by its AI-powered systems are subject to human review and approval. Yet creators who appealed these removals say they received what appeared to be automated responses rather than any sign of human intervention.