YouTube denies AI was involved with odd removals of tech tutorials

YouTube has denied that AI was involved in the odd removal of popular tech tutorials from the platform. The company says the videos flagged as "dangerous" or "harmful" were removed by human reviewers, not automated systems.

However, many creators believe AI played a role in flagging these videos for removal. They say YouTube's content moderation system uses AI to identify and remove content deemed problematic, without clear guidelines on what counts as "problematic".

One creator, Rich White, who runs the channel CyberCPU Tech, had two popular video tutorials removed despite never receiving a strike for violating community guidelines. He believes AI was involved in the removals and that YouTube's automated systems, including its support chatbot, are being used to suppress creators' content without human intervention.

Another creator, Britec09, who has nearly 900,000 subscribers, also had a video flagged as "dangerous" or "harmful" despite not violating community guidelines. He points out that YouTube's own creator tool was recommending content on specific topics, including workarounds for installing Windows 11 on unsupported hardware, which he sees as directly contradicting the platform's moderation decisions.
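(For context, and not taken from either creator's removed videos: the kind of workaround at issue here is widely documented in the Windows community. The best-known method sets bypass flags in the registry from a command prompt inside Windows Setup, telling the installer to skip its hardware checks. A sketch of that commonly cited approach:)

```bat
:: Open a command prompt inside Windows 11 Setup (Shift+F10), then run:
:: These DWORD values instruct Setup to skip the TPM, Secure Boot, and RAM checks.
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassTPMCheck /t REG_DWORD /d 1 /f
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassSecureBootCheck /t REG_DWORD /d 1 /f
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassRAMCheck /t REG_DWORD /d 1 /f
```

Microsoft warns that installing Windows 11 on unsupported hardware may leave the machine ineligible for updates, which is presumably why tutorials on the topic are both popular and contentious.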

YouTube has insisted that decisions made by its AI-powered systems are subject to human review and approval. However, creators who have tried to appeal these decisions claim that they were met with automated responses rather than human intervention.
 
omg i'm so worried about these creators πŸ€• they're being bullied into censoring their own content πŸ‘€ like what even is the point of having a creator tool if it's just gonna flag legit tutorials as "harmful" πŸ™„ and AI is definitely involved πŸ€– we need some transparency from YouTube on how this system works πŸ’¬ creators are the ones who know their stuff, not some algorithm πŸ€“
 
I'm kinda suspicious about this whole "human review" thing... πŸ€” I mean, if YouTube's relying on humans to decide what's "dangerous" or "harmful", why do creators who appeal just get instant automated responses? It just seems fishy that all these popular tech tutorials got taken down without a second thought. I think it's pretty obvious that AI is playing a huge role in this content moderation system... and they're not even transparent about it! 🚫 What do you guys think?
 
I'm low-key suspicious about YouTube's claim that AI wasn't involved in removing those popular tech tutorials πŸ€”. I mean, it's not like creators are just making this up for clicks πŸ“±. The fact that AI is being used to flag content as "problematic" without clear guidelines raises red flags 🚨. What even constitutes "problematic" content anymore? πŸ€·β€β™‚οΈ

And have you seen the creator tool's recommendations on YouTube? It's literally recommending workarounds for Windows 11 on unsupported hardware, the exact kind of content the moderation system is removing πŸ˜•. I'm starting to think that AI is just being used to suppress creators' content without human intervention 🀝.

YouTube needs to come clean and give us some transparency on how their AI-powered systems are making decisions πŸ’‘. Can't trust a company with a lack of transparency, you feel? πŸ™…β€β™‚οΈ
 
πŸ€” "The only thing necessary for the triumph of evil is for good men to do nothing." - Edmund Burke πŸ˜’ I'm really worried about YouTube's AI system and its impact on creators like Rich White and Britec09. It seems like they're being unfairly targeted without any human oversight. The lack of clear guidelines on what constitutes "problematic" content makes it hard for them to know whether their AI-powered chatbot is truly helping or hurting their channels. πŸ€–πŸ“Š
 
πŸ€” I'm not surprised at all that creators think AI is involved in flagging their videos for removal. Like, how can you expect an algorithm to make nuanced judgments about what's "dangerous" or "harmful" content without any real-world experience? It sounds like YouTube is just throwing a bunch of code at the problem and hoping it works... πŸ€– But seriously, what's up with these "human reviews" that are supposed to happen after AI flags something for removal? If creators who appeal just get an automated response back, how is that supposed to count as human review? πŸ˜’
 
I'm getting a bad vibe from this πŸ€”... I mean, I get it, safety is important and all that πŸ™, but two popular creator tutorials just vanished overnight without any warning or clear explanation? That's not cool πŸ˜’. And now these guys are saying AI was involved in flagging their content for removal? Yeah, I'm buying that πŸ’‘. We need more transparency about how these automated systems work and what gets flagged πŸ€·β€β™‚οΈ. If the algorithm can't be trusted to make these calls, the human review had better actually be happening. It's all just a bunch of corporate mumbo-jumbo πŸ“¦...
 
πŸ€·β€β™€οΈ i dont think youtubes trying to cover somethin up here!! AI cant just start flaggin videos w/o clear guidelines lol like what even is "harmful" content? creators r gettin suppressed & its not fair πŸ™…β€β™‚οΈ we need more transparency & human review, not just AI makin decisions that can be wrong or biased 😬
 
I'm getting super weird vibes from this whole situation πŸ€”. Like, if YouTube's AI is involved in removing popular tutorials without any warning or explanation, it just feels like they're trying to suppress certain types of content. I mean, we've all been there with our favorite YouTubers being taken down for no reason, but at least when that happens, there's usually a human review process involved.

But if AI is making these decisions without any clear guidelines or oversight, that's just not right 😬. I'm all about free speech and creators expressing themselves on platforms like YouTube, but if the algorithm is just flagging stuff left and right without any human input, then it feels like censorship.

I don't trust these chatbots to make decisions about what content is "dangerous" or "harmful". We need more transparency and accountability from YouTube, you know? πŸ€¦β€β™‚οΈ
 